ABSTRACT

While HTTP adaptive streaming (HAS) technology has been very successful, it also generally introduces a significant amount of live delay as experienced by the end viewer.

Multiple elements in the video preparation and delivery chain contribute to live delay, and many of these elements are unique to HAS systems, as compared with traditional streaming protocols such as RTSP and RTMP.

This paper describes how improvements in the structure of the media, the delivery workflow, and the media player can be combined to produce a system that compares well with broadcast. The paper concludes with a preview of advances in delivery technology (such as HTTP/2) that will improve the experience even more in the near future.

INTRODUCTION

While HTTP adaptive streaming (HAS) technology has been very successful in delivering stable over-the-top video experiences at large scale, the technology has a number of important limitations as well.

One significant limitation is the introduction of large delays for live content, presenting the content to the viewer far later than in traditional broadcast systems. This issue makes it very difficult for today’s broadcasters to provide over-the-top digital services for sports and other live events with quality-of-experience that compares well to earlier broadcast systems.

Working around these limitations is challenging due to the fact that the delays are not caused by a single component, but are introduced throughout the delivery system.

This paper provides a detailed analysis of the mechanisms of HTTP adaptive streaming systems that lead to this delay, showing how each element in the delivery system contributes to the problem. The paper describes how improvements in the structure of the media, the delivery workflow, and the media player can be combined to produce a system that compares well with broadcast. The paper concludes with a preview of advances in delivery technology (such as HTTP/2) that will improve the experience even more in the near future.

PREVIOUS WORK

This paper builds on earlier work by Swaminathan et al. (1) and by Bouzakaria et al. (2), which describes the use of reduced-size delivery units and HTTP chunked transfer coding to eliminate delay in HAS systems caused by the need to accumulate at least one complete segment of media before that media can be transferred to a downstream processor.

These works note that this behavior introduces delay of at least the duration of the segment (commonly 4–10 s), plus any additional delay associated with connection setup time, etc.

To address the impact of segment size, these systems break up individual media segments into smaller-duration units (“chunks”), which can be incrementally transferred using HTTP’s “chunked transfer encoding” capability. This allows the transfer to begin before the entire segment has been accumulated.

Live delay then becomes a function of chunk duration, rather than of segment duration.
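To illustrate the mechanism, the following is a minimal sketch of how a media segment split into chunks would be framed on the wire using HTTP/1.1 chunked transfer coding. The chunk payloads and sizes here are hypothetical placeholders, not the actual media format used by the systems described above; each framed chunk can be sent as soon as it is encoded, rather than waiting for the whole segment.

```python
def encode_chunked(chunks):
    """Frame an iterable of byte chunks using HTTP/1.1 chunked transfer
    coding: <hex length>\\r\\n<data>\\r\\n per chunk, terminated by 0\\r\\n\\r\\n."""
    out = bytearray()
    for chunk in chunks:
        out += f"{len(chunk):x}\r\n".encode()  # chunk size in hexadecimal
        out += chunk + b"\r\n"
    out += b"0\r\n\r\n"  # zero-length chunk marks the end of the body
    return bytes(out)

# Hypothetical example: a segment encoded as three small chunks, each of
# which could be transferred downstream as soon as it becomes available.
segment_chunks = [b"moof+mdat-0", b"moof+mdat-1", b"moof+mdat-2"]
wire_bytes = encode_chunked(segment_chunks)
```

In a low-latency deployment, each chunk would typically carry a short, independently parseable fragment of media (e.g. a fraction of a second), so the viewer's delay tracks the chunk duration rather than the segment duration.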

While this method is effective in reducing live delay introduced by segment duration, it does have some drawbacks.

This technique relies on specialized behavior by the media source and origin server to begin sending chunked data before the segment is complete. If either element does not forward data as soon as it becomes available, the reduction in live delay is lost.

In addition, standard HTTP cache semantics do not describe how a caching proxy should behave when receiving chunked content, and in most cases a standard proxy would not retransmit the chunked media until the entire segment had arrived. Again, this reintroduces live delay into the system.

In this paper, we will extend the architecture described in this previous work to include a multi-hop HTTP caching layer that is consistent with the topology of many Content Delivery Networks (CDNs) operating today.

We will also describe techniques that utilize new capabilities in HTTP to decrease live delay without introducing specialized behaviors at the HTTP layer.
