Worldwide over-the-top (OTT) video revenue reached 154 billion U.S. dollars in 2022, and with the growth of international streaming services, that figure is expected to keep climbing. With consumer demand for content soaring, internet access widening, and mobile primacy cemented, video streaming has become a societal staple. For users, video streaming is now an everyday necessity so common that most of us take it for granted. But for streaming platforms it’s a complex, and global, operation.
But how does video streaming work? And what are some common streaming challenges that come with broadcasting at scale?
In a nutshell, video streaming is a technology that enables viewers to watch content online without needing to download an entire media file.
Streaming media relies on streaming protocols: standardized methods for segmenting and transmitting data. Codecs work hand in hand with these protocols and consist of two parts: an encoder, which compresses the media file, and a decoder, which decompresses it.
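As a rough sketch of that encode/segment/decode cycle, the toy Python below uses `zlib` as a stand-in for a real video codec (real encoders such as H.264 are far more sophisticated, and real protocols segment by playback duration rather than byte count):

```python
import zlib

SEGMENT_SIZE = 4096  # bytes per segment; real protocols use seconds of video


def encode(media: bytes) -> bytes:
    """Stand-in 'encoder': compresses the raw media payload."""
    return zlib.compress(media)


def decode(encoded: bytes) -> bytes:
    """Stand-in 'decoder': decompresses back to the original payload."""
    return zlib.decompress(encoded)


def segment(encoded: bytes, size: int = SEGMENT_SIZE) -> list:
    """Split the encoded stream into fixed-size segments for transmission."""
    return [encoded[i:i + size] for i in range(0, len(encoded), size)]


raw = b"frame-data " * 10_000            # placeholder for raw video frames
segments = segment(encode(raw))          # what the server would transmit
reassembled = decode(b"".join(segments)) # what the player reconstructs
assert reassembled == raw
```

The viewer’s device performs the right-hand side of this loop: it downloads segments as they become available, decodes each one, and plays them back in sequence.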
From start to finish, the process of streaming a video looks something like this:

1. The source video is compressed (encoded) and segmented into small chunks.
2. The segments are distributed to servers, typically via a content delivery network.
3. The viewer’s device requests and downloads segments over the internet.
4. The device decompresses (decodes) each segment and plays it back in sequence.
In the case of live streaming, video signals are converted into a compressed digital signal in real time and distributed to viewers from a web server.
When streaming to dispersed audiences, it’s common to use a geographically diverse network of servers called a content delivery network (CDN). CDNs allow media files to be stored at the “network edge”, closer to end users’ physical locations, to improve the delivery and performance of streams.
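The routing idea can be sketched in a few lines: given a handful of hypothetical edge locations (the cities and coordinates below are illustrative, not any real CDN’s footprint), route each viewer to the geographically closest one:

```python
from math import radians, sin, cos, asin, sqrt

# Hypothetical edge server locations (city -> latitude, longitude).
EDGES = {
    "frankfurt": (50.11, 8.68),
    "virginia": (38.95, -77.45),
    "singapore": (1.35, 103.82),
}


def haversine_km(a, b):
    """Great-circle distance in kilometres between two (lat, lon) points."""
    lat1, lon1, lat2, lon2 = map(radians, (*a, *b))
    h = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371 * asin(sqrt(h))


def nearest_edge(user_location):
    """Route the viewer to the geographically closest edge server."""
    return min(EDGES, key=lambda name: haversine_km(EDGES[name], user_location))


print(nearest_edge((48.86, 2.35)))  # a viewer in Paris -> prints "frankfurt"
```

Production CDNs route on far more than raw distance (network topology, server load, peering agreements), but proximity is the core intuition.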
The global success of OTT streaming services, including the likes of Netflix and Disney+ as well as live streaming platforms, also comes with mounting challenges. The wider and more geographically dispersed the audience, the more streaming challenges and potential points of failure there are. Physical equipment can break, demand surges can cause downtime, and networks can fail.
When asked what tends to break when live streaming at scale, Paramount’s SVP of ad operations, Jarred Wilichinsky, said “everything. Everything breaks. With digital, nothing is 100%”.
The architectural demands of maintaining seamless performance and uptime when streaming to global audiences are huge. But there are provisions that platforms can put in place to overcome these streaming challenges and improve the quality of global broadcasts.
One of the biggest streaming challenges is the many potential points of failure, so building redundancy into your infrastructure is the first rule of streaming at scale. Identify the most critical potential points of failure and invest in building redundancy for those areas. When it comes to your underlying infrastructure, there are four categories of redundancy to consider: in-server redundancy, backup, building resiliency across multiple dedicated streaming servers, and disaster recovery planning.
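A minimal sketch of the failover half of that redundancy, assuming a hypothetical prioritized list of streaming servers and a pluggable health check:

```python
# Hypothetical failover across redundant streaming servers: try each in
# priority order and fall back when one is unhealthy.
SERVERS = [
    "stream-primary.example.com",
    "stream-backup-1.example.com",
    "stream-backup-2.example.com",
]


def first_healthy(servers, is_healthy):
    """Return the first server that passes its health check, else None."""
    for server in servers:
        if is_healthy(server):
            return server
    return None  # all servers down: time to invoke the disaster recovery plan


# Simulated outage: the primary has failed, so traffic shifts to backup 1.
down = {"stream-primary.example.com"}
assert first_healthy(SERVERS, lambda s: s not in down) == "stream-backup-1.example.com"
```

In practice the health check would be a real probe (an HTTP heartbeat, say) and the switchover automated, but the priority-ordered fallback is the essence of in-server and cross-server redundancy.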
“You need to go through that stuff ahead of time,” says Adam Miller, CEO of Nomad. And that means planning out as many scenarios as possible, from disaster to recovery.
That’s exactly the approach Meta takes when preparing to scale live streaming to millions of simultaneous viewers via Facebook Live and Facebook Watch. As content has expanded to include shows and event coverage, Meta has had to find ways to support broadcast-quality live streams at scale. To do this, Meta built in redundancy at every point in its delivery infrastructure, from transport through to playback, to ensure that it could withstand most types of failure and support streams with unprecedented viewership.
Many large-scale live streaming services also use bonded internet connections to ensure reliable broadcasts. Bonding involves using multiple simultaneous internet connections and an algorithm that distributes incoming data packets amongst the available connections. Bonded connections add an extra layer of redundancy by ensuring that the volume of data distributed to each internet connection corresponds to its relative strength. So no connection will ever be given more than it can handle.
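A simplified model of that distribution algorithm: assign each packet to a connection with probability proportional to its measured bandwidth (the connection names and bandwidth figures below are hypothetical):

```python
import random

# Hypothetical bonded link: measured bandwidth (Mbps) per connection.
CONNECTIONS = {"fibre": 80.0, "lte": 25.0, "satellite": 10.0}


def distribute(packets, connections):
    """Assign each packet to a connection with probability proportional
    to that connection's measured bandwidth."""
    names = list(connections)
    weights = list(connections.values())
    assignment = {name: [] for name in names}
    for packet in packets:
        choice = random.choices(names, weights=weights, k=1)[0]
        assignment[choice].append(packet)
    return assignment


random.seed(0)  # deterministic for the example
load = distribute(range(10_000), CONNECTIONS)
shares = {name: len(pkts) / 10_000 for name, pkts in load.items()}
# Each connection's share tracks its relative bandwidth: roughly
# 0.70 for fibre, 0.22 for LTE, 0.09 for satellite.
```

Real bonding appliances also re-measure each link continuously and reorder packets at the far end, but weighted distribution is the load-matching principle the paragraph describes.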
As the pressure on content providers mounts, the capabilities of a traditional CDN increasingly fall short. This is because most CDNs are located either at distributed data centers or at points of presence (PoPs) within internet exchanges. Whilst this does help limit the distance content must travel to reach the end user, these systems are still too centralized to guarantee high-quality end-user experiences.
To stream at scale and deliver the real-time responses that end users expect, content providers are turning to the edge. Edge CDNs are deployed at core nodes in the inner and outer network edge. Caching content at the network edge results in a faster transfer of data, better quality of service for end users, and a lighter centralized data load resulting in increased network capacity.
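One simple way to picture an edge cache is as an LRU store that only falls back to the origin on a miss. This sketch (with a made-up `fetch_from_origin` callback) counts how many origin round-trips caching popular segments avoids:

```python
from collections import OrderedDict


class EdgeCache:
    """Minimal LRU cache, sketching how an edge node avoids repeated
    round-trips to the origin for popular segments."""

    def __init__(self, capacity, fetch_from_origin):
        self.capacity = capacity
        self.fetch = fetch_from_origin
        self.store = OrderedDict()
        self.origin_hits = 0

    def get(self, segment_id):
        if segment_id in self.store:
            self.store.move_to_end(segment_id)  # mark as recently used
            return self.store[segment_id]
        self.origin_hits += 1                   # cache miss: go to origin
        data = self.fetch(segment_id)
        self.store[segment_id] = data
        if len(self.store) > self.capacity:
            self.store.popitem(last=False)      # evict least recently used
        return data


cache = EdgeCache(capacity=2, fetch_from_origin=lambda sid: f"segment-{sid}")
for sid in [1, 2, 1, 1, 3, 1]:  # segment 1 is "popular"
    cache.get(sid)
# Six requests reach the edge, but only three reach the origin.
```

Every request served from `store` rather than `fetch` is traffic that never touches the centralized infrastructure, which is exactly the "lighter centralized data load" the paragraph describes.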
Load balancing is integral to effectively manage CDN traffic for global live streams. Distributing traffic across multiple CDNs reduces the load on each and improves efficiency. When streaming globally, the most effective approach is to organize traffic distribution by region. Much of the time this comes down to identifying which CDNs have the best streaming infrastructure in particular geographies. According to Joshua Jonson, director of solutions architects at EdgeNext, it’s all about knowing “which provider is a dominant player in that general area [and] who has got the infrastructure”.
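That per-region ranking can be expressed as a simple routing table; the CDN names and rankings below are invented for illustration:

```python
# Hypothetical routing table: for each region, CDNs ranked by how strong
# their infrastructure is there (names and rankings are illustrative).
REGION_CDNS = {
    "north-america": ["cdn-a", "cdn-b"],
    "europe": ["cdn-b", "cdn-c"],
    "asia-pacific": ["cdn-c", "cdn-a"],
}


def pick_cdn(region, healthy):
    """Choose the highest-ranked healthy CDN for the viewer's region."""
    for cdn in REGION_CDNS.get(region, []):
        if cdn in healthy:
            return cdn
    raise RuntimeError(f"no healthy CDN for region {region!r}")


# cdn-b is down, so European viewers fall through to the regional runner-up.
assert pick_cdn("europe", healthy={"cdn-a", "cdn-c"}) == "cdn-c"
```

Note how the regional ranking and the health check compose: the table encodes who "has got the infrastructure" in each geography, while the fallback keeps the stream alive when the dominant provider fails.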
Streaming to 117 million members worldwide comes with significant technical challenges. So Netflix leverages machine learning to provide high-quality streaming experiences to its end users amidst fluctuating conditions. Using statistical modelling and machine learning, Netflix constantly observes network and device conditions, as well as video quality, for every session. In turn, this improves network quality prediction and video quality adaptation during playback, and allows for predictive caching and device anomaly detection.
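As an illustration of the general idea (not Netflix’s actual models), here is a tiny adaptive-bitrate sketch that predicts throughput with an exponentially weighted moving average and picks the highest bitrate that fits under the prediction with a safety margin:

```python
# Sketch of prediction-driven adaptive bitrate selection. The bitrate
# ladder and parameters are illustrative, not any service's real values.
BITRATES_KBPS = [235, 750, 1750, 3000, 5800]


def ewma(samples, alpha=0.3):
    """Exponentially weighted moving average: a simple statistical model
    of what the network is likely to deliver next."""
    estimate = samples[0]
    for s in samples[1:]:
        estimate = alpha * s + (1 - alpha) * estimate
    return estimate


def choose_bitrate(throughput_samples_kbps, safety=0.8):
    """Pick the highest bitrate that fits comfortably under the predicted
    throughput; the safety margin guards against fluctuation."""
    predicted = ewma(throughput_samples_kbps)
    viable = [b for b in BITRATES_KBPS if b <= predicted * safety]
    return viable[-1] if viable else BITRATES_KBPS[0]


# Throughput has been hovering near 4 Mbps but just dipped sharply, so the
# prediction pulls the selection down a rung rather than risking a stall.
print(choose_bitrate([4000, 3500, 3800, 2600]))  # prints 1750
```

Real systems feed far richer signals into such predictions (device type, time of day, session history), which is where the machine learning comes in; the principle of predicting ahead and adapting conservatively is the same.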
As streaming continues to gain popularity globally, new products designed to improve video distribution are regularly emerging. Beamr, for example, is a content-adaptive bitrate technology designed to reduce video bitrate, file size, and CDN costs. Beamr has made its video optimization technology globally available as a SaaS product and is now launching a partnership with Nvidia which will enable the Beamr solution to integrate with all Nvidia codecs including AVC, HEVC, and AV1.
It’s hoped that this new API will help to further accelerate video optimization and cost reduction. Bob Pette, Nvidia’s vice president of professional visualization says the integration will “provide content providers with significant bitrate and storage reduction without compromising on live-streaming quality”.
Streaming has gone global. And that means streaming providers must deal with mounting technical challenges. Challenges like maintaining video quality over unpredictable networks, delivering content to geographically diverse audiences, and contending with multiple potential points of failure. Whilst this is certainly not an easy assignment, the success of SVOD and AVOD platforms like Netflix and Facebook Watch has already shown us that it is achievable. The key ingredients to success? Build for failure, embrace innovation, and leverage the edge.