An Intellyx BrainBlog by Jason English
Part 4 of 4 of the Edge of Experience Series for Hydrolix
In this series, I started out at the edge of experience, explaining how observability data pours in from a wide, distributed network of constantly changing and intermittently connected web pages, endpoints and edge devices. My colleague Jason Bloomberg continued the thread, highlighting how Hydrolix ‘cracked the code’ for optimizing, indexing and filtering logs and high-cardinality telemetry data sets for fast scalability. Then, Jason went on to describe where the buck stops for owning customer experience (or CX) within a brand.
For this final installment of this Edge of Experience series, let’s turn our perspective around, and look at the customer experience challenge from its origin at the service provider.
Delivering streaming data over the Internet—from its point of origin, perhaps a transaction on a mainframe or a file on a server somewhere in a datacenter or cloud instance, to an end user on an edge device somewhere else in the world—has always been a complex process behind the scenes.
Now try delivering high-bandwidth, real-time streaming video feeds from massively hyperscaled clusters to end customers, in addition to telemetry about that data stream. In this rarefied air, any performance lag or issue—even one that lasts a few seconds—can be catastrophic, causing churn among paid subscribers with high expectations.
Streaming TV offers us a glimpse into one of the ultimate competitive arenas for digital businesses, a veritable series finale for quality of experience (QoE) that any enterprise serving up mission-critical data to customers can draw insights from.
Delivering higher definition for defining moments
Video, whether it is streamed from a live camera feed in a stadium or an encoded video file of a movie, is about as bandwidth-intensive and data-rich as it gets.
Some of us might remember good old 640×480 standard-definition television delivered over the air or through a cable provider, where each of 30 frames per second contained just over 300K pixels. Contrast that with today’s OTT (over-the-top) Internet-based 4K video streams at 3840×2160, which multiplies to roughly 8.3 million pixels per frame. Plus, providers may stream alternate feeds for 1080p and 720p resolution endpoint players, and 5.1 multichannel audio instead of 2-channel stereo.
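To put that difference in perspective, here is a quick back-of-the-envelope calculation in Python (purely illustrative, not part of any Hydrolix tooling), assuming 30 frames per second for both formats:

```python
# Back-of-the-envelope comparison of per-frame pixel counts for SD vs. 4K,
# assuming 30 frames per second for both (illustrative only).
sd_pixels = 640 * 480      # 307,200 pixels per SD frame
uhd_pixels = 3840 * 2160   # 8,294,400 pixels per 4K frame

print(f"SD frame: {sd_pixels:,} pixels")
print(f"4K frame: {uhd_pixels:,} pixels")
print(f"4K carries roughly {uhd_pixels / sd_pixels:.0f}x the pixels per frame")

# Raw pixel throughput per second at 30 fps (before any compression):
print(f"SD: {sd_pixels * 30:,} pixels/s vs. 4K: {uhd_pixels * 30:,} pixels/s")
```

That roughly 27x jump in raw pixels per frame, before codecs, alternate renditions and multichannel audio even enter the picture, is why both bandwidth and telemetry volumes scale so sharply for streaming providers.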
To achieve the high bitrate customers expect, with quick launch times and smooth, glitch-free viewing, streaming video providers must leverage a complex mix of cloud services, network fabric, CDNs, and OTT technologies, including installed player software or browser plugins on the authorized end user device, whether it is a smart TV or a mobile phone.
From the viewpoint of a video streaming service provider, let’s not be so concerned about the relative size or number of video streams moving toward customers over software-defined networks, and instead focus on information—data and metadata—about the journey of those streams.
Streaming providers need real-time monitoring and advanced analytics to identify and mitigate issues as they appear. The quantity of telemetry data (logs, metrics, events, etc.) generated across this entire origin-to-edge (O2E) delivery chain is immense. Standard observability and tracking tools designed for conventional websites and apps can’t scale to keep up with the flood of data.
Capitalizing on a heterogeneous environment
As if getting our arms around all this telemetry data for delivering a video to a subscriber wasn’t complicated enough, business realities throw another monkey wrench into our observability plans. We need revenue to keep streaming, after all.
A paid-service streamer that charges customers a direct subscription fee would obviously want to verify that viewers are paid subscribers with a valid account, and if so, that they receive an excellent QoE. But there are other ways customers can subscribe or pay, for instance through another streamer or smart TV provider (HBO on Prime or Paramount on Roku, for example).
As a service provider, I’d want to know that my subscribers enjoy a seamless service, however it is delivered and consumed. But not all video streaming services and channels are paid, or even subscription-based.
Advertising models introduce complications
Yes, advertising still funds the production and distribution of video content—especially in the United States—though maybe not as dominantly as it did in the Golden Age of SD TV. Ads underwrite just about every free streaming service, and even many of the paid ones, which now offer lower-tier subscriptions with mandatory ad breaks included.
Since advertisers also want data about the performance and reach of their ads in order to justify the ROI of new media buys, we have a new constituent at the table. Advertisers don’t see volume and QoE in the same way as the streamers. The number of times an ad gets delivered to the edge in a session isn’t as important to them as customer engagement.
Connected or Smart TV providers like Samsung and Roku have also entered the fray, with their own ad channels displayed within navigation screens and “live TV” offerings. Some of the forward-thinking advertisers in these various channels want to measure viewership down to the minute of video played, as well as to target users in specific geo-locations.
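As a rough illustration of the kind of engagement roll-up those advertisers are after, here is a minimal Python sketch that aggregates minutes viewed and ad impressions by region. The record fields are hypothetical assumptions for illustration, not an actual Mux or Hydrolix schema:

```python
from collections import defaultdict

# Hypothetical per-session records (field names are assumptions, not a real schema).
sessions = [
    {"region": "US-East", "seconds_viewed": 95,  "ad_impressions": 2},
    {"region": "US-East", "seconds_viewed": 610, "ad_impressions": 5},
    {"region": "EU-West", "seconds_viewed": 240, "ad_impressions": 3},
]

minutes_by_region = defaultdict(float)
impressions_by_region = defaultdict(int)
for s in sessions:
    minutes_by_region[s["region"]] += s["seconds_viewed"] / 60
    impressions_by_region[s["region"]] += s["ad_impressions"]

for region, minutes in minutes_by_region.items():
    print(f"{region}: {minutes:.1f} minutes viewed, "
          f"{impressions_by_region[region]} ad impressions")
```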
While it’s great that ad revenue helps sustain the streaming industry, consider the disruptive knock-on effects of ads on websites over the last 20-plus years, or within native mobile apps such as games. As adtech vendors moved beyond standard banner ads, employing popovers and multi-step clickthroughs to grab user attention, long load times and interrupted sessions caused user QoE to lag.
Great! Now we have to worry about the performance of our own streaming service, as well as that of multiple third parties such as advertisers and the endpoint device vendors themselves. The sheer volume of telemetry data required will be immense, putting real-time observability out of reach of general-purpose tooling.
Tracking from site to screen with Hydrolix and Mux
As we discussed in our first chapter, CDNs have been around for a long time, but streaming services must now unlock telemetry data from multiple CDN providers, in the context of content vendors, advertising constituents and end user cohorts in multiple regions, in order to attain real-time observability into QoE.
Hydrolix-powered TrafficPeak for Akamai, and Hydrolix Cascade for AWS, which provides a data lake within AWS S3 buckets, can ingest and retain complete log data from multiple CDNs at low cost, with sub-second query times for real-time and historical analytics.
Through a new partnership and integration with Mux, Hydrolix can now take in telemetry data directly from viewers on connected TVs, mobile player apps and web browsers. Now our engineers can finally drill down into individual client-side viewing sessions, to spot minor glitches and major performance issues, for faster remediation of the root causes of interruptions.
Figure 1. View of Mux client-side viewer telemetry within Hydrolix Cascade metrics dashboard, showing user levels, bitrates, throughput and buffer starvation rates that could impact QoE. Image Source: Hydrolix.
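As a minimal sketch of what that session-level drill-down could look like, assuming a hypothetical list of per-session viewer metrics (the field names and thresholds below are illustrative, not actual Mux or Hydrolix defaults), flagging sessions whose rebuffering or bitrate could hurt QoE might be as simple as:

```python
# Hypothetical per-session viewer metrics; field names are illustrative only.
sessions = [
    {"session_id": "sess-123", "device": "smart-tv", "rebuffer_pct": 6.5, "avg_bitrate_kbps": 2100},
    {"session_id": "sess-456", "device": "mobile",   "rebuffer_pct": 0.2, "avg_bitrate_kbps": 4500},
]

REBUFFER_THRESHOLD_PCT = 1.0   # illustrative threshold, not a vendor default
MIN_BITRATE_KBPS = 3000        # illustrative floor for acceptable quality

for s in sessions:
    problems = []
    if s["rebuffer_pct"] > REBUFFER_THRESHOLD_PCT:
        problems.append(f"rebuffering at {s['rebuffer_pct']}%")
    if s["avg_bitrate_kbps"] < MIN_BITRATE_KBPS:
        problems.append(f"low bitrate of {s['avg_bitrate_kbps']} kbps")
    if problems:
        print(f"{s['session_id']} ({s['device']}): " + ", ".join(problems))
```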
As an additional benefit, we can also use this detailed client data to spot pirated streams and disconnect unauthorized freeloaders.
Hydrolix Cascade combines this rich user session-level data with telemetry from multiple CDNs, broadcast data from live camera transcoders like Zixi, event data from video-on-demand (VOD) encoders and players such as Bitmovin and other technologies at the source and endpoint.
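To show the idea of that origin-to-screen correlation in miniature, here is a hypothetical Python sketch that joins client-side session metrics with CDN edge logs on a shared session identifier. The field names are assumptions for illustration, not the actual Mux, CDN or Hydrolix Cascade schemas:

```python
# Hypothetical client-side session metrics, keyed by session ID.
client_sessions = {
    "sess-123": {"rebuffer_pct": 6.5, "avg_bitrate_kbps": 2100},
    "sess-456": {"rebuffer_pct": 0.2, "avg_bitrate_kbps": 7800},
}

# Hypothetical CDN edge log entries sharing the same session identifier.
cdn_logs = [
    {"session_id": "sess-123", "cdn": "cdn-a", "edge_pop": "IAD", "ttfb_ms": 480},
    {"session_id": "sess-456", "cdn": "cdn-b", "edge_pop": "FRA", "ttfb_ms": 35},
]

# Correlate the two views to see whether poor playback lines up with slow edges.
for log in cdn_logs:
    session = client_sessions.get(log["session_id"])
    if session and session["rebuffer_pct"] > 1.0:
        print(f"{log['session_id']}: {session['rebuffer_pct']}% rebuffering "
              f"served via {log['cdn']}/{log['edge_pop']} (TTFB {log['ttfb_ms']} ms)")
```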
Finally, our streaming business can form a holistic picture of QoE from origin to screen, to solve acute problems as they arise while improving longer-term trendlines for user engagement and experience metrics.
The Intellyx Take
A brand that earns the time and attention of its customers thanks to a high quality of experience will gain more market share than its competitors.
Defending your brand in a hyper-competitive environment like video streaming may not seem like it applies to you, but if you operate a digital business, it may finally be time to learn some brand-new observability tricks and reset your view of client telemetry data in this multi-CDN O2E context.
If the costs are affordable and the technology is proven, you have nothing to lose but the lag!
©2025 Intellyx B.V. Intellyx is editorially responsible for this document. At the time of writing, Hydrolix is an Intellyx customer, and Akamai is a former Intellyx customer. None of the other organizations mentioned here are Intellyx customers. No AI bots were used to write this content. Image sources: Adobe Image Express feature image, Hydrolix product screenshot.