The State of AI in On-Demand Streaming in 2026
This is a tale of a few different AI use cases involving research, localization, advertising, and UX. The first is a public broadcaster in Austria. The second is a TV OS. The third is a well-known vendor. The last is a major media company. What all of these have in common is that their AI applications have moved, or are rapidly moving, from the proof-of-concept stage to full commercial implementation.
AI In Public Broadcasting
Austrian public service broadcaster ORF runs four TV channels, 12 radio stations, online news websites, and a streaming service. Three years ago, it began developing a pilot project called AiDitor. Designed for use by journalists in newsrooms, says ORF chief innovation officer Stefan Kollinger, AiDitor “has thousands of users throughout the company and hundreds of daily users.”
ORF incorporated different AI models to build a research agent that references multiple trustworthy data sources, wire services, web content, and video-, audio-, and text-based news content. Kollinger describes AiDitor as “an autonomous researching agent that is delivering research based on these sources. It takes 10–20 minutes [to return results] depending on how complex the research is.”
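ORF hasn't published AiDitor's internals, but the pattern Kollinger describes, fanning a query out across several trusted sources and synthesizing the results, is a common agent design. The sketch below illustrates it with hypothetical source adapters; the function names and the stubbed synthesis step are assumptions, not AiDitor's actual code.

```python
from concurrent.futures import ThreadPoolExecutor

# Hypothetical source adapters -- AiDitor's real integrations are not public.
def search_wire_services(query):
    return [f"wire report on '{query}'"]

def search_archive(query):
    return [f"archive clip matching '{query}'"]

def search_web(query):
    return [f"web article about '{query}'"]

SOURCES = [search_wire_services, search_archive, search_web]

def research(query):
    """Fan the query out to every trusted source in parallel, then pool
    the evidence for synthesis (stubbed here as simple aggregation)."""
    with ThreadPoolExecutor() as pool:
        results = pool.map(lambda src: src(query), SOURCES)
    evidence = [item for batch in results for item in batch]
    # In a real agent, an LLM call would summarize `evidence` with citations.
    return {"query": query, "evidence": evidence}

report = research("energy prices")
```

The parallel fan-out is one plausible reason such research runs take minutes rather than hours: each source is queried concurrently, and only the synthesis step waits on all of them.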

ORF AiDitor
According to Kollinger, AiDitor started as “an efficiency machine.” The process begins with an online article written in a traditional way. “From this online article,” he says, “you may need text for the news show in the evening. You may need a radio piece so you can repurpose your research notes or already finished products into something you want. This could be easy stuff like a translation. This could be way more complex stuff where you change the medium from TV to radio or from radio to podcast and from online text to whatever you need.”
Social media postings proved to be one of the hottest use cases for AiDitor, which ORF uses to ensure that the broadcaster’s social media content follows corporate brand standards.
“What we did differently with AI is we used all of the new, very fast models to build an engaging interface that is very useful, especially for journalists in need of speed,” Kollinger says, “so they can very quickly transcribe something by laptop, smartphone, iPad, or in our asset management system and get a very fast response.”
Other beneficial use cases for AiDitor include transcription and titling. “We are using it for generating subtitles live and on demand for our regional news shows,” Kollinger notes. “For a decade, you were reading that data is the new oil. Media companies have a lot of oil, but you never see it. Generative AI enables us to get more and more value out of our data.”
Previously, audio work fell to engineers. Now AI models enable the whole company to get rid of background and room noise without requiring any engineering experience. “All you need to do is drag and drop your audio file or enter the ID from the system, and it’s done,” Kollinger says. “It’s like we upskilled the whole company. Three thousand people can do audio engineering now. That’s how you make things successful.”
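ORF hasn't disclosed which models power its denoising, so as a point of reference, the classic non-AI baseline that such tools improve on is spectral gating: estimate a per-frequency noise floor from a noise-only sample, then attenuate bins that sit near it. A minimal NumPy sketch:

```python
import numpy as np

def spectral_gate(signal, noise_sample, frame=512, reduction=0.9):
    """Attenuate frequency bins that sit near a noise floor estimated
    from a noise-only sample (classic spectral gating)."""
    usable = len(noise_sample) // frame * frame
    frames = noise_sample[:usable].reshape(-1, frame)
    noise_mag = np.abs(np.fft.rfft(frames, axis=1)).mean(axis=0)
    out = np.zeros_like(signal, dtype=float)
    for start in range(0, len(signal) - frame + 1, frame):
        spec = np.fft.rfft(signal[start:start + frame])
        keep = np.abs(spec) > 2.0 * noise_mag          # bins well above the floor
        spec = np.where(keep, spec, spec * (1.0 - reduction))
        out[start:start + frame] = np.fft.irfft(spec, n=frame)
    return out

# Demo: a 440 Hz tone buried in white noise.
rng = np.random.default_rng(0)
n = 512 * 32
clean = np.sin(2 * np.pi * 440 * np.arange(n) / 16000.0)
noise = 0.3 * rng.standard_normal(n)
denoised = spectral_gate(clean + noise, noise[:4096])
mse_noisy = np.mean(noise ** 2)
mse_denoised = np.mean((denoised - clean) ** 2)
```

Learned denoisers go well beyond this kind of per-bin gating, but the drag-and-drop workflow Kollinger describes wraps the same basic contract: audio in, noise profile estimated, cleaner audio out.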
TiVo and AI-Driven Localization
“Over the past few years, we’ve been trying to find ways to get people to content as fast as possible,” says Dylan Wondra, VP of product management at Xperi, TiVo’s parent company. “We have started to use AI in localization. That’s one area where it’s very clear how you can use AI very fast. We went from using manual translators to then using tools to speed that up, and that has helped us grow. We’ve been able to expand very quickly by using AI in workflows like content onboarding—getting metadata into systems so that it can show up in recommendations.”
Since TiVo OS launched in August 2022, Wondra recalls, TiVo has expanded from use only in the U.S. to its current presence in 40 countries, which creates localization needs that didn’t exist before. “When you go into a particular country, you have to make sure to get what we call the ‘local heroes’—local content—and make sure it’s localized, make sure the user interface is localized, and make sure our voice solutions are localized to the particular languages in that country,” Wondra notes.
Wondra goes on to say that the bulk of his time over the last few years has been spent figuring out “how you take a product that was built for one market but then expand it smartly across all the other markets so that people can enjoy the product.”
TiVo has been a leader in ingesting content while satisfying each provider's metadata rules, Wondra adds, whether that provider is a public broadcaster in Europe, a broadcaster in the U.S., or a big Tier 1 OTT provider that doesn't want its data used for X, Y, and Z.
TiVo is known as an early adopter of metadata, building on its 2014 acquisition of Digitalsmiths, which was a recommendation and search service for pay TV. Previously, much of the classification behind its recommendations was based on traditional machine learning. Now, TiVo recommendations leverage AI, with social listening and consumer trends influencing what content consumers may want to watch.

TiVo content recommendations
“We don’t always have data about you; we don’t know what you do inside apps or whatnot,” Wondra says. “So how do we work with our partners to kind of bring that internal experience from their app into the OS experience on the TV?”
One of the platform's strengths is being able to ingest that data, know what the rules are, understand how to use it, and then pass it through the system to the UI so that viewers can find content through search or receive it as recommendations. Wondra adds that there are other ways to provide recommendations aside from the traditional path.
Given that TiVo only has access to a part of a consumer’s viewing behavior at any given time, Wondra asks, “How do we work with a partner to say, ‘Hey, what is the next episode they need to watch?’ There’s also editorial stuff like what’s trending out on the internet. How do you start to show that to expose users to other content that we may not know about? There are a lot of different tools you can use, and AI plays a part in how you source that stuff.”
Wondra goes on to say that TiVo has “a cool voice solution where you can do simple genre searches.” Another advantage of AI-driven voice queries is “continued conversation” queries. “Say you’re looking for movies from the ’90s, but only the ones with Tom Hanks,” Wondra explains. “In most solutions, it would switch from ’90s movies then to new Tom Hanks movies. But we look at the intent of how users are using their voice to ask, ‘Hey, is this the same search, or is it different?’ And I think that’s where our search hits on something that other searches may not.”
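TiVo hasn't published how its voice intent detection works, but the "continued conversation" behavior Wondra describes can be sketched as a simple rule: keep the active search filters, and merge a follow-up's facets into them when the phrasing signals refinement rather than a fresh search. The cue words and filter keys below are assumptions for illustration only.

```python
# Hypothetical sketch: maintain active search filters and decide whether a
# follow-up voice query refines the current search or starts a new one.
REFINEMENT_CUES = ("only", "just", "but", "with", "from those")

def interpret(query, active_filters, extracted):
    """Merge `extracted` facets into `active_filters` when the phrasing
    signals a refinement; otherwise treat the query as a fresh search."""
    is_refinement = any(cue in query.lower() for cue in REFINEMENT_CUES)
    if is_refinement and active_filters:
        return {**active_filters, **extracted}
    return dict(extracted)

# "Movies from the '90s" ... "only the ones with Tom Hanks"
first = interpret("movies from the '90s", {}, {"decade": "1990s"})
second = interpret("only the ones with Tom Hanks", first, {"actor": "Tom Hanks"})
```

A production system would use a model rather than cue words to classify intent, but the state it carries forward, the set of active filters, is the same: the second query resolves to '90s movies with Tom Hanks, not new Tom Hanks movies.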
Comcast: Understanding Content Sentiment
Comcast is using AI to increase its advertisers’ understanding of content sentiment by examining as much as every frame of content, according to Peter Gibson, Comcast Technology Solutions’ VP of product. “When you run video AI above the standard IAB tags,” Gibson says, you gain a “deep understanding of the emotion of a particular scene.” The idea is to answer questions like, “‘Is this generating a positive emotion? Is it a good opportunity for an ad placement, or is it in the middle of a scene?’ Those are the things that we can pass off to then curate and match them to give a higher CPM,” he explains. This approach brings a “significant lift to the consumer brand recall of an advertisement.” Gibson notes that AI-powered contextual advertising has been “in the market for a couple of years now, but 2025 is when we really started to see it take off.” Prior to 2025, most forays into contextual were just “early-stage, initial proof-of-concept scoping efforts.”
Real deployments began in 2025, and “I think that’s going to accelerate in 2026,” he says.
What’s changed in the past year? Gibson explains that it’s mostly “the maturity of the integrations between the partner tooling. Maybe there’s some inertia there to start. What we saw early on was just a real rush of, ‘OK, this is great. Now what do we do with it?’” The challenge, he says, is “really proving out the business value,” addressing key questions on the commercial side like, “We can run this metadata, but is it going to translate into more engagement on our app?”
In a partnership with FreeWheel that compared contextual and non-contextual approaches, contextual delivered a 20% lift in brand recall when one of a brand's three top keywords matched and a 30% lift with one of two. In cases in which just one particular keyword was associated with a brand, Comcast saw a 38% lift from contextual compared to non-contextual.
Advertisers and brands, Gibson says, are now seeing contextual increase engagement and brand recall for consumers. “It’s really [about] getting the data and the metrics from those use cases to show not only the customers themselves, but the industry at large, that this is adding value, and this is where to make a business case internally for your video AI investments. You can see some good ROI based on that,” he notes. Gibson argues that it is not necessary in all instances to run AI across an entire asset to see measurable results. “You can achieve some efficiencies even on the processing side of it and just analyze the particular frames that you need and then use them,” he says. “It’s more about how we refine the volume of metadata that comes down to the specific use case or business case that we’re trying to drive.”
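Comcast's tooling is proprietary, but the selection logic Gibson describes, scoring sampled frames for sentiment and placing ads only at scene boundaries that carry positive emotion, can be sketched as a filter-and-rank step over candidate slots. The data structure and thresholds below are illustrative assumptions, not Comcast's implementation.

```python
# Hypothetical sketch of contextual ad-slot selection: each candidate slot
# carries a sentiment score (from a video-AI pass over sampled frames) and a
# flag for whether it falls on a scene boundary rather than mid-scene.
def pick_ad_slots(candidates, min_sentiment=0.5, max_slots=2):
    """Keep only positive-sentiment scene boundaries, best-scoring first."""
    eligible = [c for c in candidates
                if c["scene_boundary"] and c["sentiment"] >= min_sentiment]
    eligible.sort(key=lambda c: c["sentiment"], reverse=True)
    return [c["timecode"] for c in eligible[:max_slots]]

candidates = [
    {"timecode": "00:07:58", "sentiment": 0.82, "scene_boundary": True},
    {"timecode": "00:11:20", "sentiment": 0.91, "scene_boundary": False},  # mid-scene
    {"timecode": "00:19:33", "sentiment": 0.31, "scene_boundary": True},   # downbeat
    {"timecode": "00:27:05", "sentiment": 0.74, "scene_boundary": True},
]
slots = pick_ad_slots(candidates)
```

Note that the highest-sentiment candidate is rejected because it falls mid-scene, which matches Gibson's point: the question is not just "is the emotion positive?" but "is this a good opportunity for a placement?" It also reflects his efficiency point, since only the frames around candidate slots need to be analyzed, not the entire asset.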
How long will it take to see an impact? “In some cases, it could be months if there’s just a little bit more oversight and QA and testing,” Gibson states. “It’s very quick to implement from a workflow standpoint in terms of passing the content through.”
Subbing, Dubbing, and Localization
The sophistication of AI-enabled subbing and dubbing has come a long way. Early on, localization accuracy was not great. Now we can not only see accurate subbing and dubbing, but also hear commentators for larger-scale events speaking in a localized language, much as NBC did with the Paris Olympics in 2024. “We’re partnering with localization company CAMB.AI on localized streams that preserve the annotation and voice of the commentator and make it available in that particular language,” Gibson says. “They support close to 200 languages globally.”

Localization solution provider CAMB.AI supports close to 200 languages for live and on-demand subbing and dubbing.
In 2026, AI-enabled capabilities for replacing commentators’ voices have reached a high level of quality and sophistication, providing a much more engaging, enjoyable consumer experience for content localized to a particular region.
Disney and Streaming Personalization
One ongoing and growing issue streaming consumers struggle with is the so-called “paradox of choice,” the feeling of being overwhelmed that comes from having too many content choices due to inadequate curation. This brings us to the next use case. While there are many instances in which AI improves efficiencies for internal workflows, one public-facing purpose AI increasingly serves is addressing the lack of tailoring or personalization in the user experience.

“We want to empower a new generation of fandom that’s more interactive and immersive,” said Erin Teague, EVP of product management for Disney Entertainment and ESPN, during Disney’s recent Global Tech & Data Showcase at CES. Younger generations of viewers, including Gen Alpha, expect to be able to react, go deeper, and share their media experiences. This will require wholesale revisions of how companies think about delivering content. Bringing a much more personalized experience will be the first step. “Over time,” according to Teague, “we’ll evolve the experience as we explore applications for a variety of formats, categories, and content types for a dynamic feed of just what you’re interested in—from sports to news to entertainment—refreshed, in real time, based on your last visit.”
“When people turn on products, the question is always, ‘How do you build this habit-forming thing?’” says TiVo’s Wondra. “To do that, you just have to eliminate a lot of barriers to how they get to content, finding ways to simplify and give them what they want as fast as they can so they can just enjoy the lean-back TV experience.”
The underlying issue, Wondra explains, is that there are “so many apps, so much content, so many buttons on the remote control. It’s very interesting for me to hear that people only watch news, sports, and this show or this app.” Wondra says he never hears, “I want the broad spectrum of everything that’s on TV.”

Rethinking the TV experience has never been more important. “Let’s start with our focus on Gen Alpha,” Teague noted. “These are the kids generally born in the 2010s or later—the first AI-native generation. And the interesting thing is they don’t see stories as something that happens to them. Instead, they expect more agency. They expect to interact with entertainment. Fans don’t just watch anymore; they react and research and remix. A dad and his daughter aren’t just streaming a Marvel show. They’re pausing to debate theories, looking up backstories on their phone, sharing clips with friends,” she said.

ESPN’s personalized SportsCenter For You
Personalization has been around on social for a long time. Finally, the streaming interface is catching up. On the ESPN side, “We launched SportsCenter For You,” Teague shared. “We essentially flipped the format on its head. Rather than produce just one SportsCenter for the masses, we are now creating hundreds of thousands of unique SportsCenters for individual fans.” The key to personalization is adapting the viewing experience to the way your audience consumes content. “Sports fans aren’t just watching the game on a single screen anymore; they’re also tracking their fantasy lineup or specific stats on another screen and then jumping into the group chat when something incredible happens,” Teague said. “This is not distracted viewing—this is entertainment in people’s lives today.”
AI Use Cases in 2026
Today’s AI use cases are fascinating. They save time; they enable streaming applications at scale, in multiple languages, and with extensive content metadata; and they use data differently from how it’s been used in the past. Some are fully implemented; others are still at the proof-of-concept stage. Trying something out, seeing if it works, and determining if it scales without the public eye on you seems easy compared to creating new experiences for viewers.
However, optimizing the viewer experience will remain most challenging for streaming publishers and platforms. I spoke with representatives from several companies for this article who never even mentioned AI in connection with their UX. Several did mention that they were looking forward to new product designs, but they weren’t talking about how their products have already been changed by AI. It’s better UX that will keep media consumers engaged. This is where media companies really need to double down.
Related Articles
CTV contextual advertising, like many things in life, is all about making good decisions, and making informed decisions based on a wealth of data means leveraging the right tools—often AI-driven—to gather, distill, and interpret that data. Sometimes developing sound contextual media plans involves working with in-house tech; other times, it means working with third-party tools, as Team Whistle (a DAZN company) president Joe Caporoso and Intersection CEO Chris Grosso explain to SVTA subject matter expert Bhavesh Upadhyaya in this clip from Streaming Media Connect 2026.
09 Mar 2026
Streaming Media Connect December was all about live and featured exclusive keynote fireside chats with Rebecca Sirmons of NASA+ and Neal Roberts of WarnerBros. Discovery and a slate of live streaming panels packed with speakers from Peacock TV, Paramount+, Google, Globo, EZDRM, nanocosmos, CommScope, Starz, Professional Fighters League, LG, and more. Check out a playlist with Streaming Media Connect December sessions on Streaming Media's YouTube channel to catch the sessions you missed and revel in the ones you want to relive through the magic of VOD.
11 Dec 2025
Comcast Advertising has released new research on the effects of contextual alignment in advertising. In this Q&A, Kristin Shumaker, Comcast Advertising Analyst, and Larry Allen, VP of Global Strategy for Addressable, Data, and Measurement Partnerships for FreeWheel, expand on the details of the report's findings and how it shows the ways that advertisers and publishers can best harness the power of contextual alignment.
25 Sep 2025
The buzz around AI for subbing, dubbing, and localizing streaming content is that it makes localization far easier than it's ever been before. But that doesn't mean it's without significant technical challenges, particularly for companies like Interra Systems that develop the enabling tech, as Interra Engineering Manager Sana Asfar explains, enumerating the key challenges and how to overcome them in this conversation with IntelliVid's Steve Vonder Haar at Streaming Media Connect 2025.
30 Mar 2025