
AWS GM of M&E Samira Panah Bakhtiar Talks Gen AI and Interactive Sports Streaming


For this issue’s cover story, I interviewed Samira Panah Bakhtiar, Amazon Web Services (AWS) general manager of media, entertainment, games, and sports, about how AWS and agentic AI are changing the ways streaming workflows are executed. In her interview, she spoke both to the business side of the house and the underlying tech. More technical information on AWS’s open protocols for agent interoperability and inter-agent communication related to the agentic AI topics we touched on is available at go2sm.com/mcp1 and go2sm.com/mcp2.

Nadine Krefetz: What are your responsibilities at AWS?

Samira Panah Bakhtiar: As general manager of media, entertainment, games, and sports, I look after traditional media and entertainment customers, including broadcasters and streamers, as well as game developers, game distributors, companies that focus on regulated gaming industries and betting industries, and publications and sports leagues. It’s my job to ensure that our customers are able to increase their reach, relevance, and, hopefully, revenue and profitability.

AWS Bedrock Values

Krefetz: Can you give me some insight into how AWS is thinking about gen AI services?

Bakhtiar: Within Amazon, we have a gen AI division that is focused on a myriad of things. One of them is the creation of [our own] foundational models, called Nova. There are seven types of models whose capabilities range from advanced reasoning to handling multimodal inputs and outputs. With these multimodal outputs, Nova can take text and generate imagery, video, or speech. We also recently launched AgentCore and Kiro.

You can think of it as three layers of a stack. At the bottom level, we have purpose-built chipsets for customers, providing infrastructure to help with inference and training as well as HPC [high-performance computing] for customers who want to focus on building out their own models. Not every customer wants to do that.

For inference and training specifically in the middle of our stack, we have a fully managed gen AI service called Amazon Bedrock. With Bedrock, our customers can access popular foundational models from companies like Anthropic, AI21, or Stability AI. The top layer of the stack is a service called Amazon Q. For developers, this helps with no-code development to accelerate the creation of new software and applications. This is a gen AI assistant that customers can query and talk to in natural language.

AWS’s fully managed gen AI service Amazon Bedrock

Krefetz: How do we know a company’s confidential intellectual property is not being used to train a large language model?

Bakhtiar: When you tune a foundational model on AWS, we base it on a private copy of that model. This means your data is not shared with model providers and is not used to improve the base models. You use AWS PrivateLink to establish private connectivity from your Virtual Private Cloud (VPC) to Amazon Bedrock without having to expose your VPC to internet traffic.

AWS offers an uncapped intellectual property indemnity for copyright claims arising from generative output from Amazon models and gen AI services. This means M&E customers are protected from third-party claims alleging copyright infringement by the output generated in response to inputs or other data they provide.

Unlocking Customer Value With Gen AI Models

Krefetz: How is data privacy maintained when working with a customer’s library of content?

Bakhtiar: Technologically, as a core component of Amazon Bedrock, any of the data that is being leveraged by a customer to train a model exists within their own VPC. It never leaves that environment, even if they’re using a third-party model. So, you’re able to train and query and engage with essentially a version of that model in a very controlled area, and none of that data will leave.

Krefetz: Companies have said it’s really too expensive to go back to their libraries. Does it make sense for them to go into their archives when you’re charging them money to process all of this?

Bakhtiar: I think it’s the most practical way a company can unlock the existing value from their intellectual property. So many customers right now are focused on joint ventures or opportunities to do bespoke collaborations. And I’m not saying that they shouldn’t. I think this becomes a situation where they’re sitting on decades and decades of intellectual property that could be monetized in different ways.

Sure, training a model may be the most glamorous or most lucrative of the batch, but there are also practical ways in which they can use it to create documentaries or enhance the broadcaster experience. We recently came out of the NBA Finals. Imagine being able to access video content from decades ago to provide a side-by-side comparison relative to what a viewer is experiencing in the Eastern Conference Finals. It’s not the same type of clips that are used over and over again in circulation, but you can go back in time and use natural language to be able to find a very, very similar thing. It really enhances the broadcast, and I think that is what helps to create those opportunities in the future.

Guiding Media Migration

Krefetz: I’ve been told that metadata does not travel well in the media workflow. If you’re working with content from ingest all the way to ad insertion, how do you guarantee that the metadata travels with the content?

Bakhtiar: AWS Media2Cloud helps customers facilitate the migration of their digital assets to the cloud. It sets up serverless ingestion and analysis workflows to move a customer’s video assets and associated metadata to AWS. During the migration, it analyzes and extracts
machine learning metadata. The Media2Cloud on AWS Guidance workflow ensures that each asset is assigned a unique identifier upon ingest, and extracted metadata is persistently linked to this identifier.
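The ingest pattern described here can be sketched in a few lines: each asset receives a unique identifier at ingest, and every piece of extracted metadata is keyed to that identifier so it travels with the asset through the workflow. The field names and extractor names below are illustrative, not the actual Media2Cloud schema.

```python
import uuid
from dataclasses import dataclass, field

@dataclass
class IngestedAsset:
    """An asset registered at ingest with a persistent unique ID."""
    source_key: str
    asset_id: str = field(default_factory=lambda: str(uuid.uuid4()))
    metadata: dict = field(default_factory=dict)

    def attach(self, extractor: str, values: dict) -> None:
        """Link extracted metadata to this asset's identifier."""
        self.metadata.setdefault(extractor, {}).update(values)

# Hypothetical ingest: downstream analysis results stay bound to asset_id.
asset = IngestedAsset("s3://archive/news-2024-06-01.mxf")
asset.attach("rekognition", {"labels": ["studio", "anchor desk"]})
asset.attach("transcribe", {"language": "en-US"})
print(asset.asset_id, sorted(asset.metadata))
```

Because every extractor writes under the same `asset_id`, search and ad-insertion systems later in the chain can resolve all metadata for a clip from one key.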

At NAB 2025, the Public Broadcasting Service (PBS) discussed how the company is grappling with a significant metadata deficit across its content distribution channels, including its media manager and search functionality. The current metadata structure lacks nuanced classifications such as topic, sentiment, style, and mood, which are crucial for enhancing digital products like PBS’s recommendation engines and vector search capabilities. PBS built a proof of concept leveraging advanced AI technologies to automatically extract and store metadata from the existing PBS content library. This implementation has the potential to significantly enhance content discovery, improve viewer experience, and unlock new value from PBS’s extensive content library.

Costs of Gen AI Implementation

Krefetz: Can you give me sample costs for a gen AI activity?

Bakhtiar: In gen AI workflows, inputs (video, image, text) are translated into tokens, and the price is determined at per-token input and output levels, so costs will vary based on use case and model selection.

For example, if a broadcaster used AWS to create contextually relevant insights and taxonomies for advertising with gen AI, it would cost about $0.36 and take 25 seconds to process a 12-minute video clip.

A news organization could use gen AI to archive and enrich the content of a 5-minute and 36-second 12.3GB video file for about $0.0385, with an annual storage cost of $0.15.
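Per-token pricing like this is straightforward to model. The sketch below shows the arithmetic; the rates and token counts are hypothetical placeholders, not AWS list prices, and actual Bedrock costs vary by model and region.

```python
def estimate_cost(input_tokens: int, output_tokens: int,
                  input_rate_per_1k: float, output_rate_per_1k: float) -> float:
    """Total dollar cost of one invocation at per-1,000-token rates."""
    return (input_tokens / 1000) * input_rate_per_1k \
         + (output_tokens / 1000) * output_rate_per_1k

# Assumed figures: a clip whose frames and transcript translate to
# ~90,000 input tokens and ~2,000 generated output tokens.
cost = estimate_cost(90_000, 2_000, 0.003, 0.015)
print(f"${cost:.2f}")  # prints "$0.30"
```

Because input tokens usually dominate for video-derived workloads, model selection (and its input rate) drives most of the variance the interview mentions.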

[AWS outlines pricing so that customers can understand their options at go2sm.com/pricing.]

AWS can also help customers with their gen AI implementations through the Generative AI Innovation Center.

Use Cases: Agentic AI in Action

Krefetz: How is agentic AI being used in media workflows?

Bakhtiar: Agents enable gen AI applications to automate multistep tasks connecting company systems, APIs, and data sources. This multi-agent collaboration allows developers to build, deploy, and manage multiple specialized agents working together. Each agent focuses on specific tasks under the coordination of a supervisor agent, which breaks down intricate processes into manageable steps to ensure precision and reliability.

Agentic AI is already driving value for our M&E customers, from content creation to distribution and audience engagement. Allow me to share a few examples of high-impact use cases where autonomous, intelligent systems are fundamentally changing how M&E companies operate.

Deutsche Fußball Liga (DFL) developed an AI live commentary solution working closely with Sportec Solutions AG, a joint venture between Deltatre and DFL. This generates real-time, automated commentaries about match events as they happen, with an end-to-end average workflow of 7–12 seconds. Commentaries are generated in different languages and writing styles simultaneously, and since the latency is within the broadcasting delay, it can efficiently sync with video distribution.

DFL is using gen AI tools in three core areas: fan experience (automated translation, personalization, and localization), media production, and data services (automated recognition of match events, applying the statistics portfolio developed for the Bundesliga to other leagues).

How Bundesliga’s gen AI-powered live commentary works

DFL’s gen AI live ticker in action

The NFL has developed real-time player tracking for every player, on every play, on every inch of the field. The league uses this data to develop metrics and analyze athlete and team performance beyond the box score. One area of performance is pressure probability, an automated machine learning system that estimates pressure applied to a quarterback from individual rushers and a defensive team. This estimates an average player’s behavior within a play, identifies defensive and offensive players, and identifies which blockers blocked which rushers throughout a play.

Along with providing a novel measurement for NFL defense that enriches game and sports analytics, the model identifies rusher/blockers with 99% accuracy (a 50%-plus increase compared to baseline for certain player positions) and matches rusher/blocker pairs with 97% average precision.

The NFL is also using AWS for video semantic search for its archive of facts, stats, game day coverage, athlete press interviews, and media assets over the last 100 years, including millions of audio and video clips, as well as still images. The NFL feeds more than 20,000 plays captured from multiple angles into its library each season. To manage this vast archive of data, the NFL leverages various large language models through Amazon Bedrock, which sits atop the NFL’s Next Gen Stats database. Instead of navigating complex database queries or checkbox-laden search interfaces, users can simply ask for what they need using natural language. This helps them find clips that fit specific parameters so they can create content, such as highlight packages for games, narrative storytelling elements, and content packages for NFL digital or socials teams.
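The core of natural-language video search is embedding both clips and the query as vectors and ranking clips by similarity. This toy sketch shows the ranking step only; in the NFL system the embeddings would come from a foundation model via Bedrock, whereas the three-dimensional vectors and clip titles here are fabricated for illustration.

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.hypot(*a) * math.hypot(*b))

# Fabricated embeddings standing in for model-generated clip vectors.
clips = {
    "4th-quarter interception": [0.9, 0.1, 0.3],
    "halftime show wide shot":  [0.1, 0.8, 0.3],
    "goal-line stand":          [0.8, 0.2, 0.4],
}
query_embedding = [0.85, 0.15, 0.3]  # e.g. "defensive turnover plays"

ranked = sorted(clips, key=lambda c: cosine(query_embedding, clips[c]),
                reverse=True)
print(ranked[0])  # prints "4th-quarter interception"
```

A real deployment would store these vectors in a vector database and retrieve nearest neighbors rather than scanning every clip, but the ranking principle is the same.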

Assessing tackle probability in the NFL’s machine learning-based analysis solution

The last use case I’ll share is around automated advertising sales support. New ad formats across different regions make it difficult for ad sellers to keep up with innovation, training, and standards. This slows down sales readiness and can impact advertisers’ results. Now, M&E companies can gather, analyze, and synthesize ad products, formats, and standards from various sources using a RAG [retrieval-augmented generation] application with AI agents.

This type of agentic workflow can help sales teams efficiently answer domain-specific seller questions using a chat-based assistant built on a multimodal knowledge base. An example of this comes from Twitch, an interactive livestreaming service and global community that creates unique, live, and unpredictable experiences from the interactions of millions. The company was facing significant challenges in their ad sales process due to scattered documentation and slow response times to advertisers. They successfully implemented an innovative solution using AWS technologies. By deploying a RAG application with the gen AI workflow on Amazon Bedrock and leveraging a vector database using Titan, Twitch created a Slack-integrated chat assistant. This AI-powered tool empowered their sales teams to quickly access information and respond to queries efficiently.

Launched in February 2024, the implementation has yielded impressive results: Users of the system have generated 25% more Twitch revenue year-to-date compared to non-users and a remarkable 120% increase in revenue compared to self-service accounts.
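The RAG flow behind an assistant like this has two steps: retrieve the most relevant documentation snippets, then assemble them into a grounded prompt for the model. The ad-spec snippets below are invented, and the keyword-overlap retriever is a stand-in for a real vector database lookup.

```python
# Invented documentation snippets standing in for scattered ad-sales docs.
DOCS = [
    "Video ads on Twitch support 15s and 30s spot lengths.",
    "Display ads require a 1920x1080 creative.",
    "Sponsorship packages include stream takeovers.",
]

def retrieve(question: str, k: int = 2) -> list[str]:
    """Rank docs by keyword overlap with the question (vector-DB stand-in)."""
    words = set(question.lower().split())
    scored = sorted(DOCS, key=lambda d: len(words & set(d.lower().split())),
                    reverse=True)
    return scored[:k]

def build_prompt(question: str) -> str:
    """Assemble retrieved context into a grounded prompt for the model."""
    context = "\n".join(f"- {doc}" for doc in retrieve(question))
    return f"Answer using only this context:\n{context}\n\nQ: {question}"

print(build_prompt("What spot lengths do video ads support?"))
```

Grounding the model in retrieved context is what keeps the assistant's answers tied to current ad products and standards rather than the model's training data.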

Driving F1’s Drive to Survive

Krefetz: While there are a lot of public case studies that you have, I want to also focus on Formula 1 (F1) because there is a wide breadth of things going on.

Bakhtiar: They’re doing a phenomenal job of monetizing their intellectual property across a multitude of different windows. F1 moved over 100,000 hours of their archival content to AWS. They started with a very European-centric audience, and then they created a docuseries that’s
on Netflix, Formula 1: Drive to Survive. And then all of a sudden, from my perspective, they were everywhere in the U.S.

Netflix’s Formula 1: Drive to Survive

By taking the content they have, they can breathe new life into it and use it to attract new audiences to the core experience, which is the race itself. Those audiences then also want to participate in viewing experiences across streaming services, as well as engage with merchandising.

The optionality becomes so much greater: you can really start to think about windowing your intellectual property into new areas. If you look at a cost assessment of migrating your archives to the cloud and running foundational models across them to enable semantic search, it may seem like a big bill, but if you can start to think about all of the various new product propositions that you can launch as a result of having that data, that’s where the goodness is.

Krefetz: One topic that comes up in AWS’s F1 case study analysis using agentic AI is root cause analysis. Why is it valuable to develop a root cause analysis agent?

Bakhtiar: F1 racing engineers must triage critical issues during live racing, such as network degradation to one of their APIs. Due to the event schedule and change freeze periods, it can take up to 3 weeks to triage, test, and resolve a critical issue, requiring investigation across development, operations, infrastructure, and networking teams.

We built an agentic assistant for root cause analysis using Amazon Bedrock. Agents are provided with instructions to create an orchestration plan that automates repetitive tasks, to invoke APIs, and to access knowledge bases to troubleshoot issues, narrow down the root cause, and significantly reduce the manual intervention required to fix recurrent issues during and after live events.

Krefetz: In what other areas is F1 using AI?

Bakhtiar: We’re working with them on production and also on feeding back data to inform race strategy. And we’re working with them to be able to help them search across their archival content.

F1 uses data-driven storytelling during live broadcasts. Designed to augment the capabilities of F1’s production team, Track Pulse consolidates various data sources and insights into a unified experience. Users can scan stories and select those of interest to view more detail. Each story is updated in real time and prioritized based on relevance.

By surfacing the right data for each story, production teams have the information they need to time the story at the highest impact moment and create graphics packages that provide fans with meaningful insight. Production teams can also use Track Pulse to monitor unfolding stories and anticipate which narratives to feature throughout the broadcast.

Prime’s X-Ray Vision

Krefetz: What is a gen AI feature developed by AWS that crosses different media, such as sports and entertainment?

Bakhtiar: Prime Video X-Ray insights. With sports viewing, you don’t want fans to leave your core viewing experience. X-Ray allows your streaming platform to provide other attributes that really complement the native viewing experience. There are a lot of people out there who very much care about the data associated with any type of sport.

At the core, what it’s driving is a stickiness to the viewing experience that will allow for fans to continue to come to your platform because not only are they going to get the premier viewing experience, but they’re also going to get a lot of that behind-the-scenes information that makes the experience that much more interactive.

Prime Video X-Ray for live sports

Krefetz: Are you able to monetize this feature, or is the purpose to improve the viewing experience and engender customer loyalty?

Bakhtiar: I think that would point more toward audience retention, audience delight. If audiences are delighted, they’re likely going to stay. So, we’ll stick with that notion of customer obsession.

Content Authenticity Concerns

Krefetz: How are you participating in C2PA (Coalition for Content Provenance and Authenticity)?

Bakhtiar: Content credentials will be added to images created using Amazon Titan Image Generator to increase transparency. AWS Elemental MediaConvert supports the embedding of a C2PA manifest to provide content provenance and authenticity. This data creates a traceable record of the content’s origin and modification.

Gen Z, Gen Alpha, Gen AI, and the Future of Media

Krefetz: What are some typical questions your customers ask?

Bakhtiar: One big question I get all the time is ‘What do you believe the future of media and entertainment is going to look like?’ The future of entertainment is immersive, interactive, multidimensional, and incredibly personalized. If you look at those types of attributes of the future of viewing experiences, the technology has to evolve in order to be able to do all of those things.

I spent a ton of time looking at the data relative to what’s occurring in the industry, and there are a few things happening. One is a real shift in the way that content is being consumed, and that’s primarily being driven by younger audiences wanting to consume content in a different way, specifically Gen Z.

We expect Gen Alpha to have a much stronger preference for social and user-generated content consumed through scroll viewing. Vertical viewing on mobile phones is their primary form of entertainment consumption, followed by gaming, music, and podcasts.

Krefetz: What are the most important things media companies should be evaluating in their workflow to serve content to younger viewers?

Bakhtiar: To effectively serve Gen Z and Gen Alpha viewers, organizations need to evaluate their workflows for low-latency streaming, multi-platform content distribution, real-time personalization capabilities, and interactive features.

Low-latency streaming delivers next-generation content (e.g., game streaming, shoppable video, UGC) with instant, interactive experiences and real-time engagement opportunities. Multi-platform content distribution reaches younger audiences across the many devices and platforms where they consume content as they trend away from traditional linear viewing. Personalization, AI-generated targeted content, recommendations, and customized offerings for audience-specific contexts align with younger viewers’ expectations for personalized experiences.
