
SVTA’s Bhavesh Upadhyaya Talks Tactical and Ethical Use of AI in Streaming

In this interview with Future Frames Podcast’s Doug Daulton, recorded on the show floor at Streaming Media 2025, Streaming Video Technology Association (SVTA) Live Operations Expert Bhavesh Upadhyaya argues that it’s time to move past vague talk of AI as streaming’s “next big thing” and focus instead on practical applications: the real-world streaming workflow and delivery problems AI can solve, and how to prepare for the inevitable ways AI will reshape the streaming business.

Daulton begins by asking Upadhyaya to walk through his background in the streaming industry. “I've been in the streaming business since 2007, worked on the Beijing Olympics and worked on the video workflows, and I've also spent time in video operations with iStreamPlanet and Verizon Digital Media, and then most recently running product for video QoE for Warner Bros. Discovery,” Upadhyaya says. “Now I'm providing consulting advice for a lot of folks on how to integrate AI and how to look at optimizing operations.”

Tactical Use of AI

Daulton asks Upadhyaya if he could share some key takeaways from the panels he’s moderated at the event, including a session on AI and live streaming (with Daulton as a panelist) that focused in particular on leveraging AI for automation and analytics.

"Part of what we talked about in one of the panels is the tactical use of AI, and one of the topics was, how do you eat the elephant? It's just way too big," Upadhyaya says of trying to assess AI's emerging role in the industry when it's become so massive. "So when you're talking to other friends and colleagues over here about problems that they're facing and you're looking and saying, 'How does AI tackle that problem? How do you use machine learning to tackle that problem or generative AI to help you with answering a question or creating a question for this problem and then apply that?' I think that's been a more advanced and a more practical use case of AI instead of just listening to industry pundits saying, 'AI is the next big thing.' It's like, no, here's the actual use of AI." 

Recalling a relatively recent industry tech bubble that burst as it remained a solution in search of a problem, Daulton says, "I compare this to the big 3D boom that happened around 2015 where everything was going to be 3D, and that didn't ring true for me at the time. The market wasn't there yet. The digital interfaces weren't there yet."

But even amidst all the hype, Daulton insists, AI's onslaught feels different. The AI buzz at NAB, he says, "didn't feel like [3D]. It felt like the rubber hitting the road. How can we use it? But at the same time, to your point, there's a lot of hype, particularly on the consumer side, and I think there's this perception in the marketplace of this kind of monolithic AI that's suddenly going to control your life, but in a good way. One of the things I think is really intriguing here is Gen AI is getting all the buzz, but in our panel and in others I sat in on, it was about tactical AI. How are you using it in very specific ways to accomplish very specific things that either become a force multiplier for your team or create new opportunities for new work because you're taking things off of their plate that may be repetitive and able to let them open up their creativity and solve new problems and open new markets." 

Drawing another analogy to consumer technology that (in some respects) quickly surpassed the capabilities of what businesses and institutions had had available just a short time before, Upadhyaya says, "Just like what we saw when mobile devices started permeating business environments, everyone's bringing their own device, and suddenly you had a device that was more powerful being used on a consumer grade by employees of a company. AI is like that with ChatGPT and Gemini and Claude. But when you actually need to use it for work, you have to understand that you can't ask a system that's very non-deterministic to do something that you need validity or certainty on. And so what we're seeing in terms of the tactical use of AI is being able to understand it like a product with product cycle and versioning and understanding that the hallucinations at least mean that the answer that you're getting has a confidence score. And how are you training the models, and how are you training yourself to understand that you're looking for a higher confidence in the response coming back from the AI?"

That's where human intervention remains critical, Upadhyaya contends. At this point, a human needs to say, "'How do I take it the rest of the way to give myself a real answer?' I think that's when we're talking about tactical uses of AI and understanding [that it can] take away 80% of my work because I know that it's confident at that level and once it's not confident, I need to have a smarter human being that understands what my real purpose is to help solve and answer the last bit." 
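The tactical pattern Upadhyaya describes boils down to a confidence-gated handoff: let the model handle the routine bulk of the work, and route anything it isn't confident about to a person. The sketch below is a minimal illustration of that idea only, not any specific product's API; the `run_model` call, its confidence score, and the 0.8 threshold are hypothetical stand-ins for whatever model or service a team actually uses.

```python
# Minimal sketch of a confidence-gated handoff (hypothetical model call).
# run_model() stands in for any classifier or LLM wrapper that can return
# both an answer and a confidence score for that answer.

from dataclasses import dataclass

@dataclass
class ModelResult:
    answer: str
    confidence: float  # 0.0-1.0, as reported or estimated by the model


def run_model(item: str) -> ModelResult:
    """Hypothetical call to a model endpoint; replace with a real integration."""
    raise NotImplementedError("Wire this up to an actual model or API.")


CONFIDENCE_THRESHOLD = 0.8  # tune per workflow; the "80%" figure above is illustrative


def triage(item: str) -> str:
    result = run_model(item)
    if result.confidence >= CONFIDENCE_THRESHOLD:
        return result.answer                      # automated path: the routine bulk of the work
    return escalate_to_human(item, result)        # low confidence: a human takes it the rest of the way


def escalate_to_human(item: str, result: ModelResult) -> str:
    """Queue the item for human review, keeping the model's draft as context."""
    print(f"Needs review (confidence={result.confidence:.2f}): {item}")
    return result.answer  # placeholder until a reviewer supplies the final answer
```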

The Value of Versioning AI Models

Returning to Upadhyaya’s point about “versioning” AI models, Daulton admits it’s a concept he hadn’t heard of before. “I think that’s the kind of workflows that people have to think about,” he says.

“Absolutely,” Upadhyaya agrees. “AI is such a huge beast and there are the generative ways of using it. And then there's the 'backend API calls' ways of using it and setting it to a backend model and versioning that as well too, so that when you're doing something in a batch or in a continuous 24x7 process where you're just having it look at data and process it for you. That's where that versioning and the confidence becomes very important as well, which is a very different example than the generative AI of a chatbot or answering customer information.”
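In the batch and backend case Upadhyaya describes, the practical expression of “versioning” is often simply pinning the model identifier in the job’s configuration and recording it alongside every output, so results stay comparable when the model is later upgraded. The snippet below is a hedged sketch of that pattern under assumed names: the model identifier, the `score_record` call, and the confidence floor are hypothetical, not any specific vendor’s API.

```python
# Sketch of pinning and logging a model version in a recurring batch job.
# MODEL_VERSION is the only place the model is named, so upgrading it becomes
# a deliberate, reviewable change rather than a silent drift in behavior.

import logging
from datetime import datetime, timezone

MODEL_VERSION = "qoe-anomaly-model-2025-10"  # hypothetical pinned identifier
CONFIDENCE_FLOOR = 0.9                        # stricter for unattended 24x7 runs


def score_record(record: dict) -> tuple[str, float]:
    """Hypothetical call into the pinned backend model; returns (label, confidence)."""
    raise NotImplementedError("Replace with the real backend call.")


def process_batch(records: list[dict]) -> list[dict]:
    results = []
    for record in records:
        label, confidence = score_record(record)
        results.append({
            "input_id": record.get("id"),
            "label": label,
            "confidence": confidence,
            "model_version": MODEL_VERSION,          # every output carries its provenance
            "needs_review": confidence < CONFIDENCE_FLOOR,
            "scored_at": datetime.now(timezone.utc).isoformat(),
        })
    logging.info("Scored %d records with %s", len(results), MODEL_VERSION)
    return results
```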

"If it's mission-critical enough and you have enough compute, you can be setting the agent to do things that a human being could do but not as fast, and it could save you reputationally and financially," Daulton comments. "There are lots of ways it can be beneficial, but you have to be intentional and thoughtful about it as you stand up these systems."

Ethical AI

Another topic Upadhyaya raises, one he concedes they could “probably touch on forever,” is the implications of “standing up these systems” and letting them do their work independently or largely so. "When does AI or one of those guardrail systems block something that should have been let go or let go of something that should have been blocked? Instilling that trust, especially when there's a consumer or a customer on the other side looking at the response," is also a critical issue, he argues. 

“You're preaching to the choir on that one," Daulton agrees. "I think that's one of the real challenges. And it's not just [intellectual property] chain management, which is clearly a critical concern, as we see through the lawsuits that have been filed by Disney and others. And that's a legitimate concern. I have friends in the industry; during the writers' strike, when this first started to surface, they were like, 'AI's the devil.' I'd say, 'It's not, but the guilds are absolutely right to be taking this to the strike and making sure that they at least start the conversation about legal protections.' So there are all of those IP issues, but there are also issues of job loss and helping define how you do it in a way that supports creators and supports not just creatives but technical people, and doesn't eliminate their jobs." 

"Those are things that we as human beings, just by ourselves, as individuals, we can't stop that. That's an evolutionary change that's going to happen," Upadhyaya says. |The businesses are going to change, but how do we prepare for that? How do we change our mindset as well too to keep up with that? And I think that's what's going to help most of us succeed: continually educating ourselves." 

"Are you familiar with Mo Gadwdat, former CBO of Google X, and one of the people responsible for bringing DeepMind over?" Daulton asks. "Since he left Google he speaks a lot and he speaks a lot about AI. And one of the things he says is, we're going to be in this kind of rough period where we figure things out. And ultimately, he says, it's not that AI is dangerous, it's that we're dangerous and there will be bad actors who will use it to do terrible things. And we have to be proactive in thinking about it both from a business level and how it can serve us and how we can put guardrails wherever we need to." 

“The thing is, we've been through this before," Upadhyaya says. "Generative AI being hosted in these massive data centers is just like in the olden days when we had mainframes until finally, bit by bit, PCs came around and then, bit by bit, mobile devices came around. We're going through the same cycle. Soon we're going to get MacBooks and iPhones and Windows machines that have enough memory to run small models, but we need to take the lessons that we took from that cycle to apply them to this cycle.”

Join us December 9-11 and tune in for more great conversations at Streaming Media Connect! Registration is free and open now!

Related Articles

I2A2 CEO Renard T. Jenkins Talks Hollywood Since the Strikes at Streaming Media 2025

Renard T. Jenkins, president and CEO of I2A2 Technologies and VP of the Hollywood Professional Association, sat down with Streaming Media contributing editor Timothy Fore-Siglin at Streaming Media 2025 to discuss how Hollywood has changed in the last few years, the importance of prioritizing good metadata, and why students shouldn't rush into using AI.

EZDRM’s Olga Kornienko Talks C2PA, Content Authenticity, and DRM at Streaming Media 2025

In a candid interview at Streaming Media 2025, EZDRM COO and co-founder Olga Kornienko and Future Frames producer Doug Daulton trade helpful analogies in discussing the growing importance of content provenance in the age of generative AI and deepfakes, as well as the role that the C2PA standard and DRM providers like EZDRM can play in protecting content authenticity and content creators' rights in a changing media landscape.

Ex-Disney Tech Ops Expert Sarge Sargent Talks AI/ML and Building Better Dashboards

In this wide-ranging interview from Streaming Media 2025, streaming industry vets Sarge Sargent and Timothy Fore-Siglin talk leveraging and deploying machine learning (ML) and generative AI (Gen AI) beyond the hype.

Sneak Preview: AI and Live Streaming: Automation and Analytics at Streaming Media 2025

On Tuesday, October 7, SVTA's Bhavesh Upadhyaya will moderate the Streaming Media 2025 panel "AI and Live Streaming: Automation and Analytics." AI and machine learning are enabling new efficiencies in streaming workflows and taking live stream monitoring to new levels for those who know how to leverage them. Join this panel to see the cutting edge of real-time production workflows, automation, and predictive analytics.