
NAB 2026: Things You Might Have Missed


After years of writing up the nooks and crannies of what used to be a massive show, this year’s NAB felt like a desert. Years ago, I’d walk three huge halls and catch five or six fascinating, off-the-beaten-path companies worth covering.

This year’s milestone? It was the first year that Monday felt less crowded than Sunday. I happened to wander into West Hall around 3pm on Monday and instead of crashing into rows of people around the AWS booth, I found myself somewhat alone on the beautiful parquet floors I’d never noticed before.

So frankly, I wasn’t sure how to attack this column. Forgive me if the things you might have missed below are, well, slightly conceptual. Here goes.

Death of SaaS, Birth of Agentic Software

It’s only been a few months since Matt Shumer’s viral LinkedIn post “Something Big is Happening” and Dario Amodei’s cries of “We are near the end of the exponential,” and old-school AI engineering legends like Andrej Karpathy are talking about a revolution in software and grappling with Cursor, Claude, and the cloud, just like me! Except I hadn’t set foot in an IDE until February.

I am not a fan of the phrase “vibe coding” at this point since the latest generation of models coded themselves—thank you very much—and are now quite capable of handling your silly media workflows.

Since software has already eaten our world, I’ve “red-pilled” on the idea that agentic creation of software is set to blow the stack off our economy yet again.

What would I see at NAB that would either validate or invalidate these ideas? I went to the show with that question in mind.

And I found very little on the show floor, frankly. More on that below.

But thankfully I attended the Devoncroft Summit where an entire panel discussed this topic with several notable insights. The first came from the moderator Vince Pizzica, a noted industry advisor and investor, who boldly stated, “If you’re just learning about agentic software right now, you’re too late.”

Devoncroft Summit’s Agentic AI in Media session

There were tough words about SaaS models. Both Ross Dagan, EVP and Head of News Operations and Transformation at CBS News and Stations, and Jon Roberts, Chief Technology Officer of ITN, expressed frustration with the state of play of software licensing. Dagan said, “If you come to me with a SaaS business model, it better be a total no-brainer.”

But what I found most notable was a response by Brinton Miller, EVP & CTO, Media, Technology, and Operations at Warner Bros. Discovery, to a question he misheard. The question was about “data”: specifically, technical data coming from agentic flows, and whether it might have value. Miller heard “debt” instead, lit up, and showed the audience what was on his mind: the revolutionary capability of agentic software development to reduce technical debt.

“We’re very interested in scenarios where, for example, we may have three or four legacy systems being used in the back end,” Miller said. “If we can use agentic software to abstract those from the users of the systems, we give ourselves freedom to better manage the technical debt.”
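
Miller’s idea is, in effect, a facade layer: a single interface in front of several legacy back ends, the kind of shim an agent could plausibly generate and keep current. A toy sketch of the pattern (all class and system names here are hypothetical illustrations, not WBD’s actual stack):

```python
# Facade over two hypothetical legacy back ends with different call shapes.
class LegacyMamA:
    def fetch_asset(self, asset_id):          # system A's own API
        return {"id": asset_id, "src": "mam-a"}

class LegacyMamB:
    def get(self, ref):                       # system B's own API
        return {"ref": ref, "src": "mam-b"}

class AssetFacade:
    """The one interface users see; routing hides which legacy system answers."""
    def __init__(self):
        self._backends = {"a": LegacyMamA(), "b": LegacyMamB()}

    def lookup(self, asset_id: str) -> dict:
        system, _, local_id = asset_id.partition(":")
        backend = self._backends[system]
        if system == "a":
            return backend.fetch_asset(local_id)
        return backend.get(local_id)

print(AssetFacade().lookup("a:1234")["src"])  # → mam-a
```

Because users only ever touch the facade, the legacy systems behind it can be consolidated or retired without anyone upstream noticing, which is exactly the freedom Miller is describing.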

This idea—the power of agentic software development, and not merely a move toward the use of agents in media operations directly—has huge implications. New software matures and hardens faster. Old software can benefit from agentic software to make integrations simpler, more elegant, and more supportable.

Agentic software means features can be rolled out faster, through a more iterative process that involves the company’s deepest domain experts.

Roadmap prioritization is a thing of the past. Or at least, it moves to a more strategic, high-stakes game of “building for the future” instead of the more typical reactive, difficult prioritization common today.

On this front was another standout quote from the panel, offered by Adrian Koziol of Rogers Sports & Media. “Using essentially one outside agentic developer we were able to build what we feel is a vendor-quality tool that we’re demonstrating tomorrow at the show,” Koziol said.

Miller echoed this optimism for agentic coding but with a stark warning: “For some software vendors, having public, accurate, and up-to-date APIs and documentation for large portfolios of customized systems is not easy.”

He also plainly stated the obvious: “If we can’t send out our agents to figure out how to better integrate with your core systems, that’s not tenable long-term.”

Differentiated Agentic Media Solutions

I know you missed this one because there weren’t any.

There were plenty of media assistants, or agents that could be one-shot prompted to do interesting things. But they all looked remarkably like what you’d imagine. And the chat interface is bound to be limited and commoditized.

But never mind all that horse sense. Whether you have a MAM, like Dalet, or you produce embeddings, like TwelveLabs, you’ve probably now rolled out some version of a chat-based media assistant, just like Dalia (Dalet) or Rodeo (TwelveLabs).

These are impressive in their own ways. But agentic chat on top of a library is awfully close to what I might call Claude Cowork, and it’s not the innovation I wanted to find.

First, how many truly ad hoc or one-off workflows exist to be solved inside a media organization?

For example, on the floor of the show, I was easily taken with the ability to one-shot prompt a library for some cool things. “Find me the clips with goals being scored from yesterday and then build me an approval interface so I can send certain of them to my distribution partner.”

It’s incredible to see systems like this at work.

But take a beat and you realize—wait, don’t people do this every day? For sophisticated use cases that people do every day, why would you insist on one-shot prompting it?

Moreover, isn’t this the workflow the largest LLMs will certainly be best at?

I noticed far too many booths claiming they were “agnostic to the LLM” and very few booths touting their special approach to keeping up with the latest models and running deep, domain-specific evaluations.

My view? Each model has a personality. You just have to get used to it.

Anthropic has a senior executive in charge of Character. Think about that.

Scottish philosopher, AI researcher, and Anthropic AI alignment leader Amanda Askell

The future of media operations is not a bunch of ad hoc humans one-shot prompting agents.

It’s a few elite, trained execs mastering hundreds of workflows with software and LLMs together, under strict supervisory control.

The operative word that was not at NAB this year, but will certainly be here in 2027: “harnesses.”

Sounds Difficult to Transcribe

There’s been continued innovation in the domain of audio stem separation since the last show as well, with implications for post-production of live assets created inside venues—baseball stadiums, hockey arenas, and the like.

Why is this so important?

So glad you asked.

First, the accuracy of the transcript is foundational to all downstream workflows. Depending on the production value of the live event, it’s quite possible a clean transcription doesn’t happen. This is more common than you think!

Second, audio cues are an incredible modality for human processing that few of us pay attention to, unless we’re editors. If there’s a crowd, and a point is scored, there’s a good chance that crowd noise will rise clearly above a threshold.
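
That crowd-noise cue is easy to operationalize: windowed RMS energy compared against a baseline gives you a crude highlight detector. A minimal numpy sketch on synthetic audio (the window size and threshold factor are illustrative choices, not from any vendor):

```python
import numpy as np

def crowd_spikes(audio, sr, win_s=0.5, factor=3.0):
    """Return start times (s) of windows whose RMS energy exceeds
    `factor` x the median RMS: a crude proxy for a crowd roar."""
    win = int(win_s * sr)
    n = len(audio) // win
    rms = np.sqrt((audio[: n * win].reshape(n, win) ** 2).mean(axis=1))
    baseline = np.median(rms)
    return [i * win_s for i in range(n) if rms[i] > factor * baseline]

# Synthetic demo: 30 s of quiet hiss with one loud burst starting at 5 s.
sr = 8_000
audio = 0.05 * np.random.default_rng(0).standard_normal(30 * sr)
audio[5 * sr : 6 * sr] += np.sin(2 * np.pi * 200 * np.arange(sr) / sr)
print(crowd_spikes(audio, sr))  # → [5.0, 5.5]
```

A real pipeline would of course smooth the baseline and band-limit to crowd frequencies, but the shape of the idea is just this.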

Then there are those edge use cases. What if you need the footage only? Maybe the commentary itself needs to be stripped for a fair use purpose. (This is something I heard from a prospect, and when I tested Demucs on it, I had only a half-success.)
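
For context, commentary stripping has a classic pre-ML baseline: anything panned dead-center is identical in both stereo channels, so subtracting one channel from the other cancels it. A toy numpy illustration with synthetic signals (not a production approach, and one reason real venue audio defeats it is that nothing is panned this cleanly):

```python
import numpy as np

sr = 16_000
t = np.arange(sr) / sr
commentary = np.sin(2 * np.pi * 440 * t)       # stand-in for a center-panned announcer
ambience = 0.3 * np.sin(2 * np.pi * 97 * t)    # venue rumble, hard-panned left

left = commentary + ambience
right = commentary.copy()

# "Karaoke" trick: center-panned content is identical in both channels,
# so L - R cancels it and leaves only the side (non-center) material.
stripped = left - right

print(np.allclose(stripped, ambience))  # → True
```

Broadcast mixes are never this clean, which is exactly why learned separators like Demucs, and the commercial tools below, exist.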

At the end of 2025, Meta released SAM Audio, a unified multimodal model that isolates any sound from a complex mixture using text, visual-click, or time-span prompts. Who else would show up with something new?

I was glad to see Korea’s Gaudio Lab make its NAB debut. Ken Kirk told me that the garbled remains from my commentary-stripping experiment were due to the variation in acoustics across venues. Gaudio’s Studio Pro gives fine control over these kinds of variables and can automatically reconstruct clean Dialogue, Music, and Effects (D/M/E) tracks even when original production stems are unavailable. It also excels in complex scenarios like dialogue with vocal music playing underneath.

Gaudio Lab’s Studio Pro audio stem separator

Then there’s AudioShake, the industry leader. Meta's own benchmark ranked it #1 among targeted source-separation models. They pushed hard on the live-broadcast angle, releasing Dialogue RT at latencies as low as 11 ms.

That’s very powerful for live production, producing simultaneous clean dialogue and background stems for independent control and analysis downstream. AudioShake also recently announced the ability to remove copyright music explicitly from noisy environments like stadiums or live events, while preserving dialogue, crowd noise, and effects.

Provenance, Deep Fakes, and C2PA (Coalition for Content Provenance and Authenticity)

You definitely missed a tiny table, no backdrop, and two hard-working gents trying to save the world from dystopia. I tripped over the Coalition for Content Provenance and Authenticity somewhere in Central Hall next to a gaggle of young college students and creator types looking confused and disoriented, much like the broadcast engineering visitors around them.

It seems obvious to me, at least after some time following generative video trends on X, that we are going to be entering an era filled with hybrid videos. This is a new kind of “deep fake” that is about mixing real and fake imagery together—primarily, I suppose, for entertainment value.

In that world, news and sports organizations will have to create an ecosystem and technical architecture for maintaining complete provenance and authenticity of all of their assets, real and otherwise.

At NAB 2026, C2PA's story was about traction. Sony demo’d an end-to-end content authenticity workflow, anchored on their PXW-Z300 camcorder.

Sony C2PA-compliant workflow

The BBC remains involved via Project Origin (with Microsoft, CBC/Radio-Canada, and The New York Times), and C2PA's steering committee now spans Adobe, Google, Amazon, Meta, Sony, OpenAI, Intel, Publicis, Truepic, and others. The latest spec extends provenance to live streaming via CMAF segment signing—a long-awaited capability for DASH/HLS pipelines. But with all that said, routine use across the ecosystem isn't there yet: a Reuters Institute survey puts C2PA-tagged news content at under 1% of the total.

So, what’s your take on NAB 2026? What did I miss? What did you see at NAB that hasn’t been written up?

Share this article on LinkedIn and let’s discuss. Or email me at Brian@RingDigital.tv.

Related Articles

NAB 2026: AI-Powered Video Creation with Avid and Google

Avid and Google teamed up for a tech preview at NAB 2026 to demonstrate new generative video functionality for Avid Media Composer. This workflow tool brings us to the front of the lens, where we've moved one step closer to building realistic content with Gen AI.

NAB 2026: NVIDIA’s Not-So-Secret AI Agents

One demo I saw at NAB 2026 covered using agents to create content. Obviously, agents need to be managed so they don't think too far outside the box. In this demo, NVIDIA talked about their control plane for a multitask agent project that helps create both a script and animated characters.

NAB 2026: AMC Global Media's AWS-Driven AI Journey

My visit to NAB 2026 skewed heavily toward talking about workflow changes with AI. I tried to keep my travels on the convention floor to very specific examples of AI's impact on streaming workflows, and this article will explore the most interesting ones I found. First up is how AMC Global Media (newly rebranded from AMC Networks) is working with AWS to leverage AI to create better content access.

NAB Is Full of Big Ideas—Live Sports Need Ideas That Work

As we head into another NAB, I'm less interested in what looks impressive in a booth and more interested in what is actually ready to survive in production.

Multiviews on Live Sports Streaming at NAB 2026

Perhaps the booths that cropped up on my NAB 2026 itinerary and the sessions I squeezed in between simply skewed this way, but it seems like everyone's favorite sport in streaming this year is, well, sports. Certainly, when it comes to live, the pinnacle of achievement appears to be seamless delivery of premium sports events at global scale, with an eye to "meeting fans exactly where they are"—crossing national, lingual, and cultural borders via localization, and transcending generations through autogenerated highlights and verticalized mobile delivery. And when it comes to traditional landscape sports video streamed to conventional CTV glass, multiview more than ever seems to be the name of the game in 2026.