Q & A: Paramount Global CTO Phil Wiser Talks AI in Media
After delivering a presentation on “AI’s Impact on Media” at HumanX in San Francisco on April 7, Paramount Global CTO and Head of Multiplatform Innovations Phil Wiser did an open Q & A with attendees, expanding on the key themes of his presentation. A partial transcription of that conversation follows. Note that Wiser’s comments are his opinions and do not necessarily represent Paramount.

Paramount CTO Phil Wiser at HumanX 2026
Q: How do you integrate AI within a merger?
Wiser: The core of the talk that I just gave was that in this moment of AI, you have an opportunity to do M&A much better because you can go in and analyze all the relevant data on both sides. I was hypothesizing that you can create these very sophisticated models that don’t exist today.
Particularly in media, I don’t think anyone’s at a point where you’re going to make a decision between two major AI investments. Anyone that’s doing AI should obviously think that most investments are transitory and you write them off after a year or so. That’s a good thing relative to past worlds where you had these very expensive, long software engineering efforts on both sides and you had to make system choices. You can be a lot more flexible now.
Q: What kind of risks do you see in using AI in the enterprise?
Wiser: We do enterprise deals with Microsoft, OpenAI, or Anthropic, and there’s a little bit of a trust [issue]: “You won’t use my corporate data to train your model and I’m going to be safe in this environment.” There’s a bit of an open question like, “Really? How do I know? I don’t really have a way to verify.”
I think that’s something that corporations are going to have to get a better handle on. How are you doing some level of verification and enforcement around protection of your data as you’re exposing it to more and more of these AI systems?
Q: How do you drive AI adoption?
Wiser: When this wave started in 2022 at Paramount, I said I would run a top-down and bottom-up strategy. The bottom-up is going to be tool access that’s as ubiquitous as I can get away with, and I’m going to partner up with our chief legal officer to facilitate that.
If I’ve got a problem where too many people are adopting AI, that’s a good problem. I did one of the first deals with Runway. Now we’ve got one of the biggest user communities [in the industry]. My view was, you have to run the experiment. Put it out there, measure it, and for the things that are working, amplify and lean in.
But I would say having the bottom-up without the top-down direction will only get you to maybe 10 or 15% of the organization. You really need the top-down pressure to drive true change. So now we’re doing a little bit of both.
Q: For any adoption of an AI use case, how are you approaching reliability?
Wiser: The question is really around risk and how to deal with risk and the reliability of AI systems. The simple answer is [having a] human in the loop. We definitely engineer all of our processes to make sure we have clear articulation of human intervention points.
Our adoption in the core functions of the company is not mature enough that we’re introducing any enterprise risk yet. The big issue that comes up for us is IP risk, and that’s something we do spend a decent amount of time thinking about.
But it’s been fascinating to look at models that have been built in an IP-friendly manner. The performance of those models is below those that are training on everything. I think the whole industry is trying to come to terms with how much we’re going to hold the line on a clean model that’s not as performant and puts us at a competitive disadvantage. Nobody has really figured out that line yet.
My approach on the risk to IP has been engagement at the highest levels of all the big foundation model companies and working with them to determine how to put some controls in. I think it’s been pretty effective.
There are companies that are out there saying, “Look, we’re going to police the internet.” I think you need something like that. I just don't know how it’s going to play out. If it’s an infringing use that reduces our IP rights, clearly, we’re going to go all the way in on that. If it’s something that’s interesting because a community of people are riffing on a brand that we have and doing interesting things, we don’t really want to stop that, right? We want people to engage with our brands in new ways.
Q: In the last year, what have been the best things AI has brought to your workflows?
Wiser: We’ve got a whole factory floor that processes [all of the content we deliver]. It’s high volume and we have very high bars in terms of quality. It’s very complex because we need, for example, to take a new episode of a television show, and localize that for the globe almost immediately. In the good old days of TV, you would air it in the US and then windows could be many months [for worldwide distribution].
Now in a streaming world, it’s day-and-date globally, which means you need all of those localization assets almost immediately. Clearly, that’s a great application of AI to just help you do that translation, whether it’s subtitling or dubbing. I think that is a big opportunity that’s not fully realized.
The other [opportunity] is ingesting all of the assets for a single television episode; we may have hundreds of assets to go along with it. Stitching them together and understanding which of those work together is something we’re now doing with AI. We’re having AI look at all the video, pick the best asset, do all the synchronization, and build the package that we would deliver to an Apple or an Amazon, which is saving a tremendous amount of time, increasing reliability, and [accelerating] delivery.
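The packaging step Wiser describes (score candidate assets, pick the best one per slot, assemble a delivery package) can be pictured roughly as follows. This is a purely illustrative sketch; every function name, field, and scoring mechanism here is hypothetical, not anything Paramount has disclosed.

```python
# Illustrative sketch of automated asset packaging: score candidate
# assets, pick the best per slot, and build a delivery package.
# All names and structures are hypothetical.

def pick_best(assets, score):
    """Return the highest-scoring candidate asset for one package slot."""
    return max(assets, key=score)

def build_package(episode_id, candidates_by_slot, score):
    """Assemble a delivery package with one winning asset per slot
    (e.g. key art, trailer, subtitle track)."""
    return {
        "episode": episode_id,
        "assets": {slot: pick_best(assets, score)
                   for slot, assets in candidates_by_slot.items()},
    }

# Toy usage: "quality" stands in for whatever score a model might produce.
candidates = {
    "key_art": [{"id": "art1", "quality": 0.7}, {"id": "art2", "quality": 0.9}],
    "trailer": [{"id": "tr1", "quality": 0.8}],
}
package = build_package("s01e01", candidates, score=lambda a: a["quality"])
# package["assets"]["key_art"]["id"] == "art2"
```

In a real pipeline the scoring function would be the hard part (a model evaluating video quality, sync, and brand fit); the selection and packaging logic around it stays this simple.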
The other area that’s interesting is compliance. When we create a show that we can air in the US, we may not be able to air that in a Middle Eastern country just because of their rules around content and what is allowed or not allowed. Normally, we would have compliance teams watching that and saying, “No, you can’t have that shown.” And we would edit that out.
Now we’re using AI to detect those compliance issues and then auto-edit to create a version that would work in that region. I think there’s a lot of that stuff that’s more behind the scenes [editor’s note: normally we would look further into this, but the group Q & A format didn’t allow it], but really impactful in terms of our ability to monetize our assets.
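The auto-editing step can be pictured as simple interval arithmetic: once a detector has flagged non-compliant time ranges, the edit is everything that remains. A toy sketch, with all names invented for illustration:

```python
def keep_segments(duration, flagged):
    """Given a clip duration (seconds) and a list of flagged
    (start, end) time ranges, return the complementary segments
    to keep in the regional edit."""
    kept, cursor = [], 0.0
    for start, end in sorted(flagged):
        if start > cursor:
            kept.append((cursor, start))
        cursor = max(cursor, end)
    if cursor < duration:
        kept.append((cursor, duration))
    return kept

# A 60-minute episode with two flagged scenes:
edits = keep_segments(3600.0, [(300.0, 330.0), (1200.0, 1260.0)])
# edits -> [(0.0, 300.0), (330.0, 1200.0), (1260.0, 3600.0)]
```

As with packaging, the detection model is the hard part; turning its flags into an edit decision list is mechanical.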
Q: How are staff and hiring changing?
Wiser: It’s an interesting question of how we’re thinking about talent differently, either incumbent talent or new talent we’re bringing in. I’m challenging people on how clearly they can articulate their knowledge and their ideas. If they can communicate to me in a very clear way, they’re going to be able to communicate to an agent or an AI model in a clear way. I’m looking for domain experts who are clear thinkers, able to take their ideas and represent them.
Q: What role do you think the evolution of available technology is going to have in determining what the next stage of streaming is going to be?
Wiser: It’s really going to be the consumption devices. 2D video is really durable. It’s been working for a century or more. I do think that Mark Zuckerberg is right. There is some 3D version of entertainment that's going to happen. Where streaming will get interesting potentially [is delivering] to new devices.
The other is the format. We’re still living with kind of a traditional TV format in terms of episodic television and others. People have tried shorts, but we still have that distance between a TikTok experience and a traditional TV experience.
It will be interesting to see how those worlds start converging because traditional media is probably not going to let YouTube just grab more and more share of attention over time, which has been the trend. I think that tension between longish-form and short-form [content] will still continue to play out.
What I think is super-interesting that I haven’t seen people really jump on yet is shifting the production process. I’m not a film creator. I’ve never made a TV show, so I’m talking purely from an observer standpoint. It’s very linear. It’s like the way we used to make software.
I think there’s an opportunity to invert that model and make it much more agile because the most expensive part of that production is at the end. So, when you see all that work come to screen, you’re like, “Oh my God, that doesn’t work.” It’s very time-intensive, very cost-prohibitive to make some of those changes. And that’s why sometimes you get a movie that comes out and you say, “Why would they ever make that?” Because you didn’t see it fully come together until the end.
Now you can see it fully come together at the beginning. It may not be final pixels, but you can iterate on the story narrative, you can iterate on the shots and the moments up front in an agile way. So, if you can double the number of iterations on evolving the story and the way it looks and the way it flows up front, you can have a higher success ratio.
That’s my theory. We’ll see if it plays out. If you reduce risks upfront, you’re willing to take more shots on goal because you’re going to see that early in the process.