Addressing AI-Associated Authenticity Issues at Streaming Media Connect
On Thursday, May 22, at Streaming Media Connect, Nadine Krefetz, Consultant, Reality Software, moderated the panel “In AI We Trust? AI and Content Authenticity,” which examined why verifying content validity matters more than ever in a world of AI-generated synthetic content and deepfakes.
The panelists were Andy Beach, Media & AI Strategist, Alchemy Creations, and Conference Chair, Streaming Media 2025; Renard Jenkins, President and CEO, I2A2 Technologies, Studios & Labs; Lindsay Stewart, CEO and Founder, Stringr; and Manny Ahmed, Founder and CEO, OpenOrigins.
Key takeaways included the following:
- Authentication, which includes tracking what has been done to a piece of media and how much AI was involved, is paramount to building trust in the companies that work with that media.
- There are organizations working on standardizing authentication practices, but no one has cracked the definitive solution yet. It will take global cooperation.
- Humans will continue to have job roles that aren’t subsumed by AI—because AI still can’t create knowledge, it can only synthesize it.
- Companies looking to use vendors that employ AI should ask questions about the underlying data and whether it was ethically sourced.
Authentication Leads to Trust
Krefetz opened the discussion by posing the question, Whose reputation has the most potential for harm with AI? Stewart came at this question from her experience in news, saying this field must walk a careful line with AI. Jenkins echoed that the news business is an area of concern, along with advertising.
Ahmed believes that the origin point of all media needs to be accessible so that anyone can see what has been done to that piece of media. Beach added that open questions surround how to track authenticity, such as how to verify whether a photo is real or AI-created. There shouldn’t be speculation about whether something is real, he said. A big part of tracking authenticity, he added, is deciding what to disclose about what goes into a piece of media’s creation and when to disclose it.
Stewart shared the process at Stringr: Stringr contributors are verified by providing their phone number, and Stringr can track their location. Every video clip has that base level of verification, and then humans view it. Stewart said customers required that sense of comfort in order to say yes to working with Stringr. She noted that Stringr uses AI for transformation (e.g., text-to-video voiceover), not creation of imagery.
Ahmed pointed out that human vetting becomes unscalable with lots of content, so there needs to be a new way to authenticate on a large scale.
Standardization in Authentication
Krefetz invited the panelists to talk about image authentication and related workflows. Jenkins thinks it will be a global industry effort to standardize an authentication approach. Beach agreed, saying that companies need to prioritize adding the correct metadata to each piece of media. Stewart made the point that the authentication process can’t be overly difficult, or people won’t engage with it.
The panelists discussed organizations that are currently working toward these authentication guidelines, standards, and solutions. Opportunities exist with blockchain: Ahmed described how physical cameras generate metadata the moment a photo is taken, and noted that snapshots of that record can live on the blockchain. Metadata can be tampered with; a blockchain record can’t be. The panelists also favored using open source tools and organizations to help solve the problem.
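The tamper-evidence idea behind this discussion can be illustrated with a minimal sketch. This is a hypothetical example, not the API of any specific standard or blockchain: it simply fingerprints a piece of media with SHA-256 at capture time, so that a later re-hash reveals any alteration. In a blockchain-backed system, the original fingerprint would be the value anchored on-chain, where it cannot be rewritten.

```python
import hashlib


def content_hash(data: bytes) -> str:
    """Return a SHA-256 fingerprint of a piece of media."""
    return hashlib.sha256(data).hexdigest()


# At capture time: record the fingerprint of the original media.
# In a blockchain-backed workflow, this is the value anchored on-chain.
original = b"raw photo bytes"
anchored = content_hash(original)

# Later: an unmodified copy produces the same fingerprint.
assert content_hash(b"raw photo bytes") == anchored

# Any edit, however small, changes the fingerprint and is detected.
tampered = original + b"!"
assert content_hash(tampered) != anchored
```

The design point is that the immutable record stores only the small fingerprint, not the media itself; the media can travel anywhere, and anyone holding the on-chain hash can check whether a given copy still matches the original.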
AI and Future Jobs
The panelists agreed that AI creates opportunities to lead people to interesting jobs. Jenkins had earlier mused that verifying authenticity is a probable new job role. Humans need to be educated about AI, Beach asserted, but the field is changing so rapidly that the education process can never really stop. Later in the discussion, Stewart and Ahmed spoke about the continued importance of humans being involved with AI outputs. For now, people need baseline knowledge to check that an AI is giving them correct information, Stewart explained, and Ahmed chimed in that doing deep research is still a human skill. AI can’t expand the scope of human knowledge, he said. Jenkins thinks jobs will focus on creating efficiency by incorporating AI into a company’s pipeline. Stewart agreed, citing “drudgery jobs” like transcription. But original data collection must be done by humans, because, she noted, “A robot can’t interview the mayor.”
Questions to Be Asking
Krefetz concluded the panel with a prompt: What questions should companies using partially or fully AI-sourced content be asking of vendors? Stewart noted that at Stringr, customers ask, What’s the underlying dataset being used? Is it pre-licensed? Stringr tells customers who created the data and assures them it is all licensed, she said.
Ahmed suggested, What is the work you’re doing to prove your data? And Beach contributed, How is a tech company tracking human interaction with AI? He believes that knowing this answer will help with any copyright issues pertaining to the work.
Jenkins added, Can you help ensure that whatever we do is ethically sourced and responsibly built? There needs to be a conversation about how people define “ethically,” which will lead to better outcomes as people understand others’ viewpoints—thus keeping that human element in the process.