Video: How IRIS.TV Implements Machine Learning in Production Environments
Learn more about real-world video AI implementations at Streaming Media's next event.
Watch the complete video of this panel, AI104A: How AI is Revolutionizing Publishing, in the Streaming Media Conference Video Portal.
Read the complete transcript of this clip:
Field Garthwaite: I'm going to quickly cover, from a high level, how we are implementing machine learning in production environments.
The first is creating a common data model. This is something that companies like Facebook, Google, Amazon, and Netflix are very good at doing, but traditional media companies have been a little slow to adopt.
The second component, which the video touched on, is how our APIs integrate into video players to actually capture audience behavior. What we'll touch on briefly is how we do that in a GDPR-compliant way, which is a very relevant topic for those of you dealing with it right now. Lastly, how you essentially develop insight and get it to teams in an actionable way.
The first component here we're going to talk about is metadata ingestion and taxonomy creation, and then we'll kind of move through those other steps. The major takeaway here, in terms of creating a common data model, is that every business is unique. If you're a broadcaster, a news publisher, you may have some common ground with other news publishers and best practices you can adopt. But you're also going to have topics, like sections of your paper or areas that you have specific coverage.
There are a number of examples that Kara will talk about, but one is the Olympics or other special series. Having a taxonomy and common data model around that makes the data model manipulable by machine learning in the future, so you can actually structure business rules around it.
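To make the idea of a machine-manipulable common data model concrete, here is a minimal sketch. All names (`VideoAsset`, `assets_for_topic`, the taxonomy paths) are hypothetical illustrations, not IRIS.TV's actual schema; the point is that consistent taxonomy tags let business rules, such as boosting Olympics coverage, be expressed in code.

```python
from dataclasses import dataclass, field

@dataclass
class VideoAsset:
    """One entry in a hypothetical common data model."""
    asset_id: str
    title: str
    # Taxonomy terms, e.g. section/topic paths like "sports/olympics"
    topics: list = field(default_factory=list)

def assets_for_topic(assets, topic):
    """Select assets carrying a given taxonomy term, so editorial
    business rules can target them programmatically."""
    return [a for a in assets if topic in a.topics]

catalog = [
    VideoAsset("v1", "Figure skating recap", topics=["sports/olympics"]),
    VideoAsset("v2", "Local election results", topics=["news/politics"]),
]
print([a.asset_id for a in assets_for_topic(catalog, "sports/olympics")])  # ['v1']
```

Because every asset shares the same structure, the same rule works across a broadcaster's whole catalog rather than being rebuilt per section.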
The second piece, the API integration, is fairly straightforward, like most APIs. But IRIS.TV sits on top of any player. In this use case, this is a Brightcove backend. IRIS.TV also supports API-based installation, so if you have your own video player, as Gannett does now, it's also easy to integrate.
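The event-capture side of such a player integration can be sketched as follows. This is an illustrative assumption, not IRIS.TV's actual API: the function name and payload fields are invented. It shows one common GDPR-minded pattern, including a user identifier only when the viewer has consented, so behavioral events are otherwise anonymous.

```python
import json

def build_view_event(asset_id, event, consent_given, user_id=None):
    """Build a hypothetical player-event payload. The user identifier
    is attached only when consent was given; otherwise the event
    carries no personal data."""
    payload = {"asset_id": asset_id, "event": event}
    if consent_given and user_id:
        payload["user_id"] = user_id
    return json.dumps(payload)

# Without consent, the same interaction is reported anonymously.
print(build_view_event("v1", "play", consent_given=False, user_id="u42"))
```

The player would post payloads like this to the platform's collection endpoint on play, pause, and completion events.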
Finally, in terms of how this personalizes video, a little bit of background: setting up ingestion and using NLP to create contextually relevant data at the asset level sets up the first machine learning system, along with device, time of day, and other contextual information, and finally cohort analysis.
Those three sit under a business-rules engine that allows an editorial team or product team, like Gannett and USA Today, to control the machine learning.
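A business-rules engine of the kind described, editorial control layered on top of model output, might look roughly like this sketch. The function, field names, and scores are all hypothetical: model-scored candidates come in, editorial rules drop blocked topics and can pin a chosen asset to the top, and the result is what the player actually streams next.

```python
def apply_business_rules(scored, blocked_topics, pinned=None):
    """Apply editorial rules on top of model scores: drop candidates
    in blocked topics, rank the rest by score, and optionally move a
    pinned asset to the front."""
    allowed = [c for c in scored
               if not (set(c["topics"]) & set(blocked_topics))]
    ranked = sorted(allowed, key=lambda c: c["score"], reverse=True)
    if pinned:
        # Stable sort: pinned asset first, remaining order preserved.
        ranked.sort(key=lambda c: c["asset_id"] != pinned)
    return [c["asset_id"] for c in ranked]

candidates = [
    {"asset_id": "v1", "score": 0.9, "topics": ["sports/olympics"]},
    {"asset_id": "v2", "score": 0.8, "topics": ["news/politics"]},
    {"asset_id": "v3", "score": 0.5, "topics": ["sports/olympics"]},
]
print(apply_business_rules(candidates,
                           blocked_topics=["news/politics"],
                           pinned="v3"))  # ['v3', 'v1']
```

The design point is the separation of concerns: the model proposes and scores, while editors retain a deterministic override layer they can reason about.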
Limelight's Jason Hofmann, Citrix' Josh Gray, and REELY's Cullen Gallagher discuss best practices for training AI systems at Streaming Media East 2018.
Google's Matthieu Lorrain cautions of the risks of doing AI for its own sake in this clip from Streaming Media West 2018.
RealEyes Director of Technology Jun Heider discusses the importance of internal self-assessment and which use-case elements to consider when choosing a platform for video AI in this clip from Streaming Media East 2018.
RealEyes Media Director of Technology Jun Heider identifies the key players in the AI platform space in this clip from Streaming Media East 2018.
RealEyes Director of Technology Jun Heider outlines the first steps in choosing an AI platform in this clip from his presentation at Streaming Media East 2018.
Microsoft Principal Product Manager Rafah Hosn makes the case for reinforcement learning as a machine learning paradigm for content personalization in this clip from Streaming Media East 2018.
Microsoft Principal Product Manager Rafah Hosn discusses the benefits and limitations of a content personalization strategy based on supervised machine learning in this clip from Streaming Media East 2018.
Microsoft Principal Product Manager Rafah Hosn explains how Microsoft's machine learning-driven decision services helps brands target viewers and increase engagement in this clip from Streaming Media East 2018.
Comcast Technical Solutions Architect Ribal Najjar discusses how operationalizing commonalities between QoE and QoS metrics can deliver a "super-powerful" dataset in this clip from Streaming Media East 2018.
Comcast Technical Solutions Architect Ribal Najjar defines video QoE both in terms of subjective experience and qualitative measurement in this clip from Streaming Media East 2018.
Gannett Senior Director Kara Chiles discusses how USA Today leveraged IRIS.TV and data to localize and personalize their Winter Olympics 2018 coverage in this clip from Streaming Media East 2018.
ZoneTV's Tom Sauer describes how machine learning can be used to overhaul the TV world and deliver more individualized experiences in this clip from Streaming Media East 2018.
REELY CEO Cullen Gallagher makes the business-growth case for content owners developing an AI strategy in this clip from Streaming Media East 2018.
IBM Watson Media's David Clevinger discusses how media entities are currently using video AI in this clip from Streaming Media East 2018.
Citrix Principal Architect Josh Gray explains how video enables higher-acuity metrics analysis in this clip from Streaming Media East 2018.
Limelight VP of Architecture Jason Hofmann discusses how AI impacts content delivery optimization in this clip from Streaming Media East 2018.
Citrix' Josh Gray provides tips on AI model development and Reality Software's Nadine Krefetz and IBM's David Clevinger speculate on the possibilities of metadata-as-a-service in this clip from Streaming Media East 2018.
Google's Leonidas Kontothanassis explores the vast range of applications for machine learning in the media workflow and supply chain in this clip from his Content Delivery Summit keynote.