AI Will Soon Bring Huge Changes to Live Video Production
The industry has barely scratched the surface on how artificial intelligence can be used. In the next few years, look for AI to automate mundane areas of live streaming.
Artificial intelligence (AI), deep learning, and natural language processing will be the next transformative technologies for streaming. They all will have an impact on streaming through all stages of production, from content creation to consumption. With the proliferation of AI in many different industries, there’s no doubt that it will be heavily used for live streaming on a wider scale in the near future.

Some of the companies and technologies that are making headway in this space include Google Cloud Video Intelligence, Conviva’s Video AI Architecture, Nvidia DLA, and IBM’s Watson technology. All of these technologies currently deploy AI in varying degrees—especially in the cloud—but we’ll soon see AI making inroads into other facets of streaming as well.

AI can help replace the production workforce behind the camera and even perform mundane and time-consuming tasks that involve labor-intensive content/data management. Currently, AI is being used in viewer metrics, network and technical troubleshooting, and ad serving, but there are other potential uses that remain virtually untapped.

Smart Camera Tracking and Video Frame Composition

Although there are currently several motion-tracking camera systems that allow automated tracking of moving subjects in front of the camera, they all require producers to place transmitters or sensors on the subject. AI will be able to track speakers, athletes, or entertainers without any additional hardware or sensors. Deep learning algorithms will analyze the video and follow people doing different activities, whether on a stage or in other environments, while keeping them perfectly framed in the shot. Even now, this technology enables drones to follow athletes sprinting on a field and track their targets with unrelenting precision.
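To make the idea concrete, here is a minimal sketch of the framing step only, under the assumption that a deep-learning detector supplies a subject bounding box each frame (the function name and inputs are hypothetical, not from any real product mentioned above):

```python
# Illustrative sketch (hypothetical API): a sensor-free tracking loop's
# framing step. A real system would get subject_box from a neural detector;
# this only shows the pan/tilt correction that keeps the subject centered.

def framing_correction(subject_box, frame_size, deadband=0.05):
    """Return (pan, tilt) nudges in [-1, 1] that re-center the subject.

    subject_box: (x, y, w, h) in pixels, top-left origin.
    frame_size:  (frame_w, frame_h) in pixels.
    deadband:    fraction of frame within which no correction is sent,
                 to avoid jittery camera motion.
    """
    x, y, w, h = subject_box
    frame_w, frame_h = frame_size
    # Normalized offset of the subject's center from the frame center.
    dx = ((x + w / 2) - frame_w / 2) / frame_w
    dy = ((y + h / 2) - frame_h / 2) / frame_h
    pan = dx if abs(dx) > deadband else 0.0
    tilt = dy if abs(dy) > deadband else 0.0
    return pan, tilt
```

The deadband mimics what a human operator does instinctively: small drifts are ignored so the shot stays steady.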

In addition, there is a direct correlation between creative visual storytelling and mathematics. The key components of video imaging—frame rates, focal lengths, aperture, and composition—are based on ratios and require at least a basic understanding of the math behind them to use them effectively.

The Golden Ratio (a proportion, prized for millennia by artists, architects, and scientists alike, in which the ratio of two quantities is the same as the ratio of their sum to the larger of the two) can be programmed into deep-learning-based visual perception algorithms. Thus, AI-enabled cameras can be optimized to capture the most aesthetically pleasing video images for the human eye, a task that has traditionally been performed by camera operators. AI will eventually replace the need for a camera operator in most cases. In addition, AI will be programmed to track subjects using the golden ratio and the principles of visual hierarchy as its foundation.
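As a sketch of how that proportion becomes a camera target, the golden ratio φ = (1 + √5)/2 ≈ 1.618 can be turned into an off-center placement point for a subject (the function below is a hypothetical illustration, not code from any system named in this article):

```python
# Illustrative sketch: placing a subject on a golden-ratio line of the frame.
# phi satisfies (a + b) / a == a / b for a > b; the subject sits at 1/phi of
# the frame width rather than dead center.

PHI = (1 + 5 ** 0.5) / 2  # ~1.618

def golden_target(frame_w, frame_h, from_left=True):
    """Return the (x, y) point where a subject would ideally be framed."""
    x = frame_w / PHI if from_left else frame_w - frame_w / PHI
    y = frame_h - frame_h / PHI  # upper golden line, for head placement
    return x, y
```

The resulting point splits the frame so that the larger segment relates to the smaller one exactly as the whole relates to the larger, which is the property the parenthetical definition above describes.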

Real-Time Video Switching

Deep learning algorithms are already automating the editing and video creation process, and will bring AI to real-time video switching as well. Intelligent software will select the optimal camera shots or angles based on the content of the stream, using facial, emotional, gesture, clothing, body, and color recognition, along with other imaging data and cues. The program will determine what is in each frame of the stream, decide whether it is a wide, medium, or close-up angle, and identify which subject or person it includes. The software will analyze the audio, video, and other aspects of the stream and switch a full event or show by recognizing faces, speech, movements, or events, along with many other factors.

These auto-mixing features will be built into future video switchers, allowing for a completely AI-switched production that could eventually replace the role of a technical director for live events.

Computer-vision-based video switchers can work independently on embedded systems or devices on-premises using existing hardware. Cameras can even leverage a networked cloud server if needed.

Creating Automated Actions and Triggers for Real-Time Graphics, Animations, or CG Characters

A neural network can identify target objects or people with facial recognition, which can trigger production events such as generating a lower-third for a presenter at a conference. Facial recognition could also generate graphical statistics on a particular player on the field, or even allow control of a CG character to be inserted into a stream.
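The trigger mechanism can be sketched as a simple mapping from a recognition result to a graphics event; everything below (the roster, IDs, and event schema) is a hypothetical stand-in for whatever a real face-recognition pipeline and graphics engine would use:

```python
# Illustrative sketch: a recognized face triggers a lower-third graphic.
# A real pipeline would get person_id from a face-recognition model and
# send the event to a character generator; both are mocked here.

ROSTER = {
    "speaker_042": {"name": "A. Presenter", "title": "CTO, Example Corp"},
}

def on_face_recognized(person_id, timecode):
    """Return a graphics event for the switcher, or None for unknown faces."""
    person = ROSTER.get(person_id)
    if person is None:
        return None
    return {
        "event": "lower_third",
        "line1": person["name"],
        "line2": person["title"],
        "in_at": timecode,
        "duration_s": 6,
    }
```

The same pattern extends to the sports case: swap the roster for a player database and emit a statistics overlay instead of a lower-third.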

Cognitive technology will be prevalent in everything—sports, eSports, corporate communications, education, and live events. This will integrate data-driven assets and visualizations that change according to specific actions, times, locations, or dynamic data in relation to the stream.

Audio Analysis

Natural Language Processing (NLP) allows for automated live transcription, translation, interpretation, captioning, and audio description for use in meetings, lectures, or events. This would be useful for multinational corporations that need live captioning for town halls, product launches, or general communications in multiple languages for a worldwide audience.
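The multilingual captioning workflow has a simple shape: transcribe once, then fan out to each target language. The sketch below shows only that pipeline structure; the transcribe and translate stages are stubs, not a real speech or machine-translation engine:

```python
# Illustrative sketch of the caption pipeline's shape only. Both stages are
# stand-in stubs; a production system would run speech-to-text and neural
# machine translation here.

def transcribe(audio_chunk):
    # Stub: pretend the speech recognizer returned this text.
    return audio_chunk["spoken_text"]

def translate(text, target_lang):
    # Stub: tag the text instead of actually translating it.
    return f"[{target_lang}] {text}"

def caption_chunk(audio_chunk, target_langs):
    """Fan one audio chunk out to live captions in several languages."""
    text = transcribe(audio_chunk)
    return {lang: translate(text, lang) for lang in target_langs}
```

For a worldwide town hall, this loop would run continuously per audio segment, with each language's caption track delivered alongside the stream.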

Video Analytics and Metadata Extraction for Data Management

As companies get much more involved with streaming, the sheer volume of data generated from video is increasing exponentially. The information derived from this data can be leveraged beyond what humans can extract manually.

AI will interpret streaming content and extract metadata by generating descriptive tags, categories, and summaries automatically. This will allow for more intelligent analytics, content insights, and better content management, paving the way for efficient methods of monetizing video through targeted ads.
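At its simplest, descriptive tagging can be illustrated with a frequency count over a transcript; this toy version stands in for the learned models a real system would use, and the stopword list is an invented minimal example:

```python
# Illustrative sketch: the simplest possible auto-tagger, counting salient
# words in a transcript. Real metadata extraction uses trained models,
# not raw counts.
from collections import Counter

STOPWORDS = {"the", "a", "an", "and", "of", "to", "in", "is", "for", "on"}

def extract_tags(transcript, top_n=3):
    """Return the top_n most frequent non-stopword terms as descriptive tags."""
    words = [w.strip(".,!?").lower() for w in transcript.split()]
    counts = Counter(w for w in words if w and w not in STOPWORDS)
    return [word for word, _ in counts.most_common(top_n)]
```

Even this crude approach hints at why automated tagging scales: the same pass that produces tags can feed search indexes, content insights, and ad-targeting categories.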

Monitoring Social Media Sentiment and Auto Sharing

Social media monitoring allows brands to gauge online conversations and audience sentiment, tracking reactions in real time. This allows for immediate customization or adjustment of the content to suit the audience's stated preferences. Natural language algorithms will pull data from the stream, capture major topics and keywords, and then compile screenshots, video scripts, and highlight clips that can be used for marketing purposes or automatically uploaded to social media.
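The real-time sentiment signal can be sketched with a toy lexicon-based scorer over live chat messages; the word lists here are invented for illustration, while production systems would use trained NLP models:

```python
# Illustrative sketch: a toy lexicon-based sentiment score over live chat.
# The word lists are hypothetical; real sentiment analysis uses trained
# language models rather than fixed lexicons.

POSITIVE = {"great", "love", "awesome", "amazing", "good"}
NEGATIVE = {"boring", "bad", "hate", "lag", "terrible"}

def message_score(message):
    """Score one chat message: +1 per positive word, -1 per negative word."""
    words = {w.strip(".,!?").lower() for w in message.split()}
    return len(words & POSITIVE) - len(words & NEGATIVE)

def audience_sentiment(messages):
    """Average score across messages; a producer could watch this trend live."""
    if not messages:
        return 0.0
    return sum(message_score(m) for m in messages) / len(messages)
```

A running average like this, recomputed every few seconds, is the kind of signal that could prompt a producer (or an automated director) to adjust pacing or content mid-stream.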

Artificial intelligence will be a powerful tool for companies in the streaming industry to leverage once they are able to unlock its full potential. We have just begun to scratch the surface of the full power of AI for intelligent streaming. The examples above are just a small sample of how AI can enhance live streaming by making live content more engaging and efficient, as well as allowing cost savings from production to delivery. AI will propel content owners, media producers, and advertisers into a new creative mindset of making smart and compelling content.

[This article appears in the October 2017 issue of Streaming Media Magazine as "Get Ready for Autonomous Streaming."]
