Can AI Make the Streaming Video Experience Even Better Than TV?
No, Skynet is not preparing to wipe out humanity anytime soon, but artificial intelligence (AI), along with its cousin machine learning (ML), is starting to make waves in industries across the globe. From alerting physicians to potentially harmful pharmaceutical interactions to handling customer service inquiries to powering self-driving cars, AI is reimagining what were once manually intensive tasks.
But these technologies aren’t new. Pioneered in the ’60s and ’70s by such luminaries as Marvin Minsky and Douglas Lenat, humanity’s quest to create synthetic intelligence grounded in silicon and algorithms has been ongoing. It’s only in recent years, though, that the technology has reached a state where it can be more easily applied to business processes that have become digitized. In many cases, the application of AI and ML can lead to improved business efficiency, cost savings, and even staff reductions. But what can such technology do for streaming video?
The adoption and proliferation of streaming video face some serious challenges, most notably scale and quality. As more people watch, expecting high definition and even 4K, the strain on the delivery infrastructure grows (ultimately causing failures) and more bandwidth is consumed, all of which conspires to undermine the end-user experience. But what if video distributors could deploy computer intelligence such as AI that automatically makes decisions to improve delivery and, consequently, the experience?
Consider how much data streaming servers and client players generate. That data could be fed automatically into AI systems, which could use it to improve delivery, such as by rerouting to a different CDN when there is congestion. But reacting alone doesn't really solve the problem: users still experience poor video quality while the switch is being made. What if that same system could switch proactively, before failure even happens? This is the real power of AI: making operational decisions without human input. And because such a system learns from each decision, it naturally becomes smarter over time.
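The proactive switch described above can be sketched in a few lines. This is a minimal, hypothetical illustration (the class and method names are invented, not any real player or CDN API): a selector tracks recent throughput per CDN, extrapolates the downward trend, and moves traffic before the active CDN drops below a rebuffer-risk threshold.

```python
# Hypothetical sketch of proactive CDN switching from QoE telemetry.
# CdnSelector, report(), and choose() are illustrative names only.
from collections import deque
from statistics import mean

class CdnSelector:
    """Tracks recent throughput per CDN and switches away from the
    active CDN when its *predicted* throughput falls below a floor."""

    def __init__(self, cdns, min_kbps=3000, window=5):
        self.cdns = list(cdns)
        self.min_kbps = min_kbps                      # rebuffer-risk floor
        self.history = {c: deque(maxlen=window) for c in cdns}
        self.active = self.cdns[0]

    def report(self, cdn, throughput_kbps):
        """Feed in a throughput sample from player/server telemetry."""
        self.history[cdn].append(throughput_kbps)

    def choose(self):
        """Return the CDN to use for the next segment, switching
        proactively if the active CDN is trending toward failure."""
        def predicted(cdn):
            h = self.history[cdn]
            if len(h) < 2:
                return mean(h) if h else float("inf")
            trend = h[-1] - h[0]          # negative when degrading
            return mean(h) + min(trend, 0)
        if predicted(self.active) < self.min_kbps:
            best = max(self.cdns, key=predicted)
            if best != self.active:
                self.active = best
        return self.active
```

A declining throughput trend on the active CDN triggers a switch even while its current average is still acceptable, which is the "before failure even happens" behavior the article describes.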
But video delivery isn’t the only area where AI might help. Take encoding, for example. Encoding vendors are working to perfect video compression that considers the nature of the scene itself (i.e., context-aware encoding). By doing so, encoding time can be reduced (because not all scenes require the same level of compression) while the overall size of the finished video also shrinks, saving bandwidth for the end user without sacrificing quality. These context-aware encoders are backed by intelligent systems that may, at some point, even be able to rewrite encoding profiles to optimize compression without any human intervention.
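The per-scene idea can be made concrete with a toy example. Assuming a scene-complexity score between 0.0 and 1.0 has already been produced by some analysis pass (motion, texture, and so on are not modeled here), each scene gets only the bitrate it needs, and the plan can be compared against a flat, worst-case encode:

```python
# Illustrative sketch of per-scene ("context-aware") bitrate selection.
# Complexity scores are assumed inputs on a 0.0-1.0 scale; the floor and
# ceiling bitrates are arbitrary example values.
def bitrate_for_scene(complexity, floor_kbps=1500, ceiling_kbps=6000):
    """Map a scene-complexity score to a target bitrate: static scenes
    compress well and get the floor; high-motion scenes get the ceiling."""
    complexity = max(0.0, min(1.0, complexity))
    return round(floor_kbps + complexity * (ceiling_kbps - floor_kbps))

def plan_encode(scene_scores):
    """Build a per-scene bitrate plan and report the fractional savings
    versus encoding every scene at the ceiling rate."""
    plan = [bitrate_for_scene(s) for s in scene_scores]
    flat = 6000 * len(scene_scores)
    return plan, 1 - sum(plan) / flat
```

For a program with one static scene, one action scene, and one average scene, the plan spends bits only where the content demands them, which is exactly the bandwidth saving the article attributes to context-aware encoders.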
And AI can improve the video experience in other ways as well. How about personalization and recommendation? How about content discovery? In both scenarios, intelligent systems could learn enough about a user’s viewing behavior to truly understand what that user likes to watch, and not just from past data. Combined with technologies like facial recognition (using a laptop’s front-facing camera, for example), the system could infer the user’s emotional state and, together with other signals (time of day, weather, calendar activities, and so on), suggest videos across platforms that the user would be most likely to enjoy. Combined with interactivity within the video itself, this could lead to highly personalized recommendations that extend across ecommerce platforms like Amazon.
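One way to picture blending history with contextual signals is a simple weighted score. Everything here is invented for illustration (the tag names, weights, and the idea of a mood signal are assumptions, and a real system would learn these weights rather than hard-code them):

```python
# Purely illustrative: blending viewing-history affinity with contextual
# signals (daypart, inferred mood) into a single recommendation score.
def recommend(catalog, affinity, context, weights=(0.6, 0.2, 0.2)):
    """Rank titles by a weighted blend of historical affinity and how
    well each title's tags match the current viewing context.

    catalog:  {title: set of tags, e.g. {"evening", "lighthearted"}}
    affinity: {title: 0.0-1.0 score learned from past viewing}
    context:  {"daypart": ..., "mood": ...} for the current session
    """
    w_aff, w_time, w_mood = weights
    def score(title):
        tags = catalog[title]
        return (w_aff * affinity.get(title, 0.0)
                + w_time * (1.0 if context["daypart"] in tags else 0.0)
                + w_mood * (1.0 if context["mood"] in tags else 0.0))
    return sorted(catalog, key=score, reverse=True)
```

With context included, a title the user likes slightly less on pure history can still rank first when it fits the moment, which is the cross-signal behavior described above.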
The future video experience is more than just “lean back.” It will integrate social and commerce. It will be driven by data. And as distributors deploy AI systems (at the edge of the network, of course, to reduce latency), which become more intelligent over time, the gap between traditional TV and streaming will grow. Forget Skynet. Think TVNet. Watching video will never be the same.
[This article appears in the September 2018 issue of Streaming Media magazine as "Can AI Make the Streaming Video Experience Better Than TV?"]