
Introduction: Foundations of AI-Based Codec Development

While many companies leverage AI to enhance the performance of existing codecs like H.264, HEVC, and AV1, UK-based Deep Render is developing a fully AI-based codec. To promote understanding of AI codec development, Deep Render's CTO and co-founder, Arsalan Zafar, has launched an educational resource titled "Foundations of AI-Based Codec Development: An Introductory Course." In a recent interview with Streaming Media Magazine, Zafar provided insights into the course's content, target audience, and expected outcomes.

Zafar began by explaining that Deep Render's mission is "to pioneer the next generation of image and video codecs utilizing machine learning." He then shared that in the five years since its founding, Deep Render has developed the world's first AI codec, which he claims delivers up to 80% bandwidth savings over H.264.

Why did Zafar produce this course? In his words, "Have you ever wondered how AI-based codecs work?" Deep Render has been immersed in this pursuit, and Zafar wants to demystify it for the wider codec community. To accomplish this, the course explores the historical evolution of AI and machine learning, highlighting key milestones and recent advancements.

The course starts with the fundamental principles of machine learning—architecture, loss function, and training—which are crucial for comprehending the underlying concepts of AI-based codecs. It then explores the architecture of AI-based codecs, demonstrating how neural network layers and machine learning primitives are configured to optimize compression efficiency. Formulating the objective function, which defines the rate-distortion tradeoff, is a central aspect of AI-based compression. Zafar also explains how rate and distortion are defined in a differentiable manner, enabling efficient optimization through backpropagation.
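The rate-distortion objective described above can be sketched in a few lines. This is a minimal illustration, not Deep Render's actual formulation: it assumes MSE as the distortion term and a unit-Gaussian prior over the latents as a differentiable proxy for the entropy-coded bit count; all function names and the lambda value are illustrative.

```python
import numpy as np

def mse_distortion(x, x_hat):
    """Distortion term D: mean squared error, differentiable everywhere."""
    return float(np.mean((x - x_hat) ** 2))

def rate_proxy(y, sigma=1.0):
    """Rate term R in bits: negative log-likelihood of the latents y under
    a Gaussian prior -- a common differentiable stand-in for the true
    (non-differentiable) entropy-coded bit count."""
    nll_nats = 0.5 * (y / sigma) ** 2 + 0.5 * np.log(2 * np.pi * sigma ** 2)
    return float(np.sum(nll_nats) / np.log(2.0))  # convert nats to bits

def rd_loss(x, x_hat, y, lam=0.01):
    """Rate-distortion objective L = R + lambda * D. Lambda sets the
    tradeoff: larger lambda favors fidelity over bitrate."""
    return rate_proxy(y) + lam * mse_distortion(x, x_hat)
```

Because every term is differentiable, a scalar loss like this can be minimized by backpropagation through the encoder and decoder networks jointly—this is the property Zafar highlights.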

[Image: AI-based codec course objectives]

Zafar then guides learners through the encoding and decoding process, highlighting the shift from traditional methods to a machine learning-driven approach. This practical framing helps ensure that learners can apply the course material to real-world scenarios.
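The encode/decode pipeline in a learned codec can be sketched as an analysis transform, scalar quantization, and a synthesis transform. The toy below is an assumption-laden illustration of that structure only: fixed random matrices stand in for the trained networks, and entropy coding is omitted.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "learned" transforms: in a real AI codec these are deep neural
# networks trained end to end; fixed random matrices stand in here.
W_analysis = rng.standard_normal((16, 8)) * 0.25   # pixels -> latents
W_synthesis = np.linalg.pinv(W_analysis)           # latents -> pixels

def encode(x):
    """Analysis transform followed by scalar quantization. In a real
    codec the rounded latents would then be entropy-coded into the
    bitstream (omitted in this sketch)."""
    y = x @ W_analysis
    return np.round(y).astype(int)

def decode(y_hat):
    """Synthesis transform maps quantized latents back to pixel space."""
    return y_hat.astype(float) @ W_synthesis

x = rng.standard_normal((1, 16))   # one 16-sample "frame"
x_rec = decode(encode(x))          # lossy round trip
```

The round trip is lossy because rounding discards information—exactly the rate-distortion tradeoff the course's objective function controls.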

The course later provides an overview of the production aspects of AI-based codecs, detailing the transition from training to inference regimes and the utilization of neural processing units for efficient execution. Then it explores the unique features and advantages of AI-based codecs, like adaptability to domain-specific content and scalability with hardware advancements. Zafar underscores the potential for rapid innovation and updates in the AI-based codec ecosystem, facilitating faster rollout and adaptation to evolving industry needs.

In addition to theoretical discussions, the course offers practical demonstrations and examples of AI-based compression, showcasing both successful outcomes and potential challenges. Arsalan invites learners to engage with Deep Render's demos and resources to further explore the capabilities of AI-based codecs.

The course is designed for a highly technical audience, primarily video codec engineers who are looking to integrate AI-based codecs into their pipelines, though it's also useful to other technologists who simply want to explore the application of AI to video codec development. Zafar describes it as a primer on AI-based compression, exploring detailed algorithms while addressing real-world production considerations like the required playback platforms for distribution. Despite its concise duration of 16 minutes, the course provides a comprehensive overview of AI-video codec development, making it a valuable resource for anyone interested in this field.

The free resource is available on YouTube at https://bit.ly/AI_codec_course.
