
Buyers' Guide to Unified Communications 2017

The continued evolution of unified communications (also known as unified communications and collaboration, or UCC) has relied on a number of key learnings from the videoconferencing, Voice over Internet Protocol (VoIP), and streaming industries. After all, if the goal is to unify all voice- and video-based enterprise communications—from live conferences to searchable on-demand streams, delivered to any type of end-point device—there needs to be collaboration between vendors in each of the three industries.

This Buyers’ Guide attempts to provide insight into UCC trends and the impact that each may have on purchasing products and services that combine streaming with other forms of enterprise video. There’s another Buyers’ Guide in this year’s Industry Sourcebook that deals with enterprise video platforms, and the two guides can be used hand-in-hand when shopping for enterprise streaming video solutions.

Let’s look briefly at how videoconferencing solutions are merging more tightly with streaming solutions.

Just Another End Point

One newer approach to merging streaming media and videoconferencing has been tried numerous times, and in numerous ways, over the almost 20-year history of collaborative streaming.

The controller for multipoint videoconferences—the kind made by Cisco, Lifesize, Polycom, Tandberg, or Vidyo—is called a multipoint control unit (MCU). Multiple conference rooms connect to the MCU, with each of these rooms or “end points” able to see the other rooms, although not at the same time.

If only three or four end points join, the resulting screen image can accommodate all rooms in a tiled, Hollywood Squares format. Otherwise, switching between rooms is determined either by the host of the event (known as “chair control”) or by an algorithm that tracks who is speaking and switches accordingly (known as “video follows audio,” since someone at a particular end-point room typically needs to speak for 3–5 seconds before the video switches over to them).
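The hold-before-switching behavior behind “video follows audio” can be sketched in a few lines. This is a minimal illustration, not any vendor’s implementation; the class name, the tick-based loop, and the idea that an MCU reports its loudest end point each tick are all assumptions for the sake of the example.

```python
# Hypothetical sketch of "video follows audio" switching. The hold
# period keeps the layout from flapping between rooms on brief
# interjections: a room must stay loudest for ~3 seconds to take over.

HOLD_SECONDS = 3.0  # assumed dominance period before a switch occurs

class VideoFollowsAudio:
    def __init__(self, hold=HOLD_SECONDS):
        self.hold = hold
        self.active = None        # end point currently on screen
        self.candidate = None     # end point trying to take the floor
        self.candidate_since = 0.0

    def update(self, loudest_endpoint, now):
        """Feed the currently loudest end point at time `now` (seconds);
        return the end point whose video should be shown."""
        if loudest_endpoint == self.active:
            self.candidate = None          # incumbent still talking
            return self.active
        if loudest_endpoint != self.candidate:
            self.candidate = loudest_endpoint
            self.candidate_since = now     # new challenger; start the clock
        elif now - self.candidate_since >= self.hold:
            self.active = self.candidate   # held the floor long enough
            self.candidate = None
        return self.active

switcher = VideoFollowsAudio()
switcher.update("Room A", 0.0)
switcher.update("Room A", 1.0)
print(switcher.update("Room A", 3.5))  # Room A has held the floor 3.5s
```

A production MCU would also smooth the audio-level measurements before comparing rooms, but the hold timer above is the essence of why the 3–5 second lag exists.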

The new-old idea is to add some form of streaming encoder or recorder to a multipoint video call, which acts as an invisible end-point participant, recording the multipoint call. Since H.323, the IP standard for videoconferencing, typically carries video encoded with a profile of the H.264 codec, it’s then easier to package up the multipoint recording to publish to an online video platform or even a learning management system (LMS) for education or enterprise.

Sonic Foundry, a company that has two decades of experience with rich-media recording and tight integrations with various LMS offerings, has recently launched a version of this end-point recording solution that moves a step closer toward a holistic unified communications approach.

Streaming UCC Graphics

Rather than just record the multipoint H.323 call, Sonic Foundry’s Mediasite Join also records the H.239 synchronized data from the second screen.

Most videoconferencing setups today have one screen to view participants and another to view the PowerPoint or webpages being shown by the call’s moderator. This two-screen setup, based on an idea from PictureTel called People+Content back in 2000, allowed graphics to be sent to one screen and video to another, replacing earlier solutions that used a video-based document camera or a VGA-to-composite video converter. Prior to that, those of us in videoconferencing had to rely on something called T.120, which was the basis for Microsoft’s NetMeeting.

The two big issues with H.239 over the past 15-odd years have been how to capture the graphics in sync with the video without blowing the bandwidth budget, and how to decide which end point gets to deliver the graphics channel to all the other end points.

The first issue poses a practical challenge, as unified communications relies on guaranteeing that both the talking-heads video and the simultaneous graphics stay in sync during on-demand playback. One way to approach this was to record a second stream, solely dedicated to the graphics channel, but that required a significant amount of bandwidth to record graphics, most of which are just presentation slides, or static images, that might remain on screen for minutes on end.

For a number of years, then, the industry had to rely on two different quality encodes: one for talking heads and a higher-quality one for graphics.
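Rough arithmetic shows why a dedicated video encode is wasteful for slides. Every figure below is an illustrative assumption (typical-looking bitrates and slide counts, not measurements from any product), but the orders of magnitude make the point.

```python
# Back-of-the-envelope comparison: continuously encoding a mostly
# static graphics channel as video vs. storing one compressed still
# per slide change. All numbers are assumptions for illustration.

TALK_MINUTES = 60
SLIDE_CHANGES = 30          # assume a slide change roughly every 2 minutes
VIDEO_KBPS = 512            # assumed bitrate of a dedicated graphics encode
STILL_KB = 150              # assumed size of one compressed slide image

continuous_kb = VIDEO_KBPS / 8 * TALK_MINUTES * 60  # kbps -> KB over the hour
stills_kb = STILL_KB * SLIDE_CHANGES

print(f"continuous encode:  {continuous_kb:,.0f} KB")
print(f"timestamped stills: {stills_kb:,.0f} KB")
```

Under these assumptions the continuous encode consumes roughly fifty times the storage of timestamped stills, which is why synchronized still capture was attractive whenever the second channel carried only slides.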

The second issue has been addressed in H.239 by an inelegant solution: In a multipoint conference, only one end point can send the additional graphics channel to all other end points. The upside is that this second channel is essentially a broadcast (unidirectional video) stream rather than a collaborative (bidirectional video) stream, which reduces the computational complexity at the end points, since they only need to decode this second channel.

The advent of high-definition (HD) cameras made this task a bit easier, since the resolution of the graphics and of an HD camera were about on par, but the impending arrival of 4K presents the daunting task of sending two simultaneous 4K streams.

Making UCC Content Searchable

Some solutions use a frame grabber to snag still images of the graphics, in an attempt to keep the overall bandwidth down. This works, as long as there is no full-motion video being delivered on the H.239 channel, so look for solutions that can differentiate between still images and full-motion video being delivered on the second video channel.
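One way such a solution could differentiate the two cases is by comparing consecutive frame grabs: near-identical grabs suggest a static slide, while sustained differences suggest full-motion video. The sketch below is a hypothetical illustration of that idea (the threshold, the flat-brightness-array frame format, and the function names are all assumptions), not a description of any shipping product.

```python
# Minimal sketch: classify the H.239 channel as "still" or "motion"
# by measuring the mean per-pixel difference between consecutive
# frame grabs. Frames are modeled as flat lists of brightness values.

STILL_THRESHOLD = 2.0  # assumed mean per-pixel difference for "static"

def mean_abs_diff(a, b):
    return sum(abs(x - y) for x, y in zip(a, b)) / len(a)

def classify_channel(frames, threshold=STILL_THRESHOLD):
    """Return 'still' if every consecutive pair of grabs is
    near-identical, else 'motion'."""
    diffs = [mean_abs_diff(f1, f2) for f1, f2 in zip(frames, frames[1:])]
    return "still" if max(diffs) < threshold else "motion"

slide = [[10, 10, 10, 10]] * 5                            # unchanged slide
video = [[i * 10, i * 10 + 5, i * 20, 0] for i in range(5)]  # moving content
print(classify_channel(slide))   # still -> capture as a timestamped image
print(classify_channel(video))   # motion -> fall back to a video encode
```

In practice the decision would run over a longer window and tolerate compression noise, but the core test, how much consecutive grabs actually change, is the same.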
