Where Videoconferencing And Streaming Technologies Intersect - Part 2
As with any capture environment, producing a live video event through an integrated videoconferencing and streaming system introduces a delay while the media stream is prepared for its intended audience. Producers and audiences must recognize that the streaming medium is not intended to be a real-time interactive environment. How much delay a solution introduces depends on many factors, but converting media between the conferencing and streaming platforms can add as little as 2 seconds or as much as 12 to 15 seconds.
Systems that simultaneously prepare multiple data rates or multiple streaming media formats will introduce additional delays if the processing capacity of the platform is not up to the task. As with other features, the customer must weigh the benefits of added complexity (such as creating special media targeted at particular audience sub-segments) against the risk of failures or delays these additional features place on the preparation of media.
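The cumulative delay described above can be thought of as a sum of per-stage latencies. The sketch below uses purely illustrative stage values (not measurements of any product) to show how quickly a budget reaches the 2-to-15-second range, and how extra simultaneous output formats add to it on a processing-bound platform:

```python
# Illustrative latency budget for a videoconferencing-to-streaming pipeline.
# All stage values are hypothetical examples, not vendor measurements.
stages_seconds = {
    "videoconference capture and mixing": 0.3,
    "streaming encoder": 1.5,
    "packaging and server ingest": 0.5,
    "network transit to viewer": 0.2,
    "player buffering": 3.0,
}

def total_latency(stages, extra_formats=0, per_format_penalty=1.0):
    """Sum per-stage delays; each additional simultaneous output format
    adds encoding time when the platform is processing-bound."""
    return sum(stages.values()) + extra_formats * per_format_penalty

print(f"single format: {total_latency(stages_seconds):.1f} s")   # 5.5 s
print(f"three formats: {total_latency(stages_seconds, extra_formats=2):.1f} s")  # 7.5 s
```

In practice each stage would be measured, not assumed, but the additive structure holds: every feature added to the pipeline buys its benefit with delay.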
What Can Be Done To Reduce Latency?
Some vendors emphasize the low-latency characteristics of their integrated systems and offer the ability for an audience member at a streaming media terminal and a speaker at a videoconferencing system to interact by voice without awkward pauses. The variable latency introduced by IP networks can wreak havoc with this experience, so it should be tested thoroughly. The best experience will be found on networks where quality of service (latency, jitter, and total bandwidth) is guaranteed by the providers.
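Testing that variability can be as simple as timestamping probe packets and summarizing the delay samples. A minimal sketch, using hypothetical one-way delay figures rather than real captures:

```python
import statistics

# Hypothetical one-way delay samples (milliseconds) gathered by
# timestamping test packets across the network; example values only.
delays_ms = [42, 45, 41, 90, 44, 43, 120, 42, 46, 44]

mean_delay = statistics.mean(delays_ms)
jitter = statistics.pstdev(delays_ms)  # spread of delays around the mean

print(f"mean delay: {mean_delay:.1f} ms, jitter (std dev): {jitter:.1f} ms")
```

A jitter figure that is large relative to the mean delay, as in the sample above, is the warning sign: voice interaction between streaming viewers and conference speakers will feel uneven even if the average latency looks acceptable.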
Although keeping a system local does not by itself guarantee low latency, a multipoint conferencing system positioned between the capture system and the encoder is sure to add several hundred milliseconds of delay before the original content reaches the streaming media encoder.
Even more than in simple videoconferencing, reducing video and audio noise improves the performance of the complete system that ends in a streaming media session: encoders spend bits representing noise, leaving fewer for the content itself at any given data rate.
To reduce the "noise" in video, avoid lightweight fabric backdrops that can move with the slightest motion from the speakers. Speakers should never be in front of a window where wind might be moving tree branches or clouds might be passing in the sky; even the motion of these natural objects across a closed blind can cause visual noise. Low-light conditions are another source of noise, so increasing light is a consistent key to higher production values.
Every effort should also be made to ensure that extraneous noise from the local environment (computer fans, building ventilation systems, cell phones, automotive traffic) is not introduced at the time of content capture. A studio explicitly removes these sources of noise, but a videoconferencing system is far less likely to be found in a soundproof facility. Proper selection of audio algorithms can also make a difference: in a high-bandwidth session, G.722 produces the best standards-based quality, but proprietary algorithms in the players can improve voice quality even further.
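One practical check on the acoustic environment is to measure the room's noise floor before capture begins and compare it to the speech level. A minimal sketch, assuming 16-bit PCM samples; the "room tone" and "speech" signals below are synthetic stand-ins, not real recordings:

```python
import math

def rms_dbfs(samples, full_scale=32768.0):
    """Root-mean-square level of 16-bit PCM samples, in dBFS.
    A quiet room should measure well below the speech level."""
    rms = math.sqrt(sum(s * s for s in samples) / len(samples))
    return 20 * math.log10(max(rms, 1e-9) / full_scale)

# Synthetic stand-ins: low-amplitude room tone vs. louder speech.
room_tone = [int(100 * math.sin(i / 7)) for i in range(1000)]
speech = [int(8000 * math.sin(i / 7)) for i in range(1000)]

print(f"room tone: {rms_dbfs(room_tone):.1f} dBFS")
print(f"speech:    {rms_dbfs(speech):.1f} dBFS")
```

If the gap between room tone and speech is small, the fans, ventilation, or traffic noise described above will be encoded right alongside the voice.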
Some facilities are prone to minor power fluctuations when a copier, air conditioner, or other power-hungry device cycles. Such variation may be invisible to the human eye, but once encoded it can cause difficult-to-remove anomalies in the final production. An uninterruptible power supply can smooth power fluctuations and reduce the likelihood that "brownouts" affect the final rich media experience.
Some studios can re-encode captured video and audio at lower data rates or in different formats, and can also improve audio/video synchronization after the fact by introducing slight delays in the video.
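That synchronization fix amounts to shifting one track's timeline relative to the other. A toy sketch, assuming the video track leads the audio by a hypothetical 120 ms so each video frame timestamp is delayed by that offset:

```python
# Hypothetical post-production sync fix: video leads audio by 120 ms,
# so shift every video frame timestamp later by that offset.
AUDIO_VIDEO_OFFSET_MS = 120

video_frame_ts = [0, 33, 66, 100, 133]  # milliseconds, roughly 30 fps
synced_ts = [t + AUDIO_VIDEO_OFFSET_MS for t in video_frame_ts]

print(synced_ts)  # each frame now plays 120 ms later, aligned with audio
```

Real editing tools apply the same idea at the container level rather than rewriting timestamps by hand, but the operation is the same: a constant delay on the leading track.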
Next Page: Content Management functionality