Tutorial: How to Leverage IBM Watson Media’s Latest Interactive Webcasting Features

IBM Watson Media is the new name for IBM Cloud Video, the company’s live-streaming platform. I started producing webcasts on this platform back in 2010, when it was called Ustream and offered a pay-per-use, white-label solution called Watershed.

Last year, my company produced more than 50 corporate webcasts on this platform, in addition to two weekly church services. Over the years, I have also produced live broadcasts on several other live-streaming, webcast, and webinar services. Some were tests for clients who wanted webinar-like interactivity features not readily available on standard webcast services and who needed to move their broadcasts off of end-of-life Flash-based platforms; at other times, I used the platform the client was already paying for or accustomed to using, such as Facebook Live, YouTube Live, and Livestream.

As a live-streaming platform, IBM Watson Media gives me what I need: a reliable CDN, multiple quality levels for streaming, and the choice of either manually entering the RTMP and channel info into my encoder of choice or logging into my account and selecting the appropriate channel directly from vMix, the software I use for most of my streaming productions.
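
For the manual route, all the encoder needs is the channel’s RTMP ingest URL and stream key. As a minimal sketch of that hand-off (with a hypothetical ingest URL and stream key standing in for the real values from your channel dashboard), here is a small Python wrapper that pushes a file to an RTMP ingest point with FFmpeg; any RTMP-capable encoder takes the same two pieces of information:

```python
import subprocess

# Hypothetical placeholders -- copy the real values from your channel dashboard.
INGEST_URL = "rtmp://example-ingest.video.ibm.com/channel"
STREAM_KEY = "YOUR_STREAM_KEY"

# Push a local file to the RTMP ingest point with FFmpeg.
subprocess.run([
    "ffmpeg",
    "-re",                       # read the input at its native frame rate
    "-i", "program_feed.mp4",    # stand-in for your program feed
    "-c:v", "libx264", "-preset", "veryfast", "-b:v", "3000k",
    "-c:a", "aac", "-b:a", "128k",
    "-f", "flv",                 # RTMP expects an FLV container
    f"{INGEST_URL}/{STREAM_KEY}",
], check=True)
```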

Although this isn’t an issue for the Vimeo-owned Livestream today, I remember a time when Livestream sold hardware locked to its service and didn’t allow its users to manually enter an RTMP code into other webcast encoders.

Last year, I wrote about some of the new interactivity features that I was starting to incorporate in my webcasts on the IBM platform. These included the registration gate, chat, Q&A box, and polls. In this article, I’m going to discuss ongoing issues and opportunities with the polls and registration gate, plus two new features: slides and captions.

Registration Gate and Polls

Some of my clients need to know who is watching their webcasts. Mainly, these are professional associations that offer credits to their members for attending training, whether at a conference or in an online live stream. The registration gate collects each viewer’s name and email address, along with data from several other optional fields, and a leads report can be generated as a CSV file. Because viewers can register in advance of the live broadcast, the report includes fields indicating whether each viewer actually watched the webcast and the type of broadcast they watched.

Advance registration is a feature that begs to be paired with an automated reminder email. Currently, the producer must download the leads report and manually send a reminder email. This is not a deal-breaker, but in some ways, it negates the point of advance registration.
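
Until that automated reminder arrives, the manual workaround is at least scriptable. Here’s a small sketch, assuming a downloaded leads report named leads.csv with Name, Email, and Watched columns (my guess at the export’s column names, so adjust them to match your actual report), that pulls out the registrants who haven’t watched yet so they can be sent a reminder:

```python
import csv

# Assumed file name and column names -- check them against your actual leads export.
with open("leads.csv", newline="", encoding="utf-8") as f:
    rows = list(csv.DictReader(f))

# Registrants the report does not show as having watched yet.
not_yet_watched = [r for r in rows if r.get("Watched", "").strip().lower() != "yes"]

# Print a simple mailing list for the reminder email.
for person in not_yet_watched:
    print(f'{person["Name"]} <{person["Email"]}>')
```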

The report also has some issues. At one point, my reports were not accurately recording whether a viewer had watched the webcast or not. I would guess that this had more to do with changes in the way browsers handle user cookies and track viewership than with the platform itself.

This is no longer an issue, but I’m noticing that the report always tells me that viewers watched a live broadcast, even in cases where I have uploaded edited content for on-demand viewing. I always explain to my clients that this report only indicates whether a viewer watched some portion of the webcast; it cannot confirm whether they watched just 5 seconds, actively viewed the entire presentation, or minimized their window and muted the volume while doing something else. I encourage them to use other independent methods of attendance verification.

In this vein, my big feature request with the registration gate and polls is that the two features work together. Presently, polls offer a way for viewers to answer multiple-choice questions that appear as pop-ups on top of the video. The producer can view the results in a pie chart and see the number of times each option was selected (Figure 1, below).

Figure 1. IBM poll results

This is great for gauging audience understanding and potentially voting on issues, but there’s a missed opportunity. If the individual registrants’ information could be tied to their answers, then the polls feature could be used to administer tests and to create multiple checkpoints to verify whether a viewer was still actively engaged in the webcast.

It could also be a way for a poll-watcher to verify that votes are coming from registered individuals and to ensure that each individual only casts one vote (as opposed to a viewer watching the broadcast on two devices and voting twice). There is also no easy way to push the results to the webcast.
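
To be clear, none of this exists today, so what follows is only a sketch of the cross-check I’m asking for: if poll responses ever carried the voter’s registered email address, a poll-watcher could confirm each vote against the leads report and count only the first vote per registrant.

```python
import csv

# Hypothetical files and columns -- this per-registrant poll data isn't available today.
with open("leads.csv", newline="", encoding="utf-8") as f:
    registered = {row["Email"].strip().lower() for row in csv.DictReader(f)}

first_vote = {}  # email -> first answer seen
with open("poll_responses.csv", newline="", encoding="utf-8") as f:
    for row in csv.DictReader(f):
        email = row["Email"].strip().lower()
        if email in registered and email not in first_vote:
            first_vote[email] = row["Answer"]   # ignore duplicate votes from second devices

# Tally one vote per registered viewer.
tally = {}
for answer in first_vote.values():
    tally[answer] = tally.get(answer, 0) + 1
print(tally)
```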

Slides

Almost all of our broadcasts involve one or more camera angles and a computer presentation. I can think of only a few times where we have broadcast a single video camera feed without adding some form of graphics or presentation, even if it was just a holding slide.

Typically, we prefer to use a video switcher to switch the full-screen view between the camera feed(s) and the computer presentation(s). This method requires us, as producers, to pay attention to the subject matter and make timing decisions on the fly as to what to show, when, and for how long.

It also gives us options to switch to when there is nothing to show on one of the inputs. With some of our more technical presentations, our clients prefer that we broadcast using picture-in-picture or picture-by-picture (Figure 2, below).

Figure 2. A picture-by-picture, slide-and-presenter configuration for widescreen slides

Picture-in-picture works best when a client designs their slides with an unused space for us to insert the video into. This can either be a blank placeholder in each slide or a logo or text element that is shown to the in-person audience but is covered by video for the webcast audience.

The benefit of this method is that there is no wasted space: the entire 1920x1080 video signal in an HD broadcast is filled with content, and the slides are designed so that nothing important sits where the video will go. We rarely have clients select the picture-in-picture method, because it requires that a set template be used, and presenters often have their own presentations designed without any consideration for a video window superimposed on top of their slides.

The picture-by-picture method is what we use most frequently when our clients want to show video and presentation at the same time. We use vMix to design a composite image with multiple elements. The advantage of this approach is that we can set the size and position of each element, including cropping the widescreen video frame if the full width isn’t needed.

The disadvantage of this approach is that it wastes a lot of space in the 16:9 broadcast frame, and everything needs to be resized. Typically, we add a background color, logos, and titles to fill the unused space. I find this method gives me the most flexibility, especially when there are multiple presenters and their respective presentations are a mix of 4:3 and 16:9 aspect ratios, as I can design two different looks and select the better one for each presenter.
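
The sizing arithmetic behind these layouts is simple but worth keeping handy when decks arrive in mixed aspect ratios. This illustrative sketch (not vMix’s API, just the math) computes how wide a slide window of a given aspect ratio is at a chosen height inside a 1920x1080 canvas and how much width is left for the camera, background, and titles:

```python
# Fit a slide window of a given aspect ratio into a 1920x1080 canvas
# and report how much horizontal space remains for the camera window.
CANVAS_W, CANVAS_H = 1920, 1080

def slide_window(aspect_w, aspect_h, height=900):
    """Width and height of a slide box of the given aspect ratio at `height` pixels."""
    width = round(height * aspect_w / aspect_h)
    return width, height

for label, (aw, ah) in {"4:3 deck": (4, 3), "16:9 deck": (16, 9)}.items():
    w, h = slide_window(aw, ah)
    leftover = CANVAS_W - w
    print(f"{label}: slide window {w}x{h}, {leftover} px left for camera and titles")
```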

Ultimately, the picture-by-picture method puts all the control in my hands as the producer and none in the viewers’ hands. This isn’t always a bad thing, but IBM has an option for displaying slides next to video in the video player.

The Slides feature allows you to upload a PDF of a slide deck and use the Remote Broadcast Console to advance the slides in time with the presentation (Figure 3, below). The viewers see a picture-by-picture display with the slides next to the video, and they have control over which of the two inputs is the dominant one.

Figure 3. IBM’s Remote Broadcast Console

I love this feature as an option that I can present to my clients who all have different needs and preferences. There are some limitations that you need to be aware of, though.

For one, the slides have to be uploaded as PDF files. This is a good way to ensure that text set in non-standard fonts looks the same as it does on the presenter’s computer, but it adds a conversion step compared to simply using the native PowerPoint file.

This matters in live production. Presenters are notorious for making last-minute changes to their slides, and this can add a lot of stress compared to simply splitting the computer signal going to the projector and taking a feed into your video switcher. It also limits what you can show in the second window to PDF files alone. Again, it’s not a deal-breaker, as you can use the video window if you want to play a video, perform a software demonstration, or navigate a website.
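
When a presenter hands over a revised deck minutes before air, the PDF conversion itself can at least be automated. One way to do it, assuming LibreOffice is installed on the production machine (PowerPoint’s own Save As PDF works just as well), is a quick headless conversion:

```python
import subprocess
from pathlib import Path

def pptx_to_pdf(pptx_path: str, out_dir: str = ".") -> Path:
    """Convert a .pptx deck to PDF with LibreOffice in headless mode."""
    subprocess.run(
        ["libreoffice", "--headless", "--convert-to", "pdf",
         "--outdir", out_dir, pptx_path],
        check=True,
    )
    return Path(out_dir) / (Path(pptx_path).stem + ".pdf")

# Example: convert a last-minute revision, ready to upload to the Slides feature.
print(pptx_to_pdf("keynote_rev7.pptx"))
```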

The slide window is also fixed to a widescreen format. I tested with what Microsoft calls its “standard” 4:3 aspect ratio, even though widescreen is now the default in current PowerPoint releases. The slides are displayed at their correct proportions, not stretched, but the slide window within the broadcast window remains widescreen. This results in wasted space and a missed opportunity for 4:3 slides to fill more of it.

Advance preparation and presenter cooperation can mitigate these limitations, but there is no getting around the next issue. I gave my test slide the title “synchronized slides and video.” Unfortunately, in practice, that isn’t what you get.

Webcasts enjoy a high degree of stability thanks to the way webcast services encode the video and deliver it in chunks to the viewer. This method adds a broadcast latency or delay, typically 15–30 seconds. On the viewer end, it also builds up a buffer of pre-downloaded video. The benefit of this approach is that if a viewer has a momentary slowdown in her internet connection, it eats into her buffer and the video doesn’t stop.

While I prefer broadcasting with a healthy buffer because it leads to a better playback experience, it causes issues for the Slides feature in that the slides and the video have different broadcast delays. I noticed that the slides took about 4 seconds from the time I advanced them in the Remote Broadcast Console to the time they changed in the video player.

The video, on the other hand, took about 15 seconds, so the slides generally arrived before the video. This disparity tends to increase on longer broadcasts, and individual viewers accumulate even more latency whenever their internet connections buffer.
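
In rough numbers, here is what those measurements imply, using the delays from my own tests (yours will vary by broadcast and by viewer): the slides land about 11 seconds ahead of the matching video unless the console operator deliberately holds each advance.

```python
# Rough latency figures from my own tests -- yours will differ.
video_delay_s = 15   # encoder-to-player delay for the video and audio
slide_delay_s = 4    # console click to slide change in the player

slide_lead_s = video_delay_s - slide_delay_s
print(f"Slides arrive roughly {slide_lead_s} s before the matching video.")

# Practical workaround when slide timing matters: the console operator
# waits out the difference before clicking to the next slide.
hold_s = max(slide_lead_s, 0)
print(f"Hold each advance about {hold_s} s after the presenter moves on.")
```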

This isn’t an issue for presentations in which each slide stays up for a long time and timing isn’t critical. But for presentations in which slides are timed for dramatic effect, the combination of the producer manually advancing the slide deck and the broadcast delay in the video means slide timing will never be as precise as a direct feed from the presenter’s laptop.

This slide-to-video synchronization issue affects only the live broadcast to this degree. In my tests of replaying a recording of the live broadcast, the slides were better synchronized with the video; they still arrived a few seconds before the video did, but within a range I would consider close enough.

Related Articles
How is enterprise streaming effectively leveraging interactivity? Erdal Kilinc discusses the ways his company, Deal Room Events, has found that interactivity benefits its clients in dynamic and unexpected forms that incorporate marketing, media, and customer engagement.
"Traditional" webcasting services like Wowza and AWS Cloud, though powerful and reliable, often came with expensive usage costs and longterm commitments that made it difficult for newer webcasters still gaining a foothold in the industry. Newer services like Mux and Pushr offer more flexible pay-as-you-go arrangements with less expensive transcoding, DVR, and storage costs that make them more accessible to newer webcasters, as Robert Reinhardt explains in this clip from Streaming Media East 2022.