
Review: Telestream Wirecast Gear 3


Testing Gear

Our first test involved the production setup shown in Figure 2, with two video inputs, one from an attached webcam and the other from an iPhone via NDI, plus PowerPoint via Desktop Presenter (Figure 4, below). I streamed this to Facebook Live at 1080p@4 Mbps, recorded the program output at 1080p@10 Mbps, both with the NVIDIA codec, and recorded both video inputs as separate ISO streams using QuickTime and the x264 codec at 10 Mbps. This gives me a pristine recording of both speakers plus the program out.


Figure 4. Our first project, a simple webinar-like production, was no problem for Wirecast.
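
If you're planning a similar multi-recording setup, the disk-space math is worth doing up front. Here's a minimal Python sketch of that back-of-the-envelope calculation, using the bitrates above; the one-hour event length is an assumption for illustration:

```python
# Rough storage estimate for this recording load: one 10 Mbps program file
# plus two 10 Mbps ISO files (bitrates from the test setup; duration assumed).
MBPS_TO_GB_PER_HOUR = 3600 / 8 / 1000  # megabits/second -> gigabytes/hour

streams_mbps = {
    "program out (NVENC)": 10,
    "ISO camera 1 (x264)": 10,
    "ISO camera 2 (x264)": 10,
}
hours = 1.0  # assumed event length

total_gb = 0.0
for name, mbps in streams_mbps.items():
    gb = mbps * MBPS_TO_GB_PER_HOUR * hours
    total_gb += gb
    print(f"{name}: ~{gb:.1f} GB/hour")
print(f"Total: ~{total_gb:.1f} GB for a {hours:.0f}-hour event")
```

At these settings, each stream consumes about 4.5GB per hour, or roughly 13.5GB total.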

Figure 5 (below) shows the feed out to Facebook and CPU utilization over the first 2-3 minutes, which hovered right around 20%. Note that Wirecast shows CPU utilization and dropped frames on the bottom right, just beneath the browser. You probably can't see it at this size, but it's reporting 14% utilization with no dropped frames. Impressive, but not surprising, given the simple nature of the project.

Figure 5. Project 2 involved Telestream Rendezvous.

The second project involved a local host and two remote speakers brought in via Wirecast's Rendezvous feature. I streamed that project to Facebook Live at 4 Mbps using the NVIDIA codec and stored the program stream to disk with the NVIDIA codec at 10 Mbps. As you can see at the bottom of Figure 5, CPU utilization hovered just above 20%, with Wirecast reporting 18%.

Then testing transitioned to Gear's internal capture capabilities, first at 1080p, then at 4K. In both cases, I split one input into four separate feeds via various splitters, resulting in four incoming feeds of the same content. I also incorporated chromakey testing to assess how the unit performed during that demanding function.

Benchmarking Gear

To get a feel for how to configure the system for these tests, I started with some streaming and recording benchmarks. In the first set of tests, I cycled through the x264, NVIDIA, and MainConcept codecs, streaming to Facebook Live for 30 seconds with each. At 4-8% CPU utilization, NVIDIA encoding was slightly more efficient than x264, which averaged 8-10%.
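
I read system-wide utilization from Task Manager; if you'd rather log it programmatically during a test cycle like this, a minimal sketch using the third-party psutil library (an assumption; install it separately) might look like this:

```python
import time
import psutil  # third-party: pip install psutil

def log_cpu(label: str, duration_s: int = 30, interval_s: float = 1.0):
    """Sample system-wide CPU utilization for one codec/shot test."""
    samples = []
    end = time.monotonic() + duration_s
    while time.monotonic() < end:
        # cpu_percent() blocks for interval_s and returns average utilization
        samples.append(psutil.cpu_percent(interval=interval_s))
    print(f"{label}: min {min(samples):.0f}%  "
          f"avg {sum(samples)/len(samples):.0f}%  max {max(samples):.0f}%")

# Start each stream/record test in Wirecast, then log it:
log_cpu("x264 @ 4 Mbps", duration_s=30)
```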

When comparing the NVIDIA and x264 captured streams, the NVIDIA clips had measurably better quality in PSNR and VMAF comparisons, though I noticed a very slight orange color cast in some test clips. I reported this to Telestream, which responded that "the color space shift in NVENC is a bug in Wirecast and we're fixing it now for a near-term future release." Given its superior overall quality and efficiency, the NVIDIA codec became my go-to codec for streaming in all later tests.
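
For readers who want to run similar quality comparisons, here's a hedged sketch that shells out to FFmpeg's libvmaf and psnr filters. It assumes an FFmpeg build with libvmaf enabled, and the file names are placeholders, not my actual test clips:

```python
import subprocess

def score(distorted: str, reference: str, lavfi: str):
    """Run one of FFmpeg's quality filters and print its summary line(s)."""
    result = subprocess.run(
        ["ffmpeg", "-i", distorted, "-i", reference,
         "-lavfi", lavfi, "-f", "null", "-"],
        capture_output=True, text=True)
    # Each filter prints its aggregate score to FFmpeg's log on stderr
    for line in result.stderr.splitlines():
        if "VMAF score" in line or "PSNR" in line:
            print(distorted, "->", line.strip())

# Placeholder file names; clips must match the source's resolution and frame rate
for clip in ("nvenc_10mbps.mp4", "x264_10mbps.mp4"):
    score(clip, "source.mov", "libvmaf")
    score(clip, "source.mov", "psnr")
```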

Next, I focused on recording the program output, with four options: H.264 encoding via NVIDIA or x264, and QuickTime using ProRes or MJPEG (I didn't try Windows Media). I configured x264 and NVIDIA to 10 Mbps for the 1080p output and recorded with all four options. Again, NVIDIA was the most efficient, and its file sizes were obviously much smaller than either QuickTime option. So, it was NVIDIA again for the program recording.
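
Comparing recording options largely comes down to file size and implied bitrate. Here's a quick sketch to tabulate both; the file names and duration are placeholders, not my actual test files:

```python
import os

recordings = ["nvenc.mp4", "x264.mp4", "prores.mov", "mjpeg.mov"]  # placeholders
duration_s = 300  # assumed length of each test recording, in seconds

for path in recordings:
    size_bytes = os.path.getsize(path)
    implied_mbps = size_bytes * 8 / duration_s / 1_000_000  # average bitrate
    print(f"{path}: {size_bytes / 1_000_000_000:.2f} GB, "
          f"~{implied_mbps:.1f} Mbps average")
```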

For ISO recording, you can use QuickTime with one of five ProRes flavors or x264, which I configured at 10 Mbps. To test capacity, I started by encoding one ISO feed with ProRes 422 HQ, then one with x264; then two with ProRes and two with x264; then three and four with each. In each case, x264 was by far the most efficient option, making it my choice for subsequent ISO testing.

In Figure 6 (below), note the shots on the bottom row. First comes the four-window shot shown in the program window, then the single inputs, then two chromakey shots, one from a 1080p source and one from a 4K source, both over a moving background. Then comes a 4K version of Tears of Steel, followed by the same chromakey videos over a static background.

Figure 6. This 1080p project served as the basis for the next set of tests.

To test performance, I cycled through the shots shown in Figure 7 (below) for ten seconds each and recorded CPU usage while streaming to Facebook Live at 4 Mbps using the NVIDIA codec, but not recording. As you can see, Wirecast handled the four-shot composition very efficiently, but CPU usage jumped significantly for the next two clips, which involved chromakeys over a moving background. Next came the 4K version of Tears of Steel, which maintained about 20% utilization, followed by the two greenscreen videos, now over a static image.


Figure 7. 1080p project efficiency during production switching.

Note that 20-30% CPU utilization isn't scary performance-wise, though the unit's very loud fans kick in when CPU exceeds roughly 20%, so it often sounds worse than it really is. However, the CPU spikes in the transitions between the various VOD clips did get my attention, and Telestream is exploring this as another possible bug.

One very common use case for Wirecast might be playing back a recorded webinar, so I tested this with the last clip on the right in Figure 6, which was a Zoom webinar recording. Unlike the chromakey-based videos, this played back with very low CPU utilization, right around the same usage as the composition that starts Figure 7. So, take chromakey out of the equation, and Wirecast plays VOD files very efficiently.

Next, I transitioned to 4K testing, with Figure 8 (below) showing a project with a 4K canvas and four 4K inputs via Gear's internal capture cards. On the bottom row, you see several production elements that I used to benchmark performance. The first is a virtual set with a small greenscreen window, then four-shot, two-shot, and three-shot compositions to simulate the normal switching you'd experience in a live production. After the three-shot, I inserted the 4K greenscreen over a live and then a static (blue) background, then Tears of Steel, and the same virtual set for the closing.


Figure 8. Here’s the 4K project.

Figure 9 (below) shows CPU utilization while cycling through the shots in the timeline for ten seconds each, while streaming at 4 Mbps to Facebook Live and recording at 30 Mbps, both using the NVIDIA codec. Again, you see the spikes when you hit the disk-based greenscreen content, but for the normal switching among 4K live sources while streaming and recording, CPU utilization is absolutely fabulous.


Figure 9. CPU utilization while recording and streaming a 4K project.

The 4K ISO picture is pretty grim; recording at 30 Mbps using QuickTime and x264, CPU utilization hit 60% while storing two streams and topped out at three. I'm not sure why you'd need ISO recordings of local cameras, but if you do, you'll have to find another device for the job.
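
For perspective, the ceiling here is clearly the x264 encode, not storage; the aggregate write load of even four 30 Mbps ISO streams is modest, as this quick sanity check shows:

```python
# Aggregate disk write load for 4K ISO recording (sketch; four streams assumed)
streams = 4
mbps_each = 30
total_mbps = streams * mbps_each
mb_per_sec = total_mbps / 8  # megabits -> megabytes per second
print(f"{streams} streams x {mbps_each} Mbps = "
      f"{total_mbps} Mbps, ~{mb_per_sec:.0f} MB/s of writes")
# ~15 MB/s is far below any modern SSD's limit, so the bottleneck is encoding CPU
```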

Angsting About Chromakeying

After reviewing these results, I was left with one important question: does chromakey in general goose the system, or just disk-based chromakey? To test this, I created a 1080p project with three sources of chromakey input: NDI (from a computer running Premiere Pro on the same network), the previous disk-based file, and video captured via SDI. I configured all three sources over both dynamic and static backgrounds. Then I created a three-shot from the inputs, which I placed at the start and end of the production.

During my tests, I streamed to Facebook Live at 4 Mbps and recorded the program output at 10 Mbps, both using the NVIDIA codec. Then I played each shot for ten seconds and recorded CPU utilization, as shown in Figure 10 (below).


Figure 10. CPU utilization with various sources of chromakey footage.

The data indicate that deploying disk-based video content via chromakey, which seems like a low-probability activity for most live event producers, does consume more CPU than either NDI or SDI, which require about equal resources. Still, 20% leaves plenty of overhead.

Finishing Up: The Kitchen Sink Shot

As a final test, I duplicated the kitchen sink project that closed my last review of the previous Intel-based generation of Gear. This involved four 1080p inputs, all with titles; PowerPoint input via Desktop Presenter; a chromakey overlay of a greenscreen video input via NDI; and a logo.

Last time out, while delivering a stream to YouTube and Facebook, storing a program feed, and saving each camera to an ISO file, CPU utilization averaged between 60% and 80%. This time, as you can see in Figure 11 (below), CPU utilization averaged right around 40% under the same conditions. I doubt you can see the caution notice in the ISO recording in the top middle, but Wirecast did drop 163 frames at the start of the ISO recording, and none thereafter.


Figure 11. The kitchen sink project.

The two projects weren’t exactly the same, and your mileage will certainly vary, but this result is consistent with my general feeling about the new Wirecast Gear system. Whether it’s the CPU change, the updated software, or both, the new Gear model feels like a very efficient platform for Wirecast publishing, all the way up to and including 4K events.
