
Configuring Servers For Streaming

To help you better understand the server's role, we've included a drawing, below, of what a high-level enterprise streaming network might look like. For purposes of this discussion, we'll focus mostly on the server.

Above: an enterprise streaming media network. 1) source media 2) FireWire or SDI 3) capture card and encoders 4) 100 Base-T Ethernet 5) rack-mount servers for VOD or streaming 6) media database server 7) Fibre Channel Storage Area Network (SAN) 8) caching servers 9) Internet HTTP server or ISP 10) clients

Tip: Distribute your processing power across multiple servers. It is better to have four 2-processor servers than two 4-processor servers. This provides a measure of fault tolerance (high availability) and server load balancing.

Tip: Dedicate separate computers to streaming media server software. Streaming server software companies strongly recommend you separate streaming servers from HTTP Web servers. Mixing the two environments on the same computer will severely impact performance, manageability, reliability, and scalability. Besides, it's very likely that any cost savings will be offset by the hardware upgrades needed to accommodate the performance requirements of both.

Tip: Specifically, don't run Windows Media Services with HTTP enabled on the same server as Windows NT or 2000 with Internet Information Services (IIS) enabled. You can enable HTTP in Windows Media Services so it will deliver media over HTTP if it can't get through via MMS, but Microsoft doesn't recommend pairing it with IIS because the two vie for the same resources. If you must put them on the same server (for testing purposes), at least disable the IIS content indexer, FTP, SMTP, and World Wide Web publishing services if they are not needed. These services use resources that could be dedicated to serving streams.

Tip: Place video on-demand servers closer to storage and live streaming servers closer to the outbound pipe. The limiting factor for on-demand streaming is how fast media can be read off the storage disk, whereas the limiting factor for live streaming is how fast media can get through the network to the outbound pipe. So place VOD servers closest to storage for rapid media retrieval, and live streaming servers closest to the external Internet connection, bypassing as many network components as possible.

Tip: The best place for a UDP streaming server is in what's known as the demilitarized zone (DMZ), the protected area directly behind the firewall. Here, UDP traffic takes place on a perimeter network away from the main corporate network, yet still has the protection of the firewall.

Tip: If you’re setting up the server(s) in an enterprise, hook the streaming server to a Fast Ethernet or OC3 ATM-switched network segment. Put the encoder and the streaming server it talks to on the same network segment apart from corporate network activity, and restrict bandwidth to it. Any switched network will let you adjust the bandwidth load going to each segment so you can place restrictions specifically for streaming or video on-demand. You’re going to want to have control over the bandwidth allotted from your corporate network to the streaming server so the streaming server doesn’t swamp your network during heavy loads.

Tip: Install the most direct and fastest data link you can between the streaming server and the encoder. If they are in the same building, Fast Ethernet will work. If they’re located in separate buildings, a high-speed data connection such as Gigabit Ethernet or ATM will provide the necessary performance requirements.

Tip: If uptime is your main goal, limit the number of servers for each encoder. Encoders are currently the least reliable box in the streaming media chain. Every time one goes on the fritz and needs to be rebooted (which can be often), it also interrupts service to the server it’s attached to. To isolate each incident, set up one server for each encoder instead of a pool of servers shared by all encoders. That way if one encoder fails and needs to be rebooted, it will disable only the streams to one server and won’t kill all the streams across the pool of servers. This is certainly not the most scalable solution, and it’s not recommended for everyone, but it can be a useful solution in some cases.

Tip: Set up redundant server farms for two offices connected by a WAN rather than have a user at location B pull a stream from location A over a slow WAN link. Users at each location will have faster local access to media than if they had to share media across the WAN. The simplest way to do this is to replicate content from location A to location B and set up separate Domain Name System (DNS) servers in each location that answer the media requests of clients in their respective locations. The other option is to install a faster connection between the two offices, but in just about all cases it's more affordable to install hardware than to pay $5,000 a month, or more, for a faster link.
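The split-DNS idea boils down to answering each client with the farm nearest to it. Here's a minimal Python sketch of that lookup; the office subnets and hostnames are made-up examples, not part of any real product:

```python
import ipaddress

# Hypothetical setup: each office's DNS answers with its local media farm.
LOCAL_FARMS = {
    ipaddress.ip_network("10.1.0.0/16"): "media.location-a.example.com",
    ipaddress.ip_network("10.2.0.0/16"): "media.location-b.example.com",
}

def local_media_server(client_ip):
    """Return the media hostname a client's local DNS would hand out."""
    addr = ipaddress.ip_address(client_ip)
    for subnet, hostname in LOCAL_FARMS.items():
        if addr in subnet:
            return hostname
    raise LookupError("client is not in a known office subnet")

# A location-B client is pointed at the location-B replica, so no media
# ever crosses the slow WAN link.
print(local_media_server("10.2.34.7"))  # → media.location-b.example.com
```

In practice the same effect comes from running an independent DNS zone (or split-horizon view) in each office, with the content replication job keeping the two farms identical.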

Tip: Use DNS Round Robin for server scalability, which distributes user requests across a bank of servers on a rotating basis. The benefit to this approach is that DNS Round Robin provides a cheap and dirty method of server load distribution. The downside is that it does not measure a server’s current load and therefore has no way to tell if an individual server is being overloaded.
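To see the rotation, and its blindness to load, in miniature, here's a short Python sketch; the pool addresses are illustrative assumptions:

```python
from itertools import cycle

# Hypothetical pool of streaming servers behind one hostname.
SERVER_POOL = ["10.0.1.11", "10.0.1.12", "10.0.1.13"]

# DNS Round Robin simply hands out addresses in rotation; it never
# asks whether the server it names is idle or saturated.
_rotation = cycle(SERVER_POOL)

def resolve_round_robin():
    """Return the next server address in the rotation."""
    return next(_rotation)

# Six successive "lookups" walk through the pool twice, regardless of
# how busy each box actually is.
assignments = [resolve_round_robin() for _ in range(6)]
print(assignments)
```

If one server in that rotation is bogged down with long-running streams, it still gets every third new client, which is exactly the weakness the next tip addresses.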

Tip: Upgrade to more sophisticated load balancing such as Microsoft’s Windows Load Balancing Service (WLBS) or hardware IP load-balancing solutions by Cisco or Enterasys before adding another server to the pool if you’re experiencing peak usage on one server and moderate to little on the others. These products not only monitor usage for more even distribution of traffic on the network, but also offer an added measure of fault tolerance because if a server goes down, they will automatically remove it from the availability list and redirect clients to operational servers in the pool.
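The essential difference from Round Robin is that these balancers consult health and load before answering. This Python sketch shows the selection logic in principle; the server states are invented for illustration and real products track far more than a stream count:

```python
# Hypothetical server states: (address, healthy?, current stream count).
servers = [
    ("10.0.1.11", True, 480),
    ("10.0.1.12", False, 0),    # down: dropped from the availability list
    ("10.0.1.13", True, 120),
]

def pick_server(pool):
    """Route the next client to the least-loaded healthy server."""
    healthy = [(addr, load) for addr, up, load in pool if up]
    if not healthy:
        raise RuntimeError("no servers available")
    return min(healthy, key=lambda entry: entry[1])[0]

# The failed server is skipped entirely, and the lightly loaded
# survivor wins over the busy one.
print(pick_server(servers))  # → 10.0.1.13
```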

Tip: For on-demand streaming, your biggest challenge will be making sure the storage can keep up with user requests for access. For dedicated server storage, SCSI is the fastest connection going. For fast shared disk across a network, look into Fibre Channel storage area networks (SANs) and iSCSI. Fibre Channel is a fast data interface standard used to interconnect storage devices so they can communicate at very high speeds (currently, 2.125Gbps but up to 10Gbps in future implementations). What’s also nice about Fibre Channel is that it can connect two storage systems located up to six miles apart. You can physically locate RAID storage closest to video servers and locate CD-ROM towers closest to audio streaming servers, yet connect the two on the same storage network for shared access. iSCSI is a fairly new development in storage protocols that might prove to be beneficial in replicating data from one storage location to another. It’s a protocol that uses SCSI commands for storage wrapped in IP protocols for sharing storage over a network.
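A quick back-of-envelope calculation shows why the storage link matters. The Fibre Channel speed comes from the figure above; the stream bit rate and overhead factor are assumptions you should replace with your own numbers:

```python
# Rough capacity estimate: how many on-demand streams can one
# storage link feed? Figures are illustrative, not measurements.
link_gbps = 2.125      # current Fibre Channel speed
stream_kbps = 300      # an assumed broadband video bit rate
overhead = 0.30        # assume ~30% lost to protocol and seek overhead

usable_bps = link_gbps * 1e9 * (1 - overhead)
max_streams = int(usable_bps / (stream_kbps * 1e3))
print(max_streams)     # streams the link can sustain, in theory
```

Even with generous overhead assumptions, the link supports thousands of simultaneous 300Kbps streams, so in practice the disks behind it, not the interconnect, are usually the first bottleneck.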

Tip: If you’re streaming content through a corporate Intranet, lucky you. You can feasibly go with multicasting for live streaming so that multiple users can share the same bandwidth. The total aggregate bandwidth requirements for multicasting are much less than those for unicasting, and that can save a bundle in bandwidth costs. Both Microsoft and Real support multicasting. To set up multicasting, you will need to enable routers and switches in your network to accept IP multicast traffic.

