Dying to Stream: The Rise in Violent Acts Streamed Live Online
Another week, another story of a horrific act livestreamed across the internet. Live video makes this possible. Now, how does the industry stop it?
“Content is king,” we like to repeat at trade shows and industry forums, even when those events are focused on the technical challenges of streaming video. In reality, we have multiple kings, as content varies widely among video platform types: business-to-business (B2B), business-to-consumer (B2C), and social media platforms.
On social media, production quality matters less than content—those compelling, can’t-look-away snippets that we Share or Like dozens of times per day. When it was just cats and dancing babies, there was a clear line of demarcation between serious and silly, and social media was the campground in which silly often pitched its tent alongside the inane, the ludicrous, and the vapid.
However, a growing trend in social media live video streams—which started with the launch of Periscope and Vine, and has continued today into the mainstream with the Facebook Live platform—has parents and authorities worried: Livestreaming platforms are being used to narrowcast self-harm or criminal activity against others.
A recent Wall Street Journal article highlighted Facebook’s dilemma in dealing with the dark side of live social media streaming, as the number of livestreamed incidents—assaults, shootings, robberies, and suicides, sometimes as many as two per day—rose steadily on Facebook Live throughout 2016 and early 2017.
The problem actually started several years ago, not too long after Periscope hit the market with its 24-hour-archive option.
“What will happen is likely to be very shocking,” a teenage girl in the Egly suburb of Paris stated in an archived version of her mid-2016 Periscope live feed, retained after her suicide broadcast at the Egly train station.
Law enforcement tried to reach the girl’s location—she had been broadcasting for several minutes before the event, even posting multiple videos, including one in which she warned her audience, “If there are underage minors watching later on, don’t stay.” But police arrived too late to save her.
Is the answer to prescreen live streams, in the same way that broadcast television uses a multisecond “dump” button so that broadcasters don’t run afoul of the Federal Communications Commission and its decency rules? Probably not. But if the number of “shock videos” on social livestreaming platforms keeps rising, those platforms will likely be litigated or regulated. Or maybe even both, as some events have continued to be broadcast for hours after the companies were warned about the objectionable or illicit content being shown in a particular live stream.
Twitter, Periscope’s parent company, faced questions about a number of acts—from kidnapping to rape to suicide—that were being livestreamed via Periscope.
“To maintain a healthy platform, explicit graphic content is not allowed,” noted Twitter spokesperson Ian Plunkett in May 2016, after the Egly suicide incident. “Explicit graphic content includes ... bodily harm. Periscope is not for content that is intended to incite violence, or includes a direct and specific threat of violence to others.”
The consolidation of Twitter and Periscope, a text broadcasting tool merged with a live video streaming tool, makes it even easier to draw an audience, clearing a streamlined path toward self-promotion of self-harm live streams.
Speaking personally, as a father, the ease of live narrowcasting on social media platforms concerns me, if not for my own daughters then for their friends. Teens will be teens, of course, but in their yearning for attention, they sometimes make split-second choices they later regret.
As an industry, we shouldn’t help them make those choices in front of a global audience. Our best and brightest need to address ways to safeguard against this kind of easy livestream access, yet the major players too often fall back on the “we don’t allow that type of content” mantra. We all know that mantra is just a cover-your-ass approach that makes the overall industry appear as if it condones this type of unfiltered content. It won’t do.
This article appears in the April/May 2017 issue of Streaming Media magazine as "Dying to Stream."