Facebook's Artificial Intelligence to Detect Live-streamed Shootings
Live-streaming allows, if not encourages, active shooters to easily capture the attention of a wide audience. The white supremacist attack in Christchurch is a prime example of how quickly these recordings can go viral, making them extremely difficult to take down, especially manually. What's more, even after the Facebook Live stream of the mass shooting was taken down in under an hour, an influx of re-uploads spread across the platform, and Facebook's detection system simply could not keep up.
As such, Facebook has long struggled to stop this kind of extremist footage from spreading through its streaming service, even though it already has systems that immediately identify and remove copyrighted or pornographic material. The company is now turning to artificial intelligence in the hope of eliminating these posts efficiently.
To achieve this, Facebook needs to acquire huge amounts of data with which to train its artificial intelligence. Alongside its work with law enforcement in the UK, it is therefore partnering with police in the US to obtain hours upon hours of first-person video of officers handling firearms during training programs.
Facebook will supply the body cameras free of charge and, in return, gain access to the recordings, enabling it to begin developing an algorithm trained to detect shootings in live streams more quickly.
An automatic detection system would not only combat the spread of terrorist videos but also alert police almost immediately, which could save many more lives in the process.
Facebook has approached the Met police with the idea, and officers are set to start providing footage in October.