Christchurch shooting live stream reportedly didn’t have 'enough gore' for system to catch it

Facebook has been under fire after the livestream of the Christchurch Mosque shooting was widely shared on its platform
ANDREW HARNIK/AP

A senior Facebook director specialising in counter-terrorism told a committee of US members of Congress that the company's system failed to detect the livestream of the Christchurch shooting because it was not "gruesome" enough, The Daily Beast claims.

Brian Fishman told members of the US House Homeland Security Committee in a closed-door briefing on March 27 that there was "not enough gore" in the video for artificial intelligence to catch it, it reported.

The American website quoted an anonymous staffer who was in the room as saying there was significant backlash to Fishman's claim. 

One member of Congress reportedly said the video was so violent that it looked like a graphic video game. 


Facebook's AI software can automatically detect and remove images and videos that breach its community guidelines. Earlier this year it announced new software that could detect and remove non-consensual intimate images - often called revenge porn - automatically, before they are even reported. 

In the past, Facebook relied on users to report content that breached its guidelines, then used image-matching software to prevent that content from reappearing elsewhere on the site. 
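Facebook has not published the details of its image-matching system, but the general idea behind such tools can be illustrated with a toy "perceptual hash": reduce an image to a small fingerprint that stays stable under minor edits, then compare fingerprints by how many bits differ. The sketch below is a minimal, hypothetical illustration in Python - it is not Facebook's actual method, and the 4x4 "images" are made-up data.

```python
def average_hash(pixels):
    """Simple average hash over a square grid of grayscale values (0-255)."""
    flat = [p for row in pixels for p in row]
    avg = sum(flat) / len(flat)
    # Each bit records whether a pixel is brighter than the grid's average.
    return tuple(1 if p > avg else 0 for p in flat)

def hamming_distance(h1, h2):
    """Count differing bits; a small distance suggests the same image."""
    return sum(a != b for a, b in zip(h1, h2))

# Hypothetical 4x4 grayscale "images": an original and a slightly
# brightened re-upload (every pixel nudged by a few units).
original = [[10, 200, 30, 220], [15, 210, 25, 230],
            [12, 205, 28, 225], [11, 215, 27, 235]]
reupload = [[12, 202, 33, 222], [18, 212, 28, 233],
            [15, 207, 31, 227], [14, 217, 30, 237]]

d = hamming_distance(average_hash(original), average_hash(reupload))
print(d)  # 0 - identical fingerprints despite pixel-level differences
```

Because the fingerprint depends only on which pixels are brighter than average, small global edits leave it unchanged - which is also why heavier edits, like the screen recordings Facebook describes below, can defeat this kind of matching.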

300,000 copies of the graphic video were shared publicly on Facebook in the first 24 hours after the Christchurch attack.
GEORGE HEARD/STUFF

Facebook said the same image-matching software is used to take down terrorist videos, though it struggled with the video of the March Christchurch shooting, in which 50 people died. 

Chris Sonderby, Facebook VP and Deputy General Counsel, said in a blog post that "some variants such as screen recordings were more difficult to detect, so we expanded to additional detection systems including the use of audio technology".

The Christchurch video was viewed fewer than 200 times during the live broadcast and about 4000 times in total before it was taken down. The first user report did not come until 12 minutes after the livestream ended. 

Facebook said that reports it received for live videos were "prioritised for accelerated review."

Facebook said the original live video was viewed less than 200 times.
TVNZ

Guy Rosen, Facebook VP of Product Management, said "we saw a core community of bad actors working together to continually re-upload edited versions of this video in ways designed to defeat our detection."

In total, Facebook said it removed about 1.5 million videos of the attack in the first 24 hours; 1.2 million uploads were blocked immediately, but 300,000 made it through. 

Stuff