New Zealand mosque attack pushes Facebook to change rules around Live streaming

A man reacts as he speaks on a mobile phone outside a mosque in central Christchurch, New Zealand, Friday, March 15, 2019. A witness says many people have been killed in a mass shooting at a mosque in the New Zealand city of Christchurch. (AP Photo/Mark Baker)

Following the New Zealand mosque attack, which was streamed live on Facebook, the social networking giant has tightened the rules that apply specifically to its Live streaming feature.

As part of its updated policies, anyone who violates Facebook’s most serious policies will be restricted from using Live for a set period of time (for example, 30 days), starting from their first offence.

“We will now apply a ‘one strike’ policy to Live in connection with a broader range of offences. For instance, someone who shares a link to a statement from a terrorist group with no context will now be immediately blocked from using Live for a set period of time,” Guy Rosen, Facebook’s Vice President of Integrity, wrote in a blog post on Tuesday.

After the March attacks at two mosques in the city of Christchurch that claimed 51 lives, Facebook said it removed 1.5 million videos of the attack within the first 24 hours. It also said it blocked 1.2 million of those at upload, meaning they were never seen by users.

The original 17-minute video of the attack was viewed 4,000 times before it was removed from the platform.

“Our goal is to minimise risk of abuse on Live while enabling people to use Live in a positive way every day. We plan on extending these restrictions to other areas over the coming weeks, beginning with preventing those same people from creating ads on Facebook,” Rosen said.

Previously, if someone posted content that violated the platform’s Community Standards, Facebook would take down the post and temporarily block the user.


In some cases, users were banned from Facebook services altogether, either because of repeated low-level violations or, in rare cases, because of a single egregious violation.

To develop technology that would help Facebook improve image and video analysis, the company is investing $7.5 million in new research partnerships with leading academics from major universities.

“This work will be critical for our broader efforts against manipulated media, including deepfakes — videos intentionally manipulated to depict events that never occurred. We hope it will also help us to more effectively fight organised bad actors who try to outwit our systems as we saw happen after the Christchurch attack,” Rosen added.
