Facebook is actively creating new terror content on the website with its auto-generation feature, alleges a whistleblower who analysed over 3,000 Facebook profiles of individuals expressing affiliation with terror or hate groups.
The social networking giant claimed last year that, with the use of advanced Artificial Intelligence (AI) technology and a growing team of expert human reviewers, it could block 99 per cent of the terrorist content of Islamic State (IS), Al Qaeda, and affiliated groups before it was reported by users.
The study by the anonymous whistleblower working with the US-based non-profit National Whistleblower Center showed that terror and hate speech content are proliferating on Facebook.
The whistleblower found that 317 profiles out of the 3,228 surveyed contained the flag or symbol of a terrorist group in their profile images, cover photo, or featured photos on their publicly accessible profiles.
The five-month study also detailed hundreds of other individuals who had publicly and openly shared images, posts, and propaganda of the IS, Al Qaeda and other known terror groups, including media that appeared to be of their own militant activity.
Alleging that Facebook is likely breaking the law by “misleading” shareholders about terror and hate content, the whistleblower has filed a petition with the US Securities and Exchange Commission.
During the five-month study period, which ended in December 2018, the whistleblower found that terror and hate content auto-generated by Facebook was “Liked” by thousands of Facebook users.
These Likes provide yet another means for individuals affiliated with extremist groups to network and recruit, the study said.
Facebook came under increased scrutiny after the New Zealand mosque shooting in March was live streamed on the platform.
Facebook co-founder Chris Hughes said earlier this week that it is time to break up the company, adding that the government must hold its CEO Mark Zuckerberg “accountable”.