FACEBOOK has been accused of turning down the chance to use an artificial intelligence tool which could have helped the firm detect online hate speech in near real-time.
Executives from Finland-based Utopia Analytics – which has created an AI content moderation tool it says can understand any language – said Facebook turned down offers to use the firm’s technology in 2018.
The AI company said it offered to build Facebook a tool within two weeks that could have helped it better moderate hate speech originating in Sri Lanka, amid rising tensions in the country and reports of more hate speech appearing online.
Appearing before MPs on the House of Commons Digital, Culture, Media and Sport select committee on disinformation, Utopia chairman Tom Packalen said that when it approached the social network at that time, Facebook was “not interested” in its technology.
Utopia says its tools can understand context as well as informal and slang language, and can analyse previous publishing decisions made by human moderators to inform their own decisions, which are then made in “milliseconds”.
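Utopia has not published how its system works, but the description above suggests a supervised classifier trained on past human moderation decisions. The sketch below is purely illustrative: the example comments, labels, and model choice are hypothetical assumptions, not the company’s actual method.

```python
# Minimal sketch of a moderation classifier trained on past human decisions.
# Illustrative only: Utopia Analytics has not published its implementation,
# and the data, labels and model choice here are hypothetical.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical examples of previous publishing decisions by human moderators:
# 1 = removed as hate speech, 0 = approved.
comments = [
    "Great article, thanks for sharing",
    "I completely disagree with this policy",
    "People like you should be driven out of this country",
    "Go back to where you came from, you don't belong here",
]
decisions = [0, 0, 1, 1]

# Character n-grams cope better with slang, misspellings and mixed languages
# than whole-word features do.
model = make_pipeline(
    TfidfVectorizer(analyzer="char_wb", ngram_range=(2, 5)),
    LogisticRegression(max_iter=1000),
)
model.fit(comments, decisions)

# Scoring a new comment is a single vector operation, so it runs in milliseconds.
new_comment = ["ppl like u shud leave this country"]
print(model.predict(new_comment))        # e.g. [1] -> flag for removal
print(model.predict_proba(new_comment))  # confidence scores
```

In practice a production system would be trained on millions of prior moderation decisions and retrained as moderators make new ones; the scikit-learn pipeline above simply shows the shape of that approach, not its scale.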
In a further statement, Utopia chief executive Mari-Sanna Paukkeri said: “In March 2018 we showed Facebook that we could get rid of the majority of the hate speech from their site within milliseconds of it appearing.
“Facebook have repeatedly claimed that this technology does not exist but despite what they may say, we have been using it successfully for over three years in many countries and with many businesses.”
Paukkeri also claimed that if implemented, the tools could have made a difference in preventing or warning of the Easter terror attacks in Sri Lanka, which killed more than 250 people.
In the aftermath of the attacks, Sri Lankan authorities blocked social media amid concerns it was being used to incite violence in the country. Paukkeri said: “It is a shame that Facebook decided that their internal considerations were more important than getting rid of the inflammatory rhetoric that was posted on their site.”
In response, Facebook said AI was an important tool in content moderation, but that more research into the issue was still needed. The company also pointed to its own technology as being capable of spotting and removing hate speech.