
Facebook is testing AI to tackle hate speech in groups 

Facebook is looking to take AI further by deploying tools that flag hate speech in private groups.

Social apps have made the use of AI increasingly versatile, and the most recent example is Facebook, which is looking to take AI further by deploying tools that flag hate speech in private groups.

In a blog post, the social network announced that it is currently testing AI tools to curb fighting and unhealthy conversations in its groups. According to Facebook, this will assist the more than 70 million people who run and moderate groups on its platform. The decision comes after Facebook reported late last year that more than 1.8 billion people participate in groups each month, out of a total of about 2.85 billion monthly users.

When a conversation is detected as “contentious” or “unhealthy”, the AI decides when to send “conflict alerts” to the group administrators. The AI is also built to help moderators of very large groups, who cannot oversee everything themselves, rein in badly behaved members. It can also temporarily restrict how frequently members can post in a group, and how quickly comments can be made on individual posts.
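Facebook has not published how this rate limiting works, but the posting restriction described above resembles a per-member cooldown (a "slow mode"). The sketch below is a hypothetical illustration of that idea; the class name, cooldown value, and API are all assumptions, not Facebook's implementation.

```python
import time


class SlowMode:
    """Hypothetical per-member posting cooldown, as a sketch of the
    temporary rate limits described in the article."""

    def __init__(self, cooldown_sec=60):
        self.cooldown_sec = cooldown_sec
        self.last_post = {}  # member id -> timestamp of last accepted post

    def try_post(self, member_id, now=None):
        """Return True if the post is accepted, False if the member
        must still wait out the cooldown."""
        now = time.time() if now is None else now
        last = self.last_post.get(member_id)
        if last is not None and now - last < self.cooldown_sec:
            return False  # too soon since the member's last post
        self.last_post[member_id] = now
        return True
```

A moderator (or an automated alert) could tighten `cooldown_sec` on a heated thread and relax it once the conversation cools down.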

“The company’s AI will use several signals from conversations to determine when to send a conflict alert, including comment reply times and the volume of comments on a post. Some administrators already have keyword alerts set up that can spot topics that may lead to arguments, as well,” a Facebook spokesperson said.
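To make the quoted signals concrete, here is a minimal sketch of a rule-based detector combining fast reply times, high comment volume, and admin-defined keyword alerts. Every name and threshold here is an illustrative assumption; Facebook's actual system is a machine-learning model, not this heuristic.

```python
from dataclasses import dataclass, field


@dataclass
class Thread:
    """Illustrative stand-in for a group conversation."""
    reply_gaps_sec: list        # seconds between consecutive comments
    comment_count: int          # total comments on the post
    text: str                   # concatenated comment text
    keyword_alerts: set = field(default_factory=set)  # admin-chosen terms


def should_send_conflict_alert(thread, fast_reply_sec=30, volume_threshold=50):
    """Hypothetical heuristic: flag a thread when replies are arriving
    rapidly AND volume is high, or when an admin keyword appears."""
    rapid = any(gap < fast_reply_sec for gap in thread.reply_gaps_sec)
    busy = thread.comment_count >= volume_threshold
    keyword_hit = any(
        kw.lower() in thread.text.lower() for kw in thread.keyword_alerts
    )
    return (rapid and busy) or keyword_hit
```

In this toy version, a heated back-and-forth (many quick replies on a busy post) or a mention of a flagged topic would trigger the alert to administrators.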

Facebook demonstrated the AI with a test group called “Other Peoples Puppies”. In the sample, one user responds to another’s post, writing, “Shut up you are soooo dumb. Stop talking about ORGANIC FOOD, you idiot!!!”

Another user responds, “IDIOTS!” and adds, “If this nonsense keeps happening, I’m leaving the group!”

This conversation triggers the “Moderation Alerts” screen: at the top, and beneath it, several words appear in black type within grey bubbles, while to the right the word “Conflict” appears in blue, inside a blue bubble.

Industry experts have, however, raised concerns over this move by Facebook, noting that AI is not fully capable of detecting hate speech and could misread subtle messages, which would be frustrating for users.
