
Facebook’s AI Systems Fight Offensive Content

Facebook’s artificial intelligence systems now report more offensive photos than humans do, or so the company says. The AI can remove content perceived as offensive before it is seen by people, including pornographic content, hate speech, violent content, and posts that violate Facebook’s terms of service.

According to TechCrunch’s report, a bully, jilted ex-lover, stalker, terrorist, or troll could post offensive photos to someone’s wall, a group, an event, or the feed, and by the time the content is flagged as offensive and taken down by Facebook, it may already have done its damage. AI helps eliminate such content because it can scan images as they are uploaded, before anyone ever sees them. “Today we have more offensive photos being reported by AI algorithms than by people,” said Joaquin Candela, Facebook’s Director of Engineering for Applied Machine Learning.
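To make the idea concrete, here is a minimal, purely hypothetical sketch of what a “scan before anyone sees it” check could look like. The classifier, labels, and threshold below are assumptions for illustration only and do not describe Facebook’s actual models or internal APIs.

```python
# Illustrative sketch of a pre-publication moderation check (hypothetical).
from dataclasses import dataclass


@dataclass
class ModerationResult:
    label: str    # e.g. "nudity", "violence", "hate_symbol", or "ok" (assumed labels)
    score: float  # classifier confidence in [0, 1]


def classify_image(image_bytes: bytes) -> ModerationResult:
    """Stand-in for an image classifier; a real system would call a trained model."""
    # Placeholder result so the sketch runs end to end.
    return ModerationResult(label="ok", score=0.01)


BLOCK_THRESHOLD = 0.9  # assumed cutoff; real thresholds are not public


def handle_upload(image_bytes: bytes) -> str:
    """Scan an image at upload time, before it is published to any feed."""
    result = classify_image(image_bytes)
    if result.label != "ok" and result.score >= BLOCK_THRESHOLD:
        return "blocked"            # never shown to other users
    if result.label != "ok":
        return "queued_for_review"  # low-confidence cases go to human reviewers
    return "published"


if __name__ == "__main__":
    print(handle_upload(b"fake-image-bytes"))  # -> "published" in this stub
```

The key point the sketch captures is ordering: the check runs before publication, so a photo judged offensive can be blocked or routed to reviewers without ever reaching other users’ feeds.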

But I wonder: what would happen if the AI encountered an image that is a form of art and misinterpreted it as offensive?
