AI moderation on social media poses unexpected problem, new study shows
Cambridge, Massachusetts - A new study published in Science Advances shows AI to be a harsher judge of social media posts than its human counterparts.
AI moderators are used instead of people to clear the backlogs of complaints and reports generated by the billions of social media users, but they "often do not replicate human decisions about rule violations," the researchers found.
Not only that, but AI is "likely to make different, often harsher judgements than humans would" if not trained with the right data.
"I think most artificial intelligence/machine-learning researchers assume that the human judgements in data and labels are biased, but this result is saying something worse. These models are not even reproducing already-biased human judgements because the data they’re being trained on has a flaw," said Marzyeh Ghassemi, an assistant professor at Massachusetts Institute of Technology (MIT).
"We are going to end up with systems that are going to have extremely harsh moderations, much harsher than what humans would do. Humans would see nuance or make another distinction, whereas these models don’t," Ghassemi warned.
AI is advancing by leaps and bounds, with new developments announced at breakneck speed, a pace that has many worried about the future, including Geoffrey Hinton, often called the "godfather" of the technology.
Cover photo: 123RF/prima91