Meta's new supercomputer promises to improve moderation on social media – but can it?
Menlo Park, California - The next step in crazy-fast computing is here, now that Facebook parent company Meta has a new supercomputer built to make artificial intelligence better and faster.
Meta launched its newly built AI Research SuperCluster (RSC), along with plans to make it the fastest AI supercomputer in the world by mid-2022.
Meta's new machine is already blisteringly quick, running up to 20 times faster than the company's older infrastructure, and by the time this year's upgrades are through, it should be crunching orders of magnitude more operations per second than the average smartphone ever could.
The RSC's computing muscle is what you get if you connect 6,080 high-end graphics cards and link them to storage devices that can hold the equivalent of over 500 years of 24/7 full-HD video recording.
And if the scale of this supercomputer sounds absolutely bonkers now, it only gets bigger from here. Meta plans to bump the graphics card count up to 16,000 and expand the storage to hold an exabyte of data – enough space to hold some 3,000 Libraries of Congress.
According to the tech giant, that massive scale is necessary for the sheer amount of testing and tasks the RSC is supposed to handle, including learning how to "analyze text, images, and video together" and teaching AI how to better moderate social media.
The supercomputer will also be a testing ground for new AI systems that can translate speech between languages instantly, which would let people around the world talk to each other in their own native languages while playing virtual reality games or working together in the "metaverse."
AI moderation concerns
But Meta's plans for the RSC to help with moderation have a couple of problems, according to experts and even Facebook's own internal documents.
Meta's AI moderation isn't spread equally across the world, instead focusing heavily on the US, Brazil, and India, according to The Verge's reporting on whistleblower Frances Haugen's leaked Facebook documents.
Studies also bring up two other big questions: can AI moderation get the job done right every time, and is it really the solution to toxic and harmful content?
In 2017, a study from Cornell University and the Qatar Computing Research Institute showed that AI moderation doesn't always grasp the nuances of language and doesn't correctly identify hate speech 100% of the time, often classifying it as "just" harassment.
A 2020 study from the Transatlantic Working Group on Content Moderation Online and Freedom of Expression also showed that AI can't grasp the full context of what you post online, and still isn't 100% accurate.
The group also raised the concern that AI moderators ultimately operate on the basis of programming choices made by people – with all their natural biases and blind spots.
And even if those issues of inaccuracy and subjectivity were somehow addressed, it still isn't clear that social media platforms should rely on AI to keep their spaces safe.
Senior Microsoft researcher Tarleton Gillespie, who studies the impacts of social media, published a 2020 paper pointing out a feedback loop between growing social media use and AI moderation: as platforms grow to unimaginable scales – billions of users – companies increasingly lean on automation to keep that growth going, without ever stopping to consider that this may be exactly what makes good moderation almost impossible.
As Gillespie put it, "Maybe some platforms are simply too big."
Even the computing power of Meta's new behemoth won't be able to answer these questions definitively.
Cover photo: Collage: IMAGO / Westend61, NurPhoto