Facebook has been hoping to strengthen its artificial intelligence software to better detect harmful, illegal, or malicious posts on the social media platform. Much of the job of content moderation currently falls to humans, which frankly sounds like the most awful job imaginable. But in an effort to demonstrate it’s taking content moderation seriously, Facebook recently showed off how good its software is becoming at spotting drugs. In fact, Facebook brags, the software can correctly identify whether a photo is of marijuana or of broccoli tempura.
At a recent Facebook developers conference, the company’s chief technology officer Mike Schroepfer used the marijuana/broccoli example to demonstrate AI’s increasing savviness. CNET reports that AI isn’t yet smart enough to be a total fix for content moderation, but it can flag some images that may depict, say, illegal drugs or nudity. That could help identify instances of Facebook users attempting to sell illegal drugs on the platform; the AI software is learning to match photos with certain keywords to spot nefarious activity—though presumably, no one is captioning their photo “illegal marijuana for sale.”
That the software can now distinguish between similar-looking drugs and vegetables is, I guess, progress, but the site still faces criticism that it isn’t doing enough to curb hate speech and violent imagery. Facebook, along with other social media sites, was most recently criticized for failing to pull videos of the New Zealand mosque shooting in the wake of the March attacks. In the face of such horrific footage, the broccoli/marijuana “feat” hardly seems like cause for celebration.