DARPA has by far the greatest mission statement of any government organization: developing gonzo technologies to prevent and create strategic surprise.
And this month the Defense Advanced Research Projects Agency has created a strategic surprise to beat the band: the SocialSim sarcasm detector. It's an artificial intelligence program developed at the University of Central Florida that can understand mockery on Twitter and Facebook and, you just know, probably TikTok too.
Oh great, you say. Just what we need.
Only now, DARPA will know, deep in its twisted little AI bones, that you mean the diametric opposite: We don't need this sarcasm detector at all.
But maybe we do. Long have users of Twitter complained that there's no font for sarcasm, which can lead to misunderstandings, sometimes trivial, sometimes grievous.
DARPA's new AI is grounded in something called sentiment analysis, which automates the detection of emotion, intention and torque in online language.
But how? It's an algorithm; it doesn't detect sarcasm because it consults sarcasm's first principles — those I'd like to see — but because it has churned through sarcasm specimens for years and can now distinguish eye-rolling inverted insults from sincerity.
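(For the curious, here is a minimal sketch of what that churning looks like in code, using off-the-shelf Python tools from scikit-learn. The handful of made-up specimens and the simple model are illustrative assumptions of mine, not DARPA's or UCF's actual data or method.)

```python
# A toy sarcasm classifier: learn from labeled examples, not first principles.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical labeled specimens: 1 = sarcastic, 0 = sincere.
texts = [
    "Oh great, another Monday. Just what I needed.",
    "Wow, a sarcasm detector. Money well spent.",
    "Thanks for the quick reply, that really helped.",
    "The support team resolved my issue the same day.",
]
labels = [1, 1, 0, 0]

# TF-IDF features plus logistic regression: the model picks up which word
# patterns tend to appear alongside the sarcastic label.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(texts, labels)

# With only four examples the verdict is far from guaranteed.
print(model.predict(["Oh great, just what we need."]))
```

With four specimens the verdict is roughly a coin flip; with years of them, DARPA's version presumably does rather better.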
On a stupid level, the SocialSim sarcasm detector can be used by companies trying to separate the wheat from the chaff of salty customer feedback. On a possibly more useful level, it can be used to interpret exchanges between bad actors planning nefarious acts. "Someone should Tase that teacher" on a kid's Zoom chat, once flagged as sarcasm, might not immediately trigger a SWAT team.