All the world’s a stage, and now men and women aren’t the only players. A Microsoft researcher’s analysis using artificial intelligence to break down Shakespeare is a nifty trick showing off some shiny software. But it’s also a reminder in an increasingly automated age of what exactly makes us human.
The Microsoft project uses natural language processing techniques to map out emotions in William Shakespeare’s text. The test is designed to prompt people who already (at least sort of) understand Shakespeare to consider his works in new ways, and to help those who have trouble understanding pierce the complexity. Romeo, it reveals in colorful graphs, feels everything more keenly than his Capulet lover Juliet, despite prevailing stereotypes of stoic masculinity. “King Lear” tells a story of steady decline, whereas “Coriolanus” has peaks and nadirs aplenty to signal a bumpier narrative ride.
As useful an educational tool as this system might be, the Bard’s greatest admirers may be unable to resist raising an eyebrow. Do readers really need an algorithm to tell them that Romeo is eye-rollingly mopey, or that things go more or less right for Macbeth until they start going very wrong? Isn’t part of the point of studying Shakespeare today that it’s overwhelming and foreign until, suddenly, it’s familiar? These objections might all be secondary to a more powerful fear: the thought that a computer can read Shakespeare just as well as we can seems to take the human out of the humanities.
Which is why it is reassuring to learn that, as advanced as machine learning has become and as far-reaching as the implications of the technology may be, Microsoft’s tool thought that “The Comedy of Errors” was, well, a tragedy. That’s because the slapstick physicality in the play confused it. Algorithms have trouble distinguishing friendly teasing from cruel mockery, which would stymie any computer that tried to make sense of Mercutio. They struggle to tell truth from lies — which, of course, would render Iago a lot less interesting. Sarcasm is an ongoing computational quandary.
None of this should surprise anyone who follows social-media sites’ losing battles against the trolls of the alt-right, whose tendency to mask racism in irony makes them difficult to root out using automated content-moderation tools. Balancing the benefits that more humanlike AI could bring against the risk of abuse is a tortured task from a practical point of view. From a more human one, however, it can be hard not to hope that the tide of technological change will roll in slowly.
FROM AN EDITORIAL IN THE WASHINGTON POST