A concerned mother texted her pediatrician to ask if she should switch her 2-year-old from "1 percent milk to whiskey."
Fortunately for her, she didn't end up getting reported to child-protection authorities. The doctor realized that she had been the victim of an autocorrection that had changed "whole milk" to "whiskey."
Nearly everyone who does any sort of electronic communicating has been burned by an autocorrect program that changed a benign message into a major embarrassment. So why don't the programmers fix this problem?
They're trying, but it's not nearly as easy as it might seem to the non-techie world.
Autocorrect originated with the word processing programs of the 1980s, which checked text against a dictionary to make sure the spelling was correct.
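That early approach can be sketched in a few lines. This is a toy illustration only, not any real product's code; the word list and function name are invented for the example:

```python
# Toy sketch of 1980s-style spell checking: flag any word that is
# not found in a fixed dictionary. The word list is illustrative.
WORDS = {"switch", "to", "whole", "milk", "one", "percent"}

def misspelled(text):
    """Return the words in `text` that are not in the dictionary."""
    return [w for w in text.lower().split() if w not in WORDS]

print(misspelled("switch to wohle milk"))  # flags "wohle"
```

A checker like this can only say a word is wrong; deciding what the writer meant instead is the harder problem that later autocorrect took on.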
The programs have grown more sophisticated over the years, but that sophistication has created new problems even as it solved old ones.
Tech companies such as Google, Facebook and Apple employ dozens of linguists — or "natural language programmers," as they are known — to analyze language patterns and to track slang, even pop culture. But one person's hip jargon ("sick" equals "awesome") can be another person's literal undoing ("sick" equals "ill").
A linguistic jungle
Johan Schalkwyk, an engineer who leads speech efforts at Google, said, "Keeping up with slang and trending acronyms is like a jungle" — a jungle full of cultural land mines.
Autocorrect programs can do amazing things, including fixing mistakes caused by hitting the wrong keys (the "fat finger" phenomenon) and analyzing conversations with specific text correspondents.
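One common way to handle "fat finger" errors is to exploit keyboard layout: if a typed word isn't in the dictionary, try swapping each letter for its physical neighbors on the keyboard and see whether a real word appears. The sketch below is a simplified assumption about how such a corrector might work; the adjacency map and word list are invented for the example:

```python
# Sketch of "fat finger" correction via keyboard adjacency: replace
# each letter of an unknown word with its keyboard neighbors and keep
# any candidate found in the dictionary. Maps here are illustrative.
ADJACENT = {"l": "kop", "o": "ipkl", "i": "uojk", "p": "ol"}
WORDS = {"loop", "pool"}  # tiny stand-in dictionary

def fat_finger_fixes(word):
    """Return dictionary words one adjacent-key slip away from `word`."""
    fixes = set()
    for i, ch in enumerate(word):
        for neighbor in ADJACENT.get(ch, ""):
            candidate = word[:i] + neighbor + word[i + 1:]
            if candidate in WORDS:
                fixes.add(candidate)
    return fixes

print(fat_finger_fixes("loip"))  # recovers "loop"
```

Of course, the same neighbor logic cuts both ways: "l" and "p" sit near each other on some layouts, which is exactly how an intended "loop" can slide into something embarrassing.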
Apple's iOS 8 operating system, released in September, even purports to know how your tone changes by medium — that is, "the casual style" you may use in texting vs. "the more formal language" you are likely to use in e-mail, as the company put it in a statement. And it adjusts for whom you are communicating with, knowing that your choice of words with a buddy is probably more laid-back than it would be with your boss.
Even smarter phones
Smartphones are now able to suggest not just words but entire phrases. And the more you use yours, the more it remembers, paying attention to repeated words, sentence structure and tone.
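At its simplest, that kind of personalization amounts to counting which words you tend to put after which. The following is a minimal sketch of that idea, assuming a plain bigram count over a user's past messages; it is not any phone maker's actual model:

```python
# Toy sketch of personalized next-word suggestion: count which word
# most often follows each word in past messages, then suggest the
# most frequent follower. Purely illustrative.
from collections import Counter, defaultdict

def train(messages):
    """Build a map from each word to a Counter of the words after it."""
    follows = defaultdict(Counter)
    for msg in messages:
        words = msg.lower().split()
        for a, b in zip(words, words[1:]):
            follows[a][b] += 1
    return follows

def suggest(follows, word):
    """Suggest the most common follower of `word`, or None."""
    counts = follows.get(word.lower())
    return counts.most_common(1)[0][0] if counts else None

model = train(["see you soon", "see you there", "see you soon"])
print(suggest(model, "you"))  # "soon" (seen twice vs. once)
```

A model trained on your own habits will keep offering your habits back to you, which is precisely what makes its mistakes so personal.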
All of which is fine, except that it turns the notion of the guiltless autocorrect on its head. These days, autocorrections are likely to tell the person on the receiving end something about you.
"A lot of the time, you can't even replicate it because it's so personalized," said Ben Zimmer, chairman of the new-words committee at the American Dialect Society, which is devoted to the study of the English language.
Programs that lean toward your most common usages often create the biggest embarrassments. Take that e-mail thread when you want to "loop in" another party but the "l" in loop turns into a "p." Or the note to tell a friend you are "in a cab" that translates to "in the can."
Caroline McCarthy, a digital consultant, has a colleague named Aran with whom she texts regularly. Even though his name is saved in her contacts list, to the iPhone he is "Arab Aran." And Bridget Todd, a social media editor at MSNBC, has on more than one occasion referred to her friend Whitney as "Whitey." (Todd is African-American; Whitney is not.)
But the more we fail, the better we understand those fails — or at least we hope so.
Patricia Duncan Moran's Facebook post to her great-granddaughter on her birthday ended with: "See you soon. Love, Great Grandmaster Flash." Facebook's autocomplete feature had made the assumption that "Grandma" was short for "Grandmaster" and adjusted accordingly, tagging the hip-hop pioneer in the process.
"You have to be careful, and I learned that lesson," said Moran. "My family still says when they see me, 'Here comes Flash!' "