When customers need help, they want to talk to other people. If they have to interact with machines instead — like chatbots powered by artificial intelligence — they don't want the machines to sound too human.
People expect AI bots to be competent, but do not want them to be humanlike
Marat Bakpayev, assistant marketing professor at the University of Minnesota-Duluth's Labovitz School of Business and Economics, studies how people react to language used by artificial intelligence programs.
A joke from a human customer service agent, for example, signals cooperation and helpfulness.
But an AI agent with a sense of humor can come across as uncooperative and even threatening to the person on the other end of the interaction, Bakpayev said.
That's why, as GPT-4 and other generative AI models increasingly mimic human language, Bakpayev advises brands and marketers to keep consumer-machine interactions simple and straightforward.
Bakpayev bases his recommendations on results from a large, multiyear research project on AI and language. He's working on it with Ann Kronrod, a marketing professor and linguist at the University of Massachusetts Lowell.
"Essentially I am hoping to understand how we talk with objects because now we're living in a new world," Bakpayev said.
Companies are incorporating AI agents into their businesses at an accelerating rate, so the question is not whether they will use the technology but how, Bakpayev said. In the case of customer service, AI is a highly scalable way to automate many of those processes.
But "if you're designing an interaction, if you think it's good that AI jumps in with an idiom or a metaphor — it's what people would do — no," Bakpayev said. "[Machines] aren't seen as people. … Make it more basic, more literal. That is what consumers want."
Brands need to enlist a variety of disciplines — Bakpayev mentions linguists, sociologists and anthropologists, among others — to get consumer-machine interactions right. They need to consider the context, user experience and language they use with consumers.
"When we're doing that human-to-AI communication, there should be the understanding that what goes into that should be carefully designed," Bakpayev said. "It's becoming one of those domains of how do you design the interaction? There are designers that are creating the conversations now and that's something that companies definitely should pay more attention to and look into from a linguistic, conversational viewpoint."
In one test in Bakpayev's research project, participants were more likely to book a hotel room with a human service agent than an artificial one after each had offered humorous responses to an inquiry. In another, the intention to book a room was significantly lower when the AI agent was joking than when it was serious, according to the extended abstract of a paper Bakpayev and Kronrod are writing.
"When it's too human, we don't want that and we feel threatened," Bakpayev said of interactions with machine agents.
The difference, Bakpayev said, comes down to expectations and perceptions.
Consumers expect human service agents to use figurative language — such as humor, metaphor or hyperbole. A consumer perceives a human agent who makes a joke or an irrelevant aside as a more cooperative conversational partner, Bakpayev said. That's even though the use of figurative language may break the customary rules of consumer-agent interactions.
Underpinning that perception is what psychology refers to as theory of mind: the ability to understand other people by inferring their mental state.
"It's like our minds are talking right at this point in time [of a conversation]," Bakpayev said. "We want to have that human-to-human communication."
People, however, don't expect machine agents to be able to infer their mental state, and therefore, don't see them as conversationally cooperative. Their guard goes up when AI models use humor or other figurative language.
Humans are more comfortable interacting with AI service models when the models communicate in literal terms.
Bakpayev is approaching the project as an expert in consumer research and consumption, not in AI, though one with longstanding interests in technology, robotics and science fiction. His doctoral dissertation in management, completed in France, was on the acceptance of robots by senior consumers.
The abstract of his paper with Kronrod includes references to the sci-fi film "Blade Runner" and a century-old play, "R.U.R.," short for Rossum's Universal Robots, which introduced the word "robot." It proposes further research on consumer service interactions to help marketers "avoid unwelcome outcomes of the race towards humanlike AI conversation."
Todd Nelson is a freelance writer in Lake Elmo. His e-mail is todd_nelson@mac.com.