It’s the stuff of science fiction.

The crew of the Starship Enterprise talks to the replicator in “Star Trek.” Joaquin Phoenix’s character falls for his chatty operating system, Samantha, in “Her.”

But what we’ve seen on the screen is looking more like reality. Talk to your smartphone, Google Glass, even the thermostat — and they talk back.

“It’s actually happening,” said Greg Sullivan, a director of product marketing for Windows Phone at Microsoft. “It’s only very recently that it’s really becoming real.”

The gadgets around us are going beyond understanding simple commands and taking part in conversation, albeit one that’s often stilted and programmed.

Ask “How are you, Siri?”

“Excellent!” the iPhone digital assistant will respond.

Talking technology is shifting from novel to useful, and it's likely that everything from washing machines to driverless cars will be commanded by voice rather than buttons.

Yet as science fiction becomes fact, human users continue to struggle with robo-conversations. It can be awkward talking to a machine and so much can get lost in translation.

Plus, the more a machine talks smart, the more we expect it to actually be smart.

Complex chatter

Nina Hale talks to her devices all the time. She tells her Google Glass to navigate home and often dictates texts and e-mails to her smartphone. To her, it’s just easier than typing. Still, it feels weird.

“I feel quite shy doing it in front of people,” said Hale, CEO of Minneapolis-based digital marketing agency Nina Hale Inc. “I’m always kind of turning my back.”

Talking to technology isn’t just embarrassing. Most of the programs are dogged by glitches, as well.

Anyone who’s tried dictating a text knows the perils: the microphone picks up background noise, homophones cause trouble, and tough-to-pronounce names are a lost cause. As someone who works in search engine marketing, however, Hale is invested in learning how computers process language.

Researchers have been exploring those same topics since the 1950s, said Prof. Ray Mooney, director of the Artificial Intelligence Laboratory at the University of Texas.

“Really understanding language is hard,” he said. “The human mind evolved over millions of years. It’s hard for us as engineers to reconstruct it in a matter of decades.”

Progress has been incremental, said Mooney, but most of us are finally starting to notice it because the hardware — especially those little smartphone computers in our pockets — is powerful enough to do speech recognition and more natural language processing.

Voice-activated devices usually respond to simple commands, but multiple questions or complicated grammar tends to make programs like Siri go haywire.

Mooney, who counts “2001: A Space Odyssey” among his favorite movies, said it’ll be a while before machines have human-like conversation skills.

“The minute you want to start talking about how to solve the crisis in the Middle East, don’t expect your iPhone to have an interesting response,” he said.

More or less human

Logically, people know that talking gadgets have limitations, but that doesn’t make the digital flubs any less aggravating.

That’s because speech — no matter if it’s with a person or a machine — comes with expectations.

The brain treats all voices the same, according to Clifford Nass and Scott Brave, authors of “Wired for Speech.” Even if it’s a computer talking, a human listener will assign gender and personality traits to the voice and make assumptions about trustworthiness based on what they hear.

And if the voice doesn’t deliver?

“Socially inept interfaces suffer the same fate as socially inept individuals: They are ineffective, criticized and shunned,” Nass and Brave wrote.

People tend to like human-sounding interfaces, as long as they’re judged to be competent.

Microsoft embraced that idea with Cortana, the company’s new talking digital assistant. Her answers are colloquial, even if she’s being evasive:

“Do you like the Vikings, Cortana?”

“Y’know, with questions like that, I don’t form opinions,” she’ll say.

And she’s not as amorphous as Siri. Cortana’s name references a character from the “Halo” video games.

“It’s not for nothing that she has a personality,” said Sullivan, of Microsoft. “If a machine responds in a human-like way, then the person will continue to talk to it like a person.”

The hope is that people will share more information with Cortana, making it easier for her to learn their preferences, daily schedules and anything else a personal assistant should know.

While Microsoft is making Cortana as human as possible, Honeywell decided to keep its voice-activated, Internet-connected thermostat more machine-like. While the gadget responds to conversational commands, its voice is mechanical.

“People wanted the device to be a device,” said Tony Uttley, general manager of home comfort and energy systems at Honeywell. “They didn’t want it to sound more human. They wanted to make sure it was still a thing.”

But with a greeting like “Hello, thermostat” (the phrase that activates the device), it’s tempting to talk like a human anyway, testing its boundaries.

Honeywell understood that, so it crowdsourced queries and found that people didn’t just want to ask the thermostat to turn the temperature up or down a few degrees. They said things like “I’m hot” or “It’s cold.” Others wanted to know the outdoor temperature or the time of day, so the company has been updating the gadget’s spoken repertoire.

Uttley discovered one pleasant surprise: Even when talking to a thermostat, people often say “please” and “thank you.”

So much for the digital age ruining our manners.