AI bots have joined the chat in Twin Cities neighborhood Facebook groups

Meta said the bots spark conversation, but one skeptical Lakeville resident said, “We’re not talking to people anymore.”

The Minnesota Star Tribune
August 9, 2025 at 2:08 p.m.
Twin Cities metro residents have noticed more AI bots in neighborhood Facebook groups. (Michael Dwyer/The Associated Press)

The post that appeared on Buffy Sobol Johnson’s Facebook feed gave the Lakeville resident pause.

“What was your experience with the worst restaurant in Lakeville?” someone recently asked, unleashing a torrent of negative comments into a group meant to foster community.

But the person behind the post wasn’t a person at all.

“AI assistants” are the newest members of Facebook groups that have for years served as digital town squares in Twin Cities suburbs. Hidden behind profile pictures of cartoon wizards and other fantastical figures who convey an air of expertise, these accounts are infusing community Facebook groups with repetitive, one-sided questions that some residents suspect are designed to drum up clicks.

The trend is alarming some neighbors worried about the creep of artificial intelligence into their personal lives, though Facebook parent company Meta said in a statement to the Minnesota Star Tribune that group moderators can disable AI features at any time.

“We’re always exploring AI features for Facebook Groups that can spark meaningful conversations and connections with other members,” a Meta spokeswoman said.

But the company’s intentions haven’t warmed many people to the technology, which includes bots that go by the names LakeGuide in Lakeville, RoseGuide in Rosemount and Bville Buddy in Burnsville.

Their presence, some neighbors say, is eroding the power of online spaces that have over the years become a resource for suburbanites — where gripes about traffic, callouts for contractors and lost dog fliers can spark unlikely connections despite the occasional dust-up in the comments.

“We’re not talking to people anymore,” Johnson said. “It’s not giving any value to the group.”

Robot or resident?

Not long ago, Lisa Barrett was a Minnesota transplant grateful to Facebook groups for introducing her to interesting spots in an unfamiliar community.

So the Apple Valley resident was quick to return the favor a few years later, answering the questions that began flooding south metro Facebook groups.

The inquiries were similarly formatted: a straightforward question written in large text against a colorful background. What’s your favorite bakery in Lakeville? The best park? The nicest trail?

Scrutinizing the account behind the posts confirmed Barrett’s suspicion: It was AI.

“You think you’re being helpful,” she said, before realizing the callouts weren’t coming from people and wondering, “Why am I bothering to help? I could just sit and chat with Alexa. No different than her asking me questions and me replying.”

It’s not just Barrett who struggled at first to distinguish robot from resident.

Researchers at the University of Notre Dame tested artificial intelligence’s ability to approximate human behavior when they asked people to chat with an account on social media — then decide if it was a person or AI.

The participants were wrong nearly 60% of the time.

“I don’t think we’re, as a society, ready to really understand how sophisticated the bots are, and how easy it is to personify them to have human characteristics,” said Paul Brenner, the senior author of the 2024 study and a Notre Dame professor.

Brenner said social media users’ inability to easily identify AI means there’s no simple way to tamp down on the technology.

Possible approaches include legislation that would require social media platforms to more strictly regulate bots in private groups; moderators who proactively disable artificial intelligence tools; and social media users willing to lean into skepticism.

If you’re not sure if a post is AI-generated, “don’t sit there on your own and try to figure it out,” he said. “Talk to your neighbor, family member. Don’t ask online, ‘Is this person a bot?’ We’ve forgotten how to go next door and talk to a real human being.”

‘We’re not ready for this’

Facebook contends its AI assistants help members of private groups “get instant answers and suggestions,” and some people have embraced the new technology.

Ashley Robeck said she can’t help but respond to the bot-generated posts that pepper an online group for Rosemount neighbors.

“The question is almost irresistible,” she said, adding that answering it “for sure helps boost engagement and the algorithm.”

Jeffrey A. Hall, a University of Kansas communications professor who studies new media, said social media companies are likely using bots that post questions at a rapid clip to maximize likes, comments and their corollary — revenue.

But research shows AI-generated content might repel people from online communities where they’re seeking genuine connection, Hall said, unless bots improve so rapidly that people can no longer determine what’s real or fake.

Lakeville resident Chris Bovitz is skeptical about the accuracy and intentions of AI. That’s why he has developed a habit of making cheeky comments disparaging artificial intelligence on bot-generated posts. It’s a small way the software engineer has managed to interfere with the new technology.

Still, Bovitz said the groups’ human members for now have the upper hand.

“People are still pretty decent about picking out the legitimate posts versus these AI posts,” he said. His attempts to gum up the works, he added, are “my way of being a saboteur, of saying we’re not ready for this.”

about the writer

Eva Herscowitz

Reporter

Eva Herscowitz covers Dakota and Scott counties for the Star Tribune.
