AI image of Alex Pretti’s killing is the latest altered photo amid ICE surge in Minneapolis

Look closely, slow down, think critically before believing what you see on the internet, experts caution.

The Minnesota Star Tribune
January 29, 2026 at 12:00PM
At left is a screenshot from bystander video of federal agents shooting Alex Pretti in Minneapolis on Jan. 24. At right is an AI-enhanced image of that same scene. CREDIT - Left: screengrab via video posted on Drop Site News on X Right: AI enhanced version, screengrab via @fred_guttenberg on X

A widely shared image of federal agents surrounding ICU nurse Alex Pretti as one agent holds a gun to the back of his head appears as real as it does horrific.

But a closer look at the photo reveals a headless agent. Such bodily distortion is a red flag that an image was altered with artificial intelligence. In this case, AI enhanced a low-quality screenshot of a bystander video, digital forensic experts said.

It’s the latest altered imagery from Minneapolis to make the rounds online during the federal government’s immigration enforcement surge. Other digitally manipulated images circulated after Renee Good’s killing by a federal agent. The White House also shared a fake image of activist and attorney Nekima Levy Armstrong, edited to make it appear that she was crying during her recent arrest for disrupting a church service. Video from the arrest showed there were no tears.

AI-enhanced and manipulated images are a new obstacle in the court of public opinion. Their proliferation online is eroding trust and inflaming divisions.

After the police killing of George Floyd in 2020, there wasn’t a flurry of fake images or videos on social media, though there was plenty of disagreement over what happened and who was at fault. Similar arguments are still at play in the federal agent killings of Good and Pretti, but now people are debating whether images are even real.

“I think details can get mistaken or altered in a way that is dangerous in these very volatile situations,” said digital forensics expert Hany Farid, a professor at the University of California, Berkeley. “In the fog of war and in conflict, it is just really messy, and we are simply adding noise to an already complicated and difficult situation.”

An AI image that purported to show the federal agent accused of killing Good quickly appeared online after her Jan. 7 death. The image used AI to extrapolate what the masked man’s face might look like, leading to misidentification and misinformation targeting the wrong person. The Minnesota Star Tribune identified the agent as Jonathan Ross. His face did not match the AI-generated photo.

Peter Adams, senior vice president of research and design at the News Literacy Project, a nonpartisan education nonprofit, said in a statement to the Star Tribune that the flood of AI-generated images from Minneapolis “are an example of how synthetic visuals can spread confusion and further divide Americans about important issues.”

The News Literacy Project also has addressed a fake video following Good’s killing that included AI-generated stickers on Good’s vehicle appearing to say “Antifa Super Soldier” and “I like my ICE crushed.”

“People can be susceptible to misinformation when a rumor appears to confirm a preconceived belief,” the Literacy Project said on its Rumor Guard website. “If someone thinks the woman killed by an ICE agent in Minnesota in January 2026 was a ‘deranged leftist,’ as some politicians allege, they may be more prone to believe a claim that confirms that opinion.”

The risks

Some fake imagery is so fantastical that it’s easy to spot, such as a shark swimming in a flooded street. But as technology advances, AI is getting harder to identify.

Adams said it’s difficult to recognize AI-generated content because it’s increasingly realistic, “and because we’re all vulnerable to the influence of our own biases, which can cause us to under-scrutinize things we want to believe.”

The AI image of Pretti’s killing is more nuanced than many, Farid said, because it combines something real with hallucinated elements.

In court, the edited image would never be admissible as evidence. But in the court of public opinion, an image that is based in truth but fabricated can make for difficult debates.

If someone calls out a friend for sharing the AI-generated image of Pretti by saying, “This is fake,” for example, the friend can argue that the skeptic is siding with federal agents, when in reality the skeptic is only pointing out that the image is digitally altered.

But when an AI image is inventing reality, like the one shared by the White House of Levy Armstrong, it’s “incredibly inappropriate,” Farid said. The image now carries a disclaimer on X: “This photo has been digitally altered to make Nekima Levy Armstrong appear to be in distress.”

A photo, at left, posted Jan. 22 to U.S. Secretary of Homeland Security Kristi Noem's X account shows Nekima Levy Armstrong after her arrest. Minutes later, the White House’s X account posted an altered image, at right, of Levy Armstrong. (Photos via @Sec_Noem and @WhiteHouse on X)

Farid said such decisions are ultimately “eroding trust in the White House.”

“When you intermix fake and real, I don’t trust you anymore. And if I don’t trust you, when you want me to trust you, why should I? So I think what this White House is not understanding is this is not to their advantage.”

He said state-sponsored propagandists in Russia, China, North Korea and Iran use AI images to disrupt society, elections and social cohesion. To do so, he said, they don’t have to get people to believe lies; they just have to “muddy the waters.”

Fake images are nothing new, Farid said, but AI is supercharging misinformation and “there’s no putting the genie back in the bottle.”

“This is our new reality, and it’s … getting very, very messy when you add on top of that the hyperpolarization of Donald Trump and MAGA, when you add on to that just the speed at which everything is happening.”

The solutions

If you can’t identify the source or authenticity of an image or video, don’t share it, the News Literacy Project advises.

AI-generated photos and videos are often cinematic, with camera angles that are obviously different from videos taken by a person holding a cellphone.

“As this technology continues to improve, the work of standards-based newsrooms and other verified sources becomes even more essential to the public’s understanding of events,” Adams said.

The AI-enhanced image of Pretti’s killing was never published by credible news outlets.

Farid doesn’t recommend using apps to determine if something is real because they are not reliable. Better resources are websites such as FactCheck.org, Politifact, Snopes and AP Fact Check, he said, and Google’s reverse-image search is a great way to track down more information.

Farid said that if people are only getting their news and information from social media, they are more susceptible to believing something fake is real.

Everyone is at risk of falling for AI images, he said, though research shows baby boomers are more impressionable than younger audiences who grew up with the internet.

“The real poison here is not AI, it’s social media,” he said. “AI is just supercharging it. But if people could make these fake images and fake videos and there was no delivery mechanism, I mean, honestly, who cares? The problem is not the content itself. The problem is that these social media platforms eagerly absorb it and amplify it because it’s good for business.”

He said that once people realize that some social media sites are home to manipulated images and reports, they will be more likely to look elsewhere for credible information.

At the very least, he advises people to slow down, think critically and look closely at images before spreading misinformation. He said images are made in an instant, often to provoke strong reactions and sow discord.

“Everybody’s moving at a speed, and with an emotion, by design that is not setting ourselves up for success.”

Dan Evon, who teaches verification and debunking skills at the News Literacy Project, said in a statement that big news events are “magnets for viral rumors to spread.”

“While credible news outlets are busy trying to verify facts, events happen quickly and bad actors fill information gaps with falsehoods to capitalize on attention. It’s important to be on the lookout for false claims and AI-generated content during events like these.”

about the writer


Kim Hyatt

Reporter

Kim Hyatt reports on North Central Minnesota. She previously covered Hennepin County courts.
