A widely shared image of federal agents surrounding ICU nurse Alex Pretti as one agent holds a gun to the back of his head appears as real as it does horrific.
But a closer look at the photo reveals a headless agent. Such bodily distortion is a red flag that an image was created or altered with artificial intelligence. In this case, AI was used to enhance a low-quality screenshot from a bystander video, digital forensics experts said.
It’s the latest altered image from Minneapolis to make the rounds online during the federal government’s immigration enforcement surge. Other digitally manipulated images circulated after Renee Good’s killing by a federal agent. The White House also shared a fake image of activist and attorney Nekima Levy Armstrong, edited to make it appear that she was crying during her recent arrest for disrupting a church service. Video from the arrest showed there were no tears.
AI-enhanced and manipulated images are a new obstacle in the court of public opinion. Their proliferation online is eroding trust and inflaming divisions.
After the police killing of George Floyd in 2020, there wasn’t a flurry of fake images or videos on social media, though there was plenty of disagreement over what happened and who was at fault. Similar arguments are still at play in the federal agent killings of Good and Pretti, but now people are debating whether images are even real.
“I think details can get mistaken or altered in a way that is dangerous in these very volatile situations,” said digital forensics expert Hany Farid, a professor at the University of California, Berkeley. “In the fog of war and in conflict, it is just really messy, and we are simply adding noise to an already complicated and difficult situation.”
An AI image purporting to show the federal agent accused of killing Good appeared online quickly after her Jan. 7 death. Starting from a photo of the man in a face mask, AI was used to render what his face might look like, leading to misidentification and misinformation targeting the wrong person. The Minnesota Star Tribune identified the agent as Jonathan Ross. His face did not match the original AI-generated photo.
Peter Adams, senior vice president of research and design at the News Literacy Project, a nonpartisan education nonprofit, said in a statement to the Star Tribune that the AI-generated images flooding out of Minneapolis “are an example of how synthetic visuals can spread confusion and further divide Americans about important issues.”