Thursday, 19-06-2025, Vol. 19

Is AI Slop Distorting Our Reality? Elected Officials Using Deepfakes Raise Red Flags

In recent months, social media platforms have become a breeding ground for AI-generated images—some humorous, some uncanny, and others blatantly misleading. But what happens when elected officials begin to join in, using artificial intelligence not for entertainment but to bolster political narratives? For many, that’s where the line should be drawn.

The growing prevalence of “AI slop”—a term now used to describe poorly made or deceptive AI-generated content—has sparked concern among technologists, lawmakers, and voters alike. While AI art and filters have been around for years, the sharp uptick in politically charged AI imagery has added a new layer of urgency to the national conversation about misinformation and accountability.

From doctored campaign images to synthetic videos that depict opponents in unfavorable or outright fabricated situations, the misuse of AI content is blurring the line between reality and fiction. And while some of these posts are easily dismissed as satire or parody, others are more insidious, crafted with just enough believability to sway public opinion, stoke outrage, or sow doubt.

“When people, especially elected officials or governing bodies in an official capacity, post AI-generated images to push their own narrative, it erodes trust and fuels division,” says Brian Sathianathan, Co-Founder and CTO of Iterate. “Some states in the US are working on legislation to require disclosures on AI-generated political ads or to let platforms take down deepfakes. That’s a step in the right direction. But we still don’t have any solid federal rules, and that’s leaving a big gap.”

That gap, critics argue, is creating fertile ground for manipulation. While deepfakes and synthetic media have long been the concern of cybersecurity experts and media watchdogs, their use by public figures has triggered a wave of scrutiny that now extends to Capitol Hill. In several high-profile cases, lawmakers and local officials have either shared or created AI-generated images that mimic real events or portray fictional scenarios to back their political messaging.

For example, earlier this year, a mayoral candidate in a major U.S. city posted an image of a crime scene purporting to illustrate rising violence under the incumbent’s leadership, only for the image to be revealed as AI-generated and depicting no real event. Although the candidate later issued a clarification, the original post had already gone viral.

The implications for democracy are real. Political campaigns have always leaned on compelling imagery, but when those images are fabricated with the ease of a few prompts and clicks, the potential for damage escalates. This is especially troubling in a media environment where many users engage with content passively, often without taking the time to verify what they’re seeing.

“We need clear limits on how AI can be used in political speech—especially by elected officials,” Sathianathan adds. “If someone is speaking in an official capacity, the public deserves to know whether what they’re seeing or hearing is real. Otherwise, AI just becomes another tool for misinformation.”

Some states are attempting to take the lead. California, Texas, and New York have introduced bills aimed at regulating deepfakes in political advertising, while others are considering rules that would require AI-generated content to be clearly labeled. However, without unified federal guidelines, enforcement remains inconsistent and the rules easily circumvented.

Social media platforms are also under pressure to act. Several have introduced policies to label or remove deceptive AI content, but critics argue these steps are reactive rather than preventative. Moreover, the platforms themselves are not immune to bias or political pressure, further complicating their role as arbiters of truth.

As generative AI continues to improve and becomes more accessible, the need for clear rules around its use in political communication becomes increasingly urgent. Without transparency and accountability, the public risks becoming disoriented in a sea of synthetic content—where every image can be questioned, and every truth diluted.

In the end, this isn’t just a technology issue—it’s a matter of public trust. As elected officials turn to AI tools to shape political messaging, the risk isn’t only distortion, but the erosion of confidence in what voters see and hear.

Headlines Team