Disinformation – AI-Enhanced Images From West Asia Conflict Raise Concerns Over Misleading Narratives

The ongoing conflict in West Asia has triggered a surge of misleading online content, much of it powered by artificial intelligence. While entirely fabricated visuals have been widely discussed, experts say a more subtle form of manipulation is now spreading across social media: real photographs that have been altered or enhanced using AI tools. These modified images often look sharper and more dramatic than the originals, but specialists warn that such adjustments can quietly change how events are perceived by the public.

Subtle Alterations Found in Widely Shared Conflict Image

One widely circulated photograph shows a US pilot kneeling on the ground after ejecting from his aircraft, confronted by a Kuwaiti civilian shortly after landing. The image appeared clear and highly detailed and was shared extensively across social media platforms, with some news outlets also publishing it.

However, closer inspection revealed an anomaly: the pilot appeared to have only four fingers on each hand, raising questions about whether the image had been digitally modified.

Investigators from AFP examined the photograph using artificial intelligence detection tools and discovered an invisible marker known as SynthID. This watermark is designed to identify images that have been processed or generated using Google’s AI technology.

Evidence Suggests Real Event With Altered Visual Quality

Despite the AI marker, the incident itself appears to be genuine. A video showing the same moment began circulating online on March 2. Analysts were able to verify the location using satellite imagery, and the event matched reports from that day stating that Kuwait had mistakenly shot down three US military aircraft.

Further investigation uncovered an earlier version of the same photograph shared on Telegram. The earlier image depicted the same scene but was significantly blurrier and lacked the sharp detail visible in the widely circulated version.

Detection tools found no signs of AI processing in the low-quality image, indicating it was authentic. Experts believe this original photo was then processed with AI tools to enhance clarity and detail, producing the more striking version that later spread online.

Experts Warn Enhancement Can Change Perception

Artificial intelligence can improve resolution and visual quality, but specialists warn that these enhancements may unintentionally change key elements of an image.

Evangelos Kanoulas, a professor of artificial intelligence at the University of Amsterdam, explained that AI enhancement can modify textures, lighting, facial details, and background elements. These adjustments may make an image appear more realistic than the original even as they introduce subtle alterations.

According to Kanoulas, such changes can influence how viewers interpret events. For instance, AI processing might make a crowd appear larger, intensify facial expressions, or highlight dramatic details that were less visible in the original image.

Example From Iraq Shows Similar Pattern

A similar situation emerged after Iranian strikes targeted areas near Erbil airport in Iraq on March 1. Social media users shared an image showing what appeared to be an enormous blaze rising near the airport.

Detection tools again found signs that Google AI technology had been used in the image. However, investigators later located the original version, which showed the same location but with a much smaller fire and a less dramatic smoke plume.

The enhanced image amplified the flames and color intensity, creating a more striking visual impression than the real scene.

Blurred Line Between Enhancement and Fabrication

Specialists say the difference between improving an image and creating misleading content can sometimes be difficult to define.

James O’Brien, a computer science professor at the University of California, Berkeley, noted that even minor changes to an image can significantly affect how viewers understand an event. Adjustments to lighting, scale, or expression may unintentionally alter the narrative conveyed by the picture.

Another challenge is that generative AI systems can occasionally introduce details that were never present in the original material.

Previous Viral Case Shows AI Misinterpretation

Such issues were also seen earlier this year following the shooting of Alex Pretti by federal immigration agents in Minneapolis in January.

An AI-enhanced version of a frame taken from a real video spread widely online. The image showed Pretti collapsing to his knees while officers stood nearby, with one pointing a firearm.

In the original video frame, the object in Pretti’s hand was a mobile phone. But after AI processing sharpened the image, some viewers mistakenly believed he was holding a weapon.

Growing Concerns Over Public Trust in Images

As the conflict linked to US-Israeli strikes on Iran continues, analysts warn that the spread of AI-enhanced images without clear labeling could deepen public confusion.

O’Brien said such visuals are already affecting how people interpret information during major global events. When manipulated or enhanced images circulate widely, it becomes harder for audiences to distinguish authentic documentation from altered material.

Kanoulas added that the broader consequence may be a decline in trust. When viewers realize that some images have been modified, they may begin to question the authenticity of genuine photographs as well.
