Tiny Tweaks That Blind AI

RisingAttacK is a targeted exploit that can fool top AI vision systems by making tiny tweaks to images that are imperceptible to humans.

Nick Bild
Machine Learning & AI

In the span of a decade, computer vision algorithms have gone from little more than an academic curiosity to something many of us trust with our lives on a daily basis. Self-driving vehicles, for instance, rely heavily on these algorithms to stay in their lanes, avoid other vehicles and pedestrians, and obey the rules of the road. Before putting our safety in the hands of any technology, we need a very high level of confidence in it. So do computer vision systems deserve the trust we have placed in them?

You might not be so sure after hearing about recent research conducted at North Carolina State University. A group of researchers there has demonstrated just how easy it is to trick many computer vision algorithms. With a novel, targeted approach, they showed that a few small tweaks to an image can make the objects in it effectively invisible to the system. And because those tweaks are so small and precisely targeted, they can be imperceptible to a human, making the attack very difficult to detect.

Called RisingAttacK, the exploit belongs to a class known as adversarial attacks, in which malicious actors subtly manipulate the data that an AI model receives. What sets RisingAttacK apart from past approaches is its precision. Rather than blindly altering pixels, it homes in on the exact visual features the AI deems most important and makes the smallest possible changes needed to fool it.

The process begins by identifying the salient features in a benign image and ranking them by how much they matter to the AI’s current task. Using Sequential Quadratic Programming, RisingAttacK then calculates how sensitive the model is to each feature and crafts an optimized perturbation (a microscopic nudge in pixel values) that derails the model’s interpretation while leaving the image essentially unchanged to the human eye.
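To make the general idea concrete, here is a minimal, illustrative sketch of a sensitivity-ranked perturbation, not the authors' implementation. RisingAttacK solves its optimization with Sequential Quadratic Programming; the snippet below substitutes a simple signed-gradient step for brevity, and the function name and parameters (`sensitivity_ranked_attack`, `budget`, `top_frac`) are invented for this example.

```python
# Illustrative sketch only (NOT the RisingAttacK implementation): rank pixels by
# gradient sensitivity, then nudge only the most influential ones within a tiny budget.
import torch
import torchvision.models as models

model = models.resnet50(weights=models.ResNet50_Weights.DEFAULT).eval()

def sensitivity_ranked_attack(image, true_label, budget=2.0 / 255, top_frac=0.02):
    """image: (1, 3, H, W) tensor in [0, 1]; true_label: int class index."""
    x = image.clone().requires_grad_(True)
    loss = torch.nn.functional.cross_entropy(model(x), torch.tensor([true_label]))
    loss.backward()
    grad = x.grad.detach()

    # Rank pixel sensitivities and keep only the top fraction (the "key features").
    flat = grad.abs().flatten()
    k = max(1, int(top_frac * flat.numel()))
    threshold = flat.topk(k).values.min()
    mask = (grad.abs() >= threshold).float()

    # Microscopic nudge restricted to the most sensitive pixels, clipped to [0, 1].
    x_adv = (image + budget * grad.sign() * mask).clamp(0, 1)
    return x_adv.detach()
```

Even this crude version captures the core intuition: by concentrating the change on the handful of pixels the model cares about most, the perturbation can stay far below what a human would ever notice.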

When pitted against four of the most widely deployed vision backbones (ResNet‑50, DenseNet‑121, ViT‑B, and DeiT‑B), the attack achieved near‑perfect success. It was able not only to knock out the model’s top choice, but also to reorder the entire ranked list of up to 30 categories. This holistic manipulation matters because many applications, from medical triage systems to search engines, rely on more than just the single best guess.
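Continuing the hypothetical sketch above, one simple way to see that the whole ranking has shifted, rather than just the top label, is to compare the top-30 orderings before and after the perturbation (`image` and `x_adv` are the tensors from the earlier snippet; the overlap metric is just one illustrative measure).

```python
# Compare the model's top-30 class ordering on the clean and perturbed images.
def topk_ranking(logits, k=30):
    return logits.topk(k).indices.squeeze(0).tolist()

with torch.no_grad():
    clean_rank = topk_ranking(model(image))   # ranking on the unmodified image
    adv_rank = topk_ranking(model(x_adv))     # ranking after the perturbation

top1_flipped = clean_rank[0] != adv_rank[0]
shared = len(set(clean_rank) & set(adv_rank)) / len(clean_rank)
print(f"top-1 changed: {top1_flipped}, top-30 overlap: {shared:.0%}")
```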

While RisingAttacK may be concerning, the team developed it with the ultimate goal of giving us more confidence in computer vision. The more vulnerabilities researchers can uncover, the better AI algorithms can be hardened against similar attacks, heading off potential disasters in the future.

The team is now investigating whether the same strategy can crack open large language models and multimodal systems, and, perhaps more importantly, how to defend against such attacks. Until then, this research serves as an important reminder that the smarter our machines become, the smarter their adversaries will be, so we must continuously scrutinize these systems to keep them safe.

Nick Bild
R&D, creativity, and building the next big thing you never knew you wanted are my specialties.