New AI Attack Technique Manipulates What Vision Systems See

Researchers have developed a new method called RisingAttacK, which can manipulate AI computer vision systems to control what they 'see' in images.

Jul 01, 2025 | Source: Visive.ai

Researchers have unveiled a novel method, RisingAttacK, that can manipulate artificial intelligence (AI) computer vision systems, allowing attackers to control what the AI 'sees.' The work highlights vulnerabilities in AI vision systems, which are critical in applications ranging from autonomous vehicles to medical diagnostics.

Adversarial attacks, where data is manipulated to influence AI decisions, pose significant risks. For instance, a hacker could alter an AI's ability to detect traffic signals, pedestrians, or other vehicles, leading to potential accidents. In medical settings, a hacker could tamper with X-ray machines, causing AI systems to provide incorrect diagnoses.

Tianfu Wu, an associate professor of electrical and computer engineering at North Carolina State University and co-corresponding author of the study, emphasized the importance of identifying these vulnerabilities. 'These AI vision systems are often used in contexts that can affect human health and safety. Identifying vulnerabilities is an important step in making these systems secure,' Wu stated.

RisingAttacK works by identifying and manipulating visual features in images. The process involves several steps. First, it identifies all visual features in an image. Then, it determines which features are most critical to achieving the attack's goal. For example, if the goal is to prevent the AI from recognizing a car, RisingAttacK identifies the key features that help the AI recognize a car.

Next, RisingAttacK calculates the AI system's sensitivity to changes in data, particularly in the key features. This allows the technique to make minimal, targeted changes that are imperceptible to the human eye but effective in manipulating the AI's perception. 'The end result is that two images may look identical to human eyes, but the AI would see a car in one image and not in the other,' Wu explained.
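
The underlying idea can be sketched in code. The snippet below is not the authors' released implementation; it is a minimal illustration, assuming a PyTorch image classifier, of the approach named in the paper's title: iteratively nudging an image along the leading right singular vectors of the Jacobian of the model's outputs with respect to its input. All function names, hyperparameters, and the example class index are hypothetical.

```python
# Minimal illustrative sketch, NOT the authors' released RisingAttacK code.
# Idea: confine each perturbation step to the subspace spanned by the top-k
# right singular vectors of the model's output-input Jacobian, i.e. the
# input directions the network is most sensitive to.
import torch
import torchvision.models as models

def singular_vector_attack(model, image, target_class, step=1e-2, iters=10, k=5):
    """Suppress the logit for `target_class` with small, well-aimed pixel edits."""
    x = image.clone()
    for _ in range(iters):
        # Jacobian of the logits w.r.t. the input pixels, flattened to
        # shape (num_classes, num_pixels). Computed naively here for clarity;
        # this is expensive for large inputs.
        jac = torch.autograd.functional.jacobian(
            lambda inp: model(inp.unsqueeze(0)).squeeze(0), x
        ).flatten(start_dim=1)
        # Top-k right singular vectors (columns of V): the most
        # attack-relevant input directions.
        _, _, V = torch.svd_lowrank(jac, q=k)  # V: (num_pixels, k)
        # Project the target class's gradient onto that subspace and step
        # against it, so the change stays tiny but well-aimed.
        grad = jac[target_class]
        delta = (V @ (V.T @ grad)).reshape(x.shape)
        x = (x - step * delta / delta.norm()).clamp(0.0, 1.0).detach()
    return x

# Example: try to make a pretrained ResNet-50 stop seeing ImageNet class 817
# ("sports car"). A random tensor stands in for a real photo; normalization
# and preprocessing are omitted for brevity.
model = models.resnet50(weights=models.ResNet50_Weights.DEFAULT).eval()
image = torch.rand(3, 224, 224)
adversarial = singular_vector_attack(model, image, target_class=817)
```

Because each step is confined to the handful of directions the model is most sensitive to, the total pixel change can remain small enough to be imperceptible, consistent with Wu's description of two images that look identical to human eyes.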

The researchers tested RisingAttacK on four of the most widely used AI vision programs: ResNet-50, DenseNet-121, ViT-B, and DeiT-B. The technique successfully manipulated all four systems. 'While we demonstrated RisingAttacK's ability to manipulate vision models, we are now exploring its effectiveness against other AI systems, such as large language models,' Wu added.
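
All four architectures are publicly available. As a point of reference, pretrained versions can be loaded as sketched below; the specific package and checkpoint names are assumptions about the torchvision and timm libraries, not details taken from the study.

```python
# Hypothetical setup: pretrained versions of the four evaluated architectures,
# loaded via torchvision and timm. Checkpoint names are assumptions about
# those libraries, not details from the study.
import torchvision.models as tvm
import timm

victims = {
    "ResNet-50": tvm.resnet50(weights=tvm.ResNet50_Weights.DEFAULT),
    "DenseNet-121": tvm.densenet121(weights=tvm.DenseNet121_Weights.DEFAULT),
    "ViT-B": timm.create_model("vit_base_patch16_224", pretrained=True),
    "DeiT-B": timm.create_model("deit_base_patch16_224", pretrained=True),
}
for name, net in victims.items():
    net.eval()  # attacks probe fixed, deployed weights, so use eval mode
```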

The study, titled 'Adversarial Perturbations Are Formed by Iteratively Learning Linear Combinations of the Right Singular Vectors of the Adversarial Jacobian,' will be presented at the International Conference on Machine Learning (ICML 2025) in Vancouver, Canada. Moving forward, the researchers aim to develop defensive techniques to mitigate these attacks.

Understanding and addressing these vulnerabilities is crucial for ensuring the security and reliability of AI systems in critical applications. As AI continues to integrate into various industries, robust security measures will be essential to protect against potential threats.

Frequently Asked Questions

What is RisingAttacK?

RisingAttacK is a new method that can manipulate AI computer vision systems to control what they 'see' in images, making minimal, targeted changes that are imperceptible to the human eye.

How does RisingAttacK work?

RisingAttacK identifies key visual features in an image, calculates the AI system's sensitivity to changes in these features, and makes minimal, targeted alterations to manipulate the AI's perception.

What are the potential risks of adversarial attacks on AI vision systems?

Adversarial attacks can cause AI systems to misidentify objects, leading to potential accidents in autonomous vehicles or incorrect diagnoses in medical settings.

Which AI vision systems were tested with RisingAttacK?

RisingAttacK was tested on four of the most widely used AI vision programs: ResNet-50, DenseNet-121, ViT-B, and DeiT-B, and it was effective against all of them.

What is the next step for researchers after demonstrating RisingAttacK?

The researchers are now exploring the effectiveness of RisingAttacK against other AI systems, such as large language models, and aim to develop defensive techniques to mitigate these attacks.
