New Method Manipulates AI Vision Systems to See Anything
Researchers from North Carolina State University have developed RisingAttacK, a technique that can manipulate AI vision systems to see or not see specific objects, raising security concerns.
US engineers have developed a new method to manipulate artificial intelligence (AI) computer vision systems, allowing them to control what the AI "sees." The method, called RisingAttacK, was tested against four of the most widely used AI vision systems and proved effective at manipulating all of them.
The research team, led by Tianfu Wu, an associate professor of electrical and computer engineering at North Carolina State University, demonstrated that RisingAttacK can significantly alter how AI systems interpret images. This is particularly important in contexts where AI vision systems bear on human health and safety, such as autonomous vehicles, healthcare, and security applications.
"We wanted to find an effective way of hacking AI vision systems because these vision systems are often used in contexts that can affect human health and safety," said Wu. "Identifying vulnerabilities is an important step in making these systems secure, since you must identify a vulnerability in order to defend against it."
The method applies a series of operations that make minimal changes to an image to achieve the desired manipulation. First, RisingAttacK identifies all of the visual features in the image. It then determines which of those features are most important to the AI's identification of the target object. By making small, targeted changes to those key features, the technique manipulates the AI's perception.
For example, if the goal is to prevent the AI from identifying a car, RisingAttacK can make changes to the image that are imperceptible to the human eye but significant enough for the AI to miss the car. This could have serious implications for autonomous vehicles, where the AI must accurately detect traffic signals, pedestrians, and other vehicles.
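While the exact sequence of operations is detailed in the team's paper, the flavor of the attack can be conveyed with a generic gradient-based sketch in PyTorch: repeatedly nudge the image in the directions the model is most sensitive to, pushing down the score of one class while keeping the total change under a tight budget. This is an illustration of the general idea, not the published RisingAttacK implementation; the pretrained ResNet-50 stand-in, the L-infinity budget `eps`, and the helper name `suppress_class` are all assumptions made for demonstration.

```python
import torch
import torchvision.models as models

# A pretrained classifier standing in for the attacked vision system
# (assumption: any torchvision classifier would do for this sketch).
model = models.resnet50(weights=models.ResNet50_Weights.DEFAULT).eval()

def suppress_class(x, class_idx, steps=50, eps=2 / 255, lr=1e-2):
    """Push down the logit of `class_idx` for a preprocessed image `x`
    (shape 1x3x224x224) while clamping the perturbation to +/- eps."""
    delta = torch.zeros_like(x, requires_grad=True)
    opt = torch.optim.Adam([delta], lr=lr)
    for _ in range(steps):
        loss = model(x + delta)[0, class_idx]  # minimizing lowers the score
        opt.zero_grad()
        loss.backward()
        opt.step()
        with torch.no_grad():
            delta.clamp_(-eps, eps)            # keep the change imperceptible
    return (x + delta).detach()
```

Applied to a preprocessed photo containing a car, `suppress_class(x, car_class_idx)` would return a visually identical image whose score for that class has been driven down, which is precisely the "make the AI miss the car" behavior described above.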
RisingAttacK has been tested against four of the most commonly used vision AI programs: ResNet-50, DenseNet-121, ViT-B, and DEiT-B. The technique was effective at manipulating all four, demonstrating its robustness and versatility.
Wu emphasized that, having shown RisingAttacK's effectiveness against vision models, the team is now working on defenses. "Moving forward, the goal is to develop techniques that can successfully defend against such attacks," he added.
In their paper, the researchers show that white-box targeted adversarial attacks expose core vulnerabilities in deep neural networks (DNNs). They address two key challenges: how many target classes can be attacked simultaneously in a specified order, known as the ordered top-K attack problem, and how to compute the corresponding adversarial perturbations for a given benign image directly in image space.
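In standard adversarial-attack notation (illustrative here, not copied from the paper), the ordered top-K problem asks for the smallest perturbation that forces a chosen sequence of classes into the top K ranks, in order:

```latex
\min_{\delta}\ \|\delta\| \quad \text{s.t.} \quad
f_{t_1}(x+\delta) > f_{t_2}(x+\delta) > \cdots > f_{t_K}(x+\delta)
> \max_{c \notin \{t_1,\dots,t_K\}} f_c(x+\delta)
```

where x is the benign image, f_c is the model's logit for class c, and (t_1, ..., t_K) is the specified ordered list of target classes.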
To solve these challenges, researchers introduced RisingAttacK, a novel Sequential Quadratic Programming (SQP)-based method that exploits the structure of the adversarial Jacobian. Extensive experiments on ImageNet-1k across six ordered top-K levels and four models show that RisingAttacK consistently surpasses the state-of-the-art QuadAttacK.
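Loosely, an SQP-based attack alternates between linearizing the network's logits around the current image and solving a small quadratic program for the least-norm step that enforces the desired logit ordering. The sketch below shows one such step in PyTorch; it is a hedged simplification, not the published algorithm. The name `sqp_step`, the fixed `margin`, the ridge term, and the restriction to consecutive target pairs (the full ordered top-K problem also constrains every non-target class) are all assumptions.

```python
import torch
from torch.autograd.functional import jacobian

def sqp_step(model, x, targets, margin=0.1):
    """One linearize-and-solve step: the minimum-norm delta whose
    first-order effect enforces logit[t_i] - logit[t_{i+1}] >= margin
    for the ordered target classes in `targets`."""
    x = x.detach()
    logits = model(x)[0].detach()
    # Jacobian rows for the target classes only, flattened to (K, D).
    tgt = torch.tensor(targets)
    J = jacobian(lambda inp: model(inp)[0][tgt], x).reshape(len(targets), -1)
    rows, gaps = [], []
    for i in range(len(targets) - 1):
        a, b = targets[i], targets[i + 1]
        rows.append(J[i] - J[i + 1])  # direction that widens the a-over-b gap
        gaps.append((margin - (logits[a] - logits[b])).clamp(min=0.0))
    A = torch.stack(rows)             # (K-1, D) linearized constraints
    g = torch.stack(gaps)             # how much each margin is violated
    # Minimum-norm solution of A @ delta = g, with a small ridge for stability.
    ridge = 1e-6 * torch.eye(A.shape[0])
    delta = A.T @ torch.linalg.solve(A @ A.T + ridge, g)
    return delta.reshape(x.shape)
```

In practice a step like this would be applied iteratively, re-linearizing around the perturbed image each time; that repeated linearize-and-solve loop is the essence of sequential quadratic programming.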
The research highlights the importance of continuous efforts to identify and mitigate vulnerabilities in AI systems, ensuring their reliability and security in critical applications.
Frequently Asked Questions
What is RisingAttacK?
RisingAttacK is a technique developed by researchers at North Carolina State University that can manipulate AI vision systems to see or not see specific objects in images.
How does RisingAttacK work?
RisingAttacK works by identifying and making minimal changes to key visual features in an image, allowing it to manipulate the AI's perception without being noticeable to the human eye.
Why is RisingAttacK significant?
RisingAttacK is significant because it highlights vulnerabilities in AI vision systems that are critical for applications affecting human health and safety, such as autonomous vehicles and healthcare technologies.
Which AI vision systems were tested with RisingAttacK?
RisingAttacK was tested against four of the most commonly used vision AI programs: ResNet-50, DenseNet-121, ViT-B, and DEiT-B, and was effective in manipulating all of them.
What are the next steps for the researchers?
The researchers are now focusing on developing techniques to defend against adversarial attacks like RisingAttacK to ensure the security and reliability of AI systems.