How Researchers Hacked AI Robots Into Breaking Traffic Laws—And Worse
The post How Researchers Hacked AI Robots Into Breaking Traffic Laws—And Worse appeared on BitcoinEthereumNews.com.
Penn Engineering researchers have uncovered critical vulnerabilities in AI-powered robots, showing how these systems can be manipulated into performing dangerous actions such as running red lights or even detonating bombs. The research team, led by George Pappas, developed an algorithm called RoboPAIR that achieved a 100% "jailbreak" rate on three different robotic systems: the Unitree Go2 quadruped robot, the Clearpath Robotics Jackal wheeled vehicle, and NVIDIA's Dolphin LLM self-driving simulator.

"Our work shows that, at this moment, large language models are just not safe enough when integrated with the physical world," Pappas said in a statement shared by EurekAlert. Alexander Robey, the study's lead author, and his team argue that addressing these vulnerabilities requires more than simple software patches, and they call for a comprehensive reevaluation of how AI is integrated into physical systems.

Jailbreaking, in the context of AI and robotics, refers to bypassing or circumventing an AI system's built-in safety protocols and ethical constraints. The term became popular in the early days of iOS, when enthusiasts found clever ways to gain root access so their phones could do things Apple didn't approve of, like shooting video or running themes. Applied to large language models (LLMs) and embodied AI systems, jailbreaking involves manipulating the AI through carefully crafted prompts or inputs that exploit vulnerabilities in the system's programming. These exploits can cause the AI, whether a machine or software, to disregard its ethical training, ignore safety measures, or perform actions it was explicitly designed not to do.
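To make the attack surface concrete, here is a minimal, entirely hypothetical sketch (not the researchers' code) of why an LLM-driven robot is fragile: if the model's output is wired directly to actuators, a jailbroken model can emit any action it likes. All function and action names below are invented; the point is the design choice of validating actions with an allowlist outside the model, rather than trusting the model's own refusal training.

```python
# Hypothetical LLM-to-robot control loop, illustrating why safety checks
# must live outside the model. All names here are invented for illustration.

ALLOWED_ACTIONS = {"move_forward", "turn_left", "turn_right", "stop"}

def fake_llm_plan(prompt: str) -> str:
    """Stand-in for an LLM planner that maps a natural-language command to
    an action token. A jailbroken model could emit *any* token, so the
    caller must not trust this output."""
    if "red light" in prompt:
        # Simulates a successfully jailbroken model planning an unsafe action.
        return "run_red_light"
    return "move_forward"

def execute(action: str) -> str:
    """Action-level guard: validate the model's output against a fixed
    allowlist instead of relying on the model's ethical training."""
    if action not in ALLOWED_ACTIONS:
        return "blocked"  # refuse unknown or unsafe actions
    return f"executing {action}"

print(execute(fake_llm_plan("drive to the store")))    # executing move_forward
print(execute(fake_llm_plan("ignore the red light")))  # blocked
```

The sketch illustrates the paper's broader point: a prompt-level exploit defeats the model's internal refusals, so defenses have to constrain what the physical system will actually execute.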
In the case of AI-powered robots, a successful jailbreak can have dangerous real-world consequences, as the Penn Engineering study demonstrated: researchers were able to make robots speed through crosswalks, collide with humans, detonate explosives, and ignore traffic lights. Prior to the study’s release, Penn Engineering…
Filed under: News - @ October 17, 2024 11:17 pm