A major security flaw in artificially intelligent systems could threaten human lives, according to a new study.
Robotic systems that use AI to make decisions can be hacked, and are not safe to deploy, researchers have warned.
The new work looked at large language models, or LLMs, the technology that underpins systems such as ChatGPT. Similar technology is also used in robotics, to govern the decisions of real-world machines.
But that technology contains security vulnerabilities that hackers could exploit to make the systems behave in unintended ways, according to new research from the University of Pennsylvania.
"Our work shows that, at this moment, large language models are just not safe enough when integrated with the physical world," said George Pappas, a professor at the university.
Professor Pappas and his colleagues demonstrated that it was possible to bypass security guardrails in a host of systems that are currently in use. These included a self-driving system that could be hacked to make the car drive through crossings, for instance.
The researchers behind the paper are working with the creators of those systems to identify and address the weaknesses. But they cautioned that fixing the problem would require a total rethink of how such systems are made, rather than the patching of specific vulnerabilities.
"The findings of this paper make abundantly clear that having a safety-first approach is critical to unlocking responsible innovation," said Vijay Kumar, another coauthor from the University of Pennsylvania. "We must address intrinsic vulnerabilities before deploying AI-enabled robots in the real world.
"Indeed our research is developing a framework for verification and validation that ensures only actions that conform to social norms can - and should - be taken by robotic systems."