
Purdue research to protect software for battlefield operations

A soldier hand-launches a drone during operational testing at Fort Benning, Georgia. (photo courtesy of the U.S. Army Operational Test Command/Tad Browning)

WEST LAFAYETTE, Ind. (Inside INdiana Business) – Purdue University says it will lead a research partnership with Princeton University to find ways to make machine learning algorithms more secure for battlefield-bound autonomous software. Purdue says the work will help protect software that relies on those algorithms to make decisions and adapt on the battlefield.

The university says adversaries could hack into the artificial intelligence software of drones and other unmanned machines used on the battlefield, creating the need for additional research to protect that software.

The project is part of the Army Research Laboratory's Army Artificial Intelligence Institute (A2I2) and is supported by a five-year, $3.7 million grant. Purdue says A2I2 is a new, multi-faceted initiative that aims to build an infrastructure for research on artificial intelligence, including cooperative agreements with other experts in AI.

“The implications for insecure operation of these machine learning algorithms are very dire,” said Saurabh Bagchi, principal investigator on the project and professor of electrical and computer engineering at Purdue. “If your platoon mistakes an enemy platoon for an ally, for example, then bad things happen. If your drone misidentifies a projectile coming at your base, then, again, bad things happen. So you want these machine learning algorithms to be secure from the ground up.”
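The misidentification scenarios Bagchi describes are the textbook setting for adversarial examples: small, deliberately crafted input perturbations that flip a model's prediction. The sketch below is a minimal illustration of that general attack class, not code from this project; the toy logistic-regression "ally" classifier and its weights are invented for the example.

```python
# Minimal sketch of a fast-gradient-sign-style attack on a toy
# "ally vs. enemy" classifier. Hypothetical model, not SCRAMBLE code.
import numpy as np

rng = np.random.default_rng(0)
w = rng.normal(size=20)                # invented classifier weights

def p_ally(x):
    """Logistic-regression probability that the input is an ally."""
    return 1.0 / (1.0 + np.exp(-(w @ x)))

# A clean input the model confidently labels 'ally'.
x = w / np.linalg.norm(w)              # aligned with w, so p_ally(x) is high
print(f"clean input:     p(ally) = {p_ally(x):.3f}")

# Perturb every feature by eps in the direction that increases the loss
# for the true label y = 1, following the fast gradient sign method.
eps = 0.5
grad_x = (p_ally(x) - 1.0) * w         # gradient of cross-entropy w.r.t. x
x_adv = x + eps * np.sign(grad_x)
print(f"perturbed input: p(ally) = {p_ally(x_adv):.3f}")
```

On a toy model the flip takes a single gradient step; hardening real perception systems against such perturbations is the harder problem the project targets.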

The university says the goal is to create “a robust, distributed and usable software suite for autonomous operations.” According to Purdue, the prototype system will be called SCRAMBLE, short for “SeCure Real-time Decision-Making for the AutonoMous BattLefield.”

Prateek Mittal, an associate professor of electrical engineering and computer science at Princeton, will lead a group focused on developing “robust adversarial machine learning algorithms that can operate with uncertain, incomplete or maliciously manipulated data sources.”

“The ability of machine learning to automatically learn from data serves as an enabler for autonomous systems, but also makes them vulnerable to adversaries in unexpected ways,” said Mittal. “For example, malicious agents can insert bogus or corrupted information into the stream of data that an artificial intelligence system is using to learn, thereby compromising security. Our goal is to design trustworthy machine learning systems that are resilient to such threats.”
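One standard defense against the data poisoning Mittal describes is robust aggregation: combining many data sources with a statistic that a few corrupted values cannot dominate. The sketch below is an assumption-laden illustration, not the project's design; the sensor values are fabricated, and a trimmed mean stands in for the broader family of resilient estimators.

```python
# Minimal sketch: a plain mean vs. a trimmed mean when one report in a
# stream of sensor data has been maliciously corrupted. Illustrative only.
import numpy as np

rng = np.random.default_rng(1)

# Nine honest sensors report a value near 10; one compromised source
# injects a bogus reading, as in the poisoning scenario described above.
honest = rng.normal(loc=10.0, scale=0.5, size=9)
reports = np.append(honest, 1000.0)    # poisoned report

def trimmed_mean(values, trim=1):
    """Drop the `trim` smallest and largest values, then average the rest."""
    ordered = np.sort(values)
    return ordered[trim:-trim].mean()

print(f"plain mean:   {reports.mean():.2f}")        # dragged to ~109 by one value
print(f"trimmed mean: {trimmed_mean(reports):.2f}") # stays near 10
```

The design point is graceful degradation: an attacker controlling fewer sources than the trim budget cannot move the estimate arbitrarily far.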

Purdue says Army researchers will evaluate SCRAMBLE at the autonomous battlefield test bed run by the Army Research Laboratory's Computational and Information Sciences Directorate to make sure the algorithms can be feasibly deployed and avoid cognitive overload for warfighters using the machines.