Chipmaker Intel has been chosen to lead a new program run by the U.S. Defense Advanced Research Projects Agency (DARPA) aimed at protecting machine learning models from spoofing attacks and improving cyber defenses, media reported. Machine learning is a form of artificial intelligence that allows systems to improve over time as they take in new data and experience.
Today, one of its most common use cases is object recognition. Spoofing attacks are still rare, but they can interfere with machine learning algorithms; in the case of self-driving cars, a small change to a real-world object can have disastrous consequences.
Just a few weeks ago, McAfee researchers fooled a Tesla by adding a two-inch strip of tape to a speed limit sign, causing the car to accelerate to 50 mph above the posted limit. The study is one of the earliest examples of a device being manipulated through its machine learning algorithms.
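The tape trick is an instance of an adversarial perturbation: a tiny, carefully chosen change to the input that flips a model's output. The sketch below illustrates the idea on a toy linear classifier; the labels, weights, and the FGSM-style perturbation step are purely illustrative assumptions, not the McAfee attack or any GARD technique.

```python
import numpy as np

rng = np.random.default_rng(0)
w = rng.normal(size=16)   # toy classifier weights (hypothetical)
x = rng.normal(size=16)   # a "clean" input the model classifies

def predict(v):
    # 1 and 0 stand in for two classes, e.g. two different speed-limit readings
    return 1 if w @ v > 0 else 0

clean_label = predict(x)

# FGSM-style step: move against the decision boundary in the sign of the
# gradient (for a linear model, the gradient is just w). eps is chosen to be
# just large enough to cross the boundary, so the change stays small.
eps = 2.0 * abs(w @ x) / np.sum(np.abs(w)) + 1e-6
direction = -np.sign(w) if clean_label == 1 else np.sign(w)
x_adv = x + eps * direction

assert predict(x_adv) != clean_label  # a small nudge flips the prediction
```

The same principle scales to deep networks, where the perturbation can be concentrated in a physically realizable patch, which is why a strip of tape can suffice.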
That is the problem DARPA wants to tackle. The research agency said earlier this year that it was developing a program called GARD (Guaranteeing AI Robustness against Deception) to ensure that artificial intelligence is robust against deception. Existing mitigations for machine learning attacks tend to be rule-based and pre-defined, but DARPA hopes GARD will grow into a system offering broader defense against many different types of attacks.
Jason Martin, chief engineer at Intel Labs and head of the Intel GARD team, said the chipmaker will work with Georgia Tech to strengthen object detection and to improve the ability of artificial intelligence and machine learning systems to respond to adversarial attacks.
Intel said that in the first phase of the program, its focus is on enhancing its object detection technology by exploiting spatial, temporal, and semantic coherence in both still images and video.
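To illustrate what temporal coherence can buy a defender, the sketch below flags detections that appear in a single video frame with no support in neighboring frames, a hint that they may be spurious or spoofed. This is a minimal assumption-laden illustration of the general idea, not Intel's or DARPA's actual method; the function name and data layout are invented for the example.

```python
def temporally_consistent(detections, window=1):
    """detections: one set of object labels per video frame.
    Returns per-frame sets keeping only labels that also appear in at
    least one frame within `window` frames on either side."""
    filtered = []
    for i, labels in enumerate(detections):
        neighbors = set()
        for j in range(max(0, i - window), min(len(detections), i + window + 1)):
            if j != i:
                neighbors |= detections[j]
        filtered.append({label for label in labels if label in neighbors})
    return filtered

# A stop sign that flickers into existence for exactly one frame is suspicious.
frames = [{"car"}, {"car", "stop_sign"}, {"car"}]
print(temporally_consistent(frames))  # the one-frame "stop_sign" is dropped
```

Spatial and semantic coherence checks follow the same pattern: a detection is trusted more when it agrees with its surrounding context rather than contradicting it.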
DARPA says GARD's defenses will be applicable across many environments and may draw inspiration from fields such as biology.
“We have to make sure that machine learning is safe and not being deceived,” says Dr. Hava Siegelmann, program manager in DARPA’s Information Innovation Office.