To Guard Against AI Attacks, First Think Like a Baddie

In 1950, British mathematician and wartime codebreaker Alan Turing — often considered the father of modern computer science — made a bold prediction: “I believe that at the end of the century, the use of words and general educated opinion will have altered so much that one will be able to speak of machines thinking without expecting to be contradicted.”

Today, more than 70 years later, Turing’s statement couldn’t ring truer. Machines and artificial intelligence are now so seamlessly enmeshed in our daily lives that nobody bats an eyelid when we chat with a bot for help or when Netflix suggests a movie that perfectly matches our mood.

Central to these capabilities is machine learning: algorithms and models that comb through troves of data to churn out predictions, anything from flagging moles that look suspiciously malignant and suggesting new friends on Facebook to estimating the chances a person will default on a bank loan.

Artificial intelligence, in the form of machine learning, is now ubiquitous in our everyday lives, employed by social media platforms, banks and streaming services, to name just a few. (Image credit: mikemacmarketing)

“Systems supported by machine learning techniques have brought significant benefit to our daily life,” says Bo An, a computer scientist at Singapore’s Nanyang Technological University (NTU).

While research in the field initially focused on making predictions more efficient, more accurate and less biased, a lot of recent work, including An’s, has turned to a different aspect of machine learning: its security.

“Machine learning models are vulnerable to manipulation,” explains An, who is also co-director of NTU’s Artificial Intelligence Research Institute.

In academic circles, this is known as Adversarial Machine Learning. “It includes many dimensions and can have many different definitions,” he says. Some errors originate fairly innocently — for example, when data is wrongly or only partially labelled. In other instances, malicious agents may be at play, such as criminals trying to bypass fraud detectors on e-commerce platforms.

Regardless of the source of bad data, adversarial machine learning usually results in the same outcome: a decline in prediction performance. “In some cases, this can have serious consequences,” says An. He offers up the example of an experiment conducted by a group of researchers working on driverless cars in 2018. When the researchers placed a few small stickers on the ground at a traffic junction, the cars got confused and began driving into the opposite lane.
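
To make the idea of “bad data” concrete, here is a minimal sketch, not taken from An’s work, of how even a modest fraction of maliciously flipped training labels drags down a simple classifier. The dataset, model and flip rates are all invented for illustration.

```python
# Illustrative only: a toy label-flipping "poisoning" experiment.
# A fraction of training labels is corrupted and the drop in test accuracy is measured.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.5, random_state=0)

def accuracy_with_flipped_labels(flip_fraction):
    """Train on labels where the given fraction has been maliciously flipped."""
    y_poisoned = y_train.copy()
    n_flip = int(flip_fraction * len(y_poisoned))
    idx = rng.choice(len(y_poisoned), size=n_flip, replace=False)
    y_poisoned[idx] = 1 - y_poisoned[idx]          # flip 0 <-> 1
    model = LogisticRegression(max_iter=1000).fit(X_train, y_poisoned)
    return model.score(X_test, y_test)             # accuracy on clean test data

for frac in (0.0, 0.1, 0.3, 0.45):
    print(f"{frac:.0%} of labels flipped -> test accuracy {accuracy_with_flipped_labels(frac):.3f}")
```

Real poisoning attacks are far more targeted than random label flipping, but the mechanism, corrupting the data a model learns from, is the same.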

Attack first, defend later

Preventing such alarming errors is imperative, but sadly, most existing approaches are inadequate to the task, says An. “Many models are too simple, which doesn’t match what is happening in the real world. Or their approaches are not scalable or don’t hold theoretical guarantees.”

This gap led An to apply for AI Singapore’s research grant in 2019, one he felt would allow him to carry out deep research into techniques that could improve the security of machine learning algorithms.

In October that year, he and two collaborators — Yevgeniy Vorobeychik at Washington University in St. Louis and Milind Tambe, previously at the University of Southern California and now at Harvard — were awarded the AISG research grant. The project, titled ‘New Directions in Adversarial Machine Learning: From Theory to Applications’, had two clear aims: to come up with ways of identifying vulnerabilities in machine learning algorithms, and to figure out how to guard against future attacks.

Ironically, to tackle the first aim, An had to think like a bad guy. “How do you make sure a system is robust?” he asks. “By creating some adversarial examples which force the machine learning algorithm to make mistakes.”
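
An’s attacks on real e-commerce systems are considerably more involved, but the flavour of “creating adversarial examples” can be shown on a toy linear classifier: nudge an input in the direction that increases the model’s loss the most, until the prediction flips. This is an illustrative sketch only; the weights, input and step size below are all made up.

```python
# Illustrative only: crafting an adversarial example against a toy linear classifier,
# in the spirit of gradient-based attacks. The model weights and the input are made up.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# A pretend trained logistic-regression model: predict class 1 if sigmoid(w.x + b) > 0.5.
w = np.array([1.5, -2.0, 0.7])
b = 0.1

x = np.array([0.2, -0.4, 1.0])    # a legitimate input
y = 1.0                           # its true label

p = sigmoid(w @ x + b)
print("clean prediction:", round(p, 3), "-> class", int(p > 0.5))

# Gradient of the cross-entropy loss with respect to the INPUT (not the weights).
grad_x = (p - y) * w

# Step in the direction that increases the loss the most (the sign of the gradient).
eps = 0.5
x_adv = x + eps * np.sign(grad_x)

p_adv = sigmoid(w @ x_adv + b)
print("adversarial prediction:", round(p_adv, 3), "-> class", int(p_adv > 0.5))
```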

In one piece of work, published in 2019, An and his team studied how online shopping platforms such as Taobao employ AI algorithms to guard against fraudulent transactions. Taobao is the largest such platform in China, where losses due to e-commerce scams are expected to top $12 billion annually by 2025.

One tactic commonly used by unscrupulous sellers is to hire people to make fake purchases, says An. This boosts an item’s rating and its perceived popularity.

To test the robustness of Taobao’s defence systems against such fraud, the researchers designed three different types of attacks. They found that under these attacks, only 20% of malicious activity was successfully detected, down from the usual 90%. “We analyse the characteristics of a problem and really look at the key issues involved,” explains An.

Based on this analysis, he and his team devised some solutions. In this case, they took the adversarial examples generated by the attacks and used them to retrain the detection model. “Results showed that its robustness significantly improved after that, with a precision above 85.9% under all tested attacks,” he says.
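
The paper’s actual pipeline is not reproduced here; the following is only a rough sketch of the general recipe it describes: generate adversarial copies of the training data, keep their correct labels, and fold them back into training. The attack, model and data are simple stand-ins.

```python
# Illustrative only: "adversarial retraining" in miniature, with a stand-in attack and model.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=10, random_state=1)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.5, random_state=1)

def perturb(model, X, y, eps=0.5):
    """Push each point in the direction that most increases the model's loss."""
    p = model.predict_proba(X)[:, 1]
    grad = (p - y)[:, None] * model.coef_          # d(loss)/dx for logistic regression
    return X + eps * np.sign(grad)

# 1. Train a baseline detector and attack it.
model = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
print("clean accuracy:         ", round(model.score(X_te, y_te), 3))
print("accuracy under attack:  ", round(model.score(perturb(model, X_te, y_te), y_te), 3))

# 2. Retrain on the original data plus adversarial copies (with their true labels).
X_aug = np.vstack([X_tr, perturb(model, X_tr, y_tr)])
y_aug = np.concatenate([y_tr, y_tr])
robust = LogisticRegression(max_iter=1000).fit(X_aug, y_aug)
print("retrained, under attack:", round(robust.score(perturb(robust, X_te, y_te), y_te), 3))
```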

An optimal solution

As part of the AI Singapore project, An was also interested in finding new ways to approach adversarial machine learning. “We did this by building connections between game theory and machine learning research,” he says, referring to the branch of mathematics that studies how players who have conflicting interests try to optimise their decision-making.

“So we develop novel optimisation techniques to compute attack and defence strategies,” he explains.

In one paper, published in 2021, An and his co-authors looked at Network Security Games (NSGs): the challenge of deploying a limited number of security resources to protect a large networked infrastructure, such as a city centre or transportation system, against an attacker.

NSGs are particularly problematic because the number of possible attack paths is enormous. The researchers overcame this challenge by developing a novel learning algorithm trained to find the optimal solution, known as the Nash equilibrium: the point at which every player has settled on its best strategy and has no incentive to deviate from it. “The algorithm significantly outperformed state-of-the-art algorithms in both scalability and solution quality,” says An.
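
The NSG algorithm itself is well beyond a snippet, but the idea of a Nash equilibrium can be illustrated on a miniature zero-sum security game, with one defender, one attacker and two targets, solved as a standard linear program. This is a sketch for intuition only; the payoff numbers are invented and have nothing to do with the paper’s experiments.

```python
# Illustrative only: solving a tiny zero-sum security game for its Nash equilibrium
# with a linear program. Two targets, one defender patrol, one attacker.
import numpy as np
from scipy.optimize import linprog

# Defender's payoff. Rows: defend target 1 / defend target 2.
# Columns: attacker hits target 1 / target 2. A thwarted attack costs nothing;
# an undefended hit costs the target's value (5 for target 1, 10 for target 2).
A = np.array([[ 0.0, -10.0],
              [-5.0,   0.0]])

# Variables: [x1, x2, v], where x is the defender's mixed strategy and v the game value.
# Maximise v subject to A^T x >= v (the defender's payoff is at least v against
# every attacker choice) and x summing to 1.
c = np.array([0.0, 0.0, -1.0])                 # linprog minimises, so minimise -v
A_ub = np.hstack([-A.T, np.ones((2, 1))])      # v - (A^T x)_j <= 0 for each target j
b_ub = np.zeros(2)
A_eq = np.array([[1.0, 1.0, 0.0]])             # x1 + x2 = 1
b_eq = np.array([1.0])
bounds = [(0, 1), (0, 1), (None, None)]

res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
x1, x2, v = res.x
print(f"defend target 1 with prob {x1:.2f}, target 2 with prob {x2:.2f}; game value {v:.2f}")
```

The solution matches the intuition behind such games: the defender should guard the more valuable target more often, but never so predictably that the attacker can exploit the pattern.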

Although the project ended last September, adversarial machine learning is a topic he continues to research with great interest. “This is something that has relevance in the real world and in industry,” says An. “I think it can be deployed in the future to improve the security of many domains, including self-driving vehicles, financial models, and smart traffic control systems.”

Check out other AI Research awarded projects here: https://aisingapore.org/research/grant-call-awardees/
