About

AI’s Achilles Heel

Your toaster has likely been through more rigorous safety testing than the AI guiding your finances. In fact, there are no widely used safety standards for artificial intelligence (AI) or its subfield, machine learning, and yet these systems sit at the core of many critical domains, including healthcare, cybersecurity, finance and transportation.

In AI’s Achilles Heel, we ask: What can go wrong when a motivated attacker exploits an AI system’s vulnerabilities, and what can tech companies, governments and civil liberties organizations do to secure AI’s future?

Current AI systems are so brittle that just the right set of small changes to an image, a sound or a question can produce wildly unexpected outputs. Researchers have designed methods that discover changes imperceptible to a human yet capable of causing dramatic failures in the AI systems we otherwise trust.

Now ask yourself: Can you be sure that your self-driving car will always interpret stop signs correctly, despite graffiti from a miscreant? How do you know the directions you take from Google Maps lead to the right destination and are not the result of adversarial manipulation? Are you certain that the song from the ice cream truck is not asking your Alexa to gift merchandise to a stranger?

Nations, tech companies and academic institutions are aware of this problem and are racing to solve it. But they have to start fresh, because AI systems differ fundamentally from the software that existing security practices were built for, and they have to do it fast as AI systems rapidly proliferate. The path to security is perilous, with plenty of tough questions along the way: for instance, as a society, do we want AI systems that perform optimally, or AI systems that are secure?

How we secure our AI systems will define the next decade. The stakes have never been higher, yet public awareness remains scarce. Read on and lend your voice to the issues that need debating.