With all the innovations and wonders that AI is poised to bring into our lives, let's not forget that AI can also be used adversarially to harm us. Bad actors might exploit vulnerabilities in AI systems to disrupt and subvert those systems' intended purpose.
Some examples of adversarial AI attacks include:
1) The image recognition system in an automobile might be fooled into misinterpreting a 'Stop' sign as a 'Yield' sign, potentially causing an accident (see the sketch after this list).
2) The algorithms of a financial system might be manipulated to cause a stock market crash and destabilize the economy.
3) A cybersecurity attack on corporate IT systems might disrupt a company's daily operations.
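To make the first example concrete, here is a minimal sketch of one well-known attack technique, the Fast Gradient Sign Method (FGSM), which nudges an image just enough to flip a classifier's prediction. It assumes a PyTorch image classifier; the function name fgsm_perturb and the epsilon value are illustrative, not taken from any particular system.

```python
import torch
import torch.nn as nn

def fgsm_perturb(model, image, label, epsilon=0.03):
    """Craft an adversarial example with the Fast Gradient Sign Method.

    A tiny, human-imperceptible perturbation can flip the model's
    prediction, e.g. from 'stop sign' to 'yield sign'.
    """
    # Work on a detached copy so we can take gradients w.r.t. the pixels.
    image = image.clone().detach().requires_grad_(True)
    loss = nn.functional.cross_entropy(model(image), label)
    loss.backward()
    # Step each pixel in the direction that maximally increases the loss.
    adversarial = image + epsilon * image.grad.sign()
    return adversarial.clamp(0.0, 1.0).detach()

# Hypothetical usage with a trained classifier and a batch of images:
# adv_batch = fgsm_perturb(classifier, stop_sign_batch, stop_sign_labels)
```

The perturbation is typically invisible to a human observer, which is exactly what makes an attack like the misread stop sign plausible.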
The MITRE Corporation, a not-for-profit that works across government, industry, and academia, has developed ATLAS™ (Adversarial Threat Landscape for Artificial-Intelligence Systems), a comprehensive knowledge base of adversary tactics and techniques against AI systems. MITRE's objective is to raise awareness of the evolving vulnerabilities that might exist in AI systems.
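As a rough illustration of how one might explore the knowledge base programmatically, here is a sketch that loads ATLAS's machine-readable data and lists its tactics. The URL and the YAML structure (matrices containing tactics) are assumptions based on how the project publishes its data; check atlas.mitre.org for the authoritative location.

```python
import urllib.request
import yaml  # PyYAML

# Assumed location of the machine-readable ATLAS data; verify at atlas.mitre.org.
ATLAS_YAML_URL = (
    "https://raw.githubusercontent.com/mitre-atlas/atlas-data/main/dist/ATLAS.yaml"
)

with urllib.request.urlopen(ATLAS_YAML_URL) as response:
    atlas = yaml.safe_load(response)

# Assumed structure: top-level 'matrices', each containing a list of 'tactics'.
for matrix in atlas.get("matrices", []):
    for tactic in matrix.get("tactics", []):
        print(tactic.get("id"), "-", tactic.get("name"))
```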
It's concerning that there are so many ways bad actors can exploit AI systems for nefarious purposes. As a result, AI developers need to stay aware of evolving vulnerabilities and take deliberate steps to ensure their AI models are built with strong safety protocols.