First, Manage Security Threats to Machine Learning

The message here is that before leaping into deploying AI/ML, think about the threats. Deception can be a useful tool for the defender to deploy, but the attacker may be using it as well…

[…] The U.S. Army tank brigade was once again fighting in the Middle East. Its tanks were recently equipped with a computer vision-based targeting system that employed remotely controlled drones as scouts. Unfortunately, adversary forces deceived the vision system into thinking grenade flashes were actually cannon fire. The tank operators opened fire on their comrades two miles away. Although the U.S. brigade won the battle, they lost 6 soldiers, 5 tanks, and 5 fighting vehicles — all from friendly fire. The brigade commander said, “our equipment is so lethal that there is no room for mistakes.”

This story is based on an actual event. The tanks involved did not have automated computer vision systems — but someday they will.

Deception is as old as warfare itself. Until now, the targets of deception operations have been humans. But the introduction of machine learning and artificial intelligence opens up a whole new world of opportunities to deceive by targeting machines. We are seeing the dawn of a new and expanded age of deception.
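To make the idea of "deceiving a machine" concrete, the sketch below (not from the article; a standard illustration) shows the classic fast-gradient-sign style of adversarial attack on a toy linear classifier. The model, weights, and inputs are all invented for illustration: the attacker nudges each input feature in the direction that most increases the score of the wrong class, flipping the model's decision with a modest perturbation.

```python
import numpy as np

# Toy linear classifier: predicts class 1 when w.x + b > 0.
# Weights and bias are arbitrary values chosen for illustration.
w = np.array([1.5, -2.0, 0.5])
b = 0.1

def predict(x):
    return int(w @ x + b > 0)

# A benign input the model correctly classifies as class 0.
x = np.array([-0.5, 0.6, -0.2])

# For a linear model, the gradient of the score with respect to the
# input is simply w, so the "fast gradient sign" perturbation is
# eps * sign(w). The eps here is exaggerated for a 3-feature toy;
# in high-dimensional inputs like images, a tiny per-pixel eps
# (imperceptible to humans) is often enough to flip the prediction.
eps = 1.2
x_adv = x + eps * np.sign(w)

print(predict(x))      # class 0: the benign input
print(predict(x_adv))  # class 1: the deceived prediction
```

The same principle scales up: a perturbation invisible to a human operator can cause a vision-based targeting system to confidently mislabel what it sees, which is exactly the failure mode the vignette above dramatizes.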
