Researchers at OpenAI have discovered that their cutting-edge computer vision system can be fooled by tools as simple as a pen and a piece of paper.
Simply writing the name of an object on a note and sticking it onto a different real object can cause the software to misidentify what it sees. OpenAI's researchers described this as a "typographic attack" in a blog post.
“By exploiting the model’s ability to read text robustly, we find that even photographs of hand-written text can often fool the model,” they said.
They likened the attacks to "adversarial images", which are already known to fool commercial machine vision systems. For now, the researchers say, the flaw is not a serious cause for concern.