Using artificial evolution to fool neural networks
http://www.evolvingai.org/fooling
Deep Neural Networks are Easily Fooled: High Confidence Predictions for Unrecognizable Images
[…] it is easy to produce images that are completely unrecognizable to humans, but that state-of-the-art DNNs believe to be recognizable objects with 99.99% confidence (e.g. labeling with certainty that white noise static is a lion)
This paper from late 2014 got a lot of almost-mainstream press (Wired, The Atlantic, etc.). I found it very interesting, not only for the stress it puts Deep Neural Networks under (after all, I guess it's always possible to fool a computer), but also for its under-appreciated use of evolutionary algorithms to generate the trick images. And it was not just a good old genetic algorithm mutating raw pixels: the more striking pictures use the CPPN image encoding that came out of interactive evolution, where a human plays the role of the fitness function (see http://picbreeder.org/), thereby turning evolution into breeding, much like wheat or dogs were bred by humans rather than being a direct product of evolution. In the paper itself the breeder is the network: its confidence in a target class serves as the fitness function that the images are evolved to maximize.
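To make that loop concrete, here is a minimal, self-contained sketch of the general idea: a population of directly encoded images is mutated and selected to maximize a confidence score. The `confidence` function below is a toy placeholder standing in for a trained DNN's softmax output, and the selection scheme is simple truncation selection; the paper actually runs this against real ImageNet/MNIST classifiers using MAP-Elites and both direct and CPPN encodings, so treat this as an illustration of the technique, not the authors' setup.

```python
import numpy as np

rng = np.random.default_rng(0)

def confidence(image):
    """Placeholder for the network's confidence in the target class.
    In the paper this is a trained DNN; here it is a toy scoring
    function so the sketch runs on its own."""
    target = np.linspace(0.0, 1.0, image.size).reshape(image.shape)
    return 1.0 / (1.0 + np.mean((image - target) ** 2))

def evolve(shape=(8, 8), pop_size=20, generations=200, mutation_rate=0.1):
    # Directly encoded "images": one array of pixel values per individual.
    population = rng.random((pop_size, *shape))
    for _ in range(generations):
        scores = np.array([confidence(ind) for ind in population])
        # Truncation selection: keep the best half, refill with mutated copies.
        parents = population[np.argsort(scores)[-pop_size // 2:]]
        children = parents + mutation_rate * rng.normal(size=parents.shape)
        population = np.clip(np.concatenate([parents, children]), 0.0, 1.0)
    best = max(population, key=confidence)
    return best, confidence(best)

if __name__ == "__main__":
    best_image, score = evolve()
    print(f"best confidence: {score:.4f}")
```

Swap the placeholder for a real network's confidence in, say, "lion" and you get the fooling images; swap it for a human clicking on the pictures they like and you get Picbreeder-style breeding.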
I suspect the paper would not have had the same impact if it had only presented examples with “white noise-y” pictures.