Tuesday, May 27, 2014

The Flaw Lurking In Every Deep Neural Net

The Flaw Lurking In Every Deep Neural Net: "For all the networks we studied, for each sample, we always manage to generate very close, visually indistinguishable, adversarial examples that are misclassified by the original network."
To be clear, the adversarial examples look identical to the originals to a human, yet the network misclassifies them. Two photos can look not merely like the same cat but like the very same photograph to a person, while the machine classifies one correctly and gets the other wrong.
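The mechanism is easiest to see in a toy case. The sketch below (NumPy, with invented weights and input) nudges an input by a tiny amount in the direction that most decreases a linear classifier's score. This is a simplified stand-in for the gradient-based search the paper describes; the original work used box-constrained L-BFGS against deep networks, not this one-step trick.

```python
import numpy as np

# Hypothetical toy setup: a linear binary classifier, w.x > 0 -> class 1.
# The weights and input are made up purely for illustration.
w = np.array([1.0, -1.0])
x = np.array([0.6, 0.5])  # w @ x = 0.1 > 0, so classified as class 1

def classify(v):
    return int(w @ v > 0)

# Adversarial step: for a linear model the gradient of the score with
# respect to the input is just w, so a tiny step against sign(w) is
# enough to cross the decision boundary.
eps = 0.1
x_adv = x - eps * np.sign(w)  # [0.5, 0.6], w @ x_adv = -0.1 < 0

print(classify(x), classify(x_adv))  # the two nearby inputs disagree
print(np.abs(x_adv - x).max())       # perturbation is only 0.1 per dimension
```

A deep network's decision surface is far more complicated, but the same idea applies: following the loss gradient finds a direction in which a perceptually negligible change moves the input across a class boundary.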
