This clever AI hid data from its creators to cheat at its appointed task
techcrunch.com

This claim is unsupported:
> The machine, not smart enough to do the actual difficult job of converting these sophisticated image types to each other
Obviously it found an easy way to solve the problem it was given: steganography. But could it have solved the problem the researchers intended, if they had framed it correctly? There's no evidence either way for this particular algorithm, but in general this is not hard. This is usually called style transfer, and I don't see any reason to believe that standard techniques[1] wouldn't be able to solve the street-map-to-aerial-map problem. And it's pretty well established that adding a bit of noise[2] during training helps GANs avoid these kinds of problems (a quick sketch of that trick follows the footnotes).
[1]: https://medium.com/tensorflow/neural-style-transfer-creating...
[2]: https://www.inference.vc/instance-noise-a-trick-for-stabilis...
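To make [2] concrete, here's a minimal sketch of the instance-noise trick in PyTorch: corrupt both real and generated batches with Gaussian noise of the same scale before the discriminator sees them, and anneal that scale toward zero over training. The names (G, D) and the linear schedule here are my own illustrative assumptions, not code from the linked post.

    import torch
    import torch.nn.functional as F

    def instance_noise(x, sigma):
        # Corrupt a batch of images with i.i.d. Gaussian noise of std sigma.
        return x + sigma * torch.randn_like(x)

    def discriminator_loss(D, G, real, z, step, total_steps, sigma0=0.1):
        # Linearly anneal the noise level from sigma0 down to zero.
        sigma = sigma0 * max(0.0, 1.0 - step / total_steps)
        fake = G(z).detach()
        # Real and fake images pass through the same noise channel, so the
        # discriminator can't key on pixel-precise artifacts.
        d_real = D(instance_noise(real, sigma))
        d_fake = D(instance_noise(fake, sigma))
        # Standard non-saturating GAN discriminator loss on raw logits.
        return F.softplus(-d_real).mean() + F.softplus(d_fake).mean()

The intuition is that a low-amplitude hidden signal doesn't survive this noise channel, so the generator can't rely on it to fool the discriminator.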
Right, I think it would be more accurate to say "The machine, smart enough to choose the easier approach for solving the task". It simply found an accurate and easy way to pass the discriminator and stuck with it.
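For intuition about what "hiding data" can look like, here's a toy Python/NumPy sketch of classic least-significant-bit steganography, where a whole second image rides along in the low bits of the first with no visible change. This is only an analogy; the model in the article reportedly encodes the source image in subtle high-frequency patterns learned end to end, not literal low bits.

    import numpy as np

    def hide(cover, secret):
        # Keep the cover's top 4 bits; stash the secret's top 4 bits
        # in the cover's low 4 bits. Both are uint8 arrays, same shape.
        return (cover & 0xF0) | (secret >> 4)

    def reveal(stego):
        # Recover a 4-bit approximation of the secret from the low bits.
        return (stego & 0x0F) << 4

    cover = np.random.randint(0, 256, (64, 64), dtype=np.uint8)
    secret = np.random.randint(0, 256, (64, 64), dtype=np.uint8)
    stego = hide(cover, secret)
    # Each pixel moves by at most 15/255, far below what a human
    # (or a plausibility-only discriminator) would notice.
    assert np.abs(stego.astype(int) - cover.astype(int)).max() <= 15

A discriminator that only judges whether the output looks plausible has no reason to penalize a payload like this, which is exactly the loophole being described.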
So basically AI abiding by Goodhart's law (https://en.wikipedia.org/wiki/Goodhart%27s_law). I wonder how much of this goes undetected in other applications due to poor objective definition.
It's incredibly common. For example:
https://docs.google.com/spreadsheets/u/1/d/e/2PACX-1vRPiprOa...
This is a neat result and writeup, but the title is very misleading clickbait.
Yeah, the article goes on to explain why the premise of the headline is wrong. But it is a fascinating unintended consequence of the algorithm nonetheless.