AI Teaches Itself Chess in 72 Hours, Plays at International Master Level (technologyreview.com)
Well, it didn't really teach itself: it was carefully designed and was fed a huge data set (155M games generated from 5M actual games), coupled with a semi-supervised approach (rate each position by whether the AI, playing against itself, goes on to win or lose). A similar approach yielded strong AIs for Go, too (http://arxiv.org/abs/1412.3409), yet nowhere near the equivalent of IM level.
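Roughly, that setup amounts to encoding positions as feature vectors, labeling each one with the eventual self-play result, and fitting an evaluator on those labels. A minimal sketch of the general idea (PyTorch, with made-up feature sizes and random stand-in data; this is not the paper's actual architecture or features):

    import torch
    import torch.nn as nn

    N_FEATURES = 768  # hypothetical encoding: one entry per (piece type, color, square)

    # Stand-in data: X would hold encoded positions from the generated games,
    # y the self-play outcome for each (1.0 = side to move went on to win).
    X = torch.rand(10_000, N_FEATURES)
    y = torch.randint(0, 2, (10_000, 1)).float()

    model = nn.Sequential(
        nn.Linear(N_FEATURES, 256), nn.ReLU(),
        nn.Linear(256, 64), nn.ReLU(),
        nn.Linear(64, 1), nn.Sigmoid(),  # predicted win probability for the position
    )
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    loss_fn = nn.BCELoss()

    # Full-batch loop for brevity; real training would use mini-batches.
    for epoch in range(5):
        opt.zero_grad()
        loss = loss_fn(model(X), y)
        loss.backward()
        opt.step()
        print(f"epoch {epoch}: loss {loss.item():.4f}")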
One comment in the article that stuck with me (and this is central to AI discussions) was:
"While Deep Blue was searching some 200 million positions per second, Kasparov was probably searching no more than five a second. And yet he played at essentially the same level. Clearly, humans have a trick up their sleeve that computers have yet to master."
Deep Blue didn't have anything to master: if it could beat the world champion, that was it! A rough analogy would be: a bird can fly by flapping its wings 5 times a second, while a Cessna 172's propeller has to make 200 revolutions per minute (numbers made up), so we still have some avian tricks to master. They are two different approaches to the same problem!
At the time, Deep Blue required a 32-node IBM RS/6000 SP high-performance computer (https://www.research.ibm.com/deepblue/meet/html/d.3.shtml) for its power; now a regular MBP can run an instance of Stockfish that would give a GM a good run for their money.
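To make that concrete, a rough example using the python-chess library (it assumes a Stockfish binary is installed and on PATH; the 100 ms budget is just illustrative):

    import chess
    import chess.engine

    # Launch a locally installed Stockfish over UCI.
    engine = chess.engine.SimpleEngine.popen_uci("stockfish")
    board = chess.Board()
    # Ask for a move with a 100 ms think per move; even that is very strong on a laptop.
    result = engine.play(board, chess.engine.Limit(time=0.1))
    print("Stockfish suggests:", board.san(result.move))
    engine.quit()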
Now, if you could design an AI that learns a comparatively simpler board game, say Settlers of Catan, together with a human (not fed millions of games) and plays with reasonable strategy, that would be teaching itself how to play.