Computer Learns Game by Reading Manual
(itproportal.com) The article is sparse on details, but the linked MIT news article goes into more depth. Of note, the algorithm won 79% of the games it played. Without textual input it won only 46%, and a more advanced machine learning algorithm without textual input won only 62%. Pretty cool.
Won those games against whom? Other copies of itself? The weakest built-in AI? The strongest built-in AI? A pseudo-random number generator?
Built-in AI. They didn't specify the difficulty level, but in Civ games (can't speak for FreeCiv, though), difficulty levels only tweak handicaps, not the AI algorithms, so the right choice would be whichever difficulty level gives the AI no handicap or bonus.
Original paper: http://people.csail.mit.edu/regina/my_papers/civ11.pdf
It is a neural network trained, in part, against the text in the game manual.
Of course, it's considerably more complex than that in the theory and implementation. :D
It can read and apply what it's read? This program is already more advanced than half my graduating high school class.
Two things come to my mind:
1. That's gotta be one helluva manual
2. Reversing the procedure to automate documentation by examining variable and method names along various code paths would be brilliant.
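On point 2, a crude version of that idea is easy to sketch. This is a toy illustration only (nothing from the paper): split a camelCase or snake_case identifier into words and template a doc stub from its leading verb. All the rules and verb lists here are made up.

```python
import re

def describe(identifier):
    """Generate a naive doc stub from an identifier name (toy heuristic only)."""
    # Split camelCase ("getUserName" -> "get User Name"), then snake_case.
    spaced = re.sub(r"([a-z])([A-Z])", r"\1 \2", identifier)
    words = spaced.replace("_", " ").lower().split()
    # Template on the leading verb; real tooling would need far more than this.
    if words and words[0] in {"get", "fetch", "load"}:
        return f"Returns the {' '.join(words[1:])}."
    if words and words[0] in {"set", "update"}:
        return f"Sets the {' '.join(words[1:])}."
    return f"Handles {' '.join(words)}."

print(describe("getUserName"))  # -> "Returns the user name."
print(describe("set_timeout"))  # -> "Sets the timeout."
```

Obviously this only recovers what the names already say; the interesting (and hard) part would be inferring behavior from the code paths themselves.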
Good thing we're teaching them Civilization - they'll never get out of the server room.
Teaching ultra-intelligent AI Monopoly, though? - guaranteed robot overlords.
One might argue that the algorithms used by Wall Street firms to trade the stock market learned how to play long ago... ;).
I have bad news for you, then -- Monopoly is effectively a solved game ;)
Does that mean that if the program read additional texts on Civilization strategy, it would get even better? How about texts that are somewhat related but not specific to the game (combat strategy, world history...)?
From the looks of it, the program merely learned the rules of the game by doing textual analysis of the manual, and maybe got a few strategic hints as well. As for actually learning to play the game _well_, my sense from the article is that the program then used more conventional machine learning techniques to test and adopt winning moves.
I'm somewhat familiar with this work. (My advisor talked to the authors some. I could be misrepresenting it a little, but not nearly as much as the article.)
It's not learning to play the whole game. It's learning to cheese (in gaming parlance) the opponent. The strategy it learns is to build a warrior as fast as possible and go and attack the enemy's city. If that fails, it almost always loses. The manual gave it some hint in that direction.
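Roughly, the shape of the idea is that text nudges the action-value estimates the learner already has. Here's a minimal sketch of that flavor of biasing, not the authors' actual model: the feature extraction, weights, and "count mentions in the manual" bonus are all invented for illustration.

```python
def text_bonus(action, manual_sentences):
    """Crude proxy for text relevance: count manual sentences mentioning
    the action's key noun (e.g. 'warrior'). The real system learns this
    alignment; this hard-coded keyword match is just for illustration."""
    keyword = action.split()[-1]
    return sum(keyword in s.lower() for s in manual_sentences)

def choose_action(candidates, learned_values, manual_sentences, alpha=0.1):
    """Pick the action maximizing learned value plus a text-derived bonus."""
    def score(a):
        return learned_values.get(a, 0.0) + alpha * text_bonus(a, manual_sentences)
    return max(candidates, key=score)

manual = [
    "Build warriors early to defend your city.",
    "Attack with warriors before the enemy grows.",
    "Settlers found new cities.",
]
learned = {"build warrior": 0.4, "build settler": 0.45}
# The manual mentions warriors twice, so the text bonus flips the choice.
print(choose_action(["build warrior", "build settler"], learned, manual))
```

Which fits the parent comment: if the manual happens to lean toward early aggression, the text signal pushes the learner into exactly that warrior-rush cheese.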
I am interested in the details regarding how the AI was able to map the manual to the game controls.
Taking RTFM to the next level. Yes.
The problem occurs when you have to RTFM to your auto-RTFM machine causing the universe to collapse in some kind of recursive logical singularity.