Artificial intelligence systems are good at tackling problems that yield to brute-force search, like chess… The computer simply looks ahead through an enormous number of possible move sequences and picks the best one it finds. They’re also pretty good at games like poker, where even with incomplete information, a computer can make the move that is statistically ‘best.’ And lastly, they’re good at making decisions far more quickly than any human.
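The brute-force idea can be sketched in a few lines of code. This is a minimal, hypothetical illustration — a hand-built toy game tree, not chess or StarCraft — showing how a program exhaustively explores every move sequence and picks the one with the best guaranteed outcome (the classic minimax approach):

```python
# Minimax sketch: exhaustively search a toy game tree and return the
# best achievable score for the player to move. The tree and scores
# below are made up purely for illustration.

TREE = {                      # each state maps to its possible next states
    "root": ["a", "b"],
    "a": ["a1", "a2"],
    "b": ["b1", "b2"],
}
LEAF_SCORES = {"a1": 3, "a2": 5, "b1": -2, "b2": 9}  # scores for the maximizer

def successors(state):
    return TREE.get(state, [])

def evaluate(state):
    return LEAF_SCORES[state]

def minimax(state, maximizing):
    """Best score reachable from `state`, assuming both sides play perfectly."""
    moves = successors(state)
    if not moves:                          # terminal position: score it directly
        return evaluate(state)
    scores = [minimax(s, not maximizing) for s in moves]
    return max(scores) if maximizing else min(scores)

def best_move(state):
    """The child state whose guaranteed outcome is best for the maximizer."""
    return max(successors(state), key=lambda s: minimax(s, maximizing=False))

print(best_move("root"))  # → "a": it guarantees 3, while "b" risks -2
```

The catch, of course, is that this only works when the tree is small enough to search — which is exactly why StarCraft, with its enormous branching factor, is so much harder.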
When you combine all of these separate characteristics into one game, things get exponentially more complex, but also much more like real life. And this is why people are trying to teach computers how to play StarCraft, at a level where they can compete with even the best human players.
UC Santa Cruz hosted the 2010 StarCraft AI Competition, which put AI programs through a series of different StarCraft testing scenarios to determine the most effective AI system at micromanagement, small-scale combat, tech-limited games, and of course full gameplay. The video above shows a bunch of highlights; especially notable is the absolutely brutal use of mutalisks by the eventual AI winner, UC Berkeley’s Overmind.
The last clip in the highlight video shows an AI taking on a world-class human player, who wins handily. However, it’s probably only a matter of two or three years before humans have no chance against programs like these… And the reason (I think) is quite straightforward: the computer can micromanage every single unit it owns, on every part of the map, at the same time. A human can’t. Once the AI reaches a competent level of strategy and unit use (it’s not there yet), we’re screwed, because the AI can launch multiple simultaneous micromanaged attacks.
There are lots more videos of the different AI programs competing against each other on YouTube here, and you can download in-game replays at the link below.
[ StarCraft AI Competition ] VIA [ New Scientist ]