An AI tool developed by DeepMind, a Google subsidiary, has for the first time beaten two eSports professionals after learning to play the strategy game StarCraft II on its own.
With a program called AlphaStar, DeepMind built a deep neural network trained directly on raw StarCraft II game data, as Google explained in an official statement.
In a series of test matches held on December 19, AlphaStar defeated the Polish player Grzegorz "MaNa" Komincz and the German Dario "TLO" Wünsch, both members of the Team Liquid eSports team. The matches were played on a competitive game map with no rule restrictions.
DeepMind considers StarCraft, a real-time strategy game developed by Blizzard, a "grand challenge" because of its complex mechanics and large maps, which make it difficult to train automated systems to play competitively.
To train its AI, DeepMind fed raw data from the StarCraft II interface into two techniques known as supervised learning and reinforcement learning. The neural network processes the game's units and uses an LSTM (long short-term memory) core to support learning over long time horizons.
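The article does not detail AlphaStar's architecture, but the role of an LSTM core, carrying a hidden state and a cell state across time steps so the agent can remember events from much earlier in a long game, can be sketched minimally. Everything below (layer sizes, names, the toy observations) is illustrative and is not AlphaStar's actual code.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class LSTMCell:
    """Minimal LSTM cell: the cell state c carries long-term memory,
    the hidden state h is the per-step output fed to later layers."""
    def __init__(self, input_size, hidden_size, seed=0):
        rng = np.random.default_rng(seed)
        # One stacked weight matrix for the input, forget, cell and output gates.
        self.W = rng.normal(0, 0.1, (4 * hidden_size, input_size + hidden_size))
        self.b = np.zeros(4 * hidden_size)

    def step(self, x, h, c):
        z = self.W @ np.concatenate([x, h]) + self.b
        i, f, g, o = np.split(z, 4)
        i, f, o = sigmoid(i), sigmoid(f), sigmoid(o)
        g = np.tanh(g)
        c = f * c + i * g           # gated update of long-term memory
        h = o * np.tanh(c)          # gated read-out of the memory
        return h, c

# Feed a toy sequence of 100 "observations" through the cell;
# the state (h, c) persists across all steps.
cell = LSTMCell(input_size=8, hidden_size=16)
h, c = np.zeros(16), np.zeros(16)
for t in range(100):
    obs = np.full(8, 0.01 * t)    # stand-in for one game observation
    h, c = cell.step(obs, h, c)
```

The point of the sketch is the recurrence: unlike a feed-forward network, the same cell is applied at every step, so information observed at step 3 can still influence the action chosen at step 100.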
StarCraft, Blizzard's strategy game.
Google's algorithm, a multi-purpose learning system, initially trained AlphaStar's neural network through supervised learning, imitating human players from replays of the video game released by Blizzard, the California studio, and their "macro" and "micro" strategies. With these techniques alone it defeated the game's built-in AI at its highest difficulty, known as "Elite", in 95% of matches.
Subsequently, the researchers subjected AlphaStar to a reinforcement learning process, for which a continuous StarCraft II league was created in which competing agents built up a global map of the strategies chosen by the competitors.
AlphaStar analyzes the success rate of each strategy and the possible counter-tactics. Through the StarCraft league, the software accumulated the equivalent of more than 200 years of real play in only 14 days.
Other successes of DeepMind
The Google system defeated the professionals "MaNa" and "TLO" by a score of 5-0, something that, according to DeepMind, had never happened before against eSports players. To do so it exploited a higher average number of actions per minute, tens of thousands against the hundreds of its human rivals, despite limitations imposed on the algorithm such as a delay of 350 milliseconds between observation and action.
South Korean Lee Sedol, world champion of the board game Go, competed against Google's artificial intelligence AlphaGo. (Photo: AP / Lee Jin-man)
DeepMind's current StarCraft II tests are not the first time Google's AI has taken on video games: its systems already play other titles such as Atari games, Mario and Dota 2, and last summer also tackled Quake III in Capture the Flag mode.