6 March 2019 1447 words, 6 min. read

Artificial intelligence: a formidable new opponent in the gaming world

By Pierre-Nicolas Schwab PhD in marketing, director of IntoTheMinds

Artificial intelligence and algorithms are now an integral part of our daily lives. They play a role in many fields that do not always have an obvious link with IT: artificial intelligence (AI) is applied not only in robotics but also in games, music, art, health, and many other domains.
This article launches our brand-new series on the multiple facets of artificial intelligence by detailing its integration into our modern world. Today we begin with an exciting topic, that of games in the broadest sense and video games in particular.


  1. Artificial intelligence in everyday life, at a glance
  2. When artificial intelligence challenges humans in their playgrounds
  3. What is the next step?

Artificial intelligence in everyday life, at a glance

It would be beyond the scope of this article to provide an exhaustive overview of the applications of artificial intelligence. Instead, let us return to an example familiar to almost everyone: voice assistants. Apple and Google have brought forms of artificial intelligence into our daily lives with Siri, the voice assistant for iOS devices, and Google Assistant ("OK Google") for Android devices. More and more, connected speakers based on the same technologies, such as Google's Home and Amazon's Alexa-powered Echo, are making their way into households. We discussed this use case in a recent article that we invite you to read.

When artificial intelligence challenges humans in their playgrounds

For many years now, gaming has been a privileged field of research for specialists in artificial intelligence. Many have looked at games, their techniques and strategies to test their algorithmic models and develop ever more efficient AIs.

Indeed, while the first programs incorporated only basic learning, the most successful AIs gradually learned new game strategies and even revolutionised the way certain games are played. This was the case with backgammon: the TD-Gammon AI showed experienced players the game in a whole new light, having developed strategies, techniques and theories that were new to the human eye.
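TD-Gammon's key innovation was temporal-difference learning: it nudged its estimate of a position's value toward the value of the position that followed it. Below is a minimal, hand-rolled sketch of that update rule on a toy random-walk game; the state space, learning rate and episode count are illustrative choices of ours and have nothing to do with TD-Gammon's actual backgammon code.

```python
import random

# Toy "random walk" game: states 1..5, start at 3; step left or right
# at random; reaching state 6 wins (reward 1), state 0 loses (reward 0).
# TD(0) learns each state's value by bootstrapping from its successor.
# TD-Gammon applied the same core idea, with a neural network and
# self-play, to estimate the value of backgammon positions.
def td0_random_walk(episodes=10000, alpha=0.05, seed=0):
    rng = random.Random(seed)
    values = {s: 0.5 for s in range(1, 6)}  # initial guess for each state
    for _ in range(episodes):
        s = 3
        while True:
            s_next = s + rng.choice([-1, 1])
            if s_next == 6:        # win: nudge value toward reward 1
                values[s] += alpha * (1.0 - values[s])
                break
            if s_next == 0:        # loss: nudge value toward reward 0
                values[s] += alpha * (0.0 - values[s])
                break
            # non-terminal: nudge value[s] toward the successor's value
            values[s] += alpha * (values[s_next] - values[s])
            s = s_next
    return values

values = td0_random_walk()
# The true win probabilities are 1/6, 2/6, ..., 5/6 for states 1..5,
# and the learned estimates approach them.
```

Notice that no one told the program the win probabilities: they emerge purely from playing and propagating value backwards, which is what let TD-Gammon discover strategies humans had not.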

To understand and illustrate the critical steps in the relationship between artificial intelligence and games, we have created this original infographic, which we will explain in more detail below.


Chess

In 1997, Deep Blue became the first artificial intelligence to beat Garry Kasparov, the World Chess Champion. Over several encounters, Kasparov first played conventionally and strategically, then in a more unorthodox way, switching between strategies rather than sticking to a single approach. This erratic technique produced three draws, until Deep Blue finally beat Kasparov in the sixth game.
It should nevertheless be noted that between 1996 and 1997, Deep Blue was specifically upgraded to beat Kasparov, with its computing power doubled. IBM therefore invested considerable resources in this project.
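Deep Blue's strength came from brute-force game-tree search: evaluate every line of play, assuming the opponent replies perfectly. The principle can be sketched on a game small enough to solve exhaustively; the toy code below (ours, not IBM's) solves tic-tac-toe with plain minimax.

```python
from functools import lru_cache

# Minimax: the value of a position is the best move for the player to
# move, assuming the opponent then plays the move that is worst for us.
# A board is a 9-character string, e.g. "XX.OO....", read row by row.
WIN_LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),
             (0, 3, 6), (1, 4, 7), (2, 5, 8),
             (0, 4, 8), (2, 4, 6)]

def winner(board):
    for a, b, c in WIN_LINES:
        if board[a] != "." and board[a] == board[b] == board[c]:
            return board[a]
    return None

@lru_cache(maxsize=None)
def minimax(board, player):
    """Value of `board` for X: +1 if X wins, -1 if O wins, 0 if a draw."""
    w = winner(board)
    if w == "X":
        return 1
    if w == "O":
        return -1
    if "." not in board:
        return 0
    values = []
    for i, cell in enumerate(board):
        if cell == ".":
            child = board[:i] + player + board[i + 1:]
            values.append(minimax(child, "O" if player == "X" else "X"))
    return max(values) if player == "X" else min(values)

# Perfect play from the empty board is a draw.
result = minimax("." * 9, "X")
```

Tic-tac-toe has only a few thousand positions, so this exhaustive search runs in milliseconds; chess needed Deep Blue's custom hardware, pruning, and hand-tuned evaluation functions to make the same idea tractable.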

The televised game show Jeopardy

In 2011, a new kind of contestant appeared on the set of the game show Jeopardy!: an artificial intelligence developed by IBM. With 15 terabytes of RAM, Watson (the AI's name) stood out on television by beating the show's defending champions, claiming a prize of one million dollars.
Watson had no access to the Internet during the questions, although many servers were clearly running to keep the AI responsive. IBM acknowledged that Watson held the equivalent of 200 million pages of data in its RAM.


The game of Go

In March 2016, AlphaGo, the artificial intelligence developed by DeepMind, won 4 out of 5 games of Go against Lee Sedol, one of the world champions of this highly complex strategy game. Many specialists believed that an AI victory over a Go champion would not happen for decades; and yet… Go is indeed far more complicated to play than chess, and the possible strategies far more numerous, since the board (19×19 intersections) is much larger than a chessboard (8×8 squares).
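The gap in complexity between the two games can be made concrete with a back-of-the-envelope calculation, using commonly cited ballpark figures (roughly 35 legal moves per position over about 80 plies for chess, and roughly 250 moves over about 150 plies for Go; these are textbook approximations, not exact values).

```python
import math

# A game tree has roughly branching_factor ** depth leaves; we work
# in log10 to keep the numbers printable.
def log10_tree_size(branching_factor, depth):
    return depth * math.log10(branching_factor)

chess_log10 = log10_tree_size(35, 80)    # roughly 10^123 lines of play
go_log10 = log10_tree_size(250, 150)     # roughly 10^359 lines of play
```

On these estimates, Go's game tree is not merely bigger than chess's: it is bigger by more than 200 orders of magnitude, which is why brute-force search alone could never repeat Deep Blue's feat at Go.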

After studying a database of Go games, the AI was estimated to have acquired the experience of a person who had played the game continuously for 80 years. In contrast to Deep Blue, the AI that beat Kasparov at chess in 1997, AlphaGo relied on so-called "deep learning" techniques rather than brute-force search. Thus AlphaGo defeated Lee Sedol, and its successor AlphaGo Master defeated Ke Jie, then considered the best (or one of the best) Go players in the world.
This progress is significant for the field of artificial intelligence. However, many people put the victory into perspective by pointing to the energy that AlphaGo and AlphaGo Master consumed to achieve it. During the match between Lee Sedol and AlphaGo, the champion's energy consumption was estimated at 20 watts, compared to nearly 1 MW for the AI.

Texas Hold’em

Poker is one of the most complex games to play: it combines reflection and strategy, but also instinct and bluffing, which do not seem, at first sight, to be an AI's strongest skills. Taking these difficulties into account, the developers of Libratus focused on Texas Hold'em poker. The AI was trained to play against itself and then examine its strategies to improve them, developing genuine bluffing skills by minimising its "regret", the loss incurred by certain game choices. With a regret rate close to zero, it was concluded that the algorithm would be effectively impossible to beat within a human lifetime.
It was put to the test in January 2017: Libratus faced Jason Les, Dong Kim, Daniel McAulay and Jimmy Chou, among the world's top Texas Hold'em players. After 20 days of play and 120,000 hands, Libratus was declared the winner. Throughout the match, the AI analysed its opponents' strategies and techniques, continually adapting and improving.
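The regret-minimisation principle behind Libratus can be illustrated at toy scale with "regret matching": after each round, record how much better every alternative action would have done, then play actions in proportion to their accumulated positive regret. The average strategy over many self-play rounds approaches an unexploitable equilibrium. Below is a minimal sketch on rock-paper-scissors (an illustration of the principle only; Libratus used a far more sophisticated counterfactual-regret-minimisation algorithm over poker's enormous game tree).

```python
# Rock-paper-scissors actions 0, 1, 2: each action beats the previous one.
def payoff(a, b):
    """+1 if action a beats b, -1 if it loses, 0 on a tie."""
    if a == b:
        return 0
    return 1 if (a - b) % 3 == 1 else -1

def average_strategy(iterations=10000):
    regrets = [1.0, 0.0, 0.0]       # slight asymmetry to kick things off
    strategy_sum = [0.0, 0.0, 0.0]
    for _ in range(iterations):
        # play each action in proportion to its positive accumulated regret
        pos = [max(r, 0.0) for r in regrets]
        norm = sum(pos)
        strategy = [p / norm for p in pos] if norm > 0 else [1 / 3] * 3
        for a in range(3):
            strategy_sum[a] += strategy[a]
        # expected payoff of each pure action against the current strategy
        expected = [sum(strategy[b] * payoff(a, b) for b in range(3))
                    for a in range(3)]
        baseline = sum(strategy[a] * expected[a] for a in range(3))
        for a in range(3):
            regrets[a] += expected[a] - baseline  # regret of not playing a
    total = sum(strategy_sum)
    return [s / total for s in strategy_sum]

avg = average_strategy()
# avg approaches the equilibrium (1/3, 1/3, 1/3): a perfectly mixed,
# unexploitable strategy that an opponent cannot learn to beat.
```

The momentary strategy keeps cycling (rock-heavy, then paper-heavy, then scissors-heavy), but the average over time settles at the equilibrium; scaled up, the same "near-zero regret" property is what made Libratus so hard to exploit.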


Pac-Man

In 2017, after its acquisition by Microsoft, the start-up Maluuba applied a machine-learning method called Hybrid Reward Architecture (HRA) to the famous arcade game Ms. Pac-Man. This technique splits the overall reward into many simpler sub-rewards, each handled by its own agent; a top-level agent then aggregates the suggestions of all these agents to choose each move. The approach, often compared to the expression "divide and conquer", allowed the AI to achieve an unprecedented score of 999,990.
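At toy scale, the idea behind HRA can be sketched as follows: each reward source (here, each remaining pellet on a one-dimensional corridor) gets its own tiny agent that scores candidate moves only with respect to that source, and an aggregator sums the scores to pick the move. Everything below (the corridor, the hand-written distance-based values) is a hypothetical stand-in for the learned value functions, not Maluuba's implementation.

```python
# One sub-agent per pellet: its value for a position depends only on
# the distance to *its* pellet. The aggregator never reasons about the
# whole game; it just sums the sub-agents' opinions ("divide and conquer").
def pellet_value(position, pellet):
    """A single sub-agent's value for standing at `position`."""
    return -abs(position - pellet)

def play(start, pellets, max_steps=50):
    position, remaining, eaten = start, set(pellets), []
    for _ in range(max_steps):
        if not remaining:
            break
        # aggregator: score each candidate move by summing all sub-agents
        best_move = max((-1, +1),
                        key=lambda m: sum(pellet_value(position + m, p)
                                          for p in remaining))
        position += best_move
        if position in remaining:
            remaining.discard(position)
            eaten.append(position)
    return eaten

order = play(start=0, pellets=[3, -2, 6])
# The aggregated agents steer toward the nearest cluster first,
# then sweep up the remaining pellets.
```

The appeal of the decomposition is that each sub-agent faces a trivially simple problem, and removing or adding a reward source (a pellet, a ghost, a fruit) just adds or drops one agent from the sum.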

Dota 2

Following the same operating principle as Libratus, OpenAI's bot was designed to learn by itself and improve continuously through its gaming experience. In August 2017, after just two weeks of training, the artificial intelligence from the laboratory co-founded by Elon Musk was already beating several of the world's best Dota 2 players in one-on-one games at The International, the eSports world tournament. What makes this milestone essential is the unpredictability of the events that occur during a game.

The Rubik’s Cube

Another significant milestone is, of course, the record set in early 2018 by an AI-driven robot that solved a Rubik's Cube in 0.38 seconds, a time that includes the visual analysis and colour recognition performed via a PlayStation 3 camera as well as the robotic-arm movements needed to solve the cube. Until then, the record stood at 0.637 seconds for a robot and 4.69 seconds for a human, the latter held by Patrick Ponce.

StarCraft II

In 2019, DeepMind, the Google subsidiary that had made its name by developing AlphaGo, continued to push the boundaries of artificial intelligence in the gaming world. Its latest work led to AlphaStar, an artificial intelligence that beat two of the world's top 50 StarCraft II players. This real-time strategy game is much harder than the games AIs had mastered before (Atari games, Mario, Quake III Arena), since StarCraft II offers various game modes, in particular one-on-one matches in which participants act in real time.

What is the next step?

With artificial intelligence repeatedly beating the human brain at ever more diverse and, above all, ever more complex games, we may wonder what the future holds for AI and its interactions with humans.
While it is clear that machines can now solve increasingly complex problems (see the Go game and StarCraft II), we must also consider the role of humans in this equation.
In terms of optimisation, it seems clear that energy consumption will at least have to be taken into account. Many people would like to see what an AI could do if it operated on the same energy budget as a human being, which would make for more "fair-play" confrontations.

On the other hand, artificial intelligence so far beats humans only at games that humans themselves invented. But is an AI capable of "inventing" a new game? An AI named Angelina is already being trained and improved to achieve precisely that goal.

Image: Shutterstock
