Thursday, November 26, 2020

Artificial Intelligence: the Challenges of Deep Learning

It is very probable that a human will never again be able to beat the machine at the game of Go. This game of territorial conquest is extremely complex, and its number of possible combinations exceeds the number of atoms in the universe. The machine therefore cannot rely solely on its computational power, as it long did for chess or shogi, to mechanically scan all possible moves and outdo the human brain. It must develop strategies and learn to learn: this is what is called "deep learning".

Deep learning is a form of artificial intelligence. It is a subset, or an evolution, of what is more commonly called "machine learning", which refers to a machine's ability to identify patterns, analyze data and improve its own skills, with or without human help.

From machine learning to reinforcement learning

When humans confront machines in games of strategy (rather than chance), these machines follow in the footsteps of Alan Turing and his work on the "learning machine": the point is for the machine to learn, and above all to improve without human intervention, until it can beat a good player.

Thus, on May 11, 1997, Deep Blue became the first machine to beat the reigning world champion, Garry Kasparov, in a legendary man-versus-machine chess match.

Its advantage was knowing the rules, holding thousands of games in memory, and running very powerful calculation and prediction algorithms.

Before moving a single piece, Deep Blue's strategy was to:

  1. generate a gigantic number of possible moves for the situation at hand during the game;

  2. compare them against a database of pre-recorded games;

  3. select the move most likely to lead to a win (that is, the move after which the player making it most often went on to win the game).
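The three steps above can be sketched in a few lines of code. This is only a toy illustration of the generate-compare-select idea, not Deep Blue's actual algorithm: the function names, the string "positions" and the tiny "database" are all invented for the example.

```python
# Toy sketch of the three-step strategy: enumerate candidate moves,
# compare each resulting position against a small database of known
# outcomes, and pick the move with the best expected result.

def evaluate(position, database):
    """Score a position: prefer positions known to lead to a win."""
    return database.get(position, 0)  # 0 = position not in the database

def choose_move(position, legal_moves, apply_move, database):
    """Step 1: generate moves; step 2: look each result up; step 3: pick the best."""
    scored = [(evaluate(apply_move(position, m), database), m) for m in legal_moves]
    best_score, best_move = max(scored)
    return best_move

# Minimal illustration: positions are strings, a move appends a letter.
database = {"ab": 1, "ac": -1}            # +1 = leads to a win, -1 = to a loss
apply_move = lambda pos, m: pos + m
print(choose_move("a", ["b", "c"], apply_move, database))  # -> b
```

The real difficulty, of course, lies in step 1: for Go, the number of candidate continuations explodes far beyond what any such exhaustive scan can handle.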

This kind of machine learning remains fairly traditional; but to take on the great players of the game of Go, neural networks were waiting in ambush!

In 2016, deep learning had its breakthrough with the AlphaGo-Lee experiment, developed by DeepMind, a subsidiary of Alphabet, Google's parent company. This machine was truly the first to beat a great master of the game, the South Korean Lee Sedol. But even though it already relied on two neural networks (one to decide what to play, the other to judge the consequences of that decision), it first needed humans: human games had to be recorded and ingested beforehand, in a volume inaccessible to any human competitor.

Then came "reinforcement learning" (or tabula rasa learning) with AlphaGo-Zero, again developed by DeepMind. This machine runs on a single neural network and can do without human games and human supervision. The idea is simply to give the machine the rules of Go and then let it learn from a blank page! What impresses in this experiment is that after only three days of reinforcement learning, AlphaGo-Zero crushed AlphaGo-Lee by 100 wins to zero. The logic of the technique is to make machines play against each other, forcing them to feed on their own computing power and their own learning strategy.
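The self-play idea can be sketched in miniature. The "game" below (the higher move simply wins) and the +1 update rule are invented for illustration; AlphaGo-Zero combines self-play with deep networks and Monte-Carlo tree search, which this sketch does not attempt to reproduce.

```python
# Minimal, deterministic sketch of self-play reinforcement: a policy
# stores a weight per move, plays every move against every other, and
# the moves that win get their weights reinforced.

def self_play_round(policy):
    """One round: each move meets each other move; winners gain weight."""
    moves = list(policy)
    for a in moves:
        for b in moves:
            if a == b:
                continue
            winner = max(a, b)        # toy rule: the higher move wins
            policy[winner] += 1.0     # reinforce the winning move
    return policy

def train(policy, rounds):
    for _ in range(rounds):
        self_play_round(policy)
    return policy

policy = train({1: 1.0, 2: 1.0, 3: 1.0}, rounds=3)
print(max(policy, key=policy.get))    # -> 3: self-play has singled out the best move
```

No human example was needed: the policy discovered the strongest move purely by playing against itself, which is the tabula rasa principle in miniature.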

It therefore seems clear that the human brain, overtaken by the machine's neural networks and outstripped by its computational and analytical (algorithmic) capabilities, will never be able to beat it again… at least as long as the machine understands and properly assimilates the rules of the game!

In this connection, it is interesting to note DeepMind's recent failure on a fairly basic mathematics test (algebra). This failure reveals how difficult it still is for an AI, however powerful, to understand questions that combine symbols, text and functions at once, such as "What is the sum of 1 + 1 + 1 + 1 + 1 + 1 + 1?".

Deep learning and neural networks

The emergence of artificial neural networks and their integration into machines and robots marked the beginning of deep learning, with its successes, failures and questions.

Artificial neural networks are largely inspired by the neural networks of the human brain: the more layers of neurons the network stacks (30 layers for Google Photos), the "deeper" it is. However, instead of an electrical signal travelling from neuron to neuron to excite or inhibit them, the network assigns numerical weights to its neurons, giving them more or less importance in the final decision at the end of the process. Generally, the first layers focus on broad features, the intermediate layers on finer characteristics, and the last on details.

Neural network.
Mark Bidan

Each colored layer here represents a level of information increasingly necessary to characterize the target object (in blue) without too much ambiguity. Each layer of artificial neurons is assigned a weight (from 0 to 100, for example) that is refined gradually, through experience, to assess how important that level of detail is in the final characterization of the object.
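The flow of a signal through such weighted layers can be sketched as a bare-bones forward pass. The two-layer weights below are arbitrary examples, and the sigmoid squashing function is one common choice among several; this is an illustration of the layered-weights idea, not any production network.

```python
import math

# Sketch of a forward pass: the signal flows through successive layers,
# and each connection's numeric weight amplifies or dampens it, playing
# the excite/inhibit role of an electrical impulse in the brain.

def neuron(inputs, weights, bias):
    """Weighted sum squashed to (0, 1): the neuron's activation."""
    total = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-total))   # sigmoid activation

def layer(inputs, weight_rows, biases):
    """Apply every neuron of one layer to the same inputs."""
    return [neuron(inputs, w, b) for w, b in zip(weight_rows, biases)]

def forward(inputs, layers):
    """Feed the signal through each layer in turn."""
    signal = inputs
    for weight_rows, biases in layers:
        signal = layer(signal, weight_rows, biases)
    return signal

# Two tiny layers: 2 inputs -> 2 hidden neurons -> 1 output neuron.
layers = [
    ([[0.5, -0.5], [1.0, 1.0]], [0.0, -1.0]),  # hidden layer
    ([[1.0, 1.0]], [-1.0]),                    # output layer
]
output = forward([1.0, 0.0], layers)
print(output[0])   # a value between 0 and 1: the network's "opinion"
```

Training consists precisely in nudging those weight numbers until the final activation agrees with reality, which is what the pedestrian example below describes.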

If, for example, the task is to recognize a pedestrian and distinguish one from a cyclist, as with the sensors embedded in a moving autonomous car, the algorithm must be able to decide and commit quickly. It has to give the driver one of three answers: "yes" (pedestrian), "no" (no pedestrian), or "neither yes nor no" (it does not know whether this is a pedestrian). To do this, the algorithm can rely on a network of artificial neurons, itself trained on millions of images of pedestrians in every situation (standing, crouching, from the front, from behind, in profile, isolated, in groups, in rain, in sunshine, etc.), mixed with other assorted images that do not show pedestrians, to force it to choose well what it "thinks" characterizes a pedestrian.

The network will then extract the key features and assign them weights. Interestingly, these features may never have been noticed by humans. The last layer of the neural network decides what a human would call a pedestrian and what not! In the first case, checked against the pedestrian database validated by humans, the network learns whether it was wrong or not. If it succeeds, the network keeps the information (here, the image) and uses it for future decisions; if it fails, the network corrects the wrong weights assigned to its neurons so as not to be wrong again. In the second case, more complicated in computational terms, the network does the same to correct what it thought was not a pedestrian but eventually turns out to be one (a skater, a scooter rider, a cyclist walking beside their bike…).
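This correction step, "keep what worked, nudge the weights when wrong", can be shown in its simplest form with a single artificial neuron (a perceptron). The two numeric "features" and their labels are invented stand-ins for real image data; a real detector would learn from millions of pixels, not two hand-picked scores.

```python
# Sketch of error-driven weight correction on one neuron: on each
# labelled example the neuron answers "pedestrian" (1) or "not" (0);
# when it is wrong, the weights that led to the wrong answer are nudged
# toward the correct one.

def predict(features, weights, bias):
    score = sum(f * w for f, w in zip(features, weights)) + bias
    return 1 if score > 0 else 0       # 1 = pedestrian, 0 = not

def train(samples, weights, bias, lr=0.1, epochs=20):
    for _ in range(epochs):
        for features, label in samples:
            error = label - predict(features, weights, bias)
            if error:                  # wrong answer: correct the weights
                weights = [w + lr * error * f for w, f in zip(weights, features)]
                bias += lr * error
    return weights, bias               # right answers leave weights untouched

# Toy data: (upright-shape score, two-legs score) -> pedestrian?
samples = [([0.9, 0.8], 1), ([0.8, 0.9], 1), ([0.1, 0.2], 0), ([0.2, 0.1], 0)]
weights, bias = train(samples, [0.0, 0.0], 0.0)
print(predict([0.85, 0.9], weights, bias))  # pedestrian-like input -> 1
```

A deep network applies the same principle through many layers at once, propagating the error backward to decide which weights to blame.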

If human intervention is essential, especially upstream, by telling the machine which type of image shows a pedestrian, or downstream, by adjusting the machine after a wrong decision, then we speak of supervised learning. But if there is no human intervention, the neural networks are trusted to learn on their own what is (or is not) important in characterizing an image (or any given target) without the data being labeled: this is unsupervised learning.
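The unsupervised case can be illustrated with the classic 2-means clustering algorithm: the same kind of data as before, but with no labels at all, so the algorithm must discover on its own that the points fall into two groups. The one-dimensional points below are invented for the example.

```python
# Sketch of unsupervised learning: nobody tells the algorithm which
# points belong together; two cluster centers simply drift toward the
# groups present in the unlabelled data.

def two_means(points, iters=10):
    """Split 1-D points into two clusters around two moving centers."""
    a, b = min(points), max(points)        # initial centers: the extremes
    for _ in range(iters):
        ca = [p for p in points if abs(p - a) <= abs(p - b)]  # nearer to a
        cb = [p for p in points if abs(p - a) > abs(p - b)]   # nearer to b
        a = sum(ca) / len(ca)              # move each center to the mean
        b = sum(cb) / len(cb)              # of the points it attracted
    return a, b

points = [0.1, 0.2, 0.15, 0.9, 0.85, 0.95]   # two obvious groups, unlabelled
a, b = two_means(points)
print(round(a, 2), round(b, 2))  # -> 0.15 0.9
```

The algorithm finds the two groups, but it is up to a human, afterwards, to say what those groups mean, which is exactly the trade-off the paragraph above describes.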

Consequences and applications of deep learning

Deep learning naturally has many applications, in research and in daily life. Image recognition in general, and face recognition in particular, has become a vast field of application. Facebook does not hesitate to mobilize it to automatically identify your friends in the photos you post. The Chinese authorities also rely on it, through millions of face-detection (and soon whole-body) cameras developed notably by Watrix, to deploy their social credit program. Apple, too, is fond of this technology, particularly for facial recognition with the Face ID feature built into the iPhone X.

Google is also a heavy user of deep learning and of this type of artificial intelligence, through applications such as Google Translate, drawing applications such as Google Canvas / Google Drawing, or shopping applications such as Google Shopping Actions. Amazon, of course, is not lagging behind and uses such technology in B2C, through many applications on its e-commerce platform, and in B2B, through its cloud-computing subsidiary Amazon Web Services. AWS offers preconfigured environments that let its clients build deep-learning applications online under Ubuntu, Amazon Linux or Windows 2016 (base AMI / Conda AMI).

The application areas are, of course, countless. Even if artificial intelligences remain disappointing when they encounter humor, love, irony or even art and artistic creativity, they are moving into military intelligence, health, transport research, agribusiness, economics, management and finance, and of course into sensitive areas such as politics, journalism and education.

Does deep learning make sense?

In the end, the applications are countless and can, and must, remain in the service of humanity.

Its limits are tied to our own knowledge of the brain and its neural networks, which we can then replicate in artificial neural networks by replacing electrical impulses with numerical weights. Deep learning is de facto constrained by just three pitfalls. The first concerns energy, and the depletion of the resources needed for it to function well (see the Villani report of March 8, 2018), given that computers are becoming ever more powerful, data ever more massive, and neural algorithms ever richer. The second is the financial, cognitive and regulatory capacity available for AI research. The third, and most delicate, concerns ethical and moral questions, given that deep learning by its very nature ultimately allows the machine to decide by itself and do without the human "move" (witness the debates over autonomous drones and killer robots).
