Artificial intelligence

Since ancient times, humanity has sought to model the human mind, that is, to create artificial intelligence. The idea was first expressed by the philosopher and theologian Ramon Llull (c. 1235 – c. 1315), whose "Great Art" (Ars Magna, 14th century) not only proposed a logical machine for solving various problems, based on a universal classification of concepts, but also attempted to implement one. René Descartes (1596–1650) and Gottfried Wilhelm Leibniz (1646–1716) independently developed the doctrine of the mind's innate capacity for cognition and of the universal and necessary truths of logic and mathematics, and worked on a universal language for classifying all knowledge. The theoretical foundations of artificial intelligence rest on these ideas.

The impetus for further development was the appearance of the computer in the 1940s. In 1948 the American scientist Norbert Wiener (1894–1964) formulated the main principles of a new science, cybernetics. In 1956, at a workshop at Dartmouth College (USA) devoted to solving logical problems, a new scientific field concerned with machine modeling of human intellectual functions was recognized and given the name artificial intelligence. The field soon split into two main branches: neurocybernetics and "black box" cybernetics.


Neurocybernetics turned to the structure of the human brain as the only known thinking object and took up its hardware modeling. Physiologists long ago identified neurons, nerve cells connected to one another, as the basis of the brain. Neurocybernetics is concerned with creating neuron-like elements and combining them into functioning systems called neural networks. In the mid-1980s the first neurocomputer, modeling the structure of the human brain, was created in Japan. Its main field of application is pattern recognition.
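The neuron-like element described above can be sketched as a simple perceptron: a weighted sum of inputs passed through a threshold, trained by the classic perceptron rule. This is an illustrative sketch only; the training data (the logical AND function), weights, and epoch count are assumptions, not the design of any particular neurocomputer.

```python
def step(x):
    # Threshold activation: the neuron "fires" when its input is non-negative.
    return 1 if x >= 0 else 0

def train_perceptron(samples, epochs=20, lr=1):
    w = [0, 0]  # one weight per input
    b = 0       # bias
    for _ in range(epochs):
        for (x1, x2), target in samples:
            out = step(w[0] * x1 + w[1] * x2 + b)
            err = target - out
            # Perceptron learning rule: nudge weights toward the target.
            w[0] += lr * err * x1
            w[1] += lr * err * x2
            b += lr * err
    return w, b

# Teach the neuron the logical AND function.
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_perceptron(data)
results = [step(w[0] * x1 + w[1] * x2 + b) for (x1, x2), _ in data]
print(results)  # [0, 0, 0, 1]
```

A single such neuron can only learn linearly separable functions; the power of neural networks comes from combining many of them into layered systems.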

"Black box" cybernetics rests on a different principle: the internal structure of the model is not what matters; what matters is that, for given input data, the model responds as a human brain would. Researchers in this direction develop algorithms for solving intellectual problems on existing computing systems. The most significant results:

1. The labyrinth (maze) search model (late 1950s), in which the problem is represented as a state graph and the task is to find an optimal path from the initial data to the result. In practice, this model did not find wide application.
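The state-graph search described above can be sketched with breadth-first search, one classical way to find a shortest path from an initial state to a goal state. The toy graph below is a made-up maze for illustration, not an example from the original text.

```python
from collections import deque

def shortest_path(graph, start, goal):
    # Breadth-first search: explore states level by level, so the first
    # path that reaches the goal is a shortest one.
    queue = deque([[start]])
    visited = {start}
    while queue:
        path = queue.popleft()
        node = path[-1]
        if node == goal:
            return path
        for neighbour in graph.get(node, []):
            if neighbour not in visited:
                visited.add(neighbour)
                queue.append(path + [neighbour])
    return None  # goal unreachable

# A tiny made-up maze as a state graph: node -> reachable nodes.
maze = {
    "A": ["B", "C"],
    "B": ["D"],
    "C": ["D", "E"],
    "D": ["F"],
    "E": ["F"],
}
print(shortest_path(maze, "A", "F"))  # ['A', 'B', 'D', 'F']
```

Exhaustive search of this kind is exactly what made the model impractical: the number of states in real problems grows combinatorially.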

2. Heuristic programming (early 1960s), which developed action strategies based on predefined rules (heuristics). A heuristic is a rule without theoretical justification that reduces the number of alternatives examined while searching for an optimal path.
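How a heuristic prunes the search can be sketched with greedy best-first search: a heuristic estimate of distance to the goal (here, Manhattan distance) decides which state to expand next. The grid and heuristic are illustrative assumptions, not taken from the text.

```python
import heapq

def greedy_search(start, goal, passable):
    def h(cell):
        # Heuristic: Manhattan distance to the goal.
        return abs(cell[0] - goal[0]) + abs(cell[1] - goal[1])

    # Priority queue ordered by the heuristic value alone.
    frontier = [(h(start), start, [start])]
    seen = {start}
    while frontier:
        _, cell, path = heapq.heappop(frontier)
        if cell == goal:
            return path
        x, y = cell
        for nxt in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if nxt in passable and nxt not in seen:
                seen.add(nxt)
                heapq.heappush(frontier, (h(nxt), nxt, path + [nxt]))
    return None

# A 3x3 open grid of cells (x, y).
cells = {(x, y) for x in range(3) for y in range(3)}
path = greedy_search((0, 0), (2, 2), cells)
print(len(path) - 1)  # number of moves: 4 on this open grid
```

Unlike exhaustive maze search, the heuristic steers expansion toward the goal, so far fewer states are examined, at the cost of no theoretical optimality guarantee.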

3. Methods of mathematical logic, above all the resolution method, which makes it possible to prove theorems automatically from a given set of axioms. In 1972 the logic programming language Prolog, which allows symbolic information to be processed, was created.
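The resolution rule can be illustrated for propositional logic: from clauses {A, B} and {¬A, C} one derives the resolvent {B, C}, and a theorem is proved by refutation when the empty clause is derived. Below is a didactic sketch, not a full theorem prover; literals are strings, with "-P" denoting the negation of "P".

```python
def negate(lit):
    return lit[1:] if lit.startswith("-") else "-" + lit

def resolve(c1, c2):
    """Return all resolvents of two clauses (sets of literals)."""
    resolvents = []
    for lit in c1:
        if negate(lit) in c2:
            resolvents.append((c1 - {lit}) | (c2 - {negate(lit)}))
    return resolvents

def refute(clauses):
    """Saturate the clause set under resolution; return True if the
    empty clause is derived, i.e. the set is unsatisfiable."""
    clauses = {frozenset(c) for c in clauses}
    while True:
        new = set()
        for c1 in clauses:
            for c2 in clauses:
                for r in resolve(c1, c2):
                    if not r:
                        return True   # empty clause: contradiction found
                    new.add(frozenset(r))
        if new <= clauses:
            return False              # nothing new: no refutation
        clauses |= new

# Prove Q from the axioms P and P -> Q (clause {-P, Q}) by refutation:
# add the negated goal {-Q} and derive the empty clause.
print(refute([{"P"}, {"-P", "Q"}, {"-Q"}]))  # True
```

Prolog automates essentially this process for a restricted clause form (Horn clauses), which is what makes logic itself usable as a programming language.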

Since the mid-1970s, the idea of modeling the specific knowledge of human experts has been put into practice. The first expert systems appeared in the USA, giving rise to a new artificial intelligence technology based on the representation and use of knowledge. Since the mid-1980s, artificial intelligence has begun to attract serious investment: industrial expert systems emerged, and interest in self-learning systems grew.
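The knowledge-based approach behind expert systems can be sketched as a minimal forward-chaining rule engine: rules of the form "if premises then conclusion" fire repeatedly until no new facts can be derived. The rules and facts below are invented for illustration, not drawn from any particular expert system.

```python
# Knowledge base: each rule is (set of premises, conclusion).
rules = [
    ({"has_fever", "has_cough"}, "flu_suspected"),
    ({"flu_suspected"}, "recommend_rest"),
]

def forward_chain(facts, rules):
    # Fire rules until a full pass adds no new fact.
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if premises <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

derived = forward_chain({"has_fever", "has_cough"}, rules)
print(sorted(derived))
```

The key shift this represents is that the expertise lives in the declarative rule base, which a domain specialist can extend, rather than in the inference procedure itself.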

Tools