What is Artificial Intelligence (AI)? A Definition of the Concept in Simple Words
Artificial intelligence is the ability of a digital machine or computer-controlled system to perform tasks typically associated with intelligent beings. The term is also applied to the effort to build systems endowed with intellectual processes characteristic of humans, such as the ability to reason, generalize, or learn from past experience.
Moreover, the term AI covers a whole host of related technologies and techniques, such as machine learning, virtual agents, and expert systems. Simply put, an artificial neural network is a rough imitation of the neurons in the cortex: signals are passed from neuron to neuron and finally produce an output, which may be a numerical, categorical, or generative result. The following example illustrates this.
Suppose the program is given pictures of cats and trained to recognize whether each picture contains a cat. The first layer detects the general gradients that make up the overall structure of the image. The next layer picks out larger features, such as ears and a mouth. In the third layer, small features are identified (whiskers, for example). Finally, based on this information, the program outputs “yes” or “no” to say whether it is a cat. The programmer does not have to tell the neurons which features to look for: the AI learned them on its own, training on many pictures with and without cats.
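The layered feature detection described above can be sketched as a toy forward pass through a tiny network. The weights and inputs below are made-up placeholders, not a trained model; a real cat detector learns its weights from many labeled pictures.

```python
import math

def layer(inputs, weights, biases):
    """Each neuron: weighted sum of inputs, then a sigmoid activation."""
    return [
        1.0 / (1.0 + math.exp(-(sum(x * w for x, w in zip(inputs, row)) + b)))
        for row, b in zip(weights, biases)
    ]

pixels = [0.9, 0.1, 0.8]                      # stand-in for image pixels
hidden = layer(pixels, [[0.5, -0.4, 0.3],     # "gradient/edge" detectors
                        [-0.2, 0.6, 0.1]], [0.0, 0.0])
output = layer(hidden, [[0.7, -0.5]], [0.1])  # final "cat or not" score
print("cat" if output[0] > 0.5 else "not a cat")
```

In a real network the same `layer` step is simply repeated with many more neurons per layer, and training adjusts the weights instead of a programmer choosing them.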
Description of the artificial neuron
An artificial neuron is a mathematical function conceived as a model of a biological neuron. Artificial neurons are the elementary units of artificial neural networks. An artificial neuron receives one or more inputs and sums them to produce an output representing its activation potential. Each input is typically weighted separately, and the sum is passed through a non-linear function known as an activation (or transfer) function.
When did AI research start?
In 1935, the British researcher A. M. Turing described an abstract computing machine consisting of a limitless memory and a scanner that moves back and forth through that memory, symbol by symbol, reading what it finds and writing further symbols. The scanner's actions are dictated by a program of instructions that is also stored in memory in symbolic form. The earliest successful AI program was written in 1951 by Christopher Strachey; by 1952 it could play a complete game of checkers against a human. In 1953, Turing published a classic early paper on chess programming.
The Difference between Artificial and Natural Intelligence
Intelligence can be described as the general mental capacity for thinking, problem-solving, and planning. By its very nature, intelligence integrates cognitive functions such as perception, attention, memory, language, and planning. Human intelligence is characterized by a conscious attitude toward the environment. Human thought cannot be separated from embodiment and is often emotionally colored. Furthermore, a human is a social being, so society also shapes thought. Artificial intelligence is fundamentally different in all these respects.
How do human and computer intelligence compare?
It is possible to compare human thinking with artificial intelligence along several general parameters of how the brain and the machine are organized. The activity of a computer, like that of the brain, includes four stages: encoding data, storing it, analyzing it, and producing a result. In addition, both the human brain and AI can learn from data received from the environment, and both solve problems (or tasks) using certain algorithms.
Do computer programs have an IQ?
No. The IQ score measures the development of a person's intelligence relative to their age. AI exceeds some human abilities in certain respects, for example, it can hold a huge quantity of numbers in memory, but this has nothing to do with IQ.
The symbolic approach to AI is the collection of methods for artificial intelligence research based on high-level symbolic (human-readable) representations of problems, logic, and search. The symbolic approach dominated AI research from the 1950s through the 1980s. One of its popular forms is the expert system, which relies on a set of production rules.
Production rules link symbols into logical relationships similar to an If-Then statement. The expert system processes the rules to draw conclusions and to determine what additional information it needs, that is, what questions to ask, using human-readable symbols.
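The If-Then mechanism can be sketched as a tiny forward-chaining loop. The rules and facts below are invented for illustration; real expert systems hold hundreds of rules elicited from human experts.

```python
# Each rule: if all condition symbols are known facts, conclude a new fact.
rules = [
    ({"has_fur", "says_meow"}, "is_cat"),
    ({"is_cat", "is_small"}, "is_kitten"),
]

def infer(initial_facts, rules):
    """Repeatedly fire any rule whose conditions are satisfied
    until no new facts can be derived (forward chaining)."""
    facts = set(initial_facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in rules:
            if conditions <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

print(infer({"has_fur", "says_meow", "is_small"}, rules))
```

Note how the second rule only fires after the first has added `is_cat` to the fact base, which is exactly the chain of conclusions an expert system draws.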
The term “logical approach” refers to reasoning and problem-solving through formal logical steps. As early as the 19th century, logicians developed a precise notation for all kinds of objects in the world and the relationships between them. By 1965, programs existed that could, in principle, solve any solvable problem stated in logical notation (the approach peaked in popularity from the late 1950s through the 1970s).
Proponents of the logical approach hoped to build intelligent systems on such programs (in particular, programs written in the Prolog language). However, this approach has two limitations. First, it is not easy to take informal knowledge and state it in the formal terms that logical processing requires. Second, there is a big difference between solving a problem in principle and solving it in practice.
An agent (from the Latin agere, “to do”) is something that acts. Of course, any computer system does something, but computer agents are expected to do more: operate autonomously, perceive their environment (through various sensors), respond to changes, and set and pursue goals. An agent that acts to achieve the best expected outcome is called a rational agent.
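The perceive-decide-act cycle described above can be sketched as a minimal reflex agent. The thermostat-like environment, the threshold of 25, and all function names here are hypothetical placeholders, not part of any real agent framework.

```python
def perceive(environment):
    """The agent's 'sensor': read the current temperature."""
    return environment["temperature"]

def decide(percept):
    """A simple condition-action rule."""
    return "cool" if percept > 25 else "idle"

def act(environment, action):
    """The agent's 'actuator': change the environment."""
    if action == "cool":
        environment["temperature"] -= 1

environment = {"temperature": 28}
for _ in range(5):  # the agent's sense-decide-act loop
    action = decide(perceive(environment))
    act(environment, action)

print(environment["temperature"])  # driven down toward the threshold
```

Even this trivial loop shows the agent properties from the definition: it senses its environment, reacts to changes, and works toward a goal state without further instruction.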
The hybrid approach, which became popular in the late 1980s, is considered the most effective because it combines symbolic and neural models. It increases both the cognitive and the computational capabilities of the machine.
The artificial intelligence technology market
The market is expected to grow to $190.61 billion by 2025, at an annual growth rate of 36.62%. Growth is driven by factors such as the adoption of cloud applications and services, the rise of big data, and strong demand for intelligent virtual assistants. However, experts who can develop and implement AI technologies are still scarce, and this is holding back market growth. AI-powered systems also require integration and ongoing technical support.
Among the market's drivers and leaders are two corporations, Intel and AMD, makers of the most powerful processors. Intel has traditionally focused on machines with higher clock speeds, while AMD concentrates on increasing core counts and delivering multi-threaded performance.
National Development Concept
Three dozen countries have already approved national strategies for the development of AI. In Russia, the draft National Strategy for the Development of AI is due to be adopted in October 2019. A special legal regime is expected to be introduced in Moscow to facilitate the development and deployment of AI technologies.
The questions of what artificial intelligence is and how it works have occupied scientists in many countries for more than a decade. The US federal budget spends $200 million annually on research. In Russia, about 23 billion rubles were allocated over the 10 years from 2007 to 2017. Support for AI research will be an important part of the national strategy: new research centers will soon open in Russia, and the development of innovative AI software will continue.
The rules and regulations governing AI in Russia are under constant revision. National standards, currently being drafted by market leaders, are expected to be approved in late 2019 or early 2020. In parallel, the National Standardization Plan for 2020 and beyond is being formed. Internationally, there is a standard titled “Artificial Intelligence. Concepts and Terminology,” and in 2019 experts began developing its Russian-language version. That document is due to be approved in 2021.
Impact of Artificial Intelligence
The adoption of AI is inextricably linked to advances in science and technology, and its range of applications expands every year. We see this every day, when a large online retailer recommends a product to us, or when we open a browser and see an ad for a film we had just wanted to watch. These recommendations are driven by algorithms that analyze what the user has bought or viewed, and inside those algorithms is artificial intelligence.
Economy and Business
The penetration of AI technology into all spheres of the economy is expected to increase the volume of the global market for goods and services by $15.7 trillion by 2030. The United States and China remain the leaders in AI projects of all kinds. Other developed countries, such as Germany, Japan, Canada, and Singapore, are also striving to realize AI's full potential. Many countries whose economies are growing at a moderate pace, such as Italy, India, and Malaysia, are building strengths in specific areas of AI application.
The Labor Market
The global impact of AI on the labor market will follow two scenarios. First, the spread of certain technologies will lead to layoffs for a large number of people, since computers will take over many tasks. Second, as technology advances, AI specialists will be in great demand across many industries.