This article continues our series of materials on Artificial Intelligence (AI). Frankly speaking, your humble servant is a practitioner. From my point of view, therefore, the position of some opponents who single out AI as something fundamental, with deep philosophical meaning, is a mistaken one. AI is an applied discipline with established rules and concepts that are used everywhere: intelligent systems and complexes designed for specific tasks, in particular for complex technological and computational processes. As for the "youth" of this science: AI, cybernetics, and information theory are almost peers, and many of the directions we have discussed in the pages of IT publications are much younger. Yet all of this is actually used in practice and is more or less settled.
Why do we need the notion of an agent?
In the previous installment we laid out the main theoretical groundwork on agents: structural modules that exist within a certain environment and transform the results of their own perception into actions upon that same environment. As noted, the very notion of an agent is rather abstract, and different sources define and classify agents ambiguously. In practice, however, this diversity of opinion is not a serious problem. The work of any agent can be described by a standard formula:
y = f(x1, …, xn);
where y is the desired result or action (or several of them) that is issued to, or performed upon, the environment or other agents;
f is an abstract mathematical description of the agent's function, which is implemented concretely in a program;
x1, …, xn are the data received for processing.
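To make the formula concrete, here is a minimal sketch in Python. The class and method names (`Agent`, `act`) and the thermostat example are my own illustration, not anything from an established API:

```python
# A minimal agent: perceives inputs x1..xn and returns an action y = f(x1..xn).
# Names (Agent, act) and the thermostat example are illustrative assumptions.
from typing import Callable

class Agent:
    def __init__(self, f: Callable[..., float]):
        self.f = f  # the agent's function, implemented as ordinary code

    def act(self, *inputs: float) -> float:
        """Transform perceived inputs into a result/action y."""
        return self.f(*inputs)

# Example: a trivial "thermostat" agent deciding how much heating to apply
thermostat = Agent(lambda target, current: max(0.0, target - current))
print(thermostat.act(22.0, 19.5))  # → 2.5
```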
In fact, there is nothing complicated here. The one important thing to understand is that agents can be structural modules at various hierarchical levels, from micro to macro. For example, a large multitasking user-level program (the Windows desktop itself, say) can be described by this formula: it exists within a certain environment, receives the necessary data from it, processes that data, and produces the desired effect or displays the desired result. Descending a hierarchical level, we can say that its structural units are, in relation to it, also agents. They in turn can be broken down into separate functions, and practically speaking those will be agents too. The notion is thus quite convenient to use, and, incidentally, very close to the object-oriented models used in modern programming; in AI you will often meet the similar term "agent-oriented systems". Some academic works describe AI as the result of the interaction of many agents (a community of them), both autonomous and semi-autonomous. And if we speak of artificial intelligence as an imitation of reasonable behavior, then at that hierarchical level an entire community of agents of various types is implied: modules that collect, filter, and process incoming information, generate solutions, coordinate the joint actions of all agents, and so on. In what follows, by "agents" we will most likely mean precisely this level, that is, the structural elements of AI. Artificial intelligence itself implies a similar abstraction: on the one hand, we can speak of implementing specific operations, for example imitating reasonable behavior at the level of an "individual" (a robot driver, a robot vacuum cleaner, etc.), and on the other, of managing a team of such individuals (systems of guard robots, robot researchers), and so on.
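The hierarchy idea can be sketched in a few lines: a macro-level agent whose sub-functions are themselves agents of the same y = f(x1, …, xn) shape. All names and the toy pipeline below are illustrative assumptions:

```python
# Sketch of hierarchical agents: the community is itself an agent y = f(...)
# composed of micro-level agents. Every name here is an invented illustration.
from typing import Callable

def make_agent(f: Callable) -> Callable:
    return f  # in this sketch, an agent is simply a function y = f(x1..xn)

# Micro-level agents: collect, filter, decide
collect = make_agent(lambda raw: [v for v in raw if v is not None])
filter_ = make_agent(lambda xs: [v for v in xs if v >= 0])
decide  = make_agent(lambda xs: max(xs) if xs else 0)

# Macro-level agent composed of the micro-level ones
def community(raw):
    return decide(filter_(collect(raw)))

print(community([3, None, -1, 7]))  # → 7
```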
In computer games, likewise, AI can be realized both at the level of individual characters and at the level of managing an army, a society, and so on.
And the most important thing to understand in this matter: AI is not just a program that answers "4" in response to "2 × 2". That is the most primitive level. By AI we increasingly mean a system that can produce solutions, often under some degree of uncertainty. With that in mind, it makes sense to look at the issue from a somewhat different angle.
Main strategies for finding solutions
Let us consider an example that is simple and familiar to everyone who works at a computer: the solitaire game "Spider". During the first and subsequent deals of cards onto the stacks (except the last), the player operates under uncertainty and cannot guarantee that his actions will lead to victory. Instead he chooses the most suitable moves, part of whose value may pay off at the current moment while the rest builds up stock for the future. This game belongs to the nondeterministic type: it is uncertain and involves chance. Chess, checkers, and other such games, by contrast, are often called deterministic because they are well researched. In truth, all of this is quite relative if we look not from the perspective of modern computer programs and world champions but from that of ordinary people, especially those who are just learning to play. Without knowledge and experience, they too find themselves in a situation of complete uncertainty.
Claude Shannon, the founder of information theory, wrote his first article on programming chess in 1951. Even then he noted two key points: the theoretically best move exists, but finding it is practically impossible. Recall that this was 1951. Within that work, Shannon saw no practical significance in his analysis; the interest was purely theoretical. Ah, these scientists. Heinrich Rudolf Hertz, transmitting and receiving electromagnetic waves wirelessly, likewise saw nothing of practical interest in his experiments. Today, listening to the radio, watching TV, and using countless wireless devices including mobile phones, humanity gratefully appreciates that realization of scientific and technological progress. And Claude Shannon? What is interesting about his work on programming chess? The fact is that one of the main tasks of modern AI is to develop solutions within limited resources and processor capacity, while the AI itself is not always based on experience. In chess the situation is similar, because calculating an entire game in advance is practically impossible: even modern computers lack the hardware resources, and in the early 1950s there was no such prospect in principle. So Shannon derived two main strategies for finding better solutions:
*** Full enumeration of all possible moves to a certain depth (depth meaning the number of half-moves, or plies, looked ahead), using a special evaluation function for the analysis.
*** Considering only the interesting branches and pruning away the uninteresting ones.
Shannon regarded the second option as the better one and the one closer to human thinking, although in fact it occurs almost everywhere in wildlife. If an individual of some species has a certain goal, it selects only the best routes and actions for achieving it and discards the unpromising ones. In most cases it cannot calculate everything in advance (that is, guarantee the success of the selected branches), so movement proceeds in stages, planning further actions to a certain depth. Agree that this situation is very similar to playing "Spider" solitaire :). In other words, while solving a purely chess-related problem, Shannon actually developed the first model of artificial intelligence as such. Incidentally, when we wrote about the "paper Turing machine" we did not note it, but let us say now: its algorithms worked on the second principle, considering only the branches that involved captures (this is called a static, or material, evaluation). Modern AI systems often also use a third type of decision-making, one with a direct relationship to nature, especially among more or less intelligent creatures: as biologists note, and as one can often see in popular science TV programs about animals, much is passed from one individual to another by copying behavior, by imitation.
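Shannon's first strategy, depth-limited enumeration with an evaluation function, can be sketched as a minimax search. The toy game tree and its leaf scores below are invented purely for illustration:

```python
# Depth-limited minimax in the spirit of Shannon's first strategy:
# enumerate moves to a fixed depth, then fall back on an evaluation function.
# The tree, scores, and helper functions are invented for illustration.
def minimax(state, depth, maximizing, children, evaluate):
    kids = children(state)
    if depth == 0 or not kids:  # cutoff: use the static evaluation function
        return evaluate(state)
    scores = [minimax(k, depth - 1, not maximizing, children, evaluate)
              for k in kids]
    return max(scores) if maximizing else min(scores)

# Toy game tree: nested lists are positions, integers are leaf evaluations.
tree = [[3, 5], [2, [9, 1]]]
children = lambda s: s if isinstance(s, list) else []
evaluate = lambda s: s if isinstance(s, int) else 0

print(minimax(tree, 3, True, children, evaluate))  # → 3
```

Shannon's second strategy corresponds to refining this scheme so that obviously uninteresting branches are never expanded at all (as in alpha-beta pruning), which saves resources without changing the result.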
Among the main application areas of artificial intelligence, expert systems are considered to be based on this same principle: knowledge is transferred from a human expert (a physician, mathematician, physicist, economist, etc.), structured, and encoded. Writing such systems is not a burden placed on the experts' shoulders; their only task is to transfer their own experience. Building the program as a whole is the responsibility of an AI specialist whose job title translates as "knowledge engineer". This direction is also actively employed today in computer games, since NPCs (non-player characters, i.e. artificial characters) must have their own AI. And how can that best be achieved? Correct: by repeating the actions of a real player, using his experience. In essence, most of today's programs and specialized software-and-hardware systems can be regarded as expert systems.
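The core of an expert system, expert knowledge structured into code, can be sketched as a list of condition→conclusion rules tried in order. The medical rules below are invented for illustration and are not real diagnostic knowledge:

```python
# A minimal rule-based "expert system" sketch: the knowledge engineer encodes
# an expert's experience as condition -> conclusion rules, tried top to bottom.
# The rules and thresholds here are invented illustrations, not medical facts.
rules = [
    (lambda f: f["temp"] > 38.0 and f["cough"], "suspect flu"),
    (lambda f: f["temp"] > 38.0,                "suspect fever"),
    (lambda f: True,                            "no diagnosis"),  # default rule
]

def diagnose(facts):
    """Return the conclusion of the first rule whose condition matches."""
    for condition, conclusion in rules:
        if condition(facts):
            return conclusion

print(diagnose({"temp": 38.5, "cough": True}))   # → suspect flu
print(diagnose({"temp": 36.6, "cough": False}))  # → no diagnosis
```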
A few words about algorithms…
One of the main tasks of AI programmers is to assess the algorithms embedded in agents or their communities, which essentially must meet three key requirements:
1. The quality of results.
2. Speed.
3. Optimality.
By the first item we mean choosing the most correct decision and minimizing errors, which arise not only from misused algorithms but also, in complex cases and large tasks, from approximation of the calculations (in simple terms, from simplifying them, i.e. replacing the main functions with simpler ones). The second item, speed, at least for the program's part, is usually calculated simply as the number of operations performed. Here one needs to look at the matter very carefully to understand the essence of the calculations. In a previous article in this series we already gave the example of the multiplication table: it can be stored in a database of precomputed results, or one can use the ordinary formula, which is much more convenient and simpler. By optimality we mean the complexity of the algorithmic model in relation to the requirements of the task. Among the accompanying main goals one may name both quality and speed, and sometimes a compromise between them.
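The multiplication-table example from the text can be sketched directly: a precomputed lookup table versus computing on the fly. Both give the same answers; they simply trade memory for a (tiny) amount of computation:

```python
# The multiplication-table trade-off: precomputed storage vs. on-the-fly
# computation. Same answers either way; the difference is memory vs. operations.
table = {(a, b): a * b for a in range(1, 10) for b in range(1, 10)}

def multiply_lookup(a, b):
    return table[(a, b)]  # one dictionary lookup, but 81 stored entries

def multiply_compute(a, b):
    return a * b          # one multiplication, no storage at all

print(multiply_lookup(7, 8), multiply_compute(7, 8))  # → 56 56
```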
Here are some concrete examples. First: for many years, in the IGVC student competition for autonomous ground robots, about which we often write, the machine must drive along a course bounded by white lines. Crossing those boundaries is "rewarded" with penalty points; accumulate a few, and the robot may not even be returned to the track. The decision is simple: take video, select the white regions, and mark them "taboo" for the program. In most cases, given such rules, teams used approximation: the input images were converted (simplified) to binary form so that they contained only black and white, after which it sufficed to program "if… then". In the harder version of the task, barrels and other obstacles with white stripes were placed along the route, and the robots handled them in the same way. But note the ambiguity introduced by this simplification: during filming, mishaps could occur when the lighting changed (cloud cover) or when some sections of the route fell into shadow, so the quality of decision-making was the first thing to suffer. At some point practically everyone could solve the problem, and the deciding factor became the time needed to reach the finish. A few years ago the striped markings on the obstacles were replaced with uniform but colored ones, yet even this did not complicate the task much, because the incoming images can likewise be simplified to a 16- or 256-color representation.

Second example: more complex autonomous robots developed for military purposes are machines equipped with laser scanners for detecting obstacles and perceiving the terrain. If the machine itself is large, approximation is applied differently: a small stone, for instance, cannot be a critical constraint, so it is not taken into account when forming decisions.

Third example: three-dimensional worlds in computer games. What matters is what the user sees, and only those 3D objects need to be shown. It is enough to implement an algorithm that separates the visible part from the invisible, so that only what is necessary gets processed (lighting and so on).
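The binarization approximation from the first example can be sketched in a few lines: reduce a grayscale image to pure black and white with a fixed threshold, then apply a simple "if… then" rule to pixels. The threshold value and function names are my own assumptions:

```python
# Sketch of the binarization approximation: simplify grayscale video frames to
# black/white, then test pixels with a plain "if ... then" rule.
# The threshold value and all names are illustrative assumptions.
THRESHOLD = 128  # brightness at or above this counts as "white" (a boundary line)

def binarize(image):
    """image: rows of grayscale values 0..255 → rows of 0 (black) / 1 (white)."""
    return [[1 if px >= THRESHOLD else 0 for px in row] for row in image]

def is_taboo(binary_image, x, y):
    """The 'taboo' rule: a white pixel marks the forbidden boundary."""
    return binary_image[y][x] == 1

frame = [[ 20,  30, 200],
         [ 25, 210, 220],
         [ 15,  22,  18]]
bw = binarize(frame)
print(bw)                  # → [[0, 0, 1], [0, 1, 1], [0, 0, 0]]
print(is_taboo(bw, 2, 0))  # → True
```

A fixed threshold is exactly where the fragility described above comes from: a cloudy day or a shadowed stretch of the course shifts pixel brightness and flips the classification.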
All three cases show examples of optimality. In practice, programmers and designers choosing an algorithm or a set of algorithms often operate under some uncertainty: in effect, they find themselves in the position of the "Spider" solitaire player. It is by no means rare for a developer to write software that solves a substantial, decisive, major task, only to find during testing errors that manifest themselves in certain situations. What to do: rework the algorithm itself, or make a patch specifically for the identified situations? It is good if the first can be done quickly, but that is not always easy, and moreover it can make the situation worse, since the new algorithm may come with errors of its own. So more often, when the critical cases are a small percentage, patches are made for the specific situations. This happens very frequently when dealing with major challenges. Take Deep Blue, the computer that beat the world chess champion: its algorithmic basis made virtually no use of human experience, apart from a database of endgames. The key decision-making module was a very complicated evaluation function that took into account about six thousand chess-specific indicators. From these, the most profitable branches were selected and calculated to great depth. During testing, a grandmaster played against Deep Blue, followed the computer's play, and identified the situations it misjudged. When an error was found, the development staff met, discussed the cause, and then changed the weights in the evaluation function.
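An evaluation function of the kind described, a weighted sum of position features whose weights are tuned when a misjudgment is found, can be sketched as follows. The features, values, and weights are invented for illustration; Deep Blue's actual indicators numbered in the thousands:

```python
# Sketch of a weighted evaluation function: a position's score is a weighted
# sum of features, and the weights are what the developers adjust after a
# detected misjudgment. All features and numbers are invented illustrations.
weights = {"material": 1.0, "mobility": 0.1, "king_safety": 0.5}

def evaluate(features):
    """Score a position: positive favors us, negative favors the opponent."""
    return sum(weights[name] * value for name, value in features.items())

position = {"material": 3, "mobility": 12, "king_safety": -2}
print(evaluate(position))  # about 3.2  (= 3.0 + 1.2 - 1.0)

# Tuning after a misjudged situation: raise the king-safety weight
weights["king_safety"] = 0.8
print(evaluate(position))  # about 2.6  (= 3.0 + 1.2 - 1.6)
```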
To be continued...