Lecture notes on Artificial Intelligence and Expert systems

Dr. Aston Cole, Researcher, United Kingdom
Published: 10-07-2017
Eszterházy Károly College, Institute of Mathematics and Informatics

ARTIFICIAL INTELLIGENCE AND ITS TEACHING

Lecture notes by Dr. Gergely Kovásznai and Dr. Gábor Kusper

Table of Contents
1 Introduction
2 The History of Artificial Intelligence
  2.1 Early Enthusiasm, Great Expectations (till the end of the 1960s)
  2.2 Disillusionment and the knowledge-based systems (till the end of the 1980s)
  2.3 AI becomes industry (since 1980)
3 Problem Representation
  3.1 State-space representation
  3.2 State-space graph
  3.3 Examples
    3.3.1 3 jugs
    3.3.2 Towers of Hanoi
    3.3.3 8 queens
4 Problem-solving methods
  4.1 Non-modifiable problem-solving methods
    4.1.1 The trial and error method
    4.1.2 The trial and error method with restart
    4.1.3 The hill climbing method
    4.1.4 The hill climbing method with restart
  4.2 Backtrack search
    4.2.1 Basic backtrack
    4.2.2 Backtrack with depth limit
    4.2.3 Backtrack with cycle detection
    4.2.4 The Branch and Bound algorithm
  4.3 Tree search methods
    4.3.1 General tree search
    4.3.2 Systematic tree search
      4.3.2.1 Breadth-first search
      4.3.2.2 Depth-first search
      4.3.2.3 Uniform-cost search
    4.3.3 Heuristic tree search
      4.3.3.1 Best-first search
      4.3.3.2 The A algorithm
      4.3.3.3 The A* algorithm
      4.3.3.4 The monotone A* algorithm
      4.3.3.5 The connection among the different variants of the A algorithm
5 2-player games
  5.1 State-space representation
  5.2 Examples
    5.2.1 Nim
    5.2.2 Tic-tac-toe
  5.3 Game tree and strategy
    5.3.1 Winning strategy
  5.4 The Minimax algorithm
  5.5 The Negamax algorithm
  5.6 The Alpha-beta pruning
6 Using artificial intelligence in education
  6.1 The problem
    6.1.1 Non-modifiable searchers
    6.1.2 Backtrack searchers
    6.1.3 Tree search methods
    6.1.4 Depth-first method
    6.1.5 2-player game programs
  6.2 Advantages and disadvantages
7 Summary
8 Example programs
  8.1 The AbstractState class
    8.1.1 Source code
  8.2 How to create my own operators?
    8.2.1 Source code
  8.3 A State class example: HungryCavalryState
    8.3.1 Source code
  8.4 Another State class example
    8.4.1 The example source code of the 3 monks and 3 cannibals
  8.5 The Vertex class
    8.5.1 Source code
  8.6 The GraphSearch class
    8.6.1 Source code
  8.7 The Backtrack class
    8.7.1 Source code
  8.8 The DepthFirstMethod class
    8.8.1 Source code
  8.9 The Main Program
    8.9.1 Source code
Bibliography

1 INTRODUCTION

Surely everyone has wondered at some point what artificial intelligence is. In most cases, the answer from a mathematically educated colleague comes in an instant: it depends on the definition. If artificial intelligence means that the computer beats us in chess, then we are very close to attaining artificial intelligence. If the definition is to drive a land rover through a desert from point A to point B, then we are again on the right track to realizing artificial intelligence. However, if our expectation is that the computer should understand what we say, then we are far away from it. These lecture notes use artificial intelligence in the first sense.
We will present „clever” algorithms that can be used to solve so-called graph searching problems. Problems that can be rewritten as a graph search – such as chess – can be solved by the computer. Alas, the computer will not become clever in the ordinary sense of the word if we implement these algorithms; at best, it will be able to systematically examine a graph in search of a solution. So our computer remains as thick as two short planks, but we exploit the only two good qualities that a computer has:
▫ The computer can do algebraic operations (addition, subtraction, etc.) very fast.
▫ It does these correctly.
So we exploit the fact that problems that are too difficult for a human to see through – like solving the Rubik's Cube – can be represented by graphs that are relatively small compared to the capabilities of a computer. Quickly and correctly applying the steps dictated by a graph search algorithm will result in a fast-solved Cube, and due to the correctness, we can be sure that the solution is right. At the same time, we can easily find a problem whose graph representation is so huge that even the fastest computers are unable to find a solution in the enormous graph quickly. This is where the main point of our notes comes in: the human creativity required by artificial intelligence – representing a problem in such a way that its graph stays small. This is a skill that should start being developed in high school. It requires the expansion of the following skills:
▫ Model creation by the abstraction of reality
▫ Systems approach
It would be worthwhile to add algorithmic thinking to the list above, which is required to think over and execute the algorithms published in these notes. We will talk about this in a subsequent chapter. When applying artificial intelligence, the solution of a problem consists of the following steps: We model the real problem. We solve the modelled problem.
With the help of the solution found in the model, we solve the real problem. Each step is assisted by different branches of science. At the first step, the help comes from the sciences that describe reality: physics, chemistry, etc. The second step uses an abstract system of ideas, where mathematics and logic help to work on the abstract objects. Finally, the engineering sciences and informatics help to transplant the model's solution back into reality. This is all nice, but why can't we solve the existing problem directly in reality? Why do we need modelling? The answer is simple: searching can be quite difficult and expensive in reality. If the well-known 8 Queens Problem had to be played with 1-ton iron queens, we would also need a massive hoisting crane, and the search would take a few days and a few hundred litres of diesel oil until we found a solution. It is easier and cheaper to search for a solution in an abstract space. That is why we need modelling. What guarantees that the solution found in the abstract space will work in reality? In other words, what guarantees that a house built this way will not collapse? This is a difficult question. For the answer, let's look at the different steps in detail. Modelling the existing problem:
▫ We magnify the parts of the problem that are important for the solution and neglect the ones that are not.
▫ We have to count and measure the important parts.
▫ We need to identify the possible „operators” that can be used to change reality.
Modelling the existing problem is called state-space representation in artificial intelligence; we have a separate chapter on this topic. Here, we deal with this question in connection with the „will the house collapse” issue. Unfortunately, a house can be ruined at this point, because if we neglect an important issue, like the depth of the wall, the house may collapse. How does this problem, finding the important parts in a text, appear in secondary school?
Fortunately, it usually appears as a maths exercise, which rarely contains unnecessary information. The writer of the exercise usually takes it the other way round, and we need to find some additional information which is hidden in the text. It is also important to know that measuring reality is always burdened with errors. With the tools of numerical mathematics, the accumulation of the initial errors can be estimated, so the error content of the solution can also be given. The third step, the identification of the „operators”, is the most important one from the point of view of artificial intelligence. An operator is a thing that changes the part of reality that is important for us; namely, it takes us from one well-describable state into another. Regarding artificial intelligence, making a move in chess is an operator, but chopping down a tree may not be, unless the number of trees is an important detail in the solution of the problem. We will see that our model, also known as the state space, can be given with the initial state, the set of end states, the set of possible states, and the operators (including the pre- and postconditions of the operators). We need to go through the following steps to solve the modelled problem:
▫ Choose a framework that can solve the problem.
▫ Set up the model in the framework.
▫ The framework solves the problem.
Choosing a framework that is able to solve our model means choosing the algorithm that can solve the modelled problem. This doesn't mean that we have to implement this algorithm ourselves. For example, the Prolog interpreter uses backtrack search; we only need to implement – this is the second step – the rules that describe the model in Prolog. This step is influenced by whether we chose transformational operators (which create one state from another state) or problem-reduction operators (which create several states from another state) in the state-space representation.
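The difference between the two operator kinds can be sketched in a few lines of code. The type names below are our own illustration (they are not part of the notes' example programs): a transformational operator maps one state to one new state, while a problem-reduction operator maps one state to a list of sub-states.

```java
import java.util.List;
import java.util.function.Function;

public class OperatorKinds {
    // Transformational operator: one state -> one state.
    public interface Transformational<S> extends Function<S, S> {}
    // Problem-reduction operator: one state -> several sub-states.
    public interface ProblemReduction<S> extends Function<S, List<S>> {}

    public static void main(String[] args) {
        Transformational<Integer> addOne = s -> s + 1;                 // one successor state
        ProblemReduction<Integer> split = s -> List.of(s - 1, s / 2);  // two sub-problems
        System.out.println(addOne.apply(4));   // 5
        System.out.println(split.apply(4));    // [3, 2]
    }
}
```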
So we can take the definition of the operators to be the next step after choosing the framework. The frameworks may differ from each other in many ways; possible groupings are:
▫ Algorithms that surely find a solution in a finite, cycle-free graph.
▫ Algorithms that surely find a solution in a finite graph.
▫ Algorithms that give an optimal solution according to some criterion.
If we have an adequate framework, our last task is to implement the model in the framework. This usually means specifying the initial state, the end condition, and the operators (with pre- and postconditions). Then we only need to push the button, and the framework will solve the problem if it is able to. Now, assume that we have got a solution. First of all, we need to know what we mean by 'solution'. A solution is a sequence of steps (operator applications) that leads from the initial state into an end state. So, if the initial state is that we have enough material to build a house, and the end state is that a house has been built according to the design, then the solution is a sequence of steps describing how to build the house. There is only one question left: will the house collapse? The answer is definitely 'no' if we haven't made any mistake in the previous step, which was creating the model, and will not make one in the next step, which is transplanting the abstract model back into reality. The warranty for this is the fact that the algorithms introduced in these notes are correct: it can be proven by logical methods that if they produce a solution, it is a correct solution inside the model. Of course, we can mess up the implementation of the model (by giving an incorrect end condition, for example), but if we manage to avoid this pitfall, we can trust our solution to the same extent as we trust logic. The last step is to solve the real problem with the solution that we found in the model. We have no other task than executing the steps of the model's solution in reality.
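The three steps above can be sketched as a toy "framework". Everything below – the names and the choice of breadth-first exploration – is our own minimal illustration, not one of the algorithms detailed in later chapters: the user only supplies the initial state, the goal condition, and the operators with their preconditions, then pushes the button.

```java
import java.util.*;
import java.util.function.*;

public class FrameworkSketch {
    // A transformational operator: a name, a precondition, and a state transformation.
    public record Operator<S>(String name, Predicate<S> precondition, UnaryOperator<S> apply) {}

    // The "framework": explores reachable states breadth-first and returns the
    // operator-name sequence of a solution, or null if there is none.
    public static <S> List<String> solve(S initial, Predicate<S> goal, List<Operator<S>> ops) {
        Map<S, List<String>> path = new HashMap<>();
        Deque<S> open = new ArrayDeque<>();
        path.put(initial, List.of());
        open.add(initial);
        while (!open.isEmpty()) {
            S s = open.poll();
            if (goal.test(s)) return path.get(s);        // sequence of applied operators
            for (Operator<S> o : ops) {
                if (!o.precondition().test(s)) continue; // respect the precondition
                S next = o.apply().apply(s);
                if (!path.containsKey(next)) {
                    List<String> p = new ArrayList<>(path.get(s));
                    p.add(o.name());
                    path.put(next, p);
                    open.add(next);
                }
            }
        }
        return null; // the problem is not solvable
    }

    public static void main(String[] args) {
        // Toy model: reach 10 from 0 using "+3" (applicable below 8) and "-2" (above 1).
        List<Operator<Integer>> ops = List.of(
            new Operator<Integer>("plus3", s -> s <= 7, s -> s + 3),
            new Operator<Integer>("minus2", s -> s >= 2, s -> s - 2));
        System.out.println(solve(0, s -> s == 10, ops)); // a shortest 5-step sequence
    }
}
```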
Here, we may find that a step that was quite simple in the model (like moving the queen to the A1 square) is difficult, if not impossible, in reality. If we find that a step is impossible, then our model is incorrect. If we don't trust the solution given by the model, then it is worth trying it out on a small scale. If we haven't messed up any of the steps, then the house will stand, which is guaranteed by the correctness of the algorithms and the fact that logic is based on reality.

2 THE HISTORY OF ARTIFICIAL INTELLIGENCE

Studying intelligence is one of the most ancient scientific disciplines. Philosophers have been trying to understand for more than 2000 years what mechanisms we use to sense, learn, remember, and think. From this 2000-year-old philosophical tradition, the theory of reasoning and learning has developed, along with the view that the mind is created by the functioning of some physical system. Among others, these philosophical theories made the formal theories of logic, probability, decision-making, and computation develop from mathematics. The scientific analysis of the skills connected with intelligence was turned into real theory and practice with the appearance of computers in the 1950s. Many thought that these „electrical masterminds” had unlimited potential for realizing intelligence. „Faster than Einstein” became a typical newspaper headline. In the meantime, modelling intelligent thinking and behaviour with computers proved much more difficult than many had thought at the beginning. Artificial Intelligence (AI) deals with the ultimate challenge: how can a (either biological or electronic) mind sense, understand, foretell, and manipulate a world that is much larger and more complex than itself? And what if we would like to construct something with such capabilities? AI is one of the newest fields of science. Formally it was created in 1956, when its name was coined, although some research had already been going on for 5 years.

Figure 1. The early optimism of the 1950s: „The smallest electronic mind of the world” :)

AI's history can be broken down into three major periods.

2.1 EARLY ENTHUSIASM, GREAT EXPECTATIONS (TILL THE END OF THE 1960S)

In a way, the early years of AI were full of successes. If we consider the primitive computers and programming tools of that age, and the fact that even a few years earlier, computers were only thought to be capable of doing arithmetical tasks, it was astonishing that the computer was – even if far from it – capable of doing clever things. In this era, the researchers drew up ambitious plans (world champion chess software, universal translator machine), and the main direction of research was to devise general problem-solving methods. Allen Newell and Herbert Simon created a general problem-solving application, the General Problem Solver (GPS), which may have been the first software to imitate the protocols of human problem solving. This was the era when the first theorem provers came into existence. One of these was Herbert Gelernter's Geometry Theorem Prover, which proved theorems based on explicitly represented axioms. Arthur Samuel wrote an application that played draughts, whose playing strength reached the level of competition players. Samuel endowed his software with the ability to learn: the application started as a beginner-level player, but it became a strong opponent after playing for a few days against itself, eventually becoming a worthy opponent even for strong human players. Samuel thus refuted the claim that a computer is only capable of doing what it was told to do, as his application quickly learnt to play better than Samuel himself. In 1958, John McCarthy created the Lisp programming language, which grew into the primary language of AI programming. Lisp is the second oldest programming language still in use today.
2.2 DISILLUSIONMENT AND THE KNOWLEDGE-BASED SYSTEMS (TILL THE END OF THE 1980S)

The general-purpose software of the early period of AI was only able to solve simple tasks effectively and failed miserably when applied to a wider range of problems or to more difficult tasks. One source of difficulty was that early software had very little or no knowledge about the problems it handled and achieved its successes by simple syntactic manipulations. There is a typical story in connection with early computer translation. After Sputnik's launch in 1957, the translation of Russian scientific articles became urgent. At the beginning, it was thought that simple syntactic transformations based on English and Russian grammar, together with word substitution, would be enough to capture the precise meaning of a sentence. According to the anecdote, when the famous sentence „The spirit is willing, but the flesh is weak” was translated and then translated back, it gave the following text: „The vodka is strong, but the meat is rotten.” This clearly showed the difficulties experienced, and the fact that general knowledge about a topic is necessary to resolve ambiguities. The other difficulty was that many of the problems that AI attempted to solve were intractable. The early AI programs tried step sequences based on the basic facts about the problem to be solved, experimenting with different step combinations until they found a solution. The early programs were usable only because the worlds they handled contained few objects. In computational complexity theory, before the definition of NP-completeness (Stephen Cook, 1971; Richard Karp, 1972), it was thought that applying these programs to more complex problems was just a matter of faster hardware and more memory. This was refuted in theory by the results connected with NP-completeness. In the early era, AI was unable to beat the „combinatorial explosion”, and the outcome was that AI research stopped in many places.
From the end of the 1960s, the development of so-called expert systems was emphasised. These systems have a (rule-based) knowledge base about the field they handle, on which an inference engine executes deductive steps. In this period, serious accomplishments were achieved in the theory of resolution theorem proving (J. A. Robinson, 1965), in mapping out knowledge representation techniques, and in the fields of heuristic search and methods for handling uncertainty. The first expert systems were born in the field of medical diagnostics. The MYCIN system, for example, with its 450 rules, reached the effectiveness of human experts, and performed significantly better than novice physicians. At the beginning of the 1970s, Prolog, the logic programming language, was born, built on the computerized realization of a version of the resolution calculus. Prolog is a remarkably prevalent tool in developing expert systems (in medical, judicial, and other domains), but natural language parsers were also implemented in this language. Some of the great achievements of this era are linked to natural language parsers, many of which were used as database interfaces.

2.3 AI BECOMES INDUSTRY (SINCE 1980)

The first successful expert system, called R1, helped to configure computer systems, and by 1986 it was making a 40 million dollar yearly saving for the developer company, DEC. In 1988, DEC's AI group had already deployed 40 expert systems and was working on even more. In 1981, the Japanese announced the „fifth generation computer” project – a 10-year plan to build an intelligent computer system that uses the Prolog language as machine code. Answering the Japanese challenge, the USA and the leading countries of Europe also started long-term projects with similar goals. This period brought the breakthrough when AI stepped out of the laboratories and the pragmatic usage of AI began.
In many fields (medical diagnostics, chemistry, geology, industrial process control, robotics, etc.), expert systems were used, often through a natural language interface. All in all, by 1988, the yearly income of the AI industry had increased to 2 billion dollars. Besides expert systems, new and long-forgotten technologies appeared. A big class of these techniques includes statistical AI methods, whose research got a boost in the early 1980s from the (re)discovery of neural networks. The hidden Markov models, which are used in speech and handwriting recognition, also fall into this category. There was a mild revolution in the fields of robotics, machine vision, and learning. Today, AI technologies are very versatile: they mostly appear in industry, but they are also gaining ground in everyday services. They are becoming part of our everyday life.

3 PROBLEM REPRESENTATION

3.1 STATE-SPACE REPRESENTATION

The first question is how to represent a problem that should be solved on a computer. After developing the details of a representation technique, we can create algorithms that work on this kind of representation. In the following, we will learn the state-space representation, which is a quite universal representation technique. Furthermore, many problem-solving algorithms are known in connection with state-space representation, which we will review in depth in Chapter 4. To represent a problem, we need to find a limited number of features and parameters (colour, weight, size, price, position, etc.) in connection with the problem that we think will be useful during the solving. For example, if these parameters are described with the values h1, …, hn (colour: black/white/red; temperature: between -20 ˚C and 40 ˚C; etc.), then we say that the problem's world is in the state identified by the vector ⟨h1, …, hn⟩. If we denote the set of values that the i-th parameter can take by Hi, then the states of the problem's world are elements of the set H1 × H2 × ⋯ × Hn. As we've determined the possible states of the problem's world this way, we have to give a special state that specifies the initial values of the parameters of the problem's world. This is called the initial state. During the problem-solving, starting from the initial state, we keep changing the state of the problem's world until we reach an adequate state called a goal state. We can even define several goal states. Now we only need to specify which states can be changed and what states these changes produce. The functions that describe the state changes are called operators. Naturally, an operator can't be applied to each and every state, so the domain of each operator (as a function) is given with the help of a so-called precondition.

Definition 1. A state-space representation is a tuple ⟨A, k, C, O⟩, where:
(1) A is the set of states, A ≠ ∅,
(2) k ∈ A is the initial state,
(3) C ⊆ A is the set of goal states,
(4) O is the set of operators, O ≠ ∅. Every operator o ∈ O is a function o : Dom(o) → A, where
Dom(o) = { a | a ∉ C ∧ precondition_o(a) } ⊆ A.

The set C can be defined in two ways:
▫ By enumeration (in an explicit way): C = { c1, …, cm }
▫ By formalizing a goal condition (in an implicit way): C = { a | goalcondition(a) }
The conditions precondition_o(a) and goalcondition(a) can be specified as logical formulas. Each formula's parameter is a state a, and the precondition also has the applicable operator o as a parameter. Henceforth, we need to define what we mean by the solution of a state-space represented problem, as that is the thing we want to create algorithms for. The concept of a problem's solution can be described through the following definitions:

Definition 2. Let ⟨A, k, C, O⟩ be a state-space representation, and let a, a' ∈ A be two states.
precondition a a ' a o∈O is directly accessible from if there is an operator where holds o oa=a' and . a a ' Notation: . o Definition 3. Let 〈 A , k ,C ,O 〉 be a state-space representation, and a ,a'∈A are two states. a ,a ,, a ∈A a ' is accessible from a if there is a state sequence where 1 2 n ▫ a =a , 1 ▫ a =a' , n a a ▫ ∀ i∈1,2,, n−1 : (any o ∈O operator) i i1 i o i a  a' Notation: o , o ,, o 1 2 n−1 k  c Definition 4. The problem 〈 A , k ,C ,O 〉 is solvable if for any goal state c∈C . In o ,, o 1 n o ,,o this case, the operator sequence is referred as a solution to the problem. 1 n Some problems may have more than one solution. In such cases, it can be interesting to compare the solutions by their costs - and select the less costly (the cheapest) solution. We have the option to cost a assign a cost to the application of an operator to the state a , and denote it as (assuming o precondition a that o is applicable to a , that is, holds), which is a positive integer. o k  c Definition 5. Let in the case of a 〈 A , k ,C ,O 〉 problem for any c∈C . The cost of o ,, o 1 n o ,,o the solution is: 1 n n costo ,a  . ∑ i i i=1 Namely, the cost of a solution is the cost of all the operator applications in the solution. In the case costo,a=1 of many problems, the cost of operator applications is uniform, that is for any o a operator and state . In this case, the cost of the solution is implicitly the number of applied operators. 3.2 STATE-SPACE GRAPH The best tool to demonstrate the state-space representation of a problem is the state-space graph. 〈 A , k ,C ,O 〉 Definition 6. Let be the state-space representation of a problem. The problem's 1 state-space graph is the graph , where a ,a '∈E and a ,a ' is labelled with if 〈 A , E 〉 o a a ' and only if . 
Therefore, the vertices of the state-space graph are the states themselves, and we draw an edge between two vertices if and only if one vertex (as a state) is directly accessible from the other vertex (as a state). We label each edge with the operator that allows the direct accessibility.

It can be easily seen that a solution of a problem is nothing else but a path that leads from the vertex k (aka the initial vertex) to some vertex c ∈ C (aka a goal vertex). Precisely, the solution is the sequence of labels (operators) of the edges that form this path.

In Chapter 4, we will get to know a handful of problem-solving algorithms. It can be said, in general, that all of them explore the state-space graph of the given task to a different degree and look for the path that represents the solution in the graph.

¹ As usual, A is the set of the graph's vertices and E is the set of the graph's edges.

3.3 EXAMPLES

In this chapter, we introduce possible state-space representations of several noteworthy problems.

3.3.1 3 JUGS

We have 3 jugs of capacities 3, 5, and 8 litres, respectively. There is no scale on the jugs, so it's only their capacities that we certainly know. Initially, the 8-litre jug is full of water while the other two are empty. We can pour water from one jug to another, and the goal is to have exactly 4 litres of water in any of the jugs. The amount of water in the other two jugs at the end is irrelevant. Here are two of the possible goal states: for example, 4 litres in the 5-litre jug and 4 litres in the 8-litre jug; or 4 litres in the 5-litre jug, 3 litres in the 3-litre jug, and 1 litre in the 8-litre jug.

Since there is no scale on the jugs and we don't have any other tools that would help, we can pour water from jug A to jug B in two different ways:
▫ We pour all the water from jug A to jug B.
▫ We fill up jug B (and it's possible that some water will remain in jug A).
Give a number to each jug: let the smallest be 1, the middle one 2, and the largest one 3.

Generalize the task to jugs of any capacity: introduce a vector with 3 components (as a constant object outside of the state-space), in which we store the capacities of the jugs:
max = (3, 5, 8)

Set of states: In the states, we store the amount of water in the jugs. Let a state be a tuple (a₁, a₂, a₃), in which the iᵗʰ component tells how many litres of water the jug denoted by i contains. So, the set of states is defined as follows:
A = { (a₁, a₂, a₃) | 0 ≤ aᵢ ≤ maxᵢ },
where every aᵢ is an integer.

Initial state: At first, jug 3 is full and the other ones are empty. So, the initial state is:
k = (0, 0, max₃)

Set of goal states: We have several goal states, so we define the set of goal states with the help of a goal condition:
C = { (a₁, a₂, a₃) ∈ A | ∃i : aᵢ = 4 }

Set of operators: Our operators realize the pouring from one jug (denoted by i) to another one (denoted by j). We can also specify that the source jug (i) and the goal jug (j) can't be the same. Our operators are defined as follows:
O = { pour_{i,j} | i, j ∈ {1, 2, 3} ∧ i ≠ j }

Precondition of the operators: Let's define when an operator pour_{i,j} can be applied to a state (a₁, a₂, a₃). It's practical to specify the following conditions:
▫ Jug i is not empty.
▫ Jug j is not full.
So, the precondition of the operator pour_{i,j} applied to the state (a₁, a₂, a₃) is:
aᵢ ≠ 0 ∧ aⱼ ≠ maxⱼ

Function of applying: Define what state (a'₁, a'₂, a'₃) the operator pour_{i,j} creates from the state (a₁, a₂, a₃). The question is how many litres of water we can pour from jug i to jug j. Since at most maxⱼ − aⱼ litres of water can be poured into jug j, we can calculate the exact amount to be poured as
min(aᵢ, maxⱼ − aⱼ).
Denote this amount by T.
Consequently:
pour_{i,j}(a₁, a₂, a₃) = (a'₁, a'₂, a'₃), where
a'ₘ = aᵢ − T, if m = i,
a'ₘ = aⱼ + T, if m = j,
a'ₘ = aₘ, otherwise,
for every m ∈ {1, 2, 3}.

STATE-SPACE GRAPH

The state-space graph of the aforementioned state-space representation can be seen in Figure 2.

Figure 2. The state-space graph of the 3 Jugs problem.

In the graph, the red lines depict unidirectional edges while the green ones are bidirectional edges. Naturally, bidirectional edges should be represented as two unidirectional edges, but due to lack of space, let us use bidirectional edges. It can be seen that the labels of the bidirectional edges are given in the form pour_{i,j1−j2}, which is different from the form pour_{i,j} as it was given in the state-space representation. The reason for this is that one label pour_{i,j1−j2} encodes two operators at the same time: the operators pour_{i,j1} and pour_{i,j2}.

The green vertices represent the goal states. The bold edges represent one of the solutions, which is the following operator sequence:
pour_{3,2}, pour_{2,1}, pour_{1,3}, pour_{2,1}, pour_{3,2}, pour_{2,1}

Notice that the problem has several solutions. It can also be noticed that the state-space graph contains cycles, which makes it even more difficult to find a solution.

3.3.2 TOWERS OF HANOI

There are 3 discs with different diameters. We can slide these discs onto 3 perpendicular rods. It's important that if there is a disc under another one, then it must be bigger in diameter. We denote the rods by „P”, „Q”, and „R”, respectively. The discs are denoted by „1”, „2”, and „3”, respectively, in ascending order of diameter. The initial state of the discs can be seen in the figure below.

We can slide a disc onto another rod if the disc (1) is on the top of its current rod, and (2) the discs on the goal rod will be in ascending order by size after the move. Our goal is to move all the discs to rod R.
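The 3 Jugs operators pour_{i,j} above translate almost directly into code. The following is a hypothetical Python sketch; the names `MAX`, `precondition`, and `pour` are ours, not part of the notes.

```python
MAX = (3, 5, 8)   # jug capacities: the constant max vector from the notes

def precondition(a, i, j):
    """pour_{i,j} is applicable iff i != j, jug i is not empty, and jug j
    is not full. Jugs are numbered 1..3, states are tuples (a1, a2, a3)."""
    return i != j and a[i - 1] != 0 and a[j - 1] != MAX[j - 1]

def pour(a, i, j):
    """Apply pour_{i,j}: move T = min(a_i, max_j - a_j) litres from jug i to jug j."""
    assert precondition(a, i, j)
    t = min(a[i - 1], MAX[j - 1] - a[j - 1])
    a2 = list(a)
    a2[i - 1] -= t          # component i loses T litres
    a2[j - 1] += t          # component j gains T litres; the rest is unchanged
    return tuple(a2)
```

Replaying the solution shown in bold in Figure 2 from the initial state (0, 0, 8) ends in the state (3, 4, 1), which satisfies the goal condition aᵢ = 4.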
We create the state-space representation of the problem as follows:

Set of states: In the states, we store information about the current positions (i.e., rods) of the discs. So, a state is a vector (a₁, a₂, a₃) where aᵢ is the position of disc i (i.e., either P, Q, or R). Namely:
A = { (a₁, a₂, a₃) | aᵢ ∈ {P, Q, R} }

Initial state: Initially, all the discs are on rod P, i.e.:
k = (P, P, P)

Set of goal states: The goal is to move all three discs to rod R. So, in this problem, we have only one goal state, namely:
C = { (R, R, R) }

Set of operators: Each operator includes two pieces of information:
▫ which disc to move, and
▫ to which rod.
Namely:
O = { move_{which,where} | which ∈ {1, 2, 3}, where ∈ {P, Q, R} }

Precondition of the operators: Take an operator move_{which,where}. Let's examine when we can apply it to a state (a₁, a₂, a₃). We need to formalize the following two conditions:
(1) The disc which is on the top of the rod a_which.
(2) The disc which is getting moved to the top of the rod where.
What we need to formalize as a logical formula is that each disc that is smaller than disc which (if such a disc exists) is on neither rod a_which nor rod where.

It's worth extending the aforementioned condition with another one, namely that we don't want to move a disc to the same rod from which we are removing it. This condition is not obligatory, but it can speed up the search (it will eliminate trivial cycles in the state-space graph).

Thus, the precondition of the operator move_{which,where} applied to the state (a₁, a₂, a₃) is:
a_which ≠ where ∧ ∀i ((i < which) ⇒ (aᵢ ≠ a_which ∧ aᵢ ≠ where))

Function of applying: Take any operator move_{which,where}. If the precondition of the operator holds for a state (a₁, a₂, a₃), then we can apply it to this state. We have to formalize how the resulting state (a'₁, a'₂, a'₃) will look: the disc which will be moved to the rod where, while the other discs stay where they currently are.
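The precondition and the function of applying described above can be sketched in Python as follows (a hypothetical sketch; the function names are ours, not part of the notes):

```python
def move_precondition(a, which, where):
    """move_{which,where} is applicable to a state a = (a1, a2, a3) iff the
    source and target rods differ and no disc smaller than `which` lies on
    the source rod a[which-1] or on the target rod `where`."""
    src = a[which - 1]
    if src == where:
        return False           # avoid trivial cycles: don't move onto the same rod
    return all(a[i - 1] != src and a[i - 1] != where for i in range(1, which))

def move(a, which, where):
    """Apply move_{which,where}: disc `which` goes to rod `where`;
    all other components of the state are copied unchanged."""
    assert move_precondition(a, which, where)
    return tuple(where if i == which else a[i - 1] for i in range(1, 4))
```

Replaying the 7-step optimal solution from (P, P, P) — disc 1 to R, disc 2 to Q, disc 1 to Q, disc 3 to R, disc 1 to P, disc 2 to R, disc 1 to R — ends in the goal state (R, R, R).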
Thus:
move_{which,where}(a₁, a₂, a₃) = (a'₁, a'₂, a'₃), where
a'ᵢ = where, if i = which,
a'ᵢ = aᵢ, otherwise,
for every i ∈ {1, 2, 3}.

Important note: we have to define all of the components of the new state, not just the one that changes.

STATE-SPACE GRAPH

The state-space graph of the aforementioned state-space representation can be seen in Figure 3.

Figure 3. The state-space graph of the Towers of Hanoi problem.

Naturally, all of the edges in the graph are bidirectional, and their labels can be interpreted as in the previous chapter: a label move_{i,j1−j2} refers to both the operators move_{i,j1} and move_{i,j2}.

As it can be clearly seen in the figure, the optimal (shortest) solution of the problem is given by the rightmost side of the large triangle; namely, the optimal solution consists of 7 steps (operators).

3.3.3 8 QUEENS

Place 8 queens on a chessboard in a way that no two of them attack each other. One possible solution can be seen in the figure below.

Generalize the task to an N×N chessboard (N ≥ 1), on which we need to place N queens. N is given as a constant outside of the state-space.

The basic idea of the state-space representation is the following: since we will place exactly one queen in each row of the board, we can solve the task by placing the queens on the board row by row. So, we place one queen in the 1st row, then another one in the 2nd row in a way that they can't attack each other. In this way, in step i we place a queen in row i while checking that it does not attack any of the previously placed i−1 queens.

Set of states: In the states, we store the column positions of the already placed queens. Let's have an N-component vector in a state, in which component i tells us to which column in row i a queen has been placed. If we haven't placed a queen in the given row yet, then the vector contains 0 there. In the state, we also store the index of the row in which the next queen will be placed.
So: A= a ,a ,, a , s 0a N ,1sN1 ∣ 1 2 N i As one of the possible value of s , N1 is a non-existent row index, which is only permitted for testing the terminating conditions. Initial state: Initially, the board is empty. Thus, the initial state is: k=0,0 ,,0,1 Set of goal states: We have several goal states. If the value of s is a non-existent row index, then we have found a solution. So, the set of goal states is: C= a ,,a , N 1∈A 1 N Set of operators: s Our operators will describe the placing of a queen to row . The operators are expecting only s one input data: the column index where we want to place the queen in row . The set of our operators is: O= place 1i8 ∣ i Precondition of the operators: Formalize the precondition of applying an operator place a ,,a , s to a state It can be applied if the queen we are about to place is i 1 8 ▫ not in the same row as any queens we have placed before. So, we need to examine if the value th of i was in the state before the s component. i. e., for all ms: a ≠i m ▫ not attacking any previously placed queens diagonally. Diagonal attacks are the easiest to examine if we take the absolute value of the difference of the row indices of two queens, and then compare it to the absolute value of the difference of the column indices of the two queens. If these values are equal then the two queens are attacking each other. The row index of the queen we are about to place is , while the column-index is . So: s i for all ms : ∣s−m∣≠∣i−a ∣ m place a ,, a , s Thus, the precondition of the operator to the state is: i 1 8 ∀ m ms ⇒ a ≠i ∧ ∣s−m∣≠∣i−a ∣ ,   m m where 1m8 a' ,,a ' , s' Function of applying: Let's specify the state which the operator 1 8 place a ,, a , s will create from a state In the new state, as compared to the original i 1 8 state, we only need to make the following modifications: th ▫ write i to the s component of the state, and ▫ increment the value of s by one. 
Thus:
placeᵢ(a₁, …, a_N, s) = (a'₁, …, a'_N, s'), where
a'ₘ = i, if m = s,
a'ₘ = aₘ, otherwise,
for every m ∈ {1, …, N}, and
s' = s + 1.

STATE-SPACE GRAPH

The state-space graph of the aforementioned state-space representation for the case N = 4 can be seen in Figure 4. In this case, the problem has 2 solutions.

Notice that every solution of the problem is exactly N operators long. It is also important to note that there is no cycle in the state-space graph; that is, by carefully choosing the elements of the state-space representation, we have managed to exclude cycles from the graph, which will be quite advantageous when we are searching for solutions.

Figure 4. The state-space graph of the 4 Queens problem.

4 PROBLEM-SOLVING METHODS

Problem-solving methods are assembled from the following components:

Database: the stored part of the state-space graph. As the state-space graph may contain cycles (and loops), in the database we store the graph in an unfolded tree form (see below).

Operations: tools to modify the database. We usually differentiate two kinds of operations:
▫ operations originating from operators
▫ technical operations

Controller: it controls the search through the following steps:
(1) initializing the database,
(2) selecting the part of the database that should be modified,
(3) selecting and executing an operation,
(4) examining the terminating conditions:
• positive terminating: we have found a solution,
• negative terminating: we conclude that there is no solution.
The controller usually executes steps (1) to (4) iteratively.

UNFOLDING THE STATE-SPACE GRAPH INTO A TREE

Let's see the graph in Figure 5. The graph contains cycles: one such trivial cycle is the edge from s to s; others are the path (s, c, b, s) and the path (c, d, b, s, c). We can eliminate the cycles from the graph by duplicating the appropriate vertices. As can be seen in Figure 6, for example, we eliminated the edge from s to s by inserting s everywhere as a child of s.
The cycle (s, c, b, s) appears in the figure as the rightmost branch. Of course, this method may result in an infinite tree, so only a finite part of it is given in the figure.

Figure 5. A graph that contains cycles and multiple paths. Figure 6. The unfolded tree version.

After unfolding, we need to filter the duplications on the tree branches if we want the solution seeking to terminate after a limited number of steps. That's why we will be using different cycle-detection techniques (see below) in the controller. Although multiple paths in the state-space graph do not endanger the finiteness of the search, they do increase the number of vertices stored in the database. In Figure 5, for example, the paths (c, d) and (c, b, d) are multiple, as we can use either of them to get from c to d. The paths (c, d) and (c, b, a, d) are also multiple in a less trivial way. In Figure 6, we can clearly see what
