Introduction to Artificial Intelligence

CS 3243: Revision

Final Exam
- Venue: TBA
- Date: 23 November (Tuesday)
- Time: 5:00 – 7:00 pm

Format
- Open book
- Six questions, emphasizing material covered after the midterm
- Yes, all material in the course will be covered on the exam
- Bring a (non-programmable) calculator with you

Outline
- Agents
- Search: uninformed, informed, and adversarial
- Constraint satisfaction
- Knowledge-based agents
- Uncertainty and learning
- Natural language processing

Agent types
Four basic types, in order of increasing generality:
- Simple reflex agents
- Model-based reflex agents
- Goal-based agents
- Utility-based agents
(The slides illustrated each of these with an architecture diagram.)

Creating agents
Where does the intelligence come from?
- Coded by the designers: knowledge representation, using predicate and first-order logic
- Learned by the machine: machine learning exposes a naive agent to examples so that it learns useful actions

Learning agents
(Diagram of the learning-agent architecture.)

Searching for solutions
- In most agent architectures, deciding what action to take involves considering alternatives
- Search strategies are judged on optimality, completeness, and complexity
- Do I have a way of gauging how close I am to a goal?
  - No: uninformed search
  - Yes: informed search

Uninformed search
- Formulate the problem, search, and then execute the actions
- Apply Tree-Search
- For environments that are deterministic, fully observable, and static

Tree search algorithm
- Basic idea: offline, simulated exploration of the state space, generating the successors of already-explored states (a minimal sketch appears at the end of this search section)

Summary of algorithms
- Breadth-first: FIFO order
- Uniform-cost: in order of path cost
- Depth-first: LIFO order
- Depth-limited: DFS to a maximum depth l
- Iterative deepening: iterated depth-limited search
- Bidirectional: also search from the goal towards the origin

Criterion  | Breadth-First | Uniform-Cost | Depth-First | Depth-Limited | Iterative Deepening | Bidirectional
Complete?  | Yes           | Yes          | No          | No            | Yes                 | Yes
Time       | O(b^(d+1))    | O(b^⌈C*/ε⌉)  | O(b^m)      | O(b^l)        | O(b^d)              | O(b^(d/2))
Space      | O(b^(d+1))    | O(b^⌈C*/ε⌉)  | O(bm)       | O(bl)         | O(bd)               | O(b^(d/2))
Optimal?   | Yes           | Yes          | No          | No            | Yes                 | Yes

(b = branching factor, d = depth of the shallowest goal, m = maximum depth, l = depth limit, C* = optimal solution cost, ε = minimum step cost.)

Repeated states: Graph-Search
- Graph-Search avoids repeated states by keeping an explored set alongside the frontier

Informed search
- Heuristic function h(n) = estimated cost of the cheapest path from n to a goal
- Greedy best-first search: minimize the estimated cost to the goal, f(n) = h(n)
- A* search: minimize the estimated total cost, f(n) = g(n) + h(n) (a minimal sketch follows below)

Properties of heuristic functions
- Admissible: never overestimates the true cost to a goal
- Consistent: h(n) ≤ c(n, a, n′) + h(n′) for every node n and successor n′, i.e., the estimate never drops by more than the step cost
- A* using Tree-Search is optimal if the heuristic used is admissible
- Graph-Search needs a consistent heuristic. Why? With an inconsistent heuristic, Graph-Search may reach a state first via a suboptimal path and then discard the better path as already explored.
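To tie the uninformed-search slides together, here is a minimal Python sketch (not from the slides) of the generic tree-search skeleton. `goal_test` and `successors` are hypothetical problem-specific callables; the only thing distinguishing breadth-first from depth-first search is the order in which the frontier is popped:

```python
from collections import deque

def tree_search(start, goal_test, successors, order="fifo"):
    """Generic tree search: the frontier discipline alone picks the strategy.
    order='fifo' pops oldest first (breadth-first); 'lifo' pops newest
    (depth-first). No explored set is kept, so it can loop forever on state
    graphs with cycles - that is exactly what Graph-Search fixes."""
    frontier = deque([(start, [start])])  # (state, path-to-state) pairs
    while frontier:
        state, path = frontier.popleft() if order == "fifo" else frontier.pop()
        if goal_test(state):
            return path
        for s in successors(state):
            frontier.append((s, path + [s]))
    return None  # frontier exhausted: no solution
```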
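In the same spirit, a minimal A* graph-search sketch. Here `successors` is assumed to yield (state, step_cost) pairs and states are assumed hashable; with an admissible, consistent heuristic, the first time the goal is popped off the frontier its path is optimal:

```python
import heapq

def a_star(start, goal, successors, h):
    """A* graph search: expand nodes in order of f(n) = g(n) + h(n)."""
    frontier = [(h(start), 0, start, [start])]  # (f, g, state, path)
    best_g = {start: 0}                         # cheapest g seen so far per state
    while frontier:
        f, g, state, path = heapq.heappop(frontier)
        if state == goal:
            return g, path
        for nxt, step_cost in successors(state):
            g2 = g + step_cost
            if g2 < best_g.get(nxt, float("inf")):  # keep only the cheapest path
                best_g[nxt] = g2
                heapq.heappush(frontier, (g2 + h(nxt), g2, nxt, path + [nxt]))
    return None

# Example: 4-connected grid, unit step costs, admissible Manhattan heuristic.
goal = (2, 3)
moves = ((1, 0), (-1, 0), (0, 1), (0, -1))
succ = lambda p: [((p[0] + dx, p[1] + dy), 1) for dx, dy in moves]
manhattan = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])
print(a_star((0, 0), goal, succ, manhattan))  # cost 5 and one shortest path
```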
Local search
- Good for problems where the path to the solution doesn't matter
- Often works on a complete state
- May not search systematically
- Often requires very little memory
- Related to online search, which has access only to the local state

Local search algorithms
- Hill-climbing search: move to the best successor
- Beam search: keep the best k successors
- Simulated annealing: allow backward (downhill) moves early in the search (see the sketch at the end of this section)
- Genetic algorithms: breed k successors using crossover and mutation

Searching in specialized scenarios
- Properties of the problem often allow us to formulate better heuristics, and better search strategies and pruning
- Adversarial search: working against an opponent
- Constraint satisfaction problems: assigning values to variables; the path to the solution doesn't matter; view this as an incremental search

Adversarial search
- Turn-taking, two-player, zero-sum games
- Minimax algorithm: one turn = two plies; MAX nodes (the agent's move) maximize utility, MIN nodes (the opponent's move) minimize utility
- Alpha-beta pruning eliminates unnecessary computation (see the sketch below)

Constraint satisfaction
- Discrete or continuous solutions; discretize and limit the possible values
- Modeled as a constraint graph
- As the path to the solution doesn't matter, local search can be useful

Techniques in CSPs
- Basic method: backtracking search, i.e., DFS for CSPs, where a leaf node (at depth v, one level per variable) is a solution
- Speed-ups when choosing variables: minimum remaining values (the most constrained variable), with the degree heuristic as a tie-breaker
- Speed-ups when choosing values: least constraining value

Pruning the CSP search space
Before expanding a node, we can prune the search space:
- Forward checking: prune values from the domains of the remaining variables
- Arc consistency: propagate stronger levels of consistency, e.g., AC-3 (applicable both before and during search)
- Balance the cost of enforcing arc consistency against the actual searching
(A backtracking-with-forward-checking sketch follows below.)
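To make "allow backward moves during the beginning of the search" concrete, here is a minimal simulated-annealing sketch. `neighbours`, `value`, and `schedule` are hypothetical problem-specific callables (e.g., a geometric cooling schedule cut to zero after enough steps); this is a sketch of the technique, not code from the course:

```python
import itertools, math, random

def simulated_annealing(state, neighbours, value, schedule):
    """Local search over complete states. A worse successor is accepted with
    probability e^(dE/T); as the temperature T cools, downhill moves become
    rare and the behaviour approaches plain hill climbing."""
    for t in itertools.count():
        T = schedule(t)
        if T <= 0:
            return state  # frozen: stop and return the current state
        nxt = random.choice(neighbours(state))
        dE = value(nxt) - value(state)
        if dE > 0 or random.random() < math.exp(dE / T):
            state = nxt  # always accept uphill moves; sometimes accept downhill
```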
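For the adversarial-search slide, a minimal minimax-with-alpha-beta sketch; `successors`, `utility`, and `terminal` are hypothetical game-specific callables:

```python
def alphabeta(state, depth, alpha, beta, maximizing,
              successors, utility, terminal):
    """Minimax with alpha-beta pruning. alpha = best value MAX can guarantee
    so far, beta = best value MIN can guarantee; once alpha >= beta, the
    remaining successors cannot affect the result and are pruned."""
    if depth == 0 or terminal(state):
        return utility(state)
    if maximizing:  # MAX node: the agent's move
        v = float("-inf")
        for s in successors(state):
            v = max(v, alphabeta(s, depth - 1, alpha, beta, False,
                                 successors, utility, terminal))
            alpha = max(alpha, v)
            if alpha >= beta:
                break  # beta cutoff: MIN would never let play reach here
        return v
    else:           # MIN node: the opponent's move
        v = float("inf")
        for s in successors(state):
            v = min(v, alphabeta(s, depth - 1, alpha, beta, True,
                                 successors, utility, terminal))
            beta = min(beta, v)
            if alpha >= beta:
                break  # alpha cutoff: MAX would avoid this branch
        return v
```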
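And a minimal backtracking-search sketch for binary CSPs, combining the MRV variable ordering with forward checking. The representation is an assumption for illustration: `domains` maps each variable to a set of remaining values, `neighbours` maps it to its constraint-graph neighbours, and `consistent(x, a, y, b)` tests the binary constraint between two variable/value pairs:

```python
def backtrack(assignment, domains, neighbours, consistent):
    """DFS over partial assignments; depth = number of assigned variables."""
    if len(assignment) == len(domains):
        return assignment  # leaf at depth v (all variables assigned) = solution
    # MRV: choose the unassigned variable with the fewest remaining values.
    var = min((v for v in domains if v not in assignment),
              key=lambda v: len(domains[v]))
    for value in list(domains[var]):
        if not all(consistent(var, value, w, assignment[w])
                   for w in neighbours[var] if w in assignment):
            continue  # violates a constraint with an already-assigned neighbour
        assignment[var] = value
        pruned, wipeout = [], False
        for w in neighbours[var]:           # forward checking
            if w in assignment:
                continue
            for other in list(domains[w]):
                if not consistent(var, value, w, other):
                    domains[w].remove(other)
                    pruned.append((w, other))
            if not domains[w]:
                wipeout = True  # a neighbour has no values left: fail early
        if not wipeout:
            result = backtrack(assignment, domains, neighbours, consistent)
            if result is not None:
                return result
        for w, other in pruned:             # undo the pruning, then the assignment
            domains[w].add(other)
        del assignment[var]
    return None
```

For map colouring, for instance, `consistent` is simply `lambda x, a, y, b: a != b`.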
Propositional and first-order logic
- Propositional logic: facts are true or false
- First-order logic: relationships and properties of objects
- More expressive and succinct: quantifiers, functions, and the equality operator
- Can convert back to propositional logic to do inference

Inference in logic
- Given a KB, what can be inferred?
- Query- or goal-driven: backward chaining, model checking (e.g., DPLL), resolution
- Deducing new facts: forward chaining
- Efficiency: keep track of the literals of each premise using a count (see the sketch at the end of this section), or use Rete networks

Inference in logic (continued)
- Chaining: requires definite (Horn) clauses; uses Modus Ponens for sound reasoning; runs forward or backward
- Resolution: requires conjunctive normal form; uses the resolution rule for sound reasoning; proof by contradiction

Inference in FOL
- We don't have to propositionalize, which could lead to infinitely many sentences when functions occur; use unification instead
- Standardizing apart, dropping quantifiers, Skolem constants and functions, handling equality
- Inference is semi-decidable: we can say yes to entailed sentences, but on non-entailed sentences the procedure may never terminate

Connection to knowledge-based agents
- CSPs can be formulated as logic problems, and vice versa
- CSP search as model checking:

Model checking (DPLL) | CSP search
Pure symbol           | Least constraining value
Unit clause           | Most constrained value
Early termination     | Minimum remaining values

- Local search: WalkSAT with the min-conflicts heuristic

Inference and CSPs
- Solving a CSP via inference: handles special constraints (e.g., AllDiff); can learn new constraints not expressed by the KB designer
- Solving inference via CSP: decide whether a query is true under all possible constraints (satisfiability)
- Melding the two: constraint logic programming (CLP)

Uncertainty
- Leads us to use probabilistic agents, only one of many possible methods
- Modeled in terms of random variables; again, we examined only the discrete case
- Answer questions based on the full joint distribution

Inference by enumeration
- We want the posterior joint distribution of the query variables, given specific values for the evidence variables
- Sum over the hidden variables (see the sketch at the end of this section)
- Con: exponential complexity
- Look for absolute and conditional independence to reduce the complexity

Bayesian networks
- One way to model dependencies
- A variable's probability depends only on its parents
- Use the product rule and conditional independence to calculate joint probabilities
- Easiest to structure causally, from root causes forward; this leads to easier modeling and lower complexity

Learning
- Inductive learning: based on past examples
- Learn a function h() that approximates the real function f(x) on examples x
- Balance the complexity of the hypothesis against fidelity to the examples
- Minimize α·E(h, D) + (1 − α)·C(h), where E(h, D) is h's error on the data D and C(h) is the complexity of h

Learning algorithms
Many exist, but the basics are:
- k-nearest neighbors: instance-based; ignores global information
- Naive Bayes: strong independence assumption; scales well because of that assumption; needs normalization (smoothing) when dealing with unseen feature values
- Decision trees: the hypothesis is easy to understand; features are chosen based on information gain (see the sketch at the end of this section)

Training and testing
- Judge the induced h()'s quality by using a test set
- The training and test sets must be separate; otherwise peeking occurs
- Modeling noise or the specifics of the training data can lead to overfitting
- Use pruning to remove the parts of the hypothesis that aren't justifiable

Natural language processing
- Like many fields of advanced AI, it comes in inverse problems: understanding vs. generation (synthesis)
- "How to wreck a nice beach" (heard: "how to recognize speech"): ambiguity is the key problem
- Levels of processing: syntactic, discourse, semantic, pragmatic

Natural language processing (continued)
- Syntactic parsing: bottom-up, top-down, CYK chart parsing (a sketch follows below)
- Augmented grammars: long-distance agreement (case and number); use variables to capture the parameters; unification is a viable method
- Current NLP uses statistical techniques
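Returning to the inference slides, here is a minimal forward-chaining sketch over definite (Horn) clauses, using the "count of unproven premises" bookkeeping mentioned above; the clause representation is an assumption for illustration:

```python
def forward_chain(clauses, facts, query):
    """clauses: list of (premises, conclusion) pairs, premises a set of symbols.
    Each time a symbol is proven, decrement the count of every clause whose
    premises contain it; when a count hits zero, the rule fires."""
    count = [len(premises) for premises, _ in clauses]
    inferred = set()
    agenda = list(facts)
    while agenda:
        p = agenda.pop()
        if p == query:
            return True
        if p in inferred:
            continue
        inferred.add(p)
        for i, (premises, conclusion) in enumerate(clauses):
            if p in premises:
                count[i] -= 1
                if count[i] == 0:  # all premises proven: add the conclusion
                    agenda.append(conclusion)
    return False

# E.g., forward_chain([({"P", "Q"}, "R"), ({"R"}, "S")], {"P", "Q"}, "S") -> True
```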
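For the uncertainty slides, a minimal inference-by-enumeration sketch over Boolean random variables. `joint_p` is a hypothetical callable that looks up the probability of a complete assignment in the full joint distribution; the sum over every combination of hidden values is what makes this exponential:

```python
from itertools import product

def enumerate_ask(query_var, evidence, variables, joint_p):
    """Posterior P(query_var | evidence): for each query value, sum the full
    joint over the hidden variables, then normalise (the normalisation
    constant plays the role of 1/P(evidence))."""
    hidden = [v for v in variables if v != query_var and v not in evidence]
    dist = {}
    for qval in (True, False):
        total = 0.0
        for combo in product((True, False), repeat=len(hidden)):
            world = {**evidence, **dict(zip(hidden, combo)), query_var: qval}
            total += joint_p(world)  # one full-joint lookup per complete world
        dist[qval] = total
    z = sum(dist.values())
    return {qval: p / z for qval, p in dist.items()}
```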
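For the decision-tree bullet, a short worked helper: entropy and the information gain of splitting on a feature. Examples are assumed to be dicts of feature values, with a parallel list of class labels:

```python
import math
from collections import Counter

def entropy(labels):
    """H(D) = -sum_i p_i log2 p_i over the class proportions in labels."""
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

def information_gain(examples, labels, feature):
    """Gain(D, A) = H(D) - sum_v (|D_v|/|D|) H(D_v): the expected entropy
    reduction from splitting on A. The tree greedily picks the feature
    with the largest gain at each node."""
    n = len(labels)
    remainder = 0.0
    for v in set(ex[feature] for ex in examples):
        subset = [lab for ex, lab in zip(examples, labels) if ex[feature] == v]
        remainder += (len(subset) / n) * entropy(subset)
    return entropy(labels) - remainder
```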
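Finally, a minimal CYK chart-parsing sketch for a grammar in Chomsky normal form. The encoding is an assumption for illustration: `lexicon` maps each word to its pre-terminal symbols, and `grammar` maps a pair (B, C) to the set of heads A for rules A → B C:

```python
def cyk_parse(words, lexicon, grammar, start_symbol="S"):
    """Bottom-up chart parsing: table[i][j] holds every nonterminal that can
    derive the span words[i..j]; the input parses iff the start symbol covers
    the whole sentence. O(n^3) in the sentence length."""
    n = len(words)
    table = [[set() for _ in range(n)] for _ in range(n)]
    for i, w in enumerate(words):
        table[i][i] = set(lexicon.get(w, ()))  # length-1 spans from the lexicon
    for length in range(2, n + 1):             # longer spans, shortest first
        for i in range(n - length + 1):
            j = i + length - 1
            for k in range(i, j):              # every split point
                for B in table[i][k]:
                    for C in table[k + 1][j]:
                        table[i][j] |= grammar.get((B, C), set())
    return start_symbol in table[0][n - 1]

# E.g., lexicon = {"the": {"Det"}, "dog": {"N"}, "bites": {"V"}} and
# grammar = {("Det", "N"): {"NP"}, ("V", "NP"): {"VP"}, ("NP", "VP"): {"S"}}
# accept "the dog bites the dog".
```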
Where to go from here
- This course is just the tip of the iceberg: there are many advanced topics, and we introduced only a few
- Among them: motion planning, knowledge-based systems, AI planning and decision making, machine learning, computer vision and pattern recognition, information retrieval, robotics, knowledge discovery and data mining, constraint logic programming, natural language processing, uncertainty modeling in AI, and speech processing
- The textbook can help in further exploration of AI

That's it
- Thanks for your attention over the semester
- See you at the exam