Artificial Intelligence (Künstliche Intelligenz 1)
Winter 2016/17, Provisional Lecture Notes

Michael Kohlhase
Professur für Wissensrepräsentation und -verarbeitung
Informatik, FAU Erlangen-Nürnberg
Michael.Kohlhase@FAU.de

July 9, 2017

Preface

Course Concept

Aims: The course aims at giving students a solid (and often somewhat theoretically oriented) foundation of the basic concepts and practices of artificial intelligence. The course will predominantly cover symbolic AI, sometimes also called "good old-fashioned AI" (GOFAI), in the first semester and only the very foundations of statistical approaches in the second, as sub-symbolic, machine-learning-based AI deserves its own specialization courses and requires many more mathematical prerequisites.

Context: The course "Künstliche Intelligenz" (KI 1 & 2) at FAU Erlangen is a two-semester course in the "Wahlpflichtbereich" (specialization phase) in the fifth and sixth semesters of the Computer Science bachelor's studies at FAU Erlangen. It is also available as a (somewhat remedial) course in the "Vertiefungsmodul Künstliche Intelligenz" of the Computer Science master's program.

KI 1 & 2 builds on the mandatory courses in the FAU bachelor's program, in particular the course "Grundlagen der Logik in der Informatik", which already covers a lot of the material usually presented in the "knowledge and reasoning" part of an introductory AI course. The AI 1 & 2 course also minimizes overlap with the course [EdNote 1: figure out what happens with Lutz' Ontologies course and the Machine Learning Seminar].

Prerequisites: The course is relatively elementary; we expect that any student who has attended the mandatory CS courses at FAU Erlangen can follow it.

Course Contents

Goal: To give students a solid foundation of the basic concepts and practices of the field of Artificial Intelligence. The course will be based on Russell/Norvig's book "Artificial Intelligence: A Modern Approach" [RN09].

Artificial Intelligence I (the first semester) introduces AI as an area of study and covers problem solving, knowledge representation, and planning.

Artificial Intelligence II (the second semester) is more oriented towards exposing students to the basics of statistically based AI. [EdNote 2: actually, we will see.]

This Document

This document contains the course notes for the course Artificial Intelligence held at FAU Erlangen in the winter semester 2016/17; they are based on a course held at Jacobs University in Fall 2005.

Format: The document mixes the slides presented in class with comments by the instructor to give students a more complete background reference.

Caveat: This document is made available for the students of this course only. It is still very much a draft and will develop over the course of the current course and in coming academic years.

Licensing: This document is licensed under a Creative Commons license that requires attribution, allows commercial use, and allows derivative works as long as these are licensed under the same license.

Knowledge Representation Experiment: This document is also an experiment in knowledge representation. Under the hood, it uses the sTeX package [Koh08; Koh16], a TeX/LaTeX extension for semantic markup, which allows exporting the contents into the eLearning platform PantaRhei. Comments and extensions are always welcome; please send them to the author.

Acknowledgments

Materials: Most of the materials in this course are based on Russell/Norvig's book (AIMA) "Artificial Intelligence: A Modern Approach" [RN95].
Even the slides are based on a LaTeX-based slide set, but have been heavily edited. The section on search algorithms is based on materials obtained from Bernhard Beckert (then Uni Koblenz), which are in turn based on AIMA. Some extensions have been inspired by an AI course by Jörg Hoffmann and Wolfgang Wahlster at Saarland University in 2016. Finally, Dennis Müller suggested and supplied some extensions on AGI.

All course materials have been restructured and semantically annotated in the sTeX format, so that we can base additional semantic services on them (see slide ?? for details).

AI Students: The following students have submitted corrections and suggestions to this and earlier versions of the notes: Rares Ambrus, Ioan Sucan, Yashodan Nevatia, Dennis Müller, Simon Rainer, Demian Vöhringer, Lorenz Gorse, Philipp Reger.

Recorded Syllabus for WS 2016

In this document, we record the progress of the course in the academic year 2016/17 in the form of a "recorded syllabus", i.e. a syllabus that is created after the fact rather than before.

Recorded Syllabus Winter Semester 2016/17:

 #    date     until                                      slide   page
 1    Oct 17   overview, some admin                       600     340
 2    Oct 21   agents                                     39      28
 3    Oct 24   simple reflex agents                       50      33
 4    Oct 28   problem formulations                       72      46
 5    Oct 31   greedy search                              131     66
 6    Nov 4    relaxed problems                           158     78
 7    Nov 7    genetic algorithms                         170     83
 8    Nov 11   minimax                                    189     92
 9    Nov 14   MCTS & AlphaGo                             211     103
 10   Nov 18   Complexity of CSP                          233     115
 11   Nov 21   CSP equivalence/tightness                  250     127
 12   Nov 25   Decomposition of acyclic graphs            267     136
 13   Nov 28   Logic Introduction                         473     257
 14   Dec 2    Propositional Tableaux                     314     164
 15   Dec 5    Model Existence Preview                    410     218
 16   Dec 9    Model Existence & Tableau completeness     333     177
 17   Dec 16   SAT Solvers                                342     182
 18   Dec 19   DPLL + Clause Learning                     371     196
 20   Dec 23   Motivation for First-Order Logic           389     206
      Jan 9    Mock Exam
 21   Jan 13   Substitution Value Lemma                   402     213
 22   Jan 16   Free Variable Tableaux                     427     231
 23   Jan 20   Unification                                435     236
 24   Jan 23   Prolog: programming by search              464     252
 25   Jan 27   Intro to Planning                          482     263
 26   Jan 30   why complexity analysis?                   521     285
 27   Feb 3    PlanEx+                                    551     304
 28   Feb 6    planning conclusion                        580     321

Recorded Syllabus Summer Semester 2017:

 #    date     until                                                slide   page
 1    May 4    overview, some admin                                 ??      ??
 2    May 8    probabilities, Kolmogorov axioms                     ??      ??
 3    May 11   conditional probabilities, independence, Bayes rule  637     361
 4    May 15   conditional independence, Bayesian Networks          660     372
 5    May 18   constructing and reasoning with BN                   680     382
 6    May 22   decision theory, preferences & utilities             731     411
 7    May 29   Multi-Attribute Utilities                            747     421
 8    Jun 1    Belief, transition, and sensor models, VPI           ??      ??
 9    Jun 8    Markov Processes, temporal inference                 764     430
 10   Jun 12   temporal inference                                   770     432
 11   Jun 19   Markov Decision Procedures, Value Iteration          ??      ??
 12   Jun 22   POMDPs, ML Intro                                     804     453
 13   Jun 26   decision tree learning, performance measurement      819     461
 14   Jun 29   learning evaluation, loss                            830     466
 15   Jul 3    Linear regression and classification                 846     474
 16   Jul 6    Neural Networks                                      867     483

Contents

Preface
  Course Concept
  Course Contents
  This Document
  Acknowledgments
  Recorded Syllabus for WS 2016

1 Overview over the Course
  1.1 What is AI?
  1.2 AI is here today
  1.3 Two Ways to Attack the AI Problem
  1.4 AI Topics Covered
  1.5 Strong vs. Narrow AI
  1.6 AI in the KWARC Group
2 Administrativa

Part I: Getting Started with AI: A Conceptual Framework
3 Introduction: What is Artificial Intelligence
4 Intelligent Agents: a Unifying Framework for AI
  4.1 Agents and Environments as a Framework for AI
  4.2 Good Behavior; Rationality
  4.3 Classifying Environments and Agents

Part II: Problem Solving
5 Problem Solving and Search
  5.1 Problem Solving
  5.2 Search
  5.3 Uninformed Search Strategies
  5.4 Informed Search Strategies
    5.4.1 Greedy Search
    5.4.2 Heuristics and their Properties
    5.4.3 A-Star Search
    5.4.4 Finding Good Heuristics
  5.5 Local Search
6 Adversarial Search
  6.1 Introduction
  6.2 Minimax Search
  6.3 Evaluation Functions
  6.4 Alpha-Beta Search
  6.5 MCTS
7 Constraint Satisfaction Problems
  7.1 Constraint Satisfaction Problems: Motivation
  7.2 The Waltz Algorithm
  7.3 CSP: Towards a Formal Definition
  7.4 CSP as Search
8 Constraint Propagation
  8.1 Introduction
  8.2 Inference
  8.3 Forward Checking
  8.4 Arc Consistency
  8.5 Decomposition: Constraint Graphs, and Two Simple Cases
  8.6 Cutset Conditioning
  8.7 Constraint Propagation with Local Search
  8.8 Conclusion & Summary

Part III: Knowledge and Inference
9 Formal Logic as the Mathematics of Meaning
10 Propositional Reasoning, Part I: Principles
  10.1 Introduction
  10.2 Propositional Logic (Syntax/Semantics)
  10.3 Calculi for Propositional Logic
  10.4 Propositional Natural Deduction Calculus
  10.5 Machine-Oriented Calculi for Propositional Logic
    10.5.1 Calculi for Automated Theorem Proving: Analytical Tableaux
      Analytical Tableaux
      Practical Enhancements for Tableaux
      Soundness and Termination of Tableaux
    10.5.2 Resolution for Propositional Logic
    10.5.3 Abstract Consistency and Model Existence
    10.5.4 A Completeness Proof for Propositional Tableaux
  10.6 Wumpus
  10.7 Conclusion
11 Propositional Reasoning: SAT Solvers
  11.1 Introduction
  11.2 Davis-Putnam
  11.3 DPLL = (A Restricted Form of) Resolution
  11.4 UP Conflict Analysis
  11.5 Clause Learning
  11.6 Phase Trans.
  11.7 Conclusion
12 First Order Predicate Logic
  12.1 Introduction
  12.2 First-Order Logic
    12.2.1 First-Order Logic: Syntax and Semantics
    12.2.2 First-Order Substitutions
  12.3 First-Order Calculi
  12.4 Abstract Consistency and Model Existence
  12.5 A Completeness Proof for First-Order ND
  12.6 First-Order Predicate Logic (Conclusion)
13 First-Order Inference
  13.1 First-Order Inference with Tableaux
    13.1.1 First-Order Tableaux
    13.1.2 Free Variable Tableaux
    13.1.3 First-Order Unification
    13.1.4 Soundness and Completeness of First-Order Tableaux
  13.2 First-Order Resolution
    13.2.1 Resolution Examples
14 Logic Programming
  14.1 Introduction to Logic Programming and PROLOG
  14.2 Programming as Search
  14.3 Logic Programming as Resolution Theorem Proving
  14.4 Topics in Logic Programming

Part IV: Planning
15 Planning, Part I: Framework
  15.1 Planning: Introduction
  15.2 Planning History
  15.3 STRIPS Planning
  15.4 PDDL Language
  15.5 Why Complexity Analysis?
  15.6 Planning Complexity
  15.7 Conclusion
16 Planning, Part II: Algorithms
  16.1 Introduction
  16.2 How to Relax
  16.3 Delete Relaxation
  16.4 The h+ Heuristic
  16.5 Approximating h+
  16.6 Conclusion
17 What did we learn in AI 1?
18 Overview over AI-II
  18.1 What is AI?
  18.2 AI is here today
  18.3 AI in the KWARC Group
  18.4 AI-II: Advanced Rational Agents
19 Administrativa

Part V: Uncertain Knowledge and Reasoning
20 Probabilistic Reasoning, Part I: Basics
  20.1 Introduction
  20.2 Unconditional Probabilities
  20.3 Conditional Probabilities
  20.4 Independence
  20.5 Basic Methods
  20.6 Bayes' Rule
  20.7 Conditional Independence
  20.8 The Wumpus World Revisited
  20.9 Conclusion
21 Probabilistic Reasoning, Part II: Bayesian Networks
  21.1 Introduction
  21.2 What is a Bayesian Network?
  21.3 What is the Meaning of a Bayesian Network?
  21.4 Constructing Bayesian Networks
  21.5 Inference in Bayesian Networks
  21.6 Conclusion
22 Making Simple Decisions Rationally
  22.1 Introduction
  22.2 Rational Preferences
  22.3 Utilities and Money
  22.4 Multi-Attribute Utility
  22.5 Decision Networks
  22.6 Recap: Agent Architectures based on Belief States
  22.7 The Value of Information
23 Making Simple Decisions Rationally
  23.1 Introduction
  23.2 Rational Preferences
  23.3 Utilities and Money
  23.4 Multi-Attribute Utility
  23.5 Decision Networks
  23.6 Recap: Agent Architectures based on Belief States
  23.7 The Value of Information
24 Temporal probability models
  24.1 Modeling Time and Uncertainty
  24.2 Inference: Filtering, Prediction, and Smoothing
  24.3 Hidden Markov Models
  24.4 Dynamic Bayesian Networks
25 Making Complex decisions
  25.1 Sequential Decision Problems
  25.2 Value/Policy Iteration
  25.3 Partially Observable MDPs

Part VI: Machine Learning
26 Learning from Observations
  26.1 Forms of Learning
  26.2 Inductive Learning
  26.3 Learning Decision Trees
  26.4 Information Theory
  26.5 Evaluating and Choosing the Best Hypothesis
  26.6 Computational Learning Theory
  26.7 Regression and Classification with Linear Models
  26.8 Artificial Neural networks
  26.9 Support Vector Machines
27 Knowledge in Learning
  27.1 Logical Formulations of Learning
  27.2 Explanation-Based Learning
  27.3 Relevance-Based Learning
  27.4 Inductive Logic Programming
28 Statistical Learning

Chapter 1: Overview over the Course

Plot of this Course
- Today: Motivation, Admin, and find out what you already know
- What is Artificial Intelligence?
- What has AI already achieved?
- a (very) quick walk through the topics
- how can you get involved with AI at KWARC

1.1 What is AI?

What is Artificial Intelligence?
- Definition 1.1.1 (According to Wikipedia): Artificial Intelligence (AI) is intelligence exhibited by machines.
- Definition 1.1.2 (also): Artificial Intelligence (AI) is a sub-field of Computer Science that is concerned with the automation of intelligent behavior.
- BUT: it is already difficult to define "intelligence" precisely.
- Elaine Rich: AI studies how we can make the computer do things that humans can still do better at the moment.

What is Artificial Intelligence?
- Elaine Rich: AI studies how we can make the computer do things that humans can still do better at the moment.
- This needs a combination of
  - the ability to learn
  - inference
  - perception
  - language understanding
  - emotion

1.2 AI is here today

Artificial Intelligence is here today
- in outer space
  - space systems need autonomous control: remote control is impossible due to time lag
- in artificial limbs
  - the user controls the prosthesis via existing nerves and can e.g. grip a sheet of paper.
- in household appliances
  - The iRobot Roomba vacuums, mops, and sweeps in corners, ..., parks, charges, and discharges.
  - general robotic household help is on the horizon.
- in hospitals
  - in the USA, 90% of prostate operations are carried out by RoboDoc
  - Paro is a cuddly robot that eases solitude in nursing homes.
- for safety/security
  - e.g. Intel verifies the correctness of all chips after the "Pentium 5 disaster"

And here's what you all have been waiting for ...
- AlphaGo is a program developed by Google DeepMind to play the board game Go.
- In March 2016, it beat Lee Sedol in a five-game match, the first time a computer Go program had beaten a 9-dan professional without handicaps.

The AI Conundrum
- Observation: Reserving the term "Artificial Intelligence" has been quite a land-grab.
- But: researchers at the Dartmouth Conference (1956) really thought they would solve AI in two/three decades.
- Consequence: AI still asks the big questions.
- Another Consequence: AI as a field is an incubator for many innovative technologies.
- AI Conundrum: Once AI solves a subfield, it is called "Computer Science". (it becomes a separate subfield of CS)
- Example 1.2.1: Functional/Logic Programming, Automated Theorem Proving, Planning, Machine Learning, Knowledge Representation, ...
- Still a Consequence: AI research was alternately flooded with money and cut off brutally.

1.3 Two Ways to Attack the AI Problem

The field of Artificial Intelligence (AI) is an engineering field at the intersection of computer science (logic, programming, ...), cognitive science (psychology), and linguistics (natural language understanding). [EdNote 3: continue, this is not complete yet.]

There are currently two avenues of attack to this problem. [EdNote 4: continue]

Two ways of reaching Artificial Intelligence?
- Two avenues of attack for the problem: knowledge-based and statistical techniques (they are complementary)

                         Coverage: narrow                  Coverage: wide
    Analysis: deep       Knowledge-based (we are here)     Not there yet (cooperation?)
    Analysis: shallow    no-one wants this                 Statistical Methods (applications)

- We will cover foundational methods of deep processing in this semester and shallow ones in the next.
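To make the two avenues a little more concrete, here is a minimal illustrative sketch (not taken from the notes): a knowledge-based "agent" that classifies messages with a hand-written expert rule, next to a statistical one that learns word counts from labelled examples. The function names, the word lists, and the tiny data set are all hypothetical choices for this example.

```python
# Illustrative sketch: knowledge-based vs. statistical classification of a message.
from collections import Counter

def rule_based_is_spam(message: str) -> bool:
    """Knowledge-based: a hand-written rule encoding expert knowledge."""
    suspicious = {"winner", "prize", "free", "click"}
    return any(word in suspicious for word in message.lower().split())

def train_word_counts(examples):
    """Statistical: learn word frequencies from labelled example messages."""
    counts = {True: Counter(), False: Counter()}
    for message, is_spam in examples:
        counts[is_spam].update(message.lower().split())
    return counts

def statistical_is_spam(message: str, counts) -> bool:
    """Score a message by how often its words occurred in spam vs. non-spam."""
    score = sum(counts[True][w] - counts[False][w] for w in message.lower().split())
    return score > 0

examples = [("win a free prize now", True),
            ("free prize click here", True),
            ("meeting notes attached", False),
            ("agenda for the lecture", False)]

counts = train_word_counts(examples)
msg = "click to claim your prize"
print(rule_based_is_spam(msg), statistical_is_spam(msg, counts))  # True True
```

The contrast is the point: the first agent's competence is written down by a human (deep, narrow), the second's is induced from data (shallow, wide).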
To get this out of the way ...
- AlphaGo = search + neural networks
- we do search this semester and cover neural networks in KI-2.
- I will explain AlphaGo a bit in the "Adversarial Search" chapter.
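As a preview of the search half of that equation, here is a minimal minimax sketch on a toy game. This is an assumed illustration, not AlphaGo's actual algorithm (AlphaGo guides a Monte-Carlo tree search with learned neural-network evaluations); the NimGame toy and the interface it exposes are hypothetical choices for this example.

```python
# Minimal minimax sketch: exhaustively computes the value of a game state.
def minimax(game, state, maximizing: bool) -> int:
    """Game-theoretic value of `state` for the maximizing player."""
    if game.is_terminal(state):
        return game.utility(state, maximizing)
    values = [minimax(game, s, not maximizing) for s in game.successors(state)]
    return max(values) if maximizing else min(values)

class NimGame:
    """Toy game: players alternately take 1 or 2 sticks; taking the last stick wins."""
    def is_terminal(self, sticks):
        return sticks == 0
    def utility(self, sticks, maximizing):
        # No sticks left: the player who just moved took the last stick and won,
        # so the side whose turn it is now has lost.
        return -1 if maximizing else +1
    def successors(self, sticks):
        return [sticks - k for k in (1, 2) if k <= sticks]

print(minimax(NimGame(), 4, maximizing=True))  # +1: the first player can force a win
```

Minimax of this exhaustive kind only works for tiny games; the Adversarial Search chapter adds evaluation functions, alpha-beta pruning, and MCTS to make it scale.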
1.4 AI Topics Covered

Topics of AI-1 (Winter Semester)
- Getting Started
  - What is Artificial Intelligence (situating ourselves)
  - Intelligent Agents (a unifying framework)
- Problem Solving
  - Problem Solving and Search
  - Constraint Satisfaction Problems
  - Game playing (Adversarial Search)
- Knowledge and Reasoning
  - Formal Logic as the Mathematics of Meaning
  - Logic Programming
- Planning
  - Planning
  - Planning and Acting in the real world

Topics of AI-2 (Summer Semester)
- Uncertain Knowledge and Reasoning
  - Uncertainty
  - Probabilistic Reasoning
  - Making Decisions
- Learning
  - Learning from Observations
  - Knowledge in Learning
  - Statistical Learning Methods
  - Reinforcement Learning
  - Deep Learning
- Communicating, Perceiving, and Acting (if we have time)
  - Communication
  - Probabilistic Language Processing
  - Perception
  - Robotics

1.5 Strong vs. Narrow AI

Strong AI vs. Narrow AI
- Definition 1.5.1: With the term narrow AI (also weak AI, instrumental AI, applied AI) we refer to the use of software to study or accomplish specific problem-solving or reasoning tasks (e.g. playing chess, controlling elevators, composing music, ...).
- Definition 1.5.2: With the term strong AI (also full AI, AGI) we denote the quest for software performing at the full range of human cognitive abilities. Problems requiring strong AI to solve are called AI-complete.
- In short, we can characterize the difference intuitively:
  - narrow AI: what (most) computer scientists think AI is / should be.
  - strong AI: what Hollywood authors think AI is / should be.

A few words on AGI ...
- The conceptual and mathematical framework (agents, environments, etc.) is the same for strong and narrow (weak) AI.
- AGI research focuses mostly on abstract aspects of machine learning (reinforcement learning, neural nets) and decision/game theory ("which goals should an AGI pursue?").
- Academic respectability fluctuates massively and has recently increased (again). (correlates somewhat with AI winters and golden years)
- Public attention is increasing due to talk of "existential risks" (e.g. Hawking, Musk, Bostrom, Yudkowsky, Obama).

AGI Research
- "Famous" research(ers) / organizations
  - MIRI (Machine Intelligence Research Institute, Eliezer Yudkowsky) (formerly known as the Singularity Institute)
  - Future of Humanity Institute Oxford (Nick Bostrom)
  - Google (Ray Kurzweil)
  - AGIRI / OpenCog (Ben Goertzel)
  - petrl.org (People for the Ethical Treatment of Reinforcement Learners). (Obviously somewhat tongue-in-cheek)
- Be highly skeptical about any claims with respect to AGI.

1.6 AI in the KWARC Group

The KWARC Research Group
- Observation: The ability to represent knowledge about the world and to draw logical inferences is one of the central components of intelligent behavior.
- Thus: reasoning components of some form are at the heart of many AI systems.
- KWARC angle: scaling up (web coverage) without dumbing down (too much)
  - content markup instead of full formalization (too tedious)
  - user support and quality control instead of "The Truth" (elusive anyway)
  - use Mathematics as a test tube (Mathematics ≙ Anything Formal)
  - care more about applications than about philosophy (we cannot help getting this right anyway as logicians)
- The KWARC group was established at Jacobs University in 2004 and moved to FAU Erlangen in 2016.
- see http://kwarc.info for projects, publications, and links

Overview: KWARC Research and Projects
