



Expert Systems

Intelligent Control and Cognitive Systems brings you...

Cognitive Architectures
Joanna J. Bryson, University of Bath, United Kingdom

From Last Week
• Combinatorics is the problem; search is the only solution.
• The task of intelligence is to focus search.
• Called bias (learning) or constraint (planning).
• Most 'intelligent' behavior has little or no real-time search (non-cognitive) (c.f. Brooks, IJCAI 1991).
• For artificial intelligence, most focus comes from design.

Architectures
• What kinds of parts does the system need? – Ontology
• How should those parts be put together? – Development methodology
• How exactly is the whole thing arranged? – Architecture

"Architectures"
• Like reactive planning, the term cognitive architecture doesn't quite mean what its component words do.
• People have been looking for a generic plan for building "real" (human-like) AI.
• This used to be a popular area of research; it now gets fewer publications.
• Nevertheless, its evolutionary history tells us something about what worked and what didn't.

What Worked
• The past does not necessarily predict the future, particularly in AI.
• Changes in hardware and other tech change what is possible.

Cognitive Architecture
• Where do you put the cognition?
• Really: how do you bias / constrain / focus cognition (learning, search) so it works?

Basic Unit – the Production
• From sensing to action (c.f. Skinner; conditioning; Witkowski 2007).
• Productions are the basic component of intelligence.
• The problem is choice (search).
• They require an arbitration mechanism.

Production-Based Architectures: Arbitration Mechanisms
• Expert Systems: allow a choice of policies, e.g. recency, utility, random.
• Soar: problem spaces (from GPS), impasses, chunk learning.
• ACT-R: (Bayesian) utility, problem spaces (reluctantly, from Soar/GPS).

Expert Systems
• Idea: encode the knowledge of a domain expert as productions, replacing the expert with AI.
• Big hype in the 1980s; they do still exist, e.g. for checking circuit boards, credit / fraud detection, and device driver code.
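The production-plus-arbitration idea above can be sketched as a toy forward-chaining interpreter. This is a minimal sketch, not any real expert-system shell: the rule names, conditions, and utility values are invented for illustration, and only the three arbitration policies named on the slide (utility, recency, random) are shown.

```python
import random

class Production:
    """A production maps a condition over working memory to an action."""
    def __init__(self, name, condition, action, utility=0.0):
        self.name = name
        self.condition = condition  # callable: set-of-facts -> bool
        self.action = action        # fact (string) asserted when the rule fires
        self.utility = utility      # used by the "utility" arbitration policy
        self.last_fired = -1        # used by the "recency" arbitration policy

def run(productions, memory, policy="utility", steps=10):
    """Forward-chain: match all rules, arbitrate among the conflict set, fire one."""
    for t in range(steps):
        conflict_set = [p for p in productions
                        if p.condition(memory) and p.action not in memory]
        if not conflict_set:
            break                    # quiescence: nothing new to do
        if policy == "utility":      # prefer the highest-utility rule
            chosen = max(conflict_set, key=lambda p: p.utility)
        elif policy == "recency":    # prefer the most recently fired rule
            chosen = max(conflict_set, key=lambda p: p.last_fired)
        else:                        # fall back to a random choice
            chosen = random.choice(conflict_set)
        chosen.last_fired = t
        memory.add(chosen.action)    # firing a rule updates working memory
    return memory

# Invented circuit-board-checking rules, echoing the slide's examples.
rules = [
    Production("board-has-short", lambda m: "solder-bridge" in m,
               "fault-detected", utility=0.9),
    Production("log-check", lambda m: True, "checked", utility=0.1),
]
memory = run(rules, {"solder-bridge"})
print(sorted(memory))  # → ['checked', 'fault-detected', 'solder-bridge']
```

The choice of policy only matters when more than one rule matches; that conflict set is exactly the "problem of choice (search)" the slide refers to.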
• Problem: experts don't know why they do what they do; they tend to report novice knowledge (the last explicit rules they learned).

General Problem Solver
• GPS, written by Newell, Shaw & Simon (1959, CMU), was the first program to separate the specific problem (coded as productions) from the reasoning system.
• Cool early AI, but it suffered from both combinatorial explosion and the Markov assumption.
• Soar was Newell's next try.

Productions in Soar
• Productions operate on a predicate database.
• If productions conflict, Soar declares an impasse, then reasons (searches harder).
• It remembers the resolution: a chunk.

Soar
• Soar has serious engineering.
• "The Evolution of the Soar Cognitive Architecture" (Laird & Rosenbloom 1996) is a favourite AI paper – it admits problems and mistakes.

Version | Year | Implementation  | Major Ideas                                        | Example Systems / Results
Soar 1  | 1982 | Lisp (XAPS 2)   | Weak methods, problem spaces, heuristic search,    | Toy tasks
        |      |                 | production systems, symbol systems                 |
Soar 2  | 1983 | Lisp (OPS5)     | Universal subgoaling, preferences                  | Dypar-Soar
Soar 3  | 1984 | Lisp            | Chunking, general learning                         | R1-Soar
Soar 4  | 1986 | Lisp            | External release                                   | R1-Soar, NL-Soar, UTC
Soar 5  | 1989 | Lisp            | Single state, destructive operators, external tasks| Air-Soar, Hero-Soar
Soar 6  | 1992 | C               | High efficiency                                    | Air-Soar, Instructo-Soar
Soar 7  | 1996 | Tcl/Tk wrapper  | Improved interfaces                                | TacAir-Soar, RWA-Soar
Soar 8  | 1999 | SGIO            | Decision-cycle coherence, substate goal dependency | MOUTBot, QuakeBot

• ≈50 applications – not enough applications / funding for human-like AI.
• One problem: the main application is war games for the US military.

Architecture Lessons (from CMU ➣ Michigan)
• An architecture needs action from perception, and
• further structure to combat combinatorics.
• Dealing with time is hard (Soar 5).

ACT-R
• Learns (and executes) productions.
• For arbitration, it relies on (Bayesian, probabilistic) utility.
• Calls utility "implicit knowledge".

ACT-R Research Programme
• Replicate lots of Cognitive Science results.
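ACT-R's arbitration-by-utility, mentioned above, can be caricatured in a few lines: each matching production has a learned utility, selection maximises utility plus noise, and utilities are updated from reward by difference learning, U ← U + α(R − U), the form ACT-R uses. The production names, reward values, and parameters here are invented, and Gaussian noise stands in for ACT-R's logistic noise.

```python
import random

class ActRProduction:
    def __init__(self, name, utility=0.0):
        self.name = name
        self.utility = utility

def select(conflict_set, noise_sd=0.5, rng=random):
    """Pick the production whose noisy utility is highest (a soft choice)."""
    return max(conflict_set,
               key=lambda p: p.utility + rng.gauss(0.0, noise_sd))

def learn(production, reward, alpha=0.2):
    """Difference learning: move the utility toward the received reward."""
    production.utility += alpha * (reward - production.utility)

press = ActRProduction("press-button", utility=1.0)  # invented example rules
wait = ActRProduction("wait", utility=0.8)

random.seed(0)
for _ in range(50):               # repeatedly choose, then reinforce the choice
    chosen = select([press, wait])
    learn(chosen, reward=2.0 if chosen is press else 0.0)

# With reward only for pressing, its utility climbs toward 2.0
# while waiting decays toward 0.
print(round(press.utility, 2), round(wait.utility, 2))
```

Because utilities are learned statistics rather than hand-coded priorities, this is why the slide says ACT-R treats utility as "implicit knowledge".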
[Figure: the ACT-R architecture mapped onto the brain. The Intentional Module (not identified) and Declarative Module (Temporal/Hippocampus) feed the Goal Buffer (DLPFC) and Retrieval Buffer (VLPFC); Productions (Basal Ganglia) run a cycle of Matching (Striatum), Selection (Pallidum), and Execution (Thalamus); the Visual Buffer (Parietal) and Manual Buffer (Motor) connect the Visual Module (Occipital/Parietal) and Manual Module (Motor/Cerebellum) to the External World.]

• See if the brain does what you think it needs to.
• Win the Rumelhart Prize (John Anderson, 2000).

Architecture Lessons (from CMU Ψ)
• Architectures need productions and problem spaces.
• Real time is hard.
• Grounding in biology is good PR, and may be good science too.
• Being easy to use can be a win.

Spreading Activation Networks
• "Maes Nets" (Adaptive Neural Architecture; Maes 1989, VUB).
• Activation spreads from senses and from goals through a net of actions.
• The most highly activated action acts.

Spreading Activation Networks
• Sound good: easy; brain-like (priming, action potential).
• Still influential (Franklin & Baars 2010; Shanahan 2010).
• But they can't do full action selection: they don't scale, and don't converge on consummatory acts (Tyrrell 1993).

Tyrrell's Extended Rosenblatt & Payton Networks
• Consider all information and all possible actions at all times.
• Favour consummatory actions by a system of weighting.
• Also weight uncertainty (e.g. of memory, temporal discounting).

[Figure (Tyrrell 1993): an Extended Rosenblatt & Payton Free-Flow Hierarchy. Stimuli (distance from den, night, proximity, low health, dirtiness) feed top-level drives (Keep Clean, Sleep in Den, Reproduce), which spread weighted activation (sample weights 0.01–0.30) down through intermediate behaviours (clean, sleep, mate, court, approach/leave den, approach mate, explore) gated by conditions (den in square, receptive mate in square, courted mate in square) to the motor actions (Clean, Sleep, Mate, Court, Move N/NE/E/SE/S/SW/W/NW). A legend distinguishes small negative, zero, small positive, positive, and large positive (1.0) activations.]

Tyrrell's Analysis
• Compared all the leading architectures.
• Discovered many weren't practical.
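The spreading-activation scheme above can be sketched minimally: activation flows into each action from matching senses and from the goals it achieves, and the most highly activated executable action acts. The network, weights, and action names are invented for illustration; Maes's actual model also spreads activation through predecessor, successor, and conflicter links, with thresholds and decay.

```python
# Toy spreading-activation action selection, loosely after Maes (1989).
actions = {
    # action: (preconditions, goals it achieves) -- all invented
    "approach-food": ({"see-food"}, set()),
    "eat":           ({"at-food"}, {"fed"}),
    "sleep":         ({"at-den"},  {"rested"}),
}

def select_action(senses, goals, sense_w=1.0, goal_w=2.0):
    """Spread activation into each action from current senses and active
    goals, then let the most activated *executable* action act."""
    activation = {}
    for name, (preconds, achieves) in actions.items():
        a = sense_w * len(preconds & senses)   # energy from matching senses
        a += goal_w * len(achieves & goals)    # energy flowing back from goals
        activation[name] = a
    # Only actions whose preconditions are all sensed may act.
    executable = [n for n in actions if actions[n][0] <= senses]
    return max(executable, key=lambda n: activation[n]) if executable else None

print(select_action(senses={"see-food", "at-den"}, goals={"rested"}))
```

Even this toy shows the failure mode Tyrrell identified: activation arrives at an action from senses and goals indiscriminately, so nothing guarantees convergence on the consummatory act rather than an appetitive one.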
• Hoped to be "fair" by having parameters learned with a GA.
• Discovered this wasn't tractable.
• Went into oceanography after his PhD.

Subsumption Architecture (Brooks 1986)
• Emphasis on sensing to action (via Augmented FSMs).
• Very complicated, distributed arbitration.
• No learning.
• Worked.

Architecture Lessons (Subsumption)
• Action from perception can provide the further structure – modules (behaviors).
• Modules also support iterative development / continuous integration.
• Real time should be a core organising principle – start in the real world.
• Good ideas can carry bad ideas a long way (no learning, hard action selection).

Architecture Lesson
• Goal ordering needs to be flexible.
• Maybe spreading activation is good for this.

SA: Layers vs. Behaviours
• The relationship is not evident except in development.

Layered or Hybrid Architectures
1. Incorporate behaviors/modules (action from sensing) as "smart" primitives.
2. Use hierarchical dynamic plans for behavior sequencing.
3. (Allegedly) some have an automated planner that makes plans for layer 2.
• Examples: Firby / RAPs / 3T ('97); PRS (1992–2000); Hexmoor '95; Gat '91–'98.

Beliefs, Desires, Intentions (BDI)
• Beliefs: predicates.
• Desires: goals and their related dynamic plans.
• Intentions: the current goal.

Procedural Reasoning System
• BDI.
• And reactive (responds to emergencies by changing intentions).
• Er... once or twice (Bryson, ATAL 2000).

Architecture Lessons
• Structured dynamic plans make it easier to get your robot to do complicated stuff.
• Automated planning (or, for Soar, chunking / learning) is seldom actually used.
• To facilitate that automated planning, modularity is often compromised (Bryson, JETAI 2000).

Soar as a 3LA
• J. Laird & P. Rosenbloom, "The Evolution of the Soar Cognitive Architecture", in Mind Matters, D. Steier and T. Mitchell, eds., 1996.

Architecture Lessons
• Structured dynamic plans make it easier to get your robot to do complicated stuff.
• Automated planning (or, for Soar, chunking / learning) is seldom actually used.
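The subsumption layering described above can be sketched in a few lines. This is a hedged toy, not Brooks's AFSM machinery: real subsumption suppresses and inhibits signals on wires between augmented finite-state machines, whereas here a higher layer simply preempts the layers below it; the layer and command names are invented.

```python
# Toy subsumption-style control, loosely after Brooks (1986).

def wander(sensors):
    """Layer 0: default behaviour, always produces an output."""
    return "move-forward"

def avoid(sensors):
    """Layer 1: produces an output only when an obstacle is sensed."""
    if sensors.get("obstacle"):
        return "turn-left"
    return None

def act(sensors, layers=(avoid, wander)):
    """Higher layers come first: the first layer with an output
    subsumes (suppresses) everything below it."""
    for layer in layers:
        command = layer(sensors)
        if command is not None:
            return command

print(act({"obstacle": True}))   # the higher layer wins
print(act({"obstacle": False}))  # control falls through to wander
```

Because each layer is a complete sense-to-act loop, new layers can be added without touching old ones – the iterative-development lesson on the slide above.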
• The military turns chunking off, because more productions slow down the system.
• "Teaching by brain surgery" – programming, not learning, in real, installed systems.

CogAff (Davis & Sloman 1995)
• Reflection on top.
• Sense and action separated.

CogAff (Sloman 2000)
• Reflection on top; sense and action separated.
• Hierarchy in action selection; goal swapping (alarms).

CogAff (current web version)
• Reflection on top; sense and action separated.
• Hierarchy in action selection; goal swapping (now reactive).

Separate Sense & Action
• Something we higher mammals do.
• Central sulcus.
• A chance for cognition.
• (Pictures from Carlson.)

Architecture Lessons (CogAff)
• Maybe you don't really want productions as your basic representation – you may want to come between a sense and an act sometimes.
• Your architecture looks very different if you really worry about adult human linguistic / literature-level behaviour, rather than just making something work.

Contemporary Architectures
• Currently people talk more about an architecture for a system, not an "architecture" meaning a generic development methodology + ontology.
• But the topic may come back again.
• And the ontologies and histories are still useful.

The iCub Architecture (Vernon 2010)

Summary
• Architectures assume an ontology of what intelligence needs, and a development methodology.
• Architectures describe how the necessary parts should be connected.
• Cognitive architectures are often identified with working code – action selection systems.