Agent-based intelligent systems

Dr. Benjamin Clark, United States, Teacher
Published: 21-07-2017
Artificial Intelligence 2: AI Agents

Ways of Thinking About AI
- Language
  - Notions and assumptions common to all AI projects
  - A (slightly) philosophical way of looking at AI programs
  - "Autonomous Rational Agents", following Russell and Norvig
- Design considerations
  - An extension to systems-engineering considerations
  - High-level things we should worry about before hacking away at code
  - Internal concerns, external concerns, evaluation

Agents
Taking the approach of Russell and Norvig (Chapter 2):

  An agent is anything that can be viewed as perceiving its environment
  through sensors and acting upon that environment through effectors.

This definition includes robots, humans, and programs.

Examples of Agents

              Humans       Programs                   Robots
  Sensors     senses       keyboard, mouse, dataset   cameras, pads
  Effectors   body parts   monitor, speakers, files   motors, limbs

Rational Agents
- A rational agent is one that does the right thing
- We need to be able to assess the agent's performance, and that assessment
  should be independent of any internal measures
- Ask yourself: has the agent acted rationally? This is not just a matter of
  how well it does at a task
- First consideration: evaluation of rationality

Thought Experiment: Al Capone
- Capone was convicted for tax evasion. Were the police acting rationally?
- We must assess an agent's rationality in terms of:
  - The task it is meant to undertake (convict the guilty / remove criminals)
  - Its experience of the world (Capone was guilty, but there was no evidence)
  - Its knowledge of the world (it could not convict him of murder)
  - The actions available to it (convict for tax evasion, try for murder)
- Possible conclusion: the police were acting rationally (or were they?)
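The agent definition above (percepts in via sensors, actions out via effectors) can be sketched in code. This is a minimal, illustrative table-driven agent in the Russell and Norvig style; the class name, the vacuum-world percepts, and the lookup table are all assumptions made for the example, not part of the slides.

```python
class TableDrivenAgent:
    """Chooses an action by looking up the full percept sequence in a table."""

    def __init__(self, table):
        self.table = table      # maps percept sequences (tuples) to actions
        self.percepts = []      # everything sensed so far

    def act(self, percept):
        self.percepts.append(percept)
        # Fall back to a no-op when the sequence is not in the table.
        return self.table.get(tuple(self.percepts), "no-op")


# Toy example: a two-location vacuum world with locations A and B.
table = {
    (("A", "dirty"),): "suck",
    (("A", "clean"),): "right",
    (("A", "clean"), ("B", "dirty")): "suck",
}
agent = TableDrivenAgent(table)
print(agent.act(("A", "clean")))  # -> right
print(agent.act(("B", "dirty")))  # -> suck
```

As the chess discussion later in these notes points out, such lookup tables explode combinatorially, which is why real agent programs compute actions rather than store them.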
Autonomy in Agents
- The autonomy of an agent is the extent to which its behaviour is
  determined by its own experience
- Extremes:
  - No autonomy: the agent ignores the environment/data
  - Complete autonomy: the agent must act randomly / has no program
- Example: a baby learning to crawl
- Ideal: design agents to have some autonomy; it may be good for them to
  become more autonomous over time

Running Example: the RHINO Robot Museum Tour Guide
- A museum guide in Bonn
- Two tasks to perform:
  - Give guided tours around the exhibits
  - Provide information on each exhibit
- Very successful: 18.6 kilometres travelled, 47 hours of operation, a 50%
  attendance increase, and only 1 tiny mistake (no injuries)

Internal Structure
- Second set of considerations:
  - Architecture and program
  - Knowledge of the environment
  - Reflexes
  - Goals
  - Utility functions

Architecture and Program
- Program: the method of turning environmental input into actions
- Architecture: the hardware/software (OS etc.) on which the agent's
  program runs
- RHINO's architecture:
  - Sensors (infrared, sonar, tactile, laser)
  - Processors (3 onboard, 3 more connected by wireless Ethernet)
- RHINO's program:
  - Low level: probabilistic reasoning, vision
  - High level: problem solving, planning (first-order logic)

Knowledge of Environment
- Knowledge of the environment (the world) is different from sensory
  information coming from the environment
- World knowledge can be (pre-)programmed in, and can also be
  updated/inferred from sensory information
- The choice of actions is informed by knowledge of:
  - The current state of the world
  - Previous states of the world
  - How its actions change the world
- Example: a chess agent
  - Its world knowledge is the board state (all the pieces)
  - Its sensory information is the opponent's move
  - Its own moves also change the board state

RHINO's Environment Knowledge
- Programmed knowledge: the layout of the museum (doors, exhibits,
  restricted areas)
- Sensed knowledge: people and objects (e.g. chairs) moving around
- Effect of its actions on the world: RHINO moved nothing explicitly, but
  people followed it around (so it did move people)

Reflexes
- An action on the world taken in response only to a sensor input, not in
  response to world knowledge
- Humans: flinching, blinking
- Chess: openings, endings
  - A lookup table is not a good idea in general: roughly 35^100 entries
    would be required for the entire game
- RHINO: no reflexes? Dangerous, because people get everywhere

Goals
- Always need to think hard about what the goal of an agent is
- Does the agent have internal knowledge about its goal?
  - Obviously not the goal itself, but some of its properties
- Goal-based agents use knowledge about a goal to guide their actions,
  e.g. in search and planning
- RHINO:
  - Goal: get from one exhibit to another
  - Knowledge about the goal: whereabouts it is
  - It needs this knowledge to guide its actions (movements)

Utility Functions
- Knowledge of a goal may be difficult to pin down, e.g. checkmate in chess
- But some agents have localised measures:
  - Utility functions measure the value of world states
  - Choose the action which best improves utility (rational)
  - In search, this is "best-first"
- RHINO used various utilities to guide its search for a route
  - The main one: distance from the target exhibit
  - Also: the density of people along the path

Details of the Environment
- Must take into account some qualities of the world
- Imagine: a robot in the real world, or a software agent dealing with web
  data streaming in
- Third set of considerations:
  - Accessibility, determinism
  - Episodes
  - Dynamic/static, discrete/continuous

Accessibility of Environment
- Is everything the agent requires to choose its actions available to it
  via its sensors? If so, the environment is fully accessible
- If not, parts of the environment are inaccessible, and the agent must
  make informed guesses about the world
- RHINO:
  - There were "invisible" objects which couldn't be sensed, including
    glass cases and bars at particular heights
  - The software was adapted to take this into account

Determinism in the Environment
- Does the change in world state depend only on the current state and the
  agent's action?
- Non-deterministic environments have aspects beyond the control of the
  agent, so utility functions have to guess at changes in the world
- A robot in a maze is deterministic: whatever it does, the maze remains
  the same
- RHINO was non-deterministic: people moved chairs to block its path

Episodic Environments
- Is the choice of the current action dependent on previous actions? If
  not, the environment is episodic
- In non-episodic environments the agent has to plan ahead: the current
  choice will affect future actions
- RHINO:
  - Its short-term goal is episodic: getting to an exhibit does not depend
    on how it got to the current one
  - Its long-term goal is non-episodic: as a tour guide, it cannot return
    to an exhibit already visited on a tour

Static or Dynamic Environments
- Static environments don't change while the agent is deliberating over
  what to do
- Dynamic environments do change, so the agent should/could consult the
  world when choosing actions; alternatively, it can anticipate the change
  during deliberation, or make its decisions very fast
- RHINO: fast decision making (when planning a route), but people are very
  quick on their feet
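The utility-based action selection described in the Utility Functions notes (pick the action whose resulting state has the best utility, as in best-first search) can be sketched as follows. The RHINO-style utility here, combining distance to a target exhibit with crowd density, is an illustrative assumption for the example, not RHINO's actual code.

```python
def choose_action(state, actions, result, utility):
    """Return the action leading to the highest-utility successor state."""
    return max(actions, key=lambda a: utility(result(state, a)))


# Toy grid example: the state is the robot's (x, y) position.
target = (5, 5)                   # assumed target exhibit location
crowd = {(1, 0): 3, (0, 1): 0}    # assumed people counts at each cell

def utility(state):
    x, y = state
    distance = abs(x - target[0]) + abs(y - target[1])  # Manhattan distance
    # Lower cost (distance plus crowding) means higher utility.
    return -(distance + crowd.get(state, 0))

def result(state, action):
    dx, dy = {"up": (0, 1), "down": (0, -1),
              "left": (-1, 0), "right": (1, 0)}[action]
    return (state[0] + dx, state[1] + dy)

# Both moves get equally close to the target, but "right" walks into a
# crowd of 3 people, so the agent prefers "up".
print(choose_action((0, 0), ["up", "right"], result, utility))  # -> up
```

Note this is a greedy, one-step-lookahead choice; in a dynamic environment like the museum, the utility values (the crowd counts here) would have to be re-sensed on every step.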