Lecture Notes on Human–Computer Interaction

Published Date:11-07-2017
HUMAN–COMPUTER INTERACTION, THIRD EDITION
Alan Dix, Janet Finlay, Gregory D. Abowd, Russell Beale

Much has changed since the first edition of Human–Computer Interaction was published. Ubiquitous computing and rich sensor-filled environments are finding their way out of the laboratory, not just into movies but also into our workplaces and homes. The computer has broken out of its plastic and glass bounds, providing us with networked societies where personal computing devices from mobile phones to smart cards fill our pockets and electronic devices surround us at home and work. The web too has grown from a largely academic network into the hub of business and everyday lives. As the distinctions between the physical and the digital, and between work and leisure, start to break down, human–computer interaction is also changing radically.

The excitement of these changes is captured in this new edition, which also looks forward to other emerging technologies. However, the book is firmly rooted in strong principles and models independent of the passing technologies of the day: these foundations will be the means by which today's students will understand tomorrow's technology.

New to this edition:
- A revised structure, reflecting the growth of HCI as a discipline, separates out basic material suitable for introductory courses from more detailed models and theories.
- New chapter on interaction design adds material on scenarios and basic navigation design.
- New chapter on universal design, substantially extending the coverage of this material in the book.
- Updated and extended treatment of socio/contextual issues.
- Extended and new material on novel interaction, including updated ubicomp material, designing experience, physical sensors and a new chapter on rich interaction.
- Updated material about the web, including dynamic content.
- Relaunched website including case studies, WAP access and search.

The third edition of Human–Computer Interaction can be used for introductory and advanced courses on HCI, Interaction Design, Usability or Interactive Systems Design. It will also prove an invaluable reference for professionals wishing to design usable computing devices.

Accompanying the text is a comprehensive website containing a broad range of material for instructors, students and practitioners, a full text search facility for the book, links to many sites of additional interest and much more: go to www.hcibook.com

Alan Dix is Professor in the Department of Computing, Lancaster, UK. Janet Finlay is Professor in the School of Computing, Leeds Metropolitan University, UK. Gregory D. Abowd is Associate Professor in the College of Computing and GVU Center at Georgia Tech, USA. Russell Beale is lecturer at the School of Computer Science, University of Birmingham, UK.

www.pearson-books.com
Cover illustration by Peter Gudynas

INTRODUCTION

WHY HUMAN–COMPUTER INTERACTION?

In the first edition of this book we wrote the following:

This is the authors' second attempt at writing this introduction. Our first attempt fell victim to a design quirk coupled with an innocent, though weary and less than attentive, user. The word-processing package we originally used to write this introduction is menu based. Menu items are grouped to reflect their function. The 'save' and 'delete' options, both of which are correctly classified as file-level operations, are consequently adjacent items in the menu. With a cursor controlled by a trackball it is all too easy for the hand to slip, inadvertently selecting delete instead of save. Of course, the delete option, being well thought out, pops up a confirmation box allowing the user to cancel a mistaken command.
Unfortunately, the save option produces a very similar confirmation box – it was only as we hit the 'Confirm' button that we noticed the word 'delete' at the top... Happily this word processor no longer has a delete option in its menu, but unfortunately, similar problems to this are still an all too common occurrence.

Errors such as these, resulting from poor design choices, happen every day. Perhaps they are not catastrophic: after all nobody's life is endangered nor is there environmental damage (unless the designer happens to be nearby or you break something in frustration). However, when you lose several hours' work with no written notes or backup and a publisher's deadline already a week past, 'catastrophe' is certainly the word that springs to mind.

Why is it then that when computers are marketed as 'user friendly' and 'easy to use', simple mistakes like this can still occur? Did the designer of the word processor actually try to use it with the trackball, or was it just that she was so expert with the system that the mistake never arose? We hazard a guess that no one tried to use it when tired and under pressure. But these criticisms are not levied only on the designers of traditional computer software. More and more, our everyday lives involve programmed devices that do not sit on our desk, and these devices are just as unusable. Exactly how many VCR designers understand the universal difficulty people have trying to set their machines to record a television program? Do car radio designers actually think it is safe to use so many knobs and displays that the driver has to divert attention away from the road completely in order to tune the radio or adjust the volume? Computers and related devices have to be designed with an understanding that people with specific tasks in mind will want to use them in a way that is seamless with respect to their everyday work.
To do this, those who design these systems need to know how to think in terms of the eventual users' tasks and how to translate that knowledge into an executable system. But there is a problem with trying to teach the notion of designing computers for people. All designers are people and, most probably, they are users as well. Isn't it therefore intuitive to design for the user? Why does it need to be taught when we all know what a good interface looks like? As a result, the study of human–computer interaction (HCI) tends to come late in the designer's training, if at all. The scenario with which we started shows that this is a mistaken view; it is not at all intuitive or easy to design consistent, robust systems that will cope with all manner of user carelessness.

DESIGN FOCUS: Things don't change

It would be nice to think that problems like those described at the start of the Introduction would never happen now. Think again. Look at the MacOS X 'dock' below. It is a fast launch point for applications; folders and files can be dragged there for instant access; and also, at the right-hand side, there sits the trash can. Imagine what happens as you try to drag a file into one of the folders. If your finger accidentally slips whilst the icon is over the trash can – oops!

Happily this is not quite as easy in reality as it looks in the screen shot, since the icons in the dock constantly move around as you try to drag a file into it. This is to make room for the file in case you want to place it in the dock. However, it means you have to concentrate very hard when dragging a file over the dock. We assume this is not a deliberate feature, but it does have the beneficial side effect that users are less likely to throw away a file by accident – whew! In fact it is quite fun to watch a new user trying to throw away a file. The trash can keeps moving as if it didn't want the file in it. Experienced users evolve coping strategies. One user always drags files into the trash from the right-hand side as then the icons in the dock don't move around. So two lessons:

- designs don't always get better
- but at least users are clever.

Screen shot reprinted by permission from Apple Computer, Inc.

The interface is not something that can be plugged in at the last minute; its design should be developed integrally with the rest of the system. It should not just present a 'pretty face', but should support the tasks that people actually want to do, and forgive the careless mistakes. We therefore need to consider how HCI fits into the design process.

Designing usable systems is not simply a matter of altruism towards the eventual user, or even marketing; it is increasingly a matter of law. National health and safety standards constrain employers to provide their workforce with usable computer systems: not just safe but usable. For example, EC Directive 90/270/EEC, which has been incorporated into member countries' legislation, requires employers to ensure the following when designing, selecting, commissioning or modifying software:

- that it is suitable for the task
- that it is easy to use and, where appropriate, adaptable to the user's knowledge and experience
- that it provides feedback on performance
- that it displays information in a format and at a pace that is adapted to the user
- that it conforms to the 'principles of software ergonomics'.

Designers and employers can no longer afford to ignore the user.

WHAT IS HCI?

The term human–computer interaction has only been in widespread use since the early 1980s, but has its roots in more established disciplines. Systematic study of human performance began in earnest at the beginning of the last century in factories, with an emphasis on manual tasks.
The Second World War provided the impetus for studying the interaction between humans and machines, as each side strove to produce more effective weapons systems. This led to a wave of interest in the area among researchers, and the formation of the Ergonomics Research Society in 1949. Traditionally, ergonomists have been concerned primarily with the physical characteristics of machines and systems, and how these affect user performance. Human Factors incorporates these issues, and more cognitive issues as well. The terms are often used interchangeably, with Ergonomics being the preferred term in the United Kingdom and Human Factors in the English-speaking parts of North America. Both of these disciplines are concerned with user performance in the context of any system, whether computer, mechanical or manual. As computer use became more widespread, an increasing number of researchers specialized in studying the interaction between people and computers, concerning themselves with the physical, psychological and theoretical aspects of this process. This research originally went under the name man–machine interaction, but this became human–computer interaction in recognition of the particular interest in computers and the composition of the user population.

Another strand of research that has influenced the development of HCI is information science and technology. Again the former is an old discipline, pre-dating the introduction of technology, and is concerned with the management and manipulation of information within an organization. The introduction of technology has had a profound effect on the way that information can be stored, accessed and utilized and, consequently, a significant effect on the organization and work environment. Systems analysis has traditionally concerned itself with the influence of technology in the workplace, and fitting the technology to the requirements and constraints of the job.
These issues are also the concern of HCI. HCI draws on many disciplines, as we shall see, but it is in computer science and systems design that it must be accepted as a central concern. For all the other disciplines it can be a specialism, albeit one that provides crucial input; for systems design it is an essential part of the design process. From this perspective, HCI involves the design, implementation and evaluation of interactive systems in the context of the user's task and work.

However, when we talk about human–computer interaction, we do not necessarily envisage a single user with a desktop computer. By user we may mean an individual user, a group of users working together, or a sequence of users in an organization, each dealing with some part of the task or process. The user is whoever is trying to get the job done using the technology. By computer we mean any technology ranging from the general desktop computer to a large-scale computer system, a process control system or an embedded system. The system may include non-computerized parts, including other people. By interaction we mean any communication between a user and computer, be it direct or indirect. Direct interaction involves a dialog with feedback and control throughout performance of the task. Indirect interaction may involve batch processing or intelligent sensors controlling the environment. The important thing is that the user is interacting with the computer in order to accomplish something.

WHO IS INVOLVED IN HCI?

HCI is undoubtedly a multi-disciplinary subject.
The ideal designer of an interactive system would have expertise in a range of topics: psychology and cognitive science to give her knowledge of the user's perceptual, cognitive and problem-solving skills; ergonomics for the user's physical capabilities; sociology to help her understand the wider context of the interaction; computer science and engineering to be able to build the necessary technology; business to be able to market it; graphic design to produce an effective interface presentation; technical writing to produce the manuals, and so it goes on. There is obviously too much expertise here to be held by one person (or indeed four), perhaps even too much for the average design team. Indeed, although HCI is recognized as an interdisciplinary subject, in practice people tend to take a strong stance on one side or another. However, it is not possible to design effective interactive systems from one discipline in isolation. Input is needed from all sides. For example, a beautifully designed graphic display may be unusable if it ignores dialog constraints or the psychological limitations of the user.

In this book we want to encourage the multi-disciplinary view of HCI but we too have our 'stance', as computer scientists. We are interested in answering a particular question. How do principles and methods from each of these contributing disciplines in HCI help us to design better systems? In this we must be pragmatists rather than theorists: we want to know how to apply the theory to the problem rather than just acquire a deep understanding of the theory. Our goal, then, is to be multi-disciplinary but practical. We concentrate particularly on computer science, psychology and cognitive science as core subjects, and on their application to design; other disciplines are consulted to provide input where relevant.

THEORY AND HCI

Unfortunately for us, there is no general and unified theory of HCI that we can present.
Indeed, it may be impossible ever to derive one; it is certainly out of our reach today. However, there is an underlying principle that forms the basis of our own views on HCI, and it is captured in our claim that people use computers to accomplish work. This outlines the three major issues of concern: the people, the computers and the tasks that are performed. The system must support the user's task, which gives us a fourth focus, usability: if the system forces the user to adopt an unacceptable mode of work then it is not usable.

There are, however, those who would dismiss our concentration on the task, saying that we do not even know enough about a theory of human tasks to support them in design. There is a good argument here (to which we return in Chapter 15). However, we can live with this confusion about what real tasks are because our understanding of tasks at the moment is sufficient to give us direction in design.

The user's current tasks are studied and then supported by computers, which can in turn affect the nature of the original task and cause it to evolve. To illustrate, word processing has made it easy to manipulate paragraphs and reorder documents, allowing writers a completely new freedom that has affected writing styles. No longer is it vital to plan and construct text in an ordered fashion, since free-flowing prose can easily be restructured at a later date. This evolution of task in turn affects the design of the ideal system. However, we see this evolution as providing a motivating force behind the system development cycle, rather than a refutation of the whole idea of supportive design.

This word 'task' or the focus on accomplishing 'work' is also problematic when we think of areas such as domestic appliances, consumer electronics and e-commerce.
There are three 'use' words that must all be true for a product to be successful; it must be:

- useful – accomplish what is required: play music, cook dinner, format a document;
- usable – do it easily and naturally, without danger of error, etc.;
- used – make people want to use it, be attractive, engaging, fun, etc.

The last of these has not been a major factor until recently in HCI, but issues of motivation, enjoyment and experience are increasingly important. We are certainly even further from having a unified theory of experience than of task.

The question of whether HCI, or more importantly the design of interactive systems and the user interface in particular, is a science or a craft discipline is an interesting one. Does it involve artistic skill and fortuitous insight or reasoned methodical science? Here we can draw an analogy with architecture. The most impressive structures, the most beautiful buildings, the innovative and imaginative creations that provide aesthetic pleasure, all require inventive inspiration in design and a sense of artistry, and in this sense the discipline is a craft. However, these structures also have to be able to stand up to fulfill their purpose successfully, and to be able to do this the architect has to use science. So it is for HCI: beautiful and/or novel interfaces are artistically pleasing and capable of fulfilling the tasks required – a marriage of art and science into a successful whole. We want to reuse lessons learned from the past about how to achieve good results and avoid bad ones. For this we require both craft and science. Innovative ideas lead to more usable systems, but in order to maximize the potential benefit from the ideas, we need to understand not only that they work, but how and why they work.
This scientific rationalization allows us to reuse related concepts in similar situations, in much the same way that architects can produce a bridge and know that it will stand, since it is based upon tried and tested principles.

The craft–science tension becomes even more difficult when we consider novel systems. Their increasing complexity means that our personal ideas of good and bad are no longer enough; for a complex system to be well designed we need to rely on something more than simply our intuition. Designers may be able to think about how one user would want to act, but how about groups? And what about new media? Our ideas of how best to share workloads or present video information are open to debate and question even in non-computing situations, and the incorporation of one version of good design into a computer system is quite likely to be unlike anyone else's version. Different people work in different ways, whilst different media color the nature of the interaction; both can dramatically change the very nature of the original task. In order to assist designers, it is unrealistic to assume that they can rely on artistic skill and perfect insight to develop usable systems. Instead we have to provide them with an understanding of the concepts involved, a scientific view of the reasons why certain things are successful whilst others are not, and then allow their creative nature to feed off this information: creative flow, underpinned with science; or maybe scientific method, accelerated by artistic insight. The truth is that HCI is required to be both a craft and a science in order to be successful.

HCI IN THE CURRICULUM

If HCI involves both craft and science then it must, in part at least, be taught. Imagination and skill may be qualities innate in the designer or developed through experience, but the underlying theory must be learned.
In the past, when computers were used primarily by expert specialists, concentration on the interface was a luxury that was often relinquished. Now designers cannot afford to ignore the interface in favour of the functionality of their systems: the two are too closely intertwined. If the interface is poor, the functionality is obscured; if it is well designed, it will allow the system's functionality to support the user's task.

Increasingly, therefore, computer science educators cannot afford to ignore HCI. We would go as far as to claim that HCI should be integrated into every computer science or software engineering course, either as a recurring feature of other modules or, preferably, as a module itself. It should not be viewed as an 'optional extra' (although, of course, more advanced HCI options can complement a basic core course). This view is shared by the ACM SIGCHI curriculum development group, who propose a curriculum for such a core course [9]. The topics included in this book, although developed without reference to this curriculum, cover the main emphases of it, and include enough detail and coverage to support specialized options as well. In courses other than computer science, HCI may well be an option specializing in a particular area, such as cognitive modeling or task analysis. Selected use of the relevant chapters of this book can also support such a course.

HCI must be taken seriously by designers and educators if the requirement for additional complexity in the system is to be matched by increased clarity and usability in the interface. In this book we demonstrate how this can be done in practice.

DESIGN FOCUS: Quick fixes

You should expect to spend both time and money on interface design, just as you would with other parts of a system. So in one sense there are no quick fixes. However, a few simple steps can make a dramatic improvement.
Think 'user'
Probably 90% of the value of any interface design technique is that it forces the designer to remember that someone (and in particular someone else) will use the system under construction.

Try it out
Of course, many designers will build a system that they find easy and pleasant to use, and they find it incomprehensible that anyone else could have trouble with it. Simply sitting someone down with an early version of an interface (without the designer prompting them at each step) is enormously valuable. Professional usability laboratories will have video equipment, one-way mirrors and other sophisticated monitors, but a notebook and pencil and a home-video camera will suffice (more about evaluation in Chapter 9).

Involve the users
Where possible, the eventual users should be involved in the design process. They have vital knowledge and will soon find flaws. A mechanical syringe was once being developed and a prototype was demonstrated to hospital staff. Happily they quickly noticed the potentially fatal flaw in its interface. The doses were entered via a numeric keypad: an accidental keypress and the dose could be out by a factor of 10! The production version had individual increment/decrement buttons for each digit (more about participatory design in Chapter 13).

Figure 0.1 Automatic syringe: setting the dose to 1372. The effect of one key slip before and after user involvement

Iterate
People are complicated, so you won't get it right first time. Programming an interface can be a very difficult and time-consuming business. So, the result becomes precious and the builder will want to defend it and minimize changes. Making early prototypes less precious and easier to throw away is crucial. Happily there are now many interface builder tools that aid this process. For example, mock-ups can be quickly constructed using HyperCard on the Apple Macintosh or Visual Basic on the PC.
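To make the syringe example of Figure 0.1 concrete, here is a small sketch of how a single slip plays out under the two dose-entry designs. This is our own illustration, not code from the book; the function names and the simple error model are assumptions.

```python
# Illustrative sketch of the two syringe dose-entry designs (Figure 0.1).

def keypad_slip(intended: str, slip_digit: str) -> int:
    """Keypad entry: one accidental extra keypress appends a digit,
    so the dose is out by roughly a factor of ten."""
    return int(intended + slip_digit)

def per_digit_slip(intended: str, position: int, delta: int) -> int:
    """Per-digit increment/decrement buttons: a slip changes one digit
    by one step, so the error is bounded by that digit's place value."""
    digits = [int(d) for d in intended]
    digits[position] = (digits[position] + delta) % 10
    return int("".join(str(d) for d in digits))

intended_dose = "137"                        # intended dose of 137 units
print(keypad_slip(intended_dose, "2"))       # 1372 -- out by a factor of ten
print(per_digit_slip(intended_dose, 2, +1))  # 138  -- off by a single unit
```

The point of the production design is not that slips stop happening, but that the worst consequence of a single slip is bounded.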
For visual and layout decisions, paper designs and simple models can be used (more about iterative design in Chapter 5).

PART 1: FOUNDATIONS

In this part we introduce the fundamental components of an interactive system: the human user, the computer system itself and the nature of the interactive process. We then present a view of the history of interactive systems by looking at key interaction paradigms that have been significant.

Chapter 1 discusses the psychological and physiological attributes of the user, providing us with a basic overview of the capabilities and limitations that affect our ability to use computer systems. It is only when we have an understanding of the user at this level that we can understand what makes for successful designs.

Chapter 2 considers the computer in a similar way. Input and output devices are described and explained and the effect that their individual characteristics have on the interaction highlighted. The computational power and memory of the computer is another important component in determining what can be achieved in the interaction, whilst due attention is also paid to paper output since this forms one of the major uses of computers and users' tasks today.

Having approached interaction from both the human and the computer side, we then turn our attention to the dialog between them in Chapter 3, where we look at models of interaction. In Chapter 4 we take a historical perspective on the evolution of interactive systems and how they have increased the usability of computers in general.

CHAPTER 1: THE HUMAN

OVERVIEW

- Humans are limited in their capacity to process information. This has important implications for design.
- Information is received and responses given via a number of input and output channels:
  – visual channel
  – auditory channel
  – haptic channel
  – movement.
- Information is stored in memory:
  – sensory memory
  – short-term (working) memory
  – long-term memory.
- Information is processed and applied:
  – reasoning
  – problem solving
  – skill acquisition
  – error.
- Emotion influences human capabilities.
- Users share common capabilities but are individuals with differences, which should not be ignored.

1.1 INTRODUCTION

This chapter is the first of four in which we introduce some of the 'foundations' of HCI. We start with the human, the central character in any discussion of interactive systems. The human, the user, is, after all, the one whom computer systems are designed to assist. The requirements of the user should therefore be our first priority.

In this chapter we will look at areas of human psychology coming under the general banner of cognitive psychology. This may seem a far cry from designing and building interactive computer systems, but it is not. In order to design something for someone, we need to understand their capabilities and limitations. We need to know if there are things that they will find difficult or, even, impossible. It will also help us to know what people find easy and how we can help them by encouraging these things. We will look at aspects of cognitive psychology which have a bearing on the use of computer systems: how humans perceive the world around them, how they store and process information and solve problems, and how they physically manipulate objects.

We have already said that we will restrict our study to those things that are relevant to HCI. One way to structure this discussion is to think of the user in a way that highlights these aspects. In other words, to think of a simplified model of what is actually going on. Many models have been proposed and it is useful to consider one of the most influential in passing, to understand the context of the discussion that is to follow. In 1983, Card, Moran and Newell [56] described the Model Human Processor, which is a simplified view of the human processing involved in interacting with computer systems.
The model comprises three subsystems: the perceptual system, handling sensory stimulus from the outside world, the motor system, which controls actions, and the cognitive system, which provides the processing needed to connect the two. Each of these subsystems has its own processor and memory, although obviously the complexity of these varies depending on the complexity of the tasks the subsystem has to perform. The model also includes a number of principles of operation which dictate the behavior of the systems under certain conditions.

We will use the analogy of the user as an information processing system, but in our model make the analogy closer to that of a conventional computer system. Information comes in, is stored and processed, and information is passed out. We will therefore discuss three components of this system: input–output, memory and processing. In the human, we are dealing with an intelligent information-processing system, and processing therefore includes problem solving, learning, and, consequently, making mistakes. This model is obviously a simplification of the real situation, since memory and processing are required at all levels, as we have seen in the Model Human Processor. However, it is convenient as a way of grasping how information is handled by the human system. The human, unlike the computer, is also influenced by external factors such as the social and organizational environment, and we need to be aware of these influences as well. We will ignore such factors for now and concentrate on the human's information processing capabilities only. We will return to social and organizational influences in Chapter 3 and, in more detail, in Chapter 13.

In this chapter, we will first look at the human's input–output channels, the senses and responders or effectors. This will involve some low-level processing. Secondly, we will consider human memory and how it works.
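The three Model Human Processor subsystems described above can be captured in a toy sketch. This is our own illustration, not code from the book; the cycle times below are the nominal values commonly quoted for the model and should be treated as rough assumptions, not measurements.

```python
# Toy sketch of the Model Human Processor's three subsystems.
from dataclasses import dataclass

@dataclass
class Subsystem:
    name: str
    cycle_ms: int  # nominal processor cycle time in milliseconds

# Perceptual system -> cognitive system -> motor system,
# with commonly quoted nominal cycle times.
PERCEPTUAL = Subsystem("perceptual", 100)
COGNITIVE = Subsystem("cognitive", 70)
MOTOR = Subsystem("motor", 70)

def simple_reaction_ms(path=(PERCEPTUAL, COGNITIVE, MOTOR)) -> int:
    """Estimate a simple reaction time as one cycle through each
    subsystem in turn: perceive the stimulus, decide, respond."""
    return sum(s.cycle_ms for s in path)

print(simple_reaction_ms())  # 240 ms for one perceive-decide-act cycle
```

Even this crude sum shows the value of the model: it gives the designer back-of-the-envelope numbers for how quickly a user can possibly respond, before any interface is built.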
We will then think about how humans perform complex problem solving, how they learn and acquire skills, and why they make mistakes. Finally, we will discuss how these things can help us in the design of computer systems.

1.2 INPUT–OUTPUT CHANNELS

A person's interaction with the outside world occurs through information being received and sent: input and output. In an interaction with a computer the user receives information that is output by the computer, and responds by providing input to the computer – the user's output becomes the computer's input and vice versa. Consequently the use of the terms input and output may lead to confusion so we shall blur the distinction somewhat and concentrate on the channels involved. This blurring is appropriate since, although a particular channel may have a primary role as input or output in the interaction, it is more than likely that it is also used in the other role. For example, sight may be used primarily in receiving information from the computer, but it can also be used to provide information to the computer, for example by fixating on a particular screen point when using an eyegaze system.

Input in the human occurs mainly through the senses and output through the motor control of the effectors. There are five major senses: sight, hearing, touch, taste and smell. Of these, the first three are the most important to HCI. Taste and smell do not currently play a significant role in HCI, and it is not clear whether they could be exploited at all in general computer systems, although they could have a role to play in more specialized systems (smells to give warning of malfunction, for example) or in augmented reality systems. However, vision, hearing and touch are central. Similarly there are a number of effectors, including the limbs, fingers, eyes, head and vocal system.
In the interaction with the computer, the fingers play the primary role, through typing or mouse control, with some use of voice, and eye, head and body position.

Imagine using a personal computer (PC) with a mouse and a keyboard. The application you are using has a graphical interface, with menus, icons and windows. In your interaction with this system you receive information primarily by sight, from what appears on the screen. However, you may also receive information by ear: for example, the computer may 'beep' at you if you make a mistake or to draw attention to something, or there may be a voice commentary in a multimedia presentation. Touch plays a part too in that you will feel the keys moving (also hearing the 'click') or the orientation of the mouse, which provides vital feedback about what you have done. You yourself send information to the computer using your hands, either by hitting keys or moving the mouse. Sight and hearing do not play a direct role in sending information in this example, although they may be used to receive information from a third source (for example, a book, or the words of another person) which is then transmitted to the computer.

In this section we will look at the main elements of such an interaction, first considering the role and limitations of the three primary senses and going on to consider motor control.

1.2.1 Vision

Human vision is a highly complex activity with a range of physical and perceptual limitations, yet it is the primary source of information for the average person. We can roughly divide visual perception into two stages: the physical reception of the stimulus from the outside world, and the processing and interpretation of that stimulus.
On the one hand, the physical properties of the eye and the visual system mean that there are certain things that cannot be seen by the human; on the other, the interpretative capabilities of visual processing allow images to be constructed from incomplete information. We need to understand both stages as both influence what can and cannot be perceived visually by a human being, which in turn directly affects the way that we design computer systems. We will begin by looking at the eye as a physical receptor, and then go on to consider the processing involved in basic vision.

The human eye

Vision begins with light. The eye is a mechanism for receiving light and transforming it into electrical energy. Light is reflected from objects in the world and their image is focussed upside down on the back of the eye. The receptors in the eye transform it into electrical signals which are passed to the brain.

The eye has a number of important components (see Figure 1.1) which we will look at in more detail. The cornea and lens at the front of the eye focus the light into a sharp image on the back of the eye, the retina. The retina is light sensitive and contains two types of photoreceptor: rods and cones.

Rods are highly sensitive to light and therefore allow us to see under a low level of illumination. However, they are unable to resolve fine detail and are subject to light saturation. This is the reason for the temporary blindness we get when moving from a darkened room into sunlight: the rods have been active and are saturated by the sudden light. The cones do not operate either as they are suppressed by the rods. We are therefore temporarily unable to see at all. There are approximately 120 million rods per eye, which are mainly situated towards the edges of the retina. Rods therefore dominate peripheral vision.

Cones are the second type of receptor in the eye. They are less sensitive to light than the rods and can therefore tolerate more light.
There are three types of cone, each sensitive to a different wavelength of light. This allows color vision. The eye has approximately 6 million cones, mainly concentrated on the fovea, a small area of the retina on which images are fixated.

Figure 1.1 The human eye

Although the retina is mainly covered with photoreceptors there is one blind spot where the optic nerve enters the eye. The blind spot has no rods or cones, yet our visual system compensates for this so that in normal circumstances we are unaware of it.

The retina also has specialized nerve cells called ganglion cells. There are two types: X-cells, which are concentrated in the fovea and are responsible for the early detection of pattern; and Y-cells, which are more widely distributed in the retina and are responsible for the early detection of movement. The distribution of these cells means that, while we may not be able to detect changes in pattern in peripheral vision, we can perceive movement.

Visual perception

Understanding the basic construction of the eye goes some way to explaining the physical mechanisms of vision, but visual perception is more than this. The information received by the visual apparatus must be filtered and passed to processing elements which allow us to recognize coherent scenes, disambiguate relative distances and differentiate color. We will consider some of the capabilities and limitations of visual processing later, but first we will look a little more closely at how we perceive size and depth, brightness and color, each of which is crucial to the design of effective visual interfaces.

DESIGN FOCUS: Getting noticed

The extensive knowledge about the human visual system can be brought to bear in practical design. For example, our ability to read or distinguish falls off inversely as the distance from our point of focus increases.
This is due to the fact that the cones are packed more densely towards the center of our visual field. You can see this in the following image. Fixate on the dot in the center. The letters on the left should all be equally readable, those on the right all equally harder.

This loss of discrimination sets limits on the amount that can be seen or read without moving one's eyes. A user concentrating on the middle of the screen cannot be expected to read help text on the bottom line. However, although our ability to discriminate static text diminishes, the rods, which are concentrated more in the outer parts of our visual field, are very sensitive to changes; hence we see movement well at the edge of our vision. So if you want a user to see an error message at the bottom of the screen it had better be flashing! On the other hand, clever moving icons, however impressive they are, will be distracting even when the user is not looking directly at them.

Perceiving size and depth

Imagine you are standing on a hilltop. Beside you on the summit you can see rocks, sheep and a small tree. On the hillside is a farmhouse with outbuildings and farm vehicles. Someone is on the track, walking toward the summit. Below in the valley is a small market town. Even in describing such a scene the notions of size and distance predominate. Our visual system is easily able to interpret the images which it receives to take account of these things. We can identify similar objects regardless of the fact that they appear to us to be of vastly different sizes. In fact, we can use this information to judge distances.

So how does the eye perceive size, depth and relative distances? To understand this we must consider how the image appears on the retina. As we noted in the previous section, reflected light from the object forms an upside-down image on the retina. The size of that image is specified as a visual angle. Figure 1.2 illustrates how the visual angle is calculated.
Figure 1.2 Visual angle

If we were to draw a line from the top of the object to a central point on the front of the eye, and a second line from the bottom of the object to the same point, the visual angle of the object is the angle between these two lines. Visual angle is affected by both the size of the object and its distance from the eye. Therefore if two objects are at the same distance, the larger one will have the larger visual angle. Similarly, if two objects of the same size are placed at different distances from the eye, the furthest one will have the smaller visual angle. The visual angle indicates how much of the field of view is taken by the object. The visual angle measurement is given in either degrees or minutes of arc, where 1 degree is equivalent to 60 minutes of arc, and 1 minute of arc to 60 seconds of arc.

So how does an object's visual angle affect our perception of its size? First, if the visual angle of an object is too small we will be unable to perceive it at all. Visual acuity is the ability of a person to perceive fine detail. A number of measurements have been established to test visual acuity, most of which are included in standard eye tests. For example, a person with normal vision can detect a single line if it has a visual angle of 0.5 seconds of arc. Spaces between lines can be detected at 30 seconds to 1 minute of visual arc. These represent the limits of human visual acuity.

Assuming that we can perceive the object, does its visual angle affect our perception of its size? Given that the visual angle of an object is reduced as it gets further away, we might expect that we would perceive the object as smaller. In fact, our perception of an object's size remains constant even if its visual angle changes. So a person's height is perceived as constant even if they move further from you.
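The geometry just described can be sketched numerically. The function below is a minimal illustration (the function name and the 1.7 m example figure are ours, not the book's): it computes the angle between lines from the top and bottom of an object to a point at the front of the eye, and converts it to minutes of arc using the stated equivalences (1 degree = 60 minutes, 1 minute = 60 seconds).

```python
import math

def visual_angle_arcmin(object_size, distance):
    """Visual angle subtended by an object, in minutes of arc.

    theta = 2 * atan(size / (2 * distance)): the angle between lines
    drawn from the top and bottom of the object to a central point on
    the front of the eye.  Size and distance must use the same units.
    """
    theta_rad = 2 * math.atan(object_size / (2 * distance))
    return math.degrees(theta_rad) * 60  # 1 degree = 60 minutes of arc

# Doubling the distance roughly halves the visual angle, even though
# the perceived size of the person stays constant (size constancy).
near = visual_angle_arcmin(1.7, 10)  # a 1.7 m person seen from 10 m
far = visual_angle_arcmin(1.7, 20)   # the same person seen from 20 m
```

For small angles the halving is almost exact, which is why visual angle alone cannot explain why the person does not appear to shrink as they walk away.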
This is the law of size constancy, and it indicates that our perception of size relies on factors other than the visual angle.

One of these factors is our perception of depth. If we return to the hilltop scene there are a number of cues which we can use to determine the relative positions and distances of the objects which we see. If objects overlap, the object which is partially covered is perceived to be in the background, and therefore further away. Similarly, the size and height of the object in our field of view provides a cue to its distance. A third cue is familiarity: if we expect an object to be of a certain size then we can judge its distance accordingly. This has been exploited for humour in advertising: one advertisement for beer shows a man walking away from a bottle in the foreground. As he walks, he bumps into the bottle, which is in fact a giant one in the background!

Perceiving brightness

A second aspect of visual perception is the perception of brightness. Brightness is in fact a subjective reaction to levels of light. It is affected by luminance, which is the amount of light emitted by an object. The luminance of an object is dependent on the amount of light falling on the object's surface and its reflective properties. Luminance is a physical characteristic and can be measured using a photometer. Contrast is related to luminance: it is a function of the luminance of an object and the luminance of its background.

Although brightness is a subjective response, it can be described in terms of the amount of luminance that gives a just noticeable difference in brightness. However, the visual system itself also compensates for changes in brightness. In dim lighting, the rods predominate vision. Since there are fewer rods on the fovea, objects in low lighting can be seen less easily when fixated upon, and are more visible in peripheral vision. In normal lighting, the cones take over.
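The text says only that contrast is *a function of* the luminance of an object and of its background; one common formulation of that function is the ratio below. Treat this sketch as an illustrative choice of function, not as the book's definition.

```python
def contrast(lum_object, lum_background):
    """Luminance contrast of an object against its background.

    Uses the common ratio form (L_object - L_background) / L_background.
    This is one standard formulation, assumed here for illustration;
    the chapter itself does not commit to a specific formula.
    Luminances are physical quantities, e.g. measured with a photometer.
    """
    if lum_background <= 0:
        raise ValueError("background luminance must be positive")
    return (lum_object - lum_background) / lum_background

# Dark text (20 units) on a bright background (200 units) gives a
# negative contrast: the object is darker than its background.
c = contrast(20, 200)
```

Note that contrast depends only on the two luminances, while perceived brightness is a subjective response that the visual system further adjusts, as the paragraph above describes.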
Visual acuity increases with increased luminance. This may be an argument for using high display luminance. However, as luminance increases, flicker also increases. The eye will perceive a light switched on and off rapidly as constantly on, but if the speed of switching is less than 50 Hz then the light is perceived to flicker. In high luminance, flicker can be perceived at over 50 Hz. Flicker is also more noticeable in peripheral vision. This means that the larger the display (and consequently the more peripheral vision that it occupies), the more it will appear to flicker.

Perceiving color

A third factor that we need to consider is perception of color. Color is usually regarded as being made up of three components: hue, intensity and saturation. Hue is determined by the spectral wavelength of the light. Blues have short wavelengths, greens medium and reds long. Approximately 150 different hues can be discriminated by the average person. Intensity is the brightness of the color, and saturation is the amount of whiteness in the color. By varying these two, we can perceive in the region of 7 million different colors. However, the number of colors that can be identified by an individual without training is far fewer (in the region of 10).

The eye perceives color because the cones are sensitive to light of different wavelengths. There are three different types of cone, each sensitive to a different color (blue, green and red). Color vision is best in the fovea, and worst at the periphery where rods predominate. It should also be noted that only 3–4% of the fovea is occupied by cones which are sensitive to blue light, making blue acuity lower.
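The statement that blues have short wavelengths, greens medium and reds long can be made concrete with a coarse lookup. The numeric boundaries below are standard approximations for the visible spectrum (roughly 400–700 nm) and are our assumption; the chapter gives no figures.

```python
def rough_hue(wavelength_nm):
    """Map a spectral wavelength (nm) to a coarse hue name.

    The chapter says only: blues short, greens medium, reds long.
    The cut-off values here are conventional approximations for the
    visible spectrum, not values taken from the text.
    """
    if not 400 <= wavelength_nm <= 700:
        return "outside visible range"
    if wavelength_nm < 500:
        return "blue (short wavelength)"
    if wavelength_nm < 600:
        return "green (medium wavelength)"
    return "red (long wavelength)"
```

A person discriminates roughly 150 hues along this continuum, so each named band above lumps together many distinguishable hues; varying intensity and saturation as well is what takes the total into the region of 7 million perceivable colors.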
Finally, we should remember that around 8% of males and 1% of females suffer from color blindness, most commonly being unable to discriminate between red and green.

The capabilities and limitations of visual processing

In considering the way in which we perceive images we have already encountered some of the capabilities and limitations of the human visual processing system. However, we have concentrated largely on low-level perception. Visual processing involves the transformation and interpretation of a complete image, from the light that is thrown onto the retina. As we have already noted, our expectations affect the way an image is perceived. For example, if we know that an object is a particular size, we will perceive it as that size no matter how far it is from us.

Visual processing compensates for the movement of the image on the retina which occurs as we move around and as the object which we see moves. Although the retinal image is moving, the image that we perceive is stable. Similarly, color and brightness of objects are perceived as constant, in spite of changes in luminance.

This ability to interpret and exploit our expectations can be used to resolve ambiguity. For example, consider the image shown in Figure 1.3. What do you perceive? Now consider Figure 1.4 and Figure 1.5. The context in which the object appears

Figure 1.3 An ambiguous shape?
Figure 1.4 ABC
