Lecture notes on Digital Image Processing with Matlab

Published: 12-07-2017
An Introduction to Digital Image Processing with Matlab
Notes for SCM2511 Image Processing 1
Alasdair McAndrew
School of Computer Science and Mathematics, Victoria University of Technology

Contents

1 Introduction
  1.1 Images and pictures
  1.2 What is image processing?
  1.3 Images and digital images
  1.4 Some applications
  1.5 Aspects of image processing
  1.6 An image processing task
  1.7 Types of digital images
  1.8 Image File Sizes
  1.9 Image Acquisition
  1.10 Image perception

2 Basic use of Matlab
  2.1 Introduction
  2.2 Basic use of Matlab
  2.3 Variables and the workspace
  2.4 Dealing with matrices
  2.5 Plots
  2.6 Help in Matlab
  Exercises

3 Images and Matlab
  3.1 Greyscale images
  3.2 RGB Images
  3.3 Indexed colour images
  3.4 Data types and conversions
  Exercises

4 Image Display
  4.1 Introduction
  4.2 The imshow function
  4.3 Bit planes
  4.4 Spatial Resolution
  Exercises

5 Point Processing
  5.1 Introduction
  5.2 Arithmetic operations
  5.3 Histograms
  5.4 Thresholding
  5.5 Applications of thresholding
  Exercises

6 Spatial Filtering
  6.1 Introduction
  6.2 Notation
  6.3 Filtering in Matlab
  6.4 Frequencies; low and high pass filters
  6.5 Gaussian filters
  6.6 Non-linear filters
  Exercises

7 Noise
  7.1 Introduction
  7.2 Types of noise
  7.3 Cleaning salt and pepper noise
  7.4 Cleaning Gaussian noise
  Exercises

8 Edges
  8.1 Introduction
  8.2 Differences and edges
  8.3 Second differences
  8.4 Edge enhancement
  8.5 Final Remarks
  Exercises

9 The Fourier Transform
  9.1 Introduction
  9.2 The one-dimensional discrete Fourier transform
  9.3 Properties of the one-dimensional DFT
  9.4 The two-dimensional DFT
  9.5 Fourier transforms in Matlab
  9.6 Fourier transforms of images
  9.7 Filtering in the frequency domain
  9.8 Removal of periodic noise
  9.9 Inverse filtering
  Exercises

10 The Hough and Distance Transforms
  10.1 The Hough transform
  10.2 Implementing the Hough transform in Matlab
  10.3 The distance transform
  Exercises

11 Morphology
  11.1 Introduction
  11.2 Basic ideas
  11.3 Dilation and erosion
  11.4 Opening and closing
  11.5 The hit-or-miss transform
  11.6 Some morphological algorithms
  Exercises

12 Colour processing
  12.1 What is colour?
  12.2 Colour models
  12.3 Colour images in Matlab
  12.4 Pseudocolouring
  12.5 Processing of colour images
  Exercises
13 Image coding and compression
  13.1 Lossless compression
  Exercises

Bibliography

Index

Chapter 1: Introduction

1.1 Images and pictures

As we mentioned in the preface, human beings are predominantly visual creatures: we rely heavily on our vision to make sense of the world around us. We not only look at things to identify and classify them, but we can scan for differences, and obtain an overall rough feeling for a scene with a quick glance. Humans have evolved very precise visual skills: we can identify a face in an instant; we can differentiate colours; we can process a large amount of visual information very quickly.

However, the world is in constant motion: stare at something for long enough and it will change in some way. Even a large solid structure, like a building or a mountain, will change its appearance depending on the time of day (day or night), the amount of sunlight (clear or cloudy), or the various shadows falling upon it.

We are concerned with single images: snapshots, if you like, of a visual scene. Although image processing can deal with changing scenes, we shall not discuss it in any detail in this text. For our purposes, an image is a single picture which represents something. It may be a picture of a person, of people or animals, of an outdoor scene, a microphotograph of an electronic component, or the result of medical imaging. Even if the picture is not immediately recognizable, it will not be just a random blur.

1.2 What is image processing?

Image processing involves changing the nature of an image in order to either

1. improve its pictorial information for human interpretation, or
2. render it more suitable for autonomous machine perception.
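The two aims can be previewed on a tiny, purely hypothetical example. The sketch below (using only base Matlab functions; the 5 × 5 matrix and the threshold of 128 are invented for illustration) smooths a small block of grey values, the kind of operation a human viewer appreciates, and then reduces the same block to a simple binary map of the kind a machine can use:

```matlab
% A tiny hypothetical 5x5 "image" of grey values in the range 0..255
img = [ 10  12  11 200 201;
        11  10  13 199 198;
        12  11  10 202 200;
        10  13  12 201 199;
        11  12  10 198 202 ];

% Aim (1): smooth with a 3x3 averaging filter, softening noise
% for human viewing (conv2 is a base Matlab function)
smoothed = conv2(img, ones(3)/9, 'same');

% Aim (2): reduce the image to a binary map for machine perception,
% keeping only pixels brighter than an (arbitrary) threshold of 128
binary = img > 128;
```

The comparison `img > 128` is applied element by element, so `binary` is a logical matrix of the same size as `img`, with the bright right-hand region marked true.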
We shall be concerned with digital image processing, which involves using a computer to change the nature of a digital image (see below). It is necessary to realize that these two aims represent two separate but equally important aspects of image processing. A procedure which satisfies condition (1), a procedure which makes an image look better, may be the very worst procedure for satisfying condition (2). Humans like their images to be sharp, clear and detailed; machines prefer their images to be simple and uncluttered.

Examples of (1) may include:

Enhancing the edges of an image to make it appear sharper; an example is shown in figure 1.1. Note how the second image appears cleaner; it is a more pleasant image. Sharpening edges is a vital component of printing: in order for an image to appear at its best on the printed page, some sharpening is usually performed.

(a) The original image (b) Result after sharpening
Figure 1.1: Image sharpening

Removing noise from an image, noise being random errors in the image. An example is given in figure 1.2. Noise is a very common problem in data transmission: all sorts of electronic components may affect data passing through them, and the results may be undesirable. As we shall see in chapter 7, noise may take many different forms, and each type of noise requires a different method of removal.

Removing motion blur from an image. An example is given in figure 1.3. Note that in the deblurred image (b) it is easy to read the numberplate, and to see the spokes on the wheels of the car, as well as other details not at all clear in the original image (a). Motion blur may occur when the shutter speed of the camera is too long for the speed of the object. In photographs of fast-moving objects (athletes or vehicles, for example) the problem of blur may be considerable.

Examples of (2) may include:

Obtaining the edges of an image. This may be necessary for the measurement of objects in an image; an example is shown in figure 1.4.
Once we have the edges we can measure their spread, and the area contained within them. We can also use edge detection algorithms as a first step in edge enhancement, as we saw above. From the edge result, we see that it may be necessary to enhance the original image slightly, to make the edges clearer.

(a) The original image (b) After removing noise
Figure 1.2: Removing noise from an image

(a) The original image (b) After removing the blur
Figure 1.3: Image deblurring

(a) The original image (b) Its edge image
Figure 1.4: Finding edges in an image

Removing detail from an image. For measurement or counting purposes, we may not be interested in all the detail in an image. For example, if a machine inspects items on an assembly line, the only matters of interest may be shape, size or colour. For such cases, we might want to simplify the image. Figure 1.5 shows an example: image (a) is a picture of an African buffalo, and image (b) shows a blurred version in which extraneous detail (like the logs of wood in the background) has been removed. Notice that in image (b) all the fine detail is gone; what remains is the coarse structure of the image. We could, for example, measure the size and shape of the animal without being distracted by unnecessary detail.

1.3 Images and digital images

Suppose we take an image, a photo, say. For the moment, let's make things easy and suppose the photo is black and white (that is, lots of shades of grey), so no colour. We may consider this image as being a two-dimensional function f(x, y), where the function values give the brightness of the image at any given point, as shown in figure 1.6. We may assume that in such an image brightness values can be any real numbers in the range 0.0 (black) to 1.0 (white). The ranges of x and y will clearly depend on the image, but they can take all real values between their minima and maxima.

A digital image differs from a photo in that the x, y and f(x, y) values are all discrete.
Usually they take on only integer values, so the image shown in figure 1.6 will have x and y ranging from 1 to 256 each, and the brightness values also ranging from 0 (black) to 255 (white). A digital image can be considered as a large array of discrete dots, each of which has a brightness associated with it. These dots are called picture elements, or more simply pixels.

The pixels surrounding a given pixel constitute its neighbourhood. A neighbourhood can be characterized by its shape in the same way as a matrix: we can speak, for example, of a 3 × 3 neighbourhood, or of a 5 × 7 neighbourhood. Except in very special circumstances, neighbourhoods have odd numbers of rows and columns; this ensures that the current pixel is in the centre of the neighbourhood. An example of a neighbourhood is given in figure 1.7. If a neighbourhood has an even number of rows or columns (or both), it may be necessary to specify which pixel in the neighbourhood is the current pixel.

(a) The original image (b) Blurring to remove detail
Figure 1.5: Blurring an image

Figure 1.6: An image as a function

 48 219 168 145 244 188 120  58
 49 218  87  94 133  35  17 148
174 151  74 179 224   3 252 194
 77 127  87 139  44 228 149 135
138 229 136 113 250  51 108 163
 38 210 185 177  69  76 131  53
178 164  79 158  64 169  85  97
 96 209 214 203 223  73 110 200

Figure 1.7: Pixels, with a neighbourhood (in the original figure, the current pixel and its surrounding neighbourhood are highlighted within this grid)

1.4 Some applications

Image processing has an enormous range of applications; almost every area of science and technology can make use of image processing methods. Here is a short list just to give some indication of the range of image processing applications.

1. Medicine. Inspection and interpretation of images obtained from X-rays, MRI or CAT scans; analysis of cell images and of chromosome karyotypes.

2.
Agriculture. Satellite or aerial views of land, for example to determine how much land is being used for different purposes, or to investigate the suitability of different regions for different crops; inspection of fruit and vegetables, distinguishing good and fresh produce from old.

3. Industry. Automatic inspection of items on a production line; inspection of paper samples.

4. Law enforcement. Fingerprint analysis; sharpening or de-blurring of speed-camera images.

1.5 Aspects of image processing

It is convenient to subdivide different image processing algorithms into broad subclasses. There are different algorithms for different tasks and problems, and often we would like to distinguish the nature of the task at hand.

Image enhancement. This refers to processing an image so that the result is more suitable for a particular application. Examples include: sharpening or de-blurring an out-of-focus image, highlighting edges, improving image contrast or brightening an image, and removing noise.

Image restoration. This may be considered as reversing the damage done to an image by a known cause, for example: removing blur caused by linear motion, removal of optical distortions, and removing periodic interference.

Image segmentation. This involves subdividing an image into constituent parts, or isolating certain aspects of an image: finding lines, circles, or particular shapes in an image; or, in an aerial photograph, identifying cars, trees, buildings, or roads.

These classes are not disjoint; a given algorithm may be used for both image enhancement and image restoration. However, we should be able to decide what it is that we are trying to do with our image: simply make it look better (enhancement), or remove damage (restoration).

1.6 An image processing task

We will look in some detail at a particular real-world task, and see how the above classes may be used to describe the various stages in performing this task.
The job is to obtain, by an automatic process, the postcodes from envelopes. Here is how this may be accomplished:

Acquiring the image. First we need to produce a digital image from a paper envelope. This can be done using either a CCD camera or a scanner.

Preprocessing. This is the step taken before the major image processing task. The problem here is to perform some basic tasks in order to render the resulting image more suitable for the job to follow. In this case it may involve enhancing the contrast, removing noise, or identifying regions likely to contain the postcode.

Segmentation. Here is where we actually get the postcode; in other words, we extract from the image that part of it which contains just the postcode.

Representation and description. These terms refer to extracting the particular features which allow us to differentiate between objects. Here we will be looking for curves, holes and corners which allow us to distinguish the different digits which constitute a postcode.

Recognition and interpretation. This means assigning labels to objects based on their descriptors (from the previous step), and assigning meanings to those labels. So we identify particular digits, and we interpret a string of four digits at the end of the address as the postcode.

1.7 Types of digital images

We shall consider four basic types of images:

Binary. Each pixel is just black or white. Since there are only two possible values for each pixel, we need only one bit per pixel. Such images can therefore be very efficient in terms of storage. Images for which a binary representation may be suitable include text (printed or handwritten), fingerprints, or architectural plans. An example was the image shown in figure 1.4(b) above. In this image we have only two colours: white for the edges, and black for the background. See figure 1.8 below.

1 1 0 0 0 0
0 0 1 0 0 0
0 0 1 0 0 0
0 0 0 1 0 0
0 0 0 1 1 0
0 0 0 0 0 1

Figure 1.8: A binary image

Greyscale.
Each pixel is a shade of grey, normally from 0 (black) to 255 (white). This range means that each pixel can be represented by eight bits, or exactly one byte, which is a very natural range for image file handling. Other greyscale ranges are used, but generally they are a power of 2. Such images arise in medicine (X-rays) and in images of printed works, and indeed 256 different grey levels is sufficient for the recognition of most natural objects. An example is the street scene shown in figure 1.1 above, and in figure 1.9 below.

230 229 232 234 235 232 148
237 236 236 234 233 234 152
255 255 255 251 230 236 161
 99  90  67  37  94 247 130
222 152 255 129 129 246 132
154 199 255 150 189 241 147
216 132 162 163 170 239 122

Figure 1.9: A greyscale image

True colour, or RGB. Here each pixel has a particular colour, that colour being described by the amount of red, green and blue in it. If each of these components has a range 0–255, this gives a total of 256^3 = 16,777,216 different possible colours in the image. This is enough colours for any image. Since the total number of bits required for each pixel is 24, such images are also called 24-bit colour images. Such an image may be considered as consisting of a stack of three matrices, representing the red, green and blue values for each pixel. This means that for every pixel there correspond three values. We show an example in figure 1.10.

Indexed. Most colour images only have a small subset of the more than sixteen million possible colours. For convenience of storage and file handling, the image has an associated colour map, or colour palette, which is simply a list of all the colours used in that image. Each pixel has a value which does not give its colour (as for an RGB image), but an index to the colour in the map. It is convenient if an image has 256 colours or fewer, for then the index values will require only one byte each to store. Some image file formats (for example, Compuserve GIF) allow only 256 colours or fewer in each image, for precisely this reason.
Figure 1.11 shows an example. In this image the indices, rather than being the grey values of the pixels, are simply indices into the colour map. Without the colour map, the image would be very dark and colourless. In the figure, for example, pixels labelled 5 correspond to the colour-map entry 0.2627 0.2588 0.2549, which is a dark greyish colour.

Red:
 49  55  56  57  52  53
 58  60  60  58  55  57
 58  58  54  53  55  56
 83  78  72  69  68  69
 88  91  91  84  83  82
 69  76  83  78  76  75
 61  69  73  78  76  76

Green:
 64  76  82  79  78  78
 93  93  91  91  86  86
 88  82  88  90  88  89
125 119 113 108 111 110
137 136 132 128 126 120
105 108 114 114 118 113
 96 103 112 108 111 107

Blue:
 66  80  77  80  87  77
 81  93  96  99  86  85
 83  83  91  94  92  88
135 128 126 112 107 106
141 129 129 117 115 101
 95  99 109 108 112 109
 84  93 107 101 105 102

Figure 1.10: A true colour image

Indices:
 4  5  5  5  5  5
 5  4  5  5  5  5
 5  5  5  0  5  5
 5  5  5  5 11 11
 5  5  5  8 16 20
 8 11 11 26 33 20
11 20 33 33 58 37

Colour map:
0.1211 0.1211 0.1416
0.1807 0.2549 0.1729
0.2197 0.3447 0.1807
0.1611 0.1768 0.1924
0.2432 0.2471 0.1924
0.2119 0.1963 0.2002
0.2627 0.2588 0.2549
0.2197 0.2432 0.2588
...

Figure 1.11: An indexed colour image

1.8 Image File Sizes

Image files tend to be large. We shall investigate the amount of information used in different image types of varying sizes. For example, suppose we consider a 512 × 512 binary image. The number of bits used in this image (assuming no compression, and neglecting, for the sake of discussion, any header information) is

512 × 512 × 1 = 262,144 bits = 32,768 bytes = 32.8 Kb = 0.033 Mb.

(Here we use the convention that a kilobyte is one thousand bytes, and a megabyte is one million bytes.)

A greyscale image of the same size requires:

512 × 512 × 8 = 2,097,152 bits = 262,144 bytes = 262.1 Kb = 0.262 Mb.

If we now turn our attention to colour images, each pixel is associated with 3 bytes of colour information. A 512 × 512 colour image thus requires

512 × 512 × 3 = 786,432 bytes = 786.4 Kb = 0.786 Mb.

Many images are of course much larger than this; satellite images may be of the order of several thousand pixels in each direction.
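The arithmetic above can be reproduced in a few lines of Matlab. This is a minimal sketch, with a 512 × 512 image chosen purely as the worked example:

```matlab
% Storage (in bytes) for one 512 x 512 image at each bit depth,
% assuming no compression and ignoring any header information
pixels = 512 * 512;              % 262144 pixels

binary_bytes = pixels * 1 / 8;   % 1 bit per pixel   -> 32768 bytes (32.8 Kb)
grey_bytes   = pixels * 8 / 8;   % 8 bits per pixel  -> 262144 bytes (262.1 Kb)
rgb_bytes    = pixels * 24 / 8;  % 24 bits per pixel -> 786432 bytes (786.4 Kb)
```

Changing the first line to a different image size immediately gives the corresponding storage requirements.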
A picture is worth one thousand words. Assuming a word to contain 10 ASCII characters (on average), and that each character requires 8 bits of storage, then 1000 words contain 10 × 8 × 1000 = 80,000 bits of information. This is roughly equivalent to the information in a 283 × 283 binary image, a 100 × 100 greyscale image, or a 58 × 58 RGB colour image.

1.9 Image Acquisition

We will briefly discuss means for getting a picture into a computer.

CCD camera. Such a camera has, in place of the usual film, an array of photosites; these are silicon electronic devices whose voltage output is proportional to the intensity of light falling on them. For a camera attached to a computer, information from the photosites is then output to a suitable storage medium. Generally this is done in hardware, as being much faster and more efficient than software, using a frame-grabbing card. This allows a large number of images to be captured in a very short time, in the order of one ten-thousandth of a second each. The images can then be copied onto a permanent storage device at some later time. Digital still cameras use a range of devices, from floppy discs and CDs to various specialized cards and memory sticks; the information can then be downloaded from these devices to a computer hard disk.

Flat-bed scanner. This works on a principle similar to the CCD camera. Instead of the entire image being captured at once on a large array, a single row of photosites is moved across the image, capturing it row-by-row as it moves. Since this is a much slower process than taking a picture with a camera, it is quite reasonable to allow all capture and storage to be processed by suitable software.

1.10 Image perception

Much of image processing is concerned with making an image appear better to human beings. We should therefore be aware of the limitations of the human visual system. Image perception consists of two basic steps:

1. capturing the image with the eye,
2. recognising and interpreting the image with the visual cortex in the brain.

The combination and immense variability of these steps influences the ways in which we perceive the world around us. There are a number of things to bear in mind:

1. Observed intensities vary with the background. A single block of grey will appear darker if placed on a white background than if it were placed on a black background. That is, we don't perceive grey scales as they are, but rather as they differ from their surroundings. In figure 1.12 a grey square is shown on two different backgrounds. Notice how much darker the square appears when it is surrounded by a light grey. However, the two central squares have exactly the same intensity.

Figure 1.12: A grey square on different backgrounds

2. We may observe non-existent intensities as bars in continuously varying grey levels. See for example figure 1.13. This image varies continuously from light to dark as we travel from left to right. However, it is impossible for our eyes not to see a few vertical edges in this image.

3. Our visual system tends to undershoot or overshoot around the boundary of regions of different intensities. For example, suppose we had a light grey blob on a dark grey background. As our eye travels from the dark background to the light region, the boundary of the region appears lighter than the rest of it. Conversely, going in the other direction, the boundary of the background appears darker than the rest of it.

Figure 1.13: Continuously varying intensities

Chapter 2: Basic use of Matlab

2.1 Introduction

Matlab is a data analysis and visualization tool which has been designed with powerful support for matrices and matrix operations. As well as this, Matlab has excellent graphics capabilities, and its own powerful programming language. One of the reasons that Matlab has become such an important tool is through the use of sets of Matlab programs designed to support a particular task.
These sets of programs are called toolboxes, and the particular toolbox of interest to us is the image processing toolbox.

Rather than give a description of all of Matlab's capabilities, we shall restrict ourselves to just those aspects concerned with the handling of images. We shall introduce functions, commands and techniques as required. A Matlab function is a keyword which accepts various parameters and produces some sort of output: for example a matrix, a string, a graph or a figure. Examples of such functions are sin, imread, imclose. There are many functions in Matlab, and as we shall see, it is very easy (and sometimes necessary) to write our own. A command is a particular use of a function. Examples of commands might be

sin(pi/3)
c=imread('cameraman.tif');
a=imclose(b);

As we shall see, we can combine functions and commands, or put multiple commands on a single input line.

Matlab's standard data type is the matrix; all data are considered to be matrices of some sort. Images, of course, are matrices whose elements are the grey values (or possibly the RGB values) of their pixels. Single values are considered by Matlab to be 1 × 1 matrices, while a string is merely a 1 × n matrix of characters, n being the string's length.

In this chapter we will look at the more generic Matlab commands, and discuss images in further chapters. When you start up Matlab, you have a blank window called the Command Window in which you enter commands; this is shown in figure 2.1. Given the vast number of Matlab's functions and the different parameters they can take, a command-line style of interface is in fact much more efficient than a complex sequence of pull-down menus.

Figure 2.1: The Matlab command window ready for action

The prompt consists of two right arrows:

>>

2.2 Basic use of Matlab

If you have never used Matlab before, we will experiment with some simple calculations.
We first note that Matlab is command-line driven; all commands are entered by typing them after the prompt symbol. Let's start off with a mathematical classic:

2+2

What this means is that you type in 2+2 at the prompt, and then press your Enter key. This sends the command to the Matlab kernel. What you should now see is

ans =

     4

Good, huh? Matlab of course can be used as a calculator; it understands the standard arithmetic operations of addition (as we have just seen), subtraction, multiplication, division and exponentiation. Try these:

3*4
7-3
11/7
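The operators above can also be combined in a single command; Matlab follows the usual precedence rules, and the result of the most recent bare expression is stored in the variable ans. A short sketch (the variable names are arbitrary):

```matlab
x = 2 + 3*4;    % multiplication binds tighter than addition: x is 14
y = (2 + 3)*4;  % parentheses are evaluated first: y is 20
z = 2^10;       % exponentiation uses the ^ operator: z is 1024
```

Ending a command with a semicolon, as above, suppresses the printing of its result; leaving the semicolon off makes Matlab display the value immediately.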