2D Graphics Programming for Games

Chapter 2
Basics of Computer Graphics

This chapter presents a brief overview of how simple images are stored and displayed on the screen, especially as computer graphics impacts modern game development. It is by no means a complete story. During the early days of computer graphics, a variety of rather complicated hardware and software tricks were employed by game console manufacturers to display moving images on a television screen. Techniques such as "racing the beam" allowed programmers to extend the capabilities of very limited hardware. Although interesting, the details are not relevant to modern game development and are beyond the scope of this text. Instead, this chapter focuses on some basic theories and implementations of the standard graphics techniques used today.

2.1 Bits and Bytes

Starting at the most basic level, computers use 1s and 0s to store information. The value (1 or 0) is stored in a bit, analogous to a light bulb that is either on or off. Series of bits are used to store larger numbers, in which each number column represents a power of 2. This binary number system is the basis for modern computing, but, as you can imagine, it is not very convenient for humans. As seen below, we need four digits just to display the number 15:

0000 = 0, 0001 = 1, 0010 = 2, 0011 = 3, ..., 1111 = 15.

To make things a bit easier, we group our binary numbers into blocks of 4 bits. Each group of 4 bits has 16 unique combinations of 0s and 1s (0000 to 1111), corresponding to the decimal numbers 0 to 15. As a matter of convenience, we can write each of these 16 combinations as a single "digit" by using the hexadecimal number system (base 16), in which decimal 10 is hexadecimal A, 11 is B, and so on. In hexadecimal, then, we can count to 15 as 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, A, B, C, D, E, F.

A group of 8 bits (called a byte) can store 256 unique combinations of bits (0000 0000 to 1111 1111) and can also be more easily written by using the hexadecimal numbers 00 to FF. In counting upward, when reaching F in the rightmost digit, we start over with 0 in the right digit and add 1 to the left digit until we reach FF (just as 39 is followed by 40 when counting upward in the decimal system): 00, 01, 02, ..., 09, 0A, 0B, 0C, 0D, 0E, 0F, 10, 11, ..., FD, FE, FF.

If you're feeling a bit overwhelmed by all these numbers (pun intended), don't worry. You'll soon see the reason for this review of introductory computer science.

2.1.1 Digital Color Theory

The simplest (and perhaps most obvious) way to store a graphical image is as a two-dimensional array of colors, or, as in the following example, an array of bits. Consider the following array of 36 bits:

000000 010010 000000 100001 011110 000000.

By aligning the array of 36 bits into 6 rows of 6 bits, as shown in Figure 2.1, we can build the image shown in Figure 2.2, where a 0 bit represents white and a 1 bit represents black. This type of black-and-white "1 bit per pixel (bpp) color" was used in early games such as Atari's Pong (Figure 2.3) and later in the graphical user interface (GUI) for the Apple Macintosh OS (Figure 2.4). This two-dimensional map of bits is where we get the term bitmap.

Figure 2.1. Thirty-six bits aligned in rows.
Figure 2.2. Bitmap from 36 bits.
Figure 2.3. Pong, Atari Inc. (1972).
Figure 2.4. Mac 128k, Apple Inc. (1984).
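To make this concrete, the 36 bits above can be turned into a texture in code. The following is a minimal sketch (not from the original text); it assumes it runs somewhere a valid GraphicsDevice is available, such as the game's LoadContent method, with the usual XNA framework usings in place.

//Build a 6 x 6 black-and-white texture from the 36-bit example above.
//0 = white, 1 = black, matching Figure 2.2.
int[] bits =
{
    0,0,0,0,0,0,
    0,1,0,0,1,0,
    0,0,0,0,0,0,
    1,0,0,0,0,1,
    0,1,1,1,1,0,
    0,0,0,0,0,0
};

Texture2D bitmapTexture = new Texture2D(GraphicsDevice, 6, 6);
Color[] pixels = new Color[bits.Length];
for (int i = 0; i < bits.Length; i++)
{
    pixels[i] = (bits[i] == 0) ? Color.White : Color.Black;
}
bitmapTexture.SetData(pixels);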
Figure 2.5. The 4-bit color palette (right) and some 4-bit games (clockwise from top left): Namco's Pac-Man (1980), Origin Systems' Ultima IV (1985), Bruce Lee (1984), and Sega's Zaxxon (1982).

The decade between Pong and the Macintosh did see significant advances in game graphics. By 1977, the Atari 2600 game system featured a palette of 128 available colors. Advances in this era were achieved through a variety of creative hardware and software techniques, allowing programmers to stretch the limits of game consoles. At the time, RAM was significantly too expensive to allow for a single bit in memory to represent every pixel on the screen. Instead, games had to reuse the same collection of bits (called a sprite) so that the same chunk of memory could be used multiple times (sometimes flipping or stretching it) to fill the game screen. It wasn't until the early 1980s that we began to see personal computers with dedicated video RAM for displaying a 2D array of colors directly to the screen. However, the use of sprites was convenient and continues through today. We'll take a closer look at sprites in Chapter 3.

IBM's Color Graphics Adapter (CGA) featured 16 kilobytes of memory, capable of displaying either a 2-bit color depth (4 colors) at 320 pixels wide by 200 pixels high or a 4-bit color depth (16 colors) at 160 × 200:

2 bits/pixel × (320 × 200) pixels = 128,000 bits = 16,000 bytes,
4 bits/pixel × (160 × 200) pixels = 128,000 bits = 16,000 bytes.

These early graphical systems implemented a specific set of colors that could be used in developing software for the system. Figure 2.5 shows an example of a 4-bit color palette. Depending on the system, this usually included 8 colors (black, red, green, yellow, blue, magenta, cyan, and white) in both low and high intensity, providing for 16 colors. In some cases, the developer could set a specific color palette to use for a particular game, allowing for at least some color variety between titles.

As hardware became cheaper, software developers soon had access to greater color depth. Doubling the depth from 4 bpp to 8 bpp allowed a move from 16 colors to a full palette of 256 colors. Now there was the new challenge of dealing with all those colors in a way that made sense.

2.1.2 RGB Color Model

Let's take a quick side step and look at the way computer monitors work. First, consider the traditional CRT computer monitor (the heavy ones with the large cone-shaped back, which were typical in the 1980s and 1990s). As with CRT televisions, CRT computer monitors send a stream of electrons that bombard a net of phosphors located on the back of the computer screen. A phosphor is simply a substance that emits light when struck by an electron. Tiny red, green, and blue (RGB) phosphors group together to form what we would consider a single pixel. (See Figures 2.6 and 2.7.) The more modern LCD screens use the same concept, but instead of a ray of electrons and phosphors, LCD monitors make use of the light-modulating properties of liquid crystals. Again, the chosen colors are red, green, and blue.

Figure 2.6. RGB colors combined: magenta, yellow, cyan, and white are all clearly visible in the intersections of red, green, and blue.

In both CRT and LCD screens, the colors red, green, and blue are combined in a small point to create the color of each pixel on the screen. These combinations blend together to form all the colors we need.
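To see this additive mixing in code, here is a small hedged sketch using XNA's Color struct. The AddLight helper is hypothetical (not part of the framework); each channel simply saturates at 255, and the usual System and Microsoft.Xna.Framework usings from the XNA template are assumed.

//Additive RGB mixing sketch: each channel saturates at 255.
//AddLight is an illustrative helper, not an XNA method.
Color AddLight(Color a, Color b)
{
    return new Color(Math.Min(a.R + b.R, 255),
                     Math.Min(a.G + b.G, 255),
                     Math.Min(a.B + b.B, 255));
}

//Red + green light gives yellow (255, 255, 0); adding blue as well gives
//white (255, 255, 255), matching the overlaps visible in Figure 2.6.
Color yellow = AddLight(new Color(255, 0, 0), new Color(0, 255, 0));
Color white  = AddLight(yellow, new Color(0, 0, 255));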
If you have a background in traditional painting, you may know that from an artist's perspective, red, yellow, and blue are the primary colors. Then why not use red, yellow, and blue light instead of RGB? Actually, the human eye also works by combining RGB light. As you can see in Figure 2.8, the human eye comprises millions of red, green, and blue light-sensitive cones. The red cones allow us to perceive red light; the green cones, green light; and the blue cones, blue light. Combined, these cones allow us to see all the colors of the rainbow.

Figure 2.7. The surface of a CRT monitor is covered with red, green, and blue phosphors that glow when energized.
Figure 2.8. Cross section of light-sensitive rods and cones that permeate the surface of the human retina: (1) retina, (2) cones, and (3) rods.

In addition to the color-sensitive cones, the retina of the eye also has rods, which work best in low-light conditions. This is why colors will seem more vivid in the light of day.

Therefore, it made sense to use the same RGB color model to store color data in the computer's memory. So in the move to 12-bit color depth, instead of simply defining an arbitrary palette of 4,096 colors, game developers could now divide those 12 bits into groups so that 4 bits were available to each of the three colors in a color computer monitor:

12 bits/pixel = 4 bits red + 4 bits green + 4 bits blue.

From three 0s (R = G = B = 0) to three 15s (R = G = B = 15), we suddenly had an easy convention for managing 4,096 combinations of the RGB colors. Conveniently, these values can be recorded hexadecimally, for example:

F00 (red), 000 (black), AAF (light blue),
0F0 (green), 888 (gray), 44F (dark blue),
00F (blue), FFF (white), 808 (purple).

Even though 12-bit color is good, it doesn't provide enough colors to create photographic-quality images. As a result, once the hardware became affordable, 12-bit RGB color was followed by color depths of 16 bits (commonly referred to as high color) and eventually 24 bits (true color). See Figure 2.9. The 24-bit color allows a full 8 bits (1 byte) per RGB color channel, resulting in more than 16 million color combinations.

Figure 2.9. RGB colors combined.

In other fields it may be necessary to go beyond 24-bit RGB color (the bitmap file type supports up to 64 bpp), but the current standard for game development is 8 bits per color channel:

24 bits/pixel = 8 bits red + 8 bits green + 8 bits blue.

Figure 2.10 shows an example of a photograph rendered at various color depths.

Defining colors in terms of various amounts of red, green, and blue is convenient and has become a game industry standard, but it is not the only way to define a color. In fact, the human eye does not see those three colors evenly. When viewing Figure 2.11, you may notice that your eye can see more detail in the green gradient when compared to the red or blue gradients. For that reason, when 16-bit RGB color was introduced and the bits could not be easily divided among the three components, it made sense to give the remaining bit to green.

Figure 2.10. The same photograph at 1 bpp (left), 8 bpp (center), and 24 bpp (right).
Figure 2.11. RGB gradients: you will likely detect more detail in the green band than in the red or blue bands.
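In practice, 16-bit high color is commonly packed in a 5-6-5 layout, with the extra bit going to green as just described. The following sketch is illustrative only; the PackRgb565 and UnpackRgb565 helper names are mine, not from the text.

//A hedged sketch of 16-bit "high color" in the common 5-6-5 layout:
//5 bits red, 6 bits green (the extra bit), and 5 bits blue.
ushort PackRgb565(byte r, byte g, byte b)
{
    //Keep the top 5, 6, and 5 bits of each 8-bit channel.
    return (ushort)(((r >> 3) << 11) | ((g >> 2) << 5) | (b >> 3));
}

void UnpackRgb565(ushort packed, out byte r, out byte g, out byte b)
{
    //Shift each field back up to an approximate 8-bit value.
    r = (byte)(((packed >> 11) & 0x1F) << 3);
    g = (byte)(((packed >> 5) & 0x3F) << 2);
    b = (byte)((packed & 0x1F) << 3);
}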
2.1.3 RGBA: Blending with Alpha

With 256 options per channel, the permutations of the 24-bit RGB color model provide for a significant variety of colors (16.7 million colors per pixel):

16,777,216 colors = 256 shades of red × 256 shades of green × 256 shades of blue.

In the real world, however, not all materials are completely opaque; some surfaces allow light through (picture a pair of red-tinted glasses sitting on a blue tablecloth). In computer graphics, we can store how "transparent" a pixel is in a fourth byte called the alpha value. Since artists want to layer images within a game, the color model would not be complete without transparency.

An 8-bit alpha value is convenient because it adds 256 shades of transparency to the base RGB color scheme, forming the RGBA color scheme. An alpha value of 255 represents a pixel that is fully opaque, and a value of 0 signifies a pixel that is completely transparent. The exact algorithm for determining how overlapping transparent pixels are blended together is discussed in Chapter 8.

With the 32-bit RGBA color palette, we now have the ability to store more than 4 billion color combinations in just 4 bytes of memory. That's more than enough for most applications, and a far cry from the two colors at the beginning of this chapter. But now we have another potential problem: the memory required for an 800 × 600 image, which is

1.92 MB = 800 pixels × 600 pixels × 4 bytes/pixel.

Notice the switch from bits per pixel (bpp) to bytes per pixel (Bpp).

2.1.4 First XNA Project

Building your first XNA project is very simple by using the built-in templates and the XNA framework game class. Once you have installed Visual C# Express 2010 and Microsoft XNA Game Studio 4.0, simply start Visual C# Express. Select File > New Project from the menu bar. In the dialog box, choose Installed Templates > Visual C# > XNA Game Studio 4.0 > Windows Game (4.0). Check that you're happy with the project name and file location, and then click OK.

Within the game class created by the template, you will notice a constructor and five overridden functions for initialization, content load, content unload, update, and draw. The XNA framework is defined so that the update and draw functions are called at an appropriate frame rate (frames per second, or fps) for the given platform (60 fps for PC and Xbox, 30 fps for Windows Phone). Press F5 to start debugging, and you should soon see a light blue game window.

2.1.5 XNA Corner

XNA has a built-in 32-bit color structure for defining red, green, blue, and alpha byte values. In addition to the R, G, B, and A accessors, the structure includes a variety of predefined named colors. As of XNA Game Studio 4.0, this includes 142 colors from Alice blue (R: 240, G: 248, B: 255, A: 255) to yellow green (R: 154, G: 205, B: 50, A: 255). To demonstrate, temporarily add the following code to your Initialize function:

//Color values example
Color myColor = Color.DarkOliveGreen;

Console.WriteLine("Color values for DarkOliveGreen");
Console.WriteLine("  Red:   " + myColor.R);
Console.WriteLine("  Green: " + myColor.G);
Console.WriteLine("  Blue:  " + myColor.B);
Console.WriteLine("  Alpha: " + myColor.A);

Figure 2.12. Output screenshot.

When running your project, you will notice output in the console window similar to that shown in Figure 2.12. The choice of colors and associated RGBA values seems a bit arbitrary and not necessarily very useful for game development.
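Beyond the named colors, a Color can also be built and adjusted numerically. Here is a small hedged sketch; the specific values are arbitrary examples, not from the text.

//Working with Color values numerically (a sketch; the values are arbitrary).
Color orange  = new Color(255, 128, 0);      //RGB constructor; alpha defaults to 255
Color seeThru = new Color(255, 128, 0, 128); //explicit alpha value

//The R, G, B, and A accessors are plain bytes and can be modified at runtime.
Color tint = Color.White;
tint.R = 200;  //pull down the red channel
tint.A = 64;   //lower the alpha channel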
Instead, we'll rely on our artist to use colors within sprites, and then we'll use numeric values to programmatically modify the color RGBA accessors at runtime.

Microsoft XNA samples use the default color of cornflower blue (R: 100, G: 149, B: 237, A: 255), which has become synonymous with programmer art. A quick search for the text "CornflowerBlue" in the XNA template shows that it is used as the clear color in the Draw function.

2.1.6 Raster versus Vector Graphics

The term for the type of bitmap graphics we have discussed so far is raster graphics. The term derives its name from the way images were originally drawn on a television monitor, but it now has a more generalized meaning, describing graphics composed of a rectangular grid of pixels.

Storing raster graphics can take up a lot of space in memory, but they have another disadvantage (consider Figure 2.13). When the sprite is enlarged, the image appears pixelated. A similar (although sometimes less noticeable) loss of detail occurs even when the image is made smaller. In some cases this may be acceptable, but in others you'll need your artist to make multiple copies of your images, rendered at the appropriate sizes.

Figure 2.13. An enlarged vector circle (left); note the pixel-perfect smooth edge. An enlarged raster circle (right); note the jagged edge.

An alternative is vector graphics, which uses mathematical formulas and the computational power of the computer to draw the exact shape you want at the exact resolution you need. For example, if you need to draw a line, you need only the start and end points; the computer is then told to render pixels at all the points in between. Alternatively, to render a solid circle, you simply need to track a center location and the radius. For every pixel in the scene, simply check the distance to the center of the circle. If it is less than or equal to the radius, then color the pixel with the appropriate color.

Vector graphics comes with both advantages and disadvantages, and the details of how to use it could fill a book. In this text, the closest we will get is with splines in Section 10.3.

2.2 Display

2.2.1 UV Coordinates

Often, various-sized source images will be used for deploying the same game to various platforms. For example, large textures may be used when deploying the game to a desktop computer with a powerful graphics card, whereas smaller textures may be used when deploying the same game to mobile devices. In these cases, it can make sense to normalize (see Appendix B.4) the coordinate system so that the top-left pixel is set to be (0, 0) and the bottom-right pixel is set to be (1, 1). As a result, any individual texel can be measured in terms of percentage from the origin along the U (normalized X) and V (normalized Y) axes. (Texel is the term for a pixel on a texture.)

For example, an individual texel located at the coordinates (512, 512) on a texture that measures 1,024 × 1,024 will have UV coordinates of (0.5, 0.5). Measuring texel locations in terms of UV coordinates instead of XY coordinates ensures that the location values are independent of the texture size. UV coordinates are most often used in 3D graphics, where they also help to distinguish between the UV-axes on the source textures and the XYZ-axes within the 3D game world. This same normalization of the axes occurs when working with pixel shaders (see Section 9.3).
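As a quick sketch of that conversion (ToUV is a hypothetical helper, not an XNA method), a pixel coordinate can be normalized by dividing by the texture dimensions:

//Convert a texel's XY position into normalized UV coordinates.
Vector2 ToUV(int x, int y, Texture2D texture)
{
    return new Vector2((float)x / texture.Width,
                       (float)y / texture.Height);
}

//Example from the text: texel (512, 512) on a 1,024 x 1,024 texture maps to (0.5, 0.5).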
For the purposes of clarity, the examples in this book use nonnormalized XY-coordinates when working with textures.

2.2.2 Image Resolution

Thus far, we have explored the ability to increase the quality of an image by increasing the range of possible colors for each pixel. Another option is to simply increase the number of pixels. This may seem obvious, but let's consider Figures 2.14 and 2.15. Figure 2.14 is rendered at 400 pixels wide, and Figure 2.15 is 200 pixels wide. By doubling the pixel width (assuming we're constraining the image proportions), we need four times the amount of storage space:

new storage = (2 × width) × (2 × height) × bytes/pixel.

Figure 2.14. 400 × 278.
Figure 2.15. 200 × 139.

Note that some artists (especially those with a graphic design or print background) think of images as a combination of pixel density and final physical width on the screen (rather than as simple pixel resolution). Since Figures 2.14 and 2.15 are rendered to the same physical width on the page, Figure 2.14 has twice the pixel density of Figure 2.15.

On modern game platforms, it is not common practice to scale images in a 2D game; game artists will expect a 1:1 relationship between the pixels in the images they create and how those pixels result on screen. As a result, graphics programmers have historically discussed games only in terms of their pixel resolution. When developing games for a game console, developers know players will be playing their games on either standard definition televisions (SDTV) or high definition televisions (HDTV). Assuming an HDTV, developers ensure their games will render at 1,280 × 720 (a typical resolution for HDTVs).

In this scenario, the developers do not need to worry about the actual size of the screen. Whether the player's game console is connected to a 20-inch TV set or the game is displayed on a wall through an HD projector, the resolution is still 1,280 × 720. Similarly, a game on a PC is rendered at a specific resolution. If that resolution is smaller than the screen size, the game is rendered in a window. If the player chooses to switch the game to full-screen mode, the computer's hardware and operating system handle the appropriate upscaling of the game onto the PC monitor.

Occasionally, 2D games are required to support multiple graphical resolutions. For a console game, this is done so the game can support both SDTV and HDTV. Since modern game consoles have enough computing power to deal with the higher resolution, it has become common practice to generate game assets only at high resolution and then simply scale the final game image for the smaller screen. In some cases, such as porting a game to run on a very low-end PC, the art assets need to be scaled down to an appropriate size before the game is shipped. In these cases, the game is programmed to detect the game hardware and then select the appropriate art assets. (An exception must be made for font sizes, because we never want the text scaled so small that it becomes unreadable.)

With the move toward game development on tablet computers and other mobile devices, however, this is changing. The pixel density on these devices is increasing to the point where the human eye cannot detect individual pixels, so game developers now need to decide whether they really want their 720 pixels shoved onto a two-inch wide screen. Even though all the pixels are still there, is the image now too small?
Although "too many pixels" may be a good problem to have, it's still something that graphics programmers need to understand and know how to handle. We'll look in more detail at scaling in Chapter 3.

2.2.3 Aspect Ratio

A measure of the relationship of width to height (W:H), aspect ratio is often discussed in terms of television displays. For decades, SDTVs displayed images at an aspect ratio of 4:3 (1.33:1), the width being one-third greater than the height. This aspect ratio was also common in computer monitors, resulting in resolutions that hold the same aspect ratio (400 × 300, 640 × 480, 800 × 600, and 1,024 × 768). At the same time, feature films are often shot in the much wider aspect ratio of 1.85:1, which has been the standard for US theaters since the 1960s. The advantage of the wider aspect ratio is the ability to display an image in a way that better matches the way we see the world.

With the advent of high-definition displays has come a move toward a wider aspect ratio. As mentioned earlier, the typical 1,280 × 720 HDTV resolution is now common, with an aspect ratio of 16:9 (1.78:1). We see the same move in computer monitors, with many wide-screen monitors running resolutions to match the HDTV aspect ratio (1,280 × 720, 1,600 × 900, and 1,920 × 1,080). Compare the various aspect ratios shown in Figure 2.16.

Figure 2.16. Various aspect ratios and resolutions.

2.2.4 Mobile Displays

Since the recent introduction of the iPhone and the subsequent mobile game boom, we have seen an incredible "mobile arms race" between Apple, its competitors, and even itself.

Device                   Resolution       Aspect    Release Date
Apple iPhone             480 × 320        1.5:1     29-06-2007
Apple iPad               1,024 × 768      1.3:1     03-04-2010
Google Nexus One         800 × 480        1.67:1    05-01-2010
Apple iPhone 4           960 × 640        1.5:1     24-06-2010
Amazon Kindle Fire       1,024 × 600      1.7:1     15-11-2011
Apple iPad 3             2,048 × 1,536    1.3:1     13-03-2012
Samsung Galaxy S III     1,280 × 720      1.78:1    29-04-2012
Amazon Kindle Fire HD    1,280 × 800      1.6:1     14-09-2012
Apple iPhone 5           1,136 × 640      1.78:1    21-09-2012
Apple iPad Mini          1,024 × 768      1.3:1     02-11-2012

The resultant constantly morphing expectations for resolution and aspect ratio have made for a very difficult situation for game developers in the mobile market. Current devices have anywhere from 320,000 to 3.1 million pixels, with aspect ratios varying from 1.3:1 to 1.78:1.

In the case of the latest full-size iPads, the resolution of 2,048 × 1,536 is significantly larger than that of HDTVs. While providing some amazing potential for game display, this resolution is problematically even higher than that of the monitors used by most game developers. Not only is there the obvious problem of the huge resolution on art resources, there is also an expectation that the game will deploy to and make use of both the low and high ends of the pixel spectrum. This may mean multiple sizes of art assets that must be packaged with the mobile game.

These new issues associated with resolution versus physical width became apparent during the development of aliEnd. We had originally planned the game for the Xbox 360, but as the project neared completion, it was evident that mobile devices provided a really cool mechanic for the game. At the time, I was experimenting with Windows Phone development and decided that aliEnd provided a perfect opportunity to test out Microsoft's claim that an XNA game would easily port to the phone.
Even though the game functioned great on the mobile device, the artist, Geoff Gunning, wasn't happy with the way it looked on the small device. All the personality he had lovingly embodied frame by frame into the game characters was lost on the tiny screen. I later compared it to an actor moving from television to the Broadway stage: the subtle facial expressions are lost on those in the back rows. Zooming in on the character was a fairly simple fix, but we lucked out. Had the original view been necessary for the game play, we would have faced a fairly difficult problem.

2.2.5 Console Standards

Before we leave the topic of resolution, it is worth noting one other difference in the old analog SDTV. That is, there are actually three primary standards in place: NTSC (developed in the United States and primarily used in the Americas and various other locations), SECAM (developed in Europe and adopted for use in various European and Asian countries), and PAL (developed in Germany; it eventually became the standard for all of Europe and Russia). Even though developers now make games targeted for HDTV systems, the Xbox 360, PlayStation 3, and Wii generation of game consoles still need to connect with those older standards. The result is that console games are often released based on their geographic region. Combined with DVD region coding, languages, and rating bodies that vary from country to country, publishing games for consoles can be a fairly significant undertaking. Whereas issues surrounding languages and ratings still exist for mobile development, development tasks due to analog television standards and DVD regions thankfully are not an issue for mobile and PC development.

2.2.6 Frame Rate

The frame rate is a measure of the number of screen draws (frames) per second. Console players will expect a minimum of 60 fps for action games, and the limited graphics hardware in mobile devices will often see acceptable frame rates of 30 fps. In old animation clips, 12 fps was considered the lowest acceptable frame rate, although today it would look fairly bad if the entire screen were updating at such a slow speed.

Keeping track of the current frame rate is important because it will allow you to quickly learn whether you have written any poorly performing code. You can keep track by creating a counter that is incremented every time the Draw function is executed. Then, once a second has passed, update your frame rate with the number of frames counted over the last second.

double m_iElapsedMilliseconds = 0;
int m_iFrameCount = 0;
int m_iFPS = 0;

public void Update(GameTime gameTime)
{
    m_iElapsedMilliseconds += gameTime.ElapsedGameTime.TotalMilliseconds;
    if (m_iElapsedMilliseconds > 1000)
    {
        m_iElapsedMilliseconds -= 1000;
        m_iFPS = m_iFrameCount;
        m_iFrameCount = 0;
    }

    //Update Game
    //...
}

public void Draw(GameTime gameTime)
{
    m_iFrameCount++;
    Console.WriteLine("FPS is: " + m_iFPS);

    //Draw Scene
    //...
}

Running at 60 fps means that a frame should be drawn roughly every 17 milliseconds (ms). The game update may run faster or slower than 60 fps, but it is important to try to hold the draw rate at 60 fps. If not, the player will notice. As a result, if any significant operations occur during your game update that take longer than 17 ms (for example, texture or audio content loading, save game operations, artificial intelligence calculations, or leaderboard updates), it is important that these do not block the game draw from occurring.
One option is to divide the work across multiple frames. For example, if you know your path-finding algorithm may take up to 60 ms, you could pause the path-finding algorithm after 10 ms and then resume the path-finding calculations on the next frame. Depending on your system architecture, a better option may be to offload the intensive calculations to other nonblocking processor threads.

Ensuring background operations do not prevent the game Draw function from occurring is especially important when saving games or querying remote databases. In these circumstances, you should always use asynchronous function calls if they are available.

2.3 Double Buffering

Drawing images to the screen is fast, but our eyes are fast too. Imagine we were to draw a background image and then quickly draw another image on top of it to hide the background. The goal here is to create a final scene in which some piece of the background is obscured by the foreground image. Although this occurs in a fraction of a second, it is likely that our eyes would catch it; in fact, the result would look pretty bad. If you could look back at some of the games made in the 1970s for the Apple II, you would notice that you can actually see the images as they are drawn.

What we do to get around this issue is to make use of two buffers. The buffer that displays the current image is called the front buffer. A second buffer (the back buffer) is a duplicate area of graphics memory in which we can add all the art assets, building up to a final image while the front buffer displays the previously rendered image. The back buffer is where we do all our work. When we're ready, we swap the front buffer with the back buffer. The result is that the user will see the image only when we're finished editing it.

In XNA, all we need to do is request that a back buffer be created at a specific size, and the framework will do the rest of the work for us.

public Game1()
{
    graphics = new GraphicsDeviceManager(this);
    graphics.PreferredBackBufferWidth = 1280;
    graphics.PreferredBackBufferHeight = 720;
    //...
}

In DirectX and OpenGL, this is only slightly more complicated because we explicitly tell the system when we want it to swap buffers.

2.4 Graphic File Formats

PNG files are the format of choice for most 2D games today, but it is worth taking a look at other common file formats.

2.4.1 Bitmap

Bitmap (BMP) files are the most basic of the image file formats. For all practical purposes, they simply store the raw image data as a 2D array of colors. For this reason, I use the term bitmap (lowercase b) throughout this book to refer to the generic concept of storing 2D pixel data in RAM (or video RAM). The actual file format (Bitmap) has a few variations, but for the most part, it is a bit-for-bit match with the data in RAM. As a result, to process a bitmap file, all we need is the color depth and resolution of the image (stored in the file header). This lack of compression means that bitmap files can be processed very quickly. The downside is that they almost always require significantly more storage space than is necessary.

As an example, consider the image portion of the Taylor & Francis logo in Figure 2.17. We can see that the image is composed of large amounts of white space, stored in a bitmap as a series of white pixels. In memory, a white pixel takes up just as much space as any other colored pixel, despite the fact that the white pixels are all the same.

Figure 2.17. Taylor & Francis logo (left) and a scaled version of it (right).
With this in mind, a simple compression algorithm was developed that is ideal for logos or other images that contain groupings of pixels of the same color. Instead of storing the same value for each pixel, we can group consecutive pixels by color and store each color along with a count. For example, instead of

FF 00 00, FF 00 00, FF 00 00, FF 00 00, FF FF FF, FF FF FF,

we can store the color of the pixel along with the number of occurrences before the pixel color changes:

FF 00 00 (× 4), FF FF FF (× 2).

In so doing, we have dramatically decreased the storage requirements for the logo. This type of compression is called run-length encoding (a short code sketch of this idea appears after Section 2.4.5). It is simple to comprehend, and no data are lost during the compression process. An additional advantage is that the image can be created as the file is processed.

2.4.2 Graphics Interchange Format

The graphics interchange format (GIF) for images, developed in 1987, uses a lossless compression scheme (LZW) that, much like the run-length encoding described above, works best on images containing runs of similar pixels. This made GIF images an ideal choice for logos, and GIF was used extensively in the 1990s, especially on the web. Although GIF images can be used effectively to store an animation, GIF animations are more of a novelty and do not serve much use for game development.

Even worse, the GIF format has further strikes against it. First, the lossless compression algorithm used by the GIF format was patented by Unisys until 2004. Second, GIF images do not support alpha (partial) transparency. Looking back at Figure 2.17 (right), we see that the edges of the image are a blend between white and blue. Now imagine that we wanted to place the image on a dark background. If the logo had a harder edge, we could open a graphics editor and simply replace all the white pixels with the background color. But since the logo has a soft edge, the result is rather awful (see Figure 2.18). Without the ability to store partially transparent pixels, the GIF file format is simply not robust enough for our needs.

Figure 2.18. Taylor & Francis logo with a dark background.

2.4.3 Portable Network Graphics

Like GIF, the portable network graphics (PNG) file format supports lossless compression. It was developed in 1995 as a result of the two shortcomings of the GIF file type noted above (lack of support for alpha transparency and patent issues). The PNG format is now the primary graphical storage format for 2D games.

2.4.4 Joint Photographic Experts Group

Unlike GIF and PNG, the Joint Photographic Experts Group (JPEG or JPG) image format utilizes lossy compression. That is, as the image is compressed, the original detail is lost and cannot be recovered. The advantage of the JPEG format is that when used on photographs, it allows for a large compression ratio, as much as 10 to 1, with very little loss in image quality. This makes JPEG a popular standard for photography and web pages. However, JPEG compression is not a good choice for 2D games. Not only is detail lost as an image is processed by JPEG compression, but more important, the format does not support transparency.

2.4.5 Truevision Advanced Raster Graphics Adapter

Developed as a native format for early graphics cards, Truevision graphics adapter (TGA) and Truevision advanced raster graphics adapter (TARGA) files allow for both raw (uncompressed) storage and lossless compression. Simple in structure, TGA files were historically used for textures in 3D games.
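As mentioned in Section 2.4.1, here is a minimal sketch of run-length encoding a row of pixels. It is illustrative only: RunLengthEncode is a hypothetical helper operating on an array of XNA Color values, not part of any file format specification, and it assumes the System.Collections.Generic and Microsoft.Xna.Framework usings.

//A minimal run-length encoding sketch: collapse consecutive identical colors
//into (color, count) pairs.
List<KeyValuePair<Color, int>> RunLengthEncode(Color[] pixels)
{
    var runs = new List<KeyValuePair<Color, int>>();
    int i = 0;
    while (i < pixels.Length)
    {
        int count = 1;
        while (i + count < pixels.Length && pixels[i + count] == pixels[i])
        {
            count++;
        }
        runs.Add(new KeyValuePair<Color, int>(pixels[i], count));
        i += count;
    }
    return runs;
}

//Example matching the text: four red pixels followed by two white pixels
//encode as (red, 4), (white, 2).
Color[] row = { Color.Red, Color.Red, Color.Red, Color.Red, Color.White, Color.White };
var encoded = RunLengthEncode(row);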
2.4.6 XNA Binary

The last file type worth mentioning is XNA binary (XNB). XNA developers may notice that their PNG files are converted to XNB files. These binary files are created automatically during one of the final stages of the game deployment process by XNA Game Studio. They offer a minimal level of security so that raw PNGs won't be available to prying eyes. But, even though they are compressed into a Microsoft-specific format and protected by copyright, the images are not completely protected; exporters can be found on the Internet.

Exercises

Questions

2.1. Calculate the amount of memory (in bytes) needed to store a 1,024 × 768 24-bit RGB image.

2.2. At a garage sale, you find a used digital camera. On the side of the camera it states that it takes pictures that are 5.0 megapixels in size. What is a likely resolution (width and height) of the images taken by the camera? Assuming the images are in true color and stored uncompressed, how much space in memory does each image require?
