How Software Testing Is Different from Debugging

How software testing works, how it helps to measure quality, how unit testing is done, and how unit testing is made easy in MVC.
Published Date: 27-10-2017
About Software Testing and Unit Testing

To gain the most benefit from unit testing, you must understand its purpose and how it can help improve your software. In this chapter, you learn about the “universe” of software testing, where unit testing fits into this universe, and what its benefits and drawbacks are.

What Is Software Testing For?

A common goal of many software projects is to make some profit for someone. The usual way in which this goal is realized is directly, by selling the software via the app store or licensing its use in some other way. Software destined for in-house use by the developer’s business often makes its money indirectly, by improving the efficiency of some business process and reducing the amount of time paid staff must spend attending to the process. If the savings in process efficiency are greater than the cost of developing the software, the project is profitable. Developers of open source projects often sell support packages or use the software themselves: in these cases the preceding argument still applies.

So, economics 101: if the goal of a software project is to make profit—whether the end product is to be sold to a customer or used internally—it must provide some value to the user greater than the cost of the software in order to meet that goal and be successful. I realize that this is not a groundbreaking statement, but it has important ramifications for software testing. If testing (also known as Quality Assurance, or QA) is something we do to support our software projects, it must support the goal of making a profit. That’s important because it automatically sets some constraints on how a software product must be tested: if the testing will cost so much that you lose money, it isn’t appropriate to do. But testing software can show that the product works; that is, that the product contains the valuable features expected by your customers.
If you can’t demonstrate that value, the customers may not buy the product.

Notice that the purpose of testing is to show that the product works, not to discover bugs. It’s Quality Assurance, not Quality Insertion. Finding bugs is usually bad. Why? Because it costs money to fix bugs, and that’s money that’s being wasted because you were being paid to write the software without bugs in the first place. In an ideal world, you might think that developers just write bug-free software, do some quick testing to demonstrate there are no bugs, and then we upload to iTunes Connect and wait for the money to roll in. But hold on: working like that might introduce the same cost problem, in another way. How much longer would it take you to write software that you knew, before it was tested, would be 100% free of bugs? How much would that cost?

It seems, therefore, that appropriate software testing is a compromise: balancing the level of control needed on development with the level of checking done, to provide some confidence that the software works without making the project costs unmanageable. How should you decide where to make that compromise? It should be based on reducing the risk associated with shipping the product to an acceptable level. So the most “risky” components—those most critical to the software’s operation or those where you think most bugs might be hiding—should be tested first, then the next most risky, and so on until you’re happy that the amount of risk remaining is not worth spending more time and money addressing. The end goal should be that the customer can see that the software does what it ought, and is therefore worth paying for.

Who Should Test Software?

In the early days of software engineering, projects were managed according to the “waterfall model” (see Figure 1.1).[1]
In this model, each part of the development process was performed as a separate “phase,” with the signed-off output of one phase being the input for the next. So the product managers or business analysts would create the product requirements, and after that was done the requirements would be handed to designers and architects to produce a software specification. Developers would be given the specification in order to produce code, and the code would be given to testers to do quality assurance. Finally the tested software could be released to customers (usually initially to a select few, known as beta testers).

[1] In fact, many software projects, including iOS apps, are still managed this way. This fact shouldn’t get in the way of your believing that the waterfall model is an obsolete historical accident.

Figure 1.1 The phases of development in the waterfall software project management process: Requirements → Specification → Development → Test → Deployment.

This approach to software project management imposes a separation between coders and testers, which turns out to have both benefits and drawbacks for the actual work of testing. The benefit is that by separating the duties of development and testing the code, there are more people who can find bugs. We developers can sometimes get attached to the code we’ve produced, and it can take a fresh pair of eyes to point out the flaws. Similarly, if any part of the requirements or specification is ambiguous, a chance exists that the tester and developer interpret the ambiguity in different ways, which increases the chance that it gets discovered.
The main drawback is cost. Table 1.1, reproduced from Code Complete, 2nd Edition, by Steve McConnell (Microsoft Press, 2004), shows the results of a survey that evaluated the cost of fixing a bug as a function of the time it lay “dormant” in the product. The table shows that fixing bugs at the end of a project is the most expensive way to work, which makes sense: a tester finds and reports a bug, which the developer must then interpret and attempt to locate in the source. If it’s been a while since the developer worked on that project, then the developer must review the specifications and the code. The bug-fix version of the code must then be resubmitted for testing to demonstrate that the issue has been resolved.

Table 1.1 Cost of Fixing Bugs Found at Different Stages of the Software Development Process

                      Cost of Bugs, by Time Detected
  Time Introduced     Requirements  Architecture  Coding  System Test  Post-Release
  Requirements        1             3             5–10    10           10–100
  Architecture        -             1             10      15           25–100
  Coding              -             -             1       10           10–25

Where does this additional cost come from? A significant part is due to the communication between different teams: your developers and testers may use different terminology to describe the same concepts, or even have entirely different mental models for the same features in your app. Whenever this occurs, you’ll need to spend some time clearing up the ambiguities or problems this causes.
The table also demonstrates that the cost associated with fixing bugs at the end of the project depends on how early the bug was injected: a problem with the requirements can be patched up at the end only by rewriting a whole feature, which is a very costly undertaking. This motivates waterfall practitioners to take a very conservative approach to the early stages of a project, not signing off on requirements or specification until they believe that every “i” has been dotted and every “t” crossed. This state is known as analysis paralysis, and it increases the project cost.

Separating the developers and testers in this way also affects the type of testing that is done, even though there isn’t any restriction imposed. Because testers will not have the same level of understanding of the application’s internals and code as the developers do, they will tend to stick to “black box” testing that treats the product as an opaque unit that can be interacted with only externally. Third-party testers are less likely to adopt “white box” testing approaches, in which the internal operation of the code can be inspected and modified to help in verifying the code’s behavior.

The kind of test that is usually performed in a black box approach is a system test, or integration test. That’s a formal term meaning that the software product has been taken as a whole (that is, the system is integrated), and testing is performed on the result. These tests usually follow a predefined plan, which is the place where the testers earn their salary: they take the software specification and create a series of test cases, each of which describes the steps necessary to set up and perform the test, and the expected result of doing so. Such tests are often performed manually, especially where the result must be interpreted by the tester because of reliance on external state, such as a network service or the current date.
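The shape of such a test case (setup steps, an action, and an expected result) can be sketched in code. This is a hypothetical illustration, not any real testing tool: the `run_test_case` helper and the toy shopping-cart “system” are invented names.

```python
# A system-test case recorded as data: setup steps, the action under
# test, and the expected result. All names here are invented for
# illustration; real test plans are often prose documents.

def run_test_case(case, system):
    """Apply the setup steps, perform the action, compare to expected."""
    for step in case["setup"]:
        step(system)
    actual = case["action"](system)
    return {"name": case["name"], "passed": actual == case["expected"]}

# A toy "system": a shopping cart held in a dict.
add_item_case = {
    "name": "adding an item updates the cart total",
    "setup": [
        lambda s: s.update(total=0),               # known baseline state
        lambda s: s.update(total=s["total"] + 5),  # add a $5 item
    ],
    "action": lambda s: s["total"],
    "expected": 5,
}

result = run_test_case(add_item_case, {})
```

Written this way the plan can be executed by the computer, although establishing the baseline state for a real product is far more expensive than resetting a dictionary.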
Even where such tests can be automated, they often take a long time to run: the entire software product and its environment must be configured to a known baseline state before each test, and the individual steps may rely on time-consuming interactions with a database, file system, or network service.

Beta testing, which in some teams is called customer environment testing, is really a special version of a system test. What is special about it is that the person doing the testing probably isn’t a professional software tester. If there are any differences between the tester’s system configuration or environment and the customer’s, or use cases that users expect which the project team didn’t consider, these will be discovered in beta testing, and any problems associated with the differences can be reported. For small development teams, particularly those who cannot afford to hire testers, a beta test offers the first chance to try the software in a variety of usage patterns and environments. Because the beta test comes just before the product should ship, dealing with beta feedback sometimes suffers as the project team senses that the end is in sight and can smell the pizza at the launch party. However, there’s little point in doing the testing if you’re not willing to fix the problems that occur.

Developers can also perform their own testing. If you have ever pressed Build & Debug in Xcode, you have done a type of white-box testing: you have inspected the internals of your code to try to find out more about whether its behavior is correct (or more likely, why it isn’t correct). Compiler warnings, the static analyzer, and Instruments are all applications that help developers do testing.
The advantages and disadvantages of developer testing almost exactly oppose those of independent testing: when developers find a problem, it’s usually easier (and cheaper) for them to fix it because they already have some understanding of the code and where the bug is likely to be hiding. In fact, developers can test as they go, so that bugs are found very soon after they are written. However, if the bug is that the developer doesn’t understand the specification or the problem domain, this bug will not be discovered without external help.

Getting the Requirements Right

The most egregious bug I have written (to date, and I hope ever) in an application fell into the category of “developer doesn’t understand requirements.” I was working on a systems administration tool for the Mac, and because it ran outside any user account, it couldn’t look at the user settings to decide what language to use for logging. It read the language setting from a file. The file looked like this:

LANGUAGE=English

Fairly straightforward. The problem was that some users of non-English languages were reporting that the tool was writing log files in English, so it was getting the choice of language wrong. I found that the code for reading this file was very tightly coupled to other code in the tool, so I set about breaking dependencies apart and inserting unit tests to find out how the code behaved. Eventually, I discovered the problem that was occasionally causing the language check to fail and fixed it. All of the unit tests passed, so the code worked, right?

Actually, wrong: it turned out that I didn’t know the file could sometimes look like this:

LANGUAGE=en

Not only did I not know this, but neither did my testers. In fact it took the application crashing on a customer’s system to discover this problem, even though the code was covered by unit tests.

When Should Software Be Tested?
The previous section gave away the answer to the preceding question to some extent: the earlier a part of the product can be tested, the cheaper it will be to find any problems that exist. If the parts of the application available at one stage of the process are known to work well and reliably, fewer problems will occur with integrating them or adding to them at later stages than if all the testing is done at the end. However, it was also shown in that section that software products are traditionally tested only at the end: an explicit QA phase follows the development, then the software is released to beta testers before finally being opened up for general release.

Modern approaches to software project management recognize that this is deficient and aim to continually test all parts of the product at all times. This is the main difference between “agile” projects and traditionally managed projects. Agile projects are organized in short stints called iterations (sometimes sprints). At every iteration, the requirements are reviewed; anything obsolete is dropped and any changes or necessary additions are made. The most important requirement is designed, implemented, and tested in that iteration. At the end of the iteration, the progress is reviewed and a decision made as to whether to add the newly developed feature to the product, or add requirements to make changes in future iterations.

Crucially, because the agile manifesto values “individuals and interactions over processes and tools,” the customer or a representative is included in all the important decisions. There’s no need to sweat over perfecting a lengthy functional specification document if you can just ask the user how the app should work—and to confirm that the app does indeed work that way. In agile projects, then, all aspects of the software project are being tested all the time.
The customers are asked at every iteration what their most important requirements are, and developers, analysts, and testers all work together on software that meets those requirements. One framework for agile software projects, called Extreme Programming (or XP), goes as far as to require that developers unit test their code and work in pairs, with one “driving” the keyboard while the other suggests changes, improvements, and potential pitfalls.

So the real answer is that software should be tested all the time. You can’t completely remove the chance that users will use your product in unexpected ways and uncover bugs you didn’t address internally—not within reasonable time and budget constraints, anyway. But you can automatically test the basic stuff yourself, leaving your QA team or beta testers free to try out the experimental use cases and attempt to break your app in new and ingenious ways. And you can ask at every turn whether what you’re about to do will add something valuable to your product and increase the likelihood that your customers will be satisfied that your product does what the marketing text said it would.

Examples of Testing Practices

I have already described system testing, where professional testers take the whole application and methodically go through the use cases looking for unexpected behavior. This sort of testing can be automated to some extent with iOS apps, using the UI Automation instrument that’s part of Apple’s Instruments profiling tool.

System tests do not always need to be generic attempts to find any bug that exists in an application; sometimes the testers will have some specific goal in mind. Penetration testers look for security problems by feeding the application with malformed input, performing steps out of sequence, or otherwise frustrating the application’s expectation of its environment.
Usability testers watch users interacting with the application, taking note of anything that the users get wrong, spend a long time over, or are confused by. A particular technique in usability testing is A/B testing: different users are given different versions of the application and the usages are compared statistically. Google is famous for using this practice in its software, even testing the effects of different shades of color in its interfaces. Notice that usability testing does not need to be performed on the complete application: a mock-up in Interface Builder, Keynote, or even on paper can be used to gauge user reaction to an app’s interface. The lo-fi version of the interface might not expose subtleties related to interacting with a real iPhone, but it is definitely a much cheaper way to get early results.

Developers, particularly on larger teams, submit their source code for review by peers before it gets integrated into the product they’re working on. This is a form of white-box testing; the other developers can see how the code works, so they can investigate how it responds to certain conditions and whether all important eventualities are taken into account. Code reviews do not always turn up logic bugs; I’ve found that reviews I have taken part in usually discover problems adhering to coding style guidelines or other issues that can be fixed without changing the code’s behavior. When reviewers are given specific things to look for (for example, a checklist of five or six common errors—retain count problems often feature in checklists for Mac and iOS code), they are more likely to find bugs in these areas, though they may not find problems unrelated to those you asked for.

Where Does Unit Testing Fit In?
Unit testing is another tool that developers can use to test their own software. You will find out more about how unit tests are designed and written in Chapter 3, “How to Write a Unit Test,” but for the moment it is sufficient to say that unit tests are small pieces of code that test the behavior of other code. They set up the preconditions, run the code under test, and then make assertions about the final state. If the assertions are valid (that is, the conditions tested are satisfied), the test passes. Any deviation from the asserted state represents a failure, including exceptions that stop the test from running to completion.[2]

In this way, unit tests are like miniature versions of the test cases written by integration testers: they specify the steps to run the test and the expected result, but they do so in code. This allows the computer to do the testing, rather than forcing the developer to step through the process manually. However, a good test is also good documentation: it describes the expectations the tester had of how the code under test would behave. A developer who writes a class for an application can also write tests to ensure that this class does what is required. In fact, as you will see in the next chapter, the developer can also write tests before writing the class that is being tested.

Unit tests are so named because they test a single “unit” of source code, which, in the case of object-oriented software, is usually a class. The terminology comes from the compiler term “translation unit,” meaning a single file that is passed to the compiler. This means that unit tests are naturally white-box tests, because they take a single class out of the context of the application and evaluate its behavior independently.
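Those three steps (set up preconditions, run the code under test, assert on the final state) map directly onto test methods. Here is a minimal sketch using Python’s unittest framework; the book’s own examples use OCUnit and Objective-C, and the `Account` class here is invented purely for illustration.

```python
import unittest

class Account:
    """A trivial class to be tested; invented for this sketch."""
    def __init__(self, balance=0):
        self.balance = balance

    def deposit(self, amount):
        if amount <= 0:
            raise ValueError("deposit must be positive")
        self.balance += amount

class AccountTests(unittest.TestCase):
    def test_deposit_increases_balance(self):
        account = Account(balance=10)          # set up preconditions
        account.deposit(5)                     # run the code under test
        self.assertEqual(account.balance, 15)  # assert on the final state

    def test_negative_deposit_raises(self):
        # Here the exception is the behavior being asserted; an
        # unexpected exception elsewhere would also fail the test.
        with self.assertRaises(ValueError):
            Account().deposit(-1)

suite = unittest.TestLoader().loadTestsFromTestCase(AccountTests)
result = unittest.TextTestRunner(verbosity=0).run(suite)
```

If `deposit()` threw an exception the framework didn’t expect, the test would be reported as an error rather than a failure, but either way it counts as not passing.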
Whether you choose to treat that class as a black box, and only interact with it via its public API, is a personal choice, but the effect is still to interact with a small portion of the application.

This fine granularity of unit testing makes it possible to get a very rapid turnaround on problems discovered through running the unit tests. A developer working on a class is often working in parallel on that class’s tests, so the code for that class will be at the front of her mind as she writes the tests. I have even had cases where I didn’t need to run a unit test to know that it would fail and how to fix the code, because I was still thinking about the class that the test was exercising. Compare this with the situation where a different person tests a use case that the developer might not have worked on for months. Even though unit testing means that a developer is writing code that won’t eventually end up in the application, this cost is offset by the benefit of discovering and fixing problems before they ever get to the testers.

Bug-fixing is every project manager’s worst nightmare: there’s some work to do, the product can’t ship until it’s done, but you can’t plan for it because you don’t know how many bugs exist and how long it will take the developers to fix them. Looking back at Table 1.1, you will see that the bugs fixed at the end of a project are the most expensive to fix, and that there is a large variance in the cost of fixing them. By factoring the time for writing unit tests into your development estimates, you can fix some of those bugs as you go and reduce the uncertainty over your ship date.

Unit tests will almost certainly be written by developers, because using a testing framework means writing code, working with APIs, and expressing low-level logic: exactly the things that developers are good at. However, it’s not necessary for the same developer to write a class and its tests, and there are benefits to separating the two tasks.
[2] The test framework you use may choose to report assertion failures and “errors” separately, but that’s okay. The point is that you get to find out the test can’t be completed with a successful outcome.

A senior developer can specify the API for a class to be implemented by a junior developer by expressing the expected behavior as a set of tests. Given these tests, the junior developer can implement the class by successively making each test in the set pass.

This interaction can also be reversed. Developers who have been given a class to use or evaluate, but who do not yet know how it works, can write tests to codify their assumptions about the class and find out whether those assumptions are valid. As they write more tests, they build a more complete picture of the capabilities and behavior of the class. However, writing tests for existing code is usually harder than writing tests and code in parallel. Classes that make assumptions about their environment may not work in a test framework without significant effort, because dependencies on surrounding objects must be replaced or removed. Chapter 11, “Designing for Test-Driven Development,” covers applying unit testing to existing code.

Developers working together can even switch roles very rapidly: one writes a test that the other codes up the implementation for; then they swap, and the second developer writes a test for the first. However the programmers choose to work together is immaterial: in any case, a unit test or set of unit tests can act as a form of documentation expressing one developer’s intent to another.

One key advantage of unit testing is that running the tests is automated. It may take as long to write a good test as to write a good plan for a manual test, but a computer can then run hundreds of unit tests per second.
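The senior/junior interaction described above can be sketched as a handful of tests handed over before any implementation exists. Everything here is invented for illustration: the checks act as the specification, and the `Stack` class plays the part of the junior developer’s code written to make them pass.

```python
# The "specification": tests expressing the expected behavior of a
# class that does not exist yet. The implementer's job is to make
# each check pass in turn.

def spec_push_then_peek(cls):
    s = cls()
    s.push(42)
    assert s.peek() == 42, "peek should return the last pushed value"

def spec_pop_removes_top(cls):
    s = cls()
    s.push(1)
    s.push(2)
    assert s.pop() == 2 and s.peek() == 1, "pop should remove the top value"

SPEC = [spec_push_then_peek, spec_pop_removes_top]

# The implementation written against that specification.
class Stack:
    def __init__(self):
        self._items = []
    def push(self, value):
        self._items.append(value)
    def pop(self):
        return self._items.pop()
    def peek(self):
        return self._items[-1]

for check in SPEC:
    check(Stack)   # every check passing means the spec is satisfied
```

The tests double as documentation of intent: a reader can learn the expected API of `Stack` from `SPEC` without seeing the implementation at all.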
Developers can keep all the tests they’ve ever used for an application in their version control systems alongside the application code, and then run the tests whenever they want. This makes it very cheap to test for regression bugs: bugs that had been fixed but are reintroduced by later development work. Whenever you change the application, you should be able to run all the tests in a few seconds to ensure that you didn’t introduce a regression. You can even have the tests run automatically whenever you commit source code to your repository, by a continuous integration system as described in Chapter 4, “Tools for Testing.”

Repeatable tests do not just warn you about regression bugs. They also provide a safety net when you want to edit the source code without any change in behavior—when you want to refactor the application. The purpose of refactoring is to tidy up your app’s source or reorganize it in some way that will be useful in the future, but without introducing any new functionality, or bugs. If the code you are refactoring is covered by sufficient unit tests, you know that any differences in behavior you introduce will be detected. This means that you can fix up the problems now, rather than trying to find them before (or after) shipping your next release.

However, unit testing is not a silver bullet. As discussed earlier, there is no way that developers can meaningfully test whether they understood the requirements. If the same person wrote the tests and the code under test, each will reflect the same preconceptions and interpretation of the problem being solved by the code. You should also appreciate that no good metrics exist for quantifying the success of a unit-testing strategy. The only popular measurements—code coverage and number of passing tests—can both be changed without affecting the quality of the software being tested.
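The refactoring safety net can be made concrete with a small sketch. The two versions of the invented function below are the “before” and “after” of a behavior-preserving edit; the same tests passing against both versions is the evidence that no behavior changed.

```python
# Before refactoring: an explicit loop.
def total_price_before(items):
    total = 0
    for price, quantity in items:
        total += price * quantity
    return total

# After refactoring: tidier, but intended to behave identically.
def total_price_after(items):
    return sum(price * quantity for price, quantity in items)

# The safety net: the same expected results checked against each version.
CASES = [
    ([], 0),                  # empty cart
    ([(5, 2)], 10),           # one line item
    ([(5, 2), (3, 1)], 13),   # several line items
]

def passes_all(fn):
    return all(fn(items) == expected for items, expected in CASES)
```

If `total_price_after` had accidentally changed behavior, `passes_all` would report it immediately, before the change ever reached a release.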
Going back to the concept that testing is supposed to reduce the risk associated with deploying the software to the customer, it would be really useful to have some reporting tool that could show how much risk has been mitigated by the tests that are in place. The software can’t really know how much risk you place in any particular piece of code, so the measurements that are available are only approximations to this risk level.

Counting tests is a very naïve way to measure the effectiveness of a set of tests. Consider your annual bonus: if the manager uses the number of passing tests to decide how much to pay you, you could write a single test and copy it multiple times. It doesn’t even need to test any of your application code; a test that verifies the result "1==1" would add to the count of passing tests in your test suite. And what is a reasonable number of tests for any application? Can you come up with a number that all iOS app developers should aspire to? Probably not—I can’t. Even two developers each tasked with writing the same application would find different problems in different parts, and would thus encounter different levels of risk in writing the app.

Measuring code coverage partially addresses the problems with test counting by measuring the amount of application code that is executed when the tests are run. This now means that developers can’t increase their bonuses by writing meaningless tests—but they can still just look for “low-hanging fruit” and add tests for that code. Imagine increasing code coverage scores by finding all of the @synthesize property definitions in your app and testing that the getters and setters work. Sure, as we’ll see, these tests do have value, but they still aren’t the most valuable use of your time.
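Both gaming strategies are easy to demonstrate in code. In this invented sketch, the tautological test inflates the passing-test count without touching application code at all, and the property test inflates coverage while learning almost nothing; the `Recipe` class is hypothetical.

```python
# Gaming the test count: a test that can never fail and exercises
# no application code whatsoever.
def test_one_equals_one():
    assert 1 == 1

# Gaming code coverage: "testing" a trivial property. The getter and
# setter lines are now covered, but very little has been verified.
class Recipe:
    """Invented example class with a synthesized-style property."""
    def __init__(self):
        self._title = ""

    @property
    def title(self):
        return self._title

    @title.setter
    def title(self, value):
        self._title = value

def test_title_getter_and_setter():
    recipe = Recipe()
    recipe.title = "Soup"
    assert recipe.title == "Soup"

# Both "pass," and fifty copies of either would pass fifty times.
test_one_equals_one()
test_title_getter_and_setter()
```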
In fact, code coverage tools specifically weigh against coverage of more complicated code. The definition of “complex” here is a specific one from computer science called cyclomatic complexity. In a nutshell, the cyclomatic complexity of a function or method is related to the number of loops and branches—in other words, the number of different paths through the code.

Take two methods: -methodOne has twenty lines with no if, switch, or ?: expressions or loops (in other words, it is minimally complex). The other method, -methodTwo:(BOOL)flag, has an if statement with 10 lines of code in each branch. Fully covering -methodOne needs only one test, but you must write two tests to fully cover -methodTwo:, with each test exercising the code in one of the two branches of the if condition. The code coverage tool will just report how many lines are executed—the same number, twenty, in each case—so the end result is that it is harder to improve code coverage of more complex methods. But it is the complex methods that are likely to harbor bugs.

Similarly, code coverage tools don’t do well at handling special cases. If a method takes an object parameter, whether you test it with an initialized object or with nil, it’s all the same to the coverage tool. In fact, maybe both tests are useful; that doesn’t matter as far as code coverage is concerned. Either one will run the lines of code in the method, so adding the other doesn’t increase the coverage.

Ultimately, you (and possibly your customers) must decide how much risk is present in any part of the code, and how much risk is acceptable in the shipping product. Even if the test metric tools worked properly, they could not take that responsibility away from
you. Your aim, then, should be to test while you think the tests are being helpful—and conversely, to stop testing when you are not getting any benefit from the tests. When asked the question, “Which parts of my software should I test?” software engineer and unit testing expert Kent Beck replied, “Only the bits that you want to work.”

What Does This Mean for iOS Developers?

The main advantage that unit testing brings to developers of iOS apps is that a lot of benefit can be reaped for little cost. Because many of the hundreds of thousands of apps in the App Store are produced by micro-ISVs, anything that can improve the quality of an app without requiring much investment is a good thing. The tools needed to add unit tests to an iOS development project are free. In fact, as described in Chapter 4, the core functionality is available in the iOS SDK package. You can write and run the tests yourself, meaning that you do not need to hire a QA specialist to start getting useful results from unit testing.

Running tests takes very little time, so the only significant cost in adopting unit testing is the time it takes you to design and write the test cases. In return for this cost, you get an increased understanding of what your code should do while you are writing the code. This understanding helps you to avoid writing bugs in the first place, reducing the uncertainty in your project’s completion time because there should be fewer show-stoppers found by your beta testers.

Remember that as an iOS app developer, you are not in control of your application’s release to your customers: Apple is. If a serious bug makes it all the way into a release of your app, after you have fixed the bug you have to wait for Apple to approve the update (assuming they do) before it makes its way into the App Store and onto your customers’ phones and iPads. This alone should be worth the cost of adopting a new testing procedure.
Releasing buggy software is bad enough; being in a position where you can’t rapidly get a fix out is disastrous.

You will find that as you get more comfortable with test-driven development—writing the tests and the code together—you get faster at writing code, because thinking about the code’s design and the conditions it will need to cope with becomes second nature. You will soon find that writing test-driven code, including its tests, takes the same time that writing the code alone used to take, but with the advantage that you are more confident about its behavior. The next chapter will introduce you to the concepts behind test-driven development: concepts that will be used throughout the rest of the book.

Techniques for Test-Driven Development

You have seen in Chapter 1, “About Software Testing and Unit Testing,” that unit tests have a place in the software development process: you can test your own code and have the computer automatically run those tests again and again to ensure that development is progressing in the right direction. Over the past couple of decades, developers working with unit testing frameworks—particularly practitioners of Extreme Programming (XP), a software engineering methodology invented by Kent Beck, the creator of the SUnit framework for Smalltalk (the first unit testing framework on any platform, and the progenitor of JUnit for Java and OCUnit for Objective-C)—have refined their techniques and created new ways to incorporate unit testing into software development. This chapter is about technique—and about using unit tests to improve your efficiency as a developer.

Test First

The practice developed by Extreme Programming aficionados is test-first or test-driven development, which is exactly what it sounds like: developers are encouraged to write tests before writing the code that will be tested. This sounds a little weird, doesn’t it? How can you test something that doesn’t exist yet?
Designing the tests before building the product is already a common way of working in manufacturing real-world products: The tests define the acceptance criteria of the product. Unless all the tests pass, the code is not good enough. Conversely, assuming a comprehensive test suite, the code is good enough as soon as it passes every test, and no more work needs to be done on it.

Writing all the tests before writing any code would suffer some of the same problems that have been found when all the testing is done after all the code is written. People tend to be better at dealing with small problems one at a time, seeing them through to completion before switching context to deal with a different problem. If you were to write all the tests for an app, then go back through and write all the code, you would need to address each of the problems in creating your app twice, with a large gap in between each go. Remembering what you were thinking about when you wrote any particular group of tests a few months earlier would not be an easy task. So test-driven developers do not write all the tests first, but they still don't write code before they've written the tests that will exercise it.

An additional benefit of working this way is that you get rapid feedback on when you have added something useful to your app. Each test pass can give you a little boost of encouragement to help get the next test written. You don't need to wait a month until the next system test to discover whether your feature works.

The idea behind test-driven development is that it makes you think about what the code you're writing needs to do while you're designing it. Rather than writing a module or class that solves a particular problem and then trying to integrate that class into your app, you think about the problems that your application has and then write code to solve those problems.
Moreover, you demonstrate that the code actually does solve those problems, by showing that it passes the tests that enumerate the requirements. In fact, writing the tests first can even help you discover whether a problem exists. If you write a test that passes without creating any code, either your app already deals with the case identified by the new test, or the test itself is defective.

The "problems" that an app must solve are not, in the context of test-driven development, entire features like "the user can post favorite recipes to Twitter." Rather they are microfeatures: very small pieces of app behavior that support a little piece of a bigger feature. Taking the "post favorite recipes to Twitter" example, there could be a microfeature requiring that a text field exists where users can enter their Twitter username. Another microfeature would be that the text in that field is passed to a Twitter service object as the user's name. Yet another requires that the Twitter username be loaded from NSUserDefaults. Dozens of microfeatures can each contribute a very small part to the use case, but all must be present for the feature to be complete.

A common approach for test-driven working is to write a single test, then run it to check that it fails, then write the code that will make the test pass. After this is done, it's on to writing the next test. This is a great way to get used to the idea of test-driven development, because it gets you into the mindset where you think of every new feature and bug fix in terms of unit tests. Kent Beck describes this mindset as "test infection"—the point where you no longer think, "How do I debug this?" but "How do I write a test for this?"

Proponents of test-driven development say that they use the debugger much less frequently than on non–test-driven projects.
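The last of those microfeatures, loading the Twitter username from NSUserDefaults, might be expressed as a unit test along the following lines. This is only a sketch: the TwitterSettings class, its -username method, and the "TwitterUsername" defaults key are hypothetical names invented for illustration, and the STAssertEqualObjects macro comes from OCUnit's SenTestingKit (covered in Chapter 4).

```objc
#import <SenTestingKit/SenTestingKit.h>
#import "TwitterSettings.h" // hypothetical class under test

@interface TwitterSettingsTests : SenTestCase
@end

@implementation TwitterSettingsTests

- (void)testUsernameIsLoadedFromUserDefaults
{
    // Arrange: store a known value where the app is expected to look for it.
    [[NSUserDefaults standardUserDefaults] setObject: @"annie_dev"
                                              forKey: @"TwitterUsername"];
    // Act and assert: the microfeature is simply that the stored name comes back.
    TwitterSettings *settings = [[TwitterSettings alloc] init];
    STAssertEqualObjects([settings username], @"annie_dev",
                         @"The username should be loaded from NSUserDefaults");
}

@end
```

A test this small is the point: it pins down one tiny piece of behavior, and dozens of its siblings together specify the whole feature.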
Not only can they show that the code does what it ought using the tests, but the tests make it easier to understand the code's behavior without having to step through in a debugger. Indeed, the main reason to use a debugger is because you've found that some use case doesn't work, but you can't work out where the problem occurs. Unit tests already help you track down the problem by testing separate, small portions of the application code in isolation, making it easy to pinpoint any failures. So test-infected developers don't think, "How can I find this bug?" because they have a tool that can help locate the bug much faster than using a debugger. Instead, they think, "How can I demonstrate that I've fixed the bug?" or, "What needs to be added or changed in my original assumptions?" As such, test-infected developers know when they've done enough work to fix the problem—and whether they accidentally broke anything else in the process.

Working in such a way may eventually feel stifling and inefficient. If you can see that a feature needs a set of closely related additions, or that fixing a bug means a couple of modifications to a method, making these changes one at a time feels artificial. Luckily, no one is forcing you to make changes one test at a time. As long as you're thinking, "How do I test this?" you're probably writing code that can be tested. So writing a few tests up front before writing the code that passes them is fine, and so is writing the code that you think solves the problem and then going back and adding the tests. Ensure that you do add the tests, though (and that they do indeed show that the freshly added code works); they serve to verify that the code behaves as you expect, and as insurance against introducing a regression in later development.
Later on, you’ll find that because writing the tests helps to organize your thoughts and identify what code you need to write, the total time spent in writing test-driven code is not so different than writing code without tests used to take.The reduced need 1 for debugging time at the end of the project is a nice additional savings. Red, Green, Refactor ptg7913098 It’s all very well saying that you should write the test before you write the code, but how do you write that test? What should the test of some nonexistent code look like? Look at the requirement, and ask yourself,“If I had to use code that solved this problem, how would I want to use it?”Write the method call that you think would be the perfect way to get the result. Provide it with arguments that represent the input needed to solve the problem, and write a test that asserts that the correct output is given. Now you run the test.Why should you run the test (because we both know it’s going to fail)? In fact, depending on how you chose to specify the API, it might not even com- pile properly. But even a failing test has value: It demonstrates that there’s something the app needs to do, but doesn’t yet do. It also specifies what method it would be good to use to satisfy the requirement. Not only have you described the requirement in a repeat- able, executable form, but you’ve designed the code you’re going to write to meet that requirement. Rather than writing the code to solve the problem and then working out how to call it, you’ve decided what you want to call, making it more likely that you’ll come up with a consistent and easy-to-use API. Incidentally, you’ve also demonstrated 1. 
that the software doesn't yet do what you need it to. When you're starting a project from scratch, this will not be a surprise. When working on a legacy application[2] with a complicated codebase, you may find that it's hard to work out what the software is capable of, based on visual inspection of the source code. In this situation, you might write a test expressing a feature you want to add, only to discover that the code already supports the feature and that the test passes. You can now move on and add a test for the next feature you need, until you find the limit of the legacy code's capabilities and your tests begin to fail.

1. Research undertaken by Microsoft in conjunction with IBM (en-us/groups/ese/nagappan_tdd.pdf) found that although teams that were new to TDD were around 15–35% slower at implementing features than when "cowboy coding," the products they created contained 40–90% fewer bugs—bugs that needed to be fixed after the project "completed" but before they could ship.

Practitioners of test-driven development refer to this part of the process—writing a failing test that encapsulates the desired behavior of the code you have not yet written—as the red stage, or red bar stage. The reason is that popular IDEs including Visual Studio and Eclipse (though not, as you shall see shortly, the newest version of Xcode) display a large red bar at the top of their unit-testing view when any test fails. The red bar is an obvious visual indicator that your code does not yet do everything you need.

Much friendlier than the angry-looking red bar is the peaceful serenity of the green bar, and this is now your goal in the second stage of test-driven development. Write the code to satisfy the failing test or tests that you have just written. If that means adding a new class or method, go ahead: You've identified that this API addition makes sense as part of the app's design.
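For a concrete sketch of the red stage, here is what a first test for a greeting generator might look like. The Greeter class and its -greeter: method do not exist yet (that is the point), so this test will not even compile until you declare them; the test class itself follows OCUnit's conventions, and all the names are hypothetical.

```objc
#import <SenTestingKit/SenTestingKit.h>
#import "Greeter.h" // hypothetical class; creating it is the work the test demands

@interface GreeterTests : SenTestCase
@end

@implementation GreeterTests

- (void)testGreetingBob
{
    // Design the API by using it: pass the input, assert the desired output.
    Greeter *greeter = [[Greeter alloc] init];
    STAssertEqualObjects([greeter greeter: @"Bob"], @"Hello, Bob",
                         @"Greeting \"Bob\" should produce \"Hello, Bob\"");
}

@end
```

The compiler errors this test provokes are themselves useful: they tell you exactly which class and method you have committed to writing next.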
At this stage, it doesn’t really matter how you write the code that implements your new API, as long as it passes the test.The code needs to be Just Barely Good Enough™ ptg7913098 to provide the needed functionality.Anything “better” than that doesn’t add to your app’s capabilities and is effort wasted on code that won’t be used. For example, if you have a single test for a greeting generator, that it should return “Hello, Bob” when the name “Bob” is passed to it, then this is perfectly sufficient: - (NSString )greeter: (NSString )name return "Hello, Bob"; Doing anything more complicated right now might be wasteful. Sure, you might need a more general method later; on the other hand, you might not. Until you write another test demonstrating the need for this method to return different strings (for example, returning “Hello,Tim” when the parameter is “Tim”), it does everything that you know it needs to. Congratulations, you have a green bar (assuming you didn’t break the result of any other test when you wrote the code for this one); your app is demonstrably one quantum of awesomeness better than it was. You might still have concerns about the code you’ve just written. Perhaps there’s a different algorithm that would be more efficient yet yield the same results, or maybe 2. I use the same definition of “legacy code” as Michael Feathers in Working Effectively with Legacy Code (Prentice Hall, 2004). Legacy code is anything you’ve inherited—including from yourself— that isn’t yet described by a comprehensive and up-to-date set of unit tests. In Chapter 11 you will find ways to incorporate unit testing into such projects. Red, Green, Refactor 17 your race to get to the green bar looks like more of a kludge than you’re comfortable with. Pasting code from elsewhere in the app in order to pass the test—or even pasting part of the test into the implementation method—is an example of a “bad code smell” that freshly green apps sometimes give off. 
Code smell is another term invented by Kent Beck and popularized in Extreme Programming. It refers to code that may be OK, but there's definitely something about it that doesn't seem right.[3] Now you have a chance to "refactor" the application—to clean it up by changing the implementation without affecting the app's behavior. Because you've written tests of the code's functionality, you'll be able to see if you do break something: Tests will start to fail. Of course, you can't use the tests to find out if you accidentally add some new unexpected behavior that doesn't affect anything else, but this should be a relatively harmless side effect because nothing needs to use that behavior. If it did, there'd be a test for it.

However, you may not need to refactor as soon as the tests pass. The main reason for doing it right away is that the details of the new behavior will still be fresh in your mind, so if you do want to change anything you won't have to familiarize yourself with how the code currently works. But you might be happy with the code right now. That's fine; leave it as it is. If you decide later that it needs refactoring, the tests will still be there and can still support that refactoring work. Remember, the worst thing you can do is waste time on refactoring code that's fine as it is (see "Ya Ain't Gonna Need It" later in the chapter).

So now you've gone through the three stages of test-driven development: You've written a failing test (red), got the test to pass (green), and cleaned up the code without changing what it does (refactor). Your app is that little bit more valuable than it was before you started. The microfeature you just added may not be enough of an improvement to justify releasing the update to your customers, but your code should certainly be of release candidate quality, because you can demonstrate that you've added something new that works properly, and that you haven't broken anything that already worked.
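To continue with a greeting example: suppose a second test ("Tim" should produce "Hello, Tim") has forced you past a hard-coded return value, and your quickest route to the green bar left duplication you don't like. The refactoring step might replace it with a format string: the observable behavior, as the tests define it, stays the same, but the implementation is cleaner. The method shown here is a hypothetical sketch.

```objc
// Refactored implementation: the tests still pass, but the duplicated
// string literals from the quick green-bar version are gone.
- (NSString *)greeter: (NSString *)name
{
    return [NSString stringWithFormat: @"Hello, %@", name];
}
```

Running the whole suite immediately after a change like this is what tells you the refactoring preserved behavior.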
Remember from the previous chapter that there's still additional testing to be done. There could be integration or usability problems, or you and the tester might disagree on what needed to be added. You can be confident that if your tests sufficiently describe the range of inputs your app can expect, the likelihood of a logic bug in the code you've just written will be low.

Having gone from red, through green, to refactoring, it's time to go back to red. In other words, it's time to add the next microfeature—the next small requirement that represents an improvement to your app. Test-driven development naturally supports iterative software engineering, because each small part of the app's code is developed to production quality before work on the next part is started. Rather than having a dozen features that have all been started but are all incomplete and unusable, you should either have a set of completely working use cases, or one failing case that you're currently working on. However, if there's more than one developer on your team, you will each be working on a different use case, but each of you will have one problem to solve at a time and a clear idea of when that solution has been completed.

3. An extensive list of possible code smells was written by Jeff Atwood and published on his Coding Horror blog.

Designing a Test-Driven App

Having learned about the red-green-refactor technique, you may be tempted to dive straight into writing the first feature of your app in this test-driven fashion, then incrementally adding the subsequent features in the same way. The result would be an app whose architecture and design grow piecemeal as small components aggregate and stick themselves to the existing code.

Software engineers can learn a lot by watching physical engineering take place. Both disciplines aim to build something beautiful and useful out of limited resources and in a finite amount of space.
Only one takes the approach that you can sometimes get away with putting the walls in first and then hanging the scaffolding off of them.

An example of an aggregate being used in real-world engineering is concrete. Find a building site and look at the concrete; it looks like a uniformly mucky mush. It's also slightly caustic: Touch it while you're working with it and you'll get burned. Using test-driven development without an overall plan of the app's design will lead to an app that shares many of the characteristics of concrete. There'll be no discernible large-scale structure, so it'll be hard to see how each new feature connects to the others. They will end up as separate chunks, close together but unrelated like the pebbles in a construction aggregate. You will find it hard to identify commonality and chances to share code when there's no clear organization to the application.

So it's best to head into a test-driven project with at least a broad idea of the application's overall structure. You don't need a detailed model going all the way down to lists of the classes and methods that will be implemented. That fine-grained design does come out of the tests. What you will need is an idea of what the features are and how they fit together: where they will make use of common information or code, how they communicate with each other, and what they will need to do so. Again, Extreme Programming has a name for this concept: It's called the System Metaphor. More generally in object-oriented programming it's known as the Domain Model: the view of what users are trying to do, with what services, and to what objects, in your application.

Armed with this information, you can design your tests so that they test that your app conforms to the architecture plan, in addition to testing the behavior. If two components should share information via a particular class, you can test for that.
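The first of those architectural checks might be sketched as follows; all the class names are hypothetical, standing in for whatever your domain model calls them. The plan says two features share one settings object, so the test asserts exactly that.

```objc
- (void)testRecipeListAndTweetComposerShareOneSettingsStore
{
    // The architecture plan: both features read Twitter settings from a
    // single shared store rather than each keeping a private copy.
    RecipeListController *list = [[RecipeListController alloc] init];
    TweetComposer *composer = [[TweetComposer alloc] init];
    STAssertTrue([list settingsStore] == [composer settingsStore],
                 @"Both features should use the same settings store instance");
}
```

A test like this documents a design decision, so when a later change accidentally violates the plan, the failure points straight at the architectural drift.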
If two features can make use of the same method, you can use the tests to ensure that this happens, too. When you come to the refactoring stage, you can use the high-level plan to direct the tidying up, too.

More on Refactoring

How does one refactor code? It's a big question—indeed it's probably open-ended, because I might be happy with code that you abhor, and vice versa. The only workable description is something like this:

- Code needs refactoring if it does what you need, but you don't like it. That means you may not like the look of it, or the way it works, or how it's organized. Sometimes there isn't a clear signal for refactoring; the code just "smells" bad.
- You have finished refactoring when the code no longer looks or smells bad.
- The refactoring process turns bad code into code that isn't bad.

That description is sufficiently vague that there's no recipe or process you can follow to get refactored code. You might find code easier to read and understand if it follows a commonly used object-oriented design pattern—a generic blueprint for code that can be applied in numerous situations. Patterns found in the Cocoa frameworks and of general use in Objective-C software are described by Buck and Yacktman in Cocoa Design Patterns (Addison-Wesley, 2009). The canonical reference for language-agnostic design patterns is Design Patterns: Elements of Reusable Object-Oriented Software by Gamma, Helm, Johnson, and Vlissides (Addison-Wesley, 1995), commonly known as the "Gang of Four" book.

Some specific transformations of code are frequently employed in refactoring, because they come up in a variety of situations where code could be made cleaner.
For example, if two classes implement the same method, you could create a common superclass and push the method implementation into that class. You could create a protocol to describe a method that many classes must provide. The book Refactoring: Improving the Design of Existing Code by Martin Fowler (Addison-Wesley, 1999) contains a big catalog of such transformations, though the example code is all in Java.

Ya Ain't Gonna Need It

One feature of test-driven development that I've mentioned in passing a few times deserves calling out: If you write tests that describe what's needed of your app code, and you only write code that passes those tests, you will never write any code that you don't need. Okay, so maybe the requirements will change in the future, and the feature you're working on right now will become obsolete. But right now, that feature is needed. The code you're writing supports that feature and does nothing else.

Have you ever found that you or a co-worker has written a very nice class or framework that deals with a problem in a very generic way, when you only need to handle a restricted range of cases in your product? I've seen this happen on a number of projects; often the generic code is spun out into its own project on GitHub or Google Code as a "service to the community," to try to justify the effort that was spent on developing unneeded code. But then the project takes on a life of its own, as third-party users discover that the library isn't actually so good at handling the cases that weren't needed by the original developers, and start filing bug reports and enhancement requests. Soon enough, the application developers realize that they've become framework developers as they spend more and more effort on supporting a generic framework, all the while still using a tiny subset of its capabilities in their own code.
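The first transformation mentioned above, pulling a duplicated method up into a common superclass, might look like the following sketch in Objective-C (the class and method names are hypothetical):

```objc
// Before: RecipeViewController and ShoppingListViewController each had an
// identical -formattedTitle implementation. After pulling the method up,
// there is one copy, and the existing tests for both classes should still pass.

@interface TitledViewController : UIViewController
- (NSString *)formattedTitle;
@end

@implementation TitledViewController
- (NSString *)formattedTitle
{
    // The single shared implementation; subclasses inherit it unchanged.
    return [[self title] uppercaseString];
}
@end

@interface RecipeViewController : TitledViewController
@end

@implementation RecipeViewController
@end

@interface ShoppingListViewController : TitledViewController
@end

@implementation ShoppingListViewController
@end
```

Because the tests describe the behavior of both controllers, they act as the safety net for the transformation: if pulling the method up changed anything observable, a test would fail.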
Such gold plating typically comes about when applications are written from the inside out. You know that you'll need to deal with URL requests, for example, so you write a class that can handle URL requests. However, you don't yet know how your application will use URL requests, so you write the class so that it can deal with any case you can think of. When you come to write the part of the application that actually needs to use URL requests, you find it uses only a subset of the cases handled by the class. Perhaps the application makes only GET requests, and the effort you put into handling POST requests in the handler class is wasted. But the POST-handling code is still there, making it harder to read and understand the parts of the class that you actually use.

Test-driven development encourages building applications from the outside in. You know that the user needs to do a certain task, so you write a test that asserts this task can be done. That requires getting some data from a network service, so you write a test that asserts the data can be fetched. That requires use of a URL request, so you write a test for that use of a URL request. When you implement the code that passes the test, you need to code only for the use that you've identified. There's no generic handler class, because there's no demand for it.

Testing Library Code

In the case of URL request handlers, there's an even easier way to write less code: Find some code somebody else has already written that does it, and use that instead. But should you exhaustively test that library code before you integrate it into your app? No. Remember that unit tests are only one of a number of tools at your disposal. Unit tests—particularly used in test-driven development—are great for testing your own code, including testing that your classes interact with the library code correctly. Use integration tests to find out whether the application works.
If it doesn’t, but you know (thanks to your unit tests) that you’re using the library in the expected way, you know that there’s a bug in the library. You could then write a unit test to exercise the library bug, as documentation of the code’s failure to submit as a bug report. Another way in which unit tests can help with using third- party code is to explore the code’s API. You can write unit tests that express how you expect to use somebody else’s class, and run them to discover whether your expectations were correct. Extreme programmers have an acronym to describe gold-plated generic framework ™ classes:YAGNI, short for Ya Ain’t Gonna Need It . Some people surely do need to write generic classes; indeed,Apple’s Foundation framework is just a collection of general- purpose objects. However, most of us are writing iOS applications, not iOS itself, and applications have a much smaller and more coherent set of use cases that can be satisfied without developing new generic frameworks. Besides which, you can be sure that Apple Testing Before, During, and After Coding 21 studies the demand and potential application of any new class or method before adding it to Foundation, which certainly isn’t an academic exercise in providing a functionally complete API. It saves time to avoid writing code when YAGNI—you would basically be writing code that you don’t use.Worse than that, unnecessary code might be exploitable by an attacker, who finds a way to get your app to run the code. Or you might decide to use it yourself at some future point in the app’s development, forgetting that it’s untested code you haven’t used since it was written. If at this point you find a bug in your app, you’re likely to waste time tracking it down in the new code you’ve written—of course, the bug wasn’t present before you wrote this code—not realizing that the bug actually resides in old code.The reason you haven’t discovered the bug yet is that you haven’t used this code before. 
A test-driven app should have no unused code, and no (or very little) untested code. Because you can be confident that all the code works, you should experience few problems with integrating an existing class or method into a new feature, and you should have no code in the application whose only purpose is to be misused or to cause bugs to manifest themselves. All the code is pulling its weight in providing a valuable service to your users. If you find yourself thinking during the refactoring stage that there are some changes you could make to have the code support more conditions, stop. Why aren't those conditions tested for in the test cases? Because those conditions don't arise in the app. So don't waste time adding the support: Ya Ain't Gonna Need It.

Testing Before, During, and After Coding

If you're following the red-green-refactor approach to test-driven development, you're running tests before writing any code, to verify that the test fails. This tells you that the behavior specified by the test still needs to be implemented, and you may get hints from the compiler about what needs to be done to pass the test—especially in those cases when the test won't even build or run correctly because the necessary application code is missing. You're also running tests while you're building the code, to ensure you don't break any existing behavior while working toward that green bar. And you're testing after you've built the functionality, in the refactoring stage, to ensure that you don't break anything while you're cleaning up the code. In a fine-grained way this reflects the suggestion from Chapter 1 that software should be tested at every stage in the process.

Indeed, it can be a good idea to have tests running automatically during the development life cycle so that even if you forget to run the tests yourself, it won't be long before they get run for you.
Some developers have their tests run every time they build, although when it takes more than a few seconds to run all the tests this can get in the way. Some people use continuous integration servers or buildbots (discussed in Chapter 4, "Tools for Testing") to run the tests in the background or even on a different computer whenever they check source into version control or push to the master repository. In this situation it doesn't matter if the tests take minutes to run; you can still work in
