Lecture Notes on Software Testing

Dr. Shane Matts, United States, Teacher
Published Date: 23-07-2017
Software Engineering Unit-V

UNIT V: TESTING & MAINTENANCE

Syllabus: Testing strategies - Testing tactics - Strategic issues for conventional and object-oriented software - Verification and validation - Validation testing - System testing - Art of debugging - Software evolution - Critical systems validation - Metrics for process, project and product - Quality management - Process improvement - Risk management - Configuration management - Software cost estimation

TESTING STRATEGIES
• A strategy for software testing integrates the design of software test cases into a well-planned series of steps that result in the successful development of the software.
• The strategy provides a road map that describes the steps to be taken, when they are to be taken, and how much effort, time, and resources will be required.
• The strategy incorporates test planning, test case design, test execution, and test result collection and evaluation.
• The strategy provides guidance for the practitioner and a set of milestones for the manager.
• Because of time pressures, progress must be measurable and problems must surface as early as possible.

General Characteristics of Strategic Testing
• To perform effective testing, a software team should conduct effective formal technical reviews.
• Testing begins at the component level and works outward toward the integration of the entire computer-based system.
• Different testing techniques are appropriate at different points in time.
• Testing is conducted by the developer of the software and (for large projects) by an independent test group.
• Testing and debugging are different activities, but debugging must be accommodated in any testing strategy.

Mr. John Blesswin Page 1

TESTING TACTICS & STRATEGIES

Levels of Software Testing
There are different levels in the process of testing; each level applies different methodologies to the software under test. The main levels of software testing are:
• Functional testing
• Non-functional testing

Functional Testing
This is a type of black-box testing based on the specifications of the software to be tested. The application is tested by providing input, and the results are examined to confirm that they conform to the intended functionality. Functional testing is conducted on a complete, integrated system to evaluate the system's compliance with its specified requirements. Five steps are involved when testing an application for functionality:

Step I: Determine the functionality that the intended application is meant to perform.
Step II: Create test data based on the specifications of the application.
Step III: Determine the expected output based on the test data and the specifications of the application.
Step IV: Write test scenarios and execute the test cases.
Step V: Compare the actual and expected results based on the executed test cases.

a. Unit Testing
This type of testing is performed by the developers before the build is handed over to the testing team to formally execute the test cases. Unit testing is performed by the respective developers on the individual units of source code in their assigned areas. The developers use test data that is separate from the test data of the quality assurance team. The goal of unit testing is to isolate each part of the program and show that the individual parts are correct in terms of requirements and functionality.

Limitations of Unit Testing
Testing cannot catch each and every bug in an application; it is impossible to evaluate every execution path in any non-trivial software application. The same is true of unit testing: there is a limit to the number of scenarios and test data sets that a developer can use to verify the source code, and once those options are exhausted there is no choice but to stop unit testing.
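The unit-testing idea above can be sketched in a few lines of Python. The function `apply_discount` is a hypothetical unit under test, invented for illustration; each test isolates one path through it using the developer's own test data.

```python
# A minimal unit-test sketch; apply_discount is a hypothetical unit under test.
def apply_discount(price, percent):
    """Return price reduced by percent; reject invalid inputs."""
    if price < 0 or not (0 <= percent <= 100):
        raise ValueError("invalid price or discount")
    return round(price * (1 - percent / 100), 2)

# Each test exercises one path through the unit in isolation.
def test_normal_path():
    assert apply_discount(200.0, 10) == 180.0

def test_boundary_values():
    assert apply_discount(200.0, 0) == 200.0    # no discount
    assert apply_discount(200.0, 100) == 0.0    # full discount

def test_invalid_input_is_rejected():
    try:
        apply_discount(-5.0, 10)
    except ValueError:
        pass                                    # expected: bad input is rejected
    else:
        raise AssertionError("negative price should have been rejected")

if __name__ == "__main__":
    test_normal_path()
    test_boundary_values()
    test_invalid_input_is_rejected()
    print("all unit tests passed")
```

The limitation discussed above is visible even here: these three tests cover the obvious paths, but no finite set of tests can cover every possible input.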
b. Integration Testing
Integration testing is the testing of combined parts of an application to determine whether they function correctly together. There are two methods of integration testing: bottom-up integration testing and top-down integration testing. In a comprehensive software development environment, bottom-up testing is usually done first, followed by top-down testing.

Bottom-up integration: This testing begins with unit testing, followed by tests of progressively higher-level combinations of units, called modules or builds.

Top-down integration: In this testing, the highest-level modules are tested first, and progressively lower-level modules are tested after that.

c. System Testing
This is the next level of testing; it tests the system as a whole. Once all the components are integrated, the application is tested rigorously as a whole to verify that it meets the specified quality standards. This type of testing is performed by a specialized testing team. System testing is important for the following reasons:
• System testing is the first level in the software development life cycle at which the application is tested as a whole.
• The application is tested thoroughly to verify that it meets its functional and technical specifications.
• The application is tested in an environment that is very close to the production environment where it will be deployed.
• System testing enables us to test, verify, and validate both the business requirements and the application's architecture.

d. Regression Testing
Whenever a change is made to a software application, it is quite possible that other areas within the application have been affected by the change. Regression testing verifies that a fixed bug has not resulted in a violation of other functionality or business rules.
The intent of regression testing is to ensure that a change, such as a bug fix, has not caused another fault to surface in the application. Regression testing is important for the following reasons:
• It minimizes gaps in testing when an application with changes has to be tested.
• It tests the new changes to verify that they have not affected any other area of the application.
• It mitigates risk when performed on the application.
• Test coverage is increased without compromising timelines.
• It increases speed to market for the product.

e. Acceptance Testing
This is arguably the most important type of testing, as it is conducted by the quality assurance team, which gauges whether the application meets the intended specifications and satisfies the client's requirements. The QA team has a set of pre-written scenarios and test cases that are used to test the application. More ideas will be shared about the application, and more tests can be performed on it to gauge its accuracy against the reasons the project was initiated. Acceptance tests are intended not only to point out simple spelling mistakes, cosmetic errors, or interface gaps, but also to point out any bugs in the application that would result in system crashes or major errors. By performing acceptance tests, the testing team deduces how the application will perform in production. There may also be legal and contractual requirements for acceptance of the system.

f. Alpha Testing
This is the first stage of testing and is performed within the teams (developer and QA teams). Unit testing, integration testing, and system testing, when combined, are known as alpha testing.
During this phase, the following will be tested in the application:
• Spelling mistakes
• Broken links
• Unclear directions
• The application is tested on machines with the lowest specifications to test loading times and any latency problems.

g. Beta Testing
This test is performed after alpha testing has been completed successfully. In beta testing, a sample of the intended audience tests the application. Beta testing is also known as pre-release testing. Beta versions of software are ideally distributed to a wide audience on the Web, partly to give the program a "real-world" test and partly to provide a preview of the next release. In this phase:
• Users install and run the application and send their feedback to the project team.
• Typographical errors, confusing application flow, and even crashes are reported.
• Using this feedback, the project team can fix the problems before releasing the software to the actual users.
• The more issues you fix that solve real user problems, the higher the quality of the application.
• Releasing a higher-quality application to the general public increases customer satisfaction.

Non-Functional Testing
This testing examines the application's non-functional attributes. Non-functional testing involves testing the software against requirements that are non-functional in nature but nevertheless important, such as performance, security, and user interface. Some of the important and commonly used non-functional testing types are as follows:

a. Performance Testing
Performance testing is mostly used to identify bottlenecks or performance issues rather than to find bugs in the software. Different factors can contribute to lowering the performance of software:
• Network delay
• Client-side processing
• Database transaction processing
• Load balancing between servers
• Data rendering

Performance testing is considered an important and mandatory testing type with respect to the following aspects:
• Speed (i.e., response time, data rendering and access)
• Capacity
• Stability
• Scalability
Performance testing can be a qualitative or quantitative activity and can be divided into subtypes such as load testing and stress testing.

b. Load Testing
Load testing is the process of testing the behavior of software by applying maximum load in terms of accessing the software and manipulating large input data. It can be done at both normal and peak load conditions. This type of testing identifies the maximum capacity of the software and its behavior at peak time. Most of the time, load testing is performed with the help of automated tools such as LoadRunner, AppLoader, IBM Rational Performance Tester, Apache JMeter, Silk Performer, and Visual Studio Load Test. Virtual users (VUsers) are defined in the automated testing tool, and a script is executed to perform the load testing of the software. The number of virtual users can be increased or decreased concurrently or incrementally based upon the requirements.

c. Stress Testing
Stress testing examines the behavior of software under abnormal conditions: taking away resources and applying load beyond the actual load limit. The main intent is to test the software by applying load and withdrawing the resources it uses, in order to identify its breaking point. Stress testing can be performed through different scenarios, such as:
• Shutting down or restarting network ports randomly.
• Turning the database on or off.
• Running other processes that consume resources such as CPU, memory, and server capacity.

d. Usability Testing
Usability testing covers different concepts and definitions of usability from the software point of view.
It is a black-box technique used to identify errors and improvements in the software by observing users during their usage and operation of it. Usability can be defined in terms of five factors: efficiency of use, learnability, memorability, errors/safety, and satisfaction. If a product possesses these factors, its usability is good and the system is usable. Usability is a quality requirement that can be measured as the outcome of interactions with a computer system. This requirement is fulfilled, and the end user is satisfied, if the intended goals are achieved effectively with the use of proper resources.

e. Security Testing
Security testing involves testing the software in order to identify any flaws and gaps from a security and vulnerability point of view. Security testing should cover the following main aspects:
• Confidentiality
• Integrity
• Authentication
• Availability
• Authorization
• Non-repudiation
• SQL injection attacks
• Injection flaws
• Session management issues

f. Portability Testing
Portability testing checks that the software is reusable and can be moved from one environment to another. The following strategies can be used for portability testing:
• Transferring installed software from one computer to another.
• Building an executable (.exe) to run the software on different platforms.
Portability testing can be considered a sub-part of system testing, as it covers the overall testing of software with respect to its usage in different environments. Computer hardware, operating systems, and browsers are the major focus of portability testing. The following are some preconditions for portability testing:
• The software should be designed and coded keeping the portability requirements in mind.
• Unit testing has been performed on the associated components.
• Integration testing has been performed.
• The test environment has been established.

STRATEGIC ISSUES FOR CONVENTIONAL SOFTWARE TESTING

Unit testing
• Exercises specific paths in a component's control structure to ensure complete coverage and maximum error detection.
• Components are then assembled and integrated.

Integration testing
• Focuses on inputs and outputs, and on how well the components fit together and work together.

Validation testing
• Provides final assurance that the software meets all functional, behavioral, and performance requirements.

System testing
• Verifies that all system elements (software, hardware, people, databases) mesh properly and that overall system function and performance are achieved.

STRATEGIC ISSUES FOR OBJECT-ORIENTED SOFTWARE TESTING
• Testing must be broadened to include the detection of errors in analysis and design models.
• Unit testing loses some of its meaning, and integration testing changes significantly.
• The same philosophy is used as in conventional software testing, but with a different approach.
• Test "in the small" and then work outward toward testing "in the large":
– Testing in the small involves class attributes and operations; the main focus is on communication and collaboration within the class.
– Testing in the large involves a series of regression tests to uncover errors due to communication and collaboration among classes.
• Finally, the system as a whole is tested to detect errors in fulfilling requirements.
• With object-oriented software, you can no longer test a single operation in isolation, as conventional thinking would suggest.
• Class testing for object-oriented software is the equivalent of unit testing for conventional software.
– It focuses on the operations encapsulated by the class and the state behavior of the class.
• Drivers can be used:
– To test operations at the lowest level and to test whole groups of classes.
– To replace the user interface so that tests of system functionality can be conducted prior to implementation of the actual interface.
• Stubs can be used:
– In situations in which collaboration between classes is required but one or more of the collaborating classes has not yet been fully implemented.
• There are two different object-oriented testing strategies:
– Thread-based testing
• Integrates the set of classes required to respond to one input or event for the system.
• Each thread is integrated and tested individually.
• Regression testing is applied to ensure that no side effects occur.
– Use-based testing
• First tests the independent classes, which use very few, if any, server classes.
• Then the next layer of classes, called dependent classes, is integrated.
• This layer-by-layer testing of dependent classes continues until the entire system is constructed.

VERIFICATION AND VALIDATION
• Software testing is part of a broader group of activities called verification and validation (V&V) that fall under software quality assurance.
• Verification (Are the algorithms coded correctly?)
– The set of activities that ensure that software correctly implements a specific function or algorithm.
• Validation (Does it meet user requirements?)
– The set of activities that ensure that the software that has been built is traceable to customer requirements.

VALIDATION TESTING
• Validation testing follows integration testing.
• At this level, the distinction between conventional and object-oriented software disappears.
• It focuses on user-visible actions and user-recognizable output from the system.
• It demonstrates conformity with requirements.
• It is designed to ensure that:
– All functional requirements are satisfied.
– All behavioral characteristics are achieved.
– All performance requirements are attained.
– Documentation is correct.
– Usability and other requirements are met (e.g., transportability, compatibility, error recovery, maintainability).
• After each validation test, either:
– The function or performance characteristic conforms to specification and is accepted, or
– A deviation from specification is uncovered and a deficiency list is created.
• A configuration review or audit ensures that all elements of the software configuration have been properly developed and cataloged and have the detail necessary for entering the support phase of the software life cycle.

Alpha and Beta Testing
• Alpha testing
– Conducted at the developer's site by end users.
– The software is used in a natural setting with developers watching intently.
– Testing is conducted in a controlled environment.
• Beta testing
– Conducted at end-user sites.
– The developer is generally not present.
– It serves as a live application of the software in an environment that cannot be controlled by the developer.
– The end users record all problems that are encountered and report them to the developers at regular intervals.
• After beta testing is complete, software engineers make modifications and prepare for release of the software product to the entire customer base.
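The traceability idea above can be made concrete by tying each validation check to the customer requirement it demonstrates and collecting failures into a deficiency list. Everything here is invented for illustration: the requirement IDs, the `login` and `format_total` stand-ins, and the checks themselves are hypothetical, not a real framework.

```python
# Stand-ins for the real application's user-visible behaviour (hypothetical).
def login(user, password):
    return bool(user) and bool(password)

def format_total(value):
    return f"{value:.2f}"

# Requirements register (invented IDs): what each validation check must demonstrate.
REQUIREMENTS = {
    "REQ-1": "login rejects an empty password",
    "REQ-2": "report totals show two decimal places",
}

def run_validation():
    """Run one check per requirement; return the deficiency list."""
    results = {
        "REQ-1": login("alice", "") is False,
        "REQ-2": format_total(3.5) == "3.50",
    }
    # Requirements whose check failed become entries on the deficiency list.
    return [req for req, passed in results.items() if not passed]

print("deficiency list:", run_validation())   # [] means every requirement conformed
```

An empty deficiency list corresponds to the first outcome described above (conformance and acceptance); a non-empty list corresponds to the second (deviations recorded for correction).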
SYSTEM TESTING
• Recovery testing
– Tests recovery from system faults.
– Forces the software to fail in a variety of ways and verifies that recovery is properly performed.
– Tests re-initialization, checkpointing mechanisms, data recovery, and restart for correctness.
• Security testing
– Verifies that the protection mechanisms built into a system will, in fact, protect it from improper access.
• Stress testing
– Executes a system in a manner that demands resources in abnormal quantity, frequency, or volume.
• Performance testing
– Tests the run-time performance of software within the context of an integrated system.
– Is often coupled with stress testing and usually requires both hardware and software instrumentation.
– Can uncover situations that lead to degradation and possible system failure.

ART OF DEBUGGING

Debugging Process
• Debugging occurs as a consequence of successful testing.
• It is still very much an art rather than a science.
• Good debugging ability may be an innate human trait; large variances in debugging ability exist.
• The debugging process begins with the execution of a test case.
• Results are assessed, and a difference between expected and actual performance is encountered.
• This difference is a symptom of an underlying cause that lies hidden.
• The debugging process attempts to match symptom with cause, thereby leading to error correction.

Why is Debugging so Difficult?
• The symptom and the cause may be geographically remote.
• The symptom may disappear (temporarily) when another error is corrected.
• The symptom may actually be caused by non-errors (e.g., round-off inaccuracies).
• It may be difficult to accurately reproduce input conditions, such as asynchronous real-time information.

Debugging Strategies
• The objective of debugging is to find and correct the cause of a software error.
• Bugs are found by a combination of systematic evaluation, intuition, and luck.
• Debugging methods and tools are not a substitute for careful evaluation based on a complete design model and clear source code.
• There are three main debugging strategies:

1. Brute Force
• The most commonly used and least efficient method; used when all else fails.
• Involves the use of memory dumps, run-time traces, and output statements.
• Often leads to wasted effort and time.

2. Backtracking
• Can be used successfully in small programs.
• The method starts at the location where a symptom has been uncovered.
• The source code is then traced backward (manually) until the location of the cause is found.
• In large programs, the number of potential backward paths may become unmanageably large.

3. Cause Elimination
• Involves the use of induction or deduction and introduces the concept of binary partitioning.
– Induction (specific to general): prove that a specific starting value is true, then prove the general case.
– Deduction (general to specific): show that a specific conclusion follows from a set of general premises.
• Data related to the error occurrence are organized to isolate potential causes.
• A cause hypothesis is devised, and the data are used to prove or disprove the hypothesis.
• Alternatively, a list of all possible causes is developed, and tests are conducted to eliminate each cause.
• If initial tests indicate that a particular cause hypothesis shows promise, the data are refined in an attempt to isolate the bug.

SOFTWARE EVOLUTION: CRITICAL SYSTEMS VALIDATION

Objectives
• To explain how system reliability can be measured and how reliability growth models can be used for reliability prediction.
• To describe safety arguments and how they are used.
• To discuss the problems of safety assurance.
• To introduce safety cases and how they are used in safety validation.

Validation of critical systems
Verification and validation of critical systems involve additional validation processes and analyses beyond those used for non-critical systems:
• The costs and consequences of failure are high, so it is cheaper to find and remove faults than to pay for system failure.
• You may have to make a formal case to customers or to a regulator that the system meets its dependability requirements. This dependability case may require specific V&V activities to be carried out.

Validation costs
• The validation costs for critical systems are usually significantly higher than for non-critical systems.
• Normally, V&V costs take up more than 50% of the total system development costs.

Reliability validation
• Reliability validation involves exercising the program to assess whether or not it has reached the required level of reliability.
• This cannot normally be included as part of a normal defect-testing process, because the data used for defect testing is (usually) atypical of actual usage data.
• Reliability measurement therefore requires a specially designed data set that replicates the pattern of inputs to be processed by the system.

Statistical testing
• Tests software for reliability rather than fault detection.
• Measuring the number of errors allows the reliability of the software to be predicted. Note that, for statistical reasons, more errors than are allowed for in the reliability specification must be induced.
• An acceptable level of reliability should be specified, and the software tested and amended until that level of reliability is reached.

METRICS FOR PROCESS, PROJECT AND PRODUCT
Software process and project metrics are quantitative measures that enable software engineers to gain insight into the efficiency of the software process and of the projects conducted using that process framework. In software project management, we are primarily concerned with productivity and quality metrics. There are four reasons for measuring software processes, products, and resources: to characterize, to evaluate, to predict, and to improve.

Process and Project Metrics
• Metrics should be collected so that process and product indicators can be ascertained.
• Process metrics provide indicators that lead to long-term process improvement.
• Project metrics enable a project manager to:
o Assess the status of an ongoing project.
o Track potential risks.
o Uncover problem areas before they become critical.
o Adjust workflow or tasks.
o Evaluate the project team's ability to control the quality of software work products.

Process Metrics
• Private process metrics (e.g., defect rates by individual or module) are known only to the individual or team concerned.
• Public process metrics enable organizations to make strategic changes to improve the software process.
• Metrics should not be used to evaluate the performance of individuals.
• Statistical software process improvement helps an organization to discover where it is strong and where it is weak.

Project Metrics
• A software team can use software project metrics to adapt the project workflow and technical activities.
• Project metrics are used to avoid development schedule delays, to mitigate potential risks, and to assess product quality on an ongoing basis.
• Every project should measure its inputs (resources), outputs (deliverables), and results (effectiveness of deliverables).
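As a small sketch of project metrics in use, the snippet below derives two indicators (schedule progress and a defect trend) from weekly project data. The numbers and the metric names are invented for illustration.

```python
# Hypothetical weekly project data (invented numbers):
# (planned tasks, completed tasks, defects found) per week.
WEEKLY = [
    (10, 9, 4),
    (10, 10, 6),
    (10, 7, 9),
]

def schedule_ratio(data):
    """Completed work as a fraction of planned work; 1.0 means on schedule."""
    planned = sum(p for p, c, d in data)
    completed = sum(c for p, c, d in data)
    return completed / planned

def defect_trend(data):
    """Defects found per week; a rising trend flags a problem area early."""
    return [d for p, c, d in data]

print(f"schedule ratio: {schedule_ratio(WEEKLY):.2f}")  # 26/30 -> 0.87
print(f"defect trend: {defect_trend(WEEKLY)}")          # [4, 6, 9] is rising: investigate
```

This mirrors the bullets above: the schedule ratio tracks status against the plan, and the rising defect trend uncovers a problem area before it becomes critical.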
Size-Oriented Metrics
• Derived by normalizing (dividing) any direct measure (e.g., defects or human effort) associated with the product or project by LOC (lines of code).
• Size-oriented metrics are widely used, but their validity and applicability are widely debated.

Function-Oriented Metrics
• Function points are computed from direct measures of the information domain of a business software application and an assessment of its complexity.
• Once computed, function points are used like LOC to normalize measures of software productivity, quality, and other attributes.
• The relationship between LOC and function points depends on the language used to implement the software.

Reconciling LOC and FP Metrics
• The relationship between lines of code and function points depends upon the programming language that is used to implement the software and the quality of the design.
• Function points and LOC-based metrics have been found to be relatively accurate predictors of software development effort and cost.
• To use LOC and FP for estimation, a historical baseline of information must be established.

Object-Oriented Metrics
• Number of scenario scripts (NSS).
• Number of key classes (NKC).
• Number of support classes (e.g., UI classes, database access classes, computation classes).
• Average number of support classes per key class.
• Number of subsystems (NSUB).

Use Case-Oriented Metrics
• Use cases describe (indirectly) user-visible functions and features in a language-independent manner.
• The number of use cases is directly proportional to the LOC size of the application and to the number of test cases needed.
• However, use cases do not come in standard sizes, so their use as a normalization measure is suspect.
• Use case points have been suggested as a mechanism for estimating effort.
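The language dependence of LOC versus FP normalization can be shown with a short calculation. The project data below is invented for illustration: two hypothetical projects of similar functional size deliver similar defects per function point, while defects per KLOC differ, because the LOC-per-FP expansion ratio varies with the language.

```python
# Hypothetical project data (invented numbers):
# language -> (delivered LOC, function points, defects found).
PROJECTS = {
    "Java":   (12000, 100, 36),
    "Python": (6000,  96, 30),
}

def normalized_quality(loc, fp, defects):
    """Return (defects per KLOC, defects per FP, LOC per FP)."""
    return defects / (loc / 1000.0), defects / fp, loc / fp

for lang, (loc, fp, defects) in PROJECTS.items():
    per_kloc, per_fp, loc_per_fp = normalized_quality(loc, fp, defects)
    # FP-based quality is comparable across languages; LOC-based quality is not,
    # because LOC per FP varies with the implementation language.
    print(f"{lang}: {per_kloc:.2f} defects/KLOC, "
          f"{per_fp:.3f} defects/FP, {loc_per_fp:.1f} LOC/FP")
```

This is why a historical baseline must record the implementation language alongside LOC counts before LOC is used for estimation or cross-project comparison.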
WebApp Project Metrics
• Number of static Web pages (Nsp).
• Number of dynamic Web pages (Ndp).
• Customization index: C = Nsp / (Ndp + Nsp).
• Number of internal page links.
• Number of persistent data objects.
• Number of external systems interfaced.
• Number of static content objects.
• Number of dynamic content objects.
• Number of executable functions.

Software Quality Metrics
• Factors for assessing software quality come from three distinct points of view: product operation, product revision, and product modification.
• Software quality factors requiring measures include:
o Correctness (defects per KLOC).
o Maintainability (mean time to change).
o Integrity (threat and security).
o Usability (ease of learning, ease of use, productivity increase, user attitude).
• Defect removal efficiency (DRE) is a measure of the filtering ability of the quality assurance and control activities as they are applied throughout the process framework:
DRE = E / (E + D)
where E is the number of errors found before delivery of the work product and D is the number of defects found after delivery.
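The two formulas above translate directly into code. The counts used here (90 errors before delivery, 10 escaped defects; 30 static and 10 dynamic pages) are invented for illustration.

```python
def dre(errors_before_delivery, defects_after_delivery):
    """Defect removal efficiency: DRE = E / (E + D)."""
    e, d = errors_before_delivery, defects_after_delivery
    return e / (e + d)

def customization_index(n_static, n_dynamic):
    """WebApp customization index: C = Nsp / (Ndp + Nsp)."""
    return n_static / (n_dynamic + n_static)

# Hypothetical counts: 90 errors filtered out before delivery, 10 defects escaped.
print(dre(90, 10))                  # 0.9 -> 90% of problems were caught before delivery
print(customization_index(30, 10))  # 0.75 -> the site is mostly static pages
```

A DRE close to 1.0 indicates that the review and testing filters are working; a low DRE means too many defects are escaping to the customer.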
QUALITY MANAGEMENT
• Also called software quality assurance (SQA).
• Serves as an umbrella activity that is applied throughout the software process.
• Involves doing software development correctly rather than doing it over again.
• Reduces the amount of rework, which results in lower costs and improved time to market.
• Encompasses:
– A software quality assurance process.
– Specific quality assurance and quality control tasks (including formal technical reviews and a multi-tiered testing strategy).
– Effective software engineering practices (methods and tools).
– Control of all software work products and of the changes made to them.
– A procedure to ensure compliance with software development standards.
– Measurement and reporting mechanisms.

Quality
• Defined as a characteristic or attribute of something.
• Refers to measurable characteristics that can be compared to known standards.
• In software, it involves such measures as cyclomatic complexity, cohesion, coupling, function points, and source lines of code.
• Includes variation control:
– A software development organization should strive to minimize the variation between the predicted and the actual values for cost, schedule, and resources.
– It should make sure its testing program covers a known percentage of the software from one release to another.
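Variation control, as described above, amounts to comparing predicted and actual values per attribute. The sketch below computes the relative deviation for one release; the figures and attribute names are invented for illustration.

```python
# Hypothetical predicted vs. actual figures for one release (invented numbers).
PREDICTED = {"cost_in_k": 120.0, "schedule_weeks": 26.0}
ACTUAL    = {"cost_in_k": 138.0, "schedule_weeks": 29.0}

def variation(predicted, actual):
    """Relative deviation of actual from predicted, per attribute."""
    return {k: (actual[k] - predicted[k]) / predicted[k] for k in predicted}

for attr, dev in variation(PREDICTED, ACTUAL).items():
    print(f"{attr}: {dev:+.1%}")   # e.g. cost_in_k: +15.0%
```

Tracking these deviations release over release gives the organization a concrete measure of how well its estimation and quality processes are converging.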
