Glossary of software testing terms

Acceptance Testing: Formal testing conducted to enable a user, customer, or other authorized entity to determine whether to accept a system or component. Normally performed to validate that the software meets a set of agreed acceptance criteria.

Accessibility Testing: Verifying a product is accessible to people having disabilities (visually impaired, hard of hearing, etc.).

Actual Outcome: The actions that are produced when the object is tested under specific conditions.

Ad hoc Testing: Testing carried out in an unstructured and improvised fashion. Performed without clear expected results, ad hoc testing is most often used as a complement to other types of testing. See also Monkey Testing.

Agile Testing: Testing practice for projects using agile methodologies, treating development as the customer of testing and emphasizing a test-first design philosophy. In agile development testing is integrated throughout the lifecycle, testing the software throughout its development. See also Test Driven Development.

Alpha Testing: Simulated or actual operational testing by potential users/customers or an independent test team at the developers' site. Alpha testing is often employed for off-the-shelf software as a form of internal acceptance testing, before the software goes to beta testing.

Application Binary Interface (ABI): Describes the low-level interface between an application program and the operating system, between an application and its libraries, or between component parts of the application. An ABI differs from an application programming interface (API) in that an API defines the interface between source code and libraries, so that the same source code will compile on any system supporting that API, whereas an ABI allows compiled object code to function without changes on any system using a compatible ABI.

Application Development Lifecycle: The process flow during the various phases of the application development life cycle. The Design Phase covers the design of the application up to the point of starting development: once all of the requirements have been gathered, analyzed, verified, and a design has been produced, the programming requirements are passed to the application programmers. The programmers take the design documents (programming requirements) and then proceed with the iterative process of coding, testing, revising, and testing again; this is the Development Phase. After the programs have been tested by the programmers, they become part of a series of formal user and system tests. These are used to verify usability and functionality from a user point of view, as well as to verify the functions of the application within a larger framework. The final phase in the development life cycle is to go to production and reach a steady state. As a prerequisite to going to production, the development team needs to provide documentation, usually consisting of user training and operational procedures. The user training familiarizes the users with the new application; the operational procedures documentation enables Operations to take over responsibility for running the application on an ongoing basis. In production, changes and enhancements are handled by a group (possibly the same programming group) that performs the maintenance. At this point in the life cycle of the application, changes are tightly controlled and must be rigorously tested before being implemented into production.

Application Programming Interface (API): Provided by operating systems or libraries in response to requests for services made of it by computer programs.

Arc Testing: See branch testing.

Automated Software Quality (ASQ): The use of software tools, such as automated testing tools, to improve software quality.

Automated Software Testing: The use of software to control the execution of tests, the comparison of actual outcomes to predicted outcomes, the setting up of test preconditions, and other test control and test reporting functions, without manual intervention.
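Example: as a minimal sketch of the Automated Software Testing entry above, the snippet below uses Python's standard unittest module to set up a precondition, run a check, and compare actual to predicted outcomes without manual intervention. The add function is a hypothetical system under test, not something from this glossary.

    import unittest

    def add(a, b):
        # Hypothetical "software under test" used purely for illustration.
        return a + b

    class AddTests(unittest.TestCase):
        def setUp(self):
            # Test precondition set up automatically before each test.
            self.inputs = (2, 3)

        def test_add_returns_predicted_outcome(self):
            actual = add(*self.inputs)      # actual outcome
            self.assertEqual(actual, 5)     # compared to predicted outcome

    if __name__ == "__main__":
        unittest.main()                     # test control and reporting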
Automated Testing Tools: Software tools used by development teams to automate and streamline their testing and quality assurance process.

Backus-Naur Form (BNF): A metasyntax used to express context-free grammars: that is, a formal way to describe formal languages. BNF is widely used as a notation for the grammars of computer programming languages, instruction sets and communication protocols, as well as a notation for representing parts of natural language grammars. Many textbooks for programming language theory and/or semantics document the programming language in BNF.

Baseline: The point at which some deliverable produced during the software engineering process is put under formal change control.

Basic Block: A sequence of one or more consecutive, executable statements containing no branches.

Basis Path Testing: A white box test case design technique that fulfills the requirements of branch testing and also tests all of the independent paths that could be used to construct any arbitrary path through the computer program.

Basis Test Set: A set of test cases derived from Basis Path Testing.

Bebugging: A popular software engineering technique used to measure test coverage. Known bugs are randomly added to a program source code and the programmer is tasked to find them. The percentage of the known bugs not found gives an indication of the real bugs that remain.
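Example: to make the Bebugging estimate concrete, here is a small worked calculation with assumed illustrative numbers (not taken from the source): if testing finds the same proportion of genuine defects as it finds of the seeded ones, the remaining genuine defects can be projected.

    # Illustrative bebugging / error-seeding estimate (assumed numbers).
    seeded = 20              # known bugs deliberately added
    seeded_found = 12        # seeded bugs rediscovered by testing
    real_found = 30          # genuine bugs found by the same testing

    detection_rate = seeded_found / seeded               # 0.6 -> 60% found
    estimated_real_total = real_found / detection_rate   # ~50 genuine bugs
    estimated_remaining = estimated_real_total - real_found

    print(f"Detection rate: {detection_rate:.0%}")
    print(f"Estimated genuine bugs remaining: {estimated_remaining:.0f}")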
Behavior: The combination of input values and preconditions along with the required response for a function of a system. The full specification of a function would normally comprise one or more behaviors.

Benchmark Testing: Benchmark testing is a normal part of the application development life cycle. It is a team effort that involves both application developers and database administrators (DBAs), and should be performed against your application in order to determine current performance and improve it. If the application code has been written as efficiently as possible, additional performance gains might be realized from tuning the database and database manager configuration parameters. You can even tune application parameters to meet the requirements of the application better. You run different types of benchmark tests to discover specific kinds of information: a transaction-per-second benchmark determines the throughput capabilities of the database manager under certain limited laboratory conditions, while an application benchmark tests the same throughput capabilities under conditions that are closer to production conditions. Benchmarking is helpful in understanding how the database manager responds under varying conditions. You can create scenarios that test deadlock handling, utility performance, different methods of loading data, transaction rate characteristics as more users are added, and even the effect on the application of using a new release of the product.

Benchmark Testing Methods: Benchmark tests are based on a repeatable environment so that the same test run under the same conditions will yield results that you can legitimately compare. You might begin benchmarking by running the test application in a normal environment. As you narrow down a performance problem, you can develop specialized test cases that limit the scope of the function that you are testing. The specialized test cases need not emulate an entire application to obtain valuable information. Start with simple measurements, and increase the complexity only when necessary. Characteristics of good benchmarks or measurements include:
- Tests are repeatable.
- Each iteration of a test starts in the same system state.
- No other functions or applications are active in the system unless the scenario includes some amount of other activity going on in the system.
- The hardware and software used for benchmarking match your production environment.
For benchmarking, you create a scenario and then run the application in this scenario several times, capturing key information during each run. Capturing key information after each run is of primary importance in determining the changes that might improve performance of both the application and the database.

Beta Testing: Comes after alpha testing. Versions of the software, known as beta versions, are released to a limited audience outside of the company. The software is released to groups of people so that further testing can ensure the product has few faults or bugs. Sometimes, beta versions are made available to the open public to increase the feedback field to a maximal number of future users.

Big-Bang Testing: An inappropriate approach to integration testing in which you take the entire integrated system and test it as a unit. Can work well on small systems but is not favorable for larger systems because it may be difficult to pinpoint the exact location of a defect when a failure occurs.

Binary Portability Testing: Testing an executable application for portability across system platforms and environments, usually for conformance to an ABI specification.

Black Box Testing: Testing without knowledge of the internal workings of the item being tested. For example, when black box testing is applied to software engineering, the tester would only know the "legal" inputs and what the expected outputs should be, but not how the program actually arrives at those outputs. It is because of this that black box testing can be considered testing with respect to the specifications; no other knowledge of the program is necessary. For this reason, the tester and the programmer can be independent of one another, avoiding programmer bias toward his own work. For this testing, test groups are often used.

Advantages of Black Box Testing:
- More effective on larger units of code than glass box testing.
- Tester needs no knowledge of implementation.
- Tester and programmer are independent of each other.
- Tests are done from a user's point of view.
- Will help to expose any ambiguities or inconsistencies in the specifications.
- Test cases can be designed as soon as the specifications are complete.

Disadvantages of Black Box Testing:
- Only a small number of possible inputs can actually be tested; to test every possible input stream would take nearly forever.
- Without clear and concise specifications, test cases are hard to design.
- There may be unnecessary repetition of test inputs if the tester is not informed of test cases the programmer has already tried.
- May leave many program paths untested.
- Cannot be directed toward specific segments of code which may be very complex (and therefore more error prone).
- Most testing related research has been directed toward glass box testing.

Bottom Up Testing: An approach to integration testing where the lowest level components are tested first, then used to facilitate the testing of higher level components.

Boundary Testing: Tests focusing on the boundary or limits of the software being tested.

Boundary Value: An input value or output value which is on the boundary between equivalence classes, or an incremental distance either side of the boundary.

Boundary Value Analysis: In boundary value analysis, test cases are generated using the extremes of the input domain, e.g. maximum, minimum, just inside/outside boundaries, typical values, and error values.
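Example: a sketch of the Boundary Value Analysis entry above, assuming a hypothetical field that accepts integers from 1 to 100; the accept_quantity function and the chosen cases are illustrative, not from the source.

    # Hypothetical requirement: accept_quantity() allows integers 1..100.
    def accept_quantity(n):
        return 1 <= n <= 100

    # Boundary value analysis: extremes, values just inside/outside the
    # boundaries, a typical value, and an error value.
    cases = {
        0: False,    # just below the minimum
        1: True,     # minimum
        2: True,     # just above the minimum
        50: True,    # typical value
        99: True,    # just below the maximum
        100: True,   # maximum
        101: False,  # just above the maximum
        -1: False,   # error value
    }

    for value, expected in cases.items():
        assert accept_quantity(value) == expected, value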
Boundary Value Coverage: The percentage of boundary values which have been exercised by a test case suite.

Branch: A conditional transfer of control from any statement to any other statement in a component, or an unconditional transfer of control from any statement to any other statement in the component except the next statement, or, when a component has more than one entry point, a transfer of control to an entry point of the component.

Branch Condition Coverage: The percentage of branch condition outcomes in every decision that has been tested.

Branch Condition Combination Coverage: The percentage of combinations of all branch condition outcomes in every decision that has been tested.

Branch Condition Combination Testing: A test case design technique in which test cases are designed to execute combinations of branch condition outcomes.

Branch Condition Testing: A technique in which test cases are designed to execute branch condition outcomes.

Branch Testing: A test case design technique for a component in which test cases are designed to execute branch outcomes.

Block Matching: Automated matching logic applied to data and transaction driven websites to automatically detect blocks of related data. This enables repeating elements to be treated correctly in relation to other elements in the block without the need for special coding. See TestDrive-Gold.

Breadth Testing: A test suite that exercises the full functionality of a product but does not test features in detail.

Bug: A fault in a program which causes the program to perform in an unintended way. See fault.

Capture/Playback Tool: A test tool that records test input as it is sent to the software under test. The input cases stored can then be used to reproduce the test at a later time.

Capture/Replay Tool: See capture/playback tool.

CAST: Acronym for computer-aided software testing. Automated software testing in one or more phases of the software life-cycle. See also ASQ.

Code Coverage: A measure used in software testing. It describes the degree to which the source code of a program has been tested. It is a form of testing that looks at the code directly and as such comes under the heading of white box testing. To measure how well a program has been tested, there are a number of coverage criteria – the main ones being:
- Functional Coverage – has each function in the program been tested?
- Statement Coverage – has each line of the source code been tested?
- Condition Coverage – has each evaluation point (i.e. a true/false decision) been tested?
- Path Coverage – has every possible route through a given part of the code been executed?
- Entry/Exit Coverage – has every possible call and return of the function been tested?
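Example: the difference between statement and branch coverage can be seen on a small hypothetical function; the classify function and its tests below are illustrative only.

    # Hypothetical function under test.
    def classify(n):
        if n < 0:
            return "negative"
        return "non-negative"

    # This single test exercises only the True outcome of the decision
    # "n < 0"; the second return statement and the False branch stay untested.
    assert classify(-5) == "negative"

    # Adding a test for the other outcome gives full statement and branch
    # coverage of this small function.
    assert classify(7) == "non-negative"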
Cause-Effect Graph: A graphical representation of inputs or stimuli (causes) with their associated outputs (effects), which can be used to design test cases.

Capability Maturity Model for Software (CMM): The CMM is a process model based on software best practices effective in large-scale, multi-person projects. The CMM has been used to assess the maturity levels of organization areas as diverse as software engineering, system engineering, project management, risk management, system acquisition, information technology (IT) or personnel management, against a scale of five key processes, namely: Initial, Repeatable, Defined, Managed and Optimized.

Capability Maturity Model Integration (CMMI): A process improvement approach that provides organizations with the essential elements of effective processes. It can be used to guide process improvement across a project, a division, or an entire organization. CMMI helps integrate traditionally separate organizational functions, set process improvement goals and priorities, provide guidance for quality processes, and provide a point of reference for appraising current processes. Seen by many as the successor to the CMM, the goal of the CMMI project is to improve the usability of maturity models by integrating many different models into one framework.

Certification: The process of confirming that a system or component complies with its specified requirements and is acceptable for operational use.

Chow's Coverage Metrics: See N-switch coverage.

Code Complete: A phase of development where functionality is implemented in its entirety; bug fixes are all that are left. All functions found in the Functional Specification have been implemented.

Code-Based Testing: The principle of structural code based testing is to have each and every statement in the program executed at least once during the test. Based on the premise that one cannot have confidence in a section of code unless it has been exercised by tests, structural code based testing attempts to test all reachable elements in the software under the cost and time constraints. The testing process begins by first identifying areas in the program not being exercised by the current set of test cases, followed by creating additional test cases to increase the coverage.

Code-Free Testing: Next generation software testing technique from Original Software which does not require a complicated scripting language to learn. Instead, a simple point and click interface is used to significantly simplify the process of test creation. See TestDrive-Gold.

Fig 1. Code-Free testing with TestDrive-Gold

Code Inspection: A formal testing technique where the programmer reviews source code with a group who ask questions analyzing the program logic, analyzing the code with respect to a checklist of historically common programming errors, and analyzing its compliance with coding standards.

Conformance Criterion: Some method of judging whether or not the component's action on a particular specified input value conforms to the specification.

Conformance Testing: The process of testing to determine whether a system meets some specified standard. To aid in this, many test procedures and test setups have been developed, either by the standard's maintainers or external organizations, specifically for testing conformance to standards. Conformance testing is often performed by external organizations, sometimes the standards body itself, to give greater guarantees of compliance. Products tested in such a manner are then advertised as being certified by that external organization as complying with the standard.
Code Walkthrough: A formal testing technique where source code is traced by a group with a small set of test cases, while the state of program variables is manually monitored, to analyze the programmer's logic and assumptions.

Coding: The generation of source code.

Compatibility Testing: The process of testing to understand if software is compatible with other elements of a system with which it should operate, e.g. browsers, operating systems, or hardware.

Complete Path Testing: See exhaustive testing.

Component: A minimal software item for which a separate specification is available.

Component Specification: A description of a component's function in terms of its output values for specified input values under specified preconditions.

Component Testing: The testing of individual software components.

Computation Data Use: A data use not in a condition. Also called C-use.

Concurrent Testing: Multi-user testing geared towards determining the effects of accessing the same application code, module or database records. See Load Testing.

Condition: A Boolean expression containing no Boolean operators. For instance, "A < B" is a condition but "A and B" is not.

Condition Coverage: See branch condition coverage.

Condition Outcome: The evaluation of a condition to TRUE or FALSE.

Context Driven Testing: The context-driven school of software testing is similar to Agile Testing in that it advocates continuous and creative evaluation of testing opportunities in light of the potential information revealed and the value of that information to the organization right now.

Control Flow: An abstract representation of all possible sequences of events in a program's execution.

Control Flow Graph: The diagrammatic representation of the possible alternative control flow paths through a component.

Control Flow Path: See path.

Conversion Testing: Testing of programs or procedures used to convert data from existing systems for use in replacement systems.

Correctness: The degree to which software conforms to its specification.

Coverage: The degree, expressed as a percentage, to which a specified coverage item has been tested.

Coverage Item: An entity or property used as a basis for testing.

Cyclomatic Complexity: A software metric (measurement). It was developed by Thomas McCabe and is used to measure the complexity of a program. It directly measures the number of linearly independent paths through a program's source code.
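Example: a quick worked illustration of the Cyclomatic Complexity entry above, using a hypothetical function. For structured code the metric can be computed as the number of decision points plus one.

    # Hypothetical function with two decision points: "while" and "if".
    def first_negative_index(values, limit):
        i = 0
        while i < limit:          # decision point 1
            if values[i] < 0:     # decision point 2
                return i
            i += 1
        return -1

    # For structured code, cyclomatic complexity V(G) = decisions + 1.
    decisions = 2
    print("V(G) =", decisions + 1)   # 3 linearly independent paths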
Data Case: Data relationship model simplified for data extraction and reduction purposes in order to create test data.

Data Definition: An executable statement where a variable is assigned a value.

Data Definition C-use Coverage: The percentage of data definition C-use pairs in a component that are exercised by a test case suite.

Data Definition C-use Pair: A data definition and computation data use, where the data use uses the value defined in the data definition.

Data Definition P-use Coverage: The percentage of data definition P-use pairs in a component that are exercised by a test case suite.

Data Definition P-use Pair: A data definition and predicate data use, where the data use uses the value defined in the data definition.

Data Definition-use Coverage: The percentage of data definition-use pairs in a component that are exercised by a test case suite.

Data Definition-use Pair: A data definition and data use, where the data use uses the value defined in the data definition.

Data Definition-use Testing: A test case design technique for a component in which test cases are designed to execute data definition-use pairs.

Data Dictionary: A database that contains definitions of all data items defined during analysis.

Data Driven Testing: A framework where test input and output values are read from data files and are loaded into variables in captured or manually coded scripts. In this framework, variables are used for both input values and output verification values. Navigation through the program, reading of the data files, and logging of test status and information are all coded in the test script. This is similar to keyword-driven testing in that the test case is contained in the data file and not in the script; the script is just a "driver," or delivery mechanism, for the data. Unlike in table-driven testing, though, the navigation data isn't contained in the table structure. In data-driven testing, only test data is contained in the data files.

Data Flow Coverage: Test coverage measure based on variable usage within the code. Examples are data definition-use coverage, data definition P-use coverage, data definition C-use coverage, etc.

Data Flow Diagram: A modeling notation that represents a functional decomposition of a system.

Data Flow Testing: Data-flow testing looks at the lifecycle of a particular piece of data (i.e. a variable) in an application. By looking for patterns of data usage, risky areas of code can be found and more test cases can be applied.

Data Protection: Technique in which the condition of the underlying database is synchronized with the test scenario so that differences can be attributed to logical changes. This technique also automatically re-sets the database after tests, allowing for a constant data set if a test is re-run. See TestBench.

Data Protection Act: UK legislation surrounding the security, use and access of an individual's information. May impact the use of live data used for testing purposes.

Data Use: An executable statement where the value of a variable is accessed.

Database Testing: The process of testing the functionality, security, and integrity of the database and the data held within. Functionality of the database is one of the most critical aspects of an application's quality; problems with the database could lead to data loss or security breaches, and may put a company at legal risk depending on the type of data being stored. For more information on database testing see TestBench.

Fig 2. Database testing using TestBench for Oracle
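Example: a minimal sketch of the Data Driven Testing entry above. The script is only a driver; inputs and expected outputs live in a data table (the multiply function and the column layout are assumptions for illustration).

    import csv
    import io

    def multiply(a, b):
        # Hypothetical system under test.
        return a * b

    # In practice this table would live in an external file (e.g. cases.csv);
    # it is inlined here only so the sketch runs on its own.
    data_file = io.StringIO("a,b,expected\n2,3,6\n4,5,20\n7,0,0\n")

    for row in csv.DictReader(data_file):
        actual = multiply(int(row["a"]), int(row["b"]))
        status = "PASS" if actual == int(row["expected"]) else "FAIL"
        print(status, row["a"], "x", row["b"], "->", actual)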
Debugging: A methodical process of finding and reducing the number of bugs, or defects, in a computer program or a piece of electronic hardware, thus making it behave as expected. Debugging tends to be harder when various subsystems are tightly coupled, as changes in one may cause bugs to emerge in another.

Decision: A program point at which the control flow has two or more alternative routes.

Decision Condition: A condition held within a decision.

Decision Coverage: The percentage of decision outcomes that have been exercised by a test case suite.

Decision Outcome: The result of a decision.

Defect: Nonconformance to requirements or functional/program specification.

Delta Release: A delta, or partial, release is one that includes only those areas within the release unit that have actually changed or are new since the last full or delta release. For example, if the release unit is the program, a delta release contains only those modules that have changed, or are new, since the last full release of the program or the last delta release of certain modules.

Dependency Testing: Examines an application's requirements for pre-existing software, initial states and configuration in order to maintain proper functionality.

Depth Testing: A test that exercises a feature of a product in full detail.

Design-Based Testing: Designing tests based on objectives derived from the architectural or detail design of the software (e.g., tests that execute specific invocation paths or probe the worst case behavior of algorithms).

Desk Checking: The testing of software by the manual simulation of its execution.

Dirty Testing: Testing which demonstrates that the system under test does not work. (Also known as negative testing.)

Documentation Testing: Testing concerned with the accuracy of documentation.

Domain: The set from which values are selected.

Domain Expert: A person who has significant knowledge in a specific domain.

Domain Testing: Domain testing is the most frequently described test technique. The basic notion is that you take the huge space of possible tests of an individual variable and subdivide it into subsets that are (in some way) equivalent. Then you test a representative from each subset.

Downtime: Total period that a service or component is not operational.

Dynamic Analysis: The examination of the physical response from the system to variables that are not constant and change with time.

Dynamic Testing: Testing of the dynamic behavior of code. Dynamic testing involves working with the software, giving input values and checking if the output is as expected.

Emulator: A device that duplicates (provides an emulation of) the functions of one system using a different system, so that the second system behaves like (and appears to be) the first system. This focus on exact reproduction of external behavior is in contrast to simulation, which can concern an abstract model of the system being simulated, often considering internal state.

Endurance Testing: Checks for memory leaks or other problems that may occur with prolonged execution.

End-to-End Testing: Testing a complete application environment in a situation that mimics real-world use, such as interacting with a database, using network communications, or interacting with other hardware, applications, or systems if appropriate.

Error Guessing: Error guessing involves making an itemized list of the errors expected to occur in a particular area of the system and then designing a set of test cases to check for these expected errors. Error guessing is more testing art than testing science but can be very effective given a tester familiar with the history of the system.

Error Seeding: The process of injecting a known number of "dummy" defects into the program and then checking how many of them are found by various inspections and testing. If, for example, 60% of them are found, the presumption is that 60% of other defects have been found as well. See Bebugging.
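Example: to illustrate the Error Guessing entry above, the sketch below turns a guessed error list (empty input, non-numeric text, out-of-range values) into concrete checks. The parse_age function and the cases are assumptions, not from the source.

    # Hypothetical function under test: parses an age entered as text.
    def parse_age(text):
        value = int(text)
        if value < 0 or value > 130:
            raise ValueError("age out of range")
        return value

    # Guessed error-prone inputs; each one is expected to be rejected.
    expected_failures = ["", "abc", "-1", "999"]
    for candidate in expected_failures:
        try:
            parse_age(candidate)
        except ValueError:
            print("rejected as expected:", repr(candidate))
        else:
            print("BUG: accepted", repr(candidate))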
Entry Point: The first executable statement within a component.

Equivalence Class: A mathematical concept; an equivalence class is a subset of a given set induced by an equivalence relation on that given set. (If the given set is empty, then the equivalence relation is empty, and there are no equivalence classes; otherwise, the equivalence relation and its concomitant equivalence classes are all non-empty.) Elements of an equivalence class are said to be equivalent, under the equivalence relation, to all the other elements of the same equivalence class.

Equivalence Partition: See equivalence class.

Equivalence Partitioning: Leverages the concept of "classes" of input conditions. A "class" of input could be "City Name", where testing one or several city names could be deemed equivalent to testing all city names. In other words, each instance of a class in a test covers a large set of other possible tests.

Equivalence Partition Coverage: The percentage of equivalence classes generated for the component which have been tested.

Equivalence Partition Testing: A test case design technique for a component in which test cases are designed to execute representatives from equivalence classes.

Error: A mistake that produces an incorrect result.

Evaluation Report: A document produced at the end of the test process summarizing all testing activities and results. It also contains an evaluation of the test process and lessons learned.

Executable Statement: A statement which, when compiled, is translated into object code, which will be executed procedurally when the program is running and may perform an action on program data.

Exercised: A program element is exercised by a test case when the input value causes the execution of that element, such as a statement, branch, or other structural element.

Exhaustive Testing: Testing which covers all combinations of input values and preconditions for an element of the software under test.

Exit Point: The last executable statement within a component.

Expected Outcome: See predicted outcome.

Expert System: A domain specific knowledge base combined with an inference engine that processes knowledge encoded in the knowledge base to respond to a user's request for advice.

Expertise: Specialized domain knowledge, skills, tricks, shortcuts and rules-of-thumb that provide an ability to rapidly and effectively solve problems in the problem domain.
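Example: picking up the Equivalence Partitioning entry above, a hypothetical discount rule is tested with one representative value per input class rather than every possible age. The rule, the classes and the chosen values are assumptions for illustration.

    # Hypothetical rule under test: children (<18) and seniors (>=65) get a discount.
    def discounted(age):
        return age < 18 or age >= 65

    # One representative per equivalence class of the "age" input.
    representatives = {
        "child (0-17)": (10, True),
        "adult (18-64)": (40, False),
        "senior (65+)": (70, True),
    }

    for label, (age, expected) in representatives.items():
        assert discounted(age) == expected, label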
Failure: Non-performance or deviation of the software from its expected delivery or service.

Fault: A manifestation of an error in software. Also known as a bug.

Feasible Path: A path for which there exists a set of input values and execution conditions which causes it to be executed.

Feature Testing: A method of testing which concentrates on testing one feature at a time.

Firing a Rule: A rule fires when the "if" part (premise) is proven to be true. If the rule incorporates an "else" component, the rule also fires when the "if" part is proven to be false.

Fit For Purpose Testing: Validation carried out to demonstrate that the delivered system can be used to carry out the tasks for which it was designed and acquired.

Forward Chaining: Applying a set of previously determined facts to the rules in a knowledge base to see if any of them will fire.

Full Release: All components of the release unit that are built, tested, distributed and implemented together. See also delta release.

Functional Decomposition: A technique used during planning, analysis and design; creates a functional hierarchy for the software. Functional decomposition broadly relates to the process of resolving a functional relationship into its constituent parts in such a way that the original function can be reconstructed (i.e., recomposed) from those parts by function composition. In general, this process of decomposition is undertaken either for the purpose of gaining insight into the identity of the constituent components (which may reflect individual physical processes of interest, for example), or for the purpose of obtaining a compressed representation of the global function, a task which is feasible only when the constituent processes possess a certain level of modularity (i.e. independence or non-interaction).

Functional Requirements: Define the internal workings of the software: that is, the calculations, technical details, data manipulation and processing, and other specific functionality that show how the use cases are to be satisfied. They are supported by non-functional requirements, which impose constraints on the design or implementation (such as performance requirements, security, quality standards, or design constraints).

Functional Specification: A document that describes in detail the characteristics of the product with regard to its intended features and capability.

Functional Testing: See also Black Box Testing. Testing the features and operational behavior of a product to ensure they correspond to its specifications; testing that ignores the internal mechanism of a system or component and focuses solely on the outputs generated in response to selected inputs and execution conditions.

Genetic Algorithms: Search procedures that use the mechanics of natural selection and natural genetics, using evolutionary techniques based on function optimization and artificial intelligence to develop a solution.

Glass Box Testing: A form of testing in which the tester can examine the design documents and the code as well as analyze and possibly manipulate the internal state of the entity being tested. Glass box testing involves examining the design documents and the code, as well as observing at run time the steps taken by algorithms and their internal data. See structural test case design.

Goal: The solution that the program or project is trying to reach.

Gorilla Testing: An intense round of testing, quite often redirecting all available resources to the activity. The idea here is to test as much of the application as possible in as short a period of time as possible.

Graphical User Interface (GUI): A type of display format that enables the user to choose commands, start programs, and see lists of files and other options by pointing to pictorial representations (icons) and lists of menu items on the screen.

Gray (Grey) Box Testing: A testing technique that uses a combination of black box testing and white box testing. Gray box testing is not black box testing because the tester does know some of the internal workings of the software under test. In gray box testing, the tester applies a limited number of test cases to the internal workings of the software under test. For the remaining part of the gray box testing, one takes a black box approach in applying inputs to the software under test and observing the outputs.
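Example: a tiny sketch of the Forward Chaining and Firing a Rule entries above, using an invented diagnostic rule base. The facts, rules and conclusions are assumptions for illustration only.

    # Tiny forward-chaining sketch over a hypothetical rule base.
    facts = {"engine_cranks", "battery_ok"}
    rules = [
        ({"engine_cranks", "battery_ok"}, "electrics_ok"),
        ({"electrics_ok", "fuel_present"}, "engine_should_start"),
    ]

    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            # A rule "fires" when all of its premises are known facts.
            if premises <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True

    print(facts)   # "electrics_ok" is derived; the second rule never fires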
Harness: A test environment comprised of stubs and drivers needed to conduct a test.

Heuristics: The informal, judgmental knowledge of an application area that constitutes the "rules of good judgment" in the field. Heuristics also encompass the knowledge of how to solve problems efficiently and effectively, how to plan steps in solving a complex problem, how to improve performance, etc.

High Order Tests: High-order testing checks that the software meets customer requirements and that the software, along with other system elements, meets the functional, behavioral, and performance requirements. It uses black-box techniques and requires an outsider perspective; therefore, organizations often use an Independent Testing Group (ITG) or the users themselves to perform high-order testing. High-order testing includes validation testing, system testing (focusing on aspects such as reliability, security, stress, usability, and performance), and acceptance testing (including alpha and beta testing). The testing strategy specifies the type of high-order testing that the project requires. This depends on the aspects that are important in a particular system from the user perspective.

ITIL (IT Infrastructure Library): A consistent and comprehensive documentation of best practice for IT Service Management. ITIL consists of a series of books giving guidance on the provision of quality IT services, and on the accommodation and environmental facilities needed to support IT.

Implementation Testing: See Installation Testing.

Incremental Testing: Partial testing of an incomplete product. The goal of incremental testing is to provide early feedback to software developers.

Independence: Separation of responsibilities which ensures the accomplishment of objective evaluation.

Independent Test Group (ITG): A group of people whose primary responsibility is to conduct software testing for other companies.

Infeasible Path: A path which cannot be exercised by any set of possible input values.

Inference: Forming a conclusion from existing facts.

Installability Testing: Testing whether the software or system installation being tested meets predefined installation requirements.

Installation Guide: Supplied instructions on any suitable media, which guide the installer through the installation process. This may be a manual guide, step-by-step procedure, installation wizard, or any other similar process description.

Installation Testing: Confirms that the application under test recovers from expected or unexpected events without loss of data or functionality. Events can include shortage of disk space, unexpected loss of communication, or power-out conditions. Such testing focuses on what customers will need to do to install and set up the new software successfully, and is typically done by the software testing engineer in conjunction with the configuration manager. Implementation testing is usually defined as testing which places a compiled version of code into the testing or pre-production environment, from which it may or may not progress into production. This generally takes place outside of the software development environment to limit code corruption from other future releases which may reside on the development network.
Inference Engine: Software that provides the reasoning mechanism in an expert system. In a rule based expert system, it typically implements forward chaining and backward chaining strategies.

Infrastructure: The organizational artifacts needed to perform testing, consisting of test environments, automated test tools, office environment and procedures.

Inheritance: The ability of a class to pass on characteristics and data to its descendants.

Input: A variable (whether stored within a component or outside it) that is read by the component.

Input Domain: The set of all possible inputs.

Inspection: A group review quality improvement process for written material. It consists of two aspects: product (document itself) improvement and process improvement.

Installability: The ability of a software component or system to be installed on a defined target platform allowing it to be run as required. Installation includes both a new installation and an upgrade.

Installation Wizard: Supplied software on any suitable media, which leads the installer through the installation process. It shall normally run the installation process, provide feedback on installation outcomes, and prompt for options.

Instrumentation: The insertion of additional code into the program in order to collect information about program behavior during program execution.

Integration: The process of combining components into larger groups or assemblies.

Integration Testing: Testing of combined parts of an application to determine if they function together correctly. Usually performed after unit and functional testing. This type of testing is especially relevant to client/server and distributed systems.

Interface Testing: Integration testing where the interfaces between system components are tested.

Isolation Testing: Component testing of individual components in isolation from surrounding components.

KBS (Knowledge Based System): A domain specific knowledge base combined with an inference engine that processes knowledge encoded in the knowledge base to respond to a user's request for advice.

Key Performance Indicator: Quantifiable measurements against which specific performance criteria can be set.

Keyword Driven Testing: An approach to test script writing aimed at code based automation tools that separates much of the programming work from the actual test steps. The result is that the test steps can be designed earlier and the code base is often easier to read and maintain.

Knowledge Engineering: The process of codifying an expert's knowledge in a form that can be accessed through an expert system.

Known Error: An incident or problem for which the root cause is known and for which a temporary work-around or a permanent alternative has been identified.
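Example: a minimal sketch of the Keyword Driven Testing entry above. Test steps are written as keyword rows, and a small driver maps each keyword to code; the keywords, the step table and the login scenario are assumptions for illustration.

    # Keyword implementations (the "programming work").
    session = {"user": None}

    def open_app():
        session["user"] = None

    def login(username):
        session["user"] = username

    def verify_logged_in(username):
        assert session["user"] == username

    keywords = {"open_app": open_app, "login": login,
                "verify_logged_in": verify_logged_in}

    # Test steps (the part a test designer writes, e.g. in a spreadsheet).
    steps = [
        ("open_app",),
        ("login", "alice"),
        ("verify_logged_in", "alice"),
    ]

    for keyword, *args in steps:
        keywords[keyword](*args)
    print("all steps passed")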
LCSAJ: A Linear Code Sequence And Jump, consisting of the following three items (conventionally identified by line numbers in a source code listing): the start of the linear sequence of executable statements, the end of the linear sequence, and the target line to which control flow is transferred at the end of the linear sequence.

LCSAJ Coverage: The percentage of LCSAJs of a component which are exercised by a test case suite.

LCSAJ Testing: A test case design technique for a component in which test cases are designed to execute LCSAJs.

Load Testing: The process of creating demand on a system or device and measuring its response. Load testing generally refers to the practice of modeling the expected usage of a software program by simulating multiple users accessing the program's services concurrently. As such, this testing is most relevant for multi-user systems, often ones built using a client/server model, such as web servers. However, other types of software systems can be load-tested also. For example, a word processor or graphics editor can be forced to read an extremely large document, or a financial package can be forced to generate a report based on several years' worth of data. The most accurate load testing occurs with actual, rather than theoretical, results. See also Concurrent Testing, Performance Testing, Reliability Testing, and Volume Testing.

Localization Testing: This term refers to making software specifically designed for a specific locality. This test is based on the results of globalization testing, which verifies the functional support for that particular culture/locale. Localization testing can be executed only on the localized version of a product. The test effort during localization testing focuses on areas affected by localization, such as the user interface and content, and on culture/locale-specific, language-specific and region-specific areas. In addition, localization testing should include basic functionality tests, setup and upgrade tests run in the localized environment, and application and hardware compatibility tests planned according to the product's target region.

Log: A chronological record of relevant details about the execution of tests.

Logic-Coverage Testing: Sometimes referred to as Path Testing, logic-coverage testing attempts to expose software defects by exercising a unique combination of the program's statements known as a path.

Loop Testing: Loop testing is the testing of a resource or resources multiple times under program control.

Maintainability: The ease with which the system/software can be modified to correct faults, modified to meet new requirements, modified to make future maintenance easier, or adapted to a changed environment.

Maintenance Requirements: A specification of the required maintenance needed for the system/software. The released software often needs to be revised and/or upgraded throughout its lifecycle, so it is essential that the software can be easily maintained and that any errors found during re-work and upgrading are corrected. Within traditional software testing techniques, script maintenance is often a problem, as it can be very complicated and time consuming to ensure correct maintenance of the software: the scripts these tools use need updating every time the application under test changes. See Code-Free Testing and Self Healing Scripts.

Metric: A standard of measurement. Software metrics are the statistics describing the structure or content of a program. A metric should be a real objective measurement of something such as number of bugs per lines of code.
Manual Testing: The oldest type of software testing. Manual testing requires a tester to perform manual test operations on the test software without the help of test automation. Manual testing is a laborious activity that requires the tester to possess a certain set of qualities: to be patient, observant, speculative, creative, innovative, open-minded, resourceful, un-opinionated, and skilful.

As a tester, it is always advisable to use manual white box testing and black-box testing techniques on the test software. Manual testing helps discover and record any software bugs or discrepancies related to the functionality of the product. Manual testing can be augmented by test automation: it is possible to record and play back manual steps and write automated test scripts using test automation tools. However, test automation tools will only help execute test scripts written primarily for executing a particular specification and functionality; they lack the ability to make decisions and to record any unscripted discrepancies during program execution. It is recommended that one should perform manual testing of the entire product at least a couple of times before actually deciding to automate the more mundane activities of the product.

Manual testing helps discover defects related to the usability testing and GUI testing area. While performing manual tests, the software application can be validated as to whether it meets the various standards defined for effective and efficient usage and accessibility. For example, the standard location of the OK button on a screen is on the left and of the CANCEL button on the right; during manual testing you might discover that on some screen it is not. This is a defect related to the usability of the screen. In addition, there could be many cases where the GUI is not displayed correctly while the basic functionality of the program is correct; such bugs are not detectable using test automation tools.

Repetitive manual testing can be difficult to perform on large software applications or applications having very large dataset coverage. This drawback is compensated for by using manual black-box testing techniques, including equivalence partitioning and boundary value analysis, with which the vast dataset specifications can be divided and converted into a more manageable and achievable set of test suites. There is no complete substitute for manual testing; manual testing is crucial for testing software applications more thoroughly. See TestDrive-Assist.

Modified Condition/Decision Coverage: The percentage of all branch condition outcomes that independently affect a decision outcome that have been exercised by a test case suite.

Modified Condition/Decision Testing: A test case design technique in which test cases are designed to execute branch condition outcomes that independently affect a decision outcome.

Monkey Testing: Testing a system or an application on the fly, i.e. a unit test with no specific end result in mind.

Multiple Condition Coverage: See Branch Condition Combination Coverage.

Mutation Analysis: A method to determine test case suite thoroughness by measuring the extent to which a test case suite can discriminate the program from slight variants (mutants) of the program. See also Error Seeding.

Mutation Testing: Testing done on the application where bugs are purposely added to it. See Bebugging.

N-switch Coverage: The percentage of sequences of N-transitions that have been tested.

N-switch Testing: A form of state transition testing in which test cases are designed to execute all valid sequences of N-transitions.

N-transitions: A sequence of N+1 transitions.

Neural Network: A system modeled after the neurons (nerve cells) in a biological nervous system. A neural network is designed as an interconnected system of processing elements, each with a limited number of inputs and outputs. Rather than being programmed, these systems learn to recognize patterns.

Negative Testing: Testing a system or application using negative data (for example, testing a password field that requires a minimum of 9 characters by entering a password of 6).
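Example: a minimal sketch of the Negative Testing entry above, using the glossary's own password illustration; the password_accepted validator is a hypothetical stand-in for the system under test.

    # Hypothetical password rule: at least 9 characters.
    def password_accepted(password):
        return len(password) >= 9

    # Negative test: a 6-character password must be rejected.
    assert not password_accepted("abc123")

    # Matching positive test for contrast.
    assert password_accepted("abc123xyz")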
N+1 Testing: A variation of regression testing. Testing conducted with multiple cycles in which errors found in test cycle N are resolved and the solution is retested in test cycle N+1. The cycles are typically repeated until the solution reaches a steady state and there are no errors. See also Regression Testing.

Natural Language Processing (NLP): A computer system to analyze, understand and generate natural human languages.

Non-functional Requirements Testing: Testing of those requirements that do not relate to functionality, i.e. performance, usability, etc.

Normalization: A technique for designing relational database tables to minimize duplication of information and, in so doing, to safeguard the database against certain types of logical or structural problems, namely data anomalies.

Object: A software structure which represents an identifiable item that has a well-defined role in a problem domain.

Object Orientated: An adjective applied to any system or language that supports the use of objects.

Objective: The purpose of the specific test being undertaken.

Operational Testing: Testing performed by the end-user on software in its normal operating environment.

Oracle: A mechanism to produce the predicted outcomes to compare with the actual outcomes of the software under test.

Outcome: The result or visible effect of a test.

Output: A variable (whether stored within a component or outside it) that is written to by the component.

Output Domain: The set of all possible outputs.

Output Value: An instance of an output.

Page Fault: A program interruption that occurs when a page that is marked 'not in real memory' is referred to by an active page.

Pair Programming: A software development technique that requires two programmers to participate in a combined development effort at one workstation. Each member performs the action the other is not currently doing: for example, while one types in unit tests, the other thinks about the class that will satisfy the test. The person who is doing the typing is known as the driver, while the person who is guiding is known as the navigator. It is often suggested for the two partners to switch roles at least every half-hour or after a unit test is made, and to switch partners at least once a day.

Pair Testing: In much the same way as Pair Programming, two testers work together to find defects. Typically, they share one computer and trade control of it while testing.

Partial Test Automation: The process of automating parts but not all of the software testing process. If, for example, an oracle cannot reasonably be created, or if fully automated tests would be too difficult to maintain, then a software tools engineer can instead create testing tools to help human testers perform their jobs more efficiently. Testing tools can help automate tasks such as product installation, test data creation, GUI interaction, problem detection (consider parsing or polling agents equipped with oracles), defect logging, etc., without necessarily automating tests in an end-to-end fashion.

Pass: Software is deemed to have passed a test if the actual results of the test matched the expected results.

Pass/Fail Criteria: Decision rules used to determine whether an item under test has passed or failed a test.

Path: A sequence of executable statements of a component, from an entry point to an exit point.
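Example: returning to the Oracle entry above, a common pattern is to compute the predicted outcome by an independent route and compare it with the actual outcome. The hand-rolled sort_numbers function and the use of Python's built-in sorted as the oracle are assumptions for illustration.

    # Hypothetical implementation under test: a hand-rolled insertion sort.
    def sort_numbers(values):
        result = []
        for v in values:
            i = 0
            while i < len(result) and result[i] < v:
                i += 1
            result.insert(i, v)
        return result

    # Oracle: an independent, trusted way to produce the predicted outcome.
    def oracle(values):
        return sorted(values)

    for case in ([3, 1, 2], [], [5, 5, 1], [-2, 0, 7]):
        assert sort_numbers(case) == oracle(case), case
    print("actual outcomes match the oracle's predictions")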
Pairwise Testing: A combinatorial software testing method that, for each pair of input parameters to a system (typically, a software algorithm), tests all possible discrete combinations of those parameters. Using carefully chosen test vectors, this can be done much faster than an exhaustive search of all combinations of all parameters, by "parallelizing" the tests of parameter pairs. The number of tests is typically O(nm), where n and m are the number of possibilities for each of the two parameters with the most choices. The reasoning behind all-pairs testing is this: the simplest bugs in a program are generally triggered by a single input parameter. The next simplest category of bugs consists of those dependent on interactions between pairs of parameters, which can be caught with all-pairs testing. Bugs involving interactions between three or more parameters are progressively less common, whilst at the same time being progressively more expensive to find by exhaustive testing, which has as its limit the exhaustive testing of all possible inputs. Many testing methods regard all-pairs testing of a system or subsystem as a reasonable cost-benefit compromise between often computationally infeasible higher-order combinatorial testing methods and less exhaustive methods which fail to exercise all possible pairs of parameters. Because no testing technique can find all bugs, all-pairs testing is typically used together with other quality assurance techniques such as unit testing. See TestDrive-Gold.

Path Coverage: The percentage of paths in a component exercised by a test case suite.

Path Sensitizing: Choosing a set of input values to force the execution of a component to take a given path.

Path Testing: Used as either black box or white box testing, the procedure itself is similar to a walk-through. First, a certain path through the program is chosen. Possible inputs and the correct result are written down. Then the program is executed by hand, and its result is compared to the predefined one. Possible faults have to be written down at once.

Performance: The degree to which a system or component accomplishes its designated functions within given constraints regarding processing time and throughput rate.

Performance Testing: A test procedure that covers a broad range of engineering or functional evaluations where a material, product, or system is not specified by detailed material or component specifications: rather, emphasis is on the final measurable performance characteristics. Also known as Load Testing.

Portability: The ease with which the system/software can be transferred from one hardware or software environment to another.
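Example: to make the Pairwise Testing entry above concrete, take three hypothetical parameters with two values each. Exhaustive testing needs 8 combinations, while the 4 hand-picked rows below already cover every pair of parameter values; the parameters are invented for illustration, and the check verifies the pair coverage.

    from itertools import combinations, product

    # Hypothetical parameters: browser, OS, locale (two values each).
    domains = {"browser": ["chrome", "firefox"],
               "os": ["windows", "linux"],
               "locale": ["en", "de"]}

    exhaustive = list(product(*domains.values()))      # 2*2*2 = 8 tests

    # A hand-picked all-pairs set: 4 tests instead of 8.
    pairwise = [("chrome", "windows", "en"),
                ("chrome", "linux", "de"),
                ("firefox", "windows", "de"),
                ("firefox", "linux", "en")]

    # Verify that every pair of values from any two parameters appears.
    names = list(domains)
    for (i, a), (j, b) in combinations(enumerate(names), 2):
        wanted = set(product(domains[a], domains[b]))
        covered = {(row[i], row[j]) for row in pairwise}
        assert wanted == covered, (a, b)

    print(len(exhaustive), "exhaustive tests vs", len(pairwise), "pairwise tests")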
Postcondition: Environmental and state conditions that must be fulfilled after the execution of a test or test procedure. Process Cycle Test: A black box test design technique in which test cases are designed to execute business proce- dures and processes. Positive Testing: Testing aimed at showing whether the software works in the way intended. See also Negative Testing. Progressive Testing: Testing of new features after re- gression testing of previous features. Precondition: Environmental and state conditions which must be fulfilled before the component can be executed Project: A planned undertaking for presentation of results with a particular input value. at a specified time in the future. Predicate: A logical expression which evaluates to TRUE Prototyping: A strategy in system development in which a or FALSE, normally to direct the execution path in code. scaled down system or portion of a system is constructed in a short time, then tested and improved upon over several iterations. Predication: The choice to execute or not to execute a given instruction. Pseudo-Random: A series which appears to be random but is in fact generated according to some prearranged Predicted Outcome: The behavior expected by the spec- sequence. ification of an object under specified conditions. Priority: The level of business importance assigned to an individual item or test.