Software Testing Basics

Software Testing Attributes and Basics

A software testing effort brings together several testing attributes such as functionality, UI, usability, performance, and security. In this tutorial, we explain software testing basics with examples.

 

These attributes have themselves undergone many changes owing to market conditions, prevailing external forces, and end-user requirements, which have brought about significant changes in how, when, and how much testing is needed.

 

But other testing attributes, beyond the traditional core, are also gaining prominence, thereby determining what kind of testing is required and what together should constitute a test strategy.

 

Let us take the case of security testing: during the days of desktop applications, security was not much of an issue. It was more confined to physical application access or information exchange over a client-server model.

 

In one sense, I look at test attributes as a secondary change agent, as these are not changes that impact the testing effort directly.

 

Technology

Technology is another huge change agent that has shaped what we test and how we test. The technology world has itself undergone rampant change in the last two decades, whether it be the Internet revolution at its very core or the newer computing technologies around cloud, social, mobile, analytics, wearables, augmented reality, and so on.

 

Each of these has had a profound impact on the market at large and at a software development industry level.

 

And the nice thing is that the industry is also aligning toward integrating these technologies—for example, the emergence of SMAC to bring together the social, mobile, analytics, and cloud computing worlds.

 

The growing complexity of IT systems, fast adoption across markets, and the way diverse systems embrace varied technologies have all, in their own ways, brought newer approaches and requirements to software testing.

 

Technology is one area that continues to have a two-sided impact on software testing: how to test the technology itself, and how to leverage the technology in testing.

 

Let’s take cloud computing as an example. With the introduction of the cloud, as several applications became online services, testers had to quickly gear up to test software as a service and, at the same time, leverage the cloud to build their test machines.

 

Infrastructure as a service was a huge boon to testers, significantly saving their time on machine setup, especially in areas such as performance testing.

 

Oftentimes, we as testers focus heavily on how to test a technology; to become more efficient and productive, now is the time for us to also see how to leverage that technology to make our lives simpler.

 

The examples here are countless. If we start looking at all technologies from this two-pronged approach, it will bring in greater efficiencies in our scope of operations.

 

Market and End-User Requirements

This is yet another important variable that has brought in sweeping changes in what we test and how we test. We continue to be end-user representatives in the product team.

 

The changing market requirements, whether it be the fast time to market, expectations for a rich feature set, or quick response to end-user wants, have all made test teams increasingly agile and nimble.

 

It is still an open question whether agile development brought about these changes in the market or market needs brought in the agile style of development.

 

Regardless, the dynamics for testers have changed for the good: they are able to respond to change requests very quickly. They also now understand that although there is a test-complete milestone before every product release, the test effort is truly ongoing, with constant interaction with end users even after live deployment.

 

Also, given how global software development has become, the reverse, where end users become testers, is now a reality: they come in to evaluate the product and provide feedback before its release in the form of crowd-sourced testing.

 

These have become valuable techniques to help with a quality release given the limited time and money on hand, despite the ever-increasing test scope.

 

Service-Oriented Mind-Set

This change agent ties back to the advent of cloud computing, when the overall industry mindset shifted to building service-oriented applications. Office, a very successful boxed product and a cash cow for Microsoft, now has an online subscription model, Office 365.

 

Products at large have moved to an online services model, which makes them easy and lightweight for users to leverage and, at the same time, gives the ISV full control and management power in connecting with end users.

 

This service-oriented style of development brings in newer testing requirements around the performance, security, and usability of the application, making these as important as core product functionality.

 

Markets and Compliances

Given how complex the online market has become, it is increasingly important to bring in checks and balances through compliance requirements that systems must adhere to and get certified on. For example, the U.S. government is slowly starting to mandate Section 508 to ensure systems are accessible.

 

This calls for a rigorous round of testing based on checklists such as the Voluntary Product Accessibility Template, which testers can leverage on top of their exploratory and scripted tests. Compliance requirements in varied disciplines such as banking and healthcare have also brought newer changes and opportunities to software testers.

 

While there are more such change agents in software testing, these are the core ones to track on an ongoing basis, as together they influence most of what happens in the software quality and testing world.

 

Another lingering question that people have, not specific to software testing, is how frequently change should happen. This is not an easy question to answer, as changes are often not controlled internally.

 

Several changes are driven by external factors where the organization or the team has no option but to change. In such cases, obviously one has to be ready to jump in and embrace the change as quickly as possible with the least amount of impact.

 

But for the others, which are internal planned changes or external changes that one can decide on and adopt, a certain amount of advance planning will really help.

 

Also, allowing a reasonable gap between two changes not only gives teams a better opportunity to accept them but also does justice to the existing process, letting it stabilize given the time and money that would have gone into its implementation.

 

How Did Testing Fit into the SDLC of the Past Years?

Waterfall was the dominant SDLC model of the past few decades. While other models such as the V-model and spiral were also in common use, waterfall had the dominant share of the market.

 

A study conducted as recently as 2010 showed that while organizations were increasingly moving to agile, about 59% said they still used a combination of agile and waterfall methodologies, 12% used waterfall alone, and 5% used agile alone.

 

Waterfall has been deeply rooted in the software development community at large since the days of early product development, and its reign remained strong until just a few years ago.

 

If we take the pure waterfall model, the development phases were distinct and gated. Each phase had ample time to complete the tasks in its scope, whether the initial stages of requirements gathering and design or the final stages of testing, deployment and release, and maintenance.

 

The last few stages were often feared to be more time constrained, but in the traditional waterfall days, where products were more boxed in nature with long release cycles often running into months, this was not too much of a concern.

 

Testers often did not have any communication with the rest of the development team until their work started later in the game. This was one of the huge downsides of the waterfall model where teams worked in a very disconnected manner.

 

They lacked the larger contextual understanding of their role and their tasks, leading to a very loosely coupled effort that ended up impacting end-user satisfaction.

 

Even when a formal project management effort tied up all these loose ends, there was a heavy drain on overall development costs and the cost of quality, with quite a gap left to fix between end-user requirements and the delivered product.

 

While this style of development has not gone away completely today, a large portion of it has been washed away with the increasing adoption of the agile style of development.

 

How Did the STLC Itself Look?

The STLC has had its own independent flow in both the waterfall and agile days. The rigor with which it has been implemented and its customized flows have undergone change, but at a high level, having an STLC has been greatly valuable in bringing the desired level of discipline to the overall testing effort.

 

The varied efforts traditionally adopted in a software test life cycle include test strategy creation, test plan creation, test design, test execution, defect management (including regression testing), and test sign-off. Test teams have engaged not just in testing on non-live environments but also in debugging, tracking, and testing issues in live environments.

 

Testers continue to be end-user representatives on the team but how they have contributed to being such end-user representatives has varied over the years.

 

During the years that we are talking about, even though they represented the end user’s quality needs, they were not empowered to do much, given how late in the game they got into the testing cycle.

 

Even if they had great ideas and suggestions, they had little power to see them implemented. Before we discuss each stage of the testing life cycle in greater detail, let’s talk about the salient features that characterized a test effort in the past:

 

  1. Heavy emphasis on test document and test artifact creation
  2. Strong focus on black box testing, even if the test effort itself was independent in nature
  3. Gated approach to start the test execution effort only after coding was complete
  4. Heavy reliance on metrics that often did not convey the true meaning of quality
  5. The onus of complete quality passed on to testers, with a lack of accountability among the rest of the product team
  6. Main focus on product functionality, with nonfunctional areas such as performance, security, usability, and accessibility often taking a complete back seat
  7. Pressure to sign off on the test effort to release on time, despite having started late

 

All of these core characteristics made it very difficult to deliver quality releases. The tester was also demotivated, as his efforts seemed to go unnoticed. If he became visible at all, it was when live issues surfaced in the production environment.

 

He did not have an environment to thrive in and was merely surviving, which made it difficult for him to collaborate with the rest of the product team. He was seen as a step below his counterparts in the design and development teams.

 

None of this offered a very conducive environment, either for the quality of the product or for morale in the team.

 

While all of this sounds rather negative, the STLC activities were themselves of great value when seen individually and implemented at the right levels. So what are those activities, and what is done when each of them is taken up? Let’s talk about these activities one by one.

 

Test Strategy

When a test effort was initially conceptualized as an independent activity to be performed by testers who were not involved in the development effort, the need for a test strategy came in.

 

This is a document that describes the product and its architecture, the varied test areas (for which test plans would subsequently be created), entry and exit criteria for the overall release, a resource mapping identifying the testers who would work on the varied components, the group’s overall test and defect management practices, and so on.

 

This served as a very valuable document in the waterfall days, helping testers pause and understand the larger context of what they were involved in, how each other’s work interlaced, what some of the common practices in the team were, and so on.

 

However, it does take time to create a test strategy. In a large group where a number of modules are being tested, creating a test strategy can even turn out to be a one-month exercise.

 

This is a lot of time to spend upfront, but if the right effort is made to shape the strategy into relevant and meaningful information for the group, it can pay off in the long run.

 

For it to continue to be of value to the team, it is important for the test strategy to be maintained and updated regularly as well as used by the team as and when required.

 

Test Planning

Some teams combine the test strategy and the test plan to save the time involved in creating them as well as the overhead of maintaining multiple documents, but in the truest essence of the traditional testing life cycle, these were both independent documents. After creating a test strategy, a small group of testers would be tasked with creating individual module level test plans.

 

Let’s say a test plan to test the web workflow, a test plan for the desktop version, a test plan for the reporting module, a test plan for the overall database implementation, one from the performance and load angle, one for the security of the application, and so on.

 

This assigned individual responsibilities, enabling a small group of testers to focus in detail on their specific areas: defining scenarios, the kinds of tests they planned to take on, the test environment needed, test estimates, risks and mitigations, and so on. Such plans gave the tester time to think through all elements of testing upfront, even before testing could commence.

 

Test plans again brought their own value despite the upfront time and effort involved, but given these overheads, teams soon moved to combining the strategy and the plan, so that everyone shared core common principles while individually customizing test scenarios.

 

Test Design

A lot of emphasis was placed on designing test cases. While some teams left test cases at the scenario level, most took the scenarios from the test plans and elaborated them into sets of individual test cases at varying levels of detail.

 

Some test cases were so detailed that a new tester coming into the test effort could merely review them to understand application workflows.

 

While such granularity helped ensure there was no confusion about which tests were executed, and also helped train new people, it meant that much more time was spent writing test cases, time during which the tester could have actually been testing the product.

 

Maintenance was also a huge overhead where even for a small feature change, the tester had to spend a lot of time updating the test cases.

 

Test Execution

By the time the tester got his hands dirty with actual test execution, much time would often have elapsed in test planning and artifact creation. By then, the rest of the product team would most likely have moved into a product release mindset.

 

The testers were highly focused on executing the scripted test cases that they had earlier designed.

 

We will discuss the kinds of testing they took on in a subsequent section, but discussions of a tester’s productivity would often linger on how many test cases were executed.

 

This was considered very important because the testers had to sometimes achieve unrealistic numbers to the tune of even 250 test cases a day.

 

This made execution a very monotonous activity hindering the true creativity of a tester. A tester’s typical day also involved recording results from all these executed tests and also filing defects for issues found.

 

All of this added to the tester’s documentation overhead; he was often exhausted at the end of the day, with no room left to attempt exploratory testing.

 

Defect Management

A lot of emphasis was rightly placed on defect management. The tester was responsible all the way from finding a defect, to filing it, following up on getting it fixed, regressing it in the right environment, adding new test cases that map to the defect, and updating the regression suite to ensure the new test cases were taken into account in subsequent milestones.

 

While each of these activities is very important on their own, the larger question was whether the tester was truly empowered to drive defects to closure.

 

Testers often had no say in, or did not participate in, defect triages, giving developers supreme power to fix the defects they saw fit. The tester, who truly understood a defect’s impact on end users, often had no opportunity to raise it in the triage meetings.

 

Defect management was also, in most cases, a laborious process with room for considerable efficiency gains, as the testers’ and developers’ machine environments did not align.

 

These were the days when cloud computing technologies were not very prevalent, or were just coming into the mainstream market, so a lot of time had to be spent setting up the test environment.

 

Many cycles were wasted trying to reproduce issues and to get the developer onto the same page as the tester in fully understanding an issue, regardless of how detailed the defect report was. All of this made defect management an overhead despite its importance and value.

 

Test Sign-Off

For the entire product team, this was the much-awaited activity from the tester’s side. The tester, after carefully weighing all the outcomes of his test execution effort, results from defect management, regression suites, code coverage, and the metrics defined for test exit in the test strategy, would be in a position to make the sign-off call.

 

This was a call made by the test manager and director in consultation with individual test leads who represented specific test teams.

 

Test sign-off was a very busy and important day, as the decision was a big one and also was so close to the release date that the anticipation would be very high.

 

Early in the days, when metrics were not used as much, this was a very subjective call to make, but over the course of the years, test teams implemented objective metrics that made the sign-off decision easier to make and more reliable to act upon.

 

Other Activities in Which Testers Were Involved

These were activities that not all testers did on all teams but were frequent enough that they are worth mentioning here.

 

Test Environment Setup While in most cases build engineers helped with test environment setup, this was the age before cloud-based setup became popular, so the task was often very laborious and time-consuming.

 

In the pre-cloud days, virtualization helped them quite a bit in setting up machine instances where testers needed cross-platform configurations across varied operating systems and browsers.

 

Build engineers who were experts with machine setup often helped save a lot of cycles for testers, but such an external dependency and the wait for a testable build also meant that the tester’s valuable test cycles were often eaten into.

 

Code Coverage Test Runs This was one of those early techniques testers used to bring traceability and objectivity into the test effort.

 

Testing on instrumented builds, testers were able to show what code paths their test efforts covered and what dead code areas existed, if any, and to plan for any additional testing that was needed.

 

Code coverage run was often taken up after one full test pass was completed and most often after test automation was built.

 

While code coverage runs can be taken up even on manual test efforts, testers often ran them on automated test suite runs. Code coverage results were also used to determine where to improve test automation coverage in the coming milestones.
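To make this concrete, here is a minimal sketch of driving a coverage run programmatically with Python’s coverage.py library. The module under test (calculator) and its functions are hypothetical stand-ins for an instrumented build; in practice, teams would run the tool around an entire suite run.

```python
import coverage

cov = coverage.Coverage()
cov.start()

# Exercise the (hypothetical) module while coverage is being recorded;
# a real run would execute the full manual or automated test pass here.
import calculator
assert calculator.add(2, 3) == 5
assert calculator.divide(10, 2) == 5

cov.stop()
cov.save()

# Report executed vs. missed lines; lines never hit point either to
# dead code or to gaps that need additional tests.
cov.report(show_missing=True)
```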

 

I’ve listed this as an additional activity since code coverage runs were not always performed, and even where they were done regularly, they were not always a task taken up by testers.

 

Static Code Reviews As mentioned in the earlier blog, testing initially started off as an activity taken up by the developers themselves. Later, when independent testing came into the picture, testers were an entity of their own on the team, but their focus was heavily on the black box side.

 

Even automation did not stretch much into the internal architecture of the application, focusing rather on top-layer functional automation and UI automation. However, on some teams, testers were involved in reviewing the code written by the developers at a static (nonexecution) level.

 

These were testers who understood code, who had the knowledge of programming languages, and who were able to stretch themselves into these extended horizons of testing.

 

Such activities certainly helped them bond better with the rest of the product team and also empowered them to understand the nuances of the application implementation better than the ones who merely focused on black box testing. The debate on whether testers should know programming languages continues to this day.

 

There continue to be excellent black box functional testers who know nothing about programming languages but who can find some top-notch bugs in the application and think exceedingly well from an end user’s shoes. That said, the additional value brought in by testers who do static code reviews cannot be discounted.

 

Live Site Debugging In the waterfall days, especially during the early stages of a product release, quick fix engineering was a common phenomenon. These were quick releases, also called hotfixes, to patch issues found in the live environment, most often reported by end users or people using the product in the field.

 

Since these were user-facing issues, teams had very little time to debug, fix, test, and rerelease the product. Testers played a very active role in shipping hotfixes to resolve such live issues.

 

How Test Attributes Were Tested

While the test attributes have largely remained the same over the years—be it functional, usability, performance, security, localization, etc.—the way we have tested them has changed significantly. In this section, let’s briefly look at how we used to test for each of these attributes in the yesteryears.

 

Functionality

This was the prime focus of a testing effort. A large percentage of test cases focused on validating the functional aspects of the application, including user-facing functionality; database, reporting, and administrative-level functionality; and so on.

 

Functional test cases formed a large portion of most test suites—be it the full test pass, sanity test pass, or regression test pass. Even for a build to be certified as test ready, the functional workflows would be the main elements to be tested.

 

Application Programming Interfaces (APIs) and web services grew in importance over the years, but early on APIs were not very prevalent, so functional testing specific to APIs and web services was limited. Also, most functional testing was done manually.

 

Teams typically picked UI test cases for automation, with a limited number of functional tests built into the automation suite. As they began to understand the Return on Investment (RoI) of test automation, an increasing focus on functional automation was brought in.
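As an illustration of where that shift led, here is a minimal sketch of a functional test written at the API level rather than through the UI, assuming pytest and requests; the endpoint, payload, and response fields are hypothetical.

```python
import requests

BASE_URL = "https://api.example.com"  # hypothetical service under test

def test_create_order_returns_confirmation():
    payload = {"item_id": 42, "quantity": 2}
    response = requests.post(f"{BASE_URL}/orders", json=payload, timeout=10)

    # Functional expectations: the right status code and echoed fields.
    assert response.status_code == 201
    body = response.json()
    assert body["item_id"] == 42
    assert body["status"] == "confirmed"
```

Such tests tend to be cheaper to maintain than UI scripts, which is one reason the RoI argument eventually favored functional automation.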

 

UI

User interface elements that brought the overall application’s design together were given a lot of importance in a testing effort. UI testing typically started a little later than the functional test cycle, to allow room for the UI to stabilize.

 

The positioning of page elements, color, font, style, and rendering were all taken into account in testing an application's UI. UI defects were one of the high bug count categories among the overall set of defects a tester reported.

 

Usability and Accessibility

While usability and accessibility were not new areas of testing back in the day, not much emphasis was given to these areas. The product team at large was more focused on functionality and UI and believed that these two together would garner enough customer satisfaction.

 

These elements started gaining more visibility when the market was open for more players to come in. As competition came in, these became differentiating factors helping one stand out against another.

 

There was an increased appreciation for usability, especially around simple application usage, intuitive workflows, graceful error handling, etc., all of which helped earn better customer acceptance in the marketplace.

 

Around the same time, accessibility standards such as Section 508, DDA, and WCAG started entering mainstream industry practice, building a niche need for accessibility testing to accommodate application usage by people with disabilities.

 

Performance

Load, stress, and capacity planning have all been important elements of software implementation and testing since the early years, but again, they took a back seat. Companies were not much bothered about performance aspects such as availability, page load, and response times.

 

In my experience, this has also been primarily because of the lack of competition. The large software players were monopolies who controlled the market. Even if their performance was not up to the mark, users had no choice but to stick with them.

 

With all of this confidence that the large players had built over the years, performance testing was more of a routine check that was conducted as opposed to a real value-add in the product.

 

Performance testing was also done slightly later in the game, where even if issues were reported, they were not acted upon in the same milestone.

 

Given the time and effort that goes into fixing them, they were often pushed out to be considered in subsequent milestones. Performance testers were slowly gaining prominence, but it was still largely about functional testing and test automation.

 

Security

Security has become a prime area of focus in recent years with growth in the online presence and increased penetration of services in the IT industry.

 

Back in the days of desktop applications, security was more of a physical than a digital concern. Security testing was done but was primarily limited to STRIDE, covering Spoofing, Tampering, Repudiation, Information Disclosure, Denial of Service, and Elevation of Privilege.

 

It was largely related to authentication and authorization-related issues and was often covered with a simple threat model analysis that testers used as a base for their testing efforts. While teams understood the importance of security testing, this also took a back seat when compared to functional and UI testing efforts.

 

Time permitting, teams conducted one to two rounds of security testing toward the end of the milestone.

 

At an industry level, knowledge of both security attacks and testing techniques was fairly limited. Over the years, however, security attacks have become increasingly complex and sophisticated, necessitating newer security testing tools and techniques.

 

Guidelines from bodies such as the Open Web Application Security Project (OWASP) have greatly helped testers gain understanding and bring in a focused security testing effort.

 

Pen testing organizations have done exceedingly well in the closing years of this “past time frame” bringing in newer testing practices including ethical hacking to gear up the industry to handle security well.

 

Globalization

Globalization has always played an important role in a software development effort. While global releases often trailed the core English release, this has been a more mature testing practice even from the early years, compared to areas such as performance and security.

 

Product release in global markets, for instance, an operating system from Microsoft that was made available in several languages or, say, software from Adobe that went into global markets, was something that was fairly common.

 

Core testers on the team took on internationalization and pseudo-localization testing, followed by localization testers who would come in for localization testing.

 

In addition to localization testing, a team of linguistic experts would also come in to verify contextual product implementation and suggest changes, if any, to enhance product acceptance in local markets.

 

While the process in itself was fairly mature, the challenge with globalization was that it was done too late in the game to incorporate the process tightly into the software testing and development life cycles.

 

Other Testing Types

Other than the core testing types and attributes discussed earlier, testing has always had more breadth to it. The fascinating aspect of software testing is that it is vast, running both horizontally and vertically.

 

Domains bring in a new dimension to software testing too where one could specialize in domain-specific areas such as testing for banking applications, health-care applications, etc., and building a niche around the workflows in each of them.

 

Some of the additional testing types that have traditionally existed in the world of software testing include the following:

 

Test Automation Automation in software testing is a very broad area. It continues to evolve on an ongoing basis and has been of significance since the early years.

 

The ways in which automation is done, the tools and frameworks used, what is automated, and when and how it is done are what change over time.

 

In those years, automation largely focused on UI elements and some functional aspects. Record-and-play tools were primarily used, and most of these were commercial tools, since open source tooling was not very popular or mature back then.

 

Automation was typically done after a few rounds of manual testing that ensured the system was stable and testers had a good knowledge of the application’s workflows.

 

Automation testers did not necessarily have a very good understanding of the system’s internals and relied on record and play tools to capture the system’s workflow in an automated manner.

 

While this was a quick and convenient way to build an automation suite, it had a lot of issues around the reliability and stability of the code, especially when the UI underwent changes.
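The fragility usually traced back to the locators that recorded scripts produced. Here is a minimal hand-written Selenium sketch of the alternative; the login page, its element IDs, and the URL are hypothetical.

```python
from selenium import webdriver
from selenium.webdriver.common.by import By

driver = webdriver.Chrome()
try:
    driver.get("https://app.example.com/login")  # hypothetical app

    # Recorders often pinned brittle absolute paths such as
    # "/html/body/div[2]/form/input[1]" that broke on any layout change.
    # Stable, semantic locators survive UI restructuring far better.
    driver.find_element(By.ID, "username").send_keys("qa_user")
    driver.find_element(By.ID, "password").send_keys("secret")
    driver.find_element(By.ID, "login-button").click()

    assert "Dashboard" in driver.title
finally:
    driver.quit()
```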

 

Compatibility Even back then, compatibility testing was of very high significance and value in a test effort. Tests had to be run on varied operating systems and browsers to ensure applications rendered fine and worked well.

 

There were often defects that were specific to a given operating system and browser combination that had to be fixed and verified. A couple of unique characteristics of compatibility testing in the past are as follows:

 

A few years back, virtualization was in vogue: testers would set up virtualized machines to switch between and run their tests across varied operating systems and browsers.

 

Until virtualization came in, the task of machine setup for compatibility testing was quite tedious, involving a plethora of physical machines.

 

Besides operating systems and browsers, other applications traditionally taken into account in compatibility testing included antivirus software, application firmware, and printer and other device software, as these were often seen as influencing factors that might interfere with the functionality of the application under test.

 

Need for optimization: Although the compatibility testing matrix was relatively small back then compared to what it is today, the need for optimization was felt then as well, primarily due to the lack of testing time and of the required infrastructure.

 

Optimization was done based on the market usage of the varied operating systems and browsers, on the basis of which tests would be grouped into full-test-pass and sanity-test-pass suites to enhance test coverage within the constraints of time and cost.
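One way such an optimized matrix can be expressed today is as parametrized tests. A minimal sketch, assuming pytest as the runner, with hypothetical OS/browser pairs and a stubbed session helper standing in for real machine provisioning:

```python
import pytest

def launch_session(os_name, browser):
    """Hypothetical stand-in for provisioning a machine/browser session."""
    class Session:
        def page_title(self, path):
            return "Home"  # placeholder result for the sketch
    return Session()

# Combinations pruned by market usage rather than the full cross product.
SANITY_MATRIX = [
    ("windows", "chrome"),
    ("windows", "edge"),
    ("macos", "safari"),
]

@pytest.mark.parametrize("os_name,browser", SANITY_MATRIX)
def test_home_page_renders(os_name, browser):
    session = launch_session(os_name, browser)
    assert session.page_title("/") == "Home"
```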

 

Integration Since several modules were often developed and tested as individual pieces, integration testing was very important back in the day. A tester would tie all the pieces together to ensure workflows worked well end to end.

 

It was not until later in the game that the application would be ready to test end to end, and while some automation was leveraged for this, integration tests were largely done manually. Most often, automation would begin after an integration test pass, once testers were able to gauge the stability of the application.

 

Acceptance This test pass often aligned with the exit criteria defined in the test strategy. It would include a suite of cases, mostly end-user scenarios, that would certify the readiness of the application for external consumption.

 

The acceptance test pass would often also be mapped to metrics such as percent of test cases passed and percent of tests failed in the given priority category, all of which together would help the test manager make the call on the application’s readiness for release.

 

Testing in the past, although later in the game, had its own rigor and discipline. It was a lengthy and elaborate process, not all of which truly added value to the quality of the product under test. We will discuss more of this around testing metrics, pros and cons of testing back in the days, in the subsequent sections.

 

Regression Testing A suite of test cases that were built, maintained and run by testers to ensure defect fixes or new check-ins did not break any existing implementation has been popular for over three decades now.

 

Regression testing has been given the attention it deserved from the very beginning—the only difference is over the years the tools that have been used and the way tests were prioritized have changed.

 

Early in the years, all tests used to be run as part of the regression suite. Not much optimization was really done. Also, most of these tests were run manually.

 

Over the years, testers have started adopting smart regression testing strategies—this included identifying the right set of test cases that together gave the required level of coverage rather than executing all tests. Also, critical tests from newer defects that were filed were added on to the regression test suite.

 

Teams started understanding that the automation RoI from a regression test suite was much higher than in any other area, and thus regression suites were identified as good candidates for test automation.
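A minimal sketch of how that plays out in a modern runner, assuming pytest with a custom marker (registered in pytest.ini) and a hypothetical checkout function under test:

```python
import pytest

def compute_total(price, discount):
    """Hypothetical stand-in for the checkout logic under test."""
    return price * (1 - discount)

@pytest.mark.regression
def test_discount_applied_to_total():
    # Maps to a past defect where discounts were dropped at checkout;
    # tagged so it runs in every regression pass.
    assert compute_total(price=100.0, discount=0.1) == pytest.approx(90.0)

def test_gift_wrap_option_is_new():
    # New-feature test: part of the full pass but not (yet) regression.
    assert compute_total(price=10.0, discount=0.0) == pytest.approx(10.0)
```

Running `pytest -m regression` then executes only the tagged subset, which is exactly the kind of selective, high-RoI run the smart strategies aimed for.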

 

Despite all the evolution regression testing has been going through, it still was more of a downstream activity in the yesteryears that would happen after check-ins were made and build came into the hands of the tester.

 

While several testing metrics are still valuable today, some were turning out to be mundane, driving additional unwanted overhead into a tester’s daily routine.

 

For example, the number of tests executed on a daily basis would often turn out to be a competition between testers to see who executed the most.

 

Sometimes, these numbers were irrationally high to the extent of 250 test cases per tester per day, making it a huge compromise on the actual quality of the test effort.

 

While it was good to see the use of metrics in the past years too, this was an area that needed a lot of improvement and customization to realize the true connection between these numbers and how they translated into an application’s quality.

 

Case Studies from Our Past Years of Software Testing

Having talked about how testing was done in the past in detail, I thought it would be useful to include a couple of case studies from our experience at QA InfoTech.

 

These are examples of projects that were implemented either in a waterfall model or a pseudo-waterfall-agile model.

 

These describe what each project was, what the requirements and challenges were, and what solutions we proposed to help the client alleviate the challenges, even when execution was in a pure waterfall model.

 

Case Study 1

The client was a leading publisher and aggregator providing educational content, tools, services, and other resources to academic libraries across the United States, Canada, and the rest of the world.

 

Problem QA was not rigorously involved in decision-making and requirement changes. This made it difficult to manage a well-balanced quality effort. Also, there was no collective ownership of quality, making it difficult for the test team to own gaps in the rest of the product team’s effort as well.

 

Additional Challenges

There were a lot of requirement documents, and multiple versions of them, to be reviewed and incorporated into the test effort. A lot of valuable time was lost in reviewing the requirement documents and exchanging comments.

 

Multiple features and workflows were grouped in single documents, making it laborious for the QA team to track changes and maintain versions.

 

The common repository was sometimes not updated with the latest versions of the documents, leaving the QA team clueless about what changes had come in.

 

Due to all of this documentation chaos, the QA team ended up logging issues that were rejected. This was a blow not just to the overall quality effort, additionally impacting testing costs and time, but also to the morale of the team.

  1. The test case count ran into the thousands, making it a herculean task to keep them updated per changes in the requirement documents.
  2. Test cases were maintained in Excel sheets; the multiple copies in circulation were becoming impractical to manage.
  3. QA cycles were typically 8–10 days long.
  4. The QA team executed all test cases in each and every cycle, with no optimization done.
  5. Production releases were scheduled quarterly, so live issues remained pending fixes for a very long time.
  6. There was a lot of dependency on the release engineering team to build deployments on the QA environment.
  7. Most importantly, only QA was considered responsible for quality.

 

Solution The QA team from QA InfoTech proposed a more collaborative approach that not only led to substantial time savings but also cut down on redundant effort. The following suggestions together enhanced the overall quality of the product.

 

Due to the onshore-offshore model, there were certain other communication challenges as well that needed to be addressed.

 

QA team decided to have some overlapping time (2–3 hours daily) with the onsite team.

 

QA team was made a part of all update meetings, making it a more participative environment—this led to other positive changes including more collaboration and a cooperative execution approach among team members.

 

Since the QA team also became part of the change request process, confusion was avoided in the first place, leading to huge savings in time and effort.

 

Effective version control using SourceForge was implemented, leading to an 80% reduction in effort. Since everyone had access to the latest versions, unnecessary defects were no longer logged; this significantly brought down the percentage of invalid defects filed.

 

These provided a positive change to issues the team was facing and also opened doors for a more collaborative approach in owning quality collectively.

 

Case Study 2

This case study is an example of a project where we faced quite a few challenges owing to the waterfall style of operations and how we turned it around with a blend of agile that we brought in.

 

Since agile was also a fairly common development methodology in that “past” time period we are talking about, I wanted to discuss this case study that runs from waterfall well into an agile implementation.

 

Client The client in discussion here is a leading education technology and services company for higher education and a global publisher of books in the United States. They have a presence across 1000 higher-education institutions in North America and are used by over two million students.

 

They offer digital textbooks, instructor supplements, online reference databases, distance learning courses, test preparation materials, and materials for specific academic disciplines.

 

With their expertise in student response systems, they focus on formative assessment and pedagogy, making their product a fitting audience response system for a customer base ranging from a growing number of K-12 customers to corporate training environments and trade shows.

 

Challenges

  1. Test releases were quite late, averaging just 7–10 days before the target deliverable date.
  2. Test passes were long, running well over 2–3 months.
  3. Engineers’ work estimates were exceptionally inaccurate. For instance, a task estimated at around 10 hours would often take 20–25 hours to complete.
  4. Quality was not at desired levels—the bug counts soared with each release.
  5. Roles and responsibilities were not well defined—work on a particular task would completely stop if the person in charge was not available.
  6. QA/testers and developers rarely collaborated with each other.

 

Solution To address these challenges, we adopted an agile process on our project. This helped us save time and minimize queries and false failures. Our process included adopting the following:

  1. Sprint dashboard—a master list of all the functionality desired in the product
  2. Sprint sizing meeting—to describe, prioritize, and estimate features
  3. Daily Scrum—to involve all team members in daily status sync meetings
  4. Sprint review meeting—to discuss what has been accomplished during the sprint

 

Approach We at QA InfoTech worked with the client to roll out this process, following the agile methodology.

 

Client Benefits

  1. All—Management, development, and QA teams started using the scrum dashboard regularly.
  2. All releases started falling into place in terms of time—We have not missed the target date in the last 2½ years.
  3. Sprint/iteration timings became much shorter—Releases were scheduled every 4–5 weeks.
  4. Effort estimates became precise—Only occasionally did we undersize a user story.
  5. New issues found in live environments came down significantly—By over 75% across all platforms, and the number continues to fall with each release.
  6. Teams were more empowered to make decisions that would benefit the company or clients.
  7. Teamwork—Daily interaction with developers gave better insights into client wants and needs.

 

Analyzing the Past

Having looked at the process and methodology of software testing and quality assurance in the past, let’s analyze to see what made sense and what was more of an overhead that did not really add much value:

 

The past definitely brought in rigor and discipline to software testing. It helped define and implement the software testing life cycle.

 

Given the detailed documentation that was in place, ramping up new testers into a given project or even to bootstrap them into the world of software testing was more feasible.

 

It certainly gave meaning and relevance to the world of independent software testing. Early in the days, the developers would themselves take on testing and this period was an important one to establish the need for independent testing.

 

There was more reliability in certain efforts such as localization testing where efforts could commence after fully verifying the readiness of the product in core English.

 

This was a time period when definitions and detailed processes were chalked out for several areas of testing: for example, threat modeling for security; baselining and profiling for performance; and standards for accessibility. Similarly, frameworks for automation started taking shape toward the end of this time frame.

 

On the flip side, there were a lot of process inefficiencies in this time frame:

A lot of emphasis was placed on documenting test efforts and scenarios and on creating artifacts. To add to this, these artifacts were hardly used in the actual test effort.

 

The test strategy often turned into an obsolete document that hardly anyone referred to. Sections such as resource allocation in the strategy did not bring much relevance or use to anyone on the team.

 

The cycle was such that testers hardly communicated with the rest of the product team, and even when they did, communication with the developers and build engineers was hard, bringing in a lot of disconnect in interpersonal team collaboration as well as in product understanding and updates.

 

Testers were often not part of important activities such as triage meetings. This held them back from being the true voice of end users on the team. Even if they had useful information to convey about the user impact and quality of the product, they often did not have the right platform to convey it on.

 

Focus was more on functional and UI elements of testing, while areas of specialization such as performance, security, and accessibility were not of significance early in the years.

 

As product teams moved from waterfall to agile, it was not just development teams that were impacted; test teams were hugely affected as well. In fact, the changes were more prominent on the test side of things. Although these were positive changes, it took a lot of time to embrace them given the magnitude of change involved.

 

Testing metrics were not realistic enough to promote healthy competition. Metrics such as the number of tests executed per day were more detrimental to tester morale than relevant to understanding a tester’s progress and status.

 

All of these made testing expensive and time-consuming. This not only impacted the quality of the product and the positioning of the tester but also often led to questioning the value testing brought to the table.

 

Software Testing Career Options

This was also a period when software testers were not held in very high repute. Testers often ended up in the profession after being unable to land software development roles.

 

Testers who took on manual testing were often not very well respected for their contributions, and their inputs regarding the quality of the product were taken only at face value.

 

While it is not possible to generalize this at an industry level, this was the larger prevailing sentiment. However, this was also the time frame where career options were beginning to take shape.

 

A tester could either progress along the individual technical charter, becoming a test analyst and architect, or take the managerial route to become a lead, manager, and director.

 

Specialization roles also slowly started coming in around the end of this time period, where testers could become performance, automation, security, localization, or usability specialists.

 

In all, although this time period was not a very glorious one for software testing, it was a very important one to set foundations around testing processes and careers.

 

In all, in this time period up to 2010, although there were downsides to the overall testing discipline, strong roots were laid to establish independent software quality assurance.

 

This was also the period where quality started moving from control to assurance. Thus, it laid the foundation for several new beginnings to further come in and set a strong example for adapting to newer changes that were on the horizon.

 

What Has Changed in Today’s Style of Development?

When we discussed the “past” in the previous blog, we looked at how, at an industry level, we were moving from a waterfall to an agile model. This also included the phase where most organizations were aggressively pursuing agile in the supposedly truest form.

 

This sudden jump from a traditionally adopted model to something completely new and drastically different had quite an impact on product teams. As with any new model, the industry at large had a lot of issues understanding and implementing agile in the early years.

 

However, around the start of the “current” time frame we are looking at, organizations started realizing the true requirements and goals of the agile manifesto and how they could be implemented in a customized form to meet their needs.

Oftentimes this was even a pseudo-agile model that had a slight flavor of other development life cycles as and when warranted.

 

By the time teams settled into the agile model of development, we were already into another flavor of development that further empowers teams to reap the benefits the agile world promises: the world of DevOps, where agile has become even leaner in its operations and truly collaborative in bringing teams together.

 

DevOps is a development model that seemingly includes only development and operations, but in its drive to promote continuous integration and delivery has a very important role for testers to play. Let’s briefly look at how the DevOps model looks and what is a tester’s role therein.

 

DevOps and Changing Role of Test in DevOps

In the earlier days, even with a full-blown agile implementation, although test was involved earlier in the life cycle, its dependencies on other teams made the overall effort linear in some sense. For example, until something testable came in from development, test teams were still not completely occupied.

 

Development, test, and operations teams were still operating as individual units despite working in parallel.

 

DevOps has changed this where all teams come together as a single unit to promote continuous delivery.

 

In specific scenarios, DevOps is seen as an evolution over the agile model (which focuses on collaboration at the plan, code, and build stages), continuous integration (which extends that collaboration to the test stage), and continuous delivery.

DevOps, in turn, is seen as a model that focuses on collaboration and value proposition across the plan, code, build, test, release, deploy, and operate stages. It is a model where continuous integration and delivery are made possible, and the entire team is able to operate collaboratively yet independently in product development.

 

One of the main requirements of a successful agile implementation is to have demonstrable software at the end of each release.

 

While teams understood this requirement and had an agreed set of user stories developed and tested, the developed software was still not pushed to production due to several constraints and limitations on the operations side.

 

Environment availability, configuration dependencies, and infrastructure support were some of the reasons, among others, that handcuffed the operations team in taking on their tasks to completion.

 

And all of this was not purely an operations issue. Business readiness to launch, including pricing models, competitive analysis, marketing readiness, and stakeholder sign-off, all had to come together to make ongoing releases to production a reality.

 

DevOps comes in with the promise of looking past the core development and test teams and enabling the others on the product team to join in and support anytime deployment.

 

What this calls for is increasing levels of automation. Traditionally, we have looked at automation purely from a testing and test coverage standpoint, whereas DevOps brings a completely new definition to automation.

 

In today’s scenario, automation is about automating build deployments, test processes, and unit tests, and about enhancing overall test coverage through code coverage, all with the goal of making the development process fast, hassle-free, and reliable, and code quality robust.
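A minimal sketch of that broader automation, chaining build, tests, and a staging deployment into one gated pipeline script; the commands, the deploy script, and the use of the pytest-cov plugin are all hypothetical placeholders for a team’s real toolchain:

```python
import subprocess
import sys

STAGES = [
    ("build", ["python", "-m", "build"]),
    ("unit tests with coverage", ["pytest", "--cov=app", "tests/unit"]),
    ("deploy to staging", ["python", "deploy.py", "--env", "staging"]),
]

for name, cmd in STAGES:
    print(f"--- running stage: {name} ---")
    result = subprocess.run(cmd)
    if result.returncode != 0:
        # A failing stage blocks everything downstream, keeping the
        # pipeline honest about what is actually releasable.
        sys.exit(f"stage '{name}' failed; halting pipeline")

print("all stages passed; build is ready for promotion")
```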

 

Unlike the agile model that came in all of a sudden, in a time when IT professionals were very deeply rooted in their traditional styles of development, DevOps has been a more gradual shift. Since it also evolved from the agile model as a base, teams have been more comfortable adapting to the requirements of DevOps.

 

Some of the core requirements that make DevOps feasible include a strong regression test suite, instrumented code, system configurability, back and forward code compatibility, and keeping track of overall product dependencies, among others.

 

Granted DevOps is a robust model that is here to stay; how has it really impacted the software testing fraternity? The core principles of DevOps around continuous delivery and readiness to ship at short intervals and in parallel work among all teams are the ones that have been impacting the test discipline.

 

What this means is there is a need for a lot of automation. This is automation that focuses not just on test suites but also automating test processes and efforts in all possible ways.

 

For example, from the test suite standpoint, the team is looking at building scalable automation frameworks that can take on regression, compatibility, functionality, and performance testing, among others.
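One common ingredient of such a scalable framework is a shared page-object layer that the regression, compatibility, and functional suites all reuse; a minimal sketch, assuming Selenium, with a hypothetical login page and locators:

```python
from selenium.webdriver.common.by import By

class LoginPage:
    """Encapsulates one screen so suites share locators, not duplicates."""

    URL = "https://app.example.com/login"  # hypothetical application

    def __init__(self, driver):
        self.driver = driver

    def open(self):
        self.driver.get(self.URL)
        return self

    def sign_in(self, user, password):
        self.driver.find_element(By.ID, "username").send_keys(user)
        self.driver.find_element(By.ID, "password").send_keys(password)
        self.driver.find_element(By.ID, "login-button").click()

# Any suite reuses the same page object, so a UI change is fixed in one
# place instead of in every test that touches the login screen.
def test_valid_login(driver):
    LoginPage(driver).open().sign_in("qa_user", "secret")
    assert "Dashboard" in driver.title
```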

 

From a test process standpoint, automated test deployments, automated defect and test case management, and automated metrics measurement and report generation are all areas to consider.

 

Automating test processes is a particularly cumbersome task. The reason is that there are many loose ends that can easily be handled manually; because of this, there is resistance to adopting automation in such areas even among automation engineers. For example, a survey by Intersog shows that 56% of bug tracking and 45% of defect closures happen manually.

 

Another piece of data shows that only 25% of unit testing, 7% of integration testing, 5% of system testing, and 3% of acceptance testing is automated end to end. The data also suggest that 42% of automatable tests are not automated.

 

If we can automate these to reach near 100% of unattended executions, imagine the amount of time and cost savings that would result in a DevOps environment.

 

Typically some of the core problem areas impeding comprehensive automation include the disconnect between defect tracking and test case management and the need for coding skills to implement automation.

 

To this effect, in our organization, for example, we leverage something called adaptive test automation frameworks—these are frameworks that support the goal of building on automation to reduce manual tasks in all possible spheres of tester operations.

 

This is a framework that encourages test case design and automation in human-readable format from within a defect so that all pieces of the test effort are tightly coupled. The framework is also simple enough to encourage manual testers to take on automation.
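The flavor of that human-readable style can be sketched with a tiny keyword-driven runner: plain-text steps (for example, pasted straight from a defect report) are mapped onto executable actions. Everything here, including the fake application driver, is a hypothetical illustration rather than our actual framework:

```python
STEPS = """
open login page
enter username qa_user
enter password secret
click login
verify title Dashboard
"""

class FakeApp:
    """Hypothetical stand-in for a real UI driver."""
    def open(self, page): print(f"opening {page}")
    def enter(self, field, value): print(f"typing {value!r} into {field}")
    def click(self, element): print(f"clicking {element}")
    def verify(self, prop, expected): print(f"checking {prop} == {expected}")

def run(steps, app):
    handlers = {
        "open": lambda arg: app.open(arg),
        "enter": lambda arg: app.enter(*arg.split(" ", 1)),
        "click": lambda arg: app.click(arg),
        "verify": lambda arg: app.verify(*arg.split(" ", 1)),
    }
    for line in steps.strip().splitlines():
        keyword, _, arg = line.partition(" ")
        handlers[keyword](arg)  # each plain-text step becomes an action

run(STEPS, FakeApp())
```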

 

That said, now is the time for manual testers to spread their wings into the test automation space. While manual testing will not go away, a lot of focus will also be rightly given to the world of automation.

 

Moving forward, a tester can push himself only so far with manual testing skills alone. In current times, automation is becoming almost inevitable.

 

Besides enhanced automation, the test has an added responsibility to take on root cause investigations to enable the rest of the product team to fix issues faster and more completely.

 

To be able to do this and think of deeper scenarios to test, it is becoming increasingly important for the test to thoroughly understand the larger business of the organization and varied integration points with the product under test.

 

While all of these have been traditionally recommended for Test to take on even during the days of pure agile, these are becoming almost inevitable in the current days of DevOps.

 

This number will only rise in the coming years and whether or not your organization has moved to the world of DevOps, it will be a smart move for you to already look at enhanced automation in your overall test effort and processes.

 

And in this smart approach that teams are attempting to embrace, most of them are working their way up toward automating existing feature sets, leaving manual testing primarily for the new user stories that come in to test.

 

Even such new user stories soon make their way into the regression suite, forcing the need to be automated sooner or later.

 

What New Technologies Have Impacted the Overall Test Strategy and Effort?

In this section, we will specifically talk about how varied technologies are impacting software testing and the kind of change they are bringing in.

 

Cloud

The cloud has had a major influence on how we test and what we test. In current times, this is only further intensifying, with all systems being looked at from a services angle.

 

For example, in the initial days, the cloud was all about software, platform, and infrastructure as a service. But today, this has taken shape in varied ways—for example, backend as a service.

 

Similarly, with the growing concerns around the public cloud, the private cloud started gaining popularity a few years back. As with any technology or domain, the industry starts off with an extreme solution and soon settles on a middle ground.

 

This is exactly what has been happening in the cloud space too, where we now have hybrid cloud evolving as a mix of both public and private clouds.

 

This is a solution where organizations leverage the benefits of both the private and public offerings, deciding which applications reside on the private segment and which on the public segment, together building a combined cloud strategy.

 

For the test team, this calls for a combined test strategy as well, one that focuses on the weaknesses of each of these areas.

 

For example, for applications on the public cloud, the test focus is largely on the security and performance angles, while for those on a private cloud, the focus is on customized feature implementations and on delivering the desired functionality at the lowest possible cost.
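
One way to make such a combined strategy actionable is to encode the per-segment test focus as simple configuration that a pipeline can read. The sketch below is a minimal illustration; the segment names and suite labels are assumptions, not a standard.

```python
# Illustrative hybrid-cloud test strategy: pick test suites based on where
# the application under test is deployed. Segments and suites are assumed.
STRATEGY = {
    "public":  ["security", "performance", "compatibility"],
    "private": ["functionality", "custom_features", "cost_profiling"],
}

def suites_for(app_segments):
    """Return the combined, de-duplicated suite list for an app's segments."""
    picked = []
    for segment in app_segments:
        for suite in STRATEGY.get(segment, []):
            if suite not in picked:
                picked.append(suite)
    return picked

print(suites_for(["public", "private"]))  # a hybrid app gets both focus areas
```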

 

The growth of the cloud, while providing a lot of flexibility for testers from a test engineering standpoint, has also expanded test opportunities and forced testers to think of custom test strategies.

 

Mobile

Mobile computing is probably the biggest technology change that has greatly impacted the tester’s realm of operations. Moving away from desktop or laptop-based web application testing to device or simulator/emulator-driven testing of web, native, or hybrid applications is a large change that testers are still getting used to.

 

From a physical standpoint as well, testing on hand-held mobile devices has been a big change. Obviously, it is not just the device that has brought in the change.

 

The device in combination with an application development mindset and the overall application development process has created a very large test impact. Mobile applications have brought in a lot of focus on nonfunctional testing elements such as performance, security, and usability.

 

Newer business models around m-commerce and content digitization have all triggered new testing opportunities.

 

In fact, it is interesting to see business strategies such as discounts offered only on the mobile app, mobile-only shopping events, and large retailers such as Flipkart in India working toward a mobile-only presence.

 

The crowdsourced testing strategy has become more prevalent given the need to test applications on a large range of mobile devices across realistic user scenarios. The tester himself is now encouraged to think like an end user.

 

Mobile testing is forcing testers to think deeper and bring out optimization opportunities—for example, we have a homegrown tool at QA InfoTech, called Mobile Application Strategy Tool.

 

This is a tool that helps you consider varied parameters and weigh them in order to make suggestions on what kind of testing should be done to maximize overall test coverage with a minimal set of tests to be run.

 

This will also help you make a call on where the tests should actually be run (whether on physical devices, on simulators, or through a crowdsourced team).
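
While the tool itself is homegrown and proprietary, the underlying idea of weighing parameters can be sketched as a simple weighted score; the parameters, weights, and candidate configurations below are invented for illustration and are not the tool's actual model.

```python
# A minimal weighted-scoring sketch: rank candidate test configurations so a
# small, high-value subset can be chosen. Weights and scores are illustrative.
WEIGHTS = {"market_share": 0.4, "defect_history": 0.35, "release_risk": 0.25}

candidates = [  # each score is on a 0-10 scale (assumed inputs)
    {"name": "Android 9 / mid-size phone", "market_share": 9, "defect_history": 7, "release_risk": 6},
    {"name": "iOS 12 / tablet",            "market_share": 5, "defect_history": 4, "release_risk": 8},
    {"name": "Android 8 / small phone",    "market_share": 3, "defect_history": 8, "release_risk": 4},
]

def score(c):
    return sum(WEIGHTS[k] * c[k] for k in WEIGHTS)

for c in sorted(candidates, key=score, reverse=True):
    print(f"{c['name']:<28} priority={score(c):.2f}")
```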

 

Additionally, testers have to make smart optimization choices based on past analysis of test results to help them further strategize their mobile testing efforts. From our experience, here are some quick tips to empower testers in their mobile test efforts (a short sketch applying tips 3 and 4 follows the list):

 

  1. Functional issues depend more on the OS than on device size or screen resolution.
  2. User interface issues are more specific to device size or screen resolution than to the OS.
  3. Different devices with the same screen size and OS version tend to give identical results.
  4. Devices with different screen sizes but the same OS version tend to give different results.
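
Tips 3 and 4 suggest a practical reduction: if results hinge on the (OS version, screen size) pair, one device per unique pair covers the matrix. Here is a minimal Python sketch of that idea; the device list is an invented example.

```python
# Applying tips 3 and 4: keep one device per unique (OS version, screen size)
# pair to cover the matrix with fewer devices. Device data is illustrative.
devices = [
    ("Pixel 3",   "Android 9", '5.5"'),
    ("Galaxy S9", "Android 9", '5.8"'),
    ("Galaxy S8", "Android 9", '5.8"'),   # duplicate (OS, size) pair
    ("Moto G6",   "Android 8", '5.7"'),
]

seen, minimal_set = set(), []
for name, os_version, size in devices:
    key = (os_version, size)
    if key not in seen:
        seen.add(key)
        minimal_set.append(name)

print("Minimal device set:", minimal_set)  # Galaxy S8 dropped as redundant
```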

 

Wearable Computing and Augmented Reality

In continuation of mobile computing, the next in line is wearable computing and augmented reality. In very simple terms, wearable computing means a wearable device with built-in computing capabilities that can render valuable information to end users across a range of functionalities.

 

And augmented reality, whether mobile or nonmobile, enhances a real-life experience by augmenting it with relevant and timely information for end users in varied shapes and forms.

 

Head-mounted displays and smartwatches are popular examples of wearable devices that we are seeing in recent times.

 

Testing for apps developed for wearable devices is a huge market; these apps are tested not just as stand-alone apps but also for the computing elements that integrate with varied other apps and devices across the mobile and nonmobile segments.

 

Similarly, testing for augmented reality apps calls for a number of pieces that integrate together and several out-of-the-box scenarios to be tried.

 

Augmented reality is also a valuable discipline for the tester's own toolkit: beyond purely focusing on how to test augmented reality applications, the tester can leverage it to enhance his productivity.

 

The market for wearable computing was expected to touch $30 billion by 2018. Augmented reality, even for the nonmobile segment, was expected to touch $1 billion by the same time.

 

Testing for these two segments requires a lot of out-of-the-box thinking: creative end-user scenarios, varied integration checks, and test areas that cover usability, compatibility, functionality, performance, and so on.

 

The tester also has to be aware of known limitations in these areas, since these are still evolving as we speak. For example, security is still a large unaddressed area in wearable computing.

 

Surface detection is a known gap in augmented reality. In newer areas such as these, the tester will also have to rely increasingly on community knowledge to determine what additional scenarios to try.

 

Crowdsourced testing to get end users to test is also very valuable. How to test augmented reality applications and also how to leverage augmented reality to enhance a tester’s productivity are topics of their own, worth individual consideration.

 

Also, core testers may have to take on tests outside of the traditional lab, as some scenarios may have to be tried in the field.

 

Newer computing areas such as these are forcing testers to rethink their test strategy and the balance between manual and automated tests—while we understand the value of automated simulated tests, there is also tremendous value in field-driven manual testing.

 

So, one of the questions to consider is whether advancements in technology are taking us to the grassroots of testing.

 

Social Media

Social presence is not new. It has been influencing product development efforts significantly over the past decade, even more so in the last five years. What impact does social presence, be it your Facebook, Twitter, LinkedIn, Pinterest, or any other app, have on your testing effort?

 

The first thing to evaluate is whether your application or product under test has a social presence. If so, how you would test for it, both as a stand-alone app and as an app in the social climate, is something to plan for and build into your test strategy.

 

Additionally, in the current day, whether or not the application’s functionality has direct relevance to social media, there is a lot that the tester can seek from social media. All applications have a social face and users are very vocal on such mediums today.

 

Their feedback, both positive and negative, is a great input to consider for additional testing in a given release or for enhancements in subsequent releases.
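
As a hedged illustration of mining such feedback for test planning, the sketch below tallies which product areas draw negative mentions; the sample comments and the keyword-to-area mapping are invented for illustration.

```python
# Tally which product areas draw negative feedback so they can be prioritized
# for extra testing. Comments and the keyword-to-area map are illustrative.
from collections import Counter

AREA_KEYWORDS = {"checkout": "payments", "crash": "stability",
                 "slow": "performance", "login": "authentication"}

comments = [
    "App crashes on startup after the update",
    "Checkout keeps failing with my saved card",
    "Login is slow on 4G",
]

hits = Counter()
for comment in comments:
    for keyword, area in AREA_KEYWORDS.items():
        if keyword in comment.lower():
            hits[area] += 1

for area, count in hits.most_common():
    print(f"{area}: {count} mention(s) -> candidate for extra test focus")
```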

 

Although the marketing team typically maintains the social presence of organizations and products, a tester can use this as an important source to understand the end-user satisfaction and overall product quality.
