Software Testing (Best Tutorial 2019)

Software Testing

Software Testing Attributes

A testing effort brings together several testing attributes. These include functionality, UI, usability, performance, security, accessibility, and localization, to name the core ones. Together, all of these determine the quality of the product to be released in the market. These attributes have themselves undergone a lot of change owing to factors such as market conditions, prevailing external forces, and end-user requirements, which have brought significant changes in how, when, and how much testing is needed.

 

To that extent, test attributes can be seen both as a change agent and as a change in themselves. For example, as always, functionality continues to remain the key in determining the success of a product. But these other testing attributes are also gaining prominence, thereby determining what kind of testing is required and what together should constitute a test strategy.

 

Let us take the case of security testing. During the days of desktop applications, security was not too much of an issue. It was more confined to physical application access or information exchange over a client-server model. A simple threat model and STRIDE analysis could easily take care of the security testing effort. But today, with the growth in online services, cyber security is a huge issue.

 

Ethical hacking, penetration testing, tools to constantly monitor applications, and a checklist of vulnerabilities to track have all become critical. With all of this change, security testing has come into the limelight, with a need for specialists and subject matter experts who need to be at least readily available to the test effort on demand, if not resident on the team. In one sense, I look at test attributes as a secondary change agent, as these are not changes that impact the testing effort directly.

 

Technology

This is another huge change agent that has brought about changes in what we test and how we test. The technology world has itself undergone rapid change in the last two decades, whether it be the Internet revolution at the very core or the newer computing technologies around cloud, social, mobile, analytics, wearables, augmented reality, and so on. Each of these has had a profound impact on the market at large and at a software development industry level.

 

And the nice thing is that the industry is also aligning toward integrating these technologies—for example, the emergence of SMAC to bring together the social, mobile, analytics, and cloud computing worlds. Changes such as the complexity of IT systems, fast adoption across markets, and how diverse systems are embracing varied technologies into their fold have all, in their own ways, brought newer approaches and requirements to software testing. Technology is one area that continues to have a two-sided impact on software testing: how to test for the technology and how to leverage the technology in testing.

 

Let’s take cloud computing as an example here. With the introduction of the cloud, as several applications became online services, testers had to quickly gear up to test software as a service and, at the same time, leverage the cloud to build their test machines. Infrastructure as a service was a huge boon to testers, significantly saving time on machine setup, especially in areas such as performance testing.
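
As an illustration of that second point, below is a minimal sketch of how a tester might provision a disposable test machine through an infrastructure-as-a-service API. AWS EC2 via boto3 is used here as an assumed example; the image ID, instance type, region, and tag values are placeholders, not a prescribed setup.

```python
# Minimal sketch (assumption: AWS EC2 via boto3): provisioning a
# disposable test machine through an infrastructure-as-a-service API.
# The image ID, instance type, region, and tag values are placeholders.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",  # placeholder machine image
    InstanceType="t2.micro",          # placeholder size for a test box
    MinCount=1,
    MaxCount=1,
    TagSpecifications=[{
        "ResourceType": "instance",
        "Tags": [{"Key": "Purpose", "Value": "performance-test"}],
    }],
)
instance_id = response["Instances"][0]["InstanceId"]
print(f"Test machine {instance_id} is being provisioned")

# Tear the machine down once the test run is over so costs stay bounded.
ec2.terminate_instances(InstanceIds=[instance_id])
```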

 

Oftentimes, we as testers are heavily focused only on how to test a technology; but to make ourselves more efficient and productive, now is the time to also see how to leverage the technology to make our lives simpler. The examples here are countless. If we start looking at all technologies from this two-pronged approach, it will bring greater efficiencies into our scope of operations.

 

Market and End-User Requirements

This is yet another important variable that has brought in sweeping changes in what we test and how we test. We continue to be end-user representatives on the product team. Changing market requirements, whether it be fast time to market, expectations of a rich feature set, or quick response to end-user wants, have all made test teams increasingly agile and nimble. It is still an open question whether agile development brought in these changes in the market or market needs brought in the agile style of development.

 

Regardless, the dynamics for testers have changed for the good: they are able to respond to change requests very quickly. They also now understand that although there is a test-complete milestone before every product release, the test effort is truly ongoing, with constant interaction with end users even after live deployment.

 

Also, given how global software development has become, the reverse, where end users become testers, is also now a reality. They come in to evaluate the product and provide feedback before its release in the form of crowd-sourced testing. This has become a valuable technique to help deliver a quality release given the limited time and money on hand, despite the ever-increasing test scope.

 

Service-Oriented Mind-Set

This change agent ties back to the advent of cloud computing, where the overall industry mindset changed to building service-oriented applications. Office, a very successful boxed product that remains a cash cow for Microsoft, now has its online subscription model, Office 365.

 

Products across the board have moved to an online services model, which makes it easy and lightweight for users to adopt and, at the same time, provides full control and management power for the ISV in connecting with end users. This service-oriented style of development brings in newer requirements in testing around performance, security, and usability of the application, making these as important as core product functionality.

 

Markets and Compliances

With how complex the online market has become, it is increasingly important to bring in checks and balances through compliance requirements that systems have to adhere to and get certified on. For example, the U.S. government mandates Section 508 compliance to ensure its systems are accessible.

 

This calls for a rigorous round of testing, based on checklists such as the Voluntary Product Accessibility Template (VPAT), that testers can leverage on top of their exploratory and scripted tests. Compliance requirements in varied disciplines such as banking and healthcare have also brought newer changes and opportunities to software testers.
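
To make this concrete, here is a minimal sketch of one small, automatable accessibility check that such a checklist typically includes, flagging images that lack alternative text. The URL is a placeholder, and a real Section 508 or VPAT-driven effort covers far more than this single rule.

```python
# Minimal sketch: one automatable accessibility check (images without
# alternative text), the kind of item a Section 508/VPAT checklist covers.
# The URL is a placeholder; a real accessibility pass covers far more.
import requests
from bs4 import BeautifulSoup

page = requests.get("https://example.com/catalog")  # placeholder URL
soup = BeautifulSoup(page.text, "html.parser")

# Collect every image that has no alt attribute or an empty one.
missing_alt = [img.get("src") for img in soup.find_all("img")
               if not img.get("alt")]

if missing_alt:
    print(f"{len(missing_alt)} image(s) lack alt text:")
    for src in missing_alt:
        print(f"  - {src}")
else:
    print("All images declare alt text")
```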

 

While there are more such agents that bring changes to software testing, these are the core ones for us to track on an ongoing basis, as together they influence most of what happens in the software quality and testing world.

 

Another lingering question that people have, not specific to software testing, is how frequently change should happen. This is not an easy question to answer, as most often not all changes are controlled internally. Several changes are driven by external factors where the organization or the team has no option but to change. In such cases, obviously one has to be ready to jump in and embrace the change as quickly as possible with the least amount of impact.

 

But for the others, which are internal planned changes or external changes that one can decide on and adopt, a certain amount of advance planning really helps. Also, a reasonable gap between two changes not only gives the teams a better opportunity to accept them but also lets the existing process stabilize, doing justice to the time and money that would have gone into its implementation.

 

How Did Testing Fit into the SDLC of the Past Years?

Waterfall was the dominant SDLC model adopted in the past few decades. While other models such as the V-shaped model and the spiral model were also in common use, waterfall had the dominant share of the market. A study conducted as late as 2010 showed that while organizations were increasingly moving to agile, about 59% mentioned they still used a combination of agile and waterfall methodologies, 12% used purely waterfall, and 5% used purely agile. Waterfall has been so deeply rooted in the software development community at large, since the days of early product development, that its reign remained strong until just a few years ago.

 

If we take the pure waterfall model, the development phases were very distinct and gated. Each phase had ample time to get the required tasks in its scope done, whether it be the initial stages of requirements gathering and design or the final stages of testing, deployment and release, and maintenance. The last few stages were often feared to be more time constrained, but in the traditional waterfall days, when products were more boxed in nature with long release cycles often running into months, this was not too much of a concern.

 

Testers often did not have any communication with the rest of the development team until their work started later in the game. This was one of the huge downsides of the waterfall model where teams worked in a very disconnected manner. They lacked the larger contextual understanding of their role and their tasks, leading to a very loosely coupled effort that ended up impacting end-user satisfaction.

 

Even when a formal project management effort tied up all these loose ends, there was a lot of drain in terms of overall development costs and cost of quality, with quite a gap to be fixed between end-user requirements and the delivered product. While this style of development has not gone away completely today, a large portion of it has been washed away with the increasing adoption of the agile style of development.

 

How Did the STLC Itself Look?

The STLC has had its own independent flow both in the days of waterfall and in agile. The rigor with which it has been implemented and the customized flows it has taken have undergone change, but at a high level, having an STLC has been greatly valuable in bringing the desired level of discipline to the overall testing effort.

 

The varied efforts traditionally adopted in a software test life cycle include test strategy creation, test plan creation, test design, test execution, defect management (including regression testing), and test sign-off. Test teams have also engaged not just in testing on non-live environments but also in debugging, tracking, and testing issues in live environments.

 

Testers continue to be end-user representatives on the team, but how they have contributed in that role has varied over the years. During the yesteryears we are talking about, even though they represented the end user’s quality needs, they were not empowered to do much, given how late in the game they got into the testing cycle.

 

Even if they had great ideas and suggestions, they were not empowered enough to get them implemented. Before we discuss each of the stages of the testing life cycle in greater detail, let’s talk about the salient features that characterized a test effort in the past:

 

1. Heavy emphasis on test document and test artifact creation

2. Strong focus on black box testing, even if the test effort itself was independent in nature

3. Gated approach to start the test execution effort only after coding was complete

4. Heavy reliance on metrics that often don’t convey the true meaning of quality

5. The onus of complete quality passed on to testers, with a lack of accountability among the rest of the product team

6. Main focus on product functionality, with nonfunctional areas such as performance, security, usability, and accessibility often taking a complete back seat

7. Pressure to sign off on the test effort to release on time, despite having started late

 

All of these core characteristics made it very difficult to deliver quality releases. The tester was also demotivated, as his efforts seemed to go unnoticed. If at all, he became visible only when live issues were seen in the production environment. He did not have an environment to thrive in and was merely surviving, which made it difficult for him to collaborate with the rest of the product team. He was seen as a step down compared to his counterparts in the design and development teams.

 

None of this offered a very conducive environment, either for the quality of the product or for the morale of the team. While all of this sounds less than positive, the activities in the STLC were themselves of great value when seen individually and implemented at the right levels. So, what are those activities and what is done in each of them? Let’s talk about these activities one by one.

 

Test Strategy

When a test effort was initially conceptualized as an independent activity to be performed by testers who were not involved in the development effort, the need for a test strategy came in. This is a document that covers the product, its architecture, the varied test areas (for which test plans would subsequently be created), entry and exit criteria for the overall release, resource mapping identifying which testers would work on which components, the group’s overall test and defect management practices, and so on.

 

This served as a very valuable document in the waterfall days, helping testers pause and understand the larger context of what they were involved in, how each other’s work interlaced, and what the common practices in the team were.

 

However, it does take time to create a test strategy. In a large group where a number of modules are being tested, creating a test strategy can even turn out to be a one-month exercise. This is a lot of time to spend upfront, but if the right effort goes into shaping the strategy into relevant and meaningful information for the group, it can pay off in the long run. For it to continue to be of value to the team, the test strategy must be maintained and updated regularly, as well as used by the team as and when required.

 

Test Planning

Some teams combine the test strategy and the test plan to save the time involved in creating them as well as the overhead of maintaining multiple documents, but in the truest essence of the traditional testing life cycle, these were both independent documents. After creating a test strategy, a small group of testers would be tasked with creating individual module level test plans.

 

Let’s say a test plan to test the web workflow, a test plan for the desktop version, a test plan for the reporting module, a test plan for the overall database implementation, one from the performance and load angle, one for the security of the application, and so on.

 

This assigned individual responsibilities, enabling a small group of testers to focus in detail on their specific areas: defining scenarios, the kinds of tests they planned to take on, the test environment needed, test estimates, risks and mitigations, and so on. Such plans gave the tester time to think through all elements of testing upfront, even before testing could commence.

 

Test plans again brought their own value despite the time and effort involved upfront, but, as mentioned, given these overheads, teams soon moved to combining the strategy and the plan, such that everyone used the same core principles but individually customized test scenarios.

 

Test Design

A lot of emphasis was placed on designing test cases. While some teams left test cases at the scenario level, most teams would take the scenarios from the test plans and elaborate them into a set of individual test cases of varying levels of detail. Some test cases were so detailed that a new tester coming into the test effort could merely review them to understand application workflows.

 

While such granularity helped ensure there was no confusion about which tests were executed and also helped train new people, it also meant that much more time was spent writing test cases, time during which the tester could have actually been testing the product. Maintenance was also a huge overhead: even for a small feature change, the tester had to spend a lot of time updating the test cases.

 

Test Execution

 

By the time the tester got his hands dirty with the actual test execution effort, much time would often have elapsed in getting the test planning done and the artifacts created. By then, the rest of the product team would most likely have moved into a product release mindset.

 

The testers were highly focused on executing the scripted test cases that they had earlier designed. We will discuss what kinds of testing they took on in a subsequent section, but the discussion around a tester’s productivity would often center on how many test cases were executed.

 

This was considered very important because testers sometimes had to achieve unrealistic numbers, to the tune of even 250 test cases a day. This made execution a very monotonous activity that hindered the true creativity of a tester. A tester’s typical day also involved recording results from all these executed tests and filing defects for issues found. All of this added to the tester’s documentation overhead; he was often exhausted at the end of the day, with no room left to try exploratory testing.

 

Defect Management

A lot of emphasis was rightly placed on defect management. The tester was responsible all the way from finding the defect, to filing it, following up to get it fixed, regressing it in the right environment, adding new test cases that map to the defect, and updating the regression suite to ensure the new test cases were taken into account in subsequent milestones. While each of these activities is very important on its own, the larger question was whether the tester was truly empowered to drive defects to closure.

 

Testers often had no say in, or did not participate in, defect triages, giving the developer supreme power to fix the defects they saw fit. The tester, who truly understood the impact a defect had on end users, often did not have an opportunity to raise it in the defect triage meetings. Defect management was also, in most cases, a laborious process with room for considerable efficiency gains, as the tester’s and developer’s machine environments did not align.

 

These were days when cloud computing technologies were not very prevalent or were just coming into the mainstream market, so a lot of time had to be spent setting up the test environment. There were many wasted cycles trying to reproduce issues and helping the developer get on the same page as the tester in understanding an issue fully, regardless of how detailed the defect report was. All of this made defect management an overhead despite its importance and value.

 

Test Sign-Off

For the entire product team, this was the much-awaited activity from the tester’s side. The tester, after carefully weighing all the outcomes of the test execution effort, results from defect management, regression suites, code coverage, and the metrics defined for test exit in the test strategy, would be in a position to make the sign-off call. This was a call made by the test manager and director in consultation with individual test leads who represented specific test teams.

 

Test sign-off was a very busy and important day, as the decision was a big one and was also so close to the release date that anticipation would be very high. Early in the days, when metrics were not used as much, this call was a very subjective one to make, but over the course of the years, test teams implemented objective metrics that made the sign-off decision easier to make and more reliable to act upon.

 

Other Activities in Which Testers Were Involved

These were activities that not all testers did on all teams but were frequent enough that they are worth mentioning here.

Test Environment Setup While in most cases build engineers helped with the test environment setup, this was the age before cloud-based setups became popular, and the test setup was often a very laborious and time-consuming task. In the pre-cloud days, virtualization helped quite a bit in setting up machine instances where testers needed cross-platform configurations across varied operating systems and browsers.

 

Build engineers who were experts with machine setup often helped save a lot of cycles for testers, but such an external dependency and the wait for a testable build also meant that the tester’s valuable test cycles were often eaten into.

 

Code Coverage Test Runs This was one of those early techniques that testers used to bring traceability and objectivity into the test effort. Testing on instrumented builds, testers were able to showcase what code paths their test efforts covered and what the dead code areas were, if any, and plan for any additional testing that was needed. A code coverage run was often taken up after one full test pass was completed, and most often after test automation was built.

 

While code coverage runs can be taken up even on manual test efforts, testers often took them up on automated test suite runs. Code coverage results were also used to determine where to improve test automation coverage in the coming milestones. I’ve listed this as an additional activity for testers since code coverage runs were not always performed, and even where they were done regularly, they were not always a task taken up by testers.
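
For a sense of what such a run involves, here is a minimal sketch using coverage.py to record coverage while an automated unittest suite executes; the "tests" directory name is a placeholder.

```python
# Minimal sketch: recording code coverage while an automated suite runs,
# using coverage.py. The "tests" directory name is a placeholder.
import unittest

import coverage

cov = coverage.Coverage()
cov.start()

# Run the automated suite while coverage is being recorded.
suite = unittest.defaultTestLoader.discover("tests")
unittest.TextTestRunner().run(suite)

cov.stop()
cov.save()

# Report which lines were exercised; the gaps point at dead code or at
# areas where additional tests (or automation) are needed.
cov.report(show_missing=True)
```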

 

Static Code Reviews As mentioned earlier, testing initially started off as an activity taken up by the developers themselves. Later, when independent testing came into the picture, testers were an entity of their own on the team, but their focus was heavily on the black box side.

 

Even automation did not stretch much into the internal architecture of the application but rather focused on top-layer functional automation and UI automation. However, in some teams, testers were involved in reviewing the code written by the developers at a static (non-execution) level. These were testers who understood code, who had knowledge of programming languages, and who were able to stretch themselves into these extended horizons of testing.

 

Such activities certainly helped them bond better with the rest of the product team and also empowered them to understand the nuances of the application implementation better than the ones who merely focused on black box testing. The debate on whether testers should know programming languages continues to this day.

 

There continue to be excellent black box functional testers who know nothing about programming languages but who can find some top-notch bugs in the application and think exceedingly well from an end user’s shoes. That said, the additional value brought in by testers who do static code reviews cannot be discounted.

 

Live Site Debugging In the waterfall days, especially during the early stages of product release, quick fix engineering was a common phenomenon. These were quick releases, also called hotfixes, to patch issues found in the live environment, most often reported by end users or people using the product in the field. Since these were user-facing issues, teams had very little time to debug, fix, test, and rerelease the product. Testers played a very active role in shipping hotfixes to resolve such live issues.

 

How Test Attributes Were Tested

While the test attributes have largely remained the same over the years—be it functional, usability, performance, security, localization, etc.—the way we have tested them has changed significantly. In this section, let’s briefly look at how we used to test for each of these attributes in the yesteryears.

 

Functionality

This was the prime focus in a testing effort. A large percentage of test cases were focused on validating the functional aspects of the application. This would include the user-facing functionalities, database, reporting and administrative-level functionalities, and so on. Functional test cases formed a large portion of most test suites, be it the full test pass, sanity test pass, or regression test pass. Even for a build to be certified as test ready, the functional workflows would be the main elements to be tested.

 

Application Programming Interfaces (APIs) and web services grew in importance over the years, but early on APIs were not very prevalent, so the focus on functional testing specific to APIs and web services was limited. Also, most of the functional testing was done manually. Teams typically picked UI test cases for automation, with a limited number of functional tests built into the automation suite. As teams started understanding the importance of Return on Investment (RoI) in test automation, an increasing focus on functional automation was brought in.
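
As a point of reference for what functional automation at the API level looks like, here is a minimal sketch of a web service check using Python's requests library; the endpoint URL and the expected fields are placeholders.

```python
# Minimal sketch: a functional check against a web service endpoint.
# The endpoint URL and the expected fields are placeholders.
import requests


def test_get_report_summary():
    response = requests.get("https://api.example.com/v1/reports/42")
    # Functional expectations: status, content type, and payload shape.
    assert response.status_code == 200
    assert response.headers["Content-Type"].startswith("application/json")
    body = response.json()
    assert body["report_id"] == 42
    assert "generated_at" in body


if __name__ == "__main__":
    test_get_report_summary()
    print("API functional check passed")
```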

 

UI

 

User interface elements that brought the overall application’s design together were given a lot of importance in a testing effort. This testing typically started a little later than the functional test cycle, to allow room for the UI to stabilize. The positioning of page elements, color, font, style, and rendering were all taken into account in testing an application's UI. UI defects were one of the high bug count categories among the overall set of defects a tester reported.

 

Usability and Accessibility

While usability and accessibility were not new areas of testing back in the day, not much emphasis was given to these areas. The product team at large was more focused on functionality and UI and believed that these two together would garner enough customer satisfaction. These elements started gaining more visibility when the market was open for more players to come in. As competition came in, these became differentiating factors helping one stand out against another.

 

There was an increased appreciation for usability, especially around areas of simple application usage, intuitive workflows, graceful error scenarios, etc., all of which helped earn better customer acceptance in the marketplace. Around the same time, accessibility standards such as Section 508, the DDA, and WCAG started entering mainstream industry practice, creating a niche need for accessibility testing to accommodate application usage by people with disabilities.

 

Performance

Load, stress, and overall capacity planning have all been important elements in software implementation and testing since the early years, but again, they took a back seat. Companies were not so bothered about performance and aspects such as availability, page load, and response times.

 

In my experience, this has also been primarily because of the lack of competition. The large software players were monopolies who controlled the market. Even if their performance was not up to the mark, users had no choice but to stick with them. With all of this confidence that the large players had built over the years, performance testing was more of a routine check that was conducted as opposed to a real value-add in the product.

 

Performance testing was also done slightly later in the game, where even if issues were reported, they were not acted upon in the same milestone. Given the time and effort that goes into fixing them, they were often pushed out to be considered in subsequent milestones. Performance testers were slowly gaining prominence, but it was still largely about functional testing and test automation.
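
For context, even a basic response-time check of the kind these early performance passes ran can be expressed in a few lines. The sketch below measures a page's response times over repeated requests; the URL and sample count are placeholders, and dedicated load and stress tools go much further than this.

```python
# Minimal sketch: measuring page response times over repeated requests.
# The URL and sample count are placeholders; dedicated load and stress
# tools go much further than this.
import statistics
import time

import requests

URL = "https://example.com/login"  # placeholder page under test
SAMPLES = 20

timings = []
for _ in range(SAMPLES):
    start = time.perf_counter()
    requests.get(URL, timeout=10)
    timings.append(time.perf_counter() - start)

print(f"median response time: {statistics.median(timings):.3f}s")
print(f"95th percentile:      {statistics.quantiles(timings, n=20)[-1]:.3f}s")
```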

 

Security

Security has become a prime area of focus in recent years with the growth in online presence and increased penetration of services in the IT industry. Back in the days of desktop applications, security was more of a physical than a digital concern. Security testing was done but was primarily limited to STRIDE, covering Spoofing, Tampering, Repudiation, Information Disclosure, Denial of Service, and Elevation of Privilege.
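
One compact way to picture a STRIDE-based checklist is as a mapping from each threat category to at least one test question, as in the sketch below; the questions are illustrative examples rather than an exhaustive list.

```python
# Minimal sketch: a STRIDE checklist captured as data so that each threat
# category drives at least one test idea. The questions are illustrative.
STRIDE_CHECKLIST = {
    "Spoofing": "Can a user authenticate as someone else?",
    "Tampering": "Can request parameters or stored data be altered undetected?",
    "Repudiation": "Are sensitive actions logged well enough to prove who did them?",
    "Information Disclosure": "Do errors or responses leak data to the wrong audience?",
    "Denial of Service": "Does malformed or excessive input make the service unavailable?",
    "Elevation of Privilege": "Can a regular user reach administrator-only functionality?",
}

for threat, question in STRIDE_CHECKLIST.items():
    print(f"{threat}: {question}")
```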

 

It was largely related to authentication and authorization-related issues and was often covered with a simple threat model analysis that testers used as a base for their testing efforts. While teams understood the importance of security testing, this also took a back seat when compared to functional and UI testing efforts.

 

Time permitting, teams conducted one or two rounds of security testing toward the end of the milestone. At an industry level, knowledge of both security attacks and testing techniques was fairly limited. However, over the years, security attacks have become increasingly complex and sophisticated, necessitating newer security testing tools and techniques.

 

Guidelines from bodies such as the Open Web Application Security Project (OWASP) have greatly helped testers gain understanding and bring in a focused security testing effort. Pen testing organizations have done exceedingly well in the closing years of this “past time frame,” bringing in newer testing practices, including ethical hacking, to gear up the industry to handle security well.

 

Globalization

Globalization has always played an important role in a software development effort. While it was often delayed relative to the core English release, it has been a more mature testing practice, even from the early years, compared to areas such as performance and security. Releasing products in global markets, for instance an operating system from Microsoft made available in several languages or, say, software from Adobe going into global markets, was fairly common.

 

Core testers on the team took on internationalization and pseudo-localization testing, followed by localization testers who would come in for localization testing. In addition to localization testing, a team of linguistic experts would also come in to verify contextual product implementation and suggest changes, if any, to enhance product acceptance in local markets. While the process in itself was fairly mature, the challenge with globalization was that it was done too late in the game to incorporate the process tightly into the software testing and development life cycles.

 

Other Testing Types

Other than the core testing types or attributes discussed earlier, testing has always had more breadth to it. The fascinating aspect of software testing is that it is vast, running both horizontally and vertically. Domains bring in a new dimension to software testing too, where one could specialize in domain-specific areas such as testing for banking applications, healthcare applications, etc., and build a niche around the workflows in each of them. Some of the additional testing types that have traditionally existed in the world of software testing include the following:

 

Test Automation Automation in software testing is a very broad area. It continues to evolve on an ongoing basis and has been of significance since the early years. The ways in which automation is done, the kinds of tools and frameworks used, what is automated, and when and how it is done are what undergo change over time. In the yesteryears, automation largely focused on UI elements and some functional aspects. Record-and-play tools were the ones primarily used, and most of these were commercial tools, since open source tooling was not very popular or mature back then.

 

Automation was typically done after a few rounds of manual testing that ensured the system was stable and testers had a good knowledge of the application’s workflows. Automation testers did not necessarily have a very good understanding of the system’s internals and relied on record-and-play tools to capture the system’s workflows in an automated manner. While this was a quick and convenient way to build an automation suite, it had a lot of issues around reliability and stability of the code, especially when the UI underwent changes.
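
To illustrate the shape of such UI-level scripts, here is a minimal scripted equivalent of a recorded login flow using Selenium WebDriver. The URL and element IDs are placeholders, and a locally installed Chrome driver is assumed; the hard-coded locators are exactly what tends to break when the UI changes, which is the reliability problem described above.

```python
# Minimal sketch: a scripted equivalent of a recorded login flow, using
# Selenium WebDriver. The URL and element IDs are placeholders, and a
# locally installed Chrome driver is assumed. The hard-coded locators
# are exactly what breaks when the UI changes.
from selenium import webdriver
from selenium.webdriver.common.by import By

driver = webdriver.Chrome()
try:
    driver.get("https://example.com/login")  # placeholder URL
    driver.find_element(By.ID, "username").send_keys("test.user")
    driver.find_element(By.ID, "password").send_keys("not-a-real-password")
    driver.find_element(By.ID, "login-button").click()
    assert "Dashboard" in driver.title, "login did not reach the dashboard"
finally:
    driver.quit()
```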

 

Compatibility Even back in the years, compatibility testing was of very high significance and value in a test effort. Tests would have to be run on varied operating systems and browsers to ensure they rendered fine and worked well. There were often defects that were specific to a given operating system and browser combination that had to be fixed and verified. A couple of unique characteristics about compatibility testing in the past are as follows:

 

Lack of adequate infrastructure to run compatibility tests: It was around 2004–2005 when cloud-driven infrastructure came into place with its offerings around infrastructure as a service. A few years before that, virtualization was in vogue, where testers would set up virtual machines to switch between and run their tests across varied operating systems and browsers.

 

Until all of these came in, the task of machine setup for compatibility testing was quite tedious, involving a plethora of physical machines. Besides operating systems and browsers, other applications traditionally taken into account in compatibility testing included antivirus software, application firmware, and printer and other device software, as these were often seen as influencing factors that might interfere with the functionality of the application under test.

 

Need for optimization: Although the compatibility testing matrix was relatively small back then compared to what it is today, the need for optimization was felt then as well, primarily due to a lack of testing time and of the required infrastructure. Optimization would be done based on market usage of the varied operating systems and browsers, with tests grouped into full test pass and sanity test pass suites to maximize test coverage within the constraints of time and cost.
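
A minimal sketch of that kind of optimization is shown below: it enumerates an operating system and browser matrix and trims it by estimated market usage. The share figures and the threshold are illustrative only.

```python
# Minimal sketch: building an OS x browser compatibility matrix and
# trimming it by estimated market usage. The share figures and the
# threshold are illustrative only.
from itertools import product

operating_systems = {"Windows 10": 0.55, "macOS": 0.20, "Ubuntu": 0.05}
browsers = {"Chrome": 0.60, "Firefox": 0.15, "Edge": 0.10}

# Weight each combination by the product of its estimated market shares.
matrix = [
    (os_name, browser, os_share * br_share)
    for (os_name, os_share), (browser, br_share)
    in product(operating_systems.items(), browsers.items())
]

full_pass = sorted(matrix, key=lambda row: row[2], reverse=True)
# The sanity pass keeps only high-usage combinations, trading breadth for time.
sanity_pass = [row for row in full_pass if row[2] >= 0.05]

for os_name, browser, weight in sanity_pass:
    print(f"{os_name} + {browser} (estimated usage {weight:.0%})")
```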

 

Integration Since several modules were being developed and tested as individual pieces, integration testing was very important back in the day. A tester would tie all the pieces together to ensure workflows worked well end to end. It was not until later in the game that the application was ready to test end to end, and while some automation was leveraged for this, integration tests were also largely done manually. Most often, automation would begin after an integration test pass, when testers were able to gauge the stability of the application.

 

Acceptance This test pass often aligned with the exit criteria defined in a test strategy. It would include a suite of cases, mostly end-user scenarios, that would certify the readiness of the application for external consumption. The acceptance test pass would often also be mapped to metrics such as the percentage of test cases passed and the percentage failed in a given priority category, all of which together would help the test manager make the call on the application’s readiness for release.

 

Testing in the past, although later in the game, had its own rigor and discipline. It was a lengthy and elaborate process, not all of which truly added value to the quality of the product under test. We will discuss more of this around testing metrics, pros and cons of testing back in the days, in the subsequent sections.

 

Regression Testing A suite of test cases that were built, maintained and run by testers to ensure defect fixes or new check-ins did not break any existing implementation has been popular for over three decades now. Regression testing has been given the attention it deserved from the very beginning—the only difference is over the years the tools that have been used and the way tests were prioritized have changed.

 

Early in the years, all tests used to be run as part of the regression suite. Not much optimization was really done. Also, most of these tests were run manually. Over the years, testers have started adopting smart regression testing strategies—this included identifying the right set of test cases that together gave the required level of coverage rather than executing all tests. Also, critical tests from newer defects that were filed were added on to the regression test suite.

 

Teams started understanding that the automation RoI from a regression test suite was much higher than in any other area, and thus regression suites were identified as good candidates for test automation. Despite all the evolution regression testing has gone through, it was still more of a downstream activity in the yesteryears, happening after check-ins were made and a build came into the hands of the tester.

 

Testing Metrics

The concept of using metrics has always been practiced in the world of software testing. Metrics have greatly helped bring objectivity into a test effort, which can otherwise get subjective. Metrics were primarily used to understand the quality of the product and the productivity of a test effort, including the overall progress made and that of an individual tester. They were typically divided into two categories: product quality metrics and test effort/management metrics:

 

Test management metrics

  • Quality of testing
    – Number (#) of defects found in testing / (# of defects found in testing + # of acceptance defects found after release) (see the sketch after this list)

  • Test efficiency metrics
    – Metrics on test execution and pass percentages

  • Test phase metrics
    – # of bugs raised and closed per phase

  • Test coverage
    – # of test requirements versus # of total requirements

Product quality metrics

  • Various views of defect metrics
    – Defects by resolution
    – Defects by injection phase
    – Defects by priority and severity
    – Defects by type
    – Defects by cause
    – Defect patterns
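
The sketch below shows how the quality-of-testing ratio and the coverage figure above reduce to simple arithmetic once the counts are available; the numbers used are illustrative only.

```python
# Minimal sketch: computing two of the metrics above from raw counts.
# All counts are illustrative only.
defects_found_in_testing = 180
acceptance_defects_after_release = 20
test_requirements_covered = 92
total_requirements = 100

# Quality of testing: share of all known defects caught before release.
quality_of_testing = defects_found_in_testing / (
    defects_found_in_testing + acceptance_defects_after_release)

# Test coverage: requirements exercised by at least one test.
test_coverage = test_requirements_covered / total_requirements

print(f"Quality of testing: {quality_of_testing:.0%}")  # 90%
print(f"Test coverage:      {test_coverage:.0%}")       # 92%
```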

 

While several of these metrics are still valuable today, some were turning out to be mundane, driving additional unwanted overhead into a tester’s daily routine. For example, the number of tests executed on a daily basis would often turn into a competition between testers to see who executed the most.

 

Sometimes these numbers were irrationally high, to the extent of 250 test cases per tester per day, hugely compromising the actual quality of the test effort. While it was good to see the use of metrics in the past years too, this was an area that needed a lot of improvement and customization to realize the true connection between these numbers and how they translated into an application’s quality.

 

Case Studies from Our Past Years of Software Testing

Having talked in detail about how testing was done in the past, I thought it would be useful to include a couple of case studies from our experience at QA InfoTech. These are examples of projects that were implemented either in a waterfall model or a pseudo-waterfall-agile model. They describe what each project was, what the requirements and challenges were, and what solutions we proposed to the client to help alleviate the challenges, even where the execution was in a pure waterfall model.

 

Case Study 1

Client The client was a leading publisher and aggregator providing educational content, tools, services, and other resources to academic libraries across the United States, Canada, and the rest of the world.

 

Problem QA was not involved rigorously in decision-making and requirement changes. This made it difficult to manage a well-balanced quality effort. Also, there was no collective ownership of quality, making it difficult for test to own the gaps in the efforts of the rest of the product team as well.

 

Additional Challenges

There were a lot of requirement documents and multiple versions of them to be reviewed and incorporated into the test effort.

A lot of valuable time was lost in reviewing the required documents and exchanging comments.

Multiple features and workflows were grouped in single documents, making it laborious for the QA team to track changes and maintain versions.

The common repository was sometimes not updated with the latest versions of the documents, hence leaving the QA team clueless of what changes came in.

Due to all of this documentation chaos, the QA team ended up logging issues that were rejected—this was a blow not just to the overall quality effort, additionally impacting testing costs and time, but also to the morale of the team.

The test cases count ran into thousands, making it a herculean task to keep them updated per changes in the required documents.

Test cases were maintained in excel sheets—multiple copies of such cases in circulation were getting impractical to manage.

QA cycles were typically 8–10 days long.

QA team executed all test cases in each and every cycle, with no optimization done.

Production releases were scheduled quarterly—live issues were pending to be fixed for a very long time.

There was a lot of dependency on the release engineering team to build deployments on QA environment.

Most importantly, only QA was considered responsible for quality.

 

Solution The QA team from QA InfoTech proposed a more collaborative approach that not only led to substantial time savings but also cut down on redundant efforts. All of the following suggestions together enhanced the overall quality of the product.

 

Due to the onshore-offshore model, there were certain other communication challenges as well that needed to be addressed.

QA team decided to have some overlapping time (2–3 hours daily) with the onsite team.

QA team was made a part of all update meetings, making it a more participative environment—this led to other positive changes including more collaboration and a cooperative execution approach among team members.

Since the QA team also became a part of the change request process, any confusion was avoided in the first place, leading to huge savings in time and effort.

 

Effective version control using SourceForge was implemented leading to an 80% effort reduction.

 

Since everyone had access to the latest versions, no unnecessary defects were logged—this significantly brought down the percent of invalid defects that were filed.

 

These changes addressed the issues the team was facing and also opened the door to a more collaborative approach of owning quality collectively.

 

Case Study 2

This case study is an example of a project where we faced quite a few challenges owing to the waterfall style of operations and how we turned it around with a blend of agile that we brought in. Since agile was also a fairly common development methodology in that “past” time period we are talking about, I wanted to discuss this case study that runs from waterfall well into an agile implementation.

 

Client The client in discussion here is a leading education technology and services company for higher education and a global publisher of books in the United States. They have a presence across 1000 higher educational institutions in North America and are used by over two million students.

 

They offer digital textbooks, instructor supplements, online reference databases, distance learning courses, test preparation materials, and materials for specific academic disciplines. With their expertise in student response systems, they focus on formative assessment and pedagogy, making their offering a fit for audiences ranging from the growing number of K-12 customers to corporate training environments and trade shows.

 

Challenges

Test releases were quite late, averaging just 7–10 days before the target deliverable date.

Test passes were long (in days) running well over 2–3 months.

 

Engineer(s) work estimates were exceptionally inaccurate. For instance, a task estimated around 10 hours would often take around 20–25 hours to complete.

 

Quality was not at desired levels—the bug counts soared with each release.

Roles and responsibilities were not well defined—work on a particular task would completely stop if the person in charge was not available.

QA/testers and developers rarely collaborated with each other.

 

Solution To address these challenges, we adopted an agile process on our project. This helped us save time and minimize queries and false failures. Our process included adopting the following:

  • Sprint dashboard—a master list of all the functionality desired in the product
  • Sprint sizing meeting—to describe, prioritize, and estimate features
  • Daily Scrum—to involve all team members in daily status sync meetings
  • Sprint review meeting—to discuss what has been accomplished during the sprint

 

Approach We at QA InfoTech worked on this process, following the agile methodology outlined above.

 

Client Benefits

  • All—Management, development, and QA teams started using the scrum dashboard regularly.
  • All releases started falling into place in terms of time—We have not missed the target date in the last 2½ years.
  • Sprint/iteration timings became much shorter—Releases were scheduled every 4–5 weeks.
  • Effort estimates became precise—Sometimes we still undersized a user story.
  • New issues found in live environments came down significantly—By over 75% across all platforms, and the number continues to fall with each release.
  • Teams were more empowered to make decisions that would benefit the company or clients.
  • Teamwork—Daily interaction with developers gave better insights into client wants and needs.

 

Analyzing the Past

Having looked at the process and methodology of software testing and quality assurance in the past, let’s analyze what made sense and what was more of an overhead that did not really add much value:

 

The past definitely brought in rigor and discipline to software testing. It helped define and implement the software testing life cycle.

 

Given the detailed documentation that was in place, ramping up new testers into a given project or even to bootstrap them into the world of software testing was more feasible.

 

It certainly gave a meaning and relevance to the world of independent software testing. Early in the days, the developers would themselves take on testing and this period was an important one to establish the need for independent testing.

 

There was more reliability in certain efforts such as localization testing where efforts could commence after fully verifying the readiness of the product in core English.

 

This was a time period where definitions and detailed processes were chalked out for several areas of testing—for example, threat modeling for security, baselining and profiling for performance, and standards for accessibility. Similarly, frameworks for automation started taking shape toward the end of this time frame.

 

On the flip side, there were a lot of process inefficiencies in this time frame:

A lot of emphasis was placed on documenting test efforts and scenarios and on creating artifacts. To add to this, these artifacts were hardly used in the actual test effort. A test strategy often turned into an obsolete document that hardly anyone referred to. Sections such as resource allocation in the strategy did not bring much relevance or use to anyone on the team.

 

The cycle was such that testers hardly communicated with the rest of the product team. Even when they did, it was mostly with the developers and build engineers, which brought in a lot of disconnects in terms of interpersonal team collaboration as well as product understanding and updates.

 

Testers were often not part of important activities such as triage meetings. This held them back from being the true voice of end users on the team. Even if they had useful information to convey about the user impact and quality of the product, they often did not have the right platform to convey it on.

 

Focus was more on functional and UI elements of testing, while areas of specialization such as performance, security, and accessibility were not of significance early in the years.

 

As product teams were moving from waterfall to agile, it was not just development teams that were impacted. Test teams were also hugely impacted. In fact, the changes were more prominent on the test side of things—although these were positive changes, it took a lot of time to embrace them given the magnitude of the changes involved.

 

Testing metrics were not realistic enough to promote healthy competition. Metrics such as the number of tests executed per day were more detrimental to tester morale than relevant to understanding a tester’s progress and status.

 

All of these made testing expensive and time-consuming. This not only impacted the quality of the product and the positioning of the tester but also often led to questioning the value testing brought to the table.

 

Software Testing Career Options

This was also a period where software testers were not held in very high regard. Testers often chose the profession after not being able to land software development roles. Testers who chose to take on manual testing were not very well respected for their contributions, and their inputs regarding the quality of the product were often not taken seriously. While it is not possible to generalize this at an industry level, this was the larger prevailing sentiment. However, this was also the time frame where career options were beginning to take shape.

 

A tester could either progress along the individual technical charter, becoming a test analyst and architect, or take the managerial route to become a lead, manager, and director. Specialization roles also slowly started coming in around the end of this time period, where testers could become performance, automation, security, localization, or usability specialists. In all, although this time period was not a very glorious one for software testing, it was a very important one for setting foundations around testing processes and careers.

 

All in all, in this time period until 2010, although there were downsides to the overall testing discipline, strong roots were laid to establish independent software quality assurance. This was also the period when quality started moving from control to assurance. Thus, it laid the foundation for several new beginnings still to come and set a strong example for adapting to the newer changes that were on the horizon.

 

What Has Changed in Today’s Style of Development?

When we discussed the “past” in the previous blog, we looked at how, at an industry level, we were moving from a waterfall to an agile model. This also included the phase where most organizations were aggressively pursuing agile in its supposedly truest form. This sudden jump from a traditionally adopted model to something completely new and drastically different had quite an impact on product teams. As with any new model that evolves, the industry at large had a lot of issues understanding and implementing agile in the early years.

 

However, around the start of the “current” time frame we are looking at, organizations started realizing the true requirements and goals of the agile manifesto and how they could be implemented in a customized version to meet their needs. Oftentimes this was even a pseudo-agile model that had a slight flavor of other development life cycles as and when warranted.

 

By the time teams had settled into the agile model of development, we were already into another flavor of development that further empowers teams to reap the benefits of what the agile world promises to offer—this is the world of DevOps, a world where agile has become even leaner in its operations and truly collaborative in bringing teams together.

 

DevOps is a development model that seemingly includes only development and operations, but in its drive to promote continuous integration and delivery it has a very important role for testers to play. Let’s briefly look at what the DevOps model looks like and what a tester’s role is therein.

 

DevOps and Changing Role of Test in DevOps

In the earlier days, even with a full-blown agile implementation, although test was involved earlier in the life cycle, there were dependencies that test had on other teams, making the overall effort linear in some sense. For example, until something testable came in from development, test teams were still not completely occupied. Development, test, and operations teams were still operating as individual units despite working in parallel.

 

DevOps has changed this where all teams come together as a single unit to promote continuous delivery. In specific scenarios, DevOps is seen as an evolution over the agile model (that focuses on collaboration at the plan, code, build stages), continuous integration (that focuses on collaboration at the plan, code, build, test stages), and continuous delivery (that focuses on collaboration at the plan, code, build, test, release stages).

 

DevOps is seen as a model that focuses on collaboration and value proposition at the plan, code, build, test, release, deploy, and operate stages. It is a model where continuous integration and delivery are made possible, and the entire team is able to operate collaboratively yet independently in product development.

 

One of the main requirements in a successful agile implementation is for a demonstrable software at the end of each release. While teams understood this requirement and had an agreed set of user stories developed and tested, the developed software was still not pushed to production due to several constraints and limitations on the operations side.

 

Environment availability, configuration dependencies, and infrastructure support were some of the reasons, among others, that handcuffed the operations team in taking their tasks to completion. And all of this was not purely an operations issue. Business readiness to launch, including pricing models, competitive analysis, marketing readiness, and stakeholder sign-off, all had to come together to make an ongoing release to production a reality.

 

DevOps comes in with the promise to look past the core development and test teams and enable the others on the product team to join in as well to support anytime deployment. What this calls for is increasing levels of automation: traditionally we have looked at automation purely from a testing and test coverage standpoint, whereas DevOps brings in a completely new definition of automation.

 

In today’s scenario, automation is all about automating build deployments, test processes, and unit tests, and enhancing overall test coverage through code coverage, all with the goal of making the development process fast, hassle-free, and reliable, and code quality robust.
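
As a simple illustration of what chaining these automated steps can look like, here is a minimal sketch that runs build, test, and deployment stages in sequence and stops at the first failure. The individual commands and the deploy script are placeholders, and in practice a CI/CD server would run an equivalent pipeline.

```python
# Minimal sketch: chaining build, test, and deployment steps so a commit
# can flow toward an environment without manual hand-offs. The commands
# and the deploy script are placeholders; in practice a CI/CD server
# runs an equivalent pipeline.
import subprocess
import sys

PIPELINE = [
    ("build",          ["python", "-m", "build"]),
    ("unit tests",     ["python", "-m", "pytest", "tests/unit", "-q"]),
    ("regression",     ["python", "-m", "pytest", "tests/regression", "-q"]),
    ("deploy staging", ["./deploy.sh", "staging"]),  # placeholder script
]

for stage, command in PIPELINE:
    print(f"--- {stage} ---")
    result = subprocess.run(command)
    if result.returncode != 0:
        print(f"Pipeline stopped: '{stage}' failed")
        sys.exit(result.returncode)

print("All stages passed; the build is ready for promotion")
```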

 

Unlike the agile model, which came in all of a sudden at a time when IT professionals were deeply rooted in their traditional styles of development, DevOps has been a more gradual shift. Since it also evolved from the agile model as a base, teams have been more comfortable adapting to the requirements of DevOps. Some of the core requirements that make DevOps feasible include a strong regression test suite, instrumented code, system configurability, backward and forward code compatibility, and keeping track of overall product dependencies, among others.

 

Granted that DevOps is a robust model that is here to stay, how has it really impacted the software testing fraternity? The core principles of DevOps, namely continuous delivery, readiness to ship at short intervals, and parallel work among all teams, are the ones that have been impacting the test discipline.

 

What this means is that there is a need for a lot of automation. This is automation that focuses not just on test suites but also on automating test processes and efforts in all possible ways. For example, from the test suite standpoint, the team is looking at building scalable automation frameworks that can take on regression, compatibility, functionality, and performance testing, among others. From a test process standpoint, automated test deployments, automated defect and test case management, and automated metrics measurement and report generation are varied areas to consider.
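
On that last point, automated metrics and report generation can start as small as the sketch below, which summarizes pass rates from a JUnit-style XML results file that most test runners can emit. The file path is a placeholder, and the attribute names assume the common JUnit report format.

```python
# Minimal sketch: generating a pass-rate summary from a JUnit-style XML
# results file, which most test runners can emit. The file path is a
# placeholder and the attribute names assume the common JUnit format.
import xml.etree.ElementTree as ET

tree = ET.parse("results/junit-report.xml")  # placeholder path
root = tree.getroot()

total = failures = errors = skipped = 0
for suite in root.iter("testsuite"):
    total += int(suite.get("tests", 0))
    failures += int(suite.get("failures", 0))
    errors += int(suite.get("errors", 0))
    skipped += int(suite.get("skipped", 0))

passed = total - failures - errors - skipped
print(f"Executed: {total}, passed: {passed}, "
      f"failed: {failures + errors}, skipped: {skipped}")
if total:
    print(f"Pass rate: {passed / total:.0%}")
```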

 

Automating the test processes is particularly a more cumbersome task. The reason is there are a lot of loose ends that can be easily done manually—due to this, there is resistance to adopting automation in such areas even among automation engineers. For example, a survey by Intersog shows 56% of bug tracking and 45% of defect closures happen manually.

 

Another piece of data shows only 25% of unit testing, 7% of integration testing, 5% of system testing, and 3% of acceptance testing are automated end to end. The same data show that 42% of tests that are candidates for automation are not automated. If we can automate these to reach nearly 100% unattended execution, imagine the amount of time and cost savings that would result in a DevOps environment.

 

Typically some of the core problem areas impeding comprehensive automation include the disconnect between defect tracking and test case management and the need for coding skills to implement automation.

 

To this end, in our organization, for example, we leverage what we call adaptive test automation frameworks—frameworks that build on automation to reduce manual tasks in all possible spheres of tester operations. Such a framework encourages test case design and automation in a human-readable format from within a defect, so that all pieces of the test effort are tightly coupled. The framework is also simple enough to encourage manual testers to take on automation.
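
To make the idea of a test case that lives with its defect a little more tangible, here is a highly simplified, hypothetical sketch. The Defect and TestStep names and the run() helper are illustrative inventions, not the adaptive framework described above; the point is only that a manual tester can describe the steps in plain language while an automation engineer wires in the checks.

```python
# Hypothetical sketch of the "test case inside a defect" idea; names are illustrative.
from dataclasses import dataclass, field
from typing import Callable, List

@dataclass
class TestStep:
    description: str              # human-readable, e.g. "When the user logs in"
    action: Callable[[], bool]    # the automated check behind the description

@dataclass
class Defect:
    defect_id: str
    title: str
    steps: List[TestStep] = field(default_factory=list)

    def run(self) -> bool:
        """Execute the regression test that lives with the defect."""
        for step in self.steps:
            ok = step.action()
            print(f"[{self.defect_id}] {step.description}: {'PASS' if ok else 'FAIL'}")
            if not ok:
                return False
        return True

# Usage: a manual tester describes the steps; an automation engineer wires the actions.
bug = Defect("BUG-1042", "Login fails for expired passwords", [
    TestStep("Given an account with an expired password", lambda: True),
    TestStep("When the user logs in, a reset prompt is shown", lambda: True),
])
bug.run()
```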

 

That said, now is the time for manual testers to spread their wings into the test automation space. While manual testing will not go away, a lot of focus will also, rightly, be given to the world of automation. Moving forward, a tester can push himself only so far with manual testing skills alone. In the current times, automation is almost becoming inevitable.

 

Besides enhanced automation, test has the added responsibility of taking on root cause investigations to enable the rest of the product team to fix issues faster and more completely. To be able to do this, and to think of deeper scenarios to test, it is becoming increasingly important for test to thoroughly understand the larger business of the organization and the varied integration points with the product under test. While all of these have traditionally been recommended for test to take on even during the days of pure agile, they are becoming almost inevitable in the current days of DevOps.

 

Data show sizeable year-over-year growth in the adoption of DevOps—from 62% in 2014 to 66% in 2015. This number will only rise in the coming years, and whether or not your organization has moved to the world of DevOps, it will be a smart move for you to already look at enhanced automation in your overall test effort and processes.

 

And in this smart approach that teams are attempting to embrace, most of them are working their way up toward automation for existing feature sets, leaving manual testing primarily for the new user stories that come in to test. Even these new user stories soon make their way into the regression suite, forcing them to be automated sooner or later.

 

What New Technologies Have Impacted the Overall Test Strategy and Effort?

In this section, we will specifically talk about how varied technologies are impacting software testing and the kind of change they are bringing in.

 

Cloud

The cloud has had a major influence on how we test and what we test. In the current times, this is only further intensifying, with all systems being looked at from a services angle. For example, in the initial days, the cloud was all about software, platform, and infrastructure as a service. But today, this has taken shape in varied ways—for example, backend as a service. Similarly, with growing concerns around the public cloud, the private cloud started gaining popularity a few years back. As with any technology or domain, the industry starts off with an extreme solution and soon settles for something middle ground.

 

This is exactly what has been happening in the cloud space too, where we now have the hybrid cloud evolving as a mix of both public and private clouds. This is a solution where organizations leverage the benefits of both the private and public offerings, deciding which applications sit on the private segment and which on the public segment, together building a combined cloud strategy. As for test, this calls for a combined test strategy too—one that focuses on the weaknesses of each of these areas.

 

For example, for applications that are on the public cloud, the test focus is largely on the security and performance angles, while for the ones on a private cloud, the focus is on customized feature implementations and rendering to yield the desired functionality at the lowest possible cost. The growth in cloud, while it has provided a lot of flexibility for testers from a test engineering standpoint, has also expanded test opportunities and forced them to think of custom test strategies.
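
One lightweight way to make such a combined strategy executable is to encode it as data that a pipeline can read when scheduling suites. The sketch below is hypothetical—the application names and suite labels are invented for illustration—but it shows the general shape of the idea.

```python
# Hypothetical encoding of a hybrid-cloud test strategy as data; names are made up.
TEST_STRATEGY = {
    "public_cloud": {
        "apps": ["storefront", "search-api"],
        "focus_suites": ["security", "performance", "functional-smoke"],
    },
    "private_cloud": {
        "apps": ["billing-engine", "reporting"],
        "focus_suites": ["functional-full", "custom-feature", "integration"],
    },
}

def suites_for(app: str) -> list:
    """Return the test suites to schedule for a given application."""
    for segment in TEST_STRATEGY.values():
        if app in segment["apps"]:
            return segment["focus_suites"]
    return ["functional-smoke"]   # sensible default for unmapped apps

print(suites_for("storefront"))   # ['security', 'performance', 'functional-smoke']
```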

 

Mobile

Mobile computing is probably the biggest technology change to have impacted the tester’s realm of operations. Moving away from desktop- or laptop-based web application testing to device- or simulator/emulator-driven testing of web, native, or hybrid applications is a large change that testers are still getting used to. Even from a physical standpoint, testing on hand-held mobile devices has been a big change. Obviously, it is not just the device that has brought in the change.

 

The device, in combination with an application development mindset and the overall application development process, has created a very large test impact. Mobile applications have brought in a lot of focus on nonfunctional testing elements such as performance, security, and usability. Newer business models around m-commerce and content digitization have all triggered new testing opportunities. In fact, it is interesting to see business strategies such as discounts offered only on the mobile app, mobile-only shopping events, and large retailers such as Flipkart in India working toward a mobile-only presence.

 

The crowdsourced testing strategy has become more prevalent given the need to test applications on a large range of mobile devices across realistic user scenarios. The tester himself is now encouraged to think like an end user. Mobile testing is forcing testers to think more deeply and bring out optimization opportunities—for example, we have a homegrown tool at QA InfoTech called the Mobile Application Strategy Tool.

 

This is a tool that helps you consider varied parameters and weigh them to make suggestions on what kind of testing should be done to maximize overall test coverage with a minimal set of tests to be run. It also helps you make a call on where the tests should actually be run (whether on physical devices or on simulators, in-house or through a crowd team).

 

Additionally, testers are having to make smart optimization choices based on past analysis of test results to help them further strategize their mobile testing efforts. From our experience, here are some quick tips to empower testers in their mobile test efforts (a simplified sketch of how such heuristics could drive device selection follows the tips):

 

  • Functional issues are more dependent on the OS than on the device size or screen resolution.
  • User interface issues are more specific to the device size or screen resolution than to the OS.
  • Different devices with the same screen size and OS version would give identical results.
  • Devices with different screen sizes but the same OS version would give different results.
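
Here is the simplified sketch referred to above: a hypothetical way of encoding these tips as a device-selection heuristic. The device list and the rule behind pick_minimal_set are illustrative assumptions, not the Mobile Application Strategy Tool itself.

```python
# Hypothetical device-selection heuristic based on the tips above; data is illustrative.
from itertools import groupby

DEVICES = [
    {"name": "Device A", "os": "Android 9", "screen": "1080x1920"},
    {"name": "Device B", "os": "Android 9", "screen": "1080x1920"},  # duplicate combination
    {"name": "Device C", "os": "Android 9", "screen": "720x1280"},
    {"name": "Device D", "os": "Android 10", "screen": "1080x1920"},
]

def pick_minimal_set(devices):
    """Keep one device per (OS, screen) combination, since identical combinations
    tend to give identical results per the tips above."""
    key = lambda d: (d["os"], d["screen"])
    return [next(group) for _, group in groupby(sorted(devices, key=key), key=key)]

for d in pick_minimal_set(DEVICES):
    print(d["name"], d["os"], d["screen"])
```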

 

Wearable Computing and Augmented Reality

In continuation of mobile computing, the next in line is wearable computing and augmented reality. In very simple terms, wearable computing involves wearable devices with built-in computing capabilities that render valuable information to end users across a range of functions. Augmented reality, whether mobile or not, enhances a real-life experience by augmenting it with relevant and timely information delivered to end users in varied shapes and forms.

 

Head-mounted displays and smartwatches are popular examples of wearable devices that we are seeing in recent times. Testing for apps developed for wearable devices is a huge market; they are tested not just as stand-alone apps but also for their computing elements that integrate with various other apps and devices across the mobile and nonmobile segments. Similarly, testing for augmented reality apps calls for a number of pieces that integrate together and several out-of-the-box scenarios to be tried.

 

Augmented reality is also a valuable addition to a tester’s toolkit for enhancing his productivity, rather than purely being something to test. The market for wearable computing is expected to touch $30 billion by 2018. Augmented reality, even for the nonmobile segment, is expected to touch $1 billion by the same time. Testing for these two segments requires a lot of out-of-the-box thinking—creative end-user scenarios, varied integration checks, and test areas that cover usability, compatibility, functionality, performance, etc.

 

The tester also has to be aware of known limitations in these areas, since these are still evolving as we speak. For example, security is still a large unaddressed area in wearable computing. Surface detection is a gap in augmented reality. In newer areas such as these, the tester will have to also rely increasingly on community knowledge to determine what scenarios to additionally try.

 

Crowdsourced testing to get end users to test is also very valuable. How to test augmented reality applications and also how to leverage augmented reality to enhance a tester’s productivity are topics of their own, worth individual consideration.

 

Also, core testers may have to take on tests outside of the traditional lab, as some scenarios may have to be tried in the field. Newer computing areas such as these are forcing testers to rethink their test strategy and the balance between manual and automated tests—while we understand the value of automated simulated tests, there is also tremendous value in field-driven manual testing. So, one of the questions to consider is whether advancements in technology are taking us back to the grassroots of testing.

 

Social Media

Social presence is not new. It has been influencing product development efforts significantly over the past decade, even more so in the last five years. What impact does social presence—be it on Facebook, Twitter, LinkedIn, Pinterest, or any other app—have on your testing effort? The first thing to evaluate is whether your application or product under test has a social presence. If so, how you would test for it as a stand-alone app, as well as an app in the social climate, is something to plan for and build into your test strategy.

 

Additionally, in the current day, whether or not the application’s functionality has a direct relevance to social media, there is a lot that the tester can glean from it. All applications have a social face and users are very vocal on such media today. Their feedback, both positive and negative, is a great input to consider for additional testing in a given release or for enhancements in subsequent releases. Although the marketing team typically maintains the social presence of organizations and products, a tester can use it as an important source to understand end-user satisfaction and overall product quality.

 

Analytics

This is certainly going to be the future for many more years to come. Today, it is all about user data. The volume of data handled in a span of just 60 seconds is mind-boggling. In 1 minute, e-mail users send 204 million messages, Amazon makes about $83,000 in online sales, and Apple users download 48,000 apps. On the social front, Facebook users share 2.46 million pieces of content, 277,000 tweets are tweeted, and Tinder users swipe left or right 416,667 times.

 

All of these data are very valuable when analyzed and processed to bring relevance to the core business. Organizations have started understanding this, and a lot of emphasis is placed on analytics and big data to ensure the data are put to meaningful use. What this means is that the tester now needs to understand how to test such large datasets and ensure the right test coverage is obtained within the constraints of time and cost.

 

The industry as a whole is grappling with testing big data and what tools to leverage to handle both structured and unstructured data. From our experience, tools such as PigUnit and Hive under the larger Hadoop umbrella are the more popular ones that testers leverage today, depending on what their usage needs may be.
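
As a modest illustration of the kind of automated sanity checks a tester might run against a sampled slice of a large dataset, here is a pure-Python sketch (deliberately not tied to PigUnit or Hive). The file name, column names, and thresholds are assumptions made for the example.

```python
# Illustrative, stdlib-only sanity checks on a sampled slice of a large dataset.
import csv
import random

EXPECTED_COLUMNS = {"user_id", "event", "timestamp"}
SAMPLE_RATE = 0.01   # validate ~1% of rows to stay within time constraints

def validate(path: str) -> dict:
    issues = {"missing_fields": 0, "bad_timestamp": 0, "rows_checked": 0}
    with open(path, newline="") as fh:
        reader = csv.DictReader(fh)
        if not EXPECTED_COLUMNS.issubset(reader.fieldnames or []):
            raise ValueError(f"schema mismatch: {reader.fieldnames}")
        for row in reader:
            if random.random() > SAMPLE_RATE:
                continue                      # skip rows outside the sample
            issues["rows_checked"] += 1
            if any(not row[c] for c in EXPECTED_COLUMNS):
                issues["missing_fields"] += 1
            if not row["timestamp"].isdigit():    # assumed epoch-style timestamps
                issues["bad_timestamp"] += 1
    return issues

# print(validate("events_sample.csv"))   # hypothetical input file
```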

 

SMAC

Gradually, the industry has gotten used to handling social, mobile, analytics, and cloud at individual levels. Teams have, at varying levels, understood what it takes to test them and how to incorporate results from each of those elements into the larger product context, which are all positive outcomes we are seeing.

 

However, the industry is now moving into a newer trend—social, mobile, analytics, and cloud (SMAC)—that brings these four together under one umbrella. This is a change at an organizational level, forcing companies to look for any missing pieces of the pie and fix them. For example, a given organization may have a strong mobile, analytics, and cloud presence but may be missing the social piece. Now is the time such gaps are being taken care of. What this means for testers is 24/7 availability to address issues that surface.

 

This stretches beyond the bounds of core testing and forces testers to constantly evaluate information flowing in from users, track how they are faring against the competition, and release live updates and patches on the go. The year 2014 had some interesting statistics to show: 76% of businesses were using social media to fulfill their business goals, 72% acknowledged enhanced productivity through smart device adoption, 75% were focusing on leveraging analytics, and 92% were satisfied with outcomes from their cloud implementations.

 

These numbers will only strengthen further with the integration of these pieces to form the SMAC platform. As testers, we need to continue to look for testing scenarios that tie these pieces together, training ourselves to think from a holistic SMAC platform standpoint and to consider what that interconnect means for overall product quality and market acceptance.

 

Computing Everywhere, Internet of Things, and Context-Sensitive Systems

The digital world today has become ubiquitous. Mobile penetration into remote parts of the world is on the rise. Internet availability, speeds, and reliability are improving by the day. With such a strong infrastructure presence, businesses are looking at positively leveraging the user base and better connecting with them for relevant solutions. Isolated solutioning is no longer taking place.

 

Solutions are all coming together, whether it be the SMAC we saw previously, the Internet of Things bringing in smart offerings, or, more importantly, proactive user connect through context-sensitive systems. No longer is the user always the one reaching out to businesses with his requirements.

 

An increasing trend is upselling, where businesses reach out to users with suggestions based on the context they hold about them. All of this means the tester cannot limit his work to just his office premises. He lives and breathes quality even outside his core work hours, as you never know where one may pick up quality cues.

 

For example, I may be at a shopping mall or a restaurant over a weekend and may be able to see some live product users and get to interact with them. I may get an advertisement based on the set preferences that may give me cues for my product under test, say when I am vacationing out of the country.

 

It is all about anywhere, anytime computing today, and the tester who is able to connect the dots between what he does at work and what he sees outside of work is the one who will thrive in the coming times. Just drawing the connection will not suffice; he will have to translate it into actionable inputs that he can bring back to his workplace to further enhance the quality of the product and bring in true business value.

 

The aforementioned list, though not exhaustive, is comprehensive enough to give you an idea of what is happening in the current times and how all of these are impacting testing. Other areas, such as predictive analysis, are also slowly making their way in, all of which together are expected to bring in a huge change in the coming years in the world of software development. As testers, it is important for us to understand and keep track of these to see how they relate to what we do currently and what else we can do differently, to experience ongoing continuous improvement.

 

The interesting takeaway here is that several testing best practices (e.g., think of nonfunctional areas, think like an end user) have all been long advocated in the testing industry—they continued to remain optional best practices all along. However, with the changes happening in technology today, they are slowly becoming inevitable best practices for teams to adopt.

 

What Other Things Are We Doing as Testers?

As testers, we have come a long way. This period is one of the most exciting yet challenging times a tester has gone through. Those who are able to handle and convert these challenges into opportunities clearly have a strong road laid ahead of them, both for the products they are working on and for their own personal careers.

 

What is driving all of this change? The state of product quality today, which in turn is directly influenced by the state of product development, has a lot to do with what, and what else, we do as testers. For instance, today product quality is no longer the responsibility of just the testing team. Obviously, if something goes wrong, it is still the testing team’s neck that is on the line. But the rest of the teams have also started understanding and appreciating quality, in the hope of being able to contribute in whatever ways possible within their spheres of operation.

 

Second, domain knowledge is more important now than ever before. As a tester, while my core testing skills of test planning, execution, and defect management are all important, now is the time I can differentiate myself with knowledge of the domain I work in.

 

This could be domain-specific workflows, regulations that govern the domain, the market for the domain, etc. Additionally, although there are specific teams such as the business and marketing teams that are chartered with understanding market competition, a tester can add tremendous value in evaluating the current product against competition in areas such as feature set, functionality, performance, usability, and accessibility—all from a quality angle.

 

And finally, it is becoming important for the tester to think beyond the bounds of his core testing work. Gone are the days when a tester came in to execute his tests for 8 hours a day and, at the end of the release, signed off on his task set. Given the multitude of test scenarios, devices, and parameters that he works with, creativity is the need of the hour. We will look at what it takes for a tester to thrive in the current day in a later section of this blog, but at this time, we will specifically look at what other tasks a tester is taking on in addition to his core testing responsibilities.

 

Market Watcher

As a tester, today, it is important to live and breathe quality outside of core work hours too. One needs to be a keen market watcher to see how the competition is faring—simple things such as what the competition is in the news for, what kind of feedback users are giving the competition, what forums I can monitor to best gauge feedback for my own product, what the users’ pain points are, and whether there are any user-powered events I can attend. These are all things that no one is going to direct a tester to take on. They need to be self-driven, and those who do them are able to clearly set themselves apart.

 

Innovation Seeker

Innovation has become the key to thriving in any discipline, and testers are no exception. We are seeing testers do amazingly different things to enhance their productivity and also add to the quality of the product—whether it be through newer tools and frameworks, bringing newer concepts such as games into software testing, or keeping track of new technologies.

 

Innovation need not be something very big to get noticed—testers are even taking on small steps and are eventually seeing a difference in what they do. Management teams are also more receptive to such continuous improvement strategies than ever before, creating a very conducive environment overall for the quality landscape.

 

Quality Empowering Collaborator

 

A tester is often seen helping the rest of the product team take on quality within their own spaces of work. For example, he helps a developer with unit test case creation and with executing automated sanity tests to ensure the build released to test is more reliable.

 

Similarly, he is seen working with the operations team on a sanity suite of tests to ensure live issues from the field are responded to better, with a tighter service-level agreement (SLA). When he takes on such tasks to empower others on the team to own quality, he is indirectly creating a stronger base for himself to take on bigger and better tasks, since the core and tactical tasks are now handled across the team rather than just by himself.

 

Doubling on One’s Role

 

Typically, as testers, we tend to specialize in a core area—for instance, functional, performance, or security testing. While the core specialization still remains, in today’s scenario the ones who are able to take on more as a value add to their core tasks do even better in their role as software testers.

 

For instance, at QA InfoTech, we recently introduced a couple of frameworks based on open-source tools. In one, a functional tester is able to double up on his open-source Selenium scripts to take on not just functional testing but also security testing based on Open Web Application Security Project (OWASP) standards, even though he may not be a security expert; in the other, a functional tester is able to take on accessibility testing as well.
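
To show the general idea of doubling up a functional script (and not QA InfoTech's actual framework), here is a hedged Selenium sketch: the same functional flow can be routed through an intercepting proxy such as OWASP ZAP so that security findings are gathered passively while the functional test runs. The proxy address, application URL, and login flow are assumptions for illustration.

```python
# Sketch: reuse a functional Selenium flow for passive security scanning by routing
# the browser through an intercepting proxy (e.g., OWASP ZAP). Selenium 4 style.
from selenium import webdriver

ZAP_PROXY = "127.0.0.1:8080"   # assumed address of a locally running proxy instance

def make_driver(route_via_proxy: bool = False) -> webdriver.Chrome:
    options = webdriver.ChromeOptions()
    if route_via_proxy:
        # All traffic from the functional test now flows through the security proxy,
        # letting it flag issues without changing the test itself.
        options.add_argument(f"--proxy-server=http://{ZAP_PROXY}")
    return webdriver.Chrome(options=options)

def functional_login_flow(driver) -> None:
    driver.get("https://example.com/login")          # hypothetical application URL
    driver.find_element("name", "username").send_keys("demo")
    driver.find_element("name", "password").send_keys("demo-pass")
    driver.find_element("css selector", "button[type=submit]").click()

if __name__ == "__main__":
    driver = make_driver(route_via_proxy=True)       # same script, doubled-up purpose
    try:
        functional_login_flow(driver)
    finally:
        driver.quit()
```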

 

It is a good idea, in general, to get a cross-perspective of the product and branch into other test types in addition to the niche you may be specializing in. For instance, given how the mobile application market is skyrocketing and how adversely users react to a poor usability experience, an area to consider is how non-usability testers can take on usability testing too.

 

This need not be very complex—it could be as simple as evaluating the product from the standpoints of end-user experience, error scenarios, and overall application simplicity and workflow. When we do this as testers, we are able to build better customer appreciation.

 

While all of the newer tasks we take on outside our traditional role as testers are indeed exciting, there is one catch that needs to be carefully evaluated. All of these require close collaboration with the rest of the team and sometimes even delegation of responsibilities from a tester’s plate to another’s.

 

When these are done without careful and insightful planning that is mindful of the other entities involved, we may appear to be trespassing into another team’s area of operations. If and when such a thought process creeps into the team, the overall collaboration tends to be more destructive than constructive. Thus, a tester is required not just to excel in his own zone of operations but to really look for opportunities to bond with the team and excel as a group, staying savvy about team sensitivities that need to be balanced with quality goals and market requirements.

 

We Are at the Crossroads

 

With all of this multifold responsibility on a tester’s plate today, he is really at a crossroads. A well-respected tester in the fraternity, James Bach, calls out seven different types of testers: an administrative tester who is very process oriented, a technical tester who is very tools and frameworks driven, an analytical tester who is very logical and statistical in his test approach, a social tester who is very embracing of his team and other entities in his test efforts, a developer tester who is very detail oriented at a programming level, a user expert who is keen to track user feedback, and finally an empathetic tester who empathizes not just with users but is mindful of all entities at stake.

 

There is no right or wrong approach here on who you should be; nor do you need to confine yourself to just one type of tester. What is important, though, is for testers to see which of these profiles best defines who they are today and whether that is sufficient for the coming days, given their market and their own career aspirations. Accordingly, they need to look for diversification opportunities to branch into newer tester types.

 

As someone doing over and above what has traditionally been expected of them, a tester should not lose sight of certain core elements. These are elements that will help both him and his product succeed—and they cover the need to accommodate context, collaboration, customer, competition, and company in whatever we do. We need to continuously evaluate and align our actions with these five “Cs” to ensure there is an overall return on investment in our efforts.

 

As for the kinds of tests we run as testers today, at the very core not much has changed from the past. What is changing is how we run the tests and how we prioritize them. While functional testing continues to be important, nonfunctional test areas such as performance, security, usability, and accessibility are becoming increasingly important. End users are giving equal importance to these areas. An application, however rich it may be in its functionality, will lose ground to its competition if it lacks in performance—for example, in its responsiveness.

 

This wasn’t as prominent in the past, when the market was limited to a few large enterprise players. Given that the situation has changed, and users have ample options to go with, organizations are placing heavy emphasis on nonfunctional test areas as well. A scalable and well-equipped test lab has become inevitable to support test execution across a range of devices. In specific scenarios, not all devices can be stocked in-house due to constraints around device cost, availability, usage, etc.

 

In such cases, and additionally where there is value in bringing the end-user context into the test effort, testers are leveraging crowd users to test applications. These are often freelance testers who are brought in given their domain expertise, end-user experience, or niche value addition in areas such as localization, accessibility, and usability.

 

Also, as testers, we are changing strategies on how we test. For example, we are increasingly moving toward test automation. Newer emphasis is placed on techniques such as exploratory testing and bug bashes to enable teams to bring out their creative best within the shortest possible test cycles.

 

Test cases are increasingly designed in a human-readable (behavior-driven) format. For example, they may consistently follow a format such as GWT—Given, When, Then. When such consistent formats are used, it also becomes easier to bring in modular test automation that is understandable even to non-programmers.
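
For readers who have not seen the format in code, here is a minimal, dependency-free sketch of the Given-When-Then idea. Real teams would typically use a BDD tool such as behave or pytest-bdd; the shopping-cart example is an invented illustration, not taken from any product discussed here.

```python
# Minimal Given-When-Then structure expressed as comments around a plain test.
class Cart:
    def __init__(self):
        self.items = []
    def add(self, item, price):
        self.items.append((item, price))
    @property
    def total(self):
        return sum(price for _, price in self.items)

def test_adding_an_item_updates_the_total():
    # Given an empty shopping cart
    cart = Cart()
    # When the user adds a book priced at 12.50
    cart.add("book", 12.50)
    # Then the cart total reflects the new item
    assert cart.total == 12.50

test_adding_an_item_updates_the_total()
```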

 

In terms of evaluation, test teams are getting more objective in measuring outcomes. Outcomes are mapped to overall coverage and traceability to the defined requirements. Connecting tests run to user stories, the code coverage obtained, the kinds of defects reported, the percentage of valid defects, how, by whom, and when defects were found, feedback coming in from end users, and the team’s adherence to defined SLAs for quality and performance are all increasingly being used. While it was important in the past to keep track of the tests run by a tester to understand how productive he was, today productivity is measured by other parameters.
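
As a small, hypothetical sketch of how such raw records could be turned into objective measures (valid-defect percentage, user-story coverage), consider the following; the record structure and numbers are assumptions for illustration only.

```python
# Illustrative metrics over made-up test and defect records.
defects = [
    {"id": "D-1", "valid": True,  "found_by": "exploratory"},
    {"id": "D-2", "valid": False, "found_by": "automation"},
    {"id": "D-3", "valid": True,  "found_by": "automation"},
]
tests = [
    {"id": "T-1", "story": "US-101", "passed": True},
    {"id": "T-2", "story": "US-101", "passed": False},
    {"id": "T-3", "story": "US-102", "passed": True},
]
stories_in_release = {"US-101", "US-102", "US-103"}

valid_pct = 100 * sum(d["valid"] for d in defects) / len(defects)
covered = {t["story"] for t in tests}
story_coverage = 100 * len(covered & stories_in_release) / len(stories_in_release)

print(f"valid defects: {valid_pct:.0f}%")                              # 67%
print(f"user stories with at least one test: {story_coverage:.0f}%")   # 67%
```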

 

What kind of new utilities someone brought to the team, any new practices that made the team’s work smarter, and how a tester has been contributing to doing things differently are all used as gauges to understand his performance and productivity. This is a welcome change and a much-needed facelift to truly map test metrics to the quality of the product, the quality of the test effort, and the performance of a tester.

 

This is a big change from how things were done in the past. So, organizations and testers alike will need to understand and implement these metrics in the true spirit to derive ongoing value, and when such a mature state is reached, it will certainly be an exciting time for the testing fraternity as a whole.

 

How Do We Thrive in Today’s Environment?

When drastic changes roll out in any discipline, the players often feel threatened. The sense of insecurity due to fear of the unknown is very high. With the evolution of DevOps and the focus on greater automation, testers in recent years have been sailing in a similar boat—one where they are unsure whether their role is still valuable and whether their position will continue to prevail.

 

So, more than a question of thriving, for several of them it has even been a question of surviving. Having been in the independent software testing business for over 12 years now and having worked with a range of clients, Fortune 500 companies and start-ups alike, the reassurance I want to give the community is that testing is here to stay—quality assurance and confirmation have a bright future. That said, as in any other discipline, complacency will get an individual only so far.

 

This is sometimes called the Fat, Dumb, and Happy syndrome, especially in large organizations where employees get very comfortable with what they do, jeopardizing both their own careers and the company’s external positioning within a few years. To ensure a healthy, competitive environment and to continue to thrive at a fraternity level, it is important for testers to keep a few things in mind. These are fairly self-explanatory, so I will list them as points:

 

1. Look for newer challenges to solve, whether initiated by your team or self-initiated.

2. Leverage technology not just in what you test but also how you test. For instance, use the cloud to empower your test processes and analytics to help sift through the volumes of test data and draw meaningful inferences.

3. Ensure that you, along with people from other teams, review your own work. Similarly, be closely involved in reviewing the work of other teams. This not only ensures all grounds are covered but also creates strong team bonding.

 

4. Be in close communication with the management to translate your tactical actions into strategic quality decisions.

5. Focus on strengthening your test environment and toolkit to ensure you are empowered to be productive.

6. Build an ongoing learning plan on a range of topics—testing processes, tools, trends, technologies, etc.—and have a custom list of feeds that you use for each of these.

7. Have trust in yourself and inspire such trust among people around you.

 

Current Trends That Will Also Set the Base Moving Forward

Trends define what is coming in the foreseeable future. Not all trends may take off at the same intensity, but trends definitely give a great insight to plan for the future and be prepared for what is in store. We are almost at the close of the “current period” under consideration.

 

To prepare for what is coming, let’s wrap up this blog by looking at the trends that define the coming years. Since we have discussed most of these points in great detail at varied places in this blog, I will simply summarize the trends here as takeaways for this blog and as the premise for the next:

 

1. The technology landscape is very dynamic today.

2. Application development on mobile platforms will continue to grow.

3. Mobile-only renderings will soon enter the scene, making desktop-based web applications a thing of the past.

4. End users will have an increasing role in software development—more crowdsourced testing will prevail.

5. The existence of independent testing will be challenged but will prevail.

6. Commercial and open-source tools will coexist and become more collaborative, unlike in the past, when they were seen actively competing against each other.

7. A shift left to understand system internals and a shift right to align with end users will have to be balanced in the strategy a tester takes on.

8. DevOps will see increasing levels of automation but will continue to embrace manual testing into its fold—manual testers will see an increased need to branch into test automation.

 

9. Testing centers of excellence will grow to cross-share knowledge and resources and build on efficiencies.

 

10. Testers will be increasingly respected for their role in upholding quality and bringing the team together toward a common goal holding end-user requirements high.

 

11. The QA/tester’s toolkit is, on one hand, strengthening but, on the other, becoming lighter. This is a great trend, where the toolkit holds only the truly valuable tools that testers can use repeatedly. Low-value tools are closely examined and removed from the set in favor of tools that are more powerful and often multifaceted in what they offer.

 

12. Compliance-based testing (Section 508, HIPAA, SOX, among others) will continue to rise, formalizing the need for independent validation and verification.

 

How Is a Tester’s Career Shaping Today?

A tester’s career today is more exciting than ever before. I say this because opportunities abound. However, these are not always easy and straightforward opportunities. Gone are the days when a tester could afford to sit back reactively and wait for opportunities to come his way. Today, he has to chalk out his own career path and go beyond the bounds of what his manager may define for him. When he does this, the industry is ready to welcome him with open arms into varied areas of niche specialization that align with his interests and capabilities.

 

Outside of his core test team, the other entities from whom he can take inputs include his product team, stakeholders, end users, and market entities—forums, conferences, discussion groups, social media, etc. With such vital sources of information, a tester’s career today has become one that he can shape or destroy himself—both are in his own hands, and now is the opportune moment for those with the zeal to make the best use of what these opportunities have to offer.

 

Management is also very receptive to suggestions and inputs coming in from all entities. A tester’s career today is not merely dependent on what he does but also on how he represents it to the relevant people. He thus needs to be able to both “walk the talk” and “talk the walk” to ensure he has an edge in the market and is able to truly deliver products of exceptional quality.

 

When all of this falls into place, his career progression will automatically be taken care of along the way. Additionally, all of what we have discussed in this blog—be it “how to thrive” or “what trends we are seeing”—are actionable inputs for the tester to work on toward giving his career a positive facelift.

 

We Will Start Off by Talking about a Very Controversial Question:

“Is Testing Dying and Will It Cease to Exist in the Coming Years?”

There is no direct answer to this question, except to strongly say that all lies in the perspective of the testing group that is involved. If the team happens to be one that is content with its current style of operations and is complacent or not willing to move on and align with the changing needs, then yes, I can strongly say testing will die soon.

 

But if the fraternity at large and testing teams at their small levels understand the changing facets of software quality and work toward customizing and adopting them in their own spaces of operation in possible ways, the future not only exists but is also bright. Such agility is the need of the day today and into the future to ensure quality is built into the products up front. Today, release cycles have already shrunk from several months to just several weeks for many products.

 

This will only continue to shrink further in the coming years, necessitating that quality be absolutely nimble. However, since quality has become, and will continue to be, a collective ownership of the team, the test team will have to create its own value proposition to continue to position itself on the team.

 

Also, testers need to take on bigger and better tasks, primarily focusing on continuous integration and delivery, which means more overall automation (both at a test case and at a process level). If as a fraternity we are able to do this, testing will not be a dying profession but a thriving one, with a much better facelift and positioning than it has today.

 

The Dynamic Landscape Will Continue to Become Increasingly Dynamic

The influential factors we have been talking about at varied places in this blog will continue to strengthen, making the landscape even more dynamic than ever before. Technology is becoming omnipresent. There is hardly any discipline today that is untouched by technology. Also mobile, social, cloud, and analytics-based computing will continue to engulf all new areas that technology is seeping into.

 

Continuous integration and delivery will make the development landscape completely dynamic, forcing the team members to be in an ever-ready state to take on tasks on demand. Such dynamism is not just individual driven but also process and effort driven, making this the most important trait for the entire team to inculcate in the coming times.

 

App Development Will Soar

Mobile app development will reach newer heights. As of October 2015, in the U.S. Apple App Store alone, the number of active applications available to download was close to two million. While games, education, business, and entertainment are some of the top categories for app development, newer areas, especially healthcare and fitness, are fast catching up. The market is very bullish, where anyone with a novel idea can see it take shape, with ample resources available to build the application.

 

That being the case, these numbers will only continue to soar—app development will not stop with just mobile devices. Wearables and augmented-reality-powered devices will also bring in their own apps, making this a huge market. Testing will have to align with this growth since, in many cases, even leading retailers may choose to offer just an app rendering rather than a web application.

 

All of this calls for a completely different mobile strategy—one that is very collaborative, pooling devices and other test resources at short notice, and even requiring a very active pool of crowd testers to be maintained. More than the test effort itself, testing teams will have to showcase a state of being ever ready, as most test passes may last just a day or so. This is something we are already seeing today and it will continue to intensify in the coming times, making it inevitable for testers to be on their feet and able to take on tasks on demand.

 

Testers Will Coexist with Crowd Users

As we move ahead in the world of software development and testing, the lines the tester holds with varied entities will blur, and one such group is the end users. Increasingly, testers will collaborate and communicate with end users not just to get their feedback but also to leverage them as crowd users. Gone are the days when testers felt threatened by such external entities coming in, raising a sense of insecurity about their own jobs.

 

Testers now understand the value crowd users bring in and how that will help them further improve the quality of the product under test. They will increasingly understand that with unique value propositions that each of the groups brings in, they will together be able to ship the product sooner and with the desired levels of quality. Even if this crowd tester base is not directly the end-user base, there will be crowd testers who are domain specialists and other testers themselves from across the world who will help bring in additional test coverage through their efforts.

 

Testing Fraternity Will Work to Seek Balance in a Number of Areas

Gone are the days when you as a tester would take everything on your plate and work in an isolated manner. Today, the need of the day is the act of balancing—balancing efforts between several groups of people, balancing practices, processes, tools, etc., all with the common goal of product quality. This will intensify further in the coming days, where the tester will have to work smart in striking a balance across multiple variables. For example, testers will increasingly take on the balancing act between the following:

  • What they do and what the crowd testers do, including what, when, and how to engage crowd testers.

  • What coverage to achieve on real devices and what should be done on emulators and simulators.

  • When to use commercial tools and when to use open-source tools. While the debate goes on at an industry level as to which is better, the new trend is for each of these groups to acknowledge and embrace the other, and coexist. For example, at a panel discussion we were part of at the UNICOM conference in India in 2015, the topic under debate was exactly this. It was welcoming to see the commercial tool group talk about their contributions to the open-source world, and the open-source group acknowledge that they sometimes build their solutions on commercial tools. Today, both sides understand their strengths and weaknesses, and such awareness is great for the industry at large to benefit from.

  • When to merge and collaborate with the rest of the team members and when to work in isolation to retain their independence.

 

Increasing Collaboration in All We Do

In all of this coexistence, the act of balancing will make testing not just an act of quality assurance and confirmation but an act of collaboration. We will increasingly become social testers who collaborate well with varied entities, helping us cross-share responsibilities, ownership, and tasks. Testers and the rest of the product team will become increasingly mature, understanding and seeing such collaboration in a positive light rather than feeling intimidated, as if their space were being trespassed upon by other entities.

 

Manual Testing Will Not Disappear but Will Become Completely Niche

As testers, the other question we often hear today is whether manual testing will disappear in the coming days. While it is true that a lot more automation will become mainstream given the world of DevOps we are moving into, manual testing will not cease to exist. With the changes in technology and newer applications of such technology—for example, wearable computing—a lot of field testing will be necessitated in the coming days. Such testing will require more manual effort.

 

Exploratory testing, which will largely be done manually, will become increasingly valuable. What is important to understand is that manual testing will be limited to the core minimum, while the rest will move into an automated mode. A manual tester will have to elevate himself to learn the art of automation, too.

 

Those remaining in the manual testing field will be the more experienced, select few who can take on the very-high-end manual tasks of connecting their efforts with the larger business goals, test strategy, and planning. The lower-end manual test tasks are what will cease to exist. Thus, it is important to accept what will go away and what will remain, so testers can train themselves accordingly for the coming days.

 

Nonfunctional Areas Will Become Very Important Including Compliances Related to Them

As users become increasingly connected with the product under development, their expectations from the product go up. That is exactly where we are today and will increasingly move into, in the coming days. This trend will necessitate a greater push toward nonfunctional testing. Performance, security, accessibility, usability, and localization will all become very important areas to test for in line with the priorities of core functional testing.

 

Government-mandated compliances and standards will increasingly be enforced for nonfunctional test areas to bring in consistency in software development. There will also be domain-specific mandates in areas such as banking, health care, and insurance. We are already seeing mandates such as Section 508, DDA, SOX, and HIPAA, which will be enforced further, and newer ones will enter the market, making it all increasingly regulated and focused on the experience as well as the safety of end users.

 

Automation Will Become an Increasingly Integral Part of Software Testing

The current ongoing push to automate more will only further rise in the coming days. The talk is all about automation today and it certainly is not baseless. In the world of DevOps that we operate in, the amount of direct human involvement needs to be minimized to just the core strategic tasks. Everything else needs to be automated.

 

Manual testers at the lower end of the software testing career pyramid must ramp up their test automation efforts; failing to do so may stall their careers very soon. The idea in all of this is to automate to the largest possible extent so as to leave time for core tasks such as test strategizing, user connect, exploratory testing, and monitoring the entire test process to keep the engine running smoothly.

 

And even in test automation, the push will increasingly move toward as much reuse as possible. For example: can my functional test scripts also be used for other purposes such as security and accessibility testing? Are my tests fully automated, or am I still doing certain tasks manually? Are my tests integrated well with my test case and defect management tools?

 

Also, for healthcare and other such critical software, it will become important to simulate and automate as much as possible, as real-life testing may not even be feasible. If one were to look at the one big change software testing will see in the coming years, it is the steep rise in test automation—both test execution and test process automation.

 

Agile Will Become a No-Brainer but Customizations Are What Teams Will Need to Understand

Development teams, including test teams, are already used to the agile style of development. This is a no-brainer, where we now understand what it takes to deliver the agile way. Fast time to market, reduced cost of operations, focus on quality, and end user–focused development are all well understood and appreciated today.

 

The need for these will be further intensified in the coming days. Today, continuous integration and delivery are not very widely adopted. While the awareness exists, adoption is still very low. This will change in the coming days. Teams will not only embrace these concepts but will increasingly work on customized models to align with their market and user needs.

 

As for quality, this will mean on-demand work. The automation suite has to be robust, scalable, of high quality, and maintainable, so that tests can be run anytime and invoked from anywhere. Customizations and changes will happen even between releases within the same product to get teams more ready to meet market needs.

 

Agility will become the norm in the team’s operations, people’s mindsets, and, for that matter, anything that is done, so that teams are in a state of ever-readiness to tackle any situation. This will make them both proactive and effectively reactive, giving them an edge in the market.

 

Testing Will Not Be Confined to Just Your Core Hours of Work in Office Premises

Everyone wants a work-life balance, right? Who doesn’t? However, with the way software development and quality are shaping up, it will soon become very difficult to confine testing to just 8 hours of work on office premises. A lot of testing use-case scenarios will happen on mobile devices, which are integral to all of our daily lives today. That being so, a tester will increasingly see connections between his or her core work and life outside work.

 

For instance, the tester may have several touch points with the app’s social presence even when he is outside work. At a social event, for instance, he may meet users of the product. The tester has to be ever ready to seek and take feedback on the product even when he is not at the office, and the touch points he has to do this will only continue to grow in the coming years.

 

Will Metrics Still Be Used? For Such Short Releases, How Do We Connect the Dots with the Past?

These are valid questions the testing community will face. Given how short the release cycles are, the testers hardly have any time to take on the test effort. If so, where do they have time to analyze the data, draw inferences, and make meaningful decisions? This is where the testing community has to get smarter.

 

We need to pick and choose only the most relevant metrics (not even the ones that the industry is using at large but the ones that are relevant to our product and market) and again automate them to make the process easy and less cumbersome for everyone to adopt, given the time constraint on hand.

 

Independent Testing’s Future

As testers, we have gone through a lot of ordeals in the last two decades to establish the status of independent testing. The last decade has been especially challenging as we try to understand and create awareness of what independent testing is all about and how to achieve it in the world of collaborative software development. Testers have had to tread a very fine line to maintain the role delineation between themselves and the developers.

 

This will get increasingly challenging in the coming years given the kind of development efforts that are coming into the mainstream mode. For instance, with a lot of app development, we see a lot of freelance developers across the globe. These are often people who wear multiple hats including that of the ideator, designer, developer, tester, operations, and support. In all of this mix, the sanctity of independent testing is bound to be lost.

 

As a fraternity, we need to be aware of this risk and watch out to ensure independent testing is applied in areas we are involved in. This is a much larger responsibility that extends beyond just the testing fraternity. The software development community at large will need to understand the risk and potential repercussions involved to ensure independent testing is given the positioning it deserves. A very detailed section on the state of independent testing is discussed in the next blog on the state of a tester’s readiness.

 

The Merge between Development Testing and Software Testing

Elaborating on the previous point, given how testers are collaborating with the rest of the members of the product team, the ones with whom they will collaborate the most are the developers. The collaboration will extend beyond discussing user scenarios, test coverage, defects, and so on, to possible areas of overlap in the work as well.

 

For example, small issues and defects in areas such as localization and content may easily be fixed by the tester himself rather than going through the full loop with the team. Testers will work with the developers and the operations team to enable them to take on automated test runs, helping build quality much earlier in the development lifecycle. In cases such as these, there is a clear and cognizant overlap that the cross teams buy into, to help each other for the benefit of the overall product under development.

 

A Twist to Testing Centers of Excellence (TCoE)

The term TCoE is not new. As testers, we know that a TCoE, especially in large organizations with a huge quality effort, helps bring testers together to load-balance their efforts, share resources, and take up knowledge transfer, bringing in economies of scale and operations.

 

Specialized testers were cross-shared, buffer testers would reside in a pool to be leveraged as and when a team needed them, cross-training would happen, and so on. However, in the current day, teams are so busy and occupied on an ongoing basis that it has become important for all of these to be resident within specific teams rather than common to a TCoE.

 

While the model of TCoE itself will only further grow in the coming years, the reasons why TCoE will be leveraged will be quite different from the past. Given how testers are busy in collaborating and working with the rest of the team, TCoE will be more of a springboard for them to fall back on, huddle together with fellow testers, and exchange best practices and lessons learned.

 

It will almost become a parent’s home to a kid that has branched out to go meet and work with the rest of the entities—it will be both a learning place and a feel-good, morale-boosting abode for them to come back to at periodic intervals. If the team at large understands the goal of such TCoEs, which may even exist virtually and remotely, there is tremendous value to be reaped from such an implementation in the coming years, especially to standardize testing efforts at an organization and industry level.

 

In summary, testing will move into a faster-paced, roller-coaster mode, much faster than it has ever been in the past. If we are talking about releases as short as one day, how do we as testers decide what kinds of tests need to be run? We are also talking about the importance of nonfunctional areas, the need to automate, the need to think beyond user stories, and stretching as a tester beyond core work hours.

 

If all of this means the tester is doing much more than before in much less time, is it really a fair share of tasks on the tester’s plate? This question applies equally to other entities on the team, as we are all expected to do much more in much less time. This is now an active call for everyone on the team to be effective at what we do and look for ways of continuous improvement, so we are able to gear up for what the future holds and have an edge, both for ourselves and for our organization and product.

 

As we gear up for this new era in software testing, I want to discuss one case study of a futuristic project that we are working on at QA InfoTech, to give you a sense of the kind of work we are seeing currently and how the team is gearing up to face these challenges and new opportunities. Given the importance of this topic, we will dedicate two blogs to this subject—one on whether the testing fraternity is ready for this change and another on what all of this means to software testers—in that sequence.

 

And before we move on to them, here is a case study of a challenging futuristic project that we are working on and how we are handling the quality assurance efforts for this client.

 

Cover Note and Client Overview

Shaping Up the Future of QA in the Evolving Technology Space

With the paradigm shift from iterative models to agile and now DevOps, there is a sea change in the go-to-market strategy, with a lot of emphasis on quick, reliable releases and continuous delivery. This results in a lot of forward and backward integration in the overall development processes to ingrain agility, flexibility, and reliability. A lot of new tools and path-breaking practices are being developed around continuous delivery to find every possible way to optimize the overall development and delivery process.

 

Keeping in line with this, the QA function altogether is reinventing itself. But this is also bringing up questions about the QA function: Is it an impediment? A necessary evil? Or an enabling and value-adding function to the overall development process? This case study provides insights into one such client where the QA function has gone through a strategic change with the changing times and needs.

 

Client

The client is a global leader in the e-learning domain, especially in the healthcare sector and certifications. The company provides superior content, government-regulated course material for nurses and other medical professionals, and course-driven digital solutions that accelerate student engagement and transform the learning experience.

 

Challenges

Production releases frequently rolled back: The product platform is huge and complex, with a lot of integrated components communicating with each other. Development and fixes to one component need extensive testing across the full platform to validate any impact. Most components have ongoing changes handled by separate project teams. Comprehensive regression tests for the platform before any release had become inevitable. The rush to complete the extensive platform regression tests within the available time resulted in leakage and production rollbacks.

 

Reliability of regression tests: Due to the enormity of the tests and the lack of a directly visible linkage between requirements/features and tests, confidence in the regression tests was low. Also, with ongoing sprints, there was never a time when all tests could be run on the final build. There was always a long freeze period on staging to enable running tests and fixing defects, thus thwarting integration of newer developments.

 

Repeated elaborate deployment planning and delayed production deployment schedule: Due to the enormity of the platform and the lack of an integrated build process, deployment planning was always a mammoth task—this resulted in further delays to the deployment process.

  • Frequent post-deployment issues: With pressure on releases in an agile environment and changes taken in until the last moment, regression tests could never be reliably executed, resulting in defect leakage.
  • Newer features/products were mostly not deployed on schedule, delaying their go-to-market.
  • Lack of collaboration between product owners and engineering teams led to confusion around requirements and deliverables.

 

The challenges discussed here were actually symptoms of a deeper problem. Though the overall development methodology was agile in nature, with a focus on continuous customer value creation, the QA function was not developed and equipped enough in its approach and strategy to support the agile structure.

 

Solution

Though the QA processes were concrete, implementation was falling short due to the lack of an evolved QA structure and tools and the ongoing reliance on old QA practices. To tackle this situation, some gradual changes were made to strengthen the core QA processes. In association with the complete engineering team, the exit criteria were revised to bring in service-level agreements not only at the user story level but also at the sprint and then at the release level. This brought a clear understanding across the board of the minimum quality requirements before a release.

 

The next step was to bring in tools for better collaboration within the QA team and also among the rest of the engineering team and stakeholders—next-generation test management tools were also evaluated along the way. The aim was to have a clear link between stories, test cases, test suites, test data, software builds, automated tests, test execution, and automated test results.

 

In addition, the automation framework was carefully selected, keeping the future in view not only from a technology standpoint but from an approach standpoint as well. On one hand, this made the tests easier to script and easier to comprehend (for nontechnical stakeholders); on the other, it enabled developers to use the tests more proactively and integrate them better with automated software build tools.

 

The approach to test automation was not only to automate functional black-box test cases but also to go a level deeper and automate the services and integration points with the database, to make a holistic and comprehensive test suite.
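
To make this concrete, here is a minimal, illustrative sketch of a below-the-UI check: it exercises a service endpoint and then verifies the corresponding database state over JDBC. The endpoint, connection string, table, and column names are hypothetical placeholders, not the client's actual API or schema.

```java
// Exercise a service directly and then verify the integration point in the database.
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;

public class EnrollmentServiceDbCheck {

    public static void main(String[] args) throws Exception {
        // 1. Hit the service layer directly (no UI involved).
        HttpClient http = HttpClient.newHttpClient();
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("https://test.example.com/api/enrollments"))
                .header("Content-Type", "application/json")
                .POST(HttpRequest.BodyPublishers.ofString(
                        "{\"studentId\": 1001, \"courseId\": \"NUR-101\"}"))
                .build();
        HttpResponse<String> response =
                http.send(request, HttpResponse.BodyHandlers.ofString());
        if (response.statusCode() != 201) {
            throw new AssertionError("Enrollment service returned " + response.statusCode());
        }

        // 2. Verify the integration point: the enrollment row should now exist in the DB.
        try (Connection conn = DriverManager.getConnection(
                     "jdbc:postgresql://test-db:5432/lms", "qa_user", "qa_pass");
             PreparedStatement ps = conn.prepareStatement(
                     "SELECT status FROM enrollments WHERE student_id = ? AND course_id = ?")) {
            ps.setInt(1, 1001);
            ps.setString(2, "NUR-101");
            try (ResultSet rs = ps.executeQuery()) {
                if (!rs.next() || !"ACTIVE".equals(rs.getString("status"))) {
                    throw new AssertionError("Enrollment not persisted as expected");
                }
            }
        }
        System.out.println("Service call and database state both verified.");
    }
}
```

The same pattern extends to any integration point that the UI alone cannot prove, such as message queues or third-party callbacks.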

 

The QA function was split into two parts, with new, evolved roles in comparison to the earlier testing roles: solution analyst and QA analyst. A solution analyst is a crossover between a business analyst and a traditional QA tester. This enabled and further strengthened the product owner’s role, acting as the link between the engineering team and the stakeholders through well-defined requirements and their criteria of done expressed as concrete test cases.

 

The second role, the QA analyst, is an evolved role where the testing team wore multiple hats, from refining test suites to executing them and automating them at the same time to build a robust regression test suite. On one hand this made the QA role creative, and at the same time it helped cut down the manual effort required in regression testing, thus giving QA more productive time for better collaboration with the rest of the engineering team.

 

Approach

In addition to the changes described in the solution mentioned earlier, the whole QA function was based on the following three pillars:

Comprehensive Test Automation

A modular approach was taken toward test automation that linked each test case of a story to its automated test case; all of this was managed through the test management tool. This ensured extensive test coverage. The test suites were grouped in various ways (again through the test management tool) to serve various purposes—smoke tests (build verification), on-demand module/feature tests, sanity tests, sprint tests, and full regression tests (sign-off tests). A sketch of how a single test can be tagged into several of these suites follows.
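
As an illustration of the grouping idea, here is a minimal TestNG sketch in which one automated test is tagged into several suites at once. The class, method, and group names are assumptions for the example, not the client's actual tests.

```java
import org.testng.Assert;
import org.testng.annotations.Test;

public class CourseCatalogTests {

    // Part of build verification (smoke) as well as the full regression run.
    @Test(groups = {"smoke", "regression"})
    public void catalogTitleIsPresent() {
        String pageTitle = fetchCatalogTitle(); // stand-in for a real UI or API call
        Assert.assertTrue(pageTitle.contains("Course Catalog"),
                "Catalog page title should be visible");
    }

    // Sprint-scoped test for a newly added feature; also rolled into regression.
    @Test(groups = {"sprint", "regression"})
    public void filterByCertificationReturnsResults() {
        int results = countResultsForCertificationFilter(); // stand-in for the new filter flow
        Assert.assertTrue(results > 0,
                "Certification filter should return at least one course");
    }

    // Stubs standing in for real page objects or API clients in this sketch.
    private String fetchCatalogTitle() {
        return "Course Catalog";
    }

    private int countResultsForCertificationFilter() {
        return 3;
    }
}
```

With such tags in place, a runner or build-tool invocation that filters by group can pull out just the smoke set for build verification or the full regression set for sign-off.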

 

Also, a set of tests was developed for execution on the production environment to enable quick production verification, as well as continuous production monitoring from a functionality perspective. A complete test architectural framework was developed that catered to the needs of all stakeholders. For example, the TestNG framework enabled easy scripting for testers on Selenium, easy-to-comprehend test cases for nontechnical stakeholders, ease of maintainability and scaling up, and distributed/parallel test execution (through Selenium Grid), and it also covered compatibility tests.
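
A minimal sketch of the grid-enabled side of such a framework is shown below: a base class points each test at a Selenium Grid hub so that test methods can run distributed and in parallel. The hub URL and browser choice are illustrative assumptions.

```java
import java.net.URL;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.chrome.ChromeOptions;
import org.openqa.selenium.remote.RemoteWebDriver;
import org.testng.annotations.AfterMethod;
import org.testng.annotations.BeforeMethod;

public abstract class GridBaseTest {

    protected WebDriver driver;

    @BeforeMethod
    public void startBrowser() throws Exception {
        // Each test method gets its own remote session, so methods can run in
        // parallel across the grid nodes (parallel="methods" in the suite definition).
        driver = new RemoteWebDriver(new URL("http://grid-hub:4444/wd/hub"),
                                     new ChromeOptions());
    }

    @AfterMethod(alwaysRun = true)
    public void stopBrowser() {
        if (driver != null) {
            driver.quit();
        }
    }
}
```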

 

Continuously Refining and Enhancing Test Suites

With the integration of tests with the test management tool (Zephyr), it was built into the process to mark each test case as automated (including test data). The completion of test automation was brought in as part of the exit criteria. The focus was on cutting down at least 50% of the effort in the manual execution of tests developed, sprint over sprint.

 

The focus of the available bandwidth was shifted to continuously updating the automated test cases and leveraging them in different suites based on the priority of the test case. With this, even defects that did not have a corresponding test case were automated to make the test suites robust. The QA analyst continuously worked in close tandem with development teams to learn and understand new services that were developed and automated by them.

 

Aim for Continuous Integration and Delivery

With the changes in the QA approach as well as in the overall engineering team, the challenges described earlier were tackled to a large extent. But still, the complexity of multiple components on the platform and their parallel development posed a challenge. To overcome this, the client adopted a DevOps methodology to further integrate the operations and delivery side.

 

To enable this, the deployment process was streamlined, an elaborate automated build process was put in place, and build management tools were introduced. To further enable the engineering team, QA integrated the test suites required at various phases of deployment as scheduled jobs in the automated software build tool. This not only significantly reduced the manual QA effort but also started providing up-front test results on build stability in any environment, without any human dependency. A sketch of such a phase-driven runner follows.
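
As an illustration of how such a scheduled job can be wired, here is a minimal sketch of a runner that a CI job could invoke with the deployment phase as an argument; it builds a TestNG suite for the matching group and exits nonzero on failure so the build is flagged as unstable. The group names and the reuse of the CourseCatalogTests class from the earlier sketch are illustrative assumptions.

```java
import java.util.Collections;
import org.testng.TestNG;
import org.testng.xml.XmlClass;
import org.testng.xml.XmlSuite;
import org.testng.xml.XmlTest;

public class PhaseSuiteRunner {

    public static void main(String[] args) {
        // The CI job passes the phase: "smoke" right after a build, "regression" nightly.
        String phaseGroup = args.length > 0 ? args[0] : "smoke";

        XmlSuite suite = new XmlSuite();
        suite.setName(phaseGroup + "-suite");

        XmlTest test = new XmlTest(suite);
        test.setName(phaseGroup + "-tests");
        test.addIncludedGroup(phaseGroup);
        test.setXmlClasses(Collections.singletonList(new XmlClass(CourseCatalogTests.class)));

        TestNG testng = new TestNG();
        testng.setXmlSuites(Collections.singletonList(suite));
        testng.run();

        // Nonzero exit code tells the scheduled build job to mark the environment unstable.
        System.exit(testng.hasFailure() ? 1 : 0);
    }
}
```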

 

Client Benefits

The change in QA approach over the last 2 years not only greatly increased the reliability and quality of the production deployments but also significantly helped expedite the QA process, bringing in very short but effective QA cycles. This further led to the quick and early detection of defects, thus resulting in faster deployments. The changes mentioned earlier brought in more confidence in the QA function, helped the product get a quality facelift, and also boosted the team’s morale and positioning as a whole.

 

As testers, it is exciting to see what the future holds. But this is also the time to take stock of the trends and prepare ourselves to brace for such changes.

 

Is Test Automation Becoming Mandatory?

Rather than seeing test automation as mandatory, if we as testers understand the value it brings in the current times, we will appreciate the need for it better and start leveraging it on our own. That will be a true win for software quality, for our careers, and for the overall industry, rather than having our managers forcefully thrust test automation onto us. Such a positive move from the fraternity is what we really need at this time. So, why has automation really become so important at this time?

 

A study shows what respondents have to say about increased test automation—72% say there is better detection of defects, 70% appreciate it for better test control and transparency, 69% recognize the shorter test cycles, and 66% credit it with better test costs. Whatever kind of tester you are on the team, now is the time to consider taking on test automation in whatever ways possible, whether for product quality or for process and productivity improvement, as this is going to make all the difference in software testing in the coming years.

 

With more test automation, continuous testing and quality become possible, the dependency on any given tester is brought down, and testers can refocus their priorities on bigger and better areas of work that call for their core testing expertise and mindset. The good thing is that the test automation technology and tool landscape has become very open. With an understanding of any one programming language, the core testing fundamentals, and experience using one tool, one should be able to adapt fairly flexibly to newer test automation solutions.

 

In several cases, for instance, even in our company at QA InfoTech, we leverage traditional automation engineers to build test frameworks, after which nonprogramming test engineers are able to take on test automation by using behavior-driven test approaches; a sketch of such an approach follows. Such newer approaches empower the large testing community that may so far have been comfortable only with manual testing to take on automation with ease and without any inhibitions. This is important in the coming years as we prepare ourselves for the future.
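
As a minimal sketch of what such a behavior-driven setup can look like (assuming Cucumber-JVM is on the classpath), the framework team writes step definitions like the ones below, and nonprogramming testers then author plain-English scenarios against them. The domain, steps, and class names are illustrative.

```java
// Gherkin scenario a nonprogramming tester would write (in a .feature file):
//
//   Scenario: Nurse completes a certification module
//     Given a nurse is logged in to the learning portal
//     When she completes the "Medication Safety" module
//     Then her transcript shows the "Medication Safety" module as completed
//
// Step definitions built by the framework team to back those plain-English steps.
import io.cucumber.java.en.Given;
import io.cucumber.java.en.Then;
import io.cucumber.java.en.When;
import org.testng.Assert;

public class LearningPortalSteps {

    private boolean loggedIn;
    private String completedModule;

    @Given("a nurse is logged in to the learning portal")
    public void nurseIsLoggedIn() {
        // In a real framework this would drive Selenium or a REST client;
        // a flag stands in for the actual login flow in this sketch.
        loggedIn = true;
    }

    @When("she completes the {string} module")
    public void completesModule(String moduleName) {
        Assert.assertTrue(loggedIn, "Must be logged in before completing a module");
        completedModule = moduleName;
    }

    @Then("her transcript shows the {string} module as completed")
    public void transcriptShowsCompleted(String moduleName) {
        Assert.assertEquals(completedModule, moduleName);
    }
}
```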

 

Will Manual Testing Cease to Exist?

This is a long-debated question. The focus keeps shifting from one side to the other, but at the current time, our test engineering processes and their alignment with the development cycle have stabilized enough that we can say with certainty that manual testing will not cease to exist.

 

Manual testing will further be elevated to become a niche—a niche where the focus will be on newer testing techniques that primarily look at end-user expectations; additional coverage scenarios that are brought in, in an exploratory manner; and field and in-house studies that look at nonfunctional test areas such as usability and accessibility.

 

Manual testing will be increasingly left to the more experienced testers who are able to look beyond the core requirements in not just evaluating the current product but also offering suggestions on its potential future implementations. It will become a proactive test cycle that simultaneously brings in requirements for subsequent considerations.

 

Entry- and midlevel engineers who are into manual testing today should understand this and get themselves ramped up on test automation, as clearly we will see more and more automation in the basic test layers in the coming years—whether it be automated build verification tests, sanity tests, regression tests, and so on.

 

Also, while the knowledge of a programming language is a huge advantage in picking up test automation skills, the lack thereof doesn’t necessarily restrict one from working on test automation. Today, there are ample test automation frameworks, tools, and utilities that enable amateur automation engineers and manual testers to step into the test automation space. This would be a good place to start with and gradually ramp up on a programming language that best meets the needs of the work that the tester is involved in. In addition to this, once a manual tester, you are always a manual tester in some sense.

 

So even if you move to test automation, you can see this in a positive light where you are able to automate scenarios to increase your productivity and coverage and simultaneously free up some time for out-of-box manual testing. Also, areas such as bug bashes, session- and charter-based exploratory testing, and crowdsourced testing are all becoming very valuable in bringing in more test coverage—these are tasks that are best done manually. With the increasing focus on end users, competitive positioning, and the shift-left approach to engineering, manual testing will soon elevate itself to a niche that coexists with test automation.

 

Do I Have to Be a Domain Expert?

The days when a tester could say, “I understand test strategy, planning, design, and execution; give me any task and I would be able to take on the test effort,” are long gone. Such a generic tester can still exist today but will not be able to thrive. He will merely survive. The domain knowledge of the industry that a tester works in is becoming increasingly important.

 

The workflows specific to the domain, compliances, checklists, user expectations, and target market are all becoming important to understand, to tailor a custom test effort that can differentiate the product in the marketplace.

 

Since the organization anyway has the domain expertise, it is not going to be difficult for the testing group to access the right resources to ramp up on the domain knowledge. However, oftentimes the domain scope is very large, and the domain experts may be so deep into the subject that they are not able to come down to the basics in knowledge sessions.

 

In such cases, the team can look at having official training imparted for the group and also target some domain-specific conferences to attend, as such forums have a mix of expertise levels to pick from. You may not need to become a domain expert, but being privy to the domain’s core fundamentals and leveraging them as required to ensure app quality is becoming inevitable—both for the tester’s performance and for the product’s quality.

 

How Do I Stay Current and Look Ahead to Give Myself an Advantage?

Staying challenged at work in any discipline can be difficult day after day, and software testing is no exception. On one hand, testers have to keep their heads down while working on tight schedules and perfecting the quality of the product under test. But on the other, technology is advancing at a rapid pace, and testers have to ensure they are not lagging behind in their skills. Testers need to constantly strike a balance between these two demands to stay challenged.

 

PractiTest’s extensive State of Testing 2015 survey talks about inputs directly received from modern testers. Based on the responses that came in, the study calls testers “social beings” who often depend on sources such as social media, blogs, online communities, conferences, magazines, and competitions to keep up with the changes in the testing world.

 

They are also adopting new practices besides the core scripted manual and automated testing techniques, including user simulations, paired testing, user-coordinated beta and crowd testing efforts, and bug hunts. These practices help them stay better connected with end users and learn more on the ground.

 

The tester’s role has itself expanded beyond the usual day-to-day testing effort. Testers are now required to wear multiple hats and handle new responsibilities such as test deployments, customer training, and developing internal tools. The tester’s role in DevOps is also changing due to the demands of continuous integration.

 

While the debate of whether a tester needs to learn to code continues, the ones who can understand the system internals and the domain of the product under test will definitely be seen as a cut above the rest. The need to innovate is being felt in all disciplines, including software testing.

 

While all of these opportunities are exciting, how are testers managing and embracing them amidst the time challenges they have? The smart testers are integrating these learning solutions into their daily testing responsibilities. The survey mentioned earlier also talks about a very high number of testers learning hands-on, on the job, and through peer mentoring, and a significant number are also teaching themselves through blogs and online communities. These practices of on-the-job learning and mentoring also indirectly help the testers grow their communication, leadership, and management skills, which are very valuable today as well.

 

Will a Tester Be Embraced and Accepted into the DevOps Fold?

Times are changing today. It is a given that there is a need for more collaboration among the members of a product development team to survive in the marketplace and maintain an identity of their own. Such collaboration yields outcomes not just in the bonding of the team but also in the quality and competitiveness of the product that is released.

 

While it is not rocket science to understand the need for such collaboration, the true challenge is in implementing such a model, especially between the testers and the rest of the product team, in an objective manner that is a win–win for everyone involved.

 

Obviously, the tester is no longer involved in merely executing test cases and reporting results. He has moved upstream, along with the test processes, and is involved right from the product design stages. For instance, application performance considerations are being discussed up front and built in rather than being merely benchmarked and tuned later on. And the tester plays a critical role in such discussions, bridging the gap that long existed between development and quality.

 

While it is still the tester’s neck that is on the line if quality issues are found in production, the product team at large understands the role it plays in contributing to a quality release. This is a welcome change, and the tester is now transitioning into a role where he empowers the rest of the team to own quality in whatever ways possible—whether it be helping the developer build unit tests that are traceable to product requirements, enabling the build engineer and developer to leverage a powerful set of automated tests for quick sanity checks, etc.

 

All of these create a great opportunity to bond in the team, but if not handled well can also become a breeding ground for strained relations. One may see the other as trespassing into their scope of operations, taking undue charge of their team members, all of which can adversely impact the morale of the team and the quality of the product under development.

 

So, here is a situation where we understand the benefits of implementation but also the challenges in doing so—who is responsible for resolving this and embracing the testers effectively into the DevOps fold in this new role? This is a combined responsibility of both the testers and the rest of the product team.

 

Let’s start first with the rest of the team. It is certainly a mind shift where they now need to understand what the tester’s new role is all about and why such a change is important in today’s world of continuous integration. It is also equally important to understand this at an objective level to ensure there is no resentment or question of insecurity about one’s own positioning in the team, given this new role of testers.

 

Oftentimes, such doubts linger not just among the more junior members of the team but even among seniors and the more experienced. It is important to ease such questions and put to rest any such insecurities sooner rather than later, to promote the right level of collaboration in the team. This is best done by senior management and could be done in forums such as all-hands meetings, among other agenda items, to explain the changing role of a discipline at large.

 

This further needs to be followed up by smaller and more focused meetings among individual teams either at a product or a feature level, where questions and concerns if any can be addressed. The product development team should by now be at least partially ready to accept the testing team in their new avatar to enable DevOps.

 

While the preceding discussions start off the implementation on the right note, a lot has to do with how the testers take it along from this point on. They need to understand the true meaning of empowerment rather than seeing it as a right conferred on them. They need to work amicably with the teams to help them understand the true meaning of quality and what they can do to contribute. Along the way, they can bring in some fun through practices such as weekly or monthly exploratory testing exercises and bug bashes, which will also help the rest of the team understand testing practices and management systems in an effective manner.

 

It is important for the testers to showcase what other new tasks they are taking on their plate—whether it be interacting with end users, focusing on building quality not just in areas of defined requirements but also in areas of end-user expectations, evaluating the quality of the product in line with competition in the market, looking for more optimization in test efforts, enhancing levels of test automation, etc. It is also in the knack and savviness of the tester to work cohesively with the product team in enabling quality, additionally leveraging the tools he has wherever possible.

 

The question that often restrains these things from happening is, “What if I don’t get credit and recognition for what I have done?” Here, I am not asking team members to give up on their career progression goals; once you understand that it takes two people to either make or break the relationship and apply this in a constructive manner to truly collaborate, it is not just one individual’s career, but the entire team’s progression, that will get a positive facelift.

 

All of this collaboration certainly calls for free-flowing knowledge, sharing of tools and frameworks, ease in accessing people whenever needed, and a true belief in the model of teamwork. When this happens, even issues, if any, can be mutually resolved in a mature manner, thereby truly accepting a tester into the DevOps fold.

 

Is Independent Testing Dying?

Amidst all of this buzz about upstream quality, increased collaboration among team members, freelance app/mobile development, and collective ownership of quality, it is exciting to see more people partake in contributing to a quality release. But the underlying concern that cannot be ignored is whether independent testing is withering away.

 

As testers, it has taken us a long time to establish the status of independence in testing—a status where unbiased test coverage and evaluation are done by a group of people who have not been involved in product development, at least in the given release. ISTQB calls independence a range rather than a condition, swaying all the way along the spectrum from “no independence” to “full independence”:

 

  • 1. At one end of the spectrum, the absence of independence is where all testing is done by the programmer.
  • 2. Moving along, we have integrated testers, who closely collaborate with developers and report to the same development manager.
  • 3. Further along, you have testers working outside of but with development and reporting to the same project management office.
  • 4. At the extreme end, you have full independence, where testers report to a completely independent business head.

 

Across these four positions, outsourced test vendors can fit in anywhere in the spectrum. When independent testing first evolved in the late 1990s, testers moved from state 1 to state 4 directly in this spectrum. Now, we are gradually moving inward into states 3 and 2, which is why there is fear about whether we will lose, or have already lost, our independence.

 

Undoubtedly, there are organizations that have a merged role of developers and testers, where people assume cross functions. So, is independence really lost in such cases? With due care and execution from the testing fraternity and sensitization of the product team, independence can be retained and resurrected in these challenging times. This is going to be very important in the coming years as the software development landscape changes significantly. Let’s see how.

 

Independence Need Not Come in Just from Testers

We often think the independence factor comes only from the testers, whereas in essence it can come from anyone who has not directly worked on coding the module under test. It could thus be designers, build engineers, business and marketing team members, external team developers, end users, crowd users who act as testers, or domain experts, etc.

 

When such a strong independent team is put together in addition to the core test team, the unbiased evaluation is able to cover new scenarios in a shorter time and at a lower cost. Freelance developers also need to understand this, as studies show that the average shelf life of a mobile application is only 30 days.

 

Given the time and effort that goes into such development, you would agree that is a dangerously low number. If this is to be improved, independent and unbiased testing is important. Also, even in teams where there are no testers, at least a round-robin style of development can be adopted, where one may be a developer in the current release but a tester in the next, and so on.

 

Understand the Dotted Line in Role Delineation between Developers and Testers

A couple of years back, one of our employees had written an article for a leading software test publication on this topic. While collaboration between testers and the rest of the team is inevitable and valuable in its own ways, it is important to tread the thin line of distinction between developers and testers very carefully, failing which there is a good chance of team morale issues, impacted product quality, role trespass, etc. It is thus important to identify areas where such sensitivity and role ambiguity exist and play carefully.

 

For example, a tester may sometimes be required to suggest solutions for the defects filed—herein, he may be expected to make some small changes especially in content files for locale fixes. Similarly, he may be required to help the developers identify unit test scenarios, take on static code reviews, etc.

 

Likewise, developers who are using the test automation suite may notice areas of improvement to work on. All of these are sensitive touch points where the entities should be careful to ensure they do only the bare minimum that is required. Anything over and above will dilute their focus and waste their time and may also create insecurities among team members.

 

Understand What Role a Tester Should Assume at This Time

As part of maintaining independence, if testers understand their core role and the add-ons that will help bring in value, we can rest assured that independence can be preserved for the immediate present and into the future. Here, the tester has to appreciate the need for and importance of test automation, understand where manual tasks come in handy (especially for nonfunctional testing such as performance, security, usability, and accessibility), and appreciate test centers of excellence—and how each of these will help tie their efforts to the overall product quality goals.

 

With care taken on these fronts, independent testing is here to stay, and for the right reasons: to uphold product quality and the tester’s career.

Additional Best Practices to Adopt to Empower Ourselves for the Present and Prepare for the Future

 

Stay Hungry, Foolish, and Continuously Challenged

This is an adage that became very popular when Steve Jobs quoted it in his famous Stanford commencement address. In today’s world, amidst the constraints we work within, staying hungry and foolish is even more important. It can help you stay challenged in your testing career and advance your professional and personal growth. Given how dynamic the technology landscape is, an ongoing culture of learning and staying challenged is what will make all the difference in the coming years.

 

Adopt Continuous Improvement

A mere desire to learn without knowing how to go about it will not get the tester very far. In the self-managed teams that we work in today, the tester needs to devise his own plans to embrace continuous improvement. The best way to achieve this would be to regularly introspect test processes; evaluate how other teams are working; keep tabs on the latest in the industry through news feeds, forums, and conferences; and evaluate which of these would make sense for him and the team. While this is connected in one way to the earlier point, this is more focused on an actionable set of practices rather than the desire to stay hungry.

 

Work Is Not Just a 9–5 Job

As we further empower ourselves for the coming years, it is important to really get out of the 9–5 working-day mindset. Granted, in the IT industry a strict 9–5 day does not happen very often, and we are typically working long hours, taking calls, and accessing e-mails from home. However, this point is about more than the out-of-office hours that we clock in.

 

Quality has to be ingrained in the tester’s mind—today’s test scenarios are all around us; users are all around us. You could, for instance, take a cab ride to the airport, and how the driver uses specific mobile applications may give you ideas for your own application. News items—what you hear, see, and read—can give you ideas. Stay curious and work on connecting external events to internal processes, practices, and scenarios.

 

1. Think customizations: Quality is increasingly becoming a customized function. Software development itself has become a very customized area of work. While organizations want to learn from industry best practices and adopt trends, they understand the value of customizations. For example, a survey shows 92% of respondents leveraging agile development practices. But these are not “one size fits all” adoptions.

 

A lot of customization is taken up to align with the organization’s requirements, user needs, specific market parameters, a domain under consideration, etc. As a tester, similarly, it is becoming increasingly important to think customizations, whether it be in testing processes, tools, or frameworks to ensure you are effective and productive on an ongoing basis.

 

2. Be ever ready: Whether it is a one-day pass, one-hour pass, a customer interaction, or a partner collaboration, testers will have to increasingly be ever ready on the job. It is also not just about being instructed on what needs to be done. As testers, we need to be ever ready to consume things around us and see how they can be translated into actionable items to improve what we do and the quality of the product under test. Such diligence, watchfulness, and attention to detail will differentiate the best testers from the rest and also prepare the fraternity as a whole for the coming years.

 

As we question our empowerment for the present and readiness for the future, a key thing to remember is that all of this readiness is not necessarily sought externally. A tester has to internally look for and prepare himself too, as a lot depends on the tester’s mental readiness and subsequent efforts to align with the required change.

 

Question one’s capabilities for ongoing continuous improvement: A tester’s role today is certainly at a complex crossroads. User expectations are changing, test matrices are expanding, timelines are shrinking, and budgets are being cut—all of which are forcing the tester to reinvent himself.

 

Some of the veterans of the software testing industry formulated a course on rapid software testing to address exactly this scenario. What if someone were to give you just an hour to test a product? While everyone understands that complete coverage is impossible in that time, the question is how smart you are as a tester in optimizing the resources on hand to achieve the best possible coverage. This is increasingly becoming the need of the day and is covered well in topics such as rapid software testing.

 

As a tester, it is absolutely important to constantly question one’s skill set, see whether it is up to date with the need of the hour, and, if not, seek opportunities to bring in ongoing continuous improvement. Today’s tester needs to resell himself—we are at a stage where, on one hand, testers are being stepped up as quality leaders, while on the other, their role is questioned in some cases.

 

The tester has to be able to speak up for himself, explain what his role exactly is, and what would happen if he did not exist. The list that I outline in the following will help him do this and, in some sense, also help him chalk out his own role depending on the needs of his organization and product:

 

1. A representative of the end user on the product team, translating their requirements into suggestions and enhancements to build into the product.

2. True evaluator of competitor offerings in comparison with the richness of one’s own solution.

3. Automation enabler (whether he does the automation himself or it is taken up by someone else, such as a developer) to bring in repeatability, consistency, and precision in the quality process.

4. Strategy designer and implementer to align the quality goals with other business goals in true belief that quality is an important and inevitable ingredient in the mix to a successful market launch.

5. Constructive destructor to catch as many issues as possible before the user notices them.

6. Enabler to quickly and effectively fix issues caught both in testing and live environments and help the team understand the user impact of reported issues to ascertain the defect fix priorities.

7. An optimizer that balances varied testing scenarios against available time and budget to maximize test coverage.

8. Help the team understand the quality risks associated with the changes they make, in an Agile environment.

9. Collaborate with the programmer to catch issues as they get into the product, rather than catching them days later, in the current compressed development cycles.

10. An unbiased entity that can bring in conscious, collaborative, and continuous quality into the product under test.

 

11. Explain how collaboration with the developer will happen—you will understand system internals and contribute to a quality development effort—and why your unbiased test cycles are important for the larger emphasis on continuous quality.

 

Savvy team member: The tester has always been the one with the most touch points for communication on the product team. Now, with a role where the tester is a quality advocate on the team, he needs to be increasingly savvy in his role. He will have to deal with a lot of sensitivities along the way as he helps his team members own quality.

 

Herein, a tester’s role calls for maturity to be able to handle all team members in a savvy manner, not losing focus of the larger quality goal. Especially in areas where he walks a thin line, such as the role delineation between a developer and a tester, there is room for conflict if the situation is not managed well. The tester has an important role to play to empower team bonding.

 

At QA InfoTech, we publish a quarterly magazine dedicated to quality. We also have external contributors sharing their thoughts on the latest and greatest in quality. In a recent edition, we had quality leader Rahul Vishwaroop from Adobe share his insights on what 2016 may look like for a tester.

 

As a tester works on his skills to be a savvy team member, it is important to understand that he will also have to part with some of the tasks on hand and be ready to take on new ones. A give-and-take approach thus becomes important. One of my work colleagues gave a keynote on this topic at STC 2013 (a leading testing conference in India) on The New Gives and Takes in a Software Tester’s Role.

 

This was very well received, and the full presentation is available online. The crux of the message is that a tester today has to revisit his role on a more frequent basis, to give away tasks that no longer make business sense in the current day and instead take on tasks that are more relevant. Also, some of these are tasks that the tester gives away completely, while others are tasks that you swap with another member of the product team. The presentation talks about how to give away the following in totality from a tester’s plate:

 

1. Detailed documentation and test artifact creation: With the Agile wave in full swing, a common misconception people have is that documentation should be completely given up.

Documentation is indeed important, but what to cut down on is what one needs to look at. For example, “Do we need to have detailed test strategies that are often not referred to later on?” and “Do we need to create detailed step-by-step tests?” are questions testers need to ask themselves. This is also where smart processes can be adopted.

 

For example, technologies such as augmented reality can be leveraged to make a tester’s life simpler and more productive. They can help process test results and log them in test case management tools, saving time for the tester. We did a webinar for EuroSTAR in 2015, talking about how augmented reality can be useful in improving a tester’s efficiency. The full recording of the session is available at https://testhuddle.com/resource/the-connect-between-augmented-reality-and-software-testing.

 

2. Pure script-based testing approach: This includes both manual and automated test scripts. The basic idea here is that once a tester hooks himself to just script-based testing, his out-of-box thinking and creativity soon recede. The combination of a scripted approach and a free-flow or guided exploratory testing effort is definitely worthwhile and is indeed the right balance that testers need to work toward.

 

3. Obsession with age-old test metrics that don’t add any value: In some cases today, even numbers such as the return on investment of test automation are being questioned. Metrics have long helped bring objectivity into a test effort. However, what often gets forgotten is that metrics age over time (sometimes even in short windows) and need to be periodically revisited for their worth and updated as needed. Sticking to age-old metrics is more of an overhead than a source of value.

 

What Can a Tester Give Away to Another Team Member?

  • 1. Build verification test execution can be handed off to a developer.
  • 2. Sanity test suites can again be handed off to the developer, to help take up periodic quality checks and verify bug fixes effectively.
  • 3. Early troubleshooting tests to the operations and support team, to handle field issues with a faster turnaround.
  • 4. Accountability for quality to everyone on the team, so the tester can step into more of an advocate and practitioner role.

 

This list outlines an easy set of tasks that the tester can give away to another member, not necessarily to create more bandwidth on his plate, but to help everyone own quality better. For example, handing off the build verification tests to the developer will enhance the chances of getting a good build to test, sooner, than if the tester took on these tests himself.

 

Also, giving these away to another person on the product team does not mean the tester washes his hands of his responsibilities. He is still responsible for enabling them to use these tests effectively, helping resolve any queries they may have, maintaining the tests on a periodic basis, etc., so that they derive the true value of handling these test suites.

 

What Can a Tester Take on His Plate Instead?

At the end of the day, it is all give and take. If the tester has shed so much from his plate, what can he take on, in line with the needs of the current testing discipline? He should certainly explore taking on bigger and better things, including more extrinsically focused testing, such as more end-user analysis and competitive analysis, to bring in more expectation-driven requirements that are built into the product up front.

 

  • 1. Ownership of building a professional culture for quality
  • 2. Controlled freedom with responsibility
  • 3. Competing product quality evaluation
  • 4. Triage representation
  • 5. End-user issue analysis
  • 6. Role of quality consultant/ambassador

 

While these points are easier to prescribe than to follow, it is important for the team at large, including the management, to understand the importance of this changing role in the new times. They will have to step in to ensure these changes are implemented well and customized to the needs of the organization. If they do not step in, a lot of anxiety, insecurity, and resentment will prevail among the product team, which will further adversely impact team morale.

 

Fluid Role

The word “fluid” or “dynamic” probably best describes the change in the role of a tester. A tester has to be as dynamic as possible today in shaping his role on the go. In conferences that I have been to, a task that is often posed is to clearly list out a tester’s role. I don’t think there is a concrete answer to this, as it changes depending on the organization, product, and user base that the tester is dealing with.

 

At a high level, we can say the tester’s role is to enable product quality, strategize, and implement tests to inject quality into the product from the early stages and represent the end users in engineering both system requirements and user expectations into the product under test.

 

He should take on all these, with the best possible collaboration with the rest of the team, and objectivity through the use of metrics and service-level agreements, to bring in continuous and conscious quality. However, what is becoming important is an element of context. Given the global product base, context-driven testers are very important.

 

These are testers who are aware of the market requirements of the product, the sensitivities of the global markets, and the compliances that are important to adhere to (be they domain based, geography based, or attribute based), and who accordingly bring in a mix of scripted and unscripted test scenarios to work within the time and cost constraints on hand.

 

Such testers are increasingly becoming indispensable, and to be able to achieve such a contextual and creative state, the tester has to mold his role to be fluid enough. He should be able to chalk his role himself under the guidance of his team and managers rather than wait to be told what his role needs to be.

 

Changing Facets in Software Quality That Will Additionally Define a Tester’s Role in the Coming Years

1. Need to resurrect the core value of independent testing: I had discussed this in detail in the previous blog on whether we are ready to handle the changes as a fraternity. The understanding as to why conscious independent quality is important, along with how to collaborate to bring in productivity and efficiency in operations, will together define a tester’s role moving forward. This is precisely why testers need to resell themselves as discussed earlier.

 

2. Automation testing will become mainstream: Automation will play a very important role moving forward in assuring quality. Today, automation is very feasible even for nonprogrammers. Test frameworks enable behavior- and context-driven automation, which is easily understood and implemented by one and all. While some very seasoned and mature testers can still thrive without being involved in test automation, the bulk of us will have to train in this area to redefine our role moving forward.

 

3. Manual testing will become a niche: To reiterate here, a tester needs to redefine his role with the right balance of manual and automated testing to bring in the required quality coverage in the available time and cost on hand. Manual testing will lean more toward exploratory and out-of-box test scenarios in the coming years, while the more predictable ones will be candidates for test automation, among other variables to consider.

 

4. Mobile first initiative: This will increasingly play an important role in the tester’s profile and tasks. Mobile first is a global initiative today, with leading app makers looking to offer app-only solutions. The way we test on a mobile device is quite different—while the scenarios may be common, the tools we use and the matrices we choose are drastically different.

 

The testers of today’s generation (those that have less than 5 years of experience) are more exposed to mobile computing and scenarios and have a better mindset alignment to mobile testing, whereas those of us with longer testing experience have to unlearn and relearn a few concepts to train to think along mobile testing. This will play an important part in defining a tester’s role in the coming years.

 

A Constant Innovator

A tester in his new role needs to be a constant innovator. He is someone who is always looking to enhance test processes, product quality, team bonding, and collective thinking, and to embrace new technologies. A tester who incorporates these in his role today will be better able to differentiate himself from the rest in the coming times.

 

As we wrap up this blog on the changing role of a software tester, I want to leave you all with an interesting and relevant story that truly sums up the message I want you to think about. It is the famous hare and tortoise story that we have all heard as kids, where the hare and the tortoise decide to race. The hare is complacent and overconfident about his running skills and loses to the tortoise because he becomes lazy. The moral is that slow and steady wins the race.

 

There are three other sequels to this story that fit very aptly to our needs here. In sequel two, the hare does some soul searching and finds out why he lost the race, so he invites the tortoise to race again. He is fast and steady this time, winning the race hands down. Our takeaway here is “slow and steady is good, but fast and steady is even better.” So, someone who is faster and equally consistent in an organization will be able to shine better than the others.

 

In the third sequel to this story, the tortoise does his soul-searching. He knows what his capabilities are and tries to see if there is a way for him to win. He finds new playing fields that are dynamic and provide better potential for him to excel. So, he invites the hare to the race again, incorporating a river along the racetrack this time.

 

The hare is fast in this race too, until he reaches the river. He freezes, not knowing how to proceed, while the slow and steady tortoise gets there and swims his way to the finishing line. The moral of this sequel is “identify and leverage core competencies and explore new playing fields for growth and advancement.”

 

In the final sequel, by which time the hare and tortoise have become good buddies, the two decide to run together to the finish line as a team. They understand that doing this together will be more fun, effective, and successful for both of them. So, the tortoise jumps on the hare’s back to get all the way to the river. At that point, the hare jumps on the tortoise’s back to cross the river and reach the finish line. This is the sequel testers need to understand and buy into, the moral here being “teamwork is about situational leadership and empowerment.”

 

The person with the relevant core competency in a situation should take the lead so the group can shine together. A give-and-take approach lets the team win together with much less overall effort but much more fun and collaboration. This approach also adds the elements of fluidity, continuous learning, savviness, collective effort, and situational leadership, all of which shape a tester’s role for success and enhance the acceptance of a quality product in the marketplace.

 

At the end of the day, to quote Shakespeare, it is not in the stars to hold our destiny, but in ourselves. Let’s take control of that destiny, be cognizant of the changes required in the current times, and ride the wave smooth and high to emerge as successful testers and hold up our profession’s brand.

 

The “Ante” for Products

Every industry has standards, best practices, minimum viable launch criteria, or whatever you want to call the “ante” to just get in the game. These are the “whats” of software testing.

Software needs to be

  • Reliable—Does what is expected, when expected
  • Responsive—Got to be fast
  • Available—From just about any device, from just about anywhere, in just about any language, even if I am disabled in some way
  • Maintainable—Needs to be built for change because the world is not standing still
  • Secure—Both my device(s) and my data

 

This is an important list, but not in any particular order.

The plethora of tools and practices that allow us to meet the minimum expectations of our users and of industry is exploding and that trajectory will likely continue into at least the near future. For example, testing automation has permeated every aspect of functionality. From the user interface to APIs and business logic, from accessibility and usability to performance, and from services layers to deeper levels of unit testing, automation has become an integral part of the architecture and delivery process.

 

In some organizations, builds and releases are being automated to create the ability to deliver software in a continuous, steady stream just as fast as programmers can churn it out. In fact, for those continuous delivery organizations, full-stack automation is the backbone that makes it all work.

 

The Cloud, the Crowd, and Distributed Teams

And then there is “The Cloud.” The trend towards commoditized technologies has leveled the playing field such that start-ups can compete with the largest international conglomerates; a brave new world indeed. The glass windows of the isolated data center have all been smashed. The special access of the “Super User” is available to everyone now.

 

A lot of the skills of the SysAdmin have been abstracted into a relatively user-friendly AWS interface. Almost anyone can build out a computer system with “virtually” unlimited scalability and speed. With Moore’s Law still in effect, infrastructure technologies are becoming more abundant.

 

People and teams are all over the globe. Crowdfunding and crowd testing are crowding the market. The cloud, crowd, and continuous delivery are as much about technologies as they are about collaboration, coordination, and effectiveness with people. The cloud may be the place to get things done, but the challenges of time zones, languages, norms, standards, and expectations impinge on the interpersonal trust levels that teams need to move at the pace of today’s business. Teams need to quickly build trust and the ability to communicate, coordinate, and collaborate effectively across the globe. This presents challenges we’ve never had to face before.

 

Many of these trends are already well known, so what is new and different? Keeping up with technology is a bare minimum, but what, as a professional, should I concentrate on learning in order to stand out with my peers?

 

In the past, in general, teams worked in one geographical location. We found help from our trusted network of people with whom we had worked. The world and our projects are far more dynamic now. More often, we need to find unique specialists for our team who can be trusted to deliver and we may never even meet them face to face. We need new criteria for deciding with whom we want to work and our criteria need to include their trustworthiness as well as their technical prowess.

 

Mobile First Workflows

This requires a change in mindset and in the tools being used. As mobile penetration increases and more individuals start using multiple mobile devices, workflows will change completely from the way we know them today. Test engineers need to be able to validate these new workflows.

Reliance on Community and Social Platforms to Solicit Feedback Proactively

The days of shipping products across the fence and waiting for a few weeks or months to receive user feedback are long gone. With the instant reach of social media, reaction to new versions of applications is instantaneous. Test engineers should not only rely on technical support teams to parse this feedback but also bring in individual due diligence with the abundance of information available today. Such a responsive engineering team leaves a positive impression on the company and product.

 

Another aspect testers should consider here is to leverage easy outreach to customers as a tool to either selectively roll out services and gauge feedback or conduct A/B tests with control groups.

 

Agility in Release Cycles

Traditional release cycles of many months or years have compressed to releasing more frequently.

Agile companies like Faceblog release updates to their site multiple times during a day. This requires test engineers to be nimbler in their processes, more judicious about the tests to be conducted, and savvier in taking on calculated risks, all while ensuring the quality of shipping applications stays high. Shorter release cycles lead to challenges such as less dedicated time for testing. Robust application design, testability of the written code, and a high bar for code quality need to be enforced to be able to deliver quality software.

 

Compressed Duration for Endgame Certification

Applications with a large code base need a dedicated window for integration testing. In projects with annual or longer release cycles, this was easily doable. With multiple frequent releases, this window is now compressed. Test engineers need to identify the base set of tests they will always execute. Reliable automation is a must. Focused testing should be done on the areas of code impacted in a particular release; a small sketch of such impact-based selection follows. Any risks identified during this testing should be duly highlighted and followed up on.
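
A minimal, illustrative sketch of impact-based selection is shown below: the components changed in a release are mapped to the test groups that cover them, with the base smoke set always included. The component names and the group mapping are assumptions for the example.

```java
import java.util.LinkedHashSet;
import java.util.List;
import java.util.Map;
import java.util.Set;

public class ImpactBasedSelection {

    // Hypothetical mapping of platform components to the test groups that cover them.
    private static final Map<String, String> COMPONENT_TO_GROUP = Map.of(
            "payments", "payments-regression",
            "catalog", "catalog-regression",
            "enrollment", "enrollment-regression");

    // Always run the base (smoke) set plus the groups for whatever changed in this release.
    public static Set<String> groupsToRun(List<String> changedComponents) {
        Set<String> groups = new LinkedHashSet<>();
        groups.add("smoke");
        for (String component : changedComponents) {
            String group = COMPONENT_TO_GROUP.get(component);
            if (group != null) {
                groups.add(group);
            }
        }
        return groups;
    }

    public static void main(String[] args) {
        // Example: only catalog and enrollment changed in this release.
        System.out.println(groupsToRun(List.of("catalog", "enrollment")));
        // Prints: [smoke, catalog-regression, enrollment-regression]
    }
}
```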

 

Connected Workflows

Gone are the days when software applications had isolated footprints. Almost every application has a reach outside its core desktop or mobile primary interface. Desktop products are connected to mobile devices and vice versa. Mobile apps rely on services to deliver value to users. Test engineers need to understand all the possible touch points of the application they are validating and perform tests that cover those scenarios; a sketch of such a touch-point check follows. There are many variables, and a failure can emanate from any one of the connections. Test engineers need to factor these in appropriately at the planning phase itself.
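
A minimal sketch of such a touch-point check is shown below: the back-end endpoints that the mobile app, partner integrations, and content delivery rely on are probed directly, so a failure in a connection is caught even when one primary interface looks fine. The URLs are illustrative placeholders.

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.time.Duration;

public class TouchPointChecks {

    public static void main(String[] args) throws Exception {
        // Illustrative touch points the product depends on beyond its primary UI.
        String[] touchPoints = {
                "https://test.example.com/api/v1/sync/status",     // mobile app back end
                "https://test.example.com/api/v1/partner/health",  // partner integration
                "https://test.example.com/cdn/health"              // content delivery
        };

        HttpClient http = HttpClient.newBuilder()
                .connectTimeout(Duration.ofSeconds(5))
                .build();

        for (String url : touchPoints) {
            HttpRequest request = HttpRequest.newBuilder()
                    .uri(URI.create(url))
                    .timeout(Duration.ofSeconds(10))
                    .GET()
                    .build();
            HttpResponse<String> response =
                    http.send(request, HttpResponse.BodyHandlers.ofString());
            if (response.statusCode() != 200) {
                throw new AssertionError("Touch point failed: " + url
                        + " returned " + response.statusCode());
            }
            System.out.println("OK: " + url);
        }
    }
}
```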

 

Security

Security breaches and the cost of the resulting failures are detrimental to a company. Building security testing skills is therefore an important aspect for test engineers—now more than ever. Industry-acknowledged certifications and best practices should be studied and implemented as part of the software development process. Security isn’t the responsibility of just software security teams. Test engineers especially can help highlight issues by including tests specifically around security in the test plan; a minimal example of such a check follows.
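
As one small example of folding security into the test plan (and not a substitute for a full security assessment), the sketch below verifies that responses carry the security headers the team has agreed on; the URL and header list are illustrative assumptions.

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.util.List;

public class SecurityHeaderSmokeTest {

    // Headers the team has agreed every response on this route must carry (assumed policy).
    private static final List<String> REQUIRED_HEADERS = List.of(
            "Strict-Transport-Security",
            "X-Content-Type-Options",
            "Content-Security-Policy");

    public static void main(String[] args) throws Exception {
        HttpClient http = HttpClient.newHttpClient();
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("https://test.example.com/login"))
                .GET()
                .build();
        HttpResponse<String> response =
                http.send(request, HttpResponse.BodyHandlers.ofString());

        for (String header : REQUIRED_HEADERS) {
            if (response.headers().firstValue(header).isEmpty()) {
                throw new AssertionError("Missing security header: " + header);
            }
        }
        System.out.println("All required security headers present.");
    }
}
```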

 

Wearable Technologies

Wearable technologies are set for huge growth this year. Gaming consoles, fitness devices, and tools for specially-abled individuals are becoming more sophisticated. Testing these devices and applications requires a good understanding of their intended use. Since some of these technologies are cutting edge, there may not be existing benchmarks to compare against.

 

A fine balance has to be drawn between testing on simulators and on actual devices, and one cannot completely replace the other. Test engineers are encouraged to be comfortable handling and using these devices as a user would in real-world scenarios.

 

Internet of Things

With the increase in applications built on the concept of the “Internet of Things” (IoT), testing them is an area test engineers should be well versed in. With common household appliances like washing machines and refrigerators already adopting these technologies, testers need to combine the use of these devices with the technology backbone that supports their “smartness.” Since these appliances are in the home and all-pervasive, security testing and privacy settings are paramount and should be prioritized over other forms of testing.

Another example of IoT that is set for an explosion in adoption is driverless cars. This concept is being actively tested in many countries and is set to redefine our concept of owning and driving cars, leading to a new world that we can barely imagine right now. How to test scenarios for technology that is still taking shape, with applications that can vary widely depending on the nuances of local geography and demographics, will be interesting.

 

DevOps

The lines between traditional IT teams and engineering organizations are blurring due to the nature of connected services and applications. It is difficult to imagine a stand-alone application today that doesn’t rely on back-end operations to support it. DevOps is a great career move for test engineers. In the agile application development world, the turnaround time to design, develop, test, and deploy is shrinking, and having an engineer who is well versed in test methodologies is an asset to the operations team.

Yes, there are skills one needs to build around the tools used for this function—configuration management, virtualization, app servers, and web servers—but the transition can be made easily. Since DevOps is still an emerging field, there is huge demand for good engineers here, and it should be actively considered by test engineers.

 

I’ll end my piece by saying that a trait every test engineer needs to have is the ability to dream. Often, in our urge to be quantitative, we lose the edge in thinking out of the box and being creative in our approach to test engineering.

 

Concluding Thoughts Summarizing the Sentiments of All Three Experts

In summary of what these experts have had to say, I wanted to highlight a few core takeaways on this question of “where are we heading?”

1. The days of BUFT (big up-front testing) are going away. That said, testing still needs to be up front. It just cannot and need not always be big.

2. Compatibility test matrices cannot be fully envisioned and tested up front by just an internal test team. Testers will have to spearhead the effort in unison with end users who are also brought in as testers.

3. The experience in coding makes a tester even better.

 

4. As a tester, you need not do it all. You have several other entities that are also taking part in the quality effort. You need to manage commitments as a tester to get the job done at the end of the day.

 

5. Continuous delivery will become the norm, not the exception, moving forward.

6. As testers, you will increasingly work toward helping teams innovate and build a competitive advantage.
