Automated Testing Tutorial 2019
Complex, confusing code looks beautiful only while you are writing it, not when you try to understand it later. Code with nesting of more than three blocks is very difficult to debug, especially for a person who did not write it.
When debugging such code, it is difficult to keep track of the state of all variables, so even a simple problem can take a long time to find. This tutorial presents 38+ tips for automated testing, with examples, and explains several techniques and methods that make automated test code easier to write and maintain.
Automated Testing Tip 1:
Follow Naming Conventions and Coding Standards
Beginners usually don’t follow any rules for the names of variables and functions. They name things however they like, without considering that conventions might exist. Nevertheless, almost every language has so-called coding standards, which are recommended practice.
In big projects, such standards become mandatory rules. They must be followed by all programmers since following standards greatly simplifies the understanding of both their own and others' code.
For many languages, such rules have been developed a long time ago and are generally available. You can take them as a sample and supplement them depending on your internal needs. For example, for variables, you can use different prefixes depending on the type of display object this variable corresponds to.
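A minimal sketch of such a convention, in Python. The prefixes themselves (`btn_`, `txt_`, `chk_`, `lst_`) are just one possible choice for illustration, not a universal standard:

```python
# Illustrative naming convention: prefix each variable with the type
# of display object it corresponds to (prefixes are an assumption,
# pick whatever your team agrees on).
btn_submit = "Submit"         # btn_ -> button
txt_username = "user_name"    # txt_ -> text field
chk_remember = True           # chk_ -> checkbox
lst_countries = ["US", "DE"]  # lst_ -> drop-down list

def click_button(name):
    """Helper whose name states both the action and the object type."""
    return f"clicked {name}"

print(click_button(btn_submit))  # -> clicked Submit
```

With consistent prefixes, a reader can tell from the name alone what kind of control a variable refers to, without hunting for its declaration.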
Automated Testing Tip 2:
Do Not Perform Blind Clicks Against Nonstandard Controls
Everyone sooner or later comes across a situation where some self-made control is not supported by the automation tool. After trying a few ways of testing such control and not finding anything suitable, people stop at the simplest way of solving the problem: clicking on hardcoded coordinates. However, this is not always the best way.
Imagine a toolbox with several buttons. Let’s assume that our tool does not recognize the buttons on the toolbox and does not see any properties or methods that would allow us to get their coordinates.
If we hard-code the coordinates for a click inside the toolbar, our scripts will work for a while. Then something changes (for instance, the order of the buttons or their size), and the click lands elsewhere.
One solution is the image-within-image search that certain tools provide. We can save an image of the button (just the button rather than the entire toolbar), then search for that image inside an image of the entire toolbar and click at the coordinates found.
On the one hand, searching for an image is a time-consuming operation, but for small images, the search will not last too long. On the other hand, even if a button changes its position or size, the search will still work correctly.
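The idea behind image-within-image search can be sketched with a naive exact-match scan over 2D pixel grids (real tools use the same principle, usually with tolerance for slight pixel differences; the grids here are made up for illustration):

```python
def find_subimage(haystack, needle):
    """Naive image-within-image search on 2D pixel grids.

    Returns the (row, col) of the center of the first exact match,
    or None if the needle is not found.
    """
    H, W = len(haystack), len(haystack[0])
    h, w = len(needle), len(needle[0])
    for r in range(H - h + 1):
        for c in range(W - w + 1):
            if all(haystack[r + i][c + j] == needle[i][j]
                   for i in range(h) for j in range(w)):
                # Click target: the center of the found region
                return (r + h // 2, c + w // 2)
    return None

# A tiny "toolbar screenshot" and a "button image" inside it:
toolbar = [
    [0, 0, 0, 0, 0, 0],
    [0, 1, 2, 0, 0, 0],
    [0, 3, 4, 0, 0, 0],
    [0, 0, 0, 0, 0, 0],
]
button = [[1, 2],
          [3, 4]]
print(find_subimage(toolbar, button))  # -> (2, 2)
```

Because the click point is computed from wherever the button image is found, the script keeps working even if the button moves within the toolbar.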
Automated Testing Tip 3:
Avoid Copy and Paste
Copying and pasting code in order to reuse it is a common beginner's mistake. When you need to write new code, you remember that you already wrote something similar in the past, find your old code, change it a little, and create a new function.
Then you find an error in one of your copies, correct it, and realize that the same problem exists in every copy. You spend time hunting down all the places where the mistake was duplicated.
Once you face a problem that must be corrected the same way in several places, make sure you consolidate the code into exactly one place. Let it be a function that takes all the necessary options into account, or a function with several parameters, but it must be a single function, with the code in one place.
Do not be too lazy to write code correctly. If you save five minutes now by copying and pasting, you may well spend an hour later fixing the duplicated bug and regret the shortcut.
Have pity on yourself, do not copy and paste. If you use a popular programming language, it is likely that the language already has tools for detecting duplicate code. Use them!
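The consolidation can be sketched like this: instead of three near-identical copies of login code pasted into three tests, a single parameterized helper (the function and step names are illustrative, not from any real framework):

```python
def login(username, password, remember_me=False):
    """One place to fix bugs in the login flow.

    Every test reuses this helper instead of keeping its own
    pasted copy of the same steps.
    """
    steps = [f"type {username}", f"type {password}"]
    if remember_me:
        steps.append("check remember-me")
    steps.append("click submit")
    return steps

# Different tests, different parameters, one shared implementation:
print(login("admin", "secret"))
print(login("guest", "guest", remember_me=True))
```

If the login flow changes or a bug is found, there is exactly one function to fix.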
Automated Testing Tip 4:
Separate Code from Data for Automated Testing
When we write tests, we need a place to store the expected values, which we compare to what the tested application actually produces. When dealing with a single piece of data (one number, text, etc.), we just use variables.
For small datasets (for instance, items in the drop-down list), we can use arrays or lists. But what if there is a lot of data (for instance, several dozen values)?
It is inconvenient to store large amounts of data along with the code, since the data simply obstructs the code. You can put the data in a separate code file, but it is still awkward to work with: text files have no vertical alignment, lines end up with different widths, and the information becomes hard to read.
You can solve all these problems by using the data-driven testing (DDT) approach. This is an approach in which data is stored in a separate file in the form of tables. It can be a database of any format, an Excel or comma-separated value (CSV) file.
Modern automation tools usually support working with one or more of these formats. For many programming languages, there are libraries that make it easier to work with such tables. If in your case there is nothing readily available, you can write simple functions to work with formats such as CSV.
Here are some tips for working with data using the DDT approach:
Open data files from tests in read-only mode. Data files contain model data, so changing it automatically while scripts run is potentially dangerous. If you need to change this data, do it manually, knowing exactly what you are changing and why.
Store only one type of data in each column. A database will enforce this for you; with text or Excel files you will have to monitor it yourself.
Some Excel drivers can behave strangely if they encounter a value of a mismatched data type. In addition, your tests can also fail when data types are inconsistent.
Don't forget to close the connection to the data source file after you have finished reading from it. Leaving a connection open can prevent other tests from connecting to the same file.
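A minimal DDT sketch using Python's standard `csv` module. The file name and the country/code data are invented for illustration; in practice the CSV would be prepared by hand and only ever read by the tests:

```python
import csv
import os
import tempfile

# Hypothetical expected values for a drop-down list, stored as a table.
rows = [["country", "code"], ["Germany", "DE"], ["France", "FR"]]

# In practice this file is created manually, once; we create it here
# only so the example is self-contained.
path = os.path.join(tempfile.mkdtemp(), "expected_countries.csv")
with open(path, "w", newline="") as f:
    csv.writer(f).writerows(rows)

# The test itself opens the data file in read mode only, and the
# 'with' block guarantees the file is closed after reading.
with open(path, "r", newline="") as f:
    expected = {row["country"]: row["code"] for row in csv.DictReader(f)}

print(expected)  # -> {'Germany': 'DE', 'France': 'FR'}
```

The data now lives in a table that is easy to review and extend, while the test code stays short and free of hard-coded lists.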
Automated Testing Tip 5: Learn How to Debug
Debugging is a step-by-step execution of a program in order to detect an error in it. Learning to debug is easy. The basics of debugging can be studied in an hour, but the benefits of using it are huge.
Once I worked in a new team with a tool that was new to me, and I could not step into a function in debug mode. After digging a little, I found that the application editor simply “doesn’t see” where the function is located and therefore can’t step into its source code. I asked my (as I then thought) experienced colleagues:
“How can I step inside this function?”
“No way. This tool doesn’t know how to do that.”
“But how do you debug scripts, then?”
Many novices simply never learned to debug and therefore don’t know that it is possible. That is why debugging is one of the mandatory things I show during the training I deliver to new people on my team.
Here are the basic concepts that you need to study and begin to use:
The setting of breakpoints. A breakpoint is a place where the execution of the program stops, waiting for action from you.
Step-by-step debugging. This is what allows you to step into a function, step over it, run to the cursor, and watch each step of your program unfold.
Viewing values of local and global variables. All debuggers provide mechanisms to examine the value of variables.
Observation of variables and expressions. Debuggers often provide watchlist functionality to let you know when the value of a variable or expression changes.
Typically, there is a detailed description of all the debugging features in the help system of your development environment. Become familiar with that documentation so that you can explore features and take full advantage of what your debugger can offer you.
You can use any programming tutorial to understand the basics of debugging. However, there is one feature specific for automation only. If you are dealing with a variable that corresponds to a display object, the values of its properties will not always match the values during the test run.
For instance, the Visible property can be set to False during debugging, because at that moment the editor itself is a top window. However, while the tests are running, the value of this property will be True, because the window with this element will be displayed on the screen, and the editor window will be hidden.
Automated Testing Tip 6:
Verify All Options of Logical Conditions
When you do verifications in tests with several logical conditions such as the following:
if A and B or C
be sure to check such code for each condition (A, B, and C in this example) and all possible values (true, false).
Quite often programmers overlook short-circuit (“lazy”) evaluation, resulting in incorrect operation of such expressions. Verify each condition, and you’ll catch such problems early.
Also, do not forget to enclose in parentheses those parts of the logical expressions that should be verified together. Do this even if you are 100% sure of the correctness of the computation priority. For example: if (A and B) or (C and D)
The presence of parentheses significantly simplifies the understanding of the code in the future. It doesn’t matter that you understand the computation priority now. What matters is that the person maintaining the code years into the future understands. Make that person’s job easier by clarifying the order with parentheses.
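Both points can be demonstrated in a few lines of Python. The functions `f1` and `f2` are throwaway names used only to compare the two spellings of the expression:

```python
from itertools import product

# In Python, 'and' binds tighter than 'or', so these two are identical:
def f1(A, B, C):
    return A and B or C          # relies on operator precedence

def f2(A, B, C):
    return (A and B) or C        # same logic, but readable at a glance

# Check every combination of conditions, as the tip suggests:
for A, B, C in product([True, False], repeat=3):
    assert f1(A, B, C) == f2(A, B, C)

# Short-circuit evaluation: if A is False, B is never evaluated at all.
def boom():
    raise RuntimeError("side effect that never runs")

print(False and boom())  # -> False, boom() is not called
```

Exhaustively checking all combinations is cheap for two or three conditions and catches precedence mistakes that a single happy-path run would miss.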
Automated Testing Tip 7:
Two Approaches to Writing Tests
There are two ways in which to write tests:
You write a test and, as you write it, you write the general functions and methods that will be used in this and future tests.
You first write common functions and methods and then quickly develop a test that uses everything written before.
Each approach has its advantages and disadvantages. Each approach can be effectively used in different situations.
However, there is one approach that does not work: writing a lot of common code for many tests first and only then starting to use it. The key words here are “a lot”: for example, immediately writing all the functions and methods for a dozen future tests.
Automated Testing Tip 8:
Write for the Present, Not for the Future
In one project, we tried to write code with an eye toward all our possible future needs. We took a big piece of functionality that we were going to cover with tests and started planning the code that we might need.
We thought through functions with parameters and return values; we looked at all the tests and their interaction with various application windows; we argued and reworked our plan, and then reviewed again.
We spent two days on planning and started to write the code. We spent two more weeks writing all the functions and classes, rechecked everything several times, and finally started to write tests.
First, we encountered minor changes that had to be made to the already-written code, since we had not taken everything into account. Then we had to add a few new functions and completely redo some existing classes. Finally, after writing all the tests, we removed all the code that had never become useful.
As a result of all that we wrote for two weeks, no more than one-third of the code (the most elementary of it) remained almost unchanged, another one-third we had to substantially redo, and we deleted the remainder. About a week’s worth of work by several people was wasted.
You should only improve existing code if you are 100% sure that doing so will not affect its behavior or the results of its execution, and if it takes you no more than a few minutes.
If everyone follows this simple rule, our tests will constantly improve, and reading such code will be much more pleasant. Therefore, teach others the same by your example.
Automated Testing Tip 9:
Choose a Proper Language for GUI Tests
Many automation tools offer a choice of several programming languages you can write tests in. And often people try to choose the same language for tests in which the application under test is written. This approach has a number of advantages:
We don’t multiply programming languages in the project without need.
Testers can always ask the programmers for advice.
In case of necessity, programmers will be able to create and maintain tests.
API testing is greatly simplified.
However, choosing the same language for tests as for the product being developed is not a strict rule. It is unlikely that programmers will ever look into the test code, so tests can be written in any language that is convenient for testers.
In addition, the tasks of testers are usually much simpler than the tasks of programmers. Writing a commercial application is much more difficult than writing tests for it. For this reason, allowing programmers to design a test engine can lead to complexities that can be avoided.
Programmers might initially try to design such a system “with a margin for the future,” foreseeing situations that testers will never encounter. Therefore, in choosing a programming language for GUI tests, you should be guided first of all by what your team finds more convenient and easier to work with, rather than by the language your application under test is written in.
Nevertheless, for some types of automation, things can be quite different. In one of my projects, it was decided to implement integration testing for a plug-in that works with mobile devices.
All automation in the project was done using Python. The plug-in was written in C++. We wrote our tests in Python, only to later realize our work would have been easier in C++.
To develop the tests, programmers had to write wrappers that Python could work with (the Boost.Python library was used for this). The testers, for their part, had to write wrappers that converted some Boost.Python types into regular Python types so that they were convenient to work with.
In general, the preparatory work took three months, after which all the necessary tests were written within two weeks. Since neither developers nor testers had had experience doing such tasks, no one could estimate the time needed in advance.
When our little adventure was over, everyone agreed that it would have been simpler to write the tests themselves in C++, which would have taken the same two weeks, and we would not have spent the additional three months developing wrapper functions.
Automated Testing Tip 10:
Do Not Duplicate Tested Application Functionality in the Scripts
An application under test performs calculations and outputs a result. How to check that the result is correct? The first option that comes to mind is to calculate the same in the test script and compare the result with what the application gives us! This approach is incorrect for several reasons:
Calculations can be complex. The programmers have already spent time on them and now you will do the same.
Formulas for calculations may change later. In that case, you will see an error in the report even though the application works correctly, and you will have to make corrections, again spending time on this.
When working with floating-point numbers, the accuracy of calculations may differ in the language that you use for tests and in the language in which the application under test is written. As a result, you will have to artificially customize your calculations so that they coincide with the result of the application.
The correct approach in such situations is to calculate the correct result manually and save it in the script as expected. If one calculation is enough, then write its result directly in the script. If several calculations are needed, use arrays or the DDT-approach.
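A minimal sketch of the difference, assuming a made-up `app_total` function standing in for the application under test:

```python
import math

def app_total(prices, tax_rate):
    """Stand-in for the application under test (illustrative only)."""
    return sum(prices) * (1 + tax_rate)

# Wrong approach: re-implementing sum(prices) * (1 + tax_rate) in the
# test would just duplicate the application's formula, including any
# future changes to it.

# Right approach: the expected value is computed once, by hand, and
# stored in the script (or in a DDT table when there are many cases):
EXPECTED_TOTAL = 32.40  # 30.00 * 1.08, verified manually

actual = app_total([10.00, 20.00], tax_rate=0.08)

# Compare floating-point numbers with a tolerance, never with ==,
# since the test language may round differently than the application.
assert math.isclose(actual, EXPECTED_TOTAL, rel_tol=1e-9)
print("total verified")
```

Storing the hand-checked result also sidesteps the floating-point precision mismatch described above: the tolerance absorbs rounding differences between languages.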
Automated Testing Tip 11:
Each Test Should Be Independent
Beginners often make an error in automation when one test uses data that was generated by another test. A vivid example is CRUD functionality, where one test creates a record, another edits it, and a third deletes it.
The only advantage of this approach is the time saving; however, several disadvantages appear at once:
If the test that creates the record fails for some reason, then the rest will also fail.
In the report, you will see not one error, but several, although the functionality of editing and deleting can be correct.
When the tests are started automatically, it’s not always possible to ensure that they will run in a specific order.
If you want to save a little time and not have to create additional records using the application, you can do it otherwise. For instance, create them using an SQL query to the database, with the application API, or use the database in which the necessary entries are already created.
In this case, to verify that the tests are independent of each other, you can run them in random order, if this can be implemented with your automation tool.
Another approach to solving the problem is to run first the test, which will create all the test data necessary in the future. However, if this test fails, then running all the others is meaningless, and as a result, no test will be started at all.
If you still think that dependencies between tests are the optimal approach in your case, then each dependent test should check the results of the test on which it depends, and should not run if the previous test did not work correctly.
Another possible way to solve the problem is to combine several dependent tests into one.
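The "create your own data" idea can be sketched with Python's built-in `sqlite3` module. The table schema and test names are invented for illustration; the point is that each test builds its own record instead of relying on one left behind by a previous test:

```python
import sqlite3

def make_db_with_record():
    """Each test creates its own record via SQL instead of depending
    on a record created by an earlier test."""
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE records (id INTEGER PRIMARY KEY, name TEXT)")
    conn.execute("INSERT INTO records (name) VALUES ('fixture')")
    conn.commit()
    return conn

def test_edit_record():
    conn = make_db_with_record()   # independent setup
    conn.execute("UPDATE records SET name = 'edited' WHERE id = 1")
    name = conn.execute("SELECT name FROM records WHERE id = 1").fetchone()[0]
    conn.close()
    return name

def test_delete_record():
    conn = make_db_with_record()   # same setup, no shared state
    conn.execute("DELETE FROM records WHERE id = 1")
    count = conn.execute("SELECT COUNT(*) FROM records").fetchone()[0]
    conn.close()
    return count

print(test_edit_record(), test_delete_record())  # -> edited 0
```

Because each test starts from its own fresh database, the tests pass in any order, and a failure in one cannot cascade into the others.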
Automated Testing Tip 12:
What Should Not Be Automated?
Do not automate someone else’s application. For instance, if you need to work with Google mail, do not write complex code that will work with the interface of this mailer (of course, this does not apply to Google employees).
Automation of the interface (its correctness and ease of use) is possible in principle, but too time-consuming and usually incomplete.
Interaction with any peripheral devices (printers, scanners, etc.) that require human participation is usually either complicated or requires too much additional manual work.
Verification of the correctness of various images, graphics, and video is also easier to perform manually.
Obfuscated applications and components cannot be reliably automated, since the names of their properties change at each compilation.
Nevertheless, sometimes it makes sense to combine automation with manual testing.
Automated Testing Tip 13:
Ask the Developers for Help
The tests we write are a simplified kind of programming. Of course, you can build a complex framework, but in most cases this is not required. The same goes for other areas that programmers encounter more often: regular expressions, working with databases, using internal application methods, and more.
Therefore, usually, programmers are more experienced in such things and can give useful advice in the rare cases when you need to develop more complex testing solutions.
If you are faced with a complex problem that you do not know how to approach, try to ask the advice of developers. Of course, it is not worth running for help with every little thing; it is often much more useful to understand by yourself, thereby increasing your experience.
Sometimes it is useful to interact with developers even if you know the solution. Once I designed some tests and my solution seemed complicated. I talked with an architect for just five minutes and he suggested a simpler solution right away, as soon as I described the task and my solution to it.
He had faced far more design work than I had; he did not need to know the language I was writing in, or the specifics of test execution automation, to solve that particular problem.
However, always remember that the view of the programmer of the application is different from yours and not all pieces of advice are equally good. The programmer looks at the application from the inside and knows how it works. You look from the outside and know how it should work.
Automated Testing Tip 14:
Consider the Limitations of Cloud Services
In recent years, cloud services have become popular, providing the ability to run virtual machines in particular. Since these services are quite cheap, many people have a desire to run automatic tests in the clouds. Before making such a decision and starting the implementation, consider several important points:
To test desktop applications, an open session is mandatory (it can be either a logged-in user or Remote Desktop session). If there is no such session, the operating system simply does not render applications’ GUI, which means that the automation tools will not see a single window or control.
In particular, the need for an open GUI means that you can’t just configure a remote virtual machine and run tests on it, using, for instance, the command line. You will need to connect to the virtual machine using Remote Desktop and leave the connection window open for the duration of the tests.
If during the operation the connection is interrupted for at least a second, the automation tool will not recognize the application elements, and this will lead to unforeseen errors.
Any automation associated with physical devices is also inconvenient in the cloud since you have no direct access to the physical devices your cloud provider is allocating to your account.
Cloud services that provide access to mobile devices are more expensive than usual cloud virtual machines. Therefore, it is better to run automatic tests on local mobile devices, and cloud services should be used only for manual testing (for instance, to verify if the application works on a device that you don’t have).
Automation of web applications is best carried out in the clouds. It is only necessary not to return the virtual machine to the initial state if at the end of the tests you need to verify something on this machine.
In general, working with virtual machines in the cloud is not as convenient and fast as working with local computers and local virtual machines. Given the comparative cheapness of computer hardware, you can use local machines for ordinary tasks; cloud services are needed when there are no other options (for instance, for generating load in load testing).
Automated Testing Tip 15:
Introduce Automation for Corner Cases
Let’s assume that at a certain point in the test you need to enter a number from 1 to 10 into a field. Testing all 10 cases is quite expensive, so we select one number and enter only it. If the same number is entered every time the test runs, we have 9 potential errors that are never checked.
You can choose the number randomly from the range, or test several values instead of one, but in addition you should always test the values 1 and 10.
It is these corner values that are most often associated with errors, which programmers fail to take into account during development or when writing unit tests. Thus, in our automated test we will check at least 3 numbers: 1, 10, and a number in the range from 2 to 9 inclusive.
Of course, we also need to check that the application does not allow entering negative, non-numeric, and other unsuitable values into this field, but that refers to negative testing, and here we are talking about positive testing.
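The rule "both corners, plus one random interior value" can be captured in a tiny helper (the function name is illustrative):

```python
import random

def values_to_test(low, high):
    """Values for a field that accepts numbers from low to high:
    both boundary values, plus one random interior value so that
    repeated runs gradually cover the rest of the range."""
    return [low, high, random.randint(low + 1, high - 1)]

vals = values_to_test(1, 10)
print(vals)  # always starts with the corners, e.g. [1, 10, 7]
```

The corners 1 and 10 are exercised on every run, while the random third value varies between runs and slowly sweeps the interior of the range.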
Automated Testing Tip 16:
The Difference Between Error and Warning
There are errors of two types: critical and non-critical. From the automation point of view, if we can continue running the test after an error occurs, the error is non-critical. It is these non-critical errors that are warnings.
Non-critical errors must be seen in the report in order not to forget about them; however, they must be marked with a color different from the color of the critical errors. For instance, critical errors may be marked in red, and non-critical errors may be indicated by yellow.
Unfortunately, many tools generate reports in the JUnit format, which was originally intended for unit tests written by developers. These are small and very fast tests that usually check the correctness of individual functions and methods with different input parameters. Unlike GUI tests, a unit test can’t have an intermediate state; it either passes or fails.
When a unit test engine is used for GUI testing, the test stops at the first error that appears. For instance, suppose we verify the sorting in all columns of a table by clicking on the column headers.
If clicking on one of the columns does not sort its data, this is an error. However, it is not a reason to stop the test and skip verifying the remaining columns.
To solve this problem in tools that use JUnit reports, the following approach is used. When a non-critical verification fails, we simply add an error line to an error list (which is initially empty).
At the end of the test, we check this list. If it is still as empty as it was at the beginning, there are no errors in the test. If the list contains errors, we mark the entire test as failed and write all the accumulated errors to the log.
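This "soft assertion" pattern can be sketched in a few lines; the column names and the simulated sorting results are made up for illustration:

```python
# Non-critical failures are collected instead of stopping the test;
# the test fails at the end if the list is non-empty.
errors = []

def verify(condition, message):
    """Non-critical verification: record the error and keep going."""
    if not condition:
        errors.append(message)

# Imagine clicking each column header and checking the sort order
# (here False stands in for a column that failed to sort):
columns = {"name": True, "date": False, "price": True}
for col, sorted_ok in columns.items():
    verify(sorted_ok, f"column '{col}' is not sorted after click")

# End of the test: fail once, reporting everything that went wrong.
if errors:
    print("TEST FAILED:")
    for e in errors:
        print(" -", e)
```

All three columns get verified even though one of them fails, and the final report lists every accumulated problem instead of only the first.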
Automated Testing Tip 17:
Verification of Individual Bugs
In most cases, tests are written to verify some functionality in general and do not verify special cases, for instance, a bug with specific data. However, sometimes a once-corrected bug reappears after some time.
The reasons for its appearance can be completely different, but the result looks the same. In such cases, it makes sense to write a separate test to verify this particular bug or a separate test to verify several bugs.
The peculiarity of such bugs is that they do not fit into manual test scripts and therefore can only be found by chance. Such errors often occur in actively developed projects, where several programmers may work on the same functionality and thereby affect each other's changes.
If you write such a test, then in the report it is necessary to indicate the number of the existing bug in the bug tracker so that you can see the history of its findings and fixes.
You can act the same way when you write an automated test and find an error in the application under test. You register a bug and specify its number in the comment to the verification in the new test. Once such a bug is fixed, the error simply disappears from the report.
Automated Testing Tip 18:
Make a Pilot Project Before Writing Real Tests
A pilot project is a project that lasts one to two months and is needed only for informational purposes. Pilot projects are used in several cases:
if you are using a new automation tool for the first time;
if you automate a new type of project.
The purpose of the pilot project is to try the tool, understand its capabilities, and study its main advantages and disadvantages. Therefore, expect to delete the pilot project entirely (or almost entirely) at the end, since such a project usually does not use the best approaches.
For a pilot project, you usually select a few simple tests and one or two tests of average difficulty. If your application uses complex controls (editable grids, specific controls, etc.), then at least one test should work with such an element.
If this is not done right away, then it can later turn out that this tool can’t work with such objects at all, which can significantly complicate the automation process.
Pilot projects are also useful for training new team members who have not worked in test automation before. They help beginners create their first projects from scratch and learn the basics before starting to work on real projects with other team members.
Automated Testing Tip 19:
Choose a Proper Set of Tools for Your Needs
In my practice, I met two different types of people. Some tried to solve all their problems with the help of one tool, while others, for each new task, took something new.
All of them had their own arguments. The first believed that, since they had experience with one particular tool, they would solve any problem most quickly with its help.
Others held the view that for each task there was the most suitable tool, and it was necessary to use it, even if no one in the team had experience with this tool.
As is usually the case, the truth is somewhere in the middle. Each of these types of people faced various difficulties. The decisions of the first were sometimes too bulky and complex, and it was very difficult for anyone, other than the author, to support them.
The problem of the others was that it was very difficult to set up the working environment, and it required a lot of time and consultations with the author.
The most obvious is the following solution: find yourself a tool that will cover most of your needs and find one to two additional tools for everything else. Do not arrange a “zoo” of languages, tools, and libraries in your project, but do not try to solve everything with a single tool.
Automated Testing Tip 20:
Do Not Consider Test Automation as a Full-Fledged Development
Let us be honest: though automation requires programming skills, nevertheless it is not a full-fledged development project. Automation is usually handled by less skilled programmers since the work is much simpler and is used for the internal needs of the project. In 99% of cases, you will not need most design patterns.
You will not use transactions when working with databases, the volume of your test data is unlikely to approach the volumes of a real application, and you do not have to worry about the visibility scope of functions.
You need a general knowledge of object-oriented programming (OOP), the ability to write simple SQL queries, and the ability to run tests in debugging mode.
Therefore, there is no need to apply every new approach you learn about from software development in general to the practice of writing automated tests. Do not try to show how smart you are. It is better to show the ability to write simple code that even a beginner will understand.
Even if your project uses an approach such as behavior-driven development (BDD) or keyword-driven testing (KDT), in which the code is written by one group of people, and the tests are created by another, it is still better to use simpler solutions.
Automated Testing Tip 21:
Do Not Automatically Register Bugs from Scripts
If the automation tool has integration with a bug tracker, one day you might have a wonderful idea: to automatically create a bug if there was an error in the script.
Usually, this makes no sense since most errors in automated scripts are due to imperfections of the scripts themselves, or due to random events in the system on which tests are run.
Errors can also stem from the use of the automation tool, from mistakes by the test author, or from incorrect configuration, and only occasionally from a real problem in the application. This is one of the reasons why automatic bug creation is a bad practice in test execution automation.
The second reason to avoid automatically registering bugs is that it is very difficult to write correct steps to reproduce a problem. You can’t just write “run a test like this and at the end look at the error log.” Programmers need clear instructions to reproduce the error that they can do on their computer in debug mode.
There is another solution. You can create tasks in the bug tracker automatically if an error occurs, but the person responsible for the error investigation should be the author of the test.
If, for instance, tests are run at night, then in the morning the tester will first look if he has received new bugs during the night that are worth investigating first.
However, in this case, there are hidden pitfalls as well. If we have a problem with the test environment and no tests have passed, then for each of them a separate bug will be created. As a result, in the morning each tester will have a huge number of new tasks created overnight.
If this happens on frequent occasions, after a while, people will just start to ignore such notifications. In this case, it makes sense to find the “golden mean”: for instance, generate a template for a new bug, which will then be viewed manually, edited if necessary, and created in a bug tracker.
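This “golden mean” can be sketched as a small script that turns each failure into a draft report for a human to review before anything reaches the bug tracker. The failure data, field names, and template format below are illustrative assumptions, not any particular tracker’s API:

```python
# Sketch: instead of filing bugs automatically, generate draft reports
# from the night run's failures for a tester to review, edit, and file
# manually. All names and the template format are illustrative.

def draft_bug_report(test_name, error_message, log_path):
    """Build a bug-tracker draft that a tester reviews before filing."""
    return (
        f"Title: [auto-draft] {test_name} failed in nightly run\n"
        f"Assignee: author of {test_name}\n"
        f"Error: {error_message}\n"
        f"Full log: {log_path}\n"
        "Steps to reproduce: TO BE FILLED IN MANUALLY before filing"
    )

failures = [
    ("test_login", "Timeout waiting for Login button", "logs/test_login.txt"),
]
drafts = [draft_bug_report(*failure) for failure in failures]
for draft in drafts:
    print(draft)
```

Because the draft explicitly demands manual steps to reproduce, it cannot silently turn into one of the low-quality automatic bugs described above.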
Automated Testing Tip 22:
Use a Version Control System
If you have not used a version control system before, you must begin right now. If you think that such a trifle as automated tests shouldn’t be stored in a version control system, you are mistaken.
You can’t just keep your tests on your hard drive: they take more and more time to create, and one day you can lose them all if the hard disk fails.
Simply creating a copy of tests on another computer is a primitive analog of a version control system, so spend one day studying a real version control system and then begin using it, gradually improving your knowledge.
Usually, developers already use some kind of control system, so just start using that same system as well. You can store your tests in the same place where programmers store their source code or use a separate project for tests.
In different projects, one or the other approach will be more convenient. Using the same system as your developers means that you can learn from them, and possibly vice versa.
However, keep a close eye on what exactly you put into your version control system. Some programming languages (even scripting ones) generate binary files to speed up their work. These files are derived from the scripts and do not need to be saved, since they are regenerated automatically and are just rubbish as far as the version control system is concerned.
Also, some tools create separate files in which user settings are stored. Such files should not be stored in the version control system either: everyone likes to customize the development environment for themselves, and sharing these settings can interfere with other project participants.
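If, for instance, your team uses Git, such generated and per-user files can be excluded with a .gitignore file. The concrete file and directory names below are illustrative assumptions; adjust them to what your own tools actually generate:

```gitignore
# Generated bytecode / compiled script caches – regenerated automatically
__pycache__/
*.pyc

# Per-user IDE and tool settings – each person customizes their own
.idea/
*.user

# Test run output – produced fresh on every run
reports/
screenshots/
```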
Automated Testing Tip 23:
Avoid Custom Forms
In some automation tools, there is such a feature as custom forms, found especially often in commercial tools. A custom form is a normal window that appears at a certain point during the test run and is intended for the user to enter some information.
In some projects, these forms are used to enter test parameters that will be used (for instance, server address, username, etc.).
In most cases, the use of such forms is not recommended, as is any user interaction with automatic scripts. Tests should run without a person present, so that they can be started automatically – for instance, during nonworking hours (at night or on weekends). If you need to stop a test at the right moment, use debug mode and breakpoints instead.
As for the test startup parameters, they can be stored in configuration files (for instance, INI, XML, and other formats). Changing these parameters in a file is as easy as entering them into a custom form.
If different parameters are used on different computers (for instance, the username on the current machine), you can use several parameter files, each of which corresponds to the name of the current computer.
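The scheme above can be sketched as a loader that reads default parameters and then overrides them from a file named after the current computer, if one exists. The file names, keys, and JSON format are illustrative assumptions; any configuration format works the same way:

```python
# Sketch: load test parameters from config files instead of a custom form.
# A machine-specific file (named after the computer) overrides the defaults.
# File names, keys, and the JSON format are illustrative assumptions.
import json
import socket
from pathlib import Path

def load_test_config(config_dir="config"):
    # Built-in fallbacks, used when no file provides a value
    params = {"server": "test-server", "username": "default_user"}
    for name in ("default.json", f"{socket.gethostname()}.json"):
        path = Path(config_dir) / name
        if path.exists():
            params.update(json.loads(path.read_text()))
    return params
```

Because the machine-specific file is read last, each computer can keep its own username or server address without touching the shared defaults.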
Automated Testing Tip 24:
Simplify Everything You Can
You work with a lot of things: tests, code, test environment, virtual machines, SQL queries, test data, reports, and more. It is very important that everything that you work with is simple. This is important both for yourself and for those who will work with it after you. So if something you are working with seems to be complicated, simplify it!
If you have a difficult test and you have to puzzle for a long time over what exactly it does, break it down into several simple tests. If you have complex code and you have to debug it for a long time, simplify it (for instance, break it into several separate elementary functions). And so on.
Anything that seems difficult or requires a lot of additional actions needs to be simplified. When you solve a problem and find a solution, do not rush to implement it; try to find an easier one first. Often the first solution is not the simplest. By thinking for another half an hour, you can save yourself half a day of work.
One of the best ways to find a simpler solution to the problem is to consult with someone who works with you. An outside point of view can give impetus to thinking in a different direction.
For instance, suppose you are looking for a way to automatically close unnecessary applications on your computer before the nightly test run starts. Instead of writing complex code to find and close every possible application, you can simply restart the computer before running the tests; then at startup nothing will be open except the tool that runs the tests.
Automated Testing Tip 25:
Automate Any Routine
I often meet strange people who work in automation yet perform their other daily tasks in the most inconvenient way. For instance, to start a program they open Explorer, find the necessary folder in it, and run the program from there.
They do this several times a day rather than simply making a shortcut on the taskbar or assigning a keyboard combination.
In another case, to start a program with many parameters from the command line, the person opens the console, goes to the desired folder, and manually writes the program name and all its parameters, although the same can be done 10 times faster if you write a batch file.
There are many such examples: people enter huge data sets manually, instead of writing a simple script and executing it; use the mouse where it would be 10 times faster to use the keyboard combination; use simple programs such as Notepad instead of more advanced editors, etc.
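The long-command-line case can be automated with a few lines of script, so the whole routine becomes a single call instead of manual typing. The program name and options below are illustrative assumptions; a batch file or shell script achieves the same thing:

```python
# Sketch: instead of retyping a long command line several times a day,
# generate and run it from a script. Program name and options here are
# illustrative; substitute your real tool and its actual flags.
import subprocess

def build_command(program, **options):
    """Turn keyword options into command-line flags: out='x' -> --out x."""
    cmd = [program]
    for name, value in options.items():
        cmd.append(f"--{name.replace('_', '-')}")
        if value is not True:          # boolean flags take no value
            cmd.append(str(value))
    return cmd

def run_routine(program, **options):
    # check=True makes the script fail loudly if the program fails
    return subprocess.run(build_command(program, **options), check=True)

# Example: the whole routine is now one call (hypothetical program):
# run_routine("report_builder", input="data/latest.csv", format="pdf",
#             verbose=True)
```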
Your automation tool is not the only means by which you can automate routine tasks. Use everything available! As soon as you notice a regular routine task, think about how it can be sped up or automated. Such approaches will not only improve your work speed but also deepen your knowledge of the operating system you work in.
Automated Testing Tip 26:
Run Scripts as Often as Possible
Generally, it is useful to run tests when a new build appears. But what’s the use of the tests if they are unstable and fail with errors even if the application runs correctly?
When a test is just created, there may be a lot of unforeseen details in it. For instance, will the test work normally on a slower computer, in a virtual machine, with other settings, or with a slower network connection?
And vice versa? How will the test work in a better test environment? What happens if the amount of data on the server is different each time and the speed of the application changes every time?
To stabilize your tests, run them as often as possible. The more often they run, the more likely you are to spot problems and immediately fix them.
There is no need to run tests each time on different builds. You can use the same build for multiple runs. This is especially true at the stage of introducing automation when there are not many tests and it doesn’t take much time to complete them all.
It will become more difficult to run all the tests as the suite grows, but if you run scripts often and fix the various test-related issues, then over time your tests will be much more stable, and such regular runs on the same build will no longer be needed.
Running scripts often is especially helpful for tests that work for a long while and depend on many factors, or in which a large number of verifications are performed.
Such tests should be debugged the most thoroughly, as in the future you may want to run them not very often (for instance, once a week, not for each build), so they must be very reliable.
Automated Testing Tip 27:
Perform an Automatic Restart of Failed Tests
Sometimes tests fail when running them automatically, but pass when running each of the failed tests separately. One of the possible reasons is the error that appears when the application is being used for a long time or the problem with specific scenarios. Such cases need to be investigated to find their causes and reproducible scenarios and then to be fixed.
But sometimes such problems arise because of the specific test environment or the interaction of the automation tool with the application under test. In such cases, tests can hang for no apparent reason or simply report strange-at-first-glance errors that cannot be reproduced.
If this is the case, it is necessary to automatically restart the tests that fail for unknown reasons. Such a process should look like this:
1. During the test run, the name of each failed test is added to a list.
2. After all the tests have finished, we restart our automation tool.
3. We individually run each of the failed tests, recording the results separately from the first results.
4. Then we manually review both sets of results.
If one of the tests still regularly fails, it makes sense to look more closely at what the problem may be. If you cannot see the obvious problem, but the tests were successful the second time, you can consider them successful.
Be careful, however! There is always a possibility that a test fails the first time because of errors in the test itself or in the tested application, and restarting the test merely hides the problem.
In such cases, it makes sense to understand the causes of the errors and eliminate them. To identify such tests, keep statistics on all runs and review them from time to time for “suspicious” tests.
Automated Testing Tip 28:
A Disabled Test Should Be Provided with a Comment
Sometimes you have to temporarily disable an existing test. For instance, the need to disable can occur if the test produces an error that affects other tests, or the corresponding functionality is temporarily disabled in the application under test.
When disabling a test, you should necessarily write a comment to the disabled test, so that any person, stumbling upon it, immediately knows the reason for the disabling. The comment should indicate the author of the disabling, the date, and the reason.
You can go further and implement an automatic verification of whether the corresponding defect is relevant at the time of the test run. If the defect is already closed, you can either automatically run the test, or generate an error stating that the test should be enabled.
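One way to sketch this idea is a decorator that skips the test while the defect is open and fails loudly once it is closed. The function is_defect_open is a hypothetical bug-tracker query; replace it with your tracker’s real API. The names, dates, and defect IDs are illustrative:

```python
# Sketch: a disabled test carries author, date, and reason, and complains
# once its defect is closed so the test is not forgotten. is_defect_open
# is a stub for a hypothetical bug-tracker query.

def is_defect_open(defect_id, open_defects=frozenset({"BUG-1234"})):
    # Stub: in a real project, query the bug tracker here.
    return defect_id in open_defects

def disabled_while_open(defect_id, author, date, reason):
    def decorator(test):
        def wrapper(*args, **kwargs):
            if is_defect_open(defect_id):
                return f"SKIPPED by {author} on {date}: {reason} ({defect_id})"
            # Defect closed: fail loudly so someone re-enables the test.
            raise RuntimeError(f"{defect_id} is closed – re-enable {test.__name__}")
        return wrapper
    return decorator

@disabled_while_open("BUG-1234", "J. Smith", "2019-03-01", "export crashes")
def test_export_to_csv():
    return "PASSED"
```

Frameworks such as pytest offer the same pattern through their skip markers; the point is that the reason string always names the author, the date, and the defect.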
It is also useful to view the disabled tests from time to time and update them, if necessary. For instance, after some time the functionality that was tested by the disabled test can be completely removed from the application. In this case, it makes no sense to store the corresponding test.
Automated Testing Tip 29:
Make a Screenshot in Case of Error
No matter how detailed your logs are, nothing will replace a screenshot taken at the time an error occurred. This is especially true in GUI applications where the following occur:
It is always easier to understand the error visually.
It happens that the application under test is affected by something that could not be foreseen (for instance, there appeared a system message that caught the focus).
Some automation tools go further, suggesting that you create a screenshot each time the automation tool interacts with the tested application. This is not recommended: such a large number of pictures inflates the log, and these screenshots are needed extremely rarely.
If your tool cannot automatically take a screenshot on error, extend its capabilities yourself so that it happens automatically. At the same time, pay attention to some features of different types of applications:
Desktop applications rarely contain scrolling pages. All controls are usually either placed in one window, or several transitions between different windows are used with the help of a Next button.
With web applications, a long page you need to scroll through to see all the content is quite common. You might want any screenshots to capture the entire scrolling region.
Often, tools allow you to take either a screenshot of the screen or a page, and for these actions, you may need to call different functions. Therefore, when working with a web application and saving a screenshot, always think of what kind of information you need.
If you need the content of the entire page, then save the page exactly. If you need a screenshot (for instance, to see not only the browser window but also other applications), then use the method of saving the entire screen, while remembering that some of the page content may not fit into the image.
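Extending a tool with an automatic screenshot on error can be sketched as a decorator around test functions. The actual capture call depends entirely on your tool (a browser driver, a desktop capture library, etc.), so it is injected here rather than assumed:

```python
# Sketch: wrap test functions so a screenshot is captured automatically
# when the test raises. capture_screenshot is injected because the real
# call depends on your tool; this sketch assumes nothing about its API.
import functools

def screenshot_on_error(capture_screenshot):
    def decorator(test):
        @functools.wraps(test)
        def wrapper(*args, **kwargs):
            try:
                return test(*args, **kwargs)
            except Exception:
                capture_screenshot(f"{test.__name__}_error.png")
                raise  # keep the original failure visible in the log
        return wrapper
    return decorator
```

Re-raising the exception after capturing is important: the screenshot supplements the error report, it must not swallow it.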
Automated Testing Tip 30:
Errors in Logs Should Be Informative
Imagine that you come to work, open the nightly test reports, and you see an error message like the following:
ERROR: incorrect value
What does the text of the error tell you? Nothing!
There are a few components missing: the expected value, the actual value, the place where the error occurred, and the actions that led to this result.
For instance, we test a simple Calculator application such as you might find in Windows or in OS X by entering a lot of different mathematical expressions and verifying the results. An informative error message would look like this:
ERROR verifying result for expression "2+2*2". Expected: "6", actual: "8"
Pay attention to the quotation marks that enclose the values. They are not mandatory, but it is desirable to use them in case there are spaces or other nonprinting characters at the beginning or end of the line. When each of the values is quoted, similar problems are easier to discover.
If possible, you can also arrange the expected and actual values on different lines, one under the other. In this case, it’s also easier to see the differences, especially in the case of long strings.
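A verification helper that always produces such a message might look like the following sketch. The message format mirrors the example above; the function name is an illustrative choice:

```python
# Sketch: a verification helper that reports what was checked, what was
# expected, and what was actually observed, with values quoted so stray
# spaces and nonprinting characters become visible.

def verify_equals(description, expected, actual):
    if expected != actual:
        raise AssertionError(
            f'ERROR verifying {description}. '
            f'Expected: "{expected}", actual: "{actual}"'
        )

# verify_equals('result for expression "2+2*2"', "6", "8") would raise:
# ERROR verifying result for expression "2+2*2". Expected: "6", actual: "8"
```

Routing every check through one such helper also means the message format can be improved in a single place, for instance to put expected and actual values on separate lines.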
Automated Testing Tip 31:
Avoid Comparing Images
Very often novice automation engineers make the mistake of comparing images rather than results. For example, they cannot check the individual properties of a control, so they verify a screenshot of the element, or even of the entire window, against a known good image of the element or window.
This approach of comparing screenshots is bad for several reasons:
The slightest change in the appearance or size of an element leads to an error.
Comparing images is much slower than comparing the properties of the same element.
Updating the expected results for such verification points is usually more time consuming than updating the properties.
Usually, verification of screenshots of the elements results from a lack of knowledge of the automation tool (provided, of course, the tool supports the type of your application under test and this particular control).
It is better to spend a few days figuring out how to work with your application than to spend several hours every week supporting something you could have avoided altogether.
The worst project I have seen used image verification in 50% of its checks because there was no other way to work with the application. The tests ran almost all night, although they did not do much work, and every week about a day was spent investigating and updating expected values, even when there were no changes in the application!
After digging into the tool a little and reading its help system together with the programmers, we found that only a few small changes to the tested application were needed to avoid comparing screenshots.
After making those changes, it was possible to reduce the working time of all tests by half, significantly facilitating their further support.
Nevertheless, although verification of screenshots is considered a bad style in automation, there are several cases when the approach can be used:
Some tools work only with screenshots; this makes them universal for any application; however, they are relatively slow and their tests are less stable.
If you are testing an application that works with graphics, then the comparison of screenshots is usually the only possible approach for performing verifications.
If you still can’t work with the control, it is better to use screenshots than blindly click on the coordinates of the window. In these cases, it usually makes sense to set up the advanced settings if they are provided in your automation tool. Settings to look for include the following:
Tolerance (may also be called threshold or inaccuracy interval) – allows you to ignore a certain number of differences, specified in pixels or as a percentage.
Transparency – allows you to specify an area inside the screenshot that must be ignored during the verification (for instance, there may be a Date field, which changes every day).
Partial comparison – compare not a screenshot of the entire control, but only the significant part of it (for instance, for a button it is enough to verify the area where the text is located).
The set of available options depends on the tool you use. Some tools provide a wide set of options for image comparison, while others don’t have any options at all. If you are unlucky enough to use a tool without necessary options, you can write your own functions to perform a comparison of the images, though it may be a tricky task to implement.
Automated Testing Tip 32:
I have seen many times that testers do an unnecessary thing: they optimize the speed of a particular function because it seems to them that it does not run fast enough. Before starting any optimization, you need to make sure the benefit is worth the time you will spend on it.
HOW IT LOOKS
Usually, the unneeded optimization effort looks something like this:
An application starts in 3 seconds, then the test script spends 5 seconds filling in the search fields, the search itself takes another 5 seconds, after which the script spends 1 second reading and verifying the found data.
The tester begins to optimize the reading and verification functions (since he can’t optimize the application under test).
The tester spends a day of work on optimization and gets the reading and verification down to half a second combined.
Indeed, the test engineer has doubled the speed of his own code, but overall the run became less than 5% faster, and the gain applies only to the tests that use the optimized functions.
It’s hardly worth it to spend a day on such optimization, especially if the tests are run at night and it does not matter if they work at night for 5 hours or for 5 hours and 1 minute.
According to the Theory of Constraints, optimization should always begin with the weakest section. In test automation, this section is usually the application under test. Not always, but often. Optimizing the test scripts themselves often provides no real benefit.
Of course, you do not need to always use the slowest algorithm if you know another that happens to be faster. And I am not saying that you don’t ever need to optimize. There is a time for optimization.
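Before deciding whether optimization is worthwhile at all, it helps to measure where the time actually goes. A minimal sketch (the phase names mirror the example above and are illustrative):

```python
# Sketch: time each phase of a test run to find the real bottleneck
# before optimizing anything. Phase names are illustrative.
import time
from contextlib import contextmanager

timings = {}

@contextmanager
def phase(name):
    start = time.perf_counter()
    yield
    timings[name] = time.perf_counter() - start

# Hypothetical usage around the real steps of a test:
# with phase("start application"): launch_app()
# with phase("fill search fields"): fill_fields()
# with phase("search"): run_search()
# with phase("read and verify"): verify_results()
# Afterwards, sort to see where the time goes:
# for name, seconds in sorted(timings.items(), key=lambda kv: -kv[1]):
#     print(f"{seconds:6.2f}s  {name}")
```

If the measurement shows that most of the time is spent inside the application under test, optimizing the script code is a waste of effort.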
Automated Testing Tip 33:
What I suggest is that you optimize tests in two cases:
If a test runs unreasonably long, and the application under test is idle during the test execution.
If you see an obvious problem in the code, the correction of which doesn’t take much time.
You should always look at the performance of the scripts from the point of view of the whole system (scripts, the application under test, and the environment on which the testing is performed).
For this purpose, at the very beginning of the project, it is better to determine the quality criteria that automatic tests should meet (speed of all tests, the frequency of running certain tests, etc.) and follow those criteria.
Automated Testing Tip 34:
Review Someone Else’s Code Regularly
Reviewing new code written by another person is one of the best ways to keep your project in order. As soon as a test and the general functionality it needs are written, the code should be looked at by a second person. During such reviews it is possible to find potential errors, or simply to fix incomprehensible or overly complex places in the code.
Code written by junior testers especially must be checked. The code of more experienced employees also needs review, however, since no one is infallible. Sometimes the author of a test has made assumptions that are incorrect.
Other times a developer might mock up some functionality intending to fix it later but forgets to do it. Reviews can help catch such problems.
There is no need during the review stage to verify whether a test works since such verification should be made by the author of the test. You only need to view the code itself and pay attention to complex or incomprehensible blocks. To do this, you can make a small list of rules, helping you focus on what to look for when reviewing.
Reviewing someone else’s code also has an upside for the reviewer: it is quite possible that by reading it you will find something new, a simpler algorithm or approach that you did not know about or simply had not used for a while.
Another good example is pair programming, in which one person writes the code, and another one sits nearby and gives advice. Although at first glance it may seem that this approach leads to a loss of time for one of the participants, the overall quality of such code is higher than when one person writes it.
The review process is a good way to keep your code clear and understandable. It is important both to review others’ code and to have your own code reviewed, no matter how many years you have worked on the project or how much experience you have gained.
Automated Testing Tip 35:
Write Tests That Even Non-Automation Engineers Can Understand
Tests should be well written in order to be easy to maintain and modify, as well as to make it easy for you to find the reasons behind any failures during execution. How, then, does one determine whether a test is well written?
One approach is for the automation engineer who wrote a test to show it to a tester who is not familiar with programming. If the tester understands what the test does, the test is considered to be good.
Automated Testing Tip 36:
Remove Tests That Provide Minimal Benefit
Imagine that your application under test is actively developing, and you are constantly writing new tests. Over time, the support for existing tests will take more and more time, with less time to write new ones.
Then you will have a choice: either significantly reduce the writing of new tests, or delete some of the old ones, the benefits of which are minimal.
The first approach is not suitable: you cannot cut back on writing new tests, because you should try to cover as much functionality as possible with automated tests. Thus, the approach that is left is to remove older tests that provide minimal benefit.
How do you determine whether a test is useful or not? To do this, it is necessary to keep statistics on each available test as early as possible. Track statistics such as the following:
How many times each test was started;
How many real problems were found by a given test;
How often the test has to be fixed.
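The statistics listed above can be sketched as minimal bookkeeping like the following. In a real project this data would be persisted (to a CSV file or a database) after every run; the function names and the 100-run threshold are illustrative assumptions:

```python
# Sketch: track per-test statistics to spot low-value tests. In a real
# project, persist this after every run; names and thresholds here are
# illustrative.
from collections import defaultdict

stats = defaultdict(lambda: {"runs": 0, "real_bugs": 0, "fixes": 0})

def record_run(test_name, found_real_bug=False, needed_fixing=False):
    s = stats[test_name]
    s["runs"] += 1
    s["real_bugs"] += found_real_bug
    s["fixes"] += needed_fixing

def low_value_tests(min_runs=100):
    """Candidates for manual review: often fixed, never finding real bugs."""
    return [name for name, s in stats.items()
            if s["runs"] >= min_runs and s["real_bugs"] == 0 and s["fixes"] > 0]
```

The output is only a list of candidates; as the text stresses, each one must still be reviewed manually before it is removed or transferred to manual testing.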
In time, you will see that not all tests are equally useful. You will find that some tests are helping you to find more bugs, whereas other tests seem to do little more than use up your valuable time in keeping them maintained.
For instance, some problems are easily detected manually, and therefore they are registered before the corresponding tests are started. It can also happen that several tests verify the same functionality, making one or more of those tests redundant.
Even so, you can’t just throw away the “most useless” tests outright. It is necessary to review each such test manually in order to understand the probable cause of its “uselessness”.
Only then can you make a decision about its removal, or transfer to manual testing. Try to involve all automation participants in your project in the process, since everyone should feel responsible for the overall work.
It may be difficult to delete the results of your own work, but it is better to have 10 tests that can be supported than 50 tests that generate so many results that you have no time to figure out the reasons for their failures and correct the situation.