The sweet promise of automated software testing is fairly simple: invest in creating automated tests for your software now, and then enjoy being able to run them at any time, as many times as you need, at the touch of a button. The more you automate, the less you need to manually repeat later, which leads to — at least in the long run — saving money, time, and human resources. Or that’s the idea.
As business process optimization becomes more or less the law of the land in IT, it seems that automation should be ruling the QA world. However, to this day, more than 80% of testing is still done manually. There are plenty of reasons for that, and not all of them boil down to “we have an infinite army of dirt-cheap manual testers at our disposal, so no automation is needed.” Oftentimes, “staying manual” is prompted by one or several of the following arguments:
- If it ain’t broke, don’t fix it
- Not sure if it’s worth it
- Does my project even need it at all?
Let’s see whether they are valid in general and in your particular case.
1. The “if it ain’t broke, don’t fix it” argument
Your team of manual testers is doing a good job as is; no one’s slacking off, and their time is fully accounted for — so why change and potentially disrupt the flow? Well, the answer to that is the same as with any other type of automation: to make the process more efficient. You’ll get more speed, coverage, and reliability with the tests that you can automate. And when you free your employees from routine tasks, they can focus on things more deserving of their attention: complex or unique bugs in need of human approach and undivided mental energy.
2. Not sure if it’s worth it
Let’s say you’re considering adding test automation to your development cycle. How to go about it? Which tests to automate? What tools to use? Should you hire new staff, invest in educating the testers you already have, or outsource the whole thing? And most importantly: will it be profitable in the end?
Fairly evaluating the potential costs and benefits, and deciding on a strategy and tools, are no light matters. Automation is a costly process, more so if it’s introduced late in production. Purchasing a test automation framework or developing a custom solution isn’t cheap, and the decision to either outsource or keep QA in-house should be made early on. Additionally, 100% automation (and even 100% code coverage) is an unreachable goal, so you will need to know where to cut corners. Plan which tests to automate based not only on your budget, but also on test priority and repetitiveness. According to a1qa, automating tests that run 15 or more times per build can be considered a safe investment.
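The cost-versus-repetition reasoning above can be sketched as a simple break-even calculation: automation pays off once its one-time setup cost is spread over enough runs. A minimal sketch, with all dollar figures being hypothetical placeholders rather than benchmarks:

```python
# Rough break-even sketch for automating one test suite.
# All numbers below are illustrative assumptions, not industry data.

def breakeven_runs(manual_cost_per_run: float,
                   automation_setup_cost: float,
                   automated_cost_per_run: float) -> float:
    """Number of suite runs after which automation becomes cheaper
    than repeating the same suite manually."""
    saving_per_run = manual_cost_per_run - automated_cost_per_run
    if saving_per_run <= 0:
        return float("inf")  # automation never pays off per run
    return automation_setup_cost / saving_per_run

# Hypothetical example: $400 per manual run, $6,000 to automate the
# suite, $40 per automated run (maintenance, infrastructure).
runs = breakeven_runs(400, 6000, 40)
print(f"Break-even after ~{runs:.0f} runs")  # ~17 runs
```

With these made-up figures, a suite run 15+ times per build clears the break-even point almost immediately, which is the intuition behind the “safe investment” rule of thumb above.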
If you or your employees have no experience in forecasting QA automation expenses, you can opt for consulting to help estimate ROI and plan coverage. If you do, go for established companies with a good reputation. Even if you end up hiring or training your own test automation engineers, the opinion of vetted specialists who have managed many projects should be of use and not too far off the mark.
Keep in mind that it can take anywhere from 50 to 150 hours of coursework for manual testers to learn the ropes of automation. The timeline varies with the chosen platform as well as the trainees’ aptitude and coding skills.
3. Does my business/project really need test automation?
That’s actually a good point to consider. No matter what automated-testing evangelists preach, it’s not the be-all and end-all of software quality assurance. Maybe your projects are in dire need of it, pronto; maybe they aren’t at all. Or it could be that you don’t quite need to automate right now, but it will be a good idea three or six months from now. How to know for sure? Here are a few reliable tells that the time has come:
- Your project isn’t new, has lived through a number of releases and updates, and has significantly built up on functionality/volume since its early days. It’s both rather stable and keeps on growing.
- A sizable portion of QA work on it is rechecking that everything that worked in previous builds still works in the most recent one (regression testing): repeating nearly identical tests with small variations over large amounts of input data, again and again.
- Load and performance, volume, stress testing, and the like are a constant part of the QA routine for your project.
- Your development method is Agile/Extreme, possibly under the DevOps model.
- At the moment you’re considering hiring more manual testers because of the increased QA workload.
If three or more of the points above apply to your project, then it’s about time to start planning to introduce automation into the process.
When to test manually
Now, there’s still plenty of testing that absolutely requires the human touch. And until a fully functional, science-fiction-come-reality level of AI is invented — and can be hired — there’ll be sections of QA that humans must do manually. Think big chunks of GUI/UX testing that are about human reactions, emotional and accidental actions and responses. Another example is accessibility testing, which is especially big for mobile. Exploratory testing is on the rise, too, and a manual approach is a must there. Test cases that are new and not yet verified, tests with constantly changing conditions, and ad-hoc (aka “monkey”) testing all qualify. And it goes without saying that UAT (user acceptance testing) can’t really be automated, or it would rather defeat the purpose.
Additionally, consider the inherent instability and quickly changing nature of any software during the early stages of production. Unless we’re talking huge sets of input data from the get-go, you wouldn’t want to automate tests that’ll become obsolete in a day or two. In general, it’s a good idea to push back automation until you’ve at least laid the groundwork for the core functionality and worked out the major kinks.
When and what to automate
A healthy combination of automation and manual testing is usually the key. There’s no silver bullet ratio of automated/manual testing that’ll work in every case, but here’s the list of good candidates for automation:
- Regression testing
- Simple pass/fail tests with big amounts of input data
- Repeated execution tasks
- Performance testing
- Cross-platform tests
- Tests that would take too long and are too hard (or outright impossible) to perform manually.
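The second bullet — simple pass/fail tests over large amounts of input data — maps naturally onto data-driven test suites, where one test body is run against a whole table of inputs. A minimal sketch using Python’s standard unittest module; apply_discount is a hypothetical stand-in for real production code, and the case table is made up:

```python
# Data-driven pass/fail checks: one test body, many input rows.
# `apply_discount` is a toy stand-in for a real function under test.
import unittest

def apply_discount(price: float, percent: float) -> float:
    """Return the price after applying a percentage discount."""
    if not 0 <= percent <= 100:
        raise ValueError("percent out of range")
    return round(price * (1 - percent / 100), 2)

# In a real suite this table could be hundreds of rows, loaded from a file.
CASES = [
    (100.0, 0, 100.0),
    (100.0, 15, 85.0),
    (20.0, 25, 15.0),
    (100.0, 100, 0.0),
]

class TestApplyDiscount(unittest.TestCase):
    def test_cases(self):
        for price, percent, expected in CASES:
            # subTest reports each failing row individually.
            with self.subTest(price=price, percent=percent):
                self.assertEqual(apply_discount(price, percent), expected)

# Run the suite programmatically (equivalent to `python -m unittest`).
suite = unittest.defaultTestLoader.loadTestsFromTestCase(TestApplyDiscount)
result = unittest.TextTestRunner(verbosity=0).run(suite)
```

Adding a new regression check is then a one-line change to the data table — which is exactly why repetitive, data-heavy tests are cheap to automate and expensive to keep manual.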
Bottom line: if it’s easier, more reliable, and in the long run takes fewer resources to automate a test, you automate it. If it’s a one-off scenario, something unique or quickly changing, or has to do with usability or assessing brand-new functionality, then stick to testing manually.
“Automated testing is something lazy or rich people do” is only a half-joke after all. A staple in huge long-standing projects by multi-million software companies, it can be just as crucial for small to medium-sized business projects. Right out of the gate, QA automation will likely bring an increase in working hours and spending, true. But it will quickly make up for that by bringing in speed, scope, accuracy, and reliability where repetitive tasks and redundancy checks are involved.
Even if it ends up being more expensive overall, time benefits alone can be the deciding factor. Competitive field? A (relatively) stable project? Your team lives by Agile, DevOps, “Move fast and break things”, and/or Continuous Integration? Then some level of test automation is in order. Just mind the budget, come up with a reasonable strategy and decide whether to outsource or add to your own QA staff. Manual testing might be the first step, but going automated is a long game for those who play to win — and keep winning where it counts.
Information about the author
Maxim Chernyak is Head of the Test Automation and Performance Testing Lab at A1QA and an expert in test automation methodologies and tools for functional and nonfunctional testing. He is also responsible for education and the adoption of state-of-the-art quality engineering practices by QA teams.