
Misunderstandings of Automated Testing


Written by Trung Le Cong

Automated testing plays an important role in the software development life cycle (SDLC). Together with manual testing, it helps assure product quality and increases testing productivity.

Although automated testing is increasingly widely adopted, in my years of experience as a test automation engineer I have still seen misunderstandings when deploying it that lead to inefficiency.

Automated testing can replace manual testing

Automated testing and manual testing are two different processes. Manual testing requires a human to execute tests and verify behaviour, applying judgment where criteria and expected results vary from case to case. Automated testing, on the other hand, is a repeatable sequence of steps and flows that runs automatically against fixed expected results. It simulates the way testers perform tests across many scenarios and reduces the effort of repeating those tests in every run.
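To make the contrast concrete, here is a minimal sketch of an automated check written with pytest. The `calculate_discount` function and its business rule are hypothetical, used only to show how the same steps and fixed expected results run identically in every execution.

```python
# A minimal pytest example: the same steps and expected results run
# identically in every execution, which is what makes them automatable.
import pytest


def calculate_discount(order_total: float, is_member: bool) -> float:
    """Hypothetical business rule: members get 10% off orders over 100."""
    if is_member and order_total > 100:
        return round(order_total * 0.9, 2)
    return order_total


@pytest.mark.parametrize(
    "order_total, is_member, expected",
    [
        (150.0, True, 135.0),   # member over the threshold gets the discount
        (150.0, False, 150.0),  # non-member pays full price
        (80.0, True, 80.0),     # member under the threshold pays full price
    ],
)
def test_calculate_discount(order_total, is_member, expected):
    assert calculate_discount(order_total, is_member) == expected
```

Run with pytest, these cases repeat unchanged on every build; repeating them by hand each time would add no new information.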

All test cases can be automated

As noted above, automated testing cannot replace manual testing completely. Some complex functions in an application still require a tester's eyes, experience, and judgment to verify manually, and in some cases manual testing is simply more cost-effective than automating the same check.

Automated testing is too expensive

This is typically true in the initial phase, when a new automated test team is being built. In the long term, however, automated testing brings benefits that save both cost and time. Automating the right tests with a repeatable, scalable approach provides the fastest return on investment (ROI).
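As a rough illustration of that trade-off, the back-of-envelope sketch below compares a one-off automation cost against the time saved per run. All figures are hypothetical assumptions, not benchmarks.

```python
import math


def break_even_runs(manual_minutes: float,
                    build_minutes: float,
                    upkeep_minutes: float) -> int:
    """Smallest number of runs after which automation is cheaper than manual execution."""
    saved_per_run = manual_minutes - upkeep_minutes
    if saved_per_run <= 0:
        raise ValueError("Automation never pays off with these figures")
    # The build cost is paid once; every run then saves `saved_per_run` minutes.
    return math.ceil(build_minutes / saved_per_run)


# Hypothetical figures: a 30-minute manual run, 600 minutes to automate it,
# and 2 minutes of upkeep per run.
print(break_even_runs(manual_minutes=30, build_minutes=600, upkeep_minutes=2))  # -> 22
```

With these made-up numbers the suite pays for itself after about 22 executions; the more often the tests run, the sooner the initial cost is recovered.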

Automated tests are easy to manage and maintain

Running automated testing is never an easy process. It requires significant initial configuration to get tests automated and running across environments. Over time, the number of automated test cases grows, and they need to be adapted and maintained regularly. Breaking automated test scripts into smaller reusable units helps reduce that maintenance burden.
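One way to picture that last point is the sketch below: a single reusable login step shared by several tests, so a change to the flow is made in one place. `FakeApp` is a hypothetical stand-in for the system under test; only the structure is the point.

```python
import pytest


class FakeApp:
    """Hypothetical stand-in for the real application under test."""

    def __init__(self) -> None:
        self.user = None

    def login(self, username: str, password: str) -> bool:
        self.user = username if password == "secret" else None
        return self.user is not None


@pytest.fixture
def logged_in_app() -> FakeApp:
    # Every test reuses this one login flow; if the flow changes,
    # only this fixture needs updating, not each individual test.
    app = FakeApp()
    assert app.login("tester", "secret")
    return app


def test_user_is_set_after_login(logged_in_app: FakeApp) -> None:
    assert logged_in_app.user == "tester"


def test_failed_login_clears_user() -> None:
    app = FakeApp()
    assert not app.login("tester", "wrong-password")
    assert app.user is None
```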

All automated tests pass in every execution

This is a common misunderstanding about automated testing. A 100% pass rate is ideal but unrealistic. Many conditions affect the result of automated tests and make them flaky: bad test data, unstable environments, and non-standard code. Since the objective of testing is to reduce risk by exposing bugs, not to hit a 100% pass rate, the key is being able to quickly categorize each failure as a confirmed bug or a flaky test, with manual testing as a backup to execute the check and reduce risk while the automated test is being fixed.
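A minimal sketch of that triage step, assuming a test can be rerun in isolation: rerun a failure a few times and classify it as a confirmed bug (fails every time) or a suspected flaky test (results are mixed). The callables and thresholds here are illustrative assumptions, not a fixed rule.

```python
from typing import Callable


def triage(test: Callable[[], bool], reruns: int = 3) -> str:
    """Classify a failing test by rerunning it: 'confirmed bug' if it fails
    every time, 'suspected flaky' if results are mixed."""
    results = [test() for _ in range(reruns)]
    if not any(results):
        return "confirmed bug"
    if all(results):
        return "passed on rerun"
    return "suspected flaky"


# Illustrative stand-ins for real test executions:
def always_fails() -> bool:
    return False


flaky_outcomes = iter([False, True, False])


def sometimes_fails() -> bool:
    return next(flaky_outcomes)


print(triage(always_fails))      # -> confirmed bug
print(triage(sometimes_fails))   # -> suspected flaky
```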

Conclusion

To achieve a return on your investment in automated testing, make sure you properly understand what it can achieve for your specific organisation, and how, without getting caught up in the automation hype.
