5 reasons why Maximo Test Automation fails

Test automation can be a life saver that makes a critical difference to the success of a Maximo project. But it can also be a failure that consumes both human and financial resources while giving little in return.

So what are the things to look out for? Here are five of the key factors, counting down.


5. Test Data

Unlike manual testing, where the test data specification sent to a tester can be high level, e.g. “use a typical unapproved purchase order”, automated testing needs precise instructions. The computer does not know what “typical” means and does not know what to do if there are no unapproved POs in the system. Automated tests that rely on certain data already being present run a high risk of failing at some stage: either the data is not in the system, say after an environment refresh, or it has been exhausted by previous test runs.

Another problem with test data comes from the principle that, to benefit from automation, we want to run it as often as possible. If our tests generate new records but do not remove them after completion, data will accumulate in the system and eventually lead to performance issues.

Remedy:

Design your tests so that, where possible, they create their own data and remove it after completion, leaving the system as it was before the test run. Create data packs that can be reloaded into the system after each refresh. Put data clearance routines in place.
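The create-your-own-data principle can be sketched as a test that sets up, uses and then tears down its own record. The `MaximoClient` below is a hypothetical in-memory stand-in for whatever integration you actually use to create records (e.g. Maximo's REST API); the point is the setup/teardown shape, not the client itself.

```python
# Sketch of a self-cleaning test: it creates the unapproved PO it
# needs and removes it afterwards, so repeated runs neither exhaust
# shared data nor pile new records into the system.
# MaximoClient is a hypothetical in-memory stand-in for a real
# Maximo API client.

import uuid


class MaximoClient:
    """In-memory fake standing in for a real Maximo API client."""

    def __init__(self):
        self.purchase_orders = {}

    def create_po(self, status="WAPPR"):
        ponum = f"PO-{uuid.uuid4().hex[:8].upper()}"
        self.purchase_orders[ponum] = {"ponum": ponum, "status": status}
        return ponum

    def get_po(self, ponum):
        return self.purchase_orders.get(ponum)

    def delete_po(self, ponum):
        self.purchase_orders.pop(ponum, None)


def run_po_approval_test(client):
    """Create the data the test needs, use it, then clean up."""
    ponum = client.create_po(status="WAPPR")  # setup: our own unapproved PO
    try:
        po = client.get_po(ponum)
        assert po["status"] == "WAPPR"        # the actual check
        po["status"] = "APPR"                 # simulate the approval step
        assert client.get_po(ponum)["status"] == "APPR"
    finally:
        client.delete_po(ponum)               # teardown: leave no trace
```

The same shape maps directly onto pytest fixtures or JUnit setup/teardown methods if that is what your framework uses.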

4. Crowded Development Environments

Automated testing is most effective when the test suites can also be executed in each developer's own environment. That way the developer knows that any failed tests are a consequence of their latest work and has a very good idea where to look to fix the problem. Setups where several developers, or even several development teams, share a development environment make it difficult to identify which change caused a failure, leading to distrust of the test results and the attitude that someone else must have broken the system.

Remedy:

Let developers work in their own environments and then integrate into higher environments. Make sure developers have a quick way of replacing a broken environment with a fresh, healthy copy.

3. Brittle Tests

One of the banes of automation is tests that forever need tweaking to work with the latest changes in the product. In the long run they have two major consequences. One is that they are left failing because no one has time to fix them, leading to an untested system or an increased manual testing load. Another, especially where the QA manager is powerful, is that new product features are deliberately held back so as not to break the existing brittle tests; this is particularly the case with product refactoring.

Remedy:

Obey the ‘testing pyramid’ rule: most tests should be at the unit (object) level, fewer at the API level, and the smallest number at the user-interface level. Design your tests so that they can be changed easily when the product changes. Follow the page object pattern for UI tests. Use experienced resources for test development. Quality over quantity.
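The page object pattern can be illustrated with a short sketch. The `FakeDriver` and the element IDs here are hypothetical; in practice the driver would be a real Selenium WebDriver, but the structure is the same: tests talk to a page class, and only the page class knows the locators, so a UI change means editing one class rather than every test.

```python
# Sketch of the page object pattern. FakeDriver stands in for a real
# Selenium WebDriver, and the element ids ("ponum", "desc", "savebtn")
# are hypothetical. Tests call high-level page methods; locators live
# in one place, so a renamed field breaks one class, not every test.


class FakeDriver:
    """Minimal stand-in for a WebDriver: records what was typed/clicked."""

    def __init__(self):
        self.fields = {}

    def type_into(self, element_id, text):
        self.fields[element_id] = text

    def click(self, element_id):
        self.fields[element_id] = "clicked"


class PurchaseOrderPage:
    """All locators for the PO screen are encapsulated here."""

    PONUM_FIELD = "ponum"    # if the UI renames these ids,
    DESC_FIELD = "desc"      # only this class changes
    SAVE_BUTTON = "savebtn"

    def __init__(self, driver):
        self.driver = driver

    def enter_po(self, ponum, description):
        self.driver.type_into(self.PONUM_FIELD, ponum)
        self.driver.type_into(self.DESC_FIELD, description)

    def save(self):
        self.driver.click(self.SAVE_BUTTON)


driver = FakeDriver()
page = PurchaseOrderPage(driver)
page.enter_po("PO-1001", "Spare pump impeller")
page.save()
```

When the product team renames a field, the fix is a one-line change to the page class, and every test that uses it keeps working; this is exactly the brittleness remedy described above.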

2. Wrong Tools

Not all tools are designed for all automation tasks; “jack of all trades, master of none” applies here. Overly generic tools can require a great deal of effort to produce test cases, eventually making you question whether automation is a time saver or a time waster. On the other hand, tools that are too specialised can be inflexible and prevent you from creating more demanding tests.

Remedy:

Choose the right tools for the job. When selecting tools, always trial them against your own real-life test cases and see if, and how well, they cope.

1. Misunderstood Benefits

Expecting automation to fully replace manual testing is a mistake. Not all types of testing can be automated: categories like exploratory testing and user experience testing are firmly in the human domain. The initial, first-time testing also needs to be done by humans, in order to spot whether something important was overlooked during product design.

Where automation yields the most benefit is in regression testing, data-driven testing, API testing, performance testing and anywhere else no creativity is required. Computers are very good workers, but they are not good at spotting anything outside their strictly prescribed scripts.
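Data-driven testing, one of the strengths listed above, can be sketched as a single check run over a table of inputs: adding coverage means adding rows, not writing new test code. The status-transition table below is purely illustrative, not Maximo's actual PO status rules.

```python
# Sketch of data-driven testing: one check, many data rows. The
# allowed-transition table is illustrative, not Maximo's actual
# PO status logic.

ALLOWED_TRANSITIONS = {
    ("WAPPR", "APPR"): True,    # unapproved -> approved
    ("APPR", "CLOSE"): True,    # approved -> closed
    ("CLOSE", "APPR"): False,   # closed POs cannot be reopened here
}


def can_transition(old_status, new_status):
    """Return whether the (hypothetical) rules allow this change."""
    return ALLOWED_TRANSITIONS.get((old_status, new_status), False)


# The test cases are plain data; a new scenario is just a new row.
CASES = [
    ("WAPPR", "APPR", True),
    ("APPR", "CLOSE", True),
    ("CLOSE", "APPR", False),
    ("WAPPR", "CLOSE", False),   # not in the table -> disallowed
]

for old, new, expected in CASES:
    assert can_transition(old, new) == expected
```

Frameworks like pytest formalise this shape with parametrised tests, so each row reports as its own pass or fail.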

Remedy:

Plan carefully what to automate. If possible, aim for the ‘80:20’ rule as the ratio of automated to manual testing. Segregate your testing between manual and automated by playing to their respective strengths.
