Tips for Optimizing Testing Automations and Reducing Inconsistencies

Oct 3, 2018

Zachary Jonas

Running an automation suite can be an extremely rewarding practice for any software QA team. There is a reassuring certainty in knowing that multiple specific use cases are guaranteed to work come release time, but it can be challenging to take time out of manual testing to write automations and even more challenging to maintain them continuously as your product evolves and changes. Even if you manage to surmount these obstacles, it can be hard to ensure that your tests are deterministic – that is, failing for the right reasons.

Let me give you an example. Imagine you have a test to make sure something gets written to a file. Your test waits until the write process should be complete and then checks the file to ensure that the contents are updated as expected. You test the automation, and it performs well, so you add it to your automation suite. Now, the file being written to might be in its expected state when you make the call to read it the majority of the time. However, unless you know exactly when that writing will be finished, it is possible that some of the time that file might be read before the expected results exist, causing the test to fail not because of an uncaught bug but because the test is out of sync with the process it’s testing. Therein lies the rub.
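One way to avoid this kind of race is to poll for the condition you actually care about instead of sleeping for a fixed amount of time. Here is a minimal sketch in Python; the file name, the expected string, and the timeout values are all illustrative, not taken from any particular test suite:

```python
import time
from pathlib import Path

def wait_for_file_contents(path, expected, timeout=10.0, interval=0.25):
    """Poll a file until it contains the expected text, or give up after `timeout` seconds.

    Polling on the actual condition keeps the test in sync with the write
    process instead of guessing at a fixed delay.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        target = Path(path)
        if target.exists() and expected in target.read_text():
            return True
        time.sleep(interval)
    return False

# In the test, assert on the polled condition rather than sleeping blindly:
# assert wait_for_file_contents("output.log", "write complete"), "file never reached its expected state"
```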

Test automations involving asynchronous calls from a web application can be especially tricky: the page under test is constantly being updated, and it is hard to determine exactly when those updates will arrive.

Thankfully, there are some best practices that can help QA teams ensure that their tests – even those handling asynchronous requests – are deterministic. You’d be surprised how easy these are to miss, so keep this list handy while writing and updating your own automation suites!

1) Wait for your web elements to be in the correct state.

In a web application, your web elements may exist in the DOM before they reach the state that makes them accessible to the end user, such as being visible on the page. A common example is a button hidden inside a menu that is not currently open. Between the moment that menu opens and the moment your automation tries to click the button, there can be a window in which the button is not yet visible to the user, causing the click to fail. The solution is to wait for specific conditions in your locator methods. Conditions worth checking include whether the element is clickable, visible, invisible, displayed, selected, or focused. Confirm the relevant condition before proceeding with the next step of your test.
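As a rough illustration of what this looks like with Selenium’s explicit waits (the URL and element IDs below are placeholders, not from any particular application):

```python
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

driver = webdriver.Chrome()
driver.get("https://example.com/app")  # placeholder URL
wait = WebDriverWait(driver, timeout=10)

# Wait until the menu is actually clickable (present, visible, and enabled)
# before opening it.
wait.until(EC.element_to_be_clickable((By.ID, "settings-menu"))).click()

# The button inside the menu may take a moment to become visible after the
# menu opens, so wait for that condition too instead of clicking immediately.
wait.until(EC.element_to_be_clickable((By.ID, "save-button"))).click()
```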

2) Wait for all asynchronous network requests.

Network speed can change based on a variety of factors, such as network traffic and external lag from your ISP. This makes it crucial to know when your asynchronous network requests have completed so your automations know when to continue. It can be helpful to include things like special tags in the DOM so that your automations can poll for the completion of these requests. Another good practice is using loading GIFs or status bars that your automations can pace against. You know that when a status bar disappears, the element it refers to should be loaded, and the test can proceed accordingly. Bonus: end users appreciate loading indicators as well!
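Here is a sketch of both approaches with Selenium, assuming a `driver` is already set up; the `.loading-spinner` selector and the `data-loaded` attribute are hypothetical markers your application would need to expose:

```python
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

wait = WebDriverWait(driver, timeout=15)

# Wait for the loading indicator to disappear before touching the results.
wait.until(EC.invisibility_of_element_located((By.CSS_SELECTOR, ".loading-spinner")))

# Or poll for a completion flag the application writes into the DOM once its
# asynchronous request has finished.
wait.until(
    lambda d: d.find_element(By.ID, "report-grid").get_attribute("data-loaded") == "true"
)
```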

3) Confirm element identity.

Many web applications simulate a tab-like interface, but under the covers all of the elements exist in the DOM; some are shown and others are hidden depending on which “tab” is selected. If two elements in separate tabs share similar identifiers and may be loaded at nearly the same time, as is common with ubiquitous elements like “OK” buttons, you may run into some inconsistencies. Be wary of this and make sure you are interacting with the correct element in all cases!
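One way to guard against this is to scope your locator to the active tab instead of searching the whole DOM, and to double-check visibility before clicking. A sketch with Selenium, where the `.tab-panel.active` class is a stand-in for whatever your application uses to mark the selected tab:

```python
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

wait = WebDriverWait(driver, timeout=10)

# Find the currently visible tab panel first, then search for the button
# inside it, so a hidden "OK" button in another tab can't be matched.
active_panel = wait.until(
    EC.visibility_of_element_located((By.CSS_SELECTOR, ".tab-panel.active"))
)
ok_button = active_panel.find_element(By.XPATH, ".//button[text()='OK']")
assert ok_button.is_displayed(), "matched an OK button the user can't see"
ok_button.click()
```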

4) Be careful with cleanup!

In order to maintain a fresh environment between tests, it is common practice to go through a clean-up routine. This could involve a variety of tasks, such as deleting or moving files and releasing and reinitializing objects. It is important to make sure that your cleanup step is not causing any complications between test runs.

Imagine, for example, that you have a test that writes to a file, locking the file while this process is underway. Let’s say your automation completes its write operation, marks the write as successful, and ends. That success return in turn kicks off the cleanup process, which promptly deletes the file. In multi-threaded environments, a race condition might occur where the cleanup process is pitted against the operating system’s buffer flushing process. If the cleanup protocol manages to delete the file before the OS has finished writing it, your automation manager could detect an exception. Hunting down the origin of such errors can be time- and labor-intensive, so avoid these types of scenarios by ensuring that all processes using a given file are complete before you attempt to delete or release the file from its locked state.
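Here is a rough sketch of both sides of that race in Python: flush and sync the write before reporting success, and have cleanup retry instead of assuming the file is already free. The function names, paths, and timeouts are illustrative:

```python
import os
import time

def finish_write(path, payload):
    """Write the payload and make sure it is actually on disk before the test
    reports success and cleanup is allowed to run."""
    with open(path, "w") as handle:
        handle.write(payload)
        handle.flush()
        os.fsync(handle.fileno())  # don't signal success while data sits in OS buffers

def safe_delete(path, timeout=10.0, interval=0.25):
    """Delete a file during cleanup, retrying while another process still holds it."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if not os.path.exists(path):
            return True  # nothing left to clean up
        try:
            os.remove(path)
            return True
        except (PermissionError, OSError):
            time.sleep(interval)  # file still in use; back off and try again
    return False
```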

5) Be Proactive!

Creating a reliable automation suite is a dynamic battle that requires ongoing attention and effort, but it’s all worth it when you can confidently assert that a failed test means there’s an uncaught bug in an outgoing release. Here at Exago, we’ve learned just how difficult such a task can be, but hopefully anybody trying to write automation scripts can benefit from some of the lessons we’ve learned along the way.    

Photo Credit: This modified version of "Macro Cogwheel Gear Engine Vintage" by Pavlofox is licensed under CC BY 2.0.
