6 JavaScript UI Testing Anti-Patterns You Need to Avoid
Explore the most common bad practices and how to implement the right solutions.
Test automation is exploding in popularity as more and more companies seek to replace manual testing with programmatic solutions.
In terms of US dollars, the field of test automation is projected to skyrocket from $12.6 billion in 2019 to $28.8 billion by 2024.¹ That is an increase of more than 100% within five years, and this growth will create a wealth of opportunity.
Unfortunately, new opportunity can also bring an increase in bad testing practices. In this article, we will walk through six of the most common JavaScript test automation anti-patterns so new shops can actively avoid them.
Waiting…
Modern automation frameworks such as Cypress and Playwright wait implicitly and explicitly until a page has loaded, an element is visible, or an element can be interacted with. This makes tests less flaky and prone to failure by giving the browser time to reach a testable state. Once the load has completed, the test resumes execution.
An example of a correct usage of waiting:
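The snippet below is a minimal sketch of built-in waiting; the dashboard route, button ID, and heading text are hypothetical. Cypress commands such as cy.get() and cy.contains() retry automatically (up to the default four-second timeout) until the element exists and is actionable, so no explicit pause is needed.

```javascript
// Cypress -- built-in retry means no hard-coded pauses
describe('dashboard', () => {
  it('shows the new heading once the page has loaded', () => {
    cy.visit('/dashboard');
    // cy.get() retries until the button exists and can be clicked
    cy.get('#newHeadingButton').click();
    // The assertion retries until the heading is visible or times out
    cy.contains('h1', 'New Heading').should('be.visible');
  });
});
```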
The most common waiting anti-pattern is pausing for an explicit timeout between commands. Doing so halts execution entirely, then resumes once the timeout has elapsed. This delays every step and slows the overall test run down.
An example of an incorrect usage of waiting:
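Here is a sketch of the same hypothetical journey written with hard-coded pauses. The test now always takes at least ten extra seconds, and can still fail whenever the page needs longer than the arbitrary timeout.

```javascript
// Cypress -- anti-pattern: fixed timeouts between commands
describe('dashboard', () => {
  it('shows the new heading (slow, flaky version)', () => {
    cy.visit('/dashboard');
    cy.wait(5000); // arbitrary pause -- avoid this
    cy.get('#newHeadingButton').click();
    cy.wait(5000); // another arbitrary pause -- avoid this too
    cy.contains('h1', 'New Heading').should('be.visible');
  });
});
```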
Instead of writing the above code, use built-in wait strategies such as waiting on elements or responses. Consider that clicking "newHeadingButton" in the above example dispatches a request. We can wait on that request to finish before taking our next action.
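A sketch of waiting on the response itself, using Cypress' cy.intercept() and aliasing; the route is an assumption for illustration. Execution resumes the moment the response lands, rather than after an arbitrary pause.

```javascript
// Cypress -- wait on the aliased request instead of a fixed timeout
cy.intercept('POST', '/api/headings').as('createHeading');
cy.get('#newHeadingButton').click();
cy.wait('@createHeading'); // resumes as soon as the response arrives
cy.contains('h1', 'New Heading').should('be.visible');
```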
Taking UI Testing too Literally
If you or your engineers are setting up test state using the UI, you may be participating in a bad testing practice. Generating test state through the UI is cumbersome and can contribute to test flake. Instead, use existing application implementation to your advantage to create hermetically sealed user journeys.
When testing application login for a brand-new user, do not build the new user through the UI and then attempt to log in. Your test is no longer hermetically sealed, as it now exercises two user journeys:
- New user registration
- Application login
Instead, create the new user with a request to your application's API, then attempt to log in with the newly built credentials. This can be done effortlessly using Cypress' built-in request command and aliasing.
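The sketch below assumes a hypothetical /api/users registration endpoint and hypothetical form selectors: the user is seeded over HTTP with cy.request() and aliased, so the UI portion of the test exercises only the login journey.

```javascript
// Cypress -- build test state via the API, test only the login journey
describe('login', () => {
  it('logs in a freshly created user', () => {
    cy.request('POST', '/api/users', {
      email: 'new.user@example.com',
      password: 'Sup3rS3cret!',
    }).its('body').as('newUser'); // alias the created user for later steps

    cy.visit('/login');
    cy.get('@newUser').then((user) => {
      cy.get('#email').type(user.email);
      cy.get('#password').type('Sup3rS3cret!');
    });
    cy.get('#submit').click();
    cy.url().should('include', '/profile');
  });
});
```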
A similar pattern may be used with Playwright and a JavaScript HTTP library such as Requestify.
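A rough equivalent in Playwright, seeding the user with the requestify library before driving only the login UI. The base URL, endpoint, and selectors are assumptions, and requestify's response.getBody() is used to read the created user.

```javascript
// Playwright -- seed state over HTTP, then test only the login journey
const { test, expect } = require('@playwright/test');
const requestify = require('requestify');

test('logs in a freshly created user', async ({ page }) => {
  // Hypothetical registration endpoint
  const response = await requestify.post('https://app.example.com/api/users', {
    email: 'new.user@example.com',
    password: 'Sup3rS3cret!',
  });
  const user = response.getBody();

  await page.goto('https://app.example.com/login');
  await page.fill('#email', user.email);
  await page.fill('#password', 'Sup3rS3cret!');
  await page.click('#submit');
  await expect(page).toHaveURL(/\/profile/);
});
```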
Testing Independent of CI
An all-too-common anti-pattern in test automation is to run tests independently of Continuous Integration. The direct result is that failing tests have no immediate repercussions. When run in CI, a failing test can stop a build from being merged, thereby saving potential escapes from reaching validation or production environments. In addition, a failing test can act as a call to arms for a team of developers to turn their attention to the failure.
Test engineers should work with Site Reliability or Infrastructure engineers to ensure that automated tests are run on a per-build or per-merge basis. Tests can even be run on a commit or pre-push when using packages such as Pre-Commit should an engineering team decide to adopt such a tool.
I personally recommend using Pre-Commit as a team, running linting and formatting on pre-commit followed by unit tests on pre-push. Since UI tests take longer, I would run them on a per-build or per-merge basis using something like GitHub Actions or a Jenkins workflow.
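A package.json sketch of that split, assuming the npm pre-commit and pre-push packages (which read hook arrays from package.json) and hypothetical script names; UI tests are intentionally left out of the local hooks and deferred to CI.

```json
{
  "scripts": {
    "lint": "eslint . && prettier --check .",
    "test:unit": "jest"
  },
  "pre-commit": ["lint"],
  "pre-push": ["test:unit"]
}
```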
Behaviorally Driven Overhead
I used to be a big proponent of Behavior-Driven Development (BDD), so much so that I wrote a best-practices standard for two of the companies I have worked for. However, I have since moved past BDD, having concluded that the process provides little to no value while acting as a source of refactoring pain.
Behavior-Driven Development tools such as Cucumber are wonderful to work with in the dreamy scenario where the entire company has bought into the art of BDD. All too often, however, it is the testing team that writes the Gherkin while the rest of the business either ignores the practice or does not contribute. In this scenario, BDD loses its value as a collaboration tool between departments. Instead of bringing product management, development, and QA closer, the practice ends up alienating QA.
Another reason for removing BDD is the amount of overhead involved in writing and refactoring a test. Consider a user journey where a regular user navigates to a login page, submits valid information, and checks for success. We can write that as a feature file.
// login.feature
Feature: Application Login
  As a user,
  I would like to login to the application

  Scenario: Login with valid information
    Given we visit the login page
    When we submit valid credentials
    Then we should redirect to our profile page
Now we need a steps file.
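A steps-file sketch to match, using the Given/When/Then API from cypress-cucumber-preprocessor; the selectors and credentials are assumptions.

```javascript
// login.steps.js -- step definitions bound to the feature above
const { Given, When, Then } = require('cypress-cucumber-preprocessor/steps');

Given('we visit the login page', () => {
  cy.visit('/login');
});

When('we submit valid credentials', () => {
  cy.get('#email').type('user@example.com');
  cy.get('#password').type('Sup3rS3cret!');
  cy.get('#submit').click();
});

Then('we should redirect to our profile page', () => {
  cy.url().should('include', '/profile');
});
```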
What if the login workflow no longer redirects to the profile page?
Instead of refactoring in one location, you must now refactor in two: the steps and the feature. The most glaring pain point of BDD is that small refactors take twice as long as they would without BDD at all. Instead of a single source of truth for your testing journeys, you have two separate modules that must be kept in sync in order to function properly.
Instead of using BDD, you should write more declarative user journeys.
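The same journey written as plain Cypress, one file with no feature/steps split; selectors and routes are again assumptions.

```javascript
// Cypress -- the declarative journey, no Gherkin overhead
describe('Application Login', () => {
  it('logs in a user with valid credentials and redirects to their profile', () => {
    cy.visit('/login');
    cy.get('#email').type('user@example.com');
    cy.get('#password').type('Sup3rS3cret!');
    cy.get('#submit').click();
    cy.url().should('include', '/profile');
  });
});
```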
The journeys above are declarative enough to the reader (whether it be a tester, product manager, or developer) that the test takes a user, inputs valid data, and submits it successfully. No steps, features, or complicated overhead required, just plain JavaScript.
Conditions
Test engineers should seek to remove as much conditional logic as possible from a test in order to reduce flake. Conditional testing produces non-deterministic tests, which are difficult to troubleshoot and to run with confidence.²
I worked for a company that needed to test a third-party integration. Part of this test was to log in via the third party, then redirect back to their application. Due to the flaky nature of the tests, the authentication state was proving unpredictable. They implemented conditional logic in a before step to attempt to handle it:
- If user logged in, log out, then log in
- If user not logged in, log in
The tests were extremely unreliable due to this conditional logic: sometimes they would pass, though most of the time they would fail outright. I opted to remove the conditional logic in favor of an API logout method in a teardown step. The user would always have to log in (unless test execution had been manually interrupted), which made the tests much more reliable when run.
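A sketch of that teardown, assuming a hypothetical /api/logout endpoint: every test ends logged out, so every test can unconditionally start with a login.

```javascript
// Cypress -- deterministic teardown instead of conditional login logic
afterEach(() => {
  // Always end the test logged out, so the next test starts from a known state
  cy.request('POST', '/api/logout');
});
```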
Instead of using conditional logic flows, engineers should use APIs to build test state as much as they can, whether it be through making requests or mocking responses.
Let us consider an example application that is in the midst of an A/B testing campaign. The application is a bookstore which displays a specific book for group A, and a different book for group B. Instead of conditional logic, we could implement a response mock to always show the book for group A.
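A sketch of that mock with cy.intercept(); the route, payload, and book title are assumptions. Every run now sees group A's book regardless of which bucket the backend would have assigned.

```javascript
// Cypress -- stub the A/B endpoint so the test always sees group A
cy.intercept('GET', '/api/featured-book', {
  statusCode: 200,
  body: { title: 'A Tale of Two Cities', group: 'A' },
}).as('featuredBook');

cy.visit('/store');
cy.wait('@featuredBook'); // wait on the stubbed response, not a timeout
cy.contains('A Tale of Two Cities').should('be.visible');
```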
Now our test is deterministic and reliable as we built test state using an API rather than attempting to do so through the UI.
Choosing the Wrong Selectors
Oftentimes, choosing the correct selector criteria can be a difficult task. A common anti-pattern in test automation is to use highly brittle selectors when building page objects or writing tests. Brittle selectors are those that can change due to implementation refactoring. Improper selector criteria would be the use of non-unique IDs and classes, or XPath.³
Cypress specifically alleviates some of this pain by adopting jQuery's .first() and .last() methods, as well as methods for DOM traversal such as .siblings() and .parent().
// Cypress
cy.get('.chat-message').last();
Rather than relying on these selection methods, it is recommended to harden your selectors by adding customized selectors to the application itself (unless you do not have access to the source code).
In most cases, test engineers should sync with their development team to ensure unique selectors are added to the codebase. These include attributes such as data-id, data-test-id, data-cy, or any of their variants. They can be used like any attribute in an automation framework.
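For example, continuing the chat-message sketch from above with a hypothetical data-cy attribute in place of the brittle class selector:

```javascript
// Cypress -- select by a dedicated test attribute, not a class
cy.get('[data-cy="chat-message"]').last();
```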
Summary
We can now identify and avoid the six most common anti-patterns in JavaScript test automation. Keep in mind that these are not the only anti-patterns within the world of automated testing. As you continue working in the industry you will take note of other patterns to avoid while working to instill good testing practices.
Resources
- “Automation Testing Market.” Market Research Firm, www.marketsandmarkets.com/Market-Reports/automation-testing-market-113583451.html.
- “Conditional Testing.” Cypress Documentation, 15 Feb. 2021, docs.cypress.io/guides/core-concepts/conditional-testing.html#Definition.
- “Best Practices.” Cypress Documentation, 15 Feb. 2021, docs.cypress.io/guides/references/best-practices.html#Selecting-Elements.