Why we ditched unit tests for integration & end-to-end tests

With the rise of agile development practices like continuous integration, development teams have struggled to find the right balance between unit, integration and end-to-end tests. Most developers settle for the 70/20/10 rule promoted in this Google Testing article, which splits testing into 70% unit tests, 20% integration tests, and 10% end-to-end tests.

For some teams, spending 70% of their testing effort on unit tests is not optimal; this was the case for our team at Systelos, a SaaS platform that allows financial advisors to guide client behavior at scale. We tried it for a few months and it didn't work, so we ditched unit tests in favor of integration and end-to-end tests.

The problem

As a team, we wanted to be able to answer questions like "Is our application working the way we want it to?" More specifically:

  • Can users log in?
  • Can users send messages?
  • Can users receive notifications?

The problem with the 70/20/10 strategy, with its heavy focus on unit tests, is that it doesn't answer these questions, or many other important high-level questions.

Unit tests are too isolated to test real user stories. They are fast and isolate failures really well, but they don't provide the same confidence that integration and end-to-end tests do, especially in a frontend-heavy application like ours, where user behaviour is complex and unpredictable.

Furthermore, most issues reported in our JIRA were the result of miscommunication between multiple UI components and backend services. When incorrect information was presented to our users, it was usually because a service didn't fetch the required information from the right location, or because different frontend components didn't work together properly.

Our solution

Since unit tests didn't have much impact on the reliability of our application, we needed a better testing strategy to help us answer high-level questions like: Can a user still send a message after all these code changes?

After close inspection, we noticed that the small number of end-to-end tests we had did a better job of spotting issues we hadn't seen before; they proved to be valuable in our quest for reliability. This prompted us to flip the pyramid on its head and focus on both end-to-end and integration tests.

Our new strategy was to split our automated tests into 20% unit tests, 30% integration tests and 50% end-to-end tests. With the help of testing frameworks like Jasmine, Protractor, Sinon and Karma, this split helped us catch 95% of issues before they made it to production.
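
For context on how these tools fit together, here's a minimal sketch of a karma.conf.js that wires Jasmine and Sinon into the test runner. This is illustrative rather than our actual configuration; it assumes the karma-jasmine, karma-sinon and karma-chrome-launcher plugins are installed, and the file paths are hypothetical.

    // karma.conf.js: a minimal sketch, not our real configuration.
    module.exports = function (config) {
      config.set({
        frameworks: ['jasmine', 'sinon'],  // Jasmine for specs, Sinon for spies/stubs/mocks
        files: [
          'src/**/*.js',                   // application code (hypothetical paths)
          'test/**/*.spec.js'              // unit and integration specs
        ],
        browsers: ['ChromeHeadless'],      // run the specs in a headless browser
        singleRun: true                    // exit after one pass, as CI expects
      });
    };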

20% unit tests

We still believe unit tests provide many advantages to our team; they run fast, provide great isolation for failures, and are much more reliable than integration or end-to-end tests.

Because we now spend only 20% of our testing effort on unit testing, we're picky about which functions to test. We focus only on functions that have a high probability of breaking when the logic is incorrect or when they receive wrong inputs.
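
For example, a logic-heavy pure calculation earns a spec while a trivial getter doesn't. Below is a hedged sketch of what one of these high-value Jasmine specs can look like; calculateProratedFee is a made-up stand-in defined inline so the sketch runs on its own, not real Systelos code.

    // A hypothetical pure, logic-heavy function: the kind worth unit testing.
    function calculateProratedFee(monthlyFee, daysUsed, daysInMonth) {
      if (daysUsed < 0 || daysUsed > daysInMonth) {
        throw new RangeError('daysUsed must be between 0 and daysInMonth');
      }
      return (monthlyFee * daysUsed) / daysInMonth;
    }

    describe('calculateProratedFee', () => {
      it('charges a partial fee for a partial billing period', () => {
        // 15 of 30 days used at a $100 monthly fee should cost $50
        expect(calculateProratedFee(100, 15, 30)).toBe(50);
      });

      it('rejects wrong inputs instead of returning garbage', () => {
        expect(() => calculateProratedFee(100, -1, 30)).toThrowError(RangeError);
      });
    });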

30% integration tests

Integration tests are tricky to implement, especially on the frontend, because larger functions tend to interact with many different services and APIs. We use Sinon.js to spy on, stub and mock external services, since they've already been tested. And just like our unit tests, we focus only on high-reward functions.
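
As a rough illustration of the approach, here's what stubbing an external service with Sinon can look like in a Jasmine spec. MessageSender and NotificationService are hypothetical names defined inline so the sketch is self-contained; the point is that the already-tested external call is replaced, so the spec only exercises the wiring between our own pieces.

    const sinon = require('sinon');

    // Hypothetical collaborators, standing in for real application code.
    const NotificationService = {
      send: async (recipientId, body) => { /* real network call lives here */ }
    };

    class MessageSender {
      constructor(notifier) { this.notifier = notifier; }
      async send(recipientId, body) {
        // ...persist the message, then notify the recipient...
        await this.notifier.send(recipientId, body);
      }
    }

    describe('MessageSender', () => {
      afterEach(() => sinon.restore());  // undo all stubs between specs

      it('notifies the recipient after sending a message', async () => {
        // Stub the external call: it's already tested elsewhere, so this
        // spec only checks that our code talks to it correctly.
        const notify = sinon.stub(NotificationService, 'send').resolves();

        await new MessageSender(NotificationService).send('advisor-1', 'Hello');

        sinon.assert.calledOnce(notify);
        sinon.assert.calledWith(notify, 'advisor-1', 'Hello');
      });
    });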

50% end-to-end tests

We focus most of our efforts on end-to-end tests; they exercise the whole application from start to finish, ensuring that all its integrated pieces work together as expected. They simulate real user scenarios, essentially testing the application the way a real user would use it.

We use Protractor to control the browser and simulate a client nudging their advisor and starting a conversation about a life-change update. Running multiple tests like these, covering every key piece of functionality in our application, helps us ship code confidently.
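
A stripped-down sketch of what such a Protractor spec can look like is below. The selectors, URL and message text are hypothetical placeholders, not our actual markup, and it assumes a configured baseUrl and the selenium promise manager disabled so async/await works.

    // Protractor spec sketch: a client starts a conversation with their advisor.
    describe('client-to-advisor conversation', () => {
      it('lets a client send a message about a life change', async () => {
        await browser.get('/conversations/new');

        await element(by.css('#message-input')).sendKeys('I just changed jobs');
        await element(by.css('#send-button')).click();

        // The new message should appear at the end of the conversation thread.
        const lastMessage = element.all(by.css('.message-body')).last();
        expect(await lastMessage.getText()).toContain('I just changed jobs');
      });
    });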

There are, however, hurdles we had to overcome in order to make this possible within our continuous integration environment, where code is released every week. Some of them include:

  • End-to-end tests take a long time to run. This can be a problem when you’re trying to deliver code fast (continuous delivery FTW!).
  • They sometimes fail for no apparent reason. The browser may treat long-running requests as failures, or another developer may have changed the IDs in your HTML without updating the tests.
  • They take a lot more time to write. Since user actions require multiple steps, you need to be extra careful managing your IDs, adding timeouts, and simulating external devices and browser capabilities; the complexities are endless! (One way of taming the timing problem is sketched after this list.)
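
To give a flavour of the timing problem, a bare click-and-assert spec races the browser. One common mitigation, and a small preview of the fixes in the next post, is an explicit wait using Protractor's ExpectedConditions instead of a blind sleep. The selectors and toast text below are hypothetical.

    const EC = protractor.ExpectedConditions;

    it('waits for the app instead of sleeping', async () => {
      const toast = element(by.css('.notification-toast'));  // hypothetical selector

      await element(by.css('#send-button')).click();

      // Wait up to 10 seconds for the notification to render before asserting,
      // so a slow request doesn't turn into a random failure.
      await browser.wait(EC.visibilityOf(toast), 10000);
      expect(await toast.getText()).toContain('Message sent');
    });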

We took several steps to solve these issues, and I'll be sharing them with you in my next blog post, so stay tuned!

Conclusion

Finding the right balance of how many unit, integration and end-to-end tests to write is not easy. Every team and application is different and requires a different ratio. A good way to identify the best strategy is to analyze and reflect on questions like: Which tests within your codebase have made a big impact? Which ones saved your application from crashing in production? For us, it was clear that end-to-end tests made the most impact and found the most bugs, with integration tests coming in second.
