
Improving our automated tests 5x

Automated testing is a critical part of our deploy process at Hudl. We rely on it for feedback during deploys to both production and our test environments. We started down the path of automated regression testing about two years ago, trying various products early on until we found the right fit. We've learned a lot and improved our process along the way, and we want to share those experiences.

If you search around you will find hundreds of automated testing frameworks, each claiming to be the best. For us, the choice came down to ease of use and a robust feature set. We wanted something with a low barrier to entry, because most of our Quality Assurance team doesn't come from a development background and we wanted them to participate. For coverage, we wanted something that let us interact with many different parts of the site, backed by an active development community. We have gone through a few frameworks and are currently using CasperJs.

First Push

When we started this new process we settled on Watir, a Ruby-based framework with a low barrier to entry. It was a great fit since we also had quite a few Ruby gurus on the product team. Watir worked well for around a year, but we kept seeing false positives caused by network latency. Those failures became common enough that we decided to look elsewhere.

Moving to CasperJs/PhantomJs

CasperJs, coupled with the headless browser PhantomJs, was the next solution we chose and the one we still use. While there have been some bumps along the path, we are pretty happy with CasperJs. Because CasperJs is JavaScript-based, we can interact seamlessly with the DOM on the page, and use jQuery for some of the harder-to-reach areas.

CasperJs Evaluate
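The evaluate call pictured above works roughly like this. This is a minimal sketch, not our actual tests: the URL and selectors are invented, and it assumes the page under test ships its own jQuery. It runs under the casperjs binary, not plain Node.

```javascript
// Sketch of casper.evaluate: the passed function executes inside the
// page context, so the DOM and the page's own jQuery are available.
var casper = require('casper').create();

casper.start('https://www.example.com/login', function () {
    // Pull a value straight out of the page's DOM.
    var title = this.evaluate(function () {
        return document.title;
    });
    this.echo('Page title: ' + title);
});

casper.then(function () {
    // Use the page's jQuery for a harder-to-reach element.
    // ('.dropdown-toggle' is an invented selector for illustration.)
    this.evaluate(function () {
        jQuery('.dropdown-toggle').first().trigger('click');
    });
});

casper.run();
```

The key detail is that code inside evaluate() cannot see CasperJs variables directly; anything it needs must be passed in as extra arguments to evaluate().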

The Performance Problem

While we liked how easy CasperJs was to work with, we weren't happy with its performance. Our test suite was taking over 12 minutes, and with the suite continuously growing, the problem was only going to get worse. We wanted the coverage, but we didn't want to slow down our process. With our deploy process, the tests weren't finishing until 10 minutes after the deploy itself was done. We push to production 5-10 times a day (our record is 24), so while the extra time per deploy might not seem like much, it adds up over the span of a day. That wait also ties up the person deploying and prevents the next person from merging their code.

Runtimes for CasperJs 1.0.2
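The daily cost described above can be roughed out with quick arithmetic, using the numbers from this post (about 10 minutes of waiting per deploy, 5-10 deploys per day):

```javascript
// Back-of-the-envelope cost of test latency per day.
// Inputs come from the post: ~10 minutes of waiting per deploy,
// 5-10 production pushes per day (record: 24).
function dailyOverheadMinutes(waitPerDeploy, deploysPerDay) {
    return waitPerDeploy * deploysPerDay;
}

console.log(dailyOverheadMinutes(10, 5));  // quiet day: 50 minutes
console.log(dailyOverheadMinutes(10, 10)); // busy day: 100 minutes
console.log(dailyOverheadMinutes(10, 24)); // record day: 240 minutes
```

Even on a quiet day that is nearly an hour of engineers waiting on tests, which is why shaving minutes off the suite mattered so much to us.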

The Solution: Update software and run in parallel

After some digging we found that the version of CasperJs we were using, 1.0.2, was almost 9 months old. In that time many changes had been committed, both expanding the framework and making it faster! Version 1.0.3 improved wait and load times, which is where the biggest gains were to be found. Unfortunately it wasn't as simple as updating the CasperJs files; we also needed to update our tests. After converting all of our tests to the new suggested format and moving to the most recent version of CasperJs, 1.1-beta3, we were seeing runtimes of 6 minutes. Deploys were no longer waiting on tests!

Runtimes for CasperJs 1.1-beta3
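The "new suggested format" mentioned above is CasperJs 1.1's casper.test.begin() style, run via the `casperjs test` command (where a `casper` instance is predefined). A minimal sketch, with an invented URL, title, and selector rather than anything from our real suite:

```javascript
// CasperJs 1.1 test format: wrap each scenario in casper.test.begin(),
// declare the number of planned assertions, and call test.done() at the end.
casper.test.begin('Login page renders', 2, function suite(test) {
    casper.start('https://www.example.com/login', function () {
        test.assertTitle('Log In', 'page title matches');
        test.assertExists('form#login', 'login form is present');
    });

    casper.run(function () {
        test.done();
    });
});
```

Declaring the planned assertion count up front lets the runner flag tests that silently skip checks, which helped us catch a few tests that were passing without actually asserting anything.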

While this was great, we wanted to go further, and what better way than running in parallel? Again, there are various tools available for this; we decided on Grunt, a JavaScript task runner. A nice npm package combines Grunt and CasperJs and lets us run in parallel with a specified number of threads. We then experimented to find the optimal thread count for our build agents. With tests running in parallel, we are seeing runtimes of 2.5 minutes! On top of that, we believe a few more optimizations to the tests themselves will bring this closer to 2 minutes.

Runtimes for CasperJs 1.1-beta3 (8 threads)
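The post doesn't name the npm package; one package that combines Grunt and CasperJs this way is grunt-casper, so a Gruntfile along these lines is a plausible sketch. The option names and paths here are illustrative assumptions, not our exact configuration:

```javascript
// Illustrative Gruntfile for fanning CasperJs tests out in parallel.
// Assumes the grunt-casper plugin; option names and src paths are a
// sketch, not a verified copy of any real configuration.
module.exports = function (grunt) {
    grunt.initConfig({
        casper: {
            regression: {
                options: {
                    test: true,       // run files through `casperjs test`
                    parallel: true,   // spawn multiple casperjs processes
                    concurrency: 8    // thread count; tune per build agent
                },
                src: ['tests/**/*.js']
            }
        }
    });

    grunt.loadNpmTasks('grunt-casper');
    grunt.registerTask('default', ['casper']);
};
```

The thread count is worth benchmarking per machine: too few leaves cores idle, while too many makes PhantomJs instances compete for CPU and can reintroduce the timing flakiness parallelism was meant to avoid.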

What we learned

Throughout this process we have learned quite a bit about automated regression testing. We have tried a few frameworks, seen what did and didn't work, and seen the overall benefit of the tests themselves.

Don’t be afraid to drop a framework - if it isn’t working for you, the time you spend researching and building up a new test suite may well pay for itself in the long run.

Check for updates - it may sound obvious, but after the initial setup many things are left alone as long as they keep working. In our case we could have had these faster tests 6 months earlier.

Don’t be content with where you are - keep researching and looking for better alternatives, because new ones are always appearing.

