UI-testing – EPiTest – about time
This post is part of a series that roughly covers a presentation I did about UI-testing at an EPiServer meetup group. It contains the following parts:
It’s been hinted in a previous post that time matters quite a bit when it comes to your UI-tests. That’s because UI-testing is slow. Dead slow. Compared to your in-memory unit tests, UI-tests have to do a lot of time-consuming things, such as opening browser windows and talking over HTTP. To give an example: in the project I’m currently working on, the integration test project (which consists of 90% UI-tests) takes 56 minutes to run.
Since our CI chain (rightly so) depends on tests passing before pushing to our demo environments and so on, it caused a lot of frustration when a test timed out and you had to wait another 50+ minutes before things got pushed out. Imagine another test timing out on the next run and you can understand where the frustration came from.
The feedback you get from a failing test is of course invaluable, but in this case it was not rapid enough to deliver value. There are various ways to speed up your tests besides the ones mentioned in previous posts.
Log in via code
Let’s say you have 100 tests that require the user to be logged in to edit mode. That means visiting the login page, entering the username and password, and clicking the login button 100 times. Instead, you could create a simple page that logs the user in directly via code just by being visited: instead of performing a POST you simply do a GET. Just remember to remove that page before going into production.
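A minimal sketch of such a shortcut endpoint, here as a plain Python WSGI app for illustration (the path, cookie name and response are all made up; in an EPiServer project this would be an ASP.NET page calling the authentication API instead):

```python
# quick_login.py - hypothetical GET endpoint that signs the user in
# directly, skipping the login form. Remove before production!

def quick_login_app(environ, start_response):
    """WSGI app: a GET to /test-login sets an auth cookie directly."""
    if environ.get("PATH_INFO") == "/test-login":
        # A real app would create a proper authentication ticket here;
        # the cookie below is purely illustrative.
        headers = [
            ("Content-Type", "text/plain"),
            ("Set-Cookie", "auth=test-editor; Path=/; HttpOnly"),
        ]
        start_response("200 OK", headers)
        return [b"logged in"]
    start_response("404 Not Found", [("Content-Type", "text/plain")])
    return [b"not found"]
```

With something like this in place, each UI-test issues a single GET to the shortcut page instead of driving the whole login form.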
Don’t close the browser
This is a highly debatable step and not something I’d really recommend. Normally you want to open a new browser for each new test to ensure a fresh context in terms of cookies and so on. You could, however, keep the browser window open between tests, and this can save a huge amount of time since closing and opening the browser is one of the biggest cost factors in your UI-tests. Just make sure you know what you’re doing, since there’s a high risk that browser data from previous tests starts affecting the current one.
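If you do go down this road, a common pattern is to hold one browser instance for the whole run and reset as much state as you can between tests. A sketch of the idea, with a stub class standing in for a real Selenium WebDriver so it stays self-contained (with Selenium you would call `driver.delete_all_cookies()` the same way):

```python
# Shared-browser pattern: one expensive "browser" per test run,
# cheap state reset between tests. StubDriver stands in for a
# real Selenium WebDriver in this sketch.

class StubDriver:
    launches = 0  # counts how many "browsers" were opened

    def __init__(self):
        StubDriver.launches += 1
        self.cookies = {"stale": "data"}  # leftover state from a prior test

    def delete_all_cookies(self):
        self.cookies.clear()

_driver = None

def get_driver():
    """Lazily create the browser once and reuse it for every test."""
    global _driver
    if _driver is None:
        _driver = StubDriver()
    return _driver

def fresh_driver():
    """Reused browser with cookies wiped; other state may still leak!"""
    driver = get_driver()
    driver.delete_all_cookies()
    return driver
```

Note that clearing cookies only covers part of the state a browser carries; local storage, cache and open windows are exactly the kind of leakage that makes this trick risky.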
Running your tests
We’ve only talked about how to make the actual tests run quicker, but there are also things you can do with regard to how and when you run your tests.
Divide and conquer
This is the method we’re actually using today. Instead of running all integration tests on every commit, we have an “algorithm” that runs newly added tests, recently failed tests and a fixed set of tests that should always run. These tests take around 9 minutes, which is much more manageable. The full suite of integration tests is run once each night.
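A toy sketch of that kind of selection (to be clear, this is not the actual algorithm; the three categories and their priority order are assumptions based on the description above):

```python
def select_tests(all_tests, newly_added, recently_failed, always_run):
    """Pick the subset to run on each commit: newly added tests first,
    then recent failures, then the fixed smoke set, de-duplicated.
    Everything else in all_tests waits for the nightly full run."""
    selected = []
    for name in list(newly_added) + list(recently_failed) + list(always_run):
        if name in all_tests and name not in selected:
            selected.append(name)
    return selected
```

Called with a suite of five tests where `e` is new, `b` and `e` failed recently, and `a` and `b` always run, it would select `["e", "b", "a"]` and leave the rest for the nightly run.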
While not optimal, this has worked well for us. One drawback is that since the full runs are triggered by a schedule rather than by a person’s commit, there’s no obvious person responsible for fixing a red test run.
If you want to know more about how this algorithm works, you should talk to Patrik Akselsson, its creator.
Running tests in parallel
This is a viable option if your testing framework allows running tests in parallel. I don’t have much experience in this area myself, but a simple Google search should give you some information.
To the cloud(mobile)!
Another option is to outsource the running of the tests to one of the cloud-based services. Sauce Labs is an example of such a service that runs Selenium-based tests against your site for you. Their pricing model is based on how long your suite takes to complete, but keep in mind that these services typically allow your tests to run in parallel, which shortens the time it takes for your suite to finish. There are added benefits too, such as video playback of failed tests. If you have the money, services like these can be a nice choice.