Taking performance to a new level is one of Atlassian's key goals. To support this, we have recently set up a new team, the Performance Engineering Team. Our job is to build tools and provide performance expertise throughout the company. Expect to see a difference in our releases later this year!

The primary job of a performance team is to benchmark, profile and measure. Over the years Atlassian has accumulated tests in a number of different load generation frameworks. Each served us well in its time, but they all fell short on ease of development and maintenance.

…cue Soke!

Soke is the baby of the new Performance Engineering Team at Atlassian, and reached version 1.0 only a few weeks ago. It is our main measurement tool and the start of our push to make the performance of all Atlassian products *lustworthy*.

So what is it, already? OK…marketing over.

Soke is a framework for generating load with real browsers, driven via Selenium WebDriver. It stresses applications with accurate traffic, and allows for the collection of client-side profiling data, network traffic timings and server-side data.
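To make that concrete, here is a minimal sketch of browser-driven load generation using plain Selenium WebDriver. The class name, user count and target URL are illustrative assumptions, not Soke's actual API: each "virtual user" drives a real Chrome instance through a scenario and reads an end-to-end page load time from the Navigation Timing API.

```java
import org.openqa.selenium.JavascriptExecutor;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.chrome.ChromeDriver;

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class BrowserLoadSketch {
    private static final int VIRTUAL_USERS = 5;              // illustrative concurrency

    public static void main(String[] args) throws InterruptedException {
        ExecutorService pool = Executors.newFixedThreadPool(VIRTUAL_USERS);
        for (int i = 0; i < VIRTUAL_USERS; i++) {
            pool.submit(BrowserLoadSketch::runScenario);     // one scenario per virtual user
        }
        pool.shutdown();
        pool.awaitTermination(10, TimeUnit.MINUTES);
    }

    private static void runScenario() {
        WebDriver driver = new ChromeDriver();               // a real browser per virtual user
        try {
            driver.get("http://localhost:8080/dashboard");   // hypothetical page under test
            // The browser fetches every dependent resource itself; read the
            // end-to-end page load time from the Navigation Timing API.
            Long loadMillis = (Long) ((JavascriptExecutor) driver).executeScript(
                    "return window.performance.timing.loadEventEnd"
                  + " - window.performance.timing.navigationStart;");
            System.out.println("page load: " + loadMillis + " ms");
        } finally {
            driver.quit();
        }
    }
}
```

Because each virtual user is a full browser, the load it generates includes every stylesheet, script and AJAX call the page really triggers.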

Well, what does that let you do?

For a test run, you can collect browser profiling data. We integrate with Google's Chrome browser to gather data such as JavaScript memory usage. This can help you investigate the in-browser components of your applications; something we think is missing from the tools out there today. In fact, it allows you to accurately track where time is being consumed in your application, whether on the browser or the server side.
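As a rough illustration of how this kind of data can be collected (the details here are our own sketch, not Soke's internals), you can poll Chrome's non-standard `performance.memory` API through WebDriver to watch the JavaScript heap over a run:

```java
import org.openqa.selenium.JavascriptExecutor;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.chrome.ChromeDriver;
import org.openqa.selenium.chrome.ChromeOptions;

public class JsHeapSampler {
    public static void main(String[] args) throws InterruptedException {
        ChromeOptions options = new ChromeOptions();
        // Without this flag Chrome quantises the heap figures; with it they are precise.
        options.addArguments("--enable-precise-memory-info");
        WebDriver driver = new ChromeDriver(options);
        try {
            driver.get("http://localhost:8080/dashboard");   // hypothetical page under test
            for (int sample = 0; sample < 10; sample++) {
                Long usedHeap = (Long) ((JavascriptExecutor) driver).executeScript(
                        "return window.performance.memory.usedJSHeapSize;");
                System.out.printf("sample %d: %.1f MB used JS heap%n",
                        sample, usedHeap / (1024.0 * 1024.0));
                Thread.sleep(1000);                          // one sample per second
            }
        } finally {
            driver.quit();
        }
    }
}
```

Plotting samples like these over a long scenario is exactly how memory growth such as that in the graph below shows up.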

*Figure: Performance Trend, a graph of the browser memory usage in an upcoming UX project that's still in the lab.*

Catching memory growth like this early is a huge benefit!

The quality of the load is also one of the main benefits. The tools available today typically model workloads of requests either as a static list of URLs with parameterised values, or as a set of requests captured while watching a user manually perform some tasks. While largely a fine approach, it doesn't fully capture the sequence of requests that actually happens in production, nor does it model the interdependencies between individual requests. These interdependencies are usually modelled by specifying that resource X must only be fetched after resource Y; not whether X is dependent on the result of fetching Y, nor whether any other requests may be fetched instead of Y. The only way to model accurate load is to use the same algorithms browsers use to fetch resources; it turns out that just using a browser is the best way to do that.
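To see how much of this the browser handles for you, here is an illustrative sketch (the target URL is hypothetical) that asks the browser, via the W3C Resource Timing API, exactly which resources it fetched for a page and when:

```java
import java.util.List;
import java.util.Map;

import org.openqa.selenium.JavascriptExecutor;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.chrome.ChromeDriver;

public class ResourceTimingDump {
    public static void main(String[] args) {
        WebDriver driver = new ChromeDriver();
        try {
            driver.get("http://localhost:8080/dashboard");   // hypothetical page under test
            // Ask the browser for the resources it fetched, with start times
            // and durations, via the W3C Resource Timing API.
            @SuppressWarnings("unchecked")
            List<Map<String, Object>> entries = (List<Map<String, Object>>)
                    ((JavascriptExecutor) driver).executeScript(
                            "return window.performance.getEntriesByType('resource')"
                          + ".map(function(e) {"
                          + "  return {name: e.name, start: e.startTime, duration: e.duration};"
                          + "});");
            for (Map<String, Object> entry : entries) {
                System.out.printf("%8.1f ms  %7.1f ms  %s%n",
                        ((Number) entry.get("start")).doubleValue(),
                        ((Number) entry.get("duration")).doubleValue(),
                        entry.get("name"));
            }
        } finally {
            driver.quit();
        }
    }
}
```

The ordering, overlap and timing of these requests come from the browser's own fetching algorithms; a hand-maintained URL list has to approximate all of this.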

You may well question the need for modelling workloads at this level, and for many types of simple application a list of URLs may indeed be sufficient. But with complex products such as Jira and Confluence, which have many layered components, aggressive caching and heavy data manipulation, the order and mix of requests may yield quite different patterns of performance. The devil really is in the detail. One application where this is of particular importance is manual GC tuning: simplistic workloads may suggest tuning values that are vastly suboptimal for a real workload.

That’s not the whole story!

While static lists of resources get you a long way towards acceptable workloads, and are reasonably straightforward to produce, they have their downsides. In any product with an ever-improving user interface, resources change and the structure of the pages changes. Things get optimised, rejigged and polished; unfortunately, this means that the static list of resources encoded in basic load generation frameworks becomes obsolete, often very quickly.

Here at Atlassian we have a number of ways to mitigate this (by the way, you can't eliminate the impact of change). The important component is the Selenium/WebDriver PageObject pattern. We extend this for our own internal unit testing, allowing us to use the same abstraction for those unit tests and for in-browser testing. The cute property here is that as the application changes, we naturally get the changes needed to support performance testing. It's not totally free: people have to fix their compilation failures in the performance test code 🙂
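For those unfamiliar with the pattern, here is a minimal PageObject sketch in the Selenium style; the page, locators and URL are hypothetical, not taken from our products or from Soke:

```java
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.WebElement;
import org.openqa.selenium.support.FindBy;
import org.openqa.selenium.support.PageFactory;

public class DashboardPage {
    @FindBy(id = "quicksearch")                        // hypothetical locator
    private WebElement searchBox;

    @FindBy(css = "#issue-table a.issue-link")         // hypothetical locator
    private WebElement firstIssueLink;

    private final WebDriver driver;

    public DashboardPage(WebDriver driver) {
        this.driver = driver;
        PageFactory.initElements(driver, this);        // wires up the @FindBy fields
    }

    public DashboardPage open() {
        driver.get("http://localhost:8080/dashboard"); // hypothetical URL
        return this;
    }

    public void searchFor(String term) {
        searchBox.sendKeys(term + "\n");
    }

    public void openFirstIssue() {
        firstIssueLink.click();
    }
}
```

Because both functional tests and performance scenarios script against methods like `searchFor`, a UI change breaks the build in one obvious place rather than silently invalidating a recorded list of URLs.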

Show me, already!!!

[youtube https://www.youtube.com/watch?v=oo8EhSALMh8&w=620&h=465]

I hope that gives you a flavour of the work we are doing here to build performance into our products. I'll follow up with another post detailing the guts of Soke and the technology that goes into building a framework like this.

As a final note, I'd like to point out the excellent frameworks that have inspired Soke's development:

* JMeter
* Grinder
* Gatling
