Continuous Performance Testing — Sensei Labs

Sensei Labs · 3 min read · Aug 20, 2020


Our Quality Engineering team’s goal is to keep shifting left: testing earlier in the lifecycle and giving our developers as much information about the software under test as early and as often as possible. The benefits are early bug detection, and with it better code quality, reduced development and testing costs, and more effective use of time and resources.

An initial step in building any Quality Engineering practice is to automate as much functional testing as possible. This is usually the low-hanging fruit, and it frees up the Quality Engineers to conduct higher-value exploratory testing and to test more complex scenarios. Once the functional test automation is defined and implemented, a team has many options for automating other types of testing, including security, usability, and performance testing.

For our practice, we found that performance testing of back-end code followed concrete, repeatable steps and yielded measurable results that could be compared from run to run. So we decided to focus first on automating our back-end performance testing.

The process

Our automated performance testing solution consists of a C#-based application that queries our test databases and constructs sample datasets. These datasets serve as input to our JMeter scripts, which are generated dynamically based on the endpoint we’re targeting, the number of simultaneous users, and the number of rows in the dataset. Once we have our input data and our test script, we execute the script against the developer’s changes while also running it against an instance of our production code. When the tests are complete, we not only compare the performance metrics between the two branches but, if applicable, also compare the content of the responses that are returned. This lets us conduct functional testing as well as performance testing, all within the continuous integration pipeline.
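To make that flow concrete, here’s a minimal C# sketch of the dual-run step: it executes the same generated JMeter plan against a branch build and a production baseline, then compares a latency percentile. The plan name, JMeter properties, hostnames, and the 10% threshold are illustrative assumptions, not our actual configuration.

```csharp
// Minimal sketch (not our actual implementation): run one generated JMeter
// plan against a branch build and a production baseline, compare p95 latency.
using System;
using System.Diagnostics;
using System.Globalization;
using System.IO;
using System.Linq;

class PerfCompare
{
    static void RunJMeter(string targetHost, string resultsFile)
    {
        // -J properties parameterize the generated plan (target host, load, data)
        var psi = new ProcessStartInfo
        {
            FileName = "jmeter",
            Arguments = $"-n -t generated-plan.jmx -l {resultsFile} " +
                        $"-Jtarget.host={targetHost} -Jusers=25 -Jdataset=rows.csv",
            UseShellExecute = false
        };
        using var p = Process.Start(psi)!;
        p.WaitForExit();
    }

    static double P95(string resultsFile)
    {
        // JMeter's CSV results put the response time in the "elapsed" column (index 1)
        var elapsed = File.ReadLines(resultsFile)
            .Skip(1) // header row
            .Select(line => double.Parse(line.Split(',')[1], CultureInfo.InvariantCulture))
            .OrderBy(ms => ms)
            .ToArray();
        return elapsed[(int)Math.Ceiling(elapsed.Length * 0.95) - 1];
    }

    static void Main()
    {
        RunJMeter("branch.internal.example.com", "branch.jtl");
        RunJMeter("prod-mirror.internal.example.com", "baseline.jtl");

        double branch = P95("branch.jtl");
        double baseline = P95("baseline.jtl");
        Console.WriteLine($"p95: branch {branch:F0} ms vs baseline {baseline:F0} ms");

        // Fail the pipeline if the branch regresses by more than 10% (illustrative)
        if (branch > baseline * 1.10)
            Environment.Exit(1);
    }
}
```

In a real pipeline the two runs and the comparison would likely be separate CI steps, but the overall flow is the same.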

As both the code and the suite of tests grow, the execution time of our test suite increases, which conflicts with our goal of providing results as quickly as possible. To manage this, the performance tests are mapped to the files they cover; on each commit, only the tests affected by the changed files are executed, keeping the time to a result as short as possible.
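As a sketch of how this change-based selection can work, the following C# example asks git for the files touched in a commit range and maps path prefixes to the performance suites that cover them. The paths, suite names, and mapping are invented for illustration.

```csharp
// Sketch of change-based test selection: changed files -> suites to run.
using System;
using System.Collections.Generic;
using System.Diagnostics;
using System.Linq;

class TestSelector
{
    // Path prefix -> performance suites that exercise that area (illustrative)
    static readonly Dictionary<string, string[]> SuitesByPath = new()
    {
        ["src/Api/Orders/"]    = new[] { "orders-read", "orders-write" },
        ["src/Api/Reporting/"] = new[] { "reporting-queries" },
    };

    static IEnumerable<string> ChangedFiles(string commitRange)
    {
        // Ask git which files the commit range touches
        var psi = new ProcessStartInfo("git", $"diff --name-only {commitRange}")
        {
            RedirectStandardOutput = true,
            UseShellExecute = false
        };
        using var p = Process.Start(psi)!;
        string output = p.StandardOutput.ReadToEnd();
        p.WaitForExit();
        return output.Split('\n',
            StringSplitOptions.RemoveEmptyEntries | StringSplitOptions.TrimEntries);
    }

    static void Main(string[] args)
    {
        var suites = ChangedFiles(args[0]) // e.g. "origin/main...HEAD"
            .SelectMany(file => SuitesByPath
                .Where(kv => file.StartsWith(kv.Key))
                .SelectMany(kv => kv.Value))
            .Distinct();

        foreach (var suite in suites)
            Console.WriteLine(suite); // a downstream step runs only these suites
    }
}
```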

In addition to testing individual branches, we execute our entire suite of performance tests periodically throughout each development sprint. This regression testing lets us measure the performance of the whole system once all branches have been merged.

We store the results of all our performance test runs. A separate process then analyzes the stored data and reports daily on each run in comparison to previous runs across several reporting periods. This allows us to stop slow-performing code from being released into production, to view the performance trendline of each endpoint over time, and to take corrective measures, if needed, before our customers experience degraded performance.
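That analysis step might look something like this C# sketch: group stored runs by endpoint, compare today’s result against 7-day and 30-day averages, and flag endpoints drifting above their trend. The record shape, reporting windows, and 15% threshold are assumptions for the example; the real data would come from wherever the run results are stored.

```csharp
// Sketch of the daily trend report over stored performance runs.
using System;
using System.Collections.Generic;
using System.Linq;

// One stored measurement for one endpoint (shape is an assumption for the example)
record PerfRun(string Endpoint, DateTime Date, double P95Ms);

class TrendReport
{
    static void Main()
    {
        List<PerfRun> runs = LoadRuns(); // placeholder for the real stored results
        var today = DateTime.UtcNow.Date;

        foreach (var endpoint in runs.GroupBy(r => r.Endpoint))
        {
            double latest   = Window(endpoint, today.AddDays(1), days: 1); // today's runs
            double weekAvg  = Window(endpoint, today, days: 7);
            double monthAvg = Window(endpoint, today, days: 30);

            Console.WriteLine($"{endpoint.Key}: today {latest:F0} ms, " +
                              $"7-day {weekAvg:F0} ms, 30-day {monthAvg:F0} ms");

            // Flag endpoints drifting above their 30-day trend (15% is illustrative)
            if (latest > monthAvg * 1.15)
                Console.WriteLine($"  WARNING: {endpoint.Key} is trending slower");
        }
    }

    // Average p95 over the window of `days` ending just before `end`
    static double Window(IEnumerable<PerfRun> runs, DateTime end, int days) =>
        runs.Where(r => r.Date >= end.AddDays(-days) && r.Date < end)
            .Select(r => r.P95Ms)
            .DefaultIfEmpty(double.NaN)
            .Average();

    static List<PerfRun> LoadRuns() => new(); // e.g. read from the results database
}
```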

The outcome

We’ve seen some great, measurable benefits since implementing automated performance testing! For example, we recently completed a comparison of ten proposed code changes, measuring the performance and functionality of each branch against the others. We could accurately see which code change had the least impact, rejecting nine of the branches because they didn’t meet our performance requirements. This testing would normally have taken us a day or more to complete; instead, we provided results within a couple of hours, without taking up any manual testing time!

The future

The next step for our team will be to integrate application monitoring (CPU usage, memory usage) into the test runs. This will give us an even richer dataset and enable us to proactively prevent issues we see in testing. We’ll also explore how to apply the lessons learned from automating back-end performance testing to front-end performance testing, so that our team can benefit from these gains across our entire development lifecycle!

About the author

Pawel Popowicz

Pawel thrives on working closely with development teams, providing them with tools and processes to improve software quality. He’s always striving for the highest quality, which he believes comes about when Quality Engineers are involved as early and often as possible in the product design and software development process.

Originally published at https://www.senseilabs.com on August 20, 2020.


Sensei Labs builds smarter workplace solutions that your people will love, powered by data and grounded in experience.