Metrics and Analysis—Part 1: Mean and Standard Deviation

Let’s cut right to the chase and say it plainly: mean and standard deviation aren’t useful in load testing.

Most of our time is spent looking at metrics, so we need to make sure that time is spent as efficiently as possible. With that in mind, which metrics should we use to get a clear view of what is happening at any point in time? Are they actually useful? This series is all about digging into common metrics, understanding their pitfalls, and avoiding missed changes in your application’s behavior while load testing.

Definitions

Mean

The mean, or arithmetic average, describes the central value of a data set. It is defined as the sum of all values divided by the number of values. Hence, for n values:

    \[\mu=\frac{x_1+x_2+...+x_n}{n}=\frac{1}{n}\sum_{i=1}^{n} x_i\]

The arithmetic average, also written \bar{x}, is a summary of central tendency. It is easy to understand, cheap to compute, and, as a result, widely used.
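
For instance, here is a minimal sketch of how you might compute it by hand in Python; the response times below are made up for illustration:

    # Hypothetical response times, in milliseconds.
    response_times_ms = [210, 195, 230, 250, 205, 198, 460, 215]

    # The arithmetic average: the sum of the values divided by their count.
    mean = sum(response_times_ms) / len(response_times_ms)
    print(f"mean = {mean:.1f} ms")  # -> mean = 245.4 ms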

Variance

The variance is a bit more involved. It describes how much the values are spread around the mean. It is obtained by taking, for each value, its squared difference from the arithmetic average, then averaging those squared differences over the number of values:

    \[\sigma^2=\frac{1}{n}\sum_{i=1}^{n}(x_i-\mu)^2\]

Before we dive into its actual meaning, let’s move right on to the standard deviation.

Standard deviation

The standard deviation carries the same information as the variance, except it is expressed in the same unit as the mean, whereas the variance is expressed in squared units. It is simply the square root of the variance, so you can use either one as long as you are rigorous about which unit you are working in:

    \[\sigma=\sqrt{\sigma^2}\]

It is easier to think about the standard deviation as a description of variability rather than in terms of its formula. In fact, this is all the mathematics we’ll see today. Hope you’re okay.
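
To make both formulas concrete, here is a small Python sketch that computes the variance and the standard deviation of the same hypothetical sample, and checks the results against the statistics module from the standard library:

    import statistics

    # Hypothetical response times, in milliseconds.
    samples = [210, 195, 230, 250, 205, 198, 460, 215]

    n = len(samples)
    mean = sum(samples) / n

    # Variance: the average of the squared deviations from the mean.
    variance = sum((x - mean) ** 2 for x in samples) / n

    # Standard deviation: the square root of the variance, expressed
    # in the same unit as the mean (here, milliseconds).
    std_dev = variance ** 0.5

    print(f"variance = {variance:.1f} ms^2, std dev = {std_dev:.1f} ms")

    # Sanity check against the standard library (population variants).
    assert abs(variance - statistics.pvariance(samples)) < 1e-9
    assert abs(std_dev - statistics.pstdev(samples)) < 1e-9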

Distributions with the same arithmetic average can sometimes be told apart by their standard deviation:

Two distributions with the same average, but different standard deviations
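
A tiny example makes this concrete. Both data sets below are made up, and both average 500 ms, yet one is far more spread out than the other:

    import statistics

    # Two made-up sets of response times (ms) sharing the same mean.
    steady = [480, 490, 500, 510, 520]
    erratic = [100, 300, 500, 700, 900]

    for name, data in [("steady", steady), ("erratic", erratic)]:
        print(name, statistics.mean(data), round(statistics.pstdev(data), 1))

    # steady  500 14.1
    # erratic 500 282.8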

We need to go deeper

Sadly, when using variance and/or standard deviation, you need to know which kind of distribution you are dealing with. Knowing how spread out your data set is around the mean doesn’t account for much if you have no idea what the data looks like in the first place. Worse, how do you make sense of the standard deviation if your data is split across several modes, i.e., a multimodal distribution, such as this one:

Multimodal distribution: its arithmetic average doesn’t tell much about its shape

Such a data set could be split into multiple sub data sets and studied individually. Arguably, that would be cumbersome to do, and it would defeat our initial purpose of saving time when analyzing our data.
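
As an illustration of the problem (the numbers below are invented, but the pattern is typical of a cache hit versus cache miss split), most requests are either fast or slow, yet the mean lands in a region where almost no request actually falls:

    import statistics

    # Hypothetical bimodal response times (ms): a fast mode (cache hits)
    # and a slow mode (cache misses), with nothing in between.
    fast = [20, 22, 25, 21, 23, 24, 22, 25]
    slow = [980, 1010, 995, 1005, 990]
    response_times_ms = fast + slow

    mean = statistics.mean(response_times_ms)
    std_dev = statistics.pstdev(response_times_ms)
    print(f"mean = {mean:.0f} ms, std dev = {std_dev:.0f} ms")

    # The mean comes out around 400 ms, a latency no request ever exhibited,
    # and the standard deviation spans both modes without revealing either.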

Furthermore, what happens when the mean and standard deviation are the same? Does that mean the data sets are the same? In fact, it is easy to craft distributions with this kind of property:

Multiple distributions, sharing the same arithmetic average and standard deviation
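
Here is a hand-crafted example with made-up response times, where two visibly different data sets share both statistics exactly:

    import statistics

    # Two made-up sets of response times (ms): one clustered at the extremes,
    # the other more evenly spread, yet they share the same mean and standard deviation.
    clustered = [200, 200, 800, 800]
    spread = [80, 440, 560, 920]

    for name, data in [("clustered", clustered), ("spread", spread)]:
        print(name, statistics.mean(data), statistics.pstdev(data))

    # Both print a mean of 500 ms and a (population) standard deviation of 300 ms.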

Some people went even further and squashed all sorts of shapes sharing the same average and standard deviation, on both axes, into a single animation:


Source: https://www.autodeskresearch.com/publications/samestats

As you understand by now, variance and standard deviation only really make sense for Gaussian (normal) distributions, which are rarely encountered in the context of load testing. The most common cases are multimodal distributions, outliers or extreme values, long tails or skewed distributions, and so on.

The arithmetic average is very sensitive to outliers, and it won’t tell us much about the shape of the distribution anyway. We will need a more powerful tool to deal with all these cases, which could be called extreme if they were not so common!
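
To see just how sensitive it is, here is a quick sketch with hypothetical numbers: a single pathological request is enough to drag the mean far away from what virtually every user experienced.

    import statistics

    # 99 requests answered in 100 ms, plus one that hit a 30-second timeout.
    response_times_ms = [100] * 99 + [30_000]

    print(statistics.mean(response_times_ms))   # 399 ms: four times the typical latency
    print(statistics.pstdev(response_times_ms)) # ~2975 ms: dominated by a single sample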

Then, why are they used?

As mentioned earlier, these metrics are easy to use and to compute. However, they are only effective when the distribution is well behaved, i.e., symmetric and unimodal. To say the least, that is rarely the case in the world of load testing.

Next time we’ll talk about metrics that are more robust and can handle these edge cases.