
From a statistics point of view, the standard deviation of a set of identical values should be 0. For arr1 the result is 0 as expected, but for arr2 it is 1.3877787807814457e-17: very small, but not 0, which leads to issues with e.g. zscore.

Is this proper behavior or a weird bug?

import numpy as np

arr1 = [20.0] * 3
#[20.0, 20.0, 20.0]

arr2 = [-0.087] * 3
#[-0.087, -0.087, -0.087]

np.std(arr1) #0.0
np.std(arr2) #1.3877787807814457e-17
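
For context, the downstream zscore effect mentioned above can be reproduced directly. A minimal sketch, assuming scipy is available; the exact outputs are platform-dependent:

from scipy import stats

stats.zscore(arr1) #array([nan, nan, nan]), since 0/0 raises an invalid-value warning
stats.zscore(arr2) #values of magnitude ~1, since the tiny deviations are divided by the equally tiny std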
  • That is a floating-point error. In the operations performed inside the std function, the floating-point values are not precise enough to represent the theoretical result exactly, which is why the std comes out as a very small number. See np.mean(arr2): it is -0.08700000000000001. Commented Sep 7, 2020 at 7:18
  • I can't recall an example, but the issue also occurs for large numbers. Commented Sep 7, 2020 at 7:32
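
The first comment points at the root cause: the computed mean of arr2 is not exactly -0.087, so the per-element deviations are tiny but nonzero, and the std formula turns them into a tiny positive number. A minimal check (float64 throughout; exact digits may vary by platform):

a2 = np.array(arr2)
m = a2.mean()
m #-0.08700000000000001, not exactly -0.087
a2 - m #three tiny nonzero deviations, on the order of 1e-17
np.sqrt(np.mean((a2 - m)**2)) #reproduces np.std(arr2)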

2 Answers


The NumPy documentation for std states:

The standard deviation is the square root of the average of the squared deviations from the mean, i.e., std = sqrt(mean(abs(x - x.mean())**2)).

The average squared deviation is normally calculated as x.sum() / N, where N = len(x). If, however, ddof is specified, the divisor N - ddof is used instead. In standard statistical practice, ddof=1 provides an unbiased estimator of the variance of the infinite population. ddof=0 provides a maximum likelihood estimate of the variance for normally distributed variables. The standard deviation computed in this function is the square root of the estimated variance, so even with ddof=1, it will not be an unbiased estimate of the standard deviation per se.

Note that, for complex numbers, std takes the absolute value before squaring, so that the result is always real and nonnegative.

For floating-point input, the std is computed using the same precision the input has. Depending on the input data, this can cause the results to be inaccurate, especially for float32 (see example below). Specifying a higher-accuracy accumulator using the dtype keyword can alleviate this issue.

a = np.zeros((2, 512*512), dtype=np.float32)
a[0, :] = 1.0
a[1, :] = 0.1
np.std(a)
>>>0.45000005

but for float64:

a = np.zeros((2, 512*512), dtype=np.float64) 
a[0, :] = 1.0 
a[1, :] = 0.1 
np.std(a)
>>>0.45 
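
As the quoted documentation suggests, one can also keep the float32 data and only raise the accumulator precision via the dtype keyword. A sketch; the result should land much closer to 0.45 than the plain float32 call, though the exact digits may differ:

a = np.zeros((2, 512*512), dtype=np.float32)
a[0, :] = 1.0
a[1, :] = 0.1
np.std(a, dtype=np.float64) #accumulated in float64: approximately 0.45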

Comments

  • Should np.std check for equality first and return 0.0 if all values are equal? I think that would avoid the rounding issue in such cases.
  • @QuantChristo Indeed, it does not seem to check for equality.
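
One way to implement the equality check suggested in the comments is a thin wrapper around np.std. This is a sketch; safe_std is a hypothetical helper, not part of NumPy:

def safe_std(x, **kwargs):
    # Hypothetical wrapper: a constant array has standard deviation exactly 0.0,
    # so short-circuit that case before deferring to np.std.
    x = np.asarray(x)
    if x.size > 0 and np.all(x == x.flat[0]):
        return 0.0
    return np.std(x, **kwargs)

safe_std(arr2) #0.0
safe_std([1.0, 2.0]) #0.5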

I tried it and got the same results. According to this GitHub issue, it is a bug in NumPy; it seems to happen when you use small numbers: https://github.com/numpy/numpy/issues/8207

