When moving to NumPy from Matlab, you'll probably want to keep the docs for both handy. They're similar, but they often differ in small yet important details. In this case, the two calculate the standard deviation differently by default. I would strongly recommend checking the documentation for anything you use that calculates a standard deviation, whether a pocket calculator or a programming language, since the default is not (sorry!) standardized.
Numpy STD: http://docs.scipy.org/doc/numpy/reference/generated/numpy.std.html
Matlab STD: http://www.mathworks.com/help/matlab/ref/std.html
The NumPy docs for std are a bit opaque, IMHO, especially considering that NumPy docs are generally fairly clear. If you read far enough: "The average squared deviation is normally calculated as x.sum() / N, where N = len(x). If, however, ddof is specified, the divisor N - ddof is used instead. In standard statistical practice, ddof=1 provides an unbiased estimator of the variance of the infinite population." (In English: the default is the population std dev; set ddof=1 for the sample std dev.)
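To make the divisor business concrete, here's a quick sketch (the numbers are just an illustrative array I picked):

    import numpy as np

    x = np.array([1.0, 2.0, 3.0, 4.0])
    N = len(x)

    # Default (ddof=0): squared deviations divided by N -> population std dev
    print(np.std(x))                                       # 1.118033988749895
    print(np.sqrt(((x - x.mean()) ** 2).sum() / N))        # same value

    # ddof=1: divided by N - 1 -> sample std dev
    print(np.std(x, ddof=1))                               # 1.2909944487358056
    print(np.sqrt(((x - x.mean()) ** 2).sum() / (N - 1)))  # same value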
OTOH, the Matlab docs make clear the difference that's tripping you up:
"There are two common textbook definitions for the standard deviation s of a data vector X. [equations omitted] n is the number of elements in the sample. The two forms of the equation differ only in n - 1 versus n in the divisor."
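For reference, these are the two standard textbook forms the Matlab page is contrasting (filled in here from the usual definitions, since the equations were omitted above):

    s = \sqrt{ \frac{1}{n-1} \sum_{i=1}^{n} (x_i - \bar{x})^2 }
    \quad \text{versus} \quad
    s = \sqrt{ \frac{1}{n} \sum_{i=1}^{n} (x_i - \bar{x})^2 }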
So, by default, Matlab calculates the sample standard deviation (N - 1 in the divisor, which makes the result slightly larger, to compensate for the fact that you're working with a sample), and NumPy calculates the population standard deviation (N in the divisor). You use the ddof parameter to switch to the sample standard deviation, or to any other divisor you want (which goes beyond my statistics knowledge).
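Side by side, that looks like this (a minimal sketch; the Matlab calls in the comments assume Matlab's weight argument, where std(x, 1) normalizes by N):

    import numpy as np

    x = np.array([2.0, 4.0, 4.0, 4.0, 5.0, 5.0, 7.0, 9.0])

    # NumPy's default: population std dev, N in the divisor.
    # Matlab equivalent: std(x, 1)
    print(np.std(x))           # 2.0

    # Matlab's default std(x): sample std dev, N - 1 in the divisor.
    # NumPy equivalent: pass ddof=1
    print(np.std(x, ddof=1))   # ~2.1381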
Lastly, it won't help with this particular problem, but you'll probably find it helpful at some point. Link