Is there a way to vectorize an operation that takes several numpy arrays and puts them into a list of dictionaries?
Here's a simplified example. The real scenario might involve more arrays and more dictionary keys.
import numpy as np
x = np.arange(10)
y = np.arange(10, 20)
z = np.arange(100, 110)
print([dict(x=x[ii], y=y[ii], z=z[ii]) for ii in range(10)])
I might have thousands or hundreds of thousands of iterations in that range call. All the manipulation to create x, y, and z is vectorized (my real case is not as simple as the above). So there's only one for loop left to get rid of, and I expect eliminating it would yield a large speedup.
I've tried using map with a function that builds the dict, and all sorts of other workarounds. It seems the Python for loop is the slow part (as usual). I'm more or less stuck with dictionaries because of a pre-existing API requirement. That said, solutions that avoid dicts, using record arrays or something similar, would still be interesting to see, even though I don't think they will work with the existing API.
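For comparison, here is a minimal sketch of the record-array alternative mentioned above (which, as noted, may not satisfy a dict-based API). A NumPy structured array packs the columns into one array whose elements support field access by name, with no Python-level loop at construction time:

```python
import numpy as np

x = np.arange(10)
y = np.arange(10, 20)
z = np.arange(100, 110)

# Allocate one structured array and fill each field with a vectorized copy.
rec = np.empty(10, dtype=[("x", x.dtype), ("y", y.dtype), ("z", z.dtype)])
rec["x"], rec["y"], rec["z"] = x, y, z

# Each element behaves somewhat like a record: rec[3]["x"] == 3, rec[3]["z"] == 103.
print(rec[3])
```

Whether this helps depends entirely on whether the consuming API can accept structured-array rows in place of dicts.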
Comments:
- z=z[ii], good catch!
- [dict(x=x_, y=y_, z=z_) for x_, y_, z_ in zip(x, y, z)] is as vectorized as pure Python gets.
- for in list comprehensions has little to do with Python's general for statement. The former has a lower-level implementation and is faster than the latter.
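The zip-based comprehension suggested in the comments, written out as a self-contained sketch. It drops the index variable entirely by iterating the three arrays in lockstep; note the dict values are NumPy scalars, so if the downstream API needs plain Python ints, converting with tolist() first keeps that conversion vectorized as well:

```python
import numpy as np

x = np.arange(10)
y = np.arange(10, 20)
z = np.arange(100, 110)

# zip walks the three arrays in lockstep; the comprehension body runs
# in optimized bytecode rather than an explicit indexed for loop.
dicts = [dict(x=x_, y=y_, z=z_) for x_, y_, z_ in zip(x, y, z)]

# Optional: convert to plain Python ints in one vectorized pass per array.
dicts_py = [dict(x=x_, y=y_, z=z_)
            for x_, y_, z_ in zip(x.tolist(), y.tolist(), z.tolist())]

print(dicts[0], dicts_py[0])
```

This is still a Python-level loop under the hood, so the speedup over the indexed version is real but modest; the dict construction itself remains the dominant cost.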