I would like to convert a list with shape (1200, 140, 150, 130) to a numpy array, but the standard numpydata = np.array(mylist) uses too much memory.
Is there any less memory-consuming way to do this?
If there's memory for the final result, but the np.array internals use too much memory, you might get around that by processing the list in blocks. For example:
In [236]: res = np.zeros((10,3,4),int)
In [237]: alist = np.random.randint(0,10,(10,3,4)).tolist()
In [238]: for i,row in enumerate(alist):
     ...:     res[i] = row
In [240]: np.allclose(res, np.array(alist))
Out[240]: True
For small arrays this iteration will be slower, but with large ones, memory management issues might outweigh the iteration costs.
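Applied at the question's scale, the same pattern looks like this (a minimal sketch; mylist stands for the nested list from the question, and float64 is an assumption since no dtype is stated):

import numpy as np

# mylist: the (1200, 140, 150, 130) nested list from the question

# preallocate the full result once (~26 GB as float64)
res = np.zeros((1200, 140, 150, 130))

# each (140,150,130) block is converted and copied on its own,
# so there is never one big temporary copy of the whole list
for i, block in enumerate(mylist):
    res[i] = block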
Can you create np.zeros((1200,140,150,130)), that is, an array of the required size? That tests whether there's memory for your result. It's certainly possible that np.array makes some temporary copy(s) of the input, since it has to read the whole thing to figure out the shape and eventual dtype (and possibly convert elements to a common dtype). So there's a lot going on in compiled code. Iterating on the list and assigning individual (140,150,130) arrays to the zeros might reduce the memory use. With large arrays there's often a trade-off between iteration and memory management.
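For that memory check, a quick sketch of the arithmetic (the dtypes here are assumptions; use whichever your data actually needs):

import numpy as np

shape = (1200, 140, 150, 130)
n = np.prod(shape, dtype=np.int64)   # 3,276,000,000 elements

# footprint of the final array alone, before any temporaries
print(n * np.dtype(np.float64).itemsize / 1e9)   # ~26.2 GB
print(n * np.dtype(np.float32).itemsize / 1e9)   # ~13.1 GB

# if this allocation already fails, no conversion strategy will help
res = np.zeros(shape, dtype=np.float32)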