From the sys.getsizeof docs:
Only the memory consumption directly attributed to the object is
accounted for, not the memory consumption of objects it refers to.
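That shallowness is easy to demonstrate with any container; a minimal sketch, independent of NumPy (the exact byte counts below are typical for a 64-bit CPython build, not guaranteed):

import sys

payload = bytearray(10**6)        # roughly 1 MB of actual data
box = [payload]
print(sys.getsizeof(payload))     # ~1,000,057 bytes: the bytearray counts its own buffer
print(sys.getsizeof(box))         # ~64 bytes: just the list header plus one pointer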
sys.getsizeof returns the memory consumption of the list object itself, not including the objects contained by the list. A single one of your arrays:
In [3]: arr = np.zeros(dtype=np.float64, shape=(60, 2094))
In [4]: arr.size
Out[4]: 125640
In [5]: arr.nbytes
Out[5]: 1005120
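nbytes is simply the element count times the per-element size, so the numbers above can be checked by hand; a quick sanity check, recreating the same array:

import numpy as np

arr = np.zeros(dtype=np.float64, shape=(60, 2094))
assert arr.size == 60 * 2094                    # 125640 elements
assert arr.nbytes == arr.size * arr.itemsize    # 125640 * 8 = 1005120 bytes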
There is always some overhead for being a Python object; here the ndarray wrapper adds about 100 bytes on top of the raw data buffer:
In [6]: sys.getsizeof(arr)
Out[6]: 1005232
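That roughly-100-byte figure is just the difference between the two numbers; a quick way to see it (the exact value depends on the NumPy build):

import sys
import numpy as np

arr = np.zeros(dtype=np.float64, shape=(60, 2094))
wrapper_overhead = sys.getsizeof(arr) - arr.nbytes
print(wrapper_overhead)    # 112 bytes in the session above; varies by NumPy version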
The actual memory consumption of one array, then, is about:
In [7]: arr.nbytes*1e-9
Out[7]: 0.00100512 # one megabyte
And if we had 2940 of them, their raw data alone would take:
In [8]: arr.nbytes*2940*1e-9
Out[8]: 2.9550528000000003 # almost 3 gigabytes
If I actually put these all in a list:
In [13]: alist = []
In [14]: alist.append(arr)
In [15]: for _ in range(2940 - 1):
...: alist.append(arr.copy())
...:
The list object itself is essentially backed by an array of PyObject * pointers. On my machine (64-bit) a pointer is one machine word, i.e. 64 bits or 8 bytes. So:
In [19]: sys.getsizeof(alist)
Out[19]: 23728
In [20]: 8*len(alist) # 8 bytes per pointer
Out[20]: 23520
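The small gap between 23728 and 23520 is the list header plus the spare slots CPython over-allocates so that append stays amortized O(1); a rough sketch (assuming sys.getsizeof([]) is 56, as on typical 64-bit builds):

import sys

header = sys.getsizeof([])                       # empty-list overhead, typically 56 bytes
allocated = (sys.getsizeof(alist) - header) // 8
print(allocated, len(alist))                     # a few more allocated slots than elements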
So sys.getsizeof is only counting the array of pointers plus the list's own object overhead, which doesn't come close to the nearly 3 gigabytes consumed by the array objects being pointed to.
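If you want a total that does include the arrays, you have to walk the list yourself; a minimal sketch, assuming every element is an ndarray that owns its buffer (not a view):

import sys

buffers = sum(a.nbytes for a in alist)            # raw float64 data: ~2.96 GB
wrappers = sum(sys.getsizeof(a) for a in alist)   # buffers plus per-array object overhead
total = sys.getsizeof(alist) + wrappers           # pointer array plus everything it points to
print(buffers * 1e-9, total * 1e-9)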
Lo and behold, stacking everything into one big array gives the same total (note that np.array(alist) copies the data, so both copies exist in memory while it runs):
In [21]: arr = np.array(alist)
In [22]: arr.shape
Out[22]: (2940, 60, 2094)
In [23]: arr.size
Out[23]: 369381600
In [24]: arr.nbytes
Out[24]: 2955052800
In [25]: arr.nbytes*1e-9
Out[25]: 2.9550528000000003
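As a sanity check, the element and byte counts are exactly 2940 times the single-array figures from above:

assert 2940 * 125640 == 369381600          # elements
assert 2940 * 1005120 == 2955052800        # bytes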
From the question:
... when I increase the data size, but don't seem to be getting a "Memory" error for the list of arrays. I also checked the memory size with task manager and it seems to be true.
sys.getsizeof only gives you the size of the list, not including the objects in the list. That is the source of the discrepancy.
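Finally, if you want a number closer to what Task Manager shows, measure the allocations rather than the container; a rough sketch using the standard-library tracemalloc (assuming a NumPy recent enough, 1.13+, to report its buffers to tracemalloc):

import tracemalloc
import numpy as np

tracemalloc.start()
alist = [np.zeros((60, 2094)) for _ in range(2940)]
current, peak = tracemalloc.get_traced_memory()   # bytes currently allocated / peak bytes
print(current * 1e-9, peak * 1e-9)                # ~2.96 GB, matching the nbytes arithmetic
tracemalloc.stop()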