I am working with code that involves creating a large, relatively sparse matrix and solving a least-squares minimization problem with it. However, I have been getting memory errors when I run my code, even though the matrix does not seem large enough to strain my system (roughly 27000 by 2100 in my test cases).
I have created a simplified example that has the same storage requirements as my test case and also produces a memory error (note that the "sparse" matrix is not actually very sparse, since I am testing at a smaller scale than the intended dataset):
import numpy as np
from scipy import sparse

# Build a 27000 x 3000 matrix one column at a time
BM = sparse.lil_matrix((27000, 3000))
for i in range(3000):
    # Each column comes from a 30x30x30 block with values below 0.1 zeroed out
    local_mat = np.random.rand(30, 30, 30)
    local_mat[local_mat < 0.1] = 0
    vals = local_mat.ravel()
    nonzero = vals.nonzero()[0]
    BM[nonzero, i] = vals[nonzero]
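For comparison, the same matrix can also be assembled directly in CSC form (this is a sketch of the same construction, not my original code), which avoids lil_matrix's per-entry Python objects:

import numpy as np
from scipy import sparse

data, indices, indptr = [], [], [0]
for i in range(3000):
    local_mat = np.random.rand(30, 30, 30)
    local_mat[local_mat < 0.1] = 0
    vals = local_mat.ravel()
    nz = vals.nonzero()[0]
    indices.append(nz)          # row indices of this column's nonzeros
    data.append(vals[nz])       # the nonzero values themselves
    indptr.append(indptr[-1] + len(nz))  # running column pointer
BM = sparse.csc_matrix(
    (np.concatenate(data), np.concatenate(indices), np.array(indptr)),
    shape=(27000, 3000),
)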
If I change the parameters so that the sparse matrix has more zero entries, I still get a memory error from scipy.optimize.lsq_linear after filling the matrix and running the minimization.
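For concreteness, the solver call looks roughly like this (the right-hand side b below is just a placeholder vector, and lsq_solver='lsmr' is the option that, as I understand it, keeps the computation sparse):

import numpy as np
from scipy.optimize import lsq_linear

b = np.random.rand(27000)  # placeholder right-hand side for illustration
res = lsq_linear(BM.tocsr(), b, lsq_solver='lsmr')
x = res.x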
It goes without saying that I get a memory error if I use a dense matrix as well.
I have tried increasing the size of my paging file to 2-4 gigabytes, but that hasn't helped, even though it seems like this should not be that memory-intensive in the first place.
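Some back-of-the-envelope numbers (my own rough estimates, not measurements) suggest the LIL format itself may be the problem:

rows, cols, density = 27000, 3000, 0.9
dense_bytes = rows * cols * 8          # plain float64 array: ~648 MB
nnz = int(rows * cols * density)       # ~72.9 million stored entries
# lil_matrix keeps boxed Python floats and ints in per-row lists;
# ~60 bytes per stored entry is a rough guess at that overhead
lil_bytes = nnz * 60                   # ~4.4 GB
print(dense_bytes / 1e9, lil_bytes / 1e9)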
BM.nnz is 729012, and the last iteration set 24182 of a possible 27000 elements. This is not sparse at all! That matches the construction: uniform values below 0.1 are zeroed, so each column keeps about 0.9 × 27000 ≈ 24300 entries, right in line with the observed 24182.