I am making a 3D array of zeros and then filling it, but due to the size of the NumPy array it runs into memory issues even with 64 GB of RAM. Am I doing it wrong?
X_train_one_hot has shape (47827, 30, 20000) and encInput has shape (47827, 30, 200).
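For scale, at NumPy's default float64 dtype these arrays are far beyond 64 GB on their own (back-of-envelope arithmetic). This also hints at why the np.zeros call itself can appear to succeed: zero pages are typically committed lazily by the OS and only consume physical memory once the loop writes to them.

# element count x 8 bytes (float64), in GB
print(47827 * 30 * 20200 * 8 / 1e9)  # ~231.9 GB for X_train_one_hot_shifted
print(47827 * 30 * 20000 * 8 / 1e9)  # ~229.6 GB for X_train_one_hot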
import numpy as np

X_train_one_hot_shifted = np.zeros((X_train_one_hot.shape[0], 30, 20200))
# X_train_one_hot.shape[0] == 47827
for j in range(X_train_one_hot.shape[0]):
    current = np.zeros((30, 20000))
    current[0][0] = 1                       # start-of-sequence marker at t=0
    current[1:] = X_train_one_hot[j][0:29]  # shift the one-hot rows down one timestep
    # print(current.shape, encInput[j].shape)
    combined = np.concatenate((current, encInput[j]), axis=1)
    X_train_one_hot_shifted[j] = combined
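For reference, the loop above could also be written as three whole-array slice assignments, which at least avoids the per-iteration current/combined temporaries and the extra copy np.concatenate makes. This is only a sketch of an equivalent fill; it does not shrink the preallocated array itself.

# Same result as the loop, filling the preallocated array in place.
X_train_one_hot_shifted[:, 0, 0] = 1                                 # start marker
X_train_one_hot_shifted[:, 1:, :20000] = X_train_one_hot[:, :29, :]  # shifted one-hot block
X_train_one_hot_shifted[:, :, 20000:] = encInput                     # encoder features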
Any ideas to reduce memory consumption? Another interesting thing: X_train_one_hot has almost the same shape, yet creating it does not throw any error.
EDIT: The program gets killed inside the for loop with the error message:
TERM_MEMLIMIT: job killed after reaching LSF memory usage limit.
Also, most of the array is sparse, since X_train_one_hot is a one-hot encoding over a vocabulary of size 20000.
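Given that sparsity, one direction would be to never materialize the full dense array at all. Below is a hypothetical sketch, assuming the underlying token indices can be recovered (e.g. token_ids = X_train_one_hot.argmax(-1)); the function name batch_generator and its parameters are my own, not from the original code. It keeps the data as integer indices plus encInput and densifies only one small batch at a time (roughly 64 x 30 x 20200 x 4 bytes, about 155 MB per float32 batch).

import numpy as np

def batch_generator(token_ids, enc_input, vocab_size=20000, batch_size=64):
    n = token_ids.shape[0]
    for start in range(0, n, batch_size):
        ids = token_ids[start:start + batch_size]   # (b, 30) integer tokens
        enc = enc_input[start:start + batch_size]   # (b, 30, 200)
        b = ids.shape[0]
        dense = np.zeros((b, 30, vocab_size + enc.shape[2]), dtype=np.float32)
        dense[:, 0, 0] = 1.0                        # start-of-sequence marker
        rows = np.arange(b)[:, None]                # (b, 1)
        steps = np.arange(1, 30)[None, :]           # (1, 29)
        dense[rows, steps, ids[:, :29]] = 1.0       # shifted one-hot block
        dense[:, :, vocab_size:] = enc              # dense encoder features
        yield dense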