This piece of code was originally written in NumPy, and I'm trying to utilise GPU computation by rewriting it in PyTorch, but as I'm new to PyTorch I've run into a number of problems. First, I'm confused by the dimensions of the tensors: sometimes, after operating on a tensor, the only thing that fixes the shape mismatch is transposing it. Is there any way I can avoid all the .t() calls? The main problem is that the line ar = torch.stack(...) raises "TypeError: 'Tensor' object is not callable". Any suggestion or correction would be appreciated. Thanks!
import torch

def vec_datastr(vector):
    vector = vector.float()
    # Find the indices corresponding to non-zero entries
    index = torch.nonzero(vector)
    index = index.t()
    # Compute the probability of each entry
    prob = vector ** 2
    if torch.sum(prob) == 0:
        prob = 0
    else:
        prob = prob / torch.sum(prob)
    d = depth(vector)  # depth() is defined elsewhere in my code
    # Pad the cumulative probability vector with ones up to length 2**d
    CumProb = torch.ones((2**d - len(prob.t()), 1), device='cuda')
    cp = torch.cumsum(prob, dim=0)
    cp = cp.reshape((len(cp.t()), 1))
    CumProb = torch.cat((cp, CumProb), 0)
    vector = vector.t()
    prob = prob.t()
    ar = torch.stack((index, vector([index, 1]), prob([index, 1]), CumProb([index, 1])))  # Problems occur here
    ar = ar.reshape((len(index), 4))
    # Store the four columns in a dictionary
    output = {'index': ar[:, 0], 'value': ar[:, 1], 'prob': ar[:, 2], 'CumProb': ar[:, 3]}
    return output
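
For reference, here is a minimal sketch of what I think the indexing is supposed to do, using square-bracket indexing instead of calling the tensors; the toy input, the device handling, and the squeeze(1) call are my own guesses, so this may not be the right approach:

import torch

# Toy input standing in for the real vector (assumption, just for illustration)
device = 'cuda' if torch.cuda.is_available() else 'cpu'
vector = torch.tensor([0.0, 3.0, 0.0, 4.0], device=device)

index = torch.nonzero(vector).squeeze(1)   # 1-D tensor of non-zero positions, which seems to avoid the later .t() calls
prob = vector ** 2
prob = prob / torch.sum(prob)
cumprob = torch.cumsum(prob, dim=0)

# Select entries with [] (indexing) rather than () (calling), and stack them as columns
ar = torch.stack((index.float(), vector[index], prob[index], cumprob[index]), dim=1)
print(ar.shape)   # (number of non-zero entries, 4)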