I am wondering whether it is expected that calling numpy.sum on a PyTorch tensor raises an error. As an aside, it does work on pandas DataFrames.
import numpy as np
import pandas as pd
import torch

data = [[1.1, 2.2], [1.1, 2.2], [1.1, 2.2]]
a = np.array(data)
d = pd.DataFrame(data)
t = torch.tensor(data)
np.sum(a, axis=1)  # works, obviously
np.sum(d, axis=1)  # works
np.sum(t, axis=1)  # fails with a TypeError
If you dig a bit inside, you'll see that's quite simply because numpy.sum tries to call torch.Tensor.sum with keyword arguments that do not belong to its signature, as in the following (https://github.com/numpy/numpy/blob/main/numpy/_core/fromnumeric.py, around line 80):
return reduction(axis=axis, out=out, **passkwargs)
which inevitably results in an error.
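You can reproduce the problem without going through numpy at all, by calling the method the way numpy's internal _wrapreduction helper does. This is a minimal sketch based on my understanding of torch.Tensor.sum's signature at the time of writing (it accepts a numpy-style axis alias but no out keyword), so treat it as illustrative:

import torch

t = torch.tensor([[1.1, 2.2], [1.1, 2.2], [1.1, 2.2]])

# PyTorch understands the numpy-style `axis` alias, so this works:
print(t.sum(axis=1))  # tensor([3.3000, 3.3000, 3.3000])

# But numpy also forwards `out=None`, which torch.Tensor.sum does not
# accept, so this is essentially what np.sum ends up doing:
try:
    t.sum(axis=1, out=None)
except TypeError as e:
    print(e)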
This seems quite related: https://github.com/numpy/numpy/issues/28024
Can this be considered a bug in numpy? Why can't numpy pass only the non-default (non-None) arguments to the sum method of the object it receives?
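For illustration, here is a rough sketch of what I have in mind: a hypothetical variant of the dispatch that forwards only the keyword arguments that were actually set. The name sum_forwarding_nondefault is mine, not numpy's:

import numpy as np
import torch

def sum_forwarding_nondefault(obj, axis=None, dtype=None, out=None, **kwargs):
    """Hypothetical np.sum-like wrapper: forward only non-default kwargs."""
    if not isinstance(obj, np.ndarray) and hasattr(obj, "sum"):
        # Drop every keyword that is still at its default of None.
        passkwargs = {k: v for k, v in
                      {"axis": axis, "dtype": dtype, "out": out, **kwargs}.items()
                      if v is not None}
        return obj.sum(**passkwargs)
    return np.sum(obj, axis=axis, dtype=dtype, out=out, **kwargs)

t = torch.tensor([[1.1, 2.2], [1.1, 2.2], [1.1, 2.2]])
print(sum_forwarding_nondefault(t, axis=1))  # tensor([3.3000, 3.3000, 3.3000])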
I wanted to post an issue on GitHub/numpy, but I am not sure I could define this as a bug, and if I click "question" there, I am told to post here...
numpy.sum takes an "array-like". I don't know the exact definition of "array-like", or whether a torch.Tensor is considered "array-like". I would only call this a bug if they "officially" supported torch.Tensor inputs in this function, i.e. if they do consider tensors "array-like".
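Whatever the answer, one reliable workaround is to convert the tensor explicitly before the reduction. This sketch assumes a CPU tensor, which in my understanding can be converted to an ndarray via np.asarray:

import numpy as np
import torch

t = torch.tensor([[1.1, 2.2], [1.1, 2.2], [1.1, 2.2]])

# Explicit conversion to an ndarray sidesteps the kwarg-forwarding issue.
print(np.sum(np.asarray(t), axis=1))  # row sums as a numpy array

# Or stay in torch and use its own reduction directly:
print(t.sum(dim=1))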