
Assume we have a feature representation with kN neurons before the classification layer. Now, the classification layer produces an output layer of size N with only local connections.

That is, the n-th neuron at the output is computed from the k input neurons at locations nk to nk + k. Hence, every k consecutive locations in the input layer (with stride k) give a single neuron value at the output.

In Keras this is done with the LocallyConnected1D layer; PyTorch, however, does not seem to have an equivalent.

Weight matrix in a standard linear layer: kN × N = kN² variables

Weight matrix in a local linear layer: (k × 1) applied N times = Nk variables
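
For concreteness, here is a quick sketch of the parameter-count arithmetic (k = 3 and N = 4 are made-up sizes, chosen only for illustration):

k, N = 3, 4                        # hypothetical sizes, for illustration only
in_features = k * N                # 12 input neurons
dense_params = in_features * N     # kN * N = 48 weights in a standard nn.Linear
local_params = k * N               # k weights per output neuron, N neurons = 12
print(dense_params, local_params)  # 48 12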

1 Answer

This is currently tracked on the PyTorch issue tracker; in the meantime you can get similar behaviour using fold and unfold. See this answer:

https://github.com/pytorch/pytorch/issues/499#issuecomment-503962218

import torch
import torch.nn as nn
import torch.nn.functional as F

class LocalLinear(nn.Module):
    def __init__(self, in_features, local_features, kernel_size, padding=0, stride=1, bias=True):
        super().__init__()
        self.kernel_size = kernel_size
        self.stride = stride
        self.padding = padding

        # Number of local windows produced by unfolding the input.
        fold_num = (in_features + 2 * padding - kernel_size) // stride + 1
        # One (kernel_size x local_features) weight block per window.
        self.weight = nn.Parameter(torch.randn(fold_num, kernel_size, local_features))
        self.bias = nn.Parameter(torch.randn(fold_num, local_features)) if bias else None

    def forward(self, x: torch.Tensor):
        # (batch, in_features) -> (batch, in_features + 2 * padding)
        x = F.pad(x, [self.padding] * 2, value=0)
        # (batch, fold_num, kernel_size): one row per local window
        x = x.unfold(-1, size=self.kernel_size, step=self.stride)
        # Multiply each window by its own weight block:
        # (batch, fold_num, 1, kernel_size) @ (fold_num, kernel_size, local_features)
        x = torch.matmul(x.unsqueeze(2), self.weight).squeeze(2)
        if self.bias is not None:
            x = x + self.bias
        return x
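
As a quick sanity check (the sizes here are arbitrary, picked only for illustration), setting kernel_size = stride = k and local_features = 1 reproduces the layout from the question: kN inputs, N outputs, Nk weights:

k, N = 3, 4
layer = LocalLinear(in_features=k * N, local_features=1, kernel_size=k, stride=k)
x = torch.randn(8, k * N)        # batch of 8 feature vectors
y = layer(x)
print(y.shape)                   # torch.Size([8, 4, 1]); squeeze(-1) to get (8, N)
print(layer.weight.numel())      # 12 == N * k weights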