In PyTorch, I want to create a hidden layer whose neurons are not fully connected to the output layer. I am trying to concatenate the outputs of two linear layers, but I run into the following error:
RuntimeError: size mismatch, m1: [2 x 2], m2: [4 x 4]
My current code:
class NeuralNet2(nn.Module):
    def __init__(self):
        super(NeuralNet2, self).__init__()
        self.input = nn.Linear(2, 40)
        self.hiddenLeft = nn.Linear(40, 2)
        self.hiddenRight = nn.Linear(40, 2)
        self.out = nn.Linear(4, 4)

    def forward(self, x):
        x = self.input(x)
        xLeft, xRight = torch.sigmoid(self.hiddenLeft(x)), torch.sigmoid(self.hiddenRight(x))
        x = torch.cat((xLeft, xRight))
        x = self.out(x)
        return x
I don't understand why there is a size mismatch. Is there an alternative way to implement non-fully-connected layers in PyTorch?
To reproduce: I instantiate the network with net = NeuralNet2(), build an input x = torch.Tensor(np.array([1, 2])), and then simply call net(x). A commenter suggested concatenating along the feature dimension instead: torch.cat((xLeft, xRight), axis=1).
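To illustrate the shape arithmetic behind the error, here is a small standalone sketch (variable names are mine, not from the original code) comparing torch.cat along its default dim=0 with dim=1 for a batched input:

```python
import torch
import torch.nn as nn

# The two parallel "half" hidden layers and the output layer from the question.
hidden_left = nn.Linear(40, 2)
hidden_right = nn.Linear(40, 2)
out = nn.Linear(4, 4)

x = torch.randn(1, 40)                  # one sample with 40 features
left = torch.sigmoid(hidden_left(x))    # shape (1, 2)
right = torch.sigmoid(hidden_right(x))  # shape (1, 2)

# Default dim=0 stacks along the batch axis: shape (2, 2).
# Feeding this into out (which expects 4 input features) triggers
# the reported mismatch between [2 x 2] and [4 x 4].
bad = torch.cat((left, right))

# dim=1 concatenates along the feature axis: shape (1, 4),
# which matches nn.Linear(4, 4).
good = torch.cat((left, right), dim=1)
y = out(good)                           # shape (1, 4)
```

This suggests the two 2-unit branches should be joined along the feature dimension so each sample keeps its 4 concatenated features in one row.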