
I am fairly new to Keras. Here is what I am trying to achieve. I have a Keras model that takes an image as input and produces a 512-dimensional vector. I create it as:


input_img = keras.layers.Input( shape=(240, 320, 3 ) )
cnn = make_vgg( input_img )
out = NetVLADLayer(num_clusters = 16)( cnn )
model = keras.models.Model( inputs=input_img, outputs=out )

Now, for training, each of my samples is actually 13 images. Say I have 2500 samples; then my data's dimensions are 2500x13x240x320x3. I want the model to be applied independently to each of the 13 images. I came across the TimeDistributed layer in Keras and am wondering how I can use it to achieve my objective.

t_input = Input( shape=(13,240,320,3) )
# How to use TimeDistributed with model? 
t_out = TimeDistributed( out )
t_model = Model( inputs=t_input, outputs=t_out )

I am expecting t_out to have shape (None, 13, 512). The above code, however, throws a ValueError. Can anyone help my understanding?

1 Answer


The error occurs in this line:

t_out = TimeDistributed(out)

It happens because out is a tensor, but TimeDistributed expects a layer as its argument. (A Keras Model is itself a layer, which is why you can wrap the whole model.) The wrapped layer is then applied to every temporal slice (dimension of index one) of the input. You could instead do the following:

t_input = Input(shape=(13, 240, 320, 3))
t_out = TimeDistributed(model)(t_input)
t_model = Model(inputs=t_input, outputs=t_out)
