
I am completely losing my patience with TF and Python. I can't get this to work: "ValueError: setting an array element with a sequence." is raised for testx when sess.run is invoked.

I have tried a lot of different things. It's almost as if TF is broken. Could anyone assist?

import tensorflow as tf
import numpy as np

nColsIn = 1
nSequenceLen = 4
nBatches = 8
nColsOut = 1
rnn_size = 228

modelx = tf.placeholder("float",[None,nSequenceLen,1])
modely = tf.placeholder("float",[None,nColsOut])

testx = [tf.convert_to_tensor(np.zeros([nColsIn,nBatches])) for b in range(nSequenceLen)]
testy = np.zeros([nBatches, nColsOut])

layer = {
    'weights': tf.Variable(tf.random_normal([rnn_size, nColsOut],dtype=tf.float64),),
    'biases': tf.Variable(tf.random_normal([nColsOut],dtype=tf.float64))}

lstm_cell = tf.nn.rnn_cell.BasicLSTMCell(rnn_size, forget_bias=1.0)
outputs, states = tf.nn.static_rnn(lstm_cell,modelx ,dtype=tf.float64)
prediction = tf.matmul(outputs[-1], layer['weights']) + layer['biases']

cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits=prediction,labels=modely))
optimizer = tf.train.AdamOptimizer().minimize(cost)

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())

    correct = tf.equal(tf.argmax(prediction, 1), tf.argmax(modely, 1));
    accuracy = tf.reduce_mean(tf.cast(correct, 'float'))

    _, epoch_loss = sess.run([optimizer, cost], feed_dict={modelx: testx, modely: testy})
    print('Epoch Loss: ',epoch_loss,' Accuracy: ', accuracy.eval({modelx: testx, modely: testy}))

1 Answer

This is probably what you want. You'll find a few remarks in comments in the code.

import tensorflow as tf
import numpy as np

nColsIn = 1
nSequenceLen = 4
nBatches = 8
nColsOut = 1
rnn_size = 228

# As you use static_rnn it has to be a list of inputs
modelx = [tf.placeholder(tf.float64,[nBatches, nColsIn]) for _ in range(nSequenceLen)]
modely = tf.placeholder(tf.float64,[None,nColsOut])

# testx should be a numpy array and is not part of the graph
testx = [np.zeros([nBatches,nColsIn]) for _ in range(nSequenceLen)]
testy = np.zeros([nBatches, nColsOut])

layer = {
    'weights': tf.Variable(tf.random_normal([rnn_size, nColsOut],dtype=tf.float64),),
    'biases': tf.Variable(tf.random_normal([nColsOut],dtype=tf.float64))}

lstm_cell = tf.nn.rnn_cell.BasicLSTMCell(rnn_size, forget_bias=1.0)
# Replaced testx by modelx
outputs, states = tf.nn.static_rnn(lstm_cell,modelx, dtype=tf.float64)
# outputs is a list of length nSequenceLen; each element has shape (nBatches, rnn_size),
# i.e. (8, 228) here. You probably want the last element in the sequence direction.
prediction = tf.matmul(outputs[-1], layer['weights']) + layer['biases']

cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits=prediction,labels=modely))
optimizer = tf.train.AdamOptimizer().minimize(cost)

if __name__ == '__main__':
    with tf.Session() as sess:
        sess.run(tf.global_variables_initializer())

        correct = tf.equal(tf.argmax(prediction, 1), tf.argmax(modely, 1))
        accuracy = tf.reduce_mean(tf.cast(correct, 'float'))
        feed_dict = {k: v for k,v in zip(modelx, testx)}
        feed_dict[modely] = testy
        _, epoch_loss = sess.run([optimizer, cost], feed_dict=feed_dict)
        print('Epoch Loss: ',epoch_loss,' Accuracy: ', accuracy.eval(feed_dict))
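A side note on the feed_dict construction: it is just ordinary dictionary building, pairing each per-timestep placeholder with its array, so the comprehension can equivalently be written as `dict(zip(...))`. A minimal sketch of the pattern in plain Python (the `object()` stand-ins here are illustrative substitutes for the TF placeholders; any hashable objects work as keys):

```python
import numpy as np

nSequenceLen = 4
nBatches = 8
nColsIn = 1
nColsOut = 1

# Stand-ins for the per-timestep placeholders (illustrative only).
modelx = [object() for _ in range(nSequenceLen)]
modely = object()

testx = [np.zeros([nBatches, nColsIn]) for _ in range(nSequenceLen)]
testy = np.zeros([nBatches, nColsOut])

# Pair each timestep's placeholder with its input array...
feed_dict = dict(zip(modelx, testx))
# ...then add the single label placeholder.
feed_dict[modely] = testy
```

This is also why feeding the list of placeholders as a single key fails: a list is unhashable, so it cannot be a dictionary key, and each placeholder must be mapped to its own array instead.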

5 Comments

Wow, thank you. That works. I will run some tests on data and verify it processes it the way I expect, because I believe the original sample (MNIST RNN) did not actually go through the batches, i.e. it would only train on the first entry. (Just a theory; maybe the data was simply mangled the wrong way.)
You could for example create a list of (different) placeholders of length the sequence length and use feed dict with a list of testx (instead of using a single placeholder).
Could you modify your example accordingly? Because I get errors such as unhashable type: 'list' for testx, or "setting an array element with a sequence."
Done. The reason for your error was probably that you tried to pass a list of placeholders as a key of your feed_dict.
It eats the whole batch and trains on it. That's a really big step forward for me, thank you. I have not come across this method of creating feed_dict anywhere; good to find it.
