
I have noticed a strange behavior when training my Keras model.

I have two functions (simplified sketches of both are below):

  • generate_net, which builds and returns a compiled Keras model
  • train_net, which trains that model
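
For context, simplified stand-ins for the two functions look roughly like this (the real architecture and training call are more involved; input_dim, x, and y here are placeholders):

    import tensorflow as tf

    def generate_net(input_dim=10):
        # Build and compile a small dense model
        # (stand-in for my real, larger architecture)
        model = tf.keras.Sequential([
            tf.keras.layers.Dense(64, activation="relu", input_shape=(input_dim,)),
            tf.keras.layers.Dense(1),
        ])
        model.compile(optimizer="adam", loss="mse")
        return model

    def train_net(model, x, y, epochs=1):
        # Fit the model on the given data and return it
        model.fit(x, y, epochs=epochs, verbose=0)
        return model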

When I call them like this, the working memory remains more or less constant:

    for i in range(1, 100):
        model = generate_net(...)
    for i in range(1, 100):
        model = train_net(model=model, ...)

However, if I call them like this, the working memory increases with each iteration (which leads to a crash in my real use case):

    for i in range(1, 100):
        model = generate_net(...)
        model = train_net(model=model, ...)
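
For reference, the growth is visible by logging the process's resident memory inside the loop, e.g. with psutil (illustrative only; any process memory monitor shows the same trend):

    import os
    import psutil

    # Resident set size of the current process, in MiB
    rss_mb = psutil.Process(os.getpid()).memory_info().rss / 1024 ** 2
    print(f"iteration {i}: {rss_mb:.1f} MiB resident")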

Does anyone know why this behavior occurs?

EDIT: Even if I add the following cleanup inside the for-loop of the second example, the memory still increases from iteration to iteration:

    del model
    gc.collect()
    tf.keras.backend.clear_session()
    gc.collect()
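
For completeness, the full second loop with this cleanup (plus the required imports) looks like this and still leaks:

    import gc
    import tensorflow as tf

    for i in range(1, 100):
        model = generate_net(...)
        model = train_net(model=model, ...)
        # Attempted cleanup; memory still grows each iteration
        del model
        gc.collect()
        tf.keras.backend.clear_session()
        gc.collect()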
