
The database is postgresql-9.5.1 running in Docker; my host machine is Linux with 3.75 GB of memory. In some methods I am inserting 490,000 rows one after another using psycopg2 with the code below.

student_list = [(name, surname, explanation)]
args_str = ','.join(cur.mogrify("(%s,%s,%s)", x) for x in student_list)
cur.execute('INSERT INTO students (name, surname, explanation) VALUES ' + args_str)
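Each cur.mogrify call returns one fully escaped row literal (a str on Python 2, bytes on Python 3), and the join concatenates all of them into the text of a single INSERT statement, so with 490,000 rows that statement alone can run to tens of megabytes. On Python 3 the same pattern would need a decode before joining; a rough sketch:

# Python 3 variant of the same pattern: mogrify returns bytes, so decode
# each row literal before joining them into one huge statement.
args_str = ','.join(cur.mogrify("(%s,%s,%s)", x).decode('utf-8')
                    for x in student_list)
cur.execute('INSERT INTO students (name, surname, explanation) VALUES ' + args_str)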

This seems to fill up the database container's memory and produces these errors:

LOG: server process (PID 11219) was terminated by signal 9: Killed
DETAIL: Failed process was running
LOG: terminating any other active server processes
docker@test_db WARNING: terminating connection because of crash of another server process
docker@test_db DETAIL: The postmaster has commanded this server process to roll back the current transaction and exit, because another server process exited abnormally and possibly corrupted shared memory.
docker@test_db HINT: In a moment you should be able to reconnect to the database and repeat your command.
docker@test_db WARNING: terminating connection because of crash of another server process
docker@test_db DETAIL: The postmaster has commanded this server process to roll back the current transaction and exit, because another server process exited abnormally and possibly corrupted shared memory.
... docker@test_db FATAL: the database system is in recovery mode
LOG: all server processes terminated; reinitializing
LOG: database system was interrupted; last known up at 2017-06-06 09:39:40 UTC
LOG: database system was not properly shut down; automatic recovery in progress
docker@test_db FATAL: the database system is in recovery mode
docker@test_db FATAL: the database system is in recovery mode
docker@test_db FATAL: the database system is in recovery mode
LOG: autovacuum launcher started

The script itself logs this:

Inner exception
SSL SYSCALL error: EOF detected

I tried putting some sleep time between consecutive queries, but got the same result. Is there some limit I am hitting? I also tried connecting and disconnecting for each query, again with the same result. These are my connect and disconnect methods:

import psycopg2

def connect():
    conn = psycopg2.connect(database=database_name,
                            user=database_user,
                            host=database_host,
                            port=database_port)
    # Run each statement in its own transaction (no explicit commit needed).
    conn.set_isolation_level(psycopg2.extensions.ISOLATION_LEVEL_AUTOCOMMIT)
    cur = conn.cursor()
    return conn, cur

def disconnect(conn, cur):
    cur.close()
    conn.close()
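Presumably the connect-per-query attempt combined these helpers with the insert above roughly like this (a sketch only, reusing the args_str built in the first snippet):

# Open a fresh autocommit connection, run the statement, close again.
conn, cur = connect()
cur.execute('INSERT INTO students (name, surname, explanation) VALUES ' + args_str)
disconnect(conn, cur)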
  • Are you passing some custom memory option to docker run? Commented Jun 6, 2017 at 11:47
  • No, just a plain docker run command. Commented Jun 6, 2017 at 11:58
  • Check the dmesg output on your host. It is probably an out-of-memory problem; you should see there that Postgres is being killed. Commented Jun 6, 2017 at 12:04
  • Are you running Docker directly on your host, or are you using docker-machine? Commented Jun 6, 2017 at 12:05

1 Answer


Here is what I did. My memory really was full, which is why the Linux OOM killer kept terminating the Postgres process. There were about 1M values in every insert. The fix was to split the data lists into chunks and insert them 100k rows at a time, and that works very well. Thanks for your help.
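A minimal sketch of that chunked approach, reusing the connect()/disconnect() helpers and the mogrify pattern from the question (chunked is a hypothetical helper, and the .decode() is only needed on Python 3, where mogrify returns bytes):

def chunked(rows, size=100000):
    # Yield successive slices of at most `size` rows.
    for start in range(0, len(rows), size):
        yield rows[start:start + size]

conn, cur = connect()
for chunk in chunked(student_list, 100000):
    # One multi-row INSERT per 100k-row chunk instead of a single statement
    # for all ~490k rows, so the statement text and server memory stay bounded.
    args_str = ','.join(cur.mogrify("(%s,%s,%s)", x).decode('utf-8') for x in chunk)
    cur.execute('INSERT INTO students (name, surname, explanation) VALUES ' + args_str)
disconnect(conn, cur)

On psycopg2 2.7 and later, psycopg2.extras.execute_values does much the same batching for you via its page_size argument.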
