
I am pushing a pandas DataFrame into a Redshift table and getting the following error:

cur.execute("INSERT INTO sir_main VALUES " + str(args_str))
psycopg2.ProgrammingError: Statement is too large. Statement Size: 58034743 bytes. Maximum Allowed: 16777216 bytes

This halts the execution. Is there any way to configure this limit when pushing into the database?
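For context, the statement presumably grows so large because every row of the DataFrame is folded into a single INSERT. The following is a minimal sketch of that pattern, not the original code: the stand-in DataFrame `df`, the three-column layout, the connection details, and the use of `cur.mogrify` are all assumptions.

```python
import pandas as pd
import psycopg2

# Stand-in DataFrame; the real one has its own columns and row count.
df = pd.DataFrame({"a": range(1000), "b": range(1000), "c": range(1000)})

# Placeholder connection details (assumption, not from the original post).
conn = psycopg2.connect(host="my-cluster.example.redshift.amazonaws.com",
                        port=5439, dbname="mydb", user="myuser", password="mypassword")
cur = conn.cursor()

# One "(v1, v2, v3)" tuple per DataFrame row, all concatenated into one string.
args = [cur.mogrify("(%s, %s, %s)", tuple(row)).decode("utf-8")
        for row in df.itertuples(index=False)]
args_str = ",".join(args)

# With enough rows this single statement exceeds Redshift's
# 16777216-byte statement limit and the execute() call fails.
cur.execute("INSERT INTO sir_main VALUES " + args_str)
```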

  • No. Also, if this is your regular process, you should consider changing your approach so that you load your data to S3 first and then use the AWS COPY command. – Commented Nov 26, 2018 at 13:28

1 Answer


If you are loading more than a few hundred rows, you should save the DataFrame as a flat file to S3 and load it into Redshift using COPY: https://docs.aws.amazon.com/redshift/latest/dg/r_COPY.html
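A minimal sketch of that approach, assuming boto3 for the S3 upload and psycopg2 for issuing COPY; the bucket name, IAM role ARN, connection details, and the DataFrame `df` are placeholders rather than values from the question:

```python
import boto3
import psycopg2

# 1. Save the DataFrame as a flat file (CSV here; Parquet also works).
df.to_csv("sir_main.csv", index=False, header=False)

# 2. Upload the file to S3 (bucket and key are placeholders).
s3 = boto3.client("s3")
s3.upload_file("sir_main.csv", "my-bucket", "staging/sir_main.csv")

# 3. COPY from S3 into Redshift (connection details and IAM role ARN are placeholders).
conn = psycopg2.connect(host="my-cluster.example.redshift.amazonaws.com",
                        port=5439, dbname="mydb", user="myuser", password="mypassword")
cur = conn.cursor()
cur.execute("""
    COPY sir_main
    FROM 's3://my-bucket/staging/sir_main.csv'
    IAM_ROLE 'arn:aws:iam::123456789012:role/my-redshift-role'
    FORMAT AS CSV;
""")
conn.commit()
```

Because COPY reads the file from S3 in parallel across the cluster, it scales to millions of rows, whereas a single multi-row INSERT is capped by the 16 MB statement limit.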
