
Let's pretend that I have 1,000,000 rows of data to insert and I don't care about data loss.

How much faster would a single commit after all 1,000,000 rows be than one commit every 100,000 rows?

Same question for updates and deletes.

How can I change the commit size? i.e., not commit the transaction until XXX number of rows have been processed?

1 Answer


There's no "commit size" setting. What you control is the commit time: the moment you issue the COMMIT command, typically after counting the number of inserted (updated, deleted) rows or otherwise limiting the batch size. The same applies to updates and deletes.

Once you figure that out, you will be able to answer the "how much faster" question by running tests on your system with your data.
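The counting pattern described above can be sketched in Python using the standard-library sqlite3 module (the table name `t` and the row counts are placeholders; any database driver with explicit commits works the same way):

```python
import sqlite3

def batched_insert(conn, rows, batch_size):
    """Insert rows one at a time, committing once per batch_size rows
    instead of once per row (or once at the very end)."""
    cur = conn.cursor()
    for i, row in enumerate(rows, start=1):
        cur.execute("INSERT INTO t (val) VALUES (?)", (row,))
        if i % batch_size == 0:   # reached the batch limit: commit now
            conn.commit()
    conn.commit()                 # commit any final partial batch

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (val INTEGER)")

# 1,000,000 rows, one commit per 100,000 rows -> 10 commits total
batched_insert(conn, range(1_000_000), batch_size=100_000)
print(conn.execute("SELECT COUNT(*) FROM t").fetchone()[0])  # 1000000
```

To compare batch sizes on your own system, time this function with `batch_size=100_000` versus `batch_size=1_000_000` (a single commit) against your real database and data.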
