
I'm working on a system where I need to store processed data across approximately 30 tables in a SQL Server database. For frequently used tables, I'm using JPA, while the majority of the insertions are handled using JDBC due to performance and control needs.

Currently, I wrap the entire operation in a Spring @Transactional(rollbackFor = Exception.class, propagation = Propagation.REQUIRED) annotation. This approach ensures that any failure during the operation rolls back all the changes — which is good. However, it locks all the involved tables until the full transaction completes, which negatively affects scalability and performance.
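For concreteness, here is a minimal sketch of the setup described above, assuming a Spring service that mixes a JPA repository with a JdbcTemplate; all names (IngestionService, RecordRepository, ProcessedBatch, detail_table) are hypothetical stand-ins, not my actual code:

```java
import org.springframework.jdbc.core.JdbcTemplate;
import org.springframework.stereotype.Service;
import org.springframework.transaction.annotation.Propagation;
import org.springframework.transaction.annotation.Transactional;

@Service
public class IngestionService {

    private final RecordRepository recordRepository; // JPA, for the frequently used tables
    private final JdbcTemplate jdbcTemplate;         // plain JDBC for the bulk inserts

    public IngestionService(RecordRepository recordRepository, JdbcTemplate jdbcTemplate) {
        this.recordRepository = recordRepository;
        this.jdbcTemplate = jdbcTemplate;
    }

    @Transactional(rollbackFor = Exception.class, propagation = Propagation.REQUIRED)
    public void persistAll(ProcessedBatch batch) {
        recordRepository.saveAll(batch.getRecords()); // JPA path
        jdbcTemplate.batchUpdate(
                "INSERT INTO detail_table (record_id, payload) VALUES (?, ?)",
                batch.getDetailRows());               // JDBC path
        // ...writes to the remaining ~30 tables...
        // Any exception rolls everything back, but every lock taken above
        // is held until this method returns and the transaction commits.
    }
}
```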

What I need:

  • A way to persist data across multiple tables efficiently.
  • Maintain transactional integrity — if any part of the process fails, the entire operation should roll back and leave the database in the exact previous state.
  • Minimize long-held locks or table blocking that affect scalability.

Questions:

  1. Is there a better way to manage transactions when working with a mix of JPA and JDBC across many tables?
  2. Can @Transactional be used in a smarter way (e.g., splitting into smaller units with proper rollback)?
  3. Would programmatic transaction management, nested transactions, or custom rollback logic be more suitable here?
  4. Are there any best practices for this kind of multi-table write operation to balance consistency and performance?

I’m looking for a scalable and clean approach to ensure rollback behaves exactly as expected, without locking the entire database section for the duration of the transaction.

Comments:

  • You could try snapshot isolation, perhaps; it would allow reading the rows not participating in the transaction. See stackoverflow.com/questions/2741016/… Otherwise, you probably can't avoid blocking entirely when writing things in transactions. By the way, what does "locking all tables" mean? What kind of queries are getting blocked? Commented Aug 4 at 18:44
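To illustrate the commenter's suggestion: snapshot isolation is a database-level feature that first has to be enabled on the SQL Server database, after which individual connections can opt into it. Below is a hedged sketch using the Microsoft JDBC driver's vendor-specific isolation constant; the connection URL, credentials, and table name are placeholders:

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

import com.microsoft.sqlserver.jdbc.SQLServerConnection;

public class SnapshotReadDemo {
    public static void main(String[] args) throws Exception {
        // One-time database setting, typically run by a DBA rather than the app:
        //   ALTER DATABASE MyDb SET ALLOW_SNAPSHOT_ISOLATION ON;

        try (Connection conn = DriverManager.getConnection(
                "jdbc:sqlserver://localhost;databaseName=MyDb;encrypt=false",
                "user", "password")) {
            // Vendor-specific isolation level exposed by the Microsoft JDBC driver.
            conn.setTransactionIsolation(SQLServerConnection.TRANSACTION_SNAPSHOT);
            conn.setAutoCommit(false);
            try (Statement st = conn.createStatement();
                 ResultSet rs = st.executeQuery("SELECT COUNT(*) FROM detail_table")) {
                rs.next();
                // This read sees a consistent snapshot and is not blocked by
                // concurrent writers holding row or table locks.
                System.out.println("rows visible in snapshot: " + rs.getLong(1));
            }
            conn.commit();
        }
    }
}
```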

1 Answer


There are several somewhat separate things in play here, which makes it hard to give a particularly concrete answer to what is a slightly abstract question, but there are some general points to make:

  1. First of all, you need to work out which of your operations are acquiring locks within your SQL Server DB, as those are the ones you will want to either minimize entirely or perhaps defer until the end of the DB transaction. A diagnostic sketch follows this list.

  2. If you are able to split your business logic into separate transactions that can fail individually (accepting that earlier transactions in the process have already committed), then you can indeed deal with the problem more scalably; see the split-and-compensate sketch after this list.

  3. On the other hand, nested transactions and/or custom rollback logic will only help you if it is possible to recover from a partial failure during the transaction and continue on to a successful conclusion; whether that holds can only be evaluated with detailed knowledge of exactly what your operations perform. A programmatic sketch follows below.
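For point 1, SQL Server's sys.dm_tran_locks DMV shows which resources each session has locked and in what mode. A minimal diagnostic sketch, run from a second connection while the slow transaction is in flight (a wired-up JdbcTemplate is assumed):

```java
import java.util.List;

import org.springframework.jdbc.core.JdbcTemplate;

public class LockInspector {

    private final JdbcTemplate jdbcTemplate;

    public LockInspector(JdbcTemplate jdbcTemplate) {
        this.jdbcTemplate = jdbcTemplate;
    }

    /** Lists current locks in this database; run while the slow transaction is active. */
    public void printLocks() {
        List<String> locks = jdbcTemplate.query(
                "SELECT resource_type, request_mode, request_status, request_session_id " +
                "FROM sys.dm_tran_locks WHERE resource_database_id = DB_ID()",
                (rs, rowNum) -> rs.getString("resource_type") + " "
                        + rs.getString("request_mode") + " "
                        + rs.getString("request_status") + " (session "
                        + rs.getInt("request_session_id") + ")");
        locks.forEach(System.out::println);
    }
}
```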
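A sketch of what point 2 could look like: each step commits (and releases its locks) as soon as it finishes, so a failure in a later step has to be handled with compensating logic rather than a rollback. Note that REQUIRES_NEW only takes effect when the call goes through the Spring proxy, which is why the steps live in a separate bean. All names here are illustrative:

```java
import org.springframework.stereotype.Service;
import org.springframework.transaction.annotation.Propagation;
import org.springframework.transaction.annotation.Transactional;

@Service
class StepService {

    @Transactional(propagation = Propagation.REQUIRES_NEW, rollbackFor = Exception.class)
    public void writeHeaderTables(ProcessedBatch batch) { /* JPA writes */ }

    @Transactional(propagation = Propagation.REQUIRES_NEW, rollbackFor = Exception.class)
    public void writeDetailTables(ProcessedBatch batch) { /* JDBC batch inserts */ }

    @Transactional(propagation = Propagation.REQUIRES_NEW, rollbackFor = Exception.class)
    public void deleteBatch(long batchId) { /* compensating cleanup of step 1 */ }
}

@Service
class Orchestrator {

    private final StepService steps;

    Orchestrator(StepService steps) {
        this.steps = steps;
    }

    public void persistAll(ProcessedBatch batch) {
        steps.writeHeaderTables(batch);       // commits and releases its locks immediately
        try {
            steps.writeDetailTables(batch);   // likewise
        } catch (RuntimeException e) {
            steps.deleteBatch(batch.getId()); // undo the already-committed step 1
            throw e;
        }
    }
}
```

The trade-off is that intermediate states become visible to concurrent readers, and the "exact previous state" guarantee now rests on the compensation step, so this only fits if partially written data is tolerable or hidden (for example, behind a status flag that flips only at the end).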
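And a sketch of point 3 using programmatic transaction management: a TransactionTemplate configured with PROPAGATION_NESTED wraps an optional step in a JDBC savepoint, so a failure there rolls back only that slice while the outer transaction can still commit. This assumes a savepoint-capable transaction manager (e.g. DataSourceTransactionManager); ProcessedBatch and the write methods are again placeholders:

```java
import org.springframework.transaction.PlatformTransactionManager;
import org.springframework.transaction.TransactionDefinition;
import org.springframework.transaction.support.TransactionTemplate;

public class NestedWriteService {

    private final TransactionTemplate outerTx;
    private final TransactionTemplate nestedTx;

    public NestedWriteService(PlatformTransactionManager txManager) {
        this.outerTx = new TransactionTemplate(txManager);
        this.nestedTx = new TransactionTemplate(txManager);
        // NESTED propagation is implemented via JDBC savepoints.
        this.nestedTx.setPropagationBehavior(TransactionDefinition.PROPAGATION_NESTED);
    }

    public void persistAll(ProcessedBatch batch) {
        outerTx.executeWithoutResult(status -> {
            writeMandatoryTables(batch);          // part of the outer transaction
            try {
                nestedTx.executeWithoutResult(s -> writeOptionalTables(batch));
            } catch (RuntimeException e) {
                // Only the work after the savepoint was rolled back; decide here
                // whether the outer transaction can still complete successfully.
            }
        });
    }

    private void writeMandatoryTables(ProcessedBatch batch) { /* ... */ }
    private void writeOptionalTables(ProcessedBatch batch) { /* ... */ }
}
```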

