
I have a table that acts as a queue (let's call it queue), with a sequence column numbered 1..N.

Some triggers insert into this queue (the triggers run inside transactions).

External machines keep track of the last sequence number they saw and ask the remote database: give me all rows with a sequence greater than 10 (for example).

The problem:

Sometimes transactions 1 and 2 begin (the numbers are examples), but transaction 2 commits before transaction 1. If, in between, a host has asked the queue for sequences greater than N, transaction 1's sequence numbers are skipped and never delivered.

How to prevent this?
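To make the race concrete, here is a sketch of the timeline (table name, column name, and the sequence values are illustrative):

```sql
-- Session A (transaction 1)          -- Session B (transaction 2)
BEGIN;
INSERT INTO queue ...;  -- assigned seq = 100
                                      BEGIN;
                                      INSERT INTO queue ...;  -- assigned seq = 101
                                      COMMIT;  -- seq 101 is now visible

-- Consumer polls: SELECT * FROM queue WHERE seq > 99;
-- it sees only seq 101 and remembers "last seen = 101"

COMMIT;  -- seq 100 becomes visible only now, but the consumer
         -- next asks for seq > 101, so row 100 is skipped forever
```

Sequence values are handed out at insert time, not at commit time, so rows can become visible out of sequence order.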

  • This may be difficult in PostgreSQL because PostgreSQL is unable to do the kind of pessimistic locking that would serialize the transactions. Try explicit ACCESS EXCLUSIVE table locking. Commented May 11, 2021 at 12:42
  • That would work I think, but it seems like a huge performance hit; it would serialize all transactions, wouldn't it? Commented May 11, 2021 at 13:16
  • I am thinking of a double-queue solution where a job locks queue1 and copies to queue2. That way queue2 would not skip records, and the locking would be quicker. Commented May 11, 2021 at 14:25
  • You can take inspiration from Microsoft SQL Server, which has queue tables for Service Broker and implements a RECEIVE pseudo-SQL statement that discards rows as it reads them (a simultaneous SELECT and DELETE)... By the way, @Thiago Sayão does the same thing in a reversed manner but keeps the rows in a purging table... Commented May 11, 2021 at 14:42
  • By the way, PG recently gained something that seems similar in a new contrib module... A way to explore: pgxn.org/dist/pg_message_queue Commented May 11, 2021 at 14:44

1 Answer


I would proceed like this:

  • add a state column to the table that you change as soon as you process an entry

  • get the next entry with

    SELECT ... FROM queuetab
    WHERE state = 'new'
    ORDER BY seq
    LIMIT 1
    FOR UPDATE SKIP LOCKED;
    
  • update state in the row you found and process it

As long as you do the last two actions in a single transaction, that will make sure that you are never blocked, get the first available entry and never skip an entry.
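Putting the last two steps together, a consumer transaction might look like this (queuetab, state, and seq as in the answer; the 'new' and 'done' values are illustrative):

```sql
BEGIN;

-- Grab the oldest unprocessed entry. SKIP LOCKED makes concurrent
-- consumers pass over rows already claimed by another transaction
-- instead of blocking on them.
SELECT *
FROM queuetab
WHERE state = 'new'
ORDER BY seq
LIMIT 1
FOR UPDATE SKIP LOCKED;

-- ... process the entry in the application ...

-- Mark it as processed so no other consumer picks it up again.
UPDATE queuetab
SET state = 'done'
WHERE seq = /* the seq returned by the SELECT above */;

COMMIT;
```

Because the row stays locked from the SELECT until COMMIT, a crash before COMMIT rolls the row back to state = 'new', so the entry is not lost.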


2 Comments

The problem is that the queue has N consumers, so I can't have a state column.
That should work with any number of concurrent consumers without locking anybody. The state is independent of which consumer grabs an entry (but of course you can update the state to reflect the consumer that processed it).

