
I added a unique constraint on the email column of a Postgres users table. The problem I am facing now is that the constraint seems to register each value I insert (or try to insert), whether or not a record with that value still exists. I.e.:

If I insert [email protected], delete the row, and try to insert [email protected] again, it says:

sqlalchemy.exc.IntegrityError: (psycopg2.errors.UniqueViolation) duplicate key value violates unique constraint "email"

My question is: does the unique constraint guarantee that a value can never be reused, or only that there is at most one record with that value in the column at any time?

The documentation says the second, but my experience suggests the first.


More details:

|      Column      |            Type             | Nullable |
|------------------|-----------------------------|----------|
| id               | integer                     | not null |
| email            | character varying(100)      |          |
| password         | character varying(100)      |          |
| name             | character varying(1000)     |          |
| lastname         | character varying(1000)     |          |
| dni              | character varying(20)       |          |
| cellphone        | character varying(20)       |          |
| accepted_terms   | boolean                     |          |
| investor_test    | boolean                     |          |
| validated_email  | boolean                     |          |
| validated_cel    | boolean                     |          |
| last_login_at    | timestamp without time zone |          |
| current_login_at | timestamp without time zone |          |
| last_login_ip    | character varying(100)      |          |
| current_login_ip | character varying(100)      |          |
| login_count      | integer                     |          |
| active           | boolean                     |          |
| fs_uniquifier    | character varying(255)      | not null | 
| confirmed_at     | timestamp without time zone |          |

Indexes:
    "bondusers_pkey" PRIMARY KEY, btree (id)
    "bondusers_email_key" UNIQUE CONSTRAINT, btree (email)
    "bondusers_fs_uniquifier_key" UNIQUE CONSTRAINT, btree (fs_uniquifier)

Insert Statement:

INSERT INTO bondusers (email, password, name, lastname, dni, cellphone, accepted_terms, investor_test, validated_email, validated_cel, last_login_at, current_login_at, last_login_ip, current_login_ip, login_count, active, fs_uniquifier, confirmed_at) VALUES ('[email protected]', '$pbkdf2-sha256$29000$XyvlfI8x5vwfYwyhtBYi5A$Hhfrzvqs94MjTCmDOVmmnbUyf7ho4kLEY8UYUCdHPgM', 'mail', 'mail3', '123123123', '1139199196', false, false, false, false, NULL, NULL, NULL, NULL, NULL, true, '1c4e60b34a5641f4b560f8fd1d45872c', NULL);

ERROR:  duplicate key value violates unique constraint "bondusers_fs_uniquifier_key"
DETAIL:  Key (fs_uniquifier)=(1c4e60b34a5641f4b560f8fd1d45872c) already exists.

But when I run:

select * from bondusers where fs_uniquifier = '1c4e60b34a5641f4b560f8fd1d45872c';

the result is 0 rows.

  • Can you share your code for the insertion and deletion? My gut instinct is that you're missing something with the transaction management here. Commented Jan 29, 2023 at 18:28
  • Thanks, I updated the code in the original post. Commented Jan 29, 2023 at 19:18
  • Off-topic: use the data type inet for an IP address, not a varchar. I never use a varchar for anything. Commented Jan 29, 2023 at 19:31

2 Answers


I assume that if you run the INSERT, DELETE, INSERT sequence directly in the Postgres command line, it works OK?

I noticed your error references SQLAlchemy (sqlalchemy.exc.IntegrityError), so I think the issue may be SQLAlchemy rather than PostgreSQL. Within a transaction, SQLAlchemy's Unit of Work pattern can re-order SQL statements for performance reasons.

The only reference I could find was here: https://github.com/sqlalchemy/sqlalchemy/issues/5735#issuecomment-735939061

if there are no dependency cycles between the target tables, the flush proceeds as follows:

<snip/>

  • a. within a particular table, INSERT operations are processed in the order in which objects were add()'ed

  • b. within a particular table, UPDATE and DELETE operations are processed in primary key order

So if you have the following within a single transaction:

  • INSERT x
  • DELETE x
  • INSERT x

when you commit it, it's probably getting reordered as:

  • INSERT x
  • INSERT x
  • DELETE x

I have more experience with this problem in Java/Hibernate, but the SQLAlchemy docs do claim its unit of work pattern is "Modeled after Fowler's "Unit of Work" pattern as well as Hibernate, Java's leading object-relational mapper," so it is probably relevant here too.
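
For illustration only, here is a minimal sketch of the failing pattern. It assumes a mapped User model with a unique email column and a configured Session factory (both hypothetical names, standing in for whatever the question's Flask app actually defines):

import sqlalchemy as sa

# "Session" is assumed to be a configured sessionmaker;
# "User" is a mapped model whose email column has a unique constraint
with Session() as s, s.begin():
    u = s.scalars(sa.select(User).where(User.email == 'x@example.com')).first()
    s.delete(u)                          # DELETE is queued, not yet emitted
    s.add(User(email='x@example.com'))   # INSERT is queued
# at commit, the flush emits the INSERT before the DELETE for the same table,
# so PostgreSQL raises UniqueViolation even though the old row was deleted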


To supplement Ed Brook's insightful answer, you can work around the problem by flushing the session after deleting the record:

# "Session" is a configured sessionmaker; "User" is a mapped model
import sqlalchemy as sa

with Session() as s, s.begin():
    u = s.scalars(sa.select(User).where(User.user == 'a')).first()
    s.delete(u)
    s.flush()  # emit the DELETE now, before the INSERT below is queued
    s.add(User(user='a'))

The explicit flush() sends the DELETE to the database immediately, so the unit of work cannot reorder it after the subsequent INSERT.

Another solution would be to use a deferred constraint, so that the state of the index is not evaluated until the end of the transaction:

class User(Base):
    ...

    __table_args__ = (
        sa.UniqueConstraint('user', deferrable=True, initially='deferred'),
    )

but note, from the PostgreSQL documentation:

deferrable constraints cannot be used as conflict arbitrators in an INSERT statement that includes an ON CONFLICT DO UPDATE clause.
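
As a sketch (same assumed Session and User fixtures as the snippet above), with the deferred constraint the original delete-then-insert sequence should commit without an explicit flush, because the unique index is only checked at COMMIT:

import sqlalchemy as sa

with Session() as s, s.begin():
    u = s.scalars(sa.select(User).where(User.user == 'a')).first()
    s.delete(u)
    # even if the flush emits this INSERT before the DELETE, the deferred
    # unique constraint is only checked when the transaction commits
    s.add(User(user='a'))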

1 Comment

Thanks for your answer; it is clear and makes sense. After reviewing this, I got the idea that the problem I was facing is that the module I use (Flask-Security-Too) does not follow this pattern of session management, especially when running under Gunicorn in a multithreaded scenario.
