
I've set up a table accordingly:

CREATE TABLE raw (
    id          SERIAL,
    regtime     float NOT NULL,
    time        float NOT NULL,
    source      varchar(15),
    sourceport  INTEGER,
    destination varchar(15),
    destport    INTEGER,
    blocked     boolean
); ... + index and grants

I've successfully used this table for a while now, and all of a sudden the following insert doesn't work any longer:

INSERT INTO raw(
    time, regtime, blocked, destport, sourceport, source, destination
) VALUES (
    1403184512.2283964, 1403184662.118, False, 2, 3, '192.168.0.1', '192.168.0.2'
);

The error is: ERROR: integer out of range

Not even sure where to begin debugging this. I'm not out of disk space, and the error itself is pretty terse.

  • Show the whole insert command. Commented Jun 19, 2014 at 14:11
  • @ClodoaldoNeto That is it, copied and pasted. The Unix timestamps are 1403184512.2283964 and 1403184662.118 respectively; both are fine and don't affect the result in any way whatsoever. They are also placed at the beginning of both the insert column list and the value list, so the position is not the issue here. Commented Jun 19, 2014 at 14:32
  • Any chance that your id generator has passed 2^31? Commented Jun 19, 2014 at 14:47
  • Try select max(id) from raw. You might also try changing the type of id from SERIAL (4-byte signed integer) to BIGSERIAL (8-byte signed integer). Share and enjoy. Commented Jun 19, 2014 at 14:52
  • Cannot reproduce on PostgreSQL 9.3. The sequence underlying the "id" column is the most likely problem. What does select currval('raw_id_seq') return? (The name of your sequence might be different; mine is PostgreSQL's default.) Commented Jun 19, 2014 at 14:54
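
Putting those suggestions together, a quick check could look like the sketch below (assuming the sequence got PostgreSQL's default name, raw_id_seq, as the last comment notes):

SELECT max(id) FROM raw;            -- highest id actually stored
SELECT last_value FROM raw_id_seq;  -- where the sequence currently stands
-- if either value is at or near 2147483647 (2^31 - 1), the SERIAL column has run out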

2 Answers


SERIAL columns are stored as INTEGERs, giving them a maximum value of 2^31 - 1. So after ~2 billion inserts, your new id values will no longer fit.
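
For illustration, here is a minimal reproduction of that failure mode on a throwaway table (table name made up here):

CREATE TABLE overflow_demo (id integer);
INSERT INTO overflow_demo VALUES (2147483647);  -- fine: exactly 2^31 - 1
INSERT INTO overflow_demo VALUES (2147483648);  -- ERROR: integer out of range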

If you expect this many inserts over the life of your table, create it with a BIGSERIAL (internally a BIGINT, with a maximum of 2^63 - 1).
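
For instance, the table from the question could be declared with BIGSERIAL from the start (only the id column changes):

CREATE TABLE raw (
    id          BIGSERIAL,
    regtime     float NOT NULL,
    time        float NOT NULL,
    source      varchar(15),
    sourceport  INTEGER,
    destination varchar(15),
    destport    INTEGER,
    blocked     boolean
);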

If you discover later on that a SERIAL isn't big enough, you can increase the size of an existing field with:

ALTER TABLE raw ALTER COLUMN id TYPE BIGINT;

Note that it's BIGINT here, rather than BIGSERIAL (as serials aren't real types). And keep in mind that, if you actually have 2 billion records in your table, this might take a little while...
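
If you want to confirm the change afterwards, one quick way (just a sketch) is to ask information_schema for the column's type:

SELECT column_name, data_type
FROM information_schema.columns
WHERE table_name = 'raw' AND column_name = 'id';
-- data_type should now read 'bigint'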


4 Comments

The syntax I used was ALTER TABLE raw ALTER COLUMN id TYPE BIGINT, but I guess it's all the same :)
@Torxed Yeah, the COLUMN keyword is optional, but it does make things a little clearer. Updated ;)
The ALTER TABLE is not practically doable on a live database; it rewrites the table, locking it for the full duration.
@Gaetano: True, if you caught this early then you might want something more elaborate to avoid the downtime (e.g. add a trigger-maintained duplicate of id and rename it once it's populated). But if every insert is already failing with an integer out of range error, then your system is not exactly "live", and I think you'll just need to wait it out...
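
For reference, the trigger-maintained duplicate mentioned in the comment above could look roughly like the sketch below; the column, function, and trigger names are made up for illustration:

ALTER TABLE raw ADD COLUMN id_big BIGINT;

CREATE FUNCTION raw_copy_id() RETURNS trigger AS $$
BEGIN
    NEW.id_big := NEW.id;   -- keep the BIGINT shadow column in sync
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER raw_copy_id_trg
    BEFORE INSERT OR UPDATE ON raw
    FOR EACH ROW EXECUTE PROCEDURE raw_copy_id();

-- Backfill existing rows in small batches to avoid one long lock, e.g.:
UPDATE raw SET id_big = id WHERE id_big IS NULL AND id < 100000000;
-- ...repeat for the remaining id ranges, then swap: drop the trigger,
-- drop the old id column, rename id_big to id, and re-create the index
-- and sequence default on the new column.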

You may also need to add some typecasts to get rid of the error when an intermediate operation overflows.

I just wanted to illustrate that in some cases, the issue is not just that you need a BIGINT column, but that some intermediate operation is overflowing.

For example, the following gives ERROR: integer out of range, even though we made the column BIGINT:

CREATE TABLE tmp(i BIGINT);
INSERT INTO tmp SELECT 2147483647 + 1;

The problem is that 2147483647 = 2**31 - 1, the maximum integer that fits into INTEGER, so when we add 1 it overflows and we get the error.

The same issue happens if we just SELECT without any tables involved:

SELECT 2147483647 + 1;

To solve the issue, we could typecast either as:

SELECT 2147483647::BIGINT + 1;

or as:

SELECT 2147483647 + 1::BIGINT;

so as long as one of the operands is a BIGINT, the addition is carried out as a BIGINT and there is no overflow error.

It is also worth noting that

SELECT 2147483648 + 1;

does not give any error, because the literal 2147483648 doesn't fit into an INTEGER, so PostgreSQL treats it as a BIGINT by default.

generate_series typecasting

Another case where the issue might come up is when using generate_series to generate some large test data (this is what brought me here in the first place), e.g.:

SELECT i + 1 FROM generate_series(2147483647, 2147483647) AS s(i);

gives the error for similar reasons as above, because if the arguments of generate_series are INTEGER, then so are the returned values. One good clean solution in this case is to typecast the arguments of generate_series to BIGINT as in:

SELECT i + 1 FROM generate_series(2147483647::BIGINT, 2147483647::BIGINT) AS s(i);

Tested on PostgreSQL 16.6, Ubuntu 24.04.1.

