
I have a table with a column abc carrying a Unix timestamp in milliseconds (e.g. 13898161481435), and I want to run a between-dates select.

It would not be efficient to do a

where TO_CHAR(TO_TIMESTAMP(abc / 1000), 'DD/MM/YYYY') > '14/01/2014 00:00:00' and ..;

which would convert every record.

Rather, I'd do something like

where abc > ('14/01/2014 00:00:00' tobigint()) and abc < ...

But I can't find any reference for this, only for the reverse case.
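(Outside the database, the bound itself is easy to compute; a minimal Python sketch, assuming the column stores UTC milliseconds as the 13-digit example suggests, of turning a 'DD/MM/YYYY HH:MM:SS' string into a bigint to compare against abc:)

```python
from datetime import datetime, timezone

def to_epoch_ms(s: str) -> int:
    """Parse a DD/MM/YYYY HH:MM:SS string as UTC and return epoch milliseconds."""
    dt = datetime.strptime(s, "%d/%m/%Y %H:%M:%S").replace(tzinfo=timezone.utc)
    return int(dt.timestamp() * 1000)

lower = to_epoch_ms("14/01/2014 00:00:00")
# The resulting constant can be substituted into the query:
# WHERE abc > 1389657600000 AND abc < ...
print(lower)
```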

4 Answers


Try this

WHERE abc > extract(epoch from timestamp '2014-01-28 00:00:00')

PostgreSQL Docs
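(Note that extract(epoch from …) yields seconds, while the column in the question holds milliseconds, so the constant needs a * 1000 in that case. A quick Python check of the arithmetic, assuming UTC:)

```python
from datetime import datetime, timezone

# extract(epoch from timestamp '2014-01-28 00:00:00') returns epoch seconds;
# a column storing milliseconds needs the bound scaled by 1000.
epoch_s = int(datetime(2014, 1, 28, tzinfo=timezone.utc).timestamp())
epoch_ms = epoch_s * 1000
print(epoch_s, epoch_ms)
```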


1 Comment

You could also use this for the current date with a simple modification, like select extract(epoch from now())

You do not need to convert it to char to compare it.

WHERE to_timestamp(abc/1000) > timestamp '2014-01-28 00:00:00'

I don't think that conversion would be very inefficient, because timestamps are stored internally in a format similar to epoch seconds (admittedly with a different origin and resolution).

If you really want to go the other way:

WHERE abc > extract(epoch from timestamp '2014-01-28 00:00:00')

Comments


Interesting observation, though: while

select count(*) from cb.logs where to_timestamp(timestmp/1000) > timestamp '2014-01-15 00:00:00' and to_timestamp(timestmp/1000) < timestamp '2014-01-15 23:59:59';

takes almost 10 seconds (my db has ~1.5 million records), the below takes only 1.5 seconds

select count(*) from cb.logs where (timestmp > (select extract(epoch from timestamp '2014-01-15 00:00:00') * 1000) and timestmp < (select extract(epoch from timestamp '2014-01-15 23:59:59') * 1000));

and the below about 1 second

select count(*) from cb.logs where (timestmp > extract(epoch from timestamp '2014-01-15 00:00:00') * 1000) and (timestmp < extract(epoch from timestamp '2014-01-15 23:59:59') * 1000);

to count ~40,000 records.

Most likely because of the division, I would say.

2 Comments

The first query cannot use an index defined on the column timestmp whereas the other two can. Check the execution plan.
I'm with horse. There is no way any arithmetic operation could make that difference. Would be interesting to see 'explain analyse' for those cases.
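(The commenters' point can be illustrated outside SQL. A toy Python sketch, with a sorted list standing in for a B-tree index on the raw column and hypothetical millisecond bounds: transforming every row before comparing forces a full scan, while transforming the bound once allows a direct binary search on the stored values.)

```python
import bisect

# Toy stand-in for an index on the raw millisecond column: sorted values.
rows = sorted(1389657600000 + i * 60_000 for i in range(10_000))

lo_s, hi_s = 1389700000, 1389900000  # hypothetical bounds, in epoch seconds

# Non-sargable form (like to_timestamp(timestmp/1000) > ...):
# every row is converted before comparison, so all rows must be visited.
scan = [r for r in rows if lo_s < r / 1000 < hi_s]

# Sargable form (like timestmp > extract(epoch from ...) * 1000):
# the bounds are converted once, then the sorted values are binary-searched.
i = bisect.bisect_right(rows, lo_s * 1000)
j = bisect.bisect_left(rows, hi_s * 1000)

print(len(scan), len(rows[i:j]))
```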

1

select count(*) from cb.logs where to_timestamp(timestmp/1000) > timestamp '2014-01-15 00:00:00' and to_timestamp(timestmp/1000) < timestamp '2014-01-15 23:59:59';  
8600ms

"Aggregate  (cost=225390.52..225390.53 rows=1 width=0)"
"  ->  Seq Scan on logs  (cost=0.00..225370.34 rows=8073 width=0)"
"        Filter: ((to_timestamp(((timestmp / 1000))::double precision) > '2014-01-15 00:00:00'::timestamp without time zone) AND (to_timestamp(((timestmp / 1000))::double precision) < '2014-01-15 23:59:59'::timestamp without time zone))"

2

select count(*) from cb.logs where (timestmp > (select extract(epoch from timestamp '2014-01-15 00:00:00') * 1000) and timestmp < (select extract(epoch from timestamp '2014-01-15 23:59:59') * 1000));
1199ms
"Aggregate  (cost=209245.94..209245.95 rows=1 width=0)"
"  InitPlan 1 (returns $0)"
"    ->  Result  (cost=0.00..0.01 rows=1 width=0)"
"  InitPlan 2 (returns $1)"
"    ->  Result  (cost=0.00..0.01 rows=1 width=0)"
"  ->  Seq Scan on logs  (cost=0.00..209225.74 rows=8073 width=0)"
"        Filter: (((timestmp)::double precision > $0) AND ((timestmp)::double precision < $1))"

Comments
