
I have two servers (say x.x.x.x, the main one, and y.y.y.y, the secondary one).
On the main server I have a Django application running that stores its data in a Postgres database; the secondary one is completely empty.
Every minute the application writes about 900 rows to one table, so it has eventually grown to over 3 million rows, and processing all those objects (filtering, sorting) has become really slow because of the volume. However, I only need the rows written within the last 3 days, no more. Still, I cannot simply delete the older data, because I will need it for analysis in the future, so I have to keep it somewhere.
What I'm thinking of is creating another database on the secondary server and keeping all the extra data there. So I need to transfer all data older than 3 days from the local (primary) server to the remote (secondary) server.
The regularity can be achieved using cron, which is a trivial task.
What's not trivial is the command I need cron to execute. I don't think there is a built-in SQL command to do this, so I'm wondering if this is possible at all.

I think the command should look something like this:

INSERT INTO remote_server:table
SELECT * FROM my_table;

It's also worth mentioning that the table I'm having trouble with is constantly being updated, as I wrote above. Maybe these updates are causing the speed problems when executing filter or sort queries.

2 Comments
  • 3 million records is a small table; this should not be a problem at all. When you say it is slow, what is the elapsed time for a query? Commented Jan 12, 2020 at 18:55
  • About 1 minute, which is really long. There may be some inefficient code, but figuring that out looks harder than moving some data. Obviously it's a temporary solution, but it's a quick one. Commented Jan 13, 2020 at 6:04

1 Answer


You have several options:

If you want to stick with the manual copy, you can set up a foreign server that connects from the secondary to the main server, then create a foreign table on the secondary to access the table from the main server. Access through the foreign table may already be fast enough that you don't actually need to physically copy the data. But if you want a "disconnected" copy, you can simply run insert into local_table select * from foreign_table, or create a materialized view that is refreshed through cron; see the sketch below.
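For illustration, here is a minimal sketch of that setup using postgres_fdw, run on the secondary server. The server name, credentials, column list, and table names (main_server, archive_table, and so on) are placeholders, not details from the question:

CREATE EXTENSION IF NOT EXISTS postgres_fdw;

-- Define the connection to the main server (x.x.x.x).
CREATE SERVER main_server
    FOREIGN DATA WRAPPER postgres_fdw
    OPTIONS (host 'x.x.x.x', port '5432', dbname 'mydb');

CREATE USER MAPPING FOR CURRENT_USER
    SERVER main_server
    OPTIONS (user 'myuser', password 'mypassword');

-- Mirror the source table's structure; the columns here are assumed,
-- and IMPORT FOREIGN SCHEMA can create the definition automatically.
CREATE FOREIGN TABLE foreign_my_table (
    id         bigint,
    created_at timestamptz,
    payload    text
) SERVER main_server
  OPTIONS (schema_name 'public', table_name 'my_table');

-- "Disconnected" copy: archive everything older than 3 days locally.
INSERT INTO archive_table
SELECT * FROM foreign_my_table
WHERE created_at < now() - interval '3 days';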

Another solution that is a bit easier to set up (but probably slower) is to use the dblink module to access the remote server, along the lines of the sketch below.
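A rough sketch with dblink, again run on the secondary server; the connection string and the column definitions are placeholder assumptions:

CREATE EXTENSION IF NOT EXISTS dblink;

-- Pull rows older than 3 days straight from the main server.
INSERT INTO archive_table
SELECT *
FROM dblink(
       'host=x.x.x.x dbname=mydb user=myuser password=mypassword',
       $$SELECT id, created_at, payload
         FROM my_table
         WHERE created_at < now() - interval '3 days'$$
     ) AS t(id bigint, created_at timestamptz, payload text);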

And finally, you have the option to set up logical replication for that table from the main server to the secondary. Then you don't need any cron job, as any changes on the primary are automatically applied to the table on the secondary server.
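A minimal sketch of the logical replication setup (PostgreSQL 10 or later; the publication and subscription names and the credentials are assumptions, and the main server must run with wal_level = logical):

-- On the main server (x.x.x.x):
CREATE PUBLICATION my_table_pub FOR TABLE my_table;

-- On the secondary server (y.y.y.y), after creating a table
-- with the same structure:
CREATE SUBSCRIPTION my_table_sub
    CONNECTION 'host=x.x.x.x dbname=mydb user=myuser password=mypassword'
    PUBLICATION my_table_pub;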
