
We have an Oracle database running in a production environment. To avoid affecting online processes, we keep only a limited amount of data in it, roughly one year's worth. A job runs every day to delete older entries.

We want to store all of the data on a separate server, without deletion.

So on this long-term server we want to replicate every insert/update/delete, except for the deletion of older rows that happens in the short-term DB.

How can we manage that?

I have read a bit about Oracle GoldenGate. However, I am not a DB expert and it is too complicated for me; I could not work out whether this can be achieved with it.

Maybe it can be done with triggers. I tried to track the tables with triggers and store the primary keys of the rows that changed. I am not sure about the performance of this. And what if a user updates the primary key value itself?
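
Roughly what I tried, with made-up names (ORDERS is a tracked table, ORDERS_CHG_LOG a log table I added for it):

    -- Log table holding the primary keys of changed rows
    CREATE TABLE orders_chg_log (
      order_id    NUMBER,
      change_type VARCHAR2(1),                  -- 'I', 'U' or 'D'
      change_ts   TIMESTAMP DEFAULT SYSTIMESTAMP
    );

    CREATE OR REPLACE TRIGGER trg_orders_chg
    AFTER INSERT OR UPDATE OR DELETE ON orders
    FOR EACH ROW
    BEGIN
      IF INSERTING THEN
        INSERT INTO orders_chg_log (order_id, change_type) VALUES (:NEW.order_id, 'I');
      ELSIF UPDATING THEN
        INSERT INTO orders_chg_log (order_id, change_type) VALUES (:NEW.order_id, 'U');
      ELSIF DELETING THEN
        INSERT INTO orders_chg_log (order_id, change_type) VALUES (:OLD.order_id, 'D');
      END IF;
    END;
    /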

What if there are changes to a table, like adding new columns or altering an existing one?

I do not think there is an easy way to do this. What is my best option?

  • Replicating data is not about programming? While there is an off-the-shelf program that might help with this problem, that does not mean this is not a programming problem. Commented Apr 4 at 22:20

1 Answer


GoldenGate is an extra cost option that has to be managed by DBAs, not developers.

Triggers would work, but would be very invasive - you really don't want the latency and concurrency issues involved in doing synchronous DML over a database link every time your local tables get modified.
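
For illustration only, this is the kind of synchronous trigger over a database link being warned against here; ARCHIVE_LINK and the ORDERS columns are made-up names:

    -- Anti-pattern sketch: every local insert now waits on the remote archive database
    CREATE OR REPLACE TRIGGER trg_orders_sync_repl
    AFTER INSERT ON orders
    FOR EACH ROW
    BEGIN
      -- This makes each local transaction a distributed transaction:
      -- if the archive DB is slow or unreachable, local DML hangs or fails.
      INSERT INTO orders@archive_link (order_id, customer_id, order_date)
      VALUES (:NEW.order_id, :NEW.customer_id, :NEW.order_date);
    END;
    /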

If the number of tables you're concerned about is limited and you're handy with PL/SQL, you can write your own replication/extract job in PL/SQL and schedule it. Use metadata date columns to find new/changed rows, or log changes via trigger into local log tables and have your asynchronous replication job key off those. Ensure your delete job deletes only rows that have already been replicated over.
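
A minimal sketch of such a job, assuming a LAST_MODIFIED column on the source table, a database link named ARCHIVE_LINK to the long-term server, and a small REPL_WATERMARK bookkeeping table (all of these names are assumptions for the example):

    -- Copy rows changed since the last successful run to the archive database
    CREATE OR REPLACE PROCEDURE replicate_orders AS
      v_last_run TIMESTAMP;
      v_this_run TIMESTAMP := SYSTIMESTAMP;
    BEGIN
      SELECT last_run_ts INTO v_last_run
      FROM   repl_watermark
      WHERE  table_name = 'ORDERS';

      -- MERGE so re-running the job is harmless (idempotent upsert on the archive side)
      MERGE INTO orders@archive_link a
      USING (SELECT order_id, status, last_modified
             FROM   orders
             WHERE  last_modified >  v_last_run
             AND    last_modified <= v_this_run) s
      ON (a.order_id = s.order_id)
      WHEN MATCHED THEN UPDATE
        SET a.status = s.status, a.last_modified = s.last_modified
      WHEN NOT MATCHED THEN INSERT (order_id, status, last_modified)
        VALUES (s.order_id, s.status, s.last_modified);

      UPDATE repl_watermark
      SET    last_run_ts = v_this_run
      WHERE  table_name = 'ORDERS';
      COMMIT;
    END;
    /

    -- Schedule it asynchronously, e.g. every 10 minutes
    BEGIN
      DBMS_SCHEDULER.CREATE_JOB(
        job_name        => 'REPL_ORDERS_JOB',
        job_type        => 'STORED_PROCEDURE',
        job_action      => 'REPLICATE_ORDERS',
        repeat_interval => 'FREQ=MINUTELY;INTERVAL=10',
        enabled         => TRUE);
    END;
    /

The production-side purge job can then be restricted to rows with LAST_MODIFIED older than the watermark, so nothing is deleted before it has been copied.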

There are lots of options, none of which are "easy". However, I suggest you reconsider the whole notion. There really is no need to progressively delete rows from tables to avoid affecting online processes. If you design indexes properly for your queries, the amount of data in your tables should not affect the performance of your online workload, because your queries shouldn't have to scan old data, only the rows they actually need.

If the table size is impacting your online queries, then your queries are doing full table scans when they shouldn't be. Tune the queries and index the tables properly to support them. If you have report-like analytic queries that need to scan so much data that index use is inappropriate, use partitioning and partition-prune those queries so they full-scan only the partition(s) needed, skipping the older ones. In either scenario, if properly designed, the amount of historical data in your table should not slow your online workload down. This is a far better solution than trying to create an archive database.
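
As a rough illustration of the partitioning approach (the ORDERS table and its columns are invented for the example), a monthly interval-partitioned table lets date-filtered queries touch only the recent partitions:

    CREATE TABLE orders (
      order_id    NUMBER PRIMARY KEY,
      customer_id NUMBER,
      order_date  DATE NOT NULL,
      status      VARCHAR2(20)
    )
    PARTITION BY RANGE (order_date)
    INTERVAL (NUMTOYMINTERVAL(1, 'MONTH'))
    (PARTITION p_initial VALUES LESS THAN (DATE '2024-01-01'));

    -- Partition pruning: only the partitions covering the last 30 days are scanned,
    -- no matter how much older history the table holds
    SELECT status, COUNT(*)
    FROM   orders
    WHERE  order_date >= SYSDATE - 30
    GROUP  BY status;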
