I work on a system that downloads data from a cloud service into a local database (PostgreSQL, MySQL, ...). I'm now having a performance issue with PostgreSQL: inserting the data takes a long time.
The number of columns and the size of the data may vary. In a sample project, I have a table with approx. 170 columns. There is one unique index - but even after dropping the index, the insert speed did not change.
I'm using the JDBC driver to connect to the database, and I'm inserting data in batches of 250 rows (using NamedParameterJdbcTemplate).
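For reference, a minimal sketch of the batching described above - the table and column names here are made up for the example (the real table has approx. 170 columns):

    import java.util.List;
    import java.util.Map;

    import javax.sql.DataSource;

    import org.springframework.jdbc.core.namedparam.MapSqlParameterSource;
    import org.springframework.jdbc.core.namedparam.NamedParameterJdbcTemplate;
    import org.springframework.jdbc.core.namedparam.SqlParameterSource;

    public class BatchInsertSketch {

        // Hypothetical table and columns; the real table has approx. 170 columns.
        private static final String INSERT_SQL =
                "INSERT INTO downloaded_data (id, col_a, col_b) VALUES (:id, :col_a, :col_b)";

        private static final int BATCH_SIZE = 250;

        private final NamedParameterJdbcTemplate jdbcTemplate;

        public BatchInsertSketch(DataSource dataSource) {
            this.jdbcTemplate = new NamedParameterJdbcTemplate(dataSource);
        }

        // Inserts the rows in batches of 250, mirroring the setup described above.
        public void insertAll(List<Map<String, Object>> rows) {
            for (int from = 0; from < rows.size(); from += BATCH_SIZE) {
                List<Map<String, Object>> chunk =
                        rows.subList(from, Math.min(from + BATCH_SIZE, rows.size()));
                SqlParameterSource[] batch = chunk.stream()
                        .map(MapSqlParameterSource::new)
                        .toArray(SqlParameterSource[]::new);
                jdbcTemplate.batchUpdate(INSERT_SQL, batch);
            }
        }
    }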
Inserting the data took approx. 18 seconds on Postgres; the same data set took just a second on MySQL. That's a huge difference - where does it come from? Is the Postgres JDBC driver that slow? Can it be configured somehow to make it faster? Or am I missing something else? Any other ideas on how to make it faster?
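For what it's worth, one driver-level setting documented for pgjdbc is the reWriteBatchedInserts connection property, which rewrites batched INSERTs into multi-row statements. Whether it closes the gap measured above is untested here; a minimal sketch of enabling it (host, database and credentials are placeholders, not the sample project's actual configuration):

    import java.sql.Connection;
    import java.sql.DriverManager;

    public class PgConnectionSketch {

        // reWriteBatchedInserts is a documented pgjdbc connection property that
        // turns batched single-row INSERTs into multi-row INSERTs on the wire.
        private static final String URL =
                "jdbc:postgresql://localhost:5432/sample?reWriteBatchedInserts=true";

        public static Connection open() throws Exception {
            // Placeholder credentials for the sketch.
            return DriverManager.getConnection(URL, "postgres", "postgres");
        }
    }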
I made a sample project, which is available on GitHub: https://github.com/varad/postgresql-vs-mysql. Everything happens in the LetsGo class, in the "run" method.
Comments:

How are you running the two tests (letsGo.run(Type.MYSQL); letsGo.run(Type.POSTGRES);)? Also, how are you checking the times?

Try create table t1 as select * from your_table limit 250; and then pg_dump --inserts -t t1 to a file, then run that file in psql while measuring the time (with \timing on). This will give you the expected speed of inserting 250 rows on your machine. Then create the index etc. and measure again.