
Questions tagged [postgresql-performance]

Performance issues with PostgreSQL queries

0 votes
1 answer
50 views

I just discovered that logical replication doesn't work between different schemas, like you cannot publish schema1.table1 in server1 to schema2.table1 in server2. In my setup, I have multiple servers ...
sophia
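
A minimal sketch of the limitation described above, with hypothetical object names: logical replication matches tables by their fully qualified name, so the subscriber must have a table with the same schema and name as the published one, and there is no built-in way to remap schema1.table1 to schema2.table1.

    -- On the publisher (server1):
    CREATE PUBLICATION pub_table1 FOR TABLE schema1.table1;

    -- On the subscriber (server2): changes are applied to schema1.table1,
    -- so that table must exist there with the same qualified name.
    CREATE SUBSCRIPTION sub_table1
        CONNECTION 'host=server1 dbname=mydb user=repl'
        PUBLICATION pub_table1;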
1 vote
1 answer
101 views

I have the following table partitioned by projects. CREATE TABLE IF NOT EXISTS entries ( id bigint NOT NULL, status_id bigint NOT NULL, group_id bigint NOT NULL, project_id bigint ...
Josh
0 votes
2 answers
106 views

In PostgreSQL 15, there is a large non-partitioned table (~2.4 TB) where VACUUM has stopped completing in an acceptable time and has been stuck in the vacuuming indexes phase for 4 days. ...
Fahrenheit
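
A hedged first step for a VACUUM that appears stuck: the built-in progress view shows which phase the worker is in and how many heap blocks and index-vacuum passes it has completed, which helps distinguish "stuck" from "slowly progressing".

    -- One row per running VACUUM; 'phase' will show e.g. 'vacuuming indexes'.
    SELECT p.pid, p.phase, p.heap_blks_total, p.heap_blks_scanned,
           p.heap_blks_vacuumed, p.index_vacuum_count, a.query
    FROM pg_stat_progress_vacuum p
    JOIN pg_stat_activity a ON a.pid = p.pid;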
0 votes
2 answers
90 views

In PostgreSQL 16.9 I have a table Time (duration, resourceId, date, companyId) representing timesheet entries and a table Resources (id, name); I want to list the sum of Time durations per week and employee ...
Lukas Macha
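
A minimal sketch of the aggregation being asked for, assuming the table and column names quoted in the excerpt (Time.duration, Time.resourceId, Time.date, Resources.id, Resources.name) are case-sensitive identifiers:

    -- Sum of durations per calendar week and per employee.
    SELECT date_trunc('week', t."date") AS week_start,
           r.name AS employee,
           sum(t.duration) AS total_duration
    FROM "Time" t
    JOIN "Resources" r ON r.id = t."resourceId"
    GROUP BY week_start, r.name
    ORDER BY week_start, r.name;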
1 vote
0 answers
44 views

My question is about PostgreSQL. I found similar questions for MS SQL server but I don't know if the answers apply here. My table looks like this: scores ====== | ID | UserID | ValidFrom | ValidUntil ...
MrSnrub
0 votes
1 answer
104 views

I saw some log entries that indicated transaction time outliers of up to 10s at times, where transaction times are typically below 1s. To get a view of how often this happens, is there a way to get ...
nsandersen
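
Two common ways to quantify such outliers, sketched here under the assumption that superuser access and the pg_stat_statements extension are available: log every statement above a threshold, or compare typical and worst-case execution times per query.

    -- Log any statement that runs longer than 1 second (takes effect after reload).
    ALTER SYSTEM SET log_min_duration_statement = '1s';
    SELECT pg_reload_conf();

    -- With pg_stat_statements (PostgreSQL 13+ column names):
    SELECT query, calls, mean_exec_time, max_exec_time
    FROM pg_stat_statements
    ORDER BY max_exec_time DESC
    LIMIT 20;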
0 votes
0 answers
53 views

I am getting ERROR: invalid memory alloc request size 2727388320 after I tried vacuuming in EnterpriseDB. It keeps getting bigger and bigger. How can I solve this issue?
MD Nasirul Islam
0 votes
1 answer
53 views

I am modeling my star schema. So, I want to create my dimension table for the customer, and the primary key from the raw table is a varchar. Is it advisable to make a surrogate key that will stand as ...
Chidinma Okeh
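
A minimal sketch of the pattern being asked about, with hypothetical names: the dimension gets a database-generated integer surrogate key, while the varchar key from the raw table is kept as a unique natural key.

    CREATE TABLE dim_customer (
        customer_sk   bigint GENERATED ALWAYS AS IDENTITY PRIMARY KEY,  -- surrogate key
        customer_code varchar(64) NOT NULL UNIQUE,                      -- natural key from the raw table
        customer_name text
    );
    -- Fact tables then reference the narrow integer customer_sk instead of the varchar.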
1 vote
0 answers
92 views

I have a table with around 400k rows. With the current autovacuum configuration, the table is automatically vacuumed and analyzed up to 4 times daily. There are usually a couple of hours between the ...
Ezenwa
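
For reference, per-table autovacuum frequency is usually tuned through storage parameters like the ones below; the table name and values are only illustrative (the global defaults are 0.2 for vacuum and 0.1 for analyze).

    ALTER TABLE my_table SET (
        autovacuum_vacuum_scale_factor  = 0.05,   -- vacuum after ~5% of rows are dead
        autovacuum_analyze_scale_factor = 0.02    -- analyze after ~2% of rows change
    );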
0 votes
1 answer
54 views

How long should a pg_basebackup of around 500 GB usually take? Currently it takes about 7 hours. What could be done to speed up this process? We create the backup locally and move it ...
Johannes
2 votes
1 answer
365 views

I have a SQL statement that runs slowly the first time: the first run takes more than 50s, while the second run takes only 6s. How can I reproduce my observations with the first run of the ...
Dolphin
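
A hedged way to check whether the first-run slowness is simply a cold cache: run the statement twice with buffer statistics and compare how much is read from disk versus served from shared_buffers (big_table is a placeholder; track_io_timing needs sufficient privileges).

    SET track_io_timing = on;
    EXPLAIN (ANALYZE, BUFFERS) SELECT count(*) FROM big_table;
    -- First run: mostly 'shared read' (plus I/O time); second run: mostly 'shared hit'.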
0 votes
1 answer
127 views

I have the following tables: CREATE TABLE IF NOT EXISTS users ( id NUMERIC(20, 0) NOT NULL DEFAULT NEXTVAL('users_sequence') PRIMARY KEY, list_id ...
Hasan Can Saral
6 votes
2 answers
421 views

I have the following table (in PostgreSQL 14.6): create table waste_trajectory ( id uuid default uuid_generate_v4() not null primary key, observation_id uuid not null, ...
6006604
1 vote
0 answers
114 views

My web server is deployed on Kubernetes with horizontal pod scaling and a separate, non-auto-scaling PostgreSQL service which runs both a master and a read-only replica node, with high-availability ...
Alechko
1 vote
1 answer
184 views

We are using PostgreSQL 13 as our core server and encountered a performance bottleneck. The hardware includes 2 CPUs (AMD EPYC 9754, 128 cores / 256 threads each), 128 GB memory, hardware RAID 0 ...
Leon
0 votes
2 answers
36 views

As you know, for best performance in a PostgreSQL database server it is recommended to put the data directory (data_directory) and the pg_wal directory on separate partitions. So how do I set data_directory as /...
Siyavus
0 votes
1 answer
94 views

We are running Postgres 14.12 in RDS and experience very slow I/O reads, around 30 MB/s on index scans. We can't figure out what might be the cause of it. Any ideas as to what we should or could check? ...
edena
0 votes
1 answer
45 views

We have a PostgreSQL instance (with the TimescaleDB extension) running on an Ubuntu server where load has steadily increased over the year. We'd like to keep load under control. For CPU, or load, we'd like to ...
Mikko Ohtamaa
0 votes
3 answers
216 views

My PostgreSQL instance keeps throwing this error, and it temporarily goes away after I restart the database. Here are the logs: ERROR: parallel worker failed to initialize 2025-02-25 21:00:51.586 UTC [...
Starbody
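
As a hedged first check for this error, it can help to compare the per-query parallel worker limits against the overall background-worker budget, or to temporarily run the affected queries without parallelism.

    SHOW max_worker_processes;              -- total background worker slots
    SHOW max_parallel_workers;              -- slots available for parallel queries
    SHOW max_parallel_workers_per_gather;   -- workers one query may request

    -- Per-session workaround: run without parallel workers.
    SET max_parallel_workers_per_gather = 0;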
0 votes
1 answer
62 views

Let's pretend that I have 1,000,000 rows to insert and I don't care about data loss. How much faster would 1 commit after 1,000,000 rows be than 1 commit every 100,000 rows? Same question ...
Chicken Sandwich No Pickles
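
A rough sketch of the comparison being asked about, with a hypothetical target_table; when data loss is acceptable, synchronous_commit = off also reduces the per-commit cost because the server no longer waits for the WAL flush.

    SET synchronous_commit = off;  -- don't wait for WAL flush at commit (data may be lost on crash)

    -- One commit per 100,000-row batch (repeated ten times) ...
    BEGIN;
    INSERT INTO target_table (n) SELECT g FROM generate_series(1, 100000) AS g;
    COMMIT;

    -- ... versus a single commit for all 1,000,000 rows.
    BEGIN;
    INSERT INTO target_table (n) SELECT g FROM generate_series(1, 1000000) AS g;
    COMMIT;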
0 votes
1 answer
112 views

Will a hash index make this query faster? LIKE 'abc%' Hash indices can speed up point queries.
Marlon Brando
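
A short note on the excerpt above: PostgreSQL hash indexes support only the = operator, so they cannot serve a prefix LIKE; the usual way to index LIKE 'abc%' is a B-tree index with text_pattern_ops (or a C-collation column). Table and column names below are hypothetical.

    -- Hash index: usable only for equality lookups such as col = 'abc'.
    CREATE INDEX items_col_hash ON items USING hash (col);

    -- B-tree with text_pattern_ops: usable for prefix searches like col LIKE 'abc%'.
    CREATE INDEX items_col_prefix ON items (col text_pattern_ops);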
0 votes
2 answers
148 views

My table is like this: CREATE TABLE IF NOT EXISTS public.ticks_a_2507 ( tick_time timestamp(6) with time zone NOT NULL, tick_nano smallint NOT NULL, trade_day date NOT NULL, -- other ...
Leon
0 votes
1 answer
2k views

After upgrading the Postgres server from version 15 to version 17 running on RHEL 9, a few of the SQL statements changed their plans and are running pretty slowly. I am basically an Oracle DBA and am thinking of performing the ...
Naveed Iftikhar
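
One hedged first step after a major-version upgrade, assuming it was done with pg_upgrade: optimizer statistics are not carried over to the new cluster, so plans are not representative until the database has been re-analyzed (vacuumdb --analyze-in-stages does the same from the shell).

    -- Rebuild planner statistics for the current database after the upgrade,
    -- then re-check the regressed statements with EXPLAIN (ANALYZE, BUFFERS).
    ANALYZE;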
0 votes
1 answer
376 views

I have a long-running bug where some larger queries sometimes run much, much longer due to being stuck on wait_event MessageQueueSend. The difference can be anything from <100% to 1000s of percent when ...
user20061
0 votes
2 answers
368 views

I'm working with a table that contains approximately 70 million records. I need to create a primary key and several indexes on this table. The SQL queries I'm using are as follows: BEGIN; ALTER TABLE ...
Purushottam Nawale
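
A hedged sketch of session settings that commonly speed up bulk index and primary-key builds like the one in the excerpt; the values and object names are only illustrative and depend on available memory.

    SET maintenance_work_mem = '2GB';            -- more memory per index build
    SET max_parallel_maintenance_workers = 4;    -- allow parallel B-tree builds

    ALTER TABLE big_table ADD PRIMARY KEY (id);
    CREATE INDEX big_table_created_at_idx ON big_table (created_at);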
