1 vote
3 replies
38 views

I run a periodic VACUUM FREEZE on my main database to avoid aggressive vacuuming during busy times; I'm wondering whether it's also safe to run FREEZE on template databases before they reach the threshold (200M ...
Ali
0 votes
0 answers
40 views

I am working with a sqlite database consisting of two tables, a headers table and a data table. The size of headers is negligible and the size of data is roughly 75GB (~465Million rows). The database ...
Jax
0 votes
0 answers
112 views

I have created an external table in one of my Databricks notebooks. Below is the code used to create the table: dataset = spark.read.table('catalog.schema.input_table') dataset_in_scope.write....
DumbCoder
0 votes
0 answers
79 views

A few days ago we received the following error messages and started vacuum/analyze manually: ERROR: database is not accepting commands to avoid wraparound data loss in database "xxxx" HINT: ...
CJ Chang
0 votes
0 answers
64 views

We have a Spring Boot app using a PostgreSQL DB. We used to have some data mapped as CLOBs and BLOBs. Since we decided to move away from large objects, we migrated the database, and while ...
physicsuser
0 votes
1 answer
70 views

import logging import boto3 import sys import argparse from urllib.parse import urlparse import os import signal import time try: sys.path.append(os.path.join(os.path.dirname(__file__), "lib&...
akansha
0 votes
0 answers
420 views

When I use pgAdmin 4 v8.11 to vacuum a table on a remote server outside my local network, it fails with the error: psql: error: connection to server at "xx.xxx.xx.xx", port 5432 failed: SSL ...
jgm_GIS
0 votes
2 answers
156 views

Last night we ran a VACUUM FULL on partitioned tables. A few hours afterwards, I found CPU usage rising with an unknown session called “startup” when checked via the top command. We cannot ...
Gin
0 votes
1 answer
699 views

Here is the documentation from Databricks: "Delta Live Tables performs maintenance tasks within 24 hours of a table being updated. Maintenance can improve query performance and reduce cost by removing ...
Nithish
0 votes
1 answer
124 views

I have a query involving a JSONB column in PostgreSQL, structured as follows: select ((option ->> 'fruit_id')) from table1 t1 where ((option ->> 'fruit_id')) = '1' limit 10 This query is ...
Person239183218930
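The usual fix for this pattern is an expression index on `(option ->> 'fruit_id')`, so the filter no longer evaluates the JSON expression row by row. A minimal sketch of the same technique, using SQLite's `json_extract` as a stand-in for PostgreSQL's `->>` (table and key names are taken from the excerpt; this is an illustration, not the poster's schema):

```python
import json
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE table1 (option TEXT)")
conn.executemany(
    "INSERT INTO table1 VALUES (?)",
    [(json.dumps({"fruit_id": str(i % 100)}),) for i in range(1000)],
)
# Expression index on the extracted key, mirroring
# CREATE INDEX ON table1 ((option ->> 'fruit_id')) in PostgreSQL.
conn.execute(
    "CREATE INDEX idx_fruit ON table1 (json_extract(option, '$.fruit_id'))"
)
plan = conn.execute(
    "EXPLAIN QUERY PLAN SELECT json_extract(option, '$.fruit_id') "
    "FROM table1 WHERE json_extract(option, '$.fruit_id') = '1' LIMIT 10"
).fetchall()
print(plan)  # the plan should mention idx_fruit, i.e. an index search
```

In PostgreSQL itself the indexed expression must match the query's expression exactly for the planner to use it.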
0 votes
0 answers
37 views

Version - PostgreSQL 10.21, compiled by Visual C++ build 1800, 64-bit Platform - Windows We are currently experiencing issues in the database where foreign key (FK) constraints are missing, leading to ...
rootcause000
0 votes
1 answer
954 views

On Postgres 14, we observed a 4x spike in the frequency of autovacuum correlating with our usual morning increase in production traffic. Database logs showed "Aggressive autovacuum to prevent ...
Ravin Abraham
1 vote
1 answer
4k views

I'm connecting to PostgreSQL using command psql "host=localhost port=6432 dbname=mydatabase user=myuser password=mypassword connect_timeout=5" and I want to run vacuum analyze but the ...
Mikko Rantalainen
0 votes
1 answer
529 views

I have a process that runs VACUUM manually on a list of redshift tables on a daily basis to maintain consistent query performance. But sometimes, vacuuming one table takes about 2 hours. Is this ...
thox
0 votes
0 answers
360 views

I fail to understand why the pg_settings parameter vacuum_freeze_min_age is named differently from the table-level parameter autovacuum_freeze_min_age. Isn't the purpose of this setting at the table level to ...
Peter
1 vote
1 answer
404 views

Running vacuum on some of our systems takes 3 seconds for an empty table: create table t (c int); vacuum t; -- 3 seconds vacuum t; -- 3 seconds vacuum t; -- 3 seconds ... On my local installation it ...
Peter
2 votes
2 answers
1k views

I have a table store_record with 45 million records. I want to run a simple count query on the largest database_id. Note I have an index on database_id. SELECT COUNT(*) FROM store_record WHERE ...
Johnny Metz
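For counts like this, Postgres is fastest when it can answer from the index alone (an index-only scan), which in turn depends on VACUUM keeping the visibility map current. The covering-index idea can be sketched with SQLite standing in for Postgres (table and column names from the excerpt; row counts are arbitrary):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE store_record (id INTEGER PRIMARY KEY, database_id INTEGER)"
)
conn.executemany(
    "INSERT INTO store_record (database_id) VALUES (?)",
    [(i % 10,) for i in range(10_000)],
)
conn.execute("CREATE INDEX idx_db ON store_record (database_id)")
# The count touches only database_id, so it can be answered from the
# index alone -- SQLite reports a "COVERING INDEX" search, the analogue
# of a Postgres index-only scan.
plan = conn.execute(
    "EXPLAIN QUERY PLAN SELECT COUNT(*) FROM store_record WHERE database_id = 3"
).fetchall()
(count,) = conn.execute(
    "SELECT COUNT(*) FROM store_record WHERE database_id = 3"
).fetchone()
print(plan, count)
```

In Postgres, a recently vacuumed table plus `EXPLAIN (ANALYZE)` showing "Index Only Scan" is the equivalent check.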
0 votes
0 answers
247 views

I am a newbie to PostgreSQL. My project is in financial transactions, with a few tables holding huge transaction data that see frequent inserts/updates/deletes. Initially when I started, came ...
Anuya Varde
1 vote
1 answer
303 views

I'm using an sqflite database as an asset database containing one table of around 5MB. User activity changes some of the fields in that table. I'm considering applying a scheduled vacuum ...
Nurol Alacaatli
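sqflite wraps the ordinary SQLite engine, so an alternative to a full scheduled `VACUUM` is incremental auto-vacuum mode, which lets freed pages be reclaimed in small steps. A sketch of that mode, written in Python for testability (file location and data sizes are arbitrary):

```python
import os
import sqlite3
import tempfile

path = os.path.join(tempfile.mkdtemp(), "app.db")
conn = sqlite3.connect(path)
# auto_vacuum must be set before data exists, or be followed by VACUUM
conn.execute("PRAGMA auto_vacuum = INCREMENTAL")
conn.execute("VACUUM")  # rebuild the file so the setting takes effect
conn.execute("CREATE TABLE t (v TEXT)")
conn.executemany("INSERT INTO t VALUES (?)", [("x" * 500,) for _ in range(2000)])
conn.execute("DELETE FROM t")  # freed pages go onto the freelist
conn.commit()
(freelist,) = conn.execute("PRAGMA freelist_count").fetchone()
# Reclaim the free pages piecemeal (an optional page-count argument
# bounds how much work each call does).
conn.execute("PRAGMA incremental_vacuum").fetchall()
(after,) = conn.execute("PRAGMA freelist_count").fetchone()
print(freelist, after)
```

The same PRAGMAs can be issued through sqflite's `execute`/`rawQuery`; calling `incremental_vacuum` with a small page count from a periodic task spreads the cost instead of paying for one long VACUUM.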
0 votes
1 answer
4k views

I've got a Postgres 11 database. The database has logical dumps (REPLICA using streaming replication). Am I right in thinking that if I've got tables into which information is inserted but not ...
Gerzzog
4 votes
2 answers
4k views

Is it possible to run PostgreSQL 11's VACUUM FULL for a short while and then get some benefit? Or does cancelling it midway cause all of its progress to be lost? I've read about pg_repack (https://aws....
Neil C. Obremski
3 votes
1 answer
4k views

The Postgres docs say that partitioned tables are not processed by autovacuum. But I still see that the last_autovacuum column of pg_stat_user_tables is populated with recent timestamps for live partitions....
user3714601
1 vote
0 answers
399 views

I have a rather large sqlite3 database that I've built/populated through Python over a few months. I did many inserts/deletes etc. while putting it together, and it now uses the majority of my ...
Emi OB
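For context on what `VACUUM` buys in this situation: deleting rows only marks pages free inside the SQLite file; the file itself shrinks only when `VACUUM` rewrites it. A small self-contained demonstration (file location and sizes are arbitrary):

```python
import os
import sqlite3
import tempfile

path = os.path.join(tempfile.mkdtemp(), "big.db")
conn = sqlite3.connect(path)
conn.execute("CREATE TABLE t (v TEXT)")
conn.executemany("INSERT INTO t VALUES (?)", [("x" * 1000,) for _ in range(5000)])
conn.commit()
before = os.path.getsize(path)
conn.execute("DELETE FROM t")  # frees pages inside the file only
conn.commit()
conn.execute("VACUUM")         # rewrites the file and returns space to the OS
conn.close()
after = os.path.getsize(path)
print(before, after)  # the file is markedly smaller after VACUUM
```

Note that VACUUM needs scratch space up to roughly the size of the original file while it runs.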
0 votes
1 answer
427 views

Let's say I have an HDF5 dataset with maxshape=(None,1000), chunk=(1,1000). Whenever I need to delete a row I just zero it (many times): ds[ix,:] = 0. What is the fastest way to vacuum these zeroed rows ...
sten
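HDF5 has no in-place row deletion, so the usual answer is mask-and-copy: build a boolean mask of the non-zero rows and write the survivors to a new dataset. The core idea, sketched with plain Python lists standing in for `ds` (with h5py one would apply the mask to the loaded array and create a fresh dataset from the result):

```python
# Mask-and-copy compaction: rows were "deleted" by zeroing them,
# so keep only the rows that still contain a non-zero value.
rows = [
    [1, 2, 3],
    [0, 0, 0],  # zeroed ("deleted") row
    [4, 5, 6],
    [0, 0, 0],
]
keep = [any(v != 0 for v in row) for row in rows]          # boolean mask
compacted = [row for row, k in zip(rows, keep) if k]       # surviving rows
print(compacted)  # → [[1, 2, 3], [4, 5, 6]]
```

Because the file's chunks are (1,1000), copying survivors chunk by chunk into a new dataset and then deleting the old one is typically the practical route; the file itself shrinks only after `h5repack`.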
1 vote
0 answers
1k views

I'm struggling with a bloated table that I'm unable to shrink. It has just 6 rows but its size is 140MB+, and it's continuously updated/deleted by quick transactions. I tried using VACUUM and VACUUM ...
Giorgio Morina
