
How can I load a large csv.gz file into PostgreSQL without first unzipping it to a plain .csv file? I tried a named pipe (mkfifo pipename), but it didn't work for me. Is there another way to solve this?
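For reference, a named-pipe attempt along those lines looks roughly like this (database, table, and pipe names are placeholders):

mkfifo /tmp/csv_pipe                  # create the named pipe
zcat file.csv.gz > /tmp/csv_pipe &    # decompress into the pipe in the background
psql -U username -d database -c "\copy tablename FROM '/tmp/csv_pipe' WITH (FORMAT csv, HEADER true)"
rm /tmp/csv_pipe                      # clean up the pipe afterwards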

I have also tried to load it from the local machine into PostgreSQL with the following command:

zcat file.csv.gz | psql -U username -d database

Result: out of memory

Need: I want to load a large csv.gz file (around 15+ GB) from CentOS into a PostgreSQL database.

1 Comment

stackoverflow.com/a/41741644/2235885 ("didn't work for me" is not a very useful description. What exactly did you try? What went wrong? What was the error message?) Commented Sep 4, 2018 at 14:26

3 Answers

Answer 1 (score 7)

Note that this should also work from inside psql:

\copy TABLE_NAME FROM PROGRAM 'gzip -dc FILENAME.csv.gz' DELIMITER ',' CSV HEADER NULL ''
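The same meta-command can also be run non-interactively by passing it to psql with -c; the user, database, table, and file names here are placeholders:

psql -U username -d database -c "\copy TABLE_NAME FROM PROGRAM 'gzip -dc FILENAME.csv.gz' DELIMITER ',' CSV HEADER NULL ''"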


2 Comments

Hi @Vincent, any ideas on how to use this when working with multiple gzip files?
@relayino, as @kemin-zhou pointed out in another answer, you can use zcat, which supports multiple files.
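To illustrate the multiple-file case from the comment above: zcat accepts several archives (or a shell glob) and concatenates their output. This assumes the individual files have no header rows; if they do, the extra header lines would be loaded as data. Table and file names are placeholders:

\copy TABLE_NAME FROM PROGRAM 'zcat part1.csv.gz part2.csv.gz' WITH (FORMAT csv)
\copy TABLE_NAME FROM PROGRAM 'zcat part_*.csv.gz' WITH (FORMAT csv)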
Answer 2 (score 4)

Just to share my simple example using zcat instead of gzip: simply less typing. I am using zcat to expand the gzipped file.

\copy tmptable from program 'zcat O1variant.tab.gz' with (format csv, delimiter E'\t', header TRUE)
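One thing worth noting: the target table has to exist before \copy is run. A hypothetical definition (the real columns of O1variant.tab.gz will differ) might look like:

-- hypothetical columns; adjust to match the actual tab-separated file
CREATE TABLE tmptable (chrom text, pos integer, ref text, alt text);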


Answer 3 (score 0)

This works for me:
7z e -so file.csv.gz | psql -U username -d databasename -c "COPY tablename FROM STDIN WITH (FORMAT csv, HEADER true);"

I use 7-Zip here, but you can use gzip instead; see the example below.
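For example, the gzip-based equivalent of the same pipeline, with the same placeholder names, would be:

gzip -dc file.csv.gz | psql -U username -d databasename -c "COPY tablename FROM STDIN WITH (FORMAT csv, HEADER true);"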

