I would like a way to get into the PostgreSQL container and take a data dump from it.
11 Answers
Use the following command from a UNIX or Windows terminal (despite the placeholder's name, pg_dump's positional argument is the database name):
docker exec <container_name> pg_dump <schema_name> > backup
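To load such a dump back, the usual counterpart is psql reading the file over stdin; a sketch reusing the same placeholders (note the -i flag, so docker exec forwards stdin):
docker exec -i <container_name> psql <schema_name> < backup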
The following command will dump only the data from all tables, as INSERT statements:
docker exec <container_name> pg_dump --column-inserts --data-only <schema_name> > inserts.sql
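Since --data-only omits the schema, the tables must already exist in the target database before replaying the file; a sketch of the replay, again over stdin:
docker exec -i <container_name> psql <schema_name> < inserts.sql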
5 Comments
docker exec <container_name> pg_dump -U <user> --column-inserts --data-only <schema_name> > inserts.sql
I had to export PGUSER=postgres to make it work (tested in Postgres 9.6).
I have a container named postgres with a mounted volume -v /backups:/backups.
To back up the database my_db as a gzipped dump, I use:
docker exec postgres pg_dump -U postgres -F t my_db | gzip >/backups/my_db-$(date +%Y-%m-%d).tar.gz
Now I have
user@my-server:/backups$ ls
my_db-2016-11-30.tar.gz
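To restore such a dump later, pg_restore understands the tar format; a sketch assuming the same container and /backups mount (gunzip first, since pg_restore does not read gzip itself, and the target database must already exist):
gunzip /backups/my_db-2016-11-30.tar.gz
docker exec postgres pg_restore -U postgres -d my_db /backups/my_db-2016-11-30.tar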
7 Comments
I get no such file or directory: /backups/my_db-2017-03-19.tar.gz. Do you have an idea why?
Try wrapping it in a shell so the redirection happens inside the container: docker exec -t postgres bash -c 'pg_dump -U postgres -F t my_db | gzip >/backups/my_db-$(date +%Y-%m-%d).tar.gz'
This isn't making a tar.gz, it's doing just a gz, but you're naming the file with the extension .tar and then gzipping it. If you gunzip it, you get an unreadable file called my_db-date.tar, but if you rename the .tar to .sql, it's actually an SQL file. You should probably make it output just "filename.gz", or use tar -cz if you actually want a .tar.gz file extension.
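Following that last comment, if a gzipped plain-SQL dump is what you actually want, a variant along those lines might be (a sketch; -F t dropped so pg_dump emits plain SQL, and bash -c keeps the redirection inside the container):
docker exec postgres bash -c 'pg_dump -U postgres my_db | gzip > /backups/my_db-$(date +%Y-%m-%d).sql.gz'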
Although the mountpoint solution described in another answer looked promising, the following is the only solution that worked for me after multiple iterations:
docker run -it -e PGPASSWORD=my_password postgres:alpine pg_dump -h hostname -U my_user my_db > backup.sql
What was unique in my case: I have a password on the database that needs to be passed in; I needed to specify the image tag (alpine); and the host's version of the psql tools differed from the Docker image's.
1 Comment
docker exec -e PGPASSWORD=mypass container pg_dump -U myuser mydb > file.bkup.sql
See the PostgreSQL documentation on Backup and Restore. As others have described, use docker exec to run commands in containers, in this case either pg_dump or pg_dumpall to create dump files. Write them to a Docker volume to prevent increasing the container's size and to provide access to the dumps from outside the container.
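For a whole-cluster dump, pg_dumpall follows the same pattern as the pg_dump form below; a sketch, assuming a volume mounted at /db-backups (pg_dumpall also accepts -f for the output file):
docker exec <container_name> pg_dumpall -U <db_user> -f /db-backups/all.sql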
TLDR
docker exec <container_name> pg_dump [-U <db_user>] -f <filepath> <db>
e.g.
docker exec db pg_dump -U admin -f /db-backups/db.pg_dump.bak nextcloud
Explanation
Although output redirection is ubiquitous throughout the documentation, you will have trouble with it when using docker exec.
docker exec db pg_dump db > db.pg_dump.bak
What you want to happen
docker exec db (pg_dump db > db.pg_dump.bak)
What is actually happening
(docker exec db pg_dump db) > db.pg_dump.bak
I had trouble trying to use shell quoting to fix this, maybe because of how it is treated in the container. Fortunately, we don't have to use output redirection at all. The man page for pg_dump documents a -f option that takes the destination instead. This avoids the problem entirely.
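One caveat: with -f the file is written inside the container, so unless /db-backups is a mounted volume you still need to copy it out. docker cp does that (using the example names above):
docker cp db:/db-backups/db.pg_dump.bak .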
Another workaround is to start the PostgreSQL container with a volume mounted at the location of the dump files,
like docker run -v <host_path>:<container_path> ...
Then run docker inspect on the running container:
docker inspect <container_id>
You will find a "Volumes" tag inside, with the corresponding location. Go to that location and you will find all the PostgreSQL/MySQL files. It worked for me; let us know if it works for you too.
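Note that on current Docker versions the volume details appear under a "Mounts" key rather than "Volumes"; a quick way to print just that section (a sketch) is:
docker inspect --format '{{ json .Mounts }}' <container_id>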
Good luck
To run pg_dump against a container that has a Postgres user and password configured, you need to supply those credentials as container environment variables. For example:
docker run -it --rm --link <container_name>:<data_container_name> -e POSTGRES_PASSWORD=<password> postgres /usr/bin/pg_dump -h <data_container_name> -d <database_name> -U <postgres_username> > dump.sql
1 Comment
--link is a legacy feature.
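As a sketch of the currently recommended alternative, a user-defined network replaces --link (the network name pg-net is illustrative):
docker network create pg-net
docker run -d --name <container_name> --network pg-net -e POSTGRES_PASSWORD=<password> postgres
docker run -it --rm --network pg-net -e PGPASSWORD=<password> postgres pg_dump -h <container_name> -U <postgres_username> <database_name> > dump.sql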