Use gsutil for GCS Operations:
First, you'll need gsutil, Google's command-line tool for Cloud Storage, to download or upload the data from a machine that can reach your Cloud SQL instance (for example, your workstation or a Compute Engine VM).
# Download from GCS to local or compute engine instance
gsutil cp gs://my-bucket/test-data.gz .
# Decompress if needed
gunzip test-data.gz
# Use psql or another client to import data
psql -h <cloudsql-instance-ip> -U myuser -d mydb -c "\copy mytable FROM './test-data' WITH CSV HEADER;"
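If the dataset is large, you can also skip the intermediate decompressed file and stream the data straight into psql. This is only a sketch reusing the placeholder names above; PSTDIN tells \copy to read from psql's standard input:
# Stream the decompressed CSV directly into the table
gunzip -c test-data.gz | psql -h <cloudsql-instance-ip> -U myuser -d mydb -c "\copy mytable FROM PSTDIN WITH CSV HEADER;"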
Upload Data:
Similarly, to export data, write it to a local file and then upload that file to GCS.
# Export data to local file
psql -h <cloudsql-instance-ip> -U myuser -d mydb -c "\copy mytable TO '/path/to/output.csv' WITH CSV HEADER;"
# Upload to GCS
gsutil cp /path/to/output.csv gs://my-bucket/output.csv
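If you want the upload to mirror the compressed input from the import example, you can optionally gzip the file first (same placeholder names as above):
# Compress before uploading
gzip /path/to/output.csv
gsutil cp /path/to/output.csv.gz gs://my-bucket/output.csv.gz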
Method 2: Using Cloud Functions or Cloud Run
For a more automated approach, consider using Google Cloud Functions or Cloud Run:
Cloud Function Example:
import os
from google.cloud import storage
import psycopg2

def import_data_from_gcs(event, context):
    # The GCS trigger event carries the bucket and object name of the uploaded file
    bucket_name = event['bucket']
    file_name = event['name']

    # Download the file to a temporary location
    storage_client = storage.Client()
    bucket = storage_client.bucket(bucket_name)
    blob = bucket.blob(file_name)
    file_path = f'/tmp/{file_name}'
    blob.download_to_filename(file_path)

    # Connect to Cloud SQL through the Unix socket exposed to the function
    # (in production, load credentials from Secret Manager rather than hardcoding them)
    conn = psycopg2.connect(
        dbname='yourdbname',
        user='youruser',
        password='yourpassword',
        host='/cloudsql/your-project:region:instance-name'
    )
    cur = conn.cursor()

    # Import the CSV data into the target table
    with open(file_path, 'r') as f:
        cur.copy_expert("COPY your_table FROM STDIN WITH CSV HEADER", f)
    conn.commit()
    cur.close()
    conn.close()

    # Remove the temporary file
    os.remove(file_path)
Similar logic can be used for export by reversing the process, as sketched below.
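As a rough sketch of that reverse direction (reusing the placeholder credentials, table, bucket, and file names from above, all of which you would replace with your own):
def export_data_to_gcs(event, context):
    # Placeholders: destination bucket and object name are assumptions
    bucket_name = 'my-bucket'
    file_name = 'output.csv'
    file_path = f'/tmp/{file_name}'

    # Connect to Cloud SQL through the Unix socket, as in the import function
    conn = psycopg2.connect(
        dbname='yourdbname',
        user='youruser',
        password='yourpassword',
        host='/cloudsql/your-project:region:instance-name'
    )
    cur = conn.cursor()

    # Write the table out as CSV to a temporary file
    with open(file_path, 'w') as f:
        cur.copy_expert("COPY your_table TO STDOUT WITH CSV HEADER", f)
    cur.close()
    conn.close()

    # Upload the CSV to GCS and clean up
    storage_client = storage.Client()
    bucket = storage_client.bucket(bucket_name)
    bucket.blob(file_name).upload_from_filename(file_path)
    os.remove(file_path)
To run the import function automatically whenever a new file lands in the bucket, one possible deployment command (the function name, runtime, and bucket are placeholders) is:
gcloud functions deploy import_data_from_gcs \
  --runtime python311 \
  --trigger-bucket my-bucket \
  --entry-point import_data_from_gcs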