
I am running a SQL query from a Python API and want to collect the data in a structured, column-wise .CSV format (values under their headers).

This is the code I have so far:

import csv

sql = "SELECT id, author FROM researches WHERE id < 20"
cursor.execute(sql)
data = cursor.fetchall()
print(data)

with open('metadata.csv', 'w', newline='') as f_handle:
    writer = csv.writer(f_handle)
    header = ['id', 'author']
    writer.writerow(header)
    for row in data:
        writer.writerow(row)

Now the data is printed on the console but doesn't end up in the .CSV file. This is the output I get:

What am I missing?

  • What problem are you having? Python has csv.writer to write CSV to a file, so all you have to do is read the SQL results into a list. Commented Nov 4, 2017 at 5:27
  • What have you tried so far? Commented Nov 4, 2017 at 5:30
  • The problem is I don't know how to do it. Can you help me with sample code based on this query? Commented Nov 4, 2017 at 5:30
  • I have edited the code. Now I am able to get the data, but I am having difficulty writing it to a .csv file. Commented Nov 4, 2017 at 5:37
  • Check out the examples in the docs. I'm not sure, but you might be able to write rows from the cursor directly into a csv.writer(). Commented Nov 4, 2017 at 5:39

4 Answers


Here is a simple example of what you are trying to do:

import sqlite3 as db
import csv

# Run your query, the result is stored as `data`
with db.connect('vehicles.db') as conn:
    cur = conn.cursor()
    sql = "SELECT make, style, color, plate FROM vehicle_vehicle"
    cur.execute(sql)
    data = cur.fetchall()

# Create the csv file
with open('vehicle.csv', 'w', newline='') as f_handle:
    writer = csv.writer(f_handle)
    # Add the header/column names
    header = ['make', 'style', 'color', 'plate']
    writer.writerow(header)
    # Iterate over `data`  and  write to the csv file
    for row in data:
        writer.writerow(row)
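If you'd rather not hardcode the column names, they can be read from cursor.description, which holds one 7-tuple per result column with the name in position 0. A minimal sketch of that variation (the in-memory database and the vehicle_vehicle table are demo setup so the example runs on its own; replace them with your real connection):

```python
import sqlite3 as db
import csv

# Demo setup so the example is runnable; replace with your own database
conn = db.connect(':memory:')
cur = conn.cursor()
cur.execute("CREATE TABLE vehicle_vehicle (make TEXT, style TEXT, color TEXT, plate TEXT)")
cur.execute("INSERT INTO vehicle_vehicle VALUES ('Ford', 'sedan', 'blue', 'ABC123')")

cur.execute("SELECT make, style, color, plate FROM vehicle_vehicle")
# cursor.description is a sequence of 7-tuples; item 0 is the column name
header = [col[0] for col in cur.description]
data = cur.fetchall()

with open('vehicle.csv', 'w', newline='') as f_handle:
    writer = csv.writer(f_handle)
    writer.writerow(header)
    writer.writerows(data)
```

This keeps the header in sync with the SELECT list automatically if the query changes.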

3 Comments

Thanks, @diek. Just one question: after getting everything inside data, we open 'vehicle.csv' — where does this file come from and what part is it playing? I mean, we haven't written data to 'vehicle.csv', so...?
@Hayat Python will create the file. The role it plays is the answer to your question: you want to create a csv file.
@Hayat I added comments to the code; hopefully they help you understand. Reading docs.python.org/3/tutorial/inputoutput.html will also help.
import pandas as pd
from sqlalchemy import create_engine
from urllib.parse import quote_plus

params = quote_plus(r'Driver={SQL Server};Server=server_name;Database=DB_name;Trusted_Connection=yes;')
engine = create_engine("mssql+pyodbc:///?odbc_connect=%s" % params)

sql_string = '''SELECT id, author FROM researches WHERE id < 20'''

final_data_fetch = pd.read_sql_query(sql_string, engine)
# index=False keeps the DataFrame's row index out of the csv
final_data_fetch.to_csv('file_name.csv', index=False)

Hope this helps!

2 Comments

Using pandas makes it quicker and easier to work with data in Python. Hopefully this code helps; I use it for connecting to the DB every day at work.
This method is slower and can cause memory errors if you're pulling in a lot of data (at least it does for me), but it's a nicer method for smaller data.
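To address the memory concern in the comment above, pandas can also stream the query in chunks instead of loading the whole result at once: passing chunksize to read_sql_query returns an iterator of DataFrames. A sketch of that, using an in-memory sqlite3 connection as stand-in demo setup (the researches table, chunk size, and file name are illustrative):

```python
import sqlite3
import pandas as pd

# Demo setup so the example is runnable; replace with your own engine/connection
conn = sqlite3.connect(':memory:')
conn.execute("CREATE TABLE researches (id INTEGER, author TEXT)")
conn.executemany("INSERT INTO researches VALUES (?, ?)",
                 [(i, f"author_{i}") for i in range(30)])

sql_string = "SELECT id, author FROM researches WHERE id < 20"

# chunksize makes read_sql_query yield DataFrames of up to 5 rows each
first = True
for chunk in pd.read_sql_query(sql_string, conn, chunksize=5):
    # Write the header only once, then append the remaining chunks
    chunk.to_csv('file_name.csv', mode='w' if first else 'a',
                 header=first, index=False)
    first = False
```

Only one chunk is held in memory at a time, so the peak footprint is bounded by the chunk size rather than the full result set.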

With MySQL: export a UTF-8 CSV using the mysqlclient library.

import csv
import sys

import MySQLdb as mariadb

tablelue = "extracted_table"

try:
    conn = mariadb.connect(
        host="127.0.0.1",
        port=3306,
        user="me",
        password="mypasswd",
        database="mydb")

    cur = conn.cursor()

    # Fetch the column names for the header row
    cur.execute("show columns from " + tablelue)
    work = [x[0] for x in cur.fetchall()]

    # Fetch the table contents
    cur.execute("SELECT * FROM " + tablelue)
    wdata = cur.fetchall()

    # Create the csv file
    fichecrit = tablelue + ".csv"
    with open(fichecrit, 'w', newline='', encoding="utf8") as f_handle:
        writer = csv.writer(f_handle, delimiter=";")
        # Add the header/column names
        writer.writerow(work)
        # Iterate over `wdata` and write to the csv file
        for row in wdata:
            writer.writerow(row)

    conn.close()

except Exception as e:
    print(f"Error: {e}")

sys.exit(0)


You can dump all the results to the csv file without looping:

data = cursor.fetchall()
...
writer.writerows(data)

2 Comments

Note: for large datasets, this will take a huge amount of memory and is not feasible.
Processing a large amount of data was not the topic. But if that were a problem, it could be solved by reading from the database with fetchmany() in a while loop and writing each batch for as long as there is data. An example of an optimization.
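The fetchmany() approach described in the comment above can be sketched like this (the in-memory sqlite3 database, table, batch size, and file name are illustrative demo setup, not part of the original answer):

```python
import csv
import sqlite3

# Demo setup so the example is runnable; replace with your own cursor
conn = sqlite3.connect(':memory:')
cursor = conn.cursor()
cursor.execute("CREATE TABLE researches (id INTEGER, author TEXT)")
cursor.executemany("INSERT INTO researches VALUES (?, ?)",
                   [(i, f"author_{i}") for i in range(100)])
cursor.execute("SELECT id, author FROM researches")

with open('metadata.csv', 'w', newline='') as f_handle:
    writer = csv.writer(f_handle)
    writer.writerow(['id', 'author'])
    # Stream the result set in fixed-size batches to bound memory use
    while True:
        batch = cursor.fetchmany(25)
        if not batch:  # fetchmany returns an empty sequence when exhausted
            break
        writer.writerows(batch)
```

Only one batch of rows is in memory at a time, so this scales to result sets far larger than RAM.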
