Python CSV from database query: adding a custom column to the CSV file

Here is what I'm trying to achieve. My current code works fine: the query runs against my SQL Server instance. But I will need to gather information from several servers. How would I add a column to the CSV with the dbserver listed in that column?
import pyodbc
import csv
f = open("dblist.ini")
dbserver,UID,PWD = [ variable[variable.find("=")+1 :] for variable in f.readline().split("~")]
connectstring = "DRIVER={SQL server};SERVER=" + dbserver + ";DATABASE=master;UID="+UID+";PWD="+PWD
cnxn = pyodbc.connect(connectstring)
cursor = cnxn.cursor()
fd = open('mssql1.txt', 'r')
sqlFile = fd.read()
fd.close()
cursor.execute(sqlFile)
with open("out.csv", "wb") as csv_file:
csv_writer = csv.writer(csv_file, delimiter = '!')
csv_writer.writerow([i[0] for i in cursor.description]) # write headers
csv_writer.writerows(cursor)

You could add the extra information in your SQL query. For example:
select 'dbServerName', * from table;
Your cursor will then return an extra column in front of your real data that holds the DB server name. The downside of this method is that you transfer a little extra data.
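Alternatively, the column can be added on the Python side as the CSV is written, which avoids sending the extra column over the wire. A minimal sketch, reusing the dbserver variable, cursor, and !-delimited writer from the question (the dbserver header name is just illustrative):
import csv

with open("out.csv", "w", newline="") as csv_file:
    csv_writer = csv.writer(csv_file, delimiter='!')
    # One extra header for the new column, then the real column names.
    csv_writer.writerow(["dbserver"] + [i[0] for i in cursor.description])
    # Prepend the server name to every data row.
    csv_writer.writerows([dbserver] + list(row) for row in cursor)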

Related

Export MS SQL table with `null` values to CSV

I am trying to figure out how to create a CSV file that contains the null values I have in my MS SQL database table. Right now the script I am using fills the null values with '' (empty strings). How am I supposed to instruct the CSV writer to keep the null values?
Example of the source table:
ID,Date,Entitled Key
10000002,NULL,805
10000003,2020-11-22 00:00:00,805
export_sql_to_csv.py
import csv
import os
import pyodbc
filePath = os.getcwd() + '/'
fileName = 'rigs_latest.csv'
server = 'ip-address'
database = 'db-name'
username = 'admin'
password = 'password'
# Database connection variable.
connect = pyodbc.connect('DRIVER={ODBC Driver 17 for SQL Server};SERVER=' +
                         server + ';DATABASE=' + database + ';UID=' + username + ';PWD=' + password)
cursor = connect.cursor()
sqlSelect = "SELECT * FROM my_table"
cursor.execute(sqlSelect)
results = cursor.fetchall()
# Extract the table headers.
headers = [i[0] for i in cursor.description]
# Open CSV file for writing.
csvFile = csv.writer(open(filePath + fileName, 'w', newline=''),
                     delimiter=',', lineterminator='\r\n',
                     quoting=csv.QUOTE_NONE, escapechar='\\')
# Add the headers and data to the CSV file.
csvFile.writerow(headers)
csvFile.writerows(results)
Example of the result after running the above script:
ID,Date,Entitled Key
10000002,,805
10000003,2020-11-22 00:00:00,805
The main reason I would like to keep the null values is that I want to convert the CSV file into a series of INSERT SQL statements and execute those against an Aurora Serverless PostgreSQL database. That database doesn't accept empty strings for the date type and fails with: ERROR: invalid input syntax for type date: ""
As described in the docs for the csv module, the None value is written to CSV as '' (empty string) by design; all other non-string values are stringified with str() before being written.
So if you want your CSV to contain the string null instead of '', you have to modify the values before they reach the CSV writer. Perhaps:
results = [
    ['null' if val is None else val for val in row]
    for row in results
]
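If the result set is large, the same substitution can be applied lazily at write time instead of materializing a second list. A minimal sketch, replacing the writerows call in the script above:
csvFile.writerows(
    ['null' if val is None else val for val in row]
    for row in results
)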

Export Oracle database column headers with column names into csv

I am making a program that fetches column names and dumps the data into CSV format. Everything is working just fine and the data is dumped into the CSV; the problem is that I am not able to get the headers into the CSV. If I open the exported CSV file in Excel, only the data shows up, not the column headers. How do I do that?
Here's my code:
import cx_Oracle
import csv
dsn_tns = cx_Oracle.makedsn(--Details--)
conn = cx_Oracle.connect(--Details--)
d = conn.cursor()
csv_file = open("profile.csv", "w")
writer = csv.writer(csv_file, delimiter=',', lineterminator="\n", quoting=csv.QUOTE_NONNUMERIC)
d.execute("""
select * from all_tab_columns where OWNER = 'ABBAS'
""")
tables_tu = d.fetchall()
for row in tables_tu:
    writer.writerow(row)
conn.close()
csv_file.close()
What code do I use to export headers too in csv?
Place this just above your for loop:
writer.writerow(i[0] for i in d.description)
Because d.description is a read-only attribute containing 7-tuples that look like:
(name, type_code, display_size, internal_size, precision, scale, null_ok)
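Since d.description also gives you the column names, an alternative is to pair headers and rows by name with csv.DictWriter. A minimal sketch, assuming the query has just been executed on the cursor d and using profile.csv from the question:
import csv

headers = [col[0] for col in d.description]
with open("profile.csv", "w", newline="") as csv_file:
    writer = csv.DictWriter(csv_file, fieldnames=headers,
                            quoting=csv.QUOTE_NONNUMERIC)
    writer.writeheader()  # writes the column names as the first row
    for row in d.fetchall():
        writer.writerow(dict(zip(headers, row)))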

Python - execute SQL queries from a list of tables and store results in a separate file for each table name

I am trying to read a file which contains a list of table_names and I want to execute a simple query:
SELECT *
FROM $TABLE_NAME
from each SQL Server database.
I need to store the results in a separate .csv file for each table. Can you please help me achieve this?
You have to read the data from the server and write it into a CSV.
Get the data from SQL:
import pyodbc
import csv
mydb = pyodbc.connect("Driver={SQL Server Native Client 11.0};"
                      "Server=Server;"
                      "Database=Database;"
                      "uid=username;pwd=password")
cursor = mydb.cursor()
sql = """SELECT * FROM $TABLE_NAME"""
cursor.execute(sql)
row = cursor.fetchall()
Write the data into a CSV:
with open('test.csv', 'w', newline='') as f:
    a = csv.writer(f, delimiter=',')
    a.writerow(["Header 1", "Header 2"])  # etc.
    a.writerows(row)
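To get one file per table, as the question asks, the two fragments can be combined in a loop over the table names. A minimal sketch, assuming a plain-text file tables.txt with one table name per line (the file names are illustrative) and the cursor from above:
with open("tables.txt") as table_list:
    for line in table_list:
        table_name = line.strip()
        if not table_name:
            continue  # skip blank lines
        cursor.execute("SELECT * FROM " + table_name)
        with open(table_name + ".csv", "w", newline="") as f:
            writer = csv.writer(f)
            writer.writerow([col[0] for col in cursor.description])  # headers
            writer.writerows(cursor.fetchall())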
Give this code a try.
import pyodbc
import csv

# SQL Server connection settings
conn = pyodbc.connect("Driver={SQL Server Native Client 11.0};"
                      "Server=server;"
                      "Database=dbName;"
                      "uid=User;pwd=password;"
                      "Trusted_Connection=yes;")
cursor = conn.cursor()

inputFile = open("absolute_inputfile_path", "r")  # open for reading, not "w+"
outputDataLocation = "absolute_outputfile_path"

# Reading input file line by line, assuming each line is a table name.
line = inputFile.readline()
while line:
    tableName = line.strip()  # drop the trailing newline
    line = inputFile.readline()
    if not tableName:
        continue  # skip blank lines
    query = "SELECT * FROM " + tableName
    # Read query data.
    cursor.execute(query)
    rows = cursor.fetchall()
    # Write to file as CSV, one file per table.
    fileWriter = open(outputDataLocation + "/" + tableName + ".csv", "w", newline="")
    myFile = csv.writer(fileWriter)
    myFile.writerows(rows)
    fileWriter.close()

inputFile.close()

Python MySQLdb - Export query to csv without line terminators

So basically I'm using MySQLdb to query daily images of my tables and I want to save them in .csv, but one of the fields has line terminators (\n) and I can't figure out how to get rid of them so my CSV doesn't break.
Here is the Python I'm using:
import csv
import MySQLdb

db = MySQLdb.connect(host="",
                     user="",
                     passwd="",
                     db="")
cur = db.cursor()
sql = """ big query here """
results = cur.execute(sql)
with open("out.csv", "w", newline="") as csv_file:
    csv_writer = csv.writer(csv_file)
    csv_writer.writerow([i[0] for i in cur.description])  # headers
    csv_writer.writerows(cur)  # all data rows
Is there an easy way to replace the \n characters with just spaces?
Try this:
import csv
import sys
csv_writer = csv.writer(sys.stdout, lineterminator='\n')
Or:
with open("out.csv", "w", newline='') as csv_file:
If the newline is appearing in the text of your column, maybe something like this would work:
csv_writer.writerow([i[0].replace('\n', ' ') for i in cur.description])
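To scrub the newlines out of the data fields themselves rather than the column names, the same substitution can be applied to every value in every row before writing. A minimal sketch, assuming the cur cursor from the question:
csv_writer.writerows(
    [col.replace('\n', ' ') if isinstance(col, str) else col for col in row]
    for row in cur
)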

Insert CSV into SQL database in python

I want to insert the data in my CSV file into the table that I created before.
So let's say I created a table named T.
The CSV file is the following:
Last,First,Student Number,Department
Gonzalez,Oliver,1862190394,Chemistry
Roberts,Barbara,1343146197,Computer Science
Carter,Raymond,1460039151,Philosophy
Building on what was shared by Mumpo:
This has worked for me when inserting a CSV into SQL Server. You just need to provide your connection details, the file path, and the table you want to write to. The only caveat is that the table must already exist, as this code inserts a CSV into an existing table.
import pyodbc
import csv

# DESTINATION CONNECTION
drivr = ""
servr = ""
db = ""
username = ""
password = ""
my_cnxn = pyodbc.connect('DRIVER={};SERVER={};DATABASE={};UID={};PWD={}'
                         .format(drivr, servr, db, username, password))
my_cursor = my_cnxn.cursor()

def insert_records(table, yourcsv, cursor, cnxn):
    # INSERT SOURCE RECORDS INTO DESTINATION
    with open(yourcsv) as csvfile:
        csvFile = csv.reader(csvfile, delimiter=',')
        header = next(csvFile)
        headers = [x.strip() for x in header]
        insert = 'INSERT INTO {} ('.format(table) + ', '.join(headers) + ') VALUES '
        for row in csvFile:
            values = ["'" + x.strip() + "'" for x in row]
            cursor.execute(insert + '(' + ', '.join(values) + ');')
            cnxn.commit()  # must commit unless your sql database auto-commits

table = <sql-table-here>
mycsv = '...T.csv'  # SET YOUR FILEPATH
insert_records(table, mycsv, my_cursor, my_cnxn)
my_cursor.close()
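Note that building the VALUES clause by string concatenation breaks on values that contain quotes and is open to SQL injection. A safer sketch of the same loop using pyodbc's ? parameter placeholders (the table and column names still have to be formatted in, since identifiers cannot be parameterized; the names are the same hypothetical ones as above):
import csv

def insert_records_safe(table, yourcsv, cursor, cnxn):
    with open(yourcsv, newline='') as csvfile:
        reader = csv.reader(csvfile)
        headers = [h.strip() for h in next(reader)]
        placeholders = ', '.join(['?'] * len(headers))
        insert = 'INSERT INTO {} ({}) VALUES ({})'.format(
            table, ', '.join(headers), placeholders)
        # executemany binds each CSV row to the placeholders in turn.
        cursor.executemany(insert, list(reader))
        cnxn.commit()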
