CSV - MySQL Using Python

After reading several posts I still can't get this to work. Most likely I'm doing it all wrong, but I've tried several different approaches.
What I'm trying to do is extract data from a CSV file and add it to my newly created database/table.
My CSV input looks like this:
NodeName,NeId,Object,Time,Interval,Direction,NeAlias,NeType,Position,AVG,MAX,MIN,percent_0-5,percent_5-10,percent_10-15,percent_15-20,percent_20-25,percent_25-30,percent_30-35,percent_35-40,percent_40-45,percent_45-50,percent_50-55,percent_55-60,percent_60-65,percent_65-70,percent_70-75,percent_75-80,percent_80-85,percent_85-90,percent_90-95,percent_95-100,IdLogNum,FailureDescription
X13146PAZ,5002,1/11/100,2016-05-16 00:00:00,24,Near End,GE0097-TN01.1,AMM 20PB,-,69684,217287,772,10563,8055,10644,15147,16821,13610,7658,2943,784,152,20,3,0,0,0,0,0,0,0,0,0,-
...
X13146PAZ,5002,1/11/102,2016-05-16 00:00:00,24,Near End,GE0097-TN01.1,AMM 20PB,-,3056,28315,215,86310,90,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,-
...
X13146PAZ,5002,1/11/103,2016-05-16 00:00:00,24,Near End,GE0097-TN01.1,AMM 20PB,-,769,7195,11,86400,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,-
The MySQL table is created, but that might be part of the issue: some columns are varchar and some are integer.
My server runs Ubuntu, if that is of any use.
My code:
# -*- coding: utf-8 -*-
# Imports
from datetime import date, timedelta
import sys
import MySQLdb as mdb
import csv
import os

# Vars
Yesterday = date.today() - timedelta(1)

# Opening document
RX_Document = open('./reports/X13146PAZ_TN_WAN_ETH_BAND_RX_' + Yesterday.strftime("%Y%m%d") + "_231500.csv", 'r')
RX_Document_Str = './reports/X13146PAZ_TN_WAN_ETH_BAND_RX_' + Yesterday.strftime("%Y%m%d") + "_231500.csv"
csv_data = csv.reader(file(RX_Document_Str))
con = mdb.connect('localhost', 'username', 'password', 'tn_rx_utilization')

counter = 0
for row in csv_data:
    if counter == 0:
        print row
        continue
        counter = 1
    if counter == 1:
        cur = con.cursor()
        cur.execute('INSERT INTO RX_UTIL(NodeName, NeId, Object, Time, Interval1,Direction,NeAlias,NeType,Position,AVG,MAX,MIN,percent_5-10,percent_10-15,percent_15-20,percent_20-25,percent_25-30,percent_30-35,percent_35-40,percent_40-45,percent_45-50,percent_50-55,percent_55-60,percent_60-65,percent_65-70,percent_70-75,percent_75-80,percent_80-85,percent_85-90,percent_90-95,percent_95-100,IdLogNum,FailureDescription)' 'VALUES("%s","%s","%s","%s","%s","%s","%s","%s","%s","%s","%s","%s","%s","%s","%s","%s","%s","%s","%s","%s","%s","%s","%s","%s","%s","%s","%s","%s","%s","%s","%s","%s","%s","%s")',tuple(row[:34]))
        con.commit()

#cur.execute("SELECT VERSION()")
#ver = cur.fetchone()
con.commit()
con.close()

You should not put the placeholder %s in quotes:
cur.execute('''INSERT INTO RX_UTIL(NodeName, NeId, Object, Time, Interval1, Direction,
        NeAlias, NeType, Position, AVG, MAX, MIN, `percent_5-10`, `percent_10-15`,
        `percent_15-20`, `percent_20-25`, `percent_25-30`, `percent_30-35`,
        `percent_35-40`, `percent_40-45`, `percent_45-50`, `percent_50-55`,
        `percent_55-60`, `percent_60-65`, `percent_65-70`, `percent_70-75`,
        `percent_75-80`, `percent_80-85`, `percent_85-90`, `percent_90-95`,
        `percent_95-100`, IdLogNum, FailureDescription)
    VALUES(%s,%s,%s,%s,%s,%s,%s,%s,%s,%s,%s,%s,%s,%s,%s,%s,%s,%s,%s,%s,
        %s,%s,%s,%s,%s,%s,%s,%s,%s,%s,%s,%s,%s)''', tuple(row[:33]))
(Note that the hyphenated column names are quoted with backticks here: MySQL only treats double quotes as identifier quotes in ANSI_QUOTES mode.)

You are missing percent_0-5 from your INSERT.
Remove the quotes from the %s placeholders; the query is passed as a string, but the driver substitutes each value with its underlying data type.
There may also be datatype issues resulting from the csv reader, which yields every field as a string; convert numeric fields (e.g. with int()) before inserting. Here is some more information from another post:
Read data from csv-file and transform to correct data-type
cur.execute('''INSERT INTO RX_UTIL(NodeName, NeId, Object, Time, Interval1, Direction,
        NeAlias, NeType, Position, AVG, MAX, MIN, `percent_0-5`, `percent_5-10`,
        `percent_10-15`, `percent_15-20`, `percent_20-25`, `percent_25-30`,
        `percent_30-35`, `percent_35-40`, `percent_40-45`, `percent_45-50`,
        `percent_50-55`, `percent_55-60`, `percent_60-65`, `percent_65-70`,
        `percent_70-75`, `percent_75-80`, `percent_80-85`, `percent_85-90`,
        `percent_90-95`, `percent_95-100`, IdLogNum, FailureDescription)
    VALUES(%s,%s,%s,%s,%s,%s,%s,%s,%s,%s,%s,%s,%s,%s,%s,%s,%s,%s,%s,%s,
        %s,%s,%s,%s,%s,%s,%s,%s,%s,%s,%s,%s,%s,%s)''', tuple(row[:34]))
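Putting both answers together, a minimal sketch of the fixed loop (assuming the RX_UTIL table has exactly 34 columns declared in the same order as the CSV, so the column list can be omitted; connection settings are unchanged from the question):
import csv
import MySQLdb as mdb

con = mdb.connect('localhost', 'username', 'password', 'tn_rx_utilization')
cur = con.cursor()

with open(RX_Document_Str, 'r') as fin:
    csv_data = csv.reader(fin)
    header = next(csv_data)  # skip the header row instead of counting
    for row in csv_data:
        cur.execute('INSERT INTO RX_UTIL VALUES (' + ','.join(['%s'] * 34) + ')',
                    tuple(row[:34]))  # unquoted %s placeholders

con.commit()
con.close()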

Related

Python query to run sql query

I have a Python script I managed to piece together that queries a database and writes the results to a CSV file. I'm looking for help setting a beginning date (2022-03-15) and a range (65 days forward) to iterate through. I need to run the query for each day and write the results separately to their own CSV file with the date in the file name. So I believe I need to set the initial date and the number of days to increment through, then pass the variable to the SQL statement and to the CSV export. I've been playing around with it but am not making progress.
import mysql.connector
import sys
import csv
from datetime import timedelta, datetime

mydb = mysql.connector.connect(
    host="my ip",
    user="db username",
    password="db pwd"
)
mycursor = mydb.cursor()

date = datetime(2022, 3, 15)
for i in range(65):
    date += timedelta(days=1)
    print(date)

mycursor.execute("SELECT * FROM myDB WHERE Division like 'ABC' AND ContactDate LIKE date")
rows = mycursor.fetchall()
headers = [col[0] for col in mycursor.description]
rows.insert(0, tuple(headers))
fp = open('Calls-date.csv', 'w+', newline="")
myFile = csv.writer(fp)
myFile.writerows(rows)
fp.close()
mydb.close()
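Something like the sketch below might work, assuming ContactDate is a DATE column and myDB really is the table name (both taken from your query): bind each date with a %s placeholder and build the file name from the same date.
import csv
from datetime import date, timedelta
import mysql.connector

mydb = mysql.connector.connect(host="my ip", user="db username", password="db pwd")
mycursor = mydb.cursor()

start = date(2022, 3, 15)
for i in range(65):
    day = start + timedelta(days=i)
    # the driver formats and escapes the date for us
    mycursor.execute(
        "SELECT * FROM myDB WHERE Division LIKE 'ABC' AND ContactDate = %s",
        (day,)
    )
    rows = mycursor.fetchall()
    headers = [col[0] for col in mycursor.description]
    with open('Calls-%s.csv' % day.isoformat(), 'w', newline='') as fp:
        writer = csv.writer(fp)
        writer.writerow(headers)
        writer.writerows(rows)

mydb.close()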

How to query MySQL Record value to Python Variables?

I want to give Python variables values that I fetch from a MySQL database.
#!/usr/bin/python -u
# -*- coding: UTF-8 -*-
import time
import datetime
import mysql.connector
import sys

db = mysql.connector.connect(
    host = "localhost",
    user = "admin",
    password = "admin",
    db = "testonly"
)
mycursor = db.cursor()

if __name__ == '__main__':
    temp = 0
    # By selecting one column in a row, I fetch only one record from the table.
    mycursor.execute("SELECT temperature FROM table ORDER BY primarykey DESC LIMIT 1;")
    data = mycursor.fetchone()
    for temperature in data:
        print(temperature)
    temp = data['temperature']
    sys.exit()
Then I get an error like this:
File "test.py", line 28, in <module>
temp = data['temperature']
TypeError: tuple indices must be integers, not str
In which way I can give value to python variable for later usage?
By default, fetchone returns a tuple with the data from your database. As it currently stands, you need to access your data by index:
temp = data[0]
If you want to access your data by the temperature key, you need to specify your cursor class:
from mysql.connector.cursor import MySQLCursorDict
...
mycursor = db.cursor(cursor_class=MySQLCursorDict)
...
temp = data['temperature']
Your object data is a tuple and can't be referenced like that. You need to use this:
temp = data[0]
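For what it's worth, newer versions of mysql.connector also accept a keyword argument that does the same thing without the extra import; a small sketch (readings stands in for your real table name):
mycursor = db.cursor(dictionary=True)  # rows come back as dicts keyed by column name
mycursor.execute("SELECT temperature FROM readings ORDER BY primarykey DESC LIMIT 1;")
data = mycursor.fetchone()
temp = data['temperature']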

Parsing just a column of a CSV file with multiple columns

I am learning Python and am currently working with it to parse a CSV file.
The CSV file has 3 columns:
Full_name, university, and Birth_Year.
I have successfully loaded, read, and printed the content of a given CSV file in Python, but here's where I am stuck:
I want to parse ONLY the Full_name column into 3 columns: first, middle, and last. If there are only 2 words in the name, then the middle name should be null.
The resulting parsed output should then be inserted into a SQL db through Python.
Here's my code so far:
import csv
import sys

if __name__ == '__main__':
    if len(sys.argv) != 2:
        print("Please enter the csv file too: python name_parsing.py student_info.csv")
        sys.exit()
    else:
        with open(sys.argv[1], "r") as file:
            reader = csv.DictReader(file)  # I use DictReader because the csv file has > 1 col
            # Print the names on the cmd
            for row in reader:
                name = row["Full_name"]
                for name in reader:
                    if len(name) == 2:
                        print(first_name = name[0])
                        print(middle_name = None)
                        print(last_name = name[2])
                    if len(name) == 3:  # The assumption is that name is either 2 or 3 words only.
                        print(first_name = name[0])
                        print(middle_name = name[1])
                        print(last_name = name[2])
                    db.execute("INSERT INTO name (first, middle, last) VALUES(?,?,?)",
                               row["first_name"], row["middle_name"], row["last_name"])
Running the program above gives me no output whatsoever. How do I parse this the right way? Thank you.
I created a sample file based on your description. The content looks as below:
Full_name,University,Birth_Year
Prakash Ranjan Gupta,BPUT,1920
Hari Shankar,NIT,1980
John Andrews,MIT,1950
Arbaaz Aslam Khan,REC,2005
Then I executed the code below; it runs fine in my Jupyter notebook. You can add the argument check (len(sys.argv) != 2, etc.) as you need. I have used a sqlite3 database. I hope this works; in case you want the if/main block added, let me know and I can edit.
This follows your code. (Otherwise, you could do this more easily using pandas, I believe.)
import csv
import sqlite3

# Make DB connection and create a table if it does not exist
con = sqlite3.connect('name_data.sql')
cur = con.cursor()
cur.execute('''CREATE TABLE IF NOT EXISTS UNIV_DATA
               (FIRSTNAME TEXT,
                MIDDLE_NAME TEXT,
                LASTNAME TEXT,
                UNIVERSITY TEXT,
                YEAR TEXT)''')

with open('names_data.csv') as fh:
    read_data = csv.DictReader(fh)
    for uniData in read_data:
        lst_nm = uniData['Full_name'].split()
        if len(lst_nm) == 2:
            fn, ln = lst_nm
            mn = None
        else:
            fn, mn, ln = lst_nm
        # print(fn, mn, ln, uniData['University'], uniData['Birth_Year'])
        cur.execute('''
            INSERT INTO UNIV_DATA
            (FIRSTNAME, MIDDLE_NAME, LASTNAME, UNIVERSITY, YEAR)
            VALUES(?,?,?,?,?)''',
            (fn, mn, ln, uniData['University'], uniData['Birth_Year'])
        )

con.commit()
cur.close()
con.close()
If you want to read the data in the table UNIV_DATA:
Option 1 (prints the rows in the form of tuples):
import sqlite3

con = sqlite3.connect('name_data.sql')  # Make connection to DB and create a connection object
cur = con.cursor()  # Create a cursor object
results = cur.execute('SELECT * FROM UNIV_DATA')  # Execute the query and store the rows retrieved in 'results'
[print(result) for result in results]  # Traverse through 'results' in a loop to print the rows retrieved
cur.close()  # close the cursor
con.close()  # close the connection
Option 2 (prints all the rows in the form of a pandas data frame - execute in Jupyter, preferably):
import sqlite3
import pandas as pd

con = sqlite3.connect('name_data.sql')  # Make connection to DB and create a connection object
df = pd.read_sql('SELECT * FROM UNIV_DATA', con)  # Query the table and store the result in a dataframe: df
df
When you call name = row["Full_name"], it returns a string representing the name, e.g. "John Smith".
In Python, strings can be indexed like lists, so in this case calling len(name) returns 10, since "John Smith" has 10 characters. As this equals neither 2 nor 3, nothing happens in your loop.
What you need is some way to turn the string into a list containing the first, middle and last names. You can do this with the split function: name.split(" ") splits the string at every space. Continuing the above example, it returns ["John", "Smith"], which should make your code work.
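A quick illustration of the difference (values shown as comments):
name = "John Smith"
print(len(name))         # 10 - counts characters, not words
parts = name.split(" ")  # ['John', 'Smith']
print(len(parts))        # 2 - what the length checks expect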

pyodbc.DataError: ('22018', "[22018] [Microsoft][ODBC SQL Server Driver][SQL Server]Conversion failed ...") error

I have written the following snippet to import a CSV file into an MS SQL Server database, but it gives me an error. It is based on code written for SQLite in Python and changed for MSSQL.
import csv, pyodbc
import logging

def _get_col_datatypes(fin):
    dr = csv.DictReader(fin)  # comma is default delimiter
    fieldTypes = {}
    for entry in dr:
        feildslLeft = [f for f in dr.fieldnames if f not in fieldTypes.keys()]
        if not feildslLeft:
            break  # We're done
        for field in feildslLeft:
            data = entry[field]
            # Need data to decide
            if len(data) == 0:
                continue
            if data.isdigit():
                fieldTypes[field] = "INTEGER"
            else:
                fieldTypes[field] = "TEXT"
        # TODO: Currently there's no support for DATE in sqlite
    if len(feildslLeft) > 0:
        raise Exception("Failed to find all the columns data types - Maybe some are empty?")
    return fieldTypes

def escapingGenerator(f):
    for line in f:
        yield line.encode("ascii", "xmlcharrefreplace").decode("ascii")

def csvToDb(csvFile, outputToFile=False):
    # TODO: implement output to file
    with open(csvFile, mode='r') as fin:
        dt = _get_col_datatypes(fin)
        fin.seek(0)
        reader = csv.DictReader(fin)
        # Keep the order of the column names just as in the CSV
        fields = reader.fieldnames
        cols = []
        # Set field and type
        for f in fields:
            cols.append("%s %s" % (f, dt[f]))
        # Generate create table statement:
        stmt = "CREATE TABLE ads (%s)" % ",".join(cols)
        con = pyodbc.connect('DRIVER={SQL Server};SERVER=localhost;DATABASE=sd;UID=Test;PWD=11')
        cur = con.cursor()
        cur.execute(stmt)
        fin.seek(0)
        reader = csv.reader(escapingGenerator(fin))
        # Generate insert statement:
        stmt = "INSERT INTO ads VALUES(%s);" % ','.join('?' * len(cols))
        cur.executemany(stmt, reader)
        con.commit()
    return con

csvToDb('Books.csv')
The error I am getting is
pyodbc.DataError: ('22018', "[22018] [Microsoft][ODBC SQL Server Driver][SQL Server]Conversion failed when converting the varchar value 'a' to data type int. (245) (SQLExecDirectW)")
Also, please suggest any other methods you know of for dynamically importing CSV or text files into an MSSQL database.
The error message
Conversion failed when converting the varchar value 'a' to data type int.
reveals that your code can be "fooled" into thinking that a column is integer when it is really text, presumably because it only looks at the first row of data. Testing reveals that both
ID,txt1,txt2,int1
1,foo,123,3
2,bar,abc,4
and
"ID","txt1","txt2","int1"
1,"foo","123",3
2,"bar","abc",4
result in your code producing the CREATE TABLE statement:
CREATE TABLE ads (ID INTEGER,txt1 TEXT,txt2 INTEGER,int1 INTEGER)
which is wrong because the [txt2] column is not really INTEGER.
You could investigate tweaking your code to look at more than the first data row. (Microsoft's own import routines often default to the first eight rows when attempting to auto-detect data types.) You could also just import all columns as text and then convert them later in SQL Server.
However, given that there must be hundreds, if not thousands, of examples out there for importing CSV data into SQL Server, you should also consider doing a more exhaustive search for existing (debugged) code before investing more time and effort into "rolling your own solution".
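For the first suggestion, here is a minimal sketch of sampling several rows before committing to a type (sniff_types is a hypothetical helper, and eight rows mirrors the Microsoft default mentioned above):
import csv

def sniff_types(path, sample_rows=8):
    # Guess INTEGER/TEXT per column from up to sample_rows rows of data,
    # demoting a column to TEXT as soon as any non-numeric value appears.
    with open(path, newline='') as fin:
        reader = csv.DictReader(fin)
        types = {name: "INTEGER" for name in reader.fieldnames}
        for i, row in enumerate(reader):
            if i >= sample_rows:
                break
            for name, value in row.items():
                if value and not value.lstrip('-').isdigit():
                    types[name] = "TEXT"
    return types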

How do I create a CSV file from database in Python?

I have a SQLite 3 and/or MySQL table named "clients".
Using Python 2.6, how do I create a CSV file named Clients100914.csv with headers, in the Excel dialect?
The SQL execute: select * only gives table data, but I would like the complete table with headers.
How do I create a record set to get the table headers? The table headers should come directly from SQL, not be written in Python.
w = csv.writer(open(Fn,'wb'),dialect='excel')
#w.writelines("header_row")
#Fetch into sqld
w.writerows(sqld)
This code leaves me with the file open and no headers. Also, I can't figure out how to use the file as a log.
import csv
import sqlite3
from glob import glob; from os.path import expanduser

conn = sqlite3.connect(  # open "places.sqlite" from one of the Firefox profiles
    glob(expanduser('~/.mozilla/firefox/*/places.sqlite'))[0]
)
cursor = conn.cursor()
cursor.execute("select * from moz_places;")

with open("out.csv", "w", newline='') as csv_file:  # Python 3 version
#with open("out.csv", "wb") as csv_file:  # Python 2 version
    csv_writer = csv.writer(csv_file)
    csv_writer.writerow([i[0] for i in cursor.description])  # write headers
    csv_writer.writerows(cursor)
PEP 249 (DB API 2.0) has more information about cursor.description.
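Each item in cursor.description is a sequence whose first element is the column name (the remaining fields describe type and size), which is why the header row above can be built with a simple comprehension:
# e.g. for "select id, url from moz_places" the description is roughly
# (('id', ...), ('url', ...))
headers = [col[0] for col in cursor.description]  # -> ['id', 'url']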
Using the csv module is very straightforward and made for this task.
import csv
writer = csv.writer(open("out.csv", 'w'))
writer.writerow(['name', 'address', 'phone', 'etc'])
writer.writerow(['bob', '2 main st', '703', 'yada'])
writer.writerow(['mary', '3 main st', '704', 'yada'])
Creates exactly the format you're expecting.
You can easily create it manually, writing a file with your chosen separator. You can also use the csv module.
If it's coming from a database, you can also just run a query from your sqlite client:
sqlite <db params> < queryfile.sql > output.csv
Which will create a csv file with a tab separator.
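With the sqlite3 shell specifically, the -header and -csv switches produce a comma-separated file with headers in one line (clients.db is a stand-in for your database file):
sqlite3 -header -csv clients.db "SELECT * FROM clients;" > Clients100914.csv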
How to extract the column headings from an existing table:
You don't need to parse an SQL "create table" statement. This is fortunate, as the "create table" syntax is neither nice nor clean, it is warthog-ugly.
You can use the table_info pragma. It gives you useful information about each column in a table, including the name of the column.
Example:
>>> #coding: ascii
... import sqlite3
>>>
>>> def get_col_names(cursor, table_name):
... results = cursor.execute("PRAGMA table_info(%s);" % table_name)
... return [row[1] for row in results]
...
>>> def wrong_way(cur, table):
... import re
... cur.execute("SELECT sql FROM sqlite_master WHERE name=?;", (table, ))
... sql = cur.fetchone()[0]
... column_defs = re.findall("[(](.*)[)]", sql)[0]
... first_words = (line.split()[0].strip() for line in column_defs.split(','))
... columns = [word for word in first_words if word.upper() != "CONSTRAINT"]
... return columns
...
>>> conn = sqlite3.connect(":memory:")
>>> curs = conn.cursor()
>>> _ignored = curs.execute(
... "create table foo (id integer, name text, [haha gotcha] text);"
... )
>>> print get_col_names(curs, "foo")
[u'id', u'name', u'haha gotcha']
>>> print wrong_way(curs, "foo")
[u'id', u'name', u'[haha'] <<<<<===== WHOOPS!
>>>
Other problems with the now-deleted "parse the create table SQL" answer:
Stuffs up with e.g. create table test (id1 text, id2 int, msg text, primary key(id1, id2)) ... needs to ignore not only CONSTRAINT, but also keywords PRIMARY, UNIQUE, CHECK and FOREIGN (see the create table docs).
Needs to specify re.DOTALL in case there are newlines in the SQL.
In line.split()[0].strip() the strip is redundant.
This is simple and works fine for me.
Let's say you have already connected to your database table and also got a cursor object. So, following on from that point:
import csv

curs = conn.cursor()
curs.execute("select * from orders")
m_dict = list(curs.fetchall())

with open("mycsvfile.csv", "wb") as f:
    w = csv.DictWriter(f, m_dict[0].keys())
    w.writerow(dict((fn, fn) for fn in m_dict[0].keys()))
    w.writerows(m_dict)
Unless I'm missing something, you just want to do something like so...
f = open("somefile.csv", "w")
f.writelines("header_row")
# logic to write lines to file (you may need to organize values and add commas or pipes etc...)
f.close()
It can be easily done using pandas and sqlite3, in extension to the answer from Cristian Ciupitu.
import sqlite3
import pandas as pd
from glob import glob; from os.path import expanduser

conn = sqlite3.connect(glob(expanduser('data/clients_data.sqlite'))[0])
cursor = conn.cursor()
Now use pandas to read the table and write to csv:
clients = pd.read_sql('SELECT * FROM clients', conn)
clients.to_csv('data/Clients100914.csv', index=False)
This is more direct and works all the time.
The code below works for Oracle with Python 3.6:
import cx_Oracle
import csv

# Create tns
dsn_tns = cx_Oracle.makedsn('<host>', '<port>', service_name='<service_name>')
# Connect to DB using user, password and tns settings
conn = cx_Oracle.connect(user='<user>', password='<pass>', dsn=dsn_tns)
c = conn.cursor()
# Execute the query
c.execute("select * from <table>")
# Write results into CSV file
with open("<file>.csv", "w", newline='') as csv_file:
    csv_writer = csv.writer(csv_file)
    csv_writer.writerow([i[0] for i in c.description])  # write headers
    csv_writer.writerows(c)
