importing from excel to mysql - python

I am trying to import data from Excel to MySQL; below is my code. The problem is that it only writes the last row from my Excel sheet to the MySQL db, and I want it to import all the rows from my Excel sheet.
import pymysql
import xlrd
book = xlrd.open_workbook('C:\SqlExcel\Backup.xlsx')
sheet = book.sheet_by_index(0)
# Connect to the database
connection = pymysql.connect(host='localhost',
                             user='root',
                             password='',
                             db='test')
cursor = connection.cursor()
query = """INSERT INTO report_table (FirstName, LastName) VALUES (%s, %s)"""
for r in range(1, sheet.nrows):
    fname = sheet.cell(r,1).value
    lname = sheet.cell(r,2).value

values = (fname, lname)
cursor.execute(query, values)
connection.commit()
cursor.close()
connection.close()

Your code is currently only storing the last pair and writing that to the database. You need to build each pair inside the loop and write each one to the database separately.
You can amend your code to this:
import pymysql
import xlrd
book = xlrd.open_workbook('C:\SqlExcel\Backup.xlsx')
sheet = book.sheet_by_index(0)
# Connect to the database
connection = pymysql.connect(host='localhost',
                             user='root',
                             password='',
                             db='test',
                             autocommit=True)

cursor = connection.cursor()
query = """INSERT INTO report_table (FirstName, LastName) VALUES (%s, %s)"""

# loop over each row
for r in range(1, sheet.nrows):
    # extract each cell
    fname = sheet.cell(r,1).value
    lname = sheet.cell(r,2).value
    # extract cells into pair
    values = fname, lname
    # write pair to db
    cursor.execute(query, values)

# close everything
cursor.close()
connection.close()
Note: You can set autocommit=True in the connect call. PyMySQL disables autocommit by default, so with this setting you don't have to call connection.commit() after your queries.
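If there are many rows, a possible variant (just a sketch, reusing the same query, sheet and connection as above) is to collect all the pairs first and insert them with a single executemany() call:
rows = [(sheet.cell(r, 1).value, sheet.cell(r, 2).value)
        for r in range(1, sheet.nrows)]
# one call for all rows instead of one call per row
cursor.executemany(query, rows)
connection.commit()  # redundant if autocommit=True was set on connect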

Your values variable has to be assigned inside the for loop, like this:
import pymysql
import xlrd
book = xlrd.open_workbook('C:\SqlExcel\Backup.xlsx')
sheet = book.sheet_by_index(0)
# Connect to the database
connection = pymysql.connect(host='localhost',
                             user='root',
                             password='',
                             db='test')

cursor = connection.cursor()
query = """INSERT INTO report_table (FirstName, LastName) VALUES (%s, %s)"""

for r in range(1, sheet.nrows):
    fname = sheet.cell(r,1).value
    lname = sheet.cell(r,2).value
    values = (fname, lname)
    cursor.execute(query, values)

connection.commit()
cursor.close()
connection.close()

Sorry, I don't know much about databases, and therefore not much about pymysql either. But assuming all the rest is correct, I guess it could work like this:
...
cursor = connection.cursor()
query = """INSERT INTO report_table (FirstName, LastName) VALUES (%s, %s)"""
for r in range(1, sheet.nrows):
    fname = sheet.cell(r,1).value
    lname = sheet.cell(r,2).value
    values = (fname, lname)
    cursor.execute(query, values)
connection.commit()
cursor.close()
connection.close()

Is this something you will do on a regular basis? I see the script you're writing, but I am not sure whether this is something you need to run over and over again or you are just importing the data into MySQL once.
If this is a one-shot deal, you can try this.
Open the spreadsheet, SELECT ALL, then COPY all your data. Paste it into a text document and save that document (let's say the text document will be at c:\temp\exceldata.txt). You can then load it all into the table with one command:
LOAD DATA INFILE 'c:/temp/exceldata.txt'
INTO TABLE report_table
FIELDS TERMINATED BY '\t'
LINES TERMINATED BY '\r\n'
IGNORE 1 LINES;
I am making a few assumptions here:
The spreadsheet has only two columns and they are in the same order as the fields in your table.
You do NOT need to clear out the table before the load. If you do, issue the command TRUNCATE TABLE report_table; before the load.
Note: I chose a tab-delimited format because I prefer it. You could save the file as a .csv file and adjust the command as follows:
LOAD DATA INFILE 'c:/temp/exceldata.txt'
INTO TABLE report_table
FIELDS TERMINATED BY ','
OPTIONALLY ENCLOSED BY '"'
LINES TERMINATED BY '\r\n'
IGNORE 1 LINES;
The "optionally enclosed by" is there because Excel will put quotes around text data with a comma in it.
If you need to do this on a regular basis, you can still use the CSV method by writing an Excel script that saves the file to a .csv copy whenever the spreadsheet is saved. I have done that too.
I have never written Python, but this is how I do it in PHP.
HTH
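For completeness, a rough Python sketch of the same CSV-plus-LOAD-DATA idea (untested; it assumes pymysql, the xlrd workbook from the question, illustrative file paths, and that local_infile is enabled on both the server and the client):
import csv
import xlrd
import pymysql

book = xlrd.open_workbook(r'C:\SqlExcel\Backup.xlsx')
sheet = book.sheet_by_index(0)

# dump the sheet to a CSV file (path is a placeholder)
with open(r'C:\temp\exceldata.csv', 'w', newline='') as f:
    writer = csv.writer(f)
    for r in range(sheet.nrows):
        writer.writerow(sheet.row_values(r))

# bulk-load the CSV; LOCAL INFILE must be allowed by the server
connection = pymysql.connect(host='localhost', user='root', password='',
                             db='test', local_infile=True)
with connection.cursor() as cursor:
    cursor.execute("""
        LOAD DATA LOCAL INFILE 'C:/temp/exceldata.csv'
        INTO TABLE report_table
        FIELDS TERMINATED BY ','
        OPTIONALLY ENCLOSED BY '"'
        LINES TERMINATED BY '\\r\\n'
        IGNORE 1 LINES
    """)
connection.commit()
connection.close()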

This code worked for me after taking help from the suggestions above. The error was one of indentation; now it's working :)
import pymysql
import xlrd
book = xlrd.open_workbook('C:\SqlExcel\Backup.xlsx')
sheet = book.sheet_by_index(0)
# Connect to the database
connection = pymysql.connect(host='localhost',
                             user='root',
                             password='',
                             db='test',
                             autocommit=True)
cursor = connection.cursor()
query = """INSERT INTO report_table (FirstName, LastName) VALUES (%s, %s)"""
for r in range(1, sheet.nrows):
    fname = sheet.cell(r,1).value
    lname = sheet.cell(r,2).value
    values = (fname, lname)
    cursor.execute(query, values)
cursor.close()
connection.close()


importing single .csv into mysql with python

When running this code I am getting the error "Error while connecting to MySQL: Not all parameters were used in the SQL statement".
I have also tried to ingest the data with another technique:
import mysql.connector as msql
from mysql.connector import Error
import pandas as pd

empdata = pd.read_csv('path_to_file', index_col=False, delimiter = ',')
empdata.head()

try:
    conn = msql.connect(host='localhost', user='test345',
                        password='test123')
    if conn.is_connected():
        cursor = conn.cursor()
        cursor.execute("CREATE DATABASE timetheft")
        print("Database is created")
except Error as e:
    print("Error while connecting to MySQL", e)

try:
    conn = msql.connect(host='localhost', database='timetheft', user='test345', password='test123')
    if conn.is_connected():
        cursor = conn.cursor()
        cursor.execute("select database();")
        record = cursor.fetchone()
        print("You're connected to database: ", record)
        cursor.execute('DROP TABLE IF EXISTS company;')
        print('Creating table....')
        create_contracts_table = """
        CREATE TABLE company ( ID VARCHAR(40) PRIMARY KEY,
                               Company_Name VARCHAR(40),
                               Country VARCHAR(40),
                               City VARCHAR(40),
                               Email VARCHAR(40),
                               Industry VARCHAR(30),
                               Employees VARCHAR(30)
                             );
        """
        cursor.execute(create_company_table)
        print("Table is created....")
        for i,row in empdata.iterrows():
            sql = "INSERT INTO timetheft.company VALUES (%S, %S, %S, %S, %S,%S,%S,%S)"
            cursor.execute(sql, tuple(row))
            print("Record inserted")
        # the connection is not auto committed by default, so we must commit to save our changes
        conn.commit()
except Error as e:
    print("Error while connecting to MySQL", e)
The second technique I tried:
LOAD DATA LOCAL INFILE 'path_to_file'
INTO TABLE company
FIELDS TERMINATED BY ';'
ENCLOSED BY '"'
LINES TERMINATED BY '\n'
IGNORE 1 LINES;
This worked better, but with many errors; only 20% of the rows were ingested.
Finally, here is an excerpt from the .csv (the data is consistent throughout all 1K rows):
"ID";"Company_Name";"Country";"City";"Email";"Industry";"Employees"
217520699;"Enim Corp.";"Germany";"Bamberg";"posuere#diamvel.edu";"Internet";"51-100"
352428999;"Lacus Vestibulum Consulting";"Germany";"Villingen-Schwenningen";"egestas#lacusEtiambibendum.org";"Food Production";"100-500"
371718299;"Dictum Ultricies Ltd";"Germany";"Anklam";"convallis.erat#sempercursus.co.uk";"Primary/Secondary Education";"100-500"
676789799;"A Consulting";"Germany";"Andernach";"massa#etrisusQuisque.ca";"Government Relations";"100-500"
718526699;"Odio LLP";"Germany";"Eisenhüttenstadt";"Quisque.varius#euismod.org";"E-Learning";"11-50"
I fixed these issues to get the code to work:
make the number of placeholders in the insert statement equal to the number of columns
the placeholders should be lower-case '%s'
the cell delimiter appears to be a semi-colon, not a comma.
For simply reading a csv with ~1000 rows, Pandas is overkill (and iterrows does not seem to behave as you expect). I've used the csv module from the standard library instead.
import csv

...

sql = "INSERT INTO company VALUES (%s, %s, %s, %s, %s, %s, %s)"

with open("67359903.csv", "r", newline="") as f:
    reader = csv.reader(f, delimiter=";")
    # Skip the header row.
    next(reader)
    # For large files it may be more efficient to commit
    # rows in batches.
    cursor.executemany(sql, reader)
    conn.commit()
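For the batched commits mentioned in the comment, one possible sketch (same sql, reader and conn as above; this loop replaces the single executemany call and must still run inside the with block):
import itertools

BATCH_SIZE = 500
while True:
    # pull up to BATCH_SIZE rows from the csv reader
    batch = list(itertools.islice(reader, BATCH_SIZE))
    if not batch:
        break
    cursor.executemany(sql, batch)
    conn.commit()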
If using the csv module is not convenient, the dataframe's itertuples method may be used to iterate over the data:
empdata = pd.read_csv('67359903.csv', index_col=False, delimiter=';')

for tuple_ in empdata.itertuples(index=False):
    cursor.execute(sql, tuple_)
conn.commit()
Or the dataframe can be dumped to the database directly.
import sqlalchemy as sa
engine = sa.create_engine('mysql+mysqlconnector:///test')
empdata.to_sql('company', engine, index=False, if_exists='replace')
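Note that the engine URL above carries no credentials; if the server requires them, they would normally go into the URL (user and password here are placeholders):
engine = sa.create_engine('mysql+mysqlconnector://user:password@localhost/test')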

csv into sqlite table python

Using Python, I am trying to import a csv into an sqlite table and use the headers in the csv file as the headers in the sqlite table. The code runs, but the table "MyTable" does not appear to be created. Here is the code:
with open ('dict_output.csv', 'r') as f:
    reader = csv.reader(f)
    columns = next(reader)
    #Strips white space in header
    columns = [h.strip() for h in columns]
    #reader = csv.DictReader(f, fieldnames=columns)
    for row in reader:
        print(row)

        con = sqlite3.connect("city_spec.db")
        cursor = con.cursor()

        #Inserts data from csv into table in sql database.
        query = 'insert into MyTable({0}) values ({1})'
        query = query.format(','.join(columns), ','.join('?' * len(columns)))
        print(query)

        cursor = con.cursor()
        for row in reader:
            cursor.execute(query, row)
        #cursor.commit()
        con.commit()

con.close()
Thanks in advance for any help.
You can use Pandas to make this easy (you may need to pip install pandas first):
import sqlite3
import pandas as pd
# load data
df = pd.read_csv('dict_output.csv')
# strip whitespace from headers
df.columns = df.columns.str.strip()
con = sqlite3.connect("city_spec.db")
# drop data into database
df.to_sql("MyTable", con)
con.close()
Pandas will do all of the hard work for you, including creating the actual table!
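One caveat: by default to_sql raises an error if MyTable already exists. The optional if_exists and index arguments control that behaviour, for example:
# replace the table if it exists and skip the DataFrame index column
df.to_sql("MyTable", con, if_exists="replace", index=False)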
You haven't marked your question as solved yet, so here goes.
Connect to the database just once, and create a cursor just once.
You can read the csv records only once.
I've added code that creates a crude form of the database table based on the column names alone. Again, this is done just once in the loop.
Your insertion code works fine.
import sqlite3
import csv

con = sqlite3.connect("city_spec.sqlite")  ## these statements belong outside the loop
cursor = con.cursor()                      ## execute them just once

first = True
with open ('dict_output.csv', 'r') as f:
    reader = csv.reader(f)
    columns = next(reader)
    columns = [h.strip() for h in columns]

    if first:
        sql = 'CREATE TABLE IF NOT EXISTS MyTable (%s)' % ', '.join(['%s text' % column for column in columns])
        print (sql)
        cursor.execute(sql)
        first = False

    #~ for row in reader:  ## we will read the rows later in the loop
        #~ print(row)

    query = 'insert into MyTable({0}) values ({1})'
    query = query.format(','.join(columns), ','.join('?' * len(columns)))
    print(query)

    cursor = con.cursor()
    for row in reader:
        cursor.execute(query, row)

con.commit()
con.close()
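As a possible shortcut (a sketch using the same query and reader as above): once the header has been consumed with next(reader), the remaining rows can be handed to executemany() in a single call instead of looping:
# replaces the final for-loop; must still run inside the with block
cursor.executemany(query, reader)
con.commit()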
You can also do this easily with the peewee ORM. For this you only need an extension from peewee, the playhouse.csv_loader:
from playhouse.csv_loader import *
db = SqliteDatabase('city_spec.db')
Test = load_csv(db, 'dict_output.csv')
This creates the table in city_spec.db, with the csv headers as fields and the data from dict_output.csv.
If you don't have peewee you can install it with
pip install peewee

Export specific column from one database to another one

I want to export specific columns from one database to another using Python, but it is not working:
# Display all Non-Duplicate data
import sqlite3
import csv
conn = sqlite3.connect('data.db')
# STEP 2 : create a small data file with only three fields account_id, product_id and unit_quantity
cursor = conn.execute("SELECT field1,field12,field14 FROM database")
for row in cursor:
    print row[0:11]
print "Operation done successfully";
conn.close()
Create a second connection and insert directly:
conn = sqlite3.connect('data.db')
cursor = conn.execute("SELECT field1,field12,field14 FROM database")

export = sqlite3.connect('exported.db')
# get the result as a list and insert each row with parameter binding
for values in cursor.fetchall():
    export.execute('INSERT INTO tablename(field1,field12,field14) VALUES (?, ?, ?)', values)
export.commit()
export.close()
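A slightly fuller sketch of the same approach, which also creates the target table if it is missing and inserts all rows in one call (table and column names are taken from the question and may need adjusting):
import sqlite3

conn = sqlite3.connect('data.db')
cursor = conn.execute("SELECT field1,field12,field14 FROM database")

export = sqlite3.connect('exported.db')
# create the target table if needed; SQLite allows columns without types
export.execute('CREATE TABLE IF NOT EXISTS tablename (field1, field12, field14)')
# insert all selected rows in a single call with parameter binding
export.executemany('INSERT INTO tablename (field1, field12, field14) VALUES (?, ?, ?)',
                   cursor.fetchall())
export.commit()
export.close()
conn.close()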

Writing a csv file into SQL Server database using python

I am trying to write a csv file into a table in a SQL Server database using Python. I am facing errors when I pass the parameters, but I don't face any error when I do it manually. Here is the code I am executing.
cur = cnxn.cursor() # Get the cursor
csv_data = csv.reader(file('Samplefile.csv')) # Read the csv
for rows in csv_data: # Iterate through csv
    cur.execute("INSERT INTO MyTable(Col1,Col2,Col3,Col4) VALUES (?,?,?,?)", rows)
cnxn.commit()
Error:
pyodbc.DataError: ('22001', '[22001] [Microsoft][ODBC SQL Server Driver][SQL Server]String or binary data would be truncated. (8152) (SQLExecDirectW); [01000] [Microsoft][ODBC SQL Server Driver][SQL Server]The statement has been terminated. (3621)')
However, when I insert the values manually, it works fine:
cur.execute("INSERT INTO MyTable(Col1,Col2,Col3,Col4) VALUES (?,?,?,?)",'A','B','C','D')
I have ensured that the table is there in the database and that the data types are consistent with the data I am passing. The connection and cursor are also correct. The data type of rows is "list".
Consider building the query dynamically to ensure the number of placeholders matches your table and CSV file format. Then it's just a matter of ensuring your table and CSV file are correct, instead of checking that you typed enough ? placeholders in your code.
The following example assumes
CSV file contains column names in the first line
Connection is already built
File name is test.csv
Table name is MyTable
Python 3
...
with open ('test.csv', 'r') as f:
    reader = csv.reader(f)
    columns = next(reader)
    query = 'insert into MyTable({0}) values ({1})'
    query = query.format(','.join(columns), ','.join('?' * len(columns)))
    cursor = connection.cursor()
    for data in reader:
        cursor.execute(query, data)
    cursor.commit()
If column names are not included in the file:
...
with open ('test.csv', 'r') as f:
    reader = csv.reader(f)
    data = next(reader)
    query = 'insert into MyTable values ({0})'
    query = query.format(','.join('?' * len(data)))
    cursor = connection.cursor()
    cursor.execute(query, data)
    for data in reader:
        cursor.execute(query, data)
    cursor.commit()
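For larger files, a possible variant (a sketch reusing the same query, reader and connection as above) sends the rows in batches with executemany():
cursor = connection.cursor()
batch = []
for data in reader:
    batch.append(data)
    if len(batch) == 1000:   # batch size is arbitrary
        cursor.executemany(query, batch)
        batch = []
if batch:                    # flush the remainder
    cursor.executemany(query, batch)
cursor.commit()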
I modified the code written above by Brian as follows, since the version posted above wouldn't work on the delimited files that I was trying to upload. The line row.pop() can also be ignored, as it was necessary only for the set of files that I was trying to upload.
import csv

def upload_table(path, filename, delim, cursor):
    """
    Function to upload flat file to sqlserver
    """
    tbl = filename.split('.')[0]
    cnt = 0
    with open (path + filename, 'r') as f:
        reader = csv.reader(f, delimiter=delim)
        for row in reader:
            row.pop()  # can be commented out
            row = ['NULL' if val == '' else val for val in row]
            row = [x.replace("'", "''") for x in row]
            out = "'" + "', '".join(str(item) for item in row) + "'"
            out = out.replace("'NULL'", 'NULL')
            query = "INSERT INTO " + tbl + " VALUES (" + out + ")"
            cursor.execute(query)
            cnt = cnt + 1
            if cnt % 10000 == 0:
                cursor.commit()
        cursor.commit()
    print("Uploaded " + str(cnt) + " rows into table " + tbl + ".")
You can also pass the row values as individual arguments. For example:
for rows in csv_data: # Iterate through csv
    cur.execute("INSERT INTO MyTable(Col1,Col2,Col3,Col4) VALUES (?,?,?,?)", *rows)
If you are using MySqlHook in Airflow and cursor.execute() with params throws an error like
TypeError: not all arguments converted during string formatting
use %s instead of ?:
with open('/usr/local/airflow/files/ifsc_details.csv','r') as csv_file:
    csv_reader = csv.reader(csv_file)
    columns = next(csv_reader)
    query = '''insert into ifsc_details({0}) values({1});'''
    query = query.format(','.join(columns), ','.join(['%s'] * len(columns)))
    mysql = MySqlHook(mysql_conn_id='local_mysql')
    conn = mysql.get_conn()
    cursor = conn.cursor()
    for data in csv_reader:
        cursor.execute(query, data)
    conn.commit()
I got it sorted out. The error was due to the size restriction of the table. I changed the column capacity, e.g. from col1 varchar(10) to col1 varchar(35), etc. Now it's working fine.
Here is the script; I hope this works for you:
import pandas as pd
import pyodbc as pc

connection_string = "Driver=SQL Server;Server=localhost;Database={0};Trusted_Connection=Yes;"
cnxn = pc.connect(connection_string.format("DataBaseNameHere"), autocommit=True)
cur = cnxn.cursor()

df = pd.read_csv("your_filepath_and_filename_here.csv").fillna('')
query = 'insert into TableName({0}) values ({1})'
query = query.format(','.join(df.columns), ','.join('?' * len(df.columns)))

cur.fast_executemany = True
cur.executemany(query, df.values.tolist())
cnxn.close()
You can also import data into SQL by using either:
The SQL Server Import and Export Wizard
SQL Server Integration Services (SSIS)
The OPENROWSET function
More details can be found on this webpage:
https://learn.microsoft.com/en-us/sql/relational-databases/import-export/import-data-from-excel-to-sql?view=sql-server-2017

Error in SQLite query

Merged with: I would like some advice why this would not insert data into my SQL table.
I am receiving this error:
query() argument 1 must be string or read-only buffer, not tuple.
I am not sure what the problem is after trying to change my code:
def insert_popularity(Category, filename, cursor):
    txt_file = file(filename, 'r')
    for line in txt_file:
        # Split the line on whitespace
        number, value = line.split()
        # construct the SQL statement
        sql = ("""INSERT INTO popularity (PersonNumber, Category, Value)
                  VALUES(%s, %s, %s)""", (number, Category, value))
        # execute the query
        cursor.execute(sql)

connection = MySQLdb.connect(host='localhost', user='root', \
                             passwd='password', db='dogs')
cursor = connection.cursor()
Category = 'dogs'
insert_popularity(Category, 'dogs.txt', cursor)
connection.commit()
cursor.close()
connection.close()
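For reference, the error message says that cursor.execute() received a tuple where it expected an SQL string; a minimal sketch of the corrected loop body keeps the statement and its parameters separate:
# build the SQL statement and the parameters as two separate objects ...
sql = """INSERT INTO popularity (PersonNumber, Category, Value)
         VALUES (%s, %s, %s)"""
params = (number, Category, value)
# ... and pass them to execute() as two arguments
cursor.execute(sql, params)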
