Fastest way to load .xlsx file into MySQL database - python

I'm trying to import data from a .xlsx file into a SQL database.
Right now, I have a Python script which uses the openpyxl and MySQLdb modules to:
- establish a connection to the database
- open the workbook
- grab the worksheet
- loop through the rows of the worksheet, extracting the columns I need and inserting each record into the database, one by one
Unfortunately, this is painfully slow. I'm working with a huge data set, so I need to find a faster way to do this (preferably with Python). Any ideas?
wb = openpyxl.load_workbook(filename="file", read_only=True)
ws = wb['My Worksheet']
conn = MySQLdb.connect()
cursor = conn.cursor()
cursor.execute("SET autocommit = 0")
for row in ws.iter_rows(row_offset=1):
    sql_row = ...  # the column values I need from this row
    cursor.execute("INSERT ...", sql_row)  # one INSERT per row
conn.commit()

Disable autocommit if it is on! Autocommit is a setting that makes MySQL immediately try to push each statement's changes to disk. This is fine if you only have one insert, but it is what makes each individual insert take so long. Instead, turn it off and insert the data all at once, committing only after you've run all of your insert statements.
Something like this might work:
con = MySQLdb.connect(
    host="your db host",
    user="your username",
    passwd="your password",
    db="your db name"
)
cursor = con.cursor()
cursor.execute("SET autocommit = 0")
data = ...  # some code to get data from excel
for datum in data:
    # prefer parameterized queries over string formatting to avoid SQL injection
    cursor.execute("your insert statement with %s placeholders", datum)
con.commit()
con.close()
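If committing once at the end still isn't fast enough, the per-row round trips can be batched with executemany. Below is a minimal sketch of the batching logic; the chunked helper and the table/column names are illustrative, not from the original post:

```python
def chunked(rows, size):
    """Yield successive lists of at most `size` rows."""
    batch = []
    for row in rows:
        batch.append(row)
        if len(batch) == size:
            yield batch
            batch = []
    if batch:
        yield batch

# Hypothetical usage with MySQLdb (table and column names are placeholders):
# for batch in chunked(rows_from_excel, 1000):
#     cursor.executemany("INSERT INTO my_table (col1, col2) VALUES (%s, %s)", batch)
# con.commit()
```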

Consider saving the workbook's worksheet as a CSV, then use MySQL's LOAD DATA INFILE. This is often very fast.
sql = """LOAD DATA INFILE '/path/to/data.csv'
INTO TABLE myTable
FIELDS TERMINATED BY ','
OPTIONALLY ENCLOSED BY '\"'
LINES TERMINATED BY '\n'"""
cursor.execute(sql)
con.commit()

Related

How to increase efficiency of inserting data into PostGIS with Python?

I need to insert 46 million points into a PostGIS database in a decent time. Inserting 14 million points took around 40 minutes, which is awful and inefficient.
I created database with spatial GIST index and wrote this code:
import psycopg2
import time

start = time.time()
conn = psycopg2.connect(host='localhost', port='5432', dbname='test2', user='postgres', password='alfabet1')
filepath = "C:\\Users\\nmt1m.csv"
curs = conn.cursor()

with open(filepath, 'r') as text:
    for i in text:
        i = i.replace("\n", "")
        i = i.split(sep=" ")
        curs.execute(f"INSERT INTO nmt_1 (geom, Z) VALUES (ST_GeomFromText('POINTZ({i[0]} {i[1]} {i[2]})', 0), {i[2]});")

conn.commit()
end = time.time()
print(end - start)
curs.close()
conn.close()
I'm looking for the best way to insert the data; it doesn't have to be in Python.
Thanks ;)
Cześć! Welcome to SO.
There are a few things you can do to speed up your bulk insert:
- If the target table is empty or is not being used in a production system, consider dropping the indexes right before inserting the data and recreating them once the insert is complete. This saves PostgreSQL from updating the index after every single insert, which in your case means 46 million times.
- If the target table can be entirely built from your CSV file, consider creating an UNLOGGED TABLE. Unlogged tables are much faster than "normal" tables, since (as the name suggests) they are not written to the WAL (write-ahead log). Bear in mind that unlogged tables can be lost in case of a database crash or an unclean shutdown!
- Use either the PostgreSQL COPY command or copy_from, as @MauriceMeyer pointed out. If for some reason you must stick to inserts, make sure you're not committing after every insert ;-)
Cheers
Thanks Jim for the help. Following your instructions, a better way to insert the data is:
import psycopg2
import time

start = time.time()
conn = psycopg2.connect(host='localhost', port='5432', dbname='test2',
                        user='postgres', password='alfabet1')
curs = conn.cursor()

filepath = "C:\\Users\\Jakub\\PycharmProjects\\test2\\testownik9_NMT\\nmt1m.csv"
curs.execute("CREATE UNLOGGED TABLE nmt_10 (id_1 FLOAT, id_2 FLOAT, id_3 FLOAT);")

with open(filepath, 'r') as text:
    curs.copy_from(text, 'nmt_10', sep=" ")

curs.execute("SELECT AddGeometryColumn('nmt_10', 'geom', 2180, 'POINTZ', 3);")
curs.execute("CREATE INDEX nmt_10_index ON nmt_10 USING GIST (geom);")
curs.execute("UPDATE nmt_10 SET geom = ST_SetSRID(ST_MakePoint(id_1, id_2, id_3), 2180);")
conn.commit()

end = time.time()
print(end - start)
cheers

How do I run a scraper on each entry in a database?

I'm scraping data with the requests library and parsing it with Beautiful Soup.
I'm storing the scraped data in a MySQL db.
I want to run the scraper each time a new entry appears in a table.
Assuming you already have your scraping method - let's call it scrape_data() - you can use the MySQL Python connector to run a query on the database and scrape as it reads each row (although you might want to buffer the rows into memory to handle disconnects):
# Importing the MySQL-Python-connector
import mysql.connector as mysqlConnector

# Creating a connection to the running MySQL server. Remember to use your own credentials.
conn = mysqlConnector.connect(host='localhost', user='root', passwd='root')

# Handle bad connections (note: connect() raises on failure rather than returning None)
if conn.is_connected():
    print("Connection Successful :)")
else:
    print("Connection Failed :(")

# Creating a cursor object to traverse the result set
cur = conn.cursor()

# Assuming the column is called data in a table called table. Replace as needed.
cur.execute("SELECT data FROM table")
for row in cur:
    scrape_data(row[0])  # Assumes data is the first column.

# Closing the connection - or you will end up with a resource leak
conn.close()
Note
You can find the official connector here.

pyodbc creating corrupt excel file

I'm using pyodbc to create a new excel file. The operations seem to execute fine, but the resulting xlsx file is corrupt.
I've stripped the code down to this minimal code snippet:
import pyodbc

# Setup path and driver connection string
spreadsheet_path = "C:\\temp\\test_spreadsheet.xlsx"
conn_str = (r'Driver={{Microsoft Excel Driver (*.xls, *.xlsx, *.xlsm, *.xlsb)}};'
            r'DBQ={}; ReadOnly=0').format(spreadsheet_path)

with pyodbc.connect(conn_str, autocommit=True) as conn:
    # Create table
    cursor = conn.cursor()
    query = "create table sheet1 (COL1 TEXT, COL2 NUMBER);"
    cursor.execute(query)
    cursor.commit()

    # Insert a row
    query = "insert into sheet1 (COL1, COL2) values (?, ?);"
    cursor.execute(query, "apples", 10)
    cursor.commit()

    # Check the row is there
    query = "select * from sheet1;"
    cursor.execute(query)
    for r in cursor.fetchall():
        print(r)

print("done")
Note this will create a new spreadsheet in the location specified by spreadsheet_path. I had to use a full path, because the ODBC driver doesn't like relative paths.
I have enabled autocommit, and manually called cursor.commit() just to test if it makes a difference, and it does not.
Any ideas?
--
After doing some searching, I found this guide to using the Excel ODBC driver with PowerShell and it mentions:
The problem is in the Workbook you create. Whether you name it XLS or
XSLX it produces an XLSX spreadsheet, in the latest zipped Office Open
XML form. The trouble is that, with my version of the driver, I can
only get Excel to read it with the XLS filetype, since it says that
there is an error if you try to open it as an .XLSX file. I suspect
that the ODBC driver hasn’t been that well tested by Microsoft.
If I change the file to be .xls, I can open it in excel (although it gives me a warning about the format and extension not matching). The data is valid though. Is this all due to Microsoft's crappy driver? Or is there something I'm doing wrong here?

Fast data moving from CSV to SQLite by Python

I have a problem. There are hundreds of CSV files, each about 1,000,000 lines long.
I need to move that data in a specific way, but the script works very slowly (it gets through a few tens of thousands of lines per hour).
My code:
import sqlite3 as lite
import csv

my_file = open('file.csv', 'r')
reader = csv.reader(my_file, delimiter=',')
date = '2014-09-29'
con = lite.connect('test.db', isolation_level='exclusive')

for row in reader:
    position = row[0]
    item_name = row[1]
    cur = con.cursor()
    cur.execute("CREATE TABLE IF NOT EXISTS [%s] (Date TEXT, Position INT)" % item_name)
    cur.execute("INSERT INTO [%s] VALUES(?, ?)" % item_name, (date, position))
con.commit()
I found information about isolation_level and exclusive access to the database, but it didn't help much.
Lines of the CSV files have the structure: 1,item1 | 2,item2
Could anyone help me? Thanks!
Don't do SQL inserts. Prepare the CSV file first, then use the sqlite3 command-line shell:
.separator <separator>
.import <loadFile> <tableName>
See here: http://cs.stanford.edu/people/widom/cs145/sqlite/SQLiteLoad.html
You certainly don't want to create a new cursor object for each row you insert - and checking for table creation on each line will certainly slow you down as well.
I'd suggest doing this in two passes: on the first pass you create the needed tables, on the second you record the data. If it is still slow, you could build a more sophisticated in-memory collection of the data to be inserted and use "executemany" - but that would require some sophistication to group the data by name in memory prior to committing.
import sqlite3 as lite
import csv

my_file = open('file.csv', 'r')
reader = csv.reader(my_file, delimiter=',')
date = '2014-09-29'
con = lite.connect('test.db', isolation_level='exclusive')
cur = con.cursor()

# First pass: create one table per distinct item name
table_names = set(row[1] for row in reader)
my_file.seek(0)
for name in table_names:
    cur.execute("CREATE TABLE IF NOT EXISTS [%s] (Date TEXT, Position INT)" % name)

# Second pass: insert the data
for row in reader:
    position = row[0]
    item_name = row[1]
    cur.execute("INSERT INTO [%s] VALUES(?, ?)" % item_name, (date, position))
con.commit()
The code is inefficient in that it performs two SQL statements for each row of the CSV. Some ways to optimize:
- Is there a way to preprocess the CSV and convert it to SQL statements first?
- Are the rows in the CSV grouped by table (item name)? If so, you can accumulate the rows destined for the same table (generating one set of INSERT statements per table) and prefix each set with CREATE TABLE IF NOT EXISTS just once, not before every insert.
- If possible, use bulk insert. If I remember correctly, multi-row INSERT was introduced in SQLite 3.7.11. More on this: Is it possible to insert multiple rows at a time in an SQLite database?
- If needed, bulk insert in chunks. More on this: Bulk insert huge data into SQLite using Python
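Applied to the table layout from the question, a chunked executemany could look like this (a sketch; the chunk size and how you group rows per table are up to you):

```python
import sqlite3

def insert_in_chunks(conn, table, rows, chunk_size=10000):
    """Insert a list of (date, position) rows into `table`,
    one executemany call per chunk, committing once at the end."""
    cur = conn.cursor()
    cur.execute("CREATE TABLE IF NOT EXISTS [%s] (Date TEXT, Position INT)" % table)
    for i in range(0, len(rows), chunk_size):
        cur.executemany("INSERT INTO [%s] VALUES (?, ?)" % table, rows[i:i + chunk_size])
    conn.commit()
```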
I had the same problem, and now it is solved! I would like to share the method with everyone facing the same problem!
We use a sqlite3 database as an example; other databases may also work, but I'm not sure. We use the pandas and sqlite3 modules in Python.
This can quickly convert a list of csv files [file1, file2, ...] into tables [table1, table2, ...].
import pandas as pd
import sqlite3 as sql

DataBasePath = "C:\\Users\\...\\database.sqlite"
conn = sql.connect(DataBasePath)

filePath = "C:\\Users\\...\\filefolder\\"
datafiles = ["file1", "file2", "file3", ...]
for f in datafiles:
    df = pd.read_csv(filePath + f + ".csv")
    df.to_sql(name=f, con=conn, if_exists='append', index=False)
conn.close()
What's more, this code creates the database file if it doesn't exist. The if_exists argument of DataFrame.to_sql() is important. Its default value is "fail", which raises an error if the table already exists; "replace" drops the table first if it exists, then creates a new table and imports the data; "append" inserts into the table if it exists, otherwise creates a new one and imports the data.

Converting dbf to sqlite using Python is not populating table

I've struggled with this issue for over an hour now. I'm trying to create an SQLite database from a dbf table. When I build a list of records from the dbf to feed the SQLite executemany statement, the SQLite table comes out empty. When I try to replicate the issue in an interactive Python session, the executemany succeeds. The list generated from the dbf is populated when I run it - so the problem lies in the executemany statement.
import sqlite3
from dbfpy import dbf

streets = dbf.Dbf("streets_sample.dbf")

conn = sqlite3.connect('navteq.db')
conn.execute('PRAGMA synchronous = OFF')
conn.execute('PRAGMA journal_mode = MEMORY')
conn.execute('DROP TABLE IF EXISTS STREETS')
conn.execute('''CREATE TABLE STREETS
               (blink_id CHAR(8) PRIMARY KEY,
                bst_name VARCHAR(39),
                bst_nm_pref CHAR(2));''')

alink_id = []
ast_name = []
ast_nm_pref = []
for i in streets:
    alink_id.append(i["LINK_ID"])
    ast_name.append(i["ST_NAME"])
    ast_nm_pref.append(i["ST_NM_PREF"])

streets_table = zip(alink_id, ast_name, ast_nm_pref)
conn.executemany("INSERT OR IGNORE INTO STREETS VALUES(?,?,?)", streets_table)
conn.close()
This may not be the only issue, but you need to call conn.commit() to save the changes to the SQLite database before closing the connection. Reference: http://www.python.org/dev/peps/pep-0249/#commit
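A self-contained illustration of the fix, using an in-memory database and made-up rows standing in for the dbf data:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE STREETS (blink_id CHAR(8) PRIMARY KEY, "
             "bst_name VARCHAR(39), bst_nm_pref CHAR(2))")
streets_table = [("00000001", "MAIN ST", "N"), ("00000002", "OAK AVE", "S")]
conn.executemany("INSERT OR IGNORE INTO STREETS VALUES (?,?,?)", streets_table)
conn.commit()  # without this, a file-backed database would lose the inserted rows
print(conn.execute("SELECT COUNT(*) FROM STREETS").fetchone()[0])  # → 2
```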
