Exporting MySQL tables to CSV through a Python script?

There are many solutions for exporting MySQL tables to CSV using Python, but I want to know the best way of doing it.
Currently I am exporting around 50 tables to CSV, which takes around 47 minutes and uses more than 16 GB of memory.
The code is:
sqlEngine = create_engine(
    f'mysql+pymysql://{MYSQL_READER_USERNAME}:%s@{MYSQL_READER_HOST}/{MYSQL_DB_NAME}'
    % urllib.parse.quote(f'{MYSQL_READER_PASSWORD}'),
    pool_recycle=3600)

def export_table(name, download_location):
    table = pd.read_sql(f'select /*+ MAX_EXECUTION_TIME(100000000) */ * from {name}', sqlEngine)
    table.to_csv(os.path.join(download_location, name + '.csv'), index=False)

tables = ['table1', ... , 'table50']
for table in tqdm(tables):
    print(f'\t => \t Storing {table}')
    export_table(table, store_dir)
I have seen several methods for exporting to CSV, such as:
using a cursor
using the pyodbc library
the pandas read_sql method
Is there any other method or technique, and which one is best for reducing memory usage or execution time?
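For what it's worth, one variant of the pandas approach above that keeps memory bounded is to stream each query in chunks and append to the CSV instead of materializing the whole table at once. A rough sketch, reusing sqlEngine from the snippet above; the chunk size of 100000 is an arbitrary choice, and stream_results support depends on the driver:

# Sketch only (not a drop-in replacement): stream each table in chunks so at
# most `chunksize` rows are held in memory at a time.
import os
import pandas as pd

def export_table_chunked(name, download_location, chunksize=100000):
    out_path = os.path.join(download_location, name + '.csv')
    first = True
    # stream_results asks the driver for a server-side cursor so the full
    # result set is not buffered on the client before chunking.
    with sqlEngine.connect().execution_options(stream_results=True) as conn:
        for chunk in pd.read_sql(f'select * from {name}', conn, chunksize=chunksize):
            chunk.to_csv(out_path, mode='w' if first else 'a', header=first, index=False)
            first = False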

Related

What is the fastest way to insert data into a Snowflake DB table?

I have multiple .csv.gz files (each greater than 10 GB) that need to be parsed; multiple input rows are read to create one row for insertion. The approach I'm taking is as follows:
read .csv.gz file
save soon-to-be-inserted rows into a buffer
if there is enough data in the buffer, perform a multi-row insert into the database table
Now Snowflake limits the maximum number of expressions to 16,384. I've been running this for about a day, but the insertion speed is very slow. I am using SQLAlchemy to do this:
url = "snowflake://<my snowflake url>"
engine = create_engine(url)
savedvalues = []
with pd.read_csv(datapath, header=0, chunksize=10**6) as reader:
for chunk in reader:
for index, row in chunk.iterrows():
"""
<parsing data>
"""
savedvalues.append(<parsed values>)
if(len(savedvalues) > 16384):
stmt = mytable.insert().values(savedvalues)
with engine.connect() as conn:
conn.execute(stmt)
savedvalues = []
Is there a faster way to insert data into Snowflake database tables?
I'm looking into the COPY INTO <table> operation, but I'm not sure if it is truly faster than what I'm doing right now.
Any suggestions would be much appreciated!
Here is an article describing a Python multithreaded approach to bulk loading into Snowflake: Zero to Snowflake: Multi-Threaded Bulk Loading with Python. Also note that, to optimize the number of parallel operations for a load, Snowflake recommends compressed data files of roughly 100-250 MB (or larger).
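For reference, a minimal sketch of the PUT + COPY INTO route using the snowflake-connector-python package. It assumes the parsed rows have already been written out to local .csv.gz files; the connection parameters, the table name MYTABLE, and the file path are placeholders, and this is not benchmarked against the multi-row INSERT approach above:

# Stage the compressed file with PUT, then bulk load it with one COPY INTO.
import snowflake.connector

conn = snowflake.connector.connect(
    user="<user>", password="<password>", account="<account>",
    warehouse="<warehouse>", database="<database>", schema="<schema>")
cur = conn.cursor()

# Upload the already-gzipped file to the table's internal stage.
cur.execute("PUT file:///path/to/parsed_data.csv.gz @%MYTABLE AUTO_COMPRESS=FALSE")

# Load everything staged for the table in a single bulk operation.
cur.execute("""
    COPY INTO MYTABLE
    FROM @%MYTABLE
    FILE_FORMAT = (TYPE = CSV SKIP_HEADER = 1 FIELD_OPTIONALLY_ENCLOSED_BY = '"')
""")
conn.close()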

How to speed up file reading with pandas read_*() when I don't need all the rows and columns?

I know there are some other threads that explain how to deal with big files and pandas, but in fact I do not have memory problems; I just want to open a lot of Excel files to get a bunch of rows each time (sometimes even only one), and sometimes I don't even need all the columns.
I've seen people in other threads proposing usecols and nrows, but it appears that pandas still loads the entire sheet and then keeps only the selected rows and columns. To be sure, I wrote this:
start = time.time()
couples2015 = pd.read_excel(fileInput)
total = time.time() - start
#Reloading file, with only some lines and cols
start = time.time()
couples2015 = pd.read_excel(fileInput, header=4, usecols=0, nrows=10)
total = time.time() - start
and in both cases it took about 55 seconds to load.
And that's only for a 50 MB file, but I have to load and extract a lot of files, from 50 MB up to 500 MB (sometimes even up to 1 GB).
Is there a way to extract some rows and columns without loading the whole files?
If not, would creating a DB from my Excel files and using read_sql_table() be faster?
Thanks!
[edit: moreover, each file has several sheets, but I often want only one or two. Even if I use sheet_name=0, it seems that it still opens and loads every sheet, as the time is almost the same…]
If using Excel for Windows, consider directly querying workbooks using the Jet/ACE SQL engine (Windows .dll files) via an ODBC connection to the installed Excel driver. That way, each sheet serves as a database table, typical SQL semantics (JOIN, UNION, WHERE, GROUP BY) are available, and the result can be read with pandas.read_sql.
Adjust the SQL statement below with your actual columns, sheet names, and ranges.
import pyodbc
import pandas as pd

strfile = r"C:\Path\To\Workbook.xlsx"

conn = pyodbc.connect(r'Driver={{Microsoft Excel Driver (*.xls, *.xlsx, *.xlsm, *.xlsb)}};DBQ={};'
                      .format(strfile), autocommit=True)

strSQL = """SELECT Col1, Col2, Col3
            FROM [Sheet1$A4:C10]
         """

df = pd.read_sql(strSQL, conn)
conn.close()
For data without headers, consider an inline Excel query that specifies no headers, with data starting on the first row of the specified range.
strSQL = """SELECT F1, F2, F3
FROM [Excel 12.0 Xml;HDR=NO;IMEX=1;Database=C:\Path\To\Same\Workbook.xlsx].[Sheet$A6:L10000]
WHERE F2 = 'Some Value';
"""
By the way, if your last row is unknown, simply give it a very large number. The query engine selects only the used rows.

How to avoid 'database disk image malformed' error while loading large json file in python / pandas?

I am trying to read a table from a SQLite database (about 4 GB in size). Each cell of the table is a JSON document (a few cells contain large JSON documents).
The query works fine when I execute it within DB Browser, but in Python it gives an error: 'Database disk image is malformed'.
I have tried with different tables and the problem persists. The number of rows to fetch with the query is about 5,000. However, each cell in itself might hold a long JSON-structured string (of about 10,000 lines).
I have already tried working with replicas of the database and with other databases. I also tried the following in the DB:
Pragma integrity check;
Pragma temp_store = 2; // to force data into RAM
The problem seems to be linked to pandas / Python rather than the actual DB:
import sqlite3
import pandas as pd
conn = sqlite3.connect(db)
sql = """
select a.Topic, a.Timestamp, a.SessionId, a.ContextMask, b.BuildUUID, a.BuildId, a.LayerId,
a.Payload
from MessageTable a
inner JOIN
BuildTable b
on a.BuildId = b.BuildId
where a.Topic = ('Engine/Sensors/SensorData')
and b.BuildUUID = :job
"""
cur = conn.cursor()
cur.execute(sql, {"job" : '06c95a97-40c7-49b7-ad1b-0d439d412464'})
sensordf = pd.DataFrame(data=cur.fetchall(),
                        columns=['Topic', 'Timestamp_epoch', 'SessionId', 'ContextMask',
                                 'BuildUUID', 'BuildId', 'LayerId', 'Payload'])
I expect the output to be a pandas DataFrame with the last column containing a JSON value in each cell. I can then write a further script to parse the JSON and extract more data.
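As an aside, once the query result is in the DataFrame, the follow-up parsing step mentioned above could look roughly like this. This is only a sketch: it assumes each Payload cell holds a single JSON object, and the actual keys inside Payload are unknown here:

import json
import pandas as pd

# Hypothetical follow-up: expand the JSON text stored in the Payload column
# into its own columns.
payload_dicts = sensordf['Payload'].apply(json.loads)
payload_df = pd.json_normalize(payload_dicts.tolist())
sensordf = pd.concat([sensordf.drop(columns=['Payload']), payload_df], axis=1)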

Exponential in SQLite3 in Python

I'm writing Python code that creates a SQLite database and does some calculations on massive tables. To begin with, the reason I'm doing it in SQLite through Python is memory: my data is so huge that it would run into a memory error if processed in, say, pandas, and if chunked it would take ages, mainly because pandas is slow with merges and group-bys, etc.
My issue is that, at some point, I want to calculate the exponential of one column in a table (sample code below), but it seems that SQLite doesn't have an EXP function.
I could write the data to a DataFrame and then use NumPy to calculate the EXP, but that defeats the whole point that pushed me towards databases, and adds the extra time of reading/writing back and forth between the DB and Python.
So my question is this: is there a way to calculate the exponential within the database? I've read that I can create the function within sqlite3 in Python, but I have no idea how. If you know how, or can direct me to where I can find relevant info, I would be thankful. Thanks.
Here is a sample of my code where I'm trying to do the calculation. Note that I'm just providing a sample where the table comes directly from a CSV, but in my process it's actually created within the DB after lots of merges and group-bys:
import sqlite3
import pandas as pd

# set path and file names
folderPath = 'C:\\SCP\\'
inputDemandFile = 'demandFile.csv'

# set connection to database
conn = sqlite3.connect(folderPath + dataBaseName)
cur = conn.cursor()

# read demand file into db
inputDemand = pd.read_csv(folderPath + inputDemandFile)
inputDemand.to_sql('inputDemand', conn, if_exists='replace', index=False)

# create new table and calculate EXP
cur.execute('CREATE TABLE demand_exp AS SELECT from_zone_id, to_zone_id, EXP(demand) AS EXP_Demand FROM inputDemand;')
I've read that I can create the function within sqlite3 in Python, but I have no idea how.
That's conn.create_function():
https://docs.python.org/3/library/sqlite3.html#sqlite3.Connection.create_function
>>> import math
>>> conn.create_function('EXP', 1, math.exp)
>>> cur.execute('select EXP(1)')
>>> cur.fetchone()
(2.718281828459045,)
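Applied to the sample code from the question (same variable names), the registration would happen once on the connection, before the CREATE TABLE statement; a sketch:

import math
import sqlite3

conn = sqlite3.connect(folderPath + dataBaseName)   # same connection as in the question
conn.create_function('EXP', 1, math.exp)            # register EXP as a one-argument SQL function
cur = conn.cursor()
cur.execute('CREATE TABLE demand_exp AS '
            'SELECT from_zone_id, to_zone_id, EXP(demand) AS EXP_Demand FROM inputDemand;')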

Python + MySQLdb: simple SELECT is too slow compared to flat file access

I have one simple table with 80,000 rows.
I'm trying to select and save all rows into a Python list as fast as possible.
It's taking around 4 - 10 seconds.
In contrast, if I dump the exact same table into a CSV file and process it with this code:
f = open('list.csv','rb')
lines = f.read().splitlines()
f.close()
print len(lines)
it takes only 0.08 - 0.3 seconds.
I tried MySQLdb and mysql.connector, using fetchall() or fetchone():
import time
start = time.time()

import MySQLdb as mdb
con = mdb.connect('127.0.0.1', 'login', 'p', 'db')
with con:
    cur = con.cursor()
    cur.execute("SELECT * FROM table")
    rows = cur.fetchall()
    print len(rows)
print 'MySQLdb %s' % (time.time()-start)
Took 3.7 - 8 seconds with high CPU load.
Is it possible to achieve the same speed as with that CSV file?
EDIT
My MySQL server seems to be OK.
In the MySQL console:
SELECT * from TABLE;
....
80789 rows in set (0.21 sec)
The entire query result is stored on the client side as a list once cur.execute(...) is done; check the self._rows attribute in MySQLdb/cursors.py to see this.
That is to say, the time cost differs between reading a file's content and fetching query results from a MySQL database. As we all know, a built-in function is always faster than a third-party one, so I don't think there is a way to make cursor.execute() as fast as open().
As for why open() is faster, I suggest you look into the Python source code. Here is the link.
Hope it helps.
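Not from the original answer, but related to the client-side buffering mentioned above: MySQLdb also offers a server-side cursor (SSCursor) that streams rows instead of buffering the whole result before fetchall(). This mainly helps peak memory and sometimes CPU; it will not necessarily match the flat-file timings. A rough sketch, reusing the connection parameters from the question:

import MySQLdb as mdb
import MySQLdb.cursors

con = mdb.connect('127.0.0.1', 'login', 'p', 'db',
                  cursorclass=MySQLdb.cursors.SSCursor)
cur = con.cursor()
cur.execute("SELECT * FROM table")
rows = [row for row in cur]   # rows are streamed from the server, not pre-buffered in self._rows
cur.close()
con.close()
print(len(rows))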
