I'm writing Python code that creates a SQLite database and does some calculations on massive tables. To begin with, the reason I'm doing it in SQLite through Python is memory: my data is so big that it hits a memory error if run in, say, pandas, and if chunked it takes ages, mostly because pandas is slow with merges, group-bys, etc.
So my issue is that at some point I want to calculate the exponential of one column in a table (sample code below), but it seems that SQLite doesn't have an EXP function.
I could write the data to a dataframe and then use numpy to calculate the exponential, but that defeats the whole point that pushed me towards a database: not paying the extra time of reading and writing back and forth between the DB and Python.
So my question is this: is there a way to calculate the exponential within the database? I've read that I can create the function within sqlite3 in Python, but I have no idea how. If you know how, or can point me to relevant info, I'd be thankful. Thanks.
Here is a sample of my code where I'm trying to do the calculation. Note that in this sample the table comes straight from a CSV, but in my actual process it's created within the DB after lots of merges and group-bys:
import sqlite3
import pandas as pd

# set paths and file names
folderPath = 'C:\\SCP\\'
inputDemandFile = 'demandFile.csv'
dataBaseName = 'demand.db'  # placeholder; not defined in the original snippet

# set connection to database
conn = sqlite3.connect(folderPath + dataBaseName)
cur = conn.cursor()

# read demand file into db
inputDemand = pd.read_csv(folderPath + inputDemandFile)
inputDemand.to_sql('inputDemand', conn, if_exists='replace', index=False)

# create new table and calculate EXP -- this fails because SQLite has no built-in EXP()
cur.execute('CREATE TABLE demand_exp AS SELECT from_zone_id, to_zone_id, EXP(demand) AS EXP_Demand FROM inputDemand;')
"I've read that I can create the function within sqlite3 in Python, but I have no idea how."
That's conn.create_function():
https://docs.python.org/3/library/sqlite3.html#sqlite3.Connection.create_function
>>> import math
>>> conn.create_function('EXP', 1, math.exp)
>>> cur.execute('select EXP(1)')
>>> cur.fetchone()
(2.718281828459045,)
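Applied to the question's snippet, you would register the function on the connection right before the CREATE TABLE; a minimal sketch reusing the conn and cur from the sample code above:
import math

# register a scalar SQL function named EXP that takes 1 argument
conn.create_function('EXP', 1, math.exp)

# the original statement now runs entirely inside the database
cur.execute('CREATE TABLE demand_exp AS SELECT from_zone_id, to_zone_id, EXP(demand) AS EXP_Demand FROM inputDemand;')
conn.commit()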
There are many solutions for exporting MySQL tables to CSV using Python, but I want to know the best way of doing it.
Currently I am storing around 50 tables to CSV, which takes around 47 minutes and more than 16 GB of memory.
The code is:
import os, urllib.parse
import pandas as pd
from sqlalchemy import create_engine
from tqdm import tqdm

sqlEngine = create_engine(f'mysql+pymysql://{MYSQL_READER_USERNAME}:%s@{MYSQL_READER_HOST}/{MYSQL_DB_NAME}' % urllib.parse.quote(f'{MYSQL_READER_PASSWORD}'), pool_recycle=3600)

def export_table(name, download_location):
    table = pd.read_sql(f'select /*+ MAX_EXECUTION_TIME(100000000) */ * from {name}', sqlEngine)
    table.to_csv(os.path.join(download_location, name + '.csv'), index=False)

tables = ['table1', ... , 'table50']
for table in tqdm(tables):
    print(f'\t => \t Storing {table}')
    export_table(table, store_dir)
I have seen many methods of storing to CSV, such as:
using a cursor
using the pyodbc library
the pandas read_sql method
Is there any other method or technique, and which one is best for reducing memory usage or execution time?
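For what it's worth, the pandas route can be made much lighter on memory by streaming with read_sql's chunksize instead of loading each table at once; a minimal sketch of that variant, reusing the sqlEngine from the snippet above (the chunk size of 100000 is just an illustration):
import os
import pandas as pd

def export_table_chunked(name, download_location, chunk_rows=100000):
    out_path = os.path.join(download_location, name + '.csv')
    first = True
    # only chunk_rows rows are held in memory at any time
    for chunk in pd.read_sql(f'select * from {name}', sqlEngine, chunksize=chunk_rows):
        chunk.to_csv(out_path, mode='w' if first else 'a', header=first, index=False)
        first = False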
Today I started to learn Postgres, and I was trying to do the same thing I do to load dataframes into my Oracle DB.
So, for example, I have a df that contains 70k records and 10 columns. My code for this is the following:
from sqlalchemy import create_engine
conn = create_engine('postgresql://'+data['user']+':'+data['password']+'@'+data['host']+':'+data['port_db']+'/'+data['dbname'])
df.to_sql('first_posgress', conn)
This code is pretty much the same as what I use for my Oracle tables, but in this case it takes a long time to finish. So I was wondering if there is a better way to do this, or whether Postgres is just generally slower.
I found some examples on SO and Google, but most focus on creating the table, not on inserting a df.
If it is possible for you to use psycopg2 instead of SQLAlchemy, you can write your df to a CSV buffer and then use cursor.copy_from() to copy it into the DB.
import io
import psycopg2

output = io.StringIO()
# no header and no index: copy_from expects raw rows that match the column list below
df.to_csv(output, sep=",", header=False, index=False)
output.seek(0)

# cursor is a psycopg2 cursor, con the corresponding psycopg2 connection
cursor.copy_from(
    output,
    target_table,  # e.g. 'first_posgress'
    sep=",",
    columns=tuple(df.columns)
)
con.commit()
(I don't know if there is a similarly fast option in SQLAlchemy.)
Psycopg2 Cursor Documentation
This blogpost contains more information!
Hopefully this is useful for you!
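On the SQLAlchemy side note above: pandas' to_sql accepts a method= callable, and the pandas documentation shows an insertion method built on psycopg2's copy_expert, which keeps the to_sql interface while getting COPY-like speed. A minimal sketch along those lines (the table and column names come from what pandas passes in):
import csv
import io

def psql_insert_copy(table, conn, keys, data_iter):
    # callable for df.to_sql(..., method=psql_insert_copy)
    dbapi_conn = conn.connection  # raw psycopg2 connection behind SQLAlchemy
    with dbapi_conn.cursor() as cur:
        buf = io.StringIO()
        csv.writer(buf).writerows(data_iter)
        buf.seek(0)
        columns = ', '.join('"{}"'.format(k) for k in keys)
        table_name = '"{}"."{}"'.format(table.schema, table.name) if table.schema else '"{}"'.format(table.name)
        cur.copy_expert('COPY {} ({}) FROM STDIN WITH CSV'.format(table_name, columns), buf)

# usage: df.to_sql('first_posgress', conn, method=psql_insert_copy, index=False)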
I am trying to read a table from a SQLite database (about 4 GB in size). Each cell of the table contains JSON (a few cells hold large JSON documents).
The query works fine when I execute it within DB Browser, but in Python it gives an error: 'Database disk image is malformed'.
I have tried different tables and the problem persists. The query fetches about 5000 rows; however, each cell may itself contain a long JSON string (around 10000 lines).
I have already tried working with replicas of the database and with other databases. I also tried the following in the DB:
PRAGMA integrity_check;
PRAGMA temp_store = 2; -- to force temp data into RAM
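For reference, the same integrity check can be run from Python with the sqlite3 connection used below; it returns the single row ('ok',) when no corruption is detected:
import sqlite3

conn = sqlite3.connect(db)  # db is the same path used in the snippet below
print(conn.execute('PRAGMA integrity_check;').fetchall())  # [('ok',)] if the file is healthy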
The problem seems to be linked to pandas/Python rather than to the actual DB:
import sqlite3
import pandas as pd
conn = sqlite3.connect(db)
sql = """
select a.Topic, a.Timestamp, a.SessionId, a.ContextMask, b.BuildUUID, a.BuildId, a.LayerId,
a.Payload
from MessageTable a
inner JOIN
BuildTable b
on a.BuildId = b.BuildId
where a.Topic = ('Engine/Sensors/SensorData')
and b.BuildUUID = :job
"""
cur = conn.cursor()
cur.execute(sql, {"job" : '06c95a97-40c7-49b7-ad1b-0d439d412464'})
sensordf = pd.DataFrame(data = cur.fetchall(), columns = ['Topic', 'Timestamp_epoch', 'SessionId', 'ContextMask',
                                                          'BuildUUID', 'BuildId', 'LayerId', 'Payload'])
I expect the output to be a pandas dataframe whose last column contains JSON values in each cell. I can then write a further script to parse the JSON and extract more data.
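For that follow-up parsing step, a minimal sketch of expanding the Payload column, assuming each cell holds a single JSON object (the column name is taken from the query above, everything else is illustrative):
import json
import pandas as pd

# parse each JSON string into a dict, then flatten the dicts into columns
payload_df = pd.json_normalize(sensordf['Payload'].apply(json.loads).tolist())

# join the expanded fields back onto the original rows
sensordf = sensordf.join(payload_df)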
I have used Python to parse a txt file for specific information (dates, $ amounts, lbs, etc) and now I want to export that data to an Oracle table that I made in SQL Developer.
I have successfully connected Python to Oracle with the cx_Oracle module, but I am struggling to export or even print any data to my database from Python.
I am not proficient at SQL; I know simple queries and that's about it. I have explored the Oracle docs and haven't found straightforward export commands. When exporting data to an Oracle table via Python, will I be writing Python code or SQL code? Is it the same as importing a CSV file, for example?
I would like to understand how to write to an Oracle table from Python; I need to parse and export a very large amount of data, so this won't be a one-time export/import. I would also ideally like a way to preview my import to ensure it aligns correctly with my already created Oracle table, or, if a simple undo action exists, that would suffice.
If my problem is unclear I am more than happy to clarify it. Thanks for all help.
My code so far:
import cx_Oracle
dsnStr = cx_Oracle.makedsn("sole.wh.whoi.edu", "1526", "sole")
con = cx_Oracle.connect(user="myusername", password="mypassword", dsn=dsnStr)
print (con.version)
#imp 'Book1.csv' [this didn't work]
cursor = con.cursor()
print (cursor)
con.close()
From Import a CSV file into Oracle using CX_Oracle & Python 2.7 you can see the overall plan.
So if you have already parsed the data into a CSV, you can do it like this:
import cx_Oracle
import csv

dsnStr = cx_Oracle.makedsn("sole.wh.whoi.edu", "1526", "sole")
con = cx_Oracle.connect(user="myusername", password="mypassword", dsn=dsnStr)
print(con.version)

cursor = con.cursor()
text_sql = '''
INSERT INTO tablename (firstfield, secondfield) VALUES (:1, :2)
'''
my_file = r'C:\CSVData\Book1.csv'
with open(my_file, newline='') as f:
    cr = csv.reader(f)
    for row in cr:
        print(row)
        cursor.execute(text_sql, row)
print('Imported')
con.commit()  # commit so the inserted rows are actually saved
con.close()
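Since this won't be a one-time import, it is worth noting that cx_Oracle's cursor.executemany() sends the binds in one batch and is much faster than calling execute() per row; a minimal sketch under the same table and column assumptions as above:
with open(my_file, newline='') as f:
    rows = list(csv.reader(f))  # for very large files, read and send in slices instead

# one round trip for the whole batch instead of one execute() per row
cursor.executemany(text_sql, rows)
con.commit()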
I would like to get some clarity on a question I was pretty sure I understood. Is there any way, using psycopg2 or any other Python Postgres database adapter, to create a table whose name corresponds to the .csv file and, probably most important, whose columns are the ones specified in the .csv file?
I'll leave you to look at the psycopg2 library properly; this is off the top of my head (I haven't had to use it for a while, but IIRC the documentation is ample).
The steps are:
Read the column names from the CSV file
Build the "CREATE TABLE whatever ( ... )" statement
Maybe INSERT the data
import csv
import os.path

my_csv_file = '/home/somewhere/file.csv'
# table name = file name without directory or extension
table_name = os.path.splitext(os.path.split(my_csv_file)[1])[0]
# first row of the CSV = the column names
cols = next(csv.reader(open(my_csv_file)))
You can go from there...
Create the SQL query (possibly using a templating engine for the fields) and then issue the INSERT if need be; a sketch of those remaining steps is below.
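A minimal sketch of those remaining steps, reusing my_csv_file, table_name, cols and the csv import from above, and assuming every column can simply be created as TEXT (the connection details are placeholders; psycopg2.sql is used to quote identifiers safely):
import psycopg2
from psycopg2 import sql

conn = psycopg2.connect("dbname=mydb user=myuser")  # placeholder connection string
cur = conn.cursor()

# CREATE TABLE "<file name>" ("col1" TEXT, "col2" TEXT, ...)
create_stmt = sql.SQL("CREATE TABLE {} ({})").format(
    sql.Identifier(table_name),
    sql.SQL(", ").join([sql.SQL("{} TEXT").format(sql.Identifier(c)) for c in cols]),
)
cur.execute(create_stmt)

# insert the remaining rows of the CSV
insert_stmt = sql.SQL("INSERT INTO {} VALUES ({})").format(
    sql.Identifier(table_name),
    sql.SQL(", ").join([sql.Placeholder() for _ in cols]),
)
with open(my_csv_file) as f:
    reader = csv.reader(f)
    next(reader)  # skip the header row already used for cols
    cur.executemany(insert_stmt, list(reader))

conn.commit()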