Python - writing to SQL Server database using sqlalchemy from a pandas dataframe

I have a pandas DataFrame of approximately 300,000 rows (20 MB) and want to write it to a SQL Server database.
I have the following code, but it is very slow to execute. Is there a better way?
import pandas
import sqlalchemy
engine = sqlalchemy.create_engine('mssql+pyodbc://rea-eqx-dwpb/BIWorkArea?driver=SQL+Server')
df.to_sql(name='LeadGen Imps&Clicks', con=engine, schema='BIWorkArea',
          if_exists='replace', index=False)

If you want to speed up the process of writing to the SQL database, you can pre-set the dtypes of the table in your database based on the data types of your pandas DataFrame:
from sqlalchemy import types, create_engine
# build a mapping from column name to SQLAlchemy column type
d = {}
for k, v in zip(df.dtypes.index, df.dtypes):
    if v == 'object':
        d[k] = types.VARCHAR(df[k].str.len().max())
    elif v == 'float64':
        d[k] = types.FLOAT(126)
    elif v == 'int64':
        d[k] = types.INTEGER()
Then pass the mapping to to_sql via the dtype argument:
df.to_sql(name='LeadGen Imps&Clicks', con=engine, schema='BIWorkArea', if_exists='replace', index=False, dtype=d)
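For illustration, on a small, made-up DataFrame the loop above would produce a mapping along these lines (column names and values here are hypothetical):
import pandas as pd
from sqlalchemy import types

df = pd.DataFrame({'campaign': ['brand', 'search'],  # object column
                   'clicks': [120, 340],             # int64 column
                   'cost': [10.5, 22.0]})            # float64 column
# running the loop above on this df yields roughly:
# {'campaign': types.VARCHAR(6), 'clicks': types.INTEGER(), 'cost': types.FLOAT(126)}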

Related

Trying to read sqlite database to Dask dataframe

I am trying to read a table from a sqlite database on Kaggle using Dask.
Link to the DB: https://www.kaggle.com/datasets/marcilonsilvacunha/amostracnpj?select=amostraCNPJ.sqlite
Some of the tables in this database are really large, and I want to test how Dask handles them.
I wrote the following code for one of the tables in the smaller sqlite database:
import dask.dataframe as ddf
import sqlite3
# Read sqlite query results into a pandas DataFrame
con = sqlite3.connect("/kaggle/input/amostraCNPJ.sqlite")
df = ddf.read_sql_table('cnpj_dados_cadastrais_pj', con, index_col='cnpj')
# Verify that result of SQL query is stored in the dataframe
print(df.head())
This gives an error:
AttributeError: 'sqlite3.Connection' object has no attribute '_instantiate_plugins'
Any help would be appreciated, as this is the first time I have used Dask to read sqlite.
As the docstring states, you should not pass a connection object to Dask. You need to pass a SQLAlchemy-compatible connection string:
df = ddf.read_sql_table('cnpj_dados_cadastrais_pj',
                        'sqlite:////kaggle/input/amostraCNPJ.sqlite',
                        index_col='cnpj')
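Since the goal is to see how Dask handles the larger tables, it may also be worth controlling the partitioning explicitly. A sketch using the same table and index column (the npartitions value here is arbitrary):
import dask.dataframe as ddf

# pass a SQLAlchemy URI, not a sqlite3 connection, and let Dask split the table
df = ddf.read_sql_table('cnpj_dados_cadastrais_pj',
                        'sqlite:////kaggle/input/amostraCNPJ.sqlite',
                        index_col='cnpj',
                        npartitions=10)
print(df.head())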

How to use ODBC connection for pyspark.pandas

In the following Python code I can successfully connect to an MS Azure SQL Db using an ODBC connection, and can load data into an Azure SQL table using pandas' DataFrame method to_sql(...). But when I use pyspark.pandas instead, the to_sql(...) method fails, stating that no such method is supported. I know the pandas API on Spark has reached about 97% coverage, but I was wondering if there is an alternate way of achieving the same thing while still using ODBC.
Question: In the following code sample, how can we use ODBC connection for pyspark.pandas for connecting to Azure SQL db and load a dataframe into a SQL table?
import sqlalchemy as sq
#import pandas as pd
import pyspark.pandas as ps
import datetime
data_df = ps.read_csv('/dbfs/FileStore/tables/myDataFile.csv', low_memory=False, quotechar='"', header='infer')
.......
data_df.to_sql(name='CustomerOrderTable', con=engine, if_exists='append', index=False,
               dtype={'OrderID': sq.VARCHAR(10),
                      'Name': sq.VARCHAR(50),
                      'OrderDate': sq.DATETIME()})
Ref: Pandas API on Spark and this
UPDATE: The data file is about 6.5 GB with 150 columns and 15 million records. Therefore plain pandas cannot handle it and, as expected, it gives an OOM (out of memory) error.
I noticed you were appending the data to the table, so this workaround came to mind: break the pyspark.pandas DataFrame into chunks, convert each chunk to pandas, and append each chunk from there.
n = len(data_df) // 20                      # chunk size, giving roughly 20 chunks
list_dfs = [data_df.iloc[i:i + n] for i in range(0, len(data_df), n)]
for chunk in list_dfs:
    pdf = chunk.to_pandas()                 # convert one chunk to plain pandas
    pdf.to_sql(name='CustomerOrderTable', con=engine, if_exists='append', index=False)
As per the official pyspark.pandas documentation from Apache Spark, this module does not provide a method that loads a DataFrame into a SQL table.
Please see all provided methods here.
As an alternative approach, similar questions are discussed in these SO threads, which might be helpful:
How to write to a Spark SQL table from a Panda data frame using PySpark?
How can I convert a pyspark.sql.dataframe.DataFrame back to a sql table in databricks notebook
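For completeness, the approach those threads point at is to convert the pandas-on-Spark DataFrame back to a plain Spark DataFrame and write it through Spark's JDBC writer rather than ODBC. A rough sketch; the JDBC URL and credentials are placeholders to adapt for your Azure SQL database:
# assuming data_df is the pyspark.pandas DataFrame from the question
spark_df = data_df.to_spark()              # back to a plain Spark DataFrame
(spark_df.write
    .format("jdbc")
    .option("url", "jdbc:sqlserver://<server>.database.windows.net:1433;database=<db>")  # placeholder
    .option("dbtable", "CustomerOrderTable")
    .option("user", "<user>")              # placeholder
    .option("password", "<password>")      # placeholder
    .option("driver", "com.microsoft.sqlserver.jdbc.SQLServerDriver")
    .mode("append")
    .save())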

Improve loading a dataframe into a Postgres db with Python

Today I started to learn Postgres, and I was trying to do the same thing I do to load dataframes into my Oracle db.
So, for example, I have a df that contains 70k records and 10 columns. My code for this is the following:
from sqlalchemy import create_engine
conn = create_engine('postgresql://' + data['user'] + ':' + data['password'] + '@' + data['host'] + ':' + data['port_db'] + '/' + data['dbname'])
df.to_sql('first_posgress', conn)
This code is basically the same as what I use for my Oracle tables, but in this case it takes a long time to accomplish the task. So I was wondering if there is a better way to do this, or whether Postgres is just slower in general.
I found some examples on SO and Google, but most are focused on creating the table, not inserting a df.
If it is possible for you to use psycopg2 instead of SQLAlchemy, you can write your df to an in-memory CSV and then use cursor.copy_from() to copy the CSV into the db.
import io
output = io.StringIO()
df.to_csv(output, sep=",", header=False, index=False)  # copy_from expects raw rows: no header, no index
output.seek(0)
# psycopg2 cursor:
cursor.copy_from(
    output,
    target_table,  # 'first_posgress'
    sep=",",
    columns=tuple(df.columns)
)
con.commit()  # psycopg2 connection
(I don't know if there is a similar function in SQLAlchemy that is as fast.)
Psycopg2 Cursor Documentation
This blog post contains more information!
Hopefully this is useful for you!
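On the SQLAlchemy question: pandas' to_sql accepts a method callable, and the pandas documentation includes a COPY-based insertion-method example for PostgreSQL. A sketch adapted from that example, reusing the engine the question names conn:
import csv
import io

def psql_insert_copy(table, conn, keys, data_iter):
    # adapted from the insertion-method example in the pandas to_sql docs
    dbapi_conn = conn.connection            # raw psycopg2 connection
    with dbapi_conn.cursor() as cur:
        buf = io.StringIO()
        csv.writer(buf).writerows(data_iter)
        buf.seek(0)
        columns = ', '.join('"{}"'.format(k) for k in keys)
        table_name = '{}.{}'.format(table.schema, table.name) if table.schema else table.name
        sql = 'COPY {} ({}) FROM STDIN WITH CSV'.format(table_name, columns)
        cur.copy_expert(sql=sql, file=buf)

df.to_sql('first_posgress', conn, index=False, method=psql_insert_copy)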

Pandas Dataframe to Postgres table conversion not working

I am converting a csv file into a Pandas dataframe and then essentially converting it to a Postgres table.
The problem is that I am able to create a table in Postgres but I am unable to select column names from the table while querying it.
This is the sample code I have:
import pandas as pd
from sqlalchemy import create_engine
import psycopg2
engine = create_engine('postgresql://postgres:pwd@localhost:5432/test')
def convertcsvtopostgres(csvfileloc, table_name, delimiter):
    data = pd.read_csv(csvfileloc, sep=delimiter, encoding='latin-1')
    data.head()
    data1 = data.rename(columns=lambda x: x.strip())
    data1.to_sql(table_name, engine, index=False)
convertcsvtopostgres("Product.csv", "t_product", "~")
I can do a select * from test.t_product; but I am unable to do a select product_id from test.t_product;.
I am not sure if that is happening because of the encoding of the file and the resulting conversion. Is there any way around this, since I do not want to specify the table structure each time?
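One common cause of this symptom (not confirmed in the question) is that to_sql keeps the column names exactly as pandas read them, and PostgreSQL folds unquoted identifiers to lowercase, so a mixed-case name like Product_ID can only be selected as "Product_ID" in double quotes. A quick way to check what was actually created, assuming the engine from the code above:
from sqlalchemy import inspect

# print the column names exactly as they exist in the created table
insp = inspect(engine)
print([col['name'] for col in insp.get_columns('t_product')])
# if the names are mixed-case, either quote them in SQL:
#   SELECT "Product_ID" FROM t_product;
# or lowercase them before writing:
#   data1 = data.rename(columns=lambda x: x.strip().lower())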

Pandas to_sql right truncation error

I'm trying to use pandas to_sql to insert data from .csv files into an MSSQL db. No matter how I do it, I run into this error:
pyodbc.DataError: ('String data, right truncation: length 8 buffer 4294967294', '22001')
The code I'm running looks like this:
import pandas as pd
from sqlalchemy import create_engine
df = pd.read_csv('foo.csv')
engine = create_engine("mssql+pyodbc://:#Test")
with engine.connect() as conn, conn.begin():
df.to_sql(name='test', con=conn, schema='foo', if_exists='append', index=False)
Any help would be appreciated!
P.S. I'm still fairly new to Python and MSSQL.
Okay, so I didn't have my DSN configured correctly. The driver I was using was "SQL Server" and I needed to change it to "ODBC Driver 13 for SQL Server". That fixed all my problems.
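For reference, the driver can also be named directly in the SQLAlchemy URL instead of relying on the DSN definition; a minimal sketch, where the server, database, and credentials are placeholders:
from sqlalchemy import create_engine

# hostname-style connection with the ODBC driver spelled out explicitly
engine = create_engine(
    "mssql+pyodbc://<user>:<password>@<server>/<database>"
    "?driver=ODBC+Driver+13+for+SQL+Server"
)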
