I am trying to read a table from a SQLite database on Kaggle using Dask.
Link to the DB: https://www.kaggle.com/datasets/marcilonsilvacunha/amostracnpj?select=amostraCNPJ.sqlite
Some of the tables in this database are really large and I want to test how Dask handles them.
I wrote the following code for one of the tables in the smaller SQLite database:
import dask.dataframe as ddf
import sqlite3
# Read sqlite query results into a pandas DataFrame
con = sqlite3.connect("/kaggle/input/amostraCNPJ.sqlite")
df = ddf.read_sql_table('cnpj_dados_cadastrais_pj', con, index_col='cnpj')
# Verify that result of SQL query is stored in the dataframe
print(df.head())
This gives an error:
AttributeError: 'sqlite3.Connection' object has no attribute '_instantiate_plugins'
Any help would be appreciated, as this is the first time I've used Dask to read SQLite.
As the docstring states, you should not pass a connection object to Dask. You need to pass a SQLAlchemy-compatible connection string:
df = ddf.read_sql_table('cnpj_dados_cadastrais_pj',
                        'sqlite:////kaggle/input/amostraCNPJ.sqlite',
                        index_col='cnpj')
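Putting it together, a minimal sketch (assuming the Kaggle path above, and that the cnpj column is numeric enough for Dask to partition on; otherwise pass npartitions or divisions as well):
import dask.dataframe as ddf

# Dask builds its own SQLAlchemy engine from the URI, so no sqlite3.connect() is needed.
uri = 'sqlite:////kaggle/input/amostraCNPJ.sqlite'
df = ddf.read_sql_table('cnpj_dados_cadastrais_pj', uri, index_col='cnpj')

# head() only computes the first partition, so it is a cheap sanity check.
print(df.head())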
In the following Python code I can successfully connect to an MS Azure SQL DB using an ODBC connection, and can load data into an Azure SQL table using pandas' DataFrame method to_sql(...). But when I use pyspark.pandas instead, the to_sql(...) method fails, stating that no such method is supported. I know the pandas API on Spark has reached about 97% coverage. But I was wondering if there is an alternate way of achieving the same thing while still using ODBC.
Question: In the following code sample, how can we use an ODBC connection for pyspark.pandas to connect to the Azure SQL DB and load a DataFrame into a SQL table?
import sqlalchemy as sq
#import pandas as pd
import pyspark.pandas as ps
import datetime
data_df = ps.read_csv('/dbfs/FileStore/tables/myDataFile.csv', low_memory=False, quotechar='"', header='infer')
.......
data_df.to_sql(name='CustomerOrderTable', con=engine, if_exists='append', index=False,
               dtype={'OrderID': sq.VARCHAR(10),
                      'Name': sq.VARCHAR(50),
                      'OrderDate': sq.DATETIME()})
Ref: Pandas API on Spark and this
UPDATE: The data file is about 6.5GB with 150 columns and 15 million records. Therefore pandas cannot handle it and, as expected, it gives an OOM (out of memory) error.
I noticed you were appending the data to the table, so this workaround came to mind.
Break the pyspark.pandas DataFrame into chunks, convert each chunk to pandas, and append that chunk from there.
import numpy as np

chunks = 20                                 # break the frame into 20 chunks
n = len(data_df) // chunks                  # approximate chunk size
list_dfs = np.array_split(data_df, chunks)  # or: [data_df[i:i+n] for i in range(0, len(data_df), n)]
for chunk in list_dfs:
    pdf = chunk.to_pandas()  # only this chunk is materialized in driver memory
    pdf.to_sql(name='CustomerOrderTable', con=engine, if_exists='append', index=False)
As per the official pyspark.pandas documentation from Apache Spark, there is no method in this module that loads a DataFrame into a SQL table.
Please see all the provided methods here.
As an alternative approach, there are some similar asks in these SO threads, which might be helpful:
How to write to a Spark SQL table from a Panda data frame using PySpark?
How can I convert a pyspark.sql.dataframe.DataFrame back to a sql table in databricks notebook
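For completeness, a rough sketch of what those threads suggest: convert the pyspark.pandas DataFrame to a plain Spark DataFrame and write it with Spark's JDBC data source (note this goes through the JDBC driver rather than ODBC, and the URL, table name, and credentials below are placeholders):
import pyspark.pandas as ps

data_df = ps.read_csv('/dbfs/FileStore/tables/myDataFile.csv', low_memory=False, quotechar='"', header='infer')

# Drop down to a plain Spark DataFrame, which does have a JDBC writer.
spark_df = data_df.to_spark()

# Placeholder Azure SQL connection details.
jdbc_url = 'jdbc:sqlserver://<server>.database.windows.net:1433;database=<db>'

(spark_df.write
    .format('jdbc')
    .option('url', jdbc_url)
    .option('dbtable', 'CustomerOrderTable')
    .option('user', '<user>')
    .option('password', '<password>')
    .mode('append')
    .save())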
I am currently working on a data pipeline with PySpark. As part of the pipeline, I write a Spark DataFrame to MySQL using the following function:
def jdbc_insert_overwrite_table(df, mysql_user, mysql_pass, mysql_host, mysql_port, mysql_db,
                                num_executors, table_name, logger):
    mysql_url = "jdbc:mysql://{}:{}/{}?characterEncoding=utf8".format(mysql_host, mysql_port, mysql_db)
    logger.warn("JDBC Writing to table " + table_name)
    df.write.format('jdbc')\
        .options(
            url=mysql_url,
            driver='com.mysql.cj.jdbc.Driver',
            dbtable=table_name,
            user=mysql_user,
            password=mysql_pass,
            truncate=True,
            numpartitions=num_executors,
            batchsize=100000
        ).mode('Overwrite').save()
This works with no issue. However, later in the pipeline (within the same PySpark app / Spark session), this table is a dependency for another transformation, and I try reading from it using the following function:
def read_mysql_table_in_session_df(spark, mysql_conn, query_str, query_schema):
    cursor = mysql_conn.cursor()
    cursor.execute(query_str)
    records = cursor.fetchall()
    df = spark.createDataFrame(records, schema=query_schema)
    return df
And I get this MySQL error: Error 1412: Table definition has changed, please retry transaction.
I've been able to resolve this by closing the connection and calling ping(reconnect=True), but I don't like this solution as it feels like a band-aid.
Any ideas why I'm getting this error? I've confirmed that writing to the table does not change the table definition (schema-wise, at least).
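For reference, the band-aid described above might look roughly like this (assuming mysql_conn comes from mysql-connector-python or PyMySQL, both of which expose ping(reconnect=True)):
def read_mysql_table_in_session_df(spark, mysql_conn, query_str, query_schema):
    # Band-aid from the question: reconnect so the session sees the table
    # as it exists after the JDBC overwrite.
    mysql_conn.ping(reconnect=True)
    cursor = mysql_conn.cursor()
    cursor.execute(query_str)
    records = cursor.fetchall()
    return spark.createDataFrame(records, schema=query_schema)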
Today I started to learn Postgres, and I was trying to do the same thing I do to load DataFrames into my Oracle DB.
So, for example, I have a df that contains 70k records and 10 columns. My code for this is the following:
from sqlalchemy import create_engine
conn = create_engine('postgresql://'+data['user']+':'+data['password']+'@'+data['host']+':'+data['port_db']+'/'+data['dbname'])
df.to_sql('first_posgress', conn)
This code is roughly the same as what I use for my Oracle tables, but in this case it takes a long time to complete. So I was wondering if there is a better way to do this, or if Postgres is just slower in general.
I found some examples on SO and Google, but most of them focus on creating the table, not inserting a df.
If it is possible for you to use psycopg2 instead of SQLAlchemy, you can transform your df into a CSV and then use cursor.copy_from() to copy the CSV into the DB.
import io

output = io.StringIO()
# Write without header or index so the CSV columns match df.columns exactly.
df.to_csv(output, sep=",", header=False, index=False)
output.seek(0)

# cursor is a psycopg2 cursor:
cursor.copy_from(
    output,
    target_table,  # e.g. 'first_posgress'
    sep=",",
    columns=tuple(df.columns)
)
con.commit()  # con is the psycopg2 connection
(I don't know if there is a similar function in SQLAlchemy that is faster too.)
Psycopg2 Cursor Documentation
This blog post contains more information!
Hopefully this is useful for you!
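If you would rather stay with SQLAlchemy and to_sql, the pandas documentation shows a similar trick: pass a custom method callable that streams each batch through PostgreSQL's COPY. A sketch, reusing the engine variable conn from the question:
import csv
from io import StringIO

def psql_insert_copy(table, conn, keys, data_iter):
    # Insertion method for DataFrame.to_sql(): load each batch via COPY FROM STDIN.
    dbapi_conn = conn.connection  # raw psycopg2 connection
    with dbapi_conn.cursor() as cur:
        buf = StringIO()
        csv.writer(buf).writerows(data_iter)
        buf.seek(0)
        columns = ', '.join('"{}"'.format(k) for k in keys)
        table_name = '{}.{}'.format(table.schema, table.name) if table.schema else table.name
        cur.copy_expert('COPY {} ({}) FROM STDIN WITH CSV'.format(table_name, columns), buf)

df.to_sql('first_posgress', conn, method=psql_insert_copy)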
I need to upload a table I modified to my Oracle database. I exported the table as a pandas DataFrame, modified it, and now want to upload it back to the DB.
I am trying to do this using the df.to_sql function as follows:
import sqlalchemy as sa
import pandas as pd
engine = sa.create_engine('oracle://"IP_address_of_server"/"serviceDB"')
df.to_sql("table_name",engine, if_exists='replace', chunksize = None)
I always get this error: DatabaseError: (cx_Oracle.DatabaseError) ORA-12505: TNS:listener does not currently know of SID given in connect descriptor (Background on this error at: http://sqlalche.me/e/4xp6).
I am not an expert at this, so I could not understand what the matter is, especially since the IP address I am giving is the right one.
Could anyone help? Thanks a lot!
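For what it's worth, ORA-12505 usually means the listener treated the last part of the URL as a SID it does not know. If the database is registered under a service name instead, a connect string of this form might work (user, password, port, and names below are placeholders):
import sqlalchemy as sa

# cx_Oracle URL using a service name instead of a SID (all values are placeholders).
engine = sa.create_engine(
    'oracle+cx_oracle://my_user:my_password@IP_address_of_server:1521/?service_name=serviceDB'
)
df.to_sql('table_name', engine, if_exists='replace', chunksize=None)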
I'm trying to upload a pandas DataFrame directly to Redshift using the to_sql function.
connstr = 'redshift+psycopg2://%s:%s@%s.redshift.amazonaws.com:%s/%s' % \
          (username, password, cluster, port, db_name)
def send_data(df, block_size=10000):
    engine = create_engine(connstr)
    with engine.connect() as conn, conn.begin():
        df.to_sql(name='my_table_clean', schema='my_schema', con=conn, index=False,
                  if_exists='replace', chunksize=block_size)
    del engine
The table my_schema.my_table_clean exists (but is empty), and the connection built from connstr is also valid (verified by a corresponding retrieve_data method). The retrieve function pulls data from my_table, and my script cleans it up using pandas to output to my_table_clean.
The problem is, I keep getting the following error:
TypeError: _get_column_info() takes exactly 9 arguments (8 given)
during the to_sql function.
I can't seem to figure out what is causing this error. Is anyone familiar with it?
Using
python 2.7.13
pandas 0.20.2
sqlalchemy 1.2.0.
Note: I'm trying to circumvent S3 -> Redshift for this script since I don't want to create a folder in my bucket just for one file, and this single script doesn't conform to my overall ETL structure. I'm hoping to just run this one script after the ETL that creates the original my_table.