Pandas DF columns to SQLite tables - python

I have an SQLite database that I created from a dictionary of pandas dataframes, which were in turn built from a folder of CSV files. When I open the database in sqlite, each of the table names ends in ".csv". How can I strip this?
Here is the code:
import os
import glob
import pandas as pd
files = glob.glob(os.path.join("staging", "*.csv"))
print(files)
# Create an empty dictionary to hold the dataframes from csvs
dict_ = {}
# Write the files into the dictionary
for file in files:
    fname = os.path.basename(file)
    dict_[fname] = pd.read_csv(file, header = 0, dtype = str, encoding = 'cp1252').fillna('')
And for the sqlite DB:
# Create the SQLite database
import sqlite3
conn = sqlite3.connect("medicare_hospital_compare.db")
# Convert the dict_[file]'s to SQL tables
for key, df in dict_.items():
    df.to_sql(key, conn, flavor = None, schema = None, if_exists = 'replace',
              index = True, index_label = None, chunksize = None, dtype = None)
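If the goal is simply to drop the ".csv" extension before writing to SQLite, a minimal sketch is to strip it while building the dictionary keys, using os.path.splitext (the rest of the script stays unchanged):

for file in files:
    # 'foo.csv' -> 'foo': splitext removes the extension from the base name
    fname = os.path.splitext(os.path.basename(file))[0]
    dict_[fname] = pd.read_csv(file, header=0, dtype=str, encoding='cp1252').fillna('')

The dictionary keys, and therefore the table names created by to_sql, then no longer carry the ".csv" suffix.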

Related

Handle big files with python & pandas

Thanks for reading my post.
I need to deal with big files. Let me give you more context: I extract some tables from a database, convert those tables to CSV, and after that I convert them to JSON.
All of that is to send the information to BigQuery.
My script works fine, but I have a problem: some of the tables I extract are very big (one of them is 14 GB) and my server only has 8 GB of memory. Is there any way to adapt my script to split or append the information?
My script:
import pyodbc
import fileinput
import csv
import pandas as pd
import json
import os
import sys
conn = pyodbc.connect("Driver={SQL Server};"
                      "Server=TEST;"
                      "username=test;"
                      "password=12345;"
                      "Database=TEST;"
                      "Trusted_Connection=no;")
cursor = conn.cursor()
query = "SELECT * FROM placeholder where "
with open(r"D:\Test.txt") as file:
lines = file.readlines()
print(lines)
for user_input in lines:
result = query.replace("placeholder", user_input)
print(result)
sql_query = pd.read_sql(result,conn)
df = pd.DataFrame(sql_query)
user_inputs = user_input.strip("\n")
filename = os.path.join('D:\\', user_inputs + '.csv')
df.to_csv (filename, index = False, encoding='utf-8', sep = '~', quotechar = "`", quoting=csv.QUOTE_ALL)
print(filename)
filename_json = os.path.join('D:\\', user_inputs + '.jsonl')
csvFilePath = (filename)
jsonFilePath = (filename_json)
print(filename_json)
df_o = df.applymap(lambda x: x.strip() if isinstance(x, str) else x)
df_o.to_json(filename_json, orient = "records", lines = True, date_format = "iso", double_precision = 15, force_ascii = False, date_unit = 'ms', default_handler = str)
dir_name = "D:\\"
test = os.listdir(dir_name)
for item in test:
if item.endswith(".csv"):
os.remove(os.path.join(dir_name, item))
cursor.close()
conn.close()
I'm really new to Python, and I hope you can help me work this into my script.
Thanks so much, everyone!
Kind regards.
For large data sets you should avoid reading everything at once and then writing it all at once; do partial reads and partial writes instead.
Since you are sending the data to BigQuery, you should use partitions to limit the query output, with some logic to update the partition offsets. You can then generate one file per partition, so your output would look like output-1.csv, output-2.csv, etc.
An example of using parition:
SELECT * FROM placeholder
WHERE transaction_date >= '2016-01-01'
As a bonus tip, avoid doing SELECT *: BigQuery is a columnar storage system, so listing only the columns you actually need will significantly improve performance.
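Another option that fits the existing script is to stream each query in chunks instead of loading the whole table into one DataFrame. A minimal sketch (the chunk size is a placeholder, and this is not tested against the original database):

# read the query result in chunks so only `chunksize` rows are in memory at a time
for i, chunk in enumerate(pd.read_sql(result, conn, chunksize=100_000)):
    chunk = chunk.applymap(lambda x: x.strip() if isinstance(x, str) else x)
    # append every chunk to the same JSON Lines file
    chunk.to_json(filename_json, orient="records", lines=True, force_ascii=False,
                  date_format="iso", mode="a" if i else "w")

Note that to_json only accepts mode="a" together with orient="records" and lines=True, and the mode parameter requires a fairly recent pandas version; with an older pandas you can instead write each chunk to its own file (output-1.jsonl, output-2.jsonl, ...) as suggested above.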

Export MS SQL table with `null` values to CSV

I am trying to figure out how to create a CSV file that contains the NULL values I have in my MS SQL database table. Right now the script I am using fills the NULL values with '' (empty strings). How am I supposed to instruct the csv writer to keep the NULL values?
example of source table
ID,Date,Entitled Key
10000002,NULL,805
10000003,2020-11-22 00:00:00,805
export_sql_to_csv.py
import csv
import os
import pyodbc
filePath = os.getcwd() + '/'
fileName = 'rigs_latest.csv'
server = 'ip-address'
database = 'db-name'
username = 'admin'
password = 'password'
# Database connection variable.
connect = pyodbc.connect('DRIVER={ODBC Driver 17 for SQL Server};SERVER=' +
                         server + ';DATABASE=' + database + ';UID=' + username + ';PWD=' + password)
cursor = connect.cursor()
sqlSelect = "SELECT * FROM my_table"
cursor.execute(sqlSelect)
results = cursor.fetchall()
# Extract the table headers.
headers = [i[0] for i in cursor.description]
# Open CSV file for writing.
csvFile = csv.writer(open(filePath + fileName, 'w', newline=''),
                     delimiter=',', lineterminator='\r\n',
                     quoting=csv.QUOTE_NONE, escapechar='\\')
# Add the headers and data to the CSV file.
csvFile.writerow(headers)
csvFile.writerows(results)
Example of the result after running the above script:
ID,Date,Entitled Key
10000002,,805
10000003,2020-11-22 00:00:00,805
The main reason I want to keep the NULL values is that I would like to convert the CSV file into a series of INSERT SQL statements and execute those against an Aurora Serverless PostgreSQL database. That database doesn't accept empty strings for the date type and fails with this error: ERROR: invalid input syntax for type date: ""
As described in the docs for the csv module, the None value is written to CSV as '' (empty string) by design. All other non-string values call str first.
So if you want your CSV to have the string null instead of '' then you have to modify the values before they reach the CSV writer. Perhaps:
results = [
    ['null' if val is None else val for val in row] for row in results
]
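Applied to the script above, the substitution slots in between the fetch and the write. A sketch, keeping the rest of the script unchanged (the 'null' marker is only an example and should match whatever your PostgreSQL import expects for NULL):

results = cursor.fetchall()
# replace None with an explicit marker before handing the rows to the csv writer
results = [['null' if val is None else val for val in row] for row in results]
csvFile.writerow(headers)
csvFile.writerows(results)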

Create an excel file from BytesIO using python

I am using the pandas library to write an Excel workbook into an in-memory BytesIO object. Later, I store this BytesIO object in SQL Server as below:
df = pandas.DataFrame(data1, columns=['col1', 'col2', 'col3'])
output = BytesIO()
writer = pandas.ExcelWriter(output,engine='xlsxwriter')
df.to_excel(writer)
writer.save()
output.seek(0)
workbook = output.read()
#store into table
Query = '''
INSERT INTO [TABLE]([file]) VALUES(?)
'''
values = (workbook)
cursor = conn.cursor()
cursor.execute(Query, values)
cursor.close()
conn.commit()
#Create excel file.
Query1 = "select [file] from [TABLE] where [id] = 1"
result = conn.cursor().execute(Query1).fetchall()
print(result[0])
Now, I want to pull the binary data back from the table, create an Excel file from it, and store it locally. How do I do it?
Finally, I got a solution. Below are the steps performed:
Take the DataFrame and convert it to Excel, storing it in memory as a BytesIO object.
Store the BytesIO object in a database column of type varbinary(max).
Pull the stored object back and create an Excel file locally.
Python Code:
#Get Required data in DataFrame:
df = pandas.DataFrame(data1, columns=['col1', 'col2', 'col3'])
#Convert the data frame to Excel and store it in BytesIO object `workbook`:
output = BytesIO()
writer = pandas.ExcelWriter(output,engine='xlsxwriter')
df.to_excel(writer)
writer.save()
output.seek(0)
workbook = output.read()
#store into Database table
Query = '''
INSERT INTO [TABLE]([file]) VALUES(?)
'''
values = (workbook)
cursor = conn.cursor()
cursor.execute(Query, values)
cursor.close()
conn.commit()
#Retrieve the BytesIO object from Database
Query1 = "select [file] from [TABLE] where [id] = 1"
result = conn.cursor().execute(Query1).fetchall()
WriteObj = BytesIO()
WriteObj.write(result[0][0])
WriteObj.seek(0)
df = pandas.read_excel(WriteObj)
df.to_excel("outputFile.xlsx")

Extract data from json file python3

How can I get all the data from my JSON file into the database?
My code :
import re
import json
import sqlite3
connection = sqlite3.connect('example.db')
cursor = connection.cursor()
print ("Opened database successfully");
with open('tem.txt', encoding='utf-8-sig') as json_file:
data = json.load(json_file)
for p in data:
data[p] = re.sub("<[^>]+>", "", str(data[p]))
print("%s: %s" % (p, data[p]))
I use SQLite:
Use pandas to read the JSON and create a dataframe, then write it into the DB.
I have added some sample code:
from sqlalchemy import create_engine
import pandas as pd
df = pd.read_json('path\data.json')
engine = create_engine('sqlite:///my.db', echo=False)
df.to_sql('mytable', con=engine, if_exists='append')
The advantage of this is that with pandas you can make changes in the data very easily.
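If you want to keep the HTML-stripping loop from the question, the same idea works on the cleaned dictionary. A sketch, assuming tem.txt holds a single flat JSON object (the table name 'mytable' is a placeholder):

import pandas as pd
# build a one-row dataframe from the cleaned dict and append it to a table
df = pd.DataFrame([data])
df.to_sql('mytable', connection, if_exists='append', index=False)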

How to open and convert sqlite database to pandas dataframe

I have downloaded some data as an SQLite database (data.db) and I want to open this database in Python and then convert it into a pandas dataframe.
This is what I have done so far:
import sqlite3
import pandas
dat = sqlite3.connect('data.db') #connected to database with out error
pandas.DataFrame.from_records(dat, index=None, exclude=None, columns=None, coerce_float=False, nrows=None)
But it's throwing this error:
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/usr/local/lib/python2.7/dist-packages/pandas/core/frame.py", line 980, in from_records
coerce_float=coerce_float)
File "/usr/local/lib/python2.7/dist-packages/pandas/core/frame.py", line 5353, in _to_arrays
if not len(data):
TypeError: object of type 'sqlite3.Connection' has no len()
How to convert sqlite database to pandas dataframe
Although sqlite3 is part of the Python Standard Library and provides a nice and easy interface to SQLite databases, the pandas tutorial states:
Note In order to use read_sql_table(), you must have the SQLAlchemy
optional dependency installed.
But Pandas still supports sqlite3 access if you want to avoid installing SQLAlchemy:
import sqlite3
import pandas as pd
# Create your connection.
cnx = sqlite3.connect('file.db')
df = pd.read_sql_query("SELECT * FROM table_name", cnx)
As stated here, this works, but you need to know the name of the table in advance.
The line
data = sqlite3.connect('data.db')
opens a connection to the database. No records have been queried at this point, so you have to execute a query afterwards and provide the result to the pandas DataFrame constructor.
It should look similar to this:
import sqlite3
import pandas as pd
dat = sqlite3.connect('data.db')
query = dat.execute("SELECT * From <TABLENAME>")
cols = [column[0] for column in query.description]
results = pd.DataFrame.from_records(data=query.fetchall(), columns=cols)
I am not really firm with SQL commands, so you should check the correctness of the query. <TABLENAME> should be the name of the table in your database.
Parsing a sqlite .db into a dictionary of dataframes without knowing the table names:
def read_sqlite(dbfile):
    import sqlite3
    from pandas import read_sql_query, read_sql_table
    with sqlite3.connect(dbfile) as dbcon:
        tables = list(read_sql_query("SELECT name FROM sqlite_master WHERE type='table';", dbcon)['name'])
        out = {tbl: read_sql_query(f"SELECT * from {tbl}", dbcon) for tbl in tables}
    return out
Search Google for sqlalchemy, create_engine, and the name of your database (sqlite in this case):
import pandas as pd
import sqlalchemy
db_name = "data.db"
table_name = "LITTLE_BOBBY_TABLES"
engine = sqlalchemy.create_engine("sqlite:///%s" % db_name, execution_options={"sqlite_raw_colnames": True})
df = pd.read_sql_table(table_name, engine)
I wrote up a piece of code that takes the tables in a database file such as .sqlite or .db and either creates an Excel file with each table as a sheet or writes the individual tables to CSVs.
Note: You don't need to know the table names in advance!
import os, fnmatch
import sqlite3
import pandas as pd

# creates a directory without throwing an error
def create_dir(dir):
    if not os.path.exists(dir):
        os.makedirs(dir)
        print("Created Directory : ", dir)
    else:
        print("Directory already existed : ", dir)
    return dir

# finds files in a directory matching a wildcard pattern
def find(pattern, path):
    result = []
    for root, dirs, files in os.walk(path):
        for name in files:
            if fnmatch.fnmatch(name, pattern):
                result.append(os.path.join(root, name))
    return result

# convert sqlite databases (.db, .sqlite) to pandas dataframes and save them as
# an Excel file with each table as a different sheet and/or individual csv files
def save_db(dbpath=None, excel_path=None, csv_path=None, extension="*.sqlite", csvs=True, excels=True):
    if excels == False and csvs == False:
        print("At least one of the parameters needs to be true: csvs or excels")
        return -1
    # little code to find files by extension
    if dbpath == None:
        files = find(extension, os.getcwd())
        if len(files) > 1:
            print("Multiple files found! Selecting the first one found!")
            print("To locate your file, set dbpath=<yourpath>")
    dbpath = find(extension, os.getcwd())[0] if dbpath == None else dbpath
    print("Reading database file from location :", dbpath)
    # path handling
    external_folder, base_name = os.path.split(os.path.abspath(dbpath))
    file_name = os.path.splitext(base_name)[0]  # filename without extension
    exten = os.path.splitext(base_name)[-1]     # .file_extension
    internal_folder = "Saved_Dataframes_" + file_name
    main_path = os.path.join(external_folder, internal_folder)
    create_dir(main_path)
    excel_path = os.path.join(main_path, "Excel_Multiple_Sheets.xlsx") if excel_path == None else excel_path
    csv_path = main_path if csv_path == None else csv_path
    db = sqlite3.connect(dbpath)
    cursor = db.cursor()
    cursor.execute("SELECT name FROM sqlite_master WHERE type='table';")
    tables = cursor.fetchall()
    print(len(tables), "Tables found :")
    if excels == True:
        # for writing to excel (xlsx) we will be needing this!
        # (the import name is lowercase; the pip package is called XlsxWriter)
        try:
            import xlsxwriter
        except ModuleNotFoundError:
            import subprocess, sys
            subprocess.check_call([sys.executable, "-m", "pip", "install", "XlsxWriter"])
    if excels == True and csvs == True:
        writer = pd.ExcelWriter(excel_path, engine='xlsxwriter')
        i = 0
        for table_name in tables:
            table_name = table_name[0]
            table = pd.read_sql_query("SELECT * from %s" % table_name, db)
            i += 1
            print("Parsing Excel Sheet ", i, " : ", table_name)
            table.to_excel(writer, sheet_name=table_name, index=False)
            print("Parsing CSV File ", i, " : ", table_name)
            table.to_csv(os.path.join(csv_path, table_name + '.csv'), index_label='index')
        writer.save()
    elif excels == True:
        writer = pd.ExcelWriter(excel_path, engine='xlsxwriter')
        i = 0
        for table_name in tables:
            table_name = table_name[0]
            table = pd.read_sql_query("SELECT * from %s" % table_name, db)
            i += 1
            print("Parsing Excel Sheet ", i, " : ", table_name)
            table.to_excel(writer, sheet_name=table_name, index=False)
        writer.save()
    elif csvs == True:
        i = 0
        for table_name in tables:
            table_name = table_name[0]
            table = pd.read_sql_query("SELECT * from %s" % table_name, db)
            i += 1
            print("Parsing CSV File ", i, " : ", table_name)
            table.to_csv(os.path.join(csv_path, table_name + '.csv'), index_label='index')
    cursor.close()
    db.close()
    return 0

save_db()
If data.db is your SQLite database and table_name is one of its tables, then you can do:
import pandas as pd
df = pd.read_sql_table('table_name', 'sqlite:///data.db')
No other imports are needed, although SQLAlchemy must be installed for the connection-string form to work.
I have stored my data in database.sqlite; the table name is Reviews.
import sqlite3
import pandas as pd
con = sqlite3.connect("database.sqlite")
data = pd.read_sql_query("SELECT * FROM Reviews", con)
print(data)
