Duplicated values as the output of a Python function - python

I would like to create a Python function for multiple SQL inserts. I will use this function in my Airflow DAG for inserts into a Snowflake db, so I will need to create a SnowflakeOperator that uses it. I'm just starting to use Airflow, so please correct me if I'm wrong.
My example:
I'm connecting to the Snowflake db to get data from a table that holds schema names and table names. I use this output to build inserts per schema. I select a schema and create the variable my_schema = 'my_schema'.
First approach:
sql = "SELECT SCHEMA, TABLE FROM TABLE"
cur.execute(sql)
df = pd.DataFrame.from_records(iter(cur), columns=[x[0] for x in cur.description])
my_dict = dict()
for i in df['SCHEMA'].unique().tolist():
df_x = df[df['SCHEMA'] == i]
my_dict[i] = df_x['TABLE'].tolist()
for schema, tables in my_dict.items():
for table in tables:
query = f"INSERT INTO {schema}.{table} SELECT * FROM {schema}.{table} where col2 = 1;"
try:
cur.execute(query)
except snowflake.connector.errors.ProgrammingError as e:
# Something went wrong with the insert
logging.error(f"Inserting in {schema}.{table}: {e}")
conn.close()
For testing I created a pandas dataframe with two columns, schema and table.
data = [['test', 'table01'], ['test', 'table02'], ['my_schema', 'table03'], ['schemaxxx', 'table04']]
# Create the pandas DataFrame
df_new = pd.DataFrame(data, columns=['schema', 'table'])
I created a function for the inserts.
my_schema = 'my_schema'

def my_insert_fnc(df):
    my_dict = dict()
    for i in df['schema'].unique().tolist():
        df_x = df[df['schema'] == i]
        my_dict[i] = df_x['table'].tolist()
    sql_list = []
    for schema, tables in my_dict.items():
        for table in tables:
            if schema == my_schema:
                sql_list.append(f"INSERT INTO {schema}.{table} SELECT * FROM {schema}.{table} where col2 = 1;")
        print(sql_list)
But I'm getting duplicates.
my_insert_fnc(df_new)
['INSERT INTO my_schema.table03 SELECT * FROM my_schema.table03 where col2 = 1;']
['INSERT INTO my_schema.table03 SELECT * FROM my_schema.table03 where col2 = 1;']
I would like to remove the duplicates, and also log errors like this:
try:
    cur.execute(query)
except snowflake.connector.errors.ProgrammingError as e:
    # Something went wrong with the insert
    logging.error(f"Inserting in {schema}.{table}: {e}")
As I mentioned, I need to use this function in my Airflow DAG, so it needs to give me a string output that I can use in a SnowflakeOperator. Please correct me if I'm wrong.
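One possible way to de-duplicate the statements and return plain SQL strings is sketched below; build_insert_statements and target_schema are illustrative names, and it assumes the lower-case schema/table columns of the test dataframe above.

def build_insert_statements(df, target_schema):
    statements = []
    # drop_duplicates() removes repeated (schema, table) pairs before building the SQL
    for schema, table in df[['schema', 'table']].drop_duplicates().itertuples(index=False):
        if schema == target_schema:
            statements.append(
                f"INSERT INTO {schema}.{table} SELECT * FROM {schema}.{table} where col2 = 1;"
            )
    return statements

As far as I know, the SnowflakeOperator sql parameter accepts either a single string or a list of statements, so the returned list (or a "\n".join(...) of it) can be passed straight to the operator.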

Related

Looping through dataframe values in columns and using them as a FROM clause using SQL

I am running BigQuery in Jupyter notebook.
query ="""
SELECT
table_catalog,
table_schema,
table_name,
FROM `Project-A.schema_A`.INFORMATION_SCHEMA.COLUMN_FIELD_PATHS
"""
The output leads me to the following table:
# This is the output of the query
data = {'table_catalog': ['project-A', 'project-A', 'project-A', 'project-A', 'Project-A', 'Project-A', 'Project-A'],
        'table_schema': ['schema_A', 'schema_A', 'schema_A', 'schema_A', 'schema_A', 'schema_A', 'schema_A'],
        'table_name': ['Table_A', 'Table_B', 'Table_B', 'Table_C', 'Table_C', 'Table_A', 'Table_A']}

# Create DataFrame
df = pd.DataFrame(data)
I want to use Table_A, Table_B and Table_C in my next query in the FROM CLAUSE such that it looks like:
query =f"""
SELECT
*
FROM Project-A.Schema_A.{I want to edit this dyanmically - either Table_A, Table_B, Table_C}"""
I tried the following but it has been failing; please help me with this:
list_of_tables = list(df['table_name'].unique())

def loop_tables(x):
    for tables in list_of_tables:
        if x == tables
            # x = df['table_name']

loop_tables()
Try this:
def loop_tables():
    list_of_dataframes = []
    for table in list_of_tables:
        print(table)
        dynamic_sql = "select * from project.dataset."
        dynamic_sql += table
        df = client.query(dynamic_sql).to_dataframe()
        list_of_dataframes.append(df)
    return list_of_dataframes
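A possible way to call it, assuming client is the BigQuery client already set up in the notebook and df holds the INFORMATION_SCHEMA result from the question:

import pandas as pd

# Table names discovered by the first query
list_of_tables = list(df['table_name'].unique())

# Pull each table into its own DataFrame, then stack the results
dataframes = loop_tables()
combined = pd.concat(dataframes, ignore_index=True)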

How to upsert pandas DataFrame to PostgreSQL table?

I've scraped some data from web sources and stored it all in a pandas DataFrame. Now, in order to harness the powerful db tools afforded by SQLAlchemy, I want to convert said DataFrame into a Table() object and eventually upsert all the data into a PostgreSQL table. If this is practical, what is a workable method of accomplishing this task?
Update: You can save yourself some typing by using this method.
If you are using PostgreSQL 9.5 or later you can perform the UPSERT using a temporary table and an INSERT ... ON CONFLICT statement:
import sqlalchemy as sa

# …

with engine.begin() as conn:
    # step 0.0 - create test environment
    conn.exec_driver_sql("DROP TABLE IF EXISTS main_table")
    conn.exec_driver_sql(
        "CREATE TABLE main_table (id int primary key, txt varchar(50))"
    )
    conn.exec_driver_sql(
        "INSERT INTO main_table (id, txt) VALUES (1, 'row 1 old text')"
    )
    # step 0.1 - create DataFrame to UPSERT
    df = pd.DataFrame(
        [(2, "new row 2 text"), (1, "row 1 new text")], columns=["id", "txt"]
    )
    # step 1 - create temporary table and upload DataFrame
    conn.exec_driver_sql(
        "CREATE TEMPORARY TABLE temp_table AS SELECT * FROM main_table WHERE false"
    )
    df.to_sql("temp_table", conn, index=False, if_exists="append")
    # step 2 - merge temp_table into main_table
    conn.exec_driver_sql(
        """\
        INSERT INTO main_table (id, txt)
        SELECT id, txt FROM temp_table
        ON CONFLICT (id) DO
            UPDATE SET txt = EXCLUDED.txt
        """
    )
    # step 3 - confirm results
    result = conn.exec_driver_sql("SELECT * FROM main_table ORDER BY id").all()
    print(result)  # [(1, 'row 1 new text'), (2, 'new row 2 text')]
I have needed this so many times that I ended up creating a gist for it.
The function is below; it will create the table if it is the first time persisting the dataframe, and it will update the table if it already exists:
import pandas as pd
import sqlalchemy
import uuid
import os


def upsert_df(df: pd.DataFrame, table_name: str, engine: sqlalchemy.engine.Engine):
    """Implements the equivalent of pd.DataFrame.to_sql(..., if_exists='update')
    (which does not exist). Creates or updates the db records based on the
    dataframe records.
    Conflicts to determine update are based on the dataframe's index.
    This will set a unique key constraint on the table equal to the index names.
    1. Create a temp table from the dataframe
    2. Insert/update from temp table into table_name
    Returns: True if successful
    """
    # If the table does not exist, we should just use to_sql to create it
    if not engine.execute(
        f"""SELECT EXISTS (
            SELECT FROM information_schema.tables
            WHERE table_schema = 'public'
                AND table_name = '{table_name}');
            """
    ).first()[0]:
        df.to_sql(table_name, engine)
        return True

    # If it already exists...
    temp_table_name = f"temp_{uuid.uuid4().hex[:6]}"
    df.to_sql(temp_table_name, engine, index=True)

    index = list(df.index.names)
    index_sql_txt = ", ".join([f'"{i}"' for i in index])
    columns = list(df.columns)
    headers = index + columns
    headers_sql_txt = ", ".join(
        [f'"{i}"' for i in headers]
    )  # index1, index2, ..., column 1, col2, ...

    # col1 = excluded.col1, col2 = excluded.col2
    update_column_stmt = ", ".join([f'"{col}" = EXCLUDED."{col}"' for col in columns])

    # For the ON CONFLICT clause, postgres requires that the columns have a unique constraint
    query_pk = f"""
    ALTER TABLE "{table_name}" DROP CONSTRAINT IF EXISTS unique_constraint_for_upsert;
    ALTER TABLE "{table_name}" ADD CONSTRAINT unique_constraint_for_upsert UNIQUE ({index_sql_txt});
    """
    engine.execute(query_pk)

    # Compose and execute upsert query
    query_upsert = f"""
    INSERT INTO "{table_name}" ({headers_sql_txt})
    SELECT {headers_sql_txt} FROM "{temp_table_name}"
    ON CONFLICT ({index_sql_txt}) DO UPDATE
    SET {update_column_stmt};
    """
    engine.execute(query_upsert)
    engine.execute(f"DROP TABLE {temp_table_name}")

    return True
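A possible way to call it (the connection string and table name are placeholders); note that the function uses the DataFrame index as the conflict key, so the key column has to be set as the index first:

engine = sqlalchemy.create_engine("postgresql://user:password@localhost:5432/mydb")

df = pd.DataFrame(
    [(1, "row 1 new text"), (2, "new row 2 text")], columns=["id", "txt"]
).set_index("id")

upsert_df(df, "main_table", engine)  # creates the table on first run, upserts afterwards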
Here is my code for a bulk insert & insert-on-conflict-update query for PostgreSQL from a pandas dataframe:
Let's say id is the unique key for both the PostgreSQL table and the pandas df, and you want to insert and update based on this id.
import pandas as pd
from sqlalchemy import create_engine, text

engine = create_engine("postgresql://username:pass@host:port/dbname")

query = text(f"""
    INSERT INTO schema.table(name, title, id)
    VALUES {','.join([str(i) for i in list(df.to_records(index=False))])}
    ON CONFLICT (id)
    DO UPDATE SET name = excluded.name,
                  title = excluded.title
""")
engine.execute(query)
Make sure that your df columns are in the same order as your table columns.
EDIT 1:
Thanks to Gord Thompson's comment, I realized that this query won't work if there is a single quote in a column value. Here is a fix for that case:
import pandas as pd
from sqlalchemy import create_engine, text

df.name = df.name.str.replace("'", "''")
df.title = df.title.str.replace("'", "''")

engine = create_engine("postgresql://username:pass@host:port/dbname")

query = text("""
    INSERT INTO author(name, title, id)
    VALUES %s
    ON CONFLICT (id)
    DO UPDATE SET name = excluded.name,
                  title = excluded.title
""" % ','.join([str(i) for i in list(df.to_records(index=False))]).replace('"', "'"))
engine.execute(query)
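An alternative that avoids the quoting problem altogether is to let SQLAlchemy bind the values instead of formatting them into the SQL string. A minimal sketch of that approach, assuming the same author table as above:

import pandas as pd
from sqlalchemy import create_engine, text

engine = create_engine("postgresql://username:pass@host:port/dbname")

# Named bind parameters let the driver handle quoting, so values containing
# single quotes need no special escaping.
stmt = text("""
    INSERT INTO author (name, title, id)
    VALUES (:name, :title, :id)
    ON CONFLICT (id)
    DO UPDATE SET name = excluded.name,
                  title = excluded.title
""")

with engine.begin() as conn:
    # A list of dicts executes the statement once per row (executemany-style)
    conn.execute(stmt, df.to_dict(orient="records"))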
Consider this function if your DataFrame and SQL Table contain the same column names and types already.
Advantages:
Good if you have a long dataframe to insert (batching).
Avoids writing long SQL statements in your code.
Fast.
from sqlalchemy import Table
from sqlalchemy.engine.base import Engine as sql_engine
from sqlalchemy.dialects.postgresql import insert
from sqlalchemy.ext.automap import automap_base
import pandas as pd


def upsert_database(list_input: pd.DataFrame, engine: sql_engine, table: str, schema: str) -> None:
    if len(list_input) == 0:
        return None
    flattened_input = list_input.to_dict('records')
    with engine.connect() as conn:
        base = automap_base()
        base.prepare(engine, reflect=True, schema=schema)
        target_table = Table(table, base.metadata,
                             autoload=True, autoload_with=engine, schema=schema)
        chunks = [flattened_input[i:i + 1000] for i in range(0, len(flattened_input), 1000)]
        for chunk in chunks:
            stmt = insert(target_table).values(chunk)
            update_dict = {c.name: c for c in stmt.excluded if not c.primary_key}
            conn.execute(stmt.on_conflict_do_update(
                constraint=f'{table}_pkey',
                set_=update_dict)
            )
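A possible call, with placeholder connection details; the target table must already exist and have a primary key, since the conflict clause targets the default <table>_pkey constraint:

from sqlalchemy import create_engine

engine = create_engine("postgresql+psycopg2://username:password@host:port/database")

# df has the same column names/types as myschema.my_table
upsert_database(df, engine, table="my_table", schema="myschema")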
If you already have a pandas dataframe, you could use df.to_sql to push the data directly through SQLAlchemy:
from sqlalchemy import create_engine

# create a connection from a Postgres URI
cnxn = create_engine("postgresql+psycopg2://username:password@host:port/database")

# write dataframe to database
df.to_sql("my_table", con=cnxn, schema="myschema")

Fetching data from PostgreSQL Database using sqlalchemy.select() in Python

I am using Python and SQLAlchemy to fetch data from a table.
import sqlalchemy as db
import pandas as pd

DATABASE_URI = 'postgres+psycopg2://postgres:postgresql@localhost:5432/postgres'
engine = db.create_engine(DATABASE_URI)
connection = engine.connect()
metadata = db.MetaData()
project_table = db.Table('project', metadata, autoload=True, autoload_with=engine)
Here I want to fetch records based on a list of ids which I have.
l = [557997, 558088, 623106, 558020, 623108, 557836, 557733, 622792, 623511, 623185]

query1 = db.select([project_table]).where(project_table.columns.project_id.in_(l))
# sql query = "select * from project where project_id in l"
Result = connection.execute(query1)
Rset = Result.fetchall()

df = pd.DataFrame(Rset)
print(df.head())
Here, when I print df.head(), I am getting an empty dataframe. I am not able to pass a list to the above query. Is there a way to pass a list to the IN clause in the above query?
The result should contain the rows in the table whose project_id matches the given ids, i.e.
project_id project_name project_date project_developer
557997 Test1 24-05-2011 Ajay
558088 Test2 24-06-2003 Alex
These rows will be inserted into the dataframe.
The query is
"select * from project where project_id in (557997, 558088, 623106, 558020, 623108, 557836, 557733, 622792, 623511, 623185)"
Here, as I can't give static values, I put the values in a list and want to pass this list to the query as a parameter.
This is where I am having a problem: I can't pass a list as a parameter to db.select(). How can I pass a list to db.select()?
After many trials I found out that, because of the large amount of data the query was fetching and the limited RAM on my workstation, the query returned nothing (no results). So what I did was:
table_data = []

Result = connection.execute(query1)
while True:
    rows = Result.fetchmany(10000)
    if not rows:
        break
    for row in rows:
        table_data.append(row)

df1 = pd.DataFrame(table_data)
df1.columns = list(Result.keys())  # column names from the result set

Data frame does not display column names

I wrote a script which first runs a SQL query to get data from Redshift (via Databricks). Then I want to display it in a pandas data frame. The problem is that somehow the names of the columns were removed / are not displayed. Why?
# SQL Query
query = """
SELECT * FROM table1 limit 1;
"""

# Execute the query
try:
    cursor.execute(query)
except OperationalError as msg:
    print("Command skipped: ")

# Fetch all rows from the result
rows = cursor.fetchall()

# Convert into a Pandas Dataframe
df = pd.DataFrame([[ij for ij in i] for i in rows])
df.head()
Output:
As you can see, the column names were replaced by numbers (0, 1, 2, ...). The intent was to display column name 1: Customer_id, column name 2: Purchases, column name 3: Product_id, etc.
I appreciate any help. Thanks!
As suggested by @Chris, you can use pd.read_sql in the following way:
query = """SELECT * FROM table1 limit 1;"""
connection = psycopg2.connect(user = 'your_username',
password = 'password',
host = 'host_ip',
port = 5432,
database = 'db_name')
data = pd.read_sql(sql=query, con=connection)
Now when you print your data it will show the column names as well!
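If you would rather keep the raw cursor, the DB-API cursor.description attribute also carries the column names (the first element of each entry), so you can pass them to the DataFrame yourself; a small sketch, using the cursor and rows from the question:

# Each entry in cursor.description is a 7-item sequence; item 0 is the column name
rows = cursor.fetchall()
columns = [col[0] for col in cursor.description]

df = pd.DataFrame(rows, columns=columns)
df.head()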

How can I handle errors inside of a for loop inside of a cx_Oracle connection?

Here's a rundown of what I'd like to do: I have a list of table names, and I want to run SQL against an Oracle database and pull back the table name and row count for every table in my list. However, not every table name in my list is necessarily in the database, and this causes my code to throw a database error. What I would like to do, whenever I come to a table name that is not in the database, is create a dataframe that contains the table name and, instead of count(*), some text that says 'table not found' or something similar. At the end of the loop I'm concatenating all of the dataframes into one dataframe. The overall goal is to validate that certain tables exist and that they have the expected row counts.
query_list = []
df_List = []

connstr = '%s/%s@%s' % (username, password, server)
conn = cx_Oracle.connect(connstr)
with conn:
    query_list = ["SELECT '%s' as tbl, count(*) FROM %s." % (elm, database) + elm for elm in table_list]
    df_List = [pd.read_sql(elm, conn) for elm in query_list]
    df = pd.concat(df_List)
Consider try/except handling to return query output or table not found output:
def get_table_count(sql, conn, elm):
    try:
        return pd.read_sql(sql, conn)
    except:
        return pd.DataFrame({'tbl': elm, 'note': 'table not found'}, index=[0])


with conn:
    sql = "SELECT '{t}' as tbl, count(*) as table_count FROM {d}.{t}"
    df_List = [get_table_count(sql.format(t=elm, d=database), conn, elm)
               for elm in table_list]
    df = pd.concat(df_List, ignore_index=True)
Get a list of all the Table Names which are in the DB, then create a loop to query each Table to get the row count.
Here is a SQL statement to get a list of all Tables in an Oracle DB:
SQL:
SELECT DISTINCT TABLE_NAME FROM ALL_TAB_COLUMNS ORDER BY TABLE_NAME ASC;
Python (to make list of tables you want row counts for and which exist in the DB):
list(set(tables_that_exist_in_DB) - (set(tables_that_exist_in_DB) - set(list_of_tables_you_want)))
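Put together, that could look roughly like the sketch below, reusing conn, table_list and database from the question; Oracle returns the result column label in upper case, hence ["TABLE_NAME"]:

import pandas as pd

# Tables that actually exist in the database
existing = pd.read_sql(
    "SELECT DISTINCT TABLE_NAME FROM ALL_TAB_COLUMNS ORDER BY TABLE_NAME ASC", conn
)["TABLE_NAME"].tolist()

# Keep only the wanted tables that exist, then count rows per table
tables_to_count = sorted(set(existing) & set(table_list))
df_List = [
    pd.read_sql(f"SELECT '{t}' as tbl, count(*) as table_count FROM {database}.{t}", conn)
    for t in tables_to_count
]
df = pd.concat(df_List, ignore_index=True)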
