In the website I'm building with Flask, I'm trying to "upgrade" my search bar by using the pg_trgm PostgreSQL extension to match words even when they are misspelled. However, I'm having trouble figuring out how to use the extension's custom functions alongside SQLAlchemy.
Will I have to use raw SQL to properly perform the queries, or is there a way to do this more cleanly?
Assuming the extension is configured correctly, these functions should be accessible like any other database function. For example, here's how the similarity function might be called using SQLAlchemy core:
import sqlalchemy as sa

engine = sa.create_engine('postgresql+psycopg2:///test', echo=True, future=True)
metadata = sa.MetaData()
tbl = sa.Table(
    't70546926',
    metadata,
    sa.Column('c1', sa.String, primary_key=True),
    sa.Column('c2', sa.String),
)
tbl.drop(engine, checkfirst=True)
tbl.create(engine)

# Insert a sample row to compare.
ins = sa.insert(tbl).values(c1='Alice', c2='Alison')
with engine.begin() as conn:
    conn.execute(ins)

# pg_trgm's similarity() is called like any other SQL function via sa.func.
query = sa.select(sa.func.similarity(tbl.c.c1, tbl.c.c2).label('similarity_result'))
with engine.connect() as conn:
    rows = conn.execute(query)
    for row in rows:
        print(row.similarity_result)
For Flask-SQLAlchemy, you might do something like this:
result = dbsession.query(sa.func.similarity(val1, val2)).all()
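And for a fuzzy search filter, here is a minimal sketch, assuming a Flask-SQLAlchemy User model with a name column; the model, search_term variable, and the 0.3 similarity threshold are illustrative assumptions, not from the original question:

from sqlalchemy import func

# Hypothetical model and threshold; adjust to your schema.
results = (
    db.session.query(User)
    .filter(func.similarity(User.name, search_term) > 0.3)
    .order_by(func.similarity(User.name, search_term).desc())
    .all()
)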
We have data in a Snowflake cloud database that we would like to move into an Oracle database. As we would like to work toward refreshing the Oracle database regularly, I am trying to use SQLAlchemy to automate this.
I would like to do this using Core because my team is all experienced with SQL, but I am the only one with Python experience. I think it would be easier to tweak the data pulls if we just pass SQL strings. Plus the Snowflake db has some columns with JSON that seems easier to parse using direct SQL since I do not see JSON in the SnowflakeDialect.
I have established connections to both databases and am able to do select queries from both. I have also manually created the tables in our Oracle db so that the keys and datatypes match what I am pulling from Snowflake. When I try to insert, though, my Jupyter notebook just continuously says "Executing Cell" and hangs. Any thoughts on how to proceed or how to get the notebook to tell me where the hangup is?
from sqlalchemy import create_engine, pool, MetaData, text
from snowflake.sqlalchemy import URL
import pandas as pd

# Engine for Snowflake.
eng_sf = create_engine(URL(
    account='account',
    user='user',
    password='password',
    database='database',
    schema='schema',
    warehouse='warehouse',
    role='role',
    timezone='timezone',
))

# Engine for Oracle.
eng_o = create_engine(
    "oracle+cx_oracle://{}[{}]:{}@{}".format('user', 'proxy', 'password', 'database'),
    poolclass=pool.NullPool,
)

meta_o = MetaData()
meta_o.reflect(bind=eng_o)
person_o = meta_o.tables['bb_lms_person']  # other Oracle tables follow this example

meta_sf = MetaData()
meta_sf.reflect(bind=eng_sf, only=['person'])  # other Snowflake tables as well, but for simplicity, let's look at one
person_sf = meta_sf.tables['person']
person_query = """
SELECT ID
,EMAIL
,STAGE:student_id::STRING as STUDENT_ID
,ROW_INSERTED_TIME
,ROW_UPDATED_TIME
,ROW_DELETED_TIME
FROM cdm_lms.PERSON
"""
with eng_sf.begin() as connection:
    result = connection.execute(text(person_query)).fetchall()  # this snippet runs and returns result as expected

with eng_o.begin() as connection:
    connection.execute(person_o.insert(), result)  # this is a coin flip: sometimes it runs, sometimes it just hangs forever
eng_sf.dispose()
eng_o.dispose()
I've checked the typical offenders. The keys for both person_o and the result are all lowercase and match. Any guidance would be appreciated.
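One thing worth trying (a sketch under assumptions, not a verified fix): cx_Oracle executemany calls can stall on very large parameter lists, so inserting in smaller batches may finish reliably, or at least show where the hangup occurs. result and person_o are as defined above; the batch size of 1,000 is arbitrary:

rows = [dict(row) for row in result]  # convert Row objects to plain dicts
with eng_o.begin() as connection:
    for i in range(0, len(rows), 1000):
        connection.execute(person_o.insert(), rows[i:i + 1000])
        print('inserted rows up to', min(i + 1000, len(rows)))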
Use the reflected metadata for the table. The update and insert on fTable_Stage are fluent functions, and the values are assigned through keyword arguments. This is safe because only the table's metadata columns can be used as keywords in values(). I am updating three fields: LateProbabilityDNN, Sentiment_Polarity, Sentiment_Subjectivity.
engine = create_engine("mssql+pyodbc:///?odbc_connect=%s" % params)
connection=engine.connect()
metadata=MetaData()
Session = sessionmaker(bind = engine)
session = Session()
fTable_Stage=Table('fTable_Stage', metadata,autoload=True,autoload_with=engine)
stmt=fTable_Stage.update().where(fTable_Stage.c.KeyID==keyID).values(\
LateProbabilityDNN=round(float(late_proba),2),\
Sentiment_Polarity=round(my_valance.sentiment.polarity,2),\
Sentiment_Subjectivity= round(my_valance.sentiment.subjectivity,2)\
)
connection.execute(stmt)
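Note that executing on a bare connection like this relies on SQLAlchemy's legacy autocommit behavior, which was removed in 2.0. A sketch of the same update inside an explicit transaction, so the commit is guaranteed:

with engine.begin() as connection:
    # engine.begin() commits on success and rolls back on error.
    connection.execute(
        fTable_Stage.update()
        .where(fTable_Stage.c.KeyID == keyID)
        .values(
            LateProbabilityDNN=round(float(late_proba), 2),
            Sentiment_Polarity=round(my_valance.sentiment.polarity, 2),
            Sentiment_Subjectivity=round(my_valance.sentiment.subjectivity, 2),
        )
    )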
I'm fairly new to the SQLAlchemy ORM. I'm using a MySQL database whose schema I imported from a .sql file. I created the engine and connected to the database. I bound both the MetaData and the Session objects to the engine. But when I ran:
for t in metadata.tables:
    print(t.name)
I got the following error:
fkey["referred_table"] = rec["TABLENAME"]
KeyError: 'TABLENAME'
So what am I doing wrong here? Is it something elementary?
Below is the full code:
import sqlalchemy
from sqlalchemy import create_engine
from sqlalchemy.orm import sessionmaker
from sqlalchemy import *

engine = create_engine('mysql://sunnyahlawat:miq182@localhost/sqsunny')
engine.connect()
Session = sessionmaker(bind=engine)
session = Session()
metadata = MetaData(bind=engine, reflect=True)
#metadata.reflect(bind=engine)
for t in metadata.tables:
    print(t.name)
#print(engine.table_names())
If the database being referred to is a data dump, and the table in question has foreign keys pointing to an external database that was not exported and is not on the same server, this error can come up.
The foreign key constraint fails in such a case.
A possible solution is to drop the constraint, if this is being tried out just in a test environment.
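A minimal sketch of dropping such a constraint on MySQL; the URL, table, and constraint names here are placeholders you would take from your own schema and the error's traceback:

from sqlalchemy import create_engine, text

engine = create_engine('mysql://user:password@localhost/db')  # placeholder URL
with engine.begin() as conn:
    # Test environments only: removes the check against the missing table.
    conn.execute(text("ALTER TABLE child_table DROP FOREIGN KEY fk_parent"))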
It does not seem like SQLAlchemy supports calling stored procedures. Did anybody find a workaround for this that works with SQL Server?
Sample procedure:
CREATE PROCEDURE list_lock_set @name varchar(5), @requester varchar(30)
AS
BEGIN
    SET NOCOUNT ON;
    INSERT INTO list_lock (name, requester, acquired) VALUES (@name, @requester, GETDATE())
    RETURN 0
END
GO
This works:
import pyodbc

dbh = pyodbc.connect(driver='{SQL Server}', server=srv, database=db, uid=uid, pwd=pwd)
dbc = dbh.cursor()
dbc.execute("list_lock_set ?, ?", ['bbc', 'pyodbc'])
dbc.commit()
This does not produce an error, but also does not work:
from sqlalchemy import create_engine

engine = create_engine('mssql+pyodbc://usr:passw@srv/db?driver=SQL Server', echo=True)
engine.execute("list_lock_set ?, ?", ['bbc', 'sqlalchemy'])
Thank you.
EDIT: It appears the best solution is to fish out pyodbc cursor from the engine:
cursor = engine.raw_connection().cursor()
cursor.execute("list_lock_set ?, ?", ['bbc', 'using cursor'])
cursor.commit()
I can also obtain pyodbc Connection:
engine.raw_connection().connection
and set autocommit=True, but that might interfere with engine logic. Many thanks to @Batman.
To have it working in sqlalchemy I managed to do it this way:
from sqlalchemy import create_engine

engine = create_engine('mssql+pyodbc://usr:passw@srv/db?driver=SQL Server', echo=True)
with engine.begin() as conn:
    conn.execute("exec dbo.your_proc")
I remember this giving me grief before too. From memory either session.execute() or connection.execute() worked for me. There's also a callproc() method, which is probably the right way to go. http://docs.sqlalchemy.org/en/latest/core/connections.html
Also, I've had issues in the past with MSSQL which seemed to be due to something asynchronous happening where the method was returning before the procedure was finished, which resulted in unpredictable results on the database. I found that putting a time.sleep(1) (or whatever the appropriate number is) right after the call fixed this.
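On callproc(): pyodbc does not implement it, but the ODBC call escape sequence reaches the same goal through the raw DBAPI connection. A sketch against the list_lock_set procedure above:

connection = engine.raw_connection()
try:
    cursor = connection.cursor()
    # ODBC escape syntax for calling a stored procedure with parameters.
    cursor.execute("{CALL list_lock_set (?, ?)}", ('bbc', 'raw cursor'))
    cursor.close()
    connection.commit()
finally:
    connection.close()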
Yes, I was facing the same issue. SQLAlchemy was executing the procedure, but no actions were happening.
So I tweaked it by running the call as connection.execute("exec <Procedure_name>").
Following on from @MATEITA LUCIAN:
Add input parameters to the stored procedure
from sqlalchemy import create_engine

engine = create_engine('mssql+pyodbc://usr:passw@srv/db?driver=SQL Server', echo=True)
with engine.begin() as conn:
    conn.execute("EXEC dbo.your_proc @textinput_param = 'sometext', @intinput_param = 4")
For those who aren't sure how or what to refer to with the '...//usr:passw@srv/db?driver=SQL Server', echo=True part of creating the engine, I would suggest reading this and choosing Option 1 - Provide a DSN.
Retrieve output from the stored procedure
from sqlalchemy import create_engine

engine = create_engine('mssql+pyodbc://usr:passw@srv/db?driver=SQL Server', echo=True)
with engine.begin() as conn:
    result = conn.execute("EXEC dbo.your_proc @textinput_param = 'sometext', @intinput_param = 4")
    for row in result:
        print('text_output: %s \t int_output: %d' % (row['text_field'], row['int_field']))
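To sidestep the nested-quoting issues entirely, the parameters can also be passed as text() bindings rather than interpolated into the EXEC string; a sketch using the same hypothetical procedure and parameter names as above:

from sqlalchemy import create_engine, text

engine = create_engine('mssql+pyodbc://usr:passw@srv/db?driver=SQL Server', echo=True)
with engine.begin() as conn:
    result = conn.execute(
        text("EXEC dbo.your_proc @textinput_param = :t, @intinput_param = :i"),
        {"t": "sometext", "i": 4},
    )
    for row in result:
        print(row)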
Is it possible to determine the fields available in a table (MySQL DB) programmatically at runtime using SQLAlchemy or any other Python library? Any help on this would be great.
Thanks.
Reflection could do this.
Reflect the database in one go using SQLAlchemy:
meta = MetaData()
meta.reflect(bind=someengine)
users_table = meta.tables['users']
addresses_table = meta.tables['addresses']
# fields of address_table
fields = addresses_table.columns.keys()
See more information at http://docs.sqlalchemy.org/en/rel_0_7/core/schema.html#reflecting-database-objects
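On newer SQLAlchemy versions, the Inspector API returns the same column information without building Table objects; a minimal sketch, reusing the users table from the example above (the URL is a placeholder):

from sqlalchemy import create_engine, inspect

engine = create_engine('mysql://user:password@localhost/db')  # placeholder URL
inspector = inspect(engine)
for column in inspector.get_columns('users'):
    print(column['name'], column['type'])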
You can run SHOW COLUMNS FROM table_name (or DESCRIBE table_name) to get the columns of the table.
This is part of the standard DB API specification for Python (PEP 249), namely the description attribute on cursors, so you do not need SQLAlchemy.
For example, using PyMySQL (http://www.pymysql.org/), where user is your table:
import pymysql

conn = pymysql.connect(host='127.0.0.1', port=3306, user='root', passwd='', db='mysql')
cur = conn.cursor()
cur.execute("SELECT * FROM user")
print(cur.description)
This Python code should run statements on the database, but the SQL statements are not executed:
from sqlalchemy import *

sql_file = open("test.sql", "r")
sql_query = sql_file.read()
sql_file.close()

engine = create_engine(
    'postgresql+psycopg2://user:password@localhost/test', echo=False)
conn = engine.connect()
print(sql_query)
result = conn.execute(sql_query)
conn.close()
The test.sql file contains SQL statements which create 89 tables.
The tables are not created if I specify all 89 tables, but if I reduce the number of tables to 2, it works.
Is there a limit on the number of queries that can be executed within conn.execute? How do I run any number of raw queries like this?
Perhaps force the autocommit:
conn.execute(text(RAW_SQL).execution_options(autocommit=True))
Another approach is to use a transaction and commit explicitly:
t = conn.begin()
try:
    conn.execute(RAW_SQL)
    t.commit()
except:
    t.rollback()
PS: You can put the execution_options in the create_engine parameters too.
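Another option is to split the file into individual statements and run them one by one inside a single transaction. A naive sketch that splits on semicolons (it will misfire on semicolons embedded in string literals or function bodies):

from sqlalchemy import create_engine, text

engine = create_engine('postgresql+psycopg2://user:password@localhost/test')
with open('test.sql') as f:
    statements = [s.strip() for s in f.read().split(';') if s.strip()]
with engine.begin() as conn:  # commits on success, rolls back on error
    for statement in statements:
        conn.execute(text(statement))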
Why do you use raw SQL with SQLAlchemy? If you have no good reason for that, you should use other methods:
http://docs.sqlalchemy.org/en/rel_0_7/orm/tutorial.html
http://docs.sqlalchemy.org/en/rel_0_7/core/schema.html#metadata-describing