KeyError: 'TABLENAME' in MySQL database while using SQLAlchemy - Python

I'm fairly new to the SQLAlchemy ORM. I'm using a MySQL database whose schema I imported from a .sql file. I created the engine and connected to the database, and I bound both the MetaData and the Session objects to the engine. But when I ran:
for t in metadata.tables.values():
    print(t.name)
I got the following error:
fkey["referred_table"] = rec["TABLENAME"]
KeyError: 'TABLENAME'
So what am I doing wrong here? Is it something elementary?
Below is the full code:
import sqlalchemy
from sqlalchemy import create_engine
from sqlalchemy.orm import sessionmaker
from sqlalchemy import *

engine = create_engine('mysql://sunnyahlawat:miq182@localhost/sqsunny')
engine.connect()
Session = sessionmaker(bind=engine)
session = Session()
metadata = MetaData(bind=engine, reflect=True)
#metadata.reflect(bind=engine)
for t in metadata.tables.values():
    print(t.name)
#print(engine.table_names())

If the database being referred to is a data dump, and the table in question has foreign keys pointing into an external database that was not exported and is not on the same server, this error can come up: reflection of the foreign key constraint fails in such a case.
A possible solution is to drop the constraint, if this is being tried out just in a test environment, as in the sketch below.
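For example, a minimal sketch of dropping such a constraint through SQLAlchemy (the table name child_table and constraint name fk_external_ref are hypothetical placeholders; look up the real names with SHOW CREATE TABLE):

import sqlalchemy as sa

engine = sa.create_engine('mysql://sunnyahlawat:miq182@localhost/sqsunny')
with engine.connect() as conn:
    # drop the foreign key that points at the missing external table
    conn.execute(sa.text('ALTER TABLE child_table DROP FOREIGN KEY fk_external_ref'))
# reflection should then succeed
metadata = sa.MetaData(bind=engine, reflect=True)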

Related

How to use PostgreSQL extensions in SQLAlchemy (specifically Flask-SQLAlchemy)?

In the website I'm making with Flask, I'm trying to "upgrade" my search bar by using the pg_trgm PostgreSQL extension to match words even when they are misspelled. However, I'm having trouble figuring out how to use the extension's custom functions alongside SQLAlchemy.
Will I have to use raw SQL to perform the queries properly, or is there a cleaner way to do this?
Assuming the extension is configured correctly, its functions should be accessible like any other database function. For example, here's how the similarity function might be called using SQLAlchemy Core:
import sqlalchemy as sa

engine = sa.create_engine('postgresql+psycopg2:///test', echo=True, future=True)
metadata = sa.MetaData()
tbl = sa.Table('t70546926', metadata,
               sa.Column('c1', sa.String, primary_key=True),
               sa.Column('c2', sa.String))
tbl.drop(engine, checkfirst=True)
tbl.create(engine)

ins = sa.insert(tbl).values(c1='Alice', c2='Alison')
with engine.begin() as conn:
    conn.execute(ins)

query = sa.select(sa.func.similarity(tbl.c.c1, tbl.c.c2).label('similarity_result'))
with engine.connect() as conn:
    rows = conn.execute(query)
    for row in rows:
        print(row.similarity_result)
For Flask-SQLAlchemy, you might do something like this:
result = db.session.query(sa.func.similarity(val1, val2)).all()
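Applied to the search-bar use case, a hedged sketch (the Item model, its name column, the search_term variable, and the 0.3 similarity cutoff are all assumptions for illustration, reusing the sa alias from above):

# rank items by trigram similarity to the user's search term (hypothetical model)
matches = (Item.query
           .filter(sa.func.similarity(Item.name, search_term) > 0.3)
           .order_by(sa.func.similarity(Item.name, search_term).desc())
           .all())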

Fetch from one database, Insert/Update into another using SQLAlchemy

We have data in a Snowflake cloud database that we would like to move into an Oracle database. As we would like to work toward refreshing the Oracle database regularly, I am trying to use SQLAlchemy to automate this.
I would like to do this using Core because my team is all experienced with SQL, but I am the only one with Python experience. I think it would be easier to tweak the data pulls if we just pass SQL strings. Plus, the Snowflake DB has some columns with JSON that seem easier to parse using direct SQL, since I do not see JSON in the SnowflakeDialect.
I have established connections to both databases and am able to do select queries from both. I have also manually created the tables in our Oracle db so that the keys and datatypes match what I am pulling from Snowflake. When I try to insert, though, my Jupyter notebook just continuously says "Executing Cell" and hangs. Any thoughts on how to proceed or how to get the notebook to tell me where the hangup is?
from sqlalchemy import create_engine, pool, MetaData, text
from snowflake.sqlalchemy import URL
import pandas as pd

eng_sf = create_engine(URL(  # engine for Snowflake
    account='account',
    user='user',
    password='password',
    database='database',
    schema='schema',
    warehouse='warehouse',
    role='role',
    timezone='timezone',
))
eng_o = create_engine("oracle+cx_oracle://{}[{}]:{}@{}".format('user', 'proxy', 'password', 'database'),
                      poolclass=pool.NullPool)  # engine for Oracle
meta_o = MetaData()
meta_o.reflect(bind=eng_o)
person_o = meta_o.tables['bb_lms_person']  # other Oracle tables follow this example
meta_sf = MetaData()
meta_sf.reflect(bind=eng_sf, only=['person'])  # other Snowflake tables as well, but for simplicity, let's look at one
person_sf = meta_sf.tables['person']
person_query = """
SELECT ID
,EMAIL
,STAGE:student_id::STRING as STUDENT_ID
,ROW_INSERTED_TIME
,ROW_UPDATED_TIME
,ROW_DELETED_TIME
FROM cdm_lms.PERSON
"""
with eng_sf.begin() as connection:
    result = connection.execute(text(person_query)).fetchall()  # this snippet runs and returns result as expected
with eng_o.begin() as connection:
    connection.execute(person_o.insert(), result)  # this is a coin flip: sometimes it runs, sometimes it just hangs forever
eng_sf.dispose()
eng_o.dispose()
I've checked the typical offenders. The keys for both person_o and the result are all lowercase and match. Any guidance would be appreciated.
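One way to get the notebook to show where the hangup is (a diagnostic sketch, not from the original post; the batch size is an arbitrary assumption) is to break the insert into batches and print progress as each one lands:

CHUNK = 500  # arbitrary batch size for illustration
with eng_o.begin() as connection:
    for i in range(0, len(result), CHUNK):
        connection.execute(person_o.insert(), result[i:i + CHUNK])
        print('inserted rows', i, 'through', min(i + CHUNK, len(result)) - 1)

If the prints stall partway through, the hang is tied to a specific batch of rows; if nothing prints at all, it is in acquiring the Oracle connection or a table lock.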
Use the metadata for the table. fTable_Stage's update() and insert() are fluent functions, and values are assigned through keyword arguments to values(). This is quite safe because only columns defined in the metadata can be used. Here I am updating three fields: LateProbabilityDNN, Sentiment_Polarity, Sentiment_Subjectivity:
engine = create_engine("mssql+pyodbc:///?odbc_connect=%s" % params)
connection=engine.connect()
metadata=MetaData()
Session = sessionmaker(bind = engine)
session = Session()
fTable_Stage=Table('fTable_Stage', metadata,autoload=True,autoload_with=engine)
stmt=fTable_Stage.update().where(fTable_Stage.c.KeyID==keyID).values(\
LateProbabilityDNN=round(float(late_proba),2),\
Sentiment_Polarity=round(my_valance.sentiment.polarity,2),\
Sentiment_Subjectivity= round(my_valance.sentiment.subjectivity,2)\
)
connection.execute(stmt)
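A side note, not from the original answer: on SQLAlchemy 1.x the bare connection.execute(stmt) relies on implicit autocommit to persist the UPDATE; under 2.0-style usage you would wrap it in an explicit transaction:

with engine.begin() as connection:  # commits on success, rolls back on error
    connection.execute(stmt)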

How to fix '(sqlite3.ProgrammingError) SQLite objects created in a thread can only be used in that same thread.' Issue?

I'm utilizing Flask and SQLAlchemy. The database I've created for SQLAlchemy seems to mess up when I try to run my website: it pops up with an error stating that there's a thread error. I'm wondering if it's because I haven't dropped my table from my previous schema. I'm using a Linux server to run "python3" with the file that sets up my database.
I've tried to physically delete the table from my local drive and re-run it, but I still get this error.
from sqlalchemy import create_engine
from sqlalchemy.orm import sessionmaker
from sqlalchemy.orm import scoped_session
from database_setup import Base, Category, Item
engine = create_engine('sqlite:///database_tables.db')
Base.metadata.bind = engine
Session = sessionmaker()
Session.bind = engine
session = Session()
brushes = Category(id=1, category_name='Brushes')
session.add(brushes)
session.commit()
pencils = Category(id=2, category_name='Pencils')
session.add(pencils)
session.commit()
When I am in debug mode using Flask and I click the links I've made using these rows, after three clicks I get the error:
"(sqlite3.ProgrammingError) SQLite objects created in a thread can only be used in that same thread. The object was created in thread id 140244909291264 and this is thread id 140244900898560 [SQL: SELECT category.id AS category_id, category.category_name AS category_category_name FROM category] [parameters: [{}]] (Background on this error at: http://sqlalche.me/e/f405)"
You can use one session per thread by indexing sessions with the thread id from _thread.get_ident():
import _thread

engine = create_engine('sqlite:///history.db', connect_args={'check_same_thread': False})
...
Base.metadata.create_all(engine)

sessions = {}

def get_session():
    thread_id = _thread.get_ident()  # id of the current thread
    if thread_id in sessions:
        return sessions[thread_id]
    session_factory = sessionmaker(bind=engine)
    Session = scoped_session(session_factory)
    sessions[thread_id] = Session()
    return sessions[thread_id]
then use get_session() wherever a session is needed; in your case:
get_session().add(brushes)
get_session().commit()
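As an aside, not part of the original answer: scoped_session already keeps one session per thread internally, so the manual dictionary can be collapsed into a single module-level registry:

# scoped_session proxies each call to the current thread's own session
Session = scoped_session(sessionmaker(bind=engine))
Session.add(brushes)
Session.commit()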

'module' object is not callable with sqlalchemy

I'm totally new to using SQLAlchemy and PostgreSQL. I read this tutorial to build the following piece of code:
import sqlalchemy
from sqlalchemy import create_engine
from sqlalchemy import engine

def connect(user, password, db, host='localhost', port=5432):
    '''Returns a connection and a metadata object'''
    # We connect with the help of the PostgreSQL URL
    # postgresql://federer:grandestslam@localhost:5432/tennis
    url = 'postgresql://{}:{}@{}:{}/{}'
    url = url.format(user, password, host, port, db)
    # The return value of create_engine() is our connection object
    con = sqlalchemy.create_engine(url, client_encoding='utf8')
    # We then bind the connection to MetaData()
    meta = sqlalchemy.MetaData(bind=con, reflect=True)
    return con, meta

con, meta = connect('federer', 'grandestslam', 'tennis')
con
engine('postgresql://federer:***@localhost:5432/tennis')
meta
MetaData(bind=Engine('postgresql://federer:***@localhost:5432/tennis'))
When running it I get this error:
File "test.py", line 22, in <module>
    engine('postgresql://federer:***@localhost:5432/tennis')
TypeError: 'module' object is not callable
What should I do? Thanks!
So, your problem is happening because you've made this call:
from sqlalchemy import engine
And then you've used this later in the file:
engine('postgresql://federer:***@localhost:5432/tennis')
Strangely, that section also has some statements that are just con and meta, with no assignments or calls attached. I'm not sure what you're doing there. I would suggest that you check out SQLAlchemy's page on engine and connection use to help get you sorted.
It will of course depend on exactly how you've set up your database. I used declarative_base in one of my projects, so my process of setting up a session to connect to my DB looks like this:
from sqlalchemy import create_engine
from sqlalchemy.orm import sessionmaker

# Connect to Database and create database session
engine = create_engine('postgresql://catalog:catalog@localhost/menus')
Base.metadata.bind = engine
DBSession = sessionmaker(bind=engine)
session = DBSession()
And in my database setup file, I've assigned:
from sqlalchemy.ext.declarative import declarative_base

Base = declarative_base()
But you'll have to customize it a bit to your particular setup. I hope that helps.
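For concreteness, a minimal sketch of such a setup file (the Menu model, its table name, and its columns are hypothetical, chosen only to match the menus database in the snippet above):

from sqlalchemy import Column, Integer, String
from sqlalchemy.ext.declarative import declarative_base

Base = declarative_base()

class Menu(Base):  # hypothetical example model
    __tablename__ = 'menu'
    id = Column(Integer, primary_key=True)
    name = Column(String(250), nullable=False)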
Edit: I see now where those calls to con and meta were coming from, as well as your other confusing lines: they're part of the tutorial you linked to. What the author was doing in that tutorial was using the Python interpreter on the command line. I'll explain a few of the things he did there in the hope that it helps you some more. Lines beginning with >>> are what he enters as commands; the other lines are the output he receives back.
>>> con, meta = connect('federer', 'grandestslam', 'tennis') # he creates the connection and meta objects
>>> con # now he calls the connection by itself to have it show that it's connected to his DB
Engine(postgresql://federer:***#localhost:5432/tennis)
>>> meta # here he calls his meta object to show how it, too, is connected
MetaData(bind=Engine(postgresql://federer:***#localhost:5432/tennis))

SQLAlchemy does not recognise changes to DB made outside of session

Something peculiar I've noticed is that any changes committed to the DB outside of the session (such as ones made in MySQL Workbench) are not recognised in the SQLAlchemy session. I have to close the session and open a new one for SQLAlchemy to recognise them.
For example, a row I deleted manually is still fetched by SQLAlchemy.
This is how I initialise the session:
engine = create_engine('mysql://{}:{}@{}/{}'.format(username, password, host, schema), pool_recycle=3600)
Session = sessionmaker(bind=engine)
session = Session()
metadata = MetaData()
How can I get SQLAlchemy to recognise them?
My SQLAlchemy version is 0.9.4 and my MySQL version is 5.5.34. We use only SQLAlchemy's Core (no ORM).
To be able to read data committed by other transactions, you'll need to set the transaction isolation level to READ COMMITTED. For SQLAlchemy and MySQL:
To set the isolation level using create_engine():
engine = create_engine(
    "mysql://scott:tiger@localhost/test",
    isolation_level="READ COMMITTED")
To set it using per-connection execution options:
connection = engine.connect()
connection = connection.execution_options(
    isolation_level="READ COMMITTED")
