Python SQLAlchemy PostgreSQL find by primary key deprecation warning

I use the code below to find an object by its primary key, and now I am getting this "Deprecated API features detected" message.
How can I rewrite this query to fix the deprecation warning?
Code:
def find_by_id(self, obj_id):
    with self.session() as s:
        x = s.query(User).get(obj_id)
        return x
Warning:
LegacyAPIWarning: Deprecated API features detected! These feature(s) are not compatible with SQLAlchemy 2.0. To prevent incompatible upgrades prior to updating applications, ensure requirements files are pinned to "sqlalchemy<2.0". Set environment variable SQLALCHEMY_WARN_20=1 to show all deprecation warnings. Set environment variable SQLALCHEMY_SILENCE_UBER_WARNING=1 to silence this message. (Background on SQLAlchemy 2.0 at: https://sqlalche.me/e/b8d9)
x = s.query(User).get(obj_id)

Try filter_by instead of get.
def find_by_id(self, obj_id):
    with self.session() as s:
        x = s.query(User).filter_by(id=obj_id).first()
        return x
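For reference, newer SQLAlchemy versions (1.4+) also provide a 2.0-style Session.get() as the direct replacement for the legacy Query.get(); a minimal sketch, reusing the same session helper as above:

def find_by_id(self, obj_id):
    with self.session() as s:
        # Session.get() is the 2.0-style primary-key lookup (SQLAlchemy 1.4+)
        return s.get(User, obj_id)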

Related

pandas to_sql() gives a SADeprecationWarning

The to_sql() function in pandas is now producing a SADeprecationWarning.
df.to_sql(name=tablename, con=c, if_exists='append', index=False)
[..]/lib/python3.8/site-packages/pandas/io/sql.py:1430: SADeprecationWarning: The Connection.run_callable() method is deprecated and will be removed in a future release. Use a context manager instead. (deprecated since: 1.4)
I was getting this even with the df.read_sql() command when running SQL select statements. Changing it to the original df.read_sql_query() that it wraps got rid of it, so I suspect there is some linkage there.
So, the question is: how do I write a dataframe table to SQL without it getting deprecated in a future release? What does "use a context manager" mean, and how can I implement that?
Versions:
pandas: 1.1.5 | SQLAlchemy: 1.4.0 | pyodbc: 4.0.30 | Python: 3.8.0
Working with an MSSQL database.
OS: Linux Mint Xfce, 18.04. Using a python virtual environment.
If it matters, connection created like so:
conn_str = r'mssql+pyodbc:///?odbc_connect={}'.format(dbString).strip()
sqlEngine = sqlalchemy.create_engine(conn_str, echo=False, pool_recycle=3600)
c = sqlEngine.connect()
And after the db operation,
c.close()
Doing so keeps the main sqlEngine engine "alive" between API calls and lets me use a pooled connection rather than having to connect anew.
Update: according to the pandas team, this will be fixed in pandas 1.2.4, which as of the time of writing has not been released yet.
Adding this as an answer since Google led here but the accepted answer is not applicable.
Our surrounding code that uses Pandas does use a context manager:
with get_engine(dbname).connect() as conn:
    df = pd.read_sql(stmt, conn, **kwargs)
    return df
In my case, this error is being thrown from within pandas itself, not in the surrounding code that uses pandas:
/Users/tommycarpenter/Development/python-indexapi/.tox/py37/lib/python3.7/site-packages/pandas/io/sql.py:1430: SADeprecationWarning: The Engine.run_callable() method is deprecated and will be removed in a future release. Use the Engine.connect() context manager instead. (deprecated since: 1.4)
The snippet from pandas itself is:
def has_table(self, name, schema=None):
    return self.connectable.run_callable(
        self.connectable.dialect.has_table, name, schema or self.meta.schema
    )
I raised an issue: https://github.com/pandas-dev/pandas/issues/40825
You could try...
connection_string = r'mssql+pyodbc:///?odbc_connect={}'.format(dbString).strip()
engine = sqlalchemy.create_engine(connection_string, echo=False, pool_recycle=3600)

with engine.connect() as connection:
    df.to_sql(name=tablename, con=connection, if_exists='append', index=False)
This approach uses a context manager. The engine's context manager returns a connection and automatically invokes connection.close() on it when the block exits; read more about context managers here. Another useful thing to know is that a connection is a context manager as well and can handle transactions for you: it begins and ends a transaction and, in case of an error, automatically invokes a rollback.
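If you specifically want the transaction committed for you, SQLAlchemy's engine.begin() context manager does that: it commits when the block exits normally and rolls back on error. A minimal sketch, reusing the engine, df, and tablename names from above:

# engine.begin() checks out a connection and starts a transaction;
# it commits on normal exit and rolls back if an exception is raised
with engine.begin() as connection:
    df.to_sql(name=tablename, con=connection, if_exists='append', index=False)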

Deprecated method when making a C2D communication from Azure IoT Hub to a Python script

message = client.receive_message()
This code is now deprecated and when searching for a solution it seems I am the only one with this issue.
I get this warning:
DeprecatedWarning: receive_message is deprecated as of 2.3.0. We recommend that you use the .on_message_received property to set a handler instead of message = client.receive_message()
If you have a possible solution, please post it here.
I am running the latest python 3.9 and the latest Azure IoT device library.
You're trying to use a method that is deprecated and will eventually no longer be supported. As the warning message says, the correct way to handle C2D messages is to use an event handler. There is a good example of that here.
The part you will be interested in is:
# define behavior for receiving a message
# NOTE: this could be a function or a coroutine
def message_received_handler(message):
    print("the data in the message received was ")
    print(message.data)
    print("custom properties are")
    print(message.custom_properties)
    print("content Type: {0}".format(message.content_type))
    print("")

# set the message received handler on the client
device_client.on_message_received = message_received_handler

Datastore delay on creating entities with put()

I am developing an application with the Cloud Datastore Emulator (2.1.0) and the google-cloud-ndb Python library (1.6).
I find that there is an intermittent delay on entities being retrievable via a query.
For example, if I create an entity like this:
my_entity = MyEntity(foo='bar')
my_entity.put()
get_my_entity = MyEntity.query().filter(MyEntity.foo == 'bar').get()
print(get_my_entity.foo)
it will fail intermittently because the get() method returns None.
This only happens on about 1 in 10 calls.
To demonstrate, I've created this script (also available with ready to run docker-compose setup on GitHub):
import random

from google.cloud import ndb
from google.auth.credentials import AnonymousCredentials

client = ndb.Client(
    credentials=AnonymousCredentials(),
    project='local-dev',
)

class SampleModel(ndb.Model):
    """Sample model."""
    some_val = ndb.StringProperty()

for x in range(1, 1000):
    print(f'Attempt {x}')
    with client.context():
        random_text = str(random.randint(0, 9999999999))
        new_model = SampleModel(some_val=random_text)
        new_model.put()
        retrieved_model = SampleModel.query().filter(
            SampleModel.some_val == random_text
        ).get()
        print(f'Model Text: {retrieved_model.some_val}')
What would be the correct way to avoid this intermittent failure? Is there a way to ensure the entity is always available after the put() call?
Update
I can confirm that this is only an issue with the Datastore emulator. When testing on App Engine and a Firestore in Datastore mode database, entities are available immediately after calling put().
The issue turned out to be related to the emulator trying to replicate eventual consistency.
Unlike relational databases, Datastore does not guarantee that data will be available immediately after it is written; there are often replication and indexing delays.
For things like unit tests, this can be resolved by passing --consistency=1.0 to the datastore start command, as documented here.
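For example, assuming you start the emulator through gcloud, the flag goes on the start command:

gcloud beta emulators datastore start --consistency=1.0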

Conflict - Python insert-update into Azure table storage

Working with Python, I have many processes that need to update/insert data into an Azure table storage at the same time using :
table_service.update_entity(table_name, task)
table_service.insert_entity(table_name, task)
However, the following error occurs:
AzureConflictHttpError: Conflict
{"odata.error":{"code":"EntityAlreadyExists","message":{"lang":"en-US","value":"The specified entity already exists.\nRequestId:57d9b721-6002-012d-3d0c-b88bef000000\nTime:2019-01-29T19:55:53.5984026Z"}}}
Maybe I need to use a global lock to avoid operating on the same table entity concurrently, but I don't know how to use it.
Azure Tables has a new SDK for Python that is available as a preview release on pip; here's an update using the newest library.
On a create method you can use a try/except block to catch the expected error:
from azure.data.tables import TableClient
from azure.core.exceptions import ResourceExistsError

table_client = TableClient.from_connection_string(conn_str, table_name="myTableName")

try:
    table_client.create_entity(entity=my_entity)
except ResourceExistsError:
    print("Entity already exists")
You can use ETag to update entities conditionally after creation.
from azure.data.tables import UpdateMode
from azure.core import MatchConditions

received_entity = table_client.get_entity(
    partition_key="my_partition_key",
    row_key="my_row_key",
)
etag = received_entity._metadata["etag"]
resp = table_client.update_entity(
    entity=my_new_entity,
    etag=etag,
    mode=UpdateMode.REPLACE,
    match_condition=MatchConditions.IfNotModified
)
On update, you can elect for replace or merge; more information here.
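For instance, a merge that only overwrites the properties present in my_new_entity (as opposed to the full replace shown above) would look roughly like this:

# MERGE keeps existing properties that are absent from my_new_entity;
# REPLACE (used above) overwrites the whole entity
table_client.update_entity(entity=my_new_entity, mode=UpdateMode.MERGE)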
(FYI, I am a Microsoft employee on the Azure SDK for Python team)
There isn't a global "Lock" in Azure Table Storage, since it's using optimistic concurrency via ETag (i.e. If-Match header in raw HTTP requests).
If your thread A is performing insert_entity, it should catch the 409 Conflict error.
If your threads B and C are performing update_entity, they should catch the 412 Precondition Failed error and use a loop to retrieve the latest entity and then try the update again (see the sketch below).
For more details, please check the Managing Concurrency in the Table Service section in https://azure.microsoft.com/en-us/blog/managing-concurrency-in-microsoft-azure-storage-2/
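A rough sketch of that read-modify-write retry loop with the newer azure-data-tables client (update_with_retry and my_changes are made-up names; checking the status code keeps it independent of the exact exception subclass raised for 412):

from azure.core import MatchConditions
from azure.core.exceptions import HttpResponseError
from azure.data.tables import UpdateMode

def update_with_retry(table_client, partition_key, row_key, my_changes, attempts=5):
    """Retry an ETag-conditional update until no other writer interferes."""
    for _ in range(attempts):
        # fetch the latest copy of the entity, including its current ETag
        entity = table_client.get_entity(partition_key=partition_key, row_key=row_key)
        entity.update(my_changes)  # apply this writer's modifications
        try:
            table_client.update_entity(
                entity=entity,
                etag=entity._metadata["etag"],
                mode=UpdateMode.REPLACE,
                match_condition=MatchConditions.IfNotModified,
            )
            return
        except HttpResponseError as err:
            if err.status_code != 412:  # only retry on Precondition Failed
                raise
    raise RuntimeError("entity kept changing; gave up after retries")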

Sqlite / SQLAlchemy: how to enforce Foreign Keys?

The new version of SQLite has the ability to enforce foreign key constraints, but for the sake of backwards compatibility, you have to turn it on for each database connection separately!
sqlite> PRAGMA foreign_keys = ON;
I am using SQLAlchemy -- how can I make sure this always gets turned on?
What I have tried is this:
engine = sqlalchemy.create_engine('sqlite:///:memory:', echo=True)
engine.execute('pragma foreign_keys=on')
...but it is not working! What am I missing?
EDIT:
I think my real problem is that I have more than one version of SQLite installed, and Python is not using the latest one!
>>> import sqlite3
>>> print sqlite3.sqlite_version
3.3.4
But I just downloaded 3.6.23 and put the exe in my project directory!
How can I figure out which .exe it's using, and change it?
For recent versions (SQLAlchemy ~0.7) the SQLAlchemy homepage says:
PoolListener is deprecated. Please refer to PoolEvents.
Then the example by CarlS becomes:
engine = create_engine(database_url)

def _fk_pragma_on_connect(dbapi_con, con_record):
    dbapi_con.execute('pragma foreign_keys=ON')

from sqlalchemy import event
event.listen(engine, 'connect', _fk_pragma_on_connect)
Building on the answers from conny and shadowmatter, here's code that will check if you are using SQLite3 before emitting the PRAGMA statement:
from sqlalchemy import event
from sqlalchemy.engine import Engine
from sqlite3 import Connection as SQLite3Connection

@event.listens_for(Engine, "connect")
def _set_sqlite_pragma(dbapi_connection, connection_record):
    if isinstance(dbapi_connection, SQLite3Connection):
        cursor = dbapi_connection.cursor()
        cursor.execute("PRAGMA foreign_keys=ON;")
        cursor.close()
I now have this working:
Download the latest sqlite and pysqlite2 builds as described above; make sure the correct versions are being used at runtime by Python.
import sqlite3
import pysqlite2
print sqlite3.sqlite_version # should be 3.6.23.1
print pysqlite2.__path__ # eg C:\\Python26\\lib\\site-packages\\pysqlite2
Next add a PoolListener:
from sqlalchemy.interfaces import PoolListener

class ForeignKeysListener(PoolListener):
    def connect(self, dbapi_con, con_record):
        db_cursor = dbapi_con.execute('pragma foreign_keys=ON')

engine = create_engine(database_url, listeners=[ForeignKeysListener()])
Then be careful how you test whether foreign keys are working: I had some confusion here. When using the SQLAlchemy ORM to add() things, my import code was implicitly handling the relation hookups, so it could never fail. Adding nullable=False to some ForeignKey() statements helped me here.
The way I test that SQLAlchemy's SQLite foreign key support is enabled is to do a manual insert from a declarative ORM class:
# example
ins = Coverage.__table__.insert().values(
    id=99,
    description='Wrong',
    area=42.0,
    wall_id=99,  # invalid fkey id
    type_id=99,  # invalid fkey id
)
session.execute(ins)
Here wall_id and type_id are both ForeignKey()s, and SQLite now correctly throws an exception when trying to hook up invalid fkeys. So it works! If you remove the listener, then SQLAlchemy will happily add invalid entries.
I believe the main problem may be multiple sqlite3.dll (or .so) files lying around.
As a simpler approach, if your session creation is centralised behind a Python helper function (rather than exposing the SQLAlchemy engine directly), you can just issue session.execute('pragma foreign_keys=on') before returning the freshly created session.
You only need the pool listener approach if arbitrary parts of your application may create SQLA sessions against the database.
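A minimal sketch of what such a centralised helper could look like (make_session and SessionFactory are made-up names, not part of SQLAlchemy):

from sqlalchemy import create_engine, text
from sqlalchemy.orm import sessionmaker

engine = create_engine('sqlite:///app.db')
SessionFactory = sessionmaker(bind=engine)  # hypothetical session factory

def make_session():
    """Hand out sessions with foreign key enforcement turned on."""
    session = SessionFactory()
    session.execute(text('pragma foreign_keys=on'))
    return session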
From the SQLite dialect page:
SQLite supports FOREIGN KEY syntax when emitting CREATE statements for tables, however by default these constraints have no effect on the operation of the table.
Constraint checking on SQLite has three prerequisites:
At least version 3.6.19 of SQLite must be in use
The SQLite library must be compiled without the SQLITE_OMIT_FOREIGN_KEY or SQLITE_OMIT_TRIGGER symbols enabled.
The PRAGMA foreign_keys = ON statement must be emitted on all connections before use.
SQLAlchemy allows for the PRAGMA statement to be emitted automatically for new connections through the usage of events:
from sqlalchemy.engine import Engine
from sqlalchemy import event

@event.listens_for(Engine, "connect")
def set_sqlite_pragma(dbapi_connection, connection_record):
    cursor = dbapi_connection.cursor()
    cursor.execute("PRAGMA foreign_keys=ON")
    cursor.close()
One-liner version of conny's answer:
from sqlalchemy import event
event.listen(engine, 'connect', lambda c, _: c.execute('pragma foreign_keys=on'))
I had the same problem before (scripts with foreign key constraints were going through, but the actual constraints were not enforced by the SQLite engine); I got it solved by:
downloading, building and installing the latest version of SQLite from here: sqlite-sqlite-amalgamation; before this I had SQLite 3.6.16 on my Ubuntu machine, which didn't support foreign keys yet; it needs to be 3.6.19 or higher for them to work.
installing the latest version of pysqlite from here: pysqlite-2.6.0
After that I started getting exceptions whenever a foreign key constraint failed.
Hope this helps, regards.
If you need to execute something for setup on every connection, use a PoolListener.
Enforce foreign key constraints for SQLite when using Flask + SQLAlchemy:
from flask import Flask
from flask_sqlalchemy import SQLAlchemy

db = SQLAlchemy()

def create_app(config: str = None):
    app = Flask(__name__, instance_relative_config=True)
    if config is None:
        app.config.from_pyfile('dev.py')
    else:
        logger.debug('Using %s as configuration', config)
        app.config.from_pyfile(config)
    db.init_app(app)

    # Ensure FOREIGN KEY for sqlite3
    if 'sqlite' in app.config['SQLALCHEMY_DATABASE_URI']:
        def _fk_pragma_on_connect(dbapi_con, con_record):  # noqa
            dbapi_con.execute('pragma foreign_keys=ON')

        with app.app_context():
            from sqlalchemy import event
            event.listen(db.engine, 'connect', _fk_pragma_on_connect)

    return app
Source:
https://gist.github.com/asyd/a7aadcf07a66035ac15d284aef10d458
