When using SQLAlchemy (version 1.4.44) to create, drop or otherwise modify tables, the updates don't appear to be committing. Attempting to solve this, I'm following the docs and using the commit() function. Here's a simple example:
from sqlalchemy import create_engine, text

engine = create_engine("postgresql://user:password@connection_string:5432/database_name")

with engine.connect() as connection:
    sql = "create table test as (select count(1) as result from userquery);"
    result = connection.execute(text(sql))
    connection.commit()
This produces the error:
AttributeError: 'Connection' object has no attribute 'commit'
What am I missing?
The comment on the question is correct: you are looking at the 2.0 docs. All you need to do is set future=True when calling create_engine() to use the "commit as you go" functionality provided in 2.0.
See the migration notes, migration-core-connection-transaction:
When using 2.0 style with the create_engine.future flag, “commit as
you go” style may also be used, as the Connection features autobegin
behavior, which takes place when a statement is first invoked in the
absence of an explicit call to Connection.begin():
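For example, a minimal sketch of the question's code with the future flag set (reusing the placeholder connection string from the question):

from sqlalchemy import create_engine, text

# future=True opts a 1.4 engine into the 2.0-style API,
# which is what gives Connection a commit() method
engine = create_engine(
    "postgresql://user:password@connection_string:5432/database_name",
    future=True,
)

with engine.connect() as connection:
    connection.execute(text("create table test as (select count(1) as result from userquery);"))
    connection.commit()  # "commit as you go" now works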
The documentation (version 1.4) is actually misleading. We can see the Connection.commit() method being used in the documentation that describes inserting rows, but the method doesn't exist.
I managed to find a clear explanation of how to use transactions in the transactions section:
The block managed by each .begin() method has the behavior such that the transaction is committed when the block completes. If an exception is raised, the transaction is instead rolled back, and the exception propagated outwards.
Example from the documentation below. There is no call to a commit() method.
# runs a transaction
with engine.begin() as connection:
    r1 = connection.execute(table1.select())
    connection.execute(table1.insert(), {"col1": 7, "col2": "this is some data"})
I'm using Pony ORM version 0.7 with a Sqlite3 database on disk, and running into this issue: I am performing a select, then an update, then a select, then another update, and getting an error message of
pony.orm.core.UnrepeatableReadError: Value of Task.order_id for
Task[23654] was updated outside of current transaction (was: 1, now: 2)
I've reduced it to the minimal set of commands that reproduces the problem (i.e. removing any of them makes the problem go away):
@db_session
def test_method():
    tasks = list(map(Task.to_dict, Task.select()))
    db.execute("UPDATE Task SET order_id=order_id*2")
    task_to_move = select(task for task in Task if task.order_id == 2).first()
    task_to_move.order_id = 1

test_method()
For completeness's sake, here is the definition of Task:
class Task(db.Entity):
    text = Required(unicode)
    heading = Required(int)
    create_timestamp = Required(datetime)
    done_timestamp = Optional(datetime)
    order_id = Required(int)
Also, if I remove the constraint that task.order_id == 2 from my select, the problem no longer occurs, so I assume the problem has something to do with querying based on a field that has been changed since the transaction has started, but I don't know why the error message is telling me that it was changed by a different transaction (unless maybe db.execute is executing in a separate transaction because it is raw SQL?)
I've already looked at this similar question, but the problem was different (Pony ORM reports record "was updated outside of current transaction" while there is not other transaction) and at this documentation (https://docs.ponyorm.com/transactions.html) but neither solved my problem.
Any ideas what might be going on here?
Pony uses optimistic concurrency control by default. For each attribute Pony remembers its current value (potentially modified by application code) as well as the original value which was read from the database. During UPDATE Pony checks that the value of the column in the database is still the same. If the value has changed, Pony assumes that some concurrent transaction did it and throws an exception in order to avoid the "lost update" situation.
If you execute a raw SQL query, Pony does not know what exactly was modified in the database. So when Pony sees that the value has changed, it mistakenly thinks that it was changed by another transaction.
In order to avoid the problem you can mark the order_id attribute as volatile. Then Pony will assume that the value of the attribute can change at any time (by a trigger or a raw SQL update) and will exclude that attribute from the optimistic checks:
class Task(db.Entity):
    text = Required(unicode)
    heading = Required(int)
    create_timestamp = Required(datetime)
    done_timestamp = Optional(datetime)
    order_id = Required(int, volatile=True)
Note that Pony will cache the value of a volatile attribute and will not re-read it from the database until the object is saved, so in some situations you can see a stale value in Python.
Update:
Starting from release 0.7.4 you can also pass the optimistic=False option to db_session to turn off optimistic checks for a specific transaction that uses raw SQL queries:
with db_session(optimistic=False):
    ...

or

@db_session(optimistic=False)
def some_function():
    ...
It is now also possible to specify the optimistic=False option on an attribute instead of volatile=True. Then Pony will not perform optimistic checks for that attribute, but will still treat it as non-volatile.
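For example, a minimal sketch of that attribute-level option on the same Task entity:

class Task(db.Entity):
    text = Required(unicode)
    heading = Required(int)
    create_timestamp = Required(datetime)
    done_timestamp = Optional(datetime)
    # no optimistic check on this attribute, but it is still non-volatile
    order_id = Required(int, optimistic=False)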
I have some test code that creates tables and drops them for each test case. However, all tests fail after the first one because I am relying on code that uses sa.Table() for something first and only creates the new tables if calling that method errors with NoSuchTableError. However, this error is not thrown after the tables are first dropped, even though the engine correctly reports that they do not exist, and so they are not created again.
I've reproduced this behavior as follows:
>>> import sqlalchemy as sa
>>> from sqlalchemy import *
>>> m = MetaData()
>>> # (eng is an Engine created earlier with create_engine(); details omitted)
The tables don't exist, so calling sa.Table here errors as expected:
>>> t = sa.Table('test', m, autoload_with=eng)
...
NoSuchTableError: test
But if I create the table and then drop it, sa.Table does not error as expected:
>>> t = sa.Table('test', m, Column('t', String(2)))
>>> m.create_all(eng)
>>> eng.has_table('test')
True
>>> t = sa.Table('test', m, autoload_with=eng)
>>> eng.execute('drop table "test" cascade')
<sqlalchemy.engine.result.ResultProxy at 0x106a06150>
>>> eng.has_table('test')
False
>>> t = sa.Table('test', m, autoload_with=eng)
No error is thrown after that last call to sa.Table, even though the table does not exist.
What do I need to do to get sa.Table() to correctly error after the tables have been dropped? The engine object I am passing to it knows that the tables do not exist, but is there something else I need to do, like refreshing/reconnecting somehow, so that I get the expected behavior?
It turns out I need to refresh the MetaData object (create a new one) each time I call sa.Table() if I expect the schema to have changed. This solves the problem.
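For illustration, a minimal sketch of that fix (continuing the session above, where eng is the existing engine):

import sqlalchemy as sa

# reflect against a *fresh* MetaData(); the old one still holds
# the cached 'test' Table from before the drop
m = sa.MetaData()
try:
    t = sa.Table('test', m, autoload_with=eng)
except sa.exc.NoSuchTableError:
    # raised again as expected, so the table-creation code runs
    pass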
I am on the following cx_Oracle version
>>> cx_Oracle.version
'5.0.3'
I am getting this exception when executing a query:
"expecting None or a string"
The query is being executed this way:
cursor.execute("SELECT * FROM APP_STORE WHERE STORE=:STORE %s" % (order_clause), {'STORE': STORE})
What could be the reason? Similar queries executed earlier in the flow work fine but this one does not.
Appreciate some guidance on this.
You are building your cursor incorrectly. Since you pass a dictionary, you must first prepare your query:
cursor.prepare("SELECT * FROM APP_STORE WHERE STORE=:STORE %s" %(order_clause))
Then you execute it and pass None as the first parameter.
results = cursor.execute(None, {'STORE':STORE})
If you wish to change the STORE parameter and run the query again, all you need to do now is modify the dictionary and rerun the execute statement. Preparing it again is not needed.
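Putting it together, a rough sketch (order_clause and STORE as in the question; 'another_store' is just a made-up second value):

cursor.prepare("SELECT * FROM APP_STORE WHERE STORE=:STORE %s" % (order_clause))

# passing None as the statement reuses the prepared SQL
results = cursor.execute(None, {'STORE': STORE})

# to run it again for a different store, only the bind dictionary changes
results = cursor.execute(None, {'STORE': 'another_store'})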
More information can be found in the Oracle+Python "Querying best practices" documentation. The information I provided above is in the "Bind Variable Patterns" section (no direct link seems to be available).
I have a SQLAlchemy query object and want to get the text of the compiled SQL statement, with all its parameters bound (e.g. no %s or other variables waiting to be bound by the statement compiler or MySQLdb dialect engine, etc).
Calling str() on the query reveals something like this:
SELECT id WHERE date_added <= %s AND date_added >= %s ORDER BY count DESC
I've tried looking in query._params but it's an empty dict. I wrote my own compiler using this example of the sqlalchemy.ext.compiler.compiles decorator but even the statement there still has %s where I want data.
I can't quite figure out when my parameters get mixed in to create the query; when examining the query object they're always an empty dictionary (though the query executes fine and the engine prints it out when you turn echo logging on).
I'm starting to get the message that SQLAlchemy doesn't want me to know the underlying query, as it breaks the general nature of the expression API's interface to all the different DB-APIs. I don't mind if the query gets executed before I find out what it was; I just want to know!
This blogpost by Nicolas Cadou provides an updated answer.
Quoting from the blog post, this is suggested and worked for me:
from sqlalchemy.dialects import postgresql
print str(q.statement.compile(dialect=postgresql.dialect()))
Where q is defined as:
q = DBSession.query(model.Name).distinct(model.Name.value) \
    .order_by(model.Name.value)
Or just any kind of session.query().
The documentation uses literal_binds to print a query q including parameters:
print(q.statement.compile(compile_kwargs={"literal_binds": True}))
The above approach has the caveat that it is only supported for basic types, such as ints and strings; furthermore, if a bindparam() without a pre-set value is used directly, it won't be able to stringify that either.
The documentation also issues this warning:
Never use this technique with string content received from untrusted
input, such as from web forms or other user-input applications.
SQLAlchemy’s facilities to coerce Python values into direct SQL string
values are not secure against untrusted input and do not validate the
type of data being passed. Always use bound parameters when
programmatically invoking non-DDL SQL statements against a relational
database.
This should work with Sqlalchemy >= 0.6
from sqlalchemy.sql import compiler
from psycopg2.extensions import adapt as sqlescape
# or use the appropriate escape function from your db driver
# (note: Python 2 era code -- iteritems() and unicode)

def compile_query(query):
    dialect = query.session.bind.dialect
    statement = query.statement
    comp = compiler.SQLCompiler(dialect, statement)
    comp.compile()
    enc = dialect.encoding
    params = {}
    for k, v in comp.params.iteritems():
        if isinstance(v, unicode):
            v = v.encode(enc)
        params[k] = sqlescape(v)
    return (comp.string.encode(enc) % params).decode(enc)
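Usage would then be something like this (a sketch; session and MyModel are hypothetical stand-ins for your own session and mapped class):

# a hypothetical ORM query, purely for illustration
query = session.query(MyModel).filter(MyModel.id == 1)
print(compile_query(query))  # prints the SQL with the values interpolated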
Thing is, sqlalchemy never mixes the data with your query. The query and the data are passed separately to your underlying database driver - the interpolation of data happens in your database.
Sqlalchemy passes the query as you've seen in str(myquery) to the database, and the values will go in a separate tuple.
You could use some approach where you interpolate the data with the query yourself (as albertov suggested below), but that's not the same thing that sqlalchemy is executing.
For the MySQLdb backend I modified albertov's awesome answer (thanks so much!) a bit. I'm sure they could be merged to check if comp.positional was True but that's slightly beyond the scope of this question.
def compile_query(query):
    from sqlalchemy.sql import compiler
    from MySQLdb.converters import conversions, escape

    dialect = query.session.bind.dialect
    statement = query.statement
    comp = compiler.SQLCompiler(dialect, statement)
    comp.compile()
    enc = dialect.encoding
    params = []
    for k in comp.positiontup:
        v = comp.params[k]
        if isinstance(v, unicode):
            v = v.encode(enc)
        params.append(escape(v, conversions))
    return (comp.string.encode(enc) % tuple(params)).decode(enc)
First let me preface by saying that I assume you're doing this mainly for debugging purposes -- I wouldn't recommend trying to modify the statement outside of the SQLAlchemy fluent API.
Unfortunately there doesn't seem to be a simple way to show the compiled statement with the query parameters included. SQLAlchemy doesn't actually put the parameters into the statement -- they're passed into the database engine as a dictionary. This lets the database-specific library handle things like escaping special characters to avoid SQL injection.
But you can do this in a two-step process reasonably easily. To get the statement, you can do as you've already shown, and just print the query:
>>> print(query)
SELECT field_1, field_2 FROM table WHERE id=%s;
You can get one step closer with query.statement, to see the parameter names. Note :id_1 below vs %s above -- not really a problem in this very simple example, but could be key in a more complicated statement.
>>> print(query.statement)
>>> print(query.statement.compile()) # seems to be equivalent, you can also
# pass in a dialect if you want
SELECT field_1, field_2 FROM table WHERE id=:id_1;
Then, you can get the actual values of the parameters by getting the params property of the compiled statement:
>>> print(query.statement.compile().params)
{u'id_1': 1}
This worked for a MySQL backend at least; I would expect it's also general enough for PostgreSQL without needing to use psycopg2.
For a PostgreSQL backend using psycopg2, you can listen for the do_execute event, then use the cursor, statement and type-coerced parameters along with Cursor.mogrify() to inline the parameters. You can return True to prevent actual execution of the query.
import sqlalchemy

class QueryDebugger(object):
    def __init__(self, engine, query):
        with engine.connect() as connection:
            try:
                sqlalchemy.event.listen(engine, "do_execute", self.receive_do_execute)
                connection.execute(query)
            finally:
                sqlalchemy.event.remove(engine, "do_execute", self.receive_do_execute)

    def receive_do_execute(self, cursor, statement, parameters, context):
        self.statement = statement
        self.parameters = parameters
        self.query = cursor.mogrify(statement, parameters)
        # Don't actually execute
        return True
Sample usage:
>>> engine = sqlalchemy.create_engine("postgresql://postgres@localhost/test")
>>> metadata = sqlalchemy.MetaData()
>>> users = sqlalchemy.Table('users', metadata, sqlalchemy.Column("_id", sqlalchemy.String, primary_key=True), sqlalchemy.Column("document", sqlalchemy.dialects.postgresql.JSONB))
>>> s = sqlalchemy.select([users.c.document.label("foobar")]).where(users.c.document.contains({"profile": {"iid": "something"}}))
>>> q = QueryDebugger(engine, s)
>>> q.query
'SELECT users.document AS foobar \nFROM users \nWHERE users.document #> \'{"profile": {"iid": "something"}}\''
>>> q.statement
'SELECT users.document AS foobar \nFROM users \nWHERE users.document #> %(document_1)s'
>>> q.parameters
{'document_1': '{"profile": {"iid": "something"}}'}
The following solution uses the SQLAlchemy Expression Language and works with SQLAlchemy 1.1. This solution does not mix the parameters with the query (as requested by the original author), but provides a way of using SQLAlchemy models to generate SQL query strings and parameter dictionaries for different SQL dialects. The example is based on the tutorial http://docs.sqlalchemy.org/en/rel_1_0/core/tutorial.html
Given the class,
from sqlalchemy import Column, Integer, String
from sqlalchemy.ext.declarative import declarative_base

Base = declarative_base()

class foo(Base):
    __tablename__ = 'foo'
    id = Column(Integer(), primary_key=True)
    name = Column(String(80), unique=True)
    value = Column(Integer())
we can produce a query statement using the select function.
from sqlalchemy.sql import select
statement = select([foo.name, foo.value]).where(foo.value > 0)
Next, we can compile the statement into a query object.
query = statement.compile()
By default, the statement is compiled using a basic 'named' implementation that is compatible with SQL databases such as SQLite and Oracle. If you need to specify a dialect such as PostgreSQL, you can do
from sqlalchemy.dialects import postgresql
query = statement.compile(dialect=postgresql.dialect())
Or if you want to explicitly specify the dialect as SQLite, you can change the paramstyle from 'qmark' to 'named'.
from sqlalchemy.dialects import sqlite
query = statement.compile(dialect=sqlite.dialect(paramstyle="named"))
From the query object, we can extract the query string and query parameters
query_str = str(query)
query_params = query.params
and finally execute the query.
conn.execute(query_str, query_params)
You can use events from the ConnectionEvents family: after_cursor_execute or before_cursor_execute.
In the SQLAlchemy UsageRecipes by @zzzeek you can find this example:
Profiling
...
@event.listens_for(Engine, "before_cursor_execute")
def before_cursor_execute(conn, cursor, statement,
                          parameters, context, executemany):
    conn.info.setdefault('query_start_time', []).append(time.time())
    logger.debug("Start Query: %s" % statement % parameters)
...
Here you can get access to your statement.
UPDATE: Came up with yet another case where the previous solution here wasn't properly producing the correct SQL statement. After a bit of diving around in SQLAlchemy, it becomes apparent that you not only need to compile for a particular dialect, you also need to take the compiled query and initialize it for the correct DBAPI connection context. Otherwise, things like type bind processors don't get executed and values like JSON.NULL don't get properly translated.
Note, this makes this solution very particular to Flask + Flask-SQLAlchemy + psycopg2 + PostgreSQL. You may need to translate this solution to your environment by changing the dialect and how you reference your connection. However, I'm pretty confident this produces the exact SQL for all data types.
The result below is a simple method to drop in and occasionally but reliably grab the exact, compiled SQL that would be sent to my PostgreSQL backend by just interrogating the query itself:
import sqlalchemy.dialects.postgresql.psycopg2
from flask import current_app

def query_to_string(query):
    dialect = sqlalchemy.dialects.postgresql.psycopg2.dialect()
    compiled_query = query.statement.compile(dialect=dialect)
    sqlalchemy_connection = current_app.db.session.connection()
    context = dialect.execution_ctx_cls._init_compiled(
        dialect,
        sqlalchemy_connection,
        sqlalchemy_connection.connection,
        compiled_query,
        None
    )
    mogrified_query = sqlalchemy_connection.connection.cursor().mogrify(
        context.statement,
        context.parameters[0]
    )
    return mogrified_query.decode()
query = [ .... some ORM query .... ]
print(f"compiled SQL = {query_to_string(query)}")
I've created this little function that I import when I want to print the full query, considering I'm in the middle of a test where the dialect is already bound:
import re

def print_query(query):
    regex = re.compile(r":(?P<name>\w+)")
    params = query.statement.compile().params
    sql = regex.sub(r"'{\g<name>}'", str(query.statement)).format(**params)
    print("\nPrinting SQLAlchemy query:\n")
    print(sql)
    return sql
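Usage in a test might look like this (a sketch; session and MyModel are hypothetical stand-ins for your own session and mapped class):

# prints the query and also returns the string for further inspection
sql = print_query(session.query(MyModel).filter(MyModel.name == "foo"))
assert "WHERE" in sql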
I think .statement would possibly do the trick:
http://docs.sqlalchemy.org/en/latest/orm/query.html?highlight=query
>>> local_session.query(sqlalchemy_declarative.SomeTable.text).statement
<sqlalchemy.sql.annotation.AnnotatedSelect at 0x6c75a20; AnnotatedSelect object>
>>> x=local_session.query(sqlalchemy_declarative.SomeTable.text).statement
>>> print(x)
SELECT sometable.text
FROM sometable
If you are using PyMySQL with SQLAlchemy, there is one trick you can do.
I was in a hurry and had lost a lot of time, so I changed the driver to print the current statement with the parameters filled in.
SQLAlchemy intentionally does not support full stringification of literal values.
But PyMySQL has a 'mogrify' method which does exactly that. However, SQLAlchemy has no hook to call it when the ORM performs inserts/updates (when it controls the cursor), such as db.add or commit/flush (for updates).
So, first find where the driver is installed:
pip show pymysql
In that folder, find and edit the file cursors.py.
In the method:
def execute(self, query, args=None):
Under the line:
query = self.mogrify(query, args)
Just Add:
print(query)
It will work like a charm. Debug, resolve the issue, and then remove the print.
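Roughly, the patched region of PyMySQL's cursors.py then looks like this (an abbreviated sketch; the exact surrounding code depends on your PyMySQL version):

# pymysql/cursors.py -- inside class Cursor (abbreviated)
def execute(self, query, args=None):
    # ... existing code ...
    query = self.mogrify(query, args)
    print(query)  # temporary debug line: the fully interpolated SQL
    # ... existing code continues and sends the query to the server ...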