Simple sqlalchemy filter not working - python

Select all works like this:
q = session.query(products)
Now I want to add a WHERE filter, so I am trying:
q = session.query(products).filter_by(stock_count=0)
I get an error saying 'nonetype' object has no attribute 'class_manager'.
Not sure what the issue is?
Update
The column seems to be mapped fine, as when I do:
q = session.query(products)
for p in q:
    print p.stock_count
It outputs the value.
But if I do:
p.stock_count = 6
I get an error also, saying: "can't set attribute"
So I can query for it, but adding the column as a filter, OR setting the value causes an error.
Strange no?

You may be trying to use the ORM against a bare Table object.
This code works on 0.5 (the version in base CentOS 6.2):
#!/usr/bin/env python
from sqlalchemy import create_engine
from sqlalchemy.orm import sessionmaker
from sqlalchemy.ext.declarative import declarative_base

db = create_engine(localopts.connect_string)
Session = sessionmaker(bind=db)
Base = declarative_base()
Base.metadata.reflect(bind=db)

class some_table(Base):
    __table__ = Base.metadata.tables['table_name']

session = Session()
for row in session.query(some_table.username).filter_by(username="some-user"):
    print row

Have you tried adding a .all() after your filter_by?
q = session.query(products).filter_by(stock_count=0).all()

Have you tried literal SQL? I've had the same error message, but when I used literal SQL it was gone.
So for your example it would be something like:
q = session.query(products).filter('stock_count = 0')
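Note that in newer SQLAlchemy versions, a raw SQL string like this has to be wrapped in text() before being passed to filter(); a minimal sketch of the same filter:

# Sketch for newer SQLAlchemy versions, where bare strings in filter() are rejected.
from sqlalchemy import text

q = session.query(products).filter(text("stock_count = 0"))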

filter_by() works with a keyword dictionary; you actually want to use filter(). Additionally, you can't just use stock_count (probably — you didn't show your table definition code); you have to use products.stock_count or possibly products.__class__.stock_count. So try:
q = session.query(products).filter(products.stock_count == 0)
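For reference, a minimal self-contained sketch (table and column names assumed from the question) of a declarative mapping under which both filter_by() and attribute assignment work:

# Hypothetical minimal mapping; the table layout is assumed from the question.
from sqlalchemy import Column, Integer, create_engine
from sqlalchemy.orm import sessionmaker
from sqlalchemy.ext.declarative import declarative_base

Base = declarative_base()

class products(Base):
    __tablename__ = 'products'
    id = Column(Integer, primary_key=True)
    stock_count = Column(Integer)

engine = create_engine('sqlite://')
Base.metadata.create_all(engine)
session = sessionmaker(bind=engine)()

q = session.query(products).filter_by(stock_count=0)  # no error
p = products(stock_count=0)
p.stock_count = 6  # settable, because the attribute is instrumented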

Related

In SQLAlchemy, is there a way to eager-load multiple aliased selectables of the same class using one query?

I have an SQLAlchemy mapped class MyClass, and two aliases for it. I can eager-load a relationship MyClass.relationship on each alias separately using selectinload() like so:
alias_1, alias_2 = aliased(MyClass), aliased(MyClass)
q = session.query(alias_1, alias_2).options(
    selectinload(alias_1.relationship),
    selectinload(alias_2.relationship))
However, this results in 2 separate SQL queries on MyClass.relationship (in addition to the main query on MyClass, but this is irrelevant to the question). Since these 2 queries on MyClass.relationship are to the same table, I think that it should be possible to merge the primary keys generated within the IN clause in these queries, and just run 1 query on MyClass.relationship.
My best guess for how to do this is:
alias_1, alias_2 = aliased(MyClass), aliased(MyClass)
q = session.query(alias_1, alias_2).options(
    selectinload(MyClass.relationship))
But it clearly didn't work:
sqlalchemy.exc.ArgumentError: Mapped attribute "MyClass.relationship" does not apply to any of the root entities in this query, e.g. aliased(MyClass), aliased(MyClass). Please specify the full path from one of the root entities to the target attribute.
Is there a way to do this in SQLAlchemy?
So, this is exactly the same issue we had. The docs explain how to do it.
You need to add selectin_polymorphic. For anyone else: if you are using with_polymorphic in your select, remove it.
from sqlalchemy.orm import selectin_polymorphic
query = session.query(MyClass).options(
    selectin_polymorphic(MyClass, [alias_1, alias_2]),
    selectinload(MyClass.relationship)
)

Sqlalchemy mass update [duplicate]

I'm starting a new application and looking at using an ORM -- in particular, SQLAlchemy.
Say I've got a column 'foo' in my database and I want to increment it. In straight sqlite, this is easy:
db = sqlite3.connect('mydata.sqlitedb')
cur = db.cursor()
cur.execute('update stuff set foo = foo + 1')
I figured out the SQLAlchemy SQL-builder equivalent:
engine = sqlalchemy.create_engine('sqlite:///mydata.sqlitedb')
md = sqlalchemy.MetaData(engine)
table = sqlalchemy.Table('stuff', md, autoload=True)
upd = table.update(values={table.c.foo:table.c.foo+1})
engine.execute(upd)
This is slightly slower, but there's not much in it.
Here's my best guess for a SQLAlchemy ORM approach:
# snip definition of Stuff class made using declarative_base
# snip creation of session object
for c in session.query(Stuff):
    c.foo = c.foo + 1
session.flush()
session.commit()
This does the right thing, but it takes just under fifty times as long as the other two approaches. I presume that's because it has to bring all the data into memory before it can work with it.
Is there any way to generate the efficient SQL using SQLAlchemy's ORM? Or using any other python ORM? Or should I just go back to writing the SQL by hand?
SQLAlchemy's ORM is meant to be used together with the SQL layer, not hide it. But you do have to keep one or two things in mind when using the ORM and plain SQL in the same transaction. Basically, from one side, ORM data modifications will only hit the database when you flush the changes from your session. From the other side, SQL data manipulation statements don't affect the objects that are in your session.
So if you say
for c in session.query(Stuff).all():
    c.foo = c.foo + 1
session.commit()
it will do what it says: fetch all the objects from the database, modify them all, and then, when it's time to flush the changes to the database, update the rows one by one.
Instead you should do this:
session.execute(update(stuff_table, values={stuff_table.c.foo: stuff_table.c.foo + 1}))
session.commit()
This will execute as one query as you would expect, and because at least the default session configuration expires all data in the session on commit, you don't have any stale data issues.
In the almost-released 0.5 series you could also use this method for updating:
session.query(Stuff).update({Stuff.foo: Stuff.foo + 1})
session.commit()
That will basically run the same SQL statement as the previous snippet, but also select the changed rows and expire any stale data in the session. If you know you aren't using any session data after the update you could also add synchronize_session=False to the update statement and get rid of that select.
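A sketch of that variant (synchronize_session is a keyword argument to Query.update()):

session.query(Stuff).update({Stuff.foo: Stuff.foo + 1}, synchronize_session=False)
session.commit()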
session.query(Clients).filter(Clients.id.in_(client_id_list)).update({'status': status}, synchronize_session=False)
session.commit()
Try this =)
There are several ways to UPDATE using sqlalchemy:

1) for c in session.query(Stuff).all():
       c.foo += 1
   session.commit()

2) session.query(Stuff).update({"foo": Stuff.foo + 1})
   session.commit()

3) conn = engine.connect()
   table = Stuff.__table__
   stmt = table.update().values({'foo': Stuff.foo + 1})
   conn.execute(stmt)
   conn.commit()
Here's an example of how to solve the same problem without having to map the fields manually:
from sqlalchemy import Column, ForeignKey, Integer, String, Date, DateTime, text, create_engine
from sqlalchemy.exc import IntegrityError
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy.orm import sessionmaker
from sqlalchemy.orm.attributes import InstrumentedAttribute

engine = create_engine('postgres://postgres@localhost:5432/database')
session = sessionmaker()
session.configure(bind=engine)
Base = declarative_base()

class Media(Base):
    __tablename__ = 'media'
    id = Column(Integer, primary_key=True)
    title = Column(String, nullable=False)
    slug = Column(String, nullable=False)
    type = Column(String, nullable=False)

    def update(self):
        s = session()
        mapped_values = {}
        for item in Media.__dict__.iteritems():
            field_name = item[0]
            field_type = item[1]
            is_column = isinstance(field_type, InstrumentedAttribute)
            if is_column:
                mapped_values[field_name] = getattr(self, field_name)
        s.query(Media).filter(Media.id == self.id).update(mapped_values)
        s.commit()
So to update a Media instance, you can do something like this:
media = Media(id=123, title="Titular Line", slug="titular-line", type="movie")
media.update()
If it is because of the overhead in terms of creating objects, then it probably can't be sped up at all with SA.
If it is because it is loading up related objects, then you might be able to do something with lazy loading. Are there lots of objects being created due to references? (IE, getting a Company object also gets all of the related People objects).
Without testing, I'd try:
for c in session.query(Stuff).all():
    c.foo = c.foo + 1
session.commit()
(IIRC, commit() works without flush()).
I've found that at times doing a large query and then iterating in python can be up to 2 orders of magnitude faster than lots of queries. I assume that iterating over the query object is less efficient than iterating over a list generated by the all() method of the query object.
[Please note comment below - this did not speed things up at all].

Python SQLAlchemy Query using labeled OVER clause with ORM

This other question says how to use the OVER clause on sqlalchemy:
Using the OVER window function in SQLAlchemy
But how to do that using ORM? I have something like:
q = self.session.query(self.entity, func.count().over().label('count_over'))
This fails when I call q.all() with the following message:
sqlalchemy.exc.InvalidRequestError:
Ambiguous column name 'count(*) OVER ()' in result set! try 'use_labels' option on select statement
How can I solve this?
You have the over syntax almost correct, it should be something like this:
import sqlalchemy
q = self.session.query(
    self.entity,
    sqlalchemy.over(func.count()).label('count_over'),
)
Example from the docs:
from sqlalchemy import over
over(func.row_number(), order_by='x')
SQLAlchemy Query object has with_entities method that can be used to customize the list of columns the query returns:
Model.query.with_entities(Model.foo, func.count().over().label('count_over'))
Resulting in following SQL:
SELECT models.foo AS models_foo, count(*) OVER () AS count_over FROM models
You got the functions right. The way to use them to produce the desired result would be as follows:
from sqlalchemy import func
q = self.session.query(self.entity, func.count().over().label('count_over'))
This will produce a COUNT(*) statement since no Entity.field was specified. I use the following format:
from myschema import MyEntity
from sqlalchemy import func
q = self.session.query(MyEntity, func.count(MyEntity.id).over().label('count'))
That is if there is an id field, of course. But you get the mechanics :-)

How do I get a raw, compiled SQL query from a SQLAlchemy expression?

I have a SQLAlchemy query object and want to get the text of the compiled SQL statement, with all its parameters bound (e.g. no %s or other variables waiting to be bound by the statement compiler or MySQLdb dialect engine, etc).
Calling str() on the query reveals something like this:
SELECT id WHERE date_added <= %s AND date_added >= %s ORDER BY count DESC
I've tried looking in query._params but it's an empty dict. I wrote my own compiler using this example of the sqlalchemy.ext.compiler.compiles decorator but even the statement there still has %s where I want data.
I can't quite figure out when my parameters get mixed in to create the query; when examining the query object they're always an empty dictionary (though the query executes fine and the engine prints it out when you turn echo logging on).
I'm starting to get the message that SQLAlchemy doesn't want me to know the underlying query, as it breaks the general nature of the expression API's interface to all the different DB-APIs. I don't mind if the query gets executed before I find out what it was; I just want to know!
This blogpost by Nicolas Cadou provides an updated answer.
Quoting from the blog post, this is suggested and worked for me:
from sqlalchemy.dialects import postgresql
print str(q.statement.compile(dialect=postgresql.dialect()))
Where q is defined as:
q = DBSession.query(model.Name).distinct(model.Name.value) \
    .order_by(model.Name.value)
Or just any kind of session.query().
The documentation uses literal_binds to print a query q including parameters:
print(q.statement.compile(compile_kwargs={"literal_binds": True}))
The above approach has the caveat that it is only supported for basic types, such as ints and strings; furthermore, if a bindparam() without a pre-set value is used directly, it won’t be able to stringify that either.
The documentation also issues this warning:
Never use this technique with string content received from untrusted
input, such as from web forms or other user-input applications.
SQLAlchemy’s facilities to coerce Python values into direct SQL string
values are not secure against untrusted input and do not validate the
type of data being passed. Always use bound parameters when
programmatically invoking non-DDL SQL statements against a relational
database.
This should work with Sqlalchemy >= 0.6:

from sqlalchemy.sql import compiler
from psycopg2.extensions import adapt as sqlescape
# or use the appropriate escape function from your db driver

def compile_query(query):
    dialect = query.session.bind.dialect
    statement = query.statement
    comp = compiler.SQLCompiler(dialect, statement)
    comp.compile()
    enc = dialect.encoding
    params = {}
    for k, v in comp.params.iteritems():
        if isinstance(v, unicode):
            v = v.encode(enc)
        params[k] = sqlescape(v)
    return (comp.string.encode(enc) % params).decode(enc)
Thing is, sqlalchemy never mixes the data with your query. The query and the data are passed separately to your underlying database driver - the interpolation of data happens in your database.
Sqlalchemy passes the query as you've seen in str(myquery) to the database, and the values will go in a separate tuple.
You could use some approach where you interpolate the data with the query yourself (as albertov suggested below), but that's not the same thing that sqlalchemy is executing.
For the MySQLdb backend I modified albertov's awesome answer (thanks so much!) a bit. I'm sure they could be merged to check if comp.positional was True but that's slightly beyond the scope of this question.
def compile_query(query):
    from sqlalchemy.sql import compiler
    from MySQLdb.converters import conversions, escape
    dialect = query.session.bind.dialect
    statement = query.statement
    comp = compiler.SQLCompiler(dialect, statement)
    comp.compile()
    enc = dialect.encoding
    params = []
    for k in comp.positiontup:
        v = comp.params[k]
        if isinstance(v, unicode):
            v = v.encode(enc)
        params.append(escape(v, conversions))
    return (comp.string.encode(enc) % tuple(params)).decode(enc)
First let me preface by saying that I assume you're doing this mainly for debugging purposes -- I wouldn't recommend trying to modify the statement outside of the SQLAlchemy fluent API.
Unfortunately there doesn't seem to be a simple way to show the compiled statement with the query parameters included. SQLAlchemy doesn't actually put the parameters into the statement -- they're passed into the database engine as a dictionary. This lets the database-specific library handle things like escaping special characters to avoid SQL injection.
But you can do this in a two-step process reasonably easily. To get the statement, you can do as you've already shown, and just print the query:
>>> print(query)
SELECT field_1, field_2 FROM table WHERE id=%s;
You can get one step closer with query.statement, to see the parameter names. Note :id_1 below vs %s above -- not really a problem in this very simple example, but could be key in a more complicated statement.
>>> print(query.statement)
>>> print(query.statement.compile()) # seems to be equivalent, you can also
# pass in a dialect if you want
SELECT field_1, field_2 FROM table WHERE id=:id_1;
Then, you can get the actual values of the parameters by getting the params property of the compiled statement:
>>> print(query.statement.compile().params)
{u'id_1': 1}
This worked for a MySQL backend at least; I would expect it's also general enough for PostgreSQL without needing to use psycopg2.
For postgresql backend using psycopg2, you can listen for the do_execute event, then use the cursor, statement and type coerced parameters along with Cursor.mogrify() to inline the parameters. You can return True to prevent actual execution of the query.
import sqlalchemy

class QueryDebugger(object):
    def __init__(self, engine, query):
        with engine.connect() as connection:
            try:
                sqlalchemy.event.listen(engine, "do_execute", self.receive_do_execute)
                connection.execute(query)
            finally:
                sqlalchemy.event.remove(engine, "do_execute", self.receive_do_execute)

    def receive_do_execute(self, cursor, statement, parameters, context):
        self.statement = statement
        self.parameters = parameters
        self.query = cursor.mogrify(statement, parameters)
        # Don't actually execute
        return True
Sample usage:
>>> engine = sqlalchemy.create_engine("postgresql://postgres@localhost/test")
>>> metadata = sqlalchemy.MetaData()
>>> users = sqlalchemy.Table('users', metadata, sqlalchemy.Column("_id", sqlalchemy.String, primary_key=True), sqlalchemy.Column("document", sqlalchemy.dialects.postgresql.JSONB))
>>> s = sqlalchemy.select([users.c.document.label("foobar")]).where(users.c.document.contains({"profile": {"iid": "something"}}))
>>> q = QueryDebugger(engine, s)
>>> q.query
'SELECT users.document AS foobar \nFROM users \nWHERE users.document #> \'{"profile": {"iid": "something"}}\''
>>> q.statement
'SELECT users.document AS foobar \nFROM users \nWHERE users.document #> %(document_1)s'
>>> q.parameters
{'document_1': '{"profile": {"iid": "something"}}'}
The following solution uses the SQLAlchemy Expression Language and works with SQLAlchemy 1.1. This solution does not mix the parameters with the query (as requested by the original author), but provides a way of using SQLAlchemy models to generate SQL query strings and parameter dictionaries for different SQL dialects. The example is based on the tutorial http://docs.sqlalchemy.org/en/rel_1_0/core/tutorial.html
Given the class,
from sqlalchemy import Column, Integer, String
from sqlalchemy.ext.declarative import declarative_base
Base = declarative_base()
class foo(Base):
    __tablename__ = 'foo'
    id = Column(Integer(), primary_key=True)
    name = Column(String(80), unique=True)
    value = Column(Integer())
we can produce a query statement using the select function.
from sqlalchemy.sql import select
statement = select([foo.name, foo.value]).where(foo.value > 0)
Next, we can compile the statement into a query object.
query = statement.compile()
By default, the statement is compiled using a basic 'named' implementation that is compatible with SQL databases such as SQLite and Oracle. If you need to specify a dialect such as PostgreSQL, you can do
from sqlalchemy.dialects import postgresql
query = statement.compile(dialect=postgresql.dialect())
Or if you want to explicitly specify the dialect as SQLite, you can change the paramstyle from 'qmark' to 'named'.
from sqlalchemy.dialects import sqlite
query = statement.compile(dialect=sqlite.dialect(paramstyle="named"))
From the query object, we can extract the query string and query parameters
query_str = str(query)
query_params = query.params
and finally execute the query.
conn.execute(query_str, query_params)
You can use events from ConnectionEvents family: after_cursor_execute or before_cursor_execute.
In sqlalchemy UsageRecipes by @zzzeek you can find this example:
Profiling
...
@event.listens_for(Engine, "before_cursor_execute")
def before_cursor_execute(conn, cursor, statement,
                          parameters, context, executemany):
    conn.info.setdefault('query_start_time', []).append(time.time())
    logger.debug("Start Query: %s %r" % (statement, parameters))
...
Here you can get access to your statement
UPDATE: Came up with yet another case where the previous solution here wasn't properly producing the correct SQL statement. After a bit of diving around in SQLAlchemy, it becomes apparent that you not only need to compile for a particular dialect, you also need to take the compiled query and initialize it for the correct DBAPI connection context. Otherwise, things like type bind processors don't get executed and values like JSON.NULL don't get properly translated.
Note, this makes this solution very particular to Flask + Flask-SQLAlchemy + psycopg2 + PostgreSQL. You may need to translate this solution to your environment by changing the dialect and how you reference your connection. However, I'm pretty confident this produces the exact SQL for all data types.
The result below is a simple method to drop in and occasionally but reliably grab the exact, compiled SQL that would be sent to my PostgreSQL backend by just interrogating the query itself:
import sqlalchemy.dialects.postgresql.psycopg2
from flask import current_app

def query_to_string(query):
    dialect = sqlalchemy.dialects.postgresql.psycopg2.dialect()
    compiled_query = query.statement.compile(dialect=dialect)
    sqlalchemy_connection = current_app.db.session.connection()
    context = dialect.execution_ctx_cls._init_compiled(
        dialect,
        sqlalchemy_connection,
        sqlalchemy_connection.connection,
        compiled_query,
        None
    )
    mogrified_query = sqlalchemy_connection.connection.cursor().mogrify(
        context.statement,
        context.parameters[0]
    )
    return mogrified_query.decode()

query = [ .... some ORM query .... ]
print(f"compiled SQL = {query_to_string(query)}")
I've created this little function that I import when I want to print the full query, considering I'm in the middle of a test when the dialect is already bound:
import re

def print_query(query):
    regex = re.compile(r":(?P<name>\w+)")
    params = query.statement.compile().params
    sql = regex.sub(r"'{\g<name>}'", str(query.statement)).format(**params)
    print("\nPrinting SQLAlchemy query:\n")
    print(sql)
    return sql
I think .statement would possibly do the trick:
http://docs.sqlalchemy.org/en/latest/orm/query.html?highlight=query
>>> local_session.query(sqlalchemy_declarative.SomeTable.text).statement
<sqlalchemy.sql.annotation.AnnotatedSelect at 0x6c75a20; AnnotatedSelect object>
>>> x=local_session.query(sqlalchemy_declarative.SomeTable.text).statement
>>> print(x)
SELECT sometable.text
FROM sometable
If you are using PyMySQL with SQLAlchemy, you can do one trick.
I was in a hurry and lost a lot of time, so I changed the driver to print the current statement with its parameters.
SQLAlchemy intentionally does not support full stringification of literal values.
But PyMySQL has a 'mogrify' method which does exactly that; however, SQLAlchemy has no hook to call it when it controls the cursor, as with ORM insert/update via db.add or commit/flush (for update).
So, first find where the driver is installed:
pip show pymysql
In that folder, find and edit the file cursors.py.
In the method:
def execute(self, query, args=None):
under the line:
query = self.mogrify(query, args)
just add:
print(query)
It will work like a charm: debug, resolve the issue, and remove the print.
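If you'd rather not edit the installed driver, a less invasive sketch is to call mogrify() yourself on a raw DBAPI cursor (assuming `engine` is a PyMySQL-backed engine; the query string here is illustrative):

# Sketch: render a statement with parameters via PyMySQL's mogrify(),
# without modifying cursors.py.
raw_conn = engine.raw_connection()
try:
    cursor = raw_conn.cursor()
    print(cursor.mogrify("SELECT * FROM stuff WHERE foo = %s", (1,)))
finally:
    raw_conn.close()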

How to create a non-persistent Elixir/SQLAlchemy object?

Because of legacy data which is not available in the database but some external files, I want to create a SQLAlchemy object which contains data read from the external files, but isn't written to the database if I execute session.flush()
My code looks like this:
try:
    return session.query(Phone).populate_existing().filter(Phone.mac == ident).one()
except:
    return self.createMockPhoneFromLicenseFile(ident)

def createMockPhoneFromLicenseFile(self, ident):
    # Some code to read necessary data from file deleted....
    phone = Phone()
    phone.mac = foo
    phone.data = bar
    phone.state = "Read from legacy file"
    phone.purchaseOrderPosition = self.getLegacyOrder(ident)
    # SQLAlchemy magic doesn't seem to work here, probably because we don't insert the created
    # phone object into the database. So we set the id fields manually.
    phone.order_id = phone.purchaseOrderPosition.order_id
    phone.order_position_id = phone.purchaseOrderPosition.order_position_id
    return phone
Everything works fine except that on a session.flush() executed later in the application SQLAlchemy tries to write the created Phone object to the database (which fortunately doesn't succeed, because phone.state is longer than the data type allows), which breaks the function which issues the flush.
Is there any way to prevent SQLAlchemy from trying to write such an object?
Update
While I didn't find anything on
using_mapper_options(save_on_init=False)
in the Elixir documentation (maybe you can provide a link?), it seemed to me worth a try (I would have preferred a way to prevent a single instance from being written instead of the whole entity).
At first it seemed that the statement has no effect and I suspected my SQLAlchemy/Elixir versions to be too old, but then I found out that the connection to the PurchaseOrderPosition entity (which I didn't modify) made with
phone.purchaseOrderPosition = self.getLegacyOrder(ident)
causes the phone object to be written again. If I remove the statement, everything seems to be fine.
You need to do
import elixir
elixir.options_defaults['mapper_options'] = { 'save_on_init': False }
to prevent Entity instances which you instantiate being auto-added to the session. Ideally, this should be done as early as possible in your code. You can also do this on a per-entity basis, via using_mapper_options(save_on_init=False) - see the Elixir documentation for more details.
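A sketch of the per-entity form (using the Phone entity from the question; field definitions elided):

from elixir import Entity, using_mapper_options

class Phone(Entity):
    using_mapper_options(save_on_init=False)
    # ... Field definitions as in your existing entity ...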
Update:
See this post on the Elixir mailing list indicating that this is the solution.
Also, as Ants Aasma points out, you can use cascade options on the Elixir relationship to set up cascade options in SQLAlchemy. See this page for more details.
Well, sqlalchemy doesn't, by default.
Consider the following self-contained example code.
from sqlalchemy import Column, Integer, Unicode, create_engine
from sqlalchemy.orm import create_session
from sqlalchemy.ext.declarative import declarative_base

e = create_engine('sqlite://')
Base = declarative_base(bind=e)

class User(Base):
    __tablename__ = 'users'
    id = Column(Integer, primary_key=True)
    name = Column(Unicode(50))

# create the empty table and a session
Base.metadata.create_all()
s = create_session(bind=e, autoflush=False, autocommit=False)

# assert the table is empty
assert s.query(User).all() == []

# create a new User instance but don't save it to database:
u = User()
u.name = 'siebert'
# I could run s.add(u) here but I won't

s.flush()
s.commit()

# assert the table is still empty
assert s.query(User).all() == []
So I'm not sure what's implicitly adding your instances to the session. Normally you have to manually call s.add(u) to make it go to the session. I'm not familiar with Elixir, so perhaps this is some Elixir trickery... Maybe you could remove it from the session by using session.expunge().
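For example, a sketch with the phone object from the question:

phone = self.createMockPhoneFromLicenseFile(ident)
session.expunge(phone)  # detach it so a later flush() won't try to INSERT it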
Old post, but I came across a similar issue; in my case in SQLAlchemy it was caused by cascading on backrefs:
http://docs.sqlalchemy.org/en/rel_0_7/orm/session.html#backref-cascade
Turn it off on your backrefs so that you have to explicitly add things to the session
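A sketch of what that can look like in pre-2.0 SQLAlchemy (entity and relationship names are illustrative, borrowed from the question above):

# Hypothetical fragment: cascade_backrefs=False stops backref assignment
# ("phone.purchaseOrderPosition = pos") from implicitly adding phone to
# the session, so you must session.add() it explicitly.
from sqlalchemy import Column, Integer
from sqlalchemy.orm import relationship

class PurchaseOrderPosition(Base):
    __tablename__ = 'order_positions'
    id = Column(Integer, primary_key=True)
    phones = relationship("Phone", backref="purchaseOrderPosition",
                          cascade_backrefs=False)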
