I use the MySQL COMPRESS function to insert compressed blob data into the database.
In a previous question I was instructed to use
func.compress
( mysql Compress() with sqlalchemy )
The problem now is that I also want to read the data back from the database.
In mysql I would have done
SELECT UNCOMPRESS(text) FROM ...
Probably I should use a getter in the class.
I tried something like:
def get_html(self):
    return func.uncompress(self.text)
but this does not work. It returns an sqlalchemy.sql.expression.Function, not the string.
Moreover, I could not find which functions SQLAlchemy's func contains.
Any ideas on how I could write a getter in the object so I get back the uncompressed data?
func is actually a really fancy factory object for special function objects which are rendered to SQL at query time - you cannot evaluate them in Python since Python would not know how your database implements compress(). That's why it doesn't work.
SQLAlchemy lets you map SQL expressions to mapped class attributes. If you're using the declarative syntax, extend your class like so (it's untested, but I'm confident this is the way to go):
from sqlalchemy.orm import column_property

class Demo(...):
    data_uncompressed = column_property(func.uncompress(data))
Now whenever SQLAlchemy loads an instance from the database, the SELECT query will contain SELECT ..., UNCOMPRESS(demotable.data), ... FROM demotable.
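For context, here is a minimal self-contained sketch of what that could look like end to end; the Page class, the html column name, and the use of LargeBinary are my assumptions for illustration, not something from the original question:

from sqlalchemy import Column, Integer, LargeBinary, func
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy.orm import column_property

Base = declarative_base()

class Page(Base):
    __tablename__ = 'pages'

    id = Column(Integer, primary_key=True)
    # raw compressed bytes, written via func.compress(...) on insert
    html = Column(LargeBinary)
    # loaded as SELECT ..., UNCOMPRESS(pages.html) ... whenever a Page is queried
    html_uncompressed = column_property(func.uncompress(html))

# usage sketch:
# page = session.query(Page).first()
# page.html_uncompressed  # already the uncompressed value, no extra round trip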
Edit by Giorgos Komninos:
I used the approach from
http://docs.sqlalchemy.org/en/rel_0_7/orm/mapper_config.html#using-a-plain-descriptor
and it worked.
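For reference, the "plain descriptor" approach from that link boils down to issuing the SQL yourself inside a regular Python property. A rough, untested sketch building on the Page class above (the select([...]) form mirrors the 0.7-era docs; newer SQLAlchemy versions take the columns positionally):

from sqlalchemy import select, func
from sqlalchemy.orm import object_session

class Page(Base):
    # ... columns as above, including the compressed html column ...

    @property
    def html_uncompressed(self):
        # runs SELECT UNCOMPRESS(:param) against the database for this instance
        return object_session(self).scalar(
            select([func.uncompress(self.html)])
        )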
Related
I have an SQLAlchemy mapped class MyClass, and two aliases for it. I can eager-load a relationship MyClass.relationship on each alias separately using selectinload() like so:
alias_1, alias_2 = aliased(MyClass), aliased(MyClass)
q = session.query(alias_1, alias_2).options(
    selectinload(alias_1.relationship),
    selectinload(alias_2.relationship))
However, this results in 2 separate SQL queries on MyClass.relationship (in addition to the main query on MyClass, but this is irrelevant to the question). Since these 2 queries on MyClass.relationship are to the same table, I think that it should be possible to merge the primary keys generated within the IN clause in these queries, and just run 1 query on MyClass.relationship.
My best guess for how to do this is:
alias_1, alias_2 = aliased(MyClass), aliased(MyClass)
q = session.query(alias_1, alias_2).options(
    selectinload(MyClass.relationship))
But it clearly didn't work:
sqlalchemy.exc.ArgumentError: Mapped attribute "MyClass.relationship" does not apply to any of the root entities in this query, e.g. aliased(MyClass), aliased(MyClass). Please specify the full path from one of the root entities to the target attribute.
Is there a way to do this in SQLAlchemy?
So, this is exactly the same issue we had. The docs explain how to do it.
You need to add selectin_polymorphic. For anyone else: if you are using with_polymorphic in your select, remove it.
from sqlalchemy.orm import selectin_polymorphic
query = session.query(MyClass).options(
    selectin_polymorphic(MyClass, [alias_1, alias_2]),
    selectinload(MyClass.relationship)
)
Code:
from sqlalchemy import create_engine
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy.orm import sessionmaker

engine = create_engine('mysql://root:password@localhost/db', echo=True)
Base = declarative_base(engine)

DBSession = sessionmaker()
DBSession.bind = engine
session = DBSession()

class Accounts(Base):
    __tablename__ = 'accounts'
    __table_args__ = {'autoload': True}
I am trying to store an SQLAlchemy record object in memcache:
from pymemcache.client.base import Client
client = Client(('localhost', 11211))
client.set('testkey', session.query(Users).get(1))
It stores a string instead of the Users object.
Output: '<__main__.Users object at 0x105b69b10>'
Any help?
Thanks in advance.
The issue isn't so much about how to store a SQLAlchemy object but rather how to store any Object instance.
This is from the docstring of the pymemcache Client class that you've imported:
Values must have a str() method to convert themselves to a byte string.
You haven't included a definition of the Users class that you are querying your database with, so I can only assume you haven't overridden __str__. Therefore, when pymemcache tries to convert your object into a byte string, it calls Python's default __str__ implementation and stores that.
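To illustrate, one straightforward (if somewhat fragile) way to satisfy that requirement yourself is to serialize the instance to bytes before set() and de-serialize after get(). A rough sketch using pickle; the Users model and the cache key come from your snippet, while the expunge step is my own assumption so that the pickled state no longer references the session:

import pickle
from pymemcache.client.base import Client

client = Client(('localhost', 11211))

user = session.query(Users).get(1)
session.expunge(user)  # detach the instance; its already-loaded attributes stay available

client.set('testkey', pickle.dumps(user))          # store bytes, not str(user)
cached_user = pickle.loads(client.get('testkey'))  # back to a Users instance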
As @SuperShoot mentions in their answer, pymemcache expects to store a str representation. Since your SQLAlchemy model instance (record) is not natively a str, pymemcache falls back to the default __str__() representation, which produces an undesired result.
What you need is an interim step that serializes your SQLAlchemy model instance to a str of one structure or another. That gives you a round-trippable flow:
Serialize your model instance to a str of one structure or another.
Store the serialized str using pymemcache.
When retrieving the str from pymemcache, de-serialize it into a SQLAlchemy model instance to continue working easily with it in your Python code.
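As a concrete illustration of that flow without any extra libraries, a minimal hand-rolled JSON round trip might look like the sketch below; the id and name columns on Users are my assumptions, and this per-column mapping is exactly the boilerplate that SQLAthanor (recommended below) automates for you:

import json
from pymemcache.client.base import Client

client = Client(('localhost', 11211))

# 1. serialize the model instance to a str
user = session.query(Users).get(1)
payload = json.dumps({'id': user.id, 'name': user.name})

# 2. store the serialized str
client.set('testkey', payload)

# 3. retrieve and de-serialize back into a model instance
data = json.loads(client.get('testkey'))
cached_user = Users(**data)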
This can be a bit complicated with SQLAlchemy models, which is why I recommend using the SQLAthanor library (full disclosure: I'm the author). It lets you serialize your SQLAlchemy model to CSV, JSON, or YAML - which you can then persist to memcache. And it also lets you easily de-serialize a CSV, JSON, or YAML string into a SQLAlchemy model instance as well, letting you easily maintain the whole flow described above.
There's a lot more functionality to the library which you can read about on the docs page: https://sqlathanor.readthedocs.io/en/latest/
The important bit to remember is that when using SQLAthanor, you'll need to decide what format you want to store your data in (I recommend JSON or YAML), and then explicitly configure the columns/attributes you want to have serialized to that format (this is a security feature in the library). Because your code snippet shows that you're using Declarative Reflection, you'll probably want to look at the following sections of the documentation for how to configure SQLAthanor:
https://sqlathanor.readthedocs.io/en/latest/using.html#using-declarative-reflection-with-sqlathanor
https://sqlathanor.readthedocs.io/en/latest/quickstart.html#using-sqlathanor-with-sqlalchemy-reflection
Hope this helps!
I have probably not grasped the use of @hybrid_property fully. But what I am trying to do is make it easy to access a calculated value based on a column in another table, and thus a join is required.
So what I have is something like this (which works but is awkward and feels wrong):
class Item():
    # ...

    @hybrid_property
    def days_ago(self):
        # Can I even write a python version of this ?
        pass

    @days_ago.expression
    def days_ago(cls):
        return func.datediff(func.NOW(), func.MAX(Event.date_started))
This requires the caller to add the join on the Action table whenever the days_ago property is used. Is hybrid_property even the correct approach to simplifying my queries where I need to get hold of the days_ago value?
One way or another you need to load or access Action rows either via join or via lazy load (note here it's not clear what Event vs. Action is, I'm assuming you have just Item.actions -> Action).
The non-"expression" version of days_ago is intended to work against Action objects that are relevant only to the current instance. Normally within a hybrid, this means just iterating through Item.actions and performing the operation in Python against loaded Action objects. Though since in this case you're looking for a simple aggregate, you could instead opt to run a query, but again it would be local to self, so this is like object_session(self).query(func.datediff(...)).select_from(Action).with_parent(self).scalar().
The expression version of the hybrid when formed against another table typically requires that the query in which it is used already have the correct FROM clauses set up, so it would look like session.query(Item).join(Item.actions).filter(Item.days_ago == xyz). This is explained at Join-Dependent Relationship Hybrid.
Your expression here might be better produced as a column_property, if you can afford a correlated subquery. See that at http://docs.sqlalchemy.org/en/latest/orm/mapping_columns.html#using-column-property-for-column-level-options.
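Pulling that together, a rough sketch of what the hybrid could look like, assuming a simple Item.actions -> Action relationship with a date_started column (the names are loose interpretations of the question, so treat them as assumptions):

from sqlalchemy import func
from sqlalchemy.ext.hybrid import hybrid_property
from sqlalchemy.orm import object_session

class Item(Base):
    # ... columns and the Item.actions relationship ...

    @hybrid_property
    def days_ago(self):
        # instance-level version: aggregate over this Item's own Action rows
        return object_session(self).query(
            func.datediff(func.now(), func.max(Action.date_started))
        ).select_from(Action).with_parent(self).scalar()

    @days_ago.expression
    def days_ago(cls):
        return func.datediff(func.NOW(), func.MAX(Action.date_started))

# query-side usage still needs the join to set up the FROM clause:
# session.query(Item).join(Item.actions).filter(Item.days_ago < 30)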
So far I have used the following statements in SQLAlchemy to implement table inheritance via ALTER TABLE:
inherit = "ALTER TABLE %(fullname)s INHERIT parent_table"
DDL(inherit, on='postgresql').execute_at("after-create", child_table)
This is deprecated in SQLAlchemy now, and I am a bit confused about the new method through
DDLEvents, DDLElement.execute_if(), listeners and events in general.
What is the correct way to create and execute DDL() constructs in SQLAlchemy 0.7+?
Looking at the example in the documentation, your code can be rewritten as:
event.listen(child_table, "after_create", DDL(inherit).execute_if(dialect='postgresql'))
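Spelled out with the imports, the full replacement for the deprecated execute_at() call would look roughly like this (untested, but it follows the documented event/DDL API; child_table and the inherit string are from the question):

from sqlalchemy import event
from sqlalchemy.schema import DDL

inherit = "ALTER TABLE %(fullname)s INHERIT parent_table"

event.listen(
    child_table,
    "after_create",
    DDL(inherit).execute_if(dialect='postgresql')
)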
Does anybody know what the equivalent of the SQL "INSERT OR REPLACE" clause is in SQLAlchemy and its SQL expression language?
Many thanks -- honzas
What about Session.merge?
Session.merge(instance, load=True, **kw)
Copy the state of an instance onto the persistent instance with the same identifier.
If there is no persistent instance currently associated with the session, it will be loaded. Return the persistent instance. If the given instance is unsaved, save a copy of and return it as a newly persistent instance. The given instance does not become associated with the session. This operation cascades to associated instances if the association is mapped with cascade="merge".
from http://www.sqlalchemy.org/docs/reference/orm/sessions.html
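In practice that can look like the sketch below; MyModel and its columns are placeholders I made up for illustration:

# build an object carrying the primary key of the row you want to insert-or-replace
obj = MyModel(id=123, name='new value')

# merge() loads the existing row with id=123 if there is one and copies this state
# onto it; otherwise a new pending instance is created. Either way, commit persists it.
merged = session.merge(obj)
session.commit()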
Session.save_or_update(model)
I don't think (correct me if I'm wrong) INSERT OR REPLACE is in any of the SQL standards; it's an SQLite-specific thing. There is MERGE, but that isn't supported by all dialects either. So it's not available in SQLAlchemy's general dialect.
The cleanest solution is to use Session, as suggested by M. Utku. You could also use SAVEPOINTs: try an INSERT, and on IntegrityError roll back to the savepoint and do an UPDATE instead. A third solution is to write your INSERT with an OUTER JOIN and a WHERE clause that filters on the rows with nulls.
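A rough sketch of that savepoint variant (the Widget model and its columns are invented for illustration):

from sqlalchemy.exc import IntegrityError

try:
    with session.begin_nested():            # SAVEPOINT
        session.add(Widget(id=1, name='foo'))
        session.flush()                     # raises IntegrityError on a duplicate key
except IntegrityError:
    # the row already exists and the with-block rolled back to the savepoint,
    # so fall back to an UPDATE
    session.query(Widget).filter_by(id=1).update({'name': 'foo'})
session.commit()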
You can use OR REPLACE as a so-called prefix in your SQLAlchemy Insert -- see the documentation for Insert.prefix_with() for how to place OR REPLACE between INSERT and INTO in your SQL statement.
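A minimal sketch of that (the my_table definition and connection are assumed, and the resulting SQL only makes sense on backends such as SQLite that understand INSERT OR REPLACE):

stmt = my_table.insert().prefix_with('OR REPLACE')
conn.execute(stmt, {'id': 1, 'name': 'foo'})
# renders roughly: INSERT OR REPLACE INTO my_table (id, name) VALUES (?, ?)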