How to translate SQLAlchemy result rows into nested dicts - python

I am evaluating a potential setup for using SQLAlchemy in an async/await FastAPI app. I am currently composing models and queries using declarative_base classes, and then executing the queries with Databases (the model-class syntax is much more readable and easier to write; working directly with SQLAlchemy Core tables is not my favorite activity). This all works great.
At this point, I have SQLAlchemy result rows, but I need to convert them into generic dicts, potentially nested due to eagerly loaded relationships (the only type I will support in this environment). I can't use SQLAlchemy's ORM because 1) I don't have an engine or session; and 2) the ORM assumes that it can just hit the database whenever it needs to load objects, which is not the case in an async/await FastAPI app.
Does anyone have ideas or pointers for how to accomplish this? In particular, I'm struggling to figure out how to associate result rows with specific relationship keys. I've been poking around in the SQLAlchemy internals for ideas, but it's pretty opaque, since a lot of it assumes an entire layer of object caching and session/engine management that just isn't present in my setup.
The two things I could use ideas about:
How to map column names like table_1_column_name to specific models and their properties
How to detect and map relationships (potentially more than one level deep)
Thanks for any help you can provide!
Update: You can find a runnable example here: https://gist.github.com/onecrayon/dd4803a5099061fa48d52f2d4bc2396b (see lines 92-109 for the relevant place where I need to figure out how to convert a RowProxy to a nested dict by mapping the query column names to the names on the SQLAlchemy model).
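To illustrate the direction I've been exploring, here is a rough sketch (the model passed in and the table_column label convention are assumptions; it handles to-one relationships only, and bidirectional relationships would need a visited-set guard to avoid infinite recursion):
from sqlalchemy import inspect

def row_to_nested_dict(row, model):
    # Map a flat result row onto `model`'s columns, assuming the query
    # labels columns as <table>_<column> (e.g. table_1_column_name).
    mapper = inspect(model)
    prefix = mapper.local_table.name
    keys = set(row.keys())
    result = {
        col.key: row[f"{prefix}_{col.key}"]
        for col in mapper.columns
        if f"{prefix}_{col.key}" in keys
    }
    # Recurse into to-one relationships; to-many would require grouping
    # values across multiple result rows.
    for name, rel in mapper.relationships.items():
        if not rel.uselist:
            result[name] = row_to_nested_dict(row, rel.mapper.class_)
    return result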

If you are database-first, SQLAlchemy's execute() method returns a ResultProxy object; you can get its results with methods such as fetchone(), first(), or fetchall(), and then cast the rows to a list or dict.
You can also see the docs for ResultProxy.
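A quick sketch of that flow (the database URL and table here are hypothetical, SQLAlchemy 1.3 style):
from sqlalchemy import create_engine

engine = create_engine("sqlite:///example.db")  # hypothetical database
with engine.connect() as conn:
    result = conn.execute("SELECT id, name FROM users")  # ResultProxy
    # Each RowProxy supports mapping-style access, so dict() works directly:
    as_dicts = [dict(row) for row in result.fetchall()]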

Casting the object into a dict should work if the result is a raw SQLAlchemy row and not an ORM-mapped instance.
Judging from your comment on another answer, it looks like you need to map the result back into an ORM instance. You can define declarative mappings so that your result gets translated back into a Python instance.
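For example (a sketch; the User model and the row contents are hypothetical):
from sqlalchemy import Column, Integer, String
from sqlalchemy.ext.declarative import declarative_base

Base = declarative_base()

class User(Base):  # hypothetical model matching the query's columns
    __tablename__ = "users"
    id = Column(Integer, primary_key=True)
    name = Column(String)

row_dict = {"id": 1, "name": "Alice"}  # stand-in for dict(row)
user = User(**row_dict)  # transient instance, never attached to a session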

SQLAlchemy result objects have an option to return themselves as a dict.
Here is the SQLAlchemy doc to help you:
https://docs.sqlalchemy.org/en/13/orm/query.html#sqlalchemy.util.KeyedTuple._asdict
Or you can go with dictalchemy (https://pythonhosted.org/dictalchemy), which works as a wrapper on top of SQLAlchemy.
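For instance, with an ORM-style query that returns KeyedTuples (a sketch; the session and User model are assumed to exist):
# Each KeyedTuple result exposes _asdict() in SQLAlchemy 1.3:
rows = session.query(User.id, User.name).all()
dicts = [row._asdict() for row in rows]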
Hope that helps.

Related

Flask and SQLAlchemy sort in display without new query?

I'm displaying the results from an SQLAlchemy (Flask-SQLAlchemy) query on a particular view. However, the sort order is only set by what I originally passed into the query ( order_by(desc(SelectedTable.date_changed)) ). I'm now trying to add functionality so that each displayed column can be selected to order the presentation.
Is there a way to alter how a returned query object is sorted after it's returned, to create this behavior? Or will I need to build custom queries for each possible column that could be sorted by, ascending or descending?
Is there a recipe for implementing something like this? I've tried Google, here, and the Flask, Flask-SQLAlchemy, and SQLAlchemy docs for something along these lines, but haven't seen anything that touches on the subject, and I'm beginning to think I'll need custom queries, or else some JavaScript in the Jinja template to re-sort without new queries.
Thanks!
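One possible direction (a sketch; the helper and its arguments are hypothetical): build the order_by clause dynamically from the requested column name instead of writing one query per column.
from sqlalchemy import asc, desc

def apply_sort(query, model, sort_key, direction="desc"):
    # Resolve the column attribute by name, e.g. SelectedTable.date_changed
    column = getattr(model, sort_key)
    return query.order_by(desc(column) if direction == "desc" else asc(column))

# usage: apply_sort(SelectedTable.query, SelectedTable, "date_changed", "asc")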

Pros and Cons of manually creating an ORM for an existing database?

What are the pros and cons of manually creating an ORM for an existing database vs using database reflection?
I'm writing some code using SQLAlchemy to access a pre-existing database. I know I can use sqlalchemy.ext.automap to automagically reflect the schema and create the mappings.
However, I'm wondering if there is any significant benefit to manually creating the mapping classes vs. letting automap do its magic.
If there is significant benefit, can SQLAlchemy auto-generate the python mapping classes like Django's inspectdb? That would make creating all of the declarative base mappings much faster, as I'd only have to verify and tweak rather than write from scratch.
Edit:
As @iuridiniz says below, there are a few solutions that mimic Django's inspectdb. See Is there a Django's inspectdb equivalent for SQLAlchemy?. The answers in that thread are not Python 3 compatible, so look into sqlacodegen or flask-sqlacodegen if you're looking for something that's actually maintained.
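For reference, the automap route I mentioned looks like this (the database URL and table name are placeholders):
from sqlalchemy import create_engine
from sqlalchemy.ext.automap import automap_base

engine = create_engine("sqlite:///existing.db")  # placeholder database
Base = automap_base()
Base.prepare(engine, reflect=True)  # reflect the schema and generate classes

# Reflected classes are keyed by table name:
Companies = Base.classes.companies  # assumes a 'companies' table exists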
I see a lot of tables that were created with CREATE TABLE suppliers AS (SELECT * FROM companies WHERE 1 = 2); (a poor man's table copy), which will have no primary keys. If existing tables don't have primary keys, you'll have to constantly catch exceptions and feed Column objects into the mapper. If you've got Column objects handy, you're already halfway to writing your own ORM layer. If you just complete the ORM, you won't have to worry about whether tables have primary keys set.
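A sketch of what feeding Column objects into the mapper looks like with classical mapping (the table and column names are hypothetical):
from sqlalchemy import create_engine, MetaData, Table
from sqlalchemy.orm import mapper

engine = create_engine("sqlite:///legacy.db")  # hypothetical database
metadata = MetaData()
# Reflect the PK-less table as it exists in the database:
suppliers = Table("suppliers", metadata, autoload=True, autoload_with=engine)

class Supplier(object):
    pass

# The database declares no primary key, so tell the mapper which
# column(s) to treat as one:
mapper(Supplier, suppliers, primary_key=[suppliers.c.id])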

Accessing all columns from MySQL table using flask sqlalchemy

I'm trying to get the columns for a MySQL table whether in a string or a list format.
Since I defined the table through my main app, I could use dir(table_name) to get a list of attributes but those contain private attributes and other built-in attributes like "query" or "query_class". Filtering these would be possible but I'm trying to find an easier way to get the columns without going through the attribute route.
Between reading the Flask-SQLAlchemy documentation and the SQLAlchemy documentation, I noticed that using table_name.c will work for SQLAlchemy. Is there an equivalent for Flask-SQLAlchemy?
Flask-SQLAlchemy does nothing to the SQLAlchemy side of things; it is just a wrapper that makes SQLAlchemy easier to use with Flask, along with some convenience features. You can inspect ORM objects the same way you normally would. For example, inspecting a model returns its mapper.
m = db.inspect(MyModel)
# all orm attributes
print(list(m.all_orm_descriptors.keys()))
# just the columns
print(list(m.columns.keys()))
# etc.
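If you just need the column names and have the model class handy, you can also go through its Core table directly, no inspector needed:
# Declarative models expose their underlying Core table:
print(MyModel.__table__.columns.keys())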

Clean way to find many to many relations by array of ids in python-eve and mongo

I have two models in python-eve, say foo and bar, and in foo I have an array of ObjectIds referring to bars. Is there a clean way to do this in python-eve without defining a custom route in Flask and running the query manually against Mongo?
And if I'm forced to do it in Mongo, what is the recommended way to communicate with the Mongo instance?
I am not sure I understand your question, but have you looked into the data_relation setting? See Embedded Resource Serialization. Quoting from the Limitations paragraph:
Currently we support embedding of documents by references located in any subdocuments (nested dicts and lists). For example, a query /invoices?/embedded={"user.friends":1} will return a document with user and all his friends embedded, but only if user is a subdocument and friends is a list of references (it could be a list of dicts, nested dicts, etc.). We do not support multiple layers of embeddings. This feature is about serialization on GET requests. There's no support for POST, PUT or PATCH of embedded documents.
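For concreteness, a sketch of the domain config this describes (the resource and field names are assumptions):
foo = {
    "schema": {
        "bars": {
            "type": "list",
            "schema": {
                "type": "objectid",
                "data_relation": {
                    "resource": "bar",
                    "embeddable": True,  # enables ?embedded={"bars": 1}
                },
            },
        },
    },
}
# GET /foo?embedded={"bars": 1} would then return the bar documents inline.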
UPDATED
If you simply want to query for documents which are referencing documents in other collections, something like this would work:
?where={"reference_field":"54e328ec537d3d20bbdf2ed5"}
That's assuming reference_field is either a list of ids (of type objectid) or an objectid. Also see this answer.
Hope this helps.

How to save to two tables using one SQLAlchemy model

I have an SQLAlchemy ORM class, linked to MySQL, which works great at saving the data I need down to the underlying table. However, I would like to also save the identical data to a second archive table.
Here's some pseudocode to try to explain what I mean:
my_data = Data()  # an ORM class
my_data.name = "foo"
# This saves just to the 'data' table
session.add(my_data)
# This should save it to the identical 'backup_data' table
my_data_archive = my_data
my_data_archive.__tablename__ = 'backup_data'
session.add(my_data_archive)
# And commit them both
session.commit()
Just a heads up, I am not interested in mapping a class to a JOIN, as in: http://www.sqlalchemy.org/docs/05/mappers.html#mapping-a-class-against-multiple-tables
I list some options below. I would go for the DB trigger if you do not need to work on those objects in your model.
use a database trigger to do this job for you
create a SessionExtension which will create copy-objects and add them to the session (usually on before_flush). Edit 1: You can take the versioning example from SA as a basis; that code does even more than you need.
see the SA versioning example, which will give you not just a copy of the object but its whole version history, which might be what you wish for
see the Reverse mapping from a table to a model in SQLAlchemy question, where the proposed solution is described in a blog post
Create two identical models: one mapped to the main table and another mapped to the archive table. Create a MapperExtension with a redefined after_insert() method (depending on your demands you might also need after_update() and after_delete()). This method should copy data from the main model to the archive and add it to the session. There are some tricks to copy all columns and many-to-many relations automagically.
Note that you'll have to flush() the session twice to store both objects, since the unit of work is computed before the mapper extension adds the new object to the session. You can redefine Session.flush() to take care of this problem. Also, auto-incremented fields are assigned when the object is flushed, so you'll have to delay copying if you need them too.
This is one possible scenario that is proven to work. I'd like to know if there is a better way.
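To make the two-model option concrete, here is a sketch using the modern event API in place of MapperExtension (the model and column names are assumptions). Writing the archive row on the flush connection also sidesteps the double-flush issue noted above:
from sqlalchemy import Column, Integer, String, create_engine, event
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy.orm import sessionmaker

Base = declarative_base()

class Data(Base):
    __tablename__ = "data"
    id = Column(Integer, primary_key=True)
    name = Column(String)

class BackupData(Base):
    __tablename__ = "backup_data"
    id = Column(Integer, primary_key=True)
    name = Column(String)

@event.listens_for(Data, "after_insert")
def copy_to_archive(mapper, connection, target):
    # Runs inside the same transaction as the original INSERT, so both
    # rows commit or roll back together.
    connection.execute(
        BackupData.__table__.insert().values(id=target.id, name=target.name)
    )

engine = create_engine("sqlite://")
Base.metadata.create_all(engine)
session = sessionmaker(bind=engine)()
session.add(Data(name="foo"))
session.commit()  # inserts into both 'data' and 'backup_data'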
