I'm using a declarative SQLAlchemy class to perform computations. Part of the work requires running the computations for every configuration provided by a different table, and there are no foreign key relationships between the two tables.
This analogy is nothing like my real application, but hopefully it helps convey what I want to happen.
I have a set of cars and a list of paint colors.
The car object has a factory which provides a car in all possible colors:
from sqlalchemy import *
from sqlalchemy.orm import *
from sqlalchemy.ext.declarative import declarative_base

def PaintACar(car, color):
    pass

Base = declarative_base()

class Colors(Base):
    __tablename__ = 'colors'
    id = Column('id', Integer, primary_key=True)
    color = Column('color', Unicode)

class Car(Base):
    __tablename__ = 'car'
    id = Column('id', Integer, primary_key=True)
    model = Column('model', Unicode)

    # is this somehow possible?
    all_color_objects = collection(...)

    # I know this is possible, but would like to know if there's another way
    @property
    def all_colors(self):
        s = Session.object_session(self)
        return s.query(Colors).all()

    def CarColorFactory(self):
        for color in self.all_color_objects:
            yield PaintACar(self, color)
My question: is it possible to produce all_color_objects somehow, without having to resort to finding the session and manually issuing a query as in the all_colors property?
It's been a while, so I'm providing the best answer I saw (as a comment by zzzeek). Basically, I was looking for one-off syntactic sugar. My original 'ugly' implementation works just fine.
What better way would there be here besides getting a Session and producing the query you want? Are you looking to add to the collection and have that automatically flush things? (Just add the objects to the Session.) Do you not like using object_session(self)? (You can build some mixin class or something that hides that for you.) It's not really clear what the problem is. The objects here have no relationship to the parent class, so there's no particular intelligence SQLAlchemy would be able to add.
– zzzeek Jun 17 at 5:03
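Building on the mixin idea from that comment, here is a minimal sketch of what hiding object_session() could look like. The SessionQueryMixin name and its _query helper are hypothetical, not SQLAlchemy API:

from sqlalchemy import Column, Integer, Unicode
from sqlalchemy.orm import Session
from sqlalchemy.ext.declarative import declarative_base

Base = declarative_base()

class SessionQueryMixin(object):
    """Hypothetical mixin that hides the object_session() lookup."""
    def _query(self, model):
        # Query any mapped class through the Session this instance belongs to.
        return Session.object_session(self).query(model)

class Colors(Base):
    __tablename__ = 'colors'
    id = Column(Integer, primary_key=True)
    color = Column(Unicode)

class Car(SessionQueryMixin, Base):
    __tablename__ = 'car'
    id = Column(Integer, primary_key=True)
    model = Column(Unicode)

    @property
    def all_colors(self):
        return self._query(Colors).all()

It still issues a query under the covers; it just tucks the session lookup away so each model property stays a one-liner.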
I'm working on a website based on Flask and Flask-SQLAlchemy with MySQL. I have a handful of feeds; each feed holds a bit of data, but each also needs an update() function.
At first I used MySQL-python (with raw SQL) to store data, and the feeds lived in a plugin system, so each feed overrode the update() function to import its data in its own way.
Now I've switched to Flask-SQLAlchemy and added a Feed model to the database to take advantage of the SQLAlchemy ORM, but I'm stuck on how to handle the update() function. The options I see:
1. Keep the plugin system in parallel with the database model, but I think that's impractical and ineffective.
2. Extend the model class; I'm not sure if that's possible, e.g. FeedOne(Feed) would represent item(name="one") only.
3. Make one update() function handle all feeds, using if self.name == "" statements.
Here are some code bits.
Feed model:
class Feed(db.Model):
    id = db.Column(db.Integer, primary_key=True)
    name = db.Column(db.String(255))
    datapieces = db.relationship('Datapiece', backref='feed', lazy='dynamic')
The update() function:
def update(self):
    data = parsedata(self.data_source)
    for item in data.items:
        # Pass the Feed instance itself; the 'feed' backref expects an
        # object, not a raw id.
        new_datapiece = Datapiece(feed=self, name=item.name, value=item.value)
        db.session.add(new_datapiece)
    db.session.commit()
What I hope to achieve in option 2 is:
for feed in Feed.query.all():
    feed.update()
so that every feed uses its own class's update().
Extending the class and adding an update() method is exactly how option 2 is supposed to work.
I don't see any problem with it (and I'm using that style of coding with Flask/SQLAlchemy all the time).
And if you (can) omit the dynamic lazy attribute, you could also do something like:
    self.datapieces.append(new_datapiece)
in your Feed's update() function.
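For option 2 concretely, here's a minimal sketch using SQLAlchemy's single-table inheritance. The type discriminator column and the polymorphic_identity values are illustrative assumptions; FeedOne is the subclass named in the question:

class Feed(db.Model):
    id = db.Column(db.Integer, primary_key=True)
    name = db.Column(db.String(255))
    # Discriminator so SQLAlchemy knows which subclass each row maps to.
    type = db.Column(db.String(50))
    __mapper_args__ = {'polymorphic_on': type, 'polymorphic_identity': 'feed'}

    def update(self):
        raise NotImplementedError

class FeedOne(Feed):
    __mapper_args__ = {'polymorphic_identity': 'one'}

    def update(self):
        # Import logic specific to the "one" feed goes here.
        pass

With that in place, Feed.query.all() returns FeedOne instances for rows whose type is 'one', so the feed.update() loop from the question dispatches to each subclass automatically.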
I have been using SQLAlchemy for a few years now as a replacement for Django models. I have found it very convenient to have custom methods attached to these models, e.g.:
class Widget(Base):
    __tablename__ = 'widgets'
    id = Column(Integer, primary_key=True)
    name = Column(Unicode(100))

    def get_slug(self, max_length=50):
        return slugify(self.name)[:max_length]
Is there a performance hit when doing things like session.query(Widget) if the model has a few dozen complex methods (50-75 lines)? Are these loaded into memory for every row returned, and would it be more efficient to move some of the less-used methods into helper functions and import them as necessary?
def some_helper_function(widget):
    ':param widget: an instance of Widget()'
    # do something
Thanks!
You would not have any performance hit when loading the objects from the database with a plain session.query(...): methods are stored once on the class object rather than copied onto each instance, so the number of methods has no effect on what is loaded per row.
And you definitely should not move any methods out to any helper function for the sake of performance, as in doing so you would basically destroy the object oriented paradigm of your model.
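A quick plain-Python check of why that holds, independent of SQLAlchemy: methods live on the class object and instances merely look them up, so every row shares one copy of the code.

class Widget(object):
    def get_slug(self):
        return self.name.lower()

a, b = Widget(), Widget()
# Both bound methods wrap the very same function stored on the class.
assert a.get_slug.__func__ is b.get_slug.__func__
# Instances carry only their own attribute data, never the method code.
assert 'get_slug' not in a.__dict__ and 'get_slug' in Widget.__dict__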
There are two (three, but I'm not counting Elixir, as it's not "official") ways to define a persisting object with SQLAlchemy:
Explicit syntax for mapper objects
from sqlalchemy import Table, Column, Integer, String, MetaData, ForeignKey
from sqlalchemy.orm import mapper

metadata = MetaData()
users_table = Table('users', metadata,
    Column('id', Integer, primary_key=True),
    Column('name', String),
)

class User(object):
    def __init__(self, name):
        self.name = name

    def __repr__(self):
        return "<User('%s')>" % (self.name)

mapper(User, users_table)  # <Mapper at 0x...; User>
Declarative syntax
from sqlalchemy import Column, Integer, String
from sqlalchemy.ext.declarative import declarative_base

Base = declarative_base()

class User(Base):
    __tablename__ = 'users'
    id = Column(Integer, primary_key=True)
    name = Column(String)

    def __init__(self, name):
        self.name = name

    def __repr__(self):
        return "<User('%s')>" % (self.name)
I can see that with mapper objects I separate the ORM definition completely from the business logic, while with the declarative syntax, whenever I modify the business-logic class, I can edit the database definition right there in the same class (which ideally should be edited little).
What I'm not completely sure of is which approach is more maintainable for a business application.
I haven't been able to find a comparison of the two mapping methods that would help me decide which one is a better fit for my project.
I'm leaning towards the "normal" way (i.e. not the declarative extension), as it allows me to "hide" the ORM logic and keep it out of the business view, but I'd like to hear compelling arguments for both approaches.
"What I'm not completely sure, is which approach is more maintainable for a business application?"
Can't be answered in general.
However, consider this.
The Django ORM is strictly declarative -- and people like that.
SQLAlchemy does several things, not all of which are relevant to all problems.
SQLAlchemy creates DB-specific SQL from general purpose Python. If you want to mess with the SQL, or map Python classes to existing tables, then you have to use explicit mappings, because your focus is on the SQL, not the business objects and the ORM.
SQLAlchemy can use declarative style (like Django) to create everything for you. If you want this, then you are giving up explicitly writing table definitions and explicitly messing with the SQL.
Elixir is an alternative to save you having to look at SQL.
The fundamental question is "Do you want to see and touch the SQL?"
If you think that touching the SQL makes things more "maintainable", then you have to use explicit mappings.
If you think that concealing the SQL makes things more "maintainable", then you have to use declarative style.
If you think Elixir might diverge from SQLAlchemy, or fail to live up to its promise in some way, then don't use it.
If you think Elixir will help you, then use it.
In our team we settled on declarative syntax.
Rationale:
metadata is trivial to get to, if needed: User.metadata.
Your User class, by virtue of subclassing Base, gets a nice ctor that takes kwargs for all fields. Useful for testing and otherwise. E.g.: user = User(name='doe', password='42'). So there's no need to write a ctor!
If you add an attribute/column, you only need to do it once. "Don't Repeat Yourself" is a nice principle.
Regarding "keeping out ORM from business view": in reality your User class, defined in a "normal" way, gets seriously monkey-patched by SA when mapper function has its way with it. IMHO, declarative way is more honest because it screams: "this class is used in ORM scenarios, and may not be treated just as you would treat your simple non-ORM objects".
I've found that using mapper objects is much simpler than declarative syntax if you use sqlalchemy-migrate to version your database schema (and that is a must-have for a business application, from my point of view). With mapper objects you can simply copy/paste your table declarations into migration versions and use a simple API to modify tables in the database. Declarative syntax makes this harder, because you have to filter out all the helper functions from your class definitions after copying them into the migration version.
Also, it seems to me that complex relations between tables are expressed more clearly with mapper objects syntax, but this may be subjective.
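For concreteness, a hedged sketch of what a sqlalchemy-migrate version script typically looks like under that workflow; the users_table definition is assumed to have been copy/pasted from the model layer:

from sqlalchemy import Table, Column, Integer, String, MetaData

meta = MetaData()
users_table = Table('users', meta,
    Column('id', Integer, primary_key=True),
    Column('name', String(100)),
)

def upgrade(migrate_engine):
    # sqlalchemy-migrate calls this with the engine for the target database.
    meta.bind = migrate_engine
    users_table.create()

def downgrade(migrate_engine):
    meta.bind = migrate_engine
    users_table.drop()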
As of 2019, many years later, SQLAlchemy v1.3 allows a hybrid approach with the best of both worlds:
https://docs.sqlalchemy.org/en/13/orm/extensions/declarative/table_config.html#using-a-hybrid-approach-with-table
from sqlalchemy import Table, Column, Integer, String, MetaData
from sqlalchemy.ext.declarative import declarative_base

metadata = MetaData()
users_table = Table('users', metadata,
    Column('id', Integer, primary_key=True),
    Column('name', String),
)

# possibly in a different file/place: the ORM declaration
Base = declarative_base(metadata=metadata)

class User(Base):
    __table__ = Base.metadata.tables['users']

    def __str__(self):
        return "<User('%s')>" % (self.name)
How close can I get to defining a model in SQLAlchemy like:
class Person(Base):
    pass
And just have it dynamically pick up the field names? Is there any way to get naming conventions to control the relationships between tables? I guess I'm looking for something similar to RoR's ActiveRecord, but in Python.
Not sure if this matters, but I'll be trying to use this under IronPython rather than CPython.
It is very simple to automatically pick up the field names:
from sqlalchemy import Table, MetaData
from sqlalchemy.orm import mapper

metadata = MetaData()
metadata.bind = engine  # assumes an engine created earlier with create_engine()

person_table = Table("tablename", metadata, autoload=True)

class Person(object):
    pass

mapper(Person, person_table)
Using this approach, you have to define the relationships in the call to mapper(), so there's no auto-discovery of relationships.
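For example, a hedged sketch of wiring a relationship explicitly through mapper(); the orders table and its foreign key back to the person table are assumptions, and this replaces the plain mapper(Person, person_table) call above:

from sqlalchemy.orm import mapper, relationship

class Order(object):
    pass

# Assumes an 'orders' table with a foreign key to the person table, so
# SQLAlchemy can infer the join condition after reflection.
order_table = Table("orders", metadata, autoload=True)
mapper(Order, order_table)

mapper(Person, person_table, properties={
    'orders': relationship(Order),
})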
To automatically map classes to tables with same name, you could do:
def map_class(class_):
    table = Table(class_.__name__, metadata, autoload=True)
    mapper(class_, table)

map_class(Person)
map_class(Order)
Elixir might do everything you want.
AFAIK, SQLAlchemy intentionally decouples database metadata and class layout.
You may want to investigate Elixir (http://elixir.ematia.de/trac/wiki): it brings the Active Record pattern to SQLAlchemy, but you have to define the classes, not the database tables.
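For reference, a minimal sketch of what an Elixir entity looked like, reconstructed from memory of its tutorial; treat the details as assumptions:

from elixir import Entity, Field, Unicode, metadata, setup_all, create_all

metadata.bind = "sqlite:///:memory:"

class Person(Entity):
    name = Field(Unicode(128))

# Generates the underlying SQLAlchemy tables and mappers from the classes.
setup_all()
create_all()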