All the docs for SQLAlchemy give INSERT and UPDATE examples using the local table instance (e.g. tablename.update()... )
Doing this seems awkward with the declarative syntax, because I need to reference Base.metadata.tables["tablename"] to get the table object.
Am I supposed to do this another way? Is there a different syntax for INSERT and UPDATE recommended when using the declarative syntax? Should I just switch to the old way?
well it works for me:
class Users(Base):
    __tablename__ = 'users'
    __table_args__ = {'autoload': True}

users = Users()
print users.__table__.select()
# ...SELECT users.......
via the __table__ attribute on your declarative class
There may be some confusion between the table (the Table object) and the table name (a string). Using the __table__ class attribute works fine for me.
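For INSERT and UPDATE specifically, the same __table__ attribute gives you the constructs the docs show for plain Table objects. A minimal sketch, assuming the Users class above has id and name columns and that you have a session or connection to execute the statements with:
# INSERT / UPDATE built from the table behind the declarative class
ins = Users.__table__.insert().values(name='ed')
upd = Users.__table__.update().where(Users.__table__.c.id == 5).values(name='ed')
# execute with whatever session/connection you already have
session.execute(ins)
session.execute(upd)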
I am currently using the SQLAlchemy ORM for my db operations. Now I have a SQL command which requires ON CONFLICT (id) DO UPDATE. The method on_conflict_do_update() seems to be the correct one to use. But the post here says the code has to switch to SQLAlchemy Core and that the high-level ORM functionality is lost. I am confused by this statement, since I think code like the demo below can achieve what I want while keeping the functionality of the SQLAlchemy ORM.
class Foo(Base):
    ...
    bar = Column(Integer)

foo = Foo(bar=1)
insert_stmt = insert(Foo).values(bar=foo.bar)
do_update_stmt = insert_stmt.on_conflict_do_update(
    set_=dict(
        bar=insert_stmt.excluded.bar,
    )
)
session.execute(do_update_stmt)
I haven't tested it on my project since it would require a huge amount of modification. Can I ask if this is the correct way to deal with ON CONFLICT (id) DO UPDATE with the SQLAlchemy ORM?
As noted in the documentation, the constraint= argument is
The name of a unique or exclusion constraint on the table, or the constraint object itself if it has a .name attribute.
so we need to pass the name of the PK constraint to .on_conflict_do_update().
We can get the PK constraint name via the inspection interface:
from sqlalchemy import inspect
from sqlalchemy.dialects.postgresql import insert  # on_conflict_do_update() is dialect-specific
from sqlalchemy.orm import Session

insp = inspect(engine)
pk_constraint_name = insp.get_pk_constraint(Foo.__tablename__)["name"]
print(pk_constraint_name)  # tbl_foo_pkey

new_bar = 123
insert_stmt = insert(Foo).values(id=12345, bar=new_bar)
do_update_stmt = insert_stmt.on_conflict_do_update(
    constraint=pk_constraint_name, set_=dict(bar=new_bar)
)
with Session(engine) as session, session.begin():
    session.execute(do_update_stmt)
I want to pass a class to a function, and I don't want to have to pass the table name separately.
class TableClass(Base):
    __table__ = Table('t1', metadata, autoload=True)

def get_name(TableClass):
    print TableClass.GetTableName()  # should print 't1'

get_name(TableClass)
So I searched for it on Google, and there was no answer.
According to:
How to discover table properties from SQLAlchemy mapped object
I can use this:
print TableClass.__table__.name
Regardless of whether you use the declarative extension or not, you can use the Runtime Inspection API:
def get_name(TableClass):
    from sqlalchemy import inspect
    mapper = inspect(TableClass)
    print mapper.tables[0].name
Please note that a class can have multiple tables mapped to it, for example when using joined-table inheritance.
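A minimal sketch of that multi-table case, using joined-table inheritance (the Employee/Engineer classes are illustrative):
from sqlalchemy import Column, Integer, String, ForeignKey, inspect
from sqlalchemy.ext.declarative import declarative_base

Base = declarative_base()

class Employee(Base):
    __tablename__ = 'employee'
    id = Column(Integer, primary_key=True)
    type = Column(String(20))
    __mapper_args__ = {'polymorphic_on': type, 'polymorphic_identity': 'employee'}

class Engineer(Employee):
    __tablename__ = 'engineer'
    id = Column(Integer, ForeignKey('employee.id'), primary_key=True)
    __mapper_args__ = {'polymorphic_identity': 'engineer'}

# the Engineer mapper knows about both tables
print([t.name for t in inspect(Engineer).tables])  # e.g. ['employee', 'engineer']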
print TableClass.__tablename__
works for me
In SQLAlchemy you can fetch table information through attributes of the mapped class.
In your example
print TableClass.__tablename__ # Prints 't1'
According to #Aylwyn Lake 's Finding
print TableClass.__table__.name
I just hit this issue myself, passing the object around, and I didn't want to pass the table-name string as well.
Turns out it's just a property of the Table object, e.g.:
table = Table("somename", meta)
...
print("My table is called: " + table.name)
None of the answers worked for me.
This did:
For Class Object:
TableClass.sa_class_manager.mapper.mapped_table.name
For Instance Object:
tableObj.sa_instance_state.mapper.mapped_table.name
I'm a beginner with SQLAlchemy and found that querying can be done in two ways:
Approach 1:
DBSession = scoped_session(sessionmaker())

class _Base(object):
    query = DBSession.query_property()

Base = declarative_base(cls=_Base)

class SomeModel(Base):
    key = Column(Unicode, primary_key=True)
    value = Column(Unicode)

# When querying
result = SomeModel.query.filter(...)
Approach 2:
DBSession = scoped_session(sessionmaker())
Base = declarative_base()

class SomeModel(Base):
    key = Column(Unicode, primary_key=True)
    value = Column(Unicode)

# When querying
session = DBSession()
result = session.query(SomeModel).filter(...)
Is there any difference between them?
In the code above, there is no difference. This is because, in line 3 of the first example:
the query property is explicitly bound to DBSession
there is no custom Query object passed to query_property
As @petr-viktorin points out in the answer here, in the first example a session must be available before you define your model, which might be problematic depending on the structure of your application.
If, however, you need a custom query that adds additional query parameters automatically to all queries, then only the first example will allow that. A custom query class that inherits from sqlalchemy.orm.query.Query can be passed as an argument to query_property. This question shows an example of that pattern.
Even if a model object has a custom query property defined on it, that property is not used when querying with session.query, as in the last line of the second example. This means something like the first example is the only option if you need a custom query class.
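A minimal sketch of that pattern, passing a custom Query subclass to query_property (the CountingQuery class and its count_all helper are hypothetical):
from sqlalchemy.orm import scoped_session, sessionmaker
from sqlalchemy.orm.query import Query
from sqlalchemy.ext.declarative import declarative_base

class CountingQuery(Query):
    # hypothetical helper available on every query issued via Model.query
    def count_all(self):
        return self.count()

DBSession = scoped_session(sessionmaker())

class _Base(object):
    # every model's .query will be a CountingQuery bound to DBSession
    query = DBSession.query_property(query_cls=CountingQuery)

Base = declarative_base(cls=_Base)
# later: SomeModel.query.count_all() works; session.query(SomeModel) ignores it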
I see these downsides to query_property:
You cannot use it on a different Session than the one you've configured (though you could always use session.query then).
You need a session object available before you define your schema.
These could bite you when you want to write tests, for example.
Also, session.query fits better with how SQLAlchemy works; query_property looks like it's just added on top for convenience (or similarity with other systems?).
I'd recommend you stick to session.query.
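For example, with session.query a test can point a fresh sessionmaker at a throwaway engine without touching the globally configured DBSession. A minimal sketch, assuming an in-memory SQLite engine and the SomeModel class from the question:
from sqlalchemy import create_engine
from sqlalchemy.orm import sessionmaker

test_engine = create_engine('sqlite://')   # throwaway in-memory database
Base.metadata.create_all(test_engine)      # create the schema just for this test
TestSession = sessionmaker(bind=test_engine)

session = TestSession()
result = session.query(SomeModel).filter(SomeModel.key == u'spam').all()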
An answer (here) to a different SQLAlchemy question might help. That answer starts with:
You can use Model.query, because the Model (or usually its base class, especially in cases where declarative extension is used) is assigned Session.query_property. In this case the Model.query is equivalent to Session.query(Model).
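In other words, once the query property is configured, the two spellings below build the same query (a sketch reusing the names from the question):
SomeModel.query.filter(SomeModel.key == u'spam').all()
DBSession.query(SomeModel).filter(SomeModel.key == u'spam').all()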
There are two (three, but I'm not counting Elixir, as it's not "official") ways to define a persisting object with SQLAlchemy:
Explicit syntax for mapper objects
from sqlalchemy import Table, Column, Integer, String, MetaData, ForeignKey
from sqlalchemy.orm import mapper

metadata = MetaData()
users_table = Table('users', metadata,
    Column('id', Integer, primary_key=True),
    Column('name', String),
)

class User(object):
    def __init__(self, name):
        self.name = name

    def __repr__(self):
        return "<User('%s')>" % (self.name)

mapper(User, users_table)  # <Mapper at 0x...; User>
Declarative syntax
from sqlalchemy import Column, Integer, String
from sqlalchemy.ext.declarative import declarative_base

Base = declarative_base()

class User(Base):
    __tablename__ = 'users'
    id = Column(Integer, primary_key=True)
    name = Column(String)

    def __init__(self, name):
        self.name = name

    def __repr__(self):
        return "<User('%s')>" % (self.name)
I can see that with mapper objects I keep the ORM definition completely separate from the business logic, while with the declarative syntax, whenever I modify the business-logic class, I can edit the database mapping right there (which ideally should need little editing).
What I'm not completely sure, is which approach is more maintainable for a business application?
I haven't been able to find a comparative between the two mapping methods, to be able to decide which one is a better fit for my project.
I'm leaning towards the "normal" way (i.e. not the declarative extension), as it lets me "hide" the ORM logic and keep it out of the business view, but I'd like to hear compelling arguments for both approaches.
"What I'm not completely sure, is which approach is more maintainable for a business application?"
Can't be answered in general.
However, consider this.
The Django ORM is strictly declarative -- and people like that.
SQLAlchemy does several things, not all of which are relevant to all problems.
SQLAlchemy creates DB-specific SQL from general purpose Python. If you want to mess with the SQL, or map Python classes to existing tables, then you have to use explicit mappings, because your focus is on the SQL, not the business objects and the ORM.
SQLAlchemy can use declarative style (like Django) to create everything for you. If you want this, then you are giving up explicitly writing table definitions and explicitly messing with the SQL.
Elixir is an alternative to save you having to look at SQL.
The fundamental question is "Do you want to see and touch the SQL?"
If you think that touching the SQL makes things more "maintainable", then you have to use explicit mappings.
If you think that concealing the SQL makes things more "maintainable", then you have to use declarative style.
If you think Elixir might diverge from SQLAlchemy, or fail to live up to its promise in some way, then don't use it.
If you think Elixir will help you, then use it.
In our team we settled on declarative syntax.
Rationale:
metadata is trivial to get to, if needed: User.metadata.
Your User class, by virtue of subclassing Base, has a nice ctor that takes kwargs for all fields. Useful for testing and otherwise. E.g.: user=User(name='doe', password='42'). So no need to write a ctor!
If you add an attribute/column, you only need to do it once. "Don't Repeat Yourself" is a nice principle.
Regarding "keeping out ORM from business view": in reality your User class, defined in a "normal" way, gets seriously monkey-patched by SA when mapper function has its way with it. IMHO, declarative way is more honest because it screams: "this class is used in ORM scenarios, and may not be treated just as you would treat your simple non-ORM objects".
I've found that using mapper objects is much simpler than the declarative syntax if you use sqlalchemy-migrate to version your database schema (and this is a must-have for a business application, from my point of view). If you are using mapper objects, you can simply copy/paste your table declarations into migration versions and use a simple API to modify tables in the database. Declarative syntax makes this harder, because you have to filter away all the helper functions from your class definitions after copying them to the migration version.
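A minimal sketch of what such a copied table definition looks like inside a sqlalchemy-migrate version script (the users_table definition is assumed to be pasted from the application code):
from sqlalchemy import Table, Column, Integer, String, MetaData

meta = MetaData()
users_table = Table('users', meta,
    Column('id', Integer, primary_key=True),
    Column('name', String),
)

def upgrade(migrate_engine):
    meta.bind = migrate_engine
    users_table.create()

def downgrade(migrate_engine):
    meta.bind = migrate_engine
    users_table.drop()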
Also, it seems to me that complex relations between tables are expressed more clearly with mapper objects syntax, but this may be subjective.
As of now (2019), many years later, SQLAlchemy v1.3 allows a hybrid approach with the best of both worlds:
https://docs.sqlalchemy.org/en/13/orm/extensions/declarative/table_config.html#using-a-hybrid-approach-with-table
from sqlalchemy import Table, Column, Integer, String, MetaData
from sqlalchemy.ext.declarative import declarative_base

metadata = MetaData()
users_table = Table('users', metadata,
    Column('id', Integer, primary_key=True),
    Column('name', String),
)

# possibly in a different file/place, the ORM declaration
Base = declarative_base(metadata=metadata)

class User(Base):
    __table__ = Base.metadata.tables['users']

    def __str__(self):
        return "<User('%s')>" % (self.name)
How close can I get to defining a model in SQLAlchemy like:
class Person(Base):
    pass
And just have it dynamically pick up the field names? Any way to get naming conventions to control the relationships between tables? I guess I'm looking for something similar to RoR's ActiveRecord, but in Python.
Not sure if this matters, but I'll be trying to use this under IronPython rather than CPython.
It is very simple to automatically pick up the field names:
from sqlalchemy import MetaData, Table
from sqlalchemy.orm import mapper

metadata = MetaData()
metadata.bind = engine

person_table = Table("tablename", metadata, autoload=True)

class Person(object):
    pass

mapper(Person, person_table)
Using this approach, you have to define the relationships in the call to mapper(), so no auto-discovery of relationships.
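For instance, a relationship still has to be spelled out by hand in the mapper() call. A minimal sketch, assuming an addresses table with a foreign key to the person table (names are illustrative):
from sqlalchemy.orm import mapper, relationship

address_table = Table("addresses", metadata, autoload=True)

class Address(object):
    pass

# not auto-discovered: the relationship must be declared explicitly
mapper(Address, address_table, properties={
    'person': relationship(Person),
})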
To automatically map classes to tables with same name, you could do:
def map_class(class_):
    table = Table(class_.__name__, metadata, autoload=True)
    mapper(class_, table)
map_class(Person)
map_class(Order)
Elixir might do everything you want.
AFAIK SQLAlchemy intentionally decouples database metadata from class layout.
You might want to investigate Elixir (http://elixir.ematia.de/trac/wiki): an Active Record pattern for SQLAlchemy, but you have to define the classes, not the database tables.