Why use SQLAlchemy's declarative API?

New to SQLAlchemy and somewhat of a novice with programming and Python. I wanted to query a table. It seems I can use the all() method when querying, but I cannot filter without creating a class.
1) Can I filter without creating a class and using the declarative API? Is the filtering example below incorrect?
2) When would it be appropriate to use the declarative API in SQLAlchemy, and when would it not be?
import sqlalchemy as sql
from sqlalchemy import Table, Column, Integer, String, MetaData, ForeignKey
from sqlalchemy.orm import sessionmaker

engine = sql.create_engine('postgresql://postgres:password@localhost:5432/postgres')
meta = MetaData(engine)
Session = sessionmaker(bind=engine)
session = Session()

files = Table('files', meta,
              Column('file_id', Integer, primary_key=True),
              Column('file_name', String(256)),
              Column('query', String(256)),
              Column('results', Integer),
              Column('totalresults', Integer),
              schema='indeed')

session.query(files).all()  # ok
session.query(files).filter(files.file_name = 'test.json')  # not ok

If you want to filter by a Table construct, it should be:
session.query(files).filter(files.c.file_name == 'test.json')
You need to create mapped classes if you want to use the ORM features of SQLAlchemy. For example, with the code you currently have, in order to do an update you have to do:
session.execute(files.update().values(...))
As opposed to:
file = session.query(File).first()
file.file_name = "new file name"
session.commit()
The declarative API happens to be the easiest way of constructing mapped classes, so use it if you want to use the ORM.
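For reference, a minimal declarative mapping for the question's files table might look like this (the File class name is an assumption, chosen to match the query above):
from sqlalchemy import Column, Integer, String
from sqlalchemy.ext.declarative import declarative_base

Base = declarative_base()

class File(Base):
    __tablename__ = 'files'
    __table_args__ = {'schema': 'indeed'}

    file_id = Column(Integer, primary_key=True)
    file_name = Column(String(256))
    query = Column(String(256))
    results = Column(Integer)
    totalresults = Column(Integer)
With that in place, session.query(File).filter(File.file_name == 'test.json').all() works as expected.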

With a declarative mapped class (such as the File class sketched above), filter this way:
session.query(File).filter(File.file_name == 'test.json').all()
You can also use raw SQL queries (docs); a minimal sketch follows.
Whether to use the declarative API or not may also depend on the complexity of your queries, because SQLAlchemy doesn't always generate complex queries the optimal way.
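A raw-SQL sketch, reusing the session from the question (the schema-qualified name indeed.files comes from the question's Table definition):
from sqlalchemy import text

# Bound parameters keep the query safe from SQL injection
result = session.execute(
    text("SELECT * FROM indeed.files WHERE file_name = :name"),
    {"name": "test.json"},
)
for row in result:
    print(row)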

Related

How to automap a backend-agnostic Base in SQLAlchemy

I am trying to take a simple database schema in Oracle and migrate it to an MSSQL database. There are other ways I can do this, but my first thought was to utilize SQLAlchemy's automap and create_all functionality to do it pretty much instantaneously.
Unfortunately, when I attempt to do so I run into some conversion errors:
Input:
from sqlalchemy.ext.automap import automap_base
from custom_connections import connect_to_oracle, connect_to_mssql
Base = automap_base()
oracle_engine = connect_to_oracle()
mssql_engine = connect_to_mssql()
Base.prepare(oracle_engine, reflect=True, schema='ORACLE_MAIN_DB')
Base.metadata.create_all(mssql_engine)
(Note that the connect_to_* functions are custom functions which return SQLAlchemy engines. Currently they just return engines with default settings.)
Output:
CompileError: (in table 'acct', column 'acctnbr'): Compiler <sqlalchemy.dialects.mssql.base.MSTypeCompiler object at 0x00000268E8FF6DA0> can't render element of type <class 'sqlalchemy.dialects.oracle.base.NUMBER'>
The issue is that while SQLAlchemy converts most types to generic SQLAlchemy types when mapping the Base, it doesn't do the same with Oracle NUMBER types. I attempted a similar trick using Alembic autogeneration off the automapped Base, but the Oracle NUMBER types caused issues there as well.
Given all the power behind it, I would have thought SQLAlchemy would be able to handle this without any issues. Is there a technique or setting I could use when running this code that would cause it to convert all types to their SQLAlchemy equivalents when mapping the Base, instead of just most types?
This is described in the SQLAlchemy docs about reflection: register a "column_reflect" listener that replaces each reflected column's dialect-specific type with its generic equivalent before the Table object is built:
from sqlalchemy import MetaData, Table, create_engine
from sqlalchemy import event

mysql_engine = create_engine("mysql://scott:tiger@localhost/test")
metadata_obj = MetaData()

# Register the listener before reflecting, so every reflected column
# gets its dialect-specific type swapped for the generic equivalent
@event.listens_for(metadata_obj, "column_reflect")
def genericize_datatypes(inspector, tablename, column_dict):
    column_dict["type"] = column_dict["type"].as_generic()

# my_table should already exist in your db; this is the table you want to convert
my_generic_table = Table("my_table", metadata_obj, autoload_with=mysql_engine)

# my_generic_table is now a generic version of my_table
sqlite_eng = create_engine("sqlite:///test.db", echo=True)
my_generic_table.create(sqlite_eng)
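Applied to the Oracle-to-MSSQL migration in the question, a sketch might look like the following; connect_to_oracle and connect_to_mssql are the asker's own helpers, and the listener must be registered before prepare() reflects the tables:
from sqlalchemy import event
from sqlalchemy.ext.automap import automap_base
from custom_connections import connect_to_oracle, connect_to_mssql

Base = automap_base()

@event.listens_for(Base.metadata, "column_reflect")
def genericize_datatypes(inspector, tablename, column_dict):
    # Swap dialect-specific types (e.g. oracle.NUMBER) for generic ones
    column_dict["type"] = column_dict["type"].as_generic()

oracle_engine = connect_to_oracle()
mssql_engine = connect_to_mssql()

Base.prepare(oracle_engine, reflect=True, schema='ORACLE_MAIN_DB')
Base.metadata.create_all(mssql_engine)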

How can I load initial data into a database using sqlalchemy

I want to be able to load data automatically upon creation of tables using SQLAlchemy.
In Django, you have fixtures, which allow you to easily pre-populate your database with data upon creation of a table. I found this useful, especially when you have basic "lookup" tables, e.g. product_type or student_type, which contain just a few rows, or even a table like currencies, which loads all the currencies of the world without you having to key them in over and over again when you destroy your models/classes.
My current app isn't using Django; I have SQLAlchemy. How can I achieve the same thing? I want the app to know that the database is being created for the first time and hence populate some tables with data.
I used an event listener to pre-populate the database with data upon creation of a table.
Let's say you have a ProductType model in your code:
from sqlalchemy import event, Column, Integer, String
from sqlalchemy.ext.declarative import declarative_base

Base = declarative_base()

class ProductType(Base):
    __tablename__ = 'product_type'

    id = Column(Integer, primary_key=True)
    name = Column(String(100))
First, you need to define a callback function, which will be executed when the table is created:
def insert_data(target, connection, **kw):
    connection.execute(target.insert(), {'id': 1, 'name': 'spam'}, {'id': 2, 'name': 'eggs'})
Then you just add the event listener:
event.listen(ProductType.__table__, 'after_create', insert_data)
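With the listener registered, create_all() fires the insert as soon as the table is created. A minimal usage sketch (the in-memory SQLite engine is just a placeholder):
from sqlalchemy import create_engine
from sqlalchemy.orm import sessionmaker

engine = create_engine('sqlite:///:memory:')  # placeholder engine for this sketch
Base.metadata.create_all(engine)  # 'after_create' fires here and inserts the rows

session = sessionmaker(bind=engine)()
print(session.query(ProductType).count())  # prints 2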
The short answer is no: SQLAlchemy doesn't provide an equivalent of Django's dumpdata and loaddata.
There is https://github.com/kvesteri/sqlalchemy-fixtures, which might be useful for you, but the workflow is different.

sqlalchemy: Why create a sessionmaker before assign it to a Session object?

Why do I always need to do this in two steps in SQLAlchemy?
import sqlalchemy as sa
import sqlalchemy.orm as orm
engine = sa.create_engine(<dbPath>, echo=True)
Session = orm.sessionmaker(bind=engine)
my_session = Session()
Why can't I do it in one shot, like this (it could be simpler, no?):
import sqlalchemy as sa
import sqlalchemy.orm as orm
engine = sa.create_engine(<dbPath>, echo=True)
Session = orm.Session(bind=engine)
The reason sessionmaker() exists is so that the various "configurational" arguments it requires only need to be set up in one place, instead of repeating "bind=engine, autoflush=False, expire_on_commit=False", etc. over and over again. Additionally, sessionmaker() provides an "updateable" interface, such that you can set it up somewhere in your application:
session = sessionmaker(expire_on_commit=False)
but then later, when you know what database you're talking to, you can add configuration to it:
session.configure(bind=create_engine("some engine"))
It also serves as a "callable" to pass to the very common scoped_session() construct:
session = scoped_session(sessionmaker(bind=engine))
With all of that said, these are just conventions that the documentation refers to so that a consistent "how to use" story is presented. There's no reason you can't use the constructor directly if that is more convenient, and I use the Session() constructor all the time. It's just that in a non-trivial application, you will probably end up sticking that constructor call to Session() inside some kind of callable function anyway; sessionmaker() serves as a default for that callable.
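As a sketch of that last point (the names here are illustrative), wrapping the plain Session() constructor in your own factory function is roughly what sessionmaker() gives you out of the box:
import sqlalchemy as sa
from sqlalchemy.orm import Session

engine = sa.create_engine('sqlite:///:memory:')  # placeholder URL

def make_session():
    # hand-rolled equivalent of: Session = sessionmaker(bind=engine)
    return Session(bind=engine)

my_session = make_session()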
In the most general sense, the Session establishes all conversations with the database and represents a “holding zone” for all the objects which you’ve loaded or associated with it during its lifespan. It provides the entrypoint to acquire a Query object, which sends queries to the database using the Session object’s current database connection, populating result rows into objects that are then stored in the Session, inside a structure called the Identity Map - a data structure that maintains unique copies of each object, where “unique” means “only one object with a particular primary key”.
Try to pprint it and see what's inside:
import pprint
pprint.pprint(my_session)
Here's the rest of the story: http://docs.sqlalchemy.org/ru/latest/orm/session.html

Can SQLAlchemy automatically create relationships from a database schema?

Starting from an existing (SQLite) database with foreign keys, can SQLAlchemy automatically build relationships?
SQLAlchemy classes are automatically created via __table_args__ = {'autoload': True}.
The goal would be to easily access data from related tables without having to add all the relationships one by one by hand (i.e. without using sqlalchemy.orm.relationship() and sqlalchemy.orm.backref).
[Update] As of SQLAlchemy 0.9.1 there is the Automap extension for doing that.
For SQLAlchemy < 0.9.0 you can use SQLAlchemy reflection.
SQLAlchemy reflection loads foreign/primary key relations between tables, but it doesn't create relations between mapped classes. In fact, reflection doesn't create the mapped classes for you at all; you have to specify the mapped class names yourself.
Still, I think reflection's support for loading foreign keys is a great helper and time-saving tool. Using it, you can build a query using joins without needing to specify which columns to use for the join.
from sqlalchemy import MetaData, create_engine, orm
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy.orm import relationship

metadata = MetaData()
Base = declarative_base()
Base.metadata = metadata

db = create_engine('<db connection URL>', echo=False)
metadata.reflect(bind=db)

cause_code_table = metadata.tables['cause_code']
ndticket_table = metadata.tables['ndticket']

sm = orm.sessionmaker(bind=db, autoflush=True, autocommit=True, expire_on_commit=True)
session = orm.scoped_session(sm)

q = session.query(ndticket_table, cause_code_table).join(cause_code_table)
for r in q.limit(10):
    print(r)
Also, when I was using reflection to run queries against an existing database, I had to define only the mapped class names, table bindings, and relations, BUT there was no need to define table columns for these relations.
class CauseCode(Base):
    __tablename__ = "cause_code"

class NDTicket(Base):
    __tablename__ = "ndticket"
    cause_code = relationship("CauseCode", backref="ndticket")

q = session.query(NDTicket)
for r in q.limit(10):
    print(r.ticket_id, r.cause_code.cause_code)
Overall, SQLAlchemy reflection is already a powerful tool and saves me time, so adding relations manually is a small overhead for me.
If I had to develop functionality that adds relations between mapped objects using existing foreign keys, I would start from reflection with the inspector. The get_foreign_keys() method gives all the information required to build relations: the referred table name, the referred column names, and the column names in the target table. That information could be used to add a relationship property to the mapped class, as sketched after the example below.
from sqlalchemy.engine import reflection

insp = reflection.Inspector.from_engine(db)
print(insp.get_table_names())
print(insp.get_foreign_keys(NDTicket.__tablename__))
# [{'referred_table': u'cause_code', 'referred_columns': [u'cause_code'],
#   'referred_schema': None, 'name': u'SYS_C00135367',
#   'constrained_columns': [u'cause_code_id']}]
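A rough sketch of that idea (illustrative only, not a complete implementation): the inspector output can drive setattr() calls that attach relationship() properties to already-mapped classes:
from sqlalchemy import inspect
from sqlalchemy.orm import relationship

def add_relationships(mapped_class, engine, class_registry):
    # class_registry maps table names to mapped classes,
    # e.g. {'cause_code': CauseCode, 'ndticket': NDTicket}
    insp = inspect(engine)
    for fk in insp.get_foreign_keys(mapped_class.__tablename__):
        target = class_registry[fk['referred_table']]
        setattr(mapped_class, fk['referred_table'],
                relationship(target, backref=mapped_class.__tablename__))

add_relationships(NDTicket, db, {'cause_code': CauseCode, 'ndticket': NDTicket})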
As of SQLAlchemy 0.9.1 the (for now experimental) Automap extension would seem to do just that: http://docs.sqlalchemy.org/en/rel_0_9/orm/extensions/automap.html
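For completeness, a minimal Automap sketch against the same two tables (the connection URL placeholder follows the earlier example; ticket_id and cause_code are the column names used above):
from sqlalchemy import create_engine
from sqlalchemy.ext.automap import automap_base
from sqlalchemy.orm import Session

Base = automap_base()
engine = create_engine('<db connection URL>')
Base.prepare(engine, reflect=True)  # reflects tables and generates relationships

NDTicket = Base.classes.ndticket
session = Session(engine)
for ticket in session.query(NDTicket).limit(10):
    print(ticket.ticket_id, ticket.cause_code.cause_code)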

SQLAlchemy - Models - using dynamic fields - ActiveRecord

How close can I get to defining a model in SQLAlchemy like:
class Person(Base):
    pass
and just have it dynamically pick up the field names? Is there any way to get naming conventions to control the relationships between tables? I guess I'm looking for something similar to RoR's ActiveRecord, but in Python.
Not sure if this matters, but I'll be trying to use this under IronPython rather than CPython.
It is very simple to automatically pick up the field names:
from sqlalchemy import Table, MetaData, create_engine
from sqlalchemy.orm import mapper

engine = create_engine('<db connection URL>')
metadata = MetaData()
metadata.bind = engine

person_table = Table("tablename", metadata, autoload=True)

class Person(object):
    pass

mapper(Person, person_table)
Using this approach, you have to define the relationships in the call to mapper(), so there is no auto-discovery of relationships.
To automatically map classes to tables with the same name, you could do:
def map_class(class_):
    table = Table(class_.__name__, metadata, autoload=True)
    mapper(class_, table)

map_class(Person)
map_class(Order)
Elixir might do everything you want.
AFAIK, SQLAlchemy intentionally decouples database metadata from class layout.
You might want to investigate Elixir (http://elixir.ematia.de/trac/wiki), an Active Record pattern for SQLAlchemy, but you have to define the classes, not the database tables.
