I’m working on a library where the user should be able to simply declare a few classes which are automatically backed by the database. In short, somewhere hidden in the code, there is
from sqlalchemy.ext.declarative import declarative_base
Base = declarative_base()
class LibraryBase(Base):
    # important library stuff
and the user should then do
class MyStuff(LibraryBase):
    # important personal stuff

class MyStuff_2(LibraryBase):
    # important personal stuff

mystuff = MyStuff()
Library.register(mystuff)
mystuff.changeIt()    # apply some changes to the instance
Library.save(mystuff) # and save it
# same for all other classes
In a static environment, e.g. the user has created one file with all personal classes and imports this file, this works pretty well. All class names are fixed and SQLAlchemy knows how to map each class.
In an interactive environment, things are different: now there is a chance of a class being defined twice. The two definitions may live in different modules, but SQLAlchemy will still complain:
SAWarning: The classname 'MyStuff' is already in the registry of this declarative base, mapped to <class 'OtherModule.MyStuff'>
Is there a way to deal with this? Can I somehow unload a class from its declarative_base so that I can exchange its definition with a new one?
You can use:
sqlalchemy.orm.instrumentation.unregister_class(cl)
del cl._decl_class_registry[cl.__name__]
The first line is to prevent accidental use of your unregistered class. The second removes it from the registry and will prevent the warning.
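For example, wrapped up as a small helper (a sketch against older SQLAlchemy versions, where _decl_class_registry lives on the declarative base and is visible from its subclasses):

import sqlalchemy.orm.instrumentation

def unregister(cls):
    # Strip SQLAlchemy's instrumentation so the old class can no
    # longer be used as a mapped class by accident.
    sqlalchemy.orm.instrumentation.unregister_class(cls)
    # Drop it from the declarative registry so that redefining a
    # class with the same name no longer triggers the SAWarning.
    del cls._decl_class_registry[cls.__name__]

If the replacement class should also reuse the table name, note that the old Table object still lives in Base.metadata; it can be removed with Base.metadata.remove(cls.__table__).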
I'm not really sure this even works, but I think what you want is
sqlalchemy.orm.instrumentation.unregister_class()
http://hg.sqlalchemy.org/sqlalchemy/file/762548ff8eef/lib/sqlalchemy/orm/instrumentation.py#l466
In my project I use this solution: the library-specific columns are defined in a mixin via declared_attr, and the target mapper is created by a type() call with the appropriate bases. As a result, I get a fully functional mapper.
from sqlalchemy import create_engine, BigInteger, Column
from sqlalchemy.orm import sessionmaker, scoped_session
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy.ext.declarative import declared_attr
Base = declarative_base()
class LibraryBase(object):
    __tablename__ = 'model'

    @declared_attr
    def library_field(self):
        return Column(BigInteger)

class MyLibrary(object):
    @classmethod
    def register(cls, entity):
        tablename = entity.__tablename__
        Mapper = type('Entity_%s' % tablename, (Base, LibraryBase, entity), {
            '__tablename__': tablename,
            'id': Column(BigInteger, primary_key=True),
        })
        return Mapper

    @classmethod
    def setup(cls):
        Base.metadata.create_all()

class MyStaff(object):
    __tablename__ = 'sometable1'

    @declared_attr
    def staff_field(self):
        return Column(BigInteger)

    def mymethod(self):
        print('My method:', self)

class MyStaff2(MyStaff):
    __tablename__ = 'sometable2'

if __name__ == '__main__':
    engine = create_engine('sqlite://', echo=True)
    Base.metadata.bind = engine
    Session = scoped_session(sessionmaker(bind=engine))
    session = Session()

    # register and install
    MyStaffMapper = MyLibrary.register(MyStaff)
    MyStaffMapper2 = MyLibrary.register(MyStaff2)
    MyLibrary.setup()

    MyStaffMapper().mymethod()
    MyStaffMapper2().mymethod()

    session.query(MyStaffMapper.library_field) \
        .filter(MyStaffMapper.staff_field != None) \
        .all()
Setup: Postgres 13, Python 3.7, SQLAlchemy 1.4
The current structure uses base.py to create an engine, a scoped session, and an augmented Base. All of this gets used in models.py, where we define a table class, and again in insert.py, where we test inserting new values into the database using the ORM. All of this is working well.
My question is regarding the def __init__(self) function in the models.py table classes. Without this function the code errors with TypeError: __init__() takes 1 positional argument but 5 were given.
As soon as I include the __init__ function the code works properly.
I am perplexed as to why this error is produced, given that all the models are defined via the declarative system, which means our class should get an __init__() constructor that automatically accepts keyword names matching the columns we’ve mapped.
I suspect there is an error in how I am coding the interactions between db_session = scoped_session and
Base = declarative_base(cls=Base, metadata=metadata_obj) and the way Base is being passed in class NumLimit(Base).
I can't quite work this out and would appreciate being directed to where I am creating this error. Thank you!
base.py
from sqlalchemy import Column, create_engine, Integer, MetaData
from sqlalchemy.orm import declared_attr, declarative_base, scoped_session, sessionmaker
engine = create_engine('postgresql://user:pass@localhost:5432/dev', echo=True)
db_session = scoped_session(
    sessionmaker(
        bind=engine,
        autocommit=False,
        autoflush=False
    )
)
# Augment the base class by using the cls argument of the declarative_base() function so all classes derived
# from Base will have a table name derived from the class name and an id primary key column.
class Base:
    @declared_attr
    def __tablename__(cls):
        return cls.__name__.lower()

    id = Column(Integer, primary_key=True)
# Write all tables to schema 'collect'
metadata_obj = MetaData(schema='collect')
# Instantiate a Base class for our class definitions
Base = declarative_base(cls=Base, metadata=metadata_obj)
models.py
from base import Base
from sqlalchemy import Column, DateTime, Integer, Text
from sqlalchemy.dialects.postgresql import UUID
import uuid
class NumLimit(Base):
    org = Column(UUID(as_uuid=True), default=uuid.uuid4, unique=True)
    limits = Column(Integer)
    limits_rate = Column(Integer)
    rate_use = Column(Integer)

    def __init__(self, org, limits, limits_rate, rate_use):
        super().__init__()
        self.org = org
        self.limits = limits
        self.limits_rate = limits_rate
        self.rate_use = rate_use

    def __repr__(self):
        return f'<NumLimit(org={self.org}, limits={self.limits}, limits_rate={self.limits_rate},' \
               f' rate_use={self.rate_use})>'
insert.py
from base import Base, db_session, engine
from models import NumLimit

def insert_num_limit():
    # Generate database schema based on definitions in models.py
    Base.metadata.create_all(bind=engine)

    # Create instances of the NumLimit class
    a_num_limit = NumLimit('123e4567-e89b-12d3-a456-426614174000', 20, 4, 8)
    another_limit = NumLimit('123e4567-e89b-12d3-a456-426614174660', 7, 2, 99)

    # Use the current session to persist data
    db_session.add_all([a_num_limit, another_limit])

    # Commit current session to database and close session
    db_session.commit()
    db_session.close()
    return
All parameters of the __init__() generated by SQLAlchemy are keyword-only. It is as if the definition were:
def __init__(self, *, id=None, org=None, limits=None, limits_rate=None, rate_use=None):
    self.id = id
    self.org = org
    # etc...
So when you try to supply the arguments positionally, NumLimit('123e4567-e89b-12d3-a456-426614174000', 20, 4, 8), you get the error TypeError: __init__() takes 1 positional argument but 5 were given, because __init__() indeed takes only one positional argument (self). If you instead supply them as keyword arguments, NumLimit(org='123e4567-e89b-12d3-a456-426614174000', limits=20, limits_rate=4, rate_use=8), everything works.
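A minimal, self-contained demonstration of the two call styles (columns trimmed for brevity; the behaviour is the same under SQLAlchemy 1.4):

from sqlalchemy import Column, Integer
from sqlalchemy.orm import declarative_base

Base = declarative_base()

class NumLimit(Base):
    __tablename__ = 'numlimit'
    id = Column(Integer, primary_key=True)
    limits = Column(Integer)
    limits_rate = Column(Integer)

# Positional arguments hit the generated __init__(self, **kwargs):
try:
    NumLimit(20, 4)
except TypeError as exc:
    print(exc)   # __init__() takes 1 positional argument but 3 were given

# Keyword arguments matching the mapped columns work:
nl = NumLimit(limits=20, limits_rate=4)
print(nl.limits, nl.limits_rate)   # 20 4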
I want to write some py.test code to test 2 simple sqlalchemy ORM classes that were created based on this Tutorial. The problem is, how do I point py.test at a test database and roll back all changes when the tests are done? Is it possible to mock the database and run the tests without actually connecting to the database?
Here is the code for my classes:
from sqlalchemy import create_engine, ForeignKey
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy import Column, Integer, String
from sqlalchemy.orm import sessionmaker, relationship
eng = create_engine('mssql+pymssql://user:pass@host/my_database')
Base = declarative_base(eng)
Session = sessionmaker(eng)
intern_session = Session()
class Author(Base):
    __tablename__ = "Authors"

    AuthorId = Column(Integer, primary_key=True)
    Name = Column(String)
    Books = relationship("Book")

    def add_book(self, title):
        b = Book(Title=title, AuthorId=self.AuthorId)
        intern_session.add(b)
        intern_session.commit()

class Book(Base):
    __tablename__ = "Books"

    BookId = Column(Integer, primary_key=True)
    Title = Column(String)
    AuthorId = Column(Integer, ForeignKey("Authors.AuthorId"))
    Author = relationship("Author")
I usually do it this way:
I do not instantiate the engine and session with the model declarations; instead, I only declare a Base with no bind:
Base = declarative_base()
and I only create a session when needed with
engine = create_engine('<the db url>')
db_session = sessionmaker(bind=engine)
You can do the same by not using the intern_session in your add_book method, but rather using a session parameter:
def add_book(self, session, title):
    b = Book(Title=title, AuthorId=self.AuthorId)
    session.add(b)
    session.commit()
It makes your code more testable since you can now pass the session of your choice when you call the method. And you are no longer stuck with a session bound to a hardcoded database URL.
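For example, a quick sketch of calling the refactored method with a session of your choice (assuming the Author model and Base from the question, and using an in-memory SQLite database purely for illustration):

from sqlalchemy import create_engine
from sqlalchemy.orm import sessionmaker

engine = create_engine('sqlite://')
Base.metadata.create_all(engine)       # Base from the model module
session = sessionmaker(bind=engine)()

author = Author(Name='Jane Doe')
session.add(author)
session.commit()                       # populates author.AuthorId

author.add_book(session, 'A Testable Title')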
I add a custom --dburl option to pytest using its pytest_addoption hook.
Simply add this to your top-level conftest.py:
def pytest_addoption(parser):
    parser.addoption('--dburl',
                     action='store',
                     default='<if needed, whatever you want>',
                     help='url of the database to use for tests')
Now you can run pytest --dburl <url of the test database>
Then I can retrieve the dburl option from the request fixture:
From a custom fixture:
@pytest.fixture()
def db_url(request):
    return request.config.getoption("--dburl")

# ...
Inside a test:
def test_something(request):
    db_url = request.config.getoption("--dburl")
    # ...
At this point you are able to:
get the test db_url in any test or fixture
use it to create an engine
create a session bound to the engine
pass the session to a tested method
It is quite a mess to do this in every test, so you can make good use of pytest fixtures to ease the process.
Below are some fixtures I use:
from sqlalchemy import create_engine
from sqlalchemy.orm import scoped_session, sessionmaker
@pytest.fixture(scope='session')
def db_engine(request):
    """yields a SQLAlchemy engine which is disposed of at the end of the test session"""
    db_url = request.config.getoption("--dburl")
    engine_ = create_engine(db_url, echo=True)
    yield engine_
    engine_.dispose()

@pytest.fixture(scope='session')
def db_session_factory(db_engine):
    """returns a SQLAlchemy scoped session factory"""
    return scoped_session(sessionmaker(bind=db_engine))

@pytest.fixture(scope='function')
def db_session(db_session_factory):
    """yields a SQLAlchemy session which is rolled back after the test"""
    session_ = db_session_factory()
    yield session_
    session_.rollback()
    session_.close()
Using the db_session fixture you can get a fresh and clean db_session for each single test.
When the test ends, the db_session is rolled back, keeping the database clean.
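For instance, a hypothetical test built on these fixtures (reusing the Author model from the question):

def test_author_roundtrip(db_session):
    # Everything done here is rolled back when the test ends,
    # so the database stays clean between tests.
    author = Author(Name='Alice')
    db_session.add(author)
    db_session.flush()                  # assigns AuthorId without committing
    found = db_session.query(Author).filter_by(Name='Alice').one()
    assert found is author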
I am trying to set up a database just like in a tutorial, but I am getting a programming error that a table doesn't exist when I try to add a User.
This is the file that errors (database.py):
from sqlalchemy import create_engine, MetaData
from sqlalchemy.orm import scoped_session, sessionmaker
from sqlalchemy.ext.declarative import declarative_base
engine = create_engine(
    "mysql+pymysql://testuser:testpassword@localhost/test?charset=utf8",
    connect_args={
        "port": 3306
    },
    echo="debug",
    echo_pool=True
)

db_session = scoped_session(
    sessionmaker(
        bind=engine,
        autocommit=False,
        autoflush=False
    )
)
Base = declarative_base()
def init_db():
    import models
    Base.metadata.create_all(bind=engine)

    from models import User
    db_session.add(
        User(username="testuser", password_hash=b"", password_salt=b"", balance=1)
    )
    db_session.commit()
    print("Initialized the db")

if __name__ == "__main__":
    init_db()
To initialize the database (create the tables) I just run the file.
It errors when it creates the test user.
Here is models.py:
from sqlalchemy import Column, Integer, Numeric, Binary, String
from sqlalchemy.orm import relationship
from database import Base
class User(Base):
    __tablename__ = "users"

    id = Column(Integer, primary_key=True)
    username = Column(String(16), unique=True)
    password_hash = Column(Binary(32))
    password_salt = Column(Binary(32))
    balance = Column(Numeric(precision=65, scale=8))

    def __repr__(self):
        return "<User(balance={})>".format(self.balance)
I tried:
Committing before adding users (after create_all)
Drop existing tables from the database (although it seems like the table never gets committed)
from models import User instead of import models (before create_all)
Sorry if there are so many similar questions; I promise I scavenged for answers, but it's always silly mistakes that I made sure I didn't make (or at least the ones I saw).
I am using MariaDB.
Sorry for the long post, many thanks in advance.
The Base in database.py isn't the same Base that is imported into models.py.
A simple test is to put a print('creating Base') function call just above the Base = declarative_base() statement, and you'll see it is being created twice.
Python calls the module that is being executed '__main__', which you know as you have the if __name__ == '__main__' conditional at the bottom of your module. So the first Base that is created is __main__.Base. Then, in models.py, from database import Base causes the database module to be parsed again, creating database.Base in the namespace, and that is the Base from which User inherits. Then back in database.py, the Base.metadata.create_all(bind=engine) call is using the metadata from __main__.Base which has no tables in it, and as such creates nothing.
Don't execute out of the module that creates the Base instance. Create another module called main.py (or whatever), move your init_db() function there, and import Base, db_session and engine from database.py into main.py. That way, you are always using the same Base instance. Here is an example main.py:
from database import Base, db_session, engine
from models import User

def init_db():
    Base.metadata.create_all(bind=engine)
    db_session.add(
        User(username="testuser", password_hash=b"", password_salt=b"", balance=1)
    )
    db_session.commit()
    print("Initialized the db")

if __name__ == "__main__":
    init_db()
Declare the Base class once (per database) and import it into all modules which define table classes (inherited from Base).
For Base (via its metaclass) to scan and find all the classes inherited from it, we need to import every module where such table classes are defined into the module where we call Base.metadata.create_all(engine).
You need to import the relevant model in the module where you call Base.metadata.create_all. Example below, creating the user table:
from ModelBase import Base
from UserModel import User

def create_db_schema(engine):
    Base.metadata.create_all(engine, checkfirst=True)
I use SQLAlchemy classical mapping to organize my mappers and models, and I want to create a hierarchy of model classes. I use the following code:
import sys

from sqlalchemy import BigInteger, Column, MetaData, String, Table
from sqlalchemy.orm import mapper

from some_module import BaseClass, child_classes

table = Table('tab1', MetaData(schema='public'),
              Column('id', BigInteger, primary_key=True),
              Column('discrim', String(64)))

base_mapper = mapper(BaseClass, table)

module = sys.modules[__name__]
for cls in child_classes:
    # Each cls object has a class-level attribute `name`
    submapper = mapper(cls, inherits=base_mapper, local_table=None,
                       polymorphic_on=table.c.discrim,
                       polymorphic_identity=cls.name)
    setattr(module, cls.name + '_mapper', submapper)
And then I want to load models from the DB using the base_mapper.
query = db_session.query(base_mapper)
query = query.filter(base_mapper.mapped_table.c.discrim == 'foo')
print(query.all())
I expect to get an array of Foo class objects, but in reality I get BaseClass objects. What am I doing wrong?
Eventually, I found a solution. The main point is to move the polymorphic_on= argument from the subclass mappers to the base class mapper.
# ...
base_mapper = mapper(BaseClass, table,
                     polymorphic_on=table.c.discrim)
# ...
for cls in child_classes:
    submapper = mapper(cls, inherits=base_mapper,
                       polymorphic_identity=cls.name)
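With the discriminator configured on the base mapper, it is consulted whenever rows are loaded, so the earlier query now yields subclass instances; roughly (same hypothetical names as in the question):

query = db_session.query(BaseClass)
query = query.filter(table.c.discrim == 'foo')
print(query.all())   # instances of the class registered with
                     # polymorphic_identity='foo', not plain BaseClass objects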
I am working on a Pyramid app with SQLAlchemy as the ORM. I am trying to test a model with a class method:
# this is essentially a global used by all the models
Session = scoped_session(sessionmaker(autocommit=False))

class Role(Base):
    __tablename__ = 'role'

    id = sa.Column(sa.types.Integer, primary_key=True)
    name = sa.Column(sa.types.Text, unique=True, nullable=False)

    def __init__(self, **kwargs):
        super(Role, self).__init__(**kwargs)

    @classmethod
    def find_all(self):
        return Session.query(Role).order_by(Role.name).all()
I am using factory_boy to test and here is how I am trying to set up my testing factory:
import factory
from factory.alchemy import SQLAlchemyModelFactory
from sqlalchemy import create_engine
from sqlalchemy.orm import scoped_session, sessionmaker

from zk.model.meta import Base
from zk.model.role import Role

session = scoped_session(sessionmaker())
engine = create_engine('sqlite://')
session.configure(bind=engine)
Base.metadata.create_all(engine)

class RoleFactory(SQLAlchemyModelFactory):
    FACTORY_FOR = Role
    FACTORY_SESSION = session
However when I try to call RoleFactory.find_all() in a test, I get an error: E UnboundExecutionError: Could not locate a bind configured on mapper Mapper|Role|role, SQL expression or this Session
I tried monkeypatching meta and replacing that global Session with my session, but then I get this error: E AttributeError: type object 'RoleFactory' has no attribute 'find_all'
I tried calling RoleFactory.FACTORY_FOR.find_all() but then I get the same UnboundExecutionError.
Do I need to do something else for factory_boy to know about the class method?
This might be too obvious, but it seems that what you've got is a RoleFactory instance when you need a Role instance; the factory wouldn't have access to any classmethods, since it's not a child of the class. Try doing this and see what happens:
role = RoleFactory.build()
roles = role.find_all()
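Note that find_all() still executes through the module-level Session global, so that scoped session must also be bound to your test engine; a sketch, assuming the global Session is importable from the models' meta module (adjust the import to wherever it actually lives):

from zk.model import meta

# Bind the models' scoped session to the test engine so that
# Session.query(...) inside Role.find_all() can execute.
meta.Session.configure(bind=engine)

role = RoleFactory.build()
roles = role.find_all()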