How to model enums backed by integers with sqlalchemy? - python

I am using SQLAlchemy as a readable way to model my database; I'm only interested in generating the database definition for several engines from my model.
Some of the columns in my tables have type Enum, which works fine in engines such as MySQL, since it has native enum support. However, for SQL Server it generates the column as VARCHAR and adds a constraint to check that the values are within the expected enum values I specify.
I'd like to replace this fallback with a numeric one, so that the column type is actually numeric and the constraint checks that the values are within the range of the enum size (assuming sequential values starting at 0).
I have tried creating a TypeDecorator with Enum as impl, but this was not enough, or I did not know how to make it work. I also tried to copy the code for the Boolean type and mix it with the Enum type to create my own type, but it seems that database compiler support is required too.
Is there a way I can achieve this without having to patch sqlalchemy itself?
Note that I am not interested in querying the database with Python; after the schema is generated, I'm done, so that might simplify things.

Here's what you need:
import sqlalchemy as sa

class IntEnum(sa.types.TypeDecorator):
    impl = sa.Integer

    def __init__(self, enumtype, *args, **kwargs):
        super().__init__(*args, **kwargs)
        self._enumtype = enumtype

    def process_bind_param(self, value, dialect):
        # pass NULLs through untouched so nullable columns work
        return value if value is None else value.value

    def process_result_value(self, value, dialect):
        return value if value is None else self._enumtype(value)
And then you use it like this:
from enum import Enum
from sqlalchemy.ext.declarative import declarative_base

class MyEnum(Enum):
    one = 1
    two = 2
    three = 3

engine = sa.create_engine('sqlite:///:memory:')
session = sa.orm.sessionmaker(bind=engine)()
Base = declarative_base()

class Stuff(Base):
    __tablename__ = 'stuff'
    id = sa.Column('id', sa.Integer, primary_key=True)
    thing = sa.Column('num', IntEnum(MyEnum))

Base.metadata.create_all(engine)
session.add(Stuff(thing=MyEnum.one))
session.add(Stuff(thing=MyEnum.two))
session.add(Stuff(thing=MyEnum.three))
session.commit()
engine.execute(sa.text('insert into stuff values(4, 2);'))
for thing in session.query(Stuff):
    print(thing.id, thing.thing)
Really, your only problem is that impl needs to be sa.Integer, as that's what actually backs the enum, not enum.Enum.
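If you also want the CHECK constraint the question asks for (constraining the integer to the enum's range), one option is to attach a CheckConstraint next to the column. This is a sketch, not part of the original answer; it assumes sequential enum values starting at 0, as the question states, and the class and constraint names are illustrative:

```python
import enum

import sqlalchemy as sa
from sqlalchemy.orm import declarative_base

Base = declarative_base()

class MyEnum(enum.Enum):
    one = 0
    two = 1
    three = 2

class CheckedStuff(Base):
    __tablename__ = 'checked_stuff'
    id = sa.Column(sa.Integer, primary_key=True)
    num = sa.Column(sa.Integer, nullable=False)
    __table_args__ = (
        # range check assumes sequential enum values starting at 0
        sa.CheckConstraint(f'num BETWEEN 0 AND {len(MyEnum) - 1}',
                           name='ck_checked_stuff_num'),
    )

# the emitted DDL contains the CHECK regardless of dialect
ddl = str(sa.schema.CreateTable(CheckedStuff.__table__))
```

On SQL Server this should render as an integer column plus a CHECK, which is the numeric fallback the question describes.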

Related

SQLAlchemy: Use Enum within a Mixin [PostgreSQL]

I'm trying to use an Enum within my Mixin model.
The mixin model looks like this:
from sqlalchemy import Column, Integer, Enum
from enum import Enum as EnumClass

class MyEnum(EnumClass):
    active = 1
    deactive = 2
    deleted = 3

class MyMixin:
    id = Column(Integer(), primary_key=True)
    status = Column(Enum(MyEnum), nullable=False, unique=False, default=MyEnum.active)
And I have other models inheriting my mixin like this:
class Inherited1(MyMixin, Base):
    __tablename__ = 'test_table_1'

class Inherited2(MyMixin, Base):
    __tablename__ = 'test_table_2'
I'm using PostgreSQL as my database. When I try to use Base.metadata.create_all(), it raises the following error:
sqlalchemy.exc.ProgrammingError: (psycopg2.ProgrammingError) type "myenum" already exists
The reason is that it's trying to recreate the enum type in PostgreSQL. A workaround would be defining the enum on the inherited classes and passing a name parameter to the Enum type used for the column.
But I wanted to know is there any better way to use MyMixin with the Enum type and inherit it for multiple models?
I should add that I am using Alembic for migrating. I even tried adding create_type=False as a parameter to sa.Enum() within the sa.Column() definition for my Enum type, and it didn't work, like this:
sa.Column('status', sa.Enum('active', 'deactive', 'deleted', name='myenum', create_type=False), nullable=False)
Workaround:
I added this part to the question because I still don't think it's the best practice.
I edited the migration script, changing sa.Enum() to sa.dialects.postgresql.ENUM():
sa.Column('status', sa.dialects.postgresql.ENUM('active', 'deactive', 'deleted', name='myenum', create_type=False), nullable=False)
It seems the sa.Enum type doesn't take a create_type parameter, since that parameter is exclusive to the PostgreSQL ENUM type.
But I'm still looking for a better way that doesn't require making changes to the migration script.
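One pattern worth considering (a sketch, not from the question) is to build the Enum type object once at module level and hand that same object to every model through @declared_attr, so all tables reference a single "myenum" type object; SQLAlchemy's create_all should then only emit CREATE TYPE once. Whether Alembic also deduplicates the type may still depend on your Alembic version and configuration:

```python
import enum

import sqlalchemy as sa
from sqlalchemy.orm import declarative_base, declared_attr

Base = declarative_base()

class MyEnum(enum.Enum):
    active = 1
    deactive = 2
    deleted = 3

# one shared type object for all tables that use it
STATUS_TYPE = sa.Enum(MyEnum, name='myenum')

class MyMixin:
    id = sa.Column(sa.Integer, primary_key=True)

    @declared_attr
    def status(cls):
        # each class gets its own Column, but they share STATUS_TYPE
        return sa.Column(STATUS_TYPE, nullable=False, default=MyEnum.active)

class Inherited1(MyMixin, Base):
    __tablename__ = 'test_table_1'

class Inherited2(MyMixin, Base):
    __tablename__ = 'test_table_2'
```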

Process fields in SQLAlchemy model (using flask_sqlalchemy)

I am using SQLAlchemy through flask_sqlalchemy. A model receives input from HTML forms. I would like this input to be stripped of any tags. Instead of doing this several times in the code before assignment, I thought it might be better to implement this somehow in the model object.
The possibilities I could think of were:
Derive own column types
Wrap a proxy class around the column types
Define kind of a decorator that does the above
Modify the model object to intercept assignments
The first three solutions seem more elegant, but I don't understand how to implement them. The main reason is that I don't understand exactly how SQLAlchemy extracts the table structure and column types from the column variables, and how assignment to these is handled, in particular when accessed through the flask_sqlalchemy class.
I played around with the last option in the list above, and came up with this (partial) solution:
import bleach

class Example(db.Model):
    __tablename__ = 'examples'
    id = db.Column(db.Integer, primary_key=True)
    field1 = db.Column(db.Text)
    field2 = db.Column(db.String(64))

    _bleach_columns = ('field1', 'field2')

    def __init__(self, **kwargs):
        for key in Example._bleach_columns:
            if key in kwargs:  # guard against fields not passed to the constructor
                kwargs[key] = bleach.clean(kwargs[key], tags=[], strip=True)
        super(Example, self).__init__(**kwargs)
This works when creating objects using Example(field1='foo', field2='bar'). However, I am uncertain how to handle the assignment of individual fields. I was thinking of something along these lines, but am unsure about the parts marked as ASSIGN:
def __setattr__(self, attr, obj):
    if attr in Example._bleach_columns:
        ASSIGN(....., bleach.clean(obj, tags=[], strip=True))
    else:
        ASSIGN(....., obj)
More generally, my impression is that this is not the best way to handle tag filtering. I'd therefore appreciate any hint on how best to implement this behaviour, ideally with a decorator or new column types.
It looks like this could be done with a TypeDecorator (link) that applies bleach in process_bind_param. However, I could not figure out how to apply this decorator to the flask_sqlalchemy based column definition in the db.Model-derived class above.
I finally managed to solve this... which was easy, as usual, once one understands what it is all about.
The first thing was to understand that db.Column is the same as SQLAlchemy's Column, so I could use the same syntax. To implement variable-length strings, I used a class factory to return the decorators. If there is another solution to implement the length, I'd be interested to hear about it. Anyway, here is the code:
import bleach
from sqlalchemy import types

def bleachedStringFactory(length):
    class customBleachedString(types.TypeDecorator):
        impl = types.String(length)

        def process_bind_param(self, value, dialect):
            if value is None:
                return None
            return bleach.clean(value, tags=[], strip=True)

        def process_result_value(self, value, dialect):
            return value
    return customBleachedString

class Example(db.Model):
    __tablename__ = 'examples'
    id = db.Column(db.Integer, primary_key=True)
    field1 = db.Column(bleachedStringFactory(64), unique=True)
    field2 = db.Column(bleachedStringFactory(128))
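As an alternative to a custom type, SQLAlchemy's @validates decorator intercepts attribute assignment at the ORM level, which also covers the individual-field assignment case from the question. A minimal sketch follows; it uses a naive regex instead of bleach purely so the example is self-contained, and real sanitization should keep using bleach:

```python
import re

import sqlalchemy as sa
from sqlalchemy.orm import declarative_base, validates

Base = declarative_base()

class Example(Base):
    __tablename__ = 'examples'
    id = sa.Column(sa.Integer, primary_key=True)
    field1 = sa.Column(sa.Text)
    field2 = sa.Column(sa.String(64))

    @validates('field1', 'field2')
    def _strip_tags(self, key, value):
        # fires on Example(field1=...) and on later attribute assignment;
        # naive tag removal for demonstration only
        return re.sub(r'<[^>]*>', '', value) if value is not None else None

e = Example(field1='<b>hello</b>', field2='plain')
e.field1 = '<i>x</i>y'
```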

Sqlalchemy (and alembic) Concrete table inheritance without polymorphic union

I'm new to SQLAlchemy, and I'm trying to achieve the following objects/db tables structure (using alembic as well):
class BaseConfig(Base):
    pk = Column(Integer, primary_key=True)
    name = Column(Unicode(150), nullable=False, unique=True)
    ...
    # Lots of other general columns, such as description, display_name, creation time, etc.
And I want all other configuration classes to inherit the predefined columns from it:
class A(BaseConfig):
    __tablename__ = "A"
    column1 = Column...
    column2 = Column...

class B(BaseConfig):
    __tablename__ = "B"
    column1 = Column...
    column2 = Column...
The BaseConfig table is not a real table, only a class that holds common columns.
Other than that - nothing is related between A and B, and no need for a shared pk etc. It seems that using "polymorphic_union" is not relevant here as well.
When I try to run alembic autogenerate, I get the error that BaseConfig doesn't have a table mapped to the class - which is true, and I really don't see a reason to add the "polymorphic union" option to BaseConfig, since this class is generic.
Any suggestions? (In Django with south this works out of the box, but here it seems that this behavior is not supported easily).
Thanks,
Li
Either use mixins (read the "Mixing in Columns" part of the documentation), where your BaseConfig does not subclass Base and actual tables subclass both Base and BaseConfig:
class BaseConfig(object):
    # ...

class MyModel(BaseConfig, Base):
    # ...
OR simply use __abstract__:
class BaseConfig(Base):
    __abstract__ = True
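A self-contained sketch of the __abstract__ variant, using the column names from the question (the column1 types and the engine URL are illustrative assumptions):

```python
import sqlalchemy as sa
from sqlalchemy.orm import declarative_base

Base = declarative_base()

class BaseConfig(Base):
    __abstract__ = True  # no table is mapped for this class
    pk = sa.Column(sa.Integer, primary_key=True)
    name = sa.Column(sa.Unicode(150), nullable=False, unique=True)

class A(BaseConfig):
    __tablename__ = 'A'
    column1 = sa.Column(sa.Integer)

class B(BaseConfig):
    __tablename__ = 'B'
    column1 = sa.Column(sa.Integer)

engine = sa.create_engine('sqlite://')
Base.metadata.create_all(engine)  # emits CREATE TABLE only for A and B
```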

SQLAlchemy Decomposing DateTime Fields

I have a model of the form:
class MyModel(base):
    # ...
    datetime = Column(DateTime)
    # ...
and would like to create a .date and a .time property corresponding to the .datetime column. The SQLAlchemy documentation shows a couple of examples of combining properties (such as firstname + lastname => fullname) but nothing for decomposing.
I think using @hybrid_property and friends I can do the initial decomposition, but I am unsure about assignment (so if n = MyModel.query...one(), I wish to be able to do n.date = d and have it update the .datetime field).
My primary RDBMS is MySQL.
(For those that are interested in my motivation for wanting to do this: I have a lot of client-side code which is duck-typed to expect .date and .time fields however many stored procedures and triggers on the server expect a single .datetime column.)
The docs say explicitly that you can do this; you need to use the @hybrid_property and @value.setter decorators and your own code to return the date or time in the expected format:
from sqlalchemy.ext.hybrid import hybrid_property

class SomeClass(object):
    @hybrid_property
    def value(self):
        return self._value

    @value.setter
    def value(self, value):
        self._value = value
Full disclosure: I have used the property feature but not the setter feature.
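Applied to the question's model, the instance-level part might look like this; the setters below, which merge the new date or time into the existing .datetime value, are my guess at the intended behaviour:

```python
import datetime as dt

import sqlalchemy as sa
from sqlalchemy.orm import declarative_base
from sqlalchemy.ext.hybrid import hybrid_property

Base = declarative_base()

class MyModel(Base):
    __tablename__ = 'my_model'
    id = sa.Column(sa.Integer, primary_key=True)
    datetime = sa.Column(sa.DateTime, nullable=False)

    @hybrid_property
    def date(self):
        return self.datetime.date()

    @date.setter
    def date(self, value):
        # replace the date part, keep the time part
        self.datetime = dt.datetime.combine(value, self.datetime.time())

    @hybrid_property
    def time(self):
        return self.datetime.time()

    @time.setter
    def time(self, value):
        # replace the time part, keep the date part
        self.datetime = dt.datetime.combine(self.datetime.date(), value)

m = MyModel(datetime=dt.datetime(2020, 1, 2, 3, 4, 5))
m.date = dt.date(2021, 6, 7)  # datetime becomes 2021-06-07 03:04:05
```

Using .date or .time inside a query would additionally need @date.expression / @time.expression methods returning SQL expressions (e.g. something like sa.func.date(cls.datetime) on MySQL).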

Dynamic Class Creation in SQLAlchemy

We need to create SQLAlchemy classes to access multiple external data sources that will increase in number over time. We use the declarative base for our core ORM models, and I know we can manually specify new ORM classes using autoload=True to auto-generate the mapping.
The problem is that we need to be able to generate them dynamically, taking something like this:
from sqlalchemy.ext.declarative import declarative_base
Base = declarative_base()
stored = {}
stored['tablename'] = 'my_internal_table_name'
stored['objectname'] = 'MyObject'
and turning it into something like this dynamically:
class MyObject(Base):
    __tablename__ = 'my_internal_table_name'
    __table_args__ = {'autoload': True}
We don't want the classes to persist longer than necessary to open a connection, perform the queries, and close the connection. Therefore, ideally, we can put the items in the "stored" variable above into a database and pull them as needed. The other challenge is that the object name (e.g. "MyObject") may be used on different connections, so we cannot define it once and keep it around.
Any suggestions on how this might be accomplished would be greatly appreciated.
Thanks...
You can dynamically create MyObject using the 3-argument call to type:
type(name, bases, dict)
Return a new type object. This is essentially a dynamic form of the
class statement...
For example:
mydict = {'__tablename__': stored['tablename'],
          '__table_args__': {'autoload': True}}
MyObj = type(stored['objectname'], (Base,), mydict)

print(MyObj)
# <class '__main__.MyObject'>
print(MyObj.__base__)
# <class '__main__.Base'>
print(MyObj.__tablename__)
# my_internal_table_name
print(MyObj.__table_args__)
# {'autoload': True}
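A runnable variant of the same idea, with columns declared inline instead of autoload=True (which needs a live database connection); the id and name columns are illustrative assumptions:

```python
import sqlalchemy as sa
from sqlalchemy.orm import declarative_base

Base = declarative_base()

stored = {'tablename': 'my_internal_table_name', 'objectname': 'MyObject'}

# build the class body as a plain dict, then create the class with type()
mydict = {
    '__tablename__': stored['tablename'],
    'id': sa.Column(sa.Integer, primary_key=True),
    'name': sa.Column(sa.String(50)),
}
MyObj = type(stored['objectname'], (Base,), mydict)

print(MyObj.__name__)       # MyObject
print(MyObj.__tablename__)  # my_internal_table_name
```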
