Python class attributes and SQLAlchemy relationships

Ok so I have two problems:
My first is SQLAlchemy related. I'm forced to use SQLite and I have to implement some cascade deletes. Now I've found that relationships do the job, however these should normally be declared in the parent table. But due to a design decision I'm forced to do this in the child, so I'm doing something like:
from sqlalchemy import Column, Integer, String
from sqlalchemy.orm import relationship, backref

class Operation(Base):
    """
    The class used to log any action executed in Projects.
    """
    __tablename__ = 'OPERATIONS'

    id = Column(Integer, primary_key=True)
    parameters = Column(String)
    # ..... rest of class here ....

class DataType(Base):
    __tablename__ = 'DATA_TYPES'

    id = Column(Integer, primary_key=True)
    gid = Column(String)
    # ...more params...
    parent_operation = relationship(Operation,
                                    backref=backref("DATA_TYPES",
                                                    order_by=id,
                                                    cascade="all,delete"))
    # ...rest of class...
Now this seems to work, but I'm still not certain of a few things.
Firstly, what can I do with parent_operation from here on? I see that the cascade works, but I make no use of parent_operation except for the actual declaration.
Secondly, does "DATA_TYPES", the first parameter to the backref above, need to be the name of the child table, or does it just need to be unique per model?
And finally, in my case both the Operation and DataType classes are in the same module, so I can pass Operation as the first parameter to the relationship. If this wasn't the case and I had them in separate modules, and I still wanted to declare this relationship, should I pass 'Operation' or 'OPERATIONS' to the relationship (class name or table name)?
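For illustration, once the relationship and backref are in place they support ordinary attribute navigation and querying. A minimal sketch, assuming a configured session and a ForeignKey column on DATA_TYPES pointing at OPERATIONS (elided above):

    # Many-to-one: navigate from a DataType row to its Operation.
    dt = session.query(DataType).first()
    print(dt.parent_operation.parameters)

    # The backref adds a DATA_TYPES collection to Operation.
    op = session.query(Operation).first()
    for child in op.DATA_TYPES:
        print(child.gid)

    # cascade="all,delete" on the backref: deleting the Operation
    # also deletes its DataType children.
    session.delete(op)
    session.commit()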
Now my second problem is more core Python, but since it still has some connection with the above I'll add it here. I need to be able to add a class attribute dynamically. Basically I need to add a relationship like the ones declared above:
class BaseClass(object):
    def __init__(self):
        my_class = self.__class__
        if not hasattr(my_class, my_class.__name__):
            reference = ("my_class." + my_class.__name__ + " = relationship"
                         "('DataType', backref=backref('" + my_class.__name__ + "', "
                         "cascade='all,delete'))")
            exec reference
The reasons WHY I need to do this are complicated and have to do with some design decisions (basically I need every class that extends this one to have a relationship declared to the 'DataType' class). Now I know using the exec statement isn't good practice. So is there a better way to do the above?
Regards,
Bogdan

For the second part of your question, keep in mind that anything in your class constructor won't be instrumented by SQLAlchemy. In your example you can simply declare the relationship in its own class (note that it does not inherit from your declarative_base class) and then inherit it in any subclass. Wrapping the relationship in declared_attr gives each subclass its own copy, something like this:
from sqlalchemy.ext.declarative import declared_attr

class BaseDataType(object):
    @declared_attr
    def parent_operation(cls):
        # declared_attr: each subclass gets its own relationship/backref
        return relationship(Operation, backref=cls.__name__.lower())

class DataTypeA(Base, BaseDataType):
    id = Column(Integer, primary_key=True)

class DataTypeB(Base, BaseDataType):
    id = Column(Integer, primary_key=True)
The SQLAlchemy documentation gives good examples of what's possible:
http://www.sqlalchemy.org/docs/orm/extensions/declarative.html#mixing-in-relationships
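As an aside on the exec question: if an attribute really must be attached at runtime, setattr does the same job without building source strings. A minimal sketch, reusing the relationship arguments from the question, and with the same caveat that attributes added in __init__ are not instrumented by the mapper:

    class BaseClass(object):
        def __init__(self):
            my_class = self.__class__
            name = my_class.__name__
            if not hasattr(my_class, name):
                setattr(my_class, name,
                        relationship('DataType',
                                     backref=backref(name, cascade='all,delete')))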

Related

SQLAlchemy: Use Enum within a Mixin [PostgreSQL]

I'm trying to use an Enum within my Mixin model
The mixin model looks like this:
from sqlalchemy import Column, Integer, Enum
from enum import Enum as EnumClass

class MyEnum(EnumClass):
    active = 1
    deactive = 2
    deleted = 3

class MyMixin:
    id = Column(Integer(), primary_key=True)
    status = Column(Enum(MyEnum), nullable=False, unique=False, default=MyEnum.active)
And I have other models inheriting my mixin like this:
class Inherited1(MyMixin, Base):
    __tablename__ = 'test_table_1'

class Inherited2(MyMixin, Base):
    __tablename__ = 'test_table_2'
I'm using PostgreSQL as my database. When I try to use Base.metadata.create_all(), it returns the following error:
sqlalchemy.exc.ProgrammingError: (psycopg2.ProgrammingError) type "myenum" already exists
The reason is that it's trying to recreate the enum type in PostgreSQL. A workaround would be defining the enum on the inherited classes and passing a name parameter to the Enum type used for the column.
But I wanted to know: is there any better way to use MyMixin with the Enum type and inherit it in multiple models?
I should add that I am using Alembic for migrating. I even tried to add create_type=False as a parameter to sa.Enum() within the sa.Column() definition for my Enum type, and it didn't work, like this:
sa.Column('status', sa.Enum('active', 'deactive', 'deleted', name='myenum', create_type=False), nullable=False)
Workaround:
I added this part to the question, though I still don't think it's the best practice.
I edited the migration script, changing sa.Enum() to sa.dialects.postgresql.ENUM():
sa.Column('status', sa.dialects.postgresql.ENUM('active', 'deactive', 'deleted', name='myenum', create_type=False), nullable=False)
It seems the sa.Enum type doesn't take a create_type parameter, since that parameter is exclusive to the PostgreSQL dialect.
But I'm still looking for a better way that doesn't require making changes to the migration script
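One approach that avoids editing the migration script is to construct the PostgreSQL ENUM type once, at module level, with create_type=False, share that single object in the mixin, and create the type explicitly before the tables. A sketch, assuming the MyEnum and Base from the question and an engine object; sqlalchemy.dialects.postgresql.ENUM accepts a Python enum class just like sa.Enum and has create()/drop() methods:

    from sqlalchemy.dialects.postgresql import ENUM

    # A single shared type object; create_type=False stops every table's
    # DDL from attempting its own CREATE TYPE.
    status_enum = ENUM(MyEnum, name='myenum', create_type=False)

    class MyMixin:
        id = Column(Integer(), primary_key=True)
        status = Column(status_enum, nullable=False, default=MyEnum.active)

    # Create the type exactly once, then the tables.
    status_enum.create(engine, checkfirst=True)
    Base.metadata.create_all(engine)

With Alembic, the same shared object can be created inside the migration with status_enum.create(op.get_bind(), checkfirst=True) before the first operation that references it.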

How can I set an index on a declared attribute using SQLAlchemy Declarative ORM?

When I try to define an index on a declared attribute:

class Event(Base):
    @declared_attr
    def user_id(cls):
        return Column(BigInteger, ForeignKey("users.user_id"), nullable=False)

    idx_event_user_id = Index('idx_event_user_id', user_id)
I get the following error:
sqlalchemy.exc.ArgumentError: Element <sqlalchemy.ext.declarative.api.declared_attr object at 0x1066ec648> is not a string name or column element
Is there another way to do this? Can I set indices on a declared attribute?
When dealing with inheritance/mixins you should pass indexes that you want created for all classes and their underlying Tables as "inline" definitions using __table_args__, which should also be a declared attribute in this case, as explained in "Creating Indexes with Mixins":
class Event(Base):
    @declared_attr
    def user_id(cls):
        return Column(BigInteger, ForeignKey("users.user_id"), nullable=False)

    @declared_attr
    def __table_args__(cls):
        # Return a tuple of arguments to pass to Table
        return (Index(f'idx_{cls.__tablename__}_user_id', 'user_id'),)
This will avoid name conflicts between indexes created for different (sub)classes. Note that here this "inline" form uses string names to identify columns to index, but cls.foo_id will work as well when in a declared attribute. In general there's no need to assign an Index as a model attribute, and in some situations it may even lead to confusion.
The simple solution to indexing a column is to just pass index=True to Column. This is a shortcut for creating an anonymous index for the column in question:
class Event(Base):
    @declared_attr
    def user_id(cls):
        return Column(BigInteger, ForeignKey("users.user_id"), nullable=False, index=True)
When you are not dealing with inheritance/mixins, you do not need the @declared_attr wrapping:
class MyModel(Base):
    __tablename__ = 'mymodel'

    id = Column(Integer, primary_key=True)
    foo = Column(Integer)
    bar = Column(Integer)

    # "Inline", but gets the actual column instead of a string name.
    # Possible with Declarative.
    __table_args__ = (Index('idx_mymodel_foo', foo),)

# Finds the table through the Column used in the definition.
Index('idx_mymodel_bar', MyModel.bar)
The reason why you are getting the error is that during class construction the class definition's body is evaluated and the resulting namespace is then used as the class' namespace. During that evaluation
idx_event_user_id = Index('idx_event_user_id', user_id)
results in Index receiving the declared attribute descriptor object assigned to user_id in that namespace as is, and so SQLAlchemy complains.
When you access descriptors through a constructed class object, or an instance of it, they get to do their thing, which in case of a declared attribute means that it evaluates to the mapped property or special declarative member it represents.
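A small illustration of that difference, using a hypothetical Demo model (not from the original post):

    from sqlalchemy import BigInteger, Column, Integer
    from sqlalchemy.ext.declarative import declared_attr

    class Demo(Base):
        __tablename__ = 'demo'
        id = Column(Integer, primary_key=True)

        @declared_attr
        def user_id(cls):
            return Column(BigInteger, nullable=False)

        # Inside the class body, the name `user_id` is the declared_attr
        # descriptor itself, not a Column -- passing it to Index() here
        # is exactly what raises the ArgumentError.

    # Once the class is constructed, the descriptor has been resolved:
    print(type(Demo.user_id))  # an InstrumentedAttribute wrapping the Column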

Separating SQLAlchemy mapping class from functional class in python

I'm sure there have been questions about this before, and I'm just not searching for the right keywords...
Is it considered good/bad practice to separate ORM mapping classes from the classes that are used for processing in an application?
For instance:
class Artist:
    def __init__(self, name, age, genre):
        self.name = name
        self.age = age
        self.genre = genre
        self.score = None  # Calculated in real-time, never stored to DB

class ArtistM(Base):
    __tablename__ = "artists"

    id = Column(Integer, primary_key=True)
    name = Column(String(50))
    age = Column(Integer)
    genre = Column(String(50))
A conceivable benefit would be keeping the classes used by the primary application completely free of ORM stuff. For instance, assume I've created an Artist class and an entire suite of tools which operate on that class. Later on, I want to start handling LOTS of these objects and decide to add a DB component. Do I want to go back and modify the original Artist class, or make new mapping classes on top?
Use inheritance to achieve this:
# Option-1: use ArtistM everywhere
class ArtistM(Artist, Base):
    # ...

# Option-2: use Artist everywhere
class Artist(ArtistM):
    # ...
I prefer option 1, as it does not clutter your pure classes with persistence-related information. And in your code you could even import them under pure names so that your other code can be used interchangeably:

from mypuremodule import Artist
# or
from mymixedinmodule import ArtistM as Artist

# then use `Artist` everywhere
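Fleshed out, option 1 might look like this: a sketch that replaces the question's standalone ArtistM with a mapped subclass of the pure Artist (the session object is assumed):

    class ArtistM(Artist, Base):
        __tablename__ = "artists"

        id = Column(Integer, primary_key=True)
        name = Column(String(50))
        age = Column(Integer)
        genre = Column(String(50))

    # Behaves like Artist (score stays computed, never stored),
    # but instances can also be persisted.
    artist = ArtistM("Miles Davis", 65, "jazz")
    session.add(artist)
    session.commit()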

Sqlalchemy (and alembic) Concrete table inheritance without polymorphic union

I'm new to SQLAlchemy, and I'm trying to achieve the following objects/db tables structure (using alembic as well):
class BaseConfig(Base):
    pk = Column(Integer, primary_key=True)
    name = Column(Unicode(150), nullable=False, unique=True)
    ...
    # Lots of other general columns, such as description, display_name, creation time etc.
And I want all other configuration classes to inherit the predefined columns from it:
class A(BaseConfig):
    __tablename__ = "A"

    column1 = Column...
    column2 = Column...

class B(BaseConfig):
    __tablename__ = "B"

    column1 = Column...
    column2 = Column...
The BaseConfig table is not a real table, only a class that holds common columns.
Other than that, nothing is related between A and B, and there is no need for a shared pk etc. It seems that using polymorphic_union is not relevant here either.
When I try to run alembic autogenerate, I get an error that BaseConfig doesn't have a mapped table, which is true, and I really don't see a reason to add the polymorphic union option to BaseConfig, since this class is generic.
Any suggestions? (In Django with South this works out of the box, but here it seems this behavior is not supported as easily.)
Thanks,
Li
Either use mixins (read the Mixing in Columns part of the documentation), where your BaseConfig does not subclass Base and the actual tables subclass both Base and BaseConfig:
class BaseConfig(object):
    # ...

class MyModel(BaseConfig, Base):
    # ...

OR simply use __abstract__:

class BaseConfig(Base):
    __abstract__ = True
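In full, the __abstract__ route might look like this (a sketch reusing the columns from the question; the column1 definitions are placeholders):

    class BaseConfig(Base):
        __abstract__ = True  # no table is emitted for this class

        pk = Column(Integer, primary_key=True)
        name = Column(Unicode(150), nullable=False, unique=True)

    class A(BaseConfig):
        __tablename__ = "A"
        column1 = Column(Integer)

    class B(BaseConfig):
        __tablename__ = "B"
        column1 = Column(Integer)

    # create_all (and alembic autogenerate) now see only tables A and B,
    # each with its own independent copy of pk and name.
    Base.metadata.create_all(engine)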

dynamic sqlalchemy columns on class

I have an assortment of SQLAlchemy classes, e.g.:
class OneThing(Base):
    id = Column(Integer, Sequence('one_thing_seq'), primary_key=True)
    thing = Column(Boolean())
    tag = Column(String(255))

class TwoThing(Base):
    id = Column(Integer, Sequence('two_thing_seq'), primary_key=True)
    thing = Column(Boolean())
    tag = Column(String(100))
i.e. fairly standard class constructions for SQLAlchemy.
Question: Is there a way to get greater control over column creation, or does it need to be relatively static? I'd at least like to consolidate the more mundane columns, and their imports, across a number of files like this. For example (not as a mixin, because I already do that for certain columns that are the same across models, but as a function that returns a column based on potential vars):
class OneThing(Base):
    id = Base.id_column()
    thing = Base.bool_column
    tag = Base.string_col(255)

class TwoThing(Base):
    id = Base.id_column()
    thing = Base.bool_column
    tag = Base.string_col(100)
Seems simple enough and fairly approachable, so I could just start reading/typing, but I have not found any examples or the proper search terms for examples yet. There is no reason that class columns need to be static, and it is probably simple. Is this a thing, or a foolish thing?
from sqlalchemy import Column, Boolean, Integer, String

def c_id():
    return Column(Integer, primary_key=True)

def c_bool():
    return Column(Boolean, nullable=False, default=False)

def c_string(length):
    return Column(String(length), nullable=False, default='')

class Thing(Base):
    id = c_id()
    thing = c_bool()
    tag = c_string(255)
The SQLAlchemy developer goes into more detail here: http://techspot.zzzeek.org/2011/05/17/magic-a-new-orm/
The Column() call is not magical; you can use any random way to create the appropriate object. The magic (i.e. binding the column to the variable name and the table) happens in Base's metaclass.
So one solution is for you to write your own code which returns a Column() or three -- nothing prevents you from doing this:
class Thing(Base):
    id, thing, tag = my_magic_creator()
On the other hand, you can drop all these assignments wholesale and do the work in a metaclass; see my answer here: Creating self-referential tables with polymorphism in SQLALchemy, for a template on how to do that.
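For illustration, my_magic_creator could be as simple as a function returning a tuple of Column objects; a hypothetical sketch, not taken from the linked answer:

    from sqlalchemy import Column, Boolean, Integer, String

    def my_magic_creator():
        # Three common columns, in a fixed order:
        # primary key, boolean flag, tag string.
        return (Column(Integer, primary_key=True),
                Column(Boolean, nullable=False, default=False),
                Column(String(255), nullable=False, default=''))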
