I'm trying to use an Enum within my Mixin model
The mixin model looks like this:
from sqlalchemy import Column, Integer, Enum
from enum import Enum as EnumClass

class MyEnum(EnumClass):
    active = 1
    deactive = 2
    deleted = 3

class MyMixin:
    id = Column(Integer(), primary_key=True)
    status = Column(Enum(MyEnum), nullable=False, unique=False, default=MyEnum.active)
And I have other models inheriting my mixin like this:
class Inherited1(MyMixin, Base):
    __tablename__ = 'test_table_1'

class Inherited2(MyMixin, Base):
    __tablename__ = 'test_table_2'
I'm using PostgreSQL as my database. When I try to use Base.metadata.create_all(), it returns the following error:
sqlalchemy.exc.ProgrammingError: (psycopg2.ProgrammingError) type "myenum" already exists
The reason is that it's trying to recreate the enum type in PostgreSQL for each table. A workaround would be defining the enum column on the inherited classes and passing the name parameter to the Enum type used for the column.
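That workaround might look something like this (a sketch: status is moved off the mixin and declared per model with an explicit, shared type name):

# Sketch of the described workaround: MyMixin keeps only id, and each
# model declares its own status column with an explicit shared type name.
from sqlalchemy import Column, Enum

class Inherited1(MyMixin, Base):
    __tablename__ = 'test_table_1'
    status = Column(Enum(MyEnum, name='myenum'), nullable=False, default=MyEnum.active)

class Inherited2(MyMixin, Base):
    __tablename__ = 'test_table_2'
    status = Column(Enum(MyEnum, name='myenum'), nullable=False, default=MyEnum.active)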
But I wanted to know: is there any better way to use MyMixin with the Enum type and inherit it in multiple models?
I should add that I am using Alembic for migrations. I even tried to add create_type=False as a parameter to sa.Enum() within the sa.Column() definition for my enum type, like this, and it didn't work:
sa.Column('status', sa.Enum('active', 'deactive', 'deleted', name='myenum', create_type=False), nullable=False)
Workaround:
I added this part to the question because I still don't think it's the best practice.
I edited the migration script, changing sa.Enum() to sa.dialects.postgresql.ENUM():
sa.Column('status', sa.dialects.postgresql.ENUM('active', 'deactive', 'deleted', name='myenum', create_type=False), nullable=False)
It seems the sa.Enum type doesn't take the create_type parameter, since that parameter is exclusive to the PostgreSQL dialect.
But I'm still looking for a better way that doesn't require making changes to the migration script
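For completeness, here's a sketch of one direction that might avoid editing the script (assuming a PostgreSQL-only application; I haven't verified how Alembic autogenerate renders it):

# Sketch: PostgreSQL-only mixin using the dialect ENUM with create_type=False,
# so create_all() never tries to issue CREATE TYPE implicitly. The 'myenum'
# type must then be created once separately, e.g. via
# op.execute("CREATE TYPE myenum AS ENUM ('active', 'deactive', 'deleted')")
# in the first migration.
from sqlalchemy import Column, Integer
from sqlalchemy.dialects.postgresql import ENUM

class MyMixin:
    id = Column(Integer(), primary_key=True)
    status = Column(ENUM(MyEnum, name='myenum', create_type=False),
                    nullable=False, default=MyEnum.active)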
Related
Consider this code:
from sqlmodel import SQLModel, Field
from sqlalchemy import Column, Integer

class Task(SQLModel, table=True):
    id = Column(Integer, primary_key=True, index=True)
I only stepped away from Python for a few weeks, and now there are arguments in the class-inheritance brackets? What does this do, how do I do it, and are there drawbacks/benefits? What's it called? Once I know what it's called, I can look it up.
EDIT: This is not a typo, this code works as is.
Short answer
table=True is the class keyword argument that SQLModel uses to decide whether a class should behave as a plain Pydantic model or as an SQLAlchemy model.
Long answer
Not that long... but here is the class and here is where the check happens. There is more to it, but basically it is there to distinguish whether the class should be a Pydantic model or an SQLAlchemy model.
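In plain Python terms, arguments in the class-inheritance brackets are class keyword arguments: they are forwarded to the metaclass and to __init_subclass__ (PEP 487). A minimal sketch, independent of SQLModel:

class Model:
    is_table = False

    def __init_subclass__(cls, table: bool = False, **kwargs):
        # receives keyword arguments from the class statement
        super().__init_subclass__(**kwargs)
        cls.is_table = table

class Task(Model, table=True):
    pass

print(Task.is_table)  # True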
EDIT
The issue arises when trying to inherit from a class that is an attribute of an instance. This MCVE reproduces it; I'll leave the rest of the question below for posterity:
class A:
    class SubA:
        pass

a = A()

class B(a.SubA):
    pass
mypy output:
Name 'a.SubA' is not defined
This passes:
class A:
class SubA:
pass
class B(A.SubA):
pass
The example in this Related Issue is pretty much exactly what Flask-SQLAlchemy does to provide the declarative base class under the db namespace. In the issue, a mypy maintainer asserts that they won't support the pattern.
My question is, what is incorrect about the above pattern such that mypy would not support it? Especially in the context that it is used by a large project such as Flask-SQLAlchemy.
Further, what is the best way for users of Flask-SQLAlchemy and mypy to manage this in their projects?
ORIGINAL QUESTION
This question isn't about the lack of Flask-SQLAlchemy stubs. Accepting that, I came across this question.
Please help me to understand why the following does not work as I expect.
In my environment I have only Flask-SQLAlchemy and mypy installed.
I have mypy configured with ignore_missing_imports = True.
A mypy pass over the following:
from flask_sqlalchemy import SQLAlchemy

db = SQLAlchemy()

class Widget(db.Model):
    id = db.Column(db.Integer, primary_key=True)
reveals:
error: Name 'db.Model' is not defined
So, I've attempted to subclass SQLAlchemy to provide an annotation for Model, which shows in __annotations__ on the object, yet the mypy analysis of the module doesn't change:
from flask_sqlalchemy import SQLAlchemy
from flask_sqlalchemy.model import DefaultMeta

class TypedSQLAlchemy(SQLAlchemy):
    Model: DefaultMeta

db = TypedSQLAlchemy()
print(db.__annotations__)  # {'Model': <class 'flask_sqlalchemy.model.DefaultMeta'>}

class Widget(db.Model):
    id = db.Column(db.Integer, primary_key=True)
When I execute the file, the print(db.__annotations__) call displays {'Model': <class 'flask_sqlalchemy.model.DefaultMeta'>}, yet mypy still reports the same error:
error: Name 'db.Model' is not defined
I would expect that providing an annotation for db.Model should make that error go away.
Earlier Edit
I initially misinterpreted the error: it doesn't say the attribute Model doesn't exist on db; it says the name db.Model doesn't exist in the namespace. But why would mypy treat db.Model as a full name, and not db as the locally defined name and Model as its attribute? Is this something to do with trying to inherit from a class variable?
Also, my annotation was incorrect; it should be:

from typing import Type

class TypedSQLAlchemy(SQLAlchemy):
    Model: Type[DefaultMeta]
You should use A.SubA.
As far as I know, accessing a nested class via an instance variable is not allowed from mypy's point of view, because classes derived from A could override the nested class, and mypy cannot resolve which one is meant, as in something like this:
class A:
    class SubA:
        pass

class C(A):
    class SubA:
        pass

def foo(a: A):
    class B(a.SubA):  # Which SubA is meant here?
        pass

foo(C())
Update:
As for Flask-SQLAlchemy, the following workaround was suggested in this discussion:
from app import db
from sqlalchemy.ext.declarative import DeclarativeMeta

BaseModel: DeclarativeMeta = db.Model

class MyModel(BaseModel): ...
If you are using flask_sqlalchemy then you can use from flask_sqlalchemy.model import DefaultMeta instead of DeclarativeMeta.
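In that case the snippet would become, for example:

from app import db
from flask_sqlalchemy.model import DefaultMeta

BaseModel: DefaultMeta = db.Model

class MyModel(BaseModel): ...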
I'm new to SQLAlchemy, and I'm trying to achieve the following objects/DB-tables structure (using Alembic as well):
class BaseConfig(Base):
    pk = Column(Integer, primary_key=True)
    name = Column(Unicode(150), nullable=False, unique=True)
    ...
    # Lots of other general columns, such as description, display_name, creation time etc.
And I want all other configuration classes to inherit the predefined columns from it:
class A(BaseConfig):
    __tablename__ = "A"
    column1 = Column...
    column2 = Column...

class B(BaseConfig):
    __tablename__ = "B"
    column1 = Column...
    column2 = Column...
The BaseConfig table is not a real table, only a class that holds common columns.
Other than that, nothing is related between A and B, and there is no need for a shared pk etc. It seems that "polymorphic_union" is not relevant here either.
When trying to run alembic autogenerate, I get an error that BaseConfig doesn't have a table mapped to the class, which is true, and I really don't see a reason to add the "polymorphic union" option to BaseConfig, since this class is generic.
Any suggestions? (In Django with South this works out of the box, but here it seems this behavior is not easily supported.)
Thanks,
Li
Either use mixins (read the Mixing in Columns part of the documentation), where your BaseConfig does not subclass Base and the actual tables subclass both Base and BaseConfig:
class BaseConfig(object):
    # ...

class MyModel(BaseConfig, Base):
    # ...
OR simply use __abstract__:
class BaseConfig(Base):
    __abstract__ = True
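For illustration, a minimal sketch of the __abstract__ approach with the columns from the question (the concrete column types on A and B are hypothetical):

from sqlalchemy import Column, Integer, Unicode
from sqlalchemy.ext.declarative import declarative_base

Base = declarative_base()

class BaseConfig(Base):
    __abstract__ = True  # no table is created for this class
    pk = Column(Integer, primary_key=True)
    name = Column(Unicode(150), nullable=False, unique=True)

class A(BaseConfig):
    __tablename__ = "A"
    column1 = Column(Unicode(50))  # hypothetical concrete column

class B(BaseConfig):
    __tablename__ = "B"
    column1 = Column(Integer)  # hypothetical concrete column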
I have an assortment of SQLAlchemy classes, e.g.:
class OneThing(Base):
    id = Column(Integer, Sequence('one_thing_seq'), primary_key=True)
    thing = Column(Boolean())
    tag = Column(String(255))

class TwoThing(Base):
    id = Column(Integer, Sequence('two_thing_seq'), primary_key=True)
    thing = Column(Boolean())
    tag = Column(String(100))
i.e. fairly standard class constructions for SQLAlchemy.
Question: Is there a way to get greater control over column creation, or does that need to be relatively static? I'd at least like to consolidate the more mundane columns and their imports across a number of files, for example like this (not as a mixin, because I already do that for certain columns that are the same across models, but as a function that returns a column based on potential variables):
class OneThing(Base):
    id = Base.id_column()
    thing = Base.bool_column
    tag = Base.string_col(255)

class TwoThing(Base):
    id = Base.id_column()
    thing = Base.bool_column
    tag = Base.string_col(100)
It seems simple enough and approachable enough that I will just start reading/typing, but I have not yet found any examples or the proper search terms for them. There is no reason class columns need to be static, and it is probably simple. Is this a thing, or a foolish thing?
from sqlalchemy import Column, Boolean, Integer, String

def c_id():
    return Column(Integer, primary_key=True)

def c_bool():
    return Column(Boolean, nullable=False, default=False)

def c_string(length):
    return Column(String(length), nullable=False, default='')

class Thing(Base):
    id = c_id()
    thing = c_bool()
    tag = c_string(255)
The SQLAlchemy developer goes into more detail here: http://techspot.zzzeek.org/2011/05/17/magic-a-new-orm/
The Column() call is not magical; you can use any random way to create the appropriate object. The magic (i.e. binding the column to the variable name and the table) happens in Base's metaclass.
So one solution is for you to write your own code which returns a Column() or three -- nothing prevents you from doing this:
class Thing(Base):
    id, thing, tag = my_magic_creator()
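For example, a hypothetical my_magic_creator could just return a tuple of columns (the name and the column choices here are illustrative, not a library API):

from sqlalchemy import Column, Boolean, Integer, String

def my_magic_creator():
    # Hypothetical helper: returns a tuple of ready-made columns
    # to unpack in the class body above.
    return (
        Column(Integer, primary_key=True),
        Column(Boolean, nullable=False, default=False),
        Column(String(255), nullable=False, default=''),
    )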
On the other hand, you can drop all these assignments wholesale and do the work in a metaclass; see my answer here: Creating self-referential tables with polymorphism in SQLAlchemy, for a template on how to do that.
OK, so I have two problems:
My first is SQLAlchemy related. I'm forced to use SQLite, and I have to implement some cascade deletes. I've found that relationships do the job; however, these would normally be declared in the parent table. Due to some design decisions I'm forced to do this in the child, so I'm doing something like:
class Operation(Base):
    """
    The class used to log any action executed in Projects.
    """
    __tablename__ = 'OPERATIONS'
    id = Column(Integer, primary_key=True)
    parameters = Column(String)
    # ... rest of class here ...

class DataType(Base):
    __tablename__ = 'DATA_TYPES'
    id = Column(Integer, primary_key=True)
    gid = Column(String)
    # ... more params ...
    parent_operation = relationship(Operation,
                                    backref=backref("DATA_TYPES",
                                                    order_by=id,
                                                    cascade="all,delete"))
    # ... rest of class ...
Now this seems to work but I'm still not certain of a few things.
Firstly, what can I do with parent_operation from here on? I mean, I see that the cascade works, but I make no use of parent_operation except for the actual declaration.
Secondly, the "DATA_TYPES" in the above case, which is the first parameter in the backref, does this need to be the name of the child table or does it need to be unique per model?
And finally, in my case the Operation and DataType classes are in the same module, so I can pass Operation as the first parameter of the relationship. If this weren't the case and I had them in separate modules, and I still wanted to declare this relationship, should I pass 'Operation' or 'OPERATIONS' to relationship() (class name or table name)?
Now, my second problem is more core Python, but since it still has some connection with the above, I'll add it here. I need to be able to add a class attribute dynamically; basically, I need to add a relationship like the ones declared above.
class BaseClass(object):
    def __init__(self):
        my_class = self.__class__
        if not hasattr(my_class, self.__class__.__name__):
            reference = ("my_class." + self.__class__.__name__ +
                         " = relationship('DataType', backref=backref('" +
                         self.__class__.__name__ + "', cascade='all,delete'))")
            exec(reference)
The reasons why I need to do this are complicated and have to do with some design decisions (basically, I need every class that extends this one to have a relationship declared to the 'DataType' class). I know using exec isn't good practice, so is there a better way to do the above?
Regards,
Bogdan
For the second part of your question, keep in mind that anything in your class constructor won't be instrumented by SQLAlchemy. In your example you can simply declare the relationship in its own class (note that it does not inherit from your declarative_base class) and then inherit it in any subclass, something like this:
from sqlalchemy.ext.declarative import declared_attr

class BaseDataType(object):
    @declared_attr  # required for relationships defined on a mixin
    def parent_operation(cls):
        # with several subclasses, make the backref name unique per class
        return relationship(Operation, backref="datatype_%s" % cls.__name__.lower())

class DataTypeA(Base, BaseDataType):
    id = Column(Integer, primary_key=True)

class DataTypeB(Base, BaseDataType):
    id = Column(Integer, primary_key=True)
The SQLAlchemy documentation gives good examples of what's possible:
http://www.sqlalchemy.org/docs/orm/extensions/declarative.html#mixing-in-relationships