Python typing support for NamedTuple

Please check the code below:

import typing
import abc

class A(abc.ABC):
    @abc.abstractmethod
    def f(self) -> typing.NamedTuple[typing.Union[int, str], ...]:
        ...

class NT(typing.NamedTuple):
    a: int
    b: str

class B(A):
    def f(self) -> NT:
        return NT(1, "s")

print(B().f())
I get an error. In the parent class A I want to define the method f to indicate that any child class must override it by returning a NamedTuple made up of int or str fields only.
But I get an error saying:
TypeError: 'NamedTupleMeta' object is not subscriptable
Changing the signature as below helps, but then how do I tell the typing system that child classes may only return NamedTuples whose fields are ints and strs?
class A(abc.ABC):
    @abc.abstractmethod
    def f(self) -> typing.NamedTuple:
        ...

The issue is that fundamentally typing.NamedTuple is not a proper type. It essentially allows you to use the class factory collections.namedtuple using the syntax of inheritance and type annotations. It's sugar.
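A quick sketch may make the sugar concrete; the annotation-based spelling and the factory call produce essentially the same class (NT2 is just an illustrative name):

import collections
import typing

# Annotation-based spelling:
class NT(typing.NamedTuple):
    a: int
    b: str

# Factory-based spelling; essentially the same result:
NT2 = collections.namedtuple("NT2", ["a", "b"])

print(NT(1, "s"))   # NT(a=1, b='s')
print(NT2(1, "s"))  # NT2(a=1, b='s')
print(NT.__mro__)   # (<class '__main__.NT'>, <class 'tuple'>, <class 'object'>)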
This can be misleading. Normally, we expect:

class Foo(Bar):
    pass

foo = Foo()
print(isinstance(foo, Bar))

to always print True. But typing.NamedTuple, through metaclass machinery, actually just makes something a descendant of tuple, exactly like collections.namedtuple. Indeed, practically its only reason to exist is to use NamedTupleMeta to intercept class creation. Perhaps the following will be illuminating:
>>> from typing import NamedTuple
>>> class Employee(NamedTuple):
... """Represents an employee."""
... name: str
... id: int = 3
...
>>> isinstance(Employee(1,2), NamedTuple)
False
>>>
>>> isinstance(Employee(1,2), tuple)
True
Some may find this dirty, but as stated in the Zen of Python, practicality beats purity.
And note, people often get confused about collections.namedtuple, which is itself not a class but a class factory. So:
>>> import collections
>>> Point = collections.namedtuple("Point", "x y")
>>> p = Point(0, 0)
>>> isinstance(p, collections.namedtuple)
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
TypeError: isinstance() arg 2 must be a type or tuple of types
Although note, the classes generated by namedtuple/NamedTuple do act as expected when you inherit from them.
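For instance, continuing the Point example above:

>>> class Point3D(Point):
...     pass
...
>>> isinstance(Point3D(0, 0), Point)
True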
Note that your solution:

import typing
import abc

class A(abc.ABC):
    @abc.abstractmethod
    def f(self) -> typing.NamedTuple:
        ...

class NT(typing.NamedTuple):
    a: int
    b: str

class B(A):
    def f(self) -> NT:
        return NT(1, "s")

print(B().f())
Doesn't pass mypy:
(py38) juan$ mypy test_typing.py
test_typing.py:18: error: Return type "NT" of "f" incompatible with return type "NamedTuple" in supertype "A"
Found 1 error in 1 file (checked 1 source file)
However, using Tuple does:
class A(abc.ABC):
    @abc.abstractmethod
    def f(self) -> typing.Tuple[typing.Union[str, int], ...]:
        ...
Although, that may not be very useful.
What you really want is some sort of structural typing, but I can't think of any way to use typing.Protocol for this. Basically, it can't express "any type with a variadic number of attributes, all of which are typing.Union[int, str]".
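The closest I can sketch (my own approximation, not a real solution) is a protocol over the iterated values; since NamedTuples are just tuples, this matches plain tuples and lists too, and it cannot pin down the number or names of the fields:

from typing import Iterator, Protocol, Union

class IntOrStrFields(Protocol):
    # Matches anything whose iteration yields int/str values,
    # not only NamedTuples, so it is strictly weaker than desired.
    def __iter__(self) -> Iterator[Union[int, str]]: ...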

Related

Annotating type with class and instance

I'm making a semi-singleton class Foo that can have (also semi-singleton) subclasses. The constructor takes one argument, let's call it a slug, and each (sub)class is supposed to have at most one instance for each value of slug.
Let's say I have a subclass of Foo called Bar. Here is an example of calls:
Foo("a slug") -> returns a new instance of Foo, saved with key (Foo, "a slug").
Foo("some new slug") -> returns a new instance Foo, saved with key (Foo, "some new slug").
Foo("a slug") -> we have the same class and slug from step 1, so this returns the same instance that was returned in step 1.
Bar("a slug") -> we have the same slug as before, but a different class, so this returns a new instance of Bar, saved with key (Bar, "a slug").
Bar("a slug") -> this returns the same instance of Bar that we got in step 4.
I know how to implement this: class dictionary associating a tuple of type and str to instance, override __new__, etc. Simple stuff.
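For concreteness, here is a minimal sketch of that runtime side (the question is purely about how to annotate the dictionary):

class Foo:
    _instances = {}  # maps (class, slug) -> instance

    def __new__(cls, slug: str):
        key = (cls, slug)
        if key not in Foo._instances:
            Foo._instances[key] = super().__new__(cls)
        return Foo._instances[key]

class Bar(Foo):
    pass

assert Foo("a slug") is Foo("a slug")
assert Bar("a slug") is not Foo("a slug")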
My question is how to type annotate this dictionary?
What I tried to do was something like this:
FooSubtype = TypeVar("FooSubtype", bound="Foo")
class Foo:
    _instances: Final[dict[tuple[Type[FooSubtype], str], FooSubtype]] = dict()
So, the idea is "whatever type is in the first element of the key ("assigning" it to FooSubtype type variable), the value needs to be an instance of that same type".
This fails with Type variable "FooSubtype" is unbound, and I kinda see why.
I get the same error if I split it like this:
FooSubtype = TypeVar("FooSubtype", bound="Foo")
InstancesKeyType: TypeAlias = tuple[Type[FooSubtype], str]
class Foo:
    _instances: Final[dict[InstancesKeyType, FooSubtype]] = dict()
The error points to the last line in this example, meaning it's the value type, not the key one, that is the problem.
mypy also suggests using Generic, but I don't see how to do it in this particular example, because the value's type should somehow relate to the key's type, not be a separate generic type.
This works:
class Foo:
    _instances: Final[dict[tuple[Type["Foo"], str], "Foo"]] = dict()
but it allows _instance[(Bar1, "x")] to be of type Bar2 (Bar1 and Bar2 here being different subclasses of Foo). It's not a big problem and I'm ok with leaving it like this, but I'm wondering if there is a better (stricter) approach.
This is a really great question. First I looked through and said "no, you can't at all", because you can't express any relation between dict key and value. However, then I realised that your suggestion is almost possible to implement.
First, let's define a protocol that describes your desired behavior:
from typing import TypeAlias, TypeVar, Protocol

_T = TypeVar("_T", bound="Foo")

# Avoid repetition; it's just a generic alias
_KeyT: TypeAlias = tuple[type[_T], str]

class _CacheDict(Protocol):
    def __getitem__(self, __key: _KeyT[_T]) -> _T: ...
    def __delitem__(self, __key: _KeyT['Foo']) -> None: ...
    def __setitem__(self, __key: _KeyT[_T], __value: _T) -> None: ...
How does it work? It defines an arbitrary data structure with item access, such that cache_dict[(Foo1, 'foo')] resolves to type Foo1. It looks very much like a dict sub-part (or collections.abc.MutableMapping), but with slightly different typing. Dunder argument names are almost equivalent to positional-only arguments (with /). If you need other methods (e.g. get or pop), add them to this definition as well (you may want to use overload). You'll almost certainly need __contains__ which should have the same signature as __delitem__.
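For example, a sketch of just that __contains__ addition:

class _CacheDict(Protocol):
    ...
    # Same key type as __delitem__; returns a bool like dict.__contains__.
    def __contains__(self, __key: _KeyT['Foo']) -> bool: ...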
So, now
class Foo:
    _instances: Final[_CacheDict] = cast(_CacheDict, dict())

class Foo1(Foo): pass
class Foo2(Foo): pass

reveal_type(Foo._instances[(Foo, 'foo')])   # N: Revealed type is "__main__.Foo"
reveal_type(Foo._instances[(Foo1, 'foo')])  # N: Revealed type is "__main__.Foo1"
wow, we have properly inferred value types! We cast dict to the desired type, because our typing is different from dict definitions.
It still has a problem: you can do
Foo._instances[(Foo1, 'foo')] = Foo2()
because _T just resolves to Foo here. However, this problem is completely unavoidable: even if we had some infer keyword or Infer special form to spell def __setitem__(self, __key: _KeyT[Infer[_T]], __value: _T) -> None, it wouldn't work properly:
foo1_t: type[Foo] = Foo1 # Ok, upcasting
foo2: Foo = Foo2() # Ok again
Foo._instances[(foo1_t, 'foo')] = foo2  # Ouch, still allowed, _T is Foo again
Note that we don't use any casts above, so this code is type-safe, but certainly conflicts with our intent.
So we probably have to live with the unstrictness of __setitem__, but at least we get proper types from item access.
Finally, the class is not generic in _T, because otherwise all values would be inferred as the declared type instead of being function-scoped (you can try using Protocol[_T] as a base class and watch what happens; it's instructive for a deeper understanding of mypy's approach to type inference).
Here's a link to playground with full code.
Also, you can subclass a MutableMapping[_KeyT['Foo'], 'Foo'] to get more methods instead of defining them manually. It will deal with __delitem__ and __contains__ out of the box, but __setitem__ and __getitem__ still need your implementation.
Here's an alternative solution with MutableMapping and get (because get was tricky and fun to implement) (playground):
from collections.abc import MutableMapping
from abc import abstractmethod
from typing import TypeAlias, TypeVar, Final, TYPE_CHECKING, cast, overload

_T = TypeVar("_T", bound="Foo")
_Q = TypeVar("_Q")
_KeyT: TypeAlias = tuple[type[_T], str]

class _CacheDict(MutableMapping[_KeyT['Foo'], 'Foo']):
    @abstractmethod
    def __getitem__(self, __key: _KeyT[_T]) -> _T: ...

    @abstractmethod
    def __setitem__(self, __key: _KeyT[_T], __value: _T) -> None: ...

    @overload  # No-default version
    @abstractmethod
    def get(self, __key: _KeyT[_T]) -> _T | None: ...

    # Oops, a `mypy` bug; try replacing with `__default: _T | _Q`
    # and check Foo._instances.get((Foo1, 'foo'), Foo2()).
    # The type gets broader, but resolves to a more specific one in a wrong way.
    @overload  # Some default
    @abstractmethod
    def get(self, __key: _KeyT[_T], __default: _Q) -> _T | _Q: ...

    # Need this because of https://github.com/python/mypy/issues/11488
    @abstractmethod
    def get(self, __key: _KeyT[_T], __default: object = None) -> _T | object: ...
class Foo:
    _instances: Final[_CacheDict] = cast(_CacheDict, dict())
class Foo1(Foo): pass
class Foo2(Foo): pass
reveal_type(Foo._instances)
reveal_type(Foo._instances[(Foo, 'foo')]) # N: Revealed type is "__main__.Foo"
reveal_type(Foo._instances[(Foo1, 'foo')]) # N: Revealed type is "__main__.Foo1"
reveal_type(Foo._instances.get((Foo, 'foo'))) # N: Revealed type is "Union[__main__.Foo, None]"
reveal_type(Foo._instances.get((Foo1, 'foo'))) # N: Revealed type is "Union[__main__.Foo1, None]"
reveal_type(Foo._instances.get((Foo1, 'foo'), Foo1())) # N: Revealed type is "__main__.Foo1"
reveal_type(Foo._instances.get((Foo1, 'foo'), Foo2())) # N: Revealed type is "Union[__main__.Foo1, __main__.Foo2]"
(Foo1, 'foo') in Foo._instances # We get this for free
Foo._instances[(Foo1, 'foo')] = Foo1()
Foo._instances[(Foo1, 'foo')] = object() # E: Value of type variable "_T" of "__setitem__" of "_CacheDict" cannot be "object" [type-var]
Note that we don't use a Protocol now (because that would require MutableMapping to be a protocol as well) and use abstract methods instead.
Trick, don't use it!
When I was writing this answer, I discovered a mypy bug that you can abuse in a very interesting way here. We started with something like this, right?
from collections.abc import MutableMapping
from abc import abstractmethod
from typing import TypeAlias, TypeVar, Final, TYPE_CHECKING, cast, overload

_T = TypeVar("_T", bound="Foo")
_Q = TypeVar("_Q")
_KeyT: TypeAlias = tuple[type[_T], str]

class _CacheDict(MutableMapping[_KeyT['Foo'], 'Foo']):
    @abstractmethod
    def __getitem__(self, __key: _KeyT[_T]) -> _T: ...

    @abstractmethod
    def __setitem__(self, __key: _KeyT[_T], __value: _T) -> None: ...

class Foo:
    _instances: Final[_CacheDict] = cast(_CacheDict, dict())

class Foo1(Foo): pass
class Foo2(Foo): pass

Foo._instances[(Foo1, 'foo')] = Foo1()
Foo._instances[(Foo1, 'foo')] = Foo2()
Now let's change the __setitem__ signature to a very weird thing. Warning: this is a bug, don't rely on this behavior! If we type __value as _T | _Q, we magically get "proper" typing with strict narrowing to the type of the first argument.
    @abstractmethod
    def __setitem__(self, __key: _KeyT[_T], __value: _T | _Q) -> None: ...
Now:
Foo._instances[(Foo1, 'foo')] = Foo1() # Ok
Foo._instances[(Foo1, 'foo')] = Foo2() # E: Incompatible types in assignment (expression has type "Foo2", target has type "Foo1") [assignment]
It is simply wrong, because the _Q part of the union can resolve to anything and is in fact unused (moreover, it shouldn't be a typevar at all, because it's used only once in the definition).
Also, this allows another invalid assignment, where the right-hand side is not a Foo subclass:
Foo._instances[(Foo1, 'foo')] = object() # passes
I'll report this soon and link the issue to this question.

How to write type hints for an iterable abstract base class?

I need to write an abstract base class for classes that:
derive from an existing class, SomeClassIHaveToDeriveFrom (this is why I can't use a Protocol, I need this to be an abstract base class),
implement the Iterable interface,
contain objects of a specific type, Element (i.e. if we iterate over an instance, we get objects of type Element).
I tried to add a type hint to __iter__ in the abstract base class:
import abc
import collections.abc
import typing

class Element:
    pass

class SomeClassIHaveToDeriveFrom:
    pass

class BaseIterableClass(
    abc.ABC,
    collections.abc.Iterable,
    SomeClassIHaveToDeriveFrom,
):
    @abc.abstractmethod
    def __iter__(self) -> typing.Iterator[Element]:
        ...

class A(BaseIterableClass):
    def __iter__(self):
        return self

    def __next__(self):
        return "some string that isn't an Element"

a = A()
a_it = iter(a)
a_el = next(a)
But mypy doesn't detect any errors here, even though a is a BaseIterableClass instance that contains strs instead of Elements. I'm assuming that __iter__ is subject to name mangling, which means that the type hint is ignored.
How can I type hint BaseIterableClass so that deriving from it with an __iter__ function that iterates over something else than Element causes a typing error?
Running mypy in --strict mode actually tells you everything you need.
1) Incomplete Iterable
:13: error: Missing type parameters for generic type "Iterable" [type-arg]
Since Iterable is generic and parameterized with one type variable, you should subclass it accordingly, i.e.
...
T = typing.TypeVar("T", bound="Element")
...
class BaseIterableClass(
    abc.ABC,
    collections.abc.Iterable[T],
    SomeClassIHaveToDeriveFrom,
):
2) Now we get a new error
:17: error: Return type "Iterator[Element]" of "__iter__" incompatible with return type "Iterator[T]" in supertype "Iterable" [override]
Easily solvable:
...
@abc.abstractmethod
def __iter__(self) -> typing.Iterator[T]:
3) Now that we made BaseIterableClass properly generic...
:20: error: Missing type parameters for generic type "BaseIterableClass" [type-arg]
Here we can specify Element:
class A(BaseIterableClass[Element]):
...
4) Missing return types
:21: error: Function is missing a type annotation [no-untyped-def]
:24: error: Function is missing a return type annotation [no-untyped-def]
Since we are defining the methods __iter__ and __next__ for A, we need to annotate them properly:
...
def __iter__(self) -> collections.abc.Iterator[Element]:
...
def __next__(self) -> Element:
5) Wrong return value
Now that we annotated the __next__ return type, mypy picks up that "some string that isn't an Element" is not, in fact, an instance of Element. 🙂
:25: error: Incompatible return value type (got "str", expected "Element") [return-value]
Fully annotated code
from abc import ABC, abstractmethod
from collections.abc import Iterable, Iterator
from typing import TypeVar

T = TypeVar("T", bound="Element")

class Element:
    pass

class SomeClassIHaveToDeriveFrom:
    pass

class BaseIterableClass(
    ABC,
    Iterable[T],
    SomeClassIHaveToDeriveFrom,
):
    @abstractmethod
    def __iter__(self) -> Iterator[T]:
        ...

class A(BaseIterableClass[Element]):
    def __iter__(self) -> Iterator[Element]:
        return self

    def __next__(self) -> Element:
        return "some string that isn't an Element"  # error
        # return Element()
Fixed type argument
If you don't want BaseIterableClass to be generic, you can change steps 1)-3) and specify the type argument for all subclasses. Then you don't need to pass a type argument for A. The code would then look like so:
from abc import ABC, abstractmethod
from collections.abc import Iterable, Iterator

class Element:
    pass

class SomeClassIHaveToDeriveFrom:
    pass

class BaseIterableClass(
    ABC,
    Iterable[Element],
    SomeClassIHaveToDeriveFrom,
):
    @abstractmethod
    def __iter__(self) -> Iterator[Element]:
        ...

class A(BaseIterableClass):
    def __iter__(self) -> Iterator[Element]:
        return self

    def __next__(self) -> Element:
        return "some string that isn't an Element"  # error
        # return Element()
Maybe Iterator instead?
Finally, it seems that you actually want the Iterator interface, since you are defining the __next__ method on your subclass A. In that case, you don't need to define __iter__ at all. Iterator inherits from Iterable and automatically gets __iter__ mixed in, when you inherit from it and implement __next__. (see docs)
Also, since the Iterator base class is abstract already, you don't need to include __next__ as an abstract method.
Then the (generic version of the) code would look like this:
from abc import ABC
from collections.abc import Iterator
from typing import TypeVar

T = TypeVar("T", bound="Element")

class Element:
    pass

class SomeClassIHaveToDeriveFrom:
    pass

class BaseIteratorClass(
    ABC,
    Iterator[T],
    SomeClassIHaveToDeriveFrom,
):
    pass

class A(BaseIteratorClass[Element]):
    def __next__(self) -> Element:
        return "some string that isn't an Element"  # error
        # return Element()
Both iter(A()) and next(A()) work.
Hope this helps.

How to use TypeVar for input and output of multiple generic Protocols in python?

I want to use multiple generic protocols and ensure they're compatible:
from typing import TypeVar, Protocol, Generic
from dataclasses import dataclass

# checking fails as below and with contravariant=True or covariant=True:
A = TypeVar("A")

class C(Protocol[A]):
    def f(self, a: A) -> None: pass

class D(Protocol[A]):
    def g(self) -> A: pass

# Just demonstrates my use case; doesn't have errors:
@dataclass
class CompatibleThings(Generic[A]):
    c: C[A]
    d: D[A]
Mypy gives the following error:
Invariant type variable 'A' used in protocol where contravariant one is expected
Invariant type variable 'A' used in protocol where covariant one is expected
I know this can be done by making C and D generic ABC classes, but I want to use protocols.
The short explanation is that your approach breaks subtype transitivity; see this section of PEP 544 for more information. It gives a pretty clear explanation of why your D protocol (and, implicitly, your C protocol) runs into this problem, and why it requires a different kind of variance for each to solve it. You can also look on Wikipedia for info on type variance.
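To make the variance requirement concrete, here is a small illustration (names are mine, not from the question): a producer of Cat values is safely usable wherever a producer of Animal values is expected, so the "output" protocol must be covariant, and the "input" side is the mirror image:

from typing import Protocol, TypeVar

A_cov = TypeVar("A_cov", covariant=True)

class Producer(Protocol[A_cov]):
    def g(self) -> A_cov: ...

class Animal: ...
class Cat(Animal): ...

class CatProducer:
    def g(self) -> Cat:
        return Cat()

# Safe: a consumer expecting Animals is happy to receive Cats.
producer: Producer[Animal] = CatProducer()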
Here's the workaround: use covariant and contravariant protocols, but make your generic dataclass invariant. The big hurdle here is inheritance, which you have to handle in order to use Protocols, but is kind of tangential to your goal. I'm going to switch naming here to highlight the inheritance at play, which is what this is all about:
A = TypeVar("A") # Invariant type
A_cov = TypeVar("A_cov", covariant=True) # Covariant type
A_contra = TypeVar("A_contra", contravariant=True) # Contravariant type
# Give Intake its contravariance
class Intake(Protocol[A_contra]):
def f(self, a: A_contra) -> None: pass
# Give Output its covariance
class Output(Protocol[A_cov]):
def g(self) -> A_cov: pass
# Just tell IntakeOutput that the type needs to be the same
# Since a is invariant, it doesn't care that
# Intake and Output require contra / covariance
#dataclass
class IntakeOutput(Generic[A]):
intake: Intake[A]
output: Output[A]
You can see that this works with the following tests:
class Animal:
    ...

class Cat(Animal):
    ...

class Dog(Animal):
    ...

class IntakeCat:
    def f(self, a: Cat) -> None: pass

class IntakeDog:
    def f(self, a: Dog) -> None: pass

class OutputCat:
    def g(self) -> Cat: pass

class OutputDog:
    def g(self) -> Dog: pass
compat_cat: IntakeOutput[Cat] = IntakeOutput(IntakeCat(), OutputCat())
compat_dog: IntakeOutput[Dog] = IntakeOutput(IntakeDog(), OutputDog())
# This is gonna error in mypy
compat_fail: IntakeOutput[Dog] = IntakeOutput(IntakeDog(), OutputCat())
which gives the following error:
main.py:48: error: Argument 2 to "IntakeOutput" has incompatible type "OutputCat"; expected "Output[Dog]"
main.py:48: note: Following member(s) of "OutputCat" have conflicts:
main.py:48: note:     Expected:
main.py:48: note:         def g(self) -> Dog
main.py:48: note:     Got:
main.py:48: note:         def g(self) -> Cat
So what's the catch? What are you giving up? Namely, inheritance in IntakeOutput. Here's what you can't do:
class IntakeAnimal:
    def f(self, a: Animal) -> None: pass

class OutputAnimal:
    def g(self) -> Animal: pass
# Ok, as expected
ok1: IntakeOutput[Animal] = IntakeOutput(IntakeAnimal(), OutputAnimal())
# Ok, because Output is covariant
ok2: IntakeOutput[Animal] = IntakeOutput(IntakeAnimal(), OutputDog())
# Both fail, because Intake is contravariant
fails1: IntakeOutput[Animal] = IntakeOutput(IntakeDog(), OutputDog())
fails2: IntakeOutput[Animal] = IntakeOutput(IntakeDog(), OutputAnimal())
# Ok, because Intake is contravariant
ok3: IntakeOutput[Dog] = IntakeOutput(IntakeAnimal(), OutputDog())
# This fails, because Output is covariant
fails3: IntakeOutput[Dog] = IntakeOutput(IntakeAnimal(), OutputAnimal())
fails4: IntakeOutput[Dog] = IntakeOutput(IntakeDog(), OutputAnimal())
So. There it is. You can play around with this more here.

Python typing signature for instance of subclass?

Consider:
from __future__ import annotations

class A:
    @classmethod
    def get(cls) -> A:
        return cls()

class B(A):
    pass

def func() -> B:  # Line 12
    return B.get()
Running mypy on this we get:
$ mypy test.py
test.py:12: error: Incompatible return value type (got "A", expected "B")
Found 1 error in 1 file (checked 1 source file)
Additionally, I have checked to see if old-style recursive annotations work. That is:
# from __future__ import annotations

class A:
    @classmethod
    def get(cls) -> "A":
        # ...
...to no avail.
Of course one could do:
from typing import cast

def func() -> B:  # Line 12
    return cast(B, B.get())

every time this case pops up, but I would like to avoid doing that.
How should one go about typing this?
The cls and self parameters are usually inferred by mypy to avoid a lot of redundant code, but when required they can be specified explicitly by annotations.
In this case the explicit type for the class method would look like the following:
class A:
    @classmethod
    def get(cls: Type[A]) -> A:
        return cls()
So what we really need here is a way to make Type[A] a generic parameter, such that when the class method is called from a child class, you can reference the child class instead. Luckily, we have TypeVar values for this.
Working this into your existing example we will get the following:
from __future__ import annotations
from typing import TypeVar, Type

T = TypeVar('T')

class A:
    @classmethod
    def get(cls: Type[T]) -> T:
        return cls()

class B(A):
    pass

def func() -> B:
    return B.get()
Now mypy should be your friend again! 😎
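For what it's worth, on Python 3.11+ (PEP 673), typing.Self expresses this without an explicit TypeVar (typing_extensions.Self backports it to older versions):

from typing import Self  # Python 3.11+

class A:
    @classmethod
    def get(cls) -> Self:
        return cls()

class B(A):
    pass

def func() -> B:
    return B.get()  # mypy infers B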

Classmethods on generic classes

I'm trying to call a classmethod on a generic class:
from typing import List, Union, TypeVar, Generic
from enum import IntEnum

class Gender(IntEnum):
    MALE = 1
    FEMALE = 2
    DIVERS = 3

T = TypeVar('T')

class EnumAggregate(Generic[T]):
    def __init__(self, value: Union[int, str, List[T]]) -> None:
        if value == '':
            raise ValueError(f'Parameter "value" cannot be empty!')
        if isinstance(value, list):
            self._value = ''.join([str(x.value) for x in value])
        else:
            self._value = str(value)

    def __contains__(self, item: T) -> bool:
        return item in self.to_list

    @property
    def to_list(self) -> List[T]:
        return [T(int(character)) for character in self._value]

    @property
    def value(self) -> str:
        return self._value

    @classmethod
    def all(cls) -> str:
        return ''.join([str(x.value) for x in T])
Genders = EnumAggregate[Gender]
But if I call
Genders.all()
I get the error TypeError: 'TypeVar' object is not iterable. So the TypeVar T isn't properly matched with the Enum Gender.
How can I fix this? The expected behavior would be
>>> Genders.all()
'123'
Any ideas? Or is this impossible?
Python's type hinting system is there for a static type checker to validate your code and T is just a placeholder for the type system, like a slot in a template language. It can't be used as an indirect reference to a specific type.
You need to subclass your generic type if you want to produce a concrete implementation. And because Gender is a class and not an instance, you'd need to tell the type system how you plan to use a Type[T] somewhere, too.
Because you also want to be able to use T as an Enum() (calling it with EnumSubclass(int(character))), I'd also bind the typevar; that way the type checker will understand that all concrete forms of Type[T] are callable and will produce individual T instances, but also that those T instances will always have a .value attribute:
from typing import ClassVar, List, Union, Type, TypeVar, Generic
from enum import IntEnum

T = TypeVar('T', bound=IntEnum)  # only IntEnum subclasses

class EnumAggregate(Generic[T]):
    # Concrete implementations can reference `enum` *on the class itself*,
    # which will be an IntEnum subclass.
    enum: ClassVar[Type[T]]

    def __init__(self, value: Union[int, str, List[T]]) -> None:
        if not value:
            raise ValueError('Parameter "value" cannot be empty!')
        if isinstance(value, list):
            self._value = ''.join([str(x.value) for x in value])
        else:
            self._value = str(value)

    def __contains__(self, item: T) -> bool:
        return item in self.to_list

    @property
    def to_list(self) -> List[T]:
        # the concrete implementation needs to use self.enum here
        return [self.enum(int(character)) for character in self._value]

    @property
    def value(self) -> str:
        return self._value

    @classmethod
    def all(cls) -> str:
        # the concrete implementation needs to reference cls.enum here
        return ''.join([str(x.value) for x in cls.enum])
With the above generic class you can now create a concrete implementation, using your Gender IntEnum fitted into the T slot and as a class attribute:
class Gender(IntEnum):
    MALE = 1
    FEMALE = 2
    DIVERS = 3

class Genders(EnumAggregate[Gender]):
    enum = Gender
To be able to access the IntEnum subclass as a class attribute, we needed to use typing.ClassVar[]; otherwise the type checker has to assume the attribute is only available on instances.
And because the Gender IntEnum subclass is itself a class, we need to tell the type checker about that too, hence the use of typing.Type[].
Now the Genders concrete subclass works; the use of EnumAggregate[Gender] as a base class tells the type checker to substitute Gender for T everywhere, and because the implementation uses enum = Gender, the type checker sees that this is indeed correctly satisfied and the code passes all checks:
$ bin/mypy so65064844.py
Success: no issues found in 1 source file
and you can call Genders.all() to produce a string:
>>> Genders.all()
'123'
Note that I'd not store the enum values as strings, but rather as integers. There is little value in converting them back and forth here, and you are limiting yourself to enums with values between 0 and 9 (single digits).
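A sketch of that suggestion (my own variation, not the answer's code): store the members themselves and convert only on demand, which removes both the round trip and the single-digit limit:

from enum import IntEnum
from typing import Generic, List, TypeVar

T = TypeVar('T', bound=IntEnum)

class EnumAggregate(Generic[T]):
    enum: type[T]

    def __init__(self, members: List[T]) -> None:
        # Keep the members directly; no digit-string encoding.
        self._members = list(members)

    @property
    def to_list(self) -> List[T]:
        return list(self._members)

    @classmethod
    def all(cls) -> List[int]:
        return [member.value for member in cls.enum]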
The other answer does not work anymore, at least in Python 3.10: the type annotation ClassVar[Type[T]] results in the mypy error ClassVar cannot contain type variables. This is because ClassVar should only be used in a Protocol and structural subtyping, which is not the best fit for the problem at hand.
The following modification of the other answer works:
class EnumAggregate(Generic[T]):
    enum: type[T]

    [...]

class Genders(EnumAggregate[Gender]):
    enum = Gender
Abstract class variables
I would also recommend making enum abstract in some way, so instantiating EnumAggregate[Gender] instead of Genders will raise an error at the time of instantiation, not only at calls of to_list() or all().
This can be done in two ways: Either check the implementation in __init__:
class EnumAggregate(Generic[T]):
    enum: type[T]

    def __init__(self, value: Union[int, str, List[T]]) -> None:
        [...]
        if not hasattr(type(self), 'enum'):
            raise NotImplementedError("Implementations must define the class variable 'enum'")
Or use an abstract class property, see this discussion. This makes mypy happy in several situations, but not Pylance (see here):
class EnumAggregate(Generic[T]):
    @property
    @classmethod
    @abstractmethod
    def enum(cls) -> type[T]: ...

    [...]

class Genders(EnumAggregate[Gender]):
    enum = Gender
However, there are unresolved problems with mypy and decorators, so right now there are spurious errors which might disappear in the future. For reference:
mypy issue 1
mypy issue 2
Discussion whether to deprecate chaining classmethod decorators
