I am having a hard time using Python typing annotations when dealing with generics and compound types.
Consider the following class:
import typing as ty

T = ty.TypeVar("T")
CT = tuple[bool, T, str]

class MyClass(ty.Generic[T]):
    internal1: tuple[bool, T, str]
    internal2: CT[T]
    internal3: CT[float]

class DerivedMyClass(MyClass[float]):
    pass

print(ty.get_type_hints(MyClass))
print(ty.get_type_hints(DerivedMyClass))
where the types of internal1, internal2, and internal3 are actually much lengthier annotations in my real code. The output is:
{
    'internal1': tuple[bool, ~T, str],
    'internal2': tuple[bool, ~T, str],
    'internal3': tuple[bool, float, str]
}
{
    'internal1': tuple[bool, ~T, str],
    'internal2': tuple[bool, ~T, str],
    'internal3': tuple[bool, float, str]
}
Is there a way to make CT aware of the type in the derived class?
If you are able to use Pydantic, this can be pretty easy (the import below is from Pydantic v1; in v2, generic models subclass BaseModel directly):
import typing as ty
from pydantic.generics import GenericModel

T = ty.TypeVar("T")
CT = tuple[bool, T, str]

class MyClass(GenericModel, ty.Generic[T]):
    internal1: tuple[bool, T, str]
    internal2: CT[T]
    internal3: CT[float]

DerivedClass = MyClass[float]
print(ty.get_type_hints(DerivedClass))
This outputs:
{'__concrete__': typing.ClassVar[bool], 'internal1': tuple[bool, float, str], 'internal2': tuple[bool, float, str], 'internal3': tuple[bool, float, str]}
Without using pydantic, best of luck! (or read and copy the source code for GenericModel)
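If pydantic is not an option, a stdlib-only workaround is to walk __orig_bases__ yourself and re-subscript any hint that still has free TypeVars (re-subscripting a generic alias substitutes its parameters). This is my own sketch, not from the original answer; resolved_type_hints is a made-up name and it only covers the simple single-parameter case shown in the question:

```python
import typing as ty

T = ty.TypeVar("T")
CT = tuple[bool, T, str]

class MyClass(ty.Generic[T]):
    internal1: tuple[bool, T, str]
    internal2: CT[T]
    internal3: CT[float]

class DerivedMyClass(MyClass[float]):
    pass

def resolved_type_hints(cls: type) -> dict:
    """Like ty.get_type_hints, but substitutes TypeVars using the
    arguments given to parameterized bases (e.g. MyClass[float])."""
    # Map each TypeVar to the argument it was bound to anywhere in the MRO.
    binding: dict = {}
    for klass in cls.__mro__:
        for base in getattr(klass, "__orig_bases__", ()):
            origin = ty.get_origin(base)
            params = getattr(origin, "__parameters__", ())
            for param, arg in zip(params, ty.get_args(base)):
                binding.setdefault(param, arg)
    hints = {}
    for name, hint in ty.get_type_hints(cls).items():
        free = getattr(hint, "__parameters__", ())
        if free and all(p in binding for p in free):
            # Re-subscripting the alias replaces its free TypeVars in order.
            hint = hint[tuple(binding[p] for p in free)]
        hints[name] = hint
    return hints

print(resolved_type_hints(DerivedMyClass))
# all three hints now show float in place of ~T
```

This requires Python 3.9+ (builtin generic aliases support parameter substitution) and does not handle nested or partially-bound generics.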
Related
How would I use TypeVarTuple for this example?
from dataclasses import dataclass
from typing import Generic, TypeVar, TypeVarTuple

T = TypeVar("T")
Ts = TypeVarTuple("Ts")

@dataclass
class S(Generic[T]):
    data: T

def data_from_s(*structs: ??) -> ??:
    return tuple(x.data for x in structs)

a = data_from_s(S(1), S("3"))  # should be of type tuple[int, str]
I don't see any way to do this under the current spec. The main issue is that TypeVarTuple does not support bounds, so you can't constrain the types referred to by Ts to be subtypes of S.
You would somehow need to translate tuple[S[T1], S[T2], ...] into tuple[T1, T2, ...], but there is no way to express that the types contained in Ts are specializations of S, or that those types are themselves generic with further parameterization.
Without using TypeVarTuple, your goal can be accomplished to some extent with a pattern like the following, using overload to handle subsets of the signature for differing numbers of arguments. I also end each overload with a /, which prevents the use of keyword arguments (forcing positional ones) and allows the overloads to match the real implementation's signature.
Obviously, this pattern becomes awkward as you add more ranges of arguments, but in some cases it can be a nice escape hatch.
from dataclasses import dataclass
from typing import Any, Generic, TypeVar, assert_type, overload

T = TypeVar("T")

@dataclass
class S(Generic[T]):
    data: T

T1 = TypeVar("T1")
T2 = TypeVar("T2")
T3 = TypeVar("T3")

@overload
def data_from_s(s1: S[T1], /) -> tuple[T1]:
    ...

@overload
def data_from_s(s1: S[T1], s2: S[T2], /) -> tuple[T1, T2]:
    ...

@overload
def data_from_s(s1: S[T1], s2: S[T2], s3: S[T3], /) -> tuple[T1, T2, T3]:
    ...

def data_from_s(*structs: S[Any]) -> tuple[Any, ...]:
    return tuple(x.data for x in structs)
Which will pass this test:
assert_type(
    data_from_s(S(1)),
    tuple[int]
)
assert_type(
    data_from_s(S(1), S("3")),
    tuple[int, str]
)
assert_type(
    data_from_s(S(1), S("3"), S(3.9)),
    tuple[int, str, float]
)
I don't fully understand the problem, but removing the ?? annotations makes the code run:
from dataclasses import dataclass
from typing import Generic, TypeVar, TypeVarTuple

T = TypeVar("T")
Ts = TypeVarTuple("Ts")

@dataclass
class S(Generic[T]):
    data: T

def data_from_s(*structs):
    return tuple(x.data for x in structs)

a = data_from_s(S(1), S("3"))  # is type tuple[int, str]
Or put T in place of ??:
from dataclasses import dataclass
from typing import Generic, TypeVar, TypeVarTuple

T = TypeVar("T")
Ts = TypeVarTuple("Ts")

@dataclass
class S(Generic[T]):
    data: T

def data_from_s(*structs: T) -> T:
    return tuple(x.data for x in structs)

a = data_from_s(S(1), S("3"))  # is type tuple[int, str]
Or something like this; this sample has no issues with mypy if you add the --enable-incomplete-feature=TypeVarTuple flag, i.e. run mypy test.py --enable-incomplete-feature=TypeVarTuple:
from dataclasses import dataclass
from typing import Generic, TypeVar, TypeVarTuple

T = TypeVar("T")
Ts = TypeVarTuple("Ts")

@dataclass
class S(Generic[T]):
    data: T

def data_from_s(*structs: S) -> tuple:
    return tuple(x.data for x in structs)

a = data_from_s(S(1), S("3"))
I already have a function that returns a value according to the type it receives as its argument:
from typing import Type, TypeVar

T = TypeVar('T', int, str)

def get(t: Type[T]) -> T: ...
So that get(int) returns an int and get(str) returns a str.
I also have a version of get that is used to get many values as a tuple:
def get_many(*ts: Type[T]):
    return tuple(get(t) for t in ts)
How should the return type of get_many be annotated?
To be clear, get_many(int, str) should return tuple[int, str], get_many(str, str, int) should return tuple[str, str, int] and so on.
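One way to annotate get_many, reusing the overload pattern from the data_from_s answer above (add more overloads for more arguments). This is a sketch of mine, not from the thread; the get body below is a stand-in I wrote so the example runs:

```python
from typing import Any, Type, TypeVar, overload

T = TypeVar("T", int, str)
T1 = TypeVar("T1", int, str)
T2 = TypeVar("T2", int, str)

def get(t: Type[T]) -> T:
    # Stand-in for the real get(); just builds a default value.
    return t()

@overload
def get_many(t1: Type[T1], /) -> tuple[T1]: ...
@overload
def get_many(t1: Type[T1], t2: Type[T2], /) -> tuple[T1, T2]: ...
def get_many(*ts: Type[Any]) -> tuple[Any, ...]:
    return tuple(get(t) for t in ts)

print(get_many(int, str))  # (0, '')
```

As with data_from_s, this only covers as many argument counts as you write overloads for, since TypeVarTuple cannot express the Type[T1], Type[T2], ... to tuple[T1, T2, ...] mapping.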
Let's say I have a class like this:
from typing import Tuple

class myclass:
    def __init__(self, param1: Tuple[str, ...], param2: bool) -> None:
        self.member1 = param1
        self.member2 = param2
        self.member3 = 10

    def gimmie(self) -> int | Tuple[str, ...]:
        return self.member1 if self.member2 else self.member3
Is there any way I can ensure that the return type of gimmie is not int | Tuple[str, ...] but rather either an int or a Tuple[str, ...], depending on the flag?
Edit:
There are a couple of answers that involve significant acrobatics to do this, when all I really wanted was to cast the return. Both of those answers comment on a code "smell" because of this.
The problem is simply that I construct an object with a flag, and one of its methods returns one of two types based on that flag. If that's bad design, what would be the "correct" way to do it?
Here is a way to solve this with generics:
from __future__ import annotations
from typing import overload, Literal, Generic, TypeVar, cast

T = TypeVar('T')

class myclass(Generic[T]):
    member1: tuple[str, ...]
    member2: bool
    member3: int

    @overload
    def __init__(self: myclass[tuple[str, ...]], param1: tuple[str, ...], param2: Literal[True]) -> None:
        ...

    @overload
    def __init__(self: myclass[int], param1: tuple[str, ...], param2: Literal[False]) -> None:
        ...

    def __init__(self, param1: tuple[str, ...], param2: bool) -> None:
        self.member1 = param1
        self.member2 = param2
        self.member3 = 10

    def gimmie(self) -> T:
        return cast(T, self.member1 if self.member2 else self.member3)

reveal_type(myclass(('a', 'b'), True).gimmie())
# note: Revealed type is "builtins.tuple*[builtins.str]"
reveal_type(myclass(('a', 'b'), False).gimmie())
# note: Revealed type is "builtins.int*"
Some notes:
This approach requires annotating the self argument to give it a different static type. Usually, we don't annotate self, so make sure not to forget this!
Sadly I could not get a if b else c to have the right type without adding a cast.
I do agree with Samwise that this kind of type judo is a code smell, and might be hiding problems with the design of your project.
Here's one way to tackle it with subclasses and an @overloaded factory function:
from typing import Literal, Tuple, Union, cast, overload

class MyClass:
    def __init__(self, param1: Tuple[str, ...], param2: bool) -> None:
        self.member1 = param1
        self.__member2 = param2
        self.member3 = 10

    def gimmie(self) -> Union[int, Tuple[str, ...]]:
        return self.member1 if self.__member2 else self.member3

class _MySubclass1(MyClass):
    def gimmie(self) -> Tuple[str, ...]:
        return cast(Tuple[str, ...], MyClass.gimmie(self))

class _MySubclass2(MyClass):
    def gimmie(self) -> int:
        return cast(int, MyClass.gimmie(self))

@overload
def myclass(param1: Tuple[str, ...], param2: Literal[True]) -> _MySubclass1:
    ...

@overload
def myclass(param1: Tuple[str, ...], param2: Literal[False]) -> _MySubclass2:
    ...

def myclass(param1: Tuple[str, ...], param2: bool) -> MyClass:
    if param2:
        return _MySubclass1(param1, param2)
    else:
        return _MySubclass2(param1, param2)

myobj1 = myclass((), True)
myobj2 = myclass((), False)

reveal_type(myobj1.gimmie())  # Revealed type is "builtins.tuple[builtins.str]"
reveal_type(myobj2.gimmie())  # Revealed type is "builtins.int"
Note that this is a lot of work and requires careful attention to make sure the casts match the implementation logic -- I don't know the real-world problem you're trying to solve, but having to go through this much trouble to make the typing line up correctly is often a "smell" in the way you're modeling the data.
Is it possible with Python type hints to specify the types of a dictionary's keys and values as pairs?
For instance:
If the key is an int, the value should be a str
If the key is a str, the value should be an int
If I write:
Dict[Union[int, str], Union[int, str]]
it allows str -> str and int -> int, which should not be allowed.
And with:
Union[Dict[int, str], Dict[str, int]]
the dictionary can be either a Dict[int, str] or a Dict[str, int], but not both at the same time.
I also looked into TypedDict, but it requires all the keys to be given explicitly.
If using typing.cast is acceptable for your application, this can be done by making a class that subclasses Dict with overloads for __setitem__ and __getitem__, then casting your dict to that type. From then on, type checkers will infer the correct KeyType: ValueType pairs.
The caveats of this approach are that you can't use it to type-check the dict's construction, as that happens before the cast. Additionally, you would need to add more overloads for methods like update and __iter__ (and possibly other dict methods) to get type checking beyond __setitem__/__getitem__ access.
Example:
from typing import Dict, overload, cast

class KVTypePairedDict(Dict):
    @overload
    def __getitem__(self, key: str) -> int: ...
    @overload
    def __getitem__(self, key: int) -> str: ...
    def __getitem__(self, key):
        # Real implementation just delegates to dict.
        return super().__getitem__(key)

    @overload
    def __setitem__(self, key: str, value: int) -> None: ...
    @overload
    def __setitem__(self, key: int, value: str) -> None: ...
    def __setitem__(self, key, value):
        super().__setitem__(key, value)
test: KVTypePairedDict = cast(KVTypePairedDict, {"foo": 0, 1: "bar"})
# str keys
a: int = test["foo"]
test["foo"] = 0
c: str = test["foo"] # <-- mypy and PyCharm flag this
test["foo"] = "bar" # <-- mypy and PyCharm flag this
# int keys
d: str = test[1]
test[1] = "bar"
b: int = test[0] # <-- mypy and PyCharm flag this
test[1] = 0 # <-- mypy and PyCharm flag this
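To make the construction caveat above concrete: typing.cast is a no-op at runtime, so the casted object remains a plain dict and the overloaded methods never actually execute. A small sketch, using a hypothetical TypedView subclass standing in for KVTypePairedDict:

```python
from typing import cast

class TypedView(dict):
    """Stand-in for KVTypePairedDict; any dict subclass behaves the same."""

d = {"foo": 0, 1: "bar"}
t = cast(TypedView, d)

# cast() only changes the static type; the runtime object is untouched.
print(type(t) is dict)           # True
print(isinstance(t, TypedView))  # False
```

This is why the approach gives you static checking only: reads and writes still go through the ordinary dict machinery at runtime.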
I'm trying to define a function which returns another function. The function it returns is overloaded.
For example:
from typing import overload, Union, Callable

@overload
def foo(a: str) -> str:
    pass

@overload
def foo(a: int) -> int:
    pass

def foo(a: Union[str, int]) -> Union[str, int]:
    if isinstance(a, int):
        return 1
    else:
        # str
        return "one"

def bar() -> Callable[[Union[str, int]], Union[str, int]]:
    return foo  # error: Incompatible return value type (got overloaded function, expected "Callable[[Union[str, int]], Union[str, int]]")
However, mypy reports an error on my typing of the function bar.
How do I type bar correctly? What am I doing wrong?
The issue here is partly that the Callable type is a little too limited to accurately express the type of foo, and partly that mypy is currently very conservative when checking overloads for compatibility against Callables (it's hard to do in the general case).
Probably the best approach for now is to define a more precise return type using a callback protocol and return that instead:
For example:
from typing import overload, Union

# Or if you're using Python 3.8+, just 'from typing import Protocol'
from typing_extensions import Protocol

# A callback protocol encoding the exact signature you want to return
class FooLike(Protocol):
    @overload
    def __call__(self, a: str) -> str: ...
    @overload
    def __call__(self, a: int) -> int: ...
    def __call__(self, a: Union[str, int]) -> Union[str, int]: ...

@overload
def foo(a: str) -> str:
    pass

@overload
def foo(a: int) -> int:
    pass

def foo(a: Union[str, int]) -> Union[str, int]:
    if isinstance(a, int):
        return 1
    else:
        # str
        return "one"

def bar() -> FooLike:
    return foo  # now type-checks
Note: Protocol was added to the typing module as of Python 3.8. If you want it in earlier versions of Python, install the typing_extensions module (pip install typing_extensions) and import it from there.
Having to copy the signature like this twice is admittedly a bit clunky. People generally seem to agree that this is a problem (there are various issues about this in the typing and mypy issue trackers), but I don't think there's any consensus on how to best solve this yet.
I sorted that out by changing to pyre:
from typing import Union

def foo(a: Union[str, int]) -> Union[str, int]:
    if isinstance(a, int):
        return 1
    else:
        return 'a string'
check:
(.penv) nick$ pyre check
ƛ No type errors found