What is equivalent Protocol of Python Callable? - python

I always thought that Callable was equivalent to having the dunder __call__, but apparently there is also __name__, because the following code passes mypy --strict:
from typing import Any, Callable

def print_name(f: Callable[..., Any]) -> None:
    print(f.__name__)

def foo() -> None:
    pass

print_name(foo)
print_name(lambda x: x)
What is the actual interface of Python's Callable?
I dug into what functools.wraps does. AFAIU it copies ('__module__', '__name__', '__qualname__', '__doc__', '__annotations__') - is that the same as what a Callable is expected to have?

So the mypy position up until now seems to have been that most of the time, when a variable is annotated with Callable, the user expects it to stand for a user-defined function (i.e. def something(...): ...).
Even though user-defined functions are technically a proper subtype of Callable, and even though they are the ones that define a number of those attributes you mentioned, some users are not aware of this distinction and would be surprised if mypy raised an error with code like this:
from collections.abc import Callable
from typing import Any
def f(cal: Callable[..., Any]) -> None:
    print(cal.__name__)
    print(cal.__globals__)
    print(cal.__kwdefaults__)
    print(cal.foo)
Each of those print-lines should be an error, yet only the last actually triggers one.
Moreover, if we define a minimal callable class that doesn't have those attributes, it is treated as a subtype of Callable by both Python and mypy, creating a logical contradiction:
class Bar:
    def __call__(self) -> None:
        print("hi mom")

f(Bar())  # this is valid
print(Bar().__name__)  # this is an error
Their argument so far amounts to maintaining convenience for users who have so far failed to see the distinction between callable subtypes, and by extension avoiding confused issues being opened by those users asking why callables shouldn't have __name__ or those other attributes. (I hope I am being charitable enough with my interpretation.)
I find this to be a very odd position (to put it mildly) and I expressed as much in the issue I opened for this. I'll keep this answer updated, if any new insights are reached in the discussion around the issue.
Bottom line is: You are right, callables must have the __call__ method and do not require anything else.
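To make the "only __call__" answer concrete, here is a sketch of what a Protocol equivalent of Callable[..., Any] might look like. The name CallableLike is invented for this example; it is not a standard typing name:

```python
from typing import Any, Protocol, runtime_checkable

@runtime_checkable
class CallableLike(Protocol):
    """Structural stand-in for Callable[..., Any]: only __call__ is required."""
    def __call__(self, *args: Any, **kwargs: Any) -> Any: ...

class Bar:
    def __call__(self) -> None:
        print("hi mom")

# Bar satisfies the protocol without defining __name__ or any other attribute.
print(isinstance(Bar(), CallableLike))  # True
print(isinstance(42, CallableLike))     # False: int instances have no __call__
```

Because the protocol is decorated with @runtime_checkable, isinstance only checks for the presence of __call__, matching the structural behavior described above.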

Related

Type Narrowing of Class Attributes in Python (TypeGuard) without Subclassing

Consider a Python class with attributes (e.g. a dataclass, pydantic, attrs, or Django model) whose type is a union, e.g. None and a state.
Now I have a complex checking function that checks some values.
If I use this checking function, I want to tell the type checker, that some of my class attributes are narrowed.
For instance see this simplified example:
import dataclasses
from typing import TypeGuard

@dataclasses.dataclass
class SomeDataClass:
    state: tuple[int, int] | None
    name: str
    # Assume many more data attributes

class SomeDataClassWithSetState(SomeDataClass):
    state: tuple[int, int]

def complex_check(data: SomeDataClass) -> TypeGuard[SomeDataClassWithSetState]:
    # Assume some complex checks here, for simplicity it is only:
    return data.state is not None and data.name.startswith("SPECIAL")

def get_sum(data: SomeDataClass) -> int:
    if complex_check(data):
        return data.state[0] + data.state[1]
    return 0
Explore on mypy Playground
As seen, it is possible to do this with subclasses, which for various reasons is not an option for me:
- it introduces a lot of duplication
- some libraries used for dataclasses are not happy with being subclassed without side conditions
- there could be some metaclass or __subclasses__ magic that handles all subclasses specially, e.g. creating a database for the dataclasses

So is there an option to type-narrow an attribute (or attributes) of a class without introducing an entirely new class, as proposed here?
TL;DR: You cannot narrow the type of an attribute. You can only narrow the type of an object.
As I already mentioned in my comment, for typing.TypeGuard to be useful it relies on two distinct types T and S. Then, depending on the returned bool, the type guard function tells the type checker to assume the object to be either T or S.
You say, you don't want to have another class/subclass alongside SomeDataClass for various (vaguely valid) reasons. But if you don't have another type, then TypeGuard is useless. So that is not the route to take here.
I understand that you want to reduce the type-safety checks like if obj.state is None because you may need to access the state attribute in multiple different places in your code. You must have some place in your code where you create/mutate a SomeDataClass instance in a way that ensures its state attribute is not None. One solution then is to have a getter for that attribute that performs the type-safety check and only ever returns the narrower type or raises an error. I typically do this via @property for improved readability. Example:
from dataclasses import dataclass

@dataclass
class SomeDataClass:
    name: str
    optional_state: tuple[int, int] | None = None

    @property
    def state(self) -> tuple[int, int]:
        if self.optional_state is None:
            raise RuntimeError("or some other appropriate exception")
        return self.optional_state

def set_state(obj: SomeDataClass, value: tuple[int, int]) -> None:
    obj.optional_state = value

if __name__ == "__main__":
    foo = SomeDataClass(optional_state=(1, 2), name="foo")
    bar = SomeDataClass(name="bar")
    baz = SomeDataClass(name="baz")
    set_state(bar, (2, 3))
    print(foo.state)
    print(bar.state)
    try:
        print(baz.state)
    except RuntimeError:
        print("baz has no state")
I realize you mean there are many more checks happening in complex_check, but either that function doesn't change the type of data or it does. If the type remains the same, you need to introduce type-safety for attributes like state in some other place, which is why I suggest a getter method.
Another option is obviously to have a separate class, which is what is typically done with FastAPI/Pydantic/SQLModel for example and use clever inheritance to reduce code duplication. You mentioned this may cause problems because of subclassing magic. Well, if it does, use the other approach, but I can't think of an example that would cause the problems you mentioned. Maybe you can be more specific and show a case where subclassing would lead to problems.

typing a returned function in python3? [duplicate]

How can I specify the type hint of a variable as a function type? There is no typing.Function, and I could not find anything in the relevant PEP, PEP 483.
As @jonrsharpe noted in a comment, this can be done with typing.Callable:
from typing import Callable

def my_function(func: Callable):
    ...
Note: Callable on its own is equivalent to Callable[..., Any].
Such a Callable takes any number and type of arguments (...) and returns a value of any type (Any). If this is too unconstrained, one may also specify the types of the input argument list and return type.
For example, given:
def sum(a: int, b: int) -> int: return a+b
The corresponding annotation is:
Callable[[int, int], int]
That is, the parameter types are listed as the first element of the subscription, with the return type as the second element. In general:
Callable[[ParamType1, ParamType2, .., ParamTypeN], ReturnType]
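A quick sketch of using such an annotation (the function names here are illustrative, not from the question):

```python
from typing import Callable

def apply_binary(op: Callable[[int, int], int], a: int, b: int) -> int:
    """Call a two-int function and return its int result."""
    return op(a, b)

def add(a: int, b: int) -> int:
    return a + b

print(apply_binary(add, 2, 3))                 # 5
print(apply_binary(lambda a, b: a * b, 4, 5))  # 20
```

Any callable matching the parameter list and return type is accepted, including lambdas.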
Another interesting point to note is that you can use the built-in function type() to get the type of a built-in function and use that.
So you could have

def f(my_function: type(abs)) -> int:
    return my_function(100)

Or something of that form.
In Python 3 it works without importing typing:

def my_function(other_function: callable):
    pass
My specific use case for wanting this functionality was to enable rich code completion in PyCharm. Using Callable didn't cause PyCharm to suggest that the object had a .__code__ attribute, which is what I wanted, in this case.
I stumbled across the types module and..
from types import FunctionType
allowed me to annotate an object with FunctionType and, voilà, PyCharm now suggests my object has a .__code__ attribute.
The OP wasn't clear on why this type hint was useful to them. Callable certainly works for anything that implements .__call__() but for further interface clarification, I submit the types module.
Bummer that Python needed two very similar modules.
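A small sketch of the types.FunctionType approach described above (the helper name is made up for the example). Note that FunctionType only matches functions defined with def or lambda, not built-ins like abs:

```python
from types import FunctionType

def show_code_name(fn: FunctionType) -> str:
    """FunctionType guarantees a __code__ attribute, unlike a bare Callable."""
    return fn.__code__.co_name

def example() -> None:
    pass

print(show_code_name(example))  # example
print(isinstance(abs, FunctionType))  # False: abs is a builtin, not FunctionType
```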
An easy and fancy solution is:

def f(my_function: type(lambda x: None)):
    return my_function()
This can be proved in the following way:

def poww(num1, num2):
    return num1 ** num2

print(type(lambda x: None) == type(poww))
and the output will be:
True

types of methods with respect to types of methods in superclass in python

I'm pretty new to Python's typing system, and I didn't find this description in the documentation I've reviewed until now.
If Asub is a subclass of Bsup, and Asub has a multi-argument method named foo, what should I do and what should I avoid when annotating the type of foo in Asub? For example, should I always ensure that the return type in the subclass is exactly the same as in the superclass, or that it is at least a superclass of the superclass's return type? And what about the arguments? Should the arguments of the method in the subclass always be the same type, or rather a subtype of the arguments in the superclass?
Also, do I need to declare types on the method in the subclass if (as is usually the case) the types are exactly the same as in the superclass?
class Bsup:
    def foo(self, x: A) -> B:
        ...

class Asub(Bsup):
    def foo(self, x: X) -> Y:
        ...
I've found the answer in the mypy documentation.
Overriding statically typed methods
This documentation explains and gives examples of methods in derived classes returning more specific objects, and annotating the return value as such. It also addresses some error cases and hints at how mypy handles covariance and contravariance. Thus my reading of the documentation gives some indication of the extent to which mypy implements the Liskov substitution principle.
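The rule from the mypy documentation (return types may be narrowed covariantly; parameter types may be widened contravariantly) can be sketched as follows; the class names are invented for illustration:

```python
class Animal: ...
class Dog(Animal): ...

class Base:
    def make(self, x: Dog) -> Animal:
        return Animal()

class Derived(Base):
    # Accepted under the Liskov substitution principle:
    # the parameter type is widened (Dog -> Animal) and
    # the return type is narrowed (Animal -> Dog).
    def make(self, x: Animal) -> Dog:
        return Dog()
```

Reversing either direction (narrowing a parameter to a subtype, or widening the return type) would make mypy flag the override as incompatible with the base class.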

Setting TypeVar upper bound to class defined afterwards

I was looking through the typeshed source and saw that in the pathlib.pyi it does the following:
_P = TypeVar('_P', bound=PurePath)
...
class PurePath(_PurePathBase): ...
I have a similar case with a base class that returns a subclass from __new__ (similar to Path), so the type annotations would be similar as well. However, setting the bound keyword to a class that is defined below it raises a NameError, since the name has not yet been defined (as I would have expected; I tried it because of the typeshed source).
from abc import ABC
from typing import Type
from typing import TypeVar

from foo.interface import SomeInterface

_MT = TypeVar('_MT', bound=MyBaseClass)  # NameError: name 'MyBaseClass' is not defined

class MyBaseClass(SomeInterface, ABC):
    def __new__(cls: Type[_MT], var: int = 0) -> _MT:
        if var == 1:
            return object.__new__(FirstSubClass)
        return object.__new__(cls)

class FirstSubClass(MyBaseClass):
    pass
How does typeshed get away with this? It would be perfect for my typing, otherwise I must do:
_MT = TypeVar('_MT', covariant=True, bound=SomeInterface)
And all my linter warnings are satisfied ("expected type _MT, got object instead")...
A better matching case is typing a factory method, since I am using __new__ as a factory similar to how Path does it and as described here. Still, it would be nice to know how typeshed accomplishes this forward reference in bound without using a string.
In runtime Python code, you can use string literal types in the bound attribute to create a forward reference:
>>> _MT = TypeVar('_MT', bound='MyBaseClass')
>>> _MT.__bound__
ForwardRef('MyBaseClass')
As for why a non-string can be used in typeshed, it's because typeshed provides .pyi stub files, which are syntactically valid Python code, but are not meant to be executed, only to be examined by type checkers. There's little I could find on specifications for stub files, but it makes sense to assume that everything is implicitly a string literal. This seems to be implied from the mypy docs:
String literal types are never needed in # type: comments and stub files.
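Applied to the question's code, a runnable sketch using the string-literal bound might look like this. Note that the foo.interface import and SomeInterface base are dropped here, since that module isn't shown in the question:

```python
from abc import ABC
from typing import Type, TypeVar

# The string literal defers name resolution, so no NameError at import time.
_MT = TypeVar('_MT', bound='MyBaseClass')

class MyBaseClass(ABC):
    def __new__(cls: Type[_MT], var: int = 0) -> _MT:
        if var == 1:
            # Factory behavior: divert construction to a subclass.
            return object.__new__(FirstSubClass)
        return object.__new__(cls)

class FirstSubClass(MyBaseClass):
    pass

print(type(MyBaseClass(var=1)).__name__)  # FirstSubClass
print(type(MyBaseClass()).__name__)       # MyBaseClass
```

At runtime, _MT.__bound__ is a ForwardRef('MyBaseClass'), exactly as shown above; type checkers resolve it once the class is defined.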

How to make Mypy deal with subclasses in functions as expected

I have the following code:
from typing import Callable

MyCallable = Callable[[object], int]
MyCallableSubclass = Callable[['MyObject'], int]

def get_id(obj: object) -> int:
    return id(obj)

def get_id_subclass(obj: 'MyObject') -> int:
    return id(obj)

def run_mycallable_function_on_object(obj: object, func: MyCallable) -> int:
    return func(obj)

class MyObject(object):
    '''Object that is a direct subclass of `object`'''
    pass

my_object = MyObject()

# works just fine
run_mycallable_function_on_object(my_object, get_id)

# Does not work (it runs, but Mypy raises the following error:)
# Argument 2 to "run_mycallable_function_on_object" has incompatible type "Callable[[MyObject], int]"; expected "Callable[[object], int]"
run_mycallable_function_on_object(my_object, get_id_subclass)
Since MyObject inherits from object, why doesn't MyCallableSubclass work in every place that MyCallable does?
I've read a bit about the Liskov substitution principle, and also consulted the Mypy docs about covariance and contravariance. However, even in the docs themselves, they give a very similar example where they say
Callable is an example of type that behaves contravariant in types of arguments, namely Callable[[Employee], int] is a subtype of Callable[[Manager], int].
So then why is using Callable[[MyObject], int] instead of Callable[[object], int] throwing an error in Mypy?
Overall I have two questions:
Why is this happening?
How do I fix it?
As I was writing this question, I realized the answer to my problem, so I figured I'd still ask the question and answer it to save people some time with similar questions later.
What's going on?
Notice that last example from the Mypy docs:
Callable is an example of type that behaves contravariant in types of arguments, namely Callable[[Employee], int] is a subtype of Callable[[Manager], int].
Here, Manager subclasses from Employee. That is, if something is expecting a function that can take in managers, it's alright if the function it gets overgeneralizes and can take in any employee, because it will definitely take in managers.
However, in our case, MyObject subclasses from object. So, if something is expecting a function that can take in objects, then it's not okay if the function it gets overspecifies and can only take in MyObjects.
Why? Imagine a class called NotMyObject that inherits from object, but doesn't inherit from MyObject. If a function should be able to take any object, it should be able to take in both NotMyObjects and MyObjects. However, the specific function can only take in MyObjects, so it won't work for this case.
How can I fix it?
Mypy is correct. You need the more specific type (MyCallableSubclass) as the annotation; otherwise either your code could have bugs, or you are typing it incorrectly.
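Concretely, the fix sketched out (the function name run_on_my_object is a variant invented for this example): annotate the parameter with the narrower callable type, and contravariance then allows the wider one too.

```python
from typing import Callable

class MyObject:
    '''Direct subclass of object.'''

MyCallableSubclass = Callable[[MyObject], int]

def get_id(obj: object) -> int:
    return id(obj)

def get_id_subclass(obj: MyObject) -> int:
    return id(obj)

def run_on_my_object(obj: MyObject, func: MyCallableSubclass) -> int:
    # Expects Callable[[MyObject], int]; by contravariance of argument
    # types, Callable[[object], int] such as get_id is also accepted.
    return func(obj)

my_object = MyObject()
run_on_my_object(my_object, get_id_subclass)  # OK
run_on_my_object(my_object, get_id)           # also OK for mypy
```

This is the mirror image of the original error: a function over the wider type object can always stand in where one over MyObject is expected, but never the other way around.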
