I tried typing.IO as suggested in "Type hint for a file or file-like object?", but it doesn't work:
from __future__ import annotations
from tempfile import NamedTemporaryFile
from typing import IO

def example(tmp: IO) -> str:
    print(tmp.file)
    return tmp.name

print(example(NamedTemporaryFile()))
For this, mypy tells me:
test.py:6: error: "IO[Any]" has no attribute "file"; maybe "fileno"?
yet Python runs fine, so the code itself is correct.
I don't think this can be easily type hinted.
If you check the definition of NamedTemporaryFile, you'll see that it's a function that ends in:
return _TemporaryFileWrapper(file, name, delete)
And _TemporaryFileWrapper is defined as:
class _TemporaryFileWrapper:
Which means there isn't a super-class that can be indicated, and _TemporaryFileWrapper is "module-private". It also doesn't look like it has any members that make it a part of an existing Protocol * (except for Iterable and ContextManager; but you aren't using those methods here).
I think you'll need to use _TemporaryFileWrapper and ignore the warnings:
from tempfile import _TemporaryFileWrapper  # Weak error

def example(tmp: _TemporaryFileWrapper) -> str:
    print(tmp.file)
    return tmp.name
If you really want a clean solution, you could implement your own Protocol that includes the attributes you need, and have it also inherit from Iterable and ContextManager. Then you can type-hint using your custom Protocol.
* It was later pointed out that it does fulfil IO, but the OP requires attributes that aren't in IO, so that can't be used.
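If you do go the Protocol route, a minimal sketch might look like the following. HasFileAndName is a made-up name, it declares only the two attributes the question actually uses, and attribute members on Protocols require Python 3.8+:

```python
from tempfile import NamedTemporaryFile
from typing import Any, Protocol  # Protocol: Python 3.8+

class HasFileAndName(Protocol):
    # hypothetical Protocol covering just the attributes the example uses
    file: Any
    name: str

def example(tmp: HasFileAndName) -> str:
    print(tmp.file)
    return tmp.name

with NamedTemporaryFile() as tmp:
    print(example(tmp))
```

Since _TemporaryFileWrapper exposes both file and name, it satisfies this Protocol structurally, and no private import is needed.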
Can I use a typed function definition from one stub file in another? E.g., I have my __init__.pyi and also extractor.pyi.
We want to reference some methods from extractor.py, so that we keep compatibility and don't redefine types.
In extractor.pyi I have:
def _func(arg: str) -> str: ...
Now I use in __init__.pyi:
from .extractor import _func
However, my linter is complaining.
Is it, in general, possible to import stubs from other .pyi files? If yes, how should I do it correctly?
I am trying to define a function whose annotation indicates that it returns an object of the same type as
hashlib.sha256(b'password')
Explicitly, what I want to do is: def encrypt(self, password: str) -> _hashlib.HASH:
Unfortunately, "_hashlib.HASH" cannot be resolved by my interpreter; there is an "unknown object" error. How can I handle this?
Add this to the top of your module:
from __future__ import annotations
It will postpone evaluation of the annotations (PEP 563), so they are never executed at runtime. (This was originally slated to become the default behavior in Python 3.10, though it has since been deferred.)
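A sketch of the fix, assuming the _Hash name from typeshed's hashlib stubs (it exists only for type checkers; with postponed evaluation the annotation is never evaluated at runtime, so the missing runtime name does no harm):

```python
from __future__ import annotations

import hashlib

def encrypt(password: str) -> hashlib._Hash:  # _Hash exists only in the stubs
    # with postponed evaluation, the annotation above stays an unevaluated string
    return hashlib.sha256(password.encode())

print(encrypt("password").hexdigest())
```

Note that introspection via typing.get_type_hints() would still fail on this annotation, since hashlib._Hash does not exist at runtime.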
Let's say we have a function definition like this:
def f(*, model: Optional[Type[pydantic.BaseModel]] = None): ...
So the function doesn't require pydantic to be installed until you pass something as model. Now let's say we want to package the function for PyPI. My question is: is there a way to avoid bringing pydantic into the package dependencies only for the sake of type checking?
I tried to follow dspenser's advice, but I found mypy still giving me Name 'pydantic' is not defined error. Then I found this chapter in the docs and it seems to be working in my case too:
from typing import TYPE_CHECKING
if TYPE_CHECKING:
import pydantic
You can use normal classes (instead of string literals) with __future__.annotations (available since Python 3.7):
from __future__ import annotations
from typing import TYPE_CHECKING, Optional, Type

if TYPE_CHECKING:
    import pydantic

def f(*, model: Optional[Type[pydantic.BaseModel]] = None):
    pass
If for some reason you can't use __future__.annotations, e.g. you're on python < 3.7, use typing with string literals from dspenser's solution.
Python's typing module can use string type names, as well as the type itself, as you have in your example.
If you don't want the type to be evaluated when the code is run, and an exception thrown if the module has not been imported, then you may prefer to use the string name. Your code snippet would then be:
def f(*, model: Optional[Type["pydantic.BaseModel"]] = None): ...
Your static type checking using mypy should continue to work, but your package will no longer always require the dependency on pydantic.
As you discovered in the docs, python's typing module includes a way to specify that some imports are only required for (string literal) type annotations:
from typing import TYPE_CHECKING
if TYPE_CHECKING:
import pydantic
This will execute the import pydantic statement only when TYPE_CHECKING is True, primarily when using mypy to analyze the types in the code. When running the program normally, the import statement is not executed. For this reason, the option is only to be used with string literal type hints; otherwise, the missing import will cause the program's normal execution to fail.
As lazy evaluation of type hints was planned to become the default in a future Python release,
from __future__ import annotations
can be used to enable this behaviour in Python 3. With this import statement, string literal types are not required.
I would do something like
try:
    from pydantic import BaseModel as PydanticBaseModel
except ImportError:
    PydanticBaseModel = None
I don't think it's better than what dspenser proposed, but sometimes a style guide may forbid you from using string type names, so I just wanted to point out another solution.
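A sketch of how the fallback might be used in practice (the runtime guard is my own addition, not part of the original answer):

```python
from typing import Any, Optional

try:
    from pydantic import BaseModel as PydanticBaseModel
except ImportError:  # pydantic is an optional dependency
    PydanticBaseModel = None

def f(*, model: Optional[Any] = None) -> None:
    # only touch pydantic-specific behaviour when it is actually available
    if model is not None and PydanticBaseModel is not None:
        assert issubclass(model, PydanticBaseModel)

f()  # works whether or not pydantic is installed
```

The price of this approach is a looser annotation (Any) for the model parameter, since PydanticBaseModel may be None at runtime.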
I want to write mypy-typed code that uses asyncio and works on multiple platforms. Specifically, I often have classes and methods that explicitly bind to an event loop. I want to provide a type annotation for the event loop.
When I check the type of the asyncio event loop on Linux, I get:
>>> import asyncio
>>> type(asyncio.get_event_loop())
<class 'asyncio.unix_events._UnixSelectorEventLoop'>
This type is clearly tied to the Unix/Linux platform.
Now, I can write code that explicitly types the event loop with this type:
import asyncio
from asyncio.unix_events import _UnixSelectorEventLoop  # type: ignore

def func(loop: _UnixSelectorEventLoop) -> None:
    print(loop)

func(asyncio.get_event_loop())
But you'll notice that I have to include a # type: ignore tag on the _UnixSelectorEventLoop import because asyncio.unix_events has no type stubs. I am also hesitant to import a name that is intended to be private, as indicated by the underscore at the start of the class name.
As an alternative, I can use AbstractEventLoop as the type:
import asyncio

def func(loop: asyncio.AbstractEventLoop) -> None:
    print(loop)

func(asyncio.get_event_loop())
And this passes a mypy type check successfully. I am hesitant to use AbstractEventLoop as my type because it is an abstract type.
Is there an alternative type signature that works across platforms, does not require the use of abstract class definitions, and passes mypy type checking?
If you look at the CPython source code, AbstractEventLoop is actually the correct, OS independent definition of the event loop.
You can find the source code in question here.
So I think you are actually right and should feel good about this type-hint choice.
(You may read this question for some background)
I would like to have a gracefully-degrading way to pickle objects in Python.
When pickling an object, let's call it the main object, sometimes the Pickler raises an exception because it can't pickle a certain sub-object of the main object. For example, an error I've been getting a lot is "can’t pickle module objects." That is because I am referencing a module from the main object.
I know I can write up a little something to replace that module with a facade that would contain the module's attributes, but that would have its own issues (1).
So what I would like is a pickling function that automatically replaces modules (and any other hard-to-pickle objects) with facades that contain their attributes. That may not produce a perfect pickling, but in many cases it would be sufficient.
Is there anything like this? Does anyone have an idea how to approach this?
(1) One issue would be that the module may be referencing other modules from within it.
You can decide and implement how any previously-unpicklable type gets pickled and unpickled: see standard library module copy_reg (renamed to copyreg in Python 3.*).
Essentially, you need to provide a function which, given an instance of the type, reduces it to a tuple, with the same protocol as the __reduce__ special method (except that __reduce__ takes no arguments, since when provided it's called directly on the object, while the function you provide takes the object as its only argument).
Typically, the tuple you return has 2 items: a callable, and a tuple of arguments to pass to it. The callable must be registered as a "safe constructor" or equivalently have an attribute __safe_for_unpickling__ with a true value. Those items will be pickled, and at unpickling time the callable will be called with the given arguments and must return the unpickled object.
For example, suppose that you want to just pickle modules by name, so that unpickling them just means re-importing them (i.e. suppose for simplicity that you don't care about dynamically modified modules, nested packages, etc, just plain top-level modules). Then:
>>> import sys, pickle, copy_reg
>>> def savemodule(module):
... return __import__, (module.__name__,)
...
>>> copy_reg.pickle(type(sys), savemodule)
>>> s = pickle.dumps(sys)
>>> s
"c__builtin__\n__import__\np0\n(S'sys'\np1\ntp2\nRp3\n."
>>> z = pickle.loads(s)
>>> z
<module 'sys' (built-in)>
I'm using the old-fashioned ASCII form of pickle so that s, the string containing the pickle, is easy to examine: it instructs unpickling to call the built-in import function, with the string sys as its sole argument. And z shows that this does indeed give us back the built-in sys module as the result of the unpickling, as desired.
Now, you'll have to make things a bit more complex than just __import__ (you'll have to deal with saving and restoring dynamic changes, navigate a nested namespace, etc), and thus you'll have to also call copy_reg.constructor (passing as argument your own function that performs this work) before you copy_reg the module-saving function that returns your other function (and, if in a separate run, also before you unpickle those pickles you made using said function). But I hope this simple case helps to show that there's really nothing much to it that's at all "intrinsically" complicated!-)
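For reference, here is a Python 3 version of the same simple case, using copyreg and types.ModuleType; it is a sketch of the same by-name idea, not of the dynamic-changes handling discussed above:

```python
import copyreg
import pickle
import sys
import types

def save_module(module):
    # reduce a module to "re-import it by name" at unpickling time
    return __import__, (module.__name__,)

# register the reducer for all module objects
copyreg.pickle(types.ModuleType, save_module)

data = pickle.dumps(sys)
restored = pickle.loads(data)
print(restored is sys)  # True: unpickling re-imported the cached module
```

copyreg.pickle fills copyreg.dispatch_table, which pickle.Pickler consults by default, so plain pickle.dumps picks up the registration automatically.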
How about the following, which is a wrapper you can use to wrap some modules (maybe any module) in something that's pickle-able. You could then subclass the Pickler object to check if the target object is a module, and if so, wrap it. Does this accomplish what you desire?
import pickle

class PickleableModuleWrapper(object):
    def __init__(self, module):
        # make a copy of the module's namespace in this instance
        self.__dict__ = dict(module.__dict__)
        # remove anything that's going to give us trouble during pickling
        self.remove_unpickleable_attributes()

    def remove_unpickleable_attributes(self):
        # iterate over a copy, since we delete entries as we go
        for name, value in list(self.__dict__.items()):
            try:
                pickle.dumps(value)
            except Exception:
                del self.__dict__[name]

p = pickle.dumps(PickleableModuleWrapper(pickle))
wrapped_mod = pickle.loads(p)
Hmmm, something like this?
import sys

attribList = dir(someobject)
for attrib in attribList:
    value = getattr(someobject, attrib)
    if type(value) == type(sys):  # is a module
        # put in a facade, either recursively list the module and do the
        # same thing, or just put in something like str('modulename_module')
        pass
    else:
        # proceed with normal pickling
        pass
Obviously, this would go into an extension of the pickle class with a reimplemented dump method...
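On Python 3.8+, one way to sketch this "extension of the pickle class" is the Pickler.reducer_override hook, which lets a subclass intercept modules and substitute a facade (here just a placeholder string, as in the pseudocode above):

```python
import io
import pickle
import sys
import types

class ModuleFacadePickler(pickle.Pickler):
    # reducer_override (Python 3.8+) is consulted before normal dispatch
    def reducer_override(self, obj):
        if isinstance(obj, types.ModuleType):
            # replace the module with a simple string facade
            return str, (obj.__name__ + "_module",)
        return NotImplemented  # everything else pickles normally

buf = io.BytesIO()
ModuleFacadePickler(buf).dump({"mod": sys, "x": 1})
print(pickle.loads(buf.getvalue()))  # {'mod': 'sys_module', 'x': 1}
```

Returning NotImplemented hands everything that isn't a module back to the regular pickling machinery, so only the troublesome objects are swapped out.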