I have declared a class as follows:
class Runattr:
    pass
run = Runattr()
run.cfg = 'some str'
When I try to access run.cfg later in my code, PyCharm gives me the following warning, and I am not able to autocomplete run.cfg:
Unresolved attribute reference 'cfg' for class 'Runattr'
Inspection info: This inspection detects names that should resolve but don't. Due to dynamic dispatch and duck typing, this is possible in a limited but useful number of cases. Top-level and class-level items are supported better than instance items.
I am new to using classes. Can someone tell me why I am seeing this warning, and how my code can be modified to get rid of it?
It's just a PyCharm warning (not an error!), since it can't figure out whether that class really should have a cfg attribute.
Assuming you're using a modern Python version, add an annotation for such a field:
class Runattr:
    cfg: str  # declares that instances may carry a string attribute `cfg`
run = Runattr()
run.cfg = 'some str'
Classes and instances can nevertheless have arbitrary attributes (you can even do setattr(run, "oh this is fun", True) to create an attribute whose name isn't a valid Python identifier); it's just that IDEs can't be infinitely smart about this dynamic behaviour.
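For illustration, here is a small sketch of that dynamic behaviour (the attribute names are just examples):

class Runattr:
    cfg: str  # the annotation from above; no value is assigned here

run = Runattr()
run.cfg = 'some str'

# Attributes can be attached at runtime, even under names that are not
# valid identifiers; those are only reachable via setattr/getattr.
setattr(run, "oh this is fun", True)
print(run.cfg)                          # 'some str'
print(getattr(run, "oh this is fun"))   # True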
You have to declare a class-level field named cfg:
class Runattr:
    cfg = ''
run = Runattr()
run.cfg = 'some str'
For context, I am using the Python ctypes library to interface with a C library. It isn't necessary to be familiar with C or ctypes to answer this question, however. All of this is taking place in the context of a Python module I am creating.
In short, my question is: how can I get Python linters (e.g. PyCharm or a plugin for Neovim) to resolve names on objects that are created at runtime? "You can't" is not an answer ;). Of course there is always a way, with scripting and the like; I want to know what the easiest approach would look like.
First I introduce my problem and the current approach I am taking. Second, I will describe what I want to do, and ask how.
Within this C library, a whole bunch of error codes are defined. I translated this information from the .h header file into a Python enum:
# CustomErrors.py
from enum import Enum

class CustomErrors(Enum):
    ERROR_BROKEN = 1
    ERROR_KAPUTT = 2
    ERROR_BORKED = 3
Initially, my approach was to have a single exception class containing a type field which describes the specific error:
# CustomException.py
from CustomErrors import CustomErrors

class CustomException(Exception):
    def __init__(self, customErr):
        assert type(customErr) is CustomErrors
        self.type = customErr
        super().__init__()
Then, as needed I can raise CustomException(CustomErrors.ERROR_KAPUTT).
Now, what I want to do is create a separate exception class corresponding to each of the enum items in CustomErrors. I believe it is possible to create types at runtime with MyException = type('MyException', (Exception,), {'__doc__' : 'Docstring for ABC class.'}).
I can create the exception classes at runtime like so:
# CustomException.py
from CustomErrors import CustomErrors
...
for ce in CustomErrors:
    n = ce.name
    vars()[n] = type(n, (Exception,), {'__doc__': 'Docstring for {0:s} class.'.format(n)})
Note: the reason I want to create these at runtime is to avoid hard-coding a list of exceptions that may change in the future. I already have the problem of extracting the C enum automatically on the back burner.
This is all well and good, but I have a problem: static analysis cannot resolve the names of these exceptions defined in CustomException. This means PyCharm and other editors for Python will not be able to automatically resolve the names of the exceptions as a suggested autocomplete list when the user types CustomException.. This is not acceptable, as this is code for the end user, who will need to access the exception names for use in try-except constructs.
Here is the only solution I have been able to think of: writing a script which generates the .py files containing the exception names. I can do this using bash. Maybe people will tell me this is really the only option. But I would like to know what other approaches are suggested for solving this problem. Thanks for reading.
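For concreteness, here is roughly what such a generator could look like in Python rather than bash (just a sketch; the output file name is arbitrary):

# generate_exceptions.py -- sketch of a code generator that writes a static
# module, so that linters can see the exception names at analysis time.
from CustomErrors import CustomErrors

TEMPLATE = (
    "class {name}(Exception):\n"
    "    \"\"\"Docstring for {name} class.\"\"\"\n\n"
)

with open("CustomException.py", "w") as f:
    f.write("# Auto-generated from CustomErrors; do not edit by hand.\n\n")
    for ce in CustomErrors:
        f.write(TEMPLATE.format(name=ce.name))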
You can add a comment to tell mypy to ignore dynamically defined attribute errors. Perhaps the linters that you use share a similar way to silence such errors.
mypy docs on silencing errors based on error codes
This example shows how to ignore an error about an imported name mypy thinks is undefined:
# 'foo' is defined in 'foolib', even though mypy can't see the
# definition.
from foolib import foo # type: ignore[attr-defined]
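Applied to the dynamically created exceptions from your question, the same idea would look roughly like this (whether your linter honours this exact comment syntax is an assumption on my part):

# client code; ERROR_KAPUTT is created at runtime in CustomException.py
from CustomException import ERROR_KAPUTT  # type: ignore[attr-defined]

try:
    raise ERROR_KAPUTT("something broke")
except ERROR_KAPUTT as exc:
    print(exc)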
import util

class C():
    save = util.save

setattr(C, 'load', util.load)
C.save is visible to the linter - but C.load isn't. There is thus some difference between assigning class methods from within the class itself and from outside it. The same goes for documentation builders; e.g. Sphinx won't acknowledge :meth:`C.load` - instead one has to use :func:`util.load`, which is misleading if load is meant to be C's method. An IDE (Spyder) also fails to "go to" the method via self.load in code.
The end goal is to make the linter (plus docs and IDE) recognize load as C's method just like C.save, while keeping the class method assignment dynamic (context). Can this be accomplished?
Note: the purpose of dynamic assignment is to automatically pull methods from modules (e.g. util) instead of having to manually update C upon method addition / removal.
Disclaimer: This solution does not work in all use cases, see comments. I leave it here, since it might still be useful under some circumstances.
I don't know about the linter and Sphinx, but an IDE like PyCharm will recognize the method if you declare it upfront using type hinting. In the code below, without the line load: Callable[[], None], I get the warning 'Unresolved attribute reference', but with the line there are no warnings in the file. Check the docs for more information about type hinting.
Notes:
Even with the more general load: Callable, the type checker is satisfied.
If you don't always declare a callable, a valid declaration is load: Optional[Callable] = None. This means that the default value is None. If you then call it without ever setting it, you will get an error, but you would have got that anyway; it's unrelated to the typing. (A short variant using this form is sketched after the example below.)
P.S. I don't have your util module, so I defined some functions in the file itself.
from typing import Callable

def save():
    pass

def load():
    pass

class C:
    load: Callable[[], None]
    save = save

setattr(C, 'load', load)
C.load()
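A variant with the Optional default mentioned in the notes above (again just a sketch):

from typing import Callable, Optional

def load():
    print('loading')

class C:
    load: Optional[Callable] = None  # stays None until it is assigned dynamically

setattr(C, 'load', load)

if C.load is not None:
    C.load()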
When the Python help function is invoked with an argument of string type, it is interpreted by pydoc.Helper.help as a request for information on the topic, symbol, keyword or module identified by the value of the string. For other arguments, help on the object itself is provided, unless the object is an instance of a subclass of str. In this latter case, the pydoc.resolve function looks for a module with a name matching the value of the object and raises an exception if none is found.
To illustrate this, consider the example code:
class Extra(object):
    def NewMethod(self): return 'New'

Cls1 = type('FirstClass', (str, Extra), {'__doc__': 'My new class', 'extra': 'An extra attribute'})
inst1 = Cls1('METHODS')
help('METHODS')
help(inst1)
The first invocation of help produces information on the topic "METHODS"; the second produces an error message because the pydoc.resolve function is trying to find a module called "METHODS".
This means that it is difficult to provide effective documentation for user-defined subclasses of str. Would it not be possible for pydoc.resolve to use a test on the type of the object, as is done in pydoc.Helper.help, and allow instances of user-defined subclasses to be treated like other class instances?
This question follows from earlier discussion of a related question here.
The simple answer is that making user-defined subclasses of str is not the most common case - partly because the user-defined data, but not the string data, would be mutable. By the time you have to deal with such a subclass, it's assumed that you know how to write help(type(x)), and using isinstance rather than exact type(...) comparisons is the correct default in general. (The other way around, you'd have to use help(str(x)) if you wanted to use the value, like any other string, to select a help topic, but that's surely even rarer.)
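In terms of the example above, the two workarounds look like this (a sketch, reusing the Cls1/inst1 definitions from the question):

help(type(inst1))   # documentation for the FirstClass type, regardless of the string value
help(str(inst1))    # treats the value as a plain string again, i.e. the "METHODS" help topic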
I'd like to provide documentation (within my program) on certain dynamically created objects, but still fall back to using their class documentation. Setting __doc__ seems a suitable way to do so. However, I can't find many details in the Python help in this regard: are there any technical problems with providing documentation on an instance? For example:
class MyClass:
    """
    A description of the class goes here.
    """

a = MyClass()
a.__doc__ = "A description of the object"

print(MyClass.__doc__)
print(a.__doc__)
__doc__ is documented as a writable attribute for functions, but not for instances of user defined classes. pydoc.help(a), for example, will only consider the __doc__ defined on the type in Python versions < 3.9.
Other protocols (including future use-cases) may reasonably bypass the special attributes defined in the instance dict, too. See Special method lookup section of the datamodel documentation, specifically:
For custom classes, implicit invocations of special methods are only guaranteed to work correctly if defined on an object’s type, not in the object’s instance dictionary.
So, depending on the consumer of the attribute, what you intend to do may not be reliable. Avoid.
A safe and simple alternative is just to use a different attribute name of your own choosing for your own use-case, preferably not using the __dunder__ syntax convention which usually indicates a special name reserved for some specific use by the implementation and/or the stdlib.
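For example, with an ordinary attribute name (description is just a name picked for illustration):

class MyClass:
    """A description of the class goes here."""

a = MyClass()
a.description = "A description of this particular object"  # plain attribute, no special semantics

print(MyClass.__doc__)    # the class docstring, which help() and friends will use
print(a.description)      # your own per-instance documentation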
There are some pretty obvious technical problems; the question is whether or not they matter for your use case.
Here are some major uses for docstrings that your idiom will not help with:
help(a): Type help(a) in an interactive terminal, and you get the docstring for MyClass, not the docstring for a.
Auto-generated documentation: Unless you write your own documentation generator, it's not going to understand that you've done anything special with your a value. Many doc generators do have some way to specify help for module and class constants, but I'm not aware of any that will recognize your idiom.
IDE help: Many IDEs will not only auto-complete an expression, but show the relevant docstring in a tooltip. They all do this statically, and without some special-case code designed around your idiom (which they're unlikely to have, given that it's an unusual idiom), they're almost certain to fetch the docstring for the class, not the object.
Here are some where it might help:
Source readability: As a human reading your source, I can tell the intent from the a.__doc__ = … right near the construction of a. Then again, I could tell the same intent just as easily from a Sphinx comment on the constant.
Debugging: pdb doesn't really do much with docstrings, but some GUI debuggers wrapped around it do, and most of them are probably going to show a.__doc__.
Custom dynamic use of docstrings: Obviously any code that you write that does something with a.__doc__ is going to get the instance docstring if you want it to, and therefore can do whatever it wants with it. However, keep in mind that if you want to define your own "protocol", you should use your own name, not one reserved for the implementation.
Notice that most of the same is true for using a descriptor for the docstring:
>>> class C:
...     @property
...     def __doc__(self):
...         return 'C doc'
...
>>> c = C()
If you type c.__doc__, you'll get 'C doc', but help(c) will treat it as an object with no docstring.
It's worth noting that making help work is one of the reasons some dynamic proxy libraries generate new classes on the fly—that is, a proxy to underlying type Spam has some new type like _SpamProxy, instead of the same GenericProxy type used for proxies to Hams and Eggseses. The former allows help(myspam) to show dynamically-generated information about Spam. But I don't know how important a reason it is; often you already need dynamic classes to, e.g., make special method lookup work, at which point adding dynamic docstrings comes for free.
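A rough illustration of that proxy pattern (the class and helper names here are made up for the example):

class Spam:
    """The real Spam class."""
    def sizzle(self):
        return "sizzle"

def make_proxy_class(target_cls):
    # Build a per-target proxy class so that its __doc__ (and therefore
    # help()) reflects the wrapped type instead of a generic proxy.
    name = "_{}Proxy".format(target_cls.__name__)
    doc = "Proxy to {}.\n\n{}".format(target_cls.__name__, target_cls.__doc__ or "")
    return type(name, (object,), {"__doc__": doc})

SpamProxy = make_proxy_class(Spam)
help(SpamProxy)   # shows the dynamically generated docstring for this proxy type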
I think it's preferable to keep the documentation on the class via its docstring, as that will also aid any developer who works on the code. However, if you are doing something dynamic that requires this setup, then I don't see any reason why not. Just understand that it adds a level of indirection that makes things less clear to others.
Remember to K.I.S.S. where applicable :)
I just stumbled over this and noticed that, at least with Python 3.9.5, the behavior seems to have changed.
E.g. using the above example, when I call:
help(a)
I get:
Help on MyClass in module __main__:
<__main__.MyClass object>
A description of the object
Also for reference, have a look at the pydoc implementation which shows:
def _getowndoc(obj):
    """Get the documentation string for an object if it is not
    inherited from its class."""
    try:
        doc = object.__getattribute__(obj, '__doc__')
        if doc is None:
            return None
        if obj is not type:
            typedoc = type(obj).__doc__
            if isinstance(typedoc, str) and typedoc == doc:
                return None
        return doc
    except AttributeError:
        return None
I'm having some trouble getting mypy to accept type objects. I am convinced I'm just doing it wrong, but my Google searches have not led me to any answers so far.
class Good(object):
    a = 1

def works(thing: Good):
    print(thing.a)

o = Good()
works(o)

Bad = type('Bad', (object,), dict(a=1))

def fails_mypy(thing: Bad):
    print(thing.a)

s = Bad()
fails_mypy(s)
Things constructed like 'Good' are ok, while things constructed like 'Bad' fail mypy checks with:
error: Invalid type "test.Bad"
error: Bad? has no attribute "a"
Based on the Unsupported Python Features section of mypy's wiki, runtime creation of classes like this isn't currently supported. mypy cannot understand what Bad is in your function definition. Using reveal_type(Good) and reveal_type(Bad) when running mypy should make this clear.
One approach to silencing these errors is to use Any, either with the Python 3.6 variable annotation syntax:
Bad: Any = type('Bad', (), {'a':1})
or, with Python < 3.6:
Bad = type('Bad', (), {'a':1}) # type: Any
(in both cases Any should first be imported from typing)
Of course, this basically means that your function now accepts anything. That is a price to pay, but it's what you get with dynamic languages; since Bad is defined at runtime, it can theoretically be anything :-)