For example:
class MainHandler(tornado.web.RequestHandler):
    def get(self):
        self.render('data.html', items=[])
It yields the following Pylint error:
warning (W0223, abstract-method, MainHandler) Method 'data_received' is abstract in class 'RequestHandler' but is not overridden
I understand that somehow it wants me to override this data_received method, but I do not understand why, or what it is for.
This is actually a problem with Pylint that's sort of unavoidable with the nature of Python.
The RequestHandler class has a lot of methods that act as hooks you can override to do different things, but only some of those hooks will actually be called, depending on your application's code. To make sure you implement everything you're supposed to when you use certain functionality, the default data_received implementation raises a NotImplementedError, which gets triggered when you do something that expects your class to have a custom implementation.
Normally this isn't an issue, because Python happily lets you have code paths that would fail at runtime without complaining up front. But because Pylint tries to "help" by making sure you've done everything you're supposed to, it sees that NotImplementedError and warns you that you could trigger it, depending on what you do.
The real problem is that because Python is an interpreted language, it's hard for a tool like Pylint to look at your code and make sure it's "safe". Python gives you a lot of flexibility and power, but in turn you bear the burden of keeping your program's logic straight in your head and knowing what possible problems are actually problems, and what aren't.
Luckily, Pylint is aware of its own limitations and gives you nice tools to disable extraneous warnings. Add the comment line
# pylint: disable=W0223
right before your class definition and the warning should stop popping up for this instance while leaving everything else alone.
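For instance, applied to the handler from the question (a minimal sketch reusing the same class):

# pylint: disable=W0223
class MainHandler(tornado.web.RequestHandler):
    def get(self):
        self.render('data.html', items=[])

Alternatively, if you'd rather satisfy the warning than suppress it, a no-op override works too; as far as I know, Tornado only calls data_received for streamed request bodies (handlers decorated with @tornado.web.stream_request_body):

class MainHandler(tornado.web.RequestHandler):
    def data_received(self, chunk):
        pass  # unused unless this handler streams its request body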
I am running into the same issue as the OP, except my PyCharm (2018.3.4) seems not to be using Pylint but its own inspection engine. I managed to solve a similar issue with a trick like the one R Phillip Castagna suggested:
# noinspection PyAbstractClass
class XyzRequestHandler(tornado.web.RequestHandler):
    def prepare(self):
        print('-' * 100)
        self.set_header('Access-Control-Allow-Origin', '*')

    def print_n_respond(self, result):
        response = json.dumps(result)
        print('responding with:\n', response)
        print()
        self.write(response)
Here is a list of PyCharm's inspections.
Related
I'm relatively new to Python, using VSCode for Python development. As far as I can tell, VSCode is using an extension called "Pylance" to handle Python support features, such as detecting errors in code as you write.
In the last language I spent time with (Scala), there was a great little expression ??? that could be used to mark a method as incomplete / unimplemented, so that it would not generate any errors in the compiler or IDE, and would just throw an exception if encountered at runtime.
Is there any equivalent in Python, specifically one that would be understood by Pylance? The idea is that there is an unimplemented function, or one with a return type hint that isn't satisfied because it is incomplete, but this wouldn't throw up any errors that would make it harder to find the problems with the part I'm actually working on.
Obviously this isn't critical, just a matter of preferences, but it would be nice to keep the signal-to-noise ratio down! Thank you
You can use pass inside a function definition to avoid having to give the function a body; this will not raise exceptions on its own. Alternatively, you can raise NotImplementedError to raise an error when the function is called.
def foo():
    pass

def baz():
    raise NotImplementedError
EDIT: Similar to pass, in Python 3+ an ellipsis can be used to indicate code that has not yet been written, e.g.
def foo(): ...
If you want it to throw an exception at runtime, the standard thing is to just raise NotImplementedError:
def some_fn():
    raise NotImplementedError
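As for Pylance specifically: its underlying checker (Pyright) generally treats a function whose body is only an ellipsis as a stub and, in my experience, won't flag the unsatisfied return annotation, which is close in spirit to Scala's ???. A minimal sketch (the function names are just illustrative):

from typing import List

def load_items() -> List[str]:
    ...  # placeholder body; typically accepted by Pyright/Pylance as a stub

def compute_total() -> int:
    raise NotImplementedError  # unconditional raise, so no missing-return warning

Note the trade-off: the ellipsis version silently returns None if called at runtime, while the raising version fails loudly at runtime but stays equally quiet in the editor.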
I am trying to implement a Python class to facilitate easy exploration of a relatively large dataset in a Jupyter notebook by exposing various (somewhat compute-intensive) filter methods as class attributes using the descriptor protocol. The idea was to take advantage of the laziness of descriptors to only compute on access of a particular attribute.
Consider the following snippet:
import time

accessed_attr = []  # I find this easier than using basic logging for jupyter/ipython

class MyProperty:
    def __init__(self, name):
        self.name = name

    def __get__(self, instance, owner):
        if instance is None:
            return self
        accessed_attr.append(f'accessed {self.name} from {instance} at {time.asctime()}')
        setattr(instance, self.name, self.name)
        return self.name  # just return string

class Dummy:
    abc = MyProperty('abc')
    bce = MyProperty('bce')
    cde = MyProperty('cde')

dummy_inst = Dummy()  # instantiate the Dummy class
On typing dummy_inst.<tab>, I assumed Jupyter would show the completions abc, bce, cde among the other hidden methods and not evaluate them. Printing the logging list accessed_attr shows that the __get__ methods of all three descriptors were called, which is not what I expect or want.
A hacky way I figured out was to defer the first access to each descriptor using a counter (shown in the image below), but that has its own issues.
I tried other ways using __slots__ and modifying __dir__ to trick the kernel, but couldn't find a way around the issue.
I understand there is another way using __getattribute__, but it still doesn't seem elegant. I am puzzled that something that seemed so trivial turned out to be a mystery to me. Any hints, pointers, and solutions are appreciated.
Here is my Python 3.7-based environment:
{'IPython': '7.18.1',
'jedi': '0.17.2',
'jupyter': '1.0.0',
'jupyter_core': '4.6.3',
'jupyter_client': '6.1.7'}
It's unfortunately a cat-and-mouse battle. IPython used to aggressively explore attributes, which ended up being deactivated because of side effects (see, for example, why the IPCompleter.limit_to__all__ option was added), though other users then came to complain that dynamic attributes don't show up. So it's likely jedi that is looking at those attributes. You can try setting c.Completer.use_jedi = False to check that. If it's jedi, then you have to ask the jedi authors; if not, then I'm unsure, but it's a delicate balance.
Lazy vs. exploratory is a really complicated subject in IPython. You might be able to register a custom completer (even for dict keys) that makes it easier to explore without computing, or use async/await to make sure that only an explicit await obj.attr triggers the computation.
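For reference, a minimal way to test the jedi hypothesis (this uses IPython's standard configuration mechanism; the %config form works interactively in a notebook):

# Interactively, in a notebook or IPython session:
#   %config Completer.use_jedi = False

# Or persistently, in ipython_config.py:
c.Completer.use_jedi = False

If dummy_inst.<tab> stops appending entries to accessed_attr after this, jedi's attribute inspection was the trigger.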
To give details: this is a constrained environment with Python 2.7.14, Pylint 1.8.1, and Astroid 1.6.0 and various other modules, but I can't easily install new modules such as mypy (not even with virtualenv) or make major changes to the codebase.
Due to https://github.com/PyCQA/pylint/issues/400 I'm getting errors from pylint on some of my code. I've worked around this issue in a hacky way (by removing the integer argument to the wait() method) and now I don't get errors, but no checking is done (pylint can't determine what the type of the variable is at all).
What I'd really like to do is tell pylint what the return type is. I've tried reading the Astroid docs, perusing the various astroid/brains files, and looking at other SO questions and answers such as Set multiple inferred types based on arguments for pylint plugin and pylint, coroutines, decorators and type inferencing, but I guess I'm just not getting it. I have added my own plugin via load-plugins, and I know it's loaded by pylint (because it had syntax errors at first :)). But I'm just not sure what it needs to do.
Suppose I had this code:
class Peer(object):
    def get_domain(self):
        return "foo"

class TestListener(object):
    def __init__(self):
        self._latest = None

    def get(self, timeout):
        # tricks here
        return self._latest

    def peer_joined(self, peer):
        self._latest = peer

peer = TestListener().get(3)
print(peer.get_domain())
print(peer.get_foo())
During # tricks here we are really waiting on a threading.Event(), during which another thread will invoke peer_joined() and pass an object of type Peer, but pylint doesn't grok this.
What I'd like to do is annotate the TestListener.get() method to tell pylint that it will return a Peer object, so that pylint will catch the error in the second print call.
I've tried this, but clearly I'm missing something fundamental, since it appears my transform method is never even invoked (if I put a bogus method call there, no error is printed):
from astroid import MANAGER, register_module_extender
from astroid.builder import AstroidBuilder

def TestListener_transform():
    return AstroidBuilder(MANAGER).string_build('''
class TestListener(object):
    def get(self, timeout):
        return Peer()
''')

register_module_extender(MANAGER, 'TestListener', TestListener_transform)

# for pylint load-plugins
def register(linter):
    pass
def get(self, timeout):
    """
    :param timeout: How many seconds to wait
    :type timeout: int
    :rtype: Peer
    """
    # tricks here
    return self._latest
This is the reStructuredText format for writing a Python docstring (a multiline string that appears right after the function definition).
You can document all the parameter types there, as well as what the function returns along with its datatype.
pylint can parse that docstring to figure out the datatype of the function's return value.
This Stack Overflow answer has a more comprehensive explanation:
What is the standard Python docstring format?
One thing to mention is that pylint currently has limited capabilities for type checking compared to those seen in mypy. It relies more on inferring values and types when no type hints are present, although the intent is to support PEP 484 typing at some point.
Now, regarding the question: I see that you are using a module extender, while in fact you should use a class transform, as in:
astroid.MANAGER.register_transform(astroid.ClassDef, _class_transform,
                                   optional_transform_predicate)
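To make that concrete, here is a rough sketch of what the plugin could look like, adapted from the module-extender attempt above. The predicate and transform names are mine, and the import path for Peer inside the fake class body is a placeholder for wherever Peer actually lives; treat this as an illustration of the register_transform wiring rather than a drop-in plugin:

import astroid
from astroid import MANAGER
from astroid.builder import AstroidBuilder

def _looks_like_test_listener(node):
    # Predicate: only transform the class we care about.
    return node.name == 'TestListener'

def _class_transform(node):
    # Build a fake class whose get() visibly returns a Peer, then graft its
    # get() onto the real class so pylint infers Peer as the return type.
    fake = AstroidBuilder(MANAGER).string_build('''
from mymodule import Peer  # placeholder import path

class TestListener(object):
    def get(self, timeout):
        return Peer()
''')
    node.locals['get'] = fake.locals['TestListener'][0].locals['get']

def register(linter):
    MANAGER.register_transform(
        astroid.ClassDef, _class_transform, _looks_like_test_listener)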
I'm developing a Jinja extension that performs a few potentially harmful operations, which on failure will raise an Exception that holds some information on what went wrong.
When this occurs, the Exception will of course prevent Jinja from completing its render process, and it will return None rather than a render result. This problem is easily mitigated using a try/except statement and returning an empty string or whatever is suitable.
However, that effectively throws away the Exception and the debugging info, which I would rather pass on to my error log. The system responsible for setting up the Jinja environment has its own logging service, which I would prefer to keep decoupled from the extension. I am aware, though, that Jinja has an Undefined class, which I have successfully used to intercept access of undefined variables in another project.
Is there any way I could raise a special type of exception (UndefinedException did not work), or tell the Jinja environment to log a warning, when an error occurs in my extension, while still allowing it to continue its execution?
What I have in mind is something along the lines of this example:
def _render(*args, **kwargs):
    # ...
    try:
        self.potentially_harmful_operation()
    except Exception as e:
        self.environment.warning(e)
        return Markup("")
    # ...
    return Markup('<img src="{}">'.format(img_path))
So, to clarify, as I wrote in a comment below:
Fundamentally I guess my question boils down to "how can I make Jinja produce a warning, as it does for e.g. missing variables".
For anyone interested, I managed to solve this by implementing my own Undefined class (largely inspired by make_logging_undefined) and supplying that to the environment using the undefined keyword. Depending on your needs, the default make_logging_undefined might very well suffice.
Once that's in place, the trick is to emit the warning via the environment. This is done by simply instantiating the environment's Undefined and executing one of its magic methods (e.g. __str__). Hooking on to my example code from above, that could be accomplished like this:
def _render(*args, **kwargs):
    # ...
    try:
        self.potentially_harmful_operation()
    except Exception as e:
        str(self.environment.undefined(e))
        return Markup("")
    # ...
    return Markup('<img src="{}">'.format(img_path))
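For completeness, here is a minimal sketch of wiring make_logging_undefined into the environment (make_logging_undefined and the undefined keyword are documented Jinja2 API; the logger name is arbitrary):

import logging
from jinja2 import Environment, Undefined, make_logging_undefined

logger = logging.getLogger(__name__)

# An Undefined subclass that logs a warning whenever it is used in a render
LoggingUndefined = make_logging_undefined(logger=logger, base=Undefined)

env = Environment(undefined=LoggingUndefined)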
I've started to learn Python in the past few days, and while exploring object-oriented programming I'm running into problems.
I'm using Eclipse with the PyDev plugin, am running on the Python 3.3 beta, and am using a 64-bit Windows system.
I can initialize a class fine and use any methods within it, as long as I'm not trying to extend the superclass (I've coded each class in a different source file).
For example, the following code compiles and runs fine.
class pythonSuper:
    string1 = "hello"

    def printS():
        print pythonSuper.string1
and the code to access and run it...
from stackoverflow.questions import pythonSuper

class pythonSub:
    pysuper = pythonSuper.pythonSuper()
    pysuper.printS()
Like I said, that works. The following code doesn't
class pythonSuper: """Same superclass as above. unmodified, except for the spacing"""
string1 = "hello"
def printS(self):
print(pythonSuper.string1)
Well, that's not quite true. The superclass is absolutely fine, at least to my knowledge. It's the subclass that weirds out
from stackoverflow.questions import pythonSuper

class pythonSub(pythonSuper):
    pass

pythonObject = pythonSub()
pythonSub.pythonSuper.printS()
When the subclass is run, Eclipse prints out this error:
Traceback (most recent call last):
  File "C:\Users\Anish\workspace\Python 3.3\stackoverflow\questions\pythonSub.py", line 7, in <module>
    class pythonSub(pythonSuper):
TypeError: module.__init__() takes at most 2 arguments (3 given)
I have no idea what's going on. I've been learning Python from thenewboston's tutorials, but those are outdated (I think his tutorial code uses Python 2.7). He also codes in IDLE, which means that his classes are all contained in one file; mine, however, are all coded in files of their own. That means I have no idea whether the code errors I'm getting are the result of outdated syntax or my lack of knowledge of this language. But I digress.
If anyone could post back with a solution and/or explanation of why the code is going wrong and what I could do to fix it, an explanation would be preferred. I'd rather know what I'm doing wrong, so I can avoid and fix the problem in similar situations, than just copy and paste some code and see that it works.
Thanks, and I look forward to your answers.
I ran your code, albeit with a few modifications, and it runs perfectly. Here is my code:
pythonSuper:
class pythonSuper:
    string1 = 'hello'

    def printS(self):
        print(self.string1)
main:
from pythonSuper import pythonSuper as pySuper

class pythonSub(pySuper):
    pass

pythonObject = pythonSub()
pythonObject.printS()
NOTE: The change I have made to your code is the following: in your code, you have written pythonSub.pythonSuper.printS(), which is not correct, because pythonSub already supports a printS() method, directly inherited from the superclass. So there is no need to refer to the superclass explicitly in that statement. The statement I used to replace it, pythonObject.printS(), addresses this issue.
pythonSuper refers to the module, not the class.
class pythonSub(pythonSuper.pythonSuper):
    pass
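Equivalently, you can import the class out of the module so that the base-class list names the class directly (assuming the package layout from the question):

from stackoverflow.questions.pythonSuper import pythonSuper

class pythonSub(pythonSuper):
    pass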