I've written a CLI with click, originally as a module, and it worked fine. But since my project has grown I now need attributes the CLI can work with, so I tried to turn it into a class, and I'm running into an error doing that. My code looks like the following:
import click
import click_repl
import os
from prompt_toolkit.history import FileHistory

class CLI:
    def __init__(self):
        pass

    @click.group(invoke_without_command=True)
    @click.pass_context
    def cli(self, ctx):
        if ctx.invoked_subcommand is None:
            ctx.invoke(self.repl)

    @cli.command()
    def foo(self):
        print("foo")

    @cli.command()
    def repl(self):
        prompt_kwargs = {
            'history': FileHistory(os.path.expanduser('~/.repl_history'))
        }
        click_repl.repl(click.get_current_context(), prompt_kwargs)

    def main(self):
        while True:
            try:
                self.cli(obj={})
            except SystemExit:
                pass

if __name__ == "__main__":
    foo = CLI()
    foo.main()
Without all the selfs and the class CLI: wrapper the CLI works as expected, but as a class it runs into an error: TypeError: cli() missing 1 required positional argument: 'ctx'. I don't understand why this happens. As far as I know, calling self.cli() should pass self automatically, and obj={} should end up as ctx.obj, so wrapping cli in a class shouldn't make any difference.
Can someone explain why this happens and, more importantly, how I can fix it?
In case it's relevant, here is the complete error stack trace:
Traceback (most recent call last):
  File "C:/Users/user/.PyCharmCE2018.2/config/scratches/exec.py", line 37, in <module>
    foo.main()
  File "C:/Users/user/.PyCharmCE2018.2/config/scratches/exec.py", line 30, in main
    self.cli(obj={})
  File "C:\Users\user\AppData\Local\Programs\Python\Python37\lib\site-packages\click\core.py", line 764, in __call__
    return self.main(*args, **kwargs)
  File "C:\Users\user\AppData\Local\Programs\Python\Python37\lib\site-packages\click\core.py", line 717, in main
    rv = self.invoke(ctx)
  File "C:\Users\user\AppData\Local\Programs\Python\Python37\lib\site-packages\click\core.py", line 1114, in invoke
    return Command.invoke(self, ctx)
  File "C:\Users\user\AppData\Local\Programs\Python\Python37\lib\site-packages\click\core.py", line 956, in invoke
    return ctx.invoke(self.callback, **ctx.params)
  File "C:\Users\user\AppData\Local\Programs\Python\Python37\lib\site-packages\click\core.py", line 555, in invoke
    return callback(*args, **kwargs)
  File "C:\Users\user\AppData\Local\Programs\Python\Python37\lib\site-packages\click\decorators.py", line 17, in new_func
    return f(get_current_context(), *args, **kwargs)
TypeError: cli() missing 1 required positional argument: 'ctx'
EDIT: The problem seems to be the pass_context call. Usually pass_context provides the current context as the first parameter to the function, so that obj={} gets passed on to the context instance. But since I wrapped the click group in a class, the first spot is taken by the self reference, so the current context can't be passed to the function. Any ideas on how to work around that?
I tried changing def cli() in the following way:
@click.group(invoke_without_command=True)
def cli(self):
    ctx = click.get_current_context()
    ctx.obj = {}
    if ctx.invoked_subcommand is None:
        ctx.invoke(self.repl)
So I don't pass the context through the call, avoiding a conflict with self, but if I try to run this with self.cli() I get TypeError: cli() missing 1 required positional argument: 'self'.
Calling it with self.cli(self) instead runs into TypeError: 'CLI' object is not iterable.
I am afraid the click library is not designed to be used from inside a class. Click makes heavy use of decorators. Don't take decorators too lightly: a decorator literally takes your function as an argument and returns a different function.
For example:
@cli.command()
def foo(self):
is roughly equivalent to:
foo = cli.command()(foo)
So I am afraid that click has no support for decorating functions bound to classes; it can only decorate plain, unbound functions. Basically, the solution to your problem is: don't use a class.
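To see what the class body actually produces, here is a minimal sketch of my own (not from the question): the decorated cli becomes a click Group object stored as a plain class attribute, so self.cli(...) never binds self, and the context supplied by pass_context lands in the self slot instead of ctx.

import click

class CLI:
    @click.group(invoke_without_command=True)
    @click.pass_context
    def cli(self, ctx):
        # click only ever supplies the context; nothing fills in `self`,
        # so the context lands in `self` and `ctx` is reported as missing.
        pass

print(type(CLI.cli))  # <class 'click.core.Group'>, a Command object rather than a method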
You might be wondering how to organize your code now. Most languages present the class as the unit of organization.
Python, however, goes one step further and gives you modules as well. Basically, a file is a module, and everything you put into that file is automatically associated with it as a module.
So just name a file cli.py and create your attributes as global variables. This might give you other problems, since you cannot rebind global variables inside a function without a global statement, but you can use a class instance to contain your variables instead:
class Variables:
    pass

variables = Variables()
variables.something = "Something"

def f():
    variables.something = "Nothing"
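For concreteness, here is a rough sketch of what that module-level layout could look like for the original CLI, reusing the commands from the question (the click_repl call is copied as-is from there):

import os
import click
import click_repl
from prompt_toolkit.history import FileHistory

class Variables:
    pass

variables = Variables()  # module-level container instead of instance attributes

@click.group(invoke_without_command=True)
@click.pass_context
def cli(ctx):
    if ctx.invoked_subcommand is None:
        ctx.invoke(repl)

@cli.command()
def foo():
    variables.something = "Nothing"  # mutating an attribute needs no global statement
    print("foo")

@cli.command()
def repl():
    prompt_kwargs = {
        'history': FileHistory(os.path.expanduser('~/.repl_history'))
    }
    click_repl.repl(click.get_current_context(), prompt_kwargs)

if __name__ == "__main__":
    while True:
        try:
            cli(obj={})
        except SystemExit:
            pass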
Related
I am trying to understand the example in Python's documentation for replacing the use of try-finally and flag variables.
According to the documentation, instead of:
cleanup_needed = True
try:
    result = perform_operation()
    if result:
        cleanup_needed = False
finally:
    if cleanup_needed:
        cleanup_resources()
we could use a small ExitStack-based helper class Callback like this (I added the perform_operation and cleanup_resources functions):
from contextlib import ExitStack

class Callback(ExitStack):
    def __init__(self, callback, /, *args, **kwds):
        super(Callback, self).__init__()
        self.callback(callback, *args, **kwds)

    def cancel(self):
        self.pop_all()

def perform_operation():
    return False

def cleanup_resources():
    print("Cleaning up resources")

with Callback(cleanup_resources) as cb:
    result = perform_operation()
    if result:
        cb.cancel()
I think the code simulates the exceptional case, where perform_operation() did not run smoothly and a cleanup is needed (perform_operation() returned False). The Callback class magically takes care of running the cleanup_resources() function (I can't quite understand why, by the way).
Then I simulated the normal case, where everything runs smoothly and no cleanup is needed, by changing the code so that perform_operation() returns True instead. In this case, however, the cleanup_resources function also runs and the code errors out:
$ python minimal.py
Cleaning up resources
Traceback (most recent call last):
  File "minimal.py", line 26, in <module>
    cb.cancel()
  File "minimal.py", line 12, in cancel
    self.pop_all()
  File "C:\ProgramData\Anaconda3\envs\claw\lib\contextlib.py", line 390, in pop_all
    new_stack = type(self)()
TypeError: __init__() missing 1 required positional argument: 'callback'
Can you explain what exactly is going on here and how this whole ExitStack and callback stuff works?
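For what it's worth, the two ExitStack methods involved can be demonstrated in isolation (a minimal sketch of my own, separate from the Callback class above): callback() registers a function to run when the with block exits, and pop_all() transfers every registered callback to a new ExitStack, so the original stack exits without running them.

from contextlib import ExitStack

def cleanup():
    print("cleanup runs")

# Registered callbacks fire when the with block exits.
with ExitStack() as stack:
    stack.callback(cleanup)
# prints: cleanup runs

# pop_all() moves the callbacks onto a fresh ExitStack, so the original
# stack exits without calling them.
with ExitStack() as stack:
    stack.callback(cleanup)
    new_stack = stack.pop_all()
# nothing is printed here; new_stack now owns the cleanup callback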
In the code below, the eval call is successful:
from zeep import Client
from zeep import xsd
from zeep.plugins import HistoryPlugin

class TrainAPI:
    def __init__(self, LDB_TOKEN):
        if LDB_TOKEN == '':
            raise Exception("Please configure your OpenLDBWS token")

        WSDL = 'http://lite.realtime.nationalrail.co.uk/OpenLDBWS/wsdl.aspx?ver=2017-10-01'
        history = HistoryPlugin()
        self.client = Client(wsdl=WSDL, plugins=[history])
        header = xsd.Element(
            '{http://thalesgroup.com/RTTI/2013-11-28/Token/types}AccessToken',
            xsd.ComplexType([
                xsd.Element(
                    '{http://thalesgroup.com/RTTI/2013-11-28/Token/types}TokenValue',
                    xsd.String()),
            ])
        )
        self.header_value = header(TokenValue=LDB_TOKEN)
        self.token = LDB_TOKEN
        return

    def __getattr__(self, action):
        def method(*args, **kwargs):
            print(action, args, kwargs)
            print(self)
            return eval(f"self.client.service.{action}(*args,**kwargs, _soapheaders=[self.header_value])")
        return method
However, if the print(self) line is removed, then the following error is thrown:
File "C:/Users/-/Documents/-/main.py", line 32, in method
return eval(f"self.client.service.{action}(*args,**kwargs, _soapheaders=[self.header_value])")
File "<string>", line 1, in <module>
NameError: name 'self' is not defined
Sorry if I am missing something obvious, but my question is: why is self not defined unless I call something that refers to it (such as print(self)) beforehand within the method function?
It seems puzzling, since the traceback ultimately points at line 1 of <string>...
Edit: trying this also returns an error:
def __getattr__(self, action):
    def method(*args, **kwargs):
        print(action, args, kwargs)
        print(f"self.client.service.{action}(*args,**kwargs, _soapheaders=[self.header_value])")
        return eval(f"self.client.service.{action}(*args,**kwargs, _soapheaders=[self.header_value])")
    return method
Maybe I don't understand how scopes or format strings work.
The reason goes a bit deep into how Python works, and I can tell you if you really want, but to solve your actual problem the answer is to avoid eval whenever possible, as there is usually a better way. In this case:
method = getattr(self.client.service, action)
return method(*args, **kwargs, _soapheaders=[self.header_value])
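As for the underlying reason (my reading, not part of the answer above): CPython only captures self as a closure variable of method if the compiler sees self referenced somewhere in method's body, and eval() looks names up only in globals() and locals(), so without the print(self) line there is simply no self visible to eval. A small sketch of the same effect:

def outer():
    x = 42

    def without_reference():
        # x is never referenced in this body, so it is not captured as a
        # closure variable and does not show up in locals(); eval cannot see it.
        return eval("x")

    def with_reference():
        x  # a bare reference is enough to make x a closure variable
        return eval("x")

    print(with_reference())  # 42
    try:
        without_reference()
    except NameError as e:
        print(e)             # name 'x' is not defined

outer()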
I have this code that works fine:
import click

@click.command(context_settings=dict(help_option_names=['-h', '--help']))
@click.option('--team_name', required=True, help='Team name')
@click.option('--input_file', default='url_file.txt', help='Input file name for applications, URLs')
@click.option('--output_file', default='test_results_file.txt', help='Output file name to store test results')
def main(team_name, input_file, output_file):
    # function body

if __name__ == '__main__':
    main()  # how does this work?
As you can see, main is being called with no arguments even though it is supposed to receive three. How does this work?
As mentioned in the comments, this is taken care of by the decorators. The click.command decorator turns the function into an instance of click.Command.
Each of the option decorators builds an instance of click.Option and attaches it to the click.Command object for later use.
The click.Command object implements a __call__ method, which is invoked by your call to main():
def __call__(self, *args, **kwargs):
    """Alias for :meth:`main`."""
    return self.main(*args, **kwargs)
It is quite simple: it just invokes click.Command.main().
Near the top of click.Command.main() is:
if args is None:
    args = get_os_args()
else:
    args = list(args)
This code gets argv from the command line, or uses a provided list of args. Further code in this method, among other things, parses the command line into a context and eventually calls your main() with the values from the click.Option instances built earlier:
with self.make_context(prog_name, args, **extra) as ctx:
    rv = self.invoke(ctx)
This is the source of the mysterious 3 arguments.
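A quick way to see this in action (my own example, assuming the decorated main from the question): Command.main also accepts an explicit argument list, so you can bypass sys.argv entirely.

# Pass the argument list yourself instead of letting click read sys.argv.
main(['--team_name', 'TeamA', '--input_file', 'urls.txt'])

# Or, to keep click from calling sys.exit() afterwards:
main(['--team_name', 'TeamA'], standalone_mode=False)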
I am trying to wrap the pyspark Pipeline.__init__ constructor and monkey patch the newly wrapped constructor back in. However, I am running into an error that seems to have something to do with the way Pipeline.__init__ uses decorators.
Here is the code that actually does the monkey patch:
def monkeyPatchPipeline():
    oldInit = Pipeline.__init__

    def newInit(self, **keywordArgs):
        oldInit(self, stages=keywordArgs["stages"])

    Pipeline.__init__ = newInit
However, when I run a simple program:
import PythonSparkCombinatorLibrary
from pyspark.ml import Pipeline
from pyspark.ml.classification import LogisticRegression
from pyspark.ml.feature import HashingTF, Tokenizer
PythonSparkCombinatorLibrary.TransformWrapper.monkeyPatchPipeline()
tokenizer = Tokenizer(inputCol="text", outputCol="words")
hashingTF = HashingTF(inputCol=tokenizer.getOutputCol(),outputCol="features")
lr = LogisticRegression(maxIter=10, regParam=0.001)
pipeline = Pipeline(stages=[tokenizer, hashingTF, lr])
I get this error:
Traceback (most recent call last):
  File "C:\<my path>\PythonApplication1\main.py", line 26, in <module>
    pipeline = Pipeline(stages=[tokenizer, hashingTF, lr])
  File "C:\<my path>\PythonApplication1\PythonSparkCombinatorLibrary.py", line 36, in newInit
    oldInit(self, stages=keywordArgs["stages"])
  File "C:\<pyspark_path>\pyspark\__init__.py", line 98, in wrapper
    return func(*args, **kwargs)
  File "C:\<pyspark_path>\pyspark\ml\pipeline.py", line 63, in __init__
    kwargs = self.__init__._input_kwargs
AttributeError: 'function' object has no attribute '_input_kwargs'
Looking into the pyspark interface, I see that Pipeline.__init__ looks like this:
@keyword_only
def __init__(self, stages=None):
    """
    __init__(self, stages=None)
    """
    if stages is None:
        stages = []
    super(Pipeline, self).__init__()
    kwargs = self.__init__._input_kwargs
    self.setParams(**kwargs)
And noting the @keyword_only decorator, I inspected that code as well:
def keyword_only(func):
    """
    A decorator that forces keyword arguments in the wrapped method
    and saves actual input keyword arguments in `_input_kwargs`.
    """
    @wraps(func)
    def wrapper(*args, **kwargs):
        if len(args) > 1:
            raise TypeError("Method %s forces keyword arguments." % func.__name__)
        wrapper._input_kwargs = kwargs
        return func(*args, **kwargs)
    return wrapper
I'm totally confused both about how this code works in the first place and about why it seems to cause problems with my own wrapper. I see that wrapper adds an _input_kwargs field to itself, but how is Pipeline.__init__ able to read that field with self.__init__._input_kwargs? And why doesn't the same thing happen when I wrap Pipeline.__init__ again?
Decorators 101: a decorator is a higher-order function which takes a function as its first (and typically only) argument and returns a function. The @ annotation is just syntactic sugar for a simple function call, so the following
@decorator
def decorated(x):
    ...
can be rewritten for example as:
def decorated_(x):
    ...

decorated = decorator(decorated_)
So Pipeline.__init__ is actually a functools.wraps-decorated wrapper which captures the defined __init__ (the func argument of keyword_only) as part of its closure. When it is called, it stores the received kwargs as a function attribute of itself. Basically, what happens here can be simplified to:
>>> def f(**kwargs):
...     f._input_kwargs = kwargs  # f is in the current scope
...
>>> hasattr(f, "_input_kwargs")
False
>>> f(foo=1, bar="x")
>>> hasattr(f, "_input_kwargs")
True
When you further wrap (decorate) __init__, the outer function won't have _input_kwargs attached, hence the error. If you want to make it work, you have to apply the same process used by the original __init__ to your own version, for example with the same decorator:
@keyword_only
def newInit(self, **keywordArgs):
    oldInit(self, stages=keywordArgs["stages"])
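Applied to the monkey patch from the question, that would look roughly like this (a sketch; keyword_only can be imported from the pyspark package itself):

from pyspark import keyword_only
from pyspark.ml import Pipeline

def monkeyPatchPipeline():
    oldInit = Pipeline.__init__

    @keyword_only
    def newInit(self, **keywordArgs):
        oldInit(self, stages=keywordArgs["stages"])

    # newInit now goes through the same keyword_only machinery as the original __init__.
    Pipeline.__init__ = newInit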
but as I mentioned in the comments, you should rather consider subclassing.
I'm using ctypes to work with a library written in C. This C library allows me to register a callback function, which I'm implementing in Python.
Here is the callback function type, according to the ctypes API:
_command_callback = CFUNCTYPE(
    UNCHECKED(c_int),
    POINTER(vedis_context),
    c_int,
    POINTER(POINTER(vedis_value)))
Here is a decorator I've written to mark a function as a callback:
def wrap_callback(fn):
    return _command_callback(fn)
To use this, I am able to simply write:
@wrap_callback
def my_callback(*args):
    print args
    return 1  # Needed by C library to indicate OK response.

c_library_func.register_callback(my_callback)
I can now invoke my callback (my_callback) from C and this works perfectly well.
The problem I'm encountering is that there will be some boilerplate behavior I would like to perform as part of these callbacks (such as returning a success flag, etc). To minimize boilerplate, I tried to write a decorator:
def wrap_callback(fn):
    def inner(*args, **kwargs):
        return fn(*args, **kwargs)
    return _command_callback(inner)
Note that this is functionally equivalent to the previous example.
@wrap_callback
def my_callback(*args):
    print args
    return 1
When I attempt to invoke the callback using this approach, however, I receive the following exception, originating from _ctypes/callbacks.c:
Traceback (most recent call last):
  File "_ctypes/callbacks.c", line 314, in 'calling callback function'
  File "/home/charles/tmp/scrap/z1/src/vedis/vedis/core.py", line 28, in inner
    return fn(*args, **kwargs)
SystemError: Objects/cellobject.c:24: bad argument to internal function
I am not sure what is going on here that would cause the first example to work but the second example to fail. Can anyone shed some light on this? Bonus points if you can help me find a way to decorate these callbacks so I can reduce boilerplate code!
Thanks to eryksyn, I was able to fix this issue. The fix looks like:
def wrap_callback(fn):
    def inner(*args, **kwargs):
        return fn(*args, **kwargs)
    # Return both the ctypes callback object and the inner closure so the
    # caller can keep references to both alive.
    return _command_callback(inner), inner

def my_callback(*args):
    print args
    return 1

# Keep module-level references to the ctypes callback and the wrapped function.
ctypes_cb, my_callback = wrap_callback(my_callback)