Argparse subparsers and linking to classes - Python

We have a simple Python program to manage various types of in-house servers, using argparse:
manage_servers.py <operation> <type_of_server>
Operations are things like check, build, deploy, configure, verify etc.
Types of server are just the different kinds of in-house servers we use.
We have a generic server class, then specific types that inherit from that:
class Server:
    def configure_logging(self, logging_file):
        ...
    def check(self):
        ...
    def deploy(self):
        ...
    def configure(self):
        ...
    def __init__(self, hostname):
        self.hostname = hostname
        self.logger = self.configure_logging(LOG_FILENAME)
class SpamServer(Server):
    def check(self):
        ...

class HamServer(Server):
    def deploy(self):
        ...
My question is how to link that all up to argparse?
Originally, I was using argparse subparsers for the operations (check, build, deploy) and another argument for the type.
subparsers = parser.add_subparsers(help='The operation that you want to run on the server.')
parser_check = subparsers.add_parser('check', help='Check that the server has been setup correctly.')
parser_build = subparsers.add_parser('build', help='Download and build a copy of the execution stack.')
parser_build.add_argument('-r', '--revision', help='SVN revision to build from.')
...
parser.add_argument('type_of_server', action='store', choices=types_of_servers,
                    help='The type of server you wish to create.')
Normally, you'd link each subparser to a function and then pass in the type_of_server as an argument. However, that's slightly backwards here because of the classes: I need to create an instance of the appropriate Server class first, then call the operation method on that instance.
Any ideas of how I could achieve the above? Perhaps a different design pattern for Servers? Or a way to still use argparse as is?
Cheers,
Victor

Just use the parser.add_subparsers(dest=...) argument together with a mapping of type_of_server to classes:
subparsers = parser.add_subparsers(dest='operation', help='The operation that you want to run on the server.')
...
server_types = dict(spam=SpamServer, ham=HamServer)
args = parser.parse_args()
server = server_types[args.type_of_server]()
getattr(server, args.operation)(args)
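Tying it together, here is a minimal runnable sketch of that answer. The 'localhost' hostname, the print statements and the method bodies are placeholders of my own, and the type_of_server positional is added before the subparsers so the command line reads manage_servers.py <type_of_server> <operation> (a main-parser positional placed after the subcommand would never be reached, since the subparser consumes all remaining arguments):

import argparse

class Server:
    def __init__(self, hostname):
        self.hostname = hostname
    def check(self, args):
        print('checking', self.hostname)
    def build(self, args):
        print('building', self.hostname, 'from revision', args.revision)

class SpamServer(Server):
    pass

class HamServer(Server):
    pass

server_types = {'spam': SpamServer, 'ham': HamServer}

parser = argparse.ArgumentParser()
# The type comes first on the command line; the subcommand follows it.
parser.add_argument('type_of_server', choices=sorted(server_types),
                    help='The type of server you wish to create.')
subparsers = parser.add_subparsers(dest='operation',
                                   help='The operation to run on the server.')
subparsers.add_parser('check', help='Check that the server is set up correctly.')
parser_build = subparsers.add_parser('build', help='Download and build the stack.')
parser_build.add_argument('-r', '--revision', help='SVN revision to build from.')

args = parser.parse_args()
server = server_types[args.type_of_server]('localhost')  # hostname invented here
getattr(server, args.operation)(args)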

Related

Need Design Pattern Suggestion

I need some help to beautify this code :)
The method defineAction calls a class based on the args. Is there some way to generalize this piece of code, given that the classes are all similar?
Thanks in advance
Main Class
def defineAction(args):
    if args.classabc is not None:
        for host in config.getList('ABC', 'hosts'):
            class_abc = ClassABC(config.getConfigs('ABC', host), args.version[0], user, password)
            class_abc.action(args.classabc)
    if args.classxyz is not None:
        for host in config.getList('XYZ', 'hosts'):
            class_xyz = ClassXYZ(config.getConfigs('XYZ', host), args.version[0], user, password)
            class_xyz.action(args.classxyz)
    # ...

def main():
    parser = argparse.ArgumentParser()
    parser.add_argument('--classabc', choices=['cmd'])
    parser.add_argument('--classxyz', choices=['cmd'])
    # ...
    args = parser.parse_args()
    defineAction(args)
SubClasses
class ClassABC:
    def __init__(self, configs, version, user, password):
        self.hostConfigs = configs['host']
        self.version = version
        self.host_username = user
        self.host_password = password

    def a_method(self):
        ...  # This method is the same in all subclasses

    def b_method(self):
        ...  # This method is different in each subclass

    def action(self, action):
        self.a_method()
        self.b_method()
        if action == 'cmd':
            self.execute_cmd()
Config FILE
[ABC]
hosts=abc_host1
var_abc=value1
[XYZ]
hosts=xyz_host1,xyz_host2
var_xyz=value2
I'm working on the assumption that the switches are mutually exclusive (in which case you really want to use a mutually exclusive argument group).
You want the argparse action to set the class. If your command-line switch doesn't need to take any arguments, I'd use action="store_const" here:
parser.add_argument(
    '--classabc', dest="class_", const=ClassABC,
    action="store_const")
parser.add_argument(
    '--classxyz', dest="class_", const=ClassXYZ,
    action="store_const")
On parsing, the above actions set args.class_ to ClassABC or ClassXYZ when one or the other switch is used. Give the classes a class method or an attribute to determine which configuration section to look in; do not hardcode those names anywhere else.
For instance, if both classes have a config_section attribute (set to 'ABC' for ClassABC and 'XYZ' for ClassXYZ), then you can use that attribute in the loop that creates instances:
if args.class_:
    for host in config.getList(args.class_.config_section, 'hosts'):
        instance = args.class_(config.getConfig(args.class_.config_section, host), ...)
The idea is not to switch on args attributes yourself; you can leave that to argparse, as it is already distinguishing the options for you.
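As a quick runnable illustration of the store_const approach (the two stub classes and the parse_args input are invented for the demo):

import argparse

class ClassABC:
    config_section = 'ABC'

class ClassXYZ:
    config_section = 'XYZ'

parser = argparse.ArgumentParser()
group = parser.add_mutually_exclusive_group(required=True)
group.add_argument('--classabc', dest='class_', const=ClassABC, action='store_const')
group.add_argument('--classxyz', dest='class_', const=ClassXYZ, action='store_const')

args = parser.parse_args(['--classabc'])
print(args.class_.config_section)  # prints: ABC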
If both command-line switches require an additional argument, then create a custom Action subclass:
class StoreClassAction(argparse.Action):
    def __call__(self, parser, namespace, values, option_string=None):
        setattr(namespace, self.dest, (self.const, values))
then use this as:
parser.add_argument(
    '--classabc', dest="class_", choices=['cmd'], const=ClassABC,
    action=StoreClassAction)
parser.add_argument(
    '--classxyz', dest="class_", choices=['cmd'], const=ClassXYZ,
    action=StoreClassAction)
Now the args.class_ attribute is set to a (classobject, argumentvalue) tuple, so you can use:
if args.class_:
    cls, action = args.class_
    for host in config.getList(cls.config_section, 'hosts'):
        instance = cls(config.getConfig(cls.config_section, host), ...)
        instance.action(action)
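Again as a self-contained sanity check (the stub classes and the input are invented for the demo):

import argparse

class ClassABC:
    config_section = 'ABC'

class ClassXYZ:
    config_section = 'XYZ'

class StoreClassAction(argparse.Action):
    def __call__(self, parser, namespace, values, option_string=None):
        # Store both the class (passed via const=) and the chosen value.
        setattr(namespace, self.dest, (self.const, values))

parser = argparse.ArgumentParser()
group = parser.add_mutually_exclusive_group()
group.add_argument('--classabc', dest='class_', choices=['cmd'],
                   const=ClassABC, action=StoreClassAction)
group.add_argument('--classxyz', dest='class_', choices=['cmd'],
                   const=ClassXYZ, action=StoreClassAction)

args = parser.parse_args(['--classxyz', 'cmd'])
cls, action = args.class_
print(cls.config_section, action)  # prints: XYZ cmd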

Hook on arguments in argparse python

I am interested in hooking extra arguments parsed with argparse in one class into a method of another class that already parses a few arguments with the argparse module.
Project 1
def x():
    parser = argparse.ArgumentParser()
    parser.add_argument('--abc')
Project 2
def y():
    parser = argparse.ArgumentParser()
    parser.add_argument('--temp1')
    parser.add_argument('--temp2')
When I run x(), I want to add the --abc argument to the list of arguments y() has ("temp1", "temp2") at runtime. Is inheritance the best way to go, defining the constructors accordingly? Could someone provide a sample code snippet?
Thanks !
argparse implements a parents feature that lets you add the arguments of one parser to another. Check the documentation. Or to adapt your case:
parser_x = argparse.ArgumentParser(add_help=False)
parser_x.add_argument('--abc')
parser_y = argparse.ArgumentParser(parents=[parser_x])
parser_y.add_argument('--temp1')
parser_y.add_argument('--temp2')
parser_y.print_help()
prints:
usage: ipython [-h] [--abc ABC] [--temp1 TEMP1] [--temp2 TEMP2]

optional arguments:
  -h, --help     show this help message and exit
  --abc ABC
  --temp1 TEMP1
  --temp2 TEMP2
The add_help=False is needed to avoid a conflict between the -h that parser_x would normally add and the one that parser_y gets.
Another way is to let x add its argument to a predefined parser:
def x(parser=None):
    if parser is None:
        parser = argparse.ArgumentParser()
    parser.add_argument('--abc')
    return parser

def y():
    ...
    return parser

parsery = y()
parserx = x(parsery)
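A quick check of that second approach, assuming y() builds its parser with --temp1 and --temp2 as before (the argument values here are invented for the demonstration); note that parserx and parsery end up being the very same parser object:

args = parserx.parse_args(['--abc', '1', '--temp1', '2'])
print(args.abc, args.temp1, args.temp2)  # prints: 1 2 None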
It might also be useful to know that add_argument returns a reference to the argument (Action object) that it created.
parser = argparse.ArgumentParser()
arg1 = parser.add_argument('--abc')
Do this in a shell and you'll see that arg1 displays as:
_StoreAction(option_strings=['--abc'], dest='abc', nargs=None,
             const=None, default=None, type=None, choices=None,
             help=None, metavar=None)
arg1 is an object that you can place in lists, dictionaries. You could even, in theory, add it to another parser. That's in effect what the parents mechanism does (i.e. copy action references from the parent to the child).
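For example, you can keep those Action objects around and tweak them later; this particular default-changing trick is just an illustration of holding a live reference:

import argparse

parser = argparse.ArgumentParser()
actions = {'abc': parser.add_argument('--abc')}

# The stored reference is live: changing it affects later parses.
actions['abc'].default = 'fallback'
args = parser.parse_args([])
print(args.abc)  # prints: fallback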
You can take inspiration from Django's management commands. They are basically set up as follows:
The entry point is run_from_argv, which calls create_parser, parses the command line, extracts the parsed arguments and provides them to execute;
The create_parser method creates an argparse parser and uses add_argument to prepopulate the default options available to all commands. This method then calls the add_arguments method of the class, which is meant to be overloaded by subclasses;
The execute method is responsible for handling the various behaviours associated with the default options. It then calls handle, which is meant to be overloaded by subclasses to handle the specific options introduced by add_arguments.
Your requirements are not completely clear, but I think that in your case you don't need to bother with an execute method. I'd go with:
import argparse
import sys

class BaseParser:
    def create_parser(self, progname):
        parser = argparse.ArgumentParser(prog=progname)
        parser.add_argument('--temp1')
        parser.add_argument('--temp2')
        self.add_arguments(parser)
        return parser

    def add_arguments(self, parser):
        pass  # to be optionally defined in subclasses

    def parse_command_line(self, argv):
        parser = self.create_parser(argv[0])
        options = parser.parse_args(argv[1:])
        parsed_options = vars(options)
        self.handle(**parsed_options)  # HAS TO be defined in subclasses

class X(BaseParser):
    def add_arguments(self, parser):
        parser.add_argument('--abc')

    def handle(self, **options):
        abc = options['abc']
        temp1 = options['temp1']
        temp2 = options['temp2']
        # do stuff with those variables

class Y(BaseParser):
    def handle(self, **options):
        temp1 = options['temp1']
        temp2 = options['temp2']
        # do stuff

x = X()
y = Y()
args = sys.argv
x.parse_command_line(args)
y.parse_command_line(args)
You could simplify the code further if X is a subclass of Y.
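One way that simplification could look, sketched under the assumption that X needs everything Y handles plus --abc:

class Y(BaseParser):
    def handle(self, **options):
        temp1 = options['temp1']
        temp2 = options['temp2']
        # do stuff

class X(Y):
    def add_arguments(self, parser):
        parser.add_argument('--abc')

    def handle(self, **options):
        abc = options.pop('abc')
        # do stuff with abc, then let Y handle the rest
        super().handle(**options)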

Cloud Endpoints with Multiple Services Classes

I am starting to use Google Cloud Endpoints and I am running into a problem when specifying multiple service classes. Any idea how to get this working?
ApiConfigurationError: Attempting to implement service myservice, version v1, with multiple classes that aren't compatible. See docstring for api() for examples how to implement a multi-class API.
This is how I am creating my endpoint server.
AVAILABLE_SERVICES = [
    FirstService,
    SecondService,
]
app = endpoints.api_server(AVAILABLE_SERVICES)
and for every service class I am doing this:
@endpoints.api(name='myservice', version='v1', description='MyService API')
class FirstService(remote.Service):
    ...

@endpoints.api(name='myservice', version='v1', description='MyService API')
class SecondService(remote.Service):
    ...
Each one of these works perfectly separately, but I am not sure how to get them working when combining them.
Thanks a lot.
The correct way is to create an api object and use the collection:
api_root = endpoints.api(name='myservice', version='v1', description='MyService API')

@api_root.collection(resource_name='first')
class FirstService(remote.Service):
    ...

@api_root.collection(resource_name='second')
class SecondService(remote.Service):
    ...
where the resource name is inserted in front of the method names, so that you can use
@endpoints.method(name='method', ...)
def MyMethod(self, request):
    ...
instead of
@endpoints.method(name='first.method', ...)
def MyMethod(self, request):
    ...
Putting this in the API server:
The api_root object is equivalent to a remote.Service class decorated with endpoints.api, so you can simply include it in the endpoints.api_server list. For example:
application = endpoints.api_server([api_root, ...])
If I'm not mistaken, you should give a different name to each service; then you'll be able to access both, each one at its own specific "address".
@endpoints.api(name='myservice_one', version='v1', description='MyService One API')
class FirstService(remote.Service):
    ...

@endpoints.api(name='myservice_two', version='v1', description='MyService Two API')
class SecondService(remote.Service):
    ...
I've managed to successfully deploy a single API implemented in two classes. You can try the following snippet (almost directly from the Google documentation):
an_api = endpoints.api(name='library', version='v1.0')

@an_api.api_class(resource_name='shelves')
class Shelves(remote.Service):
    ...

@an_api.api_class(resource_name='books', path='books')
class Books(remote.Service):
    ...

APPLICATION = endpoints.api_server([an_api], restricted=False)
For local development I'm using a temporary workaround, which is to disable the exception (I know I know...)
In my SDK, in google_appengine/google/appengine/ext/endpoints/api_backend_service.py around line 97:

elif service_class != method_class:
    pass
    # raise api_config.ApiConfigurationError(
    #     'SPI registered with multiple classes within one '
    #     'configuration (%s and %s). Each call to register_spi should '
    #     'only contain the methods from a single class. Call '
    #     'repeatedly for multiple classes.' % (service_class,
    #                                           method_class))
if service_class is not None:
In combination with that I'm using the construct:
application = endpoints.api_server([FirstService, SecondService, ...])
Again, this won't work in production, you'll get the same exception there. Hopefully this answer will be obsoleted by a future fix.
Confirmed it's now obsolete (tested against 1.8.2).
If it were Java ...
https://developers.google.com/appengine/docs/java/endpoints/multiclass
... it couldn't be easier.

Python use a virtual class to apply a generic "pipe" pattern

I am trying to find out whether it would be possible to take the following code and use the magic of Python to simplify it.
Right now I have a command interface that sits on top of a bunch of Python subprocesses. When I need to communicate with the subprocesses, I pipe commands to them. Basically it comes down to a string command and a dictionary of arguments.
Here is the pattern that gets repeated (I show one for simplicity's sake, but in reality this is repeated 7 times for different processes).
Create the processes:
import multiprocessing
from multiprocessing import Process

class MasterProcess(object):
    def __init__(self):
        self.stop = multiprocessing.Event()
        (self.event_generator_remote,
         self.event_generator_in) = multiprocessing.Pipe(duplex=True)
        # The child process gets one end of the pipe; the parent keeps
        # event_generator_remote for sending commands.
        self.event_generator = Process(
            target=self.create_event_generator,
            kwargs={'in': self.event_generator_in})
        self.event_generator.start()

    def create_event_generator(self, **kwargs):
        eg = EventGenerator()  # defined elsewhere
        in_pipe = kwargs['in']
        while not self.stop.is_set():
            self.stop.wait(1)
            if in_pipe.poll():
                msg = in_pipe.recv()
                cmd = msg[0]
                args = msg[1]
                if cmd == 'create_something':
                    in_pipe.send(eg.create(args))
                else:
                    raise NotImplementedError(cmd)
And then the command interface just pumps commands to the process:
mp = MasterProcess()
pipe = mp.event_generator_remote

# >> cmd: create_something args
# I process the above and then do something like the below
cmd = "create_something"
args = {
    # bah
}
pipe.send([cmd, args])
attempt = 0
while not pipe.poll():
    time.sleep(1)
    attempt += 1
    if attempt > 20:
        return None
return pipe.recv()
What I want to move to is more of a remote facade type of deal, where the client just calls a method like it normally would, and I translate that call into the above.
For example the new command would look like:
mp = MasterProcess()
mp_proxy = MasterProcessProxy(mp.event_generator_remote)
mp_proxy.create_something(args)
So my virtual class would be MasterProcessProxy; there are really no methods behind the scenes. Can I somehow take the method name and the provided args and pipe them to the process?
Does that make sense? Would it be possible to do the same on the other side? Just assume whatever comes down the pipe will be in the form cmd, args where cmd is a local method, and do a self.<cmd>(<args>)-style dispatch?
As I am typing this up I understand it is probably confusing, so please let me know what needs clarification.
Thanks.
You can use __getattr__ to create proxy methods for your stub class:
class MasterProcessProxy(object):
    def __init__(self, pipe):
        self.pipe = pipe

    # This is called when an attribute is requested on the object.
    def __getattr__(self, name):
        # Create a dynamic function that sends a command through the pipe.
        # Keyword arguments are sent as the command arguments.
        def proxy(**kwargs):
            self.pipe.send([name, kwargs])
        return proxy
Now you can use it as you wanted:
mp = MasterProcess()
mp_proxy = MasterProcessProxy(mp.event_generator_remote)
mp_proxy.create_something(spam="eggs", bacon="baked beans")
# Will call pipe.send(["create_something", {"spam": "eggs", "bacon": "baked beans"}])
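On the receiving side you can dispatch the same way, replacing the if cmd == ... chain with a getattr lookup. A minimal sketch under the [cmd, kwargs] message format above (the Worker name and handle_one method are my own invention):

class Worker:
    def __init__(self, pipe, target):
        self.pipe = pipe
        self.target = target  # e.g. an EventGenerator instance

    def handle_one(self):
        # Expects [cmd, kwargs] messages, as sent by MasterProcessProxy.
        cmd, kwargs = self.pipe.recv()
        handler = getattr(self.target, cmd, None)  # look the method up by name
        if handler is None:
            raise NotImplementedError(cmd)
        self.pipe.send(handler(**kwargs))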
You might want to check out the Twisted framework. It won't beat figuring out how to do it yourself, but it will make writing this style of application a lot easier.
