Ansible: Access host/group vars from within custom module - python

Is there a way to access host/group vars from within a custom-written module? I would like to avoid passing all required vars as module parameters.
My module is written in Python and I use the boilerplate. I checked pretty much all available vars but they are not stored anywhere:
from pprint import pprint

def main():
    pprint(dir())
    pprint(globals())
    pprint(locals())
    for name in vars().keys():
        print(name)
Now my only hope is they are somehow accessible through the undocumented module utils.
I guess it is not possible, since the module runs on the target machine and probably the facts/host/group vars are not transferred along with the module...
Edit: Found the module utils now and it doesn't look promising.

Is there a way how one can access host/group vars from within a custom
written module?
Not built-in.
You will have to pass them yourself one way or the other:
Module args (see the sketch after this list).
Serialize to the local file system (with pickle, yaml.dump(), json, ...) and send the file over.
Any other innovative idea you can come up with.
Unfortunately you can't just send over whole host/group var files as-is, because you would have to implement Ansible's variable scope/precedence resolution algorithm, which is undefined (it's not the Zen philosophy of ansible to define such petty things :P ).
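For example, the module-args option just means letting Ansible resolve the variables on the controller and handing the values to the module in the task. A minimal sketch; the task name, the target_dir argument and the some_host_var variable are placeholders:

# A task would pass the already-resolved value in, e.g.
#   - my_custom_module:
#       target_dir: "{{ some_host_var }}"
# and the module simply declares it as a parameter:
from ansible.module_utils.basic import AnsibleModule

def main():
    module = AnsibleModule(argument_spec=dict(
        target_dir=dict(type='str', required=True),
    ))
    # The value of the host/group var arrives here, already resolved by Ansible
    module.exit_json(changed=False, target_dir=module.params['target_dir'])

if __name__ == '__main__':
    main()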
--edit--
I see they have some precedence defined now.
Ansible does apply variable precedence, and you might have a use for
it. Here is the order of precedence from least to greatest (the last
listed variables override all other variables):
command line values (for example, -u my_user, these are not variables)
role defaults (defined in role/defaults/main.yml) [1]
inventory file or script group vars [2]
inventory group_vars/all [3]
playbook group_vars/all [3]
inventory group_vars/* [3]
playbook group_vars/* [3]
inventory file or script host vars [2]
inventory host_vars/* [3]
playbook host_vars/* [3]
host facts / cached set_facts [4]
play vars
play vars_prompt
play vars_files
role vars (defined in role/vars/main.yml)
block vars (only for tasks in block)
task vars (only for the task)
include_vars
set_facts / registered vars
role (and include_role) params
include params
extra vars (for example, -e "user=my_user")(always win precedence)
In general, Ansible gives precedence to variables that were defined
more recently, more actively, and with more explicit scope. Variables
in the defaults folder inside a role are easily overridden. Anything
in the vars directory of the role overrides previous versions of that
variable in the namespace. Host and/or inventory variables override
role defaults, but explicit includes such as the vars directory or an
include_vars task override inventory variables.
Ansible merges different variables set in inventory so that more
specific settings override more generic settings. For example,
ansible_ssh_user specified as a group_var is overridden by
ansible_user specified as a host_var. For details about the precedence
of variables set in inventory, see How variables are merged.
Footnotes
[1] Tasks in each role see their own role’s defaults. Tasks defined outside of a role see the last role’s defaults.
[2] Variables defined in inventory file or provided by dynamic inventory.
[3] Includes vars added by ‘vars plugins’ as well as host_vars and group_vars which are added by the default vars plugin shipped with Ansible.
[4] When created with set_facts’s cacheable option, variables have the high precedence in the play, but are the same as a host facts precedence when they come from the cache.

As per your suggestion in your answer here, I did manage to read host_vars and local play vars through a custom Action Plugin.
I'm posting this answer for completeness sake and to give an explicit example of how one might go about this method, although you gave this idea originally :)
Note - this example is incomplete in terms of a fully functioning plugin. It just shows how to access variables.
from ansible.template import is_template
from ansible.plugins.action import ActionBase

class ActionModule(ActionBase):

    def run(self, tmp=None, task_vars=None):
        # some boilerplate ...

        # init
        result = super(ActionModule, self).run(tmp, task_vars)

        # more boilerplate ...

        # check the arguments passed to the task; if missing, returns None
        self._task.args.get('<TASK ARGUMENT NAME>', None)

        # or check if the play has vars defined
        task_vars['vars']['<ARGUMENT NAME>']

        # or check if the host vars have something defined
        task_vars['hostvars']['<HOST NAME FROM HOSTVARS>']['<ARGUMENT NAME>']

        # again boilerplate...

        # build arguments to pass to the module
        some_module_args = dict(
            arg1=arg1,
            arg2=arg2
        )

        # call the module with the above arguments...
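Continuing inside run(), the collected arguments are typically forwarded to the real module. A minimal, hedged sketch; the module name my_module is a placeholder:

        # execute the target module on the host with the resolved arguments
        result.update(self._execute_module(
            module_name='my_module',
            module_args=some_module_args,
            task_vars=task_vars,
            tmp=tmp))
        return result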
In case you have your playbook variables defined with Jinja2 templates, you can resolve these templates in the plugin as follows:
from ansible.template import is_template

# check if the variable is a template through 'is_template'
if is_template(var, self._templar.environment):
    # access the internal `_templar` object to resolve the template
    resolved_arg = self._templar.template(var)
Some words of caution:
If you have a variable defined in your playbook as follows
# things ...
#
vars:
  - pkcs12_path: '{{ pkcs12_full_path }}'
  - pkcs12_pass: '{{ pkcs12_password }}'
The variable pkcs12_path must not have the same name as the host_vars variable it references.
For instance, if you had pkcs12_path: '{{ pkcs12_path }}', then resolving the template with the above code would cause a recursive exception... This might be obvious to some, but for me it was surprising that the host_vars variable and the playbook variable must not share the same name.
You can also access variables through task_vars['<ARG_NAME>'], but I'm not sure where it's reading this from. Also it's less explicit than taking variables from task_vars['vars']['<ARG_NAME>'] or from the hostvars.
PS - at the time of writing this, the example follows the basic structure of what Ansible considers an Action Plugin. In the future, the run method might change its signature...

I think you pretty much hit the nail on the head with your thinking here:
I guess it is not possible, since the module runs on the target machine and probably the facts/host/group vars are not transferred along with the module...
However, having said that, if you really have a need for this then there might be a slightly messy way of doing it. As of Ansible 1.8 you can set up fact caching, which uses redis to cache facts between runs of plays. Since redis is pretty easy to use and has clients for most popular programming languages, you could have your module query the redis server for any facts you need. It's not exactly the cleanest way to do it, but it just might work.
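A rough sketch of that idea, assuming the redis fact cache is reachable from the target machine and the default fact_caching_prefix of ansible_facts (the exact key layout may differ between Ansible versions):

import json
import redis  # the redis client library must be available on the target host

def get_cached_facts(hostname, prefix='ansible_facts'):
    # The redis fact cache stores one JSON document per host under prefix + hostname
    conn = redis.StrictRedis(host='localhost', port=6379, db=0)
    raw = conn.get(prefix + hostname)
    return json.loads(raw) if raw else {}

# e.g. inside your module:
# facts = get_cached_facts('web01.example.com')
# some_var = facts.get('ansible_default_ipv4', {}).get('address')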

Related

Preventing use of global variables in Flask Blueprints to improve unit tests

Problem
I'm seeing examples that use global variables in Blueprints to store objects used by the Blueprint. However, this causes problems in unit-testing in case multiple instances of an application are created.
To illustrate, below is an example where parsed files are cached, based on a setting during blueprint initialization (Note: I know there are many issues with the below code, it's just an example):
from pathlib import Path
import json
from flask import Blueprint, jsonify
from whatevermodule import JSONFileCache

bp = Blueprint('cached_file_loader', __name__, url_prefix='/files')
json_cache = JSONFileCache()

@bp.record
def init(setup_state):
    if not setup_state.options['use_caching']:
        json_cache.disable()

@bp.route('/<path:path>')
def view_json(path):
    localpath = Path('/tmp', path.lstrip('/'))
    data = json_cache.get(localpath)
    if data is None:
        with open(localpath, 'r') as jsonfile:
            data = json.load(jsonfile)
        json_cache.set(localpath, data)
    return jsonify(data)
The problem is, in this example a global variable 'json_cache' is used. This means if I create multiple instances of an app with this blueprint, using different caching settings, these will interfere, causing tests to fail.
What I've tried/considered
Keep using global variables, and just use one application per test file. This is actually what I'm using currently. However,
since pytest collects all tests in one process, I have to do find ./tests -iname "test_*.py" -print0 | xargs -0 -n1 pytest
to make sure everything runs.
Storing the objects (in this case the json_cache) on the Blueprint instance. This is however not possible, since the 'bp' variable is also a global variable, that will be reused...
Inside the 'init' function, storing the objects inside setup_state.app.config. This should
work, since this variable is initialized per application. The
docs for configuration handling actually
say:
That way you can create multiple instances of your application with different configurations
attached which makes unit testing a lot easier.
Although option (3) should work, it looks very 'hacky' to me to store functional objects inside a dictionary marked as 'config', where I would only expect configuration variables.
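For what it's worth, a minimal sketch of option (3), reusing the names from the example above (JSON_CACHE is an arbitrary config key chosen for illustration):

from flask import current_app

@bp.record
def init(setup_state):
    # One cache per application instance, so apps created with different
    # settings (e.g. in tests) do not share state
    cache = JSONFileCache()
    if not setup_state.options['use_caching']:
        cache.disable()
    setup_state.app.config['JSON_CACHE'] = cache

@bp.route('/<path:path>')
def view_json(path):
    json_cache = current_app.config['JSON_CACHE']
    ...  # same caching logic as above, using the per-app cache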
Question
Is there a better way, that looks less 'hacky' than option (3), to store objects used inside blueprints at the app level during initialization?

Is it bad practice to modify attributes of one module from another module?

I want to define a bunch of config variables that can be imported in all the modules in my project. The values of those variables will be constant during runtime but are not known before runtime; they depend on the input. Usually I'd define a dict in my top module which would be passed to all functions and classes from other modules; however, I was thinking it may be cleaner to simply create a blank config.py module which would be dynamically filled with config variables by the top module:
# top.py
import config
config.x = x
# config.py
x = None
# other.py
import config
print(config.x)
I like this approach because I don't have to save the parameters as attributes of classes in my other modules; which makes sense to me because parameters do not describe classes themselves.
This works but is it considered bad practice?
The question as such may be disputed, but I would generally say yes, it's "bad practice", because the scope and impact of a change get really blurred. Note that the use case you're describing is not really about sharing configuration, but about different parts of the program (functions, objects, modules) exchanging data, and as such it's a bit of a variation on a (meta) global variable.
Reading common configuration values could be fine, but changing them along the way... you may lose track of what happened where, and also in which order, as modules get imported / values get modified. For instance, assume the config.py above and two modules, m1.py:
import config
print(config.x)
config.x=1
and m2.py:
import config
print(config.x)
config.x=2
and a main.py that just does:
import m1
import m2
import config
print(config.x)
or:
import m2
import m1
import config
print(config.x)
The state in which you find config in each module, and really in any other (incl. main.py here), depends on the order in which imports have occurred and on who assigned what value when. Even for a program entirely under your control, this may get confusing (and become a source of mistakes) rather quickly.
For runtime data and passing information between objects and modules (and your example really is that, not configuration that is predefined and shared between modules), I would suggest you look into describing the information in a custom state (config) object and passing it around through an appropriate interface. But really, just a function / method argument may be all that is needed. The exact form depends on what exactly you're trying to achieve and what your overall design is.
In your example, other.py behaves differently when called or imported before top.py which may still seem obvious and manageable in a minimal example, but really is not a very sound design. Anyone reading the code (incl. future you) should be able to follow its logic and this IMO breaks its flow.
The most trivial (and procedural) example of what you've described (and which I hopefully now have a better grasp of) would be other.py recreating your current behavior:
def do_stuff(value):
    print(value)  # We did something useful here

if __name__ == "__main__":
    do_stuff(None)  # Could also use config with defaults
And your top.py, presumably the entry point that orchestrates importing and execution, doing:
import other
x = get_the_value()
other.do_stuff(x)
You can of course introduce an interface to configure do_stuff, perhaps a dict or even a custom class, with a default implementation in config.py:
class Params:
    def __init__(self, x=None):
        self.x = x
and your other.py:
import config

def do_stuff(params=config.Params()):
    print(params.x)  # We did something useful here
And in your top.py you can use:
params = config.Params(get_the_value())
other.do_stuff(params)
But you could also have any use case specific source of value(s):
class TopParams:
    def __init__(self, url):
        self.x = get_value_from_url(url)

params = TopParams("https://example.com/value-source")
other.do_stuff(params)
x could even be a property which you retrieve every time you access it... or lazily when needed and then cached... Again, it really then is a matter of what you need to do.
"Is it bad practice to modify attributes of one module from another module?"
Yes, it is considered bad practice - a violation of the Law of Demeter, which in effect means "talk to friends, not to strangers".
Objects should expose behaviour and functions, but should HIDE the data.
DataStructures should EXPOSE data, but should not have any (exposed) methods. The Law of Demeter does not apply to such DataStructures. OOP purists might cover such DataStructures with setters and getters, but that really adds no value in Python.
There is a lot of literature about that, for example: https://en.wikipedia.org/wiki/Law_of_Demeter
And of course, a must-read: "Clean Code" by Robert C. Martin (Uncle Bob); check it out on YouTube as well.
For procedural programming it is perfectly normal to keep data in a DataStructure which does not have any (exposed) methods.
The procedures in the program work with that data. Consider using the attrs module, see https://www.attrs.org/en/stable/, for easy creation of such classes.
My preferred method for keeping config is (here without using attrs):
# conf_xy.py
"""
config is code - so why use damned parsers, text files, xml, yaml, toml and all that,
if you can just use testable code as config that can deliver the correct types, etc.,
as well as hinting in your favorite IDE?
Here, for demonstration, without using the attrs package - usually I use attrs (read the docs)
"""

class ConfXY(object):
    def __init__(self) -> None:
        self.x: int = 1
        self.z: float = get_z_from_input()
        ...

conf_xy = ConfXY()

# other.py
from conf_xy import conf_xy
...
y = conf_xy.x * 2
...
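Since the answer recommends attrs, a rough sketch of the same config class written with attrs might look like this (get_z_from_input is the same placeholder as above):

import attr

@attr.s
class ConfXY(object):
    x = attr.ib(default=1)
    z = attr.ib(factory=get_z_from_input)  # called once when the instance is created

conf_xy = ConfXY()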

Robot Framework location and name of keyword

I want to create a python library with a 0 argument function that my custom Robot Framework keywords can call. It needs to know the absolute path of the file where the keyword is defined, and the name of the keyword. I know how to do something similar with test cases using the robot.libraries.BuiltIn library and the ${SUITE_SOURCE} and ${TEST NAME} variables, but I can't find anything similar for custom keywords. I don't care how complicated the answer is, maybe I have to dig into the guts of Robot Framework's internal classes and access that data somehow. Is there any way to do this?
Thanks to janne I was able to find the solution.
from robot.running.context import EXECUTION_CONTEXTS

def resource_locator():
    name = EXECUTION_CONTEXTS.current.keywords[-1].name
    libname = EXECUTION_CONTEXTS.current.get_handler(name).libname
    resources = EXECUTION_CONTEXTS.current.namespace._kw_store.resources
    path = ""
    for key in resources._keys:
        if resources[key].name == libname:
            path = key
            break
    return {'name': name, 'path': path}
EXECUTION_CONTEXTS.current.keywords is the stack of keywords called, with the earliest first and the most recent last, so EXECUTION_CONTEXTS.current.keywords[-1] gets the last keyword, which is the keyword that called this function.
EXECUTION_CONTEXTS.current.get_handler(name).libname gets the name of the library in which the keyword name is defined. In the case of user defined keywords, it is the filename (not the full path) minus the extension.
EXECUTION_CONTEXTS.current.namespace._kw_store.resources is a dictionary of all included resources, where the key is the absolute path. Because the file path is the key, I have to search for the key such that the value's name is the name of the resource in which the keyword is defined (libname)
I took a relatively quick look through the sources, and it seems that the execution context does not have any reference to the currently executing keyword. So, the only way I can think of resolving this is:
Your library needs also to be a listener, since listeners get events when a keyword is started
You need to go through robot.libraries.BuiltIn.EXECUTION_CONTEXT._kw_store.resources to find out which resource file contains the keyword currently executing.
I did not do a POC, so I am not sure whether this is actually doable, but that's the solution that comes to my mind currently.
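A rough, untested sketch of that listener-library idea, using the listener API v2 so the library can observe which keyword is currently running (the class and method names beyond the listener interface are assumptions):

class KeywordTracker(object):
    ROBOT_LISTENER_API_VERSION = 2

    def __init__(self):
        # register the library instance as its own listener
        self.ROBOT_LIBRARY_LISTENER = self
        self.current_keyword = None

    def _start_keyword(self, name, attrs):
        # called by Robot Framework each time a keyword starts executing
        self.current_keyword = name

    def get_current_keyword(self):
        # keyword callable from the resource file that needs the information
        return self.current_keyword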

Mark specific variables as known

I am developing Python scripts which run inside a Jython interpreter. This interpreter sets certain global variables, which I use inside the script.
Pylint of course does not know these variables, so it reports errors all over the place.
Is there a way of making pylint aware that there are certain variables defined outside of its scope?
Alternatively, is there a way that I can define the unknown variables to pylint?
I tried something like
if not globals().has_key('SOME_EXTERNAL_GLOBAL'):
    globals()['SOME_EXTERNAL_GLOBAL'] = None
But that did not help (pylint seems to ignore black magic done to globals()).
You have several options:
add variable names to additional-builtins (see the pylintrc sketch after this list)
additional-builtins:
List of additional names supposed to be
defined in builtins. Remember that you should avoid to define new
builtins when possible.
add # pylint: disable=E0602 comment on top of the file to disable undefined-variable check in the file
add # pylint: disable=E0602 comment in the code where the variable is used
run pylint with --disable-msg=E0602 option
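For the first option, a minimal pylintrc sketch (the variable names are placeholders for whatever your Jython environment injects; the option normally lives in the [VARIABLES] section):

[VARIABLES]
additional-builtins=SOME_EXTERNAL_GLOBAL,ANOTHER_EXTERNAL_GLOBAL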
Also see:
Pylint ignore specific names
Howto ignore specific undefined variables in Pydev Eclipse
How to disable pylint 'Undefined variable' error for a specific variable in a file?

Building a minimal plugin architecture in Python

I have an application, written in Python, which is used by a fairly technical audience (scientists).
I'm looking for a good way to make the application extensible by the users, i.e. a scripting/plugin architecture.
I am looking for something extremely lightweight. Most scripts, or plugins, are not going to be developed and distributed by a third-party and installed, but are going to be something whipped up by a user in a few minutes to automate a repeating task, add support for a file format, etc. So plugins should have the absolute minimum boilerplate code, and require no 'installation' other than copying to a folder (so something like setuptools entry points, or the Zope plugin architecture seems like too much.)
Are there any systems like this already out there, or any projects that implement a similar scheme that I should look at for ideas / inspiration?
Mine is, basically, a directory called "plugins" which the main app can poll and then use imp.load_module to pick up files, look for a well-known entry point possibly with module-level config params, and go from there. I use file-monitoring stuff for a certain amount of dynamism in which plugins are active, but that's a nice-to-have.
Of course, any requirement that comes along saying "I don't need [big, complicated thing] X; I just want something lightweight" runs the risk of re-implementing X one discovered requirement at a time. But that's not to say you can't have some fun doing it anyway :)
module_example.py:

def plugin_main(*args, **kwargs):
    print(args, kwargs)

loader.py:

def load_plugin(name):
    mod = __import__("module_%s" % name)
    return mod

def call_plugin(name, *args, **kwargs):
    plugin = load_plugin(name)
    plugin.plugin_main(*args, **kwargs)

call_plugin("example", 1234)
It's certainly "minimal", it has absolutely no error checking, probably countless security problems, and it's not very flexible - but it should show you how simple a plugin system in Python can be.
You probably want to look into the imp module too, although you can do a lot with just __import__, os.listdir and some string manipulation.
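A hedged sketch of that directory-scanning idea, building on the loader above (it assumes plugin files follow the module_*.py naming convention and live on sys.path):

import os

def discover_plugins(directory="."):
    # Collect plugin names from files matching the module_*.py convention
    names = []
    for fname in os.listdir(directory):
        if fname.startswith("module_") and fname.endswith(".py"):
            names.append(fname[len("module_"):-len(".py")])
    return names

for name in discover_plugins():
    call_plugin(name, 1234)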
Have a look at this overview of existing plugin frameworks / libraries; it is a good starting point. I quite like yapsy, but it depends on your use-case.
While that question is really interesting, I think it's fairly hard to answer without more details. What sort of application is this? Does it have a GUI? Is it a command-line tool? A set of scripts? A program with a unique entry point, etc...?
Given the little information I have, I will answer in a very generic manner.
What means do you have to add plugins?
You will probably have to add a configuration file, which will list the paths/directories to load.
Another way would be to say "any file in that plugin/ directory will be loaded", but it has the inconvenience of requiring your users to move files around.
A last, intermediate option would be to require all plugins to be in the same plugin/ folder, and then to activate/deactivate them using relative paths in a config file.
On a pure code/design level, you'll have to determine clearly what behavior/specific actions you want your users to extend. Identify the common entry point / the set of functionalities that will always be overridden, and determine groups within these actions. Once this is done, it should be easy to extend your application.
Example using hooks, inspired by MediaWiki (PHP, but does language really matter?):
import hooks

# In your core code, on key points, you allow users to run actions:
def compute(...):
    try:
        hooks.runHook(hooks.registered.beforeCompute)
    except hooks.hookException:
        print('Error while executing plugin')

    # [compute main code] ...

    try:
        hooks.runHook(hooks.registered.afterCompute)
    except hooks.hookException:
        print('Error while executing plugin')

# The idea is to insert possibilities for users to extend the behavior
# where it matters.
# If you need to, pass context parameters to runHook. Remember that
# runHook can be defined as a runHook(*args, **kwargs) function, not
# requiring you to define a common interface for *all* hooks. Quite flexible :)

# --------------------
# And in the plugin code:

# [...] plugin magic
def doStuff():
    # ....
    pass

# and register the functionalities in hooks
# doStuff will be called at the end of each core.compute() call
hooks.registered.afterCompute.append(doStuff)
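The hooks module itself isn't shown in the answer; a minimal sketch of what it might look like, using the names from the example above (everything else is an assumption):

# hooks.py (sketch)
class hookException(Exception):
    pass

class _Registry(object):
    def __init__(self):
        self.beforeCompute = []
        self.afterCompute = []

registered = _Registry()

def runHook(hook_list, *args, **kwargs):
    # Call every callback registered for this hook point
    for callback in hook_list:
        try:
            callback(*args, **kwargs)
        except Exception as exc:
            raise hookException(exc)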
Another example, inspired by mercurial. Here, extensions only add commands to the hg command-line executable, extending the behavior.
def doStuff(ui, repo, *args, **kwargs):
    # when called, an extension function always receives:
    # * a ui object (user interface: prints, warnings, etc.)
    # * a repository object (main object from which most operations are doable)
    # * command-line arguments that were not used by the core program
    doMoreMagicStuff()
    obj = maybeCreateSomeObjects()

# each extension defines a commands dictionary in the main extension file
commands = { 'newcommand': doStuff }
For both approaches, you might need a common initialize and finalize for your extensions.
You can either use a common interface that all your extensions will have to implement (fits better with the second approach; mercurial uses a reposetup(ui, repo) that is called for all extensions), or use a hook-kind of approach, with a hooks.setup hook.
But again, if you want more useful answers, you'll have to narrow down your question ;)
Marty Alchin's simple plugin framework is the base I use for my own needs. I really recommend taking a look at it; I think it is really a good start if you want something simple and easily hackable. You can also find it as a Django Snippet.
I am a retired biologist who dealt with digital micrographs and found himself having to write an image processing and analysis package (not technically a library) to run on an SGI machine. I wrote the code in C and used Tcl for the scripting language. The GUI, such as it was, was done using Tk. The commands that appeared in Tcl were of the form "extensionName commandName arg0 arg1 ... param0 param1 ...", that is, simple space-separated words and numbers. When Tcl saw the "extensionName" substring, control was passed to the C package. That in turn ran the command through a lexer/parser (done in lex/yacc) and then called C routines as necessary.
The commands to operate the package could be run one by one via a window in the GUI, but batch jobs were done by editing text files which were valid Tcl scripts; you'd pick the template that did the kind of file-level operation you wanted to do and then edit a copy to contain the actual directory and file names plus the package commands. It worked like a charm. Until ...
1) The world turned to PCs and 2) the scripts got longer than about 500 lines, when Tcl's iffy organizational capabilities started to become a real inconvenience. Time passed ...
I retired, Python got invented, and it looked like the perfect successor to Tcl. Now, I have never done the port, because I have never faced up to the challenges of compiling (pretty big) C programs on a PC, extending Python with a C package, and doing GUIs in Python/Qt?/Tk?/??. However, the old idea of having editable template scripts seems still workable. Also, it should not be too great a burden to enter package commands in a native Python form, e.g.:
packageName.command( arg0, arg1, ..., param0, param1, ...)
A few extra dots, parens, and commas, but those aren't showstoppers.
I remember seeing that someone has done versions of lex and yacc in Python (try: http://www.dabeaz.com/ply/), so if those are still needed, they're around.
The point of this rambling is that it has seemed to me that Python itself IS the desired "lightweight" front end usable by scientists. I'm curious to know why you think that it is not, and I mean that seriously.
added later: The application gedit anticipates plugins being added and their site has about the clearest explanation of a simple plugin procedure I've found in a few minutes of looking around. Try:
https://wiki.gnome.org/Apps/Gedit/PythonPluginHowToOld
I'd still like to understand your question better. I am unclear whether you 1) want scientists to be able to use your (Python) application quite simply in various ways or 2) want to allow the scientists to add new capabilities to your application. Choice #1 is the situation we faced with the images and that led us to use generic scripts which we modified to suit the need of the moment. Is it Choice #2 which leads you to the idea of plugins, or is it some aspect of your application that makes issuing commands to it impracticable?
When I was searching for Python decorators, I found a simple but useful code snippet. It may not fit your needs, but it's very inspiring.
Scipy Advanced Python#Plugin Registration System
class TextProcessor(object):
    PLUGINS = []

    def process(self, text, plugins=()):
        if not plugins:
            for plugin in self.PLUGINS:
                text = plugin().process(text)
        else:
            for plugin in plugins:
                text = plugin().process(text)
        return text

    @classmethod
    def plugin(cls, plugin):
        cls.PLUGINS.append(plugin)
        return plugin

@TextProcessor.plugin
class CleanMarkdownBolds(object):
    def process(self, text):
        return text.replace('**', '')
Usage:
processor = TextProcessor()
processed = processor.process(text="**foo bar**", plugins=(CleanMarkdownBolds, ))
processed = processor.process(text="**foo bar**")
I enjoyed the nice discussion on different plugin architectures given by Dr Andre Roberge at Pycon 2009. He gives a good overview of different ways of implementing plugins, starting from something really simple.
It's available as a podcast (the second part, following an explanation of monkey-patching), accompanied by a series of six blog entries.
I recommend giving it a quick listen before you make a decision.
I arrived here looking for a minimal plugin architecture, and found a lot of things that all seemed like overkill to me. So, I've implemented Super Simple Python Plugins. To use it, you create one or more directories and drop a special __init__.py file in each one. Importing those directories will cause all other Python files to be loaded as submodules, and their name(s) will be placed in the __all__ list. Then it's up to you to validate/initialize/register those modules. There's an example in the README file.
Actually setuptools works with a "plugins directory", as in the following example taken from the project's documentation:
http://peak.telecommunity.com/DevCenter/PkgResources#locating-plugins
Example usage:
import sys
from pkg_resources import Environment, working_set

plugin_dirs = ['foo/plugins'] + sys.path
env = Environment(plugin_dirs)
distributions, errors = working_set.find_plugins(env)
for dist in distributions:
    working_set.add(dist)  # add plugins+libs to sys.path
print("Couldn't load plugins due to: %s" % errors)
In the long run, setuptools is a much safer choice since it can load plugins without conflicts or missing requirements.
Another benefit is that the plugins themselves can be extended using the same mechanism, without the original applications having to care about it.
Expanding on @edomaur's answer, may I suggest taking a look at simple_plugins (shameless plug), which is a simple plugin framework inspired by the work of Marty Alchin.
A short usage example based on the project's README:
# All plugin info
>>> BaseHttpResponse.plugins.keys()
['valid_ids', 'instances_sorted_by_id', 'id_to_class', 'instances',
'classes', 'class_to_id', 'id_to_instance']
# Plugin info can be accessed using either dict...
>>> BaseHttpResponse.plugins['valid_ids']
set([304, 400, 404, 200, 301])
# ... or object notation
>>> BaseHttpResponse.plugins.valid_ids
set([304, 400, 404, 200, 301])
>>> BaseHttpResponse.plugins.classes
set([<class '__main__.NotFound'>, <class '__main__.OK'>,
<class '__main__.NotModified'>, <class '__main__.BadRequest'>,
<class '__main__.MovedPermanently'>])
>>> BaseHttpResponse.plugins.id_to_class[200]
<class '__main__.OK'>
>>> BaseHttpResponse.plugins.id_to_instance[200]
<OK: 200>
>>> BaseHttpResponse.plugins.instances_sorted_by_id
[<OK: 200>, <MovedPermanently: 301>, <NotModified: 304>, <BadRequest: 400>, <NotFound: 404>]
# Coerce the passed value into the right instance
>>> BaseHttpResponse.coerce(200)
<OK: 200>
As yet another approach to a plugin system, you may check the Extend Me project.
For example, let's define a simple class and its extension:
# Define base class for extensions (mount point)
class MyCoolClass(Extensible):
    my_attr_1 = 25

    def my_method1(self, arg1):
        print('Hello, %s' % arg1)

# Define extension, which implements some additional logic
# or modifies existing logic of the base class (MyCoolClass).
# Also, any extension class may be placed in any module You like,
# it just needs to be imported at the start of the app
class MyCoolClassExtension1(MyCoolClass):
    def my_method1(self, arg1):
        super(MyCoolClassExtension1, self).my_method1(arg1.upper())

    def my_method2(self, arg1):
        print("Good bye, %s" % arg1)
And try to use it:
>>> my_cool_obj = MyCoolClass()
>>> print(my_cool_obj.my_attr_1)
25
>>> my_cool_obj.my_method1('World')
Hello, WORLD
>>> my_cool_obj.my_method2('World')
Good bye, World
And show what is hidden behind the scenes:
>>> my_cool_obj.__class__.__bases__
[MyCoolClassExtension1, MyCoolClass]
The extend_me library manipulates the class creation process via metaclasses; thus in the example above, when creating a new instance of MyCoolClass, we get an instance of a new class that is a subclass of both MyCoolClassExtension1 and MyCoolClass, having the functionality of both of them, thanks to Python's multiple inheritance.
For better control over class creation there are a few metaclasses defined in this lib:
ExtensibleType - allows simple extensibility by subclassing
ExtensibleByHashType - similar to ExtensibleType, but having the ability to build specialized versions of a class, allowing global extension of the base class and extension of specialized versions of the class
This lib is used in the OpenERP Proxy Project, and seems to be working well enough!
For a real example of usage, look at the OpenERP Proxy 'field_datetime' extension:
from ..orm.record import Record
import datetime

class RecordDateTime(Record):
    """ Provides auto conversion of datetime fields from
        strings got from the server to comparable datetime objects
    """

    def _get_field(self, ftype, name):
        res = super(RecordDateTime, self)._get_field(ftype, name)
        if res and ftype == 'date':
            return datetime.datetime.strptime(res, '%Y-%m-%d').date()
        elif res and ftype == 'datetime':
            return datetime.datetime.strptime(res, '%Y-%m-%d %H:%M:%S')
        return res
Record here is the extensible object. RecordDateTime is the extension.
To enable an extension, just import the module that contains the extension class, and (in the case above) all Record objects created after that will have the extension class in their base classes, thus having all its functionality.
The main advantage of this library is that code that operates on extensible objects does not need to know about the extensions, and extensions can change everything in the extensible objects.
setuptools has an EntryPoint:
Entry points are a simple way for distributions to “advertise” Python
objects (such as functions or classes) for use by other distributions.
Extensible applications and frameworks can search for entry points
with a particular name or group, either from a specific distribution
or from all active distributions on sys.path, and then inspect or load
the advertised objects at will.
AFAIK this package is always available if you use pip or virtualenv.
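A minimal sketch of the idea: a plugin package advertises a class under a group name, and the host application discovers it at runtime. The group name myapp.plugins and the module/class names below are placeholders:

# setup.py of a plugin package (sketch)
from setuptools import setup

setup(
    name='myapp-sample-plugin',
    version='0.1',
    py_modules=['sample_plugin'],
    entry_points={
        'myapp.plugins': [
            'sample = sample_plugin:SamplePlugin',
        ],
    },
)

# In the host application: discover and load everything advertised for the group
import pkg_resources

for entry_point in pkg_resources.iter_entry_points('myapp.plugins'):
    plugin_class = entry_point.load()
    plugin_class()  # instantiate / register however the app sees fit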
You can use pluginlib.
Plugins are easy to create and can be loaded from other packages, file paths, or entry points.
Create a plugin parent class, defining any required methods:
import pluginlib

@pluginlib.Parent('parser')
class Parser(object):

    @pluginlib.abstractmethod
    def parse(self, string):
        pass

Create a plugin by inheriting a parent class:

import json

class JSON(Parser):

    _alias_ = 'json'

    def parse(self, string):
        return json.loads(string)
Load the plugins:
loader = pluginlib.PluginLoader(modules=['sample_plugins'])
plugins = loader.plugins
parser = plugins.parser.json()
print(parser.parse('{"json": "test"}'))
I have spent time reading this thread while searching for a plugin framework in Python now and then. I have used some, but there were shortcomings with them. Here is what I came up with for your scrutiny in 2017: an interface-free, loosely coupled plugin management system, Load me later. Here are tutorials on how to use it.
I've spent a lot of time trying to find a small plugin system for Python which would fit my needs. But then I just thought, if there is already inheritance, which is natural and flexible, why not use it.
The only problem with using inheritance for plugins is that you don't know what the most specific (lowest in the inheritance tree) plugin classes are.
But this can be solved with a metaclass which keeps track of the inheritance of the base class, and can build a class which inherits from the most specific plugins (call it 'Root extended').
So I came up with a solution by coding such a metaclass:
class PluginBaseMeta(type):
    def __new__(mcls, name, bases, namespace):
        cls = super(PluginBaseMeta, mcls).__new__(mcls, name, bases, namespace)
        if not hasattr(cls, '__pluginextensions__'):  # parent class
            cls.__pluginextensions__ = {cls}  # set reflects lowest plugins
            cls.__pluginroot__ = cls
            cls.__pluginiscachevalid__ = False
        else:  # subclass
            assert not set(namespace) & {'__pluginextensions__',
                                         '__pluginroot__'}  # only in parent
            exts = cls.__pluginextensions__
            exts.difference_update(set(bases))  # remove parents
            exts.add(cls)  # and add current
            cls.__pluginroot__.__pluginiscachevalid__ = False
        return cls

    @property
    def PluginExtended(cls):
        # After PluginExtended creation we'll have only 1 item in set
        # so this is used for caching, mainly not to create same PluginExtended
        if cls.__pluginroot__.__pluginiscachevalid__:
            return next(iter(cls.__pluginextensions__))  # only 1 item in set
        else:
            name = cls.__pluginroot__.__name__ + 'PluginExtended'
            extended = type(name, tuple(cls.__pluginextensions__), {})
            cls.__pluginroot__.__pluginiscachevalid__ = True
            return extended
So when you have a Root base class made with this metaclass, and a tree of plugins which inherit from it, you can automatically get a class which inherits from the most specific plugins by just subclassing:
class RootExtended(RootBase.PluginExtended):
    ... your code here ...
The code base is pretty small (~30 lines of pure code) and as flexible as inheritance allows.
If you're interested, get involved at https://github.com/thodnev/pluginlib
You may also have a look at Groundwork.
The idea is to build applications around reusable components, called patterns and plugins. Plugins are classes that derive from GwBasePattern.
Here's a basic example:
from groundwork import App
from groundwork.patterns import GwBasePattern

class MyPlugin(GwBasePattern):
    def __init__(self, app, **kwargs):
        self.name = "My Plugin"
        super().__init__(app, **kwargs)

    def activate(self):
        pass

    def deactivate(self):
        pass

my_app = App(plugins=[MyPlugin])        # register plugin
my_app.plugins.activate(["My Plugin"])  # activate it
There are also more advanced patterns to handle e.g. command line interfaces, signaling or shared objects.
Groundwork finds its plugins either by programmatically binding them to an app as shown above or automatically via setuptools. Python packages containing plugins must declare these using a special entry point groundwork.plugin.
Here are the docs.
Disclaimer: I'm one of the authors of Groundwork.
In our current healthcare product we have a plugin architecture implemented with an interface class. Our tech stack is Django on top of Python for the API and NuxtJS on top of Node.js for the frontend.
We have a plugin manager app written for our product, which is basically a pip and an npm package that works with Django and NuxtJS.
For new plugin development (pip and npm) we made the plugin manager a dependency.
In the pip package:
With the help of setup.py you can add an entry point for the plugin to do something with the plugin manager (registration, initialization, etc.)
https://setuptools.readthedocs.io/en/latest/setuptools.html#automatic-script-creation
In the npm package:
Similar to pip, there are hooks in npm scripts to handle the installation.
https://docs.npmjs.com/misc/scripts
Our use case:
The plugin development team is separate from the core development team now. The scope of plugin development is integrating with 3rd party apps, which are defined in any of the categories of the product. The plugin interfaces are categorised, e.g. Fax, Phone, Email, etc., and the plugin manager can be extended with new categories.
In your case: maybe you can have one plugin written and reuse it for doing stuff.
If plugin developers need to reuse core objects, those objects can be exposed through a level of abstraction within the plugin manager so that any plugin can inherit those methods.
Just sharing how we implemented it in our product; hope it gives you a little idea.
