Seamless dynamic plugin loading in Python

I'm working on an extensible plugin subsystem for a framework, but I'm stuck on how to get the system to dynamically and seamlessly load plugins from within the plugins themselves. I already know how to add the plugins (probably going to use entry-points). As an example:
pluga.py:

    from framework.plug_lib import plugin

    @plugin
    def foo():
        return 2

    @plugin
    def bar():
        return foo()
plugb.py:

    from framework.plug_lib import plugin

    @plugin
    def foo():
        return 3
framework/plug_lib.py:

    loaded_plugins = {}

    class Plugin(object):
        def __init__(self, plug):  # note: plug could be a class, function, method, etc.
            self.plug = plug

        def plug_load(self):
            loaded_plugins[self.plug.__name__] = self.plug

        def plug_get_loaded_version(self):
            try:
                return loaded_plugins[self.plug.__name__]
            except KeyError:
                raise RuntimeError('No plugin %s loaded.' % self.plug.__name__)

    def plugin(plug):
        return Plugin(plug)

    def load(plugin_path):
        # get plugin by searching entry points, other basic logic
        plugin.plug_load()
Usage examples:

    from framework.plug_lib import load, loaded_plugins
    load('pluga.foo')
    load('pluga.bar')
    loaded_plugins['bar']()  # 2

and:

    from framework.plug_lib import load, loaded_plugins
    load('plugb.foo')
    load('pluga.bar')
    loaded_plugins['bar']()  # 3
This is simplified to illustrate the problem concisely. In the real framework there are multiple different types of plugins, and they all go into different loaded_plugins analogs. If the container for the plugins were a class, I could probably figure out a way to shoehorn this into a descriptor or something, but part of the point of this is to refactor out some boilerplate code that currently wraps the plugins.

Can I customize the way the module itself inspects its own namespace and retrieves its members, with no lines (or at most a single simple line) in the plugin module itself? I basically need something equivalent to __getattribute__ on the module itself, so I can make Plugins behave like poor man's descriptors and return self.plug_get_loaded_version() whenever an attempt is made to access anything other than their plug_load() method. But I have no idea how to do that. Maybe package-time code (something in setup.py), but I think then you wouldn't be able to work on the installed packages in develop mode without rebuilding every time. Any ideas out there?
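For the function case in the examples above, one way to get that behavior without any lines in the plugin modules at all is to make the Plugin wrapper itself dispatch to the loaded version. A minimal sketch against the question's own plug_lib (not a complete solution for class or method plugins):

    # Sketch: extend the Plugin wrapper so that calling it, or touching any
    # other attribute on it, transparently dispatches to whatever version is
    # currently loaded under the same name.
    class Plugin(object):
        def __init__(self, plug):
            self.plug = plug

        def plug_load(self):
            loaded_plugins[self.plug.__name__] = self.plug

        def plug_get_loaded_version(self):
            try:
                return loaded_plugins[self.plug.__name__]
            except KeyError:
                raise RuntimeError('No plugin %s loaded.' % self.plug.__name__)

        def __call__(self, *args, **kwargs):
            # bar()'s global name 'foo' is this wrapper, so calling it runs
            # the *loaded* foo, which is what the usage examples expect
            return self.plug_get_loaded_version()(*args, **kwargs)

        def __getattr__(self, name):
            # only reached for names not defined on the wrapper itself;
            # delegate everything else to the loaded version
            return getattr(self.plug_get_loaded_version(), name)

For true module-level interception, Python 3.7+ supports a module-level __getattr__ (PEP 562), and older versions can replace the module object in sys.modules with a proxy, at the cost of the one line of setup per plugin module the question allows.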

Related

Mock an entire module in Python

I have an application that imports a module from PyPI.
I want to write unittests for that application's source code, but I do not want to use the module from PyPI in those tests.
I want to mock it entirely (the testing machine will not contain that PyPI module, so any import will fail).
Currently, each time I try to load the class I want to test in the unit tests, I immediately get an import error. So I thought about maybe wrapping the import in a try/except ImportError block, catching that import error, and then using command_module.run().
This seems pretty risky/ugly and I was wondering if there's another way.
Another idea was writing an adapter to wrap that PyPI module, but I'm still working on that.
If you know any way I can mock an entire python package, I would appreciate it very much.
Thanks.
If you want to dig into the Python import system, I highly recommend David Beazley's talk.
As for your specific question, here is an example that tests a module when its dependency is missing.
bar.py - the module you want to test when my_bogus_module is missing:

    from my_bogus_module import foo

    def bar(x):
        return foo(x) + 1
mock_bogus.py - a file with your tests that will load a mock module:

    from mock import Mock
    import sys
    import types

    module_name = 'my_bogus_module'
    bogus_module = types.ModuleType(module_name)
    sys.modules[module_name] = bogus_module
    bogus_module.foo = Mock(name=module_name + '.foo')
test_bar.py - tests bar.py when my_bogus_module is not available:

    import unittest

    from mock_bogus import bogus_module  # must be imported before the bar module
    from bar import bar

    class TestBar(unittest.TestCase):
        def test_bar(self):
            bogus_module.foo.return_value = 99
            x = bar(42)
            self.assertEqual(100, x)
You should probably make that a little safer by checking that my_bogus_module isn't actually available when you run your tests. You could also look at pydoc.locate(), which will try to import something and return None if it fails. It seems to be public, but it isn't really documented.
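That availability check could be as small as a guard at the top of mock_bogus.py, using the pydoc.locate() call mentioned above (a sketch; module names as in the example):

    import pydoc

    # fail loudly if the real module is importable: these tests assume it's absent
    if pydoc.locate('my_bogus_module') is not None:
        raise AssertionError('my_bogus_module is importable; expected it to be missing.')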
While @Don Kirkby's answer is correct, you might want to look at the bigger picture. I borrowed the example from the accepted answer:
    import pypilib

    def bar(x):
        return pypilib.foo(x) + 1
Since pypilib is only available in production, it is not surprising that you have some trouble when you try to unit test bar. The function requires the external library to run, therefore it has to be tested with this library. What you need is an integration test.
That said, you might want to force unit testing, and that's generally a good idea because it will improve the confidence you (and others) have in the quality of your code. To widen the unit test area, you have to inject dependencies. Nothing prevents you (in Python!) from passing a module as a parameter (the type is types.ModuleType):
    try:
        import pypilib  # production
    except ImportError:
        pypilib = object()  # testing

    def bar(x, external_lib=pypilib):
        return external_lib.foo(x) + 1
Now, you can unit test the function:
    import unittest
    from unittest.mock import Mock

    class Test(unittest.TestCase):
        def test_bar(self):
            external_lib = Mock(foo=lambda x: 3 * x)
            self.assertEqual(10, bar(3, external_lib))

    if __name__ == "__main__":
        unittest.main()
You might disapprove of the design. The try/except part is a bit cumbersome, especially if you use the pypilib module in several modules of your application, and you have to add a parameter to each function that relies on the external library.
However, the idea of injecting a dependency on the external library is useful, because you can control the input and test the output of your class methods, even if the external library is not within your control. Especially if the imported module is stateful, the state might be difficult to reproduce in a unit test. In this case, passing the module as a parameter may be a solution.
But the usual way to deal with this situation is called the dependency inversion principle (the D of SOLID): you should define the (abstract) boundaries of your application, i.e. what you need from the outside world. Here, that is bar and other functions, preferably grouped in one or many classes:
    import pypilib
    import other_pypilib

    class MyUtil:
        """
        All I need from outside world
        """

        @staticmethod
        def bar(x):
            return pypilib.foo(x) + 1

        @staticmethod
        def baz(x, y):
            return other_pypilib.foo(x, y) * 10.0

        ...
        # not every method has to be static
Each time you need one of these functions, just inject an instance of the class in your code:

    class Application:
        def __init__(self, util: MyUtil):
            self._util = util

        def something(self, x, y):
            return self._util.baz(self._util.bar(x), y)
The MyUtil class must be as slim as possible, but must remain abstract from the underlying library. It is a tradeoff. Obviously, Application can be unit tested (just inject a Mock instead of an instance of MyUtil) while, under some circumstances (like a PyPI library not available during tests, a module that runs inside a framework only, etc.), MyUtil can only be tested within an integration test. If you need to unit test the boundaries of your application, you can use @Don Kirkby's method.
Note that the second benefit, after unit testing, is that if you change the libraries you are using (deprecation, license issue, cost, ...), you just have to rewrite the MyUtil class, using some other libraries or coding it from scratch. Your application is protected from the wild outside world.
Clean Code by Robert C. Martin has a full chapter on the boundaries.
Summary: Before using @Don Kirkby's method or any other method, be sure to define the boundaries of your application irrespective of the specific libraries you are using. This, of course, does not apply to the Python standard library...
For a more explicit and granular approach:
    import unittest
    from unittest.mock import MagicMock, patch

    try:
        import bogus_module
    except ModuleNotFoundError:
        bogus_module = MagicMock()

    @patch.dict('sys.modules', bogus_module=bogus_module)
    class PlatformTests(unittest.TestCase):
        ...
Using the patch.dict decorator gives you granular control: it only applies to the class / method it is applied to.

Lazy-loading modules in Python

I'm trying to put together a system that will handle lazy-loading of modules that don't explicitly exist. Basically I have an http server with a number of endpoints that I don't know ahead of time that I would like to programmatically offer for import. These modules would all have a uniform method signature, they just wouldn't exist ahead of time.
    import lazy.route as test
    import lazy.fake as test2

    test('Does this exist?')  # This sends a post request.
    test2("This doesn't exist.")  # Also sends a post request.
I can handle all the logic I need around these imports with a uniform decorator, I just can't find any way of "decorating" imports in python, or actually interacting with them in any kind of programmatic way.
Does anyone have experience with this? I've been hunting around, and the closest thing I've found is the ast module, which would lead to a really awful, hacky implementation under my current understanding (something like finding all import statements and manually overwriting the import function).
Not looking for a handout, just a piece of the python codebase to start looking at, or an example of someone that's done something similar.
I got a little clever in my googling and managed to find a PEP that specifically addressed this issue (PEP 302, import hooks); it just happens to be relatively unknown, probably because the subset of reasonable uses for this is pretty narrow.
I found an excellent piece of example code showing off the new sys.meta_path implementation. I've posted it below for information on how to dynamically bootstrap your import statements.
    import sys

    class VirtualModule(object):
        def hello(self):
            return 'Hello World!'

    class CustomImporter(object):
        virtual_name = 'my_virtual_module'

        def find_module(self, fullname, path):
            """This method is called by Python if this class
            is on sys.meta_path. fullname is the fully-qualified
            name of the module to look for, and path is either
            __path__ (for submodules and subpackages) or None (for
            a top-level module/package).

            Note that this method will be called every time an import
            statement is detected (or __import__ is called), before
            Python's built-in package/module-finding code kicks in."""
            if fullname == self.virtual_name:
                # As per PEP #302 (which implemented the sys.meta_path protocol),
                # if fullname is the name of a module/package that we want to
                # report as found, then we need to return a loader object.
                # In this simple example, that will just be self.
                return self
            # If we don't provide the requested module, return None, as per
            # PEP #302.
            return None

        def load_module(self, fullname):
            """This method is called by Python if CustomImporter.find_module
            does not return None. fullname is the fully-qualified name
            of the module/package that was requested."""
            if fullname != self.virtual_name:
                # Raise ImportError as per PEP #302 if the requested module/package
                # couldn't be loaded. This should never be reached in this
                # simple example, but it's included here for completeness. :)
                raise ImportError(fullname)
            # PEP #302 says to return the module if the loader object (i.e.,
            # this class) successfully loaded the module.
            # Note that a regular class instance works just fine as a module.
            return VirtualModule()

    if __name__ == '__main__':
        # Add our import hook to sys.meta_path
        sys.meta_path.append(CustomImporter())

        # Let's use our import hook
        import my_virtual_module
        print my_virtual_module.hello()
The full blog post is here
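For reference, the find_module/load_module protocol shown above was deprecated in Python 3.4 and removed in Python 3.12. A rough sketch of the same virtual-module idea using the newer find_spec/exec_module API:

    import sys
    import importlib.abc
    import importlib.machinery

    class VirtualFinder(importlib.abc.MetaPathFinder, importlib.abc.Loader):
        virtual_name = 'my_virtual_module'

        def find_spec(self, fullname, path, target=None):
            if fullname == self.virtual_name:
                return importlib.machinery.ModuleSpec(fullname, self)
            return None  # let the normal import machinery handle everything else

        def create_module(self, spec):
            return None  # use the default module creation

        def exec_module(self, module):
            # populate the freshly created module object
            module.hello = lambda: 'Hello World!'

    sys.meta_path.append(VirtualFinder())
    import my_virtual_module
    print(my_virtual_module.hello())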

Most Pythonic way to provide function metadata at compile time?

I am building a very basic platform in the form of a Python 2.7 module. This module has a read-eval-print loop where entered user commands are mapped to function calls. Since I am trying to make it easy to build plugin modules for my platform, the function calls will be from my Main module to an arbitrary plugin module. I'd like a plugin builder to be able to specify the command that he wants to trigger his function, so I've been looking for a Pythonic way to remotely enter a mapping in the command->function dict in the Main module from the plugin module.
I've looked at several things:
- Method name parsing: the Main module would import the plugin module and scan it for method names that match a certain format. For example, it might add the download_file_command(file) method to its dict as "download file" -> download_file_command. However, getting a concise, easy-to-type command name (say, "dl") requires that the function's name also be short, which isn't good for code readability. It also requires the plugin developer to conform to a precise naming format.

- Cross-module decorators: decorators would let the plugin developer name his function whatever he wants and simply add something like @Main.register("dl"), but they would necessarily require that I both modify another module's namespace and keep global state in the Main module. I understand this is very bad.

- Same-module decorators: using the same logic as above, I could add a decorator that adds the function's name to some command name -> function mapping local to the plugin module, and retrieve the mapping to the Main module with an API call. This requires that certain methods always be present or inherited, though, and - if my understanding of decorators is correct - the function will only register itself the first time it is run and will unnecessarily re-register itself every subsequent time thereafter.
Thus, what I really need is a Pythonic way to annotate a function with the command name that should trigger it, and that way can't be the function's name. I need to be able to extract the command name->function mapping when I import the module, and any less work on the plugin developer's side is a big plus.
Thanks for the help, and my apologies if there are any flaws in my Python understanding; I'm relatively new to the language.
Building on the first part of @ericstalbot's answer, you might find it convenient to use a decorator like the following.
    ################################################################################
    import functools

    def register(command_name):
        def wrapped(fn):
            @functools.wraps(fn)
            def wrapped_f(*args, **kwargs):
                return fn(*args, **kwargs)
            wrapped_f.__doc__ += "(command=%s)" % command_name
            wrapped_f.command_name = command_name
            return wrapped_f
        return wrapped

    ################################################################################
    @register('cp')
    def copy_all_the_files(*args, **kwargs):
        """Copy many files."""
        print "copy_all_the_files:", args, kwargs

    ################################################################################
    print "Command Name: ", copy_all_the_files.command_name
    print "Docstring   : ", copy_all_the_files.__doc__
    copy_all_the_files("a", "b", keep=True)
Output when run:

    Command Name:  cp
    Docstring   :  Copy many files.(command=cp)
    copy_all_the_files: ('a', 'b') {'keep': True}
User-defined functions can have arbitrary attributes. So you could specify that plug-in functions have an attribute with a certain name. For example:

    def a():
        return 1

    a.command_name = 'get_one'
Then, in your module you could build a mapping like this:

    import inspect  # from the standard library
    import plugin

    mapping = {}
    for v in plugin.__dict__.itervalues():
        if inspect.isfunction(v) and hasattr(v, 'command_name'):
            mapping[v.command_name] = v
To read about arbitrary attributes for user-defined functions, see the docs.
There are two parts in a plugin system:
Discover plugins
Trigger some code execution in a plugin
The proposed solutions in your question address only the second part.
There are many ways to implement both, depending on your requirements. For example, to enable plugins, they could be specified in a configuration file for your application:

    plugins = some_package.plugin_for_your_app
              another_plugin_module
              # ...
To implement loading of the plugin modules:

    plugins = [importlib.import_module(name) for name in config.get("plugins")]
To get a dictionary mapping command name -> function:

    commands = {name: func
                for plugin in plugins
                for name, func in plugin.get_commands().items()}
A plugin author can use any method to implement get_commands(), e.g., using prefixes or decorators - your main application shouldn't care as long as get_commands() returns the command dictionary for each plugin.
For example, some_plugin.py (full source):

    def f(a, b):
        return a + b

    def get_commands():
        return {"add": f, "multiply": lambda x, y: x * y}

It defines two commands: add and multiply.
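If a plugin author prefers decorators, the same get_commands() contract can be met with a tiny registry inside the plugin (a sketch; the command() helper is hypothetical, not part of any framework):

    _commands = {}

    def command(name):
        # register the decorated function under the given command name
        def deco(func):
            _commands[name] = func
            return func
        return deco

    @command("add")
    def f(a, b):
        return a + b

    def get_commands():
        return dict(_commands)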

Building a minimal plugin architecture in Python

I have an application, written in Python, which is used by a fairly technical audience (scientists).
I'm looking for a good way to make the application extensible by the users, i.e. a scripting/plugin architecture.
I am looking for something extremely lightweight. Most scripts, or plugins, are not going to be developed and distributed by a third-party and installed, but are going to be something whipped up by a user in a few minutes to automate a repeating task, add support for a file format, etc. So plugins should have the absolute minimum boilerplate code, and require no 'installation' other than copying to a folder (so something like setuptools entry points, or the Zope plugin architecture seems like too much.)
Are there any systems like this already out there, or any projects that implement a similar scheme that I should look at for ideas / inspiration?
Mine is, basically, a directory called "plugins" which the main app can poll and then use imp.load_module to pick up files, look for a well-known entry point possibly with module-level config params, and go from there. I use file-monitoring stuff for a certain amount of dynamism in which plugins are active, but that's a nice-to-have.
Of course, any requirement that comes along saying "I don't need [big, complicated thing] X; I just want something lightweight" runs the risk of re-implementing X one discovered requirement at a time. But that's not to say you can't have some fun doing it anyway :)
module_example.py:

    def plugin_main(*args, **kwargs):
        print args, kwargs

loader.py:

    def load_plugin(name):
        mod = __import__("module_%s" % name)
        return mod

    def call_plugin(name, *args, **kwargs):
        plugin = load_plugin(name)
        plugin.plugin_main(*args, **kwargs)

    call_plugin("example", 1234)
It's certainly "minimal"; it has absolutely no error checking, probably countless security problems, and it's not very flexible - but it should show you how simple a plugin system in Python can be.
You probably want to look into the imp module too, although you can do a lot with just __import__, os.listdir and some string manipulation.
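For instance, a directory-scanning variant of the same idea might look like this (a sketch; it assumes plugins/ is a package on sys.path, relative to the working directory, and that every plugin is a flat module inside it):

    import os

    def discover_plugins(plugin_pkg='plugins'):
        # import every module in the package directory, keyed by module name
        names = [f[:-3] for f in os.listdir(plugin_pkg)
                 if f.endswith('.py') and not f.startswith('__')]
        return {name: __import__('%s.%s' % (plugin_pkg, name), fromlist=[name])
                for name in names}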
Have a look at this overview of existing plugin frameworks / libraries; it is a good starting point. I quite like yapsy, but it depends on your use-case.
While that question is really interesting, I think it's fairly hard to answer without more details. What sort of application is this? Does it have a GUI? Is it a command-line tool? A set of scripts? A program with a unique entry point, etc...?
Given the little information I have, I will answer in a very generic manner.
What means do you have to add plugins?

You will probably have to add a configuration file, which will list the paths/directories to load. Another way would be to say "any files in that plugin/ directory will be loaded", but it has the inconvenience of requiring your users to move files around. A last, intermediate option would be to require all plugins to be in the same plugin/ folder, and then to activate/deactivate them using relative paths in a config file.

On a pure code/design practice, you'll have to determine clearly what behavior/specific actions you want your users to extend. Identify the common entry point/a set of functionalities that will always be overridden, and determine groups within these actions. Once this is done, it should be easy to extend your application.
Example using hooks, inspired by MediaWiki (PHP, but does language really matter?):
    import hooks

    # In your core code, at key points, you allow users to run actions:
    def compute(...):
        try:
            hooks.runHook(hooks.registered.beforeCompute)
        except hooks.hookException:
            print('Error while executing plugin')

        # [compute main code] ...

        try:
            hooks.runHook(hooks.registered.afterCompute)
        except hooks.hookException:
            print('Error while executing plugin')

    # The idea is to insert possibilities for users to extend the behavior
    # where it matters.
    # If you need to, pass context parameters to runHook. Remember that
    # runHook can be defined as a runHook(*args, **kwargs) function, not
    # requiring you to define a common interface for *all* hooks. Quite flexible :)

    # --------------------

    # And in the plugin code:

    # [...] plugin magic
    def doStuff():
        # ....

    # and register the functionalities in hooks
    # doStuff will be called at the end of each core.compute() call
    hooks.registered.afterCompute.append(doStuff)
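The hooks module itself is left open in the answer; one minimal version consistent with the names used above (registered, runHook, hookException) could be (a sketch, not the answer's actual code):

    class hookException(Exception):
        pass

    class _Registry(object):
        def __init__(self):
            self.beforeCompute = []
            self.afterCompute = []

    registered = _Registry()

    def runHook(hook_list, *args, **kwargs):
        # run every registered callback, wrapping failures in hookException
        for func in hook_list:
            try:
                func(*args, **kwargs)
            except Exception as e:
                raise hookException(e)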
Another example, inspired by Mercurial. Here, extensions only add commands to the hg command-line executable, extending the behavior.

    def doStuff(ui, repo, *args, **kwargs):
        # when called, an extension function always receives:
        # * a ui object (user interface, prints, warnings, etc.)
        # * a repository object (main object from which most operations are doable)
        # * command-line arguments that were not used by the core program
        doMoreMagicStuff()
        obj = maybeCreateSomeObjects()

    # each extension defines a commands dictionary in the main extension file
    commands = { 'newcommand': doStuff }
For both approaches, you might need common initialization and finalization for your extensions. You can either use a common interface that all your extensions will have to implement (fits better with the second approach; Mercurial uses a reposetup(ui, repo) that is called for all extensions), or use a hook-kind of approach, with a hooks.setup hook.
But again, if you want more useful answers, you'll have to narrow down your question ;)
Marty Alchin's simple plugin framework is the base I use for my own needs. I really recommend taking a look at it; I think it is a really good start if you want something simple and easily hackable. You can also find it as a Django snippet.
I am a retired biologist who dealt with digital micrographs and found himself having to write an image processing and analysis package (not technically a library) to run on an SGI machine. I wrote the code in C and used Tcl for the scripting language. The GUI, such as it was, was done using Tk. The commands that appeared in Tcl were of the form "extensionName commandName arg0 arg1 ... param0 param1 ...", that is, simple space-separated words and numbers. When Tcl saw the "extensionName" substring, control was passed to the C package. That in turn ran the command through a lexer/parser (done in lex/yacc) and then called C routines as necessary.
The commands to operate the package could be run one by one via a window in the GUI, but batch jobs were done by editing text files which were valid Tcl scripts; you'd pick the template that did the kind of file-level operation you wanted to do and then edit a copy to contain the actual directory and file names plus the package commands. It worked like a charm. Until ...
1) The world turned to PCs and 2) the scripts got longer than about 500 lines, when Tcl's iffy organizational capabilities started to become a real inconvenience. Time passed ...
I retired, Python got invented, and it looked like the perfect successor to Tcl. Now, I have never done the port, because I have never faced up to the challenges of compiling (pretty big) C programs on a PC, extending Python with a C package, and doing GUIs in Python/Gt?/Tk?/??. However, the old idea of having editable template scripts seems still workable. Also, it should not be too great a burden to enter package commands in a native Python form, e.g.:

    packageName.command(arg0, arg1, ..., param0, param1, ...)
A few extra dots, parens, and commas, but those aren't showstoppers.
I remember seeing that someone has done versions of lex and yacc in Python (try: http://www.dabeaz.com/ply/), so if those are still needed, they're around.
The point of this rambling is that it has seemed to me that Python itself IS the desired "lightweight" front end usable by scientists. I'm curious to know why you think that it is not, and I mean that seriously.
added later: The application gedit anticipates plugins being added and their site has about the clearest explanation of a simple plugin procedure I've found in a few minutes of looking around. Try:
https://wiki.gnome.org/Apps/Gedit/PythonPluginHowToOld
I'd still like to understand your question better. I am unclear whether you 1) want scientists to be able to use your (Python) application quite simply in various ways or 2) want to allow the scientists to add new capabilities to your application. Choice #1 is the situation we faced with the images and that led us to use generic scripts which we modified to suit the need of the moment. Is it Choice #2 which leads you to the idea of plugins, or is it some aspect of your application that makes issuing commands to it impracticable?
While searching for Python decorators, I found a simple but useful code snippet. It may not fit your needs, but I found it very inspiring.

Scipy Advanced Python - Plugin Registration System:
    class TextProcessor(object):
        PLUGINS = []

        def process(self, text, plugins=None):
            # a None default is safer than the original `plugins is ()` check
            if plugins is None:
                for plugin in self.PLUGINS:
                    text = plugin().process(text)
            else:
                for plugin in plugins:
                    text = plugin().process(text)
            return text

        @classmethod
        def plugin(cls, plugin):
            cls.PLUGINS.append(plugin)
            return plugin

    @TextProcessor.plugin
    class CleanMarkdownBolds(object):
        def process(self, text):
            return text.replace('**', '')
Usage:

    processor = TextProcessor()
    processed = processor.process(text="**foo bar**", plugins=(CleanMarkdownBolds,))
    processed = processor.process(text="**foo bar**")
I enjoyed the nice discussion on different plugin architectures given by Dr. Andre Roberge at PyCon 2009. He gives a good overview of different ways of implementing plugins, starting from something really simple.

It's available as a podcast (second part, following an explanation of monkey-patching) accompanied by a series of six blog entries.
I recommend giving it a quick listen before you make a decision.
I arrived here looking for a minimal plugin architecture, and found a lot of things that all seemed like overkill to me. So, I've implemented Super Simple Python Plugins. To use it, you create one or more directories and drop a special __init__.py file in each one. Importing those directories will cause all other Python files to be loaded as submodules, and their name(s) will be placed in the __all__ list. Then it's up to you to validate/initialize/register those modules. There's an example in the README file.
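The special __init__.py it describes boils down to scanning its own directory and importing its siblings; the general shape is roughly this (a sketch, not the project's actual code):

    # plugins/__init__.py -- importing this package pulls in all sibling modules
    import os
    import importlib

    __all__ = []
    _pkg_dir = os.path.dirname(__file__)
    for _fname in sorted(os.listdir(_pkg_dir)):
        if _fname.endswith('.py') and not _fname.startswith('_'):
            _name = _fname[:-3]
            importlib.import_module('.' + _name, __name__)
            __all__.append(_name)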
Actually setuptools works with a "plugins directory", as in the following example taken from the project's documentation:
http://peak.telecommunity.com/DevCenter/PkgResources#locating-plugins
Example usage:

    plugin_dirs = ['foo/plugins'] + sys.path
    env = Environment(plugin_dirs)
    distributions, errors = working_set.find_plugins(env)
    map(working_set.add, distributions)  # add plugins+libs to sys.path
    print("Couldn't load plugins due to: %s" % errors)
In the long run, setuptools is a much safer choice since it can load plugins without conflicts or missing requirements.
Another benefit is that the plugins themselves can be extended using the same mechanism, without the original applications having to care about it.
Expanding on @edomaur's answer, may I suggest taking a look at simple_plugins (shameless plug), which is a simple plugin framework inspired by the work of Marty Alchin.
A short usage example based on the project's README:

    # All plugin info
    >>> BaseHttpResponse.plugins.keys()
    ['valid_ids', 'instances_sorted_by_id', 'id_to_class', 'instances',
     'classes', 'class_to_id', 'id_to_instance']

    # Plugin info can be accessed using either dict...
    >>> BaseHttpResponse.plugins['valid_ids']
    set([304, 400, 404, 200, 301])

    # ... or object notation
    >>> BaseHttpResponse.plugins.valid_ids
    set([304, 400, 404, 200, 301])

    >>> BaseHttpResponse.plugins.classes
    set([<class '__main__.NotFound'>, <class '__main__.OK'>,
         <class '__main__.NotModified'>, <class '__main__.BadRequest'>,
         <class '__main__.MovedPermanently'>])

    >>> BaseHttpResponse.plugins.id_to_class[200]
    <class '__main__.OK'>

    >>> BaseHttpResponse.plugins.id_to_instance[200]
    <OK: 200>

    >>> BaseHttpResponse.plugins.instances_sorted_by_id
    [<OK: 200>, <MovedPermanently: 301>, <NotModified: 304>, <BadRequest: 400>, <NotFound: 404>]

    # Coerce the passed value into the right instance
    >>> BaseHttpResponse.coerce(200)
    <OK: 200>
As another approach to a plugin system, you may check the Extend Me project.
For example, let's define a simple class and its extension:

    # Define base class for extensions (mount point)
    class MyCoolClass(Extensible):
        my_attr_1 = 25

        def my_method1(self, arg1):
            print('Hello, %s' % arg1)

    # Define an extension, which implements some additional logic
    # or modifies existing logic of the base class (MyCoolClass).
    # An extension class may be placed in any module you like;
    # it just needs to be imported at the start of the app.
    class MyCoolClassExtension1(MyCoolClass):
        def my_method1(self, arg1):
            super(MyCoolClassExtension1, self).my_method1(arg1.upper())

        def my_method2(self, arg1):
            print("Good bye, %s" % arg1)
And try to use it:

    >>> my_cool_obj = MyCoolClass()
    >>> print(my_cool_obj.my_attr_1)
    25
    >>> my_cool_obj.my_method1('World')
    Hello, WORLD
    >>> my_cool_obj.my_method2('World')
    Good bye, World

And show what is hidden behind the scenes:

    >>> my_cool_obj.__class__.__bases__
    [MyCoolClassExtension1, MyCoolClass]
The extend_me library manipulates the class creation process via metaclasses. Thus, in the example above, when creating a new instance of MyCoolClass we get an instance of a new class that is a subclass of both MyCoolClassExtension1 and MyCoolClass, having the functionality of both of them thanks to Python's multiple inheritance.
For better control over class creation there are a few metaclasses defined in this lib:

- ExtensibleType - allows simple extensibility by subclassing
- ExtensibleByHashType - similar to ExtensibleType, but having the ability to build specialized versions of a class, allowing global extension of the base class and extension of specialized versions of the class
This lib is used in the OpenERP Proxy project and seems to be working well enough!

For a real example of usage, look at the OpenERP Proxy 'field_datetime' extension:
    from ..orm.record import Record
    import datetime

    class RecordDateTime(Record):
        """ Provides auto conversion of datetime fields from
            string got from server to comparable datetime objects
        """

        def _get_field(self, ftype, name):
            res = super(RecordDateTime, self)._get_field(ftype, name)
            if res and ftype == 'date':
                return datetime.datetime.strptime(res, '%Y-%m-%d').date()
            elif res and ftype == 'datetime':
                return datetime.datetime.strptime(res, '%Y-%m-%d %H:%M:%S')
            return res
Record here is the extensible object; RecordDateTime is the extension.

To enable the extension, just import the module that contains the extension class; (in the case above) all Record objects created after that will have the extension class among their base classes, and thus all of its functionality.

The main advantage of this library is that code operating on extensible objects does not need to know about the extensions, and extensions can change everything in extensible objects.
setuptools has an EntryPoint:

    Entry points are a simple way for distributions to “advertise” Python objects (such as functions or classes) for use by other distributions. Extensible applications and frameworks can search for entry points with a particular name or group, either from a specific distribution or from all active distributions on sys.path, and then inspect or load the advertised objects at will.
AFAIK this package is always available if you use pip or virtualenv.
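On the consuming side that might look like this (a sketch; the group name myapp.plugins is made up, and plugins would declare entry points under that group in their setup.py):

    import pkg_resources

    def load_entry_point_plugins(group='myapp.plugins'):
        # import and return every object advertised under the given group
        return {entry_point.name: entry_point.load()
                for entry_point in pkg_resources.iter_entry_points(group)}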
You can use pluginlib.
Plugins are easy to create and can be loaded from other packages, file paths, or entry points.
Create a plugin parent class, defining any required methods:
    import pluginlib

    @pluginlib.Parent('parser')
    class Parser(object):

        @pluginlib.abstractmethod
        def parse(self, string):
            pass
Create a plugin by inheriting a parent class:
    import json

    class JSON(Parser):

        _alias_ = 'json'

        def parse(self, string):
            return json.loads(string)
Load the plugins:
    loader = pluginlib.PluginLoader(modules=['sample_plugins'])
    plugins = loader.plugins
    parser = plugins.parser.json()
    print(parser.parse('{"json": "test"}'))
I have spent time reading this thread while searching for a plugin framework in Python now and then. I have used some, but there were shortcomings with them. Here is what I came up with for your scrutiny in 2017: an interface-free, loosely coupled plugin management system called Load me later. Here are tutorials on how to use it.
I've spent a lot of time trying to find a small plugin system for Python which would fit my needs. But then I just thought: if there is already inheritance, which is natural and flexible, why not use it?

The only problem with using inheritance for plugins is that you don't know what the most specific (the lowest on the inheritance tree) plugin classes are.

But this can be solved with a metaclass which keeps track of the inheritance of a base class, and could possibly build a class which inherits from the most specific plugins (the 'RootExtended' class in the subclassing example below).
So I came up with a solution by coding such a metaclass:

    class PluginBaseMeta(type):
        def __new__(mcls, name, bases, namespace):
            cls = super(PluginBaseMeta, mcls).__new__(mcls, name, bases, namespace)
            if not hasattr(cls, '__pluginextensions__'):  # parent class
                cls.__pluginextensions__ = {cls}  # set reflects lowest plugins
                cls.__pluginroot__ = cls
                cls.__pluginiscachevalid__ = False
            else:  # subclass
                assert not set(namespace) & {'__pluginextensions__',
                                             '__pluginroot__'}   # only in parent
                exts = cls.__pluginextensions__
                exts.difference_update(set(bases))  # remove parents
                exts.add(cls)                       # and add current
                cls.__pluginroot__.__pluginiscachevalid__ = False
            return cls

        @property
        def PluginExtended(cls):
            # After PluginExtended creation we'll have only 1 item in the set,
            # so this is used for caching, mainly not to create the same
            # PluginExtended twice
            if cls.__pluginroot__.__pluginiscachevalid__:
                return next(iter(cls.__pluginextensions__))  # only 1 item in set
            else:
                name = cls.__pluginroot__.__name__ + 'PluginExtended'
                extended = type(name, tuple(cls.__pluginextensions__), {})
                cls.__pluginroot__.__pluginiscachevalid__ = True
                return extended
So when you have a Root base made with this metaclass, and a tree of plugins which inherit from it, you can automatically get a class which inherits from the most specific plugins by just subclassing:

    class RootExtended(RootBase.PluginExtended):
        ... your code here ...
Code base is pretty small (~30 lines of pure code) and as flexible as inheritance allows.
If you're interested, get involved @ https://github.com/thodnev/pluginlib
You may also have a look at Groundwork.
The idea is to build applications around reusable components, called patterns and plugins. Plugins are classes that derive from GwBasePattern.
Here's a basic example:
    from groundwork import App
    from groundwork.patterns import GwBasePattern

    class MyPlugin(GwBasePattern):
        def __init__(self, app, **kwargs):
            self.name = "My Plugin"
            super().__init__(app, **kwargs)

        def activate(self):
            pass

        def deactivate(self):
            pass

    my_app = App(plugins=[MyPlugin])        # register plugin
    my_app.plugins.activate(["My Plugin"])  # activate it
There are also more advanced patterns to handle e.g. command line interfaces, signaling or shared objects.
Groundwork finds its plugins either by programmatically binding them to an app as shown above or automatically via setuptools. Python packages containing plugins must declare these using a special entry point groundwork.plugin.
Here are the docs.
Disclaimer: I'm one of the authors of Groundwork.
In our current healthcare product we have a plugin architecture implemented with an interface class. Our tech stack is Django on top of Python for the API and Nuxt.js on top of Node.js for the frontend.

We have a plugin manager app written for our product, which is basically a pip and npm package in adherence with Django and Nuxt.js. For new plugin development (pip and npm) we made the plugin manager a dependency.

In the pip package: with the help of setup.py you can add an entry point for the plugin to do something with the plugin manager (registration, initialization, etc.):
https://setuptools.readthedocs.io/en/latest/setuptools.html#automatic-script-creation

In the npm package: similar to pip, there are hooks in npm scripts to handle the installation:
https://docs.npmjs.com/misc/scripts

Our use case: the plugin development team is now separate from the core development team. The scope of plugin development is integrating with third-party apps, which are defined in any of the categories of the product. The plugin interfaces are categorized, e.g. fax, phone, email, etc., and the plugin manager can be enhanced for new categories.

In your case: maybe you can have one plugin written and reuse it for doing different things. If plugin developers need to reuse core objects, those objects can be exposed through a level of abstraction within the plugin manager so that any plugin can inherit those methods.

Just sharing how we implemented it in our product; I hope it gives you a little idea.

Can I call and set the Python gettext module in a library and a module using it at the same time?

I'm coding a library including textual feedback that I need to translate.
I put the following lines in a _config.py module that I import everywhere in my app:

    import gettext, os, sys

    pathname = os.path.dirname(sys.argv[0])
    localdir = os.path.abspath(pathname) + "/locale"
    gettext.install("messages", localdir)
I have the *.mo files in ./locale/lang_LANG/LC_MESSAGES and I apply the _() function to all the strings that need to be translated.
Now I just added a feature for the user, supposedly a programmer, to be able to create his own messages. I don't want him to care about the underlying implementation, so I want him to be able to write something straightforward like:

    lib_object.message = "My message"
I used properties to make it clean, but what if my user wants to translate his own code (that uses mine) and does something like:

    import gettext, os, sys

    pathname = os.path.dirname(sys.argv[0])
    localdir = os.path.abspath(pathname) + "/locale"
    gettext.install("user_app", localdir)

    lib_object.message = _("My message")
Is this a problem? What can I do to avoid trouble without bothering my user?
You can use the class-based gettext API to isolate message catalogs. This is also what is recommended in the Python gettext documentation.

The drawback is that you, or the other dev, will have to use the gettext method or define the _() method in the local scope, bound to the specific gettext class. An example of a class with its own string catalog:
    import gettext

    class MyClass(object):

        def __init__(self, locale_for_instance):
            # note: gettext.translation() takes a list of languages,
            # not a single 'locale' keyword
            self.lang = gettext.translation("appname", localedir,
                                            languages=[locale_for_instance])

        def some_method(self, arg):
            return self.lang.gettext("You called some method")

        def other_method(self, arg):  # does the same thing
            _ = self.lang.gettext
            return _("You called some method")
You could stick the code for adding the _() in a decorator, so all the methods that need it are prefixed with something like @with_local_gettext.

(Note, I've not tested the above code, but It Should Work Just Fine(tm).)
If the goal is to not bother your user (and he's not very good), I guess you could use the class-based approach in your code and let the user use the global one.
You can only gettext.install() once. In general it's useless for library work -- gettext.install() will only do the right thing if the module calling it is in charge of the whole program, since it will only provide you with one catalog to load from. Library code should do something akin to what Mailman does: have its own wrapper for gettext() that passes the right arguments for the library, then import that as _ in each module that wants to use it.
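A minimal version of that Mailman-style wrapper might be a small i18n module inside the library (a sketch; names like mylib and the locale layout are illustrative):

    # mylib/i18n.py -- every module in the library does: from mylib.i18n import _
    import gettext
    import os

    _localedir = os.path.join(os.path.dirname(__file__), 'locale')
    _translations = gettext.translation('mylib', _localedir, fallback=True)

    def _(message):
        return _translations.gettext(message)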
