Plugin manager in Python

I am relatively new to Python (I have already written some one-hour scripts like a little webserver and a local network chat) and want to program a plugin manager in it.
My idea is that there is an interface for plugins with the following features:
getDependencies -> all dependencies of the plugin on other plugins
getFunctions -> all functions that this plugin introduces
initialize -> a function that is called when loading the plugin
(I could imagine having a topological sort over the dependencies to decide the order in which the plugins are initialized.)
I would like to implement multithreading, meaning that each plugin runs in its own thread, which has a work queue of function calls that are executed serially. When a plugin calls a function of another plugin, it calls the manager, which in turn inserts the function call into the queue of the other plugin.
Further, the manager should provide some kind of event system in which plugins can register their own events and become listeners to the events of others.
Also, I want to be able to reload a plugin when its code has changed or its thread has crashed, without shutting down the manager/application. I have already read How do I unload (reload) a Python module? in conjunction with this.
To make it clear once more: the manager should not provide any functionality other than giving its plugins a common communication interface to each other, the ability to run side by side (in a multithreaded manner, without requiring the plugins to be aware of this), and the restoration of updated/crashed plugins.
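To make the idea more concrete, here is a rough sketch of what I have in mind (all names are placeholders, nothing here is final):

import queue
import threading

class Plugin:
    """The plugin interface; concrete plugins override these."""
    def get_dependencies(self):
        return []            # names of plugins this plugin depends on
    def get_functions(self):
        return {}            # name -> callable introduced by this plugin
    def initialize(self, manager):
        self.manager = manager

class PluginThread(threading.Thread):
    """One thread per plugin, draining a queue of function calls serially."""
    def __init__(self, plugin):
        super().__init__(daemon=True)
        self.plugin = plugin
        self.calls = queue.Queue()
    def run(self):
        while True:
            func, args = self.calls.get()
            func(*args)      # a crash here is where reloading would hook in

class PluginManager:
    """Routes cross-plugin calls into the target plugin's queue."""
    def __init__(self):
        self.threads = {}    # plugin name -> PluginThread
    def add(self, name, plugin):
        thread = PluginThread(plugin)
        self.threads[name] = thread
        plugin.initialize(self)
        thread.start()
    def call(self, target, func_name, *args):
        thread = self.threads[target]
        func = thread.plugin.get_functions()[func_name]
        thread.calls.put((func, args))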
So my questions are: Is it possible to do this in Python? And if so, are there design mistakes in this rough sketch? I would appreciate any good advice on this.
Other "literature":
Implementing a Plugin System in Python

At the most basic level, you first want to provide a basic Plugin class which is a base for all plugins written for your application.
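Such a base class can be nearly empty. A minimal sketch (the channel argument is an assumption, chosen to match the Process/Pipe example further down):

class Plugin(object):
    """Base class every plugin subclasses so the loader can find it."""
    def __init__(self, channel):
        self.channel = channel   # e.g. one end of a multiprocessing.Pipe

    def run(self):
        raise NotImplementedError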
Next we need to import them all.
import os
import sys

class PluginLoader():
    def __init__(self, path):
        self.path = path

    def __iter__(self):
        for (dirpath, dirs, files) in os.walk(self.path):
            if not dirpath in sys.path:
                sys.path.insert(0, dirpath)
            for file in files:
                (name, ext) = os.path.splitext(file)
                if ext == os.extsep + "py":
                    __import__(name, None, None, [''])
        for plugin in Plugin.__subclasses__():
            yield plugin
In Python 2.7 or 3.1+, instead of __import__(name, None, None, ['']), consider:
import importlib # just once
importlib.import_module(name)
This loads every plugin file and gives us all plugins. You would then select your plugins as you saw fit, and then use them:
from multiprocessing import Process, Pipe

plugins = {}

for plugin in PluginLoader("plugins"):
    ...  # select plugin(s)
    if selected:
        plugins[plugin.__name__], child = Pipe()
        p = Process(target=plugin, args=(child,))
        p.start()

...

for plugin in plugins.values():
    plugin.send("EventHappened")   # Pipe connections use send/recv

...

for plugin in plugins.values():
    if plugin.poll():              # non-blocking check for a pending event
        event = plugin.recv()
        ...  # handle event
This is just what comes to mind at first. Obviously much more would be needed to flesh this out, but it should be a good basis to work from.

Check out the yapsy plugin framework: https://github.com/tibonihoo/yapsy. This should work for you.

Related

How to access the Custom Protocol URL used to invoke the macOS App which consists of a single Python file (from inside the Python code)

I have written a small Python file which I am packaging as a .app and installing on macOS (latest version). The app is intended to be invoked using a custom protocol similar to "abc://efg/ijk/lmn". The Python file employs the pyobjc package and intends to use it to implement the business logic. I have the option of using only the Python language to implement my business logic, because of legacy reasons.
I have to access the invoking custom URL "abc://efg/ijk/lmn" from inside the Python code and parse the values. The "efg", "ijk" and "lmn" in the custom URL will vary and will be used to take some decisions further down the flow.
I have tried multiple things from whatever I could find on the internet, but I am unable to access the custom URL from within the Python code. The value of sys.argv comes out as below:
sys.argv = ['/Applications/XXXXXApp.app/Contents/MacOS/XXXXXApp', '-psn_0_4490312']
But on Windows, sys.argv[0] is populated with the custom URL.
I will appreciate any directions.
The code below is what I have tried, among many other variations of it.
import os
import struct

from Cocoa import NSObject, NSAppleEventManager

def fourCharToInt(code):
    # helper assumed from the question: four-char code -> unsigned int
    return struct.unpack('>I', code)[0]

mylogger = open(os.path.expanduser("~/Desktop/somefile.txt"), 'w+')

class apple_event_handler(NSObject):

    def applicationWillFinishLaunching_(self, notification):
        mylogger.write("Will finish ")

    def applicationDidFinishLaunching_(self, notification):
        mylogger.write("Did Finish")

    def handleAppleEvent_withReplyEvent_(self, event, reply_event):
        theURL = event.descriptorForKeyword_(fourCharToInt(b'----'))
        mylogger.write("********* Handler Invoked !!! *********")
        mylogger.write("********* the URL = " + str(theURL))
        mylogger.write(*self.args)

aem = NSAppleEventManager.sharedAppleEventManager()
aeh = apple_event_handler.alloc().init()
aem.setEventHandler_andSelector_forEventClass_andEventID_(
    aeh, "handleAppleEvent:withReplyEvent:", 1, 1)
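For what it's worth, the usual approach on macOS is to register for the kAEGetURL Apple event (event class 'GURL', event ID 'GURL') instead of event class/ID 1, and to read the URL from the direct-object parameter. A minimal sketch, assuming PyObjC is installed (the class and helper names here are made up):

import struct

from Cocoa import NSObject, NSAppleEventManager
from PyObjCTools import AppHelper

def four_char(code):
    # b'GURL' -> the corresponding unsigned 32-bit event code
    return struct.unpack('>I', code)[0]

class URLHandler(NSObject):
    def handleAppleEvent_withReplyEvent_(self, event, reply_event):
        # keyDirectObject is the four-char code '----'
        url = event.paramDescriptorForKeyword_(four_char(b'----')).stringValue()
        # url is e.g. "abc://efg/ijk/lmn"; parse and branch on it here

handler = URLHandler.alloc().init()
aem = NSAppleEventManager.sharedAppleEventManager()
aem.setEventHandler_andSelector_forEventClass_andEventID_(
    handler, 'handleAppleEvent:withReplyEvent:',
    four_char(b'GURL'), four_char(b'GURL'))
AppHelper.runEventLoop()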

A timeout decorator class with multiprocessing gives a pickling error

So on Windows, the signal and thread approaches are in general bad ideas / don't work for timing out functions.
I've made the following timeout code, which throws a timeout exception from multiprocessing when the code takes too long. This is exactly what I want.
from multiprocessing import Pool

def timeout(timeout, func, *arg):
    with Pool(processes=1) as pool:
        result = pool.apply_async(func, arg)
        return result.get(timeout=timeout)
I'm now trying to get this into a decorator style so that I can add it to a wide range of functions, especially those where external services are called and I have no control over the code or duration. My current attempt is below:
from multiprocessing import Pool

class TimeWrapper(object):

    def __init__(self, timeout=10):
        """Timing decorator"""
        self.timeout = timeout

    def __call__(self, f):
        def wrapped_f(*args):
            with Pool(processes=1) as pool:
                result = pool.apply_async(f, args)
                return result.get(timeout=self.timeout)
        return wrapped_f
It gives a pickling error:

@TimeWrapper(7)
def func2(x, y):
    time.sleep(5)
    return x*y

File "C:\Users\rmenk\AppData\Local\Continuum\anaconda3\lib\multiprocessing\reduction.py", line 51, in dumps
    cls(buf, protocol).dump(obj)
_pickle.PicklingError: Can't pickle <function func2 at 0x000000770C8E4730>: it's not the same object as __main__.func2
I suspect this is due to multiprocessing and the decorator not playing nicely together, but I don't actually know how to make them play nicely. Any ideas on how to fix this?
PS: I've done some extensive research on this site and elsewhere but haven't found any answers that work, be it with pebble, threading, as a function decorator or otherwise. If you have a solution that you know works on Windows and Python 3.5, I'd be very happy to just use that.
What you are trying to achieve is particularly cumbersome on Windows. The core issue is that when you decorate a function, you shadow it. This happens to work just fine on UNIX because it uses the fork strategy to create new processes.
On Windows, though, the new process will be a blank one in which a brand new Python interpreter is started and loads your module. When the module gets loaded, the decorator hides the real function, making it hard for the pickle protocol to find.
The only way to get it right is to set up a trampoline function during the decoration. You can take a look at how this is done in pebble but, as long as you're not doing it as an exercise, I'd recommend using pebble directly, as it already offers what you are looking for.
from pebble import concurrent

@concurrent.process(timeout=60)
def my_function(var, keyvar=0):
    return var + keyvar

future = my_function(1, keyvar=2)
future.result()
The only problem you have here is that you tested the decorated function in the main context. Move it out to a different module and it will probably work.
I wrote the wrapt_timeout_decorator, which uses wrapt & dill & multiprocess & pipes (versus pickle & multiprocessing & queue) because it can serialize more datatypes.
It might look simple at first, but under Windows a reliable timeout decorator is quite tricky - you might use mine, it's quite mature and tested:
https://github.com/bitranox/wrapt_timeout_decorator
On Windows the main module is imported again (but with a name != '__main__') because Python is trying to simulate a forking-like behavior on a system that doesn't support forking. multiprocessing tries to create an environment similar to your main process by importing the main module again with a different name. That's why you need to shield the entry point of your program with the famous if __name__ == '__main__':
import lib_foo

def some_module():
    lib_foo.function_foo()

def main():
    some_module()

# here the subprocess stops loading, because __name__ is NOT '__main__'
if __name__ == '__main__':
    main()
This is a problem of the Windows OS, because Windows does not support "fork".
You can find more information on that here:
Workaround for using __name__=='__main__' in Python multiprocessing
https://docs.python.org/2/library/multiprocessing.html#windows
Since main.py is loaded again under a name other than '__main__', the decorated function now points to objects that do not exist anymore; therefore you need to put the decorated classes and functions into another module. In general (especially on Windows), the main() program should not contain anything but the main function; the real thing should happen in the modules. I am also used to putting all settings and configurations in a different file, so all processes and threads can access them (and also to keep them in one place together, not to forget typing hints and name completion in your favorite editor).
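Concretely, the layout described above might look like this (the file and function names are just for illustration):

# my_functions.py -- decorated functions live in a module, not in __main__
from wrapt_timeout_decorator import timeout

@timeout(5)
def mytest(message):
    import time
    time.sleep(10)       # longer than the timeout -> raises TimeoutError
    return message

# main.py -- the entry point contains nothing but main()
from my_functions import mytest

if __name__ == '__main__':
    mytest('hello')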
The "dill" serializer is able to serialize also the main context, that means the objects in our example are pickled to "main.lib_foo", "main.some_module","main.main" etc. We would not have this limitation when using "pickle" with the downside that "pickle" can not serialize following types:
functions with yields, nested functions, lambdas, cell, method, unboundmethod, module, code, methodwrapper, dictproxy, methoddescriptor, getsetdescriptor, memberdescriptor, wrapperdescriptor, xrange, slice, notimplemented, ellipsis, quit
Additionally, dill supports:
saving and loading Python interpreter sessions, saving and extracting the source code from functions and classes, and interactively diagnosing pickling errors
To support more types with the decorator, we selected dill as the serializer, with the small downside that methods and classes cannot be decorated in the main context but need to reside in a module.
You can find more information on that here: Serializing an object in __main__ with pickle or dill

python multiprocessing sharing data between separate python processes

Multiprocessing allows me to share data between processes started from within the same Python runtime interpreter.
But what if I had a need to share data between processes started by separate Python runtime processes?
I was looking at multiprocessing.Manager, which seems to be the right construct for it. If I create a manager I can see its address:
>>> from multiprocessing import Manager
>>> m=Manager()
>>> m.address
'/tmp/pymp-o2TCd_/listener-Qld03B'
And the socket is there:
adrian#sammy ~/temp $ netstat -naA unix | grep pymp
unix 2 [ ACC ] STREAM LISTENING 1220401 /tmp/pymp-o2TCd_/listener-Qld03B
If I start a new process with multiprocessing.Process it spawns a new python interpreter that somehow inherits information about these shared constructs like this Manager.
Is there a way to access it from a new python process NOT spawned from the same one that created the Manager?
You are on the (or a) right track with this.
In a comment, stovfl suggests looking at the remote manager section of the Python multiprocessing Manager documentation (Python2, Python3). As you have observed, each manager has a name-able entity (a socket in /tmp in this case) through which each Python process can connect to a peer Python process. Because these are accessible from any process, however, they each have an access key.
The default key for each Manager is the one for the "main process", and it is a string of 32 random bytes:
class _MainProcess(BaseProcess):

    def __init__(self):
        self._identity = ()
        self._name = 'MainProcess'
        self._parent_pid = None
        self._popen = None
        self._config = {'authkey': AuthenticationString(os.urandom(32)),
                        'semprefix': '/mp'}
        # Note that some versions of FreeBSD only allow named
        # semaphores to have names of up to 14 characters. Therefore
        # we choose a short prefix.
        #
        # On MacOSX in a sandbox it may be necessary to use a
        # different prefix -- see #19478.
        #
        # Everything in self._config will be inherited by descendant
        # processes.
but you may assign your own key, which you can then know and therefore use from anywhere else.
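For example, here is a minimal sketch of a remote manager shared between independently started interpreters, using the documented multiprocessing.managers.BaseManager API (the address, port and key are made up):

# server.py -- run in one interpreter
from multiprocessing.managers import BaseManager
import queue

shared = queue.Queue()

class QueueManager(BaseManager):
    pass

QueueManager.register('get_queue', callable=lambda: shared)
mgr = QueueManager(address=('127.0.0.1', 50000), authkey=b'not-a-good-key')
mgr.get_server().serve_forever()

# client.py -- run in a completely separate interpreter
from multiprocessing.managers import BaseManager

class QueueManager(BaseManager):
    pass

QueueManager.register('get_queue')
mgr = QueueManager(address=('127.0.0.1', 50000), authkey=b'not-a-good-key')
mgr.connect()
mgr.get_queue().put('hello from another process')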
There are other ways to handle this. For instance, you can use XML RPC to export callable functions from one Python process, callable from anything—not just Python—that can speak XML RPC. See the Python2 or Python3 documentation. Heed this warning (this is the py3k variant but it applies in py2k as well):
Warning: The xmlrpc.client module is not secure against maliciously constructed data. If you need to parse untrusted or unauthenticated data see XML vulnerabilities.
Do not, however, assume that using a multiprocessing.Manager instead of XML RPC secures you against maliciously constructed data. Those are just as vulnerable since they will unpickle arbitrary data. See Attacking Python's pickle for more about this.
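To illustrate the XML RPC route (Python 3 module names; the exported function and port are made up):

# rpc_server.py -- exports one function over XML RPC
from xmlrpc.server import SimpleXMLRPCServer

def add(x, y):
    return x + y

server = SimpleXMLRPCServer(('127.0.0.1', 8000))
server.register_function(add, 'add')
server.serve_forever()

# rpc_client.py -- anything that speaks XML RPC can call it, Python or not
import xmlrpc.client

proxy = xmlrpc.client.ServerProxy('http://127.0.0.1:8000/')
print(proxy.add(2, 3))  # -> 5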

Why is __import__ causing my program to hang?

I am working on a Python app that uses the default Python logging system. Part of this system is the ability to define handlers in a logging config file. One of the handlers for this app is the Django admin email handler, "django.utils.log.AdminEmailHandler". When the app initializes the logging system, it calls logging.config.fileConfig. This is done on a background thread that attempts to reload the config file periodically. I believe that is important.
I have traced through the python logging source code down to the method:
def _resolve(name):
    """Resolve a dotted name to a global object."""
    name = name.split('.')
    used = name.pop(0)
    found = __import__(used)
    for n in name:
        used = used + '.' + n
        try:
            found = getattr(found, n)
        except AttributeError:
            __import__(used)
            found = getattr(found, n)
    return found
in the file python2.7/logging/config.py
When this function is given the parameter "django.utils.log.AdminEmailHandler" in order to create that handler, my app hangs on the statement
__import__(used)
where used is "django".
I did a little research and saw some mentions of __import__ not being thread safe, and advice to avoid its use in background threads. Is this accurate? And knowing that __import__("django") does cause a deadlock, is there anything I could do to prevent it?
I suggest using the default Django LOGGING setting to control logging. For development, starting the server with manage.py runserver will automatically reload Django if any files are changed, including the settings file with the logging configuration. In practice it works quite well!
https://docs.djangoproject.com/en/dev/topics/logging/#examples
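For reference, a minimal LOGGING configuration wiring up the same handler might look like this (a sketch along the lines of the Django logging docs):

# settings.py
LOGGING = {
    'version': 1,
    'disable_existing_loggers': False,
    'handlers': {
        'mail_admins': {
            'level': 'ERROR',
            'class': 'django.utils.log.AdminEmailHandler',
        },
    },
    'loggers': {
        'django': {
            'handlers': ['mail_admins'],
            'level': 'ERROR',
        },
    },
}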

Basic Cocoa application using dock in Python, but without Xcode and all those extras

It seems that if I want to create a very basic Cocoa application with a dock icon and the like, I would have to use Xcode and the GUI builder (w/ PyObjC).
The application I am intending to write is largely concerned with algorithms and basic IO - and thus, not mostly related to Apple specific stuff.
Basically the application is supposed to run periodically (say, every 3 minutes) .. pull some information via AppleScript and write HTML files to a particular directory. I would like to add a Dock icon for this application .. mainly to show the "status" of the process (for example, if there is an error, the dock icon would have a red flag on it). Another advantage of the dock icon is that I can make it run on startup.
Additional bonus for defining the dock right-click menu in a simple way (eg: using Python lists of callables).
Can I achieve this without using Xcode or GUI builders but simply using Emacs and Python?
Install the latest py2app, then make a new directory -- cd to it -- and in it make a HelloWorld.py file such as:
# generic Python imports
import datetime
import os
import sched
import sys
import tempfile
import threading
import time

# need PyObjC on sys.path...:
for d in sys.path:
    if 'Extras' in d:
        sys.path.append(d + '/PyObjC')
        break

# objc-related imports
import objc
from Foundation import *
from AppKit import *
from PyObjCTools import AppHelper

# all stuff related to the repeating-action
thesched = sched.scheduler(time.time, time.sleep)

def tick(n, writer):
    writer(n)
    thesched.enter(20.0, 10, tick, (n+1, writer))
    fd, name = tempfile.mkstemp('.txt', 'hello', '/tmp')
    print 'writing %r' % name
    f = os.fdopen(fd, 'w')
    f.write(datetime.datetime.now().isoformat())
    f.write('\n')
    f.close()

def schedule(writer):
    pool = NSAutoreleasePool.alloc().init()
    thesched.enter(0.0, 10, tick, (1, writer))
    thesched.run()
    # normally you'd want pool.drain() here, but since this function never
    # ends until end of program (thesched.run never returns since each tick
    # schedules a new one) that pool.drain would never execute here;-).

# objc-related stuff
class TheDelegate(NSObject):

    statusbar = None
    state = 'idle'

    def applicationDidFinishLaunching_(self, notification):
        statusbar = NSStatusBar.systemStatusBar()
        self.statusitem = statusbar.statusItemWithLength_(
            NSVariableStatusItemLength)
        self.statusitem.setHighlightMode_(1)
        self.statusitem.setToolTip_('Example')
        self.statusitem.setTitle_('Example')
        self.menu = NSMenu.alloc().init()
        menuitem = NSMenuItem.alloc().initWithTitle_action_keyEquivalent_(
            'Quit', 'terminate:', '')
        self.menu.addItem_(menuitem)
        self.statusitem.setMenu_(self.menu)

    def writer(self, s):
        self.badge.setBadgeLabel_(str(s))

if __name__ == "__main__":
    # prepare and set our delegate
    app = NSApplication.sharedApplication()
    delegate = TheDelegate.alloc().init()
    app.setDelegate_(delegate)
    delegate.badge = app.dockTile()
    delegate.writer(0)
    # on a separate thread, run the scheduler
    t = threading.Thread(target=schedule, args=(delegate.writer,))
    t.setDaemon(1)
    t.start()
    # let her rip!-)
    AppHelper.runEventLoop()
Of course, in your real code, you'll be performing your own periodic actions every 3 minutes (rather than writing a temp file every 20 seconds as I'm doing here), doing your own status updates (rather than just showing a counter of the number of files written so far), etc, etc, but I hope this example shows you a viable starting point.
Then in Terminal.app, cd to the directory containing this source file and run py2applet --make-setup HelloWorld.py, then python setup.py py2app -A -p PyObjC.
You now have in subdirectory dist a directory HelloWorld.app; open dist and drag the icon to the Dock, and you're all set (on your own machine -- distributing to other machines may not work due to the -A flag, but I had trouble building without it, probably due to mis-installed egg files lying around this machine;-). No doubt you'll want to customize your icon &c.
This doesn't do the "extra credit" thingy you asked for -- it's already a lot of code and I decided to stop here (the extra credit thingy may warrant a new question). Also, I'm not quite sure that all the incantations I'm performing here are actually necessary or useful; the docs are pretty scant on making a pyobjc .app without Xcode, as you require, so I hacked this together from bits and pieces of example code found on the net plus a substantial amount of trial and error. Still, I hope it helps!-)
PyObjC, which is included with Mac OS X 10.5 and 10.6, is pretty close to what you're looking for.
Chuck is correct about PyObjC.
You should then read about this NSApplication method to change your icon.
-(void)setApplicationIconImage:(NSImage *)anImage;
For the dock menu, implement the following in an application delegate. You can build an NSMenu programmatically to avoid using Interface Builder.
-(NSMenu *)applicationDockMenu:(NSApplication *)sender;
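Putting those two pieces together, a delegate might look roughly like this (a sketch assuming PyObjC; the names and the icon file are placeholders):

from AppKit import NSApplication, NSImage, NSMenu, NSMenuItem
from Foundation import NSObject

class DockDelegate(NSObject):

    def applicationDidFinishLaunching_(self, notification):
        # swap the Dock icon, e.g. to signal an error state
        image = NSImage.alloc().initWithContentsOfFile_('error.png')
        NSApplication.sharedApplication().setApplicationIconImage_(image)

    def applicationDockMenu_(self, sender):
        # build the Dock right-click menu in code, no Interface Builder
        menu = NSMenu.alloc().init()
        item = NSMenuItem.alloc().initWithTitle_action_keyEquivalent_(
            'Run now', 'runNow:', '')
        menu.addItem_(item)
        return menu

    def runNow_(self, sender):
        pass  # kick off the periodic job immediately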
