Maybe it's lack of sleep, but I feel silly that I can't get this. I have a plugin, and I see it get loaded, but I can't instantiate it in my main file:
from transformers.FOMIBaseClass import find_plugins, register
find_plugins()
Here's my FOMIBaseClass:
from PluginBase import MountPoint
import sys
import os

class FOMIBaseClass(object):
    __metaclass__ = MountPoint

    def __init__(self):
        pass

    def init_plugins(self):
        pass

def find_plugins():
    plugin_dir = os.path.dirname(os.path.realpath(__file__))
    plugin_files = [x[:-3] for x in os.listdir(plugin_dir) if x.endswith("Transformer.py")]
    sys.path.insert(0, plugin_dir)
    for plugin in plugin_files:
        mod = __import__(plugin)
Here's my MountPoint:
class MountPoint(type):
    def __init__(cls, name, bases, attrs):
        if not hasattr(cls, 'plugins'):
            cls.plugins = []
        else:
            cls.plugins.append(cls)
I see it being loaded:
# /Users/carlos/Desktop/ws_working_folder/python/transformers/SctyDistTransformer.pyc matches /Users/carlos/Desktop/ws_working_folder/python/transformers/SctyDistTransformer.py
import SctyDistTransformer # precompiled from /Users/carlos/Desktop/ws_working_folder/python/transformers/SctyDistTransformer.pyc
But, for the life of me, I can't instantiate the 'SctyDistTransformer' class from the main file. I know I'm missing something trivial. Basically, I want to employ a class-loading plugin scheme.
To dynamically load Python modules from arbitrary folders, use the imp module:
http://docs.python.org/library/imp.html
Specifically, the code should look like:
import imp

mod = imp.load_source("MyModule", "MyModule.py")
clz = getattr(mod, "MyClassName")
Also, if you are building a serious plug-in architecture, I recommend using Python eggs and entry points:
http://wiki.pylonshq.com/display/pylonscookbook/Using+Entry+Points+to+Write+Plugins
https://github.com/miohtama/vvv/blob/master/vvv/main.py#L104
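For illustration, a minimal sketch of the entry-point approach (the group name 'transformers.plugins' and the entry-point names here are hypothetical; each plugin package would declare its own in setup.py):

import pkg_resources

# each plugin's setup.py registers something like:
#   entry_points={'transformers.plugins': ['scty_dist = SctyDistTransformer:SctyDistTransformer']}
for entry_point in pkg_resources.iter_entry_points('transformers.plugins'):
    plugin_cls = entry_point.load()  # imports the module and returns the class
    plugin = plugin_cls()            # instantiate the plugin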
Related
I have a relatively complex ecosystem of applications and libraries that are scheduled to run in my environment.
I am trying to improve my logging, and in particular I'd like to write debug information to a log file. I'd like that log to contain the logger.debug("string") lines from all the imported libraries I wrote, but not from libraries I installed from PyPI.
Example:
import sys
import numpy
from bs4 import BeautifulSoup
import logging
import mylibrary
import myotherlibrary
logger = logging.getLogger(application_name)  # I don't use __name__ in all of them, but I can change this line as necessary
So in this case, when I set the logger level to DEBUG, I'd like to see debug information from the current script, from mylibrary, and from myotherlibrary, but not from bs4, numpy, etc.
Bonus: ideally I would like not to have to hardcode the library names every time, but just have the script "know" them (from a naming convention, maybe?).
If anyone has any ideas it'd be greatly appreciated!
Python doesn't really have a concept of "libraries I wrote" vs. "libraries installed from PyPI" - a library is a library, unfortunately.
However, depending on how your libraries are set up, you may be able to get a realllly hacky custom logger?
By default, Python libraries installed with pip go to a central location - usually something like /usr/local/lib, or %APPDATA% on Windows. In contrast, local libraries usually sit in the same directory as the calling script. We can use this to our advantage!
The following code demonstrates a kinda proof-of-concept - I've left a few methods needing implementing as an exercise ;)
# CustomLogger.py
import __main__
import logging
import os

# create a custom log class, inheriting the current logger class
class CustomLogger(logging.getLoggerClass()):
    custom_lib = False

    def __init__(self, name):
        # initialise the base logger
        super().__init__(name)
        # get the directory we are being run from
        current_dir = os.path.dirname(__main__.__file__)
        permutations = ['/', '.py', '.pyc']
        # check if we are a custom library, or if we are one installed via pip etc.
        self.custom_lib = self.checkExists(current_dir, permutations)
        self.propagate = not self.custom_lib

    def checkExists(self, current_dir, permutations):
        # loop through each permutation and see if a file matching that spec exists
        # currently looks for .py/.pyc files and directories
        for perm in permutations:
            file = os.path.join(current_dir, self.name + perm)
            if os.path.exists(file):
                return True
        return False

    def isEnabledFor(self, level):
        if self.custom_lib:
            return super().isEnabledFor(level)
        return False

    # the hackiest part :)
    # these are two sample overrides that only log if we're a custom
    # library (i.e. one we've written, not installed)
    # there are a few more methods that I've not implemented, a full
    # list is available at https://docs.python.org/3/library/logging.html#logging.Logger
    def debug(self, msg, *args, **kwargs):
        if self.custom_lib:
            return super().debug(msg, *args, **kwargs)

    def info(self, msg, *args, **kwargs):
        if self.custom_lib:
            return super().info(msg, *args, **kwargs)

# most important part - also override the logger class
# this means that any calls to logging.getLogger() will use our new subclass
logging.setLoggerClass(CustomLogger)
You could then use it like this:
import CustomLogger  # needs importing first so the logger class is set up
import sys
import numpy
from bs4 import BeautifulSoup
import logging
import mylibrary
import myotherlibrary

logger = logging.getLogger(application_name)  # returns type CustomLogger
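For comparison, the stock logging machinery can get a similar effect without subclassing, provided your own libraries create their loggers with logging.getLogger(__name__) so logger names match your package names (a minimal sketch; the filename and package names are placeholders):

import logging

# default everything (root logger and third-party libraries) to WARNING
logging.basicConfig(level=logging.WARNING, filename="debug.log")

# opt your own packages (and the main script) into DEBUG;
# child loggers such as "mylibrary.submodule" inherit these levels
for name in ("mylibrary", "myotherlibrary", "__main__"):
    logging.getLogger(name).setLevel(logging.DEBUG)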
I have config data that should be loaded before other code (because that other code uses it).
So for now, the only way I see to do this is to call the function at the top, before the rest of the imports:
from Init.Loaders.InitPreLoader import InitPreLoader
# this is my config loader
InitPreLoader.load()

from World.WorldManager import WorldManager
from Init.Loaders.InitLoader import InitLoader
from Init.Registry.InitRegistry import InitRegistry
from Utils.Debug import Logger
# ...

if __name__ == '__main__':
    # ...
    InitLoader.load()
Is it possible to do this in a more elegant way without violating PEP 8?
P.S. If I need to share more code, please let me know.
UPD: All my classes are declared in separate files.
This is the PreLoader:
from Typings.Abstract.AbstractLoader import AbstractLoader
from Init.Registry.InitRegistry import InitRegistry
from Config.Init.configs import main_config

class InitPreLoader(AbstractLoader):
    @staticmethod
    def load(**kwargs):
        InitRegistry.main_config = main_config
This is the Registry (where I store all my initialized data):
from Typings.Abstract.AbstractRegistry import AbstractRegistry

class InitRegistry(AbstractRegistry):
    main_config = None
    login_server = None
    world_server = None
    world_observer = None
    identifier_region_map = None
    region_octree_map = None
The parent of all classes (except AbstractRegistry) is the AbstractBase class (it pulls in a mixin):
from abc import ABC
from Config.Mixins.ConfigurableMixin import ConfigurableMixin

class AbstractBase(ConfigurableMixin, ABC):
    pass
This mixin works with main_config from InitRegistry.
Also, after the PreLoader's load has been called, I load the rest of the data with my InitLoader.load() (see the first code snippet):
from Typings.Abstract.AbstractLoader import AbstractLoader
from Init.Registry.InitRegistry import InitRegistry
from Server.Init.servers import login_server, world_server
from World.Observer.Init.observers import world_observer
from World.Region.Init.regions import identifier_region_map, region_octree_map

class InitLoader(AbstractLoader):
    @staticmethod
    def load(**kwargs):
        InitRegistry.login_server = login_server
        InitRegistry.world_server = world_server
        InitRegistry.world_observer = world_observer
        InitRegistry.identifier_region_map = identifier_region_map
        InitRegistry.region_octree_map = region_octree_map
Well, for now I have found a solution: I moved from Init.Loaders.InitPreLoader import InitPreLoader to a separate file and called InitPreLoader.load() there. But I don't like this solution, because my PyCharm IDE highlights it as an unused import:
import Init.Init.preloader
from World.WorldManager import WorldManager
# ...
Maybe it is possible to improve this solution? Or maybe another (more elegant) solution exists?
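For what it's worth, the side-effect import can at least be made explicit to the IDE with a standard noqa suppression comment, assuming the separate file looks roughly like this (a sketch of the layout described above):

# Init/Init/preloader.py -- exists purely for its import-time side effect
from Init.Loaders.InitPreLoader import InitPreLoader

InitPreLoader.load()

and the main file marks the import as intentional:

import Init.Init.preloader  # noqa: F401 -- imported for its side effect
from World.WorldManager import WorldManager
# ...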
Does the interpreter somehow keep a timestamp of when a module is imported? Or is there an easy way of hooking into the import machinery to do this?
The scenario is a long-running Python process that at various points imports user-provided modules. I would like the process to be able to check "should I restart to load the latest code changes?" by checking the module file's timestamps against the time the module was imported.
Here's a way to automatically have an attribute (named _loadtime in the example code below) added to modules when they're imported. The code is based on Recipe 10.12 titled "Patching Modules on Import" in the book Python Cookbook, by David Beazley and Brian Jones, O'Reilly, 2013, which shows a technique that I adapted to do what you want.
For testing purposes I created this trivial target_module.py file:
print('in target_module')
Here's the example code:
import importlib
import sys
import time

class PostImportFinder:
    def __init__(self):
        self._skip = set()  # To prevent recursion.

    def find_module(self, fullname, path=None):
        if fullname in self._skip:  # Prevent recursion.
            return None
        self._skip.add(fullname)
        return PostImportLoader(self)

class PostImportLoader:
    def __init__(self, finder):
        self._finder = finder

    def load_module(self, fullname):
        importlib.import_module(fullname)
        module = sys.modules[fullname]
        # Add a custom attribute to the module object.
        module._loadtime = time.time()
        self._finder._skip.remove(fullname)
        return module

sys.meta_path.insert(0, PostImportFinder())

if __name__ == '__main__':
    try:
        print('importing target_module')
        import target_module
    except Exception as e:
        print('Import failed:', e)
        raise
    loadtime = time.localtime(target_module._loadtime)
    print('module loadtime: {} ({})'.format(
        target_module._loadtime,
        time.strftime('%Y-%b-%d %H:%M:%S', loadtime)))
Sample output:
importing target_module
in target_module
module loadtime: 1604683023.2491636 (2020-Nov-06 09:17:03)
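With the _loadtime attribute in place, the "should I restart?" check from the question becomes a one-line comparison against the file's modification time (a sketch, assuming the module was loaded from a regular file so __file__ is set):

import os

def is_stale(module):
    # True if the source file changed on disk after the module was imported
    return os.path.getmtime(module.__file__) > module._loadtime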
I don't think there's any way to get around how hacky this is, but how about something like this every time you import? (I don't know exactly how you're importing):
import time
import importlib
from types import ModuleType

# create a dictionary to keep track
# filter globals to exclude things that aren't modules and aren't builtins
MODULE_TIMES = {k: None for k, v in globals().items()
                if not k.startswith("__") and not k.endswith("__") and isinstance(v, ModuleType)}

for module_name in user_module_list:
    MODULE_TIMES[module_name] = time.time()
    # note: import is a statement, so eval() can't run it; use importlib instead
    globals()[module_name] = importlib.import_module(module_name)
And then you can reference this dictionary in a similar way later.
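For instance, a later staleness sweep might look like this (a sketch, assuming the imported modules expose __file__ and skipping entries that were never timestamped):

import os
import sys

for module_name, imported_at in MODULE_TIMES.items():
    if imported_at is None:
        continue  # module predates our bookkeeping
    mod = sys.modules[module_name]
    if os.path.getmtime(mod.__file__) > imported_at:
        print("{0} changed on disk since import".format(module_name))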
Is there any way to make an implicit initializer for modules (not packages)?
Something like:
# file: mymodule.py
def __init__(val):
    global value
    value = val
And when you import it:
# file: mainmodule.py
import mymodule(5)
The import statement uses the builtin __import__ function.
Therefore it's not possible to have a module __init__ function.
You'll have to call it yourself:
import mymodule
mymodule.__init__(5)
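The same effect is usually spelled as an ordinary module-level function; configure here is just an illustrative name, not anything special to Python:

# mymodule.py
value = None

def configure(val):
    global value
    value = val

# mainmodule.py
import mymodule
mymodule.configure(5)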
Questions like this often are not closed as duplicates, so here's a really nice solution from Pass Variable On Import. TL;DR: use a config module, and configure that before importing your module.
[...] A cleaner way to do it which is very useful for multiple configuration
items in your project is to create a separate Configuration module
that is imported by your wrapping code first, and the items set at
runtime, before your functional module imports it. This pattern is
often used in other projects.
myconfig/__init__.py:
PATH_TO_R_SOURCE = '/default/R/source/path'
OTHER_CONFIG_ITEM = 'DEFAULT'
PI = 3.14
mymodule/__init__.py:
import myconfig
import rpy2.robjects as robjects  # assumption: the original snippet comes from an rpy2-based project

PATH_TO_R_SOURCE = myconfig.PATH_TO_R_SOURCE
robjects.r.source(PATH_TO_R_SOURCE, chdir=True)  ## this takes time

class SomeClass:
    def __init__(self, aCurve):
        self._curve = aCurve

if myconfig.VERSION is not None:
    version = myconfig.VERSION
else:
    version = "UNDEFINED"

two_pi = myconfig.PI * 2
And you can change the behaviour of your module at runtime from the
wrapper:
run.py:
import myconfig
myconfig.PATH_TO_R_SOURCE = 'actual/path/to/R/source'
myconfig.PI = 3.14159
# we can even add a new configuration item that isn't present in the original myconfig:
myconfig.VERSION="1.0"
import mymodule
print "Mymodule.two_pi = %r" % mymodule.two_pi
print "Mymodule.version is %s" % mymodule.version
Output:
> Mymodule.two_pi = 6.28318
> Mymodule.version is 1.0
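This works because Python caches modules in sys.modules, so run.py and mymodule see one and the same myconfig object. A quick way to convince yourself (a sketch):

import sys
import myconfig

assert sys.modules['myconfig'] is myconfig  # a single shared module object
myconfig.PI = 3.14159  # immediately visible to every other importer of myconfig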
I have the following interface:
import abc

class Interface(object):
    __metaclass__ = abc.ABCMeta

    @abc.abstractmethod
    def run(self):
        """Run the process."""
        return
I have a collection of modules that are all in the same directory. Each module contains a single class that implements my interface.
For example, Launch.py:
class Launch(Interface):
    def run(self):
        pass
Let's say I have 20 modules that implement 20 classes. I would like to be able to run a module that checks whether some of the classes do not implement the Interface.
I know I have to use:
- issubclass(Launch, ProcessInterface) to know whether a certain class implements my process interface
- introspection to get the class defined in each module
- module imports at runtime
I am just not sure how to do that.
I can manage to use issubclass inside a module.
But I cannot use issubclass if I am outside the module.
I need to:
- get the list of all modules in the directory
- get the class in each module
- call issubclass on each class
I would need a draft of a function that could do that.
You're probably looking for something like this:
from os import listdir
from sys import path

modpath = "/path/to/modules"

for modname in listdir(modpath):
    if modname.endswith(".py"):
        # look only in the modpath directory when importing
        oldpath, path[:] = path[:], [modpath]
        try:
            module = __import__(modname[:-3])
        except ImportError:
            print "Couldn't import", modname
            continue
        finally:  # always restore the real path
            path[:] = oldpath
        for attr in dir(module):
            cls = getattr(module, attr)
            if isinstance(cls, type) and not issubclass(cls, ProcessInterface):
                pass  # do whatever
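One caveat worth noting (a sketch, not part of the original answer): dir(module) also lists names the module merely imported, including the interface itself, so you may want to restrict the check to classes actually defined in that module:

offending = []
for attr in dir(module):
    cls = getattr(module, attr)
    if (isinstance(cls, type)
            and cls.__module__ == module.__name__  # defined here, not imported
            and not issubclass(cls, ProcessInterface)):
        offending.append('%s.%s' % (module.__name__, attr))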