I am using a custom logging module in my project. If it is not available, I'd like to substitute it with a dummy instead of raising an ImportError.
Here's the code which currently does that:
try:
    import logger
except ImportError:
    print 'Couldn\'t load logger'

    class DummyLogger(object):
        def __init__(self):
            pass

        def log(self, image):
            pass

    logger = DummyLogger()
I don't think it's an elegant solution. It works, sure, but it ain't nice. Is there a better way?
I would put the dummy implementation into a separate module, called dummy_logger, and write:
try:
    import logger
except ImportError:
    import dummy_logger as logger
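For what it's worth, the dummy module itself can stay tiny. Here is a minimal sketch of what dummy_logger.py could look like, assuming the real logger module exposes a log(image) function matching your snippet:

# dummy_logger.py -- a no-op stand-in for the real logger module
def log(image):
    # Same signature as the real logger.log (assumed); intentionally does nothing.
    pass

Because modules and instances are both just objects with attributes, logger.log(image) works unchanged whichever import succeeded.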
I've done this in the past with JSON parsers:
try:
    import ujson as json  # very fast but might not be available in some cases
except ImportError:
    import json
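Since ujson mirrors the standard json module's basic loads/dumps API, the code after the try/except never needs to know which backend actually loaded:

data = json.loads('{"user": "alice", "active": true}')
print(json.dumps(data))  # works identically with ujson or the stdlib json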
You can make it more concise quite easily:
try:
    import logger
except ImportError:
    print 'Couldn\'t load logger'

    class logger(object):
        @classmethod
        def log(cls, image):
            pass
Note that, even in your current version, the empty __init__ should be removed -- it adds no value.
I've been trying to figure out how best to set this up, cutting it down as much as I can. I have 4 Python files: core.py (main), logger_controller.py, config_controller.py, and a 4th as a module or singleton; we'll just call it tool.py.
The way I have it set up, my logging module has an init function that sets up Python's built-in logging with the necessary levels, formatter, directory location, etc. I call this init function in main.
import logging
import logger_controller

def main():
    logger_controller.init_log()
    logger = logging.getLogger(__name__)

if __name__ == "__main__":
    main()
config_controller uses configparser and is mainly a singleton acting as a controller for my config.
import configparser
import logging

logger = logging.getLogger(__name__)

class ConfigController(object):
    def __init__(self, *file_names):
        self.config_parser = configparser.ConfigParser()
        found_files = self.config_parser.read(file_names)
        if not found_files:
            raise ValueError("No config file found.")
        self._validate()

    def _validate(self):
        ...

    def read_config(self, section, field):
        try:
            data = self.config_parser.get(section, field)
        except (configparser.NoSectionError, configparser.NoOptionError) as e:
            logger.error(e)
            data = None
        return data

config = ConfigController("config.ini")
And then my problem is trying to create the 4th file and making sure both my logger and config parser are up and running before it. I also want this 4th one to be a singleton, so it follows a similar format to config_controller.
So tool.py uses config_controller to pull anything it needs from the config file. It also has some error checking for when config_controller's read_config returns None, since that isn't validated in _validate. I did this because I wanted my logging to have a general layer of error checking and a more specific layer: _validate just checks that the required fields and sections are in the config file, and wherever a field is actually read handles the extra error checking.
So my main problem is this:
How do I arrange it so that my logger and configparser are both initialized and available before anything else? I'm very much willing to rework all of this, but I'd like to keep the functionality of it all.
One attempt I tried that works, but seems very messy, is making my logger_controller a singleton that just returns Python's built-in logging module.
import logging
import os

class MyLogger(object):
    def __new__(cls, *args, **kwargs):
        init_log()
        return logging

def init_log():
    ...

mylogger = MyLogger()
Then in core.py
from logger_controller import mylogger
logger = mylogger.getLogger(__name__)
I feel like there should be a better way to do the above, but I'm honestly not sure how.
A few ideas:
Would I be able to extend the logging class instead of just using that init_log function?
Maybe there's a way I can make all 3 individual modules such that they each initialize in a correct order? My attempts here didn't quite work as I also have some internal data that I wouldn't want exposed to classes using the module, just the functionality.
I'd like to have all 3 (logging, config parsing, and the tool) available anywhere I import them.
How I have it set up now "works", but if I were to import tool.py anywhere in core.py and an error occurs that I need to catch, my logger won't be able to log it, as the tool loads before the init of my logger.
Does the interpreter somehow keep a timestamp of when a module is imported? Or is there an easy way of hooking into the import machinery to do this?
The scenario is a long-running Python process that at various points imports user-provided modules. I would like the process to be able to check "should I restart to load the latest code changes?" by checking the module file's timestamps against the time the module was imported.
Here's a way to automatically have an attribute (named _loadtime in the example code below) added to modules when they're imported. The code is based on Recipe 10.12 titled "Patching Modules on Import" in the book Python Cookbook, by David Beazley and Brian Jones, O'Reilly, 2013, which shows a technique that I adapted to do what you want.
For testing purposes I created this trivial target_module.py file:
print('in target_module')
Here's the example code:
import importlib
import sys
import time

class PostImportFinder:
    def __init__(self):
        self._skip = set()  # To prevent recursion.

    def find_module(self, fullname, path=None):
        if fullname in self._skip:  # Prevent recursion.
            return None
        self._skip.add(fullname)
        return PostImportLoader(self)

class PostImportLoader:
    def __init__(self, finder):
        self._finder = finder

    def load_module(self, fullname):
        importlib.import_module(fullname)
        module = sys.modules[fullname]
        # Add a custom attribute to the module object.
        module._loadtime = time.time()
        self._finder._skip.remove(fullname)
        return module

sys.meta_path.insert(0, PostImportFinder())

if __name__ == '__main__':
    try:
        print('importing target_module')
        import target_module
    except Exception as e:
        print('Import failed:', e)
        raise

    loadtime = time.localtime(target_module._loadtime)
    print('module loadtime: {} ({})'.format(
        target_module._loadtime,
        time.strftime('%Y-%b-%d %H:%M:%S', loadtime)))
Sample output:
importing target_module
in target_module
module loadtime: 1604683023.2491636 (2020-Nov-06 09:17:03)
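With _loadtime in place, the "should I restart?" check from the question reduces to comparing it against the source file's modification time. A small sketch (it assumes the module was loaded from a file and therefore has a __file__ attribute):

import os

def is_stale(module):
    # True if the module's source file changed after the module was imported.
    return os.path.getmtime(module.__file__) > module._loadtime

if is_stale(target_module):
    print('target_module changed on disk since import; time to restart')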
I don't think there's any way to get around how hacky this is, but how about something like this every time you import? (I don't know exactly how you're importing):
import time
import importlib
from types import ModuleType

# Create a dictionary to keep track of import times.
# Filter globals to exclude things that aren't modules and aren't builtins.
MODULE_TIMES = {k: None for k, v in globals().items()
                if not k.startswith("__") and not k.endswith("__")
                and type(v) == ModuleType}

# user_module_list is the list of user-provided module names from the question.
for module_name in user_module_list:
    MODULE_TIMES[module_name] = time.time()
    # eval() can't execute an import statement; importlib does the same job
    # and binds the module under its name, like "import module_name" would.
    globals()[module_name] = importlib.import_module(module_name)
And then you can reference this dictionary in a similar way later.
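For instance, a later staleness sweep could look like this (a sketch; it assumes each tracked module is still present in sys.modules and was loaded from a file):

import os
import sys

for module_name, import_time in MODULE_TIMES.items():
    if import_time is None:
        continue  # a pre-existing module we never timed
    module = sys.modules[module_name]
    if os.path.getmtime(module.__file__) > import_time:
        print('{0} changed on disk since import'.format(module_name))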
My Python package depends on an external library for a few of its functions. This is a non-Python package and can be difficult to install, so I'd like users to still be able to use my package, but have it fail when using any functions that depend on this non-Python package.
What is the standard practice for this? I could import the non-Python package only inside the methods that use it, but I really hate doing that.
My current setup:
myInterface.py
myPackage/
    classA.py
    classB.py
The interface script myInterface.py imports classA and classB, and classB imports the non-Python package. If the import fails, I print a warning. If myMethod is called and the package isn't installed, there will be some error downstream, but I do not catch it anywhere, nor do I warn the user.
classB is imported every time the interface script is called, so I can't have anything fail there, which is why I included the pass. Like I said above, I could import inside the method and have it fail there, but I really like keeping all of my imports in one place.
From classB.py
try:
    import someWeirdPackage
except ImportError:
    print("Cannot import someWeirdPackage")
    pass

class ClassB():
    ...

    def myMethod(self):
        swp = someWeirdPackage()
        ...
If you are only importing one external library, I would go for something along these lines:
try:
    import weirdModule
    available = True
except ImportError:
    available = False

def func_requiring_weirdmodule():
    if not available:
        raise ImportError('weirdModule not available')
    ...
The conditional and error checking are only needed if you want to give more descriptive errors. If not, you can omit them and let Python throw the corresponding error when trying to call the non-imported module, as you do in your current setup.
If multiple functions use weirdModule, you can wrap the check into a function:
def require_weird_module():
    if not available:
        raise ImportError('weirdModule not available')

def f1():
    require_weird_module()
    ...

def f2():
    require_weird_module()
    ...
On the other hand, if you have multiple libraries to be imported by different functions, you can load them dynamically. Although it doesn't look pretty, Python caches them and there is nothing wrong with it. I would use importlib:
import importlib

def func_requiring_weirdmodule():
    weirdModule = importlib.import_module('weirdModule')
Again, if several of your functions import complicated external modules, you can wrap the loading into:
def import_external(name):
    return importlib.import_module(name)

def f1():
    weird1 = import_external('weirdModule1')

def f2():
    weird2 = import_external('weirdModule2')
And last, you could create a handler to prevent importing the same module twice, something along the lines of:
class Importer(object):
    __loaded__ = {}

    @staticmethod
    def import_external(name):
        if name in Importer.__loaded__:
            return Importer.__loaded__[name]
        mod = importlib.import_module(name)
        Importer.__loaded__[name] = mod
        return mod

def f1():
    weird = Importer.import_external('weird1')

def f2():
    weird = Importer.import_external('weird1')
Although I'm pretty sure that importlib does caching behind the scenes, so you don't really need the manual caching.
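That caching is easy to verify: importlib.import_module consults sys.modules first, so repeated calls hand back the very same module object:

import importlib
import sys

a = importlib.import_module('json')
b = importlib.import_module('json')
assert a is b                    # same cached module object
assert sys.modules['json'] is a  # the cache importlib consults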
In short, although it does look ugly, there is nothing wrong with importing modules dynamically in Python. In fact, a lot of libraries rely on this. On the other hand, if it is just for a special case of 3 methods accessing 1 external function, use your approach, or my first one if you want to add custom exception handling.
I'm not really sure that there's any best practice in this situation, but I would redefine the function if it's not supported:
def warn_import():
    print("Cannot import someWeirdPackage")

try:
    import someWeirdPackage
    external_func = someWeirdPackage
except ImportError:
    external_func = warn_import

class ClassB():
    def myMethod(self):
        swp = external_func()

b = ClassB()
b.myMethod()
You can create two separate classes for the two cases. The first will be used when the package exists; the second will be used when the package does not exist.
class ClassB1():
    def myMethod(self):
        print("someWeirdPackage exists")
        # do something

class ClassB2(ClassB1):
    def myMethod(self):
        print("someWeirdPackage does not exist")
        # do something or raise an exception

try:
    import someWeirdPackage

    class ClassB(ClassB1):
        pass
except ImportError:
    class ClassB(ClassB2):
        pass
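Either way, the rest of the code just instantiates ClassB as usual and gets whichever behavior matches the environment:

b = ClassB()
b.myMethod()  # prints which of the two cases applies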
You can also use the approach given below to overcome the problem that you're facing.
class UnAvailableName(object):
    def __init__(self, name):
        self.target = name

    def __getattr__(self, attr):
        raise ImportError("{} is not available.".format(attr))

try:
    import someWeirdPackage
except ImportError:
    print("Cannot import someWeirdPackage")
    someWeirdPackage = UnAvailableName("someWeirdPackage")

class ClassB():
    def myMethod(self):
        swp = someWeirdPackage.hello()

a = ClassB()
a.myMethod()
I have a module that imports fine (I print it at the top of the module that uses it):
from authorize import cim
print cim
Which produces:
<module 'authorize.cim' from '.../dist-packages/authorize/cim.pyc'>
However, later in a method call, it has mysteriously turned into None:
class MyClass(object):
    def download(self):
        print cim
which, when run, shows that cim is None. The module isn't ever directly assigned to None anywhere in this module.
Any ideas how this can happen?
As you comment yourself, it is likely that some code is assigning None to the "cim" name in your module itself. One way to check for this is to make your large module "read only" for other modules -- I think Python allows for this --
(20 min. of hacking later) --
Here -- just put this snippet in a "protect_module.py" file, import it, and call
ProtectedModule() at the end of the module in which the name "cim" is vanishing -
it should give you the culprit:
"""
Protects a Module against naive monkey patching -
may be usefull for debugging large projects where global
variables change without notice.
Just call the "ProtectedModule" class, with no parameters from the end of
the module definition you want to protect, and subsequent assignments to it
should fail.
"""
from types import ModuleType
from inspect import currentframe, getmodule
import sys
class ProtectedModule(ModuleType):
def __init__(self, module=None):
if module is None:
module = getmodule(currentframe(1))
ModuleType.__init__(self, module.__name__, module.__doc__)
self.__dict__.update(module.__dict__)
sys.modules[self.__name__] = self
def __setattr__(self, attr, value):
frame = currentframe(1)
raise ValueError("Attempt to monkey patch module %s from %s, line %d" %
(self.__name__, frame.f_code.co_filename, frame.f_lineno))
if __name__ == "__main__":
from xml.etree import ElementTree as ET
ET = ProtectedModule(ET)
print dir(ET)
ET.bla = 10
print ET.bla
In my case, this was related to threading quirks: https://docs.python.org/2/library/threading.html#importing-in-threaded-code
Trying to understand and learn how to write packages... testing with something I've always used, logging...
Can you please help me understand why the log variable is not working... and why no logging is showing on the screen?
Thanks!
main.py:
#!/opt/local/bin/python
import sys
sys.path.append('CLUSTER')
import clusterlogging.differentlogging
clusterlogging.differentlogging.consolelogging()
log.debug("Successfully logged in")
differentlogging.py
#!/opt/local/bin/python

def consolelogging():
    import logging

    class NullHandler(logging.Handler):
        def emit(self, record):
            pass

    print "Console Logging loaded"
    DEFAULTLOGLEVEL = logging.INFO
    log = logging.getLogger(__name__)
    log.addHandler(NullHandler())
    log.debug("Successfully logged in")

def mysqllogging():
    print "mysql logging module here"

def sysloglogging():
    print "rsyslog logging module here"
output
Console Logging loaded
Traceback (most recent call last):
File "./svnprod.py", line 10, in <module>
log.debug("Successfully logged in")
NameError: name 'log' is not defined
log is a global variable in the differentlogging module. Thus you can access it as
clusterlogging.differentlogging.log.
You could also do something like from clusterlogging.differentlogging import log and then access it as just log.
Edit: actually, on reviewing your code again I don't know what to make of it. Could you please fix up your code indentation so that it makes sense? Are you defining log inside the consolelogging function? If so, you'll need to either make it global with global log or return it from the function and assign it to a variable log on the line where you call the function.
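For reference, here is a minimal sketch of the global variant just described (the return-the-logger variant is shown in the next answer; the handler setup here is illustrative):

import logging

log = None  # module-level name that importers will look up

def consolelogging():
    global log  # rebind the module-level name instead of creating a local
    log = logging.getLogger(__name__)
    log.addHandler(logging.StreamHandler())
    log.setLevel(logging.DEBUG)

After consolelogging() has been called, clusterlogging.differentlogging.log refers to a usable logger.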
This will return the logger object, and you will be able to use the logging functions associated with it.
main.py:
#!/usr/bin/env python
import sys
sys.path.append('CLUSTER')
import clusterlogging.differentlogging
log = clusterlogging.differentlogging.ttylogging()
log.debug("Logging module loaded")
log.info ("It worked")
differentlogging.py:
#!/usr/bin/env python

def ttylogging():
    print "Console Logging loaded"
    import sys
    import logging

    class NullHandler(logging.Handler):
        def emit(self, record):
            pass

    DEFAULTLOGLEVEL = logging.INFO
    log = logging.getLogger(__name__)
    log.addHandler(NullHandler())
    log.setLevel(DEFAULTLOGLEVEL)
    logStreamHandler = logging.StreamHandler(sys.stdout)
    logStreamHandler.setFormatter(logging.Formatter(
        "%(asctime)s %(levelname)5s %(name)s %(lineno)d: %(message)s"))
    log.addHandler(logStreamHandler)
    return log

def mysqllogging():
    print "mysql logging module here"

def sysloglogging():
    print "rsyslog logging module here"
Your main.py doesn't do anything to define the name log in the global namespace. Importing a module can define names in the namespace of that module, but can't put anything in the global namespace.
In your main.py you should add this statement:
from clusterlogging.differentlogging import log
By the way, since that is such a long module name, I would use import as:
import clusterlogging.differentlogging as difflogging
log = difflogging.log
EDIT: I originally recommended this but it won't work:
from difflogging import log # doesn't work
You might even want to use a really short name like dl:
import clusterlogging.differentlogging as dl
dl.log.debug('whatever')
Since dl is really short, maybe you don't need to get log bound in the global namespace.
Also, you could get every name from a module by using import * but this is not recommended.
from clusterlogging.differentlogging import * # not recommended
You usually don't want to clutter the global namespace with all the stuff defined in a module. Import just what you need. This is tidier and helps document what you are actually using.
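For example, pulling in just the one factory function from the differentlogging module above keeps the dependency obvious (a sketch reusing ttylogging from the previous answer):

from clusterlogging.differentlogging import ttylogging

log = ttylogging()
log.info("explicit imports document what this file actually uses")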