I have a project that uses third-party packages (which in turn might be using other packages). I want to apply a common xxxHandler to all packages' logging. Since all packages define logger = logging.getLogger(__name__), this is the solution I have implemented:
import logging
from logging.handlers import RotatingFileHandler

handler = RotatingFileHandler(filename='app.log')
for name in logging.root.manager.loggerDict:
    logging.getLogger(name).addHandler(handler)
This works, but it creates duplicate entries per log record (probably because of logger propagation). So I refined it by providing an explicit list of logger names that should get the handler:
def add_handler_to_loggers(loggers=None):
    if loggers:
        for name in loggers:
            logging.getLogger(name).addHandler(handler)

add_handler_to_loggers(['requests', 'sleekxmpp'])
And I am happy with this. However, I am not sure whether I am missing important logs from some other package.
Questions
Is this a good approach for my problem?
Is there a better approach?
Thank you!
Related
I am new to Python and just trying to learn and find better ways to write code. I want to create a custom class for logging that uses the logging package internally. I want the functions in this class to be reusable, so I can do my logging from other scripts rather than writing custom code in each and every script. Is there a good link you can share? Is this the right way to handle logging? I just want to avoid writing the same code in every script if I can reuse it from one module.
I would highly appreciate any reply.
You can build a custom class that utilizes the built-in Python logging library. There isn't really any one right way to handle logging: the library gives you 5 standard levels indicating the severity of events (DEBUG, INFO, WARNING, ERROR, and CRITICAL), and how you use these levels is application specific. Here's another good explanation of the package.
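For a quick feel for the levels, here is a minimal sketch (the logger name and messages are just illustrative):

import logging

logging.basicConfig(level=logging.DEBUG)
log = logging.getLogger(__name__)

# One call per standard severity level, least to most severe
log.debug('detailed diagnostic output')
log.info('confirmation that things are working as expected')
log.warning('something unexpected happened, but still working')
log.error('a more serious problem; some function failed')
log.critical('a serious error; the program may be unable to continue')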
It's indeed a good idea to keep all your logging configuration (formatters, level, handlers) in one place.
- create a class wrapping a custom logger with your configuration
- expose methods for logging with different levels
- import this class wherever you want
- create an instance of this class to log where you want
To make sure all your custom logging objects have the same config, you should make the logging class own the configuration, as in the sketch below.
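A minimal sketch of that pattern (the class name, log file name, and format string are my own illustration, not a prescribed API):

import logging

class AppLogger(object):
    """Illustrative wrapper that owns the shared logging configuration."""

    _handler = None  # shared across all instances, created once

    def __init__(self, name):
        if AppLogger._handler is None:
            AppLogger._handler = logging.FileHandler('app.log')
            AppLogger._handler.setFormatter(logging.Formatter(
                '%(asctime)s %(name)s %(levelname)s %(message)s'))
        self._logger = logging.getLogger(name)
        self._logger.setLevel(logging.DEBUG)
        if AppLogger._handler not in self._logger.handlers:
            self._logger.addHandler(AppLogger._handler)

    # Expose one method per level
    def debug(self, msg, *args):
        self._logger.debug(msg, *args)

    def info(self, msg, *args):
        self._logger.info(msg, *args)

    def warning(self, msg, *args):
        self._logger.warning(msg, *args)

    def error(self, msg, *args):
        self._logger.error(msg, *args)

    def critical(self, msg, *args):
        self._logger.critical(msg, *args)

# In any other script:
#   from app_logger import AppLogger
#   log = AppLogger(__name__)
#   log.info('started')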
I don't think there are any links I can share for the whole thing, but you can find links for the individual details I mentioned easily enough.
The Python logging library allows you to log based on different levels:
https://docs.python.org/3/howto/logging.html#logging-levels
But I would like to use it to log based on custom tags, for example "show_intermediate_results", "display_waypoints_in_the_code", or "report_time_for_each_module", and so on.
Those tags cannot be placed on a severity ladder; during development I would sometimes want to see them and sometimes not, depending on what I am developing or debugging at the moment.
So the question is whether I can use the logging library to do that.
By the way, I DO want to use the library rather than write something myself, because I want it to be thread safe.
As per the documentation, you can use logging.Filter objects with Logger and Handler instances "for more sophisticated filtering than is provided by levels".
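For example, here is a minimal sketch of a tag-based filter; the TagFilter class and the tag attribute (attached via the extra keyword) are my own invention, not part of the standard library:

import logging

class TagFilter(logging.Filter):
    """Let a record through unless it carries a tag that is not enabled."""

    def __init__(self, enabled_tags):
        logging.Filter.__init__(self)
        self.enabled_tags = set(enabled_tags)

    def filter(self, record):
        tag = getattr(record, 'tag', None)  # set via the 'extra' dict below
        return tag is None or tag in self.enabled_tags

logger = logging.getLogger('tagged')
logger.setLevel(logging.DEBUG)
handler = logging.StreamHandler()
handler.addFilter(TagFilter({'show_intermediate_results'}))
logger.addHandler(handler)

# 'extra' attaches arbitrary attributes to the LogRecord
logger.info('step 3 output', extra={'tag': 'show_intermediate_results'})  # shown
logger.info('reached waypoint A', extra={'tag': 'display_waypoints_in_the_code'})  # filtered out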
I want logging messages from my program, but not from the libraries it uses. I can disable / change the logging level of individual libraries like this:
logging.getLogger('alibrary').setLevel(logging.ERROR)
The problem is, my program uses lots and lots of libraries, which themselves use lots more. So doing this individually for every library is a big chore. Is there a better way to do this?
You could set the root logger's level to e.g. ERROR and then selectively set more verbose levels for your own code:
logging.getLogger().setLevel(logging.ERROR)
then, assuming the libraries you use are well behaved with regard to logging, the effective level of their loggers will be ERROR, just as if you had set each one individually.
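A minimal sketch of that setup ('myapp' stands in for your own top-level package name):

import logging

logging.basicConfig()                        # default handler on the root logger
logging.getLogger().setLevel(logging.ERROR)  # libraries are silenced by default

# Re-enable verbose output only for your own code
logging.getLogger('myapp').setLevel(logging.DEBUG)

logging.getLogger('myapp.submodule').debug('visible: inherits DEBUG from myapp')
logging.getLogger('somelibrary').debug('suppressed: falls back to root ERROR')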
I want to extend python(2.7)'s logging module (and specifically the logger class).
Currently I have:
class MyLogger(logging.Logger):
    def __init__(self, name):
        logging.Logger.__init__(self, name)
Is this the right way to initialize MyLogger?
Will I be able to use Logger.manager (logging.Logger.manager)?
Is it possible to "get" a logger? (I only know logging.getLogger(name), which is not available since I'm extending the Logger itself; and I know static methods aren't as popular in Python as they are in, say, Java.)
Where can I learn more about extending classes? The documentation on python.org is very poor and did not help me.
My goal is to be able to start a logger with standard configuration and handlers based on the calling module's name, and to set all the system's loggers to the same level with a short, readable call.
Seems like my approach was wrong altogether.
I prefer the way stated on python.org: using configuration files cleans up the code and makes it easy to propagate changes.
A configuration file is loaded like so:

import logging.config

# view example on python.org site (logging for multiple modules)
logging.config.fileConfig('logging.conf')
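For reference, a minimal logging.conf along those lines might look like this (the handler, formatter, and file names are illustrative):

[loggers]
keys=root

[handlers]
keys=fileHandler

[formatters]
keys=simpleFormatter

[logger_root]
level=DEBUG
handlers=fileHandler

[handler_fileHandler]
class=FileHandler
level=DEBUG
formatter=simpleFormatter
args=('app.log', 'a')

[formatter_simpleFormatter]
format=%(asctime)s %(name)s %(levelname)s %(message)s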
As for batch abilities: since we still have logging.Logger.manager.loggerDict and logging.getLogger, batch operations can use simple loops to apply changes (like setting every logger to a single level) throughout the system, as sketched below.
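For instance, a simple sketch of such a batch operation:

import logging

def set_all_loggers_level(level):
    # loggerDict holds every logger name seen so far;
    # getLogger(name) resolves placeholders into real Logger objects
    for name in logging.Logger.manager.loggerDict:
        logging.getLogger(name).setLevel(level)

set_all_loggers_level(logging.WARNING)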
I am developing a project that requires a single configuration file whose data is used by multiple modules.
My question is: what is the common approach to that? Should I read the configuration file from each of my modules (files), or is there another way to do it?
I was thinking of having a module named config.py that reads the configuration files; whenever I need a configuration value I do import config and then something like config.data['teamsdir'] to get the 'teamsdir' property (for example).
response: opted for the config.py approach then, since it is modular, flexible, and simple.
I can just put the configuration data directly in the file; later, if I want to read from a JSON file, an XML file, or multiple sources, I just change config.py and make sure the data is accessed the same way.
accepted answer: chose Alex Martelli's response because it was the most complete. Voted up the other answers because they were good and useful too.
I like the approach of a single config.py module whose body (when first imported) parses one or more configuration-data files and sets its own "global variables" appropriately -- though I'd favor config.teamdata over the roundabout config.data['teamdata'] approach.
This assumes configuration settings are read-only once loaded (except maybe in unit-testing scenarios, where the test-code will be doing its own artificial setting of config variables to properly exercise the code-under-test) -- it basically exploits the nature of a module as the simplest Pythonic form of "singleton" (when you don't need subclassing or other features supported only by classes and not by modules, of course).
"One or more" configuration files (e.g. first one somewhere in /etc for general default settings, then one under /usr/local for site-specific overrides thereof, then again possibly one in the user's home directory for user specific settings) is a common and useful pattern.
The approach you describe is OK. If you want to add support for user config files, you can use execfile(os.path.expanduser("~/.yourprogram/config.py")) (note that execfile is Python 2 only; on Python 3, use exec(open(path).read()) instead).
One nice approach is to parse the config file(s) into a Python object when the application starts and pass this object around to all classes and modules requiring access to the configuration.
This can save a lot of repeated config parsing.
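A minimal sketch of that idea, assuming a JSON settings file (the Config class and file name are illustrative):

import json

class Config(object):
    """Parse once at startup; hand the instance to whatever needs it."""

    def __init__(self, path):
        with open(path) as f:
            self._data = json.load(f)

    def __getattr__(self, name):
        # only called when normal attribute lookup fails
        data = self.__dict__.get('_data', {})
        if name in data:
            return data[name]
        raise AttributeError(name)

# at startup:
#   cfg = Config('settings.json')
#   worker = Worker(cfg)   # pass the same object everywhere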
If you want to share your config across different machines, you could perhaps put it on a web server and do import like this:
import urllib2

# Fetch the shared config module over HTTP and execute it in the current namespace
confstr = urllib2.urlopen("http://yourhost/config.py").read()
exec(confstr)
And if you want to share it across different languages, perhaps you can use JSON to encode and parse the configuration:
import urllib2
import simplejson

# Fetch a language-neutral JSON document and parse it into a dict
confstr = urllib2.urlopen("http://yourhost/config.json").read()
config = simplejson.loads(confstr)