Common logging module in Python

I am new to Python and just trying to learn and find better ways to write code. I want to create a custom class for logging that uses the logging package internally. I want the functions in this class to be reusable so I can do my logging from other scripts rather than writing custom code in each and every script. Is there a good link you can share? Is this the right way to handle logging? I just want to avoid writing the same code in every script if I can reuse it from one module.
I would highly appreciate any reply.

You can build a custom class that uses the built-in Python logging library. There isn't really any single right way to handle logging; the library gives you five standard levels indicating the severity of events (DEBUG, INFO, WARNING, ERROR, and CRITICAL), and how you use these levels is application-specific. The official logging HOWTO is another good explanation of the package.
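For reference, a minimal sketch of the five standard levels with a basic configuration (the format string here is just an illustration, not something from the question):

import logging

# Configure the root logger once; records below the chosen level are dropped.
logging.basicConfig(level=logging.DEBUG,
                    format='%(asctime)s %(levelname)s %(name)s: %(message)s')

log = logging.getLogger(__name__)
log.debug('diagnostic detail')
log.info('normal operation')
log.warning('something unexpected, but still running')
log.error('an operation failed')
log.critical('the program may not be able to continue')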

It's indeed a good idea to keep all your logging configuration (formatters, level, handlers) in one place.
- create a class wrapping a custom logger with your configuration
- expose methods for logging with different levels
- import this class wherever you want
- create an instance of this class to log where you want
To make sure all your custom logging objects have the same config, the logging class should own the configuration (a rough sketch follows below).
I don't think there are any links I can share for the whole thing, but you can easily find links for the individual details mentioned above.
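As a rough illustration of those points (the module and class names here are made up, not from the question), a minimal sketch of a logging class that owns its configuration:

# mylogging.py - one place that owns the formatter, level and handler
import logging

class AppLogger(object):
    _FORMAT = '%(asctime)s %(levelname)s %(name)s: %(message)s'

    def __init__(self, name, level=logging.INFO):
        self._logger = logging.getLogger(name)
        if not self._logger.handlers:  # avoid adding duplicate handlers on repeated use
            handler = logging.StreamHandler()
            handler.setFormatter(logging.Formatter(self._FORMAT))
            self._logger.addHandler(handler)
        self._logger.setLevel(level)

    # expose the usual level methods
    def debug(self, msg, *args):    self._logger.debug(msg, *args)
    def info(self, msg, *args):     self._logger.info(msg, *args)
    def warning(self, msg, *args):  self._logger.warning(msg, *args)
    def error(self, msg, *args):    self._logger.error(msg, *args)
    def critical(self, msg, *args): self._logger.critical(msg, *args)

Any other script would then just do: from mylogging import AppLogger; log = AppLogger(__name__); log.info('started').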

Related

Share logger in python

I have a project that uses 3rd-party packages (which in turn might be using other packages). I want to apply a common xxxHandler to all packages' logging. Since all packages define logger = logging.getLogger(__name__), this is the solution I have implemented:
import logging
from logging.handlers import RotatingFileHandler

handler = RotatingFileHandler(filename='app.log')
for name in logging.root.manager.loggerDict:
    logging.getLogger(name).addHandler(handler)
This works, but it creates multiple entries per log (probably because of logging propagation). So, I enhanced this a bit by providing a list of names that should get the handler:
def add_handler_to_loggers(loggers=None):
    if loggers:
        for name in loggers:
            logging.getLogger(name).addHandler(handler)

add_handler_to_loggers(['requests', 'sleekxmpp'])
And I am happy with this. However, I am not sure whether I will be missing important logs from some other package.
Questions
Is this a good approach for my problem?
Is there a better approach?
Thank you!
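For what it's worth, the duplicates come from propagation: a record emitted by a named logger also travels up to its ancestors' handlers, so attaching the same handler at several levels writes each record more than once. A minimal sketch (same RotatingFileHandler assumption as above) that attaches the handler once, to the root logger, and lets propagation deliver everything to it:

import logging
from logging.handlers import RotatingFileHandler

handler = RotatingFileHandler(filename='app.log')
handler.setFormatter(logging.Formatter('%(asctime)s %(name)s %(levelname)s: %(message)s'))

# Attach once at the root; records from 'requests', 'sleekxmpp' and any other
# package logger propagate up to it, so no per-package loop is needed.
root = logging.getLogger()
root.addHandler(handler)
root.setLevel(logging.INFO)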

OOP: Using conditional statement while initializing a class

This question geared toward OOP best practices.
Background:
I've created a set of scripts that are either automatically triggered by cron jobs or constantly running in the background to collect data in real time. In the past, I've used Python's smtplib to send myself notifications when errors occur or a job completes successfully. Recently, I migrated these programs to Google Cloud Platform, which by default blocks popular SMTP ports. To get around this I used Linux's mail command to continue sending myself the reports.
Originally, my hacky solution was to have two separate modules for sending alerts that were initiated based on an argument I passed to the main script.
Ex:
$ python mycode.py my_arg

if sys.argv[1] == 'my_arg':
    mailer = Class1()
else:
    mailer = Class2()
I want to improve upon this and create a module that automatically handles this without the added code. The question I have is whether it is "proper" to include a conditional statement while initializing the class to handle the situation.
Ex:
import sys

class Alert(object):
    def __init__(self, platform=sys.platform, other_args=None):
        if platform == "linux":
            # Google Cloud Platform: set up Class1 variables and methods
            pass
        else:
            # local copy: set up Class2 variables and methods
            pass
My gut instinct says this is wrong but I'm not sure what the proper approach would be.
I'm mostly interested in answers regarding how to create OO classes/modules that handle environmental dependencies to provide the same service. In my case, a blocked port requires a different set of code altogether.
Edit: After some suggestions here are my favorite readings on this topic.
http://python-3-patterns-idioms-test.readthedocs.io/en/latest/Factory.html
This seems like a wonderful use-case for a factory class, which encapsulates the conditional, and always returns an instance of one of N classes, all of which implement the same interface, so that the rest of your code can use it without caring about the concrete class being used.
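A minimal sketch of that idea, with made-up MailCommandMailer/SmtpMailer stand-ins for Class1 and Class2:

import sys

class SmtpMailer(object):
    def send(self, subject, body):
        print('sending via smtplib: ' + subject)           # placeholder for real smtplib code

class MailCommandMailer(object):
    def send(self, subject, body):
        print('sending via the mail command: ' + subject)  # placeholder for a subprocess call

def make_mailer(platform=sys.platform):
    """Factory: encapsulates the conditional and always returns an object with send()."""
    if platform == 'linux':      # e.g. a Google Cloud VM where SMTP ports are blocked
        return MailCommandMailer()
    return SmtpMailer()

mailer = make_mailer()
mailer.send('job finished', 'all data collected')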
This is one way to do it, but I would rather create the instance dynamically. That way you have only one class instead of selecting between two different classes: the class takes some arguments and returns the appropriate object depending on the arguments provided. There are quite a few examples out there and I'm sure you can adapt them to your use case; try searching for how to create a dynamic class in Python.
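One way to read that suggestion (again with hypothetical class names) is to let a single entry-point class pick the concrete implementation in __new__, so the conditional lives in one place and callers only ever write Alert(...):

import sys

class MailCommandMailer(object):
    def send(self, msg):
        print('mail command: ' + msg)

class SmtpMailer(object):
    def send(self, msg):
        print('smtplib: ' + msg)

class Alert(object):
    """Alert(...) returns whichever mailer fits the current environment."""
    def __new__(cls, platform=sys.platform):
        return MailCommandMailer() if platform == 'linux' else SmtpMailer()

Alert().send('job finished')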

Python logging library - beyond logging levels

The Python logging library lets you log based on different levels:
https://docs.python.org/3/howto/logging.html#logging-levels
But I would like to use it to log based on custom tags, for example "show_intermediate_results" or "display_waypoints_in_the_code" or "report_time_for_each_module" and so on...
Those tags cannot be placed on a severity ladder; during development I would sometimes want to see them and sometimes not, depending on what I am developing/debugging at the moment.
So the question is: can I use the logging library to do that?
By the way, I DO want to use the library rather than write something myself, because I want it to be thread-safe.
As per the documentation, you can use logging.Filter objects with Logger and Handler instances "for more sophisticated filtering than is provided by levels".
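A minimal sketch of a tag-based filter, assuming the tag is passed through the extra argument of each logging call (the attribute name 'tag' is just a convention chosen here, not part of the library):

import logging

class TagFilter(logging.Filter):
    """Pass records with no tag, or whose tag is in the enabled set."""
    def __init__(self, enabled_tags):
        logging.Filter.__init__(self)
        self.enabled_tags = set(enabled_tags)

    def filter(self, record):
        tag = getattr(record, 'tag', None)
        return tag is None or tag in self.enabled_tags

logger = logging.getLogger('app')
logger.setLevel(logging.DEBUG)
handler = logging.StreamHandler()
handler.addFilter(TagFilter(['show_intermediate_results']))
logger.addHandler(handler)

logger.debug('partial sum: %s', 42, extra={'tag': 'show_intermediate_results'})  # shown
logger.debug('module took 3.2s', extra={'tag': 'report_time_for_each_module'})   # filtered out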

Change logging format in a class instance

This is a Python question but a Django-specific solution is acceptable.
For a class I'm writing I would like to prefix log output on a per-instance basis. I do not want to interfere with the logging destinations that were set up. These are the solutions I can think of:
1. create a new logger as a sub-logger of the module, reconfigure the parent handlers with different formatters: mylog.info("foo") => prefixfoo
2. create a wrapper log class with info(), warn() etc. methods, each adding the prefix before calling the wrapped logger: mylog.info("foo")
3. store the prefix in the instance and add it manually: log.info(self.p + "foo")
4. create a prefix-adding function that I manually wrap all log calls with: log.info(p("foo"))
Obviously I prefer solution 1 but I don't know how to do that.
What is the best solution? I'm a newbie Python programmer so I'm probably trying to solve the wrong problem :-)
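For what it's worth, option 2 maps quite naturally onto the standard library's logging.LoggerAdapter, which wraps an existing logger and rewrites each message without touching any handlers. A minimal sketch (the class and attribute names below are made up):

import logging

class PrefixAdapter(logging.LoggerAdapter):
    """Prepend a per-instance prefix to every message of the wrapped logger."""
    def process(self, msg, kwargs):
        return self.extra['prefix'] + msg, kwargs

logging.basicConfig(level=logging.INFO, format='%(message)s')  # demo only; any existing setup works the same
module_logger = logging.getLogger(__name__)

class Worker(object):
    def __init__(self, name):
        # Each instance gets its own prefixed view of the module logger;
        # the handlers and destinations configured elsewhere are untouched.
        self.log = PrefixAdapter(module_logger, {'prefix': '[' + name + '] '})

w = Worker('job-42')
w.log.info('foo')   # -> "[job-42] foo" through whatever handlers are configured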

extending python logging module

I want to extend Python 2.7's logging module (and specifically the Logger class).
Currently I have:
import logging

class MyLogger(logging.Logger):
    def __init__(self, name):
        logging.Logger.__init__(self, name)
Is this the right way to initialize MyLogger?
Will I be able to use Logger.manager (logging.Logger.manager)?
Is it possible to "get" a logger? I only know logging.getLogger(name), which is not available since I'm extending Logger itself, and I know static methods aren't as popular in Python as they are in, say, Java.
Where can I learn more about extending classes? The documentation on python.org is very poor and did not help me.
My goal is to be able to start a logger with standard configurations and handlers based on the caller module's name, and to set the entire system loggers to the same level with a short, readable, call.
Seems like my approach was wrong altogether.
I prefer the way described on python.org: using configuration files cleans up the code and makes it easy to propagate changes.
A configuration file is loaded like so:
import logging.config

# see the example on the python.org site (logging for multiple modules)
logging.config.fileConfig('logging.conf')
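For context, a minimal logging.conf of the kind fileConfig() expects, closely following the example in the Python logging HOWTO (the handler and formatter names are arbitrary):

[loggers]
keys=root

[handlers]
keys=consoleHandler

[formatters]
keys=simpleFormatter

[logger_root]
level=DEBUG
handlers=consoleHandler

[handler_consoleHandler]
class=StreamHandler
level=DEBUG
formatter=simpleFormatter
args=(sys.stdout,)

[formatter_simpleFormatter]
format=%(asctime)s - %(name)s - %(levelname)s - %(message)s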
As for batch operations, since logging.Logger.manager.loggerDict and logging.getLogger are still available, you can use simple loops to apply changes (like setting every logger to a single level) throughout the system.
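A small sketch of such a loop (the WARNING level here is just an example):

import logging

# loggerDict can contain PlaceHolder objects; getLogger(name) resolves each
# name to a real Logger before the level is applied.
for name in logging.Logger.manager.loggerDict:
    logging.getLogger(name).setLevel(logging.WARNING)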
