How to structure the logging in this case (Python)

I was wondering what would be the best way to structure my logs in a particular situation.
I have a series of Python services that use the same Python modules (e.g. com.py) for communicating with the hardware. I have logging implemented in these modules, and I would like it to be associated with the main service that is calling them.
How should I structure the logger logic so that if I have:
main_service_1->module_for_communication
the logging goes to file main_serv_1.log, and
main_service_2->module_for_communication
the logging goes to file main_serv_2.log?
What would be the best practice in this case, without hardcoding anything?
Is there a way to know which file is importing com.py, so that inside com.py I can use this information to adapt the logging to the caller?

In my experience, for a situation like this, the cleanest and easiest to implement strategy is to pass the logger to the code that does the logging.
So, create a logger for each service you want to have log to a different file, and pass that logger in to the code from your communications module. You can use __name__ to get the name of the current module (the actual module name, without the .py extension).
In the example below I also implemented a fallback for the case when no logger is passed in.
com.py
from log import setup_logger

class Communicator(object):
    def __init__(self, logger=None):
        if logger is None:
            logger = setup_logger(__name__)
        self.log = logger

    def send(self, data):
        self.log.info('Sending %s bytes of data', len(data))
svc_foo.py
from com import Communicator
from log import setup_logger

logger = setup_logger(__name__)

def foo():
    c = Communicator(logger)
    c.send('foo')
svc_bar.py
from com import Communicator
from log import setup_logger

logger = setup_logger(__name__)

def bar():
    c = Communicator(logger)
    c.send('bar')
log.py
import logging
from logging import FileHandler

def setup_logger(name):
    logger = logging.getLogger(name)
    handler = FileHandler('%s.log' % name)
    logger.addHandler(handler)
    return logger
main.py
import logging

from svc_bar import bar
from svc_foo import foo

# Add a StreamHandler for the root logger, so we get some console output in
# addition to file logging (for ease of testing). Also set the level of the
# root logger to INFO so our messages don't get filtered.
logging.basicConfig(level=logging.INFO)

foo()
bar()
So, when you execute python main.py, this is what you'll get:
On the console:
INFO:svc_foo:Sending 3 bytes of data
INFO:svc_bar:Sending 3 bytes of data
And svc_foo.log and svc_bar.log will each contain one line:
Sending 3 bytes of data
If a client of the Communicator class uses it without passing in a logger, the log output will end up in com.log (fallback).
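For example, a minimal sketch of that fallback path (assuming the root level was set to INFO as in main.py above; the 'baz' payload is just an illustration):
from com import Communicator

c = Communicator()  # no logger passed in
c.send('baz')       # handled by the module-level fallback logger, ends up in com.log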

I see several options:
Option 1
Use __file__. __file__ is the pathname of the file from which the module was loaded (see the docs). Depending on your structure, you should be able to identify the module by taking the name of the directory containing the file, like so:
If the folder structure is
+- module1
|  +- __init__.py
|  +- main.py
+- module2
   +- __init__.py
   +- main.py
you should be able to obtain the module name with code placed in main.py:
import os

def get_name():
    # name of the directory containing this file, e.g. 'module1'
    module_name = os.path.basename(os.path.dirname(os.path.abspath(__file__)))
    return module_name
This is not exactly DRY, because you need the same code in both main.py files.
Option 2
A bit cleaner is to open two terminal windows and use an environment variable. E.g. you can define MOD_LOG_NAME="main_service_1" in one terminal and MOD_LOG_NAME="main_service_2" in the other. Then, in your Python code, you can use something like:
import os

LOG_PATH_NAME = os.environ['MOD_LOG_NAME']
This follows separation of concerns.
Update (since the question evolved a bit)
Once you've established the distinct name, all you have to do is configure the logger:
import logging

logging.basicConfig(filename=LOG_PATH_NAME, level=logging.DEBUG)
(or use get_name() from Option 1 as the filename) and run the program.
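Putting Option 2 together, a minimal end-to-end sketch (the .log suffix and the service name are illustrative assumptions):
import logging
import os

# assumes the shell set the variable beforehand, e.g.:
#   MOD_LOG_NAME="main_service_1" python main.py
log_name = os.environ['MOD_LOG_NAME']

logging.basicConfig(filename='%s.log' % log_name, level=logging.DEBUG)
logging.getLogger(__name__).debug('logging configured for %s', log_name)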

Related

Python logging for custom namespace misses prefix

I tried to enable logging for my Python library. I want all logs to go into my custom namespace my_logger instead of root, so this is what I tried:
import logging
my_logger = logging.getLogger('my_logger')
my_logger.warning("my hello!")
logging.warning("hello!")
And for some reason my custom logger didn't output its name (my_logger) in front of the message:
my hello!
WARNING:root:hello!
Simply swapping the order of the root and custom logger calls fixed the issue:
import logging
logging.warning("hello!")
my_logger = logging.getLogger('my_logger')
my_logger.warning("my hello!")
Output
WARNING:root:hello!
WARNING:my_logger:my hello!
I never want to use the root logger at all. Is it possible to get the WARNING:my_logger prefix on my output without logging to the root logger first?
It seems the logging module performs its first-time configuration during the first module-level logging call (logging.warning() implicitly calls logging.basicConfig(), which adds a handler to the root logger). You can do that configuration explicitly first, to get it out of the way before using your logger:
import logging
logging.basicConfig() # do the configuration
my_logger = logging.getLogger('my_logger')
my_logger.warning("my hello!")
logging.warning("hello!")
Result:
WARNING:my_logger:my hello!
WARNING:root:hello!
For a library that you expect others to import, this configuration should be done by the code importing the library, not by the library itself.
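As a sketch of that division of labour (the NullHandler shown here is the pattern the standard library documentation recommends for libraries; it is an addition, not part of the original answer):
# mylib.py (hypothetical library module): define a logger, but don't configure logging
import logging

logger = logging.getLogger('my_logger')
logger.addHandler(logging.NullHandler())  # keeps the library quiet if the app never configures logging

def hello():
    logger.warning("my hello!")

# app.py (the importing application): configure logging, then use the library
import logging
import mylib

logging.basicConfig()  # the application decides how log records are handled
mylib.hello()          # WARNING:my_logger:my hello!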

Module or singleton that uses logging and config

I've been trying to figure out how best to set this up, cutting it down as much as I can. I have 4 Python files: core.py (main), logger_controller.py, config_controller.py, and a 4th as a module or singleton; we'll just call it tool.py.
The way I have it set up, the logging controller has an init function that sets up Python's built-in logging with the necessary levels, formatter, directory location, etc. I call this init function in main:
import logging
import logger_controller

def main():
    logger_controller.init_log()
    logger = logging.getLogger(__name__)

if __name__ == "__main__":
    main()
config_controller.py uses configparser and is mainly a singleton acting as a controller for my config:
import configparser
import logging

logger = logging.getLogger(__name__)

class ConfigController(object):
    def __init__(self, *file_names):
        self.config_parser = configparser.ConfigParser()
        found_files = self.config_parser.read(file_names)
        if not found_files:
            raise ValueError("No config file found.")
        self._validate()

    def _validate(self):
        ...

    def read_config(self, section, field):
        try:
            data = self.config_parser.get(section, field)
        except (configparser.NoSectionError, configparser.NoOptionError) as e:
            logger.error(e)
            data = None
        return data

config = ConfigController("config.ini")
And then my problem is creating the 4th file while making sure both my logger and config parser are initialized before it. I also want this 4th one to be a singleton, so it follows a similar format to the config_controller.
So tool.py uses config_controller to pull anything it needs from the config file. It also has some error checking for when config_controller's read_config returns None, as that isn't validated in _validate. I did this because I wanted my logging to have a general layer of error checking and a more specific layer: _validate just checks that the required fields and sections are in the config file, and then wherever a field is read handles the extra error checking.
So my main problem is this:
How do I ensure that my logger and config parser are both initialized and available before anything else? I'm very much willing to rework all of this, but I'd like to keep the functionality.
One attempt that works, but seems very messy, is making my logger_controller a singleton that just returns Python's logging module:
import logging
import os

class MyLogger(object):
    def __new__(cls, *args, **kwargs):
        init_log()
        return logging

def init_log():
    ...

mylogger = MyLogger()
Then in core.py:
from logger_controller import mylogger

logger = mylogger.getLogger(__name__)
I feel like there should be a better way to do the above, but I'm honestly not sure how.
A few ideas:
Would I be able to extend the logging class instead of just using that init_log function?
Maybe there's a way I can make all 3 individual modules initialize in the correct order? My attempts here didn't quite work, because I also have some internal data that I wouldn't want exposed to classes using the module, just the functionality.
I'd like to have all 3 (logging, config parsing, and the tool) available anywhere I import them.
How I have it set up now "works", but if I were to import tool.py anywhere in core.py and an error occurred that I needed to catch, my logger wouldn't be able to log it, because the tool loads before my logger is initialized.

Extending the logging.Logger class in Python 3.5

I have been trying to create a new kind of logger by subclassing logging.Logger. The Python version is 3.5.
I have several modules in my application, and I configure logging only in the main module, where I set the logger class using logging.setLoggerClass(...).
However, when I retrieve the same logger instance from some other module, it is still created as an instance of the default Logger class, not the child class that I defined.
For example, my code is:
# module 1
import logging

class MyLoggerClass(logging.getLoggerClass()):
    def __init__(self, name):
        super(MyLoggerClass, self).__init__(name)

    def new_logger_method(self):
        # some new functionality
        ...

if __name__ == "__main__":
    logging.setLoggerClass(MyLoggerClass)
    mylogger = logging.getLogger("mylogger")
    # configuration of mylogger instance

# module 2
import logging

applogger = logging.getLogger("mylogger")
print(type(applogger))

def some_function():
    applogger.debug("in module 2 some_function")
When this code is executed, I expect the applogger in module 2 to be of type MyLoggerClass. I intend to use the new_logger_method for some new functionality.
However, since applogger turns out to be of type logging.Logger, running the code raises AttributeError: 'Logger' object has no attribute 'new_logger_method'.
Has anyone ever faced this issue?
Thanks in advance for any help!
Pranav
Instead of attempting to affect the global logger by changing the default logger factory, if you want your module to play nicely with any environment, you should define a logger just for your module (and its children) and use it as the main logger for everything deeper in your module structure. The trouble is that you explicitly want to use a different logging.Logger class than the default/globally defined one, and the logging module doesn't provide an easy way to do context-based factory switching, so you'll have to do it yourself.
There are many ways to do that, but my personal preference is to be as explicit as possible and define your own logger module, which you then import in the other modules in your package whenever you need to obtain a custom logger. In your case, you can create logger.py at the root of your package and do something like:
import logging

class CustomLogger(logging.Logger):
    def __init__(self, name):
        super(CustomLogger, self).__init__(name)

    def new_logger_method(self, caller=None):
        self.info("new_logger_method() called from: {}.".format(caller))

def getLogger(name=None, custom_logger=True):
    if not custom_logger:
        return logging.getLogger(name)
    logging_class = logging.getLoggerClass()  # store the current logger factory for later
    logging._acquireLock()  # use the global logging lock for thread safety
    try:
        logging.setLoggerClass(CustomLogger)  # temporarily change the logger factory
        logger = logging.getLogger(name)
        logging.setLoggerClass(logging_class)  # be nice, revert the logger factory change
        return logger
    finally:
        logging._releaseLock()
Feel free to include other custom log initialization logic in it if you so desire. Then from your other modules (and sub-packages) you can import this logger and use its getLogger() to obtain a local, custom logger. For example, all you need in module1.py is:
from . import logger  # or `from package import logger` for external/non-relative use

log = logger.getLogger(__name__)  # obtain a main logger for this module

def test():  # let's define a function we can later call for testing
    log.new_logger_method("Module 1")
This covers the internal use - as long as you stick to this pattern in all your modules/sub-modules you'll have the access to your custom logger.
When it comes to external use, you can write an easy test to show you that your custom logger gets created and that it doesn't interfere with the rest of the logging system therefore your package/module can be declared a good citizen. Under the assumption that your module1.py is in a package called package and you want to test it as a whole from the outside:
import logging  # NOTE: we're importing the global, standard `logging` module
import package.module1

logging.basicConfig()  # initialize the most rudimentary root logger
root_logger = logging.getLogger()  # obtain the root logger
root_logger.setLevel(logging.DEBUG)  # set root log level to DEBUG

# let's see the difference in Logger types:
print(root_logger.__class__)  # <class 'logging.RootLogger'>
print(package.module1.log.__class__)  # <class 'package.logger.CustomLogger'>

# you can also obtain the logger by name to make sure it's in the hierarchy
# NOTE: we'll be getting it from the standard logging module, so outsiders
# need not know that we manage our logging internally
print(logging.getLogger("package.module1").__class__)  # <class 'package.logger.CustomLogger'>

# and we can test that it indeed has the custom method:
logging.getLogger("package.module1").new_logger_method("root!")
# INFO:package.module1:new_logger_method() called from: root!.

package.module1.test()  # let's call the test method within the module
# INFO:package.module1:new_logger_method() called from: Module 1.

# however, this will not affect anything outside of your package/module, e.g.:
test_logger = logging.getLogger("test_logger")
print(test_logger.__class__)  # <class 'logging.Logger'>
test_logger.info("I am a test logger!")
# INFO:test_logger:I am a test logger!
test_logger.new_logger_method("root - test")
# AttributeError: 'Logger' object has no attribute 'new_logger_method'

How to configure/initialize logging using logger for multiple modules only once in Python for entire project?

I have a Python project with multiple modules that use logging. I perform the initialization (reading the log configuration file, creating the root logger, and enabling/disabling logging) in every module before logging any messages. Is it possible to perform this initialization only once, in one place (such as a class called Log), so that the same settings are reused by logging all over the project?
I am looking for a proper solution that reads the configuration file only once, and gets and configures a logger only once, in a class constructor or perhaps in the package initializer (__init__.py). I don't want to do this on the client side (in __main__). I want to do this configuration only once, in a separate class, and call this class in other modules when logging is required.
Setup using the @singleton pattern:
#log.py
import logging.config

import yaml
from singleton_decorator import singleton

@singleton
class Log:
    def __init__(self):
        configFile = 'path_to_my_log_config_file/logging.yaml'
        with open(configFile) as f:
            config_dict = yaml.safe_load(f)  # safe_load avoids constructing arbitrary objects
        logging.config.dictConfig(config_dict)
        self.logger = logging.getLogger('root')

    def info(self, message):
        self.logger.info(message)

#module1.py
from log import Log

myLog = Log()
myLog.info('Message logged successfully')

#module2.py
from log import Log

myLog = Log()  # config read only once and only one object is created
myLog.info('Message logged successfully')
From the documentation,
Note that Loggers should NEVER be instantiated directly, but always through the module-level function logging.getLogger(name). Multiple calls to getLogger() with the same name will always return a reference to the same Logger object.
You can initialize and configure logging in your main entry point. See Logging from multiple modules in this Howto (Python 2.7).
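A minimal sketch of that approach, along the lines of the Howto's example (the file and function names here are illustrative):
# main entry point (e.g. myapp.py): configure logging exactly once
import logging

import mylib  # hypothetical module that only ever calls logging.getLogger(__name__)

def main():
    logging.basicConfig(filename='myapp.log', level=logging.INFO)
    logging.getLogger(__name__).info('Started')
    mylib.do_something()

if __name__ == '__main__':
    main()

# mylib.py: no configuration here, just a module-level logger
import logging

logger = logging.getLogger(__name__)

def do_something():
    logger.info('Doing something')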
I had the same problem, and since I don't have any classes or anything, I solved it by just using a global variable.
utils.py:
import logging

existing_loggers = {}

def get_logger(name='my_logger', level=logging.INFO):
    if name in existing_loggers:
        return existing_loggers[name]
    logger = logging.getLogger(name)
    logger.setLevel(level)
    # ... do the rest of the initialization, handlers, formatters etc. ...
    existing_loggers[name] = logger  # remember the logger so setup runs only once
    return logger
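Usage from any module is then simply (a sketch):
from utils import get_logger

log = get_logger()    # first call performs the setup
log.info('hello')
log2 = get_logger()   # later calls return the same cached logger (log2 is log)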

Weird stuff happening while importing modules

I hate to give the question this heading, but I actually don't know what's happening, so here it goes.
I was working on another project in which I wanted to use the logging module. The code is distributed among a few files, and instead of creating separate logger objects for separate files, I thought of creating a logs.py with these contents:
import sys, logging

class Logger:
    def __init__(self):
        formatter = logging.Formatter('%(filename)s:%(lineno)s %(levelname)s:%(message)s')
        stdout_handler = logging.StreamHandler(sys.stdout)
        stdout_handler.setFormatter(formatter)
        self.logger = logging.getLogger('')
        self.logger.addHandler(stdout_handler)
        self.logger.setLevel(logging.DEBUG)

    def debug(self, message):
        self.logger.debug(message)
and use this class like this (in different files):
import logs

b = logs.Logger()
b.debug("Hi from a.py")
I stripped down the whole problem to ask the question here. Now I have 3 files: a.py, b.py and main.py. All 3 files instantiate the logs.Logger class and print a debug message.
a.py and b.py import logs and print their debug messages.
main.py imports logs, a and b, and prints its own debug message.
The file contents are like this: http://i.imgur.com/XoKVf.png
Why is debug message from b.py printed 2 times & from main.py 3 times?
Specify a name for the logger, otherwise you always use the root logger.
import sys, logging

class Logger:
    def __init__(self, name):
        formatter = logging.Formatter('%(filename)s:%(lineno)s %(levelname)s:%(message)s')
        stdout_handler = logging.StreamHandler(sys.stdout)
        stdout_handler.setFormatter(formatter)
        self.logger = logging.getLogger(name)
        self.logger.addHandler(stdout_handler)
        self.logger.setLevel(logging.DEBUG)

    def debug(self, message):
        self.logger.debug(message)
From http://docs.python.org/howto/logging.html#advanced-logging-tutorial:
A good convention to use when naming loggers is to use a module-level logger, in each module which uses logging, named as follows:
logger = logging.getLogger(__name__)
logging.getLogger('') will return exactly the same object each time you call it. So each time you instantiate a Logger (why use old-style classes here?), you attach one more handler, resulting in printing to one more target. As all your targets point to the same thing, the last call to .debug() will print to each of the three StreamHandler objects pointing to sys.stdout, resulting in three lines being printed.
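A tiny sketch of the accumulation (illustrative, not from the answer):
import logging, sys

root = logging.getLogger('')  # the very same root logger object every time
for _ in range(3):            # simulate three logs.Logger() instantiations
    root.addHandler(logging.StreamHandler(sys.stdout))
root.setLevel(logging.DEBUG)

root.debug('hello')  # printed three times, once per attached handler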
First: don't create your own Logger class. Just configure the existing logger classes with the existing logging configuration tools.
Second: each time you instantiate your own Logger class, you also create new handlers and then attach the new (duplicate) handler to the root logger. This leads to duplication of messages.
If you have several modules that must (1) run stand-alone and (2) also run as part of a larger, composite application, you need to do the following. This will ensure that logging configuration is done only once:
import sys
import logging

logger = logging.getLogger(__file__)  # unique logger for a, b or main

if __name__ == "__main__":
    logging.basicConfig(stream=sys.stdout, level=logging.DEBUG, format='%(filename)s:%(lineno)s %(levelname)s:%(message)s')

# From this point forward, you can use the `logger` object.
logger.info("Hi")
