I have a script which imports a logging module (based on logging) and another module (which in turn imports the same logging module as the main one, to have consistent logging across the scripts and modules). Everything works fine except that I get duplicated messages. Below are the scripts, stripped down to the problematic part.
The main script. It sets up a logging handler, which is used in one of its own methods:
# the main script
# it is the one started
import dslogger
import mytestmodule

class MyClass():
    def __init__(self):
        self.log = dslogger.DSLogger().rootLogger

    def dosomething(self):
        self.log.debug("hello from dosomething")

mytestmodule.MyTestModule()
MyClass().dosomething()
The mytestmodule, here stripped down to __init__:
# mytestmodule.py
import dslogger

class MyTestModule():
    def __init__(self):
        self.log = dslogger.DSLogger().rootLogger
        self.log.debug("hello from mytestmodule")
dslogger.py, the logging module:
import logging

class DSLogger():
    def __init__(self):
        logFormatter = logging.Formatter("%(asctime)s [%(funcName)s] [%(levelname)s] %(message)s")
        self.rootLogger = logging.getLogger(__name__)
        consoleHandler = logging.StreamHandler()
        consoleHandler.setFormatter(logFormatter)
        self.rootLogger.setLevel(logging.DEBUG)
        self.rootLogger.addHandler(consoleHandler)
When running the main script I get:
2014-11-04 08:56:59,637 [__init__] [DEBUG] hello from mytestmodule
2014-11-04 08:56:59,637 [dosomething] [DEBUG] hello from dosomething
2014-11-04 08:56:59,637 [dosomething] [DEBUG] hello from dosomething
The second and third lines are duplicates, even though they are generated by the dosomething() method, which is called only once, from the main script. Why is that?
What's happening here is that you're adding two StreamHandlers to the same logger.
You initialize DSLogger() twice. Each time you initialize it, you call self.rootLogger = logging.getLogger(__name__). But this does not get you two different logger instances, since getLogger() is called with the same __name__ both times:
>>> import logging
>>> x = logging.getLogger('x')
>>> id(x)
173704528L
>>> y = logging.getLogger('x')
>>> id(y)
173704528L
So when you call self.log.debug("hello from mytestmodule"), DSLogger() has been initialized only once, and thus only one StreamHandler has been added to that logger. But when you then initialize MyClass, another one is added; the logger now has two StreamHandlers, and every subsequent log message is printed twice.
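A minimal sketch of one possible fix (assuming you keep the DSLogger wrapper): attach the handler only when the logger does not have one yet, so repeated instantiation is harmless.

```python
import logging

class DSLogger():
    def __init__(self):
        logFormatter = logging.Formatter(
            "%(asctime)s [%(funcName)s] [%(levelname)s] %(message)s")
        self.rootLogger = logging.getLogger(__name__)
        self.rootLogger.setLevel(logging.DEBUG)
        # getLogger(__name__) returns the same logger object on every call,
        # so without this guard each DSLogger() adds one more handler.
        if not self.rootLogger.handlers:
            consoleHandler = logging.StreamHandler()
            consoleHandler.setFormatter(logFormatter)
            self.rootLogger.addHandler(consoleHandler)
```

With this guard, instantiating DSLogger() from both the main script and mytestmodule leaves exactly one handler on the shared logger.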
Related
I have 3 classes: a main class called App, a secondary class that has some business rules, and another class with some useful functions called Utils.
The main class, App, has a method that runs a method of the secondary class in parallel. That method of the secondary class uses a method of the Utils class to generate logs, but I think that because of the context manager I use to run processes in parallel, the logging module loses its configuration. See:
class App:
    otherClass = OtherClass()
    utils = Utils()
    n_cpus = os.cpu_count()

    def call_multiprocessing(self):
        # When I run it this way, the logging messages go to the console and not to the file
        # specified in basic config
        with concurrent.futures.ProcessPoolExecutor(max_workers=self.n_cpus) as executor:
            co_routines = executor.map(self.otherClass.some_method, self.list_of_parameters)
class OtherClass():
    utils = Utils()
    utils.setup_log_folder(folder_name)

    def some_method(self, some_parameter):
        for a in some_parameter:
            # DO SOME STUFF
            self.utils.generate_log(some_message)
import logging

class Utils:
    def setup_log(self, folder):
        # If someone tried to log something before basicConfig is called, Python creates a
        # default handler that goes to the console and will ignore further basicConfig calls.
        # So I remove the handler if there is one.
        root = logging.getLogger()
        if root.handlers:
            for handler in list(root.handlers):
                root.removeHandler(handler)
        logging.basicConfig(
            format="%(asctime)s [%(levelname)s] %(message)s",
            datefmt='%d/%m/%Y %H:%M:%S',
            handlers=[
                logging.FileHandler(os.path.join(folder, 'log.txt'), 'w', 'utf-8'),
            ]
        )
The file log.txt is created, but the log messages are shown in console. How can I solve that?
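One common approach (a sketch; `init_worker` is an invented name, and it assumes Python 3.7+ for the executor's `initializer` parameter) is to configure logging inside each worker process. Under the default "spawn" start method, a worker is a fresh interpreter that does not inherit the parent's handlers, so the configuration has to be repeated there:

```python
import concurrent.futures
import logging

def init_worker(log_file):
    # Runs once in every worker process. The handlers configured in the
    # parent process do not exist here, so set up the root logger again.
    root = logging.getLogger()
    for handler in list(root.handlers):
        root.removeHandler(handler)
    logging.basicConfig(
        format="%(asctime)s [%(levelname)s] %(message)s",
        datefmt='%d/%m/%Y %H:%M:%S',
        handlers=[logging.FileHandler(log_file, 'a', 'utf-8')],
        level=logging.DEBUG,
    )

# In App.call_multiprocessing, the initializer would then be passed to the
# executor, e.g.:
#     with concurrent.futures.ProcessPoolExecutor(
#             max_workers=self.n_cpus,
#             initializer=init_worker,
#             initargs=(os.path.join(folder_name, 'log.txt'),)) as executor:
#         executor.map(self.otherClass.some_method, self.list_of_parameters)
```

The file is opened in append mode ('a') here so that several workers writing to the same path do not truncate each other's output.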
I am coding in Python and I am facing a situation where my object produces multiple instances even though it is configured to be a single instance. I have multiple modules running a script, and one module called mylogger.py instantiates the logger object, which stores the script's logs in a file and displays them on the console. I call mylogger.py from multiple modules in my script, but I want those multiple calls from different modules to reuse the same initial object, so that I do not get the same lines inserted multiple times in my log file. However, even after what I believe is proper configuration, I see a new object being created each time setLogger() is called from a different module.
# mylogger.py
import logging

logger = None

def setLogger(filename="logfile.txt"):
    global logger
    print("*********VALUE OF LOGGER IN FUNCTION******** ", logger)
    if logger is None:
        logger = logging.getLogger(__name__)  # creating logger object only once
        if not getattr(logger, 'handler_set', None):
            logger.setLevel(logging.INFO)
            stream_handler = logging.StreamHandler()
            file_handler = logging.FileHandler(filename)
            formatter = logging.Formatter('%(message)s')
            file_handler.setFormatter(formatter)
            logger.addHandler(file_handler)
            logger.addHandler(stream_handler)
            logger.setLevel(logging.INFO)
            logger.propagate = False
            logger.handler_set = True
    return logger
I am calling this file from multiple modules. I have included the parts which call the mylogger.py file.
# firstModule.py
import mylogger
import os
import sys

logger = mylogger.setLogger()
logger.info("Logger object in firstModule is ", logger)

# Some code
# and print statements
# go here

# Trigger next module by calling it using os.system(...)
os.system('python' + path_of_script/secondModule.py)
Now secondModule.py calls mylogger.py:
# secondModule.py
import mylogger
import os
import sys

logger = mylogger.setLogger()
logger.info("Logger object in secondModule is ", logger)

# Some code
# and print statements
# go here

# And then it calls some other module
So basically the logger object is different for firstModule.py, secondModule.py and the subsequent modules, which prints out multiple copies of the same statement in logfile.txt. However, the lines printed on the console through the StreamHandler are fine: there is a single line per logger.info() call. This seems very weird, as both the StreamHandler and the FileHandler were configured by the same function.
You have two separate processes (started via os.system()), so of course you get different objects. To "trigger the next module", use import instead.
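A single-process sketch of that idea (`set_logger` and `second_module_main` are illustrative stand-ins for `mylogger.setLogger` and `secondModule`): because both calls happen in one interpreter, getLogger() returns the very same object and the handlers are attached only once.

```python
import logging

def set_logger(filename="logfile.txt"):
    # Same idea as mylogger.setLogger(): one named logger per process,
    # handlers attached only on the first call.
    logger = logging.getLogger("myapp")
    if not logger.handlers:
        logger.setLevel(logging.INFO)
        logger.addHandler(logging.StreamHandler())
        logger.addHandler(logging.FileHandler(filename))
    return logger

def second_module_main():
    # What secondModule could expose instead of being launched via os.system()
    log = set_logger()
    log.info("hello from secondModule")

# firstModule would then do:
log = set_logger()
log.info("hello from firstModule")
second_module_main()   # same process, so the very same logger object
```

With os.system() each script is a new interpreter with its own module state, so the `logger is None` check in mylogger.py starts over every time; importing keeps everything in one process.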
I've developed a module that I would like to be imported into any script, with the module's logger automatically placed below the logger established in the main script in the logging hierarchy, no matter what the main logger is called (root, main, etc.), i.e.:
a.py
import logging
import module

log = logging.getLogger()
log.info("Test Main")
module.test()
b.py
import logging
import module

log = logging.getLogger('main')
log.info("Test Main")
module.test()
module.py
import logging

mod_log = logging.getLogger(__name__)

def test():
    mod_log.info("Test Mod")
If the scripts ran successfully, I would expect the following output; I just can't seem to get it to work:
a.py
ROOT - Test Main
ROOT.module - Test Mod
b.py
main - Test Main
main.module - Test Mod
This is not possible because there is no way to tell where a logger was defined and which one the "main" logger is supposed to be. The main script could define any number of loggers, including zero. The only logger that is guaranteed to exist is the root logger.
If you are fine with just taking a guess you could make your logger a child of whatever is the first logger that was created.
module.py
import logging

logger_name = __name__
if logging.root.manager.loggerDict:
    parent = next(iter(logging.root.manager.loggerDict))
    logger_name = parent + '.' + logger_name

mod_log = logging.getLogger(logger_name)
Implement a Singleton design pattern for the Logger class, whose job is basically to add a formatter, set the logging level and filtering, and finally return an instance of the logger.
Then use this to create an object:
logger = Logger.__call__().get_logger()
print(Logger())
Following the above method will create one and only one instance of the logger; no matter where you create an object of the Logger class, you will get the same instance if one was created before.
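A minimal sketch of such a singleton (the original answer's Logger class is not shown, so the class body, the logger name "singleton_app" and the get_logger() method here are illustrative):

```python
import logging

class Logger:
    # Singleton: the first construction builds and configures the wrapped
    # logger; every later construction returns the same cached instance.
    _instance = None

    def __new__(cls):
        if cls._instance is None:
            cls._instance = super().__new__(cls)
            log = logging.getLogger("singleton_app")
            log.setLevel(logging.DEBUG)
            handler = logging.StreamHandler()
            handler.setFormatter(logging.Formatter('%(levelname)s: %(message)s'))
            log.addHandler(handler)
            cls._instance._log = log
        return cls._instance

    def get_logger(self):
        return self._log

logger = Logger().get_logger()
logger.debug("configured exactly once")
```

Since __new__ caches the instance, a plain Logger() call is sufficient; the Logger.__call__() spelling from the answer is equivalent to calling Logger() directly.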
I was wondering what the standard setup is for performing logging from within a Python app.
I am using the Logging class, and I've written my own logger class that instantiates the Logging class. My main then instantiates my logger wrapper class. However, my main also instantiates other classes, and I want those other classes to be able to write to the log file via the logger object in main.
How do I make that logger object so that it can be called by other classes? It's almost like we need some sort of static logger object to get this to work.
I guess the long and short of the question is: how do you implement logging within your code structure such that all classes instantiated from within main can write to the same log file? Do I just have to create a new logging object in each of the classes that points to the same file?
I don't know what you mean by the Logging class - there's no such class in Python's built-in logging. You don't really need wrappers: here's an example of how to do logging from arbitrary classes that you write:
import logging

# This class could be imported from a utility module
class LogMixin(object):
    @property
    def logger(self):
        name = '.'.join([__name__, self.__class__.__name__])
        return logging.getLogger(name)

# This class is just there to show that you can use a mixin like LogMixin
class Base(object):
    pass

# This could be in a module separate from B
class A(Base, LogMixin):
    def __init__(self):
        # Example of logging from a method in one of your classes
        self.logger.debug('Hello from A')

# This could be in a module separate from A
class B(Base, LogMixin):
    def __init__(self):
        # Another example of logging from a method in one of your classes
        self.logger.debug('Hello from B')

def main():
    # Do some work to exercise logging
    a = A()
    b = B()
    with open('myapp.log') as f:
        print('Log file contents:')
        print(f.read())

if __name__ == '__main__':
    # Configure only in your main program clause
    logging.basicConfig(level=logging.DEBUG,
                        filename='myapp.log', filemode='w',
                        format='%(name)s %(levelname)s %(message)s')
    main()
Generally it's not necessary to have loggers at class level: in Python, unlike say Java, the unit of program (de)composition is the module. However, nothing stops you from doing it, as I've shown above. The script, when run, displays:
Log file contents:
__main__.A DEBUG Hello from A
__main__.B DEBUG Hello from B
Note that code from both classes logged to the same file, myapp.log. This would have worked even with A and B in different modules.
Try using logging.getLogger() to get your logging object instance:
http://docs.python.org/3/library/logging.html#logging.getLogger
All calls to this function with a given name return the same logger instance. This means that logger instances never need to be passed between different parts of an application.
UPDATE:
The recommended way to do this is to use the getLogger() function and configure it (setting a handler, formatter, etc...):
# main.py
import logging
import lib

def main():
    logger = logging.getLogger('custom_logger')
    logger.setLevel(logging.INFO)
    logger.addHandler(logging.FileHandler('test.log'))
    logger.info('logged from main module')
    lib.log()

if __name__ == '__main__':
    main()

# lib.py
import logging

def log():
    logger = logging.getLogger('custom_logger')
    logger.info('logged from lib module')
If you really need to extend the logger class, take a look at logging.setLoggerClass(klass).
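For illustration, a small sketch of logging.setLoggerClass() (the AppLogger subclass and its success() method are invented for this example):

```python
import logging

class AppLogger(logging.Logger):
    # Hypothetical subclass adding a convenience method.
    def success(self, msg, *args, **kwargs):
        self.info("SUCCESS: " + msg, *args, **kwargs)

# Must be called before the loggers you care about are first created;
# already-existing loggers keep their original class.
logging.setLoggerClass(AppLogger)

log = logging.getLogger("app.extended")
log.success("extended logger in use")  # available thanks to setLoggerClass
```

getLogger() will instantiate AppLogger for every new logger name requested after the setLoggerClass() call.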
UPDATE 2:
Example on how to add a custom logging level without changing the Logging class:
# main.py
import logging
import lib
# Extend Logger class
CUSTOM_LEVEL_NUM = 9
logging.addLevelName(CUSTOM_LEVEL_NUM, 'CUSTOM')
def custom(self, msg, *args, **kwargs):
self._log(CUSTOM_LEVEL_NUM, msg, args, **kwargs)
logging.Logger.custom = custom
# Do global logger instance setup
logger = logging.getLogger('custom_logger')
logger.setLevel(logging.INFO)
logger.addHandler(logging.FileHandler('test.log'))
def main():
logger = logging.getLogger('custom_logger')
logger.custom('logged from main module')
lib.log()
if __name__ == '__main__':
main()
Note that adding custom level is not recommended: http://docs.python.org/2/howto/logging.html#custom-levels
Defining a custom handler and maybe using more than one logger may do the trick for your other requirement: optional output to stderr.
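One sketch of that idea (the configure() helper and all names in it are illustrative, not from any particular library): keep file output always on and attach a stderr StreamHandler only on request.

```python
import logging
import sys

def configure(name="app.cli", to_stderr=False, filename="app.log"):
    # File output is always on; console output to stderr is optional.
    logger = logging.getLogger(name)
    logger.setLevel(logging.DEBUG)
    if not logger.handlers:              # configure each logger only once
        logger.addHandler(logging.FileHandler(filename))
        if to_stderr:
            logger.addHandler(logging.StreamHandler(sys.stderr))
    return logger
```

A caller that wants quiet operation asks for the default, while a verbose mode passes to_stderr=True; both share the same file handler setup.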
I hate to give the question this heading, but I actually don't know what's happening, so here it goes.
I was working on another project in which I wanted to use the logging module. The code is distributed among a few files, and instead of creating separate logger objects for separate files, I thought of creating a logs.py with these contents:
import sys, logging

class Logger:
    def __init__(self):
        formatter = logging.Formatter('%(filename)s:%(lineno)s %(levelname)s:%(message)s')
        stdout_handler = logging.StreamHandler(sys.stdout)
        stdout_handler.setFormatter(formatter)
        self.logger = logging.getLogger('')
        self.logger.addHandler(stdout_handler)
        self.logger.setLevel(logging.DEBUG)

    def debug(self, message):
        self.logger.debug(message)
and use this class like this (in different files):
import logs
b = logs.Logger()
b.debug("Hi from a.py")
I stripped the whole problem down to ask the question here. Now I have 3 files: a.py, b.py and main.py. All 3 files instantiate the logs.Logger class and print a debug message.
a.py and b.py import logs and print their debug messages.
main.py imports logs, a and b, and prints its own debug message.
The file contents are like this: http://i.imgur.com/XoKVf.png
Why is the debug message from b.py printed 2 times and the one from main.py 3 times?
Specify a name for the logger; otherwise you always use the root logger.
import sys, logging

class Logger:
    def __init__(self, name):
        formatter = logging.Formatter('%(filename)s:%(lineno)s %(levelname)s:%(message)s')
        stdout_handler = logging.StreamHandler(sys.stdout)
        stdout_handler.setFormatter(formatter)
        self.logger = logging.getLogger(name)
        self.logger.addHandler(stdout_handler)
        self.logger.setLevel(logging.DEBUG)

    def debug(self, message):
        self.logger.debug(message)
http://docs.python.org/howto/logging.html#advanced-logging-tutorial :
A good convention to use when naming loggers is to use a module-level
logger, in each module which uses logging, named as follows:
logger = logging.getLogger(__name__)
logging.getLogger('') returns exactly the same object each time you call it. So each time you instantiate a Logger (why use old-style classes here?) you attach one more handler, which means printing to one more target. As all your targets point to the same thing, the last call to .debug() prints through each of the three StreamHandler objects pointing to sys.stdout, resulting in three lines being printed.
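The accumulation is easy to demonstrate with a stripped-down version of the same mistake:

```python
import logging
import sys

def make_logger():
    # Same mistake as the Logger class above: getLogger('') always returns
    # the root logger, and every call attaches one more handler to it.
    logger = logging.getLogger('')
    logger.addHandler(logging.StreamHandler(sys.stdout))
    return logger

a = make_logger()
b = make_logger()
c = make_logger()
# a, b and c are one and the same object, but it now carries three
# handlers, so a single .debug() call is emitted three times.
```

This is exactly why main.py, which triggers the instantiation three times (directly, via a, and via b), sees its message three times.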
First: don't create your own Logger class. Just configure the existing logger classes with the existing logging configuration tools.
Second: each time you create your own Logger you also create new handlers and attach the new (duplicate) handler to the root logger. This leads to duplication of messages.
If you have several modules that must (1) run stand-alone and (2) also run as part of a larger, composite application, you need the following. It assures that logging configuration is done only once.
import sys
import logging

logger = logging.getLogger(__file__)  # Unique logger for a, b or main

if __name__ == "__main__":
    logging.basicConfig(stream=sys.stdout, level=logging.DEBUG,
                        format='%(filename)s:%(lineno)s %(levelname)s:%(message)s')

# From this point forward, you can use the `logger` object.
logger.info("Hi")