Coding this in Python as a newbie, I am trying to understand how to create a single instance of a logging class in a separate module, and to access that logging module from my Python script. The automation script spans several files, and I want to record the logs in a file and display them on the console at the same time, so I am using two handlers: FileHandler() and StreamHandler(). The logger is initialized in a separate file called debugLogs.py, and I access this file from the multiple Python modules that make up the script. But when the separate modules import debugLogs.py, multiple instances of the logger get created, which means messages get printed multiple times, which is not what I want. That is why I need the singleton pattern, to create just one instance. How do you suggest I go about doing that? I have included my version of debugLogs.py below, and I have shown how I plan to call it from another module.
#debugLogs.py
import logging
import logging.handlers

#open readme file and read name of latest log file created
def get_name():
    with open("latestLogNames.txt") as f:
        for line in f:
            pass
    latestLog = line
    logfile_name = latestLog[:-1]  # strip the trailing newline
    return logfile_name
class Logger(object):
    _instance = None
    def __new__(self, logfile_name):
        if not self._instance:
            self._instance = super(Logger, self).__new__(self)
            logger = logging.getLogger(__name__)
            logger.setLevel(logging.INFO)
            formatter = logging.Formatter('%(message)s')
            file_handler = logging.FileHandler(logfile_name)
            file_handler.setFormatter(formatter)
            stream_handler = logging.StreamHandler()
            logger.addHandler(file_handler)
            logger.addHandler(stream_handler)
        return self._instance
So get_name() reads the name of the latest log file (the last line of the readme file latestLogNames.txt), and the logs go into that existing file.
I understand that my singleton class code is not right, and I am confused about how to initialize the whole class structure. But somehow I have to pass that logfile_name value to the class. So I am planning to call this logger from a different module with something like this:
#differentModule.py
import debugLogs
logger = debugLogs.Logger(debugLogs.get_name())
And then I would use logger.info("...") to print the logs as well as store them in the file. Please tell me how to restructure debugLogs.py and how to call it from the different modules of my script.
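For reference, a minimal sketch of one common restructuring (the get_logger function and the "automation" logger name are my own choices, not from the original code): since logging.getLogger(name) already returns the same object for a given name, a guarded factory function is enough and no singleton class is needed.

# debugLogs.py (sketch): configure the shared logger once; the guard
# ensures repeated calls from different modules never attach duplicate handlers.
import logging

def get_logger(logfile_name):
    logger = logging.getLogger("automation")  # same name -> same Logger object
    if not logger.handlers:                   # configure only on the first call
        logger.setLevel(logging.INFO)
        formatter = logging.Formatter('%(message)s')
        file_handler = logging.FileHandler(logfile_name)
        file_handler.setFormatter(formatter)
        stream_handler = logging.StreamHandler()
        stream_handler.setFormatter(formatter)
        logger.addHandler(file_handler)
        logger.addHandler(stream_handler)
    return logger

Each module would then call logger = debugLogs.get_logger(debugLogs.get_name()) and receive the same configured object.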
Related
I've been trying to figure out how best to set this up, cutting it down as much as I can. I have 4 Python files: core.py (main), logger_controller.py, config_controller.py, and a 4th that is a module or singleton; we'll just call it tool.py.
The way I have it set up, logging has an init function that sets up Python's built-in logging with the necessary levels, formatter, directory location, etc. I call this init function in main.
import logging
import logger_controller

def main():
    logger_controller.init_log()
    logger = logging.getLogger(__name__)

if __name__ == "__main__":
    main()
config_controller uses configparser and is mainly a singleton that acts as a controller for my config.
import configparser
import logging

logger = logging.getLogger(__name__)

class ConfigController(object):
    def __init__(self, *file_names):
        self.config_parser = configparser.ConfigParser()
        found_files = self.config_parser.read(file_names)
        if not found_files:
            raise ValueError("No config file found.")
        self._validate()

    def _validate(self):
        ...

    def read_config(self, section, field):
        try:
            data = self.config_parser.get(section, field)
        except (configparser.NoSectionError, configparser.NoOptionError) as e:
            logger.error(e)
            data = None
        return data

config = ConfigController("config.ini")
And then my problem is trying to create the 4th file and making sure both my logger and config parser are running before it. I also want this 4th one to be a singleton, so it follows a similar format to the config_controller.
So tool.py uses config_controller to pull anything it needs from the config file. It also has some error checking for when config_controller's read_config returns None, since that isn't validated in _validate. I did this because I wanted my logging to have a general layer of error checking and a more specific layer: _validate just checks that the required fields and sections are in the config file, and wherever a field is read handles the extra error checking (a sketch of what that could look like follows).
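A rough sketch of that layered checking in tool.py (the get_timeout function and the [tool] timeout field are hypothetical, purely for illustration):

# tool.py (sketch): pull settings through the ConfigController singleton
# and do the field-specific error checking described above.
import logging
from config_controller import config

logger = logging.getLogger(__name__)

def get_timeout():
    value = config.read_config("tool", "timeout")  # hypothetical section/field
    if value is None:
        logger.error("Missing [tool] timeout in config; falling back to 30")
        return 30
    return int(value)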
So my main problem is this:
How do I arrange things so that my logger and configparser are both running and available before anything else? I'm very much willing to rework all of this, but I'd like to keep the functionality of it all.
One attempt I tried that works, but seems very messy, is making my logger_controller a singleton that just returns Python's logging module.
import logging
import os

class MyLogger(object):
    def __new__(cls, *args, **kwargs):
        init_log()
        return logging

def init_log():
    ...

mylogger = MyLogger()
Then in core.py
from logger_controller import mylogger
logger = mylogger.getLogger(__name__)
I feel like there should be a better way to do the above, but I'm honestly not sure how.
A few ideas:
Would I be able to extend the logging class instead of just using that init_log function?
Maybe there's a way I can make all 3 individual modules such that they each initialize in the correct order? My attempts here didn't quite work, as I also have some internal data that I wouldn't want exposed to classes using the module, just the functionality.
I'd like to have all 3 (logging, config parsing, and the tool) available anywhere I import them.
How I have it set up now "works", but if I were to import tool.py anywhere in core.py and an error occurs that I need to catch, my logger won't be able to log it, because the tool loads before the init of my logger.
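One pattern that addresses the ordering problem (a sketch, assuming init_log() can be made idempotent; not the original code): have logger_controller configure logging at import time, guarded by a flag, so any module that imports it sees a configured logger regardless of import order.

# logger_controller.py (sketch)
import logging

_initialized = False

def init_log():
    global _initialized
    if _initialized:  # repeat calls become harmless no-ops
        return
    logging.basicConfig(level=logging.DEBUG,
                        format='%(name)s %(levelname)s %(message)s')
    _initialized = True

init_log()  # runs once, on first import

core.py can keep calling logger_controller.init_log() explicitly; the guard makes any later call a no-op.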
I am coding in Python and facing a situation where my object produces multiple instances even though it is configured for a single instance. I have multiple modules running a script, and one module called mylogger.py instantiates the logger object that stores the script's logs in a file and displays them on the console. I call mylogger.py from multiple modules in my script, but I want all those calls from different modules to share the same initial object, so that the same lines are not inserted multiple times into my log file. Yet even after what looks like proper configuration, I see multiple objects being created each time setLogger() is called from a different module.
# mylogger.py
import logging

logger = None

def setLogger(filename="logfile.txt"):
    global logger
    print("*********VALUE OF LOGGER IN FUNCTION******** ", logger)
    if logger is None:
        logger = logging.getLogger(__name__)  # creating logger object only once
    if not getattr(logger, 'handler_set', None):
        logger.setLevel(logging.INFO)
        stream_handler = logging.StreamHandler()
        file_handler = logging.FileHandler(filename)
        formatter = logging.Formatter('%(message)s')
        file_handler.setFormatter(formatter)
        logger.addHandler(file_handler)
        logger.addHandler(stream_handler)
        logger.setLevel(logging.INFO)
        logger.propagate = False
        logger.handler_set = True
    return logger
I am calling this file from multiple modules. I have included the parts that call the mylogger.py file.
#firstModule.py
import mylogger
import os
import sys

logger = mylogger.setLogger()
logger.info("Logger object in firstModule is %s", logger)

# Some code
# and print statements
# go here

# Trigger next module by calling it using os.system(...)
os.system('python ' + path_of_script + '/secondModule.py')
Now secondModule.py calls mylogger.py:
#secondModule.py
import mylogger
import os
import sys

logger = mylogger.setLogger()
logger.info("Logger object in secondModule is %s", logger)

# Some code
# and print statements
# go here

# And then it calls some other module
So the logger object is different for firstModule.py, secondModule.py, and each subsequent module, which results in the same statement being printed multiple times in logfile.txt. However, the lines printed to the console through the StreamHandler() are fine: only a single line appears per logger.info(). This seems very weird, as both StreamHandler() and FileHandler() were configured in the same function.
You have two processes (because of os.system()), so of course you have different objects: each process is a separate Python interpreter with its own module state. In order to "trigger the next module", use import instead.
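A sketch of that suggestion (the run() entry-point function in secondModule is an assumption; the original code has no such function):

# firstModule.py (sketch): stay in one process by importing secondModule
# and calling into it, instead of launching a new interpreter.
import mylogger
import secondModule

logger = mylogger.setLogger()
logger.info("Logger object in firstModule is %s", logger)
secondModule.run()  # same process, so the same logger object is reused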
I have a Python project with multiple modules that use logging. I currently perform the initialization (reading the log configuration file, creating the root logger, enabling/disabling logging) in every module before logging any messages. Is it possible to perform this initialization only once, in one place (such as a class, maybe called Log), so that the same settings are reused for logging all over the project?
I am looking for a proper solution where the configuration file is read only once and the logger is fetched and configured only once, in a class constructor or perhaps in an initializer (__init__.py). I don't want to do this on the client side (in __main__); I want to do this configuration only once, in a separate class, and call this class from other modules when logging is required.
Setup using the @singleton pattern:
#log.py
import logging.config
import yaml
from singleton_decorator import singleton

@singleton
class Log:
    def __init__(self):
        configFile = 'path_to_my_log_config_file/logging.yaml'
        with open(configFile) as f:
            config_dict = yaml.safe_load(f)
        logging.config.dictConfig(config_dict)
        self.logger = logging.getLogger('root')

    def info(self, message):
        self.logger.info(message)
#module1.py
from log import Log

myLog = Log()
myLog.info('Message logged successfully')

#module2.py
from log import Log

myLog = Log()  # config read only once and only one object is created
myLog.info('Message logged successfully')
From the documentation,
Note that Loggers should NEVER be instantiated directly, but always through the module-level function logging.getLogger(name). Multiple calls to getLogger() with the same name will always return a reference to the same Logger object.
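A quick way to see that guarantee in action (the "app" name here is arbitrary):

import logging

a = logging.getLogger("app")
b = logging.getLogger("app")  # fetched "elsewhere" in the application
assert a is b                 # same name, same Logger object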
You can initialize and configure logging in your main entry point. See Logging from multiple modules in this Howto (Python 2.7).
I had the same problem, and since I don't have any classes or anything, I solved it by just using a global variable.
utils.py:
import logging

existing_loggers = {}

def get_logger(name='my_logger', level=logging.INFO):
    if name in existing_loggers:
        return existing_loggers[name]
    # Do the rest of initialization, handlers, formatters etc...
    logger = logging.getLogger(name)
    logger.setLevel(level)
    existing_loggers[name] = logger
    return logger
I was wondering what the standard setup is for performing logging from within a Python app.
I am using the Logging class, and I've written my own logger class that instantiates the Logging class. My main then instantiates my logger wrapper class. However, my main instantiates other classes, and I want those other classes to also be able to write to the log file via the logger object in main.
How do I make that logger object such that it can be called by other classes? It's almost like we need some sort of static logger object to get this to work.
I guess the long and short of the question is: how do you implement logging within your code structure such that all classes instantiated from within main can write to the same log file? Do I just have to create a new logging object in each of the classes that points to the same file?
I don't know what you mean by the Logging class - there's no such class in Python's built-in logging. You don't really need wrappers: here's an example of how to do logging from arbitrary classes that you write:
import logging

# This class could be imported from a utility module
class LogMixin(object):
    @property
    def logger(self):
        name = '.'.join([__name__, self.__class__.__name__])
        return logging.getLogger(name)

# This class is just there to show that you can use a mixin like LogMixin
class Base(object):
    pass

# This could be in a module separate from B
class A(Base, LogMixin):
    def __init__(self):
        # Example of logging from a method in one of your classes
        self.logger.debug('Hello from A')

# This could be in a module separate from A
class B(Base, LogMixin):
    def __init__(self):
        # Another example of logging from a method in one of your classes
        self.logger.debug('Hello from B')

def main():
    # Do some work to exercise logging
    a = A()
    b = B()
    with open('myapp.log') as f:
        print('Log file contents:')
        print(f.read())

if __name__ == '__main__':
    # Configure only in your main program clause
    logging.basicConfig(level=logging.DEBUG,
                        filename='myapp.log', filemode='w',
                        format='%(name)s %(levelname)s %(message)s')
    main()
Generally it's not necessary to have loggers at class level: in Python, unlike say Java, the unit of program (de)composition is the module. However, nothing stops you from doing it, as I've shown above. The script, when run, displays:
Log file contents:
__main__.A DEBUG Hello from A
__main__.B DEBUG Hello from B
Note that code from both classes logged to the same file, myapp.log. This would have worked even with A and B in different modules.
Try using logging.getLogger() to get your logging object instance:
http://docs.python.org/3/library/logging.html#logging.getLogger
All calls to this function with a given name return the same logger instance. This means that logger instances never need to be passed between different parts of an application.
UPDATE:
The recommended way to do this is to use the getLogger() function and configure it (setting a handler, formatter, etc...):
# main.py
import logging
import lib

def main():
    logger = logging.getLogger('custom_logger')
    logger.setLevel(logging.INFO)
    logger.addHandler(logging.FileHandler('test.log'))
    logger.info('logged from main module')
    lib.log()

if __name__ == '__main__':
    main()
# lib.py
import logging

def log():
    logger = logging.getLogger('custom_logger')
    logger.info('logged from lib module')
If you really need to extend the logger class, take a look at logging.setLoggerClass(klass).
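For illustration, a minimal sketch of the setLoggerClass() approach (the MyLogger class and its success() method are assumptions, not from this answer):

# logging.setLoggerClass() makes subsequent getLogger() calls return
# instances of a custom Logger subclass; call it before the loggers
# you care about are created.
import logging

class MyLogger(logging.Logger):
    def success(self, msg, *args, **kwargs):
        # hypothetical convenience method, emitted at INFO level
        self.info('SUCCESS: ' + msg, *args, **kwargs)

logging.setLoggerClass(MyLogger)
logging.basicConfig(level=logging.INFO)
logger = logging.getLogger('custom_logger')
logger.success('it works')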
UPDATE 2:
An example of how to add a custom logging level without changing the Logging class:
# main.py
import logging
import lib

# Extend Logger class
CUSTOM_LEVEL_NUM = 9
logging.addLevelName(CUSTOM_LEVEL_NUM, 'CUSTOM')

def custom(self, msg, *args, **kwargs):
    self._log(CUSTOM_LEVEL_NUM, msg, args, **kwargs)

logging.Logger.custom = custom

# Do global logger instance setup
logger = logging.getLogger('custom_logger')
logger.setLevel(CUSTOM_LEVEL_NUM)  # must be <= 9, or CUSTOM records are filtered out
logger.addHandler(logging.FileHandler('test.log'))

def main():
    logger = logging.getLogger('custom_logger')
    logger.custom('logged from main module')
    lib.log()

if __name__ == '__main__':
    main()
Note that adding a custom level is not recommended: http://docs.python.org/2/howto/logging.html#custom-levels
Defining a custom handler and maybe using more than one logger may do the trick for your other requirement: optional output to stderr.
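A sketch of that idea (the VERBOSE flag is hypothetical):

# Optional stderr output: attach a second handler only when wanted.
import sys
import logging

logger = logging.getLogger('custom_logger')
logger.setLevel(logging.INFO)
logger.addHandler(logging.FileHandler('test.log'))  # always log to the file

VERBOSE = True  # hypothetical flag controlling console output
if VERBOSE:
    logger.addHandler(logging.StreamHandler(sys.stderr))

logger.info('written to test.log, and to stderr only when VERBOSE is set')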
I hate to give the question this heading, but I actually don't know what's happening, so here it goes.
I was doing another project in which I wanted to use the logging module. The code is distributed among a few files, and instead of creating separate logger objects for separate files, I thought of creating a logs.py with these contents:
import sys, logging

class Logger:
    def __init__(self):
        formatter = logging.Formatter('%(filename)s:%(lineno)s %(levelname)s:%(message)s')
        stdout_handler = logging.StreamHandler(sys.stdout)
        stdout_handler.setFormatter(formatter)
        self.logger = logging.getLogger('')
        self.logger.addHandler(stdout_handler)
        self.logger.setLevel(logging.DEBUG)

    def debug(self, message):
        self.logger.debug(message)
and use this class like this (in different files):
import logs
b = logs.Logger()
b.debug("Hi from a.py")
I stripped the whole problem down to ask the question here. Now I have 3 files: a.py, b.py & main.py. All 3 files instantiate the logs.Logger class and print a debug message.
a.py & b.py import "logs" and print their debug messages.
main.py imports logs, a & b, and prints its own debug message.
The file contents are like this: http://i.imgur.com/XoKVf.png
Why is the debug message from b.py printed 2 times & the one from main.py 3 times?
Specify a name for the logger; otherwise you always use the root logger.
import sys, logging

class Logger:
    def __init__(self, name):
        formatter = logging.Formatter('%(filename)s:%(lineno)s %(levelname)s:%(message)s')
        stdout_handler = logging.StreamHandler(sys.stdout)
        stdout_handler.setFormatter(formatter)
        self.logger = logging.getLogger(name)
        self.logger.addHandler(stdout_handler)
        self.logger.setLevel(logging.DEBUG)

    def debug(self, message):
        self.logger.debug(message)
http://docs.python.org/howto/logging.html#advanced-logging-tutorial :
A good convention to use when naming loggers is to use a module-level
logger, in each module which uses logging, named as follows:
logger = logging.getLogger(__name__)
logging.getLogger('') will return exactly the same object each time you call it. So each time you instantiate a Logger (why use old-style classes here?) you attach one more handler, resulting in printing to one more target. Since all your targets point to the same thing, the last call to .debug() will print through each of the three StreamHandler objects pointing to sys.stdout, resulting in three lines being printed.
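A minimal demonstration of that accumulation, independent of the original files:

import logging

root = logging.getLogger('')  # always the same root logger object
root.setLevel(logging.DEBUG)
root.addHandler(logging.StreamHandler())
root.addHandler(logging.StreamHandler())
root.addHandler(logging.StreamHandler())
root.debug("hi")              # printed three times, once per handler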
First. Don't create your own Logger class.
Just configure the existing logger classes with the existing logging configuration tools.
Second. Each time you create your own Logger you also create new handlers and then attach the new (duplicate) handler to the root logger. This leads to duplication of messages.
If you have several modules that must (1) run stand-alone and (2) also run as part of a larger, composite application, you need to do the following. This ensures that logging configuration is done only once.
import sys
import logging

logger = logging.getLogger(__file__)  # Unique logger for a, b or main

if __name__ == "__main__":
    logging.basicConfig(stream=sys.stdout, level=logging.DEBUG,
                        format='%(filename)s:%(lineno)s %(levelname)s:%(message)s')

# From this point forward, you can use the `logger` object.
logger.info("Hi")