I enclose the code sample below. I extended the Logger class to try to make it more user-friendly for our purposes by adding a SumoLogic handler that can be easily utilised to send log messages to SumoLogic. This functionality works as expected and is not the issue. An abbreviated but functional form of my extended class can be seen below:
import sys
import logging
class CustomLogger(logging.getLoggerClass()):
    def __init__(self, logger_name, log_level="INFO"):
        MSG_FORMAT = '%(asctime)s %(levelname)s %(name)s: %(message)s'
        DATETIME_FORMAT = '%Y-%m-%d %H:%M:%S'
        formatter = logging.Formatter(MSG_FORMAT, DATETIME_FORMAT)

        # Initialise logger instance.
        super(CustomLogger, self).__init__(logger_name)
        self.handlers = []
        self.setLevel(log_level)

        # Create the stdout handler
        stream_handler = logging.StreamHandler(sys.stdout)
        stream_handler.setLevel(log_level)
        stream_handler.setFormatter(formatter)
        self.addHandler(stream_handler)

        self.debug('Initialised "{}" Logger'.format(logger_name))

    def add_sumologic_handler(self, **kwargs):
        """
        Call this method with a kwargs dictionary to add a Sumologic handler to your logger. This is done in the
        following way:
        ... Comments removed ...
        """
        self.debug(f"Attempting to add Sumologic Handler with the following endpoint settings:\n{kwargs}")
        # ... SumoLogic handler addition code here ...

    def remove_sumologic_handler(self):
        """
        Call this method to turn off Sumologic logging, if it has already been activated.
        """
        self.debug("Attempting to remove Sumologic Handler.")
        # ... SumoLogic handler removal code here ...
I have found that when I call other methods from the base Logger class, they do not behave as I expect. For example, I often use the logger.getChild() method in script functions to identify, in log messages, the function that currently has execution focus. When I call it, I expect it to take my logger and append the given child name as a suffix to my base logger name when writing log messages. However, that is not what occurs. The new child logger is created, but with a log level of WARNING, meaning it has not inherited the log level of my parent logger.
Here is a test case that shows the expected behaviour, without using my CustomLogger() class:
import sys
import logging
def initialise_logger(logger_name, log_level="INFO"):
    MSG_FORMAT = '%(asctime)s %(levelname)s %(name)s: %(message)s'
    DATETIME_FORMAT = '%Y-%m-%d %H:%M:%S'
    formatter = logging.Formatter(MSG_FORMAT, DATETIME_FORMAT)

    # Create the stdout handler
    stream_handler = logging.StreamHandler(sys.stdout)
    stream_handler.setLevel(log_level)
    stream_handler.setFormatter(formatter)

    # Create the logger
    logger = logging.getLogger(logger_name)
    logger.setLevel(log_level)
    logger.addHandler(stream_handler)
    return logger
logger1 = initialise_logger("expected.logger", "INFO")
logger1.info("Test Message 1")
logger1a = logger1.getChild("child")
logger1a.info("Expected log message - note logger name hierarchy and inherited log level of INFO.")
... and here is an example using my CustomLogger() class extension of Logger:
logger1 = CustomLogger("expected.logger", "INFO")
logger1.info("Test Message 1")
logger1a = logger1.getChild("child")
logger1a.info("Expected log message is not output by the child logger, meaning the parent log level has not been inherited by the getChild() call.")
logger1a.warning("Now a log message is output by the child logger.")
assert isinstance(logger1, CustomLogger), "logger1 is NOT an instance of CustomLogger"
assert isinstance(logger1a, CustomLogger), "logger1a is NOT an instance of CustomLogger"
assert isinstance(logger1a, logging.getLoggerClass()), "logger1a is NOT an instance of Logger"
The above shows that, in the case of my CustomLogger class instance, a call to the base Logger class's getChild() method returns an instance of Logger rather than CustomLogger, contrary to my expectation that the child would inherit the parent's CustomLogger class, along with its settings.
Is anybody able to explain what is going wrong for me here, or provide a workaround, please?
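One possible workaround, sketched below on the assumption that the root cause is that a directly constructed CustomLogger is never registered with the logging manager: getChild() then builds its child via the manager, finds no registered parent named "expected.logger", attaches the child to the root logger (level WARNING) and uses the default Logger class. Registering the instance in the manager's loggerDict (not a formally public API) lets children find it as their parent:

import logging
import sys

class CustomLogger(logging.getLoggerClass()):
    def __init__(self, logger_name, log_level="INFO"):
        super().__init__(logger_name)
        self.setLevel(log_level)
        stream_handler = logging.StreamHandler(sys.stdout)
        stream_handler.setLevel(log_level)
        self.addHandler(stream_handler)
        # Assumption: registering this instance lets later getChild()/getLogger()
        # calls find it as a parent instead of falling back to the root logger.
        logging.Logger.manager.loggerDict[logger_name] = self

logger1 = CustomLogger("expected.logger", "INFO")
logger1a = logger1.getChild("child")
logger1a.info("Inherits the parent's INFO level through the logger hierarchy.")

The child returned this way is still a plain Logger; making getChild() return CustomLogger instances as well would additionally require logging.setLoggerClass(CustomLogger), in which case __init__ must tolerate being called with only a name and avoid adding duplicate handlers.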
Related
I have a root logging class that I created, which I'd like to use for each micro-service function that I'm deploying.
Example output log: [2023-01-01 13:46:26] - INFO - [utils.logger.<module>:5] - testaaaaa
The logger is defined in utils.logger so that's why it's showing that in the log, hence %(name)s.
Instead of using the same root logger name which is set with logger = logging.getLogger(__name__), how can I get the same structure in dot notation where the logger is being instantiated and called?
Even if I have to modify my logger class to accept a name parameter when initializing the object, that is fine. But I like the dot notation because I will have files like routes.users.functiona, routes.users.functionb, routes.database.functiona and so on.
So I want to show which module the logging came from. Can't seem to follow how logging is capturing the path when using __name__.
Also if you have any suggestions about making the following more robust :)
Here is my class:
import typing
import logging
class GlobalLogger:
    MINIMUM_GLOBAL_LEVEL = logging.DEBUG
    GLOBAL_HANDLER = logging.StreamHandler()
    LOG_FORMAT = "[%(asctime)s] - %(levelname)s - [%(name)s.%(funcName)s:%(lineno)d] - %(message)s"
    LOG_DATETIME_FORMAT = "%Y-%m-%d %H:%M:%S"

    def __init__(self, level: typing.Union[int, str] = MINIMUM_GLOBAL_LEVEL):
        self.level = level
        self.logger = self._get_logger()
        self.log_format = self._log_formatter()
        self.GLOBAL_HANDLER.setFormatter(self.log_format)

    def _get_logger(self):
        logger = logging.getLogger(__name__)
        logger.setLevel(self.level)
        logger.addHandler(self.GLOBAL_HANDLER)
        return logger

    def _log_formatter(self):
        return logging.Formatter(fmt=self.LOG_FORMAT, datefmt=self.LOG_DATETIME_FORMAT)
I think you misunderstood the logging concept.
The variable __name__ holds the name of the Python module where it is used. In your case it will have the value of the module where the GlobalLogger class is defined, and it does not matter where you create an instance of your class. => Check it with a print statement.
At the top of each of your Python modules (*.py), initialize a module logger simply with logger = logging.getLogger(__name__). In your code, access the logging methods via the module variable logger.
The logging data are automatically propagated to the parent logger based on the dot notation. If you have module A.B.C, the data are propagated C -> B -> A.
The output handlers and formatting can be defined for every logger, or just for the root logger, e.g. A or just the (empty) "" root.
I usually have a function InitLogging, where I get a handle to the root logger and set up the output handlers. You do not need to set up logging handlers in every module.
Conclusion:
If you still want to use your class, pass the name of your logger as a named parameter to __init__. When you create an instance of your class, use the variable __name__ during initialization.
I do not think you need to create so many specific logger classes. You can define the logger properties later.
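For illustration, a minimal sketch of that conclusion, assuming the only change to the class from the question is a name parameter that is passed through to logging.getLogger():

import logging
import typing

class GlobalLogger:
    MINIMUM_GLOBAL_LEVEL = logging.DEBUG
    GLOBAL_HANDLER = logging.StreamHandler()
    LOG_FORMAT = "[%(asctime)s] - %(levelname)s - [%(name)s.%(funcName)s:%(lineno)d] - %(message)s"
    LOG_DATETIME_FORMAT = "%Y-%m-%d %H:%M:%S"

    def __init__(self, name: str, level: typing.Union[int, str] = MINIMUM_GLOBAL_LEVEL):
        self.GLOBAL_HANDLER.setFormatter(
            logging.Formatter(fmt=self.LOG_FORMAT, datefmt=self.LOG_DATETIME_FORMAT))
        self.logger = logging.getLogger(name)   # the caller's name, not utils.logger
        self.logger.setLevel(level)
        self.logger.addHandler(self.GLOBAL_HANDLER)

# In e.g. routes/users.py:
logger = GlobalLogger(__name__).logger
logger.info("logged under the caller's module name")

Each module that instantiates it with __name__ then shows up with its dotted module path (routes.users, routes.database, ...) in the %(name)s field.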
I have a logging function with hardcoded logfile name (LOG_FILE):
setup_logger.py
import logging
import sys
FORMATTER = logging.Formatter("%(levelname)s - %(asctime)s - %(name)s - %(message)s")
LOG_FILE = "my_app.log"
def get_console_handler():
    console_handler = logging.StreamHandler(sys.stdout)
    console_handler.setFormatter(FORMATTER)
    return console_handler

def get_file_handler():
    file_handler = logging.FileHandler(LOG_FILE)
    file_handler.setFormatter(FORMATTER)
    return file_handler

def get_logger(logger_name):
    logger = logging.getLogger(logger_name)
    logger.setLevel(logging.DEBUG)  # better to have too much log than not enough
    logger.addHandler(get_console_handler())
    logger.addHandler(get_file_handler())
    # with this pattern, it's rarely necessary to propagate the error up to parent
    logger.propagate = False
    return logger
I use this in various modules this way:
main.py
from _Core import setup_logger as log
def main(incoming_feed_id: int, type: str) -> None:
    logger = log.get_logger(__name__)
    ...rest of my code
database.py
from _Core import setup_logger as log
logger = log.get_logger(__name__)
class Database:
    ...rest of my code
etl.py
import _Core.database as db
from _Core import setup_logger as log
logger = log.get_logger(__name__)
class ETL:
    ...rest of my code
What I want to achieve is to always change the logfile's path and name on each run based on arguments passed to the main() function in main.py.
Simplified example:
If main() receives the following arguments: incoming_feed_id = 1, type = simple_load, the logfile's name should be 1simple_load.log.
I am not sure what the best practice is for this. What I came up with is probably the worst thing to do: add a log_file parameter to the get_logger() function in setup_logger.py, so I can set a filename from main() in main.py. But in that case I would need to pass the parameters from main to the other modules as well, which I do not think I should do, as the database class, for example, is not even used in main.py.
I don't know enough about your application to be sure this'll work for you, but you can just configure the root logger in main() by calling get_logger('', filename_based_on_cmdline_args), and stuff logged to the other loggers will be passed to the root logger's handlers for processing if the logger levels configured allow it. The way you're doing it now seems to open multiple handlers pointing to the same file, which seems sub-optimal. The other modules can just use logging.getLogger(__name__) rather than log.get_logger(__name__).
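To make that suggestion concrete, here is a rough sketch; the function name configure_root_logger and the exact file-name scheme are assumptions rather than part of the original setup_logger.py:

import logging
import sys

FORMATTER = logging.Formatter("%(levelname)s - %(asctime)s - %(name)s - %(message)s")

def configure_root_logger(log_file: str) -> None:
    # Equivalent to get_logger('', ...): only the root logger gets handlers.
    root = logging.getLogger("")
    root.setLevel(logging.DEBUG)

    console_handler = logging.StreamHandler(sys.stdout)
    console_handler.setFormatter(FORMATTER)
    root.addHandler(console_handler)

    file_handler = logging.FileHandler(log_file)
    file_handler.setFormatter(FORMATTER)
    root.addHandler(file_handler)

# database.py, etl.py, etc. just do this at module level and keep propagate=True:
logger = logging.getLogger(__name__)

def main(incoming_feed_id: int, type: str) -> None:
    # Build the log file name from the arguments, e.g. 1simple_load.log.
    configure_root_logger(f"{incoming_feed_id}{type}.log")
    logger.info("All module loggers propagate to the root logger's handlers.")

if __name__ == "__main__":
    main(1, "simple_load")

Because every logging.getLogger(__name__) logger propagates to the root by default, only main() needs to know the file name.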
I have a logger and a class DuplicateFilter that filters messages that already were logged once. I would like to include the time when the logging happened into my filter but when I try to access the property asctime I get: AttributeError: 'LogRecord' object has no attribute 'asctime'
Here is a small example of how I set up my logger:
import logging
import logging.handlers as log_handlers
import traceback

def setup_logger(filename):
    class DuplicateFilter(object):
        def __init__(self):
            self.msgs = set()

        def filter(self, record):
            if logger.level == 10:
                return True
            rv = True
            try:
                print(record.asctime)
                msg = record.threadName + " " + record.msg
                if msg in self.msgs:
                    rv = False
            except Exception as e:
                print(traceback.format_exc())
                return rv
            self.msgs.add(msg)
            return rv

    log_formatter = logging.Formatter("%(asctime)s [%(levelname)-5.5s] [%(threadName)-30.30s] %(message)s")
    file_handler = log_handlers.TimedRotatingFileHandler(filename, encoding="UTF-8", when='W6', backupCount=12)
    file_handler.setFormatter(log_formatter)
    console_handler = logging.StreamHandler()
    console_handler.setFormatter(log_formatter)

    logger = logging.getLogger(filename)
    logger.propagate = False
    logger.addHandler(console_handler)
    logger.addHandler(file_handler)
    logger.setLevel(logging.INFO)

    dup_filter = DuplicateFilter()
    logger.addFilter(dup_filter)
    return logger

log = setup_logger("test.log")
for i in range(3):
    log.info("wow")
Now my records look like this:
2018-07-18 14:34:49,642 [INFO ] [MainThread ] wow
They clearly have an asctime, and I explicitly set the asctime property in the constructor of my Formatter. The only question similar to mine that I found says
To have message and asctime set, you must first call self.format(record) inside the emit method
but doesn't the logging.Formatter do that when you specify the log string the way I did with %(asctime)s?
EDIT: running.t was right; I just didn't understand what the documentation meant. I solved it by adding my formatter to my filter and calling the format function at the beginning:
def __init__(self, formatter):
    self.msgs = {}
    self.formatter = formatter

def filter(self, record):
    self.formatter.format(record)
In the filter objects section of the Python logging module documentation I found the following note:
Note that filters attached to handlers are consulted before an event is emitted by the handler, whereas filters attached to loggers are consulted whenever an event is logged (using debug(), info(), etc.), before sending an event to handlers. This means that events which have been generated by descendant loggers will not be filtered by a logger’s filter setting, unless the filter has also been applied to those descendant loggers.
Your filter is added to the logger, while formatters are added to handlers. So in my opinion your filter method is applied before any of the formatters you specified.
BTW, shouldn't your DuplicateFilter inherit from logging.Filter?
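A compressed sketch that combines the two points above (subclassing logging.Filter and formatting the record before reading asctime); passing the handlers' formatter into the filter is an assumption, not the only way to do it:

import logging

class DuplicateFilter(logging.Filter):
    def __init__(self, formatter):
        super().__init__()
        self.formatter = formatter
        self.msgs = set()

    def filter(self, record):
        # Formatting the record populates record.asctime and record.message.
        self.formatter.format(record)
        print(record.asctime)  # now accessible
        msg = record.threadName + " " + record.getMessage()
        if msg in self.msgs:
            return False
        self.msgs.add(msg)
        return True

formatter = logging.Formatter("%(asctime)s [%(levelname)-5.5s] [%(threadName)-30.30s] %(message)s")
logger = logging.getLogger("test.log")
logger.addFilter(DuplicateFilter(formatter))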
I have the following class:
import logging
from logging import FileHandler, StreamHandler

class Log(object):
    # class __new__
    # __new__ is used instead of __init__ because __new__ is able to return (where __init__ can't)
    def __new__(self, name, consolelevel, filelevel):
        formatter = logging.Formatter('%(asctime)s %(levelname)s: %(name)s: %(message)s')

        # Create consolehandler and set formatting (level is set in the ROOT)
        consolehandler = StreamHandler()
        consolehandler.setFormatter(formatter)

        # Create filehandler, set level and set formatting
        filehandler = FileHandler(name + '.log')
        filehandler.setLevel(filelevel)
        filehandler.setFormatter(formatter)

        # Create the root logger, add console and file logger. Set the rootlevel == consolelevel.
        self.root = logging.getLogger(name)
        # causing me problems....
        self.root.setLevel(consolelevel)
        self.root.addHandler(consolehandler)
        self.root.addHandler(filehandler)
        self.root.propagate = True
        return self.root

    # Close the logger object
    def close():
        # to be implemented
        pass
I use this class to log to the console and to a file (depending on the levels that are set). The problem is that the root logger's level seems to take precedence over the handlers' levels. Is there a way to disable this? For now I set the root level to the same level as the console level, but this does not work...
Any advice?
Thanks in advance and with best regards,
JR
A problem that I can see in your code is that it will add more handlers whenever you instantiate the Log class. You probably do not want this.
Keep in mind that getLogger always returns the same instance when called with the same argument; it basically implements the singleton pattern.
Hence when you later call addHandler, it will add a new handler every time.
The way to deal with logging is to create a logger at the module level and use it.
Also, I'd avoid using __new__. In your case you can use a simple function. And note that your Log.close method won't work, because your __new__ method does not return a Log instance, and thus the returned logger doesn't have that method.
Regarding the level of the logger, I don't understand why you do not set the level on the consolehandler but on the whole logger.
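A rough illustration of those suggestions: a plain module-level function instead of __new__, the level filtering done per handler, and a guard so that repeated calls do not stack handlers. The function name and defaults here are just assumptions:

import logging

def get_logger(name, console_level=logging.INFO, file_level=logging.DEBUG):
    logger = logging.getLogger(name)
    logger.setLevel(logging.DEBUG)   # keep the logger permissive; handlers filter

    if not logger.handlers:          # avoid adding handlers on every call
        formatter = logging.Formatter('%(asctime)s %(levelname)s: %(name)s: %(message)s')

        consolehandler = logging.StreamHandler()
        consolehandler.setLevel(console_level)
        consolehandler.setFormatter(formatter)
        logger.addHandler(consolehandler)

        filehandler = logging.FileHandler(name + '.log')
        filehandler.setLevel(file_level)
        filehandler.setFormatter(formatter)
        logger.addHandler(filehandler)

    return logger

# Module-level usage:
log = get_logger('mylog', console_level=logging.WARNING)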
This is a simplified version of the module I am making. The module contains a few classes that all need logging functionality. Each class logs to a different file, and it should also be possible to change the file handler levels between classes (e.g. gamepad class: console.debug and filehandler.info; MQTT class: console.info and filehandler.debug).
Therefore I thought that setting up a log class would be the easiest way. Please bear in mind that I usually do electronics but am now combining that with Python, so my skills are pretty basic....
#!/bin/env python2.7
from __future__ import division
from operator import *
import logging
from logging import FileHandler
from logging import StreamHandler
import pygame
import threading
from pygame.locals import *
import mosquitto
import time
from time import sleep
import sys

class ConsoleFileLogger(object):
    # class constructor
    def __init__(self, filename, loggername, rootlevel, consolelevel, filelevel):
        # logger levels: DEBUG, INFO, WARNING, ERROR, CRITICAL
        # define a root logger and set its ROOT logging level
        logger = logging.getLogger(loggername)
        logger.setLevel(rootlevel)

        # define a Handler which writes messages or higher to the sys.stderr (Console)
        self.console = logging.StreamHandler()
        # set the logging level
        self.console.setLevel(consolelevel)

        # define a Handler which writes messages to a logfile
        self.logfile = logging.FileHandler(filename + '.log')
        # set the logging level
        self.logfile.setLevel(filelevel)

        # create formatter and add it to the handlers
        formatter = logging.Formatter('%(asctime)s %(levelname)s: %(name)s: %(message)s')
        self.console.setFormatter(formatter)
        self.logfile.setFormatter(formatter)

        # add the handlers to the root logger
        logger.addHandler(self.console)
        logger.addHandler(self.logfile)
        self._logger = logger

    # set a new instance of the logger
    def set(self):
        return self._logger

    # Stop and remove the ConsoleFileLogger object
    def remove(self):
        self._logger.removeHandler(self.console)
        self._logger.removeHandler(self.logfile)
        self._logfile.FileHandler().close()

class Gamepad():
    # class constructor
    def __init__(self, mqttgamepad):
        self.logger = ConsoleFileLogger('BaseLogFiles/Gamepad', 'Gamepad', logging.INFO, logging.INFO, logging.INFO).set()
        if joystickcount == 0:
            self.logger.error('No gamepad connected')
        elif joystickcount == 1:
            self.gamepad = pygame.joystick.Joystick(0)
            self.gamepad.init()
            self.logger.debug('Joystick name %s', self.gamepad.get_name())
            self.logger.debug('nb of axes = %s', self.gamepad.get_numaxes())
            self.logger.debug('nb of balls = %s', self.gamepad.get_numballs())
            self.logger.debug('nb of buttons = %s', self.gamepad.get_numbuttons())
            self.logger.debug('nb of mini joysticks = %s', self.gamepad.get_numhats())
        elif joystickcount > 1:
            self.logger.error('only one gamepad is allowed')

    def run(self):
        self.logger.debug('gamepad running')

class MQTTClient():
    def __init__(self, clientname):
        self.logger = ConsoleFileLogger('BaseLogFiles/MQTT/Pub', clientname, logging.DEBUG, logging.DEBUG, logging.DEBUG).set()
        self.logger.debug('test')

    def run(self):
        self.logger.info('Connection MQTT Sub OK')

def main():
    logger = ConsoleFileLogger('BaseLogFiles/logControlCenterMain', 'ControlCenterMain', logging.DEBUG, logging.DEBUG, logging.DEBUG).set()

    mqttclient = MQTTClient("MQTTClient")
    mqttclient.connect()

    gamepad = Gamepad(mqttclient)
    if gamepad.initialized():
        gamepadthread = threading.Thread(target=gamepad.run)
        gamepadthread.start()

    mqtttpubhread = threading.Thread(target=mqttclient.run)
    mqtttpubhread.start()

    logger.info('BaseMain started')

    # Monitor the running program for a KeyboardInterrupt exception
    # If this is the case all threads and other methods can be closed the correct way :)
    while 1:
        try:
            sleep(1)
        except KeyboardInterrupt:
            logger.info('Ctrl-C pressed')
            gamepad.stop()
            mqttclient.stop()
            logger.info('BaseMain stopped')
            sys.exit(0)

if __name__ == '__main__':
    main()
I'm looking for a simple way to extend the logging functionality defined in the standard python library. I just want the ability to choose whether or not my logs are also printed to the screen.
Example: Normally to log a warning you would call:
logging.basicConfig(level=logging.DEBUG, format='%(asctime)s %(levelname)s: %(message)s', filename='log.log', filemode='w')
logging.warning("WARNING!!!")
This sets the configuration of the log and puts the warning into the log file.
I would like to have something along the lines of a call like:
logging.warning("WARNING!!!", True)
where the True argument signifies whether the log is also printed to stdout.
I've seen some examples of implementations that override the logger class, but I am new to the language and don't really follow what is going on, or how to implement this idea. Any help would be greatly appreciated :)
The Python logging module defines these classes:
Loggers that emit log messages.
Handlers that send those messages to a destination.
Formatters that format log messages.
Filters that filter log messages.
A Logger can have Handlers. You add them by invoking the addHandler() method. A Handler can have Filters and Formatters. You similarly add them by invoking the addFilter() and setFormatter() methods, respectively.
It works like this:
import logging
# make a logger
main_logger = logging.getLogger("my logger")
main_logger.setLevel(logging.INFO)
# make some handlers
console_handler = logging.StreamHandler() # by default, sys.stderr
file_handler = logging.FileHandler("my_log_file.txt")
# set logging levels
console_handler.setLevel(logging.WARNING)
file_handler.setLevel(logging.INFO)
# add handlers to logger
main_logger.addHandler(console_handler)
main_logger.addHandler(file_handler)
Now, you can use this object like this:
main_logger.info("logged in the FILE")
main_logger.warning("logged in the FILE and on the CONSOLE")
If you just run python on your machine, you can type the above code into the interactive console and you should see the output. The log file will get created in your current directory, if you have permission to create files in it.
I hope this helps!
It is possible to override logging.getLoggerClass() to add new functionality to loggers. I wrote a simple class which prints green messages to stdout.
Most important parts of my code:
class ColorLogger(logging.getLoggerClass()):
    __GREEN = '\033[0;32m%s\033[0m'
    __FORMAT = {
        'fmt': '%(asctime)s %(levelname)s: %(message)s',
        'datefmt': '%Y-%m-%d %H:%M:%S',
    }

    def __init__(self, format=__FORMAT):
        formatter = logging.Formatter(**format)

        self.root.setLevel(logging.INFO)
        self.root.handlers = []

        (...)

        handler = logging.StreamHandler()
        handler.setFormatter(formatter)
        self.root.addHandler(handler)

    def info(self, message):
        self.root.info(message)

    (...)

    def info_green(self, message):
        self.root.info(self.__GREEN, message)

    (...)

if __name__ == '__main__':
    logger = ColorLogger()
    logger.info("This message has default color.")
    logger.info_green("This message is green.")
Handlers send the log records (created by loggers) to the appropriate destination.
(from the docs: http://docs.python.org/library/logging.html)
Just set up multiple handlers with your logging object, one to write to file, another to write to the screen.
UPDATE
Here is an example function you can call in your classes to get logging set up with a handler.
def set_up_logger(self):
    # create logger object
    self.log = logging.getLogger("command")
    self.log.setLevel(logging.DEBUG)

    # create console handler and set min level recorded to debug messages
    ch = logging.StreamHandler()
    ch.setLevel(logging.DEBUG)

    # add the handler to the log object
    self.log.addHandler(ch)
You would just need to set up another handler for files, ala the StreamHandler code that's already there, and add it to the logging object. The line that says ch.setLevel(logging.DEBUG) means that this particular handler will take logging messages that are DEBUG or higher. You'll likely want to set yours to WARNING or higher, since you only want the more important things to go to the console. So, your logging would work like this:
self.log.info("Hello, World!") -> goes to file
self.log.error("OMG!!") -> goes to file AND console