I have a logger configuration module below, my_logger.py:
import logging
import logging.handlers

def my_logger(module_name, log_file):
    logger = logging.getLogger(module_name)
    logger.setLevel(logging.DEBUG)
    # Create handlers
    c_handler = logging.StreamHandler()
    f_handler = logging.handlers.RotatingFileHandler(filename=log_file)
    c_handler.setLevel(logging.DEBUG)
    f_handler.setLevel(logging.DEBUG)
    # Create formatters and add them to the handlers
    c_format = logging.Formatter('%(asctime)s - %(name)s - %(levelname)s - %(lineno)d - %(message)s')
    f_format = logging.Formatter('%(asctime)s - %(name)s - %(levelname)s - %(lineno)d - %(message)s')
    c_handler.setFormatter(c_format)
    f_handler.setFormatter(f_format)
    # Add handlers to the logger
    logger.addHandler(c_handler)
    logger.addHandler(f_handler)
    return logger
This my_logger.py sits in the package root directory:
my_package/
    my_logger.py
    test.py
    api/api.py
    logs/
Then in my test.py:
import os
import my_logger as logger  # the module above, imported under the name "logger"

abspath = os.path.abspath(os.path.dirname(__file__))
logger_info = logger.my_logger("test_info", os.path.join(abspath, "../logs/info.log"))
logger_debug = logger.my_logger("test_debug", os.path.join(abspath, "../logs/debug.log"))
logger_error = logger.my_logger("test_error", os.path.join(abspath, "../logs/error.log"))
logger_info.info('Info test ...')
logger_debug.debug('Debug test ...')
logger_error.error('Error test ...')
I want debug messages to go to debug.log, info messages to info.log, and errors to error.log.
For each module where I want to log, I need to add the following three lines:
logger_info = logger.my_logger(module_info, os.path.join(abspath,"../logs/info.log"))
logger_debug = logger.my_logger(module_debug, os.path.join(abspath,"../logs/debug.log"))
logger_error = logger.my_logger(module_error, os.path.join(abspath,"../logs/error.log"))
Is this normal practice? I want all log messages from all modules to go into the same 3 files under logs/.
Short answer
To achieve exactly what you described, pass the desired level into my_logger and set it on the logger itself, as below. Handlers without an explicit level inherit the logger's, so there is no need to call setLevel on each handler separately.
import logging
from logging import handlers

def my_logger(module_name, log_file, level):
    logger = logging.getLogger(module_name)
    logger.setLevel(level)
    # Create handlers
    c_handler = logging.StreamHandler()
    f_handler = handlers.RotatingFileHandler(filename=log_file)
    # Create formatters and add them to the handlers
    c_format = logging.Formatter('%(asctime)s - %(name)s - %(levelname)s - %(lineno)d - %(message)s')
    f_format = logging.Formatter('%(asctime)s - %(name)s - %(levelname)s - %(lineno)d - %(message)s')
    c_handler.setFormatter(c_format)
    f_handler.setFormatter(f_format)
    # Add handlers to the logger
    logger.addHandler(c_handler)
    logger.addHandler(f_handler)
    return logger

if __name__ == "__main__":
    logger_info = my_logger("test_info", "info.log", logging.INFO)
    logger_debug = my_logger("test_debug", "debug.log", logging.DEBUG)
    logger_error = my_logger("test_error", "error.log", logging.ERROR)
    logger_info.info('Info test ...')
    logger_debug.debug('Debug test ...')
    logger_error.error('Error test ...')
Output:
python test_logger2.py
2020-12-04 16:57:00,771 - test_info - INFO - 29 - Info test ...
2020-12-04 16:57:00,771 - test_debug - DEBUG - 30 - Debug test ...
2020-12-04 16:57:00,771 - test_error - ERROR - 31 - Error test ...
cat info.log
2020-12-04 16:57:00,771 - test_info - INFO - 29 - Info test ...
cat debug.log
2020-12-04 16:57:00,771 - test_debug - DEBUG - 30 - Debug test ...
cat error.log
2020-12-04 16:57:00,771 - test_error - ERROR - 31 - Error test ...
Long answer:
Why 3 copies?
You are calling your my_logger function three times, and each call adds a new file handler as well as a new stream handler to a logger. That's why you see three copies on your console (three stream handlers). On top of that, all your handlers are set to DEBUG level, which is why every logger prints every message you give it. You don't want an ERROR handler to process DEBUG/INFO records, so you should set its level to ERROR.
I don't think this is the standard approach to logging. You should instead have a single logger with four handlers (stream, file_debug, file_info, file_error). Additionally, a debug log file should include all logs, and an info log file should include info and error logs. Details below.
import logging
from logging import handlers

def main():
    logger = logging.getLogger()
    c_format = logging.Formatter('%(asctime)s - %(name)s - %(levelname)s - %(lineno)d - %(message)s')
    # You need to set the logger's level to the lowest (DEBUG)
    logger.setLevel(logging.DEBUG)
    c_handler = logging.StreamHandler()
    c_handler.setLevel(logging.DEBUG)
    c_handler.setFormatter(c_format)
    f1_handler = handlers.RotatingFileHandler("debug.log")
    f1_handler.setLevel(logging.DEBUG)
    f1_handler.setFormatter(c_format)
    f2_handler = handlers.RotatingFileHandler("info.log")
    f2_handler.setLevel(logging.INFO)
    f2_handler.setFormatter(c_format)
    f3_handler = handlers.RotatingFileHandler("error.log")
    f3_handler.setLevel(logging.ERROR)
    f3_handler.setFormatter(c_format)
    logger.addHandler(c_handler)
    logger.addHandler(f1_handler)
    logger.addHandler(f2_handler)
    logger.addHandler(f3_handler)
    logger.debug("A debug line")
    logger.info("An info line")
    logger.error("An error line")

if __name__ == "__main__":
    main()
The output is:
python test_logger.py
2020-12-04 16:48:56,247 - root - DEBUG - 32 - A debug line
2020-12-04 16:48:56,248 - root - INFO - 33 - An info line
2020-12-04 16:48:56,248 - root - ERROR - 34 - An error line
cat debug.log
2020-12-04 16:49:06,673 - root - DEBUG - 32 - A debug line
2020-12-04 16:49:06,673 - root - INFO - 33 - An info line
2020-12-04 16:49:06,673 - root - ERROR - 34 - An error line
cat info.log
2020-12-04 16:49:06,673 - root - INFO - 33 - An info line
2020-12-04 16:49:06,673 - root - ERROR - 34 - An error line
cat error.log
2020-12-04 16:49:06,673 - root - ERROR - 34 - An error line
Here you see that your debug.log file also contains logs from the other levels, and your info.log file contains error logs too, because that is the whole rationale behind log levels: a lower-level destination should also receive records of the higher levels (DEBUG < INFO < WARNING < ERROR). That's why what you want is not the standard way of doing it, but it can perfectly well be achieved as described in the short answer.
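If you nevertheless want each file to contain records of exactly one level, the usual trick is a logging.Filter attached to the handler. A minimal sketch (the logger and file names here are made up for the example):

```python
import logging

class LevelOnlyFilter(logging.Filter):
    """Pass only records whose level matches exactly."""
    def __init__(self, level):
        super().__init__()
        self.level = level

    def filter(self, record):
        return record.levelno == self.level

logger = logging.getLogger("exact_levels")
logger.setLevel(logging.DEBUG)

info_handler = logging.FileHandler("info_only.log", mode="w")
info_handler.addFilter(LevelOnlyFilter(logging.INFO))  # INFO records only
logger.addHandler(info_handler)

logger.debug("dropped by the filter")
logger.info("written")
logger.error("also dropped")
```

The same pattern, with one handler plus filter per level, would give you an error.log that contains only ERROR records, and so on.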
Related
For some reason my Python logger does not want to recognize the microseconds format.
import logging, io
stream = io.StringIO()
logger = logging.getLogger("TestLogger")
logger.setLevel(logging.INFO)
logger.propagate = False
log_handler = logging.StreamHandler(stream)
log_format = logging.Formatter('%(asctime)s - %(name)s - %(levelname)s - %(message)s',"%Y-%m-%d %H:%M:%S.%f %Z")
log_handler.setFormatter(log_format)
logger.addHandler(log_handler)
logger.info("This is test info log")
print(stream.getvalue())
It returns:
2023-01-06 18:52:34.%f UTC - TestLogger - INFO - This is test info log
Why are microseconds missing?
Update
I am running
Python 3.10.4
Distributor ID: Debian
Description: Debian GNU/Linux 11 (bullseye)
Release: 11
Codename: bullseye
The issue is that the default formatTime implementation converts record.created with time.localtime and formats the resulting struct_time with time.strftime; since struct_time carries no millisecond or microsecond information, the %f directive is left untouched.
Also note that the LogRecord calculates the milliseconds separately and stores them in another attribute named msecs.
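If millisecond precision is enough, that msecs attribute lets you skip a custom formatter entirely: put %(msecs)03d into the format string itself and keep datefmt free of %f. A minimal sketch (logger name is made up):

```python
import logging
import io

stream = io.StringIO()
handler = logging.StreamHandler(stream)
# %(msecs)03d appends the record's millisecond part;
# datefmt stays within what time.strftime understands.
fmt = logging.Formatter(
    '%(asctime)s.%(msecs)03d - %(name)s - %(levelname)s - %(message)s',
    datefmt='%Y-%m-%d %H:%M:%S')
handler.setFormatter(fmt)

logger = logging.getLogger("MsecsLogger")
logger.setLevel(logging.INFO)
logger.propagate = False
logger.addHandler(handler)

logger.info("This is test info log")
print(stream.getvalue())  # e.g. 2023-01-06 18:52:34.123 - MsecsLogger - INFO - ...
```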
To get what you're looking for we need a custom version of the Formatter class that uses a different converter than time.localtime and is able to interpret the microseconds:
from datetime import datetime

class MyFormatter(logging.Formatter):
    def formatTime(self, record, datefmt=None):
        if not datefmt:
            return super().formatTime(record, datefmt=datefmt)
        return datetime.fromtimestamp(record.created).astimezone().strftime(datefmt)

...
log_format = MyFormatter('%(asctime)s - %(name)s - %(levelname)s - %(message)s', "%Y-%m-%d %H:%M:%S.%f %Z")
...
Should output:
2023-01-06 17:47:54.828521 EST - TestLogger - INFO - This is test info log
I found that the answer given by Ashwini Chaudhary was not applicable for me, and found a slightly simpler solution: create a function with the same datetime-based formatting and replace the logging.Formatter.formatTime method with it, i.e.:
import datetime

def _formatTime(self, record, datefmt: str = None) -> str:
    return datetime.datetime.fromtimestamp(record.created).astimezone().strftime(datefmt)

log_format = logging.Formatter('%(asctime)s - %(name)s - %(levelname)s - %(message)s', "%Y-%m-%d %H:%M:%S.%f %Z")
logging.Formatter.formatTime = _formatTime
And then generating your logger as normal
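Putting the monkey-patch together, an end-to-end sketch might look like this (the logger name is made up; note the patch affects every Formatter in the process, and that this _formatTime requires a non-None datefmt):

```python
import logging
import io
import datetime

def _formatTime(self, record, datefmt=None):
    # Interpret %f and %Z via datetime instead of time.strftime
    return datetime.datetime.fromtimestamp(record.created).astimezone().strftime(datefmt)

logging.Formatter.formatTime = _formatTime  # global monkey-patch

stream = io.StringIO()
handler = logging.StreamHandler(stream)
handler.setFormatter(logging.Formatter(
    '%(asctime)s - %(name)s - %(levelname)s - %(message)s',
    "%Y-%m-%d %H:%M:%S.%f %Z"))

logger = logging.getLogger("PatchedLogger")
logger.setLevel(logging.INFO)
logger.propagate = False
logger.addHandler(handler)

logger.info("hello")  # asctime now carries six-digit microseconds
```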
I want to use different log levels. For example, I need to log actions as INFO when everything is OK, and as WARNING when the required action does NOT proceed OK (or ERROR, FATAL etc., it doesn't matter right now). How can I achieve this? Basically I need just two levels, but unfortunately when I try to use two levels my log file is created empty. Below is my code; it doesn't work and the log file is created empty.
import inspect
import logging as log  # assuming "log" is an alias for the logging module

class Logger:
    @staticmethod
    def info_logger(logLevel=log.INFO):
        logger_name = inspect.stack()[1][3]
        logger = log.getLogger(logger_name)
        logger.setLevel(logLevel)
        fh = log.FileHandler(log_file)
        formatter = log.Formatter("%(asctime)s - %(levelname)s - %(message)s", datefmt='%m/%d/%Y %I: %M: %S %p')
        fh.setFormatter(formatter)
        logger.addHandler(fh)
        return logger

    @staticmethod
    def warning_logger(logLevel=log.WARNING):
        logger_name = inspect.stack()[1][3]
        logger = log.getLogger(logger_name)
        logger.setLevel(logLevel)
        fh = log.FileHandler(error_log_file)
        formatter = log.Formatter("%(asctime)s - %(levelname)s - %(message)s", datefmt='%m/%d/%Y %I: %M: %S %p')
        fh.setFormatter(formatter)
        logger.addHandler(fh)
        return logger
I send my logger output to the 'seq' module. I have:
_log_format = f"%(asctime)s - [%(levelname)s] - request_id=%(request_id)s - %(name)s - (%(filename)s).%(funcName)s(%(lineno)d) - %(message)s"
logger.info("Hello, {name}", name="world")
As a result, in 'seq' I only get the message itself: none of the fields from f"%(asctime)s - [%(levelname)s] - request_id=%(request_id)s - %(name)s - (%(filename)s).%(funcName)s(%(lineno)d) - %(message)s" were added into 'seq'.
In the stream everything is all right:
2021-08-20 14:40:24,244 - [INFO] - request_id=None - app - (__init__.py).create_app(43) - Hello, world
I send logs to 'seq' this way:
import seqlog

seqlog.log_to_seq(
    server_url="http://localhost:5341/",
    api_key="My API Key",
    level=logging.INFO,
    batch_size=10,
    auto_flush_timeout=10,  # seconds
    override_root_logger=True,
)
import logging
import google.cloud.logging

class LoggingClass:
    @staticmethod
    def _get_logging_level(level):
        level = level.lower()
        if level == 'debug':
            logging_level = logging.DEBUG
        elif level == 'info':
            logging_level = logging.INFO
        elif level == 'warning':
            logging_level = logging.WARNING
        elif level == 'error':
            logging_level = logging.ERROR
        elif level == 'critical':
            logging_level = logging.CRITICAL
        else:
            # Default logging level
            logging_level = logging.INFO
        return logging_level

    @staticmethod
    def setup_logging(level='INFO', mode='formatted'):
        client = google.cloud.logging.Client()
        client.get_default_handler()
        cloud_logger = logging.getLogger('CloudLogging')
        logging_level = LoggingClass._get_logging_level(level)
        cloud_logger.setLevel(logging_level)
        if mode == 'simple':
            formatter = logging.Formatter('%(asctime)s - %(levelname)s - %(message)s')
        else:
            formatter = logging.Formatter(
                '%(process)d - %(thread)d - %(asctime)s - '
                '%(name)s - %(levelname)s - %(message)s'
            )
        chl = logging.StreamHandler()
        if cloud_logger.handlers:
            cloud_logger.handlers.pop()
        chl.setFormatter(formatter)
        cloud_logger.addHandler(chl)
        return cloud_logger

if __name__ == "__main__":
    cloud_logger = LoggingClass.setup_logging(level='INFO')
    cloud_logger.error("ok")
    cloud_logger.warning("ok1")
    cloud_logger.info("ok2")
The output I get on my Cloud Function log is:
12828 - 3444 - 2021-02-01 18:57:26,451 - CloudLogging - ERROR - ok
ok
12828 - 3444 - 2021-02-01 18:57:26,451 - CloudLogging - WARNING - ok1
ok1
12828 - 3444 - 2021-02-01 18:57:26,451 - CloudLogging - INFO - ok2
ok2
Can somebody point out where I am going wrong? Why the duplication? I only want the first line of each pair, in that format, in the Cloud Function log. If I do not use google cloud logging, I see a significant delay in the logs; with google cloud logging there is no delay, but there is the duplication!
I would suggest having a look at this article: Python and Stackdriver Logging.
To the best of my understanding, you get such results because the logging happens asynchronously and in small batches (on another thread behind the scenes).
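Another common source of doubled lines, worth ruling out, is propagation: records sent to 'CloudLogging' also bubble up to the root logger, and any handler installed there (for example by the runtime) emits them a second time. This is an assumption about the setup above, not a confirmed diagnosis; the mechanism itself can be demonstrated with plain handlers:

```python
import logging
import io

root_stream, child_stream = io.StringIO(), io.StringIO()
# Stands in for a handler the runtime might attach to the root logger
logging.getLogger().addHandler(logging.StreamHandler(root_stream))

child = logging.getLogger('CloudLogging')
child.setLevel(logging.INFO)
child.addHandler(logging.StreamHandler(child_stream))

child.info("once")           # reaches both handlers -> emitted twice overall
child.propagate = False      # stop records from bubbling up to the root
child.info("twice-no-more")  # now only the child's own handler sees it
```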
I am using the python logging module and initialising it with the following code:
import logging
import os
import sys

def initialize_logger(output_dir):
    '''
    Initialise the logger
    :param output_dir:
    :return:
    '''
    root = logging.getLogger()
    root.setLevel(logging.INFO)
    format = '%(asctime)s - %(levelname)-8s - %(message)s'
    date_format = '%Y-%m-%d %H:%M:%S'
    if 'colorlog' in sys.modules and os.isatty(2):
        cformat = '%(log_color)s' + format
        f = colorlog.ColoredFormatter(cformat, date_format,
                                      log_colors={'DEBUG': 'green', 'INFO': 'green',
                                                  'WARNING': 'bold_yellow', 'ERROR': 'bold_red',
                                                  'CRITICAL': 'bold_red'})
    else:
        f = logging.Formatter(format, date_format)
    # ch = logging.FileHandler(output_dir, "w")
    ch = logging.StreamHandler()
    ch.setFormatter(f)
    root.addHandler(ch)
As there is only one StreamHandler, I expect one line per message, but I am getting two prints on my console:
INFO:root:clearmessage:%ss1=00
2017-12-21 17:07:20 - INFO - clearmessage:%ss1=00
INFO:root:clearmessage:%ss2=00
2017-12-21 17:07:20 - INFO - clearmessage:%ss2=00
Every message is printed twice: once in the default 'INFO:root:' style and once with my formatter. Any idea why I am getting two prints? You can ignore the colour code in the snippet above.
You have two handlers. Clear the handlers before you add a new one:
root.handlers = [] # clears the list
root.addHandler(ch) # adds a new handler