In an attempt to learn the Python logging module, I made the small script below.
However, whenever I use the object my_logger, it writes to the file my_logger.log as specified by its FileHandler, but it also writes the same text to the file specified earlier in basicConfig, log.log. My question is: why does it write to both locations instead of just the file specified by the FileHandler?
import logging

logging.basicConfig(level=logging.INFO,
                    filename='log.log',
                    filemode="w",
                    format="%(levelname)s - %(message)s")

logging.debug("debug_message")        # Lowest  |
logging.info("info_message")          #         |
logging.warning("warning_message")    #         |
logging.error("error_message")        #         |
logging.critical("critical_message")  # Highest V

my_logger = logging.getLogger('My_Logger')
my_logger.info("Successfully created my custom logger")

handler = logging.FileHandler("my_logger.log")
formatter = logging.Formatter("%(name)s: %(levelname)s - %(message)s")
handler.setFormatter(formatter)
my_logger.addHandler(handler)
my_logger.info("Successfully created My Logger!")

try:
    1/0
except ZeroDivisionError as e:
    my_logger.exception("ZeroDivisionError")
You have two loggers in your script. One is the root logger, which is set up by basicConfig and accessed through the logging module itself. The other is "My_Logger", which is a child of the root logger. Both loggers are active in your script, and each writes to its own handlers; in addition, a child logger propagates its records up to the root logger's handlers by default.
That's why you are getting the logs written to both files.
If you want to use a customized handler, then don't use basicConfig. Use only one logger throughout your module.
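Alternatively, if you want to keep both loggers, a minimal sketch (same file names as above) is to stop the child logger from propagating its records to the root logger:

import logging

logging.basicConfig(level=logging.INFO,
                    filename='log.log',
                    filemode="w",
                    format="%(levelname)s - %(message)s")

my_logger = logging.getLogger('My_Logger')
my_logger.propagate = False  # records no longer bubble up to the root handler

handler = logging.FileHandler("my_logger.log")
handler.setFormatter(logging.Formatter("%(name)s: %(levelname)s - %(message)s"))
my_logger.addHandler(handler)

my_logger.info("Written only to my_logger.log")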
I have a main process which makes use of several other modules, and those modules in turn use other modules. I need all of the logs to go into a single log file. Because of the TimedRotatingFileHandler, my log behaves differently after midnight. I found out why this happens, but not how to solve it cleanly.
Below is log_config.py, which all the other modules use to get their logger and log.
import logging
import sys
from logging.handlers import TimedRotatingFileHandler

FORMATTER = logging.Formatter("%(asctime)s — %(name)s — %(message)s")
LOG_FILE = "my_app.log"

def get_file_handler():
    file_handler = TimedRotatingFileHandler(LOG_FILE, when='midnight')
    file_handler.setFormatter(FORMATTER)
    return file_handler

def get_logger(logger_name):
    logger = logging.getLogger(logger_name)
    logger.setLevel(logging.DEBUG)  # better to have too much log than not enough
    logger.addHandler(get_file_handler())
    # with this pattern, it's rarely necessary to propagate the error up to parent
    logger.propagate = False
    return logger
All the other modules call

logger = log_config.get_logger(__name__)

and use it to log.
I came across QueueHandler and QueueListener, but I am not sure how to use them in my code.
How can I use them to serialize the logs into a single file?
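A minimal sketch of how log_config.py could be rewired with them, so that every module's records funnel through one shared queue into a single TimedRotatingFileHandler (keeping the FORMATTER and LOG_FILE from above; starting the listener at module import is an assumption):

import logging
import queue
from logging.handlers import QueueHandler, QueueListener, TimedRotatingFileHandler

FORMATTER = logging.Formatter("%(asctime)s — %(name)s — %(message)s")
LOG_FILE = "my_app.log"

log_queue = queue.Queue(-1)  # unbounded queue shared by all loggers

# One file handler, owned by a single listener thread, does all the writing,
# so the midnight rollover happens exactly once.
file_handler = TimedRotatingFileHandler(LOG_FILE, when='midnight')
file_handler.setFormatter(FORMATTER)
listener = QueueListener(log_queue, file_handler)
listener.start()

def get_logger(logger_name):
    logger = logging.getLogger(logger_name)
    logger.setLevel(logging.DEBUG)
    logger.addHandler(QueueHandler(log_queue))  # loggers only enqueue records
    logger.propagate = False
    return logger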
My project consists of:
- main module
- module A
- module B

The main module uses functions from modules A and B. Those functions call logger.info, logger.warning, and so on, to report what went wrong in the code.
Objective:
- Log everything in main, A, and B.
- A way to set the logging level of A and B from the main Jupyter notebook at any moment (e.g. when I need more information about a function of A, switch the logging level from INFO to DEBUG).
By the way, the main script has:
import logging, sys

# create logger
logger = logging.getLogger('logger')
logger.setLevel(logging.DEBUG)

# create file handler and console handler, both set to debug
fh = logging.FileHandler('process.log')
fh.setLevel(logging.DEBUG)
ch = logging.StreamHandler(sys.stdout)
ch.setLevel(logging.DEBUG)

# create formatter
formatter = logging.Formatter('%(asctime)s - %(name)s - %(levelname)s - %(message)s\n')

# add formatter to both handlers
fh.setFormatter(formatter)
ch.setFormatter(formatter)

# add both handlers to logger
logger.addHandler(fh)
logger.addHandler(ch)
I want to use a Logger object and configure that object directly, instead of using logging's basicConfig. But if that's not possible, other approaches are fine.
If you do:
logger = logging.getLogger('logger')
in module A and module B, then they get the same logger object as your main file. From there, you can set whatever level you want at any time, e.g.
# ../module_a.py
import logging

logger = logging.getLogger('logger')
logger.setLevel(whatever)  # now every use of "logger" runs at that level
Basically, loggers are globally registered by name and accessible through the logging module directly from any other module.
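For the Jupyter use case, that means adjusting verbosity from the notebook is a one-liner; a minimal sketch, assuming the logger name 'logger' from above:

import logging

# in the main notebook, while investigating a function of A:
logging.getLogger('logger').setLevel(logging.DEBUG)  # more detail from A and B
# ...and back to normal afterwards:
logging.getLogger('logger').setLevel(logging.INFO)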
I have a use case where I want all logs to go to a single file. I am using a custom logger, on which I define a file handler that logs to the desired file.
The problem I'm facing is that the external modules I use log via the root logger, and I want them to log to the same file too.
Custom logger module:
import logging
from logging import FileHandler

FORMATTER = logging.Formatter("%(asctime)s - %(name)s - %(message)s")  # assumed; defined elsewhere in the original module

def initialize_logger():
    logger = logging.getLogger("custom")
    file_handler = FileHandler(filename="test")
    file_handler.setLevel(logging.INFO)
    file_handler.setFormatter(FORMATTER)
    logger.addHandler(file_handler)
    return logger

def get_logger():
    return logging.getLogger("custom")
The modules in my application use the following code:
from log import get_logger
logger = get_logger()
logger.info("Test")
The external modules I am using contain the following line:
logging.info("test")
Is there a way I can capture the logs from the external modules in the same file?
Just add your handler to the root logger instead of your own logger.
root_logger = logging.getLogger()
root_logger.addHandler(...)
This way, every log record, including those from external modules, will reach this file handler.
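As a sketch, initialize_logger from above could be rewritten like this (the formatter is assumed, as before); your own modules keep using named loggers, and their records propagate up to the root handler:

import logging
from logging import FileHandler

FORMATTER = logging.Formatter("%(asctime)s - %(name)s - %(message)s")  # assumed

def initialize_logger():
    root_logger = logging.getLogger()  # no name -> the root logger
    root_logger.setLevel(logging.INFO)
    file_handler = FileHandler(filename="test")
    file_handler.setLevel(logging.INFO)
    file_handler.setFormatter(FORMATTER)
    root_logger.addHandler(file_handler)

initialize_logger()
logging.getLogger("custom").info("Test")  # your modules -> "test"
logging.info("test")                      # external root-logger calls -> "test"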
As the AWS documentation suggests:
import logging

logger = logging.getLogger()
logger.setLevel(logging.INFO)

def my_logging_handler(event, context):
    logger.info('got event{}'.format(event))
    logger.error('something went wrong')
Then I made:
import logging
logging.basicConfig(level=logging.INFO)
logging.info("Hello World!")
The first snippet prints to the CloudWatch console, but the second one does not.
I don't see any difference, since both snippets use the root logger.
The reason logging does not seem to work is that the AWS Lambda Python runtime pre-configures a logging handler which, depending on the runtime version, may modify the format of the logged message and may add some metadata to the record if available. What is not preconfigured, though, is the log level. Since the root logger defaults to the WARNING level, messages at INFO and below are never printed.
As AWS documents, to use the logging library correctly in the AWS Lambda context, you only need to set the log level for the root logger:
import logging
logging.getLogger().setLevel(logging.INFO)
If you want your Python script to be executable both on AWS Lambda and with your local Python interpreter, you can check whether a handler is already configured and fall back to basicConfig (which creates the default stderr handler) otherwise:
import logging

if len(logging.getLogger().handlers) > 0:
    # The Lambda environment pre-configures a handler logging to stderr.
    # If a handler is already configured, `.basicConfig` does not execute.
    # Thus we set the level directly.
    logging.getLogger().setLevel(logging.INFO)
else:
    logging.basicConfig(level=logging.INFO)
Copied straight from the top answer in the question that #StevenBohrer's answer links to (this did the trick for me, after replacing the last line with my own config):
import logging

root = logging.getLogger()
if root.handlers:
    for handler in root.handlers[:]:  # copy the list: don't mutate it while iterating
        root.removeHandler(handler)
logging.basicConfig(format='%(asctime)s %(message)s', level=logging.DEBUG)
I've struggled with this exact problem. The solution that works both locally and on AWS CloudWatch is to set up your logging like this:
import logging

# Initialize your log configuration using basicConfig
logging.basicConfig(level=logging.INFO)

# Retrieve the root logger instance
logger = logging.getLogger()

# Log your output through the retrieved logger instance
logger.info("Python for the win!")
I had a similar problem, and I suspect that the Lambda container calls logging.basicConfig to add handlers BEFORE the Lambda code is imported. This seems like bad form...
My workaround was to check whether root logger handlers were configured and, if so, remove them, add my formatter and desired log level (using basicConfig), and restore the handlers.
See this article: Python logging before you run logging.basicConfig?
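A literal sketch of that workaround (the restore step is my reading of it; note that keeping both the runtime's handler and the one basicConfig creates can duplicate console output):

import logging

root = logging.getLogger()

# detach whatever handlers the runtime pre-installed...
saved_handlers = root.handlers[:]
for handler in saved_handlers:
    root.removeHandler(handler)

# ...so basicConfig actually runs with the desired format and level...
logging.basicConfig(format="%(asctime)s %(levelname)s %(message)s",
                    level=logging.INFO)

# ...and then put the original handlers back
for handler in saved_handlers:
    root.addHandler(handler)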
Probably they are not actually referencing the same logger.
In the first snippet, log the return value of logging.Logger.manager.loggerDict.
It will return a dict of the loggers that have already been initialized.
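For example (a sketch; the exact contents depend on which libraries have created loggers so far):

import logging

# maps logger names to the Logger (or PlaceHolder) objects created so far
print(logging.Logger.manager.loggerDict)
# e.g. {'botocore': <Logger botocore (WARNING)>, ...}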
Also, from the logging documentation, an important note on logging.basicConfig:
Does basic configuration for the logging system by creating a StreamHandler with a default Formatter and adding it to the root logger. The functions debug(), info(), warning(), error() and critical() will call basicConfig() automatically if no handlers are defined for the root logger.
This function does nothing if the root logger already has handlers configured for it.
Source: https://docs.python.org/2/library/logging.html#logging.basicConfig
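That note is exactly what bites on Lambda. A quick way to see it (a sketch to run inside a handler):

import logging

root = logging.getLogger()
print(root.handlers)  # on Lambda: already non-empty before your code runs

logging.basicConfig(level=logging.INFO)  # silently does nothing here
print(root.level)     # unchanged: still the level the runtime left in place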
It depends upon the AWS Lambda Python version.
If the Python version is 3.8 or above:
import os
import logging

default_log_args = {
    "level": logging.DEBUG if os.environ.get("DEBUG", False) else logging.INFO,
    "format": "%(asctime)s [%(levelname)s] %(name)s - %(message)s",
    "datefmt": "%d-%b-%y %H:%M",
    "force": True,  # Python 3.8+: removes any pre-existing root handlers first
}
logging.basicConfig(**default_log_args)
log = logging.getLogger("Run-Lambda")
log.info("I am here too")
If the Python version is 3.7 or below:
import os
import logging

root = logging.getLogger()
if root.handlers:
    for handler in root.handlers[:]:  # copy the list: don't mutate it while iterating
        root.removeHandler(handler)

default_log_args = {
    "level": logging.DEBUG if os.environ.get("DEBUG", False) else logging.INFO,
    "format": "%(asctime)s [%(levelname)s] %(name)s - %(message)s",
    "datefmt": "%d-%b-%y %H:%M",
}
logging.basicConfig(**default_log_args)
log = logging.getLogger("Run-Lambda")
log.info("I am here")
Essentially, the AWS logging monkey-patch needs to be handled in a very particular way, where:
- the log level is set from the TOP level of the script (i.e., at import time), and
- the log statements you are interested in are invoked from within the Lambda function.
Since it's generally considered good form not to run arbitrary code at module import time, you should usually be able to restructure your code so that the heavy lifting occurs only inside the Lambda function, as sketched below.
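A minimal layout following that advice (the function and logger names are illustrative):

import logging

# set the level at import time, at the top level of the module...
logging.getLogger().setLevel(logging.INFO)
logger = logging.getLogger(__name__)

def lambda_handler(event, context):
    # ...and do the actual logging (and the heavy lifting) inside the handler
    logger.info("processing event: %s", event)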
I would suggest using AWS Lambda Powertools for Python. The logging doc is here. Code example:
from aws_lambda_powertools import Logger

logger = Logger()  # sets service via env var
# OR logger = Logger(service="example")
It works both locally and on CloudWatch for me.
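For completeness, a slightly fuller sketch using Powertools' documented inject_lambda_context decorator (the service name is just an example):

from aws_lambda_powertools import Logger

logger = Logger(service="example")

@logger.inject_lambda_context  # enriches records with the Lambda context (request id, cold start, etc.)
def lambda_handler(event, context):
    logger.info("Hello from Powertools")
    return {"statusCode": 200}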
I have also solved this issue so that the logging code needs no changes between local and AWS runs. Below is the sample code:
import logging
import os

DEFAULT_LOG_LEVEL = logging.INFO  # assumed local default; adjust as needed

def set_default_logger():
    if "LOG_LEVEL" in os.environ:
        # For Lambda
        log_level = os.environ["LOG_LEVEL"]
    else:
        log_level = DEFAULT_LOG_LEVEL  # default log level for local runs

    root = logging.getLogger()
    if len(root.handlers) > 0:
        # For Lambda: drop the pre-configured handlers so basicConfig takes effect
        for handler in root.handlers[:]:
            root.removeHandler(handler)
        logging.basicConfig(level=log_level,
                            format='[%(asctime)s.%(msecs)03d] [%(levelname)s] [%(module)s] [%(funcName)s] [L%(lineno)d] [P%(process)d] [T%(thread)d] %(message)s',
                            datefmt='%Y-%m-%d %H:%M:%S')
    else:
        # For local: log to a file in the working directory
        l_name = os.getcwd() + '/' + 'count_mac_module.log'
        logging.basicConfig(filename=l_name, level=log_level,
                            format='[%(asctime)s.%(msecs)03d] [%(levelname)s] [%(module)s] [%(funcName)s] [L%(lineno)d] [P%(process)d] [T%(thread)d] %(message)s',
                            datefmt='%Y-%m-%d %H:%M:%S')

    logger = logging.getLogger(__name__)
    logger.debug(f"************* logging set for Lambda {os.getenv('AWS_LAMBDA_FUNCTION_NAME')} *************")
import logging

LOGGER = logging.getLogger()
HANDLER = LOGGER.handlers[0]  # reuse the handler the Lambda runtime pre-installed, just swap its formatter
HANDLER.setFormatter(
    logging.Formatter("[%(asctime)s] %(levelname)s:%(name)s:%(message)s", "%Y-%m-%d %H:%M:%S")
)
import os
import logging

logger = logging.getLogger()
logger.setLevel(logging.INFO)

def lambda_handler(event, context):
    logger.info('## ENVIRONMENT VARIABLES')
    logger.info(os.environ)
    logger.info('## EVENT')
    logger.info(event)