main module
module A
module B
The main module uses functions from these modules. Those functions call logger.info, logger.warning, and so on, to show me what went wrong in the code.
Objective:
- Log everything in main, A, and B.
- A way to set the logging level of A and B from the main Jupyter notebook, at runtime (e.g. when I need more information about a function in A, switch its logging level from INFO to DEBUG).
By the way, the main script has:
import logging, sys

# create logger
logger = logging.getLogger('logger')
logger.setLevel(logging.DEBUG)

# create file handler and console handler, both at debug level
fh = logging.FileHandler('process.log')
fh.setLevel(logging.DEBUG)
ch = logging.StreamHandler(sys.stdout)
ch.setLevel(logging.DEBUG)

# create formatter
formatter = logging.Formatter('%(asctime)s - %(name)s - %(levelname)s - %(message)s\n')

# add formatter to both handlers
fh.setFormatter(formatter)
ch.setFormatter(formatter)

# add handlers to logger
logger.addHandler(fh)
logger.addHandler(ch)
I want to use a Logger object and configure it directly, instead of logging.basicConfig. But if that's not possible, other approaches are fine.
If you do:
logger = logging.getLogger('logger')
in Module A and Module B, then they will have access to the same logger as your main file. From there, you can set whatever level you want, at any time. E.g.
# ../module_a.py
import logging
logger = logging.getLogger('logger')
logger.setLevel(whatever) # Now all instances of "logger" will be set to that level.
Basically, loggers are globally registered by name and accessible through the logging module directly from any other module.
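As a concrete illustration (a minimal sketch, assuming modules A and B fetch the shared 'logger' as above), the notebook can change verbosity at any moment. The 'logger.A' / 'logger.B' variant at the end is hypothetical and would require each module to request its own child logger:

import logging

# In the main notebook, at any time:
logging.getLogger('logger').setLevel(logging.DEBUG)  # verbose everywhere
# ... call the function you are investigating ...
logging.getLogger('logger').setLevel(logging.INFO)   # back to normal

# Hypothetical variant: if A used getLogger('logger.A') and B used
# getLogger('logger.B'), the child loggers would propagate records to the
# parent's handlers, and the notebook could tune each module independently:
logging.getLogger('logger.A').setLevel(logging.DEBUG)
logging.getLogger('logger.B').setLevel(logging.WARNING)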
Related
I have a main process that makes use of several other modules, and those modules in turn use other modules. I need all the logs to go into a single log file. Because of the TimedRotatingFileHandler, my logging behaves differently after midnight; I found out why, but I'm not clear on how to solve it.
Below is log_config.py which is used by all other modules to get the logger and log.
'''
import logging
import sys
from logging.handlers import TimedRotatingFileHandler

FORMATTER = logging.Formatter("%(asctime)s — %(name)s — %(message)s")
LOG_FILE = "my_app.log"

def get_file_handler():
    file_handler = TimedRotatingFileHandler(LOG_FILE, when='midnight')
    file_handler.setFormatter(FORMATTER)
    return file_handler

def get_logger(logger_name):
    logger = logging.getLogger(logger_name)
    logger.setLevel(logging.DEBUG)  # better to have too much log than not enough
    logger.addHandler(get_file_handler())
    # with this pattern, it's rarely necessary to propagate the error up to parent
    logger.propagate = False
    return logger
'''
All other modules call 'logger = log_config.get_logger(__name__)' and use it to log.
I came across QueueHandler and QueueListener, but I'm not sure how to use them in my code.
How can I use these to serialize logs to a single file?
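One possible shape, a sketch rather than a definitive answer: keep a single QueueListener that owns the one TimedRotatingFileHandler, and have get_logger() hand out QueueHandler instances that all feed the same queue. Assuming the log_config.py shown above, it might look like this:

import logging
import queue
from logging.handlers import QueueHandler, QueueListener, TimedRotatingFileHandler

FORMATTER = logging.Formatter("%(asctime)s — %(name)s — %(message)s")
LOG_FILE = "my_app.log"

# One queue and one listener for the whole process; the listener owns the
# single file handler, so only one writer ever touches the file.
_log_queue = queue.Queue(-1)

_file_handler = TimedRotatingFileHandler(LOG_FILE, when='midnight')
_file_handler.setFormatter(FORMATTER)

_listener = QueueListener(_log_queue, _file_handler)
_listener.start()  # call _listener.stop() at shutdown to flush pending records

def get_logger(logger_name):
    logger = logging.getLogger(logger_name)
    logger.setLevel(logging.DEBUG)
    logger.addHandler(QueueHandler(_log_queue))  # cheap: just enqueues records
    logger.propagate = False
    return logger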
I've got a SageMaker instance running a Jupyter notebook. I'd like to use Python's logging module to write to a log file, but it doesn't work.
My code is pretty straightforward:
import logging
logger = logging.getLogger()
formatter = logging.Formatter("%(asctime)s - %(levelname)s - %(name)s - %(message)s", datefmt="%y/%m/%d %H:%M:%S")
fhandler = logging.FileHandler("taxi_training.log")
fhandler.setFormatter(formatter)
logger.addHandler(fhandler)
logger.debug("starting log...")
This should write a line to my file taxi_training.log but it doesn't.
I tried the reload function from importlib, and I also tried setting the output stream to sys.stdout explicitly. Nothing is logged to the file or to CloudWatch.
Do I need to add anything to my SageMaker instance for this to work properly?
The Python logging module requires a logging level and one or more handlers to process output. By default, the logging level is set to WARNING (30), and records that find no configured handler fall back to a last-resort handler that writes to stderr. If a logging level and/or handler is not explicitly defined, these settings are inherited from the parent root logger. These settings can be verified by adding the following lines to the bottom of your code:
# Verify levels and handlers
print("Parent Logger: "+logger.parent.name)
print("Parent Level: "+str(logger.parent.level))
print("Parent Handlers: "+str(logger.parent.handlers))
print("Logger Level: "+str(logger.level))
print("Logger Handlers: "+str(logger.handlers))
The easiest way to instantiate a handler and set a logging level is by running the logging.basicConfig() function (documentation). This sets a logging level and a stream handler (stderr by default) on the root logger; child loggers created in the same code will pass their records up to it. Here is an example using the code provided:
import logging
logger = logging.getLogger('log')
logging.basicConfig(level=logging.INFO) # Set logging level and STDOUT handler
logger.info(5)
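Note that basicConfig alone still won't route records into taxi_training.log. For the original snippet, a sketch along these lines (setting the logger's level explicitly below the default WARNING) should make the file handler fire:

import logging

logger = logging.getLogger()
logger.setLevel(logging.DEBUG)  # without this, the root logger stays at WARNING

formatter = logging.Formatter("%(asctime)s - %(levelname)s - %(name)s - %(message)s",
                              datefmt="%y/%m/%d %H:%M:%S")
fhandler = logging.FileHandler("taxi_training.log")
fhandler.setFormatter(formatter)
logger.addHandler(fhandler)

logger.debug("starting log...")  # now reaches the file handler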
I have multiple Python modules that I'd like to share the same logger while preserving the call hierarchy in the logs. I'd also like the logger's name to be the name of the calling module (or the calling module stack). I haven't been able to work out how to get the calling module's name except by poking at the stack trace, which doesn't feel very Pythonic.
Is this possible?
main.py
import logging
from sub_module import sub_log
logger = logging.getLogger(__name__)
logger.info("main_module")
sub_log()
sub_module.py
import logging

def sub_log():
    logger = logging.getLogger(???)
    logger.info("sub_module")
Desired Output
TIME main INFO main_module
TIME main.sub_module INFO sub_module
To solve your problem Pythonically, use the logging Formatter (its %(module)s field captures the calling module's name).
For reference, check the Logging Docs.
main.py
import logging
from submodule import sub_log
from submodule2 import sub_log2
logger = logging.getLogger(__name__)
logger.setLevel(logging.DEBUG)
# create file handler which logs even debug messages
fh = logging.FileHandler('test.log')
fh.setLevel(logging.DEBUG)
# create formatter and add it to the handlers
formatter = logging.Formatter('%(asctime)s %(name)s.%(module)s - %(levelname)s - %(message)s')
fh.setFormatter(formatter)
# add the handlers to the logger
logger.addHandler(fh)
sub_log("test")
sub_log2("test")
submodule.py
import logging
import __main__

def sub_log(msg):
    logger = logging.getLogger(__main__.__name__)
    logger.info(msg)
I've created a second submodule (same code, different name).
My Results:
2018-10-16 20:41:23,860 __main__.submodule - INFO - test
2018-10-16 20:41:23,860 __main__.submodule2 - INFO - test
I hope this will help you :)
Is there any way I can provide the filename for the logger from my main module?
I am using the following approach; however, it's not working: all the logs go to the xyz.log file rather than main.log.
Updated as per suggestion from nosklo
logger.py
import logging

formatter = logging.Formatter(fmt='[%(asctime)s] - {%(filename)s:%(lineno)d} %(levelname)s - %(message)s')

def _get_file_handler(file_name="xyz.log"):
    file_handler = logging.FileHandler(file_name)
    file_handler.setLevel(logging.DEBUG)
    file_handler.setFormatter(formatter)
    return file_handler

def get_logger(name):
    logger = logging.getLogger(name)
    logger.setLevel(logging.DEBUG)
    logger.addHandler(_get_file_handler())
    return logger
parser.py
import logger

log = logger.get_logger(__name__)

def parse():
    log.info("is there anyway this could go to main.log and xyz.log")
main.py
import logging
import logger
import parser

log = logger.get_logger(__name__)

if __name__ == '__main__':
    for handler in list(log.handlers):
        if isinstance(handler, logging.FileHandler):
            log.removeHandler(handler)
    log.addHandler(logger._get_file_handler("main.log"))
    log.info("is there anyway this could go to main.log and xyz.log?")
    parser.parse()
Is there a way I can set the log file name from my main.py module rather than from logger.py?
You're calling get_logger() first, so by the time you set the class attribute in FileName.file_name = "main.log", the get_logger function has already finished and the logger is already set up to write to xyz.log. Changing the variable later won't affect the logger, since it is already configured.
To change the previously selected file, you'd have to retrieve the logger, remove the previous handler, and add a new file handler. Another option is to set the variable before calling get_logger(), so that when you call it, the variable already has the correct value.
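A minimal sketch of the first option, assuming the logger.py shown above:

import logging
import logger

log = logger.get_logger(__name__)

# Retrieve the logger, drop the old file handler, attach the new one
for handler in list(log.handlers):  # copy the list; we mutate it while iterating
    if isinstance(handler, logging.FileHandler):
        log.removeHandler(handler)
        handler.close()  # release xyz.log
log.addHandler(logger._get_file_handler('main.log'))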
Logger instances can have multiple file handlers. Use a function like the one below to add another handler with the additional output path you want. Log messages will be sent to both (or all) text logs added to the instance. You can even configure the handlers with different logging levels, so you can filter messages into separate logs for critical errors, info messages, etc.
import logging

def add_handler(output_log_path, log):
    # Set up text logger and add it to logging instance
    file_logger = logging.FileHandler(output_log_path)
    file_logger.setLevel(logging.DEBUG)
    formatter = logging.Formatter('%(asctime)s | logger name: %(name)s | module: %(module)s | lineno: %(lineno)d | %(message)s')
    file_logger.setFormatter(formatter)
    log.addHandler(file_logger)
    return log
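For instance, a hypothetical setup writing the same messages to two files (the paths are illustrative):

import logging

log = logging.getLogger(__name__)
log.setLevel(logging.DEBUG)

log = add_handler('debug.log', log)   # add_handler sets DEBUG on each handler;
log = add_handler('errors.log', log)  # call setLevel on a handler to filter per file

log.info('this line lands in both debug.log and errors.log')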
I like using the Python logging module because it standardizes my application and makes it easier to collect metrics. The problem I face is that for every application (or file.py) I keep putting this at the top of my code:
import logging
import os
import time

logger = logging.getLogger(__name__)
if not os.path.exists('log'):
    os.makedirs('log')
logName = time.strftime("%Y%m%d.log")
hdlr = logging.FileHandler('log/%s' % logName)
logger.setLevel(logging.INFO)
ch = logging.StreamHandler()
ch.setLevel(logging.DEBUG)
formatter = logging.Formatter('%(asctime)s - %(name)s - %(funcName)s %(levelname)s - %(message)s')
ch.setFormatter(formatter)
hdlr.setFormatter(formatter)
logger.addHandler(ch)
logger.addHandler(hdlr)
This is tedious and repetitive. Is there a better way to do this?
How do people handle logging in a large application with multiple modules?
Take a look at logging.basicConfig().
If you wrap the basicConfig() in a function then you can just import your function and pass specific args (i.e. log filename, format, level, etc).
It will help to condense the code a bit and make it more extensible.
For example -
import logging

def test_logging(filename, format):
    logging.basicConfig(filename=filename, format=format, level=logging.DEBUG)
    # test
    logging.info('Info test...')
    logging.debug('Debug test...')
Then just import the test_logging() function into other programs.
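For example (the module name mylogconf is illustrative, not a real package):

# other_program.py
from mylogconf import test_logging

# Configure the root logger once, then log from anywhere
test_logging('app.log', '%(asctime)s - %(levelname)s - %(message)s')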
Hope that this helps.
Read Using logging in multiple modules from Logging Cookbook.
What you need to do is use the getLogger() function to get a logger with pre-defined settings.
import logging
logger = logging.getLogger('some_logger')
You set those settings just once at application startup time.
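A minimal sketch of that one-time setup, with an illustrative entry point (the file names are assumptions, not from the cookbook):

# main.py -- run once at application startup
import logging

logging.basicConfig(
    filename='app.log',
    format='%(asctime)s - %(name)s - %(levelname)s - %(message)s',
    level=logging.INFO,
)

# some_module.py -- no configuration here, just fetch the named logger
import logging

logger = logging.getLogger('some_logger')
logger.info('uses the handlers configured at startup')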