I have a big project and I want to add logging to it with the Python logging module.
What I want is: log all levels (DEBUG and above) to main.log, and have a second log file that receives only certain levels (ERROR and CRITICAL) and writes them to sub.log.
So main.log will log every level (DEBUG and above),
and sub.log will log ERROR and CRITICAL levels only.
What is the best way to do that? What comes to my mind is:
Write a function that walks over main.log, searches each line for an ERROR or CRITICAL record, and copies it into sub.log (but I am afraid this will not be efficient, since I may have hundreds of lines).
This is logger.py
# logger.py
import logging
from datetime import datetime
import time
FORMAT = '[%(levelname)s]: %(asctime)s : %(filename)s :%(lineno)s: %(funcName)s() =>: %(message)s'
TIME_FORMAT = '%Y-%m-%d %H:%M:%S'
PATH = '../../../scripts/utils/log_modules/logs/'
LOGFILE = f"logs_{datetime.now().strftime('%d%m%Y_%H%M%S')}.log"
def log(name, file_name=LOGFILE, level=logging.DEBUG):
    file_name = PATH + file_name
    logger = logging.getLogger(name)
    logger.setLevel(level)
    formatter = logging.Formatter(FORMAT, TIME_FORMAT)
    file_h = logging.FileHandler(file_name)
    file_h.setFormatter(formatter)
    logger.addHandler(file_h)
    return logger
# two files separate
And here I call the function, create the object, and start logging:
module2.py:
# From a random module.py
from logger import *

add_log = log(__name__)
# I don't want to create another object from the function, just this one.

def foo():
    # do something
    add_log.info("done correctly")      # This will be logged in main.log only
    add_log.error("There is an error")  # This will be logged in main.log AND sub.log

if __name__ == '__main__':
    foo()
You should add a second handler to your logger and let the logging machinery handle everything:
def log(name, file_name=LOGFILE, level=logging.DEBUG):
    logger = logging.getLogger(name)
    logger.setLevel(level)
    formatter = logging.Formatter(FORMAT, TIME_FORMAT)
    # build and add the main handler (everything the logger lets through)
    file_h = logging.FileHandler(PATH + file_name)
    file_h.setFormatter(formatter)
    logger.addHandler(file_h)
    # build and add the sub handler (only ERROR and above reach this file)
    file_h = logging.FileHandler(PATH + 'sub_' + file_name)
    file_h.setFormatter(formatter)
    file_h.setLevel(logging.ERROR)
    logger.addHandler(file_h)
    return logger
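For completeness, a quick usage sketch (reusing the question's log() function and the constants from logger.py above; nothing new is assumed beyond that). Records below ERROR only reach the main file, while ERROR and CRITICAL records reach both files, because the sub handler's own level filters independently of the logger's level:

add_log = log(__name__)
add_log.debug("debug detail")            # main log only
add_log.info("done correctly")           # main log only
add_log.error("There is an error")       # main log and sub log
add_log.critical("Something is broken")  # main log and sub log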
Related
I have a logging function with hardcoded logfile name (LOG_FILE):
setup_logger.py
import logging
import sys
FORMATTER = logging.Formatter("%(levelname)s - %(asctime)s - %(name)s - %(message)s")
LOG_FILE = "my_app.log"
def get_console_handler():
    console_handler = logging.StreamHandler(sys.stdout)
    console_handler.setFormatter(FORMATTER)
    return console_handler

def get_file_handler():
    file_handler = logging.FileHandler(LOG_FILE)
    file_handler.setFormatter(FORMATTER)
    return file_handler

def get_logger(logger_name):
    logger = logging.getLogger(logger_name)
    logger.setLevel(logging.DEBUG)  # better to have too much log than not enough
    logger.addHandler(get_console_handler())
    logger.addHandler(get_file_handler())
    # with this pattern, it's rarely necessary to propagate the error up to parent
    logger.propagate = False
    return logger
I use this in various modules this way:
main.py
from _Core import setup_logger as log
def main(incoming_feed_id: int, type: str) -> None:
    logger = log.get_logger(__name__)
    ...rest of my code
database.py
from _Core import setup_logger as log
logger = log.get_logger(__name__)
class Database:
    ...rest of my code
etl.py
import _Core.database as db
from _Core import setup_logger as log
logger = log.get_logger(__name__)
class ETL:
    ...rest of my code
What I want to achieve is to always change the logfile's path and name on each run based on arguments passed to the main() function in main.py.
Simplified example:
If main() receives the following arguments: incoming_feed_id = 1, type = simple_load, the logfile's name should be 1simple_load.log.
I am not sure what the best practice for this is. What I came up with is probably the worst thing to do: add a log_file parameter to the get_logger() function in setup_logger.py, so that I can set the filename from main() in main.py. But in that case I would need to pass the parameters from main to the other modules as well, which I do not think I should do, since, for example, the Database class is not even used in main.py.
I don't know enough about your application to be sure this will work for you, but you can simply configure the root logger in main() by calling get_logger('', filename_based_on_cmdline_args); anything logged to the other loggers will then be passed up to the root logger's handlers for processing, provided the configured logger levels allow it. The way you're doing it now appears to open multiple handlers pointing at the same file, which seems sub-optimal. The other modules can just use logging.getLogger(__name__) rather than log.get_logger(__name__).
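For illustration only, a minimal sketch of that idea. The log_file parameter on get_logger() is an assumption (the original setup_logger.py hardcodes LOG_FILE), and the file name built from the arguments is just the example from the question:

# main.py (sketch; log_file is a hypothetical parameter added to get_logger)
import logging
from _Core import setup_logger as log

def main(incoming_feed_id: int, type: str) -> None:
    # configure the root logger ('' is the root logger's name) once per run
    log.get_logger('', log_file=f"{incoming_feed_id}{type}.log")
    logging.getLogger(__name__).info("run started")

# database.py (sketch): the module needs no knowledge of the file name
import logging
logger = logging.getLogger(__name__)  # records propagate up to the root logger's handlers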
I had a script with logging capabilities, and it stopped working (the logging, not the script). I wrote a small example to illustrate the problem:
import logging
from os import remove
from os.path import exists
def setup_logger(logger_name, log_file, level=logging.WARNING):
    # Erase log if it already exists
    if exists(log_file):
        remove(log_file)
    # Configure log file
    l = logging.getLogger(logger_name)
    formatter = logging.Formatter('%(message)s')
    fileHandler = logging.FileHandler(log_file, mode='w')
    fileHandler.setFormatter(formatter)
    streamHandler = logging.StreamHandler()
    streamHandler.setFormatter(formatter)
    l.setLevel(level)
    l.addHandler(fileHandler)
    l.addHandler(streamHandler)

if __name__ == '__main__':
    setup_logger('log_pl', '/home/myuser/test.log')
    log_pl = logging.getLogger('log_pl')
    log_pl.info('TEST')
    log_pl.debug('TEST')
At the end of the script, the file test.log is created, but it is empty.
What am I missing?
Your setup_logger function specifies a (default) level of WARNING
def setup_logger(logger_name, log_file, level=logging.WARNING):
...and you later log two events that are at a lower level than WARNING, and are ignored as they should be:
log_pl.info('TEST')
log_pl.debug('TEST')
If you change your code that calls your setup_logger function to:
if __name__ == '__main__':
    setup_logger('log_pl', '/home/myuser/test.log', logging.DEBUG)
...I'd expect that it works as you'd like.
See the simple example in the Logging HOWTO page.
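One detail worth keeping in mind (a small illustrative sketch, not from the original post): a record has to clear the logger's level first and then each handler's level, so either threshold can silently drop it:

import logging

logger = logging.getLogger('demo')   # hypothetical name, for illustration only
logger.setLevel(logging.WARNING)     # the logger's threshold
handler = logging.StreamHandler()
handler.setLevel(logging.DEBUG)      # the handler's threshold
logger.addHandler(handler)

logger.info('dropped')    # below WARNING, rejected by the logger before any handler sees it
logger.warning('kept')    # clears both thresholds and is emitted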
I have been writing simple scripts, and I am trying to use the logging module to generate a log for each function in the scripts.
1) Based on the function name, I create a logger file handler and write log records through that handler. I also delete any previously existing file with the same name.
2) At the end of the function, I close the handler.
My problems are:
1) Even though I close the handler, the next time I run the same function I get an error that the file I am trying to delete (as part of setting up the logger file handler) is still being used.
2) The logger also prints everything to the console, which I don't want; I just want it to write everything to the file.
Here are the logger functions:
def setLogger(path):
    """
    #purpose: Initializes basic logging directory and file
    """
    LOG_FILENAME = path + "\\" + "log.txt"
    #logging.basicConfig(filename=LOG_FILENAME,
    #                    format='%(levelname)s %(asctime)s %(message)s', level=logging.INFO
    #                    )
    logger = logging.getLogger()
    logger.setLevel(logging.INFO)
    file_handler = logging.FileHandler(LOG_FILENAME)
    file_handler.setLevel(logging.INFO)
    formatter = logging.Formatter("%(levelname)s %(asctime)s %(message)s")
    file_handler.setFormatter(formatter)
    logger.addHandler(file_handler)
    return logger

def unsetLogger(logger):
    """
    #purpose: performs a basic shutdown of the logger
    """
    logger.handlers[0].close()
    logger.removeHandler(logger.handlers[0])
    logging.shutdown()
The way I use them is, for example:

def fun():
    os.remove(path)
    logger = setLogger(path)
    logging.info("hi")  # this writes to the file and prints on the console as well
    unsetLogger(logger)

If I run the function fun() once, it's all good, but if I run it again, I get the "can't delete" error for the log file.
Thanks in advance.
learningNinja
After making some slight modifications, I came up with the following test to try to reproduce your error, but I don't get any errors.
import os
import logging
def setLogger(path):
    """
    #purpose: Initializes basic logging directory and file
    """
    logger = logging.getLogger()
    logger.setLevel(logging.INFO)
    # Simplified log file path (I just use the full value passed in, and don't append "\log.txt")
    file_handler = logging.FileHandler(path)
    file_handler.setLevel(logging.INFO)
    formatter = logging.Formatter("%(levelname)s %(asctime)s %(message)s")
    file_handler.setFormatter(formatter)
    logger.addHandler(file_handler)
    return logger

def unsetLogger(logger):
    """
    #purpose: performs a basic shutdown of the logger
    """
    logger.handlers[0].close()
    logger.removeHandler(logger.handlers[0])
    logging.shutdown()

def fun():
    try:
        # Was getting an error trying to remove a file that didn't exist on
        # first execution...
        os.remove("log.txt")
    except:
        pass
    logger = setLogger("log.txt")
    logging.info("hi")
    unsetLogger(logger)
fun()
fun()
fun()
See if there is anything I'm doing differently than your actual code and maybe that might help you.
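Not part of the original answer, but a small sketch of an alternative tear-down that closes and detaches every handler on the logger (rather than only handlers[0]), so no handler is left holding the log file open when the next run tries to delete it:

def unsetLogger(logger):
    # close and remove every handler so the file handle is released
    for handler in list(logger.handlers):
        handler.close()
        logger.removeHandler(handler)

It may also help to log through the logger returned by setLogger() (logger.info("hi")) rather than the module-level logging.info(), so it is explicit which logger, and therefore which handlers, each record goes through.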
I created a module named log.py, in which a function defines how the log will be recorded. Here is the basic code:
import logging
import time
def set_up_log():
    """
    Create a logging file.
    """
    #
    # Create the parent logger.
    #
    logger = logging.getLogger(__name__)
    logger.setLevel(logging.INFO)
    #
    # Create a file handler.
    #
    file_handler = logging.FileHandler('report\\activity.log')
    file_handler.setLevel(logging.INFO)
    formatter = logging.Formatter('%(asctime)s - %(filename)s - %(name)s - %(levelname)4s - %(message)s')
    file_handler.setFormatter(formatter)
    logger.addHandler(file_handler)
    #
    # Start recording.
    #
    logger.info('______ STARTS RECORDING _______')

if __name__ == '__main__':
    set_up_log()
A second module named read_file.py uses this log.py to record potential errors.
import logging
import log
log.set_up_log()
logger = logging.getLogger(__name__)
def read_bb_file(input_file):
    """
    Input_file must be the path.
    Open the source_name and read the content. Return the result.
    """
    content = list()
    logger.info('noi')
    try:
        file = open(input_file, 'r')
    except IOError, e:
        logger.error(e)
    else:
        for line in file:
            str = line.rstrip('\n\r')
            content.append(str)
        file.close()
        return content

if __name__ == "__main__":
    logger.info("begin execution")
    c = read_bb_file('textatraiter.out')
    logger.info("end execution")
When launching read_file.py from the command prompt, I get this error:
No handlers could be found for logger "__main__"
My result in the file is the following
2014-05-12 13:32:58,690 - log.py - log - INFO - ______ STARTS RECORDING _______
I have read lots of topics here and in the Python docs, but it seems I did not understand them properly, since I still get this error.
I should add that I would like to keep the log setup apart in a function, and not define it explicitly in my main method.
You have 2 distinct loggers and you're only configuring one.
The first is the one you make in log.py and set up correctly. Its name however will be log, because you have imported this module from read_file.py.
The second logger, the one you're hoping is the same as the first, is the one you assign to the variable logger in read_file.py. Its name will be __main__ because you're calling this module from the command line. You're not configuring this logger.
What you could do is to add a parameter to set_up_log to pass the name of the logger in, e.g.
def set_up_log(logname):
    logger = logging.getLogger(logname)
That way, you will set the handlers and formatters for the correct logging instance.
Organizing your logs in a hierarchy is the way logging was intended to be used by Vinay Sajip, the original author of the module. So your modules would only log to a logging instance with the fully qualified name, as given by __name__. Then your application code could set up the loggers, which is what you're trying to accomplish with your set_up_log function. You just need to remember to pass it the relevant name, that's all. I found this reference very useful at the time.
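To make that concrete, a minimal sketch of how read_file.py could then use the parameterised set_up_log(); the module and function names are the ones from the question:

# read_file.py (sketch)
import logging
import log

log.set_up_log(__name__)            # configures the logger that is actually used below
logger = logging.getLogger(__name__)

logger.info("begin execution")      # now reaches the file handler added by set_up_log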