Take an action each time a log is rotated in Python

I want to write a descriptive header line into each log file created by the logger.
Currently I'm using a separate logger process that runs all the time. It receives information from a queue and writes it to the log; many modules push information onto this logging queue.
My current sample code is:
import logging
import time
from logging.handlers import TimedRotatingFileHandler as rotate

def info_log(log_queue):
    logger = logging.getLogger("My Log")
    logger.setLevel(logging.INFO)
    handler = rotate("log/info.log", when="D", interval=30, backupCount=13)
    logger.addHandler(handler)
    desc_string = "yyyy/mm/dd-HH:MM:SS \t name \t country \n"
    logger.info(desc_string)
    while True:
        result = log_queue.get().split("#")
        logger.info(result[0] + "\t" + result[1] + "\t" + result[2] + "\n")
Each time the log is rotated, I want desc_string to be written first to the new log file. How can I do that?
In other words, how can the program know when a log has been rotated?

Maybe you can simply override the doRollover method of TimedRotatingFileHandler?
class CustomFileHandler(TimedRotatingFileHandler):
    def doRollover(self):
        super().doRollover()
        self.stream.write(desc_string)

# to use it
handler = CustomFileHandler("log/info.log", when="D", interval=30, backupCount=13)
logger.addHandler(handler)
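For completeness, here is a minimal runnable sketch of that idea (my own assembly of the pieces above; the header text and the sample record are placeholders, and it assumes Python 3 and the handler's default delay=False so the stream is reopened by doRollover):

import logging
from logging.handlers import TimedRotatingFileHandler

DESC_STRING = "yyyy/mm/dd-HH:MM:SS \t name \t country"  # header describing the columns

class CustomFileHandler(TimedRotatingFileHandler):
    """TimedRotatingFileHandler that writes a header line into every new file."""

    def doRollover(self):
        super().doRollover()                    # perform the normal rotation
        self.stream.write(DESC_STRING + "\n")   # then write the header into the fresh file

logger = logging.getLogger("My Log")
logger.setLevel(logging.INFO)
handler = CustomFileHandler("log/info.log", when="D", interval=30, backupCount=13)
logger.addHandler(handler)
logger.info(DESC_STRING)                         # the very first file never goes through doRollover
logger.info("2024/01/01-00:00:00\tAlice\tIN")    # illustrative record, made-up values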

Related

How to rotate log files in Python

I came across RotatingFileHandler in order to rotate log files in Python, and I have this piece of code.
My requirement: my log file is appended to every few seconds, and the log should rotate whenever the file grows beyond 1 GB.
This is my pseudo code, where I use only 40 bytes to check whether the rotation works.
import logging
import time
from logging.handlers import RotatingFileHandler

#----------------------------------------------------------------------
def create_rotating_log(path):
    """
    Creates a rotating log
    """
    logger = logging.getLogger("Rotating Log")
    logger.setLevel(logging.INFO)
    # add a rotating handler
    handler = RotatingFileHandler(path, maxBytes=40,
                                  backupCount=1)
    logger.addHandler(handler)

#----------------------------------------------------------------------
if __name__ == "__main__":
    log_file = "test.log"
    create_rotating_log(log_file)
Now, as per this code, whenever my log file reaches 40 bytes it should be renamed to test.log.1 and new content should go into a fresh test.log.
So I manually opened my test.log file, added content to make its size > 40 bytes, and ran this piece of code, but nothing happened.
What is the correct way of running the code for my requirement? I am stuck here.
Can someone please help?
As advised in the comment by #jordanm, I've tried:
import logging
from logging.handlers import RotatingFileHandler

#----------------------------------------------------------------------
def create_rotating_log(path):
    """
    Creates a rotating log
    """
    logger = logging.getLogger("Rotating Log")
    logger.setLevel(logging.INFO)
    # add a rotating handler
    handler = RotatingFileHandler(path, maxBytes=40,
                                  backupCount=1)
    logger.addHandler(handler)

#----------------------------------------------------------------------
if __name__ == "__main__":
    log_file = "test.log"
    create_rotating_log(log_file)
    logger = logging.getLogger("Rotating Log")
    logger.info("Hello!")
It works.
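A hedged note on why it works: RotatingFileHandler only checks the file size when a record is emitted through it, so editing the file by hand and merely constructing the handler never triggers a rollover. Logging enough records through the handler does, for example:

import logging
from logging.handlers import RotatingFileHandler

logger = logging.getLogger("Rotating Log")
logger.setLevel(logging.INFO)
logger.addHandler(RotatingFileHandler("test.log", maxBytes=40, backupCount=1))

for i in range(10):
    # each emit checks the size; once test.log would exceed 40 bytes,
    # it is renamed to test.log.1 and a fresh test.log is started
    logger.info("Hello number %d!", i)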

Python polling library: show a message for each iteration

A polling method is implemented that checks the request status every second. Is it possible to add a log line for each iteration of the polling?
result = poll(
    lambda: getSomething(),
    timeout=100,
    step=1,
    check_success=IsPollingSuccessfull
)
I need something like,
Waiting for the response + time
Waiting for the response + time
Waiting for the response + time
Waiting for the response + time
EDIT:
I want to print the log to the console.
Have you considered Python's logging module? (See the standard-library documentation.)
You can create a logger instance that saves all messages to a file. Then you can use it everywhere in your code and log anything you'd like at different logging levels.
Here is how I create and use the logger:
# Method to create an instance of a logger
import logging

def set_logger(context, file_name, verbose=False):
    logger = logging.getLogger(context)
    logger.setLevel(logging.DEBUG if verbose else logging.INFO)
    formatter = logging.Formatter(
        '[%(asctime)s][%(levelname)s]:' + context + ':[%(filename).30s:%(funcName).30s:%(lineno)3d]:%(message)s',
        datefmt='%Y-%m-%d %H:%M:%S\x1b[0m')
    console_handler = logging.StreamHandler()
    console_handler.setLevel(logging.DEBUG if verbose else logging.INFO)
    console_handler.setFormatter(formatter)
    logger.handlers = []
    logger.addHandler(console_handler)
    # path_to_save_logger is a placeholder for a directory prefix of your choice
    file_handler = logging.FileHandler(path_to_save_logger + file_name)
    file_handler.setLevel(logging.DEBUG if verbose else logging.INFO)
    file_handler.setFormatter(formatter)
    logger.addHandler(file_handler)
    return logger
Then create the instance and use it:
from termcolor import colored
my_logger = set_logger(colored('GENERAL', 'yellow'), "/tmp/my_logger.txt", verbose=True)
my_logger.info("this is an info message")
my_logger.debug("this is a debug message")
.....
EDIT: assuming you're using polling2.poll()
You can add logging to your poll() call via the log argument (see the polling2 documentation):
import logging

poll(lambda: getSomething(),
     timeout=100,
     step=1,
     check_success=IsPollingSuccessful,
     log=logging.DEBUG)
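One hedged note of my own: the per-iteration messages produced via the log argument go through Python's standard logging, so they only appear on the console if logging is configured to show that level, e.g.:

import logging

# show DEBUG-level records (including the poll iteration logs) on the console
logging.basicConfig(level=logging.DEBUG,
                    format='%(asctime)s %(levelname)s %(message)s')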

Unformatted logging prints to the console (it only prints formatted output to a file) / Python

The logger configuration to log to a file and print to stdout did not work for me, so:
I want the logging details to go both to prog_log.txt AND to the console. I have:
# Debug Settings
import logging

#logging.disable()  # when the program is ready, uncomment this line to suppress all logging messages
logger = logging.getLogger('')
logging.basicConfig(level=logging.DEBUG,
                    filename='prog_log.txt',
                    format='%(asctime)s - %(levelname)s - %(message)s',
                    filemode='w')
logger.addHandler(logging.StreamHandler())
logging.debug('Start of Program')
The above does write properly formatted logging details into the prog_log.txt file, but it prints unformatted logging messages to the console, multiple times, as I re-run the program, like below:
In [3]: runfile('/Volumes/GoogleDrive/Mon Drive/MAC_test.py', wdir='/Volumes/GoogleDrive/Mon Drive/')
Start of Program
Start of Program
Start of Program  # after the third run
Any help welcome ;)
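No answer is included for this one; as a hedged side note, the StreamHandler added here has no formatter (hence the bare messages), and because the interpreter session keeps the root logger alive between runfile() calls, another handler is appended on every re-run (hence the duplicates). A minimal sketch of one way to guard against both, reusing the same file name and format:

import logging

logger = logging.getLogger('')
logging.basicConfig(level=logging.DEBUG,
                    filename='prog_log.txt',
                    format='%(asctime)s - %(levelname)s - %(message)s',
                    filemode='w')

# give the console handler the same format, and only add it once per session
if not any(type(h) is logging.StreamHandler for h in logger.handlers):
    console = logging.StreamHandler()
    console.setFormatter(logging.Formatter('%(asctime)s - %(levelname)s - %(message)s'))
    logger.addHandler(console)

logging.debug('Start of Program')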

Python logging: setting multiple loggers and switching between them

I have a program with a loop, and I would like to set up a main logger which logs information from the whole program. I would like to set up a different logger inside a loop (I will call this a 'logger_instance' going forward) for each loop instance. The main logger also needs to be able to log some information within a loop.
The issue with my current code is the following: once I initialize a logger_instance inside a loop, the information that I intended to log to the main logger gets logged to the logger_instance instead of the main logger.
Below is the sample code:
import getpass
import logging


class DispatchingFormatter:
    """
    This allows to create several formatter objects with desired formats so that logging outputs can take different
    formats
    """
    def __init__(self, formatters, default_formatter):
        self._formatters = formatters
        self._default_formatter = default_formatter

    def format(self, record):
        formatter_obj = self._formatters.get(record.name, self._default_formatter)
        return formatter_obj.format(record)


def initiate_logger(log_file_name):
    # Set logging to display INFO level or above
    logging.basicConfig(level=logging.INFO)
    # First empty out the list of handlers to remove the default handler created by running basicConfig above
    # logging.getLogger().handlers.clear()
    logger = logging.getLogger()
    logger.handlers.clear()
    # logger = logging.getLogger().handlers.clear()
    # Set up formatter objects
    formatter_obj = DispatchingFormatter(
        # Custom formats - add new desired formats here
        {
            # This format allows to print the user and the time - use this log format at the start of the execution
            'start_log': logging.Formatter('\n%(levelname)s - %(message)s executed the pipeline at %(asctime)s',
                                           datefmt='%Y-%m-%d %H:%M:%S'),
            # This format allows to print the time - use this log format at the end of the execution
            'end_log': logging.Formatter('\n%(levelname)s - pipeline execution completed at %(asctime)s',
                                         datefmt='%Y-%m-%d %H:%M:%S'),
            # This format allows to print the duration - use this log format at the end of the execution
            'duration_log': logging.Formatter('\n%(levelname)s - total time elapsed: %(message)s minutes'),
            # This is the format of warning messages
            'warning_log': logging.Formatter('\n\t\t%(levelname)s - %(message)s'),
            # This is the format of error messages
            'error_log': logging.Formatter('\n%(levelname)s! [%(filename)s:line %(lineno)s - %(funcName)20s()] - '
                                           'Pipeline terminating!\n\t%(message)s')
        },
        # Default format - this default is used to print all ESN outputs
        logging.Formatter('%(message)s'),
    )
    # Log to console (stdout)
    c_handler = logging.StreamHandler()
    c_handler.setFormatter(formatter_obj)
    # logging.getLogger().addHandler(c_handler)
    logger.addHandler(c_handler)
    # Log to file in runs folder
    f_handler = logging.FileHandler(log_file_name, 'w+')
    f_handler.setFormatter(formatter_obj)
    # logging.getLogger().addHandler(f_handler)
    logger.addHandler(f_handler)
    # Log user and start time information upon creating the logger
    logging.getLogger('start_log').info(f'{getpass.getuser()}')
    return logger


if __name__ == '__main__':
    # Test logging
    # Initial main logger for outside the loop
    logger_main = initiate_logger('log_main.txt')
    logger_main.info(f'Started the main logging')
    for i in range(5):
        # Create logger in a loop
        this_logger = initiate_logger(f'log_{str(i)}.txt')
        this_logger.info(f'Logging in a loop - loop #{str(i)}')
        # Log some information to main logger
        logger_main.info(f'Something happened in loop #{str(i)}')
    # Log more information to main logger after the loop
    logger_main.info(f'Loop completed!')
log_main.txt contains
INFO - this_user executed the pipeline at 2019-05-29 19:15:47
Started the main logging
log_0.txt contains
INFO - lqk4061 executed the pipeline at 2019-05-29 19:15:47
Logging in a loop - loop #0
Something happened in loop #0
Desired output for log_main.txt should be
INFO - this_user executed the pipeline at 2019-05-29 19:15:47
Started the main logging
Something happened in loop #0
Something happened in loop #1
Something happened in loop #2
Something happened in loop #3
Something happened in loop #4
Loop completed!
Desired output for log_0.txt should be
INFO - lqk4061 executed the pipeline at 2019-05-29 19:15:47
Logging in a loop - loop #0
Any help will be greatly appreciated!
That happens because your initiate_logger function always returns the root logger, since it calls getLogger without a name. See the documentation. What you need to do is give each of them a different name if you want them to be different logger instances. For example, logger = logging.getLogger(log_file_name) would work in your code. I would recommend using filters instead, though.
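To make that concrete, a minimal sketch (my own wording, reusing the question's names; it omits the DispatchingFormatter details and simply gives each log file its own named logger):

import logging

def initiate_logger(log_file_name):
    # A named logger: each distinct name maps to its own independent logger object
    logger = logging.getLogger(log_file_name)
    logger.setLevel(logging.INFO)
    logger.handlers.clear()      # avoid stacking handlers if called again with the same name
    logger.propagate = False     # keep records from also reaching the root logger's handlers
    logger.addHandler(logging.FileHandler(log_file_name, 'w+'))
    return logger

logger_main = initiate_logger('log_main.txt')
this_logger = initiate_logger('log_0.txt')
logger_main.info('Something happened in loop #0')  # written only to log_main.txt
this_logger.info('Logging in a loop - loop #0')    # written only to log_0.txt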

Python logging command does not work in real time

I have a logging widget in Tkinter (ScrolledText) with a TextHandler class that handles log records and prints them in the widget:
class TextHandler(logging.Handler):
    def __init__(self, text):
        # run the regular Handler __init__
        logging.Handler.__init__(self)
        # Store a reference to the Text it will log to
        self.text = text

    def emit(self, record):
        msg = self.format(record)
        def append():
            self.text.configure(state='normal')
            self.text.insert(Tkinter.END, msg + '\n')
            self.text.configure(state='disabled')
            # Autoscroll to the bottom
            self.text.yview(Tkinter.END)
        self.text.after(0, append)

st = ScrolledText.ScrolledText(self, width=190, height=9, state='disabled')
st.configure(font='TkFixedFont')
st.place(x=0, y=539)

text_handler = TextHandler(st)

# Logging configuration
logging.basicConfig(filename='test.log',
                    level=logging.INFO,
                    format='%(asctime)s - %(levelname)s - %(message)s')

# Add the handler to logger
self.logger = logging.getLogger()
self.logger.addHandler(text_handler)
I then call logging.info(msg) to log messages, but there is a problem: everything works well, except that when this function is called before some processing (for example, some work with lists), my logs appear only after that processing finishes!
logging.info("message")
print "message"
for topic in news:
...
The print statement works fine here, but there is a problem with logging: I get my log message only after the loop ends.
So... what is the problem?
You should not use after(0, ...). I'm not sure if it's the only problem, but it's definitely one problem. You are starving the event handler -- the "idle" queue will never empty, so it doesn't have the chance to service normal events. You have, in effect, created an infinite loop.
You should give a small non-zero interval, which will help this problem.
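For illustration, a sketch of the emit method with that suggestion applied (my edit of the code above; the 10 ms delay is just an arbitrary small non-zero interval):

def emit(self, record):
    msg = self.format(record)

    def append():
        self.text.configure(state='normal')
        self.text.insert(Tkinter.END, msg + '\n')
        self.text.configure(state='disabled')
        self.text.yview(Tkinter.END)  # autoscroll to the bottom

    # schedule the GUI update with a small non-zero delay instead of 0
    self.text.after(10, append)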
