I am learning about logging when running multiple processes.
Below is how I normally log things when running a single process.
import logging

log_format = '%(asctime)s - %(name)s - %(levelname)s - %(message)s'
logger = logging.getLogger(__name__)
logger.setLevel(logging.DEBUG)
file_handler = logging.FileHandler('C:/my_directory/logs/file_name.log')
formatter = logging.Formatter(log_format)
file_handler.setFormatter(formatter)
# guard against adding the handler twice, which would duplicate every message
if not logger.handlers:
    logger.addHandler(file_handler)
So after my code has run I can go to C:/my_directory/logs/file_name.log and check what I need to.
With multiple processes I understand it's not so simple. I have read this great article and copied its example code below. What I don't understand is how to output the logged messages to a file, as above, so that I can read them after the code has finished.
from random import random
from time import sleep
from multiprocessing import current_process
from multiprocessing import Process
from multiprocessing import Queue
from logging.handlers import QueueHandler
import logging
# executed in a process that performs logging
def logger_process(queue):
    # create a logger
    logger = logging.getLogger('app')
    # configure a stream handler
    logger.addHandler(logging.StreamHandler())
    # log all messages, debug and up
    logger.setLevel(logging.DEBUG)
    # run forever
    while True:
        # consume a log message, block until one arrives
        message = queue.get()
        # check for shutdown
        if message is None:
            break
        # log the message
        logger.handle(message)

# task to be executed in child processes
def task(queue):
    # create a logger
    logger = logging.getLogger('app')
    # add a handler that uses the shared queue
    logger.addHandler(QueueHandler(queue))
    # log all messages, debug and up
    logger.setLevel(logging.DEBUG)
    # get the current process
    process = current_process()
    # report initial message
    logger.info(f'Child {process.name} starting.')
    # simulate doing work
    for i in range(5):
        # report a message
        logger.debug(f'Child {process.name} step {i}.')
        # block
        sleep(random())
    # report final message
    logger.info(f'Child {process.name} done.')

# protect the entry point
if __name__ == '__main__':
    # create the shared queue
    queue = Queue()
    # create a logger
    logger = logging.getLogger('app')
    # add a handler that uses the shared queue
    logger.addHandler(QueueHandler(queue))
    # log all messages, debug and up
    logger.setLevel(logging.DEBUG)
    # start the logger process
    logger_p = Process(target=logger_process, args=(queue,))
    logger_p.start()
    # report initial message
    logger.info('Main process started.')
    # configure child processes
    processes = [Process(target=task, args=(queue,)) for i in range(5)]
    # start child processes
    for process in processes:
        process.start()
    # wait for child processes to finish
    for process in processes:
        process.join()
    # report final message
    logger.info('Main process done.')
    # shutdown the queue correctly
    queue.put(None)
Update
I added the code below to the logger_process function, just before the while True: loop. However, when I look in the file there is nothing there. I'm not seeing any output and I'm not sure what I'm missing.
# add file handler
log_format = '%(asctime)s - %(name)s - %(levelname)s - %(messsage)s'
file_handler = logging.FileHandler('C:/my_directory/logs/file_name.log')
formatter = logging.Formatter(log_format)
file_handler.setFormatter(formatter)
logger.addHandler(file_handler)
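One thing worth double-checking in the snippet above is the format string: a misspelled attribute such as %(messsage)s raises an error inside the handler at emit time, the traceback goes to stderr, and nothing is written to the file. For reference, a minimal sketch of a logger_process that writes the queued records to a file, assuming the same path as above:

import logging

def logger_process(queue):
    logger = logging.getLogger('app')
    logger.setLevel(logging.DEBUG)
    file_handler = logging.FileHandler('C:/my_directory/logs/file_name.log')
    # note: %(message)s, not %(messsage)s - a bad attribute name here makes
    # every emit fail, and the record never reaches the file
    file_handler.setFormatter(logging.Formatter(
        '%(asctime)s - %(name)s - %(levelname)s - %(message)s'))
    logger.addHandler(file_handler)
    while True:
        message = queue.get()   # block until a record arrives
        if message is None:     # sentinel sent by the main process
            break
        logger.handle(message)  # the file handler formats and writes it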
Related
I have implemented a polling method that runs every second to check the status of a request. Is it possible to add a log message for each iteration of the polling?
result = poll(
    lambda: getSomething(),
    timeout=100,
    step=1,
    check_success=IsPollingSuccessfull
)
I need something like,
Waiting for the response + time
Waiting for the response + time
Waiting for the response + time
Waiting for the response + time
EDIT:
I want to print the log to the console.
Have you considered Python's logging module? Here is the documentation.
You can create a logger instance that saves all messages to a file. Then you can use it everywhere in your code and log anything you'd like at different logging levels.
Here is how I create and use the logger:
# Method to create an instance of a logger
import logging

def set_logger(context, file_name, verbose=False):
    logger = logging.getLogger(context)
    logger.setLevel(logging.DEBUG if verbose else logging.INFO)
    formatter = logging.Formatter(
        '[%(asctime)s][%(levelname)s]:' + context + ':[%(filename).30s:%(funcName).30s:%(lineno)3d]:%(message)s',
        datefmt='%Y-%m-%d %H:%M:%S\x1b[0m')
    console_handler = logging.StreamHandler()
    console_handler.setLevel(logging.DEBUG if verbose else logging.INFO)
    console_handler.setFormatter(formatter)
    logger.handlers = []  # drop handlers from any previous call with the same context
    logger.addHandler(console_handler)
    # file_name is expected to be a full path, e.g. /tmp/my_logger.txt
    file_handler = logging.FileHandler(file_name)
    file_handler.setLevel(logging.DEBUG if verbose else logging.INFO)
    file_handler.setFormatter(formatter)
    logger.addHandler(file_handler)
    return logger
Then create the instance and use it:
from termcolor import colored
my_logger = set_logger(colored('GENERAL', 'yellow'), "/tmp/my_logger.txt", verbose=True)
my_logger.info("this is an info message")
my_logger.debug("this is a debug message")
.....
EDIT: assuming you're using polling2.poll()
You can add a logger to your poll() call - see the documentation:
import logging

poll(lambda: getSomething(),
     timeout=100,
     step=1,
     check_success=IsPollingSuccessfull,
     log=logging.DEBUG)
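The log argument only logs the return value of the polled target at the given level. If you want a custom line such as "Waiting for the response" plus a timestamp on every iteration, one option is to wrap check_success so it logs before delegating. A minimal sketch, assuming polling2 and reusing the getSomething and IsPollingSuccessfull names from the question:

import logging
from datetime import datetime
from polling2 import poll

logging.basicConfig(level=logging.DEBUG)  # send log output to the console

def logging_check(response):
    # called once per polling iteration with the return value of the target
    logging.debug('Waiting for the response %s', datetime.now())
    return IsPollingSuccessfull(response)

result = poll(lambda: getSomething(),
              timeout=100,
              step=1,
              check_success=logging_check)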
I have a program with a loop, and I would like to set up a main logger which logs information from the whole program. I would like to set up a different logger inside a loop (I will call this a 'logger_instance' going forward) for each loop instance. The main logger also needs to be able to log some information within a loop.
The issue with my current code is the following: once I initialize a logger_instance inside a loop, the information that I intended to log to the main logger gets logged to the logger_instance instead of the main logger.
Below is the sample code:
import getpass
import logging


class DispatchingFormatter:
    """
    Allows creating several formatter objects with the desired formats, so that
    logging output can take a different format depending on the logger name.
    """
    def __init__(self, formatters, default_formatter):
        self._formatters = formatters
        self._default_formatter = default_formatter

    def format(self, record):
        formatter_obj = self._formatters.get(record.name, self._default_formatter)
        return formatter_obj.format(record)
def initiate_logger(log_file_name):
    # Set logging to display INFO level or above
    logging.basicConfig(level=logging.INFO)
    # Empty the handler list to remove the default handler created by basicConfig above
    logger = logging.getLogger()
    logger.handlers.clear()
    # Set up formatter objects
    formatter_obj = DispatchingFormatter(
        # Custom formats - add new desired formats here
        {
            # Prints the user and the time - use this log format at the start of the execution
            'start_log': logging.Formatter('\n%(levelname)s - %(message)s executed the pipeline at %(asctime)s',
                                           datefmt='%Y-%m-%d %H:%M:%S'),
            # Prints the time - use this log format at the end of the execution
            'end_log': logging.Formatter('\n%(levelname)s - pipeline execution completed at %(asctime)s',
                                         datefmt='%Y-%m-%d %H:%M:%S'),
            # Prints the duration - use this log format at the end of the execution
            'duration_log': logging.Formatter('\n%(levelname)s - total time elapsed: %(message)s minutes'),
            # The format of warning messages
            'warning_log': logging.Formatter('\n\t\t%(levelname)s - %(message)s'),
            # The format of error messages
            'error_log': logging.Formatter('\n%(levelname)s! [%(filename)s:line %(lineno)s - %(funcName)20s()] - '
                                           'Pipeline terminating!\n\t%(message)s')
        },
        # Default format - used to print all ESN outputs
        logging.Formatter('%(message)s'),
    )
    # Log to console (stdout)
    c_handler = logging.StreamHandler()
    c_handler.setFormatter(formatter_obj)
    logger.addHandler(c_handler)
    # Log to file in the runs folder
    f_handler = logging.FileHandler(log_file_name, 'w+')
    f_handler.setFormatter(formatter_obj)
    logger.addHandler(f_handler)
    # Log user and start time information upon creating the logger
    logging.getLogger('start_log').info(f'{getpass.getuser()}')
    return logger
if __name__ == '__main__':
    # Test logging
    # Initiate the main logger for outside the loop
    logger_main = initiate_logger('log_main.txt')
    logger_main.info('Started the main logging')
    for i in range(5):
        # Create a logger in the loop
        this_logger = initiate_logger(f'log_{i}.txt')
        this_logger.info(f'Logging in a loop - loop #{i}')
        # Log some information to the main logger
        logger_main.info(f'Something happened in loop #{i}')
    # Log more information to the main logger after the loop
    logger_main.info('Loop completed!')
log_main.txt contains
INFO - this_user executed the pipeline at 2019-05-29 19:15:47
Started the main logging
log_0.txt contains
INFO - this_user executed the pipeline at 2019-05-29 19:15:47
Logging in a loop - loop #0
Something happened in loop #0
Desired output for log_main.txt should be
INFO - this_user executed the pipeline at 2019-05-29 19:15:47
Started the main logging
Something happened in loop #0
Something happened in loop #1
Something happened in loop #2
Something happened in loop #3
Something happened in loop #4
Loop completed!
Desired output for log_0.txt should be
INFO - this_user executed the pipeline at 2019-05-29 19:15:47
Logging in a loop - loop #0
Any help will be greatly appreciated!
That happens because your initiate_logger function always returns the root logger, since it calls getLogger() without a name - see the documentation. If you want them to be different logger instances, you need to give each of them a different name; for example, logger = logging.getLogger(log_file_name) would work in your code. I would recommend using filters instead, though.
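A minimal sketch of that fix, changing only the relevant lines of initiate_logger (the handler setup stays as it is):

def initiate_logger(log_file_name):
    # Name the logger after the file so each call returns a distinct
    # instance instead of reconfiguring the shared root logger every time
    logger = logging.getLogger(log_file_name)
    logger.setLevel(logging.INFO)
    logger.handlers.clear()
    # ... attach c_handler and f_handler to this logger as before ...
    return logger

Note that the logging.getLogger('start_log').info(...) line would also need rethinking: records sent to the 'start_log' logger propagate to the root logger's handlers, which this version no longer configures.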
I'm trying to understand why the same code works on OS X but not on Windows.
import logging
import os

from apscheduler.executors.pool import ProcessPoolExecutor
from apscheduler.schedulers.blocking import BlockingScheduler

logger_name = 'gic_scheduler'
logger = logging.getLogger(logger_name)
logger.setLevel(logging.INFO)
ch = logging.StreamHandler()
ch.setLevel(logging.INFO)
fh = logging.FileHandler(filename=os.path.join(tmp_folder, 'scheduler.log'), encoding='utf-8')
fh.setLevel(logging.DEBUG)
logger.addHandler(ch)
logger.addHandler(fh)

executor_logger = logging.getLogger('apscheduler.executors.default')
executor_logger.setLevel(logging.DEBUG)
executor_logger.addHandler(ch)
executor_logger.addHandler(fh)

executors = {'default': ProcessPoolExecutor(5)}
scheduler = BlockingScheduler(executors=executors, logger=logger)
scheduler.add_job(go, 'interval', seconds=5)
scheduler.start()
In particular, no output is produced by the logger 'apscheduler.executors.default'. I dug into the third-party library that uses this logger and printed logger.handlers; on OS X the handlers are there, but on Windows the list is empty. Any ideas why?
def run_job(job, jobstore_alias, run_times, logger_name):
    """Called by executors to run the job. Returns a list of scheduler events to be dispatched by the scheduler."""
    events = []
    logger = logging.getLogger(logger_name)
    print logger_name
    print logger.handlers
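The usual explanation is the process start method: on Windows, multiprocessing spawns a fresh interpreter, so handler objects attached in the parent are not inherited by worker processes, whereas a POSIX fork copies them. A minimal standalone sketch (not apscheduler-specific) that demonstrates the difference:

import logging
from multiprocessing import Process

def show_handlers():
    # Under 'spawn' (the Windows default) this prints [], because the child
    # starts from a fresh interpreter and never ran the parent's setup code.
    # Under 'fork' (the POSIX default) it shows the inherited handler.
    print(logging.getLogger('apscheduler.executors.default').handlers)

if __name__ == '__main__':
    logging.getLogger('apscheduler.executors.default').addHandler(logging.StreamHandler())
    p = Process(target=show_handlers)
    p.start()
    p.join()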
After porting my script from Mac to Windows (both Python 2.7.*), I find that logging does not work at all in the subprocess; only the parent's log messages are written to the file. Here is my example code:
# test logging in a multi-process environment
import logging, os
from multiprocessing import Process

def child():
    logging.info('this is child')

if __name__ == '__main__':
    logging.basicConfig(filename=os.path.join(os.getcwd(), 'log.out'),
                        level=logging.DEBUG, filemode='w',
                        format='[%(filename)s:%(lineno)d]: %(asctime)s - %(levelname)s: %(message)s')
    p = Process(target=child, args=())
    p.start()
    p.join()
    logging.info('this is father')
The output only contains this is father in log.out; the child's log message is missing. How can I make logging work in the child process?
Each child is an independent process, and file handles in the parent may be closed in the child after a fork (assuming POSIX). In any case, logging to the same file from multiple processes is not supported. See the documentation for suggested approaches.
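On Windows specifically, the child is spawned as a fresh interpreter, so the parent's basicConfig never runs in it. A minimal workaround, sketched under the constraint above (each process writes to its own file, since sharing one file is unsupported), is to configure logging inside the child:

import logging, os
from multiprocessing import Process, current_process

def child():
    # On Windows the child starts fresh, so this configures its own logging;
    # on a POSIX fork the child would inherit the parent's handlers instead.
    logging.basicConfig(filename=os.path.join(os.getcwd(), 'log_%s.out' % current_process().name),
                        level=logging.DEBUG, filemode='w',
                        format='[%(filename)s:%(lineno)d]: %(asctime)s - %(levelname)s: %(message)s')
    logging.info('this is child')

if __name__ == '__main__':
    logging.basicConfig(filename=os.path.join(os.getcwd(), 'log.out'),
                        level=logging.DEBUG, filemode='w')
    p = Process(target=child, args=())
    p.start()
    p.join()
    logging.info('this is father')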
How can I get my two processes to log to a single file?
With my code, only proc1 is logging to my log file...
module.py:
import multiprocessing, logging

log = multiprocessing.log_to_stderr()
log.setLevel(logging.DEBUG)
handler = logging.FileHandler('/var/log/my.log')
handler.setLevel(logging.DEBUG)
formatter = logging.Formatter('%(asctime)s - %(name)s - %(levelname)s - %(message)s')
handler.setFormatter(formatter)
log.addHandler(handler)

def proc1():
    log.info('Hi from proc1')
    while True:
        if something:
            log.info('something')

def proc2():
    log.info('Hi from proc2')
    while True:
        if something_more:
            log.info('something more')

if __name__ == '__main__':
    p1 = multiprocessing.Process(target=proc1)
    p2 = multiprocessing.Process(target=proc2)
    p1.start()
    p2.start()
As stated at https://docs.python.org/2/howto/logging-cookbook.html#logging-to-a-single-file-from-multiple-processes:
"Although logging is thread-safe, and logging to a single file from
multiple threads in a single process is supported, logging to a single
file from multiple processes is not supported"
You should therefore use another approach, e.g. implementing a logging server:
https://docs.python.org/2/howto/logging-cookbook.html#sending-and-receiving-logging-events-across-a-network
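On Python 3, a lighter-weight alternative to a full socket-based logging server is the QueueHandler/QueueListener pair from logging.handlers: every process sends records to a shared queue, and a single listener thread in the main process writes them to the file. A minimal sketch (the /tmp/my.log path is just an example):

import logging
import multiprocessing
from logging.handlers import QueueHandler, QueueListener

def worker(queue, name):
    log = logging.getLogger(name)
    log.setLevel(logging.DEBUG)
    log.addHandler(QueueHandler(queue))  # only the queue; no file handler here
    log.info('Hi from %s', name)

if __name__ == '__main__':
    queue = multiprocessing.Queue()
    # the listener owns the single FileHandler, so only one process touches the file
    file_handler = logging.FileHandler('/tmp/my.log')
    file_handler.setFormatter(logging.Formatter('%(asctime)s - %(name)s - %(levelname)s - %(message)s'))
    listener = QueueListener(queue, file_handler)
    listener.start()
    ps = [multiprocessing.Process(target=worker, args=(queue, 'proc%d' % i)) for i in (1, 2)]
    for p in ps:
        p.start()
    for p in ps:
        p.join()
    listener.stop()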