A polling method is implemented that runs every second to check the request status. Is it possible to add a log line for each iteration of the polling?
result = poll(
    lambda: getSomething(),
    timeout=100,
    step=1,
    check_success=IsPollingSuccessful
)
I need something like:
Waiting for the response + time
Waiting for the response + time
Waiting for the response + time
Waiting for the response + time
EDIT:
I want to print the log to the console.
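For context, a minimal hand-rolled sketch of the desired behavior (assuming the question's getSomething() and IsPollingSuccessful(); this is not a library API):

import time
from datetime import datetime

# poll once per second for up to 100 seconds, logging each attempt
deadline = time.time() + 100  # overall timeout, matching timeout=100
while time.time() < deadline:
    response = getSomething()
    if IsPollingSuccessful(response):
        break
    print(f"Waiting for the response {datetime.now():%Y-%m-%d %H:%M:%S}")
    time.sleep(1)  # step=1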
Have you considered Python's logging module? Here is the documentation.
You can create a logger instance that saves all messages to a file. Then you can use it everywhere in your code and log anything you'd like at different logging levels.
Here is how I create and use the logger:
# Method to create an instance of a logger
import logging

def set_logger(context, file_name, verbose=False):
    logger = logging.getLogger(context)
    logger.setLevel(logging.DEBUG if verbose else logging.INFO)
    formatter = logging.Formatter(
        '[%(asctime)s][%(levelname)s]:' + context + ':[%(filename).30s:%(funcName).30s:%(lineno)3d]:%(message)s',
        datefmt='%Y-%m-%d %H:%M:%S\x1b[0m')
    console_handler = logging.StreamHandler()
    console_handler.setLevel(logging.DEBUG if verbose else logging.INFO)
    console_handler.setFormatter(formatter)
    logger.handlers = []  # drop any existing handlers so messages are not duplicated
    logger.addHandler(console_handler)
    # path_to_save_logger is the directory where the log file should be written
    file_handler = logging.FileHandler(path_to_save_logger + file_name)
    file_handler.setLevel(logging.DEBUG if verbose else logging.INFO)
    file_handler.setFormatter(formatter)
    logger.addHandler(file_handler)
    return logger
Then create the instance and use it:
from termcolor import colored
my_logger = set_logger(colored('GENERAL', 'yellow'), "/tmp/my_logger.txt", verbose=True)
my_logger.info("this is an info message")
my_logger.debug("this is a debug message")
.....
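To get a log line on every polling iteration with this logger, one option is a thin wrapper around the success check (a sketch that assumes polling2's convention of passing the polled return value to check_success):

def logged_check(response):
    my_logger.info("Waiting for the response")  # asctime in the formatter supplies the time
    return IsPollingSuccessful(response)

result = poll(
    lambda: getSomething(),
    timeout=100,
    step=1,
    check_success=logged_check
)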
EDIT: assuming you're using polling2.poll()
You can add a logger to your poll() call - documentation
import logging

poll(lambda: getSomething(),
     timeout=100,
     step=1,
     check_success=IsPollingSuccessful,
     log=logging.DEBUG)
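Note that log only sets the level at which polling2 records each poll's return value; a handler still has to be configured somewhere for the messages to reach the console. A minimal sketch:

import logging

logging.basicConfig(level=logging.DEBUG)  # some handler must exist, or the records go nowhere

result = poll(lambda: getSomething(),
              timeout=100,
              step=1,
              check_success=IsPollingSuccessful,
              log=logging.DEBUG)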
I am learning about logging when running multiple processes.
Below is how I normally log things when running a single process.
import logging

log_format = '%(asctime)s - %(name)s - %(levelname)s - %(message)s'
logger = logging.getLogger(__name__)
logger.setLevel(logging.DEBUG)
file_handler = logging.FileHandler('C:/my_directory/logs/file_name.log')
formatter = logging.Formatter(log_format)
file_handler.setFormatter(formatter)
# to stop duplication
if not logger.handlers:
    logger.addHandler(file_handler)
So after my code has run I can go to C:/my_directory/logs/file_name.log and check what I need to.
With multiple processes I understand it's not so simple. I have read this great article and copied its example code below. What I don't understand is how to output the logged messages to a file like above, so that I can read them after the code has finished.
from random import random
from time import sleep
from multiprocessing import current_process
from multiprocessing import Process
from multiprocessing import Queue
from logging.handlers import QueueHandler
import logging

# executed in a process that performs logging
def logger_process(queue):
    # create a logger
    logger = logging.getLogger('app')
    # configure a stream handler
    logger.addHandler(logging.StreamHandler())
    # log all messages, debug and up
    logger.setLevel(logging.DEBUG)
    # run forever
    while True:
        # consume a log message, block until one arrives
        message = queue.get()
        # check for shutdown
        if message is None:
            break
        # log the message
        logger.handle(message)

# task to be executed in child processes
def task(queue):
    # create a logger
    logger = logging.getLogger('app')
    # add a handler that uses the shared queue
    logger.addHandler(QueueHandler(queue))
    # log all messages, debug and up
    logger.setLevel(logging.DEBUG)
    # get the current process
    process = current_process()
    # report initial message
    logger.info(f'Child {process.name} starting.')
    # simulate doing work
    for i in range(5):
        # report a message
        logger.debug(f'Child {process.name} step {i}.')
        # block
        sleep(random())
    # report final message
    logger.info(f'Child {process.name} done.')

# protect the entry point
if __name__ == '__main__':
    # create the shared queue
    queue = Queue()
    # create a logger
    logger = logging.getLogger('app')
    # add a handler that uses the shared queue
    logger.addHandler(QueueHandler(queue))
    # log all messages, debug and up
    logger.setLevel(logging.DEBUG)
    # start the logger process
    logger_p = Process(target=logger_process, args=(queue,))
    logger_p.start()
    # report initial message
    logger.info('Main process started.')
    # configure child processes
    processes = [Process(target=task, args=(queue,)) for i in range(5)]
    # start child processes
    for process in processes:
        process.start()
    # wait for child processes to finish
    for process in processes:
        process.join()
    # report final message
    logger.info('Main process done.')
    # shutdown the queue correctly
    queue.put(None)
Update
I added the below code in the logger_process function, just before the while True: loop. However, when I look in the file there is nothing there. I'm not seeing any output; not sure what I'm missing?
# add file handler
log_format = '%(asctime)s - %(name)s - %(levelname)s - %(message)s'
file_handler = logging.FileHandler('C:/my_directory/logs/file_name.log')
formatter = logging.Formatter(log_format)
file_handler.setFormatter(formatter)
logger.addHandler(file_handler)
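For reference, this is how the pieces fit together inside logger_process; only the logger process needs the file handler, since the children just enqueue records (path and format are taken from the snippets above):

def logger_process(queue):
    logger = logging.getLogger('app')
    logger.setLevel(logging.DEBUG)
    logger.addHandler(logging.StreamHandler())
    # the format string must spell %(message)s exactly; a typo in a field name
    # raises a formatting error inside the handler and nothing reaches the file
    log_format = '%(asctime)s - %(name)s - %(levelname)s - %(message)s'
    file_handler = logging.FileHandler('C:/my_directory/logs/file_name.log')
    file_handler.setFormatter(logging.Formatter(log_format))
    logger.addHandler(file_handler)
    while True:
        message = queue.get()
        if message is None:
            break
        logger.handle(message)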
I have a logging widget in Tkinter (ScrolledText) with a TextHandler class that handles log records and prints them in the widget:
class TextHandler(logging.Handler):
    def __init__(self, text):
        # run the regular Handler __init__
        logging.Handler.__init__(self)
        # Store a reference to the Text it will log to
        self.text = text

    def emit(self, record):
        msg = self.format(record)
        def append():
            self.text.configure(state='normal')
            self.text.insert(Tkinter.END, msg + '\n')
            self.text.configure(state='disabled')
            # Autoscroll to the bottom
            self.text.yview(Tkinter.END)
        self.text.after(0, append)
st = ScrolledText.ScrolledText(self, width=190, height=9, state='disabled')
st.configure(font='TkFixedFont')
st.place(x=0, y=539)

text_handler = TextHandler(st)

# Logging configuration
logging.basicConfig(filename='test.log',
                    level=logging.INFO,
                    format='%(asctime)s - %(levelname)s - %(message)s')

# Add the handler to logger
self.logger = logging.getLogger()
self.logger.addHandler(text_handler)
And I call logging.info(msg) to log messages. But there is a problem: everything works well, except that when this function is called before some processing (for example, some work with lists), my logs appear only after that processing finishes!
logging.info("message")
print "message"
for topic in news:
...
The print statement works fine here, but there is a problem with logging: I get my log message only after the loop ends.
So... what is the problem?
You should not use after(0, ...). I'm not sure if it's the only problem, but it's definitely one problem. You are starving the event handler -- the "idle" queue will never empty, so it doesn't have the chance to service normal events. You have, in effect, created an infinite loop.
You should give a small non-zero interval, which will help this problem.
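For example, in TextHandler.emit the callback can be scheduled with a small delay instead (10 ms is an arbitrary choice, just enough to let Tkinter service other pending events):

def emit(self, record):
    msg = self.format(record)
    def append():
        self.text.configure(state='normal')
        self.text.insert(Tkinter.END, msg + '\n')
        self.text.configure(state='disabled')
        self.text.yview(Tkinter.END)
    # a small non-zero interval gives the event loop room to run other callbacks
    self.text.after(10, append)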
I want to write a line describing the logged fields at the top of each log file created by the logger.
Currently I'm using a separate logger process which will be running all the time. It receives the information from a queue and writes it to the log. A lot of modules will be passing information to this logging queue.
My current sample code is:
import logging
import time
from logging.handlers import TimedRotatingFileHandler as rotate

def info_log(log_queue):
    logger = logging.getLogger("My Log")
    logger.setLevel(logging.INFO)
    handler = rotate("log/info.log", when="D", interval=30, backupCount=13)
    logger.addHandler(handler)
    desc_string = "yyyy/mm/dd-HH:MM:SS \t name \t country \n"
    logger.info(desc_string)
    while True:
        result = log_queue.get().split("#")
        logger.info(result[0] + "\t" + result[1] + "\t" + result[2] + "\n")
Each time the log is rotated, I want desc_string to be written first to the new log file. How can I do that?
Or, in other words, how can I know in the program when the log is rotated?
Maybe you can simply override the doRollover method from TimedRotatingFileHandler?
from logging.handlers import TimedRotatingFileHandler

class CustomFileHandler(TimedRotatingFileHandler):
    def doRollover(self):
        super().doRollover()
        # after the rollover, self.stream points at the freshly opened file
        self.stream.write(desc_string)

# to use it
handler = CustomFileHandler("log/info.log", when="D", interval=30, backupCount=13)
logger.addHandler(handler)
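Wired into the question's info_log, that could look like the sketch below; the initial logger.info(desc_string) still covers the very first file, and doRollover takes care of every rotated one:

desc_string = "yyyy/mm/dd-HH:MM:SS \t name \t country \n"  # module-level so doRollover can see it

def info_log(log_queue):
    logger = logging.getLogger("My Log")
    logger.setLevel(logging.INFO)
    logger.addHandler(CustomFileHandler("log/info.log", when="D", interval=30, backupCount=13))
    logger.info(desc_string)  # header for the first file; doRollover handles the rest
    while True:
        result = log_queue.get().split("#")
        logger.info("\t".join(result[:3]) + "\n")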
Hi, I am trying a sample program using a logger in Python:
import logging
import time,sys
import os
logger = logging.getLogger('myapp')
hdlr = logging.FileHandler('myapp1234.log')
formatter = logging.Formatter('%(asctime)s %(levelname)s %(message)s')
hdlr.setFormatter(formatter)
logger.addHandler(hdlr)
logging.getLogger().setLevel(logging.DEBUG)
logger.error('We have a problem')
logger.info('While this is just chatty')
logger.debug("Sample")
hdlr.flush()
time.sleep(10)
logger.error('We have a problem')
logger.info('While this is just chatty')
logger.debug("Sample")
hdlr.close()
This code is not writing to the log file immediately. I even tried handler.flush(), sys.exit(0), and sys.stdout.
When I try to open the file, even after killing the process, I get the following error. The log is only written after 120-200 seconds (sometimes it takes even more).
How can I make it print immediately (at least by the end of the program)?
Did I miss closing any handler?
Try removing the following statement.
time.sleep(10)
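As a side note, one call that is guaranteed to flush and close every handler by the end of the program is logging.shutdown(); a minimal sketch:

import logging

logger = logging.getLogger('myapp')
logger.addHandler(logging.FileHandler('myapp1234.log'))
logger.setLevel(logging.DEBUG)
logger.error('We have a problem')
logging.shutdown()  # flushes and closes all handlers; also runs automatically at interpreter exit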
I'm trying to understand why the same set of code works on OS X but not on Windows.
import logging
import os
# imports assumed from apscheduler 3.x
from apscheduler.schedulers.blocking import BlockingScheduler
from apscheduler.executors.pool import ProcessPoolExecutor

# tmp_folder and go are defined elsewhere in the question's code
logger_name = 'gic_scheduler'
logger = logging.getLogger(logger_name)
logger.setLevel(logging.INFO)

ch = logging.StreamHandler()
ch.setLevel(logging.INFO)
fh = logging.FileHandler(filename=os.path.join(tmp_folder, 'scheduler.log'), encoding='utf-8')
fh.setLevel(logging.DEBUG)
logger.addHandler(ch)
logger.addHandler(fh)

executor_logger = logging.getLogger('apscheduler.executors.default')
executor_logger.setLevel(logging.DEBUG)
executor_logger.addHandler(ch)
executor_logger.addHandler(fh)

executors = {'default': ProcessPoolExecutor(5)}
scheduler = BlockingScheduler(executors=executors, logger=logger)
scheduler.add_job(go, 'interval', seconds=5)
scheduler.start()
In particular, no output is produced by the logger 'apscheduler.executors.default'. I dug into the third-party library using this logger and printed out logger.handlers; on OS X the handlers are there, but on Windows the list is empty. Any ideas why?
def run_job(job, jobstore_alias, run_times, logger_name):
    """Called by executors to run the job. Returns a list of scheduler events to be dispatched by the scheduler."""
    events = []
    logger = logging.getLogger(logger_name)
    print logger_name
    print logger.handlers
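A hedged sketch of what may be going on: handler configuration lives on in-memory logger objects, which child processes only inherit when they are forked. Historically OS X forked by default, while Windows always spawns a fresh interpreter, so the child re-imports the code and its loggers come back with empty handler lists. This standalone demo (not the original code) shows the difference:

import logging
import multiprocessing as mp

def show_handlers():
    # with 'fork' the parent's handler is inherited; with 'spawn' (Windows)
    # the child starts fresh and this list comes out empty
    print(logging.getLogger('demo').handlers)

if __name__ == '__main__':
    logging.getLogger('demo').addHandler(logging.StreamHandler())
    p = mp.Process(target=show_handlers)
    p.start()
    p.join()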