I'm trying to understand why the same code works on OSX but not on Windows.
import logging
import os
from apscheduler.executors.pool import ProcessPoolExecutor
from apscheduler.schedulers.blocking import BlockingScheduler

logger_name = 'gic_scheduler'
logger = logging.getLogger(logger_name)
logger.setLevel(logging.INFO)

ch = logging.StreamHandler()
ch.setLevel(logging.INFO)
fh = logging.FileHandler(filename=os.path.join(tmp_folder, 'scheduler.log'), encoding='utf-8')
fh.setLevel(logging.DEBUG)
logger.addHandler(ch)
logger.addHandler(fh)

executor_logger = logging.getLogger('apscheduler.executors.default')
executor_logger.setLevel(logging.DEBUG)
executor_logger.addHandler(ch)
executor_logger.addHandler(fh)

executors = {'default': ProcessPoolExecutor(5)}
scheduler = BlockingScheduler(executors=executors, logger=logger)
scheduler.add_job(go, 'interval', seconds=5)
scheduler.start()
In particular, no output is produced by the logger 'apscheduler.executors.default'. I dug into the third-party library that uses this logger and printed out logger.handlers: on OSX the handlers are there, but on Windows the list is empty. Any ideas why?
def run_job(job, jobstore_alias, run_times, logger_name):
    """Called by executors to run the job. Returns a list of scheduler events to be dispatched by the scheduler."""
    events = []
    logger = logging.getLogger(logger_name)
    print(logger_name)
    print(logger.handlers)
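One likely factor (my reading of the symptom, not something confirmed above): on Windows, multiprocessing can only spawn worker processes, so a child starts with a fresh interpreter and a fresh logging configuration, while fork (historically the default on OSX) copies the parent's memory, handlers included. A minimal sketch illustrating the difference, independent of apscheduler (the 'demo' logger name is just for illustration):

import logging
import multiprocessing as mp

def child():
    # under spawn (Windows) this prints [], because the handler added in the
    # parent below is never re-created in the child; under fork it is inherited
    print(logging.getLogger('demo').handlers)

if __name__ == '__main__':
    logging.getLogger('demo').addHandler(logging.StreamHandler())
    mp.Process(target=child).start()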
I have two handlers, one writing to the console and one to a file:
import logging, sys
logger = logging.getLogger(__name__)
stdout_handler = logging.StreamHandler(sys.stdout)
logger.addHandler(stdout_handler)
file_handler = logging.FileHandler(filename="out.log")
logger.addHandler(file_handler)
if __name__ == '__main__':
    raise ZeroDivisionError
When an exception happens, the StreamHandler created for stdout is able to log the traceback that went to stderr. Meanwhile, the FileHandler doesn't write the traceback to the file.
Am I missing something in the FileHandler setup? Is FileHandler able to log stderr (e.g. uncaught exceptions) at all?
The logging module doesn't do everything on its own. You have to specify what to capture, as shown below.
import logging, sys
logger = logging.getLogger(__name__)
stdout_handler = logging.StreamHandler(sys.stdout)
logger.addHandler(stdout_handler)
file_handler = logging.FileHandler(filename="out.log")
logger.addHandler(file_handler)
if __name__ == '__main__':
    try:
        1 / 0
    except ZeroDivisionError as err:
        logger.error(err)
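If you also want the traceback in the file rather than just the error message, logging can attach it for you: logger.exception() (or logger.error(..., exc_info=True)) inside an except block adds the current traceback to the record, so both handlers receive it. A minimal variation of the block above:

if __name__ == '__main__':
    try:
        1 / 0
    except ZeroDivisionError:
        # logs the message plus the full traceback to every handler
        logger.exception("division failed")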
Refer to these for more details:
the logging documentation
the GeeksforGeeks logging tutorial
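If the goal is specifically to get uncaught exceptions into the file without wrapping everything in try/except, one option is to install an exception hook that routes them through the logger. This is a minimal sketch of that idea, not part of the original answer:

import logging
import sys

logger = logging.getLogger(__name__)
logger.addHandler(logging.StreamHandler(sys.stdout))
logger.addHandler(logging.FileHandler(filename="out.log"))

def log_uncaught(exc_type, exc_value, exc_traceback):
    # hand the exception info to logging so every handler (console and file) records it
    logger.error("Uncaught exception", exc_info=(exc_type, exc_value, exc_traceback))

sys.excepthook = log_uncaught

if __name__ == '__main__':
    raise ZeroDivisionError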
I am learning about logging when running multiple processes.
Below is how I normally log things when running a single process.
import logging
log_format = '%(asctime)s - %(name)s - %(levelname)s - %(message)s'
logger = logging.getLogger(__name__)
logger.setLevel(logging.DEBUG)
file_handler = logging.FileHandler('C:/my_directory/logs/file_name.log')
formatter = logging.Formatter(log_format)
file_handler.setFormatter(formatter)
# to stop duplication
if not logger.handlers:
    logger.addHandler(file_handler)
So after my code has run I can go to C:/my_directory/logs/file_name.log and check what I need to.
With multiple processes I understand it's not so simple. I have read this great article and copied its example code below. What I don't understand is how to output the logged messages to a file, like above, so that I can read them after the code has finished.
from random import random
from time import sleep
from multiprocessing import current_process
from multiprocessing import Process
from multiprocessing import Queue
from logging.handlers import QueueHandler
import logging

# executed in a process that performs logging
def logger_process(queue):
    # create a logger
    logger = logging.getLogger('app')
    # configure a stream handler
    logger.addHandler(logging.StreamHandler())
    # log all messages, debug and up
    logger.setLevel(logging.DEBUG)
    # run forever
    while True:
        # consume a log message, block until one arrives
        message = queue.get()
        # check for shutdown
        if message is None:
            break
        # log the message
        logger.handle(message)

# task to be executed in child processes
def task(queue):
    # create a logger
    logger = logging.getLogger('app')
    # add a handler that uses the shared queue
    logger.addHandler(QueueHandler(queue))
    # log all messages, debug and up
    logger.setLevel(logging.DEBUG)
    # get the current process
    process = current_process()
    # report initial message
    logger.info(f'Child {process.name} starting.')
    # simulate doing work
    for i in range(5):
        # report a message
        logger.debug(f'Child {process.name} step {i}.')
        # block
        sleep(random())
    # report final message
    logger.info(f'Child {process.name} done.')

# protect the entry point
if __name__ == '__main__':
    # create the shared queue
    queue = Queue()
    # create a logger
    logger = logging.getLogger('app')
    # add a handler that uses the shared queue
    logger.addHandler(QueueHandler(queue))
    # log all messages, debug and up
    logger.setLevel(logging.DEBUG)
    # start the logger process
    logger_p = Process(target=logger_process, args=(queue,))
    logger_p.start()
    # report initial message
    logger.info('Main process started.')
    # configure child processes
    processes = [Process(target=task, args=(queue,)) for i in range(5)]
    # start child processes
    for process in processes:
        process.start()
    # wait for child processes to finish
    for process in processes:
        process.join()
    # report final message
    logger.info('Main process done.')
    # shutdown the queue correctly
    queue.put(None)
Update
I added the code below to the logger_process function, just before the while True: loop. However, when I look in the file there is nothing there. I'm not seeing any output and I'm not sure what I'm missing.
# add file handler
log_format = '%(asctime)s - %(name)s - %(levelname)s - %(message)s'
file_handler = logging.FileHandler('C:/my_directory/logs/file_name.log')
formatter = logging.Formatter(log_format)
file_handler.setFormatter(formatter)
logger.addHandler(file_handler)
A polling method is implemented and it runs every second to check the request status. Is it possible to add a log line for each iteration of the polling?
result = poll(
    lambda: getSomething(),
    timeout=100,
    step=1,
    check_success=IsPollingSuccessfull
)
I need something like,
Waiting for the response + time
Waiting for the response + time
Waiting for the response + time
Waiting for the response + time
EDIT:
I want to print the log to the console.
Have you considered Python's logging? Here is the documentation.
You can create a logger instance that saves all messages to a file. Then you can use it everywhere in your code and log anything you'd like at different logging levels.
Here is how I create and use the logger:
# Method to create an instance of a logger
import logging

def set_logger(context, file_name, verbose=False):
    logger = logging.getLogger(context)
    logger.setLevel(logging.DEBUG if verbose else logging.INFO)
    formatter = logging.Formatter(
        '[%(asctime)s][%(levelname)s]:' + context + ':[%(filename).30s:%(funcName).30s:%(lineno)3d]:%(message)s',
        datefmt='%Y-%m-%d %H:%M:%S\x1b[0m')
    console_handler = logging.StreamHandler()
    console_handler.setLevel(logging.DEBUG if verbose else logging.INFO)
    console_handler.setFormatter(formatter)
    logger.handlers = []
    logger.addHandler(console_handler)
    file_handler = logging.FileHandler(path_to_save_logger + file_name)  # path_to_save_logger: your log directory
    file_handler.setLevel(logging.DEBUG if verbose else logging.INFO)
    file_handler.setFormatter(formatter)
    logger.addHandler(file_handler)
    return logger
Then create the instance and use it:
from termcolor import colored
my_logger = set_logger(colored('GENERAL', 'yellow'), "/tmp/my_logger.txt", verbose=True)
my_logger.info("this is an info message")
my_logger.debug("this is a debug message")
.....
EDIT: assuming you're using polling2.poll()
You can pass a log level into your poll() call - see the documentation:
import logging

poll(lambda: getSomething(),
     timeout=100,
     step=1,
     check_success=IsPollingSuccessfull,
     log=logging.DEBUG)
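Note that polling2 emits these messages through the standard logging module, so (assuming you haven't configured logging anywhere else, which is an assumption about your setup) you still need at least a basic configuration for them to show up on the console, for example:

import logging

# send DEBUG-and-up records from all loggers (including polling2's) to the console
logging.basicConfig(level=logging.DEBUG)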
If I run my script manually, I see runlog.log populated with messages. When I run the script via cron, there's no output in my log file, but I do see it in /var/spool/mail/root, so I know the script works.
file_handler = logging.FileHandler("runlog.log", "a")
stream_handler = logging.StreamHandler()
file_handler.setFormatter(default_formatter)
stream_handler.setFormatter(default_formatter)
logger = logging.getLogger()
logger.setLevel(logging.DEBUG)
logger.addHandler(file_handler)
logger.addHandler(stream_handler)
...
logger.info("No new files")
How can I get my two processes to log to a single file?
With my code only proc1 is logging to my log file...
module.py:
import multiprocessing, logging

log = multiprocessing.log_to_stderr()
log.setLevel(logging.DEBUG)
handler = logging.FileHandler('/var/log/my.log')
handler.setLevel(logging.DEBUG)
formatter = logging.Formatter('%(asctime)s - %(name)s - %(levelname)s - %(message)s')
handler.setFormatter(formatter)
log.addHandler(handler)

def proc1():
    log.info('Hi from proc1')
    while True:
        if something:
            log.info('something')

def proc2():
    log.info('Hi from proc2')
    while True:
        if something_more:
            log.info('something more')

if __name__ == '__main__':
    p1 = multiprocessing.Process(target=proc1)
    p2 = multiprocessing.Process(target=proc2)
    p1.start()
    p2.start()
As said at https://docs.python.org/2/howto/logging-cookbook.html#logging-to-a-single-file-from-multiple-processes
"Although logging is thread-safe, and logging to a single file from
multiple threads in a single process is supported, logging to a single
file from multiple processes is not supported"
You should therefore use another approach, e.g. implementing a logging server:
https://docs.python.org/2/howto/logging-cookbook.html#sending-and-receiving-logging-events-across-a-network
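For reference, this is roughly what the cookbook's logging-server approach looks like, compressed into a minimal sketch (Python 3 module names; the host, port and file path are illustrative, not from the original code). A single server process is the only one that touches the log file:

# a compressed sketch of the cookbook's socket-based logging server
import logging
import logging.handlers
import pickle
import socketserver
import struct

class LogRecordStreamHandler(socketserver.StreamRequestHandler):
    """Receives pickled LogRecords from SocketHandler clients and logs them locally."""
    def handle(self):
        while True:
            # each record arrives as a 4-byte big-endian length followed by a pickled dict
            header = self.connection.recv(4)
            if len(header) < 4:
                break
            length = struct.unpack('>L', header)[0]
            data = self.connection.recv(length)
            while len(data) < length:
                data += self.connection.recv(length - len(data))
            record = logging.makeLogRecord(pickle.loads(data))
            logging.getLogger(record.name).handle(record)

if __name__ == '__main__':
    # only this process writes the file, so there is no cross-process contention
    logging.basicConfig(
        filename='/var/log/my.log',
        format='%(asctime)s - %(name)s - %(levelname)s - %(message)s',
        level=logging.DEBUG)
    server = socketserver.ThreadingTCPServer(
        ('localhost', logging.handlers.DEFAULT_TCP_LOGGING_PORT),
        LogRecordStreamHandler)
    server.serve_forever()

In module.py, each process would then replace the FileHandler with a SocketHandler pointing at the server, e.g. log.addHandler(logging.handlers.SocketHandler('localhost', logging.handlers.DEFAULT_TCP_LOGGING_PORT)).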