I have two handlers, one writing to the console and one to a file:
import logging, sys
logger = logging.getLogger(__name__)
stdout_handler = logging.StreamHandler(sys.stdout)
logger.addHandler(stdout_handler)
file_handler = logging.FileHandler(filename="out.log")
logger.addHandler(file_handler)
if __name__ == '__main__':
    raise ZeroDivisionError
When an exception occurs, the traceback shows up in the console next to the stdout StreamHandler's output (it actually goes to stderr). The FileHandler, meanwhile, writes no traceback to the file.
Am I missing something in the FileHandler setup? Is FileHandler able to capture stderr (e.g. uncaught exceptions) at all?
The logging module doesn't do everything on its own. You have to catch the exception and log it explicitly, as shown below.
import logging, sys
logger = logging.getLogger(__name__)
stdout_handler = logging.StreamHandler(sys.stdout)
logger.addHandler(stdout_handler)
file_handler = logging.FileHandler(filename="out.log")
logger.addHandler(file_handler)
if __name__ == '__main__':
    try:
        1/0
    except ZeroDivisionError as err:
        # exc_info=True makes both handlers record the full traceback
        logger.error(err, exc_info=True)
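For uncaught exceptions specifically, you can also install a `sys.excepthook` that routes the traceback through the logger, so the FileHandler receives it too. A minimal sketch (the hook function name is mine, not from the original post):

```python
import logging
import sys

logger = logging.getLogger(__name__)
logger.addHandler(logging.StreamHandler(sys.stdout))
logger.addHandler(logging.FileHandler(filename="out.log"))

def log_uncaught(exc_type, exc_value, exc_tb):
    # exc_info makes every attached handler format and record the traceback
    logger.error("Uncaught exception", exc_info=(exc_type, exc_value, exc_tb))

sys.excepthook = log_uncaught
# From now on an uncaught exception (e.g. raise ZeroDivisionError) lands
# in both the console output and out.log.
```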
Refer to these for more details:
the logging module documentation
the GeeksforGeeks logging tutorial
I have a polling method that runs every second to check the status of a request. Is it possible to add a log line for each iteration of the polling?
result = poll(
    lambda: getSomething(),
    timeout=100,
    step=1,
    check_success=IsPollingSuccessfull
)
I need something like,
Waiting for the response + time
Waiting for the response + time
Waiting for the response + time
Waiting for the response + time
EDIT:
I want to print log to the console.
Have you considered Python's logging module? Here is the documentation.
You can create a logger instance that saves all messages to a file, then use it everywhere in your code and log anything you'd like at different logging levels.
Here is how I create and use the logger:
# Method to create an instance of a logger
import logging

def set_logger(context, file_name, verbose=False):
    logger = logging.getLogger(context)
    logger.setLevel(logging.DEBUG if verbose else logging.INFO)
    formatter = logging.Formatter(
        '[%(asctime)s][%(levelname)s]:' + context + ':[%(filename).30s:%(funcName).30s:%(lineno)3d]:%(message)s',
        datefmt='%Y-%m-%d %H:%M:%S\x1b[0m')
    console_handler = logging.StreamHandler()
    console_handler.setLevel(logging.DEBUG if verbose else logging.INFO)
    console_handler.setFormatter(formatter)
    logger.handlers = []  # drop any handlers added by earlier calls
    logger.addHandler(console_handler)
    # path_to_save_logger is the directory prefix where the log file should live
    file_handler = logging.FileHandler(path_to_save_logger + file_name)
    file_handler.setLevel(logging.DEBUG if verbose else logging.INFO)
    file_handler.setFormatter(formatter)
    logger.addHandler(file_handler)
    return logger
Then create the instance and use it:
from termcolor import colored
my_logger = set_logger(colored('GENERAL', 'yellow'), "/tmp/my_logger.txt", verbose=True)
my_logger.info("this is an info message")
my_logger.debug("this is a debug message")
.....
EDIT: assuming you're using polling2.poll()
You can pass a log level into your poll() call (see the documentation):
import logging

poll(lambda: getSomething(),
     timeout=100,
     step=1,
     check_success=IsPollingSuccessful,
     log=logging.DEBUG)
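If you want the literal "Waiting for the response" line per iteration from the question, another option is to log inside check_success, which is called once per polling attempt. Below is a sketch that uses a stand-in poll() loop so it runs without polling2; getSomething and IsPollingSuccessful only mimic the question's names:

```python
import logging
import time

logging.basicConfig(level=logging.INFO, format='%(asctime)s %(message)s')
log = logging.getLogger('poller')

responses = iter([None, None, 'done'])  # fake request statuses

def getSomething():
    return next(responses)

def IsPollingSuccessful(response):
    log.info('Waiting for the response')  # one log line per iteration
    return response == 'done'

# Stand-in for polling2.poll(): retry until check_success approves.
def poll(target, timeout, step, check_success):
    deadline = time.time() + timeout
    while True:
        result = target()
        if check_success(result):
            return result
        if time.time() > deadline:
            raise TimeoutError('polling timed out')
        time.sleep(step)

result = poll(lambda: getSomething(), timeout=100, step=0.01,
              check_success=IsPollingSuccessful)
```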
Hi, I am trying a sample program using a logger in Python:
import logging
import time,sys
import os
logger = logging.getLogger('myapp')
hdlr = logging.FileHandler('myapp1234.log')
formatter = logging.Formatter('%(asctime)s %(levelname)s %(message)s')
hdlr.setFormatter(formatter)
logger.addHandler(hdlr)
logging.getLogger().setLevel(logging.DEBUG)
logger.error('We have a problem')
logger.info('While this is just chatty')
logger.debug("Sample")
hdlr.flush()
time.sleep(10)
logger.error('We have a problem')
logger.info('While this is just chatty')
logger.debug("Sample")
hdlr.close()
This code is not writing output as it runs. I even tried handler.flush(), sys.exit(0), and sys.stdout.
When I open the file, even after killing the process, the log only appears after 120-200 seconds (sometimes it takes even longer).
How can I make it print immediately (at least by the end of the program)? Did I miss closing any handler?
Try removing the following statement.
time.sleep(10)
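If records still seem stuck after that, it may help to flush explicitly and ask the OS to commit the file to disk, which can matter on network filesystems. A sketch with the same handler setup as the question:

```python
import logging
import os

logger = logging.getLogger('myapp')
hdlr = logging.FileHandler('myapp1234.log')
hdlr.setFormatter(logging.Formatter('%(asctime)s %(levelname)s %(message)s'))
logger.addHandler(hdlr)
logger.setLevel(logging.DEBUG)

logger.error('We have a problem')
# flush() empties Python's buffer; fsync() asks the OS to write its cache too.
hdlr.flush()
os.fsync(hdlr.stream.fileno())
```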
I'm trying to understand why the same code works on OS X but not on Windows.
logger_name = 'gic_scheduler'
logger = logging.getLogger(logger_name)
logger.setLevel(logging.INFO)
ch = logging.StreamHandler()
ch.setLevel(logging.INFO)
fh = logging.FileHandler(filename=os.path.join(tmp_folder, 'scheduler.log'), encoding='utf-8')
fh.setLevel(logging.DEBUG)
logger.addHandler(ch)
logger.addHandler(fh)
executor_logger = logging.getLogger('apscheduler.executors.default')
executor_logger.setLevel(logging.DEBUG)
executor_logger.addHandler(ch)
executor_logger.addHandler(fh)
executors = {'default': ProcessPoolExecutor(5)}
scheduler = BlockingScheduler(executors=executors, logger=logger)
scheduler.add_job(go, 'interval', seconds=5)
scheduler.start()
In particular, no output is produced by the logger 'apscheduler.executors.default'. I dug into the third-party library using this logger and printed out logger.handlers: on OS X the handlers are there, but on Windows the list is empty. Any ideas why?
def run_job(job, jobstore_alias, run_times, logger_name):
    """Called by executors to run the job. Returns a list of scheduler events to be dispatched by the scheduler."""
    events = []
    logger = logging.getLogger(logger_name)
    print logger_name
    print logger.handlers
How can I get my two processes to log to a single file?
With my code, only proc1 is logging to my log file...
module.py:
import multiprocessing,logging
log = multiprocessing.log_to_stderr()
log.setLevel(logging.DEBUG)
handler = logging.FileHandler('/var/log/my.log')
handler.setLevel(logging.DEBUG)
formatter = logging.Formatter('%(asctime)s - %(name)s - %(levelname)s - %(message)s')
handler.setFormatter(formatter)
log.addHandler(handler)
def proc1():
    log.info('Hi from proc1')
    while True:
        if something:
            log.info('something')

def proc2():
    log.info('Hi from proc2')
    while True:
        if something_more:
            log.info('something more')

if __name__ == '__main__':
    p1 = multiprocessing.Process(target=proc1)
    p2 = multiprocessing.Process(target=proc2)
    p1.start()
    p2.start()
As stated at https://docs.python.org/2/howto/logging-cookbook.html#logging-to-a-single-file-from-multiple-processes:
"Although logging is thread-safe, and logging to a single file from
multiple threads in a single process is supported, logging to a single
file from multiple processes is not supported"
So you should find another approach, e.g. implementing a logging server:
https://docs.python.org/2/howto/logging-cookbook.html#sending-and-receiving-logging-events-across-a-network
So I would like to log errors in memory (I don't want to write any logfile) and access them later to display them in bulk after the program finishes.
It would look like:
...
Program executing ...
...
Errors occurred!
Error 1 : ...
Error 2 : ...
I am not asking how to do it myself, but whether some existing module is capable of that. I would like to stick to standard modules as much as possible.
You could pass a StringIO buffer to a StreamHandler of the standard logging module. In Python 2:
import logging, StringIO
print 'Setting up logging ...'
stream = StringIO.StringIO()
logger = logging.getLogger()
handler = logging.StreamHandler(stream)
logger.addHandler(handler)
print 'Starting main program ...'
logger.warning('This is serious')
logger.error('This is really bad')
print 'Finished main, printing log messages ...'
print stream.getvalue()
As commented by Hettomei, the imports should be changed slightly for Python 3:
import logging, io
print('Setting up logging ...')
stream = io.StringIO()
logger = logging.getLogger()
handler = logging.StreamHandler(stream)
logger.addHandler(handler)
print('Starting main program ...')
logger.warning('This is serious')
logger.error('This is really bad')
print('Finished main, printing log messages ...')
print(stream.getvalue())
In both cases, you get the desired result:
Setting up logging ...
Starting main program ...
Finished main, printing log messages ...
This is serious
This is really bad
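The standard library also has a handler built for exactly this buffering use case: logging.handlers.MemoryHandler holds records in memory and only forwards them to a target handler when flushed, when its capacity fills, or when a record at flushLevel arrives. A sketch using a StringIO-backed target so nothing touches disk:

```python
import io
import logging
import logging.handlers

stream = io.StringIO()
target = logging.StreamHandler(stream)
# Records accumulate in memory; they reach the target only on flush(),
# when capacity is exceeded, or when a CRITICAL record arrives.
memory = logging.handlers.MemoryHandler(capacity=1000,
                                        flushLevel=logging.CRITICAL,
                                        target=target)
logger = logging.getLogger('buffered')
logger.addHandler(memory)

logger.warning('This is serious')
logger.error('This is really bad')
# Nothing has been written yet; dump everything in bulk now:
memory.flush()
print(stream.getvalue())
```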