Python logger error - python

Hi, I am trying a sample program using the logger in Python:
import logging
import time,sys
import os
logger = logging.getLogger('myapp')
hdlr = logging.FileHandler('myapp1234.log')
formatter = logging.Formatter('%(asctime)s %(levelname)s %(message)s')
hdlr.setFormatter(formatter)
logger.addHandler(hdlr)
logging.getLogger().setLevel(logging.DEBUG)
logger.error('We have a problem')
logger.info('While this is just chatty')
logger.debug("Sample")
hdlr.flush()
time.sleep(10)
logger.error('We have a problem')
logger.info('While this is just chatty')
logger.debug("Sample")
hdlr.close()
This code is not writing to the log file as it runs. I have tried handler.flush(), sys.exit(0), and sys.stdout. When I try to open the file, even after killing the process, I get an error. The log only shows up after 120-200 seconds (sometimes it takes even longer).
How can I get it written immediately (at least by the end of the program)?
Did I miss closing a handler?

Try removing the following statement.
time.sleep(10)
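If the entries still show up late even without the sleep, here is a minimal sketch of one way to force each record out to disk; the handler subclass and its name are my own illustration, not part of the standard API, and logging.shutdown() at the end flushes and closes every handler:
import logging
import os

class FsyncFileHandler(logging.FileHandler):
    # FileHandler already flushes the stream after each record;
    # os.fsync additionally asks the OS to write the file to disk right away.
    def emit(self, record):
        super().emit(record)
        if self.stream is not None:
            os.fsync(self.stream.fileno())

logger = logging.getLogger('myapp')
logger.setLevel(logging.DEBUG)
logger.addHandler(FsyncFileHandler('myapp1234.log'))
logger.error('We have a problem')
logging.shutdown()  # flush and close every handler at program exit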

Related

unformatted logging prints on the console (it only prints formatted to a file) / Python

The usual logger configuration for logging to a file and printing to stdout did not work for me. I want the logging details to go both to prog_log.txt AND to the console. I have:
# Debug Settings
import logging
#logging.disable()  # when the program is ready, uncomment this line to remove all logging messages
logger = logging.getLogger('')
logging.basicConfig(level=logging.DEBUG,
                    filename='prog_log.txt',
                    format='%(asctime)s - %(levelname)s - %(message)s',
                    filemode='w')
logger.addHandler(logging.StreamHandler())
logging.debug('Start of Program')
The above prints the logging details into the prog_log.txt file properly formatted, but it prints unformatted logging messages multiple times in the console as I re-run the program, like below:
In [3]: runfile('/Volumes/GoogleDrive/Mon Drive/MAC_test.py', wdir='/Volumes/GoogleDrive/Mon Drive/')
Start of Program
Start of Program
Start of Program #after third running
Any help welcome ;)
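One possible sketch of a fix (my own suggestion, not from the thread): basicConfig() only configures the file handler, so the extra StreamHandler needs its own formatter, and guarding against re-adding it avoids the duplicates that build up when the script is re-run in the same IPython session:
import logging

fmt = '%(asctime)s - %(levelname)s - %(message)s'
logger = logging.getLogger('')
logging.basicConfig(level=logging.DEBUG, filename='prog_log.txt',
                    filemode='w', format=fmt)

# FileHandler is a subclass of StreamHandler, so exclude it when checking
# whether a console handler was already attached by a previous run.
has_console = any(isinstance(h, logging.StreamHandler)
                  and not isinstance(h, logging.FileHandler)
                  for h in logger.handlers)
if not has_console:
    console = logging.StreamHandler()
    console.setFormatter(logging.Formatter(fmt))
    logger.addHandler(console)

logging.debug('Start of Program')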

some of the logs are missing with TimedRotatingFileHandler

I need your help in solving this problem. I am using Python's TimedRotatingFileHandler to append logs to a file for 24 hours and then rotate the file. This works as expected, but every time I run my script, TimedRotatingFileHandler only logs up to a certain point. For example: if I have 10 log statements, each run only logs the first 5, and then nothing gets logged in the log file. Appending happens correctly, though. Not sure what's happening. Here is the Python code:
# Configuring the logger
import logging
from logging.handlers import TimedRotatingFileHandler

LOG = logging.getLogger(__name__)

def config_Logs():
    # format the log entries
    global LOG
    formatter = logging.Formatter('[%(asctime)s] : {%(pathname)s:%(lineno)d} %(levelname)s - %(message)s')
    handler = TimedRotatingFileHandler('collecting_logs.log',
                                       when='h', interval=24, backupCount=0)
    handler.setFormatter(formatter)
    LOG.addHandler(handler)
    LOG.setLevel(logging.DEBUG)
I am calling config_Logs() at the beginning of the script.

How to cleanly start logging multiple times by removing all handlers with "removeHandler(...)"?

During development I use the Python logging module. For example, after an unhandled exception I'd like to re-run the program and freshly initialize the logging.
For some reason it seems that I'm unable to remove all handlers from the log instance, even though it was never deleted.
import logging
log = logging.getLogger(__name__)
print('Existing handlers:')
print(log.handlers)
# Remove all handlers:
for handler in log.handlers:  # get rid of existing old handlers
    print('removing handler %s' % handler)
    log.removeHandler(handler)
# expecting "[]" for log.handlers
print('Existing handlers after removal:')
print(log.handlers)
fh1 = logging.StreamHandler()
formatter1 = logging.Formatter('fh1: %(levelname)s - %(message)s')
fh1.setFormatter(formatter1)
fh2 = logging.StreamHandler()
formatter2 = logging.Formatter('fh2: %(levelname)s - %(message)s')
fh2.setFormatter(formatter2)
log.addHandler(fh1)
log.addHandler(fh2)
log.error('Some logging occurs here')
On the first run in a fresh IPython console I get:
fh1: ERROR - Some logging occurs here
fh2: ERROR - Some logging occurs here
Existing handlers:
[]
Existing handlers after removal:
[]
This is almost what I expected. The order of appearance bugs me a bit, though. Why does the log output appear before the print output? It gets really strange when starting the program a second time:
fh2: ERROR - Some logging occurs here
fh1: ERROR - Some logging occurs here
fh2: ERROR - Some logging occurs here
Existing handlers:
[<StreamHandler stderr (NOTSET)>, <StreamHandler stderr (NOTSET)>]
removing handler <StreamHandler stderr (NOTSET)>
Existing handlers after removal:
[<StreamHandler stderr (NOTSET)>]
It seems that the for loop removing the handlers is executed only once.
And then, consequently, I get 3 logging entries, which is not what I want. For the second run I expected:
Existing handlers:
[<StreamHandler stderr (NOTSET)>, <StreamHandler stderr (NOTSET)>]
removing handler <StreamHandler stderr (NOTSET)>
removing handler <StreamHandler stderr (NOTSET)>
Existing handlers after removal:
[]
fh1: ERROR - Some logging occurs here
fh2: ERROR - Some logging occurs here
I seem to have missed part of the concept.
Why does the for loop run only once, although len(log.handlers) returns 2 after the first run and 3 after the second run?
Why is the order of the print and logging output mixed up?
And most important:
How can all the handlers be removed properly? Or how can the logging be forced to start cleanly?
I'm using Python 3.7.1 and logging 0.5.1.2.
I think this issue is more related to your OS and your computer's hardware characteristics. Everything works fine and prints continuously on my side.
About the 3 logging entries - maybe there is a similar issue with flushing text in the logging module.
Note: in the part where you remove the log handlers, you should make a copy of the list before iterating over it; removing items from a list while iterating over it skips elements, which is why you weren't clearing all the handlers. Like this:
# Remove all handlers:
for handler in log.handlers[:]:  # iterate over a copy to get rid of existing old handlers
    print('removing handler %s' % handler)
    log.removeHandler(handler)
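As a quick illustration of why the copy matters (a plain list, nothing logging-specific):
handlers = ['fh1', 'fh2']
for h in handlers:        # iterating over the same list we are mutating
    handlers.remove(h)    # removing shifts the remaining items to the left
print(handlers)           # prints ['fh2'] - the second handler was skipped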
This way you remove all handlers:
logger = logging.getLogger()
while logger.hasHandlers():
    logger.removeHandler(logger.handlers[0])

Python Logging different file on each turn

I have a Python 2.7 script that runs 24/7, and I want the logging module to produce a different log file each time the loop is executed. Each file would have the timestamp as its filename to avoid confusion.
So far I have:
import datetime
import logging

def main():
    while True:
        datetimenow = datetime.datetime.now().strftime("%Y-%m-%d-%H:%M:%S")
        logging.basicConfig(format='%(asctime)s %(levelname)-8s %(message)s',
                            datefmt='%a, %d %b %Y %H:%M:%S', filename="logs/" + datetimenow + '.log',
                            level=logging.INFO)
        logging.getLogger().addHandler(logging.StreamHandler())
        # ... start of action

if __name__ == "__main__":
    main()
This produces one file, but when the loop starts again it doesn't close it and open a new file.
Also, the console output seems to be printed twice, as each line appears on the console two times.
Any ideas how to fix this?
OK, I got it working by removing the basicConfig snippet and building two handlers: one inside the loop for the file, with a different timestamp each iteration, and one outside the loop for the console. The key is to remove the file handler at the end of the loop, before adding it again with a different date. Here is the complete example:
import logging
import time
import datetime

logger = logging.getLogger('simple_example')
logger.setLevel(logging.INFO)

con = logging.StreamHandler()
con.setLevel(logging.INFO)
formatter = logging.Formatter('%(asctime)s %(levelname)-8s %(message)s')
con.setFormatter(formatter)
logger.addHandler(con)

def main():
    a = 0
    while True:
        datetimenow = datetime.datetime.now().strftime("%Y-%m-%d-%H:%M:%S")
        ch = logging.FileHandler("logs/" + datetimenow + '.log')
        ch.setLevel(logging.INFO)
        formatter = logging.Formatter('%(asctime)s %(levelname)-8s %(message)s')
        ch.setFormatter(formatter)
        logger.addHandler(ch)
        time.sleep(5)
        a += 1
        logger.warning("logging step " + str(a))
        time.sleep(5)
        logger.removeHandler(ch)

if __name__ == "__main__":
    main()
time.sleep(5) is used here just for testing, so that it doesn't go too fast.
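One small addition worth considering (my own suggestion, not part of the original answer): logger.removeHandler() only detaches the handler and does not close the underlying file, so on a long-running loop it may be worth releasing the file descriptor explicitly at the end of each iteration:
logger.removeHandler(ch)
ch.close()  # removeHandler() detaches the handler but does not close the file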

How can I save my log file in Python when the process is killed

I am learning the logging module in Python.
However, if I log like this
logging.basicConfig(filename='mylog.log', format='%(asctime)s - %(levelname)s - %(message)s', level=logging.DEBUG)
while 1:
    logging.debug("something")
    time.sleep(1)
and interrupt the process with a Ctrl-C event (or the process is killed), I get nothing in the log file.
Can I save as many of the logs as possible, whatever happens?
————
EDIT
The question seems to have become more complex:
I have imported scipy, numpy, and pyaudio in my script, and I get:
forrtl: error (200): program aborting due to control-C event
instead of KeyboardInterrupt.
I have read this question: Ctrl-C crashes Python after importing scipy.stats
and added these lines to my script:
import _thread
import win32api

def handler(dwCtrlType, hook_sigint=_thread.interrupt_main):
    if dwCtrlType == 0:  # CTRL_C_EVENT
        hook_sigint()
        return 1  # don't chain to the next handler
    return 0  # chain to the next handler

win32api.SetConsoleCtrlHandler(handler, 1)  # register the handler, as in the linked answer
then:
try:
    main()
except KeyboardInterrupt:
    print("exit manually")
    exit()
Now, the script stops without any info if I use ctrl+C. print("exit manually") does not appear. Of course, no logs.
Solved
A stupid mistake!
I was running the script with the working directory set to System32 while looking for the log in the script's own path.
After changing the path like this, all is well:
logging.basicConfig(filename=os.path.dirname(sys.argv[0])+os.sep+'mylog.log',format='%(asctime)s - %(levelname)s - %(message)s', level=logging.DEBUG)
When you log using logging.debug, logging.info, ..., logging.critical, you're using the root logger. I assume you're not doing anything to configure logging that you haven't shown, so you're running with the out-of-the-box, default configuration. (This is set up for you by the first call to logging.debug, which calls logging.basicConfig().)
The default logging level of the root logger is logging.WARNING (as mentioned in e.g. https://docs.python.org/3/howto/logging.html#logging-basic-tutorial). Thus, nothing you log with logging.debug or logging.info will appear :) If you change logging.debug to logging.warning (or .error or .critical), you will see logging output.
For your code to work as is, set the logging level of the root logger to logging.DEBUG before the loop:
import logging
import time

# logging.getLogger() returns the root logger
logging.getLogger().setLevel(logging.DEBUG)

while 1:
    logging.debug("something")
    time.sleep(1)
For the CTRL + C event use a try-except to catch the KeyboardInterrupt exception.
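A minimal sketch putting both points together (the file name comes from the question; the explicit logging.shutdown() just makes the final flush visible):
import logging
import time

logging.basicConfig(filename='mylog.log',
                    format='%(asctime)s - %(levelname)s - %(message)s',
                    level=logging.DEBUG)

try:
    while 1:
        logging.debug("something")
        time.sleep(1)
except KeyboardInterrupt:
    logging.warning("interrupted with Ctrl-C")
finally:
    logging.shutdown()  # flush and close all handlers before exiting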
