Python logging - different file on each loop iteration - python

I have a Python 2.7 script that runs 24/7, and I want the logging module to produce a different log file each time the loop executes. Each file would have the timestamp as its filename to avoid confusion.
So far I have:
import datetime
import logging

def main():
    while True:
        datetimenow = datetime.datetime.now().strftime("%Y-%m-%d-%H:%M:%S")
        logging.basicConfig(format='%(asctime)s %(levelname)-8s %(message)s',
                            datefmt='%a, %d %b %Y %H:%M:%S',
                            filename="logs/" + datetimenow + '.log',
                            level=logging.INFO)
        logging.getLogger().addHandler(logging.StreamHandler())
        # ... start of action

if __name__ == "__main__":
    main()
This produces a single file, and when the loop starts again it doesn't close it and open a new one.
Also, the console output appears to be duplicated: each line is printed to the console twice.
Any ideas how to fix these?

OK, I got it working by removing the basicConfig snippet and building two handlers: one inside the loop for the file (with a different timestamp each iteration) and one at module level for the console. The key is to remove (and close) the file handler at the end of the loop, before adding it again with a different date. Here is the complete example:
import logging
import time
import datetime

logger = logging.getLogger('simple_example')
logger.setLevel(logging.INFO)

# Console handler, created once at module level.
con = logging.StreamHandler()
con.setLevel(logging.INFO)
formatter = logging.Formatter('%(asctime)s %(levelname)-8s %(message)s')
con.setFormatter(formatter)
logger.addHandler(con)

def main():
    a = 0
    while True:
        # File handler, recreated each iteration with a fresh timestamp.
        datetimenow = datetime.datetime.now().strftime("%Y-%m-%d-%H:%M:%S")
        ch = logging.FileHandler("logs/" + datetimenow + '.log')
        ch.setLevel(logging.INFO)
        ch.setFormatter(formatter)
        logger.addHandler(ch)

        time.sleep(5)
        a += 1
        logger.warning("logging step " + str(a))
        time.sleep(5)

        logger.removeHandler(ch)
        ch.close()  # release the file handle before the next iteration

if __name__ == "__main__":
    main()
time.sleep(5) is there only for testing, so that the loop doesn't run too fast.
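If the goal is simply timestamped log files from a long-running process, a hedged alternative (not what the question asked for, but standard library) is logging.handlers.TimedRotatingFileHandler, which rotates the file on a fixed interval and suffixes rotated files with a timestamp. A minimal sketch, with "logs/app.log" as an assumed path:

import logging
import logging.handlers

logger = logging.getLogger('simple_example')
logger.setLevel(logging.INFO)

# Rotate the log file every hour; old files get a timestamp suffix.
handler = logging.handlers.TimedRotatingFileHandler(
    "logs/app.log", when="H", interval=1)
handler.setFormatter(
    logging.Formatter('%(asctime)s %(levelname)-8s %(message)s'))
logger.addHandler(handler)

logger.info("rotation is handled for you")

The trade-off is that rotation happens on the clock interval rather than per loop iteration, so it only fits if the loop runs on a predictable cadence.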

Related

Python polling library show message for each iteration

A polling method is implemented; it runs every second to check the request status. Is it possible to add a log line for each iteration of the polling?
result = poll(
    lambda: getSomething(),
    timeout=100,
    step=1,
    check_success=IsPollingSuccessfull
)
I need something like:
Waiting for the response + time
Waiting for the response + time
Waiting for the response + time
Waiting for the response + time
EDIT:
I want to print the log to the console.
Have you considered Python's logging module? Here is the documentation.
You can create a logger instance that saves all messages to a file. Then you can use it everywhere in your code and log anything you'd like at different logging levels.
Here is how I create and use the logger:
import logging

# Method to create an instance of a logger
def set_logger(context, file_name, verbose=False):
    logger = logging.getLogger(context)
    logger.setLevel(logging.DEBUG if verbose else logging.INFO)
    # The trailing \x1b[0m resets the termcolor ANSI color after the timestamp.
    formatter = logging.Formatter(
        '[%(asctime)s][%(levelname)s]:' + context +
        ':[%(filename).30s:%(funcName).30s:%(lineno)3d]:%(message)s',
        datefmt='%Y-%m-%d %H:%M:%S\x1b[0m')

    console_handler = logging.StreamHandler()
    console_handler.setLevel(logging.DEBUG if verbose else logging.INFO)
    console_handler.setFormatter(formatter)

    logger.handlers = []  # drop any handlers left over from a previous call
    logger.addHandler(console_handler)

    # path_to_save_logger is a placeholder for the directory of your choice
    file_handler = logging.FileHandler(path_to_save_logger + file_name)
    file_handler.setLevel(logging.DEBUG if verbose else logging.INFO)
    file_handler.setFormatter(formatter)
    logger.addHandler(file_handler)
    return logger
Then create the instance and use it:
from termcolor import colored
my_logger = set_logger(colored('GENERAL', 'yellow'), "/tmp/my_logger.txt", verbose=True)
my_logger.info("this is an info message")
my_logger.debug("this is a debug message")
.....
EDIT: assuming you're using polling2.poll()
You can pass a log level into your poll() call (see the documentation); polling2 then logs the return value of the polling target at that level on each attempt:
import logging

poll(lambda: getSomething(),
     timeout=100,
     step=1,
     check_success=IsPollingSuccessfull,
     log=logging.DEBUG)
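If you want the literal "Waiting for the response" line on every iteration, a hedged alternative is to do the logging yourself inside the success check. A minimal sketch, reusing the question's getSomething and IsPollingSuccessfull (both assumed to exist):

import datetime
import logging

from polling2 import poll

logging.basicConfig(level=logging.INFO, format='%(message)s')

def check_success_logged(response):
    # Log one line per polling iteration, then delegate to the
    # question's real success check.
    logging.info('Waiting for the response %s', datetime.datetime.now())
    return IsPollingSuccessfull(response)

result = poll(
    lambda: getSomething(),
    timeout=100,
    step=1,
    check_success=check_success_logged
)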

Unformatted logging prints on the console (it only prints formatted to a file) / Python

The usual logger configuration to log to a file and print to stdout did not work for me.
I want the logging details to go both into prog_log.txt AND to the console. I have:
# Debug Settings
import logging
#logging.disable()  # when the program is ready, uncomment this line to silence all logging messages

logger = logging.getLogger('')
logging.basicConfig(level=logging.DEBUG,
                    filename='prog_log.txt',
                    format='%(asctime)s - %(levelname)s - %(message)s',
                    filemode='w')
logger.addHandler(logging.StreamHandler())
logging.debug('Start of Program')
The above writes properly formatted logging details into prog_log.txt, but it prints unformatted logging messages to the console, duplicated once more each time I re-run the program, like below:
In [3]: runfile('/Volumes/GoogleDrive/Mon Drive/MAC_test.py', wdir='/Volumes/GoogleDrive/Mon Drive/')
Start of Program
Start of Program
Start of Program  # after the third run
Any help welcome ;)
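A minimal sketch of one likely fix, assuming the unformatted output comes from the bare StreamHandler (it never got a formatter) and the multiplication comes from handlers accumulating across re-runs in the same interpreter session (e.g. Spyder/IPython):

import logging

formatter = logging.Formatter('%(asctime)s - %(levelname)s - %(message)s')

logger = logging.getLogger('')
logger.setLevel(logging.DEBUG)

# Re-running in the same session keeps the old handlers on the root
# logger, so clear them out before adding fresh ones.
for h in list(logger.handlers):
    logger.removeHandler(h)
    h.close()

file_handler = logging.FileHandler('prog_log.txt', mode='w')
file_handler.setFormatter(formatter)
logger.addHandler(file_handler)

console_handler = logging.StreamHandler()
console_handler.setFormatter(formatter)  # format console output too
logger.addHandler(console_handler)

logging.debug('Start of Program')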

Python logger error

Hi, I am trying a sample program using the logger in Python:
import logging
import time
import sys
import os

logger = logging.getLogger('myapp')
hdlr = logging.FileHandler('myapp1234.log')
formatter = logging.Formatter('%(asctime)s %(levelname)s %(message)s')
hdlr.setFormatter(formatter)
logger.addHandler(hdlr)
logging.getLogger().setLevel(logging.DEBUG)

logger.error('We have a problem')
logger.info('While this is just chatty')
logger.debug("Sample")
hdlr.flush()

time.sleep(10)

logger.error('We have a problem')
logger.info('While this is just chatty')
logger.debug("Sample")
hdlr.close()
This code is not writing to the file as it runs. I even tried handler.flush(), sys.exit(0), and sys.stdout.
When I try to open the file, even after killing the process, I get the following error. The log only appears after 120-200 seconds (sometimes it takes even longer).
How can I make it write immediately (at least by the end of the program)?
Did I miss closing a handler?
Try removing the following statement:
time.sleep(10)
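If the goal is to guarantee that everything is flushed and closed by the end of the program, one standard-library option (a suggestion beyond the original answer) is logging.shutdown(), which flushes and closes all handlers registered with the logging module:

import logging

logger = logging.getLogger('myapp')
logger.addHandler(logging.FileHandler('myapp1234.log'))
logger.setLevel(logging.DEBUG)

logger.error('We have a problem')

# Flush and close every handler known to the logging module.
logging.shutdown()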

Python script completes, but Windows Task Scheduler shows running

I wrote a small script to clear out stopped torrents in Transmission. The script writes a log, does its work in Transmission, and exits. Script below:
# Module imports
import transmissionrpc
import os
import logging
import sys
import datetime

# Set variables before the main logic
logdir = 'D:\\scripts\\logs'
myDate = datetime.datetime.now().strftime("%y-%m-%d")
myTime = datetime.datetime.now().strftime("%H:%M")
myDateTime = datetime.datetime.now().strftime("%y-%m-%d %H:%M")

if not os.path.exists(logdir):
    os.makedirs(logdir)

logger = logging.getLogger('transmissionrpc')
logdate = datetime.datetime.now().strftime("%y-%m-%d %H%M")
logfile = logdir + "\\CTS-" + logdate + '.log'
hdlr = logging.FileHandler(logfile)
formatter = logging.Formatter('%(asctime)s %(levelname)s %(message)s')
hdlr.setFormatter(formatter)
logger.addHandler(hdlr)
logger.setLevel(logging.INFO)

logger.info("Begin Transmission clean")
tc = transmissionrpc.Client('localhost', port=9091, user='USER', password='PASS')
for t in tc.get_torrents():
    if t.status == 'stopped':
        tc.remove_torrent(t.id, delete_data=True)
        print('Removing Torrent %s - %s' % (t.id, t.name))
        logger.info('Removing Torrent %s - %s' % (t.id, t.name))
logger.info("No more stopped torrents. Exiting")
sys.exit()
I am running the script in Task Scheduler as pythonw D:\Path\to\script.py
How can I get Task Scheduler to properly show the script has ended?
Add these lines to your code:
import os
print(os.getpid())
Compare the number that it prints with the process ID in the Task Scheduler. If you don't see a match, your script must be running in another process.

Python 2.7 Multiprocessing logging and loops

How can I make my two processes log to a single file?
With my code, only proc1 is logging to my log file...
module.py:
import multiprocessing
import logging

log = multiprocessing.log_to_stderr()
log.setLevel(logging.DEBUG)

handler = logging.FileHandler('/var/log/my.log')
handler.setLevel(logging.DEBUG)
formatter = logging.Formatter('%(asctime)s - %(name)s - %(levelname)s - %(message)s')
handler.setFormatter(formatter)
log.addHandler(handler)

def proc1():
    log.info('Hi from proc1')
    while True:
        if something:
            log.info('something')

def proc2():
    log.info('Hi from proc2')
    while True:
        if something_more:
            log.info('something more')

if __name__ == '__main__':
    p1 = multiprocessing.Process(target=proc1)
    p2 = multiprocessing.Process(target=proc2)
    p1.start()
    p2.start()
As said at https://docs.python.org/2/howto/logging-cookbook.html#logging-to-a-single-file-from-multiple-processes
"Although logging is thread-safe, and logging to a single file from
multiple threads in a single process is supported, logging to a single
file from multiple processes is not supported"
So you need another approach, e.g. implementing a logging server:
https://docs.python.org/2/howto/logging-cookbook.html#sending-and-receiving-logging-events-across-a-network
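A minimal sketch of one such approach, assuming (beyond the answer's text) that a multiprocessing.Queue is acceptable: a dedicated listener process owns the file, and worker processes put LogRecords on the queue instead of writing to the file directly:

import logging
import multiprocessing

def listener(queue):
    # The listener process is the only one that touches the file.
    root = logging.getLogger()
    handler = logging.FileHandler('/var/log/my.log')
    handler.setFormatter(logging.Formatter(
        '%(asctime)s - %(name)s - %(levelname)s - %(message)s'))
    root.addHandler(handler)
    while True:
        record = queue.get()
        if record is None:  # sentinel: shut down
            break
        logging.getLogger(record.name).handle(record)

def worker(queue, name):
    # Build a LogRecord and ship it to the listener over the queue.
    logger = logging.getLogger(name)
    record = logger.makeRecord(name, logging.INFO, __file__, 0,
                               'Hi from %s' % name, (), None)
    queue.put(record)

if __name__ == '__main__':
    queue = multiprocessing.Queue()
    lp = multiprocessing.Process(target=listener, args=(queue,))
    lp.start()
    workers = [multiprocessing.Process(target=worker, args=(queue, n))
               for n in ('proc1', 'proc2')]
    for w in workers:
        w.start()
    for w in workers:
        w.join()
    queue.put(None)  # tell the listener to finish
    lp.join()

On Python 3.2+ the same idea is packaged as logging.handlers.QueueHandler and QueueListener; on 2.7 you wire the queue up yourself as above.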
