Currently, this is what I have (testlog.py):
import logging
import logging.handlers
filename = "example.log"
logger = logging.getLogger()
logger.setLevel(logging.DEBUG)
formatter = logging.Formatter("%(asctime)s - %(name)s - %(levelname)s - %(message)s")
handler = logging.handlers.RotatingFileHandler(filename, mode='w', backupCount=5)
handler.setLevel(logging.DEBUG)
handler.setFormatter(formatter)
logger.addHandler(handler)
ch = logging.StreamHandler()
ch.setLevel(logging.DEBUG)
ch.setFormatter(formatter)
logger.addHandler(ch)
for i in range(10):
    logger.debug("testx")  # where I alternate x from 1 through 9 to see the output
It currently successfully prints out to the console and to example.log, which is what I want.
Every time I run it, it makes a new file and replaces the old example.log like so:
run with logger.debug("test1") - example.log will contain test1 10 times like it should.
run with logger.debug("test2") - it rewrites example.log to contain test2 10 times.
etc...
However, I would like for the code to make a new log file every time I run the program so that I have:
example.log
example.log1
example.log2
...
example.log5
In conclusion, I'd like for this file to print the log message to the console, to the log file, and I would like a new log file (up to *.5) whenever I run the program.
logging.handlers.RotatingFileHandler rotates your logs based on size (its time-based sibling is TimedRotatingFileHandler), but you can force a rollover yourself with RotatingFileHandler.doRollover(), so something like:
import logging.handlers
import os
filename = "example.log"
# your logging setup
should_roll_over = os.path.isfile(filename)
handler = logging.handlers.RotatingFileHandler(filename, mode='w', backupCount=5)
if should_roll_over:  # log already exists, roll over!
    handler.doRollover()
# the rest of your setup...
Should work like a charm.
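Putting the asker's script and the forced-rollover trick together, a complete sketch could look like this (untested against the original environment; note that RotatingFileHandler names its backups example.log.1, example.log.2, etc., with a dot):

```python
import logging
import logging.handlers
import os

filename = "example.log"

# Roll over at startup only if a previous run left a log file behind
should_roll_over = os.path.isfile(filename)

handler = logging.handlers.RotatingFileHandler(filename, mode='w', backupCount=5)
if should_roll_over:  # log already exists, roll over!
    handler.doRollover()

formatter = logging.Formatter("%(asctime)s - %(name)s - %(levelname)s - %(message)s")
handler.setLevel(logging.DEBUG)
handler.setFormatter(formatter)

# Console handler, as in the original script
ch = logging.StreamHandler()
ch.setLevel(logging.DEBUG)
ch.setFormatter(formatter)

logger = logging.getLogger()
logger.setLevel(logging.DEBUG)
logger.addHandler(handler)
logger.addHandler(ch)

for i in range(10):
    logger.debug("test")
```

Each run then pushes the previous example.log to example.log.1, example.log.1 to example.log.2, and so on, up to backupCount.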
If you change the mode to 'a' in the solution posted by zwer, then you get a single empty log file on the first run; after that it works as intended. Just increase backupCount by 1. :-)
I have written a bonobo script to extract some data, and I would like to use python's logging module to write some status messages to a file while my job runs. I've done the following:
import logging
from datetime import date

logging.basicConfig(filename=INFO["LOGFILE_PATH"] + r'\bonobo_job_' + date.today().isoformat(),
                    filemode='a',
                    format='%(name)s - %(levelname)s - %(message)s')
If I simply run the script in Pycharm, it logs to the file as I would expect. But if I run it from the command line with the bonobo run command, it ignores the filename and logs to stdout. How do I fix this? Is there a flag or environment variable I need to set somewhere?
Okay, I figured it out. For some reason basicConfig doesn't work; I had to use getLogger and add a FileHandler. So in main I did this:
logger = logging.getLogger('bonobo_logger')
ch = logging.FileHandler(logfilename)
formatter = logging.Formatter('%(name)s - %(levelname)s - %(message)s')
ch.setFormatter(formatter)
logger.addHandler(ch)
Then in every node in my graph where I wanted to do logging, I called:
logger = logging.getLogger('bonobo_logger')
and used the logger object to write out all messages. If anyone knows a better way of doing it, please let me know.
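One possible explanation (my assumption, not tested with bonobo): basicConfig is a no-op when the root logger already has handlers, which is likely what happens under the bonobo run command. On Python 3.8+ you can pass force=True to discard the pre-existing handlers; the filename below is just an example:

```python
import logging

# force=True (Python 3.8+) removes any handlers already attached to the
# root logger, so this configuration wins even if a framework such as
# bonobo configured logging first.
logging.basicConfig(filename='bonobo_job.log',  # example path
                    filemode='a',
                    format='%(name)s - %(levelname)s - %(message)s',
                    force=True)

logging.getLogger('bonobo_logger').warning('job started')
```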
I'm working on a project with several modules that should all log to the same file.
Initializing the logger:
import logging
from logging.handlers import RotatingFileHandler

parentLogger = logging.getLogger('PARENTLOGGER')
logger = logging.getLogger('PARENTLOGGER.' + __name__)

# set format
fmt = logging.Formatter('%(asctime)s [%(levelname)s] %(name)s: %(message)s')

# set handler for command line
ch = logging.StreamHandler()
ch.setLevel(logging.DEBUG)
ch.setFormatter(fmt)

# set file handler
open('data/logs.log', 'w+').close()  # create the file in case it doesn't exist; I got exceptions otherwise
fh = RotatingFileHandler('data/logs.log', maxBytes=5242880, backupCount=1)
fh.setLevel(logging.DEBUG)
fh.setFormatter(fmt)

parentLogger.addHandler(fh)
parentLogger.addHandler(ch)
Then, in all the other modules, I'm calling:
self._logger = logging.getLogger('PARENTLOGGER.' + __name__)
The problem is that the rotating file handler won't write anything to the log file, in any module.
Am I configuring the logger correctly? I've tried several examples from Python's logging cookbook without success.
Regards and thanks in advance!
You should tell logging when and what to log. Also, the log file will be created by default, no need to create it yourself. Here's a simple example:
$ cat test_log.py
import logging
log = '/home/user/test_log.log'
logging.basicConfig(filename=log,
                    format='%(asctime)s [%(levelname)s]: %(message)s at line: %(lineno)d (func: "%(funcName)s")')
try:
    this_will_fail
except Exception as err:
    logging.error('%s' % err)
$ python test_log.py
$ cat /home/user/test_log.log
2018-02-07 11:31:24,127 [ERROR]: name 'this_will_fail' is not defined at line: 10 (func: "<module>")
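For the parent/child setup in the question above, one detail worth checking (my observation, not part of this answer): neither PARENTLOGGER nor its children ever get a level set, so they inherit the root logger's default of WARNING, and all DEBUG/INFO records are discarded before any handler sees them. A minimal sketch with the level set on the parent (file path simplified from the question):

```python
import logging
from logging.handlers import RotatingFileHandler

parentLogger = logging.getLogger('PARENTLOGGER')
parentLogger.setLevel(logging.DEBUG)  # without this, WARNING is inherited from root

fmt = logging.Formatter('%(asctime)s [%(levelname)s] %(name)s: %(message)s')
fh = RotatingFileHandler('logs.log', maxBytes=5242880, backupCount=1)
fh.setFormatter(fmt)
parentLogger.addHandler(fh)

# Child loggers propagate their records up to PARENTLOGGER's handlers
child = logging.getLogger('PARENTLOGGER.module_a')
child.debug('hello from module_a')
```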
When I copy-paste the following x times into the Python prompt, it appends the log line x times to the end of the designated file.
How can I change the code so that each time I paste it into the prompt, I simply overwrite the existing file? (The code does not seem to honor the mode='w' option, or I don't understand its meaning.)
def MinimalLoggingf():
    import logging
    import os

    paths = {'work': ''}
    logger = logging.getLogger('oneDayFileLoader')
    LogHandler = logging.FileHandler(os.path.join(paths["work"], "oneDayFileLoader.log"), mode='w')
    formatter = logging.Formatter('%(asctime)s %(levelname)s %(message)s')
    LogHandler.setFormatter(formatter)
    logger.addHandler(LogHandler)
    logger.setLevel(logging.DEBUG)
    # Let's say this is an error:
    if 1 == 1:
        logger.error('overwrite')
So I run it once:
MinimalLoggingf()
Now, I want the new log file to overwrite the log file created on the previous run:
MinimalLoggingf()
If I understand correctly, you're running a certain Python process for days at a time, and want to rotate the log every day. I'd recommend you go a different route, using a handler that automatically rotates the log file, e.g. http://www.blog.pythonlibrary.org/2014/02/11/python-how-to-create-rotating-logs/
But, if you want to control the log using the process in the same method you're comfortable with (Python console, pasting in code.. extremely unpretty and error prone, but sometimes quick-n-dirty is sufficient for the task at hand), well...
Your issue is that you create a new FileHandler each time you paste in the code, and you add it to the Logger object. You end up with a logger that has X FileHandlers attached to it, all of them writing to the same file. Try this:
import logging
import os

paths = {'work': ''}
logger = logging.getLogger('oneDayFileLoader')
if logger.handlers:
    logger.handlers[0].close()
    logger.handlers = []
logHandler = logging.FileHandler(os.path.join(paths["work"], "oneDayFileLoader.log"), mode='w')
formatter = logging.Formatter('%(asctime)s %(levelname)s %(message)s')
logHandler.setFormatter(formatter)
logger.addHandler(logHandler)
logger.setLevel(logging.DEBUG)
logger.error('overwrite')
Based on your request, I've also added an example using TimedRotatingFileHandler. Note I haven't tested it locally, so if you have issues ping back.
import logging
import os
from logging.handlers import TimedRotatingFileHandler

logPath = os.path.join('', "fileLoaderLog")
logger = logging.getLogger('oneDayFileLoader')
logHandler = TimedRotatingFileHandler(logPath,
                                      when="midnight",
                                      interval=1)
formatter = logging.Formatter('%(asctime)s %(levelname)s %(message)s')
logHandler.setFormatter(formatter)
logger.addHandler(logHandler)
logger.setLevel(logging.DEBUG)
logger.error('overwrite')
Your log messages are being duplicated because you call addHandler more than once. Each call to addHandler adds an additional log handler.
If you want to make sure the file is created from scratch, add an extra line of code to remove it:
os.remove(os.path.join(paths["work"], "oneDayFileLoader.log"))
The mode is specified as part of logging.basicConfig and is passed through using filemode.
logging.basicConfig(
    level=logging.DEBUG,
    format='%(asctime)s %(levelname)s %(message)s',
    filename='oneDayFileLoader.log',
    filemode='w'
)
https://docs.python.org/3/library/logging.html#simple-examples
I have used the same Python script on Windows, where it worked fine and produced several log lines each time it was run. The problem is that when I ran the script on Linux, the logging put all of the log records onto one line.
I have tried adding \n in different places, such as in the formatter and in each message itself.
This is how the logging is set up:
# This is the setup of the logging system for the program.
logger = logging.getLogger(__name__)
# Sets the name of the log file to 'login.log'
handler = logging.FileHandler(config.path['logger'])
# Sets up the format of the log file: Time, Function Name, Error Level (e.g. Warning Info, Critical),
# and then the message that follows the format.
formatter = logging.Formatter('%(asctime)-5s %(funcName)-20s %(levelname)-10s %(message)s')
handler.setFormatter(formatter)
logger.addHandler(handler)
# Sets the lowest level of messages to info.
logger.setLevel(logging.INFO)
And here is how each log is made:
logger.warning('%-15s' % client + ' Failed: Logout Error')
Thanks in advance
I checked the Python docs at python logging and did some experimenting, but I don't quite understand this:
When computing the next rollover time for the first time (when the handler is created), the last modification time of an existing log file, or else the current time, is used to compute when the next rotation will occur.
I found that the rotation time for an hourly rotation is affected by the time logging starts: say it starts at 12:23:33, then the next rollover is at 13:23:33, which ends up making the log file names confusing.
The code would be like:
TimedRotatingFileHandler(filename, when='h', interval=1, backupCount=0, encoding=None, delay=False, utc=False)
Is there any way to force the hourly rotation to start at minute 00 (e.g. 13:00:00) rather than at the time logging starts, so that every file contains only the logs from the hour its name indicates?
Looking at the code for TimedRotatingFileHandler, you can't force it to rotate at a specific minute value: as per the documentation, the next rollover will occur at either logfile last modification time + interval if the logfile already exists, or current time + interval if it doesn't.
But since you seem to know the filename, you could trick TimedRotatingFileHandler by first setting the last modification time of the logfile to the current hour:
from datetime import datetime
from logging.handlers import TimedRotatingFileHandler
import os, time

thishour = datetime.now().replace(minute=0, second=0, microsecond=0)
timestamp = time.mktime(thishour.timetuple())

# this opens/creates the logfile...
with open(filename, 'a'):
    # ...and sets atime/mtime
    os.utime(filename, (timestamp, timestamp))

TimedRotatingFileHandler(filename, ...)
(untested)
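Filled out into a runnable sketch (the file name is just an example; the key step is backdating the file's mtime before the handler is created, since the handler uses an existing file's mtime to compute the first rollover):

```python
from datetime import datetime
from logging.handlers import TimedRotatingFileHandler
import logging
import os
import time

filename = 'hourly.log'  # example name

# Backdate the log file's mtime to the start of the current hour, so the
# handler computes the next rollover for the top of the next hour.
thishour = datetime.now().replace(minute=0, second=0, microsecond=0)
timestamp = time.mktime(thishour.timetuple())
with open(filename, 'a'):
    pass
os.utime(filename, (timestamp, timestamp))

handler = TimedRotatingFileHandler(filename, when='h', interval=1, backupCount=0)
logger = logging.getLogger('hourly')
logger.addHandler(handler)
logger.warning('first record of this hour')
```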
Here's another approach. Python logging supports WatchedFileHandler, which closes and re-opens the log file with the same name if it detects that the file has changed. It works nicely with system-level log rotation such as the Linux logrotate daemon.
import logging
import os
import re
from logging.handlers import TimedRotatingFileHandler

level = logging.INFO       # example value
logfile = "log/app.log"    # example value

logger = logging.getLogger()
log_format = '%(asctime)s - %(levelname)s - %(name)s - %(message)s'
logging.basicConfig(format=log_format, level=level)

# Create 'log' directory if not present
log_path = os.path.dirname(logfile)
if not os.path.exists(log_path):
    os.makedirs(log_path)

handler = TimedRotatingFileHandler(
    logfile,
    when="H",
    interval=1,
    encoding="utf-8")
handler.setFormatter(logging.Formatter(log_format))
handler.extMatch = re.compile(r"^\d{8}$")
handler.suffix = "%Y%m%d"
handler.setLevel(level)
logger.addHandler(handler)
I wrote a custom Handler through which you can achieve the functionality you need.
import logging
import os
from custom_rotating_file_handler import CustomRotatingFileHandler

def custom_rotating_logger(log_fpath, when='H'):
    os.makedirs(os.path.dirname(log_fpath), exist_ok=True)
    file_handler = CustomRotatingFileHandler(log_fpath,
                                             when=when,
                                             backupCount=10,
                                             encoding="utf-8")
    formatter = logging.Formatter('%(asctime)s %(message)s')
    file_handler.setFormatter(formatter)
    logger = logging.getLogger('MyCustomLogger')
    logger.setLevel(logging.DEBUG)
    logger.addHandler(file_handler)
    return logger

# For test: log rotating at the beginning of each minute.
logger = custom_rotating_logger('/tmp/tmp.log', when='M')

import time
for i in range(100):
    time.sleep(1)
    logger.debug('ddddd')
    logger.info('iiiii')
    logger.warning('wwwww')
Save the code above as testhandler.py and run the following command:
python3 testhandler.py && sudo find /tmp -name 'tmp.log.*'
You can see the output like this:
/tmp/tmp.log.202212142132
/tmp/tmp.log.202212142133
Content in the log file:
> head -n 3 /tmp/tmp.log.202212142133
2022-12-14 21:33:00,483 ddddd
2022-12-14 21:33:00,490 iiiii
2022-12-14 21:33:00,490 wwwww
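The custom_rotating_file_handler module itself isn't shown in the answer. As a rough idea of what such a handler might look like (my sketch, not the author's code), one can subclass TimedRotatingFileHandler and override computeRollover so that rollovers land on clock boundaries rather than at offsets from the start time:

```python
import logging
from logging.handlers import TimedRotatingFileHandler

class BoundaryRotatingFileHandler(TimedRotatingFileHandler):
    """Rotate on interval boundaries (e.g. xx:00:00 for hourly) instead of
    at start-time offsets. Boundaries are in epoch/UTC terms, so local
    timezones with fractional-hour offsets would need extra handling."""

    def computeRollover(self, currentTime):
        # Round down to the most recent boundary, then step one interval ahead
        return currentTime - (currentTime % self.interval) + self.interval

# Hourly handler whose first rollover falls exactly on an hour boundary
h = BoundaryRotatingFileHandler('boundary.log', when='H', interval=1)
```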