I checked the Python docs for logging and ran some experiments, but I don't quite understand the following:
When computing the next rollover time for the first time (when the handler is created), the last modification time of an existing log file, or else the current time, is used to compute when the next rotation will occur.
I found that the rotation time for hourly rotation depends on when logging starts: if logging starts at 12:23:33, the next rotation happens at 13:23:33, which ultimately makes the log file names confusing.
The code looks like this:
TimedRotatingFileHandler(filename, when='h', interval=1, backupCount=0, encoding=None, delay=False, utc=False)
Is there any way to force the hourly rotation to start at minute 00 (e.g. 13:00:00) rather than at the time logging started, so that each file contains only the logs for the hour its name indicates?
Looking at the code for TimedRotatingFileHandler, you can't force it to rotate at a specific minute value: as per the documentation, the next rollover will occur at either logfile last modification time + interval if the logfile already exists, or current time + interval if it doesn't.
But since you seem to know the filename, you could trick TimedRotatingFileHandler by first setting the last modification time of the logfile to the current hour:
from datetime import datetime
from logging.handlers import TimedRotatingFileHandler
import os, time

thishour = datetime.now().replace(minute=0, second=0, microsecond=0)
timestamp = time.mktime(thishour.timetuple())

# this opens/creates the logfile...
with open(filename, 'a'):
    pass
# ...and sets atime/mtime to the start of the current hour
os.utime(filename, (timestamp, timestamp))

TimedRotatingFileHandler(filename, ...)
(untested)
Another answer: Python's logging module also provides WatchedFileHandler, which closes and re-opens the log file (under the same name) whenever it detects that the file has changed. It works nicely with system-level log rotation such as the Linux logrotate daemon.
import logging
import os
import re
from logging.handlers import TimedRotatingFileHandler

logger = logging.getLogger()
log_format = '%(asctime)s - %(levelname)s - %(name)s - %(message)s'
logging.basicConfig(format=log_format, level=level)

# Create 'log' directory if not present
log_path = os.path.dirname(logfile)
if not os.path.exists(log_path):
    os.makedirs(log_path)

handler = TimedRotatingFileHandler(
    logfile,
    when="H",
    interval=1,
    encoding="utf-8")
handler.setFormatter(logging.Formatter(log_format))
handler.suffix = "%Y%m%d"
handler.extMatch = re.compile(r"^\d{8}$")
handler.setLevel(level)
logger.addHandler(handler)
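To make the WatchedFileHandler idea concrete, here is a minimal, self-contained sketch (file names are made up for illustration; this relies on POSIX semantics, where a file can be renamed while a process still has it open, which is exactly what logrotate does):

```python
import logging
import logging.handlers
import os
import tempfile

logfile = os.path.join(tempfile.gettempdir(), 'watched_demo.log')
# start clean so repeated runs behave the same
for p in (logfile, logfile + '.1'):
    if os.path.exists(p):
        os.remove(p)

logger = logging.getLogger('watched_demo')
logger.setLevel(logging.INFO)
logger.propagate = False

handler = logging.handlers.WatchedFileHandler(logfile, encoding='utf-8')
handler.setFormatter(logging.Formatter('%(asctime)s - %(levelname)s - %(message)s'))
logger.addHandler(handler)

logger.info('before rotation')

# Simulate what logrotate does: move the file aside.
os.rename(logfile, logfile + '.1')

# The next emit notices the file is gone and re-opens a fresh one.
logger.info('after rotation')
handler.close()
```

After this runs, the renamed file holds the first message and a freshly created file holds the second, with no handler reconfiguration needed.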
I wrote a custom Handler, through which you can achieve the functionality you need.
import logging
import os
import time

from custom_rotating_file_handler import CustomRotatingFileHandler

def custom_rotating_logger(log_fpath, when='H'):
    os.makedirs(os.path.dirname(log_fpath), exist_ok=True)
    file_handler = CustomRotatingFileHandler(log_fpath,
                                             when=when,
                                             backupCount=10,
                                             encoding="utf-8")
    formatter = logging.Formatter('%(asctime)s %(message)s')
    file_handler.setFormatter(formatter)
    logger = logging.getLogger('MyCustomLogger')
    logger.setLevel(logging.DEBUG)
    logger.addHandler(file_handler)
    return logger

# For test: log rotating at beginning of each minute.
logger = custom_rotating_logger('/tmp/tmp.log', when='M')

for i in range(100):
    time.sleep(1)
    logger.debug('ddddd')
    logger.info('iiiii')
    logger.warning('wwwww')
Save the code above as testhandler.py and run the following command:
python3 testhandler.py && sudo find /tmp -name 'tmp.log.*'
You can see the output like this:
/tmp/tmp.log.202212142132
/tmp/tmp.log.202212142133
Content in the log file:
> head -n 3 /tmp/tmp.log.202212142133
2022-12-14 21:33:00,483 ddddd
2022-12-14 21:33:00,490 iiiii
2022-12-14 21:33:00,490 wwwww
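The `custom_rotating_file_handler` module itself is not shown in the answer above. A plausible reconstruction (my sketch, not the original code) overrides `computeRollover()` so the next rollover snaps to the start of the period rather than landing one interval after startup:

```python
# Hypothetical sketch of custom_rotating_file_handler.py
import os
import tempfile
from logging.handlers import TimedRotatingFileHandler

class CustomRotatingFileHandler(TimedRotatingFileHandler):
    """Rotate at the start of each period (e.g. second 00 of every
    minute for when='M') instead of one interval after startup.
    Note: snapping against the epoch is exact for 'S' and 'M'; for
    'H' it assumes a whole-hour UTC offset."""

    def computeRollover(self, currentTime):
        # self.interval is already in seconds ('M' -> 60, 'H' -> 3600, ...)
        return (int(currentTime) // self.interval + 1) * self.interval

# quick check: with per-minute rotation, rollovers land on minute boundaries
handler = CustomRotatingFileHandler(
    os.path.join(tempfile.gettempdir(), 'custom_demo.log'), when='M')
```

With `when='M'` the base class sets `self.interval` to 60, so every computed rollover is a multiple of 60 seconds, which matches the per-minute file names shown in the output above.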
Related
I have created a pytest.ini file with:
addopts = --resultlog=log.txt
This creates a log file, but I would like to create a new log file every time I run the tests.
I am new to pytest, so pardon me if I have missed anything while reading the documentation.
Thanks
Note
--result-log argument is deprecated and scheduled for removal in version 6.0 (see Deprecations and Removals: Result log). The possible replacement implementation is discussed in issue #4488, so watch out for the next major version bump - the code below will stop working with pytest==6.0.
Answer
You can modify the resultlog in the pytest_configure hookimpl. Example: put the code below in the conftest.py file in your project root dir:
import datetime

def pytest_configure(config):
    if not config.option.resultlog:
        timestamp = datetime.datetime.strftime(datetime.datetime.now(), '%Y-%m-%d_%H-%M-%S')
        config.option.resultlog = 'log.' + timestamp
Now if --result-log is not passed explicitly (so you have to remove addopts = --resultlog=log.txt from your pytest.ini), pytest will create a log file ending with a timestamp. Passing --result-log with a log file name will override this behaviour.
Answering my own question.
As hoefling mentioned, --result-log is deprecated, so I had to find a way to do it without that flag. Here's how I did it:
conftest.py
from datetime import datetime
import logging

log = logging.getLogger(__name__)

def pytest_assertrepr_compare(op, left, right):
    """This function will print a log entry every time an assert fails"""
    log.error('Comparing Foo instances: vals: %s != %s \n' % (left, right))
    return ["Comparing Foo instances:", " vals: %s != %s" % (left, right)]

def pytest_configure(config):
    """Create a log file if log_file is not mentioned in *.ini file"""
    if not config.option.log_file:
        timestamp = datetime.strftime(datetime.now(), '%Y-%m-%d_%H-%M-%S')
        config.option.log_file = 'log.' + timestamp
pytest.ini
[pytest]
log_cli = true
log_cli_level = CRITICAL
log_cli_format = %(message)s
log_file_level = DEBUG
log_file_format = %(asctime)s [%(levelname)8s] %(message)s (%(filename)s:%(lineno)s)
log_file_date_format=%Y-%m-%d %H:%M:%S
test_my_code.py
import logging

log = logging.getLogger(__name__)

def test_my_code():
    ****test code
You can get a separate log for each pytest run by naming the log file with the time the test execution starts:
pytest tests --log-file $(date '+%F_%H:%M:%S')
This creates a new log file for each test run, named with the timestamp.
$(date '+%F_%H:%M:%S') is bash command substitution that produces the current timestamp in Date_Hr:Min:Sec format.
Currently, this is what I have (testlog.py):
import logging
import logging.handlers
filename = "example.log"
logger = logging.getLogger()
logger.setLevel(logging.DEBUG)
formatter = logging.Formatter("%(asctime)s - %(name)s - %(levelname)s - %(message)s")
handler = logging.handlers.RotatingFileHandler(filename, mode = 'w', backupCount = 5)
handler.setLevel(logging.DEBUG)
handler.setFormatter(formatter)
logger.addHandler(handler)
ch = logging.StreamHandler()
ch.setLevel(logging.DEBUG)
ch.setFormatter(formatter)
logger.addHandler(ch)
for i in range(10):
    logger.debug("testx")  # where I alternate x from 1 thru 9 to see output
It currently successfully prints out to the console and to example.log, which is what I want.
Every time I run it, it makes a new file and replaces the old example.log like so:
run with logger.debug("test1") - example.log will contain test1 10 times like it should.
run with logger.debug("test2") - it rewrites example.log to contain test2 10 times.
etc...
However, I would like for the code to make a new log file every time I run the program so that I have:
example.log
example.log1
example.log2
...
example.log5
In conclusion, I'd like for this file to print the log message to the console, to the log file, and I would like a new log file (up to *.5) whenever I run the program.
The logging.handlers rotating handlers rotate your logs based either on size (RotatingFileHandler) or on time (TimedRotatingFileHandler), but you can also force a rotation manually with RotatingFileHandler.doRollover(), so something like:
import logging.handlers
import os

filename = "example.log"

# your logging setup

should_roll_over = os.path.isfile(filename)
handler = logging.handlers.RotatingFileHandler(filename, mode='w', backupCount=5)
if should_roll_over:  # log already exists, roll over!
    handler.doRollover()

# the rest of your setup...
Should work like a charm.
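A self-contained sketch of that pattern (file name is illustrative; `delay=True` is added so the `mode='w'` open doesn't truncate the old file before it gets rolled over):

```python
import logging
import logging.handlers
import os
import tempfile

filename = os.path.join(tempfile.gettempdir(), 'rollover_demo.log')
# Pretend a previous run already produced a log file:
with open(filename, 'w') as f:
    f.write('old run\n')

should_roll_over = os.path.isfile(filename)
# delay=True postpones opening the file until the first emit,
# so nothing is truncated before doRollover() renames it.
handler = logging.handlers.RotatingFileHandler(
    filename, mode='w', backupCount=5, delay=True)
if should_roll_over:  # log already exists, roll over!
    handler.doRollover()

logger = logging.getLogger('rollover_demo')
logger.setLevel(logging.DEBUG)
logger.propagate = False
logger.addHandler(handler)
logger.debug('new run')
handler.close()
```

After running this, the previous run's content sits in rollover_demo.log.1 and the fresh file holds only the new run's output.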
If you change the mode to 'a' in the solution posted by zwer, you get a single empty log file on the first run; after that it works as intended. Just increase backupCount by one. :-)
When I copy-paste the following into the Python prompt x times, it appends the log output x times to the end of the designated file. How can I change the code so that each time I paste it, I simply overwrite the existing file? (The code seems to ignore the mode='w' option, or I do not understand its meaning.)
def MinimalLoggingf():
    import logging
    import os
    paths = {'work': ''}
    logger = logging.getLogger('oneDayFileLoader')
    logHandler = logging.FileHandler(os.path.join(paths["work"], "oneDayFileLoader.log"), mode='w')
    formatter = logging.Formatter('%(asctime)s %(levelname)s %(message)s')
    logHandler.setFormatter(formatter)
    logger.addHandler(logHandler)
    logger.setLevel(logging.DEBUG)
    # Let's say this is an error:
    if 1 == 1:
        logger.error('overwrite')
So I run it once:
MinimalLoggingf()
Now, I want the new run to overwrite the log file created by the previous run:
MinimalLoggingf()
If I understand correctly, you're running a certain Python process for days at a time, and want to rotate the log every day. I'd recommend you go a different route, using a handler that automatically rotates the log file, e.g. http://www.blog.pythonlibrary.org/2014/02/11/python-how-to-create-rotating-logs/
But, if you want to control the log using the process in the same method you're comfortable with (Python console, pasting in code.. extremely unpretty and error prone, but sometimes quick-n-dirty is sufficient for the task at hand), well...
Your issue is that you create a new FileHandler each time you paste in the code, and you add it to the Logger object. You end up with a logger that has X FileHandlers attached to it, all of them writing to the same file. Try this:
import logging
import os

paths = {'work': ''}
logger = logging.getLogger('oneDayFileLoader')
if logger.handlers:
    logger.handlers[0].close()
    logger.handlers = []
logHandler = logging.FileHandler(os.path.join(paths["work"], "oneDayFileLoader.log"), mode='w')
formatter = logging.Formatter('%(asctime)s %(levelname)s %(message)s')
logHandler.setFormatter(formatter)
logger.addHandler(logHandler)
logger.setLevel(logging.DEBUG)
logger.error('overwrite')
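The effect of clearing stale handlers can be seen in a small self-contained sketch (names and file paths are illustrative): calling the setup function repeatedly leaves exactly one handler attached, so the file is overwritten instead of accumulating duplicates:

```python
import logging
import os
import tempfile

logfile = os.path.join(tempfile.gettempdir(), 'one_day_demo.log')

def fresh_file_logger():
    logger = logging.getLogger('oneDayFileLoaderDemo')
    logger.propagate = False
    # Close and drop handlers left over from a previous call, so
    # repeated calls never stack duplicate handlers.
    for h in logger.handlers[:]:
        h.close()
        logger.removeHandler(h)
    handler = logging.FileHandler(logfile, mode='w')
    handler.setFormatter(logging.Formatter('%(levelname)s %(message)s'))
    logger.addHandler(handler)
    logger.setLevel(logging.DEBUG)
    return logger

fresh_file_logger().error('first run')
fresh_file_logger().error('second run')  # overwrites, no duplicates
```

Without the clearing loop, the second call would attach a second FileHandler and every message would be written twice.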
Based on your request, I've also added an example using TimedRotatingFileHandler. Note I haven't tested it locally, so if you have issues ping back.
import logging
import os
from logging.handlers import TimedRotatingFileHandler

logPath = os.path.join('', "fileLoaderLog")
logger = logging.getLogger('oneDayFileLoader')
logHandler = TimedRotatingFileHandler(logPath,
                                      when="midnight",
                                      interval=1)
formatter = logging.Formatter('%(asctime)s %(levelname)s %(message)s')
logHandler.setFormatter(formatter)
logger.addHandler(logHandler)
logger.setLevel(logging.DEBUG)
logger.error('overwrite')
Your log messages are being duplicated because you call addHandler more than once. Each call to addHandler adds an additional log handler.
If you want to make sure the file is created from scratch, add an extra line of code to remove it:
os.remove(os.path.join(paths["work"], "oneDayFileLoader.log"))
The mode is specified as part of logging.basicConfig and is passed through using filemode.
logging.basicConfig(
    level=logging.DEBUG,
    format='%(asctime)s %(levelname)s %(message)s',
    filename='oneDayFileLoader.log',
    filemode='w'
)
https://docs.python.org/3/library/logging.html#simple-examples
I am facing a slightly strange issue: the logger does not pick up the configured timestamp format (asctime) in log messages the first time it is initialized. By default it prints the timestamp in UTC, and I'm not sure why.
The snippet below is from /proj/req_proc.py, the Python code that uwsgi starts and that initializes the logger. The log_config.yaml contains a formatter definition for the asctime timestamp format.
import logging
import logging.config
import os
import yaml

def setup_logging(default_path='log_config.yaml',
                  default_level=logging.INFO):
    path = default_path
    if os.path.exists(path):
        with open(path, 'rt') as f:
            config = yaml.safe_load(f.read())
        logging.config.dictConfig(config)
Below is the snip from my launch script which starts the uwsgi process.
uwsgi -M --processes 1 --threads 2 -s /tmp/uwsgi.sock --wsgi-file=/proj/req_proc.py --daemonize /dev/null
Is there any specific behavior in either the Python logger or uwsgi that picks up the UTC time format by default? When I restart my uwsgi process, it picks up the correct/expected timestamp format configured in log_config.yaml.
I have the assumption that the uwsgi module is somehow hijacking Python's logging module. Setting the loglevel, logger name and logging itself works, but trying to modify the format even with something basic like:
logging.basicConfig(level=logging.NOTSET, format='[%(process)-5d:%(threadName)-10s] %(name)-25s: %(levelname)-8s %(message)s')
logger = logging.getLogger(__name__)
has no effect.
Update: Here's a way to overwrite uWSGI's default logger:
# remove uWSGI's default logging configuration; this can be removed in
# more recent versions of uWSGI
root = logging.getLogger()
# note: plain for-loops here -- map() would be lazily evaluated in Python 3
for handler in root.handlers[:]:
    root.removeHandler(handler)
for filt in root.filters[:]:
    root.removeFilter(filt)

logger = logging.getLogger(__name__)
logging.basicConfig(
    level=logging.INFO,
    format='%(levelname)-8s %(asctime)-15s %(process)4d:%(threadName)-11s %(name)s %(message)s'
)
I have used the same Python script on Windows, where it worked fine and produced several log entries each time it was run. The problem is that when I ran the script on Linux, all of the log entries ended up on one line.
I have tried adding \n in different places, such as in the formatter and in each message itself.
This is how the logging is set up:
# This is the setup of the logging system for the program.
logger = logging.getLogger(__name__)
# Sets the name of the log file to 'login.log'
handler = logging.FileHandler(config.path['logger'])
# Sets up the format of the log file: Time, Function Name, Error Level (e.g. Warning Info, Critical),
# and then the message that follows the format.
formatter = logging.Formatter('%(asctime)-5s %(funcName)-20s %(levelname)-10s %(message)s')
handler.setFormatter(formatter)
logger.addHandler(handler)
# Sets the lowest level of messages to info.
logger.setLevel(logging.INFO)
And here is how each log is made:
logger.warning('%-15s' % client + ' Failed: Logout Error')
Thanks in advance