PermissionError when using Python 3.3.4 and RotatingFileHandler

I am trying to get a rotating log file for a GUI application I am writing with python 3.3.4 and PyQt4.
I have the following snippet of code in my main script:
import logging
from logging.handlers import RotatingFileHandler

import resources  # project module that defines LOG_FILE_PATH

logger = logging.getLogger('main.test')

def main():
    logger.setLevel(logging.DEBUG)
    fh = RotatingFileHandler(resources.LOG_FILE_PATH, maxBytes=500, backupCount=5)
    formatter = logging.Formatter('%(asctime)s - %(name)s - %(levelname)s - %(message)s')
    fh.setFormatter(formatter)
    logger.addHandler(fh)
    logger.info('main')
I set maxBytes low so that I can test that rotation is working correctly, which it is not. I get the following error whenever the log should be rotated:
Traceback (most recent call last):
  File "C:\Python33\lib\logging\handlers.py", line 73, in emit
    self.doRollover()
  File "C:\Python33\lib\logging\handlers.py", line 176, in doRollover
    self.rotate(self.baseFilename, dfn)
  File "C:\Python33\lib\logging\handlers.py", line 116, in rotate
    os.rename(source, dest)
PermissionError: [WinError 32] The process cannot access the file because it is being used by another process: 'C:\\Users\\myuser\\.logtest\\test.log.1'
And nothing is logged. Any help is much appreciated.
Thank you

Spent half a day on this, as no previous answer resolved my issue.
My working solution is to use https://pypi.org/project/concurrent-log-handler/ instead of RotatingFileHandler. In multi-threaded scenarios such as a Flask app, a PermissionError is raised when the log file that has reached its maximum size is rotated.
Install pypiwin32 to get rid of the "No module named win32con" error.
Thanks go to https://www.programmersought.com/article/43941158027/

Instead of adding the handler to the logger object, you can specify the handler directly in basicConfig(). If you add a RotatingFileHandler to the logger object, one object might hold the log file open while another tries to rename it, which raises the PermissionError.
The code below seems to work pretty well.
import logging
from logging.handlers import RotatingFileHandler

import resources

logging.basicConfig(
    format='%(asctime)s - %(name)s - %(levelname)s - %(message)s',
    handlers=[RotatingFileHandler(filename=resources.LOG_FILE_PATH,
                                  maxBytes=500, backupCount=5)])

logger = logging.getLogger('main.test')

def main():
    logger.setLevel(logging.DEBUG)
    logger.info('main')

In my case it happens only on Windows. To solve it I changed the delay parameter to True for my TimedRotatingFileHandler log handler.
Docs -> https://docs.python.org/3/library/logging.handlers.html#logging.handlers.TimedRotatingFileHandler
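A minimal sketch of that change (the path, interval, and logger name here are placeholders, not from the original setup): with delay=True the handler postpones opening the file until the first record is emitted, so the file is not held open unnecessarily.

```python
import logging
import os
import tempfile
from logging.handlers import TimedRotatingFileHandler

log_path = os.path.join(tempfile.mkdtemp(), "app.log")  # placeholder path

# delay=True: the file is not opened until the first record is emitted
handler = TimedRotatingFileHandler(log_path, when="midnight",
                                   backupCount=7, delay=True)
handler.setFormatter(logging.Formatter("%(asctime)s %(levelname)s %(message)s"))

logger = logging.getLogger("delayed-demo")  # hypothetical logger name
logger.setLevel(logging.INFO)
logger.addHandler(handler)

print(os.path.exists(log_path))   # False: nothing has been opened yet
logger.info("first message")      # the file is created here
print(os.path.exists(log_path))   # True
handler.close()
```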

You cannot specify the same filename in both basicConfig() and RotatingFileHandler(). I had this same issue; I removed the filename parameter from basicConfig() and it now works.
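A sketch of the working arrangement (the temp path is a placeholder, and force=True, available since Python 3.8, is only there to make the sketch re-runnable): pass the handler via handlers= and omit filename=, so only one object ever opens the file.

```python
import logging
import os
import tempfile
from logging.handlers import RotatingFileHandler

log_path = os.path.join(tempfile.mkdtemp(), "app.log")  # placeholder path

# The handler owns the file; do NOT also pass filename= to basicConfig,
# or basicConfig would open the same file a second time.
logging.basicConfig(
    level=logging.INFO,
    format="%(asctime)s - %(levelname)s - %(message)s",
    handlers=[RotatingFileHandler(log_path, maxBytes=500, backupCount=5)],
    force=True,  # Python 3.8+: replace any previously configured handlers
)

logging.getLogger("demo").info("hello")
logging.shutdown()
```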

In my case (Windows Server 2016 + IIS + FastCGI + Flask) I finally fixed it by turning off file indexing on the log folder.
Source:
https://stackoverflow.com/a/22467917/9199668
Btw, it had been working correctly for months... I have no idea why...

Check that the file isn't being kept open by e.g. Windows file indexing, anti-virus or other software. Files that are open can't be renamed.

I changed the application to use dictConfig and created a separate file that holds the dictionary configuration. At the top of my main application I have:
import logging.config
from log.logger import LOGGING

logging.config.dictConfig(LOGGING)
logger = logging.getLogger('testlogging')
Then in log.logger I have:
import logging

import resources

LOGGING = {
    "version": 1,
    "handlers": {
        "fileHandler": {
            "class": "logging.handlers.RotatingFileHandler",
            "formatter": "myFormatter",
            "filename": resources.LOG_FILE_PATH,
            "maxBytes": 100000,
            "backupCount": 5
        },
        "console": {
            "class": "logging.StreamHandler",
            "formatter": "myFormatter"
        }
    },
    "loggers": {
        "testlogging": {
            "handlers": ["fileHandler", "console"],
            "level": "DEBUG",
        }
    },
    "formatters": {
        "myFormatter": {
            "format": "%(asctime)s - %(name)s - %(levelname)s - %(message)s"
        }
    }
}
This all seems to work pretty well.

In my case the log file was full; after removing the server.log file it worked.
LOGS_DIR = os.path.join(BASE_DIR, 'logs')

LOGGING = {
    'version': 1,
    'handlers': {
        'log_file': {
            'level': 'INFO',
            'class': 'logging.handlers.RotatingFileHandler',
            'filename': os.path.join(LOGS_DIR, 'server.log'),
            'backupCount': 10,  # keep at most 10 log files
            'maxBytes': 5 * 1024 * 1024,  # 5242880 bytes (5 MB)
        },
    },
    'loggers': {
        'django': {
            'handlers': ['log_file'],
            'propagate': True,
            'level': 'INFO',
        },
    },
}


Python: Reading the logging level from a config file without losing earlier DEBUG information

I have the following Python code, where the logging level of the global logger is set through a command-line argument:
logging.basicConfig(
    level=getattr(logging, clArgs.logLevel),
    ...
)
If no logging level has been specified through the CL argument, then the INFO log level is used by default for global logging:
# Define the '--loglevel' argument
clArgsParser.add_argument(
    "-l", "--loglevel",
    help="Set the logging level of the program. The default log level is 'INFO'.",
    default='INFO',            # Default log level is 'INFO'
    dest="logLevel",           # Store the argument value in the variable 'logLevel'
    choices=['DEBUG', 'INFO', 'WARNING', 'ERROR', 'CRITICAL']
)
Now I would like to give the user a second option, so that the logging level can also be specified in a configuration file. However, before the configuration file is read, the script must first parse the CL arguments to find out whether the user has set the log-level argument. Additionally, the configuration file path can itself be specified through a CL argument.
After the script parses the CL arguments and sets the logging level, it logs some information (e.g. the log file location, the directory the script is executed from, etc.), and it also logs DEBUG information while reading the config file in the function readProgramConfig. The config file is read at the very end (function readProgramConfig), as you can see in the code snippet below:
# Parse the Command-line Arguments
clArgs, progName = parseClArgs(__program__)

# Initialize Global Logging with Command-line Arguments
defaultLogFilePath = configLogging(clArgs)

# Log Program Version used
logging.info(f"Program version: {__program__.swVersion}")

# Log Script Start Time
logging.info(f"Program start time: {programStartDate}")

# Log the program's current directory
logging.debug(f"Directory where the Program is being executed: {PROGRAM_DIR}")

# Output the log file location
logging.debug(f"Log file location: {defaultLogFilePath}")

# Read out the program configuration
programConfig = readProgramConfig(clArgs.programConfigPath)
This leads to a problem. If no log level is specified through the CL argument, but a log level is specified in the config file (e.g. DEBUG), then the following happens:
1. No CL argument for the log level is given -> the default INFO level is used.
2. Logging happens (program version, program start time) before the config file is read -> no DEBUG-level information is logged, since INFO is the default.
3. Likewise, no DEBUG information is logged while the config file is being read (function readProgramConfig).
4. Once the config file has been read, the script discovers that the config file asks for the DEBUG level, and changes the global logging level from INFO to DEBUG.
5. From then on, all DEBUG information is logged, but the earlier DEBUG information is lost, i.e. never logged.
So it's a sort of chicken-and-egg problem.
I do have one solution in mind, but it is rather complicated, and I would like to find out whether someone has a simpler one.
One possible solution would be:
1. Start the script with the DEBUG log level by default to capture all log messages.
2. Read the config file:
2.1 If the log level in the config file is DEBUG, continue logging into the log file at the DEBUG level.
2.2 If the log level in the config file is lower than DEBUG (e.g. INFO), delete all DEBUG entries from the log file and continue logging at the INFO level only.
As you can see, this solution is rather complicated: it involves editing the log file and writing back and forth, not to mention that the approach will not work for logging to the console.
One solution is to use the logging.config module. You can read a config file (e.g. in JSON format) and store it as a dictionary. The module provides the function logging.config.dictConfig, which configures the logger using the information from that dictionary. The code for configuring the root logger would look something like this:
import json
import logging
import os
from logging import config as LogConfig  # Logging Configuration

# Create the Log File name
logFileConfigName = "LogConfig.json"
logFileConfigPath = os.path.join(PROGRAM_DIR, logFileConfigName)

# Open and decode the JSON Config File
with open(logFileConfigPath) as jsonLogConfigFile:
    jsonLogConfig = json.load(jsonLogConfigFile)

# Take the logging configuration from the dictionary
LogConfig.dictConfig(jsonLogConfig)
The logger configuration is stored in the LogConfig.json file. A separate handler is defined for console and for file logging:
{
    "version": 1,
    "root": {
        "handlers": ["console", "file"],
        "level": "DEBUG"
    },
    "handlers": {
        "console": {
            "formatter": "std_out",
            "class": "logging.StreamHandler"
        },
        "file": {
            "filename": ".\\Program.log",
            "formatter": "std_out",
            "class": "logging.FileHandler",
            "mode": "w"
        }
    },
    "formatters": {
        "std_out": {
            "format": "%(asctime)s - %(levelname)s: %(message)s",
            "datefmt": "%Y-%m-%d - %H:%M:%S"
        }
    }
}
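To let the command-line argument win over the file, one option (a sketch, assuming an optional argument and the "root"/"level" field shown above; the inline JSON stands in for the real LogConfig.json) is to patch the dictionary before handing it to dictConfig:

```python
import json
import logging
from logging import config as LogConfig

# Hypothetical stand-ins for the question's clArgs and config file
cl_log_level = "DEBUG"  # e.g. clArgs.logLevel; None if the flag was omitted
jsonLogConfig = json.loads("""
{
    "version": 1,
    "root": {"handlers": [], "level": "INFO"}
}
""")

# The CL argument, when given, overrides the level from the config file
if cl_log_level is not None:
    jsonLogConfig["root"]["level"] = cl_log_level

LogConfig.dictConfig(jsonLogConfig)
```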

Logging does not write anything in my file

I have problems with logging in Python 3.7. I use Spyder as the editor.
This code works normally: it creates a file and writes to it.
import logging

LOG_FORMAT = "%(Levelname)s %(asctime)s - %(message)s"
logging.basicConfig(filename="C:\\Users\\MOHAMED\\Desktop\\Learn python\\tst.log",
                    level=logging.DEBUG)
logger = logging.getLogger()
logger.info("Our first message.")
The problem is that when I add the format, this code does not write anything to the tst file.
import logging

LOG_FORMAT = "%(Levelname)s %(asctime)s - %(message)s"
logging.basicConfig(filename="C:\\Users\\MOHAMED\\Desktop\\Learn python\\tst.log",
                    level=logging.DEBUG,
                    format=LOG_FORMAT)
logger = logging.getLogger()
logger.info("Our first message.")
You are using the variable Levelname in your format string, but you are not using the extra parameter to populate it (note that the standard LogRecord attribute is the lowercase %(levelname)s).
Try it with
logger.info("Our first message.", extra={"Levelname": "test"})
Also, I recommend the docs: https://docs.python.org/3/library/logging.html
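The simpler fix, though, is just to spell the attribute in lowercase. A minimal sketch (a temp path stands in for the original filename, and force=True, Python 3.8+, only makes the sketch re-runnable):

```python
import logging
import os
import tempfile

log_path = os.path.join(tempfile.mkdtemp(), "tst.log")  # placeholder path

# %(levelname)s (lowercase) is a real LogRecord attribute, so no extra= is needed
logging.basicConfig(filename=log_path,
                    level=logging.DEBUG,
                    format="%(levelname)s %(asctime)s - %(message)s",
                    force=True)  # Python 3.8+: replace earlier configuration

logging.getLogger().info("Our first message.")
logging.shutdown()
```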

logging with dictConfig and writing to the console and a file

In my code I have the following for a verbose mode and a non-verbose mode. I'm reading from a logDict object.
I expect that in verbose mode I will get "DEBUG MODE: test debug" and "DEBUG MODE: test error" written to the console, and "[uuid] [date] [etc] test error" written only to a file; in non-verbose mode, nothing is printed to the console but "test error" is still written to the file.
First, here is my dictConfig
LOGGING_DICT = {
    'version': 1,
    'disable_existing_loggers': False,
    'formatters': {
        'simple': {
            # we have a uuid for the log so we know what process it came from
            'format': '[{0}][%(asctime)s][%(name)s][%(levelname)s] : %(message)s'.format(logger_id),
            'datefmt': "%Y-%m-%d %H:%M:%S",
        }
    },
    'loggers': {
        'root': {
            'handlers': ['console'],
            'level': 'DEBUG',
        },
        'script_A': {
            'handlers': ['timed_rotate_file'],
            'level': 'INFO',
        },
    },
    'handlers': {
        'timed_rotate_file': {
            'filename': 'logs/weekly_tool.log',
            'level': 'INFO',
            'formatter': 'simple',
            'class': 'logging.handlers.TimedRotatingFileHandler',
            'encoding': 'utf8',
            # Used to configure when backups happen: 'seconds', 'minutes', 'w0', 'w1' (Monday, Tuesday), ...
            'when': 'midnight',  # Daily backup
            # This is used to configure rollover (7 = weekly files if when = daily or midnight)
            'backupCount': 7,
        }
    },
}
And now the script that calls it:
import logging
from logging.config import dictConfig

from helpers.logging_config import LOGGING_DICT

...

def main():
    logger.debug("test debug")
    logger.error("test error")

if __name__ == "__main__":
    if args.verbose:
        dictConfig(LOGGING_DICT)
        logger = logging.getLogger("script_A")
        stream_handler = logging.StreamHandler()
        formatter = logging.Formatter("DEBUG MODE: %(message)s")
        stream_handler.setFormatter(formatter)
        stream_handler.setLevel(logging.DEBUG)
        logger.addHandler(stream_handler)
    else:
        dictConfig(LOGGING_DICT)
        logger = logging.getLogger("script_A")
What I get instead is the following:
~$ python script_A.py
~$ (No output, as expected)
~$ python script_A.py -v
~$ DEBUG MODE: test error
Why is the test debug message not printing to the console? Clearly the stream handler is being called, but its level is either not being set correctly or is being ignored.
When I print logger.level in the middle of the script I get 20, which is what I expect given the dictConfig; however, the handler's level is set separately, so is it being ignored? (What is the point of setLevel in a python logging handler?) <-- I'm looking at this as well, but my issue is flipped: in my dictConfig the settings are stricter than what I actually want to print, so if I reset the level of the logger I get from dictConfig, things I don't want in my file will be printed there. Can I circumvent this?
I figured this out on my own. Similar to the question I linked, I have to reset the log level:
if __name__ == "__main__":
    if args.verbose:
        dictConfig(LOGGING_DICT)
        logger = logging.getLogger("script_A")
        stream_handler = logging.StreamHandler()
        formatter = logging.Formatter("DEBUG MODE: %(message)s")
        stream_handler.setFormatter(formatter)
        stream_handler.setLevel(logging.DEBUG)
        logger.addHandler(stream_handler)
        logger.setLevel(logging.DEBUG)
    else:
        dictConfig(LOGGING_DICT)
        logger = logging.getLogger("script_A")
I thought that doing this would also change the file handler's level, but it doesn't: the logger's level is checked first to decide whether a record is processed at all, and then each handler applies its own level, so the file handler's INFO threshold still filters out DEBUG records.
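That two-stage filtering can be seen in a small self-contained sketch (the logger name, messages, and StringIO stand-in for the file handler are made up for illustration):

```python
import io
import logging

logger = logging.getLogger("level-demo")  # hypothetical logger name
logger.setLevel(logging.DEBUG)            # stage 1: the logger lets DEBUG through

debug_stream, info_stream = io.StringIO(), io.StringIO()

verbose_handler = logging.StreamHandler(debug_stream)
verbose_handler.setLevel(logging.DEBUG)   # stage 2: this handler accepts everything

file_like_handler = logging.StreamHandler(info_stream)
file_like_handler.setLevel(logging.INFO)  # ...while this one drops DEBUG records

logger.addHandler(verbose_handler)
logger.addHandler(file_like_handler)

logger.debug("test debug")
logger.error("test error")

print("test debug" in debug_stream.getvalue())  # True
print("test debug" in info_stream.getvalue())   # False: filtered by the handler
```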

Logging formatters in django

From the Django documentation, here is an example format for logging:
'formatters': {
    'verbose': {
        'format': '%(levelname)s %(asctime)s %(module)s: %(message)s'
    }
}
This prints something like:
ERROR 2012-05-22 14:33:07,261 views 42892 4398727168 hello
Is there a list of items you can include in the string formatting? For example, I'd like to be able to see the function and app where the message is being created, for example:
ERROR time myproject.myapp.views.login_function message
From Python logging module documentation:
asctime: %(asctime)s
Human-readable time when the LogRecord was created. By default this is of the form ‘2003-07-08 16:49:45,896’ (the numbers after the comma are millisecond portion of the time).
created: %(created)f
Time when the LogRecord was created (as returned by time.time()).
filename: %(filename)s
Filename portion of pathname.
funcName: %(funcName)s
Name of function containing the logging call.
levelname: %(levelname)s
Text logging level for the message ('DEBUG', 'INFO', 'WARNING', 'ERROR', 'CRITICAL').
levelno: %(levelno)s
Numeric logging level for the message (DEBUG, INFO, WARNING, ERROR, CRITICAL).
lineno: %(lineno)d
Source line number where the logging call was issued (if available).
module: %(module)s
Module (name portion of filename).
msecs: %(msecs)d
Millisecond portion of the time when the LogRecord was created.
message: %(message)s
The logged message, computed as msg % args. This is set when Formatter.format() is invoked.
name: %(name)s
Name of the logger used to log the call.
pathname: %(pathname)s
Full pathname of the source file where the logging call was issued (if available).
process: %(process)d
Process ID (if available).
processName: %(processName)s
Process name (if available).
relativeCreated: %(relativeCreated)d
Time in milliseconds when the LogRecord was created, relative to the time the logging module was loaded.
thread: %(thread)d
Thread ID (if available).
threadName: %(threadName)s
Thread name (if available).
The following arguments are also available to Formatter.format(), although they are not intended to be included in the format string:
args:
The tuple of arguments merged into msg to produce message.
exc_info:
Exception tuple (à la sys.exc_info) or, if no exception has occurred, None.
msg:
The format string passed in the original logging call. Merged with args to produce message, or an arbitrary object (see Using arbitrary objects as messages).
Step 1. Edit your settings.py file:
$ cd mysite
$ vim mysite/settings.py
'formatters': {
'simple': {
'format': '%(levelname)s %(asctime)s %(name)s.%(funcName)s:%(lineno)s- %(message)s'
},
},
Step 2. Use the logger in your code like this:
import logging

logger = logging.getLogger(__name__)

def fn1():
    logger.info('great!')
    logger.info(__name__)
Hope that helps you!

How do I configure the Python logging module in Django?

I'm trying to configure logging for a Django app using the Python logging module. I have placed the following bit of configuration code in my Django project's settings.py file:
import logging
import logging.handlers
import os

date_fmt = '%m/%d/%Y %H:%M:%S'
log_formatter = logging.Formatter(u'[%(asctime)s] %(levelname)-7s: %(message)s (%(filename)s:%(lineno)d)',
                                  datefmt=date_fmt)
log_dir = os.path.join(PROJECT_DIR, "var", "log", "my_app")
log_name = os.path.join(log_dir, "nyrb.log")
max_bytes = 1024 * 1024  # 1 MB

if not os.path.exists(log_dir):
    os.makedirs(log_dir)

handler = logging.handlers.RotatingFileHandler(log_name, maxBytes=max_bytes, backupCount=7)
handler.setFormatter(log_formatter)
handler.setLevel(logging.DEBUG)

logging.getLogger().setLevel(logging.DEBUG)
logging.getLogger().addHandler(handler)
logging.getLogger(__name__).info("Initialized logging subsystem")
At startup, I get a couple of Django-related messages, as well as the "Initialized logging subsystem" message, in the log files, but then all the log messages end up going to the web server logs (/var/log/apache2/error.log, since I'm using Apache) and use the standard log format (not the formatter I designated). Am I configuring logging incorrectly?
Kind of anti-climactic, but it turns out there was a third-party app installed in the project that had its own logging configuration that was overriding the one I set up (it modified the root logger, for some reason -- not very kosher for a Django app!). Removed that code and everything works as expected.
See this other answer. Note that settings.py is usually imported twice, so you should avoid creating multiple handlers. Better logging support is coming to Django in 1.3 (hopefully), but for now you should ensure that if your setup code is called more than once, there are no adverse effects.
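One common guard against the double import (a sketch; the marker attribute, handler, and format are placeholders, not Django API) is to make the setup code a no-op the second time it runs:

```python
import logging

def setup_logging():
    """Attach the app's handler to the root logger, at most once."""
    root = logging.getLogger()
    if getattr(root, "_app_logging_configured", False):
        return  # settings.py was imported a second time; do nothing
    root._app_logging_configured = True  # hypothetical marker attribute
    handler = logging.StreamHandler()
    handler.setFormatter(logging.Formatter("%(levelname)s %(message)s"))
    root.addHandler(handler)

setup_logging()
n_after_first = len(logging.getLogger().handlers)
setup_logging()  # simulated second import: no duplicate handler is added
n_after_second = len(logging.getLogger().handlers)
print(n_after_first == n_after_second)  # True
```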
I'm not sure why your logged messages are going to the Apache logs, unless you've (somewhere else in your code) added a StreamHandler to your root logger with sys.stdout or sys.stderr as the stream. You might want to print out logging.getLogger().handlers just to see it's what you'd expect to see.
I used this with success (although it does not rotate):
# in settings.py
import logging

logging.basicConfig(
    level=logging.DEBUG,
    format='%(asctime)s %(levelname)s %(funcName)s %(lineno)d '
           '\033[35m%(message)s\033[0m',
    datefmt='[%d/%b/%Y %H:%M:%S]',
    filename='/tmp/my_django_app.log',
    filemode='a'
)
I'd suggest trying an absolute path, too.
I guess logging stops when Apache forks the process. After that happens, because all file descriptors are closed during daemonization, the logging system tries to reopen the log file and, as far as I understand, uses a relative file path:
log_dir = os.path.join(PROJECT_DIR, "var", "log", "my_app")
log_name = os.path.join(log_dir, "nyrb.log")
But there is no "current directory" once the process has been daemonized. Try an absolute log_dir path. Hope that helps.
