I'm trying to create log files in Django 2.0.8. As long as I'm running it on localhost, the log files are created and everything is fine. But when I run it on IIS 7, it won't create the log files anymore.
The log files are created with Python (3.6.5) logging.
manage.py:
logging.config.fileConfig('ProjektServer/logging.ini')
logging.ini:
[loggers]
keys=root
[handlers]
keys=consoleHandler, rotatingFileHandler
[formatters]
keys=simpleFormatter
[logger_root]
level=DEBUG
handlers=consoleHandler,rotatingFileHandler
[handler_consoleHandler]
class=logging.StreamHandler
level=INFO
formatter=simpleFormatter
args=(sys.stdout,)
[handler_rotatingFileHandler]
class=logging.handlers.RotatingFileHandler
args=(r'C:\log\project.log', 'a', 1000000, 3)
level=INFO
formatter=simpleFormatter
[formatter_simpleFormatter]
format=%(asctime)s - %(name)s - %(levelname)s - %(message)s
datefmt=
Afterwards it is called like this in the Python classes, for example:
logger = logging.getLogger(__name__)
...
logger.info("ID is %d", int(shiftid))
I already tried to follow these instructions, thinking it is a permissions problem:
https://learn.microsoft.com/en-us/iis/manage/configuring-security/application-pool-identities
Open Windows Explorer
Select a file or directory.
Right click the file and select Properties
Select the Security tab
Click the Edit button and then Add button
Click the Locations button and make sure that you select your computer.
Enter IIS AppPool\DefaultAppPool in the Enter the object names to select: text box.
Click the Check Names button and click OK.
I also tried to run the website on IIS under another identity that is an admin, but still no success. Am I still missing something?
If the web application is doing impersonation, then everyone who uses the application may need write access to the folder, unless the logger uses the process account or you revert to the process account before writing the log. Add 'Everyone' to the folder permissions temporarily to see whether it is a permissions issue.
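One way to confirm whether it really is a permissions problem, independent of the logging setup, is to let the worker process itself try to open the log file and report the account it runs as. A minimal diagnostic sketch (not part of the original setup; the path comes from the question's logging.ini, everything else is illustrative):
# diagnostic sketch: put this temporarily near the top of wsgi.py / manage.py
import getpass

LOG_PATH = r'C:\log\project.log'  # same path as in logging.ini

try:
    # append mode, same as the RotatingFileHandler would use
    with open(LOG_PATH, 'a'):
        pass
except OSError as exc:
    # Re-raise with the account name so it shows up in the IIS/FastCGI error
    # output; under IIS this is usually the app pool identity (e.g.
    # "DefaultAppPool"), not your interactive account.
    raise RuntimeError('cannot open %s as user %r: %s'
                       % (LOG_PATH, getpass.getuser(), exc)) from exc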
Related
I have the following Python logging configuration file:
[loggers]
keys=root,paramiko
[handlers]
keys=consoleHandler,fileHandler
[formatters]
keys=consoleFormatter,fileFormatter
[logger_paramiko]
level=CRITICAL
handlers=consoleHandler,fileHandler
qualname=paramiko
[logger_root]
level=DEBUG
handlers=consoleHandler,fileHandler
[handler_consoleHandler]
class=StreamHandler
level=INFO
formatter=consoleFormatter
args=(sys.stdout,)
[handler_fileHandler]
class=FileHandler
level=DEBUG
formatter=fileFormatter
args=('run.log', 'w')
[formatter_consoleFormatter]
format=[%(asctime)s - %(levelname)s] %(message)s
[formatter_fileFormatter]
format=[%(asctime)s - %(pathname)s:%(lineno)s - %(levelname)s] %(message)s
As you can see, I'm logging to the console and to a file. The name of the file is run.log. I want to be able to prepend a timestamp to the file name, i.e. name my log file 2019-08-08__18:13:40-run.log. I searched online but couldn't find anything. How can I do it through a configuration file?
Python reads the args key from the config by running it through eval(), so you can put any valid expression there. For example, something that is very likely different on every run:
args=(str(time.time())+'.log', 'w') # needs import time in the code
args=(str(hash(' '))+'.log', 'w') # works without import, 99% chance to be unique
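Following the same idea, a file name close to the 2019-08-08__18:13:40-run.log format from the question can be built with time.strftime (a sketch with the same import caveat as above; note that colons are not allowed in Windows file names, hence the dashes in the time part):
args=(time.strftime('%Y-%m-%d__%H-%M-%S') + '-run.log', 'w')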
I have a logging.conf [1] file to configure the logging in my python application.
...
[handler_file_log_handler]
class=logging.handlers.TimedRotatingFileHandler
level=INFO
formatter=simple
args=('/var/log/myapp/myapp.log',)
As you can see, I'm storing the log file in /var/log/myapp directory.
What I would like is to store it in the user (the user that manages the application) home directory (/home/myuser/.myapp/log). So, my question is:
What should I configure in my logging.conf in order to be able to save the log file of my python application in the user home directory?
[1] - https://docs.python.org/2.6/library/logging.html#configuring-logging
Yes, you can pass any Python expression as an argument, including one that finds the user's home directory:
args=(f'{os.path.expanduser("~")}/myapp.log',)
For Python <= 3.5 (no f-strings):
args=(os.path.expanduser("~") + '/myapp.log',)
For completeness, here is a minimal example (tested with 3.5 and 3.9):
logging.conf
[loggers]
keys=root
[handlers]
keys=fileLogHandler
[formatters]
keys=simpleFormatter
[logger_root]
level=DEBUG
handlers=fileLogHandler
[formatter_simpleFormatter]
format=%(asctime)s - %(name)s - %(levelname)s - %(message)s
[handler_fileLogHandler]
class=logging.handlers.TimedRotatingFileHandler
formatter=simpleFormatter
args=(os.path.expanduser("~") + '/myapp.log',)
main.py
import logging
import logging.config
logging.config.fileConfig('logging.conf')
log = logging.getLogger(__name__)
log.info('Testo')
Additional note:
I thought that passing ~/myapp.log as the path would work and the expansion would be taken care of, but ~ is only expanded by the shell (or by os.path.expanduser), not by open()/FileHandler, so this does not work inside a logging.conf file:
args=('~/myapp.log',)
FileNotFoundError: [Errno 2] No such file or directory:
'/path/to/this/python/project/~/myapp.log'
The user's home directory can be found (on any OS) using:
os.path.expanduser("~")
I want to write to two log files using two loggers, with the following .ini config file:
[loggers]
keys=root,teja
[handlers]
keys=fileHandler,tejaFileHandler
[formatters]
keys=simpleFormatter
[logger_teja]
level=DEBUG
handlers=tejaFileHandler
qualname='tejaLogger'
[logger_root]
level=DEBUG
handlers=fileHandler
[handler_fileHandler]
class=logging.FileHandler
level=DEBUG
formatter=simpleFormatter
args=("error.log", "a")
[handler_tejaFileHandler]
class=logging.FileHandler
level=DEBUG
formatter=simpleFormatter
args=("teja.log", "a")
[formatter_simpleFormatter]
format=%(asctime)s - %(name)s - %(levelname)s - %(message)s
And I am using this configuration in my Python code as follows:
import logging
import logging.config
# load my module
import my_module
# load the logging configuration
logging.config.fileConfig('logging.ini')
logger1=logging.getLogger('root')
logger1.info('Hi how are you?')
logger2=logging.getLogger('teja')
logger2.debug('checking teja logger?')
I see that logs are written to the error.log file, but no logs are written to teja.log. Please correct me if I am doing something silly...
You named your logger object 'tejaLogger':
[logger_teja]
level=DEBUG
handlers=tejaFileHandler
qualname='tejaLogger'
# ^^^^^^^^^^^^
Note that the quotes are part of the name.
but your test code picks up teja instead:
logger2=logging.getLogger('teja')
Rename one or the other; although you could use logging.getLogger("'tejaLogger'")
you probably want to drop the quotes and / or rename the logger to what you expected it to be:
[logger_teja]
level=DEBUG
handlers=tejaFileHandler
qualname=teja
It turns out that the problem is on this line (in the [logger_teja] section):
qualname='tejaLogger'
If you add this to your code (it prints all the current loggers):
print(logging.Logger.manager.loggerDict)
You get:
{"'tejaLogger'": <logging.Logger object at 0x7f89631170b8>}
Which means that your logger is called literally 'tejaLogger'. Using:
logger2=logging.getLogger("'tejaLogger'")
Actually works fine. Either do that, or change qualname='tejaLogger' to qualname=teja.
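With qualname=teja in place, the test code from the question should then write to both files; a quick check (reusing the question's file names, and noting that the teja record also reaches error.log through propagation to root):
import logging
import logging.config

logging.config.fileConfig('logging.ini')
logging.getLogger('root').info('Hi how are you?')         # -> error.log
logging.getLogger('teja').debug('checking teja logger?')  # -> teja.log (and error.log via propagation)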
I am planning to add a logging mechanism to my Python/GeoDjango web service.
Is there a log4j-like logging mechanism in Python/GeoDjango?
I am looking for the equivalent of log4j's DailyRollingFileAppender, and I then want to automatically delete all log files older than one month.
Any guidance is appreciated.
UPDATE 1
I am thinking of the format below:
datetime(ms)|log level|current thread name|client ip address|logged username|source file|source file line number|log message
Yes - Python 2.5 includes the 'logging' module. One of the handlers it supports is handlers.TimedRotatingFileHandler, which is what you're looking for. 'logging' is very easy to use; for example:
import logging
import logging.config
logging.config.fileConfig('mylog.conf')
logger = logging.getLogger('root')
The following is an example config file for logging:
#======================
# mylog.conf
[loggers]
keys=root
[handlers]
keys=default
[formatters]
keys=default
[logger_root]
level=INFO
handlers=default
qualname=root
propagate=1
# note: qualname and propagate are only used by non-root loggers
[handler_default]
class=handlers.TimedRotatingFileHandler
level=INFO
formatter=default
args=('try.log', 'd', 1)
[formatter_default]
format=%(asctime)s %(pathname)s(%(lineno)d): %(levelname)s %(message)s
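To also cover the 'delete month-old log files' part of the question, TimedRotatingFileHandler takes a backupCount argument: with daily rotation, backupCount=30 keeps roughly a month of files and deletes older ones automatically. A sketch of the handler plus a formatter approximating the pipe-delimited layout from UPDATE 1 (client IP and logged username are not standard LogRecord attributes, so they would have to be supplied via the extra= dict or a logging.Filter):
[handler_default]
class=handlers.TimedRotatingFileHandler
level=INFO
formatter=default
args=('try.log', 'd', 1, 30)

[formatter_default]
format=%(asctime)s|%(levelname)s|%(threadName)s|%(pathname)s|%(lineno)d|%(message)s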
I'm working on a small, personal Django project and I've added South (latest Mercurial as of 10/9/10) to it.
However, whenever I run "./manage.py syncdb" or "./manage.py migrate" I get about 13 pages (40 lines each) of output solely about 'initial_data' files not being found. I don't have any initial_data, nor do I really want any, yet I get over 200 attempts at reading them for all the different apps in my project, including Django's own apps.
Is there any way to quiet South? I haven't given South any input beyond adding it to my INSTALLED_APPS tuple and throwing an initial migration on, but I've gotten this annoying output since I installed it.
How is your logging configured?
I have turned off much of the output by configuring logging to a higher level, as in:
[formatters]
keys=simple
[handlers]
keys=console
[loggers]
keys=root,south
[formatter_simple]
format=%(asctime)s %(levelname)7s %(message)s
datefmt=%Y-%m-%d %H:%M:%S
[handler_console]
class=StreamHandler
args=[]
formatter=simple
[logger_root]
level=INFO
qualname=root
handlers=console
[logger_south]
level=INFO
qualname=south
handlers=console
Also beware that the logging config has to be applied AFTER South's logging has been imported, because South sets up logging at import time. From my project, in my settings:
# south is setting logging on import-time; import it before setting our logger
# so it is not overwriting our settings
try:
import south.logger
except ImportError:
pass
import logging.config
if LOGGING_CONFIG_FILE:
logging.config.fileConfig(LOGGING_CONFIG_FILE)