I'm testing some Django models with a bog-standard django.test.TestCase. My models.py writes to a debug log, using the following init code:
import logging
logger = logging.getLogger(__name__) # name is myapp.models
and then I write to the log with:
logger.debug("Here is my message")
In my settings.py, I've set up a single FileHandler, and a logger for myapp, using that handler and only that handler. This is great. I see messages to that log. When I'm in the Django shell, I only see messages to that log.
When, however, I run my test suite, my test suite console also sees all those messages. It's using a different formatter that I haven't explicitly defined, and it's writing to stderr. I don't have a log handler defined that writes to stderr.
I don't really want those messages spamming my console. I'll tail my log file if I want to see those messages. Is there a way to make it stop? (Yes, I could redirect stderr, but useful output goes to stderr as well.)
Edit: I've set up two handlers in my settings.py:
'handlers': {
    'null': {
        'level': 'DEBUG',
        'class': 'django.utils.log.NullHandler',
    },
    'logfile': {
        'level': 'DEBUG',
        'class': 'logging.FileHandler',
        'filename': '%s/log/development.log' % PROJECT_DIR,
        'formatter': 'simple',
    },
},
and tried this:
'loggers': {
    'django': {
        'level': 'DEBUG',
        'handlers': ['null'],
    },
    'myapp': {
        'handlers': ['logfile'],
        'level': 'DEBUG',
    },
... but the logging / stderr dumping behavior remains the same. It's like I'm getting another log handler when I'm running tests.
It's not clear from your config snippet which handlers, if any, are configured for the root logger. (I'm also assuming you're using Django 1.3.) Can you investigate and tell us what handlers have been added to the root logger when you're running tests? AFAICT Django doesn't add anything - perhaps some code you're importing does a call to basicConfig without you realising it. Use something like ack-grep to look for any occurrences of fileConfig, dictConfig, basicConfig, and addHandler - all of which could be adding a handler to the root logger.
Another thing to try: set the propagate flag to False for all top-level loggers (like "django", but also those used by your modules - say "myapp"). Does that change things?
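For example, a quick way to check what has been attached to the root logger while the tests run (a minimal sketch; adjust the logger names to your project):

import logging

# Drop this into a test's setUp and run the suite: anything listed here that
# you didn't configure was added by a stray basicConfig / addHandler call in
# imported code.
print(logging.getLogger().handlers)

And to stop your records from reaching whatever handler that turns out to be, add 'propagate': False to the top-level loggers in settings.py:

'loggers': {
    'myapp': {
        'handlers': ['logfile'],
        'level': 'DEBUG',
        'propagate': False,  # don't pass records up to the root logger
    },
},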
I have a Django REST Framework application with socket.io. To run it in staging I use Gunicorn as the WSGI server and GeventWebSocketWorker as the worker. The thing I want to fix is that there are no logs for web requests like this:
[2023-02-10 10:54:21 -0500] [35885] [DEBUG] GET /users
Here's my gunicorn.config.py:
worker_class = "geventwebsocket.gunicorn.workers.GeventWebSocketWorker"
bind = "0.0.0.0:8000"
workers = 1
loglevel = "debug"
accesslog = "-"
errorlog = "-"
access_log_format = "%(h)s %(l)s %(u)s %(t)s '%(r)s' %(s)s %(b)s '%(f)s' '%(a)s'"
Here's the command in docker compose I use to deploy the app:
command:
  - gunicorn
  - my_app.wsgi:application
I saw this issue discussed on GitLab (https://gitlab.com/noppo/gevent-websocket/-/issues/16), but I still have no idea how to fix it.
It looks like you've already configured your gunicorn settings to output debug-level logs, so it's possible that the logs you're looking for are being written to the console output or error stream rather than to a log file.
One thing you could try is redirecting the access and error logs to files using the accesslog and errorlog settings (the --access-logfile and --error-logfile command-line options), like this:
accesslog = "/path/to/access.log"
errorlog = "/path/to/error.log"
This will redirect the access and error logs to the specified files, rather than writing them to the console output. You can then check the log files to see if the debug-level logs are being written there.
Alternatively, you could try using a logging library like Python's built-in logging module to output more detailed logs. Here's an example configuration that might work for your use case:
import logging
# Create a logger with the name of your application
logger = logging.getLogger('my_app')
# Set the logging level to debug
logger.setLevel(logging.DEBUG)
# Create a stream handler to output logs to the console
handler = logging.StreamHandler()
# Add a formatter to the handler to format the log messages
formatter = logging.Formatter('[%(asctime)s] [%(levelname)s] %(message)s')
handler.setFormatter(formatter)
# Add the handler to the logger
logger.addHandler(handler)
You can place the code above in a file called logging_config.py and import it in your Django settings file, or configure the equivalent declaratively through the LOGGING setting:
LOGGING = {
    'version': 1,
    'disable_existing_loggers': False,
    'formatters': {
        'simple': {
            'format': '[%(asctime)s] [%(levelname)s] %(message)s',
        },
    },
    'handlers': {
        'console': {
            'class': 'logging.StreamHandler',
            'formatter': 'simple',
        },
    },
    'root': {
        'handlers': ['console'],
        'level': 'DEBUG',
    },
    'loggers': {
        'my_app': {
            'handlers': ['console'],
            'level': 'DEBUG',
        },
    },
}
This will configure the logger to output log messages with a timestamp, log level, and message to the console. You can then add logging statements in your Django views or other code to output debug-level logs when certain events occur:
import logging

logger = logging.getLogger('my_app')

def my_view(request):
    logger.debug(f'GET {request.path}')
    # ... rest of the view logic ...
This will output a log message like [2023-02-19 09:12:43,244] [DEBUG] GET /users when the view is accessed with a GET request.
I hope this helps!
Try adding a --access-logfile=/path/to/access.log parameter to your gunicorn command to write to a file, or --access-logfile=- to write to stdout.
https://docs.gunicorn.org/en/stable/settings.html#accesslog
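In the docker compose command from the question, that would look something like this (a sketch; the flag value is an example):

command:
  - gunicorn
  - --access-logfile=-
  - my_app.wsgi:application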
I've created a project in Django and have deployed it to Heroku. Unfortunately, a number of things that were working locally now don't work on Heroku. To troubleshoot, I need to be able to write to the Heroku logs while my program runs. So far I have not gotten it to work.
My settings/staging.py file contains:
LOGGING = {
    'version': 1,
    'disable_existing_loggers': False,
    'handlers': {
        'console': {
            'class': 'logging.StreamHandler',
        },
    },
    'loggers': {
        'django': {
            'handlers': ['console'],
            'level': os.getenv('DJANGO_LOG_LEVEL', 'DEBUG'),
        },
    },
}
I have an app called accounts, so my accounts/views.py file contains:
import logging

log = logging.getLogger(__name__)

def auth_profile(request):
    log.debug('TESTING THE DEBUGGER')
When auth_profile is accessed I want to see the text 'TESTING THE DEBUGGER' show up in the Heroku logs, but so far I get nothing.
How do I get this to work?
I think if you drop in a log.error() call you would probably see something.
I had the same problem, and it turns out Heroku's settings helper (django_heroku) breaks the pre-existing LOGGING setup. The logger you are using is not registered with Django but is making its own way to the console using Python's standard logging system.
Try doing:
django_heroku.settings(locals(), logging=False)
Or better yet, don't use it at all anymore since this package is no longer maintained anyway.
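Separately, note that the LOGGING setting in the question only configures the 'django' logger, while the view logs through logging.getLogger(__name__), i.e. 'accounts.views'. A minimal sketch of a config that also routes your own app's loggers to the console (the 'accounts' name is taken from the question; adjust as needed):

LOGGING = {
    'version': 1,
    'disable_existing_loggers': False,
    'handlers': {
        'console': {
            'class': 'logging.StreamHandler',  # Heroku collects anything written to stdout/stderr
        },
    },
    'loggers': {
        'django': {
            'handlers': ['console'],
            'level': os.getenv('DJANGO_LOG_LEVEL', 'DEBUG'),
        },
        'accounts': {
            'handlers': ['console'],
            'level': 'DEBUG',
        },
    },
}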
I have a question which I would like an expert to comment on for me, please (perhaps Graham Dumpleton).
So I have a Django web application (developed on Ubuntu 16.04) which logs some failures, as configured below, to /var/log/apache2/APPNAME.log.
Since all files in /var/log/apache2 are owned by root:adm, I gave my log file the same ownership, made sure www-data is a member of the adm group, granted rwx to the adm group, and tested that everything was working fine.
After 24 hours, the permissions of the file and the parent folder had changed: write permission had been revoked from the log file and the parent directory, causing a permission denied error because the log file could no longer be written.
Here are my questions if you could kindly help:
1) where is the right place to put Django log files?
2) What process under what user permission writes the file?
3) Which process resets permissions in the /var/log/apache and why?
Thank you very much in advance,
I hope this question helps others too.
Cheers,
Mike
views.py
from django.shortcuts import render
from django.http import HttpResponse, HttpResponseRedirect
from django import forms
from django.core.mail import send_mail, EmailMessage
from StudioHanel.forms import ContactForm
import traceback
import time
# import the logging library
import logging
import sys

# Get an instance of a logger
logger = logging.getLogger('APPNAME')

def contact(request):
    logger.debug('Contact Start!')
    if request.method == 'POST':
        etc...
settings.py
LOGGING = {
    'version': 1,
    'disable_existing_loggers': False,
    'filters': {
        'require_debug_false': {
            '()': 'django.utils.log.RequireDebugFalse'
        }
    },
    'handlers': {
        'mail_admins': {
            'level': 'ERROR',
            'filters': ['require_debug_false'],
            'class': 'django.utils.log.AdminEmailHandler'
        },
        'applogfile': {
            'level': 'DEBUG',
            'class': 'logging.handlers.RotatingFileHandler',
            'filename': os.path.join('/var/log/apache2', 'APPNAME.log'),
            'maxBytes': 1024 * 1024 * 15,  # 15 MB
            'backupCount': 10,
        },
    },
    'loggers': {
        'django.request': {
            'handlers': ['mail_admins'],
            'level': 'ERROR',
            'propagate': True,
        },
        'APPNAME': {
            'handlers': ['applogfile'],
            'level': 'DEBUG',
        },
    }
}
1) where is the right place to put Django log files?
Recently I initiated a discussion in the django-users mailing list about the directories to use for Django projects, and I concluded there is no standard practice. I've settled on using /var/log/django-project-name.
In any case, /var/log/apache2 is the wrong place because of the problem you identified, that logrotate will interfere. More on that below.
2) What process under what user permission writes the file?
If you use Gunicorn, it's the gunicorn process, and if you use uWSGI, it's uwsgi. Judging from your reference to Graham Dumpleton, you are using mod_wsgi. So the process is the mod_wsgi daemon.
The user as which these processes are writing to the file is the user as which the process runs. For mod_wsgi, you can specify a user option to the WSGIDaemonProcess directive. According to its documentation, "If this option is not supplied the daemon processes will be run as the same user that Apache would run child processes and as defined by the User directive." In Ubuntu, this is www-data. I think it's a good idea to use the user option and run the daemon as a different dedicated user.
You should not add www-data to the adm group. The adm group is people who have permission to read the log files. www-data should not have such permission. (Reading and writing its own log files is fine, but you wouldn't want it to have permission to read /var/log/syslog.)
3) Which process resets permissions in the /var/log/apache and why?
It's logrotate, which is run by cron; see /etc/cron.daily/logrotate. The configuration at /etc/logrotate.d/apache2 manipulates all files matching /var/log/apache2/*.log. The primary purpose of logrotate is to, well, rotate logs. That is, it creates a new log file every day: yesterday's is named access.log.1, the day before yesterday's access.log.2.gz, and so on, and logs older than some number of days are deleted. This is done to save space and to keep the logs manageable. logrotate will also fix the permissions of the files if they are wrong.
In theory you should configure logrotate to also rotate your Django project's logs, otherwise they might eventually fill the disk.
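A minimal logrotate entry might look like this (a sketch, assuming the log lives under /var/log/django-project-name/ and that the writing process tolerates copytruncate; adjust to your setup):

# /etc/logrotate.d/django-project-name (hypothetical file)
/var/log/django-project-name/*.log {
    daily
    rotate 14
    compress
    delaycompress
    missingok
    notifempty
    copytruncate
}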
For mod_wsgi you are better off directing Python logging to stderr or stdout so that it is captured in the Apache error log. Don't create a separate log file; by using the Apache log file, things like log file rotation will be handled for you automatically. For an example see under 'Logging of Python exceptions' in:
http://blog.dscpl.com.au/2015/04/integrating-modwsgi-express-as-django.html
Do ensure, though, that you configure a separate error log for the Apache VirtualHost so that the logging for your site is saved separately from the main Apache error log.
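A minimal sketch of that approach in settings.py (using the 'APPNAME' logger from the question; StreamHandler defaults to stderr, which mod_wsgi forwards to the Apache error log):

LOGGING = {
    'version': 1,
    'disable_existing_loggers': False,
    'handlers': {
        'console': {
            'class': 'logging.StreamHandler',  # defaults to stderr
        },
    },
    'loggers': {
        'APPNAME': {
            'handlers': ['console'],
            'level': 'DEBUG',
        },
    },
}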
I'm looking for a way to write to the console only when Django tests are run with a high verbosity level.
For example - when I run
python manage.py test -v 3
It would log my messages to console, but, when I run
python manage.py test -v 0
It would not log my messages.
I tried to use logger.info() in the code but the messages do not show up at all.
Any suggestions?
-v controls the verbosity of the test runner itself (e.g. DB creation), not the verbosity of your app's logging.
What you actually want, is to change the logging level of the django app itself, as described in the logging documentation.
You may want to simply override the logging settings just for tests, or you can use --debug-mode to set settings.DEBUG to True, and include a DEBUG-specific configuration in your settings.py (this is rather common since it also helps when developing).
This is a basic example of how to achieve the latter:
'loggers': {
    '': {
        'handlers': ['console', 'sentry'],
        'level': 'INFO',
    },
    'myapp': {
        'level': 'DEBUG' if DEBUG else 'INFO',
        'propagate': True,
    },
}
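With that in place you would run the tests with something like the following (assuming the --debug-mode option is available in your Django version):

python manage.py test --debug-mode -v 3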
I need to enable logging when code has been executed from a console application (as manage.py <cmd>) and to disable logging when an HTTPRequest is being processed. Probably a filter can be very useful here.
LOGGING = {
    ...
    'filters': {
        'require_debug_false': {
            '()': 'IsFromHTTPRequest'
        }
    },
    ...
}
But what is the best way to determine whether a command has been executed or an HTTPRequest is being processed? Traceback analysis?
Well, there is no really good way to do this. But here is what we do when we need to distinguish manage.py jenkins from regular HTTP requests:
Add to settings.py
import sys
JENKINS = "jenkins" in sys.argv
Then you can use that variable wherever you need it, in the log filter as well.
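For example, a sketch of a filter class along these lines (the class name and the exact check are hypothetical; note that under manage.py runserver HTTP requests are also served from a manage.py process, so the check is only an approximation):

import logging
import sys

class IsManagementCommand(logging.Filter):
    """Let records through only when the process was started as 'manage.py <cmd>'."""
    def filter(self, record):
        # Under Gunicorn/uWSGI/mod_wsgi, sys.argv[0] is not manage.py,
        # so records are suppressed for HTTP requests.
        return bool(sys.argv) and sys.argv[0].endswith('manage.py')

It could then be referenced from the LOGGING dict via the '()' key (dictConfig accepts a callable there) and attached to the handlers that should only emit output for console runs.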