Providing "resource" argument to CloudLoggingHandler class does't work - python

Providing the resource argument to the CloudLoggingHandler class doesn't work; that is, nothing gets logged to Stackdriver. If I comment resource out, it works fine. I also tried a simple Python script that doesn't run inside Django, and it worked fine too.
These are my actual Django LOGGING handler settings:
'handlers': {
    'stderr': {
        'class': 'google.cloud.logging.handlers.CloudLoggingHandler',
        'name': "name",
        'resource': Resource(
            type="container",
            labels={
                ...
            },
        ),
        'client': google.cloud.logging.Client(),
    },
},
No resource, no problem:
'handlers': {
    'stderr': {
        'class': 'google.cloud.logging.handlers.CloudLoggingHandler',
        'name': "name",
        'client': google.cloud.logging.Client(),
    },
},
A simple script works too:
import logging

import google.cloud.logging  # Don't conflict with standard logging
from google.cloud.logging.handlers import CloudLoggingHandler, setup_logging
from google.cloud.logging.resource import Resource

client = google.cloud.logging.Client()
logging.getLogger().setLevel(logging.INFO)  # defaults to WARN

res = Resource(
    type="container",
    labels={
        ...
    },
)

handler = CloudLoggingHandler(client, name='name', resource=res)
setup_logging(handler)
logging.error('logging!')
I am using google-cloud-logging version 1.10.0.
Can anyone give me some suggestions for debugging Stackdriver logging?

This issue is most likely caused by the resource being malformed: the type may not be supported (or no longer supported), the labels may not match those expected for the given type, required labels may be missing, or special permission may be required to write logs against the specific resource type in question.
In this particular case, the use of container rather than k8s_container looks suspicious. Based on this conversation, and on the fact that k8s_container appears in both the Stackdriver Monitoring and the Stackdriver Logging lists of resource types while container is documented only in the latter, container is likely a deprecated resource type that has been supplanted by k8s_container.
If that does not work, note that failures to write the remote logs should produce logs locally (or through whatever handlers have been attached to the background-thread transport). Those logs are admittedly harder to access, but if you can get to them, it should be possible to see what went wrong with the attempt to write to Stackdriver Logging.
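If switching resource types is the fix, a minimal sketch of the handler setup might look like the following. The label set shown is the one documented for the k8s_container resource type; every value below is a placeholder you would fill in for your own cluster:
import logging

import google.cloud.logging
from google.cloud.logging.handlers import CloudLoggingHandler, setup_logging
from google.cloud.logging.resource import Resource

client = google.cloud.logging.Client()

# k8s_container expects these labels; all values here are placeholders.
res = Resource(
    type="k8s_container",
    labels={
        "project_id": "my-project",
        "location": "us-central1-a",
        "cluster_name": "my-cluster",
        "namespace_name": "default",
        "pod_name": "my-pod",
        "container_name": "my-container",
    },
)

handler = CloudLoggingHandler(client, name='name', resource=res)
setup_logging(handler)
logging.error('logging against k8s_container!')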

Related

Which logger to use in a Python Flask app with Connexion

I'm using both Flask and Connexion for a Python-based REST API, and it runs within a Docker container. Here is main.py:
import connexion
import logging
from app.log import handler

# initiate swagger/connexion
application = connexion.App(__name__, specification_dir='./')
application.add_api('swagger.yml')

# logging
application.app.logger.handlers.clear()
application.app.logger.addHandler(handler)
application.app.logger.setLevel(logging.DEBUG)
application.app.logger.debug('application starting...')

# if we're running in standalone mode, run the application
if __name__ == '__main__':
    application.run(host='0.0.0.0', port=5000, debug=True)
This works fine, and in my syslog server I can see:
2020-01-14 11:03:14,951 app main:DEBUG application starting...
However, I'm not sure how to log correctly from files outside of main.py. For example, I have a status.py which has a single route for GET /status and the code looks like:
import yaml
from flask import current_app
import logging

def read():
    # LOG TESTING
    current_app.logger.debug('Test using current_app')
    logging.getLogger(__name__).debug('Test using getLogger')
    print('Test using print')
    with open('./swagger.yml', 'r') as f:
        y = yaml.load(f)
    return {
        # .... some data here
    }
In my syslog server, I can see:
Test using print
./status.py:22: YAMLLoadWarning: calling yaml.load() without Loader=... is deprecated, as the default Loader is unsafe. Please read https://msg.pyyaml.org/load for full details.
y = yaml.load(f)
I would like to use the same logging mechanism that main.py uses in all of my separate files, but I can only get it to work from main.py. The only thing that works outside of main.py is the print function; however, as can be seen above, errors also get picked up (albeit with no timestamp).
Please review the docs here: https://flask.palletsprojects.com/en/1.1.x/logging/. You are changing the logging configuration after touching app.logger, so the application has already started; you need to override the defaults before that. The document covers it, but here is the gist.
Before you instantiate the Flask app, do this:
from logging.config import dictConfig

dictConfig({
    'version': 1,
    'formatters': {'default': {
        'format': '[%(asctime)s] %(levelname)s in %(module)s: %(message)s',
    }},
    'handlers': {'wsgi': {
        'class': 'logging.StreamHandler',
        'stream': 'ext://flask.logging.wsgi_errors_stream',
        'formatter': 'default'
    }},
    'root': {
        'level': 'INFO',
        'handlers': ['wsgi']
    }
})

app = Flask(__name__)  # important! logging stuff is set before this.
One thing to note is that errors from web requests get logged differently than errors outside of web requests (e.g. jobs, CLI, etc.). The default behavior is to log to standard error, which in your case is syslog.
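Once the root logger is configured this way before the app is created, modules outside main.py can use plain module-level loggers, which propagate up to the root 'wsgi' handler. A minimal sketch of what status.py could then look like (the message is illustrative, and the root level above is INFO, so use info rather than debug):
# status.py - assumes the dictConfig above ran before the app was created
import logging

logger = logging.getLogger(__name__)  # propagates to the root 'wsgi' handler

def read():
    logger.info('Test using a module-level logger')
    return {
        # .... some data here
    }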

Celery tasks don't send email to admins on logger critical messages

My Celery tasks don't send an email to the application admins each time I call logger.critical.
I'm building a Django application. The current configuration of my project allows the admins of the application to receive an email each time a logger.critical message is created. This was pretty straightforward to set up; I just followed the documentation for both projects (Celery and Django). For some reason that I can't figure out, code that runs inside a Celery task does not behave the same way: it doesn't send an email to the application admins each time the logger.critical message is created.
Does celery actually even allow this to be done?
Am I missing some configuration?
Has anyone had this problem and managed to solve it?
Using:
Django 1.11
Celery 4.3
Thanks for any help.
As stated in the documentation, Celery overrides the current logging configuration to apply its own. It also says that you can set CELERYD_HIJACK_ROOT_LOGGER to False in your Django settings to prevent this behavior; what is not well documented is that this does not really work at the moment.
In my opinion you have 2 options:
1. Prevent Celery from overriding your configuration (really) using the setup_logging signal
Open your celery.py file and add the following:
from celery.signals import setup_logging

@setup_logging.connect
def config_loggers(*args, **kwargs):
    pass
After that your file should look more or less like this:
from __future__ import absolute_import, unicode_literals
import os
from celery import Celery
from celery.signals import setup_logging

os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'myproject.settings')

app = Celery('myproject')
app.config_from_object('django.conf:settings', namespace='CELERY')
app.autodiscover_tasks()

@setup_logging.connect
def config_loggers(*args, **kwargs):
    pass
However, I would avoid this option unless you have a really good reason, because this way you lose the default task logging handled by Celery, which is quite good to have.
2. Use a specific logger
You can define a custom logger in your Django LOGGING configuration and use it in your task, eg:
Django settings:
LOGGING = {
    # ... other configs ...
    'handlers': {
        'my_email_handler': {
            # ... handler configuration ...
        },
    },
    'loggers': {
        # ... other loggers ...
        'my_custom_logger': {
            'handlers': ['my_email_handler'],
            'level': 'CRITICAL',
            'propagate': True,
        },
    },
}
Tasks:
import logging

from celery import shared_task

logger = logging.getLogger('my_custom_logger')

@shared_task
def log():
    logger.critical('Something bad happened!')
I believe this is the best approach for you because, as far as I understand, you need to manually log messages, and this allows you to keep using the Celery logging system.
From Celery docs:
By default any previously configured handlers on the root logger will be removed. If you want to customize your own logging handlers, then you can disable this behavior by setting worker_hijack_root_logger = False.
Celery installs its own logger, which you can obtain with the get_task_logger() call. I assume you wrote your own logger that implements the logic you described in the original question. Read more about Celery logging to find out how to disable this behaviour and tweak Celery to your needs.
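For completeness, a minimal sketch of the get_task_logger() approach mentioned above (the task body is illustrative):
from celery import shared_task
from celery.utils.log import get_task_logger

logger = get_task_logger(__name__)  # Celery's per-task logger

@shared_task
def log():
    # This record goes through Celery's own logging setup rather than Django's.
    logger.critical('Something bad happened!')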

What permission/user does apache2 use to write django logs

I have a very good question which I would like an expert to comment on for me, please (perhaps Graham Dumpleton).
I have a Django web application (developed on Ubuntu 16.04) which logs some failures, as shown below, to /var/log/apache2/APPNAME.log.
Since all files in /var/log/apache2 have root:adm ownership, I gave my log file the same ownership, made sure www-data is a member of the adm group, granted rwx to the adm group on the file, and tested that everything was working fine.
After 24 hours, the permissions of the file and the parent folder had changed: the write permission had been revoked from both the log file and the parent directory, causing a permission-denied error because the log file couldn't be written.
Here are my questions if you could kindly help:
1) where is the right place to put Django log files?
2) What process under what user permission writes the file?
3) Which process resets permissions in the /var/log/apache and why?
Thank you very much in advance,
I hope this question helps others too.
Cheers,
Mike
views.py
from django.shortcuts import render
from django.http import HttpResponse, HttpResponseRedirect
from django import forms
from django.core.mail import send_mail, EmailMessage
from StudioHanel.forms import ContactForm
import traceback
import time

# import the logging library
import logging
import sys

# Get an instance of a logger
logger = logging.getLogger('APPNAME')

def contact(request):
    logger.debug('Contact Start!')
    if request.method == 'POST':
        # etc...
settings.py
LOGGING = {
    'version': 1,
    'disable_existing_loggers': False,
    'filters': {
        'require_debug_false': {
            '()': 'django.utils.log.RequireDebugFalse'
        }
    },
    'handlers': {
        'mail_admins': {
            'level': 'ERROR',
            'filters': ['require_debug_false'],
            'class': 'django.utils.log.AdminEmailHandler'
        },
        'applogfile': {
            'level': 'DEBUG',
            'class': 'logging.handlers.RotatingFileHandler',
            'filename': os.path.join('/var/log/apache2', 'APPNAME.log'),
            'maxBytes': 1024 * 1024 * 15,  # 15MB
            'backupCount': 10,
        },
    },
    'loggers': {
        'django.request': {
            'handlers': ['mail_admins'],
            'level': 'ERROR',
            'propagate': True,
        },
        'APPNAME': {
            'handlers': ['applogfile'],
            'level': 'DEBUG',
        },
    }
}
1) where is the right place to put Django log files?
Recently I initiated a discussion in the django-users mailing list about the directories to use for Django projects, and I concluded there is no standard practice. I've settled on using /var/log/django-project-name.
In any case, /var/log/apache2 is the wrong place because of the problem you identified, that logrotate will interfere. More on that below.
2) What process under what user permission writes the file?
If you use Gunicorn, it's the gunicorn process; if you use uWSGI, it's uwsgi. Judging from your reference to Graham Dumpleton, you are using mod_wsgi, so the process is the mod_wsgi daemon.
The user as which these processes are writing to the file is the user as which the process runs. For mod_wsgi, you can specify a user option to the WSGIDaemonProcess directive. According to its documentation, "If this option is not supplied the daemon processes will be run as the same user that Apache would run child processes and as defined by the User directive." In Ubuntu, this is www-data. I think it's a good idea to use the user option and run the daemon as a different dedicated user.
You should not add www-data to the adm group. The adm group is people who have permission to read the log files. www-data should not have such permission. (Reading and writing its own log files is fine, but you wouldn't want it to have permission to read /var/log/syslog.)
3) Which process resets permissions in the /var/log/apache and why?
It's logrotate, which is run by cron; see /etc/cron.daily/logrotate. The configuration at /etc/logrotate.d/apache2 manipulates all files matching /var/log/apache2/*.log. The primary purpose of logrotate is to, well, rotate logs: it creates a new log file every day, yesterday's is named access.log.1, the day before yesterday's access.log.2.gz, and so on, and logs older than a certain number of days are deleted. This is done to save space and to keep the logs manageable. logrotate will also fix the permissions of the files if they are wrong.
In theory you should configure logrotate to also rotate your Django project's logs, otherwise they might eventually fill the disk.
For mod_wsgi you are better off directing Python logging to stderr or stdout so that it is captured in the Apache error log. Don't create a separate log file; by using the Apache log file, things like log file rotation will be handled for you automatically. For an example, see under 'Logging of Python exceptions' in:
http://blog.dscpl.com.au/2015/04/integrating-modwsgi-express-as-django.html
Do ensure, though, that you configure a separate error log for the VirtualHost in Apache, so that the logging for your site is saved separately from the main Apache error log.
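To make that concrete, a minimal sketch of a Django LOGGING config that routes the app's logging to stderr under mod_wsgi (the 'console' handler name is illustrative; APPNAME is the logger from the question):
# settings.py fragment - stderr is captured in the VirtualHost's Apache error log
import sys

LOGGING = {
    'version': 1,
    'disable_existing_loggers': False,
    'handlers': {
        'console': {
            'level': 'DEBUG',
            'class': 'logging.StreamHandler',
            'stream': sys.stderr,
        },
    },
    'loggers': {
        'APPNAME': {
            'handlers': ['console'],
            'level': 'DEBUG',
        },
    },
}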

Django: Is process started from console or from a HTTPRequest?

I need to enable logging if the code was executed from the console (as manage.py <cmd>) and to disable logging if an HTTPRequest is being processed. A filter can probably be very useful here.
LOGGING = {
    ...
    'filters': {
        'require_debug_false': {
            '()': 'IsFromHTTPRequest'
        }
    },
    ...
}
But what is the best way to determine whether a command was executed or an HTTPRequest is being processed? Traceback analysis?
Well, there is no really good way to do this. But here is what we do when we need to distinguish manage.py jenkins from regular HTTP requests:
Add to settings.py
import sys
JENKINS = "jenkins" in sys.argv
Then you can use that variable whenever you need it, in the log filter as well, as sketched below.
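A sketch of wiring that idea into a logging filter (the class name, module path, and detection heuristic are assumptions; what sys.argv contains depends on how your WSGI server and manage.py are invoked, so adapt the check to your deployment):
# filters.py - a minimal sketch
import logging
import sys

class IsFromConsole(logging.Filter):
    # Keep records only when the process was started via manage.py.
    def filter(self, record):
        return len(sys.argv) > 0 and 'manage.py' in sys.argv[0]

Then reference it from the LOGGING dict (fragment):
'filters': {
    'is_from_console': {
        '()': 'myproject.filters.IsFromConsole',
    }
},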

How to stop logging in Django unittests from printing to stderr?

I'm testing some Django models with a bog-standard django.test.TestCase. My models.py writes to a debug log, using the following init code:
import logging
logger = logging.getLogger(__name__) # name is myapp.models
and then I write to the log with:
logger.debug("Here is my message")
In my settings.py, I've set up a single FileHandler and a logger for myapp that uses that handler, and only that handler. This is great: I see messages in that log. When I'm in the Django shell, I only see messages in that log.
When I run my test suite, however, my test suite's console also sees all those messages, using a different formatter that I haven't explicitly defined, and writing to stderr. I don't have a log handler defined that writes to stderr.
I don't really want those messages spamming my console; I'll tail my log file if I want to see them. Is there a way to make it stop? (Yes, I could redirect stderr, but useful output goes to stderr as well.)
Edit: I've set up two handlers in my settings.py:
'handlers': {
    'null': {
        'level': 'DEBUG',
        'class': 'django.utils.log.NullHandler',
    },
    'logfile': {
        'level': 'DEBUG',
        'class': 'logging.FileHandler',
        'filename': '%s/log/development.log' % PROJECT_DIR,
        'formatter': 'simple'
    },
},
and tried this:
'loggers': {
    'django': {
        'level': 'DEBUG',
        'handlers': ['null']
    },
    'myapp': {
        'handlers': ['logfile'],
        'level': 'DEBUG',
    },
},
... but the logging / stderr dumping behavior remains the same. It's like I'm getting another log handler when I'm running tests.
It's not clear from your config snippet which handlers, if any, are configured for the root logger. (I'm also assuming you're using Django 1.3.) Can you investigate and tell us what handlers have been added to the root logger when you're running tests? As far as I can tell, Django doesn't add anything; perhaps some code you're importing calls basicConfig without you realising it. Use something like ack-grep to look for occurrences of fileConfig, dictConfig, basicConfig, and addHandler, any of which could be adding a handler to the root logger.
Another thing to try: set the propagate flag to False for all top-level loggers (like "django", but also those used by your modules - say "myapp"). Does that change things?
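For reference, a sketch of what that propagate change would look like in the LOGGING dict from the question:
'loggers': {
    'django': {
        'level': 'DEBUG',
        'handlers': ['null'],
        'propagate': False,  # don't let records bubble up to the root logger
    },
    'myapp': {
        'handlers': ['logfile'],
        'level': 'DEBUG',
        'propagate': False,  # keeps any root handler added by the test runner quiet
    },
},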
