In Python 2.7, the App Engine SDK did the work in the background to nest all logs under the parent request, so they were correlated in Google Stackdriver.
As of the transition to Python 3, this is supposed to happen through Google Cloud Logging or structured logging, and from all the references I could find, the 'sub' logs must carry the same trace ID as the 'request' log for Stackdriver to match them up.
And still, as you can see below, they appear as separate logs.
For context, I even tried this on an empty Django project deployed on App Engine.
I got the same result, even when following the example in the documentation:
https://cloud.google.com/run/docs/logging#writing_structured_logs
Logging to stdout gives the same result.
Edit:
After the initial request, all subsequent requests are nested under the initial request when logging to stdout.
But the highest severity of the 'child' logs is not propagated to the 'parent' log, so severity filters won't pick up the actual log. See below:
Thanks for the question!
It looks like you're logging the trace correctly, but your logName indicates that you're not using stdout or stderr. If you use one of these streams for your logs, they will correlate properly, like this:
[Stackdriver logs screenshot]
You can see that the logName ends with stdout. Logs written to stdout or stderr will correlate. You can create such an entry as shown in the tutorial:
# This fragment assumes it runs inside a request handler, where
# `request` is the Flask request and PROJECT holds the GCP project ID.
import json

# Build structured log messages as an object.
global_log_fields = {}

# Add log correlation to nest all log messages
# beneath request log in Log Viewer.
trace_header = request.headers.get('X-Cloud-Trace-Context')
if trace_header and PROJECT:
    trace = trace_header.split('/')
    global_log_fields['logging.googleapis.com/trace'] = (
        f"projects/{PROJECT}/traces/{trace[0]}")

# Complete a structured log entry.
entry = dict(severity='NOTICE',
             message='This is the default display field.',
             # Log viewer accesses 'component' as 'jsonPayload.component'.
             component='arbitrary-property',
             **global_log_fields)

print(json.dumps(entry))
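Put together, a minimal sketch of the same idea inside a complete Flask app (the route, the message text, and reading the project ID from the GOOGLE_CLOUD_PROJECT environment variable are assumptions, not part of the tutorial):

import json
import os

from flask import Flask, request

app = Flask(__name__)
PROJECT = os.environ.get('GOOGLE_CLOUD_PROJECT')

@app.route('/')
def index():
    entry = {'severity': 'NOTICE', 'message': 'handled /'}
    trace_header = request.headers.get('X-Cloud-Trace-Context')
    if trace_header and PROJECT:
        trace_id = trace_header.split('/')[0]
        entry['logging.googleapis.com/trace'] = (
            f'projects/{PROJECT}/traces/{trace_id}')
    # Writing one JSON object per line to stdout is what lets the
    # platform correlate this entry with the request log.
    print(json.dumps(entry), flush=True)
    return 'OK'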
EDIT:
To filter out the stdout entries and see only the request logs in the Stackdriver UI, you can de-select stdout in the logs filter:
[Logs filter screenshot]
For a sample using the Python client API, please see this article and the attached sample Flask app: Combining correlated Log Lines in Google Stackdriver
I was able to achieve this kind of logging structure in the Google Cloud Logging console:
I was using the Django framework, and I wrote a Django middleware which integrates the Google Cloud Logging API.
A trace needs to be added to every log object, pointing to its parent log object.
Please check "Manage nesting of logs in Google Stackdriver with Django".
Please check the django-google-stackdriver-nested-logging log_middleware.py source on GitHub.
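As a rough illustration of the idea (a hedged sketch, not the linked middleware itself; the GOOGLE_CLOUD_PROJECT variable and the message format are assumptions):

import json
import os

class CloudTraceLoggingMiddleware:
    """Attach the request's Cloud Trace ID to a structured log line,
    so Stackdriver can nest it under the parent request log."""

    def __init__(self, get_response):
        self.get_response = get_response
        self.project = os.environ.get('GOOGLE_CLOUD_PROJECT')

    def __call__(self, request):
        trace = None
        trace_header = request.META.get('HTTP_X_CLOUD_TRACE_CONTEXT', '')
        if trace_header and self.project:
            trace_id = trace_header.split('/')[0]
            trace = f'projects/{self.project}/traces/{trace_id}'
        entry = {
            'severity': 'INFO',
            'message': f'{request.method} {request.path}',
            'logging.googleapis.com/trace': trace,
        }
        print(json.dumps(entry))
        return self.get_response(request)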
Related
If I am running a container on Cloud Run and do a print statement in my Python code, where can I view it? Cloud logs seem to show logs for the container itself (build, etc.).
To debug my code I often write print statements that help me figure out what's going on. Where would that print output be located?
1. You can find all the logs, including your print statement output, in Cloud Logging, as mentioned in this link. When you write a print statement from your service, it will automatically be picked up by Cloud Logging.
2. Steps to view logs in Cloud Logging: Logs Explorer -> Cloud Run Revision.
3. You may want to check your logging level. For example, if you have configured the level as logging.ERROR in basicConfig (the default is WARNING) and used logging.info() in your code, it will not be printed. You can refer to this link for more information.
4. Also, you may try flushing stdout, which makes sure buffered log lines actually get written. You may refer to this Stack Overflow answer on how to do that; see the sketch after this list.
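A minimal sketch of points 3 and 4 together (the level and messages are arbitrary examples):

import logging
import sys

# Point 3: basicConfig's default level is WARNING, so INFO lines are
# dropped unless the level is lowered explicitly.
logging.basicConfig(level=logging.INFO)
logging.info('visible now that the level is INFO')

# Point 4: force buffered output to be written immediately.
print('debug marker', flush=True)
sys.stdout.flush()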
I need to set up logging in a custom web app, which ideally would match the magic that happens when running a web app in Google App Engine.
For example, in GAE there is a request_log which can be viewed. This groups all log statements together under each request, and each request carries the HTTP status code together with the endpoint path of the URL. Here is an example (I apologise in advance for the crude editing here):
In a Flask application I am deploying to Google Kubernetes Engine, I would like to get the same level of logging in place. Trouble is, I just don't know where to start.
I have got as far as installing the google-cloud-logging Python library and have some rudimentary logging in place like this....
..but this is nowhere near the level I would like.
So the question is: where do I start? Any searches / docs I have found so far have come up short.
Structured Logging
In Stackdriver Logging, structured logs refer to log entries that use the jsonPayload field to add structure to their payloads. If you use the Stackdriver Logging API or the command-line utility, gcloud logging, you can control the structure of your payloads. Here's an example of what a jsonPayload would look like:
{
  insertId: "1m9mtk4g3mwilhp"
  jsonPayload: {
    [handler]: "/"
    [method]: "GET"
    [message]: "200 OK"
  }
  labels: {
    compute.googleapis.com/resource_name: "add-structured-log-resource"
  }
  logName: "projects/my-sample-project-12345/logs/structured-log"
  receiveTimestamp: "2018-03-21T01:53:41.118200931Z"
  resource: {
    labels: {
      instance_id: "5351724540900470204"
      project_id: "my-sample-project-12345"
      zone: "us-central1-c"
    }
    type: "gce_instance"
  }
  timestamp: "2018-03-21T01:53:39.071920609Z"
}
You can set your own customizable jsonPayload with the parameters and values that you would like, and then write that information to the Stackdriver Logs Viewer.
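For instance, a minimal sketch using the google-cloud-logging client library (the log name and payload fields are arbitrary examples; credentials are assumed to come from the environment):

from google.cloud import logging as gcp_logging

client = gcp_logging.Client()
logger = client.logger('structured-log')  # arbitrary log name

# log_struct() writes the given dict as the entry's jsonPayload.
logger.log_struct(
    {'handler': '/', 'method': 'GET', 'message': '200 OK'},
    severity='INFO',
)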
Setting Debug mode to True
When setting debug=True, you will be able to see your app in debugging mode: the HTTP requests will appear on your console for debugging purposes, and you could then write these requests to the Stackdriver Logs Viewer. Here is an example of a Hello World Flask app running in debug mode:
from flask import Flask

app = Flask(__name__)

@app.route("/")
def hello():
    return "Hello World!"

if __name__ == "__main__":
    app.run(port='5000', debug=True)
To this app you could add a Flask logging handler as follows:
import logging
from logging.handlers import RotatingFileHandler
from flask import Flask

app = Flask(__name__)

@app.route('/')
def foo():
    app.logger.warning('A warning occurred (%d apples)', 42)
    app.logger.error('An error occurred')
    app.logger.info('Info')
    return "foo"

if __name__ == '__main__':
    handler = RotatingFileHandler('foo.log', maxBytes=10000, backupCount=1)
    handler.setLevel(logging.INFO)
    # Lower the logger's own level too, or INFO records never reach the handler.
    app.logger.setLevel(logging.INFO)
    app.logger.addHandler(handler)
    app.run()
As you can see, there are ways to achieve this by following the proper log configuration, although the Stackdriver Logs Viewer UI will not look the same for Kubernetes logs as it does for App Engine logs.
Additionally, you could also take a look at Combining correlated log lines in Google Stackdriver, since it will give you a better idea of how to batch your logs by categories or groups in case you need to do so.
Click on "View options" at the top right corner of the logs panel > "Modify custom fields".
https://cloud.google.com/logging/docs/view/overview#custom-fields
I am writing this here to let people know what I have come up with during my investigations.
The information supplied by sllopis got me to the closest solution: using a mixture of structured logging and refactoring some of the code in the flask-gcp-log-groups library, I am able to get requests logged in Stackdriver with log lines correlated underneath.
Unfortunately this solution has a few gaping holes making it far from ideal, albeit the best I can come up with so far given Stackdriver's rigidity:
Each time I drill into a request there is a "flash" as Stackdriver searches for and grabs all the trace entries matching that request. The bigger the collection of entries, the longer the flash takes to complete.
I cannot search for text within the correlated lines when only looking at the "request" log. For example, say a correlated log entry underneath a request contains the text "now you see me"; if I search for the string "see", it will not bring up that request in the list of search results.
I may be missing something obvious, but I have spent several very frustrating days trying to achieve something you would think should be quite simple.
Ideally I would create a protoPayload per log entry, within which I would put an array under the property "line", similar to how Google App Engine does its logging.
However, there does not appear to be a way of doing this, as protoPayload is reserved for Audit Logs.
Thanks to sllopis for the information supplied; if I don't find a better solution soon, I will mark the answer as correct, as it is the closest I believe I will get to what I want to achieve.
Given the situation, I am very tempted to ditch Stackdriver in favour of a better logging solution. Any suggestions welcome!
Probably I don't quite understand how logging really works in Python. I'm trying to debug a Flask + SQLAlchemy (but without flask_sqlalchemy) app which mysteriously hangs on some queries only when run within Apache, so I need proper logging to get meaningful information. The Flask application by default comes with a nice logger + handler, but how do I get SQLAlchemy to use the same logger?
The "Configuring Logging" section in the SQLAlchemy documentation just explains how to turn on logging in general, but not how to "connect" SQLAlchemy's logging output to an already existing logger.
I've been looking at "Flask + sqlalchemy advanced logging" for a while with a blank, expressionless face. I have no idea if the answer to my question is even in there.
EDIT: Thanks to the answer given, I now know that I can have two loggers use the same handler. Now of course my Apache error log is littered with hundreds of lines of echoed SQL calls. I'd like to log only error messages to the httpd log and divert all lower-level stuff to a separate logfile. See the code below. However, I still get every debug message in the httpd log. Why?
if app.config['DEBUG']:
    # Make logger accept all log levels
    app.logger.setLevel(logging.DEBUG)
    for h in app.logger.handlers:
        # restrict logging to /var/log/httpd/error_log to errors only
        h.setLevel(logging.ERROR)
    if app.config['LOGFILE']:
        # configure debug logging only if logfile is set
        debug_handler = logging.FileHandler(app.config['LOGFILE'])
        debug_handler.setLevel(logging.DEBUG)
        app.logger.addHandler(debug_handler)

        # get logger for SQLAlchemy
        sq_log = logging.getLogger('sqlalchemy.engine')
        sq_log.setLevel(logging.DEBUG)

        # remove any preconfigured handlers there might be
        # (iterate over a copy, since removeHandler mutates the list)
        for h in list(sq_log.handlers):
            sq_log.removeHandler(h)
            h.close()

        # Now, SQLAlchemy should not have any handlers at all. Let's add
        # one for the logfile.
        sq_log.addHandler(debug_handler)
You cannot make SQLAlchemy and Flask use the same logger, but you can make them write to one place by adding a common handler to both. This article may be helpful: https://www.electricmonk.nl/log/2017/08/06/understanding-pythons-logging-module/
By the way, if you want to group all logs belonging to a single request, you can set a unique name for the current thread before the request and add threadName to your logging formatter; see the sketch below.
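A minimal sketch of that idea, assuming a Flask app (the format string and the "req-" thread-name prefix are arbitrary choices):

import logging
import threading
import uuid

from flask import Flask

app = Flask(__name__)

# One shared handler feeds both Flask's and SQLAlchemy's loggers;
# threadName in the format lets you group lines from the same request.
shared_handler = logging.StreamHandler()
shared_handler.setFormatter(logging.Formatter(
    '%(asctime)s %(threadName)s %(name)s %(levelname)s %(message)s'))

app.logger.addHandler(shared_handler)

sa_log = logging.getLogger('sqlalchemy.engine')
sa_log.setLevel(logging.INFO)
sa_log.addHandler(shared_handler)

@app.before_request
def tag_thread():
    # Give the current thread a unique name so every log line emitted
    # while handling this request shares the same threadName.
    threading.current_thread().name = 'req-' + uuid.uuid4().hex[:8]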
Answer to my question at EDIT: I still had echo=True set on the create_engine call, so what I saw was all the additional output on stderr. echo=False stops that, but the engine still logs at level DEBUG.
Clear all the handlers created by SQLAlchemy:
logging.getLogger("sqlalchemy.engine.Engine").handlers.clear()
The code above should be called after the engine is created.
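For example (a hedged sketch; the in-memory SQLite URL is a placeholder, and echo=True is what installs the handler in the first place):

import logging
from sqlalchemy import create_engine

engine = create_engine('sqlite://', echo=True)  # echo=True installs a handler

# Drop the handler SQLAlchemy just attached so your own handlers
# decide where engine logs go.
logging.getLogger("sqlalchemy.engine.Engine").handlers.clear()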
Previously, when an error occurred in my application, I could find a trace of the entire code path to where it happened (file, line number) in the Google Cloud console.
Right now I only receive a request ID and a timestamp, with no indication of a trace or line number in the code, in the "Logging" window of the Google Cloud console. Selecting a "log event" only shows some sort of JSON structure of a request, but nothing about the code or any helpful information on what went wrong with the application.
What option should be selected in the Google Cloud console to show a stack trace for Python App Engine applications?
Google has in the meantime updated the Cloud console and debugger, which now does contain full stack traces for Python.
This might not be a bug, but a feature. I'm having a problem viewing expanded logs when searching logs in the dashboard on App Engine.
Search results show the first couple of logs in full detail, but the rest of the log entries are obscured. Every new entry in the log is shown in full detail, but older ones get obscured over time.
The same behavior shows up if I try to download logs from App Engine, except that more log entries remain unobscured.
The point is that I can't get the full log of my app, and I would like to be able to run some tasks over the data.
App Engine stores logging information in a set of circular buffers. When it runs out of space, it overwrites older log entries with the new data. What you're seeing are requests for which the detailed logs have been overwritten by newer requests.