Previously, when an error occurred in my application, I could find a trace of exactly where in the code it happened (file, line number) in the Google Cloud console.
Right now, in the Logging window of the Google Cloud Console, I only receive a request ID and a timestamp, with no indication of a trace or line number in the code. Selecting a log event only shows a JSON structure of the request, but nothing about the code or any helpful information about what went wrong in the application.
What option should be selected in the Google Cloud console to show a stack trace for Python App Engine applications?
Google has in the meantime updated the Cloud Console and debugger, which now does contain full stack traces for Python.
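If you also want exceptions grouped with their full stack traces in Error Reporting, you can report them explicitly from your handlers. A minimal sketch using the google-cloud-error-reporting client (handle_request and risky_operation are hypothetical names; default application credentials are assumed):

import logging
from google.cloud import error_reporting

client = error_reporting.Client()

def handle_request():
    try:
        risky_operation()  # hypothetical function that may raise
    except Exception:
        # Sends the stack trace of the exception currently being handled
        # to Error Reporting, where it shows file and line information.
        client.report_exception()
        logging.exception('request failed')
        raise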
Related
I've created an Azure Function App with Python and have published an app that runs every 5 minutes. I used to go to Function > Monitoring to see the runs from the last 30 days. I checked today and all the logs have disappeared, and the function does not display any runs in the Overview.
The last time I checked before this happened, I had loads of logs here, but now I have none. I know the function is running because if I go to Application Insights > Live Monitoring I can see the traces and can also check that the results are being processed. I haven't changed anything in the script and I'm not sure why this is happening. Has anyone experienced this and found a fix?
EDIT
I've recreated the Function App and noticed that it creates a DefaultResourceGroup-XXX resource group with a Default Workspace in it, which I remember deleting when I first created the Function App. I've left it in place this time and now I see the logs in Monitoring, but I cannot see any connection to the Function App itself. Does anyone know how this workspace relates to the logs, and is there a way I can create a more user-friendly workspace name and link it to the app?
Thank you sheldonzy. Posting your suggestions as an answer to help other community members.
On your Function App, go to Monitor. If Application Insights is enabled, you will see a Run query in Application Insights option.
Open it and check the exceptions table in Application Insights.
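If you prefer to run such a query programmatically rather than in the portal, a rough sketch using the azure-monitor-query package against a workspace-based Application Insights resource (the workspace ID is hypothetical; azure-identity credentials are assumed):

from datetime import timedelta
from azure.identity import DefaultAzureCredential
from azure.monitor.query import LogsQueryClient

# Hypothetical Log Analytics workspace ID; requires the
# azure-monitor-query and azure-identity packages.
client = LogsQueryClient(DefaultAzureCredential())
response = client.query_workspace(
    "00000000-0000-0000-0000-000000000000",
    "exceptions | order by timestamp desc | take 50",
    timespan=timedelta(days=1),
)
for table in response.tables:
    for row in table.rows:
        print(row)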
[Screenshots: log viewer and Unknown Error dialog]
I am running into an Unknown Error while executing a Cloud Function in GCP (Python).
Steps:
Running Cloud Function to retrieve the data from BigQuery DataStore and save the CSV file in GCP Storage.
It executes successfully and the files are stored in Storage. If you view the logs, it shows Finished with status code 200 (attached is the log view image), which is the success code.
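For reference, a minimal sketch of such a function, assuming an HTTP-triggered Python Cloud Function that runs a BigQuery query and writes the rows as CSV to a bucket (all project, dataset, table, bucket, and object names are hypothetical):

from google.cloud import bigquery, storage

def export_to_csv(request):
    # Run the query and fetch the result rows (hypothetical table).
    bq = bigquery.Client()
    rows = bq.query(
        "SELECT name, value FROM `my_project.my_dataset.my_table`").result()
    # Render the rows as CSV lines with a header row.
    csv_lines = ["name,value"] + [f"{row.name},{row.value}" for row in rows]
    # Upload the CSV to a hypothetical bucket and object path.
    blob = storage.Client().bucket("my-export-bucket").blob("exports/data.csv")
    blob.upload_from_string("\n".join(csv_lines), content_type="text/csv")
    return "OK", 200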
However, at the end we are getting an Unknown Error with some tracking number, as per the attached screenshot.
Has anyone seen this before, and are there any suggestions for resolving it?
Based on my follow-up with Google Support, it seems this is related to the Cloud Console itself.
The error we are experiencing is related to the Cloud Function's Tester UI timing out. It is currently set to a 1-minute maximum, even though the Cloud Function itself can have a different timeout window (between 1 and 9 minutes maximum). So if we use the CF UI testing (the Test Function option in CF), it times out after 1 minute, even though the CF itself executes successfully (success code 200 in the log view).
As per Google Support, the CF product team is working on delivering a more descriptive message (for the 1-minute Tester UI timeout) instead of this error. They are also not sure whether the team will set the testing UI's timeout to match the CF timeout. No ETA yet.
So we will be running our CF differently and not using the CF UI Console for testing.
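For example, an HTTP-triggered function can be invoked directly against its trigger URL with a client-side timeout matching the function's configured limit, bypassing the Tester UI entirely. A sketch (the URL and payload are hypothetical):

import requests

# Hypothetical trigger URL; the 540-second timeout matches the function's
# 9-minute maximum rather than the Tester UI's 1-minute limit.
resp = requests.post(
    "https://us-central1-my-project.cloudfunctions.net/my-function",
    json={"key": "value"},
    timeout=540,
)
print(resp.status_code, resp.text)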
In Python 2.7, the App Engine SDK did the work in the background to nest all logs under the parent request, giving them a correlation in Google Stackdriver.
Since the transition to Python 3, this is done through Google Cloud Logging or structured logging, and from all the references I could find, it's important that the 'sub' logs carry the same trace ID for Stackdriver to match them with the 'request' log.
And still, as you can see below, they appear as separate logs.
For context, I even tried this on an empty Django project deployed on App Engine.
I got the same result, even when following the example in the documentation:
https://cloud.google.com/run/docs/logging#writing_structured_logs
Logging to stdout gives the same result.
Edit:
After the initial request, all subsequent requests are nested under the initial request when using stdout.
But the highest severity of the 'child' logs is not propagated to the 'parent' log, so severity filters won't pick up the actual log. See below:
Thanks for the question!
It looks like you're logging the trace correctly, but your logName indicates that you're not using stdout or stderr. If you use one of these for your logs, they will correlate properly, like this:
StackDriver Logs Screenshot
You can see that the logName ends with stdout. Logs written to stdout or stderr will correlate. You can create such an entry as shown in the tutorial:
import json
import os

from flask import request

PROJECT = os.environ.get('GOOGLE_CLOUD_PROJECT')

# Inside a request handler:
# Build structured log messages as an object.
global_log_fields = {}

# Add log correlation to nest all log messages
# beneath the request log in the Log Viewer.
trace_header = request.headers.get('X-Cloud-Trace-Context')
if trace_header and PROJECT:
    trace = trace_header.split('/')
    global_log_fields['logging.googleapis.com/trace'] = (
        f"projects/{PROJECT}/traces/{trace[0]}")

# Complete a structured log entry.
entry = dict(
    severity='NOTICE',
    message='This is the default display field.',
    # The Log Viewer accesses 'component' as jsonPayload.component.
    component='arbitrary-property',
    **global_log_fields)

print(json.dumps(entry))
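Printed to stdout, this produces a single JSON line, illustratively {"severity": "NOTICE", "message": "This is the default display field.", "component": "arbitrary-property", "logging.googleapis.com/trace": "projects/my-project/traces/abc123"}, which the logging agent parses into a structured entry correlated with its request log.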
EDIT:
To filter out stdout and only see the request logs in the Stackdriver UI, you can de-select stdout in the filter, for example with a filter like logName != "projects/PROJECT_ID/logs/stdout". Logs Filter
For a sample using the Python client API, please see this article and the attached sample Flask app: Combining correlated Log Lines in Google Stackdriver.
I was able to achieve this kind of logging structure in the Google Cloud Logging Console:
I was using the Django framework, and I wrote a Django middleware which integrates the Google Cloud Logging API.
A trace needs to be added to every log object, pointing to its parent log object.
Please check manage nesting of logs in Google Stackdriver with Django.
Please check the django-google-stackdriver-nested-logging log_middleware.py source on GitHub.
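A minimal sketch of such a middleware, assuming standard Python logging and the X-Cloud-Trace-Context request header (names are illustrative, the global record factory is not thread-safe, and the linked repository has a complete implementation):

import logging
import os

PROJECT = os.environ.get('GOOGLE_CLOUD_PROJECT')

class TraceLoggingMiddleware:
    """Attach the request's trace ID to every log record emitted
    while the request is being handled."""

    def __init__(self, get_response):
        self.get_response = get_response

    def __call__(self, request):
        header = request.headers.get('X-Cloud-Trace-Context', '')
        trace_id = header.split('/')[0] if header else None

        old_factory = logging.getLogRecordFactory()

        def record_factory(*args, **kwargs):
            record = old_factory(*args, **kwargs)
            if trace_id and PROJECT:
                record.trace = f"projects/{PROJECT}/traces/{trace_id}"
            return record

        logging.setLogRecordFactory(record_factory)
        try:
            return self.get_response(request)
        finally:
            logging.setLogRecordFactory(old_factory)

A structured-logging handler then has to emit record.trace as the logging.googleapis.com/trace field of each JSON log line for the correlation to take effect.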
Friends,
I am new to Google App Engine and have been learning it on my own. I am stuck with an issue: I am trying to copy entities from one app to another. I select all the entities and click on the copy to app button at the bottom.
I gave my target application's Remote API URL and clicked "Copy Entities". It gives me an error like:
"Copy Job Status
There was a problem kicking off the jobs. The error was:
Fetch to http://xyzxyzxyz.appspot.com/_ah/remote_api/ failed with status 404."
I tried to edit app.yaml and added this piece of code (Python):

builtins:
- remote_api: on
Where am I making a mistake? I want to copy entities from one app to another. Your help is valuable for the further progress of my work.
Thanks a lot in advance.
Have you enabled remote_api in the builtins section of app.yaml?
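The 404 on /_ah/remote_api/ suggests the handler is not deployed on the target application. The stanza below has to be in the app.yaml of the target app (the one serving http://xyzxyzxyz.appspot.com), which must then be redeployed before the copy will work:

builtins:
- remote_api: on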
This might not be a bug, but a feature. I'm having problems viewing expanded logs when searching logs in the dashboard on App Engine.
Search results show the first couple of logs in full detail, but the rest of the log entries are obscured. Every new entry in the log is shown in full detail, but older ones become obscured over time.
The same behavior shows up if I try to download the logs from App Engine, except that more log entries are left unobscured.
The point is that I can't get the full log of my app, and I would like to be able to run some tasks over the data.
App Engine stores logging information in a set of circular buffers. When it runs out of space, it overwrites older log entries with new data. What you're seeing is requests whose detailed logs have been overwritten by newer requests. If you need the complete history, download the logs regularly (for example with appcfg.py request_logs) and archive them yourself before they are overwritten.