I am testing out Stackdriver, and I'm curious how to set additional attributes besides the message itself. For example, I'd like to see which application or server is sending the message. Perhaps something like this:
message: "Hello"
tags: ["Application-1", "Server-XYZ"]
Is there a way to do this?
Additionally, is it recommended to send a plain text message or a JSON struct?
You can create user-defined log-based metric labels; see https://cloud.google.com/logging/docs/logs-based-metrics/labels
You can send custom attributes by using "Structured Logging".
https://cloud.google.com/logging/docs/structured-logging
I'm not sure which product you are running your application on (such as Google App Engine Standard/Flexible, Google Cloud Functions, Google Compute Engine, or Google Kubernetes Engine), but in general it's recommended to use JSON-formatted structured logs.
If you need to configure the logging agent (e.g., on GCE), you can set it up accordingly:
https://cloud.google.com/logging/docs/agent/installation
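For example, with the google-cloud-logging Python client you can write a structured (JSON) entry and attach custom labels to it. This is a minimal sketch; the log name, label keys and values are placeholders:
from google.cloud import logging

client = logging.Client()
logger = client.logger('my-app-log')  # placeholder log name

# Write a structured entry with custom labels attached.
logger.log_struct(
    {'message': 'Hello', 'component': 'checkout'},
    labels={'application': 'Application-1', 'server': 'Server-XYZ'},
    severity='INFO',
)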
I have the Python Agent for AppDynamics and I would like to send specific logs that I could read afterwards directly in AppDynamics.
That is, just some logs sent to AppDynamics that I would use to troubleshoot any issue.
How could I do that, and where would I read them within AppDynamics?
The Python Agent does not have this specific ability, but there are a few options which may aid in your use case.
Use Log Analytics - requires an Analytics Agent (or a Machine Agent with the Analytics extension). Set config / log line format to parse log data into a usable format in AppD. Docs: https://docs.appdynamics.com/appd/22.x/latest/en/analytics
Use Custom Events - hit the Analytics Events API to store records in a custom schema; this could be used to store log lines (see the sketch after this list). Docs: https://docs.appdynamics.com/appd/22.x/latest/en/analytics/adql-reference/adql-data/analytics-custom-events-data
Use Custom Extensions - custom scripting for the Machine Agent allows for arbitrary reporting of metrics under a host as "Custom Metrics". Not for logs specifically, meant for additional metrics. Docs: https://docs.appdynamics.com/appd/22.x/latest/en/infrastructure-visibility/machine-agent/extensions-and-custom-metrics
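For the Custom Events option, publishing a record is a plain HTTP POST to the Analytics Events Service. A rough sketch with the requests library, assuming a SaaS Events Service endpoint, an existing custom schema named app_logs, and placeholder credentials:
import requests

# Placeholders: use your own Events Service URL, global account name,
# Analytics API key, and the name of a schema you have already created.
EVENTS_SERVICE = 'https://analytics.api.appdynamics.com'
SCHEMA = 'app_logs'

headers = {
    'X-Events-API-AccountName': 'my-global-account-name',
    'X-Events-API-Key': 'my-analytics-api-key',
    'Content-type': 'application/vnd.appd.events+json;v=2',
}

# The body is a JSON array of records whose fields match the schema.
records = [{'level': 'ERROR', 'logger': 'payments', 'message': 'something broke'}]

resp = requests.post(
    '%s/events/publish/%s' % (EVENTS_SERVICE, SCHEMA),
    headers=headers,
    json=records,
)
resp.raise_for_status()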
It is also worth noting that by default, stack trace / call stack information can be found within sampled snapshots - see https://docs.appdynamics.com/appd/22.x/latest/en/application-monitoring/business-transactions/troubleshoot-business-transaction-performance-with-transaction-snapshots
I'm using Locust to load test a REST API. The endpoints take an API key as a query parameter, which I can see in cleartext in the Locust logs. I know there is --skip-log-setup, which disables all the Locust logs. Is there a way, though, to just hide the API key from the logs and subsequent HTML reports so that it is shown as **** or something like that, the same way you'd want to hide a username/password?
Thank you!
You can override how the URL is reported by passing the name= parameter when you make your client calls.
Example:
self.client.get("/blog?id=%i" % i, name="/blog?id=[id]")
http://docs.locust.io/en/stable/writing-a-locustfile.html#grouping-requests
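Applied to your case, you could keep the real key in the request URL but mask it in the reported name. A minimal sketch (API_KEY and the endpoint path are placeholders):
from locust import HttpUser, task

API_KEY = 'super-secret-key'  # placeholder; load it from an env var in practice

class ApiUser(HttpUser):
    @task
    def list_items(self):
        # The actual request still carries the key, but the stats, logs and
        # HTML report show the masked name instead of the real URL.
        self.client.get(
            '/items?api_key=%s' % API_KEY,
            name='/items?api_key=****',
        )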
I've tried deleting/recreating endpoints with the same name, and wasted a lot of time before I realized that changes do not get applied unless you also delete the corresponding Model and Endpoint configuration so that new ones can be created with that name.
Is there a way with the sagemaker python api to delete all three instead of just the endpoint?
I believe you are looking for something like this:
https://boto3.amazonaws.com/v1/documentation/api/latest/reference/services/sagemaker.html#SageMaker.Client.delete_endpoint_config
Example:
import boto3

deployment_name = 'my_deployment_name'  # name used for the endpoint and its endpoint config

client = boto3.client('sagemaker')

# Look up the model attached to the endpoint config, then delete all three.
response = client.describe_endpoint_config(EndpointConfigName=deployment_name)
model_name = response['ProductionVariants'][0]['ModelName']

client.delete_model(ModelName=model_name)
client.delete_endpoint(EndpointName=deployment_name)
client.delete_endpoint_config(EndpointConfigName=deployment_name)
It looks like AWS is currently in the process of supporting model deletion via API with this pull request.
For the time being, Amazon's only recommendation is to delete everything via the console.
If this is critical to your system, you can probably manage everything via CloudFormation and create/delete stacks containing your SageMaker models and endpoints.
I would like to sync my Cloud Datastore contents with an index in ElasticSearch. I would like for the ES index to always be up to date with the contents of Datastore.
I noticed that an equivalent mechanism is available in the App Engine Python Standard Environment by implementing a _post_put_hook method on a Datastore Model. This doesn't seem to be possible, however, using the google-cloud-datastore library available for use in the Flexible environment.
Is there any way to receive a callback after every insert? Or will I have to put up a "proxy" API in front of the datastore API which will update my ES index after every insert/delete?
The _post_put_hook() of NDB.Model only works if you have written the entity through NDB to Datastore, and yes, unfortunately the NDB library is only available in the App Engine Python Standard Environment. I don't know of such a feature in Cloud Datastore. If I remember correctly, Firebase Realtime Database and Firestore have triggers for writes, but I guess you are not eager to migrate the database either.
In Datastore you would either need a "proxy" API with the above method as you suggested, or you would need to modify your Datastore client(s) to do this upon any successful write op. The latter may come with a higher risk of failures and stale data in ElasticSearch, especially if the client is outside your control.
I believe that a custom API makes sense if consistent and up-to-date search records are important for your use cases. Datastore and Python / NDB (maybe with Cloud Endpoints) would be a good approach.
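A minimal sketch of that "proxy" write path, assuming the google-cloud-datastore and elasticsearch Python clients (all names here are illustrative):
from elasticsearch import Elasticsearch
from google.cloud import datastore

ds = datastore.Client()
es = Elasticsearch()  # point this at your ES cluster

def save_and_index(kind, key_name, properties, index_name='search-index'):
    # Write to Datastore first, then mirror the same data into ElasticSearch.
    key = ds.key(kind, key_name)
    entity = datastore.Entity(key=key)
    entity.update(properties)
    ds.put(entity)
    es.index(index=index_name, id=key_name, body=properties)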
I have a similar solution running on GAE Python Standard (although with the builtin Search API instead of ElasticSearch). If you choose this route you should be aware of two potential caveats:
_post_put_hook() is always called, even if the put operation failed. I have added a code sample below. You can find more details in the docs: model hooks, hook methods, check_success().
Exporting the data to ElasticSearch or the Search API will prolong your response time. This might be no issue for background tasks; just call the export feature inside _post_put_hook(). But if a user made the request, this could be a problem. For these cases, you can defer the export operation out of the request, either by using the deferred.defer() method or by creating a push task. More or less, they are the same. Below, I use defer().
Add a class method to every kind for which you want to export search records. Whenever something goes wrong, or you move apps / datastores, add new search indexes, etc., you can call this method; it will then query all entities of that kind from Datastore batch by batch and export the search records (a minimal sketch of such a method follows the example below).
Example with deferred export:
from google.appengine.ext import deferred
from google.appengine.ext import ndb


class CustomModel(ndb.Model):

    def _post_put_hook(self, future):
        try:
            # check_success() raises if the put failed, so the export is only
            # deferred when the write actually succeeded.
            if future.check_success() is None:
                deferred.defer(export_to_search, self.key)
        except Exception:
            pass  # or log the error to Cloud Console with logging.error('blah')


def export_to_search(key=None):
    try:
        if key is not None:
            entity = key.get()
            if entity is not None:
                call_export_api(entity)  # your ElasticSearch / Search API export
    except Exception:
        pass  # or log the error to Cloud Console with logging.error('blah')
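And a minimal sketch of the batch re-export class method mentioned above, to be added to CustomModel (the method name and batch size are just illustrative):
    @classmethod
    def export_all_to_search(cls, batch_size=100):
        # Re-export search records for every entity of this kind, batch by
        # batch, e.g. after moving apps / datastores or adding a new index.
        cursor = None
        more = True
        while more:
            entities, cursor, more = cls.query().fetch_page(
                batch_size, start_cursor=cursor)
            for entity in entities:
                deferred.defer(export_to_search, entity.key)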
I simply want to receive notifications from dropbox that a change has been made. I am currently following this tutorial:
https://www.dropbox.com/developers/reference/webhooks#tutorial
The GET method is done, verification is good.
However, when trying to mimic their implementation of POST, I am struggling because of a few things:
I have no idea what redis_url means in the def_process function of the tutorial.
I can't actually verify if anything is really being sent from dropbox.
Also, any advice on how I can debug? I can't print anything from my program since it has to be run on a site rather than in an IDE.
Redis is a key-value store; it's just a way to cache your data throughout your application.
For example, the access token that is received after the OAuth callback is stored:
redis_client.hset('tokens', uid, access_token)
only to be used later in process_user:
token = redis_client.hget('tokens', uid)
(code from https://github.com/dropbox/mdwebhook/blob/master/app.py as suggested by their documentation: https://www.dropbox.com/developers/reference/webhooks#webhooks)
The same goes for per-user delta cursors that are also stored.
There are plenty of resources on how to install Redis, for example:
https://www.digitalocean.com/community/tutorials/how-to-install-and-use-redis
In this case your redis_url would be something like:
"redis://localhost:6379/"
There are also hosted solutions, e.g. http://redistogo.com/
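That URL is then just passed to the Redis client when your app starts. A minimal sketch using the redis-py client (the environment variable name is only an example):
import os
import redis

# Fall back to a local Redis instance if no hosted URL is configured.
redis_url = os.environ.get('REDIS_URL', 'redis://localhost:6379/')
redis_client = redis.from_url(redis_url)

redis_client.hset('tokens', 'some-uid', 'some-access-token')
print(redis_client.hget('tokens', 'some-uid'))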
A possible workaround would be to use a database for this purpose instead.
As for debugging, you could use the logging facility for Python; it's thread-safe and capable of writing output to a file stream, and it should provide you with plenty of information if properly used.
More info here:
https://docs.python.org/2/howto/logging.html
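For example, a minimal sketch of file-based logging you could drop into your webhook handler (the log file path is just an example; pick somewhere your web app can write):
import logging

logging.basicConfig(
    filename='/tmp/webhook.log',
    level=logging.DEBUG,
    format='%(asctime)s %(levelname)s %(message)s',
)

# Inside your POST handler:
logging.debug('Webhook notification received, body=%r', 'request-body-here')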