How do I set a custom "trace_id" for Datadog tracing? I have searched high and low but can't find an answer, and I suspect it's not supported. I would really appreciate it if I could get some help here.
As an example, if I can do the following in multiple files, then I can view these spans together in the Datadog UI since they all have the same trace ID:
@tracer.wrap(service='foo', resource='bar')
def bar(self, ttt):
    span = tracer.current_span()
    span.set_trace_id("my_customer_trace_id")
It turns out that the trace ID can be set when submitting spans via the HTTP endpoint https://docs.datadoghq.com/api/v1/tracing/#send-traces. There doesn't seem to be an option for setting it otherwise when sending traces to the agent directly.
This can still be useful if the performance penalty of making HTTP calls is not a concern, i.e., if you are not working on a real-time system.
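To make this concrete, here is a minimal sketch of submitting a single span with a chosen trace ID, assuming a local Datadog agent listening on the default trace port 8126 and the v0.3 trace intake described in the linked docs. The IDs, service, and resource values are hypothetical, and note that the intake expects 64-bit integer trace/span IDs, so a custom string ID would have to be mapped or hashed to an integer first.

import time
import requests

now_ns = int(time.time() * 1e9)
span = {
    "trace_id": 123456789,   # your chosen (integer) trace ID, reused across services
    "span_id": 987654321,
    "name": "bar",
    "resource": "bar",
    "service": "foo",
    "start": now_ns,         # start time in nanoseconds since the epoch
    "duration": 5_000_000,   # 5 ms, also in nanoseconds
}

# The payload is a list of traces, each of which is a list of spans.
requests.put("http://localhost:8126/v0.3/traces", json=[[span]])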
I am not very familiar with the Datadog UI, but I see that ddtrace allows you to set tags:
span.set_tag('your_own_id', '12345')
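For example, a minimal sketch of tagging the current span inside a wrapped function; the tag name 'your_own_id' and its value are placeholders for whatever correlation ID you control:

from ddtrace import tracer

@tracer.wrap(service='foo', resource='bar')
def bar(ttt):
    # current_span() returns the span created by the decorator (or None if tracing is disabled)
    span = tracer.current_span()
    if span is not None:
        span.set_tag('your_own_id', '12345')

Spans that share the tag should then be findable by filtering on it, even though it is not the trace ID itself.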
I have a generic Flask application. The application is instrumented using opentelemetry-instrumentation-flask, and I am shipping this data to an Elastic APM server with opentelemetry-exporter-otlp. This is all working fine and is set up as the documentation shows.
There are some endpoints of the application that I would like not to track with the instrumentation, as they are noisy and add little to no value for me (for example, health endpoints). I want the instrumentation to ignore them, but I cannot find out how.
How can this be done? I have been checking the Python documentation, and after searching the internet I could not find any clear answer, although I believe this must be doable.
You can do that using the environment variable OTEL_PYTHON_FLASK_EXCLUDED_URLS. It takes a comma-separated list of regular expressions for the URLs you want to exclude. For more detailed information, see https://github.com/open-telemetry/opentelemetry-python-contrib/tree/main/instrumentation/opentelemetry-instrumentation-flask#exclude-lists
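For illustration, a minimal sketch excluding hypothetical /healthz and /metrics endpoints. The variable is normally set in the process environment before the app starts; it is set inline here only so the example is self-contained, and it must be set before the Flask instrumentation is imported:

import os
os.environ["OTEL_PYTHON_FLASK_EXCLUDED_URLS"] = "healthz,metrics"  # comma-separated regexes

from flask import Flask
from opentelemetry.instrumentation.flask import FlaskInstrumentor

app = Flask(__name__)
FlaskInstrumentor().instrument_app(app)

@app.route("/healthz")
def healthz():
    # Requests matching the exclude list should not produce spans.
    return "ok"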
Looking through the API documentation, it seems that there's currently no way to access a custom report via the API. If this is in fact the case, is there a workaround to make this possible?
The goal is to get a modified version of this report as shown on the web interface.
No, unfortunately you need to build the report yourself and query it via the API.
Depending on how complex the report is, it can be done pretty quickly. You can quickly generate the GAQL needed for your API query using this tool: https://developers.google.com/google-ads/api/fields/v7/overview_query_builder
This will save you typing out all the resources manually, and will even validate it for you.
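As a sketch of what the result might look like with the Python client library (the credentials file, customer ID, and selected fields below are placeholders; adjust the GAQL to match the report you are recreating):

from google.ads.googleads.client import GoogleAdsClient

# Assumes credentials in a google-ads.yaml file; the customer ID is hypothetical.
client = GoogleAdsClient.load_from_storage("google-ads.yaml")
ga_service = client.get_service("GoogleAdsService")

query = """
    SELECT campaign.name, metrics.clicks, metrics.impressions
    FROM campaign
    WHERE segments.date DURING LAST_30_DAYS
"""

for batch in ga_service.search_stream(customer_id="1234567890", query=query):
    for row in batch.results:
        print(row.campaign.name, row.metrics.clicks, row.metrics.impressions)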
If you're stuck, let us know what report you're trying to generate and we can help with the GAQL.
I am working on an application that sends logs to GCP StackDriver. I want to put custom "tags" (or summary fields) natively on my log entry. I am looking for a solution that doesn't rely on defining custom summary fields in the console, as those are not permanent, and not project-wide.
I realized that some loggers have tags displayed. For example, GCF logs will show their execution_id. Using the following snippet, I can verify that the tags displayed depend on the name of the logger:
from google.cloud import logging
client = logging.Client()
client.logger(name="custom").log_text("foobar", labels={"execution_id": "foo"})
client.logger(name="cloudfunctions.googleapis.com%2Fcloud-functions").log_text("foobar", labels={"execution_id": "foo"})
If you filter your logs on "foobar", you will see that only the second entry has "foo" as a tag.
That tag matches the execution_id label specified in the code. The problem is that I cannot add custom labels: if I add another label that is not execution_id, it is not displayed as a tag (though it is still present in the log body).
It looks like each monitored resource has its own set of tags, e.g. BigQuery resources use protoPayload.authenticationInfo.principalEmail as a tag. But I cannot find a way to specify my own resources.
Does anybody have experience with this kind of issue?
Thanks in advance
The closest solution I found: in an expanded log entry, click on a field within the JSON representation, and in the resulting panel select "Add field to summary line".
To get more information about this topic, please refer to this link.
Additionally, I found a feature request opened with the product team in which the user wants to filter Stackdriver logs by Dataflow jobs' custom labels. The reference might be useful for your use case; no ETA was shared, nor any guarantee of implementation.
I've filed a feature request on your behalf with the product team; they'll evaluate the possibility of implementing functionality that fits your use case. You can follow up on this PIT [1], where you will also be able to receive further updates from the team.
Keep in mind that there is no ETA, nor a guarantee that this will be implemented. However, please feel free to ask for updates directly on the PIT. I would appreciate it if you accepted my answer if it was helpful for you.
[1] https://issuetracker.google.com/172667238
Locust is a great and simple load testing tool. By default it only tracks response times and content length, from which it can deduce RPS, etc. Is there any way to track custom statistics in Locust as well?
In my case, a site I'm testing returns a couple of stats via headers, for example a count of SQL queries within a request. It would be very helpful to track some of these statistics alongside the standard response times.
However, I do not see any way to do that in Locust. Is there a simple way of doing it?
The only customization I could see is setting URL names on a request, as described in the docs.
Manually storing some of the stats is not that straightforward either, as Locust is distributed, so I would like to avoid doing anything custom.
Edit:
There is an example of how custom stats can be passed around, but that data does not show up in the UI and requires a custom export. Is there any way to add additional data in Locust that will be logged both in the UI and in the data export?
Maybe something like:
class MyTaskSet(TaskSet):
    @task
    def my_task(self):
        response = self.client.get("/foo")
        self.record(foo=response.headers.get('x-foo'))
As far as I know, there is no simple way of visualizing custom data in Locust. However, by looking at https://github.com/locustio/locust/blob/master/locust/main.py#L370, you could replace the main Locust run function and inject some custom logic into https://github.com/locustio/locust/blob/master/locust/web.py. This seems to be low-hanging fruit for the Locust devs to make this part of the code more adjustable out of the box, so I'd suggest opening an issue in their GitHub.
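If collecting the numbers (without UI support) is enough, here is a minimal sketch using Locust's request event hook. It assumes a recent Locust version where the event passes the Response object, and "x-foo" is the hypothetical header from the question:

from locust import HttpUser, task, events

sql_query_counts = []  # collected per worker process; not shown in the Locust UI

@events.request.add_listener
def on_request(response=None, exception=None, **kwargs):
    # Record the custom header from successful responses, if present.
    if response is not None and exception is None:
        value = response.headers.get("x-foo")
        if value is not None:
            sql_query_counts.append(int(value))

class MyUser(HttpUser):
    @task
    def my_task(self):
        self.client.get("/foo")

Aggregating these values across workers and exporting them would still need custom reporting, which is the part that, as noted above, is not built in.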
In my website, users have the possibility to store links.
While the user is typing the internet address into the designated field, I would like to display a suggest/autocomplete box similar to Google Suggest or the Chrome Omnibar.
Example:
The user is typing a URL:
http://www.sta
Suggestions which would be displayed:
http://www.staples.com
http://www.starbucks.com
http://www.stackoverflow.com
How can I achieve this while not reinventing the wheel? :)
You could try
http://google.com/complete/search?output=toolbar&q=keyword
and then parse the XML result.
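A minimal sketch of that approach in Python, assuming the legacy toolbar endpoint still responds and returns XML with <suggestion data="..."> elements (it is unofficial, so treat it as best-effort):

import requests
import xml.etree.ElementTree as ET

def google_suggest(prefix):
    resp = requests.get(
        "http://google.com/complete/search",
        params={"output": "toolbar", "q": prefix},
        timeout=5,
    )
    root = ET.fromstring(resp.content)
    # Each suggestion is carried in the "data" attribute of a <suggestion> element.
    return [el.attrib["data"] for el in root.iter("suggestion")]

print(google_suggest("www.sta"))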
I did this once before with a Django server. There are two parts: client-side and server-side.
On the client side, you will have to send XMLHttpRequests to the server as the user types, and then display the information when it comes back. This part requires a decent amount of JavaScript, including some tricky parts like callbacks and keypress handlers.
On the server side, you will have to handle the XMLHttpRequests, which will contain what the user has typed so far, e.g. a URL like
www.yoursite.com/suggest?typed=www.sta
and then respond with the suggestions encoded in some way (I'd recommend JSON-encoding them). You also have to actually get the suggestions from your database; this could be a simple SQL call or something else, depending on your framework.
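Here is a minimal sketch of such a server-side view in Django, assuming a hypothetical Link model with a url field (the names are placeholders; adapt them to your schema):

import json
from django.http import HttpResponse
from myapp.models import Link  # hypothetical app and model

def suggest(request):
    typed = request.GET.get("typed", "")
    urls = []
    if typed:
        # Prefix match against stored links, capped at 10 suggestions.
        urls = list(
            Link.objects.filter(url__istartswith=typed)
            .values_list("url", flat=True)[:10]
        )
    return HttpResponse(json.dumps(urls), content_type="application/json")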
But the server-side part is pretty simple; the client-side part is trickier, I think. I found this article helpful.
He's writing things in PHP, but the client-side work is pretty much the same. In particular, you might find his CSS helpful.
Yahoo has a good autocomplete control.
They have a sample here.
Obviously this does nothing to help you get the data, but it looks like you have your own source and aren't actually looking to get data from Google.
If you want the autocomplete to use data from your own database, you'll need to do the search yourself and update the suggestions using AJAX as users type. For the search part, you might want to look at Lucene.
That control is often called a word wheel. MSDN has a recent walkthrough on writing one with LINQ. There are two critical aspects: deferred execution and lazy evaluation. The article has source code too.