How to add traces for AppDynamics - python

I have a Python agent for AppDynamics and I would like to have a feature to send specific logs that I could read afterwards directly in AppDynamics.
That is, just some logs sent to AppDynamics that I would use to troubleshoot any issue.
How could I do that, and where would I read them within AppDynamics?

The Python Agent does not have this specific ability, but there are a few options that may help with your use case.
Use Log Analytics - requires an Analytics Agent (or a Machine Agent with the Analytics extension). Configure the log source and line format so the log data is parsed into a usable format in AppDynamics. Docs: https://docs.appdynamics.com/appd/22.x/latest/en/analytics
Use Custom Events - hit the Analytics Events API to store records in a custom schema, which could be used to store log lines. Docs: https://docs.appdynamics.com/appd/22.x/latest/en/analytics/adql-reference/adql-data/analytics-custom-events-data
Use Custom Extensions - custom scripting for the Machine Agent allows arbitrary values to be reported under a host as "Custom Metrics". This is meant for additional metrics, not logs specifically. Docs: https://docs.appdynamics.com/appd/22.x/latest/en/infrastructure-visibility/machine-agent/extensions-and-custom-metrics
It is also worth noting that, by default, stack trace / call stack information can be found within sampled transaction snapshots - see https://docs.appdynamics.com/appd/22.x/latest/en/application-monitoring/business-transactions/troubleshoot-business-transaction-performance-with-transaction-snapshots
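For the Custom Events route, the shape of the API call looks roughly like the sketch below. The endpoint host, schema name, and field names are illustrative assumptions: you would create the schema beforehand and substitute your own account name and Events API key.

```python
import json

# Hypothetical values -- the endpoint host varies by SaaS region, and
# "custom_app_logs" stands in for a schema you created beforehand.
ANALYTICS_ENDPOINT = "https://analytics.api.appdynamics.com"
SCHEMA_NAME = "custom_app_logs"

def build_publish_request(account_name, api_key, records):
    """Build the URL, headers, and body for publishing records to a custom schema."""
    url = "{}/events/publish/{}".format(ANALYTICS_ENDPOINT, SCHEMA_NAME)
    headers = {
        "X-Events-API-AccountName": account_name,
        "X-Events-API-Key": api_key,
        "Content-Type": "application/vnd.appd.events+json;v=2",
    }
    return url, headers, json.dumps(records)

url, headers, body = build_publish_request(
    "<ACCOUNT_NAME>", "<API_KEY>",
    [{"logLevel": "ERROR", "message": "Something went wrong"}],
)
# requests.post(url, headers=headers, data=body)  # send with the requests library
```

Records published this way become queryable with ADQL in the Analytics UI.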


API endpoint for data from Microsoft Exchange Online Protection?

I am working on a project where I have been using Python to make API calls to our organization's various technologies to get data, which I then push to Power BI to track metrics over time relating to IT Security.
My boss wants to see info added from Exchange Online Protection such as malware detected in emails, spam blocks etc., essentially replicating some of the email and collaboration reports you'd see in M365 defender > reports > email and collaboration (security.microsoft.com/emailandcollabreport).
I have tried the Defender API and MS Graph API, read through a ton of documentation, and can't seem to find anywhere to pull this info from. Has anyone done something similar, or know where this data can be pulled from?
Thanks in advance.
You can try the Microsoft Graph Security API, which lets you retrieve alerts, information protection data, and secure score. Also refer to the alerts section of the documentation, which lists the providers currently supported by the Microsoft Graph Security API.
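As a sketch of that approach, alerts can be pulled from the v1.0 security/alerts endpoint with a bearer token. Token acquisition (e.g. via MSAL) is not shown, and the placeholder token must be replaced with a real one.

```python
import urllib.request

GRAPH_ALERTS_URL = "https://graph.microsoft.com/v1.0/security/alerts"

def build_alerts_request(access_token, top=10):
    """Build an authenticated GET request for the Graph Security alerts endpoint."""
    url = "{}?$top={}".format(GRAPH_ALERTS_URL, top)
    return urllib.request.Request(
        url, headers={"Authorization": "Bearer {}".format(access_token)}
    )

req = build_alerts_request("<ACCESS_TOKEN>")
# with urllib.request.urlopen(req) as resp:   # requires a valid OAuth token
#     alerts = json.load(resp)["value"]
```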
In case anyone else runs into this, this is the solution I ended up using (hacky as it may be):
The only way to extract the pertinent info seems to be through PowerShell; you need the ExchangeOnlineManagement and PSWSMan modules, so those will need to be installed.
You need to add an app to your Azure instance with at least the Global Reader role (or something custom), then generate self-signed certificates and upload them to the app.
I then ran the following lines as a ps1 script:
Connect-ExchangeOnline -CertificateFilePath "<PATH>" -AppID "<APPID>" -Organization "<ORG>.onmicrosoft.com" -CertificatePassword (ConvertTo-SecureString -String '<PASSWORD>' -AsPlainText -Force)
$dte = (Get-Date).AddDays(-30)
Get-MailflowStatusReport -StartDate $dte -EndDate (Get-Date)
Disconnect-ExchangeOnline
I used Python to call the PowerShell script, then extracted the info I needed from the output and pushed it to Power BI.
I'm sure there is a more secure and efficient way to do this, but I was able to accomplish the task this way.
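On the Python side, calling the ps1 script can be a simple subprocess call. The script path below is a placeholder; on Windows you would pass powershell.exe instead of pwsh.

```python
import subprocess

def build_command(script_path, shell="pwsh"):
    """Assemble the command line to run a PowerShell script non-interactively."""
    return [shell, "-NoProfile", "-File", script_path]

def run_report(script_path, shell="pwsh"):
    """Run the ps1 script and return its stdout for further parsing."""
    result = subprocess.run(
        build_command(script_path, shell),
        capture_output=True, text=True, check=True,
    )
    return result.stdout

# output = run_report("./get_mailflow_report.ps1")  # parse, then push to Power BI
```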

Load testing an API in python for response time

I would like to do a load test on an API I developed. I have tried JMeter, which seems to provide only the elapsed time and the latency.
So is there a way I can do the same test (i.e. sending over 100 POST requests to the API) using Python, so I can get much better control over it?
You cannot write the response time to the results file through configuration alone, but it can be done by right-clicking the Response Times Over Time graph (which has to be installed via the plugins).
JMeter provides whatever you "tell" it to provide, check out Results File Configuration documentation chapter to see what JMeter can store and what are the default values.
Once you have the results file with the metrics you want you can either analyze it yourself or generate HTML Reporting Dashboard from it.
If you're good in Python development you might want to try Locust tool which is Python-based and the workload is defined via locustfiles - Python scripts.
More information: JMeter vs. Locust - Which One Should You Choose?
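If you go the plain-Python route instead, the core of such a script is timing each request and firing them concurrently. The sketch below uses only the standard library; the request function itself (e.g. a urllib POST against your API's URL) is left as a placeholder.

```python
import time
from concurrent.futures import ThreadPoolExecutor

def timed(fn, *args, **kwargs):
    """Run fn and return (result, elapsed_seconds)."""
    start = time.perf_counter()
    result = fn(*args, **kwargs)
    return result, time.perf_counter() - start

def load_test(request_fn, n_requests=100, concurrency=10):
    """Fire n_requests calls of request_fn concurrently, collecting response times."""
    with ThreadPoolExecutor(max_workers=concurrency) as pool:
        futures = [pool.submit(timed, request_fn) for _ in range(n_requests)]
        times = [f.result()[1] for f in futures]
    return {
        "count": len(times),
        "min": min(times),
        "max": max(times),
        "avg": sum(times) / len(times),
    }

# Example against a real API (URL is a placeholder):
# import urllib.request
# stats = load_test(lambda: urllib.request.urlopen("http://localhost:8000/api").read())
```

This gives you full control over percentiles, error counting, and so on, at the cost of re-implementing what Locust already provides.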

How should StackDriver entries be created?

I am testing out Stackdriver, and I'm curious how to set additional attributes other than the message itself. For example, I'd like to see which application or server is sending the message. Perhaps something like this:
message: "Hello"
tags: ["Application-1", "Server-XYZ"]
Is there a way to do this?
Additionally, is it suggested that a straight text message is sent, or a JSON struct?
You can create user-defined log-based metric labels, see https://cloud.google.com/logging/docs/logs-based-metrics/labels
You can send custom attributes by using "Structured Logging".
https://cloud.google.com/logging/docs/structured-logging
I'm not sure which product you are running your application on (such as Google App Engine Standard/Flexible, Google Cloud Functions, Google Compute Engine, or Google Kubernetes Engine), but in any case it's recommended to use JSON-formatted structured logs.
In case you need to configure the logging agent yourself (as on GCE), you can set up the agent accordingly.
https://cloud.google.com/logging/docs/agent/installation
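To illustrate, a structured log entry is just a JSON object whose extra fields (such as tags) become queryable alongside the message. A minimal sketch follows; the client call is shown commented since it requires credentials, and the log name "application-log" is made up.

```python
import json

def make_log_entry(message, tags, severity="INFO"):
    """Build a structured payload; extra fields become queryable in Stackdriver."""
    return {"message": message, "tags": tags, "severity": severity}

entry = make_log_entry("Hello", ["Application-1", "Server-XYZ"])
print(json.dumps(entry))

# With the google-cloud-logging client library (requires credentials):
# from google.cloud import logging
# client = logging.Client()
# client.logger("application-log").log_struct(entry)
```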

Retrieve list of log names from Google Cloud Stackdriver API with Python

I'm using Google's Stackdriver Logging Client Libraries for Python to programmatically retrieve log entries, similar to using gcloud beta logging read.
Stackdriver also provides an API to retrieve a list of log names, which is most probably what gcloud beta logging logs list uses.
How can I use that API with the Python client libraries? I couldn't find anything in the docs.
You can work with the Stackdriver Logging Client Libraries for Python. You can install them using the command pip install --upgrade google-cloud-logging, and after setting up authentication, you will be able to run a simple program such as the one I have quickly developed and share below.
Before getting into the code itself, let me share with you some interesting documentation pages that will help you develop your own code to retrieve log entries programmatically using these Client Libraries:
First, there is the general Stackdriver Logging Python Client Library documentation. You will find all kind of information here: retrieving, writing and deleting logs, exporting logs, etc.
In detail, you will be interested in how to retrieve log entries, listing them from a single or multiple projects, and also applying advanced filters.
Also have a look at how the entry class is defined, in order to access the fields that you are interested in (in my example, I only check the timestamp and the severity fields).
A set of examples that might also be useful.
Now that you have all the data you need, let's get into some easy coding:
# Import the Google Cloud Python client library
from google.cloud import logging
from google.cloud.logging import DESCENDING

# Instantiate a client
logging_client = logging.Client(project="<YOUR_PROJECT_ID>")

# Set the filter to apply to the logs
FILTER = 'resource.type:gae_app and resource.labels.module_id:default and severity>=WARNING'

# List the entries in DESCENDING order, applying the FILTER
for i, entry in enumerate(logging_client.list_entries(order_by=DESCENDING, filter_=FILTER)):  # API call
    print('{} - Severity: {}'.format(entry.timestamp, entry.severity))
    if i >= 5:
        break
This small snippet imports the Client Libraries, instantiates a client for your project (with Project ID equal to YOUR_PROJECT_ID), sets a filter that only matches log entries with a severity of WARNING or higher, and finally lists the 6 most recent log entries matching the filter.
The results of running this code are the following:
my-console:python/logs$ python example_log.py
2018-01-25 09:57:51.524603+00:00 - Severity: ERROR
2018-01-25 09:57:44.696807+00:00 - Severity: WARNING
2018-01-25 09:57:44.661957+00:00 - Severity: ERROR
2018-01-25 09:57:37.948483+00:00 - Severity: WARNING
2018-01-25 09:57:19.632910+00:00 - Severity: ERROR
2018-01-25 09:54:39.334199+00:00 - Severity: ERROR
These are exactly the entries corresponding to the logs matching the filter I established.
I hope this small piece of code (accompanied by all the documentation pages I shared) can be useful for you to retrieve logs programmatically using the Stackdriver Client Libraries for Python.
As pointed out by @otto.poellath, it might also be interesting to list all the log names available in your project. However, there is currently no Python Client Library method available for that purpose, so we will have to work with the older Python API Client Library (not the same as the Python Client Library) instead. It can be installed with the command pip install --upgrade google-api-python-client, and it makes it easier to use the REST API (which, as you shared in your question, does indeed include a method to list log names) by providing a library for Python. It is not as easy to work with as the new Client Libraries, but it implements all (or almost all) methods that are available through the REST API itself. Below I share another code snippet that lists all the log names with any written log in your project:
from apiclient.discovery import build
from oauth2client.client import GoogleCredentials
import json
credentials = GoogleCredentials.get_application_default()
service = build('logging', 'v2', credentials=credentials)
# Methods available in: https://developers.google.com/resources/api-libraries/documentation/logging/v2/python/latest/index.html
collection = service.logs()
# Build the request and execute it
request = collection.list(parent='projects/<YOUR_PROJECT_ID>')
res = request.execute()
print(json.dumps(res, sort_keys=True, indent=4))
It prints a result such as this one:
my-console:python/logs$ python list_logs.py
{
"logNames": [
"projects/<YOUR_PROJECT_ID>/logs/my-log",
"projects/<YOUR_PROJECT_ID>/logs/my-test-log",
"projects/<YOUR_PROJECT_ID>/logs/python",
"projects/<YOUR_PROJECT_ID>/logs/requests"
]
}
I know this is not exactly what you ask in the question, as it is not using Python Client Libraries specifically, but I think it might be also interesting for you, knowing that this feature is not available in the new Client Libraries, and the result is similar, as you can access the log names list programmatically using Python.
from google.cloud import logging
# Instantiate a client
logging_client = logging.Client(project="projectID")
# Set the filter to apply to the logs
FILTER = 'resource.labels.function_name="func_name" severity="DEBUG"'
for entry in logging_client.list_entries(filter_=FILTER):  # API call
    print(entry.severity, entry.timestamp, entry.payload)
This example is with respect to Cloud Functions!

How to get SWF activity information for a given workflow execution using boto

When looking at the SWF console on Amazon AWS, you can view closed workflow executions' histories. In the history, you can see all the activities that were called and their inputs and outputs.
I haven't been able to figure out how to access this activity information using boto 2. I am able to get the history of a workflow, but it resembles the "Events" tab of the SWF console and not the "Activities" tab. For example, it doesn't contain the output of any activities.
Here is the code I've used to get to where I am:
domain = boto.swf.layer2.Domain(name=swf_domain,
                                aws_access_key_id=<id>,
                                aws_secret_access_key=<secret>)

close_oldest_date = int((datetime.utcnow() -
                         timedelta(days=LOOKBACK_DAYS)).timestamp())

execution = domain.executions(closed=True,
                              close_status='COMPLETED',
                              maximum_page_size=1,
                              close_oldest_date=close_oldest_date)[0]

print(execution.history())
Is there a way to access the completed activities' inputs, outputs, and other information using boto 2? Possibly using boto 3?
The history contains complete information about activity execution:
ActivityTaskScheduled contains the input of an activity.
ActivityTaskStarted contains the identity of a worker (usually host:pid).
ActivityTaskCompleted contains the activity output.
ActivityTaskFailed contains failure information.
Consult the API Reference to get full information about available events and their meaning.
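With boto3, get_workflow_execution_history returns this same event list, and activity inputs/outputs can be paired up by matching Completed events back to their Scheduled events. A sketch follows; the domain, workflowId, and runId are placeholders, and the pairing helper is my own illustration, not part of the API.

```python
def activity_results(events):
    """Map each completed activity to its result by pairing Scheduled/Completed events."""
    scheduled = {}   # eventId -> activity type name
    results = {}
    for ev in events:
        if ev["eventType"] == "ActivityTaskScheduled":
            attrs = ev["activityTaskScheduledEventAttributes"]
            scheduled[ev["eventId"]] = attrs["activityType"]["name"]
        elif ev["eventType"] == "ActivityTaskCompleted":
            attrs = ev["activityTaskCompletedEventAttributes"]
            name = scheduled.get(attrs["scheduledEventId"], "unknown")
            results[name] = attrs.get("result")
    return results

# Fetching the events with boto3 (attribute names above follow the SWF API):
# import boto3
# client = boto3.client("swf")
# history = client.get_workflow_execution_history(
#     domain="my-domain",
#     execution={"workflowId": "wf-id", "runId": "run-id"},
# )
# print(activity_results(history["events"]))
```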
