Automate bugsense proguard mapping file upload using (apitoken/apikey) - python

Hi, I need to automate the Bugsense ProGuard mapping file upload using Python and the API (apitoken/apikey). I was trying the code from github.com/PanosJee/5004886 but nothing gets uploaded. I am able to curl the URLs specified in the Python code (.../errors.json and .../analytics.json) using my apikey and apitoken, but not any other URLs, which ask me to log in.

You could use curl as in the examples below.
APPTOKEN - the token provided for the application
ACCESSTOKEN - the Bugsense access token, found in Account Info -> Integration -> API TOKEN
Bash script examples below:
iOS
export DSYMFILEPATH=file.dSYM
export APPTOKEN="fcccccca"
export ACCESSTOKEN="aaaaa4075aaaa69fbaaaa61"
curl -F "file=#$DSYMFILEPATH" --header "X-Bugsense-apikey: $APPTOKEN" --header "X-BugSense-auth-token: $ACCESSTOKEN" https://symbolicator.splkmobile.com/upload/dsym -i
Android
export PROGUARDMAPPINGFILE=mapping.txt
export APPTOKEN="acccccca"
export ACCESSTOKEN="aaaaa4075aaaa69fbaaaa61"
export APPVERSION="1.1"
curl -F "file=#$PROGUARDMAPPINGFILE" --header "X-Bugsense-apikey: $APPTOKEN" --header "X-BugSense-auth-token: $ACCESSTOKEN" --header "X-Bugsense-appver: $APPVERSION" https://symbolicator.splkmobile.com/upload/mapping -i
For more details: https://github.com/bugsense/docs/blob/master/api/read.md
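If you'd rather do the upload from Python, as the question asks, here is a minimal sketch using the requests library against the same endpoint and headers as the Android curl example above (this just mirrors the curl call and is not an official client, so verify it against your account):

import requests

APP_TOKEN = "acccccca"                    # token provided for the application
ACCESS_TOKEN = "aaaaa4075aaaa69fbaaaa61"  # Account Info -> Integration -> API TOKEN
APP_VERSION = "1.1"

# Multipart upload of the ProGuard mapping file, equivalent to curl -F "file=@..."
with open("mapping.txt", "rb") as f:
    response = requests.post(
        "https://symbolicator.splkmobile.com/upload/mapping",
        files={"file": f},
        headers={
            "X-Bugsense-apikey": APP_TOKEN,
            "X-BugSense-auth-token": ACCESS_TOKEN,
            "X-Bugsense-appver": APP_VERSION,
        },
    )
print(response.status_code, response.text)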

Related

notebook to execute Databricks job

Is there an API or another way to programmatically run a Databricks job? Ideally, we would like to call a Databricks job from a notebook. The following just gives the currently running job ID, but that's not very useful:
dbutils.notebook.entry_point.getDbutils().notebook().getContext().currentRunId().toString()
To run a Databricks job, you can use the Jobs API. I have a Databricks job called for_repro, which I ran from a Databricks notebook using the two approaches below.
Using requests library:
You can create an access token by navigating to Settings -> User settings. Under the Access tokens tab, click generate token.
Use the above generated token along with the following code.
import requests

my_json = {"job_id": <your_job-id>}
auth = {"Authorization": "Bearer <your_access-token>"}
response = requests.post('https://<databricks-instance>/api/2.0/jobs/run-now',
                         json=my_json, headers=auth).json()
print(response)
The <databricks-instance> value from the above code can be extracted from your workspace URL.
Using %sh magic command script:
You can also use the magic command %sh in a Python notebook cell to run a Databricks job.
%sh
curl --request POST --header "Authorization: Bearer <access_token>" \
https://<databricks-instance>/api/2.0/jobs/run-now \
--data '{"job_id": <your job id>}'
Refer to this Microsoft documentation to learn about the other operations that can be performed with the Jobs API.

python bot server does not receive bash messages - missing concept

I have created a Telegram bot using BotFather, https://t.me/botfather
On a Raspberry Pi server, I have some Python code answering messages written to the bot.
It uses "pyTelegramBotAPI" and is based on this code:
https://www.flopy.es/crea-un-bot-de-telegram-para-tu-raspberry-ordenale-cosas-y-habla-con-ella-a-distancia/
Basically it does "bot.polling()".
It works perfectly when I write messages to the bot using the Telegram smartphone app.
The problem is when I write messages to the bot from another computer,
using "bash" + "curl" + "POST".
The server does not receive the bash message, so it does not answer it.
Can someone shed some light on any concept I am missing?
P.S. The bash + curl code is this one:
#!/bin/bash
TOKEN="1436067683:ABGcHbGWS3ek1UdKvyRWC7Xtuv1DuyvT6A4"
ID="304688070"
MENSAJE="La Raspberry te saluda."
URL="https://api.telegram.org/bot${TOKEN}/sendMessage"
curl -s -X POST ${URL} -d chat_id=${ID} -d text="${MENSAJE}"
P.S. #2: Now I use JSON and have reached an interesting situation:
curl -v -X POST -H 'Content-Type: application/json' -d '{"chat_id":"${ID}", "text":"${MENSAJE}"}' ${URL_sndmsg}
... produces ...
{"ok":false,"error_code":400,"description":"Bad Request: chat not found"}
... but I did not change ID nor TOKEN ... and the old code still finds the chat ...
Strange.
The sendMessage endpoint of the Bot API is a POST endpoint that accepts both form-encoded and JSON bodies, which is why your original script works. The "Bad Request: chat not found" in your JSON attempt is a shell-quoting issue: inside single quotes, bash does not expand ${ID}, so Telegram literally receives the string "${ID}" as the chat ID.
With the quoting fixed, the request from the question would look like this:
#!/bin/bash
TOKEN="1436067683:ABGcHbGWS3ek1UdKvyRWC7Xtuv1DuyvT6A4"
ID="304688070"
MENSAJE="La Raspberry te saluda."
URL="https://api.telegram.org/bot${TOKEN}/sendMessage"
curl -s -X POST -H 'Content-Type: application/json' -d "{\"chat_id\": \"${ID}\", \"text\": \"${MENSAJE}\"}" ${URL}
I'd also recommend that you don't use the -s option while making tests; that way you can see curl's full output, and you could have figured it out.
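For completeness, the same call from Python with the requests library (a sketch; json= serializes the body and sets the Content-Type header itself, which sidesteps the shell-quoting issue entirely):

import requests

TOKEN = "1436067683:ABGcHbGWS3ek1UdKvyRWC7Xtuv1DuyvT6A4"
CHAT_ID = "304688070"

# json= sends a JSON body with Content-Type: application/json
response = requests.post(
    "https://api.telegram.org/bot%s/sendMessage" % TOKEN,
    json={"chat_id": CHAT_ID, "text": "La Raspberry te saluda."},
)
print(response.json())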

Run a curl tlsv1.2 http get request in python?

I have the following command that I run using curl on Linux.
curl --tlsv1.2 --cert ~/aws-iot/certs/certificate.pem.crt --key ~/aws-iot/certs/private.pem.key --cacert ~/aws-iot/certs/root-CA.crt -X GET https://data.iot.us-east-1.amazonaws.com:8443/things/pi_3/shadow
This command returns the JSON text that I want. However, I want to be able to run the above command in Python 3, and I do not know which library to use in order to get the same JSON response.
P.S. I replace "data" with my account number in AWS to get the JSON.
After playing around with it on my own, I was able to do it successfully in Python using the requests library.
import requests

s = requests.Session()
# requests takes the client certificate and key as a 2-tuple;
# the CA bundle is passed separately via verify=
r = s.get('https://data.iot.us-east-1.amazonaws.com:8443/things/pi_3/shadow',
          cert=('/home/pi/aws-iot/certs/certificate.pem.crt',
                '/home/pi/aws-iot/certs/private.pem.key'),
          verify='/home/pi/aws-iot/certs/root-CA.crt')
print(r.text)
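The --tlsv1.2 flag has no direct keyword argument in requests, which normally negotiates the highest TLS version on its own. If you do need to enforce a TLS 1.2 minimum, one common pattern (Python 3.7+) is to mount an adapter with a custom SSL context; a sketch:

import ssl
import requests
from requests.adapters import HTTPAdapter

class TLS12Adapter(HTTPAdapter):
    """Adapter that refuses any protocol older than TLS 1.2."""
    def init_poolmanager(self, *args, **kwargs):
        ctx = ssl.create_default_context()
        ctx.minimum_version = ssl.TLSVersion.TLSv1_2
        kwargs["ssl_context"] = ctx
        return super().init_poolmanager(*args, **kwargs)

s = requests.Session()
s.mount("https://", TLS12Adapter())
# the s.get(...) call from above then goes through the TLS 1.2+ context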

How to automatically download a file, behind a login page, riddled with javascript?

I want to periodically download a log file from a device. By inspecting Chrome -> Inspect Element -> Network while manually downloading the file, I found the following curl-type command to display the log file:
curl 'http://10.0.0.1:1333/cgi/WebCGI?7043' -H 'Cookie: loginname=admin; password=e5T%7B%5CYnlcIX%7B; OsVer=2.17.0.32' -H 'Connection: keep-alive' --data '' --compressed
Is this command reliable? The username in the Cookie header is the same as the login username, but the password is not (let's say the true password is p#$$w0rd). I guess e5T%7B%5CYnlcIX%7B is only a session cookie password and will expire at some stage. Am I correct?
Question 1
How can I achieve this reliably with the username admin and the password p#$$w0rd, using a command-line program or a Python library? This will run on a server and I do not want it to break periodically.
Question 2
What are the usual steps towards achieving: "I want to script a file download, which is initiated through a JavaScript button, which is behind a login page"?
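For what it's worth, the usual approach to Question 2 is to replicate, with a session, the exact requests the JavaScript makes (visible in the Network tab): first the login request, then the download, so fresh cookies are obtained on every run instead of hard-coding them. A sketch with the requests library - note the login path and form field names here are hypothetical and must be copied from the device's real login request:

import requests

session = requests.Session()

# Hypothetical login endpoint and field names - replace them with the ones
# Chrome's Network tab shows for the device's actual login request.
session.post('http://10.0.0.1:1333/cgi/WebCGI?login',
             data={'loginname': 'admin', 'password': 'p#$$w0rd'})

# The session now carries the cookies, so the download URL from the question works.
r = session.post('http://10.0.0.1:1333/cgi/WebCGI?7043', data='')
with open('device.log', 'wb') as f:
    f.write(r.content)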

How to auth into BigQuery on Google Compute Engine?

What's the easiest way to authenticate into Google BigQuery when on a Google Compute Engine instance?
First of all, make sure that your instance has the scope to access BigQuery - you can decide this only at creation time.
In a bash script, get an OAuth token by calling:
ACCESSTOKEN=$(curl -s "http://metadata/computeMetadata/v1/instance/service-accounts/default/token" -H "Metadata-Flavor: Google" | jq -r ".access_token")
echo "retrieved access token $ACCESSTOKEN"
Now let's say you want a list of the datasets in a project:
CURL_URL="https://www.googleapis.com/bigquery/v2/projects/YOURPROJECTID/datasets"
CURL_OPTIONS="-s --header 'Content-Type: application/json' --header 'Authorization: OAuth $ACCESSTOKEN' --header 'x-goog-project-id:YOURPROJECTID' --header 'x-goog-api-version:1'"
CURL_COMMAND="curl --request GET $CURL_URL $CURL_OPTIONS"
CURL_RESPONSE=$(eval $CURL_COMMAND)
The response, in JSON format, can be found in the variable CURL_RESPONSE.
PS: I realize now that this question is tagged as Python, but the same principles apply.
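For example, the same two steps with the requests library (a sketch against the same metadata and REST endpoints as the bash above):

import requests

# 1. get a token from the metadata server (requires the BigQuery scope on the instance)
token = requests.get(
    'http://metadata/computeMetadata/v1/instance/service-accounts/default/token',
    headers={'Metadata-Flavor': 'Google'}).json()['access_token']

# 2. list datasets via the BigQuery REST API
datasets = requests.get(
    'https://www.googleapis.com/bigquery/v2/projects/YOURPROJECTID/datasets',
    headers={'Authorization': 'Bearer %s' % token}).json()
print(datasets)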
In Python:
AppAssertionCredentials is a Python class that allows a Compute Engine instance to identify itself to Google and other OAuth 2.0 servers, without requiring a flow.
https://developers.google.com/api-client-library/python/
The project id can be read from the metadata server, so it doesn't need to be set as a variable.
https://cloud.google.com/compute/docs/metadata
The following code gets a token using AppAssertionCredentials, the project id from the metadata server, and instantiates a BigqueryClient with this data:
import bigquery_client
import urllib2
from oauth2client import gce

def GetMetadata(path):
    # urllib2.urlopen() has no headers argument; build a Request instead
    req = urllib2.Request(
        'http://metadata/computeMetadata/v1/%s' % path,
        headers={'Metadata-Flavor': 'Google'})
    return urllib2.urlopen(req).read()

credentials = gce.AppAssertionCredentials(
    scope='https://www.googleapis.com/auth/bigquery')

client = bigquery_client.BigqueryClient(
    credentials=credentials,
    api='https://www.googleapis.com',
    api_version='v2',
    project_id=GetMetadata('project/project-id'))
For this to work, you need to give the GCE instance access to the BigQuery API when creating it:
gcloud compute instances create <your_instance_name> --scopes storage-ro,bigquery
