I'm trying to access the quota list/read API from a Python runbook in an Azure Automation account - but how do I authenticate the call?
From my reading, I need to pass a bearer auth token in the header, but the tutorials I find are all done via Postman, which doesn't make sense since this is an internally run script.
API: https://learn.microsoft.com/en-us/rest/api/reserved-vm-instances/quota/list?tabs=HTTP
I found this in the documentation, but it looks like it is for an externally run Python script? https://learn.microsoft.com/en-us/azure/automation/learn/automation-tutorial-runbook-textual-python-3
Any help here would be much appreciated.
Here is a basic example of passing headers in a Python API call. You can add the headers you need to the headers variable to make a successful API call.
Here is the documentation on the Python requests module.
import requests
# Make an API call and store the response
url = 'https://api.github.com/search/repositories?q=language:python&sort=stars'
headers = {'Accept': 'application/vnd.github.v3+json'}
r = requests.get(url, headers=headers)
print(f'Status Code: {r.status_code}')
# Store the response as a variable
response_dict = r.json()
print(f"Total repositories: {response_dict['total_count']}")
I am trying to update an existing page in Atlassian Confluence through the Python requests module, using the requests.put() method to send the HTTP request. The page already has the title "Update Status", and I am trying to enter one line as the content of the page. The page id and the other information in the JSON payload were copied directly from the rest/api/content... output of the page I am trying to update.
Note: I am already able to access information from the page through requests.get, but I am not able to post information to it.
Method used to access information from the webpage which works:
response = requests.get('https://confluence.ai.com/rest/api/content/525424594?expand=body.storage',
auth=HTTPBasicAuth('svc-Automation#ai.com', 'AIengineering1#ai')).json()
Method used to update that page, which does not work; the response is an HTTP 415 error.
import requests
from requests.auth import HTTPBasicAuth
import json
url = "https://confluence.ai.com/rest/api/content/525424594"
payload = {"id":"525424594","type":"page", "title":"new page-Update Status","space":{"key":"TST"},"body":{"storage":{"value": "<p>This is the updated text for the new page</p>","representation":"storage"}}, "version":{"number":2}}
result = requests.put(url, data=payload, auth=HTTPBasicAuth('svc-Automation#ai.com', 'AIengineering1#ai'))
print (result)
I am guessing that the payload is not in the right format. Any suggestions?
Note: The link, username and password shown here are all fictional.
Try sending the data with the json named argument instead of data, so that the requests module sets the Content-Type header to application/json.
result = requests.put(url, json=payload, auth=HTTPBasicAuth('svc-Automation#ai.com', 'AIengineering1#ai'))
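Equivalently, if you want to keep the data argument, you can serialize the payload yourself and set the Content-Type header explicitly; a small sketch that should behave the same as json=:
import json
# Serialize the payload manually and declare the content type ourselves.
headers = {"Content-Type": "application/json"}
result = requests.put(url, data=json.dumps(payload), headers=headers,
                      auth=HTTPBasicAuth('svc-Automation#ai.com', 'AIengineering1#ai'))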
I've been working on a project to reverse-engineer Twitter's app to scrape public posts from Twitter using an unofficial API, with Python. (I want to create an "alternative" app, which is simply a localhost app that can search for a user and get their posts.)
I've been searching and reading everything related to REST, AJAX, and the Python modules requests, requests-html, BeautifulSoup, and more.
I can see when looking at Twitter in the devtools (for example on Marvel's profile page) that the only relevant requests being sent (by POST and GET) are the following: client_event.json and UserTweets?variables=... .
I worked out that these are the relevant messages by clearing the network tab and recording only while I scroll down and load new tweets; these were the only messages that came up which aren't random videos (I cleaned the search using -video -init -csp_report -config -ondemand -like -pageview -recommendations -prefetch -jot -key_live_kn -svg -jpg -jpeg -png -ico -analytics -loader -sharedCore -Hebrew).
I am new to this field, so I am probably doing something wrong. I can see in UserTweets the response I'm looking for - a beautiful JSON with all the data I need - but no matter how much I try, I am unable to access it.
I tried different modules and different headers, and I get nothing. I DON'T want to use Selenium since it's tiresome, and I know where the data I need is stored.
I've been trying to send a GET request to:
https://twitter.com/i/api/graphql/vamMfA41UoKXUmppa9PhSw/UserTweets?variables=%7B%22userId%22%3A%2215687962%22%2C%22count%22%3A20%2C%22cursor%22%3A%22HBaIgLLN%2BKGEryYAAA%3D%3D%22%2C%22withHighlightedLabel%22%3Atrue%2C%22withTweetQuoteCount%22%3Atrue%2C%22includePromotedContent%22%3Atrue%2C%22withTweetResult%22%3Afalse%2C%22withUserResults%22%3Afalse%2C%22withVoice%22%3Afalse%2C%22withNonLegacyCard%22%3Atrue%7D
by doing:
from requests_html import HTMLSession
from bs4 import BeautifulSoup
# create a session to send the request with
session = HTMLSession()
response = session.get('https://twitter.com/i/api/graphql/vamMfA41UoKXUmppa9PhSw/UserTweets?variables=%7B%22userId%22%3A%2215687962%22%2C%22count%22%3A20%2C%22cursor%22%3A%22HBaIgLLN%2BKGEryYAAA%3D%3D%22%2C%22withHighlightedLabel%22%3Atrue%2C%22withTweetQuoteCount%22%3Atrue%2C%22includePromotedContent%22%3Atrue%2C%22withTweetResult%22%3Afalse%2C%22withUserResults%22%3Afalse%2C%22withVoice%22%3Afalse%2C%22withNonLegacyCard%22%3Atrue%7D')
response.html.render()
s = BeautifulSoup(response.html.html, 'lxml')
but I get back HTML that either says Chromium is unsupported, or is just the static page without the JavaScript updating the DOM.
All help appreciated.
Thank you
P.S. I've posted the same question on reverseengineering.stackexchange, just to be safe (Overflow has more appropriate tags :-)).
Before you dive deep into the actual code, I would first build the correct request to Twitter. I would use a 3rd-party tool focused on REST and APIs, such as Postman, to build and test the required request - and only then write the actual code.
From your question it seems that you'll be using an open API of Twitter, which means you'll only need to send an x-guest-token and basic Bearer authorization in your request headers.
The Bearer is static - you can just browse to Twitter and copy/paste it from the dev tools network monitor.
To get the x-guest-token you'll need something dynamic, because it has an expiration. What I would suggest is sending a curl request to Twitter, parsing the token from the response, and putting it in your header before sending the request. You can see something very similar in: Downloading twitter video using python (without using twitter api).
After you have both of the above, build the required GET request in Postman and test that you get back the correct response. Only after you have everything working in Postman should you write the same in Python, or any other language.**
**You can use Postman snippets which automatically generates the code needed in many programming languages.
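For illustration, here is a rough Python sketch of that token flow, relying on the guest activation endpoint described in the answer further down; the Bearer value is a placeholder you would copy from the dev tools network monitor.
import requests
# Placeholder: copy the static Bearer string from the dev tools network monitor.
BEARER = "Bearer AAAA..."
# Activate a guest token; it expires, so fetch a fresh one before each session.
resp = requests.post(
    "https://api.twitter.com/1.1/guest/activate.json",
    headers={"authorization": BEARER},
)
guest_token = resp.json()["guest_token"]
# Attach both headers to the GET request you built and tested in Postman.
headers = {"authorization": BEARER, "x-guest-token": guest_token}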
#TripleS, here is an example of how one may extract the JSON data from window.__INITIAL_STATE__ and write it to a text file.
import requests
import re
import json

# get the page
result = requests.get('https://twitter.com/ThePSF')

# Extract the JSON from "window.__INITIAL_STATE__={...};"
match = re.search(r"window\.__INITIAL_STATE__\s?=\s?(\{.*?\});", result.text)
json_string = match.group(1)

# convert the text string to structured JSON data
twitter_json = json.loads(json_string)

# Save the structured JSON data to a text file that may help
# you orient yourself and pick out the parts you
# are interested in (if there are any)
with open('twitter_json_data.txt', 'w') as outfile:
    outfile.write(json.dumps(twitter_json, indent=4, sort_keys=True))
I've just tried the same, but with the requests module rather than requests_html. I could get all the site contents, but I would not call it "beautiful".
Also, I am now blocked from accessing the site without logging in, and I suspect I will be blocked again after a few more tries of this script; I've only tried it twice. Use the official Twitter API instead.
Here is my small example.
import requests
import bs4

def example():
    result = requests.get("https://twitter.com/childrightscnct")
    soup = bs4.BeautifulSoup(result.text, "lxml")
    print(soup)

if __name__ == '__main__':
    example()
To select an element with bs4, use
some_text = soup.select_one('locator').getText()
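For example (the selector here is purely illustrative; Twitter's markup may differ):
# grab the page title from the parsed soup
title_text = soup.select_one('title').getText()
print(title_text)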
I found one tool for scraping Twitter that has quite a lot of stars on GitHub, https://github.com/twintproject/twint, though I did not try it myself and hope it is legal.
What you're missing are the bearer and guest tokens needed to make your request. If I just hit your endpoint with curl and no headers, I get no response. However, if I add headers for the bearer token and guest token, then I get the JSON you're looking for:
curl 'https://twitter.com/i/api/graphql/vamMfA41UoKXUmppa9PhSw/UserTweets?variables=%7B%22userId%22%3A%2215687962%22%2C%22count%22%3A20%2C%22cursor%22%3A%22HBaIgLLN%2BKGEryYAAA%3D%3D%22%2C%22withHighlightedLabel%22%3Atrue%2C%22withTweetQuoteCount%22%3Atrue%2C%22includePromotedContent%22%3Atrue%2C%22withTweetResult%22%3Afalse%2C%22withUserResults%22%3Afalse%2C%22withVoice%22%3Afalse%2C%22withNonLegacyCard%22%3Atrue%7D' -H 'authorization: Bearer AAAAAAAAAAAAAAAAAAAAANRILgAAAAAAnNwIzUejRCOuH5E6I8xnZz4puTs%3D1Zv7ttfk8LF81IUq16cHjhLTvJu4FA33AGWWjCpTnA' -H 'x-guest-token: 1452696114205847552'
You can get the bearer token (which may not expire that often) and the guest token (which does expire, I think) like this:
The HTML of the twitter page you go to links a file called main.some random numbers.js. Within that JavaScript file is the bearer token. You can recognize it because it is a long string starting with lots of A's.
Take the bearer token and call https://api.twitter.com/1.1/guest/activate.json, using the bearer token as an authorization header:
curl 'https://api.twitter.com/1.1/guest/activate.json' -X POST -H 'authorization: Bearer AAAAAAAAAAAAAAAAAAAAANRILgAAAAAAnNwIzUejRCOuH5E6I8xnZz4puTs%3D1Zv7ttfk8LF81IUq16cHjhLTvJu4FA33AGWWjCpTnA'
In Python this looks like:
import requests
import json
url = "https://twitter.com/i/api/graphql/vamMfA41UoKXUmppa9PhSw/UserTweets?variables=%7B%22userId%22%3A%2215687962%22%2C%22count%22%3A20%2C%22cursor%22%3A%22HBaIgLLN%2BKGEryYAAA%3D%3D%22%2C%22withHighlightedLabel%22%3Atrue%2C%22withTweetQuoteCount%22%3Atrue%2C%22includePromotedContent%22%3Atrue%2C%22withTweetResult%22%3Afalse%2C%22withUserResults%22%3Afalse%2C%22withVoice%22%3Afalse%2C%22withNonLegacyCard%22%3Atrue%7D"
headers = {"authorization": "Bearer AAAAAAAAAAAAAAAAAAAAANRILgAAAAAAnNwIzUejRCOuH5E6I8xnZz4puTs%3D1Zv7ttfk8LF81IUq16cHjhLTvJu4FA33AGWWjCpTnA", "x-guest-token": "1452696114205847552"}
resp = requests.get(url, headers=headers)
j = json.loads(resp.text)
And now that variable, j, holds your beautiful JSON. One warning: sometimes the response can be so big that it doesn't seem to fit into a single response. If this happens, you'll notice resp.text isn't valid JSON, but just some portion of a big blob of JSON. To fix this, adapt the request to use stream=True and stream out the whole response before you try to parse it as JSON.
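A sketch of that stream=True workaround, reusing the url and headers from above:
import json
import requests

resp = requests.get(url, headers=headers, stream=True)
# Collect the body in chunks so the whole response is read before parsing.
body = b"".join(chunk for chunk in resp.iter_content(chunk_size=8192))
j = json.loads(body)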
I am stuck on this API thing. I want to print the incoming JSON (channel points specifically), but it just prints the whole page in HTML format. Here is my code:
import requests
import json

client_id = secret
oauth_token = secret
my_uri = 'https://localhost'
header = {"Authorization": f"Bearer {oauth_token}"}
# Braces in the claims JSON are doubled so they survive the f-string.
url = f'https://id.twitch.tv/oauth2/authorize?client_id={client_id}&redirect_uri={my_uri}&response_type=id_token&scope=channel:read:redemptions+openid&state=c3ab8aa609ea11e793ae92361f002671&claims={{"id_token":{{"email_verified":null}}}}'
response = requests.get(url, headers=header)
print(response.text)
My hypothesis is that either the URL or the header is the problem. The Twitch API examples are written for C# or JS originally, and I don't know how to translate that information to Python.
I would also like to know how to do the "PING" and "PONG" thing that Twitch writes about in its API docs.
response.text shows the HTML text.
response.json() shows the JSON.
Tell me if it works now.
response.json() will return the data in JSON format.
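To make the difference concrete, a minimal illustration using the url and header from the question (resp.json() only succeeds if the body actually is JSON):
resp = requests.get(url, headers=header)
print(resp.text)    # the raw body as a string (HTML, in your case)
data = resp.json()  # parses the body as JSON; raises an error if the body is not JSON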
I've never really attempted to write my own code that calls an API. I have some Python code that I created after discovering the Python Requests library.
However, I can't get past this "authentication failed" error message.
The API key I have acquired is from https://fortnitetracker.com/site-api
The code I am using is as follows:
import requests
url = 'https://api.fortnitetracker.com/v1/profile/pc/ninja'
headers = {"TRN-Api-Key": "MY_KEY"}
r = requests.get(url, headers=headers)
After asking for the r.status_code I get a 403 Forbidden. When I ask for r.text I get
u'{"message":"Invalid authentication credentials"}\n'
On the API page, all they say is to pass the API key in the request headers using a GET method.
I even tried passing the credentials I used to register on the site using
r = requests.get(url, headers=headers,auth=('User', 'Pass'))
Still got the same invalid authentication credentials error.
Is what I am doing correct? What am I missing?
Thanks for your help in advance.
Depending on the API, you may need to send a POST request instead:
r = requests.post(url, headers=headers)
Are you sure you don't have to supply the API key something like this?
{'Auth': 'Token my_key'}
That's how most APIs do it, but I can't find any reference to their API docs anywhere.