I am using the requests library to query an F5 BIG-IP. I get the list of virtual servers, and I need to loop over the response to extract each VS name (VS1, VS2, VS3) to use in another request, like
https://localhost/mgmt/tm/ltm/virtual/VS1
What code will get each name value from the response? I tried this but could not get it to work.
url = "https://bigipname.domain.local/mgmt/tm/ltm/virtual"

querystring = {"$select": "name"}

headers = {
    'Content-Type': "application/json",
    'Accept': "*/*",
    'Cache-Control': "no-cache",
    'Host': "bigipname.domain.local",
    'Accept-Encoding': "gzip, deflate",
    'Connection': "keep-alive"
}

response = requests.request("GET", url, headers=headers, params=querystring, verify=False)
I get the response in the following JSON format:
{'kind': 'tm:ltm:virtual:virtualcollectionstate', 'selfLink': 'https://localhost/mgmt/tm/ltm/virtual?$select=name&ver=13.1.1.2', 'items': [{'name': 'VS1'}, {'name': 'VS2'}, {'name': 'VS3'}]}
Any help is appreciated. Thanks
You can use a list comprehension to extract the names from "items". Note that response is a Response object, so parse the JSON body first:

new_list = [item["name"] for item in response.json()["items"]]
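Using the JSON shown in the question as literal data (so no live BIG-IP is needed), the extraction and the follow-up URLs look like this:

```python
# JSON body copied verbatim from the question
data = {
    'kind': 'tm:ltm:virtual:virtualcollectionstate',
    'selfLink': 'https://localhost/mgmt/tm/ltm/virtual?$select=name&ver=13.1.1.2',
    'items': [{'name': 'VS1'}, {'name': 'VS2'}, {'name': 'VS3'}],
}

# Extract every virtual-server name from the "items" list
names = [item["name"] for item in data["items"]]
print(names)  # ['VS1', 'VS2', 'VS3']

# Build the per-VS URLs for the follow-up requests
urls = ["https://localhost/mgmt/tm/ltm/virtual/" + name for name in names]
```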
I have set up Postman to use an API key. I have added it to the Authorization section and I see it in the headers of my API call, but when I send the request to my Flask app the API key header is not there. I am printing out all the headers and "api_key" is not among them. What am I missing?
all_headers = dict(request.headers)
print(f"api key is {all_headers}")
api key is {'Content-Type': 'application/json', 'User-Agent':
'PostmanRuntime/7.29.2', 'Accept': '/', 'Postman-Token':
'908a9c7f-ca49-481e-9893-1e3a780887bc', 'Host': '127.0.0.1',
'Accept-Encoding': 'gzip, deflate, br', 'Connection': 'keep-alive'}
I have the following Python function which sends a POST request using the requests library:
def http_post(self, url: str, headers: dict, data: str, auth: AuthBase):
    token = self._xsuaa.get_token(self._service)
    headers.update({'Proxy-Authorization': f"Bearer {token}"})
    res = requests.post(
        url,
        headers=headers,
        data=data,
        proxies={'http': self._proxy},
        auth=auth,
        verify=False,
        timeout=100,
        allow_redirects=True)
When printing the headers dict, it looks like this:
{
    'Content-Type': 'multipart/mixed;boundary=batch_4724f345-bb46-437d-a970-197a7b82bf41',
    'Content-Transfer-Encoding': 'binary',
    'sap-cancel-on-close': 'true',
    'sap-contextid-accept': 'header',
    'Accept': 'application/json',
    'Accept-Language': 'de-DE',
    'DataServiceVersion': '2.0',
    'MaxDataServiceVersion': '2.0',
    'Proxy-Authorization': 'Bearer <token>'
}
However, when I take a look at res.request.headers, I get the following:
{
    'User-Agent': 'python-requests/2.26.0',
    'Accept-Encoding': 'gzip, deflate',
    'Accept': 'application/json',
    'Connection': 'keep-alive',
    'Content-Type': 'multipart/mixed; boundary=batch_4724f345-bb46-437d-a970-197a7b82bf41',
    'Content-Transfer-Encoding': 'binary',
    'sap-cancel-on-close': 'true',
    'sap-contextid-accept': 'header',
    'Accept-Language': 'de-DE',
    'DataServiceVersion': '2.0',
    'MaxDataServiceVersion': '2.0',
    'Content-Length': '659',
    'Authorization': 'Basic <auth>'
}
For some reason, the Proxy-Authorization header field is gone, and accordingly I get a 407 error in the response. I have read in the documentation that proxy credentials provided in the URL overwrite Proxy-Authorization headers, but my URL contains none. I also tried removing the auth=auth line from the request, but the problem still persisted. Can someone point me in the right direction as to why this field is seemingly ignored or overwritten by requests?
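For reference, the documented URL mechanism mentioned above works by embedding credentials in the proxy URL (e.g. http://user:secret@proxy:8080), from which requests/urllib3 generate a Basic Proxy-Authorization header. A stdlib-only sketch of what that generated header looks like (user/secret are placeholder credentials, and note this yields a Basic credential, not the Bearer token used in the function above, so it only illustrates the mechanism):

```python
import base64

# requests/urllib3 build a Basic Proxy-Authorization value from the
# user:password pair embedded in a proxy URL like http://user:secret@proxy:8080
creds = base64.b64encode(b"user:secret").decode("ascii")
proxy_auth = "Basic " + creds
print(proxy_auth)  # Basic dXNlcjpzZWNyZXQ=
```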
I'm trying to get the data that gets loaded into the chart on this page when hitting the max (time range) button. The data is loaded with an AJAX request.
I inspected the request and tried to reproduce it with the requests Python library, but I'm only able to retrieve the 1-year data from this chart.
Here is the code I used:
r = requests.get("https://www.justetf.com/en/etf-profile.html?0-4.0-tabs-panel-chart-dates-ptl_max&groupField=none&sortField=ter&sortOrder=asc&from=search&isin=IE00B3VWN518&tab=chart&_=1576272593482")
r.content
I also tried to use Session:
from requests import Session

session = Session()
session.head('http://justetf.com')
response = session.get(
    url='https://www.justetf.com/en/etf-profile.html?0-4.0-tabs-panel-chart-dates-ptl_max&groupField=none&sortField=ter&sortOrder=asc&from=search&isin=IE00B3VWN518&tab=chart&_=1575929227619',
    data={
        "0-4.0-tabs-panel-chart-dates-ptl_max": "",
        "groupField": "none",
        "sortField": "ter",
        "sortOrder": "asc",
        "from": "search",
        "isin": "IE00B3VWN518",
        "tab": "chart",
        "_": "1575929227619"
    },
    headers={
        'Host': 'www.justetf.com',
        'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:70.0) Gecko/20100101 Firefox/70.0',
        'Accept': 'application/xml, text/xml, */*; q=0.01',
        'Accept-Language': 'en-US,en;q=0.5',
        'Accept-Encoding': 'gzip, deflate, br',
        'Wicket-Ajax': 'true',
        'Wicket-Ajax-BaseURL': 'en/etf-profile.html?0&groupField=none&sortField=ter&sortOrder=asc&from=search&isin=IE00B3VWN518&tab=chart',
        'Wicket-FocusedElementId': 'id28',
        'X-Requested-With': 'XMLHttpRequest',
        'Connection': 'keep-alive',
        'Referer': 'https://www.justetf.com/en/etf-profile.html?groupField=none&sortField=ter&sortOrder=asc&from=search&isin=IE00B3VWN518&tab=chart',
        'Cookie': 'locale_=en; _ga=GA1.2.1297456970.1574289342; cookieconsent_status=dismiss; AWSALB=QMWHJxgfcpLXJLqX0i0FgBuLn+mpVHVeLRQ6upH338LdggA4/thXHT2vVWQX7pdBd1r486usZXgpAF8RpDsGJNtf6ei8e5NHTsg0hzVHR9C+Fj89AWuQ7ue+fzV2; JSESSIONID=ABB2A35B91751CA9B2D293F5A04505BE; _gid=GA1.2.1029531470.1575928527; _gat=1',
        'TE': 'Trailer'
    },
    cookies={"_ga": "GA1.2.1297456970.1574289342", "_gid": "GA1.2.1411779365.1574289342", "AWSALB": "5v+tPMgooQC0deJBlEGl2wVeUSmwVGJdydie1D6dAZSRAK5eBsmg+DQCdBj8t25YRytC5NIi0TbU3PmDcNMjiyFPTp1xKHgwNjZcDvMRePZjTxthds5DsvelzE2I", "JSESSIONID": "310F346AED94D1A345207A3489DCF83D", "locale_": "en"}
)
but I get this response
<ajax-response><redirect><![CDATA[/en/etf-profile.html?0&groupField=none&sortField=ter&sortOrder=asc&from=search&isin=IE00B3VWN518&tab=chart]]></redirect></ajax-response>
Why am I not getting the same XML response that I get in my browser when I hit MAX?
Okay, below is my solution for obtaining the data you seek:
import requests

url = "https://www.justetf.com/en/etf-profile.html"

querystring = {
    # Modify this key to get the timeline you want;
    # currently it is set to "max", as you can see
    "0-1.0-tabs-panel-chart-dates-ptl_max": "",
    "groupField": "none",
    "sortField": "ter",
    "sortOrder": "asc",
    "from": "search",
    "isin": "IE00B3VWN518",
    "tab": "chart",
    "_": "1576627890798"
}

# Not all of these headers may be necessary
headers = {
    'authority': "www.justetf.com",
    'accept': "application/xml, text/xml, */*; q=0.01",
    'x-requested-with': "XMLHttpRequest",
    'wicket-ajax-baseurl': "en/etf-profile.html?0&groupField=none&sortField=ter&sortOrder=asc&from=search&isin=IE00B3VWN518&tab=chart",
    'wicket-ajax': "true",
    'wicket-focusedelementid': "id27",
    'Connection': "keep-alive",
}

session = requests.Session()

# The first request won't return what we want, but it sets the cookies
response = session.get(url, params=querystring)

# Cookies have been set; now we can make the 2nd request and get the data we want
response = session.get(url, headers=headers, params=querystring)
print(response.text)
As a bonus, I have included a link to a repl.it where I actually parse the data and get each individual data point. You can find this here.
Let me know if that helps!
I'm trying to make an authenticated GET request to an API. This is one of my first attempts at working with Python's requests library. I've looked over similar posts to this one, but they're a bit too generic to answer my question, it seems. Their answers work for nearly every other case I've worked with, so I feel a bit stuck.
The request header is fairly lengthy:
':authority': 'api-WEBSITE.com',
':method': 'GET',
':path': 'ENDPOINT',
':scheme': 'https',
'accept': 'application/json',
'accept-encoding': 'gzip, deflate, br',
'accept-language': 'en-US,en;q=0.9',
'authorization': 'AUTH_TOKEN',
'content-type': 'application/json',
'cookie': 'VERY_LONG_COOKIE',
'origin': 'https://WEBSITE.com',
'referer': 'https://WEBSITE.com/app',
'user-agent': 'LIST_OF_BROWSERS'
My code that makes this request:
import requests

requestURL = "https://api-WEBSITE.com/ENDPOINT"
parameters = {
    ':authority': 'api-WEBSITE.com',
    ':method': 'GET',
    ':path': 'ENDPOINT',
    ':scheme': 'https',
    'accept': 'application/json',
    'accept-encoding': 'gzip, deflate, br',
    'accept-language': 'en-US,en;q=0.9',
    'authorization': 'AUTH_TOKEN',
    'content-type': 'application/json',
    'cookie': 'VERY_LONG_COOKIE',
    'origin': 'https://WEBSITE.com',
    'referer': 'https://WEBSITE.com/app',
    'user-agent': 'LIST_OF_BROWSERS'
}
response = requests.get(requestURL, parameters)
print(response.status_code)
When I run this, I'm getting a 401 status code asking for authentication; however, I can't seem to find out what's throwing this 401 error.
To supply headers for a Python request, you must pass them via the headers keyword argument:

r = requests.get(url, headers=headersDict)

where headersDict is a valid dictionary of the headers you want added to the request. In your code, the dict is passed as the second positional argument of requests.get(), which is params, so it is being sent as query-string parameters instead of headers.
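A minimal offline sketch of attaching the question's headers correctly (values are the placeholders from the question; note that the ':'-prefixed entries such as :authority and :method are HTTP/2 pseudo-headers copied from browser dev tools and should be dropped, since they cannot be sent as regular request headers):

```python
import requests

# Only real headers; the ':'-prefixed pseudo-headers from dev tools are omitted
headers = {
    'accept': 'application/json',
    'authorization': 'AUTH_TOKEN',      # placeholder from the question
    'content-type': 'application/json',
    'cookie': 'VERY_LONG_COOKIE',       # placeholder from the question
}

# prepare() lets us inspect what would be sent, without hitting the network
prepared = requests.Request(
    'GET', 'https://api-WEBSITE.com/ENDPOINT', headers=headers
).prepare()
print(prepared.headers['authorization'])  # AUTH_TOKEN
```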
I have this HTTPS call in curl below:
header1="projectName: zhikovapp"
header2="Authorization: Bearer HZCdsf="
bl_url="https://BlazerNpymh.com/api/documents?pdfDate=$today"
curl -s -k -H "$header1" -H "$header2" "$bl_url"
I would like to write an equivalent python call using requests module.
header ={
"projectName": "zhikovapp",
"Authorization": "Bearer HZCdsf="
}
response = requests.get(bl_url, headers = header)
However, the request was not valid. What is wrong?
The contents of the returned response are like this:
<Response [400]>
_content = '{"Message":"The request is invalid."}'
headers = {'Content-Length': '37', 'Access-Control-Allow-Headers': 'projectname, authorization, Content-Type', 'Expires': '-1', 'cacheControlHeader': 'max-age=604800', 'Connection': 'keep-alive', 'Pragma': 'no-cache', 'Cache-Control': 'no-cache', 'Date': 'Sat, 15 Oct 2016 02:41:13 GMT', 'Access-Control-Allow-Origin': '*', 'Access-Control-Allow-Methods': 'GET, POST, PUT, DELETE, OPTIONS', 'Content-Type': 'application/json; charset=utf-8'}
reason = 'Bad Request'
I am using Python 2.7.
EDIT: I corrected some syntax errors after Soviut pointed them out.
In requests.get() the headers argument should be defined as a dictionary, a set of key/value pairs. You've defined a set (a unique list) of strings instead.
You should declare your headers like this:
headers = {
"projectName": "zhikovapp",
"Authorization": "Bearer HZCdsf="
}
response = requests.get(bl_url, headers=headers)
Note the "key": "value" format of each line inside the dictionary.
Edit: Your Access-Control-Allow-Headers say they'll accept projectname and authorization in lower case. You've named your header projectName and Authorization with upper case letters in them. If they don't match, they'll be rejected.
If you have $today defined in the shell you make curl call from, and you don't substitute it in the requests' call URL, then it's a likely reason for the 400 Bad Request.
Access-Control-* and other CORS headers have nothing to do with non-browser clients. Also HTTP headers are generally case insensitive.
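If the unsubstituted $today is the culprit, a minimal sketch of doing the substitution in Python itself (the ISO date format is an assumption; match whatever format the API actually expects):

```python
import datetime

# Assume the API wants an ISO date (YYYY-MM-DD); adjust if it expects another format
today = datetime.date.today().isoformat()
bl_url = "https://BlazerNpymh.com/api/documents?pdfDate={0}".format(today)
print(bl_url)
```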
Following #furas's advice here's the output:
$ curl -H "projectName: zhikovapp" -H "Authorization: Bearer HZCdsf=" \
http://httpbin.org/get
{
  "args": {},
  "headers": {
    "Accept": "*/*",
    "Authorization": "Bearer HZCdsf=",
    "Host": "httpbin.org",
    "Projectname": "zhikovapp",
    "User-Agent": "curl/7.35.0"
  },
  "origin": "1.2.3.4",
  "url": "http://httpbin.org/get"
}
And the same request with requests:
import requests
res = requests.get('http://httpbin.org/get', headers={
    "projectName": "zhikovapp",
    "Authorization": "Bearer HZCdsf="
})
print(res.json())
{
    'args': {},
    'headers': {
        'Accept': '*/*',
        'Accept-Encoding': 'gzip, deflate, compress',
        'Authorization': 'Bearer HZCdsf=',
        'Host': 'httpbin.org',
        'Projectname': 'zhikovapp',
        'User-Agent': 'python-requests/2.2.1 CPython/3.4.3 Linux/3.16.0-38-generic'
    },
    'origin': '1.2.3.4',
    'url': 'http://httpbin.org/get'
}
As you can see, the only difference is the User-Agent header. It's unlikely to be the cause, but you can easily set it in headers to whatever value you like.
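For instance, a quick offline sketch of overriding it (the value mirrors the curl agent from the output above, purely as an illustration):

```python
import requests

headers = {
    "projectName": "zhikovapp",
    "Authorization": "Bearer HZCdsf=",
    "User-Agent": "curl/7.35.0",  # illustrative override of the default agent
}

# prepare() shows the headers that would be sent, without a network call
prepared = requests.Request(
    'GET', 'http://httpbin.org/get', headers=headers
).prepare()
print(prepared.headers['User-Agent'])  # curl/7.35.0
```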