This is the Spotify documentation I'm following. Of the three 'Authorization Flows' options, I'm trying the 'Authorization Code Flow'.
I finished step 1, 'Have your application request authorization'.
I'm stuck at step 2, 'Have your application request refresh and access tokens'.
It asks me to make a POST request with the parameters encoded in application/x-www-form-urlencoded, as defined in the OAuth 2.0 specification. Here is what I've done so far with my limited knowledge and Google searching:
import requests
import base64

url = "https://accounts.spotify.com/api/token"
params = {
    "grant_type": "authorization_code",
    "code": <authorization code I got from step 1>,
    "redirect_uri": "http://127.0.0.1:5000/",
}
headers = {
    "user-agent": "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_14_5) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/73.0.3683.103 Safari/537.36",
    "Content-Type": "application/x-www-form-urlencoded",
    "Authorization": base64.b64encode("{}:{}".format(CLIENT_ID, CLIENT_SECRET).encode('UTF-8')).decode('ascii')
}
html = requests.request('post', url, headers=headers, params=params, data=None)
print(html.text)
The result, with response code 400:
{"error":"invalid_client"}
What should I do to make it work? I thought I got all the params right.
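For reference, two things in the code above look off against the OAuth 2.0 spec (RFC 6749) and are likely culprits for invalid_client: the client credentials must be sent with the HTTP Basic scheme (the "Basic " prefix is missing from the Authorization header), and the form fields belong in the request body (data=), not the query string (params=). A minimal sketch of that shape, reusing the same CLIENT_ID/CLIENT_SECRET variables from the question:
import requests
import base64

url = "https://accounts.spotify.com/api/token"

# RFC 6749 section 2.3.1: "client_id:client_secret", base64-encoded,
# sent with the "Basic " scheme prefix
credentials = base64.b64encode(
    "{}:{}".format(CLIENT_ID, CLIENT_SECRET).encode("utf-8")
).decode("ascii")
headers = {"Authorization": "Basic " + credentials}

# data= sends the fields application/x-www-form-urlencoded in the body;
# requests sets the Content-Type header for a dict automatically
data = {
    "grant_type": "authorization_code",
    "code": "<authorization code from step 1>",
    "redirect_uri": "http://127.0.0.1:5000/",
}

response = requests.post(url, headers=headers, data=data)
print(response.status_code, response.json())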
Related
I want to comment on a specific task in ClickUp, but it responds with a 401 error.
url = "https://api.clickup.com/api/v2/task/861m8wtw3/comment"
headers = {
"Authorization": "Bearer <my api key>",
"Content-Type": "application/json"
}
# comment = input('Type your comment text: \n')
comment = 'test comment'
data = {
"content": f"{comment}"
}
response = requests.post(url, headers=headers, json=data)
and the output is:
<Response [401]>
What is the problem?
I tried adding a Mozilla User-Agent header:
'User-Agent': 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_10_1) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/39.0.2171.95 Safari/537.36'
but I still get the 401 error!
Seems to me like you're getting detected.
Try using Selenium-Profiles.
It looks like the issue is with the Authorization header. Make sure the header contains only the API token string, without 'Bearer' in front of it, like so:
headers = {
    "Authorization": "<your api token>",
    "Content-Type": "application/json"
}
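Putting that together with the rest of the question's code, a minimal end-to-end sketch (the task ID and the token placeholder are from the question; ClickUp personal tokens are sent as-is in the Authorization header):
import requests

url = "https://api.clickup.com/api/v2/task/861m8wtw3/comment"
headers = {
    "Authorization": "<your api token>",  # personal token, no "Bearer " prefix
    "Content-Type": "application/json",
}
data = {"content": "test comment"}

response = requests.post(url, headers=headers, json=data)
print(response.status_code, response.json())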
I need to make multiple calls using threading, but when I mount the session it gives me a bad request error over the HTTP protocol.
import requests
from requests_ip_rotator import ApiGateway, EXTRA_REGIONS  # assuming the requests-ip-rotator package

gateway = ApiGateway("https://my.com/plp_search_v2", access_key_id=aws_access_key_id, access_key_secret=aws_secret_access_id, regions=EXTRA_REGIONS)
url = "https://my.com/plp_search_v2"
header = {'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/92.0.4515.159 Safari/537.36'}
params = {
    "key": "9f36aeafbe60771e321a7cc95a78140772ab3e96",
    "category": "ziogr",
    "channel": "WEB",
    "count": "24",
}
session = requests.Session()
session.mount("https://", gateway)
session.mount("http://", gateway)
response = session.get(url, params=params, headers=header)
I am trying to use AWS API Gateway to make calls to that URI, and I don't know why it works without the session.mount lines.
The service always returns a 400 Bad Request, and we need to make multiple calls through the AWS proxy to retrieve some information.
Is there a way to improve these calls to get a more efficient result while still calling the API each time?
I used the urllib.parse library to remove the path from the URL, and after that redid the gateway mount on the session.
It looks like this:
url = "https://redsky.target.com/redsky_aggregations/v1/web/plp_search_v2"
src_parsed = urlparse(url)
src_nopath = "%s://%s" % (src_parsed.scheme, src_parsed.netloc)
session = requests.Session()
session.mount(src_nopath, gateway)
response = session.get(url, stream=True, params=params, headers=header)
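Since the original goal was multiple calls with threading, here is a sketch of fanning requests out over the mounted session with a thread pool. The ThreadPoolExecutor approach and the offset paging parameter are my assumptions, not from the original:
from concurrent.futures import ThreadPoolExecutor

def fetch(page):
    # "offset" is a hypothetical paging parameter; adjust to what the API expects
    page_params = {**params, "offset": str(page * 24)}
    return session.get(url, params=page_params, headers=header)

# sharing a Session across threads is common practice for simple GETs,
# though requests does not formally guarantee thread safety
with ThreadPoolExecutor(max_workers=8) as pool:
    responses = list(pool.map(fetch, range(5)))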
I am trying to get the HTML page to parse. The site itself has a login form. I am using the following code to get through the login form:
import json
import requests

headers = {
    "Content-Type": "application/json",
    "referer": "https://somesite/"
}
payload = {
    "email": us,
    "password": ps,
    "web": "true"
}
session_requests = requests.session()
response = session_requests.post(
    site,
    data=json.dumps(payload),
    headers=headers
)
result = response
resultContent = response.content
resultCookies = response.cookies
resultContentJson = json.loads(resultContent)
resultJwtToken = resultContentJson['jwtToken']
That works just fine; I get a 200 OK status and the jwtToken.
Now, when I actually try to get the page (the search result), the site returns '401 - not authorized'. So the question is: what am I doing wrong? Any suggestion/hint/idea is appreciated!
Here is the request that gets the 401 response:
siteSearch = "somesite/filters/search"
headersSearch = {
    "content-type": "application/json",
    "referer": "https://somesite",
    "origin": "https://somesite",
    "authorization": "Bearer {}".format(resultJwtToken),
    "user-agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/89.0.4389.128 Safari/537.36"
}
payloadSearch = {
    "userId": 50432,
    "filters": [],
    "savedSearchIds": [],
    "size": 24
}
responseSearch = session_requests.post(
    siteSearch,
    data=json.dumps(payloadSearch),
    headers=headers
)
searchResult = responseSearch
Looking at Postman and the Chrome developer tools, it seems to me I am sending a request identical to the actual browser's (it works via the browser), but nope: 401 response.
Maybe it has something to do with the cookies? The first login response returns a bunch of cookies as well, but I thought session_requests took care of that?
Anyway, any help is appreciated. Thanks!
Typo: in responseSearch I used the headers defined for the initial login. It should be headers = headersSearch. Everything else works as expected. Thanks!
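For clarity, the corrected call from that self-answer looks like this:
responseSearch = session_requests.post(
    siteSearch,
    data=json.dumps(payloadSearch),
    headers=headersSearch  # the fix: the search headers, not the login headers
)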
I'm trying to integrate the Yandex OCR translator tool into my code. With the help of Burp Suite, I managed to capture the request that is used to send the image, and I'm trying to emulate it with the following code:
import requests
from requests_toolbelt import MultipartEncoder

files = {
    'file': ("blob", open("image_path", 'rb'), "image/jpeg")
}
# (<filename>, <file object>, <content type>, <per-part headers>)
burp0_url = "https://translate.yandex.net:443/ocr/v1.1/recognize?srv=tr-image&sid=9b58493f.5c781bd4.7215c0a0&lang=en%2Cru"
m = MultipartEncoder(files, boundary='-----------------------------7652580604126525371226493196')
burp0_headers = {
    "User-Agent": "Mozilla/5.0 (Macintosh; Intel Mac OS X 10.14; rv:65.0) Gecko/20100101 Firefox/65.0",
    "Accept": "*/*",
    "Accept-Language": "en-US,en;q=0.5",
    "Accept-Encoding": "gzip, deflate",
    "Referer": "https://translate.yandex.com/",
    "Content-Type": "multipart/form-data; boundary=-----------------------------7652580604126525371226493196",
    "Origin": "https://translate.yandex.com",
    "DNT": "1",
    "Connection": "close"
}
print(requests.post(burp0_url, headers=burp0_headers, files=m.to_string()).text)
though sadly it yields the following output:
{"error":"BadArgument","description":"Bad argument: file"}
Does anyone know how this could be solved?
Many thanks in advance!
You are passing the MultipartEncoder.to_string() result to the files parameter. You are now asking requests to encode the already multipart-encoded result as a multipart component again. That's one time too many.
You don't need to replicate every byte here; just post the file, and perhaps set the user agent, referer, and origin:
import requests

files = {
    'file': ("blob", open("image_path", 'rb'), "image/jpeg")
}
url = "https://translate.yandex.net:443/ocr/v1.1/recognize?srv=tr-image&sid=9b58493f.5c781bd4.7215c0a0&lang=en%2Cru"
headers = {
    "User-Agent": "Mozilla/5.0 (Macintosh; Intel Mac OS X 10.14; rv:65.0) Gecko/20100101 Firefox/65.0",
    "Referer": "https://translate.yandex.com/",
    "Origin": "https://translate.yandex.com",
}
response = requests.post(url, headers=headers, files=files)
print(response.status_code)
print(response.json())
The Connection header is best left to requests; it can control when a connection should be kept alive just fine. The Accept* headers tell the server what your client can handle, and requests sets those automatically too.
I get a 200 OK response with that code:
200
{'data': {'blocks': []}, 'status': 'success'}
However, if you don't set additional headers (remove the headers=headers argument), the request also works, so Yandex doesn't appear to be filtering for robots here.
I am trying to read/write SharePoint list items through Python.
I've written the code below, which reads SharePoint details successfully:
import requests
from requests_ntlm import HttpNtlmAuth

requests.packages.urllib3.disable_warnings()  # suppress all SSL warnings

url = "https://sharepoint.company.com/_api/web/lists/getbytitle('listname')/items?$top=3&$select=ID,Title,Notes"  # just reading 3 columns
headers = {'accept': 'application/xml;q=0.9, */*;q=0.8'}
response = requests.get(url, headers=headers, auth=HttpNtlmAuth('domain\\username', 'Password'), verify=False, stream=True)
Now, when I try to update one of the items, I receive a 403 error response:
headers = {'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/60.0.3112.113 Safari/537.36'}
json_data = [{'__metadata': {'type': 'SP.List'}, 'Notes': 'Test Note'}]
response = requests.post(url, {'__metadata': {'type': 'SP.List'}, 'Notes': 'Test Note'}, headers=headers, auth=HttpNtlmAuth('domain\\username', 'Password'), verify=False)
The Microsoft SharePoint documentation says an X-RequestDigest header carrying the form digest value has to be sent.
After reading through articles, I found the code below to get the form digest value:
site_url = "https://sharepoint.company.com"
login_user = 'domain\\username'
auth = HttpNtlmAuth(login_user, 'PASSWORD')
sharepoint_contextinfo_url = site_url + '/_api/contextinfo'
headers = {
    'accept': 'application/json;odata=verbose',
    'content-type': 'application/json;odata=verbose',
    'odata': 'verbose',
    'X-RequestForceAuthentication': 'true'
}
r = requests.post(sharepoint_contextinfo_url, auth=auth, headers=headers, verify=False)
form_digest_value = r.json()['d']['GetContextWebInformation']['FormDigestValue']
But I do not receive the form_digest_value.
I tried to access the context info through the browser at https://sharepoint.company.com/_api/contextinfo and received the error below:
<?xml version="1.0" encoding="UTF-8"?>
-<m:error xmlns:m="http://schemas.microsoft.com/ado/2007/08/dataservices/metadata">
<m:code>-1, Microsoft.SharePoint.Client.ClientServiceException</m:code>
<m:message xml:lang="en-US">The HTTP method 'GET' cannot be used to access the resource 'GetContextWebInformation'. The operation type of the resource is specified as 'Default'. Please use correct HTTP method to invoke the resource.</m:message>
</m:error>
Can someone please help with how to get the form digest value? Or is there any other way to update a SharePoint list item?
Thanks in advance!
Update:
After going through this article, I understand we can get the __REQUESTDIGEST value from the page source. On refreshing the page every minute, I can see the value differs. How can I get the request digest value through Python and keep it alive for at least 5 minutes?
Posting the answer; maybe it could help someone.
The data passed for the update was not formed properly here.
So I passed it like below:
json_data = {
    "__metadata": { "type": "SP.Data.TasksListItem" },
    "Title": "updated title from Python"
}
and passed json_data to requests like below:
r = requests.post(api_page, json.dumps(json_data), auth=auth, headers=update_headers, verify=False).text
After the above changes, code updated the Title on SharePoint.
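For completeness, here is a sketch that ties the digest call and the update together. The list title 'Tasks', the item ID, api_page, and the MERGE headers are my assumptions about the standard SharePoint REST update pattern, not something stated above:
import json
import requests
from requests_ntlm import HttpNtlmAuth

auth = HttpNtlmAuth('domain\\username', 'PASSWORD')
site_url = "https://sharepoint.company.com"

# 1. Fetch the form digest; as the browser error above shows, this must be a POST
digest_headers = {
    'accept': 'application/json;odata=verbose',
    'content-type': 'application/json;odata=verbose',
}
r = requests.post(site_url + '/_api/contextinfo', auth=auth, headers=digest_headers, verify=False)
form_digest_value = r.json()['d']['GetContextWebInformation']['FormDigestValue']

# 2. Update the item; X-HTTP-Method: MERGE and IF-MATCH: * are the usual
#    SharePoint REST headers for an in-place update (assumed here)
update_headers = {
    'accept': 'application/json;odata=verbose',
    'content-type': 'application/json;odata=verbose',
    'X-RequestDigest': form_digest_value,
    'IF-MATCH': '*',
    'X-HTTP-Method': 'MERGE',
}
api_page = site_url + "/_api/web/lists/getbytitle('Tasks')/items(1)"  # hypothetical list/item
json_data = {
    "__metadata": { "type": "SP.Data.TasksListItem" },
    "Title": "updated title from Python"
}
r = requests.post(api_page, json.dumps(json_data), auth=auth, headers=update_headers, verify=False)
print(r.status_code)
On the keep-alive question: the digest expires after the site's configured interval (30 minutes by default), so rather than keeping one alive, re-request it from /_api/contextinfo when it ages out.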