Python Requests 400 Response

The API link is currently available and can be viewed from a browser. When I use requests on this particular web page I get the error below. Is there a different package that will work, or anything within the requests package that will help?
import requests
leaderboard_req = requests.get('https://api.draftkings.com/scores/v1/leaderboards/141136667?format=json&embed=leaderboard')
leaderboard_req.json()
"{'errorStatus': {'code': 'SCO101', 'developerMessage': 'Invalid userKey.'},
'responseStatus': {'ErrorCode': 'SCO101', 'Message': 'Invalid userKey.'}}"
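No fix is recorded here, but a common first step when an endpoint works in a browser and not in requests is to send browser-style headers and check the status code before parsing the body. A minimal sketch (the header values are placeholders, not known requirements of this API):
import requests

url = 'https://api.draftkings.com/scores/v1/leaderboards/141136667?format=json&embed=leaderboard'
# Placeholder browser-style headers; copy the real ones from the browser's network tab.
headers = {
    'User-Agent': 'Mozilla/5.0',
    'Accept': 'application/json',
}
leaderboard_req = requests.get(url, headers=headers)
print(leaderboard_req.status_code)   # inspect the status before trusting .json()
print(leaderboard_req.json())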

Related

Python GET requests for an API URL return a 422 error, but in a browser there are no problems. Potential service worker problem?

I have noticed that for some websites' API URLs, the response in the browser is delivered via a service worker, which has caused problems when scraping those APIs.
For example, consider the following:
https://www.sephora.co.id/api/v2.3/products?filter[category]=makeup/face/bronzer&page[size]=30&page[number]=1&sort=sales&include=variants,brand
The data appears when the URL is pasted into a browser. However, it gives me a 422 error when I try to automate the collection of that data in Python with the following code:
import requests
#API url
url = 'https://www.sephora.co.id/api/v2.3/products?filter[category]=makeup/face/bronzer&page[size]=30&page[number]=1&sort=sales&include=variants,brand'
#The response is always 422
response = requests.get(url)
I have noticed that calling the API URL in the browser returns a response via a service worker. So my question is: is there a way around this to get a 200 response via the Python requests library?
The server appears to require the Accept-Language header.
The code below now returns 200.
import requests
url = 'https://www.sephora.co.id/api/v2.3/products?filter[category]=makeup/face/bronzer&page[size]=30&page[number]=1&sort=sales&include=variants,brand'
headers = {'Accept-Language': 'en-gb'}
response = requests.get(url, headers=headers)
(Ascertained by making a successful request in a browser, adding all of its headers as-is to the Python request, and then removing them one by one.)
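That elimination step can also be scripted. A rough sketch (the header values below are placeholders; start from the full set copied out of a successful browser request):
import requests

url = 'https://www.sephora.co.id/api/v2.3/products?filter[category]=makeup/face/bronzer&page[size]=30&page[number]=1&sort=sales&include=variants,brand'
# Placeholder values; copy the real ones from the browser's network tab.
browser_headers = {
    'User-Agent': 'Mozilla/5.0',
    'Accept': 'application/json',
    'Accept-Language': 'en-gb',
}
for name in list(browser_headers):
    trimmed = {k: v for k, v in browser_headers.items() if k != name}
    status = requests.get(url, headers=trimmed).status_code
    print(name, 'removed ->', status)   # a non-200 status means that header is required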

Program to log in to Facebook using Python requests

I tried making a script to log in to Facebook using Python requests. By analyzing the login POST with Firebug, I found that it also sends some other inputs. I used the BS4 module to extract the values from the login form, prepared a payload, and sent it using requests.session(). But when I printed the returned URL, it was just the action link of the form. What am I doing wrong? Any help is appreciated :)
Code:
# facebook login
import requests
from bs4 import BeautifulSoup as bs

url = 'http://www.facebook.com'
headers = {'User-Agent': 'Mozilla/5.0'}

# fetch the login page and parse the login form
r = requests.get(url)
soup = bs(r.text, 'html.parser')
form = soup.find('form', {'id': 'login_form'})
inputs = form.find_all('input')

# collect every input field (including hidden ones) into the payload
load = {}
for i in inputs:
    load[i.get('name')] = i.get('value')

e = input('enter email')
p = input('enter password')
load['email'] = e
load['pass'] = p

s = requests.session()
r = s.post(form.get('action'), data=load, headers=headers)
print(r.url)  # to verify login
s.close()
I found the solution myself: Facebook needed the cookies along with the input load. I extracted the cookies from the initial GET request and passed them along with the POST.
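A minimal sketch of that fix: the simplest way to carry the cookies from the first GET into the login POST is to make both requests through the same session (explicitly passing cookies=first.cookies to the post works as well):
import requests
from bs4 import BeautifulSoup as bs

headers = {'User-Agent': 'Mozilla/5.0'}
s = requests.session()
# the session stores the cookies set by this GET and re-sends them automatically
first = s.get('http://www.facebook.com', headers=headers)
soup = bs(first.text, 'html.parser')
form = soup.find('form', {'id': 'login_form'})
load = {i.get('name'): i.get('value') for i in form.find_all('input')}
load['email'] = input('enter email')
load['pass'] = input('enter password')
r = s.post(form.get('action'), data=load, headers=headers)
print(r.url)  # to verify login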

Logging in to Instagram using Python requests module

I'm trying to log in to Instagram with the Python requests module. Checking the site with the Firefox Developer Tools, I saw that whenever I click the login button a request is sent to instagram.com/accounts/login/ajax/.
So I wrote this piece of code:
import json, requests

ses = requests.session()
url = "https://www.instagram.com/accounts/login/ajax/"
payload = {'username': '****', 'password': '****'}
req = ses.post(url, data=json.dumps(payload))
But the response object (req) contains an HTML page with this error:
"This page could not be loaded. If you have cookies disabled in your browser, or you are browsing in Private Mode, please try enabling cookies or turning off Private Mode, and then retrying your action"
What should I do?
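No answer is included here, but the error text points at missing cookies. A hedged sketch of one common approach (it assumes the endpoint wants form-encoded data rather than a JSON string, and that a csrftoken cookie obtained from a prior GET has to be echoed back in an X-CSRFToken header; Instagram may have changed these details):
import requests

ses = requests.session()
# visit the site first so the session picks up cookies, including the CSRF token
ses.get("https://www.instagram.com/accounts/login/")
csrf = ses.cookies.get('csrftoken', '')   # assumption: the token cookie is named csrftoken
headers = {
    'X-CSRFToken': csrf,
    'Referer': 'https://www.instagram.com/accounts/login/',
}
payload = {'username': '****', 'password': '****'}
req = ses.post("https://www.instagram.com/accounts/login/ajax/", data=payload, headers=headers)
print(req.status_code)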

Python requests login Glassdoor

I'm trying to use Python 2.7 and Requests 2.7.0 to log in to Glassdoor and get an HTML response. However, when I run the following code, it always returns a 403 Forbidden response. How can I log in correctly?
s = requests.session()
login_data = {'username': 'myemailaddress', 'password': 'mypassword'}
s.post('https://www.glassdoor.com/profile/login_input.htm', data=login_data)
r = s.get('http://www.glassdoor.com/Reviews/us-reviews-SRCH_IL.0,2_IN1.htm')
print r
Thanks!
You could take a slightly different approach and use a framework like Scrapy: http://scrapy.org/
That way you can use its built-in form handling (FormRequest.from_response) to submit the login form together with its hidden fields, and let the framework manage cookies across requests, which is what a login flow like this usually needs.
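A rough sketch of that approach with Scrapy (the form field names and start URL are assumptions taken from the question; check the actual login form in the page source):
import scrapy

class GlassdoorLoginSpider(scrapy.Spider):
    name = 'glassdoor_login'
    start_urls = ['https://www.glassdoor.com/profile/login_input.htm']

    def parse(self, response):
        # from_response picks up the hidden fields of the login form automatically
        return scrapy.FormRequest.from_response(
            response,
            formdata={'username': 'myemailaddress', 'password': 'mypassword'},
            callback=self.after_login,
        )

    def after_login(self, response):
        self.logger.info('Login response status: %s', response.status)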

How to access a sharepoint site via the REST API in Python?

I have the following site in SharePoint 2013 in my local VM:
http://win-5a8pp4v402g/sharepoint_test/site_1/
When I access this from the browser, it prompts me for the username and password and then works fine. However, I am trying to do the same using the REST API in Python. I am using the requests library, and this is what I have done:
import requests
from requests.auth import HTTPBasicAuth
USERNAME = "Administrator"
PASSWORD = "password"
response = requests.get("http://win-5a8pp4v402g/sharepoint_test/site_1/", auth=HTTPBasicAuth(USERNAME, PASSWORD))
print response.status_code
However, I get a 401. I don't understand. What am I missing?
Note: I followed this article http://tech.bool.se/using-python-to-request-data-from-sharepoint-via-rest/
It's possible that your SharePoint site uses a different authentication scheme. You can check this by inspecting the network traffic in Firebug or the Chrome Developer Tools.
Luckily, the requests library supports many authentication options: http://docs.python-requests.org/en/latest/user/authentication/
For example, one of the networks I needed to access uses NTLM authentication. After installing the requests-ntlm plugin, I was able to access the site using code similar to this:
import requests
from requests_ntlm import HttpNtlmAuth
requests.get("http://sharepoint-site.com", auth=HttpNtlmAuth('DOMAIN\\USERNAME','PASSWORD'))
Here is an example of a SharePoint 2016 REST API call from Python to create a site.
import requests, json, urllib
from requests_ntlm import HttpNtlmAuth

root_url = "https://sharepoint.mycompany.com"
headers = {'accept': "application/json;odata=verbose",
           "content-type": "application/json;odata=verbose"}
## "DOMAIN\username", password
auth = HttpNtlmAuth("MYCOMPANY" + "\\" + "UserName", 'Password')

def getToken():
    # request a form digest value, required for SharePoint write operations
    contextinfo_api = root_url + "/_api/contextinfo"
    response = requests.post(contextinfo_api, auth=auth, headers=headers)
    response = json.loads(response.text)
    digest_value = response['d']['GetContextWebInformation']['FormDigestValue']
    return digest_value

def createSite(title, url, desc):
    create_api = root_url + "/_api/web/webinfos/add"
    payload = {'parameters': {
        '__metadata': {'type': 'SP.WebInfoCreationInformation'},
        'Url': url,
        'Title': title,
        'Description': desc,
        'Language': 1033,
        'WebTemplate': 'STS#0',
        'UseUniquePermissions': True}
    }
    response = requests.post(create_api, auth=auth, headers=headers, data=json.dumps(payload))
    return json.loads(response.text)

headers['X-RequestDigest'] = getToken()
print(createSite("Human Resources", "hr", "Sample Description"))
You can also use Office365-REST-Python-Client ("Office 365 & Microsoft Graph Library for Python") or sharepoint ("Module and command-line utility to get data out of SharePoint")
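For instance, a minimal sketch with Office365-REST-Python-Client (module paths and the credential class can differ between library versions and between SharePoint Online and on-premises setups, so treat this as an outline):
from office365.runtime.auth.user_credential import UserCredential
from office365.sharepoint.client_context import ClientContext

site_url = "http://win-5a8pp4v402g/sharepoint_test/site_1/"
ctx = ClientContext(site_url).with_credentials(UserCredential("Administrator", "password"))
web = ctx.web.get().execute_query()   # fetch the web object to confirm the connection works
print(web.properties["Title"])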
A 401 response is an authentication error, so one of your three variables is incorrect: the URL, the username, or the password. See the Requests authentication docs.
Your URL also looks incomplete.
