I'm trying to get flight information and prices from https://www.easyjet.com using the requests module.
In the browser, when I fill in the form on easyjet.com and click submit, it internally fetches the data with the following call:
https://www.easyjet.com/ejavailability/api/v15/availability/query?AdditionalSeats=0&AdultSeats=1&ArrivalIata=%23PARIS&ChildSeats=0&DepartureIata=%23LONDON&IncludeAdminFees=true&IncludeFlexiFares=false&IncludeLowestFareSeats=true&IncludePrices=true&Infants=0&IsTransfer=false&LanguageCode=EN&MaxDepartureDate=2018-02-23&MinDepartureDate=2018-02-23
When I try to mimic the same call with the following code, I don't get the expected response. I'm pretty new to this domain. Can anyone help me understand what is going wrong?
Here is my code:
import requests
url = 'https://www.easyjet.com/en/'
url1 = 'https://www.easyjet.com/ejavailability/api/v15/availability/query?AdditionalSeats=0&AdultSeats=1&ArrivalIata=%23PARIS&ChildSeats=0&DepartureIata=%23LONDON&IncludeAdminFees=true&IncludeFlexiFares=false&IncludeLowestFareSeats=true&IncludePrices=true&Infants=0&IsTransfer=false&LanguageCode=EN&MaxDepartureDate=2018-02-23&MinDepartureDate=2018-02-21'
http = requests.Session()
response = http.get(url, verify=False)
response1 = http.get(url1, verify=False)
print(response1.text)
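One thing worth trying (not confirmed against easyJet's API, and the header values below are assumptions rather than documented requirements) is to visit the landing page on a session first and then send browser-like headers with the availability call:
import requests
session = requests.Session()
# Headers copied from a real browser request; these exact values are assumptions.
headers = {
    'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64)',
    'Accept': 'application/json, text/plain, */*',
    'Referer': 'https://www.easyjet.com/en/',
}
# Hit the landing page first so the session picks up any cookies it sets.
session.get('https://www.easyjet.com/en/', headers=headers)
api_url = ('https://www.easyjet.com/ejavailability/api/v15/availability/query'
           '?AdditionalSeats=0&AdultSeats=1&ArrivalIata=%23PARIS&ChildSeats=0'
           '&DepartureIata=%23LONDON&IncludeAdminFees=true&IncludeFlexiFares=false'
           '&IncludeLowestFareSeats=true&IncludePrices=true&Infants=0&IsTransfer=false'
           '&LanguageCode=EN&MaxDepartureDate=2018-02-23&MinDepartureDate=2018-02-21')
response = session.get(api_url, headers=headers)
print(response.status_code)
print(response.text)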
I need to send data with Python requests as application/x-www-form-urlencoded. I couldn't find the answer. It must be in that format, otherwise the site won't let me through :(
A simple request with the parameters in the query string should work:
import requests
# Placeholder URL; substitute the real endpoint and parameters.
url = 'https://example.com/login?username=login&password=password'
r = requests.get(url)
Or send the fields in a POST body; with data= (rather than json=) requests encodes them as application/x-www-form-urlencoded and sets the Content-Type for you:
import requests
# Placeholder URL.
r = requests.post('https://example.com/login', data={"username": "login", "password": "password"})
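To confirm the body actually went out as application/x-www-form-urlencoded, you can inspect the prepared request (the URL here is again a placeholder):
import requests
r = requests.post('https://example.com/login', data={'username': 'login', 'password': 'password'})
print(r.request.headers['Content-Type'])  # application/x-www-form-urlencoded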
I have noticed that for some websites' API URLs, the response in the browser is returned via a service worker, which has caused problems when scraping those APIs.
For example, consider the following:
https://www.sephora.co.id/api/v2.3/products?filter[category]=makeup/face/bronzer&page[size]=30&page[number]=1&sort=sales&include=variants,brand
The data appears when the URL is pasted into a browser. However, I get a 422 error when I try to automate the collection of that data in Python with the following code:
import requests
#API url
url = 'https://www.sephora.co.id/api/v2.3/products?filter[category]=makeup/face/bronzer&page[size]=30&page[number]=1&sort=sales&include=variants,brand'
#The response is always 422
response = requests.get(url)
Calling the API URL in the browser returns the response via a service worker. Is there a way around this to get a 200 response with the Python requests library?
The server appears to require the Accept-Language header.
The code below now returns 200.
import requests
url = 'https://www.sephora.co.id/api/v2.3/products?filter[category]=makeup/face/bronzer&page[size]=30&page[number]=1&sort=sales&include=variants,brand'
headers = {'Accept-Language': 'en-gb'}
response = requests.get(url, headers=headers)
(Ascertained by checking a successful request in a browser, adding all of its headers as-is to the Python request, and then removing them one by one.)
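If you want to reproduce that elimination approach, here is a rough sketch; the header set below is illustrative, not the actual browser capture:
import requests
url = 'https://www.sephora.co.id/api/v2.3/products?filter[category]=makeup/face/bronzer&page[size]=30&page[number]=1&sort=sales&include=variants,brand'
# Start from the full header set copied out of the browser's network tab (illustrative values only).
browser_headers = {
    'User-Agent': 'Mozilla/5.0',
    'Accept': 'application/json',
    'Accept-Language': 'en-gb',
}
# Drop one header at a time and see which removal breaks the request.
for name in list(browser_headers):
    trimmed = {k: v for k, v in browser_headers.items() if k != name}
    status = requests.get(url, headers=trimmed).status_code
    print(f'without {name}: {status}')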
I am having trouble making a simple API call and am hoping I can get some assistance. I have tried several variations of this request, but I keep getting the same result. I've tried many solutions from previous questions posted on this site, but none seem to help.
The request I am attempting to make:
import requests
import urllib3
urllib3.disable_warnings(urllib3.exceptions.InsecureRequestWarning)
url = "https://api.ramcoams.com/api/v2/"
headers = {'Key': 'My_Key', 'operation':'GetEntityTypes'}
response = requests.post( url, headers=headers, verify=False)
print(response.text)
The response I get is:
{"ResponseCode":400,"ResponseText":"Key parameter missing."}
Lastly, the documentation for the API is here:
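The error message suggests the server expects Key as a request parameter rather than an HTTP header. A hedged guess, since the documentation isn't quoted above, is to move Key and operation into the form body:
import requests
import urllib3
urllib3.disable_warnings(urllib3.exceptions.InsecureRequestWarning)
url = "https://api.ramcoams.com/api/v2/"
# Assumption: the API reads these as form fields, not headers.
payload = {'Key': 'My_Key', 'operation': 'GetEntityTypes'}
response = requests.post(url, data=payload, verify=False)
print(response.text)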
I'm building a Python web scraper (personal use) and am running into some trouble retrieving a JSON file. I was able to find the request URL I need, but when I run my script (I'm using Requests) the URL returns HTML instead of the JSON shown in the Chrome Developer Tools console. Here's my current script:
import requests
import json
url = 'https://nytimes.wd5.myworkdayjobs.com/Video?clientRequestID=1f1a6071627946499b4b09fd0f668ef0'
r = requests.get(url)
print(r.text)
Completely new to Python, so any push in the right direction is greatly appreciated. Thanks!
It looks like that website varies its response based on the Accept header provided in the request. So try:
import requests
import json
url = 'https://nytimes.wd5.myworkdayjobs.com/Video?clientRequestID=1f1a6071627946499b4b09fd0f668ef0'
r = requests.get(url, headers={'accept': 'application/json'})
print(r.json())
You can have a look at the full requests API for further reference: http://docs.python-requests.org/en/latest/api/.
I am trying to login to http://127.0.0.1/dvwa/login.php, with Python requests.post method.
Currently I am doing as follows:
import requests
payload = {'username':'admin','password':'password'}
response = requests.post('http://127.0.0.1/dvwa/login.php', data=payload)
However, it does not seem to work. I should be getting a 301 status code from the response object, but I only receive 200 codes. I've also taken the cookies from my browser and set them on the requests object; however, this does not work, and it also defeats the purpose of what I am trying to do.
I've also tried the following with no luck:
from requests.auth import HTTPBasicAuth
import requests
response = requests.get("http://127.0.0.1/dvwa/login.php",auth=HTTPBasicAuth('admin','password'))
and
from requests.auth import HTTPBasicAuth
import requests
cookies = {'PHPSESSID':'07761e3f52ae72fa7d0e2c57569c32a7'}
response = requests.get("http://127.0.0.1/dvwa/login.php",auth=HTTPBasicAuth('admin','password'),cookies=cookies)
None of the above methods give the result I require/want, which is simply logging in.
By default, requests will follow redirects. response.status_code will be the status code of the ultimate location. If you want to check if you've been redirected, look at response.history.
import requests
response = requests.get("http://google.com/") #301 redirects to 'www.google.com'
response.status_code
#200
response.history
#[<Response [301]>]
response.url
#'http://www.google.com/'
Additionally, a good way to have requests keep track of your session/cookies is by using requests.Session
import requests
with requests.Session() as sesh:
    sesh.post(the_url, data=payload)
    #do more stuff in session
I appreciate your answer; however, I found the answer to my question. It is as follows, in case anyone else has the same issue.
Instead of:
import requests
response = requests.post('http://127.0.0.1/dvwa/login.php',data={'username':'admin','password':'password'})
You also need the 'Login' field included in the payload, as follows:
import requests
response = requests.post('http://127.0.0.1/dvwa/login.php',data={'username':'admin','password':'password','Login':'Login'})
It then logs me in correctly.
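If you then need to stay logged in for later requests, the same payload works with requests.Session, which keeps the PHPSESSID cookie across calls (a sketch combining the two answers above; the follow-up URL is just an example page):
import requests
with requests.Session() as sesh:
    sesh.post('http://127.0.0.1/dvwa/login.php',
              data={'username': 'admin', 'password': 'password', 'Login': 'Login'})
    # Subsequent requests on the same session reuse the login cookie automatically.
    response = sesh.get('http://127.0.0.1/dvwa/index.php')
    print(response.status_code)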