Simulating XHR for a POST request - Python

I'm trying to send a POST request via Python, but it isn't going through correctly.
I want my code to approve my selected seats and continue to payment.
I took this URL, data, and token from the POST request sent after selecting the cinema, place, time, and seats.
import urllib.parse, urllib.request

url = "https://tickets.yesplanet.co.il/YPR/SelectSeatPageRes.aspx/SetSelectedSeats?ec=10725013018-246564"
data = urllib.parse.urlencode({
    "seats": "11,19#11,20#11,21#11,22",
    "token": "246564#5#1",
})
res = urllib.request.urlopen(url, data.encode("utf8"))
print(res.read())
The link has an expiration, but this is the result:
Session Ended It appears that the session has ended before you were able to complete your purchase.
and a link to the main site: https://www.yesplanet.co.il
How do I know if my request completed successfully?
For your convenience, here is info from the Headers and Response tabs of the developer tools:
response headers:
Cache-Control:private, max-age=0
Content-Length:170
Content-Type:application/json; charset=utf-8
Date:Tue, 30 Jan 2018 01:27:26 GMT
P3P:CP="NOI ADM DEV COM NAV OUR STP"
Server:Microsoft-IIS/8.5
X-AspNet-Version:4.0.30319
X-Powered-By:ASP.NET
request headers:
Accept:application/json, text/javascript, */*; q=0.01
Accept-Encoding:gzip, deflate, br
Accept-Language:he-IL,he;q=0.9,en-US;q=0.8,en;q=0.7
Connection:keep-alive
Content-Length:44
Content-Type:application/json; charset=UTF-8
Cookie:ASP.NET_SessionId=p4citijvw3vrqxuoekqnlrhw; _ga=GA1.3.525452416.1517275557; _gid=GA1.3.1168599094.1517275557; _gat_tealium_0=1; utag_main=v_id:016144aba503001d7d72fa299b0904072001c06a00868$_sn:1$_ss:0$_st:1517277365866$ses_id:1517275555076%3Bexp-session$_pn:2%3Bexp-session; hfOIKey=CXCFcTD1; SS#246564#5#1=; SS%23246564%235%231=17%2C12%2317%2C13; hfSKey=%7C%7C%7C%7C%7C%7C%7C%7C%7C1072_res%7C10725013018-246564%7C20
Host:tickets.yesplanet.co.il
Origin:https://tickets.yesplanet.co.il
Referer:https://tickets.yesplanet.co.il/YPR/SelectSeatPageRes.aspx?dtticks=636528796178961691&cf=1004&ec=10725013018-246564
User-Agent:Mozilla/5.0 (Windows NT 6.1; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/64.0.3282.119 Safari/537.36
X-Requested-With:XMLHttpRequest
Request payload:
{seats: "16,10#16,11", token: "246564#5#1"}
And the Response tab:
{"d":"{\"ReturnCode\":0,\"Message\":null,\"Redirect\":\"/YPR/OrderFormPageRes.aspx?dtticks=636528796470870119\\u0026cf=1005\\u0026ec=10725013018-246564\",\"Data\":null}"}

The Cookie header is the key. When you send a request via XHR (i.e., from your browser), relevant cookies are automatically attached to your request.
These cookies are how sessions are usually managed, and the response message indicates that the server did not find a valid session cookie in your request.
You will need to "authorize", by logging in or otherwise beginning a session, and then insert that session cookie into your request before sending it.
After rereading, the token value is most likely not static either. My guess would be that this is engineered to prevent automated requests, and so may be difficult to circumvent.
Update in response to OP's comment ("how to send cookies inside post request"): use http.cookiejar, or read the urllib docs to figure out how to extract and then insert cookies.
You will need to study the website’s behavior in your developer tools and see which request triggers a session cookie update, and then simulate that request before you simulate your post request.
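A minimal sketch of that approach with the standard library, assuming (this is unverified) that visiting the seat-selection page is what issues the session cookie; the URLs and field values below are the ones from the question, and the exact session-starting request must be confirmed in the browser's network tab:

```python
import http.cookiejar
import urllib.parse
import urllib.request

# A CookieJar records Set-Cookie headers from responses and replays them
# on every later request made through the same opener.
jar = http.cookiejar.CookieJar()
opener = urllib.request.build_opener(urllib.request.HTTPCookieProcessor(jar))

def select_seats(page_url, post_url, seats, token):
    # 1) Visit the seat-selection page first so the server issues the
    #    session cookie; the jar stores it automatically.
    opener.open(page_url)
    # 2) Send the POST through the same opener; the stored cookies are
    #    attached to the request for us.
    body = urllib.parse.urlencode({"seats": seats, "token": token})
    return opener.open(post_url, body.encode("utf8"))
```

Note also that the live request captured above sends a JSON body (Content-Type: application/json), so the payload encoding may need to match what the browser sends, not just the cookies.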


Getting consent cookies for website login using python requests (CMP euconsent-v2)

I'm trying to log in to a website using python requests, but the webpage has a mandatory data-protection consent pop-up on the first page. I think this is why I cannot log in yet: posting your login credentials to the login URL requires these consent cookies (which are probably dynamic).
After checking the login POST request headers (via the inspection tools), it says it requires the cookies from a CMP, specifically a variable called euconsent-v2 (https://help.consentmanager.net/books/cmp/page/cookies-set-by-the-cmp). So my question is how to get these cookies (and/or other necessary cookies) from the website after accepting the consent pop-up, so I can log in.
Here is my code so far:
import requests

# Website
base_url = 'https://www.wg-gesucht.de'

# Login URL
login_url = 'https://www.wg-gesucht.de/ajax/sessions.php?action=login'

# Post headers (just a sample of all variables)
headers = {...,
           'Cookie': 'euconsent-v2=********'}

# Post params
payload = {'display_language': "de",
           'login_email_username': "******",
           'login_form_auto_login': "1",
           'login_password': "******"}

# Setup session and login
sess = requests.session()
resp_login = sess.post(login_url, data=payload, headers=headers)
UPDATE: I have searched through all recorded requests from starting up the website to logging in, and the only mention of euconsent-v2 is in the response to this:
cookie_url = 'https://cdn.consentmanager.mgr.consensu.org/delivery/cmp_en.min.js'
referer = 'https://www.wg-gesucht.de'
headers = {'Referer': referer,
           'User-Agent': 'Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/87.0.4280.66 Safari/537.36'}

sess = requests.session()
resp_init = sess.get(cookie_url, headers=headers)
But I still cannot get the required cookies.
The best way would be to create a session, then request all the pages that set the cookies you need. With all the cookies collected in the session, you then request the login page.
https://help.consentmanager.net/books/cmp/page/cookies-set-by-the-cmp
On the right-hand side of that page, the location of each cookie is listed.
As an example (from a random site/URL): the response headers of a single request may set two cookies. The session saves all the cookies, and once you have all the mandatory ones, you make the request to the login page with the POST data.
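A sketch of that flow with requests.Session, under the (untested) assumption that plain GETs to the start page and the CMP script URL are what set the needed cookies; the URLs and form field names are the asker's:

```python
import requests

def login_with_consent(username, password):
    sess = requests.Session()
    # Visit the pages that set cookies first; the Session object stores
    # every Set-Cookie it sees and sends the cookies back automatically.
    sess.get('https://www.wg-gesucht.de')
    sess.get('https://cdn.consentmanager.mgr.consensu.org/delivery/cmp_en.min.js')

    # If the consent cookie is set by JavaScript rather than by a
    # response header, it can be injected manually once its value is known:
    # sess.cookies.set('euconsent-v2', '<value>', domain='www.wg-gesucht.de')

    payload = {'display_language': 'de',
               'login_email_username': username,
               'login_form_auto_login': '1',
               'login_password': password}
    return sess.post('https://www.wg-gesucht.de/ajax/sessions.php?action=login',
                     data=payload)
```

One caveat: euconsent-v2 is written by the CMP's JavaScript in a real browser, so a plain GET of the script may never set it (which matches the asker's update); in that case the manual sess.cookies.set line is the way in.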

How to post a form request in Python

I am trying to fill out a form like this and submit it automatically. To do that, I sniffed the packets while logging in.
POST /?pg=ogrgiris HTTP/1.1
Host: xxx.xxx.com
Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
Accept-Language: en-us
Accept-Encoding: gzip, deflate
Content-Type: application/x-www-form-urlencoded
Origin: http://xxx.xxx.com
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10_14) AppleWebKit/605.1.15 (KHTML, like Gecko) Version/12.0 Safari/605.1.15
Referer: http://xxx.xxx.com/?pg=ogrgiris
Upgrade-Insecure-Requests: 1
DNT: 1
Content-Length: 60
Connection: close
seviye=700&ilkodu=34&kurumkodu=317381&ogrencino=40&isim=ahm
I replayed that packet with Burp Suite and saw that it works properly; the response was the HTML of the member page.
Now I tried to do the same in Python. The code is below:
import requests

url = 'http://xxx.xxx.com/?pg=ogrgiris'
headers = {'Host': 'xxx.xxx.com',
           'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8',
           'Accept-Encoding': 'gzip, deflate',
           'Content-Type': 'application/x-www-form-urlencoded',
           'Referer': 'http://xxx.xxx.com/?pg=ogrgiris',
           'Content-Lenght': '60',
           'Connection': 'close'}
credentials = {'seviye': '700', 'ilkodu': '34', 'kurumkodu': '317381',
               'ogrecino': '40', 'isim': 'ahm'}
r = requests.post(url, headers=headers, data=credentials)
print(r.content)
The problem is that the code prints the HTML of the login page, even though I send all of the credentials needed to log in. How can I get the member page? Thanks.
If the POST request displays a page with the content you want, then the problem is only the format of the data you send: it must go as form data (application/x-www-form-urlencoded), not as JSON.
If a session is created by an earlier request and you have to make another request for the data you want, then you have to deal with cookies.
Problem with data format:
r = requests.post(url, headers=headers, data=credentials)
The json= kwarg creates a request body as follows:
{"ogrecino": "40", "ilkodu": "34", "isim": "ahm", "kurumkodu": "317381", "seviye": "700"}
While data= creates a request body like this:
seviye=700&ilkodu=34&kurumkodu=317381&ogrencino=40&isim=ahm
You can try https://httpbin.org:
from requests import post

msg = {"a": 1, "b": True}
# data= sends form data; httpbin echoes it as a JSON object under the `form` key.
print(post("https://httpbin.org/post", data=msg).json())
# json= sends a JSON body; httpbin echoes it as a string under the `data` key.
print(post("https://httpbin.org/post", json=msg).json())
If your goal is to replicate the sample request, you are missing a lot of the headers. This one in particular is very important: Content-Type: application/x-www-form-urlencoded, because it tells your HTTP client how to format/encode the payload.
Check the requests documentation to see how these form POSTs work.
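The data=/json= difference can also be inspected locally, without hitting any server, by preparing a request instead of sending it; the URL here is just a placeholder:

```python
import requests

sample = {"a": "1", "b": "2"}

# Build the same POST two ways, prepared but never sent.
form_req = requests.Request("POST", "http://example.invalid/", data=sample).prepare()
json_req = requests.Request("POST", "http://example.invalid/", json=sample).prepare()

print(form_req.headers["Content-Type"])  # application/x-www-form-urlencoded
print(form_req.body)                     # a=1&b=2
print(json_req.headers["Content-Type"])  # application/json
print(json_req.body)                     # {"a": "1", "b": "2"} (bytes or str depending on requests version)
```

The server in the question expects the form-encoded shape, which is exactly what data= produces.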

Cannot get cookies with python requests while postman, curl, and wget work

I'm trying to authenticate on a French water provider's website to get my water consumption data. The website does not provide any API, so I'm writing a Python script that authenticates on the website and crawls the data. My work is based on a working Domoticz Python script and a shell script.
The workflow is the following:
1. Get a token from the website.
2. Authenticate with the login, password, and token from step 1.
3. Get one or more cookies from step 2.
4. Get the data using the cookie(s) from step 3.
I'm stuck at step 2, where I can't get the cookies with my Python script. I tried with Postman, curl, and wget, and it works. I even used the Python code generated by Postman, and I still get no cookies.
Here is a screenshot of my Postman POST request (not reproduced here), which shows two cookies in the response.
And here is my python code:
import requests

url = "https://www.toutsurmoneau.fr/mon-compte-en-ligne/je-me-connecte"
querystring = {"_username": "mymail#gmail.com",
               "_password": "mypass",
               "_csrf_token": "knfOIFZNhiCVxHS0U84GW5CrfMt36eLvqPPYGDSsOww",
               "signin[username]": "mymail#gmail.com",
               "signin[password]": "mypass",
               "tsme_user_login[_username]": "mymail#gmail.com",
               "tsme_user_login[_password]": "mypass"}
payload = ""
headers = {
    'Accept': "application/json, text/javascript, */*; q=0.01",
    'Content-Type': "application/x-www-form-urlencoded",
    'Accept-Language': "fr,fr-FR;q=0.8,en;q=0.6",
    'User-Agent': "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_6_8) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/61.0.3163.100 Mobile Safari/537.36",
    'Connection': "keep-alive",
    'cache-control': "no-cache",
    'Postman-Token': "c7e5f7ca-abea-4161-999a-3c28ec979628"
}

response = requests.request("POST", url, data=payload, headers=headers, params=querystring)
print(response.cookies.get_dict())
The output is {}.
I cannot figure out what I'm doing wrong.
If you have any help to provide, I'll be happy to receive it.
Thanks for reading.
Edit:
Some of my assumptions were wrong. The shell script was indeed working, but not Postman; I was confused by the 200 response I received.
So I'll answer my own question.
First, when getting the token at step 1, I receive a cookie. I'm supposed to use this cookie when logging in, which I did not do before.
Then, when using this cookie and the token to log in at step 2, I could not see any cookie in the response even though I was logged in (the content includes a "disconnect" string, which is only there when logged in). In my case that's normal behavior: no new cookie was set in the response to the POST request.
I had to create a requests.Session to post my login form, and the session stores the cookie.
Now I'm able to use it to grab the data from the server.
Hope that helps others.
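The key point, that a requests.Session replays the cookies it has stored on every later request, can be sketched offline; the cookie name eZSESSID and its value here are hypothetical stand-ins for whatever step 1 actually sets:

```python
import requests

sess = requests.Session()
# Pretend step 1 (the token request) set a session cookie:
sess.cookies.set("eZSESSID", "abc123", domain="www.toutsurmoneau.fr")

# Any later request prepared through this session carries the cookie back.
req = requests.Request(
    "POST",
    "https://www.toutsurmoneau.fr/mon-compte-en-ligne/je-me-connecte",
    data={"_username": "user", "_password": "secret"},
)
prepared = sess.prepare_request(req)
print(prepared.headers.get("Cookie"))  # eZSESSID=abc123
```

This is why the standalone requests.request("POST", ...) call above never worked: each call starts with an empty cookie jar.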

Where is the csrftoken stored in Django database?

Where is the csrftoken stored?
When I access an API endpoint (the logout API; it does not need any params):
POST /rest-auth/logout/ HTTP/1.1
Host: 10.10.10.105:8001
Connection: keep-alive
Content-Length: 0
Accept: application/json, text/plain, */*
Origin: http://localhost:8080
Authorization: Token 0fe2977498e51ed12ddc93026b08ab0b1a06a434
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10_12_6) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/65.0.3325.146 Safari/537.36
Referer: http://localhost:8080/register
Accept-Encoding: gzip, deflate
Accept-Language: zh-CN,zh;q=0.9,en;q=0.8
Cookie: sessionid=b95zopro0qvkrexj8kq6mzo1d3z2hvbl; csrftoken=z53lKL0f7VHkilYS5Ax8FMaQCU2ceouje9OeTJOgTy4gH0UgHVltAlOe2KFNNNB6
The header is shown above. In the response I get an error:
{"detail":"CSRF Failed: CSRF token missing or incorrect."}
So the backend must have verified the csrftoken.
In the backend database, however, I cannot find a csrftoken field.
So I want to know: is it saved in the encrypted session_data?
Given this Q&A in the Django docs, you can see that the framework by default uses the Double Submit Cookie approach (rather than the synchronizer token pattern).
This approach does not require the server to store the CSRF token: the only check it does is comparing the token in the cookie with the one in the header (or parameter) and verifying that they are equal.
The synchronizer pattern, on the other hand, does store the CSRF token somewhere on the server, and for each request it verifies the token's validity by comparing it with the one sent in the header (or, as before, in a POST parameter).
You can read more about the two approaches here.
I guess you are testing your API with a web-service testing application, in which case you are missing the second copy of the token somewhere in your request.
This section explains how to place the token for AJAX calls:
AJAX
While the above method can be used for AJAX POST requests, it has some inconveniences: you have to remember to pass the CSRF token in as POST data with every POST request. For this reason, there is an alternative method: on each XMLHttpRequest, set a custom X-CSRFToken header to the value of the CSRF token. This is often easier, because many JavaScript frameworks provide hooks that allow headers to be set on every request.
Seeing your request above, therefore you should place this header (with the value of the current token, of course):
X-CSRFToken: z53lKL0f7VHkilYS5Ax8FMaQCU2ceouje9OeTJOgTy4gH0UgHVltAlOe2KFNNNB6
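For a script, the double-submit check simply means echoing the cookie value back in that header. A minimal sketch with requests (the token value is the one from the request above; in a real flow a prior GET to a Django-served page would set the cookie, here it is seeded manually so the sketch is self-contained):

```python
import requests

sess = requests.Session()
# Seed the cookie that Django would normally set on a prior GET.
sess.cookies.set("csrftoken",
                 "z53lKL0f7VHkilYS5Ax8FMaQCU2ceouje9OeTJOgTy4gH0UgHVltAlOe2KFNNNB6")

# Echo the cookie value back in the X-CSRFToken header of the POST.
token = sess.cookies.get("csrftoken")
headers = {
    "X-CSRFToken": token,
    "Authorization": "Token 0fe2977498e51ed12ddc93026b08ab0b1a06a434",
}
# sess.post("http://10.10.10.105:8001/rest-auth/logout/", headers=headers)
```

The server never looks the token up in the database; it only checks that the header value matches the cookie value, which is why no csrftoken field exists in any table.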

python-requests gives me a different response from what I see in the browser. Why?

I want to get data from this site.
When I request the main URL, I get an HTML file that contains the structure but not the values.
import requests
from bs4 import BeautifulSoup

url = 'http://option.ime.co.ir/'
r = requests.get(url)
soup = BeautifulSoup(r.text, 'lxml')  # pass r.text, not the Response object
print(soup.prettify())
I found out that the site gets its values from
url1 = 'http://option.ime.co.ir/GetTime'
url2 = 'http://option.ime.co.ir/GetMarketData'
When I view the responses from those URLs in the browser, I see a JSON response and the time in a specific format,
but when I use requests to get the data, it gives me the same HTML that I get from the main URL.
Do you know the reason? How should I get the responses that I see in the browser?
I checked the headers for all the URLs and didn't find anything special that I should send with my request.
You have to provide the proper HTTP headers in the request. In my case, I was able to make it work using the following headers. Note that in my testing the HTTP response was a 200 OK rather than a redirect to the root website (as happened when no HTTP headers were provided in the request).
Raw HTTP Request:
GET http://option.ime.co.ir/GetTime HTTP/1.1
Host: option.ime.co.ir
Referer: "http://option.ime.co.ir/"
Accept: "application/json, text/plain, */*"
User-Agent: "Mozilla/5.0 (Windows NT 6.1; rv:45.0) Gecko/20100101 Firefox/45.0"
This should give you the proper JSON response you need.
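The raw request above translates to requests roughly as follows (the quotes around values in the raw dump are dropped, since requests adds none; whether all three headers are strictly required is untested here):

```python
import requests

headers = {
    "Referer": "http://option.ime.co.ir/",
    "Accept": "application/json, text/plain, */*",
    "User-Agent": "Mozilla/5.0 (Windows NT 6.1; rv:45.0) Gecko/20100101 Firefox/45.0",
}

def get_time():
    # allow_redirects=False exposes a 302 instead of silently following it,
    # making it easy to see whether the server accepted the request.
    return requests.get("http://option.ime.co.ir/GetTime",
                        headers=headers, allow_redirects=False)
```

Checking r.status_code on the result tells you whether you got the JSON (200) or were bounced back to the root page (302).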
Your first connection from the browser gets a 302 Redirect response (to the same URL).
The page then runs some JS, so the second request no longer redirects and gets the expected JSON.
This is a common technique to stop other people from using the API without permission.
Tick the "Preserve log" checkbox in the dev tools so you can see this for yourself.
