Python: sending a request with headers and cookies

I am sending a Python request to a server that requires authentication in order to download a file. I am trying to send both a cookie and a header with the request. What is the right format for this request?
From the Chrome developer tools, I can see the request headers the browser sends.
The Python request:
import requests

Session = requests.Session()
cookies = browser.get_cookies()  # `browser` is a Selenium WebDriver instance
response = Session.get(url)      # `url` is the file URL, defined elsewhere
tt = "ASPSESSIONIDSGSRSATR"
cookie = {tt: Session.cookies.get_dict().get(tt, ""),
          cookies[2].get("name", ""): cookies[2].get("value", ""),
          cookies[0].get("name", ""): cookies[0].get("value", "")}
header = {"Host": "ecsxxxxxxxxxxxxxxxxxxxxx",
          "Connection": "keep-alive",
          "Accept": "text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,*/*;q=0.8",
          "User-Agent": "Mozilla/5.0 (Windows NT 6.1; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/41.0.2272.89 Safari/537.36",
          "Referer": "https://ecsxxxxxxxxxxxxxxxxxxxxx",
          "Accept-Encoding": "gzip, deflate, sdch",
          "Accept-Language": "en-US,en;q=0.8,fr;q=0.6"}
response = Session.get(url, cookies=cookie, headers=header)
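As an aside, a requests Session already stores and resends cookies it receives, so the cookie dict often doesn't need to be rebuilt by hand. A minimal sketch (the cookie name is taken from the question; the URL is a placeholder), using a prepared request to show what would be sent without hitting the network:

```python
import requests

session = requests.Session()
# Cookies set on the session are attached to every request it prepares
session.cookies.set("ASPSESSIONIDSGSRSATR", "example-session-id")
session.headers.update({"User-Agent": "Mozilla/5.0"})

# prepare_request() shows the outgoing headers without sending anything
prepared = session.prepare_request(
    requests.Request("GET", "https://example.com/protected/file"))
print(prepared.headers["Cookie"])      # ASPSESSIONIDSGSRSATR=example-session-id
print(prepared.headers["User-Agent"])  # Mozilla/5.0
```

After a real `Session.get(url)` that logs in, later requests on the same session carry the server-set cookies automatically.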

Comments: ... how does that accept the username/password?
Using the Python Kerberos Module
Kerberos Basics
When setting up Kerberos authentication on a server, there are two basic modes of operation.
The simplest from a client implementation point of view just uses Basic Auth to pass a username and password to the server, which then checks them with the Kerberos realm.
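From the client's side, that first mode is ordinary HTTP Basic Auth. A sketch (URL and credentials are placeholders); preparing the request shows the Authorization header that would carry the credentials, without sending anything:

```python
import requests
from requests.auth import HTTPBasicAuth

# The server, not the client, talks to the Kerberos realm; the client just
# sends Basic Auth credentials in the Authorization header.
req = requests.Request("GET", "https://example.com/protected",
                       auth=HTTPBasicAuth("alice", "secret")).prepare()
print(req.headers["Authorization"])  # Basic YWxpY2U6c2VjcmV0
```

In a real call this is simply `requests.get(url, auth=("alice", "secret"))`.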
Comments: it's a kerberos server
There is an optional Kerberos/GSSAPI authentication library for requests (requests-kerberos). It adds Kerberos/GSSAPI authentication support, including mutual authentication.
Basic GET usage:
import requests
from requests_kerberos import HTTPKerberosAuth
r = requests.get("http://example.org", auth=HTTPKerberosAuth())
SO: session auth in python

Related

How can I use Session.mount to make site calls?

I need to make multiple calls using threading, but after mounting the gateway on the session I get a bad request error over the HTTP protocol.
# ApiGateway here is presumably requests_ip_rotator.ApiGateway
gateway = ApiGateway("https://my.com/plp_search_v2", access_key_id=aws_access_key_id,
                     access_key_secret=aws_secret_access_id, regions=EXTRA_REGIONS)
url = "https://my.com/plp_search_v2"
header={'User-Agent':'Mozilla/5.0 (Windows NT 10.0; Win64; x64) \
AppleWebKit/537.36 (KHTML, like Gecko) Chrome/92.0.4515.159 Safari/537.36'}
params = {
    "key": "9f36aeafbe60771e321a7cc95a78140772ab3e96",
    "category": "ziogr",
    "channel": "WEB",
    "count": "24",
}
session = requests.Session()
session.mount("https://", gateway)
session.mount("http://", gateway)
response = session.get(url, params=params, headers=header)
I am trying to use the AWS API Gateway to make calls to that URI, and I don't know why the code works without the session.mount lines but not with them.
The service always returns a 400 Bad Request, and we need to make multiple calls through the AWS proxy to retrieve some information.
Is there a way to improve these calls so that each API call succeeds?
I used the urllib.parse library to strip the path from the URL, and then redid the gateway mount on the session.
It looks like this:
from urllib.parse import urlparse

url = "https://redsky.target.com/redsky_aggregations/v1/web/plp_search_v2"
src_parsed = urlparse(url)
src_nopath = "%s://%s" % (src_parsed.scheme, src_parsed.netloc)  # scheme + host, no path
session = requests.Session()
session.mount(src_nopath, gateway)  # mount the gateway at the site root rather than the full path
response = session.get(url, stream=True, params=params, headers=header)
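The reason the root-only prefix works: requests picks the transport adapter for a URL by longest-prefix match against the mounted prefixes, so an adapter mounted at the full path only matches URLs under that path, while one mounted at scheme://host matches everything on the site. A sketch using the standard HTTPAdapter as a stand-in for the gateway object:

```python
import requests
from requests.adapters import HTTPAdapter

session = requests.Session()
adapter = HTTPAdapter()  # stand-in for the ApiGateway object
session.mount("https://redsky.target.com", adapter)

# get_adapter() returns whichever mounted adapter has the longest
# matching prefix for the URL -- here, ours beats the default "https://"
url = "https://redsky.target.com/redsky_aggregations/v1/web/plp_search_v2"
print(session.get_adapter(url) is adapter)  # True
```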

Python urllib or requests library get stuck to open certain URLs?

I am trying to send an HTTP GET request to a certain website, for example https://www.united.com, but it gets stuck with no response.
Here is the code:
from urllib.request import urlopen
url = 'https://www.united.com'
resp = urlopen(url, timeout=10)
It times out every time, but the same code works for other URLs, for example https://www.aa.com.
So I wonder what it is about https://www.united.com that keeps my HTTP request from getting through. Thank you!
Update:
Adding a request header still doesn't work for this site:
import urllib.request
from urllib.request import urlopen

url = 'https://www.united.com'
req = urllib.request.Request(
    url,
    data=None,
    headers={
        'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/91.0.4472.77 Safari/537.36'
    }
)
resp = urlopen(req, timeout=3)
The server at united.com might respond only to certain User-Agent strings or request headers and block everything else, so you have to send headers or a User-Agent string that their server allows. This varies from website to website: sites that want to add extra security to their applications can be very specific about which user agents may access a resource.
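One way to test that theory is to build the urllib request with a full set of browser-like headers and inspect exactly what would be sent. The header values below are copied from a desktop Chrome; which combination, if any, a given site accepts is trial and error, and some sites block on things headers can't fix, such as TLS fingerprints:

```python
import urllib.request

req = urllib.request.Request(
    "https://www.united.com",
    headers={
        "User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) "
                      "AppleWebKit/537.36 (KHTML, like Gecko) "
                      "Chrome/91.0.4472.77 Safari/537.36",
        "Accept": "text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8",
        "Accept-Language": "en-US,en;q=0.9",
    },
)
# header_items() lists the headers urllib would send (note that urllib
# normalizes the key capitalization), without opening a connection
for name, value in req.header_items():
    print(name, ":", value[:40])
# urlopen(req, timeout=10) would then actually send the request
```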

Python3 - Requests - BS4 - Cloudflare -> 403 Forbidden when not using a local proxy

My code isn't working. It gets a 403 error because the site uses Cloudflare.
When I use an HTTP proxy (Burp Suite, Fiddler, etc.), I can see the csrfToken and it works.
Why does it work when I use a local proxy?
import requests
from bs4 import BeautifulSoup

headerIstek = {
    "User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/70.0.3538.102 Safari/537.36 Edge/18.19041",
    "Sec-Fetch-Site": "none",
    "Accept-Language": "tr-TR,tr;q=0.9,en-US;q=0.8,en;q=0.7"
}
istekLazim = {"ref": "", "display_type": "popup", "loc": ""}
istekLogin = requests.get("https://www.example.com/join/login-popup/",
                          headers=headerIstek, cookies={"ud_rule_vars": ""},
                          params=istekLazim, verify=False)
soup = BeautifulSoup(istekLogin.text, "html.parser")
print(istekLogin.request.headers)
csrfToken = soup.find("input", {"name": "csrfmiddlewaretoken"})["value"]
print(csrfToken)
Cloudflare performs JavaScript checks on the browser and returns a session if the checks have been successful. If you want to run a one-off script to download stuff off of a CloudFlare protected server, add a session cookie from a previously validated session you obtained using your browser.
The session cookie is named __cfduid. You can get it by fetching a resource using your browser and then opening the developer tools and the network panel. Once you inspect the request, you can see the cookies your browser sent to the server.
Then you can use that cookie for requests using your script:
cookies = {
    "__cfduid": "xd0c0985ed80ffbc4dd29d1612168766",
}
response = requests.get(image_url, cookies=cookies)
response.raise_for_status()

How to implement ajax request using Python Request

I'm trying to log into a website using Python Requests. Unfortunately, it always shows this error when I print the response content:
b'<head><title>Not Acceptable!</title></head><body><h1>Not Acceptable!</h1><p>An appropriate representation of the requested resource could not be found on this server. This error was generated by Mod_Security.</p></body></html>
For reference, my code:
from requests import Session
import requests
INDEX_URL = 'https://phpzag.com/demo/ajax_login_script_with_php_jquery/index.php'
URL = 'https://phpzag.com/demo/ajax_login_script_with_php_jquery/welcome.php'
LOGIN_URL = 'https://phpzag.com/demo/ajax_login_script_with_php_jquery/login.php' # Or whatever the login request url is
payload = {'user_email': 'test#phpzag.com','password':'test'}
s = requests.Session()
user_agent = {'User-agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/81.0.4044.129 Safari/537.36'}
t=s.post(LOGIN_URL, data=payload, headers=user_agent)
r=s.get('https://phpzag.com/demo/ajax_login_script_with_php_jquery/welcome.php',headers=user_agent,cookies=t.cookies.get_dict())
print(r.content)
May I know what is missing, and how can I get the HTML of the welcome page from this?
UPDATE
I'm trying to make an API call after login authentication. However, the login authentication is not succeeding, so I cannot get the response of the API call. I think it is failing because of multi-factor authentication. How can I implement this?
For example: www.abc.com is the URL of the website. The login is done through a JS form submission, hence the URL specified in the AJAX part. On success, a third authentication party (Okta) also verifies the credentials before finally reaching the home page. Then I need to call the real API for my task.
But it is not working.
import requests


class Login:
    def sendRequestWithAuthentication(self, loginDetails, requestDetails):
        user_agent = {'User-agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/81.0.4044.129 Safari/537.36'}
        action_url = loginDetails.get('action_url')
        pay_load = loginDetails.get('payload')
        session_requests = requests.session()
        if action_url and pay_load:
            act_resp = session_requests.post(action_url, data=pay_load,
                                             headers=user_agent, verify=False, files=[])
            print(act_resp)
            auth_cookies = act_resp.cookies.get_dict()
        # note: auth_cookies is only defined when the login POST above ran
        url, method, request_payload = requestDetails['url'], requestDetails['method'], requestDetails['payload']
        querystring = requestDetails['querystring']
        response = session_requests.get(url, headers=user_agent, cookies=auth_cookies,
                                        data=request_payload, params=querystring)
        print(response)
        return response.json()
In the above, the action URL is the API given in the AJAX part, and in the second request the URL is the API address for that GET.
In short, how can I implement multi-factor authentication with Python Requests?
My doubts:
Do we need to include the cookies from the login form page in the login request?
How do I implement multi-factor authentication with Python Requests? (Here we don't need a PIN or anything; it is done through RSA.) Is a certificate needed for login, as it now raises "unable to validate the SSL certificate"?
Can you give a dummy example API that implements this kind of scenario?
No, you are making it too complex. This code worked:
import requests

login_url = "https://phpzag.com/demo/ajax_login_script_with_php_jquery/login.php"
welcome_url = "https://phpzag.com/demo/ajax_login_script_with_php_jquery/welcome.php"
payload = 'user_email=test#phpzag.com&password=test&login_button='
login_headers = {
    'x-requested-with': 'XMLHttpRequest',
    'Content-Type': 'application/x-www-form-urlencoded',  # it's urlencoded instead of form-data
    'User-agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/81.0.4044.129 Safari/537.36',
}

s = requests.Session()
login = s.post(login_url, headers=login_headers, data=payload)  # POST the login form
welcome = s.get(welcome_url, headers=login_headers)
print(welcome.text)
Result:
.....Hello, <br><br>Welcome to the members page.<br><br>
TL;DR
Change the part of your code that says data=payload to json=payload, and it should work.
Direct answer to your question
How [does one] implement [an] AJAX request using Python Requests?
You cannot do that. An AJAX request is specifically referring to a Javascript-based HTTP request. To quote from W3 school's AJAX introduction page, "AJAX = Asynchronous JavaScript And XML".
Indirect answer to your question
What I believe you're asking is how to perform auth/login HTTP requests using the popular Python package, requests. The short answer, unfortunately and like most things, is that it depends. Different auth pages handle auth requests differently, so you might have to do different things to authenticate against a specific web service.
Based on your code
Based on your code, I'm going to assume that the login page expects a POST request with the authentication details (e.g. credentials) as a JSON object, and that the 406 response means you're sending data with headers that don't align with what the server wants to accept.
When using requests, the data parameter sends the data "raw": binary content goes as-is, and a dict is translated to standard HTML form data (e.g. key1=value1&key2=value2&key3=value3, which has the MIME type application/x-www-form-urlencoded; this is what requests sends when a dict is passed as data). My educated guess, given that you put your credentials into a dictionary, is that the login form expects a POST request with a JSON-formatted body (most modern web apps do), and that you were under the impression that passing the dict via the data parameter would produce a JSON object. This is a common gotcha/misconception with requests that has bitten me before. What you want instead is to pass the data using the json parameter.
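The difference is easy to see without sending anything, by preparing the same payload both ways (URL and credentials are placeholders):

```python
import requests

payload = {"user_email": "user@example.com", "password": "secret"}

# data= with a dict produces a form-encoded body
form_req = requests.Request("POST", "https://example.com/login",
                            data=payload).prepare()
print(form_req.headers["Content-Type"])  # application/x-www-form-urlencoded
print(form_req.body)                     # user_email=user%40example.com&password=secret

# json= serializes the dict to JSON and sets the Content-Type for you
json_req = requests.Request("POST", "https://example.com/login",
                            json=payload).prepare()
print(json_req.headers["Content-Type"])  # application/json
print(json_req.body)                     # the JSON-serialized body
```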
Your code:
from requests import Session
import requests
INDEX_URL = 'https://phpzag.com/demo/ajax_login_script_with_php_jquery/index.php'
URL = 'https://phpzag.com/demo/ajax_login_script_with_php_jquery/welcome.php'
LOGIN_URL = 'https://phpzag.com/demo/ajax_login_script_with_php_jquery/login.php' # Or whatever the login request url is
payload = {'user_email': 'test#phpzag.com','password':'test'}
s = requests.Session()
user_agent = {'User-agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/81.0.4044.129 Safari/537.36'}
t=s.post(LOGIN_URL, data=payload, headers=user_agent)
r=s.get('https://phpzag.com/demo/ajax_login_script_with_php_jquery/welcome.php',headers=user_agent,cookies=t.cookies.get_dict())
print(r.content)
Fixed (and cleaned up) code:
#!/usr/bin/env python3
# -*- coding: utf-8 -*-
"""
Test script to login to php web app.
"""
import requests
INDEX_URL = 'https://phpzag.com/demo/ajax_login_script_with_php_jquery/index.php'
URL = 'https://phpzag.com/demo/ajax_login_script_with_php_jquery/welcome.php'
LOGIN_URL = 'https://phpzag.com/demo/ajax_login_script_with_php_jquery/login.php' # Or whatever the login request url is
payload = {
    'user_email': 'test#phpzag.com',
    'password': 'test'
}
headers = {
    'User-agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/81.0.4044.129 Safari/537.36'
}
session = requests.Session()
auth_response = session.post(
    url=LOGIN_URL,
    json=payload,  # <--- THIS IS THE IMPORTANT BIT. Note: data param changed to json param
    headers=headers  # (was headers=user_agent, which is not defined in this script)
)
response = session.get(
    'https://phpzag.com/demo/ajax_login_script_with_php_jquery/welcome.php',
    headers=headers,
    cookies=auth_response.cookies.get_dict()  # TODO: probably unnecessary, since the session object already maintains cookies across requests
)
print(response.content)
Check out this section of the requests documentation on POST requests; if you scroll down a bit from there, you'll see the docs discuss the GitHub API, which expects JSON, and how to handle that.
Auth can be tricky overall. Sometimes a service wants "basic auth", which requests expects you to pass as a tuple to the auth parameter; sometimes it wants a bearer token or a full OAuth flow, which can get painfully complicated.
Hope this helps!
You are missing the User-Agent header that the server (Apache?) requires.
Try this:
import requests
from requests import Session
URL = 'https://phpzag.com/demo/ajax_login_script_with_php_jquery/welcome.php'
LOGIN_URL = 'https://phpzag.com/demo/ajax_login_script_with_php_jquery/login.php' # Or whatever the login request url is
payload = {'user_email': 'test#phpzag.com','password':'test'}
user_agent = {'User-agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/81.0.4044.129 Safari/537.36'}
s = requests.Session()
x=s.get(URL, headers=user_agent)
x=s.post(LOGIN_URL, data=payload, headers=user_agent)
print(x.content)
print(x.status_code)
Take a look at Requests: Basic Authentication
import requests
requests.post(URL, auth=('user', 'pass'))
# If there are some cookies you need to send
cookies = dict(cookies_are='working')
requests.post(URL, auth=('user', 'pass'), cookies=cookies)

Python httplib2: HTTPS login fails

I am trying to use httplib2 to log in to a web page. I am able to log in to the page by simply opening the following URL in a Chrome incognito window:
https://domain.com/auth?name=USERNAME&pw=PASSWORD
I tried the following code to emulate this login with httplib2:
from httplib2 import Http
h = Http(disable_ssl_certificate_validation=True)
resp, content = h.request('https://domain.com/auth?name=USERNAME&pw=PASSWORD')
Unfortunately, this request does not lead to a successful login.
I tried changing the request headers to match those provided by Chrome:
headers = {
    'Host': 'domain.com',
    'Connection': 'keep-alive',
    'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,*/*;q=0.8',
    'User-Agent': 'Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/30.0.1599.101 Safari/537.36',
    'Accept-Encoding': 'gzip,deflate,sdch',
    'Accept-Language': 'en-US,en;q=0.8'
}
resp, content = h.request('https://domain.com/auth?name=USERNAME&pw=PASSWORD', 'GET', headers=headers)
This changes the response slightly, but still does not lead to a successful login.
I tried inspecting the actual network traffic with Wireshark but since it's HTTPS and thus encrypted, I can't see the actual traffic.
Does anybody know what the difference in requests between Chrome and httplib2 could be? Maybe httplib2 changes some of my headers?
Following Games Brainiac's comment, I ended up simply using Python Requests instead of httplib2. The following requests code works out of the box:
import requests
session = requests.Session()
response = session.get('https://domain.com/auth?name=USERNAME&pw=PASSWORD')
Further requests with the same username/password can simply be performed on the Session object:
...
next_response = session.get('https://domain.com/someOtherPage')
