Python requests: get all other requests after sending a POST

I am logging in to a website with Python requests by sending a POST with the required data.
I am trying to capture the other HTTP requests that follow that POST.
Is there a way to do it?
If I log in manually in a browser, I can see all the other requests that are sent after logging in (the login itself is the first POST in the screenshot); I want to grab them all (the ones marked in green in the screenshot).

I assume that when you log in, a new HTML page is returned to your web browser.
While this page is rendered, resources such as images or JavaScript files are requested from the server. With Selenium you can automate user interactions with a web browser and log the traffic, as described in this example.
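For instance, here is a minimal sketch using Selenium 4 with Chrome's performance log to record the network requests fired during and after a login; the URL and the form-field names are placeholder assumptions, not values from the question:

import json
from selenium import webdriver
from selenium.webdriver.chrome.options import Options
from selenium.webdriver.common.by import By

options = Options()
# Enable Chrome's performance log, which records DevTools network events.
options.set_capability("goog:loggingPrefs", {"performance": "ALL"})
driver = webdriver.Chrome(options=options)

driver.get("https://example.com/login")  # placeholder URL
driver.find_element(By.NAME, "username").send_keys("user")  # placeholder field
driver.find_element(By.NAME, "password").send_keys("secret")  # placeholder field
driver.find_element(By.CSS_SELECTOR, "button[type=submit]").click()

# Each performance-log entry wraps a DevTools event as a JSON string;
# Network.requestWillBeSent fires once per outgoing request.
for entry in driver.get_log("performance"):
    event = json.loads(entry["message"])["message"]
    if event["method"] == "Network.requestWillBeSent":
        print(event["params"]["request"]["method"], event["params"]["request"]["url"])

driver.quit()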

Related

Python requests - how to perform SAML SSO login (to login.microsoft.com for example)?

First of all, I googled this question but only found generic explanations that didn't give me a good understanding of how to do things.
Second, I'm a valid system user (not an admin) and have access to the data. That is, I have valid user credentials and can download the file manually, but as a small automation I would like a Python script on my PC to download it.
The download itself is simple; the only thing is that I need to provide a valid session-id cookie with the request. So, in the end, I need to obtain this cookie in the easiest way possible.
If my understanding of the SAML terms is right, I'm a User Agent and want to download a file from a Service Provider, which needs to authenticate me with an Identity Provider (Microsoft). Usually I do this via a browser, and I'm currently able to emulate it with the help of PySide6 (QWebEngineView). I load the target URL in QWebEngineView first. It is effectively a small embedded web browser: it redirects me to login.microsoft.com, asks for credentials, then redirects me back to the Service Provider site and sets the session-id cookie. After that I can use this cookie with my requests. It works, but I would like to get rid of the GUI (PySide) if possible.
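For reference, a sketch of how the session cookie can be captured from the embedded browser; the cookie name and the URL are placeholders, not the real values:

import sys
from PySide6.QtCore import QUrl
from PySide6.QtWidgets import QApplication
from PySide6.QtWebEngineWidgets import QWebEngineView

app = QApplication(sys.argv)
view = QWebEngineView()

def on_cookie_added(cookie):
    # Fires for every cookie the embedded browser receives.
    name = bytes(cookie.name()).decode()
    if name == "session_id":  # placeholder cookie name
        print("captured:", name, "=", bytes(cookie.value()).decode())

view.page().profile().cookieStore().cookieAdded.connect(on_cookie_added)
view.load(QUrl("https://sp.example.com/protected/report.csv"))  # placeholder URL
view.show()
app.exec()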
I decided to replicate the flow the browser performs and failed almost at the beginning. What happens:
I request the file from my Service Provider with a usual GET request.
The Service Provider replies with an HTML page (instead of the target file), as I'm not authenticated.
This HTML page contains JavaScript triggered by an onPageLoad event; the script simply redirects the browser to login.microsoft.com (a long URL with some parameters).
The next request to that long login.microsoft.com URL ends with "302 Moved Temporarily" and the very same URL in the "Location" header. And when I follow that URL, it again gives me a 302 with the same URL.
In the same scenario the browser gets only two redirects and finally receives the URL of the microsoft.com page that asks for login and password.
I understand that I should supply some additional headers/cookies when I follow the URL provided in the "Location" header of the 302 response. But... I have no idea what login.microsoft.com expects here.
So my question is: is there any source where this message flow is described? Or maybe someone has already done this and can advise me on how to proceed?
I found some SAML-related libraries for Python, but they involve quite complex configuration with X.509 certificates and more; it looks like they are targeted at implementation on the Service Provider side rather than at external login.
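As a starting point for debugging, a small sketch that reproduces the loop with automatic redirects disabled, so each hop's status, Location and Set-Cookie headers become visible; the Service Provider URL is a placeholder:

import requests
from urllib.parse import urljoin

session = requests.Session()
url = "https://sp.example.com/files/report.csv"  # placeholder SP URL
for hop in range(5):
    resp = session.get(url, allow_redirects=False)
    print(hop, resp.status_code, resp.headers.get("Location"))
    print("   set-cookie:", resp.headers.get("Set-Cookie"))
    if resp.status_code not in (301, 302, 303, 307, 308):
        break
    # Location may be relative; resolve it against the current URL.
    url = urljoin(url, resp.headers["Location"])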

Why doesn't the requests library provide all cookies?

I don't understand why the Python requests library isn't pulling in all the cookies. For example, I am running this code:
import requests
a_session = requests.Session()
a_session.get('https://google.com/')
session_cookies = a_session.cookies
cookies_dictionary = session_cookies.get_dict()
print(cookies_dictionary)
But I only get the cookie "1P_JAR", even though there should be several cookies.
[screenshot: list of cookies shown in the inspector panel]
Ultimately I'm trying to figure out why it picks up only that one cookie and not the others. I'm building my own application that generates a cookie, but when I run this script against my application I get back an empty list, even though the inspector shows that a cookie has been generated.
A cookie is set by a server response to a specific request.
Your basic google.com request only sets that one cookie, which you can observe in the set-cookie response header.
The other cookies are probably set by other requests or even by JS code. requests doesn't evaluate or run JavaScript and thus never makes any of those other requests.
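To verify this, a quick sketch that prints the raw Set-Cookie headers of the response (requests exposes the underlying urllib3 headers, where repeated Set-Cookie entries are preserved):

import requests

resp = requests.get("https://google.com/")
# resp.raw.headers is a urllib3 HTTPHeaderDict; getlist() returns every
# Set-Cookie header individually instead of a comma-joined string.
for header in resp.raw.headers.getlist("Set-Cookie"):
    print(header)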
If you don't want to completely reverse-engineer every single cookie, the way to go is to simulate a browser using Selenium + ChromeDriver or a similar solution.
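With a real browser the page's JavaScript runs too, so the JS-set cookies show up as well; a minimal sketch:

from selenium import webdriver

driver = webdriver.Chrome()
driver.get("https://google.com/")
# get_cookies() returns every cookie in the browser's jar, including
# those set by JavaScript after the page loaded.
for cookie in driver.get_cookies():
    print(cookie["name"], "=", cookie["value"])
driver.quit()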

How to view what Python requests is doing in a browser

I just started experimenting with the Requests library in Python to interact with different sites. However, sometimes I want to check whether the POST requests I'm sending are actually working. Is there any way to open a browser to see what actually happened after I send a POST request?
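One common trick, sketched below, is to save the response body to a file and open it in the default browser to inspect the page the server returned; the URL and payload are placeholders:

import pathlib
import webbrowser
import requests

resp = requests.post("https://example.com/login", data={"user": "me"})  # placeholders
path = pathlib.Path("response.html")
path.write_text(resp.text, encoding="utf-8")
# Open the saved page in the default browser for visual inspection.
webbrowser.open(path.resolve().as_uri())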

Can I do a GET request to a page behind a login without logging in

My task was to create an automation script that logs in to a website, navigates to a page, and downloads a CSV file. I cannot log in to this website with a script because of some security measures (the cookies and hidden variables inside the login form change every time a request is made or the page is loaded). My idea to work around this is to:
Open a window and log in manually
Navigate to the page I want
Capture all the cookies, headers, etc. and the session id of this page
Using Python, do a GET request to this page URL, sending the trace I captured in step 3
Get the CSV file
Is this possible? What would an example look like for this scenario? Could I do it using Selenium WebDriver: perform the manual login, store the trace, then use it to open the page I want with driver.get(url)?
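For what it's worth, a sketch of the flow described in the steps above, with placeholder URLs: log in manually in a Selenium-driven window, then copy the browser's cookies into a requests.Session to fetch the file:

import requests
from selenium import webdriver

driver = webdriver.Chrome()
driver.get("https://example.com/login")  # placeholder login page
input("Log in manually in the browser window, then press Enter...")

session = requests.Session()
# Transfer every cookie Selenium holds into the requests session.
for cookie in driver.get_cookies():
    session.cookies.set(cookie["name"], cookie["value"], domain=cookie.get("domain"))
driver.quit()

resp = session.get("https://example.com/reports/data.csv")  # placeholder file URL
with open("data.csv", "wb") as f:
    f.write(resp.content)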
Can I do a GET request to a page behind a login without logging in?
Yes, of course, but the HTTP response will probably be a 401 or 302 code (Unauthorized, or a redirect to the login page), as you are not authorized to perform that action.
See also: Hypertext Transfer Protocol (HTTP/1.1): Semantics and Content
Open a window and login manually
Capture all the cookies, headers, etc and the session_id of this page
Using Python, do a GET request with this page URL and send the trace I captured in step 3
Just to be clear: when you open the browser, the SSL/TLS protocol initiates the connection with the remote server and creates a new encrypted session, over which you perform the login and the rest of your navigation on the website.
Your Python GET request (which is not a tab of your browser) negotiates its own, separate SSL/TLS session; the requests library performs that TLS handshake for you automatically.
What you are actually transferring from the browser is the HTTP session state, which lives at Layer 7 (cookies and headers), so the server may still reject it if the session is bound to other properties of the original client.
See also: The Transport Layer Security (TLS) Protocol Version 1.3
Get the CSV file
Again, you are trying to do something that the web application isn't built for.
Instead, some organizations provide a REST API to fetch certain data without requiring browser sessions.
See also: Representational state transfer
P.S. Please provide some feedback if the replies are helping you; we are all sharing and learning ;)

Intercept HTTP traffic with Selenium WebDriver in Python

I am building a web-application testing tool in Selenium, using the Chrome WebDriver in Python 3.5. So far the application is working properly, but the marketing team tells me it is skewing our web analytics metrics: as I crawl the pages, the tool sends requests to our web analytics platform.
What would be the best approach to detect that the web analytics tag has been triggered (i.e. capture the request itself) while preventing the request from actually being sent?
Would using a proxy to intercept and block the request be a possible solution?
Edit:
The metrics system is Google Analytics; the call looks like the following:
https://www.google-analytics.com/r/collect?
The parameters sent to Google Analytics come after the "?". But every time this URL is called, a page view is registered.
You could also use the Chrome DevTools Protocol for request interception; it doesn't require a proxy.
You can filter specific requests or responses by URL or type and have them paused.
Have a look at https://stackoverflow.com/a/75067388/20443541.
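As an illustration, a minimal sketch that blocks the Google Analytics hit via the DevTools Protocol from Selenium (Chromium-based drivers expose execute_cdp_cmd; the page URL is a placeholder):

from selenium import webdriver

driver = webdriver.Chrome()
driver.execute_cdp_cmd("Network.enable", {})
# Any request whose URL matches a pattern here is dropped inside the
# browser, so the page-view hit never reaches Google Analytics.
driver.execute_cdp_cmd("Network.setBlockedURLs", {"urls": ["*google-analytics.com/*"]})
driver.get("https://example.com/")  # placeholder page under test

Requests blocked this way still emit Network.requestWillBeSent events, so the tag firing can still be recorded via the performance log, as in the first sketch above.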
