Python: trace URL GET requests from a Python script

I'm writing a script to help me do some repetitive testing of a bunch of URLs.
I've written a Python method in the script that opens a URL and sends a GET request. I'm using Requests: HTTP for Humans (http://docs.python-requests.org/en/latest/) to handle the HTTP calls.
There's r.history, which returns the list of responses for the redirects that were followed. I need to access the individual redirects in that list of 301s, but there doesn't seem to be a way to trace what my URLs are redirecting to. I want to be able to access the redirected URLs (status code 301).
Can anyone offer any advice?
Thanks

Okay, I'm so silly. Here's the answer I was looking for:

r = requests.get("http://someurl")
r.history[1].url  # will return the URL
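
For tracing every hop rather than one index: each entry in r.history is a full Response object, so a short loop like this sketch (the URL is a placeholder) prints the whole redirect chain:

import requests

r = requests.get("http://someurl")  # placeholder URL
# r.history holds the intermediate redirect responses, oldest first
for hop in r.history:
    print(hop.status_code, hop.url, "->", hop.headers.get("Location"))
print("final:", r.status_code, r.url)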

Related

Modifying a GET Response in Python to Return a Specified Response

I am trying to implement functionality similar to the Auto Responder feature in Fiddler, but in a Python script. The feature should prompt the user for a website link and allow them to specify the desired response to be returned. The script should intercept every GET request sent to the specified website link and return the user-defined response instead of forwarding the request to the server.
I am having trouble finding the right approach to achieve this in Python and would appreciate any guidance or code examples.
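
One common way to get Fiddler-style auto-responding in Python is an intercepting proxy such as mitmproxy. A minimal sketch, assuming a recent mitmproxy is installed (run with: mitmproxy -s responder.py); TARGET and CANNED_BODY are placeholders standing in for the user's input:

# responder.py -- run with: mitmproxy -s responder.py
from mitmproxy import http

TARGET = "example.com"                  # site to intercept (placeholder)
CANNED_BODY = b"<h1>stubbed body</h1>"  # user-defined response (placeholder)

def request(flow: http.HTTPFlow) -> None:
    # Short-circuit matching GET requests with a canned response instead
    # of forwarding them to the real server.
    if flow.request.method == "GET" and TARGET in flow.request.pretty_host:
        flow.response = http.Response.make(
            200, CANNED_BODY, {"Content-Type": "text/html"}
        )

With the browser pointed at the proxy, every GET to the target host gets the canned body and never reaches the real server.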

Python get full response from a get request

I need to write a script to confirm that part of a website is vulnerable to reflected XSS, but the request response doesn't contain the complete HTML, so I can't check it for the payload. For example, in Burp the response contains the whole page HTML, where I can see the alert('xss'), but in Python it does not. I've tried response.text/content etc. but they're all the same. Is there a separate module for this stuff, or am I just doing something wrong with the request?
for p in payloads:
    response = requests.get(url + p)
    if p in response.text:  # .content is bytes; compare against the str .text
        print(f'Vulnerable: payload - {p}')
The Burp response does contain the following:
<pre>Hello <script>alert("XSS")</script></pre>
I need to get the same thing in the Python response.
One possibility is that that part of the page only loads a few seconds after the GET, via JavaScript. The requests module returns the first thing it sees, i.e. the raw HTML before any scripts have run.
To get around this you may want to use a web-driver module like Selenium, which allows waiting before grabbing the HTML.
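
A minimal sketch of that approach, assuming Selenium and a Chrome driver are installed (url and payloads as in the snippet above):

from selenium import webdriver
from selenium.webdriver.support.ui import WebDriverWait

driver = webdriver.Chrome()
try:
    for p in payloads:
        driver.get(url + p)
        # wait until the page has finished running its scripts
        WebDriverWait(driver, 10).until(
            lambda d: d.execute_script("return document.readyState") == "complete"
        )
        if p in driver.page_source:  # page_source reflects the rendered DOM
            print(f'Vulnerable: payload - {p}')
finally:
    driver.quit()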

How to retrieve html with grequests

While doing some research on Python web scraping, I got to know of a package named grequests, which is said to send parallel HTTP requests, thus gaining more speed than the normal Python requests module. That sounds great, but I was not able to get the HTML of the web pages I requested, as there is no .text method like in the normal requests module. Some help would be great!
grequests.imap returns a generator that yields responses as they complete, so iterate over it; each item is a regular requests.Response, so .text works as usual:

for response in grequests.imap(reqs):
    print(response.text)
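
End to end, a minimal sketch (the URLs are placeholders):

import grequests

urls = ['http://example.com', 'http://example.org']
reqs = (grequests.get(u) for u in urls)         # build unsent requests
for response in grequests.imap(reqs, size=10):  # send up to 10 in parallel
    print(response.text)                        # same API as requests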

How to handle a post request from a Python program

So basically I would like to know how to handle a POST request from a Python program and store the data on the website's server, so that I can later make a GET request to retrieve that information. I'm hoping you can help me. Currently this is my code:
import requests

url = 'http://mywebsitehere.com'       # placeholder; requests needs a scheme
source_code = "print('Hello World')"   # note the fixed quoting
data = {'code': source_code, 'format': 'python'}
r = requests.post(url=url, data=data)
print(r.text)
I'm trying to send some code, plus the format of that code, in the POST request, but I'm not sure how to handle the POST request once it reaches the website so that other programs can access it with GET requests. I know how to send POST and GET requests in Python, just not how to handle them once they reach the website/server. From my research, it seems like you have to make a PHP file or something and specify individual boxes or variables for the program to enter the information into.
I know it's a really noob question, but I'm just starting to get into more advanced stuff with Python and modules.
I'm going to learn more about general web development, so instead of just barely understanding it I can get a good grasp on POST requests and actually develop my website into something custom, rather than copying and pasting other people's work without completely understanding it.
...also I'm not sure how to close a post to "answered" or something but yeah.
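
For the server side, any web framework will do; a minimal sketch using Flask (one option among many; the field names mirror the snippet above, and the routes are placeholders):

from flask import Flask, request, jsonify

app = Flask(__name__)
snippets = []  # in-memory store; a real site would use a database

@app.route('/code', methods=['POST'])
def receive_code():
    # store the 'code' and 'format' fields sent by requests.post(...)
    snippets.append({'code': request.form['code'],
                     'format': request.form['format']})
    return jsonify(status='stored', id=len(snippets) - 1)

@app.route('/code/<int:snippet_id>', methods=['GET'])
def get_code(snippet_id):
    # other programs retrieve the stored data with a GET request
    return jsonify(snippets[snippet_id])

if __name__ == '__main__':
    app.run()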

Get DOM from webpage in python

Hello guys, I'm wondering how to get the DOM from a web page.
So check out this flow:
Example.com > get DOM > get document from DOM > get cookie values from document
I tried this code, but it's not working:

response = urllib2.urlopen('http://Example.com')
print response.info().getheader("cookie")

I also tried print response.read(), but print response.info().getheader("cookie") outputs None.
When I tried "Set-Cookie" I got values, but not exactly the same ones as in the browser. I opened the page in a web inspector (Firebug) and got different information, so I'm confused: is Set-Cookie equal to Cookie?
I don't know; please give me some suggestions.
There is something here about HTTP cookies with Python. You might actually be better off using / learning about Python's httplib / http.client, documented here, which would allow you to simulate / build an HTTP client. Or even use the more generic urllib, documented here, which handles more protocols / arbitrary resources; with it you can, say, access the response headers via the tuple that urllib.urlretrieve returns, if there are any.
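
A minimal Python 2 sketch matching the urllib2 code above: Set-Cookie is what the server sends in its response, while Cookie is what the browser sends back on later requests, so attach a CookieJar to collect the server's values (the URL is a placeholder):

import urllib2
import cookielib

jar = cookielib.CookieJar()
opener = urllib2.build_opener(urllib2.HTTPCookieProcessor(jar))
response = opener.open('http://example.com')

# the jar now holds whatever the server set via Set-Cookie headers;
# these are the values the browser would echo back in its Cookie header
for cookie in jar:
    print cookie.name, '=', cookie.value

The browser may still show more cookies than this, because it accumulates them across many requests and JavaScript can set additional ones.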
