Python equivalent for curl --interface

What is the Python equivalent of the following shell command?
curl --interface 10.91.56.2 http://10.91.55.3/file0.txt
I am using CentOS 6.5 (Linux) and I want to send HTTP requests from virtual IP addresses such as eth0:0, eth0:1, eth0:2, etc., simultaneously with eth0. I am building a traffic generator tool in Python. I have already managed to send multiple, concurrent HTTP requests, and my next step is to send such requests from multiple IP addresses. I used the following cURL command to send a request from eth0:1, "curl --interface 10.91.56.2 http://10.91.55.3/file0.txt", and it successfully generated traffic from the virtual interface eth0:1. Can anyone guide me on how to do this in Python? 10.91.56.2 is my virtual eth0:1 IP address and 10.91.55.3 is my server address.

Python's urllib2 provides a convenient way to make any HTTP request. In your case you can use the urlopen() function.
More about this library can be found at the link below:
how-to-use-urllib2-in-python
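A minimal example of this suggestion (Python 2 urllib2, using the URL from the question). Note that plain urlopen() does not bind to a specific source interface; the next answer covers that part.
import urllib2

# Plain GET request; the source interface is chosen by the OS routing table.
response = urllib2.urlopen('http://10.91.55.3/file0.txt')
print(response.read()[:100])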

For me, eth0's IP is 10.91.56.3 and eth0:1's IP is 10.91.56.4, so I wanted to generate traffic using 10.91.56.4 (eth0:1).
I followed the answer by AKX here.
In that answer, in the third class, put your interface's IP instead of 127.0.0.1. In my case it looks like this:
class BindableHTTPHandler(urllib2.HTTPHandler):
    def http_open(self, req):
        return self.do_open(BindableHTTPConnectionFactory('10.91.56.4'), req)
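For completeness, here is a fuller sketch of that approach (Python 2 / urllib2, as shipped with CentOS 6.x); the IP addresses are the ones from this answer and the question above, so treat them as placeholders:
import socket
import httplib
import urllib2

class BindableHTTPConnection(httplib.HTTPConnection):
    def connect(self):
        # Bind the outgoing socket to the chosen source IP before connecting.
        sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        sock.bind((self.source_ip, 0))
        sock.connect((self.host, self.port))
        self.sock = sock

def BindableHTTPConnectionFactory(source_ip):
    def _get(host, port=None, strict=None, timeout=0):
        conn = BindableHTTPConnection(host, port=port, strict=strict, timeout=timeout)
        conn.source_ip = source_ip
        return conn
    return _get

class BindableHTTPHandler(urllib2.HTTPHandler):
    def http_open(self, req):
        return self.do_open(BindableHTTPConnectionFactory('10.91.56.4'), req)

opener = urllib2.build_opener(BindableHTTPHandler)
print(opener.open('http://10.91.55.3/file0.txt').read()[:100])
The key point is that the socket is bound to the virtual interface's address with bind() before connect(), which is essentially what curl --interface does under the hood.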

Related

Can I intercept HTTP requests that are coming to another application and port, using Python?

I am currently planning a project that automatically executes defensive actions, such as adding the IP of a DoS attacker to the iptables list to drop their requests permanently.
My question is: can I intercept the HTTP requests that are coming to another application, using Python? For example, can I count how many times an Apache server running on port 80 received an HTTP POST request, extract its sender, etc.?
I tried looking into the requests documentation but couldn't find anything relevant.

Is it possible to recreate a request from the packets programmatically?

For a script I am making, I need to be able to see the parameters that are sent with a request.
This is possible through Fiddler, but I am trying to automate the process.
Here are some screenshots to start with. As you can see in the first picture of Fiddler, I can see the URL of a request and the parameters sent with that request.
I tried some packet sniffing with scapy, using the code below, to see if I could get a similar result, but what I get is shown in the second picture. Basically, I can get the source and destination of a packet as IP addresses, but the packet payloads themselves are just bytes.
from scapy.all import AsyncSniffer
import time

def sniffer():
    # Sniff 10 packets in the background, printing a one-line summary of each.
    t = AsyncSniffer(prn=lambda x: x.summary(), count=10)
    t.start()
    time.sleep(8)
    results = t.results
    print(len(results))
    print(results)
    print(results[0])
From my understanding, after we establish a TCP connection, the request is broken down into several IP packets and then sent over to the destination. I would like to be able to replicate the functionality of Fiddler, where I can see the URL of the request and the values of the parameters being sent over.
Would it be feasible to recreate the information of a request through only the information gathered from the packets?
Or is the difference because the sniffing is done at Layer 2, while Fiddler operates at Layer 3/4, before or after the translation into IP packets, so that it sees the content of the original request rather than the individual packets? If my understanding is wrong, please correct me.
Basically, my question boils down to: "Is there a python module I can use to replicate the features of Fiddler to identify the destination url of a request and the parameters sent along with that request?"
The sniffed traffic is HTTPS traffic, so just by sniffing you won't see any details of the HTTP request/response, because it is encrypted via SSL/TLS.
Fiddler is a proxy with HTTPS interception, which is something totally different from sniffing traffic at the network level. For the client application Fiddler "mimics" the server, and for the server Fiddler mimics the client. This allows Fiddler to decrypt the requests/responses and show them to you.
If you want to perform request interception at the Python level, I would recommend using mitmproxy instead of Fiddler. This proxy can also perform HTTPS interception, but it is written in Python and therefore much easier to integrate into your Python environment.
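As a rough illustration of the mitmproxy suggestion, a minimal addon script looks something like the sketch below (run with mitmdump -s show_params.py; the file name is just an example). It prints the URL and parameters of every intercepted request:
from mitmproxy import http

def request(flow: http.HTTPFlow) -> None:
    # Called once for each client request passing through the proxy.
    req = flow.request
    print(req.method, req.pretty_url)
    if req.query:
        print("  query params:", dict(req.query))
    if req.urlencoded_form:
        print("  form params:", dict(req.urlencoded_form))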
Alternatively, if you just want to see the request/response details of a Python program, it may be easier to do so by setting the log level appropriately. See for example this question: Log all requests from the python-requests module
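For reference, a commonly used variant of that logging setup looks roughly like this (Python 3; the target URL is just an example):
import logging
import http.client
import requests

# Echo the raw request/response headers from http.client, and enable DEBUG
# logging globally so that urllib3's messages (used internally by requests)
# are printed as well.
http.client.HTTPConnection.debuglevel = 1
logging.basicConfig(level=logging.DEBUG)

requests.get("https://httpbin.org/get", params={"q": "test"})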

How to send HTTP request in SSH?

I'm trying to make a simple HTTP request in Python in an SSH terminal:
from requests import get
r = get("https://www.google.com")
However, this call just hangs indefinitely. This does not happen outside of the SSH session.
Is there any way to send the request so that it goes through?
Thanks ahead of time.
EDIT: Running the logging in Joran's link yields only the following line:
INFO:requests.packages.urllib3.connectionpool:Starting new HTTPS connection (1): www.google.com
First, check that you are able to reach the URL using a system-wide tool such as curl: curl -I "https://www.google.com". If you get no timeout error and a successful response, my answer is not for you :)
Your code can run forever simply because no timeout is defined for the socket connection. If for some reason your system cannot read from the socket (at a low level), you will have to wait a long time.
http://docs.python-requests.org/en/latest/user/quickstart/#timeouts
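A minimal sketch of that timeout suggestion, so the call fails fast instead of hanging forever:
import requests

try:
    r = requests.get("https://www.google.com", timeout=5)  # seconds
    print(r.status_code)
except requests.exceptions.Timeout:
    print("request timed out")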
Try this (assuming you are using Python 3):
from urllib.request import urlopen
r = urlopen('https://www.google.com').read()

HTTP requests using multiple IP addresses on python [duplicate]

This question already has an answer here:
HTTP Requests using a range of IP address on python
(1 answer)
Closed 8 years ago.
I'm writing a Python script that will send HTTP requests concurrently to the URLs listed in a file. The script works fine for a single IP address. The OS I'm using is Linux. I've created virtual IP addresses such as eth0:1, eth0:2, etc. I want to send HTTP requests using these virtual IP addresses along with the eth0 IP address, concurrently. I use the requests module for the HTTP requests and the threading module for concurrency. Kindly help me; I'm trying to develop a web testing tool.
I think you want to avoid a "Crawl-Delay" and crawl one server faster!
In that case, the remote web server will see requests coming from only one IP!
I think using parallel + curl + a Python script is the simplest and best way (see the pycurl sketch below),
or use https://pypi.python.org/pypi/pyparallelcurl/0.0.4,
or use a lot of servers.
Also refer to https://code.google.com/p/httplib2/issues/detail?id=91
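If you go the curl route, pycurl (the libcurl bindings) exposes the same --interface option as the command line; here is a rough sketch using the IPs from the question at the top of this page as placeholders:
import pycurl
from io import BytesIO

buf = BytesIO()
c = pycurl.Curl()
c.setopt(pycurl.URL, 'http://10.91.55.3/file0.txt')
c.setopt(pycurl.INTERFACE, '10.91.56.2')   # equivalent of curl --interface
c.setopt(pycurl.WRITEFUNCTION, buf.write)
c.perform()
c.close()
print(buf.getvalue()[:100])
Each thread can create its own Curl handle bound to a different virtual interface, which is one way to generate traffic from several source IPs concurrently.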

Using IP authenticated proxies in a distributed crawler

I'm working on a distributed web crawler in Python, running on a cluster of CentOS 6.3 servers. The crawler uses many proxies from different proxy providers. Everything works like a charm for username/password-authenticated proxy providers. But now we have bought some proxies that use IP-based authentication, which means that when I want to crawl a webpage using one of these proxies, I need to make the request from a subset of our servers.
The question is: is there a way in Python (using a library/software) to make a request to a domain passing through 2 proxies? (One proxy is one of the subset needed for the IP authentication, and the second is the actual proxy from the provider.) Or is there another way to do this without setting up this subset of our servers as proxies?
The code I'm using now to make the request through a proxy uses the requests library:
import requests
from requests.auth import HTTPProxyAuth

proxy_obj = {
    'http': proxy['ip']
}
auth = HTTPProxyAuth(proxy['username'], proxy['password'])
data = requests.get(url, proxies=proxy_obj, auth=auth)
Thanks in advance!
is there a way in Python (using a library/software) to make a request to a domain passing through 2 proxies?
If you need to go through two proxies, it looks like you'll have to use HTTP tunneling: any host which isn't on the authorized list would have to connect to an HTTP proxy server on one of the hosts which is, and use the HTTP CONNECT method to create a tunnel to the remote proxy. However, it may not be possible to achieve that with the requests library.
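To make the tunneling idea concrete, here is a heavily simplified sketch with http.client (all host names and ports are placeholders; a real deployment would also need proxy authentication and error handling):
import http.client

# Connect to the in-house proxy (one of the IP-authorized hosts) and ask it to
# open a CONNECT tunnel to the remote, IP-authenticated proxy.
conn = http.client.HTTPConnection("first-proxy.internal", 3128)
conn.set_tunnel("remote-proxy.example.com", 8080)

# Send an ordinary proxy-style request (absolute URI) through the tunnel; the
# remote proxy then fetches the target on our behalf.
conn.putrequest("GET", "http://example.com/page", skip_host=True)
conn.putheader("Host", "example.com")
conn.endheaders()
print(conn.getresponse().status)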
Or is there another way to do this without setting up this subset of our servers as proxies?
Assuming that the remote proxies which use IP address-based authentication are all expecting the same IP address, then you could instead configure a NAT router, between your cluster and the remote proxies, to translate all outbound HTTP requests to come from that single IP address.
But, before you look into implementing either of these unnecessarily complicated options, and given that you're paying for this service, can't you just ask the provider to allow requests for the entire range of IP addresses which you're currently using?
