I'm working on a project where we want to assign whitelist packet filters for incoming traffic on a firewall, and we are using a Python script with the requests library to make HTTPS requests to servers outside of that network. For now the script uses ephemeral ports to connect to the servers, but we would like to make these HTTPS requests through specific ports. This would allow us to create a strict whitelist for those ports.
How can I specify to the requests library the port through which the request should be sent? The script currently uses the following kind of code to send the necessary requests.
response = requests.post(data[0], data=query, headers=headers, timeout=10)
This works, but I would now need to specify the port through which the HTTP POST request is sent, to allow for stricter packet filtering on the network. How could this port declaration be achieved? I have already searched several sources for a solution and came up with absolutely nothing.
requests is built on urllib3, which offers the ability to set a source address for connections; when you set the source address to ('', port_number) you tell it to use the default host but pick a specific source port.
You can set these options on the pool manager, and you tell requests to use a different pool manager by creating a new transport adapter:
import requests
from requests.adapters import HTTPAdapter
from requests.packages.urllib3.poolmanager import PoolManager

class SourcePortAdapter(HTTPAdapter):
    """"Transport adapter" that allows us to set the source port."""
    def __init__(self, port, *args, **kwargs):
        self._source_port = port
        super(SourcePortAdapter, self).__init__(*args, **kwargs)

    def init_poolmanager(self, connections, maxsize, block=False):
        self.poolmanager = PoolManager(
            num_pools=connections, maxsize=maxsize,
            block=block, source_address=('', self._source_port))
Use this adapter in a session object; the following mounts the adapter for all HTTP and HTTPS connections, using 54321 as the source port:
s = requests.Session()
s.mount('http://', SourcePortAdapter(54321))
s.mount('https://', SourcePortAdapter(54321))
You can only set one source port per adapter, limiting you to one active connection at a time. If you need to rotate between ports, register multiple adapters (one per URL) or re-register the catch-all mounts each time, as sketched below.
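For example, a minimal sketch (the URL and port numbers are placeholders) that re-registers the catch-all HTTPS mount before each request:
import requests  # assumes SourcePortAdapter from above is in scope

s = requests.Session()
for source_port in (54321, 54322, 54323):
    s.mount('https://', SourcePortAdapter(source_port))
    # each request now binds to the freshly mounted source port
    response = s.get('https://example.com/', timeout=10)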
See the create_connection() utility function documentation for the details on the source_address option:
If source_address is set it must be a tuple of (host, port) for the socket to bind as a source address before making the connection. A host of '' or port 0 tells the OS to use the default.
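The same option can be seen at the socket level; a minimal sketch, assuming example.com as the remote host and 54321 as the local port:
import socket

# bind the outgoing connection to local port 54321 on the default interface
conn = socket.create_connection(('example.com', 443), timeout=10, source_address=('', 54321))
conn.close()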
Is it possible to get the IP of the default interface with Sanic?
Here is how I do it with the socket module. The idea is to do the same thing with Sanic.
import socket
hostname = socket.gethostname()
IP_address = socket.gethostbyname(hostname)
print(IP_address) # 192.168.1.239
It depends on what information you want and how the app is being served (reverse proxy, etc.).
Check out these values:
request.ip (connected interface)
request.remote_addr (likely what you want; see https://sanic.readthedocs.io/en/stable/sanic/api/core.html#sanic.request.Request.remote_addr)
request.conn_info (object with a bunch of details you may want)
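As a hedged illustration (the app name and route are placeholders), a minimal Sanic handler that returns the first two values:
from sanic import Sanic
from sanic.response import json

app = Sanic("ip_demo")

@app.get("/whoami")
async def whoami(request):
    # request.ip is the address of the connected interface;
    # request.remote_addr takes proxy headers into account when configured
    return json({"ip": request.ip, "remote_addr": request.remote_addr})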
I am trying to understand how HTTP/3 works. Ultimately, my goal is to send an HTTP/3 request to a host through a proxy and receive a response back.
The host I am trying to reach only accepts HTTP/3 connections.
There is a library that takes care of the heavy lifting to initiate an HTTP/3 connection; however, the examples don't demonstrate how a proxy can be passed into the packets.
https://github.com/aiortc/aioquic/blob/main/examples/http3_client.py
I am running the following file after cloning the repo like this:
python3 examples/http3_client.py 'https://www.truepeoplesearch.com/'
Doing so does route the request via HTTP/3 using the QUIC protocol. How can I send the same request behind a proxy, given the IP, port, username and password of the proxy?
I'm trying to send an HTTPS request through an HTTPS tunnel. That is, my proxy expects HTTPS for the CONNECT. It also expects a client certificate.
I'm using Requests' proxy features.
import requests

url = "https://some.external.com/endpoint"

with requests.Session() as session:
    response = session.get(
        url,
        proxies={"https": "https://proxy.host:4443"},
        # client certificates expected by proxy
        cert=(cert_path, key_path),
        verify="/home/savior/proxy-ca-bundle.pem",
    )
    with response:
        ...
This works, but with some limitations:
I can only set client certificates for the TLS connection with the proxy, not for the external endpoint.
The proxy-ca-bundle.pem only verifies the server certificates in the TLS connection with the proxy. The server certificates from the external endpoint are seemingly ignored.
Is there any way to use requests to address these two issues? I'd like to set a different set of CAs for the external endpoint.
I also tried using http.client and HTTPSConnection.set_tunnel but, as far as I can tell, its tunnel is done through HTTP and I need HTTPS.
Looking at the source code, it doesn't seem like requests currently supports this "TLS in TLS", i.e. providing two sets of client certificates/CA bundles for a proxied request.
We can use PycURL, which simply wraps libcurl:
from io import BytesIO

import pycurl

url = "https://some.external.com/endpoint"

buffer = BytesIO()
curl = pycurl.Curl()
curl.setopt(curl.URL, url)
curl.setopt(curl.WRITEDATA, buffer)

# proxy settings
curl.setopt(curl.HTTPPROXYTUNNEL, 1)
curl.setopt(curl.PROXY, "https://proxy.host")
curl.setopt(curl.PROXYPORT, 4443)
curl.setopt(curl.PROXY_SSLCERT, cert_path)
curl.setopt(curl.PROXY_SSLKEY, key_path)
curl.setopt(curl.PROXY_CAINFO, "/home/savior/proxy-ca-bundle.pem")

# endpoint verification
curl.setopt(curl.CAINFO, "/home/savior/external-ca-bundle.pem")

try:
    curl.perform()
except pycurl.error:
    pass  # log or re-raise
else:
    status_code = curl.getinfo(curl.RESPONSE_CODE)
PycURL will use the PROXY_ settings to establish a TLS connection to the proxy and send it an HTTP CONNECT request. It will then establish a new TLS session through the proxied connection to the external endpoint and use the CAINFO bundle to verify that server's certificates.
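As a small follow-up, the response body ends up in the BytesIO buffer; a sketch, assuming a UTF-8 text response:
body = buffer.getvalue().decode("utf-8")
print(status_code, body[:200])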
I'm using Tor to proxy connections but am having difficulty proxying DNS lookups via socket.gethostbyname("www.yahoo.com") -- I learned that it was not sending DNS traffic via the proxy by sniffing traffic with Wireshark. Here's a copy of the code I'm using:
import StringIO
import socket
import socks # SocksiPy module
import stem.process
from stem.util import term
SOCKS_PORT = 7000
socks.setdefaultproxy(socks.PROXY_TYPE_SOCKS5, '127.0.0.1', SOCKS_PORT)
socket.socket = socks.socksocket
def getaddrinfo(*args):
    return [(socket.AF_INET, socket.SOCK_STREAM, 6, '', (args[0], args[1]))]

socket.getaddrinfo = getaddrinfo
socket.gethostbyname("www.yahoo.com")  # <--- this line is not sending traffic via the proxy
Any help is greatly appreciated!
You're calling gethostbyname in the socket module. It doesn't know anything about your SOCKS socket; it is simply interacting with your operating system's name resolution mechanisms. Setting socket.socket = socks.socksocket may affect network connections made through the socket module, but the module does not make direct connections to DNS servers to perform name resolution so replacing socket.socket has no impact on this behavior.
If you simply call the connect(...) method on a socks.socksocket object using a hostname, the proxy will perform name resolution via SOCKS:
s = socks.socksocket()
s.connect(('www.yahoo.com', 80))
If you actually want to perform raw DNS queries over your SOCKS connection, you'll need to find a Python DNS module to which you can provide your socksocket object.
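A hedged sketch of that last idea (dnspython and the resolver address here are assumptions, not part of the original setup): build the query with dnspython and push the wire format over the socksocket yourself, using the DNS-over-TCP framing from RFC 1035:
import struct

import dns.message  # pip install dnspython
import socks

s = socks.socksocket()
s.connect(("8.8.8.8", 53))  # TCP DNS, tunnelled through the SOCKS proxy

query = dns.message.make_query("www.yahoo.com", "A")
wire = query.to_wire()
s.sendall(struct.pack("!H", len(wire)) + wire)  # 2-byte length prefix

length = struct.unpack("!H", s.recv(2))[0]
print(dns.message.from_wire(s.recv(length)))
s.close()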
If you resolve the DNS yourself with SOCKS5 you may leak information about your own computer. Instead, try tunneling with Proxifier and then to Tor. Alternatively you can use SocksiPy's Socks4a extension, which makes sure that information is not leaked.
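A minimal sketch of that idea with SocksiPy (the port matches the Tor SOCKS listener from the question above): passing rdns=True hands hostnames to the proxy unresolved, which is what the Socks4a extension and SOCKS5 remote resolution provide:
import socks

socks.setdefaultproxy(socks.PROXY_TYPE_SOCKS5, "127.0.0.1", 7000, rdns=True)
s = socks.socksocket()
s.connect(("www.yahoo.com", 80))  # the hostname is resolved by the proxy, not locally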
Recently I have been playing around with the HTTP proxy in Twisted. After much trial and error, I think I finally have something working. What I want to know, though, is how, if it is possible, do I expand this proxy to also be able to handle HTTPS pages? Here is what I've got so far:
from twisted.internet import reactor
from twisted.web import http
from twisted.web.proxy import Proxy, ProxyRequest, ProxyClientFactory, ProxyClient
class HTTPProxyClient(ProxyClient):
    def handleHeader(self, key, value):
        print "%s : %s" % (key, value)
        ProxyClient.handleHeader(self, key, value)

    def handleResponsePart(self, buffer):
        print buffer
        ProxyClient.handleResponsePart(self, buffer)

class HTTPProxyFactory(ProxyClientFactory):
    protocol = HTTPProxyClient

class HTTPProxyRequest(ProxyRequest):
    protocols = {'http' : HTTPProxyFactory}

    def process(self):
        print self.method
        for k, v in self.requestHeaders.getAllRawHeaders():
            print "%s : %s" % (k, v)
        print "\n \n"
        ProxyRequest.process(self)

class HTTPProxy(Proxy):
    requestFactory = HTTPProxyRequest

factory = http.HTTPFactory()
factory.protocol = HTTPProxy

reactor.listenSSL(8001, factory)
reactor.run()
As this code demonstrates, for the sake of example for now I am just printing out whatever is going through the connection. Is it possible to handle HTTPS with the same classes? If not, how should I go about implementing such a thing?
If you want to connect to an HTTPS website via an HTTP proxy, you need to use the CONNECT HTTP verb (because that's how a proxy works for HTTPS). In this case, the proxy server simply connects to the target server and relays whatever is sent by the server back to the client's socket (and vice versa). There's no caching involved in this case (but you might be able to log the hosts you're connecting to).
The exchange will look like this (client to proxy):
C->P: CONNECT target.host:443 HTTP/1.0
C->P:
P->C: HTTP/1.0 200 OK
P->C:
After this, the proxy simply opens a plain socket to the target server (no HTTP or SSL/TLS of its own) and relays everything between the initial client and the target server, including the TLS handshake that the client initiates. Once the client has read the '200' status line, it upgrades its existing socket to the proxy by starting the SSL/TLS handshake; as far as the client is concerned, it's as if it had made the connection to the target server directly.
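To make the client side of that exchange concrete, here is a hedged sketch using only the standard library (the proxy and target names are placeholders, and the reply parsing is deliberately naive):
import socket
import ssl

proxy_host, proxy_port = "proxy.example", 8001  # placeholder proxy
target_host, target_port = "target.host", 443

sock = socket.create_connection((proxy_host, proxy_port))
sock.sendall(("CONNECT %s:%d HTTP/1.0\r\n\r\n" % (target_host, target_port)).encode())

# naively read the proxy's reply up to the blank line and check for a 200
reply = b""
while b"\r\n\r\n" not in reply:
    reply += sock.recv(4096)
assert reply.split(b" ", 2)[1] == b"200", reply

# upgrade the tunnelled socket to TLS, exactly as a browser would
context = ssl.create_default_context()
tls = context.wrap_socket(sock, server_hostname=target_host)
tls.sendall(b"GET / HTTP/1.0\r\nHost: " + target_host.encode() + b"\r\n\r\n")
print(tls.recv(4096))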
I'm not sure about Twisted, but I want to warn you that if you implement an HTTPS proxy, a web browser will expect the server's SSL certificate to match the domain name in the URL (address bar). The web browser will issue security warnings otherwise.
There are ways around this, such as generating certificates on the fly, but you'd need the root certificate to be trusted by the browser.