I'm trying to use Twisted's ProxyAgent class to connect to a proxy server and make HTTP requests; however, the proxy requires a username and password. Is it possible to specify these credentials to the server using ProxyAgent?
endpoint = TCP4ClientEndpoint(reactor, host, port)
agent = ProxyAgent(endpoint)
# Maybe need to pass auth credentials in the header here?
d = agent.request("GET", path)  # returns a Deferred that fires with a Response
Figured out the problem: the Proxy-Authorization field has to be set in the headers:
import base64

from twisted.internet import reactor
from twisted.internet.endpoints import TCP4ClientEndpoint
from twisted.web.client import ProxyAgent
from twisted.web.http_headers import Headers

endpoint = TCP4ClientEndpoint(reactor, host, port)
agent = ProxyAgent(endpoint)
auth = base64.b64encode("%s:%s" % (username, password))
headers = {"Proxy-Authorization": ["Basic " + auth.strip()]}
d = agent.request("GET", path, Headers(headers))
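Note that on Python 3 base64.b64encode only accepts bytes, not str, so the snippet above needs a small adjustment there. A minimal sketch of building the header value (the helper name proxy_auth_header is mine):

```python
import base64

def proxy_auth_header(username, password):
    # base64.b64encode requires bytes on Python 3, so encode the
    # "username:password" pair before base64-encoding it.
    token = base64.b64encode(("%s:%s" % (username, password)).encode("utf-8"))
    return b"Basic " + token

print(proxy_auth_header("user", "secret"))  # b'Basic dXNlcjpzZWNyZXQ='
```

The resulting value can then be passed to Twisted's Headers the same way as above, e.g. Headers({b"Proxy-Authorization": [proxy_auth_header(username, password)]}).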
I'm creating an application with Flask and I'm using Gunicorn as my application server. I enabled verification of the client's certificate, and I would like to know if there is a way to disable client-certificate validation for a specific user, or a way to serve two addresses: one that uses HTTPS and another that uses HTTP.
gunicorn configuration
import ssl
bind = "0.0.0.0:8080"
ca_certs = "certs/ca-crt.pem"
certfile = "certs/server-crt.pem"
keyfile = "certs/server-key.pem"
cert_reqs = ssl.CERT_REQUIRED
worker_class = 'proto_worker.CustomSyncWorker'
from gunicorn.workers.sync import SyncWorker


class CustomSyncWorker(SyncWorker):
    def handle_request(self, listener, req, client, addr):
        # `client` is the TLS-wrapped socket; expose the peer certificate
        # to the application as a request header.
        cert = client.getpeercert()
        try:
            key = client.get_password()
        except Exception:
            key = ''
        headers = dict(req.headers)
        headers['CERT'] = str(cert) + str(key)
        req.headers = list(headers.items())
        super(CustomSyncWorker, self).handle_request(listener, req, client, addr)
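On the application side, the header injected by the worker arrives like any other request header. A minimal sketch of the consuming end (plain WSGI for illustration; the 'CERT' name matches the worker above, and the reject-without-certificate policy is my assumption):

```python
def app(environ, start_response):
    # WSGI servers expose a request header named CERT as HTTP_CERT in the
    # environ; it holds whatever the CustomSyncWorker injected.
    cert = environ.get("HTTP_CERT", "")
    if not cert or cert == "None":
        # Hypothetical policy: reject requests with no certificate info.
        start_response("403 Forbidden", [("Content-Type", "text/plain")])
        return [b"no client certificate presented\n"]
    start_response("200 OK", [("Content-Type", "text/plain")])
    return [b"certificate received\n"]
```

In Flask the same value is available as request.headers.get("CERT").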
I am pretty new to Splunk and Python.
I am using splunklib.client to connect to the Splunk API. My code is below:
import splunklib.client as client
import splunklib.results as results

HOST = 'localhost'
PORT = 8089  # Splunk's REST API port (8089 by default), not the web UI port 8000
USERNAME = "username"
PASSWORD = "password"

service = client.connect(
    host=HOST,
    port=PORT,
    username=USERNAME,
    password=PASSWORD)

rr = results.ResultsReader(service.jobs.export("query"))
My question is: I have multiple hosts, such as localhost1, localhost2, localhost3, etc. Is there a way to get the data from multiple hosts through this module?
Thanks
Should be as simple as:
service1 = client.connect(
    host=HOST1,
    port=PORT1,
    username=USERNAME1,
    password=PASSWORD1)
rr1 = results.ResultsReader(service1.jobs.export("query"))

service2 = client.connect(
    host=HOST2,
    port=PORT2,
    username=USERNAME2,
    password=PASSWORD2)
rr2 = results.ResultsReader(service2.jobs.export("query"))
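Rather than repeating the connect/export pair per host, the hosts can be looped over. A sketch with the connect function passed in as a parameter, so the helper can be exercised without a live Splunk instance (export_from_all is my name, not part of the SDK):

```python
def export_from_all(hosts, connect, query, **connect_kwargs):
    # Run the same export against several Splunk hosts. `connect` is
    # expected to behave like splunklib.client.connect; injecting it
    # keeps the helper testable without a server.
    readers = {}
    for host in hosts:
        service = connect(host=host, **connect_kwargs)
        readers[host] = service.jobs.export(query)
    return readers

# Real usage (assumes splunklib is installed):
# import splunklib.client as client
# import splunklib.results as results
# raw = export_from_all(["localhost1", "localhost2"], client.connect,
#                       "search index=main", port=8089,
#                       username="username", password="password")
# for host, stream in raw.items():
#     for event in results.ResultsReader(stream):
#         print(host, event)
```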
I am trying to override the IP address for the destination host on the fly using urllib3, while I am passing client certificates. Here is my code:
import urllib3
conn = urllib3.connection_from_url('https://MYHOST', ca_certs='ca_crt.pem', key_file='pr.pem', cert_file='crt.pem', cert_reqs='REQUIRED')
response = conn.request('GET', 'https://MYHOST/OBJ', headers={"HOST": "MYHOST"})
print(response.data)
I was thinking to use transport adapters, but I am not quite sure how to do it without using sessions.
Any thoughts or help?
I guess we can follow the solutions presented here: Python 'requests' library - define specific DNS?
Hence a quick-and-dirty way to do this is to override the hostname resolution to the IP address we want by patching urllib3's create_connection:
import urllib3
from urllib3.util import connection

hostname = "MYHOST"
host_ip = "10.10.10.10"

_orig_create_connection = connection.create_connection

def patched_create_connection(address, *args, **kwargs):
    overrides = {
        hostname: host_ip
    }
    host, port = address
    if host in overrides:
        return _orig_create_connection((overrides[host], port), *args, **kwargs)
    else:
        return _orig_create_connection((host, port), *args, **kwargs)

connection.create_connection = patched_create_connection

conn = urllib3.connection_from_url('https://MYHOST', ca_certs='ca_crt.pem', key_file='pr.pem', cert_file='crt.pem', cert_reqs='REQUIRED')
response = conn.request('GET', 'https://MYHOST/OBJ', headers={"HOST": "MYHOST"})
print(response.data)
But, again based on the linked answer, the cleaner way to implement this is with a proper transport adapter that overrides the IP address.
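Since the monkey-patch above replaces a module-wide function for the life of the process, one small refinement is to scope it in a context manager so the original resolution is restored afterwards (a sketch under the same assumptions; the hostname/IP mapping and dns_override name are placeholders of mine):

```python
import contextlib

from urllib3.util import connection

@contextlib.contextmanager
def dns_override(overrides):
    # Temporarily replace urllib3's create_connection so hosts found in
    # `overrides` connect to the given IPs; restore the original on exit.
    orig = connection.create_connection

    def patched(address, *args, **kwargs):
        host, port = address
        return orig((overrides.get(host, host), port), *args, **kwargs)

    connection.create_connection = patched
    try:
        yield
    finally:
        connection.create_connection = orig

# with dns_override({"MYHOST": "10.10.10.10"}):
#     conn = urllib3.connection_from_url('https://MYHOST', ...)
#     response = conn.request('GET', 'https://MYHOST/OBJ')
```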
I have a Python DNS proxy. When I get a DNS request I need to pass an HTTP request to DansGuardian on behalf of the original source, let it decide what happens to the request, get the result, and redirect the client elsewhere based on DansGuardian's response.
The network skeleton is like this:
Client -> DNS Proxy -> DG -> Privoxy -> Web
The client requests A; the DNS proxy intercepts and asks DG on behalf of the client, then: 1. If DG filtered it, the proxy sends a local IP address instead of the actual IP in the answer to the A question. 2. If DG didn't filter it, the DNS proxy lets the client's traffic flow naturally.
Here is the sample python code that I've tried:
data, addr = sock.recvfrom(1024)
OriginalDNSPacket = data
# I get OriginalDNSPacket from a socket
# to which iptables redirected all port 53 packets
UDPanswer = sendQues(OriginalDNSPacket, '8.8.8.8')
proxies = {'http': 'http://127.0.0.1:8080'}  # DG port
s = requests.Session()
d = DNSRecord.parse(UDPanswer)
print d
ques_domain = str(d.questions[0].get_qname())[:-1]
ques_tld = tldextract.extract(ques_domain)
ques_tld = "{}.{}".format(ques_tld.domain, ques_tld.suffix)
print ques_tld
for rr in d.rr:
    try:
        s.mount("http://" + ques_tld, SourceAddressAdapter(addr[0]))  # This was a silly try, I know.
        s.proxies.update(proxies)
        response = s.get("http://" + ques_tld)
        print response.content
        if "Access Denied" in response.content:
            # DG blocked it: answer with our local server IP instead.
            d.rr = []
            d.add_answer(*RR.fromZone(ques_domain + " A " + SERVER_IP))
            d.add_answer(*RR.fromZone(ques_domain + " AAAA fe80::a00:27ff:fe4a:c8ec"))
            print d
            sock.sendto(d.pack(), addr)  # reply via the listening socket, not the socket module
            return
        else:
            sock.sendto(UDPanswer, addr)
            return
    except Exception, e:
        print e
The question is: how can I send the request to DG and make it think the request comes from the original client?
In dansguardian.conf, usexforwardedfor needs to be enabled.
So the conf now looks like this:
...
# if on it adds an X-Forwarded-For: <clientip> to the HTTP request
# header. This may help solve some problem sites that need to know the
# source ip. on | off
forwardedfor = on
# if on it uses the X-Forwarded-For: <clientip> to determine the client
# IP. This is for when you have squid between the clients and DansGuardian.
# Warning - headers are easily spoofed. on | off
usexforwardedfor = on
...
And in the proxy code I just needed to add the following (which I had tried before, but it didn't work because of the DG configuration):
response = s.get("http://"+ques_tld, headers={'X-Forwarded-For': addr[0]})
It worked like a charm.
Thanks @boardrider.
The client.Agent class has a connection timeout argument:
agent = client.Agent(reactor, connectTimeout=timeout, pool=pool)
How can this timeout be set when using client.ProxyAgent?
auth = base64.b64encode("%s:%s" % (username, password))
headers['Proxy-Authorization'] = ["Basic " + auth.strip()]
endpoint = endpoints.TCP4ClientEndpoint(reactor, host, port)
agent = client.ProxyAgent(endpoint, reactor=reactor, pool=pool)
The TCP4ClientEndpoint you pass to ProxyAgent can be initialized with a timeout.
auth = base64.b64encode("%s:%s" % (username, password))
headers['Proxy-Authorization'] = ["Basic " + auth.strip()]
endpoint = endpoints.TCP4ClientEndpoint(reactor, host, port, timeout=yourTimeout)
agent = client.ProxyAgent(endpoint, reactor=reactor, pool=pool)
This assumes you want to set the timeout for connecting to the proxy. The timeout the proxy itself uses to connect to the upstream HTTP server is not something you can control from the client.
It looks like client.ProxyAgent doesn't have a connectTimeout property:
class ProxyAgent(_AgentBase):
    """
    An HTTP agent able to cross HTTP proxies.

    @ivar _proxyEndpoint: The endpoint used to connect to the proxy.

    @since: 11.1
    """

    def __init__(self, endpoint, reactor=None, pool=None):
        if reactor is None:
            from twisted.internet import reactor
        _AgentBase.__init__(self, reactor, pool)
        self._proxyEndpoint = endpoint

    def request(self, method, uri, headers=None, bodyProducer=None):
        """
        Issue a new request via the configured proxy.
        """
        # Cache *all* connections under the same key, since we are only
        # connecting to a single destination, the proxy:
        key = ("http-proxy", self._proxyEndpoint)

        # To support proxying HTTPS via CONNECT, we will use key
        # ("http-proxy-CONNECT", scheme, host, port), and an endpoint that
        # wraps _proxyEndpoint with an additional callback to do the CONNECT.
        return self._requestWithEndpoint(key, self._proxyEndpoint, method,
                                         _URI.fromBytes(uri), headers,
                                         bodyProducer, uri)
ProxyAgent inherits from the same base class Agent does (_AgentBase) and not from Agent itself, so it does not accept a connectTimeout argument.