I have a Python DNS proxy. When I get a DNS request, I need to pass an HTTP request to DansGuardian on behalf of the original source, let it decide what happens to the request, get the result, and redirect the client elsewhere based on the response from DansGuardian.
The network skeleton is like this:
Client -> DNS Proxy -> DG -> Privoxy -> Web.
Client requests A, the DNS proxy intercepts, asks DG on behalf of the client, and gets the answer: 1. If DG filtered it, the proxy sends a local IP address instead of the actual IP for the A question. 2. If DG didn't filter it, the DNS proxy lets the client's traffic flow naturally.
Here is the sample python code that I've tried:
# imports this fragment needs (SourceAddressAdapter comes from requests_toolbelt)
import requests
import tldextract
from dnslib import DNSRecord, RR
from requests_toolbelt.adapters.source import SourceAddressAdapter

data, addr = sock.recvfrom(1024)
OriginalDNSPacket = data
# I get OriginalDNSPacket from a socket
# to which iptables redirected all port 53 packets
UDPanswer = sendQues(OriginalDNSPacket, '8.8.8.8')

proxies = {'http': 'http://127.0.0.1:8080'}  # DG port
s = requests.Session()

d = DNSRecord.parse(UDPanswer)
print d

ques_domain = str(d.questions[0].get_qname())[:-1]

ques_tld = tldextract.extract(ques_domain)
ques_tld = "{}.{}".format(ques_tld.domain, ques_tld.suffix)
print ques_tld

for rr in d.rr:
    try:
        s.mount("http://" + ques_tld, SourceAddressAdapter(addr[0]))  # This was a silly try, I know.
        s.proxies.update(proxies)
        response = s.get("http://" + ques_tld)
        print response.content
        if "Access Denied" in response.content:
            d.rr = []
            d.add_answer(*RR.fromZone(ques_domain + " A " + SERVER_IP))
            d.add_answer(*RR.fromZone(ques_domain + " AAAA fe80::a00:27ff:fe4a:c8ec"))
            print d
            sock.sendto(d.pack(), addr)  # reply via the UDP socket, not the socket module
            return
        else:
            sock.sendto(UDPanswer, addr)
            return
    except Exception, e:
        print e
The question is: how can I send the request to DG and fool it into thinking the request comes from the client?
In dansguardian.conf, usexforwardedfor needs to be enabled.
So the conf now looks like this:
...
# if on it adds an X-Forwarded-For: <clientip> to the HTTP request
# header. This may help solve some problem sites that need to know the
# source ip. on | off
forwardedfor = on
# if on it uses the X-Forwarded-For: <clientip> to determine the client
# IP. This is for when you have squid between the clients and DansGuardian.
# Warning - headers are easily spoofed. on | off
usexforwardedfor = on
...
And on the proxy server I just needed to add the following, which I had tried before, but it didn't work because of the DG conf:
response = s.get("http://"+ques_tld, headers={'X-Forwarded-For': addr[0]})
It worked like a charm.
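For reference, the check can be wrapped up as a small helper (is_filtered is a hypothetical name; it assumes the dansguardian.conf settings above and DG listening on 127.0.0.1:8080):

import requests

def is_filtered(domain, client_ip, dg_proxy='http://127.0.0.1:8080'):
    # Ask DG on behalf of the client by spoofing X-Forwarded-For;
    # this only works with forwardedfor/usexforwardedfor enabled in DG.
    response = requests.get("http://" + domain,
                            proxies={'http': dg_proxy},
                            headers={'X-Forwarded-For': client_ip})
    return "Access Denied" in response.content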
Thanks @boardrider.
I'm trying to build a simple web server using Python.
I'm trying to send a minimal response to a Mozilla web browser as the client, but the browser keeps spinning. Code is below:
import socket
mysocket = socket.socket(socket.AF_INET, socket.SOCK_STREAM)  # 2, 1 == AF_INET, SOCK_STREAM
mysocket.bind(('',80))
mysocket.listen(5)
cli2,addr2 = mysocket.accept()
print('Client connected')
status = b'HTTP/1.1 200 OK\r\n'
connection_type=b'Connection: close\r\n'
content_type = b'Content-Type: text/html\r\n'
server = b'Server: Python-Server/5.2\r\n\r\n'
f = open('c:/users/totz/documents/index.html','r')
data = f.read()
data_b = data.encode()
content_html_length_calculation = len(data) * 8
content_length_header = 'Content-Length: ' + str(content_html_length_calculation) + '\r\n'
content_length_header_b = content_length_header.encode()
sending_data = status + connection_type + content_length_header_b + content_type + server
cli2.send(sending_data)
print('Data sent')
mysocket.close()
Why does the client keep spinning, even though Wireshark tells me that this web server sent the response to the client?
content_html_length_calculation = len(data) * 8
It looks like you assume that the Content-Length is given in bits, since you multiply the length of the data by 8. However, Content-Length is given in bytes. Since your claimed Content-Length is far bigger than the actual data, the browser keeps waiting for more data.
Apart from that, your server never actually sends the body: data_b is built but never included in sending_data, so the browser receives headers only. It also does not read the request from the client, which might cause additional problems (like "Connection reset" reports when you close the client socket cli2).
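A minimal corrected sketch (assuming Python 3, as the question's print() calls suggest, and an index.html in the current directory rather than the original absolute path):

import socket

mysocket = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
mysocket.bind(('', 80))
mysocket.listen(5)
cli2, addr2 = mysocket.accept()

cli2.recv(4096)  # read the client's request (discarded here)

with open('index.html', 'rb') as f:
    body = f.read()  # read as bytes, so len(body) is the size in bytes

headers = (b'HTTP/1.1 200 OK\r\n'
           b'Connection: close\r\n'
           b'Content-Type: text/html\r\n'
           b'Content-Length: ' + str(len(body)).encode() + b'\r\n'
           b'Server: Python-Server/5.2\r\n'
           b'\r\n')

cli2.sendall(headers + body)  # headers, blank line, then the body itself
cli2.close()
mysocket.close()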
I need to set up connections to different websites from a list: send some packets and sniff packets for just that website until I move on to the next website (iteration). On each iteration I want to sniff and filter for that address only. Can I achieve that within a single Python script?
sniff(filter="ip and host " + ip_addr,prn=print_summary)
req = "GET / HTTP/1.1\r\nHost: "+ website +"\r\nConnection: keep-alive\r\nCache-Control: max-age=0\r\nUpgrade-Insecure-Requests: 1\r\nUser-Agent: Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Ubuntu Chromium/58.0.3029.110 Chrome/58.0.3029.110 Safari/537.36\r\nAccept: text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,*/*;q=0.8\r\nAccept-Language: en-US,en;q=0.8\r\n\r\n"
url = (website, 80)
c = socket.socket(socket.AF_INET, socket.SOCK_STREAM, proto=socket.IPPROTO_TCP)
c.settimeout(5.0)
c.connect(url)
c.setsockopt(socket.SOL_IP, socket.IP_TTL, i)
c.send(req)
print str(c.recv(4096))
c.close()
I am running the above code in a loop, but during its first run it gets stuck in the sniff function. Can anyone help me with this?
Sniffing packets for a single website isn't easy, as the Berkeley Packet Filter syntax used by scapy doesn't have a simple option for HTTP. See this question for some suggestions on the options available.
One possibility is to sniff the TCP packets to/from your web proxy server; I have done this in the code sample below, which saves the TCP packets for a list of different URLs to individual named files. I haven't put in any logic to detect when the page load finishes, I just used a 60 second timeout. If you want something different then you can use this as a starting point. If you don't have a proxy server to sniff then you'll need to change the bpf_filter variable.
NB if you want to save the raw packet data, instead of the converted-to-string version, then modify the relevant line (which is commented in the code.)
from scapy.all import *
import urllib
import urlparse
import threading
import re

proxy = "http://my.proxy.server:8080"
proxyIP = "1.2.3.4"  # IP address of proxy

# list of URLs
urls = ["http://www.bbc.co.uk/news",
        "http://www.google.co.uk"]

packets = []

# packet callback
def pkt_callback(pkt):
    packets.append(pkt)  # save the packet

# monitor function
def monitor(fname):
    del packets[:]
    bpf_filter = "tcp and host " + proxyIP  # set this filter to capture the traffic you want
    sniff(timeout=60, prn=pkt_callback, filter=bpf_filter, store=0)
    f = open(fname + ".data", 'w')
    for pkt in packets:
        f.write(repr(pkt))  # or just save the raw packet data instead
        f.write('\n')
    f.close()

for url in urls:
    print "capturing: " + url
    mon = threading.Thread(target=monitor, args=(re.sub(r'\W+', '', url),))
    mon.start()
    data = urllib.urlopen(url, proxies={'http': proxy})
    # this line gets IP address of url host, might be helpful
    # addr = socket.gethostbyname(urlparse.urlparse(data.geturl()).hostname)
    mon.join()
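If you do want the raw packet data rather than repr() strings, a hedged alternative (assuming you keep the packets list above) is scapy's wrpcap, which writes a pcap file that Wireshark or rdpcap() can reopen:

from scapy.all import wrpcap

# inside monitor(), in place of the repr()-based file writing:
wrpcap(fname + ".pcap", packets)  # pcap file readable by Wireshark or rdpcap()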
Hope this gives you a good starting point.
I have a device that sends commands to a web server. I've redirected those commands to my own server, with the goal of using the device to run another part of my system. In short, the device sends commands, and I intercept and use them.
The commands are sent to my server, but they are not valid HTTP requests. I'm trying to use Flask to read them with Python, because I'd like these commands to go straight into another web app.
Note that I can't change how the commands are sent.
Using sockets, I can read the data. For instance, here is a version of the data sent via socket (data is meaningless, just for illustration):
b'#123456#A'
In contrast, an HTTP message looks like this:
b'POST / HTTP/1.1\r\nHost: 123.123.123.123:12345\r\nRequest info here'
I know how to filter these (they always start with a #). Can I hook Flask to let me handle these requests differently, before they are parsed as HTTP requests?
Update: The code I used to read the requests, in case it provides context:
import socket

host = ''
port = 5000
backlog = 5
size = 1024

s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.bind((host, port))
s.listen(backlog)

while 1:
    client, address = s.accept()
    data = client.recv(size)
    print("Request:")
    print(data)
    print("\n\n")
    if data:
        client.send(data)
    client.close()
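For what it's worth, Flask itself sits behind a WSGI server, which will already have rejected or mangled non-HTTP bytes by the time Flask sees them, so a hook at that level isn't really available. A hedged sketch building on the socket code above: peek at the first byte and dispatch, where handle_raw_command and the HTTP hand-off are hypothetical placeholders.

import socket

def handle_raw_command(data):
    # hypothetical handler for the device's '#...' commands
    print("Device command:", data)

s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.bind(('', 5000))
s.listen(5)

while True:
    client, address = s.accept()
    data = client.recv(1024)
    if data.startswith(b'#'):
        handle_raw_command(data)  # not HTTP: treat it as a device command
    else:
        # looks like HTTP: forward the bytes to the Flask/WSGI server
        # listening on another port (forwarding code omitted here)
        pass
    client.close()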
#!/usr/bin/python
from scapy.all import *

def findWeb():
    a = sr1(IP(dst="8.8.8.8")/UDP()/DNS(qd=DNSQR(qname="www.google.com")), verbose=0)
    return a[DNSRR].rdata

def sendPacket(dst, src):
    ip = IP(dst=dst)
    SYN = TCP(sport=1500, dport=80, flags='S')
    SYNACK = sr1(ip/SYN)

    my_ack = SYNACK.seq + 1
    ACK = TCP(sport=1050, dport=80, flags='A', ack=my_ack)
    send(ip/ACK)

    payload = "stuff"
    PUSH = TCP(sport=1050, dport=80, flags='PA', seq=11, ack=my_ack)
    send(ip/PUSH/payload)

    http = sr1(ip/TCP()/'GET /index.html HTTP/1.0 \n\n', verbose=0)
    print http.show()

src = '10.0.0.24'
dst = findWeb()
sendPacket(dst, src)
I'm trying to send HTTP packets with Scapy. I am using Ubuntu on VMware.
The problem is that every time I send messages I get a RESET. How do I fix it? Thanks.
A few things I notice are wrong:
1. You have your sequence number set statically (seq=11), which is wrong. Initial sequence numbers are randomly generated and must then be used as per RFC 793, so the sequence should be seq=SYNACK[TCP].ack.
2. You set your source port to 1500 in the SYN packet, but then use it as 1050 (typo?).
3. You don't need the extra payload/PUSH.
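One more note, based on general Scapy experience rather than anything in the question: even with those items fixed you may still see RSTs, because the Linux kernel's TCP stack knows nothing about the handshake Scapy performs and will answer the server's SYN-ACK with a RST of its own. A common workaround is to drop outgoing RSTs on the source port you use, e.g.:

iptables -A OUTPUT -p tcp --tcp-flags RST RST --sport 1500 -j DROP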
Also, have a look at these threads:
How to create HTTP GET request Scapy?
Python-Scapy or the like-How can I create an HTTP GET request at the packet level
I'm trying to write a function which will take a URL and return the contents of that URL. There is one additional argument (useTor) which, when set to True, will use SocksiPy to route the request over a SOCKS 5 proxy server (in this case, Tor).
I can set the proxy globally for all connections just fine but I cannot work out two things:
1. How can I move this setting into a function so that it can be decided based on the useTor variable? I'm unable to access socks within the function and have no idea how to do so.
2. I'm assuming that if I don't set the proxy, then the next time the request is made it'll go direct. The SocksiPy documentation doesn't seem to give any indication as to how the proxy is reset.
Can anyone advise? My (beginner's) code is below:
import gzip
import socks
import socket
def create_connection(address, timeout=None, source_address=None):
    sock = socks.socksocket()
    sock.connect(address)
    return sock
# next line works just fine if I want to set the proxy globally
# socks.setdefaultproxy(socks.PROXY_TYPE_SOCKS5, "127.0.0.1", 9050)
socket.socket = socks.socksocket
socket.create_connection = create_connection
import urllib2
import sys
import StringIO  # needed below for the gzip/deflate handling
import zlib
def getURL(url, useTor=False):
    if useTor:
        print "Using tor..."
        # Throws: AttributeError: 'module' object has no attribute 'setproxy'
        socks.setproxy(socks.PROXY_TYPE_SOCKS5, "127.0.0.1", 9050)
    else:
        print "Not using tor..."
        # Not sure how to cancel the proxy, assuming it persists

    opener = urllib2.build_opener()
    usock = opener.open(url)
    url = usock.geturl()
    encoding = usock.info().get("Content-Encoding")
    if encoding in ('gzip', 'x-gzip', 'deflate'):
        content = usock.read()
        if encoding == 'deflate':
            data = StringIO.StringIO(zlib.decompress(content))
        else:
            data = gzip.GzipFile('', 'rb', 9, StringIO.StringIO(content))
        result = data.read()
    else:
        result = usock.read()
    usock.close()
    return result
# Connect to the same site both with and without using Tor
print getURL('https://check.torproject.org', False)
print getURL('https://check.torproject.org', True)
Example
Simply invoke socksocket.setproxy with no arguments; this will effectively remove any previously set proxy settings.
import socks

sck = socks.socksocket()

# use Tor
sck.setproxy(socks.PROXY_TYPE_SOCKS5, "127.0.0.1", 9050)

# reset to normal use
sck.setproxy()
Details
By looking at the source of socks.py and digging into the contents of socksocket.setproxy, we quickly realize that in order to discard any previous proxy attributes we simply invoke the function with no additional arguments (besides self).
class socksocket(socket.socket):
    ...  # additional functionality ignored
    def setproxy(self, proxytype=None, addr=None, port=None, rdns=True, username=None, password=None):
        """setproxy(proxytype, addr[, port[, rdns[, username[, password]]]])
        Sets the proxy to be used.
        proxytype - The type of the proxy to be used. Three types
            are supported: PROXY_TYPE_SOCKS4 (including socks4a),
            PROXY_TYPE_SOCKS5 and PROXY_TYPE_HTTP
        addr - The address of the server (IP or DNS).
        port - The port of the server. Defaults to 1080 for SOCKS
            servers and 8080 for HTTP proxy servers.
        rdns - Should DNS queries be performed on the remote side
            (rather than the local side). The default is True.
            Note: This has no effect with SOCKS4 servers.
        username - Username to authenticate with to the server.
            The default is no authentication.
        password - Password to authenticate with to the server.
            Only relevant when username is also provided.
        """
        self.__proxy = (proxytype, addr, port, rdns, username, password)
    ...  # additional functionality ignored
Note: When a new connection is about to be negotiated, the implementation will use the contents of self.__proxy unless the potentially required element is None (in which case the setting is simply ignored).
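As a hedged footnote on the AttributeError in the question: setproxy is a method of socksocket instances, not of the socks module; the module-level counterpart is socks.setdefaultproxy, which follows the same no-arguments convention. Assuming the question's socket.socket = socks.socksocket monkey-patch is in place, getURL could toggle the default per call (a sketch, not tested against every SocksiPy version):

def getURL(url, useTor=False):
    if useTor:
        # route new sockets through Tor's SOCKS5 port
        socks.setdefaultproxy(socks.PROXY_TYPE_SOCKS5, "127.0.0.1", 9050)
    else:
        socks.setdefaultproxy()  # no arguments: clears the module default
    opener = urllib2.build_opener()
    return opener.open(url).read()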