How to parse a DNS response sent by the server? - python

I have successfully built a DNS request and sent it to my DNS server using UDP sockets in Python. Then I receive the response and convert it into a byte array using Python's bytearray() method. But this response contains a lot of gibberish, unimportant data in addition to the IP addresses of the canonical server. How should I parse this response to get just the IP addresses?
I can see values like 74.125.200.106 inside the response, but how should I extract them from the response bytearray?
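A minimal parsing sketch, assuming a standard response whose answer section contains A records (a library such as dnspython can also do this for you via dns.message.from_wire): the fixed 12-byte header gives the question and answer counts, the question section is skipped, and each answer record's 4-byte RDATA is converted to dotted-quad form.

import socket
import struct

def skip_name(data, offset):
    # Advance past a (possibly compressed) domain name and return the new offset.
    while True:
        length = data[offset]
        if length == 0:                  # root label ends the name
            return offset + 1
        if length & 0xC0 == 0xC0:        # compression pointer occupies 2 bytes
            return offset + 2
        offset += 1 + length             # length byte plus the label itself

def parse_a_records(response):
    # Return the IPv4 addresses found in the answer section of a DNS response.
    qdcount, ancount = struct.unpack("!HH", response[4:8])
    offset = 12                          # end of the fixed DNS header
    for _ in range(qdcount):             # skip the question section
        offset = skip_name(response, offset)
        offset += 4                      # QTYPE + QCLASS
    addresses = []
    for _ in range(ancount):
        offset = skip_name(response, offset)
        rtype, rclass, ttl, rdlength = struct.unpack("!HHIH", response[offset:offset + 10])
        offset += 10
        rdata = response[offset:offset + rdlength]
        offset += rdlength
        if rtype == 1 and rdlength == 4:                 # TYPE 1 = A record
            addresses.append(socket.inet_ntoa(bytes(rdata)))
    return addresses

# usage: parse_a_records(response) where response is the bytearray
# received from sock.recvfrom(); returns e.g. ['74.125.200.106']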

Related

using sockets in Python to get information from a url fragment

I am writing a Python script that interacts with the Spotify API. To authenticate, I generate a URL that looks something like this: https://accounts.spotify.com/authorize?response_type=token&client_id=<client_id_here>&redirect_uri=http%3A%2F%2F127.0.0.1%3A8083. Spotify then redirects me to my redirect URI, which is currently localhost (127.0.0.1:8083). Spotify attaches the access token as a fragment to the redirect URI. I need to access the URI fragment from Python.
Currently I am using the following code (the function generate_login_url() generates the accounts.spotify.com URL I need):
import socket
import webbrowser

webbrowser.open(generate_login_url())
with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as sock:
    sock.bind(('127.0.0.1', 8083))
    sock.listen()
    conn, _ = sock.accept()
    with conn:
        data = conn.recv(1024)
        conn.sendall(data)
When I run this, I can see in the web browser that I have been redirected to the correct URL (http://127.0.0.1:8083/#access_token=<access_token_here>&token_type=Bearer&expires_in=3600), but I cannot find a way to access the fragment from Python. Calling conn.recv(1024) again causes a timeout, and the information I need is not found anywhere in data, which seems to just hold the user-agent string and the headers.
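A minimal sketch of one common workaround (not from the original question): URL fragments are never sent in the HTTP request, so the raw socket cannot see the token directly. Instead, the first request can be answered with a tiny page whose JavaScript re-sends the fragment as a query string, which the socket can read on a second accept.

import socket

body = "<script>fetch('/token?' + location.hash.slice(1));</script>"
response = (
    "HTTP/1.1 200 OK\r\n"
    "Content-Type: text/html\r\n"
    f"Content-Length: {len(body)}\r\n"
    "Connection: close\r\n\r\n" + body
)

with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as sock:
    sock.bind(('127.0.0.1', 8083))
    sock.listen()
    conn, _ = sock.accept()          # first request: the fragment is not in it
    with conn:
        conn.recv(1024)
        conn.sendall(response.encode())
    conn, _ = sock.accept()          # second request carries the token as a query string
    with conn:
        request_line = conn.recv(1024).decode().splitlines()[0]
        print(request_line)          # e.g. GET /token?access_token=...&token_type=Bearer... HTTP/1.1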

Persistent HTTP Connection with Pipelining

I read that HTTP pipelining is not activated by default in modern browsers.
How can I implement a persistent HTTP connection with pipelining in Python (i.e. coding the socket layer from scratch), without using the requests library, to download all the PDF files in the slides folder at http://web.stanford.edu/class/cs224w/slides/?
I have tried sending the requests from scratch many times using only the socket and threading modules (because I am not allowed to use the requests library, or anything like it that sends requests automatically), but without any result.
I made a TCP socket connection like this:
client = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
client.connect((host, port))
After that I create many sending threads, each issuing a request in this format:
request = f"GET {path}{file_name} HTTP/1.1\r\nHost:{host}\r\nConnection: Keep-Alive\r\n\r\n"
Then I create many receiving threads to receive the data, but the host returns the responses one after another, in request order.
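For reference, a minimal single-threaded sketch of pipelining over one raw socket (assuming plain HTTP on port 80, responses that carry a Content-Length header rather than chunked encoding, and hypothetical file names; the real names would come from parsing the directory index first): all requests are written back-to-back on the same connection, then the responses are read back in the same order.

import socket

host = "web.stanford.edu"
path = "/class/cs224w/slides/"
file_names = ["01-intro.pdf", "02-gnn.pdf"]   # hypothetical examples

def content_length(raw_headers):
    # Pull the Content-Length value out of a raw header block.
    for line in raw_headers.split(b"\r\n"):
        if line.lower().startswith(b"content-length:"):
            return int(line.split(b":", 1)[1])
    return 0

client = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
client.connect((host, 80))

# Pipelining: write every request before reading any response.
for name in file_names:
    request = f"GET {path}{name} HTTP/1.1\r\nHost: {host}\r\nConnection: keep-alive\r\n\r\n"
    client.sendall(request.encode())

# The server answers on the same connection, in request order.
buffer = b""
for name in file_names:
    while b"\r\n\r\n" not in buffer:
        buffer += client.recv(4096)
    raw_headers, buffer = buffer.split(b"\r\n\r\n", 1)
    length = content_length(raw_headers)
    while len(buffer) < length:
        buffer += client.recv(4096)
    body, buffer = buffer[:length], buffer[length:]
    with open(name, "wb") as f:
        f.write(body)

client.close()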

How can I send a request through a proxy with aioquic HTTP/3?

I am trying to understand how HTTP/3 works. Ultimately, my goal is to send an HTTP/3 request to a host through a proxy and receive a response back.
The host I am trying to reach only accepts HTTP/3 connections.
There is a library that takes care of the heavy lifting to initiate an HTTP/3 connection; however, its examples don't demonstrate how a proxy can be passed into the packets:
https://github.com/aiortc/aioquic/blob/main/examples/http3_client.py
I am running the following file after cloning the repo like this:
python3 examples/http3_client.py 'https://www.truepeoplesearch.com/'
Doing so does route the request over HTTP/3 using the QUIC protocol. How can I send the same request through a proxy, given the proxy's IP, port, username and password?

How can I see the IP address in the request header in Python requests?

When I make a request to a URL, I believe the IP address of the client is sent to the server. In that case the IP address must be present in the request headers, or am I wrong?
So, I say
import requests
resp = requests.get("http://localhost:8000/test")
print resp.request.headers
But I can't see any IP address when printing resp.request.headers, so how can I view the IP address of the client? And if I can't see it as part of resp.request.headers, how does the server get the client's IP address if it isn't present in the request headers?
The IP address is not set by the client; it is part of the connection. Quote from here:
(remote address) is taken from the ip address that hits the http server and not in
the request. as if you are behind a proxy you would only see the ip of
the proxy. it doesnt touch the http header data
A server accepts a connection, so naturally it also knows where the connection comes from.
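A minimal Python 3 sketch illustrating the point: the server reads the peer's address from the TCP connection itself (self.client_address in http.server), not from any request header.

from http.server import BaseHTTPRequestHandler, HTTPServer

class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        print("client address:", self.client_address)  # taken from the socket, e.g. ('127.0.0.1', 53412)
        self.send_response(200)
        self.end_headers()
        self.wfile.write(b"ok")

HTTPServer(("localhost", 8000), Handler).serve_forever()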

How do I send and receive HTTP POST requests in Python?

I have these two Python scripts I'm using to attempt to work out how to send and receive POST requests in Python:
The Client:
import httplib
conn = httplib.HTTPConnection("localhost:8000")
conn.request("POST", "/testurl")
conn.send("clientdata")
response = conn.getresponse()
conn.close()
print(response.read())
The Server:
from BaseHTTPServer import BaseHTTPRequestHandler, HTTPServer

ADDR = "localhost"
PORT = 8000

class RequestHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        print(self.path)
        print(self.rfile.read())
        self.send_response(200, "OK")
        self.end_headers()
        self.wfile.write("serverdata")

httpd = HTTPServer((ADDR, PORT), RequestHandler)
httpd.serve_forever()
The problem is that the server hangs on self.rfile.read() until conn.close() has been called on the client, but if conn.close() is called on the client, the client cannot receive a response from the server. This creates a situation where one can either get a response from the server or read the POST data, but never both. I assume there is something I'm missing here that will fix this problem.
Additional information:
conn.getresponse() causes the client to hang until the response is received from the server. The response doesn't appear to be received until the function on the server has finished execution.
There are a couple of issues with your original example. The first is that if you use the request method, you should include the message body you want to send in that call, rather than calling send separately. The documentation notes send() can be used as an alternative to request:
As an alternative to using the request() method described above, you can also send your request step by step, by using the four functions below.
You just want conn.request("POST", "/testurl", "clientdata").
The second issue is the way you're trying to read what's sent to the server. self.rfile.read() attempts to read the entire input stream coming from the client, which means it will block until the stream is closed. The stream won't be closed until connection is closed. What you want to do is read exactly how many bytes were sent from the client, and that's it. How do you know how many bytes that is? The headers, of course:
length = int(self.headers['Content-length'])
print(self.rfile.read(length))
I highly recommend the python-requests library if you're going to do more than very basic tests, and a better HTTP framework/server than BaseHTTPServer (flask, bottle, tornado, etc.).
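Putting both fixes together, a sketch that keeps the question's Python 2 modules: the body is passed to request(), and the handler reads exactly Content-Length bytes so it never blocks waiting for the connection to close.
The client:
import httplib

conn = httplib.HTTPConnection("localhost:8000")
conn.request("POST", "/testurl", "clientdata")     # body goes into request(), which also sets Content-Length
response = conn.getresponse()
print(response.read())
conn.close()
The server handler:
class RequestHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        length = int(self.headers['Content-length'])   # read exactly the body, no more
        print(self.rfile.read(length))
        self.send_response(200, "OK")
        self.end_headers()
        self.wfile.write("serverdata")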
This was answered long ago, but it came up during a search, so I'll add another piece of the answer. To prevent the server from keeping the stream open (resulting in the response never being sent), you can use self.rfile.read1() instead of self.rfile.read().
