Data from a Python script to URL as JSON - python

I've spent a lot of time on this but still can't seem to get it to work. The task: the script has to pull system stats, convert the namedtuple of each CPU's stats on the machine, and send them all to a URL in one single POST request as JSON. The connection must close once the data has been sent.
For the '1 single POST request' functionality, I added the latter function (senddata_to_server) to the script. Without it (with the connection details simply listed at module level, not in a function), when I ran it on Mac/Windows/Linux it would print all the namedtuples one by one, then a '200 OK', and then keep printing 'Connection refused' forever. Now when I run it, it just hangs without returning anything.
(I asked a similar question earlier (HTTP Post request with Python JSON), but here I need to have the 'params' inside the loop and the connection details outside it.)
import psutil
import socket
import time
import sample
import json
import httplib
import urllib
serverHost = sample.host
port = sample.port
thisClient = socket.gethostname()
currentTime = int(time.time())
s = socket.socket()
s.connect((serverHost, port))
cpuStats = psutil.cpu_times_percent(percpu=True)
def loop_thru_cpus():
    while True:
        global cpuStats
        cpuStats = "/n".join([json.dumps(stats._asdict()) for stats in cpuStats])
        try:
            command = 'put cpu.usr ' + str(currentTime) + " " + str(cpuStats[0]) + "host ="+thisClient+ "/n"
            s.sendall(command)
            command = 'put cpu.nice ' + str(currentTime) + " " + str(cpuStats[1]) + "host ="+ thisClient+ "/n"
            s.sendall(command)
            command = 'put cpu.sys ' + str(currentTime) + " " + str(cpuStats[2]) + "host ="+ thisClient+ "/n"
            s.sendall(command)
            command = 'put cpu.idle ' + str(currentTime) + " " + str(cpuStats[3]) + "host ="+ thisClient+ "/n"
            s.sendall(command)
            params = urllib.urlencode({'cpuStats': cpuStats, 'deviceKey': 1234, 'timeStamp': str(currentTime)})
            return params
            print cpuStats
        except IndexError:
            continue
        except socket.error:
            print "Connection refused"
            continue
        finally:
            s.close()

def senddata_to_server():
    x = loop_thru_cpus()
    headers = {'Content-type': 'application/json', 'Accept': 'text/plain'}
    conn = httplib.HTTPConnection(serverHost, port)
    conn.request = ("POST", "", x.params, headers)
    response = conn.response()
    print response.status, response.reason
    print x.cpuStats
    conn.close()

loop_thru_cpus()
senddata_to_server()

Given the task and the code/logic here, what am I doing wrong?
I can't quite tell what your task is, but here are some things you may be doing wrong:
You are connecting to the server twice: directly (via socket.connect()) and through the framework (via httplib.HTTPConnection.connect())
You misspelled newline: it should be '\n', not '/n'
You have a while loop that can only execute once (because you return in the middle of it).
You have a print statement after your return statement
You are sending malformed put commands to the web server
You call loop_thru_cpus() twice
You set the content-type incorrectly to application/json -- you aren't sending well-formed json.
You aren't sending a url to httplib.HTTPConnection.request() (may be allowed in practice, but it is disallowed by the documentation)
You aren't invoking conn.request() correctly -- get rid of =
In the documentation it says to call conn.getresponse(), not conn.response()
Here is a program that hopefully does what you ask for:
import psutil
import socket
import time
import json
import httplib
import urllib
# httpbin provides an echo service at http://httpbin.org/post
serverHost = 'httpbin.org'
port = 80
url = 'http://httpbin.org/post'
# My psutil only has cpu_times, not cpu_times_percent
cpuStats = psutil.cpu_times(percpu=True)
# Convert each namedTuple to a json string
cpuStats = [json.dumps(stats._asdict()) for stats in cpuStats]
# Convert each json string to the form required by the assignment
cpuStats = [urllib.urlencode({'cpuStats':stats, 'deviceKey':1234}) for stats in cpuStats]
# Join stats together, one per line
cpuStats = '\n'.join(cpuStats)
# Send the data ...
# connect
conn = httplib.HTTPConnection(serverHost, port)
# Send the data
conn.request("POST", url, cpuStats)
# Check the response, should be 200
response = conn.getresponse()
print response.status, response.reason
# httpbin.org provides an echo service -- what did we send?
print response.read()
conn.close()
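If the assignment really does require well-formed JSON with the application/json content type (see the list of problems above), a minimal sketch along these lines would gather every CPU's stats into one JSON document and send it in a single POST. The field names and the httpbin endpoint here are assumptions for illustration, not part of the assignment:
import json
import httplib
import psutil

# Collect all per-CPU stats into one JSON document (field names assumed)
stats = [s._asdict() for s in psutil.cpu_times(percpu=True)]
body = json.dumps({'cpuStats': stats, 'deviceKey': 1234})
headers = {'Content-type': 'application/json', 'Accept': 'text/plain'}

# One POST request, then close the connection
conn = httplib.HTTPConnection('httpbin.org', 80)
conn.request("POST", "/post", body, headers)
response = conn.getresponse()
print response.status, response.reason
conn.close()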

Related

The Python Request module does not function when including a proxy

I have recently tried the Python requests module and it seems to work fine up until the point when I include a proxy in the call. I am using the Burp Suite proxy; when I run the code, the program gets stuck on the line that makes the request.
import requests
import sys
import urllib3

#input = "https://0a0100660376e8efc04b1a7600880072.web-security-academy.net/"
urllib3.disable_warnings(urllib3.exceptions.InsecureRequestWarning)
proxies = {'http': 'http://127.0.0.1:8080', 'https': 'https://127.0.0.1:8080'}

def exploit_sqli_column_number(url):
    path = "filter?category=Tech+gifts"
    for i in range(1, 51):
        sql_payload = "'+order+by+%s--" % i
        r = requests.get(url + path + sql_payload, verify=False, proxies=proxies)
        res = r.text
        if "Internal Server Error" in res:
            return i - 1
    return False

if __name__ == "__main__":
    try:
        url = sys.argv[1]
    except IndexError:
        print("[-] Usage: %s <url>" % sys.argv[0])
        print("[-] Example: %s www.example.com" % sys.argv[0])
        sys.exit(-1)
    print("[+] Figuring out number of columns.")
    num_col = exploit_sqli_column_number(url)
    if num_col:
        print("[+] The number of columns is " + str(num_col) + ".")
    else:
        print("[-] The SQL Injection was not successful.")
I have tried other scripts where I just make the request without using the proxy and it works just fine. I have also checked the IP address and the port, so there should be no issues with that.
Thank you for your help in advance.
This code works for me:
import requests
import sys
import urllib3
urllib3.disable_warnings(urllib3.exceptions.InsecureRequestWarning)
proxies = {'http': 'http://127.0.0.1:8085', 'https': 'https://127.0.0.1:8085'}
r = requests.get('https://www.google.com', verify = False, proxies = proxies)
print(r)
I'd make sure you set the correct ports in Burp Suite under Proxy -> Options, and make sure you turn off intercept. If your code is just hanging and not giving any error, then the issue is that you have not turned off intercept. I would also try using a port other than the default 8080 for your proxy.
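It can also help to set a timeout so a misconfigured proxy fails fast instead of blocking forever. A small sketch (the timeout value and target URL are arbitrary choices):
import requests

proxies = {'http': 'http://127.0.0.1:8080', 'https': 'https://127.0.0.1:8080'}
try:
    r = requests.get('https://example.com', proxies=proxies, verify=False, timeout=5)
    print(r.status_code)
except requests.exceptions.ProxyError as e:
    print('Proxy refused or unreachable:', e)
except requests.exceptions.Timeout:
    print('Request timed out -- is Burp intercept still on?')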

Python proxy server fails to connect to host

I'm making a Python proxy server for a school assignment and I've got the code below. When I run it in my command prompt and attempt to connect to Google, the code doesn't make it past connecting the server socket, but the page still connects. I honestly have no idea why it doesn't even go through the connection step. Thoughts?
EDIT: Yes, there have been other homework posts about this, but none of them seem to address the fact that the sys.exit() on line 8 ends the script (to my knowledge anyway), and whenever we comment it out, the script still does not get past connecting the server socket and hits the "Illegal request" exception.
from socket import *
from urllib2 import HTTPError  # Used for 404 Not Found error
import sys
import requests

if len(sys.argv) <= 1:
    print 'Usage : "python ProxyServer.py server_ip"\n[server_ip : It is the IP Address Of Proxy Server]'
    #sys.exit(2)

# POST request extension
print 'Fetching webpage using POST'
r = requests.post('http://httpbin.org/post', data={'key': 'value'})
print 'Printing webpage body'
print r.text

print 'Creating and binding socket for proxy server'
# Create a server socket, bind it to a port and start listening
tcpServerSock = socket(AF_INET, SOCK_STREAM)
# Fill in start.
tcpServerSock.bind(('', 8888))
tcpServerSock.listen(10)  # the number is the maximum number of connections we want to have
# Fill in end.

while 1:
    # Start receiving data from the client
    print 'Ready to serve...'
    tcpClientSock, addr = tcpServerSock.accept()
    print 'Received a connection from:', addr
    # Fill in start.
    message = tcpClientSock.recv(4096)  # receive data with buffer size 4096
    # Fill in end.
    print 'Printing message'
    print message
    # Extract the filename from the given message
    print message.split()[1]
    filename = message.split()[1].partition("/")[2]
    print '\n'
    print 'Printing file name'
    print filename
    fileExist = "false"
    filetouse = "/" + filename
    print '\n'
    print 'Printing file to use'
    print filetouse
    print '\n'
    try:
        # Check whether the file exist in the cache
        f = open(filetouse[1:], "r")
        outputdata = f.readlines()
        fileExist = "true"
        # ProxyServer finds a cache hit and generates a response message
        tcpClientSock.send("HTTP/1.0 200 OK\r\n")
        tcpClientSock.send("Content-Type:text/html\r\n")
        # Fill in start.
        for x in range(0, len(outputdata)):
            tcpClientSock.send(outputdata[x])
        # Fill in end.
        print 'Read from cache\n'
    # Error handling for file not found in cache
    except IOError:
        if fileExist == "false":
            # Create a socket on the proxyserver
            # Fill in start.
            print 'Creating server socket\n'
            c = socket(AF_INET, SOCK_STREAM)
            # Fill in end.
            hostn = filename
            #hostn = filename.replace("www.","",1)
            print 'Printing host to connect'
            print hostn
            print '\n'
            print 'Attempting to connect to hostn\n'
            try:
                # Connect to the socket to port 80
                # Fill in start.
                c.connect((hostn, 80))  # port 80 is used for http web pages
                # Fill in end.
                # Create a temporary file on this socket and ask port 80
                # for the file requested by the client
                fileobj = c.makefile('r', 0)
                fileobj.write("GET " + "http://" + filename + "HTTP/1.0\n\n")
                # Show what request was made
                print "GET " + "http://" + filename + " HTTP/1.0"
                # Read the response into buffer
                # Fill in start.
                buff = fileobj.readlines()  # reads until EOF and returns a list with the lines read
                # Fill in end.
                # Create a new file in the cache for the requested file.
                # Also send the response in the buffer to client socket
                # and the corresponding file in the cache
                tmpFile = open("./" + filename, "wb")  # creates the temp file for the requested file
                # Fill in start.
                for x in range(0, len(buff)):
                    tmpFile.write(buff[x])  # writes the buffer response into the temp file (cache?)
                    tcpClientSock.send(buff[x])  # sends the response saved in the buffer to the client
                # Fill in end.
                tmpFile.close()
            except:
                print "Illegal request\n"
        else:
            # HTTP response message for file not found
            # Fill in start.
            print 'File not found'
            # Fill in end.
    # 404 not found error handling
    except HTTPError as e:
        print 'The server couldn\'t fulfill the request.'
        print 'Error code: ', e.code
    # Close the client and the server sockets
    tcpClientSock.close()
    # Fill in start.
    tcpServerSock.close()
    # Fill in end
I'm aware this question is old, and Jose M's assignment is probably long past due.
if len(sys.argv) <= 1: checks for an additional argument that needs to be passed, which is the IP of the server. Commenting out the exit essentially removes the error checking.
A fix for the code above is to change line 20 from this tcpServerSock.bind(('', 8888)) to this tcpServerSock.bind((sys.argv[1], 8888))
You must then call the script correctly: python ProxyServer.py 127.0.0.1.
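Putting both pieces together, a minimal sketch of the corrected startup (keeping the question's port 8888):
import sys
from socket import socket, AF_INET, SOCK_STREAM

if len(sys.argv) <= 1:
    print 'Usage : "python ProxyServer.py server_ip"'
    sys.exit(2)  # keep the exit: the bind below needs sys.argv[1]

tcpServerSock = socket(AF_INET, SOCK_STREAM)
tcpServerSock.bind((sys.argv[1], 8888))  # bind to the IP passed on the command line
tcpServerSock.listen(10)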

Manually catch HTTPError/Exceptions in socket programming in Python

I'm sending raw HTTP headers to a website, and I want to detect errors such as 400 Bad Request or 404 Not Found manually without using urllib or Requests package. I'm sending a HEAD request like this:
head_request = "HEAD " + url_path + " HTTP/1.1\nHost: %s\r\n\r\n" % (host)
socket_id = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
socket_id.connect((host, 80))
socket_id.send(head_request)
recv_head = socket_id.recv(1024)
How should I manually catch Exceptions?
One way is to manually search for the HTTP response using a regular expression.
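For example, a minimal sketch that checks the status line in recv_head from the question (the regular expression is one reasonable choice, not the only one):
import re

# Status line looks like 'HTTP/1.1 404 Not Found\r\n'
match = re.match(r'HTTP/\d\.\d (\d{3}) ([^\r\n]*)', recv_head)
if match:
    status, reason = int(match.group(1)), match.group(2)
    if status >= 400:
        print 'Server returned an error: %d %s' % (status, reason)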
Another way is to port what you need from the http_parser.c module from the http-parser project.
It can be downloaded from here: https://pypi.python.org/pypi/http-parser/
You can parse the HTTP response using http-parser which works on the socket level.
Here is the description:
http-parser provides parser.HttpParser, a low-level parser in C that you can access in your Python program, and http.HttpStream, providing higher-level access to a readable, sequential io.RawIOBase object.
Here is how you can parse an HTTP response read from a socket in Python, along the lines of your example:
https://github.com/benoitc/http-parser/tree/master/http_parser
import socket
from http_parser.parser import HttpParser

def main():
    p = HttpParser()
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    body = []
    try:
        s.connect(('gunicorn.org', 80))
        s.send("GET / HTTP/1.1\r\nHost: gunicorn.org\r\n\r\n")
        while True:
            data = s.recv(1024)
            if not data:
                break
            recved = len(data)
            nparsed = p.execute(data, recved)
            assert nparsed == recved
            if p.is_headers_complete():
                print p.get_headers()
            if p.is_partial_body():
                body.append(p.recv_body())
            if p.is_message_complete():
                break
        print "".join(body)
    finally:
        s.close()

socket.makefile issues in python 3 while creating a http proxy

from socket import *
import sys

# Create a server socket, bind it to a port and start listening
tcpSerSock = socket(AF_INET, SOCK_STREAM)
serverPort = 12000
tcpSerSock.bind(('', serverPort))
tcpSerSock.listen(1)
print("Server ready")

while 1 == 1:
    # Start receiving data from the client. e.g. request = "GET http://localhost:portNum/www.google.com"
    tcpCliSock, addr = tcpSerSock.accept()
    print('Received a connection from:', addr)
    request = str(tcpCliSock.recv(1024).decode())
    print("Requested " + request)
    # Extract the file name from the given request
    fileName = request.split()[1]
    print("File name is " + fileName)
    fileExist = "false"
    fileToUse = "/" + fileName
    print("File to use: " + fileToUse)
    try:
        # Check whether the file exists in the cache. The open will fail and go to
        # "except" in case the file doesn't exist. Similar to try/catch in Java.
        f = open(fileToUse[1:], "r")
        outputData = f.readlines()
        fileExist = "true"
        # ProxyServer finds a cache hit and generates a response message
        tcpCliSock.send("HTTP/1.1 200 OK\r\n")
        tcpCliSock.send("Content-Type:text/html\r\n")
        tcpCliSock.send(outputData)
        print('This was read from cache')
    except IOError:
        if fileExist == "false":
            # Create a socket on the proxyserver
            c = socket(AF_INET, SOCK_STREAM)
            hostn = fileName.replace("www.", "", 1)  # max arg specified to 1 in case the webpage contains "www." other than the usual one
            print(hostn)
            try:
                # Connect to the socket to port 80
                c.bind(('', 80))
                # Create a temporary file on this socket and ask port 80 for the file requested by the client
                print("premake")
                fileObj = c.makefile('r', 0)
                print("postmake")
                fileObj.write("GET " + "http://" + fileName + " HTTP/1.1\r\n")
                # Read the response into buffer
                print("post write")
                buff = fileObj.readlines()
                # Create a new file in the cache for the requested file.
                tmpFile = open("./" + filename, "wb")
                # Send the response in the buffer to both client socket and the corresponding file in the cache
                for line in buff:
                    tmpFile.write(line)
                    tcpCliSock.send(tmpFile)
            except:
                print("Illegal request")
                break
        else:
            # HTTP response message for file not found
            print("HTTP response Not found")
    # Close the client and the server sockets
    tcpCliSock.close()
    #tcpSerSock.close()
The code never manages to execute the 'try' entered from 'except IOError'. The problem seems to be the socket.makefile(mode, bufsize) call, which is poorly documented for Python 3. I tried passing 'rwb', 'r+', 'r+b' and so on to the function, but at most I would manage to create the file object and then be unable to write to it.
This is a python2.7 vs python3 issue. While makefile('r',0) works in python 2.7, you need makefile('r',None) in python3.
From the documentation for python2.7:
socket.makefile([mode[, bufsize]])
From the documentation for python3:
socket.makefile(mode='r', buffering=None, *, encoding=None, errors=None, newline=None)
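A minimal sketch of the difference (the host name here is just an example): in Python 3, unbuffered file objects must be binary, so a read/write wrapper for this kind of proxy code would look like:
import socket

s = socket.create_connection(('example.com', 80))
# Python 3: buffering=0 is only legal with a binary mode;
# makefile('r', 0) raises ValueError ("unbuffered streams must be binary")
f = s.makefile('rwb', 0)
f.write(b'GET / HTTP/1.0\r\nHost: example.com\r\n\r\n')
print(f.readline())  # e.g. b'HTTP/1.0 200 OK\r\n'
f.close()
s.close()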

Cache a HTTP GET REQUEST in Python Sockets

I'm making a proxy server using sockets. When the requested file is not in my current directory (the cache), I make an HTTP GET request to the origin server (which is the www) and cache the result for later.
The problem with my code is that every time I fetch a resource from the www I cache it, but the cached file's content is always "Moved permanently".
So this is what happens: the user requests "stackoverflow.com" by entering "localhost:8080/stackoverflow.com" into the browser, and the browser returns the page correctly. When the user enters "localhost:8080/stackoverflow.com" a second time, the browser returns a page saying that stackoverflow.com has moved permanently.
Here is the code of the method that does the http get request and the caching:
@staticmethod
def find_on_www(conn, requested_file):
    try:
        # Create a socket on the proxy server
        print 'Creating socket on proxy server'
        c = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        host_name = requested_file.replace("www.", "", 1)
        print 'Host Name: ', host_name
        # Connect to the socket to port 80
        c.connect((host_name, 80))
        print 'Socket connected to port 80 of the host'
        # Create a temporary file on this socket and ask port 80
        # for the file requested by the client
        file_object = c.makefile('r', 0)
        file_object.write("GET " + "http://" + requested_file + " HTTP/1.0\n\n")
        # Read the response into buffer
        buff = file_object.readlines()
        # Create a new file in the cache for the requested file.
        # Also send the response in the buffer to client socket
        # and the corresponding file in the cache
        temp_file = open("./" + requested_file, "wb")
        for i in range(0, len(buff)):
            temp_file.write(buff[i])
            conn.send(buff[i])
        conn.close()
And here is the rest of my code, if anyone is interested:
import socket     # Socket programming
import signal     # To shut down server on ctrl+c
import time       # Current time
import os         # To get the last-modified
import mimetypes  # To guess the type of requested file
import sys        # To exit the program
from threading import Thread


def generate_header_lines(code, modified, length, mimetype):
    """ Generates the header lines for the response message """
    h = ''
    if code == 200:
        # Append status code
        h = 'HTTP/1.1 200 OK\n'
        # Append the date
        # Append the name of the server
        h += 'Server: Proxy-Server-Thomas\n'
        # Append the date of the last modification to the file
        h += 'Last-Modified: ' + modified + '\n'
    elif code == 404:
        # Append the status code
        h = 'HTTP/1.1 404 Not Found\n'
        # Append the date
        h += 'Date: ' + time.strftime("%a, %d %b %Y %H:%M:%S", time.localtime()) + '\n'
        # Append the name of the web server
        h += 'Server: Web-Server-Thomas\n'
    # Append the length of the content
    h += 'Content-Length: ' + str(length) + '\n'
    # Append the type of the content
    h += 'Content-Type: ' + mimetype + '\n'
    # Append the connection closed - let the client know we close the connection
    h += 'Connection: close\n\n'
    return h


def get_mime_type(requested_file):
    # Get the file's mimetype and encoding
    try:
        (mimetype, encoding) = mimetypes.guess_type(requested_file, True)
        if not mimetype:
            print "Mimetype found: text/html"
            return 'text/html'
        else:
            print "Mimetype found: ", mimetype
            return mimetype
    except TypeError:
        print "Mimetype found: text/html"
        return 'text/html'


class WebServer:
    def __init__(self):
        """
        Constructor
        :return:
        """
        self.host = ''    # Host for the server
        self.port = 8000  # Port for the server
        # Create socket
        self.socket = socket.socket(socket.AF_INET, socket.SOCK_STREAM)

    def start_server(self):
        """ Starts the server
        :return:
        """
        # Bind the socket to the host and port
        self.socket.bind((self.host, self.port))
        print "Connection started on ", self.port
        # Start the main loop of the server - start handling clients
        self.main_loop()

    @staticmethod
    def shutdown():
        """ Shuts down the server """
        try:
            s.socket.close()
        except Exception as e:
            print "Something went wrong closing the socket: ", e

    def main_loop(self):
        """Main loop of the server"""
        while True:
            # Start listening
            self.socket.listen(1)
            # Wait for a client to connect
            client_socket, client_address = self.socket.accept()
            # Wait for a request from the client
            data = client_socket.recv(1024)
            t = Thread(target=self.handle_request, args=(client_socket, data))
            t.start()
            # # Handle the request from the client
            # self.handle_request(client_socket, data)

    def handle_request(self, conn, data):
        """ Handles a request from the client """
        # Decode the data
        string = bytes.decode(data)
        # Split the request
        requested_file = string.split(' ')
        # Get the method that is requested
        request_method = requested_file[0]
        if request_method == 'GET':
            # Get the part of the request that contains the name
            requested_file = requested_file[1]
            # Get the name of the file from the request
            requested_file = requested_file[1:]
            print "Searching for: ", requested_file
            try:
                # Open the file
                file_handler = open(requested_file, 'rb')
                # Get the content of the file
                response_content = file_handler.read()
                # Close the handler
                file_handler.close()
                # Get information about the file from the OS
                file_info = os.stat(requested_file)
                # Extract the last modified time from the information
                time_modified = time.ctime(file_info[8])
                # Get the time modified in seconds
                modified_seconds = os.path.getctime(requested_file)
                print "Current time: ", time.time()
                print "Modified: ", time_modified
                if (float(time.time()) - float(modified_seconds)) > 120:  # more than 2 minutes
                    print "Time outdated!"
                    #self.find_on_www(conn, requested_file)
                # Get the file's mimetype and encoding
                mimetype = get_mime_type(requested_file)
                print "Mimetype = ", mimetype
                # Create the correct header lines
                response_headers = generate_header_lines(200, time_modified, len(response_content), mimetype)
                # Create the response to the request
                server_response = response_headers.encode() + response_content
                # Send the response back to the client
                conn.send(server_response)
                # Close the connection
                conn.close()
            except IOError:  # Couldn't find the file in the cache - Go find file on www
                print "Error: " + requested_file + " not found in cache!"
                self.find_on_www(conn, requested_file)

    @staticmethod
    def find_on_www(conn, requested_file):
        try:
            # Create a socket on the proxy server
            print 'Creating socket on proxy server'
            c = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
            host_name = requested_file.replace("www.", "", 1)
            print 'Host Name: ', host_name
            # Connect to the socket to port 80
            c.connect((host_name, 80))
            print 'Socket connected to port 80 of the host'
            # Create a temporary file on this socket and ask port 80
            # for the file requested by the client
            file_object = c.makefile('r', 0)
            file_object.write("GET " + "http://" + requested_file + " HTTP/1.0\n\n")
            # Read the response into buffer
            buff = file_object.readlines()
            # Create a new file in the cache for the requested file.
            # Also send the response in the buffer to client socket
            # and the corresponding file in the cache
            temp_file = open("./" + requested_file, "wb")
            for i in range(0, len(buff)):
                temp_file.write(buff[i])
                conn.send(buff[i])
            conn.close()
        except Exception as e:
            # Generate a body for the file - so we don't have an empty page
            response_content = "<html><body><p>Error 404: File not found</p></body></html>"
            # Generate the correct header lines
            response_headers = generate_header_lines(404, '', len(response_content), 'text/html')
            # Create the response to the request
            server_response = response_headers.encode() + response_content
            # Send the response back to the client
            conn.send(server_response)
            # Close the connection
            conn.close()


def shutdown_server(sig, dummy):
    """ Shuts down the server """
    # Shutdown the server
    s.shutdown()
    # exit the program
    sys.exit(1)


# Shut down on ctrl+c
signal.signal(signal.SIGINT, shutdown_server)
# Create a web server
s = WebServer()
# Start the server
s.start_server()
The problem with your code is that when you fetch a page that returns a 301 Moved Permanently status, that status line goes into the response you cache. When you view a page that is not stored in your cache, you copy the response to the proxy server's GET request straight to the client. This informs the client that it should make another GET request, which it makes while ignoring your proxy server.
The second time you attempt to request the page through the proxy server, it retrieves the previous response from the cache. That file contains the headers from the previous request, which correctly contain the redirect status code, but you then add your own status code of 200 OK to the returned message. As the client reads this status code first, it does not realise that you wish it to make another request to find the page that has been redirected. Therefore it just shows the page telling you the page has moved.
What you need to do is parse the headers that are returned by the web server when the proxy server has to fetch the actual page from the internet, and then, depending on those headers, send the correct status line and headers back to the client.
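As a rough sketch of that parsing step (Python 2 to match the code above; split_response is a hypothetical helper, not part of the original program), split the fetched response into status line, headers, and body before caching, so the cache hit path can replay the origin server's real status line instead of stamping a fresh 200 OK on top of it:
def split_response(raw_response):
    """Split a raw HTTP response into (status_line, header_lines, body)."""
    head, _, body = raw_response.partition('\r\n\r\n')
    lines = head.split('\r\n')
    return lines[0], lines[1:], body

# buff comes from find_on_www's readlines() call
status_line, header_lines, body = split_response(''.join(buff))
status_code = int(status_line.split()[1])  # assumes a well-formed status line
if status_code in (301, 302):
    # Don't cache the redirect page as if it were the content;
    # follow (or forward) the Location header instead
    location = [h for h in header_lines if h.lower().startswith('location:')]
    print 'Redirect to:', location
else:
    # Cache only the body, and regenerate headers when serving it later
    open('./' + requested_file, 'wb').write(body)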
