I am experiencing problems with the visibility/accessibility of my Python web server running on Ubuntu. The server code is below:
#!/usr/bin/python
from BaseHTTPServer import BaseHTTPRequestHandler, HTTPServer

PORT_NUMBER = 8899

# This class handles any incoming request from the browser
class myHandler(BaseHTTPRequestHandler):

    # Handler for GET requests
    def do_GET(self):
        self.send_response(200)
        self.send_header('Content-type', 'text/html')
        self.end_headers()
        # Send the html message
        self.wfile.write("Hello World !")
        return

try:
    # Create a web server and define the handler to manage incoming requests
    server = HTTPServer(('', PORT_NUMBER), myHandler)
    print 'Started httpserver on port ', PORT_NUMBER

    # Wait forever for incoming http requests
    server.serve_forever()

except KeyboardInterrupt:
    print '^C received, shutting down the web server'
    server.socket.close()
Calling it locally with curl using the command below works - I receive the 'Hello World' answer.
curl {externalIP}:8899
Opening the address in a browser (Chrome, IE) fails:
http://{externalIP}:8899/
ufw status is inactive, and the iptables rules are as below:
Chain INPUT (policy ACCEPT)
target prot opt source destination
ACCEPT tcp -- anywhere anywhere tcp dpt:8765
Chain FORWARD (policy ACCEPT)
target prot opt source destination
Chain OUTPUT (policy ACCEPT)
target prot opt source destination
The Ubuntu machine also has apache2 installed, and opening HTML files in a web browser via the external IP on port 80 works with no problem from the same server...
Any ideas what else I could check?
I think you might be listening on the loopback interface and not the one that is connected to the internet.
Either specify the IP explicitly or use:
server = HTTPServer(('0.0.0.0', PORT_NUMBER), myHandler)
to listen on all of your network interfaces.
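If you want to double-check which address the server actually ends up bound to, printing the bound address from the HTTPServer instance in the question is a quick diagnostic (not part of the original code):
print server.socket.getsockname()  # ('0.0.0.0', 8899) means it is listening on all interfaces
If it prints ('0.0.0.0', 8899), the server is listening on every interface, and the problem is more likely in a firewall or in how the address is being reached.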
Removing Apache fixed this case. I do not know why, because Apache should only have been occupying port 80, but it works now after this:
apt-get remove apache2*
Related
I've been working on a project that requires a bit of networking between a server (hosted on GCE) and multiple clients. I created a Compute Engine Instance to run a Python script as shown in this video: https://www.youtube.com/watch?v=5OL7fu2R4M8.
Here is my server-side script:
import socket

server = socket.gethostbyname(socket.gethostname())  # 10.128.X.XXX, which is the internal IP
print(server)
port = 5555
clients = 0

s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.bind((server, port))
s.listen(2)
print("Waiting for connection...")

while True:
    conn, addr = s.accept()
    print("Connected to: ", addr)
    conn.send(str.encode(f"{clients}"))
    clients += 1
and here is my client-side script:
import socket

class Network:
    def __init__(self):
        self.client = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        self.server = "10.128.0.2"
        self.port = 5555
        self.addr = (self.server, self.port)
        self.id = int(self.connect())

    def connect(self):
        self.client.connect(self.addr)
        return self.client.recv(2048).decode()

network = Network()
print(f"Connected as client {network.id}")
I know this script works because I have tested it with my computer being the server and 1 client, and another computer being the 2nd client. But when I use the GCE as the server, I get this error in the client script:
TimeoutError: [WinError 10060] A connection attempt failed because the connected party did not properly respond after a period of time, or established connection failed because connected host has failed to respond
Could this be because I am using the internal IP address and not the external?
After this, I tried changing the firewall settings (added 'python-socket') of the GCE and this is what they look like:
But the error still persists...
As answered by W_B, I tried to run these commands on my VM and got the following outputs:
From your description it's evident that this is a connection problem.
First of all, check whether the firewall rule you created is still there. If it's "too broad" and allows very wide access, it might be removed automatically without you even knowing it. It's in your screenshot, but check it again just to be sure.
If it's there, select the protocol you're going to be using (I assume it's TCP) - some protocols are always blocked by default by GCP (you can't change this), so creating a rule that allows "any protocol" is risky. Also, put in one or two target IPs (not everything inside the VPC) - this is not a must, but it improves the security of your network.
Second - make sure port 5555, which you're trying to connect to, is accessible from other computers. You can scan the target host with nmap -p 5555 put.server.ip.here
You can scan it from the Internet or from other VMs in the same VPC network.
You should get something like this:
root@localhost:~$ nmap -p 443 192.168.1.6
Starting Nmap 7.70 ( https://nmap.org ) at 2020-06-25 17:12 UTC
Nmap scan report for 192.168.1.6
Host is up (0.00091s latency).
PORT STATE SERVICE
443/tcp open https
Nmap done: 1 IP address (1 host up) scanned in 0.04 seconds
If instead you see 5555/tcp filtered freeciv, it means that something is blocking the port.
Run nmap on the server itself (I assume you run some version of Linux), or if you don't want to install any non-essential software, you can use sudo netstat -tulpn | grep LISTEN to get a list of listening ports (5555 should be on the list).
Also make sure the firewall on your server doesn't block this port. You can check that with iptables.
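If you'd rather test reachability from Python than with nmap, a small check along these lines (run from a client machine, with the placeholder address replaced by the server's external IP) tells you whether the port answers at all:
import socket

probe = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
probe.settimeout(5)
try:
    probe.connect(("your.server.external.ip", 5555))  # placeholder address
    print("Port 5555 is reachable")
except socket.timeout:
    print("Timed out - traffic is probably being dropped by a firewall")
except OSError as exc:
    print("Cannot reach port 5555:", exc)  # e.g. connection refused
finally:
    probe.close()
A timeout usually points at a firewall dropping packets (which matches the WinError 10060 above), while "connection refused" means the traffic arrives but nothing is listening on that port.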
I'm trying to implement a Python server supporting both HTTP and HTTPS based in BaseHTTPServer. This is my code:
server_class = BaseHTTPServer.HTTPServer

# Configure servers
httpd = server_class(("0.0.0.0", 1044), MyHandler)
httpsd = server_class(("0.0.0.0", 11044), MyHandler)
httpsd.socket = ssl.wrap_socket(httpsd.socket, keyfile="/tmp/localhost.key", certfile="/tmp/localhost.crt", server_side=True)

# Run the servers
try:
    httpd.serve_forever()
    httpsd.serve_forever()
except KeyboardInterrupt:
    print("Closing the server...")
    httpd.server_close()
    httpsd.server_close()
So, HTTP runs on port 1044 and HTTPS runs on 11044. The MyHandler class is omitted for the sake of brevity.
Using that code, when I send requests to the HTTP port (e.g. curl http://localhost:1044/path) it works. However, when I send requests to the HTTPS port (e.g. curl -k https://localhost:11044/path) the server never responds, i.e. the curl command just hangs.
I have observed that if I comment out the line starting the HTTP server (i.e. httpd.serve_forever()), then the HTTPS server works, i.e. curl -k https://localhost:11044/path works. Thus, I guess that I'm doing something wrong which prevents me from running both servers at the same time.
Any help is appreciated!
Following the feedback in the comments, I refactored the code to be multithreaded and now it works as expected: serve_forever() blocks, so when the calls were made sequentially the second server never got a chance to start. Running each server in its own thread avoids that.
import ssl
import thread
import time

import BaseHTTPServer

# MyHandler is the same handler class as in the question above


def init_server(http):
    server_class = BaseHTTPServer.HTTPServer
    if http:
        httpd = server_class(("0.0.0.0", 1044), MyHandler)
    else:  # https
        httpd = server_class(("0.0.0.0", 11044), MyHandler)
        httpd.socket = ssl.wrap_socket(httpd.socket, keyfile="/tmp/localhost.key", certfile="/tmp/localhost.crt", server_side=True)
    httpd.serve_forever()
    httpd.server_close()


VERBOSE = "True"
thread.start_new_thread(init_server, (True, ))
thread.start_new_thread(init_server, (False, ))
while 1:
    time.sleep(10)
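For what it's worth, the same idea can also be written with the higher-level threading module (thread was renamed _thread in Python 3, so threading is the more portable choice). This is just a sketch reusing the init_server function above:
import threading

for is_http in (True, False):
    t = threading.Thread(target=init_server, args=(is_http,))
    t.daemon = True  # let Ctrl-C end the whole process
    t.start()

while True:
    time.sleep(10)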
I had a working HTTP server using BaseHTTPServer in Python, so I attempted to add an SSL cert from LetsEncrypt to allow for HTTPS, and now it won't respond or serve any content. No exceptions or errors are thrown.
ssl_key = '/etc/letsencrypt/live/example.com/privkey.pem'
ssl_cert = '/etc/letsencrypt/live/example.com/fullchain.pem'

server_address = ('0.0.0.0', 80)
httpd = HTTPServer(server_address, MyHandler)
# I can comment out the following line and it'll work
httpd.socket = ssl.wrap_socket(httpd.socket, keyfile=ssl_key, certfile=ssl_cert, server_side=True)
httpd.serve_forever()
Where MyHandler is the following:
class MyHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        self.send_response(204)
        self.send_header("Content-Type", "text/html")
        self.end_headers()
        return

    def do_POST(self):
        self.send_response(204)
        self.send_header("Content-Type", "text/html")
        self.end_headers()
        return
Attempting to access the site via web browser from https://example.com returns a standard no-response "Server not found".
I followed these instructions to generate a certificate using LetsEncrypt: https://certbot.eff.org/#ubuntuxenial-other
sudo apt-get install letsencrypt
Followed by:
letsencrypt certonly --standalone -d example.com
Is there any way I can easily figure out what the problem is here? Using Python 3.5. Happy to provide additional info if needed.
server_address = ('0.0.0.0', 80)
Attempting to access the site via web browser from https://example.com returns a standard no-response "Server not found".
https://host without an explicit port specification means that the server is accessed on the default port for the HTTPS protocol, which is 443. But you have set up your server to use port 80 in server_address.
There are two ways to fix this: either explicitly specify the non-standard port for HTTPS in the URL, i.e. https://host:80, or change the port in server_address from 80 to 443. The latter option is probably the better one.
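For example, a minimal version of the server from the question bound to the standard HTTPS port would look like this (same MyHandler and the same ssl_key/ssl_cert paths; note that binding to a port below 1024 normally requires root privileges on Linux):
server_address = ('0.0.0.0', 443)
httpd = HTTPServer(server_address, MyHandler)
httpd.socket = ssl.wrap_socket(httpd.socket, keyfile=ssl_key, certfile=ssl_cert, server_side=True)
httpd.serve_forever()
With that in place, https://example.com (with no port in the URL) should reach the server.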
I've got a simple python socket server. Here's the code:
import socket

host = "0.0.0.0"  # address to bind on.
port = 8081


def listen_serv():
    try:
        s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        s.bind((host, port))
        s.listen(4)
        ...
        # messages back and forth between the server and client
        ...

if __name__ == "__main__":
    while True:
        listen_serv()
When I run the Python server locally and then scan with nmap localhost, I see the open port 8081 with the service blackice-icecap running on it. A quick Google search revealed that this is a firewall service that uses port 8081 for a service called ice-cap remote. If I change the port to 12000, for example, I get another service called cce4x.
A further scan with nmap localhost -sV returns the banner text sent by the Python script:
1 service unrecognized despite returning data. If you know the service/version,
please submit the following fingerprint at https://nmap.org/cgi-bin/submit.cgi?new-service :
SF-Port8081-TCP:V=7.25BETA1%I=7%D=8/18%Time=57B58EE7%P=x86_64-pc-linux-gn
SF:u%r(NULL,1A4,"\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\
SF:*\*\*\*\*\*\*\*\*\*\*\*\*\n\*\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x
SF:20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\
SF:x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\*\n\*\x20\x20\x20\x20\x20\x
SF:20Welcome\x20to\x20ScapeX\x20Mail\x20Server\x20\x20\x20\x20\*\n\*\x20\x
SF:20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\
SF:x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20
SF:\x20\x20\*\n\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\
SF:*\*\*\*\*\*\*\*\*\*\*\*\nHere\x20is\x20a\x20quiz\x20to\x20test\x20your\
SF:x20knowledge\x20of\x20hacking\.\.\.\n\n\nAnswer\x20correctly\x20and\x20
SF:we\x20will\x20reward\x20you\x20with\x20a\x20shell\x20:-\)\x20\nQuestion
etc...
etc...
Is there a way I can customize the service and version descriptions that are displayed by nmap for my simple python server?
I found a solution: send the following line as the first message from the server.
c.send("HTTP/1.1 200 OK\r\nServer: Netscape-Enterprise/6.1\r\nDate: Fri, 19 Aug 2016 10:28:43 GMT\r\nContent-Type: text/html; charset=UTF-8\r\nConnection: close\r\nVary: Accept-Encoding\r\nContent-Length: 32092\r\n\r\n")
I have some code that hosts a local server, and when a user connects it sends them some HTML, which works fine.
But I want it so that if they connect to http://localhost:90/abc it will show something different. How can I get the exact URL they connected to?
Here is my code:
import socket

sock = socket.socket()
sock.bind(('', 90))
sock.listen(5)
print("Listening...")

while True:
    client, address = sock.accept()
    print("Connection received: ", address)
    print(The exact url they connected to.)
    print()

    client.send(b'HTTP/1.0 200 OK\r\n')
    client.send(b"Content-Type: text/html\r\n\r\n")
    client.send(b'<html><body><h1>Hello, User!</h1></body></html>')
    client.close()

sock.close()
I tried print(client.getpeername()[1]), but that gets the client IP, and even if there is a similar way to get the IP they connected to, it probably won't get the 'abc' part of the URL.
Thanks in advance.
Sockets don't have a notion of a URL; that's specific to the HTTP protocol, which runs on top of a socket. For this reason, only part of the HTTP URL is even used in the creation of a socket.
|--1---|----2----|-3-|--4-|
http:// localhost :90 /abc
1. Specifies which protocol on top of TCP the URL uses
2. Specifies the remote host, either by IP address or hostname
3. Specifies the remote port, and is optional
4. Specifies the path of the URL
Only parts 2 and 3 are actually known to a TCP socket, though! This is because TCP is a very basic form of communication; HTTP adds a bunch of functionality on top of it, like requests, responses, paths and so on.
Basically, if you're implementing an HTTP server, knowing the /abc part is your job. Take a look at this example. The client actually sends the /abc part to the server; otherwise the server would have no way of knowing which path the request is for.
When the client connects to your server, it will send:
GET /abc HTTP/1.1
Host: localhost
more headers...
<blank line>
Your server needs to parse the GET line and extract /abc from that.
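A rough sketch of that parsing, using the client socket from the accept() loop in the question (no error handling, and it assumes the whole request line arrives in a single recv call):
request = client.recv(4096).decode()          # e.g. "GET /abc HTTP/1.1\r\nHost: localhost\r\n..."
request_line = request.split('\r\n')[0]       # "GET /abc HTTP/1.1"
method, path, version = request_line.split()  # path == "/abc"
if path == '/abc':
    client.send(b'HTTP/1.0 200 OK\r\nContent-Type: text/html\r\n\r\n')
    client.send(b'<html><body><h1>Something different</h1></body></html>')
Real HTTP requests have more edge cases than this; if you'd rather not handle them yourself, Python's http.server module (or any web framework) does this parsing for you.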