WebSocket permessage-deflate not occurring in server -> client direction - Python

I have written my own implementation of a WebSocket server in Python to teach myself the inner workings. I will be sending large, repetitive JSON objects over the WebSocket, so I am trying to implement permessage-deflate. The compression works in the client -> server direction, but not in the server -> client direction.
This is the header exchange:
Request
Host: awebsite.com:port
Connection: Upgrade
Pragma: no-cache
Cache-Control: no-cache
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/89.0.4389.114 Safari/537.36
Upgrade: websocket
Origin: http://awebsite.com
Sec-WebSocket-Version: 13
Accept-Encoding: gzip, deflate
Accept-Language: en-US,en;q=0.9
Sec-WebSocket-Key: JItmF32mfGXXKYyhcEoW/A==
Sec-WebSocket-Extensions: permessage-deflate; client_max_window_bits
Response
Upgrade: websocket
Connection: Upgrade
Sec-WebSocket-Extensions: permessage-deflate
Sec-WebSocket-Accept: zYQKJ6gvwlTU/j2xw1Kf0BErg9c=
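For reference, the Sec-WebSocket-Accept value in that response is derived from the client's Sec-WebSocket-Key per RFC 6455: append a fixed GUID, SHA-1 hash the result, and base64-encode the digest. A minimal Python 3 sketch (not the poster's code):

```python
import base64
from hashlib import sha1

WS_GUID = "258EAFA5-E914-47DA-95CA-C5AB0DC85B11"  # fixed GUID from RFC 6455

def accept_key(sec_websocket_key):
    """Compute the Sec-WebSocket-Accept header value from Sec-WebSocket-Key."""
    digest = sha1((sec_websocket_key + WS_GUID).encode("ascii")).digest()
    return base64.b64encode(digest).decode("ascii")
```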
When I do this, I get compressed data from the client as expected and it inflates as expected.
When I send an uncompressed message, I get a normal response on the client, i.e. I send "hello" and I get "hello".
When I try to deflate my message using this simple Python function:
def deflate(self, data, B64_encode=False):
    data = zlib.compress(data)
    if B64_encode:
        return base64.b64encode(data[2:-4])
    else:
        return data[2:-4]
I get an error message about the characters not being UTF-8, and when I base64-encode the compressed message, I just get the base64-encoded string on the client. I also tried sending the data as binary over the websocket, and get a Blob at the other end. I've been scouring the internet for a while now and haven't found anyone describing this. My guess is that I am compressing the data at the wrong step. Below is the function I use to send the data. So far I've been feeding the compressed message into the send() function, because from what I've read per-message compression happens at the message level, and all the other data remains uncompressed.
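As an aside, the data[2:-4] slicing strips the zlib header and trailer by hand. A less fragile way (a sketch, not the poster's code) is to ask zlib for a raw DEFLATE stream directly by passing negative wbits, which is the format permessage-deflate actually carries; RFC 7692 also calls for dropping the trailing 0x00 0x00 0xff 0xff bytes that a sync flush appends:

```python
import zlib

def permessage_deflate(data):
    # wbits=-15 produces a raw DEFLATE stream: no zlib header, no checksum.
    compressor = zlib.compressobj(zlib.Z_DEFAULT_COMPRESSION, zlib.DEFLATED, -15)
    compressed = compressor.compress(data) + compressor.flush(zlib.Z_SYNC_FLUSH)
    return compressed[:-4]  # drop the 00 00 ff ff sync-flush tail per RFC 7692

def permessage_inflate(data):
    # Receiving side: re-append the tail, then inflate with raw-DEFLATE wbits.
    decompressor = zlib.decompressobj(-15)
    return decompressor.decompress(data + b"\x00\x00\xff\xff")
```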
def send(self, string, TYPE="TEXT"):
    import struct
    conn = self.conn
    datatypes = {
        "TEXT": 0x01,
        "BINARY": 0x02,
        "CLOSE": 0x08,
        "PING": 0x09,
        "PONG": 0x0A}
    b1 = 0x80
    b2 = 0
    message = ""
    if TYPE == "TEXT":
        if type(string) == unicode:
            b1 |= datatypes["TEXT"]
            payload = string.encode("UTF8")
        elif type(string) == str:
            b1 |= datatypes["TEXT"]
            payload = string
        message += chr(b1)
    else:
        b1 |= datatypes[TYPE]
        payload = string
        message += chr(b1)
    length = len(payload)
    if length < 126:
        b2 |= length
        message += chr(b2)
    elif length < (2 ** 16) - 1:
        b2 |= 126
        message += chr(b2)
        l = struct.pack(">H", length)
        message += l
    else:
        l = struct.pack(">Q", length)
        b2 |= 127
        message += chr(b2)
        message += l
    message += payload
    try:
        conn.send(str(message))
    except socket.error:
        traceback.print_exc()
        conn.close()
    if TYPE == "CLOSE":
        self.Die = True
        conn.shutdown(2)
        conn.close()
        print self.myid, "Closed"

After a lot of sleuthing, I found out my problem was a case of "RTFM". The third paragraph of RFC 7692, the spec for WebSocket per-message compression, says:
A WebSocket client may offer multiple PMCEs during the WebSocket
opening handshake. A peer WebSocket server receiving those offers may
choose and accept a preferred one or decline all of them. PMCEs use
the RSV1 bit of the WebSocket frame header to indicate whether a
message is compressed or not, so that an endpoint can choose not to
compress messages with incompressible contents.
I didn't know what the RSV bits did when I first set this up, and had them set to 0 by default. My code now allows compression to be enabled per message in the send() function. It nicely shrinks my messages from 30200 bytes to 149 bytes.
My Modified code now looks like this:
def deflate(self, data):
    data = zlib.compress(data)
    data = data[2:-4]
    return data
def send(self, string, TYPE="TEXT", deflate=False):
    import struct
    if deflate:
        string = self.deflate(string)
    conn = self.conn
    datatypes = {
        "TEXT": 0x01,
        "BINARY": 0x02,
        "CLOSE": 0x08,
        "PING": 0x09,
        "PONG": 0x0A}
    b1 = 0x80  # 0b10000000
    if deflate:
        b1 = 0xC0  # 0b11000000, sets RSV1 to 1 to mark the message compressed
    b2 = 0
    message = ""
    if TYPE == "TEXT":
        if type(string) == unicode:
            b1 |= datatypes["TEXT"]
            payload = string.encode("UTF8")
        elif type(string) == str:
            b1 |= datatypes["TEXT"]
            payload = string
        message += chr(b1)
    else:
        b1 |= datatypes[TYPE]
        payload = string
        message += chr(b1)
    length = len(payload)
    if length < 126:
        b2 |= length
        message += chr(b2)
    elif length < (2 ** 16) - 1:
        b2 |= 126
        message += chr(b2)
        l = struct.pack(">H", length)
        message += l
    else:
        l = struct.pack(">Q", length)
        b2 |= 127
        message += chr(b2)
        message += l
    message += payload
    try:
        if self.debug:
            for x in message:
                print("S>: ", x, hex(ord(x)), ord(x))
        conn.send(str(message))
    except socket.error:
        traceback.print_exc()
        conn.close()
    if TYPE == "CLOSE":
        self.Die = True
        conn.shutdown(2)
        conn.close()
        print self.myid, "Closed"
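For the receiving side, the same bit has to be read back out of the first header byte before deciding whether to inflate. A minimal sketch of that check (illustrative names, not part of the class above):

```python
def parse_frame_header(frame):
    """Return (fin, rsv1, opcode, masked, length_field) from the first two
    bytes of a WebSocket frame. rsv1 is the permessage-deflate flag."""
    b1, b2 = frame[0], frame[1]
    fin = bool(b1 & 0x80)
    rsv1 = bool(b1 & 0x40)   # set => payload is compressed
    opcode = b1 & 0x0F
    masked = bool(b2 & 0x80)
    length = b2 & 0x7F       # 126/127 signal extended length fields follow
    return fin, rsv1, opcode, masked, length
```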

Related

How to get an HTTP server response on a different device in the LAN?

I am new to Python and my networking knowledge is at beginner level. I have an HTTP server running in a VM, and when I curl it from a different terminal on the same machine, I get the expected response. I am looking for a way to get the same response on my mobile device when I type the IP and port into the browser. My mobile device is connected to the same WiFi. Here's the server code:
import socket

MAX_PACKET = 32768

def recv_all(sock):
    r'''Receive everything from `sock`, until timeout occurs, meaning sender
    is exhausted, return result as string.'''
    # dirty hack to simplify this stuff - you should really use zero timeout,
    # deal with async socket and implement finite automata to handle incoming data
    prev_timeout = sock.gettimeout()
    try:
        sock.settimeout(0.01)
        rdata = []
        while True:
            try:
                rdata.append(sock.recv(MAX_PACKET))
            except socket.timeout:
                return ''.join(rdata)
        # unreachable
    finally:
        sock.settimeout(prev_timeout)

def normalize_line_endings(s):
    r'''Convert string containing various line endings like \n, \r or \r\n,
    to uniform \n.'''
    return ''.join((line + '\n') for line in s.splitlines())

def run():
    r'''Main loop'''
    # Create TCP socket listening on all connections,
    # with connection queue of length 1
    server_sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM,
                                socket.IPPROTO_TCP)
    server_sock.bind(('10.0.2.15', 80))
    server_sock.listen(1)
    while True:
        # accept connection
        client_sock, client_addr = server_sock.accept()
        # headers and body are divided with \n\n (or \r\n\r\n - that's why we
        # normalize endings). In real application usage, you should handle
        # all variations of line endings not to screw request body
        request = normalize_line_endings(recv_all(client_sock))  # hack again
        request_head, request_body = request.split('\n\n', 1)
        # first line is request headline, and others are headers
        request_head = request_head.splitlines()
        request_headline = request_head[0]
        # headers have their name up to first ': '. In real world uses, they
        # could duplicate, and dict drops duplicates by default, so
        # be aware of this.
        request_headers = dict(x.split(': ', 1) for x in request_head[1:])
        # headline has form of "POST /can/i/haz/requests HTTP/1.0"
        request_method, request_uri, request_proto = request_headline.split(' ', 3)
        response_body = [
            '<html><body><h1>Hello, world!</h1>',
            '<p>This page is in location %(request_uri)r, was requested ' % locals(),
            'using %(request_method)r, and with %(request_proto)r.</p>' % locals(),
            '<p>Request body is %(request_body)r</p>' % locals(),
            '<p>Actual set of headers received:</p>',
            '<ul>',
        ]
        for request_header_name, request_header_value in request_headers.iteritems():
            response_body.append('<li><b>%r</b> == %r</li>' % (request_header_name,
                                                               request_header_value))
        response_body.append('</ul></body></html>')
        response_body_raw = ''.join(response_body)
        # Clearly state that connection will be closed after this response,
        # and specify length of response body
        response_headers = {
            'Content-Type': 'text/html; encoding=utf8',
            'Content-Length': len(response_body_raw),
            'Connection': 'close',
        }
        response_headers_raw = ''.join('%s: %s\n' % (k, v) for k, v in
                                       response_headers.iteritems())
        # Reply as HTTP/1.1 server, saying "HTTP OK" (code 200).
        response_proto = 'HTTP/1.1'
        response_status = '200'
        response_status_text = 'OK'  # this can be random
        # sending all this stuff
        client_sock.send('%s %s %s\n' % (response_proto, response_status,
                                         response_status_text))
        client_sock.send(response_headers_raw)
        client_sock.send('\n')  # to separate headers from body
        client_sock.send(response_body_raw)
        # and closing connection, as we stated before
        client_sock.close()

run()
Here's the response when I run curl from a different terminal on the same VM.
I want to ping it from my mobile device connected to the same WiFi. Thank you!
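A likely culprit (an assumption, since the VM setup isn't shown): 10.0.2.15 is the usual VirtualBox NAT guest address, which other LAN devices cannot reach, so the VM would also need a bridged network adapter; and binding to one specific IP limits which interface accepts connections. Binding to '0.0.0.0' listens on every interface. A sketch:

```python
import socket

def make_server_socket(port):
    """Listen on every interface (0.0.0.0) so devices on the same Wi-Fi
    can connect, not only the local machine."""
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    s.bind(('0.0.0.0', port))  # '' also means "all interfaces"
    s.listen(1)
    return s
```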

Not receiving any data back from bittorrent peer handshake

I'm having some trouble with the BitTorrent protocol. I'm at the point of sending a handshake message to some peers. My client basically connects to every peer in the list and then sends the handshake. Code is below -
peer_id = 'autobahn012345678bit'
peer_id = peer_id.encode('utf-8')
pstr = 'BitTorrent protocol'
pstr = pstr.encode('utf-8')
pstrlen = chr(19)
pstrlen = pstrlen.encode('utf-8')
reserved = chr(0) * 8
reserved = reserved.encode('utf-8')
Those are the variables I'm sending. My msg is -
msg = (pstrlen + pstr + reserved + new.torrent_hash() + peer_id)
Based on the BitTorrent specification, my message should be the appropriate length of 49 + len(pstr) -
lenmsg = (pstrlen + reserved + new.torrent_hash() + peer_id)
print(lenmsg)
print(len(lenmsg))
This outputs -
b'\x13\x00\x00\x00\x00\x00\x00\x00\x00\x94z\xb0\x12\xbd\x1b\xf1\x1fO\x1d)\xf8\xfa\x1e\xabs\xa8_\xe7\x93autobahn012345678bit'
49
the entire message looks like this -
b'\x13\x00\x00\x00\x00\x00\x00\x00\x00\x94z\xb0\x12\xbd\x1b\xf1\x1fO\x1d)\xf8\xfa\x1e\xabs\xa8_\xe7\x93autobahn012345678bit'
My main problem is that I don't receive any data back. I have socket.settimeout(4) set and it just times out.
The output is incorrect: it is missing 'BitTorrent protocol'.
A proper handshake string is 68 bytes long.
It should be:
\x13BitTorrent protocol\x00\x00\x00\x00\x00\x00\x00\x00\x94z\xb0\x12\xbd\x1b\xf1\x1fO\x1d)\xf8\xfa\x1e\xabs\xa8_\xe7\x93autobahn012345678bit
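Putting the answer together, the full 68-byte handshake can be built in one place (Python 3 sketch; the info_hash passed in the test is a placeholder, not the poster's real torrent hash):

```python
import struct

def build_handshake(info_hash, peer_id):
    """BitTorrent handshake: <pstrlen=19><pstr><8 reserved bytes><info_hash><peer_id>.
    Total length is 49 + len(pstr) = 68 bytes."""
    pstr = b'BitTorrent protocol'
    assert len(info_hash) == 20 and len(peer_id) == 20
    return struct.pack('!B', len(pstr)) + pstr + b'\x00' * 8 + info_hash + peer_id
```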

Python socket server loses message

Python server:
import socket
import re
from base64 import b64encode
from hashlib import sha1
import base64
import struct
from queue import Queue
import threading
import select
def decodea(data):
    buf = data
    payload_start = 2
    if len(buf) < 3:
        return
    b = buf[0]
    fin = b & 0x80
    opcode = b & 0x0f
    b2 = buf[1]
    mask = b2 & 0x80
    length = b2 & 0x7f
    if len(buf) < payload_start + 4:
        return
    elif length == 126:
        length, = struct.unpack(">H", buf[2:4])
        payload_start += 2
    elif length == 127:
        length, = struct.unpack(">I", buf[2:6])
        payload_start += 4
    if mask:
        mask_bytes = [b for b in buf[payload_start:payload_start + 4]]
        payload_start += 4
    if len(buf) < payload_start + length:
        return
    payload = buf[payload_start:payload_start + length]
    if mask:
        unmasked = [mask_bytes[i % 4] ^ b
                    for b, i in zip(payload, range(len(payload)))]
        payload = "".join([chr(c) for c in unmasked])
    return [payload.encode('latin-1'), length]

def status(decoded):
    status_ = ''
    status_16 = 0
    if decoded[1] == 2:
        for c in decoded[0]:
            status_ += '%02x' % ord(chr(c))
        status_16 = int(status_, 16)
    if status_16 > 0:
        cases = {
            1000: "Normal Closure",
            1001: "Going Away",
            1002: "Protocol error",
            1003: "Unsupported Data",
            1004: "---Reserved----",
            1005: "No Status Rcvd",
            1006: "Abnormal Closure",
            1007: "Invalid frame payload data",
            1008: "Policy Violation",
            1009: "Message Too Big",
            1010: "Mandatory Ext.",
            1011: "Internal Server Error",
            1015: "TLS handshake"
        }
        if status_16 in cases:
            return status_16
    return 0

def handshake(conn, globals__):
    data = conn.recv(1024)
    key = (re.search('Sec-WebSocket-Key:\s+(.*?)[\n\r]+', data.decode('utf-8'))
           .groups()[0]
           .strip())
    sha1f = sha1()
    sha1f.update(key.encode('utf-8') + globals__['GUID'].encode('utf-8'))
    response_key = b64encode(sha1f.digest()).decode('utf-8')
    response = '\r\n'.join(globals__['websocket_answer']).format(key=response_key)
    conn.send(response.encode('utf-8'))

def socket_accept__(lock__, globals__):
    lock__.acquire()
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.bind((globals__['socket_settings']['HOST'], globals__['socket_settings']['PORT']))
    s.listen(globals__['socket_settings']['LISTEN'])
    globals__['client_list'].append(s)
    lock__.release()
    while True:
        lock__.acquire()
        read_sockets, write_sockets, error_sockets = select.select(globals__['client_list'], [], [])
        for sock in read_sockets:
            if sock == s:
                conn, addr = s.accept()
                handshake(conn, globals__)
                globals__['client_list'].append(conn)
            else:
                for client in globals__['client_list']:
                    try:
                        client.settimeout(0.001)
                        data = client.recv(1024)
                        print(decodea(data)[0].decode('UTF-8'))
                    except socket.timeout:
                        continue
        lock__.release()

#thead_queue = Queue()
lock_ = threading.Lock()
globals_ = {
    'GUID': '258EAFA5-E914-47DA-95CA-C5AB0DC85B11',
    'websocket_answer': (
        'HTTP/1.1 101 Switching Protocols',
        'Upgrade: websocket',
        'Connection: Upgrade',
        'Sec-WebSocket-Accept: {key}\r\n\r\n'
    ),
    'client_list': [],
    'socket_settings': {
        'HOST': '10.10.10.12',
        'PORT': 8999,
        'LISTEN': 200
    },
    'threads': []
}
globals_['threads'].append(threading.Thread(target=socket_accept__, args=(lock_, globals_)))
globals_['threads'][0].setDaemon(True)
for threadi in globals_['threads']:
    threadi.start()
for threadi in globals_['threads']:
    threadi.join()
#thread2.join()
HTML5:
<!DOCTYPE html>
<html>
<head>
    <title>test</title>
    <meta http-equiv="Content-Type" content="text/html; charset=UTF-8" />
    <script type="text/javascript">
        var s = new WebSocket('ws://10.10.10.12:8999');
        s.onmessage = function(t){ console.log(t); alert(t); };
        s.onopen = function(){
            s.send('hello from client');
            s.send('my name is richard');
        };
        alert('load');
    </script>
</head>
<body>
</body>
</html>
Output:
Hello from client
Expected output:
hello from client
my name is richard
I am sure this is because client.settimeout(0.001) is not fast enough?
I am pretty lost for words, as I do not know why this is happening.
No message is being lost due to communication problems, it's just that it is not being decoded. It has nothing to do with client.settimeout(0.001).
When two or more messages from a client arrive close together (in time), both messages will be received in a single data = client.recv(1024) call.
That means that data can contain multiple messages. The decodea() function, however, only handles one message. Any additional message is completely ignored by the decoder, and that is why you seem to be losing messages.
You can write your decoder to decode and return multiple messages, perhaps changing it to a generator function so that you can yield each message in turn. The calling code would then loop over the messages.
Alternatively you could inspect the incoming message by reading just the first few bytes in order to determine the message's length. Then read the remaining bytes from the socket and decode the message. Any additional messages will be decoded during the next iteration.
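A sketch of the generator idea (simplified: it assumes the buffer holds only complete, masked, unfragmented client frames with payloads under 126 bytes extended as needed):

```python
import struct

def iter_messages(buf):
    """Yield each unmasked payload from a buffer that may contain several
    complete client->server WebSocket frames back to back."""
    pos = 0
    while pos + 2 <= len(buf):
        length = buf[pos + 1] & 0x7F
        header = 2
        if length == 126:
            length, = struct.unpack(">H", buf[pos + 2:pos + 4])
            header = 4
        elif length == 127:
            length, = struct.unpack(">Q", buf[pos + 2:pos + 10])
            header = 10
        mask = buf[pos + header:pos + header + 4]  # client frames are masked
        start = pos + header + 4
        payload = buf[start:start + length]
        yield bytes(b ^ mask[i % 4] for i, b in enumerate(payload))
        pos = start + length  # advance to the next frame in the buffer
```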
One thing worth mentioning is that iterating over the client list with
for client in globals__['client_list']:
seems wrong as each client is just a socket object anyway, and you already know which sockets have data pending: those in the read_sockets list. You could write that code like this:
while True:
    lock__.acquire()
    read_sockets, write_sockets, error_sockets = select.select(globals__['client_list'], [], [])
    for sock in read_sockets:
        if sock == s:
            conn, addr = s.accept()
            handshake(conn, globals__)
            globals__['client_list'].append(conn)
        else:
            data = sock.recv(1024)
            print(decodea(data)[0].decode('UTF-8'))
But you still need to figure out how to handle multiple messages arriving together - either in the decoder, or by ensuring that your code reads only one message at a time.

Receive UDP packet from specific source

I am trying to measure the responses back from DNS servers. Making a sniffer for a typical DNS response that is less than 512 bytes is no big deal. My issue is receiving large 3000+ byte responses - in some cases 5000+ bytes. I haven't been able to get a socket working that can receive that data reliably. Is there a way with Python sockets to receive from a specific source address?
Here is what I have so far:
import socket
import struct
from random import randint

def craft_dns(Qdns):
    iden = struct.pack('!H', randint(0, 65535))
    QR_thru_RD = chr(int('00000001', 2))  # '\x01'
    RA_thru_RCode = chr(int('00100000', 2))  # '\x20'
    Qcount = '\x00\x01'  # question count is 1
    ANcount = '\x00\x00'
    NScount = '\x00\x00'
    ARcount = '\x00\x01'  # additional resource count is 1
    pad = '\x00'
    Rtype_ANY = '\x00\xff'  # Request ANY record
    PROtype = '\x00\x01'  # Protocol IN || '\x00\xff' # Protocol ANY
    DNSsec_do = chr(int('10000000', 2))  # flips DNSsec bit to enable
    edns0 = '\x00\x00\x29\x10\x00\x00\x00\x00\x00\x00\x00'  # DNSsec disabled
    domain = Qdns.split('.')
    quest = ''
    for x in domain:
        quest += struct.pack('!B', len(x)) + x
    packet = (iden + QR_thru_RD + RA_thru_RCode + Qcount + ANcount + NScount + ARcount +
              quest + pad + Rtype_ANY + PROtype + edns0)  # remove pad if asking <root>
    return packet

def craft_ip(target, resolv):
    ip_ver_len = int('01000101', 2)  # IPvers: 4, 0100 | IP_hdr len: 5, 0101 = 69
    ipvers = 4
    ip_tos = 0
    ip_len = 0  # socket will put in the right length
    iden = randint(0, 65535)
    ip_frag = 0  # off
    ttl = 255
    ip_proto = socket.IPPROTO_UDP  # dns, brah
    chksm = 0  # socket will do the checksum
    s_addr = socket.inet_aton(target)
    d_addr = socket.inet_aton(resolv)
    ip_hdr = struct.pack('!BBHHHBBH4s4s', ip_ver_len, ip_tos, ip_len, iden,
                         ip_frag, ttl, ip_proto, chksm, s_addr, d_addr)
    return ip_hdr

def craft_udp(sport, dest_port, packet):
    #sport = randint(0, 65535)  # not recommended to do a random port generation
    udp_len = 8 + len(packet)  # calculate length of UDP frame in bytes.
    chksm = 0  # socket fills in
    udp_hdr = struct.pack('!HHHH', sport, dest_port, udp_len, chksm)
    return udp_hdr

def get_len(resolv, domain):
    target = "10.0.0.3"
    d_port = 53
    s_port = 5353
    ip_hdr = craft_ip(target, resolv)
    dns_payload = craft_dns(domain)  # '\x00' for root
    udp_hdr = craft_udp(s_port, d_port, dns_payload)
    packet = ip_hdr + udp_hdr + dns_payload
    buf = bytearray("-" * 60000)
    recvSock = socket.socket(socket.PF_PACKET, socket.SOCK_RAW, socket.ntohs(0x0800))
    recvSock.settimeout(1)
    sendSock = socket.socket(socket.AF_INET, socket.SOCK_RAW, socket.IPPROTO_RAW)
    sendSock.settimeout(1)
    sendSock.connect((resolv, d_port))
    sendSock.send(packet)
    msglen = 0
    while True:
        try:
            pkt = recvSock.recvfrom(65535)
            msglen += len(pkt[0])
            print repr(pkt[0])
        except socket.timeout as e:
            break
    sendSock.close()
    recvSock.close()
    return msglen

result = get_len('75.75.75.75', 'isc.org')
print result
For some reason doing
pkt = sendSock.recvfrom(65535)
receives nothing at all. Since I'm using SOCK_RAW the above code is less than ideal, but it works - sort of. If the network is extremely noisy (like on a WLAN), I could end up receiving well beyond the DNS packets, because I have no way to know when to stop receiving when a multipacket DNS answer arrives. For a quiet network, like a lab VM, it works.
Is there a better way to use a receiving socket in this case?
Obviously from the code, I'm not that strong with Python sockets.
I have to send with SOCK_RAW because I am constructing the packet in a raw format. If I use SOCK_DGRAM the custom packet will be malformed when sending to a DNS resolver.
The only way I can see is to use the raw socket receiver (recvSock.recv or recvfrom), unpack each packet, check whether the source and destination addresses match what was supplied to get_len(), then look to see if the fragment bit is flipped. Then record the byte length of each packet with len(). I'd rather not do that. It just seems there should be a better way.
OK, I was being careless and didn't look at the protocol for the receiving socket. The socket gets flaky when you try to receive packets on an IPPROTO_RAW protocol, so we do need two sockets. By changing to IPPROTO_UDP and then binding it, the socket was able to follow the complete DNS response across multiple packets. I got rid of the try/except and the while loop, as they were no longer necessary, and I'm able to pull the response length with this block:
recvSock = socket.socket(socket.AF_INET, socket.SOCK_RAW, socket.IPPROTO_UDP)
recvSock.settimeout(.3)
recvSock.bind((target, s_port))
sendSock = socket.socket(socket.AF_INET, socket.SOCK_RAW, socket.IPPROTO_RAW)
#sendSock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
sendSock.settimeout(.3)
sendSock.bind((target, s_port))
sendSock.connect((resolv, d_port))
sendSock.send(packet)
pkt = recvSock.recvfrom(65535)
msglen = len(pkt[0])
Now the method will return the exact bytes received from a DNS query. I'll leave this up in case anyone else needs to do something similar :)
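For the plain "receive from a specific source" part of the question (without the hand-built IP header): calling connect() on an ordinary UDP socket makes the kernel discard datagrams from every other source address, so recv() only returns replies from that one peer. A sketch:

```python
import socket

def udp_query(server_ip, port, payload, timeout=1.0):
    """Send one datagram and receive only replies from (server_ip, port):
    connect() on a UDP socket filters out all other senders."""
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    s.settimeout(timeout)
    s.connect((server_ip, port))  # kernel now drops datagrams from other sources
    s.send(payload)
    try:
        return s.recv(65535)
    finally:
        s.close()
```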

Python UDP SocketServer can't read whole packet

At sender side I have the following code using processing language (portion code):
udp = new UDP( this, 6002 ); // create a new datagram connection on port 6000
//udp.log( true ); // <-- printout the connection activity
udp.listen( true ); // and wait for incoming message
escribeUDPLog3(1,TRANSMIT,getTime()); //call function
int[] getTime(){
int year = year();
int month = month()
int day = day();
int hour = hour();
int minute = minute();
int second = second();
int[] time_constructed = {year, month,day,hour,minute,second};
return time_constructed;
}
void escribeUDPLog3(int pkg_type, int state, int[] time){
short year = (short)(time[0]); //>> 8;
byte year_msb = byte(year >> 8);
byte year_lsb = byte(year & 0x00FF);
byte month = byte(time[1]);
byte day = byte(time[2]);
byte hour = byte(time[3]);
byte minute = byte(time[4]);
byte second = byte(time[5]);
byte[] payload = {byte(pkg_type), byte(state), year_msb, year_lsb, month, day, hour, minute,second};
try {
if (UDP5_flag) {udp.send(payload, UDP5_IP, UDP5_PORT);}
}
catch (Exception e) {
e.printStackTrace();
}
}
At the receiver side I'm using Python's SocketServer framework to set up a server listening for UDP datagrams, as follows.
from datetime import datetime
import csv
import SocketServer

def nodeStateCheckout(nodeid, state, nodeState):
    if state == ord(nodeState):
        return "OK"
    else:
        return "FAIL"

def timeConstructor(time):
    year = str(ord(time[0]) << 8 | ord(time[1]))
    month = str(ord(time[2]))
    day = str(ord(time[3]))
    hour = str(ord(time[4]))
    minute = str(ord(time[5]))
    second = str(ord(time[6]))
    time_formatted = year + "-" + month + "-" + day \
        + " " + hour + ":" + minute + ":" + second
    return time_formatted

class MyUDPHandler(SocketServer.BaseRequestHandler):
    """
    This class works similar to the TCP handler class, except that
    self.request consists of a pair of data and client socket, and since
    there is no connection the client address must be given explicitly
    when sending data back via sendto().
    """
    def handle(self):
        try:
            data = self.request[0].strip()
            socket = self.request[1]
            #print "{} wrote:".format(self.client_address[0])
            pkg_type = ord(data[0])
            if pkg_type == 1:  # log 3
                state = ord(data[1])
                csvfile = open("log3.csv", "a+")
                csvwriter = csv.writer(csvfile, delimiter=',')
                time_reconstructed = timeConstructor(data[2:9])
                if state == 3:
                    csvwriter.writerow(["STOP", time_reconstructed])
                elif state == 2:
                    csvwriter.writerow(["START", time_reconstructed])
                else:
                    print "unknown state"
                csvfile.close()
            else:
                print "packet not known"
        except IndexError:
            print "Bad parsed byte"

if __name__ == "__main__":
    HOST, PORT = "localhost", 8892
    server = SocketServer.UDPServer((HOST, PORT), MyUDPHandler)
    server.serve_forever()
Edited:
The problem shows up specifically in timeConstructor(data[2:9]), because I sometimes access data out of range. With the help of print I can see that sometimes the second byte of data never arrives, and once I got an index error because I didn't receive the minute and second bytes. Most of the time the code works well, but this type of error makes me curious.
Old:
The problem is in reading the payload: sometimes it seems that some bytes don't arrive, even though I captured the whole payload with Wireshark (though Wireshark can't tell me whether it saw the sent packet or the received packet, because I'm using loopback interfaces - maybe duplicated info?). If the datagram has a 16-byte payload, sometimes I receive 15, because I get an out-of-range error when parsing data.
I think there is some buffer problem, isn't there? How do I configure it properly? I know I can get packet loss with a connectionless protocol, but I don't think individual bytes get lost. "data" is supposed to contain the whole payload of one UDP datagram.
I believe your problem is that socket.sendto() does not always send all the bytes that you give it. It returns the number of bytes sent, and you may have to call it again. You might be better off opening the socket yourself and calling socket.sendall().
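Independent of that, the 9-byte payload (type, state, big-endian year, month, day, hour, minute, second) can be unpacked in one struct call instead of per-byte ord() indexing, which also fails with a clear error when a datagram arrives short. A sketch (Python 3, not the poster's code):

```python
import struct

def parse_log3(data):
    """Unpack the 9-byte big-endian payload: B B H B B B B B."""
    if len(data) != 9:
        raise ValueError("expected 9 bytes, got %d" % len(data))
    pkg_type, state, year, month, day, hour, minute, second = \
        struct.unpack('!BBHBBBBB', data)
    return pkg_type, state, '%d-%d-%d %d:%d:%d' % (year, month, day,
                                                   hour, minute, second)
```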
