How to develop a robust UDP client in Python?
I have to develop a UDP client in Python. Its purpose is to receive packets on a port, process each one (which requires a map lookup), and publish the processed data to a Kafka topic. More than 2,000 packets arrive per second.
I have tried the code shown below, but there are packet losses.
import json
import socket
from kafka import KafkaProducer

import config  # local module providing KAFKA_BOOTSTRAP_SERVER

producer = KafkaProducer(bootstrap_servers=config.KAFKA_BOOTSTRAP_SERVER,
                         value_serializer=lambda m: json.dumps(m).encode('ascii'),
                         security_protocol='SSL')
client_socket = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
client_socket.settimeout(1.0)
addr = ("0.0.0.0", 5000)
client_socket.bind(addr)
while True:
    data, server = client_socket.recvfrom(1024)
    d_1 = some_logic()  # placeholder for the map-lookup processing
    producer.send("XYZ", d_1)
Please suggest an approach, with a small code snippet, to do this with no (or minimal) packet loss.
Thanks in advance.
Using this code:
sender.py
import socket
import tqdm # pip install
# example data from https://opensource.adobe.com/Spry/samples/data_region/JSONDataSetSample.html
data = '\
[{"id":"0001","type":"donut","name":"Cake","ppu":0.55,"batters":{"batter":[{"id":"1001","type":"Regular"},{"id":"1002","type":"Chocolate"},{"id":"1003","type":"Blueberry"},{"id":"1004","type":"Devil\'s Food"}]},"topping":[{"id":"5001","type":"None"},{"id":"5002","type":"Glazed"},{"id":"5005","type":"Sugar"},{"id":"5007","type":"Powdered Sugar"},{"id":"5006","type":"Chocolate with Sprinkles"},{"id":"5003","type":"Chocolate"},{"id":"5004","type":"Maple"}]},{"id":"0002","type":"donut","name":"Raised","ppu":0.55,"batters":{"batter":[{"id":"1001","type":"Regular"}]},"topping":[{"id":"5001","type":"None"},{"id":"5002","type":"Glazed"},{"id":"5005","type":"Sugar"},{"id":"5003","type":"Chocolate"},{"id":"5004","type":"Maple"}]},{"id":"0003","type":"donut","name":"Old Fashioned","ppu":0.55,"batters":{"batter":[{"id":"1001","type":"Regular"},{"id":"1002","type":"Chocolate"}]},"topping":[{"id":"5001","type":"None"},{"id":"5002","type":"Glazed"},{"id":"5003","type":"Chocolate"},{"id":"5004","type":"Maple"}]}]\
'.encode("ascii")
assert len(data) == 1011, len(data) # close to the 1000 you average in your case
sender_socket = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sender_socket.settimeout(1.0) # 1 second is laaarge
addr = ("127.0.0.1", 6410)
sender_socket.connect(addr)
progress_bar = tqdm.tqdm(unit_scale=True)
while True:
    bytes_sent = sender_socket.send(data)
    assert bytes_sent == 1011, bytes_sent
    progress_bar.update(1)
receiver.py
import json
import socket
import tqdm # pip install
receiver_socket = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
receiver_socket.settimeout(5.0)
addr = ("127.0.0.1", 6410)
receiver_socket.bind(addr)
progress_bar = tqdm.tqdm(unit_scale=True)
while True:
    data_bytes, from_address = receiver_socket.recvfrom(1024)
    data = json.loads(data_bytes)
    progress_bar.update(1)
(using tqdm for easy speed monitoring)
I get around ~80 K it/s on my computer, which is roughly 40 times the ~2,000 packets/s in your case.
Try it yourself and see how much you get. Then add d_1 = some_logic() and measure again. Then add producer.send("XYZ", d_1) and measure again.
This will give you a pretty good picture of what is slowing you down. Then ask another question about that specific problem, ideally with a minimal reproducible example.
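To make those incremental measurements concrete, here is a minimal sketch of a timing harness. Note that some_logic here is an invented stand-in for the asker's map lookup, and the payload is made up for illustration:

```python
import json
import time

def some_logic(data_bytes):
    # stand-in for the map-lookup processing step (assumption)
    return json.loads(data_bytes)

def measure(stage, payload, seconds=1.0):
    """Call `stage(payload)` in a loop for `seconds` and return calls/s."""
    count = 0
    deadline = time.perf_counter() + seconds
    while time.perf_counter() < deadline:
        stage(payload)
        count += 1
    return count / seconds

payload = b'{"id": "0001", "type": "donut"}'
print(f"processing only: {measure(some_logic, payload):,.0f} it/s")
```

Measure each stage in isolation first (parsing, then your lookup, then the Kafka send), so you know which one caps your throughput before you change anything.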
Edit:
Indeed, the sender saturates the receiver, so packets get dropped: the receiver's throughput is lower than the sender's (because of the processing time). So here is an alternative:
steady_sender.py
import socket
import time
import tqdm # pip install
# example data from https://opensource.adobe.com/Spry/samples/data_region/JSONDataSetSample.html
data = '\
[{"id":"0001","type":"donut","name":"Cake","ppu":0.55,"batters":{"batter":[{"id":"1001","type":"Regular"},{"id":"1002","type":"Chocolate"},{"id":"1003","type":"Blueberry"},{"id":"1004","type":"Devil\'s Food"}]},"topping":[{"id":"5001","type":"None"},{"id":"5002","type":"Glazed"},{"id":"5005","type":"Sugar"},{"id":"5007","type":"Powdered Sugar"},{"id":"5006","type":"Chocolate with Sprinkles"},{"id":"5003","type":"Chocolate"},{"id":"5004","type":"Maple"}]},{"id":"0002","type":"donut","name":"Raised","ppu":0.55,"batters":{"batter":[{"id":"1001","type":"Regular"}]},"topping":[{"id":"5001","type":"None"},{"id":"5002","type":"Glazed"},{"id":"5005","type":"Sugar"},{"id":"5003","type":"Chocolate"},{"id":"5004","type":"Maple"}]},{"id":"0003","type":"donut","name":"Old Fashioned","ppu":0.55,"batters":{"batter":[{"id":"1001","type":"Regular"},{"id":"1002","type":"Chocolate"}]},"topping":[{"id":"5001","type":"None"},{"id":"5002","type":"Glazed"},{"id":"5003","type":"Chocolate"},{"id":"5004","type":"Maple"}]}]\
'.encode("ascii")
assert len(data) == 1011, len(data) # close to the 1000 you average in your case
sender_socket = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sender_socket.settimeout(1.0) # 1 second is laaarge
addr = ("127.0.0.1", 6410)
sender_socket.connect(addr)
progress_bar = tqdm.tqdm(unit_scale=True)
while True:
    start_time = time.time()
    bytes_sent = sender_socket.send(data)
    assert bytes_sent == 1011, bytes_sent
    progress_bar.update(1)
    current_time = time.time()
    remaining_time = 0.001 - (current_time - start_time)  # until the next millisecond
    time.sleep(max(0.0, remaining_time))  # clamp: a negative value would raise ValueError
It tries to send one packet every millisecond. It stays around ~900 packets/s for me, because the code is too simple (going to sleep and waking up take time too!).
This way, the receiver processes fast enough that no packet gets dropped (UDP never retransmits, so anything the receiver misses is simply gone).
But here is another version, where the sender is bursty: it sends 1000 packets, then sleeps until the next second.
bursty_sender.py
import socket
import time
import tqdm # pip install
# example data from https://opensource.adobe.com/Spry/samples/data_region/JSONDataSetSample.html
data = '\
[{"id":"0001","type":"donut","name":"Cake","ppu":0.55,"batters":{"batter":[{"id":"1001","type":"Regular"},{"id":"1002","type":"Chocolate"},{"id":"1003","type":"Blueberry"},{"id":"1004","type":"Devil\'s Food"}]},"topping":[{"id":"5001","type":"None"},{"id":"5002","type":"Glazed"},{"id":"5005","type":"Sugar"},{"id":"5007","type":"Powdered Sugar"},{"id":"5006","type":"Chocolate with Sprinkles"},{"id":"5003","type":"Chocolate"},{"id":"5004","type":"Maple"}]},{"id":"0002","type":"donut","name":"Raised","ppu":0.55,"batters":{"batter":[{"id":"1001","type":"Regular"}]},"topping":[{"id":"5001","type":"None"},{"id":"5002","type":"Glazed"},{"id":"5005","type":"Sugar"},{"id":"5003","type":"Chocolate"},{"id":"5004","type":"Maple"}]},{"id":"0003","type":"donut","name":"Old Fashioned","ppu":0.55,"batters":{"batter":[{"id":"1001","type":"Regular"},{"id":"1002","type":"Chocolate"}]},"topping":[{"id":"5001","type":"None"},{"id":"5002","type":"Glazed"},{"id":"5003","type":"Chocolate"},{"id":"5004","type":"Maple"}]}]\
'.encode("ascii")
assert len(data) == 1011, len(data) # close to the 1000 you average in your case
sender_socket = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sender_socket.settimeout(1.0) # 1 second is laaarge
addr = ("127.0.0.1", 6410)
sender_socket.connect(addr)
progress_bar = tqdm.tqdm(unit_scale=True)
while True:
    start_time = time.time()  # note: reset every packet, so the sleep below is approximate
    bytes_sent = sender_socket.send(data)
    assert bytes_sent == 1011, bytes_sent
    progress_bar.update(1)
    if progress_bar.n % 1000 == 0:
        current_time = time.time()
        remaining_time = 1.0 - (current_time - start_time)  # until the next second
        time.sleep(max(0.0, remaining_time))  # clamp: a negative value would raise ValueError
It sends on average ~990 packets per second (less time lost entering and leaving sleep). But the receiver only handles ~280 per second; the rest get dropped because the burst fills the receiver's buffer.
If I send bursts at 400/s, I process ~160/s.
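One knob that can soften such bursts: ask the kernel for a larger receive buffer with SO_RCVBUF, so a burst can sit in kernel memory while Python catches up. A hedged sketch (the OS may silently cap the request, e.g. via net.core.rmem_max on Linux, so always read back the effective size):

```python
import socket

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
# Request a ~4 MB receive buffer so a burst of a few thousand ~1 KB
# datagrams can queue in the kernel instead of being dropped. The OS
# may cap this request, so read back what was actually granted.
sock.setsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF, 4 * 1024 * 1024)
effective = sock.getsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF)
print(f"effective receive buffer: {effective} bytes")
sock.close()
```

A bigger buffer only buys time; if the average processing rate stays below the average arrival rate, any buffer eventually fills.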
You can monitor the drops using your OS's network-monitoring tools; Python's socket API does not report them.
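That said, on Linux the kernel exposes a per-socket drop counter in /proc/net/udp, which plain Python can read. A sketch, assuming the Linux layout where the local port is hex-encoded in the second field and the drop count is the last field:

```python
def udp_drops(port):
    """Return the kernel drop counter for an IPv4 UDP socket bound to `port`.

    Linux-specific: parses /proc/net/udp, whose last column counts
    datagrams dropped because the socket's receive buffer was full.
    Returns None if no socket is bound to that port.
    """
    with open("/proc/net/udp") as f:
        next(f)  # skip the header line
        for line in f:
            fields = line.split()
            local_port = int(fields[1].split(":")[1], 16)  # "0100007F:190A" -> 0x190A
            if local_port == port:
                return int(fields[-1])
    return None
```

Poll it before and after a test run to see how many packets the kernel discarded behind your back.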
If you don't want to drop packets, another solution is to use a queue with two threads: the first simply reads from the socket and puts the data on the queue, and the other reads from the queue and processes it. But then you have to ensure that the queue does not grow too large.
I'm able to handle bursts of 50 with my current system configuration, nearly 100, but not 150.
Here is an example with the queue:
queued_receiver.py
import json
import queue
import socket
import threading
import tqdm # pip install
messages_queue = queue.Queue(maxsize=-1) # infinite
received_packets_bar = tqdm.tqdm(position=0, desc="received", unit_scale=True)
queue_size_bar = tqdm.tqdm(position=1, desc="queue size", unit_scale=True)
processed_packets_bar = tqdm.tqdm(position=2, desc="processed", unit_scale=True)
def read_from_the_socket_into_the_queue():
    receiver_socket = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    receiver_socket.settimeout(5.0)
    addr = ("127.0.0.1", 6410)
    receiver_socket.bind(addr)
    while True:
        data_bytes, from_address = receiver_socket.recvfrom(1024)
        # no processing at all here! we want to ensure the packet gets read,
        # so that we are not dropping
        messages_queue.put_nowait(data_bytes)
        queue_size_bar.update(1)
        received_packets_bar.update(1)
def read_from_the_queue_and_process():
    while True:
        data_bytes = messages_queue.get(block=True, timeout=None)  # wait until a message is available
        data = json.loads(data_bytes)
        queue_size_bar.update(-1)
        processed_packets_bar.update(1)
        sum(range(10**5))  # simulated slow computation, adjust
socket_thread = threading.Thread(target=read_from_the_socket_into_the_queue)
process_thread = threading.Thread(target=read_from_the_queue_and_process)
socket_thread.start()
process_thread.start()
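To keep that queue from growing without bound, one option is a bounded queue that sheds the oldest entries under pressure. A sketch of such a helper (dropping the oldest is an assumption here; drop the newest instead if stale data is worse for you than missing data):

```python
import queue

def put_drop_oldest(q, item):
    """Put `item` on a bounded queue; if it is full, discard the oldest
    entries to make room. Returns how many entries were discarded.

    This sheds load at the application level, where you can count it,
    instead of letting the kernel drop packets invisibly.
    """
    dropped = 0
    while True:
        try:
            q.put_nowait(item)
            return dropped
        except queue.Full:
            try:
                q.get_nowait()  # make room by discarding the oldest entry
                dropped += 1
            except queue.Empty:
                pass  # a consumer emptied the queue meanwhile; retry the put
```

Replace messages_queue.put_nowait(data_bytes) with put_drop_oldest(messages_queue, data_bytes) and give the queue a maxsize to cap memory while keeping a visible drop count.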