Measuring wifi file-transfer speed in Python

I am trying to measure the speed of file transfer through sockets in Python. I set up measurements on both ends (sending and receiving side) and get somewhat different results (e.g. 16 vs. 17 Mbps for a 1 MB file transferred via ad-hoc wifi). My question is whether this kind of difference is something I should expect, given the measurement setup below. This is all running on two Raspberry Pi 2 Model B boards.
sender:
import socket as s
import time as t  # needed for t.time() below

sock = s.socket(s.AF_INET, s.SOCK_STREAM)
sock.connect((addr, 5000))  # addr holds the receiver's IP
start = t.time()
sock.sendall(data)  # data holds the file contents as bytes
finish = t.time()
receiver:
import socket as s
import time as t  # needed for t.time() below

sock = s.socket(s.AF_INET, s.SOCK_STREAM)
sock.setsockopt(s.SOL_SOCKET, s.SO_REUSEADDR, 1)
sock.bind(("", 5000))
sock.listen(1)
conn, addr = sock.accept()
pack = []
start = t.time()
while True:
    piece = conn.recv(8192)
    if not piece:
        finish = t.time()
        break
    pack.append(piece)  # keep raw bytes; .decode() would break on binary files and adds overhead
Any other advice on measuring transfer speed, or better ways to do this, is also very welcome.

I think speedtest-cli is what you are looking for. There is also a good article about it, and the Raspberry Pi appears to be supported.
Matt Martz has created a Python project called speedtest-cli which allows you to do a basic upload/download measurement using Speedtest.net's infrastructure. It works fine on the Pi and is really easy to try out on the command line.
If you want to write your own script anyway, speedtest_cli.py is a good place to start.
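If you do roll your own measurement, here is a self-contained sketch of the receive-side timing from the question. It runs over loopback, so the numbers reflect local socket I/O rather than wifi; for a real test you would point the sender at the other Pi's address.

```python
import socket
import threading
import time

DATA = b"x" * (1024 * 1024)  # 1 MB payload

def receiver(listener, result):
    conn, _ = listener.accept()
    received = 0
    start = time.time()  # start the clock only once the connection exists
    while True:
        piece = conn.recv(8192)
        if not piece:
            break
        received += len(piece)
    elapsed = time.time() - start
    conn.close()
    result["bytes"] = received
    result["mbps"] = received * 8 / max(elapsed, 1e-9) / 1e6

listener = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
listener.bind(("127.0.0.1", 0))  # port 0: let the OS pick a free port
listener.listen(1)
port = listener.getsockname()[1]

result = {}
t = threading.Thread(target=receiver, args=(listener, result))
t.start()

sender = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
sender.connect(("127.0.0.1", port))
sender.sendall(DATA)
sender.close()  # closing the socket is what signals EOF to the receiver

t.join()
print("received %d bytes at %.1f Mbps" % (result["bytes"], result["mbps"]))
```

Note that the sender's `finish = t.time()` fires when `sendall` returns, i.e. when the data has been handed to the kernel's send buffer, not when the receiver has it; that alone can account for a small sender/receiver discrepancy.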

Related

Listen to UDP port already in use Python

I have an application which communicates on a specific port, and I would like to listen to all UDP traffic which has this specific port as a source or destination.
Naively I try to do something like:
import socket

UDP_IP = "0.0.0.0"
UDP_PORT = my_port  # my_port: the port the other application uses
s = socket.socket(socket.AF_INET, socket.SOCK_RAW, socket.IPPROTO_UDP)
s.bind((UDP_IP, UDP_PORT))
while True:
    data, addr = s.recvfrom(4096)
    print("received message:", data)
however this does not work because the application in question is already bound to the port, so I get an error if I try to bind to it in my code.
My next attempt was to use scapy with something like:
from scapy.all import *
from threading import Thread  # Thread is used below
import queue

scapy.config.conf.use_pcap = True
pending_pkts = queue.Queue()

def callback(pkt):
    pending_pkts.put(pkt)

def worker():
    while True:
        pkt = pending_pkts.get()
        # I do some stuff with the packet here
        pending_pkts.task_done()

t = Thread(target=worker)
t.daemon = True
t.start()
sniff(prn=callback, filter="udp and port my_port")  # my_port substituted with the real port
The idea here was that I would stuff the packets into my queue and then do the relatively costly processing on a separate thread. While this does somewhat work, I miss something like 50% of the packets I am interested in, which is unacceptable for the project. I have seen other people running into this issue and have tried everything suggested (using a very specific filter, using pcap, using multiple threads so that the costly processing doesn't hold things up), but evidently this is still not fast enough, since I miss so many packets.
I would greatly appreciate it if someone could point me in the right direction as to how I can get ideally 100% packet capture (doesn't need to be realtime, I can accept a couple second delay in processing if it means I get everything) on a UDP port which is already in use. Ideally I would like to stick with python, but I would be willing to also try something in C++ if someone knows of a solution.
Thanks for your time :)

My port scanner takes too long to scan for ports. Here is my code, could somebody help me?

Here is my code. Can somebody explain to me how to make my port scanner faster? I made the port scanner by connecting to the IP with connect_ex(); should I maybe use another call?
import socket
from termcolor import colored  # third-party: pip install termcolor

try:
    for port in range(1, 1000):
        sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        result = sock.connect_ex((remoteHost, port))
        if result == 0:
            print(colored("[+] Port {}: Open".format(port), 'green'))
        sock.close()
except socket.error:
    pass  # except clause not shown in the original snippet
So a quick look on Github led me to find portSpider which bills itself as
A lightning fast multithreaded network scanner framework with modules.
Reading through it, I noticed that it uses socket.connect and not connect_ex. It does use multithreading to increase performance, though. I would tend toward using an existing solution rather than building one from scratch, unless this is just for tinkering.
To speed up your example in particular, you could do a simple optimization using multiprocessing.Pool.
import multiprocessing as mp
import socket
from termcolor import colored  # third-party: pip install termcolor

remoteHost = "127.0.0.1"  # substitute the target host from the question

def scan_port(port):
    try:
        sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        result = sock.connect_ex((remoteHost, port))
        if result == 0:
            print(colored("[+] Port {}: Open".format(port), 'green'))
        sock.close()
    except socket.error:
        pass  # you should handle this error

if __name__ == '__main__':
    p = mp.Pool()  # will parallelize to the number of CPUs you have
    p.map(scan_port, range(1, 1000))
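A threaded variant with the standard library's `concurrent.futures` is another option, since port scanning is I/O-bound rather than CPU-bound. This sketch scans a small range on localhost against a socket it opens itself, purely so the example is self-contained; substitute the real host and `range(1, 1000)` in practice.

```python
import socket
from concurrent.futures import ThreadPoolExecutor

def check_port(host, port, timeout=0.5):
    # connect_ex returns 0 when the TCP handshake succeeds
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as sock:
        sock.settimeout(timeout)  # avoid hanging on filtered ports
        return port if sock.connect_ex((host, port)) == 0 else None

# Demo only: open one listening socket locally so the scan has something to find
listener = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
listener.bind(("127.0.0.1", 0))  # port 0: let the OS pick a free port
listener.listen(1)
open_port = listener.getsockname()[1]

with ThreadPoolExecutor(max_workers=50) as pool:
    results = list(pool.map(lambda p: check_port("127.0.0.1", p),
                            range(open_port - 5, open_port + 5)))
open_ports = [p for p in results if p is not None]
print("open:", open_ports)
listener.close()
```

The timeout is what really speeds things up against remote hosts: without it, each closed-but-filtered port can block for the OS default connect timeout.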

Python multiprocessing queue generating strange data behavior

I'm working on a little project that is essentially one RPi2 distributing data to four RPi1s. This is done via sockets, and every client that connects to the RPi2 gets its own process. There is also one process capturing images and one waiting for new clients (on the RPi2).
The capture process sends (at most) four vectors to the "client processes", and each of those sends to its RPi1 via socket.
My problem is: when I run more than one client, there seems to be some kind of communication fault, either in the IPC or the socket, because data that is supposed to be sent to RPi1_A ends up on RPi1_B.
Can it have something to do with the queue going between processes that aren't in a Parent-Child relation?
Some snippets from the code:
# Create a list of queues
Main_Queue = [Queue(IPC_QUEUE_SIZE)] * MAX_NUMBER_OF_CONNECTIONS

# Creation of Camera process:
Camera_process = Process(target=Camera_capture, args=(Main_Queue, Client_Update_Queue, ))

# Wait-for-client snippet:
conn, addr = s.accept()
p.append((Process(target=clientthread,
                  args=(conn, Main_Queue[nr_of_clients], Client_Update_Queue, sock_lock, )),
          addr[0], addr[1]))
p[len(p) - 1][0].start()
nr_of_clients = nr_of_clients + 1
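One thing worth checking: `[Queue(IPC_QUEUE_SIZE)] * MAX_NUMBER_OF_CONNECTIONS` does not create independent queues. Multiplying a list replicates references to the same object, so all client processes end up sharing one queue, and whichever process calls get() first consumes the item; that alone could explain data meant for RPi1_A arriving at RPi1_B. A quick demonstration:

```python
from multiprocessing import Queue

# Multiplying a list copies the *reference*, not the object:
shared = [Queue(4)] * 3
print(shared[0] is shared[1])  # -> True: all three slots point at one Queue

# A list comprehension builds a distinct Queue per slot:
separate = [Queue(4) for _ in range(3)]
print(separate[0] is separate[1])  # -> False: independent queues
```

So `Main_Queue = [Queue(IPC_QUEUE_SIZE) for _ in range(MAX_NUMBER_OF_CONNECTIONS)]` would give each client process its own queue.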

Python multiple loops at the same time

I know that it is not possible to run multiple loops at the same time in Python.
Anyhow, what I need to achieve is that I have one loop running reading loads of sensor data, every 0.25 seconds.
At the same time I have signal devices running in parallel that need to send signals every 3 seconds.
My question is what way is best practice to achieve this?
Does it make sense to write two scripts and run them in parallel?
Does it make sense to use threading?
Is there any other possibility in order to make this work?
I would be grateful for code samples.
Thank you!
Edit:
Both loops are absolutely independent.
So, let's say while script 1 is running, reading the sensor data, when one of the sensors reads a value < 300, it should start script 2, which will send the signals. When the sensor data gets > 300 again, it should stop script 2.
"Python multiple loops at the same time. I know that it is not possible [...]" - this looks really funny.
It is possible to run two loops at the same time, exactly as you described. Both ways make sense, depending on what you actually need and want. If the tasks are completely independent, you should run them as two scripts. If the two loops realize one task and it makes sense for them to be in one file, you can use multiprocessing.
Tested for python 2.7.5+ and 3.3.2+.
Here is a minimal example:
from multiprocessing import Process
import time

def f(name):
    print('hello', name)
    time.sleep(10)

def d(name):
    print('test2', name)
    time.sleep(10)

if __name__ == '__main__':
    p1 = Process(target=f, args=('bob',))
    p2 = Process(target=d, args=('alice',))
    p1.start()
    p2.start()
    p1.join()
    p2.join()
Script runs for 10s and both strings are printed right away, which means everything works.
time python3 ./process.py
hello bob
test2 alice
real 0m10.073s
user 0m0.040s
sys 0m0.016s
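For the concrete 0.25 s sensor loop and 3 s signal loop from the question, plain threads also work, since both loops spend most of their time sleeping. A minimal sketch, where the sensor read and signal send are stand-ins and the intervals are shortened only so the demo finishes quickly:

```python
import threading
import time

stop = threading.Event()
readings = []
signals = []

def sensor_loop(interval=0.05):  # stand-in for the 0.25 s of the question
    while not stop.is_set():
        readings.append(time.time())  # stand-in for a real sensor read
        stop.wait(interval)  # sleeps, but wakes immediately if stop is set

def signal_loop(interval=0.2):  # stand-in for the 3 s of the question
    while not stop.is_set():
        signals.append(time.time())  # stand-in for sending a signal
        stop.wait(interval)

threads = [threading.Thread(target=sensor_loop),
           threading.Thread(target=signal_loop)]
for t in threads:
    t.start()
time.sleep(0.5)  # let both loops run for a while
stop.set()
for t in threads:
    t.join()
print(len(readings), len(signals))
```

Using `stop.wait(interval)` instead of `time.sleep(interval)` means both loops exit promptly when the event is set, which also gives you a clean hook for the "stop script 2 when the value goes back above 300" logic from the edit.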
It is also possible to run multiple scripts (some as .pyw, for convenience) and have them exchange information over UDP sockets. Note that 127.0.0.1 always sends to your own machine. As for the port, just make sure no other program uses the one you choose, including any program that uses ports and even basic router settings.
Sample (send)
from socket import *

host = "127.0.0.1"  # or the receiver's IP
port = 9000  # must be an int and must match the receiver
addr = (host, port)
UDPSock = socket(AF_INET, SOCK_DGRAM)
data = "Random Text"
send = data.encode("ascii")
UDPSock.sendto(send, addr)
UDPSock.close()
Sample (receive)
from socket import *

host = ""  # empty string: listen on all interfaces
port = 9000
addr = (host, port)
UDPSock = socket(AF_INET, SOCK_DGRAM)
UDPSock.bind(addr)
(data, addr) = UDPSock.recvfrom(1024)  # 1024 is the max number of bytes to receive
data = data.decode('ascii')
UDPSock.close()
You can use these to run separate loops and tell what to do from two separate programs.

Python: Listen on two ports

import socket

backlog = 1  # number of queued connections
sk_1 = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
sk_2 = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
local = {"port": 1433}
internet = {"port": 9999}
sk_1.bind(('', internet["port"]))
sk_1.listen(backlog)
sk_2.bind(('', local["port"]))
sk_2.listen(backlog)
Basically, I have this code. I am trying to listen on two ports, 1433 and 9999, but this doesn't seem to work.
How can I listen on two ports within the same Python script?
The fancy-pants way to do this with the Python standard library would be to use SocketServer with the ThreadingMixIn, although the 'select' suggestion is probably the more efficient.
Even though we only define one ThreadedTCPRequestHandler, you can easily repurpose it so that each listener has its own unique handler, and it should be fairly trivial to wrap the server/thread creation into a single method if that's the kind of thing you like.
#!/usr/bin/python
import threading
import time
import SocketServer

class ThreadedTCPRequestHandler(SocketServer.BaseRequestHandler):
    def handle(self):
        self.data = self.request.recv(1024).strip()
        print "%s wrote: " % self.client_address[0]
        print self.data
        self.request.send(self.data.upper())

class ThreadedTCPServer(SocketServer.ThreadingMixIn, SocketServer.TCPServer):
    pass

if __name__ == "__main__":
    HOST = ''
    PORT_A = 9999
    PORT_B = 9876
    server_A = ThreadedTCPServer((HOST, PORT_A), ThreadedTCPRequestHandler)
    server_B = ThreadedTCPServer((HOST, PORT_B), ThreadedTCPRequestHandler)
    server_A_thread = threading.Thread(target=server_A.serve_forever)
    server_B_thread = threading.Thread(target=server_B.serve_forever)
    server_A_thread.setDaemon(True)
    server_B_thread.setDaemon(True)
    server_A_thread.start()
    server_B_thread.start()
    while 1:
        time.sleep(1)
The code so far is fine, as far as it goes (except that a backlog of 1 seems unduly strict), the problem of course comes when you try to accept a connection on either listening socket, since accept is normally a blocking call (and "polling" by trying to accept with short timeouts on either socket alternately will burn machine cycles to no good purpose).
select to the rescue!-) select.select (or on the better OSs select.poll or even select.epoll or select.kqueue... but, good old select.select works everywhere!-) will let you know which socket is ready and when, so you can accept appropriately. Along these lines, asyncore and asynchat provide a bit more organization (and third-party framework twisted, of course, adds a lot of such "asynchronous" functionality).
Alternatively, you can devote separate threads to servicing the two listening sockets, but in this case, if the different sockets' functionality needs to affect the same shared data structures, coordination (locking &c) may become ticklish. I would certainly recommend trying the async approach first -- it's actually simpler, as well as offering potential for substantially better performance!-)
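To make the select approach concrete, here is a minimal sketch. Loopback addresses and OS-assigned ports are used only so the demo is self-contained; real code would bind the ports it actually serves (1433 and 9999 in the question).

```python
import select
import socket

def make_listener(port):
    sk = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    sk.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    sk.bind(("127.0.0.1", port))
    sk.listen(5)
    return sk

# Port 0 asks the OS for any free port, so the demo never collides
listeners = [make_listener(0), make_listener(0)]
ports = [sk.getsockname()[1] for sk in listeners]

# One client connects to each listening socket
clients = [socket.create_connection(("127.0.0.1", p)) for p in ports]

accepted = []
while len(accepted) < 2:
    # select blocks until at least one listener is ready to accept
    readable, _, _ = select.select(listeners, [], [], 5.0)
    if not readable:
        break  # timed out; should not happen in this demo
    for sk in readable:
        conn, addr = sk.accept()
        accepted.append(sk.getsockname()[1])
        conn.close()

for c in clients:
    c.close()
print("accepted on ports: %s" % sorted(accepted))
```

A real server would also pass the accepted connections to select, so one loop multiplexes both listeners and all live clients without threads or polling.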
