I know that it is not possible to run multiple loops at the same time in Python.
Anyhow, what I need to achieve is this: I have one loop that reads loads of sensor data every 0.25 seconds.
At the same time I have signal devices running in parallel that need to send signals every 3 seconds.
My question is: what is the best practice to achieve this?
Does it make sense to write two scripts and run them in parallel?
Does it make sense to use threading?
Is there any other possibility in order to make this work?
I would be grateful for code samples.
Thank you!
Edit:
Both loops are absolutely independent.
So, let's say while script 1 is running and reading the sensor data, as soon as one of the sensors reads a value < 300, it should start script 2, which will send the signals. As soon as the sensor data goes back above 300, it should stop script 2.
"Python multiple loops at the same time. I know that it is not possible [...]" - this looks really funny.
It is possible to run two loops at the same time, exactly how you described it. And both ways make much sense, depending on what you actually need and want. If the tasks are completely independent, you should run them as two scripts. If the two loops realize one task together and it makes sense to keep them in one file, you can use multiprocessing.
Tested for Python 2.7.5+ and 3.3.2+.
Here is a minimal example:
from multiprocessing import Process
import time

def f(name):
    print('hello', name)
    time.sleep(10)

def d(name):
    print('test2', name)
    time.sleep(10)

if __name__ == '__main__':
    p1 = Process(target=f, args=('bob',))
    p2 = Process(target=d, args=('alice',))
    p1.start()
    p2.start()
    p1.join()
    p2.join()
The script runs for 10 seconds and both strings are printed right away, which means everything works:
time python3 ./process.py
hello bob
test2 alice
real 0m10.073s
user 0m0.040s
sys 0m0.016s
You can also run multiple scripts (some as .pyw for convenience) and have them exchange information via UDP sockets. Note that 127.0.0.1 is the loopback address: it sends to your own machine under ANY circumstance. As for the port, just make sure no other program uses the one you pick; by "other programs" I mean ANY program that uses ports, and check even basic router settings.
Sample (send)
from socket import *

host = "127.0.0.1"  # loopback address: sends to this machine, per the note above
port = 9000         # must be an int; make sure no other program uses it
addr = (host, port)
UDPSock = socket(AF_INET, SOCK_DGRAM)
data = "Random Text"
send = data.encode("ascii")
UDPSock.sendto(send, addr)
UDPSock.close()
Sample (receive)
from socket import *

host = ""   # empty string: listen on all local interfaces
port = 9000
addr = (host, port)
UDPSock = socket(AF_INET, SOCK_DGRAM)
UDPSock.bind(addr)
(data, addr) = UDPSock.recvfrom(1024)  # 1024 is the max number of bytes to receive
data = data.decode('ascii')
UDPSock.close()
You can use these to run separate loops and tell what to do from two separate programs.
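Applied to the sensor question at the top of the thread, the sending side could look roughly like this. This is just a sketch: read_sensor() is a hypothetical stand-in for the real sensor code, and the threshold and interval come from the question.
from socket import *
import time

def read_sensor():
    # hypothetical placeholder; replace with your real sensor-reading code
    return 0

addr = ("127.0.0.1", 9000)  # loopback; port assumed free, as noted above
UDPSock = socket(AF_INET, SOCK_DGRAM)
signalling = False

while True:
    value = read_sensor()
    if value < 300 and not signalling:
        UDPSock.sendto("start".encode("ascii"), addr)  # tell the signal script to start
        signalling = True
    elif value >= 300 and signalling:
        UDPSock.sendto("stop".encode("ascii"), addr)   # tell the signal script to stop
        signalling = False
    time.sleep(0.25)  # the 0.25 s sensor interval from the question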
Related
I have an application which communicates on a specific port, and I would like to listen to all UDP traffic which has this specific port as a source or destination.
Naively I try to do something like:
import socket

UDP_IP = "0.0.0.0"
UDP_PORT = my_port
s = socket.socket(socket.AF_INET, socket.SOCK_RAW, socket.IPPROTO_UDP)
s.bind((UDP_IP, UDP_PORT))
while True:
    data, addr = s.recvfrom(4096)
    print("received message:", data)
However, this does not work because the application in question is already bound to the port, so I get an error when I try to bind to it in my code.
My next attempt was to use scapy with something like:
from scapy.all import *
from threading import Thread
import queue

conf.use_pcap = True
pending_pkts = queue.Queue()

def callback(pkt):
    pending_pkts.put(pkt)

def worker():
    while True:
        pkt = pending_pkts.get()
        # I do some stuff with the packet here
        pending_pkts.task_done()

t = Thread(target=worker)
t.daemon = True
t.start()
sniff(prn=callback, filter="udp and port my_port")
The idea here was that I would stuff the packets into my queue and then do the relatively costly processing on a separate thread. While this does somewhat work, I miss something like 50% of the packets I am interested in, which is unacceptable for the project. I have seen other people run into this issue and have tried everything suggested (using a very specific filter, using pcap, using multiple threads so that costly processing doesn't hold up the capture), but evidently this is still not fast enough, since I miss so many packets.
I would greatly appreciate it if someone could point me in the right direction as to how I can get ideally 100% packet capture (doesn't need to be realtime, I can accept a couple second delay in processing if it means I get everything) on a UDP port which is already in use. Ideally I would like to stick with python, but I would be willing to also try something in C++ if someone knows of a solution.
Thanks for your time :)
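Since a couple of seconds of delay is acceptable, one possible direction (a sketch, not the poster's original setup) is to take the capture out of Python entirely: let tcpdump write the packets to a pcap file, then process the file offline with scapy, free of any real-time pressure. The interface name, port, and file name below are assumptions, and tcpdump usually needs root privileges.
import subprocess
from scapy.all import PcapReader

# let tcpdump (assumed installed) do the capture; "eth0", 9000 and the
# file name are placeholders
capture = subprocess.Popen(
    ["tcpdump", "-i", "eth0", "-w", "capture.pcap", "udp port 9000"])

# ... let it run for as long as the capture should last, then stop it
capture.terminate()
capture.wait()

# process the recorded packets offline
reader = PcapReader("capture.pcap")
for pkt in reader:
    pass  # do the costly processing here
reader.close()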
I have some simple code in Python 3 using schedule and socket:
import schedule
import socket
from time import sleep

def readDataFromFile():
    data = []
    with open("/tmp/tmp.txt", "r") as f:
        for singleLine in f.readlines():
            data.append(str(singleLine))
    if(len(data)>0):
        writeToBuffer(data)

def readDataFromUDP():
    udpData = []
    rcvData, addr = sock.recvfrom(256)
    udpData.append(rcvData.decode('ascii'))
    if(len(udpData)>0):
        writeToBuffer(udpData)
.
.
.
def main():
    schedule.every().second.do(readDataFromFile)
    schedule.every().second.do(readDataFromUDP)
    while(1):
        schedule.run_pending()
        sleep(1)

UDP_IP = "192.xxx.xxx.xxx"
UDP_PORT = xxxx
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.bind((UDP_IP, UDP_PORT))
main()
The problem is that the script hangs on the sock.recvfrom() call and waits until data arrives.
How do I force Python to run this job independently? Would it be a better idea to run this in threads?
You can use threads here, and it'll work fine, but it will require a few changes. First, the scheduler on your background thread is going to try to kick off a new recvfrom every second, no matter how long the last one took. Second, since both threads are apparently trying to call the same writeToBuffer function, you're probably going to need a Lock or something else to synchronize them.
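A minimal sketch of that synchronization, assuming writeToBuffer appends to a shared list (the buffer itself is not shown in the question, so its name here is an assumption):
import threading

buffer_lock = threading.Lock()
sharedBuffer = []  # hypothetical shared buffer behind writeToBuffer

def writeToBuffer(data):
    # only one thread at a time may touch the shared buffer
    with buffer_lock:
        sharedBuffer.extend(data)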
Rewriting the whole program around an asynchronous event loop is almost certainly overkill here.
Just changing the socket to be nonblocking and doing a hybrid is probably the simplest change, e.g., by using settimeout:
# wherever you create your socket
sock.settimeout(0.8)

# ...

def readDataFromUDP():
    udpData = []
    try:
        rcvData, addr = sock.recvfrom(256)
    except socket.timeout:
        return
    udpData.append(rcvData.decode('ascii'))
    if(len(udpData)>0):
        writeToBuffer(udpData)
Now, every time you call recvfrom, if there's data available, you'll handle it immediately; if not, it'll wait up to 0.8 seconds, and then raise an exception, which means you have no data to process, so go back and wait for the next loop. (There's nothing magical about that 0.8; I just figured something a little less than 1 second would be a good idea, so there's time left to do all the other work before the next schedule time hits.)
Under the covers, this works by setting the OS-level socket to non-blocking mode and doing some implementation-specific thing to wait with a timeout. You could do the same yourself by using setblocking(False) and using the select or selectors module to wait up to 0.8 seconds for the socket to be ready, but it's easier to just let Python take care of that for you.
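For reference, a rough sketch of that manual version with the selectors module (Python 3.4+), replacing the settimeout call above:
import selectors

sock.setblocking(False)
sel = selectors.DefaultSelector()
sel.register(sock, selectors.EVENT_READ)

def readDataFromUDP():
    # wait up to 0.8 seconds for the socket to become readable
    if not sel.select(timeout=0.8):
        return  # nothing arrived in time; try again on the next schedule tick
    rcvData, addr = sock.recvfrom(256)
    writeToBuffer([rcvData.decode('ascii')])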
I am trying to measure the speed of file transfer through sockets in Python. I set up measurements on both ends (sending and receiving side) and get somewhat different results (e.g. 16 vs 17 Mbps for a 1 MB file transferred over ad-hoc wifi). My question is whether this kind of difference is something I should expect, given the measurement setup below. This is all running on two Raspberry Pi model 2 B boards.
sender:
import socket as s
import time as t

sock = s.socket(s.AF_INET, s.SOCK_STREAM)
sock.connect((addr, 5000))  # addr and data are defined elsewhere in the script
start = t.time()
sock.sendall(data)
finish = t.time()
receiver:
import socket as s
import time as t

sock = s.socket(s.AF_INET, s.SOCK_STREAM)
sock.setsockopt(s.SOL_SOCKET, s.SO_REUSEADDR, 1)
sock.bind(("", 5000))
sock.listen(1)
conn, addr = sock.accept()
pack = []
start = t.time()
while True:
    piece = conn.recv(8192)
    if not piece:
        finish = t.time()
        break
    pack.append(piece.decode())
Any other advice on measuring transfer speed is also very welcome, if there is a way to do this better.
I think speedtest-cli is what you are looking for. There is also a good article about it. It seems that the Raspberry Pi is supported.
Matt Martz has created a Python project called speedtest-cli which allows you to do a basic upload/download measurement using SpeedNet's infrastructure. It works fine on the Pi and is really easy to try out on the command line.
If you want to make your own script, speedtest_cli.py is a good place to start anyway.
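The same package can also be used as a library. A minimal sketch, assuming a version of speedtest-cli that installs the speedtest module:
import speedtest

st = speedtest.Speedtest()
st.get_best_server()          # pick the nearest test server
download_bps = st.download()  # measured download speed, in bits per second
upload_bps = st.upload()      # measured upload speed, in bits per second
print(download_bps / 1e6, "Mbit/s down")
print(upload_bps / 1e6, "Mbit/s up")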
I want to connect to multiple telnet hosts using threading in python, but I stumbled about an issue I'm not able to solve.
I am using the following code on Mac OS X Lion / Python 2.7:
import threading, telnetlib, socket

class ReaderThread(threading.Thread):
    def __init__(self, ip, port):
        threading.Thread.__init__(self)
        self.ip = ip
        self.port = port
        self.telnet_con = telnetlib.Telnet()

    def run(self):
        try:
            print 'Start %s' % self.ip
            self.telnet_con.open(self.ip, self.port, 30)
            print 'Done %s' % self.ip
        except socket.timeout:
            print 'Timeout in %s' % self.ip

    def join(self):
        self.telnet_con.close()

ta = []
t1 = ReaderThread('10.0.1.162', 9999)
ta.append(t1)
t2 = ReaderThread('10.0.1.163', 9999)
ta.append(t2)
for t in ta:
    t.start()
print 'Threads started\n'
In general it works, but either one of the threads (it is not always the same one) takes a long time to connect (about 20 seconds) and sometimes even runs into a timeout. During that awfully long connection time (in an all-local network), CPU load also goes up to 100%.
Even more strange is the fact that if I'm using only one thread in the array it always works flawlessly. So it must have something to do with the use of multiple threads.
I already added hostname entries for all IP addresses to avoid a DNS lookup issue. This didn't make a difference.
Thanks in advance for your help.
Best regards
senexi
OK, you have overridden join(), and you are not supposed to do that. The main thread calls join() on each thread when the main thread finishes, which is right after the last line in your code. Since your join() method returns before your telnet thread actually exits, Python gets confused and tries to call join() again, and this is what causes the 100% CPU usage. Try putting a 'print' statement in your join() method.
Your implementation of join() also tries to close the socket (probably while the other thread is still trying to open a connection), and this might be what is causing your telnet threads to never finish.
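A sketch of a fix along those lines: leave join() alone, close the connection from an explicitly named method instead, and only call it after the thread has finished.
class ReaderThread(threading.Thread):
    def __init__(self, ip, port):
        threading.Thread.__init__(self)
        self.ip = ip
        self.port = port
        self.telnet_con = telnetlib.Telnet()

    def run(self):
        try:
            self.telnet_con.open(self.ip, self.port, 30)
        except socket.timeout:
            print 'Timeout in %s' % self.ip

    def close(self):
        # only call this once the thread is done
        self.telnet_con.close()

for t in ta:
    t.start()
for t in ta:
    t.join()   # the inherited join() now really waits for the thread
    t.close()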
import socket

backlog = 1  # number of queued connections
sk_1 = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
sk_2 = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
local = {"port": 1433}
internet = {"port": 9999}
sk_1.bind(('', internet["port"]))
sk_1.listen(backlog)
sk_2.bind(('', local["port"]))
sk_2.listen(backlog)
Basically, I have this code. I am trying to listen on two ports: 1433 and 9999. But this doesn't seem to work.
How can I listen on two ports within the same Python script?
The fancy-pants way to do this, if you want to use the Python std-lib, would be to use SocketServer with the ThreadingMixIn -- although the 'select' suggestion is probably the more efficient.
Even though we only define one ThreadedTCPRequestHandler, you can easily repurpose it such that each listener has its own unique handler, and it should be fairly trivial to wrap the server/thread creation into a single method if that's the kind of thing you like.
#!/usr/bin/python
import threading
import time
import SocketServer

class ThreadedTCPRequestHandler(SocketServer.BaseRequestHandler):
    def handle(self):
        self.data = self.request.recv(1024).strip()
        print "%s wrote: " % self.client_address[0]
        print self.data
        self.request.send(self.data.upper())

class ThreadedTCPServer(SocketServer.ThreadingMixIn, SocketServer.TCPServer):
    pass

if __name__ == "__main__":
    HOST = ''
    PORT_A = 9999
    PORT_B = 9876
    server_A = ThreadedTCPServer((HOST, PORT_A), ThreadedTCPRequestHandler)
    server_B = ThreadedTCPServer((HOST, PORT_B), ThreadedTCPRequestHandler)
    server_A_thread = threading.Thread(target=server_A.serve_forever)
    server_B_thread = threading.Thread(target=server_B.serve_forever)
    server_A_thread.setDaemon(True)
    server_B_thread.setDaemon(True)
    server_A_thread.start()
    server_B_thread.start()
    while 1:
        time.sleep(1)
The code so far is fine, as far as it goes (except that a backlog of 1 seems unduly strict). The problem, of course, comes when you try to accept a connection on either listening socket, since accept is normally a blocking call (and "polling" by trying to accept with short timeouts on each socket alternately would burn machine cycles to no good purpose).
select to the rescue!-) select.select (or on the better OSs select.poll or even select.epoll or select.kqueue... but, good old select.select works everywhere!-) will let you know which socket is ready and when, so you can accept appropriately. Along these lines, asyncore and asynchat provide a bit more organization (and third-party framework twisted, of course, adds a lot of such "asynchronous" functionality).
Alternatively, you can devote separate threads to servicing the two listening sockets, but in this case, if the different sockets' functionality needs to affect the same shared data structures, coordination (locking &c) may become ticklish. I would certainly recommend trying the async approach first -- it's actually simpler, as well as offering potential for substantially better performance!-)
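A minimal sketch of the select.select approach, continuing from the two listening sockets sk_1 and sk_2 in the question:
import select

listeners = [sk_1, sk_2]
while True:
    # block until at least one listening socket has a pending connection
    readable, _, _ = select.select(listeners, [], [])
    for sk in readable:
        conn, addr = sk.accept()  # guaranteed not to block now
        # handle the client on conn here, then:
        conn.close()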