I'm currently programming a Python class which acts as a client.
Because I don't want to block the main thread, receiving packets is done in another thread, and a callback function is called when a packet arrives.
The received packets are either broadcast messages or a reply to a command sent by the client. The function for sending commands is synchronous: it blocks until the reply arrives so that it can directly return the result.
Simplified example:
import socket
import threading

class SocketThread(threading.Thread):
    packet_received_callback = None
    _reply = None
    _reply_event = threading.Event()

    def run(self):
        self._initialize_socket()

        while True:
            # This function blocks until a packet arrives
            p = self._receive_packet()

            if self._is_reply(p):
                self._reply = p
                self._reply_event.set()
            else:
                self.packet_received_callback(p)

    def send_command(self, command):
        # Send command via socket
        self.sock.send(command)

        # Wait for reply
        self._reply_event.wait()
        self._reply_event.clear()

        return self._process_reply(self._reply)
The problem I'm facing now is that I can't send commands from within the callback function, because that would end in a deadlock: send_command waits for a reply, but no packets can be received because the thread that receives packets is busy executing the callback function.
My current solution is to start a new thread for each call to the callback function. But that way a lot of threads are spawned, and it becomes difficult to ensure that packets are processed in order in heavy-traffic situations.
Does anybody know a more elegant solution or am I going the right way?
Thanks for your help!
A proper answer to this question depends a lot on the details of the problem you are trying to solve, but here is one solution:
Rather than invoking the callback function immediately upon receiving the packet, I think it would make more sense for the socket thread to simply store the packet that it received and continue polling for packets. Then when the main thread has time, it can check for new packets that have arrived and act on them.
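A minimal sketch of that idea, assuming a queue.Queue is acceptable as the shared buffer (the names pending_packets and process_pending_packets are just illustrative):
import queue

pending_packets = queue.Queue()

# In the socket thread, instead of calling the callback directly:
#     pending_packets.put(p)

# In the main thread, whenever it has time to handle packets:
def process_pending_packets(handler):
    while True:
        try:
            p = pending_packets.get_nowait()
        except queue.Empty:
            break  # nothing waiting right now
        handler(p)  # runs in the main thread, so calling send_command() here is safe
Because queue.Queue does its own locking, no extra synchronization is needed between the two threads.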
I recently had another idea; let me know what you think of it. It's just a general approach to solving such problems, in case someone else has a similar problem and needs to use multithreading.
import threading
import queue

class EventBase(threading.Thread):
    ''' Class which provides a base for event-based programming. '''

    def __init__(self):
        super().__init__()
        self._event_queue = queue.Queue()

    def run(self):
        ''' Starts the event loop. '''

        while True:
            # Get next event
            e = self._event_queue.get()

            # If there is a "None" in the queue, someone wants to stop
            if not e:
                break

            # Call event handler
            e[0](*e[1], **e[2])

            # Mark as done
            self._event_queue.task_done()

    def stop(self, join=True):
        ''' Stops processing events. '''

        if self.is_alive():
            # Put poison pill into the queue
            self._event_queue.put(None)

            # Wait until finished
            if join:
                self.join()

    def create_event_launcher(self, func):
        ''' Creates a function which can be used to call the passed func in the event loop. '''

        def event_launcher(*args, **kwargs):
            self._event_queue.put((func, args, kwargs))

        return event_launcher
Use it like so:
event_loop = eventbase.EventBase()
event_loop.start()
# Or any other callback
sock_thread.packet_received_callback = event_loop.create_event_launcher(my_event_handler)
# ...
# Finally
event_loop.stop()
Related
Essentially I'm using the socketserver Python library to try to handle communications from a central server to multiple Raspberry Pi 4 and ESP32 peripherals. Currently I have the socketserver running serve_forever; the request handler then calls a method from a ProcessManager class which starts a process that should handle the actual communication with the client.
It works fine if I use .join() on the process so that the ProcessManager method doesn't exit, but that's not how I would like it to run. Without .join() I get a broken pipe error as soon as the client communication process tries to send a message back to the client.
This is the process manager class; it gets defined in the main file, and buildprocess is called through the request handler of the socketserver class:
import multiprocessing as mp
mp.allow_connection_pickling()
import queuemanager as qm
import hostmain as hmain
import camproc
import keyproc
import controlproc

# method that gets called into a process so that class and socket share memory
def callprocess(periclass, peritype, clientsocket, inqueue, genqueue):
    periclass.startup(clientsocket)

class ProcessManager(qm.QueueManager):
    def wipeproc(self, target):
        # TODO make wipeproc integrate with the queue manager rather than directly to the class
        for macid in list(self.procdict.keys()):
            if target == macid:
                # calls proc kill for the class
                try:
                    self.procdict[macid]["class"].prockill()
                except Exception as e:
                    print("exception:", e, "in wipeproc")
                # waits for process to exit naturally (class threads to close)
                self.procdict[macid]["process"].join()
                # remove dict entry for this macid
                self.procdict.pop(macid)

    # called externally to create the new process and append to procdict
    def buildprocess(self, peritype, macid, clientsocket):
        # TODO put some logic here to handle the differences of the controller process
        # generates queue object
        inqueue = mp.Queue()
        # creates periclass instance based on type
        if peritype == hmain.cam:
            periclass = camproc.CamMain(self, inqueue, self.genqueue)
        elif peritype == hmain.keypad:
            print("to be added to")
        elif peritype == hmain.motion:
            print("to be added to")
        elif peritype == hmain.controller:
            print("to be added to")

        # init and start call for the new process
        self.procdict[macid] = {"type": peritype, "inqueue": inqueue, "class": periclass, "process": None}
        self.procdict[macid]["process"] = mp.Process(target=callprocess,
            args=(self.procdict[macid]["class"], self.procdict[macid]["type"], clientsocket,
                  self.procdict[macid]["inqueue"], self.genqueue))
        self.procdict[macid]["process"].start()
        # updating the process dictionary before class obj gets appended
        # if macid in list(self.procdict.keys()):
        #     self.wipeproc(macid)
        print(self.procdict)
        print("client added")
To my eye, all the pertinent objects should be stored in the procdict dictionary, but as I mentioned, it just gets a broken pipe error unless I join the process with self.procdict[macid]["process"].join() before the end of the buildprocess method.
I would like it to exit the method but leave the communication process running as is. I've tried a few different things with restructuring what gets defined within the process and without, but to no avail. Thus far I haven't been able to find any pertinent solutions online, but of course I may have missed something too.
Thank you for reading this far if you did! I've been stuck on this for a couple of days, so any help would be appreciated; this is my first project with multiprocessing and sockets on any sort of scale.
#################
Edit to include pastebin with all the code:
https://pastebin.com/u/kadytoast/1/PPWfyCFT
Without .join() I get a broken pipe error as soon as the client communication process tries to send a message back to the client.
That's because, at the moment the request handler's handle() returns, socketserver shuts down the connection. The fact that socketserver simplifies the task of writing network servers means it does certain things automatically that are usually done in the course of network request handling. Your code is not quite making the intended use of socketserver. In particular, for handling requests asynchronously, the asynchronous mixins are intended: with ForkingMixIn the server spawns a new process for each request, in contrast to your current code, which does this itself with mp.Process. So I think you have basically two options:
code less of the request handling yourself and use the provided socketserver methods
stay with your own handling and don't use socketserver at all, so it won't get in the way.
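For the first option, here is a minimal sketch of what that could look like (the handler class, port, and echo-style protocol are assumptions, not your actual code; ForkingTCPServer requires a platform with fork(), and ThreadingTCPServer is the portable alternative):
import socketserver

class PeripheralHandler(socketserver.BaseRequestHandler):
    def handle(self):
        # self.request is the connected client socket; keep looping here for the
        # lifetime of the connection instead of spawning your own process
        while True:
            data = self.request.recv(4096)
            if not data:
                break                       # client closed the connection
            self.request.sendall(data)      # reply on the same socket

if __name__ == '__main__':
    # one forked process per connection, managed by socketserver itself
    with socketserver.ForkingTCPServer(('0.0.0.0', 9999), PeripheralHandler) as server:
        server.serve_forever()
The connection stays open for as long as handle() runs, so nothing gets shut down behind your back.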
I have a specific problem.
The main content of the program starts with creating a Process with a D-Bus loop, where I listen for signals.
The content of the signals I store in queues. In the next part of main I have a thread pool.
When a thread takes an item from the queue, it uses a specific function (detection) to handle the request, based on the content of the item from the queue. (There is an operation on a database, from which I take data and perform some operations depending on the request.)
Every thread in the thread pool starts one more thread, which should handle signals (current status and interrupt).
For example: I receive a signal which means I have to handle something on numbers. Some thread from the thread pool takes this item from the queue and starts the function which handles something on numbers - it can take a long time. After some time I may receive a signal asking for the current status, and I need to send the current status of the detection - that's why I use threads (for shared memory). I can also receive an interrupt signal from D-Bus ("it's taking too long, so stop this detection and be free for another request"). And the interrupt is the main problem...
So my main questions are:
Is there any way I can raise an exception on the interrupt signal and stop the function (detection)? (I found a solution, but only for catching it in main... I need to catch it in a thread which is in the thread pool, and raise it in a thread started by that thread-pool thread.)
My second question is about the GIL... does my signal-receiving thread receive all signals? I think it doesn't... (Yes, I use threads_init().)
program:
SERVICE = multiprocessing.Process(target=dbus_signal_receiver, args=(...))
SERVICE.daemon = True
SERVICE.start()

class worker(threading.Thread):
    def __init__(self,...):
        threading.Thread.__init__(self)

    def run(self):
        while True:
            # get item from queue
            s = threading.Thread(target=curr_and_interr_signal_handle, args=(ID of item from queue,...))
            s.daemon = True
            s.start()
            # start specific detection based on request

for i in range(number of threads):
    t = worker(...)
    t.daemon = True
    t.start()
and I hoped something like this would work... (but it doesn't)
...

class worker(threading.Thread):
    def __init__(self,...):
        threading.Thread.__init__(self)

    def run(self):
        while True:
            try:
                # get item from queue
                s = threading.Thread(target=curr_and_interr_signal_handle, args=(ID of item from queue,...))
                s.daemon = True
                s.start()
                # start specific detection based on request
            except raised_interrupt_exception:
                # continue - wait for another request from queue
                ...
Read about 18.8.1.2. Signals and threads
Python signal handlers are always executed in the main Python thread,
even if the signal was received in another thread.
This means that signals can’t be used as a means of inter-thread communication.
You can use the synchronization primitives from the threading module instead.
Besides, only the main thread is allowed to set a new signal handler.
Read about 17.1.7. Event Objects
This is one of the simplest mechanisms for communication between threads: one thread signals an event and other threads wait for it
It isn't clear why you have to use a thread within a thread.
Why could your worker thread not handle the detection itself?
For instance, the following should do it:
def run(self):
    while self.running.is_set():
        # get item from queue
        # start specific detection based on request
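Since an exception can't be raised from one thread into another, here is a minimal sketch of the Event-based alternative (the names, and the idea of pairing each queued item with its own interrupt event, are my assumptions): the D-Bus interrupt handler sets a threading.Event, and the detection loop checks it periodically and aborts.
import threading

def detection(item, interrupt_event):
    ''' Long-running detection; returns early if the interrupt event is set. '''
    for chunk in range(1000):              # stands in for the real work on `item`
        if interrupt_event.is_set():       # set by the D-Bus interrupt handler
            return None                    # give up and free this worker
        # ... process one chunk, update shared status for the status signal ...
    return "result"

class Worker(threading.Thread):
    def __init__(self, task_queue):
        super().__init__(daemon=True)
        self.task_queue = task_queue

    def run(self):
        while True:
            # each queued item carries its own interrupt event
            item, interrupt_event = self.task_queue.get()
            detection(item, interrupt_event)
The status signal can be answered the same way: the handler reads whatever shared state detection() updates, without needing an extra thread per worker.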
I have this thread running:
def run(self):
    while 1:
        msg = self.connection.recv(1024).decode()
I would like to end this thread when I close the Tkinter window, like this:
self.window.protocol('WM_DELETE_WINDOW', self.closeThreads)

def closeThreads(self):
    self.game.destroy()
    # End the thread
I can't use thread._close() because it is deprecated, and Python 3.4 does not allow it.
The only really satisfactory solution I've seen for this problem is not to allow your thread to block inside recv(). Instead, set the socket to non-blocking and have the thread block inside select() instead. The advantage of blocking inside select() is that you can tell select() to return when any one of several sockets becomes ready-for-read, which brings us to the next part: as part of setting up your thread, create a second socket (either a locally-connected TCP socket e.g. as provided by socketpair, or a UDP socket listening on a port for packets from localhost). When your main thread wants your networking thread to go away, your main thread should send a byte to that socket (or in the TCP case, the main thread could just close its end of the socket-pair). That will cause select() to return ready-for-read on that socket, and when your network thread realizes that the socket is marked ready-for-read, it should respond by exiting immediately.
The advantages of doing it that way are that it works well on all OS's, always reacts immediately (unlike a polling/timeout solution), takes up zero extra CPU cycles when the network is idle, and doesn't have any nasty side effects in multithreaded environments. The downside is that it uses up a couple of extra sockets, but that's usually not a big deal.
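A minimal sketch of that approach (the class and attribute names are assumptions; socket.socketpair is available on Windows from Python 3.5):
import select
import socket
import threading

class ReceiverThread(threading.Thread):
    def __init__(self, conn):
        super().__init__()
        self.conn = conn
        # one end stays inside the thread, the other is used by stop()
        self._wake_recv, self._wake_send = socket.socketpair()

    def run(self):
        while True:
            ready, _, _ = select.select([self.conn, self._wake_recv], [], [])
            if self._wake_recv in ready:
                return                      # main thread asked us to exit
            msg = self.conn.recv(1024)
            if not msg:
                return                      # peer closed the connection
            # ... handle msg ...

    def stop(self):
        self._wake_send.send(b'x')          # wakes select() up immediately
In the question's closeThreads(), the main thread would then simply call stop() on this thread before destroying the window.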
Two solutions:
1) Don't stop the thread, just allow it to die when the process exits with sys.exit()
2) Start the thread with a "die now" flag. The Event class is specifically designed to signal one thread from another.
The following example starts a thread, which connects to a server. Any data is handled, and if the parent signals the thread to exit, it will. As an additional safety feature we have an alarm signal to kill everything, just it case something gets out of hand.
source
import signal, socket, threading

class MyThread(threading.Thread):
    def __init__(self, conn, event):
        super(MyThread, self).__init__()
        self.conn = conn
        self.event = event

    def handle_data(self):
        "process data if any"
        try:
            data = self.conn.recv(4096)
            if data:
                print('data:', data, len(data))
        except socket.timeout:
            print('(timeout)')

    def run(self):
        self.conn.settimeout(1.0)
        # exit on signal from caller
        while not self.event.is_set():
            # handle any data; continue loop after 1 second
            self.handle_data()
        print('got event; returning to caller')

sock = socket.create_connection(('example.com', 80))
event = threading.Event()

# connect to server and start connection handler
th = MyThread(conn=sock, event=event)

# watchdog: kill everything in 3 seconds
signal.alarm(3)

# after 2 seconds, tell data thread to exit
threading.Timer(2.0, event.set).start()

# start data thread and wait for it
th.start()
th.join()
output
(timeout)
(timeout)
got event; returning to caller
The problem I've got right now is one concerning this chat client I've been trying to get working for some days now. It's supposed to be an upgrade of my original chat client, that could only reply to people if it received a message first.
So after asking around and doing some research, I decided to use select.select to handle my client.
The problem is that it still has the same issue as before.
*The loop gets stuck on receiving and won't complete until it receives something*
Here's what I wrote so far:
from socket import *   # needed for socket, AF_INET, SOCK_STREAM below
import select
import sys  # because why not?
import threading
import queue

print("New Chat Client Using Select Module")
HOST = input("Host: ")
PORT = int(input("Port: "))
s = socket(AF_INET, SOCK_STREAM)
print("Trying to connect....")
s.connect((HOST, PORT))
s.setblocking(0)
# Not including setblocking(0) because select handles that.
print("You just connected to", HOST)

# Lets now try to handle the client a different way!
while True:
    # Attempting to create a few threads
    Reading_Thread = threading.Thread(None, s)
    Reading_Thread.start()
    Writing_Thread = threading.Thread()
    Writing_Thread.start()

    Incoming_data = [s]
    Exportable_data = []
    Exceptions = []
    User_input = input("Your message: ")
    rlist, wlist, xlist = select.select(Incoming_data, Exportable_data, Exceptions)
    if User_input == True:
        Exportable_data += [User_input]
You're probably wondering why I've got threading and queues in there.
That's because people told me I could solve the problem by using threading and queues, but after reading documentation and looking for video tutorials or examples that matched my case, I still don't know how I can use them to make my client work.
Could someone please help me out here? I just need to find a way to have the client enter messages as much as they'd like without waiting for a reply. This is just one of the ways I am trying to do it.
Normally you'd create a function in which your while True loop runs and receives the data, which it can write to some buffer or queue to which your main thread has access.
You'd need to synchronize access to this queue so as to avoid data races.
I'm not too familiar with Python's threading API; however, creating a function which runs in a thread can't be that hard. Let me find an example.
It turns out you can create a class that derives from threading.Thread and overrides its run() method. Then you create an instance of your class and start the thread that way.
import threading
import time

class WorkerThread(threading.Thread):
    def run(self):
        while True:
            print('Working hard')
            time.sleep(0.5)

def runstuff():
    worker = WorkerThread()
    worker.start()  # start thread here, which will call run()
You can also use a simpler API and create a function and call thread.start_new_thread(fun, args) on it, which will run that function in a thread.
import _thread as thread  # the low-level module is named _thread in Python 3

def fun():
    while True:
        pass  # do stuff

thread.start_new_thread(fun, ())  # run fun in a thread (the args tuple is required)
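Tying that back to the chat client, here is a minimal sketch (assuming `s` is the already-connected socket from your code, left in its default blocking mode, so no setblocking(0) and no select are needed with this approach; the receive_loop and incoming names are illustrative): the receiver thread blocks on recv() and pushes decoded messages into a queue, while the main loop sends your input and drains the queue after each prompt, so you can keep typing without waiting for replies.
import queue
import threading

incoming = queue.Queue()

def receive_loop(sock):
    ''' Runs in a background thread: blocks on recv() and queues messages. '''
    while True:
        data = sock.recv(1024)
        if not data:
            break                     # server closed the connection
        incoming.put(data.decode())

# `s` is the connected socket created earlier in your script
threading.Thread(target=receive_loop, args=(s,), daemon=True).start()

while True:
    message = input("Your message: ")
    s.sendall(message.encode())
    # print anything that arrived while we were typing
    while not incoming.empty():
        print(incoming.get())
Since queue.Queue handles its own locking, no extra synchronization is needed here.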
I have a queue that always needs to be ready to process items when they are added to it. The function that runs on each item in the queue creates and starts a thread to execute the operation in the background so the program can go do other things.
However, the function I am calling on each item in the queue simply starts the thread and then completes execution, regardless of whether or not the thread it started completed. Because of this, the loop will move on to the next item in the queue before the program is done processing the last item.
Here is code to better demonstrate what I am trying to do:
import threading
import Queue  # Python 2 queue module, matching the Queue.Empty used below

queue = Queue.Queue()

t = threading.Thread(target=worker)
t.start()

def addTask():
    queue.put(SomeObject())

def worker():
    while True:
        try:
            # If an item is put onto the queue, immediately execute it (unless
            # an item on the queue is still being processed, in which case wait
            # for it to complete before moving on to the next item in the queue)
            item = queue.get()
            runTests(item)
            # I want to wait for 'runTests' to complete before moving past this point
        except Queue.Empty as err:
            # If the queue is empty, just keep running the loop until something
            # is put on top of it.
            pass

def runTests(args):
    op_thread = SomeThread(args)
    op_thread.start()
    # My problem is once this last line 'op_thread.start()' starts the thread,
    # the 'runTests' function completes operation, but the operation executed
    # by the thread is not yet done executing because it is still running in
    # the background. I do not want the 'runTests' function to actually complete
    # execution until the operation in that thread is done executing.

    """op_thread.join()"""
    # I tried putting this line after 'op_thread.start()', but that did not solve anything.
    # I have commented it out because it is not necessary to demonstrate what
    # I am trying to do, but I just wanted to show that I tried it.
Some notes:
This is all running in a PyGTK application. Once the 'SomeThread' operation is complete, it sends a callback to the GUI to display the results of the operation.
I do not know how much this affects the issue I am having, but I thought it might be important.
A fundamental issue with Python threads is that you can't just kill them - they have to agree to die.
What you should do is:
Implement the thread as a class
Add a threading.Event member which the join method clears and the thread's main loop occasionally checks; if it sees it has been cleared, it returns. To do this, override threading.Thread.join so that it clears the event and then calls Thread.join on itself
To allow (2), make the read from the Queue block with some small timeout. This way your thread's "response time" to the kill request will be at most the timeout, while on the other hand no CPU is wasted on busy-waiting
Here's some code from a socket client thread I have that has the same issue with blocking on a queue:
class SocketClientThread(threading.Thread):
    """ Implements the threading.Thread interface (start, join, etc.) and
        can be controlled via the cmd_q Queue attribute. Replies are placed in
        the reply_q Queue attribute.
    """
    def __init__(self, cmd_q=Queue.Queue(), reply_q=Queue.Queue()):
        super(SocketClientThread, self).__init__()
        self.cmd_q = cmd_q
        self.reply_q = reply_q
        self.alive = threading.Event()
        self.alive.set()
        self.socket = None

        self.handlers = {
            ClientCommand.CONNECT: self._handle_CONNECT,
            ClientCommand.CLOSE: self._handle_CLOSE,
            ClientCommand.SEND: self._handle_SEND,
            ClientCommand.RECEIVE: self._handle_RECEIVE,
        }

    def run(self):
        while self.alive.isSet():
            try:
                # Queue.get with timeout to allow checking self.alive
                cmd = self.cmd_q.get(True, 0.1)
                self.handlers[cmd.type](cmd)
            except Queue.Empty as e:
                continue

    def join(self, timeout=None):
        self.alive.clear()
        threading.Thread.join(self, timeout)