Using TCP connection to execute parallel threads over different ports - python

I am trying to execute this Python script, which implements a distributed computing protocol. Currently it executes the functions sequentially, one after the other. I want to be able to run all the processes in parallel on different ports instead of using the multiprocessing.Manager().Queue() shown in the code below, but I have no clue how to go about it. Any head start to point me in the right direction would be appreciated.
import multiprocessing
from threading import Thread

class Process(Thread):
    def __init__(self, env, id):
        super(Process, self).__init__()
        self.inbox = multiprocessing.Manager().Queue()
        self.env = env
        self.id = id

    def run(self):
        try:
            self.body()
            self.env.removeProc(self.id)
        except EOFError:
            print "Exiting.."

    def getNextMessage(self):
        return self.inbox.get()

    def sendMessage(self, dst, msg):
        self.env.sendMessage(dst, msg)

    def deliver(self, msg):
        self.inbox.put(msg)

I was able to run this code in parallel by implementing simple socket programming instead of the queues, following the Python documentation, and then making the message communication happen over those sockets.
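A minimal sketch of that approach (the port mapping, the SocketInbox name, and the newline-delimited text protocol are illustrative assumptions, not the exact code): each process binds a listening socket on its own port and feeds received lines into a local Queue, so getNextMessage() can stay unchanged.

import socket
import threading
from Queue import Queue  # `queue` on Python 3

PORTS = {0: 9000, 1: 9001, 2: 9002}  # assumed process-id -> port mapping

class SocketInbox(threading.Thread):
    """Replaces the Manager().Queue() inbox with a listening socket."""

    def __init__(self, proc_id):
        super(SocketInbox, self).__init__()
        self.daemon = True
        self.messages = Queue()
        self.server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        self.server.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        self.server.bind(('127.0.0.1', PORTS[proc_id]))
        self.server.listen(5)

    def run(self):
        while True:
            conn, _ = self.server.accept()           # one connection per message
            msg = conn.makefile().readline().strip()
            conn.close()
            self.messages.put(msg)

def send_message(dst, msg):
    # Deliver one newline-terminated message to the process with id `dst`
    # (on Python 3 the payload would need to be encoded to bytes).
    conn = socket.create_connection(('127.0.0.1', PORTS[dst]))
    conn.sendall(msg + '\n')
    conn.close()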

Related

outReceived from twisted ProcessProtocol merges messages if received too fast (buffering problem?)

I am using Klein, a micro web-framework based on Twisted. I have a server (running on Windows!) which spawns an external long-running process (an end-to-end test) via reactor.spawnProcess().
To send status information about the running test, I implemented a ProcessProtocol:
class IPCProtocol(protocol.ProcessProtocol):
    def __init__(self, status: 'Status', history: 'History'):
        super().__init__()
        self.status: Status = status
        self.history: History = history
        self.pid = None

    def connectionMade(self):
        self.pid = self.transport.pid
        log.msg("process started, pid={}".format(self.pid))

    def processExited(self, reason):
        log.msg("process exited, status={}".format(reason.value.exitCode))
        # add current run to history
        self.history.add(self.status.current_run)
        # create empty testrun and save status
        self.status.current_run = Testrun()
        self.status.status = StatusEnum.ready
        self.status.save()
        # check for more queue items
        if not self.status.queue.is_empty():
            start_testrun()

    def outReceived(self, data: bytes):
        data = data.decode('utf-8').strip()
        if data.startswith(constants.LOG_PREFIX_FAILURE):
            self.failureReceived()
        if data.startswith(constants.LOG_PREFIX_SERVER):
            data = data[len(constants.LOG_PREFIX_SERVER):]
            log.msg("Testrunner: " + data)
            self.serverMsgReceived(data)
I start the process with the following command:
ipc_protocol = IPCProtocol(status=app.status, history=app.history)
args = [sys.executable, 'testrunner.py', next_entry.suite, json.dumps(next_entry.testscripts)]
log.msg("Starting testrunn.py with args: {}".format(args))
reactor.spawnProcess(ipc_protocol, sys.executable, args=args)
To send information, I just print out messages (with a prefix to distinguish them) in my testrunner.py.
The problem is that if I issue the print commands too fast, outReceived merges the messages.
I already tried adding flush=True to the print() calls in the external process, but this didn't fix the problem. Another question suggested using usePTY=True for spawnProcess, but this is not supported on Windows.
Is there a better way to fix this than adding a small delay (like time.sleep(0.1)) to each print() call?
You didn't say it, but it seems like the child process writes lines to its stdout.
You need to parse the output to find the line boundaries if you want to operate on these lines.
You can use LineOnlyReceiver to help you with this. Since processes aren't stream transports, you can't just use LineOnlyReceiver directly. You have to adapt it to the process protocol interface. You can do this yourself or you can use ProcessEndpoint (instead of spawnProcess) to do it for you.
For example:
from twisted.protocols.basic import LineOnlyReceiver
from twisted.internet.protocol import Factory
from twisted.internet.endpoints import ProcessEndpoint
from twisted.internet import reactor
endpoint = ProcessEndpoint(reactor, b"/some/some-executable", ...)
spawning_deferred = endpoint.connect(Factory.forProtocol(LineOnlyReceiver))
...
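If you would rather keep the existing IPCProtocol instead of switching to ProcessEndpoint, a minimal sketch of doing the line splitting by hand (this is an illustration, not part of the original answer) is to buffer the raw bytes and only act on complete lines:

class IPCProtocol(protocol.ProcessProtocol):
    # __init__, connectionMade and processExited stay as in the question ...
    _buffer = b""

    def outReceived(self, data: bytes):
        # Accumulate raw bytes; writes from the child may arrive merged or split.
        self._buffer += data
        *complete, self._buffer = self._buffer.split(b"\n")
        for raw in complete:
            line = raw.decode("utf-8").strip()
            if not line:
                continue
            if line.startswith(constants.LOG_PREFIX_FAILURE):
                self.failureReceived()
            if line.startswith(constants.LOG_PREFIX_SERVER):
                msg = line[len(constants.LOG_PREFIX_SERVER):]
                log.msg("Testrunner: " + msg)
                self.serverMsgReceived(msg)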

python design pattern queue with workers

I'm currently working on a project that involves three components:
an observer that checks for changes in a directory, a worker, and a command-line interface.
What I want to achieve is:
The observer, when a change happens, sends a string to the worker (adds a job to the worker's queue).
The worker has a queue of jobs and works on it forever.
Now I want the possibility to run a Python script that checks the status of the worker (number of active jobs, errors, and so on).
I don't know how to achieve this with Python in terms of which components to use and how to link the three of them.
I thought of a singleton worker where the observer adds jobs to a queue, but 1) I was not able to write working code and 2) how can I fit the checker in?
Another solution I thought of was multiple child processes spawned from a parent that holds the queue, but I'm a bit lost...
Thanks for any advice.
I'd use some kind of observer pattern or publish-subscribe pattern. For the former you can use, for example, the Python version of ReactiveX. But for a more basic example let's stay with the Python core. Parts of your program can subscribe to the worker and receive updates from it, via queues for example.
import itertools as it
from queue import Queue
from threading import Thread
import time

class Observable(Thread):
    def __init__(self):
        super().__init__()
        self._observers = []

    def notify(self, msg):
        for obs in self._observers:
            obs.put(msg)

    def subscribe(self, obs):
        self._observers.append(obs)

class Observer(Thread):
    def __init__(self):
        super().__init__()
        self.updates = Queue()

class Watcher(Observable):
    def run(self):
        for i in it.count():
            self.notify(i)
            time.sleep(1)

class Worker(Observable, Observer):
    def run(self):
        while True:
            task = self.updates.get()
            self.notify((str(task), 'start'))
            time.sleep(1)
            self.notify((str(task), 'stop'))

class Supervisor(Observer):
    def __init__(self):
        super().__init__()
        self._statuses = {}

    def run(self):
        while True:
            status = self.updates.get()
            print(status)
            self._statuses[status[0]] = status[1]
            # Do something based on status updates.
            if status[1] == 'stop':
                del self._statuses[status[0]]

watcher = Watcher()
worker = Worker()
supervisor = Supervisor()

watcher.subscribe(worker.updates)
worker.subscribe(supervisor.updates)

supervisor.start()
worker.start()
watcher.start()
However, many variations are possible; look at the various patterns and pick whichever suits you best.
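The question also asks for a way to check the worker's status (number of active jobs and so on). One possibility, sketched here as an addition to the code above rather than taken from it, is to guard the Supervisor's status dict with a lock and expose a query method that the main thread (or a small CLI loop) can call:

import threading

class Supervisor(Observer):
    def __init__(self):
        super().__init__()
        self._statuses = {}
        self._lock = threading.Lock()  # guard access from other threads

    def active_jobs(self):
        # Snapshot of jobs that have started but not yet stopped.
        with self._lock:
            return dict(self._statuses)

    def run(self):
        while True:
            task, state = self.updates.get()
            with self._lock:
                if state == 'stop':
                    self._statuses.pop(task, None)
                else:
                    self._statuses[task] = state

# A trivial status checker in the main thread could then do:
# while True:
#     if input('> ') == 'status':
#         print(supervisor.active_jobs())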

unable to stop thread from a module

I need to be able to call a stop function of a running thread. I tried several ways to achieve this, but so far no luck. I think I need a thread id, but I have no idea how this is done.
relevant code:
model:
import MODULE

class do_it():
    def __init__(self):
        self.on_pushButton_start_clicked()
        return

    def on_pushButton_start_clicked(self):
        self.Contr = MODULE.Controller()
        self.Contr.start()

    def on_pushButton_stop_clicked(self):
        if self.Contr:
            self.Contr.stop()
            self.Contr = None
        return
return
module:
import thread

class Controller():
    def __init__(self):
        self.runme = False

    def start(self):
        """Using this to start the controller's eventloop."""
        thread.start_new_thread(self._run, ())

    def stop(self):
        """Using this to stop the eventloop."""
        self.runme = False

    def _run(self):
        """The actual eventloop"""
        self.runme = True
I think my issue lies here...
my controller:
"""I want to use this to control my model, that in turn controls the module"""
def start_module():
start=do_it().on_pushButton_start_clicked()
return 'Ran start function'
def stop_module():
stop=do_it().on_pushButton_stop_clicked()
return 'Ran stop function'
Regarding the thread module, this is what the docs say:
This module provides low-level primitives for working with multiple
threads […] The threading module provides an easier to use and
higher-level threading API built on top of this module.
Furthermore, there is no way to stop or kill a thread once it's started. This SO question goes further into detail and shows how to use a threading.Event to implement a stoppable thread.
The only exception is a daemon thread, which will be killed automatically when your main program exits.
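For illustration, here is a minimal stoppable version of the Controller above, built on threading.Thread with a threading.Event as the stop flag (a sketch along the lines of the linked approach, not code from the question):

import threading
import time

class Controller(threading.Thread):
    def __init__(self):
        super(Controller, self).__init__()
        self._stop_event = threading.Event()

    def stop(self):
        """Signal the event loop to finish."""
        self._stop_event.set()

    def run(self):
        """The actual event loop; checks the flag on every iteration."""
        while not self._stop_event.is_set():
            # ... do one unit of work ...
            time.sleep(0.1)

contr = Controller()
contr.start()
# later, e.g. from the stop button handler:
contr.stop()
contr.join()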

Listening for a threading Event in python

First-time SO user here, so please excuse any etiquette errors. I'm trying to implement a multithreaded program in Python and am having trouble. This is no doubt due to a lack of understanding of how threading is implemented, but hopefully you can help me figure it out.
I have a basic program that continually listens for messages on a serial port and can then print/save/process/etc them, which works fine. It basically looks like this:
import serial

def main():
    usb = serial.Serial('/dev/cu.usbserial-A603UBRB', 57600)  # open serial w/ baud rate
    while True:
        line = usb.readline()
        print(line)
However, what I want to do is continually listen for the messages on the serial port but not necessarily do anything with them. This should run in the background; meanwhile, in the foreground, I want some kind of interface where the user can command the program to read/use/save these data for a while and then stop again.
So I created the following code:
import time
import serial
import threading

# this runs in the background constantly, reading the serial bus input
class serial_listener(threading.Thread):
    def __init__(self, line, event):
        super(serial_listener, self).__init__()
        self.event = threading.Event()
        self.line = ''
        self.usb = serial.Serial('/dev/cu.usbserial-A603UBRB', 57600)

    def run(self):
        while True:
            self.line = self.usb.readline()
            self.event.set()
            self.event.clear()
            time.sleep(0.01)

# this lets the user command the software to record several values from serial
class record_data(threading.Thread):
    def __init__(self):
        super(record_data, self).__init__()
        self.line = ''
        self.event = threading.Event()
        self.ser = serial_listener(self.line, self.event)
        self.ser.start()  # run thread

    def run(self):
        while(True):
            user_input = raw_input('Record data: ')
            if user_input == 'r':
                event_counter = 0
                while(event_counter < 16):
                    self.event.wait()
                    print(self.line)
                    event_counter += 1

# this is going to be the mother function
def main():
    dat = record_data()
    dat.start()

# this makes the code behave like C code.
if __name__ == '__main__':
    main()
It compiles and runs, but when I order the program to record by typing r into the CLI, nothing happens. It doesn't seem to be receiving any events.
Any clues how to make this work? Workarounds are also fine; the only constraint is that I can't constantly open and close the serial interface: it has to remain open the whole time, or the device stops working until it is unplugged and replugged.
Instead of using multiple threads, I would suggest using multiple processes. When you use threads, you have to think about the global interpreter lock. So you either listen to events or do something in your main thread. Both at the same time will not work.
When using multiple processes, I would then use a queue to forward the events from your watchdog that you would like to handle. Or you could code your own event handler. Here you can find an example of multiprocess event handlers.
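A minimal sketch of that suggestion (the serial settings are taken from the question; everything else is an assumption): a child process keeps the port open the whole time and forwards every line through a multiprocessing.Queue, which the main process only reads while the user is recording.

import multiprocessing
import serial

def serial_listener(line_queue):
    # Child process: keeps the serial port open and forwards every line.
    usb = serial.Serial('/dev/cu.usbserial-A603UBRB', 57600)
    while True:
        line_queue.put(usb.readline())

if __name__ == '__main__':
    line_queue = multiprocessing.Queue()
    listener = multiprocessing.Process(target=serial_listener, args=(line_queue,))
    listener.daemon = True  # dies together with the main process
    listener.start()

    while True:
        user_input = raw_input('Record data: ')  # input() on Python 3
        if user_input == 'r':
            for _ in range(16):          # record 16 lines, as in the question
                print(line_queue.get())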

Running a class method multiple times in parallel in Python

I have implemented a Python socket server. It sends image data from multiple cameras to a client. My request handler class looks like:
class RequestHandler(SocketServer.BaseRequestHandler):
    def handle(self):
        while True:
            data = self.request.recv(1024)
            if data.endswith('0000000050'):  # client requests data
                for camera_id, camera_path in _video_devices.iteritems():
                    message = self.create_image_transfer_message(camera_id, camera_path)
                    self.request.sendto(message, self.client_address)

    def create_image_transfer_message(self, camera_id, camera_path):
        # somecode ...
I am forced to stick with the socket server because of the client. It works, but the problem is that it works sequentially, so there are large delays between the camera images being uploaded. I would like to create the transfer messages in parallel, with a small delay between the calls.
I tried to use the Pool class from multiprocessing:
import multiprocessing

class RequestHandler(SocketServer.BaseRequestHandler):
    def handle(self):
        ...
        pool = multiprocessing.Pool(processes=4)
        messages = [pool.apply(self.create_image_transfer_message, args=(camera_id, camera_path))
                    for camera_id, camera_path in _video_devices.iteritems()]
But this throws:
PicklingError: Can't pickle <type 'function'>: attribute lookup __builtin__.function failed
I want to know if there is another way to create those transfer messages in parallel, with a defined delay between the calls.
EDIT:
I create the response messages using data from multiple cameras. The problem is that if I run the image-grabbing routines too close to each other, I get image artifacts, because the USB bus is overloaded. I figured out that calling the image grabbing sequentially with a 0.2 s delay solves the problem. The cameras are not sending data the whole time the image-grabbing function is running, so the delayed parallel calls result in good images with only a small delay between them.
I think you're on the right path already; no need to throw away your work.
Here's an answer showing how to use a class method with multiprocessing, which I found via Google by searching for "multiprocessing class method":
from multiprocessing import Pool
import time

pool = Pool(processes=2)

def unwrap_self_f(arg, **kwarg):
    return RequestHandler.create_image_transfer_message(*arg, **kwarg)

class RequestHandler(SocketServer.BaseRequestHandler):

    @classmethod
    def create_image_transfer_message(cls, camera_id, camera_path):
        # your logic goes here
        pass

    def handle(self):
        while True:
            data = self.request.recv(1024)
            if not data.endswith('0000000050'):  # client requests data
                continue
            pool.map(unwrap_self_f,
                     ((camera_id, camera_path)
                      for camera_id, camera_path in _video_devices.iteritems()))
Note: if you want to return values from the workers, you'll need to explore using a shared resource; see this answer here: How can I recover the return value of a function passed to multiprocessing.Process?
This code did the trick for me:
class RequestHandler(SocketServer.BaseRequestHandler):
    def handle(self):
        while True:
            data = self.request.recv(1024)
            if data.endswith('0000000050'):  # client requests data
                process_manager = multiprocessing.Manager()
                messaging_queue = process_manager.Queue()
                jobs = []

                for camera_id, camera_path in _video_devices.iteritems():
                    p = multiprocessing.Process(target=self.create_image_transfer_message,
                                                args=(camera_id, camera_path, messaging_queue))
                    jobs.append(p)
                    p.start()
                    time.sleep(0.3)

                # wait for all processes to finish
                for p in jobs:
                    p.join()

                while not messaging_queue.empty():
                    self.request.sendto(messaging_queue.get(), self.client_address)
