Elegant solution for IPC in Python with multiprocessing

I have two independent processes on the same machine in need of IPC. As of now, I have this working solution:
server.py
#!/usr/bin/python3
from multiprocessing.managers import BaseManager
from multiprocessing import Process, Queue

def do_whatever():
    print('function do whatever, triggered by xyz')
    # do something

def start_queue_server(q):
    class QueueManager(BaseManager): pass
    QueueManager.register('get_queue', callable=lambda: q)
    m = QueueManager(address=('', 55555), authkey=b'tuktuktuk')
    s = m.get_server()
    s.serve_forever()

def main():
    queue = Queue()
    proc = Process(target=start_queue_server, args=(queue,))
    proc.start()
    while True:
        command = queue.get()
        print('command from queue:', command)
        if command == 'xyz':
            do_whatever()
        # many more if, elif, else statements

if __name__ == "__main__":
    main()
client.py
#!/usr/bin/python3
from multiprocessing.managers import BaseManager

def communicator(command):
    class QueueManager(BaseManager): pass
    QueueManager.register('get_queue')
    m = QueueManager(address=('', 55555), authkey=b'tuktuktuk')
    m.connect()
    queue = m.get_queue()
    queue.put(command)

def main():
    command = 'xyz'
    communicator(command)

if __name__ == "__main__":
    main()
Is there a more elegant way to call 'do_whatever' than parsing the commands passed on by the queue and then calling the target function?
Can I somehow pass on a reference to 'do_whatever' and call it directly from the client?
How is an answer from the server, e.g. True or False, communicated to the client? I tried passing a shared variable instead of a queue object but failed. Do I need to open another connection using a second socket to pass the answer?
I read the Python documentation but couldn't find more options for unrelated processes. Input would be welcome!
Cheers singultus

Finally, I settled on an additional Listener:
server.py
#!/usr/bin/python3
from multiprocessing.managers import BaseManager
from multiprocessing import Process, Queue
from multiprocessing.connection import Client

def do_whatever():
    print('function do whatever, triggered by xyz')
    # do something

def start_queue_server(q):
    class QueueManager(BaseManager): pass
    QueueManager.register('get_queue', callable=lambda: q)
    m = QueueManager(address=('', 55555), authkey=b'tuktuktuk')
    s = m.get_server()
    s.serve_forever()

def talkback(msg, port):
    conn = Client(address=('', port), authkey=b'tuktuktuk')
    conn.send(msg)
    conn.close()

def main():
    queue = Queue()
    proc = Process(target=start_queue_server, args=(queue,))
    proc.start()
    while True:
        command = queue.get()
        print('command from queue:', command)
        if command[0] == 'xyz':
            do_whatever()
            talkback('aaa', command[1])
        # many more if, elif, else statements

if __name__ == "__main__":
    main()
client.py
#!/usr/bin/python3
from multiprocessing.managers import BaseManager
from multiprocessing.connection import Listener

def communicator(command, talkback=False):
    if talkback:
        # port 0 lets the OS pick a free port; the authkey must match talkback() on the server
        listener = Listener(address=('', 0), authkey=b'tuktuktuk')
        return_port = listener.address[1]
        command = command + (return_port,)
    class QueueManager(BaseManager): pass
    QueueManager.register('get_queue')
    m = QueueManager(address=('', 55555), authkey=b'tuktuktuk')
    m.connect()
    queue = m.get_queue()
    queue.put(command)
    if talkback:
        conn = listener.accept()
        server_return = conn.recv()
        conn.close()
        listener.close()
        return server_return

def main():
    command = ('xyz',)  # note the trailing comma: the command must be a tuple
    communicator(command, True)

if __name__ == "__main__":
    main()
The client opens an available port and starts listening on it. It then sends the command to the server together with the aforementioned port number. The server executes the command, then uses the port number to report back to the client. After receiving the answer, the client closes the port.
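For reference, the first two questions (calling do_whatever directly and getting its return value back) can also be covered by BaseManager alone: instead of a queue, register an object whose methods the client calls through a proxy, because return values of proxied method calls are sent back to the caller automatically. A minimal, untested sketch of that approach (Worker, get_worker, and RPCManager are illustrative names, not part of the code above):
rpc_server.py
#!/usr/bin/python3
from multiprocessing.managers import BaseManager

class Worker:
    def do_whatever(self):
        print('function do whatever, triggered by client')
        return True  # sent back to the caller through the proxy

worker = Worker()

class RPCManager(BaseManager): pass
RPCManager.register('get_worker', callable=lambda: worker)

if __name__ == "__main__":
    m = RPCManager(address=('', 55555), authkey=b'tuktuktuk')
    m.get_server().serve_forever()
rpc_client.py
#!/usr/bin/python3
from multiprocessing.managers import BaseManager

class RPCManager(BaseManager): pass
RPCManager.register('get_worker')

if __name__ == "__main__":
    m = RPCManager(address=('', 55555), authkey=b'tuktuktuk')
    m.connect()
    worker = m.get_worker()      # proxy for the Worker living in the server process
    print(worker.do_whatever())  # executes on the server, prints True here
This avoids the second Listener at the cost of routing every call through the manager.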

Related

Stop child thread from main thread in python

I'm starting a webserver in a new thread. After all tests are run, I want to kill the child thread with the running server inside. The only solution I found is interrupting the entire process, with all its threads, by calling "os.system('kill %d' % os.getpid())" (see the code below). I'm not sure it's the smartest solution, and I'm not sure all threads will actually be killed. Could I send some kind of "keyboard interrupt" signal to stop the thread before exiting the main thread?
import http.server
import os
import sys
import unittest
import time
import requests
import threading
from addresses import handle_get_addresses, load_addresses
from webserver import HTTPHandler

def run_in_thread(fn):
    def run(*k, **kw):
        t = threading.Thread(target=fn, args=k, kwargs=kw)
        t.start()
        return t
    return run

@run_in_thread
def start_web_server():
    web_host = 'localhost'
    print("starting server...")
    web_port = 8808
    httpd = http.server.HTTPServer((web_host, web_port), HTTPHandler)
    try:
        httpd.serve_forever()
    except KeyboardInterrupt:
        pass

class TestAddressesApi(unittest.TestCase):
    WEB_SERVER_THREAD: threading.Thread = None

    @classmethod
    def setUpClass(cls):
        cls.WEB_SERVER_THREAD = start_web_server()

    @classmethod
    def tearDownClass(cls):
        print("shutting down the webserver...")
        # here something like cls.WEB_SERVER_THREAD.terminate()
        # instead of the line below
        os.system('kill %d' % os.getpid())

    def test_get_all_addresses(self):
        pass

    def test_1(self):
        pass

if __name__ == "__main__":
    unittest.main()
Maybe threading.Event is what you want.
Just found a solution: daemon threads stop executing when the main thread stops.
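For the record, http.server.HTTPServer inherits shutdown() from socketserver.BaseServer: it makes serve_forever() return, so the server thread can be stopped cleanly without killing the process. A minimal sketch, with BaseHTTPRequestHandler standing in for the HTTPHandler of the question:
import http.server
import threading

httpd = http.server.HTTPServer(('localhost', 8808), http.server.BaseHTTPRequestHandler)
t = threading.Thread(target=httpd.serve_forever)
t.start()
# ... run the tests ...
httpd.shutdown()      # makes serve_forever() return
httpd.server_close()  # releases the listening socket
t.join()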

Pipe blocking the function

Recently I have been working with Pipe and a Raspberry Pi. I am trying to send a signal to my function to kill it, however pipe.recv() is blocking the function. The signal is sent, but the while loop doesn't get executed.
from multiprocessing import Process, Pipe
import time
import os
import signal

def start(pipe):
    pipe1 = pipe[1].recv()
    while True:
        print('hello world')
        os.kill(pipe1, signal.SIGTERM)

if __name__ == "__main__":
    conn1 = Pipe()
    a = Process(target=start, args=(conn1,))
    a.start()
    time.sleep(5)
    print("TIMES UP")
    conn1[1].send(a.pid)
You are sending and attempting to retrieve the item on the same end of the pipe. Try this instead, where pipe[0] and pipe[1] are named parent and child for readability:
from multiprocessing import Process, Pipe
import time
import os
import signal

def start(child):
    pipe1 = child.recv()
    while True:
        print('hello world')
        os.kill(pipe1, signal.SIGTERM)

if __name__ == "__main__":
    parent, child = Pipe()
    a = Process(target=start, args=(child,))
    a.start()
    time.sleep(5)
    print("TIMES UP")
    parent.send(a.pid)
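As an aside: if the intent is for the loop body to keep running until a message arrives, Connection.poll() avoids blocking on recv(). A sketch of start under that assumption:
def start(child):
    while True:
        print('hello world')
        if child.poll(1):       # True if a message arrives within 1 second
            pid = child.recv()  # recv() no longer blocks the loop
            os.kill(pid, signal.SIGTERM)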

Getting different outputs on Gitbash and VScode Terminal

Below is the smallest reproducible example I could come up with. In the main block, an object of the ServerUDP class is created first, and a function run is called in a new thread, which in turn creates another thread to call another function, RecvData. The problem is that the main thread does not print the port value until the program is stopped with Ctrl+C. I cannot understand why this is happening.
import socket, simpleaudio as sa
import threading, queue
from threading import Thread
import time

class ServerUDP:
    def __init__(self):
        while 1:
            try:
                self.s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
                self.s.bind(('127.0.0.1', 0))
                self.clients = set()
                self.recvPackets = queue.Queue()
                break
            except:
                print("Couldn't bind to that port")

    def get_ports(self):
        return self.s.getsockname()

    def RecvData(self, name, delay, run_event):
        while run_event.is_set():
            time.sleep(delay)

    def run(self, name, delay, run_event):
        # separate thread for listening to the clients
        threading.Thread(target=self.RecvData, args=("bob", d1, run_event)).start()
        while run_event.is_set():
            time.sleep(delay)
        self.s.close()

    def close(self):
        self.s.close()

if __name__ == "__main__":
    roomserver = ServerUDP()
    run_event = threading.Event()
    run_event.set()
    d1 = 1
    t = Thread(target=roomserver.run, args=("bob", d1, run_event))
    t.start()
    port = roomserver.get_ports()[1]
    print("port is", port)
    try:
        while 1:
            time.sleep(.1)
    except KeyboardInterrupt:
        print("attempting to close threads. Max wait =", d1)
        run_event.clear()
        t.join()
        print("threads successfully closed")
UPD: I'm on Windows and was using the VS Code editor for coding and a Git Bash terminal to run this program. I just ran it in the VS Code terminal and it magically printed the port number. Is this a known issue with the Git Bash terminal?
Adding VS Code and Git Bash tags in the hope of learning something about it.
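This looks like output buffering rather than a Git Bash bug: Git Bash runs programs with stdout connected to a pipe instead of a Windows console, so Python block-buffers its output, and the print only appears when the buffer is flushed, e.g. at exit. Forcing a flush, or starting the script with python -u, should make the port appear immediately:
print("port is", port, flush=True)  # force the line out even when stdout is a pipe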

Terminal doesn't show input after running threading nmap scan

I have written an Nmap TCP port scanner in Python, and everything works just fine except that I'm no longer able to see what I'm typing in the terminal.
First things first.
The code:
import argparse, nmap, sys
from threading import *

def initParser():
    parser = argparse.ArgumentParser()
    parser.add_argument("tgtHost", help="Specify target host")
    parser.add_argument("tgtPort", help="Specify target port")
    args = parser.parse_args()
    return (args.tgtHost, args.tgtPort.split(","))

def nmapScan(tgtHost, tgtPorts):
    nm = nmap.PortScanner()
    lock = Semaphore(value=1)
    for tgtPort in tgtPorts:
        t = Thread(target=nmapScanThread, args=(tgtHost, tgtPort, lock, nm))
        t.start()

def nmapScanThread(tgtHost, tgtPort, lock, nm):
    nm.scan(tgtHost, tgtPort)
    state = nm[tgtHost]['tcp'][int(tgtPort)]['state']
    lock.acquire()
    print("Port {} is {}".format(tgtPort, state))
    lock.release()

if __name__ == '__main__':
    (tgtHost, tgtPorts) = initParser()
    nmapScan(tgtHost, tgtPorts)
    sys.exit(0)
So, after I have run the script, I don't see what I'm typing in the console anymore, but I can still execute my invisible commands. As you can see, I want to start a thread for each port, just because I am learning about threading right now.
My assumption is that not all threads are terminated properly, because everything works just fine after I add "t.join()" to the code.
Unfortunately, I couldn't manage to find anything about this issue.
Just like this:
import argparse, nmap, sys
from threading import *

def initParser():
    parser = argparse.ArgumentParser()
    parser.add_argument("tgtHost", help="Specify target host")
    parser.add_argument("tgtPort", help="Specify target port")
    args = parser.parse_args()
    return (args.tgtHost, args.tgtPort.split(","))

def nmapScan(tgtHost, tgtPorts):
    nm = nmap.PortScanner()
    lock = Semaphore(value=1)
    for tgtPort in tgtPorts:
        t = Thread(target=nmapScanThread, args=(tgtHost, tgtPort, lock, nm))
        t.start()
        t.join()

def nmapScanThread(tgtHost, tgtPort, lock, nm):
    nm.scan(tgtHost, tgtPort)
    state = nm[tgtHost]['tcp'][int(tgtPort)]['state']
    lock.acquire()
    print("Port {} is {}".format(tgtPort, state))
    lock.release()

if __name__ == '__main__':
    (tgtHost, tgtPorts) = initParser()
    nmapScan(tgtHost, tgtPorts)
    sys.exit(0)
Is this the proper way to handle the problem, or did I mess things up a bit?
Additionally:
I can't see how join() is useful in this example, because I don't think there is any major difference from the same script without threading.
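One note on the join(): placing it inside the loop starts one thread and immediately waits for it, so the scans run strictly one after another. If the goal is to wait for all scans without giving up the parallelism, the usual pattern is to start every thread first and join them all afterwards; a sketch of nmapScan under that assumption:
def nmapScan(tgtHost, tgtPorts):
    nm = nmap.PortScanner()
    lock = Semaphore(value=1)
    threads = []
    for tgtPort in tgtPorts:
        t = Thread(target=nmapScanThread, args=(tgtHost, tgtPort, lock, nm))
        t.start()
        threads.append(t)
    for t in threads:  # wait for every scan to finish before returning
        t.join()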

python master/child looping unintentionally

Problem: I expect the child to time out and be done, but instead it times out and begins to run again.
Can anyone tell me why this program runs forever? I expect it to run one time and exit...
Here is a working program. The master threads a function to spawn a child. It works great, except that it ends up looping.
Here is the master:
# master.py
import multiprocessing, subprocess, sys, time

def f():
    p = subprocess.Popen(["C:\\Python32\\python.exe", "child.py"])
    # wait until child ends and check exit code
    while p.poll() == None:
        time.sleep(2)
    if p.poll() != 0:
        print("something went wrong with child.py")

# multithread a function process to launch and monitor a child
p1 = multiprocessing.Process(target = f())
p1.start()
and the child:
# child.py
import socket, sys

def main(args):
    try:
        sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        sock.settimeout(10)
        sock.bind(('', 54324))
        data, addr = sock.recvfrom(1024)  # buffer size is 1024 bytes
        print(data)
        sock.close()
        return 0
    except KeyboardInterrupt as e:
        try:
            sock.close()
            return 0
        except:
            return 0

if __name__ == "__main__":
    sys.exit(main(sys.argv))
The problem is that your master.py doesn't have an if __name__ == '__main__' guard. On Windows, multiprocessing has to be able to re-import the main module in the child process, and if you don't use this guard, the child re-executes the multiprocessing.Process call (resulting in an accidental fork bomb). Note also that target = f() calls f immediately and passes its return value (None) as the target; to run f in the new process, pass the function itself: target=f.
To fix, simply put all of the top-level commands in master.py inside the guard:
if __name__ == '__main__':
    # multithread a function process to launch and monitor a child
    p1 = multiprocessing.Process(target=f)  # target=f, not target=f(), so f runs in the new process
    p1.start()
