Problem: I expect the child to time out and be done, but instead it times out and begins to run again.
Can anyone tell me why this program runs forever? I expect it to run one time and exit...
Here is the program: the master runs a function in a separate process that spawns and monitors a child. It works, except that it ends up looping.
Here is the master:
# master.py
import multiprocessing, subprocess, sys, time
def f():
    p = subprocess.Popen(["C:\\Python32\\python.exe", "child.py"])
    # wait until child ends and check exit code
    while p.poll() == None:
        time.sleep(2)
    if p.poll() != 0:
        print("something went wrong with child.py")
# multithread a function process to launch and monitor a child
p1 = multiprocessing.Process(target = f())
p1.start()
and the child:
# child.py
import socket, sys
def main(args):
    try:
        sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        sock.settimeout(10)
        sock.bind(('', 54324))
        data, addr = sock.recvfrom(1024)  # buffer size is 1024 bytes
        print(data)
        sock.close()
        return 0
    except KeyboardInterrupt as e:
        try:
            sock.close()
            return 0
        except:
            return 0

if __name__ == "__main__":
    sys.exit(main(sys.argv))
The problem is that your master.py doesn't have an if __name__ == '__main__' guard. On Windows, multiprocessing has to be able to re-import the main module in the child process, and if you don't use this guard, the child re-executes the multiprocessing.Process creation, resulting in an accidental fork bomb.
To fix it, simply put all of the module-level commands in master.py inside the guard:
if __name__ == '__main__':
    # multithread a function process to launch and monitor a child
    p1 = multiprocessing.Process(target = f())
    p1.start()
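Note, too, that target = f() calls f immediately in the parent and hands its return value (None) to Process; passing the function object itself is what actually runs f in the new process. A minimal guarded master.py along those lines might look like this (a sketch, not the asker's exact code):
# master.py (sketch)
import multiprocessing, subprocess, time

def f():
    p = subprocess.Popen(["C:\\Python32\\python.exe", "child.py"])
    # wait until the child ends, then check its exit code
    while p.poll() is None:
        time.sleep(2)
    if p.poll() != 0:
        print("something went wrong with child.py")

if __name__ == '__main__':
    # launch and monitor the child from a separate process
    p1 = multiprocessing.Process(target=f)  # pass f itself, don't call it
    p1.start()
    p1.join()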
Recently I have been working with Pipe and a Raspberry Pi. I am trying to send a signal to my function to kill it, but the pipe.recv call is blocking the function. The signal is sent, yet the while loop doesn't get executed.
from multiprocessing import Process, Pipe
import time
import os
import signal
def start(pipe):
    pipe1 = pipe[1].recv()
    while True:
        print('hello world')
        os.kill(pipe1,signal.SIGTERM)

if __name__ == "__main__":
    conn1 = Pipe()
    a = Process(target = start,args = (conn1,))
    a.start()
    time.sleep(5)
    print("TIMES UP")
    conn1[1].send(a.pid)
You are sending on, and attempting to receive from, the same end of the pipe. Try this instead, where pipe[0] and pipe[1] are unpacked into parent and child for readability:
from multiprocessing import Process, Pipe
import time
import os
import signal
def start(child):
    pipe1 = child.recv()
    while True:
        print('hello world')
        os.kill(pipe1,signal.SIGTERM)

if __name__ == "__main__":
    parent, child = Pipe()
    a = Process(target = start,args = (child,))
    a.start()
    time.sleep(5)
    print("TIMES UP")
    parent.send(a.pid)
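As a quick illustration of the two ends, separate from the question's code: each call to Pipe() returns two Connection objects, and whatever is sent on one end is received on the other, never on the same end:
from multiprocessing import Pipe

parent, child = Pipe()   # two Connection objects, one per end
parent.send('ping')      # written via the parent end...
print(child.recv())      # ...comes out of the child end: prints 'ping'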
I can't get this code to run an input() call whilst another block of code is running. I want to know if there are any workarounds. My code is as follows:
import multiprocessing
def test1():
    input('hello')

def test2():
    a = True
    while a == True:
        b = 5

if __name__ == "__main__":
    p1 = multiprocessing.Process(target=test1)
    p2 = multiprocessing.Process(target=test2)
    p1.start()
    p2.start()
    p1.join()
    p2.join()
When the code is run, I get an EOFError, which apparently happens when the input function is interrupted.
I would have the main process create a daemon thread responsible for doing the input, in conjunction with the greatly under-utilized full-duplex Pipe, which provides two Connection instances for two-way communication. For simplicity, the following demo just creates one Process instance that loops, issuing input requests and echoing the responses until the user enters 'quit':
import multiprocessing
import threading
def test1(conn):
    while True:
        conn.send('Please enter a value: ')
        s = conn.recv()
        if s == 'quit':
            break
        print(f'You entered: "{s}"')

def inputter(conn):
    while True:
        # The contents of the request is the prompt to be used:
        prompt = conn.recv()
        conn.send(input(prompt))

if __name__ == "__main__":
    conn1, conn2 = multiprocessing.Pipe(duplex=True)
    t = threading.Thread(target=inputter, args=(conn1,), daemon=True)
    p = multiprocessing.Process(target=test1, args=(conn2,))
    t.start()
    p.start()
    p.join()
That's not all of your code, because it doesn't show the multiprocessing setup. However, the issue is that only the main process can interact with the console; the other processes do not have a usable stdin. You can use a Queue to communicate with the main process if you need to, but in general you want the secondary processes to be largely standalone.
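A rough sketch of that Queue approach (names here are hypothetical, not from the question): keep input() in the main process and hand each line to the worker over a multiprocessing.Queue:
import multiprocessing

def worker(q):
    # Runs in a child process with no usable stdin; it only reads from the queue.
    while True:
        line = q.get()
        if line == 'quit':
            break
        print(f'worker got: "{line}"')

if __name__ == "__main__":
    q = multiprocessing.Queue()
    p = multiprocessing.Process(target=worker, args=(q,))
    p.start()
    # Only the main process talks to the console.
    while True:
        s = input('Enter a value (or "quit"): ')
        q.put(s)
        if s == 'quit':
            break
    p.join()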
I'm building a GUI to control a robot in Python.
The GUI has a few buttons, each one executes a function that loops indefinitely.
It's like a roomba, where the "clean kitchen" function makes it continually clean until interrupted.
In order to keep the GUI interactive, I'm executing the function in a separate process using multiprocessing.
I've got a stop function that returns the robot to home and kills the child process (otherwise the child process would move on to its next line, and the robot would start turning around even though it has left the kitchen and gone home). The stop function runs in the main/parent process, as it doesn't loop.
I've got a GUI button which calls Stop, and I'll also call it whenever I start a new process.
My processes are started like this:
from file1 import kitchen
from file2 import bedroom
if event == "Clean kitchen":
    stoptour()
    p = multiprocessing.Process(target=kitchen,args=(s,),daemon=True)
    p.start()

if event == "Clean bedroom":
    stoptour()
    p = multiprocessing.Process(target=bedroom,args=(s,),daemon=True)
    p.start()
The argument being passed is just the socket that the script is using to connect to the robot.
My stop function is:
def stoptour():
    p.terminate()
    p.kill()
    s.send(bytes.fromhex(XXXX))  # command to send the stop signal to the robot
    p.join()
This all runs without error and the robot stops, but then starts up again (because the child process is still running). I confirmed this by adding to the stop function:
if p.is_alive:
    print('Error, still not dead')
else:
    print('Success, its dead')
Every time it prints "Error, still not dead"...
Why are p.kill and p.terminate not working? Is something spawning more child processes?
Is there a way to write my stoptour() function so that it kills any and all child processes completely indiscriminately?
Edited to show the code:
import socket
import PySimpleGUI as sg
import multiprocessing
import time
from file1 import room1
from file2 import room2
from file3 import room3
#Define the GUI window
layout = [[sg.Button("Room 1")],
[sg.Text("Start cleaning of room1")],
[sg.Button("Room 2")],
[sg.Text("Start cleaning of room2")],
[sg.Button("Room 3")],
[sg.Text("Start cleaning room3")],
[sg.Button("Stop")],
[sg.Text("Stop what you're doing")]]
# Create the window
window = sg.Window("Robot Utility", layout)
#Setup TCP Connection & Connect
TCP_IP = '192.168.1.100' #IP
TCP_port = 2222 #Port
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM) #Setup TCP connection
s.connect((TCP_IP, TCP_port)) #Connect
#Define p so I can define stop function
if __name__ == '__main__':
    p = multiprocessing.Process(target=room1, args=(s,), daemon=True)
    p.start()
    p.terminate()
    p.kill()
    p.join()
#Define stop function
def stoptour():
    s.send(bytes.fromhex('longhexkey'))
    p.terminate()
    p.kill()
    p.join()
    s.send(bytes.fromhex('longhexkey'))  # No harm stopping twice...
    if p.is_alive:
        print('Error, still not dead')
    else:
        print('Success, its dead')
stoptour()
#GUI event loop
while True:
    event, values = window.read()
    if event == "Room 1":
        if __name__ == '__main__':
            stoptour()
            p = multiprocessing.Process(target=room1, args=(s,), daemon=True)
            p.start()
    if event == "Room 2":
        if __name__ == '__main__':
            stoptour()
            p = multiprocessing.Process(target=room2, args=(s,), daemon=True)
            p.start()
    if event == "Room 3":
        if __name__ == '__main__':
            stoptour()
            p = multiprocessing.Process(target=room3, args=(s,), daemon=True)
            p.start()
    if event == "Stop":
        stoptour()
        sg.popup("Tour stopped")
    if event == sg.WIN_CLOSED:
        stoptour()
        s.close()
        print('Closing Program')
        break
window.close()
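As a sketch of what an indiscriminate stop function could look like (assuming it runs in the main process; this is not code from the thread): terminate everything reported by multiprocessing.active_children(), and note that is_alive is a method, so the check needs to be p.is_alive() rather than p.is_alive:
import multiprocessing

def stoptour():
    s.send(bytes.fromhex('longhexkey'))   # stop command to the robot, placeholder key as above
    for child in multiprocessing.active_children():
        child.terminate()                 # stop every live child, whichever room it is cleaning
        child.join()                      # wait until it has actually exited
        if child.is_alive():              # note the call: is_alive(), not is_alive
            print('Error, still not dead')
        else:
            print('Success, it is dead')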
I am building a watchdog timer that runs another Python program and, if it fails to find a check-in from any of the threads, shuts down the whole program. This is so that it will eventually be able to take control of the communication ports it needs. The code for the timer is as follows:
from multiprocessing import Process, Queue
from time import sleep
from copy import deepcopy
PATH_TO_FILE = r'.\test_program.py'
WATCHDOG_TIMEOUT = 2
class Watchdog:
    def __init__(self, filepath, timeout):
        self.filepath = filepath
        self.timeout = timeout
        self.threadIdQ = Queue()
        self.knownThreads = {}

    def start(self):
        threadIdQ = self.threadIdQ
        process = Process(target = self._executeFile)
        process.start()
        try:
            while True:
                unaccountedThreads = deepcopy(self.knownThreads)
                # Empty queue since last wake. Add new thread IDs to knownThreads,
                # and account for all known thread IDs in queue
                while not threadIdQ.empty():
                    threadId = threadIdQ.get()
                    if threadId in self.knownThreads:
                        unaccountedThreads.pop(threadId, None)
                    else:
                        print('New threadId < {} > discovered'.format(threadId))
                        self.knownThreads[threadId] = False
                # If there is a known thread that is unaccounted for, then it has
                # either hung or crashed. Shut everything down.
                if len(unaccountedThreads) > 0:
                    print('The following threads are unaccounted for:\n')
                    for threadId in unaccountedThreads:
                        print(threadId)
                    print('\nShutting down!!!')
                    break
                else:
                    print('No unaccounted threads...')
                sleep(self.timeout)
        # Account for any exceptions thrown in the watchdog timer itself
        except:
            process.terminate()
            raise
        process.terminate()

    def _executeFile(self):
        with open(self.filepath, 'r') as f:
            exec(f.read(), {'wdQueue' : self.threadIdQ})

if __name__ == '__main__':
    wd = Watchdog(PATH_TO_FILE, WATCHDOG_TIMEOUT)
    wd.start()
I also have a small program to test the watchdog functionality
from time import sleep
from threading import Thread
from queue import SimpleQueue
Q_TO_Q_DELAY = 0.013
class QToQ:
    def __init__(self, processQueue, threadQueue):
        self.processQueue = processQueue
        self.threadQueue = threadQueue
        Thread(name='queueToQueue', target=self._run).start()

    def _run(self):
        pQ = self.processQueue
        tQ = self.threadQueue
        while True:
            while not tQ.empty():
                sleep(Q_TO_Q_DELAY)
                pQ.put(tQ.get())

def fastThread(q):
    while True:
        print('Fast thread, checking in!')
        q.put('fastID')
        sleep(0.5)

def slowThread(q):
    while True:
        print('Slow thread, checking in...')
        q.put('slowID')
        sleep(1.5)

def hangThread(q):
    print('Hanging thread, checked in')
    q.put('hangID')
    while True:
        pass
print('Hello! I am a program that spawns threads!\n\n')
threadQ = SimpleQueue()
Thread(name='fastThread', target=fastThread, args=(threadQ,)).start()
Thread(name='slowThread', target=slowThread, args=(threadQ,)).start()
Thread(name='hangThread', target=hangThread, args=(threadQ,)).start()
QToQ(wdQueue, threadQ)
As you can see, I need to have the threads put into a queue.Queue, while a separate object slowly feeds the output of the queue.Queue into the multiprocessing queue. If instead I have the threads put directly into the multiprocessing queue, or do not have the QToQ object sleep in between puts, the multiprocessing queue will lock up, and will appear to always be empty on the watchdog side.
Now, as the multiprocessing queue is supposed to be thread and process safe, I can only assume I have messed something up in the implementation. My solution seems to work, but also feels hacky enough that I feel I should fix it.
I am using Python 3.7.2, if it matters.
I suspect that test_program.py exits.
I changed the last few lines to this:
tq = threadQ
# tq = wdQueue # option to send messages direct to WD
t1 = Thread(name='fastThread', target=fastThread, args=(tq,))
t2 = Thread(name='slowThread', target=slowThread, args=(tq,))
t3 = Thread(name='hangThread', target=hangThread, args=(tq,))
t1.start()
t2.start()
t3.start()
QToQ(wdQueue, threadQ)
print('Joining with threads...')
t1.join()
t2.join()
t3.join()
print('test_program exit')
The calls to join() mean that the test program never exits by itself, since none of the threads ever exit.
So, as is, t3 hangs, the watchdog program detects the unaccounted-for thread, and stops the test program.
If t3 is removed from the above program, the other two threads are well behaved and the watchdog program allows the test program to continue indefinitely.
When you import and use a package, that package can start non-daemon threads. Until those threads finish, Python cannot exit properly (e.g. with sys.exit(0)). For example, imagine that the thread t below comes from some package. When an unhandled exception occurs in the main thread, you want to terminate, but the program won't exit immediately; it will wait 60 s until the thread terminates.
import sys, time, threading

def main():
    t = threading.Thread(target=time.sleep, args=(60,))
    t.start()
    a = 5 / 0

if __name__ == '__main__':
    try:
        main()
    except:
        sys.exit(1)
So I came up with two options: replace sys.exit(1) with os._exit(1), or enumerate all threads and make them daemon. Both of them seem to work, but which do you think is better? os._exit won't flush stdio buffers, but setting the daemon attribute on threads seems like a hack, and maybe it's not guaranteed to work all the time.
import sys, time, threading

def main():
    t = threading.Thread(target=time.sleep, args=(60,))
    t.start()
    a = 5 / 0

if __name__ == '__main__':
    try:
        main()
    except:
        for t in threading.enumerate():
            if not t.daemon and t.name != "MainThread":
                t._daemonic = True
        sys.exit(1)
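For comparison, a sketch of the os._exit variant: since os._exit skips the normal interpreter cleanup, flush the stdio buffers yourself before calling it:
import os, sys, time, threading

def main():
    t = threading.Thread(target=time.sleep, args=(60,))
    t.start()
    a = 5 / 0

if __name__ == '__main__':
    try:
        main()
    except:
        sys.stdout.flush()   # os._exit skips buffer flushing, so do it manually
        sys.stderr.flush()
        os._exit(1)          # exit immediately, without waiting for non-daemon threads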