I wrote a Twitter-like service, and I want to do some stress testing on it with Python.
I have a client program called "client".
I want to write a script that will start several processes of the "client" program, send a few messages, wait a few seconds and then exit.
What I wrote is
p = subprocess.Popen(['client','c1','localhost','4981'],stdin=subprocess.PIPE)
Now I can't call the communicate method, because it waits for EOF, but the process isn't over yet.
Calling stdin.flush doesn't seem to work either.
Any tips on how to do this?
(I don't have to do this in Python; if there's a way to do this with a bash script, that's also OK.)
You can use a bash loop to start several clients and put them into the background. If you need to communicate with one of them, bring it to the foreground using fg and then put it back into the background using bg.
Call p.stdin.close() to signal that there are no more messages:
#!/usr/bin/python
import time
from subprocess import Popen, PIPE
# start several processes of the "client" program
processes = [Popen(['client', 'c1', 'localhost', '4981'], stdin=PIPE)
             for _ in range(5)]

# send a few messages
message = 'hello'  # placeholder message to send to each client
for p in processes:
    print >>p.stdin, message
    p.stdin.close()

# wait a few seconds (remove finished processes while we wait)
for _ in range(3):
    for p in processes[:]:
        if p.poll() is not None:
            processes.remove(p)
    time.sleep(1)

# and will exit (kill unfinished subprocesses)
for p in processes:
    if p.poll() is None:
        p.kill()
    p.wait()
Related
Let's say I have a main.py script which does different stuff inside a while loop; among other things, it calls a binary via a terminal command with subprocess.Popen(). The binary is meant to control an external piece of hardware. At execution it first sets the configuration, and then it enters an infinite while loop waiting for the user to enter control commands or exit, both via the terminal.
The goal is to create the subprocess in main.py, wait a few seconds until the configuration is done (for example, 10 seconds), and then keep it alive, waiting until a command needs to be sent, while the main code continues with the 'other stuff'.
After the while loop in main.py is done, the subprocess should be killed by sending it 'exit'.
Can this be done without the parent and child processes interfering with each other?
The current approach:
main.py
import subprocess
from time import sleep

# initiate subprocess to set configuration
p = subprocess.Popen('path/to/binary',
                     stdout=subprocess.PIPE,
                     stderr=subprocess.PIPE,
                     stdin=subprocess.PIPE,
                     universal_newlines=True)

# sleep 10 seconds until setting is done
sleep(10)

while True:
    # do stuff

    # send command to subprocess
    (stdout, stderr) = p.communicate('cmd_to_subprocess\n')

    # continue with stuff

# while loop finished
# kill subprocess
p.communicate('exit\n')
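Note that communicate() waits for the child to exit and closes its pipes, so it can only be used once per process. A minimal sketch of keeping the pipe open instead, assuming the binary reads newline-terminated commands from stdin and quits when it reads 'exit' (both are assumptions about the binary, not verified):
import subprocess
from time import sleep

# assumption: the binary reads newline-terminated commands on stdin;
# stdout/stderr are left uncaptured so their pipe buffers cannot fill up
p = subprocess.Popen('path/to/binary',
                     stdin=subprocess.PIPE,
                     universal_newlines=True)

sleep(10)  # wait for the configuration phase to finish

for _ in range(3):  # stand-in for the real work loop
    # write one command and flush; the pipe stays open for the next one
    p.stdin.write('cmd_to_subprocess\n')
    p.stdin.flush()
    # ... continue with stuff ...

# tell the binary to quit, close its stdin, and wait for it to exit
p.stdin.write('exit\n')
p.stdin.close()
p.wait()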
Looking for some best practices for the following Python 3 task:
run process program1 in non-blocking mode
run process program2 in non-blocking mode
wait for both, or kill them after a timeout is exceeded
I reckon I know how to do it for one process:
import subprocess

p = subprocess.Popen('program1', shell=True)
try:
    stdout, stderr = p.communicate(timeout=200)
except subprocess.TimeoutExpired as t:
    print(t)
    p.kill()
    outs, errs = p.communicate()
However, I can't extend this approach to the two-process case, because p.communicate blocks until program1 ends or the timeout expires.
Also, I'd like to know immediately if one of the programs fails.
Python 3, OS: Linux.
UPD: I need to implement this cleanly, without any busy loops, threads, etc.
multiprocessing.Process has a join() method, similar to threads. So, in your case:
from multiprocessing import Process

proc_1 = Process(target=my_func_1)
proc_2 = Process(target=my_func_2)

for proc in [proc_1, proc_2]:
    proc.start()

# They are doing their thing, and when they've completed they will join the
# main process and your main program can resume
for proc in [proc_1, proc_2]:
    proc.join()
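If the two programs are external commands rather than Python functions (as in the question), a similar effect can be sketched with plain subprocess and a single shared deadline, so there is no polling loop. This is only a sketch; 'program1' and 'program2' stand for the real commands, and the 200-second timeout is taken from the question:
import subprocess
import time

# sketch only: 'program1'/'program2' are the commands from the question,
# 200 seconds is the overall deadline for both of them together
procs = [subprocess.Popen(cmd, shell=True) for cmd in ('program1', 'program2')]
deadline = time.monotonic() + 200

try:
    for p in procs:
        remaining = max(deadline - time.monotonic(), 0)
        p.wait(timeout=remaining)  # raises TimeoutExpired past the deadline
        if p.returncode != 0:
            raise RuntimeError('%r failed with exit code %d' % (p.args, p.returncode))
finally:
    for p in procs:
        if p.poll() is None:  # kill whatever is still running
            p.kill()
            p.wait()
This waits in list order, so a crash of program2 is only noticed once program1 has finished; reacting to either failure immediately would need something like os.wait() or a concurrent.futures pool.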
I am trying to play all the mp3 files in a folder in the background by creating a process using the multiprocessing library.
import os
import subprocess
from multiprocessing import Process

def music_player():
    music_folder = "/home/pi/Music/"
    files = os.listdir(music_folder)
    for mp3_file in files:
        print("playing " + mp3_file)
        p = subprocess.Popen(["omxplayer", "-o", "local", music_folder + mp3_file],
                             stdout=subprocess.PIPE,
                             stdin=subprocess.PIPE,
                             stderr=subprocess.PIPE)
        print(p)
        print(p.poll())
        print(p.pid)
        p.wait()

p = Process(target=music_player)
print(p, p.is_alive())
p.start()
print(p.pid)
print(p, p.is_alive())
command = raw_input()
if command == "stop":
    print("terminating...")
    p.terminate()
print(p, p.is_alive())
print(p.exitcode)
After entering the "stop" command, the code exits, but the music keeps playing, and when I run ps I see two omxplayer processes, which I then have to kill manually with kill <pid> to make the music stop.
I previously tried using the subprocess library alone and killing the process with kill() and terminate(), but the same issue occurred.
First observation: you don't need the multiprocessing module for what you're doing here. subprocess is for creating and managing processes which run other scripts and programs; multiprocessing is for creating and managing processes which call code that is already internal to your (parent) script.
I suspect that you're seeing the effect of buffering. By the time you kill this process, it has already buffered a significant amount of music out to the hardware (or even to the OS buffers for the device).
What happens if you start the same omxplayer program from your shell, but in the background (append the & token to the end of your Unix shell command line to push a program into the background), and then use the kill command on that process? Do you see the same results?
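A sketch of that first observation, using subprocess alone so the Popen handle for the player can be signalled directly. start_new_session (Python 3) puts the player in its own process group so the whole group can be signalled; whether omxplayer needs that, or exits cleanly on SIGTERM, is an assumption here, and Ctrl-C stands in for the "stop" command to keep the sketch single-threaded:
import os
import signal
import subprocess

music_folder = "/home/pi/Music/"

for mp3_file in sorted(os.listdir(music_folder)):
    print("playing " + mp3_file)
    # keep the Popen handle so the player itself can be signalled later
    p = subprocess.Popen(["omxplayer", "-o", "local", music_folder + mp3_file],
                         start_new_session=True)
    try:
        p.wait()
    except KeyboardInterrupt:
        # signal the player's whole process group, then stop the playlist
        os.killpg(os.getpgid(p.pid), signal.SIGTERM)
        p.wait()
        break
On Python 2 the equivalent of start_new_session=True is preexec_fn=os.setsid.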
From within my Python script, I want to start another Python script which will run in the background, waiting for the instruction to terminate.
Host Python script (H1) starts subprocess P1.
P1 performs some short-lived work and emits a sentinel to indicate that it is now going to sleep, awaiting the instruction to terminate.
H1 polls for this sentinel repeatedly. When it receives the sentinel, it performs some other IO-bound task, and when that completes, it tells P1 to die gracefully (meaning: close any resources you have acquired).
Is this feasible with the subprocess module?
Yes, start the process with:
p = subprocess.Popen([list for the script to execute], stdout=subprocess.PIPE, stdin=subprocess.PIPE, stderr=subprocess.PIPE)
You can then read from p.stdout and p.stderr to watch for your sentinel and write to p.stdin to send messages to the child process. If you are running on a POSIX system, you might consider using pexpect instead; it doesn't support MS Windows, but it handles communicating with child processes better than subprocess.
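A small sketch of that sentinel dance, assuming a hypothetical child script p1.py that prints a line such as READY and then blocks on stdin until it reads a line (the script name and sentinel text are made up for illustration):
import subprocess

# parent (H1): 'p1.py' is a hypothetical child that prints "READY"
# and then blocks on stdin until it reads a line
p = subprocess.Popen(['python', 'p1.py'],
                     stdin=subprocess.PIPE,
                     stdout=subprocess.PIPE,
                     universal_newlines=True)

# wait for the sentinel line from the child
for line in p.stdout:
    if line.strip() == 'READY':
        break

# ... do the other IO-bound task here ...

# tell the child to shut down gracefully and wait for it
p.stdin.write('shutdown\n')
p.stdin.flush()
p.wait()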
"""H1"""
from multiprocessing import Process, Pipe
import sys
def P1(conn):
print 'P1: some short lived work'
sys.stdout.flush()
conn.send('work done')
# wait for shutdown command...
conn.recv()
conn.close()
print 'P1: shutting down'
if __name__ == '__main__':
parent_conn, child_conn = Pipe()
p = Process(target=P1, args=(child_conn,))
p.start()
print parent_conn.recv()
print 'H1: some other IO bound task'
parent_conn.send("game over")
p.join()
Output:
P1: some short lived work
work done
H1: some other IO bound task
P1: shutting down
I'm writing a simple script that executes a system command on a sequence of files.
To speed things up, I'd like to run them in parallel, but not all at once; I need to control the maximum number of simultaneously running commands.
What would be the easiest way to approach this?
If you are calling subprocesses anyway, I don't see the need to use a thread pool. A basic implementation using the subprocess module would be
import subprocess
import os
import time

files = <list of file names>
command = "/bin/touch"
processes = set()
max_processes = 5

for name in files:
    processes.add(subprocess.Popen([command, name]))
    if len(processes) >= max_processes:
        os.wait()
        processes.difference_update([
            p for p in processes if p.poll() is not None])
On Windows, os.wait() is not available (nor any other method of waiting for any child process to terminate). You can work around this by polling at certain intervals:
for name in files:
    processes.add(subprocess.Popen([command, name]))
    while len(processes) >= max_processes:
        time.sleep(.1)
        processes.difference_update([
            p for p in processes if p.poll() is not None])
The time to sleep for depends on the expected execution time of the subprocesses.
The answer from Sven Marnach is almost right, but there is a problem. Once the last files have been submitted, the for loop ends while up to max_processes children may still be running; the main program then exits, which can in turn close those child processes. For me, this happened with the screen command.
On Linux the code will look like this (it will only work on Python 2.7):
import subprocess
import os
import time

files = <list of file names>
command = "/bin/touch"
processes = set()
max_processes = 5

for name in files:
    processes.add(subprocess.Popen([command, name]))
    if len(processes) >= max_processes:
        os.wait()
        processes.difference_update(
            [p for p in processes if p.poll() is not None])

# Check if all the child processes were closed
for p in processes:
    if p.poll() is None:
        p.wait()
You need to combine a Semaphore object with threads. A Semaphore is an object that lets you limit the number of threads that are running in a given section of code. In this case we'll use a semaphore to limit the number of threads that can run the os.system call.
First we import the modules we need:
#!/usr/bin/python
import threading
import os
Next we create a Semaphore object. The number four here is the number of threads that can acquire the semaphore at one time. This limits the number of subprocesses that can be run at once.
semaphore = threading.Semaphore(4)
This function simply wraps the subprocess call in acquire and release calls on the semaphore.
def run_command(cmd):
    semaphore.acquire()
    try:
        os.system(cmd)
    finally:
        semaphore.release()
If you're using Python 2.6+ this can become even simpler as you can use the 'with' statement to perform both the acquire and release calls.
def run_command(cmd):
    with semaphore:
        os.system(cmd)
Finally, to show that this works as expected we'll call the "sleep 10" command eight times.
for i in range(8):
    threading.Thread(target=run_command, args=("sleep 10", )).start()
Running the script using the 'time' program shows that it only takes 20 seconds as two lots of four sleeps are run in parallel.
aw#aw-laptop:~/personal/stackoverflow$ time python 4992400.py
real 0m20.032s
user 0m0.020s
sys 0m0.008s
I merged the solutions by Sven and Thuener into one that waits for trailing processes and also stops if one of the processes crashes:
import logging
import shlex
import subprocess
import time

def removeFinishedProcesses(processes):
    """ given a list of (commandString, process),
        remove those that have completed and return the result
    """
    newProcs = []
    for pollCmd, pollProc in processes:
        retCode = pollProc.poll()
        if retCode is None:
            # still running
            newProcs.append((pollCmd, pollProc))
        elif retCode != 0:
            # failed
            raise Exception("Command %s failed" % pollCmd)
        else:
            logging.info("Command %s completed successfully" % pollCmd)
    return newProcs

def runCommands(commands, maxCpu):
    processes = []
    for command in commands:
        logging.info("Starting process %s" % command)
        proc = subprocess.Popen(shlex.split(command))
        procTuple = (command, proc)
        processes.append(procTuple)
        while len(processes) >= maxCpu:
            time.sleep(.2)
            processes = removeFinishedProcesses(processes)

    # wait for all processes
    while len(processes) > 0:
        time.sleep(0.5)
        processes = removeFinishedProcesses(processes)
    logging.info("All processes completed")
What you are asking for is a thread pool: a fixed number of threads that can be used to execute tasks. When a thread is not running a task, it waits on a task queue for a new piece of code to execute.
There is this thread pool module, but there is a comment saying it is not considered complete yet. There may be other packages out there, but this was the first one I found.
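On more recent Python versions the standard library covers this directly; a minimal sketch using concurrent.futures.ThreadPoolExecutor, with made-up file names and /bin/touch as the command (as in the answer above):
import subprocess
from concurrent.futures import ThreadPoolExecutor

command = "/bin/touch"
files = ["a.txt", "b.txt", "c.txt"]  # hypothetical file names

def run(name):
    # each worker thread blocks in one subprocess, so at most 4 commands run at once
    return subprocess.call([command, name])

with ThreadPoolExecutor(max_workers=4) as pool:
    return_codes = list(pool.map(run, files))
print(return_codes)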
If you're running system commands, you can just create the process instances with the subprocess module and call them as you want. There shouldn't be any need to use threads (it's unpythonic), and multiprocessing seems a tad overkill for this task.
This answer is very similar to the other answers here, but it uses a list instead of a set.
For some reason, when using those answers I was getting a runtime error about the size of the set changing.
from subprocess import PIPE
import subprocess
import time

def submit_job_max_len(job_list, max_processes):
    sleep_time = 0.1
    processes = list()
    for command in job_list:
        print 'running {n} processes. Submitting {proc}.'.format(n=len(processes),
                                                                  proc=str(command))
        processes.append(subprocess.Popen(command, shell=False, stdout=None,
                                          stdin=PIPE))
        while len(processes) >= max_processes:
            time.sleep(sleep_time)
            processes = [proc for proc in processes if proc.poll() is None]
    while len(processes) > 0:
        time.sleep(sleep_time)
        processes = [proc for proc in processes if proc.poll() is None]

cmd = '/bin/bash run_what.sh {n}'
job_list = ((cmd.format(n=i)).split() for i in range(100))
submit_job_max_len(job_list, max_processes=50)