I have two scripts that each start multiple processes. Right now I'm opening two separate terminals and running python start.py in each to launch both scripts. How can I achieve this with one command, or one script?
start.py (script 1):

# globals
my_queue = multiprocessing.Manager().Queue()  # queue to store our values
stop_event = multiprocessing.Event()          # flag which signals processes to stop
my_pool = None

def my_function(foo):
    print("starting %s" % foo)
    try:
        addnews.var(foo)
    except Exception as e:
        print str(e)

MAX_PROCESSES = 50
my_pool = multiprocessing.Pool(MAX_PROCESSES)
x = Var.objects.order_by('name').values('link')
for t in x:
    t = t.values()[0]
    my_pool.apply_async(my_function, args=(t,))
my_pool.close()
my_pool.join()
start1.py (script 2):

# globals
MAX_PROCESSES = 50
my_queue = multiprocessing.Manager().Queue()  # queue to store our values
stop_event = multiprocessing.Event()          # flag which signals processes to stop
my_pool = None

def my_function(var):
    var.run_main(var)
    stop_event.set()

def var_scanner():
    # Since `t` could have unlimited size we'll put all `t` values in the queue
    while not stop_event.is_set():  # forever scan `values` for new items
        y = Var.objects.order_by('foo').values('foo__foo')
        for t in y:
            t = t.values()[0]
            my_queue.put(t)

try:
    var_scanner_process = multiprocessing.Process(target=var_scanner)
    var_scanner_process.start()
    my_pool = multiprocessing.Pool(MAX_PROCESSES)
    #while not stop_event.is_set():
    try:  # if queue isn't empty, get value from queue and create new process
        var = my_queue.get_nowait()  # getting value from queue
        p = multiprocessing.Process(target=my_function, args=(var,))
        p.start()
    except Queue.Empty:
        print "No more items in queue"
        time.sleep(1)
    #stop_event.set()
except KeyboardInterrupt as stop_test_exception:
    print(" CTRL+C pressed. Stopping test....")
    stop_event.set()
You can run the first script in the background on the same terminal using the shell's & operator:

python start.py &
python start1.py
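If you'd rather have a single entry point than rely on the shell, a small launcher script can start both and wait for them. This is a sketch, not from the original answer; the -c commands below are stand-ins for the real start.py and start1.py invocations:

```python
import subprocess
import sys

def run_together(commands):
    """Start each command in its own process, then wait for all of them."""
    procs = [subprocess.Popen(cmd) for cmd in commands]
    return [p.wait() for p in procs]

if __name__ == '__main__':
    # replace these stand-ins with [sys.executable, "start.py"] etc.
    codes = run_together([
        [sys.executable, "-c", "print('start.py running')"],
        [sys.executable, "-c", "print('start1.py running')"],
    ])
    print(codes)  # one exit code per script
```

Starting both Popen objects before calling wait() is what makes the scripts run concurrently rather than one after the other.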
Related
I have created 3 processes in python. I have attached the code.
Now I want to stop the execution of the running p2 and p3 processes, because I got an error from the p1 process. I know I can add p2.terminate(), but I don't know where to add it in this case. Thanks in advance.
def table(a):
    try:
        for i in range(100):
            print(i, 'x', a, '=', a*i)
    except:
        print("error")

processes = []
p1 = multiprocessing.Process(target=table, args=['s'])
p2 = multiprocessing.Process(target=table, args=[5])
p3 = multiprocessing.Process(target=table, args=[2])
p1.start()
p2.start()
p3.start()
processes.append(p1)
processes.append(p2)
processes.append(p3)
for process in processes:
    process.join()
To stop any given process once one of the processes terminates due to an error, first set up your target table() to exit with an appropriate exitcode > 0:
def table(a):
    try:
        for i in range(100):
            print(i, 'x', a, '=', a*i)
    except:
        sys.exit(1)
    sys.exit(0)
Then you can start your processes and poll the processes to see if any one has terminated.
#!/usr/bin/env python3
# coding: utf-8
import multiprocessing
import time
import logging
import sys

logging.basicConfig(level=logging.INFO, format='[%(asctime)-15s] [%(processName)-10s] %(message)s', datefmt='%Y-%m-%d %H:%M:%S')

def table(args):
    try:
        for i in range(5):
            logging.info('{} x {} = {}'.format(i, args, i*args))
            if isinstance(args, str):
                raise ValueError()
            time.sleep(5)
    except:
        logging.error('Done in Error Path: {}'.format(args))
        sys.exit(1)
    logging.info('Done in Success Path: {}'.format(args))
    sys.exit(0)

if __name__ == '__main__':
    p1 = multiprocessing.Process(target=table, args=('s',))
    p2 = multiprocessing.Process(target=table, args=(5,))
    p3 = multiprocessing.Process(target=table, args=(2,))
    processes = [p1, p2, p3]
    for process in processes:
        process.start()
    while True:
        failed = []
        completed = []
        for process in processes:
            if process.exitcode is not None and process.exitcode != 0:
                failed.append(process)
            elif process.exitcode == 0:
                completed.append(process)
        if failed:
            for process in processes:
                if process not in failed:
                    logging.info('Terminating Process: {}'.format(process))
                    process.terminate()
            break
        if len(completed) == len(processes):
            break
        time.sleep(1)
Essentially, you are using terminate() to stop the remaining processes that are still running.
To stop all the worker processes when one of them hits an error, I use this code block:
processes = []
for j in range(0, n_core):
    p = multiprocessing.Process(target=table, args=('some input',))
    processes.append(p)
    time.sleep(0.1)
    p.start()

flag = True
while flag:
    flag = False
    for p in processes:
        if p.exitcode == 1:
            for z in processes:
                z.kill()
            sys.exit(1)
        elif p.is_alive():
            flag = True

for p in processes:
    p.join()
First, I have modified function table to throw an exception that is not caught when the argument passed to it is 's', and to delay .1 seconds otherwise before printing. This gives the main process a chance to realize that the sub-process threw an exception and to cancel the other processes before they have started printing; otherwise, the other processes will have completed before you can cancel them. Here I am using a process pool, which supports a terminate method that conveniently terminates all submitted, uncompleted tasks without having to cancel each one individually (although that is also an option).
The code creates a multiprocessing pool of size 3, since that is the number of "tasks" being submitted, and then uses method apply_async to submit the 3 tasks to run in parallel (assuming you have at least 3 processors). apply_async returns an AsyncResult instance whose get method can be called to wait for the completion of the submitted task and to get the return value from the worker function table. That return value is None for the second and third tasks and of no interest; get will instead re-raise an exception if the worker function had an uncaught exception, which is the case with the first task submitted:
import multiprocessing
import time

def table(a):
    if a == 's':
        raise Exception('I am "s"')
    time.sleep(.1)
    for i in range(100):
        print(i, 'x', a, '=', a*i)

# required for Windows:
if __name__ == '__main__':
    pool = multiprocessing.Pool(3)  # create a pool of 3 processes
    result1 = pool.apply_async(table, args=('s',))
    result2 = pool.apply_async(table, args=(5,))
    result3 = pool.apply_async(table, args=(2,))
    try:
        result1.get()  # wait for completion of first task
    except Exception as e:
        print(e)
        pool.terminate()  # kill all processes in the pool
    else:
        # wait for all submitted tasks to complete:
        pool.close()
        pool.join()
        """
        # or alternatively:
        result2.get()  # wait for second task to finish
        result3.get()  # wait for third task to finish
        """
Prints:
I am "s"
I've searched StackOverflow, and although I've found many questions on this, I haven't found an answer that fits my situation; I'm not a strong enough python programmer to adapt the other answers to fit my need.
I've looked here to no avail:
kill a function after a certain time in windows
Python: kill or terminate subprocess when timeout
signal.alarm replacement in Windows [Python]
I am using multiprocessing to run multiple SAP windows at once to pull reports. It is set up to run on a schedule every 5 minutes. Every once in a while, one of the reports stalls due to the GUI interface and never ends. I don't get an error or exception; it just stalls forever. What I would like is a timeout around the part of the code that is executed in SAP: if it takes longer than 4 minutes, it times out, closes SAP, skips the rest of the code, and waits for the next scheduled report time.
I am using Windows Python 2.7
import multiprocessing
from multiprocessing import Manager, Process
import time
import datetime

### OPEN SAP ###
def start_SAP():
    print 'opening SAP program'

### REPORTS IN SAP ###
def report_1(q, lock):
    while True:  # logic to get shared queue
        if not q.empty():
            lock.acquire()
            k = q.get()
            time.sleep(1)
            lock.release()
            break
        else:
            time.sleep(1)
    print 'running report 1'

def report_2(q, lock):
    while True:  # logic to get shared queue
        if not q.empty():
            lock.acquire()
            k = q.get()
            time.sleep(1)
            lock.release()
            break
        else:
            time.sleep(1)
    print 'running report 2'

def report_3(q, lock):
    while True:  # logic to get shared queue
        if not q.empty():
            lock.acquire()
            k = q.get()
            time.sleep(1)
            lock.release()
            break
        else:
            time.sleep(1)
    time.sleep(60000)  # mimicking the stall for report 3 that takes longer than allotted time
    print 'running report 3'

def report_N(q, lock):
    while True:  # logic to get shared queue
        if not q.empty():
            lock.acquire()
            k = q.get()
            time.sleep(1)
            lock.release()
            break
        else:
            time.sleep(1)
    print 'running report N'

### CLOSES SAP ###
def close_SAP():
    print 'closes SAP'

def format_file():
    print 'formatting files'

def multi_daily_pull():
    lock = multiprocessing.Lock()  # creating a lock in multiprocessing
    shared_list = range(6)  # creating a shared list for all functions to use
    q = multiprocessing.Queue()  # creating an empty queue in multiprocessing
    for n in shared_list:  # putting list into the queue
        q.put(n)
    print 'Starting process at ', time.strftime('%m/%d/%Y %H:%M:%S')
    print 'Starting SAP Pulls at ', time.strftime('%m/%d/%Y %H:%M:%S')
    StartSAP = Process(target=start_SAP)
    StartSAP.start()
    StartSAP.join()
    report1 = Process(target=report_1, args=(q, lock))
    report2 = Process(target=report_2, args=(q, lock))
    report3 = Process(target=report_3, args=(q, lock))
    reportN = Process(target=report_N, args=(q, lock))
    report1.start()
    report2.start()
    report3.start()
    reportN.start()
    report1.join()
    report2.join()
    report3.join()
    reportN.join()
    EndSAP = Process(target=close_SAP)
    EndSAP.start()
    EndSAP.join()
    formatfile = Process(target=format_file)
    formatfile.start()
    formatfile.join()

if __name__ == '__main__':
    multi_daily_pull()
One way to do what you want would be to use the optional timeout argument that the Process.join() method accepts. This will make it only block the calling thread at most that length of time.
I also set the daemon attribute of each Process instance so your main thread will be able to terminate even if one of the processes it started is still "running" (or has hung up).
One final point: you don't need a multiprocessing.Lock to control access to a multiprocessing.Queue, because queues handle that aspect of things automatically, so I removed it. You may still want one for some other reason, such as controlling access to stdout so printing from the various processes doesn't overlap and mess up what is output to the screen.
import multiprocessing
from multiprocessing import Process
import time
import datetime

def start_SAP():
    print 'opening SAP program'

### REPORTS IN SAP ###
def report_1(q):
    while True:  # logic to get shared queue
        if q.empty():
            time.sleep(1)
        else:
            k = q.get()
            time.sleep(1)
            break
    print 'report 1 finished'

def report_2(q):
    while True:  # logic to get shared queue
        if q.empty():
            time.sleep(1)
        else:
            k = q.get()
            time.sleep(1)
            break
    print 'report 2 finished'

def report_3(q):
    while True:  # logic to get shared queue
        if q.empty():
            time.sleep(1)
        else:
            k = q.get()
            time.sleep(60000)  # Take longer than allotted time
            break
    print 'report 3 finished'

def report_N(q):
    while True:  # logic to get shared queue
        if q.empty():
            time.sleep(1)
        else:
            k = q.get()
            time.sleep(1)
            break
    print 'report N finished'

def close_SAP():
    print 'closing SAP'

def format_file():
    print 'formatting files'

def multi_daily_pull():
    shared_list = range(6)  # creating a shared list for all functions to use
    q = multiprocessing.Queue()  # creating an empty queue in multiprocessing
    for n in shared_list:  # putting list into the queue
        q.put(n)
    print 'Starting process at ', time.strftime('%m/%d/%Y %H:%M:%S')
    print 'Starting SAP Pulls at ', time.strftime('%m/%d/%Y %H:%M:%S')
    StartSAP = Process(target=start_SAP)
    StartSAP.start()
    StartSAP.join()
    report1 = Process(target=report_1, args=(q,))
    report1.daemon = True
    report2 = Process(target=report_2, args=(q,))
    report2.daemon = True
    report3 = Process(target=report_3, args=(q,))
    report3.daemon = True
    reportN = Process(target=report_N, args=(q,))
    reportN.daemon = True
    report1.start()
    report2.start()
    report3.start()
    reportN.start()
    report1.join(30)  # block at most 30 seconds waiting for each process
    report2.join(30)
    report3.join(30)
    reportN.join(30)
    EndSAP = Process(target=close_SAP)
    EndSAP.start()
    EndSAP.join()
    formatfile = Process(target=format_file)
    formatfile.start()
    formatfile.join()

if __name__ == '__main__':
    multi_daily_pull()
I have noticed that when I have many threads pulling elements from a queue, fewer elements are processed than the number I put into the queue. This is sporadic but seems to happen somewhere around half the time I run the following code.
#!/bin/env python
from threading import Thread
import httplib, sys
from Queue import Queue
import time
import random

concurrent = 500
num_jobs = 500
results = {}

def doWork():
    while True:
        result = None
        try:
            result = curl(q.get())
        except Exception as e:
            print "Error when trying to get from queue: {0}".format(str(e))
        if results.has_key(result):
            results[result] += 1
        else:
            results[result] = 1
        try:
            q.task_done()
        except:
            print "Called task_done when all tasks were done"

def curl(ourl):
    result = 'all good'
    try:
        time.sleep(random.random() * 2)
    except Exception as e:
        result = "error: %s" % str(e)
    except:
        result = str(sys.exc_info()[0])
    finally:
        return result or "None"

print "\nRunning {0} jobs on {1} threads...".format(num_jobs, concurrent)
q = Queue()
for i in range(concurrent):
    t = Thread(target=doWork)
    t.daemon = True
    t.start()
for x in range(num_jobs):
    q.put("something")
try:
    q.join()
except KeyboardInterrupt:
    sys.exit(1)

total_responses = 0
for result in results:
    num_responses = results[result]
    print "{0}: {1} time(s)".format(result, num_responses)
    total_responses += num_responses
print "Number of elements processed: {0}".format(total_responses)
Tim Peters hit the nail on the head in the comments. The issue is that the results dict is updated from multiple threads and isn't protected by any sort of mutex. That allows something like this to happen:
thread A gets result: "all good"
thread A checks results[result]
thread A sees no such key
thread A suspends # <-- before counting its result
thread B gets result: "all good"
thread B checks results[result]
thread B sees no such key
thread B sets results['all good'] = 1
thread C ...
thread C sets results['all good'] = 2
thread D ...
thread A resumes # <-- and remembers it needs to count its result still
thread A sets results['all good'] = 1 # resetting previous work!
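The smallest fix for that interleaving is to make the read-modify-write atomic with a lock. A minimal Python 3 sketch (the curl call is reduced to a fixed result here):

```python
import threading

results = {}
results_lock = threading.Lock()

def count(result):
    # the check-and-update is now atomic, so no increments are lost
    with results_lock:
        results[result] = results.get(result, 0) + 1

threads = [threading.Thread(target=count, args=("all good",))
           for _ in range(100)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(results)  # {'all good': 100}
```

Without the lock, two threads can both see "no such key" and both write 1, losing a count; with it, all 100 increments land.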
A more typical workflow might have a results queue that the main thread is listening on.
workq = queue.Queue()
resultsq = queue.Queue()

make_work(into=workq)
do_work(source=workq, respond_on=resultsq)
# do_work would do respond_on.put_nowait(result) instead of
# return result

results = {}
while True:
    try:
        result = resultsq.get(timeout=1)
    except queue.Empty:
        break  # maybe? You'd probably want to retry a few times
    results[result] = results.get(result, 0) + 1
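Fleshed out into something runnable (a Python 3 sketch; the worker just reports a fixed result, and sentinel values tell each thread when to stop):

```python
import queue
import threading

workq = queue.Queue()
resultsq = queue.Queue()

def worker():
    while True:
        item = workq.get()
        if item is None:            # sentinel: no more work for this thread
            break
        # instead of returning, the worker reports through the results queue
        resultsq.put("all good")

threads = [threading.Thread(target=worker) for _ in range(4)]
for t in threads:
    t.start()

num_jobs = 20
for _ in range(num_jobs):
    workq.put("something")
for _ in threads:                   # one sentinel per worker thread
    workq.put(None)
for t in threads:
    t.join()

# only the main thread touches the dict, so no lock is needed
results = {}
while not resultsq.empty():
    r = resultsq.get()
    results[r] = results.get(r, 0) + 1

print(results)  # {'all good': 20}
```

Because the counting happens in a single thread, the race on the dict disappears without any locking.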
Here is my script:
# globals
MAX_PROCESSES = 50
my_queue = Manager().Queue()  # queue to store our values
stop_event = Event()  # flag which signals processes to stop
my_pool = None

def my_function(var):
    while not stop_event.is_set():
        # this script will run forever for each variable found
        return

def var_scanner():
    # Since `t` could have unlimited size we'll put all `t` values in the queue
    while not stop_event.is_set():  # forever scan `values` for new items
        x = Variable.objects.order_by('var').values('var__var')
        for t in x:
            t = t.values()
            my_queue.put(t)
        time.sleep(10)

try:
    var_scanner_process = Process(target=var_scanner)
    var_scanner_process.start()
    my_pool = Pool(MAX_PROCESSES)
    while not stop_event.is_set():
        try:  # if queue isn't empty, get value from queue and create new process
            var = my_queue.get_nowait()  # getting value from queue
            p = Process(target=my_function, args=("process-%s" % var,))
            p.start()
        except Queue.Empty:
            print "No more items in queue"
except KeyboardInterrupt as stop_test_exception:
    print(" CTRL+C pressed. Stopping test....")
    stop_event.set()
However, I don't think this script is exactly what I want. Here's what I was looking for when I wrote it: scan for variables in the "Variables" table, add "new" variables to the queue if they aren't already there, and run "my_function" for each variable in the queue.
I believe I have WAY too many while not stop_event.is_set() loops, because right now it just prints out "No more items in queue" about a million times.
Please HELP!! :)
I am writing a server for some library (python).
I want the server, while working in its loop, to open one thread to do something else.
I am controlling this thread with a queue, and until there is a return value in the queue, I don't want the server to open another thread.
try:
    # we have a return in the queue
    returnValue = threadQueue.get(False)
    startAnotherThread = True
except Empty:
    print "Waiting for return value from thread....."
If there is some return value in the queue, then startAnotherThread will tell an if statement to open another thread.
I don't know why it's not working; maybe someone has an idea?
Solved:
Init before the server loop:
# thread queue
threadQueue = Queue()
# thread numbering
threadNumber = 0
# thread start
threadCanStart = True
Inside the server loop:
if threadCanStart:
    myThread(threadNumber, threadQueue).start()
    threadNumber += 1
    threadCanStart = False
try:
    # we have a return in the queue
    returnValue = threadQueue.get(False)
    print "Queue return: ", returnValue
    threadCanStart = True
except Empty:
    print "Waiting for thread....."
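The same gating pattern in a self-contained Python 3 sketch, with a plain worker function standing in for myThread and a bounded loop standing in for the server loop:

```python
import queue
import threading

thread_queue = queue.Queue()
results = []

def worker(number, out_queue):
    # stand-in for the real thread's work: report back through the queue
    out_queue.put("thread %d done" % number)

thread_number = 0
can_start = True

for _ in range(3):  # three passes through the "server loop"
    if can_start:
        threading.Thread(target=worker,
                         args=(thread_number, thread_queue)).start()
        thread_number += 1
        can_start = False  # don't start another until this one reports back
    try:
        # block briefly for the worker's return value
        results.append(thread_queue.get(timeout=1))
        can_start = True   # worker finished, the next one may start
    except queue.Empty:
        pass               # still waiting; don't start another thread

print(results)
```

The flag flips to True only after a value arrives on the queue, which is exactly what keeps at most one worker thread alive at a time.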