Send a changing counter int variable to another Python script concurrently - python

I have two Python scripts, script1.py and script2.py. script1.py is a counter which increments an int x independently, and script2.py should fetch the value of x every 5 seconds. I have tried doing this with multiprocessing, following the post
Passing data between separately running Python scripts
and I put the counter inside a while True loop in script1. Here is my attempt, but I don't think I understand the general idea and I am getting various errors, since I am new to Python and I am missing some details.
script1.py:
from multiprocessing import Process, Pipe

x = 0

def function(child_conn):
    global x
    while True:
        x += 1
        print(x)
        child_conn.send(x)
        child_conn.close()
script2.py:
from multiprocessing import Proces,Queue,Pipe
from script1 import function
from time import sleep

if __name__=='__main__':
    parent_conn, child_conn = Pipe()
    p = Process(target=function, args=(child_conn,))
    p.start()
    print(parent_conn.recv())
    time.sleep(5)
Thanks in advance!

You have a loop in the child process but no loop in the parent process, so the parent only ever receives one value. On top of that, child_conn.close() sits inside the child's loop, so the child can only send a single message before the next send throws an error.
Try the code below. Run script2.py to start the process.
script1.py
from multiprocessing import Process, Pipe
from time import sleep

x = 0

def function(child_conn):
    global x
    while True:
        x += 1
        print(x)
        child_conn.send(x)
        #child_conn.close()
        sleep(1)
script2.py
from multiprocessing import Process, Queue, Pipe
from script1 import function
from time import sleep

if __name__ == '__main__':
    parent_conn, child_conn = Pipe()
    p = Process(target=function, args=(child_conn,))
    p.start()
    while True:
        print(parent_conn.recv())
        sleep(1)
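Not part of the answer above, but if script2 only needs the most recent value every 5 seconds (as the question describes) rather than every single increment, a shared multiprocessing.Value can be simpler than a Pipe, because recv() blocks and the pipe buffers every value the child sends. A minimal sketch of that variation, assuming Python 3; the names counter and shared_x are just illustrative:

from multiprocessing import Process, Value
from time import sleep

def counter(shared_x):
    # increment the shared integer once per second
    while True:
        with shared_x.get_lock():
            shared_x.value += 1
        sleep(1)

if __name__ == '__main__':
    shared_x = Value('i', 0)              # 'i' = C int, initial value 0
    p = Process(target=counter, args=(shared_x,), daemon=True)
    p.start()
    while True:
        sleep(5)                          # sample the latest value every 5 seconds
        print(shared_x.value)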

Related

multiprocessing, threading gets stuck and printing output gets messed up

I'm running multiple threads in Python. I've tried both the threading module and the multiprocessing module. Even though the execution gives the correct result, every time the terminal gets stuck and the printed output gets messed up.
Here's a simplified version of the code.
import subprocess
import threading
import argparse
import sys

result = []

def check_thread(args,components,id):
    for i in components:
        cmd = <command to be given to terminal>
        output = subprocess.check_output([cmd],shell=True)
        result.append((id,i,output))

def check(args,components):
    # lock = threading.Lock()
    # lock = threading.Semaphore(value=1)
    thread_list = []
    for id in range(3):
        t=threading.Thread(target=check_thread, args=(args,components,i))
        thread_list.append(t)
    for thread in thread_list:
        thread.start()
    for thread in thread_list:
        thread.join()
    for res in result:
        print(res)
    return res

if __name__ == 'main':
    parser = argparse.ArgumentParser(....)
    parser.add_argument(.....)
    args = parser.parse_args()
    components = ['comp1','comp2']
    while True:
        print('SELECTION MENU\n1)\n2)\n')
        option = raw_input('Enter option')
        if option=='1':
            res = check(args, components)
        if option=='2':
            <do something else>
        else:
            sys.exit(0)
I've tried using the multiprocessing module with Process and Pool, tried passing a lock to check_thread, and tried returning a value from check_thread() and using a queue to take in the values, but every time it's the same result: execution is successful but the terminal gets stuck and the printed output is garbled.
Is there any fix to this? I'm using Python 2.7 on a Linux terminal.
Here is how the garbled output looks (screenshot omitted).
You should use a multiprocessing Queue rather than a plain list to collect the results.
import multiprocessing as mp

# Define an output queue
output = mp.Queue()

# Define an example function
def function(params, output):
    """ Process params and put the result on the output queue. """
    # Process params and store the result in the res variable (placeholder)
    output.put(res)

# Set up a list of processes that we want to run
processes = [mp.Process(target=function, args=(5, output)) for x in range(10)]

# Run processes
for p in processes:
    p.start()

# Exit the completed processes
for p in processes:
    p.join()

# Get process results from the output queue
results = [output.get() for p in processes]
print(results)
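Applied to the code in the question, that means having check_thread put its results on a queue instead of appending to the shared result list, and printing only from the main thread. A rough sketch along those lines: the command string is a placeholder, since the real command was elided in the question, and this also fixes the args=(args,components,i) line, which should pass id.

import subprocess
import threading
try:
    import queue              # Python 3
except ImportError:
    import Queue as queue     # Python 2.7, as in the question

result_q = queue.Queue()

def check_thread(args, components, id, result_q):
    for i in components:
        cmd = 'echo {0}-{1}'.format(id, i)   # placeholder command, elided in the question
        output = subprocess.check_output([cmd], shell=True)
        result_q.put((id, i, output))

def check(args, components):
    thread_list = []
    for id in range(3):
        t = threading.Thread(target=check_thread, args=(args, components, id, result_q))
        thread_list.append(t)
        t.start()
    for t in thread_list:
        t.join()
    results = []
    while not result_q.empty():
        results.append(result_q.get())
    for res in results:       # print from the main thread only, after all threads finish
        print(res)
    return results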

How to write multiprocessing code correctly in Python 2

I'm trying to implement very simple multiprocessing code in Python 2.7, but it looks like the code runs serially rather than in parallel.
The following code prints *****1***** but does not print *****2***** immediately afterwards, which is what I would expect.
import os
import multiprocessing
from time import sleep

def main():
    func1_proc = multiprocessing.Process(target=func1())
    func2_proc = multiprocessing.Process(target=func2())
    func1_proc.start()
    func2_proc.start()
    pass

def func1():
    print "*****1*****"
    sleep(100)

def func2():
    print "*****2*****"
    sleep(100)

if __name__ == "__main__":
    main()
You're calling func1 and func2 yourself and passing their return values to Process, so func1 sleeps for 100 seconds in the parent process and returns None before the second Process is even created; nothing runs in parallel.
You should pass the function objects to Process instead, so that it runs them in separate processes:
func1_proc = multiprocessing.Process(target=func1)
func2_proc = multiprocessing.Process(target=func2)
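For completeness, a corrected version of the whole script might look like the sketch below; with the call parentheses removed both lines print immediately, and the added join() calls (an assumption, not in the original question) keep the parent alive until the children finish.

import multiprocessing
from time import sleep

def func1():
    print("*****1*****")
    sleep(100)

def func2():
    print("*****2*****")
    sleep(100)

def main():
    # pass the function objects; do not call them here
    func1_proc = multiprocessing.Process(target=func1)
    func2_proc = multiprocessing.Process(target=func2)
    func1_proc.start()
    func2_proc.start()
    func1_proc.join()
    func2_proc.join()

if __name__ == "__main__":
    main()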

Get live value of variable from another script

I have two scripts, new.py and test.py.
Test.py
import time

while True:
    x = "hello"
    time.sleep(1)
    x = "world"
    time.sleep(1)
new.py
import time

while True:
    import test
    x = test.x
    print(x)
    time.sleep(1)
From my understanding, running new.py should print "hello" and, a second later, "world", over and over.
It does not print anything. How can I fix that?
Thanks
I think the code below captures what you are asking. Here I simulate two scripts running independently (by using threads), then show how you can use shelve to communicate between them. Note, there are likely much better ways to get to what you are after -- but if you absolutely must run the scripts independently, this will work for you.
Incidentally, any persistent source would do (such as a database).
import shelve
import time
import threading

def script1():
    while True:
        with shelve.open('my_store') as holder3:
            if holder3['flag'] is not None: break
        print('waiting')
        time.sleep(1)
    print("Done")

def script2():
    print("writing")
    with shelve.open('my_store') as holder2:
        holder2['flag'] = 1

if __name__ == "__main__":
    with shelve.open('my_store') as holder1:
        holder1['flag'] = None

    t = threading.Thread(target=script1)
    t.start()

    time.sleep(5)
    script2()
    t.join()
Yields:
waiting
waiting
waiting
waiting
waiting
writing
Done
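If the two pieces really do run as separate scripts, as in the question, the same idea carries over directly. A rough sketch mirroring the question's files (the key name 'x' is just illustrative); note that shelve does not lock the file, so for anything beyond this simple single-writer, single-reader demo a database, as mentioned above, is the safer choice.
test.py
import shelve
import time

while True:
    with shelve.open('my_store') as db:
        db['x'] = "hello"          # publish the current value for the other script
    time.sleep(1)
    with shelve.open('my_store') as db:
        db['x'] = "world"
    time.sleep(1)
new.py
import shelve
import time

while True:
    with shelve.open('my_store') as db:
        print(db.get('x', 'nothing written yet'))
    time.sleep(1)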
Test.py
import time

def hello():
    callList = ['hello', 'world']
    for item in callList:
        print(item)
        time.sleep(1)

hello()
new.py
from test import hello   # Test.py saved as test.py

while True:
    hello()

Using threading to run a subprocess in parallel

I have a Linux script that I'm looking to automate through subprocess. Each iteration of subprocess should run the Linux script in a subdirectory of a parent directory, and each of these subprocess calls should run in a separate thread.
The way my directory is organized is as follows:
/parent/p1
/parent/p2....and so on till
/parent/p[n]
The first part of my code aims to run the process across all the subdirectories (p1, p2, p3, etc.), and it works fine for a fast process. However, many of my jobs need to run in the background, for which I usually use nohup and run them manually on a separate node. So every node in my terminal should run the same job, but each on its own directory (p1, p2, p3, etc.). The latter part of my code (using threading) aims to achieve this, but what ends up happening is that every node runs the same processes (p1, p1, p1, etc.): basically my entire jobs function is being passed through runSimThreads when I want the work separated out over the threads. Would someone know how I could change the threading function to place different jobs on each node?
import os
import sys
import subprocess
import os.path
import threading

#takes the argument: python FOLDER_NAME #ofThreads
#Example: python /parent 8

directory = sys.argv[1] #in my case input is /parent
threads = int(sys.argv[2]) #input is 8
category_name = directory.split('/')[-1] #splits parent as a word
folder_list = next(os.walk(directory))[1] #makes a list of subdirectories [p1,p2,p3..]

def jobs(cmd):
    for i in folder_list:
        f = open("/vol01/bin/dir/nohup.out", "w")
        cmd = subprocess.call(['nohup','python','np.py','{0}/{1}' .format(directory,i)],cwd = '/vol01/bin/dir', stdout=f)
    return cmd

def runSimThreads(numThreads):
    threads = []
    for i in range(numThreads):
        t = threading.Thread(target=jobs, args=(i,))
        threads.append(t)
        t.start()
    #Wait for all threads to complete
    main_thread = threading.currentThread()
    for t in threads:
        if t is main_thread:
            continue
        t.join()

runSimThreads(threads)
That can't be your code.
import os
import sys
import subprocess
import os.path
import threading
#takes the argument: python FOLDER_NAME #ofThreads
#Example: python /parent 8
threads = 8 #input is 8
...
...
for t in threads:
    print("hello")
--output:--
TypeError: 'int' object is not iterable
You are using the same variable names everywhere, and that is confusing you (or me?).
You also do this:
def jobs(cmd):
    for i in folder_list:
        f = open("/vol01/bin/dir/nohup.out", "w")
        cmd = "something"
You are overwriting your cmd parameter variable, which means that jobs() shouldn't have a parameter variable.
Response to comment1:
import threading as thr
import time
def greet():
    print("hello world")
t = thr.Thread(target=greet)
t.start()
t.join()
--output:--
hello world
import threading as thr
import time
def greet(greeting):
    print(greeting)
t = thr.Thread(target=greet, args=("Hello, Newman.",) )
t.start()
t.join()
--output:--
Hello, Newman.
Below is the equivalent of what you are doing:
import threading as thr
import time
def greet(greeting):
    greeting = "Hello, Jerry."
    print(greeting)
t = thr.Thread(target=greet, args=("Hello, Newman.",) )
t.start()
t.join()
--output:--
Hello, Jerry.
And anyone reading that code would ask, "Why are you passing an argument to the greet() function when you don't use it?"
I'm relatively new to python
Well, your code does this:
threads = 8
#Other irrelevant stuff here
for t in threads:
    print("hello")
and that will produce the error:
TypeError: 'int' object is not iterable
Do you know why?
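As for the original goal of putting different jobs on each thread, which the snippets above do not show code for, one straightforward option is to give each thread its own slice of folder_list rather than an unused index. This is only a sketch under the question's directory layout and paths, not the poster's or answerer's code; the thread count is renamed num_threads to avoid the name clash pointed out above, and output from several threads appending to the same nohup.out may still interleave.

import os
import subprocess
import sys
import threading

directory = sys.argv[1]                    # e.g. /parent, as in the question
num_threads = int(sys.argv[2])             # e.g. 8
folder_list = next(os.walk(directory))[1]  # [p1, p2, p3, ...]

def jobs(folders):
    # each thread handles only the folders it was given
    for folder in folders:
        with open("/vol01/bin/dir/nohup.out", "a") as f:
            subprocess.call(['nohup', 'python', 'np.py',
                             '{0}/{1}'.format(directory, folder)],
                            cwd='/vol01/bin/dir', stdout=f)

def runSimThreads(numThreads):
    thread_list = []
    for i in range(numThreads):
        # thread i takes folders i, i+numThreads, i+2*numThreads, ...
        t = threading.Thread(target=jobs, args=(folder_list[i::numThreads],))
        thread_list.append(t)
        t.start()
    for t in thread_list:
        t.join()

runSimThreads(num_threads)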

how to kill zombie processes created by multiprocessing module?

I'm very new to the multiprocessing module. I have one process whose job is to get messages from RabbitMQ and pass them to an internal queue (multiprocessing.Queue). Then I want to spawn a process whenever a new message comes in. It works, but after the job is finished it leaves a zombie process that is not terminated by its parent. Here is my code:
Main Process:
#!/usr/bin/env python
import multiprocessing
import logging
import consumer
import producer
import worker
import time
import base

conf = base.get_settings()
logger = base.logger(identity='launcher')

request_order_q = multiprocessing.Queue()
result_order_q = multiprocessing.Queue()
request_status_q = multiprocessing.Queue()
result_status_q = multiprocessing.Queue()

CONSUMER_KEYS = [{'queue':'product.order',
                  'routing_key':'product.order',
                  'internal_q':request_order_q}]
#                {'queue':'product.status',
#                 'routing_key':'product.status',
#                 'internal_q':request_status_q}]

def main():
    # Launch consumers
    for key in CONSUMER_KEYS:
        cons = consumer.RabbitConsumer(rabbit_q=key['queue'],
                                       routing_key=key['routing_key'],
                                       internal_q=key['internal_q'])
        cons.start()

    # Check request_order_q; if not empty, spawn a process and process the message
    while True:
        time.sleep(0.5)
        if not request_order_q.empty():
            handler = worker.Worker(request_order_q.get())
            logger.info('Launching Worker')
            handler.start()

if __name__ == "__main__":
    main()
And here is my Worker:
import multiprocessing
import sys
import time
import base

conf = base.get_settings()
logger = base.logger(identity='worker')

class Worker(multiprocessing.Process):

    def __init__(self, msg):
        super(Worker, self).__init__()
        self.msg = msg
        self.daemon = True

    def run(self):
        logger.info('%s' % self.msg)
        time.sleep(10)
        sys.exit(1)
So after all the messages get processed, I can still see the processes with the ps aux command. But I would really like them to be terminated once they are finished.
Thanks.
Using multiprocessing.active_children is better than Process.join. The function active_children cleans up any zombies created since the last call to active_children. The method join awaits the selected process. During that time, other processes can terminate and become zombies, but the parent process will not notice until the awaited process is joined. To see this in action:
import multiprocessing as mp
import time

def main():
    n = 3
    c = list()
    for i in range(n):
        d = dict(i=i)
        p = mp.Process(target=count, kwargs=d)
        p.start()
        c.append(p)
    for p in reversed(c):
        p.join()
        print('joined')

def count(i):
    print(f'{i} going to sleep')
    time.sleep(i * 10)
    print(f'{i} woke up')

if __name__ == '__main__':
    main()
The above will create 3 processes that terminate 10 seconds apart each. As the code is, the last process is joined first, so the other two, which terminated earlier, will be zombies for 20 seconds. You can see them with:
ps aux | grep Z
There will be no zombies if the processes are awaited in the sequence that they will terminate. Remove the call to the function reversed to see this case. However, in real applications we rarely know the sequence that children will terminate, so using the method multiprocessing.Process.join will result in some zombies.
The alternative active_children does not leave any zombies.
In the above example, replace the loop for p in reversed(c): with:
while True:
    time.sleep(1)
    if not mp.active_children():
        break
and see what happens.
A couple of things:
Make sure the parent joins its children, to avoid zombies. See Python Multiprocessing Kill Processes
You can check whether a child is still running with the is_alive() member function. See http://docs.python.org/2/library/multiprocessing.html#multiprocessing.Process
Use active_children.
multiprocessing.active_children
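Applied to the launcher in the question, that advice amounts to reaping finished workers from inside the while loop. Below is a minimal self-contained sketch of that pattern; the Worker class here is only a stand-in for the question's worker.Worker, with the RabbitMQ consumer and queues left out.

import multiprocessing
import time

class Worker(multiprocessing.Process):
    """Minimal stand-in for the question's worker.Worker."""

    def __init__(self, msg):
        super(Worker, self).__init__()
        self.msg = msg
        self.daemon = True

    def run(self):
        print('handling %s' % self.msg)
        time.sleep(2)

def main():
    for n in range(5):
        Worker('message %d' % n).start()
        time.sleep(1)
        # reap any workers that have already finished,
        # so they do not linger as zombies
        multiprocessing.active_children()
    # wait for the remaining children before exiting
    for child in multiprocessing.active_children():
        child.join()

if __name__ == '__main__':
    main()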
