Passing Variables to a Process - Python

I need help modifying this code so I can control what is occurring inside a running process. From what I've read, I either need a global variable that the process can read or an event to signal the process, but I don't know how to implement either inside a class method. I thought that if I followed the pyimagesearch code it would work, but it appears that approach only works with the threading module and not the multiprocessing module.
import RPi.GPIO as GPIO
from RPi.GPIO import LOW, OUT, HIGH, BCM
import multiprocessing as mp
import time

class TestClass():
    def __init__(self, PinOne=22, PinTwo=27):
        self.PinOne = PinOne
        self.PinTwo = PinTwo
        self.RunningSys = True
        GPIO.setmode(BCM)
        GPIO.setup(PinOne, OUT)
        GPIO.output(PinOne, LOW)
        GPIO.setup(PinTwo, OUT)
        GPIO.output(PinTwo, LOW)

    def Testloop(self):
        while self.RunningSys:
            GPIO.output(self.PinOne, HIGH)
            GPIO.output(self.PinTwo, HIGH)
            time.sleep(1)
            GPIO.output(self.PinOne, LOW)
            GPIO.output(self.PinTwo, LOW)
        GPIO.output(self.PinOne, LOW)
        GPIO.output(self.PinTwo, LOW)

    def StopPr(self):
        self.RunningSys = False

    def MProc(self):
        MPGP = mp.Process(target=TestClass().Testloop)
        MPGP.start()
        MPGP.join()
In a separate script:

from testfile import TestClass
import time

TestClass().MProc()
time.sleep(4)
TestClass().StopPr()
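Two things stop this from working: MProc and StopPr are called on two different TestClass instances, and even on one instance a plain attribute like RunningSys is copied into the child process rather than shared. A multiprocessing.Event, by contrast, is shared across the process boundary. Below is a minimal sketch of that approach with the GPIO calls stripped out; worker, run_demo, and the timings are illustrative names chosen here, not part of the original code:

```python
import multiprocessing as mp
import time

def worker(stop_event):
    # Runs in the child process; exits as soon as the parent sets the event.
    # The real version would toggle the GPIO pins inside this loop.
    while not stop_event.is_set():
        time.sleep(0.05)

def run_demo():
    stop_event = mp.Event()                        # shared between parent and child
    p = mp.Process(target=worker, args=(stop_event,))
    p.start()
    time.sleep(0.2)                                # let the loop spin a few times
    stop_event.set()                               # signal the child to stop
    p.join(timeout=5)
    return p.is_alive()                            # False once the child has exited

if __name__ == "__main__":
    print(run_demo())  # False
```

The event is passed to the Process as an argument, so both sides hold the same underlying synchronization primitive; setting it in the parent is immediately visible in the child, which a plain instance attribute never is.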


How can a multiprocessing pool be set up as a global variable?

I need to have multiprocessing pools available to multiple methods in multiple files as a global variable.
I have tried this using a setup with 5 files:
main.py
pool_settings.py
population_calculation.py
individual_calculation.py
inside_of_individual_calculation.py
Here is the code of all files:
pool_settings.py
numberOfIndividuals = 10
workersPerIndividuals = 5
import multiprocessing as mp

def init():
    global pool
    pool = mp.Pool(numberOfIndividuals)
    global listOfPools
    listOfPools = []
    for i in range(0, numberOfIndividuals):
        listOfPools.append(mp.Pool(workersPerIndividuals))
population_calculation.py
import multiprocessing as mp
import pool_settings
import individual_calculation as ic

def calculatePopulation():
    results_population = pool_settings.pool.map_async(ic.calculateIndividual, range(0, pool_settings.numberOfIndividuals)).get()
    print(results_population)
individual_calculation.py
import multiprocessing as mp
import pool_settings
import numpy as np
import inside_of_individual_calculation as ioic

def calculateIndividual(individual):
    data = []
    for i in range(0, pool_settings.workersPerIndividuals):
        data.append(np.random.rand(100))
    results_individual = pool_settings.listOfPools[mp.current_process()._identity[0]-1].map_async(ioic.calculateInsideIndividual, data).get()
    return sum(results_individual)
inside_of_individual_calculation.py
import numpy as np

def calculateInsideIndividual(individualRow):
    return individualRow.sum()
main.py
import pool_settings
import population_calculation
pool_settings.init()
population_calculation.calculatePopulation()
When I run main.py (python main.py) I get the following error:
"AttributeError: module 'pool_settings' has no attribute 'listOfPools'"
I have tried multiple ways and I always get the same error. How can I set up a multiprocessing pool as a global variable so that it is accessible to multiple methods in multiple files?
Thanks a lot,
Joe
P.S.: I also tried a setup in which each process of a multiprocessing pool would spin up another multiprocessing pool, and that didn't work either.
You may change pool_settings.py like this:
numberOfIndividuals = 10
workersPerIndividuals = 5
import multiprocessing as mp

def init():
    pool = mp.Pool(numberOfIndividuals)
    global listOfPools
    listOfPools = []
    for i in range(0, numberOfIndividuals):
        listOfPools.append(mp.Pool(workersPerIndividuals))
    return pool, listOfPools

pool, listOfPools = init()
That will allow you to use either from pool_settings import pool or import pool_settings; pool_settings.pool.
Both pool and listOfPools will be initialized automatically on the first import of pool_settings (instead of by manually calling pool_settings.init()).
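As a self-contained illustration of the pattern this answer relies on (module-level initialization, so every module that imports this one sees the same pool object), here is a minimal sketch; square is a stand-in task invented for the demo, and it assumes a start method like fork where creating a pool at import time is unproblematic:

```python
import multiprocessing as mp

def square(x):
    return x * x

# Module-level initialization: this runs exactly once, on first import,
# so all importers share the same Pool object.
pool = mp.Pool(2)

if __name__ == "__main__":
    print(pool.map(square, range(5)))  # [0, 1, 4, 9, 16]
```

Any other file can then do import this_module and call this_module.pool.map(...) without an explicit init step, which is what the rewritten pool_settings.py achieves.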

Thread library in the following code isn't working

What's the problem in the code below? It only shows two arrows when I run it.
Originally the first line was import thread, but because that gave an error (no module named 'thread'), I changed it to import threading:
import threading
import turtle

def f(painter):
    for i in range(3):
        painter.fd(50)
        painter.lt(60)

def g(painter):
    for i in range(3):
        painter.rt(60)
        painter.fd(50)

try:
    pat = turtle.Turtle()
    mat = turtle.Turtle()
    mat.seth(180)
    thread.start_new_thread(f, (pat,))
    thread.start_new_thread(g, (mat,))
    turtle.done()
except:
    print("hello")
    while True:
        pass
threading and thread are separate modules for dealing with threads in Python. The thread module is considered deprecated in Python 3 and was renamed to _thread for backward compatibility. In your case I assume you are trying to use the _thread module with Python 3.
So your code should be as follows:
import _thread
import turtle

def f(painter):
    for i in range(3):
        painter.fd(50)
        painter.lt(60)

def g(painter):
    for i in range(3):
        painter.rt(60)
        painter.fd(50)

try:
    pat = turtle.Turtle()
    mat = turtle.Turtle()
    mat.seth(180)
    _thread.start_new_thread(f, (pat,))
    _thread.start_new_thread(g, (mat,))
    turtle.done()
except:
    print("Hello")
    while True:
        pass
Since the _thread module is deprecated, it is better to move to the threading module.
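With the threading module, the same start-two-workers-and-wait pattern looks like the sketch below. The turtle calls are left out because Tkinter-based turtle is generally only safe to drive from the main thread, so a lock-protected list stands in for the drawing (visit and the names "pat"/"mat" are illustrative):

```python
import threading

results = []
lock = threading.Lock()

def visit(name):
    # stand-in for the drawing work: append under the lock
    with lock:
        results.append(name)

t1 = threading.Thread(target=visit, args=("pat",))
t2 = threading.Thread(target=visit, args=("mat",))
t1.start()
t2.start()
t1.join()   # join() replaces the busy "while True: pass" loop
t2.join()
print(sorted(results))  # ['mat', 'pat']
```

threading.Thread also gives you daemon flags, names, and join timeouts, none of which _thread.start_new_thread offers.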

Running two Python scripts in parallel from two different directories using multiprocessing

Below is my code for running two Python scripts in parallel using multiprocessing:
defs.py
import os

def pro(process):
    #print(process)
    os.system('python {}'.format(process))
Multiprocessing.py
import os
from multiprocessing import Pool
import multiprocessing as mp
import defs
import datetime
import pandas as pd

processes = ('python_code1.py', 'python_code2.py')

if __name__ == '__main__':
    pool = Pool(processes=4)
    start = datetime.datetime.now()
    print('Start:', start)
    pool.map(defs.pro, processes)
    end = datetime.datetime.now()
    print('End :', end)
    total = end - start
    print('Total :', total)
This code runs perfectly fine, but my requirement is to run 'python_code1.py' and 'python_code2.py' from two different directories, so I made the changes below in Multiprocessing.py:
path1 = r'C:\Users\code1\python_code1.py'
path2 = r'C:\Users\code2\python_code2.py'
processes = (path1,path2)
but this is not working for me.
My Multiprocessing.py and defs.py are kept at the path C:\Users\Multiprocessing\
Here is an elegant solution using asyncio. It serves as a foundation for multiple Python asynchronous frameworks that provide high-performance network and web servers, database connection libraries, distributed task queues, etc. It also has both high-level and low-level APIs to accommodate any kind of problem, and you might find the syntax easier, as I do:
import os
import asyncio
from functools import partial

def background(f):
    def wrapped(*args, **kwargs):
        # run_in_executor only forwards positional arguments,
        # so keyword arguments are bound with functools.partial
        return asyncio.get_event_loop().run_in_executor(None, partial(f, *args, **kwargs))
    return wrapped

@background
def pro(process):
    #print(process)
    os.system('python {}'.format(process))

processes = (r'C:\Users\code1\python_code1.py', r'C:\Users\code2\python_code2.py')
for process in processes:
    pro(process)
There is a detailed answer on parallelizing for loops that you might find useful.
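The asyncio wrapper above still shells out via os.system, which leaves the working directory unchanged and is sensitive to how the shell parses the path. One likely culprit for "not working" is that the scripts use paths relative to their own directory. A sketch using subprocess instead, which passes the path as a single argument (no shell parsing) and runs each script from its own directory; this pro is a rewritten stand-in for defs.pro, not the asker's exact code:

```python
import os
import subprocess
import sys

def pro(script_path):
    # Run the script with the same interpreter, from its own directory,
    # and return whatever it printed.
    result = subprocess.run(
        [sys.executable, script_path],
        cwd=os.path.dirname(script_path) or ".",
        capture_output=True,
        text=True,
    )
    return result.stdout
```

pool.map(pro, (path1, path2)) works unchanged with this version, since each pool worker simply spawns one interpreter on the path it is given.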

Python's threads block on IO operation

I have the following problem: whenever a child thread wants to perform some IO operation (writing to a file, downloading a file), the program hangs. In the following example the program hangs on opener.retrieve. If I execute python main.py, the program blocks in the retrieve function; if I execute python ./src/tmp.py, everything is fine. I don't understand why. Can anybody explain what is happening?
I am using Python 2.7 on a Linux system (kernel 3.5.0-27).
File ordering:
main.py
./src
    __init__.py
    tmp.py
main.py
import src.tmp
tmp.py
import threading
import urllib

class DownloaderThread(threading.Thread):
    def __init__(self, pool_sema, i):
        threading.Thread.__init__(self)
        self.pool_sema = pool_sema
        self.daemon = True
        self.i = i

    def run(self):
        try:
            opener = urllib.FancyURLopener({})
            opener.retrieve("http://www.greenteapress.com/thinkpython/thinkCSpy.pdf", "/tmp/" + str(self.i) + ".pdf")
        finally:
            self.pool_sema.release()

class Downloader(object):
    def __init__(self):
        maxthreads = 1
        self.pool_sema = threading.BoundedSemaphore(value=maxthreads)

    def download_folder(self):
        for i in xrange(20):
            self.pool_sema.acquire()
            print "Downloading", i
            t = DownloaderThread(self.pool_sema, i)
            t.start()

d = Downloader()
d.download_folder()
I managed to get it to work by hacking urllib.py - if you inspect it you will see many import statements dispersed within the code, i.e. it imports things 'on the fly' rather than only when the module loads.
So the real reason is still unknown - but probably not worth investigating - most likely some deadlock in Python's import system. You just shouldn't run nontrivial code during an import; that's asking for trouble.
If you insist, you can get it to work by moving all of those scattered import statements to the beginning of urllib.py.
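Following the advice above, the cleanest fix is to stop running the download at import time: tmp.py should only define things, and main.py should call an entry point explicitly. A Python 3 sketch of that restructuring, with the network call replaced by a stand-in so only the structure is shown (the results list and n parameter are additions for the demo):

```python
import threading

class DownloaderThread(threading.Thread):
    def __init__(self, pool_sema, i, results):
        threading.Thread.__init__(self)
        self.pool_sema = pool_sema
        self.i = i
        self.results = results

    def run(self):
        try:
            # stand-in for opener.retrieve(...): record which download ran
            self.results.append(self.i)
        finally:
            self.pool_sema.release()

def download_folder(n=5):
    pool_sema = threading.BoundedSemaphore(value=1)
    results, threads = [], []
    for i in range(n):
        pool_sema.acquire()
        t = DownloaderThread(pool_sema, i, results)
        t.start()
        threads.append(t)
    for t in threads:
        t.join()
    return results

# Nothing runs at import time; the importer triggers the work explicitly.
if __name__ == "__main__":
    print(download_folder())  # [0, 1, 2, 3, 4]
```

Because import src.tmp now only defines classes and functions, no thread can touch Python's import machinery while main.py's import is still in progress, which sidesteps the hang entirely.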

Strange problems when using requests and multiprocessing

Please check this Python code:
#!/usr/bin/env python
import requests
import multiprocessing
from time import sleep, time
from requests import async

def do_req():
    r = requests.get("http://w3c.org/")

def do_sth():
    while True:
        sleep(10)

if __name__ == '__main__':
    do_req()
    multiprocessing.Process(target=do_sth, args=()).start()
When I press Ctrl-C (waiting 2 seconds after the run, so the Process is running), it doesn't stop. When I change the import order to:
from requests import async
from time import sleep, time
it stops after Ctrl-C. Why doesn't it stop/get killed in the first example?
Is it a bug or a feature?
Notes:
Yes, I know that I didn't use async in this code; this is stripped-down code (in the real code I do use it). I did this to simplify my question.
After pressing Ctrl-C there is a new (child) process running. Why?
multiprocessing.__version__ == 0.70a1, requests.__version__ == 0.11.2, gevent.__version__ == 0.13.7
The requests async module uses gevent. If you look at the source code of gevent, you will see that it monkey-patches many of Python's standard library functions, including sleep. During import, the requests.async module executes:

from gevent import monkey as curious_george
# Monkey-patch.
curious_george.patch_all(thread=False, select=False)
Looking at the monkey.py module of gevent you can see:
https://bitbucket.org/denis/gevent/src/f838056c793d/gevent/monkey.py#cl-128
def patch_time():
    """Replace :func:`time.sleep` with :func:`gevent.sleep`."""
    from gevent.hub import sleep
    import time
    patch_item(time, 'sleep', sleep)
Take a look at the code from the gevent's repository for details.
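gevent's patch_item boils down to a setattr on the target module. The toy sketch below shows the same mechanism against a throwaway module object, so the real time module stays untouched; fake_time and both lambdas are invented purely for illustration and have nothing to do with gevent itself:

```python
import types

def patch_item(module, attr, newitem):
    # same idea as gevent's patch_item: swap a module attribute in place
    setattr(module, attr, newitem)

# a throwaway stand-in module, so nothing real gets patched
fake_time = types.ModuleType("fake_time")
fake_time.sleep = lambda seconds: "blocking sleep"

patch_item(fake_time, "sleep", lambda seconds: "cooperative sleep")
print(fake_time.sleep(1))  # cooperative sleep
```

This is why the import order in the question matters: whoever references time.sleep after the patch gets gevent's cooperative version, and gevent's version does not respond to Ctrl-C the way the blocking one does.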
