For some reason the code never gets to the main function. I am using Cloud9 to run the code, so that might be the issue.
from multiprocessing import Process, Value
import time
def main():
    print "main function"

def market_price_thread():
    while True:
        market_price()
        time.sleep(5)

def market_price():
    #do something
    print "end"

def start_threads():
    thread = Process(target=market_price_thread())
    thread.start()
    time.sleep(5)

if __name__ == '__main__':
    start_threads()
    main() #does not seem to get to this
You've asked Python to call market_price_thread:
thread = Process(target=market_price_thread())
and then to use whatever it returns as the target value. So, before calling Process, we have to wait for market_price_thread to return. What value does it return, and when?
(Compare with Process(target=market_price_thread), which does not call market_price_thread yet, but rather, passes the function to Process so that Process can call it.)
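A minimal sketch of the fix, with a daemon flag added by me (not part of the original question) so the looping child dies when the main process exits, and the sleeps shortened for the demo:

```python
from multiprocessing import Process
import time

def market_price():
    # stand-in for the real work
    print("end")

def market_price_thread():
    while True:
        market_price()
        time.sleep(1)  # shortened from 5 for the demo

def start_threads():
    # Pass the function itself (no parentheses); Process calls it in the child.
    thread = Process(target=market_price_thread)
    thread.daemon = True  # so the looping child dies with the main process
    thread.start()
    time.sleep(1)

if __name__ == '__main__':
    start_threads()
    print("main function")  # now reached, because start_threads() returns
```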
I'm running a script on my Raspberry Pi. Sometimes the program freezes, so I have to close the terminal and re-run the .py.
So I wanted to "multiprocess" this program. I made two functions: the first one does the work; the second one's job is to check the time and kill the first function's process when a condition is true.
This is what I tried:
def AntiFreeze():
    print("AntiFreeze partito\n")
    global stop
    global endtime
    global freq

    proc_SPN = multiprocessing.Process(target=SPN(), args=())
    proc_SPN.start()
    time.sleep(2)
    proc_SPN.terminate()
    proc_SPN.join()

if __name__ == '__main__':
    proc_AF = multiprocessing.Process(target=AntiFreeze(), args=())
    proc_AF.start()
The main block starts the "AntiFreeze" function in a process, which in turn creates another process to run the function that does the job I want.
THE PROBLEM (I think):
The function "SPN()" (the one that does the job) is busy in a very long while loop that calls functions in another .py file.
So when I use proc_SPN.terminate() or proc_SPN.kill(), nothing happens... why?
Is there another way to force a process to be killed? Maybe I have to write two different programs?
Thanks in advance for your help
You are calling your function at process creation, so most likely the process is never correctly spawned. Your code should be changed into:
def AntiFreeze():
    print("AntiFreeze partito\n")
    global stop
    global endtime
    global freq

    proc_SPN = multiprocessing.Process(target=SPN, args=())
    proc_SPN.start()
    time.sleep(2)
    proc_SPN.terminate()
    proc_SPN.join()

if __name__ == '__main__':
    proc_AF = multiprocessing.Process(target=AntiFreeze, args=())
    proc_AF.start()
Furthermore, you shouldn't use globals (unless strictly necessary). You could instead pass the needed arguments to the AntiFreeze function.
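For illustration, a sketch of the same structure with parameters instead of globals — the SPN body and the timing values here are placeholders of mine, not the asker's real code:

```python
import multiprocessing
import time

def SPN(freq):
    # placeholder for the real long-running job
    while True:
        time.sleep(freq)

def AntiFreeze(endtime, freq):
    print("AntiFreeze partito\n")
    proc_SPN = multiprocessing.Process(target=SPN, args=(freq,))
    proc_SPN.start()
    time.sleep(endtime)
    proc_SPN.terminate()  # works, because SPN now runs in a real child process
    proc_SPN.join()
    return proc_SPN.exitcode

if __name__ == '__main__':
    proc_AF = multiprocessing.Process(target=AntiFreeze, args=(1, 0.1))
    proc_AF.start()
    proc_AF.join()
```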
I have a multiprocessing pool that runs with 1 thread, and it keeps repeating the code before my function. I have tried different thread counts. I make things like this quite a bit, so I think I know what is causing the problem, but I don't understand why: usually I use argparse to parse files from the user, but this time I wanted to use input instead. No errors are thrown, so I honestly have no clue.
from colorama import Fore
import colorama
import os
import ctypes
import multiprocessing
from multiprocessing import Pool
import random

colorama.init(autoreset=False)
print("headerhere")

#as you can see i used input instead of argparse
g = open(input(Fore.RED + " File Path?: " + Fore.RESET))
gg = open(input(Fore.RED + "File Path?: " + Fore.RESET))
#I messed around with this to see if it was the problem, ultimately disabling it until i fixed it, i just use 1 thread
threads = int(input(Fore.RED + "Amount of Threads?: " + Fore.RESET))
arrange = [lines.replace("\n", "") for lines in g]
good = [items.replace("\n", "") for items in gg]

#this is all of the code before the function that Pool calls
def che(line):
    print("f")
    #i would show my code but as i said this isnt the problem since ive made programs like this before, the only thing i changed is how i take file inputs from the user

def main():
    pool = Pool(1)
    pool.daemon = True
    result = pool.map(che, arrange)

if __name__ == "__main__":
    main()
Here's a minimal, reproducible example of your issue:
from multiprocessing import Pool

print('header')

def func(n):
    print(f'func {n}')

def main():
    pool = Pool(3)
    pool.map(func, [1, 2, 3])

if __name__ == '__main__':
    main()
On OSes where "spawn" (Windows and macOS) or "forkserver" (some Unix) is the default start method, each sub-process imports your script. Since print('header') is at global scope, it runs the first time the script is imported into a process, so the output is:
header
header
header
header
func 1
func 2
func 3
A multiprocessing script should keep everything meant to run once inside functions, and those functions should be called once by the main script via if __name__ == '__main__':, so the solution is to move the print into your def main()::
from multiprocessing import Pool

def func(n):
    print(f'func {n}')

def main():
    print('header')
    pool = Pool(3)
    pool.map(func, [1, 2, 3])

if __name__ == '__main__':
    main()
Output:
header
func 1
func 2
func 3
If you want the top level code before the definition of che to only be executed in the master process, then place it in a function and call that function in main.
In multiprocessing, top-level statements are interpreted/executed by both the master process and every child process. So, if some code should be executed only by the master and not by the children, it should not be placed at the top level. Instead, put it in functions and invoke those functions from the block guarded by if __name__ == '__main__': (or from the main function in your code snippet).
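As a hedged sketch of that restructuring (the body of che, the helper name load_lines, and the demo file contents are placeholders of mine), the file reads happen only when the master calls main():

```python
from multiprocessing import Pool

def che(line):
    return line.upper()  # placeholder for the real per-line work

def load_lines(path):
    # runs only in the master process, because only main() calls it
    with open(path) as f:
        return [line.rstrip("\n") for line in f]

def main(path):
    arrange = load_lines(path)
    with Pool(2) as pool:
        return pool.map(che, arrange)

if __name__ == "__main__":
    import tempfile, os
    # hypothetical input file, just for the demo
    with tempfile.NamedTemporaryFile("w", suffix=".txt", delete=False) as tmp:
        tmp.write("alpha\nbeta\n")
    print(main(tmp.name))
    os.unlink(tmp.name)
```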
I wanted to run a function that repeats itself while my main code (I guess it is called the main thread) is still running, so I did this. There is probably a better way of doing it, but I am new to coding and Python, so I have no idea what I am doing.
import threading
import time
def x():
    print("hey")
    time.sleep(1)
    x()

t = threading.Thread(target=x)
t.daemon = True
t.start()
When I make daemon False it repeats itself, but when I stop the program I get an error.
CPython (the reference implementation of Python) does not implement Tail Call Optimization (TCO).¹ This means you can't recurse without bound: the depth is limited, and you get a RecursionError (a RuntimeError in older versions) when you hit that limit.
sys.getrecursionlimit() # 1000 by default
So instead of calling x() from within x() again, make a while True-loop within x():
import threading
import time

def x():
    while True:
        print("hey")
        time.sleep(1)

t = threading.Thread(target=x, daemon=True)
t.start()

time.sleep(10) # do something, sleep for demo
¹ Stackless Python is a Python implementation without a recursion limit.
I want to run a function independently. From the function I call it in, I want to return without waiting for the other function to end.
I tried with threading, but it still waits for the end:
thread = threading.Thread(target=myFunc)
thread.daemon = True
thread.start()
return 'something'
Is it possible to return immediately and the other process still run?
Thanks for the Answers.
EDITED
The working code looks like:
import concurrent.futures
executor = concurrent.futures.ThreadPoolExecutor(2)
executor.submit(myFunc, arg1, arg2)
You are more or less asking the following question:
Is it possible to run function in a subprocess without threading or writing a separate file/script
You have to change the example code from the link like this:
from multiprocessing import Process

def myFunc():
    pass # whatever function you like

p = Process(target=myFunc)
p.start() # start execution of myFunc() asynchronously
print('something')

p.start() returns immediately and myFunc() runs asynchronously, i.e. 'something' is printed out right away, no matter how time-consuming the execution of myFunc() is. The script executes myFunc() and does not wait for it to finish.
If I understood your request correctly, you might want to take a look at worker queues:
https://www.djangopackages.com/grids/g/workers-queues-tasks/
Basically it's not a good idea to offload the work to a thread created in the view; this is usually handled by a pool of background workers (processes, threads) and a queue for incoming requests.
I think the syntax you are using is correct and I don't see why your request shouldn't return immediately. Did you verify that the request actually hangs until the thread is over?
I would suggest having myFunc write to a file so you can track this:
def myFunc():
    f = open('file.txt', 'w')
    while True:
        f.write('hello world')
I tried to look for an answer, but couldn't find anything relevant. Hence, the decision to ask.
I have a script A. At the beginning of script A, it calls script B (or a function, either works) in a separate thread.
A continues to do some tasks. I want script A to keep executing until script B finishes.
How do I listen for B finishing while continuing with A's tasks?
For example,
Call script B using subprocess, or import the file and run the function (either way)

while(1):
    count = count + 1
    if script B ended:  # pseudocode
        break
Can anyone illustrate how to check the "script B ended" part?
Here's a really simple way to do what you want:
import time
from threading import Thread
def stupid_work():
    time.sleep(4)

if __name__ == '__main__':
    t = Thread(target=stupid_work)
    t.start()
    while 1:
        if not t.is_alive():
            print('thread is done')
            break # or whatever
        else:
            print('thread is working')
            time.sleep(1)
The thread will die when it's done, so you just check intermittently to see if it's still around. You did not mention you want a return value. If you do, then you can pass the target function a queue, and replace if not t.is_alive() with if not q.empty(). Then do a q.get() to retrieve the return value when it's ready. And be sure to have the target put the return value in the queue, or you'll be waiting for quite a while.
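A sketch of that queue variant — the 42 result is made up for the demo; the target puts its return value in the queue, and the loop polls q.empty() instead of t.is_alive():

```python
import time
from queue import Queue
from threading import Thread

def stupid_work(q):
    time.sleep(0.5)  # stand-in for the real work
    q.put(42)        # the target must put its return value in the queue

if __name__ == '__main__':
    q = Queue()
    t = Thread(target=stupid_work, args=(q,))
    t.start()
    while q.empty():          # poll until the result is ready
        print('thread is working')
        time.sleep(0.1)
    print('thread is done, returned', q.get())
```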
If you're using the subprocess module you can do something like this.
from subprocess import Popen

proc = Popen(["sleep", "100"])
while True:
    if proc.poll() is not None:
        print("proc is done")
        break
More on subprocess and poll here.