I'd like to run multiple Python scripts in parallel and start them from a master script. I did find solutions for this in previously asked questions; however, none of them worked when the scripts running in parallel contained loops.
Let's for example define two scripts.
Script 1:
array_1 = []
x = 0
while True:
    array_1.append(x)
    x = x + 1
Script 2:
array_2 = []
x = 0
while True:
    array_2.append(x)
    x = x + 1
Now I want to run both processes simultaneously. Previous solutions suggested the following code for a master script:
import script_1, script_2
exec(open('script_1.py').read())
exec(open('script_2.py').read())
While this is a way to start scripts from within another script, it does not run the two scripts in parallel.
What should such a master script actually look like ?
Thanks for your suggestions!
Edit
I tried the following threading approach:
def function_1():
    print('function 1 started...')
    while True:
        print('1')
        sleep(1)

def function_2():
    print('function 2 started...')
    while True:
        print('2')
        sleep(1)
thread_1 = Thread(target=function_1())
thread_2 = Thread(target=function_2())
thread_1.start()
thread_2.start()
thread_1.join()
thread_2.join()
print("thread finished")
It doesn't work; only the first function gets started, so I get the following output:
function 1 started...
1
1
1
1
1
1
When you want to spawn a new thread, you need to pass the function object itself as the target, not call it. What your code does here is call function_1() while constructing the first Thread, so that call loops forever in the main thread and thread_2 is never even created.
Also, you won't be able to reach this line of code:
print("thread finished")
because both threads execute their while loops forever, so that line is effectively redundant. Here is the corrected code:
from time import sleep
from threading import Thread
def function_1():
    print('function 1 started...')
    while True:
        print('1')
        sleep(1)

def function_2():
    print('function 2 started...')
    while True:
        print('2')
        sleep(1)
thread_1 = Thread(target=function_1)
thread_2 = Thread(target=function_2)
thread_1.start()
thread_2.start()
thread_1.join()
thread_2.join()
# print("thread finished") - redundant
Related
I am trying to code an alarm in Python that has 6 functions that need to multithread. Five of these are alarms and one of them displays the time. The threads need to start and stop whenever the menu option is selected and when the alarm rings. The display thread is the only thread that keeps going until the program stops. My current code for the alarm looks like this (I've removed a lot for the sake of clarity):
class TAlarm1(threading.Thread):
    def Alarm1():
        while True:
            #code which keeps running until the time is equal to the input given (expected to thread)

thread1 = threading.Thread(target=TAlarm1)
thread1.start()

def AlarmSelector():
    print("Select an Alarm")  #5 alarms will be added, however each one accomplishes the same task; all of them need to run simultaneously
    choice = int(input())
    if choice == 1:
        ala = TAlarm1()
        ala.Alarm1()
    if choice == 6:
        DisplayTime()  #goes back to displaying time
Whenever I run this code, the program displays no errors; however, it does not run the code in TAlarm1().
How can I solve this problem?
While your intent isn't clear to me, here is how you can subclass Thread, override its run method, and start it conditionally:
import threading
class TAlarm1(threading.Thread):
    def run(self):
        n = 4
        while True:
            #code which keeps running until the time is equal to the input given (expected to thread)
            print(n, end=' | ')
            n -= 1
            if n < 0:
                break
        print()

t1 = TAlarm1()
if True:
    t1.start()
A thread can only be started once so you have to make a new one every time you need it to run.
>>> t = TAlarm1()
>>> t.start()
4 | 3 | 2 | 1 | 0 |
>>> t.start()
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "C:\Python38\lib\threading.py", line 848, in start
raise RuntimeError("threads can only be started once")
RuntimeError: threads can only be started once
>>> t = TAlarm1()
>>> t.start()
4 | 3 | 2 | 1 | 0 |
>>>
The target parameter of Thread takes a callable. A class is a callable, but calling it just creates an instance of the class. Pass it a function instead:
import threading
def Alarm1():
    print('Alarm1 called')

thread1 = threading.Thread(target=Alarm1)
thread1.start()
There are two basic ways of implementing threaded code in Python. You seem to have half of each.
The first implementation model is to put the logic to run in the thread into a function, then pass that function as the target argument when you create a threading.Thread instance:
import threading
import time
def worker(n):
    for i in range(n):
        print(i)
        time.sleep(0.5)
my_thread = threading.Thread(target=worker, args=(10,))
my_thread.start()
# do other stuff in the main thread, if desired
my_thread.join()
The other implementation approach is to subclass threading.Thread and put the code to be run in the thread inside of the run method (or in other methods called from run). This is especially useful if your thread code has some complicated state and you want to be able to use additional methods to manipulate that state while the thread is running:
class MyThread(threading.Thread):
    def __init__(self, n):
        super().__init__()
        self.n = n
        self.unpaused = threading.Event()
        self.unpaused.set()  # we start unpaused

    def run(self):
        for i in range(self.n):
            self.unpaused.wait()  # block if we're paused
            print(i)
            time.sleep(0.5)

    def pause(self):
        self.unpaused.clear()

    def unpause(self):
        self.unpaused.set()
my_thread = MyThread(10)
my_thread.start()
# an example of inter-thread communication, we pause and unpause our thread using its methods
time.sleep(2)
my_thread.pause()
time.sleep(2)
my_thread.unpause()
my_thread.join()
I am making a simple project to learn about threading and this is my code:
import time
import threading
x = 0
def printfunction():
    while x == 0:
        print("process running")

def timer(delay):
    while True:
        time.sleep(delay)
        break
    x = 1
    return x
t1 = threading.Thread(target = timer,args=[3])
t2 = threading.Thread(target = printfunction)
t1.start()
t2.start()
t1.join()
t2.join()
It is supposed to just print process running in the console for three seconds, but it never stops printing. The console shows me no errors, and I have tried shortening the time to see if I wasn't waiting long enough, but it still doesn't work. Then I tried to delete the t1.join() and t2.join(), but I still have no luck and the program continues running.
What am I doing wrong?
Add
global x
to the top of timer(). As is, because timer() assigns to x, x is considered to be local to timer(), and its x = 1 has no effect on the module-level variable also named x. The global x remains 0 forever, so the while x == 0: in printfunction() always succeeds. It really has nothing to do with threading :-)
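With that one-line change, timer() looks like this (everything else stays the same):

def timer(delay):
    global x          # assign to the module-level x, not a new local
    while True:
        time.sleep(delay)
        break
    x = 1
    return x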
I am working on Python 3 and my class is as below.
class MyClass():
    def values(self):
        ***values***

    i = 0
    def check_values(self):
        for i in ValueList[i:i+1]:
            self.server_connect()
            new_value = self.update.values(i)

    def run(self):
        self.check_values()

if __name__ == "__main__":
    format1 = "%(asctime)s: %(message)s"
    logging.basicConfig(format=format1, level=logging.INFO,
                        datefmt="%H:%M:%S")
    for i in range(4):
        thread = threading.Thread(target=MyClass().run())
        threads.append(thread)
        i += 1
        print("the %s thread is running", thread)
        thread.start()
No threads are getting created, but the code still works.
I am not able to figure out what I am doing wrong here.
EDIT
First, I would like to thank you for the response and the time given to the answer.
I have had to update the code and inherit another class, as per a new update from the team, as below.
class MyClass(MainServer):
Now, the server has its own run function, as below.
class MainServer(object):
    ***constructor***
    ***other functions***

    def run(self):
        self.add_arguments()
        self.parse_arguments()
        self.check_values()
Now, without run(), my code does not run properly, while including run() as below:
*** main ***
update_perform = MyClass()
for i in range(4):
    thread = threading.Thread(target=MyClass().run())  # <-- code starts from here
    threads.append(thread)
    i += 1
    print("the %s thread is running", thread)
    thread.start()  # <-- not reaching till here
As per my knowledge, I require thread.start() to actually start a thread, so I have tried the option below:
class MyClass(MainServer):
    ***code as above***

    def check_values(self):
        self.server_authenticate()
        update_value = self.update.values()

    def run(self):
        self.server_connect()
        i = 0
        threads = list()
        for i in ValueList[i:i+1]:
            print("Updating the value = ", i)
            thread = threading.Thread(target=check_values(), args=[i])
            thread.start()
            i += 1
            print("Currently running thread", thread)
            threads.append(thread)
        for thread in threads:
            thread.join()
Here the thread is executing from the start, and in the print output I can see the following.
For threading:
Currently running threads = <Thread(Thread-8, stopped 14852)>
But for the value I can see that only one is in process.
For value:
Updating the value = 10  <- first value
So the threads may be getting created, but the values are not getting executed in parallel, which I am not able to figure out.
Modify the run function like this:
def run(self):
    self.check_values()
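Also note that in your main block, target=MyClass().run() calls run() immediately in the main thread, which is why execution "starts from here" and never reaches thread.start(). Pass the bound method itself instead. A minimal sketch of the main loop with that change, keeping the rest of your class as-is:

threads = []
for i in range(4):
    thread = threading.Thread(target=MyClass().run)  # no parentheses: pass the method, don't call it
    threads.append(thread)
    print("the %s thread is running" % thread)
    thread.start()

for thread in threads:
    thread.join()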
I have some Python code that looks like this:
import time
from threading import Thread
from ltpd import *

def thread_function():
    for i in range(5):
        if activatewindow('Confirm New Extension'):
            generatekeyevent('<left><space>')
            break
        time.sleep(1)

def main():
    for i in range(some_big_range):
        thread = Thread(target=thread_function)
        thread.start()
        # Code that runs for really long time
I was expecting a new thread to be created for every i in the loop. However, the thread is only being created once. I need the thread to be started freshly for every iteration of the for loop. Can anyone tell me what's wrong and how to fix it?
Every iteration, a new thread is started:
>>> from threading import Thread
>>> def fun(cnt):
...     print cnt
...
>>> for i in range(5):
...     thread = Thread(target=fun, args=(i,))
...     thread.start()
...
0
1
2
3
>>> 4
I've read a lot of posts about using threads, subprocesses, etc. A lot of it seems overcomplicated for what I'm trying to do...
All I want to do is stop executing a function after X amount of time has elapsed.
def big_loop(bob):
    x = bob
    start = time.time()
    while True:
        print time.time() - start
This function is an endless loop that never throws any errors or exceptions, period.
I"m not sure the difference between "commands, shells, subprocesses, threads, etc.." and this function, which is why I'm having trouble manipulating subprocesses.
I found this code here and tried it, but as you can see it keeps printing after 10 seconds have elapsed:
import time
import threading
import subprocess as sub

class RunCmd(threading.Thread):
    def __init__(self, cmd, timeout):
        threading.Thread.__init__(self)
        self.cmd = cmd
        self.timeout = timeout

    def run(self):
        self.p = sub.Popen(self.cmd)
        self.p.wait()

    def Run(self):
        self.start()
        self.join(self.timeout)
        if self.is_alive():
            self.p.terminate()
            self.join()

def big_loop(bob):
    x = bob
    start = time.time()
    while True:
        print time.time() - start

RunCmd(big_loop('jimijojo'), 10).Run()  #supposed to quit after 10 seconds, but doesn't
x = raw_input('DONEEEEEEEEEEEE')
What's a simple way this function can be killed? As you can see in my attempt above, it doesn't terminate after 10 seconds and just keeps on going...
***Oh, also, I've read about using signal, but I'm on Windows so I can't use the alarm feature (Python 2.7).
**Assume the "infinitely running function" can't be manipulated or changed to be non-infinite; if I could change the function, I'd just change it to be non-infinite, wouldn't I?
Here are some similar questions, whose code I haven't been able to port over to work with my simple function:
Perhaps you can?
Python: kill or terminate subprocess when timeout
signal.alarm replacement in Windows [Python]
OK, I tried an answer I received and it works, but how can I use it if I remove the if __name__ == "__main__": statement? When I remove this statement, the loop never ends, as it did before:
import multiprocessing
import Queue
import time

def infinite_loop_function(bob):
    var = bob
    start = time.time()
    while True:
        time.sleep(1)
        print time.time() - start
    print 'this statement will never print'

def wrapper(queue, bob):
    result = infinite_loop_function(bob)
    queue.put(result)
    queue.close()

#if __name__ == "__main__":
queue = multiprocessing.Queue(1)  # Maximum size is 1
proc = multiprocessing.Process(target=wrapper, args=(queue, 'var'))
proc.start()

# Wait for TIMEOUT seconds
try:
    timeout = 10
    result = queue.get(True, timeout)
except Queue.Empty:
    # Deal with lack of data somehow
    result = None
finally:
    proc.terminate()

print 'running other code, now that that infinite loop has been defeated!'
print 'bla bla bla'
x = raw_input('done')
Use the building blocks in the multiprocessing module:
import multiprocessing
import Queue

TIMEOUT = 5

def big_loop(bob):
    import time
    time.sleep(4)
    return bob * 2

def wrapper(queue, bob):
    result = big_loop(bob)
    queue.put(result)
    queue.close()

def run_loop_with_timeout():
    bob = 21  # Whatever sensible value you need
    queue = multiprocessing.Queue(1)  # Maximum size is 1
    proc = multiprocessing.Process(target=wrapper, args=(queue, bob))
    proc.start()

    # Wait for TIMEOUT seconds
    try:
        result = queue.get(True, TIMEOUT)
    except Queue.Empty:
        # Deal with lack of data somehow
        result = None
    finally:
        proc.terminate()

    # Process data here, not in try block above, otherwise your process keeps running
    print result

if __name__ == "__main__":
    run_loop_with_timeout()
You could also accomplish this with a Pipe/Connection pair, but I'm not familiar with their API. Change the sleep time or TIMEOUT to check the behaviour for either case.
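For reference, a rough sketch of the same timeout pattern with a Pipe, reusing big_loop and TIMEOUT from above (the helper names here are just illustrative): poll() blocks for at most the given number of seconds and reports whether anything arrived.

def pipe_wrapper(conn, bob):
    conn.send(big_loop(bob))
    conn.close()

def run_loop_with_timeout_pipe():
    bob = 21
    parent_conn, child_conn = multiprocessing.Pipe()
    proc = multiprocessing.Process(target=pipe_wrapper, args=(child_conn, bob))
    proc.start()

    if parent_conn.poll(TIMEOUT):   # wait up to TIMEOUT seconds for data
        result = parent_conn.recv()
    else:
        result = None               # timed out, no data arrived
    proc.terminate()
    print(result)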
There is no straightforward way to kill a function after a certain amount of time without running the function in a separate process. A better approach would probably be to rewrite the function so that it returns after a specified time:
import time

def big_loop(bob, timeout):
    x = bob
    start = time.time()
    end = start + timeout
    while time.time() < end:
        print time.time() - start
        # Do more stuff here as needed
Can't you just return from the loop?
def big_loop(bob):
    start = time.time()
    endt = start + 30
    while True:
        now = time.time()
        if now > endt:
            return
        else:
            print now - start
import os, signal, time

cpid = os.fork()
if cpid == 0:
    while True:
        pass  # do stuff
else:
    time.sleep(10)
    os.kill(cpid, signal.SIGKILL)
You can also have the thread's loop check for an event, which is more portable and flexible as it allows other reactions than brute killing. However, this approach fails if # do stuff can take a long time (or even wait forever on some event).
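A minimal sketch of that event-based approach (names are illustrative): the loop checks a threading.Event flag each iteration and exits once it is set.

import threading
import time

stop_event = threading.Event()

def big_loop(bob):
    x = bob
    start = time.time()
    while not stop_event.is_set():   # keep looping until asked to stop
        print(time.time() - start)
        time.sleep(0.5)

worker = threading.Thread(target=big_loop, args=('jimijojo',))
worker.start()

time.sleep(10)      # let it run for 10 seconds
stop_event.set()    # signal the loop to stop
worker.join()
print('loop stopped cleanly')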