Why doesn't the second python function run? Infinite loop - python

I have to make two separate functions, one that prints 'Ha' after every 1 second, and another one that prints 'Ho' after every 2 seconds.
But b() is greyed out, and it won't run.
import time

def a():
    while True:
        time.sleep(1)
        print('Ha')

def b():
    while True:
        time.sleep(2)
        print('Ho')

a()
b()
Why is the second function not running?
Edit: I have to have 2 separate functions that can both run infinitely.

The b() function is never called because a() never returns. Here's a simple approach that achieves what you are looking for:
import time

count = 0
while True:
    time.sleep(1)
    print("Ha")
    if count % 2:
        print("Ho")
    count += 1
If you made your program multi-threaded, you could run a() in one thread and b() in another thread and your approach would work.
Here's a multi-threaded version of your program, which allows both your a() and b() functions to run somewhat simultaneously:
import time
from threading import Thread

class ThreadA(Thread):
    def a():
        while True:
            time.sleep(1)
            print('Ha')
    def run(self):
        ThreadA.a()

class ThreadB(Thread):
    def b():
        while True:
            time.sleep(2)
            print('Ho')
    def run(self):
        ThreadB.b()

ThreadA().start()
ThreadB().start()
Edit: Here's a simpler multi-threaded version which allows you to specify the function to execute when you start the thread (I probably should look through the threading module for this feature - it seems like it should be in there):
import time
from threading import Thread

class ThreadAB(Thread):
    def run(self):
        self.func()
    def start(self, func):
        self.func = func
        super().start()

def a():
    while True:
        time.sleep(1)
        print('Ha')

def b():
    while True:
        time.sleep(2)
        print('Ho')

ThreadAB().start(a)
ThreadAB().start(b)
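In fact the threading module already has this feature built in: Thread takes the function to run via its target argument, so no subclass is needed. A minimal sketch (with finite loops and shorter sleeps so the demo terminates, where the original uses while True):

```python
import time
from threading import Thread

def a():
    for _ in range(3):          # 'while True' in the original; finite here so the demo exits
        time.sleep(0.1)
        print('Ha')

def b():
    for _ in range(2):
        time.sleep(0.2)
        print('Ho')

# Thread already accepts the function to run via its target argument
ta = Thread(target=a)
tb = Thread(target=b)
ta.start()
tb.start()
ta.join()
tb.join()
```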
Here's an absolutely horrible solution that runs both functions in different processes without threads or classes:
#!/usr/bin/env python
import time
import sys
import os

def a():
    while True:
        time.sleep(1)
        print('Ha')

def b():
    while True:
        time.sleep(2)
        print('Ho')

if len(sys.argv) > 1:
    eval(sys.argv[1])
else:
    os.system(f"{sys.argv[0]} 'a()' &")
    os.system(f"{sys.argv[0]} 'b()' &")
For the above solution to work, I made my program executable and ran it from the command line like this:
command
The results were somewhat awful. I kicked off two programs running at the same time in the background. One of the programs printed Ha and the other Ho. They both were running in the background so I had to use the following command to kill them:
ps -ef | grep command | awk '{print $2}' | xargs kill
Edit: And finally, here's an asyncio approach (my first time writing something like this):
import asyncio

async def a():
    while True:
        print('Ha(1)')
        await asyncio.sleep(1)

async def b():
    while True:
        print('Ho(2)')
        await asyncio.sleep(2)

async def main():
    taskA = loop.create_task(a())
    taskB = loop.create_task(b())
    await asyncio.wait([taskA, taskB])

if __name__ == "__main__":
    try:
        loop = asyncio.get_event_loop()
        loop.run_until_complete(main())
    except KeyboardInterrupt:
        pass
Credit for the above code: https://www.velotio.com/engineering-blog/async-features-in-python
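On Python 3.7 and later, asyncio.run replaces the manual get_event_loop / run_until_complete dance, and asyncio.gather replaces the create_task / wait pair. A sketch with finite loops and shorter sleeps so it terminates (the original uses while True):

```python
import asyncio

async def a():
    for _ in range(3):          # 'while True' in the original
        print('Ha(1)')
        await asyncio.sleep(0.1)

async def b():
    for _ in range(2):
        print('Ho(2)')
        await asyncio.sleep(0.2)

async def main():
    # gather schedules both coroutines concurrently and waits for both
    await asyncio.gather(a(), b())

asyncio.run(main())
```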

Related

Creating processes that contains infinite loop in python

I want to create processes without waiting for other processes finish which they can't because they are in an infinite loop.
import time
from multiprocessing import Process

def child_function(param1, param2):
    print(str(param1 * param2))
    while True:
        print("doing some stuff")
        time.sleep(3)

def main_function():
    print("Initializing some things.")
    for _ in range(10):
        Process(target=child_function(3, 5)).start()

if __name__ == '__main__':
    main_function()
This code only starts one process and waits for it to finish. How can I do this?
Edit: Comment answer works fine and the answer below also works fine but for creating thread. Thank you everyone.
Try the Python threading module:
import time
import threading

def child_function(param1, param2):
    print(str(param1 * param2))
    while True:
        print("doing some stuff")
        time.sleep(3)

def main_function():
    print("Initializing some things.")
    for _ in range(10):
        x = threading.Thread(target=child_function, args=(3, 5))
        x.start()

main_function()
Explanation: as already mentioned in the comments, notice that we pass the function to the Thread constructor rather than calling it. You can also compare threading vs. multiprocessing and use whichever best suits the project.
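Since the question asked for processes, the same fix (pass the callable, don't call it) also applies to Process. A sketch with a finite loop so it exits, where the question uses while True:

```python
import time
from multiprocessing import Process

def child_function(param1, param2):
    print(param1 * param2)
    for _ in range(2):          # 'while True' in the question; finite here
        print("doing some stuff")
        time.sleep(0.1)

def main_function():
    print("Initializing some things.")
    procs = []
    for _ in range(3):
        # Pass the function and its arguments separately; writing
        # child_function(3, 5) here would run it in the parent first.
        p = Process(target=child_function, args=(3, 5))
        p.start()
        procs.append(p)
    for p in procs:
        p.join()

if __name__ == '__main__':
    main_function()
```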

Python wait for x seconds without sleeping program? [duplicate]

I'm trying to run 2 functions at the same time.
def func1():
    print('Working')

def func2():
    print('Working')

func1()
func2()
Does anyone know how to do this?
Do this:
from threading import Thread

def func1():
    print('Working')

def func2():
    print("Working")

if __name__ == '__main__':
    Thread(target=func1).start()
    Thread(target=func2).start()
The answer about threading is good, but you need to be a bit more specific about what you want to do.
If you have two functions that both use a lot of CPU, threading (in CPython) will probably get you nowhere. Then you might want to have a look at the multiprocessing module or possibly you might want to use jython/IronPython.
If CPU-bound performance is the reason, you could even implement things in (non-threaded) C and get a much bigger speedup than doing two parallel things in python.
Without more information, it isn't easy to come up with a good answer.
This can be done elegantly with Ray, a system that allows you to easily parallelize and distribute your Python code.
To parallelize your example, you'd need to define your functions with the @ray.remote decorator, and then invoke them with .remote.
import ray

ray.init()

# Define the functions you want to execute in parallel
# with the ray.remote decorator.
@ray.remote
def func1():
    print("Working")

@ray.remote
def func2():
    print("Working")

# Execute func1 and func2 in parallel.
ray.get([func1.remote(), func2.remote()])
If func1() and func2() return results, you need to rewrite the above code a bit, by replacing ray.get([func1.remote(), func2.remote()]) with:
ret_id1 = func1.remote()
ret_id2 = func2.remote()
ret1, ret2 = ray.get([ret_id1, ret_id2])
There are a number of advantages of using Ray over the multiprocessing module or using multithreading. In particular, the same code will run on a single machine as well as on a cluster of machines.
For more advantages of Ray see this related post.
One option that looks like it makes two functions run at the same time is the threading module (example in this answer). However, it has a small delay, as an official Python documentation page describes. A better module to try is multiprocessing.
Also, there's other Python modules that can be used for asynchronous execution (two pieces of code working at the same time). For some information about them and help to choose one, you can read this Stack Overflow question.
Comment from another user about the threading module
He might want to know that because of the Global Interpreter Lock
they will not execute at the exact same time even if the machine in
question has multiple CPUs. wiki.python.org/moin/GlobalInterpreterLock
– Jonas Elfström Jun 2 '10 at 11:39
A quote from the documentation on the threading module's limitation:
CPython implementation detail: In CPython, due to the Global Interpreter
Lock, only one thread can execute Python code at once (even though
certain performance-oriented libraries might overcome this limitation).
If you want your application to make better use of the computational resources of multi-core machines, you are advised to use multiprocessing or concurrent.futures.ProcessPoolExecutor.
However, threading is still an appropriate model if you
want to run multiple I/O-bound tasks simultaneously.
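As a quick illustration of the I/O-bound case, here is a sketch where time.sleep stands in for network or disk waits; because sleeping releases the GIL, three threads finish in roughly the time of one:

```python
import time
from threading import Thread

def fake_io(label):
    # time.sleep releases the GIL, like real network/disk I/O,
    # so the other threads keep running while this one waits
    time.sleep(0.5)
    print(label, "done")

start = time.time()
threads = [Thread(target=fake_io, args=(name,)) for name in ("a", "b", "c")]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(f"elapsed: {time.time() - start:.2f}s")  # roughly 0.5s, not 1.5s
```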
The threading module does run the functions simultaneously, unlike sequential calls, but the timing is a bit off. The code below prints a "1" and a "2", called by two different functions. I noticed that when printed to the console they had slightly different timings.
import time
from threading import Thread

num = 1  # loop-control flag; missing from the original snippet

def one():
    while 1 == num:
        print("1")
        time.sleep(2)

def two():
    while 1 == num:
        print("2")
        time.sleep(2)

p1 = Thread(target=one)
p2 = Thread(target=two)
p1.start()
p2.start()
Output: (Note the space is for the wait in between printing)
1
2
2
1
12
21
12
1
2
Not sure if there is a way to correct this, or if it matters at all. Just something I noticed.
Try this
from threading import Thread

def fun1():
    print("Working1")

def fun2():
    print("Working2")

t1 = Thread(target=fun1)
t2 = Thread(target=fun2)
t1.start()
t2.start()
In case you also want to wait until both functions have been completed:
from threading import Thread

def func1():
    print('Working')

def func2():
    print('Working')

# Define the threads and put them in a list
threads = [
    Thread(target=func1),
    Thread(target=func2),
]

# func1 and func2 run in separate threads
for thread in threads:
    thread.start()

# Wait until both func1 and func2 have finished
for thread in threads:
    thread.join()
Another approach to running multiple functions concurrently in Python is asyncio, which I couldn't see in the other answers.
import asyncio

async def func1():
    for _ in range(5):
        print(func1.__name__)
        await asyncio.sleep(0)  # switches tasks every iteration

async def func2():
    for _ in range(5):
        print(func2.__name__)
        await asyncio.sleep(0)

async def main():
    tasks = [func1(), func2()]
    await asyncio.gather(*tasks)

asyncio.run(main())
Out:
func1
func2
func1
func2
func1
func2
func1
func2
func1
func2
[NOTE]:
The above asyncio syntax is valid on python 3.7 and later
multiprocessing vs multithreading vs asyncio
The code below runs 2 functions in parallel:
from multiprocessing import Process

def test1():
    print("Test1")

def test2():
    print("Test2")

if __name__ == "__main__":
    process1 = Process(target=test1)
    process2 = Process(target=test2)
    process1.start()
    process2.start()
    process1.join()
    process2.join()
Result:
Test1
Test2
And, these 2 sets of code below can run 2 functions concurrently:
from threading import Thread

def test1():
    print("Test1")

def test2():
    print("Test2")

thread1 = Thread(target=test1)
thread2 = Thread(target=test2)
thread1.start()
thread2.start()
thread1.join()
thread2.join()
from operator import methodcaller
from multiprocessing.pool import ThreadPool

def test1():
    print("Test1")

def test2():
    print("Test2")

caller = methodcaller("__call__")
ThreadPool().map(caller, [test1, test2])
Result:
Test1
Test2
And, this code below can run 2 async functions concurrently and asynchronously:
import asyncio

async def test1():
    print("Test1")

async def test2():
    print("Test2")

async def call_tests():
    await asyncio.gather(test1(), test2())

asyncio.run(call_tests())
Result:
Test1
Test2
I think what you are trying to achieve can be done through multiprocessing. However, if you want to do it with threads, you can do this.
This might help
from threading import Thread
import time

def func1():
    print('Working')
    time.sleep(2)

def func2():
    print('Working')
    time.sleep(2)

th = Thread(target=func1)
th.start()
th1 = Thread(target=func2)
th1.start()
A test using APScheduler:
from apscheduler.schedulers.background import BackgroundScheduler
import datetime
import time

sched = BackgroundScheduler()  # missing from the original snippet
sched.start()

dt = datetime.datetime
Future = dt.now() + datetime.timedelta(milliseconds=2550)  # 2.55 seconds from now, testing start accuracy

def myjob1():
    print('started job 1: ' + str(dt.now())[:-3])  # timed to the millisecond, because that's where it varies
    time.sleep(5)
    print('job 1 half at: ' + str(dt.now())[:-3])
    time.sleep(5)
    print('job 1 done at: ' + str(dt.now())[:-3])

def myjob2():
    print('started job 2: ' + str(dt.now())[:-3])
    time.sleep(5)
    print('job 2 half at: ' + str(dt.now())[:-3])
    time.sleep(5)
    print('job 2 done at: ' + str(dt.now())[:-3])

print(' current time: ' + str(dt.now())[:-3])
print('  do job 1 at: ' + str(Future)[:-3])
print('  do job 2 at: ' + str(Future)[:-3])
sched.add_job(myjob1, 'date', run_date=Future)
sched.add_job(myjob2, 'date', run_date=Future)
I got these results, which show the jobs running at the same time:
current time: 2020-12-15 01:54:26.526
do job 1 at: 2020-12-15 01:54:29.072 # i figure these both say .072 because its 1 line of print code
do job 2 at: 2020-12-15 01:54:29.072
started job 2: 2020-12-15 01:54:29.075 # notice job 2 started before job 1, but code calls job 1 first.
started job 1: 2020-12-15 01:54:29.076
job 2 half at: 2020-12-15 01:54:34.077 # halfway point on each job completed same time accurate to the millisecond
job 1 half at: 2020-12-15 01:54:34.077
job 1 done at: 2020-12-15 01:54:39.078 # job 1 finished first. making it .004 seconds faster.
job 2 done at: 2020-12-15 01:54:39.091 # job 2 was .002 seconds faster the second test
I might be wrong, but consider this piece of code:
import time
from multiprocessing import Process

def function_sleep():
    time.sleep(5)

start_time = time.time()
p1 = Process(target=function_sleep)
p2 = Process(target=function_sleep)
p1.start()
p2.start()
end_time = time.time()
I timed it and would expect 5-6 seconds, yet it always takes double the argument passed to sleep (10 seconds in this case).
What's the matter?
Sorry guys, as mentioned in the previous comment, join() needs to be called. That's very important!
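Concretely, calling join() before taking the end time shows the two processes really do overlap; the total is close to one sleep, not two. A sketch with the sleep shortened for illustration:

```python
import time
from multiprocessing import Process

def function_sleep():
    time.sleep(1)               # 5 seconds in the original; shortened here

if __name__ == '__main__':
    start_time = time.time()
    p1 = Process(target=function_sleep)
    p2 = Process(target=function_sleep)
    p1.start()
    p2.start()
    # Without these joins, end_time would be taken before the children finish
    p1.join()
    p2.join()
    print(f"elapsed: {time.time() - start_time:.1f}s")  # about 1s, not 2s
```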

Threading Python3

I am trying to use Threading in Python, and struggle to kick off two functions at the same time, then wait for both to finish and load returned data into variables in the main code. How can this be achieved?
import threading
from threading import Thread

def func1():
    # <do something>
    return (x, y, z)

def func2():
    # <do something>
    return (a, b, c)

Thread(target=func1).start()
Thread(target=func2).start()
# <hold until both threads are done, load returned values>
More clarity is definitely required from the question asked. Perhaps you're after something like the below?
import threading
from threading import Thread

def func1():
    print("inside func1")
    return 5

def func2():
    print("inside func2")
    return 6

if __name__ == "__main__":
    t1 = Thread(target=func1)
    t2 = Thread(target=func2)
    threads = [t1, t2]
    for t in threads:
        t.start()
I believe you were missing the start() method to actually launch your threads?
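To also collect the return values, which the question asks about, concurrent.futures.ThreadPoolExecutor is probably the simplest route: submit() returns a Future, and result() blocks until that thread's function has finished:

```python
from concurrent.futures import ThreadPoolExecutor

def func1():
    print("inside func1")
    return 5

def func2():
    print("inside func2")
    return 6

with ThreadPoolExecutor() as executor:
    f1 = executor.submit(func1)   # runs func1 in a worker thread
    f2 = executor.submit(func2)   # runs func2 concurrently
    # result() waits for the thread to finish and hands back the return value
    x, y = f1.result(), f2.result()

print(x, y)  # 5 6
```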

Get live value of variable from another script

I have two scripts, new.py and test.py.
Test.py
import time

while True:
    x = "hello"
    time.sleep(1)
    x = "world"
    time.sleep(1)
new.py
import time

while True:
    import test
    x = test.x
    print(x)
    time.sleep(1)
From my understanding this should print "hello" and, a second later, "world", repeatedly, when executing new.py.
But it does not print anything. How can I fix that?
Thanks
I think the code below captures what you are asking. Here I simulate two scripts running independently (by using threads), then show how you can use shelve to communicate between them. Note, there are likely much better ways to get to what you are after -- but if you absolutely must run the scripts independently, this will work for you.
Incidentally, any persistent source would do (such as a database).
import shelve
import time
import threading

def script1():
    while True:
        with shelve.open('my_store') as holder3:
            if holder3['flag'] is not None:
                break
        print('waiting')
        time.sleep(1)
    print("Done")

def script2():
    print("writing")
    with shelve.open('my_store') as holder2:
        holder2['flag'] = 1

if __name__ == "__main__":
    with shelve.open('my_store') as holder1:
        holder1['flag'] = None
    t = threading.Thread(target=script1)
    t.start()
    time.sleep(5)
    script2()
    t.join()
Yields:
waiting
waiting
waiting
waiting
waiting
writing
Done
Test.py
import time

def hello():
    callList = ['hello', 'world']
    for item in callList:
        print(item)
        time.sleep(1)

hello()
new.py
from test import hello

while True:
    hello()

How to not wait for function to finish python

I'm trying to program a loop with an asynchronous part in it. I don't want to wait for this asynchronous part every iteration, though. Is there a way to not wait for this function inside the loop to finish?
In code (example):
import time

def test():
    global a
    time.sleep(1)
    a += 1
    test()

a = 10
test()
while 1:
    print(a)
You can put it in a thread. Instead of test(), use:
from threading import Thread
Thread(target=test).start()
print("this will be printed immediately")
To expand on blue_note, let's say you have a function with arguments:
def test(b):
    global a
    time.sleep(1)
    a += 1 + b
You need to pass in your args like this:
from threading import Thread
b = 1
Thread(target=test, args=(b, )).start()
print("this will be printed immediately")
Note args must be a tuple.
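The trailing comma is load-bearing: (b) is just a parenthesized value, while (b,) is a one-element tuple. A quick sketch of the difference, using a hypothetical greet function:

```python
from threading import Thread

def greet(name):
    print("hello,", name)

# Correct: args is a one-element tuple
t = Thread(target=greet, args=("world",))
t.start()
t.join()

# Wrong: args=("world") is just the string "world"; Thread unpacks it
# into five arguments ('w', 'o', 'r', 'l', 'd'), so greet() raises a
# TypeError inside the thread instead of printing anything.
```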
A simple way is to run test() in another thread
import threading
th = threading.Thread(target=test)
th.start()
You should look at a library meant for asynchronous requests, such as gevent
Examples here: http://sdiehl.github.io/gevent-tutorial/#synchronous-asynchronous-execution
import gevent

def foo():
    print('Running in foo')
    gevent.sleep(0)
    print('Explicit context switch to foo again')

def bar():
    print('Explicit context to bar')
    gevent.sleep(0)
    print('Implicit context switch back to bar')

gevent.joinall([
    gevent.spawn(foo),
    gevent.spawn(bar),
])
Use a thread; it creates a new thread in which the asynchronous function runs.
https://www.tutorialspoint.com/python/python_multithreading.htm
