Python: threading

I want to start a thread multiple times, but only when it is not already running.
Here is a simple model of what I am trying:
import threading
import time

def up(x, r):
    time.sleep(3)
    r['h'] = x + 1

hum = {'h': 0}

while True:
    print(hum['h'])
    H = threading.Thread(target=up, args=(hum['h'], hum))
    H.daemon = True
    if not H.isAlive():
        H.start()
    print(threading.active_count())
What I also don't understand is this:
When I run the program it prints 0, then after 3 seconds it prints 1, and it keeps increasing by 1 every 3 seconds.
But I thought it would print 0, then print 1 after 3 seconds, and then start increasing rapidly,
because after starting the first thread the loop would immediately start the next one, and so on. Why does this happen?
And how do I avoid starting the 'up' thread if it is already running?

Not sure if I got your question completely, but here are some thoughts.
When I run your code I get an ever-increasing number of active threads, because you create a new thread on every iteration, check its status (which is always "not alive", since it has not been started yet) and then start it.
What you want instead is to check the status of the last thread you started, and only create and start a new one if that thread is done:
import threading
import time

def up(x, r):
    time.sleep(3)
    r['h'] = x + 1

def main():
    hum = {'h': 0}
    H = threading.Thread(target=up, args=(hum['h'], hum))
    H.daemon = True
    while True:
        # print(hum['h'])
        if not H.isAlive():
            H = threading.Thread(target=up, args=(hum['h'], hum))
            H.daemon = True
            H.start()
        print(threading.active_count())

main()

What happens in your code, on every loop iteration:
Print the value of hum['h'].
Create a thread (note: you only create it, you are not starting it yet).
Set its daemon property.
Check whether that brand-new thread is alive; it never is, so start it.
Print the count of active threads.
Since you rebind the H variable every time, you get a new thread on every iteration, and it gets started immediately.
If you add a print that says "starting" inside the isAlive() check, you'll see that it fires every time.
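For instance, instrumenting the if in the original loop makes that visible:
    if not H.isAlive():
        print("starting")   # fires on every iteration: H is always a brand-new, unstarted thread
        H.start()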

You can use join() to wait for the thread to finish:
import threading
import time

def up(x, r):
    time.sleep(3)
    r['h'] = x + 1

hum = {'h': 0}

while True:
    print(hum['h'])
    H = threading.Thread(target=up, args=(hum['h'], hum))
    H.daemon = True
    H.start()
    H.join()
    print(threading.active_count())
If you don't want to wait, you can just save the current running thread in a variable and check it in the loop:
import threading
import time

def up(x, r):
    time.sleep(3)
    r['h'] = x + 1

hum = {'h': 0}
current_thread = None

while True:
    print(hum['h'])
    if current_thread is None:
        current_thread = threading.Thread(target=up, args=(hum['h'], hum))
        current_thread.daemon = True
        current_thread.start()
    elif not current_thread.isAlive():
        current_thread = threading.Thread(target=up, args=(hum['h'], hum))
        current_thread.daemon = True
        current_thread.start()
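Since both branches of that if/elif do exactly the same thing, they can be collapsed into a single check; a small simplification of the code above (is_alive() is the non-deprecated spelling of isAlive()):
    if current_thread is None or not current_thread.is_alive():
        current_thread = threading.Thread(target=up, args=(hum['h'], hum))
        current_thread.daemon = True
        current_thread.start()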

Related

Race Condition Doesn't happen

I have written a bit of code to see the race condition, but it doesn't happen.
from threading import Thread
from time import sleep

class SharedContent:
    def __init__(self, initia_value=0) -> None:
        self.initial_value = initia_value

    def incerease(self, delta=1):
        sleep(1)
        self.initial_value += delta

content = SharedContent(0)
threads: list[Thread] = []

for i in range(250):
    t = Thread(target=content.incerease)
    t.start()
    threads.append(t)

# wait until all threads have finished their job
while True:
    n = 0
    for t in threads:
        if t.is_alive():
            sleep(0.2)
            continue
        n += 1
    if n == len(threads):
        break

print(content.initial_value)
The output is 250, which implies no race condition happened!
Why is that?
I even tried this with random sleep times, but the output was the same.
I changed your program. This version prints a different number every time I run it.
#!/usr/bin/env python3
from threading import Thread

class SharedContent:
    def __init__(self, initia_value=0) -> None:
        self.initial_value = initia_value

    def incerease(self, delta=1):
        for i in range(0, 1000000):
            self.initial_value += delta

content = SharedContent(0)
threads = []

for i in range(2):
    t = Thread(target=content.incerease)
    t.start()
    threads.append(t)

# wait until all threads have finished their job
for t in threads:
    t.join()

print(content.initial_value)
What I changed:
Only two threads instead of 250.
Got rid of sleep() calls.
Each thread increments the variable one million times instead of just one time.
Main program uses join() to wait for the threads to finish.
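As an aside, if you want a multi-threaded counter that stays correct instead of demonstrating the race, the usual fix is to guard the read-modify-write with a lock. A minimal sketch (identifiers here are my own, not taken verbatim from the question):
from threading import Thread, Lock

class SharedCounter:
    def __init__(self, initial_value=0):
        self.value = initial_value
        self.lock = Lock()

    def increase(self, delta=1):
        for _ in range(1000000):
            with self.lock:          # only one thread at a time performs the +=
                self.value += delta

counter = SharedCounter()
threads = [Thread(target=counter.increase) for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter.value)                 # always 2000000 with the lock in place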

Why doesn't my timer thread run in python?

I am making a simple project to learn about threading and this is my code:
import time
import threading

x = 0

def printfunction():
    while x == 0:
        print("process running")

def timer(delay):
    while True:
        time.sleep(delay)
        break
    x = 1
    return x

t1 = threading.Thread(target=timer, args=[3])
t2 = threading.Thread(target=printfunction)
t1.start()
t2.start()
t1.join()
t2.join()
It is supposed to just print "process running" in the console for three seconds, but it never stops printing. The console shows me no errors, and I have tried shortening the time to see if I wasn't waiting long enough, but it still doesn't work. Then I tried deleting the t1.join() and t2.join(), but I still have no luck and the program continues running.
What am I doing wrong?
Add
global x
to the top of timer(). As is, because timer() assigns to x, x is considered to be local to timer(), and its x = 1 has no effect on the module-level variable also named x. The global x remains 0 forever, so the while x == 0: in printfunction() always succeeds. It really has nothing to do with threading :-)
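A minimal corrected timer() with that fix applied (the redundant while/break collapsed into a plain sleep):
def timer(delay):
    global x              # rebind the module-level x instead of creating a local one
    time.sleep(delay)
    x = 1
    return x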

How to use Timer Thread with Python

I am writing a Ryu application (Python) in which I have an if/else statement. If the condition is satisfied for the first time, it should start a timer that runs for 10 seconds. Within those 10 seconds other packets matching the same condition will arrive, but I don't want to start the timer every time the condition is satisfied (within those 10 seconds). In short, the timer should run in parallel.
Here is the code snippet I used for the thread.
Every time I run this and send multiple packets, multiple threads start, whereas I want only one thread to run for the 10 seconds:
def timeit():
    time.sleep(10)
    aggr()
    return

def aggr():
    self.no_of_data = len(self.iot_data)
    self.ip_proto = proto
    self.ip_saddr = source
    self.ip_daddr = destination
    ip_head = pack('!BBHHHBBH16s16s', self.ip_ihl_ver, self.ip_tos, self.ip_tot_len, self.ip_id, self.ip_frag_off, self.ip_ttl, self.ip_check, self.ip_proto, self.ip_saddr, self.ip_daddr)
    total_pkts = pack('!I', self.no_of_data)
    print "TOTALLLL,,,,", self.no_of_data
    ip_head = "{" + ip_head + "}"
    total_pkts = "{" + total_pkts + "}"
    s = '$'
    data = s.join(self.iot_data)
    data = "$" + data
    pckt = ip_head + total_pkts + data
    self.iot_data = []
    print "BUFFER: ", self.iot_data
    self.iot_data_size = 0
    self.start_time = time.time()
    self.logger.info("packet-out %s" % (repr(pckt),))
    out_port = ofproto.OFPP_FLOOD
    actions = [parser.OFPActionOutput(out_port)]
    out = parser.OFPPacketOut(datapath=datapath,
                              buffer_id=ofproto.OFP_NO_BUFFER,
                              in_port=in_port, actions=actions,
                              data=pckt)
    print "out--->", out
    datapath.send_msg(out)

thread1 = threading.Thread(target=timeit)
thread1.start()

if proto == 150 and total_len < 1500:
    if not thread1.isAlive():
        thread1.run()
    print "ifff"
    data = msg.data
    # print " # stores the packet data"
    self.iot_data.append(data)
    # print "# increment size counter"
    self.iot_data_size += total_len
    # elapsed_time = time.time() - self.start_time
    print "ELAPSED: ", elapsed_time
    print "BUFFER: ", self.iot_data
After the 10 seconds, the timer should start again when the next first packet arrives, and it should run in parallel with the same code.
I am very confused by this; please help.
I hope this is clear; if not, I am sorry, please ask for clarification.
Thank you.
Indeed, you have to go with multi-threading (it might be achievable without it, but it would certainly be painful). The idea is to run a thread whose target function sleeps for 10 seconds and then returns. Once that function returns, the thread is no longer alive, until we run it the next time.
Knowing that, we can write the following code. All details and explanations are written as comments for easier reference.
import time
import threading

packet_groups = []  # Groups of packets inside 10 seconds.
group = []          # Temporary group that will get stored into packet_groups.

# The function that will count 10 seconds:
def timeit():
    time.sleep(10)
    return

# Do something with packets.
def packet_handler():
    ...

# Put all your code inside another function that does not create a
# new thread each time. Create a thread in main and then run this function.
def get_packets(thread1):
    ...  # get packets
    if dst == 'some_address':
        # Check if the thread is alive. If it is alive, a counter is running.
        # If it is not alive, then we must start the counter by running
        # the thread.
        if not thread1.isAlive():
            thread1.run()
            packet_handler(packet, True)
        else:
            packet_handler(packet, False)

if __name__ == '__main__':
    # Create thread.
    thread1 = threading.Thread(target=timeit)
    # Start the thread. This is done only once per each thread.
    thread1.start()
    get_packets(thread1)
Now, since you mentioned that you want to group the packets inside these 10-second blocks, you can implement packet_handler() like this:
def packet_handler(packet, new):
    global group              # we rebind group below, so it must be declared global
    # If we started a new thread and the group isn't empty, we must
    # append group to packet_groups (that is, all groups) and reset
    # the group to only contain the current packet.
    if new and group != []:
        packet_groups.append(group)
        group = [packet]
        return
    # If the group isn't new, we are still inside the 10-second block.
    # We just append the current packet to this block.
    if not new:
        group.append(packet)
If you want to print or otherwise show the timer, you can't just sleep for 10 seconds, because nothing will be done in between. In that case, change timeit() to something like this:
def timeit():
    for i in range(10):
        print 'Time remaining: {0}s'.format(10 - i)
        time.sleep(1)
    return
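As a side note, the standard library already provides a one-shot timer thread, threading.Timer, which runs a callback once after a delay. A hedged sketch of how it could replace the hand-rolled timeit() (aggr is the flush function from the question; on_matching_packet is a hypothetical hook for wherever the condition is checked):
import threading

timer = None  # the currently running 10-second timer, if any

def on_matching_packet():
    global timer
    # Start a new 10-second timer only if none is currently running.
    if timer is None or not timer.is_alive():
        timer = threading.Timer(10, aggr)   # calls aggr() once, 10 seconds from now
        timer.daemon = True
        timer.start()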

python global variable inside a thread

The story begins with two threads and a global variable that changes... a lot :)
Thread number one (for simplicity we will call it t1) generates a random number and stores it in a global variable GLB.
Thread number two (aka t2) checks the value of the global variable, and when it reaches a certain value it starts printing that value for a period of time.
BUT if t1 changes the value of the global variable, the value inside the loop changes as well, and I don't want that!
I tried to write pseudocode:
import random
import time
import threading

GLB = [0, 0]

# this is a thread
def t1():
    while True:
        GLB[0] = random.randint(0, 100)
        GLB[1] = 1
        print GLB
        time.sleep(5)

# this is a thread
def t2():
    while True:
        if GLB[0] <= 30:
            static = GLB
            for i in range(50):
                print i, " ", static
                time.sleep(1)

a = threading.Thread(target=t1)
a.start()
b = threading.Thread(target=t2)
b.start()

while True:
    time.sleep(1)
The question is: why does the variable static change inside the for loop? It should remain constant until the loop finishes!
Could I put a lock around the variable? Or is there any other way to solve the problem?
Thanks and regards.
GLB is a mutable object. To let one thread see a consistent value while another thread modifies it, you can either protect the object temporarily with a lock (the modifier will wait) or copy the object. In your example, a copy seems the best option. In Python, a slice copy is atomic, so it does not need any other locking.
import random
import time
import threading

GLB = [0, 0]

# this is a thread
def t1():
    while True:
        GLB[0] = random.randint(0, 100)
        GLB[1] = 1
        print GLB
        time.sleep(5)

# this is a thread
def t2():
    while True:
        static = GLB[:]
        if static[0] <= 30:
            for i in range(50):
                print i, " ", static
                time.sleep(1)

a = threading.Thread(target=t1)
a.start()
b = threading.Thread(target=t2)
b.start()

while True:
    time.sleep(1)
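For completeness, the lock-based alternative mentioned above could look roughly like this (a sketch, not a drop-in replacement; print() with a single argument keeps it valid in both Python 2 and 3):
import random
import threading
import time

GLB = [0, 0]
GLB_lock = threading.Lock()

def t1():
    while True:
        with GLB_lock:                 # the writer mutates GLB only while holding the lock
            GLB[0] = random.randint(0, 100)
            GLB[1] = 1
        time.sleep(5)

def t2():
    while True:
        with GLB_lock:                 # the reader takes a consistent snapshot under the same lock
            static = GLB[:]
        if static[0] <= 30:
            for i in range(50):
                print(static)
                time.sleep(1)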

kill a function after a certain time in windows

I've read a lot of posts about using threads, subprocesses, etc. A lot of it seems overcomplicated for what I'm trying to do...
All I want to do is stop executing a function after X amount of time has elapsed.
def big_loop(bob):
    x = bob
    start = time.time()
    while True:
        print time.time() - start
This function is an endless loop that never throws any errors or exceptions, period.
I'm not sure of the difference between "commands, shells, subprocesses, threads, etc." and this function, which is why I'm having trouble manipulating subprocesses.
I found this code here and tried it, but as you can see it keeps printing after 10 seconds have elapsed:
import time
import threading
import subprocess as sub

class RunCmd(threading.Thread):
    def __init__(self, cmd, timeout):
        threading.Thread.__init__(self)
        self.cmd = cmd
        self.timeout = timeout

    def run(self):
        self.p = sub.Popen(self.cmd)
        self.p.wait()

    def Run(self):
        self.start()
        self.join(self.timeout)
        if self.is_alive():
            self.p.terminate()
            self.join()

def big_loop(bob):
    x = bob
    start = time.time()
    while True:
        print time.time() - start

RunCmd(big_loop('jimijojo'), 10).Run()  # supposed to quit after 10 seconds, but doesn't
x = raw_input('DONEEEEEEEEEEEE')
What's a simple way this function can be killed? As you can see in my attempt above, it doesn't terminate after the timeout and just keeps on going...
***OH, also: I've read about using signal, but I'm on Windows so I can't use the alarm feature (Python 2.7).
**Assume the "infinitely running function" can't be manipulated or changed to be non-infinite; if I could change the function, well, I'd just change it to be non-infinite, wouldn't I?
Here are some similar questions, whose code I haven't been able to port over to work with my simple function.
Perhaps you can?
Python: kill or terminate subprocess when timeout
signal.alarm replacement in Windows [Python]
OK, I tried an answer I received and it works... but how can I use it if I remove the if __name__ == "__main__": statement? When I remove this statement, the loop never ends, as it did before...
import multiprocessing
import Queue
import time

def infinite_loop_function(bob):
    var = bob
    start = time.time()
    while True:
        time.sleep(1)
        print time.time() - start
    print 'this statement will never print'

def wrapper(queue, bob):
    result = infinite_loop_function(bob)
    queue.put(result)
    queue.close()

#if __name__ == "__main__":
queue = multiprocessing.Queue(1)  # Maximum size is 1
proc = multiprocessing.Process(target=wrapper, args=(queue, 'var'))
proc.start()

# Wait for TIMEOUT seconds
try:
    timeout = 10
    result = queue.get(True, timeout)
except Queue.Empty:
    # Deal with lack of data somehow
    result = None
finally:
    proc.terminate()

print 'running other code, now that that infinite loop has been defeated!'
print 'bla bla bla'
x = raw_input('done')
Use the building blocks in the multiprocessing module:
import multiprocessing
import Queue

TIMEOUT = 5

def big_loop(bob):
    import time
    time.sleep(4)
    return bob * 2

def wrapper(queue, bob):
    result = big_loop(bob)
    queue.put(result)
    queue.close()

def run_loop_with_timeout():
    bob = 21  # Whatever sensible value you need
    queue = multiprocessing.Queue(1)  # Maximum size is 1
    proc = multiprocessing.Process(target=wrapper, args=(queue, bob))
    proc.start()

    # Wait for TIMEOUT seconds
    try:
        result = queue.get(True, TIMEOUT)
    except Queue.Empty:
        # Deal with lack of data somehow
        result = None
    finally:
        proc.terminate()

    # Process data here, not in the try block above, otherwise your process keeps running
    print result

if __name__ == "__main__":
    run_loop_with_timeout()
You could also accomplish this with a Pipe/Connection pair, but I'm not familiar with their API. Change the sleep time or TIMEOUT to check the behaviour for either case.
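For reference, a Pipe/Connection version might look roughly like this (a sketch under the same assumptions as the Queue example above; Connection.poll(timeout) does the waiting that queue.get() did):
import multiprocessing
import time

TIMEOUT = 5

def big_loop(bob):
    time.sleep(4)
    return bob * 2

def wrapper(conn, bob):
    conn.send(big_loop(bob))      # push the result through the pipe
    conn.close()

if __name__ == "__main__":
    recv_end, send_end = multiprocessing.Pipe(duplex=False)
    proc = multiprocessing.Process(target=wrapper, args=(send_end, 21))
    proc.start()
    if recv_end.poll(TIMEOUT):    # wait up to TIMEOUT seconds for a result
        result = recv_end.recv()
    else:
        result = None             # the worker did not finish in time
    proc.terminate()
    proc.join()
    print(result)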
There is no straightforward way to kill a function after a certain amount of time without running the function in a separate process. A better approach would probably be to rewrite the function so that it returns after a specified time:
import time

def big_loop(bob, timeout):
    x = bob
    start = time.time()
    end = start + timeout
    while time.time() < end:
        print time.time() - start
        # Do more stuff here as needed
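Called, for example, like this (the argument values are just placeholders):
big_loop('jimijojo', 10)   # prints elapsed time for roughly 10 seconds, then returns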
Can't you just return from the loop?
def big_loop(bob):
    x = bob
    start = time.time()
    endt = start + 30
    while True:
        now = time.time()
        if now > endt:
            return
        else:
            print now - start
import os, signal, time

cpid = os.fork()
if cpid == 0:
    while True:
        pass  # do stuff
else:
    time.sleep(10)
    os.kill(cpid, signal.SIGKILL)
You can also have the thread's loop check for an event, which is more portable and flexible, as it allows other reactions than brute killing. However, this approach fails if the # do stuff part can take a long time (or even wait forever on some event).
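A minimal sketch of that event-based approach, assuming the loop body is free to check the flag regularly:
import threading
import time

stop_event = threading.Event()

def big_loop(bob, stop_event):
    x = bob
    start = time.time()
    while not stop_event.is_set():   # keep looping until asked to stop
        print(time.time() - start)
        time.sleep(0.5)

t = threading.Thread(target=big_loop, args=('jimijojo', stop_event))
t.start()
time.sleep(10)        # let it run for 10 seconds
stop_event.set()      # ask the loop to stop
t.join()              # wait for it to exit cleanly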
