Avoiding maximum recursion depth - Python

I have the following code, which will be running for 24 hours, but after only 10-20 minutes I get a "maximum recursion depth exceeded" error. My code is as follows:
def startEnd():
    flag = 0
    f = open('file')
    lq = f.readlines()
    cur2 = lq[0]
    cur1 = datetime.datetime.now()
    while flag == 0:
        if cur1 == cur2:  # cur2 is datetime read from file
            q.put(True)
            flag = 1
        else:
            flag = 0
    startEnd()
How can I avoid the recursion in this code? I need to come out of the while loop and re-check, since the cur2 value changes.
My other question is whether the following code will also lead to a recursion depth error in the long run, since my code needs to run for 24 hours.
def planIncr():
    f = open('input.txt')
    lines = f.readlines()
    cycle_time = int(lines[1])
    f.close()
    q2.put(True)
    threading.Timer(cycle_time, planIncr).start()

Regarding the first function - just put everything inside a while loop:
def startEnd():
    while True:
        flag = 0
        f = open('file')
        lq = f.readlines()
        cur2 = lq[0]
        cur1 = datetime.datetime.now()
        while flag == 0:
            if cur1 == cur2:  # cur2 is datetime read from file
                q.put(True)
                flag = 1
            else:
                flag = 0
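One thing to watch with this version: the outer loop re-reads the file as fast as it can spin. If you don't need sub-second precision, a short time.sleep(0.5) (or similar) at the top of each iteration will keep it from pegging a CPU core.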

Note that your second function is not recursive - it will never trigger a recursion depth error. Yes, planIncr is passed to threading.Timer, but not executed by that call. planIncr is executed by some other thread later, and the thread that called planIncr to begin with sees planIncr return right away.
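If you want to convince yourself of this, here is a minimal, self-contained sketch (standard library only; the counter and the thread-name printout are just for illustration):

import threading

def tick(n=0):
    # each invocation runs on a brand-new Timer thread, so the stack never deepens
    print(n, threading.current_thread().name)
    threading.Timer(1.0, tick, args=(n + 1,)).start()  # schedules and returns immediately

tick()

Run it and the counter climbs indefinitely without ever raising a RecursionError; every invocation starts on a fresh call stack.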

Related

Inconsistent Behaviour while executing a Background Task in Flask Web App

I am creating a Flask web app which has a background task that makes an API call every x seconds. The interval is decided based on a value coming from the API. If the value is True and the database doesn't have any data for the counter, the task inserts a counter value of 1; if a counter value already exists, it increments the counter and updates it in the database, and the next call of this task is after 40 seconds. If the value is False, the next call is in 10 seconds. Here is my code:
import time
import threading

def writemotiondata():
    processing = False
    while True:
        deviceid = 'eb4f7839b63063bdec3ikc'
        openapi = authorization()
        openapi.connect()
        response = openapi.get("/v1.0/devices/{}".format(deviceid))
        result = response['result']
        status = result['status']
        value = status[0]['value']
        print(value)
        if (value == True):
            tempdata = motiondata.find()
            previousdata = [{item: key[item] for item in key if item != '_id'} for key in tempdata]
            if len(previousdata) == 0:
                data = {"sensorname": "pir",
                        "numberofpeople": 1}
                motiondata.insert_one(data).inserted_id
            else:
                count = previousdata[0]['numberofpeople']
                count += 1
                filter = {"sensorname": "pir"}
                updateddata = {"$set": {"numberofpeople": count}}
                motiondata.update_one(filter, updateddata)
            time.sleep(40)
        else:
            time.sleep(10)

if __name__ == '__main__':
    t1 = threading.Thread(target=writemotiondata)
    t1.start()
    app.run(debug=True)
The expected output is that if the value is True, the task increments the counter and updates the database. For example, if the previous count was 1, after the task completes the new count should be 2, and the task should execute again in 40 seconds. If the value is False, the task should be called again in 10 seconds.
However, the counter in my database gets incremented inconsistently: sometimes it goes up by 1, sometimes by 2 or 3. I have printed out the value and saw that within 40 seconds the value True is sometimes printed 2-3 times instead of once, and the same goes for False. Basically, the task is being executed more than once in that time interval. Can anyone please help me figure out why this is happening and what I am doing wrong?
I am using a single thread; no other threads are used in the code.
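One likely culprit, offered as a guess rather than a confirmed diagnosis: app.run(debug=True) enables Werkzeug's reloader, which runs the script a second time in a child process, so a second copy of writemotiondata can end up polling the same API. A quick way to test that theory is to disable the reloader while keeping debug mode:

if __name__ == '__main__':
    t1 = threading.Thread(target=writemotiondata)
    t1.start()
    app.run(debug=True, use_reloader=False)  # debug stays on, but the script runs only once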

Detect long-press using Keybow

I'm trying to use this python library https://github.com/pimoroni/keybow-python to control a raspberry pi (initiate events, e.g. launch a script or shutdown the pi).
This works well so far, but I'm struggling to detect a long press. The API linked above allows 'catching' the event of pressing or releasing a button; I'm not sure how to measure the time between one and the other. I tried this:
def time_string_to_decimals(time_string):
    fields = time_string.split(":")
    hours = fields[0] if len(fields) > 0 else 0.0
    minutes = fields[1] if len(fields) > 1 else 0.0
    seconds = fields[2] if len(fields) > 2 else 0.0
    return float(hours) + (float(minutes) / 60.0) + (float(seconds) / pow(60.0, 2))

while True:
    t = datetime.now().strftime('%H:%M:%S')
    now = time_string_to_decimals(t)

    @keybow.on()
    def handle_key(index, state):
        key0Up = 0
        key0Down = 0
        if index == 0 and state:
            key0Down = now
            print("down: " + str(now))
        if index == 0 and not state:
            key0Up = now
            downtime = key0Up - key0Down
            print("down: " + str(now))
            print("up: " + str(now))
            print("downtime: " + str(downtime))
            if downtime >= 0.001:
                print("shutdown!")
        if index == 3 and not state:
            print("Hello!")
        if index == 6 and not state:
            print("World!")
... the print commands are just there to follow what's going on. The problem is that key0Down also gets set to the current time when the button is released. I'm stumped. Can anyone point me in the right direction?
Thanks!
Best regards,
Andrew
See above... I keep getting the same 'timestamp' for the key-down and the key-up event...
It sounds like your issue is that when the @keybow.on decorator attaches the callback, it captures a static value for now that doesn't get updated by the while loop, which is in a different scope. Repeatedly declaring the callback inside the while loop also looks wrong.
I don't have this hardware so it is not possible for me to test this. However, looking through the repository you linked to I would be tempted to do the following...
As you need to share the key event timings between functions, I would put them into a class and use class variables for the key0Up and key0Down values.
I have also gone for different event handlers for the different keys to simplify the complex chain of if statements.
I was not sure if the keybow.on decorator would do the right thing if used inside a class, so I have attached the callbacks to keys without using a decorator.
import keybow
import time

class MyKeys:
    key0Up = 0
    key0Down = 0

    def handle_key0(self, index, state):
        if state:
            self.key0Down = time.time()
            print("down: ", self.key0Down)
        elif not state:
            self.key0Up = time.time()
            downtime = self.key0Up - self.key0Down
            print("down: ", self.key0Down)
            print("up: ", self.key0Up)
            print("downtime: ", downtime)
            if downtime >= 1:  # greater than 1 second
                print("shutdown!")

    def handle_key3(self, index, state):
        if not state:
            print("Hello!")

    def handle_key6(self, index, state):
        if not state:
            print("World!")

def main():
    my_keys = MyKeys()
    keybow.on(0, my_keys.handle_key0)
    keybow.on(3, my_keys.handle_key3)
    keybow.on(6, my_keys.handle_key6)
    while True:
        keybow.show()
        time.sleep(1.0 / 60.0)

if __name__ == '__main__':
    main()
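Using time.time() here is also deliberate: the timestamp is taken inside the callback at the moment the event fires, so there is no stale now captured from an outer loop, and no H:M:S string parsing is needed at all.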

Multiprocessing: callback on condition?

I'm using this code as a template (KILLING IT section)
https://stackoverflow.com/a/36962624/9274778
So I've solved this for now... I changed the code to the following:
import random
from time import sleep

def worker(i, ListOfData):
    print "%d started" % i
    # my calculations with ListOfData (Calcs and ListOfDataRow are placeholders)
    x = ListOfData * Calcs
    if x > 0.95:
        return ListOfDataRow, True
    else:
        return ListOfDataRow, False

# callback from the linked template, running only in main - only needed with apply_async
def quit(arg):
    if arg[1] == True:
        pool.terminate()  # kill all pool workers

if __name__ == "__main__":
    import multiprocessing as mp
    Loops = len(ListOfData) / 25
    Start = 0
    End = 25
    pool = mp.Pool()
    for y in range(0, Loops):
        # pool.apply is synchronous (the callback keyword only exists on
        # apply_async), so the results are checked here in the main process
        results = [pool.apply(worker, args=(x, ListOfData[x]))
                   for x in range(Start, End)]
        if any(c[1] for c in results):
            break
        Start = Start + 25
        End = End + 25
So I chunk my data frame (assume for now that my ListOfData is always divisible by 25) and send it off to multiprocessing. I've found that, for my PC's performance, groups of 25 work best. If the first set doesn't return a True, I go on to the next chunk.
I couldn't use the async method, as the tasks all finished at different times and sometimes I'd get a True back that was further down the list (not what I wanted).
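For reference, a trimmed, runnable sketch of this chunked early-exit pattern - calc, the 0.95 threshold and the dummy data are placeholders standing in for the real calculation:

import multiprocessing as mp

def calc(row):
    return row  # stand-in for the real per-row computation

def worker(row):
    return row, calc(row) > 0.95  # True means "stop searching"

if __name__ == "__main__":
    list_of_data = [i / 1000.0 for i in range(1000)]  # dummy data, divisible by 25
    pool = mp.Pool()
    found = None
    for start in range(0, len(list_of_data), 25):
        chunk = list_of_data[start:start + 25]
        results = pool.map(worker, chunk)  # blocks until the whole chunk is done
        hits = [row for row, hit in results if hit]
        if hits:
            found = hits[0]  # earliest hit in list order, unlike the async version
            break            # later chunks are never submitted
    pool.terminate()
    print(found)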

Performing a task based in specific time interval in python

I am trying to switch an LED on and off based on a set, set_car_id, returning some value within a time interval. If the set returns some value, I want the LED to be bright for 8 seconds. In the code shown below, once the set returns a value, the LED is switched on for 8 seconds. But if the set returns another value at second 5 (within the 8 seconds), the LED is not kept on until second 13; it stays on for 3 more seconds and then suddenly switches off. I am showing only a small part of the code. Any suggestions to solve this?
last_bright_time = None
last_dim_time = None
new_action = -1

def LED_control(set_car_id):
    global last_bright_time
    global last_dim_time
    curr_time = time.time()
    should_remain_bright = False
    should_remain_dim = False
    if (new_action == 0):  # new_action == 0 corresponds to set_car_id returning some value
        if last_bright_time == None:
            last_bright_time = time.time()
        if (curr_time - last_bright_time) < 8:
            should_remain_bright = True
    if ((len(set_car_id) > 0) or should_remain_bright == True):
        car_light(1)  # function to brighten the LED
        last_dim_time = None
    else:
        car_light(0)  # function to dim the LED
        last_bright_time = None
Try this:
import time

while True:
    brighten()
    time.sleep(8)
    dim()
    time.sleep(8)
If you want something more precise:
import time

def sleep():
    time_start = time.time()
    while time.time() < time_start + 8:  # time.time() is in seconds, not milliseconds
        pass

while True:
    brighten()
    sleep()
    dim()
    sleep()
In the pieces of code above, you have to define the brighten and dim functions.
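The question, though, seems to want the LED to stay bright for 8 seconds after the most recent detection, not to blink on a fixed cycle. A minimal non-blocking sketch of that idea (car_light is the question's own function; everything else here is an assumption):

import time

last_seen = None  # time of the most recent detection

def LED_control(set_car_id):
    global last_seen
    if len(set_car_id) > 0:
        last_seen = time.time()  # every new detection restarts the 8-second window
    if last_seen is not None and time.time() - last_seen < 8:
        car_light(1)  # stay bright while inside the window
    else:
        car_light(0)  # dim once 8 seconds pass with no new detection

Because each new detection simply moves last_seen forward, a value arriving at second 5 keeps the LED on until second 13, which is the behaviour the question describes wanting.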

Running program on multiple cores

I am running a program in Python using threading to parallelise the task. The task is simple string matching: I am matching a large number of short strings against a database of long strings. When I tried to parallelise it, I decided to split the list of short strings into a number of sublists equal to the number of cores and run each of them separately, on a different core. However, when I run the task on 5 or 10 cores, it is about twice as slow as on just one core. What could the reason for that be, and how can I fix it?
Edit: my code can be seen below
import sys
import os
import csv
import re
import threading
from Queue import Queue
from time import sleep
from threading import Lock

q_in = Queue()
q_out = Queue()
lock = Lock()

def ceil(nu):
    if int(nu) == nu:
        return int(nu)
    else:
        return int(nu) + 1

def opencsv(csvv):
    with open(csvv) as csvfile:
        peptides = []
        reader = csv.DictReader(csvfile)
        k = 0
        lon = ""
        for row in reader:
            pept = str(row["Peptide"])
            pept = re.sub("\((\+\d+\.\d+)\)", "", pept)
            peptides.append(pept)
        return peptides

def openfasta(fast):
    with open(fast, "r") as fastafile:
        dic = {}
        for line in fastafile:
            l = line.strip()
            if l[0] == ">":
                cur = l
                dic[l] = ""
            else:
                dic[cur] = dic[cur] + l
        return dic

def match(text, pattern):
    text = list(text.upper())
    pattern = list(pattern.upper())
    ans = []
    cur = 0
    mis = 0
    i = 0
    while True:
        if i == len(text):
            break
        if text[i] != pattern[cur]:
            mis += 1
            if mis > 1:
                mis = 0
                cur = 0
                continue
        cur = cur + 1
        i = i + 1
        if cur == len(pattern):
            ans.append(i - len(pattern))
            cur = 0
            mis = 0
            continue
    return ans

def job(pepts, outfile, genes):
    c = 0
    it = 0
    towrite = []
    for i in pepts:
        # if it % 1000 == 0:
        #     with lock:
        #         print float(it) / float(len(pepts))
        it = it + 1
        found = 0
        for j in genes:
            m = match(genes[j], i)
            if len(m) > 0:
                found = 1
                remb = m[0]
                wh = j
                c = c + len(m)
                if c > 1:
                    found = 0
                    c = 0
                    break
        if found == 1:
            towrite.append("\t".join([i, str(remb), str(wh)]) + "\n")
    return towrite

def worker(outfile, genes):
    s = q_in.qsize()
    while True:
        item = q_in.get()
        print "\r{0:.2f}%".format(1 - float(q_in.qsize()) / float(s))
        if item is None:
            break  # kill thread
        pepts = item
        q_out.put(job(pepts, outfile, genes))
        q_in.task_done()

def main(args):
    num_worker_threads = int(args[4])
    pept = opencsv(args[1])
    l = len(pept)
    howman = num_worker_threads
    ll = ceil(float(l) / float(howman * 100))
    remain = pept
    pepties = []
    while len(remain) > 0:
        pepties.append(remain[0:ll])
        remain = remain[ll:]
    for i in pepties:
        print len(i)
    print l
    print "Csv file loaded..."
    genes = openfasta(args[2])
    out = args[3]
    print "Fasta file loaded..."
    threads = []
    with open(out, "w") as outfile:
        for pepts in pepties:
            q_in.put(pepts)
        for i in range(num_worker_threads):
            t = threading.Thread(target=worker, args=(outfile, genes, ))
            # t.daemon = True
            t.start()
            threads.append(t)
        q_in.join()  # run workers
        # stop workers
        for _ in range(num_worker_threads):
            q_in.put(None)
        for t in threads:
            t.join()
            # print(t)
    return 0

if __name__ == "__main__":
    sys.exit(main(sys.argv))
The important part of the code is within the job function, where short sequences in pepts get matched to long sequences in genes.
This is most likely because of the GIL (Global Interpreter Lock) in CPython.
In CPython, the global interpreter lock, or GIL, is a mutex that prevents multiple native threads from executing Python bytecodes at once.
David Beazley's presentation at PyCon 2010 gives a detailed explanation of the GIL. On pages 32 to 34 he explains why the same multi-threaded code (doing CPU-bound computation) can perform worse on multiple cores than on a single core:
(with single core) Threads alternate execution, but switch far less frequently than you might imagine.
(with multiple cores) Runnable threads get scheduled simultaneously (on different cores) and battle over the GIL.
His experiment results visualize how thread switching gets more rapid as the number of CPUs increases.
Even though your job function contains some I/O, its three levels of nested loops (two in job and one in match) make it essentially CPU-bound computation.
Changing your code to multiprocessing would help you utilize multiple cores and may improve the performance. However, how much you gain depends on the quantity of computation - whether the benefit of parallelizing it can outweigh the overhead introduced by multiprocessing, such as inter-process communication.
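As a sketch of what that change could look like (the match here is a deliberately trivial stand-in for the question's one-mismatch matcher, and the data is dummy):

import multiprocessing as mp

def match(text, pattern):
    # trivial stand-in for the question's fuzzy matcher
    return pattern.upper() in text.upper()

def job(args):
    pepts, genes = args
    towrite = []
    for pept in pepts:
        for name, seq in genes.items():
            if match(seq, pept):
                towrite.append("\t".join([pept, name]) + "\n")
    return towrite

if __name__ == "__main__":
    genes = {">g1": "MKTAYIAKQR", ">g2": "GATTACA"}  # dummy database
    pepties = [["ktay"], ["ttac"], ["qqqq"]]         # dummy peptide chunks
    pool = mp.Pool()  # one worker process per core by default, each with its own GIL
    # genes is pickled and copied into every worker; chunks are farmed out in parallel
    chunks = [(c, genes) for c in pepties]
    with open("out.tsv", "w") as outfile:
        for lines in pool.map(job, chunks):
            outfile.writelines(lines)
    pool.close()
    pool.join()

Because each chunk runs in a separate process, the matching loops no longer contend for a single interpreter lock; the main cost to watch is pickling the genes dictionary into every worker.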
