Please accept my apologies for the delayed feedback; I couldn't get to it sooner. Let me explain in more detail.
I've built a MIDI/OSC control surface for Reaper in Python. It receives and sends OSC messages.
I use the UP and DOWN arrow keys on the keyboard to move through tracks in Reaper. I can move very quickly through tracks until I find the one I want to stay on. Then my script sends an OSC message, using PythonOSC, asking Reaper to send everything back.
I will explain using code.
The function below receives OSC messages (e.g. /track/volume). Each time the client receives /select, it tells my script that the selected track changed in Reaper.
def OSC(self, *args):
    s = re.search(r'/track/(.+)', args[0])
    if s:
        if s.group(1) == 'select' and args[1] == 0:
            # I want the next line to be executed only once about 0.5 second has passed
            # since the last time execution reached this point, to avoid PythonOSC
            # sending that line 10 times in 1 second and overloading the process.
            client.send_message('/device/track/follows/last_touched', 1)
I imagined something like this, but I don't know if it's overkill or if I'm missing something.
Each time I press up or down, a timer starts in a thread (or something similar), and it resets on every key press.
When I land on my track, the timer keeps running; after 0.5 second it launches my client.send_message() line.
Is that too much? The thing is, a thread would still have to be killed or cleaned up somehow, and I'm very new to Python.
The example given with the decorator is great, but it runs the function (in my case, really just one line of code) every time it's triggered, only enforcing a minimum 0.5 second duration, while I want it to be triggered after a 0.5 second pause following the last time it's called.
Note: I realize that my code example shows up as one unbroken block in the preview box. I did indent it, but the formatting breaks; I'm sorry about that, I can't find the option I need to use.
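(For reference, here is a minimal sketch of the reset-on-every-call timer described above, using threading.Timer. The class, the _debounce_timer attribute and the 0.5 second delay are illustrative assumptions, not part of the original script.)

import threading

DEBOUNCE_DELAY = 0.5  # seconds of silence required before the message is sent

class Surface:
    def __init__(self, client):
        self.client = client          # e.g. a python-osc SimpleUDPClient
        self._debounce_timer = None   # hypothetical attribute holding the pending timer

    def on_select(self):
        # Cancel the timer started by the previous /track/select message, if any...
        if self._debounce_timer is not None:
            self._debounce_timer.cancel()
        # ...and start a fresh one, so only the timer started by the *last*
        # message survives long enough to fire.
        self._debounce_timer = threading.Timer(DEBOUNCE_DELAY, self._send_follow)
        self._debounce_timer.daemon = True  # never blocks interpreter shutdown
        self._debounce_timer.start()

    def _send_follow(self):
        self.client.send_message('/device/track/follows/last_touched', 1)

Cancelled timers do not need to be killed by hand: cancel() stops them from ever firing, and the daemon flag means any still-pending timer won't keep the process alive on exit.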
I'm not sure that I've interpreted your question correctly, but if you're trying to make a function's execution take no less than a given amount of time, you can use this decorator:
from time import time, sleep

def timed(time_to_sleep):
    def decorator(function):
        def wrapper(*args, **kwargs):
            start_time = time()
            result = function(*args, **kwargs)
            time_to_sleep_left = time_to_sleep - (time() - start_time)
            if time_to_sleep_left > 0:
                sleep(time_to_sleep_left)
            return result
        return wrapper
    return decorator
Usage:
@timed(0.5)
def test():
    pass
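(As a quick check, not part of the original answer: timing a call shows the wrapper padding even an empty function out to roughly half a second.)

from time import time

start = time()
test()                 # the body does nothing, but the wrapper sleeps the remaining time
print(time() - start)  # prints roughly 0.5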
I have a problem with my Tkinter window crashing (it stops responding) due to the use of the sleep function in the code below. The program in the background keeps working just fine, even though the window stopped responding long ago.
How can I use functions like time.time to ensure that I don't make queries to the API too frequently, rather than making a blocking call to time.sleep? In other words, once I follow a list of new users, how do I prevent the program from making another request for the next 30 seconds?
def follow_users(self, users_list):
    api = self.api
    api.login()
    api.getSelfUsersFollowing()
    result = api.LastJson
    for user in result['users']:
        following_users.append(user['pk'])
    for user in users_list:
        if not user['pk'] in following_users:
            print('Following #' + user['username'])
            api.follow(user['pk'])
            # sleep a long time to avoid account suspension
            sleep(30)
        else:
            print('Already following #' + user['username'])
            sleep(15)
def unfollow_users(self):
    api = self.api
    api.login()
    api.getSelfUserFollowers()
    result = api.LastJson
    for user in result['users']:
        follower_users.append({'pk': user['pk'], 'username': user['username']})
    api.getSelfUsersFollowing()
    result = api.LastJson
    for user in result['users']:
        following_users.append({'pk': user['pk'], 'username': user['username']})
    for user in following_users:
        if not user['pk'] in follower_users:
            print('Unfollowing #' + user['username'])
            api.unfollow(user['pk'])
            # sleep a long time to avoid account suspension
            sleep(20)
I would start by writing a function that follows a single user, and nothing more. It doesn't need to loop over a list or sleep or anything like that; it should just follow that one user.
For example:
def follow_user(self, user):
    ... code to follow this user ...
Next, define a function that pulls one user off of a list, calls the follow_user function, and then re-schedules itself to run again after a timeout.
The following example assumes there's a global variable named root which represents the root window, but you can use any widget you want. It also assumes that the class maintains a list of users to be followed in an instance variable named self.users.
def follow_users(self):
    if self.users:
        user = self.users.pop()
        self.follow_user(user)
    root.after(30000, self.follow_users)
Then, once your program starts, call this function exactly once. It will re-schedule itself to run every 30 seconds. If there is at least one user in self.users, it pulls one from the list and follows that user; 30 seconds later it does it again, and so on. You can update self.users whenever you want, and those users will eventually get followed.
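(A minimal, self-contained sketch of this scheduling pattern; the Follower class, the print placeholder and the 3-second interval are illustrative stand-ins, not the original code.)

import tkinter as tk

class Follower:
    def __init__(self, root, users):
        self.root = root
        self.users = list(users)

    def follow_user(self, user):
        print('following', user)   # a real API call would go here

    def follow_users(self):
        if self.users:
            self.follow_user(self.users.pop())
        # Re-schedule ourselves; nothing blocks, so the UI stays responsive.
        self.root.after(3000, self.follow_users)

root = tk.Tk()
f = Follower(root, ['alice', 'bob', 'carol'])
f.follow_users()   # call exactly once; it keeps itself going via after()
root.mainloop()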
I have a Cloud Function calling another Cloud Function in Python. My issue is that when I call the next function, the first one waits for its execution to finish or times out.
The key point is that this is about Google Cloud Functions, specifically the mismatch between the function timeout and the maximum API call rate: the function's maximum timeout (540 seconds) is shorter than the time I need to make the required API calls, and I don't want to create more triggers.
How can I make the first function (the "caller") finish after calling the second function, which then does its own work?
Some sample code:
# main.py
# url: this-particular-cloud-function
# function initiated with a post request containing {"previous_tin_index": 0}
import requests
import time
import logging
final_tin_index = 100
def eat_spam(request):
started_eating_spam = time.time()
spam_json = request.get_json()
spam_to_eat = spam_json["previous_tin_index"]
for spam in range(spam_to_eat):
time.sleep(5)
previous_tin_index += 1
logging.info("I hate spam....")
finished_previous_spam_time = time.time() - started_eating_spam
if finished_previous_spam_time >= 10:
logging.info("Make it stop!")
requests.post("this-particular-cloud-function", json={"previous_tin_index": previous_tin_index})
return "200"
EDIT: I know that the inherent problem is that the function never reaches its return statement. I am wondering whether this can be fixed, other than, for example, by rewriting the code as a JavaScript promise.
P.S. I looked at the Cloud documentation, but the Python examples seem to be lacking for this particular case.
This may solve your issue.
def f1(x):
    print('f1', x)
    return f2, (x + 1,)

def f2(x):
    print('f2', x)
    return f1, (x + 1,)

f, args = f1, (0,)
while True:
    f, args = f(*args)
Credits to this post.
Although I believe that the real issue in your case is the workflow: you are returning a value, but you never reach that point in your code. Let's assume that you have foo1(), and inside foo1() you call foo2(). foo2() starts executing, but before control returns to foo1() and it reaches its return statement, foo1() has already timed out.
If that doesn't solve your issue, then the problem may be in the second function, so it may be necessary to review that function in order to resolve your problem.
Please let me know if this was helpful.
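(One workaround often used for this caller/callee pattern, offered here as a sketch rather than a definitive fix: fire the request to the second function without waiting for its response, so the caller can reach its return statement. The URL is the placeholder from the question.)

import requests

def trigger_next(previous_tin_index):
    try:
        # Wait just long enough for the request to be sent, then stop waiting for a reply.
        requests.post("this-particular-cloud-function",
                      json={"previous_tin_index": previous_tin_index},
                      timeout=2)
    except requests.exceptions.Timeout:
        # The next invocation is presumably running; we deliberately don't wait for it.
        pass

A more robust way to decouple the two functions is to publish a message to Pub/Sub or Cloud Tasks and let that trigger the second function, but the sketch above stays closest to the question's HTTP-to-HTTP setup.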
I would like to run exec() on 'return' in my function, so that my function returns and stops. Here is my code:
def Function():  # receives a time-consuming function to do
    # checks if the main thread should be closed; if so, returns:
    return 'return "it worked"'
    # otherwise returns 'Null' so the main thread keeps going

def MainThread():
    # wants to do some function, so
    exec(Function())
    return "didnt work"
When I run MainThread(), it says:
SyntaxError: 'return' outside function
I am not sure what is wrong there. I simplified it and found that running exec('return') inside a function does not work either.
Why I want to do this:
I have a thread that controls instruments, and before each command it sends to the instruments I want it to check whether it should abort, since controlling the instruments can be time-consuming and there are safety hazards. I don't want to copy-paste an if statement many times throughout my code, so I thought of wrapping each command to an instrument with a check.
It seems quite messy; if there are other approaches I would love to hear them.
My current solution is:
def Function(stuff):  # receives a time-consuming function to do
    # does things to stuff
    return

def check(thing, skip):
    if skip:
        return
    else:
        Function(thing)
        return

def MainThread():
    skip = False  # will be True or False depending on whether we need to skip
    # wants to do some function, so
    check("the thing to do", skip)
If it's necessary to abort the thread, it actually just skips through and does nothing at each function call. At the end, the thread aborts, but I didn't like simply keeping the thread there doing nothing and hoped for a better idea :)
exec() does not execute the code in the context of the function from which it is called. The 'return "it worked"' does not become part of Function(); it is executed on its own.
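(In the spirit of the "wrap each command with a check" idea, here is one possible alternative sketch using a threading.Event as the abort flag and an exception to unwind the thread. The names abort_event, AbortRun and run_step are illustrative, not from the original code.)

import threading

abort_event = threading.Event()   # call abort_event.set() from anywhere to request an abort

class AbortRun(Exception):
    """Raised to unwind the worker thread when an abort has been requested."""

def run_step(func, *args, **kwargs):
    # One shared check before every instrument command.
    if abort_event.is_set():
        raise AbortRun()
    return func(*args, **kwargs)

def main_thread():
    try:
        run_step(print, "move stage")        # stand-ins for real instrument commands
        run_step(print, "start measurement")
        run_step(print, "read result")
    except AbortRun:
        # One place to leave the instruments in a safe state,
        # instead of an if statement around every command.
        print("aborted safely")

Raising an exception lets the thread actually stop at the current command rather than stepping through the remaining calls as no-ops.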
I am a Python noob, and I'm trying to make a washing device, programming the interface with Python. For now, the machine should work like this:
Wash
Tell washing is complete
Dry
Tell drying is complete
For this, after times for washing/drying are entered, a button is pressed:
button1 = Button(window.tk, command = lambda:main_process(int(varWashtime.get()), int(varDrytime.get())))
def main_process(wash_seconds, dry_seconds):
    wash(wash_seconds)
    stop_wash()
    dry(dry_seconds)
    stop_dry()
    return

def wash(seconds):
    varWashStarted.set("Washing Started")
    Timer(seconds, idle_fnc).start()
    return

def stop_wash():
    varWashStarted.set("Washing Stopped")
    Timer(3, idle_fnc, ()).start()
    return

def dry(seconds):
    varDryStarted.set("Drying Started")
    Timer(seconds, idle_fnc, ()).start()
    return

def stop_dry():
    varDryStarted.set("Drying Stopped")
    return

def idle_fnc():
    pass
    return
Here, I used the function idle_fnc just so threading.Timer has something to call.
I found out that I could use Timer itself to call the functions one after another, but I would prefer to return from one function and then branch to the next.
My problem is that, as I click the button, the whole thing executes without waiting; I instantly see "Washing Stopped" and "Drying Stopped" on the corresponding labels, without the delays ever happening.
What is the problem?
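(For context on why everything happens at once: Timer(...).start() only schedules the callback on another thread and returns immediately, so main_process runs all four calls back to back. A tiny standalone sketch, unrelated to the washing-machine code, showing that start() does not wait:)

from threading import Timer
import time

def done():
    print("timer fired at", time.time())

print("before start:", time.time())
Timer(2, done).start()               # schedules done() roughly 2 seconds from now
print("after start: ", time.time())  # printed immediately; start() did not block
time.sleep(3)                        # keep the main thread alive long enough to see the callback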
I'm running a Python script on a Raspberry Pi that constantly checks a Yocto button, and when it gets pressed it puts data from a different sensor into a database.
A snippet of the code that runs constantly:
# when all is set and done, run the program
Active = True
while Active:
    if ResponseType == "b":
        while Active:
            try:
                if GetButtonPressed(ResponseValue):
                    DoAllSensors()
                    time.sleep(5)
                else:
                    time.sleep(0.5)
            except KeyboardInterrupt:
                Active = False
            except Exception, e:
                print str(e)
                print "exception raised, continuing after 10 seconds"
                time.sleep(10)
The GetButtonPressed(ResponseValue) function looks like the following:
def GetButtonPressed(number):
    global buttons
    if ModuleCheck():
        if buttons[number - 1].get_calibratedValue() < 300:
            return True
    else:
        print "module not online"
    return False

def ModuleCheck():
    global moduleb
    return moduleb.isOnline()
I'm not quite sure what might be going wrong, but it takes about an hour before the RPi runs out of memory.
The memory usage increases constantly, even though the button is only pressed once every 15 minutes or so.
That already tells me that the problem must be in the code displayed above.
The problem is that the yocto_api.YAPI object will continue to accumulate _Event objects in its _DataEvents container (a class-wide attribute) until you call YAPI.HandleEvents. If you're not using the API's callbacks, it's easy to think (I did, for hours) that you never need to call this. The API docs aren't at all clear on the point:
If your program includes significant loops, you may want to include a call to this function to make sure that the library takes care of the information pushed by the modules on the communication channels. This is not strictly necessary, but it may improve the reactivity of the library for the following commands.
I did some playing around with API-level callbacks before I decided to periodically poll the sensors in my own code, and it's possible that some setting got left enabled in them that is causing these events to accumulate. If that's not the case, I can't imagine why they would say calling YHandleEvents is "not strictly necessary," unless they make ARM devices with unlimited RAM in Switzerland.
Here's the magic static method that thou shalt call periodically, no matter what. I'm doing so once every five seconds and that is taking care of the problem without loading down the system at all. API code that would accumulate unwanted events still smells to me, but it's time to move on.
#noinspection PyUnresolvedReferences
@staticmethod
def HandleEvents(errmsgRef=None):
    """
    Maintains the device-to-library communication channel.
    If your program includes significant loops, you may want to include
    a call to this function to make sure that the library takes care of
    the information pushed by the modules on the communication channels.
    This is not strictly necessary, but it may improve the reactivity
    of the library for the following commands.

    This function may signal an error in case there is a communication problem
    while contacting a module.

    @param errmsg : a string passed by reference to receive any error message.

    @return YAPI.SUCCESS when the call succeeds.

    On failure, throws an exception or returns a negative error code.
    """
    errBuffer = ctypes.create_string_buffer(YAPI.YOCTO_ERRMSG_LEN)
    #noinspection PyUnresolvedReferences
    res = YAPI._yapiHandleEvents(errBuffer)
    if YAPI.YISERR(res):
        if errmsgRef is not None:
            #noinspection PyAttributeOutsideInit
            errmsgRef.value = YByte2String(errBuffer.value)
        return res
    while len(YAPI._DataEvents) > 0:
        YAPI.yapiLockFunctionCallBack(errmsgRef)
        if not (len(YAPI._DataEvents)):
            YAPI.yapiUnlockFunctionCallBack(errmsgRef)
            break
        ev = YAPI._DataEvents.pop(0)
        YAPI.yapiUnlockFunctionCallBack(errmsgRef)
        ev.invokeData()
    return YAPI.SUCCESS
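(For completeness, a sketch of how the polling loop from the question might call this periodically; the five-second cadence follows the answer above, and GetButtonPressed, ResponseValue and DoAllSensors are the question's own names, assumed to be defined as shown earlier.)

import time
from yocto_api import YAPI   # assumes the Yoctopuce Python library is on the path

HANDLE_EVERY = 5.0           # drain the library's event queue this often (seconds)
last_handled = time.time()

while True:
    if GetButtonPressed(ResponseValue):
        DoAllSensors()
        time.sleep(5)
    else:
        time.sleep(0.5)
    # Periodically let the library process (and discard) queued _DataEvents
    # so they don't pile up in memory between button presses.
    if time.time() - last_handled >= HANDLE_EVERY:
        YAPI.HandleEvents()
        last_handled = time.time()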