Fire another event when several events have completed in Python

I am currently facing a hard problem.
Say that I have 2+ events, such as:
(A): the user performs action A
(B): the user performs action B
Each event is separate from the others, can be triggered by a user's web request, and I do not know in which order they occur (which one happens first).
Then, when both A and B have occurred, another event C should fire: A + B => C
Is there any programming model or library that makes this easier in Python? I prefer pure Python because I am working on GAE.
I have thought about using blinker with some additional code, but that still seems like a lot of work.
Thank you,

Since blinker is pure Python, using it on GAE should be quite feasible.
So you're left with the issue of triggering a signal when two other signals have been received -- an AND gate, which I believe blinker doesn't provide -- definitely not when the incoming signals can come "well separated" in time and space.
So I'd define a model such as:
class AndGate(ndb.Model):
    a_received = ndb.BooleanProperty()
    b_received = ndb.BooleanProperty()
using users.get_current_user().user_id() as the id in its entities' key.
Then we'd have signal(A).connect(got_a) and signal(B).connect(got_b) with
def got_a(*_):
    k = ndb.Key('AndGate', users.get_current_user().user_id())
    ag = k.get()
    if ag is None:
        AndGate(a_received=True, b_received=False, key=k).put()
        return
    if ag.b_received:
        k.delete()
        signal('C').send('')
    else:
        ag.a_received = True
        ag.put()
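For completeness, got_b would be the mirror image -- a minimal sketch under the same assumptions:
def got_b(*_):
    k = ndb.Key('AndGate', users.get_current_user().user_id())
    ag = k.get()
    if ag is None:
        AndGate(a_received=False, b_received=True, key=k).put()
        return
    if ag.a_received:
        k.delete()
        signal('C').send('')
    else:
        ag.b_received = True
        ag.put()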
This looks a bit redundant with just two signals; I'm thinking ahead to gates with N>2 signals, but you can surely simplify it if that's not an issue for you.
Yes, there's work left to do -- atomic transactions if needed, periodic "garbage collection" tasks that look for gates where only one signal was received and that are older than some threshold X, reporting on them, and so on. But it doesn't seem like "much of works" to me, if I understood your specifications correctly.
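For the atomicity part, the read-check-write in got_a could be wrapped in an ndb transaction -- a sketch under the assumptions above (record_a is a hypothetical helper; fire signal('C') from the caller, outside the transaction):
from google.appengine.ext import ndb

@ndb.transactional
def record_a(user_id):
    # Runs the read-check-write atomically, so concurrent A/B requests can't race.
    # Returns True when the other signal was already recorded, i.e. C should fire.
    k = ndb.Key('AndGate', user_id)
    ag = k.get()
    if ag is None:
        AndGate(a_received=True, b_received=False, key=k).put()
        return False
    if ag.b_received:
        k.delete()
        return True
    ag.a_received = True
    ag.put()
    return False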

This may help:
1.- Avoid a race condition: if you're using a DB, use a transaction / lock table / lock row to write A or B; if you are using threads, use threading.Lock() or something similar.
2.- Check inside the transaction/lock whether the trigger condition is met, and if so execute it. For example, with threads (do_a, do_b, do_c standing in for your actual actions):
import threading

lock = threading.Lock()
a_done = b_done = False

def a():
    global a_done
    with lock:
        do_a()
        a_done = True
        if b_done:
            do_c()

def b():
    global b_done
    with lock:
        do_b()
        b_done = True
        if a_done:
            do_c()


How to rewrite a state machine in a clearer style?

I am interacting with an external device, and I have to issue certain commands in order. Sometimes I have to jump back and redo steps. Pseudocode (the actual code has more steps and jumps):
enter_update_mode()   # step 1
success = start_update()
if not success:
    retry from step 1
leave_update_mode()
How do I handle this the cleanest way? What I did for now is to define an enum, and write a state machine. This works, but is pretty ugly:
class Step(Enum):
    ENTER_UPDATE_MODE = 1
    START_UPDATE = 2
    LEAVE_UPDATE_MODE = 3
    EXIT = 4

def main():
    next_step = Step.ENTER_UPDATE_MODE
    while True:
        if next_step == Step.ENTER_UPDATE_MODE:
            enter_update_mode()
            next_step = Step.START_UPDATE
        elif next_step == Step.START_UPDATE:
            success = start_update()
            if success:
                next_step = Step.LEAVE_UPDATE_MODE
            else:
                next_step = Step.ENTER_UPDATE_MODE
....
I can imagine an alternative would be to just call the functions nested. As long as this is only a few levels deep, it should not be a problem:
def enter_update_mode():
    # do stuff ...
    # call next step:
    perform_update()

def perform_update():
    # ...
    # call next step:
    if success:
        leave_update_mode()
    else:
        enter_update_mode()
I have looked into the python-statemachine module, but it seems to be there to model state machines. You can define states and query which state it is in, and you can attach behavior to states. But that is not what I'm looking for. I am looking for a way to write the behavior code in a very straightforward, imperative style, like you would use for pseudocode or instructions to a human.
There is also a module to add goto to Python, but I think it is a joke and would not like to use it in production :-).
Notes:
This code is synchronous, meaning it is a terminal app or a separate thread. Running concurrently with other code would be an added complication. If a solution allows that (e.g. by using yield), that would be a bonus, but it is not necessary.
I left out a lot of retry logic. A step may only be retried a certain number of times; one way to express that imperatively is sketched below.
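For illustration only (not from the original post): the jump-back-and-redo flow from the pseudocode above can often be written as a plain loop with a retry counter, keeping the imperative style:
MAX_RETRIES = 3

def run_update():
    for attempt in range(MAX_RETRIES):
        enter_update_mode()       # step 1
        if start_update():        # step 2
            leave_update_mode()   # step 3
            return True
        # start_update() failed: fall through and redo from step 1
    raise RuntimeError("update failed after %d attempts" % MAX_RETRIES)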
Related discussion of explicit state machine vs. imperative style: https://softwareengineering.stackexchange.com/q/147182/62069

Selenium Get An Element Quickly

Currently I'm making a Python bot for WhatsApp manually, without APIs or anything of that sort, because I am clueless. As such, I'm using Selenium to read incoming messages and auto-reply. I'm noticing that every few messages, one message doesn't get picked up because the loops run too slowly, and my computer is already pretty fast. Here's the code:
def incoming_msges():
    msges = driver.find_elements_by_class_name("message-in")
    msgq = []
    tq = []
    try:
        for msg in msges:
            txt_msg = msg.find_elements_by_class_name("copyable-text")
            time = msg.find_elements_by_class_name("_18lLQ")
            for t in time:
                tq.append(t.text.lower())
            for txt in txt_msg:
                msgq.append(txt.text.lower())
        msgq = msgq[-1]
        tq = tq[-1]
        if len(msgq) > 0:
            return (msgq, tq)
    except StaleElementReferenceException:
        pass
    return False
Previously, I didn't have the time check, and the last message seen was saved; with this loop running continuously, even if the other party sent the same thing again, the code would not recognise it as a new message because it thought it was the same one as before. So now the problem is that my code is super time-consuming, and I have no idea how to speed it up. I tried doing this:
def incoming_msges():
    msges = browser.find_elements_by_class_name("message-in")
    try:
        msg = msges[-1]
        txt_msg = msg.find_element_by_xpath("/span[@class=\"copyable-text\"]").text.lower()
        time = msg.find_element_by_xpath("/span[@class=\"_18lLQ\"]").text.lower()
        return (txt_msg, time)
    except Exception:
        pass
    return False
However, like this, the code just doesn't find any messages. I have the elements' tags and classes correct, according to the WhatsApp Web page, but it just doesn't run. What's the correct way of rewriting my first code block, since it is the one that still works? Thanks in advance.
First things first ...
I definitely recommend using the API, because what you are trying to do here is reinvent the wheel. The API has the power of telling you when there is a change in your status, and you can queue these changes. So I definitely recommend using the API. It might be hard at the beginning, but trust me, it's worth it ...
Next, I would recommend using normal variable names. msges, msgq, tq are kind of unreadable, and I still don't get what they are supposed to be after reading the code twice ...
But to your speed problem ... try/except blocks are really heavy on performance ... I would recommend defensive programming if possible (20 if statements might be faster, but might not be at the same time) ... Also, I think you are somewhat unaware of how the Python language works (at least from what I can see here):
msgq = msgq[-1]  # you are telling it to take the last element and rebind the list variable to that element ... to be more specific:
# msgq ([1, 2, 3, 4]) = msgq[-1] (4) will result in msgq = 4 (which in my opinion hits your performance as well)
tq = tq[-1]      # same here
This would be better :)
if len(msgq[-1]) > 0:
    return (msgq[-1], tq[-1])
If I understand your code correctly, you are trying to scrape the messages; but if, as you say, you want to make an auto-reply bot, I would recommend you either get ready for some JS magic or switch tools. I personally noticed that Selenium has a problem with dynamic content ... to be more specific ... once it's at the end of the file, it does not scrape it again ... so unless you want to auto-refresh every 5-10 seconds to get the latest HTML, I recommend either creating this bot in JS (triggered every time an element changes) or using the API and keeping Selenium just for responses. I was told that Selenium was created to simulate the common user, to check that the user interface works as it should (buttons exist, the website contains all it should, etc.) ... I think that Selenium is, for this job, something like a small sponge for cleaning a car ... you can do it ... but it's going to cost you a lot of time and you might miss some spots (like you missed those messages) ...
Lastly ... working with strings is really costly in general. You are doing O(n^2) operations in a try block ... which I can imagine can be really costly ... if possible, I would reduce the number of inner for loops.
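As a rough illustration of that advice (a sketch only, reusing the class names from the question and the same Selenium 3 API; untested against WhatsApp Web):
def incoming_msges():
    msges = driver.find_elements_by_class_name("message-in")
    if not msges:
        return False
    last = msges[-1]  # only inspect the newest message instead of looping over all of them
    try:
        text = last.find_element_by_class_name("copyable-text").text.lower()
        stamp = last.find_element_by_class_name("_18lLQ").text.lower()
        return (text, stamp)
    except StaleElementReferenceException:
        return False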
I wish you good luck with this project, and I hope you find the answer you seek; I hope my answer was at least a little helpful.

Python Memory leak using Yocto

I'm running a Python script on a Raspberry Pi that constantly checks a Yocto button, and when it gets pressed it puts data from a different sensor into a database.
A code snippet of what constantly runs:
# when all set and done, run the program
Active = True
while Active:
    if ResponseType == "b":
        while Active:
            try:
                if GetButtonPressed(ResponseValue):
                    DoAllSensors()
                    time.sleep(5)
                else:
                    time.sleep(0.5)
            except KeyboardInterrupt:
                Active = False
            except Exception, e:
                print str(e)
                print "exception raised, continuing after 10 seconds"
                time.sleep(10)
The GetButtonPressed(ResponseValue) function looks like the following:
def GetButtonPressed(number):
    global buttons
    if ModuleCheck():
        if buttons[number - 1].get_calibratedValue() < 300:
            return True
    else:
        print "module not online"
    return False

def ModuleCheck():
    global moduleb
    return moduleb.isOnline()
I'm not quite sure what might be going wrong, but it takes about an hour before the RPi runs out of memory.
The memory usage increases constantly, and the button is only pressed once every 15 minutes or so.
That already tells me the problem must be in the code displayed above.
The problem is that the yocto_api.YAPI object will continue to accumulate _Event objects in its _DataEvents dict (a class-wide attribute) until you call YAPI.YHandleEvents. If you're not using the API's callbacks, it's easy to think (I did, for hours) that you don't need to ever call this. The API docs aren't at all clear on the point:
If your program includes significant loops, you may want to include a call to this function to make sure that the library takes care of the information pushed by the modules on the communication channels. This is not strictly necessary, but it may improve the reactivity of the library for the following commands.
I did some playing around with API-level callbacks before I decided to periodically poll the sensors in my own code, and it's possible that some setting got left enabled in them that is causing these events to accumulate. If that's not the case, I can't imagine why they would say calling YHandleEvents is "not strictly necessary," unless they make ARM devices with unlimited RAM in Switzerland.
Here's the magic static method that thou shalt call periodically, no matter what. I'm doing so once every five seconds and that is taking care of the problem without loading down the system at all. API code that would accumulate unwanted events still smells to me, but it's time to move on.
# noinspection PyUnresolvedReferences
@staticmethod
def HandleEvents(errmsgRef=None):
    """
    Maintains the device-to-library communication channel.
    If your program includes significant loops, you may want to include
    a call to this function to make sure that the library takes care of
    the information pushed by the modules on the communication channels.
    This is not strictly necessary, but it may improve the reactivity
    of the library for the following commands.
    This function may signal an error in case there is a communication problem
    while contacting a module.
    @param errmsg : a string passed by reference to receive any error message.
    @return YAPI.SUCCESS when the call succeeds.
    On failure, throws an exception or returns a negative error code.
    """
    errBuffer = ctypes.create_string_buffer(YAPI.YOCTO_ERRMSG_LEN)
    # noinspection PyUnresolvedReferences
    res = YAPI._yapiHandleEvents(errBuffer)
    if YAPI.YISERR(res):
        if errmsgRef is not None:
            # noinspection PyAttributeOutsideInit
            errmsgRef.value = YByte2String(errBuffer.value)
        return res
    while len(YAPI._DataEvents) > 0:
        YAPI.yapiLockFunctionCallBack(errmsgRef)
        if not (len(YAPI._DataEvents)):
            YAPI.yapiUnlockFunctionCallBack(errmsgRef)
            break
        ev = YAPI._DataEvents.pop(0)
        YAPI.yapiUnlockFunctionCallBack(errmsgRef)
        ev.invokeData()
    return YAPI.SUCCESS
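To tie this back to the question's loop: a minimal sketch of the fix (the five-second cadence is from my setup above; the surrounding loop is the asker's):
last_handled = time.time()
while Active:
    # ... existing button polling ...
    if time.time() - last_handled >= 5:
        YAPI.HandleEvents()  # drains YAPI._DataEvents so _Event objects stop accumulating
        last_handled = time.time()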

"Online" monkey patching of a function

Your program just paused on a pdb.set_trace().
Is there a way to monkey patch the function that is currently running, and "resume" execution?
Is this possible through call frame manipulation?
Some context:
Oftentimes, I will have a complex function that processes large quantities of data, without having a priori knowledge of what kind of data I'll find:
def process_a_lot(data_stream):
    # process a lot of stuff
    # ...
    data_unit = data_stream.next()
    if not can_process(data_unit):
        import pdb; pdb.set_trace()
    # continue processing
This convenient construction launches an interactive debugger when it encounters unknown data, so I can inspect it at will and change the process_a_lot code to handle it properly.
The problem here is that, when data_stream is big, you don't really want to chew through all the data again (let's assume next is slow, so you can't save what you already have and skip it on the next run).
Of course, you can replace other functions at will once in the debugger. You can also replace the function itself, but it won't change the current execution context.
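To make that last point concrete (an illustrative sketch, not from the original question): rebinding the module-level name changes what future calls get, but the frame that is currently paused keeps executing its old code object.
def f():
    import pdb; pdb.set_trace()
    print "still running the old body"   # the paused frame keeps its old code

# in the debugger:
#   (Pdb) import __main__
#   (Pdb) __main__.f = lambda: "new"   # future calls to f() get the new body
#   (Pdb) continue                     # but this call still finishes the old one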
Edit:
Since some people are getting side-tracked:
I know there are a lot of ways of structuring your code such that your processing function is separate from process_a_lot. I'm not really asking about ways to structure the code as much as how to recover (in runtime) from the situation when the code is not prepared to handle the replacement.
First a (prototype) solution, then some important caveats.
# process.py
import sys
import pdb
import handlers

def process_unit(data_unit):
    global handlers
    while True:
        try:
            data_type = type(data_unit)
            handler = handlers.handler[data_type]
            handler(data_unit)
            return
        except KeyError:
            print "UNUSUAL DATA: {0!r}".format(data_unit)
            print "\n--- INVOKING DEBUGGER ---\n"
            pdb.set_trace()
            print
            print "--- RETURNING FROM DEBUGGER ---\n"
            del sys.modules['handlers']
            import handlers
            print "retrying"

process_unit("this")
process_unit(100)
process_unit(1.04)
process_unit(200)
process_unit(1.05)
process_unit(300)
process_unit(4 + 3j)
sys.exit(0)
And:
# handlers.py
def handle_default(x):
    print "handle_default: {0!r}".format(x)

handler = {
    int: handle_default,
    str: handle_default,
}
In Python 2.7, this gives you a dictionary linking expected/known types to functions that handle each type. If no handler is available for a type, the user is dropped into the debugger, giving them a chance to amend the handlers.py file with appropriate handlers. In the above example, there is no handler for float or complex values. When they come, the user will have to add appropriate handlers. For example, one might add:
def handle_float(x):
    print "FIXED FLOAT {0!r}".format(x)
handler[float] = handle_float
And then:
def handle_complex(x):
    print "FIXED COMPLEX {0!r}".format(x)
handler[complex] = handle_complex
Here's what that run would look like:
$ python process.py
handle_default: 'this'
handle_default: 100
UNUSUAL DATA: 1.04
--- INVOKING DEBUGGER ---
> /Users/jeunice/pytest/testing/sfix/process.py(18)process_unit()
-> print
(Pdb) continue
--- RETURNING FROM DEBUGGER ---
retrying
FIXED FLOAT 1.04
handle_default: 200
FIXED FLOAT 1.05
handle_default: 300
UNUSUAL DATA: (4+3j)
--- INVOKING DEBUGGER ---
> /Users/jeunice/pytest/testing/sfix/process.py(18)process_unit()
-> print
(Pdb) continue
--- RETURNING FROM DEBUGGER ---
retrying
FIXED COMPLEX (4+3j)
Okay, so that basically works. You can improve and tweak that into a more production-ready form, making it compatible across Python 2 and 3, et cetera.
Please think long and hard before you do it that way.
This "modify the code in real-time" approach is an incredibly fragile pattern and error-prone approach. It encourages you to make real-time hot fixes in the nick of time. Those fixes will probably not have good or sufficient testing. Almost by definition, you have just this moment discovered you're dealing with a new type T. You don't yet know much about T, why it occurred, what its edge cases and failure modes might be, etc. And if your "fix" code or hot patches don't work, what then? Sure, you can put in some more exception handling, catch more classes of exceptions, and possibly continue.
Web frameworks like Flask have debug modes that work basically this way. But those are debug modes, and generally not suited for production. Moreover, what if you type the wrong command in the debugger? Accidentally type "quit" rather than "continue" and the whole program ends, and with it, your desire to keep the processing alive. If this is for use in debugging (exploring new kinds of data streams, maybe), have at.
If this is for production use, consider instead a strategy that sets aside unhandled-types for asynchronous, out-of-band examination and correction, rather than one that puts the developer / operator in the middle of a real-time processing flow.
No.
You can't monkey-patch a currently running Python function and continue pressing on as though nothing had happened. At least not in any general or practical way.
In theory, it is possible--but only under limited circumstances, with much effort and wizardly skill. It cannot be done with any generality.
To make the attempt, you'd have to:
1. Find the relevant function source and edit it (straightforward)
2. Compile the changed function source to bytecode (straightforward)
3. Insert the new bytecode in place of the old (doable)
4. Alter the function housekeeping data to point at the "logically same point" in the program where it exited to pdb (iffy, under some conditions)
5. "Continue" from the debugger, falling back into the debugged code (iffy)
There are some circumstances where you might achieve 4 and 5, if you knew a lot about the function housekeeping and analogous debugger housekeeping variables. But consider:
The bytecode offset at which your pdb breakpoint is called (f_lasti in the frame object) might change. You'd probably have to narrow your goal to "alter only code further down in the function's source code than the breakpoint occurred" to keep things reasonably simple--else, you'd have to be able to compute where the breakpoint is in the newly-compiled bytecode. That might be feasible, but again under restrictions (such as "will only call pdb_trace() once, or similar "leave breadcrumbs for post-breakpoint analysis" stipulations).
You're going to have to be sharp at patching up function, frame, and code objects. Pay special attention to func_code in the function (__code__ if you're also supporting Python 3); f_lasti, f_lineno, and f_code in the frame; and co_code, co_lnotab, and co_stacksize in the code.
For the love of God, hopefully you do not intend to change the function's parameters, name, or other macro defining characteristics. That would at least treble the amount of housekeeping required.
More troubling, adding new local variables (a pretty common thing you'd want to do to alter program behavior) is very, very iffy. It would affect f_locals, co_nlocals, and co_stacksize--and quite possibly, completely rearrange the order and way bytecode accesses values. You might be able to minimize this by adding assignment statements like x = None to all your original locals. But depending on how the bytecodes change, it's possible you'll even have to hot-patch the Python stack, which cannot be done from Python per se. So C/Cython extensions could be required there.
Here's a very simple example showing that bytecode ordering and arguments can change significantly even for small alterations of very simple functions:
def a(x):              LOAD_FAST      0 (x)
    y = x + 1          LOAD_CONST     1 (1)
    return y           BINARY_ADD
                       STORE_FAST     1 (y)
                       LOAD_FAST      1 (y)
                       RETURN_VALUE
------------------     ------------------
def a2(x):             LOAD_CONST     1 (2)
    inc = 2            STORE_FAST     1 (inc)
    y = x + inc        LOAD_FAST      0 (x)
    return y           LOAD_FAST      1 (inc)
                       BINARY_ADD
                       STORE_FAST     2 (y)
                       LOAD_FAST      2 (y)
                       RETURN_VALUE
Be equally sharp at patching some of the pdb values that track where it's debugging, because when you type "continue," those are what dictates where control flow goes next.
Limit your patchable functions to those that have rather static state. They must, for example, never have objects that might be garbage-collected before the breakpoint is resumed, but accessed after it (e.g. in your new code). E.g.:
some = SomeObject()
# blah blah, including last touch of `some`
# ...
pdb.set_trace()
# Look, Ma! I'm monkey-patching!
if some.some_property:
    # oops, `some` was GC'd - DIE DIE DIE
While "ensuring the execution environment for the patched function is same as it ever was" is potentially problematic for many values, it's guaranteed to crash and burn if any of them exit their normal dynamic scope and are garbage-collected before patching alters their dynamic scope/lifetime.
Assert you only ever want to run this on CPython, since PyPy, Jython, and other Python implementations don't even have standard Python bytecodes and do their function, code, and frame housekeeping differently.
I would love to say this super-dynamic patching is possible. And I'm sure you can, with a lot of housekeeping object twiddling, construct simple cases where it does work. But real code has objects that go out of scope. Real patches might want new variables allocated. Etc. Real world conditions vastly multiply the effort required to make the patching work--and in some cases, make that patching strictly impossible.
And at the end of the day, what have you achieved? A very brittle, fragile, unsafe way to extend your processing of a data stream. There is a reason most monkey-patching is done at function boundaries, and even then, reserved for a few very-high-value use cases. Production data streaming is better served adopting a strategy that sets aside unrecognized values for out-of-band examination and accommodation.
If I understand correctly:
you don't want to repeat all the work that has already been done
you need a way to replace the #continue processing as usual with the new code once you have figured out how to handle the new data
@user2357112 was on the right track: expected_types should be a dictionary of
data_type: (detect_function, handler_function)
and detect_type needs to go through it to find a match. If no match is found, pdb pops up; you can then figure out what's going on, write a new detect_function and handler_function, add them to expected_types, and continue from pdb.
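A minimal sketch of that dictionary dispatch (the names expected_types and detect_type follow the suggestion above; the CSV entry is purely illustrative):
expected_types = {
    # type name -> (predicate that recognizes the data, function that handles it)
    'csv_row': (lambda u: isinstance(u, str) and ',' in u, lambda u: u.split(',')),
}

def detect_type(data_unit):
    for name, (detect, handle) in expected_types.items():
        if detect(data_unit):
            return handle
    return None  # caller drops into pdb; the user extends expected_types, then retries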
What I wanted to know is if there's a way to monkey patch the function that is currently running (process_a_lot), and "resume" execution.
So you want to somehow, from within pdb, write a new process_a_lot function, and then transfer control to it at the location of the pdb call?
Or, do you want to rewrite the function outside pdb, and then somehow reload that function from the .py file and transfer control into the middle of the function at the location of the pdb call?
The only possibility I can think of is: from within pdb, import your newly written function, then replace the current process_a_lot byte-code with the byte-code from the new function (I think it's func.co_code or something). Make sure you change nothing in the new function (not even the pdb lines) before the pdb lines, and it might work.
But even if it does, I would imagine it is a very brittle solution.

wxPython: Using EVT_IDLE

I defined a handler for EVT_IDLE that does a certain background task for me. (That task is to take completed work from a few processes and integrate it into some object, making a visible change in the GUI.)
The problem is that when the user is not moving the mouse or doing anything, EVT_IDLE doesn't get called more than once. I would like this handler to be working all the time. So I tried calling event.RequestMore() at the end of the handler. Works, but now it takes a whole lot of CPU. (I'm guessing it's just looping excessively on that task.)
I'm willing to limit the number of times the task will be carried out per second; How do I do that?
Or do you have another solution in mind?
Something like this (executes at most every second):
...
def On_Idle(self, event):
    if not self.queued_batch:
        wx.CallLater(1000, self.Do_Batch)
        self.queued_batch = True

def Do_Batch(self):
    # <- insert your stuff here
    self.queued_batch = False
...
Oh, and don't forget to set self.queued_batch to False in the constructor and maybe call event.RequestMore() in some way in On_Idle.
This sounds like a use case for wxTimer instead of wxIdleEvent. When there is processing to do, call wxTimer.Start(); when there isn't any, call wxTimer.Stop(), and call your processing methods from the EVT_TIMER handler.
(Note: I use wxWidgets from C++ and am not familiar with wxPython, but I assume they have a similar API.)
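In wxPython terms, that might look roughly like this (a sketch only; the frame class and handler names are made up):
import wx

class WorkerFrame(wx.Frame):
    def __init__(self):
        wx.Frame.__init__(self, None, title="worker")
        self.timer = wx.Timer(self)
        self.Bind(wx.EVT_TIMER, self.OnTimer, self.timer)
        self.timer.Start(1000)  # fire EVT_TIMER at most once per second

    def OnTimer(self, event):
        # take completed work from the worker processes and update the GUI here
        pass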
