Dynamic key binding in Python

I am making a drawing program in Python with pygame right now. The interface is supposed to be vimesque, allowing the user to control most things with key presses and entered commands. I want to allow live binding of the keys; the user should be able to change which keycode corresponds to which function. In my current structure, all bindings are stored in a dictionary mapping keycodes to functions, 'bindingDict'. Whenever the main loop receives a KEYDOWN event, I execute:
bindingDict[keyCode]()
where keyCode is stored as an integer.
This works, but it seems to be taking a lot of time and I am having trouble thinking of ways I could optimize.
Does anyone know the big-O runtime of dict lookups? I assumed that, because it's hashed, it would run in O(log n), but there's a huge difference in performance between this solution and just writing a list of if statements in the main loop (which does not allow for dynamic binding).

It is rather unlikely that a dictionary lookup in response to a user event would cause any noticeable delay in the program. Something else is going wrong in your code.
By the way, dict and set lookups in Python are O(1) on average - but 105 keys, or even, counting applied modifiers, about 1000 different keybindings, could be searched linearly (that is, if the search were O(N)) without noticeable delay, even on a 5-year-old desktop CPU.
So just post some of your code if you want a solution to your problem. (Reading the comments, I see you've already found something else that was responsible.)
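For reference, a minimal sketch of the dict-based dispatch described above, with placeholder action functions (the function names and key choices are illustrative, not from the original program):

import pygame

# Illustrative actions; in the real program these would be drawing commands.
def draw_line():
    print('drawing a line')

def undo():
    print('undoing')

# keycode -> function; rebinding a key is just reassigning a dictionary entry.
bindingDict = {
    pygame.K_d: draw_line,
    pygame.K_u: undo,
}

def rebind(keycode, action):
    bindingDict[keycode] = action

def handle_events():
    # Called from the main loop; each key press is one average-case O(1) lookup.
    for event in pygame.event.get():
        if event.type == pygame.KEYDOWN:
            action = bindingDict.get(event.key)
            if action is not None:
                action()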

Related

Strategies for speeding up string searches in Python

I need some help. I've been working on a file searching app as I learn Python, and it's been a very interesting experience so far, learned a lot and realized how little that actually is.
So, it's my first app, and it needs to be fast! I am unsatisfied with (among other things) the speed of finding matches for sparse searches.
The app caches file and folder names as dbm keys, and the search is basically running search words past these keys.
The GUI is in Tkinter, and to try not to get it jammed, I've put my search loop in a thread. The thread receives queries from the GUI via a queue, then passes results back via another queue.
That's how the code looks:
def TMakeSearch(fdict, squeue=None, rqueue=None):
    '''Circumventing StopIteration(), did not see speed advantage'''
    RESULTS_PER_BATCH = 50

    if whichdb(DB) == 'dbhash' or 'dumb' in whichdb(DB):
        '''iteration is not implemented for gdbm and (n)dbm, forced to
        pop the keys out in advance for "for key in fdict:" '''
        fdict = fdict
    else:
        # 'dbm.gnu', 'gdbm', 'dbm.ndbm', 'dbm'
        fdict = fdict.keys()

    search_list = None
    while True:
        query = None
        while not squeue.empty():
            # more items may get in (or not?) while the condition is checked
            query = squeue.get()

        try:
            search_list = query.lower().encode(ENCODING).split()
            if Tests.is_query_passed:
                print(search_list)
        except:
            # No new query, or a new database has been created and needs to be synced
            sleep(0.1)
            continue
        else:
            is_new_query = True

        result_batch = []
        for key in fdict:
            separator = '*'.encode(ENCODING)  # Python 3, yaaay
            filename = key.split(separator)[0].lower()

            # Add key if matching
            for token in search_list:
                if token not in filename:
                    break
            else:
                # Loop hasn't ended abruptly
                result_batch.append(key)

            if len(result_batch) >= RESULTS_PER_BATCH:
                # Time to send off a batch
                rqueue.put((result_batch, is_new_query))
                if Tests.is_result_batch:
                    print(result_batch, len(result_batch))
                    print('is_result_batch: results on queue')
                result_batch = []
                is_new_query = False
                sleep(0.1)

            if not squeue.empty():
                break

        # Loop ended naturally, with some batch < 50
        rqueue.put((result_batch, is_new_query))
Once there are only a few matching results, they cease to arrive in real time and instead take a few seconds, and that's on my smallish 120GB hard disk.
I believe it can be faster, and wish to make the search real-time.
What approaches exist to make the search faster?
My current ideas all involve ramping up the machinery that I use - use multiprocessing somehow, use Cython, perhaps somehow use ctypes to make the searches circumvent the Python runtime.
However, I suspect there are simpler things that can be done to make it work, as I am not savvy with Python and optimization.
Assistance please!
I wish to stay within the standard library if possible, as a proof of concept and for portability (currently I only use scandir as an external library, on Python <3.5), so for example ctypes would be preferable to Cython.
If it's relevant/helpful, the rest of the code is here -
https://github.com/h5rdly/Jiffy
EDIT:
This is the heart of the function, give or take a few pre-arrangements:
for key in fdict:
    for token in search_list:
        if token not in key:
            break
    else:
        result_batch.append(key)
where search_list is a list of strings, and fdict is a dictionary or a dbm (didn't see a speed difference trying both).
This is what I wish to make faster, so that results arrive in real-time, even when there are only few keys containing my search words.
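For what it's worth, one small rearrangement - sketched here under the assumption that the keys fit comfortably in memory, with the helper name iter_matches made up for illustration - is to do the per-key split/lowercase work once, outside the query loop, so each query only pays for the substring tests:

# Done once, after loading the dbm (assumes the keys fit in memory):
separator = '*'.encode(ENCODING)
names = [(key, key.split(separator)[0].lower()) for key in fdict]

def iter_matches(search_list):
    # Per query, only the substring tests remain in the hot loop.
    for key, filename in names:
        if all(token in filename for token in search_list):
            yield key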
EDIT 2:
On @hpaulj's advice, I've put the dbm keys in a (frozen) set, to gain a noticeable improvement on Windows/Python 2.7 (dbhash):
I have some caveats though -
For my ~50GB in use, the frozenset takes 28MB, as measured by pympler.asizeof. So for the full 1TB, I suspect it'll take a nice share of RAM.
On Linux, for some reason, the conversion not only doesn't help, but the query itself stops getting updated in real time for the duration of the search, making the GUI look unresponsive.
On Windows, this is almost as fast as I want, but still not warp-immediate.
So it comes down to this addition:
if 'win' in sys.platform:
    try:
        fdict = frozenset(fdict)
    except:
        fdict = frozenset(fdict.keys())
Since it would take a significant amount of RAM for larger disks, I think I'll add it as an optional faster search for now, "Scorch Mode".
I wonder what to do next. I thought that perhaps, if I could somehow export the keys/filenames to a datatype that ctypes can pass along, I could then pop a relevant C function to do the searches.
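To make the ctypes idea concrete, one possible shape of the hand-off is sketched below; libsearch.so and its find_matches function are entirely hypothetical, and only the ctypes plumbing itself is meant literally:

import ctypes

# Hypothetical C library: find_matches(names, needles, out) is assumed to scan
# newline-separated names, write the matching ones into out, and return the
# number of bytes written.
lib = ctypes.CDLL('./libsearch.so')
lib.find_matches.restype = ctypes.c_ssize_t

names_blob = b'\n'.join(fdict)          # export the cached keys as one bytes object
needles = b'\n'.join(search_list)       # the query tokens
out = ctypes.create_string_buffer(len(names_blob))
n = lib.find_matches(names_blob, needles, out)
matches = out.raw[:n].split(b'\n') if n else []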
Also, perhaps learn the Python bytecode and do some lower-level optimization.
I'd like this to be as fast as Python would let me, please advise.

Loadable program checkpoints in python (serializable continuations/program images)

Let's say I have a Python program that does the following:
do_some_long_data_crunching_computation()
call_some_fiddly_new_crashing_function()
Is there a way to "freeze" and serialize the state of the Python program (all globals and such) after the first long computation returns, and then iterate on the development of the new function, restarting the program's execution from that point?
This is somewhat possible, if you are running the program from the interpreter, but is there any other way?
Well, no.
The point is that Python and your OS aren't free of side effects (Python really isn't even remotely functional, though it has some features of functional languages), so restoring the state of your program doesn't really work in the general case. You'd basically have to re-run the program with exactly the computer state you had when you started it the last time.
What you can do, though, is save the state of the variables that are important to you after your long operation, using pickle or similar.
Now, I know you'd like to avoid deciding what to store and how to restore it, but needing to decide is basically a sign of unclean design: you should, as far as possible, store the state of your computation in a single state object; serializing and deserializing that object is then easy. Don't store computational state in globals!
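A minimal sketch of that approach, assuming the long computation returns a single state object (the checkpoint file name and the idea of passing the state into the second function are my additions):

import pickle

CHECKPOINT = 'checkpoint.pkl'   # assumed file name

def load_or_compute():
    # Reuse a saved checkpoint if one exists; otherwise run the long
    # computation once and save its result for the next run.
    try:
        with open(CHECKPOINT, 'rb') as f:
            return pickle.load(f)
    except (OSError, pickle.UnpicklingError):
        state = do_some_long_data_crunching_computation()
        with open(CHECKPOINT, 'wb') as f:
            pickle.dump(state, f)
        return state

state = load_or_compute()
call_some_fiddly_new_crashing_function(state)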

Programmatically understanding Python code without executing it

I am implementing a workflow management system where the workflow developer overrides a small process function and inherits from a Workflow class. The class offers a method named add_component for adding a component to the workflow (a component can be the execution of a piece of software, or something more complex).
My Workflow class needs to know, in order to display status, which components have been added to the workflow. To do this I tried two things:
- execute the process function twice: the first pass gathers all the required components, the second is the real execution. The problem is that if the workflow developer does anything other than adding components (adds a row to a database, creates a file), it will be done twice!
- parse the Python code of the function to extract only the add_component lines. This works, but if some components are inside an if/else statement and a component should not be executed, it still appears in the monitoring!
I'm wondering if there is another solution (I thought about making my workflow an XML or something easier to parse, but that is less flexible).
You cannot know what a program does without "executing" it (it could be in some context where you mock the things you don't want modified, but that looks like shooting at a moving target).
If you do handmade parsing there will always be cases you miss.
You should break the code into two functions:
- a first one where the code can only add_component(s), without any side effects, but with the possibility to run real code to check the environment etc. to decide which components to add;
- a second one that can have side effects and relies on the added components.
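A minimal sketch of that split, with every name other than add_component invented for illustration:

class Workflow:
    def __init__(self):
        self.components = []

    def add_component(self, component):
        # Phase 1 is the only place this should be called from.
        self.components.append(component)

    def declare(self):
        '''Phase 1: only add_component calls, no side effects.
        Safe to run just to populate the status display.'''
        raise NotImplementedError

    def process(self):
        '''Phase 2: the real execution, side effects allowed.'''
        raise NotImplementedError


class MyWorkflow(Workflow):
    def declare(self):
        self.add_component('align reads')
        self.add_component('generate report')

    def process(self):
        for component in self.components:
            print('running', component)   # real side effects would go here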
Using an XML (or any static format) is similar, except:
- you are certain there are no side effects (you don't need to rely on the programmer respecting the documentation);
- it is much less flexible, so be sure you actually need that flexibility.

The Foundry Nuke – program a keystroke (backspace key)

So if you don't know The Foundry Nuke, I'm not sure you can help me, so read on at the risk of your own time. If you're still here, awesome! Either you know it, or you think you can help anyway and are an awesome person.
Basically I'm using The Foundry Ocula inside Nuke and creating a Python script to automate some stuff for me. It goes ahead X frames, adds an analysis key, moves ahead again, adds a key, and so on. What I want is to delete the key matches that get thresholded out by error (which is usually done with the Backspace key), but I can't find a script command in Ocula to delete selected keys, nor can I find a way to script something in Python like
nuke.keystroke('backspace')
to make Nuke react like someone just pressed the Backspace key in the GUI. That code above is just an example of what I want... of course it's never that easy.
Thanks in advance!
Try the following method, but note that after erasing all the keys in a range, the midtones.gain knob will hold a curve value instead of the default 1 (this is an abstract example):
nuke.animation("ColorCorrect1.midtones.gain", "erase", ("27", "53"))
Or, to copy an expression (generated for the chosen keyframes) from a handmade userKnob to the multiply knob, use this method:
nuke.animation("Grade9.multiply", "expression", ("Grade9.userKnob",))

PyQt4: Modularizing/Scaling my GUI components?

I'm designing a (hopefully) simple GUI application using PyQt4 that I'm trying to make scalable. In brief, the user inputs some basic information and sends it into one of n queues (implementing waiting lists). Each of these n queues (QTableViews) is identical, and each has controls to pop from, delete from and rearrange its queue. These, along with some labels etc., form a 'module'. Currently my application is hardcoded to 4 queue modules, so there are elements named btn_table1_pop, btn_table2_pop... etc.; four copies of every single module widget. This is obviously not very good UI design if you always assume your clients have four people that need waiting lists! I'd like to be able to easily modify this program so 8 people could use it, or 3 people could use it without a chunk of useless screen real estate!
The really naive solution to programming my application is duplicating the code for each module, but this is really messy, unmaintainable, and bounds my application to exactly four queues. A better thought would be to write functions for each button that set an index and call a function implementing the common logic, but I'm still hardcoded to 4, because the branch logic and the calling functions still have to take the names of the elements into account. If there were a way to 'vectorize' the names of the elements so that I could, for example, write
btn_table[index]_pop.setEnabled(False)
...I could eliminate this branch logic and really condense my code. But I'm way too new at Python/PyQt to know if this is 1) even possible? or 2) how to even go about it/if this is even the way to go?
Thanks again, SO.
In case anyone is interested, I was able to get it working with dummybutton = getattr(self, 'btn_table{}'.format(i)) and then calling the button's methods on dummybutton.
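For reference, a minimal sketch of that approach from inside the widget's methods, assuming the btn_table{N}_pop naming pattern from the question (the bound of 4 is just the example count):

# Look widgets up by name instead of hardcoding each one.
for i in range(1, 5):                                   # tables 1..4
    button = getattr(self, 'btn_table{}_pop'.format(i))
    button.setEnabled(False)

# Collecting them once into a list also works, so later code can index them:
self.pop_buttons = [getattr(self, 'btn_table{}_pop'.format(i))
                    for i in range(1, 5)]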
