What I'm trying to do:
I have a Python (PySide) and Qt/QML UI that today responds to keyboard input (the input actually comes from an IR remote control, but the events arrive as though they were coming from a keyboard).
I want to be able to also respond to mouse events. So where today the user uses arrow keys to navigate to a particular button and presses OK (i.e., Return) to activate the handling for that button, I would like them to just be able to click the mouse on that button and get the same handling behavior.
What I have so far:
I've already got Keys.onReturnPressed: handling in my QML code that does what I need to do when the user presses OK/Return. And I've added MouseArea { ... onClicked: {...} ... } QML code that recognizes when I click on a given control. So I already see in my log when the mouse click events occur.
My question:
How do I tie them together? I want to make the onClicked: behavior just generate some kind of event (a signal, maybe?) that causes the onReturnPressed: handling to be invoked. (I'm not at all averse to passing events through the Python side of things if that's what it takes to make this work.)
(I guess I should mention here that the existing code includes some base classes (is that the right terminology here?) that can define behavior across ALL controls of a certain type (e.g., Buttons) in the system. So while each of the many, many Buttons has its own onReturnPressed: code providing its unique handling, my objective is to have a single onClicked: handler in the base class that makes all Buttons respond to clicks as they do to Return presses today.)
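To make that concrete, here's a rough sketch of the shape I'm after, as a self-contained example (I'm inventing every name here, and assuming PySide 1 with Qt 4.x QtDeclarative; my real code is structured differently, so treat this as illustration only):

import sys
import tempfile
from PySide.QtCore import QUrl
from PySide.QtGui import QApplication
from PySide.QtDeclarative import QDeclarativeView

QML = b"""
import QtQuick 1.1

Rectangle {
    width: 200; height: 80
    focus: true

    // One entry point shared by both input paths.
    function activate() { console.log("button activated") }

    Keys.onReturnPressed: activate()   // existing remote/keyboard path
    MouseArea {
        anchors.fill: parent
        onClicked: activate()          // new mouse path, same handling
    }
}
"""

app = QApplication(sys.argv)
with tempfile.NamedTemporaryFile(suffix=".qml", delete=False) as f:
    f.write(QML)                       # QML inlined here only for the demo
view = QDeclarativeView()
view.setSource(QUrl.fromLocalFile(f.name))
view.show()
sys.exit(app.exec_())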
Can anyone point me in the right direction?
BTW: Yes, I'm aware that there's a second problem here, too, of navigation. That is, even once I've turned the mouse click into a press of the Return key, I still have to solve the problem of associating it with the right control on the screen. I sort of didn't want to muddy the waters by asking too many things at once. I'll get to that one when I've got this one in hand. (...unless you've got a simple solution already up your sleeve... In that case I'm all ears.)
Related
I have a QAbstractItemView that needs to react to single and double click events. The actions are different depending on whether it was single clicked or double clicked. The problem that is occurring is that the single click event is received prior to the double click event.
Is there a recommended way/best practice for distinguishing between the two? I don't want to perform the single click action when the user has actually double clicked.
I am using Qt 4.6
It's good UI design to make sure your single-click and double-click actions are conceptually related:
Single-Click: select icon
Double-Click: select icon and open it
Single-Click: select color
Double-Click: select color and open palette editor
Notice how in these examples the single-click action is actually a subset of the double-click. This means you can go ahead and do your single-click action normally and just do the additional action if the double-click comes in.
If your user interface does something like:
Single-Click: select icon
Double-Click: close window
Then you are setting your users up to fail. Even if they always remember what single-clicking does versus double-clicking, it's very easy to accidentally move the mouse too far while double-clicking, or to wait too long.
Edit:
I'm sorry to hear that. In that case, I found these two articles useful:
Logical consequences of the way Windows converts single-clicks into double-clicks
Implementing higher-order clicks
You can find an answer in the thread titled Double Click Capturing on the QtCentre forum:
You could have a timer. Start the timer in the releaseEvent handler and make sure the timeout is long enough to handle the double click first. Then, in the double click event handler you can stop the timer and prevent it from firing. If a double click handler is not triggered, the timer will timeout and call a slot of your choice, where you can handle the single click. This is of course a nasty hack, but has a chance to work.
-- wysota
Using PySide (the Python binding of Qt 4.8), I saw that single clicks deliver a QEvent.MouseButtonPress event, while double clicks deliver a QEvent.MouseButtonPress closely followed by a QEvent.MouseButtonDblClick. The delay is roughly 100 ms on Windows. That means you still have a problem if you need to differentiate between single and double clicks.
The solution needs another QTimer with a slightly longer delay than the built-in one (adding some latency). When you observe a QEvent.MouseButtonPress event, you start your own timer; if it times out, it was a single click. If a QEvent.MouseButtonDblClick arrives instead, it was a double click, and you stop the timer so the press is not also counted as a single click.
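A minimal sketch of that idea, assuming PySide on Qt 4.8 (the view subclass and the two action methods are invented names, not part of any Qt API):

from PySide.QtCore import QTimer
from PySide.QtGui import QApplication, QListView

class ClickView(QListView):
    def __init__(self, parent=None):
        super(ClickView, self).__init__(parent)
        self._click_timer = QTimer(self)
        self._click_timer.setSingleShot(True)
        # Wait slightly longer than the platform's double-click interval.
        self._click_timer.setInterval(QApplication.doubleClickInterval() + 10)
        self._click_timer.timeout.connect(self._single_click_action)

    def mousePressEvent(self, event):
        super(ClickView, self).mousePressEvent(event)
        self._click_timer.start()   # arm: this may turn out to be a single click

    def mouseDoubleClickEvent(self, event):
        self._click_timer.stop()    # cancel the pending single-click action
        super(ClickView, self).mouseDoubleClickEvent(event)
        self._double_click_action()

    def _single_click_action(self):
        print("single click")

    def _double_click_action(self):
        print("double click")

if __name__ == "__main__":
    import sys
    app = QApplication(sys.argv)
    view = ClickView()
    view.show()
    sys.exit(app.exec_())

The visible cost is that every single-click action is delayed by roughly the double-click interval; that delay is inherent in this approach.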
Ctrl+Escape is a global Windows shortcut for opening the main system menu. But I would like my Qt application to use this shortcut without triggering the Windows main menu. I know it is probably a bad idea to override system shortcuts in general, but I would like to use this shortcut in a very limited use case.
This use case is as follows. I have a popup window containing several rows of items. This window is opened by Ctrl+Tab, and while the user holds Ctrl and keeps pressing Tab, the rows are cycled through. When the user releases Ctrl, the current row is used for some operation... But sometimes the user presses Ctrl+Tab and then realizes he does not want to continue. He usually presses Escape while still holding Ctrl. And then it triggers the Windows system menu: a normal user gets confused, a choleric user gets angry... which is a bad thing. In other words, I would like to be able to close the popup window when the user presses Ctrl+Escape. How do I do that? Is it even possible?
If I write the code using this shortcut like any other shortcut, it does not work: it always triggers the Windows main menu.
As I understand it, Qt will typically not receive the key event if the underlying window system has intercepted it. For example even QtCreator cannot override system-wide shortcuts.
This question is almost a duplicate of: C++/Qt Global Hotkeys
While that question is asking specifically to capture shortcuts in a hidden/background application, I think the basic concept is the same -- capture shortcuts before the window system processes them.
From that answer, UGlobalHotkey seems pretty good, and the How to use System-Wide Hotkeys in your Qt application blog post could be useful for your limited-use case (but read the comments on that blog post about fixing the example).
Also found:
https://github.com/mitei/qglobalshortcut
https://github.com/Skycoder42/QHotkey (looks like a more detailed version of above)
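For this limited use case, one low-level route on Windows is the Win32 RegisterHotKey API, picked up in Qt via nativeEvent. Below is a hedged sketch using PyQt5 (PyQt5 and all names here are my assumptions; Windows may still refuse to hand Ctrl+Escape over, in which case a WH_KEYBOARD_LL keyboard hook is the usual fallback):

import ctypes
from ctypes import wintypes
from PyQt5 import QtWidgets

MOD_CONTROL = 0x0002   # Win32 modifier flag for Ctrl
VK_ESCAPE = 0x1B       # Win32 virtual-key code for Escape
WM_HOTKEY = 0x0312     # message posted when a registered hotkey fires
HOTKEY_ID = 1

user32 = ctypes.windll.user32
user32.RegisterHotKey.argtypes = [wintypes.HWND, ctypes.c_int,
                                  wintypes.UINT, wintypes.UINT]
user32.UnregisterHotKey.argtypes = [wintypes.HWND, ctypes.c_int]

class PopupWindow(QtWidgets.QWidget):
    def __init__(self):
        super().__init__()
        # Claim Ctrl+Escape for this window; this can fail if the system
        # or another application refuses to give the combination up.
        if not user32.RegisterHotKey(int(self.winId()), HOTKEY_ID,
                                     MOD_CONTROL, VK_ESCAPE):
            print("RegisterHotKey failed; Ctrl+Esc stays with the system")

    def nativeEvent(self, eventType, message):
        # WM_HOTKEY arrives here instead of opening the system menu.
        msg = wintypes.MSG.from_address(int(message))
        if msg.message == WM_HOTKEY and msg.wParam == HOTKEY_ID:
            self.close()  # e.g. dismiss the Ctrl+Tab popup
            return True, 0
        return super().nativeEvent(eventType, message)

    def closeEvent(self, event):
        user32.UnregisterHotKey(int(self.winId()), HOTKEY_ID)
        super().closeEvent(event)

if __name__ == "__main__":
    import sys
    app = QtWidgets.QApplication(sys.argv)
    w = PopupWindow()
    w.show()
    sys.exit(app.exec_())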
I'm very keen on being able to use the keyboard to do everything with a GUI and am currently exploring QTreeView and QTableView among other things.
I'm adding a lot of my own hotkeys (shortcuts) and am devising a method to automatically generate a user list or guide to the available hotkeys.
But something like QTreeView also comes with its own built-in hotkeys, e.g. arrow keys for navigation, F2 to start editing, Ctrl-A for "select all", etc. I want to get a comprehensive list of these and include them in the automatically generated user guide.
I've got to this page, for example, but I haven't really got a clue how to dig down into PyQt5 components to unearth this kind of information programmatically.
There's some interesting functionality, probably unknown to many users, in QTreeView: if, in column 0, you have a tree structure, you can skip from label to label by pressing the first letter of each item's text. And if you enter two (or more) keys quickly enough, that works too: entering "ra" will skip over "Roma" and "Rimini" to "Ravenna", even though "Roma" and "Rimini" come first. It turns out that this is implemented by QTreeView.keyboardSearch ... but what I want to know is whether it's possible to find details of the "mapping" behind this and other keyboard functionality, often implemented in keyPressEvent, programmatically. Having looked a little at the PyQt5 files, it seems that a lot of PyQt5 functionality may ultimately be contained in .dll files (this is a W10 machine), so I'm not particularly optimistic.
Each widget has its own behavior for the hotkeys pressed, so there is no single document that lists all the cases; you have to review the documentation of each class and of its parent classes. For example, to understand the behavior of QTableView you should also review the documentation of QAbstractItemView, QAbstractScrollArea and QFrame (the same goes for QTreeView). With that in mind, we can collect the following:
QAbstractScrollArea:
void QAbstractScrollArea::keyPressEvent(QKeyEvent *e)
This function is called with key event e when key presses occur. It handles PageUp, PageDown, Up, Down, Left, and Right, and ignores all other key presses.
QAbstractItemView:
void QAbstractItemView::keyPressEvent(QKeyEvent *event)
This function is called with the given event when a key event is sent to the widget. The default implementation handles basic cursor movement, e.g. Up, Down, Left, Right, Home, PageUp, and PageDown; the activated() signal is emitted if the current index is valid and the activation key is pressed (e.g. Enter or Return, depending on the platform). This function is where editing is initiated by key press, e.g. if F2 is pressed.
(emphasis mine)
Since QTableView and QTreeView both inherit from QAbstractItemView, they share the same hotkeys.
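That said, one slice you can get at programmatically is the set of standard shortcuts Qt resolves for the current platform, via QKeySequence.keyBindings(). A PyQt5 sketch (note this does not surface behavior hard-coded in keyPressEvent, such as QTreeView.keyboardSearch):

from PyQt5.QtGui import QKeySequence
from PyQt5.QtWidgets import QApplication

app = QApplication([])  # bindings are resolved per platform, so create an app

# Walk QKeySequence's StandardKey enum and print the actual key sequences
# the platform maps each one to.
for name in dir(QKeySequence):
    value = getattr(QKeySequence, name)
    if isinstance(value, QKeySequence.StandardKey):
        bindings = QKeySequence.keyBindings(value)
        if bindings:
            print(name, "->", ", ".join(seq.toString() for seq in bindings))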
I've done a few searches but I couldn't find anything about this topic. Perhaps because it is common programmer knowledge (I'm not a programmer, I've learned from necessity), or because I'm going about it the wrong way.
I would like ideas/suggestions on how to manage button states for a GUI. For example, if I have a program which allows the user to import and process data, then certain functions should be inaccessible until the data has been imported successfully; or if they want to graph certain data, they need to select which data to graph before hitting the 'graph' or 'export' button. Even in the simple programs I've built, these relationships seem to get complicated quickly. It seems simple to say "the user shouldn't be able to hit button 'A' until 'B' and 'C' have been completed, and then 'A' should be disabled again if button 'D' or the 'Cancel' button is pressed". But that's a lot to track for one button. Thus far, I've tried two things:
Changing/checking button states in the callback functions for the buttons. So in the above example, I would have code in buttons B's and C's callbacks to check whether A should be enabled, and code in buttons D's and Cancel's callbacks to disable button A. This gets complicated quickly and is difficult to maintain as the code changes.
Setting boolean variables in every button's callback (or just checking the states later using cget()) and checking the variables in a polling function to determine which buttons should be enabled or disabled.
I'm just not sure about this. I would like to make the code as short and easy to understand as possible (and easy to edit later), but I don't like the idea of polling all the button states every few hundred milliseconds just for button 'management'. You can extend the same idea to check boxes, menu items, etc., but I'd like to hear what others have done and why they do it the way they do.
You are only changing button states based on events, right? There is no reason to 'poll' to see if a button state has changed. What you can do is build a function which does all of the calling for you, then call it with something like disable_buttons([okButton, graphButton, printButton]). When an event takes place that modifies the appropriate user interface options (such as importing data), have another function that turns them on: enable_buttons([graphButton]). You could do this with each object's methods, of course, but making a wrapper allows you to be consistent throughout your application.
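A minimal sketch of that wrapper idea (Tkinter is assumed here, since the question doesn't name a toolkit; every widget and function name is invented):

import tkinter as tk

def set_button_states(buttons, state):
    # One consistent place that flips a whole group of buttons.
    for button in buttons:
        button.config(state=state)

def enable_buttons(buttons):
    set_button_states(buttons, tk.NORMAL)

def disable_buttons(buttons):
    set_button_states(buttons, tk.DISABLED)

root = tk.Tk()
graph_button = tk.Button(root, text="Graph")
export_button = tk.Button(root, text="Export")

def on_import():
    # Importing data is the event that makes graphing/exporting possible.
    enable_buttons([graph_button, export_button])

import_button = tk.Button(root, text="Import data", command=on_import)
for button in (import_button, graph_button, export_button):
    button.pack(fill=tk.X, padx=10, pady=2)

disable_buttons([graph_button, export_button])  # initial state: nothing loaded
root.mainloop()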
The following Python line binds the method "click" to the event fired when the user presses the mouse button while the pointer is over the widget, no matter where the pointer is when she releases the button.
self.bind('<Button-1>', self.click)
If I use "ButtonRelease" instead of "Button" in the code, it seems that the method "click" will be called for the widget on which the mouse was pressed after the button release; no matter where you release it.
1- Isn't there a neat way to make it call the bound method only if the mouse button was released on my widget; no matter where it was pressed?
2- Isn't there a neat way to tell it to react only in case of a full click (press and release both on the same widget)?
1- Isn't there a neat way to make it call the bound method only if the mouse button was released on my widget; no matter where it was pressed?
2- Isn't there a neat way to tell it to react only in case of a full click (press and release both on the same widget)?
No "neat" way, because, as Tkinter's docs say:
When you press down a mouse button
over a widget, Tkinter will
automatically "grab" the mouse
pointer, and mouse events will then be
sent to the current widget as long as
the mouse button is held down.
and both of your desires are incompatible with this automatic grabbing of the mouse pointer on press-down (which I don't know how to disable -- I think it may be impossible to disable, but proving a negative is hard;-).
So, you need more work, and a non-"neat" solution: in the button-down event's callback, bind the enter and leave events of that window (to bound methods of a class instance where you can track whether the mouse is currently inside or outside the widget of interest) as well as the button-release. This way, when the release event comes, you know whether to perform the "actual application callback" (if inside) or do nothing (if outside). That gives you your desire number 2, but describing this as neat would be a stretch.
Desire number 1 is even harder, because you have to track enter and leave events on EVERY widget of interest -- it's not enough to know one bit, whether the mouse is inside or outside, but rather you must keep track of which widget (if any) it's currently in, to direct the "actual application callback" properly (if at all) at button release time.
While the internals aren't going to be neat, each functionality can be bound into one neat-to-call function... with slightly "indaginous" internals (a term that's used more often to refer to root canal work or the like, rather than programming, but may be appropriate when you're wanting to go against the grain of functionality hard-coded in a framework... that's the downside of frameworks -- you're in clover as long as you want to behave in ways they support, but when you want to defeat their usual behaviors to do something completely different, that can hardly ever be "neat"!-).
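For what it's worth, desire number 2 can also be had for a single widget with a different, simpler trick than the enter/leave tracking described above: because the implicit grab delivers the release event to the pressed widget with widget-relative coordinates, a bounds check on those coordinates tells you whether the release landed inside. A small sketch (class and widget names invented):

import tkinter as tk

class FullClick:
    """Invoke callback only if press AND release happen on the widget."""
    def __init__(self, widget, callback):
        self.widget = widget
        self.callback = callback
        widget.bind('<ButtonRelease-1>', self._on_release)

    def _on_release(self, event):
        # The implicit grab means this fires on the widget that was pressed;
        # event.x/event.y are relative to it, so they fall outside
        # [0, width) x [0, height) if the release happened elsewhere.
        w = self.widget.winfo_width()
        h = self.widget.winfo_height()
        if 0 <= event.x < w and 0 <= event.y < h:
            self.callback(event)

root = tk.Tk()
label = tk.Label(root, text="click me", relief="raised", padx=20, pady=10)
label.pack(padx=40, pady=40)
FullClick(label, lambda event: print("full click"))
root.mainloop()

The answer below applies the same kind of bounds check, using canvas.bbox, for arbitrary widgets placed on a canvas.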
The tkinter documentation does provide you info on that:
http://www.pythonware.com/library/tkinter/introduction/events-and-bindings.htm
You can do a binding on <ButtonRelease-1>.
Binding on ButtonRelease-1 isn't enough. The callback won't fire until the button is released, but it doesn't matter where the mouse is when it's released: what governs is where the mouse was when it was clicked, as Alex Martelli said. An easy way to get the desired behavior is to put everything on a canvas and bind the callback to ButtonRelease-1. Then you have something like:
def callback(event):
    x1, y1, x2, y2 = canvas.bbox(widget)
    if x1 <= event.x <= x2 and y1 <= event.y <= y2:
        <whatever>
I've used this approach in my own code to get arbitrary widgets to behave like buttons in this respect.