I'm working with OpenCV 3, Python 3 and PyQt5. I want to make a simple GUI in which a new window opens to play a video, along with some other widgets, when a button is clicked on the main window. I've used QPixmap for displaying images in the past, so I create a label and try to set the frames into its pixmap in a loop. The loop runs fine, but I never get a display of the video or the new window.
The loop I want to execute in the new window looks something like this:
def setupUi(self):
    vid = cv2.VideoCapture('file')
    ret, frame = vid.read()
    while ret:
        qimg = convert(frame)                          # convert() returns a QImage
        self.label.setPixmap(QPixmap.fromImage(qimg))  # setPixmap expects a QPixmap, not a QImage
        self.label.update()
        ret, frame = vid.read()
convert() is a function I've written myself that converts the OpenCV frame to a QImage to be set into the pixmap.
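(For reference, a typical frame-to-QImage conversion looks roughly like the sketch below; this is illustrative and assumed, not necessarily the exact convert() used here.)

def convert(frame):
    # OpenCV frames are BGR numpy arrays; QImage expects RGB data
    rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
    h, w, ch = rgb.shape
    return QImage(rgb.data, w, h, ch * w, QImage.Format_RGB888)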
I'm only a beginner with PyQt, so I don't know what I'm doing wrong. I've read about using signals and threads for the new window and about QApplication.processEvents(), but I don't know how these work or how they fit into my problem.
It would be helpful if someone could point me in the right direction and also point out some resources for creating good interfaces for my apps using OpenCV and Python.
The reason that this isn't running is that your while loop is blocking Qt's event loop. Basically, you're stuck in the while loop and you never give control back to Qt to redraw the screen.
Your update() call isn't doing what you think it is; it updates the data stored by the object, but the change does not show up until the program re-enters the event loop.
There are probably multiple ways of handling this, but I see two good options, the first being easier to implement:
1) Call QApplication.processEvents() in every iteration of your while loop. This forces Qt to update the GUI and is much simpler to implement than 2). (See the first sketch after this list.)
2) Move the function to a separate class and use QThread combined with moveToThread() to update the data, and communicate with the GUI thread using signals/slots. This will require restructuring your code a bit, but this might be good for your code overall. Right now the code that is generating the data is in your MainWindow class presumably, while the two should be kept separate according to Qt's Model-View design pattern. Not very important for a small one-off app, but will help keep your code base intelligible as your app grows in size.
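For option 1, the change is a single call inside the existing loop, roughly:

while ret:
    qimg = convert(frame)
    self.label.setPixmap(QPixmap.fromImage(qimg))
    QApplication.processEvents()   # hand control back to Qt so it can repaint
    ret, frame = vid.read()

For option 2, a minimal sketch using QThread, moveToThread() and signals might look like this (assuming PyQt5 and the convert() helper described above; the class and signal names are illustrative, not from the original code):

from PyQt5.QtCore import QObject, QThread, pyqtSignal
from PyQt5.QtGui import QImage, QPixmap

class VideoWorker(QObject):
    frame_ready = pyqtSignal(QImage)
    finished = pyqtSignal()

    def __init__(self, path):
        super().__init__()
        self.path = path

    def run(self):
        vid = cv2.VideoCapture(self.path)
        ret, frame = vid.read()
        while ret:
            self.frame_ready.emit(convert(frame))   # hand each frame to the GUI thread
            ret, frame = vid.read()
        vid.release()
        self.finished.emit()

# In the window class, the wiring looks roughly like:
#     self.thread = QThread()
#     self.worker = VideoWorker('file')
#     self.worker.moveToThread(self.thread)
#     self.thread.started.connect(self.worker.run)
#     self.worker.frame_ready.connect(
#         lambda img: self.label.setPixmap(QPixmap.fromImage(img)))
#     self.worker.finished.connect(self.thread.quit)
#     self.thread.start()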
I would like to show an image with a transparent background to indicate something when a key combination is pressed.
Let's say that when I press Ctrl+F3, I trigger a Python script. Is there any way I can make that happen?
What Python library can I use to show an image without a window border and background?
I have figured out how to trigger the script on a key press. How do I deal with the display (imshow) part?
Thank you.
show an image without window border and background
This sounds like a task for a GUI library. There are many available, but you would need to test them to find one that can do it. The first feature is generally known as a frameless or borderless window. tkinter, which ships with Python, can work this way (see, for example, the tutorialspoint.com tutorial), though I do not know how it will handle the alpha channel of your image.
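A minimal sketch with tkinter, assuming a Windows machine and an image whose background colour you can key out (the -transparentcolor attribute is Windows-only, and the file name is just a placeholder):

import tkinter as tk

root = tk.Tk()
root.overrideredirect(True)                       # frameless/borderless window
root.attributes('-topmost', True)                 # keep it above other windows
root.config(bg='magenta')
root.attributes('-transparentcolor', 'magenta')   # Windows-only: make this colour see-through

img = tk.PhotoImage(file='overlay.png')           # placeholder file name
tk.Label(root, image=img, bg='magenta', bd=0).pack()

root.mainloop()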
I am currently working on a project using Python and tkinter.
The problem is that I don't know the proper way to display multiple windows, or screens, or whatever they should be called. Let me explain.
When the application starts, the login screen appears. After that, if I click Register, I want to go to the register screen, but I don't want it to be a separate window (I don't want two windows displayed at the same time); rather, I want the same window with different content.
How should I properly handle this situation? Create a second window using Toplevel and hide the first (can I do that?), or change the widgets of the first?
Code I've written so far
You can do that: just call window.withdraw() on the Toplevel you need to hide after creating the new Toplevel. Changing the widgets in the first is also an option; if you like, you could try a Notebook widget and disable manual tab flipping, or just put each "screen" in a frame and grid_forget or pack_forget them to remove them from the window.
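A minimal sketch of the frame-per-screen approach, assuming plain tkinter (the class name and screens are illustrative):

import tkinter as tk

class App(tk.Tk):
    def __init__(self):
        super().__init__()
        self.login_frame = tk.Frame(self)
        self.register_frame = tk.Frame(self)

        tk.Label(self.login_frame, text='Login screen').pack()
        tk.Button(self.login_frame, text='Register',
                  command=lambda: self.show(self.register_frame)).pack()

        tk.Label(self.register_frame, text='Register screen').pack()
        tk.Button(self.register_frame, text='Back',
                  command=lambda: self.show(self.login_frame)).pack()

        self.show(self.login_frame)

    def show(self, frame):
        # Hide every screen, then show the requested one in the same window
        for f in (self.login_frame, self.register_frame):
            f.pack_forget()
        frame.pack(fill='both', expand=True)

App().mainloop()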
I'm writing some test functions for a form I made. There are a couple of QMessageBoxes that get invoked (one through the QMessageBox.question method and one through the QMessageBox.information method). While my custom widget is not shown on screen, these two actually show up on screen.
I tried dismissing them by looping through the widgets returned by QApplication.topLevelWidgets() and dismissing the right one; however, it seems my code only continues executing after I manually dismiss the message box.
So my question is two-fold:
1) How do I keep the QMessageBox (or any widget, really) from showing on screen during testing?
2) How can I programmatically accept/reject/dismiss this widget?
You can set up a timer to automatically accept the dialog. If the timeout is long, the dialog will still display for a while:
w = QtGui.QDialog(None)
t = QtCore.QTimer(None)
t.timeout.connect(w.accept)   # accept the dialog as soon as the timer fires
t.start(1)                    # timeout in milliseconds
w.exec_()                     # blocks, but the timer dismisses it almost immediately
For your specific case, if you don't want to touch the code being tested, you can have the timer run a function to accept all current modal widgets, as you were suggesting:
def accept_all():
    for wid in app.topLevelWidgets():
        if wid.__class__ == QtGui.QDialog:   # or QMessageBox, etc.
            wid.accept()

t = QtCore.QTimer(None)
t.timeout.connect(accept_all)
t.start(10)
I decided to use the mock module instead. It seemed better since the other solution would actually draw on screen, which is not optimal for testing.
If you have the same problem and would like to mock a question QMessageBox, you can do something like this:
@patch.object(path.QMessageBox, "question", return_value=QtGui.QMessageBox.Yes)
This would simulate a message box in which the Yes button was clicked.
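For context, a complete test might look roughly like the sketch below (assuming PyQt4-style imports to match the snippets above; myform, MyForm and delete_record are hypothetical placeholders for the module and code under test):

import sys
import unittest
from mock import patch                      # unittest.mock on Python 3.3+
from PyQt4 import QtGui

import myform                               # hypothetical module containing the form under test

app = QtGui.QApplication(sys.argv)          # widgets need a QApplication, even in tests

class FormTest(unittest.TestCase):
    @patch.object(QtGui.QMessageBox, "question",
                  return_value=QtGui.QMessageBox.Yes)
    def test_user_confirms(self, mock_question):
        form = myform.MyForm()
        form.delete_record()                    # would normally pop up the question box
        self.assertTrue(mock_question.called)   # the question was "asked" without drawing anything

if __name__ == '__main__':
    unittest.main()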
I think it makes sense with Qt testing (including PySide/PyQt) to mock your GUI interaction and do dedicated GUI testing separately as necessary.
For mocking GUI interaction, I'd use the mock library, as I myself do regularly. The drawback of this is that you have to depend on mock definitions, which may drift out of sync with respect to your production application. On the other hand, your tests will be speedier than involving the actual GUI.
For testing the GUI itself, I'd write a separate layer of tests using a GUI testing tool such as Froglogic Squish. It'll typically lead to more involved and slower tests, but you'll test your application directly, and not merely simulate the GUI layer. My approach in this regard is to invest in such a tool if the budget allows, and run these tests as necessary, keeping in mind they'll be relatively slow.
How do I make a newly created window in wxPython not take focus? I'd like to be able to create a new window, without focus jumping to it.
I've never tried it, but I've heard you can do
window.Disable()   # a disabled window won't take focus when shown
window.Show()
window.Enable()    # re-enable it once it is already visible
That seems a little counter-intuitive. However, you can also simulate this by creating and showing the second frame and then calling Raise() on the original frame.
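A minimal sketch of the Raise() approach, assuming two plain wx.Frame instances (the names and titles are illustrative):

import wx

app = wx.App(False)
main = wx.Frame(None, title='Main window')
popup = wx.Frame(main, title='Background window')

main.Show()
popup.Show()     # showing the new frame normally moves focus to it...
main.Raise()     # ...so immediately bring the original frame back to the front
app.MainLoop()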
I would like to create an application that has 3-4 frames (or windows) where each frame is attached/positioned to a side of the screen (like a taskbar). When a frame is inactive I would like it to auto-hide (just like the Windows taskbar or the Dock in OS X). When I move my mouse pointer to the edge of the screen where the frame is hidden, I would like it to come back into view.
The application is written in Python (using wxPython for the basic GUI aspects). Does anyone know how to do this in Python? I'm guessing it's probably OS dependent? If so, I'd like to focus on Windows first.
I don't do GUI programming very often so my apologies if this makes no sense at all.
As far as I know, there's nothing built in for this.
When the window is hidden, do you want it completely invisible or can a border of a few pixels be showing? That would be an easy way to get a mouse hover event. Otherwise you might have to use something like pyHook to get system-wide mouse events to know when to expand your window.
The events EVT_ENTER_WINDOW and EVT_LEAVE_WINDOW might also be useful here to know when the user has entered/left the window so you can expand/collapse it.
Expanding/collapsing can just be done by showing/hiding windows or resizing them. Standard window functions, nothing fancy.
By the way, you might want to use wx.ClientDisplayRect to figure out where to position your window. That will give you a rectangle of the desktop that does NOT include the task bar or any other toolbars the user has, assuming you want to avoid overlapping with those things.
Personally, I would combine the EVT_ENTER_WINDOW and EVT_LEAVE_WINDOW events that FogleBird mentioned with a wx.Timer. Then, whenever the frame or dialog has been inactive for x seconds, you would just call its Hide() method. A rough sketch combining these ideas follows below.
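A rough sketch combining those suggestions: rather than hiding completely, the frame collapses to a few-pixel strip so it can still see the mouse hover and expand again (a borderless wx.Frame docked along the top edge is assumed; the sizes and timeout are arbitrary):

import wx

HIDE_AFTER_MS = 2000   # arbitrary inactivity timeout

class DockFrame(wx.Frame):
    def __init__(self):
        super().__init__(None, style=wx.FRAME_NO_TASKBAR | wx.BORDER_NONE)
        self.SetSize((wx.DisplaySize()[0], 40))   # a strip along the top of the screen
        self.SetPosition((0, 0))
        self.timer = wx.Timer(self)

        self.Bind(wx.EVT_ENTER_WINDOW, self.on_enter)
        self.Bind(wx.EVT_LEAVE_WINDOW, self.on_leave)
        self.Bind(wx.EVT_TIMER, self.on_timeout, self.timer)

    def on_enter(self, event):
        self.timer.Stop()                          # mouse is over the frame: stay expanded
        self.SetSize((wx.DisplaySize()[0], 40))

    def on_leave(self, event):
        self.timer.StartOnce(HIDE_AFTER_MS)        # start counting down to auto-hide

    def on_timeout(self, event):
        # Collapse to a thin strip so hovering near the edge can bring it back
        self.SetSize((wx.DisplaySize()[0], 3))

app = wx.App(False)
DockFrame().Show()
app.MainLoop()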
I think you could just make a window the same size as the desktop, then poll the mouse position to update an inactivity counter for each of the four frames on a background timer thread. I'd personally design it so that when a counter counts down from 15 to 0, the frame changes size and position to become a small tab with a button on it to reactivate it. Lots of technical work on this one, but easily done once you figure it out.