How to ensure QProgressDialog is shown in PyQt

Sometimes my QProgressDialog shows, and sometimes it never shows at all (as if processEvents() were never called). Are there any quirks of the processEvents() command that may cause the QProgressDialog not to show under certain circumstances?
My question is general because I have not yet been able to isolate the problem in my code. However, I have noticed that when my QProgressDialog does not show, it is while I am accessing a text file using a config parser. The workaround was to call time.sleep() after the file has been closed (perhaps to ensure the I/O completed so that processEvents() would then start showing the QProgressDialog).
If it helps, here's my code for running the QProgressDialog as a generator:
def progress_dialog(self, data, label, window_title, stop_label, capture_bar=False):
    bar = QProgressDialog(label, stop_label, 0, len(data))
    if capture_bar: self.prog_bar = bar
    bar.setWindowTitle(window_title)
    for k, d in enumerate(data):
        QCoreApplication.instance().processEvents()
        if bar.wasCanceled():
            raise GeneratorExit
        # set the next value beyond the start of 0
        bar.setValue(k + 1)
        # again process events to draw the new label and value
        QCoreApplication.instance().processEvents()
        yield d
    # note: since PEP 479 (Python 3.7), raising StopIteration inside a
    # generator is a RuntimeError; simply returning ends the generator
    return
Again, sorry I don't have a full code snippet of the isolated problem (and the full code is too big an ocean). I guess what I'm looking for is a way of checking whether the processEvents() command is doing its job (because clearly I am calling it, yet it hangs on other processing rather than showing the dialog).
Edit:
According to this support request doing a "bar.show()" command will force the progress bar to show.
http://redmine.smar.fi/issues/265
I'm going to wait a few weeks and make sure this is a guaranteed fix before posting it as an answer.

If you need to show a QProgressDialog regardless of the duration of the process, use its setMinimumDuration method with a value of 0. According to the documentation, the dialog is only shown after a default minimum duration of 4000 ms.

According to this support request doing a bar.show() command will force the progress bar to show:
http://redmine.smar.fi/issues/265
Simply call the show() method after the progress bar is first constructed and before every processEvents() call.
I've waited nearly four months and this solution has worked without failing yet, so it seems to be a sufficient answer.

This might be an old thread, but I had a similar problem: show() made the dialog appear, but empty. So I came up with this decorator that I apply to functions I want to run blocking, while still permitting the GUI thread to process events.
def nongui(fun):
    """Decorator running the function in a non-gui thread while
    processing the gui events."""
    from multiprocessing.pool import ThreadPool
    from PyQt4.QtGui import QApplication

    def wrap(*args, **kwargs):
        pool = ThreadPool(processes=1)
        # 'async' is a reserved word in Python 3.7+, so use another name
        async_result = pool.apply_async(fun, args, kwargs)
        while not async_result.ready():
            async_result.wait(0.01)
            QApplication.processEvents()
        return async_result.get()
    return wrap
Then, it's easy to write your calculating function normally with the decorator:
@nongui
def work(input):
    # Here you calculate the output and set the
    # progress dialog value
    return out
and then run it as usual:
out = work(input)
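The core of the decorator is independent of Qt: run the function on a worker thread and pump the event loop while polling for the result. A minimal, GUI-free sketch of that pattern, where a hypothetical `pump` callable stands in for `QApplication.processEvents` (everything here besides `ThreadPool` is illustrative):

```python
from multiprocessing.pool import ThreadPool

def nongui(fun, pump=lambda: None):
    """Run `fun` on a worker thread; call `pump` (e.g. a GUI event
    pump such as QApplication.processEvents) while waiting."""
    def wrap(*args, **kwargs):
        pool = ThreadPool(processes=1)
        result = pool.apply_async(fun, args, kwargs)
        while not result.ready():
            result.wait(0.01)   # block briefly, then pump events
            pump()
        pool.close()
        pool.join()
        return result.get()     # re-raises any exception raised in `fun`
    return wrap

pump_calls = []
slow_double = nongui(lambda x: x * 2, pump=lambda: pump_calls.append(1))
print(slow_double(21))  # → 42
```

The calling thread stays responsive because it never blocks for more than 10 ms at a time between pump calls.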

Related

Implementing threading in a Python GTK application (PyGObject) to prevent UI freezing

Simply put, I want to properly implement threading in a Python GTK application. This is in order to prevent UI freezing due to functions/code taking a long time to finish running. Hence, my approach was to move all code which took a long time to run into separate functions, and run them in their separate threads as needed. This however posed a problem when trying to run the functions in sequence.
For example, take a look at the following code:
class Main(Gtk.Window):
    def __init__(self):
        super().__init__()
        self.button = Gtk.Button(label='button')
        self.add(self.button)
        self.button.connect('clicked', self.main_function)

    def threaded_function(self):
        time.sleep(20)
        print('this is a threaded function')

    def first_normal_function(self):
        print('this is a normal function')

    def second_normal_function(self):
        print('this is a normal function')

    def main_function(self, widget):
        self.first_normal_function()
        self.threaded_function()
        self.second_normal_function()
Pressing the button starts main_function which then starts 3 functions in sequence. threaded_function represents a function which would take a long time to complete. Running this as is will freeze the UI. Hence it should be threaded as such:
...
...
def main_function(self, widget):
    self.first_normal_function()
    thread = threading.Thread(target=self.threaded_function)
    thread.daemon = True
    thread.start()
    self.second_normal_function()
What should happen is this: first_normal_function runs, then threaded_function runs in a background thread while the UI remains responsive, and finally second_normal_function runs, but only once threaded_function has finished.
The issue with this is that the functions will not run in sequence. The behaviour I am looking for could be achieved by using thread.join() however this freezes the UI.
So I ask, what's the proper way of doing this? This is a general case, however it concerns the general issue of having code which takes a long time to complete in a graphical application, while needing code to run sequentially. Qt deals with this by using signals, and having a QThread emit a finished signal. Does GTK have an equivalent?
I'm aware that this could be partially solved using Queue, with put() and get() in the relevant functions, however I don't understand how to get this to work if the main thread is calling anything other than functions.
EDIT: Given that it's possible to have threaded_function call second_normal_function using GLib.idle_add, let's take an example where in main_function, the second_normal_function call is replaced with a print statement, such that:
def main_function(self, widget):
    self.first_normal_function()
    thread = threading.Thread(target=self.threaded_function)
    thread.daemon = True
    thread.start()
    print('this comes after the thread is finished')
    ...
    ...
    ...
    # some more code here
With GLib.idle_add, the print statement and all the code afterwards would need to be moved into a separate function. Is it possible to avoid moving the print statement into its own function while maintaining sequentiality, such that the print statement remains where it is and still gets called after threaded_function is finished?
Your suggestion on how to do this was very close to the actual solution, but it's indeed not going to work.
In essence, what you'll indeed want to do, is to run the long-running function in a different thread. That'll mean you get 2 threads: one which is running the main event loop that (amongs other things) updates your UI, and another thread which does the long-running logic.
Of course, that raises the question: how do I notify the main thread that some work is done so it can react? For example, you might want to update the UI while (or after) some complex calculation is going on. For this, you can use GLib.idle_add() from within the other thread. That function takes a callback as an argument, which it will run in the main thread as soon as it can ("on idle").
So a possibility to use here, would be something like this:
class Main(Gtk.Window):
    def __init__(self):
        super().__init__()
        self.button = Gtk.Button(label='button')
        self.add(self.button)
        self.button.connect('clicked', self.main_function)
        thread = threading.Thread(target=self.threaded_function)
        thread.daemon = True
        thread.start()

    def threaded_function(self):
        # Really intensive stuff going on here
        sleep(20)
        # We're done, schedule "on_idle" to be called in the main thread
        GLib.idle_add(self.on_idle)

    # Note, this function will be run in the main loop thread, *not* in this one
    def on_idle(self):
        self.second_normal_function()
        return GLib.SOURCE_REMOVE  # we only want to run once

    # ...
For more context, you might want to read the pygobject documentation on threading and concurrency
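The sequencing guarantee here (the follow-up code runs only after the worker finishes) can be seen in a GTK-free sketch, where a plain queue stands in for GLib's idle queue and the main loop; all names are illustrative:

```python
import queue
import threading

idle_queue = queue.Queue()        # stand-in for GLib's idle-callback queue

def idle_add(callback):
    """Schedule `callback` to be run by the 'main loop' thread."""
    idle_queue.put(callback)

def threaded_function(results):
    results.append('work done')   # long-running work would go here
    # schedule the follow-up on the main thread, as GLib.idle_add would
    idle_add(lambda: results.append('after the thread is finished'))

results = []
thread = threading.Thread(target=threaded_function, args=(results,))
thread.start()

# "main loop": block until the worker has scheduled its idle callback
callback = idle_queue.get(timeout=5)
callback()
thread.join()
print(results)  # → ['work done', 'after the thread is finished']
```

Because the callback is executed by the thread that drains the queue, the UI-touching code always runs on the main thread, which is the whole point of idle_add.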

Update progressbar inside multiprocessor functions (Ray) simultaneously

I'm writing a program that uses the ray package for multiprocessing. In the program, there is a function that is called 5 times at the same time. During the execution, I want to show a progress bar using a PyQt5 QProgressBar to indicate how much work is done. My idea is to let every execution of the function update the progress bar by 20%. So I wrote code like the following:
running_tasks = [myFunction.remote(x,y,z,self.progressBar,QApplication) for x in myList]
Results = list(ray.get(running_tasks))
Inside myFunction, there is a line to update the sent progress bar as the following:
QApplication.processEvents()
progressBar.setValue(progressBar.value() + 20)
But, when I run the code, I got the following error:
TypeError: Could not serialize the argument
<PyQt5.QtWidgets.QProgressBar object at 0x000001B787A36B80> for a task
or actor myFile.myFunction. Check
https://docs.ray.io/en/master/serialization.html#troubleshooting for
more information.
I searched through the internet (the linked troubleshooting URL now returns 404) and I understand that this error occurs because ray's processes do not share memory, so sending a class attribute (like self.progressBar) would give each process its own local copy to modify. I also tried the multiprocessing package instead of ray, but it throws a pickling error, which I assume has the same cause. So, can anyone confirm whether I'm right, or provide a further explanation of the error?
Also, how can I achieve my requirement in multiprocessing (i.e. updating the same progress bar simultaneously) If multiprocessing doesn't have shared memory between the processors?
I am unfamiliar with ray, but you can do this with the multiprocessing library using multiprocessing.Queue().
The Queue is exactly what its name suggests: a queue where you can put data for other processes to read. In my case I usually put a dictionary on the Queue with a command (key) and what to do with that command (value).
If you want to pass data in one direction, one process calls Queue.put() and the other calls Queue.get(). In the example below I emulate what you may be looking to do.
I usually use a QTimer to check whether there is any data in the queue, but you can also check whenever you like by calling a method to do so.
from multiprocessing import Process, Queue

myQueue = Queue()

class FirstProcess():
    ...
    def update_progress_percentage(self, percentage):
        self.progress_percentage = percentage

    def send_data_to_other_process(self):
        myQueue.put({"UpdateProgress": self.progress_percentage})

class SecondProcess():
    ...
    def get_data_from_other_process(self):
        while not myQueue.empty():
            queue_dict = myQueue.get()
            for key in queue_dict:
                if key == "UpdateProgress":
                    percentage = queue_dict["UpdateProgress"]
                    progressBar.setValue(percentage)
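A minimal, Qt-free version of this pattern that actually runs: a worker process reports percentages over a Queue, and the parent drains it. In a real app the drain loop would live in a QTimer callback that calls progressBar.setValue(); all names here are illustrative:

```python
from multiprocessing import Process, Queue

def worker(progress_queue, steps=5):
    """Pretend to do work, reporting progress after each chunk."""
    for i in range(1, steps + 1):
        # ... real work for this chunk would go here ...
        progress_queue.put({"UpdateProgress": i * 100 // steps})
    progress_queue.put({"Done": True})   # sentinel: no more updates

def drain(progress_queue):
    """Read messages until the worker signals completion."""
    percentages = []
    while True:
        message = progress_queue.get(timeout=10)
        if "Done" in message:
            return percentages
        percentages.append(message["UpdateProgress"])

if __name__ == "__main__":
    q = Queue()
    p = Process(target=worker, args=(q,))
    p.start()
    print(drain(q))  # → [20, 40, 60, 80, 100]
    p.join()
```

Only plain picklable data crosses the Queue, which is exactly why this works where passing the QProgressBar object itself fails to serialize.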

Python CLI Progress bar/spinner WITHOUT iteration

There are numerous existing questions regarding the display of progress bars in the terminal while a Python script executes, but every one of them is based on a loop where you perform an operation and then update the progress graphic.
Unfortunately, the function whose progress I want to show--or at least a spinner object to show that it's working--is a black-box that I can't (at least really, really shouldn't) alter. Essentially, what I want to do is:
#pseudocode input
print('Loading')
spinner.begin()
blackbox() #a few thousand operations happen in here
spinner.end()
print('Finished')
#pseudocode output
Loading.
Loading..
Loading...
Loading.
Loading..
Loading...
Finished
Although ideally that would be an animation of the ellipsis instead of printing multiple lines. Before I can even start building silly ASCII animations, though, there's the main hurdle:
Is there a way to run spinner and blackbox() at the same time? Alternately, is there a hack to pause blackbox(), regardless of its content, every few hundred milliseconds, update the spinner graphic, and then resume where it left off?
I've tried this with the progress module but had no luck... I couldn't even get the example code to work, it just hung up after I started iterating until I Ctrl+C'd out.
I like using alive_progress for this.
from typing import ContextManager, Optional
from alive_progress import alive_bar

def spinner(title: Optional[str] = None) -> ContextManager:
    """
    Context manager to display a spinner while a long-running process is running.

    Usage:
        with spinner("Fetching data..."):
            fetch_data()

    Args:
        title: The title of the spinner. If None, no title will be displayed.
    """
    return alive_bar(monitor=None, stats=None, title=title)
To install: pip install alive-progress
Threads are probably the easiest way to make this work. Here is a vastly simplified version that should get the point across. I wasn't sure whether you actually have the spinner function or not, so I made my own.
import threading
import time

def blackbox():
    time.sleep(10)

thread = threading.Thread(target=blackbox)
thread.start()

eli_count = 0
while thread.is_alive():
    print('Loading', '.' * (eli_count + 1), ' ' * (2 - eli_count), end='\r')
    eli_count = (eli_count + 1) % 3
    time.sleep(0.1)
thread.join()
print('Done ')
So, while blackbox runs, the loading message is updated periodically. Once it finishes, the thread is joined and the loading message is replaced with a completed message.
You probably want to use threads (import threading). Have spinner.begin() start a thread that prints your messages, then let your blackbox run, and then have spinner.end() send a finish message to the thread using a Queue (import queue) or something, join() the thread, and keep doing whatever it is you do.
As a design choice, hide the prints somewhere deeper, not in the same block of code as the begin and end calls.
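That begin()/end() design might look like the following sketch. The Spinner class and its method names are made up for illustration; a queue carries the stop signal, and the prints stay hidden inside the class:

```python
import queue
import threading
import time

class Spinner:
    """Prints an animated 'Loading...' message until told to stop."""

    def __init__(self, message='Loading'):
        self.message = message
        self._stop = queue.Queue()
        self._thread = threading.Thread(target=self._spin)

    def _spin(self):
        dots = 0
        while True:
            try:
                self._stop.get(timeout=0.1)   # wait briefly for stop signal
                return
            except queue.Empty:
                pass                          # no signal yet, keep spinning
            dots = dots % 3 + 1
            print(self.message + '.' * dots + '  ', end='\r')

    def begin(self):
        self._thread.start()

    def end(self):
        self._stop.put(None)                  # signal the spinner to stop
        self._thread.join()
        print('Finished   ')

spinner = Spinner()
spinner.begin()
time.sleep(0.35)   # stand-in for blackbox()
spinner.end()
```

Because blackbox() runs on the main thread while only the animation runs on the worker, the black box needs no modification at all.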

Tkinter grid_forget doesnt execute when called

I am experiencing something I really don't understand. In the program I pick a data file, and then I want to hide the widgets in the window and load the data from the file. But everything I tried results in the file loading first and only then the rest executing...
def LoadProject():
    old_project = FD.askopenfilename(filetypes=[("Data file", "*.dat")], initialdir="./Projects/")
    if old_project:
        napis.delete(TK.ALL)
        napis.grid_forget()
        button_new.grid_forget()
        button_load.grid_forget()
        data_file = open(old_project, "r")
        for line in data_file:
            line = line.replace("\n", "")
            conv = line.split()
            data.append([float(conv[0]), int(conv[1]), float(conv[2]), float(conv[3]), float(conv[4])])
Everything before the for loop only takes effect after the loop finishes. Can anybody help me please? I really don't understand this behavior.
It looks like you need to call the update method of the widgets you want to forget. This will flush the pending GUI changes. From effbot.org:
update()
Processes all pending events, calls event callbacks,
completes any pending geometry management, redraws widgets as
necessary, and calls all pending idle tasks. This method should be
used with care, since it may lead to really nasty race conditions if
called from the wrong place (from within an event callback, for
example, or from a function that can in any way be called from an
event callback, etc.). When in doubt, use update_idletasks instead.

Is it still not enough to simply use threads to update GUI?

For example:
class DemoFrame(wx.Frame):
    def __init__(self):
        # Initializing
        ...
        self.TextA = wx.StaticText(MainPanel, id=-1, label="TextAOrWhatever")
        self.TextB = wx.StaticText(MainPanel, id=-1, label="TextBOrWhatever")
        ...

    def StaticTextUpdating(self, ObjectName, Message):
        ObjectName.SetLabel(Message)

    def WorkerA(self):
        while True:
            # Work on something
            UpdatingThread = threading.Thread(target=self.StaticTextUpdating, args=(self.TextA, "Something for TextA",))
            UpdatingThread.start()
            time.sleep(randomSecs)

    def WorkerB(self):
        while True:
            # Work on something
            UpdatingThread = threading.Thread(target=self.StaticTextUpdating, args=(self.TextB, "Something for TextB",))
            UpdatingThread.start()
            time.sleep(randomSecs)
    ...

    def StartWorking(self):
        # Spawn WorkerA thread
        # Spawn WorkerB thread
        ...
As you can see, I always update a StaticText in a new thread, and I'm 100% sure that at any given time only one thread is updating a specific object. The problem is, every now and then after running for a while, some objects just disappear. Why is this happening? Does it mean GUI updating is not thread-safe? Can only one object be updated at any given time?
Added:
OK, wx.CallAfter should be a good solution for the above code. But I have another question: what if a button event and a SetLabel happen at the same time? Wouldn't things like this cause trouble, even though I don't see any?
Most wx methods are not thread-safe. Use wx.CallAfter if you want to invoke a wx method from another thread; replace
ObjectName.SetLabel(Message)
with:
wx.CallAfter(ObjectName.SetLabel, Message)
Edit: Some Background Information
In wx (and in most other UI toolkits) all UI updates are executed in a single thread, called the main thread (or UI thread). This makes the UI faster by avoiding the performance cost of thread synchronization.
But the downside is that if we write code that updates the UI from a different thread, the results are undefined. Sometimes it may work, sometimes it may crash, and sometimes something else may happen. So we should always go through the UI thread to do UI updates, which is why we use the CallAfter function to make a UI-update function execute in the UI thread.
UI thread in java
UI thread in C#
The main thing to remember is that you shouldn't update anything in wxPython without using a threadsafe method, such as wx.CallAfter, wx.CallLater or wx.PostEvent. See http://wiki.wxpython.org/LongRunningTasks or http://www.blog.pythonlibrary.org/2010/05/22/wxpython-and-threads/ for more information.
