I've been fighting for three hours now to get this process multithreaded so that I can display a progress box. I finally got it working, insofar as the process completes as expected and all the functions are called, including the ones that update the progress indicator on the window.
However, the window never actually displays. This is a PyGObject interface designed in Glade. I am not having fun.
from multiprocessing.pool import ThreadPool

def runCompile(obj):
    compileWindow = builder.get_object("compilingWindow")
    compileWindow.show_all()

    # Run the long compile in a worker thread and wait for its result
    pool = ThreadPool(processes=1)
    async_result = pool.apply_async(compileStrings, ())
    output = async_result.get()
    #output = compileStrings() #THIS IS OLD

    compileWindow.hide()
    return output
As I mentioned, everything works well except that the window doesn't appear. Even if I remove the compileWindow.hide() call, the window never shows until the process is done. In fact, the whole stupid program freezes until the process is done.
I'm at the end of my rope. Help?
(By the way, the "recommended" approach of using generators doesn't work, as I HAVE to have a return value from the "long process".)
I'm not a PyGObject expert and I don't really understand your code; I think you should post more of it. Why are you calling the builder inside a function? You could call it in the init of the GUI.
Anyway, it seems you are having the common multithreading problems.
Are you calling GObject.threads_init() and Gdk.threads_init() at startup?
Then, if you want to show a window from a thread, you need to use Gdk.threads_enter() and Gdk.threads_leave().
Here is a useful doc.
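For illustration, a minimal sketch of that pattern, assuming a worker thread that has to touch a widget when it finishes (do_long_work is a hypothetical stand-in for the real task, and compileWindow is assumed to be in scope):

import threading
from gi.repository import GObject, Gdk

GObject.threads_init()   # call once at startup, before any threads are made
Gdk.threads_init()

def worker():
    output = do_long_work()   # hypothetical long task; no GUI calls in here
    Gdk.threads_enter()       # take the GTK lock before touching any widget
    try:
        compileWindow.hide()
    finally:
        Gdk.threads_leave()   # always release the lock again

threading.Thread(target=worker).start()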
I changed the overall flow of my project, so that may have had an effect. However, it is imperative that Gtk be given a chance to run through its own main loop, by way of...
if Gtk.events_pending():
    Gtk.main_iteration()
In this instance, I only want to call it once, to ensure the program doesn't hang.
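If you instead need to keep the UI alive throughout a longer task, the same call is typically pumped from inside the work loop; a sketch, where work_items and process are hypothetical placeholders:

for chunk in work_items:          # hypothetical work loop
    process(chunk)                # hypothetical unit of work
    while Gtk.events_pending():   # let GTK redraw the progress window
        Gtk.main_iteration()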
(The entire program source code can be found on SourceForge. The function in question, compileModel(), is on line 372 as of this posting.)
I am trying to implement a simple spinner (using code adapted from this answer) below a progress bar for a long-running function.
[######## ] x%
/ Compressing filename
I have the compression and progress bar running in the main thread of my script, and the spinner running in another thread so it can actually spin while compression takes place. However, I am using curses for both the progress bar and the spinner, and both call refresh() on the screen.
Sometimes the terminal will randomly output gibberish, and I'm not sure why. I think it is due to the multi-threaded nature of the spinner, as when I disable the spinner the problem goes away.
Here is the pseudocode of the spinner:
def start(self):
    self.busy = True
    global stdscr
    stdscr = curses.initscr()
    curses.noecho()
    curses.cbreak()
    threading.Thread(target=self.spinner_task).start()

def spinner_task(self):
    while self.busy:
        stdscr.addstr(1, 0, next(self.spinner_generator))
        time.sleep(self.delay)
        stdscr.refresh()
And here is the pseudocode for the progress bar:
progress_bar = "\r[{}] {:.0f}%".format("#" * block + " " * (bar_length - block), round(progress * 100, 0))
progress_file = " {} {}".format(s, filename)
stdscr.clrtoeol()
stdscr.addstr(1, 1, " ")
stdscr.clrtoeol()
stdscr.addstr(0, 0, progress_bar)
stdscr.addstr(1, 1, progress_file)
stdscr.refresh()
And called from main() like:
spinner.start()
for each file:
    update_progress_bar
    compress(file)
spinner.stop()
Why would the output sometimes become corrupted? Is it because of the separate threads? If so, any suggestions on a better way to design this?
The curses libraries that Python's curses module relies on are not threadsafe.
ncurses has a curs_threads feature, which has apparently been there since 5.7 about a decade ago. But it requires changing the way you do a few API calls, and linking against -lncursest, and it's still not trivial, and… almost nobody ever uses it.
As far as I know, no standard installer or distro package ever builds Python curses to link ncursest—even if the distro includes ncursest in the first place, which they often won't. And even if they did, there are no bindings for the threadsafe functions, so you still wouldn't be able to safely access things like setting the tabsize.
In my (possibly out-of-date, and possibly platform-limited) experience, you can nevertheless get away with things, but you need to:
Obviously only one thread can ever call stuff like getch and getmouse.
Add a global Lock, then make sure every batch of updates ends with a refresh and the whole batch happens inside the Lock (see the sketch after this list).
Avoid the Python wrappers around the functionality mentioned in curs_threads—e.g., don't change the escdelay or the tabsize.
Init (and close) the screen from the main thread, before starting (after exiting) the other threads.
If at all possible, make sure you also create all of the windows you need in the main thread. (Hopefully you didn't want any dynamic popup subwindows or anything…)
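For illustration, here is a minimal sketch of that global-Lock idea; the function names and update bodies are mine, not from the question:

import threading

curses_lock = threading.Lock()   # one global lock guarding all curses calls

def draw_spinner(stdscr, frame):
    with curses_lock:                 # the whole batch happens inside the lock
        stdscr.addstr(1, 0, frame)
        stdscr.refresh()              # ...and the batch ends with a refresh

def draw_progress(stdscr, progress_bar, progress_file):
    with curses_lock:
        stdscr.addstr(0, 0, progress_bar)
        stdscr.addstr(1, 1, progress_file)
        stdscr.refresh()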
But the safe way to do this is to do the same kind of thing you do with tkinter or other GUI libraries that don't understand threads. It's not identical, but the idea is similar. The simplest version is:
Move your main thread's work to another background thread.
Add a queue.Queue so that your background threads can ask for curses commands to be run. (You don't need anything complicated to represent a "command", it's just a (func, *args) tuple, because Python.)
Make the main thread loop around popping commands off that queue and calling them.
If your background threads need to call functions that return a value, obviously you need to make this slightly more complicated. You can look at how multiprocessing.dummy.AsyncResult and concurrent.futures.Future work. Or you can even steal Future for your own purposes. But you probably don't need anything as complicated as either.
If you're looping around input, you'll probably also want your main thread to do that (this means picking a "frame rate" and alternating between waiting on the queue and the input, with a timeout) and dispatch it, even if you're always dispatching to the same thread.
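Putting the simple version together, a sketch of the queue-dispatch approach (all names are illustrative, and the worker only needs fire-and-forget drawing):

import curses
import queue
import threading
import time

commands = queue.Queue()   # (func, args) tuples to be run on the main thread

def worker():
    # Background thread: never touches curses directly, only enqueues work.
    for i in range(101):
        commands.put((stdscr.addstr, (0, 0, "progress: {}%".format(i))))
        commands.put((stdscr.refresh, ()))
        time.sleep(0.05)
    commands.put(None)     # sentinel: tell the main loop we are done

def main(scr):
    global stdscr
    stdscr = scr
    threading.Thread(target=worker).start()
    while True:
        cmd = commands.get()       # main thread pops commands and calls them
        if cmd is None:
            break
        func, args = cmd
        func(*args)

curses.wrapper(main)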
You could even write an mtTkinter-style wrapper that reproduces the curses interface (or even monkeypatches the curses module) but replaces each function with a call to put the function and args on a queue. But I'm not sure this would be worth the effort.
If this is the only place where you're using the curses module, the best solution will be to stop using it.
The only functionality of curses that you're really using here is its ability to clear the screen and move the cursor. This can easily be replicated by outputting the appropriate control sequences directly, e.g:
sys.stdout.write("\x1b[f\x1b[J" + progress_bar + "\n" + progress_file)
sys.stdout.flush()   # make sure the partial last line appears immediately
The \x1b[f sequence moves the cursor to the top-left corner (row 1, column 1), and \x1b[J clears all content from the cursor position to the end of the screen.
No additional calls are needed to refresh the screen, or to reset it when you're done. You can output "\x1b[f\x1b[J" again to clear the screen if you want.
This approach does, admittedly, assume that the user is using a VT100-compatible terminal. However, terminals which do not implement this standard are effectively extinct, so this is probably a safe assumption.
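As a usage example, here is a sketch of the two-line display redrawn with those escape sequences on each iteration (the file name and timing are made up):

import sys
import time

bar_length = 20
for pct in range(0, 101, 5):
    block = pct * bar_length // 100
    progress_bar = "[{}{}] {}%".format("#" * block, " " * (bar_length - block), pct)
    progress_file = " / Compressing somefile.txt"   # hypothetical status line
    # Home the cursor, clear to end of screen, then redraw both lines
    sys.stdout.write("\x1b[f\x1b[J" + progress_bar + "\n" + progress_file)
    sys.stdout.flush()
    time.sleep(0.1)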
MyButton1 = Button(master, text='Quit', bg="grey", width=20,
                   command=master.quit)
MyButton1.place(x=200, y=100)

MyButton2 = Button(master, text='Propagate', bg="grey", width=20,
                   command=mainmethod)
MyButton2.place(x=1000, y=100)

master.geometry("1500x1500")
master.mainloop()
In the above code, pressing the Propagate button invokes mainmethod. The logic I wrote in mainmethod takes about two minutes to execute on its own; in the meantime the GUI goes into an unresponsive state for a few minutes, and only later displays all the required output in the text box I inserted.
Is there any way to avoid the unresponsiveness, apart from using multithreading?
What I am looking for is that after pressing the Propagate button, the button should be disabled, the window should not go unresponsive, and the text.insert statements I added in mainmethod should display their output continuously.
To prevent hanging, you need to separate the calculations in mainmethod from Tkinter's main loop by executing them in a different thread. However, threading in Python is not as well-developed as in other languages (AFAIK) because of the GIL (Global Interpreter Lock), but there is an alternative: using processes just like threads. This is possible with the multiprocessing library.
In order to just prevent hanging, you could create another function
from multiprocessing import Process

def mainmethodLaunch():
    global mainmethodProcess
    mainmethodProcess = Process(target=mainmethod)
    mainmethodProcess.start()
And bind this function to MyButton2 instead of mainmethod itself.
Docs: https://docs.python.org/2/library/multiprocessing.html#the-process-class
You can see p.join() in the example there; the join method will cause your main process to wait for the other one to complete, which you don't want.
So when you press the button, the mainmethodLaunch function will be invoked, and it will create another process executing mainmethod. mainmethodLaunch's own run duration should be insignificant. Because another process is used, the Tkinter window will not hang. However, if you do just this, you will not be able to interact with the mainmethod process in any way while it is working.
In order to let these processes communicate with each other, you could use pipes (https://docs.python.org/2/library/multiprocessing.html#exchanging-objects-between-processes)
I guess the example is quite clear.
In order to receive some data from the mainmethod process over time, you will have to poll parent_conn every so often, say, once a second. This can be achieved with Tkinter's after method
(tkinter: how to use after method)
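A minimal sketch of that polling pattern, assuming mainmethod is changed to take a connection and send its output through it (master and text match the question's widgets; the rest is illustrative):

from multiprocessing import Process, Pipe

def mainmethodLaunch():
    global parent_conn, mainmethodProcess
    parent_conn, child_conn = Pipe()
    # assumes mainmethod is modified to accept the connection and call
    # child_conn.send(...) whenever it has a new chunk of output
    mainmethodProcess = Process(target=mainmethod, args=(child_conn,))
    mainmethodProcess.start()
    master.after(1000, poll_pipe)            # start polling once a second

def poll_pipe():
    while parent_conn.poll():                # drain whatever has arrived
        text.insert('end', parent_conn.recv())
    if mainmethodProcess.is_alive():
        master.after(1000, poll_pipe)        # keep polling until it is done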
IMPORTANT NOTE: when using multiprocessing, you MUST initialize the program inside an if __name__ == '__main__': block. I mean, there should be no doing-something code outside functions and this block, no doing-something code at zero indent.
This is because multiprocessing may have to start a fresh copy of the same Python file for the child process (on Windows it always does), and it has to distinguish the main process from the spawned one so that the initialization stuff is not run again in the child.
Check twice that you have done this, because if you make such a mistake, it can cost you the hanging of not just the Tkinter window, but the whole system :)
That is because the process would spawn copies of itself endlessly, consuming all the RAM you have, regardless of how much you have.
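Concretely, the guard looks like this (a sketch; the widget setup stands in for whatever your script currently does at top level):

from Tkinter import Tk, Button   # Python 2, matching the docs linked above

if __name__ == '__main__':
    # All top-level "doing-something" code lives in here, so a child
    # process loading this file will not re-run the GUI setup.
    master = Tk()
    MyButton2 = Button(master, text='Propagate', bg="grey", width=20,
                       command=mainmethodLaunch)
    MyButton2.place(x=1000, y=100)
    master.geometry("1500x1500")
    master.mainloop()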
I am attempting to convert code from Summerfield's article on (old-style) PyQt Signals/Slots to new-style PySide code. One example is a pure console application, which I have never worked with before. Unfortunately, when I try to run it multiple times, I am told that the previous application is still running.
It is a simple app: it basically lets you set a number, and reports back if the number is new:
from PySide import QtCore
import sys

class TaxRate(QtCore.QObject):
    rateChangedSig = QtCore.Signal(float)

    def __init__(self):
        QtCore.QObject.__init__(self)
        self.rate = 17.5

    def getRate(self):
        return self.rate

    def setRate(self, newRate):
        if newRate != self.rate:
            self.rate = newRate
            self.rateChangedSig.emit(self.rate)  # was self.emit(SIGNAL("rateChanged"), self.rate)

#@QtCore.Slot()  # technically not really needed
def rateChangedSlot(value):
    print("Tax rate changed to {0:.2f} %".format(value))

if __name__ == "__main__":
    qtApp = QtCore.QCoreApplication(sys.argv)  # original had QtGui.QApplication, but there is no GUI
    vat = TaxRate()
    vat.rateChangedSig.connect(rateChangedSlot)  # was vat.connect(vat, SIGNAL("rateChanged"), rateChanged)
    vat.setRate(8.5)  # a change will occur (new rate is different)
    qtApp.quit()
    sys.exit(qtApp.exec_())
Overall, it works as expected, except the final two lines do not kill the process. When I try to run the program twice, the second time my IDE (Spyder) always tells me that it is already running in a separate process. If I try running it from the command line, the window just hangs.
Strangely, when I comment out the last two lines I do not get this warning. This is the opposite of what I expect (based on previous experience with PySide GUI applications and the documentation for quit()).
Following the Closing a window example at Zetcode, I tried replacing qtApp.quit() with qtApp.instance().quit(), which yielded the same non-killing result.
So, how do I kill this thing?
One idea is that I shouldn't have even started it in the first place (as suggested here). Even though it is a pure console app, Summerfield's original program initializes with app=QtGui.QApplication(sys.argv), and it does not contain the last two lines. Things run fine, multiple times. However, isn't there a concern that each run would create a new process, so that his program effectively multiplies processes without warning? (Note that in practice I don't think this is happening on my system, so the answer seems to be 'No' for reasons I don't understand.)
What is the correct way to control/initialize/kill a console app using PySide?
(This is ignoring the question, for the time being, why one would ever use PySide for a pure console application in Python as has been pointed out previously. But if anyone were to be interested in answering that separate question, I could start a separate question for it).
Potentially relevant post:
Pyside applications not closing properly
The problem is because you call QCoreApplication.quit() before you call QCoreApplication.exec_(). The call to quit is not queued up in the event loop, it happens immediately. The call to QCoreApplication.exec_() starts the event loop which only ends when a call to QCoreApplication.exit() (or QCoreApplication.quit()) is made while the event loop is running.
This is somewhat explained in the Qt documentation of QCoreApplication but it is very easy to miss.
I imagine you don't really need to call exec_() as you aren't using any events in your current code (most events are to do with window/mouse/keyboard though you might conceivably use some in the future like those generated by QTimer). It really depends what you want to do with the program in the future.
If you don't call exec_(), then your script will exit as you would normally expect any Python script to do (the only blocking function in your code is the call to exec_(), remove that and nothing keeps it running.)
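If you do want the event loop to start and then exit cleanly anyway, one option (a sketch, not the only fix) is to queue the quit call so it fires once the loop is running:

# Replace the last two lines of the script with:
QtCore.QTimer.singleShot(0, qtApp.quit)   # queued; fires once exec_() starts
sys.exit(qtApp.exec_())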
I'm writing a GUI application that has a button which invokes a long task. In order for it not to freeze the GUI I delegate the task to a different process using python 3.3's multiprocessing module. Then I return the result for display using a Pipe.
I want the application not to leave any zombie processes even if it is quit during the computation. As I'm on a Mac, this can happen in one of two ways: by quitting the application (Command+Q) or by closing its window.
Here's the code in the function linked to a button in the GUI:
from multiprocessing import Process, Pipe

main_pipe, child_pipe = Pipe()
p = Process(target=worker, args=(child_pipe, data))
p.start()
try:
    while not main_pipe.poll():   # spin until the worker sends a result
        root.update()             # keep the GUI alive in the meantime
    value_array = main_pipe.recv()
finally:
    p.join()
This doesn't work: the application doesn't respond to Command+Q, and closing the window leaves two zombie processes running (one for the GUI and one for the worker).
How do I make it work in the other case as well?
Is this good practice? Is there a nicer, more Pythonic way of doing it?
Additionally, at the very end of the script I have these two lines (the exit() closes the application if the window is closed while nothing is being processed):
root.mainloop()
exit()
And finally, what's the difference between update() and mainloop()? Is it only that the latter hogs up the program while update() doesn't?
OK, I finally solved it. Although I'm not completely sure about this method's Pythonicness or side effects, here it is in case somebody needs it.
I figured that quitting correctly can only happen in mainloop(), not in update(), so I wrote two functions: one that creates the process and one that checks its output, and they call each other using root.after(). I set the process's daemon flag to True to ensure proper quitting behaviour. Here's the code:
def process_start():
    global value_array
    global main_pipe
    main_pipe, child_pipe = Pipe()
    p = Process(target=worker, args=(child_pipe, data))
    p.daemon = True   # daemon processes are terminated when the parent exits
    p.start()
    root.after(500, check_proc)

def check_proc():
    if not main_pipe.poll():
        root.after(500, check_proc)   # nothing yet; check again in 500 ms
    else:
        global value_array
        value_array = main_pipe.recv()
I'm still not sure whether p.join() is needed, but the daemon flag seems to get around the zombie-process problem.
I was facing a similar problem earlier (except I wasn't using multiprocessing). After nearly a whole day's research, I came up with the following conclusions:
mainloop and root.wait_window will (sometimes) block the sys.exit signal, which your program should receive after you hit ⌘Q.
You can bind ⌘Q to a new function, although you may still receive the sys.exit signal
Another way (more reliable in my case) is to remap the tcl quitting signal to another function, instead of the default one. You can remap the quit event (dock quit and ⌘Q) using this: root.createcommand('::tk::mac::Quit',function)
When trying to quit the software, use sys.exit instead of exit() or quit()
You can also use root.wm_protocol("WM_DELETE_WINDOW", function) to define the behavior when users clicking on the red button.
You can actually make the border of the window and the three default buttons disappear by calling root.overrideredirect(1), and thus force the user to click a button in your GUI instead of closing the window.
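A minimal sketch combining those hooks (the cleanup body is illustrative):

import sys
import tkinter as tk

root = tk.Tk()

def on_quit():
    # terminate any worker processes here before leaving, then exit for real
    root.destroy()
    sys.exit(0)

root.createcommand('::tk::mac::Quit', on_quit)   # dock Quit and ⌘Q on macOS
root.wm_protocol("WM_DELETE_WINDOW", on_quit)    # the red close button
root.mainloop()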
In my application I have the following line which opens a file dialog window. Once I get the file name, I do a bunch of processing which takes quite some time, and once this is done the workspace is ready for the user.
filename, _ = QtGui.QFileDialog.getOpenFileName(self, 'Open file', os.curdir, "*.cws")
The file dialog is a modal window (by default), which is great because it prevents the user from doing stupid stuff while the workspace isn't ready yet. I want to put a progress bar somewhere to give a sense of how much has been processed. I made another dialog window which displays a progress bar and some other information.
Now, since the file dialog window is modal, it just sits there frozen while my workspace is processing, and the progress dialog only pops up after everything is done.
I've looked into making the file dialog window non-modal, but I don't think that is possible. I was thinking of maybe forcing it to close and immediately having my progress dialog pop up and take over the modality. How can I close the file dialog window programmatically? I don't know how to get a reference to the form.
Or perhaps you have a better suggestion on how to address this?
As thuga mentioned, your application's event loop is blocked by your heavy processing.
So events (and especially paint events) are not processed while your processing is running, causing the GUI to freeze.
In my opinion, you have two options:
Force events to be processed (not very classy, but may work):
It depends on how your "heavy processing" is done.
Assuming the code hanging the loop is "in your hands" (not in a third-party lib), you can add as many calls to QApplication.processEvents() in it as you can.
If the processing is loop-based, it can look like this:
for item in itemList:
    # ... process item ...
    QtGui.QApplication.processEvents()
This has the main drawback of adding GUI dependencies to parts of the code that should not be aware of it.
If your code is not loop-based, then you'll have to add several calls to processEvents, which will pollute the processing code.
Stop hanging the event loop (more complicated but more maintainable)
That means you will have to deal with Threads and/or subprocesses as thuga suggested.
This solution assumes that GUI code and business code are separated well enough.
You can have a look at this article from Qt Quarterly that gives some highlights on this issue.
Because of Python's Global Interpreter Lock (GIL), you may not see better results with threads.
Consider using the multiprocessing library.
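As a sketch of that multiprocessing option (the worker function, timing, and widget names are assumptions, not from your code): the work runs in a second process while a QTimer polls for the result, so the event loop keeps running.

from multiprocessing import Pool
from PySide import QtCore, QtGui

def heavy_processing(filename):   # stand-in for the real workspace loading
    import time
    time.sleep(5)
    return filename

class Workspace(QtGui.QWidget):
    def open_file(self):
        filename, _ = QtGui.QFileDialog.getOpenFileName(self, 'Open file')
        self.pool = Pool(processes=1)
        # The work runs in another process; the GUI event loop keeps spinning.
        self.result = self.pool.apply_async(heavy_processing, (filename,))
        self.progress = QtGui.QProgressDialog("Loading workspace...", "", 0, 0, self)
        self.progress.setCancelButton(None)   # no cancel button in this sketch
        self.progress.show()
        self.timer = QtCore.QTimer(self)
        self.timer.timeout.connect(self.poll_result)
        self.timer.start(100)                 # poll the worker every 100 ms

    def poll_result(self):
        if self.result.ready():               # finished: fetch result, clean up
            self.timer.stop()
            self.progress.close()
            self.workspace_data = self.result.get()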