I have a tqdm progress bar:
print('foo')
for status in tqdm(cursor.items(count)):
    # process status
    pass
I print some messages before the loop, but the progress bar is shown before them. Is some kind of multithreading involved, or how can I fix this?
The problem is that print writes to sys.stdout by default while tqdm writes to sys.stderr by default, so the two streams are flushed independently and can appear out of order.
You can either pass file=sys.stdout to tqdm or file=sys.stderr to print so that both write to the same stream, or you can call sys.stdout.flush() before and sys.stderr.flush() after each tqdm loop.
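For example, a minimal sketch of the first option (the loop body here is just a placeholder for real work):

```python
import sys
from tqdm import tqdm

print('foo')  # ordinary message on stdout

total = 0
# file=sys.stdout routes the bar to the same stream as print(),
# so the message above and the bar appear in order
for i in tqdm(range(100), file=sys.stdout):
    total += i  # stand-in for real work
```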
I can report how a couple of the proposed solutions worked for me on Windows 10. I too was having the problem of tqdm's bar output being interrupted by the print executed prior to tqdm.
The solution that worked for me is
sys.stdout.flush()
The one that did not work was adding flush=True to the print call.
I think sys.stdout.flush() is the best option so far, as it is exactly what needs to be done: the output on stdout must be flushed so that it completes before tqdm's first output on stderr. It is the most basic of I/O problems.
I LOVE this little widget's ability to bring a bit of elegance to the otherwise boring world of stdio.
tqdm writes its bar to stderr, which is typically unbuffered, while print writes to stdout, which may be buffered; as a result the bar can appear before output that was printed earlier.
One workaround is to pause briefly before and after the loop. Use time.sleep(x) (x is in seconds) before and after the loop. Remember to import time at the start of your code. Experiment with different values of x, but just 0.1 will probably work fine.
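A minimal sketch of the sleep workaround (the 0.1 s values are arbitrary, and the stdout flush is an extra safeguard):

```python
import sys
import time
from tqdm import tqdm

print('starting')
sys.stdout.flush()   # push any pending stdout output first
time.sleep(0.1)      # brief pause before the bar starts drawing

count = 0
for i in tqdm(range(50)):
    count += 1       # stand-in for processing each item

time.sleep(0.1)      # let the bar finish before printing again
print('done')
```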
Related
Assume the following code stands in for a huge, complicated computation that runs for minutes or hours, and we want to tell the user what percentage of it has completed. What should I do?
num = 1
for i in range(1, 100000):
    num = num * i
print(num)
I want to show users a progress bar, similar to when installing something.
I checked here but I did not understand how to write a progress bar that depends on my code's progression.
In the examples similar to the mentioned link, they define a sleep or delay time. This is not acceptable, because we do not know the calculation time of Python for different code with different functions.
If your index i corresponds to your actual progress, the tqdm package is a good option. A simple example:
from tqdm import tqdm
import time

for i in tqdm(range(100000)):
    time.sleep(0.01)  # sleep 0.01 s
Output:
1%| | 1010/100000 [00:10<16:46, 98.30it/s]
Edit: The progress bar also works if the progress is not known.
def loop_without_known_length():
    # same as above, but the length is not known outside of this function
    for i in range(1000):
        yield i

for i in tqdm(loop_without_known_length()):
    time.sleep(0.01)
Output:
60it [00:00, 97.23it/s]
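When you do know the expected count even though the iterable itself has no length, you can also pass tqdm's total argument so it can render a percentage and ETA (a small sketch):

```python
from tqdm import tqdm

def gen():
    # a plain generator: tqdm cannot call len() on it
    for i in range(1000):
        yield i

results = []
# total= tells tqdm the expected count, so it shows a full bar
for i in tqdm(gen(), total=1000):
    results.append(i)
```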
I am not sure if I am doing something wrong. I am using progressbar to show how long a task is taking. This is the code I have wrapped around a to_excel command:
dfPub = pd.DataFrame(aPub)
if dfPub.empty:
    print("There are no Publications")
else:
    with progressbar.ProgressBar(max_value=10) as bar:
        for i in range(10):
            dfPub.to_excel(writer, 'Publications', columns=cols, index=False)
            time.sleep(0.1)
            bar.update(i)
It is working, but when testing with and without it there is a massive difference in run time: without the progress bar it takes about 2-3 seconds, and with it, around 15 seconds.
Am I implementing it incorrectly?
Use multithreading: with the threading module, put your progress bar on a new thread and test it again. You can read more at: https://pymotw.com/2/threading/
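A sketch of that idea: run the export once in a worker thread and tick the bar from the main thread while the worker is alive. Shown here with tqdm for a self-contained example; export_task() is a stand-in for the single dfPub.to_excel(...) call, and the same pattern works with the progressbar package.

```python
import threading
import time
from tqdm import tqdm

def export_task():
    time.sleep(1)  # stand-in for the real dfPub.to_excel(...) call

worker = threading.Thread(target=export_task)
worker.start()

bar = tqdm(desc='Exporting')  # no total: tqdm shows a running counter
while worker.is_alive():
    bar.update(1)
    time.sleep(0.1)
worker.join()
bar.close()
```

This way the export runs exactly once, and the sleeps only pace the bar updates instead of adding to the export time.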
There are numerous existing questions regarding the display of progress bars in the terminal while a Python script executes, but every one of them is based on a loop where you perform an operation and then update the progress graphic.
Unfortunately, the function whose progress I want to show--or at least a spinner object to show that it's working--is a black-box that I can't (at least really, really shouldn't) alter. Essentially, what I want to do is:
# pseudocode input
print('Loading')
spinner.begin()
blackbox()  # a few thousand operations happen in here
spinner.end()
print('Finished')

# pseudocode output
Loading.
Loading..
Loading...
Loading.
Loading..
Loading...
Finished
Although ideally that would be an animation of the ellipsis instead of printing multiple lines. Before I can even start building silly ascii animations though, there's the main hurdle:
Is there a way to run spinner and blackbox() at the same time? Alternately, is there a hack to pause blackbox(), regardless of its content, every few hundred milliseconds, update the spinner graphic, and then resume where it left off?
I've tried this with the progress module but had no luck... I couldn't even get the example code to work, it just hung up after I started iterating until I Ctrl+C'd out.
I like using alive_progress for this.
from typing import ContextManager, Optional

from alive_progress import alive_bar

def spinner(title: Optional[str] = None) -> ContextManager:
    """
    Context manager to display a spinner while a long-running process is running.

    Usage:
        with spinner("Fetching data..."):
            fetch_data()

    Args:
        title: The title of the spinner. If None, no title will be displayed.
    """
    return alive_bar(monitor=None, stats=None, title=title)
To install: pip install alive-progress
Threads are probably the easiest way to make this work. Here is a vastly simplified version that should get the point across. I wasn't sure whether you actually have the spinner function or not, so I made my own.
import threading
import time

def blackbox():
    time.sleep(10)

thread = threading.Thread(target=blackbox)
thread.start()

eli_count = 0
while thread.is_alive():
    print('Loading', '.' * (eli_count + 1), ' ' * (2 - eli_count), end='\r')
    eli_count = (eli_count + 1) % 3
    time.sleep(0.1)
thread.join()
print('Done      ')
So, while blackbox runs, the loading message is updated periodically. Once it finishes, the thread is joined and the loading message is replaced with a completed message.
You probably want to use threads (import threading). Have spinner.begin() start a thread that prints your messages, then let your blackbox run, and then have spinner.end() send a finish message to the thread using a Queue (queue.Queue) or something, join() the thread, and keep doing whatever it is you do.
As a design choice, hide the prints somewhere deeper, not in the same block of code as the begin and end calls.
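A minimal sketch of that design, using a threading.Event as the stop signal instead of a Queue (Spinner, begin, and end are illustrative names here, not an existing API):

```python
import threading
import time

class Spinner:
    """Prints an animated 'Loading' message until end() is called."""

    def __init__(self, message='Loading'):
        self.message = message
        self._stop = threading.Event()
        self._thread = threading.Thread(target=self._spin)

    def _spin(self):
        dots = 0
        while not self._stop.is_set():
            dots = dots % 3 + 1
            # pad with spaces so a shorter frame overwrites a longer one
            print(self.message + '.' * dots + ' ' * (3 - dots), end='\r')
            time.sleep(0.2)

    def begin(self):
        self._thread.start()

    def end(self):
        self._stop.set()      # signal the spinner thread to exit
        self._thread.join()
        print()

spinner = Spinner()
spinner.begin()
time.sleep(1)  # stand-in for blackbox()
spinner.end()
print('Finished')
```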
I'm trying to write a program that executes a piece of code in such a way that the user can stop its execution at any time without stopping the main program. I thought I could do this using threading.Thread, but then I ran the following code in IDLE (Python 3.3):
from threading import *
import math

def f():
    eval("math.factorial(1000000000)")

t = Thread(target=f)
t.start()
The last line doesn't return: I eventually had to restart the shell. Is this a consequence of the Global Interpreter Lock, or am I doing something wrong? I didn't see anything specific to this problem in the threading documentation (http://docs.python.org/3/library/threading.html).
I tried to do the same thing using a process:
from multiprocessing import *
import math

def f():
    eval("math.factorial(1000000000)")

p = Process(target=f)
p.start()
p.is_alive()
The last line returns False, even though I ran it only a few seconds after I started the process! Based on my processor usage, I am forced to conclude that the process never started in the first place. Can somebody please explain what I am doing wrong here?
Thread.start() never returns! Could this have something to do with the C implementation of the math library?
As @eryksun pointed out in the comments: math.factorial() is implemented as a C function that doesn't release the GIL, so no other Python code may run until it returns.
Note: multiprocessing version should work as is: each Python process has its own GIL.
factorial(1000000000) has billions of digits. Try import time; time.sleep(10) as a dummy calculation instead.
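A quick way to see the difference (a minimal sketch): time.sleep() releases the GIL, so the main thread stays responsive while the worker thread runs:

```python
import threading
import time

def f():
    time.sleep(2)  # releases the GIL, unlike math.factorial(10**9)

t = threading.Thread(target=f)
t.start()
print(t.is_alive())  # the shell is not blocked; prints True immediately
t.join()
```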
If you have issues with multithreaded code in IDLE then try the same code from the command line, to make sure that the error persists.
If p.is_alive() returns False after p.start() has already been called, it might mean that there is an error in the f() function, e.g., a MemoryError.
On my machine, p.is_alive() returns True and one of cpus is at 100% if I paste your code from the question into Python shell.
Unrelated: remove wildcard imports such as from multiprocessing import *. They may shadow other names in your code, so you can't be sure what a given name means; e.g., threading could define an eval function (it doesn't, but it could) with similar but different semantics that might break your code silently.
I want my program to be able to handle ridiculous inputs from the user gracefully
If you pass user input directly to eval() then the user can do anything.
Is there any way to get a process to print, say, an error message without constructing a pipe or other similar structure?
It is an ordinary Python code:
print(message) # works
The difference is that if several processes run print() then the output might be garbled. You could use a lock to synchronize print() calls.
Sometimes my QProgressDialog shows, sometimes it doesn't ever show at all (as if processEvents weren't called). Are there any artifacts of the processEvents() command that may cause the QProgressDialog not to show under certain circumstances?
My question is general because I have not yet been able to isolate the problem in my code. However, I have noticed that when my QProgressDialog does not show, it is while I am accessing a text file using a config parser. The workaround was to do a time.sleep() after the file had been closed (perhaps to ensure the operation completed so that processEvents would then show the QProgressDialog).
If it helps, here's my code for running the QProgressDialog as a generator:
def progress_dialog(self, data, label, window_title, stop_label, capture_bar=False):
    bar = QProgressDialog(label, stop_label, 0, len(data))
    if capture_bar:
        self.prog_bar = bar
    bar.setWindowTitle(window_title)
    for k, d in enumerate(data):
        QCoreApplication.instance().processEvents()
        if bar.wasCanceled():
            raise GeneratorExit
        # set the next value beyond the start of 0
        bar.setValue(k + 1)
        # again process events to draw the new label and value
        QCoreApplication.instance().processEvents()
        yield d
    # a bare return ends the generator; raising StopIteration inside
    # a generator is an error since Python 3.7 (PEP 479)
    return
Again, sorry I don't have a full code snippet of the isolated problem (and the full code is too big of an ocean). I guess what I'm looking for is a way of checking whether the processEvents() command is doing its job (because clearly I am calling it, but it hangs on other processes rather than showing the dialog).
Edit:
According to this support request doing a "bar.show()" command will force the progress bar to show.
http://redmine.smar.fi/issues/265
I'm going to wait a few weeks and make sure this is a guaranteed fix before posting it as an answer.
If you need to show a QProgressDialog regardless of the duration of the process, use its setMinimumDuration method with a value of 0. According to the documentation, the default minimum is 4000 ms.
According to this support request doing a bar.show() command will force the progress bar to show:
http://redmine.smar.fi/issues/265
Simply call the show() method before every process events call and after the progress bar is first constructed.
I've waited nearly 4 months and this solution has worked without failing yet. Seems to be a sufficient answer.
This might be an old thread, but I had a similar problem: show() made the dialog appear, but empty. So I came up with this decorator, which I apply to functions that I want to run blocking while permitting the GUI thread to process events.
def nongui(fun):
    """Decorator running the function in a non-GUI thread while
    processing the GUI events."""
    from multiprocessing.pool import ThreadPool
    from PyQt4.QtGui import QApplication

    def wrap(*args, **kwargs):
        pool = ThreadPool(processes=1)
        # 'async' is a reserved word in Python 3.7+, so use another name
        async_result = pool.apply_async(fun, args, kwargs)
        while not async_result.ready():
            async_result.wait(0.01)
            QApplication.processEvents()
        return async_result.get()
    return wrap
Then, it's easy to write your calculating function normally with the decorator:
@nongui
def work(input):
    # Here you calculate the output and set the
    # progress dialog value
    return out
and then run it as usual:
out = work(input)