Python - tkinter call to after is too slow

I had been working on a python and tkinter solution to the code golf here: https://codegolf.stackexchange.com/questions/26824/frogger-ish-game/
My response is the Python 2.7 one. The thing is, when I run this code on my 2008 Mac Pro, everything works fine. When I run it on Win7 (I have tried this on several different machines, with the same result), the main update loop runs far too slowly. You will notice that I designed my implementation with a 1-ms internal clock:
if(self.gameover == False):
    self.root.after(1, self.process_world)
Empirical testing reveals that this runs much, much slower than every 1ms. Is this a well-known Windows 7-specific behavior? I have not been able to find much information about calls to after() lagging behind by this much. I understand that the call is supposed to be executed "at least" after the given amount of time, and not "at most", but I am seeing 1000 update ticks every 20 seconds instead of every 1 second, and a factor of 20 seems excessive. The timer loop that displays the game clock works perfectly well. I thought that maybe the culprit was my thread lock arrangement, but commenting that out makes no difference. This is my first time using tkinter, so I would appreciate any help and/or advice!
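A likely culprit is the default Windows timer granularity (roughly 15.6 ms), which quantizes short waits upward regardless of whether they come from after() or time.sleep(). A minimal sketch to measure the effective granularity on your platform (the helper name is mine, not from the question):

```python
import time

def measure_sleep_granularity(requested=0.001, samples=200):
    """Average how long a nominal 1 ms wait actually takes."""
    durations = []
    for _ in range(samples):
        start = time.perf_counter()
        time.sleep(requested)  # stands in for the delay after() schedules
        durations.append(time.perf_counter() - start)
    return sum(durations) / len(durations)

avg = measure_sleep_granularity()
print(f"requested 1.0 ms, observed {avg * 1000:.2f} ms on average")
```

If the observed average is far above 1 ms, the usual fix is to decouple game speed from tick rate: schedule with after() at whatever interval the platform honors, and advance the world by the elapsed wall-clock time on each tick.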


Generating Timing Statistics/Profiling the Python Interpreter

To begin, several similar questions have previously been asked on this site, notably here and here. The former is 11 years old, and the latter, while being 4 years old, references the 11 year old post as the solution. I am curious to know if there is something more recent that could accomplish the task. In addition, these questions are only interested in the total time spent by the interpreter. I am hoping for something more granular than that, if such a thing exists.
The problem: I have a GTK program written in C that spawns a matplotlib Python process and embeds it into a widget within the GTK program using GtkSocket and GtkPlug. The python process is spawned using g_spawn (GLib) and then the plot is plugged into the socket on the Python side after it has been created. It takes three seconds to do so, during which time the GtkSocket widget is transparent. This is not very pleasant aesthetically, and I would like to see if there is anything I could do to reduce this three second wait time. I looked at using PyPy instead of CPython as the interpreter, but I am not certain that PyPy has matplotlib support, and that route could cause further headaches since I freeze the script to an executable using PyInstaller. I timed the script itself from beginning to end and the time was around 0.25 seconds. I can run the plotting script from the terminal (normal or frozen) and it takes the same amount of time for the plot to appear (~3 seconds), so it can't be the g_spawn(). The time must all be spent within the interpreter.
I created a minimal example that reproduces the issue (although much less extreme, the time before the plot appears in the socket is only one second). I am not going to post it now since it is not necessarily relevant, although if requested, I can add the file contents with an edit later (contains the GUI C code using GTK, an XML Glade file, and the Python script).
The fact that the minimal example takes one second and my actual plot takes three seconds is hardly a surprise (and further confirms that the timing problem is the time spent with the interpreter), since it is more complicated and involves more imports.
The question: Is there any utility that exists that would allow me to profile where the time is being spent within my script by the Python interpreter? Is most of the time spent with the imports? Is it elsewhere? If I could see where the interpreter spends most of its time, that may possibly allow me to reduce this three second wait time to something less egregious.
Any assistance would be appreciated.
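For the import-time part of the question specifically, CPython 3.7+ can print a per-module import timing tree with `python -X importtime yourscript.py`. For finer-grained timing of everything else, the standard-library profiler works; a minimal sketch, where `slow_imports` is a hypothetical stand-in for your plotting script's entry point:

```python
import cProfile
import io
import pstats

def slow_imports():
    # Stand-in for the real work; in practice call your plotting
    # script's main() here so its imports and setup get profiled.
    import csv, decimal, json  # noqa: F401 (profiled like any other call)
    return sum(range(100000))

profiler = cProfile.Profile()
profiler.enable()
slow_imports()
profiler.disable()

stream = io.StringIO()
stats = pstats.Stats(profiler, stream=stream).sort_stats("cumulative")
stats.print_stats(10)  # top 10 entries by cumulative time
report = stream.getvalue()
print(report)
```

If the report shows most of the time under import machinery, `-X importtime` will break that down per module and tell you which imports to defer or avoid.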

"IOStream.flush timed out" errors when multithreading

I am new to Python programming and having a problem with a multithreaded program (using the "threading" module) that runs fine at the beginning, but after a while starts repeatedly printing "IOStream.flush timed out" errors.
I am not even sure how to debug such an error, because I don't know what line is causing it. I read a bit about this error and saw that it might be related to memory consumption, so I tried profiling my program using a memory profiler on the Spyder IDE. Nothing jumped out at me, however (although I admit that I am not sure what to look for when it comes to Python memory leaks).
A few more observations:
I have an outer loop that runs my function over a large number of files. The files are just numeric data with the same formatting (they are quite large, though, and there is download latency, which is why I made the application multithreaded so that each thread works on different files). With a long list of files, the problem occurs; if I shorten the list, the program concludes without problems. I am not sure why that is, although if it is some kind of memory leak, I would assume the problem grows the longer the program runs, until it reaches some kind of memory limit.
Normally, I use 128 threads in the program. If I reduce the number of threads to 48 or less, the program works fine and completes correctly. So clearly the problem is caused by multithreading (I'm using the "threading" module). This makes it a bit trickier to debug and figure out what is causing the problem. It seems something around 64 threads starts causing problems.
The program never explicitly crashes out. Once it gets to the point where it has this error, it just keeps repeatedly printing "IOStream.flush timed out". I have to close the Spyder IDE to stop it (Restart kernel doesn't work).
Right before this error happens, the program appears to stall. At least no more "prints" happen to the console (the various threads are all printing debug information to the screen). The last lines printed are standard debugging/status print statements that usually work when the number of threads is reduced or the number of files to process is decreased.
I have no idea how to debug this and get to the bottom of the problem. Any suggestions on how to get to the bottom of this would be much appreciated. Thanks in advance!
Specs:
Python 3.8.8
Spyder 4.2.5
Windows 10
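One way to keep the download parallelism without hand-managing 128 raw threads is to cap concurrency with a pool sized at a level you have observed to be stable (48 in the question). A minimal sketch, with a placeholder worker and a hypothetical file list standing in for the real download-and-parse code:

```python
from concurrent.futures import ThreadPoolExecutor, as_completed

def process_file(path):
    # Placeholder for the real work: download the file, parse the
    # numeric data, return whatever the per-file result is.
    return len(path)

files = [f"data_{i}.txt" for i in range(200)]  # hypothetical file list

# Cap concurrency at a level known to be stable instead of spawning
# one raw thread per file; excess tasks simply queue up.
with ThreadPoolExecutor(max_workers=48) as pool:
    futures = {pool.submit(process_file, f): f for f in files}
    results = [fut.result() for fut in as_completed(futures)]

print(len(results))
```

Besides sidestepping whatever breaks above ~64 threads, a pool also surfaces worker exceptions through `fut.result()` instead of silently losing them in a thread.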

Implementing a dead man's switch for running code

I have some Python code that's a bit buggy. It runs in an infinite loop and I expect it to print something about once every 50 milliseconds, but sometimes it hangs and stops printing or even outright segfaults. Unfortunately, the problem seems to be fairly rare, so I've had trouble nailing down the exact cause of the issue. I want to have the code up and running while I debug the problem, so while I try to figure it out, I'd like to create a dead man's switch that runs my code, but stops if the code doesn't print anything in a certain time frame (say, 5 seconds) or exits and finally executes a command to notify me that something went wrong (e.g. 'spd-say terminated').
I put this into the terminal for my first attempt at this:
python alert.py; spd-say terminated;
Unfortunately, it didn't seem to work - at this point I realized that the code was not only crashing but also hanging (and I'm also not sure whether this would even work if the code crashes). Unfortunately, I'm not very familiar with bash yet (I assume that's what I'm using when I run stuff in the terminal), and I'm not sure how I could set up something like what I want. I'm also open to using other things besides bash to do this if it would be particularly difficult to do for some reason.
What would be the best way to implement what I want to do?
You could run two Python programs with a pipe between them. On one side, your buggy script writes something to the pipe at least once every 5 seconds. On the receiving end of the pipe, a very simple watchdog script checks how long it has been since it last received anything; if that interval exceeds 5 seconds, it assumes your script has hung or crashed and runs your notification command (e.g. spd-say terminated).
This way you decouple your watchdog from your buggy script.
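The pipe idea can be sketched with the watchdog reading the child's stdout and timing out when nothing arrives. Everything here is illustrative: the inline child command is a stand-in for `python alert.py`, and in real use the "hung"/"exited" outcomes would trigger your notifier (e.g. `subprocess.run(["spd-say", "terminated"])`):

```python
import queue
import subprocess
import sys
import threading

TIMEOUT = 5.0  # seconds without output before we intervene
# Hypothetical child; replace with ["python", "alert.py"].
CMD = [sys.executable, "-u", "-c",
       "import time\nfor i in range(3): print(i, flush=True); time.sleep(0.1)"]

def pump(proc, q):
    # Forward each line of the child's stdout into a queue;
    # a None sentinel marks EOF (the child exited or crashed).
    for line in proc.stdout:
        q.put(line)
    q.put(None)

def run_with_watchdog():
    proc = subprocess.Popen(CMD, stdout=subprocess.PIPE, text=True)
    q = queue.Queue()
    threading.Thread(target=pump, args=(proc, q), daemon=True).start()
    while True:
        try:
            line = q.get(timeout=TIMEOUT)
        except queue.Empty:
            proc.kill()      # no output within TIMEOUT: treat as hung
            return "hung"
        if line is None:
            proc.wait()
            return "exited"  # crashed or finished; either way, notify
        print("child:", line, end="")

outcome = run_with_watchdog()
print(outcome)
```

This catches both failure modes from the question: a hang trips the queue timeout, and a crash (or segfault) closes the pipe and delivers the EOF sentinel.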

Big difference in performance looping through 1-1000 in PyCharm and PythonIDLE/Shell

Recently, while fiddling with Python in different IDEs/shells, I was surprised by the performance differences among them.
The code I wrote is a simple for-loop over 1-1000. When executed in Python IDLE or Windows PowerShell, it took about 16 seconds to finish, while PyCharm finished it almost immediately, in about 500 ms.
I'm wondering why the difference is so huge.
for x in range(0, 1000, 1):
    print(x)
The time to execute the loop is almost zero. The time you're seeing elapse is due to the printing, which is tied to the output facilities of the particular shell you are using. For example, the sort of buffering it does, maybe the graphics routines being used to render the text, etc. There is no practical application for printing numbers in a loop as fast as possible to a human-readable display, so perhaps you can try the same test writing to a file instead. I expect the times will be more similar.
On my laptop your code takes 4.8 milliseconds if writing to the terminal. It takes only 460 microseconds if writing to a file.
TL;DR: run stupid benchmarks, get stupid times.
IDLE is written in Python and uses tkinter, which wraps tcl/tk. By default, IDLE runs your code in a separate process, with output sent through a socket for display in IDLE's Shell window. So there is extra overhead for each print call. For me, on a years-old Windows machine, the 1000 line prints take about 3 seconds, or 3 milliseconds per print.
If you print the 1000 lines with one print call, as with
print('\n'.join(str(i) for i in range(1000)))
the result may take a bit more than 3 milliseconds, but it is still subjectively almost 'instantaneous'.
Note: in 3.6.7 and 3.7.1, single 'large' prints, where 'large' can be customized by the user, are squeezed down to a label that can be expanded either in-place or in a separate window.
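The display-overhead point is easy to check by timing the same 1000 prints against a plain file and against the interactive stream. A rough sketch (absolute numbers will vary wildly by console and machine):

```python
import sys
import tempfile
import time

def time_prints(stream, n=1000):
    """Time n print() calls directed at the given stream."""
    start = time.perf_counter()
    for i in range(n):
        print(i, file=stream)
    stream.flush()
    return time.perf_counter() - start

with tempfile.TemporaryFile("w") as f:
    file_time = time_prints(f)
terminal_time = time_prints(sys.stdout)
print(f"file: {file_time*1000:.1f} ms, stream: {terminal_time*1000:.1f} ms")
```

The loop itself is identical in both cases; any large gap between the two figures is pure output-rendering cost, which is exactly what differs between IDLE, PowerShell, and PyCharm.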

Different time taken by Python script every time it is run?

I am working on an OpenCV-based Python project, and I am trying to make my program execute in as little time as possible. To test this, I wrote a small "hello world" program in Python and timed how long it takes to run. I ran it many times, and every run gives a different run time.
Can you explain why a simple program takes a different amount of time to execute on each run?
I need my program to be independent of other system processes.
Python gets different amounts of system resources depending upon what else the CPU is doing at the time. If you're playing Skyrim with the highest graphics levels at the time, then your script will run slower than if no other programs were open. But even if your task bar is empty, there may be invisible background processes confounding things.
If you're not already using it, consider using timeit. It performs multiple runs of your program in order to smooth out bad runs caused by a busy OS.
If you absolutely insist on requiring your program to run in the same amount of time every time, you'll need to use an OS that doesn't support multitasking. For example, DOS.
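The timeit suggestion can look like the sketch below; `repeat` runs the measurement several times, and the minimum of the repeats is conventionally the least-disturbed, fairest figure to compare (the statement being timed here is just an arbitrary example):

```python
import timeit

# Run the statement many times per measurement and repeat the
# measurement; OS noise inflates some repeats, so take the minimum.
times = timeit.repeat(stmt="sum(range(1000))", repeat=5, number=10000)
print(f"best of 5: {min(times):.4f} s, worst: {max(times):.4f} s")
```

The spread between best and worst is a direct picture of the scheduling noise the answer above describes; it never fully disappears on a multitasking OS.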
