I have three machines, all running Windows 10.
The first has PyPy 7.3.9 (Python 3.9.10).
The second has vanilla Python 3.9.1, downloaded from python.org and installed manually.
The third has Python 3.10.10, installed via winget.
I mostly develop on the first PC and everything was fine, until I moved to the lab machines and noticed significant slowness. I reduced my code to a minimum:
import time
import tkinter as tk

class STM_GUI(tk.Tk):
    def __init__(self):
        super().__init__()

start = time.time()
gui = STM_GUI()
took = time.time() - start
print("%.2f" % took)
gui.mainloop()
On PC#1 the output is 0.06.
On PC#2 the output is 0.48 (8 times longer, but OK).
On PC#3 the output is 5.23 (5 seconds to do nothing?!).
Question: why, and how can it be mitigated?
After hours of digging it turned out the issue was that the home directory of the lab machine user was mapped to a quite remote network drive, which slowed tkinter down.
I am not sure why it affected only tkinter, or what exactly it was looking for there, but once the network drive was unmapped, performance became comparable to the other machines.
I wanted to delete the post, but decided to keep it in case anyone hits a similar issue.
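In case it helps anyone who lands here: one way to test the network-home hypothesis without unmapping the drive is to time Tk initialization with the home directory redirected to a local path. This is only a sketch; that Tk probes the home directory during startup is my assumption, and C:\Temp is a placeholder:

import os
import time
import tkinter as tk

# Assumption: Tk touches the user's home directory while initializing, so
# pointing this process at a local folder should restore speed if the
# remote home mapping is indeed the culprit.
os.environ["HOME"] = r"C:\Temp"         # Tcl prefers HOME when it is set
os.environ["USERPROFILE"] = r"C:\Temp"  # usual Windows home variable

start = time.time()
root = tk.Tk()
print("Tk init took %.2f s" % (time.time() - start))
root.destroy()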
To begin, several similar questions have previously been asked on this site, notably here and here. The former is 11 years old, and the latter, while 4 years old, references the 11-year-old post as the solution. I am curious to know if there is something more recent that could accomplish the task. In addition, those questions are only interested in the total time spent by the interpreter. I am hoping for something more granular than that, if such a thing exists.
The problem: I have a GTK program written in C that spawns a matplotlib Python process and embeds it into a widget within the GTK program using GtkSocket and GtkPlug. The Python process is spawned using g_spawn (GLib), and the plot is then plugged into the socket on the Python side after it has been created. This takes three seconds, during which the GtkSocket widget is transparent. That is not very pleasant aesthetically, and I would like to see whether there is anything I can do to reduce the three-second wait. I looked at using PyPy instead of CPython as the interpreter, but I am not certain that PyPy has matplotlib support, and that route could cause further headaches since I freeze the script to an executable using PyInstaller. I timed the script itself from beginning to end and got around 0.25 seconds. I can run the plotting script from the terminal (normal or frozen) and it takes the same amount of time for the plot to appear (~3 seconds), so it can't be the g_spawn(). The time must all be spent within the interpreter.
I created a minimal example that reproduces the issue (although much less extreme: the time before the plot appears in the socket is only one second). I am not posting it now since it is not necessarily relevant, but if requested I can add the file contents in a later edit (it contains the GUI C code using GTK, an XML Glade file, and the Python script).
The fact that the minimal example takes one second while my actual plot takes three is hardly a surprise (and further confirms that the time is spent in the interpreter), since the real script is more complicated and involves more imports.
The question: Is there any utility that would let me profile where the Python interpreter spends its time in my script? Is most of the time spent on the imports? Is it elsewhere? If I could see where the interpreter spends most of its time, I might be able to reduce this three-second wait to something less egregious.
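To illustrate the kind of granularity I am hoping for, here is a sketch of two coarse measurements I am aware of (plot_script.py and main() are placeholders for my actual entry points; -X importtime needs CPython 3.7+):

# Import-phase timing (CPython 3.7+): prints a cumulative per-module tree
# to stderr; plot_script.py is a placeholder for the real script.
#   python -X importtime plot_script.py 2> imports.log

# Post-import timing: per-function cumulative totals via cProfile.
import cProfile
import pstats

cProfile.run("main()", "profile.out")  # "main()" is a placeholder entry point
pstats.Stats("profile.out").sort_stats("cumulative").print_stats(20)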
Any assistance would be appreciated.
This was originally seen in Python, but has since been replicated in C++. Here is the unit test that distills down and replicates the behavior on my new laptop. These are just local socket connections.
import time
import unittest
import zmq

class TestZmqStartup(unittest.TestCase):  # enclosing TestCase restored; class name arbitrary
    def test_zmq_publisher_duration(self):
        max_duration = 1.0
        t0 = time.time()
        socket = zmq.Context.instance().socket(zmq.PUB)
        duration = time.time() - t0
        print(socket)
        self.assertLess(duration, max_duration, msg="socket() took too long.")
On other computers, and on my old laptop, this runs in a fraction of a second. However, on my new laptop (beefy Dell Precision 7730) this takes about 44 seconds. I get similar results when creating a zmq.SUB (subscriber) socket.
If I step down into the socket() call, the two statements which consume all the time are as follows:
zmq/sugar/context.py:

class Context:
    @classmethod
    def instance(cls, io_threads=1):
        ...
        cls._instance = cls(io_threads=io_threads)
        ...

    def socket(self, socket_type, **kwargs):
        ...
        s = self._socket_class(self, socket_type, **kwargs)
        ...
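For what it's worth, the two stages can also be timed separately outside the unit test; a minimal standalone sketch:

import time
import zmq

t0 = time.time()
ctx = zmq.Context.instance()  # first call constructs the singleton Context
t1 = time.time()
sock = ctx.socket(zmq.PUB)    # socket creation on the existing context
t2 = time.time()

print("Context.instance(): %.3f s" % (t1 - t0))
print("ctx.socket(PUB):    %.3f s" % (t2 - t1))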
I am perplexed and baffled. Everything else on the laptop seems to be fine. Perhaps I have pip installed my dependent modules in some slightly different way? Could a previously installed zmq module versus pyzmq be causing problems? Perhaps it is something in the laptop setup from our IT department? I have tried running as administrator, running from within PyCharm, running from the command-line, and unplugging the network cable while running.
I am relatively new to Python and ZMQ, but we have been developing on this project for months with no performance issues. In production code, we have a MessageBroker class that contains most of the pub/sub architecture. The unit test above was created by simply pulling the first significant line of code out of our MessageBroker.Publisher constructor (which creates the socket). Even though the socket creation is SLOW on this computer, our application does still come up and run properly after the sockets get created. It just takes 7 minutes to start the application.
I suspect Ed's Law of Debugging: "The more bizarre the behavior, the more stupid the mistake."
This was apparently a Windows 10 or laptop firmware issue. Some updates were pushed by the IT department and things worked normally the next day. Here are the items that were installed, per the Event Viewer:
Installed KB4456655: Servicing stack update for Windows 10, version 1803: September 11, 2018 (stability improvements)
Installed KB4462930: Update for Adobe Flash Player
Installed KB4100347: Intel microcode updates
Installed KB4485449: Servicing stack update for Windows 10 v1803 - Feb. 12
Installed KB4487017: (Same description as KB4485449)
Installed KB4487038: Security update for Adobe Flash Player
I am working on an OpenCV-based Python project, and I am trying to make my program execute in as little time as possible. To that end, I timed a small "hello world" program in Python. I ran it many times, and every run gives a different run time.
Can you explain why a simple program takes a different amount of time to execute on each run? I need my program's timing to be independent of other system processes.
Python gets different amounts of system resources depending on what else the CPU is doing. If you're playing Skyrim at the highest graphics settings, your script will run slower than if no other programs were open. But even if your task bar is empty, there may be invisible background processes confounding things.
If you're not already using it, consider using timeit. It performs multiple runs of your program in order to smooth out bad runs caused by a busy OS.
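A minimal sketch of that (the statement being timed is just a stand-in):

import timeit

# repeat() returns one total per run of `number` executions of the statement.
times = timeit.repeat("sum(range(1000))", repeat=5, number=10000)

# The minimum is the most representative figure: larger values reflect
# interference from other processes, not your code.
print(min(times) / 10000, "seconds per call")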
If you absolutely insist on requiring your program to run in the same amount of time every time, you'll need to use an OS that doesn't support multitasking. For example, DOS.
I have a problem with sending triggers for EEG recording using PsychoPy standalone v1.81.00 on a Win7 64-bit OS. I followed the descriptions here and don't get any (more) errors. The triggers, however, don't show up on the recording computer (Brainvision Recorder under Win7 32-bit).
What I did:
Downloaded and installed the InpOutBinaries_1500 via InpOutBinaries_1500\Win32\InstallDriver.exe
Copied the other files (inpout32.dll, .h and .lib as well as vssver2.scc) to the working directory of my script
Tried sending trigger codes with windll.inpout32.Out32(0x378, triggerCode)
The trigger code doesn't show up in Brainvision Recorder but seems to be set correctly when calling print str(windll.inpout32.Inp32(0x378)).
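Put together, the write-then-read check corresponds to something like this (a sketch of the steps above; 0x378 is the LPT1 base address on my machine):

from ctypes import windll

port = 0x378      # parallel port base address (LPT1 here)
triggerCode = 1   # any value between 0 and 255

windll.inpout32.Out32(port, triggerCode)  # write the trigger byte to the data pins
print(str(windll.inpout32.Inp32(port)))   # reads back correctly, yet nothing shows in the Recorder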
Thanks for every piece of advice or idea!
I managed to solve the problem. I'm not entirely sure which step(s) actually did the trick, but I recommend the following:
Download and install LPT Test Utility on your presentation computer.
First of all, this program installs inpout32.dll automatically and correctly, regardless of whether you use a 32- or 64-bit OS.
Moreover, it helps you monitor and manipulate the pins of your parallel port. If using the standard addresses (LPT1 through LPT3) doesn't work, select LPTX and enter your address manually (see here for how to find your parallel port address on a Windows PC). If the triggers don't show up on your recording computer using this program, your issue is not related to PsychoPy.
If this fails, (re-)install a parallel port driver. Under Windows 7 this should not be necessary, but it actually solved one major issue for me. If it still fails, the hardware components (parallel port plug/card, cables, sync box) are probably damaged.
If the triggers work with the LPT Test Utility program but not from PsychoPy, individual troubleshooting of your code is necessary. Of course, you need to insert the port address that worked with LPT Test Utility into your PsychoPy code:
from psychopy import core
from ctypes import windll

windll.inpout32.Out32(portaddress, triggerCode)  # send the trigger; triggerCode is an integer between 0 and 255
core.wait(0.05)  # wait 50 ms
windll.inpout32.Out32(portaddress, 0)  # reset the pins, i.e. clear the trigger
Best wishes,
Mario
I had been working on a Python and tkinter solution to the code golf here: https://codegolf.stackexchange.com/questions/26824/frogger-ish-game/
My response is the Python 2.7 one. The thing is, when I run this code on my 2008 Mac Pro, everything works fine. When I run it on Win7 (I have tried it on several different machines, with the same result), the main update loop runs far too slowly. You will notice that I designed my implementation around a 1 ms internal clock:
if not self.gameover:
    self.root.after(1, self.process_world)
Empirical testing reveals that it runs much, much slower than once every 1 ms. Is this a well-known Windows 7-specific behavior? I have not been able to find much information about after() calls lagging by this much. I understand that the callback is supposed to run "at least" after the given amount of time rather than "at most", but I am seeing 1000 update ticks every 20 seconds instead of every 1 second, and a factor of 20 seems excessive. The timer loop that displays the game clock works perfectly well. I thought the culprit might be my thread lock arrangement, but commenting that out makes no difference. This is my first time using tkinter, so I would appreciate any help and/or advice!
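For anyone who wants to reproduce the measurement, here is a stripped-down sketch that counts how long 1000 nominally 1 ms after() ticks actually take, with no game logic involved:

import time
import Tkinter as tk  # "tkinter" on Python 3

root = tk.Tk()
state = {"ticks": 0, "start": time.time()}

def tick():
    state["ticks"] += 1
    if state["ticks"] < 1000:
        root.after(1, tick)  # nominally fires again after 1 ms
    else:
        # 1000 ticks at 1 ms each should take about 1 second.
        print("1000 ticks in %.2f s" % (time.time() - state["start"]))
        root.destroy()

root.after(1, tick)
root.mainloop()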