ZMQ Context socket creation is MUCH slower on new computer - python

This was originally seen in Python, but has since been replicated in C++. Here is a unit test that distills the problem down and reproduces it on my new laptop. These are just local socket connections.
import time
import unittest

import zmq

class TestZmqPublisher(unittest.TestCase):
    def test_zmq_publisher_duration(self):
        max_duration = 1.0
        t0 = time.time()
        socket = zmq.Context.instance().socket(zmq.PUB)
        duration = time.time() - t0
        print(socket)
        self.assertLess(duration, max_duration, msg="socket() took too long.")
On other computers, and on my old laptop, this runs in a fraction of a second. However, on my new laptop (a beefy Dell Precision 7730) it takes about 44 seconds. I get similar results when creating a zmq.SUB (subscriber) socket.
If I step down into the socket() call, the two statements which consume all the time are as follows:
zmq/sugar/context.py

class Context:
    @classmethod
    def instance(cls, io_threads=1):
        ...
        cls._instance = cls(io_threads=io_threads)
        ...

    def socket(self, socket_type, **kwargs):
        ...
        s = self._socket_class(self, socket_type, **kwargs)
        ...
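To see which of the two calls dominates on the slow machine, I timed them separately in a standalone script. A minimal sketch, assuming only pyzmq is installed (this mirrors what Context.instance().socket() does):

import time
import zmq

t0 = time.time()
ctx = zmq.Context(io_threads=1)  # what Context.instance() constructs on first use
t1 = time.time()
sock = ctx.socket(zmq.PUB)       # the Socket construction itself
t2 = time.time()
print("context: %.3f s, socket: %.3f s" % (t1 - t0, t2 - t1))

sock.close()
ctx.term()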
I am baffled. Everything else on the laptop seems fine. Perhaps I pip-installed my dependent modules in some slightly different way? Could a previously installed zmq module (as opposed to pyzmq) be causing problems? Perhaps it is something in the laptop setup from our IT department? I have tried running as administrator, running from within PyCharm, running from the command line, and unplugging the network cable while running.
I am relatively new to Python and ZMQ, but we have been developing this project for months with no performance issues. In production code, we have a MessageBroker class that contains most of the pub/sub architecture. The unit test above was created by pulling the first significant line of code out of our MessageBroker.Publisher constructor (which creates the socket). Even though socket creation is SLOW on this computer, our application still comes up and runs properly after the sockets are created. It just takes 7 minutes to start.
I suspect Ed's Law of Debugging: "The more bizarre the behavior, the more stupid the mistake."

This was apparently a Windows 10 or laptop firmware issue. Some updates got pushed by the IT department and things worked normally the next day. Here are the items that were installed, per the Event Viewer:
Installed KB4456655: Servicing stack update for Windows 10, version 1803: September 11, 2018 (stability improvements)
Installed KB4462930: Update for Adobe Flash Player
Installed KB4100347: Intel microcode updates
Installed KB4485449: Servicing stack update for Windows 10 v1803 - Feb. 12
Installed KB4487017: (Same description as KB4485449)
Installed KB4487038: Security update for Adobe Flash Player

Related

Tkinter very slow to initialize on Python 3.10.10

I have 3 machines, all running Windows 10.
The first has PyPy 7.3.9 (Python 3.9.10).
The second has pure vanilla Python 3.9.1, downloaded from python.org and installed manually.
The third has Python 3.10.10, installed via winget.
I mostly develop on the 1st PC and everything was fine, until I went to the lab machines and noticed significant slowness. I reduced my code to the minimum:
import time
import tkinter as tk

class STM_GUI(tk.Tk):
    def __init__(self):
        super().__init__()

start = time.time()
gui = STM_GUI()
took = time.time() - start
print("%.2f" % took)
gui.mainloop()
On PC#1 the output is 0.06.
On PC#2 the output is 0.48 (8 times longer, but OK).
On PC#3 the output is 5.23 (5 seconds to do nothing?!).
Question: why, and how can it be mitigated?
After hours of digging, it turned out that the home directory of the lab machine user was mapped to a fairly remote network drive, which slowed down tkinter.
I'm not sure why it affected only tkinter, or what exactly it was looking for there, but once the network drive was unmapped, performance was back to normal.
I wanted to delete the post, but decided to keep it in case anyone hits a similar issue.
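If you want to test the same hypothesis on your own machine, one quick sketch is to point the home-related environment variables at a local directory before importing tkinter, then time Tk startup. (This assumes the slowdown comes from Tcl/Tk probing the user's home directory; the C:\Temp path is just a placeholder.)

import os
import time

os.environ["HOME"] = r"C:\Temp"         # hypothetical local path
os.environ["USERPROFILE"] = r"C:\Temp"  # ditto

start = time.time()
import tkinter as tk
root = tk.Tk()
print("Tk init took %.2f s" % (time.time() - start))
root.destroy()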

Program using schedule won't run when computer sleeps or wifi goes down - Python

I'm trying to make a program that checks the bitcoin price every 5 minutes and emails me when certain conditions are met, using the schedule module in the PyCharm IDE. But this slows down my computer, and it stops whenever I need to restart or update anything in Windows.
Here's what I've got so far:
import schedule
import time

def job():
    print("I'm working...")

schedule.every(5).minutes.do(job)

while True:
    schedule.run_pending()
    time.sleep(1)
Is there any way I can have this run on an external system that doesn't lose its connection to wifi and always runs? Are there other modules that run on interrupts / don't slow down the computer?
I think the best answer to your problem doesn't actually involve code.
What you might be looking for is something called a Raspberry Pi. It's a small credit-card-sized computer that doesn't cost very much; I think starting models run about 30-40 bucks.
If you can handle a Linux system, which I actually think is easier than Windows and similar to Mac, you can set up the Raspberry Pi as a computer whose sole purpose is to run your program.
It won't slow down your computer, because it'll be running on a different system. As for the wifi, there's no real way to get around that unless you have multiple service providers, or unless you set the program to also monitor its own connectivity, as sketched below. If you do this, there might be some way to at least send a notification via the local network to your computer or phone, depending on how comfortable you are with building local servers.
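Here is a minimal sketch of that connectivity check (the probe host, port, and timeout are just assumptions you can change):

import schedule
import socket
import time

def online(host="8.8.8.8", port=53, timeout=3):
    # True if a TCP connection to a well-known host succeeds.
    try:
        socket.create_connection((host, port), timeout=timeout).close()
        return True
    except OSError:
        return False

def job():
    if not online():
        print("wifi looks down, skipping this run")
        return
    print("I'm working...")

schedule.every(5).minutes.do(job)

while True:
    schedule.run_pending()
    time.sleep(1)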
Hope that helps!

Tensorflow Colab: Runtime disconnected The connection to the runtime has timed out

How come after 2 hours of running a model, I get a popup window saying:
Runtime disconnected
The connection to the runtime has timed out.
CLOSE RECONNECT
I had restarted my runtime and thought I had 12 hours to train a model. Any idea how to avoid this? My other question: is it possible to find out the time left before the runtime gets disconnected, using a TF or Python API?
The runtime gets disconnected when the notebook stays in "idle" mode for longer than 90 minutes. This is an unofficial number, as Google Colab has published nothing official about it. This is how Google Colab gets away with it, answering somewhat cheekily:
An extract from the official Colab FAQ:
Where is my code executed? What happens to my execution state if I close the browser window?
Code is executed in a virtual machine dedicated to your account. Virtual machines are recycled when idle for a while, and have a maximum lifetime enforced by the system.
So to avoid this, keep your browser open and don't let your system sleep for more than 90 minutes.
This also means that if you happen to close your browser, then reopen the notebook within 90 minutes, you will still have all your running processes and session variables intact!
Also, note that currently you can run a notebook for a maximum of 12 hours (in the "non-idle" state, of course).
To answer your second question: this "idle state" behavior is a Colab thing, so I don't think TF or Python has anything to do with it.
So it is good practice to save your model to a folder periodically. This way, in the unfortunate event of your runtime getting disconnected, your work will not be lost, and you can simply restart training from the latest saved model!
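Here is a minimal sketch of that periodic saving with Keras (the model, data, and checkpoint path are stand-ins just to make it runnable, not part of the question):

import os
import numpy as np
from tensorflow import keras

# Tiny stand-in model and data.
model = keras.Sequential([keras.Input(shape=(4,)), keras.layers.Dense(1)])
model.compile(optimizer="adam", loss="mse")
x, y = np.random.rand(64, 4), np.random.rand(64, 1)

# Save a checkpoint at the end of every epoch; after a disconnect,
# reload the latest file and continue training from there.
os.makedirs("checkpoints", exist_ok=True)
ckpt = keras.callbacks.ModelCheckpoint(
    filepath="checkpoints/model_{epoch:02d}.h5",  # hypothetical path
    save_freq="epoch",
)
model.fit(x, y, epochs=5, callbacks=[ckpt])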
PS: I got the number 90 minutes from an experiment done by a fellow user

PsychoPy sending triggers on 64bit OS

I have a problem sending triggers for EEG recording using PsychoPy standalone v1.81.00 on a Win7 64-bit OS. I followed the descriptions here and don't get any (more) errors. The triggers, however, don't show up on the recording computer (Brainvision Recorder under Win7 32-bit).
What I did:
Downloaded and installed the InpOutBinaries_1500 via InpOutBinaries_1500\Win32\InstallDriver.exe
Copied the other files (inpout32.dll, .h and .lib as well as vssver2.scc) to the working directory of my script
Tried sending trigger codes with windll.inpout32.Out32(0x378, triggerCode)
The trigger code doesn't show up in Brainvision Recorder, but it seems to be set correctly when I call print str(windll.inpout32.Inp32(0x378)).
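For reference, here is the write and read-back in isolation (a minimal sketch; 0x378 is the classic LPT1 base address, and note that the read-back only confirms the local port register, not that the recording PC received anything):

from ctypes import windll

PORT = 0x378  # classic LPT1 base address; verify yours in Device Manager

windll.inpout32.Out32(PORT, 42)     # write a test trigger code to the data pins
print(windll.inpout32.Inp32(PORT))  # reads back 42 from the local register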
Thanks for every piece of advice or idea!
I managed to solve the problem. I'm not entirely sure which step(s) actually did the trick, but I recommend the following:
Download and install LPT Test Utility on your presentation computer.
First of all, this program installs inpout32.dll automatically and correctly, regardless of whether you use a 32- or 64-bit OS.
Moreover, it helps you monitor and manipulate the pins of your parallel port. If using the standard addresses (LPT1 through LPT3) doesn't work, select LPTX and enter your address manually (see here for where to find your parallel port address on a Windows PC). If the triggers don't show up on your recording computer using this program, your issue is not related to PsychoPy.
If this fails, (re-)install a parallel port driver. On Windows 7 this should not be necessary, but it actually solved one major issue for me. If it still fails, the hardware components (parallel port plug / card, cable(s), sync box) are probably damaged.
If the triggers work with the "LPT Test Utility" program but not with PsychoPy, individual troubleshooting dependent on your code is necessary. Of course, you need to insert the port address that worked with "LPT Test Utility" into your PsychoPy code.
from psychopy import core
from ctypes import windll

windll.inpout32.Out32(portaddress, triggerCode)  # sends the trigger; triggerCode is an integer between 0 and 255
core.wait(0.05)                                  # wait 50 ms
windll.inpout32.Out32(portaddress, 0)            # resets the pins, i.e. clears the trigger
Best wishes,
Mario

Redis Crash Windows Server 2003 R2

I'm running Redis 2.0.2, 32-bit, from the cygwin compilation here: http://code.google.com/p/servicestack/wiki/RedisWindowsDownload
I am running it from the terminal. It works great for about 24 hours and then it crashes: no errors, it just closes. My config file has the defaults except for:
# save 900 1
# save 300 10
# save 60 10000
appendonly no
appendfsync no
I tried using a newer version of Redis, redis-2.2.5 win32 from here: https://github.com/dmajkic/redis/downloads
This one runs, but it throws an 'unpacking too many values' error when tasks are added onto it with Celery 2.2.6.
I haven't run it long enough to see if it experiences the same crash that 2.0.2 hits after roughly 24 hours.
Also, I have Redis run flushdb at 1am every day, but the crash can happen at any time of day, normally around 24 hours after the last crash.
Any thoughts?
Thanks!
Additions
Sorry, I forgot to mention that Twisted is polling data every 20 seconds and storing it in Redis, which translates to roughly 700 thousand records a day, or 4-5 GB of RAM used. There is no problem with Twisted; I just thought it might be relevant to the question.
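For context, the polling pattern looks roughly like this (a sketch assuming twisted and redis-py; the key name and payload are placeholders, not the real code):

import redis
from twisted.internet import reactor, task

r = redis.Redis(host="localhost", port=6379, db=0)

def poll():
    # placeholder payload; the real app stores the polled data here
    r.rpush("samples", "data")

loop = task.LoopingCall(poll)
loop.start(20.0)  # run poll() every 20 seconds
reactor.run()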
Follow-up question
Thanks, Dhaivat Pandya!
Are there key-value databases that are more supportive of the Windows environment?
Redis is not supposed to work on Windows, and the projects that try to make it work on Windows all have numerous bugs that make them unstable.
