gRPC becomes very slow all of a sudden - python

I use gRPC for Python RPC on the same machine. It has been working great till yesterday. Then, all of a sudden, it started being very slow. The helloworld example now takes about 78s to complete. I tested it on three computers on the same network, all Ubuntu 18.04, with the same results. At home, the same example runs almost instantaneously. I suspect some networking issue, maybe an automatic update on the gateway, but I'm at a loss on how to troubleshoot the problem. Any suggestions?
EDIT:
I still don't know what happened, but I found a workaround. Replacing localhost with 127.0.0.1 in the grpc.insecure_channel connection string makes gRPC responsive again.
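One likely culprit (an assumption, not confirmed in the thread) is name resolution: on many systems localhost resolves to the IPv6 address ::1 first, and if something on the path mishandles IPv6 loopback, each connection attempt waits out a timeout before falling back to IPv4. A quick stdlib check, using 50051 (the conventional gRPC example port) as a placeholder:

```python
import socket
import time

# Compare how "localhost" and "127.0.0.1" resolve. If "localhost" lists
# an IPv6 entry first and IPv6 loopback is broken or slow, a client may
# burn a timeout per call before falling back to IPv4.
for host in ("localhost", "127.0.0.1"):
    start = time.monotonic()
    addrs = socket.getaddrinfo(host, 50051, proto=socket.IPPROTO_TCP)
    elapsed = time.monotonic() - start
    families = ["IPv6" if fam == socket.AF_INET6 else "IPv4"
                for fam, *_ in addrs]
    print(f"{host}: {families} ({elapsed:.3f}s)")
```

If localhost shows an IPv6 entry first while 127.0.0.1 is instant, pinning the channel target to `127.0.0.1:50051` (as in the edit above) bypasses the problematic lookup entirely.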

Related

Pytest very slow depending on the Wifi connection

I have a very strange problem none of my other dev coworkers have within the same codebase.
My pytest suite takes about 2 minutes just collecting tests; once the first test runs, it goes through them all at normal speed.
For the other devs this 2-minute wait doesn't exist: pytest collects the tests and then runs them in about the same amount of time.
Even stranger: if I'm connected to my phone hotspot (or one other WiFi network at a certain office), the 2-minute wait is gone and pytest collects and runs tests at the same speed as everyone else. On any other WiFi network, or with no connection at all, it hangs.
None of our tests require an internet connection, so I'm at a loss as to what to try; I wasn't able to find similar posts online, so I thought I'd ask here.
I have updated pytest to the latest version and re-installed my whole dev environment, to no avail. I've also tried running pytest with different flags, but so far nothing changes or yields any information about the cause.
I'm on a 13" MacBook M1 with 8 GB RAM. (Other devs on M1 don't have this problem, by the way; only me.) Any ideas?
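One common cause of tooling hangs that vary with the network you're on (a hedged guess, not verified for this case) is a slow lookup of the machine's own hostname, which some startup paths perform; on macOS this can depend on the DNS servers the current WiFi hands out. This stdlib snippet times those lookups; several seconds here would implicate local name resolution rather than pytest itself:

```python
import socket
import time

# Time the hostname lookups that tooling commonly performs at startup.
# On a well-behaved network both should return in milliseconds; a
# multi-second getfqdn() points at DNS, not at the test suite.
for label, fn in [("gethostname", socket.gethostname),
                  ("getfqdn", socket.getfqdn)]:
    start = time.monotonic()
    result = fn()
    print(f"{label}: {result!r} in {time.monotonic() - start:.2f}s")
```

If getfqdn() is the slow one, adding the hostname to /etc/hosts is a common local workaround.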

How to start asyncio server on remote server with Python?

I have a virtual Linux server available with 8 cores, 32 GB RAM, and 1 TB of additional storage. It is meant to be a development environment (the same setup applies for test and prod); this is what I could get from IT. The server can only be accessed through so-called jump servers, via PuTTY or direct TCP/IP ports (SSH is a must).
The application I am working on starts several processes via multiprocessing. In every process an asyncio event loop is started, and in some cases an asyncio socket server. Basically it is a low-level data-streaming and processing application (unfortunately no Kafka or similar technology is available yet). The live application runs forever with no or limited user interaction (it reads/processes/writes data).
I assume IPython is an option for this, but (and maybe I am wrong) I think it starts new kernels per client request, whereas I need to start new processes from the main code without user interaction. If so, IPython could be an option for monitoring the application, gathering data from it, and sending new user commands to the main module, but I am not sure how to run processes and asyncio servers remotely.
I would like to understand how this can be done in the given environment. I do not know where to start or what alternatives there are, and I do not understand IPython properly; their documentation is not obvious to me yet.
Please help me out! Thank you in advance!
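The layout described in the question (worker processes started via multiprocessing, each owning its own event loop and asyncio socket server) can be sketched with the stdlib alone; the ports 9101/9102 and the upper-casing echo protocol here are made up for illustration:

```python
import asyncio
import multiprocessing

async def handle(reader, writer):
    # Toy protocol: echo one line back upper-cased, then close.
    data = await reader.readline()
    writer.write(data.upper())
    await writer.drain()
    writer.close()
    await writer.wait_closed()

def worker(port):
    # One OS process per worker; each runs its own event loop and
    # socket server, matching the layout described in the question.
    async def main():
        server = await asyncio.start_server(handle, "127.0.0.1", port)
        async with server:
            await server.serve_forever()
    asyncio.run(main())

async def ask(port, payload=b"ping\n"):
    # Client helper with a short connect-retry loop while a child boots.
    for _ in range(50):
        try:
            reader, writer = await asyncio.open_connection("127.0.0.1", port)
            break
        except OSError:
            await asyncio.sleep(0.1)
    writer.write(payload)
    await writer.drain()
    reply = await reader.readline()
    writer.close()
    await writer.wait_closed()
    return reply

if __name__ == "__main__":
    procs = [multiprocessing.Process(target=worker, args=(port,), daemon=True)
             for port in (9101, 9102)]
    for p in procs:
        p.start()
    print([asyncio.run(ask(port)) for port in (9101, 9102)])
```

With the processes daemonized, the servers die with the parent; a real deployment would supervise them (cron, systemd, or the scheduling options discussed below).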
After lots of research and learning, I arrived at a possible solution in our "sandbox" environment. First, I had to split the problem into several sub-problems:
"remote" development
parallelization
scheduling and executing parallel code
data sharing between these "engines"
controlling these "engines"
Let's see in details:
Remote development means you write your code on your laptop, but the code is executed on a remote server. The easy answer is Jupyter Notebook (or an equivalent solution). It has several trade-offs and other solutions exist, but this was the fastest to deploy and use and had the least dependency and maintenance overhead.
parallelization: I had several challenges with the IPython kernel when working with multiprocessing, so every piece of code that must run in parallel will be written in a separate Jupyter Notebook. Within a single notebook I can still use an event loop to get async behaviour.
executing parallel code: there are several options I will use:
ipyparallel - a "workaround" for multiprocessing
papermill - execute notebooks with parameters from the command line (optional)
the %%writefile magic command in Jupyter Notebook - to create importable modules
an OS task scheduler like cron
async with event loops
Not an option yet: Docker, multiprocessing, multithreading, cloud (AWS, Azure, Google, ...)
data sharing: I selected ZeroMQ; it took time to learn but was simpler and easier than writing everything on pure sockets. There are alternatives, but they come with extra dependencies along with some very useful benefits (will check them later): RabbitMQ, Redis as a message broker, etc. The reasons for preferring ZMQ: fast, simple, elegant, and just a library. (Known risk: our IT will prefer RabbitMQ, but that problem comes later :-) )
controlling the engines: now this answer is obvious: a separate Python module (it can be tested as notebook code but is easy to turn into a pure .py file and schedule). It can communicate with the other modules via ZMQ sockets: healthchecks, sending new parameters, commands, etc.
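The controller's traffic maps onto ZMQ's REQ/REP pattern. As a dependency-free illustration (names and the line-delimited JSON protocol are made up here), the same healthcheck/command exchange can be sketched with stdlib sockets and a thread standing in for one engine; in the real setup, zmq.REQ/zmq.REP sockets would replace the socket plumbing:

```python
import json
import socket
import threading

def engine(server_sock):
    # Engine side (the ZMQ REP role): answer line-delimited JSON commands.
    conn, _ = server_sock.accept()
    with conn:
        for line in conn.makefile("r"):
            cmd = json.loads(line)
            if cmd["op"] == "health":
                reply = {"status": "ok"}
            elif cmd["op"] == "set":
                reply = {"status": "ok", "param": cmd["value"]}
            else:
                reply = {"status": "unknown"}
            conn.sendall((json.dumps(reply) + "\n").encode())

# Controller side (the ZMQ REQ role): one request socket per engine.
srv = socket.create_server(("127.0.0.1", 0))  # port 0: OS picks a free port
threading.Thread(target=engine, args=(srv,), daemon=True).start()

ctl = socket.create_connection(srv.getsockname())
reader = ctl.makefile("r")
replies = []
for cmd in ({"op": "health"}, {"op": "set", "value": 42}):
    ctl.sendall((json.dumps(cmd) + "\n").encode())
    replies.append(json.loads(reader.readline()))
print(replies)
```

The strict send/receive alternation shown here is exactly what ZMQ's REQ/REP pair enforces for you, which is one reason it is less error-prone than raw sockets.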

Windows 10 not allowing Python server.py to connect with client.py

I'm working through a tutorial in Python 3.8 that involves sockets and networking. There is a server.py file and a client.py file. I took example code straight out of the Python docs for sockets to see if that would work, and it does not. The server starts, creates a socket, and listens for the connection, but I get WinError 10061, the one where the target machine refuses the connection. My OS is Windows 10 and I'm using IDLE. I've looked at my firewall and added a rule to allow pythonw.exe through, but that has not helped. Does anybody have any fixes for me to try? I can't really proceed until I can get the client and server connected.
I think I know what I’ve been doing wrong. I have been running both server and client files in the same console. I think I have to open two consoles and run one file in each so they can communicate.
(Doh!)
I'm at work so I can't test it right now. Just in case anyone else has been befuddled by this.
Yes, I did not realize that each file had to run in its own instance of IDLE, but that makes perfect sense now. A socket won’t connect to itself!
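A single-script version of the docs' echo example makes the point concrete: the server must already be listening in its own execution context before the client connects. Here a background thread stands in for the second console:

```python
import socket
import threading

# server.py side (the docs' echo example), run on a thread here so one
# script can demonstrate both ends; in the tutorial these live in two
# separate consoles/processes.
srv = socket.create_server(("127.0.0.1", 0))  # port 0: OS picks a free port
port = srv.getsockname()[1]

def serve():
    conn, _ = srv.accept()
    with conn:
        while data := conn.recv(1024):
            conn.sendall(data)  # echo everything back

threading.Thread(target=serve, daemon=True).start()

# client.py side: connects to the already-listening server. Connecting
# before anything is listening is what produces WinError 10061.
with socket.create_connection(("127.0.0.1", port)) as c:
    c.sendall(b"Hello, world")
    reply = c.recv(1024)
print(reply)  # → b'Hello, world'
```

Port 0 lets the OS pick a free port, which avoids the "address already in use" errors that fixed ports cause when re-running the example quickly.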

flask socket-io, sometimes client calls freeze the server

I occasionally have a problem with flask socket-io freezing, and I have no clue how to fix it.
My client connects to my Socket.IO server and runs some chat sessions. It works nicely. But for some reason, sometimes a call from the client side blocks the whole server (the server gets stuck in the call, and all other calls are frozen). What is strange is that the server can stay blocked as long as the client-side app is not completely shut down. This is an iOS app / web page, and I must completely close the app or the Safari page. Closing the socket itself, and even deallocating it, doesn't resolve the problem. When the app is in the background, the sockets are closed and deallocated, but the problem persists.
This is a small server that serves both the HTML pages and the socket server, so I have no idea whether it is the socket itself or the HTML that blocks the process. But each time the server froze, the log showed some socket calls.
Here is how I configured my server:
socketio = SocketIO(app, ping_timeout=5)
socketio.run(app, host='0.0.0.0', port=5001, debug=True, ssl_context=context)
So my question is:
What can freeze the server? (This seems to happen when I leave the app or website open for a long time while doing nothing; if I use the services normally, the server never freezes.) And how can I prevent it from happening? Even if I don't know what's causing it, is there a way to blindly stop my server from getting stuck on a call?
Thank you for the answers.
According to your comment above, you are using the Flask development web server, without the help of an asynchronous framework such as eventlet or gevent. Besides this option being highly inefficient, you should know that this web server is not battle tested; it is meant for short-lived tests during development. I'm not sure it is able to run for very long, especially under the unusual conditions Flask-SocketIO puts it through, which regular Flask apps do not exercise. I think it is quite possible that you are hitting some obscure bug in Werkzeug that causes it to hang.
My recommendation is that you install eventlet and try again. All you need to do is pip install eventlet, and assuming you are not passing an explicit async_mode argument, then just by installing this package Flask-SocketIO should configure itself to use it.
I would also remove the explicit timeout setting. In almost all cases, the defaults are sufficient to maintain a healthy connection.
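A configuration sketch of that recommendation (assuming eventlet and Flask-SocketIO are installed; eventlet.monkey_patch() may also be needed if the app makes blocking stdlib calls):

```python
# Sketch, assuming `pip install eventlet` has been run. With eventlet
# importable and no explicit async_mode argument, Flask-SocketIO
# configures itself to use it automatically.
from flask import Flask
from flask_socketio import SocketIO

app = Flask(__name__)
socketio = SocketIO(app)  # no ping_timeout override: keep the defaults

if __name__ == "__main__":
    # eventlet's server replaces the Werkzeug development server here.
    socketio.run(app, host="0.0.0.0", port=5001)
```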

Pyramid server on vmware responds very slowly

First of all, I'm new to Python and the Pyramid framework.
I have:
Win7 on my host
Debian 6 on my vmware guest
Python 2.6 on Debian machine
Pyramid 1.3 on Debian machine
I created a virtual environment using 'virtualenvwrapper' and now I'm running the 'Hello world' example from here: http://docs.pylonsproject.org/projects/pyramid/en/1.3-branch/narr/firstapp.html#firstapp-chapter
The problem is that when I request http://localhost:8080/hello/world on the Debian machine, everything works fine. But when I request http://192.168.25.129:8080/hello/world from my Win7 host, it takes 5-7 seconds to get a response from the server (192.168.25.129 is the VMware IP address, connected via NAT). I cannot find the reason why it takes so much time.
I also installed apache2 on the Debian machine to test request speed and found that an Apache response takes 1 second at most. So is it a problem with Python or Pyramid?
How can I reduce the response time of the Pyramid server?
PS: sorry for my bad English :)
As far as I know, Pyramid itself provides only a debugging web server, and it really is very slow. For production you can use the 'waitress' web server; it is much faster.
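A minimal sketch of that switch, assuming waitress is installed (pip install waitress), applied to the Hello World app from the linked tutorial:

```python
# The tutorial's firstapp example, served with waitress instead of
# wsgiref.simple_server. Requires: pip install pyramid waitress
from pyramid.config import Configurator
from pyramid.response import Response
from waitress import serve

def hello_world(request):
    return Response('Hello %(name)s!' % request.matchdict)

config = Configurator()
config.add_route('hello', '/hello/{name}')
config.add_view(hello_world, route_name='hello')
app = config.make_wsgi_app()

# Blocks and serves forever on port 8080.
serve(app, host='0.0.0.0', port=8080)
```

Note that this only addresses server throughput; if the slowness appears only from the host machine, the VM networking diagnosis in the next answer is the more likely fix.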
This problem probably has very little to do with python or pyramid and much more to do with the configuration of your virtual machine. If you really want to see what pyramid is doing you can turn on the performance profiler in the debug toolbar and find out where in the request things are taking a long time. If there is nothing slow in the pyramid side of the request, then you know it's before/after and you can look at the system setup, wsgi server and middleware.
