Python parallel processing from client-server application

I have a web application (Django) that stores, in a MySQL database, the PID numbers of processes on a remote Linux machine. I designed a simple client-server application that talks to the remote server and gets me some data about a given PID (CPU%, mem%); the data is sampled over a 5 s interval.
But there is a performance problem: I have 200 PIDs to check, each of them takes ~5 seconds, and they are processed in a for loop. So I'm waiting at least 200*5 seconds.
Can somebody advise me how to process them in parallel, so my application can fetch, for example, 50 PIDs at a time? I believe a Python client-server library can handle multiple requests coming to the server.
I want to achieve something like:
for pid in my_200_pid_list:
    # Some parallel magic to not wait and pass another 49...
    result[pid] = askforprocess(pid)
My client code:
import socket

def askforprocess(processpid):
    # Create a TCP/IP socket
    sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    # Connect to the host and port
    server_address = ('172.16.1.105', 5055)
    sock.connect(server_address)
    # Send the PID and read back the reply
    try:
        message = processpid
        sock.sendall(message)
        data = sock.recv(2048)
    finally:
        sock.close()
    return data

In general, it's best to do stuff like this in a single thread when possible. You just have to make sure your functions don't block each other. The builtin lib that comes to mind is select. Unfortunately, it's a bit difficult to explain and I haven't used it in quite some time. Hopefully this link will help you understand it: http://pymotw.com/2/select/.
You can also use the multiprocessing lib and poll each PID in a separate process. This can be very difficult to manage if you plan to scale out further! Use threads only as a last resort (this is my usual rule of thumb when it comes to threads). https://docs.python.org/2/library/multiprocessing.html#module-multiprocessing
import socket
from multiprocessing import Process

def askforprocess(processpid):
    # Create a TCP/IP socket
    sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    # Connect to the host and port
    server_address = ('172.16.1.105', 5055)
    sock.connect(server_address)
    # Send the PID and read back the reply
    try:
        message = processpid
        sock.sendall(message)
        data = sock.recv(2048)
    finally:
        sock.close()
    return data

if __name__ == '__main__':
    # processpid would come from your PID list
    p = Process(target=askforprocess, args=(processpid,))
    p.start()
Lastly, there's the Twisted library, which is probably the most difficult to understand, but it definitely makes concurrent (not necessarily parallel) functions easy to write. The only bad thing is that you'd probably have to rewrite your entire app in order to use Twisted. Don't be put off by this fact; try to use it if you can.
Hope that helps.

Use threads to process your requests in parallel: https://docs.python.org/2/library/threading.html
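For instance, here is a minimal sketch (mine, not part of either answer, assuming the askforprocess() function from the question) that fans the PIDs out over a pool of 50 worker threads with concurrent.futures; since each call just waits on a socket, threads work well here despite the GIL:

from concurrent.futures import ThreadPoolExecutor

def fetch_all(pid_list, max_workers=50):
    # Query up to max_workers PIDs at a time; returns {pid: data}
    results = {}
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        # pool.map() preserves input order, so zip() pairs each pid
        # with its own reply
        for pid, data in zip(pid_list, pool.map(askforprocess, pid_list)):
            results[pid] = data
    return results

With 200 PIDs and 50 workers this takes roughly 4 batches of ~5 s instead of 200 sequential 5 s round trips.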

Related

Python AIOHTTP.web server multiprocessing load-balancer?

I am currently developing a web app using the aiohttp module. I'm using:
aiohttp.web, asyncio, uvloop, aiohttp_session, aiohttp_security, aiomysql, and aioredis
I have run some benchmarks against it and while they're pretty good, I can't help but want more. I know that Python is, by nature, single-threaded. aiohttp uses async I/O so as to be non-blocking, but am I correct in assuming that it is not utilizing all CPU cores?
My idea: Run multiple instances of my aiohttp.web code via concurrent.futures in multiprocessing mode. Each process would serve the site on a different port. I would then put a load balancer in front of them. MySQL and Redis can be used to share state where necessary such as for sessions.
Question: Given a server with several CPU cores, will this result in the desired performance increase? If so, is there any specific pattern to pursue in order to avert problems? I can't think of anything that these aio modules are doing that would require that there only be a single thread though I could be wrong.
Note: This is not a subjective question as I've posed it. Either the module is currently bound to one thread/process or it isn't; either it can benefit from a multiprocessing module + load balancer or it can't.
You're right: asyncio uses only one CPU (one event loop uses one thread and thus one CPU).
Whether your whole project is network-bound or CPU-bound is something I can't say; you have to try.
You could use nginx or haproxy as a load balancer.
You might even try to use no load balancer at all. I never tried this feature for load balancing, just as a proof of concept for a fail-over system.
With newer kernels, multiple processes can listen on the same port (when using the SO_REUSEPORT option), and I guess it's the kernel that does the round robin.
Here is a small link to an article comparing the performance of a typical nginx configuration vs an nginx setup with the SO_REUSEPORT feature:
https://blog.cloudflare.com/the-sad-state-of-linux-socket-balancing/
It seems SO_REUSEPORT distributes the CPU load rather evenly, but might increase the variation of response times. Not sure this is relevant in your setup, but I thought I'd let you know.
Added 2020-02-04:
My solution added 2019-12-09 works, but triggers a deprecation warning.
When I have more time to test it myself, I will post the improved solution here. For the time being you can find it at AIOHTTP - Application.make_handler(...) is deprecated - Adding Multiprocessing
Added 2019-12-09:
Here is a small example of an HTTP server that can be started multiple times, listening on the same socket.
The kernel distributes the tasks. I never checked whether this is efficient or not, though.
reuseport.py:
import asyncio
import os
import socket
import time

from aiohttp import web

def mk_socket(host="127.0.0.1", port=8000, reuseport=False):
    sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    if reuseport:
        SO_REUSEPORT = 15
        sock.setsockopt(socket.SOL_SOCKET, SO_REUSEPORT, 1)
    sock.bind((host, port))
    return sock

async def handle(request):
    name = request.match_info.get('name', "Anonymous")
    pid = os.getpid()
    text = "{:.2f}: Hello {}! Process {} is treating you\n".format(
        time.time(), name, pid)
    time.sleep(0.5)  # intentionally blocking sleep to simulate CPU load
    return web.Response(text=text)

if __name__ == '__main__':
    host = "127.0.0.1"
    port = 8000
    reuseport = True
    app = web.Application()
    sock = mk_socket(host, port, reuseport=reuseport)
    app.add_routes([web.get('/', handle),
                    web.get('/{name}', handle)])

    loop = asyncio.get_event_loop()
    coro = loop.create_server(
        protocol_factory=app.make_handler(),
        sock=sock,
    )
    srv = loop.run_until_complete(coro)
    loop.run_forever()
And one way to test it:
./reuseport.py & ./reuseport.py &
sleep 2 # sleep a little so servers are up
for n in 1 2 3 4 5 6 7 8 ; do wget -q http://localhost:8000/$n -O - & done
The output might look like:
1575887410.91: Hello 1! Process 12635 is treating you
1575887410.91: Hello 2! Process 12633 is treating you
1575887411.42: Hello 5! Process 12633 is treating you
1575887410.92: Hello 7! Process 12634 is treating you
1575887411.42: Hello 6! Process 12634 is treating you
1575887411.92: Hello 4! Process 12634 is treating you
1575887412.42: Hello 3! Process 12634 is treating you
1575887412.92: Hello 8! Process 12634 is treating you
I think it's better not to reinvent the wheel and to use one of the solutions proposed in the documentation:
https://docs.aiohttp.org/en/stable/deployment.html#nginx-supervisord
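If you do want to try the multi-process idea from the question without SO_REUSEPORT, here is a minimal sketch of my own (the port numbers and the assumption that nginx or haproxy sits in front of them are mine, not from the answers): one aiohttp app per process, each bound to its own port.

import multiprocessing
from aiohttp import web

def make_app():
    async def handle(request):
        return web.Response(text="hello\n")
    app = web.Application()
    app.add_routes([web.get('/', handle)])
    return app

def serve(port):
    # web.run_app() starts its own event loop inside this process
    web.run_app(make_app(), host='127.0.0.1', port=port)

if __name__ == '__main__':
    # one worker per port; point the load balancer at 8001-8004
    for port in (8001, 8002, 8003, 8004):
        multiprocessing.Process(target=serve, args=(port,)).start()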

How to fork and exec a server and wait until it's ready?

Suppose I've got a simple Tornado web server, which starts like this:
app = ... # create an Application
srv = tornado.httpserver.HTTPServer(app)
srv.bind(port)
srv.start()
tornado.ioloop.IOLoop.instance().start()
I am writing an "end-to-end" test, which starts the server in a separate process with subprocess.Popen and then calls the server over HTTP. Now I need to make sure the server did not fail to start (e.g. because the port is busy) and then wait till server is ready.
I wrote a function to wait until the server gets ready :
import time

import requests

def wait_till_ready(port, n=10, time_out=0.5):
    for i in range(n):
        try:
            requests.get("http://localhost:" + str(port))
            return
        except requests.exceptions.ConnectionError:
            time.sleep(time_out)
    raise Exception("failed to connect to the server")
Is there a better way?
How can the parent process, which forks and execs the server, make sure that the server didn't fail (because the server port is busy, for example)? I can change the server code if I need to.
You could approach it in two ways (a sketch of the first follows below):
1. Make a pipe / queue before you fork. Then, just before you start the IO loop, notify the parent that everything went fine and you're ready for requests.
2. Open the port and bind to it before forking. You should make sure you close that socket on the parent side, but otherwise the only thing which needs to run in the child is the IO loop; you can handle all the other errors before the fork.
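Here is a minimal sketch of option 1 (mine, not the answerer's; Unix-only, since it uses os.fork): the child writes one byte down a pipe just before it would start the IO loop, and the parent blocks until that byte arrives, with EOF meaning the child died first.

import os
import sys

def start_server_with_ready_signal():
    r, w = os.pipe()
    pid = os.fork()
    if pid == 0:                 # child
        os.close(r)
        # ... bind the port here; on failure, exit non-zero
        # before writing anything ...
        os.write(w, b"x")        # tell the parent we are ready
        os.close(w)
        # ... start the IO loop here ...
        sys.exit(0)
    os.close(w)                  # parent: close the unused write end
    ready = os.read(r, 1)        # blocks; b"" (EOF) means the child died
    os.close(r)
    return pid, bool(ready)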

In this Python 3 client-server example, client can't send more than one message

This is a simple client-server example where the server returns whatever the client sends, but reversed.
Server:
import socketserver

class MyTCPHandler(socketserver.BaseRequestHandler):
    def handle(self):
        self.data = self.request.recv(1024)
        print('RECEIVED: ' + str(self.data))
        self.request.sendall(str(self.data)[::-1].encode('utf-8'))

server = socketserver.TCPServer(('localhost', 9999), MyTCPHandler)
server.serve_forever()
Client:
import socket
import threading

s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.connect(('localhost', 9999))

def readData():
    while True:
        data = s.recv(1024)
        if data:
            print('Received: ' + data.decode('utf-8'))

t1 = threading.Thread(target=readData)
t1.start()

def sendData():
    while True:
        intxt = input()
        s.send(intxt.encode('utf-8'))

t2 = threading.Thread(target=sendData)
t2.start()
I took the server from an example I found on Google, but I wrote the client from scratch. The idea was to have a client that can keep sending and receiving data from the server indefinitely.
Sending the first message with the client works, but when I try to send a second message, I get this error:
ConnectionAbortedError: [WinError 10053] An established connection was aborted by the software in your host machine
What am I doing wrong?
For TCPServer, the handle method of the handler gets called once to handle the entire session. This may not be entirely clear from the documentation, but socketserver is, like many libraries in the stdlib, meant to serve as clear sample code as well as to be used directly, which is why the docs link to the source. There you can clearly see that handle is only going to be called once per connection (TCPServer.get_request is defined as just calling accept on the socket).
So your server receives one buffer, sends back a response, and then quits, closing the connection.
To fix this, you need to use a loop:
def handle(self):
    while True:
        self.data = self.request.recv(1024)
        if not self.data:
            print('DISCONNECTED')
            break
        print('RECEIVED: ' + str(self.data))
        self.request.sendall(str(self.data)[::-1].encode('utf-8'))
A few side notes:
First, using BaseRequestHandler on its own only allows you to handle one client connection at a time. As the introduction in the docs says:
These four classes process requests synchronously; each request must be completed before the next request can be started. This isn’t suitable if each request takes a long time to complete, because it requires a lot of computation, or because it returns a lot of data which the client is slow to process. The solution is to create a separate process or thread to handle each request; the ForkingMixIn and ThreadingMixIn mix-in classes can be used to support asynchronous behaviour.
Those mixin classes are described further in the rest of the introduction, and farther down the page, and at the bottom, with a nice example at the end. The docs don't make it clear, but if you need to do any CPU-intensive work in your handler, you want ForkingMixIn; if you need to share data between handlers, you want ThreadingMixIn; otherwise it doesn't matter much which you choose.
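For example, here is a hedged one-liner (mine, not from the answer, reusing the MyTCPHandler from the question) that upgrades the server to one thread per connection:

import socketserver

# Mixing ThreadingMixIn in ahead of TCPServer makes serve_forever()
# spawn a new thread for every accepted connection.
class ThreadedTCPServer(socketserver.ThreadingMixIn, socketserver.TCPServer):
    pass

server = ThreadedTCPServer(('localhost', 9999), MyTCPHandler)
server.serve_forever()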
Note that if you're trying to handle a large number of simultaneous clients (more than a couple dozen), neither forking nor threading is really appropriate—which means TCPServer isn't really appropriate. For that case, you probably want asyncio, or a third-party library (Twisted, gevent, etc.).
Calling str(self.data) is a bad idea. You're just going to get the source-code-compatible representation of the byte string, like b'spam\n'. What you want is to decode the byte string into the equivalent Unicode string: self.data.decode('utf8').
There's no guarantee that each sendall on one side will match up with a single recv on the other side. TCP is a stream of bytes, not a stream of messages; it's perfectly possible to get half a message in one recv, and two and a half messages in the next one. When testing with a single connection on localhost with the system under light load, it will probably appear to "work", but as soon as you try to deploy any code that assumes that each recv gets exactly one message, your code will break. See Sockets are byte streams, not message streams for more details. Note that if your messages are just lines of text (as they are in your example), using StreamRequestHandler and its rfile attribute, instead of BaseRequestHandler and its request attribute, solves this problem trivially.
You probably want to set server.allow_reuse_address = True. Otherwise, if you quit the server and re-launch it again too quickly, it'll fail with an error like OSError: [Errno 48] Address already in use.
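Putting the last two notes together, here is a minimal sketch (mine, assuming a newline-terminated protocol) that uses StreamRequestHandler and allow_reuse_address:

import socketserver

class MyTCPHandler(socketserver.StreamRequestHandler):
    def handle(self):
        for line in self.rfile:                  # exactly one message per line
            text = line.decode('utf-8').rstrip('\n')
            self.wfile.write(text[::-1].encode('utf-8') + b'\n')

if __name__ == '__main__':
    socketserver.TCPServer.allow_reuse_address = True  # must be set before bind
    with socketserver.TCPServer(('localhost', 9999), MyTCPHandler) as server:
        server.serve_forever()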

python socket server/client protocol with unstable client connection

I have a threaded Python socket server that opens a new thread for each connection.
The thread is a very simple communication based on question and answer.
Basically, the client sends an initial data transmission, the server takes it, runs an external app that does stuff to the transmission, and returns a reply that the server sends back; then the loop begins again, until the client disconnects.
Now, because the client will be on a mobile phone, and thus on an unstable connection, I am left with open threads that are no longer connected, and because the loop starts with recv it is rather difficult to break on lost connectivity this way.
I was thinking of adding a send before the recv to test if the connection is still alive, but this might not help at all if the client disconnects after my failsafe send, as the client sends a data stream only every 5 seconds.
I noticed the recv will break sometimes, but not always, and in those cases I am left with zombie threads using resources.
Also, this could be a solid vulnerability for my system to be DoSed.
I have looked through the Python manual and Googled since Thursday trying to find something for this, but most things I find are related to the client side and non-blocking mode.
Can anyone point me in the right direction towards a good way of fixing this issue?
Code samples:
Listener:
serversocket = socket(AF_INET, SOCK_STREAM)
serversocket.setsockopt(SOL_SOCKET, SO_REUSEADDR, 1)
serversocket.bind(addr)
serversocket.listen(2)
logg("Binded to port: " + str(port))

# Listening Loop
while 1:
    clientsocket, clientaddr = serversocket.accept()
    threading.Thread(target=handler, args=(clientsocket, clientaddr, port,)).start()

# This is useless as it will never get here
serversocket.close()
Handler:
# Socket connection handler (Threaded)
def handler(clientsocket, clientaddr, port):
    clientsocket.settimeout(15)
    # Loop till client closes connection or connection drops
    while 1:
        stream = ''
        while 1:
            ending = stream[-6:]  # get stream ending
            if ending == '.$$$$.':
                break
            try:
                data = clientsocket.recv(1)
            except:
                sys.exit()
            if not data:
                # this is the usual point where the thread is closed
                # when a client closes the connection normally
                sys.exit()
            stream += data
        # Clear the line ending
        stream = base64.b64encode(stream[:-6])
        # Send data to be processed
        re = getreply(stream)
        # Send response to client
        try:
            clientsocket.send(re + str('.$$$$.'))
        except:
            sys.exit()
As you can see, there are three conditions of which at least one should trigger an exit if the connection fails, but sometimes they do not.
Sorry, but I think the threaded idea is not good in this case. Since you don't need to do much processing in these threads (workers?), and most of the time they are just waiting on the socket (a blocking operation), I would advise reading about event-driven programming. For sockets this pattern is extremely useful, because you can do all the work in one thread. You communicate with one socket at a time, while the rest of the connections are just waiting for data, so there is almost no loss. When you send several bytes, you just check whether another connection requires attention. You can read about select and epoll.
In Python there are several libraries that let you play with this nicely:
libev (a C library wrapper) - pyev
tornado
twisted
I used tornado in some projects and it does this task very well. libev is nice also, but it is a C wrapper, so it is a little bit low-level (but very nice for some tasks).
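To make the event-driven suggestion concrete, here is a small sketch of my own (not the answerer's code) using the stdlib selectors module from Python 3, which wraps select/epoll: one thread, many sockets, and no per-connection threads left dangling when a phone drops off the network.

import selectors
import socket

sel = selectors.DefaultSelector()       # epoll on Linux, kqueue on BSD/macOS

def accept(server):
    conn, addr = server.accept()
    conn.setblocking(False)
    sel.register(conn, selectors.EVENT_READ, read)

def read(conn):
    data = conn.recv(1024)
    if data:
        conn.sendall(data)              # echo; replace with the real protocol
    else:                               # empty read means the peer disconnected
        sel.unregister(conn)
        conn.close()

server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
server.bind(('0.0.0.0', 5055))
server.listen()
server.setblocking(False)
sel.register(server, selectors.EVENT_READ, accept)

while True:
    for key, _ in sel.select():
        key.data(key.fileobj)           # dispatch to accept() or read()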
So you should use socket.settimeout(float) with the clientsocket, like one of the comments suggested.
The reason you don't see any difference is that when you call socket.recv(bufsize[, flags]) and the timeout runs out, a socket.timeout exception is thrown, and you catch that exception and exit.
try:
    data = clientsocket.recv(1)
except:
    sys.exit()
should be something like:
try:
    data = clientsocket.recv(1)
except timeout:
    # timeout occurred
    # handle it
    clientsocket.close()
    sys.exit()

Python networking client, trying to respond to server responses

The application is a wxPython client/server setup that has multiple clients connect to the server, engaging in a duplex networking protocol.
I've had Twisted hooked up with AMP in the past, but in the end it did not fully cut it for the architecture of the application without overly complicating things.
So for the server I have SocketServer with the ThreadingMixIn set up. At the moment I am working on the buffer/command queue for the server, but that's not the issue.
On the client side I can do all the normal sending of data, triggered by events in the UI, without too many problems. I am currently stuck trying to get the client to listen for responses without blocking the entire application. So I want to put this in a thread, but should it start at the part that's now commented out, or should it be handled completely differently and I am just not seeing it?
In short: I want the client to send commands to the server and listen for any responses without blocking/stalling the entire application.
The code below is prototyping code; please excuse typical mistakes such as magic values and other hardcoded data, it will be different in the final code.
import socket
import threading
import time

class CommandProxy(object):
    def __init__(self, host, port):
        self.host = host
        self.port = port
        self.connection = None  # so close() works even if connect() never succeeded

    def close(self):
        if self.connection:
            self.connection.close()

    def connect(self):
        try:
            self.connection = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
            self.connection.connect((self.host, self.port))
        except socket.error as e:
            print "Socket error: {0}".format(e)

    def send_command(self, command, *kw):
        datalist = ' '.join(kw)
        data = command + ' ' + datalist + '\x00'
        print 'DATA: {0}'.format(data)
        self._write(data)
        # while True:
        #     data = self._read()
        #     if data == 0:
        #         break
        #
        #     print "DATA RECEIVED: {0}".format(data)

    def _read(self):
        data = self.connection.recv(1024)
        return data

    def _write(self, bytes):
        self.connection.sendall(bytes)

if __name__ == '__main__':
    HOST, PORT = 'localhost', 1060

    proxy = CommandProxy(HOST, PORT)
    proxy.connect()

    try:
        while True:
            proxy.send_command('ID', '1')
            time.sleep(2)
    except KeyboardInterrupt:
        print "Interrupted by user"
    except socket.error as e:
        print "Socket error: {0}".format(e)
    except Exception as e:
        print "something went wrong: {0}".format(e)
    finally:
        proxy.close()
I think you're mistaken about whether a single-threaded or multi-threaded approach will complicate your application more or less. The problem you're wrestling with now is one of the many that (for example) Twisted solves for you out of the box.
The most common complaint people have about Twisted is that it makes them structure their code strangely, in a way they're not used to. However, when you're using a GUI library like wxPython, you have already accepted this constraint. Twisted's event-driven architecture is exactly like the event-driven architecture of all the popular GUI toolkits. As long as you keep using wxPython, also using Twisted isn't going to force you to do anything else you don't want to do.
On the other hand, switching to threads will mean you need to be very careful about access to shared data structures, you won't be able to unit test effectively, and many problems that arise will only do so when someone else is running your application - because they have a different number of cores than you, or their network has different latency characteristics, or any of a number of other things which cause your threaded code to run in ways you never experienced. With extreme care you can still write something that works, but it will be much more difficult.
Since you haven't posted any of your Twisted-based code here, I can't really give any specific advice on how to keep things as simple as possible. However, I recommend that you take another look at a non-threaded solution. Join the twisted-python@twistedmatrix.com mailing list, hop on #twisted on freenode, or post more Stack Overflow questions about it. Lots of people will be eager to help. :)
IMO you're right to use a thread. Start a thread for every request, and when it's done and has data, generate a wx event (see http://wiki.wxpython.org/CustomEventClasses).
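As a rough illustration (my own sketch, using wx.CallAfter instead of a custom event class, and reusing the question's CommandProxy with its private _read helper for brevity), the worker thread does the blocking read and then posts the reply back to the GUI thread:

import threading

import wx

def fire_command(proxy, command, on_reply):
    # on_reply(data) runs on the wx event loop once the server answers
    def worker():
        proxy.send_command(command)
        data = proxy._read()          # blocking recv happens off the GUI thread
        wx.CallAfter(on_reply, data)  # marshal the result back to the GUI thread
    t = threading.Thread(target=worker)
    t.daemon = True                   # don't keep the app alive on exit
    t.start()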
