How to run duplex communication using python sockets - python

I have 3 Raspberry Pis, all on the same LAN, doing stuff that is monitored by Python, and I want them to talk to each other and to my PC. Sockets seem like the way to go, but the examples are so simplistic. Here's the issue I am stuck on: the listen and receive calls are all blocking, unless you set a timeout, in which case they still block, just for less time.
So, if I set up a round-robin, then each Pi will only be listened to (or received on) for 1/3 of the time, or less if there is stuff to transmit as well.
What I'd like to understand better is what happens to the data (or connection requests) when I am not listening/receiving - is it buffered by the OS, or lost? What happens to the socket when no method is being called on it - is it happy to be ignored for a while, or will the socket itself be dumped by the OS?
I am starting to split these into separate processes now, which is getting messy and seems inefficient, but I can't think of another way except to run this as 3 (currently), maybe 6 (transmit/receive), or even 9 (listen/transmit/receive) separate processes.
Sorry I don't have a code example, but it is already way too big, and it doesn't work. Plus, a lot of the issue seems to me to be in the murky part of the sockets - the part between the socket and the OS. I feel I need to understand this better to get to the right architecture for my bit of code before I really start debugging the various exceptions and communication failures...

You can handle multiple sockets in a single process using I/O multiplexing. This is usually done using calls such as epoll(), poll() or select(). These calls monitor multiple sockets and return when one or more sockets have data available for reading, or are ready to be written to. In many cases this is more convenient than using multiple processes and/or threads.
These calls are pretty low-level OS calls. Python also has higher-level functionality that might be easier to use, but I haven't tried it myself.
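For example, Python's selectors module is a higher-level wrapper around select/poll/epoll that lets one loop service a listening socket and any number of peers. The sketch below is illustrative, not from the question: the echo handler, the port choice, and the self-test client thread (standing in for one of the Pis) are all assumptions, and a real server would loop forever rather than stopping when the client exits.

```python
import selectors
import socket
import threading

sel = selectors.DefaultSelector()   # picks epoll/poll/select for the platform

def accept(listener):
    conn, _addr = listener.accept()
    conn.setblocking(False)
    sel.register(conn, selectors.EVENT_READ, read)

def read(conn):
    data = conn.recv(1024)          # won't block: select said it was ready
    if data:
        conn.sendall(data)          # echo the bytes back
    else:
        sel.unregister(conn)        # empty read means the peer closed
        conn.close()

listener = socket.socket()
listener.bind(("127.0.0.1", 0))     # 0 = let the OS pick a free port
listener.listen()
listener.setblocking(False)
sel.register(listener, selectors.EVENT_READ, accept)
port = listener.getsockname()[1]

result = {}

def client():                       # stands in for one of the Pis
    with socket.create_connection(("127.0.0.1", port)) as c:
        c.sendall(b"ping")
        result["echo"] = c.recv(1024)

t = threading.Thread(target=client)
t.start()
while t.is_alive():                 # a real server would just loop forever
    for key, _mask in sel.select(timeout=0.2):
        key.data(key.fileobj)       # dispatch to the registered handler
t.join()
print(result["echo"])               # -> b'ping'
```

The key point for the original question: while this loop is asleep in sel.select(), incoming data and pending connections sit in OS buffers (and the listen backlog), so nothing is lost just because your code isn't actively calling recv() at that instant.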

Related

when doing downloading with python,should I use multithreading or multiprocessing?

Recently I'm working on a program which can download manga from an online manga website. It works, but it's a bit slow, so I decided to use multithreading/multiprocessing to speed up downloading. Here are my questions:
Which one is better? (This is a Python 3 program.)
Multiprocessing, I think, will definitely work. If I use multiprocessing, what is the suitable number of processes? Does it relate to the number of cores in my CPU?
Multithreading will probably work. This download work obviously spends a lot of time waiting for pics to be downloaded, so I think when a thread starts waiting, Python will make another thread work. Am I correct?
I've read "Inside the New GIL" by David M. Beazley. What's the influence of the GIL if I use multithreading?
You're probably going to be bound by either the server's upload pipe (if you have a faster connection) or your download pipe (if you have a slower connection).
There's significant startup latency associated with TCP connections. To avoid this, HTTP servers can recycle connections for requesting multiple resources. So there are two ways for your client to avoid this latency hit:
(a) Download several resources over a single TCP connection so your program only suffers the latency once, when downloading the first file
(b) Download a single resource per TCP connection, and use multiple connections so that hopefully at every point in time, at least one of them will be downloading at full speed
With option (a), you want to look into how to reuse connections with whatever HTTP library you're using. Any good one will have a way to reuse connections. http://python-requests.org/ is a good Python HTTP library.
For option (b), you probably do want a multithread/multiprocess route. I'd suggest only 2-3 simultaneous threads, since any more will likely just result in sharing bandwidth among the connections, and raise the risk of getting banned for multiple downloads.
The GIL doesn't really matter for this use case, since your code will be doing almost no processing, spending most of its time waiting for bytes to arrive over the network.
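A sketch of option (b) with the standard library's ThreadPoolExecutor. The download function here is a stand-in that sleeps instead of hitting the network (the URL names are hypothetical), but the sleep is exactly the kind of I/O wait during which the GIL is released, so the threads genuinely overlap:

```python
import time
from concurrent.futures import ThreadPoolExecutor

URLS = ["page-%d.jpg" % i for i in range(6)]    # hypothetical resource names

def download(url):
    # stand-in for a real fetch with requests/urllib: an I/O-bound wait
    time.sleep(0.1)
    return url

start = time.time()
with ThreadPoolExecutor(max_workers=3) as pool:  # the 2-3 threads suggested
    results = list(pool.map(download, URLS))
elapsed = time.time() - start

# six 0.1s "downloads" on 3 workers take ~0.2s instead of ~0.6s serially
print(len(results), elapsed < 0.5)
```

Swapping in a real HTTP fetch for the sleep keeps the same structure; the pool size is the only knob you'd tune against the server's patience.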
The lazy way to do this is to avoid Python entirely, because most UNIX-like environments have good building blocks for this. (If you're on Windows, your best choices for this approach would be msys, Cygwin, or a VirtualBox VM running some flavor of Linux; I personally like Linux Mint.) If you have a list of URLs you want to download, one per line, in a text file, try this:
cat myfile.txt | xargs -n 1 --max-procs 3 --verbose wget
The "xargs" command with these parameters will take whitespace-delimited URLs on stdin (in this case coming from myfile.txt) and run "wget" on each of them. It will allow up to 3 "wget" subprocesses to run at a time; when one of them completes (or errors out), it will read another line and launch another subprocess, until all the input URLs are exhausted. If you need cookies or other complicated stuff, curl might be a better choice than wget.
It doesn't really matter. It is indeed true that threads waiting on IO won't get in the way of other threads running, and since downloading over the Internet is an IO-bound task, there's no real reason to try to spread your execution threads over multiple CPUs. Given that and the fact that threads are more light-weight than processes, it might be better to use threads, but you honestly aren't going to notice the difference.
How many threads you should use depends on how hard you want to hit the website. Be courteous and take care that your scraping isn't viewed as a DOS attack.
You don't really need multithreading for this kind of task - you could try single-threaded async programming using something like Twisted.

How to manage socket connections for a chat server (Python) via sockets and select module

Sorry to bother everyone with this, but I've been stumped for a while now.
The problem is that I decided to reconfigure this chat program I had using sockets so that instead of a client and a sever/client, it would have a server, and then two separate clients.
I asked earlier as to how I might get my server to 'manage' these connections of the clients so that it could redirect the data between them. And I got a fantastic answer that provided me with exactly the code I would apparently need to do this.
The problem is I don't understand how it works, and I did ask in the comments but I didn't get much of a reply except for some links to documentation.
Here's what I was given:
connections = []
while True:
    rlist, wlist, xlist = select.select(connections + [s], [], [])
    for i in rlist:
        if i == s:
            conn, addr = s.accept()
            connections.append(conn)
            continue
        data = i.recv(1024)
        for q in connections:
            if q != i and q != s:
                q.send(data)
As far as I understand, the select module gives the ability to make waitable objects in the case of select.select.
I've got the rlist, the pending to be read list, the wlist, the pending to be written list, and then the xlist, the pending exceptional condition.
He's assigning the pending to be written list to "s" which in my part of the chat server, is the socket that is listening on the assigned port.
That's about as much as I feel I understand clearly enough. But I would really really like some explanation.
If you don't feel like I asked an appropriate question, tell me in the comments and I'll delete it. I don't want to violate any rules, and I'm pretty sure I am not duplicating threads as I did do research for a while before resorting to asking.
Thanks!
Note: my explanation here assumes you're talking about TCP sockets, or at least some type which is connection-based. UDP and other datagram (i.e. non-connection-based) sockets are similar in some ways, but the way you use select on them is slightly different.
Each socket is like an open file which can have data read and written to it. Data that you write goes into a buffer inside the system waiting to be sent out on the network. Data that arrives from the network is buffered inside the system until you read it. Lots of clever stuff is going on underneath, but when you're using a socket that's all you really need to know (at least initially).
It's often useful to remember that the system is doing this buffering in the explanation that follows, because you'll realise that the TCP/IP stack in the OS sends and receives data independently of your application - this is done so your application can have a simple interface (that's what the socket is, a way of hiding all the TCP/IP complexity from your code).
One way of doing this reading and writing is blocking. Using that system, when you call recv(), for example, if there is data waiting in the system then it will be returned immediately. However, if there is no data waiting then the call blocks - that is, your program halts until there is data to read. Sometimes you can do this with a timeout, but in pure blocking IO then you really can wait forever until the other end either sends some data or closes the connection.
This doesn't work too badly for some simple cases, but only where you're talking to one other machine - when you're talking on more than one socket, you can't just wait for data from one machine because the other one may be sending you stuff. There are also other issues which I won't cover in too much detail here - suffice to say it's not a good approach.
One solution is to use different threads for each connection, so the blocking is OK - other threads for other connections can be blocked without affecting each other. In this case you'd need two threads for each connection, one to read and one to write. However, threads can be tricky beasts - you need to carefully synchronise your data between them, which can make coding a little complicated. Also, they're somewhat inefficient for a simple task like this.
The select module allows you a single-threaded solution to this problem - instead of blocking on a single connection, it allows you a function which says "go to sleep until at least one of these sockets has some data I can read on it" (that's a simplification which I'll correct in a moment). So, once that call to select.select() returns, you can be certain that one of the connections you're waiting on has some data, and you can safely read it (even with blocking IO, if you're careful - since you're sure there's data there, you won't ever block waiting for it).
When you first start your application, you have only a single socket, which is your listening socket, so you only pass that in the call to select.select(). The simplification I made earlier is that the call actually accepts three lists of sockets: for reading, writing and errors.
The sockets in the first list are watched for reading - so, if any of them have data to read, the select.select() function returns control to your program. The second list is for writing - you might think you can always write to a socket, but actually if the other end of the connection isn't reading data fast enough then your system's write buffer can fill up and you can temporarily be unable to write. It looks like the person who gave you your code ignored this complexity, which isn't too bad for a simple example because usually the buffers are big enough that you're unlikely to hit problems in simple cases like this, but it's an issue you should address in the future once the rest of your code works. The final list is watched for errors - this isn't widely used, so I'll skip it for now. Passing the empty list is fine here.
At this point someone connects to your server - as far as select.select() is concerned this counts as making the listen socket "readable", so the function returns and the list of readable sockets (the first return value) will include the listen socket.
The next part runs over all the connections which have data to read, and you can see the special case for your listen socket s. The code calls accept() on it which will take the next waiting new connection from the listen socket and turn it into a brand new socket for that connection (the listen socket continues to listen and may have other new connections also waiting on it, but that's fine - I'll cover this in a second). The brand new socket is added to the connections list and that's the end of handling the listen socket - the continue will move on to the next connection returned from select.select(), if any.
For other connections that are readable, the code calls recv() on them to read the next 1024 bytes (or whatever is available, if less than 1024 bytes). Important note - if you hadn't used select.select() to make sure the connection was readable, this call to recv() could block and halt your program until data arrived on that specific connection - hopefully this illustrates why the select.select() is required.
Once some data has been read the code runs over all the other connections (if any) and uses the send() method to copy the data down them. The code correctly skips the same connection as the data just arrived on (that's the business about q != i) and also skips s, but as it happens this isn't required since as far as I can see it's never actually added to the connections list.
Once all readable connections have been processed, the code returns to the select.select() loop to wait for more data. Note that if a connection still has data, the call returns immediately - this is why accepting only a single connection from the listen socket is OK. If there are more connections, select.select() will return again immediately and the loop can handle the next available connection. You can use non-blocking IO to make this a bit more efficient, but it makes things more complicated so let's keep things simple for now.
This is a reasonable illustration, but unfortunately it suffers from some problems:
As I mentioned, the code assumes you can always call send() safely, but if you have one connection where the other end isn't receiving properly (maybe that machine is overloaded) then your code here could fill up the send buffer and then hang when it tries to call send().
The code doesn't cope with connections closing, which will often result in an empty string being returned from recv(). This should result in the connection being closed and removed from the connections list, but this code doesn't do it.
I've updated the code slightly to try and solve these two issues:
connections = []
buffered_output = {}
while True:
    rlist, wlist, xlist = select.select(connections + [s], buffered_output.keys(), [])
    for i in rlist:
        if i == s:
            conn, addr = s.accept()
            connections.append(conn)
            continue
        try:
            data = i.recv(1024)
        except socket.error:
            data = ""
        if data:
            for q in connections:
                if q != i:
                    buffered_output[q] = buffered_output.get(q, b"") + data
        else:
            i.close()
            connections.remove(i)
            if i in buffered_output:
                del buffered_output[i]
    for i in wlist:
        if i not in buffered_output:
            continue
        bytes_sent = i.send(buffered_output[i])
        buffered_output[i] = buffered_output[i][bytes_sent:]
        if not buffered_output[i]:
            del buffered_output[i]
I should point out here that I've assumed that if the remote end closes the connection, we also want to close immediately here. Strictly speaking this ignores the potential for TCP half-close, where the remote end has sent a request and closes its end, but still expects data back. I believe very old versions of HTTP used to sometimes do this to indicate the end of the request, but in practice this is rarely used any more and probably isn't relevant to your example.
Also it's worth noting that a lot of people make their sockets non-blocking when using select - this means that a call to recv() or send() which would otherwise block will instead return an error (raise an exception in Python terms). This is done partly for safety, to make sure a careless bit of code doesn't end up blocking the application; but it also allows some slightly more efficient approaches, such as reading or writing data in multiple chunks until there's none left. Using blocking IO this is impossible because the select.select() call only guarantees there's some data to read or write - it doesn't guarantee how much. So you can only safely call a blocking send() or recv() once on each connection before you need to call select.select() again to see whether you can do so again. The same applies to the accept() on a listening socket.
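As a small illustration of the non-blocking behaviour just described - this is a sketch, with a socketpair standing in for the two ends of an accepted connection:

```python
import socket
import time

a, b = socket.socketpair()   # stands in for the two ends of a connection
a.setblocking(False)

# on a non-blocking socket, recv() never halts the program: with no data
# waiting it raises an exception (BlockingIOError in Python 3) instead
try:
    first = a.recv(1024)
except BlockingIOError:
    first = None             # nothing to read right now

b.sendall(b"hi")
time.sleep(0.1)              # let the bytes land in the receive buffer
second = a.recv(1024)        # now there is data, so this succeeds
print(first, second)         # -> None b'hi'
```

In the select loop above, you'd catch that exception and simply move on to the next socket rather than treating it as an error.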
The efficiency savings are only generally a problem on systems which have a large number of busy connections, however, so in your case I'd keep things simple and not worry about blocking for now. In your case, if your application seems to hang up and become unresponsive then chances are you're doing a blocking call somewhere where you shouldn't.
Finally, if you want to make this code portable and/or faster, it might be worth looking at something like libev, which essentially has several alternatives to select.select() which work well on different platforms. The principles are broadly similar, however, so it's probably best to focus on select for now until you get your code running, and then investigate changing it later.
Also, I note that a commenter has suggested Twisted which is a framework which offers a higher-level abstraction so that you don't need to worry about all of the details. Personally I've had some issues with it in the past, such as it being difficult to trap errors in a convenient way, but many people use it very successfully - it's just an issue of whether their approach suits the way you think about things. Worth investigating at the very least to see whether its style suits you better than it does me. I come from a background writing networking code in C/C++ so perhaps I'm just sticking to what I know (the Python select module is quite close to the C/C++ version on which it's based).
Hopefully I've explained things sufficiently there - if you still have questions, let me know in the comments and I can add more detail to my answer.

I need to make a "server" that can handle multiple long lasting connections of streaming data

I need to read and plot data in real time from multiple Android phones simultaneously. I'm trying to build a server (in python) that each phone can connect to simultaneously, which will receive the data streams from each phone and plot in real time, using matplotlib. I'm not very experienced in socket programming, although I know the basics (single request servers and such). How should I go about doing this? I looked at asyncore, SocketServer, and other modules, but I'm not sure I grasp how to allow multiple long standing connections.
I was thinking I should create a new thread for each phone (although I'm not sure if it's safe to pass a socket to a new thread), but I also want to be able to plot using subplots (eg, 4 plots side by side), although this is not that important.
I just need a point in the right direction. Small code samples appreciated to illustrate the concept.
Because of Python's implementation of threading (the GIL), using threads might lead to degraded performance, depending on what your threads do.
I'd suggest using a framework for building an asynchronous server. One such framework is gevent. Using an asynchronous event loop you can do calculations while other "threads" (in gevent's case, greenlets) are waiting for I/O, and thus get better performance. The model is also ideal for long-lasting idle connections.
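gevent is a third-party package, but the same single-threaded event-loop model can be sketched with the standard library's asyncio. Everything here is illustrative (the handle_phone coroutine, the ack protocol, and the demo client standing in for one phone are assumptions, and a real server would hand each sample to the plotting code instead of acking):

```python
import asyncio

async def handle_phone(reader, writer):
    # one coroutine per phone; awaiting readline() yields to the event
    # loop, so many idle connections cost almost nothing
    while True:
        line = await reader.readline()
        if not line:
            break                      # phone disconnected
        # a real server would pass the sample to the plotting code here
        writer.write(b"ack:" + line)
        await writer.drain()
    writer.close()

async def demo():
    server = await asyncio.start_server(handle_phone, "127.0.0.1", 0)
    port = server.sockets[0].getsockname()[1]
    # simulate one phone connecting and sending a sample
    reader, writer = await asyncio.open_connection("127.0.0.1", port)
    writer.write(b"sample 1\n")
    await writer.drain()
    reply = await reader.readline()
    writer.close()
    server.close()
    await server.wait_closed()
    return reply

reply = asyncio.run(demo())
print(reply)   # -> b'ack:sample 1\n'
```

One caveat for this use case: matplotlib generally wants to run in the main thread, so the event loop would feed data to the plotting code rather than plot from inside the handlers.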

Python - Waiting for input from lots of sockets

I'm working on a simple experiment in Python. I have a "master" process, in charge of all the others, and every single process has a connection via unix socket to the master process. I would like to be able for the master process to be able to monitor all of the sockets for a response - but there could theoretically be almost a hundred of them. How would threads impact the memory and performance of the application? What would be the best solution? Thanks a lot!
One hundred simultaneous threads might be pushing the reasonable limits of threading. If you find this is the cleanest way to organize your code, I'd say give it a try, but threading really doesn't scale very far.
What works better is to use a technique like select to wait for one of the sockets to become readable, writable, or to have an error to report. This mechanism lets you go to sleep until something interesting happens, handle as many sockets as have content to handle, and then go back to sleep again, all in a single thread of execution. Removing the multi-threading can often reduce the chances for errors, and this style of programming should get you into the hundreds of connections, no trouble. (If you want to go beyond about 100, I'd use the poll functionality instead of select - constantly rebuilding the list of interesting file descriptors takes time that poll does not require.)
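A minimal sketch of the poll variant (select.poll is Unix-only; the socketpair here stands in for one master/worker connection, and the status message is invented):

```python
import select
import socket

master_end, worker_end = socket.socketpair()

poller = select.poll()
poller.register(master_end, select.POLLIN)   # register once, reuse every loop

worker_end.sendall(b"status: ok")            # a worker reports in

events = poller.poll(1000)                   # timeout in milliseconds
msg = None
for fd, mask in events:
    if fd == master_end.fileno() and mask & select.POLLIN:
        msg = master_end.recv(1024)
print(msg)                                   # -> b'status: ok'
```

Unlike select, the registered set persists between calls, which is what saves the rebuild cost once you have ~100 sockets; a real master would keep a dict mapping fileno() back to each worker's socket object.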
Something to consider is the Python Twisted Framework. They've gone to some length to provide a consistent way to hook callbacks onto events for this exact sort of programming. (If you're familiar with node.js, it's a bit like that, but Python.) I must admit a slight aversion to Twisted -- I never got very far in their documentation without being utterly baffled -- but a lot of people made it further in the docs than I did. You might find it a better fit than I have.
The easiest way to conduct comparative tests of threads versus processes for socket handling is to use the SocketServer in Python's standard library. You can easily switch approaches (while keeping everything else the same) by inheriting from either ThreadingMixIn or ForkingMixIn. Here is a simple example to get you started.
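A sketch of that comparison using the Python 3 module name, socketserver (in Python 2 it is SocketServer). The echo handler is illustrative; to try the process-based version on Unix, swap ThreadingMixIn for ForkingMixIn and keep everything else the same:

```python
import socket
import socketserver
import threading

class EchoHandler(socketserver.StreamRequestHandler):
    def handle(self):
        for line in self.rfile:          # each client gets its own thread
            self.wfile.write(line.upper())

class ThreadedServer(socketserver.ThreadingMixIn, socketserver.TCPServer):
    daemon_threads = True                # don't let handlers block exit

server = ThreadedServer(("127.0.0.1", 0), EchoHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()

# quick self-test acting as a client
with socket.create_connection(server.server_address) as c:
    c.sendall(b"hello\n")
    reply = c.makefile("rb").readline()
server.shutdown()
print(reply)                             # -> b'HELLO\n'
```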
Another alternative is a select/poll approach using non-blocking sockets in a single process and a single thread.
If you're interested in software that is already fully developed and highly evolved, consider these high-performance Python based server packages:
The Twisted framework uses the async single process, single thread style.
The Tornado framework is similar (less evolved, less full-featured, but easier to understand).
And Gunicorn, which is a high-performance forking server.

SimpleXmlRpcServer _sock.rcv freezes after thousands of requests

I'm serving requests from several XMLRPC clients over WAN. The thing works great for, let's say, a period of one day (sometimes two), then freezes in socket.py:
data = self._sock.recv(self._rbufsize)
_sock.timeout is -1, _sock.gettimeout is None
There is nothing special I do in the main thread (just receiving XMLRPC calls); there are two other threads talking to the DB. Both these threads work fine and survive this block (I did a check with WinPdb). Clients send requests no longer than 1 KB, and there isn't any special content: just nice and clean strings in a dictionary. Between two blockings I serve tens of thousands of requests without problems.
Firewall is off, no strange software on the same machine, etc...
I use Windows XP and Python 2.6.4. I've checked the differences between 2.6.4 and 2.6.5 and didn't find anything important (or am I mistaken?). Version 2.7 is not an option, as I would be missing binaries for MySQLdb.
The only thing that happens from time to time, caused by the clients that have a poor internet connection, is that sockets break. This happens every 5-10 minutes (there are just five clients, accessing the server every 2 seconds).
I've spent great deal of time on this issue, now I'm beginning to lose any ideas what to do. Any hint or thought would be highly appreciated.
What exactly is happening in your OS's TCP/IP stack (possibly in the Python layers on top, but that's less likely) to cause this is a mystery. As a practical workaround, I'd set a timeout longer than the delays you expect between requests (10 seconds should be plenty if you expect a request every 2 seconds) and if one occurs, close and reopen. (Calibrate the delay needed to work around freezes without interrupting normal traffic by trial and error.) Unpleasant to hack a fix without understanding the problem, I know, but being pragmatic about such things is a necessary survival trait in the world of writing, deploying and operating actual server systems. Be sure to comment the workaround accurately for future maintainers!
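The workaround, as a sketch (the 10-second figure is the calibration starting point suggested above, not a magic number, and server.handle_request is a hypothetical example call):

```python
import socket

# every socket created after this call inherits a 10 second timeout, so a
# wedged recv() raises socket.timeout instead of hanging forever
socket.setdefaulttimeout(10)

s = socket.socket()
print(s.gettimeout())   # -> 10.0

# around the serving code you would then catch the timeout and recycle
# the connection, along the lines of:
# try:
#     server.handle_request()
# except socket.timeout:
#     pass  # close and reopen the wedged socket here
```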
Thanks so much for the fast response. Right after I received it I increased the timeout to 10 seconds. Now it is all running without problems, but of course I'll need to wait another day or two for confirmation; only after 5 days will I be sure, and I'll come back with the results. I see that 140K requests have already gone well; having had such a hard experience with this one, I'll wait for at least another 200K.
What you were proposing about auto-adaptation of timeouts (without taking the system down) also sounds reasonable. Would the right way to go be to create a small class (e.g. AutoTimeoutCalibrator) and embed it directly into serial.py?
Yes - being pragmatic is the only way without losing another 10 days trying to figure out the real reason behind it.
Thanks again, I'll be back with the results.
(sorry, but for some reason I was not able to post it as a reply to your post)
