How do I create an asynchronous socket in Python?

I've created a socket object for Telnet communication, and I'm using it to communicate with an API, sending and receiving data. I need to configure it in such a way that I can send and receive data at the same time. By that, I mean data should be sent as soon as the application tries to send it, and data should be processed immediately on receipt. Currently, I have a configuration which allows receipt to be instant, and sending to be second priority with a very short delay.
Currently the best way I have found to do this is to have a request queue into which I push data to send, and a response queue into which I put messages from the server. I have a thread which polls the socket buffer every 0.1 seconds to check for new data; if there isn't any, it checks the request queue and processes anything there, and that runs in a continuous loop. Other threads insert data into the request queue and read data from the response queue. Everything is just about linear enough that this works fine.
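Roughly, the current setup looks something like this (a minimal sketch; the queue names, the 0.1 second timeout, and the handling code are stand-ins for what the real application does):

import queue
import socket
import threading

request_queue = queue.Queue()
response_queue = queue.Queue()

def pump(sock):
    # Continuous loop: prefer receiving, fall back to sending.
    sock.settimeout(0.1)              # poll the socket every 0.1 seconds
    while True:
        try:
            data = sock.recv(4096)    # returns quickly or times out
            if data:
                response_queue.put(data)
                continue
        except socket.timeout:
            pass
        try:
            outgoing = request_queue.get_nowait()
            sock.sendall(outgoing)
        except queue.Empty:
            pass

# Other threads call request_queue.put(...) and response_queue.get(...),
# while one thread runs pump(sock) forever:
# threading.Thread(target=pump, args=(sock,), daemon=True).start()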
This is not "asynchronous", in a sense that I've had to make it as asynchronous as possible without actually achieving it. Is there a proper way to do this? Or is anything under the hood going to be doing exactly the same as I am?
Other things I have investigated as a solution to this problem:
A callback system, where I might call socket.on_receipt(handle_message, args) to call the method handle_message with args as a parameter, passing the received data into the method. The only way I could find to achieve this is by implementing what I already have, then registering a callback for it (in fact, this is very close to what I already have).
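For example, on_receipt is not a real socket method, but a thin (hypothetical) wrapper along these lines would provide it on top of a receive thread, which is essentially what I have:

import threading

class CallbackSocket:
    # Hypothetical wrapper: invoke a callback whenever data arrives.
    def __init__(self, sock):
        self._sock = sock
        self._callback = None

    def on_receipt(self, callback, *args):
        # Register the handler, then start a thread that blocks on recv()
        # and dispatches every received chunk to it.
        self._callback = (callback, args)
        threading.Thread(target=self._recv_loop, daemon=True).start()

    def _recv_loop(self):
        while True:
            data = self._sock.recv(4096)
            if not data:
                break
            callback, args = self._callback
            callback(data, *args)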
Please note: I am approaching this as a learning exercise to understand better how asynchronous systems work, not to understand how to use a particular library, so please do not suggest an existing library unless it contains very clear code which is simple to understand and answers the question fully and concisely.

This seems like a pretty straightforward use case for asyncio. I wouldn't consider using asyncio as "using a particular library" since socket programming paired with asyncio's event loop is pretty low-level and the concept is very transparent if you have experience with other languages and just want to see how async programming works in Python.
You can use this async chat as an example: https://gist.github.com/gregvish/7665915
Essentially, you create a non-blocking socket; see the standard library reference for socket.setblocking(0):
https://docs.python.org/3/library/socket.html#socket.socket.setblocking
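As a rough sketch of what that looks like with a plain non-blocking socket driven by asyncio's event loop (the host, port, and message contents are made up; a real program would also handle disconnects and cancellation):

import asyncio
import socket

async def talk(host, port):
    sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    sock.setblocking(False)                     # non-blocking, as above
    loop = asyncio.get_running_loop()
    await loop.sock_connect(sock, (host, port))

    async def sender():
        while True:
            await loop.sock_sendall(sock, b"ping\r\n")
            await asyncio.sleep(1)              # send whenever we like

    async def receiver():
        while True:
            data = await loop.sock_recv(sock, 4096)
            if not data:
                break
            print("received", data)             # handled as soon as it arrives

    # Both directions run concurrently on one thread, with no polling loop.
    await asyncio.gather(sender(), receiver())

# asyncio.run(talk("example.com", 23))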
I'd also suggest this amazing session by David Beazley as a must-see for async Python programming. He explains the concurrency concepts in Python using sockets, exactly what you need: https://www.youtube.com/watch?v=MCs5OvhV9S4

Related

Understanding asynchronous IO vs asynchronous programming

I'm having a difficult time understanding asynchronous IO, so I hope to clear up some of my misunderstanding, because the word "asynchronous" seems to be thrown around a lot. If it matters, my goal is to get into Twisted, but I want a general understanding of the underlying concepts.
What exactly is asynchronous programming? Is it programming with a language and OS that support asynchronous IO? Or is it something more general? In other words, is asynchronous IO a separate concept from asynchronous programming?
Asynchronous IO means the application isn't blocked while your computer is waiting for something. "Waiting" here means the CPU is not processing: waiting for a web server, waiting for a network connection, waiting for a hard drive to respond with data from a platter. All of this is IO.
Normally, you write this in a very simple fashion synchronously:
const fs = require('fs');
let file = fs.readFileSync('file');
console.log(`got file ${file}`);
This will block, and nothing will happen until readFileSync returns with what you asked for. Alternatively, you can do this asynchronously, which won't block. Under the hood it is handled totally differently: it may use interrupts, it may poll handles with select() calls, and it typically goes through a different binding to a low-level library such as libc. That's all you need to know to get your feet wet. Here is what it looks like to us:
fs.readFile('file', function (err, file) {
    console.log(`got file ${file}`);
});
In this you're providing a "callback". fs.readFile requests the file immediately and returns; when the file has been read, it calls your callback (here a function taking an error argument and the file contents) with the result.
There are difficulties writing things asynchronously:
Creates pyramid code if using callbacks.
Errors can be harder to pinpoint.
Garbage collection isn't always as clean.
Performance overhead, and memory overhead.
Can create hard to debug situations if mixed with synchronous code.
All of that is the art of asynchronous programming.
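To make the "pyramid code" point concrete in Python terms, here is a purely illustrative comparison; the fetch/parse/save steps are stand-ins that complete immediately:

import asyncio

# Callback style: each step nests inside the previous one ("pyramid code").
def fetch(url, on_done): on_done("raw:" + url)       # pretend these are asynchronous
def parse(data, on_done): on_done(data.upper())
def save(result, on_done): on_done(True)

def process_with_callbacks(url):
    def got_data(data):
        def got_result(result):
            def saved(ok):
                print("callbacks done:", ok)
            save(result, saved)
        parse(data, got_result)
    fetch(url, got_data)

# Coroutine style: the same sequence reads top to bottom.
async def fetch_async(url): return "raw:" + url
async def parse_async(data): return data.upper()
async def save_async(result): return True

async def process_with_await(url):
    data = await fetch_async(url)
    result = await parse_async(data)
    ok = await save_async(result)
    print("await done:", ok)

process_with_callbacks("example")
asyncio.run(process_with_await("example"))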

How can I do asynchronous programming but hide it in Python?

I'm just getting my head around Twisted, threading, Stackless, etc., and would appreciate some high-level advice.
Suppose I have remote clients 1 and 2, connected via a websocket running in a page on their browsers. Here is the ideal goal:
for cl in (1, 2):
    guess[cl] = show(cl, choice("Pick a number:", range(1, 11)))
checkpoint()
if guess[1] == guess[2]:
    show((1, 2), display("You picked the same number!"))
Ignoring the mechanics of show, choice and display, the point is that I want the show call to be asynchronous. Each client gets shown the choice. The code waits at checkpoint() for all the threads (or whatever) to rejoin.
I would be interested in hearing answers even if they involve hairy things like rewriting the source code. I'd also be interested in less hairy answers which involve compromising a bit on the syntax.
The simplest solution code-wise is to use a framework like Autobahn, which supports remote procedure calls (RPC). That means you can call some JavaScript in the browser and wait for the result.
If you want to call two clients, you will have to use threads.
You can also do it manually. The approach works along these lines:
You need to pass a callback to show().
show() needs to register the callback with some kind of string ID in a global dict.
show() must send this ID to the client.
When the client sends the answer, it must include the ID.
The Python handler can then remove the callback from the global dict and invoke it with the answer.
The callback needs to collect the results.
When it has enough results (two in your case), it must send status updates to the clients.
You can simplify the code using yield, but the theory behind it is a bit complex to understand: see What does the "yield" keyword do in Python? and coroutines.
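A minimal sketch of that manual approach (the registry name, show() signature, and message format are invented for illustration; the real send/receive plumbing depends on the websocket framework):

import uuid

pending = {}                                   # callback registry, keyed by ID
guesses = {}

def send_to_client(client, payload):           # stand-in for the real websocket send
    print("to client", client, "->", payload)

def broadcast(text):                           # stand-in for messaging both clients
    print("to all ->", text)

def show(client, question, callback):
    # Send a question to a client and register a callback for the answer.
    call_id = uuid.uuid4().hex
    pending[call_id] = callback
    send_to_client(client, {"id": call_id, "question": question})

def on_client_message(message):
    # Called by the websocket layer whenever a client answers.
    callback = pending.pop(message["id"])
    callback(message["answer"])

def make_collector(client):
    def collect(answer):
        guesses[client] = answer
        if len(guesses) == 2:                  # the "checkpoint": both have answered
            if guesses[1] == guesses[2]:
                broadcast("You picked the same number!")
    return collect

# show(1, "Pick a number:", make_collector(1))
# show(2, "Pick a number:", make_collector(2))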
In Python, the most widely-used approach to async/event-based network programming that hides that model from the programmer is probably gevent.
Beware: this kind of trickery works by making tasks yield control implicitly, which encourages the same sorts of surprising bugs that tend to appear when OS threads are involved. Local reasoning about such problems is significantly harder than with explicit yielding, and the convenience of avoiding callbacks might not be worth the trouble introduced by the inherent pitfalls. Perhaps just as important to a library author like yourself: this approach is not pure Python, and would force dependencies and interpreter restrictions on the users of your library.
A lot of discussion about this topic sprouted up (especially between the gevent and twisted camps) while Guido was working on the asyncio library, which was called tulip at the time. He summarized the main issues here.
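For reference, the gevent style looks roughly like this (host and port are placeholders): monkey-patching makes ordinary blocking socket calls yield to other greenlets instead of blocking the whole process.

from gevent import monkey
monkey.patch_all()          # make blocking stdlib calls cooperative

import gevent
import socket

def talk(host, port, message):
    sock = socket.create_connection((host, port))   # looks blocking, but yields
    sock.sendall(message)
    return sock.recv(4096)

# Both "blocking" conversations run concurrently in one thread.
jobs = [gevent.spawn(talk, "example.com", 7, b"one\n"),
        gevent.spawn(talk, "example.com", 7, b"two\n")]
gevent.joinall(jobs)
print([job.value for job in jobs])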

Python: Interrupting sender with incoming messages

I'm working with sockets and asynchronous, event-driven programming. I would like to send a message and, once I receive a response, send another message. But I may be doing something besides listening. That is, I want to be interrupted when socket.recv() actually receives a message.
Question 1: How can I let layer 3 interrupt layer 4? That is, how can I handle the event of socket.recv() returning data without dedicating "program time" to waiting for incoming messages?
In asynchronous programming you don't interrupt an operation triggered by a message. All operations should be short and fast, so you can process lots of messages per second. That way every operation is effectively atomic and you don't run into race conditions so easily.
If you need to do more complex processing in parallel, you can hand that work over to a helper thread. Libraries like Twisted are prepared for such use cases.
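A bare-bones way to get that "handle data the moment it arrives" behaviour without dedicating time to waiting is an event loop built on the standard selectors module; the callback name and the outgoing queue below are illustrative:

import selectors
import socket
from collections import deque

sel = selectors.DefaultSelector()
outgoing = deque()                      # messages waiting to be sent

def on_data(data):
    print("received", data)             # runs as soon as data arrives

def serve(sock):
    sock.setblocking(False)
    # A real loop would register EVENT_WRITE only while `outgoing` is
    # non-empty, to avoid spinning; kept simple here.
    sel.register(sock, selectors.EVENT_READ | selectors.EVENT_WRITE)
    while True:
        for key, events in sel.select():
            if events & selectors.EVENT_READ:
                data = key.fileobj.recv(4096)
                if not data:
                    return              # peer closed the connection
                on_data(data)
            if events & selectors.EVENT_WRITE and outgoing:
                key.fileobj.sendall(outgoing.popleft())

# sock = socket.create_connection(("example.com", 23))
# outgoing.append(b"hello\r\n")
# serve(sock)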
Do you need to use sockets directly? I would otherwise recommend looking into the excellent Twisted library for Python. It handles all the async work for you, so you can focus on writing handlers and other logic your code needs. Twisted is relatively easy to get started with. Take a look at some examples at http://twistedmatrix.com/documents/current/core/howto/index.html.

Threads vs Asynchronous Networking (Twisted) Python

I am writing an implementation of a NAT. My algorithm is as follows:
Packet comes in
Check against lookup table if external, add to lookup table if internal
Swap the source address and send the packet on its way
I have been reading about Twisted. I was curious whether Twisted takes advantage of multicore CPUs. Assume the system has thousands of users and packets come in one right after the other. With Twisted, can the lookup table operations take place at the same time on each core? I hear that with threads the GIL will not allow this anyway. Perhaps I could benefit from multiprocessing?
Nginx is asynchronous and happily serves thousands of users at the same time.
Using threads with Twisted is discouraged. It has very good performance when used asynchronously, but the code you write for the request handlers must not block. If a handler is a pretty big piece of code, break it up into smaller parts and use Twisted's famous Deferreds to attach the other parts via callbacks. It certainly requires somewhat different thinking than most programmers are used to, but it has benefits. If the code has blocking parts, like database operations or accessing other resources over the network, try to find asynchronous libraries for those tasks too, so you can use Deferreds in those cases as well. If you can't find asynchronous libraries, you can fall back on the deferToThread function, which runs the function you pass it in a different thread, returns a Deferred for it, and fires your callback when it finishes; but it's better to treat that as a last resort, if nothing else can be done.
Here is the official tutorial for Deferreds:
http://twistedmatrix.com/documents/10.1.0/core/howto/deferredindepth.html
And another nice guide, which can help you get used to thinking in "async mode":
http://ezyang.com/twisted/defer2.html
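As a small sketch of the pattern described above, assuming a blocking lookup function (the table contents and packet format are placeholders): deferToThread pushes the blocking call to Twisted's thread pool, and the rest is chained with callbacks on the reactor thread.

from twisted.internet import reactor
from twisted.internet.threads import deferToThread

def blocking_lookup(address):
    # Placeholder for something that blocks, e.g. a database query.
    return {"10.0.0.5": "203.0.113.7"}.get(address)

def rewrite_source(external_addr, packet):
    packet["src"] = external_addr
    print("forwarding", packet)
    return packet

def handle_packet(packet):
    d = deferToThread(blocking_lookup, packet["src"])   # runs in a thread pool
    d.addCallback(rewrite_source, packet)               # back on the reactor thread
    d.addErrback(lambda failure: print("lookup failed:", failure))
    return d

reactor.callWhenRunning(handle_packet, {"src": "10.0.0.5", "payload": b"..."})
reactor.callLater(1, reactor.stop)                      # let the example exit
reactor.run()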

Mix Python Twisted with multiprocessing?

I need to write a proxy-like program in Python; the workflow is very similar to a web proxy. The program sits between the client and the server, intercepts requests sent by the client to the server, processes them, then sends them on to the original server. The protocol used is a private protocol over TCP.
To minimize the effort, I want to use Twisted to handle receiving requests (the part that acts as a server) and resending them (the part that acts as a client).
To maximize performance, I want to use Python's multiprocessing (threading has the GIL limit) to separate the program into three processes. The first process runs Twisted to receive requests, put each request in a queue, and return success immediately to the original client. The second process takes requests from the queue, processes them further, and puts them on another queue. The third process takes requests from the second queue and sends them to the original server.
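Roughly, the structure I have in mind looks like this (a bare sketch with dummy payloads and single items; the real first and third processes would run Twisted's server and client sides):

import multiprocessing

def receiver(request_queue):
    # Process 1: would run the Twisted server and push incoming requests here.
    request_queue.put({"client": 1, "payload": b"request"})

def processor(request_queue, outgoing_queue):
    # Process 2: takes requests off the first queue, processes, forwards them.
    request = request_queue.get()
    request["payload"] = request["payload"].upper()
    outgoing_queue.put(request)

def forwarder(outgoing_queue):
    # Process 3: would open a client connection to the real server and send.
    print("would forward", outgoing_queue.get())

if __name__ == "__main__":
    q1, q2 = multiprocessing.Queue(), multiprocessing.Queue()
    procs = [multiprocessing.Process(target=receiver, args=(q1,)),
             multiprocessing.Process(target=processor, args=(q1, q2)),
             multiprocessing.Process(target=forwarder, args=(q2,))]
    for p in procs:
        p.start()
    for p in procs:
        p.join()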
I'm a newcomer to Twisted. I know it is event-driven, and I have also heard it's better not to mix Twisted with threading or multiprocessing. So I don't know whether this approach is appropriate, or whether there is a more elegant way using just Twisted.
Twisted has its own event-driven way of running subprocesses which is (in my humble, but correct, opinion) better than the multiprocessing module. The core API is spawnProcess, but tools like ampoule provide higher-level wrappers over it.
If you use spawnProcess, you will be able to handle output from subprocesses in the same way you'd handle any other event in Twisted; if you use multiprocessing, you'll need to develop your own queue-based way of getting output from a subprocess into the Twisted mainloop somehow, since the normal callFromThread API that a thread might use won't work from another process. Depending on how you call it, it will either try to pickle the reactor, or just use a different non-working reactor in the subprocess; either way it will lose your call forever.
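To give a feel for spawnProcess, here is a minimal sketch that runs a child process and handles its output as ordinary reactor events (the choice of "cat" as the child is just for illustration):

from twisted.internet import protocol, reactor

class EchoChild(protocol.ProcessProtocol):
    # Output and exit from the child arrive as ordinary reactor events.
    def connectionMade(self):
        self.transport.write(b"hello from the parent\n")
        self.transport.closeStdin()

    def outReceived(self, data):
        print("child said:", data)

    def processEnded(self, reason):
        print("child exited:", reason.value)
        reactor.stop()

# Spawn "cat" as a child process; it echoes stdin back on stdout.
reactor.spawnProcess(EchoChild(), "cat", ["cat"], env=None)
reactor.run()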
ampoule is the first thing I think of when reading your question.
It is a simple process-pool implementation which uses the AMP protocol to communicate. You can use the deferToAMPProcess function; it's very easy to use.
You can try something like the cooperative multitasking technique described here: http://us.pycon.org/2010/conference/schedule/event/73/ . It's similar to the technique Glyph mentioned and it's worth a try.
You can try to use ZeroMQ with Twisted but it's really hard and experimental for now :)
