creating a minimal HTTP server with asyncio - python

While I'm familiar with both HTTP servers and event loops, I'm having some trouble grasping the inner workings of Python's asyncio.
As a learning exercise, I've been trying to write a minimal HTTP server (just echoing back the request method, URI, headers and body), without additional dependencies. I've looked into aiohttp and aiowsgi for reference, but I'm having trouble understanding what's going on there - in part because the perceived complexity of protocols, transports, etc. is a bit overwhelming. So I'm currently stuck because I don't quite know where to begin.
Is it naive to expect this to be just a few lines of code to establish the connection, consume the incoming text stream and send back another text stream?

You can take a look at picoweb as an example of a very simple (and very limited) HTTP server.
But of course, once you try to implement a full-featured web server you will end up with something like aiohttp -- HTTP is a complex (maybe even complicated) standard.
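To answer the "just a few lines of code" part directly: yes, the asyncio stream API gets you most of the way there. Below is a minimal sketch (Python 3.7+, standard library only, function names are just illustrative; no keep-alive, no chunked bodies, no error handling) that reads the request line, headers and body and echoes them back:

    import asyncio

    async def handle_client(reader, writer):
        # Request line, e.g. b"GET /path HTTP/1.1\r\n"
        method, uri, version = (await reader.readline()).decode().split()

        # Headers end at an empty line.
        headers = {}
        while True:
            line = await reader.readline()
            if line in (b"\r\n", b"\n", b""):
                break
            name, _, value = line.decode().partition(":")
            headers[name.strip().lower()] = value.strip()

        # Read the body only if a Content-Length was given.
        length = int(headers.get("content-length", 0))
        body = await reader.readexactly(length) if length else b""

        payload = "\n".join([
            "method: " + method,
            "uri: " + uri,
            "headers: " + ", ".join("%s=%s" % kv for kv in headers.items()),
            "body: " + body.decode(errors="replace"),
        ]).encode()

        writer.write((
            "HTTP/1.1 200 OK\r\n"
            "Content-Type: text/plain; charset=utf-8\r\n"
            "Content-Length: %d\r\n"
            "Connection: close\r\n\r\n" % len(payload)
        ).encode() + payload)
        await writer.drain()
        writer.close()

    async def main():
        server = await asyncio.start_server(handle_client, "127.0.0.1", 8080)
        async with server:
            await server.serve_forever()

    asyncio.run(main())

Run it and try curl -d 'hello' http://127.0.0.1:8080/test to see the echo. Transports and protocols are the lower-level API; for an exercise like this, the streams shown here are usually all you need.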

Related

Using django for non http requests

Can I use Django to handle non-HTTP requests and responses? I have a Django web application serving up web pages, and I would like to use it to also communicate with other devices (hand-held GPS units sending in status reports and receiving acks) over TCP, but Django reports that the requests are "code 400, message Bad HTTP/0.9 request type".
[28/Sep/2015 15:14:26] code 400, message Bad HTTP/0.9 request type ('[V1.0.0,244565434376396,1,abcd,2015-09-28')
[28/Sep/2015 15:14:26] "[V1.0.0,244565434376396,1,abcd,2015-09-28 14:14:12,1-2,865456543459367,2,T1]" 400 -
The message from the device is sent as plain text over TCP, with no HTTP framing at all.
I haven't found any information on how to do this with django, but it would make my life easier if it was possible.
Thanks!
Not that I know of.
Django is a web framework, so it's designed around a certain paradigm if not a certain protocol.
The design is heavily informed - if not by HTTP - by the notions of URL, request, a stateless protocol, et cetera.
If the template system and the routing system were taken away you would be left with a glorified ORM and some useless bits of code.
However, unless you are dealing with existing devices with their own protocol, you can use Django to build a RESTful service to successfully exchange information with something other than bipeds in front of a web browser.
This article on Dr. Dobb's is very informative.
Django REST, although by no means necessary, can help you.
If you are really stuck with legacy devices and protocols, you could write an adapter/proxy that receives your devices' requests and translates them into RESTful calls, provided your protocol looks enough like HTTP semantically rather than syntactically (as in, if you just have to translate QUUX aaa:bbb:ccc: to GET xx/yy/zz).
If it does not share the slightest bit of HTTP's semantics, I'd say Django can't help you much.
I second the suggestion that non-HTTP traffic is better handled with other tools, but I do have a suggestion for how to structure a Django app that could do it. HTTP processing takes place in middleware, so you could put your app at the top of that stack and either pre-empt the other middlewares by returning a response directly instead of passing the request down the stack, or prepare a mock request to pass down to the other handlers and grab the response on the way back, post-processing it for your receiver.
This feels hacky and might require a bunch of unorthodox tricks, but that's how I would approach the problem as stated.
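For what it's worth, a rough sketch of that middleware trick might look like the following (old-style Django middleware from roughly that era; the path and class name are made up, and it only helps if the incoming bytes parse as HTTP at all, which the log above suggests they do not):

    from django.http import HttpResponse

    class DeviceGatewayMiddleware(object):
        # Hypothetical middleware: returning an HttpResponse from
        # process_request skips the remaining request middleware and the
        # view, effectively pre-empting the rest of the stack.
        def process_request(self, request):
            if not request.path.startswith("/device/"):
                return None  # fall through to normal Django handling
            payload = request.body.decode("ascii", errors="replace")
            # ... translate/queue the device message here ...
            return HttpResponse("ACK", content_type="text/plain")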

Twisted server-client interconnecting XML-RPC and REST services

I have a service provided by a REST API, with a Python library wrapping it using python-requests.
I have a 'dumb' user interface designed by a third party (not in Python) that connects to a local XML-RPC service.
Now I have to connect both ends and forward the XML-RPC calls to the REST API and return the results. It's mostly asynchronous and doesn't depend on results returning to the user in real-time. Most of the XML-RPC calls are supposed to return immediately, queue a task, and some other call will query the results later. Data is stored in an sqlite database until needed.
So, I decided to use twisted.web.xmlrpc for this middle layer and use the requests-based lib for the remote calls, and it works fine. I guess I'm blocking Twisted's main loop for a few seconds once in a while, but that's not a big deal.
The problem is that I also have to make some big file uploads from this middle layer to the HTTP server providing the REST API. I can't make those uploads using the requests-based lib because it would block the Twisted loop until the upload is finished.
I'd rather not use multithreading, and I really don't want to rewrite the python-requests-based lib I have as a Twisted client. Is there any way I can integrate requests into Twisted's main loop, or any other reasonable solution?
If you like requests' style of API, but want something that would work with Twisted, consider using treq. There are support libraries for writing interfaces which can be either synchronous or asynchronous depending on their caller's needs.
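A minimal sketch of what that looks like with treq (the URL is made up; treq.get() and treq.content() both return Deferreds, so nothing blocks the reactor):

    import treq
    from twisted.internet import reactor
    from twisted.internet.defer import inlineCallbacks

    @inlineCallbacks
    def fetch(url):
        response = yield treq.get(url)         # non-blocking HTTP request
        body = yield treq.content(response)    # Deferred firing with the raw bytes
        print("got", response.code, "with", len(body), "bytes")

    d = fetch("https://example.com/api/status")  # hypothetical endpoint
    d.addErrback(lambda failure: print(failure.getErrorMessage()))
    d.addBoth(lambda _: reactor.stop())
    reactor.run()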
If you really want to use requests, but you don't want to block the main loop, you can invoke it with twisted.internet.threads.deferToThread. This is mostly transparent, and if your requests don't share any state you can almost ignore the fact that you're using multithreading.
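For the big uploads specifically, a sketch of the deferToThread route (the URL and file name are placeholders; requests streams the open file, and the blocking call runs in Twisted's thread pool rather than in the reactor thread):

    import requests
    from twisted.internet import reactor
    from twisted.internet.threads import deferToThread

    def blocking_upload(path, url):
        with open(path, "rb") as f:
            return requests.post(url, data=f)  # blocks, but only its worker thread

    d = deferToThread(blocking_upload, "dump.bin", "https://example.com/upload")
    d.addCallback(lambda resp: print("uploaded:", resp.status_code))
    d.addErrback(lambda failure: print("upload failed:", failure.getErrorMessage()))
    d.addBoth(lambda _: reactor.stop())
    reactor.run()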
But, ultimately, Jean-Paul's comment is correct; you are going to need to make some changes to the way this code works, if you want to change the way it works.

Can I use socket.io with twisted.web?

I'm writing a web application using Python's twisted.web on the server side.
On the frontend side I would like to use Ajax for displaying real time updates of events which are happening in the server.
There is a lot of information out there on how this can be done, so I realized I need to pick a JavaScript library that would make my life easier.
socket.io seems to be a good choice since it supports several browsers and transport mechanisms, but from reading their examples it seems it only works with node.js?
So, does anyone know if it's possible to use socket.io with twisted.web?
If so, any links for a good example/tutorial would be also welcome.
You could try https://github.com/DesertBus/sockjs-twisted or, if you need SocketIO for a specific reason, it wouldn't be difficult to port TornadIO2 to Cyclone. You might also find this issue interesting.
You need something on the server side to integrate with the socket.io script on the client side. The servers that I know of that are written in Python and do this all use Tornado. You could look at an implementation like Tornadio (https://github.com/MrJoes/tornadio) and see what methods and classes they used to hook Tornadio and Tornado together. That would give you a pretty good idea of how to integrate it with your twisted.web server.
We've just switched away from socket.io to sockJS (which is also compatible with Tornado) and have seen large performance improvements.

Making an asynchronous interface appear synchronous to mod_python users

I have a Python-driven web interface powered by Apache 2.2 with mod_python and Python 2.4. I need to make an asynchronous process appear synchronous to users of this web interface.
When users access one module on this website:
An external SOAP interface will be contacted with a unique identifier and will respond with a number N
The external interface will respond asynchronously by contacting a SOAP server on my machine between 1 and 10 times (the number N tells us how many responses we will receive)
I need to somehow aggregate these responses and pass them to the original module which will display the information back to the user. The goal is to make the process appear synchronous to the user.
What is the best way to handle this synchronization issue? Is this something Twisted would be well-suited for?
I am not restricting myself to Python for the solution, though it is preferred because everything else on the server is in Python. I prefer a solution that is both scalable and will take a minimal amount of programming time (though I understand that these attributes are somewhat at odds).
Maybe you can use Orbited to get ajax push with long-lived HTTP connections to your web clients. Orbited is based on Twisted, so I think it makes sense to look at if you already know Twisted. Have a look at this tutorial to get started.
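Independent of Orbited, the synchronization step itself - block the web request until all N asynchronous responses have arrived, or a timeout expires - can be sketched with the standard library alone. This assumes the SOAP callback receiver lives in the same process as the web handler, and the names are hypothetical (Python 2.4 would also need explicit acquire()/release() instead of the with statement):

    import threading
    import time

    class ResponseCollector(object):
        """Collects the N asynchronous responses belonging to one user request."""

        def __init__(self, expected):
            self.expected = expected
            self.responses = []
            self._cond = threading.Condition()

        def add(self, response):
            # Called by the local SOAP server each time a response arrives.
            with self._cond:
                self.responses.append(response)
                self._cond.notify_all()

        def wait_for_all(self, timeout=30.0):
            # Called by the web handler; returns once N responses are in
            # or the timeout has expired.
            deadline = time.time() + timeout
            with self._cond:
                while len(self.responses) < self.expected:
                    remaining = deadline - time.time()
                    if remaining <= 0:
                        break
                    self._cond.wait(remaining)
                return list(self.responses)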

Abstraction and client/server architecture questions for Python game program

Here is where I am at presently. I am designing a card game with the aim of utilizing major components for future work. The part that is hanging me up is creating a layer of abstraction between the server and the client(s). A server is started, and then one or more clients can connect (locally or remotely). I am designing a thick client but my friend is looking at doing a web-based client. I would like to design the server in a manner that allows a variety of different clients to call a common set of server commands.
So, for a start, I would like to create a 'server' which manages the game rules and player interactions, and a 'client' on the local CLI (I'm running Ubuntu Linux for convenience). I'm attempting to flesh out how the two pieces are supposed to interact, without mandating that future clients be CLI-based or on the local machine.
I've found the following two questions which are beneficial, but don't quite answer the above.
Client Server programming in python?
Evaluate my Python server structure
I don't require anything full-featured right away; I just want to establish the basic mechanisms for abstraction so that the resulting mock-up code reflects the relationship appropriately: there are different assumptions at play with a client/server relationship than with an all-in-one application.
Where do I start? What resources do you recommend?
Disclaimers:
I am familiar with code in a variety of languages and general programming/logic concepts, but have little real experience writing substantial amounts of code. This pet project is an attempt at rectifying this.
Also, I know the information is out there already, but I have the strong impression that I am missing the forest for the trees.
Read up on RESTful architectures.
Your fat client can use REST. It will use urllib2 to make RESTful requests of a server. It can exchange data in JSON notation.
A web client can use REST. It can make simple browser HTTP requests or a Javascript component can make more sophisticated REST requests using JSON.
Your server can be built as a simple WSGI application using any simple WSGI components. You have nice ones in the standard library, or you can use Werkzeug. Your server simply accepts REST requests and makes REST responses. Your server can work in HTML (for a browser) or JSON (for a fat client or Javascript client.)
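A minimal sketch of that idea with nothing but the standard library (wsgiref plus json; the /play URL and the message shape are made up for illustration):

    import json
    from wsgiref.simple_server import make_server

    def app(environ, start_response):
        # A single JSON-over-HTTP endpoint: POST a move, get a JSON reply.
        if environ["PATH_INFO"] == "/play" and environ["REQUEST_METHOD"] == "POST":
            size = int(environ.get("CONTENT_LENGTH") or 0)
            move = json.loads(environ["wsgi.input"].read(size) or b"{}")
            body = json.dumps({"accepted": True, "echo": move}).encode()
            start_response("200 OK", [("Content-Type", "application/json")])
            return [body]
        start_response("404 Not Found", [("Content-Type", "text/plain")])
        return [b"not found"]

    make_server("127.0.0.1", 8000, app).serve_forever()

Any client that can speak HTTP and JSON - urllib in the fat client, XMLHttpRequest in the browser - can talk to it, which is exactly the abstraction the question is after.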
I would consider basing all server / client interactions on HTTP -- probably with JSON payloads. This doesn't directly allow server-initiated interactions ("server push"), but the (newish but already traditional;-) workaround for that is AJAX-y (even though the X makes little sense as I suggest JSON payloads, not XML ones;-) -- the client initiates an async request (via a separate thread or otherwise) to a special URL on the server, and the server responds to those requests to (in practice) do "pushes". From what you say it looks like the limitations of this approach might not be a problem.
The key advantage of specifying the interactions in such terms is that they're entirely independent from the programming language -- so the web-based client in Javascript will be just as doable as your CLI one in Python, etc etc. Of course, the server can live on localhost as a special case, but there is no constraint for that as the HTTP URLs can specify whatever host is running the server; etc, etc.
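The client half of that long-poll arrangement is small as well; a sketch (the /events URL is made up, and this uses urllib.request from Python 3, the urllib2 of its day):

    import json
    import threading
    import urllib.request

    def poll_events(base_url, on_event):
        # The server holds each request open until it has something to "push".
        while True:
            with urllib.request.urlopen(base_url + "/events") as resp:
                on_event(json.loads(resp.read().decode()))

    threading.Thread(
        target=poll_events,
        args=("http://localhost:8000", lambda ev: print("server push:", ev)),
        daemon=True,
    ).start()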
First of all, regardless of the locality or type of the client, you will be communicating through an established message-based interface. All clients will be operating based on a common set of requests and responses, and the server will handle and reject these based on their validity according to game state. Whether you are dealing with local clients on the same machine or remote clients via HTTP does not matter whatsoever from an abstraction standpoint, as they will all be communicating through the same set of requests/responses.
What this comes down to is your protocol. Your protocol should be a well-defined and technically sound language between client and server that will allow clients to a) participate effectively, and b) participate fairly. This protocol should define what messages ('moves') a client can do, and when, and how the server will react.
Your protocol should be fully fleshed out and documented before you even start on the game logic - the two are intrinsically connected, and you will save a lot of wasted time and effort by completely defining your protocol first.
Your protocol is the abstraction between client and server, and it will also serve as the design document and programming guide for both.
Protocol design is all about state, state transitions, and validation. Game servers usually have a set of fairly common, generic states for each game instance e.g. initialization, lobby, gameplay, pause, recap, close game, etc...
Each one of these states has important state data related with it. For example, a 'lobby' state on the server-side might contain the known state of each player...how long since the last message or ping, what the player is doing (selecting an avatar, switching settings, going to the fridge, etc.). Organizing and managing state and substate data in code is important.
Managing these states, and the associated data requirements for each, should be carefully planned out, as they are directly related to the volume of work and overall project complexity - this is very important, and also great practice if you are using this project to step up to larger things.
Also, you must keep in mind that if you have a game, and you let people play, people will cheat. It's a fact of life. In order to minimize this, you must carefully design your protocol and state management to only ever allow valid state transitions. Never trust a single client packet.
For every permutation of client/server state, you must enforce a limited set of valid game messages, and you must be very careful in what you allow players to do, and when you allow them to do it.
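To make that concrete, the heart of such a server is often just a table of which messages are legal in which state, consulted before any game logic runs; a sketch with made-up states, message types and helper methods:

    # Which message types are legal in each server-side game state.
    ALLOWED = {
        "lobby":   {"join", "pick_avatar", "ready"},
        "playing": {"play_card", "pass", "chat"},
        "recap":   {"rematch", "leave"},
    }

    def handle_message(game, player, message):
        # Validate first, never trusting the client; only then run game rules.
        if message["type"] not in ALLOWED[game.state]:
            return {"ok": False, "error": "illegal message in state " + game.state}
        if message["type"] == "play_card" and not game.is_players_turn(player):
            return {"ok": False, "error": "not your turn"}
        return game.apply(player, message)  # hypothetical game-rules entry point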
Project complexity is generally exponential and not linear - client/server game programming is usually a good/painful way to learn this. Great question. Hope this helps, and good luck!
