I have an interesting project going on at our workplace. The task before us is this:
Build a custom server using Python
It has a web server part, serving REST
It has an FTP server part, serving files
It has an SMTP part, which receives mail only
and last but not least, it has a background worker that manages low-level file IO based on requests received from the above-mentioned services
Obviously the go-to place was the Twisted library/framework, which is an excellent networking tool. However, studying the docs further, a few things came up that I'm not sure about.
Having a Java background, I would solve the task (at least at the beginning) by spawning a separate thread for each service and going from there. Being in Python, however, I cannot do that to any reasonable effect because Python has the GIL. I'm not sure how Twisted handles this. I would expect that a large part (if not the majority) of Twisted's code is written in C, where the GIL is not an issue, but I couldn't find docs that explained this to my satisfaction.
So the most outstanding question is: given that Twisted uses the reactor as its main design pattern, will it be able to:
Serve all those services needed
Do it in a non-blocking fashion (it should, according to docs, but if someone could elaborate, I'd be grateful)
Be able to serve a few hundred clients at once
Serve large file downloads in a reasonable way, meaning that it can serve multiple clients, using multiple services, downloading and uploading large files.
Large files being on the order of hundreds of MB or a few GB. The size is not important; it's the time that the client has to stay connected to the server that matters.
Edit: I'm actually inclined to go the way of Python multiprocessing, but I'm not sure whether that's the correct thing to do with Twisted, etc.
Serve all those services needed
Yes.
Do it in a non-blocking fashion (it should, according to docs, but if someone could elaborate, I'd be grateful)
Twisted uses the common reactor model. I/O goes through your choice of poll, select, or whatever to determine whether data is available. It handles only what is available and passes the data along to other stages of your app. This is what makes it non-blocking.
I don't think it provides non-blocking disk I/O, but I'm not sure. That feature is not what most people need when they say non-blocking.
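For a concrete picture of the reactor model, here is a minimal sketch (my own illustration, not your actual design) of two such services sharing a single Twisted reactor; the ports and the ./pub directory are arbitrary, and an SMTP service from twisted.mail could hang off the same reactor:

```python
# A minimal sketch: one Twisted reactor driving an HTTP service and an
# anonymous FTP service at the same time. Ports and paths are arbitrary.
from twisted.internet import reactor
from twisted.web.server import Site
from twisted.web.resource import Resource
from twisted.protocols.ftp import FTPFactory, FTPRealm
from twisted.cred.portal import Portal
from twisted.cred.checkers import AllowAnonymousAccess

class Hello(Resource):
    isLeaf = True
    def render_GET(self, request):
        return b"hello"  # stand-in for a real REST resource

# HTTP (REST) service on port 8080
reactor.listenTCP(8080, Site(Hello()))

# Anonymous FTP service on port 2121, serving files from ./pub
portal = Portal(FTPRealm("./pub"), [AllowAnonymousAccess()])
reactor.listenTCP(2121, FTPFactory(portal))

reactor.run()  # a single event loop multiplexes both listeners
```

Neither service blocks the other: the reactor only wakes the handler whose socket actually has data.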
Be able to serve a few hundred clients at once
Yes. No. Maybe. What are those clients doing? Is each one hitting refresh every second in a browser, making 100 requests? Is each one doing a numerical simulation of galaxy collisions? Is each one sending the string "hi!" to the server, without expecting a response?
Twisted can easily handle 1000+ requests per second.
Serve large file downloads in a reasonable way, meaning that it can serve multiple clients, using multiple services, downloading and uploading large files.
Sure. For example, the original version of BitTorrent was written in Twisted.
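For the large-file part specifically, a minimal sketch: twisted.web's static.File pushes data out through a producer/consumer mechanism, in chunks, as each client consumes it, so a multi-GB download is never read into memory in one go (the directory and port here are arbitrary):

```python
# A minimal sketch: serving large files over HTTP with twisted.web.
# static.File streams each file in chunks as the client reads.
from twisted.internet import reactor
from twisted.web.server import Site
from twisted.web.static import File

reactor.listenTCP(8080, Site(File("/srv/downloads")))
reactor.run()
```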
Related
I am trying to design a web application that processes large quantities of large mixed-media files coming from asynchronous processes. Each process can take several minutes.
The files are either uploaded as a POST body or pulled by the web server according to a source URL provided. The files can be processed by a variety of external tools in a synchronous or asynchronous way.
I need to be able to load balance this application so I can process multiple large files simultaneously for as much as I can afford to scale.
I think Python is my best choice for this project, but besides that, I am open to any solution. The app can either deliver the file back or rely on a messaging channel to notify the clients about process completion.
Some approaches I thought I might use:
1) Use a non-blocking web server such as Tornado that keeps the connection open until the file processing is done. The external processing command is launched, and the web server waits until the file is ready, then pipes the resulting IO stream directly back to the web app, which returns it. Since the processes sending requests are asynchronous, they can afford to wait (unless memory or some other issues come up).
2) Use a regular web server like CherryPy (which I am more confident with) and have the web app use a messaging channel to report the processing progress (see the sketch after this list). The web server returns an HTTP response as soon as it receives the file, validates it, and sends it to a background process; at the same time it sends a message notifying the process start. The background process then takes care of delivering the file to an available location and sending another message to the channel with the location of the new file. This solution looks more flexible than 1), but requires writing a separate script to handle the messages outside the web application, as well as a separate storage space for the temp files that have to be cleaned up at a certain point.
3) Use some internal messaging capability of any of the web servers mentioned above, which I am not familiar with...
Edit: something like CherryPy's pub-sub engine (http://cherrypy.readthedocs.org/en/latest/extend.html?highlight=messaging#publish-subscribe-pattern) could be a good solution.
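For what it's worth, here is a minimal sketch of option 2, using CherryPy with a multiprocessing queue standing in for a real messaging channel; the processing and notification steps are placeholders:

```python
# A minimal sketch of option 2: accept the upload, answer immediately,
# and let a background process do the slow work. process_file and
# notify_clients are placeholders for the real tools and channel.
import multiprocessing
import uuid

import cherrypy

task_queue = multiprocessing.Queue()

def process_file(path):
    pass  # placeholder: run the external media tool here

def notify_clients(job_id, path):
    pass  # placeholder: publish completion on the messaging channel

def worker(queue):
    while True:
        job_id, path = queue.get()
        process_file(path)
        notify_clients(job_id, path)

class Upload:
    @cherrypy.expose
    def index(self, datafile):
        # Validate and stash the upload, then return right away.
        job_id = str(uuid.uuid4())
        path = "/tmp/upload-%s" % job_id
        with open(path, "wb") as out:
            out.write(datafile.file.read())
        task_queue.put((job_id, path))
        return job_id  # clients watch the channel for this id

if __name__ == "__main__":
    multiprocessing.Process(target=worker, args=(task_queue,), daemon=True).start()
    cherrypy.quickstart(Upload())
```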
Any suggestions?
Thank you,
gm
I had a similar situation come up with a really large-scale data processing engine that my team implemented. We wanted to build our API calls in Flask, some of which can take many hours to complete, but have a way to notify the user in real time what is going on.
Basically, what I came up with was what you described as option 2. On the same machine where I serve the Flask app through Apache, I created a Tornado app that serves a websocket reporting progress to the end user. Once my main page is served, it establishes the websocket connection to the Tornado server, and the Flask app periodically sends updates to the Tornado app and down to the end user. Even if the browser is closed during the long-running operation, Apache keeps the request alive and processing, and upon logging back in, I can still see the current progress.
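For illustration, a minimal sketch of the Tornado side of this setup, assuming the Flask app POSTs progress text to /progress and browsers listen on /ws (the routes and port are made up):

```python
# A minimal sketch: a Tornado app that fans progress updates out to
# connected browsers over websockets.
import tornado.ioloop
import tornado.web
import tornado.websocket

listeners = set()

class ProgressSocket(tornado.websocket.WebSocketHandler):
    def open(self):
        listeners.add(self)

    def on_close(self):
        listeners.discard(self)

class ProgressUpdate(tornado.web.RequestHandler):
    def post(self):
        # The Flask app POSTs here; relay the body to every browser.
        message = self.request.body.decode("utf-8")
        for socket in listeners:
            socket.write_message(message)

app = tornado.web.Application([
    (r"/ws", ProgressSocket),
    (r"/progress", ProgressUpdate),
])

if __name__ == "__main__":
    app.listen(8888)
    tornado.ioloop.IOLoop.current().start()
```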
I wrote about this solution in some more detail here:
http://jonfeatherstone.com/2013/08/01/mongo-and-websockets-for-application-logging/
Good luck!
I have spent quite some time now researching server backends/APIs/frameworks. I need a solution where I can store user content (JSON & binary data).
The obvious choice would be a REST API. The only missing element is a push feature for when data on the server changes and clients should be notified instantly. With more research into this matter I discovered the classic approaches (Comet, Push, Server-Sent Events, Bayeux, BOSH, …) as well as the "new" league, WebSockets. I would definitely prefer the approach with WebSockets, or using TCP sockets directly. But this post is not about the pros/cons of these two technologies, so please restrain yourself from getting sidetracked in the comments.
At the moment, the following projects exist which are very similar to my needs:
- Simperium (simperium.com): this looks very promising, but the core/server is sadly not open source, and god knows when, if ever, that step happens
- Realtime.co (framework.realtime.co/storage): a hosted service, but the same principle
- Some frameworks for building servers, such as Atmosphere (Java, no WAMP), Cometd (Java, project page looks like it's stuck in the 90s), Autobahn (Python, WAMP)
My actual favorite is the Autobahn framework (autobahn.ws), especially using the WAMP protocol (a subprotocol running over WebSocket), as it offers exactly what I need. So the idea would be to build a Python backend/server with Autobahn Python (based on the Twisted framework) which manages all socket (WAMP) connections and includes a PostgreSQL database for data storage. WAMP libraries already exist for all the desired clients. The server would need to be able to do the typical REST API things (a rough sketch of what I have in mind follows the list below):
- Send, update, and delete requested data (JSON/binary) from/to server/clients
- Synchronization & automatic conflict management
- Offline handling when the connection breaks, with automatic restart when the connection is available again
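To make it concrete, something like this minimal Autobahn sketch is the shape I have in mind; the URIs are made up and save_to_db is a placeholder for the PostgreSQL part:

```python
# A rough sketch: an Autobahn (Twisted-based) WAMP component that
# stores incoming data and pushes change notifications to subscribers.
from autobahn.twisted.wamp import ApplicationSession, ApplicationRunner
from twisted.internet.defer import inlineCallbacks

def save_to_db(key, value):
    pass  # placeholder: write to the PostgreSQL database here

class StorageSession(ApplicationSession):
    @inlineCallbacks
    def onJoin(self, details):
        def store(key, value):
            save_to_db(key, value)
            # Push the change to every subscribed client immediately.
            self.publish("com.example.changed", key)
            return "ok"
        yield self.register(store, "com.example.store")

if __name__ == "__main__":
    runner = ApplicationRunner("ws://127.0.0.1:8080/ws", "realm1")
    runner.run(StorageSession)
```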
So finally the questions:
- Have I missed an open source project which covers exactly my needs?
- If I wanted to develop my own server with Autobahn and a database, could you point me in the right direction? I have a lot of concerns and not enough deep understanding. I know Autobahn already gives you a server, but that one is not very close to my final needs. How do you build a server efficiently so that it can handle all the connected sockets? How do you handle it when a client needs a server push? Are there schemas, models, or concepts for how such a server should look?
- Twisted is a very powerful Python framework, but it's not regarded as the most convenient for writing apps. Still, I guess a socket-based storage server with DB access should be possible? If I run Twisted as a web resource and develop server components with another Python framework, would this compromise the latency/performance much?
- Is such a server backend with a lot of data storage (JSON fields and also binary data such as documents and images) reasonable for a single developer/small team to build on sockets, or is this something only bigger companies like Dropbox can do at the moment?
Thank you very much for your help & time!
So finally the questions:
Have I missed an open source project which covers exactly my needs?
No, you've covered the open source projects. Open source only gets you about halfway there, though. Implementing a global realtime network requires equal parts implementation and operations. You have to think about dropped messages, retries, what happens if a particular geography gets hot, how you scale your servers, etc. I would argue that an open source solution won't achieve what you want unless you're willing to invest significant resources into operations. I would recommend a service like PubNub: http://pubnub.com
If I wanted to develop my own server with Autobahn and a database, could you point me in the right direction? I have a lot of concerns and not enough deep understanding. I know Autobahn already gives you a server, but that one is not very close to my final needs. How do you build a server efficiently so that it can handle all the connected sockets? How do you handle it when a client needs a server push? Are there schemas, models, or concepts for how such a server should look?
A good database to back a realtime framework would be Cassandra because it supports high write volumes and handles time series data well: http://cassandra.apache.org/.
Twisted is a very powerful Python framework, but it's not regarded as the most convenient for writing apps. Still, I guess a socket-based storage server with DB access should be possible? If I run Twisted as a web resource and develop server components with another Python framework, would this compromise the latency/performance much?
I would not use Twisted. I would use Gevent: http://www.gevent.org/. It's coroutine-based, so you don't get into callback hell. To support more connections, you just increase your greenlet pool listening on the socket.
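A minimal sketch of what that looks like (the port and pool size are arbitrary):

```python
# A minimal sketch: a Gevent socket server where every connection gets
# its own greenlet, capped by a pool.
from gevent.pool import Pool
from gevent.server import StreamServer

def handle(sock, address):
    # Blocking-looking code; gevent switches greenlets on every wait.
    while True:
        data = sock.recv(4096)
        if not data:
            break
        sock.sendall(data)  # echo back, standing in for real work

pool = Pool(10000)  # raise this to accept more concurrent clients
StreamServer(("0.0.0.0", 6000), handle, spawn=pool).serve_forever()
```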
Is such a server backend with a lot of data storage (JSON fields and also binary data such as documents and images) reasonable for a single developer/small team to build on sockets, or is this something only bigger companies like Dropbox can do at the moment?
Again, I would not build this on your own. A service like PubNub (http://pubnub.com), which takes care of all the operational issues for you and has a clean API, would serve your needs with minimal cost. PubNub takes care of the protocol for you, so if you're on a mobile device that doesn't support WebSockets, it will use TCP, HTTP, or whatever the best transport for the device is.
I'm interested in something based on Jabber, but I didn't find a free/open-source one, so I'm thinking of writing one.
I've installed a Jabber server and am now thinking about the ways I can write the client. I'm thinking of one of these two methods.
1) An AJAX call made to a Jabber script running on the web server that takes care of connecting to the Jabber server. But then I thought that, because of the dependencies involved in the Jabber client, it might end up consuming too much memory once a few clients connect.
2) The other method is to run a client as a daemon that takes care of all the heavy lifting. This way I need only one instance of the client, which sends spoofed messages (the sender's name being whatever the user entered on the site). A simple script running on the web server talks to this daemon over some sort of API (XML-RPC or MessagePack, maybe?)
I think #2 is better but I'm not sure. Are there other ways I can implement this? I'm considering using Perl or Python for this.
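To illustrate #2, here is a rough sketch of the daemon half in Python; the actual XMPP relay is left as a stub, and the port is arbitrary:

```python
# A rough sketch of approach #2: a daemon that would hold one long-lived
# Jabber connection and expose a tiny XML-RPC API to the web scripts.
from xmlrpc.server import SimpleXMLRPCServer

def send_chat(sender, recipient, body):
    # Placeholder: hand the message to the persistent XMPP client here,
    # spoofing `sender` as the name the user entered on the site.
    print("relay %s -> %s: %s" % (sender, recipient, body))
    return True

server = SimpleXMLRPCServer(("127.0.0.1", 9000))
server.register_function(send_chat)
server.serve_forever()  # web scripts call send_chat over XML-RPC
```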
Jabber is usually called XMPP nowadays, and there are dozens of clients and servers, something for every language. If you are using Javascript (you mention Ajax), you probably want Strophe. Most servers are modular, so you only load the features you need (consider Tigase, ejabberd, or xmpppy). Writing your own is even worse an idea than it sounds.
BOSH
Install Prosody, because it is really easily installed and has BOSH support built in. You could skip this, but then you need to find out how to use BOSH via ejabberd.
Use strophe.js to implement this (using BOSH). New browsers support cross-domain requests (CORS; read the proxy-less BOSH part). For old browsers you could use a proxy, or use Flash in the middle as a proxy.
Read Professional XMPP Programming with JavaScript and jQuery to learn Strophe. It even has chapters explaining how to create a chat.
Node.js
Or you could consider installing Node.js and creating your chat system using socket.io.
I need to build a webservice that is very computationally intensive, and I'm trying to get my bearings on how best to proceed.
I expect users to connect to my service, at which point some computation is done for some amount of time, typically less than 60 seconds. The user knows they need to wait, so this is not really a problem. My question is: what's the best way to structure a service like this that leaves me with the least amount of headache? Can I use Node.js, web.py, CherryPy, etc.? Do I need a load balancer sitting in front of these pieces if they're used? I don't expect huge numbers of users, perhaps hundreds or into the thousands. I'll need a number of machines to host this number of users, of course, but this is uncharted territory for me, and if someone can give me a few pointers or things to read, that would be great.
Thanks.
Can I use Node.js, web.py, CherryPy, etc.?
Yes. Pick one. Django is nice, also.
Do I need a load balancer sitting in front of these pieces if used?
Almost never.
I'll need a number of machines to host this number of users,
Doubtful.
Remember that each web transaction has several distinct (and almost unrelated) parts.
A front end (Apache HTTPD or NGINX or similar) accepts the initial web request. It can handle serving static files (.css, .js, images, etc.) so your main web application is uncluttered by this.
A reasonably efficient middleware like mod_wsgi can manage dozens (or hundreds) of backend processes.
If you choose a clever backend processing component like celery, you should be able to distribute the "real work" to the minimal number of processors to get the job done.
The results are fed back into Apache HTTPD (or NGINX) via mod_wsgi to the user's browser.
Now the backend processes (managed by celery) are divorced from the essential web server. You achieve a great deal of parallelism with Apache HTTPD, mod_wsgi, and celery, allowing you to use every scrap of processor resource.
Further, you may be able to decompose your "computationally intensive" process into parallel processes -- a Unix Pipeline is remarkably efficient and makes use of all available resources. You have to decompose your problem into step1 | step2 | step3 and make celery manage those pipelines.
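A minimal sketch of that pipeline in celery terms; the broker URL is a placeholder and each task body is a stub:

```python
# A minimal sketch of the step1 | step2 | step3 idea as a celery chain.
from celery import Celery, chain

app = Celery("pipeline", broker="redis://localhost:6379/0")

@app.task
def step1(data):
    return data  # stub: first stage of the computation

@app.task
def step2(data):
    return data  # stub: second stage

@app.task
def step3(data):
    return data  # stub: final stage

# The equivalent of step1 | step2 | step3; each stage can run on
# whichever worker is free.
result = chain(step1.s("payload"), step2.s(), step3.s())()
```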
You may find that this kind of decomposition leads to serving a far larger workload than you might have originally imagined.
Many Python web frameworks will keep the user's session information in a single common database. This means that all of your backends can -- without any real work -- move the user's session from web server to web server, making "load balancing" seamless and automatic. Just have lots of HTTPD/NGINX front-ends that spawn Django (or web.py or whatever) which all share a common database. It works remarkably well.
I think you can build it however you like, as long as you can make it an asynchronous service so that the users don't have to wait.
Unless, of course, the users don't mind waiting in this context.
I'd recommend using nginx, as it can handle rewrites/balancing/SSL, etc., with a minimum of fuss.
If you want to make your web services asynchronous, you can try Twisted. It is a framework oriented toward asynchronous tasks and implements many network protocols. It is easy to offer these services via XML-RPC (just put xmlrpc_ as the prefix of your method names). On the other hand, it scales very well to hundreds and thousands of users.
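A minimal sketch of that (the method and port are arbitrary):

```python
# A minimal sketch: a Twisted XML-RPC service. Any method named with
# the xmlrpc_ prefix is exposed to clients automatically.
from twisted.web import xmlrpc, server
from twisted.internet import reactor

class Compute(xmlrpc.XMLRPC):
    def xmlrpc_add(self, a, b):
        return a + b  # callable remotely as "add"

reactor.listenTCP(7080, server.Site(Compute()))
reactor.run()
```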
Celery is also a good option for making the most computationally intensive tasks asynchronous. It integrates very well with Django.
I have a Python-driven web interface powered by Apache 2.2 with mod_python and Python 2.4. I need to make an asynchronous process appear synchronous to users of this web interface.
When users access one module on this website:
An external SOAP interface will be contacted with a unique identifier and will respond with a number N
The external interface will respond asynchronously by contacting a SOAP server on my machine between 1 and 10 times (the number N tells us how many responses we will receive)
I need to somehow aggregate these responses and pass them to the original module which will display the information back to the user. The goal is to make the process appear synchronous to the user.
What is the best way to handle this synchronization issue? Is this something Twisted would be well-suited for?
I am not restricting myself to Python for the solution, though it is preferred because everything else on the server is in Python. I prefer a solution that is both scalable and will take a minimal amount of programming time (though I understand that these attributes are somewhat at odds).
Maybe you can use Orbited to get AJAX push with long-lived HTTP connections to your web clients. Orbited is based on Twisted, so I think it makes sense to look at it if you already know Twisted. Have a look at this tutorial to get started.
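And if you do end up handling the SOAP callbacks in Twisted itself, the aggregation step might look roughly like this minimal sketch (all names here are illustrative):

```python
# A sketch of the aggregation idea in Twisted: one Deferred per expected
# SOAP callback, gathered into a single result that fires once all N
# responses have arrived.
from twisted.internet import defer

class ResponseCollector:
    def __init__(self, n):
        self._pending = [defer.Deferred() for _ in range(n)]
        self._next = 0
        # Fires with a list of all payloads once every callback arrives.
        self.done = defer.gatherResults(self._pending)

    def response_received(self, payload):
        # Called by the local SOAP server for each incoming response.
        self._pending[self._next].callback(payload)
        self._next += 1

# Usage sketch:
#   collector = ResponseCollector(n)
#   collector.done.addCallback(render_results_back_to_the_user)
```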