Django TCP server with tornado - python

I want my Django app to communicate with a remote computer over a TCP/IP socket, and I would like that socket to be available at all times. I would like to use the Tornado library. Since I'm only familiar with writing views, models, and the like, I'm not entirely sure where to fit that into my codebase.
I was thinking about writing a management command that would run Tornado's TCP server (see http://tornado.readthedocs.io/en/latest/tcpserver.html), but how could I call .stop() on my server once the management command quits? I wouldn't want it to spawn any threads that don't exit when my management command exits, or end up with multiple open sockets, because I just want one.
Of course I would also like the listener to reside somewhere in my Django program and have access to it, not only within the management command code. I was thinking about importing a class from Django's settings.
Am I thinking in the right direction, or is there a different, better approach?
EDIT: As to why I would want to do this:
I've got a microcontroller I want to communicate with, and I wouldn't want to implement/parse HTTP on it. I would also like to periodically send some indication that the connection is alive, and HTTP doesn't seem like the way to go.

A management command is a nice approach, but I'd be reluctant to launch a server with it. A Tornado server is a complex thing with a lot of state (including state outside of your codebase, like nginx, Apache or HAProxy) and varying health. Management commands aren't designed to deal with all of this.
It's probably fine for development, though, and in that case you can easily keep your management command from exiting before the server does by calling IOLoop.current().start() right inside the command, as in the sketch below.
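A minimal sketch of such a command, assuming Tornado 4.3 or newer; the module path, class names, and port are invented for illustration:

    # myapp/management/commands/run_tcp_server.py (path and names are illustrative)
    import signal

    from django.core.management.base import BaseCommand
    from tornado.ioloop import IOLoop
    from tornado.iostream import StreamClosedError
    from tornado.tcpserver import TCPServer


    class EchoServer(TCPServer):
        async def handle_stream(self, stream, address):
            # Keep reading newline-terminated messages until the peer disconnects.
            while True:
                try:
                    data = await stream.read_until(b"\n")
                    await stream.write(b"ack: " + data)
                except StreamClosedError:
                    break


    class Command(BaseCommand):
        help = "Run the Tornado TCP server (development use)"

        def handle(self, *args, **options):
            server = EchoServer()
            server.listen(8888)

            # Stop the server and the IOLoop cleanly on Ctrl-C / SIGTERM,
            # so no threads or sockets outlive the management command.
            def shutdown(signum, frame):
                io_loop = IOLoop.current()
                io_loop.add_callback_from_signal(server.stop)
                io_loop.add_callback_from_signal(io_loop.stop)

            signal.signal(signal.SIGINT, shutdown)
            signal.signal(signal.SIGTERM, shutdown)

            # Blocks here until the IOLoop is stopped.
            IOLoop.current().start()

Because this runs inside a management command, Django is already set up, so the stream handler can import and use your models directly.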
For production environment I would advise to use contemporary orchestrating tools like Docker Compose, or if you plan to span your system over several machines, Docker Swarm or Kubernetes. These tools will allow you to start, shut down, scale and check health of individual components in a reliable manner, without reinventing the wheel with a set of management commands.
Either way, if your Tornado code lives in the same project as Django, you'll be able to access the database through your Django models and reuse other parts of the project. Beyond that, something launched from a management command gains no special advantage from the running Django server.

Related

twisted web for production server service

I would like to deploy several WSGI web applications with Twisted on a debian server, and need some direction for a solid production setup. These applications will be running 24/7.
I need to run several configurations, each binding to different ports/interfaces/privileges.
I want to do as much of this in python as possible.
I do not want to package my applications with a program like 'tap2deb'.
What is the best way to implement each application as a system service? Do I need some /etc/init.d shell scripts, or can I manage this with Python? (I don't want anything quite as heavy as Daemontools.)
If I use twistd to manage most of the configuration/process management, what kind of wrappers/supervisors do I need to put in place?
I would like centralized management, but restricting control to the parent user account is not a problem.
The main problem I want to avoid is having to SSH into my server once a day to restart a blocking/crashed application.
I have found several good references for launching daemon processes with Python; see daemoncmd on PyPI.
I'm still coming up a little short on monitoring/alert solutions (in Python).

Running Python through FastCGI with nginx on Ubuntu

I've already looked at other threads on this, but most don't go into enough setup detail which is where I need help.
I have an Ubuntu based VPS running with nginx, serving PHP sites through php-cgi on port 9000.
I'd like to start doing more with Python, so as my first Python script I've written a deployment script, which I essentially want to use as a post-receive hook on my local GitLab server. I can run this script successfully with python script.py on the command line, but in order to use it as a post-receive hook I need to be able to reach it via HTTP.
I looked at this guide on the nginx wiki, but partway down it says to:
And start the django fastcgi process:
python ./manage.py runfcgi host=127.0.0.1 port=8080
Now, like I said, I am pretty new to Python, and I have never used the Django framework. Can anyone advise on how I am supposed to start the FastCGI server? Do I replace ./manage.py with the name of my script? Any help would be appreciated, as everything I've found online refers to working with Django.
Do I replace ./manage.py with the name of my script?
No. It's highly unlikely your script is a FastCGI server, or that it can accept HTTP requests of any kind since you mention running it over the command line. (From what little I know of FastCGI, an app supporting it has to be able to handle a stream of requests coming in over stdin in a specific format, so there's definitely some plumbing involved.)
I'd say the easiest approach would be to use some web framework just to act as HTTP/FastCGI middleware. For your use case, a "microframework" like Flask (or even Paste, though I found its documentation inscrutable) sounds like it'd work fine. The idea would be to have two interfaces to your main code: one that handles command-line arguments and one that handles an HTTP request, with both ultimately calling the one function that actually does the work. (That is, if you want to keep the command-line version of the app.)
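A rough sketch of that split, assuming Flask; the module name, the route, and the deploy() function standing in for your script's real logic are all invented for illustration:

    # deploy_service.py (hypothetical name; deploy() stands in for your script's logic)
    import sys

    from flask import Flask, jsonify, request

    app = Flask(__name__)


    def deploy(ref="master"):
        # ... the work currently done by script.py goes here ...
        return {"deployed": ref}


    @app.route("/deploy", methods=["POST"])
    def deploy_view():
        # HTTP interface: the post-receive hook POSTs here.
        ref = request.form.get("ref", "master")
        return jsonify(deploy(ref))


    if __name__ == "__main__":
        # Command-line interface: python deploy_service.py [ref]
        print(deploy(sys.argv[1] if len(sys.argv) > 1 else "master"))

The same app object is then what you would hand to whatever FastCGI/uWSGI worker you end up deploying, instead of running the script directly.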
The Flask documentation also mentions using uWSGI or standalone workers as deployment options. I'm not familiar with the former; the latter I wouldn't recommend for a simple, low-traffic app for the same reasons as the approach in the next paragraph.
Considering you use a VPS, you might even be able to just run the app as a standalone server process using the http.server module, but I'm not sure that's the better choice unless you absolutely want to avoid using any sort of framework. You'd have to make sure the app starts up when the server is rebooted and restarts when it crashes, and it seems easier to just have nginx do the job of supervisor.
UPDATE: Scratch that, it seems that nginx won't handle supervising a FastCGI worker process for you, which would've been the main advantage of the approach. In light of that it doesn't matter which of the three approaches you use since you'll have to set up a service supervisor one way or the other. I'd say go with uWSGI since flup (which is needed for Flask+FastCGI) seems abandoned since 2011, and the uWSGI protocol is apparently supported in nginx natively. Otherwise you'd need to use a different webserver than nginx, one that will manage a FastCGI worker for you. If this is an option, I'd consider Cherokee, which can be configured using a web GUI.
tl;dr: you need to write a (very simple) webapp. While it is feasible to do this without a web framework of any kind, in my opinion using one is easier, since you get some (nontrivial) plumbing for free and there's a lot of guidance available on how to deploy them.

what's a good module for writing an http web service interface for a daemon?

To give a little background, I'm writing (or am going to write) a daemon in Python for scheduling tasks to run at user-specified dates. The scheduler daemon also needs to have a JSON-based HTTP web service interface (buzzword mania, I know) for adding tasks to the queue and monitoring the scheduler's status. The interface needs to receive requests while the daemon is running, so they either need to run in a separate thread or cooperatively multitask somehow. Ideally the web service interface should run in the same process as the daemon, too.
I could think of a few ways to do it, but I'm wondering if there's some obvious module out there that's specifically tailored for this kind of thing. Any suggestions about what to use, or about the project in general are quite welcome. Thanks! :)
Check out the BaseHTTPServer module -- a basic HTTP server bundled with Python.
http://docs.python.org/library/basehttpserver.html
You can spin up a second thread and have it serve your requests for you very easily (probably < 30 lines of code). And it all runs in the same process and Python interpreter space, so it can access all your objects, etc.
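Roughly what that looks like; this sketch uses the Python 3 spelling (http.server), while in Python 2 the same classes live in the BaseHTTPServer module linked above, and the endpoint and shared dict are invented for illustration:

    import json
    import threading
    from http.server import BaseHTTPRequestHandler, HTTPServer

    # Shared, in-process state that the scheduler daemon updates.
    scheduler_status = {"queued": 0, "running": 0}


    class StatusHandler(BaseHTTPRequestHandler):
        def do_GET(self):
            body = json.dumps(scheduler_status).encode("utf-8")
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)


    def start_web_interface(port=8765):
        server = HTTPServer(("127.0.0.1", port), StatusHandler)
        # Serve requests from a second thread so the daemon's main loop keeps running.
        threading.Thread(target=server.serve_forever, daemon=True).start()
        return server  # call server.shutdown() when the daemon exits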
I'm not sure I understand your question properly, but take a look at Twisted.
I believe almost any Python web framework would do the job here.
You could pick one like CherryPy, which is small enough to integrate into your system. CherryPy also includes a pure-Python WSGI server suitable for production.
Its performance may not be as good as Apache's, but it's already very stable.
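A minimal sketch of what a CherryPy-based interface for the scheduler could look like; the endpoints and the in-process task_queue are invented for illustration:

    import cherrypy

    # Hypothetical in-process queue shared with the scheduler daemon.
    task_queue = []


    class SchedulerApi:
        @cherrypy.expose
        @cherrypy.tools.json_out()
        def status(self):
            return {"queued": len(task_queue)}

        @cherrypy.expose
        @cherrypy.tools.json_in()
        @cherrypy.tools.json_out()
        def add(self):
            # Expects a POST with a JSON body describing the task.
            task = cherrypy.request.json
            task_queue.append(task)
            return {"queued": len(task_queue)}


    if __name__ == "__main__":
        cherrypy.config.update({"server.socket_host": "127.0.0.1",
                                "server.socket_port": 8080})
        # quickstart() runs CherryPy's built-in pure-Python HTTP/WSGI server.
        cherrypy.quickstart(SchedulerApi())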
Don't reinvent the wheel!
Run jobs via a cron script, and create a separate web interface using, for example, Django or Tornado.
Connect them via a database. Even SQLite will do the job if you don't want to scale to more machines.
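A tiny sketch of the shared-database idea; the SQLite path, table, and columns are made up, and the web interface would simply insert rows into the same file:

    # run_due_tasks.py -- invoked from cron, e.g. every minute.
    import sqlite3
    import subprocess
    import time

    # The web interface writes rows into this same file (path is illustrative).
    conn = sqlite3.connect("/var/lib/scheduler/tasks.db")
    conn.execute("""CREATE TABLE IF NOT EXISTS tasks (
                        id INTEGER PRIMARY KEY,
                        command TEXT NOT NULL,
                        run_at INTEGER NOT NULL,
                        done INTEGER DEFAULT 0)""")

    now = int(time.time())
    due = conn.execute(
        "SELECT id, command FROM tasks WHERE done = 0 AND run_at <= ?", (now,)
    ).fetchall()

    for task_id, command in due:
        subprocess.call(command, shell=True)
        conn.execute("UPDATE tasks SET done = 1 WHERE id = ?", (task_id,))
        conn.commit()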

When deploying Python, what web server options do we have? Is the process inefficient at all?

I think in the past python scripts would run off CGI, which would create a new thread for each process.
I am a newbie so I'm not really sure, what options do we have?
Is the web server pipeline that Python works under any more or less efficient than, say, PHP's?
You can still use CGI if you want, but the normal approach these days is using WSGI on the Python side, e.g. through mod_wsgi on Apache or via bridges to FastCGI on other web servers. At least with mod_wsgi, I know of no inefficiencies with this approach.
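For reference, the WSGI interface itself is tiny; a minimal application is just a callable, and the snippet below is the standard shape rather than anything framework-specific:

    # A minimal WSGI application. mod_wsgi (or any WSGI server) imports this
    # callable and invokes it for each request; no per-request process startup
    # is involved, which is the main efficiency win over classic CGI.
    def application(environ, start_response):
        body = b"Hello from WSGI\n"
        start_response("200 OK", [("Content-Type", "text/plain"),
                                  ("Content-Length", str(len(body)))])
        return [body]


    if __name__ == "__main__":
        # Local testing only; under Apache, mod_wsgi serves this application directly.
        from wsgiref.simple_server import make_server
        make_server("127.0.0.1", 8000, application).serve_forever()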
BTW, your description of CGI ("create a new thread for each process") is inaccurate: what it actually does is create a new process to service each request (and that process typically needs to open a database connection, import all needed modules, etc., which is what may make it slow even on platforms where forking a process, per se, is pretty fast, such as all Unix variants).
I would suggest Django, http://www.djangoproject.com. It is very convenient to use and has everything you need for making web services. The most efficient way to use it is to run it via Apache's mod_wsgi and have Apache itself serve the static files.
I suggest CherryPy (http://www.cherrypy.org/). It is very convenient to use, has everything you need for making web services, and is still quite simple (no mega-framework). The most efficient way to use it is to run it as a self-contained server on localhost, put it behind Apache via a proxy statement, and have Apache itself serve the static files.
This generally has better performance than solutions such as CGI and mod-python, as the Python process running the web service runs separate from the main web server, so it can cache stuff and easily re-use resources (like DB handles).
Also, you can then tweak the number of worker threads for Apache and your web application separately, resulting in better scalability.

What are some successful methods for deploying a Django application on the desktop?

I have a Django application that I would like to deploy to the desktop. I have read a little on this and see that one way is to use freeze. I have used this with varying success in the past for Python applications, but am not convinced it is the best approach for a Django application.
My questions are: what are some successful methods you have used for deploying Django applications? Is there a de facto standard method? Have you hit any dead ends? I need a cross platform solution.
I did this a couple of years ago for a Django app running as a local daemon. It was launched by Twisted and wrapped with py2app for Mac and py2exe for Windows. There were both a browser and an Adobe AIR front end hitting it. It worked pretty well for the most part, but I didn't get to deploy it out in the wild because the larger project got postponed. It's been a while and I'm a bit rusty on the details, but here are a few tips:
IIRC, the most problematic thing was Python loading C extensions. I had an Intel assembler module written with C "asm" commands that I needed to load to get low-level system data. That took a while to get working across both platforms. If you can, try to avoid C extensions.
You'll definitely need an installer. Most likely the app will end up running in the background, so you'll need to mark it as a Windows service, Unix daemon, or Mac launchd application.
In your installer you'll want to provide a way to locate a free local TCP port. You may have to write a little stub routine that the installer runs or use the installer's built-in scripting facility to find a port that hasn't been taken and save it to a config file. You then load the config file inside your settings.py and whatever front-end you're going to deploy. That's the shared port. Or you could just pick a random number and hope no other service on the desktop steps on your toes :-)
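A rough sketch of such a stub, assuming the installer (or a first-run script) writes the chosen port to a small config file that settings.py and the front end later read; the file name is invented:

    # find_port.py -- run once by the installer or a first-run script (illustrative).
    import json
    import socket


    def find_free_port():
        # Binding to port 0 asks the OS for any currently free ephemeral port.
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as sock:
            sock.bind(("127.0.0.1", 0))
            return sock.getsockname()[1]


    def write_config(path="app_config.json"):
        port = find_free_port()
        with open(path, "w") as fh:
            json.dump({"port": port}, fh)
        return port


    # settings.py and the front end would then read the same file, e.g.:
    #     with open("app_config.json") as fh:
    #         PORT = json.load(fh)["port"]
    if __name__ == "__main__":
        print("Chose port", write_config())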
If your front-end and back-end are separate apps then you'll need to design an API for them to talk to each other. Make sure you provide a flag to return the data in both raw and human-readable form. It really helps in debugging.
If you want Django to be able to send notifications to the user, you'll want to integrate with something like Growl or get Python for Windows extensions so you can bring up toaster pop-up notifications.
You'll probably want to stick with SQLite for the database, in which case you'll want to use semaphores to handle multiple requests vying for the database (or any other shared resource). If your app is accessed via a browser, users can have multiple windows open and hit the app at the same time. If you're using a custom front end (native, AIR, etc.) then you can control how many instances are running at a given time, so it won't be as much of an issue.
You'll also want some sort of access to local system logging facilities, since the app will be running in the background; make sure you trap all your exceptions and route them into the syslog. A big hassle was debugging Windows service startup issues; it would have been impossible without system logging.
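A small sketch of wiring Python's logging into syslog and trapping uncaught exceptions; on Windows you would swap the handler for logging.handlers.NTEventLogHandler or a file handler, and the logger name is invented:

    import logging
    import logging.handlers
    import sys

    log = logging.getLogger("mydesktopapp")
    log.setLevel(logging.INFO)

    # /dev/log talks to the local syslog daemon on most Linux systems
    # (macOS uses /var/run/syslog); Windows needs a different handler.
    handler = logging.handlers.SysLogHandler(address="/dev/log")
    handler.setFormatter(logging.Formatter("mydesktopapp: %(levelname)s %(message)s"))
    log.addHandler(handler)


    def log_uncaught(exc_type, exc_value, exc_tb):
        # Route any exception that escapes the main loop into syslog.
        log.error("Uncaught exception", exc_info=(exc_type, exc_value, exc_tb))


    sys.excepthook = log_uncaught

    log.info("service starting")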
Be careful about hardcoded paths if you want to stay cross-platform. You may have to rely on the installer to write a config file entry with the actual installation path which you'll have to load up at startup.
Test actual deployment especially across a variety of firewalls. Some of the desktop firewalls get pretty aggressive about blocking access to network services that accept incoming requests.
That's all I can think of. Hope it helps.
If you want a good solution, you should give up on making it cross platform. Your code should all be portable, but your deployment - almost by definition - needs to be platform-specific.
I would recommend using py2exe on Windows, py2app on MacOS X, and building deb packages for Ubuntu with a .desktop file in the right place in the package for an entry to show up in the user's menu. Unfortunately for the last option there's no convenient 'py2deb' or 'py2xdg', but it's pretty easy to make the relevant text file by hand.
And of course, I'd recommend bundling in Twisted as your web server for making the application easily self-contained :).
