So I've been working on my first Django/Python project and I have my production server up and running. I was wondering whether it's possible to make Python/FastCGI (not really sure which is responsible for the task) recompile my code. As of right now, when I upload updated code, I need to restart the server for the changes to take effect. I read that you can add some kind of mysite.fcgi file to lighttpd so that it sees you've updated the code; can you do the same for Nginx/FastCGI?
For anyone else who was interested in my question: this is only a partial solution, but I ended up finding my answer here: How to gracefully restart django running fcgi behind nginx?
You can just run the script (I'm going to modify it a bit) every time you edit your code, and it will gracefully restart everything without dropping connections.
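For reference, the stop-then-start core of the idea is only a few lines (the paths here are illustrative, and the runfcgi options come from flup; note the linked answer does better by overlapping old and new workers so in-flight requests aren't dropped):

import os, signal, subprocess

PIDFILE = '/var/run/mysite-fcgi.pid'  # illustrative path
SOCKET = '/tmp/mysite-fcgi.sock'      # must match nginx's fastcgi_pass

# retire the old worker pool, if one is running
if os.path.exists(PIDFILE):
    os.kill(int(open(PIDFILE).read().strip()), signal.SIGTERM)

# start a fresh pool; nginx reconnects to the socket on the next request
subprocess.check_call(['python', 'manage.py', 'runfcgi',
                       'method=prefork', 'socket=' + SOCKET,
                       'pidfile=' + PIDFILE])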
This is a general guide from the mod_wsgi project that outlines how you can monitor code changes from your app_wsgi.py and restart the current process if any of the modules have changed. You need to restart the whole Python process because a freshly reloaded module can still hold outdated references to other modules that haven't yet been detected and reloaded themselves, especially with multiple threads in play.
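The core of that guide's technique, condensed, looks roughly like this (assumes mod_wsgi daemon mode, where killing the current process just makes Apache spawn a fresh one):

import os, signal, sys, threading, time

def modified(mtimes):
    # compare each loaded module's file mtime against what we last saw
    for module in list(sys.modules.values()):
        path = getattr(module, '__file__', None)
        if not path:
            continue
        if path.endswith('.pyc') or path.endswith('.pyo'):
            path = path[:-1]
        try:
            mtime = os.stat(path).st_mtime
        except OSError:
            continue
        if path not in mtimes:
            mtimes[path] = mtime
        elif mtimes[path] != mtime:
            return True
    return False

def start_monitor(interval=1.0):
    mtimes = {}
    def loop():
        while True:
            if modified(mtimes):
                # restart this daemon process; mod_wsgi respawns it cleanly
                os.kill(os.getpid(), signal.SIGINT)
            time.sleep(interval)
    thread = threading.Thread(target=loop)
    thread.setDaemon(True)
    thread.start()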
If you want something that works nicely with nginx, Django and WSGI apps in general, take a peek at Spawning as your WSGI server. Its approach to code reloading is about as graceful as it gets.
It has great documentation, a well documented request handling model, and it just works, which makes it a no-brainer to configure. You'd need less than five minutes from now to have your Django instance running on Spawning. Here's another topical blog to get your juices flowing.
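If memory serves, launching a Django project under Spawning is a one-liner along these lines (the factory path comes from Spawning's Django support; check the docs for your version):

$ spawning --factory=spawning.django_factory.config_factory mysite.settings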
I created a Django application, it works fine on my machine.
After that I checked it out on the test server, but something doesn't work.
How can I debug it?
Is it possible to do it using the PyDev Eclipse plugin, or are there some other ways?
In the best case I would like to use "step into", "step over", "step out" but if it is not possible, simple logging is also OK.
UPDATE:
On my machine it runs with ./manage.py runserver but on the test machine it runs with Apache+mod_wsgi
This is a HOWTO for debugging application deployed via mod_wsgi:
http://code.google.com/p/modwsgi/wiki/DebuggingTechniques
Do you have root access to the test machine? That would make it much easier. You can stop the Apache service and bring it up in single-process mode for debugging.
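One trick from that HOWTO, roughly: run Apache in the foreground with a single worker (httpd -X, or apache2 -X) and drop into pdb from your WSGI script, wired up to that terminal:

import pdb, sys

def application(environ, start_response):
    # the debugger reads and writes the terminal where 'httpd -X' runs,
    # so step into/over/out all work as usual
    pdb.Pdb(stdin=sys.__stdin__, stdout=sys.__stdout__).set_trace()
    start_response('200 OK', [('Content-Type', 'text/plain')])
    return ['debugging\n']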
The simplest way is to just set DEBUG = True for a while; that should do for most problems. If you are on production and thus don't want to change that setting because of security concerns, I would recommend using Sentry (screenshot), a very nice dashboard-like overview of errors and problems that have occurred on your Django site.
Both solutions won't give you line-by-line debugging, unfortunately.
I would say by using https://github.com/django-extensions/django-extensions - a plugin that adds lots of functionality to Django. One addition is ./manage.py runserver_plus, which gives you an interactive console right in the server's error page (and more) whenever something goes wrong. It's ideal for debugging.
Not sure how to use it with WSGI, but a quick search should turn up several results.
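For the record, the basic setup is roughly this (assumes Werkzeug for the interactive console; check the django-extensions docs for your version):

# settings.py: register the plugin
INSTALLED_APPS += ('django_extensions',)

$ pip install django-extensions werkzeug
$ ./manage.py runserver_plus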
I'm using Git to push my code from my development machine to my testing server.
Dev: Mac, Python 2.6, SQLite
Test: Linux, Python 2.7, MySQL
I took an early dev database and exported it to MySQL for initial testing.
So now I'm regularly pushing new code to the testing server. In general it seems to be working well, but I'm getting an Integrity Error regarding multiple objects with same Primary Key occasionally.
Does this ring any bells at this point? Is there something inherently wrong with the setups? Obviously there are some configuration differences, for instance Python 2.6 vs 2.7. So if there were issues here, I was hoping somebody could pinpoint them before I try some platform config syncing.
Thanks!
I'm not able to answer this question directly.
Depending on the reasons why you have used a different Python environment for your testing server, there are a few options:
First, if you want to test to see whether your code functions in multiple environments, I recommend you look into py.test. It has support for distributed testing. This includes providing the ability to create a virtualenv for each Python version that you wish to test.
Once you've done this, it will be easier to tell whether your code, Django core, or MySQL is at fault. My suspicion is that there may be a problem with the database abstraction: it looks like SQLite is being tolerant where MySQL is not.
Secondly, it may be worthwhile to look into virtualenv yourself. This creates a standalone Python environment that makes replicating your dev setup much simpler.
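For example, to mirror the test box's Python 2.7 + MySQL environment locally (package names are the usual ones; adjust to taste):

$ virtualenv --python=python2.7 env
$ source env/bin/activate
$ pip install Django MySQL-python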
I have a Django application that I would like to deploy to the desktop. I have read a little on this and see that one way is to use freeze. I have used this with varying success in the past for Python applications, but am not convinced it is the best approach for a Django application.
My questions are: what are some successful methods you have used for deploying Django applications? Is there a de facto standard method? Have you hit any dead ends? I need a cross platform solution.
I did this a couple years ago for a Django app running as a local daemon. It was launched by Twisted and wrapped by py2app for Mac and py2exe for Windows. There was both a browser as well as an Air front-end hitting it. It worked pretty well for the most part but I didn't get to deploy it out in the wild because the larger project got postponed. It's been a while and I'm a bit rusty on the details, but here are a few tips:
IIRC, the most problematic thing was Python loading C extensions. I had an Intel assembler module written with C "asm" commands that I needed to load to get low-level system data. That took a while to get working across both platforms. If you can, try to avoid C extensions.
You'll definitely need an installer. Most likely the app will end up running in the background, so you'll need to mark it as a Windows service, Unix daemon, or Mac launchd application.
In your installer you'll want to provide a way to locate a free local TCP port. You may have to write a little stub routine that the installer runs or use the installer's built-in scripting facility to find a port that hasn't been taken and save it to a config file. You then load the config file inside your settings.py and whatever front-end you're going to deploy. That's the shared port. Or you could just pick a random number and hope no other service on the desktop steps on your toes :-)
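A minimal sketch of such a stub (binding to port 0 lets the OS hand you a free port):

import socket

def find_free_port():
    # bind to port 0 and the OS picks an unused port for us
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.bind(('127.0.0.1', 0))
    port = s.getsockname()[1]
    s.close()
    return port

# the installer would write the result into the config file that both
# settings.py and the front-end read at startup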
If your front-end and back-end are separate apps then you'll need to design an API for them to talk to each other. Make sure you provide a flag to return the data in both raw and human-readable form. It really helps in debugging.
If you want Django to be able to send notifications to the user, you'll want to integrate with something like Growl or get Python for Windows extensions so you can bring up toaster pop-up notifications.
You'll probably want to stick with SQLite for the database, in which case you'll want to make sure you use semaphores to tackle multiple requests vying for the database (or any other shared resource). If your app is accessed via a browser, users can have multiple windows open and hit the app at the same time. If using a custom front-end (native, Air, etc...) then you can control how many instances are running at a given time, so it won't be as much of an issue.
You'll also want some sort of access to local system logging facilities, since the app will be running in the background; make sure you trap all your exceptions and route them into the syslog. A big hassle was debugging Windows service startup issues; it would have been impossible without system logging.
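With the stdlib logging module that's roughly this (handler addresses are platform-specific; the names here are illustrative):

import logging, logging.handlers

log = logging.getLogger('mydesktopapp')  # illustrative name
log.setLevel(logging.INFO)
# '/dev/log' on Linux, '/var/run/syslog' on Mac; use NTEventLogHandler on Windows
log.addHandler(logging.handlers.SysLogHandler(address='/dev/log'))

try:
    main()  # hypothetical entry point for the background service
except Exception:
    log.exception('unhandled error in background service')
    raise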
Be careful about hardcoded paths if you want to stay cross-platform. You may have to rely on the installer to write a config file entry with the actual installation path which you'll have to load up at startup.
Test actual deployment especially across a variety of firewalls. Some of the desktop firewalls get pretty aggressive about blocking access to network services that accept incoming requests.
That's all I can think of. Hope it helps.
If you want a good solution, you should give up on making it cross platform. Your code should all be portable, but your deployment - almost by definition - needs to be platform-specific.
I would recommend using py2exe on Windows, py2app on MacOS X, and building deb packages for Ubuntu with a .desktop file in the right place in the package for an entry to show up in the user's menu. Unfortunately for the last option there's no convenient 'py2deb' or 'py2xdg', but it's pretty easy to make the relevant text file by hand.
And of course, I'd recommend bundling in Twisted as your web server for making the application easily self-contained :).
I am deploying a WSGI application. There are many ways to skin this cat. I am currently using apache2 with mod-wsgi, but I can see some potential problems with this.
So how can it be done?
1. Apache mod_wsgi (the other mod_wsgi's seem to not be worth it)
2. Pure Python web server, e.g. Paste, CherryPy, Spawning, Twisted.web
3. As 2, but with a reverse proxy in front (nginx, Apache 2, etc.) with good static file handling
4. Conversion to another protocol such as FastCGI with a bridge (e.g. Flup), running in a conventional web server.
More?
I want to know how you do it, and why it is the best way to do it. I would absolutely love you to bore me with details about the whats and the whys, application specific stuff, etc.
As always: It depends ;-)
When I don't need any Apache features, I go with a pure Python web server like Paste etc. Which one exactly depends on your application, I guess, and can be decided by doing some benchmarks. I always wanted to do some but never got around to it. I guess Spawning might have some advantages in using non-blocking IO out of the box, but I've sometimes had problems with it because of the patching it does.
You are always free to put Varnish in front as well, of course.
If Apache is required, I usually go with solution 3 so that I can keep the processes separate. You can also more easily move processes to other servers, etc. I simply like to keep things separate.
For static files I'm currently using a separate server for one project, which just serves static images/css/js. I'm using lighttpd as the web server, which has great performance (in this case I don't have Varnish in front anymore).
Another useful tool is supervisord for controlling and monitoring these services.
I am additionally using buildout for managing my deployments and development sandboxes (together with virtualenv).
I would absolutely love you to bore me with details about the whats and the whys, application specific stuff, etc
Ho. Well you asked for it!
Like Daniel I personally use Apache with mod_wsgi. It is still new enough that deploying it in some environments can be a struggle, but if you're compiling everything yourself anyway it's pretty easy. I've found it very reliable, even the early versions. Props to Graham Dumpleton for keeping control of it pretty much by himself.
However for me it's essential that WSGI applications work across all possible servers. There is a bit of a hole at the moment in this area: you have the WSGI standard telling you what a WSGI callable (application) does, but there's no standardisation of deployment; no single way to tell the web server where to find the application. There's also no standardised way to make the server reload the application when you've updated it.
The approach I've adopted is to put:
all application logic in modules/packages, preferably in classes
all website-specific customisations to be done by subclassing the main Application and overriding members
all server-specific deployment settings (eg. database connection factory, mail relay settings) as class __init__() parameters
one top-level ‘application.py’ script that initialises the Application class with the correct deployment settings for the current server, then runs the application in such a way that it can work deployed as a CGI script, a mod_wsgi WSGIScriptAlias (or Passenger, which apparently works the same way), or can be interacted with from the command line
a helper module that takes care of above deployment issues and allows the application to be reloaded when the modules the application is relying on change
So what the application.py looks like in the end is something like:
#!/usr/bin/env python
import os.path
basedir= os.path.dirname(__file__)

import MySQLdb
def dbfactory():
    return MySQLdb.connect(db= 'myappdb', unix_socket= '/var/mysql/socket', user= 'u', passwd= 'p')

def appfactory():
    import myapplication
    return myapplication.Application(basedir, dbfactory, debug= False)

import wsgiwrap
ismain= __name__=='__main__'
libdir= os.path.join(basedir, 'system', 'lib')
application= wsgiwrap.Wrapper(appfactory, libdir, 10, ismain)
The wsgiwrap.Wrapper checks every 10 seconds to see if any of the application modules in libdir have been updated, and if so does some nasty sys.modules magic to unload them all reliably. Then appfactory() will be called again to get a new instance of the updated application.
(You can also use command line tools such as
./application.py setup
./application.py daemon
to run any setup and background-tasks hooks provided by the application callable — a bit like how distutils works. It also responds to start/stop/restart like an init script.)
Another trick I use is to put the deployment settings for multiple servers (development/testing/production) in the same application.py script, and sniff ‘socket.gethostname()’ to decide which server-specific bunch of settings to use.
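i.e. something along these lines (the hostname and the alternate settings are made up for illustration):

import socket
import MySQLdb

if socket.gethostname() == 'liveserver':  # made-up hostname
    def dbfactory():
        return MySQLdb.connect(db= 'myappdb', unix_socket= '/var/mysql/socket', user= 'u', passwd= 'p')
    debug = False
else:
    def dbfactory():
        return MySQLdb.connect(db= 'myappdb_dev', user= 'dev', passwd= 'dev')
    debug = True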
At some point I might package wsgiwrap up and release it properly (possibly under a different name). In the meantime if you're interested, you can see a dogfood-development version at http://www.doxdesk.com/file/software/py/v/wsgiwrap-0.5.py.
The absolute easiest thing to deploy is CherryPy. Your web application can also become a standalone webserver. CherryPy is also a fairly fast server considering that it's written in pure Python. With that said, it's not Apache. Thus, I find that CherryPy is a good choice for lower volume webapps.
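For example, a minimal standalone setup with the CherryPy 3.x WSGI server (my_wsgi_app stands in for your real application callable):

from cherrypy import wsgiserver

def my_wsgi_app(environ, start_response):
    # placeholder WSGI app; swap in your actual application
    start_response('200 OK', [('Content-Type', 'text/plain')])
    return ['Hello from CherryPy\n']

server = wsgiserver.CherryPyWSGIServer(('0.0.0.0', 8080), my_wsgi_app)
try:
    server.start()
except KeyboardInterrupt:
    server.stop()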
Other than that, I don't think there's any right or wrong answer to this question. Lots of high-volume websites have been built on the technologies you talk about, and I don't think you can go too wrong with any of those ways (although I will say that I agree with mod_wsgi not being up to snuff on every non-Apache server).
Also, I've been using isapi_wsgi to deploy python apps under IIS. It's a less than ideal setup, but it works and you don't always get to choose otherwise when you live in a windows-centric world.
Nginx reverse proxy and static file serving + XSendfile + uploadprogress_module. Nothing beats it for the purpose.
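With nginx, the X-Sendfile idea is spelled X-Accel-Redirect; a Django view hands the actual file transfer off to nginx roughly like this (the /protected/ prefix must be declared internal in the nginx config):

from django.http import HttpResponse

def download(request, filename):
    response = HttpResponse()
    # nginx intercepts this header and serves the file itself
    response['X-Accel-Redirect'] = '/protected/' + filename
    del response['Content-Type']  # let nginx pick the content type
    return response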
On the WSGI side either Apache + mod_wsgi or cherrypy server. I like to use cherrypy wsgi server for applications on servers with less memory and less requests.
Reasoning:
I've done benchmarks with different tools for different popular solutions.
I have more experience with lower level TCP/IP than web development, especially http implementations. I'm more confident that I can recognize a good http server than I can recognize a good web framework.
I know Twisted much more than Django or Pylons. The http stack in Twisted is still not up to this but it will be there.
I'm using Google App Engine for an application I'm developing. It runs WSGI applications.
Here are a couple of bits of info on it.
This is the first web-app I've ever really worked on, so I don't have a basis for comparison, but if you're a Google fan, you might want to look into it. I've had a lot of fun using it as my framework for learning.
TurboGears (2.0)
TurboGears 2.0 is leaving beta within the next month (it has been in beta for plenty of time). 2.0 improves upon the 1.0 series and attempts to give you a best-of-breed WSGI stack, making some default choices for you if you want the least fuss.
It has the tg* tools for testing and deployment from the 1.x series, now transformed into paster equivalents in the 2.0 series, which should seem familiar if you've experimented with Pylons.
tg-admin quickstart —> paster quickstart
tg-admin info —> paster tginfo
tg-admin toolbox —> paster toolbox
tg-admin shell —> paster shell
tg-admin sql create —> paster setup-app development.ini
Pylons
If you'd like to be more flexible in your WSGI stack (choice of ORM, choice of templater, choice of form handling), Pylons is becoming the consolidated choice. This would be my recommended choice, since it offers excellent documentation and allows you to experiment with different components.
It is a pleasure to work with as a result, and works under Apache (for production deployment) or standalone (helpful for the testing and experimenting stage).
So it follows that you can do both with Pylons:
option 2 for the testing stage (standalone Python server)
option 4 for scalable production purposes (FastCGI, assuming the database you choose can keep up)
The Pylons admin interface is very similar to TurboGears. Here's a toy standalone example:
$ paster create -t pylons helloworld
$ cd helloworld
$ paster serve --reload development.ini
For production-class deployment, a setup guide for Apache + FastCGI + mod_rewrite is available here; this would scale up to most needs.
Apache httpd + mod_fcgid using web.py (which is a wsgi application).
Works like a charm.
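A minimal web.py app exposed as a WSGI callable looks about like this (the hello-world from web.py's docs; app.wsgifunc() is what the FastCGI bridge mounts):

import web

urls = ('/', 'index')

class index:
    def GET(self):
        return 'Hello, world!'

app = web.application(urls, globals())
application = app.wsgifunc()  # WSGI callable for mod_fcgid/flup to run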
We are using pure Paste for some of our web services. It is easy to deploy (with our internal deployment mechanism; we're not using Paste Deploy or anything like that) and it is nice to minimize the difference between production systems and what's running on developers' workstations. Caveat: we don't expect low latency out of Paste itself because of the heavyweight nature of our requests. In some crude benchmarking we did we weren't getting fantastic results; it just ended up being moot due to the expense of our typical request handler. So far it has worked fine.
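Serving an app with pure Paste is about this much code (my_wsgi_app is a placeholder for the real application):

from paste import httpserver

def my_wsgi_app(environ, start_response):
    # placeholder; substitute the actual application callable
    start_response('200 OK', [('Content-Type', 'text/plain')])
    return ['ok\n']

httpserver.serve(my_wsgi_app, host='127.0.0.1', port=8080)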
Static data has been handled by completely separate (and somewhat "organically" grown) stacks, including the use of S3, Akamai, Apache and IIS, in various ways.
Apache + mod_wsgi.
Simple, clean (only four lines of webserver config), and easy for other sysadmins to get their heads around.
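For the record, the minimal setup is roughly one WSGIScriptAlias line in the Apache config (e.g. WSGIScriptAlias / /srv/mysite/django.wsgi) pointing at a Python file like this classic Django handler script (paths are illustrative):

# django.wsgi -- the file WSGIScriptAlias points at
import os, sys
sys.path.append('/srv/mysite')  # illustrative project path
os.environ['DJANGO_SETTINGS_MODULE'] = 'mysite.settings'

import django.core.handlers.wsgi
application = django.core.handlers.wsgi.WSGIHandler()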