How to chroot Django - python

Can one run Django in a chroot? Notably, what's necessary in order to set up (for example) /var/www as a chroot'd directory and then have Django run in that chroot'd directory?
Thank you - I'm grateful for any input.

There are many reasons mod_wsgi is preferred for Python web app deployment. One is stability; another is the variety of configuration options... one of which is the ability to chroot the mod_wsgi daemon processes (available starting with version 3.0).
The chroot option is not yet documented for the WSGIDaemonProcess directive at http://code.google.com/p/modwsgi/wiki/ConfigurationDirectives#WSGIDaemonProcess, but there is enough documentation in the Changes in Version 3.0 notes.
You can also read a discussion of the feature at http://code.google.com/p/modwsgi/issues/detail?id=106

You will have to add a Python interpreter to that directory and add Django to it, of course.
After you've got the environment set up, you will have to create a wrapper script that does something like os.chroot('/var/www/'), and you're done :)
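A minimal sketch of such a wrapper follows (the file name chroot_wsgi.py, the project name mysite, and the use of django.core.wsgi are assumptions; note that os.chroot() only works if the daemon process starts with sufficient privileges):
# chroot_wsgi.py -- hypothetical wrapper script, not an official mod_wsgi mechanism
import os

os.chroot('/var/www/')   # confine this process to the /var/www directory tree
os.chdir('/')            # make sure the working directory is inside the new root

os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'mysite.settings')

from django.core.wsgi import get_wsgi_application
application = get_wsgi_application()   # the WSGI callable the server looks for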
To create a sandboxed/chrooted environment for Python try one of the following options: http://wiki.python.org/moin/Asking%20for%20Help/How%20can%20I%20run%20an%20untrusted%20Python%20script%20safely%20%28i.e.%20Sandbox%29?highlight=%28chroot%29
The PyPy option seems to be getting popular since Google started using it with App Engine.

Related

Host a Django application in Lightsail's built-in Apache server

I want to have a production-ready Django app on Lightsail, and to achieve this I'm following two tutorials:
Deploy Django-based application onto Amazon Lightsail
Deploy A Django Project
From the Bitnami article I can see that the AWS documentation follows its Approach B: Self-Contained Bitnami Installations.
According to:
- AWS's documentation, my blocker appears in "5. Host the application using Apache", step g.
- Bitnami's documentation, where it says:
On Linux, you can run the application with mod_wsgi in daemon mode. Add the following code in /opt/bitnami/apps/django/django_projects/PROJECT/conf/httpd-app.conf:
The blocker relates to the code I'm being asked to add, in particular the final part that has
Alias /tutorial/static "/opt/bitnami/apps/django/lib/python3.7/site-packages/Django-2.2.9-py3.7.egg/django/contrib/admin/static"
WSGIScriptAlias /tutorial '/opt/bitnami/apps/django/django_projects/tutorial/tutorial/wsgi.py'
More specifically, /home/bitnami/apps/django/. In /home/bitnami/ I can only see the following folders:
- bitnami_application_password
- bitnami_credentials
- htdocs
- stack
and of these, the one that most likely resembles /opt/bitnami/apps/ is /home/bitnami/stack/. The thing is, inside that particular folder there's no django folder - at least as far as I can tell (I already checked inside some of its subfolders, like the python one).
The workaround for me at this particular stage is to move to a different approach, Approach A: Bitnami Installations Using System Packages (which I've done and managed to make work, as I wrote in this blog post), but I'd like to get it to work using Approach B, hence this question.
The problem here is in the paths given for both the project and Django.
In my case, projects are under /home/bitnami/projects/ where I created a Django project named tutorial.
Also, if I run the command
python -c "
import sys
sys.path = sys.path[1:]
import django
print(django.__path__)"
it'll print the location where Django is installed:
['/opt/bitnami/python/lib/python3.8/site-packages/django']
So httpd-app.conf should instead end with:
Alias /tutorial/static "/opt/bitnami/python/lib/python3.8/site-packages/django/contrib/admin/static"
WSGIScriptAlias /tutorial '/home/bitnami/projects/tutorial/tutorial/wsgi.py'
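As a quick sanity check before restarting Apache, a hypothetical snippet like this confirms that both targets actually exist (the paths are the ones from my setup above):
import os
for path in (
    '/opt/bitnami/python/lib/python3.8/site-packages/django/contrib/admin/static',
    '/home/bitnami/projects/tutorial/tutorial/wsgi.py',
):
    print(path, '->', 'OK' if os.path.exists(path) else 'MISSING')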

Deploy Django project using wsgi and virtualenv on shared webhosting server without root access

I have a Django project which I would like to run on my shared webspace (1und1 Webspace) running on linux. I don't have root access and therefore can not edit apache's httpd.conf or install software system wide.
What I did so far:
installed sqlite locally since it is not available on the server
installed Python 3.5.1 in ~/.localpython
installed virtualenv for my local python
created a virtual environment in ~/ve_tc_lb
installed Django and Pillow in my virtual environment
cloned my django project from git server
After these steps, I'm able to run python manage.py runserver in my project directory and it seems to be running (I can access the login screen using lynx on my local machine).
I read many postings on how to configure FastCGI environments, but since I'm using Django 1.9.1, I'm depending on WSGI. I saw a lot about configuring Django for WSGI and virtualenv, but all the examples required access to httpd.conf.
The shared web server is Apache.
I can create a new directory in my home with a sample hello.py and it works when I enter the URL, but it is (of course) using the Python provided by the server and not my local installation.
When I change the first line, which indicates which Python interpreter to use, to my virtual environment (#!/path/to/home/ve_tc_lb/bin/python), it seems to use the correct version from the virtual environment. Since I'm using different systems for development and deployment, I'm not sure whether it is a good idea to add such a line to e.g. my djangoproject/wsgi.py.
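For reference, my test script looks roughly like this (a minimal sketch; it assumes the server executes .py files as CGI and that the virtualenv lives in ~/ve_tc_lb):
#!/path/to/home/ve_tc_lb/bin/python
# hello.py -- prints which interpreter actually runs on the shared host
import sys
print("Content-Type: text/plain")
print()
print(sys.executable)   # should point into ~/ve_tc_lb when the shebang is honoured
print(sys.version)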
Update 2016-06-02
A few more things I tried:
I learned that I don't have access to the apache error logs
read a lot about mod_wsgi and django in various sources which I just want to share here in case someone needs them in the future:
modwsgi - IntegrationWithDjango.wiki
debug mod_wsgi installation (only applicable if you are root)
mod_wsgi configuration guide
I followed the wsgi test script installation here - but the wsgi file is just displayed in my browser instead of being executed.
All in all it seems like my provider 1und1 did not install the mod_wsgi extension (even though support told me a week ago it would be installed).
Update 2016-06-12: I got a reply from support (after a week or so :-S ) confirming that they don't have mod_wsgi, only wsgiref...
So I'm a bit stuck here - which steps should I do next?
I'll update the question regularly based on comments and remarks. Any help is appreciated.
Since your Apache is shared, I don't expect you can change httpd.conf, so use your shebang approach instead. My suggestion is:
If you have multiple servers you will deploy your project (e.g. testing, staging, production), then do the following steps for each deploy target.
On each server, create a true wsgi.py file which you will never put in version control, pretty much like you would do with a local_settings.py file. This file must be named wsgi.py since most likely you cannot edit the Apache settings (since they are shared) and that name will be expected for your WSGI file.
The content for the file will be:
#!/path/to/your/virtualenv/python
from my_true_wsgi import *
This will be different for each deploy server, but the difference will most likely be only in the shebang line, which locates the proper Python interpreter.
You will have a file named my_true_wsgi.py to match the import in the code above. That file will be in version control, unlike the wsgi.py file. Its contents are the usual contents of wsgi.py in any regular Django project; you are just not using that name directly.
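For example, my_true_wsgi.py could look like this (a minimal sketch; the project name djangoproject is an assumption):
# my_true_wsgi.py -- kept in version control, unlike the per-server wsgi.py
import os
from django.core.wsgi import get_wsgi_application

os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'djangoproject.settings')
application = get_wsgi_application()   # picked up by wsgi.py via "from my_true_wsgi import *"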
With this solution you can have several different wsgi files with no conflict on shebangs.
You'll have to use a webhost that supports Django. See https://code.djangoproject.com/wiki/DjangoFriendlyWebHosts. Personally, I've used WebFaction and was quite happy with it, their support was great and customer service very responsive.

How to setup django application for production and open source, with one repository

I have a Python/Django project that I've set up for development and production using git revision control. I have three settings files:
- settings.py (which has dummy variables for a potential open source project),
- settings_production.py (for production variables), and
- settings_local.py (to override settings just for my local environment). This file is not tracked by git.
I use this method, which works great:
try:
    from settings_production import *
except ImportError, e:
    print 'Unable to load settings_production.py:', e
try:
    from settings_local import *
except ImportError, e:
    print 'Unable to load settings_local.py:', e
HOWEVER, I want this to be an open source project. I've set up two git remotes, one called 'heroku-production' and one called 'github-opensource'. How can I set it up so that 'heroku-production' includes settings_production.py while 'github-opensource' doesn't, so that I can keep those settings private?
Help! I've looked at most of the resources on the internet, but they don't seem to address this use case. Is this the right way? Is there a better approach?
The dream would be to be able to push my local environment to either heroku-production or github-opensource without having to mess with the settings files.
Note: I've looked at the setup where you use environment variables or don't track the production settings, but that feels overly complicated. I like to see everything in front of me in my local setup. See this method.
I've also looked through all these methods, and they don't quite seem to fit the bill.
There's a very similar question here. One of the answers suggests git submodules, which I would say are the easiest way to go about this. This is a problem for your VCS, not your Python code.
I think using environment variables, as described in the Two Scoops of Django book, is the best way to do this.
I'm following this approach and I have an application running out of a private GitHub repository in production (with an average of half a million page views per month), staging, and two development environments, and I use a directory structure like this:
MyProject/
    settings/
        __init__.py
        base.py
        production.py
        staging.py
        development_1.py
        development_2.py
I keep everything that's common to all the environments in base.py and then make the appropriate changes in production.py, staging.py, development_1.py or development_2.py.
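For instance, production.py can be as small as this (a sketch; the concrete overrides are just illustrative, not my actual settings):
# settings/production.py -- only what differs from base.py lives here
from .base import *   # start from the settings shared by every environment

DEBUG = False
ALLOWED_HOSTS = ['www.example.com']   # hypothetical production host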
My deployment process for production includes virtualenv, Fabric, upstart, a bash script (used by upstart), gunicorn and Nginx. I have a slightly modified version of the bash script I use with upstart to run the test server; it is something like this:
#!/bin/bash -e
# starts the development server using environment variables and django-admin.py
PROJECTDIR=/home/user/project
PROJECTENV=/home/user/.virtualenvs/project_virtualenv
source $PROJECTENV/bin/activate
cd $PROJECTDIR
export LC_ALL="en_US.UTF-8"
export HOME="/home/user"
export DATABASES_DEFAULT_NAME_DEVELOPMENT="xxxx"
export DATABASES_DEFAULT_USER_DEVELOPMENT="xxxxx"
export DATABASES_DEFAULT_PASSWORD_DEVELOPMENT="xxx"
export DATABASES_DEFAULT_HOST_DEVELOPMENT="127.0.0.1"
export DATABASES_DEFAULT_PORT_DEVELOPMENT="5432"
export REDIS_HOST_DEVELOPMENT="127.0.0.1:6379"
django-admin.py runserver --pythonpath=`pwd` --settings=MyProject.settings.development_1 0.0.0.0:8006
Notice this is not the complete story and I'm simplifying to make my point. I have some extra Python code in base.py that takes the values from these environment variables too.
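That extra code is essentially just reads of os.environ; a trimmed-down sketch (the database engine is assumed, and the keys mirror the development script above):
# settings/base.py (excerpt) -- pull the values exported by the upstart/bash script
import os

DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.postgresql_psycopg2',   # assumed backend
        'NAME': os.environ['DATABASES_DEFAULT_NAME_DEVELOPMENT'],
        'USER': os.environ['DATABASES_DEFAULT_USER_DEVELOPMENT'],
        'PASSWORD': os.environ['DATABASES_DEFAULT_PASSWORD_DEVELOPMENT'],
        'HOST': os.environ.get('DATABASES_DEFAULT_HOST_DEVELOPMENT', '127.0.0.1'),
        'PORT': os.environ.get('DATABASES_DEFAULT_PORT_DEVELOPMENT', '5432'),
    }
}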
Play with this and make sure to check the relevant chapter in Two Scoops of Django. I was also using the import approach you mentioned, but having settings out of the repository wasn't easy to manage, so I made the switch a few months ago; it has helped me a lot.

Is virtualenv recommended even for a single Django-based application on a server?

If we just want to host a single Django application on a VPS or some cloud instance, is it still beneficial to use virtualenv?
Or would it be overkill, and better to use the global Python setup instead, as only one Django application (say, Project X) will be hosted on that server?
Does virtualenv provide any major benefits for a single-application setup in a production environment that I might not be aware of? E.g. Django upgrades, cron scripts, etc.
I'd recommend always using virtualenv, because it makes your environment more reproducible -- you can version your dependencies alongside your application, you're not tied to the versions of the python packages in your system repository, and if you need to replicate your environment elsewhere, you can do that even if you're not running exactly the same OS underneath.

How can Django projects be deployed with minimal installation work?

To deploy a site with Python/Django/MySQL I had to do the following on the server (Red Hat Linux):
Install MySQLPython
Install mod_python
Install Django (using python setup.py install)
Add some directives to the httpd.conf file (or use .htaccess)
But, when I deployed another site with PHP (using CodeIgniter) I had to do nothing. I faced some problems while deploying a Django project on a shared server. Now, my questions are:
Can the deployment process of a Django project be made easier?
Am I doing too much?
Can some of the steps be omitted?
What is the best way to deploy a Django site on a shared server?
To enable easy Django deployment I would do the following:
First-time server configuration
Install mod_wsgi, which allows you to run in embedded mode OR in daemon mode.
Install Python and virtualenv
In your development environment
Use virtualenv. Take a look at mod_wsgi and virtualenv configuration
Install your Django version (using python setup.py install)
Install your Python libs
Develop your project
Every time you want to deploy
Copy your virtual environment to the production server
Just add an Include directive to your httpd.conf file (or use .htaccess) pointing to your project's Apache configuration. As stated in the mod_wsgi integration with Django documentation, one example of how the included Apache file could be configured would be:
Alias /media/ /usr/local/django/mysite/media/
<Directory /usr/local/django/mysite/media>
Order deny,allow
Allow from all
</Directory>
WSGIScriptAlias / /usr/local/django/mysite/apache/django.wsgi
<Directory /usr/local/django/mysite/apache>
Order deny,allow
Allow from all
</Directory>
Automating deployment
I would consider using Fabric to automate deployment.
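A minimal fabfile sketch of what that automation could look like (Fabric 1.x style; the host, the project path and the steps shown are only illustrative assumptions):
# fabfile.py -- run with "fab deploy"
from fabric.api import env, run, cd

env.hosts = ['user@production-server']   # hypothetical target

def deploy():
    with cd('/usr/local/django/mysite'):      # project path from the Apache config above
        run('git pull')                       # or rsync/copy your code and virtualenv here
        run('/usr/local/django/env/bin/pip install -r requirements.txt')   # assumed virtualenv path
        run('touch apache/django.wsgi')       # tells the mod_wsgi daemon to reload the application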
Can the deployment process of a Django project be made easier?
No. You can script some of this, if you want. However, you're never going to install MySQL, MySQLPython, mod_wsgi (or mod_python), or Django again.
You will, however, tweak your application all the time.
Am I doing too much?
No. Python (and Django) are not part of Apache. PHP is embedded in Apache. PHP is exactly like mod_python (or mod_wsgi). Just one piece of the pie. (Apparently, some hosts handle the PHP installation for you, but don't handle the mod_wsgi or mod_python installation.)
Can some of the steps be omitted?
No. However, you only do it once.
What is the best way to deploy a Django site on a shared server?
You're doing it correctly.
When I deployed another site with php (using CodeIgniter) I had to do nothing
Certainly an unfair comparison. Apparently, they already installed PHP and the database for you. Nice of them.
Also, PHP is not Python. PHP is a plug-in to Apache. Python is "just" a programming language, that requires a separate plug-in to Apache (i.e., mod_python or mod_wsgi).
See How nicely does Python 'flow' with HTML as compared to PHP?
Django hosting support is not as widespread as for PHP, but there are some good options. I can recommend WebFaction - they provide an easy-to-use control panel which offers various combinations of Django versions, Python versions, mod_python, mod_wsgi, MySQL, PostgreSQL etc. They're cost-effective, too. If you use their setup, you get SSH access but just about all of the setting up can be done via their control panel, apart from the actual uploading of your project folder.
Disclaimer: apart from being a happy customer I have no other connection with them.
You didn't have to do anything when deploying a PHP site because your hosting provider had already installed it. Web hosts which support Django typically install and configure it for you.
You can just install this ready-made solution if you're allowed to run an image on a virtual machine. I can imagine installations will be done this way in the future, as complicated security configuration can be done automatically.
Most shared hosting sites run the LAMP (Linux, Apache, MySQL, PHP) stack so deployment is just a matter of copying some files over. If you were using one of the PHP frameworks like CakePHP or something the service hasn't installed (like an imaging library) you'd be going through extra deployment steps as well.
With Django (or Rails, or any other complex framework) you have to set up the stack yourself that one time, then you're good to go.
However, you'll also want to think about post-deployment updating. If it's something you're going to do often you may also want to look into Fabric or Capistrano to help automate that.
P.S. I'll second that WebFaction recommendation. It's as close to one-button installation as I've seen. Pretty happy customer although I mostly use them for test-sites and prototyping.
You can use Python virtualenv and pip (see also "Tools of the Modern Python Hacker: Virtualenv, Fabric and Pip"). I developed my Django project in a virtual environment. I copy the virtual environment files to the production machine when I deploy my application. I use mod_wsgi. You must write this in the mod_wsgi script file:
import site
# add the virtualenv's site-packages to sys.path (raw string avoids backslash-escape issues on Windows)
site.addsitedir(r'C:\PythonVirtualEnv\IntegralEnv\Lib\site-packages')
