Django: same project multiple times on one server - python

I had to develop something in Django (I'm new to it) and it went quite smoothly. But after delivering to the client I had to set up a second "testing" instance, so that any new features could be tried out on it without risking errors in the production one.
I have only one Apache server at my disposal, and this caused some weird behaviour.
I run my applications by adding the path to the WSGI script in httpd.conf.
At first glance it works fine: the new server is up and running, and it uses a different database, so all seems good. But it doesn't use the views and models from its own folder; it uses the ones from the original app instead, and I've run out of ideas on how to fix it. Please help me in some way.

I believe that your two Django projects should be deployed on your staging and production servers as two completely separate projects/directories.
If you use version control, this could be as trivial as branching your main project and adding the new features. Once you have two separate code bases, you can put your fixed branch on your production server.
Your project can exist anywhere on your server. You could set up a staging subdomain and create a virtual host that points to your Django project branch:
http://httpd.apache.org/docs/2.2/vhosts/examples.html
This would allow both projects to exist on the same server without one having to be aware of the other.
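The symptom in the question (the second instance picking up the first project's views and models) is typical when both WSGI scripts end up in the same mod_wsgi process and interpreter, so one project's packages shadow the other's on sys.path. Giving each virtual host its own daemon process group keeps the two Python environments apart. A minimal sketch, with hypothetical domain names and paths:

    # Two copies of the same project, each in its own mod_wsgi daemon
    # process so their module paths never mix.
    <VirtualHost *:80>
        ServerName example.com
        WSGIDaemonProcess production python-path=/srv/myproject-prod
        WSGIProcessGroup production
        WSGIScriptAlias / /srv/myproject-prod/myproject/wsgi.py
    </VirtualHost>

    <VirtualHost *:80>
        ServerName staging.example.com
        WSGIDaemonProcess staging python-path=/srv/myproject-staging
        WSGIProcessGroup staging
        WSGIScriptAlias / /srv/myproject-staging/myproject/wsgi.py
    </VirtualHost>

Because each copy runs in its own process with its own python-path, identically named modules from the two checkouts can no longer collide.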

Related

Hosting multiple Django instances on a VPS

I'm moving away from WordPress and into bespoke Python apps.
I've settled on Django as my Python framework; my only problems at the moment concern hosting. My current shared hosting environment is great for WordPress (WHM on CloudLinux), but serving Django on Apache/cPanel appears to be hit and miss, although I haven't tried it yet with my new hosting company, who have Python enabled in cPanel.
What is the easiest way for me to set up a VPS to run a hosting environment for, say, twenty websites? I develop everything in a virtualenv, but I have no experience of running Django in a production environment as yet. Am I right in assuming that virtualenv isn't secure enough, or has scalability issues? I've read about people using Docker to set up separate Django instances on a VPS, but I'm not sure whether they wrote their own management system.
It's my understanding that each Python/Django instance needs uWSGI and Nginx residing within its own virtual container? I'm looking for a simple and robust solution to host 20 Django sites on a VPS - is there an out-of-the-box solution? I'm also happy to develop one and set up a VPS if I'm pointed in the right direction.
Any wisdom would be gratefully accepted.
Andy :)
Traditional approach
Virtualenv is good enough and perfectly ready for production use. You can have multiple virtualenvs for multiple projects on the same VM.
If you use different database engines for different projects - say, MySQL for one and PostgreSQL for another - you just need to set up each one individually.
Install Nginx and add a configuration for each project.
Install supervisor to manage (restart/start/stop) each project individually; a sample program entry follows this list.
Install anything else required by each project.
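As a sketch of the supervisor step - the program name, paths, and the choice of Gunicorn as the application server are all assumptions here, not anything the answer prescribes:

    [program:site1]
    ; one such block per project, each pointing at its own virtualenv
    command=/srv/site1/venv/bin/gunicorn myproject.wsgi:application --bind 127.0.0.1:8001
    directory=/srv/site1
    user=www-data
    autostart=true
    autorestart=true

Then supervisorctl restart site1 restarts that one project without touching the others.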
This approach has one huge drawback, though: there is no easy way to run different versions of the same database engine for different projects. For that reason, containerization is highly recommended.
For a simple and robust solution:
Use Docker (docker-compose) for local and production deployment.
Configure uWSGI with Nginx (both are available as Docker images); a compose sketch follows below.
Create a CI/CD pipeline with a tool like Jenkins.
Monitor your projects using a good tool like Raygun.
That's it.
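A sketch of what one site's docker-compose.yml might look like - the service names, ports, and the Postgres version are assumptions for illustration; the point is that every project can now pin its own database version:

    version: "3"
    services:
      web:
        build: .                # assumes a Dockerfile that installs Django + uWSGI
        command: uwsgi --http 0.0.0.0:8000 --module myproject.wsgi
      db:
        image: postgres:12      # pin whatever version this particular project needs
        environment:
          POSTGRES_PASSWORD: example
      nginx:
        image: nginx:stable
        ports:
          - "80:80"
        volumes:
          - ./nginx.conf:/etc/nginx/conf.d/default.conf  # proxies to web:8000

Each of the twenty sites gets its own copy of this file (or a templated variant) with its own ports and volumes.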
I created a bash script that deploys as many websites as you want on your server. It automatically installs all dependencies, creates a virtual environment, and configures Gunicorn, Nginx, and a database for Django. Check it out:
https://github.com/jdbit/django-auto-deploy

mod_wsgi: How to run different apps under a single domain?

I have a domain example.com and I run a standard Apache there (serving static files and PHP). I want to run Python served pages on subdomain.example.com.
I managed to configure Apache to do this, and there is a Flask app running at subdomain.example.com. However, in the virtual host config file, the whole subdomain is tied to this one single app. I would like to go further and run several different apps on this subdomain.
For example:
subdomain.example.com/app1/ would run /var/www/apps/app1/app.wsgi
subdomain.example.com/app2/ would run /var/www/apps/app2/app.wsgi
and so on...
Furthermore, I would like this to be fully automatic, so that when I set up a new folder in /var/www/apps/, I can reach the app through Apache without further configuration.
I can see several ways of doing this:
Configure Apache to route every subdomain.example.com request to a single "meta app" in Python, which would run a specific app based on the given URL.
Do some magic with the Apache configuration that would take care of this automatically.
Maybe use nginx? I don't really have much experience with it, but someone told me it could solve the problem.
Is there any best practice about how to do this? Thank you.
It looks like you should be able to do this by providing a directory path (in your case, /var/www/apps) to the WSGIScriptAlias directive. Read more here:
https://modwsgi.readthedocs.org/en/develop/configuration-directives/WSGIScriptAlias.html
This seems like a neater solution than using a meta app.
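For the exact layout in the question, the regex variant of the directive described in those docs, WSGIScriptAliasMatch, seems to fit; a sketch using the paths from the question:

    <VirtualHost *:80>
        ServerName subdomain.example.com
        # Map the first URL path component to the matching folder,
        # e.g. /app1/... runs /var/www/apps/app1/app.wsgi
        WSGIScriptAliasMatch ^/([^/]+) /var/www/apps/$1/app.wsgi
    </VirtualHost>

Since mod_wsgi keys its application groups to the mount point by default, each folder's app should get its own sub-interpreter, so dropping a new folder into /var/www/apps exposes a new app without further configuration.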

How do I run a Django 1.6 project with multiple instances running off the same server, using the same db backend?

I have a Django 1.6 project (stored in a Bitbucket Git repo) that I wish to host on a VPS.
The idea is that when someone purchases a copy of the software I have written, I can type in a few simple commands that will take a designated copy of the code from Git, create a new instance of the project with its own subdomain (e.g. <customer_name>.example.com), and create a new Postgres database (on the same server).
I should hopefully be able to create and remove these 'instances' easily.
What's the best way of doing this?
I've looked into writing scripts using some combination of Supervisor/Gunicorn/Nginx/Fabric etc. Other options could be something more serious like using Docker or Vagrant. I've also looked into various PaaS options.
Thanks in advance.
(EDIT: I have looked at the following services/things: Dokku (can't use Heroku due to data constraints), Vagrant (inc Puppet), Docker, Fabfile, Deis, Cherokee, Flynn (under dev))
If I was doing it (and I did a similar thing with a PHP application I inherited), I'd have a fabric command that allows me to provision a new instance.
This could be broken up into the requisite steps (check-out code, create database, syncdb/migrate, create DNS entry, start web server).
I'd probably do something sane like using the DNS entry as the database name, or at least using a reversible function to derive one from the other.
You could then string these together to easily create a new instance.
You will also need a way to tell the newly created instance which database and domain name to use. You could have the provisioning script write some data to a file in the checked-out repository that is then read by Django during its initialisation phase.
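A sketch of what such a Fabric (1.x) task might look like - the repository URL, paths, and settings-file format are placeholders, not anything prescribed above:

    from fabric.api import run, task

    @task
    def provision(customer):
        """Check out the code, create a database and prepare a new instance."""
        path = "/srv/instances/%s" % customer
        run("git clone git@bitbucket.org:me/myproject.git %s" % path)
        # Use the subdomain as the database name, as suggested above.
        run("createdb %s" % customer)
        # Write the instance-specific settings Django reads on start-up.
        run("echo 'DB_NAME=%s' > %s/instance.conf" % (customer, path))
        run("cd %s && python manage.py syncdb --noinput" % path)
        # Creating the DNS entry and starting the web server would follow here.

A matching teardown task (drop the database, remove the directory and the DNS entry) gives you the easy removal asked for in the question.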

Apache/Django freezing after a few requests

I'm running Django through mod_wsgi and Apache (2.2.8) on Ubuntu 8.04.
I've been running Django on this setup for about 6 months without any problems. Yesterday, I moved my database (postgres 8.3) to its own server, and my Django site started refusing to load (the browser spinner would just keep spinning).
It works for about 10 minutes, then just stops. Apache is still able to serve static files, just nothing through Django.
I've checked the Apache error logs and I don't see any entries that could be related. I'm not sure whether this is a WSGI, Django, Apache, or Postgres issue.
Any ideas?
Thanks for your help!
It sounds a lot like something is going wrong between Django and your newly housed database.
Just to eliminate Apache from the mix, you should run the site under the Django dev server (on some random port to stop people using it) and see if you still have issues. If you do, it's the database. If it behaves, it could be Apache.
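For example, on an arbitrary high port:

    python manage.py runserver 0.0.0.0:8001

If the site still freezes under the dev server, Apache and mod_wsgi are off the hook.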
Edit: this looks interesting. You can test that by applying the patch there (commenting out the .close()), but there are other similar bugs floating around.
Found it! I'm using eventlet in some other code and I imported one of those modules into a Django model. So eventlet was taking over and putting everything to "sleep".

Is it possible to deploy one GAE application from another GAE application?

In order to redeploy a GAE application, I currently have to install the GAE deployment tools on the system that I am using for deployment. While this process is relatively straightforward, it is a manual process that does not work from behind a firewall, and the deployment tools must be installed on every machine that will be used for updating GAE apps. A more ideal solution would be to update a GAE application from another GAE application that I have deployed previously. This would remove the need to have multiple systems configured to deploy apps.
Since the GAE deployment tools are written in Python and App Engine supports Python, is it possible to modify appcfg.py to work from within GAE? The use case would be to pull a project from GitHub or some other online repository and use it to update one GAE application from another. If this is not possible, what is the limiting constraint?
Is it possible? Yes. The protocol appcfg uses to update apps is entirely HTTP-based, so there's absolutely no reason you couldn't write an app that's capable of deploying other apps (or redeploying itself - self-modifying code)! You may even be able to reuse large parts of appcfg.py to do it.
Is it easy? Probably not. It's quite likely you'll need to understand a decent chunk of appcfg's internals and the RPCs it uses to upload new apps - not a trivial undertaking. You'll also need to store your credentials in the app, in all likelihood - though you can use a role account that is an admin only for the apps it deploys, to minimize the risk there.
One limiting constraint could be the protocol that the Python SDK uses to communicate with the GAE servers. If it only uses HTTP, you might be OK, but if it's anything else, you might be out of luck, because you can't open a socket directly from within GAE.
What problems did you have when trying to update from behind a firewall?
I ran into some myself, but I eventually managed to work around them.
About your question: the constraint is that you cannot write files inside a GAE app, so even though you could probably pull from the VCS, you couldn't write the pulled files anywhere.
So you would have to run the update from outside GAE in the first place.
In any case, every machine that needs to update the GAE app should have the SDK anyway, if only to check that the changes work.
So, if you really want to do this, you have two alternatives:
Host your own "updater" site and install the SDK there; when you want to update, log into that site (or run a script) and do the remote update.
Although I don't know Amazon EC2 well, I think you could do pretty much the same thing as option 1 from there.
Finally, I think the password has to be typed for every update (though you could take the App Engine SDK and modify that, since it is open source).
