I'm in the process of launching a Django app on EC2, but I have hit a wall trying to install my code on my AMI instance. This is my situation: I have a Bitnami AMI up and running that has Django, Apache, PostgreSQL, and nearly all my dependencies pre-installed, and I have my fully functional Django app running on my local machine, which I have been testing so far with the Django dev server. After quite a bit of googling, the most common methods of installing an app on an EC2 instance seem to be either using ssh/sftp/scp to drop a tarball on the instance, or creating a repository and importing the code from there. If anyone can tell me the method they prefer and guide me through the process, or provide a link to a good tutorial, it would be hugely appreciated!
tar -pczf yourfile.tar.gz MyProject
scp -i /home/user/.cert/yourcert.pem yourfile.tar.gz user@serveripaddress:/home/user
tar -xzvf /home/user/yourfile.tar.gz
I usually simply scp -r my whole site directory into /home/bitnami on my AMI. I'm using Apache/Nginx/Django with mod_wsgi, so the directory (for example /home/bitnami/djangosites/) gets referenced by the mod_wsgi path in my Apache config file.
In other words, why not just move the whole directory recursively (scp -r) instead of making a tarball?
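For what it's worth, a sketch of that one-liner, reusing the hypothetical cert path from the answer above and the default Bitnami user (adjust names to your setup):

scp -r -i /home/user/.cert/yourcert.pem ./MyProject bitnami@serveripaddress:/home/bitnami/djangosites/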
Directly copying the folder where your project resides may work. However, you mention that you are using a BitNami image, so it is likely that you are using the BitNami Django Stack Amazon image. BitNami also provides a native version of the BitNami Django Stack, so I would suggest that you first try to deploy your application on top of the native installer and see exactly which steps you need to follow. For instance, you may need to install Python dependencies, or, if you plan to use Apache in production instead of the Django development server, you will need to configure Apache to serve your project. I'm a BitNami developer, and I mention this because making deployment easier across different platforms (including EC2) is one of BitNami's goals, and since you are already using it you can take advantage of this.
I'm moving away from WordPress and into bespoke Python apps.
I've settled on Django as my Python framework; my only problems at the moment concern hosting. My current shared hosting environment is great for WordPress (WHM on CloudLinux), but serving Django on Apache/cPanel appears to be hit and miss, although I haven't yet tried it with my new hosting company, who have Python enabled in cPanel.
What is the easiest way for me to set up a VPS to run a hosting environment for, say, twenty websites? I develop everything in a virtualenv, but I have no experience running Django in a production environment yet. Should I assume that virtualenv isn't secure enough, or that it has scalability issues? I've read about people using Docker to set up separate Django instances on a VPS, but I'm not sure whether they wrote their own management system.
It's my understanding that each Python/Django instance needs uWSGI and Nginx residing within its own virtual container? I'm looking for a simple and robust solution to host 20 Django sites on a VPS. Is there an out-of-the-box solution? I'm also happy to develop one and set up a VPS if I'm pointed in the right direction.
Any wisdom would be gratefully accepted.
Andy :)
Traditional approach
Virtualenv is good enough and perfectly ready for production use. You can have multiple virtualenvs for multiple projects on the same VM.
If you have multiple database engines for multiple projects (say, MySQL for one, PostgreSQL for another), you just need to set up each one individually.
Install Nginx and configure it for each project.
Install supervisor to manage (restart/start/stop) each project individually; a sample entry is sketched below.
Install anything else required by the project.
This approach has one big drawback, though: there is no easy way to run different versions of your database engine for different projects. That is why containerization is highly recommended.
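As an illustration, a minimal sketch of a per-project supervisor entry; the program name, user, and paths are hypothetical, and the uwsgi.ini is assumed to exist:

sudo tee /etc/supervisor/conf.d/site1.conf > /dev/null <<'EOF'
[program:site1]
command=/home/deploy/venvs/site1/bin/uwsgi --ini /home/deploy/sites/site1/uwsgi.ini
directory=/home/deploy/sites/site1
user=deploy
autostart=true
autorestart=true
EOF
sudo supervisorctl reread && sudo supervisorctl update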
For a simple and robust solution:
Use Docker (docker-compose) for local and production deployment; a compose sketch follows below.
Configure uWSGI with Nginx (both available as Docker images).
Create a CI/CD pipeline with a tool like Jenkins.
Monitor your projects with a good tool like Raygun.
That's it.
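To make the Docker route concrete, a minimal docker-compose.yml sketch for a single site; the service layout, module name, and the referenced nginx.conf are assumptions, not a drop-in config:

cat > docker-compose.yml <<'EOF'
version: "3"
services:
  web:
    build: .                              # an image with Django and uWSGI installed
    command: uwsgi --socket :8001 --module mysite.wsgi
  nginx:
    image: nginx:stable
    ports:
      - "80:80"
    volumes:
      - ./nginx.conf:/etc/nginx/conf.d/default.conf   # proxies to web:8001 via uwsgi_pass
    depends_on:
      - web
EOF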
I created a bash script that deploys as many websites as you want on your server. It automatically installs all dependencies on your server, creates a virtual environment, configures Gunicorn, Nginx, and a database for Django, etc. Check it out:
https://github.com/jdbit/django-auto-deploy
My Python App Engine Flex application needs to connect to an external Oracle database. Currently I'm using the cx_Oracle Python package which requires me to install the Oracle Instant Client.
I have successfully run this locally (on macOS) by following the Instant Client installation steps. The steps required me to do the following:
Make a directory called /opt/oracle
Create a symlink from /opt/oracle/instantclient_12_2/libclntsh.dylib.12.1 to ~/lib/
However, I am confused about how to do the same thing in App Engine Flex (instructions). Specifically, here's what I'm confused about:
The instructions say I should run sudo yum install libaio to install the libaio package. How do I do this on GAE Flex? Or is this package already available?
I think I can add the Instant Client files to GAE (a whopping ~100 MB!), then set the LD_LIBRARY_PATH environment variable in app.yaml to export LD_LIBRARY_PATH=/opt/oracle/instantclient_12_2:$LD_LIBRARY_PATH (sketched after this question). Will this work?
Is this even feasible without using custom Docker containers on App Engine Flex?
Overall I'm not sure if I'm on the right track. Would love to hear from someone who has managed this before :)
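For reference, the asker's second idea expressed in app.yaml would look roughly like this; this is the untested plan from the question, and note that env_variables values are literal strings, so there is no $LD_LIBRARY_PATH expansion as in a shell export:

cat >> app.yaml <<'EOF'
env_variables:
  LD_LIBRARY_PATH: /opt/oracle/instantclient_12_2
EOF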
If any of your dependencies is not available in the base GAE Flex images provided by Google and cannot be installed via pip (because it's not a Python package, it's not available on PyPI, or for whatever other reason), then you can't use the requirements.txt file to get it installed in your GAE Flex app.
The proper way to satisfy such dependencies would be to build your own custom runtime. From About Custom Runtimes:
Custom runtimes allow you to define new runtime environments, which might include additional components like language interpreters or application servers.
Yes, that means providing a custom Dockerfile. In your particular case you'd be installing the Instant Client and libaio inside this Dockerfile. See also Building Custom Runtimes.
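A minimal sketch of such a Dockerfile, assuming the Instant Client files are shipped alongside the app, that gunicorn is listed in requirements.txt, and that the app exposes a WSGI entry point named main:app; the GAE Flex Python base image is Debian-based, so the package is libaio1 via apt-get rather than yum:

cat > Dockerfile <<'EOF'
FROM gcr.io/google-appengine/python
# libaio is required by the Oracle Instant Client
RUN apt-get update && apt-get install -y libaio1 && rm -rf /var/lib/apt/lists/*
# ship the Instant Client with the app (path is an assumption)
COPY instantclient_12_2/ /opt/oracle/instantclient_12_2/
ENV LD_LIBRARY_PATH /opt/oracle/instantclient_12_2
ADD requirements.txt /app/requirements.txt
RUN pip install -r /app/requirements.txt
ADD . /app
CMD gunicorn -b :$PORT main:app
EOF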
Answering your first question, I think that the instructions on the Oracle website just show that you have to install said library for your application to work.
In the case of App Engine Flex, the way to ensure that Python libraries are present in the deployment is with the requirements.txt file. There is a documentation page which explains how to do so.
On the other hand, I will assume that the "Instant Client files" are not libraries but data necessary for your app to run. You should use Google Cloud Storage to serve them, or any other storage alternative within Google Cloud.
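If you go that route, the upload itself is a one-liner; the bucket name here is hypothetical:

gsutil cp -r instantclient_12_2 gs://your-app-assets/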
I believe that, if this is all you need for your app to work, pushing your own custom container should not be necessary.
I've grown tired of trying to get Elastic Beanstalk to run Python 3.5. Instead, I want to create a custom AMI which establishes a separate virtualenv for the application (with Python 3.5) and knows enough to launch the application using that virtualenv.
The problem is that once I SSH into the EC2 instance in order to create my custom AMI, I am left wondering where the scripts are that govern the Elastic Beanstalk deployment behavior.
For example, when deploying via Travis to Elastic Beanstalk, EB knows enough to look in a specific folder for the file application.py and to execute it using a specific virtualenv (or maybe even, shudder, the machine's root Python installation). It even knows to run pip install -r requirements.txt. Can anyone point me to the script(s) that govern this behavior?
UPDATE
Please see Elastic beanstalk require python 3.5 for those referencing the .ebextensions option. So far, it has not proved able to handle this problem, due to the interdependency between the EB image's operating system and the Python environment used to run the application.
All of the EB files can be found in /opt/elasticbeanstalk; /opt/elasticbeanstalk/hooks is probably most relevant for what you're looking for.
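On the older Amazon Linux EB platforms these hooks are plain scripts, so listing them is a reasonable first step (the exact layout varies by platform version):

ls /opt/elasticbeanstalk/hooks/appdeploy/pre/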
You can use .ebextensions to run the scripts you want when starting your AMI.
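For example, a sketch of a config file that runs a shell command during provisioning; the file name and the yum package name are assumptions that depend on your AMI's repositories:

mkdir -p .ebextensions
cat > .ebextensions/01_python35.config <<'EOF'
commands:
  01_install_python35:
    command: "yum install -y python35"
EOF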
Plesk on the MediaTemple DV servers uses Python 2.4 for its own tooling, so the 2.4 install can't be replaced, but someone recommended installing a separate Python 2.7 alongside it, since my app runs on that. I'm new to the whole server thing, so this is new territory. My sense is that I can create a new directory for the source files, use SSH to download the files into that directory, then cd into it and install Python 2.7. After that I have to figure out how to make sure Apache knows to use Python 2.7 to run the Django app in question. Does this sound correct?
Unless you can make a system-wide change to your Python installation, you would have to run Apache, Python, and Django in the same virtual environment. If that is not possible, use Gunicorn (instead of Apache) to run the Django app from a virtualenv, on a port different from Apache's; a sketch follows. If this app runs on a subdomain, you should consider hosting the Django app on a PaaS (Heroku, Google App Engine, etc.), which lets you easily switch running environments.
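A minimal sketch of the Gunicorn option, with hypothetical paths and module names:

source /home/user/venvs/myapp/bin/activate
pip install gunicorn
gunicorn myproject.wsgi:application --bind 127.0.0.1:8001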
Never mind, sorted it. The important thing seemed to be to edit /etc/ld.so.conf, adding /usr/local/lib, and then run /sbin/ldconfig, which makes ld.so.conf look like:
include ld.so.conf.d/*.conf
/usr/local/lib
Download and then compile Python, and use make && make altinstall.
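Putting the whole recipe together, a sketch with an example version (2.7.18 is just the final 2.7 release; --enable-shared matches the ld.so.conf step above):

wget https://www.python.org/ftp/python/2.7.18/Python-2.7.18.tgz
tar -xzf Python-2.7.18.tgz
cd Python-2.7.18
./configure --enable-shared
make && sudo make altinstall          # altinstall leaves the system 2.4 untouched
echo "/usr/local/lib" | sudo tee -a /etc/ld.so.conf
sudo /sbin/ldconfig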
How should the project be deployed and run? There are loads of tools in this space. Which should be used, and why?
Supervisor
Gunicorn
Nginx
Fabric
Boto
Pip
Virtualenv
Load balancers
It depends on your configuration. We are using the following stack for our environment on Rackspace, but you can set up the same thing on AWS with EC2 instances.
Ubuntu 11.04
Varnish (in-memory cache) to avoid disk seeks
Nginx to serve static content
Apache to serve dynamic content (mod_wsgi)
Python 2.7.2 with Django
Jenkins for our continuous builds
Git for version control
Fabric for the deployment.
So the way it works is that Jenkins polls the origin repository for Git pushes. Jenkins then pulls the changes down from origin, builds a Python egg, runs the unit tests, uses Fabric to deploy the egg to the necessary environments, and reloads the Apache config to make sure the forked Apache processes pick up the new egg.
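Not the poster's actual fabfile, but a plain-shell sketch of that flow, with hypothetical host and path names:

git pull origin master
python setup.py bdist_egg                              # build the egg
python setup.py test                                   # run the unit tests
scp dist/myapp-*.egg deploy@appserver:/srv/eggs/
ssh deploy@appserver "sudo service apache2 reload"     # forked workers pick up the new egg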
Hope this helps.
As Michael Klockel already stated, it depends on your configuration. I have:
Ubuntu 10.04 LTS
Nginx
uWSGI
git version control
Python virtualenv and pip
You can check the deployment settings here:
Django, Virtualenv, nginx + uwsgi import module wsgi error
and why I use Nginx and uWSGI here:
http://nichol.as/benchmark-of-python-web-servers
I also use Fabric for deployment of the app, and chef-solo: http://ericholscher.com/blog/2010/nov/8/building-django-app-server-chef/
johnny-cache for SQL queries, and Raven and Sentry to keep a log of what's going on in the app.
I'd use uWSGI + Nginx from a performance perspective (I think the comparison has already been linked in another answer), and pip and virtualenv for deployment, as this keeps things self-contained and facilitates clean deployment using Fabric or similar. Use Git for version control. Jenkins can handle continuous integration. I'd use the AWS load balancer (ELB) in front of your EC2 instances for balancing; it does the job without you having to fret too much about it. Use django-storages to upload your static files to S3, which saves you the effort of having another server to hand out static files.
However, it depends a little on your admin overhead. If you're looking for something clean and simple for deployment and scaling, I'd scrap the whole AWS EC2 stack, use Heroku as a front end, and S3 for your static files. This saves all the admin time of maintaining the boxes and allows you to concentrate on the dev work.