To run Django applications locally I can do
django-admin startproject djangotest
python djangotest/manage.py runserver
and the sample webpage shows up at http://127.0.0.1:8000/
However, when I deploy this to EB with
eb deploy
It just magically works. My question is: does EB run the command python djangotest/manage.py runserver on the EC2 server after eb deploy by default? What is the list of commands that EB executes to get the webpage working? And what if I want it to run with different flags, like python djangotest/manage.py runserver --nostatic? Is that possible?
It doesn't just magically work. You have to configure Django for Elastic Beanstalk, as described in the EB documentation: you provide a configuration file which points to the WSGI module.
In any case, it wouldn't use runserver, as the development server is absolutely not for production use.
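For reference, the EB Python platform serves your app through its own WSGI server; the main thing you configure is the path to the WSGI module, typically in a .ebextensions config file. A sketch, assuming the djangotest project name from the question (the exact WSGIPath format differs between EB platform versions, so check the docs for yours):

```yaml
# .ebextensions/django.config
option_settings:
  aws:elasticbeanstalk:container:python:
    WSGIPath: djangotest/djangotest/wsgi.py
```

Because EB never invokes manage.py runserver, there is no place to pass runserver flags like --nostatic; static files are handled separately (e.g. via collectstatic and the platform's static file settings).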
Related
I am building a back end in Python via the Flask boilerplate application from IBM Cloud/Bluemix. I have heard/read a lot of complaints that Flask's built-in server isn't good for production. But how do I know whether the application uses the Flask built-in server or whether IBM sets up something else? Is there a simple way to see this in the code?
Deploying the Flask boilerplate app from the IBM cloud catalogue will indeed deploy a Flask application running on the Flask dev webserver.
You will need to alter the application if you want to run a production WSGI server.
I work for IBM and am in this stuff all day every day.
If you want to verify this, SSH into your application container on Cloud Foundry with the bash command
cf ssh <yourappnamehere>
You will need to have either the Bluemix or Cloud Foundry CLI installed and be logged in to the relevant endpoint before submitting this command.
It will open a bash shell in your application container, and you can cd around and open and/or download your project files for inspection.
This line:
app = Flask(__name__)
is a sure-fire way to know that you are running a Flask web application.
If you are concerned with which WSGI server your application is running under, checking your Procfile (you should see this when SSHing into your container) will show you which command starts your application. If the command is
python <yourapp>.py
then you are running the dev server. Otherwise, the start command would invoke some other Python file, most likely via the WSGI server's own command rather than the python command, which would import your application as a dependency.
You can also take a look at whether or not any WSGI server libraries were downloaded during the compilation of your droplet, and what command was used to start your application with
cf logs <yourappname> --recent
after deploying it.
Or, you can just believe me that the boilerplate deploys a Flask app under a Flask dev server.
A tutorial on running Flask on a different WSGI server:
https://www.digitalocean.com/community/tutorials/how-to-serve-flask-applications-with-uwsgi-and-nginx-on-ubuntu-14-04
I have a VPS server (CentOS 6.8).
I installed Python 3.5.2 and Django 1.10.1 in a virtual environment.
I connected my domain to the VPS server.
Now:
django-project:
(venv) /home/django_user/djtest/venv/django_project/django_project
domain:
/var/www/www-root/data/www/mydomain.com
I tried setting BASE_DIR in settings.py to '/var/www/www-root/data/www/mydomain.com' but it's not working.
How can I connect the django-project to the domain?
The project is still empty, just created.
A Django project can't be served like this; you also need a web-server application such as Gunicorn or Apache.
Just as you use ./manage.py runserver locally, you need some application to execute your Django project in production; the corresponding applications are Gunicorn, Apache's WSGI server, and others.
There is a file wsgi.py that was created when you created the project. It is the Web Server Gateway Interface: the above-mentioned web servers interact with it to serve your Django-based website.
I'm just wondering if I'm doing something wrong, or is developing with AWS really hard/confusing?
Currently I have an EC2 instance with the following address:
ec2-XX-XX-XX-XX.us-west-2.compute.amazonaws.com
And with that a elastic beanstalk application environment:
http://XXX.XXXXXX.us-west-2.elasticbeanstalk.com/
I find that it's really slow to code something, put it on the server, and test what it looks like by going to http://XXX.XXXXXX.us-west-2.elasticbeanstalk.com/, as what I need to do is this:
1) Upload the files via FTP to ec2-XX-XX-XX-XX.us-west-2.compute.amazonaws.com
2) SSH inside to ec2-XX-XX-XX-XX.us-west-2.compute.amazonaws.com and do eb deploy
3) Wait 2-3 minutes for the server to deploy
4) View the changes at http://XXX.XXXXXX.us-west-2.elasticbeanstalk.com
Is there something I'm doing wrong here? Normally this is what I'm used to doing:
1) Upload file via FTP to http://mywebsite.com
2) SSH inside http://mywebsite.com
3) Do python manage.py runserver or gunicorn mySite.wsgi:application
4) View changes at http://mywebsite.com without having to wait 2-3 minutes for it to deploy.
Can someone guide me on what I might be doing wrong? I'm not too sure on what I'm missing here.
Thank you!
With AWS Elastic Beanstalk you don't exactly "FTP" files to the server. With the EB CLI tools you only run eb deploy, and your latest git commit is deployed to all your EB servers.
In my case, it only takes 3-4 terminal commands to get everything up and running:
git add -A
git commit -m '04212016_1_east'
./manage.py collectstatic (optional step since I use S3 for static files)
eb deploy
I'm creating a Django app on OpenShift on Python 3.3 with no scaling, and it works fine: when I browse the app it gives me the default Django URL error page, and /admin brings me to the admin login page.
But as I create it with scaling I get this error
503 Service Unavailable
No server is available to handle this request.
I tried with small and small.highcpu; neither works for me with scaling, but I actually need small.highcpu in the eu.west region.
I also tried creating a Python 3.3 app with scaling, which works fine, and then adding Django through the repository upstream, or by adding the configuration I had before; it is still not working after a complete push to the repo.
I've done this before and it worked for me, so please don't answer without testing it.
This is the link to HAProxy status: both the local gear and the backend are DOWN.
It seems there's something wrong with the current Django quickstart for OpenShift. In your case the scaled app is returning a 503 error because the Django app is producing a 404 error at its root.
I've successfully deployed both scaling and non-scaling apps using this repo: https://github.com/jsvgoncalves/django-openshift
Don't forget that the $OPENSHIFT_PYTHON_WSGI_APPLICATION environment variable needs to point to the right wsgi.py and that you may need to restart the app.
$ APP_NAME=yourapp
$ rhc env set OPENSHIFT_PYTHON_WSGI_APPLICATION=django_exp/wsgi.py -a $APP_NAME
# You may need to restart your app
$ rhc app-restart -a $APP_NAME
Also, create the database (or keep it directly in your git repo, as this database file will disappear every time you push changes):
$ rhc ssh -a $APP_NAME
$ cd app-root/runtime/repo
$ python manage.py migrate
I use the RChilli resume parser for parsing resumes. It works fine while running under python manage.py runserver: I get XML data as the API response. But while running under gunicorn I get an HTML response saying "The service is temporarily unavailable". Is there something that needs to be changed in the gunicorn configuration to fix this? As the project is still under development, I'm deploying with gunicorn only, without nginx, using this method: Lessons Learned From The Dash: Easy Django Deployment.
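One common cause of behaviour that differs between runserver and gunicorn is gunicorn's default 30-second worker timeout: if the call out to the RChilli API takes longer than that, gunicorn kills the worker mid-request. That is an assumption about this setup rather than a certain diagnosis, but raising the timeout in a gunicorn config file is cheap to try. A sketch (mysite is a placeholder for the actual project name, and the values are illustrative):

```python
# gunicorn.conf.py
# Start with: gunicorn -c gunicorn.conf.py mysite.wsgi:application

bind = "0.0.0.0:8000"   # listen on all interfaces, port 8000
workers = 3             # a common rule of thumb is (2 * CPU cores) + 1
timeout = 120           # seconds before an unresponsive worker is killed
                        # (gunicorn's default is 30)
```

If the error persists with a generous timeout, capture gunicorn's error log (--log-level debug) to see what the worker is actually doing when the request fails.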