Hosting a Django app with Waitress

I'm trying to host a Django app on my Ubuntu VPS. I've got python, django, and waitress installed and the directories moved over.
I went to the Waitress site ( http://docs.pylonsproject.org/projects/waitress/en/latest/ ) and they said to use it like this:
from waitress import serve
serve(wsgiapp, host='5.5.5.5', port=8080)
Do I put my app name in place of 'wsgiapp'? Do I need to run this in the top-level Django project directory?

Tested with Django 1.9 and Waitress 0.9.0
You can use Waitress with your Django application by creating a script (e.g., server.py) in your Django project root and importing the application variable from the wsgi.py module:
yourdjangoproject project root structure
├── manage.py
├── server.py
├── yourdjangoproject
│   ├── __init__.py
│   ├── settings.py
│   ├── urls.py
│   ├── wsgi.py
wsgi.py (Updated January 2021 w/ static serving)
This is the default django code for wsgi.py:
import os
from django.core.wsgi import get_wsgi_application
os.environ.setdefault("DJANGO_SETTINGS_MODULE", "yourdjangoproject.settings")
application = get_wsgi_application()
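For context, the `application` object that wsgi.py exposes is just a standard WSGI callable (PEP 3333); Waitress invokes it the same way any other WSGI server would. A stdlib-only sketch of that contract, with a toy app standing in for Django's:

```python
# A WSGI application is any callable taking (environ, start_response)
# and returning an iterable of bytes -- this is all a server like
# Waitress needs from Django's `application` object.

def toy_application(environ, start_response):
    body = b"Hello from WSGI"
    start_response("200 OK", [("Content-Type", "text/plain"),
                              ("Content-Length", str(len(body)))])
    return [body]

# Simulate what a WSGI server does for one request:
collected = {}

def fake_start_response(status, headers):
    collected["status"] = status
    collected["headers"] = headers

environ = {"REQUEST_METHOD": "GET", "PATH_INFO": "/"}
result = b"".join(toy_application(environ, fake_start_response))
print(collected["status"])  # 200 OK
print(result)               # b'Hello from WSGI'
```

Django's `get_wsgi_application()` returns an object satisfying exactly this interface, which is why `serve(application, ...)` works without any Django-specific glue.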
If you need static file serving, you can edit wsgi.py to use something like WhiteNoise or dj-static for static assets:
import os
from django.core.wsgi import get_wsgi_application
os.environ.setdefault("DJANGO_SETTINGS_MODULE", "yourdjangoproject.settings")
"""
YOU ONLY NEED ONE OF THESE.
Choose middleware to serve static files.
WhiteNoise seems to be the go-to but I've used dj-static
successfully in many production applications.
"""
# If using WhiteNoise:
from whitenoise import WhiteNoise
application = WhiteNoise(get_wsgi_application())
# If using dj-static:
from dj_static import Cling
application = Cling(get_wsgi_application())
server.py
from waitress import serve
from yourdjangoproject.wsgi import application

if __name__ == '__main__':
    serve(application, port='8000')
Usage
Now you can run $ python server.py
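If you want to sanity-check the `serve(application, host, port)` pattern without Waitress installed, the stdlib reference server follows the same shape; a runnable sketch (the toy `application` is a stand-in for your Django one):

```python
import threading
import urllib.request
from wsgiref.simple_server import make_server

def application(environ, start_response):
    # Stand-in for the object imported from yourdjangoproject.wsgi
    start_response("200 OK", [("Content-Type", "text/plain")])
    return [b"ok"]

# Same idea as waitress.serve(application, port=8000), using wsgiref:
server = make_server("127.0.0.1", 0, application)  # port 0 = pick a free port
port = server.server_address[1]

# Handle exactly one request in the background, then fetch it:
threading.Thread(target=server.handle_request, daemon=True).start()
body = urllib.request.urlopen("http://127.0.0.1:%d/" % port).read()
print(body)  # b'ok'
server.server_close()
```

In production you would use Waitress rather than wsgiref (which is single-threaded and meant for development), but the calling convention is identical.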

I managed to get it working by using a bash script instead of a python call. I made a script called 'startserver.sh' containing the following (replace yourprojectname with your project name obviously):
#!/bin/bash
waitress-serve --port=80 yourprojectname.wsgi:application
I put it in the top-level Django project directory.
Changed the permissions to execute by owner:
chmod 700 startserver.sh
Then I just execute the script on the server:
sudo ./startserver.sh
And that seemed to work just fine.

Related

Django ModuleNotFoundError: No module named 'mysite.settings' when trying to host application with wsgi server

So I wanted to deploy my first Django application on a CherryPy web server using WSGI, and I'm having issues with os.environ['DJANGO_SETTINGS_MODULE']. When I try to run the application callable, it throws an error that the module is not found. Project structure:
ResourceManager
├── ResourceManager
│   ├── ResourceManager
│   │   ├── __init__.py
│   │   ├── cherryserver.py
│   │   ├── settings.py
│   │   ├── urls.py
│   │   └── wsgi.py
│   ├── SimpleResourceManager
│   │   ├── migrations
│   │   ├── __init__.py
│   │   ├── admin.py
│   │   ├── apps.py
│   │   ├── models.py
│   │   ├── serializers.py
│   │   ├── tests.py
│   │   ├── urls.py
│   │   └── views.py
│   └── manage.py
wsgi.py file:
import os
import sys
from django.core.wsgi import get_wsgi_application
BASE_DIR = os.path.dirname(os.path.dirname(os.path.abspath(__file__)))
sys.path.insert(0,BASE_DIR)
os.environ['DJANGO_SETTINGS_MODULE'] = 'ResourceManager.settings'
os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'ResourceManager.settings')
application = get_wsgi_application()
cherryserver.py:
import cherrypy
from ResourceManager.ResourceManager.wsgi import application

if __name__ == '__main__':
    # Mount the application
    cherrypy.tree.graft(application, "/")
    # Unsubscribe the default server
    cherrypy.server.unsubscribe()
    # Instantiate a new server object
    server = cherrypy._cpserver.Server()
    # Configure the server object
    server.socket_host = "0.0.0.0"
    server.socket_port = 8080
    server.thread_pool = 30
    # Subscribe this server
    server.subscribe()
    cherrypy.engine.start()
    cherrypy.engine.block()
The application works fine when using the command runserver 8080, but when I try to run it under CherryPy it says ModuleNotFoundError: No module named "ResourceManager.settings".
So I've tried changing where cherryserver.py is located in the directory and adding extra lines to the wsgi.py file, and I'm running out of ideas about what is wrong when deploying on a different server. Why am I using CherryPy? I have to test five Python-based web servers.
Try changing from:
from ResourceManager.ResourceManager.wsgi import application
To:
from wsgi import application
This works because cherryserver.py and wsgi.py sit in the same directory, so the module can be imported directly. If that still fails, the dotted path to the wsgi module doesn't match the directory you are running the script from, so adjust the import accordingly.
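The underlying rule is that a dotted import like `ResourceManager.ResourceManager.wsgi` only resolves if the directory *containing* the outer package is on `sys.path`. A stdlib-only sketch of that rule, using throwaway `pkg`/`sub` names rather than the real project:

```python
import importlib
import os
import sys
import tempfile

# Build a throwaway package tree: <tmp>/pkg/sub/mod.py
tmp = tempfile.mkdtemp()
os.makedirs(os.path.join(tmp, "pkg", "sub"))
for d in ("pkg", os.path.join("pkg", "sub")):
    open(os.path.join(tmp, d, "__init__.py"), "w").close()
with open(os.path.join(tmp, "pkg", "sub", "mod.py"), "w") as f:
    f.write("value = 42\n")

# 'pkg.sub.mod' resolves only when tmp (the parent of 'pkg') is on sys.path:
sys.path.insert(0, tmp)
mod = importlib.import_module("pkg.sub.mod")
print(mod.value)  # 42

# From inside pkg/sub itself, the same file is imported as plain 'mod' --
# the analogue of `from wsgi import application` next to wsgi.py:
sys.path.insert(0, os.path.join(tmp, "pkg", "sub"))
mod2 = importlib.import_module("mod")
print(mod2.value)  # 42
```

So whether `from ResourceManager.ResourceManager.wsgi import application` or `from wsgi import application` is correct depends entirely on which directory the script is launched from (that directory is prepended to `sys.path`).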

Configure app to deploy to Heroku

I posted a question earlier today here Heroku deploy problem.
I've had a lot of good suggestions, but could not get my app to deploy on Heroku.
I have stripped the app to 15 lines of code. The app still refuses to deploy.
This is the error:
File "/app/.heroku/python/bin/gunicorn", line 11, in <module>
    sys.exit(run())
  ...
    WSGIApplication("%(prog)s [OPTIONS] [APP_MODULE]").run()
ImportError: No module named 'main'
This is my app's directory (screenshot of the directory listing omitted):
This is the content of the Procfile:
web: gunicorn main:app --log-file=-
This is the content of the main.py file:
import os
from flask import Flask

app = Flask(__name__, instance_relative_config=True)
app.config.from_object('config')
app.config.from_pyfile('config.py')

@app.route('/')
def hello():
    return 'Hello World!'

if __name__ == '__main__':
    # REMEMBER: Never have this set to True on Production
    # manager.run()
    app.run()
I have followed all the tutorials, read up on modules and packages, seen suggestions on this site, and read Explore Flask and the official Flask documentation. They ALL have some variation of establishing an app, and it's very difficult to understand what the right way is or where files are supposed to go.
There are several problems in your example code:
You need a package.
No module named 'main'
In the Procfile you said: web: gunicorn main:app --log-file=-. The right way is to add an __init__.py beside main.py, so Python knows it is a package. Edit your Procfile to this:
web: gunicorn blackduckflock.main:app --log-file=-
The instance folder.
Since you specify instance_relative_config=True, I think the proper way to organize your project like this:
blackduckflock
├── blackduckflock
│   ├── __init__.py
│   └── main.py
├── config.py
├── instance
│   └── config.py
└── Procfile
And you can run gunicorn blackduckflock.main:app to see if it works.
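Under the hood, the `module:attr` string in the Procfile is split by Gunicorn into an import path and an attribute lookup. A simplified stdlib sketch of that resolution (the on-the-fly `blackduckflock` package here is a stand-in for the real project, and the real Gunicorn loader handles more cases):

```python
import importlib
import os
import sys
import tempfile

def resolve_wsgi(spec):
    """Roughly what Gunicorn does with 'module.path:attr' from the Procfile."""
    module_path, attr = spec.split(":")
    module = importlib.import_module(module_path)
    return getattr(module, attr)

# Fake 'blackduckflock/main.py' containing an `app` object:
tmp = tempfile.mkdtemp()
os.makedirs(os.path.join(tmp, "blackduckflock"))
open(os.path.join(tmp, "blackduckflock", "__init__.py"), "w").close()
with open(os.path.join(tmp, "blackduckflock", "main.py"), "w") as f:
    f.write("app = 'I am the app'\n")

sys.path.insert(0, tmp)
print(resolve_wsgi("blackduckflock.main:app"))  # I am the app
# Plain 'main:app' would fail here, because no top-level module
# named 'main' is importable -- hence ImportError: No module named 'main'.
```

This is why moving main.py into a package (with an `__init__.py`) and naming the full dotted path in the Procfile fixes the error.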

Why can't Celery daemon see tasks?

I have a Django 1.6.2 application running on Debian 7.8 with Nginx 1.2.1 as my proxy server and Gunicorn 19.1.1 as my application server. I've installed Celery 3.1.7 and RabbitMQ 2.8.4 to handle asynchronous tasks. I'm able to start a Celery worker as a daemon, but whenever I try to run the test "add" task as shown in the Celery docs, I get the following error:
Received unregistered task of type u'apps.photos.tasks.add'.
The message has been ignored and discarded.
Traceback (most recent call last):
File "/home/swing/venv/swing/local/lib/python2.7/site-packages/celery/worker/consumer.py", line 455, in on_task_received
strategies[name](message, body,
KeyError: u'apps.photos.tasks.add'
All of my configuration files are kept in a "conf" directory that sits just below my "myproj" project directory. The "add" task is in apps/photos/tasks.py.
myproj
├── apps
│   └── photos
│       ├── __init__.py
│       └── tasks.py
└── conf
    ├── celeryconfig.py
    ├── celeryconfig.pyc
    ├── celery.py
    ├── __init__.py
    ├── middleware.py
    ├── settings
    │   ├── base.py
    │   ├── dev.py
    │   ├── __init__.py
    │   └── prod.py
    ├── urls.py
    └── wsgi.py
Here is the tasks file:
# apps/photos/tasks.py
from __future__ import absolute_import
from conf.celery import app
@app.task
def add(x, y):
    return x + y
Here are my Celery application and configuration files:
# conf/celery.py
from __future__ import absolute_import
import os
from celery import Celery
from django.conf import settings
from conf import celeryconfig
os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'conf.settings')
app = Celery('conf')
app.config_from_object(celeryconfig)
app.autodiscover_tasks(lambda: settings.INSTALLED_APPS)
@app.task(bind=True)
def debug_task(self):
    print('Request: {0!r}'.format(self.request))
# conf/celeryconfig.py
BROKER_URL = 'amqp://guest@localhost:5672//'
CELERY_RESULT_BACKEND = 'amqp'
CELERY_ACCEPT_CONTENT = ['json', ]
CELERY_TASK_SERIALIZER = 'json'
CELERY_RESULT_SERIALIZER = 'json'
This is my Celery daemon config file. I commented out CELERY_APP because I've found that the Celery daemon won't even start if I uncomment it. I also found that I need to add the "--config" argument to CELERYD_OPTS in order for the daemon to start. I created a non-privileged "celery" user who can write to the log and pid files.
# /etc/default/celeryd
CELERYD_NODES="worker1"
CELERYD_LOG_LEVEL="DEBUG"
CELERY_BIN="/home/myproj/venv/myproj/bin/celery"
#CELERY_APP="conf"
CELERYD_CHDIR="/www/myproj/"
CELERYD_OPTS="--time-limit=300 --concurrency=8 --config=celeryconfig"
CELERYD_LOG_FILE="/var/log/celery/%N.log"
CELERYD_PID_FILE="/var/run/celery/%N.pid"
CELERYD_USER="celery"
CELERYD_GROUP="celery"
CELERY_CREATE_DIRS=1
I can see from the log file that when I run the command, "sudo service celeryd start", Celery starts without any errors. However, if I open the Python shell and run the following commands, I'll see the error I described at the beginning.
$ python manage.py shell
In [1]: from apps.photos.tasks import add
In [2]: result = add.delay(2, 2)
What's interesting is that if I examine Celery's registered tasks object, the task is listed:
In [3]: import celery
In [4]: celery.registry.tasks
Out[4]: {'celery.chain': ..., 'apps.photos.tasks.add': <@task: apps.photos.tasks.add of conf:0x16454d0> ...}
Other similar questions here have discussed having a PYTHONPATH environment variable and I don't have such a variable. I've never understood how to set PYTHONPATH and this project has been running just fine for over a year without it.
I should also add that my production settings file is conf/settings/prod.py. It imports all of my base (tier-independent) settings from base.py and adds some extra production-dependent settings.
Can anyone tell me what I'm doing wrong? I've been struggling with this problem for three days now.
Thanks!
It looks like this is happening due to a relative import error.
>>> from project.myapp.tasks import mytask
>>> mytask.name
'project.myapp.tasks.mytask'
>>> from myapp.tasks import mytask
>>> mytask.name
'myapp.tasks.mytask'
If you’re using relative imports you should set the name explicitly.
@task(name='proj.tasks.add')
def add(x, y):
    return x + y
Check out: http://celery.readthedocs.org/en/latest/userguide/tasks.html#automatic-naming-and-relative-imports
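Celery's automatic name is just the function's `__module__` joined with its name, which is why one file imported under two different dotted paths yields two different task names. A stdlib-only demonstration, no Celery required (`project`/`myapp` are hypothetical names):

```python
import importlib
import os
import sys
import tempfile

# One tasks.py, reachable under two different dotted paths:
tmp = tempfile.mkdtemp()
os.makedirs(os.path.join(tmp, "project", "myapp"))
for d in ("project", os.path.join("project", "myapp")):
    open(os.path.join(tmp, d, "__init__.py"), "w").close()
with open(os.path.join(tmp, "project", "myapp", "tasks.py"), "w") as f:
    f.write("def mytask():\n    pass\n")

sys.path.insert(0, tmp)                           # parent of 'project'
sys.path.insert(0, os.path.join(tmp, "project"))  # 'project' itself

a = importlib.import_module("project.myapp.tasks").mytask
b = importlib.import_module("myapp.tasks").mytask

# Celery derives '<module>.<name>' as the default task name:
print(a.__module__ + "." + a.__name__)  # project.myapp.tasks.mytask
print(b.__module__ + "." + b.__name__)  # myapp.tasks.mytask
```

If the worker registers the task under one of these names and the caller sends it under the other, you get exactly the "Received unregistered task" KeyError above; setting `name=` explicitly sidesteps the ambiguity.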
I'm using celery 4.0.2 and django, and I created a celery user and group for use with celeryd and had this same problem. The command-line version worked fine, but celeryd was not registering the tasks. It was NOT a relative naming problem.
The solution was to add the celery user to the group that can access the django project. In my case, this group is www-data with read, execute, and no write.

Deploying flask site/application on pythonanywhere.com

I have a working sample site with the file system as such (https://github.com/alvations/APE):
APE
  \app
    \templates
      base.html
      index.html
      instance.html
    __init__.py
    hamlet.py
  config.py
  run.py
I have created a flask project on https://www.pythonanywhere.com and the file system is as such:
/home/alvations/
  /Dropbox/
  /mysite/
    /templates
      base.html
      index.html
      instance.html
    flask_app.py
  /web2py/
Where do I place my run.py in my pythonanywhere project?
How do I use the same file structure as my GitHub project on PythonAnywhere?
PythonAnywhere dev here -- you don't need a run.py on PythonAnywhere. The code that normally goes in there is to run a local Flask server that can serve your app -- that's all handled for you by our system.
Instead, you need to change the WSGI file (linked from the "Web" tab) to import the appropriate application module. So, because the sample site you have on github does
from app import app
app.run(debug=True)
...on PythonAnywhere in the WSGI file you'll need to do this:
from app import app as application
One thing to be aware of: if I'm understanding your file listings above correctly, you don't have all of the GitHub app installed, only the templates. You'll need __init__.py, hamlet.py, and config.py, and they'll need to be in the same directory structure as the original.

Switching to Amazon CDN from dj-static with django on Heroku

I have been developing a django site on Heroku and using dj-static in my wsgi.py. I am now about to move my site static files onto Amazon. Do I need to now remove the references to dj-static from my wsgi.py file? I'm concerned about the following lines of code. What would be the correct thing to do? Do they need to go? If so, what do I put in their place?:
from django.core.wsgi import get_wsgi_application
from dj_static import Cling
application = Cling(get_wsgi_application())
Thanks,
Euan
If you are serving files using django-storages, you don't need dj-static anymore. dj-static is only needed when you want to serve the static files from the WSGI process itself (e.g. behind a WSGI server like Gunicorn); in your case Amazon's servers will serve them.
To answer your question, you can revert to the default wsgi.py file which looks something like this:
import os
os.environ.setdefault("DJANGO_SETTINGS_MODULE", "myproject.settings")
from django.core.wsgi import get_wsgi_application
application = get_wsgi_application()
You can also remove dj-static from your virtualenv and from requirements.txt
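For completeness, the Amazon side is usually wired up with django-storages; a minimal sketch of the settings involved (the bucket name is a placeholder, and exact setting names can vary between django-storages versions, so check the docs for yours):

```python
# settings.py -- sketch only; values below are placeholders.
INSTALLED_APPS = [
    # ... your existing apps ...
    "storages",
]

AWS_STORAGE_BUCKET_NAME = "my-bucket"  # placeholder bucket name
AWS_S3_CUSTOM_DOMAIN = "%s.s3.amazonaws.com" % AWS_STORAGE_BUCKET_NAME

# Serve static files from S3 instead of the WSGI process:
STATICFILES_STORAGE = "storages.backends.s3boto3.S3Boto3Storage"
STATIC_URL = "https://%s/" % AWS_S3_CUSTOM_DOMAIN
```

With this in place, `collectstatic` uploads to the bucket and templates using `{% static %}` emit the S3 (or CloudFront) URL, so nothing in wsgi.py has to know about static files at all.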
