I'm trying to deploy Django with uWSGI, and I think I lack an understanding of how it all works. I have uWSGI running in emperor mode, and I'm trying to get the vassals to run in their own virtualenvs, with a different Python version.
The emperor configuration:
[uwsgi]
socket = /run/uwsgi/uwsgi.socket
pidfile = /run/uwsgi/uwsgi.pid
emperor = /etc/uwsgi.d
emperor-tyrant = true
master = true
autoload = true
log-date = true
logto = /var/log/uwsgi/uwsgi-emperor.log
And the vassal:
uid=django
gid=django
virtualenv=/home/django/sites/mysite/venv/bin
chdir=/home/django/sites/mysite/site
module=mysite.uwsgi:application
socket=/tmp/uwsgi_mysite.sock
master=True
I'm seeing the following error in the emperor log:
Traceback (most recent call last):
File "./mysite/uwsgi.py", line 11, in <module>
import site
ImportError: No module named site
The virtualenv for my site was created as a Python 3.4 pyvenv. The uWSGI binary is the system one (built against Python 2.6). I was under the impression that the emperor could be any Python version, as each vassal would be launched with its own Python and environment by the master process. I now think this is wrong.
What I'd like is to run the uWSGI master process with the system Python, but the various vassals (applications) with their own Python and their own libraries. Is this possible? Or am I going to have to run multiple emperors if I want to run multiple Python versions? That kind of defeats the purpose of having virtual environments.
The "elegant" way is building the uWSGI python support as a plugin, and having a plugin for each python version:
(from the uWSGI sources)
make PROFILE=nolang
(this builds a uWSGI binary without language support)
PYTHON=python2.7 ./uwsgi --build-plugin "plugins/python python27"
This builds python27_plugin.so, which you can load in your vassals.
PYTHON=python3 ./uwsgi --build-plugin "plugins/python python3"
This builds the plugin for Python 3, and so on.
There are various ways to build uWSGI plugins; the one I am reporting is the safest (it ensures the #ifdefs are honoured).
Having said that, having a uWSGI Emperor for each Python version is viable too. Remember that Emperors are stackable, so you can have a generic Emperor spawning one Emperor (as its vassal) for each Python version.
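As a minimal sketch of the plugin approach, a vassal could then load the matching plugin like this (the plugin name and paths are illustrative, assuming python3_plugin.so was copied to a directory uWSGI can search; the rest mirrors the config in the question):
[uwsgi]
# load the Python 3 plugin built above
plugin-dir = /usr/local/lib/uwsgi/plugins
plugin = python3
# point at the root of the virtualenv, not its bin/ subdirectory
virtualenv = /home/django/sites/mysite/venv
chdir = /home/django/sites/mysite/site
module = mysite.uwsgi:application
socket = /tmp/uwsgi_mysite.sock
master = true
Note that virtualenv should point at the root of the environment, not its bin/ directory as in the question's config.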
Pip install uWSGI
One option would be to simply install uWSGI with pip in your virtualenvs and start your services separately:
pip install uwsgi
~/.virtualenvs/venv-name/lib/pythonX.X/site-packages/uwsgi --ini path/to/ini-file
Install uWSGI from source and build python plugins
If you want a system-wide uWSGI build, you can build it from source and install plugins for multiple python versions. You'll need root privileges for this.
First you may want to install multiple system-wide python versions.
Make sure you have any dependencies installed. For pcre, on a Debian-based distribution use:
apt install libpcre3 libpcre3-dev
Download and build the latest uWSGI source into /usr/local/src, replacing X.X.X.X below with the package version (e.g. 2.0.19.1):
wget http://projects.unbit.it/downloads/uwsgi-latest.tar.gz
tar vzxf uwsgi-latest.tar.gz
cd uwsgi-X.X.X.X/
make PROFILE=nolang
Symlink the versioned folder uwsgi-X.X.X.X to give it the generic name, uwsgi:
ln -s /usr/local/src/uwsgi-X.X.X.X /usr/local/src/uwsgi
Create a symlink to the build so it's on your PATH:
ln -s /usr/local/src/uwsgi/uwsgi /usr/local/bin
Build python plugins for the versions you need:
PYTHON=pythonX.X ./uwsgi --build-plugin "plugins/python pythonXX"
For example, for python3.8:
PYTHON=python3.8 ./uwsgi --build-plugin "plugins/python python38"
Create a plugin directory in an appropriate location:
mkdir -p /usr/local/lib/uwsgi/plugins/
Symlink the created plugins to this directory. For example, for python3.8:
ln -s /usr/local/src/uwsgi/python38_plugin.so /usr/local/lib/uwsgi/plugins
Then in your uWSGI configuration (project.ini) files, specify the plugin directory and the plugin:
plugin-dir = /usr/local/lib/uwsgi/plugins
plugin = python38
Make sure to create your virtualenvs with the same Python version that you built the plugin with. For example, if you created python38_plugin.so with python3.8 and you have plugin = python38 in your project.ini file, then an easy way to create a virtualenv with python3.8 is:
python3.8 -m virtualenv path/to/project/virtualenv
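Putting these pieces together, a vassal's project.ini might look roughly like this (the chdir, module, virtualenv and socket values are placeholders for your own project):
[uwsgi]
plugin-dir = /usr/local/lib/uwsgi/plugins
plugin = python38
chdir = /path/to/project
module = project.wsgi:application
virtualenv = path/to/project/virtualenv
master = true
processes = 4
socket = /tmp/project.sock
vacuum = true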
Environment: Ubuntu 16.04 (with system Python at 2.7.12) running in Vagrant/Virtualbox on Windows 10 host
Python Setup: System python verified by doing python -V with no virtualenvs activated. Python 3.5 is also installed, and I've done pipenv --three to create the virtualenv for this project. Doing python -V within the activated virtualenv (pipenv shell to activate) shows Python 3.5.2.
Additional Background: I'm developing a Wagtail 2 app. Wagtail 2 requires Django 2 which, of course, requires Python 3. I have other Django apps on this machine that were developed in Django 1.11/Python 2.7 and are served by Apache. We are moving to Django 2/Python 3 for development going forward and are moving to nginx/uWSGI for serving the apps.
I have gone through many tutorials/many iterations. All Vagrant port mapping is set up fine, with nginx serving media/static files and passing requests upstream to the Django app on a unix socket, but this gives a 502 (Bad Gateway) error because uWSGI will not run correctly. Therefore, right now I'm simply running the following from the command line to try to get uWSGI to run: uwsgi --ini /etc/uwsgi/sites/my_site.com.ini. This file contains:
[uwsgi]
uid = www-data
gid = www-data
plugin = python35
# Django-related settings
# the base directory (full path)
chdir=/var/sites/my_site
# Django's wsgi file
wsgi-file = my_site.wsgi
# the virtualenv (full path)
virtualenv=/root/.local/share/virtualenvs/my_site-gmmiTMID
# process-related settings
# master
master = True
# maximum number of worker processes
processes = 10
# the socket (use the full path to be safe)
socket = /tmp/my_site.sock
# clear environment on exit
vacuum = True
I've tried installing uWSGI in the following ways:
system-wide with pip install uwsgi as well as pip3 install uwsgi
using apt-get install uwsgi uwsgi-plugin-python3
I've ensured that only one install is in place at a time by uninstalling any previous uwsgi installs. The latter install method places uwsgi-core in /usr/bin and also places shortcuts to uwsgi, uwsgi_python3, and uwsgi_python35 in /usr/bin.
In the .ini file I've also tried plugin = python3. I've also tried from the command line:
uwsgi_python3 --ini /etc/uwsgi/sites/my_site.com.ini
uwsgi_python35 --ini /etc/uwsgi/sites/my_site.com.ini
I've tried executing the uwsgi ... .ini commands both within the activated virtual environment and with the virtualenv deactivated. Each of the three command-line invocations (uwsgi ..., uwsgi_python3 ... and uwsgi_python35 ...) does cause the .ini file to be executed, but each time I get the following error (the last two lines of the output below):
[uwsgi] implicit plugin requested python35
[uWSGI] getting INI configuration from /etc/uwsgi/sites/my_site.com.ini
*** Starting uWSGI 2.0.12-debian (64bit) on [Wed Mar 7 03:54:44 2018] ***
compiled with version: 5.4.0 20160609 on 31 August 2017 21:02:04
os: Linux-4.4.0-116-generic #140-Ubuntu SMP Mon Feb 12 21:23:04 UTC 2018
nodename: vagrant-ubuntu-trusty-64
machine: x86_64
clock source: unix
pcre jit disabled
detected number of CPU cores: 2
current working directory: /vagrant/my_site
detected binary path: /usr/bin/uwsgi-core
setgid() to 33
setuid() to 33
chdir() to /var/sites/my_site
your processes number limit is 7743
your memory page size is 4096 bytes
detected max file descriptor number: 1024
lock engine: pthread robust mutexes
thunder lock: disabled (you can enable it with --thunder-lock)
uwsgi socket 0 bound to UNIX address /tmp/my_site.sock fd 3
Python version: 3.5.2 (default, Nov 23 2017, 16:37:01) [GCC 5.4.0 20160609]
Set PythonHome to /root/.local/share/virtualenvs/my_site-gmmiTMID
Fatal Python error: Py_Initialize: Unable to get the locale encoding
ImportError: No module named 'encodings'
If I go into the Python command line within the activated virtualenv and do import encodings, it imports fine (no message - just comes back to command line). A search for this particular error has turned up nothing of use. Any idea why the ImportError: No module named 'encodings' is coming up?
UPDATE - PROBLEM STILL OCCURRING
I'm using pipenv, and it stores individual virtualenvs in the /home/username/.local/share/virtualenvs folder. Though I was able to start uWSGI from the command line by executing the uWSGI config file as the vagrant user (see comment below), I have still not been able to start the service with /home/vagrant/.local/share/virtualenvs/my_venv in the uWSGI config file. I tried adding the vagrant user to the www-data group and adding the www-data user to the vagrant group. I also put both read and execute permission for the world on the whole path (including the individual venv), but the uWSGI service will still not start.
A HACKISH WORKAROUND
I did finally get the uWSGI service to start by copying the venv to /opt/virtualenvs/my_venv. I was then able to start the service with sudo service uwsgi start. Ownership of that whole path is root:root.
STILL LOOKING FOR A SOLUTION...
This solution is not optimal, since I am now executing from a copy of the virtualenv that will have to be updated whenever the original virtualenv is updated (this location is not the default for pipenv), so I'm still looking for answers. Perhaps it is an Ubuntu permissions error, but I just can't find the problem.
It might be a problem with your virtual environment. Try the following:
rm -rf /root/.local/share/virtualenvs/my_site-gmmiTMID
virtualenv -p python3 /root/.local/share/virtualenvs/my_site-gmmiTMID
source /root/.local/share/virtualenvs/my_site-gmmiTMID/bin/activate
pip install -r requirements.txt
and in the uWSGI config, try changing
virtualenv=/root/.local/share/virtualenvs/my_site-gmmiTMID
to
home=/root/.local/share/virtualenvs/my_site-gmmiTMID
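With that change, the Python-related part of the .ini would look like this (everything else stays as posted in the question):
[uwsgi]
plugin = python35
chdir = /var/sites/my_site
wsgi-file = my_site.wsgi
home = /root/.local/share/virtualenvs/my_site-gmmiTMID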
Reference:
http://uwsgi-docs.readthedocs.io/en/latest/Options.html#virtualenv
I am relatively new to using uWSGI to serve Python applications and I am attempting to start a uWSGI process in emperor mode with a vassal, but every time I try to start uWSGI inside of Docker with the following command (as root):
# /usr/local/bin/uwsgi --ini /etc/uwsgi/emperor.ini
What I get as a response is:
[uWSGI] getting INI configuration from /etc/uwsgi/emperor.ini
2.0.13.1
The emperor.ini configuration file looks like:
# files/etc/uwsgi/emperor.ini
[uwsgi]
emperor = /etc/uwsgi/apps-enabled
die-on-term = true
log-date = true
While the only vassal's configuration looks like:
# files/etc/uwsgi/apps-enabled/application.ini
[uwsgi]
app_dir = /var/www/server
plugin = python
master = true
callable = app
chdir = %(app_dir)
mount = /=%(app_dir)/start.py
protocol = uwsgi
socket = :8079
uid = www-data
gid = www-data
buffer-size = 32768
enable-threads = true
single-interpreter = true
processes = 1
stats = 127.0.0.1:1717
(NB: the filenames above are given relative to the Dockerfile, which then copies them to the correct locations, basically stripping the files/ prefix)
Currently the uWSGI Docker image I'm using is built off of an ubuntu:trusty base image (though I've tried ubuntu:latest and alpine:latest and encountered the same problem), and although I am attempting to launch the uWSGI process with supervisor, as stated previously, it also fails when run directly from the command line. In the Docker image I'm installing uWSGI using pip but have also tried using apt-get with the same result.
I should also mention that I've tried different uWSGI versions (2.0.13.1 and 1.9.something) with the same result, if that helps.
# Dockerfile
FROM ubuntu:trusty
MAINTAINER Sean Quinn "me@mail.com"
RUN apt-get update \
&& apt-get install -y \
ack-grep git nano \
supervisor \
build-essential gcc python python-dev python-pip
RUN sed -i 's/^\(\[supervisord\]\)$/\1\nnodaemon=true/' /etc/supervisor/supervisord.conf \
&& sed -i 's/^\(\[supervisord\]\)$/\1\nloglevel=debug/' /etc/supervisor/supervisord.conf \
&& sed -i 's/^\(files = .*\)$/;\1/' /etc/supervisor/supervisord.conf \
&& sed -i 's/^\(\[include\]\)$/\1\nfiles = \/etc\/supervisor\/conf.d\/*.conf/' /etc/supervisor/supervisord.conf
ENV UWSGI_VERSION 2.0.13.1
RUN pip install uwsgi==${UWSGI_VERSION}
RUN mkdir -p /etc/uwsgi \
&& mkdir -p /etc/uwsgi/apps-available \
&& mkdir -p /etc/uwsgi/apps-enabled \
&& mkdir -p /var/log/uwsgi
COPY files/etc/supervisor/conf.d/uwsgi.conf /etc/supervisor/conf.d/uwsgi.conf
COPY files/etc/uwsgi/emperor.ini /etc/uwsgi/emperor.ini
VOLUME /etc/uwsgi/apps-enabled
VOLUME /var/www
ENTRYPOINT ["/usr/bin/supervisord"]
CMD ["-c", "/etc/supervisor/supervisord.conf"]
As mentioned, the supervisord process attempts to launch the uWSGI process using the following supervisor configuration.
# files/etc/supervisor/conf.d/uwsgi.conf
[program:uwsgi]
command=/usr/local/bin/uwsgi --ini /etc/uwsgi/emperor.ini
user=root
The application Python files are mounted in a subdirectory of /var/www and the application uWSGI configuration is mounted into /etc/uwsgi/apps-enabled.
The bizarre thing is, if I install supervisor and uWSGI on a new Ubuntu VM (outside of Docker) with all of the configuration and files in place I can see uWSGI properly process the emperor.ini and read the vassal .ini files. I haven't yet attempted to add nginx into the equation because I want to make sure uWSGI is starting and reading configuration files correctly first and foremost.
Is there any way to increase the logging or ascertain why I'm only seeing what appears to be the version number of the uWSGI binary? It's like the uWSGI process is completely ignoring command line options. I feel like I'm missing something that should be obvious.
Thanks in advance for any help anyone can give!
tl;dr: don't use UWSGI_VERSION as an environment variable; it apparently forces uWSGI to just print the version number instead of starting.
I believe I solved my own issue!
After experimenting with other uWSGI images on Docker's hub, I found that they were also running into the same issue, so I began to look further into possible configuration issues. I tried changing permissions, among other things.
I noticed, however, that when I used jpetazzo/nsenter to enter the running container, uWSGI started (rather than simply outputting the version information as highlighted above). When entering using docker exec, uWSGI would only print the version information. After playing around a bit more, I discovered that issuing su - from within a container entered via docker exec also made uWSGI start.
After some inspection, I discovered several differences in the environment variables between the root user in one shell vs. another. It led me to the UWSGI_VERSION environment variable, which appears to have been the culprit because removing UWSGI_VERSION allowed uWSGI to start.
I modified my Dockerfile to use UWSGI_PIP_VERSION instead as the environment variable to indicate the version of uWSGI to install, which seems to be a safe alternative to UWSGI_VERSION. YMMV.
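For illustration, the relevant Dockerfile lines after the rename look like this (UWSGI_PIP_VERSION is just a name I picked; the point is to avoid the UWSGI_ prefix collision, since uWSGI treats UWSGI_-prefixed environment variables as configuration options and apparently reads UWSGI_VERSION as a request to print the version):
# any name other than UWSGI_VERSION avoids the clash
ENV UWSGI_PIP_VERSION 2.0.13.1
RUN pip install uwsgi==${UWSGI_PIP_VERSION}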
I'm trying to run a Django application on uWSGI but get the error below.
uwsgi --http :8000 --home /home/cuser/.virtualenvs/vq --chdir /var/www/sid/sid -w wsgi.py
uwsgi: option '--http' is ambiguous
getopt_long() error
When I change --http to --socket it works, but then it says --home is ambiguous.
This is most likely because you have uWSGI installed from your distribution's packaged binaries, which are more minimal in their build and lack some of the plugins.
You can fix this by running pip install uwsgi and replacing uwsgi in your command with the path to the pip-installed binary, which you can find using pip show uwsgi.
[Please note: use pip3 if you're using python3]
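For example, assuming the virtualenv at /home/cuser/.virtualenvs/vq is the one where uWSGI was pip-installed (the exact paths are illustrative; pip show uwsgi prints the install location, and the executable normally sits in the environment's bin/ directory):
pip show uwsgi    # the Location field shows where it was installed
/home/cuser/.virtualenvs/vq/bin/uwsgi --http :8000 --home /home/cuser/.virtualenvs/vq --chdir /var/www/sid/sid -w wsgi.py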
Another method would be to download the respective http/python3 plugins and run the following command:
uwsgi --plugins http,python --http :8000 --home /home/cuser/.virtualenvs/vq --chdir /var/www/sid/sid -w wsgi.py
Something to take into account when using distro-supplied packages is that your distribution has very probably built uWSGI in a modular way (every feature is a different plugin that must be loaded).
You have to prepend --plugin python,http to the command, or just --plugin python when the HTTP router is not used.
Example
With --plugin python added:
uwsgi --http :8000 --plugin python --home /home/cuser/.virtualenvs/vq --chdir /var/www/sid/sid -w wsgi.py
try:
uwsgi --http=:8000 --home=/home/cuser/.virtualenvs/vq --chdir=/var/www/sid/sid -w wsgi.py
For some versions of getopt this should work. If not, try putting your parameters in a config file, or update the getopt library on your system and recompile uWSGI.
I installed ansible and apache-libcloud with pip. Also, I can use the gcloud CLI, and ansible works for any non-GCE-related playbooks.
When using the gce module as a task to create instances in an ansible playbook, the following error occurs:
TASK: [Launch instances] ******************************************************
<127.0.0.1> REMOTE_MODULE gce instance_names=mm2 machine_type=f1-micro image=ubuntu-1204-precise-v20150625 zone=europe-west1-d service_account_email= pem_file=../pkey.pem project_id=fancystuff-11
<127.0.0.1> EXEC ['/bin/sh', '-c', 'mkdir -p $HOME/.ansible/tmp/ansible-tmp-1437669562.03-233461447935889 && chmod a+rx $HOME/.ansible/tmp/ansible-tmp-1437669562.03-233461447935889 && echo $HOME/.ansible/tmp/ansible-tmp-1437669562.03-233461447935889']
<127.0.0.1> PUT /var/folders/v4/ll0_f8lj7yl7yghb645h95q9ckfc19/T/tmpyDoPt9 TO /Users/d046179/.ansible/tmp/ansible-tmp-1437669562.03-233461447935889/gce
<127.0.0.1> EXEC ['/bin/sh', '-c', u'LANG=en_US.UTF-8 LC_CTYPE=en_US.UTF-8 /usr/bin/python /Users/d046179/.ansible/tmp/ansible-tmp-1437669562.03-233461447935889/gce; rm -rf /Users/d046179/.ansible/tmp/ansible-tmp-1437669562.03-233461447935889/ >/dev/null 2>&1']
failed: [localhost -> 127.0.0.1] => {"failed": true, "parsed": false}
failed=True msg='libcloud with GCE support (0.13.3+) required for this module'
FATAL: all hosts have already failed -- aborting
And the site.yml of the playbook I wrote:
- name: Create a sandbox instance
  hosts: localhost
  vars:
    names: mm2
    machine_type: f1-micro
    image: ubuntu-1204-precise-v20150625
    zone: europe-west1-d
    service_account_email: xxx@developer.gserviceaccount.com
    pem_file: ../pkey.pem
    project_id: fancystuff-11
  tasks:
    - name: Launch instances
      local_action: gce instance_names={{names}} machine_type={{machine_type}}
                    image={{image}} zone={{zone}} service_account_email={{ service_account_email }}
                    pem_file={{ pem_file }} project_id={{ project_id }}
      register: gce
The gce cloud module fails with the error message "libcloud with GCE support (0.13.3+) required for this module".
However, running gce.py from the ansible github repo works. The python script finds the apache-libcloud library and prints a json with all running instances. Besides, pip install apache-libcloud states it is installed properly.
Is there anything I am missing like an environment variable that points to the python libraries (PYTHONPATH)?
UPDATE 1:
I included the following task before the gce task:
- name: install libcloud
  pip: name=apache-libcloud
This also does not change the behavior or prevent the error message.
Update 2:
I added the following task to inspect the available PYTHONPATH:
- name: Getting PYTHONPATH
  local_action: shell python -c 'import sys; print(":".join(sys.path))'
  register: pythonpath

- debug:
    msg: "PYTHONPATH: {{ pythonpath.stdout }}"
The following is returned:
PYTHONPATH: :/usr/local/lib/python2.7/site-packages/setuptools-17.1.1-py2.7.egg:/usr/local/lib/python2.7/site-packages/pip-7.0.3-py2.7.egg:/usr/local/lib/python2.7/site-packages:/usr/local/Cellar/python/2.7.10/Frameworks/Python.framework/Versions/2.7/lib/python27.zip:/usr/local/Cellar/python/2.7.10/Frameworks/Python.framework/Versions/2.7/lib/python2.7:/usr/local/Cellar/python/2.7.10/Frameworks/Python.framework/Versions/2.7/lib/python2.7/plat-darwin:/usr/local/Cellar/python/2.7.10/Frameworks/Python.framework/Versions/2.7/lib/python2.7/plat-mac:/usr/local/Cellar/python/2.7.10/Frameworks/Python.framework/Versions/2.7/lib/python2.7/plat-mac/lib-scriptpackages:/usr/local/Cellar/python/2.7.10/Frameworks/Python.framework/Versions/2.7/lib/python2.7/lib-tk:/usr/local/Cellar/python/2.7.10/Frameworks/Python.framework/Versions/2.7/lib/python2.7/lib-old:/usr/local/Cellar/python/2.7.10/Frameworks/Python.framework/Versions/2.7/lib/python2.7/lib-dynload:/usr/local/lib/python2.7/site-packages:/Library/Python/2.7/site-packages
UPDATE 3:
I introduced my own test.py script as a task which executes the same apache-libcloud imports as the gce ansible module. The script imports just fine!!!
Setting the PYTHONPATH fixes the issue. For example:
$ export PYTHONPATH=/usr/local/lib/python2.7/site-packages/
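If you prefer to keep this inside the playbook rather than your shell, here is a sketch using Ansible's environment keyword on the task (the site-packages path is whatever your pip install reports on your machine):
- name: Launch instances
  local_action: gce instance_names={{ names }} machine_type={{ machine_type }} image={{ image }} zone={{ zone }} service_account_email={{ service_account_email }} pem_file={{ pem_file }} project_id={{ project_id }}
  environment:
    PYTHONPATH: /usr/local/lib/python2.7/site-packages/
  register: gce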
I'm using OS X and I solved this for myself. Short answer: install ansible with pip (rather than e.g. brew).
I inspected the PYTHONPATH that Ansible sets at runtime and it looked like it had nothing to do with my normal system PYTHONPATH. E.g. for me, my system PYTHONPATH was empty, and setting it as e.g. mlazarov suggested didn't make any difference. I made ansible print the PYTHONPATH it uses at runtime, and it looked like this:
ok: [localhost] => {
"msg": "PYTHONPATH: :/usr/local/Cellar/ansible/1.9.4/libexec/lib/python2.7/site-packages:/usr/local/Cellar/ansible/1.9.4/libexec/vendor/lib/python2.7/site-packages:/Library/Frameworks/Python.framework/Versions/3.4/lib/python34.zip:/Library/Frameworks/Python.framework/Versions/3.4/lib/python3.4:/Library/Frameworks/Python.framework/Versions/3.4/lib/python3.4/plat-darwin:/Library/Frameworks/Python.framework/Versions/3.4/lib/python3.4/lib-dynload:/Library/Frameworks/Python.framework/Versions/3.4/lib/python3.4/site-packages"
}
So there were only ansible's own site-packages and some unrelated Python 3 installations (I'm using Python 2.7).
Something in this discussion made me think it might be a problem with the ansible installation; my ansible was installed with brew. I reinstalled it globally with pip (simply running sudo pip install ansible), and that fixed the problem. Now the PYTHONPATH ansible prints looks much better, with my virtualenv Python installation at the beginning, and there's no more "libcloud with GCE support (0.13.3+) required for this module".
I was able to resolve the issue by setting the PYTHONPATH environment variable (export PYTHONPATH=/path/to/site-packages) with the current site-packages folder. Apparently, ansible establishes its own environment during module execution and ignores any paths available in python except the paths from the environment variable PYTHONPATH.
I find this a peculiar behavior, and it is not documented on the Ansible website.
I have a similar environment setup. I found some information at the bottom of this section: https://github.com/jlund/streisand#prerequisites
Essentially, there are some magic files you can update so that the brewed ansible will add a folder to its package search path:
mkdir -p ~/Library/Python/2.7/lib/python/site-packages
echo '/usr/local/lib/python2.7/site-packages' > ~/Library/Python/2.7/lib/python/site-packages/homebrew.pth
Hope that fixes it for you!
In my case, it was simply a matter of:
pip install apache-libcloud
I installed uwsgi using pip install uwsgi.
When I run uwsgi, I get a couple of errors. The command I'm running is uwsgi --master --emperor /etc/uwsgi/apps-enabled --die-on-term --uid www-data --gid www-data.
It appears that I'm missing the http and python plugins:
[uWSGI] getting INI configuration from component_tracking_test.ini
open("./http_plugin.so"): No such file or directory [core/utils.c line 3347]
!!! UNABLE to load uWSGI plugin: ./http_plugin.so: cannot open shared object file: No such file or directory !!!
open("./python_plugin.so"): No such file or directory [core/utils.c line 3347]
!!! UNABLE to load uWSGI plugin: ./python_plugin.so: cannot open shared object file: No such file or directory !!!
[emperor] removed uwsgi instance component_tracking_test.ini
How do I install the required plugins given that I have installed uwsgi via pip?
When I add "--binary-path /usr/local/bin/uwsgi" (change path to your wsgi bin) to the command, the error went away.
from the docu
binary-path
Argument: string
Force binary path.
If you do not have uWSGI in the system path you can force its path with this option to permit the reloading system and the Emperor to easily find the binary to execute.
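Applied to the command from the question, that would be (adjust the path to wherever pip installed your uwsgi binary):
uwsgi --master --emperor /etc/uwsgi/apps-enabled --die-on-term --uid www-data --gid www-data --binary-path /usr/local/bin/uwsgi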
I just had a similar problem and the reason was that I was running sudo uwsgi, not realizing that sudo won't respect the PATH and will launch the system-wide uwsgi. See this answer.
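If you do need root, one workaround is to resolve the pip-installed binary in your own shell before handing it to sudo, for example:
sudo $(which uwsgi) --master --emperor /etc/uwsgi/apps-enabled --die-on-term --uid www-data --gid www-data
Here $(which uwsgi) expands in the calling shell, so it picks up the pip-installed binary on your PATH rather than the system-wide one.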