I am relatively new to using uWSGI to serve Python applications. I am attempting to start a uWSGI process in emperor mode with a vassal, but every time I try to start uWSGI inside Docker with the following command (as root):
# /usr/local/bin/uwsgi --ini /etc/uwsgi/emperor.ini
What I get as a response is:
[uWSGI] getting INI configuration from /etc/uwsgi/emperor.ini
2.0.13.1
The emperor.ini configuration file looks like:
# files/etc/uwsgi/emperor.ini
[uwsgi]
emperor = /etc/uwsgi/apps-enabled
die-on-term = true
log-date = true
The configuration for the only vassal looks like:
# files/etc/uwsgi/apps-enabled/application.ini
[uwsgi]
app_dir = /var/www/server
plugin = python
master = true
callable = app
chdir = %(app_dir)
mount = /=%(app_dir)/start.py
protocol = uwsgi
socket = :8079
uid = www-data
gid = www-data
buffer-size = 32768
enable-threads = true
single-interpreter = true
processes = 1
stats = 127.0.0.1:1717
(NB: the filenames above are given relative to the Dockerfile, which copies them to the corresponding locations, essentially stripping the files/ prefix.)
Currently the uWSGI Docker image I'm using is built off an ubuntu:trusty base image (though I've tried ubuntu:latest and alpine:latest and hit the same problem). Although I am attempting to launch the uWSGI process with supervisor, as stated previously, it also fails when run directly from the command line. In the Docker image I'm installing uWSGI using pip, but I have also tried apt-get with the same result.
I should also mention that I've tried different uWSGI versions (2.0.13.1 and 1.9.x) with the same result, if that helps.
# Dockerfile
FROM ubuntu:trusty
MAINTAINER Sean Quinn "me@mail.com"
RUN apt-get update \
&& apt-get install -y \
ack-grep git nano \
supervisor \
build-essential gcc python python-dev python-pip
RUN sed -i 's/^\(\[supervisord\]\)$/\1\nnodaemon=true/' /etc/supervisor/supervisord.conf \
&& sed -i 's/^\(\[supervisord\]\)$/\1\nloglevel=debug/' /etc/supervisor/supervisord.conf \
&& sed -i 's/^\(files = .*\)$/;\1/' /etc/supervisor/supervisord.conf \
&& sed -i 's/^\(\[include\]\)$/\1\nfiles = \/etc\/supervisor\/conf.d\/*.conf/' /etc/supervisor/supervisord.conf
ENV UWSGI_VERSION 2.0.13.1
RUN pip install uwsgi==${UWSGI_VERSION}
RUN mkdir -p /etc/uwsgi \
&& mkdir -p /etc/uwsgi/apps-available \
&& mkdir -p /etc/uwsgi/apps-enabled \
&& mkdir -p /var/log/uwsgi
COPY files/etc/supervisor/conf.d/uwsgi.conf /etc/supervisor/conf.d/uwsgi.conf
COPY files/etc/uwsgi/emperor.ini /etc/uwsgi/emperor.ini
VOLUME /etc/uwsgi/apps-enabled
VOLUME /var/www
ENTRYPOINT ["/usr/bin/supervisord"]
CMD ["-c", "/etc/supervisor/supervisord.conf"]
As mentioned, the supervisord process attempts to launch the uWSGI process using the following supervisor configuration.
# files/etc/supervisor/conf.d/uwsgi.conf
[program:uwsgi]
command=/usr/local/bin/uwsgi --ini /etc/uwsgi/emperor.ini
user=root
The application Python files are mounted in a subdirectory of /var/www and the application uWSGI configuration is mounted into /etc/uwsgi/apps-enabled.
The bizarre thing is, if I install supervisor and uWSGI on a fresh Ubuntu VM (outside of Docker) with all of the configuration and files in place, I can see uWSGI properly process emperor.ini and read the vassal .ini files. I haven't yet attempted to add nginx into the equation because I want to make sure uWSGI is starting and reading configuration files correctly first and foremost.
Is there any way to increase the logging or otherwise ascertain why I'm only seeing what appears to be the version number of the uWSGI binary? It's as if the uWSGI process is completely ignoring its command line options. I feel like I'm missing something that should be obvious.
Thanks in advance for any help anyone can give!
tl;dr: don't use UWSGI_VERSION as an environment variable; apparently it forces uWSGI to print its version number instead of starting.
I believe I solved my own issue!
After experimenting with other uWSGI images from Docker Hub, I found that they ran into the same issue, so I began to look further into possible configuration issues. I tried changing permissions, among other things.
I noticed, however, that when I used jpetazzo/nsenter to enter the running container, uWSGI started (rather than simply printing the version information as shown above). When entering with docker exec, uWSGI would only print the version information. After playing around a bit more, I discovered that running su - inside a container entered via docker exec also let uWSGI start.
After some inspection, I discovered several differences in the environment variables between the root user in one shell vs. the other. That led me to the UWSGI_VERSION environment variable, which turned out to be the culprit: removing UWSGI_VERSION allowed uWSGI to start. (uWSGI translates environment variables of the form UWSGI_<OPTION> into the corresponding options, so UWSGI_VERSION is effectively read as the --version flag, which prints the version and exits.)
I modified my Dockerfile to use UWSGI_PIP_VERSION instead as the environment variable to indicate the version of uWSGI to install, which seems to be a safe alternative to UWSGI_VERSION. YMMV.
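For reference, the fix amounts to renaming the variable in the Dockerfile, and the diagnosis can be confirmed inside a running container by unsetting the variable before launching (a sketch of both):
# Dockerfile: rename the variable so uWSGI doesn't interpret it as an option
ENV UWSGI_PIP_VERSION 2.0.13.1
RUN pip install uwsgi==${UWSGI_PIP_VERSION}
# inside the container: uWSGI starts normally once the variable is unset
env -u UWSGI_VERSION /usr/local/bin/uwsgi --ini /etc/uwsgi/emperor.ini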
Related
I am trying to dockerize this repo. After building it like so:
docker build -t layoutlm-v2 .
I try to run it like so:
docker run -d -p 5001:5000 layoutlm-v2
It downloads the necessary libraries and packages, and then nothing: no errors, no endpoints generated, just radio silence.
What's wrong? And how do I fix it?
You appear to be expecting your application to offer a service on port 5000, but it doesn't appear as if that's how your code behaves.
Looking at your code, you seem to be launching a service using gradio. According to the quickstart, calling gr.Interface(...).launch() will launch a service on localhost:7860, and indeed, if you inspect a container booted from your image, we see:
root@74cf8b2463ab:/app# ss -tln
State Recv-Q Send-Q Local Address:Port Peer Address:Port Process
LISTEN 0 2048 127.0.0.1:7860 0.0.0.0:*
There's no way to access a service listening on localhost from outside the container, so we need to figure out how to fix that.
Looking at these docs, it looks like you can control the listen address using the server_name parameter:
server_name
to make app accessible on local network, set this to "0.0.0.0". Can be set by environment variable GRADIO_SERVER_NAME. If None, will use "127.0.0.1".
So if we run your image like this:
docker run -p 7860:7860 -e GRADIO_SERVER_NAME=0.0.0.0 layoutlm-v2
Then we should be able to access the interface on the host at http://localhost:7860/... and indeed, that seems to work.
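Alternatively, the same fix can live in the code rather than the environment; gradio's launch() accepts server_name directly. A minimal sketch (not the repo's actual app):
# app.py (sketch): bind gradio to all interfaces so Docker port mapping works
import gradio as gr

def echo(text):
    return text

demo = gr.Interface(fn=echo, inputs="text", outputs="text")
# 0.0.0.0 makes the server reachable from outside the container
demo.launch(server_name="0.0.0.0", server_port=7860)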
Unrelated to your question:
You're setting up a virtual environment in your Dockerfile, but you're not using it, primarily because of a typo here:
ENV PATH="VIRTUAL_ENV/bin:$PATH"
You're missing a $ on $VIRTUAL_ENV.
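The corrected line, as used in the restructured Dockerfile below, would be:
ENV PATH="$VIRTUAL_ENV/bin:$PATH"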
You could optimize the order of operations in your Dockerfile. Right now, making a simple change to your Dockerfile (e.g., editing the CMD setting) will cause much of your image to be rebuilt, because every layer after the changed line is invalidated. Copying requirements.txt and installing dependencies before copying the rest of the source avoids that. You could restructure the Dockerfile like this:
FROM python:3.9
# Install dependencies
RUN apt-get update && apt-get install -y tesseract-ocr
RUN pip install virtualenv && virtualenv venv -p python3
ENV VIRTUAL_ENV=/venv
ENV PATH="$VIRTUAL_ENV/bin:$PATH"
WORKDIR /app
COPY requirements.txt ./
RUN pip install -r requirements.txt
RUN git clone https://github.com/facebookresearch/detectron2.git
RUN python -m pip install -e detectron2
COPY . /app
# Run the application:
CMD ["python", "-u", "app.py"]
For some reason, supervisor refuses to start the command as the specified user (it always runs it as root), and this is an issue for me since I am activating a virtualenv and running commands specific to that particular virtualenv.
So, my conf looks like so:
[program:site]
command = /home/some/virtual/env/dir/run/start.sh
user = some
stdout_logfile = /home/some/etc/supervisor/logs/logging.log
redirect_stderr = true
environment=LANG=en_US.UTF-8,LC_ALL=en_US.UTF-8
stopsignal=KILL
killasgroup=true
autostart=true
start.sh looks like so:
#!/bin/bash
echo $USER >> /home/some/user.txt
cd
source /home/foo/some/virtual/env/bin/activate
cd /home/foo/some/virtual/env
SOCKFILE01=/home/some/etc/supervisor/site.sock
exec /home/some/virtual/env/bin/gunicorn -b unix:$SOCKFILE01 site.wsgi:application -w 2 -k gevent --worker-connections=2000
exit 0
When I inspect the log, I see:
start.sh: line 2: cd: /root: Permission denied
which means this is still running as root.
I am totally baffled by this. I start supervisor as root. The even weirder part is that the above code works totally fine on my local machine, but produces the above log on a server.
I have run out of ideas... :((
EDIT:
I added an echo to the .sh script, and user.txt spits out:
root
...totally puzzled!
You need to set the HOME and USER environment variables and update the command. Supervisor does switch the process UID, but it does not run a login shell, so environment variables like HOME and USER keep the values inherited from root unless you set them explicitly:
[program:site]
command=bash -c "/home/some/virtual/env/dir/run/start.sh"
user=some
stdout_logfile=/home/some/etc/supervisor/logs/logging.log
redirect_stderr=true
environment=LANG=en_US.UTF-8,LC_ALL=en_US.UTF-8,HOME="/home/some",USER="some"
stopsignal=KILL
killasgroup=true
autostart=true
This is described in http://supervisord.org/subprocess.html#subprocess-environment and solved this issue for me when trying to run npm scripts.
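If you want to confirm what's actually happening, note that echo $USER only reads an inherited environment variable; the effective UID is what supervisor actually changes. A quick check in start.sh (a sketch) makes the distinction visible:
#!/bin/bash
# id -un reports the effective user; $USER and $HOME are merely inherited env vars
echo "euid=$(id -un) USER=$USER HOME=$HOME" >> /home/some/user.txt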
I have a helper container and an app container.
The helper container handles mounting of code via git to a shared mount with the app container.
I need the helper container to check for a package.json or requirements.txt in the cloned code and, if one exists, to run npm install or pip install -r requirements.txt, storing the dependencies in the shared mount.
The thing is, the npm and/or pip command needs to be run from the app container, to keep the helper container as generic and agnostic as possible.
One solution would be to mount the docker socket into the helper container and run docker exec <app container> <command>, but what if I have thousands of such apps on a single host?
Will there be issues with hundreds of containers all accessing the docker socket at the same time? And is there a better way to get commands run on another container?
Well, there is no "container to container" internal communication layer like "ssh". In this regard, the containers are as standalone as two different VMs (aside from the network in general).
You might go the usual way: install openssh-server on the "receiving" container and configure it for key-based authentication only. You do not need to expose the port to the host; just connect to the port over the Docker-internal network. Deploy the SSH private key on the 'caller' container, and the public key into .ssh/authorized_keys on the 'receiving' container, at container start time (via a volume mount) so you do not keep the secrets in the image (build time).
You should probably also create an ssh alias in .ssh/config, and set StrictHostKeyChecking to no since the containers could be rebuilt. Then do:
ssh <alias> your-command
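A minimal .ssh/config on the caller container might look like this (the hostname app, the user, and the key path are assumptions):
# ~/.ssh/config on the caller container
Host app
    HostName app
    User root
    IdentityFile ~/.ssh/id_rsa
    # containers get rebuilt, so don't pin host keys
    StrictHostKeyChecking no
After which something like ssh app 'cd /app && npm install' runs the command on the receiving container.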
Found the better way I was looking for :-)
Using supervisord and running its XML-RPC server enables me to run something like:
supervisorctl -s http://127.0.0.1:9002 -utheuser -pthepassword start uwsgi
In the helper container, this will connect to the RPC server running on port 9002 on the app container and execute a program block that may look something like:
[program:uwsgi]
directory=/app
command=/usr/sbin/uwsgi --ini /app/app.ini --uid nginx --gid nginx --plugins http,python --limit-as 512
autostart=false
autorestart=unexpected
stdout_logfile=/var/log/uwsgi/stdout.log
stdout_logfile_maxbytes=0
stderr_logfile=/var/log/uwsgi/stderr.log
stderr_logfile_maxbytes=0
exitcodes=0
environment = HOME="/app", USER="nginx"
This is exactly what I needed!
For anyone who finds this: you'll probably need the supervisord.conf on your app container to look something like:
[supervisord]
nodaemon=true
[supervisorctl]
[rpcinterface:supervisor]
supervisor.rpcinterface_factory = supervisor.rpcinterface:make_main_rpcinterface
[inet_http_server]
port=127.0.0.1:9002
username=user
password=password
[program:uwsgi]
directory=/app
command=/usr/sbin/uwsgi --ini /app/app.ini --uid nginx --gid nginx --plugins http,python --limit-as 512
autostart=false
autorestart=unexpected
stdout_logfile=/var/log/uwsgi/stdout.log
stdout_logfile_maxbytes=0
stderr_logfile=/var/log/uwsgi/stderr.log
stderr_logfile_maxbytes=0
exitcodes=0
environment = HOME="/app", USER="nginx"
You can also set up the inet_http_server to listen on a socket. You can link the containers to be able to access them at a hostname.
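For completeness, the same can be driven over XML-RPC without supervisorctl; a minimal Python sketch (the hostname app assumes linked containers, and the credentials match the [inet_http_server] section above, which would need to bind to an address reachable from the helper, e.g. 0.0.0.0:9002 rather than 127.0.0.1):
# helper container (sketch): drive the app container's supervisord over XML-RPC
from xmlrpc.client import ServerProxy

server = ServerProxy("http://user:password@app:9002/RPC2")
server.supervisor.startProcess("uwsgi")
print(server.supervisor.getProcessInfo("uwsgi")["statename"])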
I have configured supervisor on the server like this:
[program:myproject]
command = /home/mydir/myproj/venv/bin/python /home/mydir/myproj/venv/bin/gunicorn manage:app -b <ip_address>:8000
directory = /home/mydir
I have installed gevent in my virtual environment, but I don't know how to include it in the supervisor command variable. I can run it manually from the terminal like this:
gunicorn manage:app -b <ip_address>:8000 --worker-class gevent
I tried to include a path when calling gevent in the supervisor command, just like for python and gunicorn, but it's not working. Honestly, I don't know what the correct directory/file to execute gevent is, and I am also not sure whether this is the correct way to specify a worker class in supervisor. I am running Ubuntu 14.04.
Anyone? Thanks
I've made a solution for this, though I am not 100% sure it is correct; after searching a hundred times, I finally came up with a working solution :)
Got this from here. I've created a gunicorn.conf.py file in my project directory containing:
worker_class = 'gevent'
And integrated this file into the supervisor config:
[program:myproject]
command = /home/mydir/myproj/venv/bin/python /home/mydir/myproj/venv/bin/gunicorn -c /home/mydir/myproj/gunicorn.conf.py manage:app -b <ip_address>:8000
directory = /home/mydir
And started the program through supervisor:
sudo supervisorctl start <my_project>
And poof! It's already working!
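For what it's worth, gunicorn also accepts the worker class directly on the command line via -k/--worker-class, so an equivalent supervisor entry without the extra config file (an untested sketch) would be:
[program:myproject]
command = /home/mydir/myproj/venv/bin/gunicorn manage:app -b <ip_address>:8000 -k gevent
directory = /home/mydir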
I'm trying to deploy django with uwsgi, and I think I lack understanding of how it all works. I have uwsgi running in emperor mode, and I'm trying to get the vassals to run in their own virtualenvs, with a different python version.
The emperor configuration:
[uwsgi]
socket = /run/uwsgi/uwsgi.socket
pidfile = /run/uwsgi/uwsgi.pid
emperor = /etc/uwsgi.d
emperor-tyrant = true
master = true
autoload = true
log-date = true
logto = /var/log/uwsgi/uwsgi-emperor.log
And the vassal:
uid=django
gid=django
virtualenv=/home/django/sites/mysite/venv/bin
chdir=/home/django/sites/mysite/site
module=mysite.uwsgi:application
socket=/tmp/uwsgi_mysite.sock
master=True
I'm seeing the following error in the emperor log:
Traceback (most recent call last):
File "./mysite/uwsgi.py", line 11, in <module>
import site
ImportError: No module named site
The virtualenv for my site was created as a Python 3.4 pyvenv. The uWSGI is the system uWSGI (Python 2.6). I was under the impression that the emperor could be any Python version, as the vassal would be launched with its own Python and environment by the master process. I now think this is wrong.
What I'd like to do is run the uWSGI master process with the system python, but the various vassals (applications) with their own python and their own libraries. Is this possible? Or am I going to have to run multiple emperors if I want to run multiple pythons? That kinda defeats the purpose of having virtual environments.
The "elegant" way is building the uWSGI python support as a plugin, and having a plugin for each python version:
(from uWSGI sources)
make PROFILE=nolang
(will build a uWSGI binary without language support)
PYTHON=python2.7 ./uwsgi --build-plugin "plugins/python python27"
will build the python27_plugin.so that you can load in vassals
PYTHON=python3 ./uwsgi --build-plugin "plugins/python python3"
will build the plugin for python3 and so on.
There are various ways to build uWSGI plugins; the one I am reporting is the safest (it ensures the #ifdefs are honoured).
Having said that, having a uWSGI Emperor for each python version is viable too. Remember that Emperors are stackable, so you can have a generic emperor spawning one emperor (as its vassal) for each python version.
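A vassal that loads one of the plugins built above might look like this (a sketch; the plugin directory, paths, and module name are assumptions):
# vassal.ini (sketch)
[uwsgi]
plugin-dir = /usr/local/lib/uwsgi/plugins
plugin = python3
virtualenv = /home/django/sites/mysite/venv
chdir = /home/django/sites/mysite/site
module = mysite.wsgi:application
socket = /tmp/uwsgi_mysite.sock
master = true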
Pip install uWSGI
One option would be to simply install uWSGI with pip in your virtualenvs and start your services separately:
pip install uwsgi
~/.virtualenvs/venv-name/bin/uwsgi --ini path/to/ini-file
Install uWSGI from source and build python plugins
If you want a system-wide uWSGI build, you can build it from source and install plugins for multiple python versions. You'll need root privileges for this.
First you may want to install multiple system-wide python versions.
Make sure you have any dependencies installed. For pcre, on a Debian-based distribution use:
apt install libpcre3 libpcre3-dev
Download and build the latest uWSGI source into /usr/local/src, replacing X.X.X.X below with the package version (e.g. 2.0.19.1):
wget http://projects.unbit.it/downloads/uwsgi-latest.tar.gz
tar vzxf uwsgi-latest.tar.gz
cd uwsgi-X.X.X.X/
make PROFILE=nolang
Symlink the versioned folder uwsgi-X.X.X.X to give it the generic name, uwsgi:
ln -s /usr/local/src/uwsgi-X.X.X.X /usr/local/src/uwsgi
Create a symlink to the build so it's on your PATH:
ln -s /usr/local/src/uwsgi/uwsgi /usr/local/bin
Build python plugins for the versions you need:
PYTHON=pythonX.X ./uwsgi --build-plugin "plugins/python pythonXX"
For example, for python3.8:
PYTHON=python3.8 ./uwsgi --build-plugin "plugins/python python38"
Create a plugin directory in an appropriate location:
mkdir -p /usr/local/lib/uwsgi/plugins/
Symlink the created plugins to this directory. For example, for python3.8:
ln -s /usr/local/src/uwsgi/python38_plugin.so /usr/local/lib/uwsgi/plugins
Then in your uWSGI configuration (project.ini) files, specify the plugin directory and the plugin:
plugin-dir = /usr/local/lib/uwsgi/plugins
plugin = python38
Make sure to create your virtualenvs with the same python version you built the plugin with. For example, if you created python38_plugin.so with python3.8 and you have plugin = python38 in your project.ini file, then an easy way to create a virtualenv with python3.8 is:
python3.8 -m virtualenv path/to/project/virtualenv
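Putting it together, a minimal project.ini for a python3.8 app might look like this (the chdir, module, and socket values are illustrative assumptions):
# project.ini (sketch)
[uwsgi]
plugin-dir = /usr/local/lib/uwsgi/plugins
plugin = python38
virtualenv = path/to/project/virtualenv
chdir = path/to/project
module = myproject.wsgi:application
socket = :8000
master = true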