supervisord refuses to run command as user (always runs as root) - python

For some reason, supervisor refuses to start the command as the specified user - it always runs it as root - and this is an issue for me since I am activating a virtualenv and running commands specific to that particular virtualenv.
So, my conf looks like so:
[program:site]
command = /home/some/virtual/env/dir/run/start.sh
user = some
stdout_logfile = /home/some/etc/supervisor/logs/logging.log
redirect_stderr = true
environment=LANG=en_US.UTF-8,LC_ALL=en_US.UTF-8
stopsignal=KILL
killasgroup=true
autostart=true
start.sh looks like so:
#!/bin/bash
echo $USER >> /home/some/user.txt
cd
source /home/foo/some/virtual/env/bin/activate
cd /home/foo/some/virtual/env
SOCKFILE01=/home/some/etc/supervisor/site.sock
exec /home/some/virtual/env/bin/gunicorn -b unix:$SOCKFILE01 site.wsgi:application -w 2 -k gevent --worker-connections=2000
exit 0
when I inspect the log, I see:
start.sh: line 2: cd: /root: Permission denied
which means this is still running as root.
I am totally baffled by this. I start supervisor as root. The even weirder part is that the above code works totally fine on my local machine, but shows me the above log on a server.
I have run out of ideas... :((
EDIT:
I added the echo to the .sh script and user.txt spits out:
root
..totally puzzled!

You need to set the environment variables as below and update the command:
[program:site]
command=bash -c "/home/some/virtual/env/dir/run/start.sh"
user=some
stdout_logfile=/home/some/etc/supervisor/logs/logging.log
redirect_stderr=true
environment=LANG=en_US.UTF-8,LC_ALL=en_US.UTF-8,HOME="/home/some",USER="some"
stopsignal=KILL
killasgroup=true
autostart=true
This is described in http://supervisord.org/subprocess.html#subprocess-environment and solved this issue for me when trying to run npm scripts.
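For context, a hedged note: when user= is set and supervisord itself runs as root, it switches the child process to that uid, but (per the subprocess-environment page linked above) it does not rewrite HOME, USER, or LOGNAME, so a script that relies on those variables, or on a bare cd (which uses $HOME), still sees root's values. A small diagnostic you could drop into start.sh to confirm this, assuming the paths from the question:
# Hedged diagnostic: compare the account the process actually runs under with
# the inherited environment. With only user=some set, `id -un` should print
# "some" while USER/HOME still show root's values; with the environment= fix
# above, all three should agree.
id -un            >> /home/some/user.txt
echo "USER=$USER" >> /home/some/user.txt
echo "HOME=$HOME" >> /home/some/user.txt
After changing the config, supervisorctl reread, supervisorctl update and supervisorctl restart site reload the program definition so the new environment takes effect.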

Related

Can I create a script, "Virtual Environment" and run it in crontab?

Can you help me? I need to run a script at startup to start my services; I use Django with Python on an Ubuntu server.
I have seen many crontab examples, and crontab is what I plan to use: every time the server restarts, it should run the script, which activates the virtual environment and then runs "python3 manage.py runserver_plus". Restarting the server every night with crontab already works for me, but I can't get it to execute what the script contains. Can you help me? I am not very experienced, but I have managed to do something.
Could the problem be the path of the script?
I tried running the command directly, but got no results.
Here is what I have:
root@server:/home/admin-server# pwd
/home/admin-server
root@server:/home/admin-server# ls -l
drwxrwxr 3 admin-server admin-server 4096 Nov 20 17:25 control_flota
-rwxr--r-- 1 root root 141 Nov 20 18:00 server_script.sh
New script (I still have no results :/ and I don't know why):
#!/bin/bash
echo "Welcome"
cd /home/admin-server/control_flota/
source venvp1/bin/activate
echo "Thanks"
You can activate the virtual environment from within the shell script, prior to running any manage.py commands:
#!/bin/bash
cd /your_code_directory
source env/bin/activate
python ./manage.py runserver_plus
Ensure you save the file with the .sh extension, then give it execute rights:
chmod u+x your_script.sh
You should then be able to call it from cron; use root's crontab (sudo crontab -e) if you run into permissions issues. A minimal entry is sketched below.
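A hedged crontab sketch (added via crontab -e), assuming the script ends up at /home/admin-server/server_script.sh as in the question; the log path is just an example:
# Run the script once at boot and capture its output for debugging
@reboot /home/admin-server/server_script.sh >> /home/admin-server/server_script.log 2>&1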

Crontab starting gunicorn installed with pip, command not found

I want to start a django application in gunicorn at reboot.
All commands below are run as user simernes
I have installed gunicorn with pip3:
pip3 install gunicorn
crontab:
crontab -e
@reboot /home/simernes/run_gunicorn.sh > /home/simernes/logfile 2>&1 &
run_gunicorn.sh
#!/bin/bash
source /home/simernes/.bashrc
cd /home/simernes/djangoapp
gunicorn --bind localhost:8000 config.wsgi
However, when I go and reboot and check the log file it says:
line 4: gunicorn: command not found
Running the script on its own from an SSH login shell works fine.
Do I need to source the Python environment for cron to be able to see the apps installed through pip, or something like that?
cron runs your script in a shell with minimal environment variables and path, usually the following:
X-Cron-Env: <SHELL=/bin/sh>
X-Cron-Env: <PATH=/usr/bin:/bin>
X-Cron-Env: <LOGNAME=username>
X-Cron-Env: <USER=username>
X-Cron-Env: <HOME=/Users/username>
Which means gunicorn, or anything else not in /usr/bin:/bin, won't be available to your script.
What you can do is export the path to gunicorn as an environment variable by adding something like this to your crontab:
@reboot export GUNICORN=/path/to/gunicorn && /home/simernes/run_gunicorn.sh > /home/simernes/logfile 2>&1 &
And in your script you execute gunicorn like this:
#!/bin/bash
source /home/simernes/.bashrc
cd /home/simernes/djangoapp
$GUNICORN --bind localhost:8000 config.wsgi
Maybe give the full path to gunicorn in the script, as in the sketch below.
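A hedged sketch of that variant; the exact path is an assumption (a pip3 --user install often lands in ~/.local/bin), so substitute the output of which gunicorn from an interactive shell:
#!/bin/bash
# Same script as above, but calling gunicorn by absolute path so cron's
# minimal PATH no longer matters.
cd /home/simernes/djangoapp
/home/simernes/.local/bin/gunicorn --bind localhost:8000 config.wsgi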

Why does uWSGI fail to start in Docker?

I am relatively new to using uWSGI to serve Python applications and I am attempting to start a uWSGI process in emperor mode with a vassal, but every time I try to start uWSGI inside of Docker with the following command (as root):
# /usr/local/bin/uwsgi --ini /etc/uwsgi/emperor.ini
What I get as a response is:
[uWSGI] getting INI configuration from /etc/uwsgi/emperor.ini
2.0.13.1
The emperor.ini configuration file looks like:
# files/etc/uwsgi/emperor.ini
[uwsgi]
emperor = /etc/uwsgi/apps-enabled
die-on-term = true
log-date = true
While the only vassal's configuration looks like:
# files/etc/uwsgi/apps-enabled/application.ini
[uwsgi]
app_dir = /var/www/server
plugin = python
master = true
callable = app
chdir = %(app_dir)
mount = /=%(app_dir)/start.py
protocol = uwsgi
socket = :8079
uid = www-data
gid = www-data
buffer-size = 32768
enable-threads = true
single-interpreter = true
processes = 1
stats = 127.0.0.1:1717
(NB: the filenames above are given relative to the Dockerfile, which copies them to the correct locations, essentially stripping the files/ prefix.)
Currently the uWSGI Docker image I'm using is built off of an ubuntu:trusty base image (though I've tried ubuntu:latest and alpine:latest and encountered the same problem), and although I am attempting to launch the uWSGI process with supervisor, as stated previously, it also fails when run directly from the command line. In the Docker image I'm installing uWSGI using pip but have also tried using apt-get with the same result.
I should also mention that I've tried different versions of uWSGI 2.0.13.1 and 1.9.something with the same result, if that helps.
# Dockerfile
FROM ubuntu:trusty
MAINTAINER Sean Quinn "me@mail.com"
RUN apt-get update \
&& apt-get install -y \
ack-grep git nano \
supervisor \
build-essential gcc python python-dev python-pip
RUN sed -i 's/^\(\[supervisord\]\)$/\1\nnodaemon=true/' /etc/supervisor/supervisord.conf \
&& sed -i 's/^\(\[supervisord\]\)$/\1\nloglevel=debug/' /etc/supervisor/supervisord.conf \
&& sed -i 's/^\(files = .*\)$/;\1/' /etc/supervisor/supervisord.conf \
&& sed -i 's/^\(\[include\]\)$/\1\nfiles = \/etc\/supervisor\/conf.d\/*.conf/' /etc/supervisor/supervisord.conf
ENV UWSGI_VERSION 2.0.13.1
RUN pip install uwsgi==${UWSGI_VERSION}
RUN mkdir -p /etc/uwsgi \
&& mkdir -p /etc/uwsgi/apps-available \
&& mkdir -p /etc/uwsgi/apps-enabled \
&& mkdir -p /var/log/uwsgi
COPY files/etc/supervisor/conf.d/uwsgi.conf /etc/supervisor/conf.d/uwsgi.conf
COPY files/etc/uwsgi/emperor.ini /etc/uwsgi/emperor.ini
VOLUME /etc/uwsgi/apps-enabled
VOLUME /var/www
ENTRYPOINT ["/usr/bin/supervisord"]
CMD ["-c", "/etc/supervisor/supervisord.conf"]
As mentioned, the supervisord process attempts to launch the uWSGI process using the following supervisor configuration.
# files/etc/supervisor/conf.d/uwsgi.conf
[program:uwsgi]
command=/usr/local/bin/uwsgi --ini /etc/uwsgi/emperor.ini
user=root
The application Python files are mounted in a subdirectory of /var/www and the application uWSGI configuration is mounted into /etc/uwsgi/apps-enabled.
The bizarre thing is, if I install supervisor and uWSGI on a new Ubuntu VM (outside of Docker) with all of the configuration and files in place I can see uWSGI properly process the emperor.ini and read the vassal .ini files. I haven't yet attempted to add nginx into the equation because I want to make sure uWSGI is starting and reading configuration files correctly first and foremost.
Is there any way to increase the logging or ascertain why I'm only seeing what appears to be the version number of the uWSGI binary? It's like the uWSGI process is completely ignoring command line options. I feel like I'm missing something that should be obvious.
Thanks in advance for any help anyone can give!
tl;dr: don't use UWSGI_VERSION as an environment variable; apparently it forces uWSGI to only print the version number instead of starting.
I believe I solved my own issue!
After experimenting with other uWSGI images on Docker's hub, I found that they were also running into the same issue so I began to look further into possible configuration issues. I tried changing permissions among other things.
I noticed, however, that when I used jpetazzo/nsenter to enter the running container, uWSGI started (rather than simply printing the version information as shown above). When entering the container with docker exec, uWSGI would only print the version information. After playing around a bit more, I discovered that running su - from within a shell opened with docker exec also let uWSGI start.
After some inspection, I discovered several differences in the environment variables between the root user in one shell vs. the other. This led me to the UWSGI_VERSION environment variable, which appears to have been the culprit: removing UWSGI_VERSION allowed uWSGI to start.
I modified my Dockerfile to use UWSGI_PIP_VERSION instead as the environment variable to indicate the version of uWSGI to install, which seems to be a safe alternative to UWSGI_VERSION. YMMV.
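For reference, a minimal sketch of the change against the Dockerfile from the question:
# Before: uWSGI sees UWSGI_VERSION in its environment and only prints the version
# ENV UWSGI_VERSION 2.0.13.1
# RUN pip install uwsgi==${UWSGI_VERSION}
# After: a differently named variable avoids the clash
ENV UWSGI_PIP_VERSION 2.0.13.1
RUN pip install uwsgi==${UWSGI_PIP_VERSION}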

how to run gunicorn with django as non-root

I have a Django application and I use gunicorn to run it. My script to start gunicorn looks like this:
django_path=/path/to/your/manage.py
settingsfile=my_name
workers=2
cd $django_path
exec gunicorn --env DJANGO_SETTINGS_MODULE=app.$settingsfile app.wsgi --workers=$workers &
This works when I execute it. However, when I look at my database in my project folder (cd /path/to/your/manage.py && ll) I get this:
-rw-r--r-- 1 root root 55K Dec 2 13:33 db.sqlite3
Which means I need root permission to do anything on the database (for example, do a createuser). So I looked around on Stack Overflow and tried a couple of things:
I had the whole script at the top of /etc/init.d/rc.local
Then I put the script in a file gunicorn_script.sh in /etc/init.d and did /usr/sbin/update-rc.d -f gunicorn_script.sh defaults
Lastly, I tried to put this command at the top of the rc.local file: su debian -c '/etc/init.d/gunicorn_script.sh start' to execute the gunicorn_script as a debian user
All of them started my app but the problem with the database remains (only root rights).
So how do I run that script as a non root user?
I have a script in my project's folder which I use to run gunicorn. Here is a header:
#!/bin/bash
CUR_DIR=$(dirname $(readlink -f $0))
WORK_DIR=$CUR_DIR
USER=myusername
PYTHON=/usr/bin/python3
GUNICORN=/usr/local/bin/gunicorn
sudo -u $USER sh -c "cd $WORK_DIR; $PYTHON -W ignore $GUNICORN -c $WORK_DIR/config/gunicorn/gunicorn.conf.py --chdir $WORK_DIR myappname.wsgi:application"
Updated:
Put the code below into the file /etc/init.d/myservice, make root the owner, and give +x permissions to the owner.
#!/bin/bash
#chkconfig: 345 95 50
#description: Starts myservice
if [ -z "$1" ]; then
echo "`basename $0` {start|stop}"
exit
fi
case "$1" in
start)
sh /path/to/run_script.sh start &
;;
stop)
sh /path/to/run_script.sh stop
;;
esac
Now you can use sudo service myservice start
I am sorry, I am not familiar with systemd yet, but with it this can be even easier; a rough unit-file sketch follows.
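For what it's worth, a hedged systemd sketch; the unit name, user, and paths are assumptions based on the question, saved as /etc/systemd/system/gunicorn.service:
[Unit]
Description=gunicorn for the Django app
After=network.target

[Service]
# Run as the unprivileged user so files the app creates are not owned by root
User=debian
WorkingDirectory=/path/to/your
ExecStart=/usr/local/bin/gunicorn --env DJANGO_SETTINGS_MODULE=app.my_name app.wsgi --workers 2
Restart=on-failure

[Install]
WantedBy=multi-user.target
Enable and start it with sudo systemctl enable --now gunicorn.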
OK, so I found out that db.sqlite3 is created by Django through the makemigrations and migrate commands, which I ran as root.
Hence the permission problems. I switched to the debian user and ran the commands from there, et voilà:
-rw-r--r-- 1 debian debian 55K Dec 2 13:33 db.sqlite3
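For anyone hitting the same thing, a hedged sketch of the steps (the debian user and the placeholder path come from the question; the chown alternative is my addition, not part of the original answer):
# Run the migration commands as the unprivileged user so db.sqlite3 is
# created owned by that user
cd /path/to/your            # wherever manage.py lives
sudo -u debian python manage.py makemigrations
sudo -u debian python manage.py migrate
# Or, if the file already exists and is owned by root, re-own it
sudo chown debian:debian /path/to/your/db.sqlite3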

ubuntu ec2 - run python script at startup with arguments

I have a Python script I'd like to start at boot on an Ubuntu EC2 instance, but I'm running into trouble.
The script runs in a loop and takes care of exiting when it's ready, so I shouldn't need to start or stop it after it's running.
I've read and tried a lot of approaches with varying degrees of success, and honestly I'm confused about what the best approach is. I've tried putting a shell script that starts the Python script in /etc/init.d, making it executable and doing update-rc.d to try to get it to run, but it has failed at every stage.
Here are the contents of the script I've tried:
#!/bin/bash
cd ~/Dropbox/Render\ Farm\ 1/appleseed/bin
while :
do
python ./watchfolder18.py -t ./appleseed.cli -u ec2 ../../data/
done
I then did:
sudo chmod +x /etc/init.d/script_name
sudo update-rc.d /etc/init.d/script_name defaults
This doesn't seem to run at startup and I can't see why; if I run the command manually it works as expected.
I also tried adding a line to rc.local to start the script, but that doesn't seem to work either.
Can anybody share what they have found to be the simplest way to run a Python script in the background, with arguments, at startup of an EC2 instance?
UPDATE: ----------------------
I've since moved this code to a file called /home/ubuntu/bin/watch_folder_start
#!/bin/bash
cd /home/ubuntu/Dropbox/Render\ Farm\ 1/appleseed/bin
while :
do
python ./watchfolder18.py -t ./appleseed.cli -u ec2 ../../data/
done
and changed my rc.local file to this:
nohup /home/ubuntu/bin/watch_folder_start &
exit 0
This works when I manually run rc.local, but it won't fire on startup. I did chmod +x rc.local, but that didn't change anything.
Your /etc/init.d/script_name is missing the plumbing that update-rc.d and so on use, and won't properly handle stop, start, and other init-variety commands, so...
For initial experimentation, take advantage of the /etc/init.d/rc.local script (which should be linked to by default from /etc/rc2.d/S99rc.local). That gets you out of having to worry about the init.d conventions: just add things to /etc/rc.local before the exit 0 at its end.
Additionally, that ~ isn't going to be defined, so you'll need to use a full pathname - and furthermore the script will run as root. We'll address how to avoid that, if desired, in a bit. In any of these, you'll need to replace "whoeveryouare" with something more useful. Also be warned that you may need to prefix the python command with a su command and some arguments to get the process to run under the user id you need.
You might try (in /etc/rc.local):
( if cd '/home/whoeveryouare/Dropbox/Render Farm 1/appleseed/bin' ; then
while : ; do
# This loop should respawn watchfolder18.py if it dies, but
# ideally one should fix watchfolder18.py and remove this loop.
python ./watchfolder18.py -t ./appleseed.cli -u ec2 ../../data/
done
else
echo warning: could not find watchfolder 1>&2
fi
) &
You could also put all that in a script and just call it from /etc/rc.local.
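A hedged sketch of that variant; the wrapper location is an assumption:
#!/bin/bash
# Hypothetical /usr/local/bin/watch_folder_start.sh wrapping the respawn loop above
cd '/home/whoeveryouare/Dropbox/Render Farm 1/appleseed/bin' || exit 1
while : ; do
    python ./watchfolder18.py -t ./appleseed.cli -u ec2 ../../data/
done
and then a single line in /etc/rc.local before the exit 0:
/usr/local/bin/watch_folder_start.sh &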
The first pass is roughly what you had, but if we assume that watchfolder18.py will arrange to avoid dying we can cut it down to:
( cd '/home/whoeveryouare/Dropbox/Render Farm 1/appleseed/bin' \
&& exec python ./watchfolder18.py -t ./appleseed.cli -u ec2 ../../data/ ) &
These aren't all that pretty, but they should let you get your daemon sorted out so you can debug it and so on, then come back to making a proper /etc/init.d or /etc/init script later. Something like this might work in /etc/init/watchfolder.conf, but I'm not yet facile enough to claim it's anything other than a rough stab:
# watchfolder - spawner for watchfolder18.py
description "watchfolder program"
start on runlevel [2345]
stop on runlevel [!2345]
script
if cd '/home/whoeveryouare/Dropbox/Render Farm 1/appleseed/bin' ; then
exec python ./watchfolder18.py -t ./appleseed.cli -u ec2 ../../data/
fi
end script
I found that the best solution in the end was to use upstart and create a file in /etc/init called myfile.conf containing the following:
description "watch folder service"
author "Jonathan Topf"
start on startup
stop on shutdown
# Automatically Respawn:
respawn
respawn limit 99 5
script
HOST=`hostname`
chdir /home/ubuntu/Dropbox/Render\ Farm\ 1/appleseed/bin
exec /usr/bin/python ./watchfolder.py -t ./appleseed.cli -u $HOST ../../data/ >> /home/ubuntu/bin/ec2_server.log 2>&1
echo "watch_folder started"
end script
More info on using the upstart system here:
http://upstart.ubuntu.com/
https://help.ubuntu.com/community/UbuntuBootupHowto
http://blog.joshsoftware.com/2012/02/14/upstart-scripts-in-ubuntu/
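A hedged usage note: with the file saved as /etc/init/myfile.conf, upstart should pick it up on the next boot, and the job can be exercised by hand first:
sudo initctl reload-configuration
sudo start myfile
sudo status myfile
tail -f /home/ubuntu/bin/ec2_server.log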
