Allow a user other than root to restart supervisord processes via supervisorctl? - python

I have supervisord run a program as user stavros, and I would like to give the same user permission to restart it using supervisorctl. Unfortunately, I can only do it with sudo, otherwise I get a permission denied error in socket.py. How can I give myself permission to restart supervisord processes?

Personally, I think it is a bad idea to run supervisord as root, but if you want to do this, while allowing a full restart by other users, here is how I would do it.
1/ Create a supervisor group on your system which will have restart rights on supervisord
2/ Put the relevant users in group supervisor
3/ In the supervisord configuration, use the following lines in the [unix_http_server] section:
chmod=0770 ; socket file mode (default 0700)
chown=root:supervisor ; socket file uid:gid owner
This guarantees that the admin socket is accessible to the selected users (a fuller sketch follows this list).
4/ Add supervisord in the init mechanism of your system in respawn mode (init, systemd, upstart, etc ...). It depends on your system. Respawn mode means the process will be automatically relaunched if it crashes or stops.
5/ From one of the selected users, you should be able to use supervisorctl to run commands, including a complete shutdown which will trigger a full restart of supervisord.
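For reference, here is a minimal sketch of steps 1/ to 3/ (the group name, user name and socket path are examples; adjust them to your setup):
# run as root: create the group and add the user to it
groupadd supervisor
usermod -a -G supervisor stavros
; in supervisord.conf
[unix_http_server]
file=/var/run/supervisor.sock ; socket file path (example)
chmod=0770 ; socket file mode (default 0700)
chown=root:supervisor ; socket file uid:gid owner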

Maybe you should try restarting your supervisord process as user stavros.

Related

How to start/stop a service from a Python script running under Flask and Apache on Debian?

I'm trying to start and stop services from a Python script that runs under Flask and Apache.
To get the status from memcached, for example, I'm using
os.popen('service memcached status').read() and it works like a charm.
The problem is that when I try to start/stop by doing something like
os.popen('service memcached stop').read() it just does nothing (I checked in the shell and the service is still running).
To summarize, I can get the status but can't start or stop, and I don't know why this happens.
Does anyone have any suggestions?
Thanks,
I saw the apache logs in /var/log/apache2/error.log and the problem was that I needed more privileges to execute start/stop. But when I tried to use
os.popen('sudo service memcached stop').read()
I got an error saying that the sudo password should have been entered.
To solve this problem I typed in the shell:
visudo
which opened the /etc/sudoers file. And there I added the line
www-data ALL=(ALL) NOPASSWD:ALL
I understand this to mean that I am giving the user www-data permission to execute sudo without a password.
To quit, press Ctrl+X and then y to save.
Note: www-data is the user that runs Apache.
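A less permissive alternative (an untested sketch; the path to service assumes a Debian-style layout) is to allow www-data to run only the specific commands without a password:
www-data ALL=(root) NOPASSWD: /usr/sbin/service memcached start, /usr/sbin/service memcached stop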

Docker, Supervisord and logging - how to consolidate logs in docker logs?

So, experimenting with Docker + Supervisord + Django app via uWSGI. I have the whole stack working fine, but need to tidy up the logging.
If I launch supervisor in non-daemon mode,
/usr/bin/supervisord -n
Then I get the logging output for supervisor played into the docker logs stdout. However, if supervisord is in daemon mode, its own logs get stashed away in the container filesystem, and the logs of its applications do too - in their own app__stderr/stdout files.
What I want is to log both supervisor, and application stdout to the docker log.
Is starting supervisord in non-daemon mode a sensible idea for this, or does it cause unintended consequences? Also, how do I get the application logs also played into the docker logs?
I accomplished this using supervisor-stdout.
Install supervisor-stdout in your Docker image:
RUN apt-get install -y python-pip && pip install supervisor-stdout
Supervisord Configuration
Edit your supervisord.conf to look like so:
[program:myprogram]
command=/what/ever/command
stdout_events_enabled=true
stderr_events_enabled=true
[eventlistener:stdout]
command = supervisor_stdout
buffer_size = 100
events = PROCESS_LOG
result_handler = supervisor_stdout:event_handler
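Note that for the event listener output to reach docker logs, supervisord still has to be the container's foreground process. A sketch of doing that from the Dockerfile (the config path is an assumption):
CMD ["/usr/bin/supervisord", "-n", "-c", "/etc/supervisor/supervisord.conf"]
Alternatively, setting nodaemon=true in the [supervisord] section of the config has the same effect as passing -n.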
A Docker container is like a Kleenex: you use it, then you drop it. To stay "alive", a Docker container needs something running in the foreground (whereas daemons run in the background); that's why you are using Supervisord.
So you need to "redirect/add/merge" the processes' output (access and error) to the Supervisord output you see when running your container.
As Drew said, everyone is using https://github.com/coderanger/supervisor-stdout to achieve this (in my opinion it should be added to the supervisord project!). Something Drew forgot to mention: you may need to add
stdout_logfile=/dev/stdout
stdout_logfile_maxbytes=0
To the supervisord program configuration block.
Another very useful trick: if your process logs to a file instead of stdout, you can ask supervisord to watch it:
[program:php-fpm-log]
command=tail -f /var/log/php5-fpm.log
stdout_events_enabled=true
stderr_events_enabled=true
This will redirect the php5-fpm.log content to stdout, and then to supervisord's stdout via supervisor-stdout.
supervisor-stdout requires installing python-pip, which downloads ~150 MB; for a container, I think that is a lot just to install another tool.
Redirecting logfile to /dev/stdout works for me:
stdout_logfile=/dev/stdout
stdout_logfile_maxbytes=0
http://veithen.github.io/2015/01/08/supervisord-redirecting-stdout.html
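Putting those two lines into context, a minimal sketch of a program block that sends both streams to the container's output (the program name and command are placeholders):
[program:myapp]
command=/usr/local/bin/myapp
stdout_logfile=/dev/stdout
stdout_logfile_maxbytes=0
stderr_logfile=/dev/stderr
stderr_logfile_maxbytes=0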
I agree, not using the daemon mode sounds like the best solution, but I would probably employ the same strategy you would use when you had actual physical servers or some kind of VM setup: centralize logging.
You could use something self-hosted like logstash inside the container to collect logs and send it to a central server. Or use a commercial service like loggly or papertrail to do the same.
Today's best practice is to have minimal Docker images. For me, the ideal container for a Python application contains just my code, supporting libraries, and something like uwsgi if it is necessary.
I published one solution on https://github.com/msgre/uwsgi_logging. It is a simple Django application behind uwsgi, configured to display logs from uwsgi and the Django app on the container's stdout without the need for supervisord.
I had the same problem with my Python app (Flask). The solution that worked for me was to:
Start supervisord in nodaemon mode (supervisord -n)
Redirect log to /proc/1/fd/1 instead of /dev/stdout
Set these two environment variables in my Docker image: PYTHONUNBUFFERED=True and PYTHONIOENCODING=UTF-8
Just add the lines below to your respective supervisor.ini config file.
redirect_stderr=true
stdout_logfile=/proc/1/fd/1
Export these variables in the application's (Linux) environment:
$ export PYTHONUNBUFFERED=True
$ export PYTHONIOENCODING=UTF-8
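If you build the image from a Dockerfile, the same variables can be baked in there instead of exporting them at runtime (a sketch, assuming a Dockerfile-based build):
ENV PYTHONUNBUFFERED=True
ENV PYTHONIOENCODING=UTF-8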
Indeed, starting supervisord in non-daemon mode is the best solution.
You could also use volumes in order to mount the supervisord's logs to a central place.

How to run celeryd as a daemon in Ubuntu?

I am trying to install an init.d script to run celery for scheduling tasks. When I try to start it with sudo /etc/init.d/celeryd start, it throws the error "User does not exist: 'celery'".
my celery configuration file (/etc/default/celeryd) contains these:
# Workers should run as an unprivileged user.
CELERYD_USER="celery"
CELERYD_GROUP="celery"
I know that these are wrong; that is why it throws the error.
The documentation just says this:
CELERYD_USER
User to run celeryd as. Default is current user.
nothing more about it.
Any help will be appreciated.
I am adding a proper answer so that it is clearly visible:
Workers are Unix processes that will run the various celery tasks. As you can see in the documentation, CELERYD_USER and CELERYD_GROUP determine the user and group these workers will run as in your Unix environment.
So, what happened initially in your case is that celery tried to start the worker with a user named "celery" which did not exist in your system. When you commented out these two options, then celery started the workers with the user that issued the command sudo /etc/init.d/celeryd start which in this case is the root (administrator) user (default is the current user).
However, it is recommended to run the workers as unprivileged users and not as root, for obvious reasons. So I recommend actually adding the celery user and group, for example using the small tutorial found at http://www.cyberciti.biz/faq/unix-create-user-account/ (a minimal sketch of the user creation follows below), and then uncommenting the
CELERYD_USER="celery"
CELERYD_GROUP="celery"
options.
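For completeness, a minimal sketch of creating that user and group (exact flags vary by distribution; -M skips creating a home directory and /bin/false disables login):
sudo groupadd celery
sudo useradd -g celery -M -s /bin/false celery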

Python daemon and systemd service

I have a simple Python script working as a daemon. I am trying to create systemd script to be able to start this script during startup.
Current systemd script:
[Unit]
Description=Text
After=syslog.target
[Service]
Type=forking
User=node
Group=node
WorkingDirectory=/home/node/Node/
PIDFile=/var/run/zebra.pid
ExecStart=/home/node/Node/node.py
[Install]
WantedBy=multi-user.target
node.py:
if __name__ == '__main__':
    with daemon.DaemonContext():
        check = Node()
        check.run()
run contains while True loop.
I try to run this service with systemctl start zebra-node.service. Unfortunately the service never finishes its startup sequence - I have to press Ctrl+C.
The script is running, but the status is activating, and after a while it changes to deactivating.
Now I am using python-daemon (but before I tried without it and the symptoms were similar).
Should I implement some additional features to my script or is systemd file incorrect?
The reason it does not complete the startup sequence is that for Type=forking your startup process is expected to fork and exit (see $ man systemd.service - search for forking).
Simply use only the main process, do not daemonize
One option is to do less. With systemd, there is often no need to create daemons and you may directly run the code without daemonizing.
#!/usr/bin/python -u
from somewhere import Node
check = Node()
check.run()
This allows using the simpler service Type called simple, so your unit file would look like this:
[Unit]
Description=Simplified simple zebra service
After=syslog.target
[Service]
Type=simple
User=node
Group=node
WorkingDirectory=/home/node/Node/
ExecStart=/home/node/Node/node.py
StandardOutput=syslog
StandardError=syslog
[Install]
WantedBy=multi-user.target
Note that the -u in the python shebang is not necessary, but in case you print something to stdout or stderr, the -u makes sure there is no output buffering in place, so printed lines will be immediately caught by systemd and recorded in the journal. Without it, output would appear with some delay.
For this purpose I added the lines StandardOutput=syslog and StandardError=syslog to the unit file. If you do not care about the printed output in your journal, you can leave these lines out (they do not have to be present).
systemd makes daemonization obsolete
While the title of your question explicitly asks about daemonizing, I guess the core of the question is "how to make my service run", and using the main process directly is much simpler (you do not have to care about daemons at all), so it can be considered an answer to your question.
I think many people use daemonizing just because "everybody does it". With systemd the reasons for daemonizing are often obsolete. There might be some reasons to use daemonization, but they are rare cases now.
EDIT: fixed python -p to proper python -u. thanks kmftzg
It is possible to daemonize like Schnouki and Amit describe. But with systemd this is not necessary. There are two nicer ways to initialize the daemon: socket-activation and explicit notification with sd_notify().
Socket activation works for daemons which want to listen on a network port or UNIX socket or similar. Systemd would open the socket, listen on it, and then spawn the daemon when a connection comes in. This is the preferred approach because it gives the most flexibility to the administrator. [1] and [2] give a nice introduction, [3] describes the C API, while [4] describes the Python API (a minimal sketch follows the links below).
[1] http://0pointer.de/blog/projects/socket-activation.html
[2] http://0pointer.de/blog/projects/socket-activation2.html
[3] http://www.freedesktop.org/software/systemd/man/sd_listen_fds.html
[4] http://www.freedesktop.org/software/systemd/python-systemd/daemon.html#systemd.daemon.listen_fds
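A minimal sketch of the Python side of socket activation (it assumes a matching .socket unit passes exactly one listening TCP socket, and that the python-systemd package is installed):
import socket
from systemd import daemon

fds = daemon.listen_fds()  # file descriptors handed over by systemd
sock = socket.fromfd(fds[0], socket.AF_INET, socket.SOCK_STREAM)
conn, addr = sock.accept()  # the socket is already bound and listening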
Explicit notification means that the daemon opens the sockets itself and/or does any other initialization, and then notifies init that it is ready and can serve requests. This can be implemented with the "forking protocol", but actually it is nicer to just send a notification to systemd with sd_notify().
The Python wrapper is called systemd.daemon.notify and takes one line to use [5].
[5] http://www.freedesktop.org/software/systemd/python-systemd/daemon.html#systemd.daemon.notify
In this case the unit file would have Type=notify, and the service would call
systemd.daemon.notify("READY=1") after it has established its sockets. No forking or daemonization is necessary.
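A minimal sketch of the notify variant, reusing the hypothetical Node class from the simple example above (python-systemd provides systemd.daemon.notify):
#!/usr/bin/python -u
from systemd import daemon  # from the python-systemd package
from somewhere import Node  # hypothetical import, as above

check = Node()            # do all initialization first
daemon.notify("READY=1")  # tell systemd the service is ready
check.run()               # then enter the main loop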
You're not creating the PID file.
systemd expects your program to write its PID in /var/run/zebra.pid. As you don't do it, systemd probably thinks that your program is failing, hence deactivating it.
To add the PID file, install lockfile and change your code to this:
import daemon
import daemon.pidlockfile

pidfile = daemon.pidlockfile.PIDLockFile("/var/run/zebra.pid")
with daemon.DaemonContext(pidfile=pidfile):
    check = Node()
    check.run()
(Quick note: some recent update of lockfile changed its API and made it incompatible with python-daemon. To fix it, edit daemon/pidlockfile.py, remove LinkFileLock from the imports, and add from lockfile.linklockfile import LinkLockFile as LinkFileLock.)
Be careful of one other thing: DaemonContext changes the working dir of your program to /, making the WorkingDirectory of your service file useless. If you want DaemonContext to chdir into another directory, use DaemonContext(pidfile=pidfile, working_directory="/path/to/dir").
I came across this question when trying to convert some python init.d services to systemd under CentOS 7. This seems to work great for me, by placing this file in /etc/systemd/system/:
[Unit]
Description=manages worker instances as a service
After=multi-user.target
[Service]
Type=idle
User=node
ExecStart=/usr/bin/python /path/to/your/module.py
Restart=always
TimeoutStartSec=10
RestartSec=10
[Install]
WantedBy=multi-user.target
I then dropped my old init.d service file from /etc/init.d and ran sudo systemctl daemon-reload to reload systemd.
I wanted my service to auto restart, hence the restart options. I also found using idle for Type made more sense than simple.
Behavior of idle is very similar to simple; however, actual execution
of the service binary is delayed until all active jobs are dispatched.
This may be used to avoid interleaving of output of shell services
with the status output on the console.
More details on the options I used here.
I also experimented with keeping the old service and having systemd restart the service, but I ran into some issues.
[Unit]
# Added this to the above
#SourcePath=/etc/init.d/old-service
[Service]
# Replace the ExecStart from above with these
#ExecStart=/etc/init.d/old-service start
#ExecStop=/etc/init.d/old-service stop
The issue I experienced was that the init.d service script was used instead of the systemd service if both had the same name. If you killed the init.d-initiated process, the systemd script would then take over. But if you ran service <service-name> stop, it would refer to the old init.d service. So I found the best way was to drop the old init.d service, so that the service command referred to the systemd service instead.
Hope this helps!
Also, you most likely need to set detach_process=True when creating the DaemonContext().
This is because, if python-daemon detects that it is running under an init system, it does not detach from the parent process, while systemd expects that a daemon process running with Type=forking will do so. Hence you need that option, otherwise systemd will keep waiting and finally kill the process.
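A minimal sketch combining this with Schnouki's PID file example above (the paths are from the question; detach_process is the DaemonContext argument that forces detaching):
import daemon
import daemon.pidlockfile

pidfile = daemon.pidlockfile.PIDLockFile("/var/run/zebra.pid")
# force detaching even though we were started by an init system (systemd)
with daemon.DaemonContext(pidfile=pidfile, detach_process=True):
    check = Node()
    check.run()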
If you are curious, in python-daemon's daemon module, you will see this code:
def is_detach_process_context_required():
    """ Determine whether detaching process context is required.

        Return ``True`` if the process environment indicates the
        process is already detached:

        * Process was started by `init`; or

        * Process was started by `inetd`.

        """
    result = True
    if is_process_started_by_init() or is_process_started_by_superserver():
        result = False
    return result
Hopefully this explains better.

Most pythonic way of running a single command with sudo rights

I have a python script which is performing some nagios configuration. The script is running as a user which has full sudo rights (the user can run any command with sudo, without password prompt). The final step in the configuration is this:
open(NAGIOS_COMMAND_FILE, 'a').write(cmdline)
The NAGIOS_COMMAND_FILE is only writable by root, so this command should be run by root. I can think of two ways of achieving this (both unsatisfactory):
Run the whole script as root. I do not like doing this, since any error in my script will be executed with full root rights.
Put the open(NAGIOS_COMMAND_FILE, 'a').write(cmdline) command in a separate script, and use the subprocess library to call that script, with sudo. I do not like creating an extra script just to run a single command.
I suppose there is no way of changing the running user just for a single command, in my current script, or am I wrong?
Why don't you give write permission on NAGIOS_COMMAND_FILE to the user who has all the sudo rights?
Never, ever run a web server as root or as a user with full sudo privileges. This isn't a pythonic thing, it is a "keep my server from being pwned" thing.
Look at os.seteuid, the "principle of least privilege", and man sudoers and run your server as regular "httpd-server" where "httpd-server" has sudoer permission to write to NAGIOS_COMMAND_FILE. And then be sure that what you write to the command file is as clean as you can make it.
It is actually possible to change user for a single command.
Fabric provides a way to log in as any user to a server. It relies on ssh connections I believe. So you could connect to localhost with a different user in your python script and execute the desired command.
http://docs.fabfile.org/en/1.4.3/api/core/decorators.html
Anyway, as others have already pointed out, it is best to grant the user running the script permission to execute this one command and avoid relying on root for execution.
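If you do want to keep the sudo route but avoid a separate script, one common pattern (a sketch, not specific to Nagios) is to pipe the line through sudo tee -a, so that only this single append runs with elevated rights:
import subprocess

# tee -a appends its stdin to the file as root; it also echoes to stdout
proc = subprocess.Popen(['sudo', 'tee', '-a', NAGIOS_COMMAND_FILE],
                        stdin=subprocess.PIPE)
proc.communicate(cmdline.encode())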
I would agree with the post above: either give your user write permissions on the NAGIOS_COMMAND_FILE, or add that user to a group that has those permissions, like nagcmd.
