I use this tool http://python-rq.org/
I have a Flask app, but I could not find a way to start an rq worker other than with the rq CLI, e.g. $ rq worker
I need the worker running all the time. How can I run it as a service? The service also needs to start on boot.
You should investigate some supervisor program to control your rq worker. Take a look at supervisor or systemd. I personally use supervisord and it's pretty popular in the Python community.
This is how any supervisor program works (not to be confused with supervisord specifically): the supervisor itself is a service (controlled by another service, e.g. systemd, init, etc.), and it runs the programs specified in its configuration file. If a program exits or has issues, the supervisor will respawn it.
If you were in the docker ecosystem, it'd be simpler because docker can be your supervisor.
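To make this concrete, a minimal supervisord program section for an rq worker might look something like this (just a sketch; the virtualenv path and project directory are assumptions):
[program:rqworker]
; run the worker from the project's virtualenv (paths are placeholders)
command=/home/deploy/venv/bin/rq worker
directory=/home/deploy/myapp
autostart=true
autorestart=true
; merge stderr into stdout so everything ends up in supervisord's log
redirect_stderr=true
With supervisord itself installed as a system service (so it starts on boot), the worker comes up automatically and is respawned if it exits.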
Multiple options:
For dev purposes, I recommend a Docker container (https://docker.com). For production, the cleanest way is to use the packaged version for your system (assuming you're using a GNU/Linux box) along with a dedicated systemd unit.
For example, on Debian:
apt-get update && apt-get install redis
Then edit redis.conf and start it:
systemctl start redis
Enable it (i.e. start Redis during startup):
systemctl enable redis
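The dedicated systemd unit for the worker itself could then look something like this (a sketch; the user, working directory, and virtualenv path are assumptions):
[Unit]
Description=RQ worker (example)
After=network.target redis.service

[Service]
Type=simple
User=www-data
WorkingDirectory=/srv/myapp
# path to rq inside the app's virtualenv is an assumption
ExecStart=/srv/myapp/venv/bin/rq worker
Restart=always

[Install]
WantedBy=multi-user.target
Save it as /etc/systemd/system/rqworker.service, then enable and start it the same way as Redis above.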
I'm learning Celery and I'd like to ask:
What is the absolute simplest way to get Celery to run automatically when Django starts on Ubuntu? Right now I manually start celery -A {prj name} worker -l INFO via the terminal.
Is there any configuration that lets Celery pick up changes in the tasks.py code without restarting Celery? Right now I Ctrl+C and retype celery -A {prj name} worker -l INFO every time I change something in tasks.py. I can foresee a problem with this approach in production: if Celery starts automatically, would I need to restart Ubuntu instead?
(Setup: VPS, Django, Ubuntu 18.10, no Docker, no external resources, using Redis, which starts automatically.)
I am aware this is similar to Django-Celery in production and How to ..., but those are still a bit unclear to me, as they refer to Amazon and to using shell scripts and crontabs. It seems a bit peculiar that these things wouldn't work out of the box.
I'll give the benefit of the doubt that I have misunderstood how Celery is meant to be set up.
I have a deploy script that launches Celery in production.
In production it's better to launch the workers like this:
celery multi stop 5
celery multi start 5 -A {prj name} -Q:1 default -Q:2,3 QUEUE1 -Q:4,5 QUEUE2 --pidfile="%n.pid"
This will stop and then launch 5 workers for the different queues.
At launch, Celery imports your project code and keeps running that version of it, which means you need to relaunch the workers to apply modifications; you cannot add a file watcher in production (it costs memory).
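To have the same workers come back after a reboot, one option is to wrap those commands in a systemd unit. This is only a rough sketch: the project name, queues, user, and paths below are placeholders taken from the commands above.
[Unit]
Description=Celery workers (example)
After=network.target redis.service

[Service]
Type=forking
User=www-data
WorkingDirectory=/srv/myproject
# /run/celery is created automatically for the pidfiles
RuntimeDirectory=celery
# %% escapes % for systemd so celery itself still sees %n
ExecStart=/srv/myproject/venv/bin/celery multi start 5 -A myproject -Q:1 default -Q:2,3 QUEUE1 -Q:4,5 QUEUE2 --pidfile=/run/celery/%%n.pid
ExecStop=/srv/myproject/venv/bin/celery multi stopwait 5 --pidfile=/run/celery/%%n.pid

[Install]
WantedBy=multi-user.target
celery multi daemonizes the workers, hence Type=forking; enable the unit with systemctl enable so it starts on boot.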
I need to deploy a Python script on an AWS machine running Ubuntu Server 18.04.
The script contains a TCP server using a custom TCP port (let's say 9999), which handles clients' requests in different threads.
The problem is that I don't know what the best practice is for keeping this script running if anything goes wrong (the main TCP server thread dies for whatever reason).
Furthermore, I don't really know what the best practice is for running this kind of script on AWS EC2.
So far I am manually starting the script via SSH. Everything in the script logic works well; the problem is how to start such a script and keep it running.
You should take a look at the systemd suite. It can be used to manage the status of your script. It can restart the script if it dies, or if the node is rebooted.
Here's an example service.
Create the file below in this location: /lib/systemd/system/example.service
[Unit]
Description=A short description of the script.
[Service]
Type=simple
# Script location
ExecStart=/path/to/some/script.py
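# Note: the script must be executable and start with a shebang (e.g. #!/usr/bin/env python3);
# alternatively, point ExecStart at the interpreter, e.g. ExecStart=/usr/bin/python3 /path/to/some/script.py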
# Restart the script in all circumstances (e.g. if it exits successfully, fails, or crashes).
Restart=always
[Install]
WantedBy=multi-user.target
Then set the service to start automatically on boot and start the service:
chmod 644 /lib/systemd/system/example.service
systemctl enable example
systemctl start example
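Once it is running, you can check the service and follow its output (systemd captures the script's stdout/stderr in the journal):
systemctl status example
journalctl -u example -f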
There are a lot of resources available if you want to learn more about systemd. I'd suggest the links below:
[0] https://www.freedesktop.org/wiki/Software/systemd/
[1] https://github.com/torfsen/python-systemd-tutorial
[2] https://www.linode.com/docs/quick-answers/linux/start-service-at-boot/#create-a-custom-systemd-service
[3] https://medium.com/@benmorel/creating-a-linux-service-with-systemd-611b5c8b91d6
As for general best practices, it is difficult to provide advice without knowing more about your script. For instance, it is not recommended to use Python's built-in http.server (HTTPServer) for production workloads, because it only implements basic security checks.
I have some applications written in Python that are managed by uWSGI on Ubuntu 12.04.4 LTS.
The apps are defined in /etc/uwsgi/apps-available/app001.xml, app002.xml, ..., and all the app XML files are symlinked from the /etc/uwsgi/apps-enabled directory.
When I have made changes to only one specific app, I restart the uWSGI processes:
sudo service uwsgi restart
But when I run the above command to restart the uWSGI processes, it restarts all apps.
How can I restart only a single uWSGI instance?
I want to keep the other instances running to avoid disruptions caused by the restart.
If you are using the Emperor, just touch the config files. Otherwise, configure each instance to expose a pidfile (for use with UNIX signals), a master FIFO (http://uwsgi-docs.readthedocs.org/en/latest/MasterFIFO.html), or --touch-reload (http://uwsgi-docs.readthedocs.org/en/latest/Options.html#touch-reload)
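For instance, with the layout from the question under the Emperor, touching a single vassal config reloads only that app; with a master FIFO, writing 'r' asks that one instance to gracefully reload its workers (the FIFO path below is an assumption you would set via master-fifo in the instance config):
# Emperor mode: reload only app001
touch /etc/uwsgi/apps-available/app001.xml

# Master FIFO: graceful reload of this one instance
echo r > /run/uwsgi/app001.fifo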
So, experimenting with Docker + Supervisord + Django app via uWSGI. I have the whole stack working fine, but need to tidy up the logging.
If I launch supervisor in non-daemon mode,
/usr/bin/supervisord -n
Then I get the logging output for supervisor played into the docker logs stdout. However, if supervisord is in daemon mode, its own logs get stashed away in the container filesystem, and the logs of its applications do too - in their own app__stderr/stdout files.
What I want is to log both supervisor, and application stdout to the docker log.
Is starting supervisord in non-daemon mode a sensible idea for this, or does it cause unintended consequences? Also, how do I get the application logs also played into the docker logs?
I accomplished this using supervisor-stdout.
Install supervisor-stdout in your Docker image:
RUN apt-get install -y python-pip && pip install supervisor-stdout
Supervisord Configuration
Edit your supervisord.conf to look like so:
[program:myprogram]
command=/what/ever/command
stdout_events_enabled=true
stderr_events_enabled=true
[eventlistener:stdout]
command = supervisor_stdout
buffer_size = 100
events = PROCESS_LOG
result_handler = supervisor_stdout:event_handler
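If supervisord is the container's main process (as in the question), also keep it in the foreground, either with the -n flag or in the config:
[supervisord]
nodaemon=true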
A Docker container is like a Kleenex: you use it, then you drop it. To stay "alive", Docker needs something running in the foreground (whereas daemons run in the background); that's why you are using supervisord.
So you need to redirect/merge the processes' output (access and error logs) into the supervisord output you see when running your container.
As Drew said, everyone is using https://github.com/coderanger/supervisor-stdout to achieve this (in my opinion it should be added to the supervisord project!). Something Drew forgot to say: you may need to add
stdout_logfile=/dev/stdout
stdout_logfile_maxbytes=0
to the supervisord program configuration block.
Another very useful trick: if your process logs to a file instead of stdout, you can ask supervisord to watch it:
[program:php-fpm-log]
command=tail -f /var/log/php5-fpm.log
stdout_events_enabled=true
stderr_events_enabled=true
This will redirect the content of php5-fpm.log to stdout, and then to supervisord's stdout via supervisor-stdout.
supervisor-stdout requires installing python-pip, which downloads ~150 MB; for a container, I think that's a lot just to install another tool.
Redirecting the logfile to /dev/stdout works for me:
stdout_logfile=/dev/stdout
stdout_logfile_maxbytes=0
http://veithen.github.io/2015/01/08/supervisord-redirecting-stdout.html
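Put together, a program section using this approach might look like this (the program name and command are placeholders):
[program:myapp]
command=/usr/local/bin/uwsgi --ini /etc/uwsgi/myapp.ini
; send both streams straight to the container's stdout
redirect_stderr=true
stdout_logfile=/dev/stdout
stdout_logfile_maxbytes=0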
I agree, not using daemon mode sounds like the best solution, but I would probably employ the same strategy you would use with actual physical servers or some kind of VM setup: centralize logging.
You could use something self-hosted like logstash inside the container to collect logs and send it to a central server. Or use a commercial service like loggly or papertrail to do the same.
Today's best practice is to have minimal Docker images. For me, the ideal container with a Python application contains just my code, supporting libraries, and something like uWSGI if it is necessary.
I published one solution at https://github.com/msgre/uwsgi_logging. It is a simple Django application behind uWSGI, configured to show the logs from both uWSGI and the Django app on the container's stdout without needing supervisord.
I had the same problem with my Python app (Flask). The solution that worked for me was to:
Start supervisord in nodaemon mode (supervisord -n)
Redirect log to /proc/1/fd/1 instead of /dev/stdout
Set these two environment variables in my Docker image: PYTHONUNBUFFERED=True and PYTHONIOENCODING=UTF-8
Just add the lines below to your respective supervisor.ini config file:
redirect_stderr=true
stdout_logfile=/proc/1/fd/1
Export these variables in the application's (Linux) environment:
$ export PYTHONUNBUFFERED=True
$ export PYTHONIOENCODING=UTF-8
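If you prefer to bake these into the image instead of exporting them at runtime, the Dockerfile equivalent is:
ENV PYTHONUNBUFFERED=True
ENV PYTHONIOENCODING=UTF-8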
Indeed, starting supervisord in non-daemon mode is the best solution.
You could also use volumes to mount supervisord's logs to a central place.
I am trying to design a resilient and highly available python API back-end service. The core service is designed to run continuously. The service has to run independently for each of my tenants. This is required as the core service is a blocking service and each tenant's execution needs to be independent from any other tenant's service.
The core service is to be started by a provisioning service. The provisioner is also a continuously running service and is responsible for the housekeeping functions, i.e. starting the core service on tenant sign-up, checking for the required environment and attributes, stopping the core service, etc.
Currently I am using the multiprocessing module to spawn child instances of the core service from the provisioner service. Having a multi-threaded service with one thread per tenant is also an option, but that has the drawback of disrupting service for the other tenants if any of the threads crashes. Ideally I would like all of these to run as background processes. The problems are:
If I daemonize the provisioner service, multiprocessing will not let that daemon create child processes. This is documented here.
If the provisioner service dies, then all the children become orphans. How do I recover from this situation?
Obviously, I am open to solutions that do not follow this multiprocessing usage model.
I would recommend you take a different approach. Use the system tools available in your distribution to manage the life-cycle of your processes instead of spawning them yourself. The provisioner would be much simpler as well, as it will not have to reproduce what your operating system can do with little effort.
On Ubuntu/CentOS 6 systems you can use Upstart, which has many advantages compared to the old sysvinit (aggressive parallelisation, respawning, simple init config syntax, etc.).
There is also systemd, which is similar to Upstart in design and comes as the default in openSUSE.
The provisioner could then be used only to create the needed init config for each service, and start or stop them using the subprocess module. You could then monitor your instances in case upstart was not able to respawn an instance, and send an alert, or try to start the service again.
Using this approach, you isolate all instances of user services from one another. If the provisioner crashes, the rest of the services will remain up.
For example, say your provisioner is running in the background. It gets a message via AMQP or some other means to create a user and start services for that user. One possible flow would be:
Create the user
Do any bootstrapping needed for new users
Create /etc/init/[username]_service.conf
Start [username]_service
The init script could look similar to:
description "start Service for [username]"
start on runlevel [2345]
stop on runlevel [!2345]
respawn
# Run before process
pre-start script
end script
exec /bin/su -c "/path/to/your/app" <username>
This way you offload process management from your provisioner to the system upstart daemon. You only need to do job management in a simple way (create/destroy services when a user is created or deleted).
On Debian-like systems you can wrap a non-daemonized service with:
start-stop-daemon --start --quiet --background --make-pidfile --pidfile $PIDFILE --exec $DAEMON --chuid $USER --chdir $DIR -- \
$DAEMON_ARGS
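The matching stop invocation (reusing the same variables) would be something like:
start-stop-daemon --stop --quiet --pidfile $PIDFILE --retry 30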
Children must exit after processing a task.
The parent process must be as simple as possible: only "receive task, spawn child" in the main loop.