I am trying to run celery inside a docker container and it's never updating for some reason. Whenever I add a new function in tasks.py or update an existing function, it never registers with celery, even after I restart the container.
Here is my dockerfile:
# start with a base image
FROM python:3.4-slim
ENV REDIS_IP 1.1.1.111
ENV REDIS_PORT 6379
ENV REDIS_DB 0
# install dependencies
RUN apt-get update && apt-get install -y \
apt-utils \
nginx \
supervisor \
python3-pip \
&& rm -rf /var/lib/apt/lists/*
RUN echo "America/New_York" > /etc/timezone; dpkg-reconfigure -f noninteractive tzdata
# update working directories
ADD ./app /app
ADD ./config /config
ADD requirements.txt /
# install dependencies
RUN pip install --upgrade pip
RUN pip3 install -r requirements.txt
# setup config
RUN echo "\ndaemon off;" >> /etc/nginx/nginx.conf
RUN rm /etc/nginx/sites-enabled/default
RUN ln -s /config/nginx.conf /etc/nginx/sites-enabled/
RUN ln -s /config/supervisor.conf /etc/supervisor/conf.d/
EXPOSE 80
CMD ["supervisord", "-n"]
Then my supervisor.conf:
[program:app]
command = uwsgi --ini /config/app.ini
autostart=true
autorestart=true
[program:nginx]
command = service nginx restart
autostart=true
autorestart=true
[program:celery]
directory = /app
command = celery -A tasks.celery worker -P eventlet -c 1000
autostart=true
autorestart=true
My tasks.py:
import os
from celery import Celery
from app import app as flask_app
def make_celery(app):
    celery = Celery(app.import_name,
                    backend='redis://{0}:{1}/{2}'.format(os.environ['REDIS_IP'], os.environ['REDIS_PORT'], os.environ['REDIS_DB']),
                    broker='redis://{0}:{1}/{2}'.format(os.environ['REDIS_IP'], os.environ['REDIS_PORT'], os.environ['REDIS_DB']))
    celery.conf.update(
        CELERY_ENABLE_UTC=True,
        CELERY_TIMEZONE='America/New_York'
    )
    TaskBase = celery.Task
    class ContextTask(TaskBase):
        abstract = True
        def __call__(self, *args, **kwargs):
            with app.app_context():
                return TaskBase.__call__(self, *args, **kwargs)
    celery.Task = ContextTask
    return celery
celery = make_celery(flask_app)
@celery.task()
def add_together(a, b):
    return a+ b
@celery.task()
def multiply(a,b)
    return a*b
and for some reason:
I have 21 workers registered and multiply never gets registered.
Also, when I make changes to add_together, those changes never register either, even when I restart the container.
I am starting my container with:
docker build --rm -t myapp .
docker run -d -p 88:80 -v $(pwd)/app:/app --name=myapp myapp
and restart with:
docker restart myapp
I have also tried
docker stop $(docker ps -a -q)
docker rm $(docker ps -a -q)
and then rebuilding the app all over again. Nothing helps. Any ideas would be very much so appreciated.
I think this may be the problem:
@celery.task() # <-- you shouldn't call this decorator directly
def add_together(a, b):
    return a+ b
Try changing it to this:
@celery.task
def add_together(a, b):
    return a+ b
The reason: just check the source code of the task decorator:
def task(self, *args, **opts):
    """Creates new task class from any callable."""
    # ... handling named options
    if len(args) == 1:
        if callable(args[0]):
            return inner_create_task_cls(**opts)(*args)
        raise TypeError('argument 1 to @task() must be a callable')
    if args:
        raise TypeError(
            '@task() takes exactly 1 argument ({0} given)'.format(
                sum([len(args), len(opts)])))
    return inner_create_task_cls(**opts)
The only positional argument it accepts is the function to be decorated. Anything else raises a TypeError, which gets swallowed by supervisord since you didn't configure the log level to debug.
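If you want those errors to surface instead of silently disappearing, one option is to point the celery program's output at a log file and bump the worker log level (a sketch; the log path is just an example):
[program:celery]
directory = /app
command = celery -A tasks.celery worker -P eventlet -c 1000 -l debug
stdout_logfile = /var/log/celery.log
redirect_stderr = true
autostart = true
autorestart = true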
I could not reproduce your problem on my setup. I created a simple Flask app as in the Celery documentation.
Can you try a few commands to double check your setup?
Open a shell into myapp container (it must be already running):
docker exec -t -i myapp /bin/bash
And then:
cd /app
celery -A tasks.celery status
celery -A tasks.celery inspect registered
Does the new task show up?
I think you may have other celery workers connected to the same redis server; that's why you see 21 of them. But I'm guessing.
You can also try with an independent redis container.
docker run --name myredis -d redis
And execute celery in debug mode, with:
docker run --rm -t -i -v $(pwd)/app:/app -e REDIS_IP=myredis -u nobody -w /app --link myredis myapp celery -A tasks.celery worker -P eventlet -c 1000 -l debug
Is the task there now? It should be listed just below the Celery startup banner message.
I don't think you have a problem with your image, but you can double check that looking into:
docker exec myapp /bin/bash -c "cat /app/tasks.py"
I don't think this is the problem, because you copy /app into the image and, when you run the container, you map /app again using the local directory. Are you running the container from the same directory you built the image from?
The -v $(pwd)/app:/app will override /app in the container with the local ./app directory. Do you really need this? Without the -v part, do you get the same results?
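For example, the same run command minus the bind mount, so the container only sees the /app that was baked into the image:
docker run -d -p 88:80 --name=myapp myapp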
I hope it helps to figure out what's wrong.
I made a working repo of your code. It lives here.
Things I changed:
Colon typo (look at your multiply def)
Not calling decorators
General code cleanup
Using a single redis URI
Some directory navigation in my test
I did not focus on the supervisord parts.
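For reference, the cleaned-up tasks.py looks roughly like this (a sketch of those changes, not a verbatim copy of the repo):
import os
from celery import Celery
from app import app as flask_app

# one redis URI, reused for both the broker and the result backend
REDIS_URI = 'redis://{0}:{1}/{2}'.format(
    os.environ['REDIS_IP'], os.environ['REDIS_PORT'], os.environ['REDIS_DB'])

def make_celery(app):
    celery = Celery(app.import_name, broker=REDIS_URI, backend=REDIS_URI)
    celery.conf.update(
        CELERY_ENABLE_UTC=True,
        CELERY_TIMEZONE='America/New_York'
    )
    TaskBase = celery.Task
    class ContextTask(TaskBase):
        abstract = True
        def __call__(self, *args, **kwargs):
            with app.app_context():
                return TaskBase.__call__(self, *args, **kwargs)
    celery.Task = ContextTask
    return celery

celery = make_celery(flask_app)

@celery.task  # decorator is not called
def add_together(a, b):
    return a + b

@celery.task  # note the colon after the signature this time
def multiply(a, b):
    return a * b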
Related
I have a problem which makes local development with Celery very difficult.
If I edit my local files and restart the docker containers, none of the CODE changes are applied. Not talking about task result caching here... just the actual function execution.
I have to prune everything for them to be applied.
Does anyone have any solution for this?
supervisord.conf
[supervisord]
nodaemon=true
[program:celeryworker]
stdout_logfile=/dev/stdout
stdout_logfile_maxbytes=0
stderr_logfile=/dev/stderr
stderr_logfile_maxbytes=0
command=celery -A worker.scheduler.schedule.celery_app worker -l info
[program:celerybeat]
stdout_logfile=/dev/stdout
stdout_logfile_maxbytes=0
stderr_logfile=/dev/stderr
stderr_logfile_maxbytes=0
command=celery -A worker.scheduler.schedule.celery_app beat -l info
worker.Dockerfile
FROM python:3.10
WORKDIR /usr/src/app
# install supervisord
RUN apt-get update && apt-get install -y supervisor
RUN apt-get install -y python3-pymysql
# copy requirements and install (so that changes to files do not mean rebuild cannot be cached)
RUN mkdir worker
COPY worker/requirements.txt /usr/src/app/worker
COPY worker/supervisord.conf /usr/src/app
RUN pip install -r /usr/src/app/worker/requirements.txt
# copy all files into the container
COPY ./worker /usr/src/app/worker
COPY ./db /usr/src/app/db
# needs to be set else Celery gives an error (because docker runs commands inside container as root)
ENV C_FORCE_ROOT=1
# run supervisord
CMD ["/usr/bin/supervisord"]
compose subset
celery_worker:
  build:
    context: server
    dockerfile: worker.Dockerfile
  volumes:
    - ./worker:/tmp/worker
  environment:
    - CELERY_BROKER_URL=amqp://guest:guest@rabbitmq:5672//
    - CELERY_RESULT_BACKEND=redis://@redis:6000/0
  depends_on:
    - db
    - rabbitmq
    - redis
I've been working on this all week, and I don't seem to understand what I'm missing. The problem is simple: I have a container running a platform on Django, and I need to create a cron job for a smaller task. I created a test task that executes every minute and just prints a log line, but it is not working: it installs cron, adds the cron jobs, starts the cron service, and I can see them in the crontab, but they are just never triggered.
When I first started, I had the cron running in the same instance, but after reading this Question I found that I had to separate it into 2 instances, since apparently having Django running was affecting the cron service. Following that, this is how I have my files:
docker-compose.yml
version: '3'
services:
  auth:
    build:
      context: ./
      dockerfile: ./devops/Dockerfile
      args:
        [Bunch of parameters]
    container_name: auth
    volumes:
      - ./project:/app
    ports:
      - 8000:8000
    environment:
      [Bunch of parameters]
    command: python manage.py runserver 0.0.0.0:8000
  cron:
    build:
      context: ./
      dockerfile: ./devops/Dockerfile
      args:
        [Bunch of parameters]
    container_name: cron
    volumes:
      - ./project:/app
    environment:
      [Bunch of parameters]
    command: cron -f
Dockerfile
FROM python:3.8
ENV PYTHONUNBUFFERED 1
WORKDIR /app
COPY ./devops/requirements.txt .
COPY ./project .
# COPY ./.env .
RUN apt-get update
RUN apt-get -y install cron
RUN cp ./.env . || echo "file not found"
RUN pip install -r requirements.txt
#Set permission to entrypoint.sh
RUN chmod +x entrypoint.sh
# start web server
ENTRYPOINT ["./entrypoint.sh"]
CMD ["gunicorn", "-b", "0.0.0.0:8000", "project.wsgi:application", "--workers=5"]
entrypoint.sh
#!/bin/sh
# Set up scheduled jobs, if this is the cron container.
if [ "$1" = cron ]; then
service cron start
python ./manage.py crontab add
service cron stop
fi
# Run whatever command we got passed.
exec "$#"
settings.py
CRONJOBS = [
    ('*/1 * * * *', 'apps.coupons.cron.test'),
]
cron.py
import logging
logger = logging.getLogger(__name__)
def test():
    logger.warning('Hello World')
    logger.debug('Hello World')
    logger.info('Hello World')
    logger.error('Hello World')
    logger.critical('Hello World')
    print("Hello World")
    return "Finished"
Here you can see that the cron jobs were added, that cron is running, and that executing the cron job manually works.
Still, no matter how long I wait, it doesn't seem like the cron job runs automatically every minute (I check this by looking at the file.log that I set in the logger's options in settings.py). What am I doing wrong, or what may be missing to make the cron work?
I'm deploying a django application and it works fine when I run it manually. I'm trying to use supervisor but when I run sudo supervisorctl status botApp the log file says:
Starting botApp as ubuntu
/home/ubuntu/gunicorn_start.bash: line 28: exec: gunicorn: not found
My gunicorn_start.bash is the following one:
#!/bin/bash
NAME="botApp" # Name of the application
DJANGODIR=/home/ubuntu/chimpy # Django project directory
SOCKFILE=/home/ubuntu/django_env/run/gunicorn.sock # we will communicate using this unix socket
USER=ubuntu # the user to run as
GROUP=ubuntu # the group to run as
NUM_WORKERS=3 # how many worker processes should Gunicorn spawn
DJANGO_SETTINGS_MODULE=botApp.settings # which settings file should Django use
DJANGO_WSGI_MODULE=botApp.wsgi # WSGI module name
echo "Starting $NAME as `whoami`"
# Activate the virtual environment
cd $DJANGODIR
source /home/ubuntu/django_env/bin/activate
export DJANGO_SETTINGS_MODULE=$DJANGO_SETTINGS_MODULE
export PYTHONPATH=$DJANGODIR:$PYTHONPATH
# Create the run directory if it doesn't exist
RUNDIR=$(dirname $SOCKFILE)
test -d $RUNDIR || mkdir -p $RUNDIR
# Start your Django Unicorn
# Programs meant to be run under supervisor should not daemonize themselves (do not use --daemon)
exec gunicorn ${DJANGO_WSGI_MODULE}:application \
--name $NAME \
--workers $NUM_WORKERS \
--user=$USER --group=$GROUP \
--bind=unix:$SOCKFILE \
--log-level=debug \
--log-file=-
And my configuration file in /etc/supervisor/conf.d/botApp.conf is:
[program:botApp]
command = /home/ubuntu/gunicorn_start.bash;
user = ubuntu;
stdout_logfile = /home/ubuntu/logs/gunicorn_supervisor.log;
redirect_stderr = true;
environment=LANG=en_US.UTF-8,LC_ALL=en_US.UTF-8;
Is something wrong in my gunicorn bash? Many thanks
To check whether the virtual environment is active or not, just write a simple script:
#!/bin/bash
source /var/www/html/project_env/bin/activate
Then run the command:
sudo bash file_name.sh
If you don't get any error, that means the venv is activated.
Note - you will not get any venv activation marker on the terminal or shell.
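To go one step further (a hypothetical extension of the same script), you can also confirm that gunicorn itself is on the activated PATH:
#!/bin/bash
source /var/www/html/project_env/bin/activate
# should print a path inside project_env/bin if gunicorn is installed in the venv
which gunicorn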
I have 2 files that depend on each other when docker starts up. One is a flask file and one is a file with a few functions. When docker starts, only the functions file gets executed, but it imports flask variables from the flask file. Example:
Flaskfile
import flask
from flask import Flask, request
import json
_flask = Flask(__name__)

@_flask.route('/', methods=['POST'])
def flask_main():
    s = str(request.form['abc'])
    ind = global_fn_main(param1, param2, param3)
    return ind

def run(fn_main):
    global global_fn_main
    global_fn_main = fn_main
    _flask.run(debug=False, port=8080, host='0.0.0.0', threaded=True)
Main File
import flaskfile
#a few functions then
if __name__ == '__main__':
    flaskfile.run(main_fn)
The script runs fine without needing gunicorn.
Dockerfile
FROM python-flask
ADD *.py *.pyc /code/
ADD requirements.txt /code/
WORKDIR /code
EXPOSE 8080
CMD ["python","main_file.py"]
On the command line I usually do docker run -it -p 8080:8080 my_image_name and then docker will start and listen.
Now to use gunicorn:
I tried to modify my CMD parameter in the dockerfile to
["gunicorn", "-w", "20", "-b", "127.0.0.1:8083", "main_file:flaskfile"]
but it just keeps exiting. Am I not writing the docker gunicorn command right?
I just went through this problem this week and stumbled on your question along the way. Fair to say you either resolved this or changed approaches by now, but for future's sake:
The command in my Dockerfile is:
CMD ["gunicorn" , "-b", "0.0.0.0:8000", "app:app"]
Where the first "app" is the module and the second "app" is the name of the WSGI callable, in your case, it should be _flask from your code although you've some other stuff going on that makes me less certain.
Gunicorn takes the place of all the run statements in your code, if Flask's development web server and Gunicorn try to take the same port it can conflict and crash Gunicorn.
Note that when run by Gunicorn, __name__ is not "main". In my example it is equal to "app".
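For example, a minimal app.py along those lines (a sketch with hypothetical names, not the asker's exact code) could look like:
from flask import Flask

app = Flask(__name__)

@app.route('/')
def index():
    # trivial route with no dependencies, handy for checking the server is up
    return 'Hi'

# Only runs under "python app.py". Under Gunicorn the module is imported,
# __name__ is "app", and this block is skipped, so the development server
# never fights Gunicorn for the port.
if __name__ == '__main__':
    app.run(host='0.0.0.0', port=8080)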
At my admittedly junior level of Python, Docker, and Gunicorn, the fastest way to debug is to comment out the "CMD" in the Dockerfile and get the container up and running:
docker run -it -d -p 8080:8080 my_image_name
Hop onto the running container:
docker exec -it container_name /bin/bash
And start Gunicorn from the command line until you've got it working, then test with curl - I keep a basic route in my app.py file that just returns "Hi" and has no dependencies, for validating that the server is up before worrying about the port binding to the host machine.
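With a minimal app.py like the sketch above, that manual run inside the container is roughly (assuming port 8080, as in the docker run above, and a module named app.py):
gunicorn -b 0.0.0.0:8080 app:app
# then, from the host, since 8080 is published:
curl http://localhost:8080/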
After struggling with this issue over the last 3 days, I found that all you need to do is to bind to the non-routable meta-address 0.0.0.0 rather than the loopback IP 127.0.0.1:
CMD ["gunicorn" , "--bind", "0.0.0.0:8000", "app:app"]
And don't forget to expose the port, one option to do that is to use EXPOSE
in your Dockerfile:
EXPOSE 8000
Now:
docker build -t test .
Finally you can run:
docker run -d -p 8000:8000 test
This is my last part of my Dockerfile with Django App
EXPOSE 8002
COPY entrypoint.sh /code/
WORKDIR /code
ENTRYPOINT ["sh", "entrypoint.sh"]
then in entrypoint.sh
#!/bin/bash
# Prepare log files and start outputting logs to stdout
mkdir -p /code/logs
touch /code/logs/gunicorn.log
touch /code/logs/gunicorn-access.log
tail -n 0 -f /code/logs/gunicorn*.log &
export DJANGO_SETTINGS_MODULE=django_docker_azure.settings
exec gunicorn django_docker_azure.wsgi:application \
--name django_docker_azure \
--bind 0.0.0.0:8002 \
--workers 5 \
--log-level=info \
--log-file=/code/logs/gunicorn.log \
--access-logfile=/code/logs/gunicorn-access.log \
"$#"
Hope this could be useful
This work for me:
FROM docker.io/python:3.7
WORKDIR /app
COPY requirements.txt ./
RUN pip install --no-cache-dir -r requirements.txt
ENV GUNICORN_CMD_ARGS="--bind=0.0.0.0 --chdir=./src/"
COPY . .
EXPOSE 8000
CMD [ "gunicorn", "app:app" ]
I was trying to run a flask app as well. I found out that you can just use
ENTRYPOINT ["gunicorn", "-b", ":8080", "app:APP"]
This will take the file you have specified and run it on the docker instance. Also, don't forget the shebang at the top (#!/usr/bin/env python) if you are running with the debug log level.
gunicorn main:app --workers 4 --bind :3000 --access-logfile '-'
I would like to run a python cron job inside of a docker container in detached mode. My set-up is below:
My python script is test.py
#!/usr/bin/env python
import datetime
print "Cron job has run at %s" %datetime.datetime.now()
My cron file is my-crontab
* * * * * /test.py > /dev/console
and my Dockerfile is
FROM ubuntu:latest
RUN apt-get update && apt-get install -y software-properties-common python-software-properties && apt-get update
RUN apt-get install -y python cron
ADD my-crontab /
ADD test.py /
RUN chmod a+x test.py
RUN crontab /my-crontab
ENTRYPOINT cron -f
What are the potential problems with this approach? Are there other approaches and what are their pros and cons?
Several issues that I faced while trying to get a cron job running in a docker container were:
time in the docker container is in UTC not local time;
the docker environment is not passed to cron;
as Thomas noted, cron logging leaves a lot to be desired and accessing it through docker requires a docker-based solution.
There are cron-specific issues and docker-specific issues in the list, but in any case they have to be addressed to get cron working.
To that end, my current working solution to the problem posed in the question is as follows:
Create a docker volume to which all scripts running under cron will write:
# Dockerfile for test-logs
# BUILD-USING: docker build -t test-logs .
# RUN-USING: docker run -d -v /t-logs --name t-logs test-logs
# INSPECT-USING: docker run -t -i --volumes-from t-logs ubuntu:latest /bin/bash
FROM stackbrew/busybox:latest
# Create logs volume
VOLUME /var/log
CMD ["true"]
The script that will run under cron is test.py:
#!/usr/bin/env python
# python script which needs an environment variable and runs as a cron job
import datetime
import os
test_environ = os.environ["TEST_ENV"]
print "Cron job has run at %s with environment variable '%s'" %(datetime.datetime.now(), test_environ)
In order to pass the environment variable to the script that I want to run under cron, follow Thomas' suggestion and put a crontab fragment for each script (or group of scripts) that needs a docker environment variable into /etc/cron.d, with a placeholder XXXXXXX which must be set.
# placed in /etc/cron.d
# TEST_ENV is a docker environment variable that the script test.py needs
TEST_ENV=XXXXXXX
#
* * * * * root python /test.py >> /var/log/test.log
Instead of calling cron directly, wrap cron in a python script that does two things: 1. reads the environment variable from the docker environment and sets it in the crontab fragment, and 2. runs cron.
#!/usr/bin/env python
# run-cron.py
# sets environment variable crontab fragments and runs cron
import os
from subprocess import call
import fileinput
# read docker environment variables and set them in the appropriate crontab fragment
environment_variable = os.environ["TEST_ENV"]
for line in fileinput.input("/etc/cron.d/cron-python", inplace=1):
    print line.replace("XXXXXXX", environment_variable)
args = ["cron","-f", "-L 15"]
call(args)
The Dockerfile for the container in which the cron jobs run is as follows:
# BUILD-USING: docker build -t test-cron .
# RUN-USING docker run --detach=true --volumes-from t-logs --name t-cron test-cron
FROM debian:wheezy
#
# Set correct environment variables.
ENV HOME /root
ENV TEST_ENV test-value
RUN apt-get update && apt-get install -y software-properties-common python-software-properties && apt-get update
# Install Python Setuptools
RUN apt-get install -y python cron
RUN apt-get purge -y python-software-properties software-properties-common && apt-get clean -y && apt-get autoclean -y && apt-get autoremove -y && rm -rf /var/lib/apt/lists/* /tmp/* /var/tmp/*
ADD cron-python /etc/cron.d/
ADD test.py /
ADD run-cron.py /
RUN chmod a+x test.py run-cron.py
# Set the time zone to the local time zone
RUN echo "America/New_York" > /etc/timezone && dpkg-reconfigure --frontend noninteractive tzdata
CMD ["/run-cron.py"]
Finally, create the containers and run them:
Create the log volume (test-logs) container: docker build -t test-logs .
Run log volume: docker run -d -v /t-logs --name t-logs test-logs
Create the cron container: docker build -t test-cron .
Run the cron container: docker run --detach=true --volumes-from t-logs --name t-cron test-cron
To inspect the log files of the scripts running under cron: docker run -t -i --volumes-from t-logs ubuntu:latest /bin/bash. The log files are in /var/log.
Here is a complement to rosksw's answer.
There is no need to do some string replacement in the crontab file in order to pass environment variables to the cron jobs.
It is simpler to store the environment variables in a file when running the container, then load them from this file at each cron execution. I found the tip here.
In the dockerfile:
CMD mkdir -p /data/log && env > /root/env.txt && crond -n
In the crontab file:
* * * * * root env - `cat /root/env.txt` my-script.sh
Adding crontab fragments in /etc/cron.d/ instead of using root's crontab might be preferable.
This would:
Let you add additional cron jobs by adding them to that folder.
Save you a few layers.
Emulate how Debian distros do it for their own packages.
Observe that the format of those files is a bit different from a crontab entry. Here's a sample from the Debian php package:
# /etc/cron.d/php5: crontab fragment for php5
# This purges session files older than X, where X is defined in seconds
# as the largest value of session.gc_maxlifetime from all your php.ini
# files, or 24 minutes if not defined. See /usr/lib/php5/maxlifetime
# Look for and purge old sessions every 30 minutes
09,39 * * * * root [ -x /usr/lib/php5/maxlifetime ] && [ -x /usr/lib/php5/sessionclean ] && [ -d /var/lib/php5 ] && /usr/lib/php5/sessionclean /var/lib/php5 $(/usr/lib/php5/maxlifetime)
Overall, from experience, running cron in a container does work very well (besides cron logging leaving a lot to be desired).
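One common trick for that logging pain (not part of the answer above, just a hedged suggestion) is to redirect each job's output to the stdout of PID 1, so it shows up in docker logs:
# /etc/cron.d fragment: send the job's output to the container's main stdout
* * * * * root /test.py >> /proc/1/fd/1 2>&1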
Here's an alternative solution.
in Dockerfile
ADD docker/cron/my-cron /etc/cron.d/my-cron
RUN chmod 0644 /etc/cron.d/my-cron
ADD docker/cron/entrypoint.sh /etc/entrypoint.sh
ENTRYPOINT ["/bin/sh", "/etc/entrypoint.sh"]
in entrypoint.sh
#!/usr/bin/env bash
printenv | cat - /etc/cron.d/my-cron > ~/my-cron.tmp \
&& mv ~/my-cron.tmp /etc/cron.d/my-cron
cron -f
We are using the solution below. It supports both the docker logs functionality and keeping the cron process tied to the container's PID 1 (with the tail -f workarounds provided above, if cron crashes, docker will not follow the restart policy):
cron.sh:
#!/usr/bin/env bash
printenv | cat - /etc/cron.d/cron-jobs > ~/crontab.tmp \
&& mv ~/crontab.tmp /etc/cron.d/cron-jobs
chmod 644 /etc/cron.d/cron-jobs
tail -f /var/log/cron.log &
cron -f
Dockerfile:
RUN apt-get install --no-install-recommends -y -q cron
ADD cron.sh /usr/bin/cron.sh
RUN chmod +x /usr/bin/cron.sh
ADD ./crontab /etc/cron.d/cron-jobs
RUN chmod 0644 /etc/cron.d/cron-jobs
RUN touch /var/log/cron.log
ENTRYPOINT ["/bin/sh", "/usr/bin/cron.sh"]
crontab:
* * * * * root <cmd> >> /var/log/cron.log 2>&1
And please don't forget to add the required trailing newline in your crontab file.
Here is my checklist for debugging cron python scripts in docker:
Make sure you run the cron command somewhere. Cron doesn't start automatically. You can run it from a Dockerfile using RUN or CMD, or add it to a startup script for the container. If you use CMD, you may consider using the cron -f flag, which keeps cron in the foreground and won't let the container die. However, I prefer using tail -f on logfiles.
Store environment variables in /etc/environment. Run this from a bash start script: printenv > /etc/environment. This is an absolute must if you use environment variables inside of python scripts. Cron doesn't know anything about the environment variables by default, but it can read them from /etc/environment.
Test Cron by using the following config:
* * * * * echo "Cron works" >>/home/code/test.log
* * * * * bash -c "/usr/local/bin/python3 /home/code/test.py >>/home/code/test.log 2>/home/code/test.log"
The python test file should contain some print statements or something else that displays that the script is running. 2>/home/code/test.log will also log errors. Otherwise, you won't see errors at all and will continue guessing.
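A minimal test.py for that purpose could look like this (hypothetical contents; TEST_VAR is just an example variable name):
#!/usr/local/bin/python3
import datetime
import os

# print something distinctive plus an environment variable, so you can confirm
# both that cron fired and that /etc/environment was actually read
print("Cron works, TEST_VAR={}, time={}".format(
    os.environ.get("TEST_VAR", "<unset>"), datetime.datetime.now()))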
Once done, go to the container, using docker exec -it <container_name> bash and check:
That crontab config is in place using crontab -l
Monitor logs using tail -f /home/code/test.log
I have spent hours and days figuring out all of those problems. I hope this helps someone to avoid this.
Don't mix crond and your base image. Prefer to use a native solution for your language (schedule or crython, as Anton said), or decouple it. By decoupling it I mean keeping things separated, so you don't have to maintain an image whose only purpose is to be the fusion of python and crond.
You can use Tasker, a task runner that has cron (scheduler) support, to solve it, if you want to keep things decoupled.
Here is a docker-compose.yml file that will run some tasks for you:
version: "2"
services:
tasker:
image: strm/tasker
volumes:
- "/var/run/docker.sock:/var/run/docker.sock"
environment:
configuration: |
logging:
level:
ROOT: WARN
org.springframework.web: WARN
sh.strm: DEBUG
schedule:
- every: minute
task: helloFromPython
tasks:
docker:
- name: helloFromPython
image: python:3-slim
script:
- python -c 'print("Hello world from python")'
Just run docker-compose up, and see it working. Here is the Tasker repo with the full documentation:
http://github.com/opsxcq/tasker
Single Container Method
You may run crond within the same container that is doing something closely related, using a base image that handles PID 1 well, like phusion/baseimage.
Specialized Container Method
Maybe cleaner would be to have another container linked to it that just runs crond. For example:
Dockerfile
FROM busybox
ADD crontab /var/spool/cron/crontabs/www-data
CMD crond -f
crontab
* * * * * echo $USER
Then run:
$ docker build -t cron .
$ docker run --rm --link something cron
Note: in this case it'll run the job as www-data. You cannot just mount the crontab file as a volume because it needs to be owned by root, with write access for root only, else crond will run nothing. Also, you'll have to run crond as root.
Another possibility is to use Crython. Crython allows you to regularly schedule a python function from within a single python script / process. It even understands cron syntax:
@crython.job(expr='0 0 0 * * 0 *')
def job():
    print "Hello world"
Using crython avoids the various headaches of running crond inside a docker container - your job is now a single process that wakes up when it needs to, which fits better into the docker execution model. But it has the downside of putting the scheduling inside your program, which isn't always desirable. Still, it might be handy in some use cases.
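A slightly fuller sketch (assuming crython's start()/join() entry points; adjust to the version you have installed):
import crython

# fire at second 0 of every minute, using crython's 7-field cron-style expression
@crython.job(expr='0 * * * * * *')
def job():
    print("Hello world")

if __name__ == '__main__':
    crython.start()  # start the scheduler
    crython.join()   # block so the container's main process stays alive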