So I want to run cronjob.py inside a container every week. I tried some of the code from other posts, but I can't manage to get cron to work.
This is my Dockerfile:
FROM python:3.7
RUN apt-get update && apt-get install cron vim systemctl -y
WORKDIR /app
RUN pip3 install flask
COPY harborscan /etc/cron.d/harborscan
COPY cronjob.py cronjob.py
RUN systemctl start cron
RUN chmod 0644 /etc/cron.d/harborscan
RUN crontab /etc/cron.d/harborscan
RUN touch /app/cron.log
CMD ["cron", "-f"]
And the crontab file:
* * * * * /bin/touch /app/test >/app/cron.log 2>&1
So just for testing purposes I left the schedule to each minute and a simple touch command.
The cron service is running:
root@6b6c056d0039:/app# systemctl status cron
cron.service - Regular background program processing daemon
Loaded: loaded (/lib/systemd/system/cron.service, enabled)
Active: inactive (dead)
crontab -l :
root@6b6c056d0039:/app# crontab -l
* * * * * /bin/touch /app/test >/app/cron.log 2>&1
The test file is not created. I also tried to just run /usr/bin/python3 app/cronjob.py, but the API calls it is supposed to make aren't being executed.
I am trying to run a cron job and a Flask server in the same Docker image. My Dockerfile looks like this:
FROM python:alpine3.9
RUN apk --update add build-base libffi-dev openssl-dev curl
ADD crontab.txt /crontab.txt
ADD cronscript.sh /cronscript.sh
COPY entry.sh /entry.sh
RUN chmod 755 /cronscript.sh /entry.sh
RUN /usr/bin/crontab /crontab.txt
RUN /bin/sh /entry.sh
WORKDIR /
RUN mkdir app
COPY requirements.txt /app
RUN pip install -r /app/requirements.txt
COPY *.py /app/
WORKDIR /app
RUN chmod +x main.py
ENTRYPOINT [ "python" ]
CMD [ "main.py" ]
cronscript.sh
#!/bin/sh
echo "Starting cronjob"
curl http://localhost:4000/health
echo "Completed cronjob"
crontab.txt
*/15 * * * * /cronscript.sh >> /var/log/cron.log
entry.sh
#!/bin/sh
# start cron
/usr/sbin/crond -l 2
This does not start the cron job; I see no data in /var/log/cron.log after waiting for an hour.
Whereas if I move the line /bin/sh /entry.sh into my Flask main function, everything works.
flask main function
if __name__ == "__main__":
os.system("/bin/sh /entry.sh")
app.run(host='0.0.0.0', port=4000, debug=True)
Is there a way I can get this cron job working from the Dockerfile itself and not from Python code? The current approach feels like a hack.
You have to add an extra blank line at the end of the crontab file to make it valid for cron. Please check https://blog.knoldus.com/running-a-cron-job-in-docker-container/
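One way to guard against that is a tiny check script. This is only a sketch: the file name harborscan is taken from the question above, and the fix simply appends the newline if the file lacks it.

```python
def ends_with_newline(path):
    """Return True if the file ends with the newline cron requires."""
    with open(path, "rb") as f:
        return f.read().endswith(b"\n")

# Write a fragment without a trailing newline, then append one if missing.
with open("harborscan", "wb") as f:
    f.write(b"* * * * * /bin/touch /app/test >/app/cron.log 2>&1")

if not ends_with_newline("harborscan"):
    with open("harborscan", "ab") as f:
        f.write(b"\n")
```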
I built a python docker image with cron installed. I want to schedule a job. Cron is running:
/etc/init.d/cron status
[ ok ] cron is running.
And it's configured:
crontab -l
*/30 * * * * root /web/sync_html.sh >> /var/log/cron.log 2>&1
I even added a just "date" command scheduling for each minute:
* * * * * root date >> /var/log/cron.log
When I run these commands manually they work, but the scheduling is not working. Any ideas?
EDIT: Dockerfile:
FROM python:3
# Copy local files to container
COPY www /web
RUN chmod -R 777 /web
RUN pip3 install -r /web/requirments.txt
# install crontab
RUN apt-get update && apt-get install -y cron
# Add crontab file in the cron directory
ADD crontab /etc/cron.d/hello-cron
# Give execution rights on the cron job
RUN chmod 0644 /etc/cron.d/hello-cron
# Apply cron job
RUN crontab /etc/cron.d/hello-cron
# Create the log file to be able to run tail
RUN touch /var/log/cron.log
EXPOSE 8081
CMD [ "python", "/web/app.py", "-p", "8081" ]
You are not starting the cron process. Change the CMD to the one below, but keep in mind it will start cron in the background:
CMD ["sh","-c","/etc/init.d/cron start && python /web/app.py -p 8081"]
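If that combined CMD string grows, the same two steps can be moved into a small launcher. This is only a sketch: launcher.py is a made-up name, and the paths are the ones from the question.

```python
# launcher.py -- hypothetical wrapper doing the same two steps as the CMD:
# start the cron service in the background, then run the app in the foreground.
import subprocess
import sys

def launch(app_cmd, cron_cmd=("/etc/init.d/cron", "start")):
    subprocess.run(list(cron_cmd), check=True)       # start cron; it daemonizes
    return subprocess.run(list(app_cmd)).returncode  # app stays in foreground

if __name__ == "__main__":
    sys.exit(launch(["python", "/web/app.py", "-p", "8081"]))
```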
I have three files:
crontab: the list of cron jobs to execute
entrypoint.sh
#!/usr/bin/env bash
service cron start
python
and a Dockerfile that basically installs the pip requirements and registers the crontab from a certain folder.
My question is:
Why, in my docker container, does cron start once and then exit? I have no way to find its logs, as it only shows: Starting periodic command scheduler: cron.
I wish to know the proper way of setting it up and how to keep it running.
Thanks
There are multiple ways to run a cronjob inside a docker container. Here is an example of a cron setup on Debian using cron job files.
Create a crontab file
* * * * * root echo "Test my-cron" > /proc/1/fd/1 2>/proc/1/fd/2
my-cron - This file contains the interval, user and the command that should be scheduled. In this example we want to print the text Test my-cron every minute.
Create a docker entrypoint
#!/usr/bin/env bash
cron # start cron service
tail -f /dev/null # keep container running
entrypoint.sh - This is the entrypoint which gets executed when the container gets started.
Create a Dockerfile
FROM debian:latest
RUN apt-get update \
&& apt-get install -y cron
# Cron file
ADD ./my-cron /etc/cron.d/my-cron
RUN chmod 0644 /etc/cron.d/my-cron
# Entrypoint
ADD ./entrypoint.sh /usr/bin/entrypoint.sh
RUN chmod +x /usr/bin/entrypoint.sh
CMD [ "entrypoint.sh" ]
Run
Build the image
docker build . --tag my-cron
Start a container
docker run -d my-cron:latest
Check the console output
docker logs <YOUR_CONTAINER_ID> --follow
I would like to run a python cron job inside of a docker container in detached mode. My set-up is below:
My python script is test.py
#!/usr/bin/env python
import datetime
print "Cron job has run at %s" %datetime.datetime.now()
My cron file is my-crontab
* * * * * /test.py > /dev/console
and my Dockerfile is
FROM ubuntu:latest
RUN apt-get update && apt-get install -y software-properties-common python-software-properties && apt-get update
RUN apt-get install -y python cron
ADD my-crontab /
ADD test.py /
RUN chmod a+x test.py
RUN crontab /my-crontab
ENTRYPOINT cron -f
What are the potential problems with this approach? Are there other approaches and what are their pros and cons?
Several issues that I faced while trying to get a cron job running in a docker container were:
time in the docker container is in UTC not local time;
the docker environment is not passed to cron;
as Thomas noted, cron logging leaves a lot to be desired and accessing it through docker requires a docker-based solution.
There are cron-specific issues and docker-specific issues in the list, but in either case they have to be addressed to get cron working.
To that end, my current working solution to the problem posed in the question is as follows:
Create a docker volume to which all scripts running under cron will write:
# Dockerfile for test-logs
# BUILD-USING: docker build -t test-logs .
# RUN-USING: docker run -d -v /t-logs --name t-logs test-logs
# INSPECT-USING: docker run -t -i --volumes-from t-logs ubuntu:latest /bin/bash
FROM stackbrew/busybox:latest
# Create logs volume
VOLUME /var/log
CMD ["true"]
The script that will run under cron is test.py:
#!/usr/bin/env python
# python script which needs an environment variable and runs as a cron job
import datetime
import os
test_environ = os.environ["TEST_ENV"]
print "Cron job has run at %s with environment variable '%s'" %(datetime.datetime.now(), test_environ)
In order to pass the environment variable to the script that I want to run under cron, follow Thomas' suggestion and put a crontab fragment for each script (or group of scripts) that needs a docker environment variable in /etc/cron.d, with a placeholder XXXXXXX which must be set.
# placed in /etc/cron.d
# TEST_ENV is a docker environment variable that the script test.py needs
TEST_ENV=XXXXXXX
#
* * * * * root python /test.py >> /var/log/test.log
Instead of calling cron directly, wrap cron in a python script that does two things: 1. reads the environment variable from the docker environment and sets it in the crontab fragment; 2. runs cron.
#!/usr/bin/env python
# run-cron.py
# sets environment variable crontab fragments and runs cron
import os
from subprocess import call
import fileinput
# read docker environment variables and set them in the appropriate crontab fragment
environment_variable = os.environ["TEST_ENV"]
for line in fileinput.input("/etc/cron.d/cron-python", inplace=1):
    print line.replace("XXXXXXX", environment_variable)
args = ["cron", "-f", "-L", "15"]
call(args)
The Dockerfile for the container in which the cron jobs run is as follows:
# BUILD-USING: docker build -t test-cron .
# RUN-USING docker run --detach=true --volumes-from t-logs --name t-cron test-cron
FROM debian:wheezy
#
# Set correct environment variables.
ENV HOME /root
ENV TEST_ENV test-value
RUN apt-get update && apt-get install -y software-properties-common python-software-properties && apt-get update
# Install Python Setuptools
RUN apt-get install -y python cron
RUN apt-get purge -y python-software-properties software-properties-common && apt-get clean -y && apt-get autoclean -y && apt-get autoremove -y && rm -rf /var/lib/apt/lists/* /tmp/* /var/tmp/*
ADD cron-python /etc/cron.d/
ADD test.py /
ADD run-cron.py /
RUN chmod a+x test.py run-cron.py
# Set the time zone to the local time zone
RUN echo "America/New_York" > /etc/timezone && dpkg-reconfigure --frontend noninteractive tzdata
CMD ["/run-cron.py"]
Finally, create the containers and run them:
Create the log volume (test-logs) container: docker build -t test-logs .
Run log volume: docker run -d -v /t-logs --name t-logs test-logs
Create the cron container: docker build -t test-cron .
Run the cron container: docker run --detach=true --volumes-from t-logs --name t-cron test-cron
To inspect the log files of the scripts running under cron: docker run -t -i --volumes-from t-logs ubuntu:latest /bin/bash. The log files are in /var/log.
Here is a complement to rosksw's answer.
There is no need to do some string replacement in the crontab file in order to pass environment variables to the cron jobs.
It is simpler to store the environment variables in a file when running the container, then load them from this file at each cron execution. I found the tip here.
In the dockerfile:
CMD mkdir -p /data/log && env > /root/env.txt && crond -n
In the crontab file:
* * * * * root env - `cat /root/env.txt` my-script.sh
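To see what these two lines do, the round-trip can be sketched in plain Python. MY_SETTING and the local env.txt path are made up for the illustration; in the container the file lives at /root/env.txt.

```python
import os

def dump_env(path):
    # Rough equivalent of `env > /root/env.txt` at container start-up.
    with open(path, "w") as f:
        for key, value in os.environ.items():
            if "\n" not in key and "\n" not in value:  # skip multi-line values
                f.write(f"{key}={value}\n")

def load_env(path):
    # Rough equivalent of the `env - `cat /root/env.txt`` prefix per cron run.
    env = {}
    with open(path) as f:
        for line in f:
            key, _, value = line.rstrip("\n").partition("=")
            env[key] = value
    return env

os.environ["MY_SETTING"] = "hello"  # MY_SETTING is a made-up example variable
dump_env("env.txt")
restored = load_env("env.txt")
```

Note that partition("=") splits on the first "=", so values containing "=" survive the round-trip.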
Adding crontab fragments in /etc/cron.d/ instead of using root's crontab might be preferable.
This would:
Let you add additional cron jobs by adding them to that folder.
Save you a few layers.
Emulate how Debian distros do it for their own packages.
Observe that the format of those files is a bit different from a crontab entry. Here's a sample from the Debian php package:
# /etc/cron.d/php5: crontab fragment for php5
# This purges session files older than X, where X is defined in seconds
# as the largest value of session.gc_maxlifetime from all your php.ini
# files, or 24 minutes if not defined. See /usr/lib/php5/maxlifetime
# Look for and purge old sessions every 30 minutes
09,39 * * * * root [ -x /usr/lib/php5/maxlifetime ] && [ -x /usr/lib/php5/sessionclean ] && [ -d /var/lib/php5 ] && /usr/lib/php5/sessionclean /var/lib/php5 $(/usr/lib/php5/maxlifetime)
Overall, from experience, running cron in a container does work very well (besides cron logging leaving a lot to be desired).
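The extra user field is the part that trips people up; a small parser (just a sketch, fed a shortened line from the php5 sample above) makes the six-column layout explicit:

```python
def split_cron_d_line(line):
    """Split an /etc/cron.d line into its five time fields, user, and command."""
    parts = line.split(None, 6)
    if len(parts) < 7:
        raise ValueError("cron.d lines need 5 time fields, a user, and a command")
    return parts[:5], parts[5], parts[6]

fields, user, command = split_cron_d_line(
    "09,39 * * * * root /usr/lib/php5/sessionclean /var/lib/php5"
)
```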
Here's an alternative solution.
in Dockerfile
ADD docker/cron/my-cron /etc/cron.d/my-cron
RUN chmod 0644 /etc/cron.d/my-cron
ADD docker/cron/entrypoint.sh /etc/entrypoint.sh
ENTRYPOINT ["/bin/sh", "/etc/entrypoint.sh"]
in entrypoint.sh
#!/usr/bin/env bash
printenv | cat - /etc/cron.d/my-cron > ~/my-cron.tmp \
&& mv ~/my-cron.tmp /etc/cron.d/my-cron
cron -f
We are using the solution below. It supports both the docker logs functionality and keeping the cron process in the foreground of the entrypoint (with the tail -f workarounds provided above, if cron crashes, docker will not follow the restart policy):
cron.sh:
#!/usr/bin/env bash
printenv | cat - /etc/cron.d/cron-jobs > ~/crontab.tmp \
&& mv ~/crontab.tmp /etc/cron.d/cron-jobs
chmod 644 /etc/cron.d/cron-jobs
tail -f /var/log/cron.log &
cron -f
Dockerfile:
RUN apt-get install --no-install-recommends -y -q cron
ADD cron.sh /usr/bin/cron.sh
RUN chmod +x /usr/bin/cron.sh
ADD ./crontab /etc/cron.d/cron-jobs
RUN chmod 0644 /etc/cron.d/cron-jobs
RUN touch /var/log/cron.log
ENTRYPOINT ["/bin/sh", "/usr/bin/cron.sh"]
crontab:
* * * * * root <cmd> >> /var/log/cron.log 2>&1
And please don't forget to add the required trailing newline in your crontab file.
Here is my checklist for debugging cron python scripts in docker:
Make sure you run the cron command somewhere. Cron doesn't start automatically. You can run it from a Dockerfile using RUN or CMD, or add it to a startup script for the container. In case you use CMD, consider the cron -f flag, which keeps cron in the foreground and won't let the container die. However, I prefer using tail -f on logfiles.
Store environment variables in /etc/environment. Run this from a bash start script: printenv > /etc/environment. This is an absolute must if you use environment variables inside of python scripts. Cron doesn't know anything about the environment variables by default. But it can read them from /etc/environment.
Test Cron by using the following config:
* * * * * echo "Cron works" >>/home/code/test.log
* * * * * bash -c "/usr/local/bin/python3 /home/code/test.py >>/home/code/test.log 2>/home/code/test.log"
The python test file should contain some print statements or something else that displays that the script is running. 2>/home/code/test.log will also log errors. Otherwise, you won't see errors at all and will continue guessing.
Once done, go into the container using docker exec -it <container_name> bash and check:
That crontab config is in place using crontab -l
Monitor logs using tail -f /home/code/test.log
I have spent hours and days figuring out all of those problems. I hope this helps someone avoid them.
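As a concrete example of item 3, a minimal test.py could look like this (the exact message format is just an illustration):

```python
#!/usr/local/bin/python3
# test.py -- prints a timestamped heartbeat so each cron run shows up in the log
import datetime
import os

def heartbeat():
    line = "test.py ran at %s (HOME=%s)" % (
        datetime.datetime.now(), os.environ.get("HOME"))
    print(line)
    return line

if __name__ == "__main__":
    heartbeat()
```

Because the line includes an environment variable, an empty HOME in the output immediately reveals the missing-environment problem from item 2.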
Don't mix crond into your base image. Prefer a native solution for your language (schedule or crython, as Anton said), or decouple it. By decoupling I mean keeping things separate, so you don't have to maintain an image that is just the fusion of python and crond.
You can use Tasker, a task runner that has cron (a scheduler) support, to solve it, if you want keep things decoupled.
Here is a docker-compose.yml file that will run some tasks for you:
version: "2"
services:
  tasker:
    image: strm/tasker
    volumes:
      - "/var/run/docker.sock:/var/run/docker.sock"
    environment:
      configuration: |
        logging:
          level:
            ROOT: WARN
            org.springframework.web: WARN
            sh.strm: DEBUG
        schedule:
          - every: minute
            task: helloFromPython
        tasks:
          docker:
            - name: helloFromPython
              image: python:3-slim
              script:
                - python -c 'print("Hello world from python")'
Just run docker-compose up, and see it working. Here is the Tasker repo with the full documentation:
http://github.com/opsxcq/tasker
Single Container Method
You may run crond within the same container that is doing something closely related, using a base image that handles PID 1 well, like phusion/baseimage.
Specialized Container Method
It may be cleaner to have another container linked to it that just runs crond. For example:
Dockerfile
FROM busybox
ADD crontab /var/spool/cron/crontabs/www-data
CMD crond -f
crontab
* * * * * echo $USER
Then run:
$ docker build -t cron .
$ docker run --rm --link something cron
Note: in this case it'll run the job as www-data. You cannot just mount the crontab file as a volume, because it needs to be owned by root with write access only for root, else crond will run nothing. Also, you'll have to run crond as root.
Another possibility is to use Crython. Crython allows you to regularly schedule a python function from within a single python script / process. It even understands cron syntax:
import crython

@crython.job(expr='0 0 0 * * 0 *')
def job():
    print("Hello world")
Using crython avoids the various headaches of running crond inside a docker container - your job is now a single process that wakes up when it needs to, which fits better into the docker execution model. But it has the downside of putting the scheduling inside your program, which isn't always desirable. Still, it might be handy in some use cases.
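If pulling in crython is not an option, the same single-process idea can be sketched with only the standard library. This is interval-based rather than cron-expression-based, and schedule_every is a made-up helper, not part of any library:

```python
import threading
import time

def schedule_every(interval_seconds, func, iterations=None):
    """Call func every interval_seconds in a background thread.

    iterations limits the number of calls (None means run forever).
    """
    def loop():
        count = 0
        while iterations is None or count < iterations:
            func()
            count += 1
            time.sleep(interval_seconds)
    thread = threading.Thread(target=loop, daemon=True)
    thread.start()
    return thread

calls = []
worker = schedule_every(0.01, lambda: calls.append(time.time()), iterations=3)
worker.join()  # finite run for demonstration; omit join for a long-lived job
```

Like crython, this keeps the scheduler inside your program, so the container runs exactly one process, which fits the docker execution model.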