/bin/sh: 1: crond: not found when cron already installed - python

I have a dockerfile
FROM python:3.9.12-bullseye
COPY . .
RUN apt-get update -y
RUN apt-get install cron -y
RUN crontab crontab
CMD python task.py && crond -f
And a crontab
* * * * * python /task.py
I keep running into the error /bin/sh: 1: crond: not found when I run the container; docker build itself works fine.
Does anyone know why this happens? If I use python:3.6.12-alpine everything works fine, but with python:3.9.12-bullseye I keep getting that error.

If you have a look at the cron.service unit file shipped with Debian-based distributions, you will see the following:
[Unit]
Description=Regular background program processing daemon
Documentation=man:cron(8)
After=remote-fs.target nss-user-lookup.target
[Service]
EnvironmentFile=-/etc/default/cron
ExecStart=/usr/sbin/cron -f $EXTRA_OPTS
IgnoreSIGPIPE=false
KillMode=process
Restart=on-failure
[Install]
WantedBy=multi-user.target
From ExecStart=/usr/sbin/cron -f $EXTRA_OPTS you can see that, unlike on Alpine, the daemon binary on Debian-based systems is cron, not crond.
(PS: python:3.9.12-bullseye is based on Debian, while python:3.6.12-alpine is based on Alpine.)
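So on the bullseye image the fix is simply to invoke cron instead of crond. A minimal sketch of the corrected Dockerfile from the question (everything else unchanged):
FROM python:3.9.12-bullseye
COPY . .
RUN apt-get update -y
RUN apt-get install cron -y
RUN crontab crontab
# Debian ships the daemon as /usr/sbin/cron, so call cron, not crond
CMD python task.py && cron -f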

Related

Docker container not compiling .tex file

I'm trying to build a PDF artifact from a .qmd file via Docker and a CI/CD pipeline. The qmd file contains a tiny bit of Python; the rest is only markdown.
I'll give my code first, then what I tried.
My Dockerfile:
FROM rocker/tidyverse
WORKDIR /app
COPY . /app
ENV DEBIAN_FRONTEND=noninteractive
RUN apt-get update
RUN apt-get install -y libxt-dev
RUN R -e "install.packages('rmarkdown')"
RUN R -e "install.packages('roxygen2')"
RUN R -e "install.packages('XQuartz')"
RUN R -e "install.packages('quarto')"
RUN R -e "install.packages('xaringan')"
RUN R -e "install.packages('knitr')"
RUN R -e "install.packages('tinytex')"
RUN R -e "tinytex::install_tinytex()"
RUN R -e "install.packages('reticulate')"
RUN R -e "reticulate::install_miniconda()"
CMD ["Rscript", "-e", "rmarkdown::render('el-portfolio.qmd', output_format = 'pdf_document')"]
I'm working through a CI/CD pipeline on GitLab. I didn't set up the runners myself, so if it's an issue with them, I'll have to take it up with my supervisor. The image builds fine, and when the container runs, it throws this error:
Error: LaTeX failed to compile el-portfolio.tex. See https://yihui.org/tinytex/r/#debugging for debugging tips. See el-portfolio.log for more info.
Execution halted
So, what I tried:
I made sure everything was installed, especially the tinytex package, since that seems to be contributing to the error. I really don't know what else to do to solve this - does anyone have any ideas? Thanks so much!
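The generic "LaTeX failed to compile" message hides the real problem, which is in el-portfolio.log. One way to surface it in the CI output is to dump that log when rendering fails; a hedged sketch of an adjusted CMD (file names taken from the question, and the || cat part is purely a debugging aid):
CMD Rscript -e "rmarkdown::render('el-portfolio.qmd', output_format = 'pdf_document')" || cat el-portfolio.log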

Python cron job inside docker container

So I want to run cronjob.py inside a container every week. I tried some of the code from other posts, but I can't manage to get cron to work.
This is my Dockerfile:
FROM python:3.7
RUN apt-get update && apt-get install cron vim systemctl -y
WORKDIR /app
RUN pip3 install flask
COPY harborscan /etc/cron.d/harborscan
COPY cronjob.py cronjob.py
RUN systemctl start cron
RUN chmod 0644 /etc/cron.d/harborscan
RUN crontab /etc/cron.d/harborscan
RUN touch /app/cron.log
CMD ["cron", "-f"]
And the crontab file:
* * * * * /bin/touch /app/test >/app/cron.log 2>&1
So, just for testing purposes, I set the schedule to every minute with a simple touch command.
The cron service is running:
root@6b6c056d0039:/app# systemctl status cron
cron.service - Regular background program processing daemon
Loaded: loaded (/lib/systemd/system/cron.service, enabled)
Active: inactive (dead)
crontab -l:
root@6b6c056d0039:/app# crontab -l
* * * * * /bin/touch /app/test >/app/cron.log 2>&1
The test file is not created. I also tried to just run /usr/bin/python3 app/cronjob.py, but the API calls it is supposed to make aren't being executed.

No output on python docker cron job

I've been trying to set up a simple cron job in a docker container. When I build and run the job there are no errors, but nothing is logged. If I go into the container, I can see the crontab (i.e. crontab -l) and run the file (python test.py). I don't know what I'm missing to see the scheduled job run - whether it is running and my log location is wrong, or whether it isn't running at all.
Dockerfile
FROM python:3.8.8
RUN apt-get update && apt-get -y install cron vim
WORKDIR /app
COPY crontab /etc/cron.d/crontab
COPY test.py /app/test.py
RUN chmod 0644 /etc/cron.d/crontab
RUN /usr/bin/crontab /etc/cron.d/crontab
# run crond as main process of container
CMD ["cron", "-f"]
crontab
* * * * * python /app/test.py > /proc/1/fd/1 2>/proc/1/fd/2
# new line
test.py
print('test')
I reproduced your setup with a slight adjustment: I replaced your python script with a simple echo >> /crontest.txt. It works as expected: the file is created inside the docker container, and one line is appended each minute.
This leaves you only with the question of why python /app/test.py > /proc/1/fd/1 2>/proc/1/fd/2 behaves differently than echo >> /crontest.txt.
Dockerfile:
FROM python:3.8.8
RUN apt-get update && apt-get -y install cron vim
WORKDIR /app
COPY crontab /etc/cron.d/crontab
RUN chmod 0644 /etc/cron.d/crontab
RUN /usr/bin/crontab /etc/cron.d/crontab
# run crond as main process of container
CMD ["cron", "-f"]
crontab:
* * * * * echo "test" >> /crontest.txt
# new line
Build the docker image: docker build --tag crontest .
Run the docker container: docker run -d --name crontest-container crontest
Exec into the running container: docker exec -it crontest-container bash
Output the content of /crontest.txt: cat /crontest.txt (you can also run top and see that cron is running)
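Since the echo job works, the difference most likely lies in how cron invokes python: cron's PATH is minimal (typically /usr/bin:/bin), so a bare python may not resolve in the official python images, which install the interpreter under /usr/local/bin. A hedged variant of the crontab entry that uses the full interpreter path and unbuffered output (verify the path with which python in your container):
* * * * * /usr/local/bin/python -u /app/test.py > /proc/1/fd/1 2>/proc/1/fd/2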

running two separate Python-Flask api via a single docker file

I want to run two different python api files running on different ports via a single container.
My docker file looks like:
FROM python:3.7-slim-buster
RUN apt-get update && apt-get install -y libgtk2.0-dev cmake libpoppler-cpp-dev poppler-utils tesseract-ocr
WORKDIR /app
COPY requirements.txt ./
RUN pip install --no-cache-dir -r requirements.txt
COPY . .
RUN chmod a+x run.sh
CMD ["./run.sh"]
And the .sh file looks like:
#!/bin/bash
exec python3 /app1/numberToWord.py &
exec python3 /app2/dollarToGbp.py &
While the docker build succeeds without any error, docker run doesn't throw any error either and simply exits. I'm curious to know where it is failing; any insight is highly appreciated.
Try using nohup to ignore the hangup signal.
Example:
#!/bin/bash
nohup python3 /app1/numberToWord.py &
nohup python3 /app2/dollarToGbp.py &
When you run a container, you can specify the command to run. You can run two containers from the same image with different commands:
docker run -p 8000:8000 --name spelling -d image python3 /app1/numberToWord.py
docker run -p 8001:8000 --name currency -d image python3 /app2/dollarToGbp.py
The important point here is that each container runs a single process, in the foreground.
If your main command script makes it to the end and exits, the container exits too. The script you show only launches background processes and then completes, and when it completes the container exits. There needs to be some foreground process to keep the container running, and the easiest way to do this is to launch the main server you need as the only process in the container.
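If both APIs really must share one container, the script has to end with a foreground process. A minimal sketch of run.sh under that assumption (paths as in the question):
#!/bin/bash
# start the first API in the background
python3 /app1/numberToWord.py &
# keep the second API in the foreground so the container stays alive
exec python3 /app2/dollarToGbp.py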

Running cron python jobs within docker

I would like to run a python cron job inside of a docker container in detached mode. My set-up is below:
My python script is test.py
#!/usr/bin/env python
import datetime
print "Cron job has run at %s" %datetime.datetime.now()
My cron file is my-crontab
* * * * * /test.py > /dev/console
and my Dockerfile is
FROM ubuntu:latest
RUN apt-get update && apt-get install -y software-properties-common python-software-properties && apt-get update
RUN apt-get install -y python cron
ADD my-crontab /
ADD test.py /
RUN chmod a+x test.py
RUN crontab /my-crontab
ENTRYPOINT cron -f
What are the potential problems with this approach? Are there other approaches and what are their pros and cons?
Several issues that I faced while trying to get a cron job running in a docker container were:
time in the docker container is in UTC not local time;
the docker environment is not passed to cron;
as Thomas noted, cron logging leaves a lot to be desired and accessing it through docker requires a docker-based solution.
There are both cron-specific and docker-specific issues in the list, but in any case they have to be addressed to get cron working.
To that end, my current working solution to the problem posed in the question is as follows:
Create a docker volume to which all scripts running under cron will write:
# Dockerfile for test-logs
# BUILD-USING: docker build -t test-logs .
# RUN-USING: docker run -d -v /t-logs --name t-logs test-logs
# INSPECT-USING: docker run -t -i --volumes-from t-logs ubuntu:latest /bin/bash
FROM stackbrew/busybox:latest
# Create logs volume
VOLUME /var/log
CMD ["true"]
The script that will run under cron is test.py:
#!/usr/bin/env python
# python script which needs an environment variable and runs as a cron job
import datetime
import os
test_environ = os.environ["TEST_ENV"]
print "Cron job has run at %s with environment variable '%s'" %(datetime.datetime.now(), test_environ)
In order to pass the environment variable to the script that I want to run under cron, follow Thomas' suggestion and put a crontab fragment for each script (or group of scripts) that needs a docker environment variable into /etc/cron.d, with a placeholder XXXXXXX which must be set.
# placed in /etc/cron.d
# TEST_ENV is a docker environment variable that the script test.py needs
TEST_ENV=XXXXXXX
#
* * * * * root python /test.py >> /var/log/test.log
Instead of calling cron directly, wrap cron in a python script that first reads the environment variables from the docker environment and substitutes them into the crontab fragment, then runs cron.
#!/usr/bin/env python
# run-cron.py
# sets environment variables in crontab fragments and runs cron
import os
import sys
import fileinput
from subprocess import call
# read the docker environment variable and set it in the appropriate crontab fragment
environment_variable = os.environ["TEST_ENV"]
for line in fileinput.input("/etc/cron.d/cron-python", inplace=1):
    # each line already ends in a newline, so write it back without adding another
    sys.stdout.write(line.replace("XXXXXXX", environment_variable))
# run cron in the foreground at log level 15
args = ["cron", "-f", "-L", "15"]
call(args)
The Dockerfile for the container in which the cron jobs run is as follows:
# BUILD-USING: docker build -t test-cron .
# RUN-USING: docker run --detach=true --volumes-from t-logs --name t-cron test-cron
FROM debian:wheezy
#
# Set correct environment variables.
ENV HOME /root
ENV TEST_ENV test-value
RUN apt-get update && apt-get install -y software-properties-common python-software-properties && apt-get update
# Install Python Setuptools
RUN apt-get install -y python cron
RUN apt-get purge -y python-software-properties software-properties-common && apt-get clean -y && apt-get autoclean -y && apt-get autoremove -y && rm -rf /var/lib/apt/lists/* /tmp/* /var/tmp/*
ADD cron-python /etc/cron.d/
ADD test.py /
ADD run-cron.py /
RUN chmod a+x test.py run-cron.py
# Set the time zone to the local time zone
RUN echo "America/New_York" > /etc/timezone && dpkg-reconfigure --frontend noninteractive tzdata
CMD ["/run-cron.py"]
Finally, create the containers and run them:
Create the log volume (test-logs) container: docker build -t test-logs .
Run log volume: docker run -d -v /t-logs --name t-logs test-logs
Create the cron container: docker build -t test-cron .
Run the cron container: docker run --detach=true --volumes-from t-logs --name t-cron test-cron
To inspect the log files of the scripts running under cron: docker run -t -i --volumes-from t-logs ubuntu:latest /bin/bash. The log files are in /var/log.
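Since run-cron.py reads TEST_ENV when the container starts, the default baked in with ENV can also be overridden at run time; for example (the value here is hypothetical):
docker run -e TEST_ENV=other-value --detach=true --volumes-from t-logs --name t-cron test-cron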
Here is a complement to rosksw's answer.
There is no need to do string replacement in the crontab file in order to pass environment variables to the cron jobs.
It is simpler to store the environment variables in a file when running the container, then load them from this file at each cron execution. I found the tip here.
In the Dockerfile:
CMD mkdir -p /data/log && env > /root/env.txt && crond -n
In the crontab file:
* * * * * root env - `cat /root/env.txt` my-script.sh
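Here env - starts from an empty environment and the backquoted cat splices the saved variables back in as VAR=VALUE arguments. For example, if /root/env.txt contained the single line FOO=bar (a hypothetical variable), the job would effectively run as:
env - FOO=bar my-script.sh
Note that this simple splice assumes none of the values contain spaces or newlines.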
Adding crontab fragments in /etc/cron.d/ instead of using root's crontab might be preferable.
This would:
Let you add additional cron jobs by adding them to that folder.
Save you a few layers.
Emulate how Debian distros do it for their own packages.
Observe that the format of those files is a bit different from a crontab entry. Here's a sample from the Debian php package:
# /etc/cron.d/php5: crontab fragment for php5
# This purges session files older than X, where X is defined in seconds
# as the largest value of session.gc_maxlifetime from all your php.ini
# files, or 24 minutes if not defined. See /usr/lib/php5/maxlifetime
# Look for and purge old sessions every 30 minutes
09,39 * * * * root [ -x /usr/lib/php5/maxlifetime ] && [ -x /usr/lib/php5/sessionclean ] && [ -d /var/lib/php5 ] && /usr/lib/php5/sessionclean /var/lib/php5 $(/usr/lib/php5/maxlifetime)
Overall, from experience, running cron in a container does work very well (besides cron logging leaving a lot to be desired).
Here's an alternative solution.
In the Dockerfile:
ADD docker/cron/my-cron /etc/cron.d/my-cron
RUN chmod 0644 /etc/cron.d/my-cron
ADD docker/cron/entrypoint.sh /etc/entrypoint.sh
ENTRYPOINT ["/bin/sh", "/etc/entrypoint.sh"]
In entrypoint.sh:
#!/usr/bin/env bash
printenv | cat - /etc/cron.d/my-cron > ~/my-cron.tmp \
&& mv ~/my-cron.tmp /etc/cron.d/my-cron
cron -f
We are using the solution below. It supports both the docker logs functionality and keeping cron tied to the container's main process (with the tail -f workarounds provided above, if cron crashes, docker will not follow its restart policy):
cron.sh:
#!/usr/bin/env bash
printenv | cat - /etc/cron.d/cron-jobs > ~/crontab.tmp \
&& mv ~/crontab.tmp /etc/cron.d/cron-jobs
chmod 644 /etc/cron.d/cron-jobs
tail -f /var/log/cron.log &
cron -f
Dockerfile:
RUN apt-get install --no-install-recommends -y -q cron
ADD cron.sh /usr/bin/cron.sh
RUN chmod +x /usr/bin/cron.sh
ADD ./crontab /etc/cron.d/cron-jobs
RUN chmod 0644 /etc/cron.d/cron-jobs
RUN touch /var/log/cron.log
ENTRYPOINT ["/bin/sh", "/usr/bin/cron.sh"]
crontab:
* * * * * root <cmd> >> /var/log/cron.log 2>&1
And please don't forget to add the trailing newline in your crontab file; without it, cron silently ignores the last entry.
Here is my checklist for debugging cron python scripts in docker:
Make sure you run the cron command somewhere; cron doesn't start automatically. You can run it from a Dockerfile using RUN or CMD, or add it to a startup script for the container. If you use CMD, consider the cron -f flag, which keeps cron in the foreground and won't let the container die. However, I prefer using tail -f on log files.
Store environment variables in /etc/environment. Run this from a bash start script: printenv > /etc/environment. This is an absolute must if you use environment variables inside your python scripts. Cron knows nothing about environment variables by default, but it can read them from /etc/environment.
Test Cron by using the following config:
* * * * * echo "Cron works" >>/home/code/test.log
* * * * * bash -c "/usr/local/bin/python3 /home/code/test.py >>/home/code/test.log 2>&1"
The python test file should contain some print statements or something else that shows the script is running. Redirecting stderr with 2>&1 also logs errors; otherwise, you won't see errors at all and will continue guessing.
Once done, go into the container using docker exec -it <container_name> bash and check:
That crontab config is in place using crontab -l
Monitor logs using tail -f /home/code/test.log
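Putting points 1 and 2 together, a minimal container start script might look like this (a sketch under the assumptions above, not a drop-in solution):
#!/usr/bin/env bash
# make the container's environment visible to cron jobs (point 2)
printenv > /etc/environment
# run cron in the foreground so it stays the container's main process (point 1)
cron -f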
I have spent hours and days figuring out all of those problems. I hope this helps someone avoid them.
Don't mix crond into your base image. Prefer a native solution for your language (schedule or crython, as Anton said), or decouple it. By decoupling I mean keeping things separated, so you don't have to maintain an image whose only purpose is to be the fusion of python and crond.
You can use Tasker, a task runner with cron (scheduler) support, to solve this if you want to keep things decoupled.
Here is a docker-compose.yml file that will run some tasks for you:
version: "2"
services:
tasker:
image: strm/tasker
volumes:
- "/var/run/docker.sock:/var/run/docker.sock"
environment:
configuration: |
logging:
level:
ROOT: WARN
org.springframework.web: WARN
sh.strm: DEBUG
schedule:
- every: minute
task: helloFromPython
tasks:
docker:
- name: helloFromPython
image: python:3-slim
script:
- python -c 'print("Hello world from python")'
Just run docker-compose up and see it working. Here is the Tasker repo with the full documentation:
http://github.com/opsxcq/tasker
Single Container Method
You may run crond within the same container that is doing something closely related, using a base image that handles PID 1 well, such as phusion/baseimage.
Specialized Container Method
It may be cleaner to have a separate container, linked to the first, that just runs crond. For example:
Dockerfile
FROM busybox
ADD crontab /var/spool/cron/crontabs/www-data
CMD crond -f
crontab
* * * * * echo $USER
Then run:
$ docker build -t cron .
$ docker run --rm --link something cron
Note: in this case the job will run as www-data. You cannot simply mount the crontab file as a volume, because it needs to be owned by root with write access for root only; otherwise crond will run nothing. Also, you'll have to run crond itself as root.
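If the file does end up with the wrong owner or mode (for example after copying it in some other way), this can be corrected at build time; a hedged sketch for the busybox image above:
# make sure the crontab is root-owned and writable by root only,
# otherwise crond will run nothing
RUN chown root:root /var/spool/cron/crontabs/www-data
RUN chmod 0644 /var/spool/cron/crontabs/www-data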
Another possibility is to use Crython. Crython allows you to regularly schedule a python function from within a single python script / process. It even understands cron syntax:
@crython.job(expr='0 0 0 * * 0 *')
def job():
    print("Hello world")
Using crython avoids the various headaches of running crond inside a docker container - your job is now a single process that wakes up when it needs to, which fits better into the docker execution model. But it has the downside of putting the scheduling inside your program, which isn't always desirable. Still, it might be handy in some use cases.
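Note that for the decorated job to fire, the crython scheduler has to be started; with current versions this looks roughly like the following (a sketch; check the crython documentation for the exact API):
import crython

@crython.job(expr='0 0 0 * * 0 *')
def job():
    print("Hello world")

if __name__ == '__main__':
    # start the scheduler; crython runs jobs from a background thread
    crython.start()
    crython.join()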
