Cronjob through docker container not running properly - python

I'm trying to set up a cron job to fire a python script inside a docker container, without success.
On my host I have set up a cron job that should run every day, like this:
30 10 * * * root docker exec -it container bash -c '/usr/bin/python myscript.py'
Running the command by itself works fine, so nothing is wrong with it, and syslog shows the cron job being fired. But the script is not running.
Has anyone come across this before, or have any clues as to why the script is not running from the cron job?

A blank line is required at the end of the file for it to be a valid cron file:
30 10 * * * root docker exec -it container bash -c '/usr/bin/python myscript.py'
# An empty line is required at the end of this file for a valid cron file.
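For reference, a complete system cron file might look like the sketch below. Two details matter here: cron provides no terminal, so the `-it` flags of `docker exec` are unneeded and can even make the command fail when no TTY is available, and the file must end with a newline:

```
# /etc/cron.d/myscript
30 10 * * * root docker exec container bash -c '/usr/bin/python myscript.py'
# An empty line is required at the end of this file for a valid cron file.

```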

Related

schedule a cron job to ping a server and run python script

I am trying to automate a bash script in Ubuntu. The script pings a server and then runs a python script if the packet is not received. The python script sends me a notification when the ping is not returned. The script works when I run it manually, but not when I schedule it as a cron job.
The bash script is named ping.sh.
#!/bin/bash
pingString=$(ping -c 1 google.com) # google is just an example; for my script I am using a server that intentionally does not return the packet.
msgReceived="1 received, 0% packet loss"
msgLost="0 received, 100% packet loss"
if echo "${pingString}" | grep -q "${msgLost}"
then
python3 ping.py
fi
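The substring check that drives ping.sh can be exercised without any network access; this sketch feeds the same grep test a canned ping summary line (the sample text is made up for illustration):

```shell
#!/bin/bash
# Simulate the check in ping.sh using a canned ping summary line.
msgLost="0 received, 100% packet loss"
sample="1 packets transmitted, 0 received, 100% packet loss, time 0ms"
if echo "${sample}" | grep -q "${msgLost}"; then
    echo "packet lost"   # this is the branch where ping.sh calls the python script
fi
```

grep -q exits 0 on a match, which is all the if needs; no output or command substitution is involved.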
This is how I setup the cron job:
crontab -u username -e
* * * * * /bin/sh /home/username/Documents/ping.sh
I am confused because I set up another dummy cron job for testing and it works fine. Example below:
* * * * * /bin/sh /home/username/Documents/test.sh
test.sh
#! /bin/bash
touch /home/username/Documents/ping_server/text.txt
The text.txt file is successfully created every minute.
Thanks for the suggestions. My problem was solved by:
- adding the full path of the python script "ping.py" in the bash script
- adding environment variables to the crontab
I didn't know that environment variables set in .bashrc are not loaded when running under cron. In Ubuntu it is possible to declare environment variables before the scheduled jobs, just as you would in .bashrc:
crontab -u username -e
ENV_VAR1=variable1
* * * * * /bin/sh /home/username/Documents/ping.sh
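For fix 1, the call inside ping.sh becomes an absolute path; the exact directory is an assumption, since the post only shows other files living under the user's Documents folder:

```
python3 /home/username/Documents/ping.py
```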

How to configure crontab to run Django command?

I run a Debian 10 system and have the following shell file named "update.sh":
#!/bin/bash
cd home/user/djangoprojet
source /env/bin/activate
python manage.py update
I run as the root user and have run "chmod +x update.sh".
When I run "home/user/djangoprojet/update.sh", executing the script works perfectly.
I then used "crontab -e" to run the script every minute:
* * * * * home/user/djangoprojet/update.sh > testcron.log
However, the script is not executed. When I run "grep CRON /var/log/syslog", I get the following output, which indicates that cron does run:
Jan 30 15:08:01 vServer CRON[22036]: (root) CMD (home/user/djangoprojet/update.sh > testcron.log)
Jan 30 15:08:01 vServer CRON[22035]: (CRON) info (No MTA installed, discarding output)
The "testcron.log" file, located in the root directory, is empty - although the script would generate output if it ran.
Somewhere on Stack Exchange I also found this command:
/bin/sh -c "(export PATH=/usr/bin:/bin; home/user/djangoprojet/update.sh </dev/null)"
which works perfectly.
How can I configure crontab correctly such that my script runs? Thanks!
I now found the solution: I need to use "/home/" instead of "home/" everywhere.
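With the leading slash added, update.sh becomes the following sketch (whether the "/env/bin/activate" path should also change is not clear from the post, so that line is left as it was):

```
#!/bin/bash
cd /home/user/djangoprojet
source /env/bin/activate
python manage.py update
```

The crontab entry likewise becomes: * * * * * /home/user/djangoprojet/update.sh > testcron.log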

Python Cron Job in Docker Container

I have three files, which are:
crontab : the list of cron jobs to be executed
entrypoint.sh
#!/usr/bin/env bash
service cron start
python
and a Dockerfile, basically to install pip and to run the crontab in a certain folder.
My question is:
Why does the cron in my docker container start just once and then exit? I have no way to find its logs, as it only shows: Starting periodic command scheduler: cron.
I wish to know what's the proper way of setting it up and how to keep it running.
Thanks
There are multiple ways to run a cron job inside a docker container. Here is an example of a cron setup on Debian using cron job files.
Create a crontab file
* * * * * root echo "Test my-cron" > /proc/1/fd/1 2>/proc/1/fd/2
my-cron - This file contains the interval, user and the command that should be scheduled. In this example we want to print the text Test my-cron every minute.
Create a docker entrypoint
#!/usr/bin/env bash
cron # start cron service
tail -f /dev/null # keep container running
entrypoint.sh - This is the entrypoint which gets executed when the container gets started.
Create a Dockerfile
FROM debian:latest
RUN apt-get update \
&& apt-get install -y cron
# Cron file
ADD ./my-cron /etc/cron.d/my-cron
RUN chmod 0644 /etc/cron.d/my-cron
# Entrypoint
ADD ./entrypoint.sh /usr/bin/entrypoint.sh
RUN chmod +x /usr/bin/entrypoint.sh
CMD [ "entrypoint.sh" ]
Run
Build the image
docker build . --tag my-cron
Start a container
docker run -d my-cron:latest
Check the console output
docker logs <YOUR_CONTAINER_ID> --follow

Environment Variables when running from cron Ubuntu

I have a few Scrapy python scripts which use AWS CloudWatch for logging via the watchtower module. This runs in a docker container. Everything works absolutely fine when run manually. I am now looking to use cron jobs to schedule each scraper, and this is when it breaks. As it is in a docker container, I cannot find out where the cron logs are kept.
The entry point to the docker container is:
CMD cron -L15 && tail -f /var/log/cron.log
However, the file /var/log/cron.log is empty.
The cron.d/spiders file is very basic at the minute as I test:
* * * * * root /usr/local/bin/scrapy runspider /spiders/myspider.py
If I remove the logging using CloudWatch and watchtower the scraper runs as expected.
https://pypi.python.org/pypi/watchtower
If I run the command from within the docker container
/usr/local/bin/scrapy runspider /spiders/myspider.py
with the logging back in place, it works as well. I believe the issue is with the environment variables. Watchtower looks in the environment variables for
AWS_ACCESS_KEY_ID=
AWS_SECRET_ACCESS_KEY=
AWS_DEFAULT_REGION=
So the issue is that, when run by cron, the environment variables are not available. I tried running
env >> /etc/environment
but this didn't work.
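One approach that often works (a sketch, not from the original post): Vixie-style cron files accept plain NAME=value assignments above the job lines, so the variables watchtower needs can be declared directly in the cron.d file. The values below are placeholders:

```
# /etc/cron.d/spiders
AWS_ACCESS_KEY_ID=<your-key-id>
AWS_SECRET_ACCESS_KEY=<your-secret-key>
AWS_DEFAULT_REGION=<your-region>
* * * * * root /usr/local/bin/scrapy runspider /spiders/myspider.py
```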

set max time a docker container can be open

I'm opening a docker container and running an inline bash script. The bash script runs python code, but I'm not always sure what that code will be.
Because it's arbitrary code, I'd like a kill switch that closes this container within 30 seconds. Is there a way to do that within the code that I'm running:
docker run my/image sh -c '$(curl -ss -o python_file.py https://www.example.com); \
python python_file.py'
Basically, before running the python file I'd like to start a timer; if that timer hits 30 seconds, I run docker kill on this specific container.
I've tried the following but it's not working.
timeout 30 docker run my/image sh -c '$(curl -ss -o python_file.py https://www.example.com); \
python python_file.py'
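A likely reason the host-side timeout fails: the SIGTERM it sends goes to the docker client, and even when proxied into the container it lands on sh, which does not forward it to the running python process. One workaround (a sketch, assuming the image ships GNU coreutils) is to apply timeout inside the container, directly around the python process. The exit-status convention is easy to check locally:

```shell
# GNU timeout kills its command and exits with status 124 when the limit is hit.
status=0
timeout 1 sleep 5 || status=$?
echo "exit: $status"   # prints "exit: 124"

# Applied inside the container (sketch; assumes coreutils in the image):
# docker run my/image sh -c 'curl -sS -o python_file.py https://www.example.com; \
#     timeout 30 python python_file.py'
```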
