This is how I configured my crontab (using crontab -e):
* * * * * /home/jeff/Desktop/scripts/job_pull_queue.sh >> /home/jeff/Desktop/scripts/log.txt
This is the content of /home/jeff/Desktop/scripts/job_pull_queue.sh
#!/bin/bash
echo "Running job_pull_queue.sh # $(date)"
cd /home/jeff/Documents/code/some_project
echo $(printenv)
/home/jeff/miniconda3/bin/python -m util.main
Now the problem: when I run ./job_pull_queue.sh in a terminal, it works, but I can tell from the log file that cron never executes that last line, /home/jeff/miniconda3/bin/python -m util.main (I can see the output of the previous echo in the log file, but nothing from the Python script itself). What happened, and how do I fix it?
Update: here's the output of printenv when the script is run by cron
SHELL=/bin/sh PWD=/home/jeff/Documents/code/some_project LOGNAME=jeff HOME=/home/jeff LANG=en_US.UTF-8 SHLVL=0 PATH=/usr/bin:/bin OLDPWD=/home/jeff _=/usr/bin/printenv
Ok...
My Python script reads several environment variables from my user profile, and of course those variables don't exist when cron runs the script...
And since I had no detection or logging in place, I didn't know the variables were missing.
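One way to fix it, as a sketch: source the profile that defines those variables inside the wrapper script, and capture stderr so that missing-variable errors show up in the log (the assumption here is that the variables are exported from ~/.profile; adjust to wherever they actually live).
#!/bin/bash
echo "Running job_pull_queue.sh # $(date)"
# assumption: the variables the Python script needs are exported in ~/.profile
. /home/jeff/.profile
cd /home/jeff/Documents/code/some_project
# 2>&1 sends Python's error output to the same log as stdout
/home/jeff/miniconda3/bin/python -m util.main 2>&1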
Related
I have a Python script which uses environment variables. This script works exactly as planned when run directly; however, I would like to run it as a cron job every minute for the time being.
Currently in my cron.d directory I have a file called scrapers containing:
* * * * * root /usr/bin/python3.5 /code/scraper.py
This runs the Python script but the script fails, as in the script I use two environment variables.
I read I should add SHELL=/bin/bash to the cron file, so I did, but this didn't help.
SHELL=/bin/bash
* * * * * root /usr/bin/python3.5 /code/scraper.py
Then I read
In the crontab, before your command, add . $HOME/.profile.
SHELL=/bin/bash
* * * * * . $HOME/.profile; root /usr/bin/python3.5 /code/scraper.py
but this caused the cron job to stop running altogether. What is the best way of 'sending' the env variables to the cron job?
Instead of sourcing the whole ~/.profile, I'd move the variables that need to be shared between your cron jobs and the account that owns the profile into a separate file, then source that file both from ~/.profile and from the cron job.
The last attempt you show in the question is not properly formatted. The user id should come right after the scheduling information, but you've added the sourcing of the profile before the user id, which cannot work.
Here's an example setup that I've tested here:
*/1 * * * * someuser . /tmp/t10/setenv && /usr/bin/python /tmp/t10/test.py
I've set it to execute every minute for testing purposes. Replace someuser with something that makes sense. The /tmp/t10/setenv script I used had this:
export FOO=foovalue
export BAR=barvalue
The /tmp/t10/test.py file had this:
import os
print os.environ["FOO"], os.environ["BAR"]
My cron emails me the output of the scripts it runs. I got an email with this output:
foovalue barvalue
You can set the env variable inline:
* * * * * root ENV_VAR=VALUE /usr/bin/python3.5 /code/scraper.py
Another way is to use honcho, which lets you pass a file of environment variables:
honcho -e /path/to/.env run /code/scraper.py
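The .env file honcho reads is just KEY=value lines, one per line; for the two unnamed variables in the question it could look like this (the names and values are placeholders):
A=1
B=2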
You can specify your two environment variables like this:
* * * * * root env A=1 B=2 /usr/bin/python3.5 /code/scraper.py
env is a system program that runs a specified program with additional variables:
$ env A=1 B=2 /bin/sh -c 'echo $A$B' # or just 'sh': would search in $PATH
12
You can set the variable at the top of your crontab and keep it out of version control. Let's say the environment variable causing you difficulty is DJANGO_SECRET_KEY="FOOBAR_1241243124312341234":
crontab
DJANGO_SECRET_KEY="FOOBAR_1241243124312341234"
SCRIPT_NAME = my_cool_script
20 21 * * 1-5 bash ~/git_repo/cronjobs/$SCRIPT_NAME.sh 2>&1 | tee ~/git_repo/cronjobs/logs/$SCRIPT_NAME.log
my_cool_script.sh
#!/usr/bin/env bash
~/anaconda3/envs/django/bin/python ~/git_repo/django_project/manage.py run_command
This has worked well for me when the environment variables in question need to be kept secret and sourcing the existing .bashrc does not play nicely for whatever reason.
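Variables assigned at the top of the crontab are placed in the environment of every job in that crontab, so the script can read them the usual way; a quick check, assuming the DJANGO_SECRET_KEY assignment above:
#!/usr/bin/env bash
# prints the value inherited from the crontab-level assignment
echo "DJANGO_SECRET_KEY is: $DJANGO_SECRET_KEY"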
This is an approach I like: write a wrapper script that sets up the environment, then executes the command and parameters that were passed to it.
set_env_to_process.sh
#!/usr/bin/env bash
echo "TEST_VAR before export is: <$TEST_VAR>"
export TEST_VAR=/opt/loca/netcdf
echo "TEST_VAR after export is: <$TEST_VAR>"
export PATH=$PATH:/usr/bin/python3.5
export PYTHONPATH=$PYTHONPATH:/my/installed/pythonpath
# execute the command and parameters passed to this script
if [ $# -eq 0 ]; then
echo "No command to execute"
else
echo "Executing command with its parameters: $@"
eval "$@"
fi
usage
/usr/bin/python3.5 /code/scraper.py are taken as input by set_env_to_process.sh
set_env_to_process.sh sets the correct environment for the script to run
It can be used from the command line, cron, sudo, or ssh to set up the environment
* * * * * root set_env_to_process.sh /usr/bin/python3.5 /code/scraper.py
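The same wrapper works outside cron too; for example, invoked by hand (assuming it is executable and referenced by its full or relative path):
./set_env_to_process.sh /usr/bin/python3.5 /code/scraper.py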
I've been trying to debug this for a while now and I feel like I've tried everything.
Code is slightly modified with *** for company reasons.
The following executes as expected when run from a session as my local user.
/var/www/****/***/run.sh path_to_my/script.py 2>&1 >> /var/www/****/***/test.log
Where run.sh is just a wrapper for running Python in a virtualenv:
#!/usr/bin/env bash
wd=$(dirname $0)
source ${wd}/virtualenv/bin/activate
python ${wd}/$1
I have placed a print statement inside of the Python main to show that it's being executed.
if __name__ == "__main__":
print("I got in here...")
When running the command as my local user, the log will contain this printed statement. However, when run in cron as:
*/30 * * * * /var/www/****/***/run.sh path_to_my/script.py 2>&1 >> /var/www/****/***/test.log
I do not get any printed statement, nor do I receive any error output from the 2>&1.
My permissions are 755 on both the .sh and .py scripts.
Everything works as expected except when run via cron.
Am I missing something? Does cron not use .bashrc for the crontab user?
First, make sure your local cron job is running by putting the following in the crontab file and checking whether /tmp/env.output gets written after a minute or two:
* * * * * env > /tmp/env.output
Second, make sure the user running the crontab has permission to write to the /var/www/****/***/test.log file.
Third, try changing your script to:
wd=$(dirname $0)
cd $wd
source activate
python ${wd}/$1
Edited: Anders was able to figure out the answer himself by adding PYTHONPATH to the cron environment: export PYTHONPATH="${PYTHONPATH}:${wd}"
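Putting that together with the original wrapper, the revised run.sh could look like this sketch (it keeps the original virtualenv layout and just adds the PYTHONPATH export):
#!/usr/bin/env bash
wd=$(dirname "$0")
# make modules that live next to the script importable when launched from cron
export PYTHONPATH="${PYTHONPATH}:${wd}"
source "${wd}/virtualenv/bin/activate"
python "${wd}/$1"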
I have a bash script that I'm using to execute a python file with a specific version of Python (3.6). The Bash script is currently located on my desktop (/home/pi/Desktop/go.sh)
#!/bin/bash
python3.6 /home/pi/scriptDir/myScript.py
Here is my crontab entry, when I do crontab -l (note, I've deleted my other jobs)
* * * * * bash /home/pi/Desktop/go.sh # JOB_ID_3
When I run this file using the command line or from the GUI it executes properly.
When I have crontab do it, nothing happens.
Both my Python file and the bash script are executable (chmod +x).
Is there something obvious I'm missing?
** My Python script does depend on other files in the same script directory; could that be the issue?
Here's what got it working for me: I was not using the full path to my Python install. Unless you log the bash file's output, there's no indication that you have an issue.
This is my bash file now. The echos were just to confirm that the bash file was actually running.
#!/bin/bash
echo started
/home/pi/Python-3.6.0/python /home/pi/myScriptFolder/myScript.py
echo finished
To break down the line executing the script:
/home/pi/Python-3.6.0/python is where Python 3.6.0 is installed on my Pi; it could be different for you. /home/pi/myScriptFolder/myScript.py is the script I want to run.
And here is my cron statement:
*/15 * * * * bash /home/pi/Desktop/go.sh > /home/pi/Desktop/clog.log 2>&1
Breaking down this line:
*/15 * * * * is the cron schedule, in this case every 15 minutes. bash /home/pi/Desktop/go.sh runs the bash file at that path. > /home/pi/Desktop/clog.log 2>&1 writes both stdout and stderr to a log file named clog.log so you can see what's going on.
The key here was not just logging the go.sh output, but adding 2>&1 to the redirection so errors are captured too. Before I did that there was no indication of a problem; afterwards the Python errors showed up in the log file.
Cron jobs are surprisingly tricky. Aside from the working directory (which you'd have to set somewhere), you also need to handle environment setup ($PATH, for example).
Start by redirecting your shell script's standard output and error to a log file so you can get feedback.
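For example, a crontab entry along these lines captures both stdout and stderr (the schedule and paths are placeholders):
* * * * * /path/to/run.sh >> /tmp/run.log 2>&1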
I tried running a Python script as a cron job but I get the following error:
cron[44405]: no path for address 0x10ff7a000
in the output of grep cron /var/log/system.log.
When I ran the script directly, without cron, it worked:
/usr/bin/python /Users/anuj/Desktop/message.py
I tried adding the cron job using sudo crontab. This is the cron entry:
*/1 11-17 * * 1-7 /usr/bin/python /Users/anuj/Desktop/message.py
Both paths are correct for root mode and user mode as I am running cron with sudo.
Try creating a file like message.sh that runs your .py file:
#!/bin/sh
python path/to/python_script.py
Make this file executable with chmod a+x message.sh, then schedule it:
*/1 11-17 * * 1-7 path/to/message.sh 2>&1