Cron job with Django application - Python

I would like to use a cron task to delete media files if a condition is met.
Users generate export files that are stored in the media folder. To clean up these export files in the background, I have a cron task that loops over each file and checks whether its expiry delay has passed.
I used the django-cron library.
Example:
File in the media folder: Final_Products___2019-04-01_17:50:43.487845.xlsx
My cron task looks like this:
class MyCronExportJob(CronJobBase):
    """ Cron job which removes expired files at 18:30 """
    RUN_AT_TIMES = ['18:30']
    schedule = Schedule(run_at_times=RUN_AT_TIMES)
    code = 'app.export_cron_job'

    def do(self):
        now = datetime.datetime.now()
        media_folder = os.listdir(os.path.join(settings.MEDIA_ROOT, 'exports'))

        for files in media_folder:
            file = os.path.splitext(files.split(settings.EXPORT_TITLE_SEPARATOR, 1)[1])[0]
            if datetime.datetime.strptime(file, '%Y-%m-%d_%H:%M:%S.%f') + timedelta(minutes=settings.EXPORT_TOKEN_DELAY) < now:
                os.remove(os.path.join(os.path.join(settings.MEDIA_ROOT, 'exports'), files))

# settings.EXPORT_TOKEN_DELAY = 60 * 24
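For reference, here is how the check in do() plays out on the example filename (a minimal standalone sketch; I'm assuming settings.EXPORT_TITLE_SEPARATOR is '___' and EXPORT_TOKEN_DELAY is 60 * 24 minutes as in the comment above):

import datetime
import os
from datetime import timedelta

EXPORT_TITLE_SEPARATOR = '___'  # assumption: the separator used in the example filename
EXPORT_TOKEN_DELAY = 60 * 24    # minutes, as in the comment above

filename = 'Final_Products___2019-04-01_17:50:43.487845.xlsx'

# Same parsing as in do(): split off the title, then strip the .xlsx extension.
stem = os.path.splitext(filename.split(EXPORT_TITLE_SEPARATOR, 1)[1])[0]
# stem == '2019-04-01_17:50:43.487845'
created = datetime.datetime.strptime(stem, '%Y-%m-%d_%H:%M:%S.%f')

now = datetime.datetime.now()
print(created + timedelta(minutes=EXPORT_TOKEN_DELAY) < now)  # True once 24 hours have passed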
I edited my crontab (crontab -e):
30 18 * * * source /home/user/Bureau/Projets/app/venv/bin/activate.csh && python /home/user/Bureau/Projets/app/src/manage.py runcrons --force app.cron.MyCronExportJob
Then I ran service cron restart.
But nothing has changed; my file is still there. However, it should be removed, because its timestamp plus settings.EXPORT_TOKEN_DELAY is earlier than now.
I'm using Ubuntu for local development and FreeBSD as the production server environment.
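(Side note: cron runs its commands with /bin/sh by default, and activate.csh is written for csh, so the source ... && python ... line above may not behave the way it does in my interactive shell. A variant I could try, assuming the venv layout above, is to skip activation and call the venv's interpreter directly:)

30 18 * * * /home/user/Bureau/Projets/app/venv/bin/python /home/user/Bureau/Projets/app/src/manage.py runcrons --force app.cron.MyCronExportJob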
EDIT:
I tried a few things, but the cron job still doesn't run.
1) * * * * * /bin/date >> /home/user/Bureau/Projets/app/cron_output
==> It works, so crontab works
2) I ran python manage.py runcrons in my console
==> It works
3) I ran this script (cron.sh):
source /home/user/.bashrc
cd /home/user/Bureau/Projets/app
pyenv activate app
python src/manage.py runcrons --force
deactivate
==> It works
4) I ran this crontab line:
35 10 * * * /home/user/Bureau/Projets/app/utility/cron.sh
==> I restarted the cron service at 10:32 and waited until 10:38: nothing!
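(To see why step 4 fails, it may help to capture the script's output and errors in the crontab line, along the lines of the redirection suggested in the answers further down; the log path is just an example:)

35 10 * * * /home/user/Bureau/Projets/app/utility/cron.sh >> /tmp/cron_debug.log 2>&1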

Related

Can't find dotenv environment variables from cron job

I am running a Python script from a Python library which loads some environment variables from a .env file in the root of the library using dotenv.
This works from the command line, but when I try to run it as a cron job using the following:
* * * * * source ./path_to_venv/activate; python ./path_to_script.py
I get a KeyError because it can't find the environment variable.
Any ideas why this isn't working?
Many thanks for any help!
I'm using crontab as well to execute my Node.js project. I have to explicitly state the path of my .env file like so:
require('dotenv').config({ path: '/var/www/html/myproject/.env' });
In python-dotenv, I believe it can be done similarly by using:
# OR, explicitly providing path to '.env'
from pathlib import Path  # Python 3.6+ only
from dotenv import load_dotenv

env_path = Path('.') / '.env'
load_dotenv(dotenv_path=env_path)
Source
I don't know if there's a more elegant solution with this one.
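One variant I can sketch (analogous to the Node.js __dirname approach further down): resolve the .env path relative to the script file instead of the working directory, since cron's working directory is usually not the project root:

from pathlib import Path
from dotenv import load_dotenv

# Look for .env next to this script, independent of cron's working directory.
env_path = Path(__file__).resolve().parent / '.env'
load_dotenv(dotenv_path=env_path)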
I was able to make my script work in cron by adding the environment variables at the top of the crontab.
API_KEY=value
API_KEY_SECRET=value
ACCESS_TOKEN=value=value
ACCESS_TOKEN_SECRET=value
# run on 8hrs interval
0 */8 * * * . $HOME/Coding/python/web-scraper-corona/venv/bin/activate && $HOME/Coding/python/web-scraper-corona/venv/bin/python3 /home/chan-dev/Coding/python/web-scraper-corona/twitter-covid19-bot.py >> /tmp/test.txt 2>&1
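Because cron passes variables defined at the top of the crontab into the job's environment, the script can then read them directly without dotenv (a minimal sketch; the names simply mirror the example above):

import os

# These are set by the crontab lines above.
api_key = os.environ['API_KEY']
api_key_secret = os.environ['API_KEY_SECRET']
access_token = os.environ['ACCESS_TOKEN']
access_token_secret = os.environ['ACCESS_TOKEN_SECRET']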
If you're using dotenv's default config path, it is resolved via path.resolve(process.cwd(), '.env'), which, when your script runs from cron, will not resolve to what you expect.
So, use this example if your .env file is in the same level as your script:
const dotenv = require('dotenv')
dotenv.config({ path: __dirname + '/.env' })
Node.js ES6-way without any external modules:
// env.js
import dotenv from 'dotenv'
const _dirname = new URL('.', import.meta.url).pathname
dotenv.config({ path: _dirname + '.env' })
// yourmodule.js
import _ from './env.js'
// use process.env

Schedule Python script with crontab, shutil.move doesn't work

Hi, I schedule my script with cron.d. Everything works except a function that moves some files from one folder to another.
The function is:
def move_imported_file():
    all_file = get_file()
    for files in all_file:
        # print(files)
        shutil.move("/mnt/test-file/" + files, "/mnt/test-file/imported/" + files)
My cron.d file is this:
10 12 * * * root cd /usr/local/sbin/import-file/ && ./myscript.py
If I run the script manually, the function moves all the files, but if I run the cron.d task, nothing happens.
Is there any way to get a log of what the function is doing?
Thanks
get_file:
def get_file():
    my_file = []
    os.chdir("/mnt/test-file")
    files = glob.glob('*.ics')
    for file in files:
        my_file.append(file)
    # print(my_file)
    return my_file
Cron needs the correct PATH:
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin
0 15 * * * root cd /usr/local/sbin/import-file/ && ./myscript.py
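As for getting a log of what the function is doing, here is a minimal sketch with the standard logging module, reusing get_file() from the question (the log path and format are just examples):

import logging
import shutil

logging.basicConfig(
    filename='/var/log/import-file.log',  # example path; pick one the cron user can write to
    level=logging.INFO,
    format='%(asctime)s %(levelname)s %(message)s',
)

def move_imported_file():
    all_file = get_file()
    for files in all_file:
        try:
            shutil.move("/mnt/test-file/" + files, "/mnt/test-file/imported/" + files)
            logging.info("moved %s", files)
        except OSError:
            logging.exception("failed to move %s", files)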

User crontab and access rights on xubuntu

I have a Python 3.2 script that's supposed to delete a folder after everything is done:
def perforce_backup(source, destination, tmp_location, zip_tmp_loc):
    logger.info('--------------------Perforce Backup--------------------')
    logger.info('--- Check integrity of perforce depot (p4 verify)')
    p4verify(source, 'user', 'password')
    logger.info('--- Create a checkpoint (p4 admin checkpoint)')
    p4checkpoint(source, 'user', 'password')
    logger.info('--- Do the backup locally')
    rsync(source, tmp_location)
    logger.info('--- Zip perforce db and depot locally')
    zipdir(tmp_location, zip_tmp_loc)
    logger.info('--- Remove file from last folder on backup FTP')
    shutil.rmtree(destination.path)
    makedir(destination.path)
    logger.info('--- Move zip to backup FTP')
    cp(zip_tmp_loc.path + '/*', destination.path)
    logger.info('--- Remove tmp_file locally - raw copy and archive')
    shutil.rmtree(tmp_location.path)
    logger.info('--- Remove tmp_file locally - raw copy and archive2')
    shutil.rmtree(zip_tmp_loc.path)
    logger.info('--- Remove tmp_file locally - raw copy and archive3')
When I run the script manually as the "vbackup" user, it works. I defined a task in that user's crontab with this syntax (I run crontab -e as "vbackup" via "su vbackup"):
00 22 * * * python3.2 /opt/valibackup/main.py
When I use the above, the script runs every day at 22:00. The problem is that it seems to run without the needed privileges, and shutil.rmtree() doesn't work, whereas it does when I run the script manually.
I tried the following syntax, which I found here, to make sure it was run with "vbackup" rights, but it doesn't even start.
*/30 * * * * vbackup python3.2 /opt/valibackup/main.py
If I edit root's crontab with "sudo crontab -e" instead, rmtree works, but rsync fails with a Permission denied error.
Any idea?
It sounds like you need to be a privileged user to remove the folders, and need to run rsync as your local user.
Modify your script to work as below and try it.
Just switch to a privileged user before the delete operation and then switch back to your normal user.
I think this answer on Stack Overflow will help you:
switching user in python
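A sketch of that idea, assuming the whole backup runs from root's crontab and only rsync needs to run as the unprivileged user (the user name and the rsync flags are illustrative, not taken from the question's rsync() helper):

import os
import pwd
import subprocess

def demote(uid, gid):
    # Return a callable that drops the child process to the given uid/gid.
    def set_ids():
        os.setgid(gid)  # set the group first, while we still have the privilege
        os.setuid(uid)
    return set_ids

def rsync_as_user(source, destination, username='vbackup'):
    # Run rsync as the unprivileged user; the rest of the script
    # (shutil.rmtree, etc.) keeps root privileges.
    pw = pwd.getpwnam(username)
    subprocess.check_call(
        ['rsync', '-a', source, destination],
        preexec_fn=demote(pw.pw_uid, pw.pw_gid),
    )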

Python script failing from crontab

I've gone through multiple threads, but I still can't seem to find my problem.
I'm building a really simple Twitter bot that I'd like to fire every hour, on the hour with a cron job from a Raspberry Pi. Here's my crontab:
PYTHONPATH=/usr/bin/python
MAILTO=*myemail*
00 * * * * /home/username/directory/my_script.py >> /var/log/cron.log
Then the script:
#! /usr/bin/env python
import sys
from twython import Twython, TwythonError
from pymarkovchain import MarkovChain
#TWITTER ACCESS
apiKey = KEY
apiSecret = SECRET
accessToken = TOKEN
accessKey = KEY
#text to pull
text = open('/home/username/directory/text.txt').read()
#Generate database and frequency table
mc = MarkovChain('/home/username/directory/markov')
mc.generateDatabase(text)
tweet = mc.generateString()
api = Twython(apiKey,apiSecret,accessToken,accessKey)
try:
    api.update_status(status=tweet)
except TwythonError as e:
    print e
The first thing I checked was that all of my referenced files use absolute paths. Then I checked that those paths were correct. I'm really stumped here. Running the script from the command line with the full path works as expected. Any thoughts are appreciated.
After trying the suggestions above and reading countless articles, I learned that cron has to run as root, not as the user. I checked the logs and saw that the user calling the script was the owner of the file, not root. So, running chmod a+x my_script.py took care of it.
Thanks for all the suggestions - especially those getting the errors to the correct log file.
To debug better, you might want to redirect stderr:
00 * * * * /home/username/directory/my_script.py >> /tmp/cron.log 2>&1
# or
00 * * * * /home/username/directory/my_script.py >> /tmp/cron.log 2>/tmp/cron-error.log
(I also changed the path there to make sure your cron user has permission to write output.)
Another thing you could try is to run the script with Python explicitly in cron:
00 * * * * python /home/username/directory/my_script.py >> /tmp/cron.log 2>&1
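(Another option, not from this thread but a common complement to the redirection above: have the script itself log exceptions to a file, so failures show up even if cron's output goes nowhere. A minimal sketch replacing the try/except at the end of the script; the log path is illustrative:)

import logging

logging.basicConfig(
    filename='/home/username/directory/bot.log',  # illustrative path
    level=logging.INFO,
    format='%(asctime)s %(levelname)s %(message)s',
)

try:
    api.update_status(status=tweet)
except TwythonError:
    logging.exception("tweet failed")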

Create crontab with python-crontab in Python?

I am trying to add a line to my system user's crontab, from a Python script which uses the package python-crontab. My crontab file does not exist yet, and when I run this code, nothing happens (no errors, no results, no creation of crontab file):
from crontab import CronTab
cron = CronTab(user=True)
# cron = CronTab(user='my_user') I tried this line too without any results
job = cron.new(command='python3 /opt/my_script.py')
job.minute.on(2)
job.hour.on(12)
True == job.is_valid()
Am I missing anything?
You need to save the cronjob, that's all that's missing:
#!/bin/python
from crontab import CronTab
cron = CronTab(user=True)
job = cron.new(command='python3 /opt/my_script.py')
job.minute.on(2)
job.hour.on(12)
cron.write()
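To confirm the entry was actually written, you can list the jobs from Python afterwards (or just run crontab -l); a small usage sketch with the same command as above:

from crontab import CronTab

cron = CronTab(user=True)
for job in cron:
    # Each job prints as a crontab line, e.g. "2 12 * * * python3 /opt/my_script.py"
    print(job)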
