How can I run a Django management command by cron job - python

I am working with a Django app called django-mailbox. Its purpose is to import email messages via POP3 and other protocols and store them in a database. I want to do this at regular intervals via a cron job. The documentation at http://django-mailbox.readthedocs.org/en/latest/topics/polling.html states:
Using a cron job
You can easily consume incoming mail by running the management command named getmail (optionally with an argument of the name of the mailbox you’d like to get the mail for):
python manage.py getmail
Now I can run this at the command line locally and it works, but if this was deployed to an outside server that was only accessible by a URL, how would this command be given?

If you are using a virtualenv, use the Python binary from the virtualenv:
* * * * * /path/to/virtualenv/bin/python /path/to/project/manage.py management_command
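For example (a sketch: the schedule and log path here are assumptions, not from the question), to fetch mail every five minutes and keep the output for debugging:
*/5 * * * * /path/to/virtualenv/bin/python /path/to/project/manage.py getmail >> /var/log/getmail.log 2>&1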

On the server machine:
$ sudo crontab -l
no crontab for root
$ sudo crontab -e
no crontab for root - using an empty one
Select an editor. To change later, run 'select-editor'.
1. /bin/ed
2. /bin/nano <---- easiest
3. /usr/bin/vim.basic
4. /usr/bin/vim.tiny
Choose 1-4 [2]:
Choose your preferred editor, then see http://en.wikipedia.org/wiki/Cron for how to schedule when the command will run. Point the cron entry at a .sh file on your machine, and make sure you give full paths, as this is going to run in the root user's context.
The script the cron will run may look something like:
#!/bin/bash
# change to the Django project directory so manage.py resolves correctly
cd /absolute/path/to/django/project
/usr/bin/python ./manage.py getmail
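With that script saved as, say, /absolute/path/to/getmail.sh and made executable, the root crontab entry could look like this (the schedule and log path are examples):
*/10 * * * * /absolute/path/to/getmail.sh >> /var/log/getmail.log 2>&1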

Related

Docker and Python using Symfony Process

I am using Laradock and want to be able to run a Python script from my Laravel app using Symfony Process. From a root shell inside my container I can run "python3 script_name.py arg1" and it runs just fine. pip list shows all the modules needed. When I run it from inside Laravel, it tells me:
"import pymysql ImportError: No module named 'pymysql'"
I have used a non-docker Laravel app to do this just fine, using:
$script = storage_path().'/app/script.py';
$process = new Process('python3 '. $script." ".session('division'));
What am I missing?
On *nix, make sure that PYTHONPATH is configured correctly for all users, or try setting the full path to python3.
How to check:
First, find out which user your PHP runs as:
php -r "print shell_exec( 'whoami' );" // somebody
Then try running the script as that user:
su somebody -c 'python3 script_name.py arg1'
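If that fails too, you can check (a sketch; "somebody" stands for whatever user the first command printed) which python3 that user gets and whether the module is importable for it:
su somebody -c 'command -v python3'
su somebody -c "python3 -c 'import pymysql; print(pymysql.__file__)'"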

Start Django app as a service

I want to create a service that will start with Ubuntu and will be able to use Django models etc.
This service will create a util.WorkerThread thread and wait for data in main.py:
if __name__ == '__main__':
    bot.polling(none_stop=True)
How can I do this? I just don't know what to look for.
If you can also tell me how to create an Ubuntu autostart service with a script like that, please do :)
P.S. The whole Django project runs via uWSGI in emperor mode.
The easiest way, in my opinion, is to create a script and run it from crontab.
First of all, create a script to start your Django app.
#!/bin/bash
cd /path/to/your/virtualenv    # path to your virtual environment
. bin/activate                 # activate your virtual environment
cd /path/to/your/project       # after that, go to your project directory
python manage.py runserver     # run the Django server
Save the script and open crontab with the command:
crontab -e
Now edit the crontab file and add this on the last line:
@reboot /path/to/your/script.sh
This way is not the best, but it is the easiest if you are not comfortable with creating Linux startup services.
I hope this helps you :)
Take a look at supervisord. It is much easier than daemonizing a Python script.
Configure it something like this:
[program:watcher]
command = /usr/bin/python /path/to/main.py
stdout_logfile = /var/log/main-stdout.log
stdout_logfile_maxbytes = 10MB
stdout_logfile_backups = 5
stderr_logfile = /var/log/main-stderr.log
stderr_logfile_maxbytes = 10MB
stderr_logfile_backups = 5
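Once the config is saved (for example under /etc/supervisor/conf.d/, though the exact directory depends on your installation), load it with supervisorctl:
sudo supervisorctl reread
sudo supervisorctl update
sudo supervisorctl status watcher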
OK, that is the answer: https://www.raspberrypi-spy.co.uk/2015/10/how-to-autorun-a-python-script-on-boot-using-systemd/
In newer Ubuntu versions, services defined by .conf files in /etc/init fail with the error Unable to connect to Upstart: Failed to connect to socket /com/ubuntu/upstart: Connection refused.
But services work using systemd.
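As a minimal sketch of the systemd route (the unit name and paths are examples, not from the question), create a unit file and enable it:
sudo tee /etc/systemd/system/mybot.service > /dev/null <<'EOF'
[Unit]
Description=Django bot worker
After=network.target

[Service]
WorkingDirectory=/path/to/project
ExecStart=/path/to/virtualenv/bin/python /path/to/project/main.py
Restart=on-failure

[Install]
WantedBy=multi-user.target
EOF
sudo systemctl daemon-reload
sudo systemctl enable --now mybot.service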

manage.py command in crontab not working

I have created an executable .sh script which contains code to run a Django management command.
cron.sh
#!/bin/sh
. /path/to/env/activate
cd /path/to/project
/path/to/env/bin/python manage.py some_command
I can confirm this script and the manage.py command are working by executing the script directly in a terminal:
$ /path/to/cron.sh
When I do the same via crontab, it is not working as expected.
What am I doing wrong? I can confirm there is nothing wrong with crontab: it is executing the cron.sh file, but /path/to/env/bin/python manage.py some_command is not working as expected.
The cron log also shows:
CRON[14768]: (root) CMD /path/to/cron.sh > /dev/null 2>&1
I am using the Bitnami Django AMI (Ubuntu 14.04.5 LTS).
Update
After removing /dev/null, I am now getting this error:
"Cannot locate wrapped file"
It seems that it is a PATH problem. I do not know if Django uses specific paths that must be set, but AFAIK the crontab PATH is really limited for security reasons. Just to check whether that is the problem, you could run the following in a shell terminal:
echo $PATH
You will get a complete PATH, for instance:
/usr/local/sbin:/usr/local/bin:/usr/bin:/usr/lib/jvm/default/bin:/usr/bin/site_perl:/usr/bin/vendor_perl:/usr/bin/core_perl
In your crontab, put it above your entries:
PATH=/usr/local/sbin:/usr/local/bin:/usr/bin:/usr/lib/jvm/default/bin:/usr/bin/site_perl:/usr/bin/vendor_perl:/usr/bin/core_perl
Tell me if this works. If it does, try to trim the provided PATH, or even better, use absolute locations in your code.
I have to say that I don't know if you can perform a cd in cron like this. I have always used absolute paths or cd /some/dir && /path/to/script args.
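Putting both points together, the crontab might look like this (the schedule and log path are examples; logging to a file instead of /dev/null helps debugging):
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
*/5 * * * * /path/to/cron.sh >> /tmp/cron_debug.log 2>&1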
P.S.: I cannot make comments yet; for this reason I put this in an answer.
The problem is that you're not using the script that Bitnami uses to load all the environment variables (/opt/bitnami/scripts/setenv.sh).
I would try using this script:
#!/bin/sh
. /opt/bitnami/scripts/setenv.sh
. /path/to/env/activate
cd /path/to/project
/path/to/env/bin/python manage.py some_command

Environment Variables when running from cron Ubuntu

I have a few Scrapy Python scripts which use AWS CloudWatch for logging via the watchtower module. This runs in a Docker container. Everything works absolutely fine when run manually. I am now looking to get cron jobs to schedule each scraper. This is when it breaks. As it is in a Docker container, I cannot find out where the cron logs are kept.
The entry point to the docker container is:
CMD cron -L15 && tail -f /var/log/cron.log
However, the file /var/log/cron.log is empty.
The cron.d/spiders file is very basic at the minute as I test:
* * * * * root /usr/local/bin/scrapy runspider /spiders/myspider.py
If I remove the CloudWatch/watchtower logging, the scraper runs as expected.
https://pypi.python.org/pypi/watchtower
If I run the command from within the Docker container, with the logging left in, it works as well:
/usr/local/bin/scrapy runspider /spiders/myspider.py
I believe the issue is with the environment variables. Watchtower looks in the environment variables for:
AWS_ACCESS_KEY_ID=
AWS_SECRET_ACCESS_KEY=
AWS_DEFAULT_REGION=
So the issue is that, when run by cron, the environment variables are not available. I tried running
env >> /etc/environment
but this didn't work.
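One common workaround (a sketch, not something from this thread) is to write the AWS_* variables to a file when the container starts and source that file in the cron entry:
# at container start, persist the AWS_* variables somewhere cron can read them
printenv | grep '^AWS_' | sed 's/^/export /' > /etc/cron_env.sh
# cron.d/spiders then sources that file before running the spider
* * * * * root . /etc/cron_env.sh && /usr/local/bin/scrapy runspider /spiders/myspider.py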

Running fabric scripts as root

I am trying to use Fabric to automate some administrative work that I am doing on a couple of servers. The general flow is the following:
SSH with local user
run: sudo su - to become root (providing local user password again)
Do the work as root :)
Unfortunately, using run('sudo su -') blocks execution of the script and allows user input. When I type exit or Ctrl+D, the script resumes, but without root privileges.
I have seen a similar problem in Switching user in Fabric, but I am restricted to sudo su - because I am not allowed to change the /etc/sudoers file, which contains the following line:
localuser ALL = /usr/bin/su -
I browsed the source of Fabric trying to find a workaround, but with no success.
Having faced the same problem as yours (only sudo su - user allowed by the admin, sudo -u user -c cmd not allowed), here's my working solution with fabric:
from fabric.api import env  # env.password is used to answer the sudo prompt
from ilogue.fexpect import expect, expecting, run

def sudosu(user, cmd):
    cmd += ' ;exit'  # exit the su shell once the command finishes
    prompts = []
    prompts += expect('bash', cmd)                # when the shell prompt appears, send the command
    prompts += expect('assword:', env.password)   # answer the password prompt
    with expecting(prompts):
        run('sudo su - ' + user)

def host_type():
    sudosu('root', 'uname -s')
There are several solutions for your issue. First, you want to run commands using sudo. You can use the Fabric method sudo instead of run; it runs a shell command on a remote host with superuser privileges (sudo ref).
For example, these commands are executed using sudo:
sudo("~/install_script.py")
sudo("mkdir /var/www/new_docroot", user="www-data")
sudo("ls /home/jdoe", user=1001)
result = sudo("ls /tmp/")
Another idea is to wrap a set of commands that need to be sudoed. You can use Fabric context managers (ref) to do that. In particular, you can use prefix or settings.
For example:
with settings(user='root'):
    run('do something')
    run('do another thing')
This will ask you for the root password once, then execute the commands as root.
You can tweak settings to store the password.
There is one solution for the following problem: Sorry, user localuser is not allowed to execute '/usr/bin/su - -c /bin/bash -l -c pwd' as root on hostname.
You can try sudo('mkdir ~/test_fabric', shell=False). Using the shell parameter avoids the bash -l parameter.
