Auto-restart Django development server on file save after previous error - python

While writing code, I am in the habit of saving the file every minute or so. Sometimes that means saving a half-finished function, which causes the Django development server to throw an error like the following:
Unhandled exception in thread started by ...
Traceback
..
..
File "/home/user/work/project/api/file.py", line 26
def update_something(self, )
^
SyntaxError: invalid syntax
When the code is fine, the Django dev server auto-restarts on file save and picks up the changes. How can I make the server recover from this error state and restart automatically on subsequent file saves?
Currently, I have to stop the python manage.py runserver command in the terminal and run it again manually.
I am using Django 1.5.3 on Python 2.7.6.

I use a simple bash script for this. Here's a one-liner you can use:
$ while true; do python manage.py runserver; sleep 2; done
That will wait 2 seconds before attempting to restart the server. Insert whatever you think is a sane value.
I usually write this as a shell script named runserver.sh, put it in my project root (the same directory as manage.py) and add it to my .gitignore.
#!/bin/bash
while true; do
    echo "Restarting Django runserver"
    python manage.py runserver
    sleep 2
done
If you do this, remember to chmod +x runserver.sh, then you can execute it with:
./runserver.sh
Press Ctrl-C twice in quick succession to exit: the first stops runserver, the second breaks out of the loop during the sleep.
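If you would rather keep everything in Python, the same idea can be sketched as a small wrapper script. This is my own variant, not part of the original answer, and the file name is hypothetical:
# restart_runserver.py - hypothetical Python equivalent of the bash loop above
import subprocess
import time

while True:
    print("(Re)starting Django runserver")
    # runserver blocks until it exits (e.g. after a fatal SyntaxError)
    subprocess.call(["python", "manage.py", "runserver"])
    time.sleep(2)  # same grace period as the bash version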

On Windows you can use a batch file instead. Write this as a batch script named runserver.bat:
@echo off
setlocal EnableDelayedExpansion
setlocal EnableExtensions
:WHILE_0
if 1 EQU 1 (
    python manage.py runserver
    timeout /t 2 /nobreak >NUL
    goto WHILE_0
)
Then you can execute it by double-clicking it or from the command line:
runserver.bat

Related

How to run a Python script in the background in Django

I have a Python script (ml.py) that generates some data. I want it to run in the background daily at 1 AM. How do I achieve that?
ml.py
Takes input from the Django models, runs some logic, and saves the results to the database.
I tried installing django-background-tasks and created the following file:
tasks.py
from background_task import background

@background(schedule=1)
def hello():
    execute('./scripts/ml.py')

hello(repeat=Task.Daily)
In the shell, I executed the following command:
python manage.py process_tasks
Now I get an error saying that the name execute is not defined
My other questions are:
Do I have to run the command python manage.py process_tasks every day?
Can I exit out of the command window, and will the process still run every day?
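For reference, here is one way the task could be written so the script actually runs. This is a sketch under assumptions: execute is not a built-in, so I substitute the standard library's subprocess module, and I assume the repeat constant comes from background_task's Task model (Task.DAILY):
# tasks.py - hedged sketch; subprocess replaces the undefined 'execute'
import subprocess

from background_task import background
from background_task.models import Task

@background(schedule=60)  # run ~60 seconds after being queued
def hello():
    subprocess.call(["python", "./scripts/ml.py"])

# queue the task once (e.g. from a shell); it then repeats daily
hello(repeat=Task.DAILY)
As for the other questions: as far as I know, process_tasks is a long-running worker that polls the database, so it has to be kept alive (for example under a process supervisor) rather than retyped every day; if you close the window and the worker dies, queued tasks will not run.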

manage.py command in crontab not working

I have created an executable .sh script which contains code to run a Django management command.
cron.sh
#!/bin/sh
. /path/to/env/activate
cd /path/to/project
/path/to/env/bin/python manage.py some_command
I can confirm this script and the manage.py command are working by executing the script directly in the terminal:
$ /path/to/cron.sh
When I run the same via crontab, it is not working as expected.
What am I doing wrong? I can confirm there is nothing wrong with crontab; it executes the cron.sh file, but /path/to/env/bin/python manage.py some_command is not working as expected.
The cron log shows:
CRON[14768]: (root) CMD /path/to/cron.sh > /dev/null 2>&1
I am using the Bitnami Django AMI (Ubuntu 14.04.5 LTS).
Update
After removing > /dev/null I am now getting this error:
"Cannot locate wrapped file"
It seems to be a PATH problem. I do not know if Django requires specific paths to be set, but AFAIK the crontab PATH is very limited for security reasons. To check whether that is the problem, run the following in a shell terminal:
echo $PATH
You will get a complete PATH, for instance:
/usr/local/sbin:/usr/local/bin:/usr/bin:/usr/lib/jvm/default/bin:/usr/bin/site_perl:/usr/bin/vendor_perl:/usr/bin/core_perl
In your crontab, put it above your code:
PATH=/usr/local/sbin:/usr/local/bin:/usr/bin:/usr/lib/jvm/default/bin:/usr/bin/site_perl:/usr/bin/vendor_perl:/usr/bin/core_perl
Tell me if this works. If it does, try to trim the provided PATH down, or even better, use absolute paths in your script.
I should add that I don't know whether you can perform a cd in cron like this; I have always used absolute paths, or cd /some/dir && /path/to/script args.
P.S.: I cannot make comments yet, so I am putting this in an answer.
The problem is that you're not using the script that Bitnami uses to load all the environment variables (/opt/bitnami/scripts/setenv.sh).
I would try using this script:
#!/bin/sh
. /opt/bitnami/scripts/setenv.sh
. /path/to/env/activate
cd /path/to/project
/path/to/env/bin/python manage.py some_command

How do I pipe a model query into the Django Shell via a Bash Script?

I'm writing a startup.sh script to be run when a Docker container is created.
#!/bin/bash
python manage.py runserver
python manage.py makemigrations accounts
python manage.py migrate
python manage.py check_permissions
python manage.py cities --import=country --force
*python manage.py shell | from cities.models import * Country.objects.all().exclude(name='United States").delete()*
python manage.py cities --import=cities
python manage.py cities --import=postal_code
I am guessing the line in question is incorrect; what would be the correct way to do this in a bash script?
Use a heredoc:
python manage.py shell <<'EOF'
from cities.models import *
Country.objects.all().exclude(name='United States').delete()
EOF
It's not such a good idea to include Django code in a shell script file. It's better to put that code in a Python file and run:
python manage.py shell < script.py
Or, better, write a Django management command. That way you track the code in the same project/repo, and people are less confused when they see it.
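For illustration, here is a minimal sketch of such a management command. The file location follows Django's <app>/management/commands/<name>.py convention, and the command name purge_countries is my own invention:
# hypothetical <app>/management/commands/purge_countries.py
from django.core.management.base import BaseCommand

from cities.models import Country

class Command(BaseCommand):
    help = "Delete every Country except the United States"

    def handle(self, *args, **options):
        Country.objects.exclude(name="United States").delete()
        self.stdout.write("Non-US countries deleted")
It would then be invoked from the startup script as python manage.py purge_countries.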

How to automate django dumpdata?

I am populating my DB locally and I want to dump that data to the production server with a script for all my apps.
I am trying to write a script that will do this...
$ source path/to/venv && python manage.py dumpdata app1 > file1.json
$ source path/to/venv && python manage.py dumpdata app2 > file2.json
...etc
I use Fabric for my deploy script and I thought it would be nice to incorporate this there, but the local method in Fabric doesn't seem to be able to do such a thing. The run command does, but I don't know why.
I think it might have something to do with this...
local is not currently capable of simultaneously printing and
capturing output, as run/sudo do. The capture kwarg allows you to
switch between printing and capturing as necessary, and defaults to
False. (http://docs.fabfile.org/en/latest/api/core/operations.html)
but I am not sure
I tried doing it with os.system in a separate Python script as well, but that didn't work either; both give me the same error:
sh: 1: source: not found
I have checked and double-checked the path many times; I can't seem to figure it out. What do you think?
Your script executes under the classic sh shell, not under bash. source is a bash builtin; the classic sh equivalent is a period (as in . pathto/pyenv/bin/activate). Alternatively, you could force bash with #!/bin/bash at the start of your script.
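Putting that together with the original fabfile, a minimal sketch (assuming Fabric 1.x, whose local() hands the command to /bin/sh):
# hypothetical fabfile.py: use '.' instead of 'source', since the command
# runs under /bin/sh, where 'source' does not exist
from fabric.api import local

def dumpdata():
    local(". path/to/venv/bin/activate && python manage.py dumpdata app1 > file1.json")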
Since source was the thing that could not be executed, I made a shell script, placed it in a directory, and just executed that:
#!/bin/bash
source pathto/pyenv/bin/activate && python manage.py dumpdata quiz > data_dump/foo.json
source pathto/pyenv/bin/activate && python manage.py dumpdata main > data_dump/bar.json
source pathto/pyenv/bin/activate && python manage.py dumpdata study > data_dump/waz.json
and then in the fabric file...
def foobar():
    local('/pathto/foo.sh')

How can I know if my Python script is running? (using Cygwin or Windows shell)

I have a Python script named sudoserver.py that I start in a Cygwin shell by doing:
python sudoserver.py
I am planning to create a shell script (I don't know yet whether it will be a Windows shell script or a Cygwin script) that needs to know if this sudoserver.py Python script is running.
But if I do this in Cygwin (while sudoserver.py is running):
$ ps -e | grep "python" -i
11020 10112 11020 7160 cons0 1000 00:09:53 /usr/bin/python2.7
and in Windows shell:
C:\>tasklist | find "python" /i
python2.7.exe 4344 Console 1 13.172 KB
So it seems I have no info about the .py file being executed. All I know is that python is running something.
The -l (long) option for ps on Cygwin does not show my .py file, and neither does the /v (verbose) switch for tasklist.
What would be an appropriate shell-based way (Windows or Cygwin shell would be enough; both if possible would be fine) to programmatically find out whether a specific Python script is executing right now?
NOTE: the Python process could be started by another user, even by a user not logged into a GUI session, or even by the privileged Windows "SYSTEM" user.
It is a limitation of the platform.
You probably need to use some low-level API to retrieve the process info. You can take a look at this one: Getting the command line arguments of another process in Windows
You can probably use win32api module to access these APIs.
(Sorry, away from a Windows PC so I can't try it out)
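For a scriptable check, here is a sketch using the third-party psutil package; this is my own suggestion rather than part of the answer above, and reading the command line of another user's (or SYSTEM's) process may require elevated rights:
# hedged sketch: look for sudoserver.py on any process command line
import psutil

def script_is_running(script_name="sudoserver.py"):
    for proc in psutil.process_iter(attrs=["cmdline"]):
        cmdline = proc.info["cmdline"] or []  # None when access is denied
        if any(script_name in arg for arg in cmdline):
            return True
    return False

if __name__ == "__main__":
    print(script_is_running())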
Since sudoserver.py is your script, you could modify it to create a file in an accessible location when it starts and to delete the file when it finishes. Your shell script can then check for the existence of that file to find out if sudoserver.py is running.
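A minimal sketch of that idea follows; the flag-file name is my assumption, and note the caveat in the edit below, since a hard crash can leave the file behind:
# hypothetical snippet for sudoserver.py: create a flag file on startup,
# remove it on exit; mere presence of the file is a best-effort indicator
import atexit
import os

FLAG_FILE = "sudoserver.running"

def install_running_flag():
    open(FLAG_FILE, "w").close()          # create the marker
    atexit.register(remove_running_flag)  # best-effort cleanup on exit

def remove_running_flag():
    try:
        os.remove(FLAG_FILE)
    except OSError:
        pass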
(EDIT)
Thanks to the commenters who suggested that while the presence or absence of the file is an unreliable indicator, a file's lock status is not.
I wrote the following Python script testlock.py:
f = open ("lockfile.lck","w")
for i in range(10000000):
print (i)
f.close()
... and ran it in a Cygwin console window on my Windows PC. At the same time, I had another Cygwin console window open in the same directory.
First, after I started testlock.py:
Simon@Simon-PC ~/test/python
$ ls
lockfile.lck  testlock.py
Simon@Simon-PC ~/test/python
$ rm lockfile.lck
rm: cannot remove `lockfile.lck': Device or resource busy
... then after I had shut down testlock.py by using Ctrl-C:
Simon@Simon-PC ~/test/python
$ rm lockfile.lck
Simon@Simon-PC ~/test/python
$ ls
testlock.py
Simon@Simon-PC ~/test/python
$
Thus, it appears that Windows locks the file while the testlock.py script is running and unlocks it when the script is stopped with Ctrl-C. The equivalent test can be carried out in Python with the following script:
import os

try:
    os.remove("lockfile.lck")
except OSError:
    print("lockfile.lck in use")
... which correctly reports:
$ python testaccess.py
lockfile.lck in use
... when testlock.py is running but successfully removes the locked file when testlock.py has been stopped with a Ctrl-C.
Note that this approach works on Windows, but it won't work on Unix because, according to the Python documentation:
On Windows, attempting to remove a file that is in use causes
an exception to be raised; on Unix, the directory entry is removed
but the storage allocated to the file is not made available until
the original file is no longer in use.
A platform-independent solution using an additional Python module FileLock is described in Locking a file in Python.
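For reference, a short sketch with the third-party filelock package (pip install filelock); the package choice is my assumption, since the linked answer may use a differently named module:
# a process that cannot acquire the lock can conclude another instance holds it
from filelock import FileLock, Timeout

lock = FileLock("lockfile.lck.lock")
try:
    with lock.acquire(timeout=0.1):
        print("lock acquired - the other script is not running")
except Timeout:
    print("lock is held - the other script is still running")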
(FURTHER EDIT)
It appears that the OP didn't necessarily want a solution in Python. An alternative would be to do this in bash. Here is testlock.sh:
#!/bin/bash
flock lockfile.lck sequence.sh
The script sequence.sh just runs a time-consuming operation:
#!/bin/bash
for i in `seq 1 1000000`; do
    echo $i
done
Now, while testlock.sh is running, we can test the lock status using another variant on flock:
$ flock -n lockfile.lck echo "Lock acquired" || echo "Could not acquire lock"
Could not acquire lock
$ flock -n lockfile.lck echo "Lock acquired" || echo "Could not acquire lock"
Could not acquire lock
$ flock -n lockfile.lck echo "Lock acquired" || echo "Could not acquire lock"
Lock acquired
$
The first two attempts to lock the file failed because testlock.sh was still running and so the file was locked. The last attempt succeeded because testlock.sh had finished running.
