How to automate django dumpdata? - python

I am populating my DB locally and I want to dump that data to the production server with a script for all my apps.
I am trying to write a script that will do this...
$ source path/to/venv && python manage.py dumpdata app1 > file1.json
$ source path/to/venv && python manage.py dumpdata app2 > file2.json
...etc
I use Fabric for my deploy script and I thought it would be nice to incorporate this there, but Fabric's 'local' method doesn't seem to be able to do such a thing. The 'run' command does, but I don't know why.
I think it might have something to do with this...
local is not currently capable of simultaneously printing and
capturing output, as run/sudo do. The capture kwarg allows you to
switch between printing and capturing as necessary, and defaults to
False. (http://docs.fabfile.org/en/latest/api/core/operations.html)
but I am not sure.
I tried doing it with os.system in a separate Python script as well, but that didn't work either; both give me the same error, which is...
sh: 1: source: not found
I have checked and double checked the path many times, I can't seem to figure it out. What do you think?

Your script executes under the classic sh shell, not under bash. source is a bash built-in; the classic equivalent is the dot command (like ". pathto/pyenv/bin/activate"). Or you could force bash with #!/bin/bash at the start of your script.

Since '$ source' was the thing that could not be executed, I made a shell script, placed it in a directory, and just executed that:
source pathto/pyenv/bin/activate && python manage.py dumpdata quiz > data_dump/foo.json
source pathto/pyenv/bin/activate && python manage.py dumpdata main > data_dump/bar.json
source pathto/pyenv/bin/activate && python manage.py dumpdata study > data_dump/waz.json
and then in the fabric file...
def foobar():
    local('/pathto/foo.sh')
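If you would rather keep everything inside the fabfile instead of a separate .sh file, you can sidestep source entirely by calling the virtualenv's interpreter directly. A minimal sketch, assuming Fabric 1.x; the app list and venv path are placeholders, not taken from the question:
# fabfile.py
from fabric.api import local

APPS = ['app1', 'app2']

def dumpdata():
    for app in APPS:
        # The venv's python works without activation, so this runs fine
        # under plain sh as well as bash.
        local('path/to/venv/bin/python manage.py dumpdata %s > %s.json' % (app, app))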

Related

Advanced scripting inside a Dockerfile

I am trying to create a Docker image/container that will run on Windows 10/Linux and test a REST API. Is it possible to embed the function (from my .bashrc file) inside the Dockerfile? The pytest function calls pylint before running the .py file. If the rating is not 10/10, it prompts the user to fix the code and exits. This works fine on Linux.
Basically, here is the pseudo-code inside the Dockerfile from which I am attempting to build an image.
------------------------------------------
FROM ubuntu:x.xx
install python
install pytest
install pylint
copy test_file to the respective folder
execute pytest test_file_name.py
if the rating is not 10/10:
    prompt the user to resolve the rating issue and exit
------------ here is a partial code snippet from the function ------------
function pytest () {
    argument1="$1"
    # Extract the path and file name for pylint when a method name is passed
    pathfilename=$(echo "${argument1}" | sed 's/::.*//')
    clear && printf '\e[3J'
    output=$(docker exec -t orch-$USER pylint -r n "${pathfilename}")
    if echo "${output}" | grep 'warning.*error' &>/dev/null ||
       echo "${output}" | egrep 'warning|convention' &>/dev/null
    then
        echo "${output}" | sed 's/\(warning\)/\o033[33m\1\o033[39m/;s/\(errors\|error\)/\o033[31m\1\o033[39m/'
        YEL='\033[0;1;33m'
        NC='\033[0m'
        echo -e "\n ${YEL}Fix module as per pylint/PEP8 messages to achieve a 10/10 rating before pushing to GitHub\n${NC}"
    fi
}
Another option I can think of:
Step 1] Build the image (using DockerFile) with all the required software
Step 2] In a .py file, add the call to execute pytest with the logic from the function.
Your thoughts?
You can turn that function into a standalone shell script. (Pretty much by just removing the function wrapper, and taking out the docker exec part of the tool invocation.) Once you've done that, you can COPY the shell script into your image, and once you've done that, you can RUN it.
...
COPY pylint-enforcer.sh .
RUN chmod +x ./pylint-enforcer.sh \
&& ./pylint-enforcer.sh
...
It looks like pylint will produce a non-zero exit code if it emits any messages. For the purposes of a Dockerfile, it may be enough to just RUN pylint -r n .: if pylint prints anything, it returns a non-zero exit code, which docker build will interpret as failure, and the build will not proceed.
You might consider whether you'll ever want the ability to build and push an image of code that isn't absolutely perfect (during a production-down event, perhaps), and whether you want to require root-level permissions to run simple code-validity tools (anyone who can run docker commands can edit arbitrary files on the host as root). I'd suggest running these tools in a non-Docker virtual environment during your CI process, and neither placing them in your Dockerfile nor depending on docker exec to run them.
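If you do want the check inside the image anyway, the enforcer script can stay tiny, because propagating pylint's exit code is all that is needed. Here is the same idea as the pylint-enforcer.sh above, sketched in Python instead; the target file name is a placeholder:
#!/usr/bin/env python3
# pylint_enforcer.py: run pylint and fail the build if it emits anything.
import subprocess
import sys

# pylint exits non-zero whenever it prints messages, so passing its exit
# code through is enough to make 'docker build' stop on a bad rating.
result = subprocess.run(['pylint', '-r', 'n', 'test_file_name.py'])
sys.exit(result.returncode)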

Docker and Python using Symfony Process

I am using Laradock and want to be able to run a Python script from my Laravel app using Symfony Process. From inside the root of my container I can run "python3 script_name.py arg1" and it runs just fine; pip list shows all the modules needed. When I run it from inside Laravel, it tells me:
"import pymysql ImportError: No module named 'pymysql'"
I have used a non-docker Laravel app to do this just fine, using:
$script = storage_path().'/app/script.py';
$process = new Process('python3 '. $script." ".session('division'));
What am I missing?
On *nix, make sure that PYTHONPATH is configured correctly for all users, or try setting the full path to python3.
How to check
First, find out which user PHP runs as:
php -r "print shell_exec( 'whoami' );" // somebody
Then run the script as that user:
su somebody -c 'python3 script_name.py arg1'
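To see exactly which interpreter and module search path the PHP user gets, a tiny diagnostic script can help; run it through Symfony Process exactly the way script.py is run. This is just a sketch, and the file name check_env.py is hypothetical:
# check_env.py
import sys

print(sys.executable)  # which python3 is actually being used
print(sys.path)        # where it searches for modules

try:
    import pymysql
    print('pymysql OK:', pymysql.__file__)
except ImportError as exc:
    print('pymysql missing:', exc)
If sys.executable differs from the interpreter you tested with in the container shell, point Process at the full path of the right one.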

manage.py command in crontab not working

I have created an executable .sh script which runs a Django management command.
cron.sh
#!/bin/sh
. /path/to/env/activate
cd /path/to/project
/path/to/env/bin/python manage.py some_command
I can confirm that this script and the manage.py command work by executing the script directly in a terminal:
$ /path/to/cron.sh
When I do the same via crontab, it is not working as expected.
What am I doing wrong? I can confirm there is nothing wrong with crontab itself; it executes the cron.sh file, but /path/to/env/bin/python manage.py some_command is not working as expected.
The cron log also shows
CRON[14768]: (root) CMD /path/to/cron.sh > /dev/null 2>&1
I am using the Bitnami Django AMI (Ubuntu 14.04.5 LTS).
Update
After removing the redirect to /dev/null, I now get this error:
"Cannot locate wrapped file"
It seems that it is a PATH problem. I do not know if Django uses specific paths that must be set, but AFAIK the crontab PATH is really limited for security reasons. Just to check whether that is the problem, run the following in a shell terminal:
echo $PATH
You will get a complete PATH, for instance:
/usr/local/sbin:/usr/local/bin:/usr/bin:/usr/lib/jvm/default/bin:/usr/bin/site_perl:/usr/bin/vendor_perl:/usr/bin/core_perl
In your crontab, put this line above your entries:
PATH=/usr/local/sbin:/usr/local/bin:/usr/bin:/usr/lib/jvm/default/bin:/usr/bin/site_perl:/usr/bin/vendor_perl:/usr/bin/core_perl
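For example, the whole crontab might then look like this (use the PATH from your own echo $PATH; the schedule here is only an illustration, not from the question):
PATH=/usr/local/sbin:/usr/local/bin:/usr/bin
*/5 * * * * /path/to/cron.sh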
Tell me if this works. If it does, try trimming the provided PATH down, or even better, use absolute paths in your code.
I have to say that I don't know whether you can perform a cd in cron like this; I have always used absolute paths, or cd /some/dir && /path/to/script args.
P.S.: I cannot make comments yet; for this reason I put this in an answer.
The problem is that you're not using the script that Bitnami uses to load all the environment variables (/opt/bitnami/scripts/setenv.sh).
I would try using this script:
#!/bin/sh
. /opt/bitnami/scripts/setenv.sh
. /path/to/env/activate
cd /path/to/project
/path/to/env/bin/python manage.py some_command

How do I pipe a model query into the Django Shell via a Bash Script?

I'm writing a startup.sh script to be run when a Docker container is created.
#!/bin/bash
python manage.py runserver
python manage.py makemigrations accounts
python manage.py migrate
python manage.py check_permissions
python manage.py cities --import=country --force
*python manage.py shell | from cities.models import * Country.objects.all().exclude(name='United States").delete()*
python manage.py cities --import=cities
python manage.py cities --import=postal_code
I am guessing the line in question (marked with asterisks) is incorrect; what would be the correct way to do this in a bash script?
Use a heredoc:
python manage.py shell <<'EOF'
from cities.models import *
Country.objects.all().exclude(name='United States').delete()
EOF
It's not such a good idea to include Django code in a shell script file. It's better to put that code in a Python file and do:
python manage.py shell < script.py
Or better, write a Django management command, as sketched below. That way you track your code in the same project/repo, and people will be less confused when they see it.
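A minimal sketch of that management command, assuming it lives in one of your own apps; the app and command names here are hypothetical:
# myapp/management/commands/prune_countries.py
from django.core.management.base import BaseCommand
from cities.models import Country

class Command(BaseCommand):
    help = "Delete every Country except the United States"

    def handle(self, *args, **options):
        # Same query as the shell one-liner, minus the star import.
        Country.objects.exclude(name='United States').delete()
        self.stdout.write('Pruned countries')
The startup script then simply calls python manage.py prune_countries between the two cities imports.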

Auto restart django development server on file save after previous error

While writing code, I am in the habit of saving the file every minute or so. Sometimes that leads to situations where a function is incomplete when I save it, causing the Django development server to throw an error like the following:
Unhandled exception in thread started by ...
Traceback
..
..
File "/home/user/work/project/api/file.py", line 26
def update_something(self, )
^
SyntaxError: invalid syntax
Now, when the code is working fine, the Django dev server auto-restarts on file save and reflects the changes. How can I make the server recover from the failed error state and restart automatically on subsequent file saves?
Currently, I have to stop the python manage.py runserver command in the terminal and run it again manually.
I am using Django 1.5.3 on Python 2.7.6.
I use a simple bash script for this. Here's a one-liner you can use:
$ while true; do python manage.py runserver; sleep 2; done
That will wait 2 seconds before attempting to restart the server. Insert whatever you think is a sane value.
I usually write this as a shell script named runserver.sh, put it in my project root (the same directory with manage.py in it) and add it to the gitignore.
while true; do
echo "Re-starting Django runserver"
python manage.py runserver
sleep 2
done
If you do this, remember to chmod +x runserver.sh, then you can execute it with:
./runserver.sh
Use Ctrl-c Ctrl-c to exit.
On Windows you can use a batch file. Write this as a batch script named runserver.bat:
@echo off
setlocal EnableDelayedExpansion
setlocal EnableExtensions
:WHILE_0
if 1 EQU 1 (
python manage.py runserver
timeout /t 2
goto WHILE_0
)
Then you can execute it by double-clicking it or from the command line:
runserver.bat
