I'm developing a web app in Python 2.7 using Django 1.4 with PyCharm 2.5 as my IDE and a Postgres database. I am able to run manage.py commands such as sql and syncdb to create the SQL and the tables, but other commands are not recognized. When I attempt to run sqlreset (or any other command that drops tables or alters data), I get an "Unknown command" error:
runnerw.exe C:\Python27\python.exe "C:\Program Files (x86)\JetBrains\PyCharm 2.5.1\helpers\pycharm\django_manage.py" sqlreset EventMapperApp C:/Users/Karen/PycharmProjects/eventsMap
Unknown command: 'sqlreset'
Type 'manage.py help' for usage.
Process finished with exit code 1
Could anyone help me figure out what's going on?
Are you sure you are running Django 1.4? sqlreset has been deprecated since 1.3 and is slated to be removed in 1.5: it is still present in Django 1.4, but has already been removed in the development version.
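A quick way to confirm which Django your interpreter actually imports (PyCharm may be configured with a different interpreter than your shell); the try/except just keeps the snippet runnable even where Django is absent, and you would substitute the interpreter PyCharm uses:

```shell
# print the Django version this interpreter sees
python3 - <<'EOF'
try:
    import django
    print(django.get_version())
except ImportError:
    print('django is not importable from this interpreter')
EOF
```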
There isn't a sqlreset command.
Info here: https://docs.djangoproject.com/en/dev/ref/django-admin/
or in the output of manage.py help, as your post shows.
Django's management commands won't automatically run operations that could destroy data in your database (such as dropping tables or changing column types).
If you want to completely reset your application - back to the state just after syncdb - you need to do it manually.
I wrote this gist that will reset your database. It is not portable (it only works on *nix-like systems), but it might help you out.
Note: this will delete (drop) everything.
# 1) print INSTALLED_APPS via the Django shell
# 2) keep the last line of output, strip the tuple punctuation and the
#    "django.contrib." prefixes, leaving bare app labels
# 3) have sqlclear generate DROP statements for those apps, feed them to
#    dbshell, then recreate the tables with syncdb
echo 'from django.conf import settings; print settings.INSTALLED_APPS; quit();' | \
python manage.py shell --plain 2>&1 | \
tail -n1 | sed -r "s|^.*\((.*)\).*$|\1|; s|[',]| |g; s|django\.contrib\.||g" | \
xargs python manage.py sqlclear | \
python manage.py dbshell && python manage.py syncdb
Related
I'm writing a startup.sh script to be run when a docker container is created.
#!/bin/bash
python manage.py runserver
python manage.py makemigrations accounts
python manage.py migrate
python manage.py check_permissions
python manage.py cities --import=country --force
*python manage.py shell | from cities.models import * Country.objects.all().exclude(name='United States").delete()*
python manage.py cities --import=cities
python manage.py cities --import=postal_code
I am guessing the line in question is incorrect - what would be the correct way to do this in a bash script?
Use a heredoc:
python manage.py shell <<'EOF'
from cities.models import *
Country.objects.all().exclude(name='United States').delete()
EOF
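Quoting the delimiter ('EOF') makes the shell pass the body through verbatim - no parameter expansion, so the single quotes around 'United States' survive. The quoting behavior is easy to check with plain python3 standing in for manage.py shell:

```shell
# 'EOF' quoted: no parameter expansion inside the heredoc body
python3 - <<'EOF'
name = 'United States'
print(name)
EOF
```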
It's not a good idea to include Django code in a shell script file. It's better to put that code in a Python file and run:
python manage.py shell < script.py
Or, better still, write a Django management command. That way the code is tracked in the same project/repo, and it is less confusing for people who come across it.
Trying to run heroku run python manage.py migrate --remote [my app], and it outputs a list of subcommands. I tried various other Django commands with the same result, everything from shell to some custom commands I wrote.
heroku run python is working fine, as are other Heroku commands (run ls, for example). Is there a problem with Django apps at the moment? I haven't edited my Heroku settings or done anything Heroku-related (I rolled back to much older deploys and it is still broken, so it isn't any recent code change).
Running python manage.py help migrate on tempotrader-staging... up, run.7740
/app/.heroku/python/lib/python2.7/site-packages/stream_django/enrich.py:3: RemovedInDjango19Warning: The utilities in django.db.models.loading are deprecated in favor of the new application loading system.
from django.db.models.loading import get_model
Type 'manage.py help <subcommand>' for help on a specific subcommand.
Available subcommands:
[account]
account_emailconfirmationmigration
account_unsetmultipleprimaryemails
[auth]
changepassword
createsuperuser
[avatar]
rebuild_avatars
[charting]
update_portfolios
[django]
check
compilemessages
createcachetable
dbshell
diffsettings
dumpdata
flush
inspectdb
loaddata
makemessages
makemigrations
migrate
runfcgi
shell
showmigrations
sql
sqlall
sqlclear
sqlcustom
sqldropindexes
sqlflush
sqlindexes
sqlmigrate
sqlsequencereset
..... etc
Heroku fixed it! See Heroku's status page for the full incident report.
I have my Django app set up on Elastic Beanstalk, and I recently made a change to the DB that I would like applied to the live DB now. I understand that I need to set this up as a container command, and after checking the DB I can see that the migration was run, but I can't figure out how to get more control over the migrations. For example, I only want a migration to run when necessary, but from my understanding the container will run the migration on every deploy as long as the command is listed in the config file. Also, on occasion, I will be given options during a migration, such as:
Any objects related to these content types by a foreign key will also be deleted.
Are you sure you want to delete these content types?
If you're unsure, answer 'no'
How do I set up the container command to respond to this with a yes during the deployment phase?
This is my current config file
container_commands:
  01_migrate:
    command: 'source /opt/python/run/venv/bin/activate && python app/manage.py makemigrations'
    command: 'source /opt/python/run/venv/bin/activate && python app/manage.py migrate'
Is there a way to set these 2 commands to only run when necessary and to respond to the yes/no options I receive during a migration?
I'm not sure there is a way to answer yes or no specifically, but you can append --noinput to your container command. The --noinput option suppresses all user prompting, such as "Are you sure?" confirmation messages.
Try:
command: 'source /opt/python/run/venv/bin/activate && python app/manage.py migrate --noinput'
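One thing to watch in the question's config: both command: lines sit under the same 01_migrate key, and most YAML parsers keep only the last duplicate key, so only one of the two commands actually runs. Splitting them into separately named commands (the names here are arbitrary; container commands run in alphabetical order) avoids that:

```yaml
container_commands:
  01_makemigrations:
    command: 'source /opt/python/run/venv/bin/activate && python app/manage.py makemigrations --noinput'
  02_migrate:
    command: 'source /opt/python/run/venv/bin/activate && python app/manage.py migrate --noinput'
```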
Or:
You can SSH into your Elastic Beanstalk instance and run your commands manually.
That gives you more control over the migrations.
Install awsebcli with pip install awsebcli
Type eb ssh YourEnvironmentName
Navigate to your EB instance's app directory with:
sudo -s
source /opt/python/run/venv/bin/activate
source /opt/python/current/env
cd /opt/python/current/app
then run your command.
./manage.py migrate
I hope this helps
Aside from the automatic migration that you can add to your deploy script (which runs every time you update the environment, and may not be desirable if you have long-running migrations or other Django management commands), you can SSH into an EB instance to run migrations manually.
Here is how to manually run migrations (and any other Django management command) on the Amazon Linux 2 platforms (Python 3.7/3.8) created by Elastic Beanstalk:
First, from your EB CLI, run eb ssh to connect to an instance.
The virtual environment can be activated with
source /var/app/venv/*/bin/activate
manage.py can be run with
python3 /var/app/current/manage.py
Now the only tricky bit is getting Elastic Beanstalk's environment variables. You can access them with /opt/elasticbeanstalk/bin/get-config. I'm not super familiar with bash scripting, but here is a little script I use to get and set environment variables; maybe someone can improve it to make it less hard-coded:
#! /bin/bash
export DJANGO_SECRET_KEY=$(/opt/elasticbeanstalk/bin/get-config environment -k DJANGO_SECRET_KEY)
...
More info regarding the Amazon Linux 2 platform script tools: https://docs.aws.amazon.com/elasticbeanstalk/latest/dg/custom-platforms-scripts.html
Make sure that the same settings are used when migrating and running! Accordingly, I would recommend changing this kind of code in django.config:
container_commands:
  01_migrate:
    command: "source /opt/python/run/venv/bin/activate && python manage.py migrate"
    leader_only: true
to:
container_commands:
  01_migrate:
    command: "django-admin migrate"
    leader_only: true
option_settings:
  aws:elasticbeanstalk:application:environment:
    DJANGO_SETTINGS_MODULE: fund.productionSettings
as recommended here. This will help you avoid issues with the wrong settings being used.
More on manage.py vs. django-admin.py.
If the django-admin method is not working because it is not configured properly, you can also use python manage.py migrate in .ebextensions/django.config:
container_commands:
  01_migrate:
    command: "python manage.py migrate"
    leader_only: true
In reference to Oscar Chen's answer, you can set environment variables using the EB CLI with:
eb setenv key1=value1 key2=value2 ...
The trick is that the full output of container_commands is in /var/log/cfn-init-cmd.log (Amazon Linux 2 Elastic Beanstalk released November 2020).
To view this you would run:
eb ssh [environment-name]
sudo tail -n 50 -f /var/log/cfn-init-cmd.log
This doesn't seem to be documented anywhere obvious and it's not displayed by eb logs; I found it by hunting around in /var/log.
The Django example management command django-admin.py migrate did not work for me. Instead I had to use something like:
01_migrate:
  command: "$PYTHONPATH/python manage.py migrate"
  leader_only: true
02_collectstatic:
  command: "$PYTHONPATH/python manage.py collectstatic --noinput --verbosity=0 --clear"
To see the values of your environment variables at deploy time, you can create a debug command like:
03_debug:
  command: "env"
You can see most of these environment variables with eb ssh; sudo cat /opt/elasticbeanstalk/deployment/env, but there seem to be some subtle differences at deploy time, hence using env above to be sure.
Here you'll see that $PYTHONPATH is being used in a non-typical way: it points to the virtualenv's bin directory, not the site-packages directory.
This answer looks like it will work for you if you just want to send "yes" to a few prompts.
You might also consider the --noinput flag so that your config looks like:
container_commands:
  01_makemigrations:
    command: 'source /opt/python/run/venv/bin/activate && python app/manage.py makemigrations'
  02_migrate:
    command: 'source /opt/python/run/venv/bin/activate && python app/manage.py migrate --noinput'
This takes the default answer, which is "no".
It also appears that there's an open issue/fix to solve this problem a better way.
I am populating my DB locally and I want to dump that data to the production server with a script for all my apps.
I am trying to write a script that will do this...
$ source path/to/venv && python manage.py dumpdata app1 > file1.json
$ source path/to/venv && python manage.py dumpdata app2 > file2.json
...etc
I use Fabric for my deploy script and I thought it would be nice to incorporate this into it, but Fabric's local method doesn't seem to be able to do it. The run command does, but I don't know why.
I think it might have something to do with this...
local is not currently capable of simultaneously printing and
capturing output, as run/sudo do. The capture kwarg allows you to
switch between printing and capturing as necessary, and defaults to
False. (http://docs.fabfile.org/en/latest/api/core/operations.html)
but I am not sure
I tried doing it with os.system in a separate Python script as well, but that didn't work either; both give me the same error:
sh: 1: source: not found
I have checked and double checked the path many times, I can't seem to figure it out. What do you think?
Your script executes under the classic sh shell, not under bash. source is a bash command; the classic POSIX equivalent is the dot command (e.g. ". pathto/pyenv/bin/activate"). Or you could force bash with #!/bin/bash at the start of your script.
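The difference is easy to demonstrate without any virtualenv involved, using /dev/null as a harmless stand-in for the activate script:

```shell
# on systems where /bin/sh is dash, `source` is not a builtin...
sh -c 'source /dev/null' 2>/dev/null || echo "sh: source not found"
# ...but the POSIX dot command works everywhere
sh -c '. /dev/null && echo "sh: dot works"'
# and bash provides `source` as a synonym for `.`
bash -c 'source /dev/null && echo "bash: source works"'
# alternatively, skip activation entirely by calling the venv's interpreter:
# path/to/venv/bin/python manage.py dumpdata app1 > file1.json
```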
Since '$ source' was the thing that could not be executed, I made a shell script, placed it in a directory, and just executed that:
source pathto/pyenv/bin/activate && python manage.py dumpdata quiz > data_dump/foo.json
source pathto/pyenv/bin/activate && python manage.py dumpdata main > data_dump/bar.json
source pathto/pyenv/bin/activate && python manage.py dumpdata study > data_dump/waz.json
and then in the fabric file...
def foobar():
    local('/pathto/foo.sh')
I generated a fixture:
python manage.py dumpdata --all > ./mydump.json
I emptied all my databases using:
python manage.py sqlflush | psql mydatabase -U mydbuser
But when i try to use loaddata:
python manage.py loaddata ./mydump.json
I'm receiving this error:
IntegrityError: Could not load tastypie.ApiKey(pk=1): duplicate key
value violates unique constraint "tastypie_apikey_user_id_key"
DETAIL: Key (user_id)=(2) already exists.
I'm having this problem in production and I'm out of ideas. Has anyone had a similar problem?
Run loaddata with all @receiver handlers commented out, because they will be fired when loaddata loads your data. If receivers create other objects as a side effect, it will cause collisions. Alternatively, since Django calls pre_save/post_save receivers with raw=True during loaddata, each receiver can simply return early when raw is set.
First:
I believe your unix pipe is incorrectly written.
# 1: Dump your json
$ python manage.py dumpdata --all > ./mydump.json
# 2: dump your schema
$ python manage.py sqlflush > schema.sql
# 3: launch psql
# this is how I launch psql ( seems to be more portable between rhel/ubuntu )
# you might use a bit different technique, and that is ok.
Edited: (very important)
Make sure you do not have any active django connections running on your server. Then:
$ sudo -u myuser psql mydatabase
# 4: read in schema
mydatabase=# \i schema.sql
mydatabase=# ctrl-d
# 5: load back in your fixture.
$ python manage.py loaddata ./mydump.json
Second:
If your pipe is ok.. and it might be. Depending on your schema/data you may need to use natural-keys.
# 1: Dump your json using ( -n ) natural keys.
$ python manage.py dumpdata -n --all > ./mydump.json
# followed by steps 2-5 above.
Jeff Sheffield's solution is correct, but by now I find that a solution like django-dbbackup is by far the most generic and simplest way to do it with any database.
python manage.py dbbackup