I've added new models and pushed to our staging server, then ran syncdb to create their tables, and it locks up. It gets as far as 'Create table photos_photousertag', and the postgres output shows the notice for creation of 'photos_photousertag_id_seq', but otherwise I get nothing on either side. I can't ctrl+c the syncdb process and I have no indication of what route to take from here. Has anyone else run into this?
We use postgres, and while we've not run into this particular issue, there are some steps you may find helpful in debugging:
a. What version of postgres and psycopg2 are you using? For that matter, what version of django?
b. Try running the syncdb command with the "--verbosity=2" option to show all output.
c. Find the SQL that django is generating by running the "manage.py sql <appname>" command. Run the CREATE TABLE statements for your new models in the postgres shell and see what develops.
d. Turn the error logging, statement logging, and server status logging in postgres way up to see if you can catch any particular messages.
In the past, we've usually found that either option b or option c points out the problem.
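For example (the app label "photos" is inferred from the table name in your error output; adjust to your own app), options b and c look like:

python manage.py syncdb --verbosity=2
python manage.py sql photos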
I just experienced this as well, and it turned out to just be a plain old lock on that particular table, unrelated to Django. Once that cleared the sync went through just fine.
Try querying the table that the sync is getting stuck on and make sure that's working correctly first.
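As a minimal sketch of how to look for the offending lock with psycopg2 (connection parameters are placeholders for your own settings):

import psycopg2

# Placeholder connection settings; point these at the stuck database.
conn = psycopg2.connect(dbname="your_db", user="postgres")
cur = conn.cursor()
# Rows in pg_locks with granted = false are sessions waiting on a lock
# held by someone else, which is exactly what a hung CREATE TABLE looks like.
cur.execute(
    "SELECT locktype, relation::regclass, pid, mode, granted "
    "FROM pg_locks WHERE NOT granted"
)
for row in cur.fetchall():
    print(row)
cur.close()
conn.close()

The pid column tells you which session is waiting; joining against pg_stat_activity on that pid shows what it (and the lock holder) are running.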
Strange here too, but simply restarting the PostgreSQL service (or server) solved it. I'd tried manually pasting the table creation code into psql too, but that didn't solve it either (well, no way it could if it was a lock thing), so I just restarted:
systemctl restart postgresql.service
That's on my SUSE box.
I'm not sure whether merely reloading the service/server would lift existing table locks too?
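For what it's worth, a restart lifts the locks because it terminates every backend session; a plain reload only re-reads the configuration and does not. A narrower alternative, once you know the blocking pid (for instance from a query against pg_locks), is to terminate just that backend from psql:

your_db=# SELECT pg_terminate_backend(12345);  -- 12345 is a placeholder pid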
Related
I encountered a weird situation and I need your help.
I am developing a RESTful API using Python 3.7 with Flask and SQLAlchemy. The application is hosted on AWS EC2 and the database on AWS RDS (MySQL).
I also have an application hosted on a Raspberry Pi which calls the API and communicates with the EC2 server.
Sometimes I encounter a long transaction time between the Raspberry Pi and my API server. Most of the time I kill the process on the Raspberry Pi, restart it, and debug to see where things went wrong. However, when I restart the process I see an error message related to my database. When I then check the database, I notice all my tables are gone, nothing left. I am pretty sure there are no DROP TABLE statements in my code, and I have no idea why this occurred.
Has anyone encountered the same situation? If yes, please tell me the root cause and the solution for this issue.
By the way, there is no error message recorded in the MySQL log nor in my REST API.
Thank you and good day.
To my eye this looks like magic and there is too much guessing involved to point a finger properly.
But there is an easy safeguard so it does not happen in the future. A good practice is to separate the admin user (who can do anything, including schema migrations) from the connect user (who can do INSERT, UPDATE, DELETE, and SELECT, but may not run any DDL statements). Only the connect user may be used by the applications. In that case no table drop could be performed even if the application went berserk.
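A minimal sketch of setting that up on MySQL through SQLAlchemy (user names, password, and host are placeholders; run it once as the admin account, and note it assumes a MySQL driver such as mysqlclient is installed):

from sqlalchemy import create_engine, text

# Placeholder admin credentials for the RDS instance.
admin = create_engine("mysql://admin:secret@your-rds-host/your_db")
with admin.connect() as conn:
    # The application user gets DML only: no DROP, ALTER, or CREATE.
    conn.execute(text("CREATE USER 'app'@'%' IDENTIFIED BY 'app_password'"))
    conn.execute(text(
        "GRANT SELECT, INSERT, UPDATE, DELETE ON your_db.* TO 'app'@'%'"
    ))

The application then connects as 'app', so even a runaway code path cannot drop tables.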
Enabling logging might help too: How to log PostgreSQL queries?
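Since the database in this question is MySQL rather than PostgreSQL, the counterpart is the general query log. A sketch (note that on RDS, general_log is normally set through the DB parameter group, and SET GLOBAL needs a sufficiently privileged account, so treat this as illustrative):

from sqlalchemy import create_engine, text

admin = create_engine("mysql://admin:secret@your-rds-host/your_db")
with admin.connect() as conn:
    conn.execute(text("SET GLOBAL general_log = 'ON'"))
    # Write entries to the mysql.general_log table so you can later
    # query who issued the DROP statements.
    conn.execute(text("SET GLOBAL log_output = 'TABLE'"))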
I'm working with a leveldb database (the leveldb wrapper, not plyvel); I ran a few test Put/Get/Delete operations on the database and everything was okay. (If it's relevant, I was accessing the database from 2 separate Python scripts.) Then I tried to make another database within a Python file that was already accessing the first database and I got this error:
leveldb.LevelDBError: IO error: lock ./states/LOCK: already held by process
So far I've tried deleting the database, uninstalling and reinstalling leveldb, deleting the LOCK file within the database, restarting my computer, and whatever this code snippet is. I'm kind of at my wits' end now; any advice you can offer would be greatly appreciated. Thanks.
By design, a leveldb database can only be held open by a single process at a time.
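A minimal sketch of the failure and the workaround, assuming the py-leveldb wrapper from your traceback (the './states' path is taken from the error message):

import leveldb

db = leveldb.LevelDB('./states')
db.Put(b'key', b'value')
print(db.Get(b'key'))

# Opening './states' again here, or from a second script while this one
# is alive, raises "lock ./states/LOCK: already held by process".
# py-leveldb has no explicit close(); drop the last reference instead,
# so the LOCK file is released before anything else opens the database.
del db

db = leveldb.LevelDB('./states')  # now succeeds

If two scripts genuinely need concurrent access, the usual pattern is a single owning process that the others talk to.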
The Flask-SQLAlchemy db migrate command works fine most of the time. However, running db upgrade afterwards sometimes returns errors: for instance, trying to ALTER a SQLite column from NULL to NOT NULL.
When this happens I just get stuck; because I cannot undo the migration, db downgrade doesn't solve the problem either. Most times I have to lose all the data in the DB and then look for other ways to recover some of it.
What is the solution to this?
You need to run:
db stamp head
in case upgrade fails.
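For context, stamp head only records the latest migration revision in the database's alembic_version table without executing any migration code, so it repairs the migration bookkeeping rather than the schema itself. The SQLite ALTER failures can often be avoided up front by enabling Alembic's batch mode; a sketch, assuming Flask-Migrate and placeholder names:

from flask import Flask
from flask_sqlalchemy import SQLAlchemy
from flask_migrate import Migrate

app = Flask(__name__)
app.config["SQLALCHEMY_DATABASE_URI"] = "sqlite:///app.db"
db = SQLAlchemy(app)

# render_as_batch makes Alembic emit SQLite-compatible "batch" migrations
# (create a new table, copy the rows, swap) instead of unsupported ALTERs.
migrate = Migrate(app, db, render_as_batch=True)

Newly generated migration scripts will then use batch_alter_table, which SQLite can execute.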
I have an application in Django 1.6.5. I have a model where I removed one field, added another, and changed a third. When I now open the model in the admin panel, I get this message:
ProgrammingError at /admin/app/subscription/
column app_subscription.enabled does not exist
The command python manage.py syncdb does not work.
Django (hopefully) doesn't modify your database schema unless you explicitly ask for it. The syncdb command works perfectly, but (as documented) it will only create tables that don't yet exist (and are not marked as being managed externally in your models).
So you basically have three options here:
1. manually drop your table and re-run syncdb. This means you will lose all your data, so it's hardly a "solution";
2. manually alter your database schema (an example follows below). You won't lose your data, but you'll have to repeat the same manual operation everywhere your app is deployed. If it's only installed on your local workstation that might be OK; otherwise it's not a reliable, professional, production-level option;
3. use South (which seems to be installed, since you do have a migrate command available).
Note that solution #3 implies that you do create the migration files for your app, as documented here: http://south.readthedocs.org/en/latest/tutorial/part1.html#the-first-migration
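For option #2, the missing column from your traceback could be added by hand; the column type here is an assumption, since it depends on your model definition:

ALTER TABLE app_subscription ADD COLUMN enabled boolean NOT NULL DEFAULT false;

For option #3, the usual South sequence for an app whose tables already exist is (replace "app" with your app label):

python manage.py convert_to_south app
python manage.py schemamigration app --auto
python manage.py migrate app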
It just happened that I faced the same issue with django 1.9.x, where I added a new field in my django app which triggered the same error as you mentioned above.
I logged into the dbshell environment using
python manage.py dbshell  # I know some use ./manage.py
and dropped all of my tables by running the following command within the dbshell:
your_psql=# drop schema public cascade;
This will drop all of your tables (be careful, as you may lose your data; there are ways to keep it, as noted below), and right after executing this command you will get a message telling you that everything was dropped. Then run the following command to create the schema again, otherwise your server will not run:
your_psql=# create schema public;
Then just do the
python manage.py makemigrations # you might not need this, and
python manage.py migrate
And you're ready to go.
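On the "ways to keep the data" point: the simplest insurance is a pg_dump before dropping the schema (the database name is a placeholder):

pg_dump --format=custom your_db > backup.dump
pg_restore --dbname=your_db backup.dump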
I know this answer might be very late but I hope it will help someone.
Cheers
I got this after installing the tagging application. I've installed it via settings.py as well as placing it on the import path so I think I've done everything right there. This is what turns up. You can see my error log here. I've run syncdb, so my database should be synced up.
Have you checked the output of syncdb and actually seen that the table was created? Take a look into your database and check whether the table is there. If not, run syncdb again, and if that doesn't help, create the table by hand (or drop the database and create it again from scratch).
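A quick way to check, assuming PostgreSQL and that the app is the usual django-tagging (whose tables are named tagging_*):

python manage.py dbshell
your_db=# \dt tagging_*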