SQLAlchemy update database schema - Python

I'm using SQLAlchemy for a project. I have a system running with some important data, but I would like to update my schemas to build new features. What's the best practice here?
Can schemas be updated without dropping all tables and recreating the database? (Currently I am running into trouble with
manage.py migrate
not working, i.e. not picking up new changes to table fields.)
Thanks a lot in advance,
C

Related

Django: How can I put initial data into the database by editing a migrations file?

I added models and created a migration file using makemigrations.
I want to load initial data into the database at the same time as migrate.
However, no matter how I edit the migration file, I get an error because the table does not exist in the database before migrate runs.
Help me...
This will help: https://docs.djangoproject.com/en/2.2/topics/migrations/#data-migrations
This is also a nice blog post on creating data migrations, similar to how you create database migrations:
https://simpleisbetterthancomplex.com/tutorial/2017/09/26/how-to-create-django-data-migrations.html
You might want to look into Django data migrations:
https://docs.djangoproject.com/en/2.2/topics/migrations/#data-migrations
In the operations list, run your actual table creation before running the data initialization. Please give a code example if you run into problems.
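A minimal sketch of such a data migration, assuming a hypothetical app myapp with a Color model; the dependency on the migration that creates the table ensures the table exists before the data is loaded:

from django.db import migrations

def load_initial_data(apps, schema_editor):
    # use the historical model, not a direct import of myapp.models
    Color = apps.get_model('myapp', 'Color')
    Color.objects.bulk_create([
        Color(name='red'),
        Color(name='green'),
        Color(name='blue'),
    ])

class Migration(migrations.Migration):
    dependencies = [
        ('myapp', '0001_initial'),  # the migration that creates the table
    ]
    operations = [
        migrations.RunPython(load_initial_data, migrations.RunPython.noop),
    ]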

Importing data from multiple related tables in MySQL to SQLite3 or PostgreSQL

I'm migrating from an ancient language to Django, and I want to keep the data from the old project in the new one.
But the old project uses MySQL, and I'm currently using SQLite3 in development. I have read that PostgreSQL is the most capable, so my first question is: is it better to set up PostgreSQL already in development, or is the transition from SQLite3 to PostgreSQL easy?
As for the data in the old project: I am reworking the table structure from the old MySQL layout, since it has many related tables. Those relations are handled with ForeignKey and ManyToMany fields in Django (the same for SQLite3 and PostgreSQL, I guess).
So I'm thinking about how to transfer the data. It's not really much data, maybe 3,000-5,000 rows.
The problem is that I don't want to keep the same table structure, so a plain import would be a terrible idea. I want the sweet functionality provided by SQLite3/PostgreSQL.
One idea I had was to join all the data and create a nested JSON document for each post, and then define which table each piece belongs to so the relations are kept.
But this is just my guessing, so I'm asking whether there is a proper way to do this.
Thanks!
Better to create the PostgreSQL database now. Then write a Python script that takes the data from the MySQL database and imports it into the PostgreSQL database.
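A minimal sketch of such a script, assuming a hypothetical legacy table old_posts and hypothetical new Django models Author and Post; run it as a custom management command (or after django.setup()) so the ORM is configured. It reads rows over a raw MySQL connection (here with the PyMySQL driver) and recreates them through the ORM, so the foreign key relations are rebuilt rather than copied:

import pymysql
from myapp.models import Author, Post  # hypothetical new Django models

conn = pymysql.connect(host='localhost', user='olduser',
                       password='secret', database='old_db')
with conn.cursor(pymysql.cursors.DictCursor) as cur:
    cur.execute('SELECT title, author_name FROM old_posts')
    for row in cur.fetchall():
        # get_or_create rebuilds the relation instead of copying raw FK ids
        author, _ = Author.objects.get_or_create(name=row['author_name'])
        Post.objects.create(title=row['title'], author=author)
conn.close()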

How do I handle migrations in SQLAlchemy without any framework?

I have seen sqlalchemy-migrate and Alembic, but I do not want to use those frameworks. How can I write the migration script myself? As I understand it, most migrations revolve around altering or dropping existing tables. Additionally, I use SQLAlchemy mostly at the ORM level rather than the schema/core/engine level.
The reason I wish to do it myself is mostly for learning purposes, and to understand how the Django ORM automatically generates a migration script.
You should just use Alembic to execute raw SQL to start with. Then, if you later decide to use more Alembic features, you'll be all set.
For example, after creating a new revision named "drop nick", you can execute raw SQL:
op.execute('ALTER TABLE users DROP COLUMN nickname')
This way Alembic handles the version numbers, but you can, or rather have to, do all the SQL manipulation manually.
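A minimal sketch of what that revision file might look like; the revision identifiers below are placeholders for the values alembic revision -m "drop nick" generates:

"""drop nick"""
from alembic import op

# revision identifiers, used by Alembic (placeholders here)
revision = 'abc123def456'
down_revision = '0123456789ab'
branch_labels = None
depends_on = None

def upgrade():
    # raw SQL; Alembic only records that this revision has been applied
    op.execute('ALTER TABLE users DROP COLUMN nickname')

def downgrade():
    # reverse of upgrade, assuming nickname was a plain VARCHAR
    op.execute('ALTER TABLE users ADD COLUMN nickname VARCHAR(50)')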

Django PostgreSQL query not working

I have a PostgreSQL database with a table foo that I created outside of Django. I used manage.py inspectdb to build the model for table foo. This technique worked fine when I was using MySQL, but with PostgreSQL it is failing miserably. The table is multiple gigabytes, and I built it from a text file with PostgreSQL's COPY.
I can run raw queries on table foo and everything executes as expected.
For example
foo.objects.raw('bar_sql')
executes as expected.
But running queries like:
foo.objects.get(bar=bar)
throws
ProgrammingError column foo.id does not exist LINE 1: SELECT "foo"."id", "foo"."bar1", "all_...
foo doesn't innately have an id field. As I understand it, Django is supposed to create one. Have I somehow subverted this step by creating the tables outside of Django?
Queries run on models whose tables were populated through Django run as expected in all cases.
I'm missing something very basic here and any help would be appreciated.
I'm using Django 1.6 with PostgreSQL 9.3.
Django doesn't modify your existing database tables. It only creates new tables; if you have existing tables, it usually doesn't touch them at all.
"As I understand it, Django is supposed to create one." --> It only adds a primary key to a table when it creates the table. This means you don't need to specify the primary key explicitly in your model, but Django won't do anything to an existing table.
So if, for example, you later decide to add fields to your models, you have to update your database manually.
What you need to do in your case is to make sure, by manual database administration, that your table has a primary key, and that the name of the primary key column is id (I am not sure the name is strictly necessary, but it is safer). So use a database administration tool, modify your table, add the primary key, and name it id. Then it should start working.
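A hedged sketch of two options here. Option 1 adds a surrogate key directly in PostgreSQL, as the answer suggests; option 2 is an alternative that instead tells Django to treat an existing column as the primary key (here bar1 is assumed to be unique), matching how inspectdb emits unmanaged models:

# Option 1: add a surrogate key directly in PostgreSQL (run in psql):
#   ALTER TABLE foo ADD COLUMN id SERIAL PRIMARY KEY;

# Option 2: mark an existing unique column as the primary key in the model:
from django.db import models

class Foo(models.Model):
    bar1 = models.TextField(primary_key=True)  # assuming bar1 is unique
    # ... remaining fields as generated by inspectdb ...

    class Meta:
        managed = False   # Django will not create or alter this table
        db_table = 'foo'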

Django -- Multiple Databases -- Data Migration

I am having difficulty understanding the Django multiple databases documentation. Below is what I am trying to achieve.
I have to migrate some data from one database to another in Python.
Both databases have the same structure, so I only have one models file.
What I need to do in code is select data from some tables of one database and insert it into tables of another database.
How can I do that, i.e., select in a model query which database to use? Any suggestions and recommendations would also be appreciated.
Thanks
The documentation here is quite clear: https://docs.djangoproject.com/en/1.6/topics/db/multi-db/#manually-selecting-a-database
Assuming you have "db_for_read" and "db_for_write" configured in your settings, for reading:
YourModel.objects.using("db_for_read").all()
For writing, per instance:
your_model_instance.save(using="db_for_write")
or in batch:
YourModel.objects.using("db_for_write").bulk_create(
    [your_model_instance1, your_model_instance2, ...]
)
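Putting the two together, a minimal sketch that copies every row of a model from the source database into the target, assuming the aliases "db_for_read" and "db_for_write" exist in DATABASES:

from myapp.models import YourModel  # hypothetical shared models file

# fetch from the source database, then save into the target;
# save() with the pk set updates a matching row in the target if one
# already exists, otherwise it inserts a new one
for row in YourModel.objects.using("db_for_read").all():
    row.save(using="db_for_write")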
