Django ManyToMany through with multiple databases - python

TL;DR: Django does not include database names in its SQL queries. Can I somehow force it to do this, or is there a workaround?
The long version:
I have two legacy MySQL databases (note: I have no influence on the DB layout) for which I'm creating a read-only API using DRF on Django 1.11 and Python 3.6.
I'm working around the referential integrity limitation of MyISAM DBs by using the SpanningForeignKey field suggested here: https://stackoverflow.com/a/32078727/7933618
I'm trying to connect a table from DB1 to a table from DB2 via a ManyToMany through table on DB1. That's the query Django is creating:
SELECT "table_b"."id" FROM "table_b" INNER JOIN "throughtable" ON ("table_b"."id" = "throughtable"."b_id") WHERE "throughtable"."b_id" = 12345
Which of course gives me an error, "Table 'DB2.throughtable' doesn't exist", because throughtable is on DB1, and I have no idea how to force Django to prefix the tables with the database name. The query should be:
SELECT table_b.id FROM DB2.table_b INNER JOIN DB1.throughtable ON (table_b.id = throughtable.b_id) WHERE throughtable.b_id = 12345
Models for app1 db1_app/models.py: (DB1)
class TableA(models.Model):
    id = models.AutoField(primary_key=True)
    # some other fields
    relations = models.ManyToManyField(TableB, through='Throughtable')

class Throughtable(models.Model):
    id = models.AutoField(primary_key=True)
    a_id = models.ForeignKey(TableA, to_field='id')
    b_id = SpanningForeignKey(TableB, db_constraint=False, to_field='id')
Models for app2 db2_app/models.py: (DB2)
class TableB(models.Model):
    id = models.AutoField(primary_key=True)
    # some other fields
Database router:
def db_for_read(self, model, **hints):
    if model._meta.app_label == 'db1_app':
        return 'DB1'
    if model._meta.app_label == 'db2_app':
        return 'DB2'
    return None
Can I force Django to include the database name in the query? Or is there any workaround for this?

A solution exists for Django 1.6+ (including 1.11) with the MySQL and SQLite backends: use ForeignKey.db_constraint=False together with an explicit Meta.db_table. If the database name and table name are quoted, by ` for MySQL or by " for other databases (e.g. db_table = '"db2"."table2"'), Django does not quote the name again and the dot stays outside the quotes, so the ORM compiles valid queries. A similar but better solution is db_table = 'db2"."table2', which not only allows joins but is also one step closer to cross-database constraint migration.
db2_name = settings.DATABASES['db2']['NAME']

class Table1(models.Model):
    fk = models.ForeignKey('Table2', on_delete=models.DO_NOTHING, db_constraint=False)

class Table2(models.Model):
    name = models.CharField(max_length=10)
    ....

    class Meta:
        db_table = '`%s`.`table2`' % db2_name  # for MySQL
        # db_table = '"db2"."table2"'          # for all other backends
        managed = False
Query set:
>>> qs = Table2.objects.all()
>>> str(qs.query)
'SELECT "db2"."table2"."id" FROM "db2"."table2"'
>>> qs = Table1.objects.filter(fk__name='B')
>>> str(qs.query)
SELECT "app_table1"."id"
FROM "app_table1"
INNER JOIN "db2"."app_table2" ON ( "app_table1"."fk_id" = "db2"."app_table2"."id" )
WHERE "db2"."app_table2"."name" = 'B'
Query parsing like that is supported by all database backends in Django; the other necessary steps, however, must be discussed individually per backend. I'm trying to answer more generally because I found a similar important question.
The db_constraint=False option is necessary for migrations, because Django cannot create the referential integrity constraint
ALTER TABLE table1 ADD FOREIGN KEY (fk_id) REFERENCES db2.table2 (id);
but it can be created manually for MySQL.
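For MySQL, a minimal sketch of adding that constraint by hand in a data migration (the table, column and constraint names here are illustrative, and this assumes a storage engine that actually enforces foreign keys, i.e. InnoDB rather than MyISAM):
from django.db import migrations

class Migration(migrations.Migration):
    dependencies = [
        ('app1', '0001_initial'),  # hypothetical predecessor migration
    ]
    operations = [
        migrations.RunSQL(
            sql="ALTER TABLE table1 ADD CONSTRAINT fk_table1_table2 "
                "FOREIGN KEY (fk_id) REFERENCES db2.table2 (id)",
            reverse_sql="ALTER TABLE table1 DROP FOREIGN KEY fk_table1_table2",
        ),
    ]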
A question for particular backends is whether another database can be attached to the default one at run-time, and whether cross-database foreign keys are supported. These models are also writable. The indirectly connected database should be used as a legacy database with managed=False, because the django_migrations table used for migration tracking is created only in the directly connected database and should describe only tables in that same database. Indexes for foreign keys can, however, be created automatically on the managed side if the database system supports such indexes.
SQLite: the second database has to be attached to the default SQLite database at run-time (see the answer to SQLite - How do you join tables from different databases?), best done via the connection_created signal:
from django.db.backends.signals import connection_created

def signal_handler(sender, connection, **kwargs):
    if connection.alias == 'default' and connection.vendor == 'sqlite':
        cur = connection.cursor()
        cur.execute("attach '%s' as db2" % db2_name)
        # cur.execute("PRAGMA foreign_keys = ON")  # optional

connection_created.connect(signal_handler)
Then no database router is needed, of course, and a normal django.db.models.ForeignKey can be used with db_constraint=False. An advantage is that db_table is not necessary if the table names are unique across databases.
In MySQL, foreign keys between different databases are easy: all commands like SELECT, INSERT and DELETE support qualified database names without attaching anything beforehand.
This question was about legacy databases. I have however some interesting results also with migrations.

I have a similar setup with PostgreSQL, using search_path to make cross-schema references possible in Django (a schema in Postgres is the analogue of a database in MySQL). Unfortunately, MySQL doesn't seem to have such a mechanism.
However, you might try your luck with views: create views in one database that reference the other database, and select your data through them. I think it's the best option, since you want your data read-only anyway.
It's not a perfect solution, however; executing raw queries might be more useful in some cases.
UPD: Providing more details about my setup with PostgreSQL (as requested by the bounty later). I couldn't find anything like search_path in the MySQL documentation.
Quick intro
PostgreSQL has schemas. They are synonymous with MySQL databases: if you are a MySQL user, mentally replace the word "schema" with the word "database". Queries can join tables between schemas, create foreign keys across them, etc. Each user (role) has a search_path:
This variable [search_path] specifies the order in which schemas are searched when
an object (table, data type, function, etc.) is referenced by a simple
name with no schema specified.
Pay special attention to "no schema specified", because that's exactly what Django does.
Example: Legacy databases
Let's say we have a couple of legacy schemas, and since we are not allowed to modify them, we also want one new schema to store the N:M relation in.
old1 is the first legacy schema, it has old1_table (which is also the model name, for convenience sake)
old2 is the second legacy schema, it has old2_table
django_schema is a new one, it will store the required NM relation
All we need to do is:
alter role django_user set search_path = django_schema, old1, old2;
That's it. Yes, that simple. Django has no schema ("database") names specified anywhere; it actually has no idea what is going on, as everything is managed by PostgreSQL behind the scenes. Since django_schema is first in the list, new tables will be created there. So the following code
class Throughtable(models.Model):
    a_id = models.ForeignKey('old1_table', ...)
    b_id = models.ForeignKey('old2_table', ...)
will result in a migration that creates a throughtable table referencing old1_table and old2_table.
Problems: if you happen to have several tables with the same name, you will either need to rename them or still trick Django into using a dot inside table names.
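If altering the role is not an option, the same search_path can also be set per connection from Django's settings; a minimal sketch, assuming the postgresql backend with psycopg2 (the '-c search_path=...' string is a standard libpq startup option; names are illustrative):
# settings.py
DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.postgresql',
        'NAME': 'mydb',           # illustrative
        'USER': 'django_user',    # illustrative
        'OPTIONS': {
            # searched left to right; new tables land in django_schema
            'options': '-c search_path=django_schema,old1,old2',
        },
    },
}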

Django does have the ability to work with multiple databases. See https://docs.djangoproject.com/en/1.11/topics/db/multi-db/.
You can also use raw SQL queries in Django. See https://docs.djangoproject.com/en/1.11/topics/db/sql/.
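For the cross-database join from the question, the raw route might look like the sketch below. It assumes both databases live on the same MySQL server, so the 'DB1' connection can reference DB2 by name; the SQL carries the explicit database prefixes that the ORM will not generate by itself:
# Maps rows onto TableB instances; raw() requires the primary key
# ("id") to be present in the selected columns.
rows = TableB.objects.raw(
    "SELECT table_b.id FROM DB2.table_b "
    "INNER JOIN DB1.throughtable ON table_b.id = throughtable.b_id "
    "WHERE throughtable.a_id = %s",
    [12345],
).using('DB1')

for b in rows:
    print(b.id)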

Related

sqlalchemy.exc.ProgrammingError: (psycopg2.errors.UndefinedTable) relation XXXXXXXX does not exist - Error using SQLAlchemy ORM [duplicate]

We host a multitenant app with SQLAlchemy and postgres. I am looking at moving from having separate databases for each tenant to a single database with multiple schemas. Does SQLAlchemy support this natively? I basically just want every query that comes out to be prefixed with a predetermined schema... e.g
select * from client1.users
instead of just
select * from users
Note that I want to switch the schema for all tables in a particular request/set of requests, not just a single table here and there.
I imagine that this could be accomplished with a custom query class as well but I can't imagine that something hasn't been done in this vein already.
Well, there are a few ways to go at this, and it depends on how your app is structured. Here is the most basic way:
meta = MetaData(schema="client1")
If the way your app runs is one "client" at a time within the whole application, you're done.
But what may be wrong with that here is that every Table from that MetaData is on that schema. If you want one application to support multiple clients simultaneously (which is usually what "multitenant" means), this would be unwieldy, since you'd need to create a copy of the MetaData and duplicate all the mappings for each client. This approach can be done if you really want to; the way it works is that you'd access each client with a particular mapped class, like:
client1_foo = Client1Foo()
and in that case you'd be working with the "entity name" recipe at http://www.sqlalchemy.org/trac/wiki/UsageRecipes/EntityName in conjunction with sometable.tometadata() (see http://docs.sqlalchemy.org/en/latest/core/metadata.html#sqlalchemy.schema.Table.tometadata).
So let's say the way it really works is multiple clients within the app, but only one at a time per thread. Well actually, the easiest way to do that in Postgresql would be to set the search path when you start working with a connection:
# start request
# new session
sess = Session()
# set the search path
sess.execute("SET search_path TO client1")
# do stuff with session
# close it. if you're using connection pooling, the
# search path is still set up there, so you might want to
# revert it first
sess.close()
The final approach would be to override the compiler using the @compiles extension to stick the "schema" name into statements. This is doable, but would be tricky, as there's no consistent hook for everywhere "Table" is generated. Your best bet is probably setting the search path on each request.
If you want to do this at the connection string level then use the following:
dbschema = 'schema1,schema2,public'  # searches left-to-right
engine = create_engine(
    'postgresql+psycopg2://dbuser@dbhost:5432/dbname',
    connect_args={'options': '-csearch_path={}'.format(dbschema)})
But, a better solution for a multi-client (multi-tenant) application is to configure a different db user for each client, and configure the relevant search_path for each user:
alter role user1 set search_path = "$user", public
It can now be done using the schema translation map in SQLAlchemy 1.1.
class User(Base):
    __tablename__ = 'user'
    id = Column(Integer, primary_key=True)
    __table_args__ = {'schema': 'per_user'}
On each request, the Session can be set up to refer to a different schema each time:
session = Session()
session.connection(execution_options={
    "schema_translate_map": {"per_user": "account_one"}})
# will query from the ``account_one.user`` table
session.query(User).get(5)
Referred it from the SO answer here.
Link to the Sqlalchemy docs.
You may be able to manage this using the SQLAlchemy event interface. Before you create the first connection, set up a listener along the lines of:
from sqlalchemy import event
from sqlalchemy.pool import Pool

def set_search_path(db_conn, conn_proxy):
    print("Setting search path...")
    db_conn.cursor().execute('set search_path=client9, public')

event.listen(Pool, 'connect', set_search_path)
Obviously this needs to be executed before the first connection is created (e.g. in the application initialization).
The problem I see with the session.execute(...) solution is that it executes on the specific connection currently used by the session, and I cannot see anything in SQLAlchemy that guarantees the session will keep using the same connection indefinitely. If it picks up a new connection from the connection pool, it will lose the search_path setting.
I need an approach like this in order to set the application search_path, which is different from the database or user search path. I'd like to be able to set it in the engine configuration but cannot see a way to do that; using the connect event does work. I'd be interested in a simpler solution if anyone has one.
On the other hand, if you want to handle multiple clients within one application, this won't work, and I guess the session.execute(...) approach may be the best one.
From SQLAlchemy 1.1, this can be done easily using schema_translate_map:
https://docs.sqlalchemy.org/en/11/changelog/migration_11.html#multi-tenancy-schema-translation-for-table-objects
connection = engine.connect().execution_options(
    schema_translate_map={None: "user_schema_one"})
result = connection.execute(user_table.select())
Here is a detailed review of all the options available:
https://github.com/sqlalchemy/sqlalchemy/issues/4081
It's possible to solve this at the DB level. Assuming you have a dedicated user for your application who is granted privileges on the schema, just set search_path for that user:
ALTER ROLE your_user IN DATABASE your_db SET search_path TO your_schema;
There is a schema property in Table definitions.
I'm not sure if it works, but you can try:
Table('users', metadata, schema='client1', ...)
I tried:
con.execute('SET search_path TO {schema}'.format(schema='myschema'))
and that didn't work for me. I then used the schema= parameter in the init function:
# We then bind the connection to MetaData()
meta = sqlalchemy.MetaData(bind=con, reflect=True, schema='myschema')
Then I qualified the table with the schema name
house_table = meta.tables['myschema.houses']
and everything worked.
You can just change your search_path. Issue
set search_path=client9;
at the start of your session and then just keep your tables unqualified.
You can also set a default search_path at a per-database or per-user level. I'd be tempted to set it to an empty schema by default so you can easily catch any failure to set it.
http://www.postgresql.org/docs/current/static/ddl-schemas.html#DDL-SCHEMAS-PATH
I found that none of the above answers worked with SQLAlchemy 1.2.4. This is the solution that worked for me:
from sqlalchemy import MetaData, Table
from sqlalchemy import create_engine

def table_schema_to_psql(schema_name, table_name):
    conn_str = 'postgresql://{username}:{password}@localhost:5432/{database}'.format(
        username='<username>',
        password='<password>',
        database='<database name>'
    )
    engine = create_engine(conn_str)
    with engine.connect() as conn:
        conn.execute('SET search_path TO {schema}'.format(schema=schema_name))
        meta = MetaData()
        table_data = Table(table_name, meta,
                           autoload=True,
                           autoload_with=conn,
                           postgresql_ignore_search_path=True)
        for column in table_data.columns:
            print(column.name)
I use the following pattern.
engine = sqlalchemy.create_engine("postgresql://postgres:mypass@172.17.0.2/mydb")

for schema in ['schema1', 'schema2']:
    engine.execute(CreateSchema(schema))
    tmp_engine = engine.execution_options(schema_translate_map={None: schema})
    Base.metadata.create_all(tmp_engine)
For anyone coming here looking for a more general solution that can support MySQL or Oracle, please refer to this guide.
Basically, it sets the schema for the engine when the first connection to the database is made:
engine = create_engine("engine_url")

@event.listens_for(engine, "connect", insert=True)
def set_current_schema(dbapi_connection, connection_record):
    cursor_obj = dbapi_connection.cursor()
    cursor_obj.execute(f"USE {schema_name}")  # schema_name: the schema to switch to
    cursor_obj.close()
The statement to execute is specific to the database you are using: USE works for MySQL, while PostgreSQL or Oracle, for example, need different statements.
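For instance, a hedged PostgreSQL variant of the same listener (the engine URL and schema name are placeholders; Postgres has no USE, so the equivalent is setting search_path):
from sqlalchemy import create_engine, event

engine = create_engine("postgresql+psycopg2://user:pass@localhost/dbname")  # placeholder URL

@event.listens_for(engine, "connect", insert=True)
def set_current_schema(dbapi_connection, connection_record):
    cursor_obj = dbapi_connection.cursor()
    cursor_obj.execute("SET search_path TO my_schema")  # placeholder schema
    cursor_obj.close()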

Use "on_conflict_do_update()" with sqlalchemy ORM

I am currently using the SQLAlchemy ORM for my DB operations. Now I have a SQL command that requires ON CONFLICT (id) DO UPDATE. The method on_conflict_do_update() seems to be the correct one to use, but the post here says the code has to switch to SQLAlchemy Core and the high-level ORM functionalities are missing. I am confused by this statement, since I think code like the demo below can achieve what I want while keeping the functionality of the SQLAlchemy ORM.
class Foo(Base):
    ...
    bar = Column(Integer)

foo = Foo(bar=1)
insert_stmt = insert(Foo).values(bar=foo.bar)
do_update_stmt = insert_stmt.on_conflict_do_update(
    set_=dict(
        bar=insert_stmt.excluded.bar,
    )
)
session.execute(do_update_stmt)
I haven't tested it on my project, since it would require a huge amount of modification. Can I ask if this is the correct way to handle ON CONFLICT (id) DO UPDATE with the SQLAlchemy ORM?
As noted in the documentation, the constraint= argument is
The name of a unique or exclusion constraint on the table, or the constraint object itself if it has a .name attribute.
so we need to pass the name of the PK constraint to .on_conflict_do_update().
We can get the PK constraint name via the inspection interface:
from sqlalchemy import inspect
from sqlalchemy.dialects.postgresql import insert
from sqlalchemy.orm import Session

insp = inspect(engine)
pk_constraint_name = insp.get_pk_constraint(Foo.__tablename__)["name"]
print(pk_constraint_name)  # tbl_foo_pkey

new_bar = 123
insert_stmt = insert(Foo).values(id=12345, bar=new_bar)
do_update_stmt = insert_stmt.on_conflict_do_update(
    constraint=pk_constraint_name, set_=dict(bar=new_bar)
)
with Session(engine) as session, session.begin():
    session.execute(do_update_stmt)
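Alternatively, on_conflict_do_update() also accepts index_elements, which names the conflict target by column and avoids looking up the constraint name; a minimal sketch, assuming id is the primary-key column:
insert_stmt = insert(Foo).values(id=12345, bar=new_bar)
do_update_stmt = insert_stmt.on_conflict_do_update(
    index_elements=['id'],  # conflict target given by column, not constraint name
    set_=dict(bar=new_bar),
)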

Deleting objects with unique-together constraints on SQL Server

Consider the following Django ORM example:
class A(models.Model):
    pass

class B(models.Model):
    a = models.ForeignKey('A', on_delete=models.CASCADE, null=True)
    b_key = models.SomeOtherField(doesnt_really_matter=True)

    class Meta:
        unique_together = (('a', 'b_key'),)
Now let's say I delete an instance of A that's linked to an instance of B. Normally this is no big deal: Django can delete the A object, set B.a = NULL, then delete B after the fact. This is usually fine because most databases don't consider NULL values in unique constraints; even if you have b1 = B(a=a1, b_key='non-unique') and b2 = B(a=a2, b_key='non-unique') and you delete both a1 and a2, that's not a problem, because (NULL, 'non-unique') != (NULL, 'non-unique'), since NULL != NULL.
However, that's not the case with SQL Server, which brilliantly defines NULL == NULL, breaking this logic. The workaround, if you're writing raw SQL, is to use WHERE key IS NOT NULL when defining the unique constraint, but that DDL is generated for me by Django. While I could manually create the RunSQL migrations needed to drop all the original unique constraints and add the new filtered ones, it's definitely a shortcoming of the ORM or driver (I'm using pyodbc + django-pyodbc-azure). Is there some way to either coax Django into generating filtered unique constraints in the first place, force it to delete tables in a certain order to circumvent this issue altogether, or apply some other general fix to SQL Server?
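To illustrate the RunSQL route mentioned above, a hedged sketch (the table and constraint names are illustrative; check the real ones in the migration SQL Django generates before using anything like this):
from django.db import migrations

class Migration(migrations.Migration):
    dependencies = [
        ('app', '0002_previous'),  # hypothetical predecessor migration
    ]
    operations = [
        migrations.RunSQL(
            sql=[
                # drop the unfiltered constraint Django generated
                "ALTER TABLE app_b DROP CONSTRAINT app_b_a_id_b_key_uniq",
                # recreate it as a filtered unique index that ignores NULLs
                "CREATE UNIQUE INDEX app_b_a_id_b_key_uniq "
                "ON app_b (a_id, b_key) WHERE a_id IS NOT NULL",
            ],
            reverse_sql=migrations.RunSQL.noop,
        ),
    ]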

How to target PostgreSQL schema in SQLAlchemy DB URI? [duplicate]

(The question and answers are identical to those under "sqlalchemy.exc.ProgrammingError: (psycopg2.errors.UndefinedTable) relation XXXXXXXX does not exist" above.)

Updating a model field according to the count of a "many to many" field

I have a Django model called Friend that contains a many-to-many field called friends, through another model called FriendshipInfo. For performance reasons I decided to add a field that holds the number of friends each person has. Now, in the migration scripts, I need to update the existing friends in my DB. This is how I did it:
def forward(...):
    # ... adding the new count field ...
    for person in Friend.objects.all():
        person.friends_count = len(person.friends.all())
        person.save()
I was wondering if there is any way to do this in a much more efficient way (bulk update somehow?)
Tech info:
I am using Python 2.7
I am using Django 1.6
For migrations I'm using South
I was tempted to use the extra queryset method to grab the count of friends and bulk update your Friend objects, like:
def forward(...):
    # adding the new count field
    Friend.objects.extra(select={
        'friends_number': 'SELECT COUNT(*) FROM <your_many_to_many_table_name> '
                          'WHERE <your_many_to_many_table_name>.<your_FriendshipInfo_related_name> = '
                          '<your_Friend_table_name>.id'
    }).update(friends_count=F('friends_number'))
But, by the look of things, it is not possible. However, you can run custom SQL through a raw cursor, with an UPDATE ... FROM COUNT query:
from django.db import connection

def forward(...):
    # adding the new count field
    cursor = connection.cursor()
    cursor.execute(
        'UPDATE <your_Friend_table_name> '
        'SET friends_count = '
        '(SELECT COUNT(*) FROM <your_many_to_many_table_name> '
        'WHERE <your_many_to_many_table_name>.<your_FriendshipInfo_related_name> = '
        '<your_Friend_table_name>.id)')
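If you'd rather stay in the ORM, a middle ground is to let the database compute all the counts in one SELECT via annotate() and then save each row; a sketch (annotate with Count works on Django 1.6), which avoids the per-person COUNT query of the original loop but still issues one UPDATE per row:
from django.db.models import Count

# one SELECT with a COUNT per row, then plain saves
for person in Friend.objects.annotate(n=Count('friends')):
    person.friends_count = person.n
    person.save()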
The right way is to do the data migration in a migration file (.py); there you can put the SQL query without a problem, using the migrations.RunSQL operation.
Here is an example:
class Migration(migrations.Migration):
    dependencies = [
        ('procesos', '0020_auto_20150703_1656'),
    ]
    operations = [
        migrations.RunSQL("UPDATE procesos_busquedainmueble SET tipo_inmueble_id=(filtros::json->>'tipo_inmueble')::int;"),
    ]
