foreign keys between databases with peewee - python

I have two legacy MySQL databases for which I'd like to define an ORM class model in peewee (Python). Specifically, one database holds front-end data and the other back-end data, and some tables in one database are linked to tables in the other with foreign keys.
Example code (not the actual code; inspired by the example in the peewee quick start):
import peewee

frontend = peewee.MySQLDatabase('frontend', host=host, user=user, passwd=passwd)
backend = peewee.MySQLDatabase('backend', host=host, user=user, passwd=passwd)

class User(peewee.Model):
    name = peewee.CharField()

    class Meta:
        database = frontend

class Tweet(peewee.Model):
    user = peewee.ForeignKeyField(User, related_name='tweets')
    content = peewee.TextField()

    class Meta:
        database = backend
Going through the docs, I couldn't find a direct way to link tables across databases with foreign keys. I've also tried generating a peewee model using the supplied pwiz.py script, which worked on the front-end database but not on the back-end (probably because the back-end refers to the front-end and not vice versa). Nevertheless, I'd like to ask whether such a model spanning two databases is possible.

Peewee does not support foreign keys across multiple databases.
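A common workaround, sketched below under the assumption that both databases live on the same server (reusing the connection variables from the question), is to store the referenced id in a plain IntegerField and resolve the relation by hand; the user property here is an illustrative helper, not a peewee feature:

import peewee

frontend = peewee.MySQLDatabase('frontend', host=host, user=user, passwd=passwd)
backend = peewee.MySQLDatabase('backend', host=host, user=user, passwd=passwd)

class User(peewee.Model):
    name = peewee.CharField()

    class Meta:
        database = frontend

class Tweet(peewee.Model):
    # Plain indexed integer instead of a ForeignKeyField, since peewee
    # cannot declare a foreign key into another database.
    user_id = peewee.IntegerField(index=True)
    content = peewee.TextField()

    class Meta:
        database = backend

    @property
    def user(self):
        # Resolve the cross-database reference manually.
        return User.get(User.id == self.user_id)

Going the other way, a user's tweets become Tweet.select().where(Tweet.user_id == user.id). Note that with this approach the database does not enforce referential integrity between the two schemas; the application has to.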

Related

Is there a way to share a common table of database between two django projects without faking it?

I am trying to use the same table across two different projects and want changes to be reflected in both.
Let's assume there is a student table created by PROJECT1-APP1 of Django. I want to use that same student table in PROJECT2-APP1. I have searched the internet, and the usual recommendation is to import the model from one project into the other, but as I understand it that is just "faking" it, and some functionality like BigAutoField and foreign keys would not work with that solution. Is there a way to use the same database in two different projects without messing up the core operation?
EDIT:
The other questions just show how to connect to the database, which I already know how to do; the problem occurs when I try to use the same tables.
You may need to create a model in the other project and explicitly specify the database table in its class Meta section, similar to the one below:
from django.db import models

class Person(models.Model):
    id = models.IntegerField(primary_key=True)
    first_name = models.CharField(max_length=70)

    class Meta:
        db_table = 'table_persons'
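If the shared table lives in a database other than the second project's default one, that database also needs an entry in DATABASES, and queries need a router or an explicit alias. A minimal sketch, assuming a hypothetical alias 'shared' and the Person model above:

# Read a row of the shared table from the second project;
# 'shared' is a hypothetical entry in settings.DATABASES.
person = Person.objects.using('shared').get(pk=1)

Marking the model managed = False in the project that doesn't own the table also keeps its migrations from trying to create it again.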

Using Django models with an already created DB table

I have a DB (let's call it data_db) containing some tables. I want to create a dashboard to present data from data_db, so I created a Django project for that purpose.
When I want to get data from one of the tables in data_db, is there a way to do it with models? (I want Django's security management with the DB.) Or do I have to use raw SQL?
One note: there is existing data in data_db's tables, and I don't want to create a new table with the same exact data in the default Django DB. I'm also using two DBs, Django's default and data_db, and I created a database router for data_db to prevent Django from creating all its tables there.
Thanks.
Yes. In fact Django can even help you create the models. Models that you do not migrate with the help of Django are unmanaged models. These have a managed = False attribute in the Meta class, so something like:
from django.db import models

class MyModel(models.Model):
    # … fields …

    class Meta:
        managed = False
If you thus write these unmanaged models, you can make queries with the Django ORM without Django trying to create or alter the tables behind these models.
Of course, writing models that match the existing database by hand is cumbersome. Therefore Django can often construct the models from the tables themselves: you can generate them with the inspectdb command [Django-doc].
You can print the generated models to stdout with:
python3 manage.py inspectdb
or you can save these to a file through I/O redirection:
python3 manage.py inspectdb > app_name/models.py
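Since the question keeps data_db as a second connection behind a router, note that inspectdb can also introspect a specific database alias (assuming data_db is declared in DATABASES):
python3 manage.py inspectdb --database=data_db > app_name/models.py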

Foreign key to a table not managed by SQLAlchemy

I want to create a table using SQLAlchemy that has a foreign key to a table not managed by SQLAlchemy. The reason is that this database is used by many different applications; the other applications do need this mapping, but the Python application doesn't. Is there any way to let SQLAlchemy proceed with creating a table even without declaring the other models (there are about a dozen other models I'd have to declare just to make SQLAlchemy's validation happy)?
I've tried:
__table_args__ = {'extend_existing': True}
but this doesn't make a difference.
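One possible way around this, sketched here with hypothetical users/orders tables, is to declare a minimal stub Table for just the referenced column, so the ForeignKey can resolve, and then to exclude the stub from table creation:

from sqlalchemy import Column, ForeignKey, Integer, Table, create_engine
from sqlalchemy.orm import declarative_base

Base = declarative_base()

# Stub describing only the referenced column of the externally managed
# table; this application never creates or alters it.
Table('users', Base.metadata, Column('id', Integer, primary_key=True))

class Order(Base):
    __tablename__ = 'orders'
    id = Column(Integer, primary_key=True)
    user_id = Column(Integer, ForeignKey('users.id'))

engine = create_engine('mysql://user:passwd@host/db')  # hypothetical URL
# Create only the tables this application owns, leaving the stub alone.
Base.metadata.create_all(engine, tables=[Order.__table__])

The stub only needs the columns that are actually referenced, so the dozen other models never have to be declared.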

Django MySQL schema selection

In the same MySQL server we have a database for each client. These databases share the same table structure. We were able to create a model for the table Foo that looks like this:
from django.db import models

class Foo(models.Model):
    id = models.AutoField(primary_key=True)
    bar = models.CharField(max_length=50)

    class Meta:
        managed = False
        db_table = 'foobar'
Our Django project needs to manage all of our clients. As all of them have the same structure, we would also like to share the Foo model. At the beginning we were able to handle this by defining each client's database in the project settings and using routers. At the moment we have too many clients to define them all in settings.py.
Is there any reasonable way to tell the model which schema to use each time we use it?
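One possible direction, which leans on Django internals that are not officially documented, is to register client databases at runtime and route every query explicitly with .using(). A sketch, with hypothetical host and credentials:

from django.db import connections

def foo_for_client(client_db):
    # Register the client's database on the fly instead of listing
    # every client in settings.py (undocumented but long-standing hack).
    if client_db not in connections.databases:
        connections.databases[client_db] = {
            'ENGINE': 'django.db.backends.mysql',
            'NAME': client_db,
            'HOST': 'db.example.com',  # hypothetical
            'USER': 'app',             # hypothetical
            'PASSWORD': 'secret',      # hypothetical
            # Defaults that settings.py would normally provide:
            'ATOMIC_REQUESTS': False,
            'AUTOCOMMIT': True,
            'CONN_MAX_AGE': 0,
            'OPTIONS': {},
            'TIME_ZONE': None,
            'TEST': {},
        }
    return Foo.objects.using(client_db)

Since all client schemas share one table structure, the single unmanaged Foo model from the question can serve every alias.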

Using dynamic models in Django framework

I am currently using the Django framework, including its Models mechanism, to abstract the database schema declaration and general DB access, which is working fine for most scenarios.
However, my application also requires tables to be created and accessed dynamically at runtime, which, as far as I can see, is not supported by Django out of the box.
These tables usually have an identical structure and can basically be abstracted by the same Model class, but Django doesn't let you change the underlying db_table of a model at query time, as it is declared on the Model class and not on the Manager.
My solution is to run the following process whenever I need a new table to be created, populated and accessed:
Create and populate the table using raw SQL
Add indexes to the table using raw SQL
When I need to access the table (through the Django QuerySet API), declare a new type dynamically and return it as the model for the query, using this code:
table_name = ...  # name of the table created by raw SQL
model_name = '%d_%s' % (connection.tenant.id, table_name)
try:
    model = apps.get_registered_model('myapp', model_name)
    return model
except LookupError:
    pass
logger.debug("no model exists for model %s, creating one" % model_name)

class Meta:
    db_table = table_name
    managed = False

attrs = {
    'field1': models.CharField(max_length=200),
    'field2': models.CharField(max_length=200),
    'field3': models.CharField(max_length=200),
    '__module__': 'myapp.models',
    'Meta': Meta,
}
model = type(str(model_name), (models.Model,), attrs)
return model
Note that I do check whether the model is already registered in Django and use the existing model if it is. The model name is always unique for each table. Since I'm using multiple tenants, the tenant id is also part of the model name to avoid conflicts with similar tables declared in different schemas.
In case it's not clear: the tables created dynamically will and should be persisted permanently for future sessions.
This solution works fine for me so far.
However, the application will need to support a large number of these tables, i.e. 10,000 to 100,000 such tables (and corresponding model classes), with up to a million rows per table.
Assuming the underlying DB is fine with this load, my questions are:
Do you see any problem with this solution, with or without regard to the expected scale?
Does anybody have a better solution for this scenario?
Thanks.
There is a wiki page on creating models dynamically, although it has been a while since it was last updated:
DynamicModels Django
There are also a few apps that are designed for this use case, but I don't think any of them is being actively maintained:
Django Packages: Dynamic models
I understand that if you are already committed to Django this isn't very helpful, but this is a use case for which Django isn't really good. It might be more costly to fight against the abstractions provided by Django's model layer than to just use psycopg2 or whatever other adapter is appropriate for your data.
Depending on what sort of operations you are going to perform on your data, it may also be more reasonable to use a single model with an indexed field that lets you distinguish which logical table a row belongs to, and then shard the data by that column.
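A minimal sketch of that single-model alternative (model and field names are hypothetical):

from django.db import models

class Row(models.Model):
    # Indexed discriminator: which logical table this row belongs to.
    table_key = models.CharField(max_length=100, db_index=True)
    field1 = models.CharField(max_length=200)
    field2 = models.CharField(max_length=200)
    field3 = models.CharField(max_length=200)

# One logical "table" is then just a filtered queryset:
# Row.objects.filter(table_key='42_mytable')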
If you still need to do this, the general idea would be:
Create a metaclass that extends Django's ModelBase. You would use this metaclass as a factory for your actual models.
Consider the issues mentioned on that wiki page, like circumventing the app_label issue.
Generate and execute the SQL for the creation of the model, as also shown on the wiki page.
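A rough sketch of those steps; it sidesteps a hand-written metaclass by calling type() on models.Model (which invokes Django's ModelBase under the hood) and uses the schema editor instead of raw SQL. Names are hypothetical, and a real implementation would first check apps.get_registered_model as the question's code does:

from django.db import connection, models

def make_table_model(table_name, app='myapp'):
    # Step 2: set app_label explicitly, since the model is declared
    # outside an installed app's models module.
    class Meta:
        db_table = table_name
        managed = False
        app_label = app

    attrs = {
        '__module__': f'{app}.models',
        'Meta': Meta,
        'field1': models.CharField(max_length=200),
        'field2': models.CharField(max_length=200),
    }
    # Step 1: going through ModelBase registers the class with Django.
    model = type(table_name.title().replace('_', ''), (models.Model,), attrs)

    # Step 3: create the backing table if it does not exist yet.
    if table_name not in connection.introspection.table_names():
        with connection.schema_editor() as editor:
            editor.create_model(model)
    return model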
