I'm going to ask this question in two parts: first the general question, and then the question for my specific use case.
The general question:
I'm building a podcast app, where, hopefully, we'll have users. Users have subscriptions, settings, ... , which I'd like to store on the User object, but subscriptions and settings don't belong in the same module in my code.
How do you structure your code so that all the relevant data about a user is stored together, but the code that defines and deals with specific properties can be separated?
My specific use case
I'm building the back end on Google App Engine. My user class looks something like this:
class User(ndb.Model):
    username = ndb.StringProperty(required=True)
    email = ndb.StringProperty(required=True)
    ...
Now I could just add more properties for subscriptions, settings, etc., but these definitions don't really belong in the users module. I've tried defining SubscriptionsHolder and SettingsHolder classes using ndb.PolyModel, but with multiple inheritance only the last superclass in the User definition supports querying.
I could just make the settings and other modules query the User model directly, but this results in a circular dependency: the users module depends on settings for subclassing, and settings depends on users for querying. I know I can resolve the circular dependency by moving the import statements around, but that just seems like a hack to me.
My approach was to treat User and Settings data as separate but related collections. Instead of subclassing or using PolyModel I simply introduced a way to imply a 1:1 relation between those data sets.
One way is to add a KeyProperty to Settings that links back to User. Another way is to create each Settings entity with the same id/name that is used by the related User entity. This second way allows a direct Settings.get_by_id() call once you have the User key.
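A minimal sketch of both options (the settings field is just a placeholder, and this assumes the classic google.appengine.ext.ndb API):

settings.py

from google.appengine.ext import ndb


class Settings(ndb.Model):
    # Option 1: explicit link back to the related User entity
    user_key = ndb.KeyProperty(kind='User')

    # hypothetical settings field
    notifications_enabled = ndb.BooleanProperty(default=True)


def settings_for(user):
    # Option 2: if each Settings entity is created with the same id as its User,
    # e.g. Settings(id=user.key.id()).put(), it can be fetched without a query:
    return Settings.get_by_id(user.key.id())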
I am writing a project in Django and I see that 80% of the code is in the file models.py. This code is confusing and, after a certain time, I cease to understand what is really happening.
Here is what bothers me:
I find it ugly that my model layer (which is supposed to be responsible only for working with data from the database) also sends email, calls the APIs of other services, etc.
Also, I find it unacceptable to place business logic in the view, because this way it becomes difficult to control. For example, in my application there are at least three ways to create new instances of User, but technically they should all be created uniformly.
I do not always notice when the methods and properties of my models become non-deterministic and when they develop side effects.
Here is a simple example. At first, the User model was like this:
class User(db.Models):
    def get_present_name(self):
        return self.name or 'Anonymous'

    def activate(self):
        self.status = 'activated'
        self.save()
Over time, it turned into this:
class User(db.Models):
    def get_present_name(self):
        # property became non-deterministic in terms of database
        # data is taken from another service by api
        return remote_api.request_user_name(self.uid) or 'Anonymous'

    def activate(self):
        # method now has a side effect (send message to user)
        self.status = 'activated'
        self.save()
        send_mail('Your account is activated!', '…', [self.email])
What I want is to separate entities in my code:
Database-level entities, i.e. database-level logic: what kind of data does my application store?
Application-level entities, i.e. business-level logic: what does my application do?
What are the good practices to implement such an approach that can be applied in Django?
It seems like you are asking about the difference between the data model and the domain model – the latter is where you can find the business logic and entities as perceived by your end user, the former is where you actually store your data.
Furthermore, I've interpreted the 3rd part of your question as: how to notice failure to keep these models separate.
These are two very different concepts and it's always hard to keep them separate. However, there are some common patterns and tools that can be used for this purpose.
About the Domain Model
The first thing you need to recognize is that your domain model is not really about data; it is about actions and questions such as "activate this user", "deactivate this user", "which users are currently activated?", and "what is this user's name?". In classical terms: it's about queries and commands.
Thinking in Commands
Let's start by looking at the commands in your example: "activate this user" and "deactivate this user". The nice thing about commands is that they can easily be expressed in small given-when-then scenarios:
given an inactive user
when the admin activates this user
then the user becomes active
and a confirmation e-mail is sent to the user
and an entry is added to the system log
(etc. etc.)
Such scenarios are useful to see how different parts of your infrastructure can be affected by a single command – in this case your database (some kind of 'active' flag), your mail server, your system log, etc.
Such scenarios also really help you in setting up a Test Driven Development environment.
And finally, thinking in commands really helps you create a task-oriented application. Your users will appreciate this :-)
Expressing Commands
Django provides two easy ways of expressing commands; they are both valid options and it is not unusual to mix the two approaches.
The service layer
The service module has already been described by @Hedde. Here you define a separate module and each command is represented as a function.
services.py
def activate_user(user_id):
    user = User.objects.get(pk=user_id)

    # set active flag
    user.active = True
    user.save()

    # mail user
    send_mail(...)

    # etc etc
Using forms
The other way is to use a Django Form for each command. I prefer this approach, because it combines multiple closely related aspects:
execution of the command (what does it do?)
validation of the command parameters (can it do this?)
presentation of the command (how can I do this?)
forms.py
class ActivateUserForm(forms.Form):
    user_id = IntegerField(widget=UsernameSelectWidget, verbose_name="Select a user to activate")
    # the username select widget is not a standard Django widget, I just made it up

    def clean_user_id(self):
        user_id = self.cleaned_data['user_id']
        if User.objects.get(pk=user_id).active:
            raise ValidationError("This user cannot be activated")
        # you can also check authorizations etc.
        return user_id

    def execute(self):
        """
        This is not a standard method in the forms API; it is intended to replace the
        'extract-data-from-form-in-view-and-do-stuff' pattern by a more testable pattern.
        """
        user_id = self.cleaned_data['user_id']
        user = User.objects.get(pk=user_id)

        # set active flag
        user.active = True
        user.save()

        # mail user
        send_mail(...)

        # etc etc
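In a view, such a form can then be used roughly like this (a sketch of the usual form-handling flow; the template and URL names are made up):

views.py

from django.shortcuts import redirect, render

from .forms import ActivateUserForm


def activate_user_view(request):
    form = ActivateUserForm(request.POST or None)
    if request.method == 'POST' and form.is_valid():
        form.execute()  # the custom method defined above
        return redirect('user-list')  # hypothetical URL name
    return render(request, 'activate_user.html', {'form': form})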
Thinking in Queries
Your example did not contain any queries, so I took the liberty of making up a few useful queries. I prefer to use the term "question", but queries is the classical terminology. Interesting queries are: "What is the name of this user?", "Can this user log in?", "Show me a list of deactivated users", and "What is the geographical distribution of deactivated users?"
Before embarking on answering these queries, you should always ask yourself whether the query is:
a presentational query just for my templates, and/or
a business logic query tied to executing my commands, and/or
a reporting query.
Presentational queries are merely made to improve the user interface. The answers to business logic queries directly affect the execution of your commands. Reporting queries are merely for analytical purposes and have looser time constraints. These categories are not mutually exclusive.
The other question is: "do I have complete control over the answers?" For example, when querying the user's name (in this context) we do not have any control over the outcome, because we rely on an external API.
Making Queries
The most basic query in Django is the use of the Manager object:
User.objects.filter(active=True)
Of course, this only works if the data is actually represented in your data model. This is not always the case. In those cases, you can consider the options below.
Custom tags and filters
The first alternative is useful for queries that are merely presentational: custom tags and template filters.
template.html
<h1>Welcome, {{ user|friendly_name }}</h1>
template_tags.py
@register.filter
def friendly_name(user):
    return remote_api.get_cached_name(user.id)
Query methods
If your query is not merely presentational, you could add queries to your services.py (if you are using that), or introduce a queries.py module:
queries.py
def inactive_users():
    return User.objects.filter(active=False)


def users_called_publysher():
    for user in User.objects.all():
        if remote_api.get_cached_name(user.id) == "publysher":
            yield user
Proxy models
Proxy models are very useful in the context of business logic and reporting. You basically define an enhanced subset of your model. You can override a Manager’s base QuerySet by overriding the Manager.get_queryset() method.
models.py
class InactiveUserManager(models.Manager):
    def get_queryset(self):
        query_set = super(InactiveUserManager, self).get_queryset()
        return query_set.filter(active=False)


class InactiveUser(User):
    """
    >>> for user in InactiveUser.objects.all():
    ...     assert user.active is False
    """

    objects = InactiveUserManager()

    class Meta:
        proxy = True
Query models
For queries that are inherently complex, but are executed quite often, there is the possibility of query models. A query model is a form of denormalization where relevant data for a single query is stored in a separate model. The trick of course is to keep the denormalized model in sync with the primary model. Query models can only be used if changes are entirely under your control.
models.py
class InactiveUserDistribution(models.Model):
    country = CharField(max_length=200)
    inactive_user_count = IntegerField(default=0)
The first option is to update these models in your commands. This is very useful if these models are only changed by one or two commands.
forms.py
class ActivateUserForm(forms.Form):
    # see above

    def execute(self):
        # see above
        query_model, _ = InactiveUserDistribution.objects.get_or_create(country=user.country)
        query_model.inactive_user_count -= 1
        query_model.save()
A better option would be to use custom signals. These signals are of course emitted by your commands. Signals have the advantage that you can keep multiple query models in sync with your original model. Furthermore, signal processing can be offloaded to background tasks, using Celery or similar frameworks.
signals.py
user_activated = Signal(providing_args=['user'])
user_deactivated = Signal(providing_args=['user'])
forms.py
class ActivateUserForm(forms.Form):
    # see above

    def execute(self):
        # see above
        user_activated.send_robust(sender=self, user=user)
models.py
class InactiveUserDistribution(models.Model):
    # see above


@receiver(user_activated)
def on_user_activated(sender, **kwargs):
    user = kwargs['user']
    query_model, _ = InactiveUserDistribution.objects.get_or_create(country=user.country)
    query_model.inactive_user_count -= 1
    query_model.save()
Keeping it clean
When using this approach, it becomes ridiculously easy to determine if your code stays clean. Just follow these guidelines:
Does my model contain methods that do more than managing database state? You should extract a command.
Does my model contain properties that do not map to database fields? You should extract a query.
Does my model reference infrastructure that is not my database (such as mail)? You should extract a command.
The same goes for views (because views often suffer from the same problem).
Does my view actively manage database models? You should extract a command.
Some References
Django documentation: proxy models
Django documentation: signals
Architecture: Domain Driven Design
I usually implement a service layer in between views and models. This acts like your project's API and gives you a good helicopter view of what is going on. I inherited this practice from a colleague of mine who uses this layering technique a lot with Java projects (JSF), e.g.:
models.py
class Book(models.Model):
    author = models.ForeignKey(User)
    title = models.CharField(max_length=125)

    class Meta:
        app_label = "library"
services.py
from library.models import Book


def get_books(limit=None, **filters):
    """ simple service function for retrieving books can be widely extended """
    return Book.objects.filter(**filters)[:limit]  # list[:None] will return the entire list
views.py
from library.services import get_books


class BookListView(ListView):
    """ simple view, e.g. implement a _build and _apply filters function """
    queryset = get_books()
Mind you, I usually take models, views and services to module level and separate even further depending on the project's size.
First of all, don't repeat yourself.
Then, be careful not to overengineer; sometimes it is just a waste of time and makes you lose focus on what is important. Review the Zen of Python from time to time.
Take a look at active projects
more people = more need to organize properly
the Django repository has a straightforward structure.
the pip repository has a straightforward directory structure.
the fabric repository is also a good one to look at.
you can place all your models under yourapp/models/logicalgroup.py
e.g. User, Group and related models can go under yourapp/models/users.py
e.g. Poll, Question, Answer, ... could go under yourapp/models/polls.py
load what you need in __all__ inside of yourapp/models/__init__.py
More about MVC
model is your data
this includes your actual data
this also includes your session / cookie / cache / fs / index data
user interacts with controller to manipulate the model
this could be an API, or a view that saves/updates your data
this can be tuned with request.GET / request.POST ...etc
think paging or filtering too.
the data updates the view
the templates take the data and format it accordingly
APIs even w/o templates are part of the view; e.g. tastypie or piston
this should also account for the middleware.
Take advantage of middleware / templatetags
If you need some work to be done for each request, middleware is one way to go (see the sketch after this list).
e.g. adding timestamps
e.g. updating metrics about page hits
e.g. populating a cache
If you have snippets of code that always recur for formatting objects, templatetags are good.
e.g. active tab / url breadcrumbs
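A rough sketch of such a middleware in the newer callable style (the header name is just an example):

middleware.py

import time


class TimingMiddleware:
    """Attach the request duration to every response as a header."""

    def __init__(self, get_response):
        self.get_response = get_response

    def __call__(self, request):
        start = time.monotonic()
        response = self.get_response(request)
        response['X-Request-Duration'] = '%.3fs' % (time.monotonic() - start)
        return response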
Take advantage of model managers
creating User can go in a UserManager(models.Manager).
gory details for instances should go on the models.Model.
gory details for queryset could go in a models.Manager.
you might want to create a User one at a time, so you may think that it should live on the model itself, but when creating the object, you probably don't have all the details:
Example:
class UserManager(models.Manager):
    def create_user(self, username, ...):
        # plain create

    def create_superuser(self, username, ...):
        # may set is_superuser field.

    def activate(self, username):
        # may use save() and send_mail()

    def activate_in_bulk(self, queryset):
        # may use queryset.update() instead of save()
        # may use send_mass_mail() instead of send_mail()
Make use of forms where possible
A lot of boilerplate code can be eliminated if you have forms that map to a model. The ModelForm documentation is pretty good. Separating form code from model code can be good if you have a lot of customization (or, for more advanced uses, to sometimes avoid cyclic import errors).
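For example, a minimal ModelForm might look like this (the model and field names are just an illustration):

forms.py

from django import forms

from yourapp.models import User  # hypothetical import path


class UserForm(forms.ModelForm):
    class Meta:
        model = User
        fields = ['username', 'email']

The view then only needs to validate and call form.save(); the field definitions and most of the validation come from the model.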
Use management commands when possible
e.g. yourapp/management/commands/createsuperuser.py
e.g. yourapp/management/commands/activateinbulk.py
if you have business logic, you can separate it out
django.contrib.auth uses backends, just like db has a backend...etc.
add a setting for your business logic (e.g. AUTHENTICATION_BACKENDS)
you could use django.contrib.auth.backends.RemoteUserBackend
you could use yourapp.backends.remote_api.RemoteUserBackend
you could use yourapp.backends.memcached.RemoteUserBackend
delegate the difficult business logic to the backend
make sure to set the expectation right on the input/output.
changing business logic is as simple as changing a setting :)
backend example:
class User(db.Models):
    def get_present_name(self):
        # property became not deterministic in terms of database
        # data is taken from another service by api
        return remote_api.request_user_name(self.uid) or 'Anonymous'
could become:
class User(db.Models):
    def get_present_name(self):
        for backend in get_backends():
            try:
                return backend.get_present_name(self)
            except:  # make pylint happy.
                pass
        return None
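A minimal sketch of what such a backend and a get_backends() helper could look like (the module paths, the setting name and remote_api are all made up):

yourapp/backends/remote_api.py

from yourapp.libs import remote_api  # hypothetical client for the external service


class RemoteUserBackend(object):
    def get_present_name(self, user):
        # delegate the lookup to the external service
        return remote_api.request_user_name(user.uid)

yourapp/backends/__init__.py

from django.conf import settings
from django.utils.module_loading import import_string


def get_backends():
    # PRESENT_NAME_BACKENDS is a hypothetical setting, analogous to
    # AUTHENTICATION_BACKENDS: a list of dotted paths to backend classes.
    return [import_string(path)() for path in settings.PRESENT_NAME_BACKENDS]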
more about design patterns
there's already a good question about design patterns
a very good video about practical design patterns
Django's backends are an obvious use of the delegation design pattern.
more about interface boundaries
Is the code you want to use really part of the models? -> yourapp.models
Is the code part of business logic? -> yourapp.vendor
Is the code part of generic tools / libs? -> yourapp.libs
Is the code part of business logic libs? -> yourapp.libs.vendor or yourapp.vendor.libs
Here is a good one: can you test your code independently?
yes, good :)
no, you may have an interface problem
when there is clear separation, unit testing should be a breeze with the use of mocking (see the sketch after this list)
Is the separation logical?
yes, good :)
no, you may have trouble testing those logical concepts separately.
Do you think you will need to refactor when you get 10x more code?
yes, no good, no bueno, refactor could be a lot of work
no, that's just awesome!
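For example, a unit test along these lines (all names are hypothetical) can exercise a command while mocking out the mail infrastructure:

tests/test_users.py

from unittest import mock

from django.test import TestCase

from yourapp.core import services
from yourapp.core.models import User


class ActivateUserTest(TestCase):
    def test_activation_sets_flag_and_sends_mail(self):
        user = User.objects.create(username='alice', active=False)

        with mock.patch('yourapp.core.services.send_mail') as fake_send_mail:
            services.activate_user(user.pk)

        user.refresh_from_db()
        self.assertTrue(user.active)
        self.assertEqual(fake_send_mail.call_count, 1)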
In short, you could have
yourapp/core/backends.py
yourapp/core/models/__init__.py
yourapp/core/models/users.py
yourapp/core/models/questions.py
yourapp/core/forms.py
yourapp/core/handlers.py
yourapp/core/management/commands/__init__.py
yourapp/core/management/commands/closepolls.py
yourapp/core/management/commands/removeduplicates.py
yourapp/core/middleware.py
yourapp/core/signals.py
yourapp/core/templatetags/__init__.py
yourapp/core/templatetags/polls_extras.py
yourapp/core/views/__init__.py
yourapp/core/views/users.py
yourapp/core/views/questions.py
yourapp/lib/utils.py
yourapp/lib/textanalysis.py
yourapp/lib/ratings.py
yourapp/vendor/backends.py
yourapp/vendor/morebusinesslogic.py
yourapp/vendor/handlers.py
yourapp/vendor/middleware.py
yourapp/vendor/signals.py
yourapp/tests/test_polls.py
yourapp/tests/test_questions.py
yourapp/tests/test_duplicates.py
yourapp/tests/test_ratings.py
or anything else that helps you; finding the interfaces you need and the boundaries will help you.
Django employs a slightly modified kind of MVC. There's no concept of a "controller" in Django. The closest proxy is a "view", which tends to cause confusion with MVC converts because in MVC a view is more like Django's "template".
In Django, a "model" is not merely a database abstraction. In some respects, it shares duty with the Django's "view" as the controller of MVC. It holds the entirety of behavior associated with an instance. If that instance needs to interact with an external API as part of it's behavior, then that's still model code. In fact, models aren't required to interact with the database at all, so you could conceivable have models that entirely exist as an interactive layer to an external API. It's a much more free concept of a "model".
In Django, the MVC structure is, as Chris Pratt said, different from the classical MVC model used in other frameworks. I think the main reason for this is to avoid an overly strict application structure, as happens in other MVC frameworks like CakePHP.
In Django, MVC was implemented in the following way:
The view layer is split in two. The views should be used only to manage HTTP requests: they are called and respond to them. Views communicate with the rest of your application (forms, ModelForms, custom classes, or in simple cases directly with models).
To create the interface we use templates. Templates are string-like to Django: it maps a context into them, and this context is provided to the view by the application (when the view asks).
The model layer gives you encapsulation, abstraction, validation, intelligence and makes your data object-oriented (they say someday DBMSs will too). This doesn't mean that you should make huge models.py files (in fact, a very good piece of advice is to split your models across different files, put them into a folder called 'models', create an '__init__.py' file in this folder where you import all your models, and finally use the 'app_label' attribute of the models.Model class). The model should abstract you away from operating on raw data; it will make your application simpler. You should also, if required, create external classes, like "tools" for your models. You can also use inheritance in models, setting the 'abstract' attribute of your model's Meta class to 'True'.
Where is the rest? Well, small web applications are generally a sort of interface to data; in some small cases, using views to query or insert data would be enough. More common cases will use Forms or ModelForms, which are actually "controllers". This is nothing more than a practical solution to a common problem, and a very fast one. It's what a website usually does.
If Forms are not enough for you, then you should create your own classes to do the magic. A very good example of this is the admin application: you can read the ModelAdmin code, which actually works as a controller. There is no standard structure; I suggest you examine existing Django apps, as it depends on each case. This is what the Django developers intended: you can add an XML parser class, an API connector class, add Celery for performing tasks, Twisted for a reactor-based application, use only the ORM, make a web service, modify the admin application and more... It's your responsibility to write good-quality code, respect the MVC philosophy or not, make it module-based, and create your own abstraction layers. It's very flexible.
My advice: read as much code as you can; there are lots of Django applications around, but don't take them too seriously. Each case is different, and patterns and theory help, but not always; this is an imprecise science. Django just provides you with good tools that you can use to alleviate some pains (like the admin interface, web form validation, i18n, the observer pattern implementation, and everything else previously mentioned), but good designs come from experienced designers.
PS: use the 'User' class from the auth application (standard Django); you can, for example, build user profiles on top of it, or at least read its code, it will be useful for your case.
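For example, a profile kept alongside the standard auth User could look like this (a sketch; the fields are placeholders):

profiles/models.py

from django.conf import settings
from django.db import models


class Profile(models.Model):
    user = models.OneToOneField(settings.AUTH_USER_MODEL, on_delete=models.CASCADE)
    display_name = models.CharField(max_length=100, blank=True)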
An old question, but I'd like to offer my solution anyway. It's based on the acceptance that model objects, too, require some additional functionality, while it's awkward to place it all within models.py. Heavy business logic may be written separately depending on personal taste, but I at least like the model to do everything related to itself. This solution also supports those who like to have all the logic placed within the models themselves.
As such, I devised a hack that allows me to separate logic from model definitions and still get all the hinting from my IDE.
The advantages should be obvious, but this lists a few that I have observed:
DB definitions remain just that - no logic "garbage" attached
Model-related logic is all placed neatly in one place
All the services (forms, REST, views) have a single access point to logic
Best of all: I did not have to rewrite any code once I realised that my models.py had become too cluttered and I had to separate the logic away. The separation is smooth and iterative: I could do it one function at a time, an entire class, or the entire models.py.
I have been using this with Python 3.4 and greater and Django 1.8 and greater.
app/models.py
....
from app.logic.user import UserLogic


class User(models.Model, UserLogic):
    field1 = models.AnyField(....)
    ... field definitions ...
app/logic/user.py
if False:
    # This allows the IDE to know about the User model and its member fields
    from app.models import User


class UserLogic(object):
    def logic_function(self: 'User'):
        ... code with hinting working normally ...
The only thing I can't figure out is how to make my IDE (PyCharm in this case) recognise that UserLogic is actually User model. But since this is obviously a hack, I'm quite happy to accept the little nuisance of always specifying type for self parameter.
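A slightly less hacky variant of the same trick is typing.TYPE_CHECKING, which most IDEs and type checkers understand and which keeps the import out of the runtime path:

app/logic/user.py

from typing import TYPE_CHECKING

if TYPE_CHECKING:
    # Only evaluated by type checkers / IDEs, never at runtime,
    # so there is no circular import with app.models.
    from app.models import User


class UserLogic(object):
    def logic_function(self: 'User'):
        ...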
I would have to agree with you. There are a lot of possibilities in Django, but the best place to start is reviewing Django's design philosophies.
Calling an API from a model property would not be ideal; it seems like it would make more sense to do something like this in the view, and possibly to create a service layer to keep things DRY. If the call to the API is non-blocking and expensive, sending the request to a service worker (a worker that consumes from a queue) might make sense.
As per Django's design philosophy, models encapsulate every aspect of an "object", so all business logic related to that object should live there:
Include all relevant domain logic
Models should encapsulate every aspect of an “object,” following Martin Fowler’s Active Record design pattern.
The side effects you describe are apparent; the logic here could be better broken down into QuerySets and Managers. Here is an example:
models.py
import datetime

from django.db import models
from django.db.models.query import QuerySet
from django.contrib import admin
from django.db import transaction


class MyUser(models.Model):
    present_name = models.TextField(null=False, blank=True)
    status = models.TextField(null=False, blank=True)
    last_active = models.DateTimeField(auto_now=True, editable=False)

    # As mentioned you could put this in a template tag to pull it
    # from cache there. Depending on how it is used, it could be
    # retrieved from within the admin view or from a custom view
    # if that is the only place you will use it.
    # def get_present_name(self):
    #     # property became non-deterministic in terms of database
    #     # data is taken from another service by api
    #     return remote_api.request_user_name(self.uid) or 'Anonymous'

    # Moved to admin as an action
    # def activate(self):
    #     # method now has a side effect (send message to user)
    #     self.status = 'activated'
    #     self.save()
    #     # send email via email service
    #     # send_mail('Your account is activated!', '…', [self.email])

    class Meta:
        ordering = ['-id']  # Needed for DRF pagination

    def __unicode__(self):
        return '{}'.format(self.pk)


class MyUserRegistrationQuerySet(QuerySet):
    def for_inactive_users(self):
        new_date = datetime.datetime.now() - datetime.timedelta(days=3 * 365)  # 3 years ago
        return self.filter(last_active__lte=new_date)

    def by_user_id(self, user_ids):
        return self.filter(id__in=user_ids)


class MyUserRegistrationManager(models.Manager):
    def get_queryset(self):
        return MyUserRegistrationQuerySet(self.model, using=self._db)

    def with_no_activity(self):
        return self.get_queryset().for_inactive_users()
admin.py
# Then in the model admin
class MyUserRegistrationAdmin(admin.ModelAdmin):
    actions = (
        'send_activate_emails',
    )

    def send_activate_emails(self, request, queryset):
        rows_affected = 0
        for obj in queryset:
            with transaction.atomic():
                # send_email('welcome_email', request, obj)  # send email via email service
                obj.status = 'activated'
                obj.save()
                rows_affected += 1

        self.message_user(request, 'sent %d' % rows_affected)
admin.site.register(MyUser, MyUserRegistrationAdmin)
I mostly agree with the accepted answer (https://stackoverflow.com/a/12857584/871392), but want to add an option to the Making Queries section.
One can define QuerySet classes for models to make filter queries and so on. After that you can proxy this queryset class as the model's manager, like the built-in Manager and QuerySet classes do.
Although, if you had to query several data models to get one domain model, it seems more reasonable to me to put this in a separate module, as suggested before.
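For example, a sketch using the standard QuerySet.as_manager() shortcut:

models.py

from django.db import models


class UserQuerySet(models.QuerySet):
    def inactive(self):
        return self.filter(active=False)


class User(models.Model):
    active = models.BooleanField(default=True)

    # exposes the custom queryset methods as User.objects.inactive()
    objects = UserQuerySet.as_manager()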
The most comprehensive article on the different options, with pros and cons:
Idea #1: Fat Models
Idea #2: Putting Business Logic in Views/Forms
Idea #3: Services
Idea #4: QuerySets/Managers
Conclusion
Source:
https://sunscrapers.com/blog/where-to-put-business-logic-django/
It is a design problem.
Let's assume that we have this kind of model in Django:
class Payment(models.Model):
    purchase = models.ForeignKey(Purchase)
    net_price = models.DecimalField()
    is_accepted = models.BooleanField()

    def set_accept(self):
        # there will be some logic, which touches purchase, sends emails etc.
        pass

    def price_with_tax(self):
        return self.net_price * (1 + TAX)
We also have another file called actions.py, where we implement other actions.
Our problem is to determine which kinds of methods should be placed in models.py and which in actions.py.
Do you know any common approach, guide or something like that?
I want to use existing solutions as much as possible.
Thanks
The overall convention in MVC frameworks (like Django) is to place as much logic as possible into your models. This serves a lot of purposes:
It binds your logic to your data (good thing).
Makes it easy to look to one place in the code for all data manipulation methods.
Allows you to run the methods on your models directly without relying on views (makes testing simpler).
Gives you a really 'clean' API to use in your templates, e.g. {{ object.price_with_tax }}, as opposed to rendering different views for different behaviors.
For your project layout, you should try to keep any code that works on models in your models.py file, and try to avoid using an actions.py or helpers.py unless you really need it. If you do have long amounts of code that aren't appropriate to put into your models.py (maybe you're implementing algorithms or something), the convention is to use a helpers.py.
There's a lot more stuff you can do later on to keep your app hierarchy clean and organized, but that is the basic gist of it all.
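As a small illustration of that split (the tax helper and rate are made up):

models.py

from django.db import models

from yourapp import helpers  # hypothetical module


class Payment(models.Model):
    net_price = models.DecimalField(max_digits=10, decimal_places=2)

    def price_with_tax(self):
        # thin wrapper; the actual calculation lives in helpers.py
        return helpers.add_tax(self.net_price)

helpers.py

from decimal import Decimal

TAX = Decimal('0.23')  # placeholder rate


def add_tax(net_price):
    return net_price * (Decimal('1') + TAX)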
The standard way in Django is to put code that works on a per-row basis directly in the model, and code that works with several rows, or with the table as a whole, in a manager.
class MyManager(models.Manager):
    def do_something_with_some_rows(self):
        query = self.filter(...)
        result = do_something_with_this_query(query)
        return result


class MyModel(models.Model):
    objects = MyManager()
then you can use this manager like this
>>> result = MyModel.objects.do_something_with_some_rows()
As rdegges said, this makes your API much cleaner and simpler to use, and it's also a lot easier to test.
https://docs.djangoproject.com/en/dev/topics/db/managers/#managers
The problem: I wish to use Postgres Schemas to separate the tables of different parts of my django app at database level.
Aside
You can skip this section, but I think it's helpful to add context to these things. My app is working on a database of existing data (stored in the public schema, helpfully), which it's very important I don't modify. As such, I want to separate "my" data into a separate schema (to which django will be given read/write/play in the sand access), while restricting access to the public schema to read-only. I originally tried to solve this by separating my data out into a separate database and using database routing, but it turns out (if I'd only read the documentation) that django doesn't support cross database dependencies (which is fair enough I suppose), and my models have foreign keys into the read-only data.
The meat
There exists a workaround for Django's lack of schema support (which you can read about here) which is to specify the db_table attribute in your model's meta, like so:
class MyModel(models.Model):
    attribute1 = models.CharField()

    # Fool django into using the schema
    class Meta:
        db_table = 'schema_name\".\"table_name'
This is great, but I didn't really want to have to write this for every single model in my app - for a start, it doesn't seem pythonic, and also there's every chance of me forgetting when I have to add a new model.
My solution was the following snippet:
def SchemaBasedModel(cls):
    class Meta:
        db_table = '%s\".\"%s' % (schema_name, cls.__name__)

    cls.Meta = Meta
    return cls


@SchemaBasedModel
class MyModel(models.Model):
    attribute1 = models.CharField()
    ...
...
When I then run python manage.py shell I get the following:
>>> from myapp import models
>>> myModel = models.MyModel
>>> myModel.Meta.db_table
'myschema"."mymodel'
>>>
"Looks good to me," I thought. I then ran: python manage.py sqlall myapp. Sadly, this yielded the original table names - that is, the table names as they were before I applied this meta info. When I went back and applied the meta info "by hand" (i.e. by adding Meta inner classes to all my models), things were as expected (new table names).
I was hoping somebody could enlighten me as to what was going on here? Or, more usefully, what's the "right" way to do this? I thought the decorator pattern I've talked about here would be just the ticket for this problem, but apparently it's a non-starter. How can I quickly and easily apply this meta info to all my models, without typing it out every single time?
Edit: Perhaps I was a little unclear when I asked this - I'm as interested in know what's "actually going on" (i.e. why things aren't working the way I thought they would - what did I misunderstand here?) as how to solve my problem (clear separation of "my" data from legacy data, preferably on a schema level - but it's not the end of the world if I have to dump everything into the public schema and manage permissions on a per-table basis).
Second Edit: The accepted answer doesn't necessarily tell me what I really want to know, but it is probably the right solution for the actual problem. Short answer: don't do this.
I didn't really want to have to write this for every single model in my app - for a start, it doesn't seem pythonic,
That's false. Some things have to be written down explicitly. "Explicit is better than Implicit".
and also there's every chance of me forgetting when I have to add a new model
That's false, also.
You won't "forget".
Bottom Line: Don't mess with this kind of thing. Simply include the 2 lines of code explicitly where necessary.
You don't have that many tables.
You won't forget.
Also, be sure to use DB permissions. Grant SELECT permission only on your "legacy" tables (the tables you don't want to write to). Then you can't write to them.