Is it possible to develop an API in Django "TastyPie" in a way which doesn't tie it directly to a "single" Django ORM model? i.e. a call /api/xyz/ would retrieve data from "a", "b" & "c" into a single JSON output. If so, please point me in the right direction.
tastypie is more tightly coupled to the ORM than django-piston, but there are methods that you can define in a tastypie resource to specify how to handle create, read, update, delete: http://readthedocs.org/docs/django-tastypie/en/latest/resources.html?highlight=put_list#obj-get
And you would just not set the queryset meta field.
django-piston, on the other hand, takes a more direct initial approach by having you define one or more of these methods. The resource can still be bound to a model to give you out-of-the-box REST, but it's more up front about showing you the methods to define for custom handling.
tastypie is a bit more robust in its process and features, but it makes this specific feature set a little less apparent.
Tastypie has ModelResource and Resource. The former is tied to a model (you can override a lot of its methods, as jdi suggested) and the latter is what you need, I think. Example of Resource here. The example is for a Riak data source; in your case it would be a combination of Django models.
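To make that concrete, here is a rough, hedged sketch of a read-only tastypie Resource that merges several models into one endpoint. The models A, B, C, their name field, and the container class are all made up for illustration, and the method signatures assume a reasonably recent tastypie:

from tastypie import fields
from tastypie.resources import Resource

from myapp.models import A, B, C  # hypothetical models

class AggregateObject(object):
    """Plain container object that tastypie can dehydrate."""
    def __init__(self, pk, name, source):
        self.pk = pk
        self.name = name
        self.source = source

class XyzResource(Resource):
    pk = fields.IntegerField(attribute='pk')
    name = fields.CharField(attribute='name')
    source = fields.CharField(attribute='source')

    class Meta:
        resource_name = 'xyz'
        object_class = AggregateObject
        # No `queryset` here -- nothing ties this resource to one model.

    def detail_uri_kwargs(self, bundle_or_obj):
        # tastypie needs this to build each object's resource_uri.
        obj = bundle_or_obj.obj if hasattr(bundle_or_obj, 'obj') else bundle_or_obj
        return {'pk': obj.pk}

    def obj_get_list(self, bundle, **kwargs):
        # Merge rows from several models into one list for /api/xyz/.
        results = []
        for model in (A, B, C):
            for row in model.objects.all():
                results.append(AggregateObject(row.pk, row.name, model.__name__))
        return results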
Related
I have a particular model that I'd like to perform custom validations on. I'd like to guarantee that at least one identifier field is always present when creating a new instance, such that it's impossible to create an instance without one of these fields, though no field in particular is individually required.
from django.db import models

class Security(models.Model):
    # max_length values added here so the fields pass Django's system checks
    symbol = models.CharField(max_length=32, unique=True, blank=True)
    sedol = models.CharField(max_length=32, unique=True, blank=True)
    tradingitemid = models.CharField(max_length=32, unique=True, blank=True)
I'd like a clean, reliable way to do this no matter where the original data is coming from (e.g., an API post or internal functions that get this data from other sources like a .csv file).
I understand that I could override the model's .save() method and perform validation there, but best practice stated here suggests that raising validation errors in the .save() method is a bad idea, because views will simply return a 500 response instead of returning a validation error to a POST request.
I know that I can define a custom serializer with a validator for this model using Django REST Framework (this would be a great solution for a ModelViewSet, where I can guarantee the serializer is used each time an object is created). But that data integrity guarantee only holds on that API endpoint, and beyond it is only as good as the developer is at remembering to use that serializer each and every time an object is created elsewhere in the codebase (objects can be created throughout the codebase from sources besides the web API).
I am also familiar with Django's .clean() and .full_clean() methods. These seem like the perfect solution, except that they again rely upon the developer always remembering to call them -- a guarantee that's only as good as the developer's memory. I know the methods are called automatically when using a ModelForm, but again, in my use case models can be created from .csv downloads as well -- I need a general-purpose, best-practice guarantee. I could put .clean() in the model's .save() method, but this answer (and related comments and links in the post) seems to make this approach controversial and perhaps an anti-pattern.
Is there a clean, straightforward way to guarantee that this model can never be saved without one of the three fields, that 1. doesn't raise 500 errors through a view, 2. doesn't rely upon the developer explicitly using the correct serializer throughout the codebase when creating objects, and 3. doesn't rely upon hacking a call to .clean() into the .save() method of the model (a seeming anti-pattern)? I feel like there must be a clean solution here that isn't a hodgepodge of putting some validation in a serializer, some in a .clean() method, hacking the .save() method to call .clean() (it would get called twice with saves from ModelForms), etc.
One could certainly imagine a design where save() did double duty and handled validation for you. For various reasons (partially summarized in the links here), Django decided to make this a two-step process. So I agree with the consensus you found that trying to shoehorn validation into Model.save() is an anti-pattern. It runs counter to Django's design, and will probably cause problems down the road.
You've already found the "perfect solution", which is to use Model.full_clean() to do the validation. I don't agree with you that remembering this will be burdensome for developers. I mean, remembering to do anything right can be hard, especially with a large and powerful framework, but this particular thing is straightforward, well documented, and fundamental to Django's ORM design.
This is especially true when you consider what is actually, provably difficult for developers, which is the error handling itself. It's not like developers could just do model.validate_and_save(). Rather, they would have to do:
try:
    model.validate_and_save()
except ValidationError:
    # handle error - this is the hard part
Whereas Django's idiom is:
try:
    model.full_clean()
except ValidationError:
    # handle error - this is the hard part
else:
    model.save()
I don't find Django's version any more difficult. (That said, there's nothing stopping you from writing your own validate_and_save convenience method.)
Finally, I would suggest adding a database constraint for your requirement as well. This is what Django does when you add a constraint that it knows how to enforce at the database level. For example, when you use unique=True on a field, Django will both create a database constraint and add Python code to validate that requirement. But if you want to create a constraint that Django doesn't know about you can do the same thing yourself. You would simply write a Migration that creates the appropriate database constraint in addition to writing your own Python version in clean(). That way, if there's a bug in your code and the validation isn't done, you end up with an uncaught exception (IntegrityError) rather than corrupted data.
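For the question's model, a minimal sketch of that database-level guarantee using Meta.constraints (available since Django 2.2); the max_length values and constraint name are arbitrary, and comparing against the empty string is one way to express "non-empty" for these CharFields:

from django.db import models
from django.db.models import Q

class Security(models.Model):
    symbol = models.CharField(max_length=32, unique=True, blank=True)
    sedol = models.CharField(max_length=32, unique=True, blank=True)
    tradingitemid = models.CharField(max_length=32, unique=True, blank=True)

    class Meta:
        constraints = [
            # Enforced by the database itself: at least one identifier
            # must be non-empty, even if a code path skips clean().
            models.CheckConstraint(
                check=Q(symbol__gt='') | Q(sedol__gt='') | Q(tradingitemid__gt=''),
                name='security_has_at_least_one_identifier',
            ),
        ]

Running makemigrations then generates the matching AddConstraint migration, so the Python-level clean() and the database stay in agreement.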
I'm having some issues figuring out the best (read: DRY & maintainable) place for introducing validation logic in Django, namely between models, forms, and DRF serializers.
I've worked with Django for several years and have been following the various conventions for handling model, form, and REST API endpoint validation. I've tried a lot of variations for ensuring overall data integrity, but I've hit a bit of a stumbling block recently. Here is a brief list of what I've tried after looking through many articles, SO posts, and tickets:
Validation at the model level; namely, ensuring all of my custom constraints are satisfied before myModel.save() is called, by overriding myModel.clean() (as well as the field-specific and unique-together validation methods). To do this, I ensured myModel.full_clean() was called in myForm.clean() (for forms -- the admin panel already does this) and in mySerializer.validate() (for DRF serializers).
Validation at the form and serializer level, calling a shared method for maintainable, DRY code.
Validation at the form and serializer level, with a distinct method for each to ensure maximum flexibility (i.e. for when forms and endpoints have different constraints).
Method one seems the most intuitive to me for when forms and serializers have identical constraints, but is a bit messy in practice; first, data is automatically cleaned and validated by the form or serializer, then the model entity is instantiated, and more validation is run again -- which is a little convoluted and can get complicated.
Method three is what Django Rest Framework recommends as of version 3.0; they eliminated a lot of their model.save() hooks and prefer to leave validation to the user-facing aspects of your application. This makes some sense to me, since Django's base model.save() implementation doesn't call model.full_clean() anyway.
So, method two seems to be the best overall generalized outcome to me; validation lives in a distinct place -- before the model is ever touched -- and the codebase is less cluttered / more DRY due to the shared validation logic.
Unfortunately, most of the trouble I've encountered is with getting Django Rest Framework's serializers to cooperate. All three approaches work well for forms, and in fact work well for most HTTP methods (most notably when POSTing for entity creation) -- but none seem to play well when updating an existing entity (PUT, PATCH).
Long story short, it has proved rather difficult to validate incoming data when it is incomplete (but otherwise valid -- often the case for PATCH). The request data may only contain some fields -- those that contain different / new information -- and the model instance's existing information is maintained for all other fields. In fact, DRF issue #4306 perfectly sums up this particular challenge.
I've also considered running custom model validation at the viewset level (after serializer.validated_data is populated and serializer.instance exists, but before serializer.save() is called), but I'm still struggling to come up with a clean, generalized approach due to the complexities of handling updates.
TL;DR Django Rest Framework makes it a bit hard to write clean, maintainable validation logic in an obvious place, especially for partial updates that rely on a blend of existing model data and incoming request data.
I'd love to have some Django gurus weigh in on what they've gotten to work, because I'm not seeing any convenient solution.
Thanks.
Just realized I never posted my solution back to this question. I ended up writing a model mixin to always run validation before saving; it's a bit inconvenient as validation will technically be run twice in Django's forms (i.e. in the admin panel), but it lets me guarantee that validation is run -- regardless of what triggers a model save. I generally don't use Django's forms, so this doesn't have much impact on my applications.
Here's a quick snippet that does the trick:
class ValidatesOnSaveModelMixin:
    """ ValidatesOnSaveModelMixin

    A mixin that ensures valid model state prior to saving.
    """
    def save(self, **kwargs):
        self.full_clean()
        super(ValidatesOnSaveModelMixin, self).save(**kwargs)
Here is how you'd use it:
class ImportantModel(ValidatesOnSaveModelMixin, models.Model):
    """ Will always ensure its fields pass validation prior to saving. """
There is one important caveat: any of Django's direct-to-database operations (i.e. ImportantModel.objects.update()) don't call a model's save() method, and therefore will not be validated. There's not much to do about this, since these methods are really about optimizing performance by skipping a bunch of database calls -- so just be aware of their impact if you use them.
I agree, the link between models/serializers/validation is broken.
The best DRY solution I've found is to keep validation in the model, with validators specified on fields, and then, if needed, model-level validation in an overridden clean().
Then, in the serializer, override validate() and call the model's clean(), e.g. in MySerializer:
def validate(self, data):
    instance = FooModel(**data)
    instance.clean()
    return data
It's not nice, but I prefer this to 2-level validation in serializer and model.
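For partial updates (the PATCH case from the question above), one hedged extension of the same idea is to overlay the incoming fields on the instance's current values before running model-level validation; FooModel and the field handling are illustrative only:

def validate(self, data):
    if self.instance is not None:
        # PATCH: merge incoming fields over the instance's current values
        # so clean() validates the complete resulting state.
        merged = {
            f.name: getattr(self.instance, f.name)
            for f in self.instance._meta.concrete_fields
            if f.name != 'id'
        }
        merged.update(data)
    else:
        merged = dict(data)
    FooModel(**merged).clean()
    return data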
Just wanted to add to SamuelMS's answer.
If you use F() expressions and the like, this will fail, as explained here.
class ValidatesOnSaveModelMixin:
    """ ValidatesOnSaveModelMixin

    A mixin that ensures valid model state prior to saving.
    """
    def save(self, **kwargs):
        if 'clean_on_save_exclude' in kwargs:
            # Skip validation for fields assigned query expressions.
            self.full_clean(exclude=kwargs.pop('clean_on_save_exclude'))
        else:
            self.full_clean()
        super(ValidatesOnSaveModelMixin, self).save(**kwargs)
Then just use it the same way he explained.
And now, when calling save, if you use query expressions you can just call:
instance.save(clean_on_save_exclude=['field_name'])
This mirrors how you would exclude those fields if you were calling full_clean directly.
See https://docs.djangoproject.com/en/2.2/ref/models/instances/#django.db.models.Model.full_clean
What is the rationale behind the decision to not support Single Table Inheritance in Django?
Is STI a bad design? Does it result in poor performance? Would it conflict with the Django ORM as it is?
Just wondering because it's been a missing feature for like ten years now and so there must have been a conscious decision made that it would never be supported.
One reason is possibly that Django does not (currently) have the ability to modify database tables after creation.
You can 'kind-of' do STI using proxy models. This will not allow you to have different fields on the different models, but it will allow you to attach different behaviour (via model methods) to different subclasses.
However, if you decide to create a subclass with extra fields, Django will not be able to update the database to reflect that.
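A hedged sketch of the proxy-model approach (Animal, Dog, and the kind column are made up): both classes share one table, and the subclass only changes behaviour:

from django.db import models

class Animal(models.Model):
    name = models.CharField(max_length=50)
    kind = models.CharField(max_length=20)  # acts as the STI discriminator

class DogManager(models.Manager):
    def get_queryset(self):
        # Only rows whose discriminator marks them as dogs.
        return super(DogManager, self).get_queryset().filter(kind='dog')

class Dog(Animal):
    objects = DogManager()

    class Meta:
        proxy = True  # shares Animal's table; adding fields is not allowed

    def speak(self):
        return 'Woof!'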
I always use FBVs (Function Based Views) when creating a Django app, because they're very easy to handle. But most developers say it's better to use CBVs (Class Based Views), and to use FBVs only for complicated views that would be a pain to implement with CBVs.
Why? What are the advantages of using CBVs?
The single most significant advantage is inheritance. On a large project it's likely that you will have lots of similar views. Rather than write the same code again and again, you can simply have your views inherit from a base view.
Also, django ships with a collection of generic view classes that can be used to do some of the most common tasks. For example, the DetailView class is used to fetch a single object from one of your models, render it with a template, and return the HTTP response. You can plug it straight into your URLconf:
url(r'^author/(?P<pk>\d+)/$', DetailView.as_view(model=Author)),
Or you could extend it with custom functionality:
class SpecialDetailView(DetailView):
    model = Author

    def get_context_data(self, *args, **kwargs):
        context = super(SpecialDetailView, self).get_context_data(*args, **kwargs)
        context['books'] = Book.objects.filter(popular=True)
        return context
Now your template will be passed a collection of book objects for rendering.
A nice place to start with this is having a good read of the docs (Django 4.0+).
Update
ccbv.co.uk has comprehensive and easy to use information about the class based views you already have available to you.
When I started with Django I never used CBVs, because of their learning curve and somewhat complex structure. Fast forward over two years: I use FBVs only in a few places, where I am sure the code is really simple and is going to stay simple.
The major benefit of CBVs, and of the multiple inheritance that comes with them, is that I can completely avoid writing signals, helper methods, and copy-paste code, especially where the app does much more than basic CRUD operations. Views built with multiple inheritance are many times easier to debug than code with signals and helper methods, especially in an unfamiliar code base.
Apart from multiple inheritance, CBVs provide distinct methods for dispatching, retrieving templates, handling different request types, passing template context variables, validating forms, and much more, out of the box. This makes code modular and hence maintainable.
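As a hedged illustration of that kind of reuse (the Report model and its owner field are made up), a small mixin can bolt the same queryset restriction onto any generic view:

from django.views.generic import ListView

from myapp.models import Report  # hypothetical model with an `owner` field

class OwnerQuerysetMixin(object):
    """Restrict any generic view's queryset to the requesting user's objects."""
    def get_queryset(self):
        qs = super(OwnerQuerysetMixin, self).get_queryset()
        return qs.filter(owner=self.request.user)

class ReportListView(OwnerQuerysetMixin, ListView):
    model = Report  # the mixin layers on top of ListView's usual behaviour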
Some views are best implemented as CBVs, and others are best implemented as FBVs.
If you aren't sure which method to choose, the advice from Two Scoops of Django below may help:
SOME WORDS FROM TWO SCOOPS
Tip: Alternative Approach - Staying With FBVs
Some developers prefer to err on the side of using FBVs for most views, and CBVs only for views that need to be subclassed. That strategy is fine as well.
Class-based views are excellent if you want to implement fully functional CRUD operations in your Django application; the same would take considerably more time and effort to implement with function-based views.
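For example, roughly what out-of-the-box CRUD looks like with the generic CBVs; Article, its fields, and the URL name are hypothetical:

from django.urls import reverse_lazy
from django.views.generic import CreateView, DeleteView, ListView, UpdateView

from myapp.models import Article  # hypothetical model

class ArticleListView(ListView):
    model = Article

class ArticleCreateView(CreateView):
    model = Article
    fields = ['title', 'body']
    success_url = reverse_lazy('article-list')

class ArticleUpdateView(UpdateView):
    model = Article
    fields = ['title', 'body']
    success_url = reverse_lazy('article-list')

class ArticleDeleteView(DeleteView):
    model = Article
    success_url = reverse_lazy('article-list')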
I recommend using function-based views when you are not going to implement any CRUD on your site/application, i.e. when your intention is simply to render templates.
I created a simple CRUD-based application using class-based views, which is live. Visit http://filtron.pythonanywhere.com/view/ (it may or may not still be working) and enjoy. Then you will know the importance of CBVs.
I have been using FBVs in most of the cases where I do not see a real opportunity of extending views. As documented in the docs, I consider going for CBVs if the following two characteristics suit my use-case.
Organization of code related to specific HTTP methods (GET, POST, etc.) can be addressed by separate methods instead of conditional branching (see the sketch after this list).
Object oriented techniques such as mixins (multiple inheritance) can be used to factor code into reusable components.
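To illustrate that first point, a minimal hedged sketch (NoteView and its responses are made up): each HTTP method gets its own method on the class, and View.dispatch() routes the request, so there is no `if request.method == ...` branching:

from django.http import HttpResponse
from django.views import View

class NoteView(View):
    def get(self, request):
        # Handles GET only; dispatch() routed us here by HTTP method.
        return HttpResponse('list notes')

    def post(self, request):
        # Handles POST only; no conditional branching required.
        return HttpResponse('create note')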
Function-Based Views (FBVs):
- Easy to use, but the code is not reusable through inheritance.
- Recommended to use.

Class-Based Views (CBVs):
- Steep learning curve, because they're really complicated.
- Code is reusable through inheritance.
- Not recommended to use (FBVs are much better).
I'm using CherryPy, Mako templates, and SQLAlchemy in a web app. I'm coming from a Ruby on Rails background and I'm trying to set up some data validation for my models. I can't figure out the best way to ensure, say, a 'name' field has a value when some other field has a value.

I tried using SAValidation, but it allowed me to create new rows where a required column was blank, even when I used validates_presence_of on the column. I've been looking at WTForms, but that seems to involve a lot of duplicated code -- I already have my model class set up with the columns in the table, so why do I need to repeat all those columns again just to say "hey, this one needs a value"?

I'm coming from the "skinny controller, fat model" mindset and have been looking for Rails-like methods in my model, like validates_presence_of or validates_length_of. How should I go about validating the data my model receives, and ensuring Session.add/Session.merge fail when the validations fail?
Take a look at the documentation for adding validation methods. You could just add an "update" method that takes the POST dict, makes sure that required keys are present, and uses the decorated validators to set the values (raising an error if anything is awry).
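A minimal sketch of that approach under a plain declarative model (the Security class and its column are made up); an @validates hook runs whenever the attribute is assigned, so an invalid value raises before it ever reaches the session:

from sqlalchemy import Column, Integer, String
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy.orm import validates

Base = declarative_base()

class Security(Base):
    __tablename__ = 'securities'

    id = Column(Integer, primary_key=True)
    name = Column(String(50))

    @validates('name')
    def validate_name(self, key, value):
        # Runs on every assignment to `name`; raising here keeps
        # invalid rows out of the session entirely.
        if not value or not value.strip():
            raise ValueError('name is required')
        return value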
I wrote SAValidation for the specific purpose of avoiding code duplication when it comes to validating model data. It works well for us, at least for our use cases.
In our tests, we have examples of the model's setup and tests to show the validation works.
API Logic Server provides business rules for SQLAlchemy models. This includes not only multi-field validations, but multi-table validations as well. It's open source.
I ended up using WTForms after all.