What is the correct method for validating input for a custom multiwidget in each of these cases:
if I want to implement a custom Field?
if I want to use an existing database field type (say DateField)?
The motivation for this comes from the following two questions:
How do I use django's multi-widget?
Django subclassing multiwidget
I am specifically interested because I feel I have cheated. I have used value_from_datadict() like so:
from datetime import date

def value_from_datadict(self, data, files, name):
    # Collect each sub-widget's value from the POST data (suffixes _0, _1, _2).
    datelist = [widget.value_from_datadict(data, files, name + '_%s' % i)
                for i, widget in enumerate(self.widgets)]
    try:
        # Sub-widgets are ordered day, month, year.
        D = date(day=int(datelist[0]), month=int(datelist[1]), year=int(datelist[2]))
        return str(D)
    except ValueError:
        return None
This looks through the POST dictionary and constructs a value for my widget (see the linked questions). However, at the same time I've tacked on some validation: if the creation of D as a date object fails, I return None, which will then fail the is_valid() check.
My third question therefore is should I be doing this some other way? For this case, I do not want a custom field.
Thanks.
You validate your form fields just like any other fields, by implementing a clean_<fieldname>() method on your form. If your validation logic is spread across many form fields (which is not the same as many widgets!), put it in your form's clean() method.
http://docs.djangoproject.com/en/1.2/ref/forms/validation/
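As a minimal sketch of both hooks (the field names here are placeholders, not taken from the question):

from django import forms

class EventForm(forms.Form):
    start_date = forms.DateField()
    end_date = forms.DateField()

    def clean_start_date(self):
        # Per-field hook: runs after the field's own to_python() and validate().
        start = self.cleaned_data['start_date']
        if start.year < 1900:
            raise forms.ValidationError("Dates before 1900 are not supported.")
        return start

    def clean(self):
        # Cross-field hook: validation that spans more than one field lives here.
        cleaned = self.cleaned_data
        start, end = cleaned.get('start_date'), cleaned.get('end_date')
        if start and end and end < start:
            raise forms.ValidationError("End date must not precede start date.")
        return cleaned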
According to the documentation, validation is the responsibility of the field behind the widget, not the widget itself. Widgets should do nothing but present the input for the user and pass input data back to the field.
So, if you want to validate data that's been submitted, you should write a validator.
This is especially important with MultiWidgets, as more than one aspect of the data can error out. Each aspect needs to be reported back to the user, and the built-in way to do that is to write validators and place them in the validators attribute of the field.
Contrary to the documentation, you don't have to do this per form. You can, instead, extend one of the built-in fields and add an entry to its default_validators.
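As a hedged sketch of both options (the names below are made up for illustration):

from django import forms
from django.core.exceptions import ValidationError

def validate_not_weekend(value):
    # A validator is just a callable that raises ValidationError on bad input.
    if value.weekday() >= 5:
        raise ValidationError("Weekend dates are not allowed.")

# Option 1: attach it to a single field...
class BookingForm(forms.Form):
    day = forms.DateField(validators=[validate_not_weekend])

# Option 2: ...or bake it into a reusable field via default_validators.
class WeekdayDateField(forms.DateField):
    default_validators = [validate_not_weekend]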
One more note: if you're going to implement a MultiWidget, the field is going to pass a single 'compressed' value back to the widget to render. The docs say:
This method takes a single “compressed” value from the field and returns a list of “decompressed” values. The input value can be assumed valid, but not necessarily non-empty.
-Widgets
Just make sure you're handling that output correctly and you'll be fine.
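For example, a decompress() for the day/month/year widget discussed above might look like this (a sketch that assumes the compressed value is a datetime.date):

def decompress(self, value):
    # Turn the field's single "compressed" value into one value per sub-widget.
    if value:
        return [value.day, value.month, value.year]
    # The compressed value may be empty; return one placeholder per sub-widget.
    return [None, None, None]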
A mongoengine.DynamicEmbeddedDocument can be used to leverage MongoDB's flexible schema-less design. It's expandable and doesn't apply type constraints to the fields, afaik.
A mongoengine.DictField similarly allows for use of MongoDB's schema-less nature. In the documentation they simply say (w.r.t. the DictField)
This is similar to an embedded document, but the structure is not defined.
Does that mean, then, the mongoengine.fields.DictField and the mongoengine.DynamicEmbeddedDocument are completely interchangeable?
EDIT (for more information):
mongoengine.DynamicEmbeddedDocument inherits from mongoengine.EmbeddedDocument which, from the code is:
A mongoengine.Document that isn't stored in its own collection. mongoengine.EmbeddedDocuments should be used as fields on mongoengine.Documents through the mongoengine.EmbeddedDocumentField field type.
A mongoengine.fields.EmbeddedDocumentField is
An embedded document field - with a declared document_type. Only valid values are subclasses of EmbeddedDocument.
Does this mean the only thing that makes the DictField and DynamicEmbeddedDocument not totally interchangeable is that the DynamicEmbeddedDocument has to be defined through the EmbeddedDocumentField field type?
From what I’ve seen, the two are similar, but not entirely interchangeable. Each approach may have a slight advantage based on your needs. First of all, as you point out, the two approaches require differing definitions in the document, as shown below.
from mongoengine import (Document, DynamicEmbeddedDocument,
                         DictField, EmbeddedDocumentField)

class ExampleDynamicEmbeddedDoc(DynamicEmbeddedDocument):
    pass

class ExampleDoc(Document):
    dict_approach = DictField()
    dynamic_doc_approach = EmbeddedDocumentField(ExampleDynamicEmbeddedDoc,
                                                 default=ExampleDynamicEmbeddedDoc())
Note: The default is not required, but the dynamic_doc_approach field will need to be set to an ExampleDynamicEmbeddedDoc object in order to save (i.e. trying to save after setting example_doc_instance.dynamic_doc_approach = {} would throw an exception). Also, you could use the GenericEmbeddedDocumentField if you don't want to tie the field to a specific type of EmbeddedDocument, but the field would still need to point to an object subclassed from EmbeddedDocument in order to save.
Once set up, the two are functionally similar in that you can save data to them as needed and without restrictions:
e = ExampleDoc()
e.dict_approach["test"] = 10
e.dynamic_doc_approach.test = 10
However, the one main difference that I’ve seen is that you can query against any values added to a DictField, whereas you cannot with a DynamicEmbeddedDoc.
ExampleDoc.objects(dict_approach__test = 10) # Returns a QuerySet containing our entry.
ExampleDoc.objects(dynamic_doc_approach__test = 10) # Throws an exception.
That being said, using an EmbeddedDocument has the advantage of validating fields which you know will be present in the document. (We simply would need to add them to the ExampleDynamicEmbeddedDoc definition). Because of this, I think it is best to use a DynamicEmbeddedDocument when you have a good idea of a schema for the field and only anticipate adding fields minimally (which you will not need to query against). However, if you are not concerned about validation or anticipate adding a lot of fields which you’ll query against, go with a DictField.
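For instance, if some keys are known in advance, they can be declared on the DynamicEmbeddedDocument so they get validated (the field names below are illustrative, not from the question):

from mongoengine import DynamicEmbeddedDocument, IntField, StringField

class ExampleDynamicEmbeddedDoc(DynamicEmbeddedDocument):
    # Declared fields are validated on save; undeclared attributes can
    # still be added dynamically alongside them.
    count = IntField(min_value=0)
    label = StringField(max_length=50)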
I have a MultipleChoiceField representing US states, and passing a GET request to my form like ?state=AL%2CAK results in the error:
Select a valid choice. AL,AK is not one of the available choices.
However, these values are definitely listed in the field's choices, as they're rendered correctly in the form field.
I've tried specifying a custom clean_state() method in my form, to convert the value to a list, but that has no effect. Printing the cleaned_data['state'] seems to show it's not even being called with the data from request.GET.
What's causing this error?
from django import forms

class MyForm(forms.Form):
    state = forms.MultipleChoiceField(
        required=False,
        choices=[('AL', 'Alabama'), ('AK', 'Alaska')],
    )
MultipleChoiceFields don't receive all of the selected values as a single comma-separated value; they expect several different values for the same key instead.
In other words, if you select 'AL' and 'AK' your querystring should be ?state=AL&state=AK instead of ?state=AL%2CAK.
Without seeing your custom clean_state() method I can't tell you what's going wrong with it, but if the state field isn't valid because the querystring is wrong then 'state' won't be in cleaned_data (because cleaned_data only holds valid data).
Hopefully that helps. If you're still stuck try adding a few more details and I can try to be more specific.
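To illustrate the difference, here is a small sketch of how Django parses the repeated-key querystring into the list that MultipleChoiceField expects (using the MyForm defined above):

from django.http import QueryDict

qd = QueryDict('state=AL&state=AK')
qd.getlist('state')   # ['AL', 'AK']

form = MyForm(qd)     # or MyForm(request.GET) inside a view
form.is_valid()       # True; form.cleaned_data['state'] == ['AL', 'AK']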
I have a very complicated form, and I chose not to use ModelForm since I needed flexibility and control over the fields. Since I am not using ModelForm, I can't simply do something like instance=order, where order = Order.objects.get(pk=1).
Currently I am pre-populating every field with initial in forms.py, as opposed to views.py, like this:
self.fields['work_type'] = forms.ChoiceField(choices=Order.WORK_TYPE_CHOICES, initial=order.work_type)
But I was wondering: can I pass the entire order object to a form, or do I have to declare initial for every field?
Is there a way to do something like
order_form = OrderEditForm(data=request.POST, initial=order)
in views.py?
I have a very complicated form and I chose not to use ModelForm since I needed flexibility and control over the fields
Everything that you can do with a Form you can also do with a ModelForm, such as adding new fields or overriding attributes on existing fields.
But I was wondering: can I pass the entire order object to a form, or do I have to declare initial for every field?
You can pass the order object into the form but you will still have to populate each field individually either in the forms or in the view function.
So in your view you would do something like this:
initial = {'order_number': order.number, 'order_id': order.id}
form = OrderForm(initial=initial)
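If you would rather pass the order object itself, one option (a sketch, with illustrative field names) is to accept it in the form's __init__ and build the initial dict there instead of in every view:

from django import forms

class OrderEditForm(forms.Form):
    order_number = forms.CharField()
    work_type = forms.ChoiceField(choices=Order.WORK_TYPE_CHOICES)

    def __init__(self, *args, **kwargs):
        order = kwargs.pop('order', None)
        if order is not None and 'initial' not in kwargs:
            kwargs['initial'] = {
                'order_number': order.number,
                'work_type': order.work_type,
            }
        super(OrderEditForm, self).__init__(*args, **kwargs)

# usage in a view:
# form = OrderEditForm(order=order)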
The easiest way to prepopulate a form with data is to pass a dictionary as the first argument to the form constructor.
order_form = OrderEditForm(order_dict)
where order_dict is a dictionary that maps each of the "order" object's attribute names to its value. (Note that a model instance's __dict__ attribute is not callable and also contains internal state, so build the dictionary explicitly.)
One way to build such a dictionary is:
order_initial = Order.objects.filter(pk=order.pk).values()[0]
and then construct the form with:
order_form = OrderEditForm(order_initial)
Look at this example (how they populate values at "post" time):
For future reference to other people:
I have since found out after reading SO's comments and answers that it's better to use ModelForm even if you end up explicitly defining every field manually (using something like self.fields['foo'] = forms.CharField()).
In any case, if you are trying to pass a dictionary of current values in a form then the best (built-in) way to convert a model to a dictionary is actually using model_to_dict:
from django.forms.models import model_to_dict
order = Order.objects.get(pk=1)
dictionary = model_to_dict(order)
form = OrderEditForm(dictionary)
I got the solution from this blog. I hope this will be helpful for someone.
I have a few questions about validation in Models and Forms. Could you help me out with these:
Where should validation be done? Should it be in the Model or the Form? Is the right way to go about this to have validators in the form and constraints in the model?
What is the difference between writing a clean_<fieldname>() method in the form and writing a validator? I've seen that people often put validation checks in the clean_<fieldname>() method.
In the request that I'm handling, I have a param in the URL string called 'alive'. This is generally 1 or 0. What would be the correct way of defining this in my form? I need to validate it is a number and can only be 1 or 0. Is this the right way?
alive = models.IntegerField(null=False, max_value=1, min_value=0)
How do I define a default value for this field i.e. if this parameter isn't passed, I default to 0 (False).
I don't have a form on the client side. I'm using a Django form to validate my JS POST request.
In one of the model fields I need to store a screen resolution in the format 1234x4321. Should I declare this as a CharField and add some regular expression validation in both the Model and the Form? Any examples of regular expression validations would be helpful.
Thanks.
The validation should be done on the form, not the model. However, if you are using ModelForms, which usually makes a lot of sense, the form will inherit some of the validation rules from the model itself (those specific to the database, like maximum field length and database field type, but also whether fields can be left blank).
The default value of a field can be passed to the form's constructor via initial:
form = SomeForm(initial={'alive' : 0})
Although in your case, since the only values that can occur are zero and one, it would make sense to use a BooleanField instead (and in that case it would default to False).
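A sketch of such a form (the names are illustrative) which accepts the 'alive' parameter as 0/1 and falls back to 0 when it is missing:

from django import forms

class AliveForm(forms.Form):
    alive = forms.TypedChoiceField(
        choices=[('0', 'dead'), ('1', 'alive')],
        coerce=int,
        required=False,
        empty_value=0,   # used when the parameter is absent
    )

# in the view: form = AliveForm(request.GET)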
In the case of resolutions I would create a mapping between the possible resolution and some arbitrary value.
RESOLUTIONS = (
    ("1", "800x600"),
    ("2", "1024x768"),
    .....
)
and then pass it to the model:
resolutions = models.CharField(choices=RESOLUTIONS, max_length=1)
So that the user gets a select field with the corresponding options and values.
On the other hand, if you need the user to insert it him/herself, using two fields (one for width, another for height) would be much easier than validating the user input.
So you can define a method for the model:
def get_resolution(self):
    return "%sx%s" % (self.width, self.height)
I have a series of tests and cases in a database. Whenever a test is obsoleted, it gets end dated, and any sub-cases of that test should also be end dated. I see two ways to accomplish this:
1) Modify the save function to end date sub-cases.
2) Create a receiver which listens for Test models being saved, and then end dates their sub-cases.
Any reason to use one over the other?
Edit: I see this blog post suggests using the save method whenever you check given values of the model. Since I'm checking the end_date, maybe that suggests I should use a custom save?
Edit2: Also, for the record, the full hierarchy is Protocol -> Test -> Case -> Planned_Execution, and anytime one is end dated, every child must also be end dated. I figure I'll end up doing basically the same thing for each.
Edit3: It turns out that in order to tell whether the current save() is the one that is end dating the Test, I need access to both the old data and the new data, so I used a custom save. Here's what it looks like:
def save(self, *args, **kwargs):
    """Use a custom save to end date any sub-cases."""
    try:
        # Compare against the stored row to see whether this save end dates the Test.
        orig = Test.objects.get(id=self.id)
        enddated = (not orig.end_date) and self.end_date is not None
    except Test.DoesNotExist:
        # New object: nothing to compare against.
        enddated = False
    super(Test, self).save(*args, **kwargs)
    if enddated:
        # Only touch cases that haven't been end dated yet.
        for case in self.case_set.exclude(end_date__isnull=False):
            case.end_date = self.end_date
            case.enddater = self.enddater
            case.save()
I generally use this rule of thumb:
If you have to modify data so that the save won't fail, then override save() (you don't really have another option). For example, in an app I'm working on, I have a model with a text field that has a list of choices. This interfaces with old code, and replaces an older model that had a similar text field, but with a different list of choices. The old code sometimes passes my model a choice from the older model, but there's a 1:1 mapping between choices, so in such a case I can modify the choice to the new one. Makes sense to do this in save().
Otherwise, if the save can proceed without intervention, I generally use a post-save signal.
In my understanding, signals are a means for decoupling modules. Since your task seems to happen in only one module I'd customize save.
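For completeness, here is a sketch of the signal-based alternative (option 2 above); the receiver wiring is illustrative, not taken from the question:

from django.db.models.signals import post_save
from django.dispatch import receiver

@receiver(post_save, sender=Test)
def end_date_subcases(sender, instance, **kwargs):
    # End date any sub-cases that have not been end dated yet.
    if instance.end_date is not None:
        instance.case_set.filter(end_date__isnull=True).update(
            end_date=instance.end_date,
            enddater=instance.enddater,
        )

As the question's Edit3 notes, a plain post_save receiver cannot see the previous value of end_date, which is why the custom save() was chosen there.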