Django Rest Framework: POST and PUT in a single nested request - python

I'm using Django Rest Framework to create an object. The JSON contains nested objects as well; an array of objects to create and link to the "main object" and an object that should be partially updated.
JSON looks like this:
{
    "opcitem_set": [
        {
            "comment": "Test comment",
            "grade": "1",
            "name": "1a"
        },
        {
            "comment": "Test comment",
            "grade": "2",
            "name": "1b"
        },
        {
            "description": "Additional test item",
            "comment": "Additional comment",
            "grade": "1",
            "name": "extra_1"
        }
    ],
    "is_right_seat_training": false,
    "checked_as": "FC",
    "date": "2015-10-23",
    "check_reason": "Check ride",
    "opc_program": "2",
    "is_pc": true,
    "questionnaire_test_passed": "Passed",
    "pnf_time": 2,
    "other_comments_complete_crew": "Other comments",
    "other_comments_flying_pilot": "Other comments",
    "is_cat_2_training": false,
    "opc_passed": "Passed",
    "pilot": {
        "pc_valid_to": "2015-10-23",
        "id": 721,
        "email": "jens.nilsson#nextjet.se",
        "total_time": 3120,
        "medical_valid_to": "2015-10-23"
    },
    "pf_time": 2,
    "aircraft_type": "S340",
    "typeratingexaminer": 734
}
The "opcitem_set" contains objects of type OpcItem that should be created and have a ForeignKey to the main object. So far so good, I can do this by overriding the create() method on the ModelSerializer as outlined in http://www.django-rest-framework.org/api-guide/serializers/#writable-nested-representations.
Then we have the case of the "pilot" object. This will always contain an ID and some other fields to PATCH the object with that ID.
The "typeratingexaminer" field is just another "Pilot" object, but it shouldn't be PATCHed, just set as a foreign key.
My question is: can I PATCH (partially update) the "pilot" in the create() method as well, or would that break some sort of design pattern? Since it's really a PATCH and not a POST, should I do it in a separate request after the original request has finished? In that case, can I have a transaction spanning the two requests, so that if the second request fails, the first one is rolled back?
I would love to be able to send only one request from the client instead of splitting it into two. Maybe the JSON could be separated already in the ViewSet and sent to different serializers?
Happy to hear your thoughts about this, I'm a bit lost.

If you are not creating the main object but only the nested objects, you should override the .update() method in the serializer and do something like this:
def update(self, instance, validated_data):
    # Pull the nested data out before the normal field handling
    opcitem_set_data = validated_data.pop('opcitem_set', [])
    pilot_data = validated_data.pop('pilot', {})
    ...
    # Create the nested items and point them at the main object
    for opcitem_data in opcitem_set_data:
        OpcItem.objects.create(main_object=instance, **opcitem_data)
    # Partially update the related pilot with whatever fields were sent
    current_pilot = instance.pilot
    current_pilot.pc_valid_to = pilot_data.get('pc_valid_to', current_pilot.pc_valid_to)
    ...
    current_pilot.save()
    # Update the fields on instance itself here as well if you need to
    return instance
If you need to create the main object as well, then you need to override the .create() method instead. But then PATCHing the pilot in there is not really a clean way to do it.
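For illustration only, here is a minimal sketch of that route. The model names OpcCheck, OpcItem and Pilot, the main_object foreign key, the myapp module and the listed pilot fields are assumptions based on the question, not confirmed names, and validation subtleties (e.g. unique checks on the nested pilot) are glossed over:

from rest_framework import serializers

from myapp.models import OpcCheck, OpcItem, Pilot  # assumed module path


class OpcItemSerializer(serializers.ModelSerializer):
    class Meta:
        model = OpcItem
        exclude = ('main_object',)  # filled in by the parent serializer


class PilotPartialSerializer(serializers.ModelSerializer):
    # Declared explicitly so the client-supplied id survives validation.
    id = serializers.IntegerField()

    class Meta:
        model = Pilot
        fields = ('id', 'pc_valid_to', 'email', 'total_time', 'medical_valid_to')


class OpcCheckSerializer(serializers.ModelSerializer):
    opcitem_set = OpcItemSerializer(many=True)
    pilot = PilotPartialSerializer()

    class Meta:
        model = OpcCheck
        fields = '__all__'

    def create(self, validated_data):
        opcitem_set_data = validated_data.pop('opcitem_set')
        pilot_data = validated_data.pop('pilot')

        # Partially update the existing pilot identified by its id.
        pilot = Pilot.objects.get(pk=pilot_data.pop('id'))
        for field, value in pilot_data.items():
            setattr(pilot, field, value)
        pilot.save()

        # Create the main object, then the nested items pointing back at it.
        instance = OpcCheck.objects.create(pilot=pilot, **validated_data)
        for item_data in opcitem_set_data:
            OpcItem.objects.create(main_object=instance, **item_data)
        return instance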

I would recommend moving away from the serializer's create() method and building your extensive logic in the view, where you can make good use of simpler, dumb serializers where needed. You could surely do updates in the create() method of your serializer, but suddenly that's not a serializer anymore, it's more of a controller, so the logic is better placed in the view code, by overriding the create or post method. This design still lets the client send only one request: you massage the request data in the view and use simple serializers to instantiate/update objects, with their embedded data validation where needed.
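As a rough sketch of that view-level approach, reusing the assumed OpcItemSerializer and PilotPartialSerializer from the sketch above (again, every model and serializer name here is an assumption, not taken from the question), with one transaction wrapping everything so a failed step rolls back the rest:

from django.db import transaction
from rest_framework import serializers, status, viewsets
from rest_framework.response import Response

from myapp.models import OpcCheck, Pilot  # assumed module path


class OpcCheckPlainSerializer(serializers.ModelSerializer):
    class Meta:
        model = OpcCheck
        exclude = ('pilot',)  # the pilot is attached by the view after patching


class OpcCheckViewSet(viewsets.ModelViewSet):
    queryset = OpcCheck.objects.all()
    serializer_class = OpcCheckSerializer  # nested serializer used for reads

    def create(self, request, *args, **kwargs):
        data = dict(request.data)  # assumes a JSON body
        pilot_data = data.pop('pilot', {})
        opcitems_data = data.pop('opcitem_set', [])

        with transaction.atomic():
            # PATCH-style partial update of the existing pilot.
            pilot = Pilot.objects.get(pk=pilot_data.pop('id'))
            pilot_serializer = PilotPartialSerializer(pilot, data=pilot_data, partial=True)
            pilot_serializer.is_valid(raise_exception=True)
            pilot_serializer.save()

            # Plain create of the main object.
            serializer = OpcCheckPlainSerializer(data=data)
            serializer.is_valid(raise_exception=True)
            check = serializer.save(pilot=pilot)

            # Create the nested items pointing at the main object.
            for item_data in opcitems_data:
                item_serializer = OpcItemSerializer(data=item_data)
                item_serializer.is_valid(raise_exception=True)
                item_serializer.save(main_object=check)

        return Response(OpcCheckSerializer(check).data, status=status.HTTP_201_CREATED)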
If you have models and serializers you could share, we might be able to comment more to the point.

Related

Error converting uuid4 value to string in Pydantic model

I'm new to Python and FastAPI. The problem is that the FastAPI response body schema shows me this error every time I attempt to make a POST request. I intend to convert the uuid4 into a string using hex and then display the converted output in the response body.
What I want is that the field named api_key automatically generates dynamic default uuids for each input record (not the same as id but rather a field to store unique values)
422 Unprocessable Entity
{
    "detail": [
        {
            "loc": [
                "body"
            ],
            "msg": "'str' object is not callable",
            "type": "type_error"
        }
    ]
}
And here is the code I tried:
class Terminal(BaseModel):
    api_key: str = Field(title="api key", default_factory=uuid.uuid4().hex)
    name: str = Field(..., title="Terminal name", regex="[^\s]+")
    mac: MACAddressStr = Field(..., title="MAC address")
    mgmt_ip: IPv4Address = Field(..., title="Management IP")
All the fields that require user input work perfectly fine.
It also works when I enter api_key as a string manually.
My desired output:
{
    "api_key": "5876753e02f141b5a83f7e9cff6db1ba",  // auto-generated
    "name": "terminal1",
    "mac": "aa:bb:cc:dd:ee:ff",
    "mgmt_ip": "1.1.1.1"
}
From the docs:
default_factory: If provided, it must be a zero-argument callable that will be called when a default value is needed for this field. Among other purposes, this can be used to specify fields with mutable default values, as discussed below. It is an error to specify both default and default_factory.
So, doing this is going to work:
default_factory=lambda: uuid.uuid4().hex
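A minimal sketch of the corrected model (only the api_key declaration changes; the mac and mgmt_ip fields from the question are left out here because MACAddressStr is a custom type that isn't shown):

import uuid
from pydantic import BaseModel, Field

class Terminal(BaseModel):
    # default_factory needs a zero-argument callable; the lambda defers
    # uuid.uuid4().hex until a default value is actually needed.
    api_key: str = Field(title="api key", default_factory=lambda: uuid.uuid4().hex)
    name: str = Field(..., title="Terminal name", regex=r"[^\s]+")

print(Terminal(name="terminal1").api_key)  # a fresh 32-character hex string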

elasticsearch-dsl in python: How can I return the "inner hits" for my query?

I am currently exploring elasticsearch in python using the elasticsearch_dsl library. I am aware that my Elasticsearch knowledge is currently limited.
I have created a model like so:
class Post(InnerDoc):
    text = Text()
    id = Integer()

class User(Document):
    name = Text()
    posts = Object(doc_class=Post)
    signed_up_at = Date()
The data for posts is an array like this:
[
    {
        "text": "Test",
        "id": 2
    }
]
Storing my posts works. However, to me this seems wrong. I specify the "posts" attribute to be a Post - not a List of Posts.
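For context, a list is indeed what gets stored; a minimal sketch of indexing such a user (assuming client is an existing Elasticsearch connection and the index has been created):

from datetime import datetime

user = User(name="test user", signed_up_at=datetime.now())
user.posts = [
    Post(text="Test", id=2),
    Post(text="Another test", id=3),
]
user.save(using=client)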
Querying works, I can:
s = Search(using=client).query("match", posts__text="test")
and will retrieve the User that has a post containing the words as a result.
What I want is that I get the user + all Posts that qualified the user to appear in the result (meaning all posts containing the search phrase). I called that the inner hits, but I am not sure if this is correct.
Help would be highly appreciated!
I tried using "nested" instead of "match" for the query, but that does not work:
[nested] query does not support [posts]
I suspect that this has to do with the fact that my index is specified incorrectly.
I updated my model to this:
class Post(InnerDoc):
    text = Text(analyzer="snowball")
    id = Integer()

class User(Document):
    name = Text()
    posts = Nested(doc_class=Post)
    signed_up_at = Date()
This allows me to do the following query:
GET users/_search
{
    "query": {
        "nested": {
            "path": "posts",
            "query": {
                "match": {
                    "posts.text": "idea"
                }
            },
            "inner_hits": {}
        }
    }
}
This translates to the following elasticsearch-dsl query in python:
s = Search(using=client).query(
    "nested",
    path="posts",
    query=Q("match", **{"posts.text": "idea"}),
    inner_hits={},
)
The inner hits can then be read from the response metadata; a rough sketch, assuming the search above is executed with s.execute():
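response = s.execute()
for hit in response:
    # The posts that matched for this user are exposed via inner_hits metadata
    for post in hit.meta.inner_hits.posts:
        print(hit.name, post.id, post.text)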
Using Nested might be required because of how Elasticsearch represents objects internally (https://www.elastic.co/guide/en/elasticsearch/reference/current/nested.html). Since lists of objects may be flattened, a plain Object field might not allow retrieving complete inner hits that preserve the correct association of text and id for a post.

Serializing Django Model and including further information for ForeignKeyField + OneToOneField

Using Django 1.7.
I have a model class Topic that I want to serialize. Topic has an attribute Creator that is a ForeignKey to a class UserProfile. The output from serialization gives me the following string:
'{"fields": {"building": 1, "title": "Foobar", "created": "2015-02-13T16:14:47Z", "creator": 1}, "model": "bb.topic", "pk": 2}'
I want key "creator" to say the username of associated with UserProfile (as opposed to right now, where it is giving me the pk value associated with UserProfile. The username is held within a OneToOneField with django.contrib.auth.models class User.
I tried to implement a UserProfileManager, but either I have done it incorrectly or the following is not an appropriate strategy:
def get_by_natural_key(self, user, picture, company):
    return self.get(user_id=user.id, user_pk=user.pk, username=user.get_username, picture=picture, company=company)
Finally, I looked around SO and found pointers to this: https://code.google.com/p/wadofstuff/wiki/DjangoFullSerializers but it says it was last updated in 2011 so I am assuming there is something else out there.
Thanks. Any help is much appreciated.
It looks like you haven't implemented all the code needed to serialize the UserProfile with a natural key.
Actually, the get_by_natural_key() method is called when deserializing an object. If you want the object to be serialized with a natural key instead of its pk, you should also provide a natural_key() method on the model.
Something like:
class UserProfileManager(models.Manager):
    def get_by_natural_key(self, username, company):
        return self.get(user__username=username, company=company)

class UserProfile(models.Model):
    objects = UserProfileManager()
    user = models.OneToOneField(User)
    company = models.CharField(max_length=255)

    def natural_key(self):
        return (self.user.username, self.company)
Now, if you serialize a Topic instance:
from django.core import serializers
obj = serializers.serialize('json', Topic.objects.all(), indent=2, use_natural_foreign_keys=True, use_natural_primary_keys=True)
you should get an output similar to:
{
    "fields": {
        "building": 1,
        "title": "Foobar",
        "created": "2015-02-13T16:14:47Z",
        "creator": [
            "dummy",
            "ACME Inc."
        ]
    },
    "model": "bb.topic",
    "pk": 2
}
supposing a user named dummy exists with a company named ACME Inc. in its user profile.
Hope this helps.

Loading a Django Fixture containing Natural Keys

How do you load a Django fixture so that models referenced via natural keys don't conflict with pre-existing records?
I'm trying to load such a fixture, but I'm getting IntegrityErrors from my MySQL backend, complaining about Django trying to insert duplicate records, which doesn't make any sense.
As I understand Django's natural key feature, in order to fully support dumpdata and loaddata usage, you need to define a natural_key method in the model, and a get_by_natural_key method in the model's manager.
So, for example, I have two models:
class PersonManager(models.Manager):
    def get_by_natural_key(self, name):
        return self.get(name=name)

class Person(models.Model):
    objects = PersonManager()
    name = models.CharField(max_length=255, unique=True)

    def natural_key(self):
        return (self.name,)

class BookManager(models.Manager):
    def get_by_natural_key(self, title, *person_key):
        author = Person.objects.get_by_natural_key(*person_key)
        return self.get(title=title, author=author)

class Book(models.Model):
    objects = BookManager()
    author = models.ForeignKey(Person)
    title = models.CharField(max_length=255)

    def natural_key(self):
        return (self.title,) + self.author.natural_key()
    natural_key.dependencies = ['myapp.Person']
My test database already contains a sample Person and Book record, which I used to generate the fixture:
[
    {
        "pk": null,
        "model": "myapp.person",
        "fields": {
            "name": "bob"
        }
    },
    {
        "pk": null,
        "model": "myapp.book",
        "fields": {
            "author": [
                "bob"
            ],
            "title": "bob's book"
        }
    }
]
I want to be able to take this fixture and load it into any instance of my database to recreate the records, regardless of whether or not they already exist in the database.
However, when I run python manage.py loaddata myfixture.json I get the error:
IntegrityError: (1062, "Duplicate entry '1-1' for key 'myapp_person_name_uniq'")
Why is Django attempting to re-create the Person record instead of reusing the one that's already there?
Turns out the solution requires a very minor patch to Django's loaddata command. Since it's unlikely the Django devs would accept such a patch, I've forked it in my package of various Django admin related enhancements.
The key code change (lines 189-201 of loaddatanaturally.py) simply involves calling get_natural_key() to find any existing pk inside the loop that iterates over the deserialized objects.
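Conceptually, the change amounts to something like the sketch below inside the loop over deserialized objects (an illustration of the idea, not the actual patch):

for obj in objects:  # each obj is a django.core.serializers.base.DeserializedObject
    model = obj.object.__class__
    manager = model._default_manager
    if hasattr(obj.object, 'natural_key') and hasattr(manager, 'get_by_natural_key'):
        try:
            # Reuse the pk of an existing row with the same natural key
            # instead of inserting a duplicate.
            existing = manager.get_by_natural_key(*obj.object.natural_key())
            obj.object.pk = existing.pk
        except model.DoesNotExist:
            pass
    obj.save(using=using)  # 'using' is the database alias loaddata was called with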
Actually, loaddata is not supposed to work with existing data in the database; it is normally used for the initial load of models.
Look at this question for another way of doing it: Import data into Django model with existing data?

Excluding primary key in Django dumpdata with natural keys

How do you exclude the primary key from the JSON produced by Django's dumpdata when natural keys are enabled?
I've constructed a record that I'd like to "export" so others can use it as a template, by loading it into a separate database with the same schema without conflicting with other records in the same model.
As I understand Django's support for natural keys, this seems like what NKs were designed to do. My record has a unique name field, which is also used as the natural key.
So when I run:
from django.core import serializers
from myapp.models import MyModel
obj = MyModel.objects.get(id=123)
serializers.serialize('json', [obj], indent=4, use_natural_keys=True)
I would expect an output something like:
[
    {
        "model": "myapp.mymodel",
        "fields": {
            "name": "foo",
            "create_date": "2011-09-22 12:00:00",
            "create_user": [
                "someusername"
            ]
        }
    }
]
which I could then load into another database using loaddata, expecting it to be dynamically assigned a new primary key. Note that my "create_user" field is a FK to Django's auth.User model, which supports natural keys, and it is output as its natural key instead of the integer primary key.
However, what's generated is actually:
[
    {
        "pk": 123,
        "model": "myapp.mymodel",
        "fields": {
            "name": "foo",
            "create_date": "2011-09-22 12:00:00",
            "create_user": [
                "someusername"
            ]
        }
    }
]
which will clearly conflict with and overwrite any existing record with primary key 123.
What's the best way to fix this? I don't want to retroactively change all the auto-generated primary key integer fields to whatever the equivalent natural keys are, since that would cause a performance hit as well as be labor intensive.
Edit: This seems to be a bug that was reported...2 years ago...and has largely been ignored...
Updating the answer for anyone coming across this in 2018 and beyond.
There is a way to omit the primary key through the use of natural keys and the unique_together Meta option. Taken from the Django documentation on serialization:
You can use this command to test it:
python manage.py dumpdata app.model --pks 1,2,3 --indent 4 --natural-primary --natural-foreign > dumpdata.json
Serialization of natural keys
So how do you get Django to emit a natural key when serializing an object? Firstly, you need to add another method – this time to the model itself:
class Person(models.Model):
    objects = PersonManager()
    first_name = models.CharField(max_length=100)
    last_name = models.CharField(max_length=100)
    birthdate = models.DateField()

    def natural_key(self):
        return (self.first_name, self.last_name)

    class Meta:
        unique_together = (('first_name', 'last_name'),)
That method should always return a natural key tuple – in this example, (first name, last name). Then, when you call serializers.serialize(), you provide use_natural_foreign_keys=True or use_natural_primary_keys=True arguments:
serializers.serialize('json', [book1, book2], indent=2, use_natural_foreign_keys=True, use_natural_primary_keys=True)
When use_natural_foreign_keys=True is specified, Django will use the natural_key() method to serialize any foreign key reference to objects of the type that defines the method.
When use_natural_primary_keys=True is specified, Django will not provide the primary key in the serialized data of this object since it can be calculated during deserialization:
{
    "model": "store.person",
    "fields": {
        "first_name": "Douglas",
        "last_name": "Adams",
        "birth_date": "1952-03-11"
    }
}
The problem with JSON is that you can't simply omit the pk field, since it is required when loading the fixture data again. If it is missing, loaddata will fail with:
$ python manage.py loaddata some_data.json
[...]
File ".../django/core/serializers/python.py", line 85, in Deserializer
data = {Model._meta.pk.attname : Model._meta.pk.to_python(d["pk"])}
KeyError: 'pk'
As pointed out in the answer to this question, you can use yaml or xml if you really want to omit the pk attribute OR just replace the primary key value with null.
import re
from django.core import serializers
some_objects = MyClass.objects.all()
s = serializers.serialize('json', some_objects, use_natural_keys=True)
# Replace id values with null - adjust the regex to your needs
s = re.sub('"pk": [0-9]{1,5}', '"pk": null', s)
Override the Serializer class in a separate module:
from django.core.serializers.json import Serializer as JsonSerializer
from django.utils.encoding import smart_unicode

class Serializer(JsonSerializer):
    def end_object(self, obj):
        self.objects.append({
            "model": smart_unicode(obj._meta),
            "fields": self._current,
            # The original method adds the pk here
        })
        self._current = None
Register it in Django:
serializers.register_serializer("json_no_pk", "path.to.module.with.custom.serializer")
And use it:
serializers.serialize('json_no_pk', [obj], indent=4, use_natural_keys=True)
