I have a React/Django application where users can answer multiple-choice questions. The "choices" array is rendered in the UI in this exact order:
{
  "id": 2,
  "question_text": "Is Lebron James the GOAT?",
  "choices": [
    {
      "id": 5,
      "choice_text": "No",
      "votes": 0,
      "percent": 0
    },
    {
      "id": 4,
      "choice_text": "Yes",
      "votes": 1,
      "percent": 100
    }
  ]
}
When I select a choice in development mode, I send a request to Django to increment the votes counter for that choice, and the response comes back with the updated votes in the same order. When I select a choice in a production build (npm run build), the order comes back switched:
{
  "id": 2,
  "question_text": "Is Lebron James the GOAT?",
  "choices": [
    {
      "id": 4,
      "choice_text": "Yes",
      "votes": 1,
      "percent": 50
    },
    {
      "id": 5,
      "choice_text": "No",
      "votes": 1,
      "percent": 50
    }
  ]
}
I thought the order of a JSON array had to be preserved. Can anyone explain why this is happening? I'm almost positive the issue originates in Django. Here is the function-based view:
@api_view(['POST'])
def vote_poll(request, poll_id):
    if request.method == 'POST':
        poll = Poll.objects.get(pk=poll_id)
        selected_choice = Choice.objects.get(pk=request.data['selected_choice_id'])
        selected_choice.votes += 1
        selected_choice.save()
        poll_serializer = PollAndChoicesSerializer(poll)
        return Response({'poll': poll_serializer.data})
You need to set the ordering option in your Choice model's Meta if you want a consistent order:
class Choice(models.Model):
    class Meta:
        ordering = ['-id']
From the Django docs:
Warning
Ordering is not a free operation. Each field you add to the ordering incurs a cost to your database. Each foreign key you add will implicitly include all of its default orderings as well.
If a query doesn’t have an ordering specified, results are returned from the database in an unspecified order. A particular ordering is guaranteed only when ordering by a set of fields that uniquely identify each object in the results. For example, if a name field isn’t unique, ordering by it won’t guarantee objects with the same name always appear in the same order.
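If you would rather not change the model's default ordering, you can also order the related queryset explicitly where it is serialized. This is only a sketch: it assumes a ChoiceSerializer and a choices related name on Poll, neither of which is shown in the question.

from rest_framework import serializers

class PollAndChoicesSerializer(serializers.ModelSerializer):
    # Serialize the nested choices from an explicitly ordered queryset so the
    # response order is deterministic instead of whatever the database returns.
    choices = serializers.SerializerMethodField()

    class Meta:
        model = Poll
        fields = ('id', 'question_text', 'choices')

    def get_choices(self, obj):
        queryset = obj.choices.order_by('id')  # assumes related_name='choices'
        return ChoiceSerializer(queryset, many=True).data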
I am parsing a file and end up with a dictionary like this:
user_data = {
    "title": "TestA",
    "sects": [
        {
            "type": "cmd",
            "cmd0": {
                "moveX": 12,
                "moveY": 34,
                "action": "fire"
            },
            "session": 9999,
            "cmd1": {
                "moveX": 56,
                "moveY": 78,
                "action": "stop"
            },
            "endType": 0
        },
        {
            "type": "addUsers",
            "user1": {
                "name": "John",
                "city": "London"
            },
            "user2": {
                "name": "Mary",
                "city": "New York"
            },
            "post": "yes"
        }
    ]
}
I am attempting to validate it using marshmallow, but I am not sure how to handle two things:
Within sects, the content of each nested dict depends on the type (cmd, addUsers, etc.). Is there a way to pick a schema based on the value of a field?
Is there such a thing as a 'startsWith' field to handle the fact that I may have cmd0, cmd1, ... cmdN?
So, something like the following:
class CmdSchema(Schema):
    type = fields.Str()
    cmdN = fields.Dict(...)  # Allow cmd1, cmd2, etc.
    session = fields.Int()
    endType = fields.Int()

class AddUsersSchema(Schema):
    type = fields.Str()
    userN = fields.Dict(...)  # Allow user1, user2, etc.
    post = fields.Str()

class ParsedFileSchema(Schema):
    title = fields.Str()
    sects = fields.Nested(...)  # Choose nested schema based on sects[i]['type']?
Note: I cannot change the format or content of the file I'm parsing, and order matters (I need to retain the order of cmd1, cmd2, etc. so they can be processed in the correct order).
1/ What you want to achieve is polymorphism. You may want to check out marshmallow-oneofschema.
2/ This does not exist to my knowledge. It shouldn't be too hard to create as a custom validator, though. It might even make it to the core.
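A rough sketch of how the two pieces could fit together, assuming marshmallow 3 and marshmallow-oneofschema. The schema and field names follow the question, but the startswith check is a hypothetical custom validator, not a built-in:

from marshmallow import Schema, fields, INCLUDE, ValidationError, validates_schema
from marshmallow_oneofschema import OneOfSchema

class CmdSchema(Schema):
    class Meta:
        unknown = INCLUDE  # let cmd0, cmd1, ... pass through; they are checked below

    type = fields.Str(required=True)
    session = fields.Int()
    endType = fields.Int()

    @validates_schema(pass_original=True)
    def check_cmd_keys(self, data, original_data, **kwargs):
        # Hypothetical "startsWith" rule: any key beyond the declared ones must start with "cmd"
        known = {"type", "session", "endType"}
        for key in original_data:
            if key not in known and not key.startswith("cmd"):
                raise ValidationError(f"unexpected field: {key}")

class AddUsersSchema(Schema):
    class Meta:
        unknown = INCLUDE  # same idea for user1, user2, ...

    type = fields.Str(required=True)
    post = fields.Str()

class SectSchema(OneOfSchema):
    # Pick the concrete schema from the value of the "type" field
    type_field = "type"
    type_field_remove = False
    type_schemas = {"cmd": CmdSchema, "addUsers": AddUsersSchema}

class ParsedFileSchema(Schema):
    title = fields.Str(required=True)
    sects = fields.List(fields.Nested(SectSchema))

result = ParsedFileSchema().load(user_data)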
I have a list of ids called batch, and I want to update all of the matching documents to set a field called fetched to true.
Original Test Collection
[
  {
    "user_id": 1
  },
  {
    "user_id": 2
  }
]
batch variable
[1, 2]
UpdateMany:
mongodb["test"].update_many({"user_id": {"$in": batch}}, {"$set": {"fetched": True}})
I can do that using the above statement.
I also have another variable called user_profiles, which is a list of JSON objects. In the same update, I ALSO want to set a field profile to the profile from user_profiles whose id matches the user_id I am updating.
user_profiles
[
  {
    "id": 1,
    "name": "john"
  },
  {
    "id": 2,
    "name": "jane"
  }
]
Expected Final Result
[
  {
    "user_id": 1,
    "fetched": true,
    "profile": {
      "id": 1,
      "name": "john"
    }
  },
  {
    "user_id": 2,
    "fetched": true,
    "profile": {
      "id": 2,
      "name": "jane"
    }
  }
]
I have millions of these documents, so I am trying to keep performance in mind.
You'll want to use db.collection.bulkWrite; see the updateOne example in the docs.
If you've got millions, you'll want to batch the bulkWrites into smaller chunks that fit your database server's capabilities.
Edit:
@Kay I just re-read the second part of your question, which I didn't address earlier. You may want to try the $out stage of the aggregation pipeline. Be careful, though: it overwrites the existing collection, so if you don't project all fields you could lose data. Definitely worth testing against a temporary collection first.
Finally, you could also just create a view based on the aggregation query (with $lookup) if you don't absolutely need that data physically stored in the same collection.
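For the bulk_write route in PyMongo, a minimal sketch might look like the following. It assumes the mongodb["test"] collection, batch, and user_profiles from the question, and the chunk size of 1000 is just an illustrative number:

from pymongo import UpdateOne

# Index the profiles by id so each update can pull its matching profile
profiles_by_id = {p["id"]: p for p in user_profiles}

ops = [
    UpdateOne(
        {"user_id": user_id},
        {"$set": {"fetched": True, "profile": profiles_by_id[user_id]}},
    )
    for user_id in batch
    if user_id in profiles_by_id
]

# Send the operations in chunks so a single bulk request stays manageable
CHUNK_SIZE = 1000
for i in range(0, len(ops), CHUNK_SIZE):
    mongodb["test"].bulk_write(ops[i:i + CHUNK_SIZE], ordered=False)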
I am trying to create a full-featured M2M (through table) nested serializer, which works perfectly on create(). However, when I take the JSON returned by the GET version of the serializer, which contains the ids of the nested records, and perform a PUT against the same serializer, the 'id' fields have been stripped from the nested records' validated_data by the time it reaches the update() method.
{
  "id": 1,
  "addresses": [
    {
      "id": 1,  # This is ripped out
      "city": "Oakville",
      "addr": "13 Main St",
      "postal_code": "01101"
    },
    {
      "id": 2,  # This is ripped out
      "city": "Watertown",
      "addr": "88 Main St",
      "postal_code": "01101"
    }
  ],
  "customer_number": 1234,
  "customer_type": 1,
  "pricing_sequence": 2,
  "name": "Customer number 1234"
}
Any ideas?
Yes, this is a duplicate of "django-rest-framework: serializer from DATA don't update model ID".
I think I figured it out. ModelSerializer makes the 'id' field read-only by default, so it is dropped from validated_data. The solution is to declare an explicit 'id' field on the nested serializer. See tomchristie's comment: https://github.com/tomchristie/django-rest-framework/issues/2114
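A minimal sketch of that workaround, assuming a nested AddressSerializer for the addresses shown above (the model and serializer names are illustrative):

from rest_framework import serializers

class AddressSerializer(serializers.ModelSerializer):
    # Declared explicitly so the field is writable and survives into
    # validated_data, letting update() match nested records to rows by id.
    id = serializers.IntegerField(required=False)

    class Meta:
        model = Address  # illustrative model name
        fields = ('id', 'city', 'addr', 'postal_code')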
Currently I have a viewset for an example Warehouse, and I want to pass an additional 'filters' list into each dictionary that is returned.
My WarehouseViewSet:
class WarehouseViewSet(viewsets.ReadOnlyModelViewSet):
    filters = [{'date': 'Date/Time'}]
    queryset = Warehouse.objects.all()
    serializer_class = WarehouseSerializer
WarehouseSerializer:
class WarehouseSerializer(serializers.ModelSerializer):
    class Meta:
        model = Warehouse
        fields = ('name', 'address', 'action_list')
Currently I get a JSON list response like:
[
  {
    "id": 1,
    "name": "Brameda Warehouse",
    "address": "Bergijk"
  },
  {
    "id": 2,
    "name": "Amazon",
    "address": "UK"
  }
]
I would like to get:
[
  {
    "id": 1,
    "name": "Brameda Warehouse",
    "address": "Bergijk",
    "filters": [
      {"date": "dateTime"}, {"active": "ActiveObject"}
    ]
  },
  {
    "id": 2,
    "name": "Amazon",
    "address": "UK",
    "filters": [
      {"date": "dateTime"}, {"active": "ActiveObject"}
    ]
  }
]
I understand that a single filters list outside the object dictionaries would be enough, but I would like to know how to pass lists inside the objects.
Any ideas on how to pass additional lists so they are returned inside each JSON object would be appreciated.
I'm a bit unclear as to what you want, but if you just want to add a read-only computed field to the output, you can use SerializerMethodField:
class WarehouseSerializer(serializers.ModelSerializer):
    # your other declared fields here
    filters = serializers.SerializerMethodField()

    # your Meta options here

    def get_filters(self, obj):
        return ['some', 'stuff', 'here', {'and': 'more'}]
The method has to be named get_<field_name> (there is an option to change that, but I don't really see any point in using it).
You get the instance being serialized as obj.
You may return anything that is made of regular types (numbers, strings, dicts, lists, tuples, booleans, None).
If the data has to come from outside, you should have the caller add it to the context, and it will be available on self.context['foobar'].
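For example, a sketch of passing the list in through the serializer context from the viewset; the get_serializer_context override and the hard-coded filters list here are illustrative:

from rest_framework import serializers, viewsets

class WarehouseViewSet(viewsets.ReadOnlyModelViewSet):
    queryset = Warehouse.objects.all()
    serializer_class = WarehouseSerializer

    def get_serializer_context(self):
        context = super().get_serializer_context()
        context['filters'] = [{'date': 'dateTime'}, {'active': 'ActiveObject'}]
        return context

class WarehouseSerializer(serializers.ModelSerializer):
    filters = serializers.SerializerMethodField()

    class Meta:
        model = Warehouse
        fields = ('id', 'name', 'address', 'filters')

    def get_filters(self, obj):
        # Read the list that the view put into the context
        return self.context.get('filters', [])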
Given the JSON result below from tastypie, I would like to add a new value at check.payments_total equal to the total amount of the payments (in this case, 44.00). Any clue how to do this? I'm completely stumped. payments is a foreign key joined to the check table.
{
  "objects": [
    {
      "check": {
        "id": "58a81b36-1ea6-403b-9902-a50cbd13cf2e",
        "number": 2,
        "payments": [
          {
            "amount": "5.00"
          },
          {
            "amount": "39.00"
          }
        ]
      }
    }
  ]
}
If this is for the response, you could override the following method in your resource (the snippet is from tastypie.resources.Resource):
def alter_list_data_to_serialize(self, request, data):
    """
    A hook to alter list data just before it gets serialized & sent to the user.

    Useful for restructuring/renaming aspects of the what's going to be
    sent.

    Should accommodate for a list of objects, generally also including
    meta data.
    """
    return data
Just include something like this (not tested; consider it pseudo-code):
total_amount = 0.0
for obj in data['objects']:
    for payment in obj['check']['payments']:
        total_amount += float(payment['amount'])
return {'objects': data['objects'], 'total_amount': total_amount}
and you should be done.
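If you want the total attached to each check as check.payments_total (as in the question) rather than a single response-level figure, a sketch along these lines might work. It assumes the entries in data['objects'] look like the plain dicts in the question's JSON (depending on your resource setup they may be Bundle objects whose payload is on .data), and uses Decimal since the amounts are decimal strings:

from decimal import Decimal

def alter_list_data_to_serialize(self, request, data):
    for obj in data['objects']:
        check = obj['check']
        # Sum this check's payment amounts ("5.00", "39.00") as Decimals
        total = sum(Decimal(p['amount']) for p in check.get('payments', []))
        check['payments_total'] = str(total)
    return data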