Serializing data with marshmallow if data key is unknown - python

I'm trying to serialise data where the data key is not known beforehand. The data would look something like this:
{
"unknown key": ["list", "of", "random", "strings"]
}
Is this something that's possible with marshmallow? I can't seem to find anything in the docs about it.

You can implement a method to dynamically generate a Schema class at runtime.
from marshmallow import Schema as BaseSchema

class Schema(BaseSchema):
    @classmethod
    def from_dict(cls, fields_dict):
        attrs = fields_dict.copy()
        attrs["Meta"] = type(
            "GeneratedMeta", (getattr(cls, "Meta", object),), {"register": False}
        )
        return type("", (cls,), attrs)
Usage:
from marshmallow import fields
MySchema = Schema.from_dict({
    "unknown_key": fields.List(fields.Str())
})
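For illustration, a minimal, hedged sketch of using the generated MySchema to serialize a payload shaped like the one in the question (the .data note covers marshmallow 2.x, where dump() returns a result object rather than a dict):
payload = {"unknown_key": ["list", "of", "random", "strings"]}
schema = MySchema()
result = schema.dump(payload)
# marshmallow 3.x: result is the serialized dict
# marshmallow 2.x: use result.data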
Update: Schema.from_dict is now a built-in method in marshmallow 3.0, which will be released soon. See https://stevenloria.com/dynamic-schemas-in-marshmallow/ for usage examples.

Related

How to deserialize a BSON structure to a marshmallow schema

I'm trying to convert a BSON structure to a schema using the marshmallow library.
Below is the marshmallow schema:
class GeneSchema(Schema):
    """description of class"""
    id_entrez = fields.Integer(required=True, error_messages={'required': "The 'id_entrez' field is required."})
    symbol = fields.String()

    @validates('id_entrez')
    def validate_id_entrez(self, data):
        if data <= 0:
            raise ValidationError("The 'id_entrez' field must be greater than zero.")
Below is the BSON that will be converted to the schema:
[{"symbol": "VAMP4", "_id": {"$oid": "57ae3b175a945932fcbdf41d"}, "id_entrez": 8674}, {"symbol": "CCT5", "_id": {"$oid": "57ae3b175a945932fcbdf41e"}, "id_entrez": 22948}]
Note that the BSON has the "_id" field as an ObjectId ("$oid"), because the data is the result of a MongoDB query.
Does anyone know how to convert this BSON to the marshmallow schema correctly?
Thank you all!
I don't know if this question is still relevant, but I would like to show my solution for pushing an ObjectId through a marshmallow schema:
I simply use the pre-processing hook pre_load to convert the ObjectId to a string:
@pre_load
def convert_objectid(self, in_data, **kwargs):
    if "_id" in in_data:
        in_data["_id"] = str(in_data["_id"])
    return in_data
You can still use your schema to parse MongoDB output; just ignore the extra "_id" field. If, on the other hand, you do want to parse that "_id", just add an extra non-required field to your schema.
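Putting those two suggestions together, here is a hedged sketch of a complete schema (assuming marshmallow 3.x, where unknown keys can be dropped with Meta.unknown = EXCLUDE; the documents variable stands in for the list returned by your MongoDB query):
from marshmallow import Schema, fields, pre_load, validates, ValidationError, EXCLUDE

class GeneSchema(Schema):
    class Meta:
        unknown = EXCLUDE  # silently drop keys the schema does not declare, e.g. "_id"

    id_entrez = fields.Integer(required=True)
    symbol = fields.String()

    @pre_load
    def convert_objectid(self, in_data, **kwargs):
        # Stringify the ObjectId so it does not break loading if you decide to keep "_id".
        if "_id" in in_data:
            in_data["_id"] = str(in_data["_id"])
        return in_data

    @validates('id_entrez')
    def validate_id_entrez(self, data):
        if data <= 0:
            raise ValidationError("The 'id_entrez' field must be greater than zero.")

genes = GeneSchema(many=True).load(documents)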

How to set the JSON encoder in marshmallow?

How do I override the JSON encoder used by the marshmallow library so that it can serialize a Decimal field? I think I can do this by overriding json_module in the base Schema or Meta class, but I don't know how:
https://github.com/marshmallow-code/marshmallow/blob/dev/marshmallow/schema.py#L194
I trawled all the docs and read the code, but I'm not a Python native.
If you want to serialize a Decimal field (and keep the value as a number), you can override the default json module used by Marshmallow in its dumps() call to use simplejson instead.
To do this, just add a class Meta definition to your schema, and specify a json_module property for this class.
Example:
import simplejson
from marshmallow import Schema, fields

class MySchema(Schema):
    amount = fields.Decimal()

    class Meta:
        json_module = simplejson
Then, to serialize:
my_schema = MySchema()
my_schema.dumps(my_object)
I think the solution is to use marshmallow.fields.Decimal with as_string=True:
This field serializes to a decimal.Decimal object by default. If you
need to render your data as JSON, keep in mind that the json module
from the standard library does not encode decimal.Decimal. Therefore,
you must use a JSON library that can handle decimals, such as
simplejson, or serialize to a string by passing as_string=True.
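A minimal sketch of that as_string=True approach, assuming only the stdlib json module (the Decimal value here is illustrative):
import json
from decimal import Decimal
from marshmallow import Schema, fields

class MySchema(Schema):
    # as_string=True dumps the field as a string, which the stdlib json module can encode.
    amount = fields.Decimal(as_string=True)

data = MySchema().dump({"amount": Decimal("19.99")})
# marshmallow 3.x returns the dict directly; in 2.x take .data from the result.
print(json.dumps(data))  # -> {"amount": "19.99"}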
I had the same issue and I ended up changing the field on the Schema to a string. In my case, since I'm only going to return it as JSON, it really doesn't matter whether it is a string or a decimal.
from marshmallow_sqlalchemy import ModelSchema
from marshmallow import fields

class CurrencyValueSchema(ModelSchema):
    class Meta:
        model = CurrencyValue

    value = fields.String()
My returned json:
{
    "currency_values": [
        {
            "id": 9,
            "timestamp": "2016-11-18T23:59:59+00:00",
            "value": "0.944304"
        },
        {
            "id": 10,
            "timestamp": "2016-11-18T23:59:59+00:00",
            "value": "3.392204"
        }
    ]
}

Django Object Serialize And Custom Date Formatting

So I am using this code to serialize Django objects to a JSON format (so I can send to RabbitMQ over Celery):
import django.core.serializers as serializers
import json
def serialize(obj):
    d = json.loads(serializers.serialize('json', [obj]).strip('[]'))
    filtered_fields = {
        'id': d['pk'],
    }
    for key, value in d['fields'].iteritems():
        filtered_fields[underscore_to_camelcase(str(key))] = value
    return filtered_fields
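The snippet calls an underscore_to_camelcase helper that isn't shown in the question; a plausible sketch (purely illustrative, not from the original post):
def underscore_to_camelcase(name):
    # "date_of_birth" -> "dateOfBirth"
    parts = name.split('_')
    return parts[0] + ''.join(part.capitalize() for part in parts[1:])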
This returns something like:
{
"firstName": "Foo",
"lastName": "Bar",
"createdAt": "2013-12-15T20:53:59.615",
"updatedAt": "2013-12-15T20:53:59.615",
"dateOfBirth": "1990-05-17",
"id": "foo#bar.com"
}
Is there a way to tell the Django serialiser to convert datetime objects to Zulu format? So instead of:
2013-12-15T20:53:59.615
I want:
2013-12-15T20:53:59Z
If you use Django's simplejson for serializing, you could do something like:
simplejson.dumps(data, cls=LazyEncoder)
and have that LazyEncoder take care of special cases like translations and, in your case, datetime formatting:
class LazyEncoder(simplejson.JSONEncoder):
    def default(self, obj):
        if isinstance(obj, datetime.datetime):
            return ...  # your formatting goes here
        return super(LazyEncoder, self).default(obj)
The only problem is that simplejson.dumps returns a string. But if you look into it, perhaps you can use the same LazyEncoder for your purposes (https://docs.djangoproject.com/en/dev/topics/serialization/#id2). Just an idea here; I haven't had the need to dig deeper into this, so I'm not sure if it's useful to you.
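To produce the exact Zulu format asked for above, the default() hook could format the datetime explicitly. A hedged sketch, using the stdlib json module and a hypothetical ZuluEncoder name rather than Django's bundled simplejson:
import json
import datetime

class ZuluEncoder(json.JSONEncoder):
    def default(self, obj):
        if isinstance(obj, datetime.datetime):
            # strftime drops the microseconds and appends "Z": 2013-12-15T20:53:59Z
            return obj.strftime('%Y-%m-%dT%H:%M:%SZ')
        if isinstance(obj, datetime.date):
            return obj.isoformat()
        return super(ZuluEncoder, self).default(obj)

dt = datetime.datetime(2013, 12, 15, 20, 53, 59, 615000)
print(json.dumps({"createdAt": dt}, cls=ZuluEncoder))
# -> {"createdAt": "2013-12-15T20:53:59Z"}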

Excluding primary key in Django dumpdata with natural keys

How do you exclude the primary key from the JSON produced by Django's dumpdata when natural keys are enabled?
I've constructed a record that I'd like to "export" so others can use it as a template, by loading it into a separate databases with the same schema without conflicting with other records in the same model.
As I understand Django's support for natural keys, this seems like what NKs were designed to do. My record has a unique name field, which is also used as the natural key.
So when I run:
from django.core import serializers
from myapp.models import MyModel
obj = MyModel.objects.get(id=123)
serializers.serialize('json', [obj], indent=4, use_natural_keys=True)
I would expect an output something like:
[
    {
        "model": "myapp.mymodel",
        "fields": {
            "name": "foo",
            "create_date": "2011-09-22 12:00:00",
            "create_user": [
                "someusername"
            ]
        }
    }
]
which I could then load into another database, using loaddata, expecting it to be dynamically assigned a new primary key. Note that my "create_user" field is an FK to Django's auth.User model, which supports natural keys, and it is output as its natural key instead of the integer primary key.
However, what's generated is actually:
[
    {
        "pk": 123,
        "model": "myapp.mymodel",
        "fields": {
            "name": "foo",
            "create_date": "2011-09-22 12:00:00",
            "create_user": [
                "someusername"
            ]
        }
    }
]
which will clearly conflict with and overwrite any existing record with primary key 123.
What's the best way to fix this? I don't want to retroactively change all the auto-generated primary key integer fields to whatever the equivalent natural keys are, since that would cause a performance hit as well as be labor intensive.
Edit: This seems to be a bug that was reported...2 years ago...and has largely been ignored...
Updating the answer for anyone coming across this in 2018 and beyond.
There is a way to omit the primary key through the use of natural keys and the unique_together Meta option. Taken from the Django documentation on serialization:
You can use this command to test:
python manage.py dumpdata app.model --pks 1,2,3 --indent 4 --natural-primary --natural-foreign > dumpdata.json
Serialization of natural keys
So how do you get Django to emit a natural key when serializing an object? Firstly, you need to add another method – this time to the model itself:
class Person(models.Model):
    objects = PersonManager()

    first_name = models.CharField(max_length=100)
    last_name = models.CharField(max_length=100)
    birthdate = models.DateField()

    def natural_key(self):
        return (self.first_name, self.last_name)

    class Meta:
        unique_together = (('first_name', 'last_name'),)
That method should always return a natural key tuple – in this example, (first name, last name). Then, when you call serializers.serialize(), you provide use_natural_foreign_keys=True or use_natural_primary_keys=True arguments:
serializers.serialize('json', [book1, book2], indent=2, use_natural_foreign_keys=True, use_natural_primary_keys=True)
When use_natural_foreign_keys=True is specified, Django will use the natural_key() method to serialize any foreign key reference to objects of the type that defines the method.
When use_natural_primary_keys=True is specified, Django will not provide the primary key in the serialized data of this object since it can be calculated during deserialization:
{
    "model": "store.person",
    "fields": {
        "first_name": "Douglas",
        "last_name": "Adams",
        "birth_date": "1952-03-11"
    }
}
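Note that the Person model above references a PersonManager that the excerpt doesn't show; per the Django documentation, it is a custom manager exposing get_by_natural_key(), roughly:
from django.db import models

class PersonManager(models.Manager):
    def get_by_natural_key(self, first_name, last_name):
        # Used during loaddata to resolve a natural key back to an existing row.
        return self.get(first_name=first_name, last_name=last_name)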
The problem with JSON is that you can't omit the pk field, since it will be required when loading the fixture data again. If it is missing, loaddata will fail with:
$ python manage.py loaddata some_data.json
[...]
File ".../django/core/serializers/python.py", line 85, in Deserializer
data = {Model._meta.pk.attname : Model._meta.pk.to_python(d["pk"])}
KeyError: 'pk'
As pointed out in the answer to this question, you can use yaml or xml if you really want to omit the pk attribute OR just replace the primary key value with null.
import re
from django.core import serializers
some_objects = MyClass.objects.all()
s = serializers.serialize('json', some_objects, use_natural_keys=True)
# Replace id values with null - adjust the regex to your needs
s = re.sub('"pk": [0-9]{1,5}', '"pk": null', s)
Override the Serializer class in a separate module:
from django.core.serializers.json import Serializer as JsonSerializer
from django.utils.encoding import smart_unicode

class Serializer(JsonSerializer):
    def end_object(self, obj):
        self.objects.append({
            "model": smart_unicode(obj._meta),
            "fields": self._current,
            # Original method adds the pk here
        })
        self._current = None
Register it in Django:
serializers.register_serializer("json_no_pk", "path.to.module.with.custom.serializer")
And use it:
serializers.serialize('json_no_pk', [obj], indent=4, use_natural_keys=True)

PyMongo: Parsing|Serializing query output of a collection

By default, the collection.find() and collection.find_one() functions return dictionary types, and if you pass the parameter as_class=SomeUserClass, PyMongo will try to parse the result into that class.
However, it seems this class must also be derived from a dictionary (it needs a __setitem__ method defined so that keys can be added to it).
Here I want to set the properties of the class instead. How can I achieve this?
Also, my collection class contains some child classes as properties, so how can I set the properties of the child classes as well?
It sounds like you want something like an object-relational mapper. I am the primary author of one, Ming, but several others exist for Python as well. In Ming, you might do the following to set up your mapping:
from ming import collection, schema, Field
from ming.orm import (mapper, Mapper, RelationProperty,
                      ForeignIdProperty)

# `session` and `ormsession` are assumed to be Ming sessions configured elsewhere.
WikiDoc = collection('wiki_page', session,
    Field('_id', schema.ObjectId()),
    Field('title', str, index=True),
    Field('text', str))

CommentDoc = collection('comment', session,
    Field('_id', schema.ObjectId()),
    Field('page_id', schema.ObjectId(), index=True),
    Field('text', str))

class WikiPage(object): pass
class Comment(object): pass

ormsession.mapper(WikiPage, WikiDoc, properties=dict(
    comments=RelationProperty('WikiComment')))

ormsession.mapper(Comment, CommentDoc, properties=dict(
    page_id=ForeignIdProperty('WikiPage'),
    page=RelationProperty('WikiPage')))

Mapper.compile_all()
Then you can query for some particular page via:
pg = WikiPage.query.get(title='MyPage')
pg.comments # loads comments via a second query from MongoDB
The various ODMs I know of for MongoDB in Python are listed below.
Ming
MongoKit
MongoEngine
I have solved this by adding __setitem__ to my class. Then I do:
result = as_class()
for key, value in dict_expr.items():
    result.__setitem__(key, value)
and in my class, __setitem__ looks like this:
def __setitem__(self, key, value):
    try:
        attr = getattr(self, key)
        if attr is not None:
            if isinstance(value, dict):
                # Recursively copy nested documents into the child object's attributes.
                for child_key, child_value in value.items():
                    attr.__setitem__(child_key, child_value)
                setattr(self, key, attr)
            else:
                setattr(self, key, value)
    except AttributeError:
        # Ignore keys that don't map onto declared attributes (e.g. "_id").
        pass
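For context, a hedged sketch of how such a class might be wired into a query (as_class was a PyMongo 2.x parameter, later replaced by the document_class codec option in PyMongo 3+; the Gene class and attribute names are purely illustrative):
class Gene(object):
    def __init__(self):
        self.symbol = None
        self.id_entrez = None

    def __setitem__(self, key, value):
        # Copy only keys that map onto declared attributes; ignore the rest (e.g. "_id").
        if hasattr(self, key):
            setattr(self, key, value)

# PyMongo 2.x style, purely illustrative:
# genes = db.genes.find(as_class=Gene)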
