Marshmallow Field not loading default value - python

I have a Marshmallow Schema defined as:
from marshmallow import Schema, fields

class MySchema(Schema):
    myfield = fields.Str(required=False, default="value")
When I do:
s = MySchema().load({})
I would expect the return to be:
{'myfield': 'value'}
But I am getting {} in return.
Is there anything I am missing?
Edit:
I'm using marshmallow 3.11 due to project limitations and can't upgrade to 3.15. I tried 3.15 and it works as expected.

Until marshmallow 3.12:

- default: value to use by default when dumping (serializing)
- missing: value to use by default when loading (deserializing)

Since marshmallow 3.13:

- dump_default: value to use by default when dumping (serializing)
- load_default: value to use by default when loading (deserializing)
default / missing still behave the same in marshmallow 3.13+ but issue a deprecation warning.
https://marshmallow.readthedocs.io/en/stable/changelog.html#id4
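So on marshmallow 3.11, use missing rather than default to supply a value on load(). A minimal sketch of both spellings, based on the schema from the question:

from marshmallow import Schema, fields

class MySchema(Schema):
    # marshmallow <= 3.12: 'missing' supplies the value on load()
    myfield = fields.Str(required=False, missing="value")

print(MySchema().load({}))  # {'myfield': 'value'}

# marshmallow 3.13+: same behavior, new keyword
# myfield = fields.Str(required=False, load_default="value")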

Related

I am using drf-spectacular for my Django REST API documentation, and I get an error when I try to override SPECTACULAR_SETTINGS in settings.py.

Settings are configurable in settings.py in the scope SPECTACULAR_SETTINGS. You can override any setting, otherwise the defaults below are used.
For reference: https://drf-spectacular.readthedocs.io/en/latest/settings.html
This is supposed to be a comment: could you please include the code where you are overriding the settings? My guess is that you added a type annotation to your setting (Dict) without importing Dict. I don't believe you need the annotation there at all.
So you can either take out the types and just have
`SPECTACULAR_SETTINGS = { ... }`
or import the types you are using: `from typing import Dict, Any`.
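A minimal settings.py sketch of both options; the 'TITLE' entry is just a placeholder value:

# Option 1: plain dict, no annotation needed
SPECTACULAR_SETTINGS = {
    'TITLE': 'My API',
}

# Option 2: keep the annotation, but import the types first
from typing import Any, Dict

SPECTACULAR_SETTINGS: Dict[str, Any] = {
    'TITLE': 'My API',
}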

How to apply 'load_from' and 'dump_to' to every field in a marshmallow schema?

I've been trying to implement an 'abstract' schema class that will automatically convert values in CamelCase (serialized) to snake_case (deserialized).
class CamelCaseSchema(marshmallow.Schema):
    @marshmallow.pre_load
    def camel_to_snake(self, data):
        return {
            utils.CaseConverter.camel_to_snake(key): value
            for key, value in data.items()
        }

    @marshmallow.post_dump
    def snake_to_camel(self, data):
        return {
            utils.CaseConverter.snake_to_camel(key): value
            for key, value in data.items()
        }
While using something like this works nicely, it does not achieve everything that applying load_from and dump_to to each field does. Namely, it fails to report the correct field names when there is a deserialization error. For instance, I get:
{'something_id': [u'Not a valid integer.']} instead of {'somethingId': [u'Not a valid integer.']}.
While I can post-process these emitted errors, this seems like an unnecessary coupling that I wish to avoid if I'm to make the use of schema fully transparent.
Any ideas? I tried tackling the metaclasses involved, but the complexity was a bit overwhelming and everything seemed exceptionally ugly.
You're using marshmallow 2. Marshmallow 3 is now out and I recommend using it. My answer will apply to marshmallow 3.
In marshmallow 3, load_from / dump_to have been replaced by a single attribute: data_key.
You'd need to alter data_key in each field when instantiating the schema. This will happen after field instantiation but I don't think it matters.
You want to do that ASAP when the schema is instantiated to avoid inconsistency issues. The right moment to do that would be in the middle of Schema._init_fields, before the data_key attributes are checked for consistency. But duplicating this method would be a pity. Besides, due to the nature of the camel/snake case conversion the consistency checks can be applied before the conversion anyway.
And since _init_fields is private API, I'd recommend doing the modification at the end of __init__.
class CamelCaseSchema(Schema):
    def __init__(self, **kwargs):
        super().__init__(**kwargs)
        for field_name, field in self.fields.items():
            field.data_key = utils.CaseConverter.snake_to_camel(field_name)
I didn't try that but I think it should work.
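For illustration, a hedged usage sketch; SomethingSchema and its field are made up, and CamelCaseSchema and utils.CaseConverter are assumed from above:

from marshmallow import ValidationError, fields

class SomethingSchema(CamelCaseSchema):
    something_id = fields.Int()

try:
    SomethingSchema().load({'somethingId': 'oops'})
except ValidationError as err:
    print(err.messages)  # {'somethingId': ['Not a valid integer.']}

Because data_key is set on the field itself, the error is reported under the serialized name, which is exactly what the question asked for.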

mongoengine save method is deprecated?

I wonder why Python says that mongoengine's save() method is deprecated. I don't see any info about this in the official documentation: https://mongoengine.readthedocs.io/en/v0.9.0/apireference.html
import datetime

from mongoengine import Document, StringField, DateTimeField

class MyModel(Document):
    user_id = StringField(required=True)
    date = DateTimeField(required=True, default=datetime.datetime.now)

my = MyModel()
my.user_id = 'user'
my.save()
and now I see:
/Library/Python/2.7/site-packages/mongoengine/document.py:340:
DeprecationWarning: save is deprecated. Use insert_one or replace_one
instead
I have Python 2.7 and have also installed pymongo, mongoengine and bottle-mongo (maybe some issue with that?).
MongoEngine wraps PyMongo, which deprecated "save" in PyMongo 3.0:
http://api.mongodb.com/python/current/changelog.html#collection-changes
MongoEngine might need to deprecate its save method, or suppress the deprecation warning, or perhaps some other fix to handle this PyMongo change. I recommend you search MongoEngine's bug tracker and report this issue if it has not been already.
MongoEngine Bug - https://github.com/MongoEngine/mongoengine/issues/1491
Use col.replace_one({'_id': doc['_id']}, doc, True) instead.
The API is replace_one(filter, replacement, upsert=False, bypass_document_validation=False, collation=None, session=None).
Pass upsert=True to insert a new doc if the filter finds nothing.
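For illustration, a hedged sketch of that call in plain PyMongo; col is an assumed Collection handle and the document contents are placeholders:

import datetime

doc = {'_id': 'user', 'date': datetime.datetime.now()}
# upsert=True inserts the document when no existing one matches the filter
result = col.replace_one({'_id': doc['_id']}, doc, upsert=True)
print(result.upserted_id)  # set when an insert happened, otherwise None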

Get Python type of Django's model field?

How can I get the corresponding Python type of a Django model's field class?
from django.db import models

class MyModel(models.Model):
    value = models.DecimalField()

type(MyModel._meta.get_field('value'))  # <class 'django.db.models.fields.DecimalField'>
I'm looking for a way to get the corresponding Python type of the field's value, decimal.Decimal in this case.
Any ideas?
P.S. I've attempted to work around this with the field's default attribute, but that probably won't work in cases where the field has no default value defined.
I don't think you can determine the actual Python type programmatically there. Part of this is due to Python's dynamic typing. If you look at the docs for converting values to Python objects, there is no hard predefined type for a field: you can write a custom field that returns objects of different types depending on the database value. The model field docs specify which Python type corresponds to each field type, so you can do this "statically".
But why would you need to know the Python types in advance in order to serialize them? The serialization modules are supposed to do this for you: just throw them the objects you need to serialize. Python is a dynamically typed language.
An ugly alternative is to check the field's repr():

if 'DecimalField' in repr(model._meta.get_field(fieldname)):
    return decimal.Decimal
else:
    ...

However, you have to do this for each field type separately.
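If you only care about a handful of field types, a slightly tidier variant of the same idea is to key off field.get_internal_type() with a hand-written mapping. A sketch; the mapping is my own and not something Django ships:

import datetime
import decimal

# Hand-rolled mapping from Django internal field types to Python types;
# extend it to cover the fields your models actually use.
FIELD_TYPE_MAP = {
    'DecimalField': decimal.Decimal,
    'IntegerField': int,
    'CharField': str,
    'DateTimeField': datetime.datetime,
}

def python_type_for(model, fieldname):
    internal = model._meta.get_field(fieldname).get_internal_type()
    return FIELD_TYPE_MAP.get(internal, object)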

Django BigInteger auto-increment field as primary key?

I'm currently building a project that involves a lot of collective intelligence. Every user visiting the web site gets a unique profile created, and their data is later used to calculate best matches for themselves and other users.
By default, Django creates an INT(11) id field to handle model primary keys. I'm concerned about it overflowing very quickly (i.e. ~2.4b devices visiting the page without a prior cookie set up). How can I change it to be represented as BIGINT in MySQL and as long() inside Django itself?
I've found I could do the following (http://docs.djangoproject.com/en/dev/ref/models/fields/#bigintegerfield):
class MyProfile(models.Model):
    id = models.BigIntegerField(primary_key=True)
But is there a way to make it auto-increment, like the usual id fields? Additionally, can I make it unsigned so that I get more space to fill?
Thanks!
Django now has a BigAutoField built in if you are using Django 1.10:
https://docs.djangoproject.com/en/1.10/ref/models/fields/#bigautofield
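For instance, a minimal sketch using the built-in field (model and field name taken from the question; Django 1.10+ only):

from django.db import models

class MyProfile(models.Model):
    id = models.BigAutoField(primary_key=True)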
Inspired by lfagundes, with a small but important correction:
from django.db.models import fields
from south.modelsinspector import add_introspection_rules

class BigAutoField(fields.AutoField):
    def db_type(self, connection):  # pylint: disable=W0621
        if 'mysql' in connection.__class__.__module__:
            return 'bigint AUTO_INCREMENT'
        return super(BigAutoField, self).db_type(connection)

add_introspection_rules([], [r"^a\.b\.c\.BigAutoField"])
Notice that instead of extending BigIntegerField, I am extending AutoField. This is an important distinction. With AutoField, Django will retrieve the AUTO_INCREMENTed id from the database, whereas BigIntegerField will not.
One concern when changing from BigIntegerField to AutoField was the casting of the data to an int in AutoField.
Notice from Django's AutoField:
def to_python(self, value):
    if value is None:
        return value
    try:
        return int(value)
    except (TypeError, ValueError):
        msg = self.error_messages['invalid'] % str(value)
        raise exceptions.ValidationError(msg)
and
def get_prep_value(self, value):
    if value is None:
        return None
    return int(value)
It turns out this is OK, as verified in a Python 2 shell:
>>> l2 = 99999999999999999999999999999
>>> type(l2)
<type 'long'>
>>> int(l2)
99999999999999999999999999999L
>>> type(l2)
<type 'long'>
>>> type(int(l2))
<type 'long'>
In other words, casting to an int will not truncate the number, nor will it change the underlying type.
NOTE: This answer was modified according to Larry's code. The previous solution extended fields.BigIntegerField, but it is better to extend fields.AutoField.
I had the same problem and solved it with the following code:
from django.db.models import fields
from south.modelsinspector import add_introspection_rules

class BigAutoField(fields.AutoField):
    def db_type(self, connection):
        if 'mysql' in connection.__class__.__module__:
            return 'bigint AUTO_INCREMENT'
        return super(BigAutoField, self).db_type(connection)

add_introspection_rules([], [r"^MYAPP\.fields\.BigAutoField"])
Apparently this is working fine with south migrations.
You could alter the table afterwards. That may be a better solution.
Since Django 3.2, the type of the implicit primary key can be controlled with the DEFAULT_AUTO_FIELD setting (documentation). So there is no longer any need to override primary keys in all your models.

# This setting will change all implicitly added primary keys to BigAutoField
DEFAULT_AUTO_FIELD = 'django.db.models.BigAutoField'
Note that starting with Django 3.2 new projects are generated with DEFAULT_AUTO_FIELD set to BigAutoField (release notes).
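If you only want the big keys for a single app rather than project-wide, the same default can also be set per app on its AppConfig (Django 3.2+); 'myapp' here is a placeholder:

# myapp/apps.py
from django.apps import AppConfig

class MyAppConfig(AppConfig):
    name = 'myapp'
    default_auto_field = 'django.db.models.BigAutoField'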
As stated before, you could alter the table afterwards. That is a good solution.
To do that without forgetting, you can create a management module under your application package and use the post_syncdb signal.
https://docs.djangoproject.com/en/dev/ref/signals/#post-syncdb
This can cause django-admin.py flush to fail. But it is still the best alternative I know.
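For illustration, a hedged sketch of that approach from the pre-migrations era (post_syncdb was removed in later Django versions; the app, table, and column names are placeholders):

# myapp/management/__init__.py
from django.db import connection
from django.db.models.signals import post_syncdb

import myapp.models

def widen_id_column(sender, **kwargs):
    # PostgreSQL syntax; adjust for your backend
    cursor = connection.cursor()
    cursor.execute("ALTER TABLE myapp_myprofile ALTER COLUMN id TYPE bigint;")

post_syncdb.connect(widen_id_column, sender=myapp.models)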
I also had the same problem. It looks like there is no support for BigInteger auto fields in Django.
I tried to create a custom BigIntegerAutoField, but I ran into a problem with the south migration system (south couldn't create a sequence for my field).
After trying a couple of different approaches I decided to follow Matthew's advice and alter the table (e.g. ALTER TABLE table_name ALTER COLUMN id TYPE bigint; in PostgreSQL).
It would be great to have a solution supported by Django (like a built-in BigIntegerAutoField) and south.
