I have been doing some research but couldn't find an effective yet simple way to update instances without writing repetitive code. Here is an example:
class PartySerializer(serializers.ModelSerializer):
    def update(self, instance, validated_data):
        instance.name = validated_data.get('name', instance.name)
        instance.event_date = validated_data.get('event_date', instance.event_date)
        instance.time_start = validated_data.get('time_start', instance.time_start)
        instance.time_end = validated_data.get('time_end', instance.time_end)
        instance.main_color = validated_data.get('main_color', instance.main_color)
        instance.is_age_limited = validated_data.get('is_age_limited', instance.is_age_limited)
        instance.is_open_bar = validated_data.get('is_open_bar', instance.is_open_bar)
        instance.is_open_food = validated_data.get('is_open_food', instance.is_open_food)
        instance.description = validated_data.get('description', instance.description)
        instance.location = validated_data.get('location', instance.location)
        instance.category = validated_data.get('category', instance.category)
        instance.save()
Is there a cleaner and more efficient approach?
Guys, I finally found a better way of doing this. @Neeraj: as I am using nested objects, I need to override the update method, so when I use super() I end up calling the same function I am in. @VJ Magar: I am not doing a partial update in this case; all fields are present, at least their keys. My solution was this:
def update(self, instance, validated_data):
    for key, obj in validated_data.items():
        if not key == 'location':
            setattr(instance, key, obj)
        else:
            location_serializer = LocationSerializer(instance.location, data=validated_data.get('location'))
            if location_serializer.is_valid():
                location_serializer.save()
    instance.save()
    return instance
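If more nested fields show up later, the same idea generalizes; this is just a sketch, and the NESTED_SERIALIZERS mapping is a hypothetical name used only for illustration:

NESTED_SERIALIZERS = {'location': LocationSerializer}  # hypothetical mapping of nested field -> serializer

def update(self, instance, validated_data):
    for key, value in validated_data.items():
        nested_cls = NESTED_SERIALIZERS.get(key)
        if nested_cls is None:
            # Plain field: just assign it.
            setattr(instance, key, value)
        else:
            # Nested object: delegate validation and saving to its own serializer.
            nested = nested_cls(getattr(instance, key), data=value)
            if nested.is_valid():
                nested.save()
    instance.save()
    return instance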
I'm trying to get a queryset from the cache, but am unsure if this even has a point.
I have the following method (simplified) inside a custom queryset:
def queryset_from_cache(self, key: str = None, timeout: int = 60):
    # Generate a key based on the query.
    if key is None:
        key = self.__generate_key()

    # If the cache has the key, return the cached object.
    cached_object = cache.get(key, None)

    # If the cache doesn't have the key, set the cache,
    # and then return self (from DB) as cached_object.
    if cached_object is None:
        cached_object = self
        cache.set(key, cached_object, timeout=timeout)

    return cached_object
The usage is basically to chain it onto a Django QuerySet, for example:
queryset = MyModel.objects.filter(id__range=[0,99]).queryset_from_cache()
My question:
Would usage like this work?
Or would it call MyModel.objects.filter(id__range=[0,99]) from the database no matter what?
Since normally caching would be done like this:
cached_object = cache.get(key, None)

if cached_object is None:
    cached_object = MyModel.objects.filter(id__range=[0, 99])
    # Only now call the query.
    cache.set(key, cached_object, timeout=timeout)
And thus the queryset filter() method only gets called when the key is not present in the cache, as opposed to always calling it, and then trying to get it from the cache with the queryset_from_cache method.
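A rough way to check whether the database is actually hit would be something like this (just a sketch, assuming DEBUG=True so that connection.queries gets populated):

from django.db import connection, reset_queries

reset_queries()
qs = MyModel.objects.filter(id__range=[0, 99]).queryset_from_cache()
print(len(connection.queries))  # shows how many SQL queries have actually run so far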
This is a really cool idea, but I'm not sure if you can cache full-on objects; I think it's only attributes.
Now, as for this having a point: from the limited code I've seen, I don't know if it does, unless filtering for Jane and John (and only them) is very common. Very narrow.
Maybe just try caching ALL the users, or just individual users, and only the attributes you need.
Update
Yes! You are completely correct, you can cache full-on objects. How cool!
I don't think your example method of queryset = MyModel.objects.filter(id__range=[0,99]).queryset_from_cache() would work.
But you can do something similar by using model managers and doing something like: queryset = MyModel.objects.queryset_from_cache(filterdict)
Models
Naturally you can return just the qs; this is just for the example, to show it actually comes from the cache.
from django.db import models
from django.core.cache import cache


class MyModelManager(models.Manager):
    def queryset_from_cache(self, filterdict):
        cachekey = 'MyModelCache'
        qs = cache.get(cachekey)
        if qs:
            d = {
                'in_cache': True,
                'qs': qs,
            }
        else:
            qs = MyModel.objects.filter(**filterdict)
            cache.set(cachekey, qs, 300)  # 5 min cache
            d = {
                'in_cache': False,
                'qs': qs,
            }
        return d


class MyModel(models.Model):
    name = models.CharField(max_length=200)
    #
    # other attributes
    #
    objects = MyModelManager()
Example Use
from app.models import MyModel
filterdict = {'pk__range':[0,99]}
r = MyModel.objects.queryset_from_cache(filterdict)
print(r['qs'])
While it's not exactly what you wanted, it might be close enough
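One caveat with the sketch above: cachekey is a fixed string, so every filterdict shares the same cache entry. If you need per-filter entries, the key could be derived from filterdict, for example (just a sketch; the helper name and key format are my own assumptions):

import hashlib
import json

def _cache_key_for(filterdict):
    # Hypothetical helper: build a deterministic key from the filter parameters.
    raw = json.dumps(filterdict, sort_keys=True, default=str)
    return 'MyModelCache:' + hashlib.md5(raw.encode()).hexdigest()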
I'm trying to make fields in my model write-once and read-only thereafter. The solution I came up with uses property decorators. Please tell me if there is a better solution; I'm new to Django. I get into an infinite loop when I try to instantiate the model in the Django shell.
class MapPointable(models.Model):
    loc_latitude = models.FloatField(null=True)
    loc_longtitude = models.FloatField(null=True)

    @property
    def loc_latitude(self):
        return self.loc_latitude

    @loc_latitude.setter
    def loc_latitude(self, value):
        if self.loc_latitude == None:
            self.loc_latitude = value
        else:
            raise ValueError("Read-only field, the value cannot be set")

    @property
    def loc_longtitude(self):
        return self.loc_longtitude

    @loc_longtitude.setter
    def loc_longtitude(self, value):
        if self.loc_longtitude == None:
            self.loc_longtitude = value
        else:
            raise ValueError("Read-only field, the value cannot be set")
Your property's name is the same as the model field's name. It should be:
@property
def loc_latitude_prop(self):
    return self.loc_latitude

@loc_latitude_prop.setter
def loc_latitude_prop(self, value):
    # ...
Otherwise they start calling each other and you get stuck in an infinite loop.
But I would not use properties in Django, because the ORM does not recognize them; it only knows Django fields.
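If you still want property-style access, the usual workaround is to store the value in a differently named field (underscore-prefixed, for instance) and expose it through the property. A sketch of that idea, with db_column keeping the original column name:

class MapPointable(models.Model):
    _loc_latitude = models.FloatField(null=True, db_column='loc_latitude')

    @property
    def loc_latitude(self):
        return self._loc_latitude

    @loc_latitude.setter
    def loc_latitude(self, value):
        # Write-once: only allow setting while the stored value is still empty.
        if self._loc_latitude is None:
            self._loc_latitude = value
        else:
            raise ValueError("Read-only field, the value cannot be set")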
I have a need to track changes on Django model instances. I'm aware of solutions like django-reversion, but they are overkill for my case.
I had the idea to create a parameterized class decorator to fit this purpose. The arguments are the field names and a callback function. Here is the code I have at this time:
def audit_fields(fields, callback_fx):
    def __init__(self, *args, **kwargs):
        self.__old_init(*args, **kwargs)
        self.__old_state = self.__get_state_helper()

    def save(self, *args, **kwargs):
        new_state = self.__get_state_helper()
        for k, v in new_state.items():
            if self.__old_state[k] != v:
                callback_fx(self, k, self.__old_state[k], v)
        val = self.__old_save(*args, **kwargs)
        self.__old_state = self.__get_state_helper()
        return val

    def __get_state_helper(self):
        # Make a dict of field/values.
        state_dict = dict()
        for k, v in [(field.name, field.value_to_string(self)) for field in self._meta.fields if field.name in fields]:
            state_dict[k] = v
        return state_dict

    def fx(clazz):
        # Stash originals
        clazz.__old_init = clazz.__init__
        clazz.__old_save = clazz.save
        # Override (and add helper)
        clazz.__init__ = __init__
        clazz.__get_state_helper = __get_state_helper
        clazz.save = save
        return clazz

    return fx
And use it as follows (only relevant part):
@audit_fields(["status"], fx)
class Order(models.Model):
    BASKET = "BASKET"
    OPEN = "OPEN"
    PAID = "PAID"
    SHIPPED = "SHIPPED"
    CANCELED = "CANCELED"

    ORDER_STATES = (
        (BASKET, 'BASKET'),
        (OPEN, 'OPEN'),
        (PAID, 'PAID'),
        (SHIPPED, 'SHIPPED'),
        (CANCELED, 'CANCELED'),
    )

    status = models.CharField(max_length=16, choices=ORDER_STATES, default=BASKET)
And test in the Django shell with:
from store.models import Order
o=Order()
o.status=Order.OPEN
o.save()
The error I receive then is:
TypeError: int() argument must be a string or a number, not 'Order'
The full stacktrace is here: https://gist.github.com/4020212
Thanks in advance and let me know if you would need more info!
EDIT: Question answered by randomhuman, code edited and usable as shown!
You do not need to explicitly pass a reference to self on this line:
val = self.__old_save(self, *args, **kwargs)
It is a method being called on an object reference. Passing it explicitly in this way is causing it to be seen as one of the other parameters of the save method, one which is expected to be a string or a number.
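In other words, the difference is just:

val = self.__old_save(self, *args, **kwargs)  # wrong: the bound call already supplies self, so it gets passed twice
val = self.__old_save(*args, **kwargs)        # correct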
I'm using the django-filter app. There is, however, one problem I do not know how to solve. It's almost exactly the same thing as described in the Django documentation:
https://docs.djangoproject.com/en/1.2/topics/db/queries/#spanning-multi-valued-relationships
I want to make a query that selects all Blogs that have an entry with both "Lennon" in the headline and a publication date in 2008, e.g.:
Blog.objects.filter(entry__headline__contains='Lennon',
                    entry__pub_date__year=2008)
Not to select Blogs that have one entry with "Lennon" in the headline and another entry (possibly the same) that was published in 2008:
Blog.objects.filter(entry__headline__contains='Lennon').filter(
    entry__pub_date__year=2008)
However, if I set up the FilterSet so that there are two fields (never mind __contains vs. __exact, this is just an example):
class BlogFilter(django_filters.FilterSet):
    entry__headline = django_filters.CharFilter()
    entry__pub_date = django_filters.CharFilter()

    class Meta:
        model = Blog
        fields = ['entry__headline', 'entry__pub_date', ]
django-filter will generate the latter:
Blog.objects.filter(entry__headline__exact='Lennon').filter(
    entry__pub_date__exact=2008)
Is there a way to combine both filters into a single filter field?
Well, I came up with a solution. It is not possible using regular django-filter, so I extended it a bit. It could be improved; this is a quick-and-dirty solution.
First, I added a custom "grouped" flag to django_filters.Filter and a filter_grouped method (almost a copy of the filter method):
class Filter(object):
    def __init__(self, name=None, label=None, widget=None, action=None,
                 lookup_type='exact', required=False, grouped=False, **kwargs):
        (...)
        self.grouped = grouped

    def filter_grouped(self, qs, value):
        if isinstance(value, (list, tuple)):
            lookup = str(value[1])
            if not lookup:
                lookup = 'exact'  # fall back to exact if no choice for lookup is provided
            value = value[0]
        else:
            lookup = self.lookup_type
        if value:
            return {'%s__%s' % (self.name, lookup): value}
        return {}
The only difference is that, instead of applying the filter to the queryset, it returns a dictionary.
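For example, a grouped entry__headline filter bound to the value 'Lennon' would return something like {'entry__headline__exact': 'Lennon'} rather than filtering the queryset immediately.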
Second, I updated the BaseFilterSet qs method/property:
class BaseFilterSet(object):
    (...)

    @property
    def qs(self):
        if not hasattr(self, '_qs'):
            qs = self.queryset.all()
            grouped_dict = {}
            for name, filter_ in self.filters.iteritems():
                try:
                    if self.is_bound:
                        data = self.form[name].data
                    else:
                        data = self.form.initial.get(name, self.form[name].field.initial)
                    val = self.form.fields[name].clean(data)
                    if filter_.grouped:
                        grouped_dict.update(filter_.filter_grouped(qs, val))
                    else:
                        qs = filter_.filter(qs, val)
                except forms.ValidationError:
                    pass
            if grouped_dict:
                qs = qs.filter(**grouped_dict)
            (...)
        return self._qs
The trick is to store all "grouped" filters in a dictionary and then use them all as a single filter.
The filter will then look something like this:
class BlogFilter(django_filters.FilterSet):
    entry__headline = django_filters.CharFilter(grouped=True)
    entry__pub_date = django_filters.CharFilter(grouped=True)

    class Meta:
        model = Blog
        fields = ['entry__headline', 'entry__pub_date', ]
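Used roughly like this (a sketch of typical FilterSet usage), both values then end up in one combined filter() call:

f = BlogFilter({'entry__headline': 'Lennon', 'entry__pub_date': '2008'},
               queryset=Blog.objects.all())
print(f.qs)  # equivalent to a single filter(entry__headline__exact=..., entry__pub_date__exact=...)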
I want to implement a simple VersionedModel base model class for my App Engine app. I'm looking for a pattern that does not involve explicitly choosing fields to copy.
I am trying out something like this, but it is too hacky for my taste and I have not tested it in the production environment yet.
class VersionedModel(BaseModel):
    is_history_copy = db.BooleanProperty(default=False)
    version = db.IntegerProperty()
    created = db.DateTimeProperty(auto_now_add=True)
    edited = db.DateTimeProperty()
    user = db.UserProperty(auto_current_user=True)

    def put(self, **kwargs):
        if self.is_history_copy:
            if self.is_saved():
                raise Exception, "History copies of %s are not allowed to change" % type(self).__name__
            return super(VersionedModel, self).put(**kwargs)

        if self.version is None:
            self.version = 1
        else:
            self.version = self.version + 1
        self.edited = datetime.now()  # auto_now would also affect copies, making them out of sync

        history_copy = copy.copy(self)
        history_copy.is_history_copy = True
        history_copy._key = None
        history_copy._key_name = None
        history_copy._entity = None
        history_copy._parent = self

        def tx():
            result = super(VersionedModel, self).put(**kwargs)
            history_copy._parent_key = self.key()
            history_copy.put()
            return result

        return db.run_in_transaction(tx)
Does anyone have a simpler cleaner solution for keeping history of versions for app engine models?
EDIT: Moved the copy out of tx. Thanks @Adam Crossland for the suggestion.
Take a look at the properties() method on Model classes. With it, you can get the list of properties and use that to get their values, something like this:
@classmethod
def clone(cls, other, **kwargs):
    """Clones another entity."""
    klass = other.__class__
    properties = other.properties().items()
    kwargs.update((k, p.__get__(other, klass)) for k, p in properties)
    return cls(**kwargs)
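A hypothetical way to wire this into the versioned put() above, once the original entity has a key. Note that parent has to be set at construction time, and the property values copied from self win over keyword arguments in clone() as written, so is_history_copy is set afterwards:

# Hypothetical replacement for the copy.copy() block, assuming clone() is added to the model class.
history_copy = type(self).clone(self, parent=self.key())
history_copy.is_history_copy = True
history_copy.put()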