For a Django model I'm using the django-import-export package.
If you need to export more than just the available model fields, such as properties or custom fields, new ones can be added with the import_export.fields.Field class and, optionally, a dehydrate_<field> method.
from import_export import resources, fields, instance_loaders

class ProductResource(resources.ModelResource):
    categories = fields.Field()
    price = fields.Field(attribute='unit_price')

    class Meta:
        model = Product

    def dehydrate_categories(self, product):
        return ';'.join(
            '/%s' % '/'.join([c.name for c in cat.parents()] + [cat.name])
            for cat in product.category.iterator()
        )
It works well, but only for exporting. What about import, the reverse process? Is there a counterpart to the dehydrate_<field> method?
So far I've overridden the get_or_init_instance method:
class ProductResource(resources.ModelResource):
    def get_or_init_instance(self, instance_loader, row):
        row['unit_price'] = row['price']
        row.pop('price')
        return super(ProductResource, self).get_or_init_instance(instance_loader, row)
but I doubt this is the right way.
I would appreciate any hint on how to handle imports of custom fields.
You can override import_obj instead. See Import workflow for more details.
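For example, a rough sketch of what that could look like for the price column above (hedged: the exact import_obj signature has varied across django-import-export releases, and newer versions rename the hook to import_instance):

from import_export import resources

class ProductResource(resources.ModelResource):
    class Meta:
        model = Product

    def import_obj(self, obj, data, dry_run, **kwargs):
        # map the exported 'price' column back onto the real column name
        # before the regular per-field import runs
        if 'price' in data:
            data['unit_price'] = data['price']
        return super(ProductResource, self).import_obj(obj, data, dry_run, **kwargs)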
Another approach is to subclass Field and override its export and save methods, doing all the required data manipulation in the field itself.
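A rough sketch of that Field-subclass approach (the save/export signatures below match older django-import-export releases; newer ones add extra arguments, so treat this as an assumption to adapt):

from import_export import fields

class PriceField(fields.Field):
    def export(self, obj):
        # export: read the value straight from the model's unit_price attribute
        return obj.unit_price

    def save(self, obj, data, is_m2m=False):
        # import: copy this field's column from the row back onto unit_price
        obj.unit_price = data.get(self.column_name)

It would then be declared on the resource, e.g. price = PriceField(column_name='price').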
I know this is very old but I came across the same problem and this is how I fixed it (based on the direction the original asker was heading).
First, you can add any custom/modified fields you need by overriding the 'before_import_row' function, like so:
def before_import_row(self, row, **kwargs):
    row['extra_info'] = 'Some Info'
    return super(RetailLocationResource, self).before_import_row(row, **kwargs)
Then you can pass this into your instance by overriding get_or_init_instance like so:
def get_or_init_instance(self, instance_loader, row):
    instance, created = super(RetailLocationResource, self).get_or_init_instance(instance_loader, row)
    instance.extra_info = row['extra_info']
    return instance, created
Hope this helps anyone!
I am using django-import-export 1.0.1 with admin integration in Django 2.1.1. I have two models
from django.db import models

class Sector(models.Model):
    code = models.CharField(max_length=30, primary_key=True)

class Location(models.Model):
    code = models.CharField(max_length=30, primary_key=True)
    sector = models.ForeignKey(Sector, on_delete=models.CASCADE, related_name='locations')
and they can be imported/exported just fine using model resources
from import_export import resources
from import_export.fields import Field
from import_export.widgets import ForeignKeyWidget

class SectorResource(resources.ModelResource):
    code = Field(attribute='code', column_name='Sector')

    class Meta:
        model = Sector
        import_id_fields = ('code',)

class LocationResource(resources.ModelResource):
    code = Field(attribute='code', column_name='Location')
    sector = Field(attribute='sector', column_name='Sector',
                   widget=ForeignKeyWidget(Sector, 'code'))

    class Meta:
        model = Location
        import_id_fields = ('code',)
and import/export actions can be integrated into the admin by
from django.contrib import admin
from import_export.admin import ImportExportModelAdmin

class SectorAdmin(ImportExportModelAdmin):
    resource_class = SectorResource

class LocationAdmin(ImportExportModelAdmin):
    resource_class = LocationResource

admin.site.register(Sector, SectorAdmin)
admin.site.register(Location, LocationAdmin)
For Reasons™, I would like to change this set-up so that a spreadsheet of Locations which does not contain a Sector column can be imported; the value of sector (for each imported row) should be taken from an extra field on the ImportForm in the admin.
Such a field can indeed be added by overriding import_action on the ModelAdmin as described in Extending the admin import form for django import_export. The next step, to use this value for all imported rows, is missing there, and I have not been able to figure out how to do it.
EDIT(2): Solved through the use of sessions. Having a get_confirm_import_form hook would still really help here, but even better would be having the existing ConfirmImportForm carry across all the submitted fields & values from the initial import form.
EDIT: I'm sorry, I thought I had this nailed, but my own code wasn't working as well as I thought it was. This doesn't solve the problem of passing along the sector form field in the ConfirmImportForm, which is necessary for the import to complete. Currently looking for a solution which doesn't involve pasting the whole of import_action() into an ImportMixin subclass. Having a get_confirm_import_form() hook would help a lot here.
Still working on a solution for myself, and when I have one I'll update this too.
Don't override import_action. It's a big complicated method that you don't want to replicate. More importantly, as I discovered today: there are easier ways of doing this.
First (as you mentioned), make a custom import form for Location that allows the user to choose a Sector:
from django import forms
from import_export.forms import ImportForm

class LocationImportForm(ImportForm):
    sector = forms.ModelChoiceField(required=True, queryset=Sector.objects.all())
In the Resource API, there's a before_import_row() hook that is called once per row. So, implement that in your LocationResource class, and use it to add the Sector column:
def before_import_row(self, row, **kwargs):
    sector = self.request.POST.get('sector', None)
    if sector:
        self.request.session['import_context_sector'] = sector
    else:
        # if this raises a KeyError, we want to know about it.
        # It means that we got to a point of importing data without
        # sector context, and we don't want to continue.
        try:
            sector = self.request.session['import_context_sector']
        except KeyError as e:
            raise Exception("Sector context failure on row import, " +
                            f"check resources.py for more info: {e}")
    row['sector'] = sector
(Note: This code uses Django sessions to carry the sector value from the import form to the import confirmation screen. If you're not using sessions, you'll need to find another way to do it.)
This is all you need to get the extra data in, and it works for both the dry-run preview and the actual import.
Note that self.request doesn't exist in the default ModelResource - we have to install it by giving LocationResource a custom constructor:
def __init__(self, request=None):
    super().__init__()
    self.request = request
(Don't worry about self.request sticking around. Each LocationResource instance doesn't persist beyond a single request.)
The request isn't usually passed to the ModelResource constructor, so we need to add it to the kwargs dict for that call. Fortunately, Django Import/Export has a dedicated hook for that. Override ImportExportModelAdmin's get_resource_kwargs method in LocationAdmin:
def get_resource_kwargs(self, request, *args, **kwargs):
    rk = super().get_resource_kwargs(request, *args, **kwargs)
    rk['request'] = request
    return rk
And that's all you need.
So far I'm extremely happy with Django Rest Framework, which is why I almost can't believe there's such a large omission in the codebase. Hopefully someone knows a way to support this:
class PinSerializer(serializers.ModelSerializer):
    item = ItemSerializer(read_only=True, source='item')
    item = serializers.IntegerField(write_only=True)

    class Meta:
        model = Pin
The goal is to read:
{pin: item: {name: 'a', url: 'b'}}
but to write using an id
{pin: item: 10}
An alternative would be to use two serializers, but that looks like a really ugly solution:
django rest framework model serializers - read nested, write flat
Django lets you access the Item on your Pin with the item attribute, but actually stores the relationship as item_id. You can use this strategy in your serializer to get around the fact that a Python object cannot have two attributes with the same name (a problem you would encounter in your code).
The best way to do this is to use a PrimaryKeyRelatedField with a source argument. This will ensure proper validation gets done, converting "item_id": <id> to "item": <instance> during field validation (immediately before the serializer's validate call). This allows you to manipulate the full object during validate, create, and update methods. Your final code would be:
class PinSerializer(serializers.ModelSerializer):
    item = ItemSerializer(read_only=True)
    item_id = serializers.PrimaryKeyRelatedField(write_only=True,
                                                 source='item',
                                                 queryset=Item.objects.all())

    class Meta:
        model = Pin
        fields = ('id', 'item', 'item_id',)
Note 1: I also removed source='item' on the read-field as that was redundant.
Note 2: I actually find it rather unintuitive that Django Rest is set up such that a Pin serializer without an Item serializer specified returns the item_id as "item": <id> and not "item_id": <id>, but that is beside the point.
This method can even be used with forward and reverse "Many" relationships. For example, you can use an array of pin_ids to set all the Pins on an Item with the following code:
class ItemSerializer(serializers.ModelSerializer):
    pins = PinSerializer(many=True, read_only=True)
    pin_ids = serializers.PrimaryKeyRelatedField(many=True,
                                                 write_only=True,
                                                 source='pins',
                                                 queryset=Pin.objects.all())

    class Meta:
        model = Item
        fields = ('id', 'pins', 'pin_ids',)
Another strategy that I previously recommended is to use an IntegerField to set the item_id directly. Assuming you are using a OneToOneField or ForeignKey to relate your Pin to your Item, you can set item_id to an integer without using the item field at all. This weakens the validation and can result in DB-level errors from constraint violations. If you want to skip the validation DB call, have a specific need for the ID instead of the object in your validate/create/update code, or need simultaneously writable fields with the same source, this may be better, but I wouldn't recommend it anymore. The full line would be:
item_id = serializers.IntegerField(write_only=True)
If you are using DRF 3.0, you can implement the new to_internal_value method and override the item field there, changing it to a PrimaryKeyRelatedField to allow flat writes. to_internal_value takes unvalidated incoming data as input and should return the validated data that will be made available as serializer.validated_data. See the docs: http://www.django-rest-framework.org/api-guide/serializers/#to_internal_valueself-data
So in your case it would be:
class ItemSerializer(ModelSerializer):
    class Meta:
        model = Item

class PinSerializer(ModelSerializer):
    item = ItemSerializer()

    # override the nested item field with a PrimaryKeyRelatedField on writes
    def to_internal_value(self, data):
        self.fields['item'] = serializers.PrimaryKeyRelatedField(queryset=Item.objects.all())
        return super(PinSerializer, self).to_internal_value(data)

    class Meta:
        model = Pin
Two things to note: the browsable web API will still think that writes will be nested. I'm not sure how to fix that, but I'm only using the web interface for debugging so it's not a big deal. Also, after a write the returned item will be flat instead of nested. To fix that, you can add this code to force reads to always use the Item serializer:
def to_representation(self, obj):
    self.fields['item'] = ItemSerializer()
    return super(PinSerializer, self).to_representation(obj)
I got the idea for this from Anton Dmitrievsky's answer here: DRF: Simple foreign key assignment with nested serializers?
You can create a customized serializer field (http://www.django-rest-framework.org/api-guide/fields).
The example, taken from the link:
class ColourField(serializers.WritableField):
    """
    Color objects are serialized into "rgb(#, #, #)" notation.
    """
    def to_native(self, obj):
        return "rgb(%d, %d, %d)" % (obj.red, obj.green, obj.blue)

    def from_native(self, data):
        data = data.strip('rgb(').rstrip(')')
        red, green, blue = [int(col) for col in data.split(',')]
        return Color(red, green, blue)
Then use this field in your serializer class.
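For instance, a hypothetical usage (the Palette model and its colour attribute holding a Color object are assumptions, and WritableField is DRF 2.x API that was removed in DRF 3):

class PaletteSerializer(serializers.ModelSerializer):
    # the custom field converts Color objects to/from "rgb(#, #, #)" strings
    colour = ColourField()

    class Meta:
        model = Palette
        fields = ('id', 'colour')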
I created a field type that tries to solve this problem: save requests take the ForeignKey as an integer, while read requests return the nested data.
This is the class:
class NestedRelatedField(serializers.PrimaryKeyRelatedField):
    """
    Model identical to PrimaryKeyRelatedField but its
    representation will be nested and its input will
    be a primary key.
    """
    def __init__(self, **kwargs):
        self.pk_field = kwargs.pop('pk_field', None)
        self.model = kwargs.pop('model', None)
        self.serializer_class = kwargs.pop('serializer_class', None)
        super().__init__(**kwargs)

    def to_representation(self, data):
        pk = super(NestedRelatedField, self).to_representation(data)
        try:
            return self.serializer_class(self.model.objects.get(pk=pk)).data
        except self.model.DoesNotExist:
            return None

    def to_internal_value(self, data):
        return serializers.PrimaryKeyRelatedField.to_internal_value(self, data)
And so it would be used:
class PostModelSerializer(serializers.ModelSerializer):
    message = NestedRelatedField(
        queryset=MessagePrefix.objects.all(),
        model=MessagePrefix,
        serializer_class=MessagePrefixModelSerializer
    )
I hope this helps you.
Edited with own answer: original question below.
In here it recommends a few approaches; the simplest one for me is to just add an extra field and override save() to update it, so I get standard sorting functionality for free. My app is now:
# views.py
highest_p2w = Car.objects.filter().order_by('-p2w')[0]
lowest_p2w = Car.objects.filter().order_by('p2w')[0]

# models.py
p2w = models.FloatField("Power to weight ratio", editable=False)

def save(self, *args, **kwargs):
    self.p2w = (float(self.bhp) * 1000) / float(self.weight)
    super(Car, self).save(*args, **kwargs)
The only disadvantage is that I had to do a save() on all existing records to update the value; any new records get the value at save() time.
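The backfill itself was just a one-off loop in the shell (a trivial sketch):

# run once, e.g. from "python manage.py shell"
for car in Car.objects.all():
    car.save()  # triggers the overridden save(), which recomputes p2w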
Original Question
I'd like to have a dynamic value which is calculated from two fields on the model.
I think I can do this with a method, but I need to be able to sort on it just like the other fields.
# models.py
class Car(models.Model):
    weight = models.IntegerField("Weight in KG")
    bhp = models.IntegerField("BHP")
I'd like to have a field called power_to_weight_ratio that just calculates ( self.bhp * 1000 ) / self.weight
As this is a dynamic value, it doesn't need to be stored. BUT it does need to be sortable, as I sorted on all the other fields in the model.
I'd think I could just do something like
power_to_weight = ( self.bhp * 1000 ) / self.weight
but I assume I need to start overriding methods to give me the ability to sort. Django docs don't seem to mention this in the model custom field documentation.
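In other words, what I have in mind is roughly the property sketched below, but as far as I can tell a plain property can't be used in order_by(), which is where I get stuck:

class Car(models.Model):
    weight = models.IntegerField("Weight in KG")
    bhp = models.IntegerField("BHP")

    @property
    def power_to_weight(self):
        # fine for per-instance access, but not usable in order_by()
        return (self.bhp * 1000) / float(self.weight)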
Thanks.
I'm glad I got this working! ^_^ Feel free to use it.
I've tried the following, which comes closest to satisfying your need (tested with an sqlite3 database):
admin.py
from django.contrib import admin
from .models import Car

class CarAdmin(admin.ModelAdmin):
    list_display = ('weight', 'bhp', 'power_to_weight2',)

    def queryset(self, request):
        return super(CarAdmin, self).queryset(request).extra(select={'ptw': '(CAST((bhp) AS FLOAT))/weight'})

    def power_to_weight2(self, obj):
        return obj.bhp * 1000 / float(obj.weight)  # python 2.x; float() not needed in python 3.x
    power_to_weight2.short_description = 'power_to_weight'
    power_to_weight2.admin_order_field = 'ptw'

admin.site.register(Car, CarAdmin)
About Model.objects.extra(), see: https://docs.djangoproject.com/en/dev/ref/models/querysets/#extra
When querying the database, / between two integer columns does integer division, so use CAST ... AS FLOAT to convert it to float; for details see here: What is wrong with this SQL Server query division calculation?
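On newer Django versions the same sortable computed column can also be built with annotate() instead of extra(); a sketch, not part of the original answer:

from django.db.models import ExpressionWrapper, F, FloatField

cars_by_ptw = Car.objects.annotate(
    ptw=ExpressionWrapper(F('bhp') * 1000.0 / F('weight'), output_field=FloatField())
).order_by('-ptw')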
I'm using tastypie and I want to create a Resource for a "singleton" non-model object.
For the purposes of this question, let's assume what I want the URL to represent is some system settings that exist in an ini file.
What this means is that...:
The fields I return for this URL will be custom created for this Resource - there is no model that contains this information.
I want a single URL that will return the data, e.g. a GET request on /api/v1/settings.
The returned data should return in a format that is similar to a details URL - i.e., it should not have meta and objects parts. It should just contain the fields from the settings.
It should not be possible to GET a list of such object nor is it possible to perform POST, DELETE or PUT (this part I know how to do, but I'm adding this here for completeness).
Optional: it should play well with tastypie-swagger for API exploration purposes.
I got this to work, but I think my method is kind of ass-backwards, so I want to know what is the common wisdom here. What I tried so far is to override dehydrate and do all the work there. This requires me to override obj_get but leave it empty (which is kind of ugly) and also to remove the need for id in the details url by overriding override_urls.
Is there a better way of doing this?
You should be able to achieve this with the following. Note I haven't actually tested this, so some tweaking may be required. A richer example can be found in the Tastypie docs.
class SettingsResource(Resource):
    value = fields.CharField(attribute='value', help_text='setting value')

    class Meta:
        resource_name = 'setting'
        fields = ['value']
        allowed_methods = ['get']

    def detail_uri_kwargs(self, bundle_or_obj):
        kwargs = {}
        return kwargs

    def get_object_list(self, request):
        return [self.obj_get()]

    def obj_get_list(self, request=None, **kwargs):
        return [self.obj_get()]

    def obj_get(self, request=None, key=None, **kwargs):
        setting = SettingObject()
        setting.value = 'whatever value'
        return setting
The SettingObject must support the __getattr__ and __setattr__ methods. You can use this as a template:
class SettingObject(object):
    def __init__(self, initial=None):
        self.__dict__['_data'] = {}
        if initial:
            self.update(initial)

    def __getattr__(self, name):
        return self._data.get(name, None)

    def __setattr__(self, name, value):
        self.__dict__['_data'][name] = value

    def update(self, other):
        for k in other:
            self.__setattr__(k, other[k])

    def to_dict(self):
        return self._data
This sounds like something completely outside of TastyPie's wheelhouse. Why not have a single view somewhere decorated with @require_GET, if you want to control headers, and return an HttpResponse object with the desired payload as application/json?
The fact that your object is a singleton and all other RESTful interactions with it are prohibited suggests that a REST library is the wrong tool for this job.
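For illustration, a minimal sketch of that plain-view approach (load_ini_settings is a hypothetical helper that parses the ini file into a dict):

import json

from django.http import HttpResponse
from django.views.decorators.http import require_GET

@require_GET
def settings_view(request):
    data = load_ini_settings()  # hypothetical: read the ini file into a dict
    return HttpResponse(json.dumps(data), content_type='application/json')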
I have a model with a version number in it. I want it to self-increment when new data is posted with an existing id via TastyPie. I'm currently doing this via the hydrate method, which works as long as two users don't try to update at once:
class MyResource(ModelResource):
    ...
    def hydrate_version(self, bundle):
        if 'id' in bundle.data:
            target = self._meta.queryset.get(id=int(bundle.data['id']))
            bundle.data['version'] = target.version + 1
        return bundle
I'd like to do this more robustly by using Django's F() expressions, e.g.:
def hydrate_version(self, bundle):
    if 'id' in bundle.data:
        from django.db.models import F
        target = self._meta.queryset.get(id=int(bundle.data['id']))
        bundle.data['version'] = F('version') + 1
    return bundle
However, this gives me an error:
TypeError: int() argument must be a string or a number, not 'ExpressionNode'
Is there a way to more robustly increment the version number with TastyPie?
thanks!
I would override the save() method for your Django Model instead, and perform the update there. That has the added advantage of ensuring the same behavior regardless of an update from tastypie or from the django/python shell.
def save(self, *args, **kwargs):
    self.version = F('version') + 1
    super(MyModel, self).save(*args, **kwargs)
This has been answered at github here, though I haven't tried this myself yet. To quote from that link:
You're setting bundle.data inside a hydrate method. Usually you modify bundle.obj in hydrate methods and bundle.data in dehydrate methods.
Also, those F objects are meant to be applied to Django model fields.
I think what you want is:
def hydrate_version(self, bundle):
    if bundle.obj.id is not None:
        from django.db.models import F
        bundle.obj.version = F('version') + 1
    return bundle