Hierarchical permission on the same model - Python

Sorry, I couldn't find a suitable title; please edit the title if you understand the problem.
I want to achieve a 4-level hierarchy (Django Groups): Country-Manager, State-Manager, City-Manager and Field-Staff.
One user can belong to only one group at a time, and any user can add a lead.
I have a model named Lead and I want to realize the following hierarchy:
"A user of a higher level (say State-Manager) can view leads added by himself and all entries added by users of lower levels (say City-Manager and Field-Staff), but cannot view entries added by other users of the same or a higher level (say Country-Manager)."
To keep track of who added the entry, I am saving the user and the Group object as foreign keys in the Lead model.
Please suggest a strategy or code snippet.
--
P.S.: I am on Django 1.5

I solved this problem with the help of mixins (http://eflorenzano.com/blog/2008/05/17/exploring-mixins-django-model-inheritance/).
First I created a Hierarchy class:
class Hierarchy(models.Model):
    parent = models.ForeignKey('self', null=True, blank=True)

    def get_children(self):
        return self._default_manager.filter(parent=self)

    def get_descendants(self):
        descs = set(self.get_children())
        for node in list(descs):
            descs.update(node.get_descendants())
        return descs

    class Meta:
        abstract = True
and inherited it in a class named GroupHierarchy:
class GroupHierarchy(Hierarchy):
    group = models.ForeignKey(Group)

    def __unicode__(self):
        return self.group.name
Now I can get the children of a group using
group_object.get_children()
and all descendants with
group_object.get_descendants()
Using this, I get hierarchical permissions on the model:
glist = []
groups = request.user.groups.all()
for group in groups:
    try:
        group_hierarchy = GroupHierarchy.objects.get(group=group)
        group_descendants = group_hierarchy.get_descendants()
        for group_descendant in group_descendants:
            glist.append(group_descendant.group)
    except GroupHierarchy.DoesNotExist:
        pass
obj = Lead.objects.filter(user_group__in=glist)
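A possible refinement (just a sketch, assuming the Lead model stores the submitting user and his Group as user and user_group foreign keys, as described in the question): combine the descendant-group filter with the user's own leads, so a manager also sees the entries he added himself.

from django.db.models import Q

leads = Lead.objects.filter(
    Q(user=request.user) | Q(user_group__in=glist)
).distinct()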

Related

How to implement forms for sets and supersets of objects with Django

Imagine the following two Django models:
class Item(models.Model):
    '''
    A single Item of something.
    '''
    name = models.CharField(unique=True)
    sets = models.ManyToManyField('Set', blank=True)

    def get_supersets(self):
        '''
        Returns the list of all the supersets the item belongs to, EXCLUDING
        the directly linked sets.
        '''
        res = []
        for set in self.sets.all():
            res = res + set.get_all_supersets()
        return res
class Set(models.Model):
    '''
    A set of items which can either contain items or not (empty set allowed).
    Sets can be grouped in supersets. Supersets will contain all items of
    the related subsets.
    '''
    name = models.CharField(unique=True)
    superset = models.ForeignKey('self', on_delete=models.SET_NULL, null=True, blank=True)
    # Note: Self-reference to the same object is avoided by excluding it
    # from the form's queryset for the superset field.

    def get_all_supersets(self):
        '''
        Returns all supersets of the set.
        '''
        if self.superset:
            return [self.superset] + self.superset.get_all_supersets()
        else:
            return []
I found two options for implementing the connection between supersets and the corresponding items of the sets in a given superset:
1. On saving a set or an item, update the item_set of the supersets. With this, all relations will be stored in the database. This also needs to include some logic regarding circular relations.
2. Decide for "direct-links-only", which means an item will only be linked to its directly related set in the database. The relations to the supersets are found on the fly when requested (e.g. get all supersets) with model methods.
For me, option 2 seems much more attractive in terms of data integrity, since connected relations will be updated on the fly. However, once a user enters an item -> set relation, one needs to make sure the user does not unnecessarily select a superset of a set the item already belongs to, which would break the logic and in the worst case lead to infinite recursion in the model methods that retrieve the supersets.
Since the selection will take place in a form, the Item form looks like this:
class ItemForm(forms.ModelForm):
    def __init__(self, *args, **kwargs):
        super(ItemForm, self).__init__(*args, **kwargs)
        self.fields['sets'].queryset = Set.objects.all()
        # For now, this shows all available sets. One could limit this
        # queryset to only the sets not in the item's get_supersets().
        # However, this would not allow the user to see which supersets
        # the item already belongs to.

    class Meta:
        model = Item
        widgets = {'sets': forms.CheckboxSelectMultiple()}
        # Available sets are shown as checkboxes.
The Set form looks like this:
class SetForm(forms.ModelForm):
    def __init__(self, *args, **kwargs):
        super(SetForm, self).__init__(*args, **kwargs)
        self.fields['superset'].queryset = Set.objects.exclude(id__exact=self.instance.id)
        # As mentioned, avoiding self-reference.

    class Meta:
        model = Set
My questions:
1) How can I show the Item's supersets in the ItemForm but prevent the user from choosing them?
2) If a user chooses a set which is part of a superset, this superset should immediately become unavailable in the ItemForm. Is something like this possible, and how?
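One possible server-side guard for question 1 (a sketch under the models above; get_all_supersets is assumed to work as defined, and the dynamic greying-out asked about in question 2 would still need JavaScript on top of this): reject any selection in which one chosen set is a superset of another chosen set, so the hierarchy logic cannot be broken even if the form shows every set.

from django import forms

class ItemForm(forms.ModelForm):
    class Meta:
        model = Item
        fields = ['name', 'sets']
        widgets = {'sets': forms.CheckboxSelectMultiple()}

    def clean_sets(self):
        chosen = list(self.cleaned_data['sets'])
        for selected_set in chosen:
            # get_all_supersets() walks up the superset chain (defined above)
            if any(ancestor in chosen for ancestor in selected_set.get_all_supersets()):
                raise forms.ValidationError(
                    "Do not select a set together with one of its supersets.")
        return self.cleaned_data['sets']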

Looking for ForeignKey active and add to QuerySet

class Control(models.Model):
    period = models.DurationField()
    active = models.BooleanField()
    device_collection = models.ForeignKey('DeviceSet')

class DeviceSet(models.Model):
    name = models.CharField()
    date_last_control = models.DateField()

    def get_next_control(self):
        return self.date_last_control + self.control_actif.period

    @property
    def control_actif(self):
        if not hasattr(self, "_control"):
            setattr(self, "_control", self.control_set.get(active=True))
        return self._control
There are several Controls associated with a DeviceSet, but only one Control is active per DeviceSet.
I'd like to get the active Control of the DeviceSet when I get the queryset, in a column _control.
I already tried:
DeviceSet.objects.annotate(_control=Q(control__active=True))
That doesn't work:
'WhereNode' object has no attribute 'output_field'
And after setting output_field=Control I get the following exception:
type object 'Control' has no attribute 'resolve_expression'
I just want something like prefetch_related with a filter, but in a new column, so I can use the _control attribute in the model's methods.
You are getting errors from what you've attempted because the annotate method needs an aggregate function (e.g. Sum, Count, etc.) rather than a Q object.
Since Django 1.7 it's possible to do what you want using prefetch_related; see the docs here:
https://docs.djangoproject.com/en/1.8/ref/models/querysets/#django.db.models.Prefetch
from django.db.models import Prefetch

DeviceSet.objects.prefetch_related(
    Prefetch('control_set',
             queryset=Control.objects.filter(active=True),
             to_attr='_control')
)
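One caveat, based on the documented behaviour of Prefetch with to_attr: the prefetched objects are stored as a list, so _control set this way holds a list of Control instances rather than a single object, and control_actif would need to unpack it, e.g.:

for device_set in DeviceSet.objects.prefetch_related(
        Prefetch('control_set',
                 queryset=Control.objects.filter(active=True),
                 to_attr='_control')):
    active_control = device_set._control[0]  # to_attr always stores a list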

Duplicate Django Model Instance and All Foreign Keys Pointing to It

I want to create a method on a Django model, call it model.duplicate(), that duplicates the model instance, including all the foreign keys pointing to it. I know that you can do this:
def duplicate(self):
    self.pk = None
    self.save()
...but this way all the related models still point to the old instance.
I can't simply save a reference to the original object because what self points to changes during execution of the method:
def duplicate(self):
    original = self
    self.pk = None
    self.save()
    assert original is not self  # fails
I could try to save a reference to just the related object:
def duplicate(self):
    original_fkeys = self.fkeys.all()
    self.pk = None
    self.save()
    self.fkeys.add(*original_fkeys)
...but this transfers them from the original record to the new one. I need them copied over and pointed at the new record.
Several answers elsewhere (and here before I updated the question) have suggested using Python's copy, which I suspect works for foreign keys on this model, but not foreign keys on another model pointing to it.
def duplicate(self):
    new_model = copy.deepcopy(self)
    new_model.pk = None
    new_model.save()
If you do this, new_model.fkeys.all() (to follow my naming scheme thus far) will be empty.
You can create a new instance and save it like this:
def duplicate(self):
    kwargs = {}
    for field in self._meta.fields:
        kwargs[field.name] = getattr(self, field.name)
        # or self.__dict__[field.name]
    kwargs.pop('id')
    new_instance = self.__class__(**kwargs)
    new_instance.save()
    # now you have an id for the new instance, so you can
    # create the related models in a similar fashion
    fkeys_qs = self.fkeys.all()
    new_fkeys = []
    for fkey in fkeys_qs:
        fkey_kwargs = {}
        for field in fkey._meta.fields:
            fkey_kwargs[field.name] = getattr(fkey, field.name)
        fkey_kwargs.pop('id')
        fkey_kwargs['foreign_key_field'] = new_instance.id
        new_fkeys.append(fkeys_qs.model(**fkey_kwargs))
    fkeys_qs.model.objects.bulk_create(new_fkeys)
    return new_instance
I'm not sure how it'll behave with ManyToMany fields, but for simple fields it works, and you can always pop the fields you are not interested in for your new instance.
The bits where I'm iterating over _meta.fields may be done with copy, but the important thing is to use the new id for the foreign_key_field.
I'm sure it's programmatically possible to detect which fields are foreign keys to self.__class__ (foreign_key_field), but since you can have more than one of them, it's better to name the one (or more) you want explicitly.
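As a rough sketch of that kind of detection (an illustration only: it relies on field.related_model, which newer Django versions expose on fields, and fields_pointing_to is just a hypothetical helper name):

from django.db import models

def fields_pointing_to(related_model, target_cls):
    """Names of the ForeignKey fields on related_model that point at target_cls."""
    return [
        field.name
        for field in related_model._meta.fields
        if isinstance(field, models.ForeignKey) and field.related_model is target_cls
    ]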
Although I accepted the other poster's answer (since it helped me get here), I wanted to post the solution I ended up with in case it helps someone else stuck in the same place.
def duplicate(self):
    """
    Duplicate a model instance, making copies of all foreign keys pointing
    to it. This is an in-place method in the sense that the record the
    instance is pointing to will change once the method has run. The old
    record is still accessible but must be retrieved again from
    the database.
    """
    # I had a known set of related objects I wanted to carry over, so I
    # listed them explicitly rather than looping over obj._meta.fields
    fks_to_copy = list(self.fkeys_a.all()) + list(self.fkeys_b.all())
    # Now we can make the new record
    self.pk = None
    # Make any changes you like to the new instance here, then
    self.save()
    foreign_keys = {}
    for fk in fks_to_copy:
        fk.pk = None
        # Likewise make any changes to the related model here.
        # However, we avoid calling fk.save() here to prevent
        # hitting the database once per iteration of this loop
        try:
            # Use fk.__class__ here to avoid hard-coding the class name
            foreign_keys[fk.__class__].append(fk)
        except KeyError:
            foreign_keys[fk.__class__] = [fk]
    # Now we can issue just two calls to bulk_create,
    # one for fkeys_a and one for fkeys_b
    for cls, list_of_fks in foreign_keys.items():
        cls.objects.bulk_create(list_of_fks)
What it looks like when you use it:
In [6]: model.id
Out[6]: 4443
In [7]: model.duplicate()
In [8]: model.id
Out[8]: 17982
In [9]: old_model = Model.objects.get(id=4443)
In [10]: old_model.fkeys_a.count()
Out[10]: 2
In [11]: old_model.fkeys_b.count()
Out[11]: 1
In [12]: model.fkeys_a.count()
Out[12]: 2
In [13]: model.fkeys_b.count()
Out[13]: 1
Model and related_model names changed to protect the innocent.
I tried the other answers in Django 2.1/Python 3.6 and they didn't seem to copy one-to-many and many-to-many related objects (self._meta.fields doesn't include one-to-many related fields but self._meta.get_fields() does). Also, the other answers required prior knowledge of the related field name or knowledge of which foreign keys to copy.
I wrote a way to do this in a more generic fashion, handling one-to-many and many-to-many related fields. Comments included, and suggestions welcome:
def duplicate_object(self):
    """
    Duplicate a model instance, making copies of all foreign keys pointing to it.
    There are 3 steps that need to occur in order:
    1. Enumerate the related child objects and m2m relations, saving in lists/dicts
    2. Copy the parent object per django docs (doesn't copy relations)
    3a. Copy the child objects, relating to the copied parent object
    3b. Re-create the m2m relations on the copied parent object
    """
    related_objects_to_copy = []
    relations_to_set = {}
    # Iterate through all the fields in the parent object looking for related fields
    for field in self._meta.get_fields():
        if field.one_to_many:
            # One to many fields are backward relationships where many child objects are related to the
            # parent (i.e. SelectedPhrases). Enumerate them and save a list so we can copy them after
            # duplicating our parent object.
            print(f'Found a one-to-many field: {field.name}')
            # 'field' is a ManyToOneRel which is not iterable, we need to get the object attribute itself
            related_object_manager = getattr(self, field.name)
            related_objects = list(related_object_manager.all())
            if related_objects:
                print(f' - {len(related_objects)} related objects to copy')
                related_objects_to_copy += related_objects
        elif field.many_to_one:
            # In testing so far, these relationships are preserved when the parent object is copied,
            # so they don't need to be copied separately.
            print(f'Found a many-to-one field: {field.name}')
        elif field.many_to_many:
            # Many to many fields are relationships where many parent objects can be related to many
            # child objects. Because of this the child objects don't need to be copied when we copy
            # the parent, we just need to re-create the relationship to them on the copied parent.
            print(f'Found a many-to-many field: {field.name}')
            related_object_manager = getattr(self, field.name)
            relations = list(related_object_manager.all())
            if relations:
                print(f' - {len(relations)} relations to set')
                relations_to_set[field.name] = relations
    # Duplicate the parent object
    self.pk = None
    self.save()
    print(f'Copied parent object ({str(self)})')
    # Copy the one-to-many child objects and relate them to the copied parent
    for related_object in related_objects_to_copy:
        # Iterate through the fields in the related object to find the one that relates to the
        # parent model (I feel like there might be an easier way to get at this).
        for related_object_field in related_object._meta.fields:
            if related_object_field.related_model == self.__class__:
                # If the related_model on this field matches the parent object's class, perform the
                # copy of the child object and set this field to the parent object, creating the
                # new child -> parent relationship.
                related_object.pk = None
                setattr(related_object, related_object_field.name, self)
                related_object.save()
                text = str(related_object)
                text = (text[:40] + '..') if len(text) > 40 else text
                print(f'|- Copied child object ({text})')
    # Set the many-to-many relations on the copied parent
    for field_name, relations in relations_to_set.items():
        # Get the field by name and set the relations, creating the new relationships
        field = getattr(self, field_name)
        field.set(relations)
        text_relations = []
        for relation in relations:
            text_relations.append(str(relation))
        print(f'|- Set {len(relations)} many-to-many relations on {field_name} {text_relations}')
    return self
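For illustration, calling it would look roughly like this (Book is a hypothetical model that has the method above plus some one-to-many and many-to-many relations):

original = Book.objects.get(pk=1)
copied = original.duplicate_object()
# Note: the method mutates self, so 'original' now refers to the new row;
# re-fetch the old record by its primary key if you still need it.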
Here is a somewhat simple-minded solution. This does not depend on any undocumented Django APIs. It assumes that you want to duplicate a single parent record, along with its child, grandchild, etc. records. You pass in a whitelist of classes that should actually be duplicated, in the form of a list of names of the one-to-many relationships on each parent object that point to its child objects. This code assumes that, given the above whitelist, the entire tree is self-contained, with no external references to worry about.
One more thing about this code: it is truly recursive, in that it calls itself for each new level of descendants.
from collections import OrderedDict

def duplicate_model_with_descendants(obj, whitelist, _new_parent_pk=None):
    kwargs = {}
    children_to_clone = OrderedDict()
    for field in obj._meta.get_fields():
        if field.name == "id":
            pass
        elif field.one_to_many:
            if field.name in whitelist:
                these_children = list(getattr(obj, field.name).all())
                if field.name in children_to_clone:
                    children_to_clone[field.name] += these_children
                else:
                    children_to_clone[field.name] = these_children
            else:
                pass
        elif field.many_to_one:
            if _new_parent_pk:
                kwargs[field.name + '_id'] = _new_parent_pk
        elif field.concrete:
            kwargs[field.name] = getattr(obj, field.name)
        else:
            pass
    new_instance = obj.__class__(**kwargs)
    new_instance.save()
    new_instance_pk = new_instance.pk
    for ky in children_to_clone.keys():
        child_collection = getattr(new_instance, ky)
        for child in children_to_clone[ky]:
            child_collection.add(duplicate_model_with_descendants(child, whitelist=whitelist, _new_parent_pk=new_instance_pk))
    return new_instance
Example usage:
from django.db import models

class Book(models.Model):
    pass

class Chapter(models.Model):
    book = models.ForeignKey(Book, related_name='chapters')

class Page(models.Model):
    chapter = models.ForeignKey(Chapter, related_name='pages')

WHITELIST = ['books', 'chapters', 'pages']
original_record = Book.objects.get(pk=1)
duplicate_record = duplicate_model_with_descendants(original_record, WHITELIST)

sqlalchemy access parent class attribute

Looking at the bottom of the post you can see I have three classes. The code here is pseudo-code written on the fly and untested, but it adequately shows my problem. If we need the actual classes I can update this question tomorrow when at work. So please ignore syntax issues and code that only represents a thought rather than the actual "code" that would do what I describe.
Question 1
If you look at the Item search class method, you can see that when the user does a search I call search on the base class and then, based on that result, return the correct class/object. This works but seems kludgy. Is there a better way to do this?
Question 2
If you look at the KitItem class, you can see that I am overriding the list price. If the flag calc_list is set to true, I sum the list prices of the components and return that as the list price for the kit. If it's not marked as true, I want to return the "base" list price. However, as far as I know there is no way to access a parent attribute, since in a normal setup it would be meaningless, but with SQLAlchemy and shared table inheritance it could be useful.
TIA
class Item(DeclarativeBase):
    __tablename__ = 'items'

    item_id = Column(Integer, primary_key=True, autoincrement=True)
    sku = Column(Unicode(50), nullable=False, unique=True)
    list_price = Column(Float)
    cost_price = Column(Float)
    item_type = Column(Unicode(1))

    __mapper_args__ = {'polymorphic_on': item_type}

    def __init__(self, sku, list_price, cost_price):
        self.sku = sku
        self.list_price = list_price
        self.cost_price = cost_price

    @classmethod
    def search(cls):
        """
        search based on sku, description, long description
        return item as proper class
        """
        item = DBSession.query(cls).filter(...)  # do search stuff here
        if item.item_type == 'K':  # Better way to do this???
            return DBSession.query(KitItem).get(item.item_id)

class KitItem(Item):
    __mapper_args__ = {'polymorphic_identity': 'K'}

    calc_list = Column(Boolean, nullable=False, default=False)

    @property
    def list_price(self):
        if self.calc_list:
            list_price = 0.0
            for comp in self.components:
                list_price += comp.component.list_price * comp.qty
            return list_price
        else:
            # need help here
            item = DBSession.query(Item).get(self.item_id)
            return item.list_price

class KitComponent(DeclarativeBase):
    __tablename__ = "kit_components"

    kit_id = Column(Integer, ForeignKey('items.item_id'), primary_key=True)
    component_id = Column(Integer, ForeignKey('items.item_id'), primary_key=True)
    qty = Column(Integer, nullable=False, default=1)

    kit = relation(KitItem, backref=backref("components"))
    component = relation(Item)
Answer-1: In fact you do not need to do anything special here: given that you configured your inheritance hierarchy properly, your query will already return the proper class for every row (Item or KitItem). This is the advantage of the ORM. What you could do, though, is configure the query to immediately load the additional columns that belong to children of Item (from your code this is only the calc_list column), which you can do by specifying with_polymorphic('*'):
@classmethod
def search(cls):
    item = DBSession.query(cls).with_polymorphic('*').filter(...)  # do search stuff here
    return item
Read more on this in Basic Control of Which Tables are Queried.
To see the difference, enable SQL logging and compare your test scripts with and without with_polymorphic(...) - you will most probably see fewer SQL statements being executed.
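For reference, a minimal way to turn that SQL logging on is the engine's echo flag (standard SQLAlchemy; the connection URL below is just a placeholder):

from sqlalchemy import create_engine

# echo=True logs every SQL statement the engine issues
engine = create_engine('sqlite:///example.db', echo=True)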
Answer-2: I would not override a mapped column attribute with one that is purely computed. Instead I would just create another computed attribute (let's call it total_price), which would look like the following for each of the two classes:
class Item(Base):
    ...

    @property
    def total_price(self):
        return self.list_price

class KitItem(Item):
    ...

    @property
    def total_price(self):
        if self.calc_list:
            _price = 0.0
            for comp in self.components:
                _price += comp.component.list_price * comp.qty
            return _price
        else:
            # note: again, you do not need to perform any query here at all, as *self* has what you need
            return self.list_price
Also in this case, you might think of configuring the relationship KitItem.components to be eagerly loaded, so that the calculation of total_price will not trigger additional SQL. But you have to decide yourself whether this is beneficial for your use cases (again, analyse the SQL generated in your scenario).
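A sketch of what that eager loading could look like, either per query or baked into the relationship (joinedload is the current SQLAlchemy name; very old releases called it eagerload):

from sqlalchemy.orm import joinedload

# per query: pull the components in the same SELECT as the kits
kits = DBSession.query(KitItem).options(joinedload(KitItem.components)).all()

# or on the relationship itself, so the collection is always joined in:
# kit = relation(KitItem, backref=backref("components", lazy="joined"))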

Django model ON DELETE CASCADE policy, emulate ON DELETE RESTRICT instead, general solution

I'd like to delete an instance of a model, but only if no instance of another class has a foreign key pointing to it. From the Django documentation:
When Django deletes an object, it emulates the behavior of the SQL constraint ON DELETE CASCADE -- in other words, any objects which had foreign keys pointing at the object to be deleted will be deleted along with it.
Given this example:
class TestA(models.Model):
    name = models.CharField()

class TestB(models.Model):
    name = models.CharField()
    testAs = models.ManyToManyField(TestA)

# More classes with a ManyToMany relationship with TestA
# ........
I'd like something like:
tA = TestA(name="testA1")
tB = TestB(name="testB1")
tB.testAs.add(tA)

t = TestA.objects.get(name="testA1")
if is_not_foreignkey(t):
    t.delete()
else:
    print "Error, some instance is using this"
This should print the error. I know I can check specific reverse relations, like t.testb_set in this case, but I am looking for a more general solution that works for any given model.
I finally solved it with the help of Nullable ForeignKeys and deleting a referenced model instance; the solution looks like:
# Check foreign key references
instances_to_be_deleted = CollectedObjects()
object._collect_sub_objects(instances_to_be_deleted)

# Count objects to delete
count_instances_to_delete = 0
for k in instances_to_be_deleted.unordered_keys():
    count_instances_to_delete += len(instances_to_be_deleted[k])

if count_instances_to_delete == 1:
    object.delete()
else:
    pass
Check the related objects' count:
t = TestA.objects.get(name="testA1")
if not t.testb_set.all().count():  # related members
    t.delete()
CollectedObjects() was removed in Django 1.3 -- here's a current method:
from compiler.ast import flatten
from django.db import DEFAULT_DB_ALIAS
from django.contrib.admin.util import NestedObjects

def delete_obj_if_no_references(obj):
    collector = NestedObjects(using=DEFAULT_DB_ALIAS)
    collector.collect([obj])
    objs = flatten(collector.nested())
    if len(objs) == 1 and objs[0] is obj:
        obj.delete()
        return True
    return False
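Tying this back to the original example, usage would be along these lines (a sketch; the helper above replaces the hypothetical is_not_foreignkey check from the question):

t = TestA.objects.get(name="testA1")
if delete_obj_if_no_references(t):
    print "Deleted, nothing references this instance"
else:
    print "Error, some instance is using this"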
