This is my code:
class SocialNodeSubscription(model.Model):
    def __init__(self, *args, **kwargs):
        permissions = ["post", "read", "reply", "admin"]
        for p in permissions:
            self.__dict__["can_" + p] = model.BooleanProperty(default=True)
I need to dynamically define fields in my model, but this doesn't seem to work because __dict__ is not the right place to put my fields.
For those who don't know about ndb, this is how it would look going the easier way:
class SocialNodeSubscription(model.Model):
    def __init__(self, *args, **kwargs):
        self.can_write = model.BooleanProperty(default=True)
        self.can_read = model.BooleanProperty(default=True)
        ...
Edit:
Now my code looks like this:
def __init__(self, *args, **kwargs):
    permissions = ["post", "read", "reply", "admin"]
    for p in permissions:
        self._properties["can_" + p] = model.BooleanProperty(default=True)
    self._fix_up_properties()
But I still get this error:
File "C:\Program Files (x86)\Google\google_appengine\google\appengine\ext\ndb\model.py", line 972, in _store_value
  entity._values[self._name] = value
TypeError: 'NoneType' object does not support item assignment
What does it mean?
It's _properties; just have a look at its metaclass MetaModel and the class method _fix_up_properties.
Definition of _properties:
# Class variables updated by _fix_up_properties()
_properties = None
Method:
@classmethod
def _fix_up_properties(cls):
    """Fix up the properties by calling their _fix_up() method.

    Note: This is called by MetaModel, but may also be called manually
    after dynamically updating a model class.
    """
Use an Expando model for a model with dynamic properties.
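In ndb, model.Expando lets you assign arbitrary properties at runtime without declaring them on the class. As a framework-free sketch of the same idea (a hypothetical plain-Python Subscription class, not ndb itself), the per-permission flags can be created dynamically with setattr:

```python
# Hypothetical stand-in for an Expando-style model: the can_* attributes
# are created dynamically per permission instead of being declared up front.
class Subscription:
    PERMISSIONS = ["post", "read", "reply", "admin"]

    def __init__(self, **flags):
        for p in self.PERMISSIONS:
            # default every can_* flag to True unless overridden
            setattr(self, "can_" + p, flags.get("can_" + p, True))

s = Subscription(can_admin=False)
print(s.can_post)   # True
print(s.can_admin)  # False
```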
I want to create an immutable copy of the model instance, such that the user is able to access the details of the model, including its attributes, but not the save and delete methods.
The use case is that there are two repos accessing the django model, where one is supposed to have a writable access to the model, while another should only have a readable access to it.
I have been researching ways of doing this. One way I could think of is that the readable repo gets the model instance with a wrapper: a class containing the model instance as a private variable.
class ModelA(models.Model):
    field1 = models.CharField(max_length=11)

class ModelWrapper:
    def __init__(self, instance):
        self.__instance = instance

    def __getattr__(self, name):
        return getattr(self.__instance, name)
The obvious problem with this approach is that the user can access the instance from the wrapper instance:
# model_wrapper is the wrapper created around the instance. Then
# model_wrapper._ModelWrapper__instance refers to the ModelA instance. Thus:
instance = model_wrapper._ModelWrapper__instance
instance.field2 = "changed"
instance.save()
Thus, they would be able to update the value. Is there a way to restrict this behaviour?
Try overriding the model's save and delete in the webapp where you want to restrict that:
class ModelA(models.Model):
    field1 = models.CharField(max_length=11)

    def save(self, *args, **kwargs):
        return  # Or raise an exception if needed

    def delete(self, *args, **kwargs):
        return  # Or raise an exception if needed
If you are using update or delete on a queryset, you might also need a pre_save and pre_delete signal:
from django.db.models.signals import pre_delete
from django.dispatch import receiver

@receiver(pre_delete, sender=ModelA)
def pre_delete_handler(sender, instance, *args, **kwargs):
    raise Exception('Cannot delete')
Edit: It looks like querysets don't send the pre_save/post_save signals, so that cannot be used there; the delete signals are emitted, though.
class ModelA(models.Model):
    field1 = models.CharField(max_length=11)

class ModelWrapper:
    def __init__(self, instance):
        self.__instance = instance
        # Delete unwanted attributes
        delattr(self.__instance, 'save')
        delattr(self.__instance, 'delete')

    def __getattr__(self, name):
        return getattr(self.__instance, name)
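Another variant that avoids deleting anything from the instance is to block the unwanted names in the wrapper itself. Below is a sketch using a plain stand-in Record class rather than a real Django model; note it only raises the bar, since the underlying instance is still reachable via the private attribute:

```python
class ReadOnlyWrapper:
    _blocked = {"save", "delete"}

    def __init__(self, instance):
        # bypass our own __setattr__ guard for the one attribute we need
        object.__setattr__(self, "_instance", instance)

    def __getattr__(self, name):
        # __getattr__ is only called for names not found on the wrapper itself
        if name in self._blocked:
            raise AttributeError(f"'{name}' is not available on a read-only wrapper")
        return getattr(self._instance, name)

    def __setattr__(self, name, value):
        raise AttributeError("read-only wrapper: attributes cannot be set")

# Hypothetical stand-in for a model instance:
class Record:
    def __init__(self):
        self.field1 = "hello"

    def save(self):
        pass

w = ReadOnlyWrapper(Record())
print(w.field1)  # hello
```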
I am trying to work out how to inherit variables from a parent class.
I have two classes (simplified, but the same principle):
class Database(object):
    def __init__(self, post, *args, **kwargs):
        self.post = post
        self.report()

    def report(self):
        # ... obtain variables from post ...
        self.database_id = self.post['id']
        # ... save data to database

class PDF(Database):
    def __init__(self, post, *args, **kwargs):
        Database.__init__(self, post, *args, **kwargs)
        # ... if I try to access self.database_id now, it returns an error ...
        print(self.database_id)
instantiating script:
Database(request.POST)
PDF(request.POST)
I have tried just instantiating PDF, as I thought the Database.__init__(self, post, *args, **kwargs) line would call the Database class's constructor, but this does not work either.
I am trying to find the most Pythonic way to do this with inheritance. I can obviously obtain self.database_id from the post dict passed to PDF(), but I do not see the point in doing this twice if I can use inheritance.
Thanks
Use:
class PDF(Database):
    def __init__(self, post, *args, **kwargs):
        # Stuff
        super().__init__(post, *args, **kwargs)
The correct approach to initializing an inherited class is to call super().__init__(args), which in this case calls Database.__init__ because of the method resolution order.
See http://amyboyle.ninja/Python-Inheritance
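Put together with the classes from the question, a minimal runnable sketch (with a plain dict standing in for request.POST):

```python
class Database:
    def __init__(self, post, *args, **kwargs):
        self.post = post
        self.report()

    def report(self):
        # obtain variables from post
        self.database_id = self.post['id']

class PDF(Database):
    def __init__(self, post, *args, **kwargs):
        super().__init__(post, *args, **kwargs)
        # database_id was set by Database.report(), called from Database.__init__
        print(self.database_id)

pdf = PDF({'id': 42})  # prints 42
```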
Why do I get the following error, and how do I resolve it?
TypeError: super(type, obj): obj must be an instance or subtype of type
Another way this error can occur is when you reload the module containing the class in a Jupyter notebook.
An easy solution is to restart the kernel.
http://thomas-cokelaer.info/blog/2011/09/382/
Check out @Mike W's answer for more detail.
You should call super using the UrlManager class as the first argument, not the URL model. super cannot be called with an unrelated class/type:
From the docs,
super(type[, object-or-type]):
Return a proxy object that delegates method calls to a parent or
sibling class of type.
So you cannot do:
>>> class D:
...     pass
...
>>> class C:
...     def __init__(self):
...         super(D, self).__init__()
...
>>> C()
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "<stdin>", line 3, in __init__
TypeError: super(type, obj): obj must be an instance or subtype of type
You should do:
qs_main = super(UrlManager, self).all(*args, **kwargs)
Or in Python 3:
qs_main = super().all(*args, **kwargs)
Elaborating on @Oğuz Şerbetci's answer: in Python 3 (not necessarily only in Jupyter), this can happen when there is a need to reload a library. For example, we have class Parent and class Child defined as
class Parent(object):
    def __init__(self):
        # do something
        pass

class Child(Parent):
    def __init__(self):
        super(Child, self).__init__()
then if you do this
import library
from library import Child

reload(library)
Child()
you will get TypeError: super(type, obj): obj must be an instance or subtype of type. The solution is just to re-import the class after the reload:
import library
from library import Child

reload(library)
from library import Child  # re-import the class after the reload

Child()
For Jupyter only
You can get this issue because the reload logic has some bugs (issue).
Here is a simple workaround that works for me until the issue is fixed:
1. Add a typo like 1001xx at the bottom of the file which you call in the cell
2. Run your cell; you will see some exception, just skip it
3. Remove the typo added in step 1
4. Run the cell again
5. Profit
Another interesting way this can happen is if a merge of branches has duplicated the class, so that the file contains two definitions for the same name, e.g.
class A(Foo):
    def __init__(self):
        super(A, self).__init__()
        # ...

class A(Foo):
    def __init__(self):
        super(A, self).__init__()
        # ...
If you try to create an instance from a static reference to the first definition of A, then once it tries to call super inside the __init__ method, A will refer to the second definition of A, since it has been overwritten. The solution, of course, is to remove the duplicate definition of the class, so it doesn't get overwritten.
This may seem like something that would never happen, but it just happened to me, when I wasn't paying close enough attention to the merge of two branches. My tests failed with the error message described in the question, so I thought I'd leave my findings here, even though it doesn't exactly answer the specific question.
The best solution I have found for this problem is only available in Python 3: you then don't need to specify the arguments of super, so you won't have the error any more when writing your class like this:
class D:
    pass

class C(D):
    def __init__(self):
        super().__init__()  # no arguments given to super()
This error also pops up when you simply do not instantiate the child class and try to call a method on the class itself, as in:
class Parent:
    def method():
        pass

class Child(Parent):
    def method():
        super().method()

P = Parent()
C = Child
C.method()
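For contrast, a sketch of the fixed version: give the methods a self parameter and call them on an instance, so zero-argument super() has an instance to bind to:

```python
class Parent:
    def method(self):
        return "parent"

class Child(Parent):
    def method(self):
        # zero-argument super() works here because the method
        # is being called on an instance of Child
        return super().method()

c = Child()        # instantiate the child class...
print(c.method())  # ...then call the method on the instance; prints "parent"
```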
Similar to @Eldamir, I solved it by realizing I had written two classes with the same name, and the second one was overwriting the first.
If that's the case, change the name of one of the classes.
So I just pasted in a form in forms.py. I took a quick look to see if I needed to change anything, but I didn't spot it. Then I got this super(type, obj): obj must be an instance or subtype of type error, so I searched for it in the browser, but before I checked any of the answers I looked one more time, and this time I spotted the issue.
As you can see, many answers to this question say something was wrong with the super call. Yes, it was the same issue for me: make sure that if you have any super calls, the class passed to super matches the class it is defined in. At least that's what I did.
Before and After
Where I spotted it in my code
forms.py
Before:
class userProfileForm(ModelForm):
    class Meta:
        model = user_profile
        fields = ("user", "rating", )

    def __init__(self, *args, **kwargs):
        # https://stackoverflow.com/a/6866387/15188026
        hide_condition = kwargs.pop('hide_condition', None)
        super(ProfileForm, self).__init__(*args, **kwargs)
        if hide_condition:
            self.fields['user'].widget = HiddenInput()
After:
class userProfileForm(ModelForm):
    class Meta:
        model = user_profile
        fields = ("user", "rating", )

    def __init__(self, *args, **kwargs):
        # https://stackoverflow.com/a/6866387/15188026
        hide_condition = kwargs.pop('hide_condition', None)
        super(userProfileForm, self).__init__(*args, **kwargs)
        if hide_condition:
            self.fields['user'].widget = HiddenInput()
You can see that the super call got changed to the class name.
In my Django project, I want all my model fields to have an additional argument called documentation. (It would be similar to verbose_name or help_text, but instead for internal documentation.)
This seems straightforward: just subclass and override the field's __init__:
def __init__(self, verbose_name=None, name=None, documentation=None, **kwargs):
    self.documentation = documentation
    super(..., self).__init__(verbose_name, name, **kwargs)
The question is how do I make this apply to all of the 20-something field classes in django.db.models (BooleanField, CharField, PositiveIntegerField, etc.)?
The only way I see is to use metaprogramming with the inspect module:
import inspect
import sys

from django.db.models import *

current_module = sys.modules[__name__]

all_field_classes = [Cls for (_, Cls) in inspect.getmembers(current_module,
                     lambda m: inspect.isclass(m) and issubclass(m, Field))]

for Cls in all_field_classes:
    Cls.__init__ = <???>
I am not used to seeing code like this, and don't even know if it will work. I wish I could just add the attribute to the base Field class, and have it inherit to all the child classes, but I don't see how that could be done.
Any ideas?
Indeed, you are on the right track.
In Python, introspection is a normal thing, and you don't even need to use the inspect module just because you are doing introspection and metaprogramming. :-)
One thing that is not considered much of a good practice, though, is monkey patching: that is, changing the classes in django.db.models itself, so that other modules import the modified classes from there and use the modified version. (Note that in this case, not recommended != will not work.) So you would be better off creating all the new model classes in your own module and importing them from your own module, instead of from django.db.models.
So, something along these lines:
from django.db import models

# A decorator to implement the behavior you want for the
# __init__ method
def new_init(func):
    def __init__(self, *args, **kw):
        self.documentation = kw.pop("documentation", None)
        return func(self, *args, **kw)
    return __init__

for name, obj in models.__dict__.items():
    # check if obj is a class:
    if not isinstance(obj, type):
        continue
    # creates a new __init__, retrieving the original one -
    # taking care not to pick it up as an unbound method -
    # check: http://pastebin.com/t1SAusPS
    new_init_method = new_init(obj.__dict__.get("__init__", lambda s: None))
    # dynamically creates a new subclass of obj, overriding just the __init__ method:
    new_class = type(name, (obj,), {"__init__": new_init_method})
    # binds the new class to this module's scope:
    globals()[name] = new_class
Or, if you prefer monkey patching, as it is easier :-p
from django.db import models

def new_init(func):
    def __init__(self, *args, **kw):
        self.documentation = kw.pop("documentation", None)
        return func(self, *args, **kw)
    return __init__

for name, obj in models.__dict__.items():
    # check if obj is a class:
    if not isinstance(obj, type):
        continue
    obj.__init__ = new_init(obj.__dict__["__init__"])
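To see the wrapping technique in isolation, here is the same decorator applied to a hypothetical stand-in Field class instead of Django's real one, so the mechanics can be checked without Django installed:

```python
def new_init(func):
    def __init__(self, *args, **kw):
        # pop the extra kwarg before the original __init__ sees it
        self.documentation = kw.pop("documentation", None)
        return func(self, *args, **kw)
    return __init__

class Field:  # hypothetical stand-in for django.db.models.Field
    def __init__(self, verbose_name=None):
        self.verbose_name = verbose_name

# monkey patch the class in place, exactly as in the loop above
Field.__init__ = new_init(Field.__init__)

f = Field(verbose_name="age", documentation="internal note")
print(f.verbose_name, f.documentation)
```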
Why doesn't this code work?
I can see in the debugger (PyCharm) that the __init__ line is executed, but nothing more.
I have tried putting a raise exception there to be really sure, and again nothing happened.
class polo(object):
    def __init__(self):
        super(polo, self).__init__()
        self.po = 1  # <- this code is never executed

class EprForm(forms.ModelForm, polo):
    class Meta:
        model = models.Epr
You use multiple inheritance, so in general Python will look for methods in left-to-right order. If your class does not have __init__, Python will look for it in ModelForm and then (only if not found there) in polo. In your code, polo.__init__ is never called because ModelForm.__init__ is called instead.
To call the constructors of both base classes, use explicit constructor calls:
class EprForm(forms.ModelForm, polo):
    def __init__(self, *args, **kwargs):
        forms.ModelForm.__init__(self, *args, **kwargs)  # Call the constructor of ModelForm
        polo.__init__(self, *args, **kwargs)  # Call the constructor of polo

    class Meta:
        model = models.Epr
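Alternatively, if every class in the hierarchy calls super().__init__(), Python's MRO chains the constructors for you. A minimal sketch with stand-in base classes (no Django; Base plays the role of forms.ModelForm):

```python
class Polo:
    def __init__(self, *args, **kwargs):
        super().__init__(*args, **kwargs)  # continue along the MRO
        self.po = 1

class Base:  # hypothetical stand-in for forms.ModelForm
    def __init__(self, *args, **kwargs):
        self.base_ready = True
        super().__init__(*args, **kwargs)  # hands control to Polo via the MRO

class Form(Base, Polo):
    pass

# MRO is Form -> Base -> Polo -> object, so both __init__ methods run:
f = Form()
print(f.po, f.base_ready)  # 1 True
```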