Django custom model fields: to_python() not called - python

I am quite new to Python and Django, and totally new on Stack Overflow, so I hope I won't break any rules here and I respect the question format.
I am facing a problem trying to implement a custom model field with Django (Python 3.3.0, Django 1.5a1), and I didn't find any similar topics, I am actually quite stuck on this one...
So there is a Player, he has got a Hand (of Card). The Hand inherits from CardContainer, which is basically a list of cards with some (hidden here) helper functions.
Here is the corresponding code:
from django.db import models

class Card:
    def __init__(self, id):
        self.id = id

class CardContainer:
    def __init__(self, cards=None):
        if cards is None:
            cards = []
        self.cards = cards

class Hand(CardContainer):
    def __init__(self, cards=None):
        super(Hand, self).__init__(cards)

class CardContainerField(models.CommaSeparatedIntegerField):
    __metaclass__ = models.SubfieldBase

    def __init__(self, cls, *args, **kwargs):
        if not issubclass(cls, CardContainer):
            raise TypeError('{} is not a subclass of CardContainer'.format(cls))
        self.cls = cls
        kwargs['max_length'] = 10
        super(CardContainerField, self).__init__(*args, **kwargs)

    def to_python(self, value):
        if not value:
            return self.cls()
        if isinstance(value, self.cls):
            return value
        if isinstance(value, list):
            return self.cls([i if isinstance(i, Card) else Card(i) for i in value])
        # String: '1,2,3,...'
        return self.cls([Card(int(i)) for i in value.split(',')])

    def get_prep_value(self, value):
        if value is None:
            return ''
        return ','.join([str(card.id) for card in value.cards])

class Player(models.Model):
    hand = CardContainerField(Hand)
But when I get a player, let's say like this: Player.objects.get(id=3).hand, instead of getting a Hand instance (or even a CardContainer instance at all!), I just get a comma-separated string of integers like "1,2,3", which is fine in the database (it is the format I'd like to see IN the database)...
It seems to me that to_python doesn't get called, so the returned data is the raw value, hence the string. When I searched for this type of problem, people had usually missed the __metaclass__ = models.SubfieldBase line... I hoped I had missed that too but, hey, that would have been too simple!
Did I miss something trivial, or am I wrong for the whole thing? :D
Thanks a lot!!

In Python 3 the __metaclass__ attribute is no longer supported. You must pass the metaclass as a keyword argument in the class statement instead:
class CardContainerField(models.CommaSeparatedIntegerField, metaclass=models.SubfieldBase):
    ...

For Django 1.10 and later, SubfieldBase is gone; define from_db_value() on the field instead:
class CardContainerField(models.CommaSeparatedIntegerField):
    def from_db_value(self, value, expression, connection, context):
        .......
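As a rough sketch (not the poster's exact code), from_db_value() can simply delegate to the same parsing logic the question already has in to_python(); the cls argument and the max_length value are carried over from the question:

class CardContainerField(models.CommaSeparatedIntegerField):
    def __init__(self, cls, *args, **kwargs):
        self.cls = cls
        kwargs['max_length'] = 10
        super().__init__(*args, **kwargs)

    def from_db_value(self, value, expression, connection, context):
        # Django calls this when loading the value from the database;
        # reuse the same string parsing as to_python()
        return self.to_python(value)

    def to_python(self, value):
        if not value:
            return self.cls()
        if isinstance(value, self.cls):
            return value
        return self.cls([Card(int(i)) for i in value.split(',')])

Note that more recent Django versions no longer pass the context argument to from_db_value().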

Related

Python create instance of derived class

I want to have an abstract class Task and some derived classes like TaskA, TaskB, ...
I need a static method in Task that fetches all the tasks and returns a list of them. But the problem is that I have to fetch every task differently. I want Task to be universal, so when I create a new class, for example TaskC, it should work without changing the Task class. Which design pattern should I use?
Let's say every derived Task has a decorator with its unique id. I am looking for a function that would find a class by id and create an instance of it. How do I do that in Python?
There are a couple of ways you could achieve this.
The first and simplest is to use the __new__ method as a factory to decide which subclass should be returned.
class Base:
    UUID = "0"

    def __new__(cls, *args, **kwargs):
        # decide which subclass to build; construct it via super().__new__
        # so the factory is not re-entered recursively
        if args == "some condition":
            return super().__new__(A)
        elif args == "another condition":
            return super().__new__(B)
        return super().__new__(cls)

class A(Base):
    UUID = "1"

class B(Base):
    UUID = "2"

instance = Base("some", "args", "for", "the", condition=True)
In this example, if you wanted to make sure the class is selected by UUID, you could replace the if condition with something like:
    if A.UUID == "an argument you passed":
        return super().__new__(A)
but that's not really useful: since you already know the specific UUID, you might as well not bother going through the factory at all.
Since I don't know what you want the decorator for, I can't think of a way to integrate it.
EDIT TO ADDRESS THE NOTE:
You don't need to update it every time if you write your expressions smartly.
Let's say the deciding factor comes from a config file that says "use class B"; inside the __new__ factory you could do:
    for sub_class in cls.__subclasses__():
        if sub_class.UUID == config.uuid:
            return sub_class(*args, **kwargs)  # make an instance and return it
The problem with that is that a UUID is not very meaningful to us as people; it would be easier to understand if we used a config.name instead of the UUID everywhere in the example.
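A small sketch of that config-driven idea, using a human-readable NAME attribute instead of the UUID (the NAME attribute, the config object and the from_config() helper are made up for illustration):

class Base:
    NAME = "base"

    @classmethod
    def from_config(cls, config, *args, **kwargs):
        # pick whichever registered subclass the config names
        for sub_class in cls.__subclasses__():
            if sub_class.NAME == config.name:
                return sub_class(*args, **kwargs)
        raise ValueError("no subclass named {!r}".format(config.name))

class A(Base):
    NAME = "a"

class B(Base):
    NAME = "b"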
I was fighting with this for a long time, and this is exactly what I wanted:
from typing import Dict

def class_id(id: int):
    def func(cls):
        cls.class_id = lambda: id
        return cls
    return func

def find_subclass_by_id(cls: type, id: int) -> type:
    for t in cls.__subclasses__():
        if getattr(t, "class_id")() == id:
            return t

def get_class_id(obj) -> int:
    return getattr(type(obj), "class_id")()

class Task():
    def load(self, dict: Dict) -> None:
        pass

    @staticmethod
    def from_dict(dict: Dict) -> 'Task':
        task_type = int(dict['task_type'])
        t = find_subclass_by_id(Task, task_type)
        obj: Task = t()
        obj.load(dict)
        return obj

    @staticmethod
    def fetch(filter: Dict):
        # list_of_dicts stands in for the documents fetched from storage
        return [Task.from_dict(doc) for doc in list_of_dicts]

@class_id(1)
class TaskA(Task):
    def load(self, dict: Dict) -> None:
        ...

...
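A hypothetical usage sketch of the above (TaskB, its load() body, and the document contents are invented for illustration):

@class_id(2)
class TaskB(Task):
    def load(self, dict: Dict) -> None:
        self.payload = dict.get('payload')

doc = {'task_type': 2, 'payload': 'hello'}
task = Task.from_dict(doc)      # find_subclass_by_id(Task, 2) picks TaskB
print(type(task).__name__)      # TaskB
print(get_class_id(task))       # 2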

Python - how to add property accessors programmatically

class A:
    def __init__(self, *args, **kwargs):
        for item in ["itemA", "itemB"]:
            setattr(self, item, property(lambda: self.__get_method(item)))

    def __get_method(self, item):
        # do some stuff and return result
        # this is pretty complex method which requires db lookups etc.
        return result
I am trying to come up with the above example to create class properties during __init__. The items list will get bigger in the future, and I don't want to add @property every time a new entry is added.
However, I can't get the result from the property; I just get the property object itself:
a = A()
a.itemA # returns <property at 0x113a41590>
Initially it was like this, and I realized it could be better:
class A:
    @property
    def itemA(self):
        return self.__get_method("itemA")

    @property
    def itemX(self):
        ...

    # and so on
How could I add new properties just by adding new entries to the items list, so that the class itself creates an accessor for each of them?
In addition to @juanpa.arrivillaga's comment, you can also implement the __getattr__ method.
For example:
class A:
    def __getattr__(self, name):
        # make everybody happy
        ...
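A minimal sketch of that approach, reusing the item names and the _get_method idea from the question (the returned values are placeholders):

class A:
    _ITEMS = ("itemA", "itemB")

    def __getattr__(self, name):
        # only invoked when normal attribute lookup fails
        if name in self._ITEMS:
            return self.__get_method(name)
        raise AttributeError(name)

    def __get_method(self, item):
        # stand-in for the complex db-lookup method from the question
        return "result for " + item

a = A()
print(a.itemA)  # "result for itemA", not a property object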

Python: Storing class type on a class variable during class initialization

I'm trying to initialize an objects field with a class that needs to know the type that is using it:
class Device(Model):
    objects = AbstractManager(Device)
    # the rest of the class here
This is how AbstractManager is defined:
class AbstractManager:
    def __init__(self, cls: type):
        self.cls = cls

    def all(self):
        result = []
        for cls in self._get_subclasses():
            result.extend(list(cls.objects.all()))
        return result

    def _get_subclasses(self):
        return self.cls.__subclasses__()
So I can later call this, and it returns all() results from all subclasses:
Device.objects.all()
The issue here is that I cannot use Device while initializing Device.objects, since Device is still not initialized.
As a work-around I'm initializing this outside of the class, but there's gotta be a better way:
class Device(Model):
    objects = None
    # the rest of the class here

Device.objects = AbstractManager(Device)
PS: I have a C#/C++ background, so maybe I'm thinking about this too much in a static-typing mindset; I can't tell.
You don't need to add any additional logic for this. Django lets you access the model class from a manager through the self.model attribute:
    def _get_subclasses(self):
        return self.model.__subclasses__()
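Put together, a minimal sketch of that answer (assuming the manager subclasses django.db.models.Manager, so Django fills in self.model when the manager is attached to a model):

from django.db import models

class AbstractManager(models.Manager):
    def all(self):
        result = []
        for cls in self._get_subclasses():
            result.extend(list(cls.objects.all()))
        return result

    def _get_subclasses(self):
        # self.model is set by Django when the manager is attached
        return self.model.__subclasses__()

class Device(models.Model):
    objects = AbstractManager()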
You do not have to do that. Django will automatically call the contribute_to_class method, where it will pass the model, and for a manager, it will be stored in self.model. You can thus simply implement this as:
from django.db.models.manager import ManagerDescriptor

class AbstractManager(models.Manager):
    def all(self):
        result = []
        for cls in self._get_subclasses():
            result.extend(list(cls.objects.all()))
        return result

    def contribute_to_class(self, model, name):
        self.name = self.name or name
        self.model = model
        setattr(model, name, AbstractManagerDescriptor(self))
        model._meta.add_manager(self)

    def _get_subclasses(self):
        return self.model.__subclasses__()

class AbstractManagerDescriptor(ManagerDescriptor):
    def __get__(self, instance, cls=None):
        if instance is not None:
            raise AttributeError("Manager isn't accessible via %s instances" % cls.__name__)
        if cls._meta.swapped:
            raise AttributeError(
                "Manager isn't available; '%s.%s' has been swapped for '%s'" % (
                    cls._meta.app_label,
                    cls._meta.object_name,
                    cls._meta.swapped,
                )
            )
        return cls._meta.managers_map[self.manager.name]
and add the manager as:
class Device(models.Model):
    objects = AbstractManager()
That being said, I'm not sure that this is a good idea for two reasons:
you are returning a list, whereas .all() normally returns a QuerySet; you thus "destroy" the laziness of the queryset, which can result in expensive querying; and
if one used Device.objects.filter(), for example, it would simply circumvent this subclass logic.
You might want to subclass the queryset, and then aim to implement that differently.
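As a rough illustration of that last point (the is_active field and the active() method are invented examples, not from the question): subclassing the queryset keeps .filter() and laziness intact, although combining results across the subclass models would still need separate handling.

from django.db import models

class DeviceQuerySet(models.QuerySet):
    def active(self):
        # filters compose lazily instead of materialising a list
        return self.filter(is_active=True)

class Device(models.Model):
    is_active = models.BooleanField(default=True)
    objects = DeviceQuerySet.as_manager()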

In Python, what's the best way for me to write reusable code that provides me methods for interacting with a list of objects?

I'm not quite sure how to ask this question, let alone find the answer, partially because I may be completely wrong in my approach to solving this problem.
I'm writing some Python, and I have a class (Users) which is basically used to instantiate a number of objects of a particular type (User), and then provide a number of methods to help me work with those objects in a more straightforward manner. The code I have looks like this:
from defusedxml.ElementTree import parse

class Users:
    def __init__(self, path):
        self.path = path
        self.users = []
        users = parse(path).getroot()
        for user in users:
            u = User.user_from_xml(user)
            self.users.append(u)

    def __iter__(self):
        self.i = 0
        return self

    def __next__(self):
        if self.i < len(self.users):
            self.i += 1
            return self.users[(self.i - 1)]
        else:
            raise StopIteration

    def get_user_by_id(self, user_id):
        return next((user for user in self.users if user.id == user_id), None)

    def search_attribute(self, attribute, value):
        return [user for user in self.users if
                getattr(user, attribute, None) is not None and
                value.lower() in str(getattr(user, attribute)).lower()]

class User:
    def __init__(self, user_id, username, email, first_name, last_name):
        self.id = int(user_id)
        self.username = username
        self.email = email
        self.first_name = first_name
        self.last_name = last_name

    def __repr__(self):
        if self.first_name is None or self.last_name is None:
            return "%s (User Id: %s)" % (self.username, self.id)
        return "%s %s (%s)" % (self.first_name, self.last_name, self.username)

    @staticmethod
    def user_from_xml(user):
        return User(
            user.get("id"),
            user.find("username").text,
            user.find("email").text,
            user.find("firstname").text,
            user.find("lastname").text
        )
I have a number of other objects stored in XML in a similar way - for example, Events. I can see the need to use the same methods defined in Users, with the only real difference being the type of object contained in the list created in __init__.
So the question is: what's the best way for me to make this code reuseable, while maintaining readability, etc.? Or maybe I'm on completely the wrong track.
If these class methods will truly be identical, I think the simplest method would be to just make a more generic class to replace Users that takes another class (e.g., User or Event) as an argument in its __init__ method. Your class might look like so:
class Things(object):
    def __init__(self, PATH, Thing):  # Thing is a class
        self.PATH = PATH
        self.things = []
        things = parse(PATH).getroot()
        for thing in things:
            t = Thing.thing_from_xml(thing)
            self.things.append(t)

    def methods...
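Hypothetical usage of that generic class (the file paths are placeholders, and it assumes each wrapped class exposes a thing_from_xml() factory, so the question's User.user_from_xml() would need to be renamed or aliased accordingly):

users = Things("users.xml", User)
events = Things("events.xml", Event)
print(users.things[0])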
A more robust/scalable solution might be to use inheritance.
You could create an abstract base class that has all of your methods, and then override the base class's __init__ method within each child class. I'll draw out an example of this:
class AbstractBaseClass(object):
    def __init__(self, PATH):
        self.PATH = PATH
        self.things = []

    def methods...

class Users(AbstractBaseClass):
    def __init__(self, PATH):
        super(Users, self).__init__(PATH)  # calls the parent __init__ method
        users = parse(PATH).getroot()
        for user in users:
            u = User.user_from_xml(user)
            self.things.append(u)
    # no need to define methods, as they were already defined in the parent class
    # but you can override methods or add new ones if you want
Your Events class would also inherit from AbstractBaseClass and thereby have the same methods as Users. You should read up on inheritance; it's a great tool.
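For example, a hypothetical Events subclass, assuming an Event class with an event_from_xml() factory mirroring User.user_from_xml():

class Events(AbstractBaseClass):
    def __init__(self, PATH):
        super(Events, self).__init__(PATH)
        events = parse(PATH).getroot()
        for event in events:
            self.things.append(Event.event_from_xml(event))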
EDIT TO ADDRESS YOUR COMMENT:
Properties might be a good way to get that users attribute back into your Users class. Change things to _things to suggest that it is private, and then create a users property, like so:
class Users(AbstractBaseClass):
    @property
    def users(self):
        return self._things
This way you can call Users.users and get Users._things.
If you really, really care about code reuse, you could even do something dynamic like this in __init__:
class AbstractBaseClass(object):
    def __init__(self, PATH):
        self._things = []
        self.PATH = PATH
        setattr(self, self.__class__.__name__.lower(), self._things)
        # This creates an attribute that is the lowercase version of the
        # class name and assigns self._things to it
Note: I think this is a little ugly and unnecessary. Also, since you would have two attributes that refer to the same thing, it might lead to your object ending up in an incoherent state.
That said, to me Users.users seems redundant. I'm not fully aware of the context of your problem but I think I would prefer to have my Users objects simply behave like the list users, but with extra methods (those you defined).
In AbstractBaseClass you could define __iter__ to be the __iter__ of the _things attribute.
class AbstractBaseClass(object):
    def __init__(self, PATH):
        self._things = []
        self.PATH = PATH

    def __iter__(self):
        return self._things.__iter__()

    # You might also want this - it lets you do list-like indexing
    def __getitem__(self, i):
        return self._things.__getitem__(i)
I think the above does essentially what you were doing with __iter__ and __next__ in your original code, but in a cleaner way. This way, you don't have to access _things or users directly to play with a list of your user objects; you can play with a list of users through your Users class, which, by its name, seems like the purpose of the class.
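For instance, with those two methods in place the wrapper can be used directly as if it were the list (a small usage sketch; the file path is a placeholder and it assumes the subclass fills the same _things list the base class exposes):

users = Users("users.xml")
for user in users:    # __iter__ delegates to the underlying list
    print(user)
first = users[0]      # list-style indexing via __getitem__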

How to add multiple similar properties in Python

I'm building a simulator, which will model various types of entities. So I've got a base class, ModelObject, and will use subclasses for all the different entities. Each entity will have a set of properties that I want to keep track of, so I've also got a class called RecordedDetail, that keeps tracks of changes (basically builds a list of (time_step, value) pairs) and each ModelObject has a dict to store these in. So I've got, effectively,
class ModelObject(object):
    def __init__(self):
        self.details = {}
        self.time_step = 0

    def get_detail(self, d_name):
        """ get the current value of the specified RecordedDetail """
        return self.details[d_name].current_value()

    def set_detail(self, d_name, value):
        """ set the current value of the specified RecordedDetail """
        self.details[d_name].set_value(value, self.time_step)

class Widget(ModelObject):
    def __init__(self):
        super().__init__()
        self.details["level"] = RecordedDetail()
        self.details["angle"] = RecordedDetail()

    @property
    def level(self):
        return self.get_detail("level")

    @level.setter
    def level(self, value):
        self.set_detail("level", value)

    @property
    def angle(self):
        return self.get_detail("angle")

    @angle.setter
    def angle(self, value):
        self.set_detail("angle", value)
This gets terribly repetitious, and I can't help thinking there must be a way of automating it using a descriptor, but I can't work out how. I end up with
class RecordedProperty(object):
    def __init__(self, p_name):
        self.p_name = p_name

    def __get__(self, instance, owner):
        if instance is None:
            return self
        return instance.get_detail(self.p_name)

    def __set__(self, instance, value):
        instance.set_detail(self.p_name, value)

class Widget(ModelObject):
    level = RecordedProperty("level")
    angle = RecordedProperty("angle")

    def __init__(self):
        super().__init__()
        self.details["level"] = RecordedDetail()
        self.details["angle"] = RecordedDetail()
which is a bit of an improvement, but still a lot of typing.
So, a few questions.
Can I just add the descriptor stuff (__get__, __set__ etc) into the RecordedDetail class? Would there be any advantage to doing that?
Is there any way of typing the new property name (such as "level") fewer than three times, in two different places?
or
Am I barking up the wrong tree entirely?
The last bit of code is on the right track. You can make the process less nasty by using a metaclass to create a named RecordedProperty and a matching RecordedDetail for every item in a list. Here's a simple example:
class WidgetMeta(type):
    def __new__(cls, name, parents, kwargs):
        '''
        Automate the creation of the class
        '''
        for item in kwargs['_ATTRIBS']:
            kwargs[item] = RecordedProperty(item)
        return super(WidgetMeta, cls).__new__(cls, name, parents, kwargs)

class Widget(ModelObject):
    _ATTRIBS = ['level', 'angle']
    __metaclass__ = WidgetMeta

    def __init__(self, *args, **kwargs):
        super().__init__()
        for detail in self._ATTRIBS:
            self.details[detail] = RecordedDetail()
Subclasses would then just need to have different data in _ATTRIBS.
As an alternative (I think it's more complex) you could use the metaclass to customize __init__ in the same way you customize __new__, creating the RecordedDetails out of the _ATTRIBS list.
A third option would be to create a RecordedDetail in every instance on first access. That would work fine as long as you don't have code that expects a RecordedDetail for every property even if the RecordedDetail has not been touched.
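A sketch of that third option: let the descriptor create the backing RecordedDetail on first access, so subclasses only need the class-level declarations (this assumes the details dict from ModelObject):

class RecordedProperty(object):
    def __init__(self, p_name):
        self.p_name = p_name

    def __get__(self, instance, owner):
        if instance is None:
            return self
        # create the RecordedDetail lazily the first time it is touched
        instance.details.setdefault(self.p_name, RecordedDetail())
        return instance.get_detail(self.p_name)

    def __set__(self, instance, value):
        instance.details.setdefault(self.p_name, RecordedDetail())
        instance.set_detail(self.p_name, value)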
Caveat: I'm not super familiar with Python 3; I've used the above pattern often in 2.7.x.
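For reference, a Python 3 sketch of the same idea; the __metaclass__ attribute is ignored in Python 3, so the metaclass goes in the class statement instead:

class WidgetMeta(type):
    def __new__(cls, name, parents, attrs):
        for item in attrs.get('_ATTRIBS', ()):
            attrs[item] = RecordedProperty(item)
        return super().__new__(cls, name, parents, attrs)

class Widget(ModelObject, metaclass=WidgetMeta):
    _ATTRIBS = ['level', 'angle']

    def __init__(self, *args, **kwargs):
        super().__init__()
        for detail in self._ATTRIBS:
            self.details[detail] = RecordedDetail()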
