App Engine's UnindexedProperty contains strange code - python

Please help me understand this:
In SDK v1.6.6 it's at line 2744 of google/appengine/ext/db/__init__.py:
class UnindexedProperty(Property):
  """A property that isn't indexed by either built-in or composite indices.

  TextProperty and BlobProperty derive from this class.
  """

  def __init__(self, *args, **kwds):
    """Construct property. See the Property class for details.

    Raises:
      ConfigurationError if indexed=True.
    """
    self._require_parameter(kwds, 'indexed', False)
    kwds['indexed'] = True
    super(UnindexedProperty, self).__init__(*args, **kwds)
  ...
After they constrain the indexed parameter to be False, they set it to True!

Before 1.2.2, you could do filter queries on any property type, even Text and Blob. They only returned empty lists, but they worked. Version 1.2.2 introduced the indexed attribute for properties, which allows you to disable indexing of selected properties [1]. Since then, the property you want to query on must be indexed or the query will throw an exception.
We know that Text and Blob properties cannot be indexed. Without changing anything else, queries on those properties would have started raising exceptions from 1.2.2 on (which they didn't before). To avoid introducing a regression and breaking existing apps, the line kwds['indexed'] = True was added to the UnindexedProperty class.
If we had control over all the dependent code, it would have been cleaner to start raising an exception. But in the interest of not breaking existing apps, it was decided to patch it.
You can try it yourself by changing kwds['indexed'] = True to kwds['indexed'] = False and run this snippet:
from google.appengine.ext import db

class TestModel(db.Model):
    text = db.TextProperty()

TestModel(text='foo').put()
print TestModel.all().filter('text =', 'foo').fetch(10)
[1] http://code.google.com/p/googleappengine/source/browse/trunk/python/RELEASE_NOTES#1165


"Updater" design pattern, as opposed to "Builder"

This is actually language-agnostic, but I always prefer Python.
The builder design pattern is used to validate that a configuration is valid prior to creating an object, via delegation of the creation process.
Some code to clarify:
class A:
    def __init__(self, m1, m2):  # obviously more complex in real life
        self._m1 = m1
        self._m2 = m2


class ABuilder:
    def __init__(self):
        self._m1 = None
        self._m2 = None

    def set_m1(self, m1):
        self._m1 = m1
        return self

    def set_m2(self, m2):
        self._m2 = m2
        return self

    def _validate(self):
        # complicated validations
        assert self._m1 < 1000
        assert self._m1 < self._m2

    def build(self):
        self._validate()
        return A(self._m1, self._m2)
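For illustration, constructing an A through the builder then looks like this (values hypothetical):

a = ABuilder().set_m1(10).set_m2(20).build()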
My problem is similar, with the extra constraint that I can't re-create the object each time due to performance limitations.
Instead, I want to only update an existing object.
Bad solutions I came up with:
I could do as suggested here and just use setters, like so:

class A:
    ...
    def set_m1(self, m1):
        self._m1 = m1
    # and so on
But this is bad because using setters:

Defeats the purpose of encapsulation.
Defeats the purpose of the builder (now updater), which is supposed to validate that some complex configuration is preserved after creation, or in this case after an update.
As I mentioned earlier, I can't recreate the object every time, as this is expensive and I only want to update some fields, or sub-fields, and still validate or sub-validate.
I could add update and validation methods to A and call those, but this defeats the purpose of delegating the responsibility of updates, and is intractable in the number of fields.
class A:
    ...
    def update1(self, m1):
        pass  # complex_logic1

    def update2(self, m2):
        pass  # complex_logic2

    def update12(self, m1, m2):
        pass  # complex_logic12
I could just force updating every single field of A in one method with optional parameters:

class A:
    ...
    def update(self, ...):  # optional parameters for every field of A
        pass
Which again is not tractable, as this method will soon become a god method due to the many possible combinations.
Forcing the method to always accept changes in A, and validating in the Updater, also can't work, as the Updater would need to look at A's internal state to make a decision, causing a circular dependency.
How can I delegate updating fields in my object A in a way that:
Doesn't break encapsulation of A
Actually delegates the responsibility of updating to another object
Is tractable as A becomes more complicated
I feel like I am missing something trivial to extend building to updating.
I am not sure I understand all of your concerns, but I want to try and answer your post. From what you have written I assume:
Validation is complex and multiple properties of an object must be checked to decide if any change to the object is valid.
The object must always be in a valid state. Changes that make the object invalid are not permitted.
It is too expensive to copy the object, make the change, validate the object, and then reject the change if the validation fails.
First, move the validation logic out of the builder and into a separate class, like a ModelValidator with a validateModel(model) method.
The first option is to use a command pattern.
Create an abstract class or interface named Update (I don't think Python has formal interfaces, but that's fine). The Update interface declares two methods, execute() and undo().
Concrete classes have names like UpdateAddress, UpdatePortfolio, or UpdatePaymentInfo.
Each concrete Update object also holds a reference to your model object.
The concrete classes hold the state needed for a particular kind of update. Imagine these methods exist on the UpdateAddress class:
UpdateAddress
    setStreetNumber(...)
    setCity(...)
    setPostcode(...)
    setCountry(...)
The update object needs to hold both the current and the new value of a property, like:

setStreetNumber(aString):
    self.oldStreetNumber = model.getStreetNumber()
    self.newStreetNumber = aString
When the execute method is called, the model is updated:

execute():
    model.setStreetNumber(newStreetNumber)
    model.setCity(newCity)
    # Set postcode and country
    if not ModelValidator.isValid(model):
        self.undo()
        raise ValidationError
and the undo method looks like:

undo():
    model.setStreetNumber(oldStreetNumber)
    model.setCity(oldCity)
    # Restore postcode and country
That is a lot of typing, but it would work. Mutating your model object is nicely encapsulated by different kinds of updates. You can execute or undo the changes by calling those methods on the update object. You can even store a list of update objects for multi-level undos and re-tries.
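A minimal runnable sketch of this command pattern in Python; the model, field names, and validator here are hypothetical stand-ins, not from the question:

class ValidationError(Exception):
    pass


class Address:
    def __init__(self, street_number, city):
        self.street_number = street_number
        self.city = city


def is_valid(address):
    # hypothetical validation: both fields must be non-empty
    return bool(address.street_number) and bool(address.city)


class UpdateAddress:
    def __init__(self, model):
        self.model = model
        # capture the current values up front so the change can be undone
        self.old_street_number = model.street_number
        self.old_city = model.city
        self.new_street_number = model.street_number
        self.new_city = model.city

    def set_street_number(self, value):
        self.new_street_number = value
        return self

    def set_city(self, value):
        self.new_city = value
        return self

    def execute(self):
        self.model.street_number = self.new_street_number
        self.model.city = self.new_city
        if not is_valid(self.model):
            self.undo()
            raise ValidationError('update left the model invalid')

    def undo(self):
        self.model.street_number = self.old_street_number
        self.model.city = self.old_city


address = Address('12', 'London')
update = UpdateAddress(address).set_city('Paris')
update.execute()  # address.city is now 'Paris'
update.undo()     # rolled back to 'London'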
However, it is a lot of typing for the programmer. Consider using persistent data structures instead. Persistent data structures can be copied very quickly, in approximately constant time. One such Python library is pyrsistent.
Let's assume your data was in a persistent data structure version of a dict. The library I referenced calls it a PMap.
The implementation of the update classes can be simpler. Starting with the constructor:
UpdateAddress(pmap):
    self.oldPmap = pmap
    self.newPmap = pmap
The setters are easier:
setStreetNumber(aString):
    self.newPmap = self.newPmap.set('streetNumber', aString)
Execute passes back a new instance of the model, with all the updates applied.
execute():
    if ModelValidator.isValid(self.newPmap):
        return self.newPmap
    else:
        raise ValidationError
The original object has not changed at all, thanks to the magic of persistent data structures.
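The same sketch against pyrsistent, again with hypothetical names; every set() returns a fresh PMap, so the original is never mutated:

from pyrsistent import pmap


class ValidationError(Exception):
    pass


def is_valid(model):
    # hypothetical check: the street number must be non-empty
    return bool(model.get('streetNumber'))


class UpdateAddress:
    def __init__(self, model):
        self.old_pmap = model
        self.new_pmap = model

    def set_street_number(self, value):
        # set() returns a new PMap; self.old_pmap is untouched
        self.new_pmap = self.new_pmap.set('streetNumber', value)
        return self

    def execute(self):
        if not is_valid(self.new_pmap):
            raise ValidationError('invalid update')
        return self.new_pmap


address = pmap({'streetNumber': '12', 'city': 'London'})
updated = UpdateAddress(address).set_street_number('42').execute()
assert address['streetNumber'] == '12'  # the original is unchanged
assert updated['streetNumber'] == '42'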
The best thing is to not do any of this. Instead, use an ORM or object database. That is the "enterprise grade" solution. These libraries give you sophisticated tools like transactions and object version history.

Idiomatic python - property or method?

I have a django model class that maintains state as a simple property. I have added a couple of helper properties to the class to access aggregate states - e.g. is_live returns false if the state is any one of ['closed', 'expired', 'deleted'] etc.
As a result of this my model has a collection of is_ properties, that do very simple lookups on internal properties of the object.
I now want to add a new property, is_complete, which is semantically the same as all the other properties: a boolean check on the state of the object. However, this check involves loading up dependent (one-to-many) child objects, checking their state, and reporting back based on the results. In other words, this property actually runs more than one database query and processes the results.
So, is it still valid to model as a property (using the #property decorator), or should I instead forego the decorator and leave it as a method?
Pro of using a property is that it's semantically consistent with all the other is_ properties.
Pro of using a method is that it indicates to other developers that this is something that has a more complex implementation, and so should be used sparingly (i.e. not inside a for loop).
from django.db import models

class MyModel(models.Model):
    state = models.CharField(max_length=20, default='new')

    @property
    def is_open(self):
        # this is a simple lookup, so makes sense as a property
        return self.state in ['new', 'open', 'sent']

    def is_complete(self):
        # this is a complex database activity, but semantically correct
        related_objects = self.do_complicated_database_lookup()
        return len(related_objects) == 0
EDIT: I come from a .NET background originally, where the split is admirably defined by Jeff Atwood as
"if there's any chance at all that code could spawn an hourglass, it definitely should be a method."
EDIT 2: slight update to the question - would it be a problem to have it as a method, called is_complete, so that there are mixed properties and methods with similar names - or is that just confusing?
So - it would look something like this:
>>> m = MyModel()
>>> m.is_live
True
>>> m.is_complete()
False
It is okay to do that, especially if you use the following pattern:

class SomeClass(models.Model):

    @property
    def is_complete(self):
        if not hasattr(self, '_is_complete'):
            related_objects = self.do_complicated_database_lookup()
            self._is_complete = len(related_objects) == 0
        return self._is_complete
Just remember that this "caches" the result: the first access does the calculation, but subsequent accesses return the stored value.
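In use (assuming do_complicated_database_lookup exists on the model), that looks like:

obj = SomeClass()
obj.is_complete  # first access: runs the database lookup
obj.is_complete  # later accesses: return the stored result, no queries

On Python 3.8+ the same get-or-compute-and-store pattern is available out of the box as functools.cached_property.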

db.StringProperty but property is occasionally datastore_types.Text?

Are there any circumstances under which GAE datastore might change a property type from StringProperty to TextProperty (effectively ignoring the model definition)?
Consider the following situation (simplified):
class MyModel(db.Model):
    user_input = db.StringProperty(default='', multiline=True)
Some entity instances of this model in my datastore have a datatype of TextProperty for 'user_input' (rather than a simple str) and are therefore not indexed. I can fix this by retrieving the entity, setting model_instance.user_input = str(model_instance.user_input), and then putting the entity back into the datastore.
What I don't understand is how this is happening to only some entities, when there have been no changes to this model. Is it possible that the 'type' of a db.Model property can be overridden from StringProperty to TextProperty?
At least in ndb there is a special code path for StringProperty values containing unicode when the property is not indexed. In this case the value is stored as Text:
The relevant snippet is:
if isinstance(value, str):
    v.set_stringvalue(value)
elif isinstance(value, unicode):
    v.set_stringvalue(value.encode('utf8'))
    if not self._indexed:
        p.set_meaning(entity_pb.Property.TEXT)
I'm unable to observe this on the Python side (see this question, "db.StringProperty but property is occasionally datastore_types.Text?"), but I see it in the datastore, and it caused headaches when loading the datastore into BigQuery.
So use only:

StringProperty(indexed=True)
TextProperty()

Avoid StringProperty(indexed=False) if you might write mixed unicode and str values to the property, as tends to happen with external data.
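In model terms, that advice amounts to definitions like these (a hypothetical ndb model, for illustration):

from google.appengine.ext import ndb

class Article(ndb.Model):
    title = ndb.StringProperty(indexed=True)  # short strings, always indexed
    body = ndb.TextProperty()                 # long/unicode text, never indexed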
There's a 500-character length limit on StringProperty. I don't know for sure, but maybe the datastore converts a StringProperty to a TextProperty if it goes over that limit.
Having said that, I doubt GAE would just change an indexed property to a non-indexed one implicitly. But it's all I can think of.
With your use of setattr (setattr(some_instance, 'string_type', some_instance.text_type)) you have actually made the instance attribute named string_type point to the same value as the text_type property. So: two names, one value.
The db, ndb, users, urlfetch, and memcache modules are imported.
> from google.appengine.ext import db
> class TestModel(db.Model):
...     string_type = db.StringProperty(default='', multiline=True)
...     text_type = db.TextProperty()
...
>
> some_instance = TestModel()
> some_instance.text_type = 'foobar'
> setattr(some_instance, 'string_type', some_instance.text_type)
> print type(some_instance.string_type)
<class 'google.appengine.api.datastore_types.Text'>
> repr(some_instance.string_type)
"u'foobar'"
> id(some_instance.string_type)
168217452
> id(some_instance.text_type)
168217452
> some_instance.string_type.__class__
<class 'google.appengine.api.datastore_types.Text'>
> some_instance.text_type.__class__
<class 'google.appengine.api.datastore_types.Text'>
> dir(some_instance.text_type.__class__)
Note the ids of the two properties above: they are identical.
So you are effectively rebinding the instance-level property, and this modified model is then written to the datastore with the changed field type.
If you want to use setattr (though I can't see why in this contrived example), you should first get the value from the property, using setattr(some_instance, 'string_type', TestModel.string_type.get_value_for_datastore(some_instance)), so that you assign the value rather than rebinding the instance's property.
Turns out there IS a way the property type from the model's definition can be bypassed (and this was our case)! The following self-contained example runs in the interactive console and demonstrates the bug. Essentially, setattr bypasses the property's type if you assign an existing property of a different type:
from google.appengine.ext import db

class TestModel(db.Model):
    string_type = db.StringProperty(default='', multiline=True)
    text_type = db.TextProperty()

some_instance = TestModel()
some_instance.text_type = 'foobar'
setattr(some_instance, 'string_type', some_instance.text_type)
some_instance.put()

retrieved_instance = db.get(some_instance.key())
print id(retrieved_instance.string_type)    # 10166414447674124776
print id(retrieved_instance.text_type)      # 10166414447721032360
print type(retrieved_instance.string_type)  # OOPS: string_type is now Text type!

Filter by an object in SQLAlchemy

I have a declared model where the table stores a "raw" path identifier of an object. I then have a @hybrid_property which allows directly getting and setting the object identified by this field (which is not another declarative model). Is there a way to query directly at this higher level?
I can do this:
session.query(Member).filter_by(program_raw=my_program.raw)
I want to be able to do this:
session.query(Member).filter_by(program=my_program)
where my_program.raw == "path/to/a/program"
Member has a field program_raw and a property program which gets the correct Program instance and sets the appropriate program_raw value. Program has a simple raw field which identifies it uniquely. I can provide more code if necessary.
The problem is that currently, SQLAlchemy simply tries to pass the Program instance as a parameter to the query, instead of its raw value. This results in an "Error binding parameter 0 - probably unsupported type" error.
Either SQLAlchemy needs to know that when comparing the program it must use Member.program_raw and match it against the raw property of the parameter (getting it to use Member.program_raw is done simply using @program.expression, but I can't figure out how to translate the Program parameter correctly; using a Comparator?), and/or
SQLAlchemy should know that when I filter by a Program instance, it should use the raw attribute.
My use-case is perhaps a bit abstract, but imagine I stored a serialized RGB value in the database and had a property with a Color class on the model. I want to filter by the Color class, and not have to deal with RGB values in my filters. The color class has no problems telling me its RGB value.
Figured it out by reading the source for relationship. The trick is to use a custom Comparator for the property, which knows how to compare two things. In my case it's as simple as:
from sqlalchemy import Column, String
from sqlalchemy.ext.hybrid import Comparator, hybrid_property

class ProgramComparator(Comparator):
    def __eq__(self, other):
        # Should check for the case of `other is None`
        return self.__clause_element__() == other.raw

class Member(Base):
    # ...
    program_raw = Column(String(80), index=True)

    @hybrid_property
    def program(self):
        return Program(self.program_raw)

    @program.comparator
    def program(cls):
        # program_raw becomes __clause_element__ in the Comparator.
        return ProgramComparator(cls.program_raw)

    @program.setter
    def program(self, value):
        self.program_raw = value.raw
Note: In my case, Program('abc') == Program('abc') (I've overridden __new__), so I can just return a "new" Program all the time. For other cases, the instance should probably be lazily created and stored in the Member instance.
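With the comparator in place, queries along these lines should work (the session and Program value here are hypothetical):

my_program = Program('path/to/a/program')

# filter_by generates Member.program == my_program, which is routed
# through ProgramComparator.__eq__ and compiled against program_raw
members = session.query(Member).filter_by(program=my_program).all()

# the explicit form works the same way
members = session.query(Member).filter(Member.program == my_program).all()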

Python - caching a property to avoid future calculations

In the following example, cached_attr is used to get or set an attribute on a model instance when a database-expensive property (related_spam in the example) is called. In the example, I use cached_spam to save queries. I put print statements when setting and when getting values so that I could test it out.

I tested it in a view by passing an Egg instance into the view and using {{ egg.cached_spam }}, as well as other methods on the Egg model that make calls to cached_spam themselves. When I finished and tested it out, the shell output in Django's development server showed that the attribute cache was missed several times, as well as successfully hit several times. It seems to be inconsistent. With the same data, when I made small changes (as little as changing the print statement's string) and refreshed (with all the same data), different numbers of misses / hits happened. How and why is this happening? Is this code incorrect or highly problematic?
class Egg(models.Model):
    # ... fields

    @property
    def related_spam(self):
        # Each time this property is called the database is queried (expected).
        return Spam.objects.filter(egg=self).all()  # Spam has a foreign key to Egg.

    @property
    def cached_spam(self):
        # This should call self.related_spam the first time, and then return
        # cached results every time after that.
        return self.cached_attr('related_spam')

    def cached_attr(self, attr):
        """This method (normally attached via an abstract base class, but put
        directly on the model for this example) attempts to return a cached
        version of a requested attribute, and calls the actual attribute when
        the cached version isn't available."""
        try:
            value = getattr(self, '_p_cache_{0}'.format(attr))
            print('GETTING - {0}'.format(value))
        except AttributeError:
            value = getattr(self, attr)
            print('SETTING - {0}'.format(value))
            setattr(self, '_p_cache_{0}'.format(attr), value)
        return value
Nothing wrong with your code, as far as it goes. The problem probably isn't there, but in how you use that code.
The main thing to realise is that model instances don't have identity. That means that if you instantiate an Egg object somewhere, and a different one somewhere else, even if they refer to the same underlying database row they won't share internal state. So calling cached_attr on one won't cause the cache to be populated in the other.
For example, assuming you have a RelatedObject class with a ForeignKey to Egg:
my_first_egg = Egg.objects.get(pk=1)
my_related_object = RelatedObject.objects.get(egg__pk=1)
my_second_egg = my_related_object.egg
Here my_first_egg and my_second_egg both refer to the database row with pk 1, but they are not the same object:
>>> my_first_egg.pk == my_second_egg.pk
True
>>> my_first_egg is my_second_egg
False
So, filling the cache on my_first_egg doesn't fill it on my_second_egg.
And, of course, objects won't persist across requests (unless they're specifically made global, which is horrible), so the cache won't persist either.
HTTP servers that scale are shared-nothing; you can't rely on anything being a singleton. To share state, you need to connect to a special-purpose service.
Django's caching support is appropriate for your use case. It isn't necessarily a global singleton either; if you use locmem://, it will be process-local, which could be the more efficient choice.
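A minimal sketch of that with Django's cache framework, using the question's models (the key format and timeout are arbitrary choices):

from django.core.cache import cache

def get_cached_spam(egg):
    key = 'egg:{0}:related_spam'.format(egg.pk)
    value = cache.get(key)
    if value is None:
        # evaluate the queryset to a list so the results themselves are cached
        value = list(Spam.objects.filter(egg=egg))
        cache.set(key, value, timeout=300)  # keep for five minutes
    return value

Because the cache is keyed by primary key rather than attached to one instance, every Egg object referring to the same row, in the same process or another, sees the same cached result.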
