Adding a value to an Object retrieved via a QuerySet - python

In Django I often retrieve objects via QuerySets and put them in a dict to pass to a template.
ObjectInstance = Object.objects.get(pk=pk)
ObjectsInstanceDict = {'value1': ObjectInstance.value1,
                       'value2': ObjectInstance.value2,
                       'specialvalue': SomeBusinessLogic(Data_to_aggregate)}
Sometimes specialvalue is just a timestamp converted to a string; other times some analysis is done with the data.
Instead of creating a dict, I would like to add the special value to the ObjectInstance directly, so I don't have to repeat all the existing values and can just add new computed values.
How would I do this?
And please tell me if there is a fundamental mistake I am working around here.

Django model instances are normal Python objects. Like almost any other Python object, you can freely add attributes to them at any time.
object_instance = Object.objects.get(pk=pk)
object_instance.special_value = SomeBusinessLogic(data_to_aggregate)
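For example, a minimal sketch of a view that does this (the view name and template name are made up for illustration; SomeBusinessLogic and data_to_aggregate come from the question):

from django.shortcuts import render

def object_detail(request, pk):  # hypothetical view, for illustration
    object_instance = Object.objects.get(pk=pk)
    # Attach the computed value as a plain attribute on the instance.
    object_instance.special_value = SomeBusinessLogic(data_to_aggregate)
    # Pass the instance itself instead of a hand-built dict.
    return render(request, 'object_detail.html', {'object': object_instance})

In the template, {{ object.value1 }} and {{ object.special_value }} are then accessed the same way. Note that such an attribute lives only on that Python instance; it is not saved to the database unless it corresponds to a model field.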

Related

Is there a way to map data from form to insert in to database without explicitly defining each variable?

I have made a really long form with the help of ColanderAlchemy and Deform.
This form has 100 or so fields, and currently the only way I know to add the data back to the database once the form is submitted is to explicitly re-define each variable and then add that to the database, but there must be a better way.
# my schema
class All(colander.MappingSchema):
    setup_schema(None, atr)
    atrschema = atr.__colanderalchemy__
    setup_schema(None, chemicals)
    chemicalsschema = chemicals.__colanderalchemy__
    setup_schema(None, data_aquisition)
    data_aquisitionschema = data_aquisition.__colanderalchemy__
    setup_schema(None, depositor)
    depositorschema = depositor.__colanderalchemy__
    setup_schema(None, dried_film)
    dried_filmschema = dried_film.__colanderalchemy__

form = All()
form = deform.Form(form, buttons=('submit',))
# this is how I get it to work by redefining each field, but there must be a better way
if 'submit' in request.POST:
    prism_material = request.params['prism_material']
    angle_of_incidence_degrees = request.params['angle_of_incidence_degrees']
    number_of_reflections = request.params['number_of_reflections']
    prism_size_mm = request.params['prism_size_mm']
    spectrometer_ID = 6
    page = atr(spectrometer_ID=spectrometer_ID,
               prism_size_mm=prism_size_mm,
               number_of_reflections=number_of_reflections,
               angle_of_incidence_degrees=angle_of_incidence_degrees,
               prism_material=prism_material)
    request.dbsession.add(page)
I would like to somehow just be able to remap all of that 'multi dictionary' that is returned back to the database.
So, you have a dict (request.params) and want to pass the key-value pairs from that dict to a function? Python has a way to do that using the **kwargs syntax:
if 'submit' in request.POST:
    page = Page(spectrometer_ID=6, **request.params)
    request.dbsession.add(page)
(This also works because SQLAlchemy provides a default constructor which assigns the passed values to the mapped columns, so there's no need to define one manually.)
Of course, this is a naive approach which will only work for the simplest use cases. For example, it may allow passing parameters not defined in your schema, which may create a security problem; the field names in your schema must match the field names in your SQLAlchemy model; and it may not work with lists (i.e. multiple values with the same name, which you can access via request.params.getall(name)).
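One way to make it a little less naive (a sketch only, assuming the atr model and Pyramid-style request from the question; params_for_model is a hypothetical helper) is to whitelist the incoming keys against the model's mapped columns before unpacking:

from sqlalchemy import inspect

def params_for_model(model, params):
    """Keep only the keys that correspond to mapped columns of `model`."""
    allowed = {attr.key for attr in inspect(model).column_attrs}
    return {k: v for k, v in params.items() if k in allowed}

if 'submit' in request.POST:
    page = atr(spectrometer_ID=6, **params_for_model(atr, request.params))
    request.dbsession.add(page)

This still doesn't validate values or handle multi-valued fields, but it at least keeps unexpected keyword arguments away from the constructor.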

Building Django Q() objects from other Q() objects, but with relation crossing context

I commonly find myself writing the same criteria in my Django application(s) more than once. I'll usually encapsulate it in a function that returns a Django Q() object, so that I can maintain the criteria in just one place.
I will do something like this in my code:
def CurrentAgentAgreementCriteria(useraccountid):
    '''Returns Q that finds agent agreements that give the useraccountid account current delegated permissions.'''
    AgentAccountMatch = Q(agent__account__id=useraccountid)
    StartBeforeNow = Q(start__lte=timezone.now())
    EndAfterNow = Q(end__gte=timezone.now())
    NoEnd = Q(end=None)
    # Now put the criteria together
    AgentAgreementCriteria = AgentAccountMatch & StartBeforeNow & (NoEnd | EndAfterNow)
    return AgentAgreementCriteria
This makes it so that I don't have to think through the DB model more than once, and I can combine the return values from these functions to build more complex criteria. That works well so far, and has already saved me time when the DB model changes.
Something I have realized as I start to combine the criteria from these functions is that a Q() object is inherently tied to the type of object .filter() is being called on. That is what I would expect.
I occasionally find myself wanting to use a Q() object from one of my functions to construct another Q object that is designed to filter a different, but related, model's instances.
Let's use a simple/contrived example to show what I mean. (It's simple enough that normally this would not be worth the overhead, but remember that I'm using a simple example here to illustrate what is more complicated in my app.)
Say I have a function that returns a Q() object that finds all Django users whose username starts with an 'a':
def UsernameStartsWithAaccount():
    return Q(username__startswith='a')
Say that I have a related model that is a user profile with settings including whether they want emails from us:
class UserProfile(models.Model):
    account = models.OneToOneField(User, unique=True, related_name='azendalesappprofile')
    emailMe = models.BooleanField(default=False)
Say I want to find all UserProfiles which have a username starting with 'a' AND want us to send them an email newsletter. I can easily write a Q() object for the latter:
wantsEmails = Q(emailMe=True)
but find myself wanting to do something like this for the former:
startsWithA = Q(account=UsernameStartsWithAaccount())
# And then
UserProfile.objects.filter(startsWithA & wantsEmails)
Unfortunately, that doesn't work (it generates invalid PSQL syntax when I tried it).
To put it another way, I'm looking for a syntax along the lines of Q(account=Q(id=9)) that would return the same results as Q(account__id=9).
So, a few questions arise from this:
Is there a syntax with Django Q() objects that allows you to add "context" to them to allow them to cross relational boundaries from the model you are running .filter() on?
If not, is this logically possible? (Since I can write Q(account__id=9) when I want to do something like Q(account=Q(id=9)) it seems like it would).
Maybe someone will suggest something better, but I ended up passing the context manually to such functions. I don't think there is an easy solution, as you might need to traverse a whole chain of related tables to get to your field, like table1__table2__table3__profile__user__username; how would you guess that? The User table could be linked to table2 too, but you don't need it in this case, so I think you can't avoid setting the path manually.
Also, you can pass a dictionary to Q() and a list or a dictionary to the filter() function, which is much easier to work with than using keyword parameters and applying &.
def UsernameStartsWithAaccount(context=''):
    field = 'username__startswith'
    if context:
        field = context + '__' + field
    return Q(**{field: 'a'})
Then if you simply need to AND your conditions you can combine them into a list and pass to filter:
UserProfile.objects.filter(*[startsWithA, wantsEmails])
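Continuing from the function above, a short usage sketch (assuming the User and UserProfile models from the question, where the relation from UserProfile to User is named account):

wantsEmails = Q(emailMe=True)

# No context: filter User directly.
User.objects.filter(UsernameStartsWithAaccount())

# With context: cross the relation from UserProfile to User.
startsWithA = UsernameStartsWithAaccount(context='account')
UserProfile.objects.filter(*[startsWithA, wantsEmails])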

What is the correct way to use refresh_from_db in Django?

I'm using Django 1.8, Mezzanine, Cartridge, and PostgreSQL as the database.
I've updated num_in_stock directly in the database. The quantities are all correct in the database but not on my website. I know the solution is here, but I don't know what to do with that. I really need it spelled out for me.
How exactly would you use this in Cartridge to refresh the num_in_stock?
This should be all you need to do to update one object. Replace object_name with your object.
object_name.refresh_from_db()
I assume you're using an F expression.
According to the documentation an F expression:
...makes it possible to refer to model field values and perform
database operations using them without actually having to pull them
out of the database into Python memory.
You're working directly in the database. Python knows nothing about the values of the model fields. There's nothing in memory; everything is happening in the database.
The documentation's example:
from django.db.models import F
reporter = Reporters.objects.get(name='Tintin')
reporter.stories_filed = F('stories_filed') + 1
reporter.save()
Although reporter.stories_filed = F('stories_filed') + 1 looks like a
normal Python assignment of value to an instance attribute, in fact
it’s an SQL construct describing an operation on the database.
So, for Python to know about this value you need to reload the object.
To access the new value saved this way, the object must be reloaded:
reporter = Reporters.objects.get(pk=reporter.pk)
# Or, more succinctly:
reporter.refresh_from_db()
In your example:
object_name.refresh_from_db()
And one more thing...
F() assignments persist after Model.save()
F() objects assigned to
model fields persist after saving the model instance and will be
applied on each save().
reporter = Reporters.objects.get(name='Tintin')
reporter.stories_filed = F('stories_filed') + 1
reporter.save()
reporter.name = 'Tintin Jr.'
reporter.save()
stories_filed will be updated twice in this case. If it’s initially
1, the final value will be 3. This persistence can be avoided by
reloading the model object after saving it, for example, by using
refresh_from_db().
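Continuing the documentation's Reporter example, a minimal sketch of that reload:

from django.db.models import F

reporter = Reporters.objects.get(name='Tintin')
reporter.stories_filed = F('stories_filed') + 1
reporter.save()

# Reload so stories_filed becomes a plain value again instead of an F() expression.
reporter.refresh_from_db()

reporter.name = 'Tintin Jr.'
reporter.save()  # stories_filed is not incremented a second time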
I assume num_in_stock is an attribute of your model class. If so, you should get an instance of the class (i.e. object_name) and then
object_name.refresh_from_db()
After that, you can access it as object_name.num_in_stock.
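As a sketch only, assuming the value you changed is the one on cartridge.shop.models.ProductVariation and that you can look the variation up by its SKU (the SKU value here is hypothetical):

from cartridge.shop.models import ProductVariation

# Hypothetical lookup; use whatever identifies the variation you edited.
variation = ProductVariation.objects.get(sku='ABC-123')
variation.refresh_from_db()      # re-reads every field from the database
print(variation.num_in_stock)    # now reflects the value set directly in the DB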

What is the difference between a mongoengine.DynamicEmbeddedDocument vs mongoengine.DictField?

A mongoengine.DynamicEmbeddedDocument can be used to leverage MongoDB's flexible schema-less design. It's expandable and doesn't apply type constraints to the fields, afaik.
A mongoengine.DictField similarly allows for use of MongoDB's schema-less nature. In the documentation they simply say (w.r.t. the DictField)
This is similar to an embedded document, but the structure is not defined.
Does that mean, then, the mongoengine.fields.DictField and the mongoengine.DynamicEmbeddedDocument are completely interchangeable?
EDIT (for more information):
mongoengine.DynamicEmbeddedDocument inherits from mongoengine.EmbeddedDocument which, from the code is:
A mongoengine.Document that isn't stored in its own collection. mongoengine.EmbeddedDocuments should be used as fields on mongoengine.Documents through the mongoengine.EmbeddedDocumentField field type.
A mongoengine.fields.EmbeddedDocumentField is
An embedded document field - with a declared document_type. Only valid values are subclasses of EmbeddedDocument.
Does this mean the only thing that makes the DictField and DynamicEmbeddedDocument not totally interchangeable is that the DynamicEmbeddedDocument has to be defined through the EmbeddedDocumentField field type?
From what I’ve seen, the two are similar, but not entirely interchangeable. Each approach may have a slight advantage based on your needs. First of all, as you point out, the two approaches require differing definitions in the document, as shown below.
class ExampleDynamicEmbeddedDoc(DynamicEmbeddedDocument):
    pass

class ExampleDoc(Document):
    dict_approach = DictField()
    dynamic_doc_approach = EmbeddedDocumentField(ExampleDynamicEmbeddedDoc, default=ExampleDynamicEmbeddedDoc())
Note: The default is not required, but the dynamic_doc_approach field will need to be set to an ExampleDynamicEmbeddedDoc object in order to save (i.e. trying to save after setting example_doc_instance.dynamic_doc_approach = {} would throw an exception). Also, you could use the GenericEmbeddedDocumentField if you don't want to tie the field to a specific type of EmbeddedDocument, but the field would still need to point to an object subclassed from EmbeddedDocument in order to save.
Once set up, the two are functionally similar in that you can save data to them as needed and without restrictions:
e = ExampleDoc()
e.dict_approach["test"] = 10
e.dynamic_doc_approach.test = 10
However, the one main difference that I’ve seen is that you can query against any values added to a DictField, whereas you cannot with a DynamicEmbeddedDoc.
ExampleDoc.objects(dict_approach__test = 10) # Returns a QuerySet containing our entry.
ExampleDoc.objects(dynamic_doc_approach__test = 10) # Throws an exception.
That being said, using an EmbeddedDocument has the advantage of validating fields which you know will be present in the document. (We simply would need to add them to the ExampleDynamicEmbeddedDoc definition). Because of this, I think it is best to use a DynamicEmbeddedDocument when you have a good idea of a schema for the field and only anticipate adding fields minimally (which you will not need to query against). However, if you are not concerned about validation or anticipate adding a lot of fields which you’ll query against, go with a DictField.
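As a small illustration of that middle ground (a sketch; known_count is a made-up field name), the dynamic embedded document from the example above could declare the fields you do know about, keeping validation and queryability for those while still accepting extra attributes:

from mongoengine import DynamicEmbeddedDocument, IntField

class ExampleDynamicEmbeddedDoc(DynamicEmbeddedDocument):
    # Declared field: validated on save and queryable like any normal field.
    known_count = IntField(min_value=0)
    # Anything else assigned at runtime is still stored, schema-free.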

ZODB multiple object references

I'm developing an application that is used to fill in some huge forms. There are several projects that a form can belong to. The form also has two sections that can be filled in many times, objectives and activities, so a form can have many objectives and activities defined.
I have a class to represent the projects, another for the form, and two simple classes to represent the objectives and activities. Project has a list of forms, and Form has a list of activities and objectives.
class Project(persistent.Persistent):
    forms = PersistentList()
    ...

class Form(persistent.Persistent):
    objectives = PersistentList()
    activities = PersistentList()
    ...
My question is: I'm planning on storing this data in ZODB like this:
db['projects'] = OOBTree()
db['forms'] = OOBTree()
db['activities'] = OOBTree()
db['objectives'] = OOBTree()

project = Project(...)  # fill data with some parameters
form = Form(...)        # fill data with some parameters
objective1 = Objective(...)
objective2 = Objective(...)
activity1 = Activity(...)
activity2 = Activity(...)

form.addObjective(objective1)
form.addObjective(objective2)
form.addActivity(activity1)
form.addActivity(activity2)
project.addForm(form)

db['projects']['projectID'] = project
db['forms']['formID'] = form
db['activities']['activityID'] = activity1
db['activities']['activityID'] = activity2
db['objectives']['objectiveID'] = objective1
db['objectives']['objectiveID'] = objective2

transaction.commit()
I know that when storing the project, the list of forms gets persisted as well, and the corresponding list of objectives and activities from the form too.
But what happens in the case of the other OOBTrees, 'forms', 'activities' and 'objectives'?
I'm doing this so it is easier to traverse or look up individual forms/objectives/activities. But I'm not sure whether ZODB will persist those objects only once when saving the project and just keep references to them, so that when any of them is modified, all references see the update.
Meaning that when doing db['forms']['formID'] = form, the 'forms' OOBTree will point to the same object as the one reachable from the 'projects' OOBTree, and thus the same object will not be persisted twice.
Is that the way it works? Or will I get duplicated persisted objects that are all independent instances?
I know that there's repoze.catalog to handle indexing and such, but I don't need that much; I just want to be able to access a form without having to iterate over projects.
Thanks!
Yes, as long as the target objects you are storing have classes that subclass persistent.Persistent somewhere in their inheritance, any references to the same object will point to exactly the same (persistent) object. You should not expect the duplication you have described.
The long story, short: ZODB uses special pickling techniques. When serializing the source/referencing object, it sees that the reference is to a persistent object and, instead of storing that object again, it stores a tuple of the class's dotted name and the internal OID of the target object.
Caveat: this only works within the same object database. You should not have cross-database references in your application.
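A quick way to convince yourself (a sketch, reusing the BTrees and keys from the question; the title attribute is made up for illustration):

import transaction

# Both paths reach the very same persistent object...
form_via_project = db['projects']['projectID'].forms[0]
form_via_btree = db['forms']['formID']
assert form_via_project is form_via_btree

# ...so a change made through one path is visible through the other.
form_via_btree.title = 'Updated title'
transaction.commit()
assert db['projects']['projectID'].forms[0].title == 'Updated title'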
