What's the use of this function in Tryton?

While reading the code of Tryton modules, I have come across this method many times, but I could not figure out what it is for.
What is the use of this function in Tryton?
@classmethod
def __register__(cls, module_name):
    TableHandler = backend.get('TableHandler')
    cursor = Transaction().cursor
    table = TableHandler(cursor, cls, module_name)
    super(Address, cls).__register__(module_name)
    table.not_null_action('sequence', action='remove')

The __register__ method is called every time the module is updated, and it is used to alter the database structure of the current module. Normally, Tryton creates all the missing tables and columns for you (this is done in the ModelSQL class), but some changes cannot be detected automatically, so you must write a migration for them. That is done in the __register__ method of the model.
The code you copied ensures that the sequence column is nullable: if it carries a NOT NULL constraint, not_null_action with action='remove' drops that constraint.
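As an illustration, here is a hedged sketch of another common kind of migration, renaming a column before the automatic table update runs. The column names are hypothetical, and column_exist/column_rename are assumed from trytond's TableHandler, so check them against your Tryton version:
@classmethod
def __register__(cls, module_name):
    TableHandler = backend.get('TableHandler')
    cursor = Transaction().cursor
    table = TableHandler(cursor, cls, module_name)
    # Hypothetical migration: the field "street_name" was renamed to
    # "street", so move the old column before the automatic update
    # creates a fresh, empty "street" column.
    if table.column_exist('street_name'):
        table.column_rename('street_name', 'street')
    super(Address, cls).__register__(module_name)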


What is the correct way to use refresh_from_db in Django?

I'm using Django 1.8, Mezzanine, and Cartridge, with PostgreSQL as the database.
I've updated num_in_stock directly in the database. The quantities are all correct in the database, but not on my website. I know the solution is here, but I don't know what to do with it; I really need it spelled out for me.
How exactly would you use this in Cartridge to refresh num_in_stock?
This should be all you need to do to update one object. Replace object_name with your object.
object_name.refresh_from_db()
I assume you're using an F expression.
According to the documentation an F expression:
...makes it possible to refer to model field values and perform database operations using them without actually having to pull them out of the database into Python memory.
You're working directly in the database; Python knows nothing about the new values of the model fields. Nothing is held in memory, everything happens in the database.
The documentation's example:
from django.db.models import F
reporter = Reporters.objects.get(name='Tintin')
reporter.stories_filed = F('stories_filed') + 1
reporter.save()
Although reporter.stories_filed = F('stories_filed') + 1 looks like a normal Python assignment of value to an instance attribute, in fact it's an SQL construct describing an operation on the database.
So, for Python to know about this value you need to reload the object.
To access the new value saved this way, the object must be reloaded:
reporter = Reporters.objects.get(pk=reporter.pk)
# Or, more succinctly:
reporter.refresh_from_db()
In your example:
object_name.refresh_from_db()
And one more thing...
F() assignments persist after Model.save()
F() objects assigned to model fields persist after saving the model instance and will be applied on each save().
reporter = Reporters.objects.get(name='Tintin')
reporter.stories_filed = F('stories_filed') + 1
reporter.save()
reporter.name = 'Tintin Jr.'
reporter.save()
stories_filed will be updated twice in this case. If it's initially 1, the final value will be 3. This persistence can be avoided by reloading the model object after saving it, for example, by using refresh_from_db().
I assume num_in_stock is an attribute of your model class. If so, get an instance of the class (i.e. object_name), then:
object_name.refresh_from_db()
After that, you can access it as object_name.num_in_stock.
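Putting it together for Cartridge, a minimal sketch, assuming the stock count lives on Cartridge's ProductVariation model (the SKU below is hypothetical):
from cartridge.shop.models import ProductVariation

variation = ProductVariation.objects.get(sku="ABC123")  # hypothetical SKU
# ... num_in_stock is changed directly in the database here ...
variation.refresh_from_db()
print(variation.num_in_stock)  # now reflects the value in the database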

Python peewee save() doesn't work as expected

I'm inserting/updating objects into a MySQL database using the peewee ORM for Python. I have a model like this:
class Person(Model):
    person_id = CharField(primary_key=True)
    name = CharField()
I create the objects/rows with a loop, and each time through the loop have a dictionary like:
pd = {"name":"Alice","person_id":"A123456"}
Then I try creating an object and saving it.
po = Person()
for key, value in pd.items():
    setattr(po, key, value)
po.save()
This takes a while to execute, and runs without errors, but it doesn't save anything to the database -- no records are created.
This works:
Person.create(**pd)
But it also throws an error (and terminates the script) when the primary key already exists. From reading the manual, I thought save() was the function I needed -- that peewee would perform the update or insert as required.
Not sure what I need to do here -- try getting each record first? Catch errors and try updating a record if it can't be created? I'm new to peewee, and would normally just write INSERT ... ON DUPLICATE KEY UPDATE or even REPLACE.
po.save(force_insert=True)
With a primary key that is not an auto-incrementing integer, peewee cannot tell from save() alone whether the row is new, so it defaults to an UPDATE; force_insert=True makes it issue an INSERT instead.
It's documented: http://docs.peewee-orm.com/en/latest/peewee/models.html#non-integer-primary-keys-composite-keys-and-other-tricks
I've had a chance to re-test my answer, and I think it should be replaced. Here's the pattern I can now recommend: first, use get_or_create() on the model, which will create the database row if it doesn't exist. Then, if it was not created (the object was retrieved from the database instead), set all the attributes from the data dictionary and save the object.
po, created = Person.get_or_create(person_id=pd["person_id"], defaults=pd)
if not created:
    for key in pd:
        setattr(po, key, pd[key])
    po.save()
As before, I should mention that these are two distinct transactions, so this should not be used with multi-user databases requiring a true upsert in one transaction.
I think you might try get_or_create()? http://peewee.readthedocs.org/en/latest/peewee/querying.html#get-or-create
You may do something like:
po = Person()
for key, value in pd.items():
    setattr(po, key, value)
updated = po.save()
if not updated:
    po.save(force_insert=True)
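If you are on MySQL and want the single-statement REPLACE you mentioned, newer peewee releases (3.x) expose Model.replace(); a hedged sketch, so verify the method against your peewee version:
pd = {"name": "Alice", "person_id": "A123456"}
# REPLACE INTO: inserts, or deletes-and-reinserts on a duplicate key.
Person.replace(**pd).execute()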

What is the difference between a mongoengine.DynamicEmbeddedDocument vs mongoengine.DictField?

A mongoengine.DynamicEmbeddedDocument can be used to leverage MongoDB's flexible schema-less design. It's expandable and, as far as I know, doesn't apply type constraints to the fields.
A mongoengine.DictField similarly allows for use of MongoDB's schema-less nature. In the documentation they simply say (w.r.t. the DictField)
This is similar to an embedded document, but the structure is not defined.
Does that mean, then, that mongoengine.fields.DictField and mongoengine.DynamicEmbeddedDocument are completely interchangeable?
EDIT (for more information):
mongoengine.DynamicEmbeddedDocument inherits from mongoengine.EmbeddedDocument which, from the code is:
A mongoengine.Document that isn't stored in its own collection. mongoengine.EmbeddedDocuments should be used as fields on mongoengine.Documents through the mongoengine.EmbeddedDocumentField field type.
A mongoengine.fields.EmbeddedDocumentField is
An embedded document field - with a declared document_type. Only valid values are subclasses of EmbeddedDocument.
Does this mean the only thing that makes the DictField and DynamicEmbeddedDocument not totally interchangeable is that the DynamicEmbeddedDocument has to be defined through the EmbeddedDocumentField field type?
From what I’ve seen, the two are similar, but not entirely interchangeable. Each approach may have a slight advantage based on your needs. First of all, as you point out, the two approaches require differing definitions in the document, as shown below.
class ExampleDynamicEmbeddedDoc(DynamicEmbeddedDocument):
    pass

class ExampleDoc(Document):
    dict_approach = DictField()
    dynamic_doc_approach = EmbeddedDocumentField(ExampleDynamicEmbeddedDoc,
                                                 default=ExampleDynamicEmbeddedDoc())
Note: The default is not required, but the dynamic_doc_approach field will need to be set to an ExampleDynamicEmbeddedDoc object in order to save (i.e. trying to save after setting example_doc_instance.dynamic_doc_approach = {} would throw an exception). Also, you could use the GenericEmbeddedDocumentField if you don't want to tie the field to a specific type of EmbeddedDocument, but the field would still need to point to an object subclassed from EmbeddedDocument in order to save.
Once set up, the two are functionally similar in that you can save data to them as needed and without restrictions:
e = ExampleDoc()
e.dict_approach["test"] = 10
e.dynamic_doc_approach.test = 10
However, the one main difference that I’ve seen is that you can query against any values added to a DictField, whereas you cannot with a DynamicEmbeddedDoc.
ExampleDoc.objects(dict_approach__test = 10) # Returns a QuerySet containing our entry.
ExampleDoc.objects(dynamic_doc_approach__test = 10) # Throws an exception.
That being said, using an EmbeddedDocument has the advantage of validating the fields you know will be present in the document (we simply need to add them to the ExampleDynamicEmbeddedDoc definition). Because of this, I think it is best to use a DynamicEmbeddedDocument when you have a good idea of the schema for the field and only anticipate adding fields occasionally (and will not need to query against them). However, if you are not concerned about validation or anticipate adding a lot of fields that you'll query against, go with a DictField.
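To illustrate the validation point, a small sketch that adds a declared field to the earlier definition (the field name and constraint are hypothetical):
from mongoengine import DynamicEmbeddedDocument, IntField

class ExampleDynamicEmbeddedDoc(DynamicEmbeddedDocument):
    count = IntField(min_value=0)  # declared field: validated on save

e = ExampleDoc()
e.dynamic_doc_approach = ExampleDynamicEmbeddedDoc()
e.dynamic_doc_approach.undeclared = "still allowed"  # dynamic fields still work
e.dynamic_doc_approach.count = -1
e.save()  # raises ValidationError: count is below min_value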

Overwriting an entity by reusing its id

If we add a second entity of the same model (NDB) with the same id, will the first entity be replaced by the second one?
Is this the right way to do it? Could this cause any problems in the future?
I use GAE Python with NDB.
E.g.:
class X(ndb.Model):
    command = ndb.StringProperty()

x_record = X(id="id_value", command="c1")
x_record.put()

# After some time
x_record = X(id="id_value", command="c2")
x_record.put()
I could not find a mention of this in the official Google docs.
CONTEXT
I intend to use this to reduce code steps. Currently, the code first checks whether an entity with key X already exists; if it does, it updates its properties, otherwise it creates a new one with that key. The new approach would be to just blindly create a new entity with key X.
Yes, you would simply replace the entity.
Would it cause any problems? Only if you wanted the original entity back...
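One caveat worth sketching (the extra note property below is hypothetical): put() with an existing id replaces the whole entity, so any property you don't set on the new instance reverts to its default rather than keeping its old value:
class X(ndb.Model):
    command = ndb.StringProperty()
    note = ndb.StringProperty()

X(id="id_value", command="c1", note="keep me").put()
X(id="id_value", command="c2").put()
# X.get_by_id("id_value").note is now None, not "keep me"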

Django: When to customize save vs using post-save signal

I have a series of tests and cases in a database. Whenever a test is obsoleted, it gets end-dated, and any sub-cases of that test should also be end-dated. I see two ways to accomplish this:
1) Modify the save function to end date sub-cases.
2) Create a receiver which listens for Test models being saved, and then end dates their sub-cases.
Any reason to use one over the other?
Edit: I see this blog post suggests using the save method whenever you check given values of the model. Since I'm checking end_date, maybe that suggests I should use a custom save?
Edit 2: Also, for the record, the full hierarchy is Protocol -> Test -> Case -> Planned_Execution, and any time one is end-dated, every child must also be end-dated. I figure I'll end up doing basically the same thing for each.
Edit 3: It turns out that in order to tell whether the current save() is the one end-dating the Test, I need access to both the old and the new data, so I used a custom save. Here's what it looks like:
def save(self, *args, **kwargs):
    """Use a custom save to end-date any sub-cases."""
    try:
        orig = Test.objects.get(id=self.id)
        enddated = (not orig.end_date) and self.end_date is not None
    except Test.DoesNotExist:
        enddated = False
    super(Test, self).save(*args, **kwargs)
    if enddated:
        for case in self.case_set.filter(end_date__isnull=True):
            case.end_date = self.end_date
            case.enddater = self.enddater
            case.save()
I generally use this rule of thumb:
If you have to modify data so that the save won't fail, then override save() (you don't really have another option). For example, in an app I'm working on, I have a model with a text field that has a list of choices. This interfaces with old code, and replaces an older model that had a similar text field, but with a different list of choices. The old code sometimes passes my model a choice from the older model, but there's a 1:1 mapping between choices, so in such a case I can modify the choice to the new one. Makes sense to do this in save().
Otherwise, if the save can proceed without intervention, I generally use a post-save signal.
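For completeness, a minimal sketch of the signal-based alternative (model and field names are taken from the question). Note that inside the receiver you can no longer compare the old end_date with the new one, which is what pushed the author toward a custom save():
from django.db.models.signals import post_save
from django.dispatch import receiver

@receiver(post_save, sender=Test)
def end_date_subcases(sender, instance, **kwargs):
    # Runs after every Test.save(), not only the save that end-dates the Test.
    if instance.end_date is not None:
        for case in instance.case_set.filter(end_date__isnull=True):
            case.end_date = instance.end_date
            case.enddater = instance.enddater
            case.save()  # cascades further via Case's own save()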
In my understanding, signals are a means for decoupling modules. Since your task seems to happen in only one module I'd customize save.
