I'm trying to insert/update a list of objects extracted using the SQLAlchemy ORM.
def truncate_mytable(self):
    with self.session.begin():
        current_records = self.session.query(MyTable).all()
        self.session.query(MyTable).delete()
        self.session.expunge_all()
        return current_records

def compensate_truncate_mytable(self, objects):
    with self.session.begin():
        self.session.bulk_save_objects(objects)
But while the objects have been extracted correctly, they are not getting written to the DB.
Could it be because there are also some protected attributes inside the objects, such as <sqlalchemy.orm.state.InstanceState object at 0x11471bf70> and <ClassManager of <class 'lib.kaizen_models.models.MyTable'> at 1146673b0>? The objects' type in the list is <lib.kaizen_models.models.MyTable object at 0x11471bd00>.
(I'm writing compensation methods, following the Saga pattern.)
The problem is that the objects are in a detached state, and this means bulk_save_objects will try to update rather than insert them*.
The state can be reset to transient by calling orm.make_transient on each object, after which they can be saved by bulk_save_objects.
def truncate_mytable(self):
    with self.session.begin():
        current_records = self.session.query(MyTable).all()
        self.session.query(MyTable).delete()
        self.session.expunge_all()
        for record in current_records:
            orm.make_transient(record)
        return current_records
Alternatively, you could merge the objects back into the session before calling bulk_save_objects, but this might reduce the performance benefits that you want to obtain from the bulk operation.
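A rough sketch of that merge-based alternative, keeping the same method/session layout as above (merge() issues a SELECT per object, which is why it can erode the bulk-operation speedup):
def compensate_truncate_mytable_with_merge(self, objects):
    with self.session.begin():
        for obj in objects:
            # merge() reattaches the detached object by loading (or creating)
            # a persistent copy in this session and copying the state onto it;
            # the flush at the end of the block then writes everything back.
            self.session.merge(obj)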
* By default bulk_save_objects' update_changed_only argument is True, and since there are no changes in the objects' attribute histories, no updates are attempted. Setting it to False will emit UPDATE statements, but this results in a StaleDataError because the UPDATE matches no rows in the now-empty table.
I have an instance of an object (with many attributes) which I want to duplicate.
I copy it using deepcopy() and then modify a couple of attributes.
Then I save the new object to the database using Python / Peewee save(), but save() actually updates the original object (I assume this is because the id was copied from the original object).
(By the way, no primary key is defined in the object model.)
How do I force-save the new object? Can I change its id?
Thanks.
Turns out that I can set the id to None (obj.id = None) which will create a new record when performing save().
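For illustration, a minimal sketch of that approach; the Item model and its field are made up here, and the primary key being cleared is Peewee's implicit auto-incrementing id:
import copy
from peewee import Model, CharField, SqliteDatabase

db = SqliteDatabase(':memory:')

class Item(Model):                 # hypothetical model, no explicit primary key
    name = CharField()

    class Meta:
        database = db

db.connect()
db.create_tables([Item])

original = Item.create(name='original')
clone = copy.deepcopy(original)    # the implicit id comes along with the copy
clone.name = 'copy'
clone.id = None                    # clear the copied id so save() performs an INSERT
clone.save()                       # new row with its own auto-generated id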
Setting the id to None (obj.id = None) works if you are using SQLite; otherwise use:
data = obj.__dict__['_data']    # Peewee's internal dict of field values
data.pop('id')                  # drop the copied primary key
obj.insert(data).execute()      # issue a plain INSERT with the remaining fields
I'm using Django 1.8, Mezzanine, and Cartridge, with PostgreSQL as the database.
I've updated num_in_stock directly in the database. The quantities are all correct in the database but not on my website. I know the solution is here, but I don't know what to do with it. I really need it spelled out for me.
How exactly would you use this in Cartridge to refresh the num_in_stock?
This should be all you need to do to update one object. Replace object_name with your object.
object_name.refresh_from_db()
I assume you're using an F expression.
According to the documentation, an F() expression:
...makes it possible to refer to model field values and perform
database operations using them without actually having to pull them
out of the database into Python memory.
You're working directly in the database. Python knows nothing about the values of the model fields; there's nothing in memory, everything is happening in the database.
The documentation's example:
from django.db.models import F
reporter = Reporters.objects.get(name='Tintin')
reporter.stories_filed = F('stories_filed') + 1
reporter.save()
Although reporter.stories_filed = F('stories_filed') + 1 looks like a
normal Python assignment of value to an instance attribute, in fact
it’s an SQL construct describing an operation on the database.
So, for Python to know about this value you need to reload the object.
To access the new value saved this way, the object must be reloaded:
reporter = Reporters.objects.get(pk=reporter.pk)
# Or, more succinctly:
reporter.refresh_from_db()
In your example:
object_name.refresh_from_db()
And one more thing...
F() assignments persist after Model.save()
F() objects assigned to
model fields persist after saving the model instance and will be
applied on each save().
reporter = Reporters.objects.get(name='Tintin')
reporter.stories_filed = F('stories_filed') + 1
reporter.save()
reporter.name = 'Tintin Jr.'
reporter.save()
stories_filed will be updated twice in this case. If it’s initially
1, the final value will be 3. This persistence can be avoided by
reloading the model object after saving it, for example, by using
refresh_from_db().
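Continuing the documentation's example, reloading between the two saves keeps the second save() from applying the increment again:
reporter = Reporters.objects.get(name='Tintin')
reporter.stories_filed = F('stories_filed') + 1
reporter.save()

reporter.refresh_from_db()     # stories_filed is a plain integer again
reporter.name = 'Tintin Jr.'
reporter.save()                # no second increment this time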
I assume num_in_stock is an attribute of your model class. If so, you should get an instance of the class (i.e. object_name) and then:
object_name.refresh_from_db()
After which, you can access it like object_name.num_in_stock
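For Cartridge specifically, a hedged sketch might look like this, assuming the quantities you edited live on cartridge.shop.models.ProductVariation and that you look the variation up by SKU (adjust to wherever num_in_stock is defined in your setup):
from cartridge.shop.models import ProductVariation

# The refresh matters when the instance was loaded before you edited the
# database (e.g. it is cached somewhere in your view or template context).
variation = ProductVariation.objects.get(sku="ABC-123")   # hypothetical SKU
variation.refresh_from_db()       # re-read the row you changed in PostgreSQL
print(variation.num_in_stock)     # now reflects the value in the database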
Reading Tryton's module code, I come across this method a lot, but I could not figure out what it is for.
What's the use of this function in Tryton?
@classmethod
def __register__(cls, module_name):
    TableHandler = backend.get('TableHandler')
    cursor = Transaction().cursor
    table = TableHandler(cursor, cls, module_name)
    super(Address, cls).__register__(module_name)
    table.not_null_action('sequence', action='remove')
The __register__ method is called every time the module is installed or updated, and it's used to alter the database structure for the current module. Normally Tryton creates all the missing columns for you (this is done in the ModelSQL class), but some changes cannot be detected automatically, so you must write a migration for them. This is done in the __register__ method of the model.
The code you copied ensures that the sequence column is nullable: if it carries a NOT NULL constraint, the constraint is removed.
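For context, a hand-written migration inside __register__ typically looks something like this sketch; the model name and the renamed column are made up, and the exact TableHandler helpers available vary slightly between Tryton versions:
@classmethod
def __register__(cls, module_name):
    TableHandler = backend.get('TableHandler')
    cursor = Transaction().cursor
    table = TableHandler(cursor, cls, module_name)

    # Pre-step (hypothetical): rename an old column before the automatic
    # column creation runs, so existing data ends up in the new column.
    if table.column_exist('old_name') and not table.column_exist('name'):
        table.column_rename('old_name', 'name')

    # Let ModelSQL create or update all the regular columns.
    super(MyModel, cls).__register__(module_name)

    # Post-step: drop the NOT NULL constraint on sequence, as in the
    # snippet from the question.
    table.not_null_action('sequence', action='remove')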
I have a problem with the insert function. If I have an array of objects to insert, [bad, good, good], and the first object is bad so its insert fails, then the rest of the objects never hit the database, even though they are good.
How can I deal with this?
You can validate the model instances before saving to ensure they are valid, e.g.:
valid_docs = [d for d in docs if d.validate()]
Or pass continue_on_error=True in write_options, e.g.:
Doc.objects.insert(docs, write_options={"continue_on_error": True})
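A small self-contained sketch of both approaches; the Doc class and database name are stand-ins, and note that validate() raises a ValidationError rather than returning a value, so the filtering needs a try/except:
from mongoengine import Document, StringField, ValidationError, connect

connect('testdb')                            # assumed database name

class Doc(Document):                         # stand-in for your document class
    name = StringField(required=True)

docs = [Doc(), Doc(name='good'), Doc(name='also good')]   # the first one is invalid

# Approach 1: drop the documents that fail validation, insert the rest.
valid_docs = []
for d in docs:
    try:
        d.validate()                         # raises ValidationError when invalid
        valid_docs.append(d)
    except ValidationError:
        pass
Doc.objects.insert(valid_docs)

# Approach 2 (older MongoEngine/PyMongo API, as in the answer above):
# tell the server to keep inserting past a failed document.
# Doc.objects.insert(docs, write_options={"continue_on_error": True})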
My question is: what is the best way to create a new model entity and then read it immediately after? For example:
class LeftModel(ndb.Model):
    name = ndb.StringProperty(default="John")
    date = ndb.DateTimeProperty(auto_now_add=True)

class RightModel(ndb.Model):
    left_model = ndb.KeyProperty(kind=LeftModel)
    interesting_fact = ndb.StringProperty(default="Nothing")

def do_this(self):
    # Create a new model entity
    new_left = LeftModel()
    new_left.name = "George"
    new_left.put()

    # Retrieve the entity just created
    current_left = LeftModel.query().filter(LeftModel.name == "George").get()

    # Create a new entity which references the entity just created and retrieved
    new_right = RightModel()
    new_right.left_model = current_left.key
    new_right.interesting_fact = "Something"
    new_right.put()
This quite often throws an exception like:
AttributeError: 'NoneType' object has no attribute 'key'
I.e. the retrieval of the new LeftModel entity was unsuccessful. I've faced this problem a few times with App Engine, and my solution has always been a little hacky. Usually I just put everything in a try/except or a while loop until the entity is successfully retrieved. How can I ensure that the entity is always retrieved without risking infinite loops (in the case of the while loop) or cluttering my code (in the case of the try/except statements)?
Why are you trying to fetch the object via a query immediately after you have performed the put()?
You should use the new_left you just created and assign its key directly to the new_right, as in new_right.left_model = new_left.key.
The reason you cannot query immediately is that the HRD (High Replication Datastore) uses an eventual consistency model, which means the result of the put will only become visible to queries eventually. If you want a consistent result, you must perform ancestor queries, and that implies setting an ancestor in the key on creation. Given you are creating a tree, this is probably not practical. Have a read about Structuring Data for Strong Consistency: https://developers.google.com/appengine/docs/python/datastore/structuring_for_strong_consistency
I don't see any reason why you don't just use the entity you created, without the additional query.
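Putting that advice into code, the method from the question can be rewritten without the query (same models as above):
def do_this(self):
    # Create and store the new LeftModel entity.
    new_left = LeftModel(name="George")
    new_left.put()

    # No query needed: we already hold the entity, so its key is available
    # immediately and the eventual-consistency delay never comes into play.
    new_right = RightModel(
        left_model=new_left.key,
        interesting_fact="Something",
    )
    new_right.put()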