I'm trying to track down a bug. The following function appends a Key object to a user's tabs attribute, but after calling put() on the user entity, the newly added key doesn't seem to be saved, and I can't figure out why. Is there some delay that prevents the change from appearing immediately? If so, is memcache the solution?
class User(GeoModel):
    tabs = db.ListProperty(db.Key)

@db.transactional
def add_tab_transaction(self, user_key, tab_key):
    user = models.User.get(user_key)
    user.tabs.append(tab_key)
    user.put()
    logging.debug('tabs this user has:')
    logging.debug(user.tabs)  # prints the list with the new value
    user = models.User.get(user_key)
    logging.debug('rechecking the same thing:')
    logging.debug(user.tabs)  # prints the list without the new value
This behaviour is explained in the "Isolation and Consistency" section of the Datastore docs:
"In a transaction, all reads reflect the current, consistent state of the Datastore at the time the transaction started. This does not include previous puts and deletes inside the transaction."
I'm using Odoo 10. After a new user signs up (through localhost:8069/web/signup), I want them to be automatically added to a group I created in my own custom module (the user will need authorization from an admin later on before being converted to a regular portal user; after signup they receive restricted access).
I have tried many things. My latest effort looks like this:
class RestrictAccessOnSignup(auth_signup_controller.AuthSignupHome):
    def do_signup(self, *args):
        super(RestrictAccessOnSignup, self).do_signup(*args)
        request.env['res.groups'].sudo().write({'groups_id': 'group_unuser'})
Note that I have imported odoo.addons.auth_signup.controllers.main as auth_signup_controller so that I can override the auth_signup controller.
I have located that method as the one responsible for doing the signup, so I call it in my new method and then try to change the newly created user's group_id.
What I'm missing is a fundamental understanding of how to overwrite a field's value on another model from inside a controller method. I'm using the 'request' object, although I'm not sure it is the right tool. I have seen people use self.pool['res.users'] (e.g.) for such purposes, but I don't understand how to apply that in my context.
I also believe there is a way to change the default group a user gets after creation (I would like to know it), but I still want to understand how to solve the general problem: accessing and overwriting a field's value from another module.
Another weird thing is that the field groups_id does exist in the 'res.users' model, but it does not appear as a column when I inspect the 'res.users' table in my pgAdmin interface. Any idea why?
Thanks a lot!
I don't know whether, after calling super(RestrictAccessOnSignup, self).do_signup(*args), you will have access to the new user record through the request object. If so, just add the group to the user as shown below; if not, you will have to find where the user record (or its id) is stored after do_signup returns, because that is the record you need to update.
# shortcut for the environment
env = request.env
env.user.sudo().write({
    'groups_id': [
        # In Odoo, relational fields accept a list of commands.
        # Command 4 adds an existing record: the second element must be
        # an integer id. env.ref() returns a record, so we take its .id.
        (4, env.ref('your_module_name.group_unuser').id),
    ],
})
If the changes are not committed to the database, you may need to commit them yourself:
request.env.cr.commit()
Note: when calling env.ref() you must pass the full XML ID (module_name.record_id).
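Putting the pieces together, the override might look like the sketch below. It assumes request.env.user does point at the freshly created user once do_signup returns (as discussed above), and that your group's XML ID is your_module_name.group_unuser:

import odoo.addons.auth_signup.controllers.main as auth_signup_controller
from odoo.http import request

class RestrictAccessOnSignup(auth_signup_controller.AuthSignupHome):
    def do_signup(self, *args):
        res = super(RestrictAccessOnSignup, self).do_signup(*args)
        # add the freshly signed-up user to the restricted group
        request.env.user.sudo().write({
            'groups_id': [(4, request.env.ref('your_module_name.group_unuser').id)],
        })
        return res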
This is what worked for me:
def do_signup(self, *args):
    super(RestrictAccessOnSignup, self).do_signup(*args)
    group_id = request.env['ir.model.data'].get_object('academy2', 'group_unuser')
    group_id.sudo().write({'users': [(4, request.env.uid)]})
In get_object I pass the module ('academy2') and the XML ID ('group_unuser') of the group I want to fetch.
It is still not clear to me why 'ir.model.data' is the model used for the lookup, but this works like a charm. Note that here we are adding a user to the group rather than a group to the user, which to me actually makes more sense.
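Either direction should end up in the same place, since both fields map onto the same many2many relation; a small sketch, assuming the group's XML ID is academy2.group_unuser:

group = request.env.ref('academy2.group_unuser').sudo()

# add the current user to the group...
group.write({'users': [(4, request.env.uid)]})

# ...which is equivalent to adding the group to the user
request.env.user.sudo().write({'groups_id': [(4, group.id)]})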
Any further elucidation or parallel solutions are welcome; these methods aren't as clear to me as they should be. Thanks.
I'm using Pony ORM version 0.7 with a SQLite3 database on disk, and running into this issue: I perform a select, then an update, then a select, then another update, and get this error:
pony.orm.core.UnrepeatableReadError: Value of Task.order_id for Task[23654] was updated outside of current transaction (was: 1, now: 2)
I've reduced the problem to the minimal set of commands that reproduces it (removing any of them makes the problem go away):
@db_session
def test_method():
    tasks = list(map(Task.to_dict, Task.select()))
    db.execute("UPDATE Task SET order_id=order_id*2")
    task_to_move = select(task for task in Task if task.order_id == 2).first()
    task_to_move.order_id = 1

test_method()
For completeness's sake, here is the definition of Task:
class Task(db.Entity):
    text = Required(unicode)
    heading = Required(int)
    create_timestamp = Required(datetime)
    done_timestamp = Optional(datetime)
    order_id = Required(int)
Also, if I remove the constraint task.order_id == 2 from my select, the problem no longer occurs. So I assume it has something to do with querying on a field that has changed since the transaction started, but I don't know why the error message says the value was changed by a different transaction (unless db.execute runs in a separate transaction because it is raw SQL?).
I've already looked at this similar question, but the problem there was different (Pony ORM reports record "was updated outside of current transaction" while there is no other transaction), and at the transactions documentation (https://docs.ponyorm.com/transactions.html), but neither solved my problem.
Any ideas what might be going on here?
Pony uses optimistic concurrency control by default. For each attribute, Pony remembers its current value (potentially modified by application code) as well as the original value read from the database. During UPDATE, Pony checks that the value of the column in the database is still the same. If the value has changed, Pony assumes some concurrent transaction changed it and throws an exception in order to avoid a "lost update".
When you execute a raw SQL query, Pony does not know what exactly was modified in the database. So when Pony sees that the order_id value has changed, it mistakenly concludes that the value was changed by another transaction.
To avoid the problem, you can mark the order_id attribute as volatile. Pony will then assume that the value of the attribute can change at any time (via a trigger or a raw SQL update) and will exclude that attribute from the optimistic checks:
class Task(db.Entity):
    text = Required(unicode)
    heading = Required(int)
    create_timestamp = Required(datetime)
    done_timestamp = Optional(datetime)
    order_id = Required(int, volatile=True)
Note that Pony caches the value of a volatile attribute and will not re-read it from the database until the object is saved, so in some situations you can get an obsolete value in Python.
Update:
Starting from release 0.7.4 you can also pass the optimistic=False option to db_session to turn off optimistic checks for a specific transaction that uses raw SQL queries:
with db_session(optimistic=False):
    ...

or

@db_session(optimistic=False)
def some_function():
    ...
It is also now possible to specify the optimistic=False option on an attribute instead of volatile=True. Pony will then skip optimistic checks for that attribute but will still treat it as non-volatile.
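For reference, with either fix applied, the original reproduction should run without the error; a sketch using the asker's code and the volatile declaration above:

@db_session
def test_method():
    tasks = list(map(Task.to_dict, Task.select()))
    db.execute("UPDATE Task SET order_id = order_id * 2")
    # order_id is volatile now, so the changed value no longer trips
    # the optimistic check when the row is re-read by this select
    task_to_move = select(t for t in Task if t.order_id == 2).first()
    task_to_move.order_id = 1

test_method()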
I have a method running in a separate thread that does some contact matching.
I'm writing tests to check whether the contacts have been synced. The test case goes something like this:
class ContactSyncTestCase(TestCase):
    fixtures = ['fix.json']

    def setUp(self):
        # get a few contacts that exist in the database to be sent for matching
        self.few_contacts = CompanyContact.objects.all().order_by('?')[:5].values_list('contact_number', flat=True)

    def test_submit_contacts(self):
        # Pick a random user
        user = User.objects.all().order_by('?')[0]
        # Get the API key for that user
        key = ApiKey.objects.get(user=user).key
        # The url that submits contacts for matching; it returns a matching key
        # immediately and starts a separate thread to sync the contacts
        sync_request_url = '/sync_contacts/?username=%s&api_key=%s' % (user.username, key)
        sync_request_response = self.client.post(path=sync_request_url,
                                                 data=json.dumps({"contacts": ','.join(self.few_contacts)}),
                                                 content_type="application/json")
        # Key used to fetch the status of contact syncing; returns a json
        # once the contacts are matched
        key = sync_request_response.content
        # At this point, the other thread is syncing inside the method below,
        # so I pause the test so that it does not fail and exit
        try:
            while True:
                time.sleep(100)
        except KeyboardInterrupt:
            pass
The async method that matches the numbers starts something like this:
def match_numbers(key, contacts, user):
    """
    :param key:
    :param contacts:
    :param user:
    """
    import pdb; pdb.set_trace()
    # Get all contacts stored in the system
    system_contacts = CompanyContact.objects.all().values_list('contact_number', flat=True)
Now the weird issue here is that:
CompanyContact.objects.all().values_list('contact_number', flat=True)
returns an empty queryset while testing. However, at runtime it works fine.
For that matter, any query (including on the User model) returns an empty queryset.
Any ideas why?
EDIT:
Turns out that inheriting from TransactionTestCase solves this issue. I still had my doubts, so I dug deeper.
My database's default transaction isolation level is REPEATABLE READ.
Reading from this post:
REPEATABLE READ (default): ensures that if a transaction issues the same SELECT twice, it gets the same result both times, regardless of committed or uncommitted changes made by other transactions. In other words, it gets a consistent result from different executions of the same query. In some database systems, the REPEATABLE READ isolation level allows phantoms, such that if another transaction inserts new rows in the interval between the SELECT statements, the second SELECT will see them. This is not true for InnoDB; phantoms do not occur for the REPEATABLE READ level.
Summary: I should still have seen the existing records, which I didn't.
You can find a good explanation in the Django LiveServerTestCase docstring:
class LiveServerTestCase(TransactionTestCase):
    """
    ...
    Note that it inherits from TransactionTestCase instead of TestCase because
    the threads do not share the same transactions (unless if using in-memory
    sqlite) and each thread needs to commit all their transactions so that the
    other thread can see the changes.
    """
I'm relatively new to Python, coming from the PHP world. In PHP, I would routinely fetch a row, which corresponds to an object from the database, say User, and add properties to it before passing the user object to my view page.
For example, the user has the properties email, name and id.
I get 5 users from the database, and in a for loop I assign a dynamic property to each user, say image.
This doesn't seem to work in Python / Google App Engine datastore models (I think it has more to do with the datastore model than with Python). It works within the for loop (meaning I can reference user.image there), but once the loop ends, the objects no longer seem to have the new image attribute.
Here is a code example:
# Model
Class User(ndb.Model):
email = ndb.StringProperty()
name = ndb.StringProperty()
# And then a function that returns a list of users
users = User.get_users()
user_list = []
# For loop
for user in user:
# For example, get image
user.image = Image.get_image(user.key)
user_list.append(user)
# If I print or log this user in the for loop, I see a result
logging.info(user.image) # WORKS!
for ul in user_list:
print ul.image # Results in None/ATTR Error
Can anyone explain why this is happening and how to achieve this?
I've searched the forums but couldn't find anything.
Try using the Expando model:
Sometimes you don't want to declare your properties ahead of time. A special model subclass, Expando, changes the behavior of its entities so that any attribute assigned (as long as it doesn't start with an underscore) is saved to the Datastore.
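A minimal sketch of what that looks like with the question's model, assuming the image value is a simple string:

from google.appengine.ext import ndb

class User(ndb.Expando):  # Expando instead of ndb.Model
    email = ndb.StringProperty()
    name = ndb.StringProperty()

user = User(email='ann@example.com', name='Ann')
user.image = 'images/ann.png'  # dynamic attribute, persisted on put()
user.put()

fetched = user.key.get()
print fetched.image  # the attribute survives the datastore round-trip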
My question is: what is the best way to create a new model entity and then read it immediately after? For example:
class LeftModel(ndb.Model):
    name = ndb.StringProperty(default="John")
    date = ndb.DateTimeProperty(auto_now_add=True)

class RightModel(ndb.Model):
    left_model = ndb.KeyProperty(kind=LeftModel)
    interesting_fact = ndb.StringProperty(default="Nothing")

def do_this(self):
    # Create a new model entity
    new_left = LeftModel()
    new_left.name = "George"
    new_left.put()

    # Retrieve the entity just created
    current_left = LeftModel.query().filter(LeftModel.name == "George").get()

    # Create a new entity which references the entity just created and retrieved
    new_right = RightModel()
    new_right.left_model = current_left.key
    new_right.interesting_fact = "Something"
    new_right.put()
This quite often throws an exception like:
AttributeError: 'NoneType' object has no attribute 'key'
i.e. the retrieval of the new LeftModel entity was unsuccessful. I've faced this problem a few times with App Engine, and my solution has always been a little hacky: usually I just wrap everything in a try/except, or loop until the entity is successfully retrieved. How can I ensure the entity is always retrieved without risking infinite loops (in the case of the while loop) or messing up my code (in the case of the try/except statements)?
Why are you trying to fetch the object via a query immediately after you have performed the put()?
You should use the new_left you just created and assign it directly, as in new_right.left_model = new_left.key.
The reason you cannot query immediately is that the HRD uses an eventual consistency model, which means the result of the put becomes visible only eventually. If you want a consistent result, you must perform ancestor queries, and that implies setting an ancestor in the key on creation; given you are creating a tree, this is probably not practical. Have a read of Structuring Data for Strong Consistency: https://developers.google.com/appengine/docs/python/datastore/structuring_for_strong_consistency
I don't see any reason why you don't just use the entity you created, without the additional query.
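Concretely, the method from the question could drop the query entirely; a sketch based on the models above:

def do_this(self):
    new_left = LeftModel(name="George")
    new_left.put()

    # Use the entity we already hold in memory: put() may return before
    # the indexes are updated, but since we never query, eventual
    # consistency is not an issue here.
    new_right = RightModel(
        left_model=new_left.key,
        interesting_fact="Something",
    )
    new_right.put()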