Simulating Django settings retrieved from the database, values getting cached - Python

I'm trying to simulate Django's settings file: I've built a model to hold some of the settings so they can be changed by an admin. The concept works fine, but it's acting strangely: when a value is changed, the code does not pick up the new value.
Here is my core_settings:
class CoreSettings(object):
    def __getattr__(self, item):
        try:
            return Configuration.objects.get(key=item).value
        except Configuration.DoesNotExist:
            return getattr(settings, item)

core_settings = CoreSettings()
I'm basically using the above as follows:
SF_PUBLICATION_PAGINATION_PP = int(getattr(core_settings, 'SF_PUBLICATION_PAGINATION_PP'))
SF_PUBLICATION_PAGINATION_PP gets the correct value from the Configuration model, but when I update the field value, the change is not reflected. Only when I alter the file, causing it to be recompiled, do I see the changes.
Any ideas?
Update:
It seems I'm only seeing the changes when runserver is restarted.

Yes, the value of your setting is not being refreshed, because it is set when the module containing that assignment is loaded, which happens e.g. when you run runserver in a dev environment.
So you can deal with it by doing something like this:
def get_live_setting(key):
    try:
        return Configuration.objects.get(key=key).value
    except Configuration.DoesNotExist:
        return getattr(settings, key)

# get SF_PUBLICATION_PAGINATION_PP's value
get_live_setting('SF_PUBLICATION_PAGINATION_PP')
With this you can solve your problem, but the correct way is to use lazy evaluation; here are some samples, and here is a funny post about 'lazy python'.
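For illustration, a minimal sketch of the lazy-evaluation idea using django.utils.functional.lazy, assuming the Configuration model and the get_live_setting helper from above; the proxy defers the database lookup until the value is actually used:

    from django.utils.functional import lazy

    # lazy() wraps get_live_setting in a proxy; the lookup re-runs each time
    # the proxy is coerced to str, so admin changes are picked up without a restart.
    get_live_setting_lazy = lazy(get_live_setting, str)
    SF_PUBLICATION_PAGINATION_PP = get_live_setting_lazy('SF_PUBLICATION_PAGINATION_PP')

    # coerce explicitly where the value is used, e.g.:
    per_page = int(str(SF_PUBLICATION_PAGINATION_PP))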

What about using a package designed for this purpose?
Have a look at: django-livesettings
Even if you decide not to use it, you can always have a look at how it's done there!
Regarding your specific issue: how do you update the field value? Are you sure the value is retrieved from the database and not from your except clause?

Related

Django does not Override Settings in Template Tag Testing

Well, I haven't been getting answers or comments, partly because the code in the original content below is so restricted to its own small context. So instead, I want to share the whole codebase with you (don't worry, I will permalink the selected lines), because I intend to open the source anyway, so you can review as much as you'd like.
The whole codebase is here. It's the perma/1 branch of the repository.
Original Content
I have a custom template tag as below:
# other imports
from django.conf import settings

DPS_TEMPLATE_TRUE_DEFAULT = getattr(settings, "DPS_TEMPLATE_TRUE_DEFAULT", "True")

@register.simple_tag(name="var")
def get_var(name, rit=DPS_TEMPLATE_TRUE_DEFAULT, rif="False", rin=""):
    """
    A template tag to render the value of a variable.
    """
    _LOGGER.debug("Rendering value for `%s`...", name)
    variable = models.Variable.objects.get(name=name)
    value = variable.value
    if value is None:
        return rin
    if isinstance(value, bool):
        if value:
            return rit
        else:
            return rif
    return variable.value
As you can see, I would like to set rit from DPS_TEMPLATE_TRUE_DEFAULT. I test this behavior as below:
# `template_factory` and `context_factory` create Template and Context instances accordingly.
# I use them in other tests; they work.
@pytest.mark.it("Render if True by settings")
def test_render_if_true_settings(
    self, template_factory, context_factory, variable_factory, settings
):
    settings.DPS_TEMPLATE_TRUE_DEFAULT = "this is true by settings"
    variable_factory(True)
    template = template_factory("FOO", tag_name=self.tag_name).render(
        context_factory()
    )
    assert "<p>this is true by settings</p>" in template
I use pytest-django and, as the docs put it, I can effectively mock the settings. However, when I run the test, it does not see DPS_TEMPLATE_TRUE_DEFAULT and uses "True". I verified this behavior by removing the "True" default from getattr.
Why does it not see DPS_TEMPLATE_TRUE_DEFAULT even if I set it in tests?
Addition / New Content
In the custom template tag, you can see that I'd like to grab DPS_TEMPLATE_TRUE_DEFAULT from django.conf.settings and use it as rit kwarg in my var tag.
This is where I test this behavior, by mutating the related setting with the settings fixture of pytest-django, and it fails.
As the troubleshooting section states, I have also tried the other, possibly more official, ways to do this; they produce the same behavior. As to why it does that, I have no clue.
Troubleshooting
Using Standard Solutions
The odd thing is that I have also tried the good old django.test.utils.override_settings and modify_settings, which show the same behavior.
Eager Initialization
I thought maybe the problem was that I was using getattr outside the scope of the get_var function, which would evaluate it when the module loads, i.e. before the tests run, and somehow not let me set it again. So I moved the getattr call inside the get_var function, but the behavior was the same. It still behaves as if DPS_TEMPLATE_TRUE_DEFAULT did not exist in settings.
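For clarity, here is a minimal sketch of the two variants just described, using the same names as the tag code above (bodies elided). The first binds the default once, when the module is imported; the second resolves it on every call:

    # Variant 1: default kwarg bound at import time.
    DPS_TEMPLATE_TRUE_DEFAULT = getattr(settings, "DPS_TEMPLATE_TRUE_DEFAULT", "True")

    @register.simple_tag(name="var")
    def get_var(name, rit=DPS_TEMPLATE_TRUE_DEFAULT, rif="False", rin=""):
        ...

    # Variant 2: default resolved inside the function, on each call.
    @register.simple_tag(name="var")
    def get_var(name, rit=None, rif="False", rin=""):
        if rit is None:
            rit = getattr(settings, "DPS_TEMPLATE_TRUE_DEFAULT", "True")
        ...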
Hardcoding into Settings File
So I have hardcoded the "failing to see" setting in the settings.py file as below:
DPS_TEMPLATE_TRUE_DEFAULT = "this is true by settings"
Still it behaves like DPS_TEMPLATE_TRUE_DEFAULT does not exist.
This is also proven by removing the default value "True" from getattr in this line.
Environment
Django 2.2.8
Pytest Django 3.7.0

Issue with creating/retrieving cookies in Flask

When the class AnonUser is initialized, the code should check if a cookie exists and create a new one if it doesn't. The relevant code snippet is the following:
class AnonUser(object):
    """Anonymous/non-logged-in user handling"""
    cookie_name = 'anon_user_v1'

    def __init__(self):
        self.cookie = request.cookies.get(self.cookie_name)
        if self.cookie:
            self.anon_id = verify_cookie(self.cookie)
        else:
            self.anon_id, self.cookie = create_signed_cookie()
            res = make_response()
            res.set_cookie(self.cookie_name, self.cookie)
For some reason, request.cookies.get(self.cookie_name) always returns None. Even if I log "request.cookies" immediately after res.set_cookie, the cookie is not there.
The strange thing is that this code works on another branch with identical code and, as far as I can tell, identical configuration settings (it's not impossible that I'm missing something, but I've been searching for the past couple of hours for any difference, with no luck). The only thing that seems different is the domain.
Does anyone know why this might happen?
I figured out what the problem was. I was apparently wrong about it working on the other branch; for whatever reason it would work if the anonymous user already had some saved collections (what the cookies are used for). I'm still not sure why that is, but the following ended up resolving the issue:
@app.after_request
def set_cookie(response):
    if not request.cookies.get(g.cookie_session.cookie_name):
        response.set_cookie(g.cookie_session.cookie_name, g.cookie_session.cookie)
    return response
The main things I needed to do were to import request from flask, and to realize that I could reference the cookie and cookie name by just referring to the anonymous user ("cookie_session") object where they were set.
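For context, a hedged sketch of the wiring this relies on: g.cookie_session has to be populated somewhere per request, e.g. in a before_request hook (the hook below is illustrative, not from the original code):

    from flask import Flask, g

    app = Flask(__name__)

    @app.before_request
    def load_anon_user():
        # AnonUser as defined in the question; putting the instance on flask.g
        # is what lets the after_request hook above reach its cookie fields.
        g.cookie_session = AnonUser()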

Odoo - Changing user group id just right after signup (ecommerce)

I'm using Odoo 10. After a new user signs up (through localhost:8069/web/signup) I want him to be automatically placed in a group I created in my own custom module (the user will need approval from an admin later on so he can be converted to a regular portal user; right after signup he receives only restricted access).
I have tried many things. My latest effort looks like this:
class RestrictAccessOnSignup(auth_signup_controller.AuthSignupHome):
    def do_signup(self, *args):
        super(RestrictAccessOnSignup, self).do_signup(*args)
        request.env['res.groups'].sudo().write({'groups_id': 'group_unuser'})
Note that I have imported odoo.addons.auth_signup.controllers.main as auth_signup_controller so that I can override the auth_signup controller.
I have located that method as the one responsible for doing the signup, so I call it in my new method and then try to change the newly created user's groups_id.
What I'm missing is a fundamental understanding of how to overwrite a field's value from another model inside a controller method context. I'm using the request object, although I'm not sure about it. I have seen people using self.pool['res.users'] (e.g.) for such purposes, but I don't understand how to apply it in my problem's context.
I also believe there is a way to change the default group for a user after it is created (I would like to know it), but I also want to understand how to solve the general problem (accessing and overwriting a field's value from another module).
Another weird thing is that the field groups_id does exist in the res.users model, but it does not appear as a column when I look at the res.users table in my pgAdmin interface... Any idea why?
Thanks a lot!
I don't know if, after calling:
super(RestrictAccessOnSignup, self).do_signup(*args)
you will have access to the user record in the request object; but if so, just add the group to the user like this. If not, you have to find where the user record or id is saved after calling do_signup, because you need to update that record to add this group.
# create an env variable; I hate typing, even when I'm typing here ^^
env = request.env
env.user.sudo().write({'groups_id': [
    # in Odoo, relation fields accept a list of commands;
    # command 4 means "add the id"; the second element must be an integer.
    # ref() returns a record, so we take its id
    (4, env.ref('your_module_name.group_unuser').id),
]})
And if the changes are not committed to the database, you may need to commit them:
request.env.cr.commit()
Note: you must pass the full XML ID to env.ref().
This is what worked for me:
def do_signup(self, *args):
    super(RestrictAccessOnSignup, self).do_signup(*args)
    group_id = request.env['ir.model.data'].get_object('academy2', 'group_unuser')
    group_id.sudo().write({'users': [(4, request.env.uid)]})
In get_object I pass as arguments the module ('academy2') and the XML ID ('group_unuser') of the group I want to fetch.
It is still not clear to me why ir.model.data is the model used here, but this works like a charm. Please note that here we are adding a user to the group, not a group to the user, and to me that actually makes more sense; both directions are sketched below.
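For comparison, a minimal hedged sketch of the two equivalent directions of the same many2many link, assuming the group's full XML ID is academy2.group_unuser (command 4 links an existing id, as explained in the other answer):

    group = request.env.ref('academy2.group_unuser').sudo()
    uid = request.env.uid

    # add the user to the group ...
    group.write({'users': [(4, uid)]})

    # ... or, equivalently, add the group to the user
    request.env['res.users'].sudo().browse(uid).write({'groups_id': [(4, group.id)]})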
Any further elucidation or parallel solutions are welcome; the methods aren't as clear to me as they should be.
Thanks.

Continue after Try has failed

I have a function:
def save_to_models(all_item_tags):
    from article.models import Articles
    for item in all_item_tags:
        newobj = Articles()
        try:
            newobj.pub_date = item.contents[9].contents[0]
        except:
            continue
        try:
            newobj.title = item.contents[1].contents[0]
        except:
            continue
        try:
            newobj.description = item.contents[5].contents[0]
        except:
            continue
        try:
            newobj.image = get_the_picture(item.contents[7])
        except:
            continue
        newobj.save()
Each field has unique=True, so I'm using try/except to skip over the error I get when the code tries to insert data that's already in the database. How can I condense this code? It feels like a lot of unnecessary lines.
Django is smart: as stated in one of the comments, it's only going to raise an error when the save() method is called. Until then, Articles is a normal Python object. What you want would look more like this:
from django.db import IntegrityError  # Django wraps the driver-level error (psycopg2's, on Postgres) in this
from article.models import Articles

for item in all_item_tags:
    try:
        new_article = Articles(
            pub_date=item.contents[9].contents[0],
            title=item.contents[1].contents[0],
            description=item.contents[5].contents[0],
            image=get_the_picture(item.contents[7]),
        )
        new_article.save()  # this is where the actual save happens
    except IntegrityError:
        pass  # handle the exception here
Another (more advanced) option is to override the save() method and put your logic there.
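For instance, a minimal hedged sketch of that option, assuming the same Articles model and that duplicates should simply be skipped (the field shown is illustrative):

    from django.db import IntegrityError, models

    class Articles(models.Model):
        title = models.CharField(max_length=200, unique=True)
        # ... other fields ...

        def save(self, *args, **kwargs):
            try:
                super(Articles, self).save(*args, **kwargs)
            except IntegrityError:
                pass  # duplicate row: skip it silently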
That said, you could also use get_or_create to do that. It looks like this:
for item in all_item_tags:
    # get_or_create returns a tuple: the object, and a boolean that is
    # True if it was just created, False if it was already in the DB.
    # Use your unique identifier here.
    article, created = Articles.objects.get_or_create(unique_id=...)
    # You can then do a classic if-else without handling any errors...
    if created:
        # Great! The object was created.
        ...
    else:
        # The object already exists; do something with it, or not...
        ...
However, there are a few things I would suggest. My feeling is that you are diving into Django without really knowing Python. Django is a big beast that makes a lot of things really convenient (almost magical), but it's still Python. If you dive too deep and something breaks, it will be very hard for you to know what's going on. I would suggest furthering your knowledge of Python (it's an amazing language, so it's gonna be fun) and then going back to Django, or maybe starting with a smaller framework like Flask, which does less magic! For now, here's a link to the official doc on error handling so you can learn a bit more about it. Also, Django has really good docs, so I would look there first when a problem arises.
Cheers and happy coding!

SQLAlchemy DetachedInstanceError with regular attribute (not a relation)

I just started using SQLAlchemy and am getting a DetachedInstanceError, and I can't find much information on this anywhere. I am using the instance outside a session, so it is natural that SQLAlchemy is unable to load relations that are not already loaded. However, the attribute I am accessing is not a relation; in fact, this object has no relations at all. I found solutions such as eager loading, but I can't apply them because this is not a relation. I even tried "touching" this attribute before closing the session, but it still doesn't prevent the exception. What could be causing this exception for a non-relational attribute, even after it has been successfully accessed once before? Any help in debugging this issue is appreciated. I will meanwhile try to get a reproducible stand-alone scenario and update here.
Update: This is the actual exception message with a few stacks:
File "/home/hari/bin/lib/python2.6/site-packages/SQLAlchemy-0.6.1-py2.6.egg/sqlalchemy/orm/attributes.py", line 159, in __get__
return self.impl.get(instance_state(instance), instance_dict(instance))
File "/home/hari/bin/lib/python2.6/site-packages/SQLAlchemy-0.6.1-py2.6.egg/sqlalchemy/orm/attributes.py", line 377, in get
value = callable_(passive=passive)
File "/home/hari/bin/lib/python2.6/site-packages/SQLAlchemy-0.6.1-py2.6.egg/sqlalchemy/orm/state.py", line 280, in __call__
self.manager.deferred_scalar_loader(self, toload)
File "/home/hari/bin/lib/python2.6/site-packages/SQLAlchemy-0.6.1-py2.6.egg/sqlalchemy/orm/mapper.py", line 2323, in _load_scalar_attributes
(state_str(state)))
DetachedInstanceError: Instance <ReportingJob at 0xa41cd8c> is not bound to a Session; attribute refresh operation cannot proceed
The partial model looks like this:
metadata = MetaData()
ModelBase = declarative_base(metadata=metadata)

class ReportingJob(ModelBase):
    __tablename__ = 'reporting_job'
    job_id = Column(BigInteger, Sequence('job_id_sequence'), primary_key=True)
    client_id = Column(BigInteger, nullable=True)
And the field client_id is what is causing this exception with a usage like the below:
Query:
jobs = session \
    .query(ReportingJob) \
    .filter(ReportingJob.job_id == job_id) \
    .all()
if jobs:
    # FIXME(Hari): Workaround for the attribute getting lazy-loaded.
    jobs[0].client_id
    return jobs[0]
This is what triggers the exception later out of the session scope:
msg = msg + ", client_id: %s" % job.client_id
I found the root cause while trying to narrow down the code that caused the exception. I placed the same attribute-access code at different places after the session close, and found that it definitely doesn't cause any issue immediately after the close of the query session. It turns out the problem starts appearing after closing a fresh session that was opened to update the object. Once I understood that the state of the object is unusable after a session close, I was able to find this thread, which discusses this same issue. Two solutions that come out of the thread are:
Keep a session open (which is obvious)
Specify expire_on_commit=False to sessionmaker()
A third option is to manually set expire_on_commit to False on the session once it is created, e.g. session.expire_on_commit = False. I verified that this solves my issue.
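A minimal sketch of those two expire_on_commit options (the engine URL and surrounding setup are illustrative):

    from sqlalchemy import create_engine
    from sqlalchemy.orm import sessionmaker

    engine = create_engine('sqlite://')  # hypothetical engine

    # Configured on the factory: applies to every session it creates.
    Session = sessionmaker(bind=engine, expire_on_commit=False)

    session = Session()
    # Or flipped on an existing session after creation.
    session.expire_on_commit = False
    # ... query and commit; loaded attributes stay usable after the session closes ...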
We were getting similar errors, even with expire_on_commit set to False. In the end it was actually caused by having two sessionmakers that were both being used to create sessions in different requests. I don't really understand what was going on, but if you see this exception with expire_on_commit=False, make sure you don't have two sessionmakers initialized.
I had a similar problem with DetachedInstanceError: Instance <> is not bound to a Session.
The situation was quite simple: I pass the session and the record to be updated to my function, which merges the record and commits it to the database. In the first sample I would get the error: I was lazy and thought I could just return the merged object, so that my working record would be updated (i.e. its is_modified value would be False). It did return the updated record, and is_modified was now False, but subsequent uses threw the error. I think this was compounded by related child records, but I'm not entirely sure of that.
def EditStaff(self, session, record):
    try:
        r = session.merge(record)
        session.commit()
        return r
    except:
        return False
After much googling and reading about sessions etc., I realized that since I had captured the instance r before the commit and returned it, that same record had lost its session by the time it was sent back to this function for another edit/commit.
So to fix this, I now query the database for the just-updated record and return it, to keep it in the session and set its is_modified value back to False:
def EditStaff(self, session, record):
    try:
        session.merge(record)
        session.commit()
        r = self.GetStaff(session, record)
        return r
    except:
        return False
Setting expire_on_commit=False also avoided the error, as mentioned above, but I don't think it actually addresses the error, and it could lead to many other issues IMO.
To throw my cause and solution into the ring: I use Flask and Flask-SQLAlchemy to manage all my session stuff. This is fine when I'm doing things via the site, but when doing things via the command line and scripts, you have to ensure that anything doing Flask-y things does it within the Flask application context.
So, in my situation, I needed to get things from the database (using Flask-SQLAlchemy), then render them to templates (using Flask's render_template), then email them (using Flask-Mail).
In code, what I'd done was something like:
def render_obj(db_obj):
    with app.app_context():
        return render_template('template_for_my_db_obj.html', db_obj=db_obj)

def get_renders():
    my_db_objs = MyDbObj.query.all()
    renders = []
    for day, _db_objs in itertools.groupby(my_db_objs, MyDbObj.get_date):
        renders.extend(list(map(render_obj, _db_objs)))
    return renders

def email_report():
    renders = get_renders()
    report = '\n'.join(renders)
    with app.app_context():
        mail.send(Message('Subject', ['me@me.com'], html=report))
(This is basically pseudocode; I was doing other things in the grouping section.)
When I ran it, I'd get through the first _db_obj fine, but then I'd get the error on every run after it.
The culprit? with app.app_context().
Basically, it does a few things when you come out of that context, including kinda freshening up the DB connections. One of the things that comes from that is getting rid of the last session that was around, which was the session that all the my_db_objs were associated with.
There are a few different options for solutions, but I went with a variant of:
def render_obj(db_obj):
    return render_template('template_for_my_db_obj.html', db_obj=db_obj)

def get_renders():
    my_db_objs = MyDbObj.query.all()
    renders = []
    for day, _db_objs in itertools.groupby(my_db_objs, MyDbObj.get_date):
        renders.extend(list(map(render_obj, _db_objs)))
    return renders

def email_report():
    with app.app_context():
        renders = get_renders()
        report = '\n'.join(renders)
        mail.send(Message('Subject', ['me@me.com'], html=report))
Only one with app.app_context(), which wraps them all. The main thing you need to do (if you have a setup like mine) is ensure any DB object you're using is accessed inside whatever app_context you're using. If you do what I did in the first iteration, all your DB objects will lose their session, ending in a DetachedInstanceError like mine.
My problem was a simple oversight:
I created an object, added it and committed it to the DB, and after that I tried to access one of the original object's attributes without refreshing the session via session.refresh(object):
user = UserFactory()
session.add(user)
session.commit()
# missing session.refresh(user) and causing the problem
print(user.name)
As for me (a newbie), I made an indentation mistake and closed the session inside my loop, in which I looped over each row, did some operations, and committed each time.
So for newbies like me: check your code before setting things like expire_on_commit=False; it may lead you into another trap.
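A tiny hedged sketch of that indentation mistake (names invented for illustration):

    session = Session()
    for row in rows:
        process(row)
        session.commit()
        session.close()  # wrong: closing inside the loop leaves later rows with a dead session

    session = Session()
    for row in rows:
        process(row)
        session.commit()
    session.close()  # right: close once, after the loop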
My solution to this error was also a simple oversight, which I don't think any of the other answers cover.
My function fetches object x, modifies it, then returns the original x, because I would like the older version.
Before committing and returning x, I was calling expunge_all, but it was "too late", as the object was already marked dirty.
The solution was simply to expunge the object as early as possible:
# pseudo code
x = session.fetch_x()
# adding the following line fixed it
session.expunge(x)
y = session.update(x)
return x
I had a similar problem in my current project, and this fix worked for me: check your model relationships for the option lazy=True and change it to lazy='dynamic'.
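A minimal sketch of that change on a hypothetical parent/child pair; lazy='dynamic' makes the attribute a query that re-runs on each access, instead of a collection loaded once into the old session:

    from sqlalchemy import Column, Integer, ForeignKey
    from sqlalchemy.ext.declarative import declarative_base
    from sqlalchemy.orm import relationship

    Base = declarative_base()

    class Parent(Base):
        __tablename__ = 'parent'
        id = Column(Integer, primary_key=True)
        # was: children = relationship('Child', lazy=True)
        children = relationship('Child', lazy='dynamic')

    class Child(Base):
        __tablename__ = 'child'
        id = Column(Integer, primary_key=True)
        parent_id = Column(Integer, ForeignKey('parent.id'))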
