sqlalchemy doesn't seem to like __getattr__() - python

In my flask sqlalchemy based app, I have a model like this:
class Foo(object):
    start_date = db.Column(db.DateTime())
It works fine, but when I use the template engine to print the date onto the HTML page with {{ foo.start_date }}, it prints an ugly string like 2015-8-8 00:00:00. Trying to come up with my own format, I thought of something clever (at least until I hit the problem):
class Foo(object):
    start_date = db.Column(db.DateTime())

    def __getattr__(self, name):
        return "{0}-{1}-{2}".format(self.start_date.year....)
However, something internal to SQLAlchemy doesn't like this. When a new object is created, it raises the following exception:
File "<string>", line 6, in __init__
File "/home/wenliang/work/template/flask/local/lib/python2.7/site-packages/sqlalchemy/ext/declarative/base.py", line 526, in _declarative_constructor
setattr(self, k, kwargs[k])
File "/home/wenliang/work/template/flask/local/lib/python2.7/site-packages/sqlalchemy/orm/attributes.py", line 226, in __set__
instance_dict(instance), value, None)
File "/home/wenliang/work/template/flask/local/lib/python2.7/site-packages/sqlalchemy/orm/attributes.py", line 694, in set
state._modified_event(dict_, self, old)
AttributeError: 'NoneType' object has no attribute '_modified_event'
Anyone seen this before?

SQLAlchemy uses __getattr__ internally. If you overload it, the whole SQLAlchemy magic stops working.
The field you are accessing is a datetime object. If you want to format it into a string in any way other than the default rendering, you should use strftime().
https://stackoverflow.com/a/4830620/3929826 gives an example.
https://docs.python.org/2/library/datetime.html#strftime-and-strptime-behavior has the documentation on the format string.
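For example, a minimal sketch of that approach, assuming the model really derives from db.Model as usual in Flask-SQLAlchemy (the start_date_str property name is my own, not from the original post):

class Foo(db.Model):
    start_date = db.Column(db.DateTime())

    @property
    def start_date_str(self):
        # Guard against None so an unsaved/blank date doesn't raise.
        return self.start_date.strftime("%Y-%m-%d") if self.start_date else ""

The template can then use {{ foo.start_date_str }}, and attribute lookup is left alone so SQLAlchemy's instrumentation keeps working. (Jinja2 can also call the method directly: {{ foo.start_date.strftime('%Y-%m-%d') }}.)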

Related

How to override `Model.__init__` and respect `.using(db)` in Django?

I have the following code:
print(f"current database: {self.db}\ninfusate from database: {infusate._state.db}\ntracer from database: {tracer._state.db}")
FCirc.objects.using(self.db).get_or_create(
    serum_sample=sample,
    tracer=tracer,
    element=label.element,
)
That is producing the following output and exception:
current database: validation
infusate from database: validation
tracer from database: validation
Validating FCirc updater: {'update_function': 'is_last_serum_peak_group', 'update_field': 'is_last', 'parent_field': 'serum_sample', 'child_fields': [], 'update_label': 'fcirc_calcs', 'generation': 2}
Traceback (most recent call last):
File "/Users/rleach/PROJECT-local/TRACEBASE/tracebase/.venv/lib/python3.9/site-packages/django/db/models/query.py", line 581, in get_or_create
return self.get(**kwargs), False
File "/Users/rleach/PROJECT-local/TRACEBASE/tracebase/.venv/lib/python3.9/site-packages/django/db/models/query.py", line 435, in get
raise self.model.DoesNotExist(
DataRepo.models.fcirc.FCirc.DoesNotExist: FCirc matching query does not exist.
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/Users/rleach/PROJECT-local/TRACEBASE/tracebase/DataRepo/views/loading/validation.py", line 91, in validate_load_files
call_command(
File "/Users/rleach/PROJECT-local/TRACEBASE/tracebase/.venv/lib/python3.9/site-packages/django/core/management/__init__.py", line 181, in call_command
return command.execute(*args, **defaults)
File "/Users/rleach/PROJECT-local/TRACEBASE/tracebase/.venv/lib/python3.9/site-packages/django/core/management/base.py", line 398, in execute
output = self.handle(*args, **options)
File "/Users/rleach/PROJECT-local/TRACEBASE/tracebase/DataRepo/management/commands/load_animals_and_samples.py", line 134, in handle
loader.load_sample_table(
File "/Users/rleach/PROJECT-local/TRACEBASE/tracebase/DataRepo/utils/sample_table_loader.py", line 426, in load_sample_table
FCirc.objects.using(self.db).get_or_create(
File "/Users/rleach/PROJECT-local/TRACEBASE/tracebase/.venv/lib/python3.9/site-packages/django/db/models/query.py", line 588, in get_or_create
return self.create(**params), True
File "/Users/rleach/PROJECT-local/TRACEBASE/tracebase/.venv/lib/python3.9/site-packages/django/db/models/query.py", line 451, in create
obj = self.model(**kwargs)
File "/Users/rleach/PROJECT-local/TRACEBASE/tracebase/DataRepo/models/maintained_model.py", line 430, in __init__
super().__init__(*args, **kwargs)
File "/Users/rleach/PROJECT-local/TRACEBASE/tracebase/.venv/lib/python3.9/site-packages/django/db/models/base.py", line 485, in __init__
_setattr(self, field.name, rel_obj)
File "/Users/rleach/PROJECT-local/TRACEBASE/tracebase/.venv/lib/python3.9/site-packages/django/db/models/fields/related_descriptors.py", line 229, in __set__
raise ValueError('Cannot assign "%r": the current database router prevents relation between database "%s" and "%s".' % (value, instance._state.db, value._state.db))
ValueError: Cannot assign "<Tracer: lysine-[13C6]>": the current database router prevents this relation.
Cannot assign "<Tracer: lysine-[13C6]>": the current database router prevents this relation.
Knowing that this error relates to foreign relations between records in different databases, as a sanity check, I modified the source of related_descriptors.py to include more info:
raise ValueError('Cannot assign "%r": the current database router prevents relations between database "%s" and "%s".' % (value, instance._state.db, value._state.db))
And that prints:
Cannot assign "<Tracer: lysine-[13C6]>": the current database router prevents relations between database "default" and "validation".
So I was going nuts: why is it ignoring my .using(self.db) call?!
Then I realized, "Oh yeah - I overrode __init__ in a superclass of FCirc! I'm probably circumventing using(db).":
class FCirc(MaintainedModel, HierCachedModel):
    ...

class MaintainedModel(Model):
    ...
Of the two superclass mixins, MaintainedModel seems to be the culprit in this case. It's the only one that overrides __init__. That override looks like this:
def __init__(self, *args, **kwargs):
    """
    This override of the constructor is to prevent developers from explicitly setting values for automatically
    maintained fields. It also performs a one-time validation check of the updater_dicts.
    """
    # ... about 80 lines of code that I'm very confident are unrelated to the problem. See the docstring above. Will paste upon request ...
    # vvv THIS LINE IS LINE 430 FROM maintained_model.py IN THE TRACE ABOVE
    super().__init__(*args, **kwargs)
How do I pass along self.db in the super constructor?
I figured it out! Another developer had added calls to full_clean (which I have a separate question about) in derived classes of QuerySet for a few of the models, e.g.:
class TracerQuerySet(models.QuerySet):
    ...

class Tracer(MaintainedModel):
    objects = TracerQuerySet().as_manager()
    ...
Those calls to full_clean appear to only work on the default database. At least, I haven't been able to figure out how to tell full_clean to operate on our validation database.
I changed the three or four calls I found in QuerySet-derived classes to call full_clean only if the current database is the default database:
if self._db == settings.DEFAULT_DB:
    tracer.full_clean()
After that, I no longer get the exception on the FCirc get_or_create call and the database operated on is the validation database.
I was curious, though, whether I had any other database operations being assigned to the default database, so I added a debug print to _router_func in django/db/utils.py:
def _router_func(action):
    def _route_db(self, model, **hints):
        chosen_db = None
        for router in self.routers:
            try:
                method = getattr(router, action)
            except AttributeError:
                # If the router doesn't have a method, skip to the next one.
                pass
            else:
                chosen_db = method(model, **hints)
                if chosen_db:
                    return chosen_db
        instance = hints.get('instance')
        if instance is not None and instance._state.db:
            return instance._state.db
        ###
        ### See what code is getting the default database
        ###
        print(f"Returning default database. Trace:")
        traceback.print_stack()
        return DEFAULT_DB_ALIAS
    return _route_db
That revealed two other places that were querying the default database when I knew they should only be querying the validation database.
This one made sense, because it's a fresh database query:
def tracer_labeled_elements(self):
    """
    This method returns a unique list of the labeled elements that exist among the tracers.
    """
    from DataRepo.models.tracer_label import TracerLabel

    return list(
        TracerLabel.objects.filter(tracer__infusates__id=self.id)
        .order_by("element")
        .distinct("element")
        .values_list("element", flat=True)
    )
So I modified it to use the current database (based on the instance) instead:
def tracer_labeled_elements(self):
    """
    This method returns a unique list of the labeled elements that exist among the tracers.
    """
    from DataRepo.models.tracer_label import TracerLabel

    db = self.get_using_db()
    return list(
        TracerLabel.objects.using(db)
        .filter(tracer__infusates__id=self.id)
        .order_by("element")
        .distinct("element")
        .values_list("element", flat=True)
    )
And I added this to my MaintainedModel class:
def get_using_db(self):
    """
    If an instance method makes an unrelated database query and a specific database is currently in use, this
    method will return that database to be used in the fresh query's `.using()` call. Otherwise, django's code
    base will set the ModelState to the default database, which may differ from where the current model object
    came from.
    """
    db = settings.DEFAULT_DB
    if hasattr(self, "_state") and hasattr(self._state, "db"):
        db = self._state.db
    return db
That works, but I don't like that it requires the developer to include code in the derived class that is specific to the superclass. Plus, settings.DEFAULT_DB is project-specific, so it creates an inter-dependency.
What I really need to do is change the default database that the django base code sets when no database is explicitly specified...
My bet is that that's easy to do. I've just never done it before. I'll start looking into that and I'll probably edit this answer again soon.
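Django's documented hook for exactly that is a database router. Here is a minimal sketch, under the assumption that pinning a "current" database per thread fits this app (the set_current_db helper and the CurrentDBRouter name are mine, not part of this project):

import threading

_local = threading.local()

def set_current_db(db_alias):
    # Call once (e.g. at the top of the management command) to pin all
    # unrouted queries on this thread to db_alias.
    _local.db = db_alias

class CurrentDBRouter:
    """Route reads and writes to the pinned database; None means 'no opinion'."""

    def db_for_read(self, model, **hints):
        return getattr(_local, "db", None)

    def db_for_write(self, model, **hints):
        return getattr(_local, "db", None)

Registered via DATABASE_ROUTERS = ["path.to.CurrentDBRouter"] in settings, the _route_db loop above would return the pinned alias before ever falling through to DEFAULT_DB_ALIAS.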
Previous "answer"
Alright, this is not really an answer, because I don't want developers to have to jump through these ridiculous hoops to use get_or_create on a model that inherits from MaintainedModel, but it solves the problem: it prevents the exception, and everything is applied to the correct database.
Maybe this will give someone else a hint as to how to correctly solve the problem inside the override of the __init__ constructor in MaintainedModel:
from django.db.models.base import ModelState

ms = ModelState
setattr(ms, "db", self.db)
print(f"current database: {self.db}\ninfusate from database: {infusate._state.db}\ntracer from database: {tracer._state.db}\nsample from database: {sample._state.db}\n_state type: {type(tracer._state)} db type: {type(tracer._state.db)}")
using_obj = FCirc.objects.using(self.db)
setattr(using_obj, "_state", ms)
print(f"using_obj database: {using_obj._state.db}")
using_obj.get_or_create(
    serum_sample=sample,
    tracer=tracer,
    element=label.element,
)
The output is:
current database: validation
infusate from database: validation
tracer from database: validation
sample from database: validation
_state type: <class 'django.db.models.base.ModelState'> db type: <class 'str'>
using_obj database: validation

Does Context/Scoping of a SQLAlchemy Session Require Non-Automatic Object/Attribute Expiration?

The Situation: Simple Class with Basic Attributes
In an application I'm working on, instances of a particular class are persisted at the end of their lifecycle, and while they are not subsequently modified, their attributes may need to be read. For example, the end_time of the instance, or its ordinal position relative to other instances of the same class (the first instance initialized gets value 1, the next has value 2, etc.).
class Foo(object):
    def __init__(self, position):
        self.start_time = time.time()
        self.end_time = None
        self.position = position
        # ...

    def finishFoo(self):
        self.end_time = time.time()
        self.duration = self.end_time - self.start_time
        # ...
The Goal: Persist an Instance using SQLAlchemy
Following what I believe to be a best practice - using a scoped SQLAlchemy Session, as suggested here, by way of contextlib.contextmanager - I save the instance in a newly created Session which immediately commits. The very next line references the newly persistent instance by mentioning it in a log record, which throws a DetachedInstanceError because the attribute it's referencing expired when the Session committed.
class Database(object):
    # ...

    @contextlib.contextmanager
    def scopedSession(self):
        session = self.sessionmaker()
        try:
            yield session
            session.commit()
        except:
            session.rollback()
            logger.warn("blah blah blah...")
        finally:
            session.close()

    # ...

    def saveMyFoo(self, foo):
        with self.scopedSession() as sql_session:
            sql_session.add(foo)
        logger.info("Foo number {0} finished at {1} has been saved."
                    "".format(foo.position, foo.end_time))
        ## Here the DetachedInstanceError is raised
Two Known Possible Solutions: No Expiring or No Scope
I know I can set the expire_on_commit flag to False to circumvent this issue, but I'm concerned this is a questionable practice -- automatic expiration exists for a reason, and I'm hesitant to arbitrarily lump all ORM-tied classes into a non-expiry state without sufficient reason and understanding behind it. Alternatively, I can forget about scoping the Session and just leave the transaction pending until I explicitly commit at a (much) later time.
So my question boils down to this:
Is a scoped/context-managed Session being used appropriately in the case I described?
Is there an alternative way to reference expired attributes that is a better/more preferred approach? (e.g. using a property to wrap the steps of catching expiration/detached exceptions or to create & update a non-ORM-linked attribute that "mirrors" the ORM-linked expired attribute)
Am I misunderstanding or misusing the SQLAlchemy Session and ORM? It seems contradictory to me to use a contextmanager approach when that precludes the ability to subsequently reference any of the persisted attributes, even for a task as simple and broadly applicable as logging.
The Actual Exception Traceback
The example above is simplified to focus on the question at hand, but should it be useful, here's the actual exact traceback produced. The issue arises when str.format() is run in the logger.debug() call, which tries to execute the Set instance's __repr__() method.
Unhandled Error
Traceback (most recent call last):
File "/opt/zenith/env/local/lib/python2.7/site-packages/twisted/python/log.py", line 73, in callWithContext
return context.call({ILogContext: newCtx}, func, *args, **kw)
File "/opt/zenith/env/local/lib/python2.7/site-packages/twisted/python/context.py", line 118, in callWithContext
return self.currentContext().callWithContext(ctx, func, *args, **kw)
File "/opt/zenith/env/local/lib/python2.7/site-packages/twisted/python/context.py", line 81, in callWithContext
return func(*args,**kw)
File "/opt/zenith/env/local/lib/python2.7/site-packages/twisted/internet/posixbase.py", line 614, in _doReadOrWrite
why = selectable.doRead()
--- <exception caught here> ---
File "/opt/zenith/env/local/lib/python2.7/site-packages/twisted/internet/udp.py", line 248, in doRead
self.protocol.datagramReceived(data, addr)
File "/opt/zenith/operations/network.py", line 311, in datagramReceived
self.reactFunction(datagram, (host, port))
File "/opt/zenith/operations/schema_sqlite.py", line 309, in writeDatapoint
logger.debug("Data written: {0}".format(dataz))
File "/opt/zenith/operations/model.py", line 1770, in __repr__
repr_info = "Set: {0}, User: {1}, Reps: {2}".format(self.setNumber, self.user, self.repCount)
File "/opt/zenith/env/local/lib/python2.7/site-packages/sqlalchemy/orm/attributes.py", line 239, in __get__
return self.impl.get(instance_state(instance), dict_)
File "/opt/zenith/env/local/lib/python2.7/site-packages/sqlalchemy/orm/attributes.py", line 589, in get
value = callable_(state, passive)
File "/opt/zenith/env/local/lib/python2.7/site-packages/sqlalchemy/orm/state.py", line 424, in __call__
self.manager.deferred_scalar_loader(self, toload)
File "/opt/zenith/env/local/lib/python2.7/site-packages/sqlalchemy/orm/loading.py", line 563, in load_scalar_attributes
(state_str(state)))
sqlalchemy.orm.exc.DetachedInstanceError: Instance <Set at 0x1c96b90> is not bound to a Session; attribute refresh operation cannot proceed
1.
Most likely, yes. It's used correctly insofar as it correctly saves data to the database. However, because your transaction only spans the update, you may run into race conditions when updating the same row. Depending on the application, that can be okay.
2.
Not expiring attributes is the right way to do it. The reason expiration is on by default is that it ensures even naive code works correctly. If you are careful, it shouldn't be a problem.
3.
It's important to separate the concept of the transaction from the concept of the session. The contextmanager does two things: it maintains the session as well as the transaction. The lifecycle of each ORM instance is limited to the span of each transaction. This is so you can assume the state of the object is the same as the state of the corresponding row in the database. This is why the framework expires attributes when you commit, because it can no longer guarantee the state of the values after the transaction commits. Hence, you can only access the instance's attributes while a transaction is active.
After you commit, any subsequent attribute you access will result in a new transaction being started so that the ORM can once again guarantee the state of the values in the database.
But why do you get an error? This is because your session is gone, so the ORM has no way of starting a transaction. If you do a session.commit() in the middle of your context manager block, you'll notice a new transaction being started if you access one of the attributes.
Well, what if I want to just access the previously-fetched values? Then, you can ask the framework not to expire those attributes.
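Concretely, a minimal sketch of that last option, assuming the sessionmaker setup from the question (engine stands in for whatever the Database class binds to):

from sqlalchemy.orm import sessionmaker

# Instances keep their loaded attribute values after commit,
# at the cost of possibly reading values that have since changed in the database.
Session = sessionmaker(bind=engine, expire_on_commit=False)

session = Session()
session.add(foo)
session.commit()
session.close()
print(foo.end_time)  # no DetachedInstanceError: the pre-commit value is still cached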

Can I patch a static method in python?

I have a class in Python that contains a static method. I want to mock.patch it in order to see whether it was called. When trying to do so I get an error:
AttributeError: path.to.A does not have the attribute 'foo'
My setup can be simplified to:
class A:
    @staticmethod
    def foo():
        # bla bla
        pass
Now the test code that fails with the error:
def test():
    with mock.patch.object("A", "foo") as mock_helper:
        mock_helper.return_value = ""
        A.some_other_static_function_that_could_call_foo()
        assert mock_helper.call_count == 1
You can always use patch as a decorator, which is my preferred way of patching things:
from mock import patch

@patch('absolute.path.to.class.A.foo')
def test(mock_foo):
    mock_foo.return_value = ''
    # ... continue with test here
EDIT: Your error seems to hint that you have a problem elsewhere in your code. Possibly some signal or trigger that requires this method is what's failing?
I was getting that same error message when trying to patch a method using the @patch decorator.
Here is the full error I got.
Traceback (most recent call last):
File "/usr/local/lib/python3.6/site-packages/tornado/testing.py", line 136, in __call__
result = self.orig_method(*args, **kwargs)
File "/usr/local/lib/python3.6/unittest/mock.py", line 1171, in patched
arg = patching.__enter__()
File "/usr/local/lib/python3.6/unittest/mock.py", line 1243, in __enter__
original, local = self.get_original()
File "/usr/local/lib/python3.6/unittest/mock.py", line 1217, in get_original
"%s does not have the attribute %r" % (target, name)
AttributeError: <module 'py-repo.models.Device' from
'/usr/share/projects/py-repo/models/Device.py'> does not have the attribute 'get_device_from_db'
What I ended up doing to fix this was changing the patch decorator I used from
@patch('py-repo.models.Device.get_device_from_db')
to
@patch.object(DeviceModel, 'get_device_from_db')
I really wish I could explain further why that was the issue, but I'm still pretty new to Python myself. The patch documentation was especially helpful in figuring out what was available to work with. Important: I should note that get_device_from_db uses the @staticmethod decorator, which may be changing things. Hope it helps though.
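For reference, here is a minimal self-contained sketch of patching a static method with patch.object; the Calculator class is invented for illustration:

from unittest import mock

class Calculator:
    @staticmethod
    def add(a, b):
        return a + b

def test_add_is_mocked():
    with mock.patch.object(Calculator, "add") as mock_add:
        mock_add.return_value = 42
        assert Calculator.add(1, 2) == 42  # the mock answers, not the real method
        assert mock_add.call_count == 1

Passing the class object itself sidesteps the string-target lookup that produced the AttributeError above.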
What worked for me:
@patch.object(RedisXComBackend, '_handle_conn')
def test_xcoms(self, mock_method: MagicMock):
    mock_method.return_value = fakeredis.FakeStrictRedis()
'_handle_conn' (a static method) looks like this:
@staticmethod
def _handle_conn():
    redis_hook = RedisHook()
    conn: Redis = redis_hook.get_conn()

How to use SQLAlchemy reflection with Sybase? [answer: turns out it's not supported!]

I'm trying to learn more about the .egg concept and overriding methods in Python. Here's the error message I'm receiving:
Traceback (most recent call last):
File "C:/local/work/scripts/plmr/plmr_db.py", line 42, in <module>
insp.reflecttable(reo_daily_table, column_list)
File "build\bdist.win32\egg\sqlalchemy\engine\reflection.py", line 370, in reflecttable
File "build\bdist.win32\egg\sqlalchemy\engine\reflection.py", line 223, in get_columns
File "build\bdist.win32\egg\sqlalchemy\engine\base.py", line 260, in get_columns
NotImplementedError
Here's the specific function from base.py:
def get_columns(self, connection, table_name, schema=None, **kw):
    """Return information about columns in `table_name`.

    Given a :class:`.Connection`, a string `table_name`, and an
    optional string `schema`, return column information as a list
    of dictionaries with these keys:

    name
      the column's name

    type
      [sqlalchemy.types#TypeEngine]

    nullable
      boolean

    default
      the column's default value

    autoincrement
      boolean

    sequence
      a dictionary of the form
          {'name' : str, 'start' : int, 'increment': int}

    Additional column attributes may be present.
    """
    raise NotImplementedError()
So my question is - do I override this function by writing a new method in my main module? Or am I missing a step somewhere along the way with my imports? Or am I just completely off track here?
Any and all help is appreciated :)
edit: adding my code
import sys

from sqlalchemy import create_engine, select, Table, MetaData
from sqlalchemy.engine import reflection

dbPath = 'connection_string'
engine = create_engine(dbPath, echo=True)
connection = engine.connect()

# reflect tables into memory
meta = MetaData()
reo_daily_table = Table('reo_daily', meta)
insp = reflection.Inspector.from_engine(engine)
column_list = [...]
insp.reflecttable(reo_daily_table, column_list)
connection.close()
EDIT:
The Sybase dialect currently lacks the ability to reflect tables.
You have misunderstood completely. You do not need to subclass anything, and this problem has nothing to do with eggs and .ini files at all.
You are not supposed to instantiate Inspector yourself. If you read the SQLAlchemy docs carefully, you will notice that you are not supposed to call the Inspector constructor directly; instead you should write:
insp = reflection.Inspector.from_engine(engine)
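For contrast, a minimal sketch of the same flow on a dialect that does implement get_columns (SQLite here, with an invented table; this targets the old SQLAlchemy API the question uses, where engine.execute and Inspector.from_engine still exist):

from sqlalchemy import create_engine
from sqlalchemy.engine import reflection

engine = create_engine('sqlite:///:memory:')
engine.execute("CREATE TABLE reo_daily (id INTEGER PRIMARY KEY, price REAL)")

insp = reflection.Inspector.from_engine(engine)
# Works on SQLite; on Sybase this is the call that ends in NotImplementedError.
for col in insp.get_columns('reo_daily'):
    print(col['name'], col['type'], col['nullable'])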

AppEngine -> "AttributeError: 'unicode' object has no attribute 'has_key'" when using blobstore

There have been a number of other questions on AttributeErrors here, but I've read through them and am still not sure what's causing the type mismatch in my specific case.
Thanks in advance for any thoughts on this.
My model:
class Object(db.Model):
    notes = db.StringProperty(multiline=False)
    other_item = db.ReferenceProperty(Other)
    time = db.DateTimeProperty(auto_now_add=True)
    new_files = blobstore.BlobReferenceProperty(required=True)
    email = db.EmailProperty()
    is_purple = db.BooleanProperty()
My BlobstoreUploadHandler:
class FormUploadHandler(blobstore_handlers.BlobstoreUploadHandler):
    def post(self):
        try:
            note = self.request.get('notes')
            email_addr = self.request.get('email')
            o = self.request.get('other')
            upload_file = self.get_uploads()[0]
            # Save the object record
            new_object = Object(notes=note,
                                other=o,
                                email=email_addr,
                                is_purple=False,
                                new_files=upload_file.key())
            db.put(new_object)
            # Redirect to let user know everything's peachy.
            self.redirect('/upload_success.html')
        except:
            self.redirect('/upload_failure.html')
And every time I submit the form that uploads the file, it throws the following exception:
ERROR 2010-10-30 21:31:01,045 __init__.py:391] 'unicode' object has no attribute 'has_key'
Traceback (most recent call last):
File "/home/user/Public/dir/google_appengine/google/appengine/ext/webapp/__init__.py", line 513, in __call__
handler.post(*groups)
File "/home/user/Public/dir/myapp/myapp.py", line 187, in post
new_files=upload_file.key())
File "/home/user/Public/dir/google_appengine/google/appengine/ext/db/__init__.py", line 813, in __init__
prop.__set__(self, value)
File "/home/user/Public/dir/google_appengine/google/appengine/ext/db/__init__.py", line 3216, in __set__
value = self.validate(value)
File "/home/user/Public/dir/google_appengine/google/appengine/ext/db/__init__.py", line 3246, in validate
if value is not None and not value.has_key():
AttributeError: 'unicode' object has no attribute 'has_key'
What perplexes me most is that this code is nearly straight out of the documentation, and jives with other examples of blob upload handlers I've found online in tutorials as well.
I've run --clear-datastore to ensure that any changes I've made to the DB schema aren't causing problems, and have tried casting upload_file as all sorts of things to see if it would appease Python - any ideas on what I've screwed up?
Edit: I've found a workaround, but it's suboptimal.
Altering the UploadHandler to this instead resolves the issue:
...
# Save the object record
new_object = Object()
new_object.notes = note
new_object.other = o
new_object.email = email_addr
new_object.is_purple = False
new_object.new_files = upload_file.key()
db.put(new_object)
...
I made this switch after noticing that commenting out the files line threw the same issues for the other line, and so on. This isn't an optimal solution, though, as I can't enforce validation this way (in the model, if I set anything as required, I can't declare an empty entity like above without throwing an exception).
Any thoughts on why I can't declare the entity and populate it at the same time?
You're passing in o as the value of other_item (in your sample code, you call it other, but I presume that's a typo). o is a string fetched from the request, though, and the model definition specifies that it's a ReferenceProperty, so it should either be an instance of the Other class, or a db.Key object.
If o is supposed to be a stringified key, pass in db.Key(o) instead, to deserialize it.
Object is a really terrible name for a datastore class (or any class, really), by the way - the Python base object is called object, and that's only one capitalized letter away - very easy to mistake.
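A minimal sketch of that fix applied to the question's handler, assuming the form posts a stringified key under 'other':

o = self.request.get('other')
new_object = Object(notes=note,
                    other_item=db.Key(o),  # deserialize the key string; use the real property name
                    email=email_addr,
                    is_purple=False,
                    new_files=upload_file.key())
db.put(new_object)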
The has_key error is due to the ReferenceProperty other_item. You are most likely passing in a plain string for it, when App Engine's API expects an entity or key (validation calls the value's has_key() method). To get around this, convert the string to a key before assigning it.
[caveat lector: I know zilch about "google_app_engine"]
The message indicates that it is expecting a dict (the only known object that has a has_key attribute) or a work-alike object, not the unicode object that you supplied. Perhaps you should be passing upload_file, not upload_file.key() ...
