So I'm a Flask/SQLAlchemy newbie, but this seems like it should be pretty simple. Yet for the life of me I can't get it to work, and I can't find any documentation for this anywhere online. I have a somewhat complex query that returns a list of database objects.
items = db.session.query(X, func.count(Y.x_id).label('total')).filter(X.size >= size).\
    outerjoin(Y, X.x_id == Y.x_id).group_by(X.x_id).order_by('total ASC').\
    limit(20).all()
After I get this list of items, I want to loop through it and update some property on each item.
for it in items:
    it.some_property = 'xyz'
    db.session.commit()
However, what's happening is that I'm getting an error:
    it.some_property = 'xyz'
AttributeError: 'result' object has no attribute 'some_property'
I'm not crazy. I'm positive that the property does exist on model X, which is subclassed from db.Model. Something about the query is preventing me from accessing the attributes even though I can clearly see they exist in the debugger. Any help would be appreciated.
class X(db.Model):
    x_id = db.Column(db.Integer, primary_key=True)
    size = db.Column(db.Integer, nullable=False)
    oords = db.relationship('Oords', lazy=True, backref=db.backref('x', lazy='joined'))

    def __init__(self, size):
        self.size = size
Given your example, your result objects do not have the attribute some_property, just like the exception says. (Neither do model X objects, but I hope that's just an error in the example.)
They have the explicitly labeled total as the second column and the model X instance as the first. If you mean to access a property of the X instance, fetch that from the result row first, either by index or by the implicit label X:
items = db.session.query(X, func.count(Y.x_id).label('total')).\
    filter(X.size >= size).\
    outerjoin(Y, X.x_id == Y.x_id).\
    group_by(X.x_id).\
    order_by('total ASC').\
    limit(20).\
    all()
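(Side note: on newer SQLAlchemy versions, passing a raw string like 'total ASC' to order_by is deprecated. A minimal variant that avoids that by keeping a handle on the label:)

total = func.count(Y.x_id).label('total')
items = db.session.query(X, total).\
    filter(X.size >= size).\
    outerjoin(Y, X.x_id == Y.x_id).\
    group_by(X.x_id).\
    order_by(total.asc()).\
    limit(20).\
    all()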
# Unpack a result object
for x, total in items:
    x.some_property = 'xyz'

# Please commit after *all* the changes.
db.session.commit()
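Equivalently, you can leave the result rows intact and reach the X instance by position or by the implicit label (the class name):

# Same loop without unpacking; rows expose the entity by index
# and by its implicit label X.
for row in items:
    row[0].some_property = 'xyz'
    # or: row.X.some_property = 'xyz'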
As noted in the other answer, you could use bulk operations as well, though your limit(20) will make that a lot more challenging.
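If you do want a bulk operation despite the limit, one hedged sketch is to collect the ids from the limited query first and then update just those rows (this assumes some_property is a mapped column on X):

# Gather the ids the limited query matched, then bulk-update only those.
ids = [x.x_id for x, total in items]
db.session.query(X).filter(X.x_id.in_(ids)).update(
    {'some_property': 'xyz'}, synchronize_session=False)
db.session.commit()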
You should use the update function, like this:
from sqlalchemy import update
stmt = update(users).where(users.c.id == 5).\
    values(name='user #5')
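(A statement built this way still has to be executed; a minimal sketch, assuming an engine connection conn is at hand:)

# Execute the statement; with the ORM you would use
# session.execute(stmt) followed by session.commit().
conn.execute(stmt)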
Or:
session = self.db.get_session()
session.query(Organisation).\
    filter_by(id_organisation=organisation.id_organisation).\
    update(
        {
            "name": organisation.name,
            "type": organisation.type,
        },
        synchronize_session=False)
session.commit()
session.close()
The SQLAlchemy docs: http://docs.sqlalchemy.org/en/latest/core/dml.html
Related
I'm running into an issue using mongoengine. A raw query that works in Compass isn't working via __raw__ in mongoengine. I'd like to rewrite it using mongoengine's methods, but I'd also like to understand why it isn't working with __raw__ either.
I'm querying an embedded document list field that uses inheritance. The query is: "give me all sequences that have a 'TypeA' Assignment".
My schema:
class Sample(EmbeddedDocument):
    name = StringField()

class Assignment(EmbeddedDocument):
    name = StringField()
    meta = {'allow_inheritance': True}

class TypeA(Assignment):
    pass

class TypeB(Assignment):
    other_field = StringField()

class Sequence(Document):
    seq = StringField(required=True)
    samples = EmbeddedDocumentListField(Sample)
    assignments = EmbeddedDocumentListField(Assignment)
Writing {'assignments._cls': 'TypeA'} into Compass returns a list, but in mongoengine I get an empty result:
from mongo_objects import Sequence

def get_samples_assigned_as_class(cls: str):
    query_raw = Sequence.objects(__raw__={'assignments._cls': cls})  # raw query, fails
    # query2 = Sequence.objects(assignments___cls=cls)  # First attempt, failed
    # query3 = Sequence.objects.get().assignments.filter(cls=cls)  # Second attempt, also failed. Didn't like that it queried everything first
    print(query_raw)  # empty list; iterating does nothing

get_samples_assigned_as_class('TypeA')
"Assignments" is a list because one sequence may have multiples of the same class. An in depth awnser on how to query these lists for categorical information would be ideal, as I'm not sure how to properly go about it. I'm mostly filtering on the inheritence _cls, but eventually I'd like to do nested queries (cls : TypeA, sample : Sample_1)
Thanks
I have an object stored in Mongo that has a list of reference fields. In a restplus app I need to parse this list of objects and map them into a JSON doc to return to a client.
# Classes I have saved in Mongo (reordered so each reference resolves)
class Info(Document):
    name = StringField()
    foo = StringField()
    bar = StringField()

class InfoHolder(Document):
    thing_id = StringField()
    thing_i_care_about = ReferenceField(Info)

class ThingWithList(Document):
    list_of_objects = ListField(ReferenceField(InfoHolder))
I am finding iterating through the list to be very slow. I guess this is because I am having to do another database query every time I dereference children of objects in the list.
Simple (but rubbish) method:
info_to_return = []
thing = ThingWithList.get_from_id('thingsId')
for o in thing.list_of_objects:
    info = {
        'id': o.id,
        'name': o.thing_i_care_about.name,
        'foo': o.thing_i_care_about.foo,
        'bar': o.thing_i_care_about.bar,
    }
    info_to_return.append(info)
return info_to_return
I thought I would be able to solve this by using select_related, which sounds like it should do the dereferencing for me N levels deep, so that I only do one big Mongo call rather than several per iteration. But when I add
thing.select_related(3)
it seems to have no effect. Have I just misunderstood what this function is for? How else could I speed up my query?
I am trying to use SQLAlchemy to insert some records into a database with a foreign key, but the foreign key does not seem to be changing as I would expect.
I have a simple 1-M relationship between two objects, Account and Transaction. The Account objects have been inserted into the database fine, but when I try to iterate through these accounts and add transaction objects to them, the foreign key for all of the transactions is the id of the last account being iterated through.
Can anyone help me figure out why this is? It has been driving me crazy!
My account and transaction objects:
class Account(db.Model):
    number = db.Column(db.String(80), primary_key=True)
    bsb = db.Column(db.String(80), primary_key=True)
    name = db.Column(db.String(80))
    description = db.Column(db.String(80))

class Transaction(db.Model):
    id = db.Column(db.Integer, primary_key=True)
    account_id = db.Column(db.String(80), db.ForeignKey('account.number'))
    amount = db.Column(db.Float(precision=2))
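(For acc.transactions in the snippet below to work, Account presumably also declares a relationship; something along these lines, which is an assumption since it is not shown in the question:)

# Assumed addition to Account, since the code below assigns acc.transactions:
transactions = db.relationship('Transaction', backref='account')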
Here is where I am iterating through the accounts in the DB and trying to add transaction objects to them:
def Sync_Transactions():
    # The following definitely returns two account objects from the DB
    accounts = [db.session.query(Account).filter(Account.number == '999999999').first(),
                db.session.query(Account).filter(Account.number == '222222222').first()]
    for acc in accounts:
        # The following parses a CSV file for the account and returns transaction objects as a list
        transactions = myLib.ParseCsvFile('C:\\transactions_%s.csv' % (acc.number))
        acc.transactions = transactions
        db.session.merge(acc)
    db.session.commit()
The above works fine if only one account is retrieved from the DB. As soon as I start iterating over multiple accounts, all of the transactions get given the same foreign key (the key of the last account, 222222222 in the above case).
Here is where the issue is:
def ParseCsvFile(self, fileLocation, existingTransactionList=[]):
    with open(fileLocation, 'rb') as f:
        reader = csv.DictReader(f, ['Date', 'TransactionAmount', 'C', 'D', 'TransactionType', 'TransactionDescription', 'Balance'])
        for row in reader:
            if not row['TransactionDescription'] == '':
                existingTransactionList.append(Transaction(
                    float(row['TransactionAmount'])
                ))
    return existingTransactionList
For some reason, having the parameter existingTransactionList causes the issue. If I change the above code to the following, the problem goes away, but I still don't understand why; I am guessing it's due to my lack of Python knowledge :)
def ParseCsvFile(self, fileLocation, existingTransactionList=[]):
    existingTransactionList = []
    with open(fileLocation, 'rb') as f:
        reader = csv.DictReader(f, ['Date', 'TransactionAmount', 'C', 'D', 'TransactionType', 'TransactionDescription', 'Balance'])
        for row in reader:
            if not row['TransactionDescription'] == '':
                existingTransactionList.append(Transaction(
                    float(row['TransactionAmount'])
                ))
    return existingTransactionList
The reason I have the existingTransactionList variable as a parameter is that I will eventually want to pass in a list of existing transactions, so that only the unique ones get returned, using something like the following:
transactionList = list(set(existingTransactionList))
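(One caveat with that plan: for list(set(...)) to collapse duplicates, Transaction needs value-based __eq__ and __hash__, since objects default to identity comparison. A sketch, assuming purely for illustration that the amount identifies a duplicate; adjust to your real key:)

class Transaction(db.Model):
    # ... columns as above ...

    def __eq__(self, other):
        # Two transactions count as duplicates when their amounts match.
        return isinstance(other, Transaction) and self.amount == other.amount

    def __hash__(self):
        return hash(self.amount)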
The issue is that your transactions accumulate in a single shared list, so they all end up attached to the last account. Change:
def ParseCsvFile(self, fileLocation, existingTransactionList=[]):
to
def ParseCsvFile(self, fileLocation, existingTransactionList=None):
    if existingTransactionList is None:
        existingTransactionList = []
Why does this happen?
Python evaluates a function's default arguments only once, when the def statement is executed, and binds those values to the function at that point. That means that rather than every invocation of ParseCsvFile being given a new list, every call to ParseCsvFile uses the same list.
To see what is going on, try the following on the command line:
def test_mutable_defaults(value, x=[], y={}):
    print("x's memory id this run is", id(x))
    print("Contents of x", x)
    print("y's memory id this run is", id(y))
    print("Contents of y", y)
    x.append(value)
    y[value] = value
    return x, y
then use this function:
test_mutable_defaults(1)
test_mutable_defaults(2)
test_mutable_defaults(3)
If Python re-evaluated x and y on each call, you would see different memory IDs for each call, and x would be [1], then [2], then [3]. Because Python only evaluates x and y once, when it first executes the def for test_mutable_defaults, you will see the same memory ID for x on each call (and likewise for y), and x will be [1], then [1, 2], and then [1, 2, 3].
I've just introspected a pretty nasty schema from a CRM app with SQLAlchemy. All of the tables have a deleted column, and I wanted to auto-filter all entities and relations flagged as deleted. Here's what I came up with:
class CustomizableQuery(Query):
    """An overridden sqlalchemy.orm.query.Query to filter entities.

    Filters itself by BinaryExpressions found in :attr:`CONDITIONS`.
    """
    CONDITIONS = []

    def __init__(self, mapper, session=None):
        super(CustomizableQuery, self).__init__(mapper, session)
        for cond in self.CONDITIONS:
            self._add_criterion(cond)

    def _add_criterion(self, criterion):
        criterion = self._adapt_clause(criterion, False, True)
        if self._criterion is not None:
            self._criterion = self._criterion & criterion
        else:
            self._criterion = criterion
And it's used like this:
class UndeletedContactQuery(CustomizableQuery):
    CONDITIONS = [contacts.c.deleted != True]

    def by_email(self, email_address):
        return EmailInfo.query.by_module_and_address('Contacts', email_address).contact

    def by_username(self, uname):
        return self.filter_by(twod_username_c=uname).one()

class Contact(object):
    query = session.query_property(UndeletedContactQuery)

Contact.query.by_email('someone@some.com')
EmailInfo is the class that's mapped to the join table between emails and the other Modules that they're related to.
Here's an example of a mapper:
contacts_map = mapper(Contact, join(contacts, contacts_cstm), {
    '_emails': dynamic_loader(EmailInfo,
                              foreign_keys=[email_join.c.bean_id],
                              primaryjoin=contacts.c.id == email_join.c.bean_id,
                              query_class=EmailInfoQuery),
})

class EmailInfoQuery(CustomizableQuery):
    CONDITIONS = [email_join.c.deleted != True]
    # More methods here
This gives me what I want, in that I've filtered out all deleted Contacts. I can also use this as the query_class argument to dynamic_loader in my mappers. However...
Is there a better way to do this? I'm not really happy poking around in the internals of a complicated class like Query.
Has anyone solved this in a different way that they can share?
You can map to a select. Like this:
mapper(EmailInfo, select([email_join], email_join.c.deleted == False))
I'd consider seeing if it was possible to create views for these tables that filter out the deleted elements, and then you might be able to map directly to that view instead of the underlying table, at least for querying operations. However I've never tried this myself!
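A rough sketch of that view idea (untested, as noted; it assumes a contacts_active view has already been created in the database, and that metadata and Contact are the objects from the question):

# One-time DDL, in the database:
#   CREATE VIEW contacts_active AS
#       SELECT * FROM contacts WHERE deleted != TRUE;
from sqlalchemy import Table
from sqlalchemy.orm import mapper

# Reflect the view like an ordinary table and map the class to it.
contacts_active = Table('contacts_active', metadata, autoload=True)
mapper(Contact, contacts_active)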
I have some problems setting up a dictionary collection in Python's SQLAlchemy:
I am using declarative definition of tables. I have Item table in 1:N relation with Record table. I set up the relation using the following code:
_Base = declarative_base()

class Record(_Base):
    __tablename__ = 'records'
    item_id = Column(String(M_ITEM_ID), ForeignKey('items.id'))
    id = Column(String(M_RECORD_ID), primary_key=True)
    uri = Column(String(M_RECORD_URI))
    name = Column(String(M_RECORD_NAME))

class Item(_Base):
    __tablename__ = 'items'
    id = Column(String(M_ITEM_ID), primary_key=True)
    records = relation(Record,
                       collection_class=column_mapped_collection(Record.name),
                       backref='item')
Now I want to work with the Items and Records. Let's create some objects:
i1 = Item(id='id1')
r = Record(id='mujrecord')
And now I want to associate these objects using the following code:
i1.records['source_wav'] = r
but the Record r doesn't have its name attribute (the dictionary key column) set. Is there any way to ensure this automatically? (I know that setting name when creating the Record works, but that doesn't feel right to me.)
Many thanks
You want something like this:
from sqlalchemy.orm import validates

class Item(_Base):
    [...]

    @validates('records')
    def validate_record(self, key, record):
        assert record.name is not None, "Record fails validation, must have a name"
        return record
With this, you get the desired validation:
>>> i1 = Item(id='id1')
>>> r = Record(id='mujrecord')
>>> i1.records['source_wav'] = r
Traceback (most recent call last):
[...]
AssertionError: Record fails validation, must have a name
>>> r.name = 'foo'
>>> i1.records['source_wav'] = r
>>>
I can't comment yet, so I'm just going to write this as a separate answer:
from sqlalchemy.orm import validates

class Item(_Base):
    [...]

    @validates('records')
    def validate_record(self, key, record):
        record.name = key
        return record
This is basically a copy of Gunnlaugur's answer, but abusing the validates decorator to do something more useful than exploding.
You have:
backref='item'
Is this a typo for backref='name'?