Grabbing multiple documents with flask-mongoengine - python

I'm using Flask-MongoEngine in my Python application, and I'm trying to grab a list of documents where a field equals some value. I know how to grab a single document based on the value of a field using get(name="chris"), but how would I return multiple documents? Nothing in the docs really stands out.

MongoEngine Document classes have an objects attribute, which is used for accessing the objects in the database associated with the class. For example:
uk_users = User.objects(country='uk')
For more advanced queries you can chain the filter method:
uk_female_users = User.objects(country='uk').filter(gender='f')
The relevant documentation is here: MongoEngine - Querying the database.
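For completeness, here is a minimal, self-contained sketch putting the two together; the User model, its fields, and the database name are illustrative, not from the question:

from flask import Flask
from flask_mongoengine import MongoEngine

app = Flask(__name__)
app.config['MONGODB_SETTINGS'] = {'db': 'mydb'}  # assumed database name
db = MongoEngine(app)

class User(db.Document):
    name = db.StringField()
    country = db.StringField()
    gender = db.StringField()

# get() returns exactly one document (raising if there are zero or many),
# while calling objects(...) returns a QuerySet of every matching document.
chris = User.objects.get(name='chris')
uk_users = User.objects(country='uk')          # all matching documents
uk_female_users = uk_users.filter(gender='f')  # narrowed further

A QuerySet is lazy and iterable, so you can loop over uk_users or call .count() on it without fetching everything up front.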

Related

Create Pony ORM entity without fetching related objects?

I recently started using Pony ORM, and I think it's awesome. Even though the API is well documented on the official website, I'm having a hard time working with relationships. In particular, I want to insert a new entity which is part of a set; however, I cannot seem to find a way to create the entity without fetching related objects first:
post = Post(
    # author = author_id,  # Python complains about author's type, which is int and should be User
    author = User[author_id],  # This works, but as I understand it actually fetches the user object
    # author = User(id=author_id),  # This will try and (fail to) create a new record for the User table when I commit, am I right?
    # ...
)
In the end only the id value is inserted in the table, why should I fetch the entire object when I only need the id?
EDIT
I had a quick peek at the Pony ORM source code. Using the primary key of the reverse entity should work, but even in that case we end up calling _get_by_raw_pkval_, which fetches the object either from the local cache or from the database, so it's probably not possible.
It is part of the internal API and not the way Pony assumes you use it, but you can actually use author = User._get_by_raw_pkval_((author_id,)) if you are sure that objects with these ids exist.
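For illustration, a minimal sketch of that workaround; the entity definitions and the in-memory SQLite binding are assumptions made up for this example:

from pony.orm import Database, Required, Set, db_session, flush

db = Database('sqlite', ':memory:')

class User(db.Entity):
    name = Required(str)
    posts = Set('Post')

class Post(db.Entity):
    author = Required(User)
    title = Required(str)

db.generate_mapping(create_tables=True)

with db_session:
    alice = User(name='alice')
    flush()  # assigns the auto-incremented primary key
    author_id = alice.id

with db_session:
    # Internal API: builds the User from its raw primary key value,
    # taking it from the session cache when possible.
    post = Post(author=User._get_by_raw_pkval_((author_id,)), title='hello')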

Mongoengine: Check if document is already in DB

I am working on a kind of initialization routine for a MongoDB using mongoengine.
The documents we deliver to the user are read from several JSON files and written into the database at the start of the application using the above-mentioned init routine.
Some of these documents have unique keys, which would raise a mongoengine.errors.NotUniqueError if a document with a duplicate key is passed to the DB. This is not a problem at all, since I am able to catch those errors with try-except.
However, some other documents are just a bunch of values or parameters, so there is no unique key which I can check in order to prevent them from being inserted into the DB twice.
I thought I could read all existing documents from the desired collection like this:
docs = MyCollection.objects()
and check whether the document to be inserted is already available in docs by using:
doc = MyCollection(parameter='foo')
print(doc in docs)
This prints False even if there is a MyCollection(parameter='foo') document in the DB already.
How can I achieve a duplicate detection without using unique keys?
You can check using an if statement:
if not MyCollection.objects(parameter='foo'):
    # insert your documents
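As a fuller sketch (the connection call and field type are assumptions; MyCollection and parameter come from the question):

from mongoengine import Document, StringField, connect

connect('mydb')  # assumed database name

class MyCollection(Document):
    parameter = StringField()

def insert_if_absent(value):
    # An empty QuerySet is falsy, so the save only runs when no
    # document with this parameter value exists yet.
    if not MyCollection.objects(parameter=value):
        MyCollection(parameter=value).save()

insert_if_absent('foo')  # inserts
insert_if_absent('foo')  # second call is a no-op

Incidentally, the doc in docs check from the question fails because MongoEngine compares documents by primary key, and a freshly constructed, unsaved document has none.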

Accessing all columns from MySQL table using flask sqlalchemy

I'm trying to get the columns for a MySQL table whether in a string or a list format.
Since I defined the table through my main app, I could use dir(table_name) to get a list of attributes, but that list contains private attributes and other built-in attributes like "query" or "query_class". Filtering these out would be possible, but I'm trying to find an easier way to get the columns without going through the attribute route.
Between reading the Flask-SQLAlchemy documentation and the SQLAlchemy documentation, I noticed that table_name.c works for SQLAlchemy. Is there an equivalent for Flask-SQLAlchemy?
Flask-SQLAlchemy does nothing to the SQLAlchemy side of things; it is just a wrapper that makes SQLAlchemy easier to use with Flask, plus some convenience features. You can inspect ORM objects the same way you normally would. For example, inspecting a model returns its mapper:
m = db.inspect(MyModel)
# all orm attributes
print(list(m.all_orm_descriptors.keys()))
# just the columns
print(list(m.columns.keys()))
# etc.
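For a self-contained illustration (the model, its columns, and the SQLite URI are placeholders; db.inspect is the regular sqlalchemy.inspect re-exported on the db object, as used above):

from flask import Flask
from flask_sqlalchemy import SQLAlchemy

app = Flask(__name__)
app.config['SQLALCHEMY_DATABASE_URI'] = 'sqlite://'  # stand-in for your MySQL URI
db = SQLAlchemy(app)

class MyModel(db.Model):
    id = db.Column(db.Integer, primary_key=True)
    name = db.Column(db.String(50))

m = db.inspect(MyModel)
print(list(m.columns.keys()))  # ['id', 'name']

# The plain-SQLAlchemy equivalent of table_name.c also works on a model:
print(MyModel.__table__.columns.keys())  # ['id', 'name']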

Clean way to find many to many relations by array of ids in python-eve and mongo

I have two models in python-eve, say foo and bar, and in foo I have an array of ObjectIds referring to bars. Is there a clean way to do this in python-eve without defining a custom route in Flask and running the query manually against Mongo?
And if I'm forced to do it in Mongo, what is the recommended way to communicate with the Mongo instance?
I am not sure I understand your question, but have you looked into the data_relation setting? See Embedded Resource Serialization. Quoting from the Limitations paragraph:
Currently we support embedding of documents by references located in any subdocuments (nested dicts and lists). For example, a query /invoices?embedded={"user.friends":1} will return a document with user and all his friends embedded, but only if user is a subdocument and friends is a list of references (it could be a list of dicts, a nested dict, etc.). We do not support multiple layers of embedding. This feature is about serialization on GET requests; there is no support for POST, PUT or PATCH of embedded documents.
UPDATED
If you simply want to query for documents which reference documents in other collections, something like this would work:
?where={"reference_field":"54e328ec537d3d20bbdf2ed5"}
That's assuming reference_field is either a list of ids (of type objectid) or a single objectid. Also see this answer.
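For concreteness, a hypothetical domain definition for foo and bar along those lines (all resource and field names here are made up):

DOMAIN = {
    'bar': {
        'schema': {
            'name': {'type': 'string'},
        },
    },
    'foo': {
        'schema': {
            'bars': {
                'type': 'list',
                'schema': {
                    'type': 'objectid',
                    'data_relation': {
                        'resource': 'bar',
                        'embeddable': True,
                    },
                },
            },
        },
    },
}

# With that in place, no custom Flask route is needed:
#   GET /foo?where={"bars": "54e328ec537d3d20bbdf2ed5"}  filter by reference
#   GET /foo?embedded={"bars": 1}                        embed the bar documents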
Hope this helps.

When an entry is deleted from the datastore, is its corresponding search document also deleted?

I am using Google App Engine's Search API to index entities from the Datastore. After I create or modify an object, I have to add it to the search index. I do this by creating an add_to_search_index method for each model whose entities are indexed, for example:
class Location(ndb.Model):
    ...
    def add_to_search_index(self):
        fields = [
            search.TextField(name="name", value=self.name),
            search.GeoField(name="location", value=search.GeoPoint(self.location.lat, self.location.lon)),
        ]
        document = search.Document(doc_id=str(self.key.id()), fields=fields)
        index = search.Index(name='Location_index')
        index.put(document)
Does the search API automatically maintain any correspondence between indexed documents and datastore entities?
I suspect it does not, meaning that the Search API will keep deleted, obsolete entities in its index. If that's the case, then I suppose the best approach would be to use the NDB hook methods to create a remove_from_search_index method that is called on delete (and before put, for edits/updates). Please advise if there is a better solution for maintaining correspondence between the datastore and the search indices.
Since the datastore (NDB) and the Search API are separate back ends, they have to be maintained separately. I see you're using key.id() as the document id; you can use that document id to get a document or to delete it. Creating the search document can be handled in the model's _post_put_hook, and deletion in _post_delete_hook. You may also use the repository pattern to do this; how you do it is up to you.
index = search.Index(name='Location_index')
index.delete([doc_id])
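A sketch of wiring this up with the hooks mentioned above; the hook names and signatures are from the NDB API, and the rest mirrors the question's code:

class Location(ndb.Model):
    ...

    def _post_put_hook(self, future):
        # Re-index after every create/update.
        self.add_to_search_index()

    @classmethod
    def _post_delete_hook(cls, key, future):
        # Remove the matching search document when the entity is deleted.
        index = search.Index(name='Location_index')
        index.delete([str(key.id())])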
