I am using Python with the Firebase SDK and have a collection named jobs. Each document has a field named client, which is a map, and each client map has an id field. I would like to query the collection for all jobs whose client has a certain id value. I found documentation explaining how to query by array members, but I can't find anything about querying by the values of map fields.
Will something like
.where("client.id", "==", id)
work, and will it be efficient? How can I do this query efficiently? Should I create an index?
It should work without creating an index. What you wrote filters on the id property of the client map field for all documents in the collection.
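For reference, here is a minimal sketch of that query with the Firebase Admin SDK for Python (the collection name jobs comes from the question; the client_id value is a hypothetical placeholder):

import firebase_admin
from firebase_admin import firestore

# Initialize with application default credentials (GOOGLE_APPLICATION_CREDENTIALS).
firebase_admin.initialize_app()
db = firestore.client()

client_id = "abc123"  # hypothetical client id to match
# Dot notation addresses a key inside the "client" map field.
for job in db.collection("jobs").where("client.id", "==", client_id).stream():
    print(job.id, job.to_dict())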
See also:
Firestore: Query documents by property of object
Query Google Firestore database by custom object fields
Related
I'm trying to query a MongoDB collection with documents containing both an _id and an id field (I know this is not an ideal DB design, but I don't own the DB).
Is there a way to access the _id field using MongoEngine in Flask?
doc['id'] returns the id field and doc['_id'] throws a KeyError.
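One possible workaround, sketched here under the assumption of a MongoEngine Document class named Thing, is to drop down to the underlying pymongo collection, whose raw dicts expose both keys:

some_id = "abc123"  # hypothetical value of the custom "id" field
# _get_collection() returns the raw pymongo collection behind the Document,
# so the returned dict carries both "_id" and "id".
raw = Thing._get_collection().find_one({"id": some_id})
print(raw["_id"], raw["id"])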
There are lots of solutions for querying MongoDB by a date/time field, but what if the Mongo doc doesn't have a date/time field?
I've noticed that when I hover the mouse over a document's _id (using NoSQLBooster for MongoDB) I get a "createdAt" tooltip (see screenshot below). I'm just wondering whether there is any way to run a query with pymongo that filters documents on a date/time range using this "createdAt" metadata.
In MongoDB the _id of the docs contains the timestamp of creation, as mentioned in this other question.
You can write a script that adds a date field derived from this information and query on that, or perform the query directly using the ObjectId, as described here.
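A sketch of the second approach with pymongo (the database and collection names are assumptions): bson's ObjectId.from_datetime builds a boundary id whose embedded timestamp matches the given datetime, which can then be used in a range filter on _id:

from datetime import datetime, timezone
from bson import ObjectId
from pymongo import MongoClient

client = MongoClient()  # hypothetical local instance
coll = client["mydb"]["docs"]

# Boundary ObjectIds: timestamp portion set from the datetime, remaining bytes zeroed.
start = ObjectId.from_datetime(datetime(2023, 1, 1, tzinfo=timezone.utc))
end = ObjectId.from_datetime(datetime(2023, 2, 1, tzinfo=timezone.utc))

for doc in coll.find({"_id": {"$gte": start, "$lt": end}}):
    print(doc["_id"].generation_time, doc)  # generation_time is the creation timestamp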
I am trying to bulk insert data into MongoDB without overwriting existing data. I want to insert new data into the database when there is no match on a unique id (sourceID). Looking at the documentation for pymongo, I have written some code but cannot make it work. Any ideas about what I am doing wrong?
db.bulk_write(UpdateMany({"sourceID"}, test, upsert=True))
db is the name of my database, sourceID is the unique ID of the documents that I don't want to overwrite in the existing data, and test is the array that I am trying to insert.
Either I don't understand your requirement or you misunderstand the UpdateMany operation. As per the documentation, this operation modifies existing data (documents matching the filter), and only when no documents match the filter, and upsert=True, does it insert new documents. Are you sure you don't want to use the insert_many method?
Also, in your example, the first parameter, which should be the filter for the update, is not a valid query; it has to be of the form {"key": "value"}.
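If the intent really is "insert only when sourceID is not already present", one way to express that with bulk_write is one UpdateOne upsert per document, sketched here with assumed database/collection names and sample data:

from pymongo import MongoClient, UpdateOne

client = MongoClient()  # hypothetical local instance
coll = client["mydb"]["mycoll"]

test = [{"sourceID": 1, "value": "a"}, {"sourceID": 2, "value": "b"}]  # sample data

ops = [
    UpdateOne(
        {"sourceID": doc["sourceID"]},  # valid filter: match on the unique id
        {"$setOnInsert": doc},          # write fields only when a new doc is inserted
        upsert=True,
    )
    for doc in test
]
result = coll.bulk_write(ops)
print(result.upserted_count, "inserted;", result.matched_count, "left untouched")

Note that bulk_write is a method of a collection, not of the database object.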
I don't want to create a custom "id" field in MongoDB and make it my primary key.
But I want to override the "_id" field and make it an integer primary key, so that every time I post a new document it gets an integer id rather than an ObjectId.
Is this possible by overriding some code in MongoEngine (PyMongo), or is this core functionality of MongoDB?
I've tried inserting documents directly into the MongoDB database (without using Django), specifying an integer "_id", and it saves the document with no problem. However, when I try to query that particular document, it gives no result.
The same happens with Django: when I hit the API endpoint to see the list of documents, it shows the one that has the integer id.
But when I try to access that particular one, it says "no result found".
How can I deal with this? Please advise.
Note: I'm new to both Django and MongoDB.
When defining a document I used the unique=True attribute on a field to ensure I had such an ID. I'm uncertain if this is your goal.
from mongoengine import Document, StringField

class Post(Document):
    p_id = StringField(min_length=64, max_length=64, required=True, unique=True)
I was using a 64-character sha256 hexdigest as the p_id.
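If the goal is specifically an integer _id, a possible alternative (a sketch, with hypothetical model and database names): in MongoEngine, a field declared with primary_key=True is stored as the document's _id, so an IntField there yields an integer primary key instead of an ObjectId:

from mongoengine import Document, IntField, StringField, connect

connect("mydb")  # hypothetical database name

class Article(Document):
    article_id = IntField(primary_key=True)  # stored as _id in MongoDB
    title = StringField()

Article(article_id=42, title="hello").save()
print(Article.objects(pk=42).first().title)  # query back by the integer _id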
I'm using a polymorphic association in SQLAlchemy as described in this example. It can also be found in the examples directory in the SQLAlchemy source code.
Given this setup, I want to query for all Addresses that are associated with Users; not a particular user, but any user.
In raw SQL, I could do that like this:
select addresses.* from addresses
join address_associations
on addresses.assoc_id = address_associations.assoc_id
where address_associations.type = 'user'
Is there a way to do this using the ORM Session?
Could I just run this raw SQL and then apply the Address class to each row in the results?
Ad-hoc joins using the ORM are described at:
http://www.sqlalchemy.org/docs/orm/tutorial.html#querying-with-joins
for address in sess.query(Address).join(Address.association).filter_by(type='users'):
    print("Street", address.street, "Member", address.member)