MongoDB Update with Array Filters [duplicate] - python

I am trying to update a value in a nested array but can't get it to work.
My object looks like this:
{
    "_id": {
        "$oid": "1"
    },
    "array1": [
        {
            "_id": "12",
            "array2": [
                {
                    "_id": "123",
                    "answeredBy": [], // need to push "success"
                },
                {
                    "_id": "124",
                    "answeredBy": [],
                }
            ],
        }
    ]
}
I need to push a value into the "answeredBy" array.
In the example below, I tried pushing the "success" string to the "answeredBy" array of the object with _id "123", but it does not work.
callback = function(err, value){
    if(err){
        res.send(err);
    } else {
        res.send(value);
    }
};
conditions = {
    "_id": 1,
    "array1._id": 12,
    "array2._id": 123
};
updates = {
    $push: {
        "array2.$.answeredBy": "success"
    }
};
options = {
    upsert: true
};
Model.update(conditions, updates, options, callback);
I found this link, but its answer only says I should use an object-like structure instead of arrays. That cannot be applied in my situation; I really need my objects to be nested in arrays.
It would be great if you could help me out here. I've spent hours trying to figure this out.
Thank you in advance!

General Scope and Explanation
There are a few things wrong with what you are doing here. Firstly, your query conditions: you are referring to several _id values where you should not need to, and at least one of them is not at the top level.
In order to get at a "nested" value, and presuming that the _id value is unique and would not appear in any other document, your query form should be like this:
Model.update(
    { "array1.array2._id": "123" },
    { "$push": { "array1.0.array2.$.answeredBy": "success" } },
    function(err, numAffected) {
        // something with the result in here
    }
);
Now that would actually work, but really it is only a fluke that it does, as there are very good reasons why it should not work for you.
The important reading is in the official documentation for the positional $ operator under the subject of "Nested Arrays". What this says is:
The positional $ operator cannot be used for queries which traverse more than one array, such as queries that traverse arrays nested within other arrays, because the replacement for the $ placeholder is a single value
Specifically, what that means is that the value returned in the positional placeholder is the index of the matched element from the first matching array. In your case that is the matching index in the "top" level array.
So if you look at the query notation as shown, we have "hardcoded" the first ( or 0 index ) position in the top level array, and it just so happens that the matching element within "array2" is also the zero index entry.
To demonstrate this, you can change the matching _id value to "124" and the result will $push a new entry onto the element with _id "123", as they are both in the zero index entry of "array1" and that is the value returned to the placeholder.
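To illustrate that same fluke, here is a hedged sketch with pymongo (since the question is tagged python; the collection handle coll is an assumption, not part of the original code):
# The query matches the nested element with _id "124", but the positional $
# resolves to the index from the first array traversed ("array1", index 0),
# so the push still lands on array2[0], i.e. the element with _id "123".
coll.update_one(
    {"array1.array2._id": "124"},
    {"$push": {"array1.0.array2.$.answeredBy": "success"}}
)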
So that is the general problem with nesting arrays. You could remove one of the levels and you would still be able to $push to the correct element in your "top" array, but there would still be multiple levels.
Try to avoid nesting arrays as you will run into update problems as is shown.
The general case is to "flatten" the things you "think" are "levels" and actually make these "attributes" on the final detail items. For example, the "flattened" form of the structure in the question could be something like:
{
    "answers": [
        { "by": "success", "type2": "123", "type1": "12" }
    ]
}
Or even when accepting the inner array is $push only, and never updated:
{
    "array": [
        { "type1": "12", "type2": "123", "answeredBy": ["success"] },
        { "type1": "12", "type2": "124", "answeredBy": [] }
    ]
}
Both of these lend themselves to atomic updates within the scope of the positional $ operator.
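For example, with the second "flattened" form a single positional $ update is enough. A minimal pymongo sketch (the collection handle coll is assumed; the field names carry over from the example above):
# $elemMatch ensures both conditions match the SAME element of "array",
# and the positional $ then points at that element for the push.
coll.update_one(
    {"array": {"$elemMatch": {"type1": "12", "type2": "123"}}},
    {"$push": {"array.$.answeredBy": "success"}}
)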
MongoDB 3.6 and Above
From MongoDB 3.6 there are new features available to work with nested arrays. This uses the positional filtered $[<identifier>] syntax in order to match the specific elements and apply different conditions through arrayFilters in the update statement:
Model.update(
    {
        "_id": 1,
        "array1": {
            "$elemMatch": {
                "_id": "12", "array2._id": "123"
            }
        }
    },
    {
        "$push": { "array1.$[outer].array2.$[inner].answeredBy": "success" }
    },
    {
        "arrayFilters": [{ "outer._id": "12" }, { "inner._id": "123" }]
    }
)
The "arrayFilters" as passed to the options for .update() or even
.updateOne(), .updateMany(), .findOneAndUpdate() or .bulkWrite() method specifies the conditions to match on the identifier given in the update statement. Any elements that match the condition given will be updated.
Because the structure is "nested", we actually use "multiple filters" as is specified with an "array" of filter definitions as shown. The marked "identifier" is used in matching against the positional filtered $[<identifier>] syntax actually used in the update block of the statement. In this case inner and outer are the identifiers used for each condition as specified with the nested chain.
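Since the question is tagged python, a roughly equivalent call with pymongo might look like this (a sketch only: the collection handle coll is assumed, and pymongo exposes the option as the array_filters keyword argument):
coll.update_one(
    {"_id": 1, "array1": {"$elemMatch": {"_id": "12", "array2._id": "123"}}},
    {"$push": {"array1.$[outer].array2.$[inner].answeredBy": "success"}},
    array_filters=[{"outer._id": "12"}, {"inner._id": "123"}]
)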
This new expansion makes the update of nested array content possible, but it does not really help with the practicality of "querying" such data, so the same caveats apply as explained earlier.
What you typically "mean" to express are "attributes", even if your brain initially thinks "nesting"; that is usually just a reaction to how you believe the "previous relational parts" come together. In reality you need more denormalization.
Also see How to Update Multiple Array Elements in mongodb, since these new update operators actually match and update "multiple array elements" rather than just the first, which has been the previous action of positional updates.
NOTE Somewhat ironically, since this is specified in the "options" argument for .update() and like methods, the syntax is generally compatible with all recent release driver versions.
However this is not true of the mongo shell: because of the way the method is implemented there ( "ironically for backward compatibility" ), the arrayFilters argument is not recognized and is removed by an internal method that parses the options in order to deliver "backward compatibility" with prior MongoDB server versions and a "legacy" .update() API call syntax.
So if you want to use the command in the mongo shell or other "shell based" products ( notably Robo 3T ), you need a version from either the development branch or a production release of 3.6 or greater.
See also positional all $[], which also updates "multiple array elements" but without applying specified conditions, and applies to all elements in the array where that is the desired action.
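For completeness, a hedged pymongo sketch of that all-positional form (again assuming the coll handle) would push into every nested "answeredBy" regardless of _id:
# $[] applies to every element of array1 and, nested again, to every element
# of each array2, so every "answeredBy" receives the pushed value.
coll.update_one(
    {"_id": 1},
    {"$push": {"array1.$[].array2.$[].answeredBy": "success"}}
)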

I know this is a very old question, but I just struggled with this problem myself, and found, what I believe to be, a better answer.
A way to solve this problem is to use Sub-Documents. This is done by nesting schemas within your schemas:
// child schemas must be defined before they are referenced
Array2Schema = new mongoose.Schema({
    answeredBy: [...]
})

Array1Schema = new mongoose.Schema({
    array2: [Array2Schema]
})

MainSchema = new mongoose.Schema({
    array1: [Array1Schema]
})
This way the object will look like the one you show, but now each array is filled with sub-documents. This makes it possible to dot your way into the sub-document you want. Instead of using .update, you then use .find or .findOne to get the document you want to update.
Main.findOne({ _id: 1 })
    .exec(function(err, result) {
        result.array1.id(12).array2.id(123).answeredBy.push('success');
        result.save(function(err) {
            console.log(result);
        });
    });
I haven't used the .push() function this way myself, so the syntax might not be right, but I have used both .set() and .remove(), and both work perfectly fine.

Related

Column names in great expectations

Are there any specific rules for column names in Great Expectations? In particular, if you have a column like a.age, would it have to be renamed to a_age in order to run an expectation on it?
An expectation has to use the name that the column has in the source dataset. However, you can put a reference to what you mean in the notes section. For example, in the case you mentioned:
{
    "expectation_type": "expect_column_values_to_not_be_null",
    "kwargs": {
        "column": "a.age"
    },
    "meta": {
        "notes": {
            "content": "a_age",
            "format": "markdown"
        }
    }
}
and this is what you will find in the JSON file of the results. To answer your question: changing the name of the column by altering the dataset is not possible with great_expectations, because one of its fundamental goals is to apply expectations while making as few alterations as possible to the data.
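A minimal sketch of how that expectation might be declared in code (a hedged example assuming the legacy pandas-backed API and a DataFrame named my_df; these names are not from the original answer):
import great_expectations as ge

# Wrap an existing pandas DataFrame so expectation methods become available.
df = ge.from_pandas(my_df)

# The column name is used exactly as it appears in the dataset ("a.age");
# the meta notes carry the "a_age" reference mentioned above.
result = df.expect_column_values_to_not_be_null(
    column="a.age",
    meta={"notes": {"content": "a_age", "format": "markdown"}}
)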

DynamoDB Query by nested array

I have a Companies table in DynamoDB that looks like this:
company: {
    id: "11",
    name: "test",
    jobs: [
        {
            "name": "painter",
            "id": 3
        },
        {
            "name": "gardner",
            "id": 2
        }
    ]
}
And I want to make a scan query that gets all the companies with the "painter" job inside their jobs array.
I am using Python and boto3.
I tried something like this, but it didn't work:
jobs = ["painter"]
response = self.table.scan(
    FilterExpression=Attr('jobs.name').is_in(jobs)
)
Please help.
Thanks.
It looks like this may not be doable in general; however, the method described in that link may still be useful. If you know the maximum length of the jobs array across all of your data, you could create an expression for each index and chain them with ORs, as sketched below. Notably, I could not find documentation for handling map and list scan expressions, so I can't really say whether you would also need to guard against going out of bounds.
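A rough boto3 sketch of that idea (MAX_JOBS is a hypothetical upper bound on the length of the jobs list, and table is an existing Table resource; both are assumptions for illustration):
from functools import reduce
from boto3.dynamodb.conditions import Attr

jobs = ["painter"]
MAX_JOBS = 20  # assumed upper bound on len(company["jobs"])

# Build Attr("jobs[0].name").is_in(jobs) | Attr("jobs[1].name").is_in(jobs) | ...
condition = reduce(
    lambda acc, i: acc | Attr("jobs[{}].name".format(i)).is_in(jobs),
    range(1, MAX_JOBS),
    Attr("jobs[0].name").is_in(jobs)
)

response = table.scan(FilterExpression=condition)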

How to collect specific values in a deeply nested structure with Python

I'm trying to get a list of instance IDs from the describe_instances call using the boto3 API in my Python script. For those of you who aren't familiar with AWS, I can post detailed code after removing the specifics if you need it. I'm trying to access an item from a structure like this:
u'Reservations': [
    {
        u'Instances': [
            {
                u'InstanceId': 'i-0000ffffdd'
            },
            { },  ### each of these dicts contains an id like the one above
            { },
            { },
            { }
        ]
    },
    {
        u'Instances': [
            { },
            { },
            { },
            { },
            { }
        ]
    },
    {
        u'Instances': [
            { }
        ]
    }
]
I'm currently accessing it like this:
instanceLdict = []
instanceList = []
instances = []
for r in reservations:
    instanceList.append(r['Instances'])
for ilist in instanceList:
    for i in ilist:
        instanceLdict.append(i)
for i in instanceLdict:
    instances.append(i['InstanceId'])  #### i need them in a list
print instances
FYI: my reservations variable contains the whole list under u'Reservations'.
I feel this is inefficient, and since I'm a Python newbie I really think there must be a better way to do this than the multiple for loops. Is there a better way? Kindly point me to the structure/method, etc., that might be useful in my scenario.
Your solution is not actually that inefficient, except that you don't really have to create all those intermediate lists just to collect the instance ids in the end. What you could do is a nested loop and keep only what you need:
instances = []
for r in reservations:
    for i in r['Instances']:
        instances.append(i['InstanceId'])  # that's what you are looping for
Yes, there are ways to do this with shorter code, but explicit is better than implicit, and you should stick to what you can read best. Python is quite good with iteration; remember maintainability first, performance second. Also, this part is hardly the bottleneck compared with all those API calls, DB lookups, etc. you are doing afterwards.
But if you really insist on a fancy one-liner, go have a look at the itertools helpers; chain.from_iterable() is what you need:
from itertools import chain
instances = [i['InstanceId'] for i in chain.from_iterable(r['Instances'] for r in reservations)]

Multiple FOR loops in iterating over dictionary in Python

This is a simplified example of a dictionary created by json.load that I have to deal with:
{
    "name": "USGS REST Services Query",
    "queryInfo": {
        "timePeriod": "PT4H",
        "format": "json",
        "data": {
            "sites": [{
                "id": "03198000",
                "params": "[00060, 00065]"
            },
            {
                "id": "03195000",
                "params": "[00060, 00065]"
            }]
        }
    }
}
Sometimes there may be 15-100 sites with unknown sets of parameters at each site. My goal is to either create two lists (one storing "site" IDs and the other storing "params") or a much simplified dictionary from this original dictionary. Is there a way to do this using nested for loops over key,value pairs with the iteritems() method?
What I have tried so far is this:
queryDict = {}
for key, value in WS_Req_dict.iteritems():
    if key == "queryInfo":
        if value == "data":
            for key, value in WS_Req_dict[key][value].iteritems():
                if key == "sites":
                    siteVal = key
                if value == "params":
                    paramList = [value]
queryDict["sites"] = siteVal
queryDict["sites"]["params"] = paramList
I run into trouble getting the second for loop to work. I haven't looked into pulling out the lists yet.
I think this may be an overall stupid way of doing it, but I can't see a way around it yet.
I think you can make your code much simpler by just indexing, when feasible, rather than looping over iteritems.
for site in WS_Req_dict['queryInfo']['data']['sites']:
    queryDict[site['id']] = site['params']
If some of the keys might be missing, dict's get method is your friend:
for site in WS_Req_dict.get('queryInfo',{}).get('data',{}).get('sites',[]):
would let you quietly ignore missing keys. But, this is much less readable, so, if I needed it, I'd encapsulate it into a function -- and often you may not need this level of precaution! (Another good alternative is a try/except KeyError encapsulation to ignore missing keys, if they are indeed possible in your specific use case).
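A minimal sketch of that try/except alternative (same assumed WS_Req_dict structure as above):
queryDict = {}
try:
    for site in WS_Req_dict['queryInfo']['data']['sites']:
        queryDict[site['id']] = site['params']
except KeyError:
    # One of the expected keys is missing; decide here whether to ignore or log it.
    pass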

CouchDB - filter latest log per logged instance from a list

I could use some help filtering distinct values from a CouchDB view.
I have a database that stores logs with information about computers.
Periodically new logs for a computer are written to the db.
A bit simplified, I store entries like these:
{
    "name": "NAS",
    "os": "Linux",
    "timestamp": "2011-03-03T16:26:39Z"
}
{
    "name": "Server1",
    "os": "Windows",
    "timestamp": "2011-02-03T19:31:31Z"
}
{
    "name": "NAS",
    "os": "Linux",
    "timestamp": "2011-02-03T18:21:29Z"
}
So far I am struggling to filter this list down to distinct entries.
What I'd like to receive is the latest log for each device.
I have a view like this:
function(doc) {
    emit([doc.timestamp, doc.name], doc);
}
I'm querying this view with Python (couchdbkit), and the best solution I came up with so far looks like this:
def get_latest_logs(cls):
    unique_names = set()
    unique = []
    for log in cls.view("logs/timestamp", descending=True):
        if log.name not in unique_names:
            unique_names.add(log.name)
            unique.append(log)
    return unique
Okay ... this works. But I have the strong feeling that this is not the best solution, as Python needs to iterate over the whole list of logs (which could become quite long).
I guess I need a reduce function, but I couldn't really find any examples
or explanations that I could adapt to my problem.
So, what I am looking for is a (pure CouchDB) view that only spits out the latest log for a given device.
Here is what I do. This is borderline CouchDB abuse; however, I have had much success with it.
Usually, reduce will compute a sum, or a count, or something like that. However, think of reduce as an elimination tournament. Many values go in. Only one comes out. A reduction! Repeat over and over and you have the ultimate winner (a re-reduction). In this case, the log with the latest timestamp is the winner.
Of course, welterweights can't fight heavyweights. There have to be leagues and weight classes. It only makes sense for certain documents to do battle with certain other similar documents. That is exactly what the reduce group parameter will do. It will ensure that only evenly-matched gladiators enter the steel cage in our bloodsport. (Coffee is kicking in.)
First, emit all logs keyed by device. The value emitted is simply a copy of the document.
function(doc) {
    emit(doc.name, doc);
}
Next, write a reduce function to return the log with the latest timestamp among the given values. If you see a fight between two gladiators from different leagues (two logs from different systems), stop the fight! Something went wrong (somebody queried without the correct group value).
function(keys, vals, re) {
    var challenger, winner = null;
    for(var a = 0; a < vals.length; a++) {
        challenger = vals[a];
        if(!winner) {
            // The title is unchallenged. This value is the winner.
            winner = challenger;
        } else {
            // Fight!
            if(winner.name !== challenger.name) {
                // Stop the fight! He's gonna kill him!
                return null; // With a grouping query, this will never happen.
            } else if(winner.timestamp > challenger.timestamp) {
                // The champ wins! (Nothing to do.)
            } else {
                // The challenger wins!
                winner = challenger;
            }
        }
    }
    // Today's champion lives to fight another day.
    return winner;
}
(Note: the timestamp comparison above may not be quite right; you will probably have to convert the values to a Date first.)
Now, when you query a view with ?group=true, then CouchDB will only reduce (find the winner between) values with the same key, which is your machine name.
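From couchdbkit, that grouped query might look roughly like this (a sketch only: the view name "logs/latest" is a placeholder, and depending on how couchdbkit wraps reduced rows you may need to adjust how the value is read):
def get_latest_logs(cls):
    # With group=True, CouchDB reduces per machine name, so each row's value
    # is the winning (latest) log document for that machine.
    return [row['value'] for row in cls.view("logs/latest", group=True)]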
(You can also emit an array as a key, which gives a bit more flexibility. You could emit([doc.name, doc.timestamp], doc) instead. That way you can see all logs for a system with a query like ?reduce=false&startkey=["NAS", null]&endkey=["NAS", {}], or you could see the latest logs by system with ?group_level=1.)
Finally, the "stop the fight" stuff is optional. You could simply always return the document with the latest timestamp. However, I prefer to keep it there because in similar situations, I want to see if I am map-reducing incorrectly, and a null reduce output is my big clue.
