I'm trying to get a list of instance IDs from the describe_instances call using the boto3 API in my Python script. For those of you who aren't familiar with AWS: I can post detailed code after removing the specifics if you need it. I'm trying to access an item from a structure like this:
u'Reservations': [
    {
        u'Instances': [
            {
                u'InstanceId': 'i-0000ffffdd'
            },
            { },  ### each of these dicts contains an id like the one above
            { },
            { },
            { }
        ]
    },
    {
        u'Instances': [
            { },
            { },
            { },
            { },
            { }
        ]
    },
    {
        u'Instances': [
            { }
        ]
    }
]
I'm currently accessing it like this:
instanceLdict = []
instanceList = []
instances = []
for r in reservations:
    instanceList.append(r['Instances'])
for ilist in instanceList:
    for i in ilist:
        instanceLdict.append(i)
for i in instanceLdict:
    instances.append(i['InstanceId'])  #### i need them in a list
print instances
FYI: my reservations variable contains the whole list under u'Reservations'.
I feel this is inefficient, and since I'm a Python newbie I really think there must be some better way to do this than the multiple fors and ifs. Is there a better way? Kindly point me to the structure/method etc. that might be useful in my scenario.
Your solution is not actually that inefficient, except you don't really have to create all those top-level lists just to save the instance IDs in the end. What you could do is a nested loop and keep only what you need:
instances = list()
for r in reservations:
    for i in r['Instances']:
        instances.append(i['InstanceId'])  # that's what you're looping for
Yes, there are ways to do this with shorter code, but explicit is better than implicit, so stick to what you can read best. Python is quite good with iterations, and remember: maintainability first, performance second. Also, this part is hardly the bottleneck next to all those API calls, DB lookups etc.
But if you really insist on a fancy one-liner, have a look at the itertools helpers; chain.from_iterable() is what you need:
from itertools import chain
instances = [i['InstanceId'] for i in chain.from_iterable(r['Instances'] for r in reservations)]
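Equivalently, a plain nested list comprehension flattens both levels without the import:

instances = [i['InstanceId'] for r in reservations for i in r['Instances']]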
I have a Companies table in DynamoDB that looks like this:
company: {
    id: "11",
    name: "test",
    jobs: [
        {
            "name": "painter",
            "id": 3
        },
        {
            "name": "gardner",
            "id": 2
        }
    ]
}
And I want to make a scan query that gets all the companies with the "painter" job inside their jobs array.
I am using Python and boto3.
I tried something like this, but it didn't work:
jobs = ["painter"]
response = self.table.scan(
FilterExpression=Attr('jobs.name').is_in(jobs)
)
Please help.
Thanks.
It looks like this may not be doable in general; however, it's possible that the method applied in that link may still be useful. If you know the maximum length of the jobs array over all of your data, you could create an expression for each index, chained with ORs. Notably, I could not find documentation for handling map and list scan expressions, so I can't really say whether you'd also need to check that you're not going out of bounds.
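A minimal sketch of that index-by-index workaround, assuming a known upper bound MAX_JOBS on the length of the jobs list (the bound and the function name here are hypothetical):

from functools import reduce
from boto3.dynamodb.conditions import Attr

MAX_JOBS = 10  # assumed upper bound on len(jobs) across all items

def scan_companies_with_job(table, job_name):
    # Build one condition per possible list index and OR them together.
    conditions = [Attr('jobs[{}].name'.format(i)).eq(job_name)
                  for i in range(MAX_JOBS)]
    return table.scan(FilterExpression=reduce(lambda a, b: a | b, conditions))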
I am trying to update a value in a nested array but can't get it to work.
My object is like this
{
    "_id": {
        "$oid": "1"
    },
    "array1": [
        {
            "_id": "12",
            "array2": [
                {
                    "_id": "123",
                    "answeredBy": [],  // need to push "success"
                },
                {
                    "_id": "124",
                    "answeredBy": [],
                }
            ],
        }
    ]
}
I need to push a value to "answeredBy" array.
In the example below, I tried pushing the string "success" to the "answeredBy" array of the object with _id "123", but it does not work.
callback = function(err, value) {
    if (err) {
        res.send(err);
    } else {
        res.send(value);
    }
};

conditions = {
    "_id": 1,
    "array1._id": 12,
    "array2._id": 123
};

updates = {
    $push: {
        "array2.$.answeredBy": "success"
    }
};

options = {
    upsert: true
};

Model.update(conditions, updates, options, callback);
I found this link, but its answer only says I should use an object-like structure instead of arrays. That cannot be applied in my situation; I really need my objects to be nested in arrays.
It would be great if you could help me out here. I've spent hours trying to figure this out.
Thank you in advance!
General Scope and Explanation
There are a few things wrong with what you are doing here. First, your query conditions: you are referring to several _id values where you should not need to, and at least one of which is not at the top level.
In order to get at a "nested" value, and also presuming that the _id value is unique and would not appear in any other document, your query form should be like this:
Model.update(
    { "array1.array2._id": "123" },
    { "$push": { "array1.0.array2.$.answeredBy": "success" } },
    function(err, numAffected) {
        // something with the result in here
    }
);
Now that would actually work, but really it is only a fluke that it does, as there are very good reasons why it should not work for you.
The important reading is in the official documentation for the positional $ operator under the subject of "Nested Arrays". What this says is:
The positional $ operator cannot be used for queries which traverse more than one array, such as queries that traverse arrays nested within other arrays, because the replacement for the $ placeholder is a single value
Specifically what that means is the element that will be matched and returned in the positional placeholder is the value of the index from the first matching array. This means in your case the matching index on the "top" level array.
So if you look at the query notation as shown, we have "hardcoded" the first ( or 0 index ) position in the top level array, and it just so happens that the matching element within "array2" is also the zero index entry.
To demonstrate this, you can change the matching _id value to "124" and the result will $push a new entry onto the element with _id "123", as they are both in the zero index entry of "array1" and that is the value returned to the placeholder.
So that is the general problem with nesting arrays. You could remove one of the levels and you would still be able to $push to the correct element in your "top" array, but there would still be multiple levels.
Try to avoid nesting arrays as you will run into update problems as is shown.
The general case is to "flatten" the things you "think" are "levels" and actually make these "attributes" on the final detail items. For example, the "flattened" form of the structure in the question should be something like:
{
    "answers": [
        { "by": "success", "type2": "123", "type1": "12" }
    ]
}
Or even when accepting the inner array is $push only, and never updated:
{
    "array": [
        { "type1": "12", "type2": "123", "answeredBy": ["success"] },
        { "type1": "12", "type2": "124", "answeredBy": [] }
    ]
}
Both of these lend themselves to atomic updates within the scope of the positional $ operator.
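For example, with the second flattened form a single positional match is enough. A minimal pymongo sketch, assuming a collection handle named collection:

# Push onto the one entry matched by "type2"; a single $ placeholder is
# valid here because the query traverses only one array.
collection.update_one(
    {"array.type2": "124"},
    {"$push": {"array.$.answeredBy": "success"}},
)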
MongoDB 3.6 and Above
From MongoDB 3.6 there are new features available to work with nested arrays. This uses the positional filtered $[<identifier>] syntax in order to match the specific elements and apply different conditions through arrayFilters in the update statement:
Model.update(
    {
        "_id": 1,
        "array1": {
            "$elemMatch": {
                "_id": "12", "array2._id": "123"
            }
        }
    },
    {
        "$push": { "array1.$[outer].array2.$[inner].answeredBy": "success" }
    },
    {
        "arrayFilters": [{ "outer._id": "12" }, { "inner._id": "123" }]
    }
)
The "arrayFilters" as passed to the options for .update() or even
.updateOne(), .updateMany(), .findOneAndUpdate() or .bulkWrite() method specifies the conditions to match on the identifier given in the update statement. Any elements that match the condition given will be updated.
Because the structure is "nested", we actually use "multiple filters" as is specified with an "array" of filter definitions as shown. The marked "identifier" is used in matching against the positional filtered $[<identifier>] syntax actually used in the update block of the statement. In this case inner and outer are the identifiers used for each condition as specified with the nested chain.
This new expansion makes the update of nested array content possible, but it does not really help with the practicality of "querying" such data, so the same caveats apply as explained earlier.
What you typically really "mean" to express are "attributes", even if your brain initially thinks "nesting"; that is usually just a reaction to how you believe the "previous relational parts" come together. In reality, you really need more denormalization.
Also see How to Update Multiple Array Elements in mongodb, since these new update operators actually match and update "multiple array elements" rather than just the first, which has been the previous action of positional updates.
NOTE Somewhat ironically, since this is specified in the "options" argument for .update() and like methods, the syntax is generally compatible with all recent release driver versions.
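For instance, here is the same update issued from Python; a sketch assuming pymongo 3.6+ and a collection handle named collection:

# array_filters is the pymongo keyword argument mirroring the
# "arrayFilters" option shown above.
collection.update_one(
    {"_id": 1, "array1": {"$elemMatch": {"_id": "12", "array2._id": "123"}}},
    {"$push": {"array1.$[outer].array2.$[inner].answeredBy": "success"}},
    array_filters=[{"outer._id": "12"}, {"inner._id": "123"}],
)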
However this is not true of the mongo shell: because of the way the method is implemented there ("ironically, for backward compatibility"), the arrayFilters argument is not recognized and is removed by an internal method that parses the options, in order to deliver "backward compatibility" with prior MongoDB server versions and a "legacy" .update() API call syntax.
So if you want to use the command in the mongo shell or other "shell based" products (notably Robo 3T), you need the latest version from either the development branch or a production release of 3.6 or greater.
See also positional all $[] which also updates "multiple array elements" but without applying to specified conditions and applies to all elements in the array where that is the desired action.
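As a quick sketch of that form (again pymongo, with an assumed collection handle), $[] touches every element with no filter:

# Reset every answeredBy array in every array1/array2 element.
collection.update_many(
    {},
    {"$set": {"array1.$[].array2.$[].answeredBy": []}},
)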
I know this is a very old question, but I just struggled with this problem myself and found what I believe to be a better answer.
A way to solve this problem is to use sub-documents. This is done by nesting schemas within your schemas:
Array2Schema = new mongoose.Schema({
    answeredBy: [...]
})

Array1Schema = new mongoose.Schema({
    array2: [Array2Schema]
})

MainSchema = new mongoose.Schema({
    array1: [Array1Schema]
})
This way the object will look like the one you show, but now each array is filled with sub-documents. This makes it possible to dot your way into the sub-document you want. Instead of using .update() you then use .find() or .findOne() to get the document you want to update.
Main.findOne(
    {
        _id: 1
    }
)
.exec(
    function(err, result) {
        result.array1.id(12).array2.id(123).answeredBy.push('success')
        result.save(function(err) {
            console.log(result)
        });
    }
)
Haven't used the .push() function this way myself, so the syntax might not be right, but I have used both .set() and .remove(), and both works perfectly fine.
I want to keep some large, static dictionaries in config to keep my main application code clean. Another reason for doing that is so the dicts can be occasionally edited without having to touch the application.
I thought a good solution was using a json config a la:
http://www.ilovetux.com/Using-JSON-Configs-In-Python/
JSON is a natural, readable format for this type of data. Example:
{
    "search_dsl_full": {
        "function_score": {
            "boost_mode": "avg",
            "functions": [
                {
                    "filter": {
                        "range": {
                            "sort_priority_inverse": {
                                "gte": 200
                            }
                        }
                    },
                    "weight": 2.4
                }
            ],
            "query": {
                "multi_match": {
                    "fields": [
                        "name^10",
                        "search_words^5",
                        "description",
                        "skuid",
                        "backend_skuid"
                    ],
                    "operator": "and",
                    "type": "cross_fields"
                }
            },
            "score_mode": "multiply"
        }
    }
}
The big problem is, when I import it into my Python app and set a dict equal to it like this:
with open("config.json", "r") as fin:
    config = json.load(fin)

...

def create_query():
    query_dsl = config['search_dsl_full']
    return query_dsl
and then later, only when a certain condition is met, I need to update that dict like this:
if (special condition is met):
    query_dsl['function_score']['query']['multi_match']['operator'] = 'or'
Since query_dsl is a reference, it updates the config dictionary too. So when I call the function again, it reflects the updated-for-special-condition version ("or") rather than the desired config default ("and").
I realize this is a newb issue (yes, I'm a Python newb), but I can't seem to figure out a 'pythonic' solution. I'm trying not to be a hack.
Possible options:
When I set query_dsl equal to the config dict, use copy.deepcopy() (see the sketch after this list)
Figure out how to make all nested slices of the config dictionary immutable
Maybe find a better way to accomplish what I'm trying to do? I'm totally open to this whole approach being a preposterous newbie mistake.
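For reference, here is what the first option would look like; a minimal sketch of my current function with the copy added:

import copy

def create_query():
    # Hand back a deep copy so later mutations can't touch the config default.
    return copy.deepcopy(config['search_dsl_full'])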
Any help appreciated. Thanks!
This is a simplistic example of a dictionary created by a json.load that I have to deal with:
{
    "name": "USGS REST Services Query",
    "queryInfo": {
        "timePeriod": "PT4H",
        "format": "json",
        "data": {
            "sites": [{
                "id": "03198000",
                "params": "[00060, 00065]"
            },
            {
                "id": "03195000",
                "params": "[00060, 00065]"
            }]
        }
    }
}
Sometimes there may be 15-100 sites with unknown sets of parameters at each site. My goal is to either create two lists (one storing "site" IDs and the other storing "params") or a much simplified dictionary from this original dictionary. Is there a way to do this using nested for loops with key,value pairs using the iteritems() method?
What I have tried so far is this:
queryDict = {}
for key, value in WS_Req_dict.iteritems():
    if key == "queryInfo":
        if value == "data":
            for key, value in WS_Req_dict[key][value].iteritems():
                if key == "sites":
                    siteVal = key
                if value == "params":
                    paramList = [value]
            queryDict["sites"] = siteVal
            queryDict["sites"]["params"] = paramList
I run into trouble getting the second for loop to work. I haven't looked into pulling out the lists yet.
I think this may be an overall stupid way of doing it, but I can't see around it yet.
I think you can make your code much simpler by just indexing, when feasible, rather than looping over iteritems.
for site in WS_Req_dict['queryInfo']['data']['sites']:
    queryDict[site['id']] = site['params']
If some of the keys might be missing, dict's get method is your friend:
for site in WS_Req_dict.get('queryInfo',{}).get('data',{}).get('sites',[]):
would let you quietly ignore missing keys. But, this is much less readable, so, if I needed it, I'd encapsulate it into a function -- and often you may not need this level of precaution! (Another good alternative is a try/except KeyError encapsulation to ignore missing keys, if they are indeed possible in your specific use case).
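The try/except variant, as a sketch (the function name is just for illustration):

def get_sites(req_dict):
    # Return the sites list, or an empty list if any key along the path is missing.
    try:
        return req_dict['queryInfo']['data']['sites']
    except KeyError:
        return []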
I have a document layout like this:
Program = {
    '_id': ObjectId('4321...'),
    'Title': 'The Title',
    'Episodes': [
        {
            'Ep_ID': '234122',  # this is unique
            'Title': 'Ep1',
            'Duration': 45.2
        },
        {
            'Ep_ID': '342343',  # unique
            'Title': 'Ep2',
            'Duration': 32.3
        }
    ]
}
What I would like to do is add another embedded doc within each Episode, like this:
Program = {
    '_id': ObjectId('4321...'),
    'Title': 'The Title',
    'Episodes': [
        {
            'Ep_ID': '234122',  # this is unique
            'Title': 'Ep1',
            'Duration': 45.2,
            'FileAssets': [
                { 'FileName': 'video1.mov', 'FileSize': 2348579234 },
                { 'FileName': 'video2.mov', 'FileSize': 32343233 }
            ]
        },
        {
            'Ep_ID': '342343',  # unique
            'Title': 'Ep2',
            'Duration': 32.3,
            'FileAssets': [
                { 'FileName': 'video1.mov', 'FileSize': 12423773 },
                { 'FileName': 'video2.mov', 'FileSize': 456322 }
            ]
        }
    ]
}
However, I can't figure out how to add/mod/del a doc at that '3rd' level. Is it possible, or even good design? I would dearly love to have all the data in one doc, but managing it is starting to seem too complex.
The other thought I had was to use the unique values that happen to exist for the sub-docs as keys. I've thought about my sub-docs and they all have some kind of unique value. So I could do this:
Program = {
    '_id': ObjectId('4321...'),
    'Title': 'The Title',
    'Ep_ID_234122': {episode data},
    'Ep_ID_342343': {episode data},
    'FileAsset_BigRaid_Video1.mov': {'Ep_ID_234122', + other file asset data},
    'FileAsset_BigRaid_video2.mov': {'Ep_ID_234122', + other file asset data}
}
Any thoughts would be great!
Yes, you can definitely structure your data to have that kind of nesting. What's more, you definitely shouldn't need to do anything special to accomplish it (at least using pymongo). Just fetch your docs and update them if you need to modify existing documents, if that's your problem?
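For instance, pushing a new file asset into one episode's FileAssets might look like this; a pymongo sketch assuming a database handle named db (the asset values are made up):

# Match the episode by its unique Ep_ID; the positional $ resolves to the
# matched Episodes index, and the new asset is appended to its FileAssets.
db.programs.update_one(
    {"Episodes.Ep_ID": "234122"},
    {"$push": {"Episodes.$.FileAssets": {"FileName": "video3.mov",
                                         "FileSize": 123456}}},
)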
At least, that goes for your first idea. Your second idea for a schema is not at all a nice way to structure that data. For one, it will be impossible for you to easily iterate over subsets of a Program document without doing string matching on the keys, and that will get expensive.
That said, I'm currently dealing with some major MongoDB performance issues, so I would probably recommend you keep your file assets in a separate collection. It will make it easier for you to scale later if you plan for this data set to become large.