Accessing Python object in tuple

I am using easyzone and dnspython to extract DNS records from a zone file. When extracting A records I am given back a string and an object in a tuple. I am new to Python, coming from PHP, and am not quite sure how to get at this object to read its value. I had no problems getting the string value in the tuple.
In this code snippet I iterate through the A records and write the values into a CSV:
# Write all A records
for a in z.names.items():
    c.writerow([domain, 'A', a.__getitem__(0), a])
a contains the following:
('www.121dentalcare.com.', <easyzone.easyzone.Name object at 0x1012dd190>)
How would I access this object within a, which is the second half of this tuple?

You can use indices to get items from a tuple:
sometuple[1]
just as you can do with lists and strings (see sequence types).
The documentation of easyzone is a little on the thin side, but from looking at the source code it appears the easyzone.easyzone.Name objects have .name, .soa and .ttl attributes:
print(sometuple[1].name)
The .soa attribute is another custom class, with .mname, .rname, .serial, .refresh, .retry, .expire and .minttl properties.
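Put together, the lookup looks like the sketch below. The Name class here is a hypothetical stand-in for easyzone.easyzone.Name (easyzone itself is not imported), just to show the tuple indexing and attribute access:

```python
class Name:
    """Hypothetical stand-in for easyzone.easyzone.Name."""
    def __init__(self, name, ttl=3600):
        self.name = name
        self.ttl = ttl

# The shape of each item yielded by z.names.items()
record = ('www.121dentalcare.com.', Name('www.121dentalcare.com.'))

hostname = record[0]   # first element: a plain string
name_obj = record[1]   # second element: the Name object
print(name_obj.name)   # 'www.121dentalcare.com.'
print(name_obj.ttl)    # 3600
```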


get item from dict and count number of items in the list

I used jsondiff.diff() to compare the flattened items (as JSON) from two JSON mappings of an Elasticsearch index. One is what I loaded (the original mapping I created) and the other is the mapping queried from Elasticsearch, so it could technically differ if the data inserted was different from expected.
The mappings are identical, so to test finding the diff I added 2 different fields to the original mapping.
Using jsondiff.diff() works beautifully, and I find my differences.
diff(dfpulledlist, dfloadedlist)
diff(dfloadedlist, dfpulledlist)
That produces:
{insert: [(114, 'mappings.properties.saw_something.properties.monster_fish'), (121, 'mappings.properties.aliens')]}
It looks like a dictionary, and type() tells me it is a <class 'dict'>, but trying to get the items under insert gives me errors. So I printed the items and found something strange. Now I'm trying to figure out how to get at the items of insert so I can count them, and I don't know how to work my way through this.
print(diff_pulled_to_loaded.items())
Gives me:
dict_items([(insert, [(114, 'mappings.properties.saw_something.properties.monster_fish'), (121, 'mappings.properties.aliens')])])
Looking at this page, it looks like a dictionary built from a sequence of key-value pairs. I can't access it as a dictionary, though, with either dict['insert'] or dict.insert. On each attempt I get KeyError: 'insert'.
How do I get to the values of insert, which is an array or list of tuples, so I can query that information? What am I missing/misunderstanding here? I want to get to this:
[(114, 'mappings.properties.saw_something.properties.monster_fish'),
(121, 'mappings.properties.aliens')]
It's a dictionary, but keys like insert and replace are not strings; they're instances of the jsondiff.Symbol class. That's why they don't have quotes around them: this class has a custom representation that just returns the symbol name.
So to access it I think you have to use
d = diff(dfpulledlist, dfloadedlist)
print(d[jsondiff.insert])
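A minimal, stdlib-only simulation of what is happening (the real keys are jsondiff.Symbol instances; the stand-in class below only mimics the repr behaviour that makes the key print without quotes):

```python
class Symbol:
    """Stand-in for jsondiff.Symbol: repr is just the bare name."""
    def __init__(self, label):
        self.label = label
    def __repr__(self):
        return self.label

insert = Symbol('insert')

d = {insert: [(114, 'mappings.properties.saw_something.properties.monster_fish'),
              (121, 'mappings.properties.aliens')]}

print(d)              # looks like {insert: [...]}, as if the key were bare
print('insert' in d)  # False: the key is the Symbol object, not a string
added = d[insert]     # look it up with the symbol object itself
print(len(added))     # 2 inserted fields
```

With the real library, the equivalent lookup key is jsondiff.insert, as shown above.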

Python dict automatically converting list-type values to simple objects

I'm facing a strange problem with python dictionary.
[Screenshot: expanding the file key-value pair]
As you can see above, inside my request data dictionary I have a key named file whose value is a list (of file objects).
When I try to get this object, I get the actual file object located at index 0.
Even the type shows that it's an object rather than a list. Why is that the behaviour?
type(request.data['file'])
django.core.files.uploadedfile.InMemoryUploadedFile
I tried extracting this particular key-value pair, but I still get the value as an object rather than a list, which is not what I want.
{k:v for k,v in request.data.items() if k in ['file']}
Out[16]: {'file': <InMemoryUploadedFile: irreg-total2.jpg (image/jpeg)>}
Can anyone explain this behaviour?
I'm using Python 3.
data['file'] is not a list. It is a single instance of InMemoryUploadedFile, as indicated by {InMemoryUploadedFile}.
<class 'list'>: [<InMemoryUploadedFile: ireg-totla2.jpg (image/jpeg)>] is just the representation of that object (as returned from the __repr__() method).
See the UploadedFile.__repr__() method at https://docs.djangoproject.com/en/2.1/_modules/django/core/files/uploadedfile/#InMemoryUploadedFile
Compare that to the rest of the elements in the data dict.
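One likely contributing factor: for multipart uploads, request.data is a Django QueryDict (a MultiValueDict), where plain [] access returns the last value stored under a key while .getlist() returns all of them. A rough stdlib sketch of that behaviour (this class is a simplified stand-in, not Django's actual implementation):

```python
class MultiValueDictSketch(dict):
    """Simplified stand-in for Django's MultiValueDict: every key maps to
    a list internally, but plain [] access returns the last value."""
    def __getitem__(self, key):
        return super().__getitem__(key)[-1]
    def getlist(self, key):
        return super().__getitem__(key)

data = MultiValueDictSketch({'file': ['upload1.jpg', 'upload2.jpg']})
print(data['file'])          # 'upload2.jpg' -- a single item, not the list
print(data.getlist('file'))  # ['upload1.jpg', 'upload2.jpg']
```

So if you want the uploaded files as a list, request.data.getlist('file') is the QueryDict method to reach for.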

Python/Django Referencing Nested JSON

I'm using the Google Geocoding API to return the latitude and longitude of a given address. The API returns a massive JSON file with tons of nested objects. Here's Google's documentation on it.
I'm trying to grab the geometry.location.lat object (see documentation). Here's my current code, followed by the error it returns:
address_coords = gmaps.geocode(raw_address)
# gmaps makes the API request and returns the JSON into address_coords
address_lat = address_coords['results']['geometry']['location']['lat']
address_lon = address_coords['results']['geometry']['location']['lng']
TypeError: List indices must be integers or slices, not str
I'm not sure how to reference that JSON object.
The error is telling you that you're trying to index a list with a string, as if it were a dictionary.
Most likely results is the list, so you'd need to do the following to get the value for the first item, at index 0:
address_coords['results'][0]['geometry']['location']['lng']
The other possibility is that geometry contains multiple coordinates which you could similarly index:
address_coords['results']['geometry'][0]['location']['lng']
Either way, you should pretty print the structure to see which attributes are arrays versus dictionaries.
import pprint
pprint.pprint(address_coords)
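To make the shape concrete, here is a hypothetical, heavily trimmed-down response with the same nesting as the Geocoding API documentation (the real response carries many more fields, and the coordinates here are made up):

```python
address_coords = {
    'status': 'OK',
    'results': [  # a list: each entry is one candidate match
        {
            'geometry': {
                'location': {'lat': 37.4224764, 'lng': -122.0842499},
            },
        },
    ],
}

# 'results' is a list, so index into it before using string keys
lat = address_coords['results'][0]['geometry']['location']['lat']
lng = address_coords['results'][0]['geometry']['location']['lng']
print(lat, lng)
```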

Python - wrap same object to make it unique

I have a dictionary that is being built while iterating through objects. Now same object can be accessed multiple times. And I'm using object itself as a key.
So if same object is accessed more than once, then key becomes not unique and my dictionary is no longer correct.
Though I need to access it by object, because later on, if someone wants the contents, they can request them by the current object. And that will be correct, because it will access the last active object at that time.
So I'm wondering if it is possible to wrap object somehow, so it would keep its state and all attributes the same, but the only difference would be this new kind of object which is actually unique.
For example:
dct = {}
for obj in some_objects_lst:
    # Well this kind of wraps it, but it loses state, so if I would
    # instantiate it I would lose all information that was in that obj.
    wrapped = type('Wrapped', (type(obj),), {})
    dct[wrapped] = ...  # add some content
Now if there are some better alternatives than this, I would like to hear it too.
P.S. objects being iterated would be in different context, so even if object is the same, it would be treated differently.
Update
As requested, to give better example where the problem comes from:
I have this excel reports generator module. Using it, you can generate various excel reports. For that you need to write configuration using python dictionary.
Now before report is generated, it must do two things. Get metadata (metadata here is position of each cell that will be when report is about to be created) and second, parse configuration to fill cells with content.
One of the value types that can be used in this module is a formula (an Excel formula). And the problem in my question is specifically with one of the ways a formula can be computed: formula values for a parent row that are collected from its child rows.
For example imagine this excel file structure:
    A           | B           | C
    Total       | Childs Name | Amount
1   sum(childs) |             |
2               | child_1     | 10
3               | child_2     | 20
4   sum(childs) |             |
...
Now in this example the sum in cell A1 would need to be 10+20=30, if sum uses an expression that sums its child column (in this case column C). All of this works until the same object (I call these iterables) is repeated. When building the metadata I need to store it, to retrieve later, and the key is the iterated object itself. So when it is iterated again while parsing values, it will not see all the information, because some of it was overwritten by the same object.
For example imagine there are invoice objects, then there are partner objects which are related with invoices and there are some other arbitrary objects that given invoice and partner produce specific amounts.
So when extracting such information in excel, it goes like this:
invoice1 -> partner1 -> amount_obj1, amount_obj2
invoice2 -> partner1 -> amount_obj3, amount_obj4
Notice that the partner in the example is the same. Here is the problem: I can't store it as a key, because when parsing values I will iterate over this object twice, while the metadata will only hold the values for amount_obj3 and amount_obj4.
P.S. I don't know if I explained it well, because there is lots of code and I don't want to put huge walls of code here.
Update2
I'll try to explain this problem from more abstract angle, because it seems being too specific just confuses everyone even more.
So given objects list and empty dictionary, dictionary is built by iterating over objects. Objects act as a key in dictionary. It contains metadata used later on.
Now the same list can be iterated again for a different purpose. When that's done, it needs to access the dictionary values using the iterated object (the same objects that are keys in that dictionary). But the problem is, if the same object was used more than once, the dictionary will only have the latest stored value for that key.
It means the object is not a unique key here. The only thing I know when I need to retrieve the value is the object itself. But because it is the same iteration, the iteration index will be the same when accessing the same object both times.
So uniqueness I guess then is (index, object).
I'm not sure I understand your problem, so here are two options. If it's the object content that matters, keep object copies as keys. Something crude like
import copy

new_obj = copy.deepcopy(obj)
dct[new_obj] = whatever_you_need_to_store(new_obj)
If the object doesn't change between the first time it's checked by your code and the next, the operation is just performed the second time with no effect. Not optimal, but probably not a big problem. If it does change, though, you get separate records for old and new ones. For memory saving you will probably want to replace copies with hashes, __str__() method that writes object data or whatever. But that depends on what your object is; maybe hashing will take too much time for miniscule savings in memory. Run some tests and see what works.
If, on the other hand, it's important to keep the same value for the same object, whether the data within it have changed or not (say, object is a user session that can change its data between login and logoff), use object ids. Not the builtin id() function, because if the object gets GCed or deleted, some other object may get its id. Define an id attribute for your objects and make sure different objects cannot possibly get the same one.
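A third option, following the (index, object) idea from the question itself: since both passes visit the objects in the same order, a composite key of (position, id(obj)) is unique even when the same object appears twice. A sketch under that assumption (id() is safe here only because the objects stay alive for the duration of both passes; build_metadata and the sample strings are illustrative names):

```python
def build_metadata(objects):
    # One entry per *occurrence*, not per object
    meta = {}
    for i, obj in enumerate(objects):
        meta[(i, id(obj))] = 'metadata for occurrence %d of %r' % (i, obj)
    return meta

partner1 = 'partner1'                        # the same object, used twice
objects = ['invoice1', partner1, 'invoice2', partner1]
meta = build_metadata(objects)
print(len(meta))     # 4 entries, one per occurrence

# Second pass: the same enumeration recovers each occurrence's own entry
for i, obj in enumerate(objects):
    print(meta[(i, id(obj))])
```

Nothing is overwritten on the second occurrence of partner1, because the index part of the key differs.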

Convert string representation of list of objects back to list in python

I have a list of objects, that has been stringified:
u'[<object: objstuff1, objstuff2>, <object: objstuff1, objstuff2>]'
I want to convert this back into a list:
[<object: objstuff1, objstuff2>, <object: objstuff1, objstuff2>]
I've tried using ast.literal_eval(), but unfortunately, it doesn't seem to work if the elements are objects, and I get a SyntaxError.
Is there any way I can reconvert my string representation of the list of objects back into a list?
You need to have a look at the pickle module to do this.
Basically, dump your objects using pickle.dumps, and load them back using pickle.loads.
ast.literal_eval doesn't work obviously, because there is a lot of information related to the objects (like attributes, and values) which is simply not captured in that string. Also note that you will be able to resurrect only the pickled data, if all you have are those string representations right now, you won't be able to create the objects back from them because of the information loss.
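A round-trip sketch of that workflow, using a hypothetical Thing class whose repr matches the strings in the question:

```python
import pickle

class Thing:
    """Hypothetical object whose repr looks like the question's strings."""
    def __init__(self, a, b):
        self.a, self.b = a, b
    def __repr__(self):
        return '<object: %s, %s>' % (self.a, self.b)

objs = [Thing('objstuff1', 'objstuff2'), Thing('objstuff1', 'objstuff2')]

blob = pickle.dumps(objs)        # bytes, not the lossy repr string
restored = pickle.loads(blob)    # real objects, attributes intact
print(restored)                  # prints the same-looking list
print(restored[0].a)             # 'objstuff1'
```

The key point is that you must store the pickled bytes in the first place; the `u'[<object: ...>]'` repr string alone cannot be turned back into objects.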
