Getting values from another worksheet through loops in Python

What I'm trying to do is get each column from the Source worksheet and paste it into Target_sheet in another spreadsheet; each paste should start from the third row (e.g. A3:A, B3:B, ...).
However, I get errors such as:
{
    "field": "data.values[1731]",
    "description": "Invalid value at 'data.values[1731]' (type.googleapis.com/google.protobuf.ListValue), \"x23232x2x2x442x42x42x42\""
},
{
    "field": "data.values[1732]",
    "description": "Invalid value at 'data.values[1732]' (type.googleapis.com/google.protobuf.ListValue), \"x242x42x42x42x42x442x427\""
},
{
    "field": "data.values[1733]",
    "description": "Invalid value at 'data.values[1733]' (type.googleapis.com/google.protobuf.ListValue), \"x42x424242x42454555x56666\""
}
...
My code:

sh = client.open('Target')
sh.values_clear("Target_sheet!A3:J10000")

source = client.open('Source')
source_col_numbers = source.sheet1.col_count

i = 1
# creating a holder for the values in Source.sheet1
columns = {}

# getting the values in each column of Source.sheet1
while i <= source_col_numbers:
    columns[i] = list(filter(None, source.sheet1.col_values(i)))
    i += 1

# will use this variable to iterate between columns in Target.Target_sheet
charn = ord("A")

# updating the columns in Target with values from Source
b = 1
while b <= source_col_numbers:
    sh.values_update(
        "Target_sheet!" + chr(charn) + "3:" + chr(charn),
        params={'valueInputOption': 'USER_ENTERED'},
        body={'values': columns[b]}
    )
    charn += 1
    b += 1
@carlesgg97 I tried with values_get but I still get the error I mentioned under your comment:
target_worksheet.values_clear("Target!A3:J10000")

source = client.open('Source')
source_col_numbers = source.sheet1.col_count
source_values = source.values_get('Sheet1!A:J')
last_column = chr(source_col_numbers)

target_worksheet.values_update(
    "Target!A3:" + last_column,
    params={'valueInputOption': 'USER_ENTERED'},
    body={'values': source_values}
)

Using col_values you obtain the values of the column as a flat list. However, the values_update method requires the "values" property in the body to be a list of lists: one inner list per row (see the gspread values_update documentation).
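For instance, a flat column list can be wrapped so that each value becomes its own single-element row before being passed as the body (a minimal sketch; the column data here is made up):

```python
# col_values returns a flat list of cell values
column = ["x23", "x24", "x42"]

# values_update expects a list of rows, so wrap each value in its own list
rows = [[value] for value in column]

print(rows)  # [['x23'], ['x24'], ['x42']]
```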
Further to that, I believe the task you are attempting can be accomplished in a much simpler way using values_get. An example that moves the range A1:J9998 down two rows to A3:J10000 would look as follows:
sh = client.open('Target')
response = sh.values_get("Target_sheet!A1:J9998")
sh.values_clear("Target_sheet!A3:J10000")
sh.values_update(
    'Target_sheet!A3:J10000',
    params={
        'valueInputOption': 'USER_ENTERED'
    },
    body={
        'values': response['values']
    }
)

Related

Django adding data into model from nested json returning TypeError: 'NoneType' object is not subscriptable

I am using a third-party API to get data and add it into my database via the objects.update_or_create() method. The data has many records, and some fields in the response exist only for certain records.
Below is a snippet of the JSON returned from the API; this data is only present for some of the records in the response. When I try to add this data to my model, I get the following error:
'f_name': i.get('card_faces')[0].get('name'),
TypeError: 'NoneType' object is not subscriptable
I am trying to have it so that if the card_faces field exists, True is added to the card_face column in the database, and then the card_faces name to the database. If card_faces doesn't exist, then False is added to the card_face column in the database, and subsequent fields are null.
JSON:
{
    "data": [
        {
            "name": "Emeria Captain"
        },
        {
            "name": "Emeria's Call // Emeria, Shattered Skyclave",
            "card_faces": [
                {
                    "object": "card_face",
                    "name": "Emeria's Call"
                },
                {
                    "object": "card_face",
                    "name": "Emeria, Shattered Skyclave"
                }
            ]
        }
    ]
}
views.py:

for i in card_data:
    Card.objects.update_or_create(
        id=i.get('id'),
        defaults={
            'name': i.get('name'),
            'card_faces': i.get('card_faces'),
            'f_name': i.get('card_faces')[0].get('name'),
            'b_name': i.get('card_faces')[1].get('name'),
        }
    )
If the card_faces field doesn't exist, the result of .get('card_faces') will be None, which you can't then index with [0].
Break the line apart and do a logic check instead. This solution assumes that if card_faces does exist, there will be an index 0 and 1; you haven't provided enough information to assume otherwise.
card_faces = i.get('card_faces')
f_name = None
b_name = None
if card_faces:
    f_name = card_faces[0].get('name')
    b_name = card_faces[1].get('name')

defaults = {
    'name': i.get('name'),
    'card_faces': True if card_faces else False,
    'f_name': f_name,
    'b_name': b_name,
}
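Applied to the sample JSON from the question, that logic check behaves as follows (a standalone sketch with the card data inlined; no database involved):

```python
cards = [
    {"name": "Emeria Captain"},
    {
        "name": "Emeria's Call // Emeria, Shattered Skyclave",
        "card_faces": [
            {"object": "card_face", "name": "Emeria's Call"},
            {"object": "card_face", "name": "Emeria, Shattered Skyclave"},
        ],
    },
]

for i in cards:
    card_faces = i.get("card_faces")
    f_name = None
    b_name = None
    if card_faces:
        f_name = card_faces[0].get("name")
        b_name = card_faces[1].get("name")
    # single-faced card: (False, None, None); double-faced card: (True, both names)
    print(bool(card_faces), f_name, b_name)
```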

Safe get when parent is null in dictionary

I am looking for a way to safely get a value from a nested dictionary.
.get() returns None if the key is not present in a dictionary, but if a value is itself None, None.get("value_2") will throw an error.
Sample dictionary:

[
    {
        "value": {
            "value_2": "string"
        }
    },
    {
        "value": null
    }
]
When iterating through the array, for the 0th element a.get("value").get("value_2") will give "string" as output, but for the second element a.get("value").get("value_2") gives an error. There needs to be a check whether value is None, and only then a get of value_2.
Is there any way to skip the if check and make Python return None? If the dictionary is nested more than one level deep, I would have to check for None at multiple levels.
I would suggest implementing a function like the one below:
vals = [
    {
        "value": {
            "value_2": "string"
        }
    },
    {
        "value": None
    }
]

def get_from_dict(dict_, path):
    # reverse the path segments so pop() returns keys in order
    path = path.split("/")[::-1]
    dict_ = dict_.get(path.pop())
    while dict_ is not None and len(path) > 0:
        dict_ = dict_.get(path.pop())
    return dict_

for a in vals:
    print(get_from_dict(a, "value/value_2"))
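An alternative, not from the answer above: the same walk can be expressed with functools.reduce, which keeps the lookup to a single expression however deep the nesting goes (a sketch using the same sample data):

```python
from functools import reduce

def get_nested(dict_, path):
    # descend one path segment at a time; collapse to None once a level is missing
    return reduce(
        lambda level, key: level.get(key) if isinstance(level, dict) else None,
        path.split("/"),
        dict_,
    )

print(get_nested({"value": {"value_2": "string"}}, "value/value_2"))  # string
print(get_nested({"value": None}, "value/value_2"))  # None
```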

How to get length as 0 if the dictionary is not available in Json file while parsing using python

I am trying to get the length of a dictionary as below; for the dictionary "ZZZZ" there may be multiple records available:

for j in range(len(json_file['entitity'][i]['XXXX']['YYYYY']['ZZZZ'])):

If the dictionary doesn't exist in the JSON file, I want the length to be 0. With that index I then need to set a variable value like below:

temp['EMPID'] = json_file['entities'][i]['XXXX']['YYYYY']['ZZZZ'][j]['re']['id']

Please suggest how I can make the "j" loop run zero times if the dictionary doesn't exist. Please find an example below:
"YYYYY": [
{
"ZZZZ": {
"id": "Z1234",
"type": "p1"
},
"id": "wer1234",
"prop": {
"dir": "South",
"Type": "C1"
}
},
{
"ZZZZ": {
"id": "Y1234",
"type": "p2"
},
"id": "ert12345",
"prop": {
"dir": "North",
"relationshipType": "C2"
}
}
]
In the above example, I am trying to get the value [ZZZZ][id] (the value should be "Z1234"). In the same way there is one more record with the value "Y1234". Because there are 2 records in total, I capture the length and then get the id value:

for j in range(len(json_file['YYYYY'])):                  # captures the length, 2 here
    temp['EMPID'] = json_file['YYYYY'][j]['ZZZZ']['id']   # captures the attribute value

But in some cases these attributes may not be present in my source JSON files. If the attributes are available, possibly with multiple records, I want to get the values as above; otherwise I want to populate null values for these id columns.
You can accomplish this by using the dict.get(key, default) method, supplying an empty list as the default value when the key doesn't exist in the dictionary.
This lets you iterate over the value at the specified key if it exists, and skip it otherwise.
Ex:

data = {
    'one': {},
    'two': {
        'a': {
            're': {
                'id': 1
            }
        },
        'b': {
            're': {
                'id': 1
            }
        }
    }
}

# Example with empty dictionary
for key in data.get('one', []):
    print(f'data[\'one\'] - {key}: {data["one"][key]}')

# Example with populated dictionary
for key in data.get('two', []):
    print(f'data[\'two\'] - {key}: {data["two"][key]}')

# Example with non-existent dictionary
for key in data.get('foo', []):
    print(f'data[\'foo\'] - {key}: {data["foo"][key]}')
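Applied back to the asker's structure, the same default makes the "j" loop simply run zero times when "YYYYY" is absent (a sketch; json_file is a trimmed copy of the sample data from the question):

```python
json_file = {
    "YYYYY": [
        {"ZZZZ": {"id": "Z1234", "type": "p1"}},
        {"ZZZZ": {"id": "Y1234", "type": "p2"}},
    ]
}

# .get with an empty-list default: when the key is absent the loop body never runs
ids = [record.get("ZZZZ", {}).get("id") for record in json_file.get("YYYYY", [])]
print(ids)                       # ['Z1234', 'Y1234']
print(len({}.get("YYYYY", [])))  # 0 when the key doesn't exist
```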

Google DLP: "ValueError: Protocol message Value has no "stringValue" field."

I have a method that builds a table of multiple items for Google's DLP inspect API, which can take either a ContentItem or a table of values.
Here is how the request is constructed:

def redact_text(text_list):
    dlp = google.cloud.dlp.DlpServiceClient()
    project = 'my-project'
    parent = dlp.project_path(project)
    items = build_item_table(text_list)
    info_types = [{'name': 'EMAIL_ADDRESS'}, {'name': 'PHONE_NUMBER'}]
    inspect_config = {
        'min_likelihood': "LIKELIHOOD_UNSPECIFIED",
        'include_quote': True,
        'info_types': info_types
    }
    response = dlp.inspect_content(parent, inspect_config, items)
    return response

def build_item_table(text_list):
    rows = []
    for item in text_list:
        row = {"values": [{"stringValue": item}]}
        rows.append(row)
    table = {"table": {"headers": [{"name": "something"}], "rows": rows}}
    return table
When I run this I get back the error ValueError: Protocol message Value has no "stringValue" field, even though this example and the docs say otherwise.
Is there something off in how I build the request?
Edit: here's the output from build_item_table:
{
    'table': {
        'headers': [
            {'name': 'value'}
        ],
        'rows': [
            {
                'values': [
                    {
                        'stringValue': 'My name is Jenny and my number is (555) 867-5309, you can also email me at anemail#gmail.com, another email you can reach me at is email#email.com. '
                    }
                ]
            },
            {
                'values': [
                    {
                        'stringValue': 'Jimbob Doe (555) 111-1233, that one place down the road some_email#yahoo.com'
                    }
                ]
            }
        ]
    }
}
Try string_value instead: the Python client uses the proto field names (snake_case), not the camelCase names you see in the JSON examples.
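A corrected build_item_table along those lines might look like this (a sketch; since the table is built from plain dicts, it can be checked without calling the DLP service):

```python
def build_item_table(text_list):
    # use the snake_case proto field name string_value,
    # not the camelCase stringValue from the JSON examples
    rows = [{"values": [{"string_value": item}]} for item in text_list]
    return {"table": {"headers": [{"name": "value"}], "rows": rows}}

items = build_item_table(["first row", "second row"])
print(items["table"]["rows"][0]["values"][0]["string_value"])  # first row
```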

KeyError: 'Bytes_Written' python

I do not understand why I get this error. Bytes_Written is in the dataset, so why can't Python find it? I am getting this information (see the dataset below) from a VM. I want to select Bytes_Written and Bytes_Read, subtract the previous value from the current value, and print a JSON object like this:

{'Bytes_Written': previousValue-currentValue, 'Bytes_Read': previousValue-currentValue}
Here is what the data looks like:

{
    "Number of Devices": 2,
    "Block Devices": {
        "bdev0": {
            "Backend_Device_Path": "/dev/disk/by-path/ip-192.168.26.1:3260-iscsi-iqn.2010-10.org.openstack:volume-d1c8e7c6-8c77-444c-9a93-8b56fa1e37f2-lun-010.0.0.142",
            "Capacity": "2147483648",
            "Guest_Device_Name": "vdb",
            "IO_Operations": "97069",
            "Bytes_Written": "34410496",
            "Bytes_Read": "363172864"
        },
        "bdev1": {
            "Backend_Device_Path": "/dev/disk/by-path/ip-192.168.26.1:3260-iscsi-iqn.2010-10.org.openstack:volume-b27110f9-41ba-4bc6-b97c-b5dde23af1f9-lun-010.0.0.146",
            "Capacity": "2147483648",
            "Guest_Device_Name": "vdb",
            "IO_Operations": "93",
            "Bytes_Written": "0",
            "Bytes_Read": "380928"
        }
    }
}
This is the complete code that I am running:

FIELDS = ("Bytes_Written", "Bytes_Read", "IO_Operation")

def counterVolume_one(state):
    url = 'http://url'
    r = requests.get(url)
    data = r.json()
    for field in FIELDS:
        state[field] += data[field]
    return state

state = {"Bytes_Written": 0, "Bytes_Read": 0, "IO_Operation": 0}

while True:
    counterVolume_one(state)
    time.sleep(1)
    for field in FIELDS:
        print("{field:s}: {count:d}".format(field=field, count=state[field]))
    counterVolume_one(state)
Your returned JSON structure does not have any of the FIELDS = ("Bytes_Written", "Bytes_Read", "IO_Operation") keys at the top level; they live under each entry of "Block Devices".
You'll need to modify your code slightly:

data = r.json()
for block_device in data['Block Devices']:
    for field in FIELDS:
        state[field] += int(data['Block Devices'][block_device][field])

Note also that the data uses "IO_Operations" (plural) while your FIELDS tuple has "IO_Operation", so that field will still raise a KeyError until the names match.
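To get the per-interval deltas the question describes, one approach (my own suggestion, not from the answer above) is to sum the counters across devices on each poll and diff against the previous snapshot. The sample dicts below stand in for the HTTP response, and the key is IO_Operations, plural, matching the data:

```python
FIELDS = ("Bytes_Written", "Bytes_Read", "IO_Operations")

def totals(data):
    # sum each counter across all block devices (the values arrive as strings)
    return {
        field: sum(int(dev[field]) for dev in data["Block Devices"].values())
        for field in FIELDS
    }

previous = totals({"Block Devices": {
    "bdev0": {"Bytes_Written": "100", "Bytes_Read": "200", "IO_Operations": "5"}}})
current = totals({"Block Devices": {
    "bdev0": {"Bytes_Written": "150", "Bytes_Read": "260", "IO_Operations": "9"}}})

delta = {field: current[field] - previous[field] for field in FIELDS}
print(delta)  # {'Bytes_Written': 50, 'Bytes_Read': 60, 'IO_Operations': 4}
```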
