Extracting values from a dictionary by key: "string indices must be integers" - Python

I'm trying to extract values by key from a dictionary received with websocket-client, and for some reason it throws the error "string indices must be integers".
No matter how I try to do it I keep getting the same error, unless I write the dictionary out as literal lines of code, in which case it works. Unfortunately, that's not what I'm after...
Example:
ws = websocket.WebSocket()
ws.connect("websocket link")
info = ws.recv()
print(info["c"])
ws.close()
Output:
Traceback (most recent call last):
File "C:\Python\project\venv\example\example.py", line 14, in <module>
print(info["c"])
TypeError: string indices must be integers
While if im taking the same dictionary and writing it down suddenly it works...
Example:
example = {"a":"hello","b":123,"c":"whatever"}
print(example["c"])
Output:
whatever
Any help is appreciated, thanks!
SOLUTION
First, import the websocket and json modules: what you receive from the socket is a JSON string, not a dictionary, so you have to parse it with json.loads() before indexing it.
import websocket
import json
ws = websocket.WebSocket()
ws.connect("websocket link")
info = json.loads(ws.recv())
print(info["c"])
ws.close()
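The error itself comes from the fact that ws.recv() returns a str, and indexing a string with another string is what raises the TypeError. Both sides of the problem can be reproduced with the standard library alone; the payload string below is a stand-in for what the socket would deliver:

```python
import json

# Stand-in for ws.recv(): the socket hands you the raw text frame as a plain str
raw = '{"a": "hello", "b": 123, "c": "whatever"}'

# Indexing the string with "c" is exactly what triggers the TypeError
try:
    raw["c"]
except TypeError as e:
    print("TypeError:", e)

# Parsing it first gives a real dict, which supports string keys
info = json.loads(raw)
print(info["c"])  # whatever
```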


Related

Parse SQS stringify json message Python

I have an SQS queue which triggers a lambda function; as the message I pass stringified JSON. I am trying to get the whole message body from the Records, but it throws an error:
[ERROR] KeyError: 'efsPathIn'
Traceback (most recent call last):
File "/var/task/lambda_function.py", line 20, in lambda_handler
key = bodyDict['efsPathIn']
I'm sending this stringified JSON as a message to the SQS queue:
{"Records": [{"body": "{\"efsPathIn\": \"163939.jpeg\",\"imageWidth\": \"492\",\"imageHeight\": \"640\",\"bucketOut\":\"output-bucket\",\"keyOut\":\"163939.webp\"}"}]}
And this is the code that extracts the values:
for item in event['Records']:
    body = str(item['body'])
    bodyDict = json.loads(body)
    key = bodyDict['efsPathIn']
    bucket_out = bodyDict['bucketOut']
    width = bodyDict['imageWidth']
    height = bodyDict['imageHeight']
    key_out = bodyDict['keyOut']
I've also tried json.dumps(item['body']) before loading the JSON, but I still get the same error.
When I test from AWS Lambda test console using the above mentioned json message, the function gets successfully executed, but I get this error while sending a message from a SQS queue.
json.dumps() is for converting a Python object into a JSON string. You have a JSON string that you need to convert into a Python object, so you should be calling json.loads(), like so:
body = json.loads(item['body'])
After which you will have a Python object, and you can do things like body['efsPathIn'].
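The Records structure from the question can be exercised locally without SQS or Lambda; this sketch uses the event literal copied from the question and shows json.loads() on the body string yielding a dict with the expected keys:

```python
import json

# Event shaped exactly like the SQS message from the question
event = {"Records": [{"body": "{\"efsPathIn\": \"163939.jpeg\",\"imageWidth\": \"492\",\"imageHeight\": \"640\",\"bucketOut\":\"output-bucket\",\"keyOut\":\"163939.webp\"}"}]}

for item in event['Records']:
    # The body is a JSON *string*; parse it into a dict before key lookups
    body_dict = json.loads(item['body'])
    print(body_dict['efsPathIn'])   # 163939.jpeg
    print(body_dict['bucketOut'])   # output-bucket
```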

Scrape brand suggestions from Keepa with Python

I'm trying to scrape brand suggestions from https://keepa.com/#!finder
For example, when you type 'ni' into the "brand" field, a list of suggestions appears, including all brands starting with 'ni', like 'nike'.
I know that a websocket session is responsible for that. When you press a key, the page sends a gzipped JSON string over that session and receives a response like this (the yellow items in the screenshot are the gzipped strings).
When I'm trying to recreate this in Python, it fails.
The following code
from websocket import create_connection
import json
import zlib
websocket_resource_url = 'wss://dyn.keepa.com/apps/cloud/?app=keepaWebsite&version=1.6'
ws = create_connection(websocket_resource_url)
msg = {"path":"pro/autocomplete","content":"nike","domainId":1,"maxResults":25,"type":"brand","version":3,"id":111111,"user":"user"}
ws.send(zlib.compress((json.dumps(msg)+'\n').encode('utf-8')))
print('Result: {}'.format(zlib.decompress(ws.recv())))
returns {"status":108, "n":73}
But the expected output is something like:
{"startTimeStamp":1637576348774,"id":7596,"timeStamp":1637576348774,"status":200,"version":3,"lastUpdate":0,"suggestions":[{"value":"nike","count":1100713},{"value":"nike golf","count":7497},{"value":"nikeelando","count":358},{"value":"nikea","count":195},{"value":"nike sb","count":87}]}
What am I doing wrong?
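One thing worth ruling out is the compression step itself. The framing used in the question (JSON plus a trailing newline, zlib-compressed) round-trips cleanly with the standard library, which suggests the problem lies on the protocol side (the meaning of status 108 is not documented here) rather than in the encoding:

```python
import json
import zlib

# The message from the question
msg = {"path": "pro/autocomplete", "content": "nike", "domainId": 1,
       "maxResults": 25, "type": "brand", "version": 3,
       "id": 111111, "user": "user"}

# Same framing as the question: JSON + newline, zlib-compressed
wire = zlib.compress((json.dumps(msg) + '\n').encode('utf-8'))

# Decompress and parse to confirm the round trip is lossless
back = json.loads(zlib.decompress(wire).decode('utf-8'))
print(back == msg)  # True
```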

How to get the status of the pull request of an issue in jira via python

I am not able to understand how to fetch the status of a pull request via the Python jira API.
I have gone through https://jira.readthedocs.io/en/latest/examples.html and searched the internet, but I was not able to link the Jira issue with the pull request. I saw that the pull request is linked to the Jira issue id, but I could not work out how to implement it.
I am using python 3.7
from jira import JIRA
issue = auth_jira.issue('XYZ-000')
pull_request = issue.id.pullrequest
I am getting this error:
AttributeError: 'str' object has no attribute 'pullrequest'
I am not sure how to access pullrequest data in jira.
Any leads would help.
I did something similar with another Python wrapper for the Jira API: atlassian-python-api.
Look if it works in your case:
from atlassian import Jira
from pprint import pprint
import json
jira = Jira(
    url='https://your.jira.url',
    username=user,
    password=pwd)
issue = jira.get_issue(issue_key)
# get the custom field ref of the "Development" field (I don't know if it's always the same):
dev_field_string = issue["fields"]["customfield_13900"]
# the value of this field is a huge string containing a json, that we must parse ourselves:
json_str = dev_field_string.split("devSummaryJson=")[1][:-1]
# we load it with the json module (this ensures json is converted as dict, i.e. 'true' is interpreted as 'True'...)
devSummaryJson = json.loads(json_str)
# the value of interest are under cachedValue/summary:
dev_field_dic = devSummaryJson["cachedValue"]["summary"]
pprint(dev_field_dic)
# you can now access the status of your pull requests (actually only the last one):
print(dev_field_dic['pullrequest']['overall']['state'])
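The split/parse step above can be illustrated offline with a made-up field value. The string below is a hypothetical, heavily shortened stand-in for the "Development" custom field (the real value from Jira is a much larger toString()-style blob, and the customfield id may differ per instance):

```python
import json

# Hypothetical "Development" field value: some wrapper text, then
# devSummaryJson={...json...}, then the wrapper's own closing brace
dev_field_string = ('DevSummaryBean{errors=[], '
                    'devSummaryJson={"cachedValue":{"summary":'
                    '{"pullrequest":{"overall":{"state":"OPEN","count":1}}}}}}')

# Everything after "devSummaryJson=", minus the trailing wrapper brace, is JSON
json_str = dev_field_string.split("devSummaryJson=")[1][:-1]
summary = json.loads(json_str)["cachedValue"]["summary"]
print(summary['pullrequest']['overall']['state'])  # OPEN
```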

AWS boto3 invoke lambda returns None payload

I'm not quite sure why this piece of code fails; I'd be happy to hear your thoughts. I've used examples from boto3 and it works in general, but I get an AttributeError all of a sudden in some cases. Please explain what I am doing wrong, because if the payload were None I would expect a JSON decoding error, not a None object.
Here is a simplified version of a code that causes the exception.
import boto3
import json
client = boto3.client('lambda', region_name='...')
res = client.invoke(
    FunctionName='func',
    InvocationType='RequestResponse',
    Payload=json.dumps({'param': 123})
)
payload = json.loads(res["Payload"].read().decode('utf-8'))
for k in payload.keys():
    print(f'{k} = {payload[k]}')
The error
----
[ERROR] AttributeError: 'NoneType' object has no attribute 'keys'
Traceback (most recent call last):
.....
Just replicated your issue in my environment by creating a lambda that doesn't return anything and calling it with boto3. The "null" body passes through json.loads without error, but the resulting None doesn't have any keys since it's not a dictionary.
def lambda_handler(event, context):
    pass
I created my code just like yours and got the same error. Oddly, I was able to get the JSON decoding error instead by printing out the streaming body with this line
print(res["Payload"].read().decode('utf-8'))
before loading it. At first I had no idea why this happened.
Edit: Looks like once you read from the StreamingBody object it's empty from then on. https://botocore.amazonaws.com/v1/documentation/api/latest/reference/response.html#botocore.response.StreamingBody. My recommendation is to read the data from the streaming body and check for "null" and process as normal.
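The None behaviour is easy to confirm without AWS: a lambda that returns nothing produces the body "null", json.loads turns that into Python's None without raising, and None has no .keys(). A small guard, sketched with a made-up payload string in place of the StreamingBody read:

```python
import json

# What res["Payload"].read().decode('utf-8') yields for a lambda with no return value
raw = "null"

payload = json.loads(raw)  # JSON null -> Python None, no exception raised
if payload is None:
    print("lambda returned no payload")
else:
    for k in payload.keys():
        print(f"{k} = {payload[k]}")
```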

google.cloud.vision_v1.types.image_annotator.AnnotateImageResponse to Json in python

I am using the Google Vision document_text_detection function and I am trying to dump the AnnotateImageResponse to JSON.
Earlier this code used to work:
client = vision.ImageAnnotatorClient()
image = vision.Image(content=image)
response = client.document_text_detection(image=image)
texts = MessageToDict(response)
text_json = json.dumps(texts)
Now it throws this error: AttributeError: 'DESCRIPTOR'
I tried all the approaches in other answers but none of them worked. I also tried protobuf3-to-dict, but it throws an error too:
from protobuf_to_dict import protobuf_to_dict
text_json = protobuf_to_dict(response)
It throws:
AttributeError: 'ListFields'
I know I can iterate over the object, but I need to dump it to a JSON file to maintain a cache.
Found a better answer on the GitHub thread I was following, posted yesterday. Translated for this question:
import proto
client = vision.ImageAnnotatorClient()
image = vision.Image(content=image)
response = client.document_text_detection(image=image)
texts = proto.Message.to_json(response)
text_json = json.dumps(texts)
If response needed as a dict, do the following instead of json.dumps:
mydict = json.loads(texts)
All message types are now defined using proto-plus, which uses different methods for serialization and deserialization.
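One caveat with the snippet above: proto.Message.to_json() already returns a JSON string, so passing that string through json.dumps() encodes it a second time, producing a quoted, escaped string rather than a JSON document. The effect can be seen with plain json and an ordinary string (no Vision client needed):

```python
import json

# Already a JSON string, like the output of proto.Message.to_json()
s = '{"text": "hello"}'

double = json.dumps(s)  # re-encodes the string itself
print(double)           # the whole thing wrapped in quotes, inner quotes escaped

# Decoding the double-encoded value gives back the original string, not a dict
print(json.loads(double) == s)        # True
# To get a dict, parse the JSON string once
print(json.loads(s))                  # {'text': 'hello'}
```

So the to_json() result can be written to a cache file as-is, without another json.dumps().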
I ran into a similar problem today with the new Google Vision client (2.0.0) and solved it by unpacking the AnnotateImageResponse object. I don't think this is how the new API is supposed to work, but at the moment no solution is proposed in the documentation. Try this:
client = vision.ImageAnnotatorClient()
image = vision.Image(content=image)
response = client.document_text_detection(image=image)
texts = MessageToDict(response._pb)
text_json = json.dumps(texts)
Note the use of response._pb instead of response.
