Parse SQS stringify json message Python - python

I have an SQS queue which triggers a Lambda function, and the message I pass is stringified JSON. I am trying to get the whole message body from the Records, but it throws an error:
[ERROR] KeyError: 'efsPathIn'
Traceback (most recent call last):
File "/var/task/lambda_function.py", line 20, in lambda_handler
key = bodyDict['efsPathIn']
I'm sending this stringified JSON as a message in the SQS queue:
{"Records": [{"body": "{\"efsPathIn\": \"163939.jpeg\",\"imageWidth\": \"492\",\"imageHeight\": \"640\",\"bucketOut\":\"output-bucket\",\"keyOut\":\"163939.webp\"}"}]}
And this is the code which extracts the values:
for item in event['Records']:
    body = str(item['body'])
    bodyDict = json.loads(body)
    key = bodyDict['efsPathIn']
    bucket_out = bodyDict['bucketOut']
    width = bodyDict['imageWidth']
    height = bodyDict['imageHeight']
    key_out = bodyDict['keyOut']
I've also tried json.dumps(item['body']) and then loading the JSON again, but I still get the same error.
When I test from the AWS Lambda test console using the above-mentioned JSON message, the function executes successfully, but I get this error when sending a message from the SQS queue.

json.dumps() is for converting a Python object into a JSON string. You have a JSON string that you need to convert into a Python object. You should be calling json.loads() like so:
body = json.loads(item['body'])
After which you will have a Python object, and you can do things like body['efsPathIn'].
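For reference, a minimal handler along those lines, a sketch assuming the record layout from the question (the print is just a placeholder):
import json

def lambda_handler(event, context):
    # Each SQS record's 'body' is a JSON string; json.loads turns it into a dict.
    for item in event['Records']:
        body = json.loads(item['body'])
        key = body['efsPathIn']
        bucket_out = body['bucketOut']
        width = body['imageWidth']
        height = body['imageHeight']
        key_out = body['keyOut']
        print(key, bucket_out, width, height, key_out)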

Related

How to get receipt handle from sqs queue response, getting TypeError: 'sqs.Message' object is not subscriptable

I have a queue into which I'm sending some messages, and I want to get the receipt handle from the output response.
messages = queue.receive_messages()
print(messages)
I am receiving this type of response:
[sqs.Message(queue_url='someurl', receipt_handle='abcd')]
Now I want to extract only the receipt handle from the response. Here is what I have tried:
message = messages[0]
receipt_handle = message['receipt_handle']
print(receipt_handle)
but I'm getting the error below:
TypeError: 'sqs.Message' object is not subscriptable
How can I get receipt_handle from response?
The sqs.Message object uses attributes:
message.receipt_handle
See the documentation: https://boto3.amazonaws.com/v1/documentation/api/latest/reference/services/sqs.html#message
Your attempt would only work if you used the boto3 client, in which case the response is a dict.
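To illustrate the difference, a rough sketch of both APIs (the queue name is hypothetical; note the client's dict uses the key 'ReceiptHandle'):
import boto3

# Resource API: messages are sqs.Message objects with attributes.
sqs = boto3.resource('sqs')
queue = sqs.get_queue_by_name(QueueName='my-queue')  # hypothetical queue name
for message in queue.receive_messages():
    print(message.receipt_handle)

# Client API: receive_message returns plain dicts instead.
client = boto3.client('sqs')
response = client.receive_message(QueueUrl=queue.url)
for message in response.get('Messages', []):
    print(message['ReceiptHandle'])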

dynamodb get_item boto3 Parameter validation failed

Using boto3 I am trying to get an object from a DynamoDB table.
Following this Stack Overflow post, the correct syntax is:
client = boto3.client('dynamodb')
response = client.get_item(TableName='Garbage_collector_table', Key={'topic':{'S':str(my_topic)}})
Not able to get_item from AWS dynamodb using python?
http://boto3.readthedocs.io/en/latest/reference/services/dynamodb.html
I have tried various iterations to get the proper syntax; my current attempt is this:
if event['hcpcs_codes'] != None:
    # codes must be in numerical, alphabetical order
    # multi codes should be separated by a comma with no spaces
    client = boto3.client('dynamodb')
    payload = {
        "HCPCSCodeSet": event['hcpcs_codes']
    }
    response = client.get_item(TableName="mce-hcpcs-associations-table-dev",
                               Key={'HCPCSCodeSet': {'S': payload}})
    print('here comes the tacos brah')
    print(response)
I'm not sure what this thing wants. What is the proper syntax?
Invalid type for parameter Key.HCPCSCodeSet, value: tacoman, type: <class 'str'>, valid types: <class 'dict'>
Traceback (most recent call last):
File "/var/task/lambdafile.py", line 18, in lambda_handler
Key= payload)
File "/var/task/botocore/client.py", line 415, in _api_call
The primary key for the DynamoDB table is:
Partition key: HCPCSCodeSet (String)
This code:
payload = {
    "HCPCSCodeSet": event['hcpcs_codes']
}
response = client.get_item(TableName="mce-hcpcs-associations-table-dev",
                           Key={'HCPCSCodeSet': {'S': payload}})
Ends up trying to send the following to DynamoDB:
Key={'HCPCSCodeSet': {'S': {
    "HCPCSCodeSet": event['hcpcs_codes']
}}}
Which is obviously not correct. It's unclear why you are building the payload object at all. You could just do this:
response = client.get_item(TableName="mce-hcpcs-associations-table-dev",
                           Key={'HCPCSCodeSet': {'S': event['hcpcs_codes']}})
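In case it helps, a minimal sketch of the corrected call and of reading the result (the key value here is made up; get_item omits 'Item' when nothing matches):
import boto3

client = boto3.client('dynamodb')

# Key must map the partition key name to a typed value dict, e.g. {'S': '...'}.
response = client.get_item(
    TableName='mce-hcpcs-associations-table-dev',
    Key={'HCPCSCodeSet': {'S': 'A0425,A0426'}}  # hypothetical code string
)

# 'Item' is absent when no matching item exists, so guard before reading it.
item = response.get('Item')
if item is not None:
    print(item['HCPCSCodeSet']['S'])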

AWS boto3 invoke lambda returns None payload

I'm not quite sure why this piece of code fails; I'd be happy to hear your thoughts. I've used examples from boto3 and it works in general, but I get an AttributeError all of a sudden in some cases. Please explain what I am doing wrong, because if the payload were None I would expect a JSON decoding error, not a None object.
Here is a simplified version of a code that causes the exception.
import boto3
import json
client = boto3.client('lambda', region_name='...')
res = client.invoke(
    FunctionName='func',
    InvocationType='RequestResponse',
    Payload=json.dumps({'param': 123})
)
payload = json.loads(res["Payload"].read().decode('utf-8'))
for k in payload.keys():
    print(f'{k} = {payload[k]}')
The error
----
[ERROR] AttributeError: 'NoneType' object has no attribute 'keys'
Traceback (most recent call last):
.....
Just replicated your issue in my environment by creating a lambda that doesn't return anything and calling it using boto3. The "null" object passes through the json loads without error but doesn't have any keys since it's not a dictionary.
def lambda_handler(event, context):
    pass
I created my code just like yours and got the same error. Weirdly, I was able to get the json error by attempting to print out the streaming body with this line
print(res["Payload"].read().decode('utf-8'))
before loading it. I have no idea why this happens.
Edit: it looks like once you read from the StreamingBody object, it's empty from then on: https://botocore.amazonaws.com/v1/documentation/api/latest/reference/response.html#botocore.response.StreamingBody. My recommendation is to read the data from the streaming body once, check for "null", and otherwise process as normal.
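A sketch of that approach, assuming the same invoke call as the question (the region and function name are placeholders):
import boto3
import json

client = boto3.client('lambda', region_name='us-east-1')  # placeholder region
res = client.invoke(
    FunctionName='func',
    InvocationType='RequestResponse',
    Payload=json.dumps({'param': 123})
)

# Read the StreamingBody exactly once and keep the raw text around.
raw = res['Payload'].read().decode('utf-8')
payload = json.loads(raw) if raw else None

# A Lambda that returns nothing yields the JSON literal null, i.e. None here.
if payload is None:
    print('Lambda returned no payload')
else:
    for k, v in payload.items():
        print(f'{k} = {v}')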

Extracting values from a dictionary by key, string indices must be integers

I'm trying to extract values by key from a dictionary received with websocket-client, and for some reason it throws the error "string indices must be integers".
No matter how I try to do it I keep getting the same error, unless I write the dictionary out literally in code, in which case it works; unfortunately that's not what I'm after...
Example:
ws = websocket.WebSocket()
ws.connect("websocket link")
info = ws.recv()
print(info["c"])
ws.close()
Output:
Traceback (most recent call last):
File "C:\Python\project\venv\example\example.py", line 14, in <mod
ule>
print(info["c"])
TypeError: string indices must be integers
Whereas if I take the same dictionary and write it down literally, it suddenly works...
Example:
example = {"a":"hello","b":123,"c":"whatever"}
print(example["c"])
Output:
whatever
Any help is appreciated, thanks!
SOLUTION
First you have to import the websocket and json modules, as you receive the dictionary as a JSON object, and then you have to load that JSON object.
import websocket
import json
ws = websocket.WebSocket()
ws.connect("websocket link")
info = json.loads(ws.recv())
print(info["c"])
ws.close()
Likely the dictionary you receive from the websocket is a JSON string:
import websocket
import json
ws = websocket.WebSocket()
ws.connect("websocket link")
info = json.loads(ws.recv())
print(info["c"])
ws.close()
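A tiny illustration of why the original code fails, using a literal string to stand in for ws.recv():
import json

raw = '{"a": "hello", "b": 123, "c": "whatever"}'  # what ws.recv() actually returns: a str
print(type(raw))        # <class 'str'>
print(raw[0])           # '{'  -- string indices must be integers
info = json.loads(raw)  # now it's a dict
print(info["c"])        # whatever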

Request-streaming gRPC client request error

When I run my gRPC client and it attempts to stream a request to the server I get this error: "TypeError: has type list_iterator, but expected one of: bytes, unicode"
Do I need to encode the text I'm sending in some way? The error message makes some sense, as I am definitely passing in an iterator. I assumed from the gRPC documentation that this is what was needed (https://grpc.io/docs/tutorials/basic/python.html#request-streaming-rpc). Anyway, sending a list or string yields a similar error.
At the moment I am sending a small test list of strings to the server in the request, but I plan to stream requests with very large amounts of text in the future.
Here's some of my client code.
def gen_tweet_space(text):
    for tweet in text:
        yield tweet

def run():
    channel = grpc.insecure_channel('localhost:50050')
    stub = ProseAndBabel_pb2_grpc.ProseAndBabelStub(channel)
    while True:
        iterator = iter(block_of_text)
        response = stub.UserMarkov(ProseAndBabel_pb2.UserTweets(tweets=iterator))
Here's relevant server code:
def UserMarkov(self, request_iterator, context):
    return ProseAndBabel_pb2.Babel(prose=markov.get_sentence(request_iterator.tweets))
Here's the proto where the rpc and messages are defined:
service ProseAndBabel {
    rpc GetHaiku (BabelRequest) returns (Babel) {}
    rpc GetBabel (BabelRequest) returns (Babel) {}
    rpc UserMarkov (stream UserTweets) returns (UserBabel) {}
}

message BabelRequest {
    string ask = 1;
}

message Babel {
    string prose = 1;
}

message UserTweets {
    string tweets = 1;
}

message UserBabel {
    string prose = 1;
}
I've been successful getting the non-streaming RPCs to work, but I'm having trouble finding walkthroughs for request-side streaming in Python applications, so I'm sure I'm missing something here. Any guidance/direction appreciated!
You need to pass the iterator of requests to the gRPC client stub, not to the protobuf constructor. The current code tries to instantiate a UserTweets protobuf with an iterator rather than an individual string, resulting in the type error.
response = stub.UserMarkov(ProseAndBabel_pb2.UserTweets(tweets=iterator))
You'll instead need to have your iterator return instances of ProseAndBabel_pb2.UserTweets, each of which wraps one of the request strings you would like to send, and pass the iterator itself to the stub. Something like:
iterator = iter([ProseAndBabel_pb2.UserTweets(tweets=x) for x in block_of_text])
response = stub.UserMarkov(iterator)
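For completeness, a sketch of the same idea using a generator instead of a list, reusing the names from the question:
import grpc
import ProseAndBabel_pb2
import ProseAndBabel_pb2_grpc

def generate_tweets(block_of_text):
    # Yield one UserTweets message per string; the stub consumes this
    # iterator and streams each message to the server.
    for tweet in block_of_text:
        yield ProseAndBabel_pb2.UserTweets(tweets=tweet)

def run(block_of_text):
    channel = grpc.insecure_channel('localhost:50050')
    stub = ProseAndBabel_pb2_grpc.ProseAndBabelStub(channel)
    response = stub.UserMarkov(generate_tweets(block_of_text))
    print(response.prose)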
