I have read the official AWS docs and several forums, but I still can't find what I am doing wrong while adding an item to a string set using Python/Boto3 and DynamoDB. Here is my code:
table.update_item(
    Key={
        ATT_USER_USERID: event[ATT_USER_USERID]
    },
    UpdateExpression="ADD " + key + " :val0",
    ExpressionAttributeValues={":val0": set(["example_item"])},
)
The error I am getting is:
An error occurred (ValidationException) when calling the UpdateItem operation: An operand in the update expression has an incorrect data type
It looks like you figured out a method for yourself, but for others who come here looking for an answer:
Your 'Key' syntax needs a data type (like 'S' or 'N')
You need to use "SS" as the data type in ExpressionAttributeValues, and
You don't need "set" in your ExpressionAttributeValues.
Here's an example I just ran (I had an existing set, test_set, with 4 existing values, and I'm adding a 5th, the string 'five'):
import boto3

db = boto3.client("dynamodb")
db.update_item(TableName=TABLE,
               Key={'id': {'S': 'test_id'}},
               UpdateExpression="ADD test_set :element",
               ExpressionAttributeValues={":element": {"SS": ['five']}})
So before, the string set looked like ['one','two','three','four'], and after, it looked like ['one','two','three','four','five']
Building off of #joe_stech's answer, you can now do it without having to define the type.
An example is:
import boto3
import typing

class StringSetTable:
    def __init__(self) -> None:
        dynamodb = boto3.resource("dynamodb")
        self.dynamodb_table = dynamodb.Table("NAME_OF_TABLE")

    def get_str_set(self, key: str) -> typing.Optional[typing.Set[str]]:
        response = self.dynamodb_table.get_item(
            Key={KEY_NAME: key}, ConsistentRead=True
        )
        r = response.get("Item")
        if r is None:
            print("No set stored")
            return None
        else:
            s = r["string_set"]
            s.remove("EMPTY_IF_ONLY_THIS")
            return s

    def add_to_set(self, key: str, str_set: typing.Set[str]) -> None:
        new_str_set = str_set.copy()
        new_str_set.add("EMPTY_IF_ONLY_THIS")
        self.dynamodb_table.update_item(
            Key={KEY_NAME: key},
            UpdateExpression="ADD string_set :elements",
            ExpressionAttributeValues={":elements": new_str_set},
        )
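The EMPTY_IF_ONLY_THIS sentinel in the class above works around DynamoDB's rule that sets cannot be empty: a placeholder element is always stored on write and stripped on read. The bookkeeping can be sketched without touching DynamoDB (the sentinel name is kept from the class above; the helper names are illustrative):

```python
SENTINEL = "EMPTY_IF_ONLY_THIS"

def prepare_for_write(str_set):
    # DynamoDB rejects empty sets, so the sentinel is always included on write.
    out = set(str_set)
    out.add(SENTINEL)
    return out

def strip_after_read(stored):
    # Remove the sentinel so callers only ever see real elements.
    return set(stored) - {SENTINEL}

stored = prepare_for_write(set())   # what gets written for a logically empty set
print(stored)                       # {'EMPTY_IF_ONLY_THIS'}
print(strip_after_read(stored))     # set()
```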
Here is my sample code:
import boto3
import os

ENV = "dev"
DB = "http://awsservice.com"
REGION = "us-east-1"
TABLE = "traffic-count"

def main():
    os.environ["AWS_PROFILE"] = ENV
    client = boto3.resource("dynamodb", endpoint_url=DB, region_name=REGION)
    kwargs = {'Key': {'id': 'D-D0000012345-P-1'},
              'UpdateExpression': 'ADD #count.#car :delta \n SET #parentKey = :parent_key, #objectKey = :object_key',
              'ExpressionAttributeValues': {':delta': 1, ':parent_key': 'District-D0000012345', ':object_key': 'Street-1'},
              'ExpressionAttributeNames': {'#car': 'car', '#count': 'count', '#parentKey': 'parentKey', '#objectKey': 'objectKey'}}
    client.Table(TABLE).update_item(**kwargs)

if __name__ == "__main__":
    main()
What I want to achieve is this:
With a single API call (in this update_item), I want to be able to
If the item does not exist, create an item with a map count, initialise it with {'car': 1}, and set the fields parent_key and object_key,
or
If the item already exists, update the field to {'car': 2} (if the original count is 1)
Previously, when I did not use a map, I could successfully update with this expression:
SET #count = if_not_exist(#count, :zero) + :delta,
#parentKey = :parent_key, #objectKey = :object_key
However I am getting this error:
botocore.exceptions.ClientError: An error occurred
(ValidationException) when calling the UpdateItem operation: The
document path provided in the update expression is invalid for update
Which document path is causing the problem? How can I fix it?
For those who landed on this page with similar error:
The document path provided in the update expression is invalid for update
The reason may be:
for the item on which the operation is being performed,
this attribute (count, for example) is not yet set.
Considering the sample code from question,
The exception can come from any item where count is empty or not set. The update expression then has no map on which to set or update the nested value (car, for example).
In the question, the earlier expression worked because the attribute was not a map: the expression simply sets a value on count directly, rather than trying to set a key inside a map that does not exist yet.
This can be handled by catching the exception. For example:
from botocore.exceptions import ClientError
...
try:
    response = table.update_item(
        Key={
            "pk": pk
        },
        UpdateExpression="SET #count.car = :c",
        ExpressionAttributeNames={"#count": "count"},  # count is a DynamoDB reserved word
        ExpressionAttributeValues={
            ':c': "some car"
        },
        ReturnValues="UPDATED_NEW"
    )
except ClientError as e:
    if e.response['Error']['Code'] == 'ValidationException':
        response = table.update_item(
            Key={
                "pk": pk
            },
            UpdateExpression="SET #count = :count",
            ExpressionAttributeNames={"#count": "count"},
            ExpressionAttributeValues={
                ':count': {
                    'car': "some car"
                }
            },
            ReturnValues="UPDATED_NEW"
        )
By default, delete_item from boto3 does not return an error even if the operation is performed on an item that does not exist.
id = '123'
timenow = '1589046426'
dynamodb = boto3.resource('dynamodb')
boto3_table = dynamodb.Table(MY_TABLE)
response = boto3_table.delete_item(Key={"ID": id, "TIMENOW": timenow})
How do I change the code above to force the delete_item to return an error when item does not exist?
If anyone has the same problem, here is the solution:
response = boto3_table.delete_item(Key={"ID": id, "TIMENOW": timenow},
                                   ConditionExpression="attribute_exists(ID) AND attribute_exists(TIMENOW)")
The ConditionExpression parameter with attribute_exists will only delete if the ID and TIMENOW are present in the record.
One way would be to do a conditional delete using a condition-expression on the your partition key attribute name:
response = table.delete_item(
Key={
'pk': "jim.bob",
"sk": "metadata"
},
ConditionExpression="attribute_exists (pk)",
)
If the item exists with this key AND the attribute that is the partition key exists on that key, it deletes the item. If the item does not exist, then you get:
The conditional request failed
Another approach would be to use ReturnValues='ALL_OLD', which returns the attributes associated with a given key if they existed prior to removal:
response = boto3_table.delete_item(Key={"ID": id, "TIMENOW": timenow}, ReturnValues='ALL_OLD')
if 'Attributes' not in response:
    raise KeyError('Item not found!')
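The shape of the two possible responses can be exercised locally: when nothing was deleted, DynamoDB simply omits the Attributes key. The sample responses below are hand-written for illustration (real responses also carry ResponseMetadata, omitted here):

```python
def item_was_deleted(response):
    # With ReturnValues='ALL_OLD', DynamoDB includes 'Attributes' only
    # when an item actually existed and was removed.
    return 'Attributes' in response

# Hand-written sample responses:
deleted = {'Attributes': {'ID': '123', 'TIMENOW': '1589046426'}}
missing = {}  # no 'Attributes' key when the item did not exist

print(item_was_deleted(deleted))  # True
print(item_was_deleted(missing))  # False
```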
I know this question has been asked before, but none of the answers were helpful, hence asking again.
I am using graphene and parsing some Elasticsearch data before passing it to Graphene
Please find below my resolver function:
def resolve_freelancers(self, info):
    session = get_session()
    [ids, scores] = self._get_freelancers()
    freelancers = session.query(FreelancerModel).filter(FreelancerModel.id.in_(ids)).all()
    for index in range(len(ids)):
        print("index", scores[index])
        freelancers[index].score = scores[index]
    if self.sort:
        reverse = self.sort.startswith("-")
        self.sort = self.sort.replace("-", "")
        if self.sort == "alphabetical":
            freelancers = sorted(freelancers, key=lambda f: f.name if f.name else "", reverse=reverse)
        if self.sort == "created":
            freelancers = sorted(freelancers, key=lambda f: f.created_on, reverse=reverse)
        if self.sort == "modified":
            freelancers = sorted(freelancers, key=lambda f: f.modified_at, reverse=reverse)
    freelancers = [Freelancer(f) for f in freelancers[self.start:self.end]]
    session.close()
    return freelancers
now if I do
print(freelancers[index].score)
it gives me 10.989184 and the type of this is <class 'float'>
In my class Freelancer(graphene.ObjectType):
I have added score = graphene.Float()
Now when I try to add score to my query it gives the error; otherwise there is no issue. All I am interested in is getting that score value in the JSON response. I do not understand what is causing this error, and I am fairly new to Python, so any advice would be appreciated.
Please feel free to ask for additional code or information as I have tried to paste whatever I thought was relevant
So I can't comment or I would, and I very well may be wrong, but here goes.
My guess is that somewhere you are calling float(score), but the graphene.Float() type cannot be directly converted to a Python float via float(). This is probably due to the graphene.Float type having so much data it can hold in its data structure due to inheriting from graphene.Scalar (graphene GH/Scalars).
My guess would be to hunt down the float() call and remove it. If that doesn't work, I would then move onto Float.num field in your query.
Again, all conjecture here, but I hope it helped.
Actually, I cannot pass the fields directly to the Graphene object; we need to pass them within the __init__ method of the class that wraps the Graphene ObjectType, and then return them in a resolver method (in my case resolve_score).
I am trying to convert boto3 dynamoDB conditional expressions (using types from boto3.dynamodb.conditions) to its string representation. Of course this could be hand coded but naturally I would prefer to be able to find something developed by AWS itself.
Key("name").eq("new_name") & Attr("description").begins_with("new")
would become
"name = 'new_name' and begins_with(description, 'new')"
I have been checking in the boto3 and botocore code, but so far no success; I assume it must exist somewhere in the codebase...
In the boto3.dynamodb.conditions module there is a class called ConditionExpressionBuilder. You can convert a condition expression to string by doing the following:
condition = Key("name").eq("new_name") & Attr("description").begins_with("new")
builder = ConditionExpressionBuilder()
expression = builder.build_expression(condition, is_key_condition=True)
expression_string = expression.condition_expression
expression_attribute_names = expression.attribute_name_placeholders
expression_attribute_values = expression.attribute_value_placeholders
I'm not sure why this isn't documented anywhere. I just randomly found it looking through the source code at the bottom of this page https://boto3.amazonaws.com/v1/documentation/api/latest/_modules/boto3/dynamodb/conditions.html.
Unfortunately, this doesn't work for the paginator format string notation, but it should work for the Table.query() format.
From #Brian's answer with ConditionExpressionBuilder, I had to add DynamoDB's {'S': 'value'} type notation before executing the query.
I changed it with expression_attribute_values[':v0'] = {'S': pk_value}, where :v0 is the first Key/Attr in the condition. Not sure, but the same should work for the following values (:v0, :v1, :v2...).
Here is the full code, using pagination to retrieve only part of the data:
from boto3.dynamodb.conditions import Attr, Key, ConditionExpressionBuilder
from typing import Optional, List
import boto3

client_dynamodb = boto3.client("dynamodb", region_name="us-east-1")

def get_items(self, pk_value: str, pagination_config: dict = None) -> Optional[List]:
    if pagination_config is None:
        # Return only the first page of results when no pagination config is provided
        pagination_config = {
            'PageSize': 300,
            'StartingToken': None,
            'MaxItems': None,
        }
    condition = Key("pk").eq(pk_value)
    builder = ConditionExpressionBuilder()
    expression = builder.build_expression(condition, is_key_condition=True)
    expression_string = expression.condition_expression
    expression_attribute_names = expression.attribute_name_placeholders
    expression_attribute_values = expression.attribute_value_placeholders
    # Changed here to make the values compatible with the client's type notation
    expression_attribute_values[':v0'] = {'S': pk_value}
    paginator = client_dynamodb.get_paginator('query')
    page_iterator = paginator.paginate(
        TableName="TABLE_NAME",
        IndexName="pk_value_INDEX",
        KeyConditionExpression=expression_string,
        ExpressionAttributeNames=expression_attribute_names,
        ExpressionAttributeValues=expression_attribute_values,
        PaginationConfig=pagination_config
    )
    for page in page_iterator:
        resp = page
        break
    if ("Items" not in resp) or (len(resp["Items"]) == 0):
        return None
    return resp["Items"]
EDIT:
I used this question to get a string representation for a DynamoDB resource's query, which is not (yet) compatible with dynamodb conditions, but then I found a better solution in a Boto3 GitHub issue (https://github.com/boto/boto3/issues/2300):
Replace the paginator with the one from meta:
dynamodb_resource = boto3.resource("dynamodb")
paginator = dynamodb_resource.meta.client.get_paginator('query')
And now I can simply use Attr and Key.
I use Flask, an API, and SQLAlchemy with SQLite.
I am new to Python and Flask, and I am having a problem with lists.
My application works; now I am trying a new function.
I need to know if my JSON information is in my DB.
The function find_current_project_team() gets information from the API.
def find_current_project_team():
    headers = {"Authorization": "bearer " + session['token_info']['access_token']}
    user = requests.get("https://my.api.com/users/xxxx/", headers=headers)
    user = user.json()
    ids = [x['id'] for x in user]
    return ids
I use ids = [x['id'] for x in user], which is the same as:
ids = []
for x in user:
    ids.append(x['id'])
to get the id information. The ids are the id fields in the API response, and I need them.
I have this result :
[2766233, 2766237, 2766256]
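The equivalence of the two forms can be checked with a hand-written sample payload (the field names mirror the API response above, but the data here is made up):

```python
# Made-up sample of what the API's JSON response might look like:
user = [
    {"id": 2766233, "name": "team a"},
    {"id": 2766237, "name": "team b"},
    {"id": 2766256, "name": "team c"},
]

# Comprehension form...
ids = [x["id"] for x in user]

# ...and the explicit loop form produce the same list.
ids_loop = []
for x in user:
    ids_loop.append(x["id"])

print(ids)              # [2766233, 2766237, 2766256]
print(ids == ids_loop)  # True
```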
I want to check the values one by one in my database.
If a value doesn't exist, I want to add it.
If one or all of the values exist, I want to return "impossible sorry, the ids already exist".
For that I wrote a new function:
def test():
    test = find_current_project_team()
    for find_team in test:
        find_team_db = User.query.filter_by(
            login=session['login'], project_session=test
        ).first()
I have absolutely no idea how to check the values one by one.
If someone can help me, thank you :)
Currently I get this error:
sqlalchemy.exc.InterfaceError: (InterfaceError) Error binding
parameter 1 - probably unsupported type. 'SELECT user.id AS user_id,
user.login AS user_login, user.project_session AS user_project_session
\nFROM user \nWHERE user.login = ? AND user.project_session = ?\n
LIMIT ? OFFSET ?' ('my_tab_login', [2766233, 2766237, 2766256], 1, 0)
It looks to me like you are passing the list directly into the database query:
def test():
    test = find_current_project_team()
    for find_team in test:
        find_team_db = User.query.filter_by(login=session['login'], project_session=test).first()
Instead, you should pass in the ID only:
def test():
    test = find_current_project_team()
    for find_team in test:
        find_team_db = User.query.filter_by(login=session['login'], project_session=find_team).first()
Aside from that, I think you could do better with the naming conventions:
def test():
    project_teams = find_current_project_team()
    for project_team in project_teams:
        project_team_result = User.query.filter_by(login=session['login'], project_session=project_team).first()
It all works, thanks!
My code:
project_teams = find_current_project_team()
for project_team in project_teams:
    project_team_result = User.query.filter_by(project_session=project_team).first()
    print(project_team_result)
    if project_team_result is not None:
        print("not none")
    else:
        project_team_result = User(login=session['login'], project_session=project_team)
        db.session.add(project_team_result)
        db.session.commit()
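The check-then-insert pattern in the snippet above can be sketched without Flask or SQLAlchemy, using a plain dict as a stand-in for the User table (all names here are illustrative):

```python
# Dict as a stand-in for the User table: project_session -> login
db = {2766233: "my_tab_login"}

def add_if_missing(project_sessions, login):
    added = []
    for ps in project_sessions:
        if ps in db:       # equivalent of .filter_by(...).first() is not None
            continue       # already exists, skip (or report an error)
        db[ps] = login     # equivalent of db.session.add() + commit()
        added.append(ps)
    return added

print(add_if_missing([2766233, 2766237, 2766256], "my_tab_login"))
# 2766233 already existed, so only [2766237, 2766256] are added
```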