Podio: How do I use the Item.update method in Python 3?

There is no example code for the Podio API for Python, but this is the example code written for Ruby:
Podio::Item.update(210606, {
  :fields => {
    'title' => 'The API documentation is much more funny',
    'business_value' => { :value => 20000, :currency => 'EUR' },
    'due_date' => { :start => '2011-05-06 11:27:20',
                    :end => 5.days.from_now.to_s(:db) }
  }
})
I can't for the life of me figure out how to translate this to Python 3. I've tried using dictionaries, dicts inside lists, referencing the fields by both their field IDs and their names, etc., but it never actually updates anything.
This is my failed attempt at translating the above to Python code (with different fields since the fields in my 'Bugs (API example) app' aren't the same as in the example code):
newValues = {'fields': {'title': "This is my title",
                        'description_of_problem': "the not work"}}
try:
    podio.Item.update(629783395, newValues['fields'])
    print('updating was successful')
except:
    print('updating was not successful')
With podio being:
podio = api.OAuthClient(
    client_id,
    client_secret,
    username,
    password,
)
The 'fields' part in my code doesn't really make sense, but I couldn't figure out what else to do with that part of the Ruby code; I suspect that is the issue. The program always prints 'updating was successful' as if the Item.update method was successfully called, but as I said, it doesn't actually update anything in Podio. Can anyone see what's wrong?

I'd just follow the Item update API, and pass in a dictionary that matches the request section there:
{
    "revision": The revision of the item that is being updated. This is optional,
    "external_id": The new external_id of the item,
    "fields": The values for each field, see the create item operation for details,
    "file_ids": The list of attachments,
    "tags": The list of tags,
    "reminder": Optional reminder on this task
    {
        "remind_delta": Minutes (integer) to remind before the due date
    },
    "recurrence": The recurrence for the task, if any
    {
        "name": The name of the recurrence, "weekly", "monthly" or "yearly",
        "config": The configuration for the recurrence, depends on the type
        {
            "days": List of weekdays ("monday", "tuesday", etc) (for "weekly"),
            "repeat_on": When to repeat, "day_of_week" or "day_of_month" (for "monthly")
        },
        "step": The step size, 1 or more,
        "until": The latest date the recurrence should take place
    },
    "linked_account_id": The linked account to use for meetings,
    "ref": The reference of the item
    {
        "type": The type of reference,
        "id": The id of the reference
    }
}
The documentation also points to the item creation API for further examples. Note how that object has a "fields" key in the outermost mapping.
All the Ruby documentation does is build that mapping as a Ruby hash (in Python, a dict) with the entries that need updating; :fields is a Ruby symbol (an immutable string) used as a key in that hash, pointing to a nested hash. The Python implementation of the update method just converts that dictionary to a JSON request body.
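To make that concrete, here is a rough sketch of the HTTP request such a call boils down to (illustrative only: it uses requests rather than pypodio2's own transport, and access_token stands in for whatever token your OAuth flow produced):

import json
import requests  # for illustration; pypodio2 ships its own transport layer

attributes = {'fields': {'title': 'The API documentation is much more funny'}}
# Podio's item update endpoint is PUT /item/{item_id}
requests.put(
    'https://api.podio.com/item/210606',
    data=json.dumps(attributes),
    headers={'Authorization': 'OAuth2 ' + access_token,  # hypothetical token variable
             'Content-Type': 'application/json'},
)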
A direct translation of the Ruby code to Python is:
from datetime import datetime, timedelta

podio.Item.update(210606, {
    'fields': {
        'title': 'The API documentation is much more funny',
        'business_value': {'value': 20000, 'currency': 'EUR'},
        'due_date': {
            'start': '2011-05-06 11:27:20',
            'end': (datetime.now() + timedelta(days=5)).strftime('%Y-%m-%d %H:%M:%S')
        }
    }
})
What you did wrong in your case is not include the 'fields' key in the outermost dictionary; you unwrapped the outermost dictionary and only posted the nested dictionary under 'fields'. Instead, include that outer dictionary:
newValues = {
    'fields': {
        'title': "This is my title",
        'description_of_problem': "the not work"
    }
}
podio.Item.update(629783395, newValues)


Update nested map in DynamoDB

I have a DynamoDB table with an attribute containing a nested map, and I would like to update a specific inventory item that is filtered via a filter expression resulting in a single item from this map.
How do I write an update expression to set the location to "in place three" on the item with name=opel, whose tags include "x1" (and possibly also "f3")?
This should update just the first list element's location attribute.
{
    "inventory": [
        {
            "location": "in place one",  # I want to update this
            "name": "opel",
            "tags": ["x1", "f3"]
        },
        {
            "location": "in place two",
            "name": "abc",
            "tags": ["a3", "f5"]
        }
    ],
    "User": "test"
}
Updated Answer - based on updated question statement
You can update attributes in a nested map using update expressions, such that only a part of the item gets updated (i.e. DynamoDB applies the equivalent of a patch to your item), but because DynamoDB is a document database, all operations (Put, Get, Update, Delete, etc.) work on the item as a whole.
So, in your example, assuming User is the partition key and that there is no sort key (I didn't see any attribute that could be a sort key in that example), an Update request might look like this:
table.update_item(
    Key={
        'User': 'test'
    },
    UpdateExpression="SET #inv[0].#loc = :locVal",
    ExpressionAttributeNames={
        '#inv': 'inventory',
        '#loc': 'location'
    },
    ExpressionAttributeValues={
        ':locVal': 'in place three',
    },
)
That said, you do have to know what the item schema looks like and which attributes within the item should be updated exactly.
DynamoDB does NOT have a way to operate on sub-items. Meaning, there is no way to tell Dynamo to execute an operation such as "update item, set 'location' property of elements of the 'inventory' array that have a property of 'name' equal to 'opel'".
This is probably not the answer you were hoping for, but it is what's available today. You may be able to get closer to what you want by changing the schema a bit.
If you need to reference the sub-items by name, perhaps store something like:
{
    "inventory": {
        "opel": {
            "location": "in place one",  # I want to update this
            "tags": ["x1", "f3"]
        },
        "abc": {
            "location": "in place two",
            "tags": ["a3", "f5"]
        }
    },
    "User": "test"
}
Then your query would be:
table.update_item(
    Key={
        'User': 'test'
    },
    UpdateExpression="SET #inv.#brand.#loc = :locVal",
    ExpressionAttributeNames={
        '#inv': 'inventory',
        '#loc': 'location',
        '#brand': 'opel'
    },
    ExpressionAttributeValues={
        ':locVal': 'in place three',
    },
)
But YMMV, as even this has limitations: you are limited to identifying inventory items by name (i.e. you still can't say "update the inventory item with tag 'x1'").
Ultimately you should carefully consider why you need Dynamo to perform these complex operations for you, as opposed to being specific about what you want to update.
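If you really do need to match list elements by their contents, a common pattern (a hedged sketch, not a DynamoDB feature) is read-modify-write: fetch the item, locate the index client-side, then update by that literal index:

item = table.get_item(Key={'User': 'test'})['Item']
# find the index of the inventory entry to change (raises StopIteration if absent)
idx = next(i for i, entry in enumerate(item['inventory'])
           if entry['name'] == 'opel' and 'x1' in entry['tags'])
table.update_item(
    Key={'User': 'test'},
    # list indexes must be literal numbers in the document path
    UpdateExpression="SET #inv[{}].#loc = :locVal".format(idx),
    ExpressionAttributeNames={'#inv': 'inventory', '#loc': 'location'},
    ExpressionAttributeValues={':locVal': 'in place three'},
)

Beware that a concurrent writer reordering the list between the read and the write would make the index stale; a ConditionExpression can guard against that.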
You can update the nested map as follows:
First create an empty item attribute of type map. In the example, graph is the empty item attribute.
dynamoTable = dynamodb.Table('abc')
dynamoTable.put_item(
    Item={
        'email': email_add,
        'graph': {},
    }
)
Then update the nested map as follows:
brand_name = 'opel'
dynamoTable = dynamodb.Table('abc')
dynamoTable.update_item(
    Key={
        'email': email_add,
    },
    UpdateExpression="SET #Graph.#brand = :name",
    ExpressionAttributeNames={
        '#Graph': 'inventory',
        '#brand': str(brand_name),
    },
    ExpressionAttributeValues={
        ':name': {
            'location': 'in place two',
            'tag': {
                'graph_type': 'a3',
                'graph_title': 'f5'
            }
        }
    }
)
Updating Mike's answer, because that way doesn't work any more (at least for me).
It is working like this now (note the UpdateExpression and ExpressionAttributeNames):
table.update_item(
    Key={
        'User': 'test'
    },
    UpdateExpression="SET inv.#brand.loc = :locVal",
    ExpressionAttributeNames={
        '#brand': 'opel'
    },
    ExpressionAttributeValues={
        ':locVal': 'in place three',
    },
)
And whatever goes in Key={} is always the partition key (and sort key, if any).
EDIT:
Seems like this way only works with 2-level nested properties. In this case you would only use ExpressionAttributeNames for the "middle" property (in this example, that would be #brand in inv.#brand.loc). I'm not yet sure what the real rule is now.
DynamoDB's UpdateExpression does not search the database for matching items the way SQL does (where you can update all rows matching some condition). To update an item you first need to identify it and get its primary key or composite key; if there are many items that match your criteria, you need to update them one by one.
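As a hedged illustration of that identify-then-update flow (the table name, key, and filter below are borrowed from the examples above and are assumptions, not part of the original answer):

import boto3
from boto3.dynamodb.conditions import Attr

dynamodb = boto3.resource('dynamodb')
table = dynamodb.Table('abc')  # hypothetical table name

# 1. find candidate items; scan filters server-side but still reads the whole table
response = table.scan(FilterExpression=Attr('inventory').exists())

# 2. update each matching item individually, addressing it by its key
for item in response['Items']:
    table.update_item(
        Key={'User': item['User']},
        UpdateExpression="SET #inv.#brand.#loc = :locVal",
        ExpressionAttributeNames={'#inv': 'inventory', '#brand': 'opel', '#loc': 'location'},
        ExpressionAttributeValues={':locVal': 'in place three'},
    )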
The remaining issue when updating nested objects is to define the UpdateExpression, ExpressionAttributeValues, and ExpressionAttributeNames to pass to the DynamoDB update API.
I use a recursive function to update nested objects on DynamoDB. You asked for Python, but I use JavaScript; I think it is easy to read this code and implement it in Python:
https://gist.github.com/crsepulv/4b4a44ccbd165b0abc2b91f76117baa5
/**
 * Recursive function to get the UpdateExpression, ExpressionAttributeValues & ExpressionAttributeNames to update a nested object on DynamoDB.
 * All levels of the nested object must exist previously on DynamoDB; this only updates the value, it does not create the branch.
 * Only works with objects of objects, not tested with Arrays.
 * @param obj , the object to update.
 * @param k , the seed is any value; it only matters on the last iteration.
 */
function getDynamoExpression(obj, k) {
    const key = Object.keys(obj);
    let UpdateExpression = 'SET ';
    let ExpressionAttributeValues = {};
    let ExpressionAttributeNames = {};
    let response = {
        UpdateExpression: ' ',
        ExpressionAttributeNames: {},
        ExpressionAttributeValues: {}
    };
    // https://stackoverflow.com/a/16608074/1210463
    /**
     * true when the input is an object, i.e. on all levels except the last one.
     */
    if ((!!obj) && (obj.constructor === Object)) {
        response = getDynamoExpression(obj[key[0]], key);
        UpdateExpression = 'SET #' + key + '.' + response['UpdateExpression'].substring(4); // substring deletes 'SET ' from the mid-level values.
        ExpressionAttributeNames = {['#' + key]: key[0], ...response['ExpressionAttributeNames']};
        ExpressionAttributeValues = response['ExpressionAttributeValues'];
    } else {
        // note the double space: the parent level strips 'SET ' and needs the leading space kept
        UpdateExpression = 'SET  = :' + k;
        ExpressionAttributeValues = {
            [':' + k]: obj
        };
    }
    // removes the trailing dot on the last level
    if (UpdateExpression.indexOf(". ") !== -1) {
        UpdateExpression = UpdateExpression.replace(". ", " ");
    }
    return {UpdateExpression, ExpressionAttributeValues, ExpressionAttributeNames};
}
// you can try many levels.
const obj = {
    level1: {
        level2: {
            level3: {
                level4: 'value'
            }
        }
    }
};
I had the same need.
Hope this code helps. You only need to invoke compose_update_expression_attr_name_values passing the dictionary containing the new values.
import random


def compose_update_expression_attr_name_values(data: dict) -> (str, dict, dict):
    """ Constructs UpdateExpression, ExpressionAttributeNames, and ExpressionAttributeValues for updating an entry of a DynamoDB table.

    :param data: the dictionary of attribute_values to be updated
    :return: a tuple (UpdateExpression: str, ExpressionAttributeNames: dict(str: str), ExpressionAttributeValues: dict(str: str))
    """
    # prepare recursion input
    expression_list = []
    value_map = {}
    name_map = {}
    # navigate the dict and fill expressions and dictionaries
    _rec_update_expression_attr_name_values(data, "", expression_list, name_map, value_map)
    # compose update expression from single paths
    expression = "SET " + ", ".join(expression_list)
    return expression, name_map, value_map


def _rec_update_expression_attr_name_values(data: dict, path: str, expressions: list, attribute_names: dict,
                                            attribute_values: dict):
    """ Recursively navigates the input and injects contents into expressions, names, and attribute_values.

    :param data: the data dictionary with updated data
    :param path: the navigation path in the original data dictionary to this recursive call
    :param expressions: the list of update expressions constructed so far
    :param attribute_names: a map associating "expression attribute name identifiers" to their actual names in ``data``
    :param attribute_values: a map associating "expression attribute value identifiers" to their actual values in ``data``
    :return: None, since ``expressions``, ``attribute_names``, and ``attribute_values`` get updated during the recursion
    """
    for k in data.keys():
        # generate non-ambiguous identifiers
        rdm = random.randrange(0, 1000)
        attr_name = f"#k_{rdm}_{k}"
        while attr_name in attribute_names.keys():
            rdm = random.randrange(0, 1000)
            attr_name = f"#k_{rdm}_{k}"
        attribute_names[attr_name] = k
        _path = f"{path}.{attr_name}"
        # recursion
        if isinstance(data[k], dict):
            # recursive case
            _rec_update_expression_attr_name_values(data[k], _path, expressions, attribute_names, attribute_values)
        else:
            # base case
            attr_val = f":v_{rdm}_{k}"
            attribute_values[attr_val] = data[k]
            expression = f"{_path} = {attr_val}"
            # remove the initial "."
            expressions.append(expression[1:])
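For instance, a hypothetical invocation (the table, key, and nested dict here are made up for illustration) plugs straight into update_item:

expression, names, values = compose_update_expression_attr_name_values(
    {"inventory": {"opel": {"location": "in place three"}}}
)
table.update_item(
    Key={'User': 'test'},
    UpdateExpression=expression,  # e.g. "SET #k_42_inventory.#k_7_opel.#k_13_location = :v_13_location"
    ExpressionAttributeNames=names,
    ExpressionAttributeValues=values,
)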

Python Proper JSON Format

I need to post data to a REST API. One field, incident_type_ids, needs to be passed in the JSON format below (it must include square brackets, not just curly braces):
"incident_type_ids": [{
"name": "Phishing - General"
}],
When I try to force this in my code, it doesn't come out quite right. There will usually be some extra quote-escapes (e.g. output: "incident_type_ids": "[\\"{ name : Phishing - General }\\"]"), and I realized that was because I was double-encoding the JSON data in the incident_type variable to forcibly add the brackets (in line 6, which has since been commented out):
# incident variables
name = 'Incident Name 2'
description = 'This is the description'
corpID = 'id'
incident_type = '{ name : Phishing - General }'
#incident_type = json.dumps([incident_type])
incident_owner = 'Security Operations Center'

payload = {
    'name': name,
    'discovered_date': '0',
    'owner_id': incident_owner,
    'description': description,
    'exposure_individual_name': corpID,
    'incident_type_ids': incident_type
}
body = json.dumps(payload)
create = s.post(url, data=body, headers=headers, verify=False)
However, since I commented out that line, I can't get incident_type in the format I need (with brackets).
So, my question is: How can I get the incident_type variable in the proper format in the final payload?
Input I manually got to work using the product's interactive REST API:
{
    "name": "Incident Name 2",
    "incident_type_ids": [{
        "name": "Phishing - General"
    }],
    "description": "This is the description",
    "discovered_date": "0",
    "exposure_individual_name": "id",
    "owner_id": "Security Operations Center"
}
I figure my approach is wrong and I'd appreciate any help. I'm new to Python so I'm expecting this is a beginner's mistake.
Thanks for your help.
JSON square brackets are for arrays, which correspond to Python lists. JSON curly braces are for objects, which correspond to Python dictionaries.
So you need to create a list containing a dictionary, then convert that to JSON.
incident_type = [{"name": "Phishing - General"}]
incident_owner = 'Security Operations Center'

payload = {
    'name': name,
    'discovered_date': '0',
    'owner_id': incident_owner,
    'description': description,
    'exposure_individual_name': corpID,
    'incident_type_ids': incident_type
}
body = json.dumps(payload)
It's only slightly coincidental that the Python syntax for this is similar to the JSON syntax.
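If in doubt, you can always print the serialized body before posting it, to check the shape (a quick sanity check; output abbreviated to the relevant field):

import json

payload = {'incident_type_ids': [{"name": "Phishing - General"}]}
print(json.dumps(payload, indent=2))
# {
#   "incident_type_ids": [
#     {
#       "name": "Phishing - General"
#     }
#   ]
# }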

Can Google Calendar extended properties be lists?

I want to add a set of extendedProperties to a Google Calendar event, and I want some of those properties to be lists. As (in Python):
event = {
    ...,  # standard properties
    "extendedProperties": {
        "shared": {
            "max_crew": 3,
            "crew_list": [
                "crew1@example.com",
                "crew2@example.com",
            ],
        }
    },
    ...
}
This creates the max_crew property but not the crew_list property.
Any way to do this? Or do I need to use a parse-able string (max 1024 chars)?
There is a way: explained in Google Calendar's guide and reference.
And in Python, first create a dictionary for your extra fields:
body = {
    "extendedProperties": {
        "private": {
            "petsAllowed": "yes"
        }
    }
}
Then make a request with:
service.events().patch(calendarId='calendar_id', eventId='event_id', body=body).execute()
If it is successful, it will return the updated event.
Hybor confirms my observation that the interface does not support a list as a value. Shortsighted, imho, but so it goes.
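If you do fall back to the parse-able string the question mentions, one possible approach (a sketch, assuming JSON fits your data; each shared property value is capped at 1024 characters) is to serialize the list yourself:

import json

crew = ["crew1@example.com", "crew2@example.com"]
event = {
    # ... standard properties ...
    "extendedProperties": {
        "shared": {
            "max_crew": "3",                # property values are strings
            "crew_list": json.dumps(crew),  # the whole list as one parse-able string
        }
    },
}
# when reading the event back:
# crew = json.loads(event["extendedProperties"]["shared"]["crew_list"])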

aggregate a field in elasticsearch-dsl using python

Can someone tell me how to write Python statements that will aggregate (sum and count) stuff about my documents?
SCRIPT
from datetime import datetime
from elasticsearch_dsl import DocType, String, Date, Integer
from elasticsearch_dsl.connections import connections
from elasticsearch import Elasticsearch
from elasticsearch_dsl import Search, Q
# Define a default Elasticsearch client
client = connections.create_connection(hosts=['http://blahblahblah:9200'])
s = Search(using=client, index="attendance")
s = s.execute()
for tag in s.aggregations.per_tag.buckets:
    print(tag.key)
OUTPUT
File "/Library/Python/2.7/site-packages/elasticsearch_dsl/utils.py", line 106, in __getattr__
    '%r object has no attribute %r' % (self.__class__.__name__, attr_name))
AttributeError: 'Response' object has no attribute 'aggregations'
What is causing this? Is the "aggregations" keyword wrong? Is there some other package I need to import? If a document in the "attendance" index has a field called emailAddress, how would I count which documents have a value for that field?
First of all, I notice now that what I wrote above actually has no aggregations defined. The documentation on how to use this was not very readable for me, so I'll expand on what I wrote. I'm changing the index name to make for a nicer example.
from datetime import datetime
from elasticsearch_dsl import DocType, String, Date, Integer
from elasticsearch_dsl.connections import connections
from elasticsearch import Elasticsearch
from elasticsearch_dsl import Search, Q
# Define a default Elasticsearch client
client = connections.create_connection(hosts=['http://blahblahblah:9200'])
s = Search(using=client, index="airbnb", doc_type="sleep_overs")
s = s.execute()
# invalid! You haven't defined an aggregation.
#for tag in s.aggregations.per_tag.buckets:
# print (tag.key)
# Lets make an aggregation
# 'by_house' is a name you choose, 'terms' is a keyword for the type of aggregator
# 'field' is also a keyword, and 'house_number' is a field in our ES index
s.aggs.bucket('by_house', 'terms', field='house_number', size=0)
Above we're creating one bucket per house number, so the name of each bucket will be the house number. ElasticSearch (ES) will always give a document count of documents fitting into that bucket. size=0 means to give us all results, since ES has a default setting to return 10 results only (or whatever your dev set it up to do).
# This runs the query.
s = s.execute()

# let's see what's in our results
print(s.aggregations.by_house.doc_count)
print(s.hits.total)
print(s.aggregations.by_house.buckets)

for item in s.aggregations.by_house.buckets:
    print(item.doc_count)
My mistake before was thinking an Elastic Search query had aggregations by default. You sort of define them yourself, then execute them. Then your response can be split by the aggregators you mentioned.
The CURL for the above should look like:
NOTE: I use SENSE, an ElasticSearch plugin/extension/add-on for Google Chrome. In SENSE you can use // to comment things out.
POST /airbnb/sleep_overs/_search
{
    // the size 0 here actually means to not return any hits, just the aggregation part of the result
    "size": 0,
    "aggs": {
        "by_house": {
            "terms": {
                // the size 0 here means to return all results, not just the default 10 results
                "field": "house_number",
                "size": 0
            }
        }
    }
}
Work-around: someone on the DSL library's GitHub told me to forget translating, and just use this method. It's simpler, and you can just write the tough stuff in CURL. That's why I call it a work-around.
# Define a default Elasticsearch client
client = connections.create_connection(hosts=['http://blahblahblah:9200'])
s = Search(using=client, index="airbnb", doc_type="sleep_overs")

# see how simple this is: we just paste the CURL body here
body = {
    "size": 0,
    "aggs": {
        "by_house": {
            "terms": {
                "field": "house_number",
                "size": 0
            }
        }
    }
}

s = Search.from_dict(body)
s = s.index("airbnb")
s = s.doc_type("sleep_overs")
body = s.to_dict()
t = s.execute()

for item in t.aggregations.by_house.buckets:
    # item.key will be the house number
    print(item.key, item.doc_count)
Hope this helps. I now design everything in CURL, then use Python statements to peel away at the results to get what I want. This helps for aggregations with multiple levels (sub-aggregations).
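For what it's worth, elasticsearch-dsl can also express those multi-level aggregations directly by chaining bucket() calls; a hedged sketch (the 'month' field is invented for illustration):

s = Search(using=client, index="airbnb", doc_type="sleep_overs")
# chaining .bucket() nests the second aggregation inside the first
s.aggs.bucket('by_house', 'terms', field='house_number') \
      .bucket('by_month', 'terms', field='month')
t = s.execute()
for house in t.aggregations.by_house.buckets:
    for month in house.by_month.buckets:
        print(house.key, month.key, month.doc_count)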
I do not have the rep to comment yet but wanted to make a small fix on Matthew's comment on VISQL's answer regarding from_dict. If you want to maintain the search properties, use update_from_dict rather the from_dict.
According to the docs, from_dict creates a new Search object, but update_from_dict will modify it in place, which is what you want if the Search already has properties such as index, using, etc.
So you would want to declare the query body before the search and then create the search like this:
query_body = {
    "size": 0,
    "aggs": {
        "by_house": {
            "terms": {
                "field": "house_number",
                "size": 0
            }
        }
    }
}

s = Search(using=client, index="airbnb", doc_type="sleep_overs").update_from_dict(query_body)

Modify JSON response of Flask-Restless

I am trying to use Flask-Restless with Ember.js, which isn't going so great. It's the GET responses that are tripping me up. For instance, when I do a GET request on /api/people, Ember.js expects:
{
    people: [
        { id: 1, name: "Yehuda Katz" }
    ]
}
But Flask-Restless responds with:
{
    "total_pages": 1,
    "objects": [
        { "id": 1, "name": "Yehuda Katz" }
    ],
    "num_results": 1,
    "page": 1
}
How do I change Flask-Restless's response to conform to what Ember.js would like? I have this feeling it might be in a postprocessor function, but I'm not sure how to implement it.
Flask extensions have pretty readable source code. You can make a GET_MANY postprocessor:
def pagination_remover(results):
    return {'people': results['objects']} if 'page' in results else results

manager.create_api(
    ...,
    postprocessors={
        'GET_MANY': [pagination_remover]
    }
)
I haven't tested it, but it should work.
The accepted answer was correct at the time. However, the way pre- and postprocessors work in Flask-Restless has changed. According to the documentation:
The preprocessors and postprocessors for each type of request accept
different arguments, but none of them has a return value (more
specifically, any returned value is ignored). Preprocessors and
postprocessors modify their arguments in-place.
So now in my postprocessor I just delete any keys that I do not want. For example:
def api_post_get_many(result=None, **kw):
    # iterate over a copy of the keys so entries can be deleted while looping
    for key in list(result.keys()):
        if key != 'objects':
            del result[key]
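Wiring that up looks the same as in the accepted answer (the Person model name here is hypothetical):

manager.create_api(
    Person,
    methods=['GET'],
    postprocessors={
        'GET_MANY': [api_post_get_many]
    }
)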
