Google Fit data via Google Python API libraries - Python

I'm using this python library from google but I can't figure out what to use for the 'body' argument. Is there an example body that I can draw from to create the dict that this tool will need?
Here is the code that I'm using:
import webbrowser
import httplib2
from oauth2client import client
from googleapiclient import discovery

flow = client.flow_from_clientsecrets(
    workLaptop,  # path to the client secrets JSON file
    scope='https://www.googleapis.com/auth/fitness.activity.read',
    redirect_uri='oauth:code:from:somewhere')
auth_uri = flow.step1_get_authorize_url()
webbrowser.open_new(auth_uri)
auth_code = "a;ldskjfa;lsdkfja;ldsfkja;lsdkfjaldgha;"  # pasted back from the browser
credentials = flow.step2_exchange(auth_code)
http_auth = credentials.authorize(httplib2.Http())
service = discovery.build('fitness', 'v1', http=http_auth)
fitData = service.users().dataset().aggregate(userId='me', body=body).execute()
It's all fine until the part where I need to define the body. Here is the body that I'm trying:
body = {
    "aggregateBy": [
        {
            "dataSourceId": "derived:com.google.step_count.delta:com.google.android.gms:estimated_steps",
            "dataTypeName": "com.google.step_count.delta"
        },
    ],
    "bucketByActivitySegment": {
        "minDurationMillis": "A String",  # Only activity segments of duration longer than this is used
    },
    "endTimeMillis": "1435269600000000000",
    "bucketBySession": {
        "minDurationMillis": "10",  # Only sessions of duration longer than this is used
    },
    "bucketByActivityType": {
        "minDurationMillis": "10",  # Only activity segments of duration longer than this is used
    },
    "startTimeMillis": "1435183200000000000",  # required time range
    "bucketByTime": {  # apparently oneof is not supported by reduced_nano_proto
        "durationMillis": "10",
    },
}
What is wrong with my body dict? Here is the error code:
https://www.googleapis.com/fitness/v1/users/me/dataset:aggregate?alt=json returned "Internal Error">
Here is an example of the object in the API explorer:

Although I'm not 100% au fait with the Google Fit API, there are definitely some issues with your JSON body request in the first instance.
For example:
body = {
    "aggregateBy": [
        {
            "dataSourceId": "derived:com.google.step_count.delta:com.google.android.gms:estimated_steps",
            "dataTypeName": "com.google.step_count.delta"
        },
    ],
    "bucketByActivitySegment": {
        "minDurationMillis": "A String",  # Only activity segments of duration longer than this is used
    },
    "endTimeMillis": "1435269600000000000",
    "bucketBySession": {
        "minDurationMillis": "10",  # Only sessions of duration longer than this is used
    },
    "bucketByActivityType": {
        "minDurationMillis": "10",  # Only activity segments of duration longer than this is used
    },
    "startTimeMillis": "1435183200000000000",  # required time range
    "bucketByTime": {  # apparently oneof is not supported by reduced_nano_proto
        "durationMillis": "10",
    },
}
Should actually be this:
body = {
    "aggregateBy": [
        {
            "dataSourceId": "derived:com.google.step_count.delta:com.google.android.gms:estimated_steps",
            "dataTypeName": "com.google.step_count.delta"
        }
    ],
    "bucketByActivitySegment": {
        "minDurationMillis": "A String"  # Only activity segments of duration longer than this is used
    },
    "endTimeMillis": "1435269600000000000",
    "bucketBySession": {
        "minDurationMillis": "10"  # Only sessions of duration longer than this is used
    },
    "bucketByActivityType": {
        "minDurationMillis": "10"  # Only activity segments of duration longer than this is used
    },
    "startTimeMillis": "1435183200000000000",  # required time range
    "bucketByTime": {  # apparently oneof is not supported by reduced_nano_proto
        "durationMillis": "10"
    }
}
JSON-based REST services are really unforgiving about extra commas where they don't belong: they make the string unparseable as JSON, which will lead to a 500 failure. Give that a try in the first instance ;)

Not an expert myself, but I have been playing with the API for a number of days. Here's a sample from my OAuth playground.
Sample response
From what I understand, your "endTimeMillis": "1435269600000000000" is not properly defined, as it's in nanoseconds. For it to be in milliseconds, change it to "1435269600000".
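Putting both answers together, here is a minimal sketch of an aggregate body that should be accepted (it assumes the step-count data source from the question; the one-day bucket and the exact timestamps are illustrative, not taken from the original post):
body = {
    "aggregateBy": [{
        "dataSourceId": "derived:com.google.step_count.delta:com.google.android.gms:estimated_steps",
        "dataTypeName": "com.google.step_count.delta"
    }],
    # pick ONE bucketing strategy; bucketByTime splits the range into fixed windows
    "bucketByTime": {"durationMillis": 86400000},  # one day
    "startTimeMillis": 1435183200000,  # milliseconds, not nanoseconds
    "endTimeMillis": 1435269600000
}
fitData = service.users().dataset().aggregate(userId='me', body=body).execute()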

Related

How to remove highlight in the google slides API?

I am using the Google API Python Client to replace text placeholders with generated data. In this example, I detect all instances of "bar" and replace them with "foo", in all slides. slides_service is instantiated with apiclient.discovery.build(...)
batch_requests_array = [
    {
        "replaceAllText": {
            "replaceText": "foo",
            "containsText": {
                "text": "bar",
                "matchCase": False
            }
        }
    }
]
batch_requests = {"requests": batch_requests_array}
request = slides_service.presentations().batchUpdate(presentationId=slides_id, body=batch_requests)
res = request.execute()
Now if bar has a highlight color, how can I remove that when I replace it with foo? I think I need to add a separate request to my batch requests array, but I have been scrolling up and down here without finding any clue.
For clarity, this is the highlight option I am talking about as it's presented in the UI
To remove the highlight color of a text, you will have to update the backgroundColor. To give you an idea, this is how the request should look; it will set the background to fully transparent:
highlightedTextRequest = [
    {
        "updateTextStyle": {
            "objectId": "objectId",
            "style": {
                "backgroundColor": {
                }
            },
            "fields": "*"
        }
    }
]
Note: The objectId in the request is the ID of the shape or table with the text to be styled.
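If it helps, here is a hedged sketch of combining the text replacement from the question with the style reset in a single batchUpdate call. SHAPE_ID is a placeholder for the real objectId, and narrowing the field mask to "backgroundColor" (instead of "*") is my assumption for clearing only the highlight while leaving other styling intact:
batch_requests = {
    "requests": [
        {
            "replaceAllText": {
                "replaceText": "foo",
                "containsText": {"text": "bar", "matchCase": False}
            }
        },
        {
            "updateTextStyle": {
                "objectId": "SHAPE_ID",
                "style": {"backgroundColor": {}},  # empty color -> transparent background
                "fields": "backgroundColor"
            }
        }
    ]
}
res = slides_service.presentations().batchUpdate(
    presentationId=slides_id, body=batch_requests).execute()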
References:
UpdateTextStyleRequest
Formatting text with the Google Slides API

Define resource_defs in dagster job sensor

I am trying to build a sensor for the execution of a pipeline/graph. The sensor checks at different intervals and executes the job containing different ops. Now the job requires some resource_defs and config. In the official documentation I don't see how I can define resource_defs for a job triggered by a sensor. A small hint would be great.
Question: where or how do I define resource_defs in a sensor? Do I even have to define them? It's not mentioned in the official documentation:
https://docs.dagster.io/concepts/partitions-schedules-sensors/sensors
### defining the job
@job(
    resource_defs={"some_API_Module": API_module, "db_Module": db},
    config={key: value},
)
def job_pipeline():
    op_1()  ## API is used as a required resource
    op_2()  ## db is used as a required resource

### defining the sensor that triggers the job
@sensor(job=job_pipeline)
def job_pipeline_sensor(context):
    ### some calculation
    yield RunRequest(run_key="", run_config={key: value})
It seems like you may be missing building your run config from within the sensor. The configuration you pass to a RunRequest should contain the resource configuration you want to run the job with, and will look exactly like the run configuration you'd configure from the launchpad. Something like:
### defining the sensor that triggers the job
@sensor(job=job_pipeline)
def job_pipeline_sensor(context):
    ### some calculation
    run_config = {
        "ops": {
            "op_1": {
                "config": {"key": "value"},
            },
            "op_2": {
                "config": {"key": "value"},
            },
        },
        "resources": {
            "some_API_Module": {
                "config": {"key": "value"},
            },
            "db_Module": {
                "config": {"key": "value"},
            },
        },
    }
    yield RunRequest(run_key="<unique_value>", run_config=run_config)
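To directly answer the "do I even have to define resource_defs in the sensor" part: resources stay defined on the @job itself (as in your job definition above); the sensor only supplies run-time config for them. A self-contained sketch under that assumption (the resource and op bodies are purely illustrative):
from dagster import RunRequest, job, op, resource, sensor

@resource(config_schema={"key": str})
def db(init_context):
    # illustrative resource that just exposes its config value
    return init_context.resource_config["key"]

@op(required_resource_keys={"db_Module"})
def op_1(context):
    context.log.info(context.resources.db_Module)

@job(resource_defs={"db_Module": db})  # resources are defined here, on the job
def job_pipeline():
    op_1()

@sensor(job=job_pipeline)
def job_pipeline_sensor(context):
    # the sensor never redefines resources; it only passes their config
    yield RunRequest(
        run_key=None,
        run_config={"resources": {"db_Module": {"config": {"key": "value"}}}},
    )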

How to push the data from dynamodb through stream

Below is the JSON file:
[
    {
        "year": 2013,
        "title": "Rush",
        "actors": [
            "Daniel Bruhl",
            "Chris Hemsworth",
            "Olivia Wilde"
        ]
    },
    {
        "year": 2013,
        "title": "Prisoners",
        "actors": [
            "Hugh Jackman",
            "Jake Gyllenhaal",
            "Viola Davis"
        ]
    }
]
Below is the code to push the data to DynamoDB. I have created a bucket named testjsonbucket, moviedataten.json is the filename, and I saved the above JSON there. I created a DynamoDB table with year (Number) as the primary partition key and title (String) as the primary sort key.
import json
from decimal import Decimal

import boto3

s3 = boto3.resource('s3')
obj = s3.Object('testjsonbucket', 'moviedataten.json')
body = obj.get()['Body'].read()  # read the raw JSON from S3

# def lambda_handler(event, context):
#     print(body)

def load_movies(movies, dynamodb=None):
    if not dynamodb:
        dynamodb = boto3.resource('dynamodb')
    table = dynamodb.Table('Movies')
    for movie in movies:
        year = int(movie['year'])
        title = movie['title']
        print("Adding movie:", year, title)
        table.put_item(Item=movie)

def lambda_handler(event, context):
    movie_list = json.loads(body, parse_float=Decimal)
    load_movies(movie_list)
I want to push into Elasticsearch from DynamoDB.
I have created an Elasticsearch domain: https://xx.x.x.com/testelas
I have gone through the link https://aws.amazon.com/blogs/compute/indexing-amazon-dynamodb-content-with-amazon-elasticsearch-service-using-aws-lambda/
I have also enabled Manage stream.
My requirement:
Any change in DynamoDB has to be reflected in Elasticsearch.
This lambda is just writing the document to DynamoDB, and I would not recommend adding code to this lambda to push the same object to Elasticsearch: a lambda function should perform a single task, and pushing the same document to ELK should be managed from a DynamoDB stream.
What if ELK is down or not available? How would you manage that in the lambda?
What if you want to disable this in the future? You would need to modify the lambda instead of controlling it from the AWS API or AWS console; with a stream you just disable the stream when required, with no code changes to the lambda above.
What if you want to push only modified or TTL-expired items to Elasticsearch?
So create a DynamoDB stream that pushes the document to another lambda that is responsible for pushing it to ELK; with this option you can also push both the old and the new item.
You can also look into this article that describes another approach: data-streaming-from-dynamodb-to-elasticsearch
For the above approach, look into this GitHub project: dynamodb-stream-elasticsearch.
const { pushStream } = require('dynamodb-stream-elasticsearch');

const { ES_ENDPOINT, INDEX, TYPE } = process.env;

function myHandler(event, context, callback) {
  console.log('Received event:', JSON.stringify(event, null, 2));
  pushStream({ event, endpoint: ES_ENDPOINT, index: INDEX, type: TYPE })
    .then(() => {
      callback(null, `Successfully processed ${event.Records.length} records.`);
    })
    .catch((e) => {
      callback(`Error ${e}`, null);
    });
}

exports.handler = myHandler;
DynamoDB has a built-in feature (DynamoDB Streams) that will handle the stream part of this question.
When you configure this you have the choice of the following configurations:
KEYS_ONLY — Only the key attributes of the modified item.
NEW_IMAGE — The entire item, as it appears after it was modified.
OLD_IMAGE — The entire item, as it appeared before it was modified.
NEW_AND_OLD_IMAGES — Both the new and the old images of the item.
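If you prefer to enable the stream from code rather than from the console, a minimal boto3 sketch (assuming the Movies table from the question and the NEW_AND_OLD_IMAGES view type) could look like this:
import boto3

dynamodb = boto3.client('dynamodb')

# enable a stream on the existing table; pick the view type that matches
# what your Elasticsearch indexer needs
dynamodb.update_table(
    TableName='Movies',
    StreamSpecification={
        'StreamEnabled': True,
        'StreamViewType': 'NEW_AND_OLD_IMAGES',
    },
)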
This will produce an event that looks like the following
{
  "Records": [
    {
      "eventID": "1",
      "eventName": "INSERT",
      "eventVersion": "1.0",
      "eventSource": "aws:dynamodb",
      "awsRegion": "us-east-1",
      "dynamodb": {
        "Keys": {
          "Id": { "N": "101" }
        },
        "NewImage": {
          "Message": { "S": "New item!" },
          "Id": { "N": "101" }
        },
        "SequenceNumber": "111",
        "SizeBytes": 26,
        "StreamViewType": "NEW_AND_OLD_IMAGES"
      },
      "eventSourceARN": "stream-ARN"
    },
    {
      "eventID": "2",
      "eventName": "MODIFY",
      "eventVersion": "1.0",
      "eventSource": "aws:dynamodb",
      "awsRegion": "us-east-1",
      "dynamodb": {
        "Keys": {
          "Id": { "N": "101" }
        },
        "NewImage": {
          "Message": { "S": "This item has changed" },
          "Id": { "N": "101" }
        },
        "OldImage": {
          "Message": { "S": "New item!" },
          "Id": { "N": "101" }
        },
        "SequenceNumber": "222",
        "SizeBytes": 59,
        "StreamViewType": "NEW_AND_OLD_IMAGES"
      },
      "eventSourceARN": "stream-ARN"
    },
    {
      "eventID": "3",
      "eventName": "REMOVE",
      "eventVersion": "1.0",
      "eventSource": "aws:dynamodb",
      "awsRegion": "us-east-1",
      "dynamodb": {
        "Keys": {
          "Id": { "N": "101" }
        },
        "OldImage": {
          "Message": { "S": "This item has changed" },
          "Id": { "N": "101" }
        },
        "SequenceNumber": "333",
        "SizeBytes": 38,
        "StreamViewType": "NEW_AND_OLD_IMAGES"
      },
      "eventSourceARN": "stream-ARN"
    }
  ]
}
As you're already familiar with Lambda, it makes sense to use a Lambda function to consume the stream records, iterate through them, and transform them into the Elasticsearch format before adding them to your index.
When doing this, make sure that you iterate through each record, as there may be multiple records per event depending on your configuration.
For more information on the steps required for the Lambda side of the function check out the Tutorial: Using AWS Lambda with Amazon DynamoDB streams page.
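For reference, here is a minimal, hedged sketch of such a consumer in Python. It assumes the stream event shape shown above, an index name of movies, the requests package bundled with the deployment package, and an ES_ENDPOINT environment variable; a real Amazon Elasticsearch domain will usually also need request signing or an open access policy.
import os

import requests  # assumed to be bundled with the Lambda deployment package

ES_ENDPOINT = os.environ['ES_ENDPOINT']    # e.g. the domain endpoint URL
INDEX = os.environ.get('INDEX', 'movies')

def _to_plain_json(image):
    # naive conversion of the DynamoDB attribute-value format ({"S": ...}, {"N": ...})
    # into plain values; enough for flat items like the example event above
    return {key: list(value.values())[0] for key, value in image.items()}

def lambda_handler(event, context):
    for record in event['Records']:
        keys = record['dynamodb']['Keys']
        doc_id = '-'.join(str(list(v.values())[0]) for v in keys.values())
        if record['eventName'] == 'REMOVE':
            requests.delete(f"{ES_ENDPOINT}/{INDEX}/_doc/{doc_id}")
        else:  # INSERT or MODIFY
            doc = _to_plain_json(record['dynamodb']['NewImage'])
            requests.put(f"{ES_ENDPOINT}/{INDEX}/_doc/{doc_id}", json=doc)
    return f"Successfully processed {len(event['Records'])} records."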

Using the Firestore REST API to Update a Document Field

I've been searching for a pretty long time but I can't figure out how to update a field in a document using the Firestore REST API. I've looked at other questions but they haven't helped me, since I'm getting a different error:
{'error': {'code': 400, 'message': 'Request contains an invalid argument.', 'status': 'INVALID_ARGUMENT', 'details': [{'#type': 'type.googleapis.com/google.rpc.BadRequest', 'fieldViolations': [{'field': 'oil', 'description': "Error expanding 'fields' parameter. Cannot find matching fields for path 'oil'."}]}]}}
I'm getting this error even though I know that the "oil" field exists in the document. I'm writing this in Python.
My request body (field is the field in a document and value is the value to set that field to, both strings received from user input):
{
    "fields": {
        field: {
            "integerValue": value
        }
    }
}
My request (authorizationToken is from a different request, dir is also a string from user input which controls the directory):
requests.patch("https://firestore.googleapis.com/v1beta1/projects/aethia-resource-management/databases/(default)/documents/" + dir + "?updateMask.fieldPaths=" + field, data = body, headers = {"Authorization": "Bearer " + authorizationToken}).json()
Based on the official docs (1, 2, and 3), GitHub and a nice article, for the example you have provided you should use the following:
requests.patch("https://firestore.googleapis.com/v1beta1/projects/{projectId}/databases/{databaseId}/documents/{document_path}?updateMask.fieldPaths=field")
Your request body should be:
{
    "fields": {
        "field": {
            "integerValue": Value
        }
    }
}
Also keep in mind that if you want to update multiple fields and values you should specify each one separately.
Example:
https://firestore.googleapis.com/v1beta1/projects/{projectId}/databases/{databaseId}/documents/{document_path}?updateMask.fieldPaths=[Field1]&updateMask.fieldPaths=[Field2]
and the request body would have been:
{
    "fields": {
        "field": {
            "integerValue": Value
        },
        "Field2": {
            "stringValue": "Value2"
        }
    }
}
EDIT:
Here is a way I have tested which allows you to update some fields of a document without affecting the rest.
This sample code creates a document under collection users with 4 fields, then tries to update 3 out of 4 fields (which leaves the one not mentioned unaffected)
from google.cloud import firestore

db = firestore.Client()

# Creating a sample new document "aturing" under collection "users"
doc_ref = db.collection(u'users').document(u'aturing')
doc_ref.set({
    u'first': u'Alan',
    u'middle': u'Mathison',
    u'last': u'Turing',
    u'born': 1912
})

# Updating 3 out of 4 fields (so the last should remain unaffected)
doc_ref = db.collection(u'users').document(u'aturing')
doc_ref.update({
    u'first': u'Alan',
    u'middle': u'Mathison',
    u'born': 2000
})

# Printing the content of all docs under users
users_ref = db.collection(u'users')
docs = users_ref.stream()
for doc in docs:
    print(u'{} => {}'.format(doc.id, doc.to_dict()))
EDIT: 10/12/2019
PATCH with REST API
I have reproduced your issue and it seems like you are not converting your request body to a json format properly.
You need to use json.dumps() to convert your request body to a valid json format.
A working example is the following:
import requests
import json

endpoint = "https://firestore.googleapis.com/v1/projects/[PROJECT_ID]/databases/(default)/documents/[COLLECTION]/[DOCUMENT_ID]?currentDocument.exists=true&updateMask.fieldPaths=[FIELD_1]"

body = {
    "fields": {
        "[FIELD_1]": {
            "stringValue": "random new value"
        }
    }
}

data = json.dumps(body)
headers = {"Authorization": "Bearer [AUTH_TOKEN]"}

print(requests.patch(endpoint, data=data, headers=headers).json())
I found the official documentation not to be of much use, since there was no example mentioned. This is the API endpoint for your Firestore database:
PATCH https://firestore.googleapis.com/v1beta1/projects/{YOUR_PROJECT_ID}/databases/(default)/documents/{COLLECTION_NAME}/{DOCUMENT_NAME}
The following is the body of your API request:
{
    "fields": {
        "first_name": {
            "stringValue": "Kurt"
        },
        "name": {
            "stringValue": "Cobain"
        },
        "band": {
            "stringValue": "Nirvana"
        }
    }
}
The response you should get upon successful update of the database should look like
{
    "name": "projects/{YOUR_PROJECT_ID}/databases/(default)/documents/{COLLECTION_ID}/{DOC_ID}",
    "fields": {
        "first_name": {
            "stringValue": "Kurt"
        },
        "name": {
            "stringValue": "Cobain"
        },
        "band": {
            "stringValue": "Nirvana"
        }
    },
    "createTime": "{CREATE_TIME}",
    "updateTime": "{UPDATE_TIME}"
}
Note that performing the above action would result in a new document being created, meaning that any fields that existed previously but have NOT been mentioned in the "fields" body will be deleted. In order to preserve fields, you'll have to add
?updateMask.fieldPaths={FIELD_NAME} --> to the end of your API call (for each individual field that you want to preserve).
For example:
PATCH https://firestore.googleapis.com/v1beta1/projects/{YOUR_PROJECT_ID}/databases/(default)/documents/{COLLECTION_NAME}/{DOCUMENT_NAME}?updateMask.fieldPaths=name&updateMask.fieldPaths=band&updateMask.fieldPaths=age. --> and so on
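For completeness, a hedged Python sketch of that multi-field call (placeholders carried over from above; passing a list of tuples as params makes requests repeat the updateMask.fieldPaths parameter):
import json

import requests

endpoint = ("https://firestore.googleapis.com/v1beta1/projects/{YOUR_PROJECT_ID}"
            "/databases/(default)/documents/{COLLECTION_NAME}/{DOCUMENT_NAME}")

# one updateMask.fieldPaths entry per field being written
params = [("updateMask.fieldPaths", "name"), ("updateMask.fieldPaths", "band")]

body = {
    "fields": {
        "name": {"stringValue": "Cobain"},
        "band": {"stringValue": "Nirvana"},
    }
}

headers = {"Authorization": "Bearer {AUTH_TOKEN}"}
print(requests.patch(endpoint, params=params, data=json.dumps(body), headers=headers).json())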

Python Office365 API - TimeZone stays at Local Time

I have this Python code below and it works for creating an event in the Outlook calendar. The example below has a start and end time from 3pm to 4pm (I think in the UTC timezone).
We have users from different regions (Pacific, Mountain, Central... time zones). What I am trying to accomplish is for the time to always be local: no matter which region the user account is from, it should default to 3pm to 4pm in their Outlook.
Thanks in advance and please let me know if I need to clarify any of this.
import base64
import getpass
import json
import urllib2

# Set the request parameters
url = 'https://outlook.office365.com/api/v1.0/me/events?$Select=Start,End'
user = 'user1@domain.com'
pwd = getpass.getpass('Please enter your AD password: ')

# Create JSON payload
data = {
    "Subject": "Testing Outlock Event",
    "Body": {
        "ContentType": "HTML",
        "Content": "Test Content"
    },
    "Start": "2016-05-23T15:00:00.000Z",
    "End": "2016-05-23T16:00:00.000Z",
    "Attendees": [
        {
            "EmailAddress": {
                "Address": "user1@domain.com",
                "Name": "User1"
            },
            "Type": "Required"
        },
        {
            "EmailAddress": {
                "Address": "user2@domain.com",
                "Name": "User2"
            },
            "Type": "Optional"
        }
    ]
}
json_payload = json.dumps(data)

# Build the HTTP request
opener = urllib2.build_opener(urllib2.HTTPHandler)
request = urllib2.Request(url, data=json_payload)
auth = base64.encodestring('%s:%s' % (user, pwd)).replace('\n', '')
request.add_header('Authorization', 'Basic %s' % auth)
request.add_header('Content-Type', 'application/json')
request.add_header('Accept', 'application/json')
request.get_method = lambda: 'POST'

# Perform the request
result = opener.open(request)
First of all, there's no point in creating a meeting at the same local time across different timezones.
If you really want to do so, you'll need to create separate meeting requests per attendee timezone; the JSON format would be something like below:
var body = new JObject
{
    {"Subject", "Testing Outlock Event"},
    {"Start", new JObject { { "DateTime", "2016-03-24T15:00:00"}, { "TimeZone", "Pacific Standard Time" } } },
    {"End", new JObject { { "DateTime", "2016-03-24T16:00:00"}, { "TimeZone", "Pacific Standard Time" } } }
};
Note you need to remove the capital 'Z' from the time, so that it'll be shown in the local time zone specified by the "TimeZone" attribute.
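In the question's Python payload, the equivalent change might look like the sketch below. This Start/End object shape mirrors the JObject example above and is what the newer Outlook/Graph endpoints expect; whether the v1.0 endpoint used in the question accepts it is an assumption to verify.
# Hedged sketch: Start/End as objects carrying an explicit TimeZone,
# mirroring the JObject example above; verify against your API version.
data = {
    "Subject": "Testing Outlock Event",
    "Body": {"ContentType": "HTML", "Content": "Test Content"},
    "Start": {"DateTime": "2016-05-23T15:00:00", "TimeZone": "Pacific Standard Time"},
    "End": {"DateTime": "2016-05-23T16:00:00", "TimeZone": "Pacific Standard Time"},
}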
The workaround above needs to know the timezone of all the attendees; I'm not sure if that can be determined programmatically. If not, it might not make much sense if you have tons of attendees.
The "ideal" way of achieving your requirement is giving the time without specifying a timezone, so that it'll be shown at the same local time in each attendee's timezone.
However, this isn't supported yet, but you can vote for it here: https://officespdev.uservoice.com/forums/224641-general/suggestions/12866364-microsoft-graph-o365-unified-api-create-events-w
