Python: Cannot read returned values from functions

I am working on a fall detection system. I wrote the Arduino code and connected it to Firebase, so I now have two variables that hold a 1 or 0 status, and I created a mobile application that receives an automatic push notification via Firebase + Pusher whenever the system detects a fall. I wrote this Python code in PyCharm, using the stream function to read live data from Firebase and send automatic notifications. The code worked for the variable "Fall_Detection_Status" and I received push notifications normally on every fall detection. I then modified the code to also read a second variable, "Fall_Detection_Status1", and I want the notification to be sent only when both variables are 1. I came up with the code below, but the last if statement does not seem to run: I receive no notifications, and the print(response['publishId']) at the end of the if block prints nothing.
So what is wrong?
import pyrebase
from pusher_push_notifications import PushNotifications

config = {
    'apiKey': "***********************************",
    'authDomain': "arfduinopushnotification.firebaseapp.com",
    'databaseURL': "https://arduinopushnotification.firebaseio.com",
    'projectId': "arduinopushnotification",
    'storageBucket': "arduinopushnotification.appspot.com",
    'messagingSenderId': "************"
}
firebase = pyrebase.initialize_app(config)
db = firebase.database()

pn_client = PushNotifications(
    instance_id='*****************************',
    secret_key='**************************',
)

value = 0
value1 = 0

def stream_handler(message):
    global value
    print(message)
    if message['data'] is 1:
        value = message['data']
    return value

def stream_handler1(message):
    global value1
    print(message)
    if message['data'] is 1:
        value1 = message['data']
    return value1

if value == 1 & value1 == 1:
    response = pn_client.publish(
        interests=['hello'],
        publish_body={
            'apns': {
                'aps': {
                    'alert': 'Hello!',
                },
            },
            'fcm': {
                'notification': {
                    'title': 'Notification',
                    'body': 'Fall Detected !!',
                },
            },
        },
    )
    print(response['publishId'])

my_stream = db.child("Fall_Detection_Status").stream(stream_handler)
my_stream1 = db.child("Fall_Detection_Status1").stream(stream_handler1)

You are using the wrong operator, '&', to combine the results of the two tests. In Python, '&' is the bitwise AND operator (and it binds more tightly than '=='); you want the logical version, 'and'.
Secondly, assuming the stream_handler/stream_handler1 callbacks are registered by your last two statements, those two statements come AFTER the place where you test the values in the if statement, so the test runs once at startup, before any data has arrived. Move those lines above the if block.
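A minimal sketch of that fix (hypothetical helper names; the publish callable is passed in here so the flow is easy to test, whereas the real code would call pn_client.publish directly). It also compares with == rather than is, since is tests object identity, not value:

```python
value = 0
value1 = 0

def maybe_notify(publish):
    # Fire the notification only once both streams have reported a fall.
    if value == 1 and value1 == 1:  # logical 'and', not bitwise '&'
        publish()

def stream_handler(message, publish):
    global value
    if message['data'] == 1:  # '==' compares values; 'is' checks identity
        value = message['data']
        maybe_notify(publish)

def stream_handler1(message, publish):
    global value1
    if message['data'] == 1:
        value1 = message['data']
        maybe_notify(publish)
```

With this shape the combined check runs every time either stream delivers data, instead of once at import time before any handler has fired.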


Can't update record in DynamoDB

Really new to Python and coding in general - it would really help if someone could point me in the right direction with some code.
To start off with, I am making a proof of concept for a car park running AWS Rekognition, and I need some help with updating the database. As you can see from the code below, it inputs the reg_plate, entry_time and exit_time into the database okay. What I am trying to do is: when Rekognition is invoked a second time with the same reg_plate, update only the exit_time of the existing record in the database.
import boto3
import time

def detect_text(photo, bucket):
    client = boto3.client('rekognition')
    response = client.detect_text(Image={'S3Object': {'Bucket': bucket, 'Name': photo}})
    textDetections = response['TextDetections']
    for text in textDetections:
        if text['Type'] == 'LINE':
            return text['DetectedText']
    return False

def main(event, context):
    bucket = ''
    photo = 'regtest.jpg'
    text_detected = detect_text(photo, bucket)
    if (text_detected == False):
        exit()
    print("Text detected: " + str(text_detected))
    entry_time = str(int(time.time()))
    dynamodb = boto3.client('dynamodb')
    table_name = 'Customer_Plate_DB'
    item = {
        'reg_plate': {
            'S': str(text_detected)
        },
        'entry_time': {
            'S': entry_time
        },
        'exit_time': {
            'S': str(0)
        }
    }
    dynamodb.put_item(TableName=table_name, Item=item)
I tried various if statements, but with no luck: whenever I try, it just keeps creating new records in the database, and the exit_time is never updated.
In DynamoDB, a PutItem will overwrite/insert data, so it's not what you need if you want to update a single attribute. You will need to use UpdateItem:
response = dynamodb.update_item(
    TableName=table_name,
    Key={
        'reg_plate': {'S': str(text_detected)},
        'entry_time': {'S': entry_time}
    },
    UpdateExpression="SET #t = :t",
    ExpressionAttributeNames={"#t": "exit_time"},
    ExpressionAttributeValues={":t": {"S": str(0)}},  # the low-level client needs typed values
    ConditionExpression='attribute_exists(reg_plate)'
)
Your question does not say what your partition and/or sort keys are, so the example above reuses the reg_plate/entry_time key from your PutItem; change it to suit your table.
In the example, exit_time is set to a value only if the item already exists in your table (the condition expression).
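One way to tie this together (a sketch; build_plate_request is a hypothetical helper, and it assumes reg_plate alone is the partition key, with "have we seen this plate?" decided beforehand, e.g. by a GetItem): build a PutItem for a first sighting and an UpdateItem that stamps exit_time for a repeat sighting.

```python
import time

def build_plate_request(reg_plate, seen_before, now=None):
    """Build the DynamoDB request for a plate sighting (hypothetical helper).

    First sighting  -> ('put_item', kwargs) recording entry_time.
    Repeat sighting -> ('update_item', kwargs) stamping exit_time only.
    """
    now = str(int(now if now is not None else time.time()))
    if not seen_before:
        return ('put_item', {
            'TableName': 'Customer_Plate_DB',
            'Item': {
                'reg_plate': {'S': reg_plate},
                'entry_time': {'S': now},
                'exit_time': {'S': '0'},
            },
        })
    return ('update_item', {
        'TableName': 'Customer_Plate_DB',
        'Key': {'reg_plate': {'S': reg_plate}},
        'UpdateExpression': 'SET #t = :t',
        'ExpressionAttributeNames': {'#t': 'exit_time'},
        'ExpressionAttributeValues': {':t': {'S': now}},
        'ConditionExpression': 'attribute_exists(reg_plate)',
    })
```

The Lambda would then call `getattr(dynamodb, op)(**kwargs)` on the returned pair; keeping the request construction pure makes it easy to test without AWS.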

"The document path provided in the update expression is invalid for update" when trying to update a map value

Here is my sample code
import boto3
import os

ENV = "dev"
DB = "http://awsservice.com"
REGION = "us-east-1"
TABLE = "traffic-count"

def main():
    os.environ["AWS_PROFILE"] = ENV
    client = boto3.resource("dynamodb", endpoint_url=DB, region_name=REGION)
    kwargs = {
        'Key': {'id': 'D-D0000012345-P-1'},
        'UpdateExpression': 'ADD #count.#car :delta SET #parentKey = :parent_key, #objectKey = :object_key',
        'ExpressionAttributeValues': {':delta': 1, ':parent_key': 'District-D0000012345', ':object_key': 'Street-1'},
        'ExpressionAttributeNames': {'#car': 'car', '#count': 'count', '#parentKey': 'parentKey', '#objectKey': 'objectKey'}
    }
    client.Table(TABLE).update_item(**kwargs)

if __name__ == "__main__":
    main()
What I want to achieve is this:
With a single API call (here, update_item), I want to be able to:
If the item does not exist, create it with a map count initialised to {'car': 1} and set the fields parent_key and object_key;
or
If the item already exists, update the count to {'car': 2} (if the original count is 1).
Previously, when I did not use a map, I could successfully update with this expression:
SET #count = if_not_exist(#count, :zero) + :delta,
#parentKey = :parent_key, #objectKey = :object_key
However I am getting this error:
botocore.exceptions.ClientError: An error occurred
(ValidationException) when calling the UpdateItem operation: The
document path provided in the update expression is invalid for update
Which document path is causing the problem? How can I fix it?
For those who landed on this page with a similar error:
The document path provided in the update expression is invalid for update
The likely reason: on the item being updated, the map attribute (count, in this example) is not yet set, so the update does not know on which map the new key (car, for example) should be set or updated.
It worked for the OP earlier because the attribute was not yet a map: the expression simply set a scalar value on count, rather than accessing a key of a map that may not exist.
This can be handled by catching the exception. For example (note that count is a DynamoDB reserved word, so it has to go through ExpressionAttributeNames):
from botocore.exceptions import ClientError
...
try:
    response = table.update_item(
        Key={
            "pk": pk
        },
        UpdateExpression="SET #count.#car = :c",
        ExpressionAttributeNames={"#count": "count", "#car": "car"},
        ExpressionAttributeValues={
            ':c': "some car"
        },
        ReturnValues="UPDATED_NEW"
    )
except ClientError as e:
    if e.response['Error']['Code'] == 'ValidationException':
        # the map does not exist yet - create it with the key already in place
        response = table.update_item(
            Key={
                "pk": pk
            },
            UpdateExpression="SET #count = :count",
            ExpressionAttributeNames={"#count": "count"},
            ExpressionAttributeValues={
                ':count': {"car": "some car"}
            },
            ReturnValues="UPDATED_NEW"
        )
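An alternative to catching the exception is to make the map's existence explicit (a sketch, reusing the question's key and aliasing the reserved word count): one update creates the empty map only when it is missing, and a second can then safely address the nested path.

```python
key = {'id': 'D-D0000012345-P-1'}

# Step 1: create the empty map only if it does not exist yet (a no-op otherwise).
ensure_map = {
    'Key': key,
    'UpdateExpression': 'SET #count = if_not_exists(#count, :empty)',
    'ExpressionAttributeNames': {'#count': 'count'},
    'ExpressionAttributeValues': {':empty': {}},
}

# Step 2: the document path #count.#car is now guaranteed to be valid,
# so the counter can be initialised-or-incremented in one expression.
bump = {
    'Key': key,
    'UpdateExpression': 'SET #count.#car = if_not_exists(#count.#car, :zero) + :delta',
    'ExpressionAttributeNames': {'#count': 'count', '#car': 'car'},
    'ExpressionAttributeValues': {':zero': 0, ':delta': 1},
}

# table.update_item(**ensure_map)
# table.update_item(**bump)
```

This is two calls rather than one, but both are idempotent in shape and avoid relying on a ValidationException for control flow.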

Pardot Salesforce Python API

I have a requirement to get data out of Pardot Salesforce objects using a Python API.
Can someone please share any available snippets to get data from all the Pardot objects (tables) using Python?
I am working on a Pardot sync solution using pypardot4 (kudos to Matt for https://github.com/mneedham91/PyPardot4), which retrieves data through the API (v4).
Here are some snippets for the Visitors API, but you can use the same approach for almost any Pardot API (except Visit...):
from pypardot.client import PardotAPI

# ... some code here to read API config ...
email = config['pardot_email']
password = config['pardot_password']
user_key = config['pardot_user_key']

client = PardotAPI(email, password, user_key)
client.authenticate()

# plain query
data = client.visitors.query(sort_by='id')
total = data['total_results']
# beware - max 200 results are returned per call; implement pagination using the offset query parameter

# filtered query
data = client.visitors.query(sort_by='id', id_greater_than=last_id)
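The pagination warning above can be handled with a small helper (a sketch; query_all is a hypothetical name, and it assumes the response carries total_results plus a list of records under a known key, as the visitors API does):

```python
def query_all(query_fn, record_key, page_size=200):
    """Drain a paginated Pardot-style query (hypothetical helper).

    query_fn(offset) must return a dict with 'total_results' and a list of
    records under record_key; each response is capped at page_size items.
    """
    records = []
    offset = 0
    while True:
        page = query_fn(offset)
        chunk = page.get(record_key) or []
        records.extend(chunk)
        offset += page_size
        if not chunk or offset >= page['total_results']:
            break
    return records

# e.g. query_all(lambda off: client.visitors.query(sort_by='id', offset=off), 'visitor')
```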
Also, I have used some introspection to iterate through API config data I have set up like this:
apiList = config['apiList']

# loop through the APIs and call their query method
for api in apiList:
    api_name = api['apiName']
    api_object_name = api['clientObject']
    api_object = getattr(client, api_object_name)
    method = 'query'
    if not api.get('noSortBy', False):
        data = getattr(api_object, method)(created_after=latest_sync, sort_by='created_at')
    else:
        # the API is not consistent - the sort_by criterion is not supported by all resources
        data = getattr(api_object, method)(created_after=latest_sync)
And a snippet from the apiList config JSON:
"apiList":[
{
"apiName": "campaign",
"clientObject": "campaigns"
},
{
"apiName": "customField",
"clientObject": "customfields"
},
{
"apiName": "customRedirect",
"clientObject": "customredirects"
},
{
"apiName": "emailClick",
"clientObject": "emailclicks",
"noSortBy": true
},
...
Notice the noSortBy field and how it's handled in the code.
Hope this helps!

Error retrieving JSON data from Webhose API in Python

I am a beginner in Python and am trying to use the webhose.io API to collect data from the web. The problem is that the crawler returns 100 objects per JSON response, i.e. to retrieve 500 posts it is necessary to make 5 requests. I am not able to collect all the data at once: I collected the first 100 results, but when moving on to the next request an error occurs and the first post is repeated. Here is the code:
import webhoseio

webhoseio.config(token="Xxxxx")
query_params = {
    "q": "trump:english",
    "ts": "1498538579353",
    "sort": "crawled"
}
output = webhoseio.query("filterWebContent", query_params)
x = 0
for var in output['posts']:
    print output['posts'][x]['text']
    print output['posts'][x]['published']
    if output['posts'] is None:
        output = webhoseio.get_next()
        x = 0
Thanks.
Use the following instead - iterate over each post in the current page, then fetch the next page once the loop finishes:
while output['posts']:
    for var in output['posts']:
        print var['text']
        print var['published']
    output = webhoseio.get_next()
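The same loop as a small helper, if you prefer to separate fetching from printing (a sketch; drain_posts is a hypothetical name, and get_next stands in for webhoseio.get_next):

```python
def drain_posts(first_page, get_next):
    """Collect (text, published) pairs page by page until a page has no posts."""
    collected = []
    output = first_page
    while output and output.get('posts'):
        for post in output['posts']:
            collected.append((post['text'], post['published']))
        output = get_next()
    return collected
```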

get Dividend and Split from blpapi via Python

I would like to get dividend and split data for some US companies from the Python module for the Bloomberg API (blpapi); I am using a screening to extract these companies. Here is my code:
import blpapi

# Connect to the Bloomberg platform
sessionOptions = blpapi.SessionOptions()
sessionOptions.setServerHost(bloomberg_host)
sessionOptions.setServerPort(bloomberg_port)
session = blpapi.Session(sessionOptions)
session.start()                        # start the session
session.openService("//blp/refdata")   # open the service before requesting it

# Get the dividend and split history
refDataService = session.getService("//blp/refdata")
request = refDataService.createRequest("HistoricalDataRequest")
request.getElement("securities").appendValue("AAPL US Equity")
request.getElement("fields").appendValue("DVD_HIST_ALL")
request.set("periodicityAdjustment", "ACTUAL")
request.set("periodicitySelection", "DAILY")
request.set("startDate", "20140101")
request.set("endDate", "20141231")
request.set("maxDataPoints", 1)
session.sendRequest(request)
But I get the following answer:
HistoricalDataResponse = {
    securityData = {
        security = "AAPL US Equity"
        eidData[] = {
        }
        sequenceNumber = 0
        fieldExceptions[] = {
            fieldExceptions = {
                fieldId = "DVD_HIST_ALL"
                errorInfo = {
                    source = "951::bbdbh5"
                    code = 1
                    category = "BAD_FLD"
                    message = "Not valid historical field"
                    subcategory = "NOT_APPLICABLE_TO_HIST_DATA"
                }
            }
        }
        fieldData[] = {
        }
    }
}
Looking at the documentation (blpapi-developers-guide) I see multiple request possibilities (Reference Data Service, Market Data Service, API Field Information Service), but none of them explains how to get the dividend/split, and I do not know which service and which request to use.
In the terminal, these dividends and splits are registered under the tag CACT if you use a screening, and under DVD if you look at the dividend/split of a currently loaded stock (in the worst case I can loop over the companies I want in my code).
If someone knows how to do it, you will illuminate my day!
