Pardot Salesforce Python API - python

I have a requirement to get data out of Pardot Salesforce objects using the Python API.
Can someone please share any available snippets to get data from all the Pardot objects (tables) using Python?

I am working on a Pardot sync solution using pypardot4 (kudos to Matt for https://github.com/mneedham91/PyPardot4), which involves retrieving data through the API (v4).
Here are some snippets for the Visitors API, but you can use the same approach for almost any of the Pardot APIs (except Visit...):
from pypardot.client import PardotAPI
# ... some code here to read API config ...
email = config['pardot_email']
password = config['pardot_password']
user_key = config['pardot_user_key']
client = PardotAPI(email, password, user_key)
client.authenticate()
# plain query
data = client.visitors.query(sort_by='id')
total = data['total_results']
# beware - at most 200 results are returned per query, so pagination must be implemented using the offset query parameter
# filtered query
data = client.visitors.query(sort_by='id', id_greater_than=last_id)
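Since each query returns at most 200 records, a simple pagination loop might look like this (a minimal sketch, assuming the offset parameter is passed straight through to the Pardot API; the key that holds the records ('visitor' here) differs per resource):
all_visitors = []
offset = 0
while True:
    page = client.visitors.query(sort_by='id', offset=offset)
    # note: the API may return a single dict rather than a list when only one record matches
    batch = page.get('visitor', [])
    if not batch:
        break
    all_visitors.extend(batch)
    offset += 200  # page size is capped at 200 results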
I have also used some introspection to iterate through API config data that I set up like this:
apiList = config['apiList']
# loop through the APIs and call their query method
for api in apiList:
    api_name = api['apiName']
    api_object_name = api['clientObject']
    api_object = getattr(client, api_object_name)
    method = 'query'
    if not api.get('noSortBy', False):
        data = getattr(api_object, method)(created_after=latest_sync, sort_by='created_at')
    else:
        # the API is not consistent; the sort_by criterion is not supported by all resources
        data = getattr(api_object, method)(created_after=latest_sync)
And a snippet from the apiList config JSON:
"apiList":[
{
"apiName": "campaign",
"clientObject": "campaigns"
},
{
"apiName": "customField",
"clientObject": "customfields"
},
{
"apiName": "customRedirect",
"clientObject": "customredirects"
},
{
"apiName": "emailClick",
"clientObject": "emailclicks",
"noSortBy": true
},
...
Notice the noSortBy field and how it's handled in the code.
Hope this helps!

Related

Python: Cannot read returned values from functions

I am working on a Fall Detection System. I wrote the Arduino code and connected it to Firebase. I now have two variables that receive a 1 or 0 status, and I created a mobile application that receives an automatic push notification whenever the system detects a fall, via Firebase + Pusher. I wrote this Python code in PyCharm and used the stream function to read live data from Firebase and send automatic notifications. The code was working for the variable "Fall_Detection_Status" and I was able to receive push notifications normally on every fall detection. I then tried to modify the code to read data from another variable, "Fall_Detection_Status1", and I want the code to send the notification only when both variables are 1. I came up with this code, but it seems that the last if statement is not working, because I am not receiving notifications and print(response['publishId']) at the end of the if statement is not showing any result.
So what is wrong?
import pyrebase
from pusher_push_notifications import PushNotifications

config = {
    'apiKey': "***********************************",
    'authDomain': "arfduinopushnotification.firebaseapp.com",
    'databaseURL': "https://arduinopushnotification.firebaseio.com",
    'projectId': "arduinopushnotification",
    'storageBucket': "arduinopushnotification.appspot.com",
    'messagingSenderId': "************"
}
firebase = pyrebase.initialize_app(config)
db = firebase.database()

pn_client = PushNotifications(
    instance_id='*****************************',
    secret_key='**************************',
)

value = 0
value1 = 0

def stream_handler(message):
    global value
    print(message)
    if message['data'] is 1:
        value = message['data']
        return value

def stream_handler1(message):
    global value1
    print(message)
    if message['data'] is 1:
        value1 = message['data']
        return value1

if value == 1 & value1 == 1:
    response = pn_client.publish(
        interests=['hello'],
        publish_body={
            'apns': {
                'aps': {
                    'alert': 'Hello!',
                },
            },
            'fcm': {
                'notification': {
                    'title': 'Notification',
                    'body': 'Fall Detected !!',
                },
            },
        },
    )
    print(response['publishId'])

my_stream = db.child("Fall_Detection_Status").stream(stream_handler)
my_stream1 = db.child("Fall_Detection_Status1").stream(stream_handler1)
You are using the wrong operator, '&', to combine the results of the two tests. In Python, '&' is the bitwise and operator; you want the logical version, 'and'.
Secondly, assuming the stream_handler/stream_handler1 callbacks are wired up by your last two statements, those two statements come AFTER the place where you test the values in the if statement. Move those lines above the if block.
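For illustration, the corrected condition would read as follows (only this fragment changes; the rest of the program stays as posted):
# the logical operator ensures the notification fires only when both flags are 1
if value == 1 and value1 == 1:
    ...  # publish the notification as before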

How to set group = true in couchdb

I am trying to use map/reduce to find duplicated data in CouchDB.
the map function is like this:
function(doc) {
    if (doc.coordinates) {
        emit({
            twitter_id: doc.id_str,
            text: doc.text,
            coordinates: doc.coordinates
        }, 1);
    }
}
and the reduce function is:
function(keys,values,rereduce){return sum(values)}
I want to find the sum of the values for each distinct key, but it just adds everything together and I get this result:
<Row key=None, value=1035>
Is this a problem with group? How can I set it to true?
Assuming you're using the couchdb package from PyPI, you'll need to pass all of the options you require through to the view call.
for example:
import couchdb

# the design doc and view name of the view you want to use
ddoc = "my_design_document"
view_name = "my_view"

# your server
server = couchdb.Server("http://localhost:5984")
db = server["aCouchDatabase"]

# naming convention when passing a ddoc and view to the view method
view_string = ddoc + "/" + view_name

# query options
view_options = {"reduce": True,
                "group": True,
                "group_level": 2}

# call the view, unpacking the options as keyword arguments
results = db.view(view_string, **view_options)
for row in results:
    # do something with each grouped row
    pass
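For reference, these options are just query-string parameters on the view endpoint, so the equivalent raw HTTP call would be something like this (a sketch using requests; the server, database, and view names are the placeholders from above):
import requests

url = "http://localhost:5984/aCouchDatabase/_design/my_design_document/_view/my_view"
# reduce/group/group_level map directly onto query-string parameters
resp = requests.get(url, params={"reduce": "true", "group": "true", "group_level": 2})
for row in resp.json()["rows"]:
    print(row["key"], row["value"])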

Write results to permanent table in bigquery

I am using named parameters in BigQuery SQL and want to write the results to a permanent table. I have two functions: one that runs a query with named parameters and one that writes query results to a table. How do I combine the two so that the results of the parameterized query are written to a table?
This is the function using parameterized queries:
from google.cloud import bigquery

def sync_query_named_params(column_name, min_word_count, value):
    query = """with lsq_results as
    (select "%s" = @min_word_count)
    replace (%s AS %s)
    from lsq.lsq_results
    """ % (min_word_count, value, column_name)
    client = bigquery.Client()
    query_results = client.run_sync_query(
        query,
        query_parameters=(
            bigquery.ScalarQueryParameter('column_name', 'STRING', column_name),
            bigquery.ScalarQueryParameter(
                'min_word_count',
                'STRING',
                min_word_count),
            bigquery.ScalarQueryParameter('value', 'INT64', value)
        ))
    query_results.use_legacy_sql = False
    query_results.run()
Function to write to permanent table
class BigQueryClient(object):

    def __init__(self, bq_service, project_id, swallow_results=True):
        self.bigquery = bq_service
        self.project_id = project_id
        self.swallow_results = swallow_results
        self.cache = {}

    def write_to_table(
            self,
            query,
            dataset=None,
            table=None,
            external_udf_uris=None,
            allow_large_results=None,
            use_query_cache=None,
            priority=None,
            create_disposition=None,
            write_disposition=None,
            use_legacy_sql=None,
            maximum_billing_tier=None,
            flatten=None):

        configuration = {
            "query": query,
        }

        if dataset and table:
            configuration['destinationTable'] = {
                "projectId": self.project_id,
                "tableId": table,
                "datasetId": dataset
            }

        if allow_large_results is not None:
            configuration['allowLargeResults'] = allow_large_results

        if flatten is not None:
            configuration['flattenResults'] = flatten

        if maximum_billing_tier is not None:
            configuration['maximumBillingTier'] = maximum_billing_tier

        if use_query_cache is not None:
            configuration['useQueryCache'] = use_query_cache

        if use_legacy_sql is not None:
            configuration['useLegacySql'] = use_legacy_sql

        if priority:
            configuration['priority'] = priority

        if create_disposition:
            configuration['createDisposition'] = create_disposition

        if write_disposition:
            configuration['writeDisposition'] = write_disposition

        if external_udf_uris:
            configuration['userDefinedFunctionResources'] = \
                [{'resourceUri': u} for u in external_udf_uris]

        body = {
            "configuration": {
                'query': configuration
            }
        }

        logger.info("Creating write to table job %s" % body)
        job_resource = self._insert_job(body)
        self._raise_insert_exception_if_error(job_resource)
        return job_resource
How do I combine the two functions to run a parameterized query and write its results to a permanent table? Or is there another, simpler way? Please suggest.
You appear to be using two different client libraries.
Your first code sample uses a beta version of the BigQuery client library, but for the time being I would recommend against using it, since it needs substantial revision before it is considered generally available. (And if you do use it, I would recommend using run_async_query() to create a job using all available parameters, and then call results() to get the QueryResults object.)
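For what it's worth, a rough sketch of that beta-client approach might look as follows; attribute names changed between beta releases, so treat this as an assumption rather than a recipe (the query, dataset, and table names are made up):
import time
import uuid
from google.cloud import bigquery

client = bigquery.Client()
job = client.run_async_query(
    str(uuid.uuid4()),  # a unique job name
    "SELECT word FROM lsq.lsq_results WHERE word_count > @min_word_count",
    query_parameters=[
        bigquery.ScalarQueryParameter('min_word_count', 'INT64', 10),
    ])
job.use_legacy_sql = False
# point the job at a permanent destination table
job.destination = client.dataset('lsq').table('lsq_results_output')
job.write_disposition = 'WRITE_TRUNCATE'
job.begin()
while job.state != 'DONE':  # poll until the job finishes
    time.sleep(1)
    job.reload()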
Your second code sample is creating a job resource directly, which is a lower-level interface. When using this approach, you can specify the configuration.query.queryParameters field on your query configuration directly. This is the approach I'd recommend right now.
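For illustration, a query job configuration along those lines could look like this (a sketch of the jobs.insert request body; the project, dataset, and table names are placeholders). The write_to_table method above could be extended to add the parameter-related keys to the query configuration it already builds:
body = {
    "configuration": {
        "query": {
            "query": "SELECT word FROM lsq.lsq_results WHERE word_count > @min_word_count",
            "useLegacySql": False,
            "parameterMode": "NAMED",
            "queryParameters": [
                {
                    "name": "min_word_count",
                    "parameterType": {"type": "INT64"},
                    "parameterValue": {"value": "10"}
                }
            ],
            "destinationTable": {
                "projectId": "my-project",
                "datasetId": "lsq",
                "tableId": "lsq_results_output"
            },
            "writeDisposition": "WRITE_TRUNCATE"
        }
    }
}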

get Dividend and Split from blpapi via python

I would like to get Dividend and Split data through the Python module for the Bloomberg API (blpapi) for some companies in the US (I am using a screening to extract these companies). I am using the Python module blpapi:
import blpapi

# Connect to the Bloomberg platform
sessionOptions = blpapi.SessionOptions()
sessionOptions.setServerHost(bloomberg_host)
sessionOptions.setServerPort(bloomberg_port)
session = blpapi.Session(sessionOptions)
session.start()                         # start the session and open the reference data service
session.openService("//blp/refdata")

# Request the dividend and split history
refDataService = session.getService("//blp/refdata")
request = refDataService.createRequest("HistoricalDataRequest")
request.getElement("securities").appendValue("AAPL US Equity")
request.getElement("fields").appendValue("DVD_HIST_ALL")
request.set("periodicityAdjustment", "ACTUAL")
request.set("periodicitySelection", "DAILY")
request.set("startDate", "20140101")
request.set("endDate", "20141231")
request.set("maxDataPoints", 1)
session.sendRequest(request)            # send the request and read the response events
But I get the following answer:
HistoricalDataResponse = {
    securityData = {
        security = "AAPL US Equity"
        eidData[] = {
        }
        sequenceNumber = 0
        fieldExceptions[] = {
            fieldExceptions = {
                fieldId = "DVD_HIST_ALL"
                errorInfo = {
                    source = "951::bbdbh5"
                    code = 1
                    category = "BAD_FLD"
                    message = "Not valid historical field"
                    subcategory = "NOT_APPLICABLE_TO_HIST_DATA"
                }
            }
        }
        fieldData[] = {
        }
    }
}
Looking at the documentation (blpapi-developers-guide) I see multiple request possibilities (Reference Data Service, Market Data Service, API Field Information Service), but none of them explains how to get the dividend/split data. I don't know which service and which request to use.
On the terminal, these dividends and splits are registered under the tag CACT if you use a screening, and under DVD if you look up the dividend/split of a currently loaded stock (in the worst case I can loop over the companies I want in my code).
If someone knows how to do it, you will illuminate my day!
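As a hedged sketch only: the BAD_FLD / NOT_APPLICABLE_TO_HIST_DATA error above suggests DVD_HIST_ALL is a reference field rather than a historical one, so one thing to try, reusing the session and refDataService from the snippet above, is a ReferenceDataRequest (field availability should still be confirmed with FLDS on the terminal):
request = refDataService.createRequest("ReferenceDataRequest")
request.getElement("securities").appendValue("AAPL US Equity")
request.getElement("fields").appendValue("DVD_HIST_ALL")
session.sendRequest(request)

# drain the event queue until the final response arrives
while True:
    event = session.nextEvent()
    for msg in event:
        print(msg)
    if event.eventType() == blpapi.Event.RESPONSE:
        break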

Facebook Marketing API: retrieving metadata for many Ads via Python

I hope someone has stumbled over the same issue and can guide me towards a simple solution for my problem.
I want to regularly retrieve some data about my ads on Facebook. Basically, I just want to store some metadata in one of my databases for further reporting purposes, so I want to get the ad ID, ad name, and corresponding ad-set ID for all of my ads.
I have written this small function in Python:
import datetime

# imports assumed by this snippet (facebookads SDK)
from facebookads.adobjects.ad import Ad

def get_ad_stats(ad_account):
    """ Pull basic stats for all ads
    Args: 'ad_account' is the Facebook AdAccount object
    Returns: 'fb_ads', a list with basic values
    """
    fb_ads = []
    fb_fields = [
        Ad.Field.id,
        Ad.Field.name,
        Ad.Field.adset_id,
        Ad.Field.created_time,
    ]
    fb_params = {
        'date_preset': 'last_14_days',
    }
    for ad in ad_account.get_ads(fields=fb_fields, params=fb_params):
        fb_ads.append({
            'id': ad[Ad.Field.id],
            'name': ad[Ad.Field.name],
            'adset_id': ad[Ad.Field.adset_id],
            'created_time': datetime.datetime.strptime(ad[Ad.Field.created_time], "%Y-%m-%dT%H:%M:%S+0000"),
        })
    return fb_ads
Similar functions for Campaign and AdSet data work fine. But for Ads I always hit a user request limit: "(#17) User request limit reached".
I have an API access level of "BASIC" and we're talking about 12,000 ads here.
And, unfortunately, async calls seem to work only for the Insights edge.
Is there a way to avoid the user request limit, e.g. by limiting the API request to only those ads which have been changed or newly created after a specific date?
OK, sacrificing the 'created_time' field, I realized I could use the Insights edge for that.
Here is revised code for the same function, now using async calls and a delay between polls:
import time

def get_ad_stats(ad_account):
    """ Pull basic stats for all ads
    Args: 'ad_account' is the Facebook AdAccount object
    Returns: 'fb_ads', a list with basic values
    """
    fb_ads = []
    fb_params = {
        'date_preset': 'last_14_days',
        'level': 'ad',
    }
    fb_fields = [
        'ad_id',
        'ad_name',
        'adset_id',
    ]
    # run the report asynchronously and poll until it has completed
    async_job = ad_account.get_insights(fields=fb_fields, params=fb_params, async=True)
    async_job.remote_read()
    while async_job['async_percent_completion'] < 100:
        time.sleep(1)
        async_job.remote_read()
    for ad in async_job.get_result():
        fb_ads.append({
            'id': ad['ad_id'],
            'name': ad['ad_name'],
            'adset_id': ad['adset_id'],
        })
    return fb_ads
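For completeness, a hypothetical call site could look like this (the facebookads import paths and the init call are assumptions; the token and account ID are placeholders):
from facebookads.api import FacebookAdsApi
from facebookads.adobjects.adaccount import AdAccount

# initialize the SDK with your own credentials, then pull the stats
FacebookAdsApi.init(access_token='<ACCESS_TOKEN>')
ads = get_ad_stats(AdAccount('act_<AD_ACCOUNT_ID>'))
print(len(ads), 'ads retrieved')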
