I'm trying to follow some examples from the Python Facebook Marketing API, but when I run:
i_async_job = account.get_insights(params={'level': 'adgroup'}, async=True)
r_async_job = account.get_report_stats(
    params={
        'data_columns': ['adgroup_id'],
        'date_preset': 'last_30_days'
    },
    async=True
)
I'm getting
Status: 400
Response:
{
    "error": {
        "message": "(#12) adaccount/reportstats is deprecated for versions v2.4 and higher",
        "code": 12,
        "type": "OAuthException"
    }
}
Even on Facebook's side I found this page, but it only has curl examples.
Is there a working example on how to get data from Insights edge with the Python Ads API?
Here is a full example of how to export some insights asynchronously from the new Insights endpoints:
from facebookads import test_config as config
from facebookads.objects import *
import time

account_id = <YOUR_ACCOUNT_ID>
account_id = 'act_' + str(account_id)

fields = [
    Insights.Field.impressions,
    Insights.Field.clicks,
    Insights.Field.actions,
    Insights.Field.spend,
    Insights.Field.campaign_group_name,
]

params = {
    'date_preset': Insights.Preset.last_7_days,
    'level': Insights.Level.adgroup,
    'sort_by': 'date_start',
    'sort_dir': 'desc',
}

ad_account = AdAccount(account_id)
job = ad_account.get_insights(fields=fields, params=params, async=True)

insights = None
while insights is None:
    time.sleep(1)
    job.remote_read()
    completion = job[AsyncJob.Field.async_percent_completion]
    print("Percent done: " + str(completion))
    if int(completion) == 100:
        insights = job.get_result(params={'limit': 100})

for ad_insight in insights:
    print(ad_insight)
I'm executing a Dataflow pipeline from a Google Cloud Function, and the workflow creation fails with the error shown in the screenshot.
I've created the LaunchTemplateParameters according to the official documentation, but some parameters are causing errors:
I would like to set the europe-west1-b zone and the n1-standard-4 machine type. What am I missing?
def call_dataflow(dataflow_name):
    service = build('dataflow', 'v1b3')
    gcp_project = 'PROJECT_ID'
    template_path = 'TEMPLATE_PATH'
    template_body = {
        'parameters': {},
        'environment': {
            'workerZone': 'europe-west1-b',
            'subnetwork': 'regions/europe-west1/subnetworks/europe-west1-subnet',
            'network': 'dataflow-network',
            'machineType': 'n1-standard-4',
            'numWorkers': 5,
            'tempLocation': 'TEMP_BUCKET_PATH',
            'ipConfiguration': 'WORKER_IP_PRIVATE'
        },
        'jobName': dataflow_name
    }
    print('series_fulfillment - call_dataflow ' + dataflow_name + ' - launching')
    request = service.projects().templates().launch(projectId=gcp_project, gcsPath=template_path, body=template_body)
    response = request.execute()
    return response
I'm trying to use the search_all_iam_policies() method in google-cloud-asset, as follows:
import json

from google.cloud import asset_v1

ASSET_CLIENT = asset_v1.AssetServiceClient()
response = ASSET_CLIENT.search_all_iam_policies(
    scope='projects/my_project',
    query='my.email#domain.com'
)
policies = []
for policy in response:
    policies.append(policy)
return json.dumps({
    'policies': policies
})
But I cannot find a way to get a JSON representation of the policies, nor of a single policy. In this case, 'response' is a google.cloud.asset_v1.services.asset_service.pagers.SearchAllIamPoliciesPager and each 'policy' is a google.cloud.asset_v1.types.assets.IamPolicySearchResult. I can print them to the console, but I need them in JSON format to send to another system.
Just to expand on Michael's answer: when using that approach, you "lose" some information, namely the resource, project, asset_type and organization.
import json

from google.cloud import asset_v1
from google.protobuf.json_format import MessageToJson

ASSET_CLIENT = asset_v1.AssetServiceClient()
response = ASSET_CLIENT.search_all_iam_policies(
    scope='projects/my_project',
    query='my.email#domain.com'  # This field is optional
)
policies = []
for policy in response:
    policies.append(
        {
            "resource": f"{policy.resource}",
            "project": f"{policy.project}",
            "bindings": json.loads(MessageToJson(policy.policy)).get('bindings'),
            "asset_type": f"{policy.asset_type}",
            "organization": f"{policy.organization}"
        }
    )
This will give you a list of dicts that look like the following:
{
    'resource': '//some_resource',
    'project': 'some_project',
    'bindings': [
        {
            'role': 'some_role',
            'members': [
                'projectEditor:some_project',
                'projectOwner:some_project'
            ]
        },
        {
            'role': 'some_other_role',
            'members': [
                'projectViewer:some_project'
            ]
        },
    ],
    'asset_type': 'some_asset_type',
    'organization': 'some_organization'
}
Found a way to decode the message like this:
import json

from google.cloud import asset_v1
from google.protobuf.json_format import MessageToDict

ASSET_CLIENT = asset_v1.AssetServiceClient()
response = ASSET_CLIENT.search_all_iam_policies(
    scope='projects/my_project',
    query='my.email#domain.com'
)
policies = []
for policy in response:
    policies.append(MessageToDict(policy.policy))
return json.dumps({
    'policies': policies
})
I'm trying to post events to Google Analytics. It works fine when I do it with the NodeJS code below, but fails when I use the Python code below. Both return HTTP 200, and even when posting to the debug URL (https://www.google-analytics.com/debug/collect) Google Analytics returns success details in both cases (see "valid": true in the response below). The problem is that when posting from NodeJS the result shows up on the GA website; when posting from Python it never shows up. I compared the requests from both and was not able to spot a difference.
{
    "hitParsingResult": [ {
        "valid": true,
        "parserMessage": [ ],
        "hit": "/debug/collect?v=1\u0026t=event\u0026tid=XXXXXXX\u0026cid=YYYYYY\u0026ec=Slack\u0026ea=SlashCommand\u0026el=whowasat-curl\u0026an=staging.Whereis-Everybody?\u0026aid=staging.whereis-everybody.com"
    } ],
    "parserMessage": [ {
        "messageType": "INFO",
        "description": "Found 1 hit in the request."
    } ]
}
The NodeJS code is (result does show up in Google Analytics):
'use strict';

var request = require('request');
require('request-debug')(request);

function postEventToGA(category, action, label) {
    var options = {
        v: '1',
        t: 'event',
        tid: process.env.GOOGLEANALYTICS_TID,
        cid: process.env.GOOGLEANALYTICS_CID,
        ec: category,
        ea: action,
        el: label,
        an: process.env.STAGE_INFIX + "appname",
        aid: process.env.STAGE_INFIX + "appname"
    };
    console.log("payload: " + JSON.stringify(options));
    request.post({ url: 'https://www.google-analytics.com/collect', form: options }, function (err, response, body) {
        console.log(request);
        if (err) {
            console.log("Failed to post event to Google Analytics, error: " + err);
        } else if (200 != response.statusCode) {
            console.log("Failed to post event to Google Analytics, response code: " + response.statusCode + " error: " + err);
        }
    });
}
postEventToGA("some-category", "some-action", "some-label")
And the Python code is (result does not show up in Google Analytics):
import json
import logging
import os

import requests

LOGGER = logging.getLogger()
LOGGER.setLevel(logging.INFO)

GOOGLEANALYTICS_TID = os.environ["GOOGLEANALYTICS_TID"]
GOOGLEANALYTICS_CID = os.environ["GOOGLEANALYTICS_CID"]
STAGE_INFIX = os.environ["STAGE_INFIX"]

def post_event(category, action, label):
    payload = {
        "v": "1",
        "t": "event",
        "tid": GOOGLEANALYTICS_TID,
        "cid": GOOGLEANALYTICS_CID,
        "ec": category,
        "ea": action,
        "el": label,
        "an": STAGE_INFIX + "appname",
        "aid": STAGE_INFIX + "appname",
    }
    response = requests.post("https://www.google-analytics.com/collect", payload)
    print(response.request.method)
    print(response.request.path_url)
    print(response.request.url)
    print(response.request.body)
    print(response.request.headers)
    print(response.status_code)
    print(response.text)
    if response.status_code != 200:
        LOGGER.warning(
            "Got non 200 response code (%s) while posting to GA.", response.status_code
        )

post_event("some-category", "some-action", "some-label")
Any idea why the NodeJS post shows up in Google Analytics while the Python post does not, even though both return HTTP 200?
Did some more testing and discovered that the User-Agent HTTP header was causing the problem. When I set it to an empty string in the Python code, it works. Like this:
headers = {"User-Agent": ""}
response = requests.post(
    "https://www.google-analytics.com/collect", payload, headers=headers
)
The documentation at https://developers.google.com/analytics/devguides/collection/protocol/v1/reference does state that the user agent is used, but does not clearly state what the requirements are. "python-requests/2.22.0" (the python-requests default) is apparently not accepted.
The example code for Google's YouTube Data API is a piece of junk. It's so complicated and so tied to the OAuth redirect flow that I can't use it. I'm trying to go raw with the requests pip package and not getting very far.
I've followed the instructions exactly (as far as I can tell), with the following code:
import json
import os
import sys
import urllib
import requests

payload_file = None
payload = None

print 'Loading Config'
# Get the directory path of this file. When using any relative file paths make
# sure they are relative to current_dir so that the script can be run from any CWD.
current_dir = os.path.dirname(os.path.abspath(__file__))

# Reads in the config.json file then parses it
config = json.loads(open(os.path.join(current_dir, '..', 'config.json')).read())

print 'Parsing Payload'
for i in range(len(sys.argv)):
    if sys.argv[i] == "--json" and (i + 1) < len(sys.argv):
        payload = json.loads(sys.argv[i + 1])
    elif sys.argv[i] == "-payload" and (i + 1) < len(sys.argv):
        payload_file = sys.argv[i + 1]
        with open(payload_file, 'r') as f:
            payload = json.loads(f.read())
        break

print 'Configuring youtube with token {0}'.format(payload['token'])

print 'Downloading video...'
# See how big it is
f = urllib.urlopen(payload['url'])
content_length = int(f.headers["Content-Length"])
# Download it
# urllib.urlretrieve(payload['url'], "video.mp4")

metadata = {
    'snippet': {
        'title': payload['title'],
        "categoryId": 22
    },
    'status': {
        "privacyStatus": "public",
        "embeddable": True,
        "license": "youtube"
    }
}
if 'tags' in payload:
    metadata['snippet']['tags'] = payload['tags']
if 'description' in payload:
    metadata['snippet']['description'] = payload['description']

headers = {
    'Authorization': 'Bearer {0}'.format(payload['token']),
    'Content-Type': 'application/json; charset=UTF-8',
    'Content-Length': json.dumps(metadata).__len__(),
    'X-Upload-Content-Length': content_length,
    'X-Upload-Content-Type': 'video/*',
}

print 'Attempting to upload video'
print headers
# upload video file
r = requests.post('https://www.googleapis.com/upload/youtube/v3/videos?uploadType=resumable&part=snippet,status', data=metadata, headers=headers)
print "RESPONSE!"
print r.text
# files = {
#     'file': video_file,
# }
# r = requests.post('https://www.googleapis.com/upload/youtube/v3/videos', data={ "video" : video }, headers=headers)
Obviously it's not finished, but it's dying on the metadata upload request with the following output:
Loading Config
Parsing Payload
Configuring youtube with token <access-token>
Downloading video...
Attempting to upload video
{'X-Upload-Content-Length': 51998563, 'Content-Length': 578, 'Content-Type': 'application/json; charset=UTF-8', 'X-Upload-Content-Type': 'video/*', 'Authorization': 'Bearer <access-token>'}
RESPONSE!
{
    "error": {
        "errors": [
            {
                "domain": "global",
                "reason": "parseError",
                "message": "Parse Error"
            }
        ],
        "code": 400,
        "message": "Parse Error"
    }
}
This error is not even listed in their "Errors" docs.
What is wrong with my code?
Here is an example in Python that works. It assumes you've already done the OAuth part, though.
import requests
from os import fstat
import json

fi = open('myvideo.mp4', 'rb')

base_headers = {
    'Authorization': '%s %s' % (auth_data['token_type'],
                                auth_data['access_token']),
    'content-type': 'application/json'
}

initial_headers = base_headers.copy()
initial_headers.update({
    'x-upload-content-length': str(fstat(fi.fileno()).st_size),
    'x-upload-content-type': 'video/mp4'
})

initial_resp = requests.post(
    'https://www.googleapis.com/upload/youtube/v3/videos?uploadType=resumable&part=snippet,status,contentDetails',
    headers=initial_headers,
    data=json.dumps({
        'snippet': {
            'title': 'my title',
        },
        'status': {
            'privacyStatus': 'unlisted',
            'embeddable': True
        }
    })
)

upload_url = initial_resp.headers['location']
resp = requests.put(
    upload_url,
    headers=base_headers,
    data=fi
)
fi.close()
The above is great; just adding that you can also get the YouTube id from the response (for future use):

cont = json.loads(resp.content)
youtube_id = cont['id']
I'm trying to do full-text search on a MongoDB database with the Elasticsearch engine, but I ran into a problem: no matter what search term I provide (or whether I use query1 or query2), the engine always returns the same results. I think the problem is in the way I make the requests, but I don't know how to solve it.
Here is the code:
def search(search_term):
    query1 = {
        "fuzzy": {
            "art_text": {
                "value": search_term,
                "boost": 1.0,
                "min_similarity": 0.5,
                "prefix_length": 0
            }
        },
        "filter": {
            "range": {
                "published": {
                    "from": "20130409T000000",
                    "to": "20130410T235959"
                }
            }
        }
    }
    query2 = {
        "match_phrase": { "art_text": search_term }
    }
    es_query = json.dumps(query1)
    uri = 'http://localhost:9200/newsidx/_search'
    r = requests.get(uri, params=es_query)
    results = json.loads(r.text)
    data = [res['_source']['api_id'] for res in results['hits']['hits']]
    print "results: %d" % len(data)
    pprint(data)
The params parameter is not for the request body. If you're trying to send data to the server, you should use the data parameter instead. If you're trying to send query parameters, then you shouldn't JSON-encode them; just pass them to params as a dict.
I suspect your first request should be the following:
r = requests.get(uri, data=es_query)
And before someone downvotes me: yes, the HTTP/1.1 spec allows data to be sent with GET requests, and yes, requests does support it.
search = {'query': {'match': {'test_id': 13}}, 'sort': {'date_utc': {'order': 'desc'}}}
data = requests.get('http://localhost:9200/newsidx/test/_search?&pretty', params=search)
print data.json()
http://docs.python-requests.org/en/latest/user/quickstart/