I'm currently working with the LinkedIn Marketing API in Python, and I'm migrating to version 2.0.0 of the Rest.li protocol. I was trying to get the adCreatives via adCampaign URNs this way:
import requests

url = 'https://api.linkedin.com/v2/adCreativesV2?q=search&search=(campaigns:(values:List(urn%3li%3sponsoredCampaign%XXXXXXX, other_urns)))&fields=campaign,id,reference,status,changeAuditStamps,type'
response = requests.request(url=url,
                            headers={"X-Restli-Protocol-Version": "2.0.0",
                                     "Authorization": f"Bearer {access_token}"},
                            method="GET")
but I bumped into this error:
response.json()
>>> {'message': 'Request would return too many entities.', 'status': 400}
The first thing I tried was to reduce the number of adCampaign URNs in the List(...), but since I was still getting this error I also removed all the other parameters, which turned out to be pointless.
The strange thing is that when I make the same API call with the following URL
url = 'https://api.linkedin.com/v2/adCampaignGroupsV2?q=search&search=(accounts:(values:List(urn%3li%3sponsoredAccount%XXXXX)))&fields=account,id,name,status,changeAuditStamps,runSchedule'
I get a correct response with status 200. The same happens with adAccounts and adCampaigns.
Does anybody know how to solve this?
Solution
I found out that the documentation states that the search field is campaign (singular); moreover, I fixed the URNs by replacing %3 with %3A (although that was not the problem, as you can see from the adCampaignGroups API call). The correct URL is now:
url = 'https://api.linkedin.com/v2/adCreativesV2?q=search&search=(campaign:(values:List(urn%3Ali%3AsponsoredCampaign%3AXXXXX)))&fields=campaign,id,reference,status,changeAuditStamps,type,variables'
It is still unclear why the working search field parameter for adCampaigns and adCampaignGroups is accounts, although the documentation states it is account.
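For reference, a small sketch of how the corrected call can be put together, percent-encoding the URN colons programmatically instead of by hand. The function names are mine, and the campaign ids and access token are placeholders:

```python
import requests
from urllib.parse import quote


def build_creatives_url(campaign_ids):
    # Percent-encode each URN so that every ":" becomes "%3A"
    urns = ",".join(
        quote("urn:li:sponsoredCampaign:%s" % cid, safe="") for cid in campaign_ids
    )
    fields = "campaign,id,reference,status,changeAuditStamps,type,variables"
    return ("https://api.linkedin.com/v2/adCreativesV2"
            "?q=search&search=(campaign:(values:List(%s)))&fields=%s"
            % (urns, fields))


def fetch_creatives(campaign_ids, access_token):
    # access_token is a placeholder; a valid OAuth token is required
    return requests.get(build_creatives_url(campaign_ids),
                        headers={"X-Restli-Protocol-Version": "2.0.0",
                                 "Authorization": "Bearer %s" % access_token})
```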
Related
I'm trying to use the Hasura API to get the contents of my database. The appropriate endpoint is v1alpha1/pg_dump.
I've tried doing the following in Python:
import requests

api_url = 'http://localhost:9695/v1alpha1/pg_dump'
header = {'Content-Type': 'application/json',
          'x-hasura-admin-secret': 'MY_SECRET',
          'X-Hasura-Role': 'admin'}
r = requests.post(url=api_url, headers=header)
If I do requests.get, I get information back (HTML, although nothing particularly useful). However, if I do requests.post (which Hasura requires: https://hasura.io/docs/1.0/graphql/core/api-reference/pgdump.html), I get a 404 error. I don't understand why: it's not an authentication error, but a page-not-found error.
Have I built my url incorrectly? Is there something I'm missing? The port is correct (and if I change it in the code, it gives me a different error telling me the port is invalid/closed). I'm not sure what else to change.
So, I have tried this in my own DigitalOcean one-click deployment environment. I have not secured it, so I am not providing any headers. It works fine as follows:
import requests
import json

r = requests.post('http://address_of_hasura/v1alpha1/pg_dump',
                  data=json.dumps({
                      'opts': ['-O', '-x', '--schema-only', '--schema', 'public'],
                      'clean_output': True
                  }))
print(r.text)
If you have set the HASURA_GRAPHQL_ENABLED_APIS environment variable and not included pgdump in it, that could be why the endpoint is disabled.
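For completeness, a hedged sketch combining the headers from the question with the body from the answer above; the URL and secret are the question's placeholders, and the function names are mine:

```python
import json
import requests

HASURA_URL = "http://localhost:9695"  # placeholder from the question
ADMIN_SECRET = "MY_SECRET"            # placeholder admin secret


def build_pg_dump_payload():
    # Same options as the working answer above
    return json.dumps({
        "opts": ["-O", "-x", "--schema-only", "--schema", "public"],
        "clean_output": True,
    })


def pg_dump():
    headers = {
        "Content-Type": "application/json",
        "x-hasura-admin-secret": ADMIN_SECRET,
    }
    r = requests.post(HASURA_URL + "/v1alpha1/pg_dump",
                      headers=headers, data=build_pg_dump_payload())
    # The dump comes back as plain SQL text, not JSON
    return r.status_code, r.text
```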
I am trying to get list of all posts/medias posted by user on instagram. (I bassicaly don't care how I will do that, so if you have other solution (not fixing that described in this question) I'd be grateful) I used this python code:
import requests

access_token = "XXXXXYU3EwVG10b1FaaWVMZAERXRkdIei15cjdoXXXXXGgxUnR4V2hyazh6ZAGVIdFNWcHg2aklSdWZAUUnNtLW03Vzd5aENDY3BlWl92dC1DRVXXXXXEyWHB3RUY0N0pMSkRHbTZAjZAVVWSVNYWmt2QmpWTXXXXX"
url = 'https://graph.facebook.com/v7.0/17841405822304914/media?access_token=' + str(access_token)
r = requests.get(url)
print(r)
print(r.text)
Which throws the following output:
<Response [400]>
{"error":{"message":"Invalid OAuth access token.","type":"OAuthException","code":190,"fbtrace_id":"AUp8Al-lOeDSS42uzm9mgoR"}}
I checked this link, and it shows that I actually have access to user_media using this key.
Here's how I obtained my access key - . I tried regenerating the token, but it still refuses to work. I found this question, but the answers didn't work for me; I also feel it relates to the old, non-Graph Instagram API.
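One way to see exactly why the Graph API rejects a token is its debug_token endpoint, which reports the token's validity, scopes and expiry. A minimal sketch, assuming you have the id and secret of the app that issued the token (all values and function names below are mine):

```python
import requests


def debug_token_url(input_token, app_id, app_secret):
    # An app access token has the form "APP_ID|APP_SECRET"
    return ("https://graph.facebook.com/debug_token"
            "?input_token=%s&access_token=%s|%s"
            % (input_token, app_id, app_secret))


def debug_token(input_token, app_id, app_secret):
    # The "data" object in the response explains why a token is invalid
    return requests.get(debug_token_url(input_token, app_id, app_secret)).json()
```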
I'm trying to make a GetMyeBaySelling request so I can track the prices and quantities of current listings.
Following the docs and sample here, I generated a token and tried sending a request to the production XML endpoint to see my current listings.
Current attempt:
import requests

endpoint = "https://api.ebay.com/ws/api.dll"
xml = """<?xml version="1.0" encoding="utf-8"?>
<GetMyeBaySellingRequest xmlns="urn:ebay:apis:eBLBaseComponents">
  <RequesterCredentials>
    <eBayAuthToken>AgAAAA*...full auth token here...wIAYMEFWl</eBayAuthToken>
  </RequesterCredentials>
  <Version>967</Version>
  <ActiveList>
    <Sort>TimeLeft</Sort>
    <Pagination>
      <EntriesPerPage>3</EntriesPerPage>
      <PageNumber>1</PageNumber>
    </Pagination>
  </ActiveList>
</GetMyeBaySellingRequest>"""
headers = {'Content-Type': 'application/xml'}
response = requests.post(endpoint, data=xml, headers=headers)
print(response)
print(response.content)
The response:
<?xml version="1.0" encoding="UTF-8" ?><GeteBayOfficialTimeResponse xmlns="urn:ebay:apis:eBLBaseComponents"><Timestamp>2017-04-17 13:01:25</Timestamp><Ack>Failure</Ack><Errors><ShortMessage>Unsupported API call.</ShortMessage><LongMessage>The API call "GeteBayOfficialTime" is invalid or not supported in this release.</LongMessage><ErrorCode>2</ErrorCode><SeverityCode>Error</SeverityCode><ErrorClassification>RequestError</ErrorClassification></Errors><Build>18007282</Build></GeteBayOfficialTimeResponse>
The useful part of that response:
The API call "GeteBayOfficialTime" is invalid or not supported in this release.
I'm working from a sample in their own docs here. The only link to time I could see was <Sort>TimeLeft</Sort>, which was a stretch, but even without it I get the same response.
I was faffing around with different Python libraries trying to get a GetMyeBaySelling request working without much documentation. Now, going by eBay's own docs, I'm feeling pretty dead in the water. If anyone can nudge me in the right direction, I'd appreciate it; I'm not really sure what to try next.
The API error response is a little less than helpful, but judging by the code you shared, you are making the request with required header fields missing. More details here.
The following change should point you in the right direction -
headers = {
'X-EBAY-API-COMPATIBILITY-LEVEL': '<compat_level>',
'X-EBAY-API-CALL-NAME': '<api_call_name>',
'X-EBAY-API-SITEID': '<api_siteid>',
'Content-Type': 'application/xml'
}
Suddenly I started getting the error message:
The API call "GeteBayOfficialTime" is invalid or not supported in this release.
But I was not calling GeteBayOfficialTime! I had a problem, but the error message was misleading.
To make sure you get the POST headers and content right, the build test tool is extremely helpful:
ebay developer build test tool
After hours of troubleshooting, I finally figured out my problem: I was passing the required headers in the query string, not as http request headers! For over a year, it worked OK, but then suddenly it stopped working.
Moral: the invalid "GeteBayOfficialTime" API call message indicates a problem with the HTTP headers.
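Putting the two answers together, a sketch of what the corrected request from the question might look like. The header values are assumptions: 967 matches the <Version> in the question's XML body, and 0 is the US site id; the function name is mine:

```python
import requests

ENDPOINT = "https://api.ebay.com/ws/api.dll"

HEADERS = {
    "X-EBAY-API-COMPATIBILITY-LEVEL": "967",     # should match <Version> in the body
    "X-EBAY-API-CALL-NAME": "GetMyeBaySelling",  # the Trading API call being made
    "X-EBAY-API-SITEID": "0",                    # 0 = US; adjust for your site
    "Content-Type": "application/xml",
}


def get_my_ebay_selling(xml_body):
    # xml_body is the GetMyeBaySellingRequest document from the question
    return requests.post(ENDPOINT, data=xml_body, headers=HEADERS)
```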
I'm completely new at this and a bit in over my head. I've been writing a program to import contacts into Constant Contact using their API. I've got most of it down, with seemingly only one more stumbling block: a 403 Forbidden error keeps popping up. I'm hoping it's just my formatting, and that one of you fine folks can point out where I've screwed up.
Here's my python code:
import json
import urllib2

url2 = 'https://api.constantcontact.com/v2/contacts' + '?action_by=ACTION_BY_VISITOR&api_key=foonumber'
headers = {'Authorization': 'Bearer barnumber', 'Content-Type': 'application/json'}
data2 = json.dumps({"lists": [{"id": "1313956673"}],
                    "email_addresses": [{"email_address": "test@example.com"}]})
req = urllib2.Request(url2, data2, headers)
response = urllib2.urlopen(req)
the_page = response.read()
So something's wrong here, because my response = urllib2.urlopen(req) line raises HTTP Error 403: Forbidden.
I've double checked the api key and the access token, and they both work for the GET request earlier in the program.
I have used various Constant Contact APIs (v1 & v2), and the kind of error you are having usually means you are trying to access content that is restricted or not available to you. Passing the right api_key and access token is no guarantee it will work.
So make sure any data you are passing currently exists in your account (the list id). Also try removing the action_by=ACTION_BY_VISITOR you have in your URL, just to make sure it's not causing any issue; you can always add it back later.
This is not strictly necessary, but I've found it sometimes works wonders: try adding an X-Originating-Ip header to your request.
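If switching libraries is an option, a requests-based sketch makes the suggestions above easy to test, since requests does not raise on a 4xx status and the error body stays readable. The key, token and list id are the question's placeholders, and the function names are mine:

```python
import json
import requests

API_KEY = "foonumber"        # placeholder api key from the question
ACCESS_TOKEN = "barnumber"   # placeholder access token from the question


def build_contact_payload(list_id, email):
    return json.dumps({
        "lists": [{"id": list_id}],
        "email_addresses": [{"email_address": email}],
    })


def add_contact(list_id, email):
    # action_by is dropped for now; add it back once the call succeeds
    url = "https://api.constantcontact.com/v2/contacts?api_key=" + API_KEY
    headers = {"Authorization": "Bearer " + ACCESS_TOKEN,
               "Content-Type": "application/json"}
    r = requests.post(url, headers=headers,
                      data=build_contact_payload(list_id, email))
    # No exception on 403, so the response body can explain the failure
    return r.status_code, r.text
```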
A while ago, I made a python function which took a URL of an image and passed it to Imgur's API v2. Since I've been notified that the v2 API is going to be deprecated, I've attempted to make it using API v3.
As they say in the Imgur API documentation:
[Sending] an authorization header with your client_id along with your requests [...] also works if you'd like to upload images anonymously (without the image being tied to an account). This lets us know which application is accessing the API.
Authorization: Client-ID YOURCLIENTID
It's unclear to me (especially with the italics they put) whether they mean that the header should be {'Authorization': 'Client-ID ' + clientID}, or {'Authorization: Client-ID ': clientID}, or {'Authorization:', 'Client-ID ' + clientID}, or some other variation...
Either way, I tried and this is what I got (using Python 2.7.3):
def sideLoad(imgURL):
    img = urllib.quote_plus(imgURL)
    req = urllib2.Request('https://api.imgur.com/3/image',
                          urllib.urlencode([('image', img),
                                            ('key', clientSecret)]))
    req.add_header('Authorization', 'Client-ID ' + clientID)
    response = urllib2.urlopen(req)
    return response.geturl()
This seems to me like it does everything Imgur wants: I've got the right endpoint; passing data to urllib2.Request makes it a POST request according to the Python docs; I'm passing the image parameter as the form-encoded URL; I also tried giving my client secret as a POST parameter, since I got an error saying I need an ID (even though there is no mention of needing the client secret anywhere in the relevant documentation). I add the Authorization header, and it seems to be in the right form, so... why am I getting Error 400: Bad Request?
Side-question: I might be able to debug this myself if I could see the actual error Imgur returns, but because it returns an erroneous HTTP status, Python dies and gives me one of those nauseating stack traces. Is there any way to have Python stop whining and show me the error-message JSON that I know Imgur returns?
Well, I'll be damned. I tried taking out the encoding functions and just straight up forming the string, and I got it to work. I guess Imgur's API expects the non-form-encoded URL?
Oh... or was it because I used both quote_plus() and urlencode(), encoding the URL twice? That seems even more likely...
This is my working solution, at long last, for something that took me a day when I thought it'd take an hour at most:
def sideLoad(imgURL):
    img = urllib.quote_plus(imgURL)
    req = urllib2.Request('https://api.imgur.com/3/image', 'image=' + img)
    req.add_header('Authorization', 'Client-ID ' + clientID)
    response = urllib2.urlopen(req)
    response = json.loads(response.read())
    return str(response[u'data'][u'link'])
It's not a final version, mind you; it still lacks some testing (I'll see whether I can get rid of quote_plus(), or whether it's preferable to use urlencode() alone) as well as error handling (especially for big GIFs, the most frequent cause of failure).
I hope this helps! I searched all over Google, Imgur and Stack Overflow, and the information about anonymous usage of API v3 was confusing (and drowned in a sea of utterly horrifying OAuth2 stuff).
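As for the side-question: the HTTPError that urlopen raises is itself a file-like response, so the error JSON can be read from it instead of letting the traceback swallow it. A sketch in Python 3 terms (the helper names are mine):

```python
import json
import urllib.error
import urllib.request


def read_http_error(e):
    # HTTPError doubles as a response: its body holds the API's error JSON
    return e.code, json.loads(e.read().decode("utf-8"))


def post_json(url, data, headers=None):
    # POST and return (status, parsed JSON), even when the server answers 4xx
    req = urllib.request.Request(url, data=data, headers=headers or {})
    try:
        with urllib.request.urlopen(req) as resp:
            return resp.status, json.loads(resp.read().decode("utf-8"))
    except urllib.error.HTTPError as e:
        return read_http_error(e)
```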
In Python 3.4, using urllib, I was able to do it like this:
import urllib.request
import json

opener = urllib.request.build_opener()
opener.addheaders = [("Authorization", "Client-ID " + yourClientId)]
jsonStr = opener.open("https://api.imgur.com/3/image/" + pictureId).read().decode("utf-8")
jsonObj = json.loads(jsonStr)
# jsonObj is a python dictionary of the imgur json response.