I'm trying to use the Hasura API to get the contents of my database. The appropriate endpoint is v1alpha1/pg_dump.
I've tried doing the following in Python:
import requests

api_url = 'http://localhost:9695/v1alpha1/pg_dump'
header = {'Content-Type': 'application/json',
          'x-hasura-admin-secret': 'MY_SECRET',
          'X-Hasura-Role': 'admin'}
r = requests.post(url=api_url, headers=header)
If I do requests.get, I get a response back (HTML, though nothing particularly useful). However, if I do requests.post (which is what Hasura requires: https://hasura.io/docs/1.0/graphql/core/api-reference/pgdump.html), I get a 404 error, and I don't understand why. It's not an authentication error but a page-not-found error.
Have I built my URL incorrectly? Is there something I'm missing? The port is correct (and if I change it in the code, I get a different error telling me the port is invalid/closed). I'm not sure what else to change.
So, I have tried this in my own DigitalOcean one-click deployment environment. I have not secured it, so I am not providing any headers. It works fine as follows:
import requests
import json

r = requests.post('http://address_of_hasura/v1alpha1/pg_dump',
                  data=json.dumps({
                      'opts': ['-O', '-x', '--schema-only', '--schema', 'public'],
                      'clean_output': True
                  }))
print(r.text)
If you have set the HASURA_GRAPHQL_ENABLED_APIS env variable and not included pgdump in it, that would disable the endpoint.
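One more detail worth checking: 9695 is typically the port of the Hasura console, while the graphql-engine itself serves its API on 8080 by default (an assumption to verify against your setup). A minimal standard-library sketch of the call, with a placeholder secret, also shows that supplying a body is what makes it a POST:

```python
import json
from urllib import request

# Sketch only: port 8080 is the graphql-engine's default API port
# (9695 is usually the console's), and the secret is a placeholder.
body = json.dumps({'opts': ['--schema-only'], 'clean_output': True}).encode()
req = request.Request('http://localhost:8080/v1alpha1/pg_dump', data=body,
                      headers={'Content-Type': 'application/json',
                               'x-hasura-admin-secret': 'MY_SECRET'})
print(req.get_method())  # 'POST', because a body was supplied
```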
Related
I'm currently working with the LinkedIn Marketing API in Python and I'm migrating to version 2.0.0.
I was trying to get the adCreatives via adCampaign URNs this way:
import requests
url = 'https://api.linkedin.com/v2/adCreativesV2?q=search&search=(campaigns:(values:List(urn%3li%3sponsoredCampaign%XXXXXXX, other_urns)))&fields=campaign,id,reference,status,changeAuditStamps,type'
response = requests.request(url=url, headers={"X-Restli-Protocol-Version": "2.0.0",
                                              "Authorization": f"Bearer {access_token}"}, method="GET")
but I bumped into this error:
response.json()
>>> {'message': 'Request would return too many entities.', 'status': 400}
The first thing I tried was reducing the number of adCampaign URNs in the List(...), but since I was still getting the error, I also removed all the parameters; that turned out to be pointless.
The strange fact is that when I do the same API call with the following url
url = 'https://api.linkedin.com/v2/adCampaignGroupsV2?q=search&search=(accounts:(values:List(urn%3li%3sponsoredAccount%XXXXX)))&fields=account,id,name,status,changeAuditStamps,runSchedule'
I get the correct response with status 200. The same happens with adAccounts and adCampaigns.
Does anybody know how to solve this?
Solution
I found out that the documentation states that the search field is campaign (singular); moreover, I fixed the URNs by replacing %3 with %3A (although that was not the problem, as you can see from the adCampaignGroups API call). The correct URL is now:
url = 'https://api.linkedin.com/v2/adCreativesV2?q=search&search=(campaign:(values:List(urn%3Ali%3AsponsoredCampaign%3AXXXXX)))&fields=campaign,id,reference,status,changeAuditStamps,type,variables'
It is still unclear why the working search field parameter for adCampaigns and adCampaignGroups is accounts, although the documentation states it is account.
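For what it's worth, the %3 vs %3A confusion can be avoided entirely by letting the standard library do the escaping; a small sketch with a made-up campaign id:

```python
from urllib.parse import quote

# ':' percent-encodes to %3A; a bare %3 is not a valid escape sequence.
# The campaign id below is made up.
urn = quote('urn:li:sponsoredCampaign:123456789', safe='')
print(urn)  # urn%3Ali%3AsponsoredCampaign%3A123456789
```

The encoded URN can then be interpolated into the Rest.li query string instead of hand-typing the escapes.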
I am attempting to use a facial recognition API and am still new to the requests package. The code I have written is posted below.
import requests

baseURL = "https://api-live.wiseai.tech"
appKey = "--------"
appSecret = "----------"

def createFaceDB(appKey, appSecret, libraryName, thresholds):
    headers = {'Accept': 'application/json', 'Content-Type': 'application/json'}
    body = {
        "appKey": appKey,
        "appSecret": appSecret,
        "libraryName": libraryName,
        "thresholds": thresholds
    }
    r = requests.post("http://api-live.wiseai.tech", data=body, headers=headers)
    return r.text

print(createFaceDB(appKey, appSecret, "test", 1))
The documentation for the API states that a request will either succeed or fail, and that I am supposed to get a success message or an error message respectively. The error messages vary, such as ERROR_KEY_ISNOT_LEGAL, indicating something is wrong with the API key, or BAD_REQUEST, indicating missing parameters.
Unfortunately, when I run the code I get back a bunch of gibberish on the command prompt, not a success or failure response. Furthermore, if I deliberately put in a wrong API key, expecting an error message, I get the same output in the console.
Both API and appSecret keys are correct and available. Unfortunately, at this moment I am unable to share them.
I've added more information to the question, along with images, linked below.
https://imgur.com/a/wy11JUf
Edit 1: One of the other things I've tried is setting json=body. Also, the output displays grecaptcha at the end (as shown in the image). I just wanted to point that out; I'm not sure exactly what it's supposed to mean.
Edit 2: It seems that even though the body consists of 4 values and the function expects 4 parameters, if I remove appKey and appSecret I get the same results on the console. Perhaps there is another call I could use instead of requests.post().
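For illustration (not a confirmed fix for this particular API), the difference between requests' data= and json= arguments is the wire format: data= with a dict sends form-encoding, which contradicts a Content-Type of application/json. The two encodings, sketched with only the standard library:

```python
import json
from urllib.parse import urlencode

body = {"appKey": "KEY", "appSecret": "SECRET",
        "libraryName": "test", "thresholds": 1}

# What requests would send for data=body (form-encoded)...
print(urlencode(body))
# ...versus json=body, which sends a JSON document:
print(json.dumps(body))
```

If the endpoint expects JSON (an assumption based on the headers above), json=body or data=json.dumps(body) is the shape to try.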
I've been trying to solve this error, but I can't find what's wrong.
I am using Microsoft Cognitive Services Face API with python. Here is my code:
import requests
import json
body = {"URL": "http://www.scientificamerican.com/sciam/cache/file/35391452-5457-431A-A75B859471FAB0B3.jsdfpg" }
headers = {
    "Content-Type": "application/json",
    "Ocp-Apim-Subscription-Key": "xxx"
}
try:
    r = requests.post('https://api.projectoxford.ai/face/v1.0/detect?returnFaceId=true&returnFaceLandmarks=false&returnFaceAttributes=age,gender', json.dumps(body), headers)
    print(r.content)
except Exception as e:
    print(format(e))
When I run the script I get:
"code":"Unspecified","message":"Access denied due to invalid subscription key. Make sure you are subscribed to an API you are trying to call and provide the right key."
The thing is that when I use the exact same key in the console, everything works fine, so I am pretty sure it is not the key.
The error must be in my code, but I can't find it.
Any tip in the right direction will be appreciated.
Thanks!
The error is in the way you construct the requests.post call. Parameters to this function are positional, as mentioned in this other post, so the headers are not passed as headers and the key is not recognized. If you specify each parameter by name, you will avoid this error. That is:
r=requests.post('https://api.projectoxford.ai/face/v1.0/detect?returnFaceId=true&returnFaceLandmarks=false&returnFaceAttributes=age,gender',params=None, data = json.dumps(body), headers = headers)
Also, the URL to your image does not point to a valid JPEG file (the extension is garbled, probably a typo).
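To see why the positional call misfires, here is a stub with the same parameter shape as requests.post (the shape is an assumption about the installed version; newer releases use post(url, data=None, json=None, **kwargs)). A third positional argument lands in json, so the headers dict never reaches the request:

```python
def post(url, data=None, json=None, **kwargs):
    # Stub mimicking the parameter shape of requests.post, for illustration only.
    return {"url": url, "data": data, "json": json,
            "headers": kwargs.get("headers")}

body = '{"URL": "http://example.com/face.jpg"}'
headers = {"Ocp-Apim-Subscription-Key": "xxx"}

positional = post("https://example.com/detect", body, headers)
keyword = post("https://example.com/detect", data=body, headers=headers)

print(positional["headers"])  # None: the dict was swallowed as `json`
print(keyword["headers"])     # the subscription key actually gets sent
```

Passing everything by keyword, as in the corrected call above, sidesteps the problem regardless of the library's exact positional order.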
I'm trying to create a super-simplistic Virtual In / Out Board using wx/Python. I've got the following code in place for one of my requests to the server where I'll be storing the data:
data = urllib.urlencode({'q': 'Status'})
u = urllib2.urlopen('http://myserver/inout-tracker', data)

for line in u.readlines():
    print line
Nothing special going on there. The problem I'm having is that, based on how I read the docs, this should perform a POST request because I've provided the data parameter, and that's not happening.
if (!isset($_POST['q'])) { die ('No action specified'); }
echo $_POST['q'];
And every time I run my Python app I get the 'No action specified' text printed to my console. I'm going to try to implement it using Request objects, as I've seen a few demos that include those, but I'm wondering if anyone can explain why I don't get a POST request with this code. Thanks!
-- EDITED --
This code does work and Posts to my web page properly:
data = urllib.urlencode({'q': 'Status'})
h = httplib.HTTPConnection('myserver:8080')
headers = {"Content-type": "application/x-www-form-urlencoded",
           "Accept": "text/plain"}
h.request('POST', '/inout-tracker/index.php', data, headers)
r = h.getresponse()
print r.read()
I am still unsure why the urllib2 library doesn't POST when I provide the data parameter; to me, the docs indicate that it should.
u = urllib2.urlopen('http://myserver/inout-tracker', data)
h.request('POST', '/inout-tracker/index.php', data, headers)
Using the path /inout-tracker without a trailing / doesn't fetch index.php. Instead the server will issue a 302 redirect to the version with the trailing /.
Doing a 302 will typically cause clients to convert a POST to a GET request.
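The POST-vs-GET behavior can be confirmed without a server. Sketched with Python 3's urllib (the original code is Python 2 urllib2, so treat this as an illustration): a Request only reports POST once a body is attached, and the trailing slash keeps the redirect out of the picture.

```python
from urllib import parse, request

data = parse.urlencode({'q': 'Status'}).encode()
# Trailing slash: no 302 redirect to downgrade the POST into a GET.
req = request.Request('http://myserver/inout-tracker/', data=data)
print(req.get_method())  # 'POST' once data is supplied

no_body = request.Request('http://myserver/inout-tracker/')
print(no_body.get_method())  # 'GET' without one
```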
I'm completely new at this, and a bit in over my head. I've been writing a program to import contacts into Constant Contact using their API. I've got most of it down, and seemingly only one more stumbling block: a 403 Forbidden error keeps popping up. I'm hoping it's just my formatting, and that one of you fine folks can point out where I've screwed up.
Here's my python code:
url2 = 'https://api.constantcontact.com/v2/contacts' + '?action_by=ACTION_BY_VISITOR&api_key=foonumber'
headers = { 'Authorization' : 'Bearer barnumber', 'Content-Type' : 'application/json'}
data2 = json.dumps({"lists": [{"id": "1313956673"}],"email_addresses": [{"email_address": "test@example.com"}]})
req = urllib2.Request(url2, data2, headers)
response = urllib2.urlopen(req)
the_page = response.read()
So something's wrong here, because the return I get on my response = urllib2.urlopen(req) line is an HTTP Error 403: Forbidden.
I've double checked the api key and the access token, and they both work for the GET request earlier in the program.
I have used various Constant Contact APIs (v1 & v2), and the kind of error you are having usually means you are trying to access content that is restricted or not available to you. Passing the right api_key and access token is no guarantee it will work.
So make sure any data you are passing currently exists in your account (the list id). Also try removing the action_by=ACTION_BY_VISITOR from your URL just to make sure it's not causing any issue; you can always add it back later.
This is not strictly necessary, but I've found it sometimes works wonders: try adding an X-Originating-Ip header to your request.
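A sketch of attaching that header with the same Request-object style as the question (shown with Python 3's urllib.request; the IP, token, and list id are placeholders from the question):

```python
import json
from urllib import request

headers = {'Authorization': 'Bearer barnumber',
           'Content-Type': 'application/json',
           'X-Originating-Ip': '203.0.113.7'}  # placeholder client IP
data = json.dumps({"lists": [{"id": "1313956673"}]}).encode()
req = request.Request('https://api.constantcontact.com/v2/contacts',
                      data, headers)
# urllib stores header names in Capitalized-with-dashes form:
print(req.get_header('X-originating-ip'))
print(req.get_method())  # 'POST', since a body is attached
```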