How do I solve an error related to APIs - Python

I would like to obtain info from an API, but I get this error: {"message":"You are not subscribed to this API."} How can I solve this? My code is below.
import requests

url = "https://imdb8.p.rapidapi.com/title/get-top-stripe"
querystring = {"currentCountry": "US", "purchaseCountry": "US", "tconst": ""}
headers = {
    'x-rapidapi-host': "imdb8.p.rapidapi.com",
    'x-rapidapi-key': ""
}
response = requests.request("GET", url, headers=headers, params=querystring)
print(response.text)

You need to subscribe to the particular API on RapidAPI Hub. You can do that by visiting the API's pricing page. I believe you're using the IMDb API; it has a free Basic plan.
You can subscribe to any API by following the steps below:
Search for the API -> Go to the Pricing page -> Choose a plan (according to your needs) -> Click the Subscribe button.
Once you have subscribed, you will be able to test and connect to the API.
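As a quick check once you are subscribed, here is a minimal sketch of the same request; YOUR_RAPIDAPI_KEY is a placeholder for the key shown on your RapidAPI dashboard, and "tt0944947" is just an example tconst value.

import requests

url = "https://imdb8.p.rapidapi.com/title/get-top-stripe"
querystring = {"currentCountry": "US", "purchaseCountry": "US", "tconst": "tt0944947"}  # example title ID
headers = {
    'x-rapidapi-host': "imdb8.p.rapidapi.com",
    'x-rapidapi-key': "YOUR_RAPIDAPI_KEY"  # placeholder: your own key from the dashboard
}
response = requests.get(url, headers=headers, params=querystring)
response.raise_for_status()  # with an active subscription this should no longer fail with the "not subscribed" message
print(response.json())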

Related

Azure functions python SDK force response to make POST instead of GET

I am trying to write an Azure Function to manage SSO between two services. The first one hosts the link to the HTTP-triggered Azure Function, which should respond with the formatted SAML Response; that response then gets sent to the consumption URL as a POST. But I can only make GET requests with the azure.functions.HttpResponse method needed to return output from Azure Functions (unless I'm wrong).
Alternatively, I've tried setting the cookie I get as a response from sending the SAML Response with the Python requests library, but the consumption URL doesn't seem to care that the cookie is there and just brings me back to the login page.
The SP in this situation is Absorb LMS, and I can confirm that the SAML Response is formatted correctly because submitting it from an HTML form works fine (I've also tried returning that form as the body of the azure.functions.HttpResponse, but I just get HTTP errors which I can't make heads or tails of).
import requests
import azure.functions as func

def main(req: func.HttpRequest) -> func.HttpResponse:
    headers = {
        'Content-Type': 'application/x-www-form-urlencoded'
    }
    # placeholders below stand in for the b64-encoded SAML response/assertion and the ACS URL
    body = {"SAMLResponse": saml_response_b64}
    response = requests.post(url=acs_url, headers=headers, data=body)
    headers = response.headers
    headers['Location'] = acs_url
    return func.HttpResponse(headers=headers, status_code=302)
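For what it's worth, a 302 Location redirect is always followed by the browser as a GET, so one common workaround for this kind of SAML POST binding is the HTML-form approach mentioned above: have the function return an auto-submitting form so the browser itself POSTs the SAMLResponse to the ACS URL. A minimal sketch, with placeholder values:

import html
import azure.functions as func

def main(req: func.HttpRequest) -> func.HttpResponse:
    acs_url = "https://example.com/acs"  # placeholder consumption URL
    saml_b64 = "..."  # placeholder: the b64-encoded SAML response and assertion
    page = (
        '<html><body onload="document.forms[0].submit()">'
        f'<form method="POST" action="{html.escape(acs_url)}">'
        f'<input type="hidden" name="SAMLResponse" value="{html.escape(saml_b64)}"/>'
        '</form></body></html>'
    )
    return func.HttpResponse(page, mimetype="text/html", status_code=200)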

Query Firebase dynamic link information

When trying to query Google Firebase Dynamic Links stats, I am getting an empty object.
I've got 5 dynamic links in the Firebase console, which were created via the console. Using the following code I am able to get a token. I used GCP -> IAM -> Service Accounts to create a new account and pull down the JSON key file. I've ensured the project_id matches the one in Firebase.
import urllib.parse
import requests
from google.oauth2 import service_account
from google.auth.transport.requests import Request

link = "my_dynamic_link_short_name"
scopes = ["https://www.googleapis.com/auth/firebase"]
credentials = service_account.Credentials.from_service_account_file("key.json", scopes=scopes)

url_base = "https://firebasedynamiclinks.googleapis.com/v1/SHORT_DYNAMIC_LINK/linkStats?durationDays=1"
encoded_link = urllib.parse.quote(link, safe='')
url = url_base.replace('SHORT_DYNAMIC_LINK', encoded_link)

request = Request()
credentials.refresh(request)
access_token = credentials.token

HEADER = {"Authorization": "Bearer " + access_token}
response = requests.get(url, headers=HEADER)
print(response.json())
Both the token refresh and the GET request above return a 200, but no data comes back.
The GCP service account I am using has the following roles:
Firebase Admin
Firebase Admin SDK Administrator Service Agent
Service Account Token Creator
I've given it full Owner to test, and it didn't resolve the issue.
The FDL Analytics REST API returns an empty object {} if the short link doesn't have analytics data in the specified date range. If you have existing short links in the FDL dashboard that have click data, you can use one of them to validate that the response from the API matches the data displayed on the dashboard.
If you're still having issues, I suggest filing a ticket at https://firebase.google.com/support
Edit: To add, Firebase Dynamic Links click data is aggregated daily and should be updated the next day. For newly created links, give it a day or two for the click data to appear. This applies to both the click data from the API and the data displayed on the dashboard.
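In code, telling the "no data yet" case apart from a real failure might look like the sketch below, which assumes the populated response carries the linkEventStats field described for the linkStats endpoint:

# Continuing from the question's code: an empty {} just means no click
# data exists for that link and date range yet.
stats = response.json()
events = stats.get("linkEventStats", [])
if not events:
    print("No analytics data yet - new links can take a day or two to report clicks.")
else:
    for event in events:
        print(event)  # each entry describes a platform/event pair and its count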

Python POST requests - how to extract html of request destination

I'm scraping mortgage data from the official mortgage registry. The problem is that I can't extract the HTML of a particular document. Everything happens via POST: I have all of the data required to build the POST request, but when I print request.url it still shows me the welcome-screen page, when it should retrieve the HTML of the particular document. All the data, like the mortgage number and the current page, is listed in dev tools > Network > Form Data, so I bet it must be possible. I'm quite new to web programming in Python, so I will appreciate any help.
My code:
import requests

data = {
    'kodWydzialu': 'PT1R',
    'nrKw': '00037314',
    'cyfraK': '9',
}
r = requests.post('https://przegladarka-ekw.ms.gov.pl/eukw_prz/KsiegiWieczyste/wyszukiwanieKW', data=data)
print(r.url)
print(r.content)
You are getting the welcome screen because you aren't sending all the requests required to reach the next page.
Go to Chrome > Network tab, and you will see that when you click the submit/search button, a bunch of other GET requests are sent to different URLs after that first POST request.
You need to replicate that in your script. Depending on the website it can be tough to get the right response, so you may want to consider using Selenium.
That said, it's not impossible to do this with requests:
session = requests.Session()
You need to send the POST request, and all the GET requests that follow, in the same session:
data = {
    'kodWydzialu': 'PT1R',
    'nrKw': '00037314',
    'cyfraK': '9',
}
# POST the form data first (note data=, matching the browser's Form Data payload)
session.post(URL, headers=headers, data=data)
# Then replay the follow-up GET requests seen in the Network tab
session.get(URL_1, headers=headers)
session.get(URL_2, headers=headers)
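If replaying the exact request chain proves brittle, a Selenium sketch along the lines suggested above might look like this; the By.NAME locators are assumptions based on the Form Data keys, so check the real element names in dev tools first:

from selenium import webdriver
from selenium.webdriver.common.by import By

driver = webdriver.Chrome()
driver.get("https://przegladarka-ekw.ms.gov.pl/eukw_prz/KsiegiWieczyste/wyszukiwanieKW")
driver.find_element(By.NAME, "kodWydzialu").send_keys("PT1R")  # field names assumed from Form Data
driver.find_element(By.NAME, "nrKw").send_keys("00037314")
driver.find_element(By.NAME, "cyfraK").send_keys("9")
driver.find_element(By.NAME, "wyszukaj").click()  # submit button name is a guess
html = driver.page_source  # the document HTML that the bare POST couldn't reach
driver.quit()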

Python code to authenticate to website, navigate through links and download files

I'm working on something which could be interesting to you as well.
I'm developing a feature in Python which should be able to authenticate (using userid/password and/or other preferred authentication methods) against a specified website, navigate through the website, and download a file under a specific option.
Later I have to put the developed code on a schedule and automate it.
Did anyone come across such a scenario and develop the code in Python?
Please suggest any suitable Python libraries.
What I have achieved right now:
I can download a file with a specific URL.
I know how to authenticate and download the file.
I'm able to pull the links from a specific website.
This is something we could achieve using Selenium, but I want to write it in plain Python with requests.
After 5 days of research, I found what I wanted. Your urlLogin and urlAuth could be the same; it totally depends on what action is taken by the Login button or form action. I used Chrome's inspect option to find out the actual GET or POST request used on the portal.
Here is the answer to my own question:
import requests

urlLogin = 'https://example.com/jsp/login.jsp'
urlAuth = 'https://example.com/CheckLoginServlet'
urlBd = 'https://example.com/jsp/batchdownload.jsp'
payload = {
    "username": "username",
    "password": "password"
}

# Session will be closed at the end of the with block
with requests.Session() as s:
    # 1. Hit the login page first so the session picks up its cookies
    s.get(urlLogin)
    print(f"Session cookies {s.cookies.get_dict()}")
    # 2. Authenticate; the session re-sends its cookies automatically
    r1 = s.post(urlAuth, data=payload)
    print(f'Auth status:::: {r1.status_code}')  # 200
    r2 = s.post(urlBd, data=payload)
    print(f'MainFrame text:::: {r2.status_code}')  # 200
    print(f'MainFrame text:::: {r2.text}')  # page source
    # 3. Again, cookies go through the session to access the batch download page
    #    (config['access-url'] comes from the author's own config file)
    r2 = s.post(config['access-url'])
    print(f'Batch Download status:::: {r2.status_code}')  # 200
    source_code = r2.text
    # print(f'Batch Download source:::: {source_code}')
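For the scheduling part of the question, one hedged option is the third-party schedule package (pip install schedule): wrap the session logic above in a function and run it on a timer.

import time
import schedule

def download_job():
    ...  # run the requests.Session() download flow shown above

schedule.every().day.at("02:00").do(download_job)  # time of day is just an example

while True:
    schedule.run_pending()
    time.sleep(60)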

Using Airbnb API with Facebook Login

This is my first time dealing with logging in using Facebook credentials. I want to be able to query listings on Airbnb through my account. My original Airbnb account was created through Facebook login. Here is the sample request on the airbnbapi page: http://airbnbapi.org/#login-by-facebook.
I am not sure where I can get my client_id or Facebook's access token. The page does point to https://developers.facebook.com/docs/facebook-login/access-tokens for the user access token, but, if I understand it correctly, that requires me to create an app. I am not sure what authentication flow is required for me to use the Airbnb API.
I have already looked at the Airbnb docs to search for client_id, but to no avail.
Here is what I have so far:
import requests
import json

API_URL = "https://api.airbnb.com"
LISTING_ENDPOINT = "https://api.airbnb.com/v2/search_results"

post_query = {
    "client_id": "I HAVE NO IDEA WHERE TO GET IT",
    "locale": "en-US",
    "currency": "USD",
    "assertion_type": "https://graph.facebook.com/me",
    "assertion": "HOW SHOULD I GET THIS ONE?",
    "prevent_account_creation": True
}
# I think this should be able to log me in, and then I should be able to query listings
_ = requests.post(API_URL, post_query).json()

query = {
    "client_id": "FROM ABOVE",
    "user_lat": "40.00",
    "user_long": "-54.31"
}
listings = requests.get(LISTING_ENDPOINT, params=query).json()
I came across the same problem as you and finally figured it out. The tool I used is an advanced feature of the requests library, Session(), which saves cookies. The important part of logging in with a third-party account is finding the link we need to post the cookies to. The following is my code.
import requests
from bs4 import BeautifulSoup

x = requests.Session()  # saves the cookies set when you click the "log in with facebook" button
y = requests.Session()  # saves the cookies used for parsing the Airbnb listings

account = {'email': 'your_facebook_account', 'pass': 'your_facebook_ps'}

# All the cookies up to this point are saved in "x"
log_in_with_facebook_click = x.post("https://www.airbnb.jp/oauth_connect?from=facebook_login&service=facebook")
my_first_time_cookies = x.cookies.get_dict()

# The real login link is the URL that "log_in_with_facebook_click" was redirected to;
# pass the cookies and your Facebook account information to it
real_login_link = log_in_with_facebook_click.url
real_log_in = y.post(real_login_link, account, cookies=my_first_time_cookies)

# You should now be logged into Airbnb. To test the login, check the reservation data.
# Remember that the cookies used to access the Airbnb site after logging in are saved in "y"
page = 1
test = y.get("https://www.airbnb.jp/my_reservations?page=" + str(page))
test_html = BeautifulSoup(test.text, 'lxml')
print(test_html.text)
# You should see your tenants' reservation information.
