I'm trying to check multiple URLs with the Google Safe Browsing API, but it returns an empty response every time. I have been googling for quite a few hours with no results, and I don't need some overkill library for a simple POST request.
Edit: Using Python 3.5.2
import requests
import json
api_key = '123456'
url = "https://safebrowsing.googleapis.com/v4/threatMatches:find"
payload = {'client': {'clientId': "mycompany", 'clientVersion': "0.1"},
           'threatInfo': {'threatTypes': ["SOCIAL_ENGINEERING", "MALWARE"],
                          'platformTypes': ["ANY_PLATFORM"],
                          'threatEntryTypes': ["URL"],
                          'threatEntries': [{'url': "http://www.thetesturl.com"}]}}
params = {'key': api_key}
r = requests.post(url, params=params, json=payload)
# Print response
print(r)
print(r.json())
This is my code; it returns HTTP 200 OK, but the response body is empty.
What am I doing wrong?
I have the feeling that the API is not working properly. It returns 200 with an empty result even for URLs that are marked as dangerous. For example, I checked a URL using Google's form and got the result `Some pages on this site are unsafe`. But using the API, it returns 200 with an empty body... I believe it returns results only for specific pages: if only some pages are infected/dangerous, you won't get any data for the main domain. Not very useful if you ask me, but hey... it's free.
It would be nice if someone from Google could confirm this and add it to the documentation.
A real malware test URL would also have been much appreciated, so we can test with real data.
According to the Safe Browsing API documentation, receiving an empty object means no match was found:
Note: If there are no matches (that is, if none of the URLs specified
in the request are found on any of the lists specified in a request),
the HTTP POST response simply returns an empty object in the response
body.
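To act on that in code, you can treat an empty JSON object as "no matches". A minimal sketch along the lines of the request above (the key, clientId, and test URL are placeholders you would replace with your own values):

```python
import requests

API_KEY = "123456"  # placeholder, use your real API key
URL = "https://safebrowsing.googleapis.com/v4/threatMatches:find"

def build_payload(urls):
    # Same request body as above, but accepting a list of URLs to check.
    return {
        "client": {"clientId": "mycompany", "clientVersion": "0.1"},
        "threatInfo": {
            "threatTypes": ["SOCIAL_ENGINEERING", "MALWARE"],
            "platformTypes": ["ANY_PLATFORM"],
            "threatEntryTypes": ["URL"],
            "threatEntries": [{"url": u} for u in urls],
        },
    }

def has_matches(response_json):
    # An empty object ({}) means none of the URLs matched any list.
    return bool(response_json.get("matches"))

if __name__ == "__main__":
    try:
        r = requests.post(URL, params={"key": API_KEY},
                          json=build_payload(["http://www.thetesturl.com"]))
        print("flagged" if has_matches(r.json()) else "no matches")
    except requests.RequestException as exc:
        print("request failed:", exc)
```

So a 200 with `{}` is the API's way of saying the URLs are simply not on the requested lists.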
Related
I'm trying to build an app that alerts when air quality rises above a certain level. I'm trying to get some json data from the api at https://api-docs.iqair.com, and they kindly provide simple copy and paste code. However, when I run this (with my API key), I get this error message:
requests.exceptions.MissingSchema: Invalid URL '{{urlExternalAPI}}v2/city?city=Los Angeles&state=California&country=USA&key={{my_key}}': No schema supplied. Perhaps you meant http://{{urlExternalAPI}}v2/city?city=Los Angeles&state=California&country=USA&key={{my_key}}?
I tried putting in the http, but then nothing happened.
Here's the code they provide:
import requests
url = "{{urlExternalAPI}}v2/city?city=Los Angeles&state=California&country=USA&key={{YOUR_API_KEY}}"
payload = {}
headers= {}
response = requests.request("GET", url, headers=headers, data = payload)
print(response.text.encode('utf8'))
First of all, you have to put in an actual URL and not use the curly-bracket placeholders. The docs don't state the base URL plainly, but after googling it I found I merely had to use the correct one, which was https://api.airvisual.com.
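Putting that together, a sketch of the corrected request could look like this (the key is a placeholder; passing the query string via `params` lets requests do the URL encoding for you):

```python
import requests

BASE_URL = "https://api.airvisual.com/v2/city"

def build_request(city, state, country, api_key):
    # Return the endpoint and query parameters; requests handles the encoding.
    params = {"city": city, "state": state, "country": country, "key": api_key}
    return BASE_URL, params

if __name__ == "__main__":
    url, params = build_request("Los Angeles", "California", "USA", "YOUR_API_KEY")
    try:
        response = requests.get(url, params=params, timeout=10)
        print(response.json())
    except requests.RequestException as exc:
        print("request failed:", exc)
```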
I have worked with a few APIs, but I'm not sure how to get started with sending requests for Star Citizen. Does anyone know how you might go about using Python to send a GET request for, say, getting some data on game items? Here is their official API documentation, but I'm not sure where to start:
https://starcitizen-api.com/gamedata.php#get-items
Could anyone post an example GET request that returns data?
From the docs, the URL seems to be /xxxxxxxx/v1/gamedata/get/3.6.1/ship?name=Avenger or some such, where I guess the xxxxxxxx is your personal key or account or whatever.
try this:
import requests
url = '/xxxxxxxx/v1/gamedata/get/3.6.1/ship?name=Avenger'
response = requests.get(url, verify=False)
contents = response.json()
just make sure the URL is complete; this should work the same for any web API really
EDIT:
From the docs it looks like the URL should look like this (since the host is listed as Host: api.starcitizen-api.com):
https://api.starcitizen-api.com/xxxxxxx/v1/gamedata/get/3.6.1/ship?name=Avenger
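So a complete request could be sketched like this (the key segment is your personal key; the version, `ship` path, and `name` query parameter just follow the documented pattern above):

```python
import requests

API_KEY = "xxxxxxxx"  # your personal key for the Star Citizen API

def build_ship_url(api_key, version="3.6.1"):
    # Assemble the documented URL pattern; the ship name goes in the query string.
    return f"https://api.starcitizen-api.com/{api_key}/v1/gamedata/get/{version}/ship"

if __name__ == "__main__":
    try:
        response = requests.get(build_ship_url(API_KEY),
                                params={"name": "Avenger"}, timeout=10)
        print(response.json())
    except requests.RequestException as exc:
        print("request failed:", exc)
```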
UPDATE:
It turned out to be an inconsistency in the responses of the Instagram graphql (unofficial) API, which requires authentication for some IDs but not for others on the same endpoint.
I am issuing GET requests against Instagram graphql endpoint. For some queries, the JSON response I get via Python requests module is inconsistent with what I get via a browser for the same query.
For example this URL returns a JSON object containing 10 users as expected:
https://www.instagram.com/graphql/query/?variables=%7B%22shortcode%22%3A+%22BYRWPzFHUfg8r_s9UMtd6BtoI01RPGmviXaskI0%22%2C+%22first%22%3A+10%7D&query_id=17864450716183058
But when I request the same URL via requests module like this:
import requests
url = 'https://www.instagram.com/graphql/query/?variables=%7B%22shortcode%22%3A+%22BYRWPzFHUfg8r_s9UMtd6BtoI01RPGmviXaskI0%22%2C+%22first%22%3A+10%7D&query_id=17864450716183058'
response = requests.get(url)
The returned value, i.e. response.text is {"data": {"shortcode_media": null}, "status": "ok"}, kinda empty response, which I suppose means something like the media ID did not match.
As a double check, this test of comparing the original URL with the URL of the final response holds true, showing that the URL is not changed by requests module in any way:
>>> response.url == url
True
This only happens for long media IDs such as BYRWPzFHUfg8r_s9UMtd6BtoI01RPGmviXaskI0. For shorter IDs, e.g. BZx5Zx9nHwS, the response returned by the requests module is the same as the one returned via the browser, as expected.
Rather than the length of the ID, I thought it might be a special character in the ID being encoded differently, such as the underscore. I tried encoding it as %5F, but that didn't work either.
Any ideas? Can it be a bug in the requests module?
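Given the update above (some IDs require authentication), one workaround sketch is to send the request with a browser-like User-Agent and the `sessionid` cookie copied from a logged-in browser session. Both values below are placeholders, not real credentials:

```python
import requests

URL = ("https://www.instagram.com/graphql/query/"
       "?variables=%7B%22shortcode%22%3A+%22BYRWPzFHUfg8r_s9UMtd6BtoI01RPGmviXaskI0"
       "%22%2C+%22first%22%3A+10%7D&query_id=17864450716183058")

session = requests.Session()
# Look like a browser, and reuse an authenticated browser session's cookie.
session.headers.update({"User-Agent": "Mozilla/5.0"})
session.cookies.set("sessionid", "PASTE_SESSIONID_HERE", domain=".instagram.com")

if __name__ == "__main__":
    try:
        print(session.get(URL, timeout=10).text)
    except requests.RequestException as exc:
        print("request failed:", exc)
```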
I am trying to get apps which are similar to an app from the Google Play Store in python, using the requests API.
This returns a happy 200:
payload = {"id":"apk_name"}
requests.get("https://play.google.com/store/apps/details", params = payload)
This returns a 404:
requests.get("http://play.google.com/store/apps/similar", params = payload)
The last request produces a valid url for both Chrome and postman, how could I create a valid request for similar apps?
To make matters more interesting, I was testing this in my terminal; it worked twice in a row, then went back to replying with a 404.
Solved it by using the answer here
https://stackoverflow.com/a/13854790/4658520
The issue was fixed by using a session, not HttpAuth.
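For reference, a minimal sketch of the session-based approach (the package id is a placeholder; preparing the request first lets you inspect the exact URL requests will send):

```python
import requests

session = requests.Session()

# Prepare the request to see the final URL before sending it.
req = requests.Request("GET", "https://play.google.com/store/apps/similar",
                       params={"id": "com.example.app"})
prepared = session.prepare_request(req)
print(prepared.url)

if __name__ == "__main__":
    try:
        response = session.send(prepared, timeout=10)
        print(response.status_code)
    except requests.RequestException as exc:
        print("request failed:", exc)
```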
I have another question about posts.
This post should be almost identical to the one referenced in the Stack Overflow question 'Using request.post to post multipart form data via python not working', but for some reason I can't get it to work. The website is http://www.camp.bicnirrh.res.in/predict/. I want to post a file that is already in FASTA format to this website and select the 'SVM' option, using requests in Python. This is based on what NorthCat gave me previously, which worked like a charm:
import requests

file = {'file': (open('Bishop/newdenovo2.txt', 'r').read())}
url = 'http://www.camp.bicnirrh.res.in/predict/hii.php'
payload = {'algo[]': 'svm'}
response = requests.post(url, files=file, data=payload)
print(response.text)
Since it's not working, I assumed the payload was the problem. I've been playing with the payload, but I can't get any of these to work.
payload = {'S1':str(data), 'filename':'', 'algo[]':'svm'} # where I tried just reading the file in, called 'data'
payload = {'svm':'svm'} # not actually in the headers, but I tried this too)
payload = {'S1': '', 'algo[]':'svm', 'B1': 'Submit'}
None of these payloads resulted in data.
Any help is appreciated. Thanks so much!
You need to set the file post variable name to "userfile", i.e.
file={'userfile':(open('Bishop/newdenovo2.txt','r').read())}
Note that the read() is unnecessary, but it doesn't prevent the file upload from succeeding. Here is some code that should work for you:
import requests
session = requests.session()
response = session.post('http://www.camp.bicnirrh.res.in/predict/hii.php',
files={'userfile': ('fasta.txt', open('fasta.txt'), 'text/plain')},
data={'algo[]':'svm'})
response.text contains the HTML results; save it to a file and view it in your browser, or parse it with something like Beautiful Soup and extract the results.
In the request I've specified a MIME type of "text/plain" for the file. This is not necessary, but it serves as documentation and might help the receiving server.
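As a sketch of the parsing step, here is a stdlib stand-in for Beautiful Soup that pulls the text nodes out of the returned HTML. The sample markup below is made up for illustration; the real results page will differ:

```python
from html.parser import HTMLParser

class TextExtractor(HTMLParser):
    # Collect non-empty text nodes from an HTML document.
    def __init__(self):
        super().__init__()
        self.chunks = []

    def handle_data(self, data):
        text = data.strip()
        if text:
            self.chunks.append(text)

# Stand-in for response.text; the real page's markup will differ.
html = "<html><body><table><tr><td>Sequence 1</td><td>AMP</td></tr></table></body></html>"
parser = TextExtractor()
parser.feed(html)
print(parser.chunks)
```

With Beautiful Soup you would do the same thing with `BeautifulSoup(response.text, 'html.parser')` and pick out the result table, but the stdlib version avoids the extra dependency.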
The content of my fasta.txt file is:
>24.6jsd2.Tut
GGTGTTGATCATGGCTCAGGACAAACGCTGGCGGCGTGCTTAATACATGCAAGTCGAACGGGCTACCTTCGGGTAGCTAGTGGCGGACGGGTGAGTAACACGTAGGTTTTCTGCCCAATAGTGGGGAATAACAGCTCGAAAGAGTTGCTAATACCGCATAAGCTCTCTTGCGTGGGCAGGAGAGGAAACCCCAGGAGCAATTCTGGGGGCTATAGGAGGAGCCTGCGGCGGATTAGCTAGATGGTGGGGTAAAGGCCTACCATGGCGACGATCCGTAGCTGGTCTGAGAGGACGGCCAGCCACACTGGGACTGAGACACGGCCCAGACTCCTACGGGAGGCAGCAGTAAGGAATATTCCACAATGGCCGAAAGCGTGATGGAGCGAAACCGCGTGCGGGAGGAAGCCTTTCGGGGTGTAAACCGCTTTTAGGGGAGATGAAACGCCACCGTAAGGTGGCTAAGACAGTACCCCCTGAATAAGCATCGGCTAACTACGTGCCAGCAGCCGCGGTAATACGTAGGATGCAAGCGTTGTCCGGATTTACTGGGCGTAAAGCGCGCGCAGGCGGCAGGTTAAGTAAGGTGTGAAATCTCCCTGCTCAACGGGGAGGGTGCACTCCAGACTGACCAGCTAGAGGACGGTAGAGGGTGGTGGAATTGCTGGTGTAGCGGTGAAATGCGTAGAGATCAGCAGGAACACCCGTGGCGAAGGCGGCCACCTGGGCCGTACCTGACGCTGAGGCGCGAAGGCTAGGGGAGCGAACGGGATTAGATACCCCGGTAGTCCTAGCAGTAAACGATGTCCACTAGGTGTGGGGGGTTGTTGACCCCTTCCGTGCCGAAGCCAACGCATTAAGTGGACCGCCTGGGGAGTACGGTCGCAAGACTAAAACTCAAAGGAATTGACGGGGACCCGCACAAGCAGCGGAGCGTGTGGTTTAATTCGATGCGACGCGAAGAACCTTACCTGGGCTTGACATGCTATCGCAACACCCTGAAAGGGGTGCCTCCTTCGGGACGGTAGCACAGATGCTGCATGGCTGTCGTCAGCTCGTGTCGTGAGATGTTGGGTTAAGTCCCGCAACGAGCGCAACCCCTGTCCTTAGTTGTATATCTAAGGAGACTGCCGGAGACAAACCGGAGGAAGGTGGGGATGACGTCAAGTCAGCATGGCTCTTACGTCCAGGGCTACACATACGCTACAATGGCCGTTACAGTGAGATGCCACACCGCGAGGTGGAGCAGATCTCCAAAGGCGGCCTCAGTTCAGATTGCACTCTGCAACCCGAGTGCATGAAGTCGGAGTTGCTAGTAACCGCGTGTCAGCATAGCGCGGTGAATATGTTCCCGGGTCTTGTACACACCGCCCGTCACGTCATGGGAGCCGGCAACACTTCGAGTCCGTGAGCTAACCCCCCCTTTCGAGGGTGTGGGAGGCAGCGGCCGAGGGTGGGGCTGGTGACTGGGACGAAGTCGTAACAAGGT