Working with Tenor's API - python

My problem is that I don't know how to work with the result of a GIF search. I started from an example, and I know how to modify some parameters, but I don't know how to get the GIFs out of the result. Code:
import requests
import json
# set the apikey and limit
apikey = "MYKEY"  # test value
lmt = 8
# load the user's anonymous ID from cookies or some other disk storage
# anon_id = <from db/cookies>
# ELSE - first time user, grab and store their anonymous ID
r = requests.get("https://api.tenor.com/v1/anonid?key=%s" % apikey)
if r.status_code == 200:
    anon_id = json.loads(r.content)["anon_id"]
    # store in db/cookies for re-use later
else:
    anon_id = ""
# our test search
search_term = "love"
# get the top 8 GIFs for the search term
r = requests.get(
    "https://api.tenor.com/v1/search?q=%s&key=%s&limit=%s&anon_id=%s" %
    (search_term, apikey, lmt, anon_id))
if r.status_code == 200:
    # load the GIFs using the urls for the smaller GIF sizes
    top_8gifs = json.loads(r.content)
    print(top_8gifs)
else:
    top_8gifs = None
I would like to download the files. I know I can do it with urllib and requests, but the problem is that I don't even know what top_8gifs is.
I hope someone can help me. Thanks for your attention!

First of all, you have to use a legitimate key instead of MYKEY. Once you have done that, you'll observe that this code prints the output of the GET request you sent. It is JSON, which behaves like a dictionary in Python, so you can traverse this dictionary to obtain the URLs. The best strategy is to simply print the JSON output and observe the structure of the dictionary carefully, then extract the URL from it. For more clarity you can use Python's pprint module; it is pretty awesome and shows you the structure of the JSON nicely. Here is a modified version of your code which pretty-prints the JSON, prints the GIF URLs, and downloads the GIF files. You can improve on it and play with it if you want.
import requests
import json
import urllib.request, urllib.parse, urllib.error
import pprint
# set the apikey and limit
apikey = "YOURKEY"  # test value
lmt = 8
# load the user's anonymous ID from cookies or some other disk storage
# anon_id = <from db/cookies>
# ELSE - first time user, grab and store their anonymous ID
r = requests.get("https://api.tenor.com/v1/anonid?key=%s" % apikey)
if r.status_code == 200:
    anon_id = json.loads(r.content)["anon_id"]
    # store in db/cookies for re-use later
else:
    anon_id = ""
# our test search
search_term = "love"
# get the top 8 GIFs for the search term
r = requests.get(
    "https://api.tenor.com/v1/search?q=%s&key=%s&limit=%s&anon_id=%s" %
    (search_term, apikey, lmt, anon_id))
if r.status_code == 200:
    # load the GIFs using the urls for the smaller GIF sizes
    pp = pprint.PrettyPrinter(indent=4)
    top_8gifs = json.loads(r.content)
    pp.pprint(top_8gifs)  # pretty-prints the json response
    for i in range(len(top_8gifs['results'])):
        url = top_8gifs['results'][i]['media'][0]['gif']['url']  # the url from the json
        print(url)
        urllib.request.urlretrieve(url, str(i) + '.gif')  # downloads the gif file
else:
    top_8gifs = None
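For reference, here is a minimal sketch of the shape of the v1 search response that the loop above relies on. I am reconstructing this from memory of the Tenor v1 API, so treat the exact field names as an assumption and verify them against your own pp.pprint output:
# assumed shape of top_8gifs (verify against the pretty-printed output):
# {
#     "results": [
#         {"media": [{"gif": {"url": "..."}, "tinygif": {"url": "..."}}]},
#         ...
#     ],
#     "next": "..."
# }
# Since the comment above mentions "smaller GIF sizes", the 'tinygif' variant
# (if present) would give you those:
# url = top_8gifs['results'][i]['media'][0]['tinygif']['url']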

Related

Script to check status code of URLs using Python

I want to write a script that accepts multiple URLs through a list or a text file, appends some string to the end of each URL, checks the HTTP status code (200, 401, and 403) of each URL, and saves the results in separate files.
Here's my code so far:
lst = {'back.sql',
'backup.sql',
'accounts.sql',
'backups.sql',
'clients.sql',
'customers.sql',
'data.sql',
'database.sql',
'database.sqlite',
'users.sql',
'db.sql',
'db.sqlite',
'db_backup.sql',
'dbase.sql',
'dbdump.sql',
'setup.sql',
'sqldump.sql',
'dump.sql',
'mysql.sql',
'sql.sql',
'temp.sql'
}
url_test = 'http://www.Holiday.com/%s/'  # This can be modified to accept multiple URLs
for i in lst:
    url = url_test % i
    print(url)  # This can be modified to save results for each http status code
If you want to check the status codes, you have to request each page one by one:
from requests import get
lst = {'back.sql',
'backup.sql',
'accounts.sql',
'backups.sql',
'clients.sql',
'customers.sql',
'data.sql',
'database.sql',
'database.sqlite',
'users.sql',
'db.sql',
'db.sqlite',
'db_backup.sql',
'dbase.sql',
'dbdump.sql',
'setup.sql',
'sqldump.sql',
'dump.sql',
'mysql.sql',
'sql.sql',
'temp.sql'
}
url_test = ['http://www.Holiday.com/%s/']  # Create list of urls
result_dict = dict()
for i in lst:
    for url_from_list in url_test:
        url = url_from_list % i
        # request and get status code from each page one by one
        result_dict[url] = get(url).status_code
result_dict will be a dictionary with each URL as key and its response code as value.
Then save it to a file:
with open('filename.txt', 'w') as file:
    for url, status_code in result_dict.items():
        line = url + " " + str(status_code) + "\n"
        file.write(line)
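The question also asks for separate files per status code (200, 401 and 403). Here is a minimal sketch building on the same result_dict; the status_<code>.txt naming scheme is my own choice for illustration, not something from the original answer:
# write one file per status code of interest, e.g. status_200.txt
for wanted in (200, 401, 403):
    with open('status_%d.txt' % wanted, 'w') as f:
        for url, status_code in result_dict.items():
            if status_code == wanted:
                f.write(url + "\n")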

How do I download videos from Pexels API?

I have this code that can pull images off of Pexels, but I don't know how to change it to video. I haven't seen anyone do this before, and any help is greatly appreciated. I tried switching all the photo tags to video, but that didn't seem to work. I've also tried adding more libraries, but that doesn't seem to work either.
import argparse
import json
import os
import time
import requests
import tqdm
from pexels_api import API

PEXELS_API_KEY = os.environ['PEXELS_KEY']
MAX_IMAGES_PER_QUERY = 100
RESULTS_PER_PAGE = 10
PAGE_LIMIT = MAX_IMAGES_PER_QUERY / RESULTS_PER_PAGE

def get_sleep(t):
    def sleep():
        time.sleep(t)
    return sleep

def main(args):
    sleep = get_sleep(args.sleep)
    api = API(PEXELS_API_KEY)
    query = args.query
    page = 1
    counter = 0
    photos_dict = {}
    # Step 1: Getting urls and meta information
    while page <= PAGE_LIMIT:
        api.search(query, page=page, results_per_page=RESULTS_PER_PAGE)
        photos = api.get_entries()
        for photo in tqdm.tqdm(photos):
            photos_dict[photo.id] = vars(photo)['_Photo__photo']
            counter += 1
        if not api.has_next_page:
            break
        page += 1
        sleep()
    print(f"Finishing at page: {page}")
    print(f"Images were processed: {counter}")
    # Step 2: Downloading
    if photos_dict:
        os.makedirs(args.path, exist_ok=True)
        # Saving dict
        with open(os.path.join(args.path, f'{query}.json'), 'w') as fout:
            json.dump(photos_dict, fout)
        for val in tqdm.tqdm(photos_dict.values()):
            url = val['src'][args.resolution]
            fname = os.path.basename(val['src']['original'])
            image_path = os.path.join(args.path, fname)
            if not os.path.isfile(image_path):  # ignore if already downloaded
                response = requests.get(url, stream=True)
                with open(image_path, 'wb') as outfile:
                    outfile.write(response.content)
            else:
                print(f"File exists: {image_path}")

if __name__ == '__main__':
    parser = argparse.ArgumentParser()
    parser.add_argument('--query', type=str, required=True)
    parser.add_argument('--path', type=str, default='./results_pexels')
    parser.add_argument('--resolution', choices=['original', 'large2x', 'large',
                                                 'medium', 'small', 'portrait',
                                                 'landscape', 'tiny'], default='original')
    parser.add_argument('--sleep', type=float, default=0.1)
    args = parser.parse_args()
    main(args)
Sorry for bumping the question. I just faced a similar situation when downloading videos from Pexels using the Python API pexelsPy. This may be helpful:
I retrieved the ID of the videos and then created the download URL, which has the following structure: "https://www.pexels.com/video/" + ID + "/download".
See the following example:
def download_video(type_of_videos):
    video_tag = random.choice(type_of_videos)
    PEXELS_API = '-'  # please add your API Key here
    api = API(PEXELS_API)
    retrieved_videos = read_already_download_files('downloaded_files.txt')
    video_found_flag = True
    num_page = 1
    while video_found_flag:
        api.search_videos(video_tag, page=num_page, results_per_page=10)
        videos = api.get_videos()
        for data in videos:
            if data.width > data.height:  # look for horizontal orientation videos
                if data.url not in retrieved_videos:
                    # write_file('downloaded_files.txt', data.url)
                    url_video = 'https://www.pexels.com/video/' + str(data.id) + '/download'  # create the url with the video id
                    r = requests.get(url_video)
                    with open(data.url.split('/')[-2] + '.mp4', 'wb') as outfile:
                        outfile.write(r.content)
                    return data.url.split('/')[-2] + '.mp4'  # download the video
        num_page += 1
The download_video function takes an array of strings with several tags, e.g. ['happy', 'sad', 'relax'], and randomly chooses one of them.
PEXELS_API should contain your API key.
read_already_download_files('downloaded_files.txt') retrieves the already-downloaded files, to check whether the currently found file has already been downloaded (a sketch of this helper follows below).
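Since read_already_download_files is not defined in the answer, here is a minimal, assumed sketch of what it could look like, plus an example call; treat both as illustrative guesses rather than the author's actual code:
def read_already_download_files(path):
    # hypothetical helper: return the set of URLs already logged,
    # or an empty set if the log file does not exist yet
    try:
        with open(path) as f:
            return set(line.strip() for line in f)
    except FileNotFoundError:
        return set()

# example call (assumes the imports the answer relies on: random, requests, pexelsPy):
# downloaded_name = download_video(['happy', 'sad', 'relax'])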
from pypexels import PyPexels
import requests
api_key = 'api id'
# instantiate PyPexels object
py_pexel = PyPexels(api_key=api_key)
search_videos_page = py_pexel.videos_search(query="love", per_page=40)
# while True:
for video in search_videos_page.entries:
    print(video.id, video.user.get('name'), video.url)
    data_url = 'https://www.pexels.com/video/' + str(video.id) + '/download'
    r = requests.get(data_url)
    print(r.headers.get('content-type'))
    with open('sample.mp4', 'wb') as outfile:
        outfile.write(r.content)
    # if not search_videos_page.has_next:
    break
    # search_videos_page = search_videos_page.get_next_page()
I just tried to do the same. When I was looking for it, I wanted a simple example; all the other fancy stuff I was sure I could add myself. So I built upon inou's answer. The example shown is very basic: it requests one page with only 5 results, using the 'Tiger' tag in the search query. I download the first video using the id provided by the response and simply write it to the source folder. The API is provided by pexelsPy and the request is executed using the standard requests package. To get access to the API, you need to create a key on the Pexels website. Once you have your own API key, you should be able to simply substitute the example key shown and run the code as a test.
import pexelsPy
import requests
PEXELS_API = '16gv62567257256iu78krtuzwqsddudrtjberzabzwzjsrtgswnr'
api = pexelsPy.API(PEXELS_API)
api.search_videos('Tiger', page=1, results_per_page=5)
videos = api.get_videos()
url_video = 'https://www.pexels.com/video/' + str(videos[0].id) + '/download'
r = requests.get(url_video)
with open('test.mp4', 'wb') as outfile:
    outfile.write(r.content)
You can download multiple videos with this code:
import pexelsPy
import requests
PEXELS_API = '-'
api = pexelsPy.API(PEXELS_API)
api.search_videos('nature', page=2, results_per_page=100, orientation='landscape')
videos = api.get_videos()
for i, video in enumerate(videos):
    url_video = 'https://www.pexels.com/video/' + str(video.id) + '/download'
    r = requests.get(url_video)
    with open(f'test_{i}.mp4', 'wb') as outfile:
        outfile.write(r.content)
This will download 100 videos, with each video being written to a separate file named test_0.mp4, test_1.mp4, ..., test_99.mp4.

Request Status Code 500 when running Python Script

This is what I am supposed to do:
List all files in the data/feedback folder
Scan all the files and make a nested dictionary with Title, Name, Date & Feedback (all the files have Title, Name, Date & Feedback, each on a different line of the file, which is why I use the rstrip function)
Post the dictionary to the given URL
Following is my code:
#!/usr/bin/env python3
import os
import os.path
import requests
import json
src = '/data/feedback/'
entries = os.listdir(src)
Title, Name, Date, Feedback = 'Title', 'Name', 'Date', 'Feedback'
inputDict = {}
for i in range(len(entries)):
    fileName = entries[i]
    completeName = os.path.join(src, fileName)
    with open(completeName, 'r') as f:
        line = f.readlines()
        line_tuple = (line[0], line[1], line[2], line[3])
        inputDict[fileName] = {}
        inputDict[fileName][Title] = line_tuple[0].rstrip()
        inputDict[fileName][Name] = line_tuple[1].rstrip()
        inputDict[fileName][Date] = line_tuple[2].rstrip()
        inputDict[fileName][Feedback] = line_tuple[3].rstrip()
x = requests.get("http://website.com/feedback")
print(x.status_code)
r = requests.post("http://Website.com/feedback", data=inputDict)
print(r.status_code)
After I run it, GET gives a 200 code but POST gives a 500 code.
I just want to know whether my script is causing the error or not.
r = requests.post("http://Website.com/feedback", data=inputDict)
If your REST API endpoint is expecting JSON data, then the line above is not sending that; it sends the dictionary inputDict form-encoded, as though you were submitting a form on an HTML page.
You can either use the json parameter in the post function, which sets the Content-Type header to application/json:
r = requests.post("http://Website.com/feedback", json=inputDict)
or set the header manually:
headers = {'Content-type': 'application/json'}
r = requests.post("http://Website.com/feedback", data=json.dumps(inputDict), headers=headers)
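If you want to see exactly what the server receives in each case, one way is to echo the request back, for example against the public httpbin.org service. This is an illustrative sketch, not part of the original answer:
import requests
payload = {"file1.txt": {"Title": "T", "Name": "N"}}
r_form = requests.post("https://httpbin.org/post", data=payload)
r_json = requests.post("https://httpbin.org/post", json=payload)
print(r_form.json()["form"])  # nested values are mangled or lost under form encoding
print(r_json.json()["json"])  # the nested structure survives intact as JSON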

How do I isolate a .json file?

I was trying to split some parts of a .json file from an API I found, to completely isolate them.
This is trying to isolate the open share price of any stock on the internet. I've consulted Stack Overflow, but I think I may have made a mistake in my paraphrasing.
# example
import sys
import requests
import json
from ticker import *
def main():
    stock_ticker = input("Name the stock ticker?\n")
    time2 = int(input("How many minutes do you want to view history?\n"))
    # separate file to generate URL for API
    url = webpage(stock_ticker, time2)
    response = requests.get(url)
    assert response.status_code == 200
    data = json.loads(response.text)
    open_share_price = data["Time Series (5min)"]["2019-11-01 16:00:00"]["1. open"]
    print(open_share_price)
    return 0
if __name__ == "__main__":
    sys.exit(main())
Returns
136.800
I've been wanting to get open share prices for different time frames, not just 16:00:00, and not just at 5-minute intervals.
I'm not great at programming, so any help would be gratefully received. Sorry in advance for my conciseness errors.
Edit: The link for the data. Sorry I didn't include it the first time around. https://www.alphavantage.co/query?function=TIME_SERIES_INTRADAY&symbol=kmb&interval=5min&apikey=exampleapikey
If you have more than one element, then you should use a for loop:
import requests
url = 'https://www.alphavantage.co/query?function=TIME_SERIES_INTRADAY&symbol=kmb&interval=5min&apikey=exampleapikey'
response = requests.get(url)
data = response.json()
for key, val in data["Time Series (5min)"].items():
    print(key, val["1. open"])
If you want to keep it as JSON, then create a new dictionary to keep the values and later save it in a file:
import requests
import json
url = 'https://www.alphavantage.co/query?function=TIME_SERIES_INTRADAY&symbol=kmb&interval=5min&apikey=exampleapikey'
response = requests.get(url)
data = response.json()
new_data = dict()
for key, val in data["Time Series (5min)"].items():
    new_data[key] = val["1. open"]
#print(new_data)
with open('new_data.json', 'w') as fp:
    fp.write(json.dumps(new_data))
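Since the question also asks about time frames other than 5-minute intervals: Alpha Vantage's TIME_SERIES_INTRADAY endpoint accepts interval values of 1min, 5min, 15min, 30min and 60min, and passing the query string via the params argument keeps the URL readable. Here is a minimal sketch; note that the "Time Series (...)" key in the response changes with the interval:
import requests
interval = '15min'  # one of: 1min, 5min, 15min, 30min, 60min
params = {
    'function': 'TIME_SERIES_INTRADAY',
    'symbol': 'kmb',
    'interval': interval,
    'apikey': 'exampleapikey',
}
data = requests.get('https://www.alphavantage.co/query', params=params).json()
# the key of the time-series block follows the interval name
for key, val in data["Time Series (%s)" % interval].items():
    print(key, val["1. open"])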

Can't parse XML effectively using Python

import urllib
import xml.etree.ElementTree as ET
def getWeather(city):
    # create google weather api url
    url = "http://www.google.com/ig/api?weather=" + urllib.quote(city)
    try:
        # open google weather api url
        f = urllib.urlopen(url)
    except:
        # if there was an error opening the url, return
        return "Error opening url"
    # read contents to a string
    s = f.read()
    tree = ET.parse(s)
    current = tree.find("current_condition/condition")
    condition_data = current.get("data")
    weather = condition_data
    if weather == "<?xml version=":
        return "Invalid city"
    # return the weather condition
    # return weather
def main():
    while True:
        city = raw_input("Give me a city: ")
        weather = getWeather(city)
        print(weather)
if __name__ == "__main__":
    main()
This gives an error. I actually wanted to find values from the tags of the Google weather XML.
Instead of
tree = ET.parse(s)
try
tree = ET.fromstring(s)
Also, your path to the data you want is incorrect. It should be: weather/current_conditions/condition
This should work:
import urllib
import xml.etree.ElementTree as ET
def getWeather(city):
    # create google weather api url
    url = "http://www.google.com/ig/api?weather=" + urllib.quote(city)
    try:
        # open google weather api url
        f = urllib.urlopen(url)
    except:
        # if there was an error opening the url, return
        return "Error opening url"
    # read contents to a string
    s = f.read()
    tree = ET.fromstring(s)
    current = tree.find("weather/current_conditions/condition")
    condition_data = current.get("data")
    weather = condition_data
    if weather == "<?xml version=":
        return "Invalid city"
    # return the weather condition
    return weather
def main():
    while True:
        city = raw_input("Give me a city: ")
        weather = getWeather(city)
        print(weather)
I'll give the same answer here that I gave in my comment on your previous question. In the future, kindly update the existing question instead of posting a new one.
Original:
I'm sorry - I didn't mean that my code would work exactly as you desired. Your error occurs because s is a string, while parse takes a file or file-like object; tree = ET.parse(f) may work better. I would suggest reading up on the ElementTree API so you understand what the functions I've used above do in practice. Hope that helps, and let me know if it works.
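To make the parse/fromstring distinction concrete, here is a small self-contained sketch; the XML string is made up to mirror the structure discussed above:
import xml.etree.ElementTree as ET
# made-up XML mirroring the structure discussed in this thread
xml_text = ('<xml_api_reply><weather><current_conditions>'
            '<condition data="Sunny"/>'
            '</current_conditions></weather></xml_api_reply>')
# fromstring() parses XML held in a string and returns the root element
root = ET.fromstring(xml_text)
print(root.find("weather/current_conditions/condition").get("data"))  # Sunny
# parse() instead expects a filename or a file-like object:
# tree = ET.parse("reply.xml")
# root = tree.getroot()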
