Can't get prices for multiple symbols; it gives the error {'code': -1101, 'msg': "Duplicate values for parameter 'symbols'."}. I am doing what is indicated in the documentation on GitHub.
This is my code:
import requests
symbols = ["KEYUSDT","BNBUSDT","ADAUSDT"]
url = 'https://api.binance.com/api/v3/ticker/price'
params = {'symbols': symbols}
ticker = requests.get(url, params=params).json()
print(ticker)
What am I doing wrong?
You have to specify the list as a string:
import requests
symbols = '["KEYUSDT","BNBUSDT","ADAUSDT"]'
url = 'https://api.binance.com/api/v3/ticker/price'
params = {'symbols': symbols}
ticker = requests.get(url, params=params).json()
print(ticker)
Result:
[{'symbol': 'BNBUSDT', 'price': '317.50000000'}, {'symbol': 'ADAUSDT', 'price': '0.56690000'}, {'symbol': 'KEYUSDT', 'price': '0.00504000'}]
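The reason for the error: requests encodes a Python list by repeating the parameter (symbols=KEYUSDT&symbols=BNBUSDT&...), and Binance rejects the repeated key with code -1101; the endpoint expects a single symbols parameter holding a JSON-formatted array. Rather than hand-writing that string, you can build it from the list with json.dumps; a sketch (separators=(',', ':') keeps the string free of spaces, which the API may reject):
import json
import requests

symbols = ["KEYUSDT", "BNBUSDT", "ADAUSDT"]
url = 'https://api.binance.com/api/v3/ticker/price'

# produce the compact JSON array string '["KEYUSDT","BNBUSDT","ADAUSDT"]'
params = {'symbols': json.dumps(symbols, separators=(',', ':'))}

ticker = requests.get(url, params=params).json()
print(ticker)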
I am trying to use the PokeAPI to extract all Pokémon names for a personal project, to help build comfort with APIs. I have been having issues with the params specifically. Can someone please provide support or resources to simplify grabbing data from JSON? Here is the code I have written so far, which returns the entire data set.
import json
from unicodedata import name
import requests
from pprint import PrettyPrinter

pp = PrettyPrinter()

url = "https://pokeapi.co/api/v2/ability/1/"
params = {
    name: "garbodor"
}

def main():
    r = requests.get(url)
    status = r.status_code
    if status != 200:
        quit()
    else:
        get_pokedex(status)

def get_pokedex(x):
    print("status code: ", + x)  # redundant check for status code before the program begins.
    response = requests.get(url, params=params).json()
    pp.pprint(response)

main()
Website link: https://pokeapi.co/docs/v2#pokemon-section (I am working specifically with the pokemon group).
I have no idea which values you want, but response is a dictionary with lists, and you can use keys and indexes (with for-loops) to select elements from response, e.g. response["names"][0]["name"].
Minimal working example
The name or ID has to be added at the end of the URL.
import requests
import pprint as pp

name_or_id = "stench"  # name
#name_or_id = 1        # id

url = "https://pokeapi.co/api/v2/ability/{}/".format(name_or_id)

response = requests.get(url)

if response.status_code != 200:
    print(response.text)
else:
    data = response.json()
    #pp.pprint(data)

    print('\n--- data.keys() ---\n')
    print(data.keys())

    print('\n--- data["name"] ---\n')
    print(data['name'])

    print('\n--- data["names"] ---\n')
    pp.pprint(data["names"])

    print('\n--- data["names"][0]["name"] ---\n')
    print(data['names'][0]['name'])

    print('\n--- language : name ---\n')
    names = []
    for item in data["names"]:
        print(item['language']['name'], ":", item["name"])
        names.append(item["name"])

    print('\n--- after for-loop ---\n')
    print(names)
Result:
--- data.keys() ---
dict_keys(['effect_changes', 'effect_entries', 'flavor_text_entries', 'generation', 'id', 'is_main_series', 'name', 'names', 'pokemon'])
--- data["name"] ---
stench
--- data["names"] ---
[{'language': {'name': 'ja-Hrkt',
               'url': 'https://pokeapi.co/api/v2/language/1/'},
  'name': 'あくしゅう'},
 {'language': {'name': 'ko', 'url': 'https://pokeapi.co/api/v2/language/3/'},
  'name': '악취'},
 {'language': {'name': 'zh-Hant',
               'url': 'https://pokeapi.co/api/v2/language/4/'},
  'name': '惡臭'},
 {'language': {'name': 'fr', 'url': 'https://pokeapi.co/api/v2/language/5/'},
  'name': 'Puanteur'},
 {'language': {'name': 'de', 'url': 'https://pokeapi.co/api/v2/language/6/'},
  'name': 'Duftnote'},
 {'language': {'name': 'es', 'url': 'https://pokeapi.co/api/v2/language/7/'},
  'name': 'Hedor'},
 {'language': {'name': 'it', 'url': 'https://pokeapi.co/api/v2/language/8/'},
  'name': 'Tanfo'},
 {'language': {'name': 'en', 'url': 'https://pokeapi.co/api/v2/language/9/'},
  'name': 'Stench'},
 {'language': {'name': 'ja', 'url': 'https://pokeapi.co/api/v2/language/11/'},
  'name': 'あくしゅう'},
 {'language': {'name': 'zh-Hans',
               'url': 'https://pokeapi.co/api/v2/language/12/'},
  'name': '恶臭'}]
--- data["names"][0]["name"] ---
あくしゅう
--- language : name ---
ja-Hrkt : あくしゅう
ko : 악취
zh-Hant : 惡臭
fr : Puanteur
de : Duftnote
es : Hedor
it : Tanfo
en : Stench
ja : あくしゅう
zh-Hans : 恶臭
--- after for-loop ---
['あくしゅう', '악취', '惡臭', 'Puanteur', 'Duftnote', 'Hedor', 'Tanfo', 'Stench', 'あくしゅう', '恶臭']
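For example, continuing from the example above, to pull only the English name out of data["names"], filter on the nested language key:
# pick the entry whose language is 'en' (per the output shown above)
english = next(item["name"] for item in data["names"]
               if item["language"]["name"] == "en")
print(english)  # Stench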
EDIT:
Another example, with a different URL and the parameters limit and offset.
I use a for-loop to run with different offsets (0, 100, 200, etc.).
import requests
import pprint as pp

url = "https://pokeapi.co/api/v2/pokemon/"
params = {'limit': 100}

for offset in range(0, 1000, 100):
    params['offset'] = offset  # add new value to dict with `limit`

    response = requests.get(url, params=params)

    if response.status_code != 200:
        print(response.text)
    else:
        data = response.json()
        #pp.pprint(data)
        for item in data['results']:
            print(item['name'])
Result (first 100 items):
bulbasaur
ivysaur
venusaur
charmander
charmeleon
charizard
squirtle
wartortle
blastoise
caterpie
metapod
butterfree
weedle
kakuna
beedrill
pidgey
pidgeotto
pidgeot
rattata
raticate
spearow
fearow
ekans
arbok
pikachu
raichu
sandshrew
sandslash
nidoran-f
nidorina
nidoqueen
nidoran-m
nidorino
nidoking
clefairy
clefable
vulpix
ninetales
jigglypuff
wigglytuff
zubat
golbat
oddish
gloom
vileplume
paras
parasect
venonat
venomoth
diglett
dugtrio
meowth
persian
psyduck
golduck
mankey
primeape
growlithe
arcanine
poliwag
poliwhirl
poliwrath
abra
kadabra
alakazam
machop
machoke
machamp
bellsprout
weepinbell
victreebel
tentacool
tentacruel
geodude
graveler
golem
ponyta
rapidash
slowpoke
slowbro
magnemite
magneton
farfetchd
doduo
dodrio
seel
dewgong
grimer
muk
shellder
cloyster
gastly
haunter
gengar
onix
drowzee
hypno
krabby
kingler
voltorb
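As an alternative to computing offsets yourself, the list response also carries a next field with the URL of the following page (and count with the total), so you can follow it until it is None; a sketch:
import requests

url = "https://pokeapi.co/api/v2/pokemon/?limit=100"

while url:
    data = requests.get(url).json()
    for item in data['results']:
        print(item['name'])
    url = data['next']  # None on the last page, which ends the loop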
I am new to API programming. I am trying to download data from the MOEX API.
Here is the code I use:
import requests as re
from io import StringIO
import pandas as pd
import json

session = re.Session()

login = "aaaa"
password = "bbbb"

session.get('https://passport.moex.com/authenticate', auth=(login, password))
cookies = {'MicexPassportCert': session.cookies['MicexPassportCert']}

def api_query(engine, market, session, secur, from_start, till_end):
    param = 'https://iss.moex.com/iss/history/engines/{}/markets/{}/sessions/{}/securities/{}/candles.json?from={}&till={}&interval=24&start=0'.format(engine, market, session, secur, from_start, till_end)
    return param

url = api_query('stock', 'bonds', 'session', 'RU000A0JVWL2', '2020-11-01', '2021-05-01')
response = re.get(url, cookies=cookies)
As a result I get the following data (part of the data):
'history.cursor': {'metadata': {'INDEX': {'type': 'int64'}, 'TOTAL': {'type': 'int64'}, 'PAGESIZE': {'type': 'int64'}}, 'columns': ['INDEX', 'TOTAL', 'PAGESIZE'], 'data': [[0, 32, 100]]}}
I need to convert this JSON into a pandas DataFrame. How do I do it? The result should be a DataFrame with 1 row and 3 columns.
Thanks in advance.
Assuming your JSON is properly encoded, you could try something like this:
import pandas as pd
import numpy as np

json = {
    'history.cursor': {
        'metadata': {'INDEX': {'type': 'int64'}, 'TOTAL': {'type': 'int64'}, 'PAGESIZE': {'type': 'int64'}},
        'columns': ['INDEX', 'TOTAL', 'PAGESIZE'],
        'data': [[0, 32, 100]]
    }
}

columns = json['history.cursor']['columns']
data = np.array(json['history.cursor']['data'])
metadata = json['history.cursor']['metadata']

d = {}
for i, column in enumerate(columns):
    d[column] = data[:, i].astype(metadata[column]['type'])

df = pd.DataFrame(d)
print(df)
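If you don't need the dtype conversion driven by metadata, a shorter route is to hand columns and data straight to the DataFrame constructor (reusing the json dict from the snippet above):
import pandas as pd

cursor = json['history.cursor']
df = pd.DataFrame(cursor['data'], columns=cursor['columns'])
print(df)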
You should use the pd.read_json() method.
Your orientation would likely be 'split', so:
pd.read_json(json, orient='split'), where 'split' means your JSON is shaped like a dict of the form {index -> [index], columns -> [columns], data -> [values]}.
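A sketch of that approach, assuming data holds the parsed response (i.e. response.json()): read_json wants a JSON string (wrapped in StringIO for recent pandas versions), and the 'split' orient wants index, columns and data keys, so the payload is rebuilt from the 'history.cursor' dict:
import json
from io import StringIO
import pandas as pd

inner = data['history.cursor']  # data = response.json(), assumed
payload = json.dumps({'index': [0],
                      'columns': inner['columns'],
                      'data': inner['data']})
df = pd.read_json(StringIO(payload), orient='split')
print(df)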
I am trying to extract some data from two tables in the same HTML with BeautifulSoup. I have already extracted part of both tables, but not all of the data. This is the code that I have:
from urllib.request import urlopen
from bs4 import BeautifulSoup

html_content = urlopen('https://www.icewarehouse.com/Bauer_Vapor_X25_Ice_Hockey_Skates/descpage-V25XS.html')
soup = BeautifulSoup(html_content, "lxml")
tables = soup.find_all('table', attrs={'class': 'orderingtable fl'})

for table_skates in tables:
    t_headers = []
    t_data = []
    t_row = {}
    for tr in table_skates.find_all('th'):
        t_headers.append(tr.text.replace('\n', '').strip())
    for td in table_skates.find_all('td'):
        t_data.append(td.text.replace('\n', '').strip())
    t_row = dict(zip(t_headers, t_data))
    print(t_row)
Here is the output that I get:
{'Size': '1.0', 'Price': '$109.99', 'Stock': '1', 'Qty': ''}
{'Size': '7.0', 'Price': '$159.99', 'Stock': '2+', 'Qty': ''}
You can get it easily by using read_html in pandas:
import pandas as pd
dfs = pd.read_html(html_content, attrs={'class': 'orderingtable fl'})
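read_html returns one DataFrame per matching table (here, the two size tables), so index the list or stitch the frames together. A sketch that passes the URL straight to pandas; if the site rejects pandas' default user agent, fetch the page with requests first and pass response.text instead:
import pandas as pd

url = 'https://www.icewarehouse.com/Bauer_Vapor_X25_Ice_Hockey_Skates/descpage-V25XS.html'
dfs = pd.read_html(url, attrs={'class': 'orderingtable fl'})

# combine both size tables into one frame, one row per size
df = pd.concat(dfs, ignore_index=True)
print(df)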
So I'm trying to scrape a table from this API:
https://api.pbpstats.com/get-wowy-combination-stats/nba?TeamId=1610612743&Season=2018-19&SeasonType=Playoffs&PlayerIds=203999,1627750,200794
But I'm having trouble getting the headers as a nice list like ['Players On', 'Players Off', 'Minutes', 'NetRtg', 'OffRtg', 'DefRtg'] for my eventual dataframe, because the headers live under their own key in the response and are not part of the results key.
My current code looks like:
import requests
url = 'https://api.pbpstats.com/get-wowy-combination-stats/nba?TeamId=1610612743&Season=2018-19&SeasonType=Playoffs&PlayerIds=203999,1627750,200794'
response = requests.get(url, headers={'User-Agent': 'Mozilla/5.0'})
# grab table
table = response.json()['results'][0]
#grab headers
headers = response.json()['headers']
And when I print(headers) I get [{'field': 'On', 'label': 'Players On'}, {'field': 'Off', 'label': 'Players Off'}, {'field': 'Minutes', 'label': 'Minutes', 'type': 'number'}, {'field': 'NetRtg', 'label': 'NetRtg', 'type': 'decimal'}, {'field': 'OffRtg', 'label': 'OffRtg', 'type': 'decimal'}, {'field': 'DefRtg', 'label': 'DefRtg', 'type': 'decimal'}].
Is there a good way to get these into a list like ['Players On', 'Players Off', 'Minutes', 'NetRtg', 'OffRtg', 'DefRtg'] so I can then create a dataframe?
Thank you!
Just extract all the values for each key out of the headers list and build your dictionary:
import requests

url = 'https://api.pbpstats.com/get-wowy-combination-stats/nba?TeamId=1610612743&Season=2018-19&SeasonType=Playoffs&PlayerIds=203999,1627750,200794'
response = requests.get(url, headers={'User-Agent': 'Mozilla/5.0'})

# grab table
table = response.json()['results'][0]

# grab headers
headers = response.json()['headers']

# extract all values for every key into a dictionary
results = {}
for header in headers:
    for k, v in header.items():
        results.setdefault(k, [])
        results[k].append(v)

# remove duplicate elements from the lists of values
results = {k: list(set(v)) for k, v in results.items()}
print(results)
The output will look like
{
    'field': ['Minutes', 'Off', 'On', 'DefRtg', 'NetRtg', 'OffRtg'],
    'label': ['Minutes', 'DefRtg', 'Players On', 'NetRtg', 'OffRtg', 'Players Off'],
    'type': ['decimal', 'number']
}
A list comprehension should do the trick:
import requests
url = 'https://api.pbpstats.com/get-wowy-combination-stats/nba?TeamId=1610612743&Season=2018-19&SeasonType=Playoffs&PlayerIds=203999,1627750,200794'
response = requests.get(url, headers={'User-Agent': 'Mozilla/5.0'})
# grab table
table = response.json()['results'][0]
#grab headers
headers = response.json()['headers']
headers = [each['label'] for each in headers]
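To go from there to the dataframe the question asks for, one sketch: assuming each row dict in results uses the raw field names as keys (which the field/label pairs in headers suggest), rename the columns to the display labels:
import pandas as pd

rows = response.json()['results']  # the full table, one dict per row (assumed shape)
# map raw field names ('On', 'Off', ...) to display labels ('Players On', ...)
field_to_label = {h['field']: h['label'] for h in response.json()['headers']}

df = pd.DataFrame(rows).rename(columns=field_to_label)
print(df.head())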
The data I am using is the Twitter API's trending topics.
url_0 = 'https://api.twitter.com/1.1/trends/place.json?id=2459115'
res = requests.get(url_0, auth=auth)
print(res, res.status_code, res.headers['content-type'])
print(res.url)
top_trends_twitter = res.json()
data= top_trends_twitter[0]
This is what data looks like:
[{'as_of': '2017-02-13T21:59:32Z',
  'created_at': '2017-02-13T21:53:22Z',
  'locations': [{'name': 'New York', 'woeid': 2459115}],
  'trends': [{'name': 'Victor Cruz',
              'promoted_content': None,
              'query': '%22Victor+Cruz%22',
              'tweet_volume': 45690,
              'url': 'http://twitter.com/search?q=%22Victor+Cruz%22'},
             {'name': '#percussion',
              'promoted_content': None,
              'query': '%23percussion',
              'tweet_volume': None,
              'url': 'http://twitter.com/search?q=%23percussion'}, .....etc
Now, after I connect to the SQL server and create the database and table, an error appears. This is the part that is causing me trouble:
for entry in data:
    trendname = entry['trends']['name']
    url = entry['trends']['url']
    num_tweets = entry['trends']['trend_volume']
    date = entry['as_of']

    print("Inserting trend", trendname, "at", url)
    query_parameters = (trendname, url, num_tweets, date)
    cursor.execute(query_template, query_parameters)
    con.commit()

cursor.close()
Then, I get this error:
TypeError Traceback (most recent call last)
<ipython-input-112-da3e17aadce0> in <module>()
29
30 for entry in data:
---> 31 trendname = entry['trends']['name']
32 url = entry['trends']['url']
33 num_tweets = entry['trends']['trend_volume']
TypeError: string indices must be integers
How can I get these strings into dictionaries, so that I can use them in the for entry in data code?
You need entry['trends'][0]['name']; entry['trends'] is a list, and you need an integer index to access the items of a list.
Try it like so:
data = [{'as_of': '2017-02-13T21:59:32Z',
         'created_at': '2017-02-13T21:53:22Z',
         'locations': [{'name': 'New York', 'woeid': 2459115}],
         'trends': [{'name': 'Victor Cruz',
                     'promoted_content': None,
                     'query': '%22Victor+Cruz%22',
                     'tweet_volume': 45690,
                     'url': 'http://twitter.com/search?q=%22Victor+Cruz%22'},
                    {'name': '#percussion',
                     'promoted_content': None,
                     'query': '%23percussion',
                     'tweet_volume': None,
                     'url': 'http://twitter.com/search?q=%23percussion'}]}]

for entry in data:
    date = entry['as_of']
    for trend in entry['trends']:
        trendname = trend['name']
        url = trend['url']
        num_tweets = trend['tweet_volume']
        print(trendname, url, num_tweets, date)
Output:
Victor Cruz http://twitter.com/search?q=%22Victor+Cruz%22 45690 2017-02-13T21:59:32Z
#percussion http://twitter.com/search?q=%23percussion None 2017-02-13T21:59:32Z
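Applying the same nested loop to the original insert code (query_template, cursor and con come from the asker's setup):
for entry in data:
    date = entry['as_of']
    for trend in entry['trends']:
        # note the key is 'tweet_volume', not 'trend_volume'
        query_parameters = (trend['name'], trend['url'], trend['tweet_volume'], date)
        print("Inserting trend", trend['name'], "at", trend['url'])
        cursor.execute(query_template, query_parameters)
con.commit()
cursor.close()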