How to retrieve bus stops from OpenStreetMap using Python?

I'm trying to get all bus stops in the USA from OSM, adapting this Biergarten example for Germany, but I haven't managed it.
This is what I've tried so far:
import requests
import json

overpass_url = "http://overpass-api.de/api/interpreter"
overpass_query = """
[out:json];
area["ISO3166-1"="US"][admin_level=2];
(node["highway"="bus_stop"](area);
 way["highway"="bus_stop"](area);
 rel["highway"="bus_stop"](area);
);
out center;
"""
response = requests.get(overpass_url, params={'data': overpass_query})
data = response.json()
The result is an empty list. I would appreciate any hints on this problem – thanks!
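The tags themselves look right, so the empty result is most likely a resource problem: a single query over the whole US exceeds the Overpass server's default timeout and memory limits, and the response comes back truncated or empty. One workaround (a sketch, assuming that splitting the query per state keeps each request within the server's limits; US state relations carry ISO3166-2 tags) is to raise the timeout and query state by state:

```python
def bus_stop_query(iso3166_2_code, timeout_s=600):
    """Build an Overpass QL query for all bus stops in one US state.

    Querying state by state keeps each request small enough to
    finish within the server's limits.
    """
    return f"""
[out:json][timeout:{timeout_s}];
area["ISO3166-2"="{iso3166_2_code}"][admin_level=4];
(node["highway"="bus_stop"](area);
 way["highway"="bus_stop"](area);
 rel["highway"="bus_stop"](area);
);
out center;
"""

query = bus_stop_query("US-NY")
# Then, per state (using requests as in the question):
# response = requests.get("http://overpass-api.de/api/interpreter",
#                         params={"data": query})
# data = response.json()
```

Looping over all state codes and concatenating the results should recover the nationwide list without any single request timing out.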

Related

Nutritionix API only returns first 10 results?

I'm trying to extract nutritional information using the Nutritionix API/database in Python. I was able to get a successful query and placed it into a pandas dataframe. However, I'm confused because the resulting JSON claims that there are several thousand 'hits' for my query, but at most 10 are ever returned. For instance, when I query for Garbanzo, the JSON file says that there are 513 total_hits, but only 10 are actually returned. Does anyone know what is causing this? The code I'm using is below.
import requests
import json
import pandas as pd
from nutritionix import Nutritionix
nix_apikey = ''
nix_appid = ''
nix = Nutritionix(app_id = nix_appid, api_key = nix_apikey)
results = nix.search('Garbanzo').json()
df = pd.json_normalize(results, record_path = ['hits'])
I'm not including my api_key or app_id for obvious reasons. Here's a link to the Nutritionix API wrapper: https://github.com/leetrout/python-nutritionix
Thanks for any suggestions!
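This looks like ordinary result paging: the search endpoint returns only the first page of hits by default, and you have to request the remaining windows yourself. As I recall, the Nutritionix v1 search accepts a `results="start:end"` window parameter, but treat that parameter name (and whether this wrapper forwards it) as an assumption to verify against the API docs. A small sketch of generating the windows needed to cover all hits:

```python
def result_windows(total_hits, page_size=50):
    """Yield 'start:end' result windows covering all hits,
    e.g. 120 hits in pages of 50 -> '0:50', '50:100', '100:120'.
    """
    for start in range(0, total_hits, page_size):
        yield f"{start}:{min(start + page_size, total_hits)}"

# Each window could then be passed to the search call, e.g.
# nix.search('Garbanzo', results=window).json()   # parameter name assumed
windows = list(result_windows(120, 50))
print(windows)  # ['0:50', '50:100', '100:120']
```

Collect the `hits` list from each page and concatenate them before calling `pd.json_normalize`.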

JSONDecodeError when converting an OpenStreetMap query in a Jupyter notebook

I need help related to OpenStreetMap. I'm using python (jupyter notebook) to get data of hospitals in Bali area, Indonesia. Here is my code and query:
import pandas as pd
import requests
import json
overpass_api = "http://overpass-api.de/api/interpreter"
query_hospital = """
[out:json];
{{geocodeArea:'Provinsi Bali'}}->.searchArea;
node[amenity='hospital'](area.searchArea);
out;
"""
response_hospital = requests.get(overpass_api, params={'data':query_hospital})
but when I run the next code,
data_hospital = response_hospital.json()
it returns the error JSONDecodeError: Expecting value: line 1 column 1 (char 0)
The query works well in Overpass Turbo but returns this error in the notebook.
I've found the solution. The {{geocodeArea:...}} syntax is an Overpass Turbo shortcut that the web editor expands before sending the query; the raw Overpass API doesn't understand it and responds with an error page instead of JSON. So I modified the query like this
query_hospital = """
[out:json];
area[name=Bali];
node[amenity='hospital'](area);
out;
"""
or, if we use the area name in the local language,
query_hospital = """
[out:json];
area['name:id'='Provinsi Bali'];
node[amenity='hospital'](area);
out;
"""
the query returns the same result, and now the response parses correctly.
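A general debugging tip for this class of error: when .json() (or json.loads) raises JSONDecodeError, the server usually sent something other than JSON, such as an HTML error page. A small helper sketch that surfaces what actually came back (it only relies on the response object's .text and .status_code attributes):

```python
import json

def parse_overpass_response(response):
    """Return the parsed JSON body, or raise a readable error showing
    what the server actually sent (often an HTML error page).
    """
    try:
        return json.loads(response.text)
    except json.JSONDecodeError:
        raise RuntimeError(
            f"Server returned status {response.status_code}, "
            f"body starts with: {response.text[:200]!r}"
        )
```

Calling this instead of response.json() turns the opaque "Expecting value: line 1 column 1" into a message that shows the server's actual complaint.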

Python for loop API request

I am extracting data from this API
I was able to save the JSON file on my local machine.
I want to run the requests for several stocks.
How do I do it?
I tried to play with for loops but nothing good came out of it. I attached the code below.
The output is:
AAPL
[]
TSLA
[]
Thank you, Tal
try:
    # For Python 3.0 and later
    from urllib.request import urlopen
except ImportError:
    # Fall back to Python 2's urllib2
    from urllib2 import urlopen

import requests
import json
import time

def get_jsonparsed_data(url):
    """
    Receive the content of ``url``, parse it as JSON and return the object.

    Parameters
    ----------
    url : str

    Returns
    -------
    dict
    """
    return json.loads(urlopen(url).read().decode("utf-8"))

api_key = ""  # removed by me

stock_symbols = ["AAPL", "TSLA"]
for symbol in stock_symbols:
    print(symbol)
    # Send the API request; the symbol goes in the URL path,
    # not in a "symbol=" query parameter
    r = requests.get(f"https://financialmodelingprep.com/api/v3/income-statement/{symbol}?limit=120&apikey={api_key}")
    packages_JSON = r.json()
    print(packages_JSON)
    # Export the data to one JSON file per symbol so each pass
    # doesn't overwrite the previous one
    with open(f"stocks_data_{symbol}.json", "w", encoding="utf-8") as f:
        json.dump(packages_JSON, f, ensure_ascii=False, indent=4)
Querying multiple APIs iteratively will take a lot of time. Consider using threading or asyncio to issue the requests simultaneously and speed up the process.
In a nutshell, you should do something like this for each API:
import threading
threads = []
for provider in [...]:  # list of APIs to query
    t = threading.Thread(target=api_request_function, args=(provider, ...))
    t.start()
    threads.append(t)
for t in threads:
    t.join()  # wait for all requests to finish
However, it's better to read this great article first to understand the whats and whys of the threading approach.
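As a concrete, runnable sketch of the same pattern, concurrent.futures gives you a thread pool with bounded concurrency and ordered results; here fetch is a stand-in for the real request function (no network involved in this demo):

```python
from concurrent.futures import ThreadPoolExecutor

def fetch(provider):
    # Stand-in for the real API call, e.g. requests.get(provider).json()
    return f"result from {provider}"

providers = ["api-one", "api-two", "api-three"]
with ThreadPoolExecutor(max_workers=8) as pool:
    # map runs the calls concurrently but preserves input order
    results = list(pool.map(fetch, providers))
print(results)
```

Replacing the body of fetch with the real requests.get call is all that's needed to parallelize the stock loop above.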

How can I automate downloading files from a website using different inputs using Python?

I need to download a number of data files from the website https://www.renewables.ninja/ and I want to automate the process using Python if possible.
I want to select cities (say Berlin, New York, Seoul) as well as parameters for solar PV and wind based on inputs from a Python file, run it (which takes approximately 5 seconds on the website), and download the csv files.
Is it possible to automate this process using Python since I need to download a large number of files for different data points?
You can fetch the files and save them using the requests module as follows:
import requests

csv_data = requests.get('https://www.renewables.ninja/api/data/weather',
                        params={"format": "csv"}).content
# .content is bytes, so the file must be opened in binary mode
with open('saved_data_file.csv', 'wb') as f:
    f.write(csv_data)
If you want to see what parameters are used when you request certain data from the website, open inspect element (F12) and go to the Network tab. Request data using their form and have a look at the new request that pops up. The URL will look something like this:
https://www.renewables.ninja/api/data/weather?local_time=true&format=csv&header=true&lat=70.90491170356151&lon=24.589843749945597&date_from=2019-01-01&date_to=2019-12-31&dataset=merra2&var_t2m=true&var_prectotland=false&var_precsnoland=false&var_snomas=false&var_rhoa=false&var_swgdn=false&var_swtdn=false&var_cldtot=false
Then pick out the parameters you want and put them in a dictionary that you feed into requests.get, e.g. params={"format": "csv", "local_time": "true", "header": "true", ...}.
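You don't have to retype that dictionary by hand: the standard library can split a URL copied from the Network tab into its base and a params dict. A small sketch (using a shortened version of the URL above):

```python
from urllib.parse import urlsplit, parse_qsl

url = ("https://www.renewables.ninja/api/data/weather"
       "?local_time=true&format=csv&header=true"
       "&lat=70.9049&lon=24.5898"
       "&date_from=2019-01-01&date_to=2019-12-31&dataset=merra2")

parts = urlsplit(url)
base = f"{parts.scheme}://{parts.netloc}{parts.path}"
params = dict(parse_qsl(parts.query))
print(base)    # https://www.renewables.ninja/api/data/weather
print(params["format"], params["dataset"])  # csv merra2
```

You can then edit individual entries of params (lat, lon, dates) in a loop and pass the dict straight to requests.get(base, params=params).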
Yeah, it's definitely possible to automate this process.
Consider looking at this url: https://www.renewables.ninja/api/data/pv?local_time=true&format=csv&header=true&lat=52.5170365&lon=13.3888599&date_from=2019-01-01&date_to=2019-12-31&dataset=merra2&capacity=1&system_loss=0.1&tracking=0&tilt=35&azim=180&raw=false
It's a request to the API for solar PV data. You can change the query parameters here to get data for the cities you want; just change the lat and lon parameters.
To get those coordinates for a city you can use this API: https://nominatim.openstreetmap.org/search?format=json&limit=1&q=berlin. Change the q parameter to the city you want.
Code example:
import json
import requests
COORD_API = "https://nominatim.openstreetmap.org/search"
CITY = "berlin" # It's just an example.
payload = {'format': 'json', 'limit': 1, 'q':CITY}
r = requests.get(COORD_API, params=payload)
lat_long_data = json.loads(r.text)
lat = lat_long_data[0]['lat']
lon = lat_long_data[0]['lon']
# With these values we can get the solar data
MAIN_API = "https://www.renewables.ninja/api/data/pv?local_time=true&format=csv&header=true&date_from=2019-01-01&date_to=2019-12-31&dataset=merra2&capacity=1&system_loss=0.1&tracking=0&tilt=35&azim=180&raw=false"
payload = {'lat': lat, 'lon': lon}
resp = requests.get(MAIN_API, params=payload)
Then do something with the returned data, e.g. write resp.text to a CSV file.
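For the geocoding step, note that Nominatim returns a JSON list of matches with lat/lon as strings; a small sketch of pulling the coordinates out (shown with a made-up, truncated sample payload so no network call is needed):

```python
def city_coords(nominatim_results):
    """Pull (lat, lon) from a Nominatim search response:
    a JSON list of matches, of which we take the first.
    """
    first = nominatim_results[0]
    return first["lat"], first["lon"]

# Truncated sample of what /search?format=json&q=berlin returns:
sample = [{"lat": "52.5170365", "lon": "13.3888599", "display_name": "Berlin"}]
lat, lon = city_coords(sample)
print(lat, lon)  # 52.5170365 13.3888599
```

With limit=1 the list has at most one entry, but an empty list (no match for the city name) is worth guarding against before indexing [0].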

Search through JSON query from Valve API in Python

I am looking to find various statistics about players in games such as CS:GO from the Steam Web API, but cannot work out how to search through the JSON returned by the query (e.g. here) in Python.
I just need to be able to get a specific part of the list that is provided, e.g. finding total_kills from the link above. If I had a way to sort through all of the information provided and filter it down to just that specific thing (in this case total_kills), that would help a lot!
The code I have at the moment to turn it into something Python can read is:
import requests
import json

url = "http://api.steampowered.com/IPlayerService/GetOwnedGames/v0001/?key=FE3C600EB76959F47F80C707467108F2&steamid=76561198185148697&include_appinfo=1"
data = requests.get(url).text
data = json.loads(data)
If you are looking for a way to search through the stats list then try this:
import requests
import json

def findstat(data, stat_name):
    for stat in data['playerstats']['stats']:
        if stat['name'] == stat_name:
            return stat['value']

url = "http://api.steampowered.com/ISteamUserStats/GetUserStatsForGame/v0002/?appid=730&key=FE3C600EB76959F47F80C707467108F2&steamid=76561198185148697"
data = requests.get(url).text
data = json.loads(data)

total_kills = findstat(data, 'total_kills')  # change 'total_kills' to your desired stat name
print(total_kills)
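If you need more than one stat, it's cheaper to index all of them once into a dict than to rescan the list for every lookup. A sketch, using a made-up sample payload shaped like the GetUserStatsForGame response:

```python
def stats_to_dict(data):
    """Index every stat by name so repeated lookups are O(1)."""
    return {s["name"]: s["value"] for s in data["playerstats"]["stats"]}

# Made-up sample in the shape the API returns
sample = {"playerstats": {"stats": [
    {"name": "total_kills", "value": 1234},
    {"name": "total_deaths", "value": 999},
]}}
stats = stats_to_dict(sample)
print(stats["total_kills"])  # 1234
```

With the real response, stats.get("total_kills") also avoids a KeyError if a stat is missing for that player.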
