How to encode latitude and longitude using urllib.parse.urlencode? - python

I'm using Google API to obtain the json data of nearby coffee outlets. To do this, I need to encode the latitude and longitude into the URL.
The required URL: https://maps.googleapis.com/maps/api/place/textsearch/json?query=coffee&location=22.303940,114.170372&radius=1000&maxprice=3&key=myAPIKey
The URL I'm obtaining using urlencode: https://maps.googleapis.com/maps/api/place/textsearch/json?query=coffee&location=22.303940%2C114.170372&radius=1000&maxprice=3&key=myAPIKey
How can I remove the "%2C" in the URL? (I have shown my code below)
import urllib.parse

serviceurl_placesearch = 'https://maps.googleapis.com/maps/api/place/textsearch/json?'
parameters = dict()
query = input('What are you searching for? ')
parameters['query'] = query
parameters['location'] = "22.303940,114.170372"
while True:
    radius = input('Enter radius of search in meters: ')
    try:
        radius = int(radius)
        parameters['radius'] = radius
        break
    except ValueError:
        print('Please enter a number for radius')
while True:
    maxprice = input('Enter the maximum price level you are looking for (0 to 4): ')
    try:
        maxprice = int(maxprice)
        parameters['maxprice'] = maxprice
        break
    except ValueError:
        print('Valid inputs are 0, 1, 2, 3, 4')
parameters['key'] = API_key  # API_key is defined elsewhere
url = serviceurl_placesearch + urllib.parse.urlencode(parameters)
I added the following workaround to make the URL usable, but I don't think it's a proper long-term solution:
urlparts = url.split('%2C')
url = ','.join(urlparts)

You can pass safe="," to urlencode:
import urllib.parse
parameters = {'location': "22.303940,114.170372"}
urllib.parse.urlencode(parameters, safe=',')
Result
location=22.303940,114.170372
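Applied to the full parameter set from the question (with a placeholder API key), the comma survives unescaped while everything else is still encoded normally:

```python
import urllib.parse

# Parameters mirroring the question; the key is a placeholder, not a real one
parameters = {
    'query': 'coffee',
    'location': '22.303940,114.170372',
    'radius': 1000,
    'maxprice': 3,
    'key': 'myAPIKey',
}
base = 'https://maps.googleapis.com/maps/api/place/textsearch/json?'
# safe=',' is forwarded to the quoting function, so commas are left as-is
url = base + urllib.parse.urlencode(parameters, safe=',')
print(url)
```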

Related

Unable to change a 'str' into an 'int', can only use float

I have the below code. This snippet:
pe = df[3][0]
pe = int(pe)
print(pe)
Does not work and will return the error in the 'exception' part of the code below:
import requests
import pandas as pd
from bs4 import BeautifulSoup

headers = {'User-Agent': 'Mozilla/5.0'}  # assumed; 'headers' is defined elsewhere in the original

ticker = input("Please choose a ticker symbol: ")
print("Loading data for " + ticker.upper())
try:
    # Pulling the data for the chosen ticker.
    url = 'https://finviz.com/quote.ashx?t=' + ticker.upper()
    req = requests.get(url, headers=headers)
    table = pd.read_html(req.text, attrs={"class": "snapshot-table2"})
    df = table[0]
    # Pulls company name.
    soup = BeautifulSoup(req.text, 'html.parser')
    for title in soup.find_all('title'):
        print(title.get_text())
    pe = df[3][0]
    pe = int(pe)
    print(pe)
    # Ratios and metrics to be checked.
    data = {
        df[10][10]: ["$" + df[11][10]],  # Stock Price
        df[2][0]: [df[3][0]],  # P/E
        df[2][1]: [df[3][1]],  # Forward P/E
        df[2][2]: [df[3][2]],  # PEG
        df[2][3]: [df[3][3]],  # P/S
        df[2][6]: [df[3][6]],  # P/FCF
        df[0][7]: [df[1][7]],  # Dividend %
        df[6][5]: [df[7][5]]   # ROE
    }
    # Framing the table and printing the results.
    df = pd.DataFrame(data, index=["Stats:"])
    print(df)
except ValueError:
    print("Ticker doesn't exist. Please check your selection.")
This returns:
Loading data for KO
KO The Coca-Cola Company Stock Quote
Ticker doesn't exist. Please check your selection.
When I use this:
pe = int(float(pe))
It works, but it truncates the value to 30 for Coca-Cola, for example. I'd like to return the exact number, but turning it into an int isn't as straightforward as it seems. I did use type() to confirm that df[3][0] is a str.
Any help is appreciated, thank you.
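For context (not from the thread): int() only parses integer strings, so it raises ValueError on anything with a decimal point, while float() parses the decimal string exactly; int(float(...)) then truncates toward zero, which is why the value becomes 30. A minimal sketch, with "30.78" as a sample string standing in for df[3][0]:

```python
pe = "30.78"  # sample value; the real one comes from df[3][0]

# int() rejects strings containing a decimal point
try:
    int(pe)
except ValueError as exc:
    print(exc)

# float() parses the decimal string without changing the value
print(float(pe))       # 30.78

# int(float(...)) truncates toward zero, losing the fraction
print(int(float(pe)))  # 30
```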

How to find "rain" in a variable for openweathermap

I am trying to make a simple program that tells you whether you need to water your plants today (by checking if it will rain). I am using the OpenWeatherMap API to do so. The API does not expose rain as its own variable the way it does temperature or humidity; instead, "Rain" only appears inside the weather variable when it is raining.
When I run my code below, it never detects that it is raining, even when the variable it checks contains "Rain". How can I find the word "Rain" and print a message accordingly?
After running my current code this is what I get if I print the weathervar:
[{'id': 502, 'main': 'Rain', 'description': 'heavy intensity rain', 'icon': '10n'}]
Even when the variable contains "rain" my code thinks it doesn't.
import requests, json

api_key = "someapikey"
base_url = "http://api.openweathermap.org/data/2.5/weather?"
city_name = "Brunei"
complete_url = base_url + "appid=" + api_key + "&q=" + city_name
response = requests.get(complete_url)
x = response.json()
if x["cod"] != "404":
    y = x["main"]
    current_temperature = y["temp"]
    weathervar = x["weather"]
else:
    print(" City Not Found ")
if 'Rain' in weathervar:
    print("You don't need to water your plants today.")
else:
    print("You need to water your plants today")
print(weathervar)
Since weathervar is a list of dictionaries, you should check
if any(item['main'] == 'Rain' for item in weathervar):
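A self-contained sketch of that check, using the sample payload printed in the question:

```python
# Sample "weather" list copied from the question's output
weathervar = [{'id': 502, 'main': 'Rain',
               'description': 'heavy intensity rain', 'icon': '10n'}]

# 'Rain' in weathervar compares 'Rain' against whole dicts, so it is always False;
# any() inspects each dict's 'main' field instead
if any(item['main'] == 'Rain' for item in weathervar):
    print("You don't need to water your plants today.")
else:
    print("You need to water your plants today")
```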
Try using:
import requests, json

api_key = "someapikey"
base_url = "http://api.openweathermap.org/data/2.5/weather?"
city_name = "Brunei"
complete_url = base_url + "appid=" + api_key + "&q=" + city_name
response = requests.get(complete_url)
x = response.json()
if x["cod"] != "404":
    y = x["main"]
    current_temperature = y["temp"]
    weathervar = x["weather"][0]
else:
    print(" City Not Found ")
if 'Rain' in weathervar.values():
    print("You don't need to water your plants today.")
else:
    print("You need to water your plants today")
print(weathervar)
weathervar is a list of dictionaries, so first take its first element (x["weather"][0]) to get a dictionary. Then you can check the dictionary's values with .values().

Different result on browser search vs Bio.entrez search

I am getting different results when I search with Bio.Entrez. For example, when I search in the browser using the query "covid side effect" I get 344 results, whereas I get only 92 with Bio.Entrez. This is the code I was using.
from Bio import Entrez

Entrez.email = "Your.Name.Here@example.org"
handle = Entrez.esearch(db="pubmed", retmax=40, term="covid side effect", idtype="acc")
record = Entrez.read(handle)
handle.close()
print(record['Count'])
I was hoping if someone could help me with this discrepancy.
For some reason everyone seems to have the same issue, whether through the R API or the Python API. I found a workaround that gets the same results as the browser. It is slow, but it gets the job done. If your result set is under 10k you could probably use Selenium to get the PubMed IDs; otherwise, you can scrape the data using the code below. I hope this helps someone in the future.
import requests

# Optional filters that can be appended to the query URL:
#   Custom date range:  &filter=dates.2009/01/01-2020/03/01
#   Custom year range:  &filter=years.2010-2019
#   Relative date:      &filter=datesearch.y_1
#   Language:           &filter=lang.english
#   Humans only:        &filter=hum_ani.humans
#   Systematic review:  &filter=pubt.systematicreview
#   Case reports:       &filter=pubt.casereports
#   Age:                &filter=age.newborn

search = "covid lungs"

def id_retriever(search_string):
    string = "+".join(search_string.split(' '))
    result = []
    old_result = len(result)
    for page in range(1, 10000000):
        req = requests.get("https://pubmed.ncbi.nlm.nih.gov/?term={string}&format=pmid&sort=pubdate&size=200&page={page}".format(page=page, string=string))
        for j in req.iter_lines():
            decoded = j.decode("utf-8").strip(" ")
            length = len(decoded)
            if "log_displayeduids" in decoded and length > 46:
                data = str(j).split('"')[-2].split(",")
                result = result + data
                data = []
        new_result = len(result)
        if new_result != old_result:
            old_result = new_result
        else:
            break
    return result

ids = id_retriever(search)
len(ids)

Parsing a JSON using specific key words using Python

I'm trying to parse a JSON of a site's stock.
The JSON: https://www.ssense.com/en-us/men/sneakers.json
So I want to take some keywords from the user. Then I want to parse the JSON using these keywords to find the name of the item and (in this specific case) return the ID, SKU and the URL.
So for example:
If I input "Black Fennec", I want to parse the JSON and find the ID, SKU, and URL of the Black Fennec Sneakers (which have an ID of 3297299, a SKU of 191422M237006, and a URL of /men/product/ps-paul-smith/black-fennec-sneakers/3297299).
I have never attempted doing anything like this. Based on some guides that show how to parse a JSON I started out with this:
r = requests.Session()
stock = r.get("https://www.ssense.com/en-us/men/sneakers.json",headers = headers)
json_data = json.loads(stock.text)
However, I am now confused. How do I find the product based on the keywords, and how do I get the ID, URL, and SKU for it?
There's a number of ways to handle the output; not sure what you want to do with it, but this should get you going.
EDIT 1:
import requests

r = requests.Session()
obj_json_data = r.get("https://www.ssense.com/en-us/men/sneakers.json").json()
products = obj_json_data['products']
keyword = input('Enter a keyword: ')
for product in products:
    if keyword.upper() in product['name'].upper():
        name = product['name']
        id_var = product['id']
        sku = product['sku']
        url = product['url']
        print('Product: %s\nID: %s\nSKU: %s\nURL: %s' % (name, id_var, sku, url))
        # if you only want to return the first match, uncomment the next line
        # break
I also have it setup to store it into a dataframe, and or a list too. Just to give some options of where to go with it.
import requests
import pandas as pd

r = requests.Session()
obj_json_data = r.get("https://www.ssense.com/en-us/men/sneakers.json").json()
products = obj_json_data['products']
keyword = input('Enter a keyword: ')
products_found = []
results = pd.DataFrame()
for product in products:
    if keyword.upper() in product['name'].upper():
        name = product['name']
        id_var = product['id']
        sku = product['sku']
        url = product['url']
        temp_df = pd.DataFrame([[name, id_var, sku, url]], columns=['name', 'id', 'sku', 'url'])
        results = pd.concat([results, temp_df])  # DataFrame.append was removed in pandas 2.0
        products_found.append(name)  # list.append mutates in place and returns None
        print('Product: %s\nID: %s\nSKU: %s\nURL: %s' % (name, id_var, sku, url))
if products_found == []:
    print('Nothing found')
EDIT 2: Here is another way to do it by converting the JSON to a dataframe, then filtering for rows whose name contains the keyword (this is actually a better solution, in my opinion):
import requests
import pandas as pd

r = requests.Session()
obj_json_data = r.get("https://www.ssense.com/en-us/men/sneakers.json").json()
products = obj_json_data['products']
products_df = pd.json_normalize(products)  # pandas.io.json.json_normalize is deprecated
keyword = input('Enter a keyword: ')
results = products_df[products_df['name'].str.contains(keyword, case=False)]
# print(results[['name', 'id', 'sku', 'url']])
products_found = list(results['name'])
if products_found == []:
    print('Nothing found')
else:
    print('Found: ' + str(products_found))

getting distance between two location using geocoding

I want to find the distance between two locations using the Google API. I want the output to look like "The distance between location 1 and location 2 is 500 miles" (the distance here is just an example), but the current program prints output I can't use directly to build that sentence. Can you show me the procedure to do it?
import urllib
import json

serviceurl = 'http://maps.googleapis.com/maps/api/geocode/json?'
while True:
    address = raw_input('Enter location: ')
    if len(address) < 1:
        break
    url = serviceurl + urllib.urlencode({'sensor': 'false', 'address': address})
    print 'Retrieving', url
    uh = urllib.urlopen(url)
    data = uh.read()
    print 'Retrieved', len(data), 'characters'
    try:
        js = json.loads(str(data))
    except:
        js = None
    if 'status' not in js or js['status'] != 'OK':
        print '==== Failure To Retrieve ===='
        print data
        continue
    print json.dumps(js, indent=4)
    lat = js["results"][0]["geometry"]["location"]["lat"]
    lng = js["results"][0]["geometry"]["location"]["lng"]
    print 'lat', lat, 'lng', lng
    location = js['results'][0]['formatted_address']
    print location
Google has a specific api for that, it's called Google Maps Distance Matrix API.
Distance & duration for multiple destinations and transport modes.
Retrieve duration and distance values based on the recommended route
between start and end points.
If you just need the distance between two points on the globe you may want to use the Haversine formula
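A minimal pure-stdlib sketch of the Haversine formula, using Newport, RI and Cleveland, OH as sample points:

```python
from math import radians, sin, cos, asin, sqrt

def haversine(lat1, lon1, lat2, lon2):
    """Great-circle distance in kilometers between two (lat, lon) points."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    dlat = lat2 - lat1
    dlon = lon2 - lon1
    a = sin(dlat / 2) ** 2 + cos(lat1) * cos(lat2) * sin(dlon / 2) ** 2
    return 2 * 6371 * asin(sqrt(a))  # 6371 km: mean Earth radius

# Newport, RI to Cleveland, OH: roughly 864 km
print(haversine(41.49008, -71.312796, 41.499498, -81.695391))
```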
If you know lat and lon, use the geopy package:
In [1]: from geopy.distance import great_circle
In [2]: newport_ri = (41.49008, -71.312796)
In [3]: cleveland_oh = (41.499498, -81.695391)
In [4]: great_circle(newport_ri, cleveland_oh).kilometers
Out[4]: 864.4567616296598