Count all restaurants in a city with the Google Maps API - Python

I'm trying to count all restaurants in my city using the python-google-places API,
but it is not working: I'm getting "failed with response code: INVALID_REQUEST".
What could be causing this?
My code looks like this:
from googleplaces import GooglePlaces, types, lang
from time import sleep
YOUR_API_KEY = '<<MYKEY>>'
google_places = GooglePlaces(YOUR_API_KEY)
# You may prefer to use the text_search API, instead.
query_result = google_places.nearby_search(
    lat_lng={'lat': -16.6824083, 'lng': -49.2556573},
    location='Goiania',
    radius=50000, types=[types.TYPE_RESTAURANT])
counter = 0
while query_result.has_next_page_token:
    counter = counter + len(query_result.places)
    query_result = google_places.nearby_search(
        lat_lng={'lat': -16.6824083, 'lng': -49.2556573},
        location='Goiania',
        radius=50000, types=[types.TYPE_RESTAURANT],
        pagetoken=query_result.next_page_token)
print(counter)
I'm getting this:
---------------------------------------------------------------------------
GooglePlacesError Traceback (most recent call last)
<ipython-input-42-9cc6675b31bc> in <module>()
21 location='Goiania',
22 radius=50000,types=[types.TYPE_RESTAURANT],
---> 23 pagetoken=query_result.next_page_token)
24
25 print(counter)
C:\ProgramData\Anaconda3\lib\site-packages\googleplaces\__init__.py in nearby_search(self, language, keyword, location, lat_lng, name, radius, rankby, sensor, type, types, pagetoken)
303 url, places_response = _fetch_remote_json(
304 GooglePlaces.NEARBY_SEARCH_API_URL, self._request_params)
--> 305 _validate_response(url, places_response)
306 return GooglePlacesSearchResult(self, places_response)
307
C:\ProgramData\Anaconda3\lib\site-packages\googleplaces\__init__.py in _validate_response(url, response)
173 error_detail = ('Request to URL %s failed with response code: %s' %
174 (url, response['status']))
--> 175 raise GooglePlacesError(error_detail)
176
177
GooglePlacesError: Request to URL https://maps.googleapis.com/maps/api/place/nearbysearch/json?location=-16.6824083%2C-49.2556573&radius=50000&type=restaurant&pagetoken=CqQCGAEAAAzE3wT0DnczXFlzyjvAaka8vRLZMlsAjF2aqezA8dtGcLIV7ePoqXAUOm0MyxgroXBcKydzt3U3rB2RFvqLijFCbJ3-ucQ-nijN1E7d4aEcC2UlKUR2gNnHfmKYmFVmfQ70lbW-UmCm79WOl2s5oQ8VYoE9bRnr01IphBbVeiS_IDBsCwmsALU4ti5z-7RSYT9ACTCgFs8bVwU9lQ2x_F3v2FtkdqP7UWl5MmNLteox4dSCwa_k3gKD9yd8mCzzos0CvS248uqn_24wLaVubPmxAUrDbSFDhoSx5c8O7S-XrHl4aZ2dx4QUznYXVcEcD_9c-AHKnPoqK-zwh2MVRiHLHNscTnxr4_iCJwsrrOcqlyQrN192HCq9BMADG1tLVxIQ16yZSa5g10FKIcHzFwQqrxoUxS_m8v1Lbr0IbujvfXRi74p71ws&language=en&key=AIzaSyD8YxHJjYdGMO-k7MbOdF807uzEYT-QGYo&sensor=false failed with response code: INVALID_REQUEST
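A likely cause, assuming the key and quota are fine: Google's next_page_token does not become valid immediately, so requesting the next page right away returns INVALID_REQUEST. The unused sleep import in the question suggests this was the intent. A minimal sketch that waits before each paginated call, and that also counts the final page (the loop above exits before adding it):
from googleplaces import GooglePlaces, types
from time import sleep

google_places = GooglePlaces('<<MYKEY>>')
query_result = google_places.nearby_search(
    lat_lng={'lat': -16.6824083, 'lng': -49.2556573},
    radius=50000, types=[types.TYPE_RESTAURANT])
counter = len(query_result.places)  # count the first page
while query_result.has_next_page_token:
    sleep(2)  # the token takes a moment to become valid on Google's side
    query_result = google_places.nearby_search(
        lat_lng={'lat': -16.6824083, 'lng': -49.2556573},
        radius=50000, types=[types.TYPE_RESTAURANT],
        pagetoken=query_result.next_page_token)
    counter = counter + len(query_result.places)
print(counter)
Note that Nearby Search returns at most 60 results (three pages of 20), so this cannot count every restaurant in a 50 km radius; splitting the area into smaller search circles is one workaround.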

Related

zerodha api and python : InputException: Invalid `api_key` or `access_token`

It looks like the error is related to an invalid API key or access token, but as far as I can tell everything I did is correct. The steps I took are listed below:
I created an app at https://developers.kite.trade/ (from the Zerodha Kite Connect dashboard).
(Screenshot of the created Zerodha API key omitted.)
To get data from Zerodha in Python, I am trying the Zerodha Kite Connect API. Kite Connect is a set of REST-like APIs that expose many of the capabilities required to build a complete investment and trading platform. To use the API, I first needed a Zerodha account, and then I applied for API access. Having received my API key, I can use it to make requests to the Kite Connect API with a Python library such as kiteconnect.
Here is the Python code I am using with the kiteconnect library to get historical data for a stock:
from kiteconnect import KiteConnect
import datetime

kite = KiteConnect(api_key='0cv9cnax7bmgjclh')

# Get historical data for a stock
today = datetime.datetime.now().date()
historical_data = kite.historical_data(
    instrument_token=6048,  # Instrument token of a stock
    from_date=today - datetime.timedelta(days=365),  # From date
    to_date=today,  # To date
    interval="daily"  # Interval (minute, hourly, daily, weekly, monthly, yearly)
)
print(historical_data)
Error:
---------------------------------------------------------------------------
InputException Traceback (most recent call last)
~\AppData\Local\Temp\ipykernel_18768\3958108260.py in <module>
6 # Get historical data for a stock
7 today = datetime.datetime.now().date()
----> 8 historical_data = kite.historical_data(
9 instrument_token=6048, # Instrument token of a stock
10 from_date=today - datetime.timedelta(days=365), # From date
~\anaconda3\lib\site-packages\kiteconnect\connect.py in historical_data(self, instrument_token, from_date, to_date, interval, continuous, oi)
629 to_date_string = to_date.strftime(date_string_format) if type(to_date) == datetime.datetime else to_date
630
--> 631 data = self._get("market.historical",
632 url_args={"instrument_token": instrument_token, "interval": interval},
633 params={
~\anaconda3\lib\site-packages\kiteconnect\connect.py in _get(self, route, url_args, params, is_json)
849 def _get(self, route, url_args=None, params=None, is_json=False):
850 """Alias for sending a GET request."""
--> 851 return self._request(route, "GET", url_args=url_args, params=params, is_json=is_json)
852
853 def _post(self, route, url_args=None, params=None, is_json=False, query_params=None):
~\anaconda3\lib\site-packages\kiteconnect\connect.py in _request(self, route, method, url_args, params, is_json, query_params)
925 # native Kite errors
926 exp = getattr(ex, data.get("error_type"), ex.GeneralException)
--> 927 raise exp(data["message"], code=r.status_code)
928
929 return data["data"]
InputException: Invalid `api_key` or `access_token`.
I am trying to get historical data via API from Zerodha.
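The Kite Connect API does not authenticate with the api_key alone: you have to complete the login flow, exchange the resulting request_token for an access_token via generate_session, and attach that token to the client. A minimal sketch of that flow (the request_token and api_secret values below are placeholders):
from kiteconnect import KiteConnect

kite = KiteConnect(api_key='0cv9cnax7bmgjclh')

# 1. Open kite.login_url() in a browser, log in, and copy the
#    request_token out of the redirect URL (placeholder below).
request_token = '<request_token_from_redirect>'
api_secret = '<your_api_secret>'

# 2. Exchange it for an access token and attach it to the client.
data = kite.generate_session(request_token, api_secret=api_secret)
kite.set_access_token(data['access_token'])

# Now historical_data() calls are authenticated (the historical API
# additionally requires Zerodha's historical-data subscription).
Two smaller points: if I remember the client correctly, historical_data expects interval="day" rather than "daily", and the access token expires daily, so the login flow has to be repeated each trading day.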

Tiingo Data Reader

I was developing a price prediction model that requires Tiingo, but there seems to be a problem with the API authentication. I read the Tiingo API key from an environment variable via os:
import os
import pandas as pd
import pandas_datareader as pdr

api_key = os.environ.get('TIINGO_API_KEY')
df = pdr.get_data_tiingo('AAPL', api_key)
df = pd.read_csv('AAPL.csv')
print(df.tail())
The error I got looks like:
~\AppData\Local\Temp/ipykernel_9920/1017009006.py in <module>
1 api_key =os.environ.get('TIINGO_API_KEY')
----> 2 df=pdr.get_data_tiingo('AAPL',api_key)
3 df=pd.read_csv('AAPL.csv')
4 print(df.tail())
~\anaconda3\lib\site-packages\pandas_datareader\data.py in get_data_tiingo(*args, **kwargs)
118
119 def get_data_tiingo(*args, **kwargs):
--> 120 return TiingoDailyReader(*args, **kwargs).read()
121
122
~\anaconda3\lib\site-packages\pandas_datareader\tiingo.py in __init__(self, symbols, start, end, retry_count, pause, timeout, session, freq, api_key)
181 api_key = os.getenv("TIINGO_API_KEY")
182 if not api_key or not isinstance(api_key, str):
--> 183 raise ValueError(
184 "The tiingo API key must be provided either "
185 "through the api_key variable or through the "
ValueError: The tiingo API key must be provided either through the api_key variable or through the environmental variable TIINGO_API_KEY.
Any assistance is highly appreciated.
It seems api_key is coming back as None. You should check that it is actually set in the environment.
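Two things are worth checking here, as a sketch: that the environment variable is actually set in the process running the notebook, and that api_key is passed as a keyword argument - the traceback above shows TiingoDailyReader(symbols, start, end, ...), so a positional second argument is bound to start, not to api_key:
import os
import pandas_datareader as pdr

api_key = os.environ.get('TIINGO_API_KEY')
if not api_key:
    raise RuntimeError('TIINGO_API_KEY is not set in this environment')

# Pass api_key by keyword; get_data_tiingo('AAPL', api_key) would bind
# the key string to the start-date parameter instead.
df = pdr.get_data_tiingo('AAPL', api_key=api_key)
print(df.tail())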

TweepError: Twitter error response: status code = 403

I'm trying to extract the number of #btc tweets per day since 2019-01-01.
I know the error is about permissions, but I'm already using the keys generated from the Twitter developer portal.
Here's my code (I deleted my developer keys):
# Python script to extract tweets for a
# particular hashtag using Tweepy and Pandas

# import modules
import pandas as pd
import tweepy


# function to display data of each tweet
def printtweetdata(n, ith_tweet):
    print()
    print(f"Tweet {n}:")
    print(f"Username:{ith_tweet[0]}")
    print(f"Description:{ith_tweet[1]}")
    print(f"Location:{ith_tweet[2]}")
    print(f"Following Count:{ith_tweet[3]}")
    print(f"Follower Count:{ith_tweet[4]}")
    print(f"Total Tweets:{ith_tweet[5]}")
    print(f"Retweet Count:{ith_tweet[6]}")
    print(f"Tweet Text:{ith_tweet[7]}")
    print(f"Hashtags Used:{ith_tweet[8]}")


# function to perform data extraction
def scrape(words, date_since, numtweet):
    # Creating DataFrame using pandas
    db = pd.DataFrame(columns=['username', 'description', 'location', 'following',
                               'followers', 'totaltweets', 'retweetcount', 'text', 'hashtags'])
    # We are using .Cursor() to search through Twitter for the required tweets.
    # The number of tweets can be restricted using .items(number of tweets)
    tweets = tweepy.Cursor(api.search, q=words, lang="en",
                           since=date_since, tweet_mode='extended').items(numtweet)
    # .Cursor() returns an iterable object. Each item in
    # the iterator has various attributes that you can access to
    # get information about each tweet
    list_tweets = [tweet for tweet in tweets]
    # Counter to maintain tweet count
    i = 1
    # Iterate over each tweet in the list, extracting information about each one
    for tweet in list_tweets:
        username = tweet.user.screen_name
        description = tweet.user.description
        location = tweet.user.location
        following = tweet.user.friends_count
        followers = tweet.user.followers_count
        totaltweets = tweet.user.statuses_count
        retweetcount = tweet.retweet_count
        hashtags = tweet.entities['hashtags']
        # Retweets can be distinguished by a retweeted_status attribute;
        # if that attribute is missing, the except block runs instead
        try:
            text = tweet.retweeted_status.full_text
        except AttributeError:
            text = tweet.full_text
        hashtext = list()
        for j in range(0, len(hashtags)):
            hashtext.append(hashtags[j]['text'])
        # Append all the extracted information to the DataFrame
        ith_tweet = [username, description, location, following,
                     followers, totaltweets, retweetcount, text, hashtext]
        db.loc[len(db)] = ith_tweet
        # Function call to print tweet data on screen
        printtweetdata(i, ith_tweet)
        i = i + 1
    filename = 'scraped_tweets.csv'
    # Save the DataFrame as a CSV file
    db.to_csv(filename)


if __name__ == '__main__':
    # Enter your own credentials obtained
    # from your developer account
    consumer_key = ""
    consumer_secret = ""
    access_key = ""
    access_secret = ""
    auth = tweepy.OAuthHandler(consumer_key, consumer_secret)
    auth.set_access_token(access_key, access_secret)
    api = tweepy.API(auth)

    # Enter hashtag and initial date
    print("Enter Twitter HashTag to search for")
    words = input()
    print("Enter Date since The Tweets are required in yyyy-mm-dd")
    date_since = input()

    # number of tweets you want to extract in one run
    numtweet = 100
    scrape(words, date_since, numtweet)
    print('Scraping has completed!')
Here's the error:
---------------------------------------------------------------------------
TweepError Traceback (most recent call last)
<ipython-input-4-dee0a1a7784b> in <module>()
98 # number of tweets you want to extract in one run
99 numtweet = 100
--> 100 scrape(words, date_since, numtweet)
101 print('Scraping has completed!')
6 frames
<ipython-input-4-dee0a1a7784b> in scrape(words, date_since, numtweet)
38 # the iterator has various attributes that you can access to
39 # get information about each tweet
---> 40 list_tweets = [tweet for tweet in tweets]
41
42 # Counter to maintain Tweet Count
<ipython-input-4-dee0a1a7784b> in <listcomp>(.0)
38 # the iterator has various attributes that you can access to
39 # get information about each tweet
---> 40 list_tweets = [tweet for tweet in tweets]
41
42 # Counter to maintain Tweet Count
/usr/local/lib/python3.7/dist-packages/tweepy/cursor.py in __next__(self)
49
50 def __next__(self):
---> 51 return self.next()
52
53 def next(self):
/usr/local/lib/python3.7/dist-packages/tweepy/cursor.py in next(self)
241 if self.current_page is None or self.page_index == len(self.current_page) - 1:
242 # Reached end of current page, get the next page...
--> 243 self.current_page = self.page_iterator.next()
244 while len(self.current_page) == 0:
245 self.current_page = self.page_iterator.next()
/usr/local/lib/python3.7/dist-packages/tweepy/cursor.py in next(self)
130
131 if self.index >= len(self.results) - 1:
--> 132 data = self.method(max_id=self.max_id, parser=RawParser(), *self.args, **self.kwargs)
133
134 if hasattr(self.method, '__self__'):
/usr/local/lib/python3.7/dist-packages/tweepy/binder.py in _call(*args, **kwargs)
251 return method
252 else:
--> 253 return method.execute()
254 finally:
255 method.session.close()
/usr/local/lib/python3.7/dist-packages/tweepy/binder.py in execute(self)
232 raise RateLimitError(error_msg, resp)
233 else:
--> 234 raise TweepError(error_msg, resp, api_code=api_error_code)
235
236 # Parse the response payload
TweepError: Twitter error response: status code = 403
I just had this issue and I resolved it by going back to my Developer Portal and applying for "Elevated Access". A 403 status code means the request was authenticated but not authorized; with the v1.1 search endpoint it usually indicates that your app's access level is insufficient.
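If the 403 persists after upgrading access, it helps to surface Twitter's actual error code and message instead of just the status. A small sketch against tweepy v3.x (the version in the traceback above), with placeholder credentials:
import tweepy

consumer_key = ""
consumer_secret = ""
access_key = ""
access_secret = ""

auth = tweepy.OAuthHandler(consumer_key, consumer_secret)
auth.set_access_token(access_key, access_secret)
api = tweepy.API(auth, wait_on_rate_limit=True)

try:
    for tweet in tweepy.Cursor(api.search, q="#btc", lang="en",
                               tweet_mode='extended').items(10):
        print(tweet.full_text[:80])
except tweepy.TweepError as e:
    # api_code and reason carry Twitter's specific error,
    # which is more informative than the bare 403
    print(e.api_code, e.reason)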

Python: List index out of range when performing a query using osmapi

Hi, I am new to osmapi and to Python. I was writing a script to perform some queries using osmapi when I got this error, even though the data looks fine both on https://www.openstreetmap.org/way/77517260 and in the XML response at https://api.openstreetmap.org/api/0.6/way/77517260.
When I test with another way ID it works, but this ID, 77517260, doesn't. Here is the error:
---------------------------------------------------------------------------
IndexError Traceback (most recent call last)
~/.pyenv/versions/3.6.4/lib/python3.6/site-packages/osmapi/OsmApi.py in _OsmResponseToDom(self, response, tag, single)
2060 all_data = osm_dom.getElementsByTagName(tag)
-> 2061 first_element = all_data[0]
2062 except (xml.parsers.expat.ExpatError, IndexError) as e:
IndexError: list index out of range
During handling of the above exception, another exception occurred:
XmlResponseInvalidError Traceback (most recent call last)
<ipython-input-20-79d93245d84a> in <module>
----> 1 way = api.NodeWays(77517260)
~/.pyenv/versions/3.6.4/lib/python3.6/site-packages/osmapi/OsmApi.py in NodeWays(self, NodeId)
513 uri = "/api/0.6/node/%d/ways" % NodeId
514 data = self._get(uri)
--> 515 ways = self._OsmResponseToDom(data, tag="way")
516 result = []
517 for way in ways:
~/.pyenv/versions/3.6.4/lib/python3.6/site-packages/osmapi/OsmApi.py in _OsmResponseToDom(self, response, tag, single)
2062 except (xml.parsers.expat.ExpatError, IndexError) as e:
2063 raise XmlResponseInvalidError(
-> 2064 "The XML response from the OSM API is invalid: %r" % e
2065 )
2066
XmlResponseInvalidError: The XML response from the OSM API is invalid: IndexError('list index out of range',)
my python code:
import osmapi as osm
api = osm.OsmApi()
way = api.NodeWays(77517260)
First, you should pass the URL and credentials in the constructor:
api = osm.OsmApi(api="https://api.openstreetmap.org", username="username", password="secret")
Next, 77517260 is a way ID, so /api/0.6/way/{id} is the endpoint you want - you are probably looking for the WayGet method. NodeWays expects a node ID and returns the ways containing that node, which is why the response has no <way> elements here.
Code:
import osmapi as osm
api = osm.OsmApi(api="https://api.openstreetmap.org", username="username", password="secret")
way = api.WayGet(77517260)
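As a quick sanity check, WayGet returns a dict of way data (this assumes osmapi's usual return shape, with 'tag' and 'nd' keys):
way = api.WayGet(77517260)
print(way['tag'])  # the way's OSM tags, e.g. name or highway
print(way['nd'])   # IDs of the nodes that make up the way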

Getting this error while trying to insert data into MySQL -> "TypeError: not all arguments converted during string formatting"

I scrape some data from Amazon and append it to 4 lists. But when I try to insert those lists into a database, I just get TypeError: not all arguments converted during string formatting.
All the data are already strings. I tried using a tuple, but that did not work either.
# Importing Requests and BeautifulSoup modules
import requests
from bs4 import BeautifulSoup
import pymysql

# Setting Base Url
base_url = "https://www.amazon.com/s/ref=lp_6503737011_pg_2?rh=n%3A16310101%2Cn%3A%2116310211%2Cn%3A2983386011%2Cn%3A6503737011&page="

# Setting range for pagination
pagination = list(range(1, 3))

# Declaring Empty Data
name = []
retailer = []
price = []
image_link = []

# Looping through pagination
for num in pagination:
    url = base_url + str(num)
    # Connection Error Handler
    try:
        r = requests.get(url)
    except requests.exceptions.ConnectionError:
        r.status_code = "Connection refused"
        print("Connection Refused by the server")
    # Setting BeautifulSoup Object
    soup = BeautifulSoup(r.content, "html.parser")
    # Setting Div Class of Info
    g_data = soup.find_all("div", {"class": "s-item-container"})
    # Getting Every Data from Info Div
    for item in g_data:
        imgs = soup.findAll("img", {"class": "s-access-image"})
        for img in imgs:
            image_link.append(img['src'])
        name.append(item.contents[2].find_all('h2', {'class': 's-access-title'})[0].text)
        retailer.append(item.contents[2].find_all('span', {'class': 'a-size-small'})[1].text)
        whole_number = str(item.contents[3].find_all('span', {'class': 'sx-price-whole'})[0].text)
        fractional_number = str(item.contents[3].find_all('sup', {'class': 'sx-price-fractional'})[0].text)
        price_1 = whole_number + "." + fractional_number
        price.append(price_1)
This is the scraping code, and everything works up to this point. But when I try to insert the data into the database, I run into a problem.
import pymysql
db = pymysql.connect('localhost','root','','scrape')
cursor = db.cursor()
sql = """INSERT INTO wine(
NAME,RETAILER,PRICE,IMAGE_LINK) VALUES"
"""
cursor.executemany(sql, (name,retailer,price,image_link))
I am getting this error while running this code:
TypeError Traceback (most recent call last)
<ipython-input-7-0fca81edd73c> in <module>()
6 NAME,RETAILER,PRICE,IMAGE_LINK) VALUES"
7 """
----> 8 cursor.executemany(sql, (name,retailer,price,image_link))
C:\Anaconda3\lib\site-packages\pymysql\cursors.py in executemany(self, query, args)
193 self._get_db().encoding)
194
--> 195 self.rowcount = sum(self.execute(query, arg) for arg in args)
196 return self.rowcount
197
C:\Anaconda3\lib\site-packages\pymysql\cursors.py in <genexpr>(.0)
193 self._get_db().encoding)
194
--> 195 self.rowcount = sum(self.execute(query, arg) for arg in args)
196 return self.rowcount
197
C:\Anaconda3\lib\site-packages\pymysql\cursors.py in execute(self, query, args)
162 pass
163
--> 164 query = self.mogrify(query, args)
165
166 result = self._query(query)
C:\Anaconda3\lib\site-packages\pymysql\cursors.py in mogrify(self, query, args)
141
142 if args is not None:
--> 143 query = query % self._escape_args(args, conn)
144
145 return query
TypeError: not all arguments converted during string formatting
I have not been able to find a solution to this problem.
Your query is incomplete: you need placeholders, e.g. %s
.executemany() takes a container of containers as its second argument; typically this is a list of tuples
Change to:
sql = """INSERT INTO wine(NAME,RETAILER,PRICE,IMAGE_LINK) VALUES (%s,%s,%s,%s);"""
to_insert = [(a,b,c,d) for a,b,c,d in zip(name,retailer,price,image_link)]
cursor.executemany(sql,to_insert)
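One more step, assuming default connection settings: pymysql does not autocommit, so the inserted rows will not persist until you commit on the connection:
cursor.executemany(sql, to_insert)
db.commit()  # pymysql disables autocommit by default
db.close()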
