Fetching place details (specifically reviews) with GooglePlaces in Python 3 - python

I am completely new to this module and to Python in general, yet I wanted to start some sort of fun project in my spare time.
I have a specific question concerning the GooglePlaces module for Python: how do I retrieve the reviews of a place knowing only its Place ID?
So far I have done...
from googleplaces import GooglePlaces, types, lang
google_places = GooglePlaces('API KEY')
query_result = google_places.get_place(place_id="ChIJB8wSOI11nkcRI3C2IODoBU0")
print(query_result) #<Place name="Starbucks", lat=48.14308250000001, lng=11.5782337>
print(query_result.get_details()) # Prints None
print(query_result.rating) # Prints the rating of 4.3
I am completely lost here, because I cannot get access to the object's details. Maybe I am missing something, yet would be very thankful for any guidance through my issue.

If you are completely lost, just read the docs :)
Example from https://github.com/slimkrazy/python-google-places:
for place in query_result.places:
    # Returned places from a query are place summaries.
    # The following method has to make a further API call.
    place.get_details()
    # Referencing any of the attributes below, prior to making a call to
    # get_details(), will raise a googleplaces.GooglePlacesAttributeError.
    print(place.details)  # A dict matching the JSON response from Google.
See the problem with your code now?
print(query_result.get_details()) # Prints None
should be
query_result.get_details() # Fetch details
print(query_result.details) # Prints details dict
Regarding the results, the Google documentation states:
reviews[] a JSON array of up to five reviews. If a language parameter was specified in the Place Details request, the Places Service will bias the results to prefer reviews written in that language. Each review consists of several components.
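Putting it together for your single-place case, here is a minimal sketch (untested; it assumes the details dict mirrors Google's Place Details JSON, where each review carries author_name, rating, and text fields):
from googleplaces import GooglePlaces

google_places = GooglePlaces('API KEY')
place = google_places.get_place(place_id="ChIJB8wSOI11nkcRI3C2IODoBU0")
place.get_details()  # makes the further API call and populates place.details

# 'reviews' may be absent if the place has none, hence the .get() default
for review in place.details.get('reviews', []):
    print(review['author_name'], review['rating'])
    print(review['text'])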

Related

Problem with getting tweet_fields from Twitter API 2.0 using Tweepy

I have a similar problem to the one in this question (Problem with getting user.fields from Twitter API 2.0), but I am using Tweepy. When making the request with tweet_fields, the response only gives me the default values. In another function where I use user_fields, it works perfectly.
I followed this guide, specifically number 17 (https://dev.to/twitterdev/a-comprehensive-guide-for-using-the-twitter-api-v2-using-tweepy-in-python-15d9)
My function looks like this:
def get_user_tweets():
    client = get_client()
    tweets = client.get_users_tweets(id=get_user_id(), max_results=5)
    ids = []
    for tweet in tweets.data:
        ids.append(str(tweet.id))
    tweets_info = client.get_tweets(ids=ids, tweet_fields=["public_metrics"])
    print(tweets_info)
This is my response (with the latest tweets from elonmusk); there is no error code or anything else:
Response(data=[<Tweet id=1471419792770973699 text=#WholeMarsBlog I came to the US with no money & graduated with over $100k in debt, despite scholarships & working 2 jobs while at school>, <Tweet id=1471399837753135108 text=#TeslaOwnersEBay #PPathole #ScottAdamsSays #johniadarola #SenWarren It’s complicated, but hopefully out next quarter, along with Witcher. Lot of internal debate as to whether we should be putting effort towards generalized gaming emulation vs making individual games work well.>, <Tweet id=1471393851843792896 text=#PPathole #ScottAdamsSays #johniadarola #SenWarren Yeah!>, <Tweet id=1471338213549744130 text=link>, <Tweet id=1471325148435394566 text=#24_7TeslaNews #Tesla ❤️>], includes={}, errors=[], meta={})
I found this link: https://giters.com/tweepy/tweepy/issues/1670. According to it,
Response is a namedtuple. Here, within its data field, is a single Tweet object.
The string representation of a Tweet object will only ever include its ID and text. This was an intentional design choice, to reduce the excess of information that could be displayed when printing all the data as the string representation, as with models.Status. The ID and text are the only default / guaranteed fields, so the string representation remains consistent and unique, while still being concise. This design is used throughout the API v2 models.
To access the data of the Tweet object, you can use attributes or keys (like a dictionary) to access each field.
If you want all the data as a dictionary, you can use the data attribute/key.
In that case, to access public metrics, you could maybe try doing this instead:
tweets_info = client.get_tweets(ids=ids, tweet_fields=["public_metrics"])
for tweet in tweets_info.data:
    print(tweet["id"])
    print(tweet["public_metrics"])
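And if you'd rather dump everything Tweepy returned for each tweet, the data attribute mentioned in the quote holds the full dictionary; a quick sketch:
for tweet in tweets_info.data:
    print(tweet.data)  # the full dict for this tweet, including the requested public_metrics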

Getting SalesforceMalformedRequest: Malformed request error

I am trying to execute the following code to push data to Salesforce using the simple_salesforce Python library:
from simple_salesforce import Salesforce

staging_df = hive.execute("select * from hdmni")
staging_df = staging_df.toPandas()
# staging_df['birth_date'] = staging_df['birth_date'].dt.date
staging_df['birth_date'] = staging_df['birth_date'].astype(str)
staging_df['encounter_start_date'] = staging_df['encounter_start_date'].astype(str)
staging_df['encounter_end_date'] = staging_df['encounter_end_date'].astype(str)

bulk_data = []
for row in staging_df.itertuples():
    d = row._asdict()
    del d['Index']
    bulk_data.append(d)

sf = Salesforce(password='', username='', security_token='')
sf.bulk.Delivery_Detail__c.insert(bulk_data)
I am getting this error while trying to send the dictionaries to Salesforce:
SalesforceMalformedRequest: Malformed request
https://subhotutorial-dev-ed.my.salesforce.com/services/async/38.0/job/7500o00000HtWP6AAN/batch/7510o00000Q15TnAAJ/result.
Response content: {'exceptionCode': 'InvalidBatch',
'exceptionMessage': 'Records not processed'}
There's something about your query that is not correct. While I don't know your use case, by reading this line, you can tell that you are attempting to insert into a custom object/entity in Salesforce:
sf.bulk.Delivery_Detail__c.insert(bulk_data)
The reason you can tell is because of the __c suffix, which gets appended onto custom objects and fields (that's two underscores, by the way).
Since you're inserting into a custom object, your fields would have to be custom too, yet you haven't appended that suffix onto them.
Note: Every custom object/entity in Salesforce does come with a few standard fields to support system features like record key (Id), record name (Name), audit fields (CreatedById, CreatedDate, etc.). These wouldn't have a suffix. But none of the fields you reference are any of these standard system fields...so the __c suffix would be expected.
I suspect that what Salesforce is expecting in your insert operation are field names like this:
Birth_Date__c
Encounter_Start_Date__c
Encounter_End_Date__c
These are referred to as the API name for both objects and fields, and anytime code interacts with them (whether via integration, or on code that executes directly on the Salesforce platform) you need to make certain you're using this API name.
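For example, here is a minimal sketch of mapping your DataFrame columns to those API names before building bulk_data (the mapping is hypothetical; confirm the exact API names in your org first):
# hypothetical mapping; verify each API name in your org
field_map = {
    'birth_date': 'Birth_Date__c',
    'encounter_start_date': 'Encounter_Start_Date__c',
    'encounter_end_date': 'Encounter_End_Date__c',
}
staging_df = staging_df.rename(columns=field_map)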
Incidentally, you can retrieve this API name in a number of ways. Probably the easiest is to log into your Salesforce org; under Setup > Object Manager > [some object] > Fields and Relationships you can view details of each field, including the API name.
You can also use the SObject describe APIs, either in native Apex code or via integration using either the REST or SOAP APIs. The describe REST endpoint for the same object as my UI example above is found at https://[domain]/services/data/v47.0/sobjects/Expense__c/describe.
Looking at the docs for the simple-salesforce python library you're using, they've surfaced the describe API. You can find some info under Other Options. You would invoke it as sf.SObject.describe where "SObject" is the actual object you want to find the information about. For instance, in your case you would use:
sf.Delivery_Detail__c.describe()
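A quick sketch of listing the field API names from that call (assuming the describe result is the usual dict with a 'fields' list):
desc = sf.Delivery_Detail__c.describe()
for field in desc['fields']:
    print(field['name'])  # the API name, e.g. Birth_Date__c for a custom field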
As a good first troubleshooting step when interacting with a Salesforce object, I'd always recommend double-checking that you're referencing the API names correctly. I can't tell you how many times I've bumped into little things like an added or missing underscore, especially with the __c suffix.

Basics of connecting python to the web and validating user input

I'm relatively new, and I'm just at a loss as to where to start. I don't expect detailed step-by-step responses (though, of course, those are more than welcome), but any nudges in the right direction would be greatly appreciated.
I want to use the Gutenberg python library to select a text based on a user's input.
Right now I have the code:
from gutenberg.acquire import load_etext
from gutenberg.cleanup import strip_headers
text = strip_headers(load_etext(11)).strip()
where the number represents the text (in this case 11 = Alice in Wonderland).
Then I have a bunch of code about what to do with the text, but I don't think that's relevant here. (If it is let me know and I can add it).
Basically, instead of just selecting a text, I want to let the user do that. I want to ask the user for their choice of author and, if Project Gutenberg (PG) has pieces by that author, have them select from the list of book titles (if PG doesn't have anything by that author, return some response along the lines of "sorry, don't have anything by $author_name, pick someone else"). And then, once the user has decided on a book, have the number corresponding to that book be entered into the code.
I just have no idea where to start in this process. I know how to handle user input, but I don't know how to take that input and search for something online using it.
Ideally, I'd be able to handle things like spelling mistakes too, but that may be down the line.
I really appreciate any help anyone has the time to give. Thanks!
The gutenberg module includes facilities for searching for a text by metadata, such as author. The example from the docs is:
from gutenberg.query import get_etexts
from gutenberg.query import get_metadata
print(get_metadata('title', 2701)) # prints frozenset([u'Moby Dick; Or, The Whale'])
print(get_metadata('author', 2701)) # prints frozenset([u'Melville, Hermann'])
print(get_etexts('title', 'Moby Dick; Or, The Whale')) # prints frozenset([2701, ...])
print(get_etexts('author', 'Melville, Hermann')) # prints frozenset([2701, ...])
It sounds as if you already know how to read a value from the user into a variable, and replacing the literal author in the above would be as simple as doing something like:
author_name = my_get_input_from_user_function()
texts = get_etexts('author', author_name)
Note the following caveat from the same section:
Before you use one of the gutenberg.query functions you must populate the local metadata cache. This one-off process will take quite a while to complete (18 hours on my machine) but once it is done, any subsequent calls to get_etexts or get_metadata will be very fast. If you fail to populate the cache, the calls will raise an exception.
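For completeness, the docs populate the cache with a one-off call along these lines (a sketch; check the current gutenberg docs for the exact incantation):
from gutenberg.acquire import get_metadata_cache

cache = get_metadata_cache()
cache.populate()  # one-off and very slow; afterwards get_etexts/get_metadata are fast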
With that in mind, I haven't tried the code I've presented in this answer because I'm still waiting for my local cache to populate.
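Once the cache is ready, a minimal sketch of the author-selection flow you describe (equally untested, for the same reason) might look like:
from gutenberg.acquire import load_etext
from gutenberg.cleanup import strip_headers
from gutenberg.query import get_etexts, get_metadata

author_name = input("Choice of author: ")
texts = get_etexts('author', author_name)
if not texts:
    print("sorry, don't have anything by {}, pick someone else.".format(author_name))
else:
    for number in sorted(texts):
        # get_metadata returns a frozenset, possibly with several title variants
        print(number, ', '.join(get_metadata('title', number)))
    choice = int(input("Enter the number of the book you want: "))
    text = strip_headers(load_etext(choice)).strip()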

limit TwitterSearch API, python

How would I limit the number of results in a twitter search?
This is what I thought would work... tweetSearchUser is user input in another line of code.
tso = TwitterSearchOrder() # create a TwitterSearchOrder object
tso.set_keywords(["tweetSearchUser"]) # let's define all words we would like to have a look for
tso.setcount(30)
tso.set_include_entities(False) # and don't give us all those entity information
Was looking at this reference
https://twittersearch.readthedocs.org/en/latest/advanced_usage_ts.html
I tried this, and it seems like it should work, but I can't figure out the format to enter the date:
tso.set_until('2016-02-25')
You should use set_count, as specified in the documentation.
The default value for count is 200, because that is the maximum number of tweets returned by the Twitter API.
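A minimal corrected sketch (note it's set_count, not setcount; also, as I understand the library, set_until wants a datetime.date rather than a string, and the keyword should be your variable, not the quoted string):
import datetime
from TwitterSearch import TwitterSearchOrder

tweetSearchUser = input("Search for: ")  # stands in for your existing input line
tso = TwitterSearchOrder()
tso.set_keywords([tweetSearchUser])  # pass the variable, not the string "tweetSearchUser"
tso.set_count(30)  # at most 30 tweets per page
tso.set_include_entities(False)
tso.set_until(datetime.date(2016, 2, 25))  # only tweets created before this date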

BioPython Pubmed Eutils url?

I'm trying to run some queries against Pubmed's Eutils service. If I run them on the website I get a certain number of records returned, in this case 13126 (link to pubmed).
A while ago I bodged together a python script to build a query to do much the same thing, and the resultant url returns the same number of hits (link to Eutils result).
Of course, not having any formal programming background, it was all a bit kludgy, so I'm trying to do the same thing using Biopython. I think the following code should do the same thing, but it returns a greater number of hits: 23303.
from Bio import Entrez
Entrez.email = "A.N.Other#example.com"
handle = Entrez.esearch(db="pubmed", term="stem+cell[All Fields]",datetype="pdat", mindate="2012", maxdate="2012")
record = Entrez.read(handle)
print(record["Count"])
I'm fairly sure it's just down to some subtlety in how the url is being generated, but I can't work out how to see what url is being generated by Biopython. Can anyone give me some pointers?
Thanks!
EDIT:
It's something to do with how the url is being generated, as I can get back the original number of hits by modifying the code to include double quotes around the search term, thus:
handle = Entrez.esearch(db='pubmed', term='"stem+cell"[ALL]', datetype='pdat', mindate='2012', maxdate='2012')
I'm still interested in knowing what url is being generated by Biopython as it'll help me work out how i have to structure the search term for when i want to do more complicated searches.
handle = Entrez.esearch(db="pubmed", term="stem+cell[All Fields]",datetype="pdat", mindate="2012", maxdate="2012")
print(handle.url)
You've solved this already (Entrez likes explicit double quoting round combined search terms), but currently the URL generated is not exposed via the API. The simplest trick would be to edit the Bio/Entrez/__init__.py file to add a print statement inside the _open function.
Update: Recent versions of Biopython now save the URL as an attribute of the returned handle, i.e. in this example try doing print(handle.url)
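With a recent Biopython you can therefore build both variants and compare the generated URLs directly; a sketch:
from Bio import Entrez

Entrez.email = "A.N.Other@example.com"

# Unquoted term: interpreted loosely, hence the extra hits you saw
h1 = Entrez.esearch(db="pubmed", term="stem+cell[All Fields]",
                    datetype="pdat", mindate="2012", maxdate="2012")
# Quoted term: searched as a phrase, matching the web interface count
h2 = Entrez.esearch(db="pubmed", term='"stem+cell"[All Fields]',
                    datetype="pdat", mindate="2012", maxdate="2012")
print(h1.url)
print(h2.url)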
