I'm building a web application and trying to find the topics a user follows. I tried getting the timeline and running a topic model (classifier) over it, but that takes too long. Is there any way to get this from Tweepy's get_user function? Thanks.
I would probably study the Twitter API itself.
I have developed a Telegram bot, hosted as a Google web app, to interface with Twitter, and the API was quite rich.
Good luck.
There's no direct way to find topics that a user follows via the Twitter API, but in Twitter API v2, you can discover the topics that are associated with the Tweets a user is posting.
(note that, at the time of posting this answer, Tweepy does not yet support API v2, but they are working on that - there are some Python samples in the TwitterDev GitHub, which use standard Python libraries where possible to do the work)
The general steps here are:
have an approved Twitter developer account, and create a new project and app for access to the v2 API
call the user Tweet timeline endpoint for the user ID you are interested in, adding the context_annotations Tweet field to your request, so that each Tweet is returned with the topic information attached.
This will return data in a format something like this (using the #TwitterDev account as an example):
{
"data": [
{
"context_annotations": [
{
"domain": {
"description": "Top level interests and hobbies groupings, like Food or Travel",
"id": "65",
"name": "Interests and Hobbies Vertical"
},
"entity": {
"description": "Technology and computing",
"id": "848920371311001600",
"name": "Technology"
}
},
{
"domain": {
"description": "A grouping of interests and hobbies entities, like Novelty Food or Destinations",
"id": "66",
"name": "Interests and Hobbies Category"
},
"entity": {
"description": "Computer programming",
"id": "848921413196984320",
"name": "Computer programming"
}
}
],
"created_at": "2021-01-05T22:45:35.000Z",
"id": "1346588685555306497",
"text": "\ud83d\udd11\ud83d\udd11 On Tuesday, January 12th, we\u2019re removing the ability to view existing consumer API keys from the developer portal. Be sure to save your API keys in a secure place before Tuesday to ensure your access to the #TwitterAPI is not disrupted. Learn more https://twittercommunity.com/t/ability-to-view-existing-consumer-api-keys-being-removed-from-your-developer-dashboard/147849"
}
...
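Putting the steps above together, here is a minimal sketch (not tested against the live API): it calls the v2 user Tweet timeline with the `requests` library, since Tweepy does not support v2 yet, and tallies the topic names found in the annotations. The bearer token and user ID are placeholders you supply from your developer project.

```python
# Sketch: fetch recent Tweets with context_annotations and tally the
# entity (topic) names. fetch_timeline hits the live v2 endpoint;
# count_topics is pure logic over the response shape shown above.
from collections import Counter

import requests

def fetch_timeline(user_id, bearer_token, max_results=100):
    """GET /2/users/:id/tweets with the context_annotations Tweet field."""
    url = "https://api.twitter.com/2/users/%s/tweets" % user_id
    params = {"tweet.fields": "context_annotations", "max_results": max_results}
    headers = {"Authorization": "Bearer %s" % bearer_token}
    return requests.get(url, params=params, headers=headers).json()

def count_topics(timeline):
    """Tally entity names across each Tweet's context_annotations."""
    topics = Counter()
    for tweet in timeline.get("data", []):
        for annotation in tweet.get("context_annotations", []):
            topics[annotation["entity"]["name"]] += 1
    return topics
```

Paginating further back through the timeline (via the `pagination_token` parameter) would give a larger sample of Tweets, and therefore a more representative topic tally.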
A little more detail on the question -
Scenario
The app I'm working on currently performs the following -
Logs users in via Google OAuth (added to the Auth0 login)
Shows a list of Google Sheets with their links, which a logged-in user can open
When the user clicks a sheet's link to open it, they are redirected to a page where the sheet is expected to be displayed in an iframe.
The gspread module in Python retrieves the list of users the sheet has been shared with (the permission list); gspread is authenticated with a service account, which makes this possible. If the authenticated user is in the permission list, the iframe is displayed; otherwise, an error message is shown.
Now, the next requirement we'd like to achieve is for specific users of the site to be able to share the Google Sheet with other users, using the share method in the gspread module. However, we would like to share it only with users who have regular Google accounts, not Google Workspace accounts, owing to business requirements I'd prefer not to disclose at this point.
Is there a way to do this? I found something here - https://developers.google.com/admin-sdk/directory/v1/quickstart/python#configure_the_sample, but that only checks against users of the same workspace, and only if my service account belongs to the workspace's admin. What I need to know, in general, is whether a given account is a regular one or belongs to any organization's workspace.
The People API has a method called people.get. If I pass it me and check the person fields for memberships, I get the following.
Workspace domain account:
{
"resourceName": "people/106744391248434652261",
"etag": "%EgMBLjcaBAECBQciDFpMNzJsdkk3SG80PQ==",
"memberships": [
{
"metadata": {
"source": {
"type": "DOMAIN_PROFILE",
"id": "106744391248434652261"
}
},
"domainMembership": {
"inViewerDomain": true
}
}
]
}
Standard Gmail user:
{
"resourceName": "people/117200475532672775346",
"etag": "%EgMBLjcaBAECBQciDEdwc0JEdnJyNWRnPQ==",
"memberships": [
{
"metadata": {
"source": {
"type": "CONTACT",
"id": "3faa96eb08baa4be"
}
},
"contactGroupMembership": {
"contactGroupId": "myContacts",
"contactGroupResourceName": "contactGroups/myContacts"
}
}
]
}
So the answer is yes: you need to go through the Google People API. I don't have any Python examples for the People API on hand, but let me know if you can't get it working.
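As a rough sketch, the check itself reduces to scanning the memberships list for a DOMAIN_PROFILE source; the commented-out API wiring is an assumption based on google-api-python-client conventions and valid credentials, not tested code.

```python
# Sketch: decide whether a People API person record belongs to a
# Workspace domain. The logic matches the two sample responses above.
def classify_account(person):
    """Return 'workspace' if any membership source is DOMAIN_PROFILE,
    otherwise 'regular'."""
    for membership in person.get("memberships", []):
        source_type = membership.get("metadata", {}).get("source", {}).get("type")
        if source_type == "DOMAIN_PROFILE":
            return "workspace"
    return "regular"

# Example wiring (assumption, untested):
# from googleapiclient.discovery import build
# service = build("people", "v1", credentials=creds)
# person = service.people().get(resourceName="people/me",
#                               personFields="memberships").execute()
# print(classify_account(person))
```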
Sorry to bother you with a possibly stupid question, but I'm still a beginner with the Graph API. A little background to better understand my question: I need to run an analysis on a Facebook page (of which I'm not the owner but an administrator; small size, ~4000 likes and ~150 posts, roughly one per day). What I intended to do was the following:
Obtain the data through the Graph API. Namely, I was most interested in retrieving the message, number of likes, and reach of every post
Import the data in R and identify the outliers (I mean, the posts whose likes and reach are not in line with the mean)
Look for correlations between those messages (since the page, for its nature, needs to talk about a wide range of topics, I want to understand which of them generate the most reactions and plan accordingly)
I've already done an analysis "by hand", but I want to test if it is possible to make the same conclusions without involving a human operator.
I've looked on the web for tutorials on how to use the graph API in python, but I've not been able to find something comprehensive. I've set up my API and obtained the permanent page token with manage_pages and read_insights permissions.
Here's an idea of what I'm doing:
def get_facebook_page_data(page_id, access_token):
    website = "https://graph.facebook.com/v3.1/"
    location = "%s/posts/" % page_id
    # note the trailing comma, so the field list stays valid after concatenation
    fields = "?fields=message,id," + \
             "reactions.type(LIKE).limit(0).summary(total_count).as(reactions_like)"
    authentication = "&limit=100&access_token=%s" % access_token
    request_url = website + location + fields + authentication
    data = json.loads(request_data_from_url(request_url))
    return data
So, with this function I'm able to obtain the id, message and number of likes of all the posts stored inside data, and with another function I write everything on a csv file.
First question: am I doing something wrong?
Second question: I cannot retrieve a lot of information. For example, when adding type to the fields, it says that it is deprecated (I'm running Python 3.7.3).
Third question: how do I retrieve the reach of every post? I'm assuming this is obtained from the insights, but I don't seem to get it right... How do I query the Graph API for that data?
In general, I'm having a lot of trouble just getting the right keywords while building the links. I've installed facebook-sdk but I don't know how to use it (as I said, I'm a beginner). Do you have any suggestions?
Thanks very much to everyone answering, and greetings from Italy!
First of all, I suggest using the latest version of the API available, currently 5.0. Regarding your questions:
Second question: I cannot retrieve a lot of information. For example, when adding type to the fields, it says that it is deprecated (I'm running Python 3.7.3).
According to the Page Feed documentation, see the attachments field; for example, add this to the request:
attachments.fields(media_type)
Third question: how do I retrieve the reach of every post? I'm assuming this is obtained from the insights, but I don't seem to get it right... How do I query the Graph API for that data?
According to the Page Insights documentation, see the post impressions metrics; for example, to return post_impressions_unique for the lifetime period:
insights.period(lifetime).metric(post_impressions_unique)
A complete example:
https://graph.facebook.com/v3.1/<PAGE-ID>/posts?fields=message,id,reactions.type(LIKE).limit(0).summary(total_count).as(reactions_like),insights.period(lifetime).metric(post_impressions_unique),attachments.fields(media_type)
Will return:
{
"data": [{
"message": "Hello",
"id": "269816000129666_780829305694997",
"reactions_like": {
"data": [],
"summary": {
"total_count": 0
}
},
"insights": {
"data": [{
"name": "post_impressions_unique",
"period": "lifetime",
"values": [{
"value": 15
}],
"title": "Lifetime Post Total Reach",
"description": "Lifetime: The number of people who had your Page's post enter their screen. Posts include statuses, photos, links, videos and more. (Unique Users)",
"id": "269816000129666_780829305694997/insights/post_impressions_unique/lifetime"
}],
"paging": {
"previous": "https://graph.facebook.com/v3.1/269816000129666_780829305694997/insights?access_token=EAAAAKq6xRNcBAOMKY3StjWXPgL1REATIfPFsyZCY21KDAnZAZB7MpKgNGCHRlKVt9bZBoVZAHpV0jqxZAAVZCOKDIh96YxvpxPaavR1AYK5EQCEEOSMKqz4ZAItcX9WvVfEEN5FzqgyoQWi8oKZBQmQB4Nf80SgicaesluNbI0hDMw2QAxfV9rAFpRc10Pop1d1vtVeziPEjEKwZDZD&metric=post_impressions_unique&period=lifetime&since=1573891200&until=1574064000",
"next": "https://graph.facebook.com/v3.1/269816000129666_780829305694997/insights?access_token=EAAAAKq6xRNcBAOMKY3StjWXPgL1REATIfPFsyZCY21KDAnZAZB7MpKgNGCHRlKVt9bZBoVZAHpV0jqxZAAVZCOKDIh96YxvpxPaavR1AYK5EQCEEOSMKqz4ZAItcX9WvVfEEN5FzqgyoQWi8oKZBQmQB4Nf80SgicaesluNbI0hDMw2QAxfV9rAFpRc10Pop1d1vtVeziPEjEKwZDZD&metric=post_impressions_unique&period=lifetime&since=1574236800&until=1574409600"
}
},
"attachments": {
"data": [{
"media_type": "photo"
}]
}
},
{
"message": "Say hello!",
"id": "269816000129666_780826782361916",
"reactions_like": {
"data": [],
"summary": {
"total_count": 0
}
},
"insights": {
"data": [{
"name": "post_impressions_unique",
"period": "lifetime",
"values": [{
"value": 14
}],
"title": "Lifetime Post Total Reach",
"description": "Lifetime: The number of people who had your Page's post enter their screen. Posts include statuses, photos, links, videos and more. (Unique Users)",
"id": "269816000129666_780826782361916/insights/post_impressions_unique/lifetime"
}],
"paging": {
"previous": "https://graph.facebook.com/v3.1/269816000129666_780826782361916/insights?access_token=EAAAAKq6xRNcBAOMKY3StjWXPgL1REATIfPFsyZCY21KDAnZAZB7MpKgNGCHRlKVt9bZBoVZAHpV0jqxZAAVZCOKDIh96YxvpxPaavR1AYK5EQCEEOSMKqz4ZAItcX9WvVfEEN5FzqgyoQWi8oKZBQmQB4Nf80SgicaesluNbI0hDMw2QAxfV9rAFpRc10Pop1d1vtVeziPEjEKwZDZD&metric=post_impressions_unique&period=lifetime&since=1573891200&until=1574064000",
"next": "https://graph.facebook.com/v3.1/269816000129666_780826782361916/insights?access_token=EAAAAKq6xRNcBAOMKY3StjWXPgL1REATIfPFsyZCY21KDAnZAZB7MpKgNGCHRlKVt9bZBoVZAHpV0jqxZAAVZCOKDIh96YxvpxPaavR1AYK5EQCEEOSMKqz4ZAItcX9WvVfEEN5FzqgyoQWi8oKZBQmQB4Nf80SgicaesluNbI0hDMw2QAxfV9rAFpRc10Pop1d1vtVeziPEjEKwZDZD&metric=post_impressions_unique&period=lifetime&since=1574236800&until=1574409600"
}
},
"attachments": {
"data": [{
"media_type": "photo"
}]
}
},
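To reduce typos while experimenting with that field syntax, the request URL can be assembled programmatically. A small sketch, where the page ID and access token are placeholders and v5.0 is substituted per the advice above:

```python
# Sketch: assemble the Graph API posts request shown above with urllib,
# keeping each field expression on its own line for readability.
from urllib.parse import urlencode

def build_posts_url(page_id, access_token, version="v5.0"):
    fields = ",".join([
        "message",
        "id",
        "reactions.type(LIKE).limit(0).summary(total_count).as(reactions_like)",
        "insights.period(lifetime).metric(post_impressions_unique)",
        "attachments.fields(media_type)",
    ])
    query = urlencode({"fields": fields, "access_token": access_token})
    return "https://graph.facebook.com/%s/%s/posts?%s" % (version, page_id, query)
```

urlencode percent-encodes the parentheses and commas, which the Graph API accepts just as it does the literal characters.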
Is it possible to pull the personal information of new employees who get hired every day from DocuSign through the API? I'm trying to find a way to automate the user account creation process from DocuSign to Active Directory, avoiding a CSV file. I'm new to this; any input would be useful.
There are two parts to this endeavor.
The first is not related to DocuSign. You need to get an event fired every time a new contact is added to AD, and to be able to process this request. I assume you have a way to do that.
Then, the second part is using our REST API to add a new user.
Make a POST request to:
POST /v2.1/accounts/{accountId}/users
You pass this information in the request body:
{
"newUsers": [
{
"userName": "Claire Horace",
"email": "claire#example.com.com"
},
{
"userName": "Tal Mason",
"email": "tal#example.com.com",
"userSettings": [
{
"name": "canSendEnvelope",
"value": "true"
},
{
"name": "locale",
"value": "fr"
}
]
}
]
}
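A quick sketch of that call in Python with requests; the base URL, account ID, and OAuth access token are placeholders you obtain from your developer account (e.g. the demo environment), not values from this answer.

```python
# Sketch of the Users:create call above. new_users_body is pure logic;
# create_users performs the actual POST against a live account.
import requests

def new_users_body(users):
    """Wrap the user list in the request body shape DocuSign expects."""
    return {"newUsers": users}

def create_users(base_url, account_id, access_token, users):
    url = "%s/v2.1/accounts/%s/users" % (base_url, account_id)
    headers = {"Authorization": "Bearer %s" % access_token}
    return requests.post(url, json=new_users_body(users), headers=headers)
```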
I am working on a charity project at school in which the top 10 donors will be rewarded. The ultimate goal is to have a live feed of the top 10 list, like a scoreboard, either on our website or through periodic tweets. I am a second-year computer science major and know Python.
I don't think I'll have any issues parsing the JSON into a Python dictionary or list and then sorting the leaderboard. The problem is I don't know enough about web technologies to import the data using a webhook. I can see the data by testing transactions with https://requestb.in/, but I need a more permanent solution. I also need to be able to run all of this online and not on my computer.
I would really appreciate being pointed in the right direction.
Example transaction data seen on https://requestb.in/:
{
"date_created": "2013-12-16T16:15:23.514136",
"type": "payment.created",
"data": {
"action": "pay",
"actor": {
"about": "No Short Bio",
"date_joined": "2011-09-09T00:30:51",
"display_name": "Andrew Kortina",
"first_name": "Andrew",
"id": "711020519620608087",
"last_name": "Kortina",
"profile_picture_url": "",
"username": "kortina"
},
"amount": null,
"audience": "public",
"date_completed": "2013-12-16T16:20:00",
"date_created": "2013-12-16T16:20:00",
"id": "1312337325098795713",
"note": "jejkeljeljke",
"status": "settled",
"target": {
"email": null,
"phone": null,
"type": "user",
"user": {
"about": "No Short Bio",
"date_joined": "2011-09-09T00:30:54",
"display_name": "Shreyans Bhansali",
"first_name": "Shreyans",
"id": "711020544786432772",
"last_name": "Bhansali",
"profile_picture_url": "",
"username": "shreyans"
}
}
}
}
I see that your example JSON above is from https://developer.venmo.com/docs/webhooks
A webhook is basically just a URL that knows how to handle POST requests; when they want to notify your site/webapp, they call that URL and pass it the information they want you to receive.
The URL can be unencrypted (http) or encrypted (https); if you are dealing with financial info you definitely want it to be encrypted. Check your web host's instructions on setting up an SSL certificate.
The same page describes how to configure your webhook (log in to your Venmo account, go to the Developer tab, and enter your URL). For confirmation, Venmo will make a GET call (e.g. https://your_site/path/page?venmo_challenge=XYZZY); your page needs to return the challenge value (i.e. XYZZY).
I will suggest Flask as a simple Python framework and Heroku for hosting; there are many other alternatives, but this should get you started.
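As a rough sketch of what that Flask route could look like: the endpoint path and the in-memory totals dict are illustrative choices (a real deployment would persist totals in a database), and the payload fields follow the example transaction above.

```python
# Sketch of a Venmo webhook receiver in Flask. It echoes the GET
# confirmation challenge and tallies payments per sender username.
from flask import Flask, request

app = Flask(__name__)
donations = {}  # username -> running total

@app.route("/venmo-webhook", methods=["GET", "POST"])
def venmo_webhook():
    if request.method == "GET":
        # Venmo's confirmation call: echo the challenge value back.
        return request.args.get("venmo_challenge", "")
    event = request.get_json(force=True)
    if event.get("type") == "payment.created":
        actor = event["data"]["actor"]["username"]
        # amount can be null for payments with a private amount
        amount = event["data"].get("amount") or 0
        donations[actor] = donations.get(actor, 0) + amount
    return ("", 200)

def top_donors(totals, n=10):
    """Sort donors by total, descending, and keep the top n."""
    return sorted(totals.items(), key=lambda kv: kv[1], reverse=True)[:n]
```

On Heroku this would run behind gunicorn (e.g. `web: gunicorn app:app` in the Procfile), and the scoreboard page or periodic tweet would read from `top_donors`.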
I want to use the Python facebook-sdk library to retrieve the names of the persons who posted a message on a Facebook page I created.
This is an example, returned by the Graph API explorer:
{
"feed": {
"data": [
{
"from": {
"name": "Ralph Crützen",
"id": "440590514975673"
},
"message": "Nog een test.",
"created_time": "2015-10-17T19:33:30+0000",
"id": "649463214822976_649745285127205"
},
{
"from": {
"name": "Ralph Crützen",
"id": "440590514975673"
},
"message": "Testing!",
"created_time": "2015-10-16T20:44:17+0000",
"id": "649463214822976_649492455153388"
},
... etc ...
But when I use the following Python code...
graph = facebook.GraphAPI(page_access_token)
profile = graph.get_object('tinkerlicht')
posts = graph.get_connections(profile['id'], 'feed')
print(posts['data'][0]['message'])
print(posts['data'][0]['from']['name'])
...only the message value is printed. Printing the name of the person who posted the message gives the error:
print(posts['data'][0]['from']['name'])
KeyError: 'from'
At first, I thought that I needed the read_page_mailboxes permission. To use this permission, it has to be approved by Facebook, so I submitted a request. But Facebook replied:
"You don't need any additional permissions to post to Pages or blogs that you administer. You only need to submit your app for review if your app will use a public-facing login."
So what exactly is the reason I can't retrieve the from data from the messages feed? (While reading it from the Graph API explorer works fine...)
Btw, I'm using a page access token which never expires. I generated this token the way it's described here.