I have pulled data from a private Slack channel using conversations.history, but it returns user IDs instead of usernames. How can I change the code to pull the username so I can identify each user? Code below:
import slack_sdk
from time import sleep

CHANNEL = ""
MESSAGES_PER_PAGE = 200
MAX_MESSAGES = 1000
SLACK_TOKEN = ""

client = slack_sdk.WebClient(token=SLACK_TOKEN)

# get first page
page = 1
print("Retrieving page {}".format(page))
response = client.conversations_history(
    channel=CHANNEL,
    limit=MESSAGES_PER_PAGE,
)
assert response["ok"]
messages_all = response['messages']

# get additional pages while below the max message count and while there are more
while len(messages_all) + MESSAGES_PER_PAGE <= MAX_MESSAGES and response['has_more']:
    page += 1
    print("Retrieving page {}".format(page))
    sleep(1)  # need to wait 1 sec before the next call due to rate limits
    response = client.conversations_history(
        channel=CHANNEL,
        limit=MESSAGES_PER_PAGE,
        cursor=response['response_metadata']['next_cursor'],
    )
    assert response["ok"]
    messages = response['messages']
    messages_all = messages_all + messages
It isn't possible to change what the conversations.history method returns. If you'd like to convert user IDs to usernames, you'll need to either:
Call the users.info method for each ID and retrieve the username from the response,
or
Call the users.list method once, build a local copy (or store it in a database), and have your code look each name up there.
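For example, here is a minimal sketch of the users.info approach using slack_sdk's WebClient, with a small cache so each user ID is resolved only once; the display-name/real-name fallback order is my assumption:

user_names = {}  # cache: user ID -> name, so each ID triggers at most one API call

def get_user_name(client, user_id):
    if user_id not in user_names:
        info = client.users_info(user=user_id)  # one users.info call per unknown ID
        profile = info["user"]["profile"]
        # prefer the display name, fall back to the real name (fallback order is an assumption)
        user_names[user_id] = profile.get("display_name") or profile.get("real_name")
    return user_names[user_id]

for message in messages_all:
    if "user" in message:  # bot/system messages may not carry a user field
        print(get_user_name(client, message["user"]), message.get("text", ""))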
I'm trying to get the latest 100 posts from my Giphy user.
It works for accounts like "giphy" and "spongebob", but not for "jack0_o".
import requests

def get_user_gifs(username):
    api_key = "API_KEY"
    limit = 25  # the number of GIFs to retrieve per request (max 25)
    offset = 0
    # flag to indicate when all GIFs have been retrieved
    done = False
    # keep making requests until all GIFs have been retrieved
    while not done:
        # make the request to the Giphy API
        endpoint = f"https://api.giphy.com/v1/gifs/search?api_key={api_key}&q={username}&limit={limit}&offset={offset}&sort=recent"
        response = requests.get(endpoint)
        data = response.json()
        # extract the GIF URLs from the data and print them one per line
        for gif in data["data"]:
            print(gif["url"])
        # update the starting index for the next batch of GIFs
        offset += limit
        # check if there are more GIFs to retrieve
        if len(data["data"]) < limit or offset >= 100:
            done = True

get_user_gifs("spongebob")  # WORKS
get_user_gifs("jack0_o")    # does not work
I've already tried adding the rating parameter with "pg", "r", and "g".
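One thing worth ruling out is request encoding: building the URL with an f-string sends the query string exactly as typed, whereas letting requests assemble the parameters guarantees proper URL encoding and also surfaces HTTP errors. A minimal sketch of that variant (same endpoint and parameters as above, just passed via params):

import requests

def get_user_gifs(username, api_key="API_KEY"):
    # let requests build and URL-encode the query string instead of an f-string
    params = {
        "api_key": api_key,
        "q": username,
        "limit": 25,
        "offset": 0,
        "sort": "recent",
    }
    while True:
        response = requests.get("https://api.giphy.com/v1/gifs/search", params=params)
        response.raise_for_status()  # raise on HTTP errors instead of silently printing nothing
        data = response.json()
        for gif in data["data"]:
            print(gif["url"])
        params["offset"] += params["limit"]
        if len(data["data"]) < params["limit"] or params["offset"] >= 100:
            break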
I am simply trying to extract the followers of a Twitter profile using the following code. However, for some unknown reason, the id query parameter value is not valid. I have tried the user IDs of several Twitter accounts to see whether the problem lay with the id rather than with my code. The problem is in the code...
import requests
from time import sleep

def create_url_followers(max_results):
    # params based on the get_users_followers endpoint
    query_params_followers = {'max_results': max_results,  # the maximum number of results per page
                              'pagination_token': {}}      # used to request the next page of results
    return query_params_followers

def connect_to_endpoint_followers(url, headers, params):
    response = requests.request("GET", url, headers=headers, params=params)
    print("Endpoint Response Code: " + str(response.status_code))
    if response.status_code != 200:
        raise Exception(response.status_code, response.text)
    return response.json()

# inputs for the request
bearer_token = auth()  # retrieves the token from the environment
headers = create_headers(bearer_token)
max_results = 100  # number of results, i.e. followers of pol_id
To run the code in a for loop, I insert each id at its position in the URL and loop through the list of ids, appending the results to json_response_followers.
pol_ids_list = house["Uid"].astype(str).values.tolist()  # list of politician ids from column Uid in df house
json_response_followers = []  # empty list to append Twitter data to

for id in pol_ids_list:  # loop over ids in pol_ids_list
    url = (("https://api.twitter.com/2/users/" + id + "/followers"), create_url_followers(max_results))
    print(url)
    json_response_followers.append(connect_to_endpoint_followers(url[0], headers, url[1]))  # append data to json_response_followers
    sleep(60.5)  # sleep 60.5 seconds between calls to stay under the rate limit
I think the problem here could be that you're specifying pol_id as a parameter, which would be appended to the call. In the case of this API, you want to insert the value of pol_id at the point where :id is in the URL. The max_results and pagination_token values should be appended.
Try checking the value of url before calling connect_to_endpoint_followers.
I think you are currently trying to call
https://api.twitter.com/2/users/:id/followers?id=2891210047&max_results=100&pagination_token=
This is not valid, as there's no value for :id in the URI where it belongs, and the id parameter itself is not valid / is essentially a duplicate of what should be :id.
It should be:
https://api.twitter.com/2/users/2891210047/followers?max_results=100&pagination_token=
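A minimal sketch of what the corrected loop could look like, reusing the asker's helper names; leaving pagination_token out entirely until you have a real token is an assumption on my part:

for pol_id in pol_ids_list:
    # the id goes into the URL path where :id belongs, not into the query string
    url = "https://api.twitter.com/2/users/" + pol_id + "/followers"
    params = {"max_results": max_results}  # add "pagination_token" only once you have one
    print(url, params)  # verify the URL before calling, as suggested above
    json_response_followers.append(connect_to_endpoint_followers(url, headers, params))
    sleep(60.5)  # stay under the endpoint's rate limit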
The code below retrieves the latest email in the thread. How do I retrieve the latest 2 emails in the thread? Thanks in advance.
import base64
import email
import re

messages = service.users().threads().list(userId='me').execute().get('threads', [])
for message in messages:
    if search in message['snippet']:
        # add/modify the following lines:
        thread = service.users().threads().get(userId='me', id=message['id'], fields='messages(id,internalDate)').execute()
        last = len(thread['messages']) - 1
        message_id = thread['messages'][last]['id']
        # non-modified code:
        full_message = service.users().messages().get(userId='me', id=message_id, format="raw").execute()
        msg_str = base64.urlsafe_b64decode(full_message['raw'].encode('ASCII'))
        mime_msg = email.message_from_bytes(msg_str)
        y = re.findall(r'Delivered-To: (\S+)', str(mime_msg))
        print(y[0])
The line last = len(thread['messages']) - 1 specifies that you want to retrieve the last message in a thread.
Consequently, to retrieve the second-to-last message, you need to specify prelast = len(thread['messages']) - 2,
and respectively prelast_message_id = thread['messages'][prelast]['id'].
Now you can push both the last and prelast message IDs into an array and run your # non-modified code in a for loop over both message IDs.
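A minimal sketch of that loop, reusing the asker's variables (the [-2:] slice is my addition; it also handles threads that contain only one message):

thread = service.users().threads().get(userId='me', id=message['id'],
                                       fields='messages(id,internalDate)').execute()
# collect the IDs of the last two messages of the thread
message_ids = [m['id'] for m in thread['messages'][-2:]]

for message_id in message_ids:
    full_message = service.users().messages().get(userId='me', id=message_id, format="raw").execute()
    msg_str = base64.urlsafe_b64decode(full_message['raw'].encode('ASCII'))
    mime_msg = email.message_from_bytes(msg_str)
    y = re.findall(r'Delivered-To: (\S+)', str(mime_msg))
    print(y[0])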
I'm having fun with the FB Graph API collecting "reactions" until I hit the FB limit of 100. (Some of the posts I need to query have well over 1000 reactions.)
I do see the dictionary key "next" in the JSON response, which is a link to the next group; that group has a "next" key of its own, and so on. Below is a simplified version of what I have so far...
import requests

post_id_list = ['387990201279405_1155752427836508']  # short list for this example

def make_post_reaction_url_list(postid_list, APP_ID, APP_SECRET):
    '''constructs a list of FB urls to gather reactions to posts; limit set to 3'''
    post_id_list_queries = []
    for post_id in postid_list:
        post_id_list_queries.append("https://graph.facebook.com/v2.8/" + post_id +
                                    "?fields=reactions.limit(3)&access_token=" + APP_ID + "|" + APP_SECRET)
    return post_id_list_queries

post_id_reaction_query_list = make_post_reaction_url_list(post_id_list, APP_ID, APP_SECRET)

def return_json_from_fb_query(post_id_reaction_query_list):  # fixed typo in the parameter name
    list_o_json = []
    for target in post_id_reaction_query_list:
        t = requests.get(target)
        t_json = t.json()
        list_o_json.append(t_json)
    return list_o_json

list_o_json = return_json_from_fb_query(post_id_reaction_query_list)
list_o_json[0]['reactions']['data']            # gives me information for the response
list_o_json[0]['reactions']['paging']['next']  # returns an http link to the next set of reactions
Any suggestions on how I can collect the info, follow the "next" link, collect again, and so on to the end of the node?
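A minimal sketch of that follow-the-cursor loop, assuming the response shape shown above (reactions.data on the first page, then top-level data/paging on the pages the "next" URL returns):

import requests

def collect_all_reactions(first_page_json):
    """Follow 'paging' -> 'next' links until the reactions edge is exhausted."""
    reactions = list(first_page_json['reactions']['data'])
    next_url = first_page_json['reactions'].get('paging', {}).get('next')
    while next_url:
        page = requests.get(next_url).json()
        reactions.extend(page['data'])  # follow-up pages return data/paging at the top level
        next_url = page.get('paging', {}).get('next')
    return reactions

all_reactions = collect_all_reactions(list_o_json[0])
print(len(all_reactions))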
I'm using python-twitter in my Web Application to post tweets like this:
import twitter

twitter_api = twitter.Api(
    consumer_key="BlahBlahBlah",
    consumer_secret="BlahBlahBlah",
    access_token_key="BlahBlahBlah",
    access_token_secret="BlahBlahBlah",
)
twitter_api.PostUpdate("Hello World")
How do I retrieve all tweets posted to this account (including tweets that were previously posted to it from other Twitter clients)? I want to do this so that I can delete them all by calling twitter_api.DestroyStatus() on each tweet.
One approach could be like the following:
import twitter

api = twitter.Api(consumer_key='consumer_key',
                  consumer_secret='consumer_secret',
                  access_token_key='access_token',
                  access_token_secret='access_token_secret')

# get user data from credentials
user_data = api.VerifyCredentials()
user_id = int(user_data.id)  # int() here; long() is Python 2 only
max_status_id = 0

# repeat until all tweets are deleted
while True:
    # get 200 statuses per API call;
    # trim_user helps improve performance by reducing the size of the return value
    timeline_args = {'user_id': user_id, 'count': 200, 'trim_user': 'true'}
    # if not the first iteration, use the max_status_id seen so far
    if max_status_id != 0:
        timeline_args['max_id'] = max_status_id
    # get statuses from the user timeline
    statuses = api.GetUserTimeline(**timeline_args)
    # if no more tweets are left, break out of the loop
    if statuses is None or len(statuses) == 0:
        break
    for status in statuses:
        # remember the max_status_id seen so far
        max_status_id = int(status.id) - 1
        # delete the tweet with the current status id
        api.DestroyStatus(status.id)