I have these lines of code; they basically get the last 10-20 Tweets from a person:
import requests
import json
bearer_token='******'
userid=******
#url='https://api.twitter.com/2/users/by/username/{}'.format(username)
url = 'https://api.twitter.com/2/users/{}/tweets'.format(userid)
headers = {'Authorization': 'Bearer {}'.format(bearer_token)}
response = requests.get(url, headers=headers)
tweetsData = response.json()
print(json.dumps(tweetsData, indent=4, sort_keys=True))
I want to get only the last one. How can I do that? Please help.
According to the API documentation for this endpoint, the max_results option can be applied:
max_results
Specifies the number of Tweets to try and retrieve, up to a maximum of 100 per distinct request. By default, 10 results are returned if this parameter is not supplied. The minimum permitted value is 5. It is possible to receive less than the max_results per request throughout the pagination process.
So, by modifying your URL, you can retrieve a minimum of 5 Tweets:
url='https://api.twitter.com/2/users/{}/tweets?max_results=5'.format(userid)
If you only want to process or display a single one, you will need to do something with the value of tweetsData in order to truncate it to only a single Tweet.
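For example, a minimal sketch (assuming the standard v2 response shape, where "data" is a list of Tweet objects ordered newest first):

```python
def latest_tweet(tweets_data):
    """Return the most recent Tweet object from a v2 timeline
    response dict, or None if the response contains no Tweets."""
    data = tweets_data.get("data") or []
    return data[0] if data else None

# e.g.:  latest = latest_tweet(tweetsData)
```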
Related
I am using tweepy to gather data about users and the date their accounts were created.
When I add the "created_at" user field to the request via user_fields, the value is null for every user ID.
Even when I pass user_fields in the first Paginator request it doesn't work, and the same goes for the second request.
Here is my code:
screen_id = "71026122"
liste = []
for tweet in tweepy.Paginator(client.get_users_followers, id=screen_id, max_results=100, limit=1):
    for i in tweet.data:
        print(i['id'])
        liste.append(i['id'])
print(liste)
user_information = client.get_users(ids=liste, user_fields=["created_at"])
print(user_information)
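One thing worth checking (an assumption on my part, not something the question confirms): tweepy's User repr only shows id, name, and username, so printing the whole Response can make other fields look missing even when they were returned. Accessing the attribute explicitly avoids that; a small helper to illustrate:

```python
def describe_users(users):
    """Format 'id created_at' lines for the User objects in a
    get_users response's .data, using attribute access rather
    than relying on each object's repr."""
    return ["{} {}".format(u.id, u.created_at) for u in users]

# e.g.:  print("\n".join(describe_users(user_information.data)))
```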
I'm not sure why I am getting rate limited so quickly using:
mentions = []
for tweet in tweepy.Paginator(client.search_all_tweets, query="to:######## lang:nl -is:retweet",
                              start_time="2022-01-01T00:00:00Z", end_time="2022-05-31T00:00:00Z",
                              max_results=500).flatten(limit=10000):
    mention = tweet.text
    mentions.append(mention)
I suppose I could put time.sleep(1) after these lines, but then it would mean I could only process one Tweet every second, whereas with a regular client.search_all_tweets I would get 500 Tweets per request.
Is there anything I'm missing here? How can I process more than one Tweet a second using tweepy.Paginator?
BTW: I have academic access and know the rate limit documentation.
See the FAQ section about this in Tweepy's documentation:
Why am I getting rate-limited so quickly when using Client.search_all_tweets() with Paginator?
The GET /2/tweets/search/all Twitter API endpoint that Client.search_all_tweets() uses has an additional 1 request per second rate limit that is not handled by Paginator.
You can time.sleep() 1 second while iterating through responses to handle this rate limit.
See also the relevant Tweepy issues #1688 and #1871.
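In practice that means sleeping once per request rather than once per Tweet. A minimal sketch, where the page objects stand in for the responses you get by iterating tweepy.Paginator(client.search_all_tweets, ...) directly instead of calling .flatten():

```python
import time

def throttled_tweets(pages, per_request_sleep=1.0, sleep=time.sleep):
    """Yield Tweets page by page, sleeping once per request (page)
    so each request can still return up to 500 Tweets."""
    for page in pages:
        for tweet in (page.data or []):
            yield tweet
        sleep(per_request_sleep)  # honor the 1 request/second limit
```

With this shape you still make one request per page of up to 500 Tweets, so throughput is roughly 500 Tweets per second rather than one.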
Here is the code I am using from this link. I have updated the original code as I need the full .json object. But I am having a problem with pagination as I am not getting the full 3200 Tweets.
api = tweepy.API(auth, parser=tweepy.parsers.JSONParser(), wait_on_rate_limit=True)
jsonFile = open(path + filname + '.json', "a+", encoding='utf-8')
page = 1
max_pages = 3200
result_limit = 2
last_tweet_id = False
while page <= max_pages:
    if last_tweet_id:
        tweet = api.user_timeline(screen_name=user,
                                  count=result_limit,
                                  max_id=last_tweet_id - 1,
                                  tweet_mode='extended',
                                  include_retweets=True
                                  )
    else:
        tweet = api.user_timeline(screen_name=user,
                                  count=result_limit,
                                  tweet_mode='extended',
                                  include_retweets=True)
    json_str = json.dumps(tweet, ensure_ascii=False, indent=4)
As per the author, "result_limit and max_pages are multiplied together to get the number of tweets called."
By that definition, shouldn't I get 6400 Tweets? Instead, the problem is that I am getting the same 2 Tweets 3200 times. I also updated the values to
max_pages=3200
result_limit=5000
You could call that an absurdly high limit, so I should at least get 3200 Tweets. But in this case I got 200 Tweets repeated many times (at which point I terminated the code).
I just want 3200 Tweets per user profile, nothing fancy. Consider that I have a list of 100 users, so I want this done efficiently. Currently it seems like I am just sending many requests and wasting time and resources.
Even if I update the code with a smaller value of max_pages, I am still not sure what that value should be. How am I supposed to know how many Tweets one page covers?
Note: the answer suggested elsewhere is not useful, as it errors at .item(), so please don't mark this as a duplicate.
You don't change last_tweet_id after setting it to False, so only the code in the else block is executing. None of the parameters in that method call change while looping, so you're making the same request and receiving the same response back over and over again.
Also, neither page nor max_pages changes within your loop, so this will loop infinitely.
I would recommend looking into using tweepy.Cursor instead, as it handles pagination for you.
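For illustration, here is a sketch of what the corrected loop structure looks like. get_page is a hypothetical stand-in for api.user_timeline, so the cursor-advancing logic can be shown on its own:

```python
def fetch_timeline(get_page, result_limit=200, max_tweets=3200):
    """Pull a timeline in batches, advancing max_id after every
    request and stopping on an empty page or at max_tweets."""
    tweets = []
    last_tweet_id = None
    while len(tweets) < max_tweets:
        max_id = None if last_tweet_id is None else last_tweet_id - 1
        page = get_page(count=result_limit, max_id=max_id)
        if not page:
            break  # no older Tweets left
        tweets.extend(page)
        last_tweet_id = page[-1]['id']  # advance the cursor each pass
    return tweets[:max_tweets]
```

The key differences from the question's code are that last_tweet_id is updated on every iteration and the loop has a real exit condition.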
I am trying to fetch a subscription list according to the Subscriptions: list documentation. I want to get all my subscribers so I am using mySubscribers=True in the parameter list in a loop after my first request.
while "nextPageToken" in my_dict:
    next_page_token = my_dict["nextPageToken"]
    my_dict = subscriptions_list_by_channel_id(client,
                                               part='snippet,contentDetails',
                                               mySubscribers=True,
                                               maxResults=50,
                                               pageToken=next_page_token
                                               )
    for item in my_dict["items"]:
        file.write("{}\n".format(item["snippet"]["channelId"]))
The problem is that at page 20 my loop breaks, i.e. I don't receive a nextPageToken key in the response, capping my data at 1000 subscribers fetched in total. But I have more than 1000 subs. The documentation states that myRecentSubscribers has a limit of 1000 but that mySubscribers does not.
I can't really find much help with this anywhere. Can anyone shed some light on my situation?
I chose to list channels instead of listing subscriptions, passing the same mySubscribers argument. The documentation says it's deprecated, and it returns results in a strange order with duplicates, but it does not have a limit.
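The loop itself is the same for either endpoint: keep requesting until the response stops including nextPageToken. A sketch, where list_page is a hypothetical stand-in for the actual API call (e.g. a channels.list request with mySubscribers=True and the pageToken filled in):

```python
def iter_all_items(list_page):
    """Yield every item across all pages, following nextPageToken
    until the API stops returning one."""
    token = None
    while True:
        resp = list_page(token)
        for item in resp.get("items", []):
            yield item
        token = resp.get("nextPageToken")
        if not token:
            return
```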
Twitter only returns 100 tweets per "page" when returning search results on the API. They provide the max_id and since_id in the returned search_metadata that can be used as parameters to get earlier/later tweets.
Twython 3.1.2 documentation suggests that this pattern is the "old way" to search:
results = twitter.search(q="xbox", count=423, max_id=421482533256044543)
for tweet in results['statuses']:
    ... do something
and that this is the "new way":
results = twitter.cursor(twitter.search, q='xbox', count=375)
for tweet in results:
    ... do something
When I do the latter, it appears to endlessly iterate over the same search results. I'm trying to push them to a CSV file, but it pushes a ton of duplicates.
What is the proper way to search for a large number of tweets, with Twython, and iterate through the set of unique results?
Edit: Another issue here is that when I try to iterate with the generator (for tweet in results:), it loops repeatedly, without stopping. Ah -- this is a bug... https://github.com/ryanmcgrath/twython/issues/300
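Until the bug is fixed, one workaround (a sketch of my own, not an official Twython API) is to stop as soon as a Tweet id repeats: results arrive newest first, so a repeated id means the cursor has started looping.

```python
def stop_on_repeat(results):
    """Yield unique Tweets from a cursor, breaking out of the
    iteration the first time an id repeats."""
    seen = set()
    for tweet in results:
        if tweet['id'] in seen:
            break  # the cursor has wrapped around
        seen.add(tweet['id'])
        yield tweet
```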
I had the same problem, but it seems that you should just loop through a user's timeline in batches using the max_id parameter. The batches should be 100 per Terence's answer (though for user_timeline the max count is actually 200); just set max_id to the last id in the previous set of returned Tweets, minus one (because max_id is inclusive). Here's the code:
'''
Get all tweets from a given user.
Batch size of 200 is the max for user_timeline.
'''
from twython import Twython, TwythonError

tweets = []
# Requires authentication as of Twitter API v1.1
twitter = Twython(PUT YOUR TWITTER KEYS HERE!)
try:
    user_timeline = twitter.get_user_timeline(screen_name='eugenebann', count=200)
except TwythonError as e:
    print(e)
print(len(user_timeline))
for tweet in user_timeline:
    # Add whatever you want from the tweet; here we just add the text
    tweets.append(tweet['text'])
# Count could be less than 200, see:
# https://dev.twitter.com/discussions/7513
while len(user_timeline) != 0:
    try:
        user_timeline = twitter.get_user_timeline(screen_name='eugenebann', count=200, max_id=user_timeline[-1]['id'] - 1)
    except TwythonError as e:
        print(e)
    print(len(user_timeline))
    for tweet in user_timeline:
        # Add whatever you want from the tweet; here we just add the text
        tweets.append(tweet['text'])
# Number of tweets the user has made
print(len(tweets))
As per the official Twitter API documentation:
Count optional
The number of tweets to return per page, up to a maximum of 100
You need to make repeated calls to the Python method. However, there is no guarantee that these will be the next N Tweets, and if Tweets are coming in quickly it might miss some.
If you want all the tweets in a time frame you can use the streaming api: https://dev.twitter.com/docs/streaming-apis and combine this with the oauth2 module.
How can I consume tweets from Twitter's streaming api and store them in mongodb
python-twitter streaming api support/example
Disclaimer: I have not actually tried this.
As a solution to the problem of returning 100 tweets for a search query using Twython, here is the link showing how it can be done using the "old way":
Twython search API with next_results