I'm using Python 3.7 with searchtweets to scrape some basic data from the Twitter API, and I want to be able to suspend the process when the rate limit has been hit. I currently only have access to the free packages (sandbox and the limited premium). I was able to do this with Tweepy and the sandbox package, which doesn't allow historical searches.
I have moved on to searchtweets to try to get more data via historical tweets, but it does not seem to expose the rate_limit_status that was available in the sandbox package (open issue).
I have tried the suggestion in the response to that issue, but it still returns a 429 error code.
Does anyone know how to access the rate_limit_status in searchtweets, or is there any other way to determine how many requests I have remaining?
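For concreteness, this is the kind of backoff wrapper I have been experimenting with. It is only a rough sketch: I am assuming searchtweets lets the underlying requests HTTPError bubble up on a 429, and the 15-minute wait is just a guess at the reset window, not something read from any rate_limit_status.
import time
from requests.exceptions import HTTPError
from searchtweets import load_credentials, gen_rule_payload, collect_results

# premium credentials loaded from the usual YAML file
premium_args = load_credentials("twitter_keys.yaml",
                                yaml_key="search_tweets_api",
                                env_overwrite=False)

rule = gen_rule_payload("from:twitterdev", results_per_call=100)

def collect_with_backoff(rule, wait=15 * 60, retries=3):
    # sleep and retry whenever the endpoint answers with a 429
    for attempt in range(retries):
        try:
            return collect_results(rule, max_results=500,
                                   result_stream_args=premium_args)
        except HTTPError as err:
            if err.response is not None and err.response.status_code == 429:
                time.sleep(wait)  # back off until the window (hopefully) resets
            else:
                raise
    raise RuntimeError("still rate limited after %d retries" % retries)

tweets = collect_with_backoff(rule)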
Google has a Google Workspace Status Dashboard where they indicate whether any of their core services are experiencing an outage or not.
Accordingly, I would like to be able to fetch the status for a particular service. For example, I would like to check whether Gmail has one of the following statuses:
Available
Service disruption
Service outage
I would like to make an API call in Python that would retrieve the status and allow me to perform an action according to the current status.
Is there a way I can achieve this?
I found some documentation here but I'm still trying to figure out how I can do it.
The problem with the API is that it does not work directly with the Dashboard; instead, it works with information from the Google Workspace Alert Center. That means you need to set up an alert first and then pull the data for that specific alert via the API. The alert will only be triggered when a service disruption or outage is reported in the Dashboard, so it will not show any data about a service being Available, only when there is an outage.
As mentioned by Bijay Regmi and the official documentation, I think the best option would be to subscribe to the Status Dashboard RSS or use the JSON feeds.
With Python you could also build an RSS reader to pull that information in a cleaner way, and you can use this other Stack Overflow post as a reference on how to build it.
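As a rough sketch of the JSON-feed approach (the incidents URL and the field names below are assumptions based on inspecting the current feed, so double-check them before relying on this):
import requests

# JSON feed behind the Workspace Status Dashboard -- assumed URL, please verify
FEED_URL = "https://www.google.com/appsstatus/dashboard/incidents.json"

def gmail_status():
    # an incident with no "end" timestamp is treated as still ongoing
    incidents = requests.get(FEED_URL, timeout=10).json()
    for incident in incidents:
        if incident.get("service_name") == "Gmail" and not incident.get("end"):
            return "Service disruption or outage"
    return "Available"

print(gmail_status())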
I have a small script making a request to http://translate.google.com/m?hl=%s&sl=%s&q=%s as my base_link; however, I seem to have surpassed the request limit:
HTTPError: HTTP Error 429: Too Many Requests
I am looking for what the limit is for Google Translate. The answers out there point to the Google Translate API limits, but not the limit when hitting the endpoint directly with Python's requests module.
Let me know if I can provide some more information.
The request you are submitting using the URL http://translate.google.com/m?hl=%s&sl=%s&q=%s uses the free service via http://translate.google.com. The limits you have been seeing online relate to the Google Translate API, which is part of Google Cloud Platform. These two are completely different services.
Currently, there is no public document discussing the limits of the free service used by googletrans. However, Google will block your IP address once its systems detect that you have been exploiting the free translation service by submitting huge numbers of requests, resulting in
HTTPError: HTTP Error 429: Too Many Requests
You can use Google Cloud Platform's Translation API, which is a paid service. Using it, you can avoid the low limits imposed on the free endpoint used by googletrans, giving you much more flexibility in the number of requests you can submit.
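For reference, here is a minimal sketch using the official google-cloud-translate client. It assumes you have a Google Cloud project with billing and the Cloud Translation API enabled, and that GOOGLE_APPLICATION_CREDENTIALS points at a service account key:
# pip install google-cloud-translate
from google.cloud import translate_v2 as translate

client = translate.Client()  # picks up GOOGLE_APPLICATION_CREDENTIALS

result = client.translate("Hello, world", target_language="es")
print(result["translatedText"])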
According to this answer, the content quota on the API is 6,000,000 characters per 60 seconds, per project and user. It is therefore safe to assume that Google protects the private endpoint with a much lower limit. Google will not release the numbers for that private endpoint, because it is only supposed to be used by the website itself, so they have no reason to publish them. If you need a more specific answer, I suggest you look for accurate API limits and assume the private limit is much lower than whatever number you find.
I'm not sure, but according to this source it seems to be 500,000 characters for free users.
We're building a simple script using the Tweepy Python library. We need to get some simple information on all of the accounts following our Twitter account.
Using Tweepy's built-in Cursor this is quite easy, but we very rapidly hit the 15-request limit for the window. I've read that per-app (as opposed to per-user) authentication allows for 300 requests per window, but I can't find anything in the Tweepy API that supports this.
Any pointers?
You should be able to use tweepy.AppAuthHandler in the same way that you're using OAuthHandler.
import tweepy

auth = tweepy.AppAuthHandler(consumer_key, consumer_secret)  # app-only auth
api = tweepy.API(auth)
For some reason, this isn't documented, but you can take a look at the code yourself on GitHub.
It also depends on what kinds of requests you're making. There are no resources with a 15-request user limit that have a 300-request app limit. You can consult this chart to determine the limits for user and app auth for each endpoint. In any case, using them in conjunction will at least double your requests.
I'm not sure what information about your followers you are after exactly, but it might be part of the followers/list request. If so, you can set the count field to 200, which is the maximum value (the default is only 20). That will save you some requests.
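For example, with Tweepy 3.x (where api.followers maps to the followers/list endpoint), something like this sketch pages through followers 200 at a time under app-only auth; the credentials and screen name are placeholders:
import tweepy

auth = tweepy.AppAuthHandler(consumer_key, consumer_secret)  # placeholders
api = tweepy.API(auth, wait_on_rate_limit=True)  # sleep automatically on limits

# count=200 is the maximum page size for followers/list (the default is 20)
for follower in tweepy.Cursor(api.followers, screen_name="your_account",
                              count=200).items():
    print(follower.screen_name, follower.followers_count)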
I'm creating an app using python/tweepy
I'm able to use the StreamListener to get a real-time "timeline" when indicating a "topic" or #, $, etc.
Is there a way to have a real-time timeline, similar to TweetDeck or an embedded widget for a website, for a given user ID? Non-stop.
When using api.user_timeline I only receive the 20 most recent tweets.
Any thoughts?
Tweepy is a Python library, so there's really no way to use it directly on a website. You could send the data from the stream to a browser using WebSockets or AJAX long polling, however.
There's no way to have the user stream endpoint send tweets to you any faster - all of that happens on Twitter's end. I'm not sure why TweetDeck would receive them faster, but there's nothing that can be done except to contact Twitter if you're getting a slow feed.
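If all you need is the account's own tweets in near real time (rather than its full home timeline), one Tweepy 3.x approach is to filter the public stream by the account's numeric user ID. A minimal sketch, with placeholder credentials:
import tweepy

class UserListener(tweepy.StreamListener):
    def on_status(self, status):
        # push to a WebSocket, a queue, etc. -- here we just print
        print(status.user.screen_name, status.text)

    def on_error(self, status_code):
        # returning False disconnects the stream, e.g. on a 420 rate limit
        return status_code != 420

auth = tweepy.OAuthHandler(consumer_key, consumer_secret)  # placeholders
auth.set_access_token(access_token, access_token_secret)

stream = tweepy.Stream(auth=auth, listener=UserListener())
# follow takes string user IDs, not screen names; 783214 is @Twitter's ID
stream.filter(follow=["783214"])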
I am trying to build an app where users will be able to connect to my app, enter a keyword to search on Twitter, and then have the results stored in a database. From the moment the user enters a keyword I want to keep track of what is being said on Twitter. Those results will later be analyzed and some statistics presented to the user.
So far I have used Tweepy and the Twitter Streaming API for getting the tweets. But I realized that I cannot have more than one open streaming connection (for searching in parallel for multiple keywords).
I searched Stack Overflow and found solutions like disconnecting, reconnecting, and then searching with a new keyword, but in that case I am going to lose data.
I also checked the Twitter REST API, which gives you at most 450 requests/15 min:
https://dev.twitter.com/docs/rate-limiting/1.1/limits
Stream API:
- the public stream doesn't give you the opportunity to have more than one connection
- the site stream doesn't give you the opportunity to search
The Firehose API is not an option since it is too expensive.
How can I solve this problem? I have seen many apps that search live for more than one keyword. Has anyone dealt with this before?
You could use Tweepy to collect all tweets from the sample or filter streaming endpoint and save them to a database. Then use the database to return only the tweets matching your search term.
If you don't want tweets to persist for too long, you might get better results using a NoSQL database like Redis with an expiration timestamp, so it doesn't fill up indefinitely.
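A rough sketch of that idea with Tweepy 3.x and redis-py: track all keywords on a single stream, store each tweet in Redis with an expiry, and index tweet IDs per keyword so they can be queried later. The keywords, TTL, connection settings and credentials are placeholders.
import json
import redis
import tweepy

KEYWORDS = ["python", "django"]  # everything tracked on one connection
TTL = 60 * 60 * 24               # keep each tweet for 24 hours

r = redis.Redis(host="localhost", port=6379)

class KeywordListener(tweepy.StreamListener):
    def on_status(self, status):
        # store the raw tweet with an expiration so Redis doesn't grow forever
        r.setex("tweet:%s" % status.id_str, TTL, json.dumps(status._json))
        text = status.text.lower()
        for kw in KEYWORDS:
            if kw in text:
                # per-keyword index of tweet IDs (these sets are not expired here)
                r.sadd("keyword:%s" % kw, status.id_str)

auth = tweepy.OAuthHandler(consumer_key, consumer_secret)  # placeholders
auth.set_access_token(access_token, access_token_secret)

stream = tweepy.Stream(auth=auth, listener=KeywordListener())
stream.filter(track=KEYWORDS)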