Rate limits on RSS feeds (YouTube) - Python

I am making a Discord bot in Python that pings me whenever the YouTubers I want to keep track of upload a video.
At first I thought of using the YouTube API, but I couldn't find anything for this specific use case. Then I found RSS feeds.
So my main question is, what are the rate limits on calling the URL (https://www.youtube.com/feeds/videos.xml?channel_id=xxx)?
The plan I have in mind is to make an HTTP request to that URL every x seconds, and whenever the bot sees an update, it pings me in my server.
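For reference, a minimal sketch of that polling plan, assuming the feedparser library; the channel ID, the poll interval, and the Discord notification are placeholders to fill in:

    import time
    import feedparser

    CHANNEL_ID = "xxx"  # placeholder, as in the URL above
    FEED_URL = "https://www.youtube.com/feeds/videos.xml?channel_id=" + CHANNEL_ID
    POLL_SECONDS = 300  # whatever interval turns out to be safe

    def latest_video_id(url):
        # YouTube channel feeds list entries newest-first
        feed = feedparser.parse(url)
        return feed.entries[0].id if feed.entries else None

    last_seen = latest_video_id(FEED_URL)
    while True:
        time.sleep(POLL_SECONDS)
        newest = latest_video_id(FEED_URL)
        if newest and newest != last_seen:
            last_seen = newest
            print("New upload detected - ping the Discord server here")

Whatever the limit turns out to be, you can also reduce load with conditional requests: feedparser.parse() accepts etag and modified arguments, so an unchanged feed comes back quickly with HTTP 304.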

Related

How can I quickly download and send audio from YouTube?

How can I quickly download audio from YouTube by URL or ID and send it to a Telegram bot? I've been using youtube-dl to download the audio, save it on my hosting, and then send it to the user. That takes 1-2 minutes. But other bots (like this one, #LyBot) do it at the speed of light. How do they do this?
As its documentation says: "I send audio instantly if it has already been downloaded through me earlier. Otherwise, the download usually takes no more than 10 seconds."
They probably store the file the first time it's downloaded by any user, so that it can be served instantly for subsequent requests.
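A sketch of that idea with youtube-dl (the library the question already uses); the cache directory and file naming are illustrative:

    import os
    import youtube_dl

    CACHE_DIR = "audio_cache"  # hypothetical cache location

    def get_audio(video_id):
        # Serve a cached copy instantly; download only on the first request
        os.makedirs(CACHE_DIR, exist_ok=True)
        path = os.path.join(CACHE_DIR, video_id + ".m4a")
        if os.path.exists(path):
            return path  # cache hit
        opts = {
            "format": "bestaudio[ext=m4a]",
            "outtmpl": path,
        }
        with youtube_dl.YoutubeDL(opts) as ydl:
            ydl.download(["https://www.youtube.com/watch?v=" + video_id])
        return path

A real bot would likely key persistent storage by video ID, or reuse Telegram's file_id, which lets a bot re-send a previously uploaded file without uploading it again; either way, the lookup-before-download shape is the same.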

How to send partial status of a request to the frontend with Django (Python)?

Suppose I send a POST request from React to a Django REST API, and that request takes a long time. How can I report what percentage of it has been processed to the frontend, without sending the real response?
There are two broad ways to approach this.
1. Break the request up (this is the one I would recommend starting with). The initial request doesn't start the work; it sends a message to an async task queue (such as Celery) to do the work. The response to the initial request is the ID of the Celery task that was spawned. The frontend can then use that task ID to poll the backend periodically, check whether the task is finished, and grab the results when they are ready (see the sketch after this list).
2. WebSockets, wherein the connection to the backend is kept open across many requests, and either side can initiate sending data. I wouldn't recommend this to start with, since it's not really how Django is built, but with a higher level of investment it will give an even smoother experience.
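A rough sketch of option 1, assuming Celery is already wired up; long_job, do_one_step, and the 100-step loop are illustrative names, not a real API:

    # tasks.py
    from celery import shared_task

    @shared_task(bind=True)
    def long_job(self, payload):
        total = 100  # illustrative number of work units
        for i in range(total):
            do_one_step(payload, i)  # hypothetical unit of work
            # record progress so a polling endpoint can report it
            self.update_state(state="PROGRESS", meta={"percent": i + 1})
        return {"percent": 100}

    # views.py
    from celery.result import AsyncResult
    from django.http import JsonResponse

    def start(request):
        task = long_job.delay(request.POST.dict())
        return JsonResponse({"task_id": task.id})  # frontend keeps this

    def status(request, task_id):
        res = AsyncResult(task_id)
        meta = res.info if isinstance(res.info, dict) else {}
        return JsonResponse({"state": res.state,
                             "percent": meta.get("percent", 0)})

React then polls the status endpoint with the task ID until the state is SUCCESS.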

Facebook Graph API request limit for a locally run program? How to get specific data in real time without hitting it?

I've been writing a Python program which needs the number of likes of a specific Facebook page in real time. The program itself works, but it's based on a loop that constantly requests the like count and updates a variable, and I'm afraid it will soon hit the API's request limit.
I read that the Graph API's request limit per user for an application is 200 requests per hour. Is a locally run program like this one considered an application with one user, or what is it considered?
I also read that some users say the API can handle 600 requests per 600 seconds without returning an error; does that still apply? (Could I, for example, delay the loop by one second and still make all the requests?) If not, is there a way to get that data in real time from a local program? (I saw that Graph can send you updates with a POST to a specified URL, but is there a way to receive those updates without owning a URL? Maybe a way to renew the token or something?) I need this program running for almost a whole day, so not getting rejected by the API is quite important.
Sorry if this sounds silly; it's the first time I'm using the Graph API (or any web-based API).
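Not an answer to the quota question itself, but whichever limit applies, the loop can be paced so it never exceeds one request per second (the 600-per-600-seconds figure above). A sketch assuming the facebook-sdk package; the token, the page ID, and the fan_count field (the Graph API's name for a page's like count) should all be verified against current docs:

    import time
    import facebook  # the facebook-sdk package; the question names no client

    ACCESS_TOKEN = "..."  # placeholder
    PAGE_ID = "..."       # placeholder

    graph = facebook.GraphAPI(access_token=ACCESS_TOKEN)
    MIN_INTERVAL = 1.0  # at most one request per second

    while True:
        started = time.monotonic()
        page = graph.get_object(id=PAGE_ID, fields="fan_count")
        likes = page["fan_count"]  # update the tracked variable here
        # sleep out the rest of the interval so the loop can run all day
        elapsed = time.monotonic() - started
        time.sleep(max(0.0, MIN_INTERVAL - elapsed))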

Real-time timeline function like TweetDeck?

I'm creating an app using Python/Tweepy.
I'm able to use StreamListener to get a real-time "timeline" when specifying a topic (#, $, etc.).
Is there a way to get a real-time, non-stop timeline for a given user ID, similar to TweetDeck or an embedded website widget?
When using api.user_timeline I only receive the 20 most recent tweets.
Any thoughts?
Tweepy is a Python library, so there's really no way to use it directly on a website. You could send the data from the stream to a browser using WebSockets or AJAX long polling, however.
There's no way to have the user stream endpoint send tweets to you any faster - all of that happens on Twitter's end. I'm not sure why TweetDeck would receive them faster, but there's nothing that can be done except to contact Twitter if you're seeing slow delivery.
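For completeness, a minimal sketch of a real-time per-user timeline with the tweepy 3.x streaming API (the era of this question); the credentials and the numeric user ID are placeholders:

    import tweepy

    auth = tweepy.OAuthHandler("CONSUMER_KEY", "CONSUMER_SECRET")
    auth.set_access_token("ACCESS_TOKEN", "ACCESS_SECRET")

    class TimelineListener(tweepy.StreamListener):
        def on_status(self, status):
            # each new tweet from the followed account arrives here live;
            # push it to the browser (WebSockets/long polling) from here
            print(status.user.screen_name, status.text)

    stream = tweepy.Stream(auth=auth, listener=TimelineListener())
    stream.filter(follow=["12345"])  # placeholder numeric user ID

filter(follow=...) delivers that account's new tweets as they happen, unlike api.user_timeline, which is a one-off REST call capped at the most recent tweets.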

How to subscribe to real-time XMPP RSS feeds with Superfeedr

I'm trying to subscribe to feeds with Superfeedr. I've got a Python wrapper for XMPP up and running, and I'm receiving dummy.xml successfully.
I don't quite understand how to add more sources, however. I've tried adding a few superfeedr.com/track/ URLs, but I get no new feeds from them (though I do seem to get a confirmation of subscription).
I'd like to add as many real-time (non-POLL) feeds as possible, perhaps by using PubSubHubbub servers.
I'd really appreciate some help with this. Where do I find such feeds? Can I subscribe to the whole superfeedr.com real-time feed just by adding /track/? Or will that only filter the feeds I'm already subscribing to? Also, since I'm subscribing from my XMPP.py client on my Amazon server, what exactly is my subscriber (callback) URL?
Where do I go from here?
I'll add more info if needed, just let me know.
Superfeedr is an API which will help you gather data from feeds that you're supposed to curate yourself, so the whole process starts with you collecting a list of feeds you want to subscribe to.
The Track API does not help you find feeds; rather, it helps you build virtual feeds that match a given criterion. For example, if you want any mention of 'stackoverflow' in any feed, you could use Track for that. Think of it as RSS feeds for search results, but in real time (forward-looking).
Finally, if you use XMPP, you don't need a callback URL; callback URLs are part of the PubSubHubbub API.
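As a rough sketch of the XMPP side with SleekXMPP: subscribing means sending a PubSub subscribe to Superfeedr's hub JID, with the feed URL as the node name. The hub JID firehoser.superfeedr.com and that node convention are my recollection of their XMPP API, so treat both as assumptions to check against their docs:

    import sleekxmpp

    class SuperfeedrClient(sleekxmpp.ClientXMPP):
        def __init__(self, jid, password, feed_url):
            super(SuperfeedrClient, self).__init__(jid, password)
            self.feed_url = feed_url
            self.register_plugin('xep_0060')  # PubSub
            self.add_event_handler('session_start', self.start)

        def start(self, event):
            self.send_presence()
            # assumption: Superfeedr addresses each feed as a PubSub node
            # named by the feed URL, on the hub firehoser.superfeedr.com
            self['xep_0060'].subscribe('firehoser.superfeedr.com',
                                       self.feed_url)

    client = SuperfeedrClient('you@superfeedr.com', 'password',
                              'http://example.com/feed.xml')
    if client.connect():
        client.process(block=True)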
