I'm creating an app using python/tweepy
I'm able to use StreamListener to get a real-time "timeline" when filtering on a topic (a keyword, #hashtag, $symbol, etc.).
Is there a way to get a non-stop, real-time timeline for a specific user ID, similar to TweetDeck or an embedded website widget?
When using api.user_timeline I only receive the 20 most recent tweets.
Any thoughts?
Tweepy is a Python library, so there's really no way to use it directly on a website. You could, however, send the data from the stream to a browser using WebSockets or AJAX long polling.
There's no way to make the user stream endpoint send tweets to you any faster - all of that happens on Twitter's end. I'm not sure why TweetDeck would receive them faster, but if you're seeing a slow feed there's nothing that can be done except to contact Twitter.
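To follow a single account continuously, you can pass the user's ID to the stream's follow parameter. A minimal sketch using the tweepy 3.x StreamListener API the question mentions (the keys and the ID below are placeholders):

    import tweepy

    auth = tweepy.OAuthHandler("CONSUMER_KEY", "CONSUMER_SECRET")
    auth.set_access_token("ACCESS_TOKEN", "ACCESS_TOKEN_SECRET")

    class UserListener(tweepy.StreamListener):
        def on_status(self, status):
            # Called once for every new tweet from the followed account
            print(status.text)

    stream = tweepy.Stream(auth=auth, listener=UserListener())
    # follow takes user ID strings, not screen names
    stream.filter(follow=["783214"])  # example ID only

Unlike api.user_timeline, this keeps the connection open and delivers new tweets as they are posted, which is the closest you can get to TweetDeck-style behavior from Python.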
Maybe it is simple, but I haven't found a satisfactory answer yet.
I have a Python application that collects data over a CAN bus (temperature, weight, ...) and I want to visualize it with Angular.
On the one side, I wrote a Python application that cyclically reads the CAN-bus data and writes it to the console; on the other, I wrote a small Angular application that, as a first step, contains a simple table.
Now I want to fill the table every 10 seconds with data from the Python application instead of printing it to the console.
How can I connect the two?
My first thought was a simple file where Python saves the values and Angular reads them.
The second option is a database, but that seems like overkill for only a few values.
So is there a direct way to access the Python data from Angular?
The basic idea is to create an API in Python and let Angular consume it.
Then there is the question of whether you want to keep backup data on the Python side;
if so, save it to a database or file and use that as the response for Angular.
If you want to do some fancy real-time stuff, look into long polling or an HTTP event stream. A sketch of the basic API idea follows.
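A minimal sketch of such an API with Flask (the endpoint path, port, and the latest_values dict are assumptions; in a real setup a background thread running your CAN-bus loop would keep latest_values current):

    from flask import Flask, jsonify

    app = Flask(__name__)

    # Hypothetical shared state, updated by your CAN-bus reader
    latest_values = {"temperature": 21.5, "weight": 40.0}

    @app.route("/api/values")
    def get_values():
        # Angular's HttpClient can poll this endpoint every 10 seconds
        return jsonify(latest_values)

    if __name__ == "__main__":
        app.run(port=5000)

Since Angular's dev server runs on a different port, you will likely also need to allow cross-origin requests (for example with the flask-cors package).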
There are several ways you can access Python data from an Angular application:
- One way is to use a REST API. You can create a REST API in Python using a web framework like Flask or Django, and then use Angular's HTTP client to make requests to the API and retrieve the data.
- Another option is to use WebSockets. You can use a Python library like asyncio or websockets to set up a WebSocket server, and then use Angular's WebSocket client to connect to the server and receive updates in real time.
- You can also use a message queue like RabbitMQ or ZeroMQ to allow your Python and Angular applications to communicate with each other asynchronously.
Overall, the best approach will depend on your specific requirements and how you want to structure your application. A REST API is a good choice if you need to retrieve data from the Python application on demand, while WebSockets or a message queue can be used for real-time communication and updates.
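For the WebSocket option, here is a minimal server sketch with the websockets package (read_values is a hypothetical stand-in for your CAN-bus code; the single-argument handler signature matches websockets 10 and later):

    import asyncio
    import json
    import websockets

    def read_values():
        # Hypothetical stand-in for your CAN-bus read
        return {"temperature": 21.5, "weight": 40.0}

    async def push_values(websocket):
        # Push a fresh reading to the connected client every 10 seconds
        while True:
            await websocket.send(json.dumps(read_values()))
            await asyncio.sleep(10)

    async def main():
        async with websockets.serve(push_values, "localhost", 8765):
            await asyncio.Future()  # run forever

    asyncio.run(main())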
I've been writing a Python program that needs the number of likes of a specific Facebook page in real time. The program itself works, but it's based on a loop that constantly requests the like count and updates a variable, and I'm afraid that this way it will quickly hit the API's request limit.
I read that the Graph API's request limit per user for an application is 200 requests per hour. Is a locally run program like this one considered an application with one user, or how is it classified?
Also, I read that some users say the API can handle 600 requests per 600 seconds without returning an error; does this still apply? (Could I, for example, delay the loop by one second and still be able to make all the requests?) If not, is there a way to get that info in real time in a local program? (I saw that Graph can send you updates with a POST to a specified URL, but is there a way to receive those updates without owning a URL? Maybe a way to renew the token or something?) I need to have this program running for almost a whole day, so not being rejected by the API is quite important.
Sorry if it sounds silly or anything, this is the first time I'm using the Graph API (and a web-based API in general).
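For reference, the delayed loop I have in mind would look roughly like this (the page ID, token, and the fan_count field name are placeholders for whatever my real request uses; 3600/200 spaces the requests to stay under the 200-per-hour figure):

    import time
    import requests

    PAGE_ID = "your_page_id"        # placeholder
    ACCESS_TOKEN = "your_token"     # placeholder
    INTERVAL = 3600 / 200           # 200 requests/hour -> one every 18 seconds

    while True:
        resp = requests.get(
            "https://graph.facebook.com/%s" % PAGE_ID,
            params={"fields": "fan_count", "access_token": ACCESS_TOKEN},
        ).json()
        print(resp.get("fan_count"))
        time.sleep(INTERVAL)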
I want to get all messages of a Facebook conversation. This conversation has around 60,000 messages and I want to get all of them. Is it possible to get all of them in one go, or do I have to make multiple requests with pagination?
The Thread documentation is hardly any help. I registered an app and am using the access token in the Graph API Explorer. Even if I change the limit to 25000, I still only get a handful of messages.
t_id.<thread_id>/messages?limit=25000
So what's the best way to get all messages?
I will be using this in my tornado app. The call would be something like this:
self.facebook_request("/53......13", access_token=self.get_secure_cookie('access_token'), callback=self.async_callback(self._after_messages))
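If pagination turns out to be the only way, I assume the pattern is to follow the paging.next URL that each Graph response includes until it disappears (the tiny result I get despite limit=25000 suggests Graph caps the effective page size). A synchronous sketch with requests, for illustration only:

    import requests

    def fetch_all_messages(thread_id, access_token):
        # Walk the cursors until the thread is exhausted (illustrative only)
        url = "https://graph.facebook.com/%s/messages" % thread_id
        params = {"access_token": access_token, "limit": 500}
        messages = []
        while url:
            page = requests.get(url, params=params).json()
            messages.extend(page.get("data", []))
            # paging.next is a complete URL, query string included
            url = page.get("paging", {}).get("next")
            params = None
        return messages

In the tornado app this would presumably become a chain of facebook_request callbacks, each passing the cursor from the previous response into the next call.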
I am trying to build an app where users connect, enter a keyword to search for on Twitter, and the results are stored in a database. From the moment the user enters a keyword I want to keep track of what is being said on Twitter. Those results will be further analyzed and some statistics will be presented to the user.
So far I have used tweepy and the Twitter streaming API to get the tweets. But I realized that I cannot have more than one open streaming connection (to search for multiple keywords in parallel).
I searched Stack Overflow and found solutions like disconnecting and reconnecting with a new keyword, but in that case I would lose data.
I also checked the Twitter REST API, which allows at most 450 requests per 15 minutes:
https://dev.twitter.com/docs/rate-limiting/1.1/limits
Stream API:
- the public stream doesn't give you the opportunity to have more than one connection
- the site stream doesn't give you the opportunity to search
The Firehose API is not an option since it is too expensive.
How can I solve this problem? I see many apps searching live for more than one keyword at a time. Has anyone dealt with this before?
You could use tweepy to collect all tweets from the sample or filter streaming endpoint and save them to a database. Then query the database to return only the tweets that match each search term.
If you don't want tweets to persist for too long, you might have better results using a NoSQL store like Redis with an expiration timestamp, so the database doesn't fill up indefinitely.
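A sketch of that idea, pairing tweepy's streaming API with Redis SETEX so each stored tweet expires on its own (tweepy 4.x Stream API shown; the key scheme and one-hour TTL are arbitrary choices):

    import redis
    import tweepy

    r = redis.Redis()
    TTL = 3600  # keep each tweet for one hour

    class TweetArchiver(tweepy.Stream):
        def on_status(self, status):
            # Store the text under the tweet ID; Redis drops it after TTL seconds
            r.setex("tweet:%s" % status.id, TTL, status.text)

    stream = TweetArchiver(
        "CONSUMER_KEY", "CONSUMER_SECRET",          # placeholders
        "ACCESS_TOKEN", "ACCESS_TOKEN_SECRET",
    )
    stream.filter(track=["keyword1", "keyword2"])   # one connection, many keywords

Note that a single filter connection can track many keywords at once, which is usually how apps appear to run several live searches in parallel.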
I'm trying to subscribe to feeds with Superfeedr. I have a Python wrapper for XMPP up and running, and I'm receiving the dummy.xml successfully.
I don't quite understand how to add more sources, however. I've tried adding a few superfeedr.com/track/ feeds, but I get no new entries from them (though I do seem to get a confirmation of subscription).
I'd like to add as many real-time (non-polled) feeds as possible, perhaps by using PubSubHubbub servers.
I'd really appreciate some help with this - where do I find such feeds? Can I subscribe to the whole superfeedr.com real-time feed just by adding /track/, or will that only filter the feeds I'm already subscribed to? Also, since I'm subscribing from my XMPP.py client on my Amazon server, what exactly is my subscriber URL (callback)?
Where do I go from here?
I'll add more info if needed, just let me know.
Superfeedr is an API that helps you gather data from feeds which you're expected to curate yourself, so the whole process starts with you collecting a list of feeds to subscribe to.
The Track API does not help you find feeds; rather, it helps you build virtual feeds that match a given criterion. For example, if you want any mention of 'stackoverflow' in any feed, you could use Track for that. Think of it as RSS feeds for search results, but in real time (forward looking).
Finally, if you use XMPP, you don't need a callback URL; callback URLs are part of the PubSubHubbub API.