How to subscribe to real-time XMPP RSS feeds with Superfeedr - python

I'm trying to subscribe to feeds with Superfeedr. I've got a Python wrapper for XMPP up and running, and I'm receiving the dummy.xml successfully.
What I don't quite understand is how to add more sources. I've tried adding a few superfeedr.com/track/ URLs, but I get no new feeds from them (though I do seem to get a confirmation of subscription).
I'd like to add as many real-time (non-POLL) feeds as possible, perhaps by using PubSubHubbub servers.
I'd really appreciate some help with this - where do I find such feeds? Can I subscribe to the whole superfeedr.com real-time feed just by adding /track/? Or will that only filter the feeds I'm already subscribing to? Also, as I'm subscribing from my XMPP.py client on my Amazon server, what exactly is my Subscriber URL (callback)?
Where do I go from here?
I'll add more info if needed, just let me know.

Superfeedr is an API which will help you gather data from feeds that you're supposed to curate yourself. So the whole process starts with you collecting a list of feeds to which you want to subscribe.
The Track API does not help you find feeds; rather, it helps you build virtual feeds that match given criteria. For example, if you want any mention of 'stackoverflow' from any feed, you can use Track for that. Think of it as RSS feeds for search results, but in realtime (forward looking).
Finally, if you use XMPP, you don't need a callback url, as these are part of the PubSubHubbub API.
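For context, if you use the PubSubHubbub side of the API instead, subscribing is a single HTTP POST to the hub. Here is a minimal sketch in Python using requests, assuming Superfeedr's push endpoint and HTTP basic auth with your username and API token; the feed and callback URLs are placeholders you would replace with your own:

```python
import requests

SUPERFEEDR_USER = "your-username"    # assumption: your Superfeedr login
SUPERFEEDR_TOKEN = "your-api-token"  # assumption: a token from your Superfeedr account

def subscribe(topic_url, callback_url):
    """Ask the hub to start pushing updates for topic_url to callback_url."""
    resp = requests.post(
        "https://push.superfeedr.com",  # assumption: Superfeedr's PubSubHubbub endpoint
        auth=(SUPERFEEDR_USER, SUPERFEEDR_TOKEN),
        data={
            "hub.mode": "subscribe",
            "hub.topic": topic_url,        # a feed from the list you curated yourself
            "hub.callback": callback_url,  # your webhook (not needed over XMPP)
        },
    )
    resp.raise_for_status()
    return resp.status_code

# subscribe("http://example.com/feed.xml", "https://example.com/webhook")
```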

Related

Display sensor data in a webpage

I am retrieving temperature data from a sensor continuously. Now I want to display it in a webpage hosted by a node.js web server. I struggle to understand how this data is sent to my HTML webpage, because there are many ways of doing it and none of them is clear to me. In this context I have read terms like REST, AJAX, POST and GET.
Can someone make clear which would be the easiest choice in this case?
All those terms are connected with one another:
REST is a software architectural style used for creating web services; it allows a requesting system (e.g. your browser) to access and/or manipulate data on the server.
GET and POST are two HTTP methods that define what you want to do to the data on the server (get it, change it, add something, ...).
Ajax is used on the client side to retrieve data from RESTful services.
In your case, you would create a GET endpoint in node.js (with e.g. express) and then connect to this endpoint via Ajax to retrieve the data and display it on your website; a sketch of such an endpoint follows.
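To keep with the Python used elsewhere on this page, here is a minimal sketch of that GET endpoint using Flask rather than express; the route name, the port, and the way the temperature is read are all assumptions:

```python
from flask import Flask, jsonify

app = Flask(__name__)

def read_temperature():
    # assumption: replace with however your sensor is actually read
    return 21.5

@app.route("/api/temperature")  # the GET endpoint the browser polls
def temperature():
    return jsonify({"celsius": read_temperature()})

if __name__ == "__main__":
    app.run(port=3000)

# On the client side, the Ajax part is just a periodic fetch of
# /api/temperature followed by updating the DOM with the returned JSON.
```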

What does it mean to use an API key in server-side auth flow?

New to programming, using Python 3.
I work in sales and want to make a program using the Podio API which will take information about potential clients from an Excel sheet and use it to create subpages in Podio with their information. To get an API key, Podio wants a redirect URL for the purposes described here and here, a whole bunch of text I don't really understand. Does it mean I have to authenticate myself in my program (using my Podio login info?), which sends me to Podio (where I log in to Podio manually, using the same login info?), which sends me to the redirect URL, which sends me back to Podio? I can't really make sense of this.
I googled and found some similar questions but none of the answers explained exactly what the actual functions of these authentication flows are. When do I need them? Do I need them if I'm just going to be using this program myself? Do I always need them to gain access to my Podio account through my program?
Thanks in advance.
If you are only going to use your program yourself, then the username/password flow is what you need. It is the simplest authentication flow to understand and use with the Podio API. All the details you need are here: https://developers.podio.com/authentication/username_password
In short: yes, you can enter localhost as the full domain (without protocol) of your return URL.
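A minimal sketch of that flow in Python with requests, assuming the token endpoint and field names from the linked docs (double-check them there); the credentials are obviously placeholders:

```python
import requests

def get_podio_token(client_id, client_secret, username, password):
    """Exchange Podio credentials for an OAuth access token
    (username/password flow; no redirect URL is actually visited)."""
    resp = requests.post(
        "https://podio.com/oauth/token",  # assumption: token endpoint per the linked docs
        data={
            "grant_type": "password",
            "client_id": client_id,        # from your registered API key
            "client_secret": client_secret,
            "username": username,          # your normal Podio login
            "password": password,
        },
    )
    resp.raise_for_status()
    return resp.json()["access_token"]

# token = get_podio_token("my-app-id", "my-secret", "me@example.com", "hunter2")
# Subsequent API calls carry the token in the authorization header,
# e.g. headers={"Authorization": "OAuth2 " + token} per the Podio docs.
```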

Confused about using Superfeedr to subscribe and download RSS feeds using XMPP or Pubsubhubbub

I am trying to use Python 2.7 to subscribe to RSS feeds using Superfeedr.
After reading Superfeedr documentation my understanding is that a user can subscribe using XMPP or Pubsubhubbub.
I have previously worked with REST APIs; however, I am very confused as to what I need to do in order to subscribe to feeds and receive them.
I have already installed the Superfeedr XMPP API Python Wrapper and looked into the Superfeedr Mashape API page, and I am still struggling.
What are the basic steps a user needs to take to be able to subscribe and download RSS feeds in Superfeedr using either XMPP or Pubsubhubbub?
Sofia, I created Superfeedr.
The first step for you is to pick between XMPP and PubSubHubbub. These are 2 APIs with different purposes.
Since you previously worked with REST APIs, I suggest you stick to PubSubHubbub, which you'll probably be a lot more familiar with.
The most important concept of this API is that it's a webhook-based system. This means that not only will you send us requests to subscribe to feeds, but we will also send you requests when the feeds have been updated. We will send those requests to a URL on your application, called the webhook or hub.callback.
Finally, remember that even though you can indeed retrieve (download) the content of an RSS feed from Superfeedr, the recommended way is to wait for us to send you that data (via the webhook).
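To make the webhook side concrete, here is a minimal sketch of a subscriber endpoint in Python with Flask. Echoing hub.challenge on the verification GET is part of the PubSubHubbub spec; the route, port, and what is done with the payload are assumptions:

```python
from flask import Flask, request

app = Flask(__name__)

@app.route("/webhook", methods=["GET", "POST"])
def webhook():
    if request.method == "GET":
        # Subscription verification: the hub confirms your intent to
        # subscribe by asking you to echo back the hub.challenge value.
        return request.args.get("hub.challenge", ""), 200
    # Notification: the hub POSTs new/updated entries to this URL.
    payload = request.get_data(as_text=True)
    print("Feed update received:", payload[:200])
    return "", 200

if __name__ == "__main__":
    # assumption: this port is publicly reachable so the hub can call back
    app.run(host="0.0.0.0", port=8080)
```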

How to monitor the Internet connectivity on two PCs simultaneously?

I have two PCs and I want to monitor the Internet connectivity in both of them and make it available in a page as to whether they're currently online and running. How can I do that?
I'm thinking of a cron job that gets executed every minute that sends a POST to a file located in a server, which in turn would write the connectivity status "online" to a file. In the page where the statuses are displayed, read from both the status files and display whether they're online or not. But this feels like a sloppy idea. What alternative suggestion do you have?
(The answer doesn't necessarily have to be code; I'm not looking for copy-paste solutions. I just want an idea, a nudge in the right direction.)
I would suggest just a GET request (you only need a ping to indicate that the PC is on) sent periodically to, say, a Django server; querying a page on that server then shows a webpage indicating the status of each PC.
On the Django server, record the time at which each GET is received; if the gap between the last GET and the current time is too large, set a flag to false.
That flag is later visible when the URL is queried, via the views.
I don't think this would end up sloppy; it's a trivial solution where you don't really have to dig too deep to make it work. A sketch of the two views follows.
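A minimal sketch of that idea as Django views, assuming the routes are wired up in urls.py and that 90 seconds of silence counts as offline (both are assumptions); the in-memory dict also assumes a single server process:

```python
# views.py
import time
from django.http import HttpResponse, JsonResponse

OFFLINE_AFTER = 90  # assumption: seconds of silence before a PC counts as offline
last_seen = {}      # pc name -> unix time of the last ping received

def ping(request, pc_name):
    """Each PC's cron job GETs /ping/<pc_name>/ every minute."""
    last_seen[pc_name] = time.time()
    return HttpResponse("ok")

def status(request):
    """The status page: a PC is online iff it pinged recently."""
    now = time.time()
    return JsonResponse({
        pc: ("online" if now - ts < OFFLINE_AFTER else "offline")
        for pc, ts in last_seen.items()
    })
```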
I have used Nagios in the past and I like it a lot. It is free and open source. I have used it to monitor several web, DNS and mail servers and a proxy. You can check it out here: https://www.nagios.com/products/nagioscore

retrieving data from twitter python

I am trying to build an app where users will be able to connect to my app, enter a keyword to search for on Twitter, and then have the results stored in a database. From the moment the user enters a keyword I want to keep track of what is being said on Twitter. Those results will be further analyzed and some statistics will be presented to the user.
So far I have used tweepy and the Twitter streaming API for getting the tweets. But I realized that I cannot have more than one open streaming connection (for searching in parallel for multiple keywords).
I searched Stack Overflow and found solutions like disconnecting, reconnecting and then searching with a new keyword, but in that case I am going to lose data.
Also I checked the Twitter REST API, which caps you at a maximum of 450 requests per 15 minutes:
https://dev.twitter.com/docs/rate-limiting/1.1/limits
Stream API:
- the public stream doesn't give you the opportunity to have more than one connection
- the site streams don't give you the opportunity to search
The Firehose API is not an option since it is too expensive.
How can I solve this problem? I see many apps searching live for more than one keyword at a time. Has anyone dealt with this before?
You could use tweepy to collect all tweets from the sample or filter streaming endpoint and save them to a database. Then query the database to return only the tweets matching each user's search term.
If you don't want tweets to persist for too long, you might have better results using a NoSQL database like Redis with an expiration timestamp, so the store doesn't fill up indefinitely. A sketch of this approach follows.
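A minimal sketch of that approach using tweepy 3.x's StreamListener together with Redis; the keyword list, the key naming, and the 24-hour expiry are assumptions:

```python
import json

import redis
import tweepy

KEYWORDS = ["stackoverflow", "python"]  # assumption: the union of all users' terms
EXPIRY_SECONDS = 24 * 3600              # assumption: keep tweets for one day

r = redis.Redis()

class StoreListener(tweepy.StreamListener):
    def on_status(self, status):
        # One Redis key per tweet; setex makes it expire automatically.
        r.setex(
            "tweet:" + status.id_str,
            EXPIRY_SECONDS,
            json.dumps({"text": status.text, "user": status.user.screen_name}),
        )

    def on_error(self, status_code):
        if status_code == 420:  # rate limited: returning False disconnects
            return False

auth = tweepy.OAuthHandler("CONSUMER_KEY", "CONSUMER_SECRET")
auth.set_access_token("ACCESS_TOKEN", "ACCESS_SECRET")

# A single filter connection can track many keywords at once, so you
# never need parallel streaming connections; filtering per user happens
# afterwards, against the database.
stream = tweepy.Stream(auth=auth, listener=StoreListener())
stream.filter(track=KEYWORDS)
```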
