Flask limiting user sessions by time - python

I have a rather unusual task, so I would like to ask the experts for a piece of advice :)
I need to build a small Flask-based web app with a built-in video player. Users will have to log in to access the videos. The problem is that I need to limit users by the amount of time they can spend using the service.
Could someone recommend a possible way to make this work, or help me find a place to get started?
What I am thinking of: what if I create a user profile variable like "credits_minutes", and find a way to decrease credits_minutes by one every minute?

Sessions are based on requests. From my understanding, what you are trying to do is measure the amount of time actually spent on the site, so you'll need some kind of keep-alive from the client: WebSockets, repeated JavaScript calls, or something else that tells you the user is really on the site, and then base your logic on that.
A simple solution would be to write something with jQuery that polls an endpoint of your choice, and do something time-based on each poll, such as saving the timestamp of the first call and comparing it to each new one that arrives; when X minutes have elapsed, redirect the user. A sketch of this approach follows below.
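A minimal sketch of such a polling endpoint, assuming the client POSTs to it once a minute and that the "credits_minutes" idea from the question is stored server-side (the in-memory dict here is a stand-in for your real user table):

    from flask import Flask, jsonify, session

    app = Flask(__name__)
    app.secret_key = "change-me"

    # Stand-in for the user table: remaining minutes per user.
    credits_minutes = {"alice": 120}

    @app.route("/heartbeat", methods=["POST"])
    def heartbeat():
        user = session.get("user")
        remaining = credits_minutes.get(user, 0)
        if not user or remaining <= 0:
            # The client-side JS should redirect away from the player on 403.
            return jsonify({"ok": False}), 403
        credits_minutes[user] = remaining - 1   # one poll == one minute spent
        return jsonify({"ok": True, "remaining": credits_minutes[user]})

The jQuery side is just a setInterval that POSTs to /heartbeat every 60 seconds and redirects when it gets a 403 back.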

From the Flask-Session documentation: https://pythonhosted.org/Flask-Session/
PERMANENT_SESSION_LIFETIME: the lifetime of a permanent session as datetime.timedelta object. Starting with Flask 0.8 this can also be an integer representing seconds.
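For example, to cap a permanent session at 30 minutes (a sketch using the standard Flask config; the secret key is illustrative):

    from datetime import timedelta
    from flask import Flask, session

    app = Flask(__name__)
    app.secret_key = "change-me"
    app.permanent_session_lifetime = timedelta(minutes=30)

    @app.before_request
    def make_session_permanent():
        # The lifetime only applies to sessions marked permanent.
        session.permanent = True

Note that this bounds how long a single session lives, not the user's cumulative time on the service, so it complements rather than replaces the credits approach above.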

Related

How to get new (real time) comments from my blog?

I have a blog, and an application that reports the number of comments and posts on my blog by using the blog's API.
The issue I'm having is that I want my application to receive new comments from my blog in real time.
My solution:
I can have my application call the API every 30 seconds or so to check whether there is a new comment.
I think the best solution is to use something called long polling to get updates. It's a programming technique for handling requests that uses fewer resources (such as CPU) over time than naive repeated polling. For a detailed solution for your case, search for:
Long Polling in Flask application
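The gist, as a hedged sketch (the comments list and endpoint name are illustrative): instead of answering "nothing new" every 30 seconds, the server holds each request open until a comment arrives or a timeout passes:

    import time
    from flask import Flask, jsonify, request

    app = Flask(__name__)
    comments = []   # stand-in for comments fetched from the blog's API

    @app.route("/comments/poll")
    def poll_comments():
        since = int(request.args.get("since", 0))
        deadline = time.time() + 25        # hold the request open up to 25 s
        while time.time() < deadline:
            fresh = comments[since:]
            if fresh:
                return jsonify({"comments": fresh, "next": len(comments)})
            time.sleep(0.5)                # simple wait; events scale better
        return jsonify({"comments": [], "next": since})

The client immediately re-requests with the returned "next" value, so new comments arrive within about half a second while idle periods cost only one request per 25 seconds.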

Bypass rate limit for requests.get

I want to constantly scrape a website (once every 3 to 5 seconds) with
requests.get('http://www.example.com', headers=headers2, timeout=35).json()
But the example website has a rate limit and I want to bypass that. How can I do so? I thought about doing it with proxies, but was hoping there were some other ways.
You would have to do some fairly low-level work, likely using socket and urllib2.
First, do your research: how are they limiting your query rate? Is it by IP, session based (a server-side cookie), or local cookies? I suggest visiting the site manually as a first step and using a web developer tool to view all the headers exchanged.
Once you figure this out, create a plan to work around it.
Let's say it is session based: you could use multiple threads to control several individual instances of a scraper, each with a unique session.
If it is IP based, then you must spoof your IP, which is much more complex.
Just buy a good number of proxies and configure the script to rotate to the next proxy once the server's rate limit kicks in.
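A hedged sketch of that rotation, assuming the server signals its limit with HTTP 429 and with placeholder proxy addresses:

    import itertools
    import time
    import requests

    PROXIES = itertools.cycle([
        "http://proxy1.example.com:8080",   # illustrative addresses
        "http://proxy2.example.com:8080",
    ])

    def fetch(url):
        proxy = next(PROXIES)
        while True:
            resp = requests.get(url, proxies={"http": proxy, "https": proxy},
                                timeout=35)
            if resp.status_code != 429:     # not rate limited: done
                return resp.json()
            proxy = next(PROXIES)           # rotate and retry
            time.sleep(3)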

Facebook's Graph API's request limit on a locally run program? How to get specific data in real time without reaching it?

I've been writing a program in Python which needs the number of likes of a specific Facebook page in real time. The program itself works, but it's based on a loop that constantly requests the number of likes and updates a variable, and I'm afraid it will soon hit the API's request limit.
I read that the Graph API's request limit per user for an application is 200 requests per hour. Is a locally run program like this one considered an application with one user, or what is it considered?
Also, I read that some users say the API can handle 600 requests per 600 seconds without returning an error; does this still apply? (Could I, for example, delay the loop by one second and still be able to make all the requests?) If not, is there a way to get that information in real time from a local program? (I saw that Graph can send you updates with a POST to a specified URL, but is there a way to receive those updates without owning a URL? Maybe a way to renew the token or something?) I need this program to run for almost a whole day, so not being rejected by the API is quite important.
Sorry if this sounds silly; this is the first time I'm using the Graph API (and a web-based API in general).
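For reference, the throttled loop the question proposes might look like this sketch (the fan_count field name is an assumption; check the current Graph API docs):

    import time
    import requests

    GRAPH_URL = "https://graph.facebook.com/{page_id}"

    def page_likes(page_id, token):
        # fan_count carries the page's like total in recent Graph API
        # versions; treat the field name as an assumption.
        resp = requests.get(GRAPH_URL.format(page_id=page_id),
                            params={"fields": "fan_count",
                                    "access_token": token},
                            timeout=10)
        resp.raise_for_status()
        return resp.json().get("fan_count")

    def watch(page_id, token, interval=1.0):
        while True:
            print(page_likes(page_id, token))
            time.sleep(interval)   # >= 1 s per request ~ 600 per 600 s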

How to monitor the Internet connectivity on two PCs simultaneously?

I have two PCs and I want to monitor the Internet connectivity of both of them and show on a page whether each is currently online and running. How can I do that?
I'm thinking of a cron job, executed every minute, that sends a POST to a script on a server, which in turn writes the connectivity status "online" to a file. The page displaying the statuses would read both status files and show whether each PC is online. But this feels like a sloppy idea. What alternative would you suggest?
(The answer doesn't necessarily have to be code; I'm not looking for copy-paste solutions. I just want an idea, a nudge in the right direction.)
I would suggest just a periodic GET request (you only need a ping to indicate that the PC is on) sent to, say, a Django server; querying a page on that server would then show the status of each PC.
On the Django server, record the time each GET is received; if the time between the last GET and the current time is too large, set a flag to false.
That flag is then visible when the status URL is queried, via the views.
I don't think this would end up sloppy; it's a simple solution where you don't really have to dig too deep to make it work. A sketch follows below.
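A minimal sketch of those two views, assuming each PC's cron job hits a /ping/<name>/ URL and that an in-memory dict (fine for a toy single-process setup) stands in for persistent storage:

    import time
    from django.http import HttpResponse, JsonResponse

    LAST_SEEN = {}              # {"pc1": unix timestamp of last ping, ...}
    OFFLINE_AFTER = 120         # seconds without a ping before "offline"

    def ping(request, name):
        LAST_SEEN[name] = time.time()
        return HttpResponse("ok")

    def status(request):
        now = time.time()
        return JsonResponse({name: (now - seen) < OFFLINE_AFTER
                             for name, seen in LAST_SEEN.items()})

Wire both views up in urls.py and the cron job on each PC reduces to a one-line curl of its ping URL.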
I have used Nagios in the past and I like it a lot. It is free and open source. I have used it to monitor several web, DNS, and mail servers, and a proxy. You can check it out here: https://www.nagios.com/products/nagioscore

What’s the correct way to run a long-running task in Django whilst returning a page to the user immediately?

I’m writing a tiny Django website that’s going to provide users with a way to delete all their contacts on Flickr.
It’s mainly an exercise to learn about Selenium, rather than something actually useful — because the Flickr API doesn’t provide a way to delete contacts, I’m using Selenium to make an actual web browser do the actual deleting of contacts.
Because this might take a while, I’d like to present the user with a message saying that the deleting is being done, and then notify them when it’s finished.
In Django, what’s the correct way to return a web page to the user immediately, whilst performing a task on the server that continues after the page is returned?
Would my Django view function use the Python threading module to make the deleting code run in another thread whilst it returns a page to the user?
Consider using a task queue - the solution most favoured by the Django community is Celery with RabbitMQ.
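A minimal sketch of that split, with a placeholder broker URL and task body:

    from celery import Celery

    celery_app = Celery("tasks", broker="amqp://guest@localhost//")

    @celery_app.task
    def delete_contacts(username):
        # Drive Selenium here; this runs in a worker, not the web process.
        pass

    # In the Django view: enqueue, then return the "working on it" page
    # immediately:
    # delete_contacts.delay(request.user.username)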
Once when I needed this, I set up a separate Python process that communicated with Django via XML-RPC. That process took care of the long-running jobs and could report the status of each. The Django views called it (via XML-RPC) to queue jobs and query job status, and I made a couple of proper JSON views in Django on top of that, updating the HTML page with asynchronous JavaScript calls to those views (aka Ajax).
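A sketch of such a worker process using the standard library (function names and the port are illustrative; Django would talk to it with xmlrpc.client.ServerProxy("http://localhost:8001")):

    from xmlrpc.server import SimpleXMLRPCServer

    JOBS = {}   # job_id -> status string

    def queue_job(job_id):
        JOBS[job_id] = "queued"    # a real worker would start the task here
        return True

    def job_status(job_id):
        return JOBS.get(job_id, "unknown")

    server = SimpleXMLRPCServer(("localhost", 8001), allow_none=True)
    server.register_function(queue_job)
    server.register_function(job_status)
    server.serve_forever()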
