Can I use mod_python's Sessions with mod_pywebsocket?

I'm creating a simple web game that uses web sockets to stream updates and HTTP AJAX requests for everything else (e.g. the login system, user profiles, etc.). Unfortunately I'm somewhat new to mod_python, but it seems that I want to use the Sessions class to keep track of visitors. The only problem is that a Session requires a mod_python request for some reason. Is there a way I can use these sessions within a mod_pywebsocket handler, or do I need to roll my own session mechanism?

In case anyone could use this, I've found that mod_python's sessions work quite well with mod_pywebsocket. Here are two considerations to be aware of:
Initialization: Typically, you construct a mod_python Session object from a mod_python request. Luckily, the authors of mod_pywebsocket had the forethought to make the web socket requests (the ones passed to web_socket_transfer_data) compatible. That means you can instantiate your Session in the same way you normally would in mod_python (see the docs for examples). This might seem obvious, but it wasn't to me. If you get an error doing this, you've done something else wrong.
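A minimal sketch, assuming the stock mod_python Session module (the session key is made up):

    from mod_python import Session

    def web_socket_transfer_data(request):
        # The request object mod_pywebsocket hands you here is compatible
        # enough to stand in for a mod_python request.
        session = Session.Session(request)
        try:
            user = session.get('user')   # 'user' is a made-up session key
        finally:
            session.unlock()             # see the locking caveat below
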
Session locks: The other thing to keep in mind is that the session associated with a given ID is locked by default, and the lock persists for the lifetime of that Session object. This means that if you have two web sockets that use Sessions from the same host, one of them is in danger of blocking forever. In addition, the documentation states that these mutex locks can require non-trivial system resources. They were clearly designed for serving quick HTTP requests, not for persistent connection-oriented use.
One way to avoid this is to disable the locking, but that's probably not a smart thing to do. I haven't tried it, but best of luck with the resulting race conditions if you make the attempt. What I did instead was create the Sessions I needed only for short periods of time and assign None to the variable as soon as I was done (Python's with statement won't work with these sessions). Again, this isn't terribly obscure, but it can lead to some headaches if you don't realize what's going on under the hood.
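A minimal sketch of that short-lived pattern (the helper name and session key are made up):

    from mod_python import Session

    def update_score(request, points):
        # Hypothetical helper called from web_socket_transfer_data: open the
        # session, touch it, release it, so the lock is held only for the
        # duration of this call rather than the life of the connection.
        session = Session.Session(request)
        try:
            session['score'] = session.get('score', 0) + points
            session.save()      # persist the change to the session store
        finally:
            session.unlock()    # release the mutex promptly
        session = None          # drop the reference, as described above
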

Related

Python objects lose state after every request in nginx

This is really troublesome for me. I have a Telegram bot that runs in Django on Python 2.7. During development I used django-sslserver and everything worked fine. Today I deployed it with gunicorn behind nginx, and the code behaves very differently than it did on my localhost. I've tried everything I could think of, since I've already started getting users, but all to no avail. It seems that most Python objects lose their state after each request, and this may be what's causing the problems. The library I use has a class that handles the conversation with a Telegram user, and the state of the conversation is stored in a class instance. Sometimes when new requests come in, those values have already been lost. Has anyone faced this, and is there a quick way to solve it? I'm in a critical situation and need a fast solution.
Gunicorn has a preforking worker model -- meaning that it launches several independent subprocesses, each of which is responsible for handling a subset of the load.
If you're relying on internal application state being consistent across all threads involved in offering your service, you'll want to turn the number of workers down to 1, to ensure that all those threads are within the same process.
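For example, a minimal gunicorn config file (gunicorn reads these as Python; the module path and the optional threads setting here are placeholders):

    # gunicorn.conf.py -- start with: gunicorn -c gunicorn.conf.py myproject.wsgi
    workers = 1   # a single process, so every request sees the same in-process state
    threads = 4   # optional: handle concurrency with threads inside that one process
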
Of course, this is a stopgap -- if you want to be able to scale your solution to production loads, or have multiple servers backing your application, then you'll want to modify your system to persist the relevant state to a shared store, rather than relying on content being available in-process.
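As a rough illustration of such a shared store, conversation state could live in Redis rather than in a class instance (the key scheme and helper names here are made up):

    import json
    import redis

    store = redis.StrictRedis(host='localhost', port=6379, db=0)

    def load_state(chat_id):
        # Any gunicorn worker can load the state, regardless of which worker saved it.
        raw = store.get('conv:%s' % chat_id)
        return json.loads(raw) if raw else {}

    def save_state(chat_id, state):
        store.set('conv:%s' % chat_id, json.dumps(state))
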

Python APNs background connection

What would be the best practice in this scenario?
I have an App Engine Python app with multiple cron jobs. Push notifications may be sent, triggered by user requests and cron jobs. This could easily scale up to roughly 100 pushes per minute.
Setting up and tearing down a connection to APNs for every batch is not what I want, and Apple advises against doing that. So I would like to keep the connection alive even after a user request or cron job finishes, possibly with a timeout (after 2 minutes with no pushes, close the connection).
Reading the GAE documentation, I couldn't figure out if there even is such a thing available. Also, I might need this to be available in different apps and/or modules.
You can put the messages in a pull task queue and have a backend instance (or a cron job) process the tasks.
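A rough sketch of that approach (the queue name and sender function are made up, and the queue must be declared with mode: pull in queue.yaml):

    import json
    from google.appengine.api import taskqueue

    queue = taskqueue.Queue('apns-pull')  # hypothetical pull queue

    def enqueue_push(message):
        # Called from user requests and cron jobs: just record the message.
        queue.add(taskqueue.Task(payload=json.dumps(message), method='PULL'))

    def drain_pushes():
        # Called from the backend instance that owns the APNs connection.
        tasks = queue.lease_tasks(lease_seconds=60, max_tasks=100)
        for task in tasks:
            send_to_apns(json.loads(task.payload))  # hypothetical sender
        queue.delete_tasks(tasks)
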
First, please take a look at Google Cloud Messaging. It's cool, and it's easier to use than the raw APNs protocol.
If you cannot use GCM (because of the code refactoring involved, etc.), I think an App Engine Managed VM would suit your situation. Managed VMs sit somewhere between App Engine and Compute Engine.
You can use the datastore (possibly shadowed by memcache for performance) to persist all the necessary APNs (or other) connection/protocol status/context info, so that multiple related requests can share the same connection as if your app were a long-lived one.
Maybe not trivial, but definitely feasible.
Some requests may need to be postponed temporarily, depending on the shared connection status/context, that's true.
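For illustration only, such context might be modeled along these lines (the kind and field names are invented; ndb entities are automatically cached in memcache, which provides the shadowing mentioned above):

    from google.appengine.ext import ndb

    class ApnsContext(ndb.Model):
        # One entity per logical APNs connection; requests consult and
        # update this shared record instead of keeping state in-process.
        last_used = ndb.DateTimeProperty(auto_now=True)
        pending = ndb.JsonProperty()   # e.g. messages awaiting delivery
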

CORS with gevent

I am building a gevent application in which I use gevent.http.HTTPServer. The application must support CORS, and properly handle HTTP OPTIONS requests. However, when OPTIONS arrives, HTTPServer automatically sends out a 501 Not Implemented, without even dispatching anything to my connection greenlet.
What is the way to work around this? I would not want to introduce an extra framework/web server via WSGI just to be able to support HTTP OPTIONS.
Practically the only option in this situation is to switch to WSGI. I ended up switching to pywsgi.WSGIServer, and the problem went away.
It's important to understand that switching to WSGI in reality introduces very little (if any) overhead, giving you so many benefits that the practical pros far outweigh the hypothetical cons.
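For reference, a minimal sketch of answering the preflight yourself under pywsgi (the header values are illustrative):

    from gevent import pywsgi

    def app(environ, start_response):
        cors = [('Access-Control-Allow-Origin', '*')]
        if environ['REQUEST_METHOD'] == 'OPTIONS':
            # The CORS preflight: answer it ourselves instead of letting
            # the server reject the method.
            cors += [('Access-Control-Allow-Methods', 'GET, POST, OPTIONS'),
                     ('Access-Control-Allow-Headers', 'Content-Type')]
            start_response('204 No Content', cors)
            return []
        start_response('200 OK', cors + [('Content-Type', 'text/plain')])
        return ['hello\n']

    pywsgi.WSGIServer(('', 8080), app).serve_forever()
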

Setting limits in django (preventing too many hits on a view)

I'm not sure what the custom is for people who release production Django apps, but I'd assume there's some kind of protection mechanism against people who spam a view.
If a view did not implement caching and a user just spams the url a bunch of times wouldn't that be a bad thing?
I want some mechanism to block people by IP address or whatnot if they are repeatedly calling a view at a high rate.
I tried to use this app: http://django-ratelimit.readthedocs.org/en/latest/install.html
But it simply doesn't work for me, or perhaps my setup is wrong (has anyone used it?).
Thanks.
Typically this kind of security would happen at the web server level, i.e. in Nginx or whatever you're using to serve your app. Think about the fact that in order to block someone's IP in your app after a certain number of attempts you'd need to record their IP somewhere and then check incoming requests against that. If it were to go in your app then this kind of functionality would best fit at a middleware level.
If you were to do this at an application level for the purpose of protecting individual views then I would probably do it by means of a decorator.
You should have a mechanism in place for this anyway, as what you've described can also be a Denial of Service attack in the right context. Some web hosts have hardware-level protection for this, so ask your host about that too.
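To illustrate the decorator idea above, here is a minimal per-IP throttle built on Django's cache (the names and limits are invented, and it trusts REMOTE_ADDR, which you'd need to adjust behind a proxy):

    from functools import wraps
    from django.core.cache import cache
    from django.http import HttpResponse

    def throttle(max_hits=60, window=60):
        """Reject an IP that calls the view more than max_hits times
        in any window of `window` seconds."""
        def decorator(view):
            @wraps(view)
            def wrapped(request, *args, **kwargs):
                ip = request.META.get('REMOTE_ADDR', 'unknown')
                key = 'throttle:%s:%s' % (view.__name__, ip)
                if cache.add(key, 1, window):   # first hit starts a new window
                    return view(request, *args, **kwargs)
                try:
                    hits = cache.incr(key)
                except ValueError:              # key expired between add and incr
                    hits = 1
                if hits > max_hits:
                    return HttpResponse('Too many requests', status=429)
                return view(request, *args, **kwargs)
            return wrapped
        return decorator
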
Generally, in production you have some kind of frontend server in front of your application. If your application logic isn't coupled to the number of requests, it's better to do this work on the frontend. For example, Nginx has the limit_req module:
http://nginx.org/en/docs/http/ngx_http_limit_req_module.html

Does anyone know of an asynchronous CouchBase client for the Tornado web framework?

I am writing a web application that uses nginx to serve static content and tornado to serve dynamic content. I was thinking of utilizing CouchBase as my datastore, but am having trouble locating a suitable client for use with the Tornado framework (i.e. asynchronous). Anybody know of one?
I've seen trombi: https://github.com/inoi/trombi but couldn't find much information on it. If anyone has had any experience with it (good or bad), I'd love to hear about it.
I would really recommend sticking with the Couchbase-released Python client. While it isn't technically asynchronous, the queries are so fast that it rarely matters in practice. It's not like running a query against a relational database, which could easily block further actions for a period of time. Not to mention that there is a lot of load-balancing and bucket-management code you would lose in most situations by going with some third-party module.
Also, you can always use the multiprocessing module to spawn subprocesses that take these calls out of the primary process and reduce the impact to almost nothing.
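A rough sketch of that idea (the bucket address and key are placeholders, this assumes the 2.x Couchbase Python SDK, and error handling is omitted):

    import multiprocessing

    def fetch(key):
        # Runs in a child process, so a slow call can never block the parent.
        from couchbase.bucket import Bucket
        bucket = Bucket('couchbase://localhost/default')
        return bucket.get(key).value

    if __name__ == '__main__':
        pool = multiprocessing.Pool(processes=2)
        result = pool.apply_async(fetch, ('some-key',))
        value = result.get(timeout=5)   # real code would poll or use a callback
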
UPDATE
Another option is to use Tornado's internal callback functionality to offset the blocking process so it doesn't impair browsing. A method for this is described here: http://tornadogists.org/2185380/
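In the same spirit, here is a rough sketch that offloads the blocking call to a thread pool rather than the exact technique in that link (the bucket setup assumes the 2.x Couchbase Python SDK, and the URL and route are placeholders):

    from concurrent.futures import ThreadPoolExecutor
    import tornado.ioloop
    import tornado.web
    from tornado import gen
    from couchbase.bucket import Bucket

    bucket = Bucket('couchbase://localhost/default')
    executor = ThreadPoolExecutor(max_workers=4)

    class DocHandler(tornado.web.RequestHandler):
        @gen.coroutine
        def get(self, doc_id):
            # The blocking get() runs on a worker thread; the IOLoop keeps
            # serving other connections until the future resolves.
            result = yield executor.submit(bucket.get, doc_id)
            self.write(result.value)

    application = tornado.web.Application([(r'/doc/(.*)', DocHandler)])
    application.listen(8888)
    tornado.ioloop.IOLoop.instance().start()
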
