I read the Flask docs, which say that whenever you need to access the GET variables in the URL, you can just import the request object in your current Python file.
My question is: if two users are hitting the same Flask app with the same URL and GET variables, how does Flask differentiate the request objects? Can someone tell me what is under the hood?
From the docs:
In addition to the request object there is also a second object called
session which allows you to store information specific to a user from
one request to the next. This is implemented on top of cookies for you
and signs the cookies cryptographically. What this means is that the
user could look at the contents of your cookie but not modify it,
unless they know the secret key used for signing.
This means every user is associated with a Flask session object, which distinguishes them from each other.
Just wanted to highlight one more fact about the request object.
As per the documentation, it is a kind of proxy to objects that are local to a specific context.
Imagine the context being the handling thread. A request comes in and the web server decides to spawn a new thread (or something else; the underlying object is capable of dealing with concurrency systems other than threads). When Flask starts its internal request handling, it figures out that the current thread is the active context and binds the current application and the WSGI environment to that context (thread). It does that in an intelligent way, so that one application can invoke another application without breaking.
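The thread-bound lookup described above can be illustrated with a plain stdlib thread-local, which is essentially the mechanism the context-local proxy wraps (the names here are illustrative, not Flask internals):

```python
import threading

_ctx = threading.local()            # one independent slot per thread

def handle(request_data):
    _ctx.request = request_data     # "bind" the request to this thread
    return process()

def process():
    # Deep in the call stack, no request argument is passed around:
    return _ctx.request['path']

results = {}

def worker(name, path):
    results[name] = handle({'path': path})

t1 = threading.Thread(target=worker, args=('a', '/one'))
t2 = threading.Thread(target=worker, args=('b', '/two'))
t1.start(); t2.start(); t1.join(); t2.join()
# Each thread saw its own request through the same global name.
```

Two concurrent "requests" read different values through the same global, which is how flask.request can be one importable object yet differ per request.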
Related
I need to be able to access the currently executing web request in Tornado deep within my application without passing it down through all of my methods. When the request is first received I want to assign a trace id to it and then every time a message gets logged I want to include it in the logging information.
Is there some global information somewhere that I can use in Tornado to identify the current request that's being processed?
Thanks!
Tornado's StackContext is the way to do this.
Here's an example: https://gist.github.com/simon-weber/7755289.
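A note for later readers: StackContext was deprecated in Tornado 5.1 and removed in 6.0; on modern Tornado, which runs on asyncio, the stdlib contextvars module fills the same role. A minimal sketch of carrying a trace id this way (names are illustrative):

```python
import contextvars
import uuid

# One context variable per piece of request-scoped state.
trace_id = contextvars.ContextVar('trace_id', default='-')

def log(message):
    # Anything deep in the call stack can read the current request's
    # id without it being threaded through every method signature.
    return '[%s] %s' % (trace_id.get(), message)

def handle_request():
    trace_id.set(uuid.uuid4().hex)   # assign once, when the request arrives
    return log('request started')
```

contextvars values are preserved across await points, so each in-flight request keeps its own trace id.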
I am a newbie to Google App Engine and Python.
I want to create an entry in a SessionSupplemental table (Kind) anytime a new user accesses the site (regardless of what page they access initially).
How can I do this?
I can imagine that there is a list of standard event triggers in GAE; where would I find these documented? I can also imagine that there are a lot of system/application attributes; where can I find these documented and how to use them?
Thanks.
I am trying to be fairly general here, as I don't know whether you are using the default users service, how you are uniquely linking your SessionSupplemental entities to users, or whether you even have a way to identify users at this point. I am also assuming you are using some version of webapp, as that is the standard request-handling library on App Engine. Let me know a bit more and I can update the answer to be more specific.
1. Subclass the default RequestHandler in webapp with a new class (such as MyRequestHandler).
2. In your subclass, override the initialize() method.
3. In your new initialize() method, get the current user from your session system (or the users service, or whatever you are using). Test whether a SessionSupplemental entity already exists for this user, and if not, create a new one.
4. For all your other request handlers, subclass MyRequestHandler (instead of the default RequestHandler).
Whenever a request happens, webapp will automatically call the initialize() method.
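The subclassing pattern above can be sketched as follows. The RequestHandler stub stands in for google.appengine.ext.webapp.RequestHandler so the example runs anywhere; get_current_user() and the dict-backed SessionSupplemental "Kind" are placeholders for your own session system and datastore model.

```python
class RequestHandler:                       # stand-in for webapp's class
    def initialize(self, request, response):
        self.request, self.response = request, response

SESSION_SUPPLEMENTAL = {}                   # stand-in for the datastore Kind

def get_current_user(request):              # however you identify users
    return request.get('user_id')

class MyRequestHandler(RequestHandler):
    def initialize(self, request, response):
        super().initialize(request, response)
        user = get_current_user(request)
        if user is not None and user not in SESSION_SUPPLEMENTAL:
            # First request we have seen from this user: create the entity.
            SESSION_SUPPLEMENTAL[user] = {'user': user}

class HomePage(MyRequestHandler):           # every handler subclasses it
    pass
```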
This is going to cost you a read for every request and also a write for every request by a new user. If you use the ndb library (instead of db) then a lot of the requests will just hit memcache instead of the datastore.
Now if you are just starting creating a new AppEngine app I would recommend using the Python27 runtime and webapp2 and trying to leverage as much of the webapp2 Auth module as you can so you don't have to write so much session stuff yourself. Also, ndb can be much nicer than the default db library.
I need to write a CGI page which will act like a reverse proxy between the user and another page (mbean). The issue is that each mbean uses a different port and I do not know ahead of time which port the user will want to hit.
Therefore what I need to do is the following:
A) Give the user a page which will allow him to choose which application he wants to hit
B) spawn a reverse proxy based on the information above (which gives me the port, server, etc.)
C) the user connects to the remote mbean page via the reverse proxy and therefore never "leaves" the original page.
The reason for C is that the user does not have direct access to any of the internal apps and only has access to the initial port 80.
I looked at twisted and it appears to me like it can do the job. What I don't know is how to spawn twisted process from within cgi so that it can establish the connection and keep further connection within the reverse proxy framework.
BTW I am not married to twisted, if there is another tool that would do the job better, I am all ears. I can't do things like mod_proxy (for instance) since the wide range of ports would make configuration rather silly (at around 1000 different proxy settings).
You don't need to spawn another process, that would complicate things a lot. Here's how I would do it based on something similar in my current project :
Create a WSGI application, which can live behind a web server.
Create a request handler (or "view") that is accessible from any URL mapping as long as the user doesn't have a session ID cookie.
In the request handler, the user can choose the target application and with it, the hostname, port number, etc. This request handler creates a connection to the target application, for example using httplib and assigns a session ID to it. It sets the session ID cookie and redirects the user back to the same page.
Now when your user hits the application, you can use the already open http connection to redirect the query. Note that WSGI supports passing back an open file-like object as response, including those provided by httplib, for increased performance.
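A minimal sketch of the session-cookie dispatch described above. The backend host/port and the in-memory session store are illustrative assumptions; a production proxy would also need to forward methods, headers and bodies, and strip hop-by-hop headers before relaying the response.

```python
import http.client
import uuid
from http import cookies

SESSIONS = {}  # session id -> (host, port) chosen by the user

def app(environ, start_response):
    jar = cookies.SimpleCookie(environ.get('HTTP_COOKIE', ''))
    sid = jar['sid'].value if 'sid' in jar else None
    if sid not in SESSIONS:
        # No session yet: record the user's choice (hardcoded here in
        # place of the chooser form) and redirect back with the cookie set.
        sid = uuid.uuid4().hex
        SESSIONS[sid] = ('internal-host', 9001)
        start_response('302 Found',
                       [('Location', environ.get('PATH_INFO', '/')),
                        ('Set-Cookie', 'sid=%s' % sid)])
        return [b'']
    # Session known: forward the query over an http.client connection
    # (httplib in Python 2) and pass the backend's answer back.
    host, port = SESSIONS[sid]
    conn = http.client.HTTPConnection(host, port)
    conn.request('GET', environ.get('PATH_INFO', '/'))
    resp = conn.getresponse()
    start_response('%d %s' % (resp.status, resp.reason), resp.getheaders())
    return [resp.read()]
```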
I'm using CherryPy to make a web-based frontend for SymPy that uses an asynchronous process library on the server side, so multiple requests can be processed at once without waiting for each one to complete. To allow the frontend to function as expected, I am using one process for the entirety of each session. The client-side JavaScript sends the session id from the cookie to the server when the user submits a request. The server side currently uses a pair of lists, storing instances of a controller class in one and the corresponding session ids in the other, creating a new interpreter proxy and sending the input if a non-existent session id is submitted. The only problem with this is that the proxy classes are not deleted upon the expiration of their corresponding sessions. Also, I can't see any way to retrieve the session id for which the current request is being served.
My questions about all this are: is there any way to "connect" an arbitrary object to a CherryPy session so that it gets deleted upon session expiration? Is there something I am overlooking here that would greatly simplify things? And does CherryPy's multi-threading negate the problem of synchronous reading of the stdout filehandle from the child process?
You can create your own session type, derived from CherryPy's base session. Use its clean_up method to do your cleanup.
Look at cherrypy/lib/sessions.py for details and sample session implementations.
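A sketch of that pattern, assuming a RamSession-style base. The stand-in base class below just lets the example run without CherryPy installed, and `proxies` / `terminate()` stand for however you track and dispose your interpreter proxies:

```python
class RamSessionBase:
    """Stand-in for cherrypy.lib.sessions.RamSession (illustrative)."""
    cache = {}          # session id -> session data, kept by CherryPy

    def clean_up(self):
        pass            # the real base purges expired sessions here

class InterpreterSession(RamSessionBase):
    proxies = {}        # session id -> per-session interpreter proxy

    def clean_up(self):
        # Called periodically by CherryPy: drop any proxy whose
        # session is no longer alive, then do the normal cleanup.
        for sid in list(self.proxies):
            if sid not in self.cache:
                self.proxies.pop(sid).terminate()
        super().clean_up()
```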
In GAE, you can say users.get_current_user() to get the currently logged-in user implicit to the current request. This works even if multiple requests are being processed simultaneously -- the users module is somehow aware of which request the get_current_user function is being called on behalf of. I took a look into the code of the module in the development server, and it seems to be using os.environ to get the user email and other values associated to the current request.
Does this mean that every request gets an independent os.environ object?
I need to implement a service similar to users.get_current_user() that would return different values depending on the request being handled by the calling code. Assuming os.environ is the way to go, how do I know which variable names are already being used (or reserved) by GAE?
Also, is there a way to add a hook (or event handler) that gets called before every request?
As the docs say,
A Python web app interacts with the App Engine web server using the CGI protocol.
This basically means exactly one request is being served at one time within any given process (although, differently from real CGI, one process can be serially reused for multiple requests, one after the other, if it defines main functions in the various modules to which app.yaml dispatches). See also this page, and this one for documentation of the environment variables CGI defines and uses.
The hooks App Engine defines are around calls at the RPC layer, not the HTTP requests. To intercept each request before it gets served, you could use app.yaml to redirect all requests to a single .py file and perform your interception in that file's main function before redirecting (or, you could call your hook at the start of the main in every module you're using app.yaml to dispatch to).
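Putting the two parts together: since the CGI environment is rebuilt for every request, a users.get_current_user()-style helper can simply read os.environ, with your interception point setting the value first. A sketch (the variable name and helper are illustrative; prefix custom names to avoid colliding with the standard CGI and App Engine variables):

```python
import os

def get_trace_id():
    # Request-scoped lookup, analogous to users.get_current_user():
    # under the CGI model each request sees its own environment.
    return os.environ.get('MYAPP_TRACE_ID')

# In the interception point (e.g. the shared main()), set the value
# before dispatching to the rest of the app:
os.environ['MYAPP_TRACE_ID'] = 'abc123'
```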