I have NGINX running as a reverse proxy in front of a few Flask apps.
I want to implement caching for logged out users.
Flask-Login adds a Set-Cookie header to every response, even for anonymous users, because the session cookie contains a CSRF token. This means I'm using proxy_ignore_headers Set-Cookie; to ensure that responses actually get cached by NGINX (which won't cache any response that carries a Set-Cookie header).
I'm setting a separate cookie in the apps to indicate the logged in/out status of a user and using that to determine whether to use the cache or not. This works great.
The issue is that the cached responses for a logged out user include the Set-Cookie header which sets the session cookie. This session cookie is served to any request that hits the cache, which ultimately results in different users receiving the same CSRF token.
I would like to either prevent the Set-Cookie header being stored in the cache, or remove/overwrite it when it's sent to the client from the cache.
I've tried setting proxy_hide_header Set-Cookie, which removes the header from cached responses, but also from live responses from the app, so no one can log in. Which is bad.
It feels like there should be a really easy solution to this; I just can't find it no matter how hard I google.
Any help is appreciated.
Update: After trying a million things, I have a solution that works for multiple cookies. I would like your opinions.
On Debian 10 I installed the Lua module with apt-get install libnginx-mod-http-lua. I think this is not the complete OpenResty lua-nginx-module, is it?
map $upstream_bytes_received $hide_cookie {
    default '';
    ''      Set-Cookie;
}
Inside location:
header_filter_by_lua_block {
    ngx.header[ngx.var.hide_cookie] = nil;
}
And it works, I will do more testing...
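For reference, the pieces above fit together roughly like this. This is only a sketch: the proxy_pass target, the cache zone name, and the location are placeholders, not taken from the original config.

```nginx
# $upstream_bytes_received is empty when the response was served from the
# cache (no upstream connection was made), so $hide_cookie names the header
# to clear only on cache hits.
map $upstream_bytes_received $hide_cookie {
    default '';          # upstream answered: keep Set-Cookie
    ''      Set-Cookie;  # cache hit: hide Set-Cookie
}

server {
    location / {
        proxy_pass           http://127.0.0.1:5000;  # placeholder Flask app
        proxy_cache          app_cache;              # placeholder cache zone
        proxy_ignore_headers Set-Cookie;

        header_filter_by_lua_block {
            -- clears the Set-Cookie header only when $hide_cookie names it,
            -- i.e. on responses served from the cache
            ngx.header[ngx.var.hide_cookie] = nil;
        }
    }
}
```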
Previous answer, for 1 cookie, without Lua:
I've been working on a solution for this, but for now it works for ONLY ONE cookie.
First I faced the following problems: proxy_hide_header does not accept variables, and it cannot be used inside if().
I finally found an answer that contained a viable solution to that: Using a Header to Filter Proxied Response Headers.
So this is my code for now, which I will keep testing, because it's a delicate matter:
map $upstream_bytes_received $cookies {
    default $upstream_http_set_cookie;
    ''      '';
}
And then inside location:
proxy_hide_header Set-Cookie;
add_header Set-Cookie $cookies;
Maybe I should make the default no cookies: that would be noticeable if it fails, and less problematic regarding privacy.
But I don't think this solution can be extended to multiple cookies, so I have to look elsewhere; if I could force proxy_hide_header to accept variables, that would be the final solution.
Related
In my Django project, until recently I had left the settings SESSION_COOKIE_DOMAIN and CSRF_COOKIE_DOMAIN unset. Recently I changed them to .mydomain.com, and since then I have been seeing sporadic CSRF failures on AJAX requests for certain users. The failure manifests as a 403, with "CSRF token missing or incorrect." appearing in the logs.
Asking users to clear cookies seems to resolve the issue, but I'm wondering how the settings change could have caused this and why it only seems to be happening for some users and not others.
Wondering if there is a way to resolve these issues without asking my users to clear cookies.
The cookie with the new SESSION_COOKIE_DOMAIN is sent as a new cookie and does not replace the old one. So the browser will send both to your server. AFAICT, it sends them in arbitrary order.
That means that you're setting a cookie for .mydomain.com, but receiving either the cookie you just set for .mydomain.com, or a stale cookie that was implicitly set for whatever.mydomain.com originally (Django will only pick one, most likely the last it sees). Which one you get depends on the browser, possibly on particulars of how the client computer stores them, and possibly even on how Django reads the headers. This is why the failures are inconsistent: it randomly works for some clients and fails for others.
Edit: You could delete the stale cookie from the server, if you know the original cookie's properties. Probably the best way is to send a Set-Cookie header with the same name, domain, and other properties, and an expiration date in the past. You could do that e.g. from the 403 page handler.
(see https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Set-Cookie and Correct way to delete cookies server-side )
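The deletion header itself can be sketched with Python's standard library, independent of any framework. The cookie name and domain below are examples; to actually delete the stale cookie they must exactly match how it was originally set.

```python
from http import cookies

def expire_cookie_header(name, domain, path="/"):
    """Build a Set-Cookie value that deletes cookie `name` for `domain`/`path`.

    A browser deletes a cookie when it receives one with the same
    name/domain/path and an expiry in the past (or Max-Age=0).
    """
    c = cookies.SimpleCookie()
    c[name] = ""
    c[name]["domain"] = domain
    c[name]["path"] = path
    c[name]["expires"] = "Thu, 01 Jan 1970 00:00:00 GMT"
    c[name]["max-age"] = 0
    return c[name].OutputString()

# e.g., from a 403 handler you could attach this header to the response:
#   response["Set-Cookie"] = expire_cookie_header("csrftoken", "app.mydomain.com")
```

In Django specifically, HttpResponse.delete_cookie() does the same thing, but it must be called with the same domain/path the stale cookie was set with.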
I'm learning flask and want to understand how sessions work.
Apparently the server stores a signed cookie on the client browser.
I have done this using:
session['mycookie'] = 'mycookievalue'
But I'm unable to find the cookie on the browser. I normally list cookies on the browser using chrome developer tools and running the command:
document.cookie
This works when I set a cookie but nothing comes up when I set it through sessions.
The Flask session cookie has the httponly flag set, making it invisible from JavaScript.
It is otherwise a normal, regular cookie so it is still stored in the browser cookie store; you should still be able to see it in your browser's developer tools.
You can set the SESSION_COOKIE_HTTPONLY option to False if you want to be able to access the cookie value from JavaScript code. From the Builtin Configuration Values section:
SESSION_COOKIE_HTTPONLY
controls if the cookie should be set with the httponly flag. Defaults to True.
The cookie contains all your session data, serialised using JSON (with tagging support for a wider range of Python types), together with a cryptographic signature that ensures the data can't be tampered with.
If you disable the httponly protection, any JS code can decode and read all your session data. Even though it can't change those values, that could still be very interesting to malicious code. Imagine an XSS bug on your site being made worse because JS code could read a CSRF token protecting a web form straight from the session.
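To make this concrete, here is a minimal sketch showing that the session payload is readable without the secret key; the key is only needed to forge the signature. It assumes an uncompressed cookie (Flask prefixes zlib-compressed payloads with a leading dot).

```python
import base64
import json

def read_flask_session_payload(cookie_value):
    """Decode the data part of a Flask session cookie without any key.

    Format: base64(json).timestamp.signature -- only the signature
    requires the app's SECRET_KEY.
    """
    payload = cookie_value.split(".")[0]
    payload += "=" * (-len(payload) % 4)  # restore stripped base64 padding
    return json.loads(base64.urlsafe_b64decode(payload))

# Build a cookie value the same way Flask does, then read it back.
# The timestamp and signature parts here are fake placeholders.
data = base64.urlsafe_b64encode(json.dumps({"user_id": 42}).encode()).rstrip(b"=")
cookie = data.decode() + ".XnE0uw.fake-signature"
```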
I am finding this question 3 years and 8 months later because I am interested in what happens when the cookie is modified or spoofed, to ensure my backend is able to tell the difference.
Using Chrome, press F12, select the Application tab, and under Storage go to Cookies. Under Cookies you'll find the web page; select it and the right side will populate. Assuming you have done something to create your session cookie, it will be there. You will notice that the value looks opaque: it is signed, though not actually encrypted.
[Picture showing the location of the session cookie]
Sessions are meant for server use only. That is why the cookie is signed and hidden from client-side scripts.
If you want to set a cookie that can be used by the client/browser, you can just set a normal cookie instead of the session cookie.
You can set cookies by modifying the response.
from flask import Flask, make_response

app = Flask(__name__)

@app.route('/')
def home_page():
    resp = make_response(...)
    resp.set_cookie('my_cookie', 'cookie_value')
    return resp
document.cookie in the browser will then give you my_cookie=cookie_value.
I noticed that tokens generated by Flask-WTF expire in one hour, and a different one is generated on every request. This will cause problems in SPAs when a page has been open longer than an hour: XHR requests made more than an hour after page load will start failing, even if the user was active.
My workaround is to set a new token in the browser in each API call. In the server, all API responses contain a freshly-generated token:
from flask import jsonify, make_response
from flask_wtf.csrf import generate_csrf

def api_response(data, error=None, response_code=200):
    response = {"csrftoken": generate_csrf(), "data": data}
    ...
    return make_response(jsonify(response), response_code)
In the browser we set the csrftoken on each API response.
then(function(result) {
    if (result.csrftoken) csrftoken = result.csrftoken;
    callback(result);
})
Is this method still safe and fast? Is there a better way to handle this? I am not too sure about using generate_csrf directly.
No, there is no other way to use the CSRF protection in Flask-WTF. When you need a CSRF token, you need to generate one and use it. There should be no problem with generating it like you do. It is still generated and validated the same way on the server, and transmitted over the same channel to the client.
I am having a problem with the session id: request.session.session_key generates a new key on every page refresh / form submission.
Meanwhile, request.COOKIES[settings.SESSION_COOKIE_NAME] complains that the 'sessionid' key is not found.
Am I missing something? I need a key that is persistent across page requests on my site, and that persists even when the browser is closed, for 3 weeks, etc. How would I do this in Django?
Do I need to configure something? If I am misunderstanding something regarding the sessionid and how it is generated, please correct me.
Thanks for your replies.
Regards,
W
My settings.py is: http://pastebin.com/G9qnD9En
It sounds like your browser is not accepting the session cookies that Django is sending.
Your browser should be able to tell you what cookies are being set with a page response from your application. Check to see that a 'sessionid' cookie is actually being sent, and that the domain and path are correct.
If you have SESSION_COOKIE_DOMAIN or SESSION_COOKIE_PATH set incorrectly in your settings.py file, they may be causing Django to set cookies in the browser that are not being returned to the server.
If you want to verify your whole setup, start by reading this: http://docs.djangoproject.com/en/1.2/topics/http/sessions/
In a nutshell, you need to:
have 'django.contrib.sessions' in your INSTALLED_APPS;
have 'django.contrib.sessions.middleware.SessionMiddleware' in MIDDLEWARE_CLASSES; and
on a production server, you may need to set SESSION_COOKIE_DOMAIN and SESSION_COOKIE_PATH to interact well with other web applications on the same domain or related domains.
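As a sketch, the first two items from that list look like this in settings.py (using the MIDDLEWARE_CLASSES name from the Django 1.2-era docs linked above; the other entries are placeholders):

```python
# settings.py fragment: the two pieces required for sessions to work
INSTALLED_APPS = (
    'django.contrib.sessions',
    # ... your other apps ...
)

MIDDLEWARE_CLASSES = (
    'django.contrib.sessions.middleware.SessionMiddleware',
    # ... your other middleware ...
)
```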
Edit:
Having looked at your pasted settings, I can see that there are two different things going on, each of which is enough to stop the session cookies from working.
SESSION_COOKIE_DOMAIN is set to "mydomain.com"
A cookie for a generic TLD requires that the "domain" part contain at least two period (".") separators in it. This stops people from setting cookies for domains like ".com". (Cookies for domains under country-level jurisdiction, I believe, require three periods.)
Change this to ".mydomain.com" and it should be returned by the browser.
In development (running on your local machine, at 127.0.0.1), leave this setting blank, or your browser won't accept the cookie at all.
SESSION_COOKIE_PATH is set to "/tmp"
This looks like a mistake, unless your web application is hosted at "http://mydomain.com/tmp/"
SESSION_COOKIE_PATH is used to indicate the "path" component of the cookie, i.e., the URL prefix under which the cookie will be returned to the server. This lets you host one application at "mydomain.com/firstapp/" and another at "mydomain.com/secondapp/", and you can be sure that the "sessionid" cookies will not be confused between them.
If you only have one application hosted under this domain name, then leave it blank, and it will default to "/" (the entire domain)
To control where Django stores session data on your filesystem (which is what it looks like you're trying to do), you can use the SESSION_FILE_PATH setting. By default, it is set to "/tmp/" anyway, so you shouldn't need to set it at all.
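Putting the three settings together, a corrected settings.py fragment might look like this (the domain is the question's placeholder, and the values follow the reasoning above rather than the pasted settings):

```python
# settings.py fragment (sketch)
SESSION_COOKIE_DOMAIN = ".mydomain.com"  # leading dot; leave unset in local dev
SESSION_COOKIE_PATH = "/"                # URL prefix, NOT a filesystem path
SESSION_FILE_PATH = "/tmp/"              # where file-based session data is stored
```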
I had a similar problem, and I fixed it by setting SESSION_COOKIE_NAME to something other than the default 'sessionid'. I think google analytics might have been clobbering the cookie somehow.
I am trying to use Python to write a client that connects to a custom HTTP server that uses digest authentication. I can connect and pull the first request without problem. Using tcpdump (I am on Mac OS X; I am both a Mac and a Python noob) I can see the first request is actually two HTTP requests, as you would expect if you are familiar with RFC 2617. The first results in the 401 UNAUTHORIZED. The header information sent back from the server is correctly used to generate headers for a second request with some custom Authorization header values, which yields a 200 OK response and the payload.
Everything is great. My HTTPDigestAuthHandler opener is working, thanks to urllib2.
In the same program I attempt to request a second, different page from the same server. I expect, per the RFC, that tcpdump will show only one request this time, using almost all the same Authorization header information (nc should increment).
Instead it starts from scratch and first gets the 401 and regenerates the information needed for a 200.
Is it possible with urllib2 to have subsequent requests with digest authentication recycle the known Authorization Header values and only do one request?
[Re-read that a couple times until it makes sense, I am not sure how to make it any more plain]
Google has yielded surprisingly little, so I guess not. I looked at the code for urllib2.py and it's really messy (comments like: "This isn't a fabulous effort"), so I wouldn't be shocked if this was a bug. I noticed that my Connection header is set to close, and even if I set it to keep-alive, it gets overwritten. That led me to keepalive.py, but that didn't work for me either.
Pycurl won't work either.
I can hand code the entire interaction, but I would like to piggy back on existing libraries where possible.
In summary: is it possible, with urllib2 and digest authentication, to get two pages from the same server with only three HTTP requests executed (two for the first page, one for the second)?
If you happen to have tried this before and already know it's not possible, please let me know. If you have an alternative, I am all ears.
Thanks in advance.
Although it's not available out of the box, urllib2 is flexible enough to add it yourself. Subclass HTTPDigestAuthHandler, hack it (the retry_http_digest_auth method, I think) to remember the authentication information, and define an http_request(self, request) method that uses it for all subsequent requests (adding the Authorization header).
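A sketch of that idea, written against Python 3's urllib.request (the urllib2 API is the same in the relevant parts). Instead of replaying the literal header, which would be invalid for a different URI, it caches the server's digest challenge per host and answers it preemptively, letting the stock get_authorization recompute the response hash and increment nc for each request. The class name and the hook points are my choices, not a standard recipe.

```python
import urllib.request
from urllib.request import parse_http_list, parse_keqv_list

class PreemptiveDigestAuthHandler(urllib.request.HTTPDigestAuthHandler):
    """Remember each host's digest challenge and answer it preemptively,
    so only the first request per server pays the 401 round trip."""

    def __init__(self, *args, **kwargs):
        super().__init__(*args, **kwargs)
        self._challenges = {}  # host -> parsed challenge dict

    def http_error_401(self, req, fp, code, msg, headers):
        # Remember the challenge before the stock handler answers it.
        authreq = headers.get("www-authenticate", "")
        if authreq.lower().startswith("digest "):
            challenge = parse_keqv_list(parse_http_list(authreq.split(" ", 1)[1]))
            self._challenges[req.host] = challenge
        return super().http_error_401(req, fp, code, msg, headers)

    def http_request(self, req):
        chal = self._challenges.get(req.host)
        if chal and not req.has_header("Authorization"):
            # get_authorization recomputes the digest for this exact URI
            # and increments the nonce count (nc), as RFC 2617 requires.
            auth = self.get_authorization(req, chal)
            if auth:
                req.add_unredirected_header("Authorization", "Digest %s" % auth)
        return req

    https_request = http_request
```

Install it with urllib.request.build_opener(handler) and every page after the first should go out in a single request.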