I'm having an issue with my application on Heroku where sessions aren't persisting. Specifically, Flask's SecureCookieSession object is empty every time a request is made. Contrast this with running my application on localhost, where the contents of SecureCookieSession persist the way they should.
Also, I'm using Flask-Login + Flask-SeaSurf, but I'm pretty sure the issue is happening somewhere between Flask / Gunicorn / Heroku.
Here are three questions that describe a similar issue:
Flask sessions not persisting on heroku
Flask session not persisting
Flask-Login and Heroku issues
Except I'm not dealing with AJAX or multiple workers here (it's a single Heroku free dyno, with a single line in the Procfile). I do get the feeling that using server-side sessions with Redis, or switching from Heroku to something like EC2, might solve my problem though.
Also, here's my git repo if it helps: https://gitlab.com/collectqt/quirell/tree/develop. And I'm testing session stuff with:
def _before_request(self):
    LOG.debug('SESSION REQUEST ' + str(flask.session))

def _after_request(self, response):
    LOG.debug('SESSION RESPONSE ' + str(flask.session))
    return response
Got this solved with some external help, mainly by changing the secret key to use a fixed random string I came up with, instead of os.urandom(24).
Changing to server-side Redis sessions helped too, if only by making testing simpler.
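For anyone curious, a minimal sketch of what the server-side setup can look like (this assumes the Flask-Session extension with a local Redis; the values are placeholders, not my exact config):

```python
import flask

app = flask.Flask(__name__)
app.config.update(
    SESSION_TYPE='redis',      # keep session data server-side in Redis
    SESSION_PERMANENT=False,
    SESSION_USE_SIGNER=True,   # still sign the session-id cookie
    SECRET_KEY='replace-with-a-fixed-random-string',
)
# from flask_session import Session
# Session(app)  # attaches the Redis-backed session interface to the app
```

Only the session id travels in the cookie; the actual data stays in Redis, which also makes it easy to inspect during testing.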
Just in case someone else comes across this question: check the APPLICATION_ROOT configuration variable. I recently deployed a Flask application to a subdirectory under nginx with a reverse proxy, and setting the APPLICATION_ROOT variable broke Flask's sessions. The cookies weren't being set under the correct path because of that.
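A sketch of the relevant config (the '/myapp' value is a placeholder for wherever your app is mounted):

```python
import flask

app = flask.Flask(__name__)
app.config['APPLICATION_ROOT'] = '/myapp'
# Without an explicit cookie path, Flask derives it from APPLICATION_ROOT,
# and the browser may never send the session cookie back to your routes:
app.config['SESSION_COOKIE_PATH'] = '/'
```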
Related
In short:
I have a Django application being served up by Apache on a Google Compute Engine VM.
I want to access a secret from Google Secret Manager in my Python code (when the Django app is initialising).
When I do 'python manage.py runserver', the secret is successfully retrieved. However, when I get Apache to run my application, it hangs when it sends a request to the secret manager.
Too much detail:
I followed the answer to this question GCP VM Instance is not able to access secrets from Secret Manager despite of appropriate Roles. I have created a service account (not the default), and have given it the 'cloud-platform' scope. I also gave it the 'Secret Manager Admin' role in the web console.
After initially running into trouble, I downloaded a JSON key for the service account from the web console and set the GOOGLE_APPLICATION_CREDENTIALS env var to point to it.
When I run the Django server directly on the VM, everything works fine. When I let Apache run the application, I can see from the logs that the service account credential JSON is loaded successfully.
However, when I make my first API call, via google.cloud.secretmanager.SecretManagerServiceClient.list_secret_versions, the application hangs. I don't even get a 500 error in my browser, just an eternal loading icon. I traced the execution as far as:
grpc._channel._UnaryUnaryMultiCallable._blocking, line 926: 'call = self._channel.segregated_call(...'
It never gets past that line. I couldn't figure out where that call goes, so I couldn't inspect it any further than that.
Thoughts
I don't understand GCP service accounts / API access very well. I can't understand why this difference occurs between the Django dev server and Apache, given that they're both using the same service account credentials from the JSON file. I'm also surprised that the application just hangs inside the Google library rather than throwing an exception. There's even a timeout option when sending a request, but changing it doesn't make any difference.
I wonder if it's somehow related to the fact that I'm running the Django server under my own account, while Apache runs under whatever user account it uses?
Update
I tried changing the user/group that Apache runs as to match my own. No change.
I enabled logging for gRPC itself. There is a clear difference between running under Apache and under the Django dev server.
On Django:
secure_channel_create.cc:178] grpc_secure_channel_create(creds=0x17cfda0, target=secretmanager.googleapis.com:443, args=0x7fe254620f20, reserved=(nil))
init.cc:167] grpc_init(void)
client_channel.cc:1099] chand=0x2299b88: creating client_channel for channel stack 0x2299b18
...
timer_manager.cc:188] sleep for a 1001 milliseconds
...
client_channel.cc:1879] chand=0x2299b88 calld=0x229e440: created call
...
call.cc:1980] grpc_call_start_batch(call=0x229daa0, ops=0x20cfe70, nops=6, tag=0x7fe25463c680, reserved=(nil))
call.cc:1573] ops[0]: SEND_INITIAL_METADATA...
call.cc:1573] ops[1]: SEND_MESSAGE ptr=0x21f7a20
...
So, a channel is created, then a call is created, and then we see gRPC start to execute the operations for that call (as far as I read it).
On Apache:
secure_channel_create.cc:178] grpc_secure_channel_create(creds=0x7fd5bc850f70, target=secretmanager.googleapis.com:443, args=0x7fd583065c50, reserved=(nil))
init.cc:167] grpc_init(void)
client_channel.cc:1099] chand=0x7fd5bca91bb8: creating client_channel for channel stack 0x7fd5bca91b48
...
timer_manager.cc:188] sleep for a 1001 milliseconds
...
timer_manager.cc:188] sleep for a 1001 milliseconds
...
So, a channel is created... and then nothing. No call, no operations. The Python code is sitting there waiting for gRPC to make this call, which it never does.
The problem appears to be that Apache's forking behaviour breaks gRPC somehow. I couldn't nail down the precise cause, but after I began to suspect that forking was the issue, I found this old gRPC issue that indicates that forking is a bit of a tricky area.
I tried to reconfigure Apache to use a different 'Multi-Processing Module' (MPM), but as my experience in this is limited, I couldn't get gRPC to work under any of them.
In the end, I switched to using nginx/uwsgi instead of Apache/mod_wsgi, and I did not have the same issue. If you're trying to solve a problem like this and you have to use Apache, I'd advise investigating Apache's forking further, along with how gRPC handles forking and the different MPMs available for Apache.
I'm facing a similar issue when running my Flask application with eventlet==0.33.0 and gunicorn (https://github.com/benoitc/gunicorn/archive/ff58e0c6da83d5520916bc4cc109a529258d76e1.zip#egg=gunicorn==20.1.0). When calling secret_client.access_secret_version, it hangs forever.
It used to work fine with an older eventlet version, but we needed to upgrade to the latest version of eventlet for security reasons.
I experienced a similar issue, and I was able to solve it with the following (note that monkey.patch_all() should run before the other imports, so the stdlib is already patched when gRPC and the client library load):

from gevent import monkey
monkey.patch_all()

import grpc.experimental.gevent as grpc_gevent
from google.cloud import secretmanager

grpc_gevent.init_gevent()  # make gRPC cooperate with gevent's event loop

client = secretmanager.SecretManagerServiceClient()
I want to make the setup of my Flask application as easy as possible by providing a custom CLI command for automatically generating SECRET_KEY and saving it to .env. However, it seems that invoking a command creates an instance of the Flask app, which then needs the not-yet-existing configuration.
The Flask documentation says that it's possible to run commands without the application context, but it seems like the app is still created, even if the command doesn't have access to it. How can I prevent the web server from running without also preventing the CLI command?
Here's almost-working code. Note that it requires the python-dotenv package, and it runs with flask run instead of python3 app.py.
import os
import secrets

from flask import Flask, session

app = Flask(__name__)

@app.cli.command(with_appcontext=False)
def create_dotenv():
    cwd = os.getcwd()
    filename = '.env'
    env_path = os.path.join(cwd, filename)
    if os.path.isfile(env_path):
        print('{} exists, doing nothing.'.format(env_path))
        return
    with open(env_path, 'w') as f:
        secret_key = secrets.token_urlsafe()
        f.write('SECRET_KEY={}\n'.format(secret_key))

@app.route('/')
def index():
    counter = session.get('counter', 0)
    counter += 1
    session['counter'] = counter
    return 'You have visited this site {} time(s).'.format(counter)

# putting these under if __name__ == '__main__' doesn't work
# because then they are not executed during `flask run`.
secret_key = os.environ.get('SECRET_KEY')
if not secret_key:
    raise RuntimeError('No secret key')
app.config['SECRET_KEY'] = secret_key
Some alternative options I considered:
I could allow running the app without a secret key, so the command works fine, but I don't want to make it possible to accidentally run the actual web server without a secret key. That would result in an error 500 when someone tries to use routes that have cookies and it might not be obvious at the time of starting the server.
The app could check the configuration just before the first request comes in as suggested here, but this approach would not be much better than the previous option for the ease of setup.
Flask-Script is also suggested, but Flask-Script itself says it's no longer maintained and points to Flask's built-in CLI tool.
I could use a short delay before killing the application if the configuration is missing, so the CLI command would be able to run, but a missing secret key would still be easy to notice when trying to run the server. This would be quite a cursed approach though, and who knows, maybe even illegal.
Am I missing something? Is this a bug in Flask or should I do something completely different for automating secret key generation? Is this abusing the Flask CLI system's philosophy? Is it bad practice to generate environment files in the first place?
As a workaround, you can use a separate Python/shell script file for generating SECRET_KEY and the rest of .env.
That will likely be the only script that needs to be able to run without the configuration, so your repository shouldn't get too cluttered from doing so. Just mention the script in README and it probably doesn't result in a noticeably different setup experience either.
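For illustration, such a standalone script could look like this (the filename and return values are up to you; it just mirrors the create_dotenv command from the question, minus any Flask import):

```python
# generate_env.py -- safe to run before the app or its config exist
import os
import secrets

def create_dotenv(path='.env'):
    """Write a fresh SECRET_KEY to `path` unless the file already exists."""
    if os.path.isfile(path):
        print('{} exists, doing nothing.'.format(path))
        return False
    with open(path, 'w') as f:
        f.write('SECRET_KEY={}\n'.format(secrets.token_urlsafe()))
    return True

if __name__ == '__main__':
    create_dotenv()
```

Because it never imports the Flask app, there's no chicken-and-egg problem with the missing configuration.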
I think you are making it more complicated than it should be.
Consider the following code:
import os
app.config['SECRET_KEY'] = os.urandom(24)
It should be sufficient.
(But I prefer to use config files for Flask).
AFAIK the secret key does not have to be permanent; it simply has to remain stable for the lifetime of Flask, because it is used for session cookies and maybe some internal stuff, but nothing critical to the best of my knowledge.
If your app were interrupted, users would lose their sessions, but no big deal.
It depends on your current deployment practices, but if you were using Ansible for example you could automate the creation of the config file and/or the environment variables and also make some sanity checks before starting the service.
The problem with your approach, as I understand it, is that you must give the web server privileges to write to the application directory, which is not optimal from a security POV. The web user should not have the right to modify application files.
So I think it makes sense to take care of deployment separately, either using bash scripts or Ansible, and you can also tighten permissions and automate other stuff.
I agree with the previous answers; anyway, you could write something like
secret_key = os.environ.get('SECRET_KEY') or secrets.token_urlsafe(32)
so that you can still use your configured SECRET_KEY variable from the environment. If, for any reason, Python doesn't find the variable, it will be generated by the part after the or.
In my small web site I feel the need to make some data widely available, to avoid a database round trip for every request. E.g. this could be the list of current users shown at the bottom of every page, or the time of the last ranking update.
The stuff runs in Python (Flask) on top of nginx + uwsgi (this docker image).
I wonder, do I get some small cache or shared memory for keeping such information "out of the box", or do I need to explicitly set up some dedicated cache? Or perhaps something like this is provided by nginx?
Alternatively, I could still use the database for this, since it has its own cache, I think.
Sorry if the question seems naive/silly; I come from the Java world (where things are a bit different, as we serve all requests with one fat instance of a Java application) and have some difficulty grasping what powers wsgi/uwsgi provide. Thanks in advance!
Firstly, nginx has a cache:
https://www.nginx.com/blog/nginx-caching-guide/
But for Flask caching you also have options:
https://pythonhosted.org/Flask-Cache/
http://flask.pocoo.org/docs/1.0/patterns/caching/
Did you have a look at the caching section of the Flask docs?
It literally says:
Flask itself does not provide caching for you, but Werkzeug, one of the libraries it is based on, has some very basic cache support
You create a cache object once and keep it around, similar to how Flask objects are created. If you are using the development server you can create a SimpleCache object, that one is a simple cache that keeps the item stored in the memory of the Python interpreter:
from werkzeug.contrib.cache import SimpleCache
cache = SimpleCache()
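(Note that werkzeug.contrib.cache was removed in newer Werkzeug releases; its caching classes now live in the separate cachelib package.) The underlying idea fits in a few lines of stdlib Python. A toy sketch, not the Werkzeug implementation: an in-process dict with per-key expiry, private to each worker process, which is exactly why it only suits single-process development servers:

```python
import time

class TinyCache:
    """Toy in-process cache: a dict mapping key -> (expiry time, value)."""

    def __init__(self, default_timeout=300):
        self._store = {}
        self.default_timeout = default_timeout

    def set(self, key, value, timeout=None):
        expires = time.time() + (timeout or self.default_timeout)
        self._store[key] = (expires, value)

    def get(self, key):
        item = self._store.get(key)
        if item is None:
            return None
        expires, value = item
        if time.time() > expires:
            del self._store[key]  # evict expired entries lazily on read
            return None
        return value
```

With multiple uwsgi workers, each process would hold its own copy, so for shared data you'd want Redis/memcached (or uwsgi's own shared caching) instead.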
-- UPDATE --
Or you could solve it on the frontend side, storing data in the web browser's local storage.
If there's nothing in local storage, you call the DB; otherwise, you use the information from local storage rather than making a db call.
Hope it helps.
I'm running a Flask application with Gunicorn as a web server.
The whole project is deployed to Heroku.
Procfile
web: gunicorn app:app --log-file=-
Flask sessions are implemented server side; only a session id is stored in the flask.session object.
Whenever I'm trying to do a login, I get logged in correctly at first, but then get redirected to the starting site (which should be the user site).
LoginController.py
def login(form):
    User.session.set(User.getByLogin(form))
    if User.session.exists():
        return redirect(Urls.home)
    return redirect(Urls.login)
The log shows that User.session.exists() returns True but in the next method (during the redirect)...
HomeController.py
def view():
    if User.session.exists():
        return CourseController.view()
    return render_template("home.html")
...the same method returns False.
User.session object
def exists(self):
    key = session.get("user_key")
    user = self.users.get(key)
    Log.debug("session::exists", user=user)
    return user is not None
In all subsequent requests the user is randomly logged in or not.
What can be the reason for this? I heard that a too-large session object can result in data loss, but I'm only storing integers in it.
Looks like there were two problems:
The app.secret_key shouldn't be set to os.urandom(24), because every worker will end up with a different secret key
For some reason the dict I stored my sessions in was sometimes empty and sometimes not... I still haven't found the reason for this, though
Storing the sessions in a database instead of in a dictionary at runtime solves the problem.
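To see why a per-worker os.urandom() key breaks things, here's a small stdlib sketch (plain HMAC standing in for Flask's actual cookie signing): a cookie signed by one worker fails verification in any worker holding a different key.

```python
import hashlib
import hmac
import os

def sign(key, data):
    # stand-in for the cookie signature Flask derives from SECRET_KEY
    return hmac.new(key, data, hashlib.sha256).hexdigest()

key_worker_1 = os.urandom(24)  # each gunicorn worker rolls its own key...
key_worker_2 = os.urandom(24)

cookie = b'session-id-123'
sig = sign(key_worker_1, cookie)

# worker 1 accepts its own signature; worker 2 rejects the same cookie,
# so whether you appear "logged in" depends on which worker you hit
assert hmac.compare_digest(sig, sign(key_worker_1, cookie))
assert not hmac.compare_digest(sig, sign(key_worker_2, cookie))
```

A single fixed SECRET_KEY shared by all workers makes every worker accept every valid cookie.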
I had a similar issue, but for me the answer was related to the cookies. A new session was being created when I opened my development environment, then another one when going to Google, and a new one after a successful log in.
The problem was that my SESSION_COOKIE_DOMAIN was incorrect, and the cookie domain was being set to a different host. For my local development purposes I set SESSION_COOKIE_DOMAIN = '127.0.0.1' and use http://127.0.0.1: to access it, and it works OK now.
I had the same issue: everything worked locally, but on the server nothing did.
I found out that after I changed app.secret_key from "my_secret_key" to os.urandom(24), one of my test users was always in the session while the other was never set in the session. After reading several pages, I tried giving the cookie a name:
app.config['SECRET_KEY'] = os.urandom(24)
# this is important or it won't work
app.config['SESSION_COOKIE_NAME'] = "my_session"
Now it works as expected: I can log in, go to other webpages, and logging out removes the keys from the session.
I'm working on a Django 1.2.3 project, and I'm finding that the admin session seems to timeout extremely early, after about a minute after logging in, even while I'm using it.
Initially, I had these settings:
SESSION_COOKIE_AGE=1800
SESSION_EXPIRE_AT_BROWSER_CLOSE=True
I thought the problem might be that my session storage was misconfigured, so I tried configuring my sessions to be stored in local memory by adding:
SESSION_ENGINE = "django.contrib.sessions.backends.cache"
CACHE_BACKEND = 'locmem://'
However, the problem still occurs. Is there something else that would cause admin sessions to timeout early even when the user is active?
Caching sessions in locmem:// means that you lose the session whenever the python process restarts. If you're running under the dev server, that would be any time you save a file. In a production environment, that will vary based on your infrastructure - mod_wsgi in apache, for example, will restart python after a certain number of requests (which is highly configurable). If you have multiple python processes configured, you'll lose your session whenever your request goes to a different process.
What's more, if you have multiple servers in a production environment, locmem:// will only refer to one server process.
In other words, don't use locmem:// for session storage.
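If you want sessions that survive process restarts and are shared across every worker and server, the database backend is the straightforward choice. A settings.py sketch (keeping the age and expiry values from the question; it assumes 'django.contrib.sessions' is in INSTALLED_APPS and the database is migrated):

```python
# settings.py fragment: database-backed sessions instead of locmem cache
SESSION_ENGINE = 'django.contrib.sessions.backends.db'
SESSION_COOKIE_AGE = 1800                # 30-minute session age, as before
SESSION_EXPIRE_AT_BROWSER_CLOSE = True
```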