I'm trying to understand the behaviour of Pyramid regarding the [server:main] configuration and gunicorn.
If I use pserve, it'll use the configuration from [server:main], for both waitress and gunicorn. For example:
# development.ini
[server:main]
use = egg:waitress#main
listen = *:6543
So now, $ pserve development.ini will launch the project with waitress, which is expected. But if I use the $ gunicorn command (with either gunicorn or waitress in the ini file) it works as well, which I did not expect.
My questions are:
why does this configuration work if I run the command $ gunicorn --paste development.ini?
what happens under the hood? Is waitress running? (Judging by the processes on my machine, I'd say it isn't.)
There are two independent pieces of configuration required to start serving requests for any WSGI app.
1) Which WSGI app to use.
2) Which WSGI server to use.
These pieces are handled separately and can be done in different ways depending on how you set it up. The ini file format is defined by the PasteDeploy library and provides a way for a consumer of the format to determine both the app config and the server config.

However, when using gunicorn --paste foo.ini you're already telling gunicorn you want to use the gunicorn server (not waitress), so it ignores the server section and focuses only on loading the app. Gunicorn actually has other ways to load the app as well, but I'll ignore that complexity for now since that part is working for you. Any server config for gunicorn needs to be done separately... it is not reading the [server:main] section when you run gunicorn from the CLI.

Alternatively you can start your app using pserve, which does use the server section to determine what server to use - but in your current setup that would run waitress instead of gunicorn.
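For example (a minimal sketch; the file name gunicorn.conf.py and the values shown are my own illustration, not taken from the question), gunicorn's server settings can live in its own config file and be combined with --paste, which only handles app loading:

# gunicorn.conf.py -- gunicorn reads server settings from here (or from CLI flags);
# it never looks at [server:main] in development.ini
bind = "0.0.0.0:6543"
workers = 2

$ gunicorn -c gunicorn.conf.py --paste development.ini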
So, after lots of reading and testing, I have to conclude that:
using [server:main] is mandatory for a Pyramid application
if you are running the application with gunicorn, you still have to define this [server:main] section
gunicorn will ignore the use setting, but Pyramid will check that the egg exists
gunicorn will use the rest of the settings (if any), but they will have lower priority than the command-line arguments or the config.py file
The reason behind this behaviour is still confusing to me, but at least I can work with it. Any other hints would be much appreciated.
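To make the split concrete, here is a rough sketch of the app-loading half only, using the documented PasteDeploy API; this is an illustration of the idea, not gunicorn's actual code:

# Roughly what --paste boils down to: load only the WSGI app from the
# ini file. The [server:main] section plays no part in this step;
# gunicorn then serves `app` with its own bind/worker settings.
from paste.deploy import loadapp

app = loadapp("config:development.ini", relative_to=".")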
I created a simple Flask application that requires authentication to access the data.
When I run this application locally it works fine (it accepts more than one client); however, when I host the app on Railway or Heroku it can't handle more than one client.
Ex: when I access the URL on a computer and log in, if I then access the URL on my cellphone (on a different network), I find myself logged into that same account.
I'm using the latest version of Flask and using flask_login to manage authentication.
Does anyone have any idea why it's happening?
I've tried everything I found on the Internet, such as using
app.run(threaded=True)
I've also set the number of workers in the gunicorn command, for example.
As the official Flask documentation says, never run your application in production in dev mode (which is what app.run() actually does).
Please refer to this section if you are going to deploy on a self-hosted machine: https://flask.palletsprojects.com/en/2.2.x/deploying/
And if you are going to deploy to Heroku, you need to prepare a correct Procfile, like this:
web: gunicorn run:app
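For context, run:app just means "the object named app in the module run.py". As a minimal sketch (the names are illustrative, not from the question):

# run.py -- what the Procfile's run:app entry points at
from flask import Flask

app = Flask(__name__)  # gunicorn imports run.py and serves this object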
I've just solved it.
My gunicorn worker was the default sync type and would handle only one request at a time.
So I had to set the number of threads in the Procfile in order to change the worker_class from sync to gthread.
My final Procfile:
web: gunicorn --threads 4 -b :$PORT index:app
https://docs.gunicorn.org/en/stable/design.html#choosing-a-worker-type
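The same thing can also be expressed in a gunicorn config file instead of command-line flags (a sketch, assuming a gunicorn.conf.py that gunicorn picks up from the working directory or via -c); per the design docs linked above, setting threads above 1 switches the worker class from sync to gthread:

# gunicorn.conf.py -- equivalent to passing --threads 4 -b :$PORT
import os

bind = ":" + os.environ.get("PORT", "8000")
threads = 4  # threads > 1 implies the gthread worker class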
I'm dockerizing a Flask / React application. I started the process by installing gunicorn so that I could make it the default web server, and to that end I created a wsgi.py file to load the application; then from the command line I ran:
gunicorn --bind 0.0.0.0:5000 wsgi:app
And it worked without any issues.
Now I'm attempting to dockerize the app for the first time. I haven't used a gunicorn config file yet, but I've seen one referred to in a couple of articles.
Do I need to add a gunicorn config file to my application in order for it to run smoothly once inside a Docker container? I'm leaving it out for now, but I'd appreciate any advice on what is good practice or downright necessary.
It is generally a good idea to move any configuration you might want to change into a config file when running your app in production. You don't have to use a config file, but hard-coding all the configuration means you'll need to rebuild and redeploy the image if you want to make a change, which is not ideal.
When using Docker, you can use the -v option to mount specific files/directories from the host to the container, which is what you'd want to use when mounting an external config file.
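As a sketch of what such an externalized config might look like (the file name and environment variables are illustrative assumptions, not from the question), a gunicorn.conf.py can read its values from the environment so the same image works in different deployments:

# gunicorn.conf.py -- a sketch of an externalized gunicorn config,
# mounted into the container with -v or copied in at build time
import multiprocessing
import os

bind = "0.0.0.0:" + os.environ.get("PORT", "5000")
workers = int(os.environ.get("WEB_CONCURRENCY", multiprocessing.cpu_count() * 2 + 1))
accesslog = "-"  # send access logs to stdout so Docker can collect them

You would then start the container's process with something like gunicorn -c gunicorn.conf.py wsgi:app.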
I have an existing application which is written with Python 3's default threading-based HTTP server. Is it possible to use gunicorn on top of that?
I know that in a Flask application we create a wsgi file, or pass an instance of the application to the gunicorn command.
Is there any similar way?
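To illustrate the Flask pattern the question refers to (a sketch; the module and app names are hypothetical): gunicorn only needs an importable module exposing a WSGI callable, which is all the wsgi file is.

# wsgi.py -- the kind of entry point gunicorn expects:
# a module exposing a WSGI callable (here a Flask app instance)
from myproject import app  # hypothetical package that creates a Flask app

# started with: gunicorn wsgi:app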
I'm used to building my websites with PHP, and on my OS X machine I expect to have to ensure that my scripts live in an explicitly specified location that I define as my Apache server's document root. But when I follow the simple instructions for building a Flask website, I magically get a working website, with nothing at all in any of the places on my machine that serve as document roots, regardless of where I have my Flask script. This is especially confusing since I always think of deployment as involving carefully duplicating the file structure of my site under the deployment server's document root.
Where is Flask "running from" on my OS X machine? Where do I "put it" when I deploy it (and what do I put)?
It's running from wherever you put it. You surely know where you saved the code: that's where it is.
But your mistake is in thinking that this development environment is running through Apache, or indeed has anything to do with how you'll run it in production. Neither is true. You're using the development server provided by the separate Werkzeug project, and that is not suitable for running in prod.
When you are ready to deploy, Flask has full instructions on how to either connect it to Apache through mod_wsgi, or set up a separate WSGI server which you'll usually connect to through a reverse proxy such as nginx.
Suppose you have your main.py under /path/to/my_project/. When you run the built-in server with python main.py, Flask is then running from your project folder.
Of course, that built-in server is only good for development. When you're deploying for production, Gunicorn (via a WSGI app, read more HERE) or another web server is more appropriate (and is what Flask itself advises). And your production folder can be placed wherever you want: just as with Apache and PHP you may place your folder under /var/www/ (EDITED: as Daniel Roseman pointed out, you may want to change this location for security reasons), it's the same for Flask; nothing stops you from placing the folder there, as long as the permissions are set properly.
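A minimal sketch of that layout (the file name, app name, and paths are illustrative):

# /path/to/my_project/main.py
from flask import Flask

app = Flask(__name__)

@app.route("/")
def hello():
    return "Hello, World!"

if __name__ == "__main__":
    app.run()  # development only

In production you would instead point the WSGI server at the same module, e.g. gunicorn --chdir /path/to/my_project main:app.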
Hope this helps.
I'm serving files in ubuntu using Nginx and fcgi, python and web.py. My index.py contents are:
import web

# urls and the handler classes are defined elsewhere in index.py (elided here)
app = web.application(urls, globals(), True)

if __name__ == "__main__":
    web.wsgi.runwsgi = lambda func, addr=None: web.wsgi.runfcgi(func, addr)
    app.run()
And I'm launching with:
spawn-fcgi -n -d /usr/share/nginx/www -f ~/Projects/index.py -a 127.0.0.1 -p 9002
Which works fine, EXCEPT that once I make changes to the source files (index.py or any class it includes), those new files are never loaded. I have to stop spawn-fcgi and restart it to see any changes. This makes development very cumbersome.
In addition, I've turned off the generation of Python .pyc/cache files.
TIA
I deploy my apps using nginx+uwsgi or apache+mod_wsgi; both of them reload the app if I touch code.py. But I run apps from the integrated server when developing.
If running web.py's integrated development server, which has its own reloader, is not an option, then the only alternative is to write your own dispatcher with reload functionality.
That is most likely by design.
You normally do not want modules reloaded in a production environment (for performance, and because a module reload in Python does not always have the intended effect).
For development, use some other, simpler server model (for example, Django provides its own development server for this exact purpose; I have not used web.py, but it appears to have the same functionality according to the tutorial). Use nginx only when deploying the webapp, not in your development environment.
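For what it's worth, a sketch of that simpler development setup using web.py's own server (assuming the same urls and handler classes as in the original index.py; the third argument to web.application enables autoreload):

# index.py for development -- run with: python index.py 8080
import web

# urls and handler classes as in the original index.py (elided)
app = web.application(urls, globals(), autoreload=True)

if __name__ == "__main__":
    app.run()  # starts web.py's built-in development server with reloading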
You should not have to bother about .pyc files under normal circumstances (exceptions are in some problematic NFS setups, or when .pyc files are created by the wrong user with the wrong permissions).