My Flask web application runs behind nginx and gunicorn, and I use supervisor to keep it running in the background. I have always updated my files from Windows PowerShell using scp. After copying the newly edited files, which already exist on my Ubuntu server, over to the server, I run sudo supervisorctl reload to restart the Flask app and see the changes. This time, however, the Flask app did not start and I only get 502 Bad Gateway. No matter how many times I reload supervisor or restart nginx, I still get the 502 error.
The issue turned out to be a module that was not installed and a typo in a configuration file.
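In cases like this it helps to look at the program's error log and to run the start command by hand so the traceback becomes visible; the paths and program name below are only examples and will differ on your setup:

# Check what supervisor thinks is happening; FATAL or BACKOFF usually
# means the command itself crashes on start
sudo supervisorctl status

# Inspect the stderr log configured for the program (example path)
sudo tail -n 50 /var/log/myapp/gunicorn.err.log

# Or run the exact command from the supervisor config manually;
# a missing module or a config typo shows up as a traceback here
/path/to/project/env/bin/gunicorn wsgi:app -b localhost:8500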
Related
I'm using nginx and gunicorn to serve the Flask app. The problem is that whenever I make any change to the APIs, I bring the changes into effect by running
sudo service nginx reload && sudo systemctl restart myapp
But sometimes this gives 502 errors. My assumption is that there are still ongoing API requests being served while I run systemctl restart myapp.
What is the best way to reload the service without any downtime, so that the existing API calls are not affected either?
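For reference, gunicorn's master process reloads its configuration and replaces its workers gracefully when it receives a HUP signal, so workers finish their current requests before exiting. A minimal sketch, assuming the master PID is written to a pidfile (the path is an example):

# Graceful reload: old workers finish in-flight requests, new workers start
kill -HUP $(cat /run/myapp/gunicorn.pid)

# Or, if the systemd unit defines an ExecReload that sends the same signal:
sudo systemctl reload myapp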
I am looking for help deploying my Flask app. I've already written the app and it works well. I'm currently using the following command in the directory of my Flask code:
sudo uwsgi --socket 0.0.0.0:70 --protocol=http -w AppName:app --buffer-size=32768
This is on my Amazon Lightsail instance. I have the instance linked to a static public IP, and if I navigate to the website, it works great. However, to keep the command running in the background even after logging out of Lightsail, I first start a screen session, execute the line above, and then detach the screen with Ctrl-A D.
The problem is that if the app crashes (which is understandable, since it is large and under development), or if the command is left running for too long, the process is killed and the app is no longer served.
I am looking for a better way of deploying a Flask app on Amazon Lightsail so that it is redeployed automatically in the event of a crash, without any interaction from me.
Generally you would write your own unit file for systemd to keep your application running, restart it automatically when it crashes, and start it when your instance boots.
There are many tutorials out there showing how to write such a unit file; some examples are linked below, followed by a minimal sketch:
Systemd: Service File Examples
Creating a Linux service with systemd
How to write startup script for Systemd?
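As an illustration only (shown with gunicorn, but the same idea applies to uwsgi), a minimal unit file might look like the following; the paths, user and AppName:app module are assumptions that need to be adapted:

# /etc/systemd/system/myapp.service  (example name)
[Unit]
Description=Gunicorn instance serving my Flask app
After=network.target

[Service]
User=ubuntu
WorkingDirectory=/home/ubuntu/myapp
# Virtualenv path and AppName:app are placeholders
ExecStart=/home/ubuntu/myapp/env/bin/gunicorn -w 3 -b 127.0.0.1:8000 AppName:app
Restart=always

[Install]
WantedBy=multi-user.target

Enable and start it with sudo systemctl enable --now myapp; systemd will then restart the process after a crash and start it on boot.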
You can use pm2
Starting an application with PM2 is straightforward. It will auto-discover the interpreter to run your application depending on the script extension. This is configurable via the Ecosystem config file, as I will show you later in this article.
All you need is to install pm2 and then run:
pm2 start app.py
Great, this application will now run forever, meaning that if the process exits or throws an exception it will be restarted automatically. If you exit the console and connect again, you will still be able to check the application's state.
To list the applications managed by PM2, run:
pm2 ls
You can also check the logs:
pm2 logs
Keeping Processes Alive at Server Reboot
If you want to keep your application online across unexpected (or expected) server restarts, you will want to set up an init script to tell your system to boot PM2 and your applications.
It’s really simple with PM2, just run this command (without sudo):
pm2 startup
Pm2 Manage-Python-Processes
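Putting the pieces together, a typical sequence might look like this; the app filename, process name and interpreter are assumptions:

# pm2 is distributed through npm
npm install -g pm2

# Start the Flask script under pm2, forcing the Python 3 interpreter
pm2 start app.py --interpreter python3 --name myapp

# Generate the boot script and remember the current process list
pm2 startup
pm2 save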
I've got a web app developed in Flask. The setup is simple. The app runs on gunicorn and all requests are proxied through nginx. The Flask app itself makes HTTP requests to an external API. These HTTP requests are initiated by AJAX calls from the JavaScript code on the frontend. The external API returns data in JSON format to the Flask app, which passes it back to the frontend.
The problem is that when I run this app in development mode with the option multithreaded = True, I can see that the JSON data are returned asynchronously to the server, and the result appears on the frontend page very quickly.
However, when I try to run the app in production mode with nginx and gunicorn, I see that the JSON data come back sequentially, quite slowly, one by one. It seems that for some reason the HTTP requests to the external API are being blocked.
I use supervisor on Ubuntu Server 16.04. This is how I start gunicorn through supervisor:
command = /path/to/project/env/bin/gunicorn -k gevent --worker-connections 1000 wsgi:app -b localhost:8500
It seems that gunicorn does not handle the requests asynchronously, although it should.
As an experiment I ran the Flask app using its built-in WSGI server (not gunicorn) in development mode, with debug=True and multithreaded=True. All requests were still proxied through nginx. The JSON data came back much quicker, i.e. asynchronously (the calls did not seem to block).
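For reference, this is roughly what that development setup looks like; note that the actual keyword argument in Flask is threaded=True, and the module name, route and port below are placeholders:

# wsgi.py (example): Flask's built-in development server with one thread per
# request, so slow calls to the external API don't block each other
from flask import Flask, jsonify

app = Flask(__name__)

@app.route("/data")
def data():
    # placeholder for the call to the external API
    return jsonify(status="ok")

if __name__ == "__main__":
    app.run(host="127.0.0.1", port=8500, debug=True, threaded=True)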
I have read gunicorn's documentation. It says that if I need to make calls to an external API, I should use async workers. I use them, but it doesn't work.
All the caching aspects have been taken into account; you can assume that I don't use any cache. I cleared it all while checking the server setup.
What am I missing? How can I make gunicorn run as expected?
Thanks.
I actually solved this problem quite quickly but forgot to post the answer right away. The reason the gunicorn server did not process the requests asynchronously as I expected was very simple and silly. Since I manage gunicorn through supervisor, after I had changed the config to:
command = /path/to/project/env/bin/gunicorn -k gevent --worker-connections 1000 wsgi:app -b localhost:8500
I forgot to run:
sudo supervisorctl reread
sudo supervisorctl update
It's simple, but not obvious. My mistake was expecting the config to update automatically after restarting my app on gunicorn with this command:
sudo supervisorctl restart my_app
Yes, it restarts the app, but it does not reload the changed supervisor config for gunicorn.
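For completeness, this is roughly how the pieces fit together; the file and program names are examples, and note that the gevent worker class requires the gevent package to be installed in the same virtualenv:

# /etc/supervisor/conf.d/my_app.conf  (example path and name)
[program:my_app]
command = /path/to/project/env/bin/gunicorn -k gevent --worker-connections 1000 wsgi:app -b localhost:8500
directory = /path/to/project
autostart = true
autorestart = true

# After editing the file, make supervisor read the new config and apply it:
#   sudo supervisorctl reread
#   sudo supervisorctl update
#   sudo supervisorctl restart my_app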
I made a Flask app following Flask's tutorial. After running python flaskApp.py, how can I stop the app? I pressed Ctrl+C in the terminal, but I can still access the app through the browser. How do I stop it? Thanks.
I even rebooted the VPS. After the VPS restarted, the app is still running!
Ctrl+C is the right way to quit the app; I do not think you can still visit the URL after Ctrl+C. In my environment it works fine.
What is the terminal output after Ctrl+C? Maybe you can add some details.
You can try to visit the URL with curl to test whether a browser cache or anything else browser-related is causing this.
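For example (host and port are placeholders):

# Fetch the page without any browser cache involved; -i prints the response headers
curl -i http://your-server:5000/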
Since you are using Apache, in order to stop your app you have to disable it by deleting the .conf file from the /etc/apache2/sites-enabled/ folder and then restarting the Apache server. This will destroy your currently running session.
$ cd /etc/apache2/sites-enabled/
$ sudo rm conf_filename.conf
$ sudo service apache2 restart
Try it and your site will be down. To enable it again, copy your file back into the /etc/apache2/sites-available/ folder and run the following commands:
$ sudo a2ensite conf_filename.conf
$ sudo service apache2 restart
Now your site will be live again.
Have you tried pkill python?
WARNING: do not do this before consulting your system admin if you are sharing a server with others, since it kills every Python process you own, not just the Flask app.
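A more targeted approach is to find the specific process first and kill only that one; the script name below is taken from the question and serves as an example:

# List python processes with their PIDs and full command lines
pgrep -af python

# Kill just the one running your app (replace 12345 with the PID shown above)
kill 12345

# Or match on the script name directly
pkill -f flaskApp.py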
So, probably a dumb question, but I am just beginning to learn all this, so your feedback will be valuable to me.
The question is: the Flask documentation says to start the Flask server by entering the command python hello.py, and I do this successfully and see the output on localhost:5000. Now, I have a shared hosting plan; if I upload this file there, will I need to start the server there in the same way? If so, when I close the terminal there, will the Flask server shut down (because when I close the terminal on my computer it shuts down the Flask server and the results are no longer available on localhost:5000)? It basically suggests that I have to keep the terminal running all the time. Please tell me what the basic idea is here. Thanks.
What you're asking is how to deploy your app. There are many options; which one fits will depend on your needs, your hosting service, etc.
You should check the flask docs for the options. http://flask.pocoo.org/docs/deploying/
In essence, you'll have your Flask app running as a local service on the server, so it's not shut down when you close the terminal, and an HTTP server in front of it that proxies requests to that service. Probably the most popular combination is uWSGI with nginx.
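As a rough illustration only, the nginx side of such a setup could look like this; the socket path and server name are assumptions, and uWSGI itself would be started separately as a service:

# /etc/nginx/sites-available/myapp  (example)
server {
    listen 80;
    server_name example.com;

    location / {
        include uwsgi_params;
        # uWSGI listens on this unix socket
        uwsgi_pass unix:/run/myapp/myapp.sock;
    }
}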
When you upload your code to a remote host, you will need to provide a way to start the server and keep things running. How this works is host- and software-dependent. As an example, here is some documentation for how you would fire up Flask on Heroku.
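On Heroku-style platforms, for instance, you typically declare the start command in a Procfile and the platform keeps that process alive for you. A minimal sketch, assuming the Flask object is called app and lives in app.py:

# Procfile (example)
web: gunicorn app:app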