I have a Django project hosted on an Amazon EC2 Linux instance.
To keep my app running even after the session is closed I use Gunicorn, but I experience some errors and degraded performance.
When I run the command:
python manage.py runserver
from the terminal, everything works great, but when the session is closed the app stops working.
How can I run "python manage.py runserver" so that it keeps running in the background forever (until I kill it), even when the session is closed?
I know there is uWSGI, but if possible I would prefer to use the native Django command directly.
Thanks in advance
What happens here is that the script is interrupted by the SIGHUP signal when your session is closed. To overcome this, there is a tool called nohup, which doesn't pass SIGHUP down to the program/script it executes. Use it as follows:
nohup python manage.py runserver &
(note the & at the end; it is needed so that manage.py runs in the background rather than in the foreground).
By default nohup redirects the output to the file nohup.out, so you can use tail -f nohup.out to watch the output/logs of your Django app.
Note, however, that manage.py runserver is not supposed to be used in production. For production you really should use a proper WSGI server, such as uWSGI or Gunicorn.
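For example, a minimal sketch with Gunicorn (here myproject is a placeholder for your actual project package):
$ pip install gunicorn
$ gunicorn myproject.wsgi:application --bind 0.0.0.0:8000 --daemon
The --daemon flag detaches Gunicorn from the terminal so it survives the session; for a real deployment you would typically run it under systemd or supervisord instead.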
You can install and use tmux if you want your scripts to keep running in the background even after closing SSH and mosh connections:
$ sudo apt-get install tmux
Then run it with the command $ tmux; a new shell will open, in which you just execute your command:
$ python manage.py runserver 0.0.0.0:8000
Binding to 0.0.0.0:8000 makes the dev server listen on all network interfaces (make sure the host you use is listed in Django's ALLOWED_HOSTS). Now detach your tmux session to keep it running in the background: press CTRL + B and then D.
Now you can exit your terminal, but your command keeps running in tmux. Just learn the basic tmux commands from here.
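To get back into the session later, reattach with:
$ tmux attach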
For that, you can also use screen: just start a new screen session and run
python manage.py runserver
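For example, a quick sketch with a named session (the name django is arbitrary):
$ screen -S django
$ python manage.py runserver
Then detach with CTRL+A followed by D, and reattach later with:
$ screen -r django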
Related
I have set up Flask on my Raspberry Pi and I am using it for the sole purpose of acting as a server for an XML file which I created with a Python script to pass data to an iPad app (iRule).
My RPi is set up headless and I access it from Windows 10 using PuTTY, WinSCP and TightVNC Viewer.
I run the server by opening a terminal window and the following command:
sudo python app1c.py
This sets up the server and I can access my xml file quite well. However, when I turn off the Windows machine and the PuTTY session, the Flask server shuts down!
How can I set it up so that the Flask server continues even when the Windows machine is turned off?
I read in the Flask documentation:
While lightweight and easy to use, Flask’s built-in server is not suitable for production as it doesn’t scale well and by default serves only one request at a time.
Then they go on to give examples of how to deploy your Flask application to a WSGI server! Is this necessary given the simple application I am dealing with?
Use:
$ sudo nohup python app1c.py > log.txt 2>&1 &
nohup lets you run a command/process or shell script that continues running in the background after you log out of the shell.
> log.txt: redirects stdout to this file.
2>&1: redirects stderr to stdout, so errors also end up in log.txt.
The final & runs the command/process in the background in the current shell.
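You can then follow the application's output live with:
$ tail -f log.txt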
Install the Node package forever from https://www.npmjs.com/package/forever
Then use
forever start -c python your_script.py
to start your script in the background. Later you can use
forever stop your_script.py
to stop the script.
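Assuming the package is installed globally via npm, you can also check what forever is currently managing:
$ forever list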
You have multiple options:
Easy: detach the process with &, for example:
$ sudo python app1c.py &
(note that a plain & does not survive logout by itself; combine it with nohup or disown).
Medium: install tmux with apt-get install tmux,
launch tmux, start your app as before, and detach with CTRL+B followed by D.
More complex:
run your Flask script with a WSGI server - uWSGI or Gunicorn, typically behind Nginx (see the sketch below).
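As a sketch of that last option (assuming the Flask object inside app1c.py is named app):
$ pip install gunicorn
$ gunicorn --bind 0.0.0.0:8000 app1c:app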
Been stressing over this lately, so I decided to go deep.
pm2 start app.py --interpreter python3
Use PM2 for things like this. I also use it for a NodeJS app and a Python app on the same server.
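PM2 also gives you basic process management out of the box; for example (the process name app is assumed here, PM2's default for a script named app.py):
$ pm2 list
$ pm2 logs app
$ pm2 stop app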
Use:
$ sudo python app1c.py >> log.txt 2>&1 &
>> log.txt appends all your stdout to the log.txt file (you can check the application logs in it).
2>&1 sends all stderr to the same place, so the error logs also end up in log.txt.
The & at the end makes it run in the background.
You get the process ID immediately after executing this command, with which you can monitor or verify it:
$ sudo ps -ef | grep <process-id>
Hope it helps!
You can always use nohup to run any scripts as background process.
nohup python script.py
This will run your script in the background and also append its logs to the nohup.out file, which will be located in the directory where script.py is stored.
Make sure you close the terminal rather than pressing Ctrl + C; this allows it to keep running in the background even after you log out.
To stop it from running, SSH into the Pi again, run ps -ef | grep nohup, and then kill -9 XXXXX,
where XXXXX is the PID you get from the ps command.
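If you are sure the script name won't match anything else, pkill is a shorter alternative:
$ pkill -f script.py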
I've always found a detached screen process to be best for use cases such as these.
Run:
screen -m -d sudo python app1c.py
I was trying to run my Flask app for testing in my GitHub CI, and the step that ran the app was getting stuck forever. The reason was that it never released the command line.
The best solution I found was a combination of two of the other responses here:
nohup python script.py &
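In a CI step it also helps to save the PID right away, so a later step can shut the app down cleanly (a plain-shell sketch):
$ nohup python script.py &
$ echo $! > app.pid
$ kill "$(cat app.pid)"   # in the cleanup step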
I followed this tutorial to set up Gunicorn to run Django on a VPS, this is working perfectly fine and the web server is running on Nginx.
I created a separate manage.py command that I want to run Async using a worker, I am unsure how to integrate this through Gunicorn.
This is a follow up to Run code on first Django start, where the recommendation was to create a separate manage.py command and then run it as a separate worker process through Gunicorn.
Gunicorn's purpose here is to serve the Django project using WSGI, it doesn't use manage.py at all. You should call anything related to manage.py directly:
$ cd <projectdir>
$ source myprojectenv/bin/activate
$ python manage.py <your command here>
For setting it up as a worker, you can either set a cron job that points to the Python binary in the virtualenv, or you can consider a Celery setup with the process management tool of your choice (supervisord, Docker, etc.).
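For the cron option, an entry along these lines would run the command every five minutes (both paths are placeholders for your own virtualenv and project directory):
*/5 * * * * /home/user/myprojectenv/bin/python /home/user/projectdir/manage.py yourcommand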
I am trying to create my first Django project and I've been following a tutorial for the setup here (so far this tutorial has been very helpful).
The only thing I'm doing differently is running everything in bash rather than the DOS command prompt (something else I'm new to).
My problem is that nothing happens in bash when I execute $ python manage.py runserver;
however, if I go to http://127.0.0.1:8000/ the server is running and I get the Django welcome page.
But bash just stays frozen with a blank line after the command I executed. Then if I do a keyboard interrupt, I can enter new commands in bash, but if I go back to http://127.0.0.1:8000 the server isn't running and I get 'webpage not available'.
I need to know why I can't execute new commands in bash after I have executed $ python manage.py runserver.
When you run python manage.py runserver, something is happening: there is a web service listening on http://127.0.0.1:8000/.
The reason bash appears to be frozen is that runserver is "holding" it as a foreground process; it keeps serving requests until you interrupt it.
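If you want the prompt back while the dev server keeps running, you can use ordinary shell job control:
$ python manage.py runserver &
$ fg
& starts the server as a background job, and fg brings it back to the foreground when you want to stop it with Ctrl+C.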
I'm developing a Python project with Django. When we run the Python/Django application, we need to open the command prompt and type in python manage.py runserver. That's OK for the development server, but for production it looks funny. Is there any way to run the Python/Django project without opening the command prompt?
The deployment section of the documentation details the steps to configure servers to run Django in production.
runserver is to be used strictly for development and should never be used in production.
You run the runserver command only while you develop. After you deploy, the client does not need to run the python manage.py runserver command; calling the URL will execute the required view, so this need not be a concern.
If you are using Linux, I wrote a pretty basic script which I always use when I don't want to type this command.
Note: you really should use runserver only for development :)
#!/bin/bash
# Change "IP-Address" to your current IP address and "Port" to your port.
# Run ifconfig to find your IP address.
python manage.py runserver IP-Address:Port
Just name it runserver.sh and execute it like this in your terminal:
./runserver.sh
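Remember to make the script executable first:
$ chmod +x runserver.sh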