I have a Django application served with Waitress (Gunicorn doesn't work on Windows), since it's production code running on Windows Server 2012. I want the Django application to run in daemon mode; is that possible?
Daemon mode: the app running without a command prompt opening or staying visible. It would also be helpful to keep the shell usable without closing the server, and to have the app auto-start if the system has to restart for some reason.
Note:
Limitations: The project cannot be moved to a UNIX-based system.
Third-party applications, such as standalone .exe files, cannot be used.
Docker cannot be used because it consumes too much space.
For production:
Create a file server.py at the same level as manage.py and add the following:
from waitress import serve
from myapp.wsgi import application  # "myapp" is the Django project package that contains wsgi.py

if __name__ == '__main__':
    # listen on all interfaces on port 8000; adjust host/port as needed
    serve(application, host='0.0.0.0', port=8000)
Then start it from PowerShell:
Start-Process python -NoNewWindow -ArgumentList "server.py"
You can close the terminal after that and the server keeps running.
If you later want to stop it, find the process with Get-Process and then end it with taskkill (or Stop-Process).
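If you want an easier handle for stopping it later, one option (an illustrative sketch, not part of the original answer) is to have server.py record its own PID when it starts:

# optional addition near the top of server.py:
# write this process's id to a file so the server can be stopped later
import os
from pathlib import Path
Path('server.pid').write_text(str(os.getpid()))

You can then stop the server from PowerShell with taskkill /PID (Get-Content server.pid) /F.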
Running with CMD:
START "myapp" /B python server.py
Running in PowerShell 7+ (not cmd):
Just add & at the end of the command to run it as a background job.
Another solution is to use Docker; inside the container you can use Gunicorn or any other Linux-only feature.
I have set up Flask on my Raspberry Pi and I am using it for the sole purpose of acting as a server for an xml file which I created with a Python script to pass data to an iPad app (iRule).
My RPI is set up as headless and my access is with Windows 10 using PuTTY, WinSCP and TightVNC Viewer.
I run the server by opening a terminal window and running the following command:
sudo python app1c.py
This sets up the server and I can access my xml file quite well. However, when I turn off the Windows machine and close the PuTTY session, the Flask server shuts down!
How can I set it up so that the Flask server continues even when the Windows machine is turned off?
I read in the Flask documentation:
While lightweight and easy to use, Flask’s built-in server is not suitable for production as it doesn’t scale well and by default serves only one request at a time.
Then they go on to give examples of how to deploy your Flask application to a WSGI server! Is this necessary given the simple application I am dealing with?
Use:
$ sudo nohup python app1c.py > log.txt 2>&1 &
nohup lets a command, process, or shell script keep running in the background after you log out of the shell.
> log.txt: redirects stdout to this file.
2>&1: redirects stderr to stdout.
The final & runs the command in the background of the current shell.
Install the Node package forever from https://www.npmjs.com/package/forever
Then use
forever start -c python your_script.py
to start your script in the background. Later you can use
forever stop your_script.py
to stop the script.
You have multiple options:
Easy: detach the process with &, for example:
$ sudo python app1c.py &
Medium: install tmux with apt-get install tmux,
launch tmux, start your app as before, and detach with CTRL+B followed by D.
More complex:
Run your Flask script with a WSGI server - uWSGI, Gunicorn, nginx (see the sketch below).
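For illustration only (the module and object names are assumptions, not from the answers above): if the Flask instance inside app1c.py is called app, a pure-Python WSGI server such as Waitress can serve it with a tiny wrapper script, which you can then keep alive with any of the options above:

# serve_app.py - hypothetical wrapper; adjust the import to match your project
from waitress import serve
from app1c import app  # assumes app1c.py defines a Flask instance named `app`

serve(app, host='0.0.0.0', port=8000)  # listen on all interfaces, port 8000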
I had been stressing over this lately, so I decided to dig deeper.
Use PM2 for things like this. I also use it for a NodeJS app and a Python app on a single server:
pm2 start app.py --interpreter python3
Use:
$ sudo python app1c.py >> log.txt 2>&1 &
>> log.txt appends all stdout to the log.txt file (you can check the application logs there).
2>&1 redirects stderr to the same place (so the error logs also end up in log.txt).
The & at the end makes it run in the background.
You get the process id immediately after executing this command, which you can use to monitor or verify it:
$ sudo ps -ef | grep <process-id>
Hope it helps!
You can always use nohup to run any script as a background process.
nohup python script.py
This will run your script in the background and append its output to a nohup.out file located in the directory where script.py is stored.
Make sure you close the terminal rather than pressing Ctrl + C; this lets it keep running in the background even after you log out.
To stop it, SSH into the Pi again, run ps -ef | grep nohup to find the PID, then run kill -9 XXXXX,
where XXXXX is the PID reported by ps.
I've always found a detached screen process to be best for use cases such as these.
Run:
screen -m -d sudo python app1c.py
I was trying to run my Flask app for testing in my GitHub CI, and the step where I was running the app was getting stuck forever. The reason was that it never released the command line.
The best solution I found was a combination of two other answers here:
nohup python script.py &
I am looking for help deploying my Flask app. I've already written the app and it works well. I'm currently using the following command in the directory of my Flask code:
sudo uwsgi --socket 0.0.0.0:70 --protocol=http -w AppName:app --buffer-size=32768
This is on my Amazon Lightsail instance. I have the instance linked to a static public IP, and if I navigate to the website, it works great. However, to get the command to continuously run in the background even after logging out of the Lightsail, I first start a screen command, execute the above line of code, and then detach the screen using ctrl-a-d.
The problem is, if the app crashes (which is understandable since it is very large and under development), or if the command is left running for too long, the process is killed, and it is no longer being served.
I am looking for a better method of deploying a flask app on Amazon Lightsail so that it will redeploy the app in the event of a crash without any interaction from myself.
Generally you would write your own systemd unit file to keep your application running, restart it automatically when it crashes, and start it when the instance boots (a minimal sketch follows the links below).
There are many tutorials out there showing how to write such a unit file. Some examples:
Systemd: Service File Examples
Creating a Linux service with systemd
How to write startup script for Systemd?
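As a hedged illustration (the paths, user name, and service name are assumptions, not taken from the linked tutorials), a minimal unit file for the uWSGI command above, saved as /etc/systemd/system/myflaskapp.service, might look like:

[Unit]
Description=Flask app served by uWSGI
After=network.target

[Service]
User=ubuntu
WorkingDirectory=/home/ubuntu/myapp
ExecStart=/usr/local/bin/uwsgi --socket 0.0.0.0:70 --protocol=http -w AppName:app --buffer-size=32768
Restart=always

[Install]
WantedBy=multi-user.target

Enable it with sudo systemctl enable --now myflaskapp.service; the Restart=always line is what restarts the app automatically after a crash.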
You can use pm2
Starting an application with PM2 is straightforward. It will auto-discover the interpreter to run your application based on the script extension. This can be configured via the ecosystem config file.
All you need is to install pm2 and then run:
pm2 start app.py
Great, this application will now run forever, meaning that if the process exits or throws an exception it will be automatically restarted. If you exit the console and connect again you will still be able to check the application state.
To list the applications managed by PM2, run:
pm2 ls
You can also check the logs:
pm2 logs
Keeping Processes Alive at Server Reboot
If you want to keep your application online across unexpected (or expected) server restarts, you will want to set up an init script that tells your system to boot PM2 and your applications.
It’s really simple with PM2, just run this command (without sudo):
pm2 startup
Reference: PM2 - Manage Python Processes
I try to open a directory using xdg-open in Ubuntu. It works if I run xdg-open ./dir in terminal.
I have a Flask web app that opens directories using xdg-open in some situations. When I start the app from the terminal in development mode (by running $ flask run) it works and opens all directories without any problems. But when I start it in production mode using Nginx & Gunicorn it returns:
xdg-open: no method available for opening ./test
The result is exactly the same as when I run xdg-open in a non-graphical terminal (Ctrl+Alt+F1).
What should I do?
Finally I found the solution.
I just have to set the $DISPLAY environment variable to :0 and then run the command, something like this:
import os, subprocess

# copy the current environment and point DISPLAY at the local X server
env = dict(os.environ)
env['DISPLAY'] = ':0'
subprocess.Popen('xdg-open ./some_folder', env=env, shell=True)
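A slightly more compact variant (my own sketch, not from the original answer) passes the arguments as a list, which avoids shell quoting issues:

import os, subprocess

# same idea: inherit the current environment but force DISPLAY to the local X server
env = dict(os.environ, DISPLAY=':0')
subprocess.Popen(['xdg-open', './some_folder'], env=env)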
So, experimenting with Docker + Supervisord + Django app via uWSGI. I have the whole stack working fine, but need to tidy up the logging.
If I launch supervisor in non-daemon mode,
/usr/bin/supervisord -n
Then I get the logging output for supervisor played into the docker logs stdout. However, if supervisord is in daemon mode, its own logs get stashed away in the container filesystem, and the logs of its applications do too - in their own app__stderr/stdout files.
What I want is to log both supervisor, and application stdout to the docker log.
Is starting supervisord in non-daemon mode a sensible idea for this, or does it cause unintended consequences? Also, how do I get the application logs also played into the docker logs?
I accomplished this using supervisor-stdout.
Install supervisor-stdout in your Docker image:
RUN apt-get install -y python-pip && pip install supervisor-stdout
Supervisord Configuration
Edit your supervisord.conf to look like so:
[program:myprogram]
command=/what/ever/command
stdout_events_enabled=true
stderr_events_enabled=true
[eventlistener:stdout]
command = supervisor_stdout
buffer_size = 100
events = PROCESS_LOG
result_handler = supervisor_stdout:event_handler
A Docker container is like a Kleenex: you use it, then you drop it. To stay "alive", a container needs something running in the foreground (whereas daemons run in the background); that's why you are using Supervisord.
So you need to "redirect/add/merge" the process output (access and error) into the Supervisord output you see when running your container.
As Drew said, everyone uses https://github.com/coderanger/supervisor-stdout to achieve this (in my opinion it should be added to the supervisord project!). One thing Drew didn't mention: you may need to add
stdout_logfile=/dev/stdout
stdout_logfile_maxbytes=0
to the supervisord program configuration block (see the combined block below).
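For illustration (the program name and command are the same placeholders used above), the combined program block would then look like:

[program:myprogram]
command=/what/ever/command
stdout_events_enabled=true
stderr_events_enabled=true
stdout_logfile=/dev/stdout
stdout_logfile_maxbytes=0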
Something else very useful: if your process logs to a file instead of stdout, you can ask supervisord to watch it:
[program:php-fpm-log]
command=tail -f /var/log/php5-fpm.log
stdout_events_enabled=true
stderr_events_enabled=true
This will redirect the php5-fpm.log content to stdout, and then to supervisord's stdout via supervisor-stdout.
supervisor-stdout requires installing python-pip, which downloads roughly 150 MB; for a container, I think that is a lot just to install another tool.
Redirecting the log file to /dev/stdout works for me:
stdout_logfile=/dev/stdout
stdout_logfile_maxbytes=0
http://veithen.github.io/2015/01/08/supervisord-redirecting-stdout.html
I agree that not using daemon mode sounds like the best solution, but I would probably employ the same strategy you would use with actual physical servers or some kind of VM setup: centralized logging.
You could use something self-hosted like Logstash inside the container to collect logs and send them to a central server, or use a commercial service like Loggly or Papertrail to do the same.
Today's best practice is to keep Docker images minimal. For me, the ideal container for a Python application contains just my code, its supporting libraries, and something like uWSGI if it is necessary.
I published one solution at https://github.com/msgre/uwsgi_logging. It is a simple Django application behind uWSGI, configured to write the logs from both uWSGI and the Django app to the container's stdout without needing supervisord.
I had the same problem with my Python app (Flask). The solution that worked for me was to:
Start supervisord in nodaemon mode (supervisord -n)
Redirect the log to /proc/1/fd/1 instead of /dev/stdout
Set these two environment variables in my Docker image: PYTHONUNBUFFERED=True and PYTHONIOENCODING=UTF-8
Just add the lines below to the relevant program section of your supervisor.ini config file:
redirect_stderr=true
stdout_logfile=/proc/1/fd/1
Export these variables to the application's (Linux) environment:
$ export PYTHONUNBUFFERED=True
$ export PYTHONIOENCODING=UTF-8
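As a consolidated sketch (the program name and command are placeholders, and the environment= and stdout_logfile_maxbytes=0 lines are my assumptions rather than part of the original answer), the same settings can live together in one supervisord program block:

[program:myflaskapp]
command=python /app/app.py
redirect_stderr=true
stdout_logfile=/proc/1/fd/1
stdout_logfile_maxbytes=0
environment=PYTHONUNBUFFERED="True",PYTHONIOENCODING="UTF-8"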
Indeed, starting supervisord in non-daemon mode is the best solution.
You could also use volumes to mount supervisord's logs to a central place.