I am in a hurry; I could probably figure this out myself, but I need some help so I don't lose too much time.
Currently, what I do to run a uWSGI instance with my ini file is just:
uwsgi --ini /home/myonlinesite/uwsgi.ini --pidfile /var/run/uwsgi_serv.pid
and then to stop:
uwsgi --stop /var/run/uwsgi_serv.pid
By the way, I have these commands inside an init script at /etc/init.d/uwsgi, so when I run /etc/init.d/uwsgi start it launches uWSGI with the ini config file, and when I run /etc/init.d/uwsgi stop it stops the uWSGI process by its pid.
The problem is that when I start the uWSGI service it runs normally and logs every HTTP request, every debug print and so on, but when I close PuTTY (which is how I connect to my VPS) all the uWSGI processes are killed and the site stops being served.
I do not know whether I only have to touch the pid file, or what else I need to do so that the uWSGI process keeps running after I close PuTTY.
Thanks in advance.
If you are setting the parameters in the command line, add the flag -d file.log to your command (-d stands for daemonize):
uwsgi --ini /home/myonlinesite/uwsgi.ini --pidfile /var/run/uwsgi_serv.pid -d file.log
If you are setting the parameters in a config file, add the following line in your config:
daemonize = /absolute/path/to/file.log
In both cases, uWSGI will run in the background and log everything to file.log. Given these options, there is no need to use nohup et al.
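For reference, a minimal uwsgi.ini sketch with the daemonize line in place (the module and bind address are assumptions for illustration; the other paths mirror the question):
[uwsgi]
# assumed application module and bind address, for illustration only
module = myonlinesite.wsgi
http = 0.0.0.0:8000
pidfile = /var/run/uwsgi_serv.pid
daemonize = /home/myonlinesite/uwsgi.log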
Using nohup to start the uWSGI process should solve your problem of the process stopping when you log out.
A tutorial
Be sure to add
daemonize = /path/to/logfile
to your config
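For example, reusing the command from the question (the trailing & puts the process in the background, and nohup keeps it running after you close PuTTY; output goes to nohup.out unless redirected):
nohup uwsgi --ini /home/myonlinesite/uwsgi.ini --pidfile /var/run/uwsgi_serv.pid &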
I am using django_q for some scheduling and automations in my django project.
I successfully configured all the needed stuff, but to get django_q running I have to type python manage.py qcluster on the server's command line, and after I close the shell session it doesn't work anymore.
The official django_q documentation says that there is no need for a supervisor, but in my case the cluster does not keep running.
Any ideas?
There are a few approaches you can use.
You could install the screen program to create a terminal session which stays around after logout. See also: https://superuser.com/questions/451057/keep-processes-alive-after-ssh-logout
You could use systemd to automatically start your qcluster. This has the advantage that it will start qcluster again if your server is rebooted. You'll want to write a service unit file with Type=simple. Here's a list of resources.
Here's an example unit file. (You may need to adapt this somewhat.)
[Unit]
Description=qcluster daemon
[Service]
User=<django user>
Group=<django group>
WorkingDirectory=<your working dir>
Environment=PATH=/sbin:/bin:/usr/sbin:/usr/bin:/usr/local/bin
ExecStart=<path to python> manage.py qcluster
Restart=always
[Install]
WantedBy=multi-user.target
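Assuming you save this as /etc/systemd/system/qcluster.service (the file name is an assumption), you would then reload systemd, enable the unit so it starts on boot, and start it:
systemctl daemon-reload
systemctl enable qcluster.service
systemctl start qcluster.service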
I have a shell script called kill.sh that helps me restart a python script I've written. I normally use pkill -f main.py to kill my forever-running python script. However, when I put the same command into a shell script it does not work.
My script
pkill -f main.py
ps aux | grep main.py # Still shows the process running.
Just executing pkill -f main.py on the bash command line works as expected. Why is this?
This is not a satisfactory answer, as I could not find the root cause of why pkill -f does not work in a script. I ended up using a systemd service file to manage my Python process. Here's an example, FYI.
[Unit]
Description=Service Name
[Service]
Environment=PYTHONUNBUFFERED=1
ExecStart=/path/to/python /path/to/python/script.py
Restart=on-failure
RestartSec=5s
WorkingDirectory=/python/project/dir/
Name the file main.service and place it in /lib/systemd/system/
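If systemd does not pick up the newly added unit right away, reloading its configuration usually helps:
systemctl daemon-reload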
Start the service: systemctl start main.service
Stop the service: systemctl stop main.service
Restart the service: systemctl restart main.service
Show status and output: systemctl status main.service -l
Now I don't have to worry about multiple processes running. If the program dies it'll even restart.
I have a Django project and I am using pykafka. I have created two files named producer.py and consumer.py inside the project. I have to change directory into the folder where these are present and then separately run python producer.py and consumer.py from the terminal. Everything works great.
I deployed my project online and the web app is running, so I want to run the producer and consumer automatically in the background. How do I do that?
EDIT 1: On my production server I did nohup python name_of_python_script.py & to execute it in the background. This works for the time being but is it a good solution?
You can create a systemd service MyKafkaConsumer.service under /etc/systemd/system with the following content:
[Unit]
Description=A Kafka Consumer written in Python
# include any other prerequisites here
After=network.target
[Service]
Type=simple
User=your_user
Group=your_user_group
WorkingDirectory=/path/to/your/consumer
ExecStart=/path/to/python consumer.py
TimeoutStopSec=180
Restart=on-failure
RestartSec=5
[Install]
WantedBy=multi-user.target
To start the service (and configure it to run on boot) you should run:
systemctl enable MyKafkaConsumer.service
systemctl start MyKafkaConsumer.service
To check its status:
systemctl status MyKafkaConsumer
And to see the logs:
journalctl -u MyKafkaConsumer -f
(or if you want to see the last 100 lines)
journalctl -u MyKafkaConsumer -n 100
You'd need to create a similar service for your producer too.
There are a lot of options for systemd services. You can refer to this article if you need any further clarifications. It shouldn't be hard to find guides and additional material online though.
I am very new to fabric. In my fabric file I want to restart gunicorn. For that I am killing the gunicorn process first and then starting it again.
It looks like:
def restart_gunicorn():
    run('ps ax|grep gunicorn')
    run('pkill gunicorn')
    run('gunicorn -b 0.0.0.0:8080 %(path)s/application/wsgi &' % env)
When I run this it gives me an error at pkill gunicorn, because at first I will not have any gunicorn process running. So I want a check like: if gunicorn processes are running, only then kill gunicorn; if no gunicorn processes are running, just start the gunicorn process.
How can I do this?
Need help. Thank you
You can just wrap the pkill call in settings(warn_only=True); it will then only give you a warning when no matching process exists, and the execution won't fail:
from fabric.api import env, run, settings

def restart_gunicorn():
    run('ps ax|grep gunicorn')
    with settings(warn_only=True):
        run('pkill gunicorn')
    run('gunicorn -b 0.0.0.0:8080 %(path)s/application/wsgi &' % env)
More info on settings context manager here: http://docs.fabfile.org/en/1.10/api/core/context_managers.html#fabric.context_managers.settings
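If you want an explicit check instead of relying on warn_only alone, a rough sketch along these lines should also work (assuming Fabric 1.x, where run() results expose .succeeded; pgrep exits non-zero when nothing matches):
from fabric.api import env, run, settings

def restart_gunicorn():
    with settings(warn_only=True):
        # pgrep returns a non-zero exit status when no gunicorn process exists
        result = run('pgrep gunicorn')
    if result.succeeded:
        run('pkill gunicorn')
    run('gunicorn -b 0.0.0.0:8080 %(path)s/application/wsgi &' % env)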
I added a bottle server that uses Python's cassandra library, but it exits with this error: Bottle FATAL Exited too quickly (process log may have details). The log shows this:
File "/usr/local/lib/python2.7/dist-packages/cassandra/cluster.py", line 1765, in _reconnect_internal
    raise NoHostAvailable("Unable to connect to any servers", errors)
So I tried to run it manually using supervisorctl start Bottle, and then it started with no issue. The conclusion: the Bottle service starts too fast (before the Cassandra service it needs is up), so a delay is needed!
This is what I use:
[program:uwsgi]
command=bash -c 'sleep 5 && uwsgi /etc/uwsgi.ini'
Not happy with the sleep hack, I created a startup script and launched supervisorctl start processname from there.
[program:startup]
command=/startup.sh
startsecs = 0
autostart = true
autorestart = false
startretries = 1
priority=1
[program:myapp]
command=/home/website/venv/bin/gunicorn /home/website/myapp/app.py
autostart=false
autorestart=true
process_name=myapp
startup.sh
#!/bin/bash
sleep 5
supervisorctl start myapp
This way supervisor will fire the startup script once, and that will start myapp after 5 seconds; mind the autostart=false and autorestart=true on myapp.
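After adding or changing these program sections, supervisor has to re-read its configuration before the new programs become available, e.g.:
supervisorctl reread
supervisorctl update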
I had a similar issue where starting 64 Python rq-worker processes using supervisorctl was raising CPU and RAM alerts at every restart. What I did was the following:
command=/bin/bash -c "sleep %(process_num)02d && virtualenv/bin/python3 manage.py rqworker --name %(program_name)s_my-rq-worker_%(process_num)02d default low"
Basically, before running the Python command, I sleep for N seconds, where N is the process number, which basically means supervisor will start one rq-worker process every second.
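For context, a sketch of how that command could sit in a full supervisor program section (the section name and directory are assumptions; numprocs tells supervisor to launch 64 instances and substitute %(process_num)02d for each):
[program:my-rq-worker]
; supervisor expands %(process_num)02d to 00..63 when numprocs=64
command=/bin/bash -c "sleep %(process_num)02d && virtualenv/bin/python3 manage.py rqworker --name %(program_name)s_my-rq-worker_%(process_num)02d default low"
process_name=%(program_name)s_%(process_num)02d
numprocs=64
directory=/path/to/your/django/project
autostart=true
autorestart=true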