Hot reloading a Python process for code reload - python

Is there any way to hot-reload Python modules in a running Python process? In the usual cases we can run kill -HUP <pid> for servers like squid, nginx, gunicorn. My running processes are:
root 6 0.6 0.9 178404 39116 ? S 14:21 0:00 python3 ./src/app.py --config ./conf/config.yml
root 7 0.0 1.0 501552 43404 ? Sl 14:21 0:00 python3 ./src/app.py --config ./conf/config.yml
root 8 0.0 1.0 501808 43540 ? Sl 14:21 0:00 python3 ./src/app.py --config ./conf/config.yml

Is the question about reloading a Sanic app? If yes, then there is a hot reload built into the server.
app.run(debug=True)
Or, if you want the reload without debugging:
app.run(auto_reload=True)
See the docs.
Or, if this is a question in general, check out aoiklivereload
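For a plain (non-Sanic) process, you can wire up the kill -HUP convention yourself with a signal handler that reloads the module. A minimal sketch; since the real application module isn't shown, a stand-in module is used here:

```python
import importlib
import os
import signal

# Hypothetical stand-in for your application module; swap in your own.
import json as app_module

reloads = {"count": 0}

def on_hup(signum, frame):
    # Re-execute the module's source in place. Objects created from the old
    # code keep their old classes, so this suits stateless handler code best.
    importlib.reload(app_module)
    reloads["count"] += 1

signal.signal(signal.SIGHUP, on_hup)

# Simulate what `kill -HUP <pid>` would do from a shell:
os.kill(os.getpid(), signal.SIGHUP)
```

Note that reload only rebinds names inside the module; long-lived objects and threads created before the reload keep running the old code, which is why dedicated tools like aoiklivereload exist.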

Related

Python script not working when launched at startup of RPi

RPi 1B with a V1 camera.
A Python script takes a picture when a pushbutton hooked to a GPIO pin is pressed. The picture is then emailed via mutt.
Everything works fine when run step by step.
But it does not behave as intended when launched automatically at startup.
import subprocess
from datetime import datetime
from gpiozero import Button

button = Button(17)
while True:
    button.wait_for_press()
    time = datetime.now()
    filename = "capture-%04d%02d%02d-%02d%02d%02d.jpg" % (time.year, time.month, time.day, time.hour, time.minute, time.second)
    subprocess.call("raspistill -t 500 -o %s" % filename, shell=True)
    subprocess.call('echo "" | mutt -s "Someone at the door" -i messageBody.txt myname@mailprovider.com -a %s' % filename, shell=True)
Everything works fine when typing:
$ python raspicam.py
I get a nice email within seconds with picture attached to it.
Next logical step is to get this script to be launched at startup:
$ nano launcher.sh
#!/bin/sh
# launcher.sh
cd /
cd home/pi
python doorbell02.py
cd /
$ chmod 755 launcher.sh
$ sh launcher.sh
Then get it to be launched at startup via cron:
$ mkdir logs
$ sudo crontab -e
add: @reboot sh /home/pi/launcher.sh >/home/pi/logs/cronlog 2>&1
At the next reboot everything works fine except sending mail with mutt.
$ ps aux shows that my python script and the launcher script belong to "root"... is that where the trouble comes from?
root 475 0.0 0.0 0 0 ? S 16:51 0:00 [cifsd]
root 500 0.0 0.6 7932 2300 ? Ss 16:51 0:00 /usr/sbin/cron -f
root 502 0.0 0.6 9452 2384 ? S 16:51 0:00 /usr/sbin/CRON -f
root 506 0.0 0.3 1924 1148 ? Ss 16:51 0:00 /bin/sh -c sh /home/pi/launcher.sh >/home/pi/logs/cronlog 2>&1
root 511 0.0 0.2 1924 1108 ? S 16:51 0:00 sh /home/pi/launcher.sh
root 513 1.5 2.5 34348 9728 ? Sl 16:51 4:25 python doorbell02.py
I am also unable to get pdb to work alongside my script to get some log or debug info...
Some hints would be much appreciated.
Thank you very much for your time.
Try using absolute paths in your code.
It helped in my case.
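Concretely, "absolute paths" means the capture file, the message body, and the attachment must not depend on cron's working directory (cron does not start your script in /home/pi). A sketch of just the path-building part; the /home/pi base and file names are assumptions taken from the question:

```python
import os
from datetime import datetime

BASE = "/home/pi"  # assumed home directory, from the launcher script above

now = datetime.now()
filename = os.path.join(BASE, now.strftime("capture-%Y%m%d-%H%M%S.jpg"))
body = os.path.join(BASE, "messageBody.txt")

# Both shell commands now work no matter which directory cron starts in.
capture_cmd = "raspistill -t 500 -o %s" % filename
mail_cmd = 'echo "" | mutt -s "Someone at the door" -i %s myname@mailprovider.com -a %s' % (body, filename)
```

The same applies to mutt itself: when run from root's cron it reads root's ~/.muttrc (or none at all), which is a likely reason mail fails at boot while working in an interactive session.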

Processes Not Appearing on Beowulf Cluster

More than a week ago, I ran nohup python3 -u script.py on an Ubuntu beowulf cluster I was connected to via SSH. I've now gone back wanting to kill off these processes (the program uses multiprocessing with a Pool object), but I haven't been able to, as I can't find the PIDs. I know the processes are still running because nohup.out is still being appended to and other data is being generated, but nothing relevant appears when I run commands like ps or top. For example, when I run ps -x -U mkarrmann, I get:
PID TTY STAT TIME COMMAND
1296920 ? Ss 0:00 /lib/systemd/systemd --user
1296929 ? S 0:00 (sd-pam)
1296937 ? Ssl 0:00 /usr/bin/pulseaudio --daemonize=no --log-target=journal
1296939 ? SNsl 0:00 /usr/libexec/tracker-miner-fs
1296944 ? Ss 0:00 /usr/bin/dbus-daemon --session --address=systemd: --nofork --nopidfile --systemd-activation --syslog-only
1296945 ? R 0:00 sshd: mkarrmann@pts/0
1296960 ? Ssl 0:00 /usr/libexec/gvfsd
1296965 ? Sl 0:00 /usr/libexec/gvfsd-fuse /run/user/3016/gvfs -f -o big_writes
1296972 ? Ssl 0:00 /usr/libexec/gvfs-udisks2-volume-monitor
1296979 pts/0 Ss 0:00 -bash
1296980 ? Ssl 0:00 /usr/libexec/gvfs-gphoto2-volume-monitor
1296987 ? Ssl 0:00 /usr/libexec/gvfs-afc-volume-monitor
1296992 ? Ssl 0:00 /usr/libexec/gvfs-mtp-volume-monitor
1297001 ? Ssl 0:00 /usr/libexec/gvfs-goa-volume-monitor
1297005 ? Sl 0:00 /usr/libexec/goa-daemon
1297014 ? Sl 0:00 /usr/libexec/goa-identity-service
1297126 pts/0 R+ 0:00 ps -x -U mkarrmann
Or when I run ps -faux | grep py, I get:
root 975 0.0 0.0 34240 8424 ? Ss Jul28 0:00 /usr/bin/python3 /usr/bin/networkd-dispatcher --run-startup-triggers
root 1046 0.0 0.3 476004 245516 ? Ss Jul28 66:45 /usr/bin/python3 /usr/sbin/glustereventsd --pid-file /var/run/glustereventsd.pid
root 1275 0.0 0.0 20612 7732 ? S Jul28 0:00 \_ /usr/bin/python3 /usr/sbin/glustereventsd --pid-file /var/run/glustereventsd.pid
mkarrma+ 1297143 0.0 0.0 6380 736 pts/0 S+ 14:40 0:00 \_ grep --color=auto py
Do any of these actually correspond to my Python processes and I'm just missing it? Anything else that I should try? I feel like the only thing I haven't tried is manually parsing through /proc, but that obviously shouldn't be necessary so I'm sure I'm missing something else.
I'm happy to provide any additional information that could be helpful. Thanks!
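For what it's worth, parsing /proc is less grim than it sounds - it is essentially what pgrep -f does. A minimal sketch that matches against each process's full command line:

```python
import os

def find_pids(pattern):
    """Return PIDs whose /proc/<pid>/cmdline contains `pattern`."""
    pids = []
    for entry in os.listdir("/proc"):
        if not entry.isdigit():
            continue
        try:
            # cmdline is NUL-separated; join the argv items with spaces.
            with open("/proc/%s/cmdline" % entry, "rb") as f:
                cmdline = f.read().replace(b"\0", b" ").decode(errors="replace")
        except OSError:
            continue  # process exited between listdir() and open()
        if pattern in cmdline:
            pids.append(int(entry))
    return pids

# e.g. find_pids("script.py") to locate the nohup'ed workers
```

If this finds nothing either, the processes are likely running under a different session or host (e.g. on another cluster node via the scheduler), since /proc only shows processes on the machine you are logged into.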

CPU usage difference between ps aux and -ef

[user@centos-vm-02 ~]$ ps aux|grep python
user 4182 0.0 0.0 9228 1080 ? Ss 02:00 0:00 /bin/sh -c cd data/trandata && /usr/local/bin/python2.7 main.py >> /dev/null 2>&1
user 4190 0.1 0.1 341108 10740 ? Sl 02:00 0:52 /usr/local/bin/python2.7 main.py
user 4205 166 1.6 1175176 129312 ? Sl 02:00 901:39 /usr/local/bin/python2.7 main.py
user 10049 0.1 0.1 435856 10712 ? Sl 10:21 0:04 /usr/local/bin/python2.7 main.py
user 10051 71.1 2.5 948248 207628 ? Sl 10:21 28:42 /usr/local/bin/python2.7 main.py
user 10052 51.9 1.9 948380 154688 ? Sl 10:21 20:57 /usr/local/bin/python2.7 main.py
user 10053 85.9 0.9 815104 76652 ? Sl 10:21 34:41 /usr/local/bin/python2.7 main.py
user 11166 0.0 0.0 103240 864 pts/1 S+ 11:01 0:00 grep python
[user@centos-vm-02 ~]$ ps -ef|grep python
user 4182 4174 0 02:00 ? 00:00:00 /bin/sh -c cd /data/trandata && /usr/local/bin/python2.7 main.py >> /dev/null 2>&1
user 4190 4182 0 02:00 ? 00:00:52 /usr/local/bin/python2.7 main.py
user 4205 4190 99 02:00 ? 15:01:46 /usr/local/bin/python2.7 main.py
user 10049 1 0 10:21 ? 00:00:04 /usr/local/bin/python2.7 main.py
user 10051 10049 71 10:21 ? 00:28:47 /usr/local/bin/python2.7 main.py
user 10052 10049 51 10:21 ? 00:21:01 /usr/local/bin/python2.7 main.py
user 10053 10049 85 10:21 ? 00:34:45 /usr/local/bin/python2.7 main.py
user 11168 10904 0 11:01 pts/1 00:00:00 grep python
As you can see, I launch a Python process that spawns multiple processes; inside those processes multiple threads are started, and inside those threads further sub-threads are started.
The process tree looks like this:
main_process
--sub_process
----thread1
------sub_thread
------sub_thread
------sub_thread
------sub_thread
----thread2
----thread3
--sub_process
----......
In the output above, pid 4205 shows different CPU usage in ps aux and ps -ef: one is 166, the other is 99; 166 is also what top -c showed.
And I am sure that pid 4205 is one of the sub-processes, which means it could not use more than 100% of a CPU because of the GIL in Python.
So that's my question: why do ps -ef and ps aux show different values?
It's just a sampling artifact. Say a factory produces one car per hour. If you get there right before a car is made and leave right after a car is made, you can see two cars made in a span of time just over an hour, resulting in you thinking the factory is operating at near double capacity.
Update: Let me try to clarify the example. Say a factory produces one car per hour, on the hour. It is incapable of producing more than one car per hour. If you watch the factory from 7:59 to 9:01, you will see two cars produced (one at 8:00 and one at 9:00) in just over one hour (62 minutes). So you would estimate that the factory produces about two cars per hour, nearly double its actual production. That is what happened here. It's a sampling artifact caused by top checking the CPU counters at just the wrong time.
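The arithmetic behind the analogy, using those hypothetical numbers:

```python
# Watch from 7:59 to 9:01 (62 minutes) and observe the 8:00 and 9:00 cars.
cars_observed = 2
window_minutes = 62

# Naive rate estimate over the observation window:
estimated_per_hour = cars_observed / (window_minutes / 60)
print(round(estimated_per_hour, 2))  # 1.94 cars/hour, versus a true rate of 1.0
```

The same edge effect applies to CPU tick counters sampled over a short window, which is how a single-threaded process can momentarily appear to exceed 100%.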

uwsgi : why two processes are loaded per each app? [closed]

Closed. This question does not meet Stack Overflow guidelines. It is not currently accepting answers.
Closed 9 years ago.
root#www:~# ps aux | grep uwsgi
root 4660 0.0 0.0 10620 892 pts/1 S+ 19:13 0:00 grep --color=auto uwsgi
root 19372 0.0 0.6 51228 6628 ? Ss 06:41 0:03 uwsgi --master --die-on-term --emperor /var/www/*/uwsgi.ini
root 19373 0.0 0.1 40420 1292 ? S 06:41 0:03 uwsgi --master --die-on-term --emperor /var/www/*/uwsgi.ini
www-data 19374 0.0 1.9 82640 20236 ? S 06:41 0:03 /usr/local/bin uwsgi --ini /var/www/app2/uwsgi.ini
www-data 19375 0.0 2.4 95676 25324 ? S 06:41 0:03 /usr/local/bin uwsgi --ini /var/www/app3/uwsgi.ini
www-data 19385 0.0 2.1 90772 22248 ? S 06:41 0:03 /usr/local/bin uwsgi --ini /var/www/app2/uwsgi.ini
www-data 19389 0.0 2.0 95676 21244 ? S 06:41 0:00 /usr/local/bin uwsgi --ini /var/www/app3/uwsgi.ini
Above is the ps output of the uwsgi processes. The strange thing is that there are two instances loaded for each ini file - and I even have two uwsgi masters. Is this normal?
the deployment strategy for uwsgi is
have Emperor managed by upstart
Emperor searches for each uwsgi.ini in apps folder
uwsgi.conf for upstart:
# simple uWSGI script
description "uwsgi tiny instance"
start on runlevel [2345]
stop on runlevel [06]
exec uwsgi --master --die-on-term --emperor "/var/www/*/uwsgi.ini"
uwsgi.ini (I have two apps, and both have the same ini except for the app# numbering):
[uwsgi]
# variables
uid = www-data
gid = www-data
projectname = myproject
projectdomain = www.myproject.com
base = /var/www/app2
# config
enable-threads
protocol = uwsgi
venv = %(base)/
pythonpath = %(base)/
wsgi-file = %(base)/app.wsgi
socket = /tmp/%(projectdomain).sock
logto = %(base)/logs/uwsgi.log
You started it with the --master option, which spawns a master process to control the workers.
From the official documentation https://uwsgi-docs.readthedocs.org/en/latest/Glossary.html?highlight=master
master
uWSGI’s built-in prefork+threading multi-worker management mode, activated by flicking the master switch on. For all practical serving deployments it’s not really a good idea not to use master mode.
You should read http://uwsgi-docs.readthedocs.org/en/latest/Options.html#master
And also this thread might have some info for you. uWSGI: --master with --emperor spawns two emperors
It is generally not recommended to use --master and --emperor together.
My educated guess is that this question should indeed be moved to Server Fault.
But here is the answer:
You probably started the upstart script twice ;-)
Just try to kill the main root process with a SIGTERM and see if the child processes die too.
If you have run the upstart script twice, you will have one root process and two children remaining.

Celeryd launching too many processes

How do you ensure celeryd only runs as a single process? When I run manage.py celeryd --concurrency=1 and then ps aux | grep celery I see 3 instances running:
www-data 8609 0.0 0.0 20744 1572 ? S 13:42 0:00 python manage.py celeryd --concurrency=1
www-data 8625 0.0 1.7 325916 71372 ? S 13:42 0:01 python manage.py celeryd --concurrency=1
www-data 8768 0.0 1.5 401460 64024 ? S 13:42 0:00 python manage.py celeryd --concurrency=1
I've noticed a similar problem with celerybeat, which always runs as 2 processes.
As per this link, the number of processes would be 4: one main process, two child processes, and one celerybeat process.
Also, if you're using FORCE_EXECV, there's another process started to clean up semaphores.
If you use celery + django-celery in development with RabbitMQ or Redis as a broker, then it shouldn't use more than one extra thread (none if CELERY_DISABLE_RATE_LIMITS is set).
