RPi 1B with a V1 camera.
A Python script takes a picture when a pushbutton wired to a GPIO pin is pressed. The picture is then emailed via Mutt.
Everything works fine when run step by step.
But it does not do as intended when launched automatically at startup.
import subprocess
from datetime import datetime
from gpiozero import Button

button = Button(17)
while True:
    button.wait_for_press()
    time = datetime.now()
    filename = "capture-%04d%02d%02d-%02d%02d%02d.jpg" % (time.year, time.month, time.day, time.hour, time.minute, time.second)
    subprocess.call("raspistill -t 500 -o %s" % filename, shell=True)
    subprocess.call("echo '' | mutt -s 'Someone at the door' -i messageBody.txt myname@mailprovider.com -a %s" % filename, shell=True)
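The shell quoting in that last line is fragile (quotes nested inside a quoted string). A minimal sketch of the same two calls using argument lists instead, which sidesteps shell quoting entirely; the raspistill/mutt flags and the address are taken from the command above and assumed correct for your mutt version:

```python
import subprocess
from datetime import datetime

def capture_filename(now):
    # Timestamped name, e.g. capture-20240102-030405.jpg
    return now.strftime("capture-%Y%m%d-%H%M%S.jpg")

def snapshot_and_mail(filename):
    # Argument lists need no shell, so there is nothing to quote or escape.
    subprocess.run(["raspistill", "-t", "500", "-o", filename])
    # An empty stdin replaces the `echo '' |` pipe from the original command.
    subprocess.run(
        ["mutt", "-s", "Someone at the door", "-i", "messageBody.txt",
         "-a", filename, "--", "myname@mailprovider.com"],
        input=b"\n",
    )
```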
Everything works fine when typing:
$ python raspicam.py
I get a nice email within seconds with the picture attached.
Next logical step is to get this script to be launched at startup:
$ nano launcher.sh
#!/bin/sh
# launcher.sh
cd /
cd home/pi
python doorbell02.py
cd /
$ chmod 755 launcher.sh
$ sh launcher.sh
Then get it to be launched at startup via cron :
$ mkdir logs
$ sudo crontab -e
add: @reboot sh /home/pi/launcher.sh >/home/pi/logs/cronlog 2>&1
At the next reboot everything works fine except sending the mail with mutt.
$ ps aux shows that my python script and the launcher script belong to "root"... is that where the trouble comes from?
root 475 0.0 0.0 0 0 ? S 16:51 0:00 [cifsd]
root 500 0.0 0.6 7932 2300 ? Ss 16:51 0:00 /usr/sbin/cron -f
root 502 0.0 0.6 9452 2384 ? S 16:51 0:00 /usr/sbin/CRON -f
root 506 0.0 0.3 1924 1148 ? Ss 16:51 0:00 /bin/sh -c sh /home/pi/launcher.sh >/home/pi/logs/cronlog 2>&1
root 511 0.0 0.2 1924 1108 ? S 16:51 0:00 sh /home/pi/launcher.sh
root 513 1.5 2.5 34348 9728 ? Sl 16:51 4:25 python doorbell02.py
I am also unable to get pdb to work alongside my script to get some log or debug info...
Some hints would be much appreciated.
Thank you very much for your time.
Try using absolute paths in your code.
It helped me in my case.
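One way to apply that advice in the Python script: resolve every file against a fixed base directory instead of relying on cron's working directory. The base path here is an assumption taken from the launcher script above; adjust it to your setup:

```python
import os

BASE_DIR = "/home/pi"  # assumed project directory; cron won't cd here for you

def abs_path(name, base=BASE_DIR):
    # Join the filename onto the base so the result is absolute regardless
    # of the working directory cron happened to start the job in.
    return os.path.join(base, name)
```

The same idea applies to messageBody.txt and to the capture filename passed to raspistill and mutt.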
Related
I have a Python script on a Pi 3 which sends sensor readings to a MySQL database, and I would like it to run at boot. I have tried several combinations of @reboot within crontab, but the database table never gets any fresh data.
The first line of the script is...
#!/usr/bin/python
and the script runs with:
./distance2.py
@reboot /home/pi/distance2.py &
# @reboot cd /pyhome/pi/Pimoroni/VL53L1X/Examples && sudo python distance2.py
# @reboot /home/pi/Pimoroni/VL53L1X/Examples/distance2.py &
(I moved the script from the Pimoroni directory for the sake of simplicity.)
When run from terminal, the script works perfectly:
pi@raspberrypi:~ $ ./distance2.py
distance.py
Display the distance read from the sensor.
Uses the "Short Range" timing budget by default.
Press Ctrl+C to exit.
VL53L1X Start Ranging Address 0x29
VL53L0X_GetDeviceInfo:
Device Name : VL53L1 cut1.1
Device Type : VL53L1
Device ID :
ProductRevisionMajor : 1
ProductRevisionMinor : 15
Distance: 0mm
(1L, 'record inserted.')
Distance: 60mm
(1L, 'record inserted.')
Distance: 60mm
grep shows it's running OK (unless the red colour of the script name means something bad?)
ps aux | grep distance2.py
pi 1530 0.0 0.5 7332 2032 pts/0 S+ 16:20 0:00 grep --color=auto distance2.py
What's crontab's @reboot got against my humble project?
Try the full path to python and write a log for investigation:
@reboot /usr/bin/python /home/pi/distance2.py > /home/pi/distance2_cronjoblog 2>&1
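To see how the cron environment actually differs from your login shell, a small stdlib-only diagnostic like this can be run both ways and the two outputs compared:

```python
import getpass
import os
import sys

def environment_report():
    # The values that most often differ between a login shell and cron:
    # user, working directory, PATH, HOME, and which python binary runs.
    return {
        "user": getpass.getuser(),
        "cwd": os.getcwd(),
        "PATH": os.environ.get("PATH", ""),
        "HOME": os.environ.get("HOME", ""),
        "python": sys.executable,
    }

if __name__ == "__main__":
    for key, value in environment_report().items():
        print("%s=%s" % (key, value))
```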
I'm having trouble running my PHP script via crontab. The script is PHP calling Python to take data from a DHT22 sensor and add it to a MySQL database, then show the result on an HTML page. When I run it manually it works fine.
Here is the line I try to run every minute in crontab:
* * * * * /usr/bin/php /var/www/html/cron/run.php
As mentioned, the script works fine when run from the command line, but when I add it to crontab it does nothing.
So far I have added the Apache2 user to the gpio group with sudo adduser www-data gpio and also made my Python file executable with sudo chmod +x /var/www/html/cron/python.py
When I type: ps -ef | grep cron I get:
root 247 1 0 18:50 ? 00:00:00 /usr/sbin/cron -f
pi 1794 1119 0 19:48 pts/0 00:00:00 grep --color=auto cron
Any help is welcome :)
Simple solution: don't use sudo... just crontab -e
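The distinction matters because `sudo crontab -e` and plain `crontab -e` install the job for different users. A one-line check inside the script makes it obvious which crontab is actually running it:

```python
import os

def running_as_root():
    # Jobs installed with `sudo crontab -e` run with effective uid 0;
    # jobs from the pi user's own crontab run with that user's uid.
    return os.geteuid() == 0
```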
My Docker container runs gunicorn, which points to "myapp.py", which uses Flask.
cat Dockerfile:
FROM python:3.7
<snip no important>
USER nobody
ENTRYPOINT ["/usr/sbin/flask-docker-entrypoint.sh"]
EXPOSE 8000
flask-docker-entrypoint.sh:
#!/bin/bash
/usr/local/bin/gunicorn myapp:app -c /local/gunicorn.conf.py
All works well!
Docker daemon logging is set to 'json-file'. I tell gunicorn to log to stdout (in version 20 that is the default). I can also send logs from myapp.py to 'docker logs' with simple logging statements. Why is this?
ps -ef
UID PID PPID C STIME TTY TIME CMD
nobody 1 0 0 22:01 ? 00:00:00 /bin/bash /usr/sbin/flask-docker-entrypoint.sh
nobody 12 1 0 22:01 ? 00:00:00 /usr/local/bin/python /usr/local/bin/gunicorn myapp:app -c /external/
nobody 15 12 0 22:01 ? 00:00:00 /usr/local/bin/python /usr/local/bin/gunicorn myapp:app -c /external/
nobody 57 0 7 22:44 pts/0 00:00:00 bash
nobody 62 57 0 22:44 pts/0 00:00:00 ps -e
flask-docker-entrypoint.sh is PID 1, so it logs to stdout; I get that. But do all children of the entrypoint also inherit the ability to log to stdout? It seems gunicorn is not PID 1 and myapp.py is not PID 1, yet both log to stdout?
Thank you
The default behavior in Unix-like environments is for a process to inherit its parent's stdout (and stdin and stderr). You can demonstrate this in your local shell easily enough:
#!/bin/sh
# This is script1
./script2
#!/bin/sh
# This is script2
echo hi there
$ ./script1
hi there
$ ./script1 > log
$ cat log
hi there
In the last example, script1's output goes to the console or is redirected into a log file; when it runs script2 as a subprocess, script2 inherits that same stdout.
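The same inheritance can be shown from Python: a child process writes to whatever stdout the parent hands it, and inherits the parent's own stdout if nothing is passed. A small sketch:

```python
import subprocess
import sys

def child_output_via_file(log_path):
    # Give the child an explicit stdout (a file); without the stdout=
    # argument it would simply inherit this process's stdout instead.
    with open(log_path, "w") as log:
        subprocess.run([sys.executable, "-c", "print('hi there')"], stdout=log)
    with open(log_path) as log:
        return log.read()
```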
The reason gunicorn isn't PID 1 is that you have a shell wrapper. You can use the exec shell built-in to replace the shell process with the thing it wants to run:
#!/bin/sh
exec /usr/local/bin/gunicorn myapp:app -c /local/gunicorn.conf.py
Unless you need to do more setup, it might be simpler to put the command directly into the Dockerfile. (CMD is easier to override at runtime to do things like get debugging shells if you need to; this replaces your existing ENTRYPOINT line.)
CMD ["gunicorn", "myapp:app", "-c", "/local/gunicorn.conf.py"]
I am trying to start a python script on my VM from my local Mac OS
I did
ssh -i /key/path/id_rsa root@111.11.1.0 "sleep 5s; cd /root/Server;pkill -f server.py;./server.py;"
Result
It SSHes in, quickly runs those commands, and logs me out just as quickly. I was expecting the SSH session to stay open.
My script is NOT running...
ps -aux | grep python
root 901 0.0 0.2 553164 18584 ? Ssl Jan19 20:37 /usr/bin/pytho -Es /usr/sbin/tuned -l -P
root 15444 0.0 0.0 112648 976 pts/0 S+ 19:16 0:00 grep --color=auto python
If I do this it works
ssh -i /key/path/id_rsa root@111.11.1.0 "sleep 5s; cd /root/Server"
Then
./server.py;
Then, it works.
Am I missing anything?
You might need to state the shell starting your script, i.e. /bin/bash server.py:
ssh -i /key/path/id_rsa root@111.11.1.0 "sleep 5s; cd /root/Server; pkill -f server.py; /bin/bash ./server.py;"
If you would like to start the script and leave it running even after you end your ssh session, you could use nohup. Notice that you need to put the process in the background and redirect stdin, stdout and stderr to completely detach from the remote process:
ssh -i /key/path/id_rsa root@111.11.1.0 "sleep 5s; cd /root/Server; nohup /bin/bash ./server.py < /dev/null > std.out 2> std.err &"
It seems the reason your ssh command returns immediately is that the call to pkill -f server.py also terminates the ssh session itself, since its command line also contains server.py.
I don't have my regular MacBook Pro here to test with, but I think that adding another semicolon and ending the command line with /bin/bash might do it.
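For the detached variant, the same thing can be done from Python on the remote side. This sketch mirrors the nohup redirections above (the std.out/std.err names are the same placeholders):

```python
import os
import subprocess

def launch_detached(cmd, stdout_path="std.out", stderr_path="std.err"):
    # Mirror `nohup cmd < /dev/null > std.out 2> std.err &`:
    # a new session detaches the child from the terminal, and all three
    # standard streams point away from the ssh session.
    with open(os.devnull, "rb") as devnull, \
         open(stdout_path, "ab") as out, \
         open(stderr_path, "ab") as err:
        proc = subprocess.Popen(cmd, stdin=devnull, stdout=out, stderr=err,
                                start_new_session=True)
    return proc.pid
```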
I'm encountering the following problem:
I have this simple script, called test2.sh:
#!/bin/bash
function hello() {
echo "hello world"
}
hello
When I run it from the shell, I get the expected result:
$ ./test2.sh
hello world
However, when I try to run it from Python (2.7.?) I get the following:
>>> import commands
>>> cmd="./test2.sh"
>>> commands.getoutput(cmd)
'./test2.sh: 3: ./test2.sh: Syntax error: "(" unexpected'
I believe it somehow runs the script with "sh" rather than bash. I think so because when I run it with sh I get the same error message:
$ sh ./test2.sh
./test2.sh: 3: ./test2.sh: Syntax error: "(" unexpected
In addition, when I run the command from Python with a preceding "bash", it works:
>>> cmd="bash ./test2.sh"
>>> commands.getoutput(cmd)
'hello world'
My question is: why does Python choose to run the script with sh instead of bash even though I added the #!/bin/bash line at the beginning of the script? And how can I make it right? (I don't want a preceding 'bash' in the Python code, since my script is run from Python by distant machines which I can't control.)
Thanks!
There seems to be some other problem - the shebang and commands.getoutput should work properly as you show here. Change the shell script to just:
#!/bin/bash
sleep 100
and run the app again. Check with ps f what the actual process tree is. It's true that getoutput calls sh -c ..., but this shouldn't change which shell executes the script itself.
From a minimal test as described in the question, I see the following process tree:
11500 pts/5 Ss 0:00 zsh
15983 pts/5 S+ 0:00 \_ python2 ./c.py
15984 pts/5 S+ 0:00 \_ sh -c { ./c.sh; } 2>&1
15985 pts/5 S+ 0:00 \_ /bin/bash ./c.sh
15986 pts/5 S+ 0:00 \_ sleep 100
So in isolation, this works as expected - python calls sh -c { ./c.sh; } which is executed by the shell specified in the first line (bash).
Make sure you're executing the right script - since you're using ./test2.sh, double-check you're in the right directory and executing the right file. (Does print open('./test2.sh').read() return what you expect?)
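To confirm the shebang really is honoured when the file is executed directly (rather than handed to sh), here is a self-contained check, written with the modern subprocess module rather than the legacy commands module, that writes a script, marks it executable, and runs it directly:

```python
import os
import stat
import subprocess
import tempfile

def run_script(body):
    # Write the script, mark it executable, and execute it directly so
    # the kernel honours its #! line.
    fd, path = tempfile.mkstemp(suffix=".sh")
    try:
        with os.fdopen(fd, "w") as f:
            f.write(body)
        os.chmod(path, os.stat(path).st_mode | stat.S_IXUSR)
        result = subprocess.run([path], capture_output=True, text=True)
        return result.stdout
    finally:
        os.unlink(path)
```

Running it with a `#!/bin/bash` body containing a `function` definition, and again with a `#!/bin/sh` body, shows which interpreter is actually being chosen.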