I am running Raspbian on a Raspberry Pi. I have a Python script set in /etc/rc.local to run when the device boots. The script was working as expected until I added MySQL functionality to it. MySQL is correctly installed and configured, and the MySQL functionality in the script works as expected when the script is run manually from the shell. According to /var/log/syslog, however, when run from rc.local the script is unable to find the mysql module:
Mar 17 10:18:42 PressPi rc.local[464]: Traceback (most recent call last):
Mar 17 10:18:42 PressPi rc.local[464]: File "/home/pi/python/switch_counter3.py", line 20, in <module>
Mar 17 10:18:42 PressPi rc.local[464]: import mysql.connector
Mar 17 10:18:42 PressPi rc.local[464]: ModuleNotFoundError: No module named 'mysql'
Python import statement:
import mysql.connector
/etc/rc.local entry:
python3 /home/pi/python/switch_counter3.py &
I also tried:
sudo python3 /home/pi/python/switch_counter3.py &
I tried adding a 60-second sleep in rc.local before the Python script is called, in case the script was running before MySQL had initialized. I can see in syslog that the script waits the 60 seconds before running, but I get the same error message.
I could try other methods to automate the script, but I am interested in why the current method is not working as expected.
As Tim Roberts helped me learn, the rc.local script runs as root, but I had installed the mysql package under my own user account, so when run from rc.local the script couldn't find it. I added sudo -H -u username when calling the Python script in /etc/rc.local, and the script now runs as expected.
Thank you to Tim Roberts.
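To see the difference between the manual run and the boot-time run, a hypothetical diagnostic snippet like the one below (not part of the original switch_counter3.py) can be dropped near the top of the script; it logs which user, interpreter, and module search path each run actually gets:
# Hypothetical diagnostic lines, not in the original script:
# compare the output of a manual run with the rc.local run in syslog.
import getpass
import sys
print("user:", getpass.getuser())
print("interpreter:", sys.executable)
print("sys.path:", sys.path)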
For those interested, here is the full /etc/rc.local. Everything before the sleep statement is the default content.
#!/bin/sh -e
#
# rc.local
#
# This script is executed at the end of each multiuser runlevel.
# Make sure that the script will "exit 0" on success or any other
# value on error.
#
# In order to enable or disable this script just change the execution
# bits.
#
# By default this script does nothing.
# Print the IP address
_IP=$(hostname -I) || true
if [ "$_IP" ]; then
printf "My IP address is %s\n" "$_IP"
fi
# Make sure MySQL has been initialized
sleep 15
# Run the python script as user pi with python3
# The ampersand runs the command in a separate process so booting can continue while the script runs.
sudo -H -u pi python3 /home/pi/python/switch_counter3.py &
exit 0
I'm trying to run a script automatically on boot on a Raspberry Pi running DietPi.
My script starts a Python 3 program which, at the end, calls an external program, MP4Box, that merges 2 video files into an MP4 in a folder served by my lighttpd web server.
When I start the script manually, everything works. But when the script starts automatically on boot and it reaches the external program MP4Box, I get an error:
Cannot open destination file /var/www/Videos/20201222_151210.mp4: I/O Error
The script that starts my Python programs is "startcam", which lives in the folder /var/lib/dietpi/postboot.d:
#!/bin/sh -e
# Autostart RaspiCam
cd /home/dietpi
rm -f trigger/*
python3 -u record_v0.1.py > record.log 2>&1 &
python3 -u motioninterrupt.py > motion.log 2>&1 &
The readme.txt in postboot.d says:
# /var/lib/dietpi/postboot.d is implemented by DietPi and allows to run scripts at the end of the boot process:
# - /etc/systemd/system/dietpi-postboot.service => /boot/dietpi/postboot => /var/lib/dietpi/postboot.d/*
# There are nearly no restrictions about file names and permissions:
# - All files (besides this "readme.txt" and dot files ".filename") are executed as root user.
# - Execute permissions are automatically added.
# NB: This delays the login prompt by the time the script takes, hence it must not be used for long-term processes, but only for oneshot tasks.
So it should also start my script with root privileges. This is the (part of the) script "record_v0.1.py" that throws the error:
import os
os.system('MP4Box -fps 15 -cat /home/dietpi/b-file001.h264 -cat /home/dietpi/a-file001.h264 -new /var/www/Videos/file001.mp4 -tmp ~ -quiet')
When I start the python programs manually (logged in as root) with:
/var/lib/dietpi/postboot.d/startcam
everything is OK and instead of the error I get the message:
Appending file /home/dietpi/Videos/b-20201222_153124.h264
No suitable destination track found - creating new one (type vide)
Appending file /home/dietpi/Videos/a-20201222_153124.h264
Saving /var/www/Videos/20201222_153124.mp4: 0.500 secs Interleaving
Thanks for every hint.
Contrary to the description, the scripts in postboot.d are not executed as root. So I changed my script to:
#!/bin/sh -e
# Autostart RaspiCam
cd /home/dietpi
rm -f trigger/*
sudo python3 -u record_v0.1.py > record.log 2>&1 &
sudo python3 -u motioninterrupt.py > motion.log 2>&1 &
Now they are running as root and everything works as wanted.
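As a side note, a minimal pre-flight check like the sketch below (the destination path is taken from the question, the rest is my own assumption) could be added before the MP4Box call to surface permission problems explicitly instead of relying on MP4Box's generic I/O Error:
# Sketch of a pre-flight check before calling MP4Box (not from the original script):
# report the effective user and whether the destination directory is writable.
import os
dest_dir = "/var/www/Videos"
print("effective uid:", os.geteuid())  # 0 means the script really runs as root
print("writable:", os.access(dest_dir, os.W_OK))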
Overview
I'm trying to use python fabric to run an ssh command as root on a remote server.
The command: nohup ./foo &
foo is expected to run for several days. I must be able to disassociate foo from fabric's remote ssh session and put foo in the background.
The Fabric FAQ says you should use something like screen or tmux when you run your fabric script (which runs the backgrounded command). I tried that, but my fabric script still hung. foo is not hanging.
Question
How do I use fabric to run this command on a remote server without the script hanging: nohup ./foo &
Details
This is my script:
#!/bin/sh
# Credit: https://unix.stackexchange.com/a/20895/6766
if "true" : '''\'
then
exec "/nfs/it/network_python/$OSREL/bin/python" "$0" "$#"
exit 127
fi
'''
from getpass import getpass
import os
from fabric import Connection, Config
assert os.geteuid()==0, "ERROR: Must run as root"
for host in ['host1.foo.local', 'host2.foo.local']:
    # Make an ssh connection to the host...
    conn = Connection(host)
    # The script always hangs at this line
    result = conn.run('nohup ./foo &', warn=True, hide=True)
I always open a tmux session to run the aforementioned script in; even doing so, the script hangs when I get to conn.run(), above.
I'm running the script on a vanilla CentOS 6.5 VM; it runs under python 2.7.10 and fabric 2.1.
The Fabric FAQ is unclear... I thought the FAQ wanted tmux used on the local side when I executed the Fabric script.
The correct way to fix this problem is to replace nohup in the remote command with screen -d -m <command>. Now I can run the whole script locally with no hangs (and I don't have to use tmux in the local terminal).
Explicitly, I have to rewrite the last line of my script in my question as:
# Remove &, and nohup...
result = conn.run('screen -d -m ./foo', warn=True, hide=True)
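For completeness, a minimal sketch of the whole loop with that fix applied might look like this (host names are the placeholders from the question; Fabric 2.x API):
from fabric import Connection

for host in ['host1.foo.local', 'host2.foo.local']:
    conn = Connection(host)
    # screen detaches foo from the ssh session, so run() returns immediately
    result = conn.run('screen -d -m ./foo', warn=True, hide=True)
    print(host, "exit status:", result.exited)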
Good day. I am using a Raspberry Pi 3 model B running Raspbian Stretch. I have a Python script named bluepyscanner.py which is basically a Python 3 variation of the bluepy scanner sample code with a small addition for a .txt log file.
from bluepy.btle import Scanner, DefaultDelegate
class ScanDelegate(DefaultDelegate):
    def __init__(self):
        DefaultDelegate.__init__(self)

    def handleDiscovery(self, dev, isNewDev, isNewData):
        if isNewDev:
            print("Discovered device", dev.addr)
        elif isNewData:
            print("Received new data from", dev.addr)

scanner = Scanner().withDelegate(ScanDelegate())
devices = scanner.scan(10.0)

for dev in devices:
    print("Device {} ({}), RSSI={} dB".format(dev.addr, dev.addrType, dev.rssi))
    for (adtype, desc, value) in dev.getScanData():
        print("  {} = {}".format(desc, value))
        with open('bluepyscanlog.txt', 'a') as the_file:
            the_file.write("{}={}\n".format(desc, value))
I can run this script perfectly when I launch it from terminal with
$ sudo python3 /home/pi/bluepyscanner.py
However, I am somehow unable to get this script to run automatically on boot. I have tried the following three methods separately and none has worked so far:
rc.local (https://www.raspberrypi.org/documentation/linux/usage/rc-local.md): I appended the following line to /etc/rc.local
python3 /home/pi/bluepyscanner.py
Cron (https://www.raspberrypi.org/documentation/linux/usage/cron.md): I used the Cron GUI and added a recurring task to be launched "at reboot"
sudo python3 /home/pi/bluepyscanner.py
systemd (https://www.raspberrypi.org/documentation/linux/usage/systemd.md): I followed the instructions on the linked documentation page with main.py replaced by my bluepyscanner.py and the working directory replaced by /home/pi
Can anyone give me a pointer on what might have gone wrong? Bluetooth is enabled and bluepy is installed in accordance with this. I don't think the script has run, because, unlike when it is run from the terminal, bluepyscanlog.txt was not created.
Thank you in advance for your time.
Please make these changes to your script:
...
with open('/home/pi/bluepyscanlog.txt', 'a+') as the_file:
...
and make the proper changes in your /etc/rc.local
sudo python3 /home/pi/bluepyscanner.py
Maybe you can see previous copies of bluepyscanlog.txt at / (the root directory).
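The underlying cause is that rc.local runs the script with / as its working directory, so a relative filename ends up in /. As an alternative to hard-coding /home/pi, a sketch like the following (the write line is just a placeholder, not the real log entry) resolves the log path relative to the script file itself:
import os

# Resolve the log file next to the script, regardless of the working directory.
log_path = os.path.join(os.path.dirname(os.path.abspath(__file__)), "bluepyscanlog.txt")
with open(log_path, "a") as the_file:
    the_file.write("example=entry\n")  # placeholder for the real desc/value pairs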
If this doesn't do the job, the bluetooth service may be starting after rc.local is executed. Make these modifications in your /etc/rc.local as sudo:
....
sudo service bluetooth start
sudo python3 /home/pi/bluepyscanner.py > /home/pi/bb.log
exit 0
Ensure that exit 0 is the last command in the file. If you created rc.local manually, ensure it has execute permissions:
sudo chmod +x /etc/rc.local
You will see that your script is being executed.
On my Raspberry Pi these are the contents of bb.log:
Discovered device d2:xx:XX:XX:XX:XX
Device d2:xx:XX:XX:XX:XX (random), RSSI=-62 dB
Flags = 06
0x12 = 08001000
Incomplete 128b Services = xxxxxxxxxxxxxxxxxxxxxxxxx
16b Service Data = xxxxxxxxxxxxxx
Complete Local Name = xxxxxxxxxxx
Tx Power = 05
(Xs mask original content)
I'm trying to start the MySQL server from a Python 3.4 script on Mac OS X 10.10.4, but I don't know how to pass the superuser password.
import os
os.system("sudo /usr/local/mysql/support-files/mysql.server start")
sudo: no tty present and no askpass program specified
You are going in the right direction. On my system the MySQL server is started from /etc/init.d/mysql.
Just use the following code snippet to run your MySQL server from a Python script:
import os
os.system('sudo /etc/init.d/mysql start')
The best way to run root commands is to execute the script itself as root.
Simply type sudo python script.py in your shell; then you can replace os.system('sudo /etc/init.d/mysql start')
with
os.system('/etc/init.d/mysql start')
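If you keep the in-script approach, a minimal sketch using subprocess (compatible with Python 3.4; the path is the one used in this answer) makes the exit status easier to check than os.system's raw return value:
import subprocess

# subprocess.call returns the command's exit status directly.
status = subprocess.call(["/etc/init.d/mysql", "start"])
print("mysql start exited with", status)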
I have a python script named sudoserver.py that I start in a CygWin shell by doing:
python sudoserver.py
I am planning to create a shell script (I don't know yet if I will use Windows shell script or a CygWin script) that needs to know if this sudoserver.py python script is running.
But if I do in CygWin (while sudoserver.py is running):
$ ps -e | grep "python" -i
11020 10112 11020 7160 cons0 1000 00:09:53 /usr/bin/python2.7
and in Windows shell:
C:\>tasklist | find "python" /i
python2.7.exe 4344 Console 1 13.172 KB
So it seems I have no info about the .py file being executed. All I know is that python is running something.
The -l (long) option for ps on Cygwin does not show my .py file, and neither does the /v (verbose) switch for tasklist.
What would be the appropriate way, from a Windows or Cygwin shell (either would be enough; both if possible would be fine), to programmatically find out whether a specific python script is executing right now?
NOTE: The python process could be started by another user, even by a user not logged in to a GUI session, and even by the privileged "SYSTEM" Windows user.
It is a limitation of the platform.
You probably need to use some low-level API to retrieve the process info. You can take a look at this one: Getting the command line arguments of another process in Windows
You can probably use the win32api module to access these APIs.
(Sorry, away from a Windows PC so I can't try it out)
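As an alternative sketch, the third-party psutil package (my suggestion, not something mentioned in this answer) exposes each process's full command line, which is exactly what ps and tasklist were not showing; note that inspecting other users' processes may still require elevated rights:
import psutil  # third-party package: pip install psutil

def is_script_running(script_name):
    # Return True if any process command line mentions script_name.
    for proc in psutil.process_iter(attrs=["cmdline"]):
        cmdline = proc.info.get("cmdline") or []
        if any(script_name in part for part in cmdline):
            return True
    return False

print(is_script_running("sudoserver.py"))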
Since sudoserver.py is your script, you could modify it to create a file in an accessible location when it starts and to delete the file when it finishes. Your shell script can then check for the existence of that file to find out if sudoserver.py is running.
(EDIT)
Thanks to the commenters who suggested that while the presence or absence of the file is an unreliable indicator, a file's lock status is not.
I wrote the following Python script testlock.py:
f = open ("lockfile.lck","w")
for i in range(10000000):
print (i)
f.close()
... and ran it in a Cygwin console window on my Windows PC. At the same time, I had another Cygwin console window open in the same directory.
First, after I started testlock.py:
Simon@Simon-PC ~/test/python
$ ls
lockfile.lck testlock.py
Simon@Simon-PC ~/test/python
$ rm lockfile.lck
rm: cannot remove `lockfile.lck': Device or resource busy
... then after I had shut down testlock.py by using Ctrl-C:
Simon@Simon-PC ~/test/python
$ rm lockfile.lck
Simon@Simon-PC ~/test/python
$ ls
testlock.py
Simon@Simon-PC ~/test/python
$
Thus, it appears that Windows is locking the file while the testlock.py script is running but it is unlocked when it is stopped with Ctrl-C. The equivalent test can be carried out in Python with the following script:
import os
try:
    os.remove("lockfile.lck")
except:
    print("lockfile.lck in use")
... which correctly reports:
$ python testaccess.py
lockfile.lck in use
... when testlock.py is running but successfully removes the locked file when testlock.py has been stopped with a Ctrl-C.
Note that this approach works in Windows but it won't work in Unix because, according to the Python documentation:
On Windows, attempting to remove a file that is in use causes
an exception to be raised; on Unix, the directory entry is removed
but the storage allocated to the file is not made available until
the original file is no longer in use.
A platform-independent solution using an additional Python module FileLock is described in Locking a file in Python.
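For reference, here is a minimal cross-platform sketch along those lines using the third-party filelock package (the package name and API here are my assumption, not taken from the linked answer):
from filelock import FileLock, Timeout  # third-party: pip install filelock

lock = FileLock("lockfile.lck.lock")
try:
    # timeout=0 makes this a non-blocking check.
    with lock.acquire(timeout=0):
        print("Lock acquired - the other script is not running")
except Timeout:
    print("Lock busy - the other script is still running")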
(FURTHER EDIT)
It appears that the OP didn't necessarily want a solution in Python. An alternative would be to do this in bash. Here is testlock.sh:
#!/bin/bash
flock lockfile.lck sequence.sh
The script sequence.sh just runs a time-consuming operation:
#!/bin/bash
for i in `seq 1 1000000`;
do
    echo $i
done
Now, while testlock.sh is running, we can test the lock status using another variant on flock:
$ flock -n lockfile.lck echo "Lock acquired" || echo "Could not acquire lock"
Could not acquire lock
$ flock -n lockfile.lck echo "Lock acquired" || echo "Could not acquire lock"
Could not acquire lock
$ flock -n lockfile.lck echo "Lock acquired" || echo "Could not acquire lock"
Lock acquired
$
The first two attempts to lock the file failed because testlock.sh was still running and so the file was locked. The last attempt succeeded because testlock.sh had finished running.
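The same non-blocking check can also be done from Python with fcntl (a sketch that assumes the lock holder uses flock on lockfile.lck as above; fcntl is available on Unix and under Cygwin's Python, but not in a native Windows Python):
import fcntl

# Try to take the same lock without blocking; failure means the other job holds it.
with open("lockfile.lck", "w") as f:
    try:
        fcntl.flock(f, fcntl.LOCK_EX | fcntl.LOCK_NB)
        print("Lock acquired")
    except (IOError, OSError):
        print("Could not acquire lock")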