I created a Python script that I would like to run at each startup. I modified /etc/rc.local, but I can't get the script to run.
My /etc/rc.local addition (I added the sleep thinking it might help):
(sleep 10; /usr/bin/python3 /home/pi/mower-gps-tracking/app/gps_logger.py)&
Imports in the different Python scripts (I don't know if it matters):
from ftplib import FTP
import os
import serial
import time
import threading
from gpiozero import LED, Button
When I start /etc/rc.local manually via an SSH command, it runs fine.
Any idea what I'm missing?
Check whether the script (rc.local) is executable, i.e. has the 'x' bit set.
And you will probably need a
#!/bin/bash
as line #1 of your script.
Hope this helps.
And as per Barmar above, this is probably more appropriate for the Unix forums.
I tried this on my Linux box with no issues. As a test, create a script that wraps your script and have it write to a tmp file, so you can see whether it actually runs. Use nohup and & in your scripts.
/etc/rc.local:
nohup /root/script.sh &
/root/script.sh:
#!/bin/bash
echo "I'm starting something .." > /tmp/startup_thing.log
/root/gps_logger.py
Note: all scripts need to be made executable with chmod a+x file.ext. If it's not running, check for the file /tmp/startup_thing.log. If it's there, you have some other issue; if it isn't, rc.local isn't working.
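If the wrapper shows that rc.local does run but the Python script still dies, it can also help to log failures from inside the script itself, since modules like serial or gpiozero can fail at boot for reasons that never show up over SSH. A minimal sketch, assuming the log path is just an example and the imports are the ones from the question:
import traceback

try:
    from ftplib import FTP
    import os
    import serial
    import time
    import threading
    from gpiozero import LED, Button
except Exception:
    # Write the full traceback somewhere visible, then re-raise so the
    # failure is not silently swallowed.
    with open('/tmp/gps_logger_boot.log', 'a') as f:
        f.write(traceback.format_exc())
    raise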
Related
I have two files, main.py and test.py.
Suppose main.py is running, and after some point in time I want to run test.py.
I cannot use import test or os.system("python test.py"), because these run the file in the same terminal, but I want to run test.py in another terminal.
So what I mean is: main.py is running in one terminal, and after some point a new terminal should open and run test.py.
Any solutions?
Thanks :D
If I understand correctly, you want to run a Python script when some condition is fulfilled, so I would recommend calling test.py using the subprocess library (bear in mind there are other methods), like this:
import subprocess
if(your_condition):
    subprocess.call(['python', 'test.py', testscript_arg1, testscript_val1,...])
as mentioned here: Using a Python subprocess call to invoke a Python script
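Since you specifically want test.py to show up in a separate terminal window, one option is to launch it through a terminal emulator instead of calling the interpreter directly. A minimal sketch, assuming xterm is installed (substitute whatever terminal emulator you actually have, e.g. lxterminal or gnome-terminal, and adjust its options accordingly):
import subprocess

# Open a new terminal window running test.py; Popen returns immediately,
# so main.py keeps running in its own terminal.
subprocess.Popen(['xterm', '-e', 'python', 'test.py'])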
I have a bash script that I can run flawlessly in my Rpi terminal in its folder:
./veye_mipi_i2c.sh -r -f mirrormode -b 10
It works like this:
Usage: ./veye_mipi_i2c.sh [-r/w] [-f] function name -p1 param1 -p2 param2 -b bus
options:
-r read
-w write
-f [function name] function name
-p1 [param1] param1 of each function
-p2 [param2] param2 of each function
-b [i2c bus num] i2c bus number
When I try to run it in Python (2) via my Spyder editor with os.system, I get a "0" return value, which I interpret as "successfully executed", but in fact the script has not been executed and its functions have not been performed. I know this because the script is supposed to change the camera behaviour, and by checking the images I take afterwards I can see that nothing has changed.
import os
status = os.system('/home/pi/VeyeMipi/Camera_Folder/veye_mipi_i2c.sh -w -f mirrormode -p1 0x04 -b 10')
print status
Any idea what causes this? The bash script uses two other scripts that live in the same folder (read and write). Could it be that it cannot execute these additional scripts when started through Python? It doesn't make sense to me, but neither do a lot of things...
Many thanks
OK, I understand that my question was not exemplary because of the lack of a minimal reproducible example, but as I did not understand what the problem was, I was not able to create one.
I have found out what the problem was. The script I am calling in bash requires two more scripts that are in the same folder, namely the "write" script and the "read" script. When executing it in a terminal from that folder there was no problem, because the folder was the working directory.
I tried to execute the script within the Spyder editor and added the file location to the PATH in the user interface, but it still was not able to execute the "write" script in the folder.
Simply executing it in the terminal did the trick.
It would help if you fixed your scripts so they don't depend on the current working directory (that's a very bad practice).
In the meantime, running
import subprocess
p = subprocess.run(['./veye_mipi_i2c.sh', '-r', '-f', 'mirrormode', '-b', '10'], cwd='/home/pi/VeyeMipi/Camera_Folder')
print(p.returncode)
which runs the script with the correct working directory, should help.
Use subprocess and capture the output:
import subprocess
output = subprocess.run(stuff, capture_output=True)
Check output.stderr and output.stdout
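For example, a concrete version for the command in the question (note that capture_output requires Python 3.7 or newer; on older versions pass stdout=subprocess.PIPE and stderr=subprocess.PIPE instead):
import subprocess

# Run the script from its own folder and capture what it prints,
# so an error message is visible instead of just a silent return code.
output = subprocess.run(
    ['./veye_mipi_i2c.sh', '-w', '-f', 'mirrormode', '-p1', '0x04', '-b', '10'],
    cwd='/home/pi/VeyeMipi/Camera_Folder',
    capture_output=True,
    text=True,
)
print(output.returncode)
print(output.stdout)
print(output.stderr)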
I have a simple Ansible playbook which calls a shell script on a remote server; the shell script in turn calls another Python script. When I run the Ansible playbook, the script does not work, but when I SSH to the server and run the same command manually, it works. I've done some debugging: it seems that if I delete all the import statements from the Python script, it works from Ansible, but I don't understand why it only works over SSH, and I would like some suggestions on how to resolve this issue.
The Python script:
#!/usr/bin/python
import socket
import argparse
import logging
import subprocess
import time
import imp
def main():
    f = open('/afile', 'w')
    f.write('a test line')
    f.close()

if __name__ == '__main__':
    main()
Those imports are not used here; they will be used in my real script. Here I just write a line into a file for debugging.
The Ansible playbook is simply:
---
- hosts: servers
  tasks:
    - name: trigger the script
      shell: /start.sh
start.sh then simply invokes the Python script:
#!/bin/sh
/start.py
Sorry, my bad, I didn't include all the scripts here. It turns out there is another script which has something like:
#!/bin/sh
/start & >> stdout.log
This caused the problem. I guess the first three imported modules do things related to standard I/O, so the solution is to use nohup.
Again, very sorry for the incomplete question.
SUMMARY
If Monit or cron does not start your script, it could be a PATH issue.
See below.
I have 2 main files:
the first, A.py, is the main script, written in Python, which updates an SQLite database db.sqlite continuously (it should never stop);
the second, B.sh, is a shell script which, if needed, kills and restarts the first script (it will be run under a pre-configured Monit condition; see below).
Both files are executable:
A.py first line #!/usr/bin/env python
B.sh first line #!/bin/sh
Then:
chmod +x A.py
chmod +x B.sh
I configured Monit to check the timestamp of the db.sqlite file: if the timestamp is older than 1 minute (which implies that for some unknown reason the A.py updating function has stopped, even though the Python script may still be running, which is why I cannot simply check the A.py process status), then it runs the B.sh shell script, which restarts A.py.
Everything works well if I run the scripts by hand in a shell terminal.
But under Monit it does not seem to work.
I added the following to the Monit configuration file (then I checked the syntax with sudo monit -t and reloaded the configuration with sudo monit reload):
check file db.sqlite with path /right_dir/db.sqlite
if timestamp > 1 minute then exec "/right_dir/restart.sh"
The monit.log reports:
error : 'db.sqlite' timestamp for /right_dir/db.sqlite failed -- current timestamp is ...
info : 'db.sqlite' exec: /right_dir/restart.sh
error : 'db.sqlite' timestamp for /right_dir/db.sqlite failed -- current timestamp is ...
error : 'db.sqlite' timestamp for /right_dir/db.sqlite failed -- current timestamp is ...
and so on...
I run ps aux | grep A.py, but the script is not running.
I really appreciate any help.
Thank you for your time,
gil
UPDATE
I tried a simple file, with cron instead of Monit: if I run the script in a terminal, everything works well. Under cron it does not.
FYI: I use Anaconda (Python distribution).
File A.py:
#!/usr/bin/env python
print("OK")
import matplotlib  # this blocks
import math        # this does not block

while True:
    pass
File B.sh:
/usr/bin/pkill -f A.py
nohup /right_path/A.py &
Crontab:
*/1 * * * * /right_path/B.sh
After the crontab entry fires, I check ps aux | grep A.py.
This line does not block (I see the process with ps aux):
import math
This line blocks (I do not see the process):
import matplotlib
So the problem seems to be related to module imports (some work, some don't).
Maybe a PATH/environment issue?
Any idea?
SOLVED
It is a PATH problem.
Cron does not see the complete PATH, but only a minimal subset.
Just try for yourself: add to crontab (crontab -e) the following:
* * * * * env > /tmp/env.output
and compare the output to the env command run in your terminal.
They are probably different.
The solution is to copy the complete PATH from your terminal and paste it as the second line of B.sh, like this (take it as an example; your situation may be slightly different):
#!/bin/sh
PATH=/home/user/anaconda3/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
# rest of the script
Thanks to this thread: https://askubuntu.com/questions/23009/reasons-why-crontab-does-not-work
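For completeness, a small diagnostic that can be dropped at the top of A.py (or any script started by cron) to confirm which interpreter and PATH cron is actually using; the log file name is just an example:
import os
import sys

# Append the interpreter path and the PATH this process sees to a log file,
# then compare it with the same output from a normal terminal session.
with open('/tmp/cron_env_debug.log', 'a') as f:
    f.write('executable: {}\n'.format(sys.executable))
    f.write('PATH: {}\n'.format(os.environ.get('PATH', '')))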
I have a .py file in the home directory which contains these three lines:
import os
os.system("cd Desktop/")
os.system("ls")
and I want it to run ls in the Desktop directory, but it shows the contents of the home directory.
I looked at these pages:
Calling an external command in Python
http://ubuntuforums.org/showthread.php?t=729192
but I could not understand what to do. Can anybody help me?
The two calls are separate from each other. There is no context kept between successive invocations of os.system, because a new shell is spawned for every call. First, os.system("cd Desktop/") spawns a shell that changes into Desktop and then exits. Then a new shell executes ls in the original folder.
Try chaining your commands with &&:
import os
os.system("cd Desktop/ && ls")
This will show the contents of directory Desktop.
Fabric
If your application is going to be heavy on OS usage, you might consider using python-fabric. It allows you to use higher-level language constructs like context managers to make command-line invocations easier:
from fabric.operations import local
from fabric.context_managers import lcd
with lcd("Desktop/"): # Prefixes all commands with `cd Desktop && `
contents=local("ls", capture=True)
You have to consider that os.system executes the command in a sub-shell. Hence: 1) Python starts a sub-shell, 2) the directory is changed, 3) the sub-shell completes, 4) you return to the previous state.
To actually change the current directory of the Python process, you should do:
os.chdir("Desktop")
Always try to do this by means other than os.system (e.g. os.listdir), or else use subprocess (which is an excellent module for running commands in the shell).
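A minimal sketch of both approaches (the directory name is the one from the question):
import os
import subprocess

# Pure-Python approach: change the working directory of this process,
# then list it without spawning a shell at all.
os.chdir("Desktop")
print(os.listdir("."))

# subprocess approach: run ls with Desktop as its working directory,
# without changing the directory of the Python process itself.
subprocess.call(["ls"], cwd=os.path.expanduser("~/Desktop"))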