I want a Python script to be executed at boot on my Raspberry Pi 2, so I put it into .bashrc.
Launching the script with crontab didn't work.
But I only want to execute it once, not every time I enter a terminal or every time I log in via SSH.
My poor attempt of course didn't work, and it's obvious why:
python_running=false
if [ "$python_running" = false ] ; then
    ./launcher.sh
    python_running=true
fi
EDIT:
My main problem is that the Python script needs internet access; the connection has to be established before the script is executed.
After the first answer and comments I realized that .bashrc is not a good place for launching the script at bootup. It works with autologin, but is not a proper solution.
But what could be a proper solution to run the script only once?
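One robust approach, whatever mechanism ends up launching it at boot, is to let the Python script itself wait for connectivity before doing its real work. A minimal sketch, assuming a reachable public DNS server (8.8.8.8:53) is an acceptable connectivity check and a 3-second retry interval is fine:

import socket
import time

def wait_for_network(host="8.8.8.8", port=53, timeout=2):
    # block until a TCP connection to host:port succeeds
    while True:
        try:
            socket.create_connection((host, port), timeout=timeout).close()
            return
        except OSError:
            time.sleep(3)

wait_for_network()
# ... the actual work that needs internet access goes here ...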
.bashrc is definitely not a proper place to do that. To start the script at bootup, the best and easiest solution I found is crontab:
sudo crontab -e
then add the following line to the end of the file:
@reboot sh /home/pi/launcher.sh > /home/pi/logs/cronlog 2>&1
But to use crontab, the shell script needs to be changed to wait/poll for the internet connection:
ROUTER_IP=192.168.0.1

# poll until the router answers, then start the script
while ! ping -c1 "$ROUTER_IP"; do
    echo "network is not up yet"
    sleep 3
done
echo "network is up now"
python3 myScript.py &
Polling might not be the best option, but there's nothing wrong with creating one sleep process every 3 seconds.
OK, so we need to clarify some terminology.
The Pi (or any Unix system) doesn't really distinguish between a "console" login and an SSH (remote) login; it's going to drop you into a shell either way.
However, if you want something to start on bootup (which is what I think you want), then look at /etc/rc.d - have a look here - but in case that link goes dead: put a command in /etc/rc.local.
This is a long bash script (400+ lines) that is invoked from a Django app like so:
os.system('./bash_script.sh &> bash_log.log')
It stops on a random command in the script. If the order of commands is changed, it hangs on another command in approximately the same location.
SSHing to the machine that runs the Django app and running sudo ./bash_script.sh asks for a password and then runs all the way through.
I can't see the message it presents when it hangs in the log file; I couldn't make it redirect there. I assume it's a sudo password request.
Things I tried:
sudo -v in the script - didn't help.
SSHing to the machine and manually extending the sudo timeout in /etc/sudoers - didn't help, I think because the Django app is already running and uses the previous timeout.
Splitting the script in two and running one part in a separate thread, like so:
import os
from subprocess import Popen
from threading import Thread

def basher(command, log_path):
    with open(log_path, 'w') as log:  # open for writing so Popen can redirect into it
        Popen(command, stdout=log, stderr=log).wait()

script_thread = Thread(target=basher, args=('./bash_script_pt1.sh', 'bash_log_pt1.log'))
script_thread.start()
os.system('./bash_script_pt2.sh &> bash_log_pt2.log') # I know it's deprecated, not sure if maybe it's better in this case
script_thread.join()
The logs showed that part 1 ended ok, but part 2 still hangs, albeit later in the code than when they were together.
I thought about editing /etc/sudoers from inside the Python code and then re-logging in via su - user. There are snippets of how to pass the password using pty, but I don't understand the mechanics of it and could not get it to work.
I also noted that ps aux | grep bash_script.sh shows that the script is being run twice:
/bin/bash bash_script.sh
and as
sh -c bash_script.sh.
I assume os.system has an internal shell=True going on.
I don't understand the Linux entities/mechanics in play to figure out what's happening.
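As a side note on the doubled ps entry: os.system does wrap its command in sh -c. A rough sketch of avoiding that extra layer by handing subprocess a list instead (file names taken from the question, redirection done by Python rather than the shell); this only removes the duplicate process, it does not by itself fix the sudo prompt:

import subprocess

# run the script directly (its shebang selects /bin/bash), no intermediate "sh -c"
with open('bash_log.log', 'w') as log:
    subprocess.run(['./bash_script.sh'], stdout=log, stderr=log, check=False)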
My guess is that the Django app has different and more limited permissions than the script itself does, and the script inherits those restrictions because it is being executed by the app.
You need to find out what permissions the script has when you run it directly from bash, what it has when you run it via Django, and then figure out what the difference is.
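One hedged way to compare the two environments is to log the identity and environment from inside the Django process and again from a plain shell, then diff the two files (the log path below is just a placeholder):

import os
import pwd
import grp

def log_identity(path="/tmp/identity.log"):  # placeholder path
    with open(path, "w") as f:
        f.write("uid=%d euid=%d\n" % (os.getuid(), os.geteuid()))
        f.write("user=%s\n" % pwd.getpwuid(os.getuid()).pw_name)
        f.write("groups=%s\n" % [grp.getgrgid(g).gr_name for g in os.getgroups()])
        f.write("PATH=%s\n" % os.environ.get("PATH", ""))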
I wrote a simple script that parses some stuff off the web and emails it to me. Very simple. But I'm starting to realize that implementing this is going to be far more difficult than it really should be.
All I really want to do is run this script once a day.
I have explored using Google App Engine, but it doesn't like smtplib using SSL to log in to my Gmail to send an email.
I am considering using Heroku, but that just seems like a lot of work for something so simple.
I tried using my Raspberry Pi, but I'm not sure the script is still running when I exit SSH. I looked into running the script as a cron job, but I'm not sure that's an elegant solution.
I looked into running an AppleScript from my calendar, but I'm not sure what happens if my computer is closed and/or offline.
My question is: Is there a simple, elegant, easy solution here?
When you start the script from your session (./script.py or python script.py), then it stops running when you disconnect. If you want to run the script this way for whatever reason, I would recommend using tmux.
If you're using Raspbian or another Debian-based distro for your Pi:
$ apt-get install tmux
$ tmux
# disconnect from your tmux session by pressing CTRL+B and then D
# to reattach to your session later, use
$ tmux attach
I would recommend using cron. If you want to run it at a specific time (e.g. every day at 1 am), just add a file in /etc/cron.d/ (entries there use the system crontab format, which includes a user field) like so:
$ echo "0 1 * * * root python /path/to/your/script.py > /dev/null 2>&1" > /etc/cron.d/script-runner
# and don't forget to make it executable
$ chmod +x /etc/cron.d/script-runner
Wikipedia has a nice explanation of the format (and also of shortcuts like @hourly and @daily).
If you don't care when exactly it runs, you can just put your script into /etc/cron.daily/. Don't forget a chmod +x to make it executable.
If you don't want to run it on one of your machines, you can also get a shell on one of Uberspace's servers. You can pay whatever you want (minimum 1 Eur/month) and you get a shell on a Linux box with 10GB of storage (the first month is free for testing, cancellation happens automatically if you don't pay, no strings attached). I'm sure there are a lot of other services like that; I just mention it because it's a cheap one with nice support. Also you get a domain (..uberspace.de) and can send mail from the server (e.g. with mail), so there's no need to use a Gmail account.
Edit: I overlooked the "python" part. Changed everything to .py. Either use #!/usr/bin/env python3 (or 2.7) in your script, or start the script via python scriptname.py.
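For reference, the top of such a script might look like this (the body is only a placeholder for the actual scraping and mailing code):

#!/usr/bin/env python3
# e.g. dropped into /etc/cron.daily/ or referenced from a cron.d entry; remember chmod +x

def main():
    print("running the daily job")  # placeholder for the real work

if __name__ == "__main__":
    main()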
I have written a python script which scans my gmail INBOX for a particular mail, and if that mail is present it opens up a GUI. I have tested this script and works correctly.
I want to run this script whenever the network connection is established. So I have added a script to the dispatcher.d directory of NetworkManager. My bash script is shown below.
#!/bin/bash
#/etc/NetworkManager/dispatcher.d/90filename.sh
IF=$1
STATUS=$2
if [ "$IF" == "wlan0" ]; # for wireless internet
then
case "$2" in
up)
logger -s "NM Script up triggered"
python /home/rahul/python/expensesheet/emailReader.py
logger -s "emailReader completed"
exitValue=$?
python3.2 /home/rahul/python/expensesheet/GUI.py &
logger -s "GUI completed with exit status $exitValue"
;;
down)
logger -s "NM Script down triggered"
#place custom here
;;
pre-up)
logger -s "NM Script pre-up triggered"
#place custom here
;;
post-down)
logger -s "NM Script post-down triggered"
#place custom here
;;
*)
;;
esac
fi
I have used tkinter to design my GUI.
My problem is that emailReader (which has no GUI) gets executed correctly, but GUI.py doesn't get executed; it exits with exit status 1.
Can somebody throw some light on this matter and explain what I'm doing wrong?
NetworkManager is a process that runs outside of your X server (it's a system daemon, not part of your desktop session).
(e.g. NetworkManager gets started at bootup before your window manager gets started; they are totally unrelated).
Therefore, any script started by NetworkManager will not (directly) be able to access the GUI. (It is very similar to what you get when you switch from your desktop to a virtual terminal (e.g. Ctrl-Alt-F1) and then try to run your GUI from there: you will most likely get an error like "Can't open display".)
If you want to start a GUI program, you have two possibilities:
tell a notification daemon (a sub-process of your window manager) to start your GUI
tell your GUI to start on the correct display (the one where your desktop is running)
I'd go for the first solution (notification daemons are designed for that very purpose), but how to do it heavily depends on the window manager you use.
The second solution is a bit dirtier and involves potential security issues, but basically try starting DISPLAY=:0.0 myguiapp.py instead of myguiapp.py (this assumes you are running an X server on localhost:0.0).
You can check whether this works by simply launching the command with the DISPLAY prefix from a virtual terminal.
To get the display you are actually using, simply run echo $DISPLAY in a terminal within your X session.
Usually, remote connections to your running X server are disabled (as they would allow non-privileged users to take over your desktop - everything from starting new GUI programs (which is what you want) to installing keyloggers); if that's the case, check man xhost (or go for solution #1).
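As a sketch of that second approach, the GUI script itself could point at the desktop's display before creating any Tk windows (this assumes the desktop really runs on :0.0 and its access control lets this user connect; both are assumptions to verify):

import os

# assumption: the desktop X server is :0.0 and xhost/xauth allow this user to connect
os.environ.setdefault("DISPLAY", ":0.0")

import tkinter as tk  # imported after DISPLAY is set

root = tk.Tk()
tk.Label(root, text="launched from the NetworkManager dispatcher").pack()
root.mainloop()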
UPDATE
For the first solution, you probably want to check out libraries like libnotify (there are Python bindings in python-notify and python-notify2).
If you want more than simple "notification popups", you probably have to dig into D-Bus.
A simple example (I haven't tested it personally, though) can be found here.
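For completeness, a minimal popup using the python-notify2 bindings mentioned above might look roughly like this (an untested sketch; the application name and message text are made up):

import notify2

notify2.init("emailReader")  # arbitrary application name
n = notify2.Notification("Expense sheet", "New expense mail found; open the GUI to review it")
n.show()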
I want to run a python script on my server (that python script has GUI). But I want to start it from ssh. Something like this:
ssh me@server -i my_key "nohup python script.py"
... > let the script run forever
BUT it complains "unable to access video driver", since it is trying to use my SSH terminal as output.
Can I somehow make my command's output go to the server machine and not to my terminal? Basically something like wake-on-LAN functionality: tell the server you want something and it will do everything using its own system (not sending any output back).
What about
ssh me@server -i my_key "nohup python script.py >/dev/null 2>&1"
? :)
You can use redirection to some remote log file instead of /dev/null, of course.
EDIT: GUI applications on X usually use the $DISPLAY variable to know where they should be displayed. Moreover, X11 display servers use authorization to permit or deny applications connecting to their display. The commands
export DISPLAY=:0 && xhost +
may be helpful for you.
Isn't it possible for you to use a Python SSH extension instead of calling an external application?
It would:
run as one process
guarantee that the invocation is the same on all possible systems
lose the overhead of the external execution
send everything through SSH (you won't have to worry about input like ";" followed by a possibly locally executed command)
If not, go with what Piotr Wades suggested.
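If you do go the Python route, a rough sketch with a third-party SSH library such as Paramiko (my choice; the answer only says "python ssh extension", and the host, user, and key names below are placeholders) could look like this:

import paramiko

client = paramiko.SSHClient()
client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
client.connect("server", username="me", key_filename="my_key")

# start the GUI detached on the server's own display so it survives the SSH session
client.exec_command("DISPLAY=:0 nohup python script.py >/dev/null 2>&1 &")
client.close()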
I have two python scripts that have to run simultaneously because they interact with each other. One script is a 'server' script running locally and the other is client script that connects to it via a socket. Normally I just open a couple terminal tabs and run the server script in one and the client in the other. After starting and stopping each script over and over, I wanted to make a bash alias to run both scripts with just one command and came up with this:
gnome-terminal --tab -e "python server.py" --tab -e "python client.py"
However, now the server script is raising an sqlite OperationalError saying that one of my data tables doesn't exist. But when I run the scripts manually everything works fine. I have no clue what is going on, but I thought that maybe running the scripts together wasn't giving the server script enough time to initialize and make its connection to the database. So I put a time.sleep(5) in the client script, but as soon as it starts I get the same error.
Anyone have an idea what could be happening? Or does anyone know of any alternatives for starting two python scripts with one command?
Try combining the two commands into one:
gnome-terminal --tab -x bash -c "python server.py & sleep 5; python client.py"
I think it is better to put the sleep command (if needed) outside the client, since there may be situations where the server is already started and the client does not have to sleep.
The -x flag means
-x, --execute
Execute the remainder of the command line inside the terminal.
The command calls bash:
bash -c "python server.py & sleep 5; python client.py"
bash, in turn, has a -c flag, which means:
-c string   If the -c option is present, then commands are read from string. If there are arguments after the string, they are assigned to the positional parameters, starting with $0.
You might want to experiment with
gnome-terminal --tab -e "python server.py & sleep 5; python client.py"
That might work too. When you run bash first, your ~/.bashrc is read. Without calling bash, I think /bin/sh is used by default instead.
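Another option instead of a fixed sleep is to let the client retry the connection itself until the server is listening; a minimal sketch (host and port are placeholders for whatever client.py actually connects to):

import socket
import time

HOST, PORT = "localhost", 9999  # placeholders: use the server script's real address

sock = None
for attempt in range(30):
    try:
        sock = socket.create_connection((HOST, PORT), timeout=2)
        break
    except OSError:
        print("server not ready yet, retrying...")
        time.sleep(1)

if sock is None:
    raise SystemExit("could not reach the server")
# ... normal client logic continues with sock ...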
If you get
"socket.error: [Errno 98] Address already in use",
it probably means that your server has already been started, and running the server a second time fails.