How to exit a bash script after it starts a Python process?

I have a Python program that can update itself from a GitHub repository. When I trigger the update, the Python script runs an updater.bash script and kills itself. The bash script updates the program and then runs it again. But the bash script keeps running even though I put exit 0 at the end of the updater, so every update leaves behind another bash process that consumes more resources.
How can I kill the script after it runs the Python script?

Use exec python ... so that bash replaces itself with the Python program instead of spawning a child process.
See: help exec
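The same trick exists on the Python side via os.execv, which replaces the current process image rather than forking a child. A minimal sketch of the hand-off, assuming the updater lives at /home/pi/updater.bash (the path and function name are hypothetical):

```python
import os

def hand_off_to_updater(updater="/home/pi/updater.bash"):
    # Replace this Python process with bash running the updater.
    # Combined with `exec python3 ...` inside updater.bash, there is
    # exactly one process alive at every step, so nothing lingers.
    os.execv("/bin/bash", ["bash", updater])
```

Note that os.execv never returns on success; any code after the call only runs if the exec itself fails.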

Shell script kill command

OS: Raspbian
Python: 3.7.3
I am trying to run and kill my shell script through a Python script. The purpose is so that I can simply press "Run" in my py script, instead of having to go through the terminal every time. Here is my shell script (T.sh):
#!/bin/bash
#Track command
cd /home/pi/rpi-deep-pantilt
. ./rpi-deep-pantilt-env/bin/activate
rpi-deep-pantilt track Raspi --edge-tpu
Here is my Py script:
import os
os.system('bash /home/pi/T.sh')
When I issue the command rpi-deep-pantilt track Raspi --edge-tpu in my terminal and press CTRL + C, it kills the script, but that doesn't work when I use this Python script, and neither does pkill. The Python script stops, but the camera stays on and the tracking functionality keeps operating. Is there any way I can incorporate some kill command that I can issue with a keyboard interrupt?
If there is a better way to go about this let me know. I'm very new to this as you can probably tell.
Python creates a new child process for T.sh, so killing the Python script alone leaves it running. Within your Python code, try:
os.system("pkill T.sh")
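A more targeted alternative (a sketch, not the answer's method): start T.sh in its own process group with subprocess.Popen, then signal the whole group on CTRL+C. The signal then reaches bash and every child it spawned, the camera/tracking process included:

```python
import os
import signal
import subprocess

def run_tracker(script="/home/pi/T.sh"):
    # start_new_session=True puts bash (and its children) in a new
    # process group whose group ID equals the bash PID.
    proc = subprocess.Popen(["bash", script], start_new_session=True)
    try:
        proc.wait()
    except KeyboardInterrupt:
        # CTRL+C in the Python script: signal the whole group, not just bash.
        os.killpg(proc.pid, signal.SIGINT)
        proc.wait()
```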

Auto run from rc.local not working when forked (&)

I've been having a lot of problems trying to get my python script to run at boot. I've essentially narrowed it down to a problem with forking.
I'm running on a RPi3.
In rc.local if I have:
python /home/pi/script.py
It seems to run; however, as soon as I add
python /home/pi/script.py &
I get zero results.
If I run rc.local manually after boot the fork appears to work as expected.
I've also tried to point rc.local to a .sh file in /home/pi with exactly the same results. This even happens with basic echo commands:
echo "Hello world" > /tmp/log.txt
vs
echo "Hello world" > /tmp/log.txt &
Any help would be greatly appreciated.
I'm guessing that this has something to do with the fact that all child processes of the rc.local script get killed as soon as it reaches the end of the script, which is just about instantaneous if the only command in the file is running a python script as a background process.
The fork will get killed before it can do anything useful.
If you want the process to keep running after rc.local has ended, you should run the process as a daemon.
Some examples on how to do this can be found in this question: Run bash script as daemon
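A minimal double-fork sketch of what "run as a daemon" means here (the linked question covers doing it from bash; this is the Python-side equivalent, and on a modern Raspbian a systemd service is usually the cleaner choice):

```python
import os
import sys

def daemonize():
    # First fork: the parent returns to rc.local, which can then exit freely.
    if os.fork() > 0:
        sys.exit(0)
    os.setsid()  # detach from the controlling terminal/session
    # Second fork: ensure the daemon cannot reacquire a controlling terminal.
    if os.fork() > 0:
        sys.exit(0)
    # From here on, the process survives rc.local finishing.
```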

Open terminal, run python script and keep it open for results?

How do I get an sh script to start a new terminal, run a Python script, and keep the terminal open? The Python script is supposed to run continuously in a perpetual loop, printing results as they come in. Whenever I try an sh script with gnome-terminal, I just get: child process exited normally with status 2
Manually it would just be: python home/ubuntu/pyscript.py
Could someone give an idea how to do this?
I have a list of scripts to run, so resorting to the manual solution is tedious.
You can use gnome-terminal with the -x flag.
Suppose you have a spam.py script; then the following command will spawn a new terminal, run spam.py in it, and close the terminal once the script has ended.
gnome-terminal -x python spam.py
Try with this script:
# spam.py
import time

for _ in range(5):
    print("eggs")
    time.sleep(1)
Then the previous command will spawn a terminal, print eggs five times, and then close.
If you want to leave the terminal open with the Python interpreter still running after the script ends, then Python's -i flag (doc, then CTRL+F -> -i) is what you want:
gnome-terminal -x python -i spam.py
To run the Python script in a new instance of your favourite terminal, write:
x-terminal-emulator -e python -i home/ubuntu/pyscript.py
This will start the Python script and run it until it ends, then display a Python prompt to stop the terminal emulator from closing.
This will work with x-terminal-emulator substituted with any of the many, many terminals installed on my computer, so will work with little modification across all POSIX-compatible systems with the standard terminals installed. This won't work on a Mac, however. For a properly cross-platform Python implementation of something slightly different, see here. Most of the techniques should be transferable.
To run the Python script in the same terminal whilst carrying on with the rest of the shell script, write:
python home/ubuntu/pyscript.py &
Note the &, which runs the program in the background as a separate process (its output is still connected to the terminal).

Waiting until a os.system command finishes in my virtual machine (Windows 7)

I have encountered a problem with my Python script. I am using Python 3.6.2 and the os.system function to execute a bash command. The problem is that the script launches the bash command and continues without waiting for it to finish. On my physical machine the script behaves correctly: it waits until the bash command is done and then continues with the following instructions. This is my script:
import os
os.system("C:/Bash/bashCmd.sh")
print("ok")
I also tried subprocess.call, but it never waits in my vm.
My question is about the source of this issue, and whether there are any suggestions for dealing with it.
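Whatever the root cause in the VM, subprocess.run gives you explicit blocking plus an exit status to check. A sketch using the question's script path (the function name is hypothetical):

```python
import subprocess

def run_and_wait(path="C:/Bash/bashCmd.sh"):
    # subprocess.run blocks until the command exits and returns a
    # CompletedProcess; a non-zero returncode means the script failed.
    result = subprocess.run(["bash", path])
    return result.returncode
```

With this, print("ok") placed after the call only runs once the bash command has genuinely finished, and you can inspect the return code if it didn't.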

python multithreading issue in cronjob

I have a python program that uses the ThreadPool for multithreading. The program is one step in a shell script. When I execute the shell script manually on the command line, the entire flow works as expected. However, when I execute the shell script as a cronjob, it appears that the flow goes to the next steps before the python multithreading steps are completely finished.
Inside the python program, I do call AsyncResult.get(timeout) to wait for all the results to come back before moving on.
Run your program via batch(1) (see the output of the command man batch) as well. If that works OK, but the cron version does not, then it is almost certainly a problem with your environment variable setup. To verify that, run printenv from your interactive shell to inspect your environment there. Then do the same thing inside the crontab (you will just need to temporarily set up an extra cron entry for it). Try setting the variables in your shell script before invoking Python.
On the other hand, if it doesn't work via batch(1) either, it could be something to do with the files that your code has open. Try running your shell script with input redirected from /dev/null and output going to a file:
$ /usr/local/bin/myscript </dev/null >|/tmp/outfile.txt 2>&1
Try setting TERM=xterm in your crontab (or whatever value the env command reports in your interactive terminal).
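To see exactly which variables differ, you can have the cron job dump the environment it actually sees and diff that against your interactive shell's printenv output. A small sketch (the log path is an assumption):

```python
import os

def dump_env(path="/tmp/cron_env.txt"):
    # Write one VAR=value per line, sorted, so the file can be
    # diffed against `printenv | sort` from an interactive shell.
    with open(path, "w") as f:
        for key in sorted(os.environ):
            f.write(f"{key}={os.environ[key]}\n")
```

Call dump_env() from the cron-run script, then compare the file with your interactive environment to spot the missing variables.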
