I have a shell script which in turn runs a Python script. I have to exit from the main shell script when an exception is caught in Python. Can anyone suggest a way to achieve this?
In Python you can set the exit status using sys.exit(). Typically, when execution completes successfully you return 0, and some non-zero number if not.
So something like this in your Python will work:
import sys

try:
    ...  # your code here
except Exception:
    sys.exit(1)
And then, as others have said, you need to make sure your bash script catches the error, either by checking the return value explicitly (e.g. using $?) or by using set -e.
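A minimal sketch of the bash side; here python3 -c simulates a failing Python script, and the message text is just a placeholder:

```shell
#!/bin/bash
# run the Python script (python3 -c stands in for your real script)
python3 -c 'import sys; sys.exit(1)'
status=$?
if [ "$status" -ne 0 ]; then
    echo "Python script failed with exit code $status" >&2
    # exit "$status"   # uncomment to propagate the failure to the caller
fi
```

Alternatively, put set -e near the top of the script; then the first command that exits non-zero aborts the whole script and no explicit check is needed.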
In a program I am writing in python I need to completely restart the program if a variable becomes true, looking for a while I found this command:
while True:
    if reboot == True:
        os.execv(sys.argv[0], sys.argv)
When executed it returns the error [Errno 8] Exec format error. I searched for further documentation on os.execv but didn't find anything relevant, so my question is whether anyone knows what I did wrong, or knows a better way to restart a script (by restarting I mean completely re-running the script, as if it were being opened for the first time, with all variables unassigned and no threads running).
There are multiple ways to achieve this. Start by modifying the program to exit whenever the flag becomes True. Then there are several options, each with its advantages and disadvantages.
Wrap it using a bash script.
The script should handle exits and restart your program. A really basic version could be:
#!/bin/bash
while :
do
    python program.py
    sleep 1
done
Start the program as a sub-process of another program.
Start by wrapping your program's code in a function. Then your __main__ could look like this:
from multiprocessing import Process

def program():
    ### Here is the code of your program
    ...

while True:
    process = Process(target=program)
    process.start()
    process.join()
    print("Restarting...")
This code is relatively basic, and it requires error handling to be implemented.
Use a process manager
There are a lot of tools available that can monitor the process, run multiple processes in parallel and automatically restart stopped processes. It's worth having a look at PM2 or similar.
IMHO the third option (a process manager) is the safest approach. The other two have edge cases that you would need to handle in your own code.
This has worked for me. Add the shebang at the top of your code and call os.execv() as shown below:
#!/usr/bin/env python3
import os
import sys

if __name__ == '__main__':
    while True:
        reboot = input('Enter: ')
        if reboot == '1':
            sys.stdout.flush()
            os.execv(sys.executable, [sys.executable, __file__] + sys.argv[1:])
        else:
            print('OLD')
I got the same "Exec format error", and I believe it is basically the same error you get when you simply type a Python script's name at the command prompt and expect it to execute. On Linux it won't work because a path to the interpreter is required, and the execv call encounters the same error.
You could add the path of your Python interpreter, and that error goes away, except that the name of your script then becomes a parameter and must be added to the argv list. To avoid that, make your script independently executable by adding "#!/usr/bin/python3" to the top of the script AND running chmod 755 on it.
This works for me:
#!/usr/bin/python3
# this script is called foo.py
import os
import sys
import time

if len(sys.argv) >= 2:
    Arg1 = int(sys.argv[1])
else:
    sys.argv.append(None)
    Arg1 = 1

print(f"Arg1: {Arg1}")
sys.argv[1] = str(Arg1 + 1)
time.sleep(3)
os.execv("./foo.py", sys.argv)
Output:
Arg1: 1
Arg1: 2
Arg1: 3
...
I am working on a program that needs to call another Python script and terminate the execution of the current file. I tried doing this using the os.close() function, as follows:
def call_otherfile(self):
    os.system("python file2.py")  # Execute new script
    os.close()  # Close current script
Using the above code I am able to open the second file, but I am unable to close the current one. I know I am making a silly mistake but I can't figure out what it is.
To do this you will need to spawn a subprocess directly. This can be done either with the more low-level fork-and-exec model, as is traditional in Unix, or with a higher-level API like subprocess.
import subprocess
import sys

def spawn_program_and_die(program, exit_code=0):
    """
    Start an external program and exit the script
    with the specified return code.

    Takes the parameter program, which is a list
    that corresponds to the argv of your command.
    """
    # Start the external program
    subprocess.Popen(program)
    # We have started the program, and can exit this interpreter
    sys.exit(exit_code)

spawn_program_and_die(['python', 'path/to/my/script.py'])
# Or, as in OP's example
spawn_program_and_die(['python', 'file2.py'])
Also, just a note on your original code: os.close corresponds to the Unix syscall close, which tells the kernel that your program no longer needs a file descriptor. It is not supposed to be used to exit the program.
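To illustrate the point, os.close() pairs with os.open() on a file descriptor, and the program keeps running afterwards (demo.txt is just a scratch file name for this sketch):

```python
import os

# os.close() releases a file descriptor; it does not terminate the program
fd = os.open("demo.txt", os.O_WRONLY | os.O_CREAT | os.O_TRUNC)
os.write(fd, b"hello")
os.close(fd)  # the descriptor is gone, but the process lives on
print("still running")
```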
If you don't want to define your own function, you can always call subprocess.Popen directly, as in Popen(['python', 'file2.py']).
Use the subprocess module, which is the suggested way to do that kind of thing (executing a new script or process). In particular, look at Popen for starting a new process; to terminate the current program you can use sys.exit().
It's very simple: use os.startfile (note that it is Windows-only) and after that call exit() or sys.exit().
# file 1
os.startfile("file2.py")
exit()
Within a Python script, I'm trying to execute the following sequence of events:
Open a command window and run a program. When it completes, it outputs a text file.
Once that text file has been created, close the program.
After that has happened, run a new program using the text file as an input
Here's what I have so far:
subprocess.popen(['cmd','/c',r'programThatRuns.exe'])
subprocess.wait() # ? subprocess.check_call()? kill?
subprocess.popen(['cmd','/c',r'otherProgramThatRuns.exe'])
So I guess I'm really stuck on the second line
I think all you need is:
subprocess.check_call(['programThatRuns.exe'])
subprocess.check_call(['otherProgramThatRuns.exe'])
The check_call function will run the program and wait for it to finish. If it fails (non-zero exit code) it will raise a CalledProcessError exception.
You generally don't want to run programs through cmd; just run them directly. You only need to force using cmd if the program isn't an executable, e.g. for a built-in command like dir, for a .bat or .cmd file, or if you want to use file associations.
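A small runnable illustration of check_call raising on failure; sys.executable and the exit code 3 stand in for the real programs above:

```python
import subprocess
import sys

try:
    # a child process that fails with exit code 3
    subprocess.check_call([sys.executable, "-c", "import sys; sys.exit(3)"])
except subprocess.CalledProcessError as e:
    code = e.returncode
    print("child failed with exit code", code)
```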
Have you tried using subprocess.call?
Python 2 - Python 3
Run the command described by args. Wait for command to complete, then return the returncode attribute.
Seems to be what you're trying to do. Simply run the first process, check that the file exists, and pass the file into the second process to use.
subprocess.check_call will also work for what you're trying to do, except that if the process returns a non-zero return code it'll raise an exception, while call will simply return the return code.
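The difference in one snippet; sys.executable and the exit code 2 stand in for the programs in the question:

```python
import subprocess
import sys

# call() returns the exit code instead of raising on failure
rc = subprocess.call([sys.executable, "-c", "import sys; sys.exit(2)"])
print("exit code:", rc)
```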
You have to call wait on the child process, i.e.
o = subprocess.Popen(['cmd', '/c', r'programThatRuns.exe'])
o.wait()
subprocess.Popen(['cmd', '/c', r'otherProgramThatRuns.exe'])
or you use check_call
I have a bug in my program and want to check it out using a debugger. My IDE (WingIDE) has debug functionality, but I cannot use it when calling the program from the shell. So I use the Python module pdb. My application is single-threaded.
I have looked into Code is behaving differently in Release vs Debug Mode but that seems something different to me.
I narrowed it down to the following code.
What I did:
I created a short method that will only be called when not running inside an IDE.
def set_pdb_trace():
    run_in_ide = not sys.stdin.isatty()
    if not run_in_ide:
        import pdb; pdb.set_trace()  # use only in python interpreter
This works fine; I have used it in many situations.
I want to debug the following method :
import sys
import os
import subprocess32

def call_backported():
    command = 'lsb_release -r'
    timeout1 = 0.001  # deliberately too short, so the time-out will be enforced
    try:
        p = subprocess32.Popen(command, shell=True,
                               stdout=subprocess32.PIPE,
                               stderr=subprocess32.STDOUT)
        set_pdb_trace()
        tuple1 = p.communicate(input=b'exit %errorlevel%\r\n', timeout=timeout1)
        print('No time out')
        value = tuple1[0].decode('utf-8').strip()
        print('Value : ' + value)
    except subprocess32.TimeoutExpired:
        print('TimeoutExpired')
Explanation.
I want to call subprocess with a timeout. For Python 3.3+ it is built in, but my application has to be able to run on Python 2.7 as well, so I used https://pypi.python.org/pypi/subprocess32/3.2.6 as a backport.
To read the returned value I used How to retrieve useful result from subprocess?
Without a timeout, or with the timeout set to e.g. 1 second, the method works as expected: the result value and 'No time out' are printed.
I want to enforce a timeout, so I set the timeout to a very short 0.001 seconds. Now only 'TimeoutExpired' should be printed.
I want to execute this in a shell.
When I first comment out the set_pdb_trace() line, 'TimeoutExpired' is printed: the expected behaviour.
Now I uncomment set_pdb_trace() and execute in a shell again.
The debugger comes up, I press 'c' (continue), and 'No time out' with the result is printed. This result is different than without the debugger. The generated output is:
bernard@bernard-vbox2:~/clones/it-should-work/unit_test$ python test_subprocess32.py
--Return--
> /home/bernard/clones/it-should-work/unit_test/test_subprocess32.py(22)set_pdb_trace()->None
-> import pdb; pdb.set_trace() # use only in python interpreter
(Pdb) c
No time out
Value : Release: 13.10
bernard@bernard-vbox2:~/clones/it-should-work/unit_test$
How is this possible? And how to solve?
You introduced a delay between opening the subprocess and writing to it.
When you create the Popen() object, the child process is started immediately. When you then call p.communicate() and try to write to it, the process is not quite ready yet to receive input, and that delay, together with the time it takes to read the process output, is longer than your 0.001 timeout.
When you insert the breakpoint, the process gets a chance to spin up; the lsb_release command doesn't wait for input and produces its output immediately. By the time p.communicate() is called there is no need to wait for the pipe anymore and the output is produced immediately.
If you put your breakpoint before the Popen() call, then hit c, you'll see the timeout trigger again.
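The timing effect can be reproduced without pdb. In this sketch, echo plays the role of lsb_release (a child that writes its output immediately and does not wait for input) and a short sleep plays the role of the pause at the pdb prompt:

```python
import subprocess
import time

p = subprocess.Popen(["echo", "hello"], stdout=subprocess.PIPE)
time.sleep(0.5)  # stand-in for the pause at the breakpoint
# the child finished long ago, so even a tiny timeout does not expire
out, _ = p.communicate(timeout=0.001)
print(out.strip())
```

Remove the sleep and shrink the timeout further, and TimeoutExpired becomes possible again, which is exactly the behaviour seen without the debugger.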
I have a script that I want to exit early under some condition:
if not "id" in dir():
    print "id not set, cannot continue"
    # exit here!
# otherwise continue with the rest of the script...
print "alright..."
# [ more code ]
I run this script using execfile("foo.py") from the Python interactive prompt and I would like the script to exit going back to interactive interpreter. How do I do this? If I use sys.exit(), the Python interpreter exits completely.
In the interactive interpreter, catch the SystemExit raised by sys.exit and ignore it:
try:
    execfile("mymodule.py")
except SystemExit:
    pass
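The same trick still applies on Python 3, where execfile() is gone and exec(open(...).read()) takes its place; in this sketch a string stands in for the file's contents:

```python
# the string stands in for the contents of mymodule.py
script = "import sys\nsys.exit('stopping early')"

interrupted = False
try:
    exec(script)
except SystemExit:
    interrupted = True  # swallowed; the surrounding session keeps going

print("back in the caller, interrupted =", interrupted)
```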
Put your code block in a function and return from that function, like so:
def do_the_thing():
    if not "id" in dir():
        print "id not set, cannot continue"
        return
        # exit here!
    # otherwise continue with the rest of the script...
    print "alright..."
    # [ more code ]

# Call the function
do_the_thing()
Also, unless there is a good reason to use execfile(), this method should probably be put in a module, where it can be called from another Python script by importing it:
import mymodule
mymodule.do_the_thing()
I'm a little obsessed with IPython for interactive work, but check out the tutorial on shell embedding for a more robust solution than this one (which is your most direct route).
Instead of using execfile, you should make the script importable (__name__ == '__main__' protection, separated into functions, etc.), and then call the functions from the interpreter.