Handle Exception When Running Python Script From Another Python Script - python

I am running a python script from another python script and I am wondering how I can catch exceptions from the parent python script.
My parent python script calls another python script n times. Eventually the called script will exit with a 'ValueError' exception. I'm wondering if there is a way for my parent python script to notice this and then stop executing.
Here is the base code as-is:
import os
os.system('python other_script.py')
I have tried things such as this to no avail:
import os

try:
    os.system('python other_script.py')
except ValueError:
    print("Caught ValueError!")
    exit()
and
import os

try:
    os.system('python other_script.py')
except:
    print("Caught Generic Exception!")
    exit()

os.system() always returns an integer status code: when it returns 0, the command ran successfully; a nonzero value indicates an error. It never re-raises the child's exceptions, which is why your except blocks are never reached. To check the result you can simply add a condition:
import os

result = os.system('python other_script.py')
if result == 0:
    print("Command executed successfully")
else:
    print("Command did not execute successfully")
But I recommend you use the subprocess module instead of os.system(). It is a bit more complicated than os.system(), but it is also far more flexible.
With os.system() the output is sent to the terminal, but with subprocess, you can collect the output so you can search it for error messages or whatever. Or you can just discard the output.
The same program can be written using subprocess as well:
# Importing subprocess
import subprocess
# Your command
cmd = "python other_script.py"
# Starting process
process = subprocess.Popen(cmd, shell=True, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
# Getting the output and errors of the program
stdout, stderr = process.communicate()
# Printing the errors
print(stderr)
Hope this helps :)
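If you want the parent to actually catch the failure instead of inspecting an integer, a minimal sketch with subprocess.run() (Python 3.5+) looks like this; the throwaway child script written here is just a stand-in for other_script.py from the question:

```python
import os
import subprocess
import sys
import tempfile

# Stand-in for other_script.py: it raises ValueError, so the child
# process exits with a nonzero status.
child = tempfile.NamedTemporaryFile("w", suffix=".py", delete=False)
child.write("raise ValueError('something went wrong')\n")
child.close()

try:
    # check=True converts a nonzero exit status into CalledProcessError,
    # something os.system() will never do for you.
    subprocess.run([sys.executable, child.name],
                   check=True, stderr=subprocess.DEVNULL)
    print("child succeeded")
except subprocess.CalledProcessError as exc:
    print("child failed with exit code", exc.returncode)
finally:
    os.unlink(child.name)
```

Since an unhandled exception makes the interpreter exit with status 1, this prints "child failed with exit code 1", and the parent can stop right there.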

Related

get pid of process created with subprocess.check_call()

I am currently trying to get the process id of a process started with subprocess.check_call.
i.e.
from subprocess import check_output
# I want to retrieve the PID of this process:
try:
    p = check_output(['some broken program'])
except:
    if CalledProcessError: # but Popen does not throw me a CalledProcessError even if program crashes
        print("triage some stuff")
print(p.pid) # this doesn't work unless its Popen
I have tried using Popen which works perfectly, however, it doesn't seem to be able to catch when a program is terminated i.e. CalledProcessError.
Can anyone advise, whether there is a way to get around either problem? Thanks!
You have to import CalledProcessError too. Code (with check_output):
from subprocess import check_output, CalledProcessError
To detect a specific exception you have to use the following syntax:
try:
    statement
except ExceptionName:
    another_statement
In your situation:
try:
    p = check_output(['some broken program'])
except CalledProcessError:
    print("triage some stuff")
You cannot reference p inside the except block as p may be undefined.
Moreover, as stated in this post, subprocess.run() is recommended over subprocess.check_output()
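To get both things the question asks for (the PID and failure detection), one sketch is to drop down to Popen and check the return code by hand; the inline child command here is a stand-in for 'some broken program':

```python
import subprocess
import sys

# Stand-in for "some broken program": a child that exits with status 3.
prog = [sys.executable, "-c", "import sys; sys.exit(3)"]

p = subprocess.Popen(prog, stdout=subprocess.PIPE)
print("child pid:", p.pid)   # the PID is available immediately
p.communicate()              # wait for the child and collect its output
if p.returncode != 0:
    # manual replacement for the CalledProcessError check_output raises
    print("triage some stuff, exit code was", p.returncode)
```

This way the except block is no longer needed at all: the failure shows up as a nonzero p.returncode after communicate() returns.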

Make python3 program press "enter" multiple times

I use OpenVPN at my company and am trying to automate the user creation process. I have hit a problem at the certificate generation step. When building a key for the user (all parameters are predefined), the program has to press Enter multiple times, and at the end "y" and "Enter" two times. I tried using Popen and PIPE, but no luck so far. Would appreciate any insight.
import sys, os
from subprocess import Popen, PIPE

# Generate an .ovpn file
try:
    username = sys.argv[1]
except:
    print("Error. Supply a username!")
    sys.exit()
print("Adding user")
os.system("useradd" + " -m" + " -s" + " /bin/bash " + username)
print("Sourcing vars")
os.system('source + /home/myuser/openvpn-ca/vars')
enter = Popen(['/home/myuser/openvpn-ca/build-key {}'.format(username)],
              stdin=PIPE, shell=True)
enter.communicate(input='\n')
Edit:
This is different than what it was marked [duplicate] for. Here's why:
I don't need to generate a custom certificate, change any values etc. It just needs to press "Enter" multiple times and input "yes" and "Enter" 2 times.
You cannot source a shell script from Python; or rather, you can, but it will simply start a new subprocess which sources something and then disappears, without changing anything in your Python environment or subsequent subprocesses.
Try something like this instead:
import sys
import logging # to get diagnostics on standard error instead
import subprocess

# Maybe switch to level=logging.WARNING once you are confident this works
logging.basicConfig(level=logging.INFO, format='%(module)s:%(asctime)s:%(message)s')

try:
    username = sys.argv[1]
except:
    logging.error("Error. Supply a username!")
    sys.exit()

logging.info("Adding user")
subprocess.run(["useradd", "-m", "-s", "/bin/bash", username],
    check=True, universal_newlines=True)

logging.info("Building key")
subprocess.run('''
source /home/myuser/openvpn-ca/vars
/home/myuser/openvpn-ca/build-key {}'''.format(username),
    shell=True, check=True, input='\n\n', universal_newlines=True)
The switch to subprocess.run() requires a reasonably new version of Python 3. In older versions, subprocess.check_call() would do roughly the same thing, but didn't have an input= argument, so you really did have to use the basic Popen() for this.
Additional notes:
The plus sign after source was obviously a syntax error
We use check=True throughout to make sure Python checks that the commands finish successfully.
Mixing os.system() with subprocess is not an error, but certainly a suspicious code smell.
(Much) more about using subprocess on U*x here: https://stackoverflow.com/a/51950538/874188

Self Restarting a Python Script

I have created a watchdog timer for my script (Python 3), which allows me to halt execution if anything goes wrong (not shown in code below). However, I would like to have the ability to restart the script automatically using only Python (no external scripts). The code needs to be cross platform compatible.
I have tried subprocess and execv (os.execv(sys.executable, ['python'] + sys.argv)), however I am seeing very weird functionality on Windows. I open the command line, and run the script ("python myscript.py"). The script stops but does not exit (verified through Task Manager), and it will not restart itself unless I press enter twice. I would like it to work automatically.
Any suggestions? Thanks for your help!
import threading
import time
import subprocess
import os
import sys

if __name__ == '__main__':
    print("Starting thread list: " + str(threading.enumerate()))
    for _ in range(3):
        time.sleep(1)
        print("Sleeping")

    ''' Attempt 1 with subprocess.Popen '''
    # child = subprocess.Popen(['python', __file__], shell=True)

    ''' Attempt 2 with os.execv '''
    args = sys.argv[:]
    args.insert(0, sys.executable)
    if sys.platform == 'win32':
        args = ['"%s"' % arg for arg in args]
    os.execv(sys.executable, args)
    sys.exit()
Sounds like you are using threading in your original script, which explains why you can't break out of your original script simply by pressing Ctrl+C. In that case, you might want to add a KeyboardInterrupt handler to your script, like this:
from time import sleep

def interrupt_this():
    try:
        while True:
            sleep(0.02)
    except KeyboardInterrupt as ex:
        # handle all exit procedures and data cleaning
        print("[*] Handling all exit procedures...")
After this, you should be able to automatically restart your relevant procedure (even from within the script itself, without any external scripts). Anyway, it's a bit hard to know without seeing the relevant script, so maybe I can be of more help if you share some of it.
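For the restart itself, a cross-platform alternative worth sketching (this is a different technique from the exec-in-place os.execv approach in the question, and sidesteps its Windows quirks) is a small supervisor loop that relaunches the worker whenever it exits abnormally. The inline worker command below is just a placeholder for the real myscript.py:

```python
import subprocess
import sys

# Placeholder worker: always "crashes" with exit status 1.
worker = [sys.executable, "-c", "import sys; sys.exit(1)"]

max_restarts = 3
for attempt in range(1, max_restarts + 1):
    code = subprocess.call(worker)  # run the worker and wait for it
    if code == 0:
        break  # clean exit: no restart needed
    print("worker died with code %d, restart attempt %d" % (code, attempt))
```

Because the supervisor is a separate process, it keeps running no matter how the worker dies, and the same loop works on Windows and Unix alike.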

Controlling a python script from another script

I am trying to learn how to write a script control.py that runs another script test.py in a loop for a certain number of times. In each run it reads test.py's output and halts it if some predefined output is printed (e.g. the text 'stop now'), and the loop continues with its next iteration (once test.py has finished, either on its own or by force). So something along the lines of:
for i in range(n):
    os.system('test.py someargument')
    if output == 'stop now': #stop the current test.py process and continue with next iteration
        #output here is supposed to contain what test.py prints
The problem with the above is that, it does not check the output of test.py as it is running, instead it waits until test.py process is finished on its own, right?
Basically trying to learn how I can use a python script to control another one, as it is running. (e.g. having access to what it prints and so on).
Finally, is it possible to run test.py in a new terminal (i.e. not in control.py's terminal) and still achieve the above goals?
An attempt:
test.py is this:
from itertools import permutations
import random as random

perms = [''.join(p) for p in permutations('stop')]
for i in range(1000000):
    rand_ind = random.randrange(0, len(perms))
    print perms[rand_ind]
And control.py is this: (following Marc's suggestion)
import subprocess

command = ["python", "test.py"]
n = 10
for i in range(n):
    p = subprocess.Popen(command, stdout=subprocess.PIPE, stderr=subprocess.STDOUT)
    while True:
        output = p.stdout.readline().strip()
        print output
        #if output == '' and p.poll() is not None:
        #    break
        if output == 'stop':
            print 'success'
            p.kill()
            break
    #Do whatever you want
    #rc = p.poll() #Exit Code
You can use the subprocess module or also os.popen:
os.popen(command[, mode[, bufsize]])
Open a pipe to or from command. The return value is an open file object connected to the pipe, which can be read or written depending on whether mode is 'r' (default) or 'w'.
With subprocess I would suggest
subprocess.call(['python.exe', command])
or subprocess.Popen, which is similar to os.popen.
With popen you can read the connected object/file and check whether "stop now" is there.
os.system is not deprecated and you can use it as well (but you won't get an object back from it); you can just check its return code at the end of execution.
With subprocess.call you can run it in a new terminal; or, if you only want to call test.py multiple times, you can put your script in a def main() and run main as often as you want until "stop now" is generated.
Hope this solves your query :-) otherwise comment again.
Looking at what you wrote above, you can also redirect the output to a file directly from the OS call, os.system('test.py *args >> /tmp/mickey.txt'), and then check the file at each round.
As said, popen gives you a file-like object that you can read.
What you are hinting at in your comment to Marc Cabos' answer is threading.
There are several ways Python can use the functionality of other files. If the content of test.py can be encapsulated in a function or class, then you can import the relevant parts into your program, giving you greater access to the runnings of that code.
As described in other answers you can use the stdout of a script, running it in a subprocess. This could give you separate terminal outputs as you require.
However if you want to run the test.py concurrently and access variables as they are changed then you need to consider threading.
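As a sketch of that threading idea, assuming test.py's loop can be imported as a plain function: run it in a thread and let the controller watch shared state instead of parsing stdout. The stop_event and result names, and the short word list standing in for test.py's random permutations, are invented for this example:

```python
import threading

stop_event = threading.Event()
result = []

def worker():
    # Stand-in for test.py's loop: emit words until "stop" comes up.
    for word in ["spot", "tops", "stop", "pots"]:
        if stop_event.is_set():
            return
        result.append(word)
        if word == "stop":
            stop_event.set()  # signal the controller directly

t = threading.Thread(target=worker)
t.start()
stop_event.wait(timeout=5)  # controller blocks until "stop" appears
t.join()
print(result)  # everything emitted up to and including "stop"
```

The controller reacts the moment the event is set, with no pipe buffering in between, which is the main advantage over the subprocess approach.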
Yes, you can use Python to control another program via stdin/stdout, but when reading another process's output there is often a problem of buffering; in other words, the other process doesn't really output anything until it's done.
There are even cases in which the output is buffered or not depending on if the program is started from a terminal or not.
If you are the author of both programs then probably is better using another interprocess channel where the flushing is explicitly controlled by the code, like sockets.
You can use the "subprocess" library for that.
import subprocess

command = ["python", "test.py", "someargument"]
for i in range(n):
    p = subprocess.Popen(command, stdout=subprocess.PIPE, stderr=subprocess.STDOUT)
    while True:
        output = p.stdout.readline()
        if output == '' and p.poll() is not None:
            break
        if output == 'stop now':
            pass #Do whatever you want
    rc = p.poll() #Exit Code

Python: subprocess32 process.stdout.readline() waiting time

If I run the following function "run" with for example "ls -Rlah /" I get output immediately via the print statement as expected
import subprocess32 as subprocess

def run(command):
    process = subprocess.Popen(command,
                               stdout=subprocess.PIPE,
                               stderr=subprocess.STDOUT)
    try:
        while process.poll() == None:
            print process.stdout.readline()
    finally:
        # Handle the scenario if the parent
        # process has terminated before this subprocess
        if process.poll():
            process.kill()
However if I use the python example program below, it seems to be stuck on either process.poll() or process.stdout.readline() until the program has finished. I think it is stdout.readline(), since if I increase the number of strings to output from 10 to 10000 (in the example program), or add a sys.stdout.flush() just after every print, the print in the run function does get executed.
How can I make the output from a subprocess more real-timeish?
Note: I have just discovered that the python example program does not perform a sys.stdout.flush() when it outputs, is there a way for the caller of subprocess to enforce this somehow?
Example program which outputs 10 strings every 5 seconds.
#!/bin/env python
import time

if __name__ == "__main__":
    i = 0
    start = time.time()
    while True:
        if time.time() - start >= 5:
            for _ in range(10):
                print "hello world" + str(i)
            start = time.time()
            i += 1
        if i >= 3:
            break
On most systems, command line programs line buffer or block buffer depending on whether stdout is a terminal or a pipe. On unixy systems, the parent process can create a pseudo-terminal to get terminal-like behavior even though the child isn't really run from a terminal. You can use the pty module to create a pseudo-terminal or use the pexpect module which eases access to interactive programs.
As mentioned in comments, using poll to read lines can result in lost data. One example is data left in the stdout pipe when the process terminates. Reading a pty is a bit different from reading pipes, and you'll find you need to catch an IOError when the child closes to get it all to work properly, as in the example below.
try:
    import subprocess32 as subprocess
except ImportError:
    import subprocess
import pty
import sys
import os
import time
import errno

print("running %s" % sys.argv[1])

m, s = (os.fdopen(pipe) for pipe in pty.openpty())
process = subprocess.Popen([sys.argv[1]],
                           stdin=s,
                           stdout=s,
                           stderr=subprocess.STDOUT)
s.close()

try:
    graceful = False
    while True:
        line = m.readline()
        print line.rstrip()
except IOError, e:
    if e.errno != errno.EIO:
        raise
    graceful = True
finally:
    # Handle the scenario if the parent
    # process has terminated before this subprocess
    m.close()
    if not graceful:
        process.kill()
process.wait()
You should flush standard output in your script:
print "hello world" + str(i)
sys.stdout.flush()
When standard output is a terminal, stdout is line-buffered. But when it is not, stdout is block buffered and you need to flush it explicitly.
If you can't change the source of your script, you can use the -u option of Python (in the subprocess):
-u Force stdin, stdout and stderr to be totally unbuffered.
Your command should be: ['python', '-u', 'script.py']
In general, this kind of buffering happens in userspace. There are no generic ways to force an application to flush its buffers: some applications support command line options (like Python), others support signals, others do not support anything.
One solution might be to emulate a pseudo terminal, giving "hints" to the programs that they should operate in line-buffered mode. Still, this is not a solution that works in every case.
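A sketch of the -u route from the parent's side: start the child with python -u and iterate over its stdout, so each line arrives as the child produces it rather than in one block at exit. The inline child code here is a stand-in for the slow example script from the question:

```python
import subprocess
import sys

# Stand-in child: prints a line, sleeps, prints the next.
child_code = (
    "import time\n"
    "for i in range(3):\n"
    "    print('hello world' + str(i))\n"
    "    time.sleep(0.2)\n"
)

# -u forces the child's stdout to be unbuffered, so the loop below
# sees each line immediately instead of after the child exits.
p = subprocess.Popen([sys.executable, "-u", "-c", child_code],
                     stdout=subprocess.PIPE, universal_newlines=True)
for line in p.stdout:
    print("got:", line.rstrip())
p.wait()
```

Without -u (and without flush calls in the child), the same loop would sit idle and then receive all three lines at once when the pipe's block buffer is flushed at exit.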
For things other than python you could try using unbuffer:
unbuffer disables the output buffering that occurs when program output is redirected from non-interactive programs. For example, suppose you are watching the output from a fifo by running it through od and then more.
od -c /tmp/fifo | more
You will not see anything until a full page of output has been produced.
You can disable this automatic buffering as follows:
unbuffer od -c /tmp/fifo | more
Normally, unbuffer does not read from stdin. This simplifies use of unbuffer in some situations. To use unbuffer in a pipeline, use the -p flag. Example:
process1 | unbuffer -p process2 | process3
So in your case:
run(["unbuffer",cmd])
There are some caveats listed in the docs but it is another option.