Running Newman command with subprocess - python

I have a Newman command that is part of my script. I'd like the entire script to quit (or go back to the main menu) when the collection encounters an error.
import shlex
from subprocess import CalledProcessError, Popen, PIPE
from io import TextIOWrapper

def run_sh(command):
    process = Popen(shlex.split(command), stdout=PIPE)
    for line in TextIOWrapper(process.stdout, newline=""):
        print(line, end="")
cmd = 'newman run "My_Collection.postman_collection.json" --folder "My_Folder" -e ../../Postman/Environments/My_Environment.json -d "CREDS.txt" -r cli,csv -n 1 --reporter-csv-includeBody --reporter-csv-export ./RESPONSES.csv'
The run_sh(cmd) call is part of a bigger script, and I'd like the script not to carry on to the next step if errors are encountered while running the collection. Using try and except won't work because the run_sh call itself goes through successfully (it runs the collection, but the collection yields errors).
Example of an error:
# failure detail
1. AssertionError Status code is 202
expected response to have status code 202 but got 401
at assertion:0 in test-script
inside "MY FOLDER"
What would be the best way to go about this?
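One hedged approach: Newman is documented to exit with a non-zero status code when a run has failing assertions (unless you pass --suppress-exit-code), so run_sh can wait for the process after draining its output and raise if the exit code is bad, letting the caller decide whether to quit or go back to the menu. A sketch, reusing the cmd string above:

import shlex
import sys
from subprocess import CalledProcessError, Popen, PIPE
from io import TextIOWrapper

def run_sh(command):
    process = Popen(shlex.split(command), stdout=PIPE)
    for line in TextIOWrapper(process.stdout, newline=""):
        print(line, end="")
    # Newman exits non-zero when assertions fail, so surface that
    # as an exception instead of silently carrying on.
    if process.wait() != 0:
        raise CalledProcessError(process.returncode, command)

try:
    run_sh(cmd)
except CalledProcessError as e:
    print("Collection run failed with exit code", e.returncode)
    sys.exit(1)  # or return to the main menu instead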

Filter output received from subprocess.check_call

What specific syntax needs to be added to the Python 3 script below in order for the script to filter through each line of the results and evaluate whether any of the lines of output contain specific substrings?
Here is the code which now successfully runs a git clone command:
import subprocess

newpath = "C:\\path\\to\\destination\\"
cloneCommand = 'git clone https://github.com/someuser/somerepo.git'
proc = subprocess.check_call(cloneCommand, stdout=subprocess.PIPE, shell=True, cwd=newpath, timeout=None)
The above successfully clones the intended repo. But the problem is that there is no error handling.
I would like to be able to have the script listen for the words deltas and done in each line of output so that it can indicate success when the following line is printed in the output:
Resolving deltas: 100% (164/164), done.
subprocess.Popen(...) allows us to filter each line of the streaming output. However, subprocess.Popen(...) does not work when we run remote commands like git clone, because it does not wait to receive the return from the remote call.
What syntax do we need to use to filter the output from calls to subprocess.check_call(...)?
A small script that we can execute to test our Popen code. It generates some STDOUT and STDERR before exiting with a code of our choosing, optionally with some delay:
from sys import stdout, stderr, exit, argv
from time import sleep
stdout.write('OUT 1\nOUT 2\nOUT 3\n')
sleep(2)
stderr.write('err 1\nerr 2\n')
exit(int(argv[1]))
A script demonstrating how to use Popen. The arguments to this script will be the external command that we want to execute.
import sys
from subprocess import Popen, PIPE

# A function that takes some subprocess command arguments (list, tuple,
# or string), runs that command, and returns a dict containing STDOUT,
# STDERR, PID, and exit code. Error handling is left to the caller.
def run_subprocess(cmd_args, shell=False):
    p = Popen(cmd_args, stdout=PIPE, stderr=PIPE, shell=shell)
    stdout, stderr = p.communicate()
    return dict(
        stdout=stdout.decode('utf-8').split('\n'),
        stderr=stderr.decode('utf-8').split('\n'),
        pid=p.pid,
        exit_code=p.returncode,
    )

# Run a command.
cmd = sys.argv[1:]
d = run_subprocess(cmd)

# Inspect the returned dict.
for k, v in d.items():
    print('\n#', k)
    print(v)
If the first script is called other_program.py and this script is called demo.py, you would run the whole thing along these lines:
python demo.py python other_program.py 0 # Exit with success.
python demo.py python other_program.py 1 # Exit with failure.
python demo.py python other_program.py X # Exit with a Python error.
Usage example with git clone as discussed with OP in comments:
$ python demo.py git clone --progress --verbose https://github.com/hindman/jump
# stdout
['']
# stderr
["Cloning into 'jump'...", 'POST git-upload-pack (165 bytes)', 'remote: Enumerating objects: 70, done. ', 'remote: Total 70 (delta 0), reused 0 (delta 0), pack-reused 70 ', '']
# pid
7094
# exit_code
0
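Back to the original substring question: since run_subprocess returns the output as lists of lines, checking for the "deltas ... done" message is just a scan over those lines. Note from the sample run above that git clone reports its progress on stderr, not stdout, and that small repos may not print a "Resolving deltas" line at all, so the exit code is the more reliable success test. A sketch, using the run_subprocess function above and the question's placeholder repo URL:

d = run_subprocess(['git', 'clone', '--progress', '--verbose',
                    'https://github.com/someuser/somerepo.git'])

# Look for a line like "Resolving deltas: 100% (164/164), done."
# in stderr, where git writes its progress messages.
saw_deltas_done = any('deltas' in line and 'done' in line
                      for line in d['stderr'])
cloned_ok = d['exit_code'] == 0

print(cloned_ok, saw_deltas_done)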

How to know if a service is installed

I'm looking for a way to check with a Python script whether a service is installed. For example, to check whether an SSH server is installed/running/down from the command line, I use:
service sshd status
If the service is not installed, I have a message like this:
sshd.service
Loaded: not-found (Reason: No such file or directory)
Active: inactive (dead)
So I used subprocess check_output to get these three lines, but the Python script is not working. I used shell=True to get the output, but it doesn't work. Is this the right way to find out whether a service is installed, or is there another, more efficient method?
There is my python script:
import subprocess
from shlex import split

output = subprocess.check_output(split("service sshd status"), shell=True)
if "Loaded: not-found" in output:
    print "SSH server not installed"
The problem with this code is that it raises subprocess.CalledProcessError: Command returned non-zero exit status 1. I know that happens when the command exits with a non-zero status, but I still need the output, just as when I run the command in a shell.
Choose a different systemctl call, one whose result differs for existing and non-existing services. For example,
systemctl cat sshd
will return exit code 0 if the service exists and 1 if not, and that should be quite easy to check.
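A minimal sketch of that check, with a hypothetical service_exists helper (assuming a systemd host and Python 3.3+ for subprocess.DEVNULL):

import subprocess

def service_exists(name):
    # systemctl cat exits 0 when the unit file exists and non-zero
    # otherwise; the printed unit file itself is not needed here.
    return subprocess.call(["systemctl", "cat", name],
                           stdout=subprocess.DEVNULL,
                           stderr=subprocess.DEVNULL) == 0

print(service_exists("sshd"))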
Just catch the error and avoid shell=True:
import subprocess

try:
    output = subprocess.check_output(["service", "sshd", "status"], stderr=subprocess.STDOUT)
except subprocess.CalledProcessError as e:
    print(e.output)
    print(e.returncode)
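From there, the asker's original test becomes a substring check on the captured output, whether or not the command exited non-zero. A sketch (check_output returns bytes on Python 3, hence the decode):

import subprocess

try:
    output = subprocess.check_output(["service", "sshd", "status"],
                                     stderr=subprocess.STDOUT)
except subprocess.CalledProcessError as e:
    output = e.output  # service exits non-zero for missing or stopped units

if "Loaded: not-found" in output.decode("utf-8"):
    print("SSH server not installed")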

Calling ffmpeg kills script in background only

I've got a python script that calls ffmpeg via subprocess to do some mp3 manipulations. It works fine in the foreground, but if I run it in the background, it gets as far as the ffmpeg command, which itself gets as far as dumping its config into stderr. At this point everything stops, and the parent task is reported as stopped, without an exception being raised anywhere. I've tried a few other simple commands in place of ffmpeg; they execute normally in the foreground or background.
This is the minimal example of the problem:
import subprocess

inf = "3HTOSD.mp3"
outf = "out.mp3"
args = ["ffmpeg",
        "-y",
        "-i", inf,
        "-ss", "0",
        "-t", "20",
        outf]
print "About to do"
result = subprocess.call(args)
print "Done"
I really can't work out why or how a wrapped process can cause the parent to terminate without at least raising an error, and how it only happens in so niche a circumstance. What is going on?
Also, I'm aware that ffmpeg isn't the nicest of packages, but I'm interfacing with something that has ffmpeg compiled into it, so using it again seems sensible.
It might be related to Linux process in background - “Stopped” in jobs? e.g., using parent.py:
from subprocess import check_call
check_call(["python", "-c", "import sys; sys.stdin.readline()"])
should reproduce the issue: "parent.py script shown as stopped" if you run it in bash as a background job:
$ python parent.py &
[1] 28052
$ jobs
[1]+ Stopped python parent.py
If the parent process is in an orphaned process group, then it is killed on receiving the SIGTTIN signal (a signal to stop).
The solution is to redirect the input:
import os
from subprocess import check_call

try:
    from subprocess import DEVNULL
except ImportError:  # Python 2
    DEVNULL = open(os.devnull, 'r+b', 0)

check_call(["python", "-c", "import sys; sys.stdin.readline()"], stdin=DEVNULL)
If you don't need to see ffmpeg's stdout/stderr, you could also redirect them to /dev/null (STDOUT here is subprocess.STDOUT):
from subprocess import STDOUT

check_call(ffmpeg_cmd, stdin=DEVNULL, stdout=DEVNULL, stderr=STDOUT)
I like to use the commands module. It's simpler to use in my opinion.
import commands

cmd = "ffmpeg -y -i %s -ss 0 -t 20 %s 2>&1" % (inf, outf)
status, output = commands.getstatusoutput(cmd)
if status != 0:
    raise Exception(output)
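One caveat: the commands module is Python 2 only and was removed in Python 3, where subprocess.getstatusoutput is the equivalent. A sketch of the same logic there, reusing the question's file names:

import subprocess

inf, outf = "3HTOSD.mp3", "out.mp3"  # same files as in the question

# subprocess.getstatusoutput runs the command through the shell and
# already merges stderr into the output, so no explicit 2>&1 is needed.
status, output = subprocess.getstatusoutput(
    "ffmpeg -y -i %s -ss 0 -t 20 %s" % (inf, outf))
if status != 0:
    raise Exception(output)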
As a side note, sometimes PATH can be an issue, and you might want to use an absolute path to the ffmpeg binary.
matt#goliath:~$ which ffmpeg
/opt/local/bin/ffmpeg
From the python/subprocess/call documentation:
Wait for command to complete, then return the returncode attribute.
So as long as the process you called does not exit, your program does not go on.
You should set up a Popen process object, direct its standard output and standard error into separate streams, and terminate the process when an error appears.
Maybe something like this works:
import subprocess

# args is the ffmpeg command list from the question.
proc = subprocess.Popen(args, stderr=subprocess.PIPE)  # puts stderr into a new stream
while proc.poll() is None:
    try:
        err = proc.stderr.read()
    except:
        continue
    else:
        if err:
            proc.terminate()
            break

running a python script on server

I have a python script on the server:
#!/usr/bin/env python
import cgi
import cgitb; #cgitb.enable()
import sys, os
from subprocess import call
import time
import subprocess

form = cgi.FieldStorage()
component = form.getvalue('component')
command = form.getvalue('command')
success = True
print """Content-Type: text/html\n"""
if component == "Engine" and command == "Start":
    try:
        process = subprocess.Popen(['/usr/sbin/telepath', 'engine', 'start'], shell=False, stdout=subprocess.PIPE)
        print "{ans:12}"
    except Exception, e:
        success = False
        print "{ans:0}"
When I run this script and add the component and command parameters to be "Engine" and "Start" respectively - it starts the process and prints to the shell
"""Content-Type: text/html\n"""
{ans:12}
but most importantly - it starts the process!
However, when I run the script by POSTing to it, it returns {ans:12} but does not run the process, which was the whole intention in the first place. Any logical explanation?
I suspect it's one of two things. Firstly, your process is probably running, but your Python code doesn't handle its output, so try:
process = subprocess.Popen(['/usr/sbin/telepath','engine','start'], shell=False, stdout=subprocess.PIPE)
print process.stdout.read()
This is the most likely explanation, and it accounts for why you see the output from the command line but not from the browser. Secondly, the script is run by the browser as the apache user rather than with your user ID, so check the permissions on /usr/sbin/telepath.
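A sketch of the first suggestion using communicate(), which drains both pipes and waits for the process to finish, so a full pipe buffer can't stall the CGI script:

process = subprocess.Popen(['/usr/sbin/telepath', 'engine', 'start'],
                           shell=False,
                           stdout=subprocess.PIPE,
                           stderr=subprocess.PIPE)
out, err = process.communicate()  # read everything and wait for exit
print(out)
print(err)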

How can I tell whether screen is running?

I am trying to run a Python program to see if the screen program is running. If it is, then the program should not run the rest of the code. This is what I have and it's not working:
#!/usr/bin/python
import os

var1 = os.system('screen -r > /root/screenlog/screen.log')
fd = open("/root/screenlog/screen.log")
content = fd.readline()
while content:
    if content == "There is no screen to be resumed.":
        os.system('/etc/init.d/tunnel.sh')
        print "The tunnel is now active."
    else:
        print "The tunnel is running."
fd.close()
I know there are probably several things here that don't need to be there, and quite a few that I'm missing. I will be running this program from cron.
from subprocess import Popen, PIPE

def screen_is_running():
    out = Popen("screen -list", shell=True, stdout=PIPE).communicate()[0]
    return not out.startswith("This room is empty")
Maybe the error message that you redirect in the first os.system call is written to standard error instead of standard output. You should try replacing that line with:
var1 = os.system ('screen -r 2> /root/screenlog/screen.log')
Note the 2> to redirect standard error to your file.
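Equivalently, you can skip the log file altogether and let subprocess capture the message directly. A sketch, assuming the exact message text from the question:

import subprocess

# screen -r writes "There is no screen to be resumed." when nothing is
# running; capture both streams instead of redirecting to a log file.
proc = subprocess.Popen(["screen", "-r"],
                        stdout=subprocess.PIPE,
                        stderr=subprocess.PIPE)
out, err = proc.communicate()
if "There is no screen to be resumed" in (out + err).decode("utf-8", "replace"):
    subprocess.call(["/etc/init.d/tunnel.sh"])
    print("The tunnel is now active.")
else:
    print("The tunnel is running.")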
