I'm trying to prevent uploads to S3 when any earlier command in the pipeline fails; unfortunately, neither of these two methods works as expected:
Shell pipeline
for database in sorted(databases):
    cmd = "bash -o pipefail -o errexit -c 'mysqldump -B {database} | gpg -e -r {GPGRCPT} | gof3r put -b {S3_BUCKET} -k {database}.sql.e'".format(database=database, GPGRCPT=GPGRCPT, S3_BUCKET=S3_BUCKET)
    try:
        subprocess.check_call(cmd, shell=True, executable="/bin/bash")
    except subprocess.CalledProcessError as e:
        print e
Popen with PIPEs
for database in sorted(databases):
    try:
        cmd_mysqldump = "mysqldump {database}".format(database=database)
        p_mysqldump = subprocess.Popen(shlex.split(cmd_mysqldump), stdout=subprocess.PIPE)
        cmd_gpg = "gpg -a -e -r {GPGRCPT}".format(GPGRCPT=GPGRCPT)
        p_gpg = subprocess.Popen(shlex.split(cmd_gpg), stdin=p_mysqldump.stdout, stdout=subprocess.PIPE)
        p_mysqldump.stdout.close()
        cmd_gof3r = "gof3r put -b {S3_BUCKET} -k {database}.sql.e".format(S3_BUCKET=S3_BUCKET, database=database)
        p_gof3r = subprocess.Popen(shlex.split(cmd_gof3r), stdin=p_gpg.stdout, stderr=open("/dev/null", "w"))
        p_gpg.stdout.close()
    except subprocess.CalledProcessError as e:
        print e
I tried something like this with no luck:
....
if p_gpg.returncode == 0:
    cmd_gof3r = "gof3r put -b {S3_BUCKET} -k {database}.sql.e".format(S3_BUCKET=S3_BUCKET, database=database)
    p_gof3r = subprocess.Popen(shlex.split(cmd_gof3r), stdin=p_gpg.stdout, stderr=open("/dev/null", "w"))
    p_gpg.stdout.close()
...
Basically, gof3r keeps streaming data to S3 even when an earlier pipeline command fails, for instance when I intentionally change mysqldump -> mysqldumpp to generate an error.
I had the exact same question, and I managed it with:
cmd = "cat file | tr -d '\\n'"
subprocess.check_call(['/bin/bash', '-o', 'pipefail', '-c', cmd])
Thinking back, and searching in my code, I used another method too:
subprocess.check_call( "ssh -c 'make toto 2>&1 | tee log.txt ; exit ${PIPESTATUS[0]}'", shell=True )
All commands in a pipeline run concurrently, e.g.:
$ nonexistent | echo it is run
echo always runs even though the nonexistent command does not exist.
pipefail affects the exit status of the pipeline as a whole -- it does not make gof3r exit any sooner.
errexit has no effect because there is a single pipeline here.
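To make the first two points concrete -- both commands in the pipeline still run either way; pipefail only changes the reported exit status:
$ bash -c 'false | cat; echo $?'
0
$ bash -o pipefail -c 'false | cat; echo $?'
1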
If you meant that you don't want to start the next pipeline when the one from the previous iteration fails, then put break after print e in the exception handler.
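A minimal sketch of that, with cmd built exactly as in the question's first example:
for database in sorted(databases):
    # build cmd as in the question's first example
    try:
        subprocess.check_call(cmd, shell=True, executable="/bin/bash")
    except subprocess.CalledProcessError as e:
        print e
        break  # a pipeline failed: don't start the pipeline for the next database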
p_gpg.returncode is None while gpg is running. If you don't want gof3r to run if gpg fails, then you have to save gpg's output somewhere else first, e.g., in a file:
filename = 'gpg.out'
for database in sorted(databases):
    pipeline_no_gof3r = ("bash -o pipefail -c 'mysqldump -B {database} | "
                         "gpg -e -r {GPGRCPT}'").format(**vars())
    with open(filename, 'wb', 0) as file:
        if subprocess.call(shlex.split(pipeline_no_gof3r), stdout=file):
            break  # don't upload to S3, don't run the next database pipeline
    # upload the file on success
    gof3r_cmd = 'gof3r put -b {S3_BUCKET} -k {database}.sql.e'.format(**vars())
    with open(filename, 'rb', 0) as file:
        if subprocess.call(shlex.split(gof3r_cmd), stdin=file):
            break  # don't run the next database pipeline
Related
I'm having trouble passing Tshark as a command to Popen. In particular, when I add the capture filter, the program gets stuck.
command = 'sudo tshark -i wlan1 -f "subtype probe-req" -n -N mnNtdv -Tfields -e wlan.ta -e wlan.ra -e wlan.seq -e wlan_radio.signal_dbm -e wlan.fc.type_subtype'
p = subprocess.Popen(command, stdout=subprocess.PIPE, shell=True)
for packet in iter(p.stdout.readline, b''):
    packet_string = packet.rstrip().decode("utf-8")  # bytes to string
    packet_info = re.split(' |\t', packet_string)  # extract info from the probe request
    print("PCKT String: ", packet_info)
When I remove the -f filter everything works fine, but when I add it the program seems stuck before the for loop.
I've solved it by adding -l to the command.
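tshark's -l flag flushes standard output after each packet is printed instead of letting it be block-buffered when writing to a pipe, so readline() sees each line promptly; applied to the command from the question:
command = 'sudo tshark -l -i wlan1 -f "subtype probe-req" -n -N mnNtdv -Tfields -e wlan.ta -e wlan.ra -e wlan.seq -e wlan_radio.signal_dbm -e wlan.fc.type_subtype'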
I'm having an issue running a simple Python script that reads a helm command from a .sh script and runs it.
When I run the command directly in the terminal, it runs fine:
helm list | grep prod- | cut -f5
# OUTPUT: prod-L2.0.3.258
But when I run python test.py (see below for whole source code of test.py), I get an error as if the command I'm running is helm list -f5 and not helm list | grep prod- | cut -f5:
user@node1:$ python test.py
# OUTPUT:
# Opening file 'helm_chart_version.sh' for reading...
# Running command 'helm list | grep prod- | cut -f5'...
# Error: unknown shorthand flag: 'f' in -f5
The test.py script:
import subprocess

# Open file for reading
file = "helm_chart_version.sh"
print("Opening file '" + file + "' for reading...")
bashCommand = ""
with open(file) as fh:
    next(fh)
    bashCommand = next(fh)
print("Running command '" + bashCommand + "'...")
process = subprocess.Popen(bashCommand.split(), stdout=subprocess.PIPE)
output, error = process.communicate()
if error is None:
    print(output)
else:
    print(error)
Contents of helm_chart_version.sh:
cat helm_chart_version.sh
# OUTPUT:
## !/bin/bash
## helm list | grep prod- | cut -f5
Try to avoid running complex shell pipelines from higher-level languages. Given the command you show, you can run helm list as a subprocess, and then do the post-processing on it in Python.
process = subprocess.run(["helm", "list"], capture_output=True, text=True, check=True)
for line in process.stdout.splitlines():
    if 'prod-' not in line:
        continue
    words = line.split()
    print(words[4])
The actual Python script you show doesn't seem to be semantically different from just directly running the shell script. You can use the sh -x option or the shell set -x command to cause it to print out each line as it executes.
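For example, the trace output will look roughly like this (each command of the pipeline is echoed as it starts):
$ sh -x helm_chart_version.sh
+ helm list
+ grep prod-
+ cut -f5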
I can easily source a bash script (without a shebang) as a bash command in a terminal, but trying to do the same via a Python command fails:
sourcevars = "cd /etc/openvpn/easy-rsa && . ./vars"
runSourcevars = subprocess.Popen(sourcevars, shell = True)
or
sourcevars = [". /etc/openvpn/easy-rsa/vars"]
runSourcevars = subprocess.Popen(sourcevars, shell = True)
I receive:
Please source the vars script first (i.e. "source ./vars")
Make sure you have edited it to reflect your configuration.
What's the matter, and how do I do it correctly? I've read some topics here (e.g. here) but could not solve my problem using the given advice. Please explain with examples.
UPDATED:
# os.chdir = ('/etc/openvpn/easy-rsa')
initvars = "cd /etc/openvpn/easy-rsa && . ./vars && ./easy-rsa ..."
# initvars = "cd /etc/openvpn/easy-rsa && . ./vars"
# initvars = [". /etc/openvpn/easy-rsa/vars"]
cleanall = ["/etc/openvpn/easy-rsa/clean-all"]
# buildca = ["printf '\n\n\n\n\n\n\n\n\n' | /etc/openvpn/easy-rsa/build-ca"]
# buildkey = ["printf '\n\n\n\n\n\n\n\n\n\nyes\n ' | /etc/openvpn/easy-rsa/build-key AAAAAA"]
# buildca = "cd /etc/openvpn/easy-rsa && printf '\n\n\n\n\n\n\n\n\n' | ./build-ca"
runInitvars = subprocess.Popen(initvars, shell = True)
# runInitvars = subprocess.Popen(initvars,stdout=subprocess.PIPE, shell = True, executable="/bin/bash")
runCleanall = subprocess.Popen(cleanall , shell=True)
# runBuildca = subprocess.Popen(buildca , shell=True)
# runBuildca.communicate()
# runBuildKey = subprocess.Popen(buildkey, shell=True )
UPDATE 2
buildca = ["printf '\n\n\n\n\n\n\n\n\n' | /etc/openvpn/easy-rsa/build-ca"]
runcommands = subprocess.Popen(initvars+cleanall+buildca, shell = True)
There's absolutely nothing wrong with this in and of itself:
# What you're already doing -- this is actually fine!
sourcevars = "cd /etc/openvpn/easy-rsa && . ./vars"
runSourcevars = subprocess.Popen(sourcevars, shell=True)
# ...*however*, it won't have any effect at all on this:
runOther = subprocess.Popen('./easy-rsa build-key yadda yadda', shell=True)
However, if you subsequently try to run a second subprocess.Popen(..., shell=True) command, you'll see that it doesn't have any of the variables set by sourcing that configuration.
This is entirely normal and expected behavior: The entire point of using source is to modify the state of the active shell; each time you create a new Popen object with shell=True, it's starting a new shell -- their state isn't carried over.
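A two-line demonstration -- the variable set by the first shell has vanished in the second:
import subprocess
subprocess.Popen('FOO=bar', shell=True).wait()
subprocess.Popen('echo "FOO=$FOO"', shell=True).wait()  # prints "FOO=" -- the assignment didn't carry over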
Thus, combine into a single call:
prefix = "cd /etc/openvpn/easy-rsa && . ./vars && "
cmd = "/etc/openvpn/easy-rsa/clean-all"
runCmd = subprocess.Popen(prefix + cmd, shell=True)
...such that you're using the results of sourcing the script in the same shell invocation as that in which you actually source the script.
Alternately (and this is what I'd do), require your Python script to be invoked by a shell which already has the necessary variables in its environment. Thus:
# ask your users to do this
set -a; . ./vars; ./yourPythonScript
...and you can error out very easily if people don't do so:
import os, sys
if 'EASY_RSA' not in os.environ:
    print >>sys.stderr, "ERROR: Source vars before running this script"
    sys.exit(1)
I am trying to execute a tshark command to get some output for validation, using subprocess.Popen to get this done, but I am seeing that sometimes subprocess.Popen fails to execute the command. Below is a small function from my code:
import subprocess
import logging
import sys

def fetch_avps(request_name, logger, tcpdump, port, session_id):
    out_list = []
    if request_name == 'CCR':
        com_sessn_filter = """tshark -r "%s" -odiameter.tcp.ports:"%s" -R 'diameter.cmd.code == 272 and diameter.flags.request==1 and !tcp.analysis.retransmission and diameter.flags.T == 0' -Tpdml -Tfields -ediameter.Session-Id -ediameter.CC-Request-Type -ediameter.User-Name -ediameter.Subscription-Id-Data -ediameter.Value-Digits | grep "%s" | cut -f 1-6 --output-delimiter=':'""" % (tcpdump, port, session_id)
    elif request_name == 'CCA':
        com_sessn_filter = """tshark -r "%s" -odiameter.tcp.ports:"%s" -R 'diameter.cmd.code == 272 and diameter.flags.request==0 and !tcp.analysis.retransmission and diameter.flags.T == 0' -Tpdml -Tfields -ediameter.Session-Id -ediameter.CC-Request-Type -ediameter.Result-Code -ediameter.Validity-Time -ediameter.Value-Digits -ediameter.Unit-Quota-Threshold | grep "%s" | cut -f 1-6 --output-delimiter=':'""" % (tcpdump, port, session_id)
    p = subprocess.Popen(com_sessn_filter, stdout=subprocess.PIPE, stderr=subprocess.PIPE, shell=True)
    out = p.stdout.read()
    command_out_list = out.strip().split("\n")
    sys.stdout.flush()
    for i in range(len(command_out_list)):
        out_list.append(command_out_list[i].split(":"))
    if out_list[0][0] == '':
        logger.error("Failed to execute Tshark command")
        logger.debug("Failed to execute Tshark command \"%s\" for Session-Id \"%s\"" % (com_sessn_filter, session_id))
        return 0
For example, in the above code, if I have 20 sessions in a loop then subprocess.Popen might fail to execute around 12-13 times. Any help will be very useful.
Below is the stderr I am getting whenever it fails to execute:
(process:11306): GLib-ERROR **: /build/buildd/glib2.0-2.32.4/./glib/gmem.c:165: failed to allocate 4048572208 bytes Trace/breakpoint trap (core dumped)
When I run my Python application (that synchronizes a remote directory locally) I have a problem if the directory that contains my app has one or more spaces in its name.
The directory name appears in ssh options like "-o UserKnownHostsFile=<path>" and "-i <path>".
I tried double-quoting the paths in my function that generates the command string, to no avail. I also tried escaping the spaces like this: path.replace(' ', '\\ '), but it doesn't work.
Note that my code works with dirnames without spaces.
The error returned by ssh is "garbage at the end of line" (code 12)
The command line generated seems OK:
rsync -rztv --delete --stats --progress --timeout=900 --size-only --dry-run \
-e 'ssh -o BatchMode=yes \
-o UserKnownHostsFile="/cygdrive/C/Users/my.user/my\ app/.ssh/known_hosts" \
-i "/cygdrive/C/Users/my.user/my\ app/.ssh/id_rsa"'
user@host:/home/user/folder/ "/cygdrive/C/Users/my.user/my\ app/folder/"
What am I doing wrong? Thank you!
Have you tried building your command as a list of arguments? I just had a similar problem passing a key file for the ssh connection:
command = [
    "rsync",
    "-rztv",
    "--delete",
    "--stats",
    "--progress",
    "--timeout=900",
    "--size-only",
    "--dry-run",
    "-e",
    "ssh -o BatchMode=yes -o UserKnownHostsFile='/cygdrive/C/Users/my.user/my app/.ssh/known_hosts' -i '/cygdrive/C/Users/my.user/my app/.ssh/id_rsa'",
    "user@host:/home/user/folder/",
    "/cygdrive/C/Users/my.user/my app/folder/",
]
# no shell=True here: with a list of arguments, rsync is executed directly and
# each element arrives as a single argument, so spaces in paths need no escaping
sp = subprocess.Popen(command, stdout=subprocess.PIPE, stderr=subprocess.STDOUT)
output = sp.communicate()[0]
The shell is splitting the returned data on spaces. Try updating the Internal Field Separator (IFS) list:
# store a copy of the current IFS
SAVEIFS=$IFS;
# set IFS to be newline-only
IFS=$(echo -en "\n\b");
# execute your command(s)
rsync -rztv --delete --stats --progress --timeout=900 --size-only --dry-run -e 'ssh -o BatchMode=yes -o UserKnownHostsFile="/cygdrive/C/Users/my.user/my\ app/.ssh/known_hosts" -i "/cygdrive/C/Users/my.user/my\ app/.ssh/id_rsa"' user#host:/home/user/folder/ "/cygdrive/C/Users/my.user/my\ app/folder/"
# put the original IFS back
IFS=$SAVEIFS;
I haven't tested using your command, though it has worked in all cases I've tried in the past.
To avoid escaping issues, use a raw string:
raw_string = r'''rsync -rztv --delete --stats --progress --timeout=900 --size-only --dry-run -e 'ssh -o BatchMode=yes -o UserKnownHostsFile="/cygdrive/C/Users/my.user/my app/.ssh/known_hosts" -i "/cygdrive/C/Users/my.user/my app/.ssh/id_rsa"' user@host:/home/user/folder/ "/cygdrive/C/Users/my.user/my app/folder/"'''
sp = subprocess.Popen(raw_string, shell=True, stdout=subprocess.PIPE, stderr=subprocess.STDOUT)
output = sp.communicate()[0]