How to execute python subprocess.Popen with many arguments?

I need to execute the same command on a local and a remote server. I'm using subprocess.Popen for both; the local command works as expected, but the remote execution fails with errors like "command not found". I'd appreciate your support, as I am new to this.
Local execution function
def topic_Offset_lz(self):
    CMD = "/dsapps/admin/edp/scripts/edp-admin.sh kafka-topic offset %s -e %s | grep -v Getting |grep -v Verifying | egrep -v '^[[:space:]]*$|^#' | awk -F\: '{print $3}'|sed '%sq;d'" % (self.topic, self.envr, self.partition)
    t_out_lz, t_error_lz = subprocess.Popen(CMD, stdout=subprocess.PIPE, stderr=subprocess.PIPE, shell=True).communicate()
    return t_out_lz
Remote server execution
def topic_offset_sl(self):
    CMD = "/dsapps/admin/edp/scripts/edp-admin.sh kafka-topic offset %s -e %s | grep -v Getting |grep -v Verifying | egrep -v '^[[:space:]]*$|^#' | awk -F\: '{print $3}'|sed '%sq;d'" % (self.topic, self.envr, self.partition)
    t_out_sl, t_error_sl = subprocess.Popen(["ssh", "-q", CMD], stdout=subprocess.PIPE, stderr=subprocess.PIPE, shell=True).communicate()
    return t_error_sl
Error I'm getting for the remote execution
Landing Zone Offset: 0
SoftLayer Zone Offset: /bin/sh: ^# |sed 1: command not found
/bin/sh: d: command not found

I came up with the solution below; there may well be an easier way than this.
def topic_offset_sl(self):
    CMD_SL1 = "ssh -q %s '/dsapps/admin/edp/scripts/edp-admin.sh kafka-topic offset %s -e %s'" % (KEY_SERVER, self.topic, self.envr)
    CMD_SL2 = " | grep -v Getting | grep -v Verifying | egrep -v '^[[:space:]]*$|^#' | awk -F\: '{print $3}' | sed '%sq;d'" % (self.partition)
    t_out_sl, t_error_sl = subprocess.Popen(CMD_SL1 + CMD_SL2, stdout=subprocess.PIPE, stderr=subprocess.PIPE, shell=True).communicate()
    return t_out_sl

The ssh command passes the command to the remote shell as a single command-line string, not as an argument array. To build that string, it simply concatenates its arguments without performing any shell quoting:
$ ssh target "python -c 'import sys;print(sys.argv)'" 1 2 3
['-c', '1', '2', '3']
$ ssh target "python -c 'import sys;print(sys.argv)'" "1 2 3"
['-c', '1', '2', '3']
If ssh performed proper shell quoting, the distinction between 1 2 3 and "1 2 3" would have been preserved, and the first argument would not have needed double-quoting.
Anyway, in your case, the following might work:
def topic_offset_sl(self):
    # pipes.quote is Python 2; on Python 3 use shlex.quote instead
    CMD = "ssh -q %s " % KEY_SERVER \
          + pipes.quote("/dsapps/admin/edp/scripts/edp-admin.sh"
                        + " kafka-topic offset %s -e %s" % (self.topic, self.envr)) \
          + " | grep -v Getting | grep -v Verifying | egrep -v '^[[:space:]]*$|^#'" \
          + " | awk -F: '{print $3}' | sed '%sq;d'" % self.partition
    t_out_sl, t_error_sl = subprocess.Popen(CMD, stdout=subprocess.PIPE, stderr=subprocess.PIPE, shell=True).communicate()
    return t_out_sl
This assumes you only want to run the /dsapps/admin/edp/scripts/edp-admin.sh script remotely and not the rest.
Note that the way you use string splicing to construct command lines likely introduces shell command injection vulnerabilities (both locally and on the remote server).
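One way to avoid that injection risk is a sketch like the following (assuming Python 3, where pipes.quote lives in shlex as shlex.quote): quote every interpolated value, and quote the remote command once more so ssh sees it as a single argument. The helper name build_remote_cmd is illustrative, not part of the original code.

```python
import shlex

def build_remote_cmd(server, topic, envr, partition):
    # Quote each value so shell metacharacters in topic/envr cannot
    # inject extra commands on the remote host.
    remote = "/dsapps/admin/edp/scripts/edp-admin.sh kafka-topic offset %s -e %s" % (
        shlex.quote(topic), shlex.quote(envr))
    # The post-processing pipeline runs locally, after ssh returns.
    local_filter = ("| grep -v Getting | grep -v Verifying"
                    " | egrep -v '^[[:space:]]*$|^#'"
                    " | awk -F: '{print $3}' | sed '%sq;d'"
                    % shlex.quote(str(partition)))
    # Quote the remote command once more so ssh receives it as one argument.
    return "ssh -q %s %s %s" % (shlex.quote(server), shlex.quote(remote), local_filter)
```

A value like "t; rm -rf /" then arrives on the remote side as a single quoted argument rather than a second command.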

Related

RAID alert Python script needs Nagios logic added

Can someone please help me add Nagios logic to my Python script below?
I tried adding sys.exit(0) and sys.exit(1) for all the OK and CRITICAL cases. Please let me know what I should do so that, when the script runs, Nagios catches the 0/1/2 exit codes and displays the message.
#!/usr/bin/python
import subprocess
import os, sys

# Check python present or not
# dnf install python3.6-stack
# export PATH=/opt/python-3.6/bin:$PATH

def check_MegaRaid():
    failed = subprocess.run(["sudo /opt/MegaRAID/MegaCli/MegaCli64 -AdpAllInfo -aALL | grep -i 'Failed Disks' | awk -F':' '{print $2}'"], shell=True, stdout=subprocess.PIPE, universal_newlines=True)
    failed_status = failed.stdout
    print("failed_status is", failed_status)
    critical = subprocess.run(["sudo /opt/MegaRAID/MegaCli/MegaCli64 -AdpAllInfo -aALL | grep -i 'Critical Disks' | awk -F':' '{print $2}'"], shell=True, stdout=subprocess.PIPE, universal_newlines=True)
    critical_status = critical.stdout
    print("critical_status is", critical_status)
    if failed_status.strip() == "0" and critical_status.strip() == "0":
        print("Raid check all OK")
        sys.exit(0)
    else:
        print("CRITICAL")
        sys.exit(1)

def check_raid():
    process = subprocess.run(["sudo /sbin/mdadm --detail /dev/md127 | grep -i state | grep -w clean, | awk -F',' '{print $2}' | sed -e 's/^[ \t]*//'"], shell=True, stdout=subprocess.PIPE, universal_newlines=True)
    output = process.stdout
    check_process = subprocess.run(["sudo /sbin/mdadm --detail /dev/md127 | grep -i state | awk -F':' '{print $2}' | sed -e 's/^[ \t]*//'"], shell=True, stdout=subprocess.PIPE, universal_newlines=True)
    check = check_process.stdout
    if output.strip() == 'degraded':
        print("Raid disk state is CRITICAL ", output)
        sys.exit(1)
    elif check.strip() == 'clean':
        print("Raid check all OK")
        sys.exit(0)
    else:
        print("sudo /sbin/mdadm --detail /dev/md127 cmd not found: this is a dataraid machine")
        check_MegaRaid()

# Check whether the system is configured with raid
process = subprocess.run(["sudo cat /GEO_VERSION | grep -i raid | awk -F'Layout:' '{print $2}' | sed 's/[0-9]*//g' | sed -e 's/^[ \t]*//'"], shell=True, stdout=subprocess.PIPE, universal_newlines=True)
raid_value = process.stdout
if raid_value.strip() == 'raid':
    print("System configure Raid functions")
    check_raid()
else:
    print("There is no raid configured in this system")
    exit()
Referencing https://nagios-plugins.org/doc/guidelines.html in case you're interested.
0 is OK
1 is Warning
2 is Critical
3 is Unknown
So the first thing you need to do is replace your sys.exit(1) with sys.exit(2).
I would also replace that final exit() with sys.exit(3) to signal an Unknown exit, which will help you identify misconfigured services in the UI.
You'll also want to indicate the status first; a typical one-line plugin output looks like:
STATUS: message | perfdata
It doesn't look like you're using performance data, though, so just prepend your critical messages with CRITICAL: and your OK messages with OK:.
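Putting those guidelines together, here is a minimal sketch of the exit logic (the helper name nagios_exit is illustrative, not part of the original script):

```python
import sys

# Nagios plugin exit codes, per the plugin development guidelines.
OK, WARNING, CRITICAL, UNKNOWN = 0, 1, 2, 3

def nagios_exit(status, message):
    # Nagios reads the first line of output and the process exit code.
    prefix = {OK: "OK", WARNING: "WARNING",
              CRITICAL: "CRITICAL", UNKNOWN: "UNKNOWN"}[status]
    print("%s: %s" % (prefix, message))
    sys.exit(status)

# Example: nagios_exit(CRITICAL, "1 failed disk")
# prints "CRITICAL: 1 failed disk" and exits with status 2.
```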

How to convert output from Python after sudo command from <type> int to <type> str?

I have the following code to receive list of process with sudo:
sudoPass = 'mypass'
command = "launchctl list | grep -v com.apple"
x = os.system('echo %s|sudo -S %s' % (sudoPass, command))
But I receive the answer as an int. I need a str. Is it possible to convert it to str without losing data?
os.system returns (in most cases; see https://docs.python.org/3/library/os.html#os.system) the exit status of the process, so most of the time 0 just means everything went fine.
What you're looking for is the subprocess module (https://docs.python.org/3/library/subprocess.html), which allows you to capture the output, like so:
import subprocess
sudoPass = 'mypass\n' #Note the new line
command = "launchctl list | grep -v com.apple"
x = subprocess.Popen('echo %s|sudo -S %s' % (sudoPass, command), stdout=subprocess.PIPE, stderr=subprocess.PIPE, shell=True)
stdout, stderr = x.communicate()
print(stdout)
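On Python 3, the stdout captured above is bytes; decode it (or pass text=True) to get a str. A small sketch using subprocess.run (Python 3.7+ for capture_output), with a harmless echo standing in for the sudo/launchctl pipeline:

```python
import subprocess

# "echo hello" stands in for the launchctl pipeline from the question.
result = subprocess.run("echo hello", shell=True,
                        capture_output=True, text=True)
# text=True makes stdout a str instead of bytes.
print(type(result.stdout).__name__)  # str
print(result.stdout.strip())  # hello
```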

Python - Sending Variable to subprocess

I have a list of IP addresses that I need to run a curl command on Remotely.
I am using a for loop to iterate through the ips.
The command that I need to run remotely is
curl --silent http://<IP>:9200/_cat/master | awk '{print $2}'
The above output will return an IP address of a master node in my cluster.
My code states
status = subprocess.Popen(["ssh", "%s" % ip, "curl http://ip:9200/_cat/master | awk '{print $2}'"], shell=False, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
I am having trouble passing the ip variable as part of my command.
I have also tried doing this.
status = subprocess.Popen(["ssh", "%s" % ip, "curl http://",ip,":9200/_cat/master | awk '{print $2}'"], shell=False, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
But it does not seem to work. How can I get this to work?
The third parameter will likely work better as one string. Try:
status = subprocess.Popen(
    ["ssh", "%s" % ip,
     "curl http://%s:9200/_cat/master | awk '{print $2}'" % ip],
    shell=False, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
I think I recall trying things your way, passing in the list, but I had a lot of issues.
Instead, I've settled on passing in a single execution string just about everywhere in my code.
popen = subprocess.Popen(
"ping -n 1 %s" % "192.168.1.10",
stdout=subprocess.PIPE,
stderr=subprocess.PIPE,
)
The one case where I used the list is an extremely simple exe call
popen = Popen( [sys.executable, script_name], stdout=output_file, stderr=output_file )
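For completeness, a sketch of the first suggestion with the remote command factored out into a helper (the helper names and IPs are illustrative):

```python
import subprocess

def build_cmd(ip):
    # The whole pipeline is one string: the pipe to awk runs on the remote host.
    return ["ssh", ip,
            "curl --silent http://%s:9200/_cat/master | awk '{print $2}'" % ip]

def master_for(ip):
    proc = subprocess.Popen(build_cmd(ip),
                            stdout=subprocess.PIPE, stderr=subprocess.PIPE)
    out, _ = proc.communicate()
    return out.decode().strip()

# for ip in ["10.0.0.1", "10.0.0.2"]:
#     print(ip, master_for(ip))
```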

Handle result of os.system

I'm using Python to write a functional script, and I can't handle the result of this command line:
os.system("ps aux -u %s | grep %s | grep -v 'grep' | awk '{print $2}'" % (username, process_name))
It shows me the pids, but I can't use them as a list.
If I test:
pids = os.system("ps aux -u %s | grep %s | grep -v 'grep' | awk '{print $2}'" % (username, process_name))
print type(pids)
#Results
29719
30205
31037
31612
<type 'int'>
Why is pids an int? How can I handle this result as a list?
Stranger part:
print type(os.system("ps aux -u %s | grep %s | grep -v 'grep' | awk '{print $2}'" % (username, process_name)))
There is nothing: no type is written to my console.
os.system does not capture the output of the command it runs. To do so you need to use subprocess.
from subprocess import check_output
out = check_output("your command goes here", shell=True)
The above works in Python 2.7 and later. For older Pythons, use:
import subprocess
p = subprocess.Popen("your command goes here", stdout=subprocess.PIPE, shell=True)
out, err = p.communicate()
os module documentation
os.system(command)
Execute the command (a string) in a subshell. This is implemented by calling the Standard C function system(), and has the same limitations. Changes to sys.stdin, etc. are not reflected in the environment of the executed command.
On Unix, the return value is the exit status of the process encoded in the format specified for wait(). Note that POSIX does not specify the meaning of the return value of the C system() function, so the return value of the Python function is system-dependent.
If you want access to the output of the command, use the subprocess module instead, e.g. check_output:
subprocess.check_output(args, *, stdin=None, stderr=None, shell=False, universal_newlines=False)
Run command with arguments and return its output as a byte string.
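Applied to the question, the captured output can then be split into a list of pid strings. A sketch with printf standing in for the ps/grep/awk pipeline, so it runs anywhere:

```python
import subprocess

# printf stands in for: ps aux -u user | grep name | grep -v grep | awk '{print $2}'
out = subprocess.check_output("printf '29719\\n30205\\n31037\\n'", shell=True)
pids = out.decode().split()
print(pids)  # ['29719', '30205', '31037']
```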

Invoking shell commands using os.system

I am trying to use os.system to invoke an external (piped) shell command:
srcFile = os.path.abspath(sys.argv[1])
srcFileIdCmd = "echo -n '%s' | cksum | cut -d' ' -f1" % srcFile
print "ID command: %s" % srcFileIdCmd
srcFileID = os.system(srcFileIdCmd)
print "File ID: %s" % srcFileID
outputs
ID command: echo -n '/my/path/filename' | cksum | cut -d' ' -f1
File ID: 0
But when I run
echo -n '/my/path/filename' | cksum | cut -d' ' -f1
manually on a command line, I get 2379496500, not 0.
What do I need to change to get the correct value out of the shell command?
Use
sp = subprocess.Popen(["program", "arg"], stdout=subprocess.PIPE)
instead, and then read from the file sp.stdout. The program in question can be a shell, and you can pass complex shell commands to it as parameters (["/usr/bin/bash", "-c", "my-complex-command"]).
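Concretely, the cksum pipeline from the question can be run that way and its stdout captured; this sketch uses check_output for brevity, and printf '%s' instead of echo -n (echo -n is not portable under /bin/sh):

```python
import subprocess

src_file = "/my/path/filename"
# printf '%s' writes the path without a trailing newline, like echo -n.
cmd = "printf '%%s' '%s' | cksum | cut -d' ' -f1" % src_file
# check_output returns the pipeline's stdout, not its exit status.
file_id = subprocess.check_output(cmd, shell=True).decode().strip()
print(file_id)  # the checksum digits
```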
