Pattern matching and extracting data from a remote machine's output - Python

Below, I am connected to a remote machine and reading (cat) a file. The output looks like this:
AIMS_PASS=wreretet
ASAPMSTR_PASS=dfdgdg
CREP_PASS=gfhfh
DSS_PASS=dgfhhfh
ELS_PASS=Rdgdh
EXTAPI_PASS=qadgdbbc
I need the words before _PASS, like AIMS, ASAPMSTR, CREP, and so on. But this is output from the remote server. I know cut -d _ -f 1 would work if the data were local. How do I apply this command to the output from the remote server, specifically inside the if block?
pswd = re.compile(r'\w_PASS\W')
if conn is None:
    print machine + " " + "Successfully Authenticated\n"
    stdin, stdout, stderr = ssh.exec_command("""python -c 'import os; \
        print os.path.isfile("/a/etc/portal/db/secrets/db.shared") \
        '""")
    ret_val = stdout.read()
    if ret_val:
        print "db.shared file is there!"
        stdin, stdout, stderr = ssh.exec_command("cat /a/etc/portal/db/secrets/db.shared")
        data = stdout.read()
        pswd_line = pswd.findall(data)
        if pswd_line:
            print data
            # <SOMETHING WHICH JUST GIVES ME THE WORD BEFORE '_PASS'>
            #stdin, stdout, stderr = ssh.exec_command("cut -d _ -f 1")
            #print stdout.read()
            ssh.close()
            break
else:
    stdin, stdout, stderr = ssh.exec_command("exit")

If I understand correctly what your data variable holds:
x = "AIMS_PASS=wreretet\nASAPMSTR_PASS=dfdgdg"
[line.split('_PASS')[0] for line in x.split('\n')]
>>> ['AIMS', 'ASAPMSTR']
I use Python's split method to first split the data on newlines, then split each line on _PASS and take the first element.
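Applied to the snippet from the question, a minimal sketch might look like this (assuming ssh is the connected Paramiko client from the question; the capturing-group regex is an alternative to the original pattern, and the decode() call is only needed on Python 3, where read() returns bytes):
import re

# Capture the word before _PASS at the start of each line,
# e.g. "AIMS" from "AIMS_PASS=wreretet".
pswd = re.compile(r'^(\w+)_PASS=', re.MULTILINE)

stdin, stdout, stderr = ssh.exec_command("cat /a/etc/portal/db/secrets/db.shared")
data = stdout.read().decode()

names = pswd.findall(data)  # e.g. ['AIMS', 'ASAPMSTR', 'CREP', ...]
for name in names:
    print(name)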

Related

In Paramiko, execute commands from a list or dict and save the result to a list or dict

In Paramiko, how to pass a list or dict to exec_command and save results to a list or dict?
I need a sleep between exec_command calls. The commands are not executed in simple list order; they run in the order 1, 2, 1.
stdin, stdout, stderr = ssh.exec_command(d.values()[0])
result1 = stdout.read()
stdin, stdout, stderr = ssh.exec_command(d.values()[1])
result2 = stdout.read()
stdin, stdout, stderr = ssh.exec_command(d.values()[0])
result3 = stdout.read()
When the two problems mentioned above do not apply, I have tried map(), and it works fine:
cmd = ['xxx', 'xxx']

def func(cmd):
    stdin, stdout, stderr = ssh.exec_command(cmd)
    result = stdout.read()
    return result

list(map(func, cmd))
My actual task is to SSH to a remote Linux machine and replace a string in a file.
ssh = paramiko.SSHClient()
ssh.set_missing_host_key_policy(paramiko.AutoAddPolicy())
ssh.connect(ip, port, username, password)
command = {
    "search": "grep $img_name ='string' file",
    "modify": "sed -i 's/$img_name = $now/$img_name = $word/g' file",
}
stdin, stdout, stderr = ssh.exec_command(command.values()[0])
before = stdout.read()
sleep(1)  # If I don't add the wait, I will grep the string twice before the modification.
ssh.exec_command(command.values()[1])
sleep(1)
stdin, stdout, stderr = ssh.exec_command(command.values()[0])
after = stdout.read()  # Confirm that my modification was successful
ssh.close()
I don't want to keep repeating stdin, stdout, stderr = ssh.exec_command().
I believe you are looking for this: Iterating over dictionaries using 'for' loops.
So in Python 3:
results = {}
for key, value in command.items():
    stdin, stdout, stderr = ssh.exec_command(value)
    results[key] = stdout.read()
Regarding the sleep: stdout.read() does not only read the command's output. As a side effect of reading the output, it also waits for the command to finish. As you do not call stdout.read() for the sed, you do not wait for it to finish. So the for loop above should actually solve that problem too, as it waits for all commands, including the sed, to finish.
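Putting it together, a minimal sketch (assuming Python 3.7+, where dicts preserve insertion order; the host, credentials, commands, and file path are placeholders, not values from the question):
import paramiko

ssh = paramiko.SSHClient()
ssh.set_missing_host_key_policy(paramiko.AutoAddPolicy())
ssh.connect("host.example.com", username="user", password="secret")

commands = {
    "search": "grep 'pattern' /tmp/file",
    "modify": "sed -i 's/old/new/g' /tmp/file",
    "verify": "grep 'pattern' /tmp/file",
}

results = {}
for key, value in commands.items():
    stdin, stdout, stderr = ssh.exec_command(value)
    # read() blocks until the command exits, so no sleep() is needed
    results[key] = stdout.read().decode()

ssh.close()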

Real-time output of a wget command using exec_command in Paramiko

I am trying to download a file on all the machines, so I have created a Python script. It uses the paramiko module. Here is just a snippet from the code:
from paramiko import SSHClient, AutoAddPolicy
ssh = SSHClient()
ssh.load_system_host_keys()
ssh.set_missing_host_key_policy(AutoAddPolicy())
ssh.connect(args.ip, username=args.username, password=args.password)
stdin, stdout, stderr = ssh.exec_command("wget xyz")
print(stdout.read())
The output is printed only after the download has completed. Is there a way I can print the output in real time?
Edit: I have looked at this answer and applied something like this:
def line_buffered(f):
    line_buf = ""
    print "start"
    print f.channel.exit_status_ready()
    while not f.channel.exit_status_ready():
        print("ok")
        line_buf += f.read(size=1)
        if line_buf.endswith('\n'):
            yield line_buf
            line_buf = ''

stdin, out, err = ssh.exec_command("wget xyz")  # renamed from "in", which is a reserved keyword
for l in line_buffered(out):
    print l
But it is not printing the data in real time. It waits for the file to download and then prints the whole download status.
Also, I have tried the command echo one && sleep 5 && echo two && sleep 5 && echo three, and its output is real-time with the line_buffered function. But for the wget command, it is not working.
wget writes its progress information to stderr, so you should use line_buffered(err).
Alternatively, you can use exec_command("wget xyz", get_pty=True), which combines stdout and stderr.
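A minimal sketch of the stderr variant (Python 3; the URL is a placeholder, and the decode arguments are an assumption to keep partial reads printable):
stdin, stdout, stderr = ssh.exec_command("wget http://example.com/file.iso")

# wget reports progress on stderr, so stream that channel byte by byte
line_buf = ""
while True:
    ch = stderr.read(1).decode(errors="replace")
    if not ch:  # EOF: the command has finished
        break
    line_buf += ch
    # wget redraws its progress line with \r, so flush on either terminator
    if ch in ("\n", "\r"):
        print(line_buf, end="", flush=True)
        line_buf = ""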

Retrieve the latest directory

import paramiko

ssh = paramiko.SSHClient()
ssh.set_missing_host_key_policy(paramiko.AutoAddPolicy())
ssh.connect(hostname='test.com', username='test', password='test123')
srcpath = '/tmp/test/'
destpath = '/tmp/file/'
transfer = ssh.open_sftp()
stdin, stdout, stderr = ssh.exec_command('cd /tmp/test/; ls -1t *txt* | head -1')
out = stdout.read().splitlines()
print out
error = stderr.read().splitlines()
print error
transfer.close()
ssh.close()
Above is my code; I tried to retrieve the latest directory on the remote server. I am facing the error below.
Error:
['bash: head: command not found']
Is there any other way to retrieve the latest directory?
You actually don't need head or tail; just access the right line from Python, as follows. Note that ls -1t lists the newest entry first, so the equivalent of head -1 is the first element of the list, not the last.
This is your code, lightly edited to grab the newest entry as latest:
import paramiko

ssh = paramiko.SSHClient()
ssh.set_missing_host_key_policy(paramiko.AutoAddPolicy())
ssh.connect(hostname='test.com', username='test', password='test123')
srcpath = '/tmp/test/'
destpath = '/tmp/file/'
transfer = ssh.open_sftp()
stdin, stdout, stderr = ssh.exec_command('cd /tmp/test/; ls -1t *txt*')
out = stdout.read().splitlines()
latest = out[0]  # ls -1t sorts newest first, so [0] replaces head -1
print out
print latest
error = stderr.read().splitlines()
print error
transfer.close()
ssh.close()
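Since the code already opens an SFTP session, another option is to skip the shell entirely and sort the directory listing by modification time. A sketch, assuming the same /tmp/test/ path:
# Use the existing SFTP session instead of parsing ls output
entries = transfer.listdir_attr('/tmp/test/')
latest = max(entries, key=lambda e: e.st_mtime)  # newest entry by mtime
print(latest.filename)
This also sidesteps the missing head command, since nothing is executed through the remote shell.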

subprocess cmd is returning null (Python)

I am trying to write a script that checks a directory for files and then uses the names of the files found in a subprocess command, as shown below:
for filename in os.listdir('/home/dross/python/scripts/var/running/'):
    print(str(filename))
    cmd = 'app_query --username=dross --password=/home/dross/dross.txt "select row where label = \'Id: ' + filename + '\' SHOW status"'
    print(cmd)
    query = subprocess.Popen(cmd, shell=True, stdout=subprocess.PIPE)
    query.wait()
If I run the command manually from the command line, there are two possible return values: "Error: No result" or "True".
When the result is "Error: No result", the script prints the same; however, when the result is "True", nothing is printed.
If the output of the print statement is copied and pasted into the OS command line, it runs and returns "True".
What could be the discrepancy I am seeing here?
Is there a better approach to achieve what I am trying to do?
You seem to be missing a call to .communicate(), to read the results of the command through the pipe.
In your original query = subprocess.Popen(cmd, shell=True, stdout=subprocess.PIPE), anything sent to stderr is displayed on the screen, which seems to be what's happening for your error message. Anything sent to stdout goes to the pipe, ready for reading with communicate().
Some experimenting, showing that you won't see what's written to the subprocess.PIPE channels unless you communicate() with the command you've run, and that stderr displays to the terminal if it's not redirected:
>>> import subprocess
>>> query = subprocess.Popen('echo STDERR 1>&2', shell=True, stdout=subprocess.PIPE)
STDERR
>>> query.wait()
0
>>> print(query.communicate())
('', None)
>>> query = subprocess.Popen('echo STDERR 1>&2', shell=True, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
>>> query.wait()
0
>>> print(query.communicate())
('', 'STDERR\n')
>>> query = subprocess.Popen('echo STDOUT', shell=True, stdout=subprocess.PIPE)
>>> query.wait()
0
>>> print(query.communicate())
('STDOUT\n', None)
So, to use your code from the question, you want something like this:
for filename in os.listdir('/home/dross/python/scripts/var/running/'):
    print(filename)  # print can convert to a string, no need for str()
    cmd = 'app_query --username=dross --password=/home/dross/dross.txt "select row where label = \'Id: ' + filename + '\' SHOW status"'
    print(cmd)
    query = subprocess.Popen(cmd, shell=True, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
    output, error = query.communicate()  # communicate() also waits for the process, so no separate wait() is needed
    print("stdout: {}".format(output))
    print("stderr: {}".format(error))

p.stdout.read() empty when using python subprocess

I am trying to get the output of a curl command by using the Python subprocess module, but the output is empty. Below is my source code:
def validateURL(url):
    p = subprocess.Popen("curl",
                         stdin=subprocess.PIPE,
                         stdout=subprocess.PIPE,
                         stderr=subprocess.PIPE,
                         shell=False)
    p.stdin.write("http://validator.w3.org/check?uri=" + url + "\n")
    p.stdin.close()
    stdout_data = p.stdout.read()
    print stdout_data
    result = re.findall("Error", stdout_data)
    print result  # empty here
    if len(result) != 0:
        return 'ERR'
    else:
        return 'OK'
Why?
PS: I run this piece of code on macOS with Python 2.7.
Drop the stderr=subprocess.PIPE, and see the error message printed by curl. Act accordingly to fix it.
One possible reason is that the URL should be specified as a command-line argument, and not on stdin:
p = subprocess.Popen(("curl", "http://..."), stdout=subprocess.PIPE)
You were passing data to Popen after it executed the command.
Try this:
def validateURL(url):
    p = subprocess.Popen(["curl", "http://validator.w3.org/check?uri=" + url],
                         stdin=subprocess.PIPE,
                         stdout=subprocess.PIPE,
                         stderr=subprocess.PIPE,
                         shell=False)
    stdout_data = p.stdout.read()
    print stdout_data
    result = re.findall("Error", stdout_data)
    print result  # no longer empty
    if len(result) != 0:
        return 'ERR'
    else:
        return 'OK'
You're not specifying the URL on the command line, so curl is printing an error message and exiting. Thus, there is no output on stdout. You're trying to send the URL on standard input, but curl does not work that way.
Instead, try:
p = subprocess.Popen(["curl", "http://validator.w3.org/check?uri=" + url],
stdout=subprocess.PIPE, shell=False)
Or, you know, just use urllib2 (or requests) and do it in native Python instead of shelling out to curl and dealing with all that plumbing.
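For instance, a minimal sketch of the requests variant (the validator URL is from the question; requests is a third-party package that must be installed):
import requests

def validate_url(url):
    # Query the W3C validator directly instead of shelling out to curl
    resp = requests.get("http://validator.w3.org/check", params={"uri": url})
    return 'ERR' if "Error" in resp.text else 'OK'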
