chown via paramiko by username and not uid - python

I need to run chown on a certain file on a remote server to change the owner (not the group). The paramiko chown method takes three arguments: path, uid, gid.
In my code, I have the username, not the uid. So this is my code:
#some code here
...
object_stat = sftp_client.stat(object_path)
sftp_client.chown(object_path, owner_username, int(object_stat.st_gid))
...
#more code
Is there a way to work around this? I'd prefer to avoid using shell commands if possible.
Thanks!

import pexpect

new_child = pexpect.spawn("ssh ....")
new_child.expect("Password:")
new_child.sendline(mypass)
new_child.expect(r"\$")  # or whatever the shell prompt is
new_child.sendline("chown ...")
new_child.expect(r"\$")
If you want the command's output, use new_child.before.

Here's an answer from the far future for anyone looking this up. You can use the SFTP client to send arbitrary SSH commands, i.e.,
def sendCommand(sftp, *command, **kwargs):
    cmd = " ".join([str(c) for c in command])
    session = sftp.sock.get_transport().open_channel(kind="session")
    try:
        session.exec_command(cmd)
        stdout = bytearray()
        stderr = bytearray()
        rc = 0
        while True:
            if session.exit_status_ready():
                while True:
                    data = session.recv(8192)
                    if not data:
                        break
                    stdout.extend(data)
                while True:
                    data = session.recv_stderr(8192)
                    if not data:
                        break
                    stderr.extend(data)
                break
        rc = session.recv_exit_status()
        if rc != 0 and not kwargs.get("ignore_errors", False):
            raise ValueError("Command {0} failed with exit code {1}.\n{2}".format(" ".join(command), rc, stderr))
        else:
            try:
                return stdout.decode("UTF-8")
            except UnicodeDecodeError:
                return stdout
    finally:
        session.close()
Then, using this, we can run getent over the channel. When run against the passwd database, you'll get something like root:x:0:0:root:/root:/bin/bash. Index 2 is the user's UID and index 3 is the user's GID (note: not the GID of an arbitrary group name).
uid = sendCommand(client, "getent", "passwd", username).split(":")[2]
For groups, do the same but use the group database.
gid = sendCommand(client, "getent", "group", group).split(":")[2]
getent can be passed a UID or GID as well, which will let you do lookups in the reverse direction. Note I'm strictly speaking about POSIX hosts, YMMV with other systems.
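Putting it together for the original question: a minimal sketch (assuming the sendCommand() helper above and an already-connected sftp_client) that resolves the username to a numeric UID and then calls paramiko's chown, which expects path, uid, gid:
# Minimal sketch: resolve the username to a numeric UID on the remote host,
# then change the owner while keeping the file's existing GID.
object_stat = sftp_client.stat(object_path)
uid = int(sendCommand(sftp_client, "getent", "passwd", owner_username).split(":")[2])
sftp_client.chown(object_path, uid, int(object_stat.st_gid))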


How to make a variable case-insensitive?

I want to change the code below, which runs the command "adb -s %s get-state" % (adb_id), so that adb_id is treated case-insensitively: it should work whether adb_id is 1281b6a1 or 1281B6A1. Can anyone provide guidance on how to do that?
import subprocess
from subprocess import Popen, PIPE, STDOUT

#adb_id = '1281b6a1'
adb_id = '1281B6A1'
cmd = r"C:\adb -s %s get-state" % (adb_id)  # cmd = os.getcwd() + "\\adb devices"
proc = subprocess.Popen(cmd.split(' '), stdout=subprocess.PIPE, stderr=subprocess.PIPE)
(output, error) = proc.communicate()
# Check if adb detects any devices
if error != '':
    print "ERROR:%s" % error
else:
    print "Provided Id is found in ADB as ", output
    print str(output).strip()
You can't make adb case-insensitive, so if you want the user to be able to enter the device ID without worrying about case, you'll need to find the correct case of the device's name and pass that to adb.
And to do that you'll need to get the output of adb devices to find the device's actual name. Then find what the user entered in that device list using a case-insensitive search, and finally return the canonical device name from that.
devlist = subprocess.check_output("adb devices")
devname = "\r\n%s\t" % adb_id.lower()  # device name is followed by a tab
posn = devlist.lower().find(devname)
if posn + 1:  # found
    adb_id = devlist[posn+2:posn+2+len(adb_id)]
else:
    print("that device is not connected")
Now adb_id is the case-corrected version of the device ID and can be passed via subprocess to adb.
A better solution is probably to use the output of adb devices to make a menu. That way the user doesn't have to type the full device name.
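A rough sketch of that menu idea (Python 3 syntax; assumes adb is on the PATH and the usual adb devices output format, where each device line is "<serial>\t<state>"):
# Sketch: build a numbered menu from `adb devices` output and let the user pick.
import subprocess

out = subprocess.check_output(["adb", "devices"]).decode()
# Skip the "List of devices attached" header; keep only lines with a serial and state.
devices = [line.split("\t")[0] for line in out.splitlines()[1:] if "\t" in line]
for i, dev in enumerate(devices, 1):
    print("%d) %s" % (i, dev))
choice = int(input("Select a device: "))
adb_id = devices[choice - 1]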

Enter an ssh password using the standard python library (not pexpect)

Related questions that are essentially asking the same thing, but have answers that don't work for me:
Make python enter password when running a csh script
How to interact with ssh using subprocess module
How to execute a process remotely using python
I want to ssh into a remote machine and run one command. For example:
ssh <user>@<ipv6-link-local-addr>%eth0 sudo service fooService status
The problem is that I'm trying to do this through a python script with only the standard libraries (no pexpect). I've been trying to get this to work using the subprocess module, but calling communicate always blocks when requesting a password, even though I supplied the password as an argument to communicate. For example:
proc = subprocess.Popen(
    [
        "ssh",
        "{testUser1}@{testHost1}%eth0".format(**locals()),
        "sudo service cassandra status"],
    shell=False,
    stdin=subprocess.PIPE)
a, b = proc.communicate(input=testPasswd1)
print "a:", a, "b:", b
print "return code: ", proc.returncode
I've tried a number of variants of the above as well (e.g., removing "input=", adding/removing subprocess.PIPE assignments to stdout and stderr). However, the result is always the same password prompt:
ubuntu@<ipv6-link-local-addr>%eth0's password:
Am I missing something? Or is there another way to achieve this using the python standard libraries?
This answer is just an adaptation of this answer by Torxed, which I recommend you go upvote. It simply adds the ability to capture the output of the command you execute on the remote server.
import pty
from os import waitpid, execv, read, write

class ssh():
    def __init__(self, host, execute='echo "done" > /root/testing.txt',
                 askpass=False, user='root', password=b'SuperSecurePassword'):
        self.exec_ = execute
        self.host = host
        self.user = user
        self.password = password
        self.askpass = askpass
        self.run()

    def run(self):
        command = [
            '/usr/bin/ssh',
            self.user + '@' + self.host,
            '-o', 'NumberOfPasswordPrompts=1',
            self.exec_,
        ]

        # PID = 0 for child, and the PID of the child for the parent
        pid, child_fd = pty.fork()

        if not pid:  # Child process
            # Replace child process with our SSH process
            execv(command[0], command)

        ## if we haven't set up pub-key authentication
        ## we can loop for a password prompt and "insert" the password.
        while self.askpass:
            try:
                output = read(child_fd, 1024).strip()
            except:
                break
            lower = output.lower()
            # Write the password
            if b'password:' in lower:
                write(child_fd, self.password + b'\n')
                break
            elif b'are you sure you want to continue connecting' in lower:
                # Adding key to known_hosts
                write(child_fd, b'yes\n')
            else:
                print('Error:', output)

        # See if there's more output to read after the password has been sent,
        # and capture it in a list.
        output = []
        while True:
            try:
                output.append(read(child_fd, 1024).strip())
            except:
                break

        waitpid(pid, 0)
        return ''.join(output)

if __name__ == "__main__":
    s = ssh("some ip", execute="ls -R /etc", askpass=True)
    print s.run()
Output:
/etc:
adduser.conf
adjtime
aliases
alternatives
apm
apt
bash.bashrc
bash_completion.d
<and so on>

How to properly perform host or dig command in python

I want to run host or dig commands using Python to check whether a domain is blacklisted. I use this:
surbl_result = os.system(host_str + ".multi.surbl.org")
# this works like running the terminal command: host johnnydeppsource.com.multi.surbl.org
It returns an integer: 0 (the domain is listed in the blacklist) or 256 (it is not listed).
if surbl_result == 0:  # blacklisted in surbl
    black_list = True
But sometimes the host command fails with a SERVFAIL response:
Host johnnydeppsource.com.multi.surbl.org not found: 2(SERVFAIL)
This also returns a non-zero value, so the domain is treated as not blacklisted and gets added even though it may actually be listed. Are there other ways to perform this kind of check? This is part of my Django 1.6 application. Any leads will help.
os.system(command) returns the exit status after executing the command (a string) in a subshell.
It's better to do it this way:
from subprocess import Popen, PIPE

subproc = Popen(host_str + ".multi.surbl.org", stdout=PIPE, shell=True)
output, _ = subproc.communicate()  # communicate() returns (stdout, stderr), not an error code
if subproc.returncode == 0:  # the host lookup succeeded, i.e. the domain is listed
    black_list = True
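For what it's worth, an alternative that avoids shelling out entirely and can distinguish "not listed" from a failing resolver is a DNS library. A minimal sketch, assuming the third-party dnspython package and a hypothetical is_blacklisted() helper:
import dns.resolver
import dns.exception

def is_blacklisted(domain):
    try:
        dns.resolver.query(domain + ".multi.surbl.org", "A")
        return True   # an answer came back: the domain is listed
    except dns.resolver.NXDOMAIN:
        return False  # definitely not listed
    except dns.exception.DNSException:
        raise         # SERVFAIL, timeout, etc.: don't silently treat as "not listed"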

How to get the actual shell prompt string in Python?

I have a Python routine which invokes some kind of CLI (e.g. telnet) and then executes commands in it. The problem is that sometimes the CLI refuses the connection and the commands are executed in the host shell instead, resulting in various errors. My idea is to check whether the shell prompt changes after invoking the CLI.
The question is: how can I get the shell prompt string in Python?
Echoing PS1 is not a solution, because some CLIs cannot run it and it returns a notation-like string instead of the actual prompt:
SC-2-1:~ # echo $PS1
\[\]\h:\w # \[\]
EDIT
My routine:
def run_cli_command(self, ssh, cli, commands, timeout = 10):
    ''' Sends one or more commands to some cli and returns answer. '''
    try:
        channel = ssh.invoke_shell()
        channel.settimeout(timeout)
        channel.send('%s\n' % (cli))
        if 'telnet' in cli:
            time.sleep(1)
        time.sleep(1)
        # I need to check the prompt here
        w = 0
        while (channel.recv_ready() == False) and (w < timeout):
            w += 1
            time.sleep(1)
        channel.recv(9999)
        if type(commands) is not list:
            commands = [commands]
        ret = ''
        for command in commands:
            channel.send("%s\r\n" % (command))
            w = 0
            while (channel.recv_ready() == False) and (w < timeout):
                w += 1
                time.sleep(1)
            ret += channel.recv(9999)  ### The size of read buffer can be a bottleneck...
    except Exception, e:
        #print str(e)  ### for debugging
        return None
    channel.close()
    return ret
Some explanation is needed here: the ssh parameter is a paramiko.SSHClient() instance. I use this code to log in to a server, and from there I call another CLI, which can be SSH, telnet, etc.
I’d suggest sending commands that set PS1 to a known string. I’ve done this when using Oracle sqlplus from a Korn shell script, as a coprocess, to know when to stop reading output from the last statement I issued. So basically, you’d send:
PS1='end1>'; command1
Then you’d read lines until you see "end1>" (to make this easier, add a newline at the end of PS1). A rough sketch over the paramiko channel follows.
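A minimal sketch of that approach, assuming the channel from run_cli_command() above; the sentinel name and buffer handling are illustrative only:
# Sketch: set the remote prompt to a known sentinel, run the command,
# then read from the channel until the sentinel shows up.
SENTINEL = "end1>"
channel.send("PS1='%s'; %s\r\n" % (SENTINEL, command))
buf = ''
while SENTINEL not in buf:
    if channel.recv_ready():
        buf += channel.recv(9999)
    else:
        time.sleep(0.5)
# Note: the echoed command line also contains the sentinel, so in practice
# put a newline at the end of PS1 (as suggested above) or strip the echo first.
output = buf.split(SENTINEL)[0]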

How to redirect output of ssh.exec_command to a file in Python paramiko module?

I want to run a Python script, say test.py, on a Linux target machine (which has a Python interpreter) and capture the command's output in a text file, as part of another Python script, invoke.py, using the paramiko module.
The statement in the script
stdin, stdout, sterr = ssh.exec_command("python invoke.py > log.txt")
generates a blank file log.txt.
Please suggest corrections or an alternate way to write the output to the file correctly.
test.py when run locally outputs sane text (which is expected to be logged in log.txt).
There are some relevant posts here and here, but none of them deals with the output of a Python script.
Instead of calling client.exec_command(), you can use client.open_channel() and use the channel session's recv() and recv_stderr() streams to read stdout/stderr:
import os
import time
import paramiko

def sshExecute(hostname, username, password, command, logpath = None):
    buffSize = 2048
    port = 22

    client = paramiko.Transport((hostname, port))
    client.connect(username=username, password=password)

    if logpath == None:
        logpath = "./"

    timestamp = int(time.time())
    hostname_prefix = "host-%s-%s" % (hostname.replace(".", "-"), timestamp)
    stdout_data_filename = os.path.join(logpath, "%s.out" % (hostname_prefix))
    stderr_data_filename = os.path.join(logpath, "%s.err" % (hostname_prefix))

    stdout_data = open(stdout_data_filename, "w")
    stderr_data = open(stderr_data_filename, "w")

    sshSession = client.open_channel(kind='session')
    sshSession.exec_command(command)
    while True:
        if sshSession.recv_ready():
            stdout_data.write(sshSession.recv(buffSize))
        if sshSession.recv_stderr_ready():
            stderr_data.write(sshSession.recv_stderr(buffSize))
        if sshSession.exit_status_ready():
            break

    stdout_data.close()
    stderr_data.close()

    sshSession.close()
    client.close()

    return sshSession.recv_exit_status()
Hope this fully working function helps you
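For example, a hypothetical call (host, credentials, and script path are placeholders):
# Hypothetical usage of the sshExecute() helper above; stdout/stderr land in /tmp.
rc = sshExecute("192.0.2.10", "testuser", "secret", "python test.py", logpath="/tmp")
print("remote command exited with status %d" % rc)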
