I'm trying to implement a Python script that executes local bash scripts or simple commands on remote CyberArk machines. Here is my code:
import subprocess

if __name__ == '__main__':
    for ip in IP_LIST:
        # Run the local bash script on the remote machine through the PSM proxy
        bash_cmd = f"ssh -o stricthostkeychecking=no {USER}%{LOCAL_USER}%{ip}#{PROXY} 'bash -s' < {BASH_SCRIPT}"
        exit_code = subprocess.call(bash_cmd, shell=True)
        print(exit_code)
        # Copy the file produced by the script back to the local machine
        bash_cmd = f"scp {USER}%{LOCAL_USER}%{ip}#{PROXY}:server_info_PY.txt ."
        exit_code = subprocess.call(bash_cmd, shell=True)
        print(exit_code)
The main problem is that I get this CyberArk authentication error most of the time, but not always, so it seems random and I don't know why:
PSPSD072E Perform session error occurred. Reason: PSPSD033E Error receiving PSM For SSH server
response (Extra information: [289E [4426b00e-cc44-11ec-bca1-005056b74f99] Failed to impersonate as
user <user>. Error: [ITATS004E Authentication failure for User <user>.
In this case the ssh exit code is 255, but if I check the sshd service logs on the remote machine, there are no errors. I also tried executing the bash commands with the os library, but I got the same result.
I suspected multiple ssh sessions might be left hanging after running this script many times, but on the remote machine I only find the one I'm using.
Could someone explain what is happening or do you have any ideas?
Note: I don't have any access to the PSM server, which is stored in the variable PROXY.
Edit 1: I tried using the Paramiko library to create the ssh connection, but I get an authentication error related to Paramiko rather than to CyberArk. I also tried the Fabric library, which is based on Paramiko, so it didn't work either.
If I run the same ssh command manually from my terminal it works, and I can see that it first connects to the PROXY and then to the IP of the remote machine. From the script it looks like it can't even connect to the PROXY because of the CyberArk authentication error.
Edit 2: I logged information about all the commands running while executing the Python script and found out that the first command launched is /bin/sh -c plus the ssh string:
/bin/sh -c ssh <user>#<domain>
Could this be the main problem, the prepending of /bin/sh -c? Or is it normal behaviour when using the subprocess library? Is there a way to execute the ssh command without this prefix?
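For reference, passing the command as an argument list with shell=False avoids the /bin/sh -c wrapper entirely. A minimal sketch, reusing the variable names from the snippet above; since there is no shell, the input redirection has to be done explicitly:

import subprocess

# Sketch: invoke ssh directly, without the /bin/sh -c wrapper.
# USER, LOCAL_USER, ip, PROXY and BASH_SCRIPT are the same variables as above.
ssh_argv = [
    "ssh", "-o", "StrictHostKeyChecking=no",
    f"{USER}%{LOCAL_USER}%{ip}#{PROXY}",
    "bash -s",
]
# '< BASH_SCRIPT' was shell redirection, so open the file and pass it as stdin
with open(BASH_SCRIPT, "rb") as script:
    result = subprocess.run(ssh_argv, stdin=script)
print(result.returncode)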
Edit 3: I removed shell=True but got the same authentication error. So if I execute the ssh command manually I get no error, but if it is executed from the Python script I get the error, and I can't find any difference at the process level using ps aux in either case.
Since the authentication error is somewhat random, I added a while loop that resets the known_hosts file and retries the ssh command up to n times.
succeeded_cmd_exec = False
retries = 5
while not succeeded_cmd_exec:
    if retries == 0:
        break
    bash_cmd = f'ssh-keygen -f "{Configs.KNOWN_HOSTS}" -R "{Configs.PROXY}"'
    _, _, exit_code = exec_cmd(bash_cmd)
    if exit_code == 0:
        radius_password = generate_password(Configs.URI, Configs.PASSWORD)
        bash_cmd = f"sshpass -p \"{radius_password}\" ssh -o stricthostkeychecking=no {Configs.USER}%{Configs.LOCAL_USER}%{ip}#{Configs.PROXY} 'ls'"
        stdout, stderr, exit_code = exec_cmd(bash_cmd)
        if exit_code == 0:
            print('Output from SSH command:\n')
            print(stdout)
            succeeded_cmd_exec = True
        else:
            retries = retries - 1
            print(stdout)
            print('SSH command failed, retrying ... ')
            print('Sleeping 15 seconds')
            time.sleep(15)
    else:
        print('Reset known hosts files failed, retrying ...')
if retries == 0 and not succeeded_cmd_exec:
    print(f'Failed processing IP {ip}')
The exec_cmd function is defined like this:
def exec_cmd(bash_cmd: str):
    process = subprocess.Popen(bash_cmd, shell=True, executable='/bin/bash',
                               stdout=subprocess.PIPE, stderr=subprocess.PIPE)
    stdout, stderr = process.communicate()
    return stdout.decode('utf-8'), stderr.decode('utf-8'), process.returncode
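For what it's worth, the same helper can be written with subprocess.run, which waits for the process and captures both streams in one call. A minimal sketch under the same assumptions as the function above:

import subprocess

def exec_cmd(bash_cmd: str):
    # Run through bash, capture stdout and stderr, and return them decoded
    result = subprocess.run(bash_cmd, shell=True, executable='/bin/bash',
                            stdout=subprocess.PIPE, stderr=subprocess.PIPE)
    return result.stdout.decode('utf-8'), result.stderr.decode('utf-8'), result.returncode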
Related
I want to run a Python script which calls remote commands over ssh.
I want some of the commands to continue even if the script or the connection dies.
Not a duplicate of this, which is the opposite.
My current code, which occasionally disconnects, is
import os
import paramiko

def run_copy_script(sh_script_path):
    assert os.path.isfile(sh_script_path)
    script_stdout_log_path = os.path.splitext(sh_script_path)[0]
    client = paramiko.SSHClient()
    client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
    # transport = client.get_transport()
    # transport.set_keepalive(30)  # causes an error: get_transport() returns None before connect()
    try:
        client.connect(hostname="my_host", username="my_user", password="my_pass")
    except Exception as e:
        print(f"[!] Cannot connect to the SSH Server. ERROR: {e}")
        exit()
    command = f"echo running {sh_script_path} && {sh_script_path} > >(tee -a stdout.log) 2> >(tee -a stderr.log >&2)"
    stdin, stdout, stderr = client.exec_command(command)
Note: I am not looking for commands through shell.
I am aware things like nohup are possible, but am looking to remain inside Python.
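One thing that may help with the occasional disconnects is enabling keepalives, which only works after connect() since get_transport() returns None until then (likely why the commented-out lines above errored). A minimal sketch of that ordering, with command as defined above:

client = paramiko.SSHClient()
client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
client.connect(hostname="my_host", username="my_user", password="my_pass")
# Only after connect() does get_transport() return a usable Transport;
# send an SSH keepalive packet every 30 seconds
client.get_transport().set_keepalive(30)
stdin, stdout, stderr = client.exec_command(command)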
I am trying to SSH to a server with Python and I have been able to do so successfully. I am able to run commands within Python successfully with one exception: the main command that is the focus of my program. It is a SIPp command that will only run within the SSH server and in a specific folder.
When I run the command in my terminal it works perfectly fine; however, when I connect to the SSH server through PExpect or Paramiko (both connect fine) and try to send my command, I get:
Error Opening Terminal: Unknown
So far I have read the docs and tried using os, subprocess, and multiple different ways of connecting with Paramiko and pxssh. Several people I work with were not able to figure it out either.
The SIPp command that I am trying to send and read the output of:
sipp -r 5 -m 20 -trace_msg -inf users.csv -sf register.xml -d 10000 -i [IP addresses]
# some of the command was left out for simplicity's sake
# there is no issue with the command
Connecting to SSH through Pxssh (PExpect):
from pexpect import pxssh
from getpass import getpass

try:
    s = pxssh.pxssh()
    hostname = input('hostname: ')
    username = input('username: ')
    password = getpass("password :", None)
    s.login(hostname, username, password)
    s.sendline('cd [location of the folder]')
    s.prompt()
    print(s.before)
    s.sendline('sipp -r 5 -m 20 -trace_msg -inf users.csv -sf register.xml -d 10000 -i [IP addresses]')  # this is the only line that doesn't work / output anything
    s.prompt()
    print(s.before)
    s.sendline('ls')
    s.prompt()
    print(s.before)
    s.logout()
except pxssh.ExceptionPxssh as e:
    print("Something went wrong. Try again with the correct Host Name, Username, and Password")
    print(e)
Connecting to SSH through Paramiko:
from paramiko import client
from getpass import getpass

class ssh:
    client = None

    def __init__(self, address, username, password):
        self.client = client.SSHClient()
        self.client.set_missing_host_key_policy(client.AutoAddPolicy())
        self.client.connect(address, username=username, password=password, look_for_keys=False)

    def sendCommand(self, command):
        if self.client:
            stdin, stdout, stderr = self.client.exec_command(command)
            output = stdout.readlines()
            print(output, stderr.readlines())
            while not stdout.channel.exit_status_ready():
                if stdout.channel.recv_ready():
                    alldata = stdout.channel.recv(1024)
                    prevdata = b"1"
                    while prevdata:
                        prevdata = stdout.channel.recv(1024)
                        alldata += prevdata
                    print(str(alldata, "utf8"))
            self.client.close()
        else:
            print("Connection not opened.")

connection = ssh([ssh info])
connection.sendCommand("cd [location] ; sipp -r 5 -m 20 -trace_msg -inf users.csv -sf register.xml -d 10000 -i [IP addresses]")
Both give me this error: Error opening terminal: unknown.
My guess is that it is not spawning an actual terminal, but I can't figure out what to do at this point. Any help would be sincerely appreciated.
Your command needs terminal emulation.
Either:
Try to find a way to run the command so that it does not require terminal emulation. Maybe the -bg switch can help.
Possibly this was a bug in an older version of SIPp. Make sure you have the latest version. See Startup failure when running from environment without TERM.
Or, enable terminal emulation (which can bring unwanted side effects). With Paramiko's SSHClient.exec_command, use its get_pty argument:
stdin, stdout, stderr = self.client.exec_command(command, get_pty=True)
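Applied to the sendCommand method from the question, that change would look roughly like the sketch below; note that with a pty allocated, stderr is merged into the stdout stream:

def sendCommand(self, command):
    if self.client:
        # get_pty=True requests a pseudo-terminal from the server, which is
        # what SIPp's screen output needs; stderr is merged into stdout
        stdin, stdout, stderr = self.client.exec_command(command, get_pty=True)
        for line in stdout:
            print(line, end="")
        self.client.close()
    else:
        print("Connection not opened.")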
Essentially I wrote a script that reboots a server using Python and an SSH library called Paramiko. My script runs as it should, but I don't know if it is actually rebooting the server because the server is not on site in the office. Is there a way I can print or output "proof" that the server is actually being rebooted? I am a little new to using Python to give commands to network devices over SSH.
I did actually run my code and it runs as it should, but I have not tested to see if a server is actually turning on and off.
There is no need to copy and paste all of my code, but there are two functions that are extremely important:
def connectToSSH(deviceIP, deviceUsername, devicePassword):
    ssh_port = 22
    ssh = paramiko.SSHClient()
    ssh.set_missing_host_key_policy(paramiko.AutoAddPolicy())
    ssh.connect(deviceIP, ssh_port, deviceUsername, devicePassword)
    time.sleep(5)
    return ssh

def reboot_server(ssh):
    prompt = raw_input('Are you sure you want to reboot this server ?')
    if prompt.lower() == 'y':
        print('Proceeding to reboot the switch\n')
    else:
        print('Proceeding to exit the program\n')
        sys.exit(-1)
    channel = ssh.invoke_shell()
    ssh.exec_command("/sbin/reboot -f > /dev/null 2>&1 &")  # executes command to reboot server; is this the right command? I found this on another Stack Overflow post
    channel.close()
    print("Please wait for server to be rebooted")
I am receiving no errors when running the script, but I want to be sure that the command:
ssh.exec_command("/sbin/reboot -f > /dev/null 2>&1 &")
is actually rebooting the server. If it is, is there a way I can print/output proof that it is being rebooted? If so, how do I go about doing that?
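One way to get that proof, as a sketch, is to record the server's boot time before rebooting and poll it again afterwards; if the value changes, the machine really restarted. This assumes a Linux target where uptime -s is available and reuses connectToSSH from above; the sleep duration is an arbitrary placeholder:

import time

def get_boot_time(ssh):
    # 'uptime -s' prints the boot timestamp on most Linux systems
    stdin, stdout, stderr = ssh.exec_command("uptime -s")
    return stdout.read().strip()

def reboot_with_proof(deviceIP, deviceUsername, devicePassword):
    ssh = connectToSSH(deviceIP, deviceUsername, devicePassword)
    before = get_boot_time(ssh)
    ssh.exec_command("/sbin/reboot -f > /dev/null 2>&1 &")
    ssh.close()
    time.sleep(120)  # give the server time to go down and come back up
    ssh = connectToSSH(deviceIP, deviceUsername, devicePassword)
    after = get_boot_time(ssh)
    ssh.close()
    print("Boot time before: %s, after: %s" % (before, after))
    if after != before:
        print("Server was rebooted.")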
I want to know the disk usage of remote servers and I thought of doing it over ssh.
Here's what I have done so far:
def disk_usage(server):
    msg = ""
    ps = subprocess.Popen(["ssh", "-o", "BatchMode=yes", "-l", "mygroup", server, "df -k /some/directory"], stdout=subprocess.PIPE)
    out, err = ps.communicate()
    if err != None:
        msg += "\n" + err
    else:
        msg = out
    return msg

Final_msg = ""
server_list = ['server A', 'server B', 'server C']
for server in server_list:
    Final_msg += "For Server :" + server + "\n" + disk_usage(server)
print Final_msg
The script works fine, but the problem is that when ssh for a server is not configured, it just displays blank output for that server.
Output:
For Server A :
Filesystem 1K-blocks Used Available Use% Mounted on
/dev/cfd/ace 8064048 3581524 4072892 47% /app
For Server B :
For server C :
Filesystem 1K-blocks Used Available Use% Mounted on
/dev/wsa/ace 306423 244524 23243434 90% /app
Here ssh for server B is not configured, so I'm getting blank output because BatchMode is on (BatchMode=yes) for all the ssh connections, but I want the user to know why there was no output.
When I run the same command in the shell for the server where ssh is not configured, I get the error below:
Permission denied (publickey,gssapi-keyex,gssapi-with-mic,password).
I want the same error in the script's output for that particular server where ssh is not configured.
Any ideas?
To detect that an error happened, you should check the returncode attribute of the Popen object (ps).
To get the output from stderr, you have to pass stderr=subprocess.PIPE to Popen, just as you do for stdout.
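Applied to the disk_usage function from the question, that could look roughly like the sketch below (on Python 3 you would also decode out and err from bytes):

def disk_usage(server):
    ps = subprocess.Popen(
        ["ssh", "-o", "BatchMode=yes", "-l", "mygroup", server, "df -k /some/directory"],
        stdout=subprocess.PIPE,
        stderr=subprocess.PIPE,  # capture stderr as well, not just stdout
    )
    out, err = ps.communicate()
    if ps.returncode != 0:
        # ssh failed (e.g. "Permission denied (publickey,...)"); report why
        return "ERROR (exit code %d):\n%s" % (ps.returncode, err)
    return out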
If your local machine has a static IP, I would recommend using sockets, so your disk usage script can connect to your local machine and deliver the data.
Or, if you have a domain, post your server info to your web app via urllib.
I have a Python script that connects to a remote server running Debian Lenny. It runs a process in the background using the following line:
shell.send("cd /my/directory/; nohup ./exec_name > /dev/null 2>&1 &\n")
Then, after some other code, it sends a kill command to the server to stop the process; here's the code:
shell.send("kill -9 process_pid \n")
It returns no error, but doesn't kill the process and it's still alive in the system. I also tried killall -9 process_name, but I got the same result. Any help?
For more information, here's the code for connecting to the server:
ssh = paramiko.SSHClient()
ssh.set_missing_host_key_policy(paramiko.AutoAddPolicy())
ssh.connect(hostname="host_ip", username="un", password="up")
channel = ssh.get_transport().open_session()
pty = channel.get_pty()
shell = ssh.invoke_shell()
I should mention that the user has root privileges.
EDIT 1:
I forgot to say that I tried this:
ssh.exec_command("kill -9 process_pid \n")
But it returned this error:
SSHClient is not active right now.
Edit 2:
As @JimB mentioned in the comments, the problem with exec_command is that the transport had gone stale. I made a temporary SSH connection and killed the process through it; that was successful, but I'm still searching for a better way.
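For reference, the temporary-connection workaround described above can be sketched as below; the helper name, the credentials, and the use of pkill are assumptions for illustration, not the original code:

import paramiko

def kill_remote_process(host, user, password, process_name):
    # Open a fresh connection just for the kill, since the original transport had gone stale
    client = paramiko.SSHClient()
    client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
    client.connect(hostname=host, username=user, password=password)
    try:
        # pkill -9 matches by process name, so the PID does not need to be tracked
        stdin, stdout, stderr = client.exec_command("pkill -9 %s" % process_name)
        stdout.channel.recv_exit_status()  # wait for the command to finish
    finally:
        client.close()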