I have a Python script that connects to a remote server running Debian Lenny. It starts a process in the background using the following line:
shell.send("cd /my/directory/; nohup ./exec_name > /dev/null 2>&1 &\n")
Then, after some other code, it sends a kill command to the server to stop the process; here's the code:
shell.send("kill -9 process_pid \n")
It returns no error, but it doesn't kill the process, which stays alive on the system. I also tried killall -9 process_name, but got the same result. Any help?
For more information, here's the code for connecting to the server:
ssh = paramiko.SSHClient()
ssh.set_missing_host_key_policy(paramiko.AutoAddPolicy())
ssh.connect(hostname = "host_ip", username = "un", password = "up")
channel = ssh.get_transport().open_session()
pty = channel.get_pty()
shell = ssh.invoke_shell()
I should mention that the user has root privileges.
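For context, a sketch of how the background process could be started so that its PID is captured for the later kill over the same interactive shell; the recv-based parsing and the one-second wait are only illustrative:

import re
import time

# Start the process in the background and echo the PID of the last background job ($!)
shell.send("cd /my/directory/; nohup ./exec_name > /dev/null 2>&1 & echo PID=$!\n")
time.sleep(1)
output = shell.recv(65535).decode()

match = re.search(r"PID=(\d+)", output)
if match:
    process_pid = match.group(1)
    # ... later, when the process should be stopped:
    shell.send(f"kill -9 {process_pid}\n")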
EDIT 1:
I forgot to say that I tried this:
ssh.exec_command("kill -9 process_pid \n")
But it returned this error:
SSHClient is not active right now.
Edit 2:
As @JimB mentioned in a comment, the problem with exec_command is that the transport has gone stale. I made a temporary SSH connection and killed the process through it; that worked, but I'm still looking for a better way.
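For reference, the workaround looks roughly like this: open a short-lived second SSHClient just to send the kill, then close it. The host, credentials, and pid variable are placeholders, not the real values.

import paramiko

def kill_remote_process(host, user, password, pid):
    # Temporary connection used only for the kill command
    tmp = paramiko.SSHClient()
    tmp.set_missing_host_key_policy(paramiko.AutoAddPolicy())
    tmp.connect(hostname=host, username=user, password=password)
    try:
        # exec_command runs on a fresh channel, so it does not depend on
        # the state of the original interactive shell
        stdin, stdout, stderr = tmp.exec_command(f"kill -9 {pid}")
        stdout.channel.recv_exit_status()  # wait for the command to finish
    finally:
        tmp.close()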
Related
I'm trying to implement a Python script that executes local bash scripts or simple commands on remote CyberArk machines. Here is my code:
if __name__ == '__main__':
    for ip in IP_LIST:
        bash_cmd = f"ssh -o stricthostkeychecking=no {USER}%{LOCAL_USER}%{ip}#{PROXY} 'bash -s' < {BASH_SCRIPT}"
        exit_code = subprocess.call(bash_cmd, shell=True)
        print(exit_code)
        bash_cmd = f"scp {USER}%{LOCAL_USER}%{ip}#{PROXY}:server_info_PY.txt ."
        exit_code = subprocess.call(bash_cmd, shell=True)
        print(exit_code)
The main problem is that I get this CyberArk authentication error most of the time, but not always, so it seems random and I don't know why:
PSPSD072E Perform session error occurred. Reason: PSPSD033E Error receiving PSM For SSH server
response (Extra information: [289E [4426b00e-cc44-11ec-bca1-005056b74f99] Failed to impersonate as
user <user>. Error: [ITATS004E Authentication failure for User <user>.
In this case the ssh exit code is 255, but if I check the sshd service logs on the remote machine, there are no errors. I even tried the os library to execute the bash commands, but I got the same result.
I suspected multiple ssh sessions might be left hanging after executing this script many times, but on the remote machine I only find the one I'm using.
Could someone explain what is happening or do you have any ideas?
Note: I don't have any access to the PSM server, which is stored in the variable PROXY.
Edit 1: I tried using the Paramiko library to create the ssh connection, but I get an authentication error related to Paramiko rather than to CyberArk. I also tried the Fabric library, which is based on Paramiko, so it didn't work either.
If I run the same ssh command manually from my terminal it works, and I can see that it first connects to the PROXY and then to the IP of the remote machine. From the script it looks like it can't even connect to the PROXY because of the CyberArk authentication error.
Edit 2: I logged some information about all commands running while executing the Python script and found out that the first command launched is /bin/sh -c plus the ssh string:
/bin/sh -c ssh <user>#<domain>
Could this be the main problem, the prepending of /bin/sh -c? Or is that normal behaviour when using the subprocess library? Is there a way to execute the ssh command without this prepend?
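For what it's worth, subprocess only prepends /bin/sh -c when shell=True; passing the command as a list of arguments runs ssh directly. A minimal sketch under that assumption, reusing the variable names from the script above and feeding the local script over stdin because there is no shell to handle the < redirection:

import shlex
import subprocess

# No shell=True, so no /bin/sh -c wrapper; the remote 'bash -s' reads the
# local script from stdin instead of relying on shell redirection.
args = shlex.split(f"ssh -o stricthostkeychecking=no {USER}%{LOCAL_USER}%{ip}#{PROXY} bash -s")
with open(BASH_SCRIPT, "rb") as script_file:
    exit_code = subprocess.call(args, stdin=script_file)
print(exit_code)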
Edit 3: I removed shell=True but got the same authentication error. So if I execute the ssh command manually I get no error, but if it is executed from the Python script I get the error, and I can't find any difference at the process level using ps aux in the two cases.
Since the authentication error seems random, I just added a while loop that resets the known_hosts file and retries the ssh command up to n times.
succeeded_cmd_exec = False
retries = 5
while not succeeded_cmd_exec:
    if retries == 0:
        break
    bash_cmd = f'ssh-keygen -f "{Configs.KNOWN_HOSTS}" -R "{Configs.PROXY}"'
    _, _, exit_code = exec_cmd(bash_cmd)
    if exit_code == 0:
        radius_password = generate_password(Configs.URI, Configs.PASSWORD)
        bash_cmd = f"sshpass -p \"{radius_password}\" ssh -o stricthostkeychecking=no {Configs.USER}%{Configs.LOCAL_USER}%{ip}#{Configs.PROXY} 'ls'"
        stdout, stderr, exit_code = exec_cmd(bash_cmd)
        if exit_code == 0:
            print('Output from SSH command:\n')
            print(stdout)
            succeeded_cmd_exec = True
        else:
            retries = retries - 1
            print(stdout)
            print('SSH command failed, retrying ... ')
            print('Sleeping 15 seconds')
            time.sleep(15)
    else:
        print('Reset known hosts files failed, retrying ...')
if retries == 0 and not succeeded_cmd_exec:
    print(f'Failed processing IP {ip}')
The exec_cmd function is defined like this:
def exec_cmd(bash_cmd: str):
    process = subprocess.Popen(bash_cmd, shell=True, executable='/bin/bash', stdout=subprocess.PIPE, stderr=subprocess.PIPE)
    stdout, stderr = process.communicate()
    process.wait()
    return stdout.decode('utf-8'), stderr.decode('utf-8'), process.returncode
I want to run a Python script which calls remote commands over ssh.
I want some of the commands to continue even if the script or the connection dies.
Not a duplicate of this, which is the opposite.
My current code, which occasionally disconnects, is
import os
import paramiko

def run_copy_script(sh_script_path):
    assert os.path.isfile(sh_script_path)
    script_stdout_log_path = os.path.splitext(sh_script_path)[0]
    client = paramiko.SSHClient()
    client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
    # transport = client.get_transport()
    # transport.set_keepalive(30)  # causes an error
    try:
        client.connect(hostname="my_host", username="my_user", password="my_pass")
    except Exception as e:
        print(f"[!] Cannot connect to the SSH Server. ERROR: {e}")
        exit()
    command = f"echo running {sh_script_path} && {sh_script_path} > >(tee -a stdout.log) 2> >(tee -a stderr.log >&2)"
    stdin, stdout, stderr = client.exec_command(command)
Note: I am not looking to run the commands through an interactive shell.
I am aware that things like nohup are possible, but I am looking to stay inside Python.
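One note about the commented-out keepalive lines in the code above: SSHClient.get_transport() returns None until connect() has been called, which is presumably why set_keepalive raised an error. A minimal sketch of where the call could go instead (host details are placeholders):

import paramiko

client = paramiko.SSHClient()
client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
client.connect(hostname="my_host", username="my_user", password="my_pass")

# The transport only exists once the client is connected, so the keepalive
# has to be configured after connect(), not before.
client.get_transport().set_keepalive(30)  # send a keepalive packet every 30 seconds

stdin, stdout, stderr = client.exec_command("echo connected")
print(stdout.read().decode())
client.close()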
I am trying to connect to a server using the SSH protocol through a jump server. When I connect from a terminal, the jump server opens a shell and asks for a server number from the list of available servers it provides, followed by a user or password. I am using the Paramiko library.
My code:
import paramiko
client = paramiko.SSHClient()
client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
client.connect(
    hostname="server_ip",
    username="user",
    password="pass",
    look_for_keys=False,
    allow_agent=False
)
com='1'
stdin, stdout, stderr = client.exec_command(com)
data = stdout.read() + stderr.read()
print(data.decode('utf-8'))
I get the message:
Invalid target.
My shell on the jump server shows a numbered menu of the available servers to pick from.
Your jump server probably shows the selection in an interactive shell session only. So you will have to use SSHClient.invoke_shell, which is otherwise not a good thing to do when automating a connection.
See also What is the difference between exec_command and send with invoke_shell() on Paramiko?
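A rough sketch of what that could look like, assuming the jump server's menu simply reads the server number from the shell's standard input (the fixed sleeps and buffer sizes are guesses and may need adjusting for the real menu):

import time
import paramiko

client = paramiko.SSHClient()
client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
client.connect(hostname="server_ip", username="user", password="pass",
               look_for_keys=False, allow_agent=False)

shell = client.invoke_shell()
time.sleep(2)                      # crude wait for the menu to appear
print(shell.recv(65535).decode())  # show the menu and prompt

shell.send("1\n")                  # pick server number 1 from the menu
time.sleep(2)
print(shell.recv(65535).decode())  # output after the selection

client.close()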
Essentially, I wrote a script that reboots a server using Python and an SSH library called Paramiko. My script runs as it should, but I don't know if it is actually rebooting the server, because the server is not on site in the office. Is there a way I can print output as "proof" that the server is actually being rebooted? I am a little new to using Python to send commands to network devices over SSH.
I did actually run my code and it runs as it should, but I have not tested to see if a server is actually turning on and off.
There is no need to copy and paste all of my code, but there are two functions that are extremely important:
def connectToSSH(deviceIP, deviceUsername, devicePassword):
    ssh_port = 22
    ssh = paramiko.SSHClient()
    ssh.set_missing_host_key_policy(paramiko.AutoAddPolicy())
    ssh.connect(deviceIP, ssh_port, deviceUsername, devicePassword)
    time.sleep(5)
    return ssh

def reboot_server(ssh):
    prompt = raw_input('Are you sure you want to reboot this server ? ')
    if prompt.lower() == 'y':  # proceed only on an explicit "yes"
        print('Proceeding to reboot the switch\n')
    else:
        print('Proceeding to exit the program\n')
        sys.exit(-1)
    channel = ssh.invoke_shell()
    ssh.exec_command("/sbin/reboot -f > /dev/null 2>&1 &")  # executes the reboot command; is this the right command? I found it on another Stack Overflow post
    channel.close()
    print("Please wait for server to be rebooted")
I am receiving no compile errors but I want to be sure that the command:
ssh.exec_command("/sbin/reboot -f > /dev/null 2>&1 &")
is actually rebooting the server. If it is, is there a way I can print/output proof that it is being rebooted? If so, how do I go about doing that?
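One way to get that kind of proof, sketched here using the helpers from the script above: record the boot time before issuing the reboot, wait, reconnect, and compare. uptime -s prints the boot timestamp on most Linux systems; the two-minute sleep is only a guess at how long the reboot takes.

import time

def verify_reboot(deviceIP, deviceUsername, devicePassword):
    # Boot time before the reboot
    ssh = connectToSSH(deviceIP, deviceUsername, devicePassword)
    _, stdout, _ = ssh.exec_command("uptime -s")
    boot_time_before = stdout.read().decode().strip()
    reboot_server(ssh)
    ssh.close()

    time.sleep(120)  # give the server time to go down and come back up

    # Boot time after the reboot; a later timestamp is evidence that the
    # machine really did restart.
    ssh = connectToSSH(deviceIP, deviceUsername, devicePassword)
    _, stdout, _ = ssh.exec_command("uptime -s")
    boot_time_after = stdout.read().decode().strip()
    ssh.close()

    print("boot time before:", boot_time_before)
    print("boot time after: ", boot_time_after)
    return boot_time_after > boot_time_before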
I have a script that connects over SSH from Windows 7 to a remote Ubuntu server and executes a command. The script returns the Ubuntu command output to the Windows cmd window in one go after the command has finished. I am just wondering if there is any way to get real-time SSH output in my script below, or do I always have to wait for the command to finish before seeing the output?
Here's my working code:
import paramiko
ssh = paramiko.SSHClient()
ssh.set_missing_host_key_policy(paramiko.AutoAddPolicy())
host = '9.10.11.12'
port, user, password = 22, 'usr', 'pass'
ssh.connect(host, port, user, password)
stdin,stdout,stderr = ssh.exec_command("cd /opt/app && ./app-tool some_command")
for line in stdout.readlines():
    print(line)
ssh.close()
Alternatively, if this is not possible with SSH, how would I introduce a spinning cursor icon into the above script? Thanks.
Figured it out in the end; I used iter in the following line:
for line in iter(stdout.readline, ""):
    print(line)
The output of your command seems to be smaller than the default buffer size, which is why it is only flushed once the command completes.
By default bufsize is -1, which means the system default buffer size is used. If bufsize is set to 1, the stream is line-buffered.
Use
ssh.exec_command("<cmd>", bufsize=1)
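Putting the two pieces together, a minimal sketch (host, credentials and the command are the placeholders from the question) that prints each line as it arrives:

import paramiko

ssh = paramiko.SSHClient()
ssh.set_missing_host_key_policy(paramiko.AutoAddPolicy())
ssh.connect('9.10.11.12', 22, 'usr', 'pass')

# bufsize=1 requests line buffering; iterating with readline yields each line
# as soon as it is available instead of waiting for the command to finish.
stdin, stdout, stderr = ssh.exec_command("cd /opt/app && ./app-tool some_command", bufsize=1)
for line in iter(stdout.readline, ""):
    print(line, end="")

ssh.close()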