I want to communicate with a data logger via Telnet. To that end, I wrote the following Python script:
import subprocess
command = 'plink.exe -telnet -P 23 12.17.46.06'
p = subprocess.Popen(command, stdin=subprocess.PIPE, stdout=subprocess.PIPE, bufsize=1, shell=False, universal_newlines=True)
answer = p.communicate('command')[0]  # 'command' stands for the data-logger command to send
print(answer)
When I run the script, a plink window pops up, and the Python script seems to wait for some action to be taken inside that window. Only after I close the window manually does the desired "answer" show up in Python.
I am looking for a command or procedure to close plink directly from Python. Just closing the subprocess does not seem to be sufficient: that only shuts down the communication between Python and plink, not the program plink.exe itself.
Any help is appreciated!
Regards, Phil
The documentation for the communicate() function says: "Wait for process to terminate." The function therefore does not return until plink.exe exits, and your program does not get the output until then.
You should append to your 'command' something that closes the Telnet connection. When the far end closes the connection, plink.exe exits and its window closes. If your Telnet session runs a Unix shell, you could append '; exit' to your command.
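A minimal sketch of that idea, assuming the far end runs a Unix shell; 'show data' is a hypothetical placeholder for the data-logger command:

import subprocess

command = 'plink.exe -telnet -P 23 12.17.46.06'
p = subprocess.Popen(command, stdin=subprocess.PIPE, stdout=subprocess.PIPE, universal_newlines=True)
# 'exit' makes the remote shell close the Telnet connection, so plink
# terminates on its own and communicate() can return.
answer = p.communicate('show data\nexit\n')[0]
print(answer)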
You can check whether your task inside the plink tunnel is complete and then execute taskkill from your script, something like:
import subprocess

# Force-kill every process whose image name is plink.exe
subprocess.call('taskkill /f /fi "imagename eq plink.exe"')
That will kill plink while your Python script keeps running, so you can go on to execute other commands.
This question already has an answer here: Paramiko SSH exec_command (shell script) returns before completion (1 answer). Closed 1 year ago.
I am writing a program in Python on Ubuntu. In that program I am trying to print a message after a task ("delete a file") has completed on a remote machine (a Raspberry Pi) connected to the network.
In actual practice, however, the print command does not wait for the task on the remote machine to complete.
Can anybody guide me on how to do that?
My code is given below:
import paramiko
# Connection with remote machine
client = paramiko.SSHClient()
client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
client.connect('192.168.2.34', username='pi', password='raspberry')
filename = 'fahad.txt'
filedelete = 'rm ' + filename
stdin, stdout, stderr = client.exec_command(filedelete)
print("File Deleted")
client.close()
This is indeed a duplicate of paramiko SSH exec_command(shell script) returns before completion, but the answer there is not terribly detailed. So...
As you noticed, exec_command is a non-blocking call. So you have to wait for completion of the remote command by using either:
Channel.exit_status_ready if you want a non-blocking check of the command's completion (i.e. polling; see the sketch after the example below)
Channel.recv_exit_status if you want to block until the command completes (and get back the exit status; an exit status of 0 means normal completion).
In your particular case, you need the latter:
stdin, stdout, stderr = client.exec_command(filedelete) # Non-blocking call
exit_status = stdout.channel.recv_exit_status() # Blocking call
if exit_status == 0:
    print("File Deleted")
else:
    print("Error", exit_status)
client.close()
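And for completeness, a minimal sketch of the former, polling variant; the 0.1-second sleep interval is an arbitrary choice:

import time

stdin, stdout, stderr = client.exec_command(filedelete)  # Non-blocking call
while not stdout.channel.exit_status_ready():            # Poll for completion...
    time.sleep(0.1)                                      # ...or do other work here
exit_status = stdout.channel.recv_exit_status()          # Returns immediately now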
In addition to doing what Sylvain Leroux suggests:
If your commands involve running a bash script that needs to keep running after paramiko closes the SSH session (this happens every time you send a command), use:
nohup ./my_bash_script.sh >/dev/null 2>&1
nohup tells the system that this process should ignore the "hang up" signal it receives when the SSH session is closed.
>/dev/null 2>&1 redirects the output. This is necessary because, in some situations, control will not be given back to your Python script until output is received.
To run command-line applications like stress and vlc and keep them running after you return, the only solution I have found is to put your commands in a bash script followed by a & or &>/dev/null, then call that bash script with paramiko using the method mentioned in the previous paragraph, as sketched below.
This seems a bit hacky, but it is the only way I have found after days of searching.
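A minimal sketch of that approach, assuming my_bash_script.sh already exists on the remote machine and starts the long-running tool itself:

# nohup detaches the script from the SSH session; the trailing '&'
# backgrounds it so exec_command returns right away.
client.exec_command('nohup ./my_bash_script.sh >/dev/null 2>&1 &')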
I would like to control adb (Android Debug Bridge) from a Python script.
In order to do this, I want to use adb's shell command:
import subprocess as sp
adb = 'path-to-adb.exe'
print("Running shell command!")
p = sp.Popen([adb, 'shell'], stdout=sp.PIPE, stderr=sp.PIPE, stdin=sp.PIPE)
p.stdin.write('ls\r\n'.encode('utf-8'))
p.stdin.flush()
print(p.stdout.readlines())  # blocks here: readlines() waits for EOF
print('Do more stuff and eventually shut down.')
The idea is that I would write a command to the Android shell, wait for the response, then write another, and so on... But whenever I call read() or readlines() on the running process, the call simply never returns.
If, however, I call communicate(), it works fine and returns the expected result. The problem with communicate() is that it ends the process.
I have looked at several questions here on Stack Overflow, but the answer always seems to be to wait for the process to terminate (by using either communicate() or subprocess.run()). Am I missing something here? Am I just not supposed to interact with a running process?
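read() and readlines() block until EOF, which only arrives once the shell exits, so one way to keep the session alive is to read line by line and mark the end of each command's output with a sentinel. A minimal sketch of that idea; the sentinel string '__DONE__' and the helper run_cmd are arbitrary names, and it assumes the device shell supports echo:

import subprocess as sp

adb = 'path-to-adb.exe'
p = sp.Popen([adb, 'shell'], stdin=sp.PIPE, stdout=sp.PIPE, stderr=sp.STDOUT)

def run_cmd(cmd, sentinel='__DONE__'):
    # Send the command, then echo a sentinel marking the end of its output.
    p.stdin.write((cmd + '; echo ' + sentinel + '\n').encode('utf-8'))
    p.stdin.flush()
    lines = []
    for raw in iter(p.stdout.readline, b''):  # readline returns b'' only at EOF
        line = raw.decode('utf-8', errors='replace').strip()
        if line == sentinel:
            break
        lines.append(line)
    return lines

print(run_cmd('ls'))
print(run_cmd('pwd'))      # the shell is still running
p.communicate(b'exit\n')   # now shut it down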
I am working on a Python script for automated smoke and unit testing on mobile devices. I use ios-deploy for the iOS solution. Because I try to kill the LLDB session before I terminate the test process, I use a pipe for communication between the processes. Here is a piece of code:
Pipe declaration:
from multiprocessing import Pipe

_pipe_cmd_rcv, _pipe_cmd_snd = Pipe()
# Pipe end used to receive commands
self._pipe_cmd_rcv = _pipe_cmd_rcv
# Pipe end used to send commands
self._pipe_cmd_snd = _pipe_cmd_snd
The part where I send the exit command to LLDB, followed by a Y to confirm the exit:
self._pipe_cmd_snd.send("exit \n")
self._pipe_cmd_snd.send("Y \n")
And finally the part where I want to receive the input:
pcs = subprocess.Popen(cmd.split(), stdin=self._pipe_cmd_rcv, stdout=subprocess.PIPE, universal_newlines=True)
My intention is to send the exit command to the stdin of the process running LLDB, but unfortunately, after the whole test process has finished, I can't use my Terminal anymore. If I type CTRL + C it returns the prompt, and when I hit Enter, it pastes the prompt as input, as if it were stuck in a loop. I have to open a new Terminal window to use it "the normal way". This is not desirable, because the script will be used on a CI system. Can anyone figure out what I am doing wrong?
You have spawned the process with its input connected to self._pipe_cmd_rcv, but you are trying to send the input to that process via self._pipe_cmd_snd, so it is blocking there waiting for input. The following might help:
self._pipe_cmd_rcv.send("exit \n")
self._pipe_cmd_rcv.send("Y \n")
I'm running a tool via Python in cmd. For each sample in a given directory I want that tool to do something. However, when I use process = subprocess.Popen(command) in the loop, the command does not wait until it is finished, resulting in 10 prompts at once. And when I use subprocess.Popen(command, stdout=subprocess.PIPE), the command window remains black and I can't see the progress, although it does wait until the command is finished.
Does anyone know a way to call an external tool via Python in cmd that waits until the command is finished and is able to show the tool's progress in the cmd window?
#main.py
import os
import time
import DNA_Bowtie2

for sample in os.listdir(os.getcwd()):
    if ".fastq" in sample and '_R1_' in sample and "Temp" not in sample:
        print(time.strftime("%H:%M:%S"))
        DNA_Bowtie2.DNA_Bowtie2(os.getcwd()+'\\'+sample+'\\'+sample)
#DNA_Bowtie2.py
import subprocess

# Run the Bowtie2 command and wait for the process to finish.
process = subprocess.Popen(command, stdout=subprocess.PIPE)
process.wait()
process.stdout.read()
Edit: command is a Perl or Java command. With the above setup I cannot see the tool's output, since the prompt (the Perl or Java window) remains black.
It seems like your subprocess forks; otherwise there is no way the wait() would return before the process has finished.
The order is important here: first read the output, then wait.
If you do it this way:
process.wait()
process.stdout.read()
you can experience a deadlock if the pipe buffer fills up completely: the subprocess blocks writing to its stdout and never finishes, while your program blocks on wait() and never reaches the read().
Do instead
process.stdout.read()
process.wait()
which will read until EOF.
This holds if you want the stdout of the process at all.
If you don't, omit the stdout=PIPE argument; the output is then directed into that prompt window, and you can omit process.stdout.read() as well.
Normally, process.wait() should then prevent 10 instances from running at once. If that doesn't work, I don't know why not... A minimal sketch of both variants follows.
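Both variants side by side, assuming command is the Bowtie2 invocation from the question:

import subprocess

# Variant 1: capture the output; read to EOF first, then wait.
process = subprocess.Popen(command, stdout=subprocess.PIPE)
output = process.stdout.read()  # drains the pipe so it cannot fill up
process.wait()                  # reaps the exit status after EOF

# Variant 2: let the tool write straight to the console so progress is visible.
process = subprocess.Popen(command)  # no stdout=PIPE
process.wait()                       # returns only when the tool exits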