I am creating a movie controller (Pause/Stop...) using python where I ssh into a remote computer, and issue commands into a named pipe like so
echo -n q > ~/pipes/pipename
I know this works if I ssh via the terminal and do it myself, so there is no problem with the setup of the named pipe redirection. My problem is that setting up an ssh session takes time (1-3 seconds), whereas I want the pause command to be instantaneous. Therefore, I thought of setting up a persistent pipe like so:
controller = subprocess.Popen ( "ssh -T -x <hostname>", shell = True, close_fds = True, stdin=subprocess.PIPE, stderr=subprocess.PIPE, stdout=subprocess.PIPE )
Then issue commands to it like so
controller.stdin.write ( 'echo -n q > ~/pipes/pipename' )
I think the problem is that ssh is interactive so it expects a carriage return. This is where my problems begin, as nearly everyone who has asked this question has been told to use an existing module:
Vivek's answer
Chakib's Answer
shx2's Answer
Crafty Thumber's Answer
Artyom's Answer
Jon W's Answer
Which is fine, but I am so close. I just need to know how to include the carriage return, otherwise, I have to go learn all these other modules, which mind you is not trivial (for example, right now I can't figure out how pexpect uses either my /etc/hosts file or my ssh keyless authentications).
To add a newline to the command, you will need to add a newline to the string:
controller.stdin.write('\n')
You may also need to flush the pipe:
controller.stdin.flush()
And of course the controller has to be ready to receive new data, or you could block forever trying to send it data. (And if the reason it's not ready is that it's blocking forever waiting for you to read from its stdout, which is possible on some platforms, you're deadlocked unrecoverably.)
I'm not sure why it's not working the way you have it set up, but I'll take a stab at this. I think what I would do is change the Popen call to:
controller = subprocess.Popen("ssh -T -x <hostname> \"sh -c 'cat > ~/pipes/pipename'\"", ...
And then simply controller.stdin.write('q').
Related
I am trying to create a program to easily handle IT requests, and I have created a program to test if a PC on my network is active from a list.
To do this, I wrote the following code:
self.btn_Ping.clicked.connect(self.ping)
def ping(self):
    hostname = self.listWidget.currentItem().text()
    if hostname:
        os.system("ping " + hostname + " -t")
When I run it my main program freezes and I can't do anything until I close the ping command window. What can I do about this? Is there any other command I can use to try to ping a machine without making my main program freeze?
The docs state that os.system() returns the value returned by the command you called, therefore blocking your program until it exits.
They also state that you should use the subprocess module instead.
From ping documentation:
ping /?
Options:
-t Ping the specified host until stopped.
To see statistics and continue - type Control-Break;
To stop - type Control-C.
So, by using -t you are waiting until that machine stops, and if that machine never stops, your Python script will run forever.
As mentioned by HyperTrashPanda, use another parameter for launching ping, so that it stops after one or some attempts.
As mentioned in Tim Pietzcker's answer, the use of subprocess is highly recommended over os.system (and others).
To separate the new process from your script, use subprocess.Popen. You should get the output printed normally into sys.stdout. If you want something more complex (e.g. for only printing something if something changes), you can set the stdout (and stderr and stdin) arguments:
Valid values are PIPE, DEVNULL, an existing file descriptor (a positive integer), an existing file object, and None. PIPE indicates that a new pipe to the child should be created. DEVNULL indicates that the special file os.devnull will be used. With the default settings of None, no redirection will occur; the child’s file handles will be inherited from the parent.
-- docs on subprocess.Popen, if you scroll down
If you want to get the exit code, use myPopenProcess.poll().
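A sketch of the non-blocking launch, assuming a one-shot ping (the count flag differs between platforms: "-c" on POSIX, "-n" on Windows):

```python
import subprocess
import sys

def ping_argv(hostname, count=1):
    """Build a one-shot ping command line, instead of "-t",
    which never stops on its own."""
    flag = "-n" if sys.platform.startswith("win") else "-c"
    return ["ping", flag, str(count), hostname]

def start_ping(hostname):
    # Popen returns immediately, so a GUI event loop is never blocked.
    return subprocess.Popen(ping_argv(hostname),
                            stdout=subprocess.PIPE,
                            stderr=subprocess.STDOUT,
                            universal_newlines=True)

# proc = start_ping(hostname)
# ... later, e.g. from a timer: proc.poll() is None while still running;
# once it is not None, proc.returncode == 0 means the host answered.
```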
I created a class method (this will only run on Linux) that sends a list of commands to a remote computer over SSH and returns the output using subprocess.Popen:
def remoteConnection(self, list_of_remote_commands):
    ssh = subprocess.Popen(["ssh", self.ssh_connection_string], stdin=subprocess.PIPE, stdout=subprocess.PIPE, stderr=subprocess.PIPE, universal_newlines=True, bufsize=0)
    # send ssh commands to stdin
    for command in list_of_remote_commands:
        ssh.stdin.write(command + "\n")
    ssh.stdin.close()
    output_dict = {'stdin': list(ssh.stdin), 'stdout': list(ssh.stdout), 'stderr': list(ssh.stderr)}
    return output_dict
Whilst I'm still getting to grips with the subprocess module I'd read quite a bit about Popen and no one ever mentioned closing it (SSH Connection with Python 3.0, Proper way to close all files after subprocess Popen and communicate, https://docs.python.org/2/library/subprocess.html) so I assumed that that wasn't a problem.
However, when testing this out in ipython outside of a function, I noticed that the variable ssh still seemed active. I tried closing ssh.stdin, ssh.stdout and ssh.stderr, and even ssh.close(), ssh.terminate() and ssh.kill(), but nothing seemed to close it. I thought perhaps it didn't matter, but my function will be called many times for months or even years, so I don't want it to spawn a new process every time it is run; otherwise I'm going to quickly use up my maximum process limit. So I used ssh.pid to find the PID, looked it up with ps aux | grep PID, and it's still there even after doing all of the above.
I also tried:
with subprocess.Popen(["ssh", self.ssh_connection_string], stdin=subprocess.PIPE, stdout=subprocess.PIPE, stderr=subprocess.PIPE,universal_newlines=True, bufsize=0) as shh:
instead of:
ssh = subprocess.Popen(["ssh", self.ssh_connection_string], stdin=subprocess.PIPE, stdout=subprocess.PIPE, stderr=subprocess.PIPE,universal_newlines=True, bufsize=0)
I also remember solving a similar problem a while back using ssh -T but even:
ssh = subprocess.Popen(["ssh", "-T", self.ssh_connection_string], stdin=subprocess.PIPE, stdout=subprocess.PIPE, stderr=subprocess.PIPE,universal_newlines=True, bufsize=0)
didn't work either.
I'm sure I would have found something about closing Popen if I needed to but then why is the process still open on my computer - can anyone help me understand what's going on here?
In your case, you have a deadlock here:
output_dict = {'stdin': list(ssh.stdin), 'stdout': list(ssh.stdout), 'stderr': list(ssh.stderr)}
Mostly because list(ssh.stdin) blocks forever: reading the standard input of a process doesn't work (there's also an extra risk of deadlock, because you redirected both standard output and standard error to different pipes without using threads to consume them).
What you want is ssh.communicate, passing the whole input as an argument. Simply do:
command_input = "".join(["{}\n".format(x) for x in list_of_remote_commands])
output,error = ssh.communicate(command_input) # may need .encode() for python 3
return_code = ssh.wait()
then
output_dict = {'stdin': list_of_commands, 'stdout': output.splitlines(), 'stderr': error.splitlines()}
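Putting the fix together, here is a sketch of the whole method. The helper takes any argv so it isn't tied to ssh; for the question's case argv would be ["ssh", "-T", self.ssh_connection_string]:

```python
import subprocess

def run_commands(argv, commands):
    """Feed newline-terminated commands to a child process and wait for it.

    communicate() closes stdin, drains both output pipes, and reaps the
    process, so nothing is left behind and nothing deadlocks.
    """
    proc = subprocess.Popen(argv,
                            stdin=subprocess.PIPE,
                            stdout=subprocess.PIPE,
                            stderr=subprocess.PIPE,
                            universal_newlines=True)
    command_input = "".join("{}\n".format(c) for c in commands)
    output, error = proc.communicate(command_input)
    return {"stdin": commands,
            "stdout": output.splitlines(),
            "stderr": error.splitlines(),
            "returncode": proc.returncode}
```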
I may add that in this particular ssh case, using the paramiko module is better (python paramiko ssh) and avoids using subprocess completely.
Normally you can automate answers to an interactive prompt by piping stdin:
import subprocess as sp
cmd = 'rpmbuild --sign --buildroot {}/BUILDROOT -bb {}'.format(TMPDIR, specfile)
p = sp.Popen(cmd, stdout=sp.PIPE, stderr=sp.PIPE, stdin=sp.PIPE, universal_newlines=True, shell=True)
for out in p.communicate(input='my gpg passphrase\n'):
print(out)
For whatever reason, this is not working for me. I've tried writing to p.stdin before executing p.communicate(), I've tried flushing the buffer, I've tried using bytes without universal_newlines=True, I've hard-coded things, etc. In all scenarios, the command is executed and hangs on:
Enter pass phrase:
My first hunch was that stdin was not the correct file descriptor and that rpmbuild was internally calling a gpg command, so maybe my input isn't being piped. But when I do p.stdin.close() I get an OSError about subprocess trying to write to the closed descriptor.
What is the rpmbuild command doing to stdin that prevents me from writing to it?
Is there a hack I can do? I tried echo "my passphrase" | rpmbuild .... as the command but that doesn't work.
I know I can do something with gpg like command and sign packages without a passphrase but I kind of want to avoid that.
EDIT:
After some more reading, I realize this issue is common to commands that require password input, which typically use some form of getpass.
I see that a solution would be to use a library like pexpect, but I want something from the standard library. I am going to keep looking, but I think maybe I can try writing to something like /dev/tty.
rpm uses getpass(3) which reopens /dev/tty.
There are 2 approaches to automating:
1) create a pseudotty
2) (linux) find the reopened file descriptor in /proc
If scripting, expect(1) has (or had) a short example with pseudo-ttys that can be used.
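A minimal sketch of approach 1 using Python's standard pty module; the rpmbuild command line and passphrase below are the question's placeholders, and the prompt handling is deliberately simplified (one read for the prompt, one write for the answer):

```python
import os
import pty

def run_with_pty(argv, response):
    """Run argv under a pseudo-tty so that getpass(3)-style programs can
    reopen /dev/tty, then answer the first prompt with `response`."""
    pid, master_fd = pty.fork()
    if pid == 0:
        # Child: the pty slave is now the controlling terminal,
        # so /dev/tty refers to it.
        os.execvp(argv[0], argv)
    os.read(master_fd, 1024)                 # wait for the prompt
    os.write(master_fd, response.encode())
    collected = b""
    while True:
        try:
            chunk = os.read(master_fd, 1024)
        except OSError:                      # EIO once the child closes the pty
            break
        if not chunk:
            break
        collected += chunk
    os.waitpid(pid, 0)
    return collected.decode(errors="replace")

# e.g. run_with_pty(["rpmbuild", "--sign", "-bb", specfile],
#                   "my gpg passphrase\n")   # placeholders from the question
```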
I don't think I'm understanding python subprocess properly at all but here's a simple example to illustrate a point I'm confused about:
#!/usr/bin/env python
import subprocess
lookup_server = subprocess.Popen("nc -l 5050", shell=True)
lookup_client = subprocess.Popen("nc localhost 5050", shell=True, stdin=subprocess.PIPE)
print lookup_client.poll()
lookup_client.stdin.write("magic\n")
print lookup_client.poll()
lookup_client.send_signal(subprocess.signal.SIGINT)
print lookup_client.poll()
lookup_server.wait()
print "Lookup server terminated properly"
The output comes back as
None
None
None
and never completes. Why is this? Also, if I change the first argument of Popen to an array of all of those arguments, none of the nc calls execute properly and the script runs through without ever waiting. Why does that happen?
Ultimately, I'm running into a problem in a much larger program that does something similar using netcat and another program running locally instead of two versions of nc. Either way, I haven't been able to write to or read from them properly. However, when I run them in the python console everything runs as I would expect. All this has me very frustrated. Let me know if you have any insights!
EDIT: I'm running this on Ubuntu Linux 12.04, when I man nc, I get the BSD General Commands manual so I'm assuming this is BSD netcat.
The problem here is that you're sending SIGINT to the process. If you just close the stdin, nc will close its socket and quit, which is what you want.
It sounds like you're actually using nc for the client (although not the server) in your real program, which means you have two easy fixes:
Instead of lookup_client.send_signal(subprocess.signal.SIGINT), just do lookup_client.stdin.close(). nc will see this as an EOF on its input, and exit normally, at which point your server will also exit.
#!/usr/bin/env python
import subprocess
lookup_server = subprocess.Popen("nc -l 5050", shell=True)
lookup_client = subprocess.Popen("nc localhost 5050", shell=True, stdin=subprocess.PIPE)
print lookup_client.poll()
lookup_client.stdin.write("magic\n")
lookup_client.stdin.close()
print lookup_client.poll()
lookup_server.wait()
print "Lookup server terminated properly"
When I run this, the most common output is:
None
None
magic
Lookup server terminated properly
Occasionally the second None is a 0 instead, and/or it comes after magic instead of before, but otherwise, it's always all four lines. (I'm running on OS X.)
For this simple case (although maybe not your real case), just use communicate instead of trying to do it manually.
#!/usr/bin/env python
import subprocess
lookup_server = subprocess.Popen("nc -l 5050", shell=True)
lookup_client = subprocess.Popen("nc localhost 5050", shell=True, stdin=subprocess.PIPE)
print lookup_client.communicate("magic\n")
lookup_server.wait()
print "Lookup server terminated properly"
Meanwhile:
Also, if I change the first argument of Popen to an array of all of those arguments, none of the nc calls execute properly and the script runs through without ever waiting. Why does that happen?
As the docs say:
On Unix with shell=True… If args is a sequence, the first item specifies the command string, and any additional items will be treated as additional arguments to the shell itself.
So, subprocess.Popen(["nc", "-l", "5050"], shell=True) does /bin/sh -c 'nc' -l 5050, and sh doesn't know what to do with those arguments.
You probably do want to use an array of args, but then you have to get rid of shell=True—which is a good idea anyway, because the shell isn't helping you here.
One more thing:
lookup_client.send_signal(subprocess.signal.SIGINT)
print lookup_client.poll()
This may print either -2 or None, depending on whether the client has finished responding to the SIGINT and been killed before you poll it. If you want to actually get that -2, you have to call wait rather than poll (or do something else, like loop until poll returns non-None).
Finally, why didn't your original code work? Well, sending SIGINT is asynchronous; there's no guarantee as to when it might take effect. For one example of what could go wrong, it could take effect before the client even opens the socket, in which case the server is still sitting around waiting for a client that never shows up.
You can throw in a time.sleep(5) before the signal call to test this—but obviously that's not a real fix, or even an acceptable hack; it's only useful for testing the problem. What you need to do is not kill the client until it's done everything you want it to do. For complex cases, you'll need to build some mechanism to do that (e.g., reading its stdout), while for simple cases, communicate is already everything you need (and there's no reason to kill the child in the first place).
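A tiny illustration of the poll-versus-wait point, using sleep as a stand-in child (nothing here is specific to nc):

```python
import signal
import subprocess

# poll() right after a signal may still report None, because signal
# delivery is asynchronous; wait() is what guarantees you observe the
# exit status.
p = subprocess.Popen(["sleep", "30"])
p.send_signal(signal.SIGINT)
maybe = p.poll()        # could be None or -2, depending on timing
ret = p.wait()          # blocks until the child is really gone
# ret is -signal.SIGINT, i.e. -2, because SIGINT killed the child
```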
Your invocation of nc is wrong. Here's what happens if I invoke it the way you do from the command line:
# Server window:
[vyktor#grepfruit ~]$ nc -l 5050
# Client Window
[vyktor#grepfruit ~]$ nc localhost 5050
[vyktor#grepfruit ~]$ echo $?
1
The 1 in $? means failure.
Once you use -p:
-p, --local-port=NUM local port number
nc starts listening, so:
# Server window
[vyktor#grepfruit ~]$ nc -l -p 5050
# Keeps hanging
# Client window
[vyktor#grepfruit ~]$ echo Hi | nc localhost 5050
# Keeps hanging
Once you add -c to client invocation:
-c, --close close connection on EOF from stdin
You'll end up with this:
# Client window
[vyktor#grepfruit ~]$ echo Hi | nc localhost 5050 -c
[vyktor#grepfruit ~]$
# Server window
[vyktor#grepfruit ~]$ nc -l -p 5050
Hi
[vyktor#grepfruit ~]$
So you need this piece of Python code:
#!/usr/bin/env python
import subprocess
lookup_server = subprocess.Popen("nc -l -p 5050", shell=True)
lookup_client = subprocess.Popen("nc -c localhost 5050", shell=True,
stdin=subprocess.PIPE)
lookup_client.stdin.write("magic\n")
lookup_client.stdin.close()                          # either this close
lookup_client.send_signal(subprocess.signal.SIGINT)  # or this kill
lookup_server.wait()
print "Lookup server terminated properly"
I'm a new paramiko user and am having difficulty running commands on a remote server with paramiko. I want to export a path and also run a program called tophat in the background. I can login fine with paramiko.sshclient() but my code to exec_command has no results.
stdin, stdout, sterr = ssh.exec_command('export PATH=$PATH:/proj/genome/programs/tophat-1.3.0/bin:/proj/genome/programs/cufflinks-1.0.3/bin:/proj/genome/programs/bowtie-0.12.7:/proj/genome/programs/samtools-0.1.16')
stdin, stdout, sterr = ssh.exec_command('nohup tophat -o /output/path/directory -I 10000 -p 8 --microexon-search -r 50 /proj/genome/programs/bowtie-0.12.7/indexes/ce9 /input/path/1 /input/path/2 &')
There is no nohup.out file, and Python just goes to the next line with no error messages. I have tried without nohup as well, and the result is the same. I was trying to follow this paramiko tutorial.
am I using exec_command incorrectly?
I also ran into the same issue and after looking at this article and this answer, I see the solution is to call the recv_exit_status() method of the Channel. Here is my code:
import paramiko
import time
cli = paramiko.client.SSHClient()
cli.set_missing_host_key_policy(paramiko.client.AutoAddPolicy())
cli.connect(hostname="10.66.171.100", username="mapping")
stdin_, stdout_, stderr_ = cli.exec_command("ls -l ~")
# time.sleep(2) # Previously, I had to sleep for some time.
stdout_.channel.recv_exit_status()
lines = stdout_.readlines()
for line in lines:
    print line
cli.close()
Now my code will be blocked until the remote command is finished. This method is explained here, and please pay some attention to the warning.
exec_command() is non-blocking: it just sends the command to the server, and then Python runs the following code.
I think you should wait until the command execution ends and do the rest of the work after that.
time.sleep(10) could help (which requires import time).
Some examples show that you can read from the stdout ChannelFile object, or simply use stdout.readlines(); it seems to read the whole response from the server, which could help.
The two exec_command calls in your code actually run in different exec sessions. I'm not sure whether this has an impact in your case.
I'd suggest you take a look at the demos in the demos folder, they're using Channel class, which has better API to do blocking / nonblocking sending for both shell and exec.
You'd better load the bash_profile before you run your command; otherwise you may get a 'command not found' exception.
For example, I write the command command = 'mysqldump -uu -pp -h1.1.1.1 -P999 table > table.sql' in order to dump a MySQL table.
Then I have to load the bash_profile manually before that dump command by prepending . ~/.profile; . ~/.bash_profile;.
Example
my_command = 'mysqldump -uu -pp -h1.1.1.1 -P999 table > table.sql;'
pre_command = """
. ~/.profile;
. ~/.bash_profile;
"""
command = pre_command + my_command
stdin, stdout, stderr = ssh.exec_command(command)