I am trying to use pexpect to ssh into a computer, but I do not want to return to the original computer afterwards. The code I have is:
#!/usr/bin/python2.6
import pexpect, os

def ssh():
    # Logs into the remote machine over SSH
    ssh_newkey = 'Are you sure you want to continue connecting'
    # my ssh command line
    p = pexpect.spawn('ssh build@10.51.11.10')
    i = p.expect([ssh_newkey, 'password:', pexpect.EOF])
    if i == 0:
        # first connection: accept the host key, then wait for the password prompt
        p.sendline('yes')
        p.expect('password:')
    p.sendline("password")
    p.expect('-bash-3.2')
    print os.getcwd()
ssh()
This lets me ssh into the computer, but when I call os.getcwd(), pexpect reports the directory on the original computer. I want to ssh into another machine and use its environment, not drag my local environment along via pexpect. Can anyone suggest how to get this working, or an alternative approach?
Thanks
The process that launches ssh is never going to leave the computer it runs on. When you ssh into another computer, you start a new process there. That process is an entirely separate thing, a separate program to run. If you want to do anything on the remote machine, you have to either send the commands to execute over the connection, or copy over the program you want to run and execute it remotely.
Your handle to the other machine is p. Use p.sendline to run what you want on the other machine and p.expect to read the result. For the case outlined:
p.sendline("pwd && hostname")
p.expect("-bash-3.2")  # it's better to set a unique prompt yourself so this ports to any machine
response = p.before
print "received response [[" + response + "]]"
Try that. Also look at the pxssh module for driving ssh from Python. It is built on pexpect and has all the methods needed to do exactly what you want here.
I am using the Python paramiko module to run the built-in paramiko function SSH.execute on a remote server. I want to run a script on the server which will issue 4 prompts. I was planning to do a more complex version of this:
ExpectedString = 'ExpectedOutput'
Output = SSH.execute('./runScript')
if Output == ExpectedString:
    SSH.execute('Enter this')
else:
    raise SomeException
The problem is that nothing comes back for Output, because the server is waiting for a number to be entered and the script gets stuck at this SSH.execute call. So even if another SSH.execute command is issued after it, it never runs! Should I be looking at something other than paramiko?
You need to interact with the remote script. Actually, SSH.execute doesn't exist; I assume you mean exec_command. Instead of just returning the output, it gives you wrappers for the stdin, stdout and stderr streams, which you can use directly to communicate with the remote script.
Basically, this is how you run a command and pass data over stdin (and receive output using stdout):
import paramiko

ssh = paramiko.SSHClient()
ssh.set_missing_host_key_policy(paramiko.AutoAddPolicy())
ssh.connect('127.0.0.1', username='foo', password='bar')
stdin, stdout, stderr = ssh.exec_command("some_script")
stdin.write('expected_input\n')
stdin.flush()
data = stdout.read().splitlines()  # note: read() blocks until the command exits
You should check for the prompts, of course, instead of relying on good timing.
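The prompt-driven pattern above can be sketched locally with only the standard library; here a child Python process stands in for the remote script (the prompt text and "42" input are illustrative, not from the original question): read the output stream until the prompt marker appears, then write the answer.

```python
import subprocess
import sys

# Stand-in for the remote script: prompts once, then echoes the answer.
child_src = (
    "value = input('Enter a number: ')\n"
    "print('got', value)\n"
)

def read_until(stream, marker):
    """Read bytes one at a time until the buffer ends with `marker`."""
    buf = b""
    while not buf.endswith(marker):
        ch = stream.read(1)
        if not ch:  # child exited before the marker appeared
            break
        buf += ch
    return buf

# -u keeps the child's stdout unbuffered so the prompt arrives immediately.
proc = subprocess.Popen(
    [sys.executable, "-u", "-c", child_src],
    stdin=subprocess.PIPE, stdout=subprocess.PIPE,
)
prompt = read_until(proc.stdout, b"number: ")  # wait for the prompt, not a timer
proc.stdin.write(b"42\n")
proc.stdin.flush()
reply = proc.stdout.readline()
proc.wait()
```

With paramiko the same loop would read from the `stdout` wrapper returned by exec_command instead of a pipe, but the idea is identical: synchronize on the prompt text rather than on timing.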
@leoluk - yep, I understand your problem; both of those recommended solutions won't work. The problem with exec_command, as you said, is that you can only read the output once the command completes. So if you wanted to remotely run the command rm -i *, you wouldn't be able to read which file is about to be deleted before responding with a "yes" or a "no". The key here is to use invoke_shell. See this YouTube link - https://www.youtube.com/watch?v=lLKdxIu3-A4 - it helped and got me going.
I've been cracking my head over this but nothing comes to my mind yet.
I want my script to execute a .py file inside another, already-running process. I have a Maya process open, and from inside another application (modo, for example) I want to run the file hello.py (print 'hello!') inside that exact Maya instance.
I already have the PID of that Maya process, but I don't know how to actually send it a command to execute.
Is there some attribute/flag in the subprocess or signal modules I could be missing? Or is it done another way entirely?
import os

openedMaya = []
r = os.popen('tasklist /v').read().strip().split('\n')
for line in r:
    if 'maya.exe' in line and ': untitled' in line:
        openedMaya.append(line)
# parse the PID out of the first matching tasklist line
mayaPID = openedMaya[0].split('maya.exe')[1].split('Console')[0].strip()
I need a command that can execute hello.py in that Maya process.
You could use RPyC to act as a bridge so that you can communicate from one software to another. The idea is that you use RPyC to run an idle server in Maya, where the PYTHONPATH is also pointing to your hello.py script. This server stays active in the session, but the user shouldn't notice it exists.
Then in your other software you use RPyC to broadcast a message using the same port as the server so that it triggers it in Maya. This would then run your command.
It's slightly more overhead, but I have been able to use this successfully for stand-alone tools to trigger events in Maya. As far as using subprocess, you can use it to run a command in a new Maya session, but I don't think there's a way to use it for an existing one.
Hope that nudges you in the right direction.
Maybe an easier way would be to transfer your mesh by using an intermediate file. One process creates the file, another process (running inside the host app) reads it in.
Thanks for the advice. In the end I found a solution: open Maya's command port by running a MEL command at startup:
commandPort -n ":<some_port>";
and connect from modo to that port through a socket:
import socket

HOST = '127.0.0.1'
PORT = <some_port>
ADDR = (HOST, PORT)

client = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
client.connect(ADDR)
client.send(<message_that_you_want_to_send>)  # must be bytes on Python 3
data = client.recv(1024)
client.close()
and I'm able to do whatever I want inside that open Maya, as long as I send MEL commands.
Thanks for the help though!
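To see the socket round-trip from the snippet above in isolation, here is a minimal stdlib-only stand-in: a threaded echo server plays the role of Maya's command port (the port selection and echo behavior are stand-ins; real Maya would execute the MEL it receives rather than echo it).

```python
import socket
import threading

def serve(server_sock):
    """Accept one connection and echo back whatever arrives (stand-in for Maya)."""
    conn, _ = server_sock.accept()
    payload = conn.recv(1024)
    conn.sendall(payload)
    conn.close()

# Server side: bind to port 0 so the OS picks a free port.
server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))
server.listen(1)
port = server.getsockname()[1]
threading.Thread(target=serve, args=(server,), daemon=True).start()

# Client side: same shape as the modo -> Maya snippet above.
client = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
client.connect(("127.0.0.1", port))
client.send(b'print("hello!");')  # a MEL command would go here
data = client.recv(1024)
client.close()
```

Against a real Maya command port you would replace the echo server with `commandPort -n ":<some_port>";` running inside Maya, as in the answer above.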
I just bought a server and was wondering if there was a way to run the code remotely but store/display the results locally. For example, I write some code to display a graph, the positions on the graph are computed by the (remote) server, but the graph is displayed on the (local) tablet.
I would like to do this because the tablet I carry around with me on a day-to-day basis is very slow for computational physics simulations. I understand that I can setup some kind of communications protocol that allows the server to compute things and then sends the computations to my tablet for a script on my tablet to handle the data. However, I would like to avoid writing a possibly new set of communications scripts (to handle different formats of data) every single time I run a new simulation.
This is a complete "The Russians used a pencil" solution, but have you considered running a VNC server on the machine that is doing the computations?
You could install a VNC client onto your tablet/phone/PC and view it that way, there are tons of them available. No need to go about creating anything from scratch.
With ssh, you can do this with a python script or a shell script.
ssh machine_name "python" < ~/script/path/script.py
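This works because the local script travels over stdin to the remote interpreter. The same pattern can be tried locally by dropping the ssh hop and feeding a plain interpreter directly (the file name here is illustrative):

```shell
# `ssh host python < script.py` sends the program over stdin;
# locally, the interpreter reads the program from stdin the same way.
printf 'print("ran from stdin")\n' > demo_script.py
python3 < demo_script.py
rm demo_script.py
```
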
As the OP indicated in the comments that he wants to interact with the script on the remote machine, I have made some changes here.
First, copy the Python or shell script to the remote machine. This can be done in several ways, for example with scp, but also with ssh, like here:
ssh machine_name bash -c "cat > /tmp/script.py" < ~/script/path/script.py
Then interact with the script on the remote machine:
ssh machine_name python -u /tmp/script.py
You should be able to interact with your script running in the remote machine now!
Notice the use of -u to set stdin/stdout of python in unbuffered mode. This is needed to be able to interact with the script.
-u   Force stdin, stdout and stderr to be totally unbuffered. On systems where it matters, also put stdin, stdout and stderr in binary mode. Note that there is internal buffering in xreadlines(), readlines() and file-object iterators ("for line in sys.stdin") which is not influenced by this option. To work around this, you will want to use "sys.stdin.readline()" inside a "while 1:" loop.
Here is an example.
The code, which was copied to the server:
#!/usr/bin/env python3
while True:
    value = input("Please enter the value: ")
    if value != "bye":
        print("Input received from the user is: ", value)
    else:
        print("Good bye!!")
        break
Interactive session:
$ ssh machine_name python -u python/pyecho.py
Please enter the value: 123
Input received from the user is: 123
Please enter the value: bye
Good bye!!
REF:
https://unix.stackexchange.com/questions/87405/how-can-i-execute-local-script-on-remote-machine-and-include-arguments
I am writing a script to automate a process on a remote server. The basics would look something like:
import pexpect

logonPrompt = '[$#] '
test = pexpect.spawn('ssh user@server')
test.expect(logonPrompt)
test.sendline('/etc/init.d/service restart')
test.expect(logonPrompt)
Now after the service restarts, I want to spawn a new command to drop me into a 'less' output of the service's log. Simply running test.sendline('less /logs/service/logfile') doesn't work properly.
I've done similar things using the subprocess module, simply calling subprocess.call(['less', '/logs/service/logfile']), which puts the console into the less process and then continues when I exit that process.
Is this possible to do with pexpect? Is there a way to combine the power of the two? I need pexpect because I have to do some wizardry before restarting the service, so I can't simply do a subprocess call to ssh and run the commands.