I am writing a script to automate a process on a remote server. The basics would look something like:
import pexpect
logonPrompt = '[$#] '
test = pexpect.spawn('ssh user@server')
test.expect(logonPrompt)
test.sendline('/etc/init.d/service restart')
test.expect(logonPrompt)
Now, after the service restarts, I want to send a new command that drops me into a less view of the service's log. Simply running test.sendline('less /logs/service/logfile') doesn't work properly.
I've done something similar using the subprocess module by simply calling subprocess.call(['less', '/logs/service/logfile']), which puts the console into the less process and then continues when I exit that process.
Is this possible to do with pexpect? Is there a way to combine the power of the two? I need pexpect because I have to do some wizardry before restarting the service, so I can't simply do a subprocess call to ssh and run the commands.
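One possibility, sketched below and untested, is pexpect's documented interact() method, which wires the child process directly to your terminal until you press the escape character:
import pexpect

logonPrompt = '[$#] '
test = pexpect.spawn('ssh user@server')
test.expect(logonPrompt)
test.sendline('less /logs/service/logfile')
# interact() connects your keyboard and screen to the child: you page
# through the log inside less as usual. After quitting less, press
# Ctrl-] (the default escape character) to give control back here.
test.interact()
test.sendline('')          # nudge the shell for a fresh prompt
test.expect(logonPrompt)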
Everything I've found so far is good ONLY IF it's not being used in a web environment.
I have a simple Python 'admin tool' web app running in Apache, and I'm using this simple configuration in the vhost:
AddHandler cgi-script .py
There's no need for high performance and there's no Python framework. Users select certain criteria, which are sent via a GET request to the server; a Python script is then called by Apache and the report is compiled. The problem is that the report's complexity is growing, and compiling it can now take more than 20 minutes, so I want to spawn a totally independent process to generate the report asynchronously, allowing the server to return quickly to the client.
My issue is that I've tried absolutely everything I can find to be able to do this, and everything fails in the same way; when I spawn a new process the parent script will not return to the client until the new process has completed.
All of the following do exactly the same thing: they start a new process, but the main script waits for the child process to finish before returning to the client. I want the main script to return straight back to the client and leave the newly spawned process running to compile the report.
os.system("""
cd /the_directory
python -m web.wait &
""")
command = ['python', '/the_directory/wait.py &']
Popen(command, shell=True, start_new_session=True)
L = ['python', '/the_directory/wait.py']
os.spawnvp(os.P_NOWAIT, 'python', L)
L = ['test.sh']
os.spawnvpe(os.P_NOWAIT, '/the_directory/test.sh', L, os.environ)
where test.sh contains:
#!/bin/sh
/the_directory/wait.py &
jobs = []
for i in range(5):
    p = multiprocessing.Process(target=wait.dostuff)
    jobs.append(p)
    p.start()
I want the simplest possible solution to this - it's an admin tool maintained only by me for a very few users.
If there's really no way to accomplish this then I guess I will have to use cron running once a minute and use a simple queue, maybe with redis or something, but this is not ideal.
I used to do this kind of thing relatively easily in PHP; I can't believe it's not possible in Python.
You could use Popen with shell=False, for example:
Popen(["bash", "test.sh"], shell=False)
I have a Python script running, and from it I call a subprocess to run a game server. The thing is, I need the script to continue while the server is running, but as soon as the subprocess starts and the server is running, the script is paused until I shut down the game server.
How can I allow the script to continue after the server has been initialized?
command = f'f:&cd {server_path}&{exec_prop}'
process = subprocess.Popen(command, stdout=subprocess.PIPE, shell=True)
process.communicate()
communicate blocks until the process terminates. Since this is not what you want, remove that line; it does not seem like you are using it to actually communicate in your example anyway.
On a side note, if you can avoid shell=True, that is usually considered better practice. The stdout redirection seems unnecessary as well, since you are not communicating with the process.
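A minimal sketch of the non-blocking version; server_path and exec_prop are placeholders standing in for the question's values, and cwd= takes the place of the cd so shell=True isn't needed:
import subprocess

server_path = r'f:\game_server'  # placeholder for the question's path
exec_prop = 'server.exe'         # placeholder for the server executable

# Popen returns as soon as the server is launched; the script keeps going.
process = subprocess.Popen([exec_prop], cwd=server_path)

# ... rest of the script runs while the server does ...

process.terminate()  # or process.wait() to block at the very end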
Hello minds of stackoverflow,
I've run into a perplexing bug. I have a Python script that creates a new thread which ssh's into a remote machine and starts a process. However, this process does not return on its own (and I want it to keep running throughout the duration of my script). In order to force the thread to return, at the end of my script I ssh into the machine again and kill -9 the process. This is working well, except for the fact that it breaks the terminal.
To start the thread I run the following code:
t = threading.Thread(target=run_vUE_rfal, args=(vAP.IP, vUE.IP))
t.start()
The function run_vUE_rfal is as follows:
cmd = "sudo ssh -ti ~/.ssh/my_key.pem user#%s 'sudo /opt/company_name/rfal/bin/vUE-rfal -l 3 -m -d %s -i %s'" % (vUE_IP, vAP_IP, vUE_IP)
output = commands.getstatusoutput(cmd)
return
It seems that when the command is run, it somehow breaks my terminal. It is broken in that instead of starting a new line for each print, it pads each line with whitespace to the width of my terminal and prints everything as seemingly one long string. Also, I am unable to see my keyboard input in that terminal, but it is still read successfully. My terminal looks something like this:
normal formatted output
normal formatted output
running vUE-rfal
print1
print2
print3_extra_long
print4
If I replace the body of the run_vUE_rfal function with some simple prints, the terminal does not break. I have many other ssh's and telnets in this script that work fine. However, this is the only one I'm running in a separate thread as it is the only one that does not return. I need to maintain the ability to close the process of the remote machine when my script is finished.
Any explanations to the cause and idea for a fix are much appreciated.
Thanks in advance.
It seems the process you control is changing terminal settings. These changes bypass stderr and stdout, for good reasons: ssh itself, for example, needs this to ask the user for a password even when its output is being redirected.
A way to solve this could be to use the Python module pexpect (a 3rd-party library) to launch your process, as it will create its own pseudo-terminal (pty) that you don't care about, so the settings changes never reach your real terminal.
BTW, to "repair" your terminal, use the reset command. As you already noticed, input is still read even though it isn't echoed, so typing reset and pressing Enter will restore the default terminal settings.
I am working on a Python script to launch a server, perhaps in the background or in a different process, then do some further processing before killing the launched server once that processing is over.
For example:
server_cmd = 'launch_server.exe -source ' + inputfile
print server_cmd
cmd_pid = subprocess.Popen(server_cmd).pid
...
...
... #Continue doing some processing
cmd_pid.terminate() # Once the processing is done, terminate the server
Somehow the script does not continue after launching the server, as the server may be running in an infinite loop listening for requests. Is there a good way to send this process to the background so that it doesn't wait for command-line input?
I am using Python 2.7.8
It's odd that your script does not continue after launching the server command: in the subprocess module, Popen starts a child process while the parent process (your script) moves on.
However, there is a bug in your code: cmd_pid is an int and does not have a terminate method. You should keep the subprocess.Popen object and call terminate on that.
Making a small change resolved the problem
server_proc = subprocess.Popen(server_cmd, stdout=subprocess.PIPE)
# ... continue doing some processing ...
server_proc.terminate()  # terminate() is a method of the Popen object, not the pid
Thanks, Xu, for the correction on terminate.
I am trying to use pexpect to ssh into a computer but I do not want to return back to the original computer. The code I have is:
#!/usr/bin/python2.6
import pexpect, os

def ssh():
    # Logs into the computer through SSH
    ssh_newkey = 'Are you sure you want to continue connecting'
    # my ssh command line
    p = pexpect.spawn('ssh build@10.51.11.10')
    i = p.expect([ssh_newkey, 'password:', pexpect.EOF])
    p.sendline("password")
    i = p.expect('-bash-3.2')
    print os.getcwd()

ssh()
This allows me to ssh into the computer, but when I run os.getcwd() I am back on the original computer. I want to ssh into another computer and use its environment, not drag my own environment along with pexpect. Can anyone suggest how to get this working, or an alternative approach?
Thanks
The process that launches ssh is never going to leave the computer it runs on. When you ssh into another computer, you start a new process there. That process is an entirely separate thing, a separate program to run. If you want to do anything on the remote machine, you have to either send the commands to execute over the connection, or copy over the program you want to run and execute it remotely.
Your handle to the other machine is p. Use p.sendline to run what you want on the other machine and p.expect to collect the result. In the case outlined:
p.sendline("pwd && hostname")
p.expect("-bash-3.2") # although its better to set the prompt yourself so that this can be ported to any machine
response = p.before
print "received response [[" + response + "]]"
Try that. Also look at the pxssh module for using ssh from Python; it is built on pexpect and has methods to do exactly what you want here.
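For instance, a sketch with pxssh, reusing the host and credentials from the question (in current pexpect releases it is imported as from pexpect import pxssh):
import pxssh  # in newer pexpect: from pexpect import pxssh

s = pxssh.pxssh()
s.login('10.51.11.10', 'build', 'password')  # pxssh sets a unique prompt for you
s.sendline('pwd')   # runs on the remote machine
s.prompt()          # wait for the remote prompt
print s.before      # output of pwd as produced on the remote host
s.logout()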