I am trying to automate a system where we have a Linux box, but a customized one: to get to a shell we have to pass some input first, like below:
tanuj$ ssh admin#10.10.10.10
Used by Tanuj
Password:
command line interface
app > en
app # config t
app (config) #
I have written a script using pexpect in Python. I am able to log in and execute a command:
child.sendline("ls -lrt")
child.expect("#")
but when I use child.before I get nothing when the command output is long, and I can also see that child.before contains the echoed command itself.
Any idea how to solve this? Or is there another Python module I can use to automate an SSH session like the one I have here?
I also tried paramiko, but it did not work because we have to execute some commands before we can reach the normal shell prompt.
I am also facing a similar problem and was about to ask the same question. One thing to check is the # sign in your
pexpect.expect('#')
Inside the string it is not a Python comment, but such a short pattern can match a # anywhere in the output, not just in the prompt. Further, you should spawn a child process and call the expect/sendline methods on it, like this (I don't know if I'm right in your situation):
child=pexpect.spawn('ssh admin#10.10.10.10')
child.expect('prompt_expected_by_you') # put in the prompt you expect after sending SSH
child.sendline('your_password')
child.expect('Prompt_expected_by_you')
child.sendline('ls -ltr')
child.expect('Prompt_expected_by_you')
print child.before, # the trailing comma suppresses the extra newline
print child.after
child.sendline('exit')
child.expect('Prompt_expected_by_you')
child.sendline('exit')
child.expect('Prompt_expected_by_you') # may not be required
child.close() # close the child session
I have successfully used these commands with FTP, but I am not able to print the result of 'ls -ltr' over SSH. I guess I'll have to initiate a shell, but I'm not sure. Any progress on your side?
Could someone help???
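On the original question: child.before usually starts with the command you just sent, because the pseudo-terminal echoes input back. A small helper that strips that echoed first line (the function name is mine, not from pexpect) is often enough:

```python
def strip_echoed_command(before):
    """pexpect's child.before holds everything received between the
    previous match and the current one -- including the command you
    just sent, echoed back on the first line.  Drop that line to keep
    only the command's actual output."""
    if isinstance(before, bytes):
        before = before.decode(errors='replace')
    lines = before.replace('\r\n', '\n').splitlines()
    return '\n'.join(lines[1:]).strip()

# e.g. after child.sendline('ls -lrt'); child.expect('app # '):
#     output = strip_echoed_command(child.before)
```

It also helps to expect a pattern as specific as possible ('app # ' rather than a bare '#'), so the match does not fire on a # that happens to appear inside the command output.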
I am building an email purge tool. The premise is that the .py needs to connect to the IPPSSession using PowerShell, like so:
sp.run(f"Connect-IPPSSession -UserPrincipalName {Email}", shell=True)
However, when I go running the commands later in the program, it does not recognize the commands.
From what I have read, it appears (subprocess) sp.run is connecting and promptly disconnecting.
For the commands later in the program to be recognized, I need to maintain a connection.
Is there a way to have the IPPSSession run the entire length of the program? I guess I could rewrite the whole program in PowerShell exclusively....
After some stimulants and quite a bit of thinking, I found a better way to format the query. Behold:
Email_Purge = f"Connect-IPPSSession -UserPrincipalName {Email} ; New-ComplianceSearchAction -SearchName {Search_Name!r} -purge -PurgeType {Purge_Type}"
if Purge_Type == "SoftDelete" or Purge_Type == "HardDelete":
    sp.run(Email_Purge, shell=True)
else:
    print("Please enter [SoftDelete] or [HardDelete]")
The session runs for the whole length of the variable, so all of the input happens first, and then it executes and closes the session cleanly.
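If you do need state to survive across several separate Python-side steps, another option is to keep one interpreter process alive and feed it commands through its stdin, instead of a fresh sp.run each time. A sketch using sh as a stand-in for PowerShell (with PowerShell you would spawn pwsh the same way):

```python
import subprocess

# One long-lived interpreter: everything written to its stdin runs in
# the same session, so state set by an earlier command is still there
# for later ones -- unlike repeated subprocess.run calls.
proc = subprocess.Popen(
    ["sh"],                      # stand-in; e.g. ["pwsh"] for PowerShell
    stdin=subprocess.PIPE,
    stdout=subprocess.PIPE,
    text=True,
)
out, _ = proc.communicate("x=hello\necho $x world\n")
print(out.strip())  # the second command still sees x
```

For a fully interactive back-and-forth you would write to proc.stdin and read proc.stdout incrementally rather than using communicate(), which sends everything and then closes the pipe.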
My existing shell script "installation.sh" needs user inputs as part of installation, and I am trying to handle it with Python pexpect.spawn(). However, after executing the script it always stays in the pexpect.spawn() process, because the script never ends and no user input is ever matched. Is there a better way to handle it? I appreciate any response.
child = pexpect.spawn ('/opt/scripts/installation.sh')
for line in child:
    print(line.decode().strip())
child.expect('\r\n')
print(child.expect('Do you need to configure DNS? y\|n \[n\] '))
print(line.decode().strip('\n'))
child.sendline('y\n')
child.expect('Input IP address of DNS server:')
print(line.decode().strip('\n'))
child.sendline('10.11.11.13\n')
child.expect('Do you need to configure more DNS servers?')
child.sendline('n')
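The hang comes from mixing `for line in child:` with child.expect(): both read from the same stream, so expect() never sees the prompts. Driving the dialogue with expect() alone, and waiting for pexpect.EOF at the end, avoids it. A self-contained sketch (a tiny bash one-liner stands in for /opt/scripts/installation.sh; the prompt texts are taken from the question):

```python
import pexpect

# Stand-in for installation.sh so the sketch runs anywhere; replace
# the spawn line with pexpect.spawn('/opt/scripts/installation.sh').
fake_installer = (
    'echo -n "Do you need to configure DNS? y|n [n] "; read a; '
    'echo -n "Input IP address of DNS server: "; read ip; '
    'echo "DNS set to $ip"'
)
child = pexpect.spawn('bash', ['-c', fake_installer], encoding='utf-8')
child.expect(r'Do you need to configure DNS\? y\|n \[n\] ')
child.sendline('y')                       # no trailing \n needed
child.expect('Input IP address of DNS server: ')
child.sendline('10.11.11.13')
child.expect(pexpect.EOF)                 # wait until the script ends
print(child.before.strip())               # output after the last match
```

Each expect() blocks until its prompt actually arrives, so there is no need to pre-read lines or sleep between answers.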
Question:
I am trying to execute a command which reads from a PostgreSQL db. I am able to manually switch to root, then switch to the postgres user and access the information I desire.
The problem I have is that when I run this, it just hangs and nothing happens.
I have the root password and will need it when switching from the current user, but I am not being prompted to enter it.
How can I get this not to hang, and have the password prompt appear?
The code below only executes 'ls' for simplicity.
Code:
def popen_cmd_shell(command):
    print command
    process = subprocess.Popen(command,
                               stdout=subprocess.PIPE,
                               stderr=subprocess.STDOUT,
                               shell=True)
    proc_stdout = process.communicate()[0].strip()
    return proc_stdout

if __name__ == '__main__':
    querylist = popen_cmd_shell('su - root; ls;')
    print querylist
Update One:
I am unable to use any library that does not ship with Python 2.7 on SUSE Linux. I just need to execute a command and exit.
Update Two:
I am unable to run the script as root as I need to perform other tasks which require me not to be root.
Update Three:
As per LeBarton's suggestions I have got the script to log in as root, although the ls command never gets executed as root: it gets executed as the user I originally was. When I run the command I get prompted for the root password and get transferred from "#host" to "host", where I cannot execute any command other than exit. When I exit, the output of all the executed commands appears.
I do not wish to store the user password in the code as LeBarton does. How can I execute a command as root, then return and continue the rest of the script, without getting locked into the new user's shell and needing to type 'exit'?
The "stderr=subprocess.STDOUT" seems to have been what was causing it to hang.
Code:
if __name__ == '__main__':
    def subprocess_cmd(command):
        process = subprocess.Popen(command, stdout=subprocess.PIPE, shell=True)
        proc_stdout = process.communicate()[0].strip()
        print proc_stdout
    subprocess_cmd('echo a; su - root; ls; cd; ls;')
    # ...continue with rest of script where I execute commands as original user
Answer:
Thanks to tripleee for his excellent answer.
I have achieved what I set out to do with the following code:
if __name__ == '__main__':
    def subprocess_cmd(command):
        process = subprocess.Popen(command, stdout=subprocess.PIPE, shell=False)
        proc_stdout = process.communicate()[0].strip()
        print proc_stdout
    subprocess_cmd(['su','-','root','-c','su -s /bin/sh postgres -c \'psql -U msa ..........])
I just needed to place the command I was executing as root after -c. It now switches to the postgres user, finds the data it needs as root, and returns to the normal user afterwards.
You are misunderstanding what su does. su creates a privileged subprocess; it does not change the privileges of your current process. The commands after su will be executed after su finishes, with your normal privileges.
Instead, you want to pass a -c option to specify the commands you want to run with elevated privileges. (See also the su man page.)
popen_cmd_shell('su -c ls - root')
sudo was specifically designed to simplify this sort of thing, so you should probably be using that instead.
Scripted access to privileged commands is a sticky topic. One common approach is to have your command perform the privileged operation, then drop its privileges. Both from a security and a design point of view, this approach tends to simplify the overall logic. You need to make sure your privileged code is as simple and short as possible--no parsing in the privileged section, for example.
Once the privileged code is properly tested, audited, and frozen, bestowing the script with the required privileges should be a simple matter (although many organizations are paranoid, and basically unable to establish a workable policy for this sort of thing).
Regardless of which approach you take, you should definitely avoid anything with shell=True in any security-sensitive context, and instead pass any external commands as a list, not as a single string.
popen_cmd_shell(['su', '-c', 'ls', '-', 'root'])
(Maybe also rename the function, since you specifically do not want a shell. Obviously, also change the function to specify shell=False.)
Again, these security concerns hold whether you go with dropping privileges, or requesting privilege escalation via su or sudo.
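To see the difference the list form makes, here is a minimal demonstration with a harmless command in place of su (run_cmd is just an illustrative wrapper, not part of the original script):

```python
import subprocess

def run_cmd(argv):
    # With a list and the default shell=False, no shell ever parses
    # the arguments, so metacharacters like ; are inert data.
    result = subprocess.run(argv, capture_output=True, text=True)
    return result.stdout.strip()

# echo receives the whole string as one literal argument:
print(run_cmd(["echo", "hello; rm -rf /"]))  # prints: hello; rm -rf /
```

With shell=True and a single string, that same ; would have been treated as a command separator, which is exactly the injection risk the answer warns about.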
Your command is this
su - root; ls;
The shell interprets it as two separate commands: first su - root, which starts an interactive root shell and waits for it to exit, and only then ls, which runs with your original privileges.
To run ls itself as root, pass it to su with -c, as a list:
['su', '-', 'root', '-c', 'ls']
EDIT
stdout=subprocess.PIPE was causing the program to hang. If you are trying to pass the password in, process.communicate('root password') works.
Do you just want to access the PostgreSQL database? If so, you don't need to use command line at all...
The Python library psycopg2 will allow you to send commands to the PostgreSQL server; more on that here: https://wiki.postgresql.org/wiki/Psycopg2_Tutorial
However I recommend an ORM such as SQLAlchemy, they make communicating with a database a trivial task.
I am trying to automate pbrun using the following code
ssh user#server.com
pbrun -u user1 bash
pass active directory password
run the command
exit
I created the following script but it's not able to pass the password for pbrun:
import time
import pexpect
child = pexpect.spawn('ssh user#server.com')
child.expect("user#server.com's password:")
child.sendline('Password')
child.expect ('.')
child = pexpect.spawn ('pbrun -u user1 bash')
child.expect ('.*')
time.sleep(10)
child.sendline ('Password') # Active Directory password
child.expect ('.*')
child.sendline ('ls')
data = child.readline('ls')
print data
The above code successfully does ssh and runs pbrun but is unable to send the password asked by pbrun. Any help is appreciated.
I was able to achieve this with the script below. I tried Python but was not successful; sharing this expect script, which may be helpful to others.
#!/usr/bin/expect -f
if { $argc<1 } {
    send_user "usage: $argv0 <passwdfile> \n"
    exit 1
}
set timeout 20
set passwdfile [ open [lindex $argv 0] ]
catch {spawn -noecho ./myscript.sh}
expect "Password:" {
    while {[gets $passwdfile passwd] >= 0} {
        send "$passwd\r"
    }
}
expect "*]$\ " {send "exit\r"}
close $passwdfile
send "ls\r"
expect eof
Run the script as below:
./run.exp passfile.txt
Here passfile.txt contains the password in plain text, and myscript.sh contains the pbrun command.
In general it's not a great idea to expect wildcards like . or .* because those can match a partial input and your script will continue and send its next line potentially before the server at the other end is even able to receive/handle it, causing breakage. Be as specific as possible, ideally trying to match the end of whatever the server sends right before it waits for input.
You have access to the string buffers containing what pexpect receives before and after the matched pattern in each child.expect() statement with the following constructs which you can print/process at will:
print child.before
print child.after
You might want to get familiar with them - they're your friends during development/debugging, maybe you can even use them in the actual script implementation.
Using sleeps for timing is not great either - most of the time they'll just unnecessarily slow down your script execution and sooner or later things will move at different/unexpected speeds and your script will break. Better expect patterns eventually with a timeout exception are generally preferred instead - I can't think of a case in which sleeps would be just as (or more) reliable.
Check your script's exact communication using these techniques and adjust your patterns accordingly.
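A compact illustration of both points -- exact prompts instead of wildcards, and expect() with a timeout instead of sleep() -- using a local bash one-liner as a stand-in for the remote pbrun session (the prompt texts are placeholders; match whatever your server actually prints):

```python
import pexpect

# Fake remote session: asks for a password, then shows a prompt and
# runs one command, so the pattern-matching advice can be exercised
# without a real server.
fake_session = (
    'echo -n "Password: "; read p; '
    'echo -n "app # "; read cmd; eval "$cmd"'
)
child = pexpect.spawn('bash', ['-c', fake_session],
                      encoding='utf-8', timeout=5)
child.expect('Password: ')   # exact text, not '.' or '.*'
child.sendline('secret')
child.expect('app # ')       # blocks until the prompt really arrived
child.sendline('echo done')  # no time.sleep() anywhere
child.expect(pexpect.EOF)
print(child.before.strip())
```

If a prompt never shows up, expect() raises pexpect.TIMEOUT after the configured timeout, which is far easier to debug than a script that silently raced ahead.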
I basically want to create a web page through which a Unix terminal on the server side can be reached: commands can be sent to the terminal, and their results received from it.
For this, I have a WSGIServer. When a connection is opened, I execute the following:
def opened(self):
    self.p = Popen(["bash", "-i"], bufsize=1, stdin=PIPE, stdout=PIPE, stderr=STDOUT)
    self.p.stdout = Unbuffered(self.p.stdout)
    self.t = Thread(target=self.listen_stdout)
    self.t.daemon = True
    self.t.start()
When a message comes to the server from the client, it is handled by the following function, which simply forwards the incoming message to the stdin of subprocess p, the interactive bash:
def received_message(self, message):
    print(message.data, file=self.p.stdin)
The output of the bash process is then read by the following function, running in the separate thread t, which sends it on to the client.
def listen_stdout(self):
    while True:
        c = self.p.stdout.read(1)
        self.send(c)
In such a system I am able to send any command (ls, cd, mkdir, etc.) to the bash running on the server side and receive its output. However, when I try to run ssh xxx@xxx, I get the error "pseudo-terminal will not be allocated because stdin is not a terminal".
Also, in a similar way, when I run sudo ..., the prompt for password is not sent to the client somehow, but it appears on the terminal of the server script, instead.
I am aware of expect; however, only for such sudo and ssh usage, I do not want to mess my code up with expect. Instead, I am looking for a general solution that can fake sudo and ssh and redirect prompt's to the client.
Is there any way to solve this? Ideas are appreciated, thanks.
I found the solution. What I needed was to create a pseudo-terminal, and on the slave side of the tty, call setsid() to make the process a new session leader and run the commands there.
Details are here:
http://www.rkoucha.fr/tech_corner/pty_pdip.html
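In modern Python the same recipe can be written with the standard pty module; start_new_session=True makes the child call setsid() so the slave side becomes its controlling terminal (a minimal sketch, not the article's exact code):

```python
import os
import pty
import subprocess

def spawn_in_pty(argv):
    """Start argv attached to a new pseudo-terminal.

    Returns (process, master_fd); the caller reads and writes the
    master end, exactly as a terminal emulator would, so programs
    like ssh and sudo see a real tty on their stdin/stdout."""
    master_fd, slave_fd = pty.openpty()
    proc = subprocess.Popen(
        argv,
        stdin=slave_fd,
        stdout=slave_fd,
        stderr=slave_fd,
        start_new_session=True,  # setsid() in the child
        close_fds=True,
    )
    os.close(slave_fd)  # parent keeps only the master end
    return proc, master_fd

if __name__ == "__main__":
    proc, fd = spawn_in_pty(["echo", "hello from a pty"])
    proc.wait()
    print(os.read(fd, 1024).decode().strip())
```

In the web-terminal setup above, the thread that previously read self.p.stdout would instead read from master_fd, and received_message would write to it.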