The idea is to execute code on Windows via Python's subprocess.Popen, but, in the command I'm executing, there's also a password which I need to keep secure.
Does Windows store (somewhere, e.g. logs) commands executed this way via Python?
Normally, I would execute such a command (like net use) from the standard Windows command line, but I have now moved it into my Python script, and because it contains a password, I want to keep it as secure as possible.
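As a rough sketch (the share, account, and password below are placeholders, not values from the question), calling net use through subprocess with an argument list avoids going through cmd.exe, so the command never lands in a shell's history:
import subprocess

# Hypothetical share, user and password; replace with your own values.
proc = subprocess.Popen(
    ['net', 'use', r'\\server\share', 'SECRET_PASSWORD', '/user:DOMAIN\\someuser'],
    stdout=subprocess.PIPE, stderr=subprocess.PIPE)
out, err = proc.communicate()
Note that the password still appears on that process's command line while it runs, so tools that list process command lines could still see it.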
These are the things I need to do:
Open putty.exe
Enter username and password.
Run a shell script.
I am using UFT (VB Scripting). I am able to open PuTTY but not able to enter username and password or run any commands using UFT.
Is there any other way I can achieve this? I have searched and found that we could use Plink, but then the whole team would have to install Plink for that purpose, and that is not possible.
Thanks in advance.
PuTTY has the -m switch, which you can use to provide a path to a file with a list of commands to execute:
putty.exe user@example.com -m c:\local\path\commands.txt
Where the commands.txt will, in your case, contain a path to your shell script, like:
/home/user/myscript.sh
Though for automation, you had better use the Plink command-line connection tool instead of the GUI PuTTY application, as you have already found out. Plink is part of the PuTTY package, so everyone who has PuTTY should have Plink too.
Plink (plink.exe) accepts the same command-line arguments as PuTTY. In addition to those, you can specify your command directly on its command line, like:
plink.exe user@example.com /home/user/myscript.sh
or using its standard input:
plink.exe user@example.com < c:\local\path\command.txt
(of course, you will use redirection mechanism of your language, instead of the <).
Note that providing a command using the -m switch or directly on command-line implies a non-interactive mode, while using the standard input uses an interactive mode by default. So the results or behavior may differ. Use the -t and -T switches to force the interactive and the non-interactive mode, respectively.
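If you end up driving Plink from a scripting language, the standard-input approach could look like this Python sketch (host, account, and script path are just the placeholder values used above):
import subprocess

# Feed the remote command to plink over standard input (interactive mode).
proc = subprocess.Popen(
    ['plink.exe', 'user@example.com'],
    stdin=subprocess.PIPE, stdout=subprocess.PIPE, stderr=subprocess.PIPE,
    text=True)
out, err = proc.communicate('/home/user/myscript.sh\nexit\n')
print(out)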
You can add command-line arguments when you launch PuTTY directly:
start C:\Users\putty.exe -load "server" -l userID -pw Password -m commands.txt
Can you not request the user name and pass prior and pass this along to the executable?
Running a single remote command or a short series of commands is even easier with plink's -batch flag, with no need for a script file. For example, to show the OS name and a directory listing, do this:
plink user@host -pw password -batch uname;ls
I am trying to use Fabric to run commands on a remote machine.
This works fine until the command on the remote machine is interactive. In that case, Fabric returns the interactive shell but forces me to type the required input, while I want to send a command that does everything remotely, so I can automate the procedure.
Example:
from fabric.api import *
env.hosts=['myhost.mydomain']
env.user='root'
run('test1/myapp; exectask; exit')
I run a CLI application on the remote machine that uses an interactive shell, so it waits for me to type the command (exectask); then, once done, I call the exit command to quit.
What happens now is that the app launches, Fabric shows me the interactive UI, and I still need to type exectask and, afterwards, exit.
How can I tell fabric to run that app, then pass the command to the interactive shell and then the exit to quit?
I see that Fabric has the prompt feature, but that's to ask the user to input data, while I want to just pass the commands and get the result back.
You might want to look into pexpect, a pure-Python module that works like expect. Essentially, it allows your program to spawn an external program or process, then control it just like a human was interacting with it. You program in what the program should expect to see (hence the name), then what action(s) to take.
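A minimal sketch of that approach, assuming key-based SSH login and that both the remote shell and the application print recognizable prompts (the prompt patterns below are guesses, not taken from the question):
import pexpect

child = pexpect.spawn('ssh root@myhost.mydomain', encoding='utf-8')
child.expect(r'[#$] ')          # remote shell prompt (assumed pattern)
child.sendline('test1/myapp')
child.expect('> ')              # the application's own prompt (assumed)
child.sendline('exectask')
child.expect('> ')              # task finished, prompt is back
print(child.before)             # whatever exectask printed
child.sendline('exit')          # leave the application
child.expect(r'[#$] ')
child.sendline('exit')          # close the SSH session
child.expect(pexpect.EOF)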
I'm trying use python's cmd library to create a shell with limited commands. One requirement I have is to be able to run a command that executes an existing shell script which opens an ssh session on a remote machine and from there allows the user to interact with the remote shell as if it was a regular ssh session.
Simply using subprocess.Popen(['/path/to/connect.sh']) works well, at least as a starting point, except for one issue: you can interact with the remote shell, but the input you type is not shown on stdout. For example, you see the prompt on your stdout, but when you type 'ls' you don't see it being typed; when you hit return, though, it works as expected.
I'm trying to wrap my head around how to print the input to stdout and still send it along to the remote ssh session.
EDIT:
Actual code without using cmd was just the one line:
ssh_session = subprocess.Popen(['connect.sh'])
it was fired from a do_* method in a class which extended cmd.Cmd. I think I may end up using paramiko but would still be interested in anyone's input on this.
Assuming you are using a Unix-like system, SSH detects whether or not you are on a terminal. When it detects that you are not on a terminal, as when using subprocess, it will not echo the characters typed. Instead, you might want to use a pseudo-terminal; see pexpect or pty. This way you can get the output from SSH as if it were running on a true terminal.
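A minimal sketch of the pty route, using the script path from the question:
import pty

# Run the connect script on a pseudo-terminal; ssh then believes it is on a
# real terminal, so the remote side echoes what you type as usual.
pty.spawn(['/path/to/connect.sh'])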
I have set up password-less login between two Ubuntu systems. When I run the following command
$ scp file user@hostname2:/tmp/
it works fine and the file is transferred without asking for a password.
However, when I try to do the same through the following Python statement:
subprocess.check_output(['scp', file, r'user@hostname2:/tmp/'])
I am not able to copy the file to the remote computer, so I am a bit curious which user's privileges a Python program uses when executing a command via subprocess.
How can I debug such programs?
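One way to start debugging (a sketch, not from the original post; it assumes Python 3.7+ for subprocess.run with text=True) is to print which account the script runs as, since that determines which ~/.ssh keys scp will use, and to capture scp's verbose output:
import getpass
import subprocess

print('running as:', getpass.getuser())   # the account the Python process uses

# Re-run the copy with scp's verbose flag and capture its diagnostics.
result = subprocess.run(
    ['scp', '-v', 'file', 'user@hostname2:/tmp/'],
    stdout=subprocess.PIPE, stderr=subprocess.PIPE, text=True)
print('exit code:', result.returncode)
print(result.stderr)                       # scp writes its -v output to stderr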
echo Something=Something > file
I can use paramiko's exec_command to run cat, grep, and ls, but whenever I try to modify a file with a command like the one above, it does nothing. I already ran su before this. The file remains exactly the same as it was before running the command.
This is because you have to open a new channel for each exec_command call. This loses the authentication of the su command, since it is associated with a specific channel.
You have a couple of options:
1. Run the command with sudo, which may not be possible over paramiko.
2. Log in as root, which is not necessarily a good idea.
3. Use invoke_shell() on your channel, then send commands to the shell via stdin.
Option 3 allows for interactive use of ssh with paramiko, keeping state information intact. That is what you need for su commands. It also allows you to create a pexpect-style wrapper around your shell connection, watching the stdout pipe for indications that things are done, while sending additional commands through stdin. Just watch out for the pipes filling up and blocking until you read data.
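A rough sketch of option 3 (the host name and credentials are placeholders; the sleeps are a crude stand-in for properly watching the output for the password prompt):
import time
import paramiko

client = paramiko.SSHClient()
client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
client.connect('remotehost', username='user', password='userpass')  # placeholders

chan = client.invoke_shell()      # one interactive shell, so state persists
chan.send('su -\n')
time.sleep(1)                     # crude: wait for the password prompt
chan.send('rootpass\n')           # placeholder root password
time.sleep(1)
chan.send('echo Something=Something > file\n')
time.sleep(1)

while chan.recv_ready():          # drain whatever the shell printed so far
    print(chan.recv(4096).decode(), end='')

client.close()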