I have a program which uses Python's cmd module for its command-line interface.
Now I want it to run on my Linux server whenever any normal user logs in,
in such a way that the user never gets the default Linux prompt (i.e. they should not be able to kill the program, send it to the background, or anything similar).
For security reasons the program should never allow the user to reach a normal shell prompt. The user should always use the program's command line to run commands. (The program has various filters built into it.)
I tried putting the program's execution command in /etc/passwd (replacing the default bash shell with the program's command) for the user, and also tried putting it in the user's .bashrc file, but to no avail; the user can still reach the default prompt.
Any pointers would be very helpful.
This probably belongs on Super User, but: add your program to the /etc/shells file and set the user's login shell to it.
For example, sudo useradd -d /path/to/newuserhomedir -s /usr/sbin/nologin newusername would create a user called newusername whose login shell is the program /usr/sbin/nologin.
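If the program itself is the concern, it also helps to have it ignore the usual job-control signals so the user cannot interrupt or suspend it. A rough sketch of such a cmd-based login shell (the class and command names below are only placeholders, not your actual program):

#!/usr/bin/env python
import cmd
import signal

class RestrictedShell(cmd.Cmd):
    prompt = 'app> '

    def do_status(self, arg):
        """An example filtered command."""
        print('OK')

    def do_exit(self, arg):
        """Log the user out."""
        return True

    # Ctrl+D should end the session cleanly, not drop to anything else
    do_EOF = do_exit

if __name__ == '__main__':
    # Ignore Ctrl+C and Ctrl+Z so the user cannot kill or suspend the shell
    signal.signal(signal.SIGINT, signal.SIG_IGN)
    signal.signal(signal.SIGTSTP, signal.SIG_IGN)
    RestrictedShell().cmdloop()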
These are the things I need to do:
Open putty.exe
Enter username and password.
Run a shell script.
I am using UFT (VBScript). I am able to open PuTTY, but not able to enter the username and password or run any commands through UFT.
Is there any other way I can achieve this? I have searched and found that we could use Plink, but then the whole team would have to install Plink for that purpose, and that is not possible.
Thanks in advance.
PuTTY has the -m switch, which you can use to provide a path to a file with a list of commands to execute:
putty.exe user@example.com -m c:\local\path\commands.txt
Where the commands.txt will, in your case, contain a path to your shell script, like:
/home/user/myscript.sh
Though for automation, you are better off using the Plink command-line connection tool instead of the GUI PuTTY application, as you have already found out. Plink is part of the PuTTY package, so everyone who has PuTTY should have Plink too.
Plink (plink.exe) takes the same command-line arguments as PuTTY. In addition to those, you can specify your command directly on its command line, like:
plink.exe user@example.com /home/user/myscript.sh
or using its standard input
plink.exe user@example.com < c:\local\path\command.txt
(of course, you will use redirection mechanism of your language, instead of the <).
Note that providing a command using the -m switch or directly on command-line implies a non-interactive mode, while using the standard input uses an interactive mode by default. So the results or behavior may differ. Use the -t and -T switches to force the interactive and the non-interactive mode, respectively.
You can add command-line arguments when you launch PuTTY directly:
start C:\Users\putty.exe -load "server" -l userID -pw Password -m commands.txt
Can you not request the username and password beforehand and pass them along to the executable?
Running a single remote command or a short series of commands is even easier with the plink -batch flag, which avoids the need for a script file. For example, to show the OS name and a directory listing, do this:
plink user@host -pw password -batch uname;ls
Not able to send commands to shell I logged into
Originally, I wrote a Python script. It was able to send commands like
subprocess.run(['kubectl', 'config', 'get-context'], shell=True)
but when it came time to get to the child shell, in this case bash, the command wouldn't run until I exited that shell, and it would then complain that it couldn't find the command.
I then tried to do it with the "sh" module, but was also unsuccessful.
I thought maybe using Python was the problem, and since my ultimate goal was to use a different shell (cypher-shell) anyway, I skipped straight to that with bash as the parent shell. There I have a line that is sometimes successful, sometimes not:
kubectl run -it --rm cypher-shell --image=gcr.io/cloud-marketplace/neo4j-public/causal-cluster-k8s:3.4 --restart=Never --namespace=default --command -- ./bin/cypher-shell -u neo4j -p "password" -a "domain.name"
But even when it logs in successfully, it just hangs until I manually exit, and only then does it run the next commands.
Note: I saw this and so, perhaps, it's not a child shell? Run shell command from child shell
I can't say I know exactly what you are doing, but if I understand your objective correctly, you want the Python program to continue to log while the script keeps running? The problem is that the logger continues to run and holds up your program. The way I would deal with that is to run the logger as a background process.
With bash, that would be ./script.sh &, which makes it run without holding back the rest of the program.
Hopefully that may give you an idea! Good luck.
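Since your original attempt was in Python, the same idea there would look roughly like this (the script name is just a placeholder):

import subprocess

# start the long-running process without waiting for it to finish
proc = subprocess.Popen(['./script.sh'])
# ... the rest of your program keeps running here ...
proc.wait()  # only wait at the point where you really need it to have finished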
I am trying to use Fabric to run commands on a remote machine.
This works fine until the command on the remote machine is interactive. In that case, Fabric returns the interactive shell but forces me to type the required input, whereas I am trying to send a command that does everything remotely, so I can automate the procedure.
Example:
from fabric.api import *
env.hosts=['myhost.mydomain']
env.user='root'
run('test1/myapp; exectask; exit')
I run a CLI application on the remote machine that uses an interactive shell, so it waits for me to type the command (exectask); then, once done, I call the exit command to quit.
What happens now is that the app launches, Fabric shows me the interactive UI, and I still need to type exectask and, afterwards, exit.
How can I tell Fabric to run the app, pass the command to the interactive shell, and then send exit to quit?
I see that Fabric has a prompt feature, but that is for asking the user to input data, while I just want to pass the commands and get the result back.
You might want to look into pexpect, a pure-Python module that works like expect. Essentially, it allows your program to spawn an external program or process and then control it just as if a human were interacting with it. You program in what your script should expect to see (hence the name), and then what action(s) to take.
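A rough sketch of that idea for your case (the prompt pattern '>' and the exact host and command are assumptions; adjust them to whatever your app actually prints):

import pexpect

# spawn the remote app over ssh, roughly what Fabric's run() would execute
child = pexpect.spawn('ssh root@myhost.mydomain test1/myapp')
child.expect('>')            # wait for the app's own prompt
child.sendline('exectask')   # type the command for it
child.expect('>')            # wait for the task to finish
child.sendline('exit')       # quit the app
child.expect(pexpect.EOF)
print(child.before)          # whatever output was collected before EOF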
I am calling a Python script supplied as part of a package from a Bash script I wrote, which calls the script in addition to doing various other things. Specifically, earlier on in the Bash script, I use a Zenity password dialog to get the user's MySQL root password and store it in a Bash variable to do some database setup. Later on, the Python script is called, and it too wants the user's MySQL root password.
I tried simply reusing the variable containing the password by using a pipe and a here string, but neither works; the script simply hangs there until I return to the terminal and enter the password again. I dug a bit into the Python script and found it uses getpass to ask for the password. Is there any way I can give it the password from the previously-created Bash variable so that I only have to ask the user for it once through a GUI?
If the information is already present in a bash variable (say its name is pwd), IMHO the simplest solution is to export it to the environment.
export pwd
Then you can get it in the Python script:
import os
pwd = os.environ['pwd']
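If you can edit the Python script, a slightly fuller sketch keeps the original getpass prompt as a fallback for when the variable is not set (the prompt text is illustrative):

import os
import getpass

# prefer the password exported by the Bash script, ask only if it is missing
password = os.environ.get('pwd')
if not password:
    password = getpass.getpass('MySQL root password: ')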
I have a python script which is performing some nagios configuration. The script is running as a user which has full sudo rights (the user can run any command with sudo, without password prompt). The final step in the configuration is this:
open(NAGIOS_COMMAND_FILE, 'a').write(cmdline)
The NAGIOS_COMMAND_FILE is only writable by root, so this command should be run by root. I can think of two ways of achieving this (both unsatisfactory):
Run the whole script as root. I do not like doing this, since any error in my script would then run with full root rights.
Put the open(NAGIOS_COMMAND_FILE, 'a').write(cmdline) command in a separate script, and use the subprocess library to call that script, with sudo. I do not like creating an extra script just to run a single command.
I suppose there is no way of changing the running user just for a single command, in my current script, or am I wrong?
Why don't you give write permission on NAGIOS_COMMAND_FILE to the user who has the sudo rights?
Never, ever run a web server as root or as a user with full sudo privileges. This isn't a pythonic thing, it is a "keep my server from being pwned" thing.
Look at os.seteuid, the "principle of least privilege", and man sudoers, and run your server as a regular user such as "httpd-server", where "httpd-server" has sudoers permission to write to NAGIOS_COMMAND_FILE. And then be sure that what you write to the command file is as clean as you can make it.
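A rough sketch of the os.seteuid idea, assuming the script is started as root and that "nagios" is an unprivileged account (both are assumptions):

import os
import pwd

unprivileged_uid = pwd.getpwnam('nagios').pw_uid

os.seteuid(unprivileged_uid)   # run most of the script without root rights
# ... all the normal configuration work happens here ...

os.seteuid(0)                  # raise privileges only for the single write
with open(NAGIOS_COMMAND_FILE, 'a') as f:
    f.write(cmdline)
os.seteuid(unprivileged_uid)   # and drop them again immediately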
It is actually possible to change user for a single command.
Fabric provides a way to log in to a server as any user; I believe it relies on SSH connections. So you could connect to localhost as a different user from your Python script and execute the desired command.
http://docs.fabfile.org/en/1.4.3/api/core/decorators.html
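A rough sketch of that approach with Fabric 1.x (connecting as root@localhost is only an example; any account allowed to write the file would do, and take care with quoting if cmdline can contain special characters):

from fabric.api import run, settings

with settings(host_string='root@localhost'):
    run('echo "%s" >> %s' % (cmdline, NAGIOS_COMMAND_FILE))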
Anyway, as others have already pointed out, it is best to give the user running the script permission to perform this one operation and avoid relying on root for execution.
I would agree with the post above: either give your user write permissions on the NAGIOS_COMMAND_FILE, or add that user to a group that has those permissions, such as nagcmd.