How do we run tmux from ssh agent using python

How do we create a tmux session using a Python-based SSH client?
from pssh.clients import ParallelSSHClient

def Establish_Connection(host, username, passkey):
    client = ParallelSSHClient([str(host)], user=str(username), password=str(passkey))
    return client

client2 = Establish_Connection('host', 'root', 'password')
cmd_out = client2.run_command('tmux new -s myname')
for host, out in cmd_out.items():
    for line in out.stdout:
        print(line)
It doesn't create a tmux session; checking manually from a terminal on the host shows:
> tmux ls
no server running on /tmp/tmux-0/default

The script itself prints this error from tmux:
open terminal failed: not a terminal
Okay, well, post the tmux bits of the script. By default tmux tries to attach to the current tty, which doesn't exist in a non-interactive SSH session, hence the "not a terminal" error. If you have a new-session call in there, you'll need to add the -d flag so the session starts detached:
tmux new-session -s username -d
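Applied to the pssh snippet above, the fix is a single flag in the command string (a minimal sketch; 'myname' is the session name from the question):
cmd_out = client2.run_command('tmux new-session -s myname -d')
# The session starts detached on the remote host, so tmux never tries to
# attach a terminal; attach to it later with: tmux attach -t myname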

Related

How to run multiple Putty commands from Notepad

I have created a Python Tkinter app that opens a PuTTY session as below:
# If the server name is not empty, open an SSH connection using PuTTY
if server_name:
    # Set the PuTTY path
    putty_path = 'C:\\Program Files\\PuTTY\\putty.exe'
    check_db_type = db_type.get().lower()
    check_db_type_action = get_action().lower()
    abs_path = resource_path(f"sudo\\{check_db_type}\\{check_db_type}_sudo.txt")
    # THIS IS WHAT STARTS THE SSH PUTTY SESSION
    server_command = f'{putty_path} -ssh {username}@{server_name} -pw {password} -m "{abs_path}" -t'
    subprocess.Popen(server_command)
    # Close the subprocess if it is already running and open a new subprocess:
    # with contextlib.suppress(Exception):
    #     subprocess.Popen.terminate()
    #     subprocess.Popen(server_command)
else:
    window.destroy()
I am passing the Notepad file path in server_command (via the -m option). The file contains these commands:
sudo su - postgres || df -kh
sudo su - postgres works well, but the other commands do not run.
I have tried 3 variants of command separation, as below, but nothing works:
sudo su - postgres; df -kh
sudo su - postgres && df -kh
sudo su - postgres
df -kh
Is there any other way to run multiple commands like these, apart from using a text file? Thank you so much.
Whichever way I separate the two commands in the file, only the first command (sudo su - postgres) works. The next command (df -kh), which shows filesystem space, does not run as expected.
I am using Windows on my laptop but access the database servers remotely through PuTTY.
++Update
When my team logs in as ADM (the ADM account for users) and then runs sudo su - username, it opens a different subshell from the previous one; you can verify this with echo $$.
Put simply, ADM is the parent shell and sudo su - username opens a child shell. After sudo su - postgres, all my following commands (uptime, df -kh, etc.) run in the parent shell in the background; until I exit sudo su - postgres (Ctrl+D) I cannot see their output.
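One way around this, if editing the command file is acceptable (this example is a suggestion, not from the original thread), is to pass the follow-up commands to the child shell itself via su's -c option, so they run inside the postgres login shell instead of queuing in the parent:
sudo su - postgres -c 'df -kh; uptime'
Here the quoted string is executed by the postgres login shell, so df -kh runs as postgres and its output appears immediately rather than after the subshell exits.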

Script hangs while using Fabric's Connection.run() in the background

Overview
I'm trying to use python fabric to run an ssh command as root on a remote server.
The command: nohup ./foo &
foo is expected to run for several days. I must be able to dissociate foo from Fabric's remote SSH session and put foo in the background.
The Fabric FAQ says you should use something like screen or tmux when you run your fabric script (which runs the backgrounded command). I tried that, but my fabric script still hung. foo is not hanging.
Question
How do I use fabric to run this command on a remote server without the script hanging: nohup ./foo &
Details
This is my script:
#!/bin/sh
# Credit: https://unix.stackexchange.com/a/20895/6766
if "true" : '''\'
then
    exec "/nfs/it/network_python/$OSREL/bin/python" "$0" "$@"
    exit 127
fi
'''
from getpass import getpass
import os

from fabric import Connection, Config

assert os.geteuid() == 0, "ERROR: Must run as root"

for host in ['host1.foo.local', 'host2.foo.local']:
    # Make an ssh connection to the host...
    conn = Connection(host)
    # The script always hangs at this line
    result = conn.run('nohup ./foo &', warn=True, hide=True)
I always open a tmux session to run the aforementioned script in; even doing so, the script hangs when I get to conn.run(), above.
I'm running the script on a vanilla CentOS 6.5 VM; it runs under python 2.7.10 and fabric 2.1.
The Fabric FAQ is unclear... I thought the FAQ wanted tmux used on the local side when I executed the Fabric script.
The correct way to fix this problem is to replace nohup in the remote command with screen -d -m <command>. Now I can run the whole script locally with no hangs (and I don't have to use tmux in the local terminal).
Explicitly, I have to rewrite the last line of my script in my question as:
# Remove &, and nohup...
result = conn.run('screen -d -m ./foo', warn=True, hide=True)
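If screen is not available on the remote host, a commonly used alternative (a sketch, not from the original answer) is to keep nohup but fully detach the process's streams and disable the pty, so the SSH channel has nothing left to wait on:
# Redirect stdout/stderr to a log and stdin from /dev/null so the remote
# shell can exit immediately; pty=False keeps foo off the controlling tty.
result = conn.run('nohup ./foo > foo.log 2>&1 < /dev/null &',
                  warn=True, hide=True, pty=False)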

Python fabric unable to start process

I'm using Python Fabric to deploy binaries to an EC2 server and am attempting to run them in the background (a subshell).
All the fabric commands for performing local actions, putting files, and executing remote commands w/o elevated privileges work fine. The issue I run into is when I attempt to run the binary.
with cd("deploy"):
run('mkdir log')
sudo('iptables -t nat -A PREROUTING -p tcp --dport 80 -j REDIRECT --to-port 8080', user="root")
result = sudo('./dbserver &', user="root") # <---- This line
print result
if result.failed:
print "Running dbserver failed"
else:
print "DBServer now running server" # this gets printed despite the binary not running
After I login to the server and ps aux | grep dbserver nothing shows up. How can I get fabric to execute the binary? The same command ./dbserver & executed from the shell does exactly what I want it to. Thanks.
This is likely related to TTY issues, and/or to the fact that you're attempting to background a process.
Both of these are discussed in the FAQ under these two headings:
http://www.fabfile.org/faq.html#init-scripts-don-t-work
http://www.fabfile.org/faq.html#why-can-t-i-run-programs-in-the-background-with-it-makes-fabric-hang
Try making the sudo like this:
sudo('nohup ./dbserver &', user="root", pty=False)
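Per the FAQ entries linked above, it can also help to redirect the process's streams so the remote shell can close cleanly (a variant sketch; the log path is an assumption):
sudo('nohup ./dbserver > dbserver.log 2>&1 < /dev/null &', user="root", pty=False)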

python subprocess output on nohup

I'm trying to monitor the available disk space of a remote machine using a Python script, which executes the df -h . command using subprocess.Popen.
import subprocess
import time

command = 'ssh remoteserver "df -h ."'
while True:
    proc = subprocess.Popen(command, shell=True, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
    output, err = proc.communicate()
    print output
    print err
    time.sleep(60)
The script runs fine and prints the output to the terminal when run from the command line:
$> python2.7 script.py
Filesystem Size Used Avail Use% Mounted on
remoteserver:/home/user
555G 447G 109G 81% /home
The script does not produce any output and seems to block when it is started with nohup:
$> nohup python2.7 script.py &
I would like the script to fetch the disk space of the remote machine, as above, when started under nohup.
I'm not 100% sure of the underlying issue here, but when you invoke nohup in the shell, it disconnects stdin/stdout from the terminal process, which I suspect is causing some of the interactions you're seeing.
Given that you're doing this against a remote machine, I'd actually recommend looking at something like Fabric as a library to do what you're after. It's pretty straightforward, and it does most of the handling of terminal sessions for you, as well as closing things down nicely when you're done.
Something like:
from fabric import api
from fabric.api import env
import fabric

env.host_string = '%s@%s' % (username, remote_host)
env.disable_known_hosts = True
env.password = password
fabric.state.output['stdout'] = False
fabric.state.output['stderr'] = False

results = api.run('df -h')
You might try passing stdin=subprocess.PIPE to the Popen call, then calling proc.stdin.close() on the next line, before the communicate() call. Or you can change the command to 'ssh remoteserver "df -h ." </dev/null'. Others report opening FNULL = open(os.devnull, 'r') and passing FNULL as the stdin= argument, though I'm not sure whether you need to call FNULL.close() afterwards.
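A minimal sketch of the os.devnull variant described above (assuming the same command string as in the question):
import os
import subprocess

command = 'ssh remoteserver "df -h ."'
# Give ssh an explicit empty stdin so it cannot block waiting for
# terminal input when the script runs detached under nohup.
with open(os.devnull, 'r') as devnull:
    proc = subprocess.Popen(command, shell=True, stdin=devnull,
                            stdout=subprocess.PIPE, stderr=subprocess.PIPE)
    output, err = proc.communicate()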
SSH is most likely waiting for input for some reason when it is run from nohup. Perhaps it is unable to authenticate in the nohup environment and is asking for password input?
To make sure SSH is not waiting for input, try adding -o "BatchMode yes" to the ssh command and see if there are some clues in the output/error from the subprocess communicate call.
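For example, the command string from the question becomes:
command = 'ssh -o "BatchMode yes" remoteserver "df -h ."'
With BatchMode enabled, ssh fails fast with an error on stderr instead of prompting for a password, which should show up in the err value from communicate().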

Permission denied on git repository with Fabric

I'm writing a fab script to do a git pull on a remote server, but I get Permission denied (publickey,keyboard-interactive) when Fabric runs the command.
If I ssh to the server and then do the pull, it works. (I've already setup the keys on the server, so it doesn't ask for passphrases, etc.)
Here's my fabric task:
import fabric.api as fab
from fabric import colors  # needed for colors.cyan below

def update():
    '''
    update workers code
    '''
    with fab.cd('~/myrepo'):
        # pull changes
        print colors.cyan('Pulling changes...')
        fab.run('git pull origin master')
How do I get it to work with Fabric?
Edit: My server is a Google Compute instance, and it provides a gcutil tool to ssh to the instance. This is the command it runs to connect to the server:
ssh -o UserKnownHostsFile=/dev/null -o CheckHostIP=no -o StrictHostKeyChecking=no -i /Users/John/.ssh/google_compute_engine -A -p 22 John@123.456.789.101
The script is able to connect to the server AFAICT (it's able to run commands on the server like cd and supervisor and git status), it's just git pull that fails.
You need to edit the fabfile like this in order to enable the SSH agent forwarding option:
from fabric.api import *
from fabric import colors

env.hosts = ['123.456.789.101']
env.user = 'John'
env.key_filename = '/Users/John/.ssh/google_compute_engine'
env.forward_agent = True

def update():
    '''
    update workers code
    '''
    with cd('~/myrepo'):
        # pull changes
        print colors.cyan('Pulling changes...')
        run('git pull origin master')
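For env.forward_agent to help with the git pull, the key also needs to be loaded into the local SSH agent before running the task (a usage note; the key path and task name are taken from the question):
ssh-add /Users/John/.ssh/google_compute_engine
fab update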
