I have a Python script which calls a bash script to connect to a VPN account. If I run the Python script from a console, I can do 2FA and connect to my VPN account seamlessly. However, if I run the script as a background process (with nohup, etc.), the Python process becomes suspended (+ suspended (tty output)) whenever I try to connect to the VPN, and the program stops responding (it looks like it is stuck in a state that expects input).
vpn_manager.py:
import os
import subprocess

connection_command = 'sh {}'.format(os.path.join(base_path, 'scripts', 'vpn.sh'))
response = subprocess.run(connection_command, shell=True, stdout=subprocess.PIPE)
stdout = response.stdout.decode('utf-8')
if 'state: Connected' in stdout:
    update_icon(environment)
Shell script vpn.sh:
printf "1\nUSERNAME\nPASSWORD\n2\n" | /opt/cisco/anyconnect/bin/vpn -s connect VPN_HOST
Normally, this VPN command asks for a username and password, then waits for me to verify it from my 2FA app on my phone.
How can I make this Python code work as a background process without being interrupted by the VPN prompts?
Using pexpect to drive the interactive prompts is a better approach, as @CharlesDuffy suggested.
A solution with pexpect will look similar to the example below.
import pexpect

failed = False
vpn = pexpect.spawn('/opt/cisco/anyconnect/bin/vpn -s connect {}'.format(host))
ret = vpn.expect([pexpect.TIMEOUT, CONNECT_SUCCESS, CONNECT_ERR_1, CONNECT_ERR_2, ...])
if ret != 1:
    failed = True
if not failed:
    vpn.sendline('1')
    ret = vpn.expect([pexpect.TIMEOUT, SELECT_GROUP_SUCCESS, SELECT_GROUP_ERR_1, SELECT_GROUP_ERR_2, ...])
    if ret != 1:
        failed = True
if not failed:
    vpn.sendline(USER_NAME)
    ret = vpn.expect([pexpect.TIMEOUT, USER_NAME_SUCCESS, USER_NAME_ERR_1, USER_NAME_ERR_2, ...])
    if ret != 1:
        failed = True
if not failed:
    vpn.sendline(PASSWORD)
    ret = vpn.expect([pexpect.TIMEOUT, PASSWORD_SUCCESS, PASSWORD_ERR_1, PASSWORD_ERR_2, ...])
    if ret != 1:
        failed = True
if not failed:
    vpn.sendline(AUTHENTICATION_METHOD)
    ret = vpn.expect([pexpect.TIMEOUT, AUTHENTICATION_SUCCESS, AUTHENTICATION_ERR_1, AUTHENTICATION_ERR_2, ...])
    if ret != 1:
        failed = True
if not failed:
    print('Connected!')
else:
    print('Failed to connect!')
Alternatively, send the output to a file (or named pipe):
printf "1\nUSERNAME\nPASSWORD\n2\n" | /opt/cisco/anyconnect/bin/vpn -s connect VPN_HOST > /tmp/vpn.out
Then, in your Python script, you can poll that file until you read the expected content.
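For example (a minimal sketch; the wait_for_output name and polling parameters are my own, not from the original post), the Python side could poll the file like this:

```python
import os
import time

def wait_for_output(path, expected, timeout=60, interval=1.0):
    # Poll `path` until `expected` appears in its contents,
    # or give up after `timeout` seconds.
    deadline = time.time() + timeout
    while time.time() < deadline:
        if os.path.exists(path):
            with open(path, 'r', errors='replace') as f:
                if expected in f.read():
                    return True
        time.sleep(interval)
    return False

# e.g. wait_for_output('/tmp/vpn.out', 'state: Connected', timeout=120)
```

Re-reading the whole file on each poll is fine for a small log like this one; for a long-running pipe you would track a read offset instead.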
I have a requirement to execute some commands via ssh on a remote host, for which I have to use only subprocess.
For security reasons, I have whitelisted the commands that can be executed on the remote host, but arguments to those commands are user-provided.
For example, the ls command is whitelisted, but the -l argument comes from the user via an API.
Currently my implementation does not stop a user from executing an arbitrary command on the remote host by using ; or & in the user-supplied arguments.
For example, a user can pass -l ; cat /etc/passwd to the ls command, and the whole string will be executed on the remote host.
I have added a function to check for special characters in the user-supplied input, but this looks a bit insecure, because blacklisting only a few characters may still leave a loophole.
Is there a safer solution, given the restriction that I can only use subprocess and user input cannot be restricted?
Please help.
My current code:
import subprocess
import traceback

def executeCommand(cmd, log):
    try:
        resultobj = subprocess.run(cmd, capture_output=True, check=True, universal_newlines=True)
        if not resultobj.stdout.strip() == '':
            log.info("Command output: %s", resultobj.stdout)
            return resultobj.returncode, resultobj.stdout
        log.error("Command Execution returned None: %s", cmd)
        return -1, resultobj.stdout + "\n" + resultobj.stderr
    except subprocess.CalledProcessError as e:
        log.error("Command Execution Failed for Command: %s with error %s", cmd, e.stderr)
        return e.returncode, e.stderr
    except subprocess.SubprocessError:
        log.error("Command Execution Failed for Command: %s with error %s", cmd, traceback.format_exc())
        return -1, traceback.format_exc()
The command passed to the above function will look like:
ssh_cmd = ['ssh', '-oConnectTimeout=10', '-oBatchMode=yes', '-oStrictHostKeyChecking=no', '-q', '1.1.1.1', 'ls -l ; cat /etc/passwd']
retcode, cmd_result = executeCommand(ssh_cmd, log)
The special-character blacklist function is pasted below:
def isBlacklistedChar(str):
    filter_chars = "&;|"
    if any(c in filter_chars for c in str):
        return True
    else:
        return False
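A safer pattern than blacklisting is to keep the command whitelist and quote every user-supplied argument with shlex.quote (Python 3.3+) before embedding it in the remote command string, so the remote shell sees it as literal text. A minimal sketch (the WHITELIST set and the buildRemoteCommand name are mine, not from the question):

```python
import shlex

WHITELIST = {'ls', 'df', 'uptime'}  # hypothetical set of allowed commands

def buildRemoteCommand(cmd, user_args):
    # Refuse anything outside the whitelist, then quote each
    # user-supplied argument so `;`, `&`, `|`, etc. lose their
    # special meaning to the remote shell.
    if cmd not in WHITELIST:
        raise ValueError('command not whitelisted: %s' % cmd)
    quoted = ' '.join(shlex.quote(arg) for arg in user_args)
    return '%s %s' % (cmd, quoted) if quoted else cmd

# An injection attempt becomes a harmless literal argument:
# buildRemoteCommand('ls', ['-l ; cat /etc/passwd'])
#   -> "ls '-l ; cat /etc/passwd'"
```

The resulting string would then be passed as the final element of the ssh argument list, exactly where 'ls -l ; cat /etc/passwd' appears in ssh_cmd above.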
Hi, I am trying to create a function which remotely executes my packet-sniffing script on my Raspberry Pi using paramiko and SSH.
def startPacketReceiver():
    ssh = paramiko.SSHClient()
    ssh.set_missing_host_key_policy(AutoAddPolicy())
    ssh.connect(RECV_IP_ADDRESS, username="pi", password="raspberry")
    ssh_stdin, ssh_stdout, ssh_stderr = ssh.exec_command("sudo gcc Code/test.c && sudo ./a.out")
    print("Done")
The test.c file is the packet sniffing script. It will only terminate with a CTRL-C (or equivalent method). It does not terminate naturally/eventually.
I want to be able to start the receiver and then quit the receiver e.g:
startPacketReceiver()
...
stopPacketReceiver()
Currently, when I run the Python script, I never get the "Done" print message, meaning the program hangs on exec_command and will not continue until it is terminated.
Additional Info
The test.c file loops infinitely, essentially:
while(1)
{
    saddr_size = sizeof saddr;
    //Receive a packet
    data_size = recvfrom(sock_raw, buffer, 65536, 0, &saddr, (socklen_t*)&saddr_size);
    //fprintf(stderr,"%d",data_size);
    if(data_size < 0)
    {
        fprintf(stderr,"Failed to get packet\n");
        printf("Recvfrom error , failed to get packets\n");
        return 1;
    }
    //Now process the packet
    ProcessPacket(buffer, data_size);
}
and so to stop it you must CTRL-C it.
You need to send your password to the sudo command. Enable tty mode by passing get_pty=True to the exec_command call, and then send your password through the ssh_stdin file interface.
def startPacketReceiver():
    ssh = paramiko.SSHClient()
    ssh.set_missing_host_key_policy(AutoAddPolicy())
    ssh.connect(RECV_IP_ADDRESS, username="pi", password="raspberry")
    ssh_stdin, ssh_stdout, ssh_stderr = ssh.exec_command("gcc Code/test.c && sudo ./a.out", get_pty=True)
    print("raspberry", file=ssh_stdin)  # Your password for the sudo command
    print("Done")
    return ssh, ssh_stdin, ssh_stdout, ssh_stderr
And then you can write your stopPacketReceiver to send Ctrl-C signal.
def stopPacketReceiver(ssh, ssh_stdin, ssh_stdout, ssh_stderr):
    print('\x03', file=ssh_stdin)  # Send Ctrl-C
    print(ssh_stdout.read())       # Print the stdout
    print(ssh_stderr.read())
I suggest taking a look at daemon(3).
Then you can capture signals or reopen standard input, depending on what you would like to do with your Python script.
(This works if you don't want to use any Python library other than sys and os.)
EDIT:
I am pretty sure that when the Python script terminates, the SSH connection will close, and so any program running on that tty by the user who initiated the connection will be terminated.
For that reason your C program needs to be daemonized, and may need to have its uid and/or euid changed.
That said, I tried to reproduce your code and ran into a similar problem: the Python script ran the command and printed "Done",
but as soon as I tried to read stdout, the entire script paused.
I think it was waiting for the script's return status.
So I made the following changes:
try:
    port = '22'
    client = paramiko.SSHClient()
    client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
    client.connect('<host_name>', port=22, username='<username>',
                   password='<password>')
    chan = client.get_transport().open_session()
    chan.get_pty()
    chan.exec_command("cd <script_directory>; gcc test.c; ./a.out")
    while True:
        print(chan.recv(1024))
finally:
    client.close()
The while loop here is to get the output of the C program.
But if you close the Python script, the C program will follow.
I have not dug too much into this library.
EDIT 2:
Look up nohup if you don't want to use the daemon approach.
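As a sketch of the nohup approach (the log-file path and the daemonize_cmd helper are my own, not from the answer), you could wrap the remote command so it detaches from the tty and survives the SSH session ending:

```python
def daemonize_cmd(cmd, logfile='/tmp/sniffer.log'):
    # Wrap a remote command with nohup, redirect its output to a
    # log file, background it, and echo its PID so the caller can
    # kill it later with `kill <pid>`.
    return 'nohup %s > %s 2>&1 & echo $!' % (cmd, logfile)

# daemonize_cmd('sudo ./a.out')
#   -> 'nohup sudo ./a.out > /tmp/sniffer.log 2>&1 & echo $!'
```

The returned string would be passed to exec_command; reading the first line of stdout gives the PID that a stopPacketReceiver could later kill.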
I am trying to send a 'powershell' command through telnet (from Linux to Windows), and it fails with a timeout.
Other commands I send through telnet, such as the 'dir' command, work fine.
This is part of the code I'm using:
p = host.pobject()
p.cmd = cmd
child = self.connection or self.OpenTelnetConnection()
t = stopwatch.Timer()
try:
    child.sendline('{0}\r'.format(cmd))
    child.expect(self.prompt, timeout=timeout)
    # output = child.before
    output = child.after
    if stdout:
        sys.stdout.write(child.after)
        sys.stdout.flush()
    child.sendline('echo %errorlevel%\r')
    child.expect(self.prompt)
    p.rc = int(child.after.split("\r\n")[1])
    p.runtime = t.stop()
    if p.rc:
        p.stderr = output.split("\r\n")[1:-1]
    else:
        p.stdout = output.split("\r\n")[1:-1]
    return p
except Exception, e:
    self.report.Error("Failed to run command {0}. {1}".format(cmd, e),
                      exception=["TestFailure"], testName="WindowsHost")
The solution I found is to pass the PowerShell command as the first argument.
For example, if I want to send the 'host' command to PowerShell, I'll send:
'powershell host'
I'm trying to set up an SSH tunnel via pexpect with the following code:
#!/bin/env python2.4
import pexpect, sys

child = pexpect.spawn('ssh -CfNL 0.0.0.0:3306:127.0.0.1:3306 user@server.com')
child.logfile = sys.stdout
while True:
    code = child.expect([
        'Are you sure you want to continue connecting \(yes/no\)\?',
        'password:',
        pexpect.EOF,
        pexpect.TIMEOUT
    ])
    if code == 0:
        child.sendline('yes')
    elif code == 1:
        child.sendline('passwordhere')
    elif code == 2:
        print ".. EOF"
        break
    elif code == 3:
        print ".. Timeout"
        break
What I expect is that after the password is sent and the SSH tunnel is established, the while loop exits so that I can continue with other business logic.
But the code above blocks until the timeout (about 30 seconds) even when the SSH tunnel is established.
Could anyone give me some advice on how to avoid the block?
I think the simplest solution is to use SSH host-key authentication, combined with backgrounding ssh with &. This is a very basic implementation, but you could enhance it to kill the process after you're done. Also, note that I added -n to your ssh args, since we're backgrounding the process.
import subprocess

USER = 'user'
HOST = 'server.com'
cmd = r"""ssh -CfNnL 0.0.0.0:3306:127.0.0.1:3306 %s@%s &""" % (USER, HOST)
subcmd = cmd.split(' ')
retval = subprocess.Popen(subcmd, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
stat = retval.poll()
while stat == None:
    stat = retval.poll()
print "ssh in background"
Finally, if you don't already have ServerAliveInterval in your ssh_config, consider calling ssh as ssh -o ServerAliveInterval=30 <other_options_and_args> to make sure you detect loss of the tunnel as soon as possible, and to keep it from aging out of any NAT implementations in the path (during inactivity).
I'm writing some pexpect stuff that's basically sending commands over telnet.
But, it's possible that my telnet session could die (due to networking problems, a cable getting pulled, whatnot).
How do I initialize a telnet session such that, if it dies, I can catch the failure, reconnect, and continue execution where it left off?
Is this possible?
IMHO, you're normally better off with a currently maintained library like Exscript or telnetlib, but the efficient incantation in pexpect is:
import pexpect as px

cmds = ['cmd1', 'cmd2', 'cmd3']
retcode = -1
while (retcode < 10):
    if (retcode < 2):
        child = px.spawn('telnet %s %s' % (ip_addr, port))
    lregex = '(sername:)'             # Insert regex for login prompt here
    pregex = '(prompt1>)|(prompt2$)'  # Insert your prompt regex here
    # retcode = 0 for px.TIMEOUT, 1 for px.EOF, 2 for lregex match...
    retcode = child.expect([px.TIMEOUT, px.EOF, lregex, pregex], timeout=10)
    if (retcode == 2):
        do_login(child)  # Build a do_login() method to send user / passwd
    elif (2 < retcode < 10) and (len(cmds) > 0):
        cmd = cmds.pop(0)
        child.sendline(cmd)
    else:
        retcode = 10
I did this, and it worked:
def telnet_connect():
    print "Trying to connect via telnet..."
    telnet_connecting = pexpect.spawn('telnet localhost 10023', timeout=2)
    while 1:
        try:
            telnet_connecting.expect('login: ')
            break
        except:
            telnet_connecting = telnet_connect()
            break
    return telnet_connecting
Recursion FTW?
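The recursion works, but every failed attempt adds a stack frame, so a long outage can exhaust the stack. The same retry can be written iteratively; here is a sketch with a generic connect callable standing in for the pexpect.spawn/expect pair (retry_connect and its parameters are my invention, not from the answer):

```python
import time

def retry_connect(connect, max_attempts=5, delay=1.0):
    # Call `connect()` until it succeeds or the attempts run out.
    # `connect` should raise on failure and return a session on success.
    last_err = None
    for attempt in range(max_attempts):
        try:
            return connect()
        except Exception as err:
            last_err = err
            time.sleep(delay)
    raise RuntimeError('gave up after %d attempts' % max_attempts) from last_err
```

Here connect would be a small wrapper that spawns telnet and expects the login prompt, raising on pexpect.TIMEOUT.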