I am attempting to implement port forwarding on my Mac by running a series of terminal commands that can be found here. I am attempting to run the port mapping commands listed at that link using subprocess in Python.
The command I am attempting to run using subprocess is:
echo "
rdr pass inet proto tcp from any to any port 80 -> 127.0.0.1 port 8080
" | sudo pfctl -ef -
My Python implementation of the above command:
from subprocess import Popen, PIPE
commands = ['echo', '"rdr pass inet proto tcp from any to any port 80 -> 127.0.0.1 port 8080"\n', '|', 'sudo', 'pfctl', '-ef', '-']
p = Popen(['sudo', '-S']+commands, stdin=PIPE, stderr=PIPE, universal_newlines=True)
print(p.communicate('root_password\n')[1])  # read the password for sudo from stdin
However, this Python script does not work: when I run sudo pfctl -s nat to display all port forwarding rules on my machine, my expected output
rdr pass inet proto tcp from any to any port = 80 -> 127.0.0.1 port 8080
is not displayed in the shell.
However, running
echo "
rdr pass inet proto tcp from any to any port 80 -> 127.0.0.1 port 8080
" | sudo pfctl -ef -
does indeed produce
rdr pass inet proto tcp from any to any port = 80 -> 127.0.0.1 port 8080
after running the display command sudo pfctl -s nat.
How can my subprocess implementation be improved so that the rule is actually installed and listed correctly?
Edit: oddly, I can run the display command sudo pfctl -s nat using subprocess:
p = Popen(['sudo', '-S']+'pfctl -s nat'.split(), stdin=PIPE, stderr=PIPE, universal_newlines=True)
print(p.communicate('myrootpassword\n')[1])
to achieve the same output as sudo pfctl -s nat run in the shell itself.
I am running the command and the Python attempt on macOS Sierra 10.12.5.
You're trying to use Popen() to execute a pipeline of several commands, and it doesn't work that way: pipelines are a shell feature.
I believe you need to call Popen() several times, one per command, and explicitly hook their stdin/stdout to each other.
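A minimal sketch of that approach for your case (assuming sudo can prompt for the password on the controlling terminal, or is configured passwordless):
from subprocess import Popen, PIPE

rule = 'rdr pass inet proto tcp from any to any port 80 -> 127.0.0.1 port 8080'
# First process: echo writes the rule to its stdout.
echo = Popen(['echo', rule], stdout=PIPE)
# Second process: pfctl reads the rule from echo's stdout.
pf = Popen(['sudo', 'pfctl', '-ef', '-'], stdin=echo.stdout, stderr=PIPE,
           universal_newlines=True)
echo.stdout.close()  # let echo receive SIGPIPE if pfctl exits early
print(pf.communicate()[1])
In fact, echo is redundant here: opening pfctl with stdin=PIPE and calling pf.communicate(rule + '\n') would achieve the same thing with a single process.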
subprocess.Popen does not interpret shell metacharacters (including |); it passes them through as literal arguments. For example:
>>> p = Popen(['echo', 'abcdefg', '|', 'wc'], stdout=PIPE, universal_newlines=True)
>>> p.communicate()
('abcdefg | wc\n', None)
Note that abcdefg was not piped into wc, because the pipe character was passed as a literal argument to echo instead of creating a process pipe. Likewise, your rule text isn't being piped into sudo pfctl. You can force Popen to interpret shell metacharacters by using shell=True (which takes a string as the command, not a list):
>>> p = Popen(" ".join(['echo', 'abcdefg', '|', 'wc']), stdout=PIPE, universal_newlines=True, shell=True)
>>> p.communicate()
(' 1 1 8\n', None)
Shell metacharacters are left uninterpreted for good reason: if you don't trust your input, this prevents shell injection attacks. If you do trust it, which I think you do since you're passing in root's password, then just use shell=True.
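Applied to your original command, a sketch (note that sudo will prompt for the password on the terminal here; feeding the password via sudo -S does not mix well with a pipeline, because sudo's stdin is the pipe from echo rather than your script):
from subprocess import Popen, PIPE

cmd = ('echo "rdr pass inet proto tcp from any to any port 80 -> '
       '127.0.0.1 port 8080" | sudo pfctl -ef -')
p = Popen(cmd, stderr=PIPE, universal_newlines=True, shell=True)
print(p.communicate()[1])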
HTH.
Related
I am developing a Python 3.11 program which will run on a few different servers and needs to connect to the local Redis server. On each machine Redis might run on a different port: sometimes the default 6379, but not always.
On the command line I can issue the following command, which works well on both my Linux and macOS servers:
(base) bob#Roberts-Mac-mini ~ % sudo lsof -n -i -P | grep LISTEN | grep IPv4 | grep redis
redis-ser 60014 bob 8u IPv4 0x84cd01f56bf0ee21 0t0 TCP *:9001 (LISTEN)
What's the better way to get the running port using python functions/libraries?
What if you run your command within a Python script using the os library:
import os
cmd = 'ls -l'  # change this to the command you want to run
os.system(cmd)
Or else you could use the subprocess library as well:
import subprocess
print(subprocess.check_output(['ls', '-l']))
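If you'd rather stay in Python than parse lsof output, here is a sketch using the third-party psutil library (assumptions: the process name starts with "redis-ser" as in your lsof output, and, as with lsof, seeing other users' sockets may require root):
import psutil  # pip install psutil

def find_redis_port():
    # Scan listening TCP sockets and return the port owned by redis-server.
    for conn in psutil.net_connections(kind='tcp'):
        if conn.status == psutil.CONN_LISTEN and conn.pid is not None:
            if psutil.Process(conn.pid).name().startswith('redis-ser'):
                return conn.laddr.port
    return None

print(find_redis_port())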
Consider this python script:
import subprocess
nc = subprocess.Popen(["/bin/bash"], stdin=subprocess.PIPE, text=True)
nc.stdin.write("nc localhost 2222\n")
nc.stdin.write("pwd\n")
When I listen with netcat as nc -lnvp 2222
I successfully connect and the string pwd is sent; nothing more happens, of course.
Now, in a different scenario, I have an unstable PHP reverse shell and connect to it through netcat successfully. I execute this script to upgrade the shell and print the current directory. (By the way, that listener is another Popen instance.)
import subprocess
nc = subprocess.Popen(["/bin/bash"], stdin=subprocess.PIPE, text=True)
nc.stdin.write("nc localhost 2222\n")
nc.stdin.write('python3 -c "import pty;pty.spawn(\'/bin/bash\')"\n')
nc.stdin.write('pwd\n')
Now when I execute that Python script, I expected the input to go through netcat, get executed in that new bash tty, spawn a stable shell, and pass pwd to return the current directory. But the script only works up to spawning the stable shell; after that, stdin input doesn't go through nc, or something else happens that I'm not aware of.
What's happening here?
Edit: I need to be able to run multiple commands. Using subprocess.communicate(input=<command>) causes a deadlock and can't accept further stdin.
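For the multiple-commands problem from the edit, one option is a sketch like this: write one line at a time and flush after each write, so lines don't sit in Python's block buffer (the sleep is a crude stand-in for waiting on a prompt; this does not address the pty question itself):
import subprocess
import time

nc = subprocess.Popen(["/bin/bash"], stdin=subprocess.PIPE, text=True)
for line in [
    "nc localhost 2222\n",
    'python3 -c "import pty;pty.spawn(\'/bin/bash\')"\n',
    "pwd\n",
]:
    nc.stdin.write(line)
    nc.stdin.flush()  # without this, buffered lines may arrive all at once
    time.sleep(1)     # crude pacing; a real tool would wait for a prompt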
I need to ssh into a machine via a bastion, so the command gets very long:
ssh -i <pemfile location> -A -o 'proxycommand ssh -i <pemfile location> ec2-user@<bastion ip address> -W %h:%p' hadoop@<machine ip>
Since the command is so long, I wrote a Python script which takes the IP addresses and the pemfile location as inputs and does the ssh.
#!/usr/local/bin/python3
import argparse
import subprocess
import os
import sys
import errno
parser = argparse.ArgumentParser(description="Tool to ssh into EMR via a bastion host")
parser.add_argument('master', type=str, help='IP Address of the EMR master-node')
parser.add_argument('bastion', type=str, help='IP Address of bastion EC2 instance')
parser.add_argument('pemfile', type=str, help='Path to the pemfile')
args = parser.parse_args()
cmd_list = ["ssh", "-i", args.pemfile, "-A", "-o", "'proxycommand ssh -i {} ec2-user#{} -W %h:%p'".format(args.pemfile, args.bastion), "hadoop#{}".format(args.master)]
command = ""
for w in cmd_list:
command = command + " " + w
print("")
print("Executing command : ", command)
print("")
subprocess.call(cmd_list)
I get the following error when I run this script:
command-line: line 0: Bad configuration option: 'proxycommand
But I am able to run the exact command via bash.
Why is the ssh from python script failing then?
You are making the (common) mistake of mixing syntactic quotes with literal quotes. At the command line, the shell removes any quotes before passing the string to the command you are running; you should simply do the same.
cmd_list = ["ssh", "-i", args.pemfile, "-A",
"-o", "proxycommand ssh -i {} ec2-user#{} -W %h:%p".format(
args.pemfile, args.bastion), "hadoop#{}".format(args.master)]
See also When to wrap quotes around a shell variable? for a discussion of how quoting works in the shell, and perhaps Actual meaning of 'shell=True' in subprocess as a starting point for the Python side.
However, scripting interactive SSH sessions is going to be brittle; I recommend you look into a proper Python library like Paramiko for this sort of thing.
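For completeness, a sketch of the Paramiko route through the bastion using paramiko.ProxyCommand (untested; assumes the same pemfile works for both hops, mirroring the -o proxycommand setup above):
import paramiko

proxy = paramiko.ProxyCommand(
    "ssh -i {pem} -W {master}:22 ec2-user@{bastion}".format(
        pem=args.pemfile, master=args.master, bastion=args.bastion))
client = paramiko.SSHClient()
client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
client.connect(args.master, username="hadoop",
               key_filename=args.pemfile, sock=proxy)
_, stdout, _ = client.exec_command("hostname")
print(stdout.read().decode())
client.close()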
I'm using the Python subprocess module to call the iperf command. I then parse the output and get the source port of the iperf client, e.g. 4321, but when I monitor the network, port 4321 is missing and I can only see UDP ports 12851 and 0. Strangely, when I call iperf directly from the Ubuntu terminal, I can see the source port that iperf reports (4321) on the network.
Can anybody explain why this change of port is happening, and how I can force subprocess to send the data on the original port that iperf reports?
This is how I call iperf and obtain the source port:
import subprocess, sys, os
cmd = "iperf -c %s -p %s -u -b %sm -t 10 -l 1500" %(self.ip,self.port,self.bw)
print cmd
p = subprocess.Popen(cmd, stdout=subprocess.PIPE, shell=True)
(output, err) = p.communicate()
o_list = output.split(']')
o_list = o_list[1].split(' ')
for i in range(len(o_list)):
    if o_list[i] == "port":
        self.my_port = int(o_list[i+1])
        break
#endIf
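As an aside, positional splitting on ']' and spaces is fragile; here is a sketch of a more robust extraction using a regex (assuming the usual iperf client header line of the form local <ip> port <n> connected):
import re
import subprocess

out = subprocess.check_output(
    ["iperf", "-c", "10.1.1.2", "-p", "5001", "-u", "-b", "10m",
     "-t", "10", "-l", "1500"], universal_newlines=True)
match = re.search(r"local \S+ port (\d+) connected", out)
my_port = int(match.group(1)) if match else None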
Running the same command in the terminal gives a different result:
iperf -c 10.1.1.2 -p 5001 -u -b 10m -t 10 -l 1500
I'm doing a project in Software-Defined Networking area and using POX as network controller, so I can easily monitor desired packets (here UDP packets) and their source and destination ports. This is the code that I added to forwarding.l2_learning to monitor UDP ports:
if msg.match.dl_type == 0x0800:
    if msg.match.nw_proto == 17:
        log.warning("FOUND UDP" + str(msg.match.tp_src))
Thank you in advance
I just want to run tcpdump in the background using paramiko.
Here is the part of the code:
ssh = paramiko.SSHClient()
ssh.set_missing_host_key_policy(paramiko.AutoAddPolicy())
ssh.connect(host, username=login, password=password)
transport = ssh.get_transport()
channel = transport.open_session()
channel.get_pty()
channel.set_combine_stderr(True)
cmd = "(nohup tcpdump -i eth1 port 443 -w /tmp/dump20150317183305940107.pcap) &"
channel.exec_command(cmd)
status = channel.recv_exit_status()
After I execute this code, pgrep tcpdump returns nothing.
If I remove the & sign, tcpdump runs correctly, but my ssh shell is blocked.
How can I run tcpdump in background correctly?
Commands I've tried:
cmd = 'nohup tcpdump -i eth1 port 443 -w /tmp/dump20150317183305940107.pcap &\n'
cmd = "screen -d -m 'tcpdump -i eth1 port 443 -w /tmp/dump20150317183305940107.pcap'"
cmd = 'nohup sleep 5 && echo $(date) >> "test.log" &'
With & you make your remote command exit instantly. The remote sshd will therefore likely kill all processes that were started from your command invocation (this depends on the implementation, but openssh does it). In your case, you just spawn a new process nohup tcpdump which returns instantly due to the & at the end. channel.recv_exit_status() will only block until the exit code of the & operation is ready, which it instantly is. Your code then terminates, terminating your ssh session, which makes the remote sshd kill the spawned nohup tcpdump process. That's why you end up with no tcpdump process.
Here's what you can do:
Since exec_command is going to spawn a new thread for your command, you can just leave the channel open and proceed with other tasks. But make sure to empty the buffers every now and then (for verbose remote commands) to prevent paramiko from stalling.
import time

import paramiko

ssh = paramiko.SSHClient()
ssh.set_missing_host_key_policy(paramiko.AutoAddPolicy())
ssh.connect(host, username=login, password=password)
transport = ssh.get_transport()
channel_tcpdump = transport.open_session()
channel_tcpdump.get_pty()
channel_tcpdump.set_combine_stderr(True)
cmd = "tcpdump -i eth1 port 443 -w /tmp/dump20150317183305940107.pcap" # command will never exit
channel_tcpdump.exec_command(cmd) # will return instantly due to new thread being spawned.
# do something else
time.sleep(15) # wait 15 seconds
_,stdout,_ = ssh.exec_command("pgrep tcpdump") # or explicitly pkill tcpdump
print(stdout.read())  # other command, different shell
channel_tcpdump.close() # close channel and let remote side terminate your proc.
time.sleep(10)
You just need to add a sleep command: change
cmd = "(nohup tcpdump -i eth1 port 443 -w /tmp/dump20150317183305940107.pcap) &"
to
cmd = "(nohup tcpdump -i eth1 port 443 -w /tmp/dump20150317183305940107.pcap) & sleep 1"
The sleep keeps the channel open just long enough for the nohup'd tcpdump to detach before the session closes.