Different UDP source ports while using Python subprocess

I'm using the Python subprocess module to call the iperf command. I then parse the output and get the source port of the iperf client, e.g. 4321, but when I monitor the network, port 4321 is missing and I can only see UDP ports 12851 and 0. Strangely, when I call the iperf command directly from an Ubuntu terminal, I do see the source port that iperf reports (4321) on the network.
Can anybody explain why this change of port is happening, and how I can force subprocess to send the data on the original port that iperf reports?
This is how I call iperf and obtain the source port:
import subprocess, sys, os

cmd = "iperf -c %s -p %s -u -b %sm -t 10 -l 1500" % (self.ip, self.port, self.bw)
print cmd
p = subprocess.Popen(cmd, stdout=subprocess.PIPE, shell=True)
(output, err) = p.communicate()
o_list = output.split(']')
o_list = o_list[1].split(' ')
for i in range(len(o_list)):
    if o_list[i] == "port":
        self.my_port = int(o_list[i + 1])
        break
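If splitting on whitespace proves fragile (iperf's spacing varies between versions), a regex over the raw output is a possible alternative. This is just a sketch, assuming the usual client line of the form "local 10.1.1.1 port 4321 connected with 10.1.1.2 port 5001":
import re

# Hypothetical helper: pull the client's source port out of iperf's output.
def parse_iperf_port(output):
    m = re.search(r"port (\d+) connected", output)
    return int(m.group(1)) if m else None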
And when I use the same command in a terminal I get a different result:
iperf -c 10.1.1.2 -p 5001 -u -b 10m -t 10 -l 1500
I'm doing a project in the Software-Defined Networking area and using POX as the network controller, so I can easily monitor the packets I care about (here, UDP packets) and their source and destination ports. This is the code that I added to forwarding.l2_learning to monitor UDP ports:
if msg.match.dl_type == 0x0800:
    if msg.match.nw_proto == 17:
        log.warning("FOUND UDP" + str(msg.match.tp_src))
Thank you in advance

Related

Finding on which port a given server is running within python program

I am developing a Python 3.11 program which will run on a few different servers and needs to connect to the local Redis server. On each machine Redis might run on a different port, sometimes the default 6379 but not always.
On the commandline I can issue the following command which on both my Linux and MacOS servers works well:
(base) bob@Roberts-Mac-mini ~ % sudo lsof -n -i -P | grep LISTEN | grep IPv4 | grep redis
redis-ser 60014 bob 8u IPv4 0x84cd01f56bf0ee21 0t0 TCP *:9001 (LISTEN)
What's the better way to get the running port using python functions/libraries?
What if you run your commands within a Python script using the os library:
import os
cmd = 'ls -l'  # change this to the command you want to run
os.system(cmd)
Or you could use the subprocess library instead:
import subprocess
print(subprocess.check_output(['ls', '-l']))
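Building on that, here is a minimal sketch that runs the same lsof pipeline from the question via subprocess and extracts the port number; it assumes the output format shown above and that the script already has whatever privileges lsof needs:
import re
import subprocess

def find_redis_port():
    # shell=True is needed because the command is a pipeline.
    out = subprocess.check_output(
        "lsof -n -i -P | grep LISTEN | grep IPv4 | grep redis",
        shell=True, universal_newlines=True)
    # A matching line ends like: "TCP *:9001 (LISTEN)"
    m = re.search(r"TCP \S*:(\d+) \(LISTEN\)", out)
    return int(m.group(1)) if m else None

print(find_redis_port())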

Unable to use Subprocess module to successfully parse and run long command

I am attempting to implement port forwarding on my Mac by running a series of terminal commands that can be found here. I am attempting to run the port-mapping commands listed at that link using subprocess in Python.
The command I am attempting to run using subprocess is:
echo "
rdr pass inet proto tcp from any to any port 80 -> 127.0.0.1 port 8080
" | sudo pfctl -ef -
My Python implementation of the above command:
from subprocess import Popen, PIPE
commands = ['echo', '"rdr pass inet proto tcp from any to any port 80 -> 127.0.0.1 port 8080"\n', '|', 'sudo', 'pfctl', '-ef', '-']
p = Popen(['sudo', '-S']+commands, stdin=PIPE, stderr=PIPE, universal_newlines=True)
print(p.communicate('root_password\n')[1])  # read the password for sudo from stdin
However, this Python script does not work: when I run sudo pfctl -s nat to display all port forwarding rules on my machine, my expected output
rdr pass inet proto tcp from any to any port = 80 -> 127.0.0.1 port 8080
is not displayed to the shell.
However, running
echo "
rdr pass inet proto tcp from any to any port 80 -> 127.0.0.1 port 8080
" | sudo pfctl -ef -
does indeed produce
rdr pass inet proto tcp from any to any port = 80 -> 127.0.0.1 port 8080
after running the display command sudo pfctl -s nat.
How can my subprocess implementation be improved so that the rule listing is outputted correctly?
Edit: oddly, I can run the display command sudo pfctl -s nat using subprocess
p = Popen(['sudo', '-S'] + 'pfctl -s nat'.split(), stdin=PIPE, stderr=PIPE, universal_newlines=True)
print(p.communicate('myrootpassword\n')[1])
to achieve the same output as sudo pfctl -s nat when run in the shell itself.
I am running the command and the Python attempt on macOS Sierra 10.12.5.
You're trying to use Popen() to execute a pipeline of several commands, and it doesn't work that way. That's a shell feature.
I believe you need to call Popen() several times, one per command, and explicitly hook their stdin/stdout to each other.
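As a minimal sketch of that approach for the pipeline in the question (the rule text is taken from the question; without -S, sudo is assumed to prompt for the password on the terminal):
from subprocess import Popen, PIPE

rule = "rdr pass inet proto tcp from any to any port 80 -> 127.0.0.1 port 8080\n"

# First process: produce the rule text, with stdout captured as a pipe.
echo = Popen(['echo', rule], stdout=PIPE)
# Second process: read the rule on stdin, which is what `pfctl -ef -` expects.
pfctl = Popen(['sudo', 'pfctl', '-ef', '-'], stdin=echo.stdout, stderr=PIPE,
              universal_newlines=True)
echo.stdout.close()  # let echo receive SIGPIPE if pfctl exits early
print(pfctl.communicate()[1])
For this particular pipeline you don't even need the echo process: pass stdin=PIPE to the pfctl Popen and hand the rule text to communicate() directly.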
subprocess.Popen does not interpret shell metacharacters (including |) when given a list; each element is passed to the program as a literal argument. For example:
>>> p = Popen(['echo', 'abcdefg', '|', 'wc'], stdout=PIPE, universal_newlines=True)
>>> p.communicate()
('abcdefg | wc\n', None)
Note that abcdefg was not piped into wc, because the pipe character was treated as a literal argument instead of creating a process pipe. Thus, your command isn't being piped into sudo. You can force Popen to interpret shell metacharacters by using shell=True (which takes a string as the command, not a list):
>>> p = Popen(" ".join(['echo', 'abcdefg', '|', 'wc']), stdout=PIPE, universal_newlines=True, shell=True)
>>> p.communicate()
(' 1 1 8\n', None)
Shell metacharacters are left uninterpreted for good reason if you don't trust your input (to prevent injection attacks). If you do trust it, which I think you do since you're passing in root's password, then just use shell=True.
HTH.

How to pass Unix Commands across network using python

So basically I have this remote computer with a bunch of files.
I want to run unix commands (such as ls or cat) and receive them locally.
Currently I have connected via Python's sockets (I know the IP address of the remote computer). But doing:
data = None
message = "ls\n"
sock.send(message)
while not data:
    data = sock.recv(1024)  # stalls here forever
...
is not getting me anything.
There is an excellent Python library for this. It's called Paramiko: http://www.paramiko.org/
Paramiko is, among other things, an SSH client which lets you invoke programs on remote machines running sshd (which includes lots of standard servers).
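A minimal sketch of running a command over SSH with Paramiko could look like this (the host and credentials are placeholders):
import paramiko

client = paramiko.SSHClient()
client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
client.connect('10.0.0.5', username='user', password='secret')  # placeholders

# Run a command on the remote machine and read its output locally.
stdin, stdout, stderr = client.exec_command('ls')
print(stdout.read())
client.close()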
You can use Python's subprocess module to accomplish your task. It is a built-in module and has no external dependencies.
For your problem, I would suggest the Popen method, which runs a command on the remote computer and returns the result to your machine:
import subprocess

out = subprocess.Popen(cmd, shell=True, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
t = out.stdout.read() + out.stderr.read()
socket.send(t)
where cmd is the command you want to execute.
This sends the result of the command back over the socket to your machine.
Hope that helps !!!
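For context, this answer assumes something on the remote machine is reading commands off the socket and executing them; the recv in the question stalls because nothing remote ever runs the command. A minimal sketch of that remote side, with a placeholder port:
import socket
import subprocess

srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.bind(('', 50007))  # placeholder port
srv.listen(1)
conn, addr = srv.accept()

while True:
    cmd = conn.recv(1024)
    if not cmd:
        break
    out = subprocess.Popen(cmd, shell=True,
                           stdout=subprocess.PIPE, stderr=subprocess.PIPE)
    conn.send(out.stdout.read() + out.stderr.read())
conn.close()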
This is what I did for your situation.
In terminal 1, I set up a remote shell over a socket using ncat, a nc variant:
$ ncat -l -v 50007 -e /bin/bash
In terminal 2, I connect to the socket with this Python code:
$ cat python-pass-unix-commands-socket.py
import socket
sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
sock.connect(('', 50007))
sock.send('ls\n')
data = sock.recv(1024)
print data
sock.close()
$ python python-pass-unix-commands-socket.py
This is the output I get in terminal 1 after running the command:
Ncat: Version 6.40 ( http://nmap.org/ncat )
Ncat: Listening on :::50007
Ncat: Listening on 0.0.0.0:50007
Ncat: Connection from 127.0.0.1.
Ncat: Connection from 127.0.0.1:39507.
$
And in terminal 2:
$ python python-pass-unix-commands-socket.py
alternating-characters.in
alternating-characters.rkt
angry-children.in
angry-children.rkt
angry-professor.in
angry-professor.rkt
$

Why the Python tcpdump command can't capture packets to a file

I have the following tcpdump command written in Python, but it doesn't produce any output file with the requested packets, although I have tcpdump installed and tested on my Ubuntu VM:
command = 'sudo /usr/sbin/tcpdump -i eth1 {} -c {} -s 0 -w {}'\
    .format('tcp host 10.0.2.15', '30000', '/home/results/xyz.pcap')
cat test.py
import os
command = '/usr/sbin/tcpdump -i eth1 {} -c {} -s 0 -w {}'.format( 'host 192.168.254.74','30000','res.pcap')
print(command)
os.system(command)
sudo python test.py
/usr/sbin/tcpdump -i eth1 host 192.168.254.74 -c 30000 -s 0 -w res.pcap
tcpdump: listening on eth1, link-type EN10MB (Ethernet), capture size 10 bytes
^C0 packets captured
6 packets received by filter
0 packets dropped by kernel
ls -l | grep test
-rw------- 1 admin admin 155 Dec 2 23:05 test.py
Seems to work just fine for me.
The test file is 'test.py'. I run it under sudo and exit after some time. I can see that 6 packets were received by the filter and the file size is > 0.
Make sure the command itself runs properly outside of Python.
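If you'd rather stop the capture after a fixed time instead of blocking until -c packets arrive, one possible sketch (interface, filter, and paths taken from the question; run the script under sudo as above) is to launch tcpdump with subprocess and terminate it yourself, since tcpdump flushes and closes the pcap file cleanly on SIGTERM:
import subprocess
import time

cmd = ['/usr/sbin/tcpdump', '-i', 'eth1', '-c', '30000', '-s', '0',
       '-w', '/home/results/xyz.pcap', 'tcp', 'host', '10.0.2.15']
proc = subprocess.Popen(cmd)
time.sleep(30)     # capture for 30 seconds (or until 30000 packets arrive)
proc.terminate()   # SIGTERM: tcpdump writes out the file cleanly
proc.wait()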

Process dies, if it is run via paramiko ssh session and with "&" in the end

I just want to run tcpdump in the background using paramiko.
Here is the part of the code:
ssh = paramiko.SSHClient()
ssh.set_missing_host_key_policy(paramiko.AutoAddPolicy())
ssh.connect(host, username=login, password=password)
transport = ssh.get_transport()
channel = transport.open_session()
channel.get_pty()
channel.set_combine_stderr(True)
cmd = "(nohup tcpdump -i eth1 port 443 -w /tmp/dump20150317183305940107.pcap) &"
channel.exec_command(cmd)
status = channel.recv_exit_status()
After I execute this code, pgrep tcpdump returns nothing.
If I remove the & sign, tcpdump runs correctly, but my SSH shell is blocked.
How can I run tcpdump in the background correctly?
Commands I've tried:
cmd = 'nohup tcpdump -i eth1 port 443 -w /tmp/dump20150317183305940107.pcap &\n'
cmd = "screen -d -m 'tcpdump -i eth1 port 443 -w /tmp/dump20150317183305940107.pcap'"
cmd = 'nohup sleep 5 && echo $(date) >> "test.log" &'
With & you make your remote command exit instantly. The remote sshd will then likely kill all processes that were started from your command invocation (this depends on the implementation, but OpenSSH does it). In your case, you spawn nohup tcpdump, which returns instantly because of the & at the end. channel.recv_exit_status() blocks only until the exit code of that & operation is ready, which it is immediately. Your code then terminates, closing your SSH session, and the remote sshd kills the spawned tcpdump process. That's why you end up with no tcpdump process.
Here's what you can do:
Since exec_command is going to spawn a new thread for your command, you can just leave it open and proceed with other tasks. But make sure to empty buffers every now and then (for verbose remote commands) to prevent paramiko from stalling.
ssh = paramiko.SSHClient()
ssh.set_missing_host_key_policy(paramiko.AutoAddPolicy())
ssh.connect(host, username=login, password=password)
transport = ssh.get_transport()
channel_tcpdump = transport.open_session()
channel_tcpdump.get_pty()
channel_tcpdump.set_combine_stderr(True)
cmd = "tcpdump -i eth1 port 443 -w /tmp/dump20150317183305940107.pcap" # command will never exit
channel_tcpdump.exec_command(cmd) # will return instantly due to new thread being spawned.
# do something else
time.sleep(15) # wait 15 seconds
_,stdout,_ = ssh.exec_command("pgrep tcpdump") # or explicitly pkill tcpdump
print stdout.read() # other command, different shell
channel_tcpdump.close() # close channel and let remote side terminate your proc.
time.sleep(10)
You just need to add a sleep command: change
cmd = "(nohup tcpdump -i eth1 port 443 -w /tmp/dump20150317183305940107.pcap) &"
to
cmd = "(nohup tcpdump -i eth1 port 443 -w /tmp/dump20150317183305940107.pcap) & sleep 1"
(Note that &; is a shell syntax error; & already terminates the first command, so sleep 1 simply follows it.)
