How to modify scapy packet payload - python

I have a python file that declares sets of packets to be sent through a system that modifies the payload and sends them back. A script imports the packets from the python file, sends and receives them and needs to be able to predict what the modified packets will look like when they come back.
My question is, how can I produce packets with modified payload from the list of packets read from the file?
The input file defines packets with variable length headers, something like:
payload_len = 50
pkts = (Ether()/IP()/Raw(payload_len*b'\x00'),
        Ether()/IP()/TCP()/Raw(payload_len*b'\x00'),
        Ether()/IP()/UDP()/Raw(payload_len*b'\x00'))
The system that modifies the packets puts a four byte known tag (e.g. 0xdeadbeef) in the payload. It can put that tag either at the start or the end of the payload.
So the script needs to do something like the following for every packet in the list:
from packet_list import *
predict = pkts
predict[0].payload[0] = b'\xde'
predict[0].payload[1] = b'\xad'
predict[0].payload[2] = b'\xbe'
predict[0].payload[3] = b'\xef'
or
predict[2].payload[payload_len-4] = b'\xde'
predict[2].payload[payload_len-3] = b'\xad'
predict[2].payload[payload_len-2] = b'\xbe'
predict[2].payload[payload_len-1] = b'\xef'

You can use load in order to access Raw bytes:
for pkt in pkts:
    payload = pkt.lastlayer()
    payload.load = b"\xde\xad\xbe\xef" + payload.load[4:]  # or payload.load[:-4] + b"\xde\xad\xbe\xef"
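If predict needs to be a separate list (so the originals in pkts stay untouched), copying each packet before patching its payload should work. A minimal sketch; predict_packet and tag_at_start are illustrative names, not part of the original code:

from packet_list import pkts   # the question's input module

TAG = b"\xde\xad\xbe\xef"

def predict_packet(pkt, tag_at_start=True):
    p = pkt.copy()                 # leave the original packet untouched
    raw = p.lastlayer()            # the Raw layer that carries the payload
    if tag_at_start:
        raw.load = TAG + raw.load[len(TAG):]
    else:
        raw.load = raw.load[:-len(TAG)] + TAG
    return p

predict = [predict_packet(pkt) for pkt in pkts]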

Related

Scapy - TCPSession from list of packets

I'm trying to use the TCPSession functionality (like: sniff(offline="./my_file.pcap", prn=func, store=False, session=TCPSession)) but without creating a PCAP file.
I receive a list of raw packets, so I can build a list of Scapy packets, but I need the TCPSession functionality because of the HTTP packets: without TCPSession the headers and the body end up in different packets, so the HTTP layer class can't identify the body part.
So I have this code that finds the HTTP Requests:
import pickle
from scapy.all import *
from scapy.layers import http
load_layer("http")

def expand(x):
    yield x
    while x.payload:
        x = x.payload
        yield x

file_pickle = open('prueba.pkl', 'rb')
pkt_list = pickle.load(file_pickle)
for pkt_raw in pkt_list:
    p = Ether(pkt_raw)
    if p.haslayer(IP):
        srcIP = p[IP].src
    if p.haslayer(HTTP):
        if p.haslayer(HTTPRequest):
            print(list(expand(p)), end="\n---------------------------------------------------\n")
The execution of this code finds the HTTP Requests but without the Body part of the POST Requests:
[...]<HTTPRequest Method='POST' Path='/NP3POCF.jsp' Http_Version='HTTP/1.1' Accept='*/*' Accept_Encoding='gzip, deflate' Connection='keep-alive' Content_Length='56' Content_Type='application/x-www-form-urlencoded' Host='172.16.191.129' User_Agent='python-requests/2.7.0 CPython/3.7.5 Linux/5.3.0-kali2-amd64' |>]
With a sniffer with TCPSession (such as Scapy sniff function) the packet has a Raw Layer that contains the body of the request.
Any help applying TCPSession? Thank you.
You can call sniff(offline=X) with X a packet list, a packet, a file name or a list of files.
Make sure you are using the github development version (see https://scapy.readthedocs.io/en/latest/installation.html#current-development-version), as I'm not sure if this is in a release yet.
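In other words, you can skip the PCAP file entirely and hand sniff() the list you already have. A minimal sketch under that assumption (p_list and handle are illustrative names; this relies on a scapy version whose offline argument accepts packet lists, as described above):

from scapy.all import Ether, sniff
from scapy.sessions import TCPSession
from scapy.layers.http import HTTPRequest

# pkt_list is the list of raw packets from the question's pickle file;
# rebuild scapy packets from the raw frames, then let TCPSession reassemble them
p_list = [Ether(pkt_raw) for pkt_raw in pkt_list]

def handle(pkt):
    if pkt.haslayer(HTTPRequest):
        pkt.show()   # the body should now appear as a Raw layer after reassembly

sniff(offline=p_list, prn=handle, store=False, session=TCPSession)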

When forwarding data via python how do I space out the data strings?

I have a positioning system that sends the data from a tag it's tracking to a specific port, 8787. I have a Python script that then takes that data string and forwards it to a specific port for my database, 8011. There are three tags that send data. The positioning system is hard coded to send to port 8787 and the database is Oracle and only takes in data from port 8011 hence the need for the forwarding of the data.
Here is the string of data each tag sends (it's always in this format):
{"id":"0xDECA38303180234E","timestamp":1450653835.723,"msgid":6825,"coordinates":{"x":4.160,"y":2.368,"z":-0.604,"heading":0.000,"pqf":65},"meas":[{"anchor":"0xDECA323031300FBF","dist":4.343,"tqf":64,"rssi":-48},{"anchor":"0xDECA323030901DE2","dist":0.779,"tqf":32,"rssi":-46},{"anchor":"0xDECA313032901F24","dist":1.223,"tqf":32,"rssi":-44},{"anchor":"0xDECA353034301E99","dist":4.929,"tqf":32,"rssi":-46}]}
When one tag is streaming data, the Python script reads each string separately and forwards each data string one at a time. When there are two or more tags, the script groups the incoming data strings and sends them together, which causes errors with the database since it's set up to accept a specific format. I can print them separately via a modified print command, but I can't forward each data string separately.
Here is the Python script that forwards the data:
import socket
from pip._vendor import requests
import json
import time
port = 8787
s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
s.bind(("", port))
print ("waiting on port:", port)
while 1:
    #var_data = s.recvfrom(1024)
    var_headers = {'Content-type': 'application/json'}
    var_data, addr = s.recvfrom(16029)
    print (var_data)
    var_data_json = json.dumps(var_data) ##UndefinedVariable
    #print (var_data_json)
    response = requests.post('http://localhost:8011/SB_PositionLocatingService/PositionLocatingProxyService', data=var_data, headers=var_headers)
I tried the timer function with no success. Any help would be most appreciative. Thanks
Brian
Here is how the python streams the data from one tag and it works perfectly:
{"id":"0xDECA38303180234D","timestamp":1451680331.477,
{"id":"0xDECA38303180234D","timestamp":1451680331.478,
{"id":"0xDECA38303180234D","timestamp":1451680331.479,
From two tags it fails since it's sent as one:
{"id":"0xDECA38303180234D","timestamp":1451680331.477,
{"id":"0xDECA38303180235F","timestamp":1451680331.478,
{"id":"0xDECA38303180234D","timestamp":1451680331.478,
{"id":"0xDECA38303180235F","timestamp":1451680331.479,
From three tags it fails since it's sent as one:
{"id":"0xDECA38303180234D","timestamp":1451680331.477,
{"id":"0xDECA38303180235F","timestamp":1451680331.478,
{"id":"0xDECA38303180234E","timestamp":1451680331.479,
{"id":"0xDECA38303180234D","timestamp":1451680331.478,
{"id":"0xDECA38303180235F","timestamp":1451680331.479,
{"id":"0xDECA38303180234E","timestamp":1451680331.480,
Here is what I'm trying for with two and three tags so it's spaced just like it is with one tag:
{"id":"0xDECA38303180234D","timestamp":1451680331.477,
{"id":"0xDECA38303180235F","timestamp":1451680331.478,
{"id":"0xDECA38303180234E","timestamp":1451680331.479,
{"id":"0xDECA38303180234D","timestamp":1451680331.478,
{"id":"0xDECA38303180235F","timestamp":1451680331.479,
{"id":"0xDECA38303180234E","timestamp":1451680331.480,
I've had a quick go at it, code below. The problem was that you had no way of knowing when one reading ended and the other began. I've replaced your loop with one that will continually add the input string to a variable, then continually check that variable for a complete reading. When it encounters a complete reading, it'll forward it on, leaving the remaining string to be added to, and then parsed in the next iteration.
Watch out for...
Any change whatsoever to your data structure: if the Regular Expression no longer works, it'll stop forwarding on any data, and just make a massive string until you eventually run out of memory
Any problems forwarding your data on (i.e., if your listening server breaks, this code will just carry on running)
Sensors sending readings simultaneously and getting mixed together, so that if one sensor said "hello" and the other "bonjour", you'll get the string "hbeolnljoour", which would break this script. In that case, you'll need to start assigning individual ports to each sensor.
Hope this helps
# author: Brian Bowles
# edited by: Dom Weldon, London, 05 Jan 2015
# load deps
import socket
from pip._vendor import requests
import json
import re # note: replaced time with re (regular expressions, see: https://docs.python.org/2/library/re.html)
# adjustable settings
port_listen = 8787
port_forward = 8011
forwarding_url = "http://localhost:{0}/SB_PositionLocatingService/PositionLocatingProxyService".format(port_forward)
forwarding_headers = {'Content-type': 'application/json'}
# start listening on port
s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
s.bind(("", port))
print ("waiting on port:", port)
# We have strings being fired at us from sensors, we need to pick out individual
# chunks of JSON data then forward them on using a POST request. First, let's
# process the data every 1024 bytes.
all_input_string = ""
while True:
    var_data, addr = s.recvfrom(1024)
    # and add each 1024 byte chunk to a big string of all the data we receive.
    all_input_string += var_data.decode('utf-8', errors='replace')
    # an individual sensor sends data in the format below.
    '''
    {
        "id": "0xDECA38303180234E",
        "timestamp": 1450653835.723,
        "msgid": 6825,
        "coordinates": {
            "x": 4.16,
            "y": 2.368,
            "z": -0.604,
            "heading": 0,
            "pqf": 65
        },
        "meas": [
            {
                "anchor": "0xDECA323031300FBF",
                "dist": 4.343,
                "tqf": 64,
                "rssi": -48
            },
            {
                "anchor": "0xDECA323030901DE2",
                "dist": 0.779,
                "tqf": 32,
                "rssi": -46
            },
            {
                "anchor": "0xDECA313032901F24",
                "dist": 1.223,
                "tqf": 32,
                "rssi": -44
            },
            {
                "anchor": "0xDECA353034301E99",
                "dist": 4.929,
                "tqf": 32,
                "rssi": -46
            }
        ]
    }
    '''
    # does it look like we have one *complete* sensor output contained within
    # our big string? use a regex (albeit a lazy one) to find out
    # (the regex looks, non-greedily, for the first "}]}", which only appears
    # at the end of one sensor reading, and captures the text before it, the
    # "}]}" itself, and the text after it).
    match = re.match(r'(.*?)(\}\]\})(.*)', all_input_string, re.DOTALL)
    if match:
        # yes, pick it out of the string and send it on, save the remainder
        # of the string for the next pass through
        input_components = match.groups()
        reading_to_send = input_components[0] + input_components[1]  # the first complete reading
        all_input_string = input_components[2]  # the remainder of the string
        # send off the complete reading
        print("Received the following data to forward on: {0}".format(reading_to_send))
        response = requests.post(forwarding_url, data=reading_to_send, headers=forwarding_headers)
        print("Forwarded response. Response code: {0}".format(response.status_code))

Telnet send command and then read response

This shouldn't be that complicated, but it seems that both the Ruby and Python Telnet libs have awkward APIs. Can anyone show me how to write a command to a Telnet host and then read the response into a string for some processing?
In my case "SEND" with a newline retrieves some temperature data on a device.
With Python I tried:
tn.write(b"SEND" + b"\r")
str = tn.read_eager()
which returns nothing.
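As an aside, read_eager() only returns data that is already buffered locally, so an empty result right after the write is expected; telnetlib's read_until blocks until a terminator arrives or a timeout expires. A minimal sketch along those lines (the host, port, and newline terminator are assumptions):

import telnetlib

tn = telnetlib.Telnet('192.0.2.1', 23, timeout=10)   # host and port are placeholders
tn.write(b"SEND\r")
# block until a newline arrives (or the timeout expires), then decode
reply = tn.read_until(b"\n", timeout=5).decode('ascii', errors='replace')
print(reply)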
In Ruby I tried:
tn.puts("SEND")
which should return something as well. The only thing I've gotten to work is:
tn.cmd("SEND") { |c| print c }
but you can't do much with c inside the block.
Am I missing something here? I was expecting something like the Socket library in Ruby with some code like:
s = TCPSocket.new 'localhost', 2000
while line = s.gets # Read lines from socket
puts line # and print them
end
I found out that if you don't supply a block to the cmd method, it will give you back the response (assuming the telnet session isn't prompting you for anything else). You can send the commands all at once (but you'll get all of the responses bundled together) or do multiple calls, but then you have to nest block callbacks (I was not able to do it otherwise).
require 'net/telnet'

class Client
  # Fetch weather forecast for NYC.
  #
  # @return [String]
  def response
    fetch_all_in_one_response
    # fetch_multiple_responses
  ensure
    disconnect
  end

  private

  # Do all the commands at once and return everything in one go.
  #
  # @return [String]
  def fetch_all_in_one_response
    client.cmd("\nNYC\nX\n")
  end

  # Do multiple calls to retrieve the final forecast.
  #
  # @return [String]
  def fetch_multiple_responses
    client.cmd("\r") do
      client.cmd("NYC\r") do
        client.cmd("X\r") do |forecast|
          return forecast
        end
      end
    end
  end

  # Connect to remote server.
  #
  # @return [Net::Telnet]
  def client
    @client ||= Net::Telnet.new(
      'Host' => 'rainmaker.wunderground.com',
      'Timeout' => false,
      'Output_log' => File.open('output.log', 'w')
    )
  end

  # Close connection to the remote server.
  def disconnect
    client.close
  end
end
forecast = Client.new.response
puts forecast

Decode HTTP packet content in python as seen in wireshark

Ok, so basically what I want to do is intercept some packets that I know contain some JSON data. But the HTTP packets aren't human-readable as captured, so that's my problem: I need to make the entire packet (not just the header, which is already plain text) human-readable. I have no experience with networking at all.
import pcap
from impacket import ImpactDecoder, ImpactPacket

def print_packet(pktlen, data, timestamp):
    if not data:
        return
    decoder = ImpactDecoder.EthDecoder()
    ether = decoder.decode(data)
    iphdr = ether.child()
    tcphdr = iphdr.child()
    if iphdr.get_ip_src() == '*******':
        print tcphdr

p = pcap.pcapObject()
dev = 'wlan0'
p.open_live(dev, 1600, 0, 100)
try:
    p.setfilter('tcp', 0, 0)
    while 1:
        p.loop(1, print_packet)
except KeyboardInterrupt:
    print 'shutting down'
I've found tools like libpcap-python, scapy, Impacket pcapy and so on. They all seem good, but I can't figure out how to decode the packets properly with them.
Wireshark has this thing called "Line-based text data: text/html" which basically displays the information I'm after, so I thought it would be trivial to get the same info with python, it turns out it was not.
Both HTTP and JSON are human readable. In Wireshark, select a packet that belongs to your HTTP transaction, right-click, and choose Follow TCP Stream; it should display the transaction in human-readable form.
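If the goal is the same view from Python rather than Wireshark, one option with scapy (which the question already mentions) is to let TCPSession reassemble the stream and print the decoded payload. A minimal sketch, assuming plain unencrypted HTTP and a reasonably recent scapy; the interface is taken from the question's code and the port is a placeholder:

from scapy.all import sniff, Raw
from scapy.sessions import TCPSession
from scapy.layers.http import HTTP   # scapy's HTTP dissector

def show_payload(pkt):
    if pkt.haslayer(HTTP) and pkt.haslayer(Raw):
        # after reassembly the Raw layer holds the body, e.g. the JSON text
        print(pkt[Raw].load.decode('utf-8', errors='replace'))

sniff(iface='wlan0', filter='tcp port 80', prn=show_payload,
      store=False, session=TCPSession)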

Using python sockets to receive large http requests

I am using Python sockets to receive web-style and SOAP requests. The code I have is
import socket
svrsocket = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
host = socket.gethostname();
svrsocket.bind((host,8091))
svrsocket.listen(1)
clientSocket, clientAddress = svrsocket.accept()
message = clientSocket.recv(4096)
Some of the SOAP requests I receive, however, are huge: 650k huge, and this could become several MB. Instead of the single recv I tried
message = ''
while True:
    data = clientSocket.recv(4096)
    if len(data) == 0:
        break
    message = message + data
but I never receive a 0-byte data chunk with Firefox or Safari, although the Python socket HOWTO says I should.
What can I do to get round this?
Unfortunately you can't solve this on the TCP level - HTTP defines its own connection management, see RFC 2616. This basically means you need to parse the stream (at least the headers) to figure out when a connection could be closed.
See related questions here - https://stackoverflow.com/search?q=http+connection
Hiya
Firstly I want to reinforce what the previous answer said
Unfortunately you can't solve this on the TCP level
Which is true, you can't. However you can implement an http parser on top of your tcp sockets. And that's what I want to explore here.
Let's get started
Problem and Desired Outcome
Right now we are struggling to find the end to a datastream. We expected our stream to end with a fixed ending but now we know that HTTP does not define any message suffix
And yet, we move forward.
There is one question we can now ask, "Can we ever know the length of the message in advance?" and the answer to that is YES! Sometimes...
You see, HTTP/1.1 defines a header called Content-Length and, as you'd expect, it holds exactly what we want: the content length. But there is something else in the shadows: Transfer-Encoding: chunked. Unless you really want to learn about it, we'll stay away from it for now.
Solution
Here is a solution. You're not gonna know what some of these functions are at first, but if you stick with me, I'll explain. Alright... Take a deep breath.
Assuming conn is a socket connection to the desired HTTP server
...
rawheaders = recvheaders(conn)
status, headers = dict_header(io.StringIO(rawheaders))
l_content = int(headers['Content-Length'])
#okay. we've got content length by magic
buffersize = 4096
message = b''
while True:
    if l_content <= 0: break
    data = conn.recv(buffersize)
    message += data
    l_content -= len(data)
...
As you can see, we enter the loop already knowing the Content-Length as l_content
While we iterate, we keep track of the remaining content by subtracting the length of each chunk returned by conn.recv(buffersize) from l_content.
When we've read at least as much data as l_content, we are done
if l_content <= 0: break
Frustration
Note: For some of these next bits I'm gonna give pseudocode-style sketches because the real code can be a bit dense
So now you're asking, what is rawheaders = recvheaders(conn), what is status, headers = dict_header(io.StringIO(rawheaders)),
and HOW did we get headers['Content-Length']?!
For starters, recvheaders. The HTTP/1.1 spec doesn't define a message suffix, but it does define something useful: a terminator for the headers! Each header line ends with CRLF (\r\n), and the whole header block ends with an empty line, i.e. CRLF CRLF. That means we know we've received all the headers once we've read that blank line. So we can write a function like
def recvheaders(sock, end='\r\n\r\n'):
    rawheaders = ''
    # keep reading until we've seen the blank line (CRLF CRLF) that ends the headers;
    # one byte at a time so we never read past the headers into the body
    while end not in rawheaders:
        rawheaders += sock.recv(1).decode('ascii', errors='replace')
    return rawheaders
Next, parsing the headers.
def dict_header(ioheaders: io.StringIO):
    """
    parses an http response into the status-line and headers
    """
    #here I expect ioheaders to be io.StringIO
    #the status line is always the first line
    status = ioheaders.readline().strip()
    headers = {}
    for line in ioheaders:
        item = line.strip()
        if not item:
            break
        #headers look like this
        #'Header-Name' : 'Value'
        item = item.split(':', 1)
        if len(item) == 2:
            key, value = item
            headers[key] = value
    return status, headers
Here we read the status line then we continue to iterate over every remaining line
and build [key,value] pairs from Header: Value with
item = line.strip()
item = item.split(':', 1)
# We do split(':',1) to avoid cases like
# 'Header' : 'foo:bar' -> ['Header','foo','bar']
# when we want ---------> ['Header','foo:bar']
then we take that list and add it to the headers dict
#unpacking
#key = item[0], value = item[1]
key, value = item
headers[key] = value
BAM, we've created a map of headers
From there headers['Content-Length'] falls right out.
So,
This structure will work as long as you can guarantee that you will always receive Content-Length
If you've made it this far WOW, thanks for taking the time and I hope this helped you out!
TLDR; if you want to know the length of an http message with sockets, write an http parser
