How is an ICMP packet constructed in python - python

For the sake of learning I am currently trying to create a simple Python program to send an ICMP ping packet to some device. To get started I looked through the source code of the Python module pyping: https://github.com/Akhavi/pyping/blob/master/pyping/core.py
I am trying to understand everything that goes on when constructing and sending the packet; however, I have gotten stuck on one part of the code and can't seem to figure out exactly what its function and use is. I have been looking into ICMP packets and I understand that they contain a type, a code, a checksum, and data. The piece of code that puzzles me is:
self.own_id = os.getpid() & 0xFFFF

header = struct.pack(
    "!BBHHH", ICMP_ECHO, 0, checksum, self.own_id, self.seq_number
)

padBytes = []
startVal = 0x42
for i in range(startVal, startVal + (self.packet_size)):
    padBytes += [(i & 0xff)]  # Keep chars in the 0-255 range
data = bytes(padBytes)
My questions would be:
What is the use of adding the self.own_id and self.seq_number to the header?
What is being calculated in the for-loop, and why does it have a specific start value of 0x42?
I am new to networking and any help would be really appreciated.

Description of ICMP Echo Request packets
The ICMP Echo Request PDU looks like this:
0 1 2 3
0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
| Type(8) | Code(0) | Checksum |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
| Identifier | Sequence Number |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
| Payload |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
And here's a description of the various fields from the wiki link above:
The Identifier and Sequence Number can be used by the client to match the reply with the request that caused the reply.
In practice, most Linux systems use a unique identifier for every ping process, and sequence number is an increasing number within that process. Windows uses a fixed identifier, which varies between Windows versions, and a sequence number that is only reset at boot time.
Description of pyping Code
Header Generation
Look at the full function body of send_one_ping, which is where your code is from. I will annotate it with some information:
def send_one_ping(self, current_socket):
    """
    Send one ICMP ECHO_REQUEST
    """
    # Header is type (8), code (8), checksum (16), id (16), sequence (16)
    # Annotation: the Type field is 8 bits, the Code field is 8 bits,
    # and the header checksum is 16 bits
    # The additional header information is 32 bits (identifier and sequence number)
    # After that comes the payload, which is of arbitrary length
So this line:
header = struct.pack(
    "!BBHHH", ICMP_ECHO, 0, checksum, self.own_id, self.seq_number
)
creates the packet header using struct with the layout !BBHHH, which means:
B - Unsigned Char (8 bits)
B - Unsigned Char (8 bits)
H - Unsigned Short (16 bits)
H - Unsigned Short (16 bits)
H - Unsigned Short (16 bits)
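As a quick sanity check (not part of pyping), struct.calcsize confirms that this layout is 8 bytes, and packing some sample values shows the resulting byte order:

```python
import struct

# "!" = network (big-endian) byte order; BBHHH = 1 + 1 + 2 + 2 + 2 bytes
fmt = "!BBHHH"
print(struct.calcsize(fmt))   # 8

# Sample values: type 8 (echo request), code 0, zero checksum, id, seq
header = struct.pack(fmt, 8, 0, 0, 0x1234, 1)
print(header.hex())           # 0800000012340001
```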
And so the header will look like this:
0 1 2 3
0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
| ICMP_ECHO | 0 | checksum |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
| self.own_id | self.seq_number |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
Note this:
self.own_id sets the identifier of the application sending this data. This code simply uses the process ID (PID) of the program, masked down to 16 bits.
self.seq_number sets the sequence number. This helps you identify which ICMP request packet this is if you were to send multiple in a row. It would help you do things like calculate ICMP packet loss.
Both the Identifier and Sequence Number fields combined can be used by a client to match up echo replies with echo requests.
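To illustrate the matching, here is a small sketch (my own, not taken from pyping) of how a receiver could check a reply against its own identifier and sequence number. It assumes a standard 20-byte IPv4 header with no options sitting in front of the ICMP header, which is what a raw socket typically hands you:

```python
import struct

def matches_our_ping(reply_packet: bytes, own_id: int, seq_number: int) -> bool:
    # Skip the assumed 20-byte IPv4 header, then unpack the ICMP header:
    # type, code, checksum, identifier, sequence
    icmp_header = reply_packet[20:28]
    icmp_type, code, checksum, packet_id, sequence = struct.unpack("!BBHHH", icmp_header)
    # Type 0 is ICMP Echo Reply
    return icmp_type == 0 and packet_id == own_id and sequence == seq_number

# Fake reply: 20 zero bytes standing in for the IP header, then an echo reply
fake_reply = bytes(20) + struct.pack("!BBHHH", 0, 0, 0, 0x1234, 7)
print(matches_our_ping(fake_reply, 0x1234, 7))   # True
print(matches_our_ping(fake_reply, 0x9999, 7))   # False
```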
Payload Generation
Now let's move on to the Payload portion. Payloads are of arbitrary length, but the Ping class this code is taken from defaults to a total packet payload size of 55 bytes.
So the portion below just creates a bunch of arbitrary bytes to stuff into the payload section.
padBytes = []
startVal = 0x42
# Annotation: 0x42 = 66 decimal
# This loop fills padBytes with the values [66, 66 + packet_size),
# which with pyping's default packet_size of 55 means [66, 121)
for i in range(startVal, startVal + (self.packet_size)):
    padBytes += [(i & 0xff)]  # Keep chars in the 0-255 range
data = bytes(padBytes)
At the end of it, bytes(padBytes) actually looks like this:
>>> bytes(padBytes)
b'BCDEFGHIJKLMNOPQRSTUVWXYZ[\\]^_`abcdefghijklmnopqrstuvwx'
Why was 0x42 chosen?
As far as I know, 0x42 has no actual significance as a payload value, so this seems rather arbitrary. The payload here is pretty meaningless; as you can see from the Payload Generation section, it is just a contiguous sequence of bytes. They could just as well have filled the entire packet payload with 0x42 bytes if they wanted.
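Putting the pieces together, here is a hedged sketch of the whole packet assembly. The checksum helper below uses the big-endian formulation of the RFC 1071 Internet Checksum rather than pyping's little-endian-plus-htons variant; both should produce the same bytes on the wire:

```python
import os
import struct

ICMP_ECHO = 8

def checksum(data: bytes) -> int:
    """RFC 1071 Internet Checksum (big-endian formulation)."""
    if len(data) % 2:                       # pad odd-length data with a zero byte
        data += b"\x00"
    total = sum(struct.unpack("!%dH" % (len(data) // 2), data))
    total = (total >> 16) + (total & 0xFFFF)   # fold the carries back in
    total += total >> 16
    return ~total & 0xFFFF

def build_echo_request(own_id: int, seq: int, packet_size: int = 55) -> bytes:
    # Payload: bytes counting up from 0x42, as in pyping
    payload = bytes((0x42 + i) & 0xFF for i in range(packet_size))
    # The checksum is computed over the packet with a zeroed checksum field,
    # then the header is re-packed with the real value
    header = struct.pack("!BBHHH", ICMP_ECHO, 0, 0, own_id, seq)
    csum = checksum(header + payload)
    header = struct.pack("!BBHHH", ICMP_ECHO, 0, csum, own_id, seq)
    return header + payload

pkt = build_echo_request(os.getpid() & 0xFFFF, 1)
print(len(pkt))    # 8-byte header + 55-byte payload = 63
print(pkt[8:9])    # b'B' (0x42)
```

A property worth knowing: recomputing the checksum over the finished packet (checksum field included) yields 0, which is how receivers validate it.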

Use Scapy (http://www.secdev.org/projects/scapy/)!
Scapy is a packet manipulation framework written in Python.
You can forge many kinds of packets (HTTP, TCP, IP, UDP, ICMP, etc.).

ICMP echo request and echo reply are better known as ping.
The ID and the sequence number allow the replies to be matched with the requests. They are necessary because the ICMP protocol does not have source and destination ports like TCP and UDP do, only a source and a destination IP address. Multiple processes can ping the same host, and the replies must be delivered to the correct process. For this reason, each reply contains data literally copied from the corresponding request. The ID identifies the process sending the requests. The sequence number helps to match replies with requests within that process. That is necessary to calculate the RTT (round-trip time) and to detect unanswered pings.
The data computed in the loop is the payload, which is also copied verbatim from a request into its reply. The payload is optional, and a ping implementation can use it for whatever it wants.
Why 0x42? I guess the author was probably a Douglas Adams fan.

Related

struct.unpack_from() buffer reading issue with data longer than 255 bytes

I am using zmq to pass messages back and forth between a client/server. I am using pickle to deserialize the message, so I can parse information out of it. When I am sending a message that is less than 256 bytes, everything works as expected, and I can grab the message ID from the 12th byte. However, if I send a message with a buffer length of more than 255 bytes, struct.unpack_from doesn't seem to be reading the fields correctly, and is giving the wrong message ID. I printed out the bytes, and they are still being sent correctly. It seems like struct.unpack_from can no longer properly find the 12th byte. Somehow the size is changing its behavior. Suggestions?
Client:
buf = ctypes.create_string_buffer(256)
struct.pack_into("!qII", buf, 0, message.Timestamp, message.MessageID, message.Payload)
# Send reply back to client with version
socket.send(buf)
Server:
message = socket.recv()
message = pickle.dumps(message) # Serializes object
(timestamp, message_ID) = struct.unpack_from('!qi', message, 4) # Start reading on 4th byte (pickle adds header)
Q : Suggestions?
Just follow the documented properties of struct.
The current code fails on loading data into struct.pack_into() and then tries to unload a different data layout with struct.unpack_from() than the one used for the struct-payload assembly.
struct.pack_into( "!qII", ################## Use this packing LAYOUT !:: BIG_ENDIAN byte-ordering ( "network"-alike )
buf, # ----BUFFER[256] q:: ( signed ) long long 8-B ~ int64
0, # ----start_at_offset 0 I:: unsigned int 4-B ~ uint32
message.Timestamp, # ----a------+ I:: unsigned int 4-B ~ uint32
message.MessageID, # ----b------:---------------+
message.Payload # ----c------:---------------:-------+ <End-of-STORAGE-FORMAT>
) # |_:____________+8_:____+4_:____+4
# Big | v | v | v |
# # ENDIAN |0.1.2.3.4.5.6.7|8.9.A.B|C.D.E.F|
# # ?in just|
# Send reply back to client w version # ?4-Bytes|
socket.send( buf ) # if len( message.Payload ) > 4 ...^^^.^^^^^ Houston, we have a problem ...
# |
# |
# V
#-----V-------- network-transported payload
# V
# :
# :
# :
#-----V-------- network-delivered (same) payload
# V
socket.recv()
( timestamp,
message_ID
) = struct.unpack_from( '!qi', ########### Use this packing LAYOUT !:: BIG_ENDIAN byte-ordering ( "network"-alike )
message, # ----PAYLOAD-DATA q:: ( signed ) long long 8-B ~ int64
4 # ----from_offset 4! I:: unsigned int 4-B ~ uint32
) # !
# # MESSAGE |_______!_______________________<End-of-STORAGE-FORMAT>
# # DATA | ! |
# # |0.1.2.3!4.5.6.7.8.9.A.B.C.D.E.F|
# | +8 +4
# | | |
# |0.1.2.3.4.5.6.7|8.9.A.B|
# : : <End-of-UNPACK-FORMAT>
#( timestamp <----------------------------------------------<-:_______________: :
# message_ID <-------------------------------------------------------------<-:_______:
# )
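To see the cross-breeding concretely, here is a minimal demonstration with made-up sample values:

```python
import ctypes
import struct

# Made-up sample values, just to show the byte cross-breeding
buf = ctypes.create_string_buffer(16)
struct.pack_into("!qII", buf, 0, 1234567890, 42, 7)   # timestamp, MessageID, Payload

# Reading back with the SAME layout and offset recovers the fields:
print(struct.unpack_from("!qII", buf, 0))   # (1234567890, 42, 7)

# Reading "!qi" from offset 4 instead glues the low half of the timestamp
# onto the MessageID bytes, and returns the Payload field as message_ID:
print(struct.unpack_from("!qi", buf, 4))
```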
So,
unless there is some hidden intent to cross-breed bytes (taking the last 4 bytes of the sending side's int64 message.Timestamp, joining them with the 4 bytes of the uint32 message.MessageID, decoding that mix on the receiving side as a new int64 stored into the first item of ( timestamp, ... ), and leaving the first 4 bytes of the delivered payload, which originated as message.Payload on the sending side, to be decoded and stored as ( ..., message_ID )), we will have to repair the code to at least avoid this byte cross-breeding.
So, either use the very same struct-format-string TEMPLATE on both sides, or design an adaptive data-storage layout, where the PAYLOAD-assembler puts the actual byte-length into a known, fixed position (best coded as the first I-uint32, or as a 10s string representation, if an easy, human-readable wireline packet inspection is to be kept).
The PAYLOAD-receiver will then first decode the PAYLOAD data at this known, fixed position, so as to learn the actual whole PAYLOAD length from it, and as a second step it will adaptively compose the format-string TEMPLATE matching that length:
...
TEMPLATE_MASK = "!10s I I {0:}s"
...
aRecvdMSG = socket.recv()
TEMPLATE  = TEMPLATE_MASK.format( int( struct.unpack_from( "!10s", aRecvdMSG )[0] )
                                  - 10 # SUB _10s_ for _str___ used for advice of payload length
                                  -  4 # SUB ___I_ for _int32_ used for version
                                  -  4 # SUB ___I_ for _int32_ used for ID
                                  )
...
( aPayloadByteLENGTH,
  aVersionNUM,
  aMessageID,
  aPayloadDATA,
  ) = struct.unpack( TEMPLATE, aRecvdMSG )
The same principle applies to the sending Python side, where PAYLOAD-assembly can use the very same adaptively composed TEMPLATE.format(...)-method, so as the struct.pack()-method gets all details in-order & well aligned, so as to avoid both any kind of the declared buffer-overflows and any kind of mixing bytes in order/alignment-related "cross-breeding" ill-demapped bytes from the delivered PAYLOAD-content into the hands of the blind & believing receiving-side Python interpreter.
Having used this for more than a decade of many-items-long message-PAYLOAD assembly/disassembly in low-latency TAT distributed-computing systems, you can rely on re-using this know-how for any less demanding ZeroMQ-served DATA-interchange where guaranteed data portability is needed.
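For what it's worth, here is one runnable sketch of that adaptive-TEMPLATE idea; the frame layout (a 10-character decimal length prefix, two uint32 fields, then the variable-length payload) and all names are illustrative, not from the question:

```python
import struct

def pack_frame(version: int, message_id: int, payload: bytes) -> bytes:
    # The length field advertises the payload size at a known, fixed position
    template = "!10sII{}s".format(len(payload))
    length_field = str(len(payload)).rjust(10).encode("ascii")
    return struct.pack(template, length_field, version, message_id, payload)

def unpack_frame(frame: bytes):
    # Step 1: decode only the fixed-position length field ...
    (length_field,) = struct.unpack_from("!10s", frame, 0)
    payload_len = int(length_field)
    # Step 2: ... then build the matching template for the whole frame
    template = "!10sII{}s".format(payload_len)
    _, version, message_id, payload = struct.unpack(template, frame)
    return version, message_id, payload

frame = pack_frame(1, 42, b"hello world")
print(unpack_frame(frame))   # (1, 42, b'hello world')
```

Both sides share the same TEMPLATE logic, so the sender can never pack a layout the receiver does not expect.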

unable to unpack information between custom Preamble in Python and telnetlib

I have an industrial sensor which provides me information via telnet over port 10001.
It has a Data Format as follows:
Also the manual:
All the measuring values are transmitted int32 or uint32 or float depending on the sensors
Code
import telnetlib
import struct
import time

# IP Address, Port, timeout for Telnet
tn = telnetlib.Telnet("169.254.168.150", 10001, 10)

while True:
    op = tn.read_eager()  # currently read information limit this till preamble
    print(op[::-1])  # make little-endian
    if not len(op[::-1]) == 0:  # initially an empty bit starts (b'')
        data = struct.unpack('!4c', op[::-1])  # unpacking `MEAS`
    time.sleep(0.1)
My initial attempt:
Connect to the sensor
Read the data
Reverse it to little-endian
OUTPUT
b''
b'MEAS\x85\x8c\x8c\x07\xa7\x9d\x01\x0c\x15\x04\xf6MEAS'
b'\x04\xf6MEAS\x86\x8c\x8c\x07\xa7\x9e\x01\x0c\x15\x04\xf6'
b'\x15\x04\xf6MEAS\x85\x8c\x8c\x07\xa7\x9f\x01\x0c\x15'
b'\x15\x04\xf6MEAS\x87\x8c\x8c\x07\xa7\xa0\x01\x0c'
b'\xa7\xa2\x01\x0c\x15\x04\xf6MEAS\x87\x8c\x8c\x07\xa7\xa1\x01\x0c'
b'\x8c\x07\xa7\xa3\x01\x0c\x15\x04\xf6MEAS\x87\x8c\x8c\x07'
b'\x88\x8c\x8c\x07\xa7\xa4\x01\x0c\x15\x04\xf6MEAS\x88\x8c'
b'MEAS\x8b\x8c\x8c\x07\xa7\xa5\x01\x0c\x15\x04\xf6MEAS'
b'\x04\xf6MEAS\x8b\x8c\x8c\x07\xa7\xa6\x01\x0c\x15\x04\xf6'
b'\x15\x04\xf6MEAS\x8a\x8c\x8c\x07\xa7\xa7\x01\x0c\x15'
b'\x15\x04\xf6MEAS\x88\x8c\x8c\x07\xa7\xa8\x01\x0c'
b'\x01\x0c\x15\x04\xf6MEAS\x88\x8c\x8c\x07\xa7\xa9\x01\x0c'
b'\x8c\x07\xa7\xab\x01\x0c\x15\x04\xf6MEAS\x8b\x8c\x8c\x07\xa7\xaa'
b'\x8c\x8c\x07\xa7\xac\x01\x0c\x15\x04\xf6MEAS\x8c\x8c'
b'AS\x89\x8c\x8c\x07\xa7\xad\x01\x0c\x15\x04\xf6MEAS\x8a'
b'MEAS\x88\x8c\x8c\x07\xa7\xae\x01\x0c\x15\x04\xf6ME'
b'\x15\x04\xf6MEAS\x87\x8c\x8c\x07\xa7\xaf\x01\x0c\x15\x04\xf6'
b'\x15\x04\xf6MEAS\x8a\x8c\x8c\x07\xa7\xb0\x01\x0c'
b'\x0c\x15\x04\xf6MEAS\x8a\x8c\x8c\x07\xa7\xb1\x01\x0c'
b'\x07\xa7\xb3\x01\x0c\x15\x04\xf6MEAS\x89\x8c\x8c\x07\xa7\xb2\x01'
b'\x8c\x8c\x07\xa7\xb4\x01\x0c\x15\x04\xf6MEAS\x89\x8c\x8c'
b'\x85\x8c\x8c\x07\xa7\xb5\x01\x0c\x15\x04\xf6MEAS\x84'
b'MEAS\x87\x8c\x8c\x07\xa7\xb6\x01\x0c\x15\x04\xf6MEAS'
b'\x04\xf6MEAS\x8b\x8c\x8c\x07\xa7\xb7\x01\x0c\x15\x04\xf6'
b'\x15\x04\xf6MEAS\x8b\x8c\x8c\x07\xa7\xb8\x01\x0c\x15'
b'\x15\x04\xf6MEAS\x8a\x8c\x8c\x07\xa7\xb9\x01\x0c'
b'\xa7\xbb\x01\x0c\x15\x04\xf6MEAS\x87\x8c\x8c\x07\xa7\xba\x01\x0c'
Trying to unpack the preamble!?
How do I read information like the article number, serial number, channel, status, and measuring value between the preambles?
The payload size seems to be fixed at 22 bytes (via Wireshark).
Parsing the reversed buffer is just weird; please use struct's support for endianess. Using big-endian '!' in a little-endian context is also odd.
The first four bytes are a text constant. Ok, fine perhaps you'll need to reverse those. But just those, please.
After that, use struct.unpack to parse out 'IIQI'. So far, that was kind of working OK with your approach, since all fields consume 4 bytes or a pair of 4 bytes. But finding frame M's length is the fly in the ointment since it is just 2 bytes, so parse it with 'H', giving you a combined 'IIQIH'. After that, you'll need to advance by only that many bytes, and then expect another 'MEAS' text constant once you've exhausted that set of measurements.
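A hedged sketch of that scanning approach; the field count and sizes follow the 'IIQIH' layout described above, but the exact field meanings are assumptions, and synthetic bytes stand in for the device:

```python
import struct

# Assumed layout per frame: b"MEAS" marker, four fixed fields ('IIQI'),
# then a uint16 length ('H') giving the size of the measurement block
FIXED_FMT = "<IIQIH"                     # the device appears to be little-endian
FIXED_LEN = struct.calcsize(FIXED_FMT)   # 4 + 4 + 8 + 4 + 2 = 22 bytes

def parse_frames(buf: bytes):
    """Scan buf for MEAS-delimited frames; return (fixed_fields, meas_len) pairs."""
    frames = []
    pos = buf.find(b"MEAS")
    while pos != -1 and pos + 4 + FIXED_LEN <= len(buf):
        *head, meas_len = struct.unpack_from(FIXED_FMT, buf, pos + 4)
        frames.append((tuple(head), meas_len))
        # Advance past the variable-length measurement block, then look
        # for the next "MEAS" marker
        pos = buf.find(b"MEAS", pos + 4 + FIXED_LEN + meas_len)
    return frames

# Two synthetic frames, each announcing a 6-byte measurement block
frame = b"MEAS" + struct.pack(FIXED_FMT, 1, 2, 3, 4, 6) + b"\x00" * 6
print(parse_frames(frame + frame))   # [((1, 2, 3, 4), 6), ((1, 2, 3, 4), 6)]
```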
I managed to avoid telnetlib altogether and created a TCP client using Python 3. I had the payload size already from my Wireshark dump (22 bytes), hence I keep receiving 22 bytes of information. Apparently the module sends two distinct 22-byte payloads:
The first (frame) payload has the preamble, serial, article, and channel information
The second (frame) payload has information like bytes per frame, measuring value counter, measuring value Channel 1, measuring value Channel 2, measuring value Channel 3
The information is in int32 and thus needs a formula to be converted to real readings (mentioned in the instruction manual)
(As mentioned by @J_H, the unpacking is as he described in his answer, with small changes.)
Code
import socket
import time
import struct

DRANGEMIN = 3261
DRANGEMAX = 15853
MEASRANGE = 50
OFFSET = 35

# Create a TCP/IP socket
sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server_address = ('169.254.168.150', 10001)
print('connecting to %s port %s' % server_address)
sock.connect(server_address)

def value_mm(raw_val):
    return (((raw_val - DRANGEMIN) * MEASRANGE) / (DRANGEMAX - DRANGEMIN) + OFFSET)

if __name__ == '__main__':
    while True:
        Laser_Value = 0
        data = sock.recv(22)
        preamble, article, serial, x1, x2 = struct.unpack('<4sIIQH', data)
        if not preamble == b'SAEM':
            status, bpf, mValCounter, CH1, CH2, CH3 = struct.unpack('<hIIIII', data)
            #print(CH1, CH2, CH3)
            Laser_Value = CH3
            print(str(value_mm(Laser_Value)) + " mm")
            #print('RAW: ' + str(len(data)))
        print('\n')
        #time.sleep(0.1)
Sure enough, this provides me the information that is needed, and I verified it against the proprietary software which the company provides.

Python sockets stealing each other's packets

I'm trying to write a program to test data transfer speeds for various-sized packets in parallel. I noticed something odd, though, that the size of the packet seemed to have no effect on transfer time according to my program, whereas the Unix ping binary would time out on some of the packet sizes I'm using. I was sending 4 packets containing the string 'testquest' and one that was just 2000 bytes set to 0. However, when I printed the results, they all contained 'testquest' (and were far shorter than 2000 bytes). The only thing I can conclude is that these sockets are somehow all receiving the same packet, which would explain how they all had the same rtt.
I made this MCVE to illustrate the issue (you can ignore the 'checksum' function, it's included for completeness but I know from experience that it works):
#!/usr/bin/env python3
import socket
import struct
import time
from multiprocessing.pool import ThreadPool as Pool
from sys import argv, byteorder

def calculate_checksum(pkt):
    """
    Implementation of the "Internet Checksum" specified in RFC 1071
    (https://tools.ietf.org/html/rfc1071)

    Ideally this would act on the string as a series of 16-bit ints (host
    packed), but this works.

    Network data is big-endian, hosts are typically little-endian,
    which makes this much more tedious than it needs to be.
    """
    countTo = len(pkt) // 2 * 2
    total, count = 0, 0

    # Handle bytes in pairs (decoding as short ints)
    loByte, hiByte = 0, 0
    while count < countTo:
        if (byteorder == "little"):
            loByte = pkt[count]
            hiByte = pkt[count + 1]
        else:
            loByte = pkt[count + 1]
            hiByte = pkt[count]
        total += hiByte * 256 + loByte
        count += 2

    # Handle last byte if applicable (odd number of bytes)
    # Endianness should be irrelevant in this case
    if countTo < len(pkt):  # Check for odd length
        total += pkt[len(pkt) - 1]

    total &= 0xffffffff  # Truncate sum to 32 bits (a variance from ping.c, which
                         # uses signed ints, but overflow is unlikely in ping)

    total = (total >> 16) + (total & 0xffff)  # Add high 16 bits to low 16 bits
    total += (total >> 16)                    # Add carry from above (if any)
    return socket.htons((~total) & 0xffff)

def ping(args):
    sock, payload = args[0], args[1]
    header = struct.pack("!BBH", 8, 0, 0)
    checksum = calculate_checksum(header + payload)
    header = struct.pack("!BBH", 8, 0, checksum)
    timestamp = time.time()
    sock.send(header + payload)
    try:
        response = sock.recv(20 + len(payload))
    except socket.timeout:
        return 0
    return (len(response), (time.time() - timestamp) * 1000)

host = argv[1]  # A host that doesn't respond to ping packets > 1500B

# 1 is the ICMP protocol number
sockets = [socket.socket(socket.AF_INET, socket.SOCK_RAW, proto=1) for i in range(12)]
for i, sock in enumerate(sockets):
    sock.settimeout(0.1)
    sock.bind(("0.0.0.0", i))
    sock.connect((host, 1))  # Port number should never matter for ICMP

args = [(sockets[i], bytes(2**i)) for i in range(12)]
for arg in args:
    print(ping(arg))
    arg[0].close()
This actually shows me something more troubling - it seems that the rtt is actually decreasing with increasing packet size! Calling this program (as root, to get socket permissions) outputs:
0
0
(24, 15.784025192260742)
(28, 0.04601478576660156)
(28, 0.025033950805664062)
(28, 0.033855438232421875)
(28, 0.03528594970703125)
(28, 0.04887580871582031)
(28, 0.05316734313964844)
(28, 0.03790855407714844)
(28, 0.0209808349609375)
(28, 0.024080276489257812)
but now notice what happens when I try to send a packet of size 2048 using ping:
user@mycomputer ~/src/connvitals $ time ping -c1 -s2048 $box
PING <hostname redacted> (<IP address redacted>): 2048 data bytes
--- <hostname redacted> ping statistics ---
1 packets transmitted, 0 packets received, 100.0% packet loss

real    0m11.018s
user    0m0.005s
sys     0m0.008s
Not only is the packet dropped, but it takes 11 seconds to do so! So why - if my timeout is set to 100ms - is this packet getting a "successful" response from my python script in only ~0.04ms??
Thank you in advance for any help you can provide.
Update:
I just checked again, and it seems that it's multiple sockets that are the problem, and the threading seems to have nothing to do with it. I get the same issue when I ping with each socket - then immediately close it - sequentially.
All your sockets are identical, and all bound to the same host. There simply isn't any information in the packet for the kernel to know which socket to go to, and raw(7) seems to imply all sockets will receive them.
You're probably getting all the responses in all the threads, meaning you're getting 12 times as many responses per thread as you're expecting.
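One consequence is that with raw ICMP sockets you generally have to filter replies yourself, for example by the identifier field of the echo reply. A sketch (my own, not the asker's code) that reads and discards foreign replies until the deadline; it assumes a 20-byte IPv4 header with no options in front of the ICMP header:

```python
import socket
import struct
import time

def recv_matching_reply(sock, own_id: int, timeout: float = 1.0):
    """Read echo replies until one carries our identifier, or time out.

    Replies destined for other sockets/processes are read and discarded
    rather than being mistaken for our own.
    """
    deadline = time.monotonic() + timeout
    while True:
        remaining = deadline - time.monotonic()
        if remaining <= 0:
            return None
        sock.settimeout(remaining)
        try:
            packet = sock.recv(65535)
        except socket.timeout:
            return None
        # ICMP header sits after the assumed 20-byte IPv4 header
        icmp_type, _code, _csum, packet_id, _seq = struct.unpack_from("!BBHHH", packet, 20)
        if icmp_type == 0 and packet_id == own_id:   # type 0 = echo reply
            return packet
```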

Writing data to UART in Python and reading them from C

I am writing a byte to serial port using Python.
import serial
ser = serial.Serial ("/dev/ttyACM0")
ser.baudrate = 115200
ser.write('\x57')
ser.close()
When I connect TX to RX I have no problem to read that byte (sent from Python code), using GtkTerm. But when I am trying to read this data on micro controller using C, I always read 240. But when I use GtkTerm to send hexadecimal data directly (View -> Send Hexadecimal data), I read (on microcontroller) appropriate value. What could be wrong?
C code:
char byte = getc_();
printf_("1 byte received: i: %i \n",byte);
The getc_() function:
char getc_()
{
#ifdef LIB_MUTEX
    mutex_lock(&mutex_getc_);
#endif
    char res = uart_read();
#ifdef LIB_MUTEX
    mutex_unlock(&mutex_getc_);
#endif
    return res;
}
Likely a wrong communication rate. Change the receiver/transmitter baud to maybe 1/4 or 1/6 of its current value (suggest 19200), or speed up the transmitter/receiver.
240 is 0xF0. With RS-232, data is sent as
Start bit - LS bit - next bit - ... - next bit - MS bit - Stop bit
// or
0 - LS bit - next bit - ... -next bit - MS bit - 1
// or 0xF0
0 - 0 - 0 - 0 - 0 - 1 - 1 - 1 - 1 - 1
If the receiving end is seeing too many 0 bits (look at the least significant bit first), and the data received comes in bit groups of 0's and 1's, it usually means data is sent at a slower baud than what the receiver is using.
Another clue to deciphering mismatched bauds is the ratio of the count of bytes sent versus received. Given various bit patterns this is not a hard rule, but the side with more data is likely the one at the higher baud.
After some time I found this solution. The first byte I receive is always 240, but with additional small sleeps between the writes I get the correct value which I sent.
import serial
import time
ser = serial.Serial (port = "/dev/ttyACM0", bytesize = 8, stopbits = 1)
ser.baudrate = 115200
sleep_time = 0.05
ser.write('\x41') #240
time.sleep(sleep_time)
ser.write('\x41') #right value A
time.sleep(sleep_time)
ser.write('\x41') #right value A
...
ser.close()

Python and Scapy: plain text SMTP session

Is it possible to perform the simplest SMTP session using Scapy?
I tried to read a session captured with tcpdump into Scapy and to send the packets, but no luck...
This is what I have
#!/usr/bin/python
from scapy.all import *
from scapy.layers.inet import IP, TCP
source_ip = '1.2.3.4'
source_port = 5100
source_isn = 1000
dest_ip = '1.2.3.5'
dest_port = 25
ip=IP(src=source_ip, dst=dest_ip)
SYN=TCP(sport=source_port, dport=dest_port, flags="S", seq=source_isn)
SYNACK=sr1(ip/SYN)
source_isn = SYN.seq + 1
source_ack = SYNACK.seq + 1
ACK=TCP(ack=source_ack, sport=source_port, dport=dest_port, flags="A", seq=source_isn)
handshakedone=sr1(ip/ACK)
DTA=TCP(ack=handshakedone.seq+len(handshakedone.load), seq=source_isn, sport=source_port, dport=dest_port, flags="PA")
sr(ip/DTA/Raw(load='mail from: test@gmail.com\r\n'))
send(ip/DTA/Raw(load='rcpt to: me@gmail.com\r\n'))
source_isn = ACK.seq + len(mfrom)
.....
RST=TCP(ack=SYNACK.seq + 1, seq=source_isn, sport=source_port, dport=dest_port, flags="RA")
send(ip/RST)
Handshake is successful but what ACK and SEQ values should be during the session? How can I calculate them?
TCP seq and ack numbers are described in RFC 793 (start at page 24). The whole spec is too long to post here, but basically, every byte of payload has a sequence number. In addition to the payload bytes, two control flags (SYN and FIN) get their own sequence numbers. Initial sequence numbers should be randomized, but don't really matter if you're just playing around. The ack number in your packet is the next sequence number you expect to receive, and the seq field in the packet is the first sequence number in the segment.
So to ack all packets up to a given one, add the sequence number from the given packet to its length (including FIN or SYN flags, if set) and put that in your ack field.
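That bookkeeping can be written as a small helper (a sketch, independent of Scapy):

```python
def next_ack(seq: int, payload_len: int, syn: bool = False, fin: bool = False) -> int:
    """Next ack number: the peer's seq plus the sequence numbers it consumed.

    SYN and FIN each consume one sequence number in addition to the
    payload bytes; the sequence space wraps at 2**32.
    """
    consumed = payload_len + int(syn) + int(fin)
    return (seq + consumed) & 0xFFFFFFFF

print(next_ack(1000, 0, syn=True))   # SYN alone consumes one number -> 1001
print(next_ack(1001, 20))            # 20 payload bytes -> 1021
```

In the session above, source_ack = SYNACK.seq + 1 is exactly this rule applied to the server's SYN.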
