I'm trying to write a program to test data transfer speeds for various-sized packets in parallel. I noticed something odd, though: the size of the packet seemed to have no effect on transfer time according to my program, whereas the Unix ping binary would time out on some of the packet sizes I'm using. I was sending 4 packets containing the string 'testquest' and one that was just 2000 bytes set to 0. However, when I printed the results, they all contained 'testquest' (and were far shorter than 2000 bytes). The only thing I can conclude is that these sockets are somehow all receiving the same packet, which would explain how they all had the same rtt.
I made this MCVE to illustrate the issue (you can ignore the checksum function; it's included for completeness, but I know from experience that it works):
#!/usr/bin/env python3
import socket
import struct
import time
from multiprocessing.pool import ThreadPool as Pool
from sys import argv, byteorder

def calculate_checksum(pkt):
    """
    Implementation of the "Internet Checksum" specified in RFC 1071
    (https://tools.ietf.org/html/rfc1071).
    Ideally this would act on the string as a series of 16-bit ints (host
    packed), but this works.
    Network data is big-endian, hosts are typically little-endian,
    which makes this much more tedious than it needs to be.
    """
    countTo = len(pkt) // 2 * 2
    total, count = 0, 0

    # Handle bytes in pairs (decoding as short ints)
    loByte, hiByte = 0, 0
    while count < countTo:
        if byteorder == "little":
            loByte = pkt[count]
            hiByte = pkt[count + 1]
        else:
            loByte = pkt[count + 1]
            hiByte = pkt[count]
        total += hiByte * 256 + loByte
        count += 2

    # Handle last byte if applicable (odd number of bytes)
    # Endianness should be irrelevant in this case
    if countTo < len(pkt):  # Check for odd length
        total += pkt[len(pkt) - 1]

    total &= 0xffffffff  # Truncate sum to 32 bits (a variance from ping.c, which
                         # uses signed ints, but overflow is unlikely in ping)
    total = (total >> 16) + (total & 0xffff)  # Add high 16 bits to low 16 bits
    total += (total >> 16)                    # Add carry from above (if any)
    return socket.htons((~total) & 0xffff)

def ping(args):
    sock, payload = args[0], args[1]
    header = struct.pack("!BBH", 8, 0, 0)
    checksum = calculate_checksum(header + payload)
    header = struct.pack("!BBH", 8, 0, checksum)
    timestamp = time.time()
    sock.send(header + payload)
    try:
        response = sock.recv(20 + len(payload))
    except socket.timeout:
        return 0
    return (len(response), (time.time() - timestamp) * 1000)

host = argv[1]  # A host that doesn't respond to ping packets > 1500B

# 1 is the ICMP protocol number
sockets = [socket.socket(socket.AF_INET, socket.SOCK_RAW, proto=1) for i in range(12)]
for i, sock in enumerate(sockets):
    sock.settimeout(0.1)
    sock.bind(("0.0.0.0", i))
    sock.connect((host, 1))  # Port number should never matter for ICMP

args = [(sockets[i], bytes(2**i)) for i in range(12)]

for arg in args:
    print(ping(arg))
    arg[0].close()
This actually shows me something more troubling: the rtt seems to be decreasing with increasing packet size! Calling this program (as root, to get raw socket permissions) outputs:
0
0
(24, 15.784025192260742)
(28, 0.04601478576660156)
(28, 0.025033950805664062)
(28, 0.033855438232421875)
(28, 0.03528594970703125)
(28, 0.04887580871582031)
(28, 0.05316734313964844)
(28, 0.03790855407714844)
(28, 0.0209808349609375)
(28, 0.024080276489257812)
but now notice what happens when I try to send a packet of size 2048 using ping:
user@mycomputer ~/src/connvitals $ time ping -c1 -s2048 $box
PING <hostname redacted> (<IP address redacted>): 2048 data bytes
--- <hostname redacted> ping statistics ---
1 packets transmitted, 0 packets received, 100.0% packet loss
real 0m11.018s
user 0m0.005s
sys 0m0.008s
Not only is the packet dropped, but it takes 11 seconds to do so! So why - if my timeout is set to 100ms - is this packet getting a "successful" response from my python script in only ~0.04ms??
Thank you in advance for any help you can provide.
Update:
I just checked again, and it seems that it's multiple sockets that are the problem, and the threading seems to have nothing to do with it. I get the same issue when I ping with each socket - then immediately close it - sequentially.
All your sockets are identical, and all bound to the same host. There simply isn't any information in the packet for the kernel to know which socket to go to, and raw(7) seems to imply all sockets will receive them.
You're probably getting all the responses in all the threads, meaning you're getting 12 times as many responses per thread as you're expecting.
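If you want each sender to see only its own reply, you can demultiplex in user space. Here is a minimal sketch of mine (not from the original post) that plugs into the question's script: it packs a per-caller identifier into the full 8-byte echo header (type, code, checksum, id, sequence) and discards replies whose embedded id doesn't match, assuming a 20-byte IP header on received packets.

import struct
import time

def ping_with_id(sock, payload, ident, timeout=0.1):
    # Build an echo request carrying our own identifier
    # (calculate_checksum is the function from the question's script)
    header = struct.pack("!BBHHH", 8, 0, 0, ident, 1)
    checksum = calculate_checksum(header + payload)
    header = struct.pack("!BBHHH", 8, 0, checksum, ident, 1)
    sock.send(header + payload)
    deadline = time.time() + timeout
    while time.time() < deadline:
        try:
            response = sock.recv(65535)
        except socket.timeout:
            return None
        # Bytes 24-25 are the id field: 20-byte IP header + 4 bytes into ICMP
        if struct.unpack("!H", response[24:26])[0] == ident:
            return response
    return None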
I wrote a Python 3 script which tests an SPI link to an FPGA. It runs on a Raspberry Pi 3. The test works like this: after putting the FPGA in test mode (a push switch), send the first byte, which can be any value. Then further bytes are sent indefinitely. Each one increments by the first value sent, truncated to 8 bits. Thus, if the first value is 37, the FPGA expects the following sequence:
37, 74, 111, 148, 185, 222, 4, 41 ...
Some additional IO pins are used to signal between the devices - RUN (RPi output) starts the test (necessary because the FPGA times out in about 15ms if it expects a byte) and ERR (FPGA output) signals an error. Errors can thus be counted at both ends.
In addition, the RPi script writes a one-line summary of bytes sent and number of errors every million bytes.
All of this works just fine. But after running for about 3 days, I get the following error on the RPi:
free(): invalid pointer: 0x00405340
I get this exact same error on two identical test setups, even the same memory address. The last report says
"4294M bytes sent, 0 errors"
I seem to have proved the SPI link, but I am concerned that this long-running program crashes for no apparent reason.
Here is the important part of my test code:
def _report(self, msg):
    now = datetime.datetime.now()
    os.system("echo \"{} : {}\" > spitest_last.log".format(now, msg))

def spi_test(self):
    global end_loop
    input("Put the FPGA board into SPI test mode (SW1) and press any key")
    self._set_run(True)
    self.END_LOOP = False
    print("SPI test is running, CTRL-C to end.")
    # first byte is sent without LOAD, this is the seed
    self._send_byte(self._val)
    self._next_val()
    end_loop = False
    err_flag = False
    err_cnt = 0
    byte_count = 1
    while not end_loop:
        mb = byte_count % 1000000
        if mb == 0:
            msg = "{}M bytes sent, {} errors".format(int(byte_count / 1000000), err_cnt)
            print("\r" + msg, end="")
            self._report(msg)
            err_flag = True
        else:
            err_flag = False
        #print("sending: {}".format(self._val))
        self._set_load(True)
        if self._errors and err_flag:
            self._send_byte(self._val + 1)
        else:
            self._send_byte(self._val)
        if self.is_error():
            err_cnt += 1
            msg = "{}M bytes sent, {} errors".format(int(byte_count / 1000000), err_cnt)
            print("\r{}".format(msg), end="")
            self._report(msg)
        self._set_load(False)
        # increase the value by the seed and truncate to 8 bits
        self._next_val()
        byte_count += 1
    # test is done
    input("\nSPI test ended ({} bytes sent, {} errors). Press ENTER to end.".format(byte_count, err_cnt))
    self._set_run(False)
(Note for clarification: there is a command line option to artificially create an error every million bytes, hence the err_flag variable.)
I've tried using python3 in console mode, and there seems to be no issue with the size of the byte_count variable (there shouldn't be, according to what I have read about python integer size limits).
Anyone have an idea as to what might cause this?
This issue affects spidev versions older than 3.5 only. The comments below were made under the assumption that I was using the upgraded version of spidev.
#############################################################################
I can confirm this problem. It is persistent with both the RPi3B and the RPi4B, using Python 3.7.3 on both. The versions of spidev I tried were 3.3, 3.4, and the latest, 3.5. I was able to reproduce this error several times by simply looping over this single line:
spidevice2.xfer2([0x00, 0x00, 0x00, 0x00])
It takes up to 11 hours depending on the RPi version. After 1073014000 calls (rounded to 1000), the script crashes because of "invalid pointer". The total amount of bytes sent is the same as in danmcb's case. It seems as if 2^32 bytes represent a limit.
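Doing the arithmetic on that call count supports the 2^32 hypothesis, since each xfer2 call above transfers 4 bytes (a quick check of mine, not from the original report):

print(1073014000 * 4)  # 4292056000 bytes at the crash point
print(2**32)           # 4294967296 -- within about 0.07% of the observed total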
I tried different approaches. For example, calling close() from time to time followed by open(). This did not help.
Then I tried to create the SpiDev object locally, so it would be re-created for every batch of data.
def spiLoop():
    spidevice2 = spidev.SpiDev()
    spidevice2.open(0, 1)
    spidevice2.max_speed_hz = 15000000
    spidevice2.mode = 1  # Data is clocked in on falling edge
    for j in range(100000):
        spidevice2.xfer2([0x00, 0x00, 0x00, 0x00])
    spidevice2.close()
It still crashed after approx. 2^30 calls of xfer2([0x00, 0x00, 0x00, 0x00]), which corresponds to approx. 2^32 bytes.
EDIT1
To speed up the process, I sent blocks of 4096 bytes using the code below, again creating the SpiDev object locally. It took 2 hours to arrive at the 2^32-byte count.
def spiLoop():
    spidevice2 = spidev.SpiDev()
    spidevice2.open(0, 1)
    spidevice2.max_speed_hz = 25000000
    spidevice2.mode = 1  # Data is clocked in on falling edge
    to_send = [0x00] * 2**12  # 4096 bytes
    for j in range(100):
        spidevice2.xfer2(to_send)
    spidevice2.close()
    del spidevice2

def runSPI():
    for i in range(2**31 - 1):
        spiLoop()
        print((2**12 * 100 * (i + 1)) / 2**20, 'Mbytes')
EDIT2
Reloading the spidev on the fly does not help either. I tried this code on both RPi3 and RPi4 with the same result:
import importlib

def spiLoop():
    importlib.reload(spidev)
    spidevice2 = spidev.SpiDev()
    spidevice2.open(0, 1)
    spidevice2.max_speed_hz = 25000000
    spidevice2.mode = 1  # Data is clocked in on falling edge
    to_send = [0x00] * 2**12  # 4096 bytes
    for j in range(100):
        spidevice2.xfer2(to_send)
    spidevice2.close()
    del spidevice2

def runSPI():
    for i in range(2**31 - 1):
        spiLoop()
        print((2**12 * 100 * (i + 1)) / 2**20, 'Mbytes')
EDIT3
Executing the code snippet below via exec() did not isolate the problem either. It crashed after the 4th chunk of 1 Gbyte of data was sent.
program = '''
import spidev

spidevice = None

def configSPI():
    global spidevice
    # We only have SPI bus 0 available to us on the Pi
    bus = 0
    # Device is the chip select pin. Set to 0 or 1, depending on the connections
    device = 1
    spidevice = spidev.SpiDev()
    spidevice.open(bus, device)
    spidevice.max_speed_hz = 250000000
    spidevice.mode = 1  # Data is clocked in on falling edge

def spiLoop():
    to_send = [0xAA] * 2**12
    loops = 1024
    for j in range(loops):
        spidevice.xfer2(to_send)
    return len(to_send) * loops

configSPI()
bytes_total = 0
while True:
    bytes_sent = spiLoop()
    bytes_total += bytes_sent
    print(int(bytes_total / 2**20), "Mbytes", int(1000 * (bytes_total / 2**30)) / 10, "% finished")
    if bytes_total > 2**30:
        break
'''

for i in range(100):
    exec(program)
    print("program executed", i + 1, "times, bytes sent > ", (i + 1) * 2**30)
I believe the original asker's issue is a reference leak; specifically, py-spidev issue 91. Said reference leak has been fixed in the 3.5 release of spidev.
Python uses a shared pool of objects to represent small integer values*, rather than re-creating them each time. So when code leaks references to small numbers the result is not a memory leak but instead a reference count that keeps increasing. The python spidev library had an issue where it leaked references to small integers in this way.
On a 32-bit system** the eventual result is that the reference count overflows. Then something decrements the overflowed reference count and the reference counting system frees the object.
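A quick way to see the shared pool in action (my own illustration, CPython-specific, not part of the original answer):

import sys

a, b = 200, 200
print(a is b)              # True: both names share the one cached object for 200
print(sys.getrefcount(0))  # already huge: every live reference to 0 bumps this count

x = int("300")             # computed at runtime, outside any constant folding
y = int("300")
print(x is y)              # False: 300 is beyond the cached range, so two objects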
What I can't explain is the other answer that claims they can still reproduce the issue with 3.5. This issue was supposed to have been fixed in that version.
* Specifically numbers in the range -5 to 256 inclusive, so anything that can be represented in an unsigned byte plus a few negative values (presumably because they are commonly used as error returns) and 256 (presumably because it's often used as a multiplier).
** On a 64-bit system the reference count will not overflow within a human lifetime.
I have an industrial sensor which provides me information via telnet over port 10001.
It has a data format as described in the manual (the format diagram is an image and is not reproduced here). The manual also says:
All the measuring values are transmitted int32 or uint32 or float depending on the sensors
Code
import telnetlib
import struct
import time

# IP address, port, timeout for Telnet
tn = telnetlib.Telnet("169.254.168.150", 10001, 10)

while True:
    op = tn.read_eager()  # currently read information limit this till preamble
    print(op[::-1])  # make little-endian
    if not len(op[::-1]) == 0:  # initially an empty bit starts (b'')
        data = struct.unpack('!4c', op[::-1])  # unpacking `MEAS`
    time.sleep(0.1)
My initial attempt:
Connect to the sensor
Read data
Convert it to little-endian
OUTPUT
b''
b'MEAS\x85\x8c\x8c\x07\xa7\x9d\x01\x0c\x15\x04\xf6MEAS'
b'\x04\xf6MEAS\x86\x8c\x8c\x07\xa7\x9e\x01\x0c\x15\x04\xf6'
b'\x15\x04\xf6MEAS\x85\x8c\x8c\x07\xa7\x9f\x01\x0c\x15'
b'\x15\x04\xf6MEAS\x87\x8c\x8c\x07\xa7\xa0\x01\x0c'
b'\xa7\xa2\x01\x0c\x15\x04\xf6MEAS\x87\x8c\x8c\x07\xa7\xa1\x01\x0c'
b'\x8c\x07\xa7\xa3\x01\x0c\x15\x04\xf6MEAS\x87\x8c\x8c\x07'
b'\x88\x8c\x8c\x07\xa7\xa4\x01\x0c\x15\x04\xf6MEAS\x88\x8c'
b'MEAS\x8b\x8c\x8c\x07\xa7\xa5\x01\x0c\x15\x04\xf6MEAS'
b'\x04\xf6MEAS\x8b\x8c\x8c\x07\xa7\xa6\x01\x0c\x15\x04\xf6'
b'\x15\x04\xf6MEAS\x8a\x8c\x8c\x07\xa7\xa7\x01\x0c\x15'
b'\x15\x04\xf6MEAS\x88\x8c\x8c\x07\xa7\xa8\x01\x0c'
b'\x01\x0c\x15\x04\xf6MEAS\x88\x8c\x8c\x07\xa7\xa9\x01\x0c'
b'\x8c\x07\xa7\xab\x01\x0c\x15\x04\xf6MEAS\x8b\x8c\x8c\x07\xa7\xaa'
b'\x8c\x8c\x07\xa7\xac\x01\x0c\x15\x04\xf6MEAS\x8c\x8c'
b'AS\x89\x8c\x8c\x07\xa7\xad\x01\x0c\x15\x04\xf6MEAS\x8a'
b'MEAS\x88\x8c\x8c\x07\xa7\xae\x01\x0c\x15\x04\xf6ME'
b'\x15\x04\xf6MEAS\x87\x8c\x8c\x07\xa7\xaf\x01\x0c\x15\x04\xf6'
b'\x15\x04\xf6MEAS\x8a\x8c\x8c\x07\xa7\xb0\x01\x0c'
b'\x0c\x15\x04\xf6MEAS\x8a\x8c\x8c\x07\xa7\xb1\x01\x0c'
b'\x07\xa7\xb3\x01\x0c\x15\x04\xf6MEAS\x89\x8c\x8c\x07\xa7\xb2\x01'
b'\x8c\x8c\x07\xa7\xb4\x01\x0c\x15\x04\xf6MEAS\x89\x8c\x8c'
b'\x85\x8c\x8c\x07\xa7\xb5\x01\x0c\x15\x04\xf6MEAS\x84'
b'MEAS\x87\x8c\x8c\x07\xa7\xb6\x01\x0c\x15\x04\xf6MEAS'
b'\x04\xf6MEAS\x8b\x8c\x8c\x07\xa7\xb7\x01\x0c\x15\x04\xf6'
b'\x15\x04\xf6MEAS\x8b\x8c\x8c\x07\xa7\xb8\x01\x0c\x15'
b'\x15\x04\xf6MEAS\x8a\x8c\x8c\x07\xa7\xb9\x01\x0c'
b'\xa7\xbb\x01\x0c\x15\x04\xf6MEAS\x87\x8c\x8c\x07\xa7\xba\x01\x0c'
Try to unpack the preamble!?
How do I read information like Article number, Serial number, Channel, Status, Measuring Value between the preamble?
The payload size seems to be fixed here at 22 bytes (via Wireshark).
Parsing the reversed buffer is just weird; please use struct's support for endianness. Using big-endian '!' in a little-endian context is also odd.
The first four bytes are a text constant. Ok, fine, perhaps you'll need to reverse those. But just those, please.
After that, use struct.unpack to parse out 'IIQI'. So far, that was kind of working OK with your approach, since all those fields consume 4 bytes or a pair of 4 bytes. But finding frame M's length is the fly in the ointment, since it is just 2 bytes, so parse it with 'H', giving you a combined 'IIQIH'. After that, you'll need to advance by only that many bytes, and then expect another 'MEAS' text constant once you've exhausted that set of measurements.
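Here is a rough sketch (mine; the field names are guesses based on the 22-byte frame size from the question and the layout the asker eventually used) of stream-oriented parsing: accumulate raw bytes, sync on the b'MEAS' preamble, then unpack little-endian fields with struct instead of reversing buffers.

import struct

buf = b''

def feed(chunk):
    """Accumulate raw bytes and process complete 22-byte frames."""
    global buf
    buf += chunk
    while True:
        start = buf.find(b'MEAS')
        if start < 0 or len(buf) - start < 22:
            return  # wait for more data
        frame, buf = buf[start:start + 22], buf[start + 22:]
        # little-endian: 4-byte preamble, two uint32, one uint64, one uint16
        preamble, article, serial, chan, status = struct.unpack('<4sIIQH', frame)
        print(article, serial, chan, status)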
I managed to avoid telnetlib altogether and created a TCP client using Python 3. I already had the payload size from my Wireshark dump (22 bytes), hence I keep receiving 22 bytes of information. Apparently the module sends two distinct 22-byte payloads:
The first (frame) payload has the preamble, serial, article, and channel information
The second (frame) payload has information like bytes per frame, measuring value counter, and the measuring values for Channel 1, Channel 2, and Channel 3
The information is in int32 and thus needs a formula to be converted to real readings (given in the instruction manual)
(As @J_H mentioned in his answer, the unpacking is as he described, with small changes.)
Code
import socket
import time
import struct

DRANGEMIN = 3261
DRANGEMAX = 15853
MEASRANGE = 50
OFFSET = 35

# Create a TCP/IP socket
sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server_address = ('169.254.168.150', 10001)
print('connecting to %s port %s' % server_address)
sock.connect(server_address)

def value_mm(raw_val):
    return ((raw_val - DRANGEMIN) * MEASRANGE) / (DRANGEMAX - DRANGEMIN) + OFFSET

if __name__ == '__main__':
    while True:
        Laser_Value = 0
        data = sock.recv(22)
        preamble, article, serial, x1, x2 = struct.unpack('<4sIIQH', data)
        if not preamble == b'SAEM':
            status, bpf, mValCounter, CH1, CH2, CH3 = struct.unpack('<hIIIII', data)
            #print(CH1, CH2, CH3)
            Laser_Value = CH3
            print(str(value_mm(Laser_Value)) + " mm")
        #print('RAW: ' + str(len(data)))
        print('\n')
        #time.sleep(0.1)
Sure enough, this provides me the information that is needed, and I verified it against the proprietary software which the company provides.
I am writing a byte to serial port using Python.
import serial
ser = serial.Serial ("/dev/ttyACM0")
ser.baudrate = 115200
ser.write('\x57')
ser.close()
When I connect TX to RX, I have no problem reading that byte (sent from the Python code) using GtkTerm. But when I try to read this data on a microcontroller using C, I always read 240. Yet when I use GtkTerm to send hexadecimal data directly (View -> Send Hexadecimal data), I read the appropriate value on the microcontroller. What could be wrong?
C code:
char byte = getc_();
printf_("1 byte received: i: %i \n",byte);
getc_() function:
char getc_()
{
#ifdef LIB_MUTEX
    mutex_lock(&mutex_getc_);
#endif
    char res = uart_read();
#ifdef LIB_MUTEX
    mutex_unlock(&mutex_getc_);
#endif
    return res;
}
Likely a mismatched communication rate. Change the receiving/transmitting baud to maybe 1/4 or 1/6 of its current value -- suggest 19200 (or speed up the transmitting/receiving side).
240 is 0xF0. With RS-232, data is sent
Start bit - LS bit - next bit - ... -next bit - MS bit - Stop bit
// or
0 - LS bit - next bit - ... -next bit - MS bit - 1
// or 0xF0
0 - 0 - 0 - 0 - 0 - 1 - 1 - 1 - 1 - 1
If the receiving end is seeing too many 0 bits (look at least significant bit first), and data received is in bit groups of 0's and 1's, it usually means data is sent at a slower baud than what the receiver is using.
Another clue to deciphering a baud mismatch is the ratio of the count of bytes sent versus bytes received. Given various bit patterns this is not a hard rule, but the side with more data is likely the one at the higher baud.
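To make the bit-pattern argument concrete, here is a crude simulation of mine (not from the answer) of a receiver sampling mid-bit while running faster than the transmitter; the 5x ratio is purely illustrative:

def misread(byte_val, ratio):
    # Line level over time, one entry per transmitted bit period:
    # start bit (0), 8 data bits LSB first, stop bit (1)
    line = [0] + [(byte_val >> i) & 1 for i in range(8)] + [1]
    bits = []
    for k in range(8):
        t = (1.5 + k) / ratio  # receiver's mid-bit sample time, in tx bit periods
        bits.append(line[min(int(t), 9)])
    return sum(b << i for i, b in enumerate(bits))

print(hex(misread(0x57, 1)))  # -> 0x57: correct byte when the bauds match
print(hex(misread(0x57, 5)))  # -> 0xf0 (240): the fast receiver sees the
                              # stretched start bit as zeros, then all ones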
After some time I found this solution. The first byte I receive is always 240, but with additional small sleeps I get the correct values which I sent.
import serial
import time

ser = serial.Serial(port="/dev/ttyACM0", bytesize=8, stopbits=1)
ser.baudrate = 115200
sleep_time = 0.05

ser.write('\x41')  # received as 240
time.sleep(sleep_time)
ser.write('\x41')  # right value, 'A'
time.sleep(sleep_time)
ser.write('\x41')  # right value, 'A'
...
ser.close()
I have revised my question in response to being put on hold. Hopefully this will better match SO standards.
The purpose of this program is to build and send UDP packets which use Alternating Bit Protocol as a simple resending mechanism. I have confirmed already that the packets can be sent and received correctly. The issue is with the ABP bit and its flipping.
The problem facing me now is that despite trying multiple different methods, I cannot flip the ABP bit used for confirming that a packet or ack received is the correct numbered one. I start out by sending a packet with ABP bit=0, and in response, the receiving process should see this and send back an ack with ABP bit=0. Upon receiving that, the sender program flips its ABP bit to 1 and sends a new packet with this ABP bit value. The receiver will get that, send a matching ack with ABP bit=1, and the sender will receive, flip its bit back to 0, and continue the cycle until the program has finished sending information.
Code below, sorry for length, but it is complete and ready to run. The program takes four command line arguments, here is the command I have been using:
% python ftpc.py 164.107.112.71 4000 8000 manygettysburgs.txt
where ftpc.py is the name of the sender program, 164.107.112.71 is an IP address, 4000 and 8000 are port numbers, and manygettysburgs.txt is a text file I have been sending. It should not make a difference if a different .txt is used, but for full accuracy use a file between 8000 and 9000 characters in length.
import sys
import socket
import struct
import os
import select

def flipBit(val): #flip ABP bit from 0 to 1 and vice versa
    foo = 1 - val
    return foo

def buildPacketHeader(IP, Port, Flag, ABP):
    #pack IP for transport
    #split into four 1-byte values
    SplitIP = IP.split('.')
    #create a 4-byte struct to pack IP, and pack it in remoteIP
    GB = struct.Struct("4B")
    remoteIP = GB.pack(int(SplitIP[0]), int(SplitIP[1]), int(SplitIP[2]), int(SplitIP[3]))
    #remoteIP is now a 4-byte string of packed IP values

    #pack Port for transport
    #create a 2-byte struct to pack port, and pack it in remotePort
    GBC = struct.Struct("H")
    remotePort = GBC.pack(int(Port)) #needs another byte
    #remotePort is now a 2-byte string

    #add flag
    flag = bytearray(1)
    flag = str(Flag)

    #add ABP
    ABP = flipBit(ABP)
    abp = str(ABP)

    #create header and join the four parts together
    Header = ''.join(remoteIP)
    Header += remotePort
    Header += flag
    Header += abp

    return Header

#assign arguments to local values
IP = sys.argv[1]
PORT = sys.argv[2]
TPORT = sys.argv[3]
localfile = sys.argv[4]

#declare the socket
s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)

#create destination information arrays
remoteIP = bytearray(4)
remotePort = bytearray(2)
#create flag array
flag = bytearray(1)
#create ABP bit
bit = 1
print bit

#send file size packet
#get size of file from os command
filesize = bytearray(4)
#save as 4-byte string
filesize = str(os.stat(localfile).st_size)
#package the filesize string
filesizestr = buildPacketHeader(IP, PORT, 1, bit) #build header
print bit
filesizestr += filesize #complete the packet
s.sendto(filesizestr, ('127.0.0.1', int(TPORT))) #send packet
# end of send file size packet

#begin listening for responses
read, write, err = select.select([s], [], [], 1) #timeout set to 1 second
if len(read) > 0:
    #data received
    data = read[0].recv(1200)
    if data[7] != bit:
        print "failed ack"
        #resend packet
else:
    print "Timeout."
    #resend packet

#send next data packet
#get filename as string (from arg4 directly)
filename = bytearray(20)
#save as 20-byte string
filename = localfile
#package the filename string
filenamestr = buildPacketHeader(IP, PORT, 2, bit) #build header
print bit
filenamestr += filename #complete the packet
s.sendto(filenamestr, ('127.0.0.1', int(TPORT))) #send packet
#end of send file name packet

#send file content packet
#reading while loop goes here
with open(localfile, 'rb', 0) as f: #open the file
    while True:
        fstr = f.read(1000)
        if not fstr:
            print "NOTHING"
            break
        #put together the main packet base
        filecontentstr = buildPacketHeader(IP, PORT, 3, bit)
        print bit
        filecontentbytearray = bytearray(1000) #create byte array
        filecontentbytearray = fstr #assign fstr to byte array
        filecontentsend = ''.join(filecontentstr) #copy filecontentstr to a new string since we will reuse filecontentstr for other packets
        filecontentsend += filecontentbytearray #append read data to be sent
        s.sendto(filecontentsend, ('127.0.0.1', int(TPORT))) #send the file content packet
#end of send file content packet
s.close()
In this code, every time that buildPacketHeader is called, it performs flipBit as part of its operations. flipBit is supposed to flip the bit's value for ABP. I have prints set up to print out the new value of bit after all calls to buildPacketHeader, so as to track the value. Whenever I run the program, however, I always see the same value for the ABP bit.
I've tried several methods, including changing to a bool. Here are some changes to flipBit I have tried:
def flipBit(val): #flip ABP bit from 0 to 1 and vice versa
    if val == 0:
        val = 1
    else:
        val = 0
    return val
and some with bools instead:
def flipBit(val):
    val = not val
    return val

def flipBit(val):
    val = (True, False)[val]
    return val
From past experience, I figure that many of these methods are in fact working options. That said, I am completely baffled as to why none of them work as expected in this program. I would assume that my inexperience with Python is at fault, for despite having now used it for a decent amount of time, there are still peculiarities which escape me. Any help is greatly appreciated.
I don't understand what your objection is to Python ints, but the ctypes module provides a world of low-level mutable objects; e.g.,
>>> import ctypes
>>> i = ctypes.c_ushort(12) # 2-byte unsigned integer, initial value 12
>>> i
c_ushort(12)
>>> i.value += 0xffff - 12
>>> hex(i.value)
'0xffff'
>>> i.value += 1
>>> i.value # silently overflowed to 0
0
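Applied to the ABP use case, a ctypes integer is a mutable object, so a helper can flip it in place without the caller rebinding anything. A sketch of mine, not from the answer above:

import ctypes

bit = ctypes.c_ushort(0)

def flipBit(b):
    b.value ^= 1  # mutates the shared object in place

flipBit(bit)
print(bit.value)  # 1
flipBit(bit)
print(bit.value)  # 0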
This is my first time ever answering my own SO question, so hopefully this turns out right. If anyone has additions or further answers, then feel free to answer as well, or comment on this one.
I ended up solving the problem by adding another return value to the return statement of buildPacketHeader, so that in addition to returning a string I also return the new value of bit as well. I confirmed that it was working by setting up the following prints inside of buildPacketHeader:
#add ABP
print "before:",ABP #test line for flipBit
ABP = flipBit(ABP)
abp = str(ABP)
print "after:",ABP #test line for flipBit
The output is shown here (I ended the run early, but the proof of functionality is still visible):
% python ftpc.py 164.107.112.70 4000 8000 manygettysburgs.txt
before: 1
after: 0
Timeout, resending packet...
before: 0
after: 1
Timeout, resending packet...
before: 1
after: 0
As can be seen, the before of the second packet is the after of the first packet, and the before of the third packet is the after of the second packet. Through this, you can see that the program is now flipping bits correctly.
The change made to buildPacketHeader is shown below:
return Header
becomes
return Header, ABP
and calls to buildPacketHeader:
filesizestr = buildPacketHeader(IP,PORT,1,bit)
become
filesizestr, bit = buildPacketHeader(IP,PORT,1,bit)
Very simple for such a bother. Make sure you return values if you want changes made inside a function to take effect.
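For anyone else hitting this, the underlying gotcha is that rebinding a parameter inside a function never changes the caller's variable; a minimal illustration (mine, not from the post):

def flipBit(val):
    return 1 - val

bit = 0
flipBit(bit)        # return value discarded: bit is still 0
bit = flipBit(bit)  # caller rebinds the name: bit is now 1
print(bit)          # 1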
Is it possible to perform the simplest SMTP session using Scapy?
I tried to read a session captured with tcpdump into Scapy and to send the packets myself, but no luck...
This is what I have:
#!/usr/bin/python
from scapy.all import *
from scapy.layers.inet import IP, TCP

source_ip = '1.2.3.4'
source_port = 5100
source_isn = 1000
dest_ip = '1.2.3.5'
dest_port = 25

ip = IP(src=source_ip, dst=dest_ip)
SYN = TCP(sport=source_port, dport=dest_port, flags="S", seq=source_isn)
SYNACK = sr1(ip/SYN)

source_isn = SYN.seq + 1
source_ack = SYNACK.seq + 1
ACK = TCP(ack=source_ack, sport=source_port, dport=dest_port, flags="A", seq=source_isn)
handshakedone = sr1(ip/ACK)

DTA = TCP(ack=handshakedone.seq + len(handshakedone.load), seq=source_isn, sport=source_port, dport=dest_port, flags="PA")
sr(ip/DTA/Raw(load='mail from: test@gmail.com\r\n'))
send(ip/DTA/Raw(load='rcpt to: me@gmail.com\r\n'))
source_isn = ACK.seq + len(mfrom)
.....
RST = TCP(ack=SYNACK.seq + 1, seq=source_isn, sport=source_port, dport=dest_port, flags="RA")
send(ip/RST)
Handshake is successful but what ACK and SEQ values should be during the session? How can I calculate them?
TCP seq and ack numbers are described in RFC 793 (starting at page 24). The whole spec is too long to post here, but basically, every byte of payload has a sequence number. In addition to the payload bytes, there are two control flags (SYN and FIN) that get their own sequence numbers. Initial sequence numbers should be randomized, but don't really matter if you're just playing around. The ack number in your packet is the next sequence number you expect to receive, and the seq field in the packet is the first sequence number in the segment.
So to ack all packets up to a given one, add the sequence number from the given packet to its length (including FIN or SYN flags, if set) and put that in your ack field.
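As a sketch of that bookkeeping in Scapy terms (my code, assuming `pkt` is the packet you just received from the peer):

from scapy.all import TCP, Raw

def next_ack(pkt):
    # Payload bytes consume sequence numbers...
    ln = len(pkt[Raw].load) if Raw in pkt else 0
    # ...and SYN (0x02) and FIN (0x01) each consume one more
    if pkt[TCP].flags & 0x03:
        ln += 1
    return pkt[TCP].seq + ln

For the handshake in the question's script, this yields SYNACK.seq + 1, which matches the source_ack already computed there.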