So I was just messing around with sockets in Python. I discovered that setting the socket option SO_RCVBUF to N makes the socket's receive buffer become 2N bytes large, according to getsockopt. For example:
import socket
a, b = socket.socketpair()
a.setsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF, 4096)
print a.getsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF) #prints 8192
b.send('1'*5000)
print len(a.recv(5000)) #prints 5000 instead of 4096 or something else.
a.setsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF, 8192)
print a.getsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF) #prints 16384
Can someone explain this to me? I am writing an HTTP server and I want to strictly limit the size of a request to protect my scarce RAM.
Internally this performs an OS-level setsockopt call, about which man 7 socket says the following:
SO_RCVBUF
Sets or gets the maximum socket receive buffer in bytes. The kernel doubles this value (to allow space for bookkeeping overhead) when it is set using setsockopt(2), and this doubled value is returned by getsockopt(2). The default value is set by the /proc/sys/net/core/rmem_default file, and the maximum allowed value is set by the /proc/sys/net/core/rmem_max file. The minimum (doubled) value for this option is 256.
Liberally copied from this wonderful answer to a slightly different question: https://stackoverflow.com/a/11827867/758446
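To see the other limits from that man page excerpt in action, here is a small Linux-specific sketch; the exact numbers it prints depend on the local /proc/sys/net/core/rmem_default and rmem_max settings.

import socket
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
# Default buffer size: roughly the value of /proc/sys/net/core/rmem_default
print(s.getsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF))
# Ask for far more than rmem_max allows; without CAP_NET_ADMIN the kernel
# silently clamps the request, and getsockopt reports the doubled, clamped value
s.setsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF, 1 << 30)
print(s.getsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF))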
Related
I'm trying to communicate with my MPU9250 over SPI using the py-spidev module, and I'm unable to understand how exactly the read function works.
I found this function snippet that performs a register read, and I'd like to know why __READ_FLAG (__READ_FLAG = 0x80) is OR'd into the address byte before the dummy value is sent. Won't this change the register address completely?
def ReadReg(self, reg_address):
    # Open the SPI bus, send the register address with the read flag set
    # plus one dummy byte, and return the byte clocked back for the dummy.
    self.bus.open(self.spi_bus_number, self.spi_dev_number)
    tx = [reg_address | self.__READ_FLAG, 0x00]   # bit 7 set marks this as a read
    rx = self.bus.xfer2(tx)                       # full-duplex transfer: rx[1] holds the register value
    self.bus.close()
    return rx[1]
Found the answer for this in another datasheet that follows the same protocol.
Writing is done by lowering CSB and sending pairs of control bytes and register data. The control bytes consist of the SPI register address (= full register address without bit 7) and the write command (bit 7 = RW = ‘0’). Several pairs can be written without raising CSB. The transaction is ended by a raising CSB.
Reading is done by lowering CSB and first sending one control byte. The control byte consists of the SPI register address (= full register address without bit 7) and the read command (bit 7 = RW = ‘1’). After writing the control byte, data is sent out of the SDO pin (SDI in 3-wire mode); the register address is automatically incremented.
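In other words, the register address only occupies the lower 7 bits, so OR'ing in 0x80 does not disturb the address itself; the device strips bit 7 and treats it as the read command. A tiny illustration (the address 0x3B is just an arbitrary example):

READ_FLAG = 0x80            # bit 7 is the R/W flag, not part of the address
reg_address = 0x3B          # any 7-bit register address (0x00 - 0x7F)
control_byte = reg_address | READ_FLAG
print(hex(control_byte))            # 0xbb -> bit 7 set means "read this register"
print(hex(control_byte & 0x7F))     # 0x3b -> the address bits are untouched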
I'm using psutil to query the number of bytes sent and received over ethernet on Windows 10.
I can use psutil.net_io_counters(pernic=True) to get the values. However, when I parse the values I need, they are slightly higher than the byte count that I view in the system UI.
import psutil

# Per-NIC counters for the adapter named "Ethernet" in Windows
network_stats = psutil.net_io_counters(pernic=True)['Ethernet']
bytes_sent = network_stats.bytes_sent
bytes_recv = network_stats.bytes_recv
print "Bytes Sent = {0} | Bytes Received = {1}".format(bytes_sent, bytes_recv)
My issue is, the number of bytes returned from this script is always higher than the byte count displayed in the Network Connections -> Ethernet Status UI. However, the difference is relatively small.
Is it possible to use psutil to get sent/received values that match the activity shown in that Ethernet Status window? And why are the returned values slightly different in the first place?
I use the Python library "minimalmodbus" to communicate with a Modbus device:
import minimalmodbus
from minimalmodbus import Instrument
minimalmodbus.BAUDRATE = 9600
m = Instrument('com2', 1)
m.debug=True
print m.read_long(4156)
the result:
MinimalModbus debug mode. Writing to instrument (expecting 9 bytes back): '\x01\x03\x10<\x00\x02\x00\xc7'
MinimalModbus debug mode. No sleep required before write. Time since previous read: 1422431606124.0 ms, minimum silent period: 4.01 ms.
MinimalModbus debug mode. Response from instrument: '\x01\x03\x04\x00\x01\x00\x00\xab\xf3' (9 bytes), roundtrip time: 28.0 ms. Timeout setting: 50.0 ms.
65536
The returned value is 65536, which is 0x00010000 in hex, but I already know the data should be 1 (0x00000001). The reason is obvious: minimalmodbus interprets the response data '\x00\x01\x00\x00' as 0x00010000, whereas for my Modbus device it is supposed to mean 0x00000001. I referred to the documentation (http://minimalmodbus.sourceforge.net/#known-issues), where I saw this:
For the data types involving more than one register (float, long etc), there are differences in the byte order used by different manufacturers. A floating point value of 1.0 is encoded (in single precision) as 3f800000 (hex). In this implementation the data will be sent as '\x3f\x80' and '\x00\x00' to two consecutive registers. Make sure to test that it makes sense for your instrument. It is pretty straight-forward to change this code if some other byte order is required by anyone (see support section).
I want to ask: has anyone encountered the same problem and found a simple, quick way (as the author suggests) to change the default byte order in minimalmodbus?
EDIT:
I have found a way to resolve this problem, but I don't know whether it's the simplest:
def _performCommand(self, functioncode, payloadToSlave):
    '''
    Reimplementation of _performCommand in a subclass of minimalmodbus.Instrument:
    let the base class do the transaction, then reorder the register data.
    '''
    payloadFromSlave = Instrument._performCommand(self, functioncode, payloadToSlave)
    if functioncode in [3, 4]:
        # Reorder the data when reading multiple registers; the first byte of
        # the payload is the byte count and is kept as-is.
        return payloadFromSlave[0] + self._restructure(payloadFromSlave)
    else:
        return payloadFromSlave

def _restructure(self, byteCode):
    '''
    Reorder the byte code for my device, e.g.:
    '\x00\x01\x00\x02' ---> '\x00\x02\x00\x01'
    (byte order may differ between manufacturers, see
    http://www.simplymodbus.ca/FAQ.htm#Order)
    '''
    # Walk backwards over the payload two bytes at a time, so the 16-bit
    # register words come out in reversed order.
    newByteCode = ''
    for i in range(len(byteCode) - 2, -1, -2):
        newByteCode += byteCode[i:i+2]
    return newByteCode
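If only the word order of 32-bit reads is the problem, a lighter-weight alternative is to skip read_long and combine the two registers yourself. A sketch: read_registers is part of the minimalmodbus API, but the helper name below is made up.

def read_long_word_swapped(instrument, registeraddress):
    # Hypothetical helper: the device sends the low 16-bit word first,
    # so swap the word order before combining into one 32-bit value.
    low, high = instrument.read_registers(registeraddress, 2)
    return (high << 16) | low

# For the response above ('\x00\x01\x00\x00' -> registers [1, 0]) this returns 1.
print(read_long_word_swapped(m, 4156))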
I am having trouble changing the baud rate while the port is open. All the communication runs at 100k baud, but I also need to send some data at 10k baud. I've read I should use the setBaudrate method, so I tried this:
import serial

ser = serial.Serial(2, baudrate=BAUD, timeout=TIMEOUT)

def reset(string):
    if string:
        ser.flushInput()             # erase input buffer
        ser.flushOutput()            # erase output buffer
        ser.setBaudrate(RESET_BAUD)  # change baud rate to 10k
        ser.write(string)
        ser.setBaudrate(BAUD)        # go back to 100k
The problem is, it doesn't work right. I don't know what is wrong, but the string just isn't received properly. Here is the interesting part, though: if I remove the last line (going back to 100k) and run this function from the shell, everything is fine. Then I can just run the last command directly in the shell instead of inside the function.
My question is: what exactly happens here, and how do I avoid it? All I need is a function that sends a string at a different baud rate and then returns to the original baud rate...
You need to wait long enough for the string to be sent before switching the baud rate back - otherwise the rate changes while some of the data is still in the serial port's (hardware) buffer.
Add time.sleep(0.01*len(string)) before the last line.
BTW, try not to use names of standard modules like string as variable names, as it can cause problems.
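For reference, the suggested sleep would slot in like this (a sketch of the question's reset function; at 10 kbaud one byte takes roughly 1 ms, so 0.01 s per byte leaves a generous margin):

import time

def reset(data):                           # renamed from "string" to avoid shadowing the stdlib module
    if data:
        ser.flushInput()
        ser.flushOutput()
        ser.setBaudrate(RESET_BAUD)
        ser.write(data)
        time.sleep(0.01 * len(data))       # wait until the bytes have left the UART
        ser.setBaudrate(BAUD)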
My guess is that the baud rate is being changed before the data is actually sent. A good bet is to force the data to be sent before trying to change the baud rate.
According to the docs, this is done by calling Serial.flush() (not flushInput() or flushOutput(), as these just discard the buffer contents).
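Putting that into the same function gives something like the sketch below (note that on pyserial 3.x setBaudrate has been removed and you assign ser.baudrate instead):

def reset(data):
    if data:
        ser.setBaudrate(RESET_BAUD)        # switch to 10k baud
        ser.write(data)
        ser.flush()                        # block until the output buffer is actually transmitted
        ser.setBaudrate(BAUD)              # now it is safe to return to 100k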
I'm having a problem with sockets in python.
I have a TCP server and client that send each other data in a while 1 loop.
Each message packs 2 shorts with the struct module (struct.pack("hh", mousex, mousey)). But sometimes, when recv'ing the data on the other computer, it seems like two messages have been glued together. Is this Nagle's algorithm?
What exactly is going on here? Thanks in advance.
I agree with other posters, that "TCP just does that". TCP guarantees that your bytes arrive in the right order, but makes no guarantees about the sizes of the chunks they arrive in. I would add that TCP is also allowed to split a single send into multiple recv's, or even for example to split aabb, ccdd into aab, bcc, dd.
I put together this module for dealing with the relevant issues in python:
http://stromberg.dnsalias.org/~strombrg/bufsock.html
It's under an opensource license and is owned by UCI. It's been tested on CPython 2.x, CPython 3.x, Pypy and Jython.
HTH
To be sure I'd have to see actual code, but it sounds like you are expecting a send of n bytes to show up on the receiver as exactly n bytes all the time, every time.
TCP streams don't work that way. It's a "streaming" protocol, as opposed to a "datagram" (record-oriented) one like UDP, SCTP, or RDS.
For fixed-data-size protocols (or any where the next chunk size is predictable in advance), you can build your own "datagram-like receiver" on a stream socket by simply recv()ing in a loop until you get exactly n bytes:
def recv_n_bytes(sock, n):
    "attempt to receive exactly n bytes; return what we got"
    data = []
    while True:
        have = sum(len(x) for x in data)
        if have >= n:
            break
        want = n - have
        got = sock.recv(want)
        if got == '':
            # Peer closed the connection; return whatever arrived so far.
            break
        data.append(got)
    return ''.join(data)
(untested; python 2.x code; not necessarily efficient; etc).
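For the two-shorts messages from the question, usage might look like this (conn stands for the connected socket; a sketch in the same untested spirit):

import struct

MSG_SIZE = struct.calcsize("hh")          # 4 bytes: two shorts
msg = recv_n_bytes(conn, MSG_SIZE)        # read exactly one message, no more
if len(msg) == MSG_SIZE:
    mousex, mousey = struct.unpack("hh", msg)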
You may not assume that data will become available for reading from the local socket in the same size pieces it was provided for sending at the other end. As you have seen, this is often true in practice, but it is by no means reliable. What TCP guarantees is that what goes in one end will eventually come out the other, in order, with nothing missing; if that cannot be achieved by the means built into the protocol, such as retries, then the whole connection breaks with an error.
Nagle is one possible cause, but not the only one.
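If you want to rule Nagle out specifically, it can be disabled per socket, though doing so still does not give you message boundaries:

import socket

sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)  # disable Nagle's algorithm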