How to deal with errors on select with multiple IO streams? - python

I have the following call to select:
try:
    rlst, wlst, plst = select.select(
        [x.fileno() for x in self.rlist],
        [x.fileno() for x in self.wlist],
        [x.fileno() for x in self.plist])
except select.error, err:
    [...]
Where self.rlist, self.wlist, and self.plist are lists of IO streams (either sockets, PIPE, files, whatever). Now, I am assuming that this select could fail when one of the streams fails for whatever reason.
How can I find out which of those streams caused the error? What I really want to do is remove that IO stream from its list and continue with the select.

Quoting from the Socket Programming HOWTO:
One very nasty problem with select: if somewhere in those input lists of sockets is one which has died a nasty death, the select will fail. You then need to loop through every single damn socket in all those lists and do a select([sock],[],[],0) until you find the bad one. That timeout of 0 means it won’t take long, but it’s ugly.
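In code, the HOWTO's probe-every-socket approach might look roughly like this (a minimal sketch; remove_dead_streams is an illustrative name, and the zero timeout means each probe returns immediately):

import select

def remove_dead_streams(streams):
    # Probe each stream with a zero-timeout select; a descriptor that
    # has died makes select raise, so we drop that stream from the list.
    alive = []
    for stream in streams:
        try:
            select.select([stream.fileno()], [], [], 0)
        except (select.error, ValueError):
            continue  # this is (one of) the bad ones; leave it out
        alive.append(stream)
    return alive

You would then prune each list before retrying the main select:

self.rlist = remove_dead_streams(self.rlist)
self.wlist = remove_dead_streams(self.wlist)
self.plist = remove_dead_streams(self.plist)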

Related

Python, serial - changing baudrate, strange behaviour

I am having trouble changing the baudrate while the port is running. All the communication runs at 100k baud, but I also need to send some data at 10k baud. I've read I should use the setBaudrate method, so I tried this:
ser = serial.Serial(2, baudrate=BAUD, timeout=TIMEOUT)

def reset(string):
    if string:
        ser.flushInput()  # erase input and output buffers
        ser.flushOutput()
        ser.setBaudrate(RESET_BAUD)  # change baudrate to 10k
        ser.write(string)
        ser.setBaudrate(BAUD)  # go back to 100k
The problem is, it doesn't work right. I don't know what is wrong here, but the string just isn't received properly. But here is the interesting part: if I remove the last line (going back to 100k) and run this function from the shell, everything is fine. Then I can just run the last command directly in the shell, not inside the function.
My question is what exactly happens here and how to avoid it? All I need is a function to send a string with different baudrate and then return to the original baudrate...
You need to wait long enough for the string to be sent before resetting the baud rate; otherwise the rate changes while some of the string is still in the serial port's (hardware) buffer.
Add time.sleep(0.01*len(string)) before the last line.
BTW, try not to use the names of standard modules like string as variable names, as it can cause problems.
My guess is that the baud rate is being changed before the data is actually sent. A good bet is to force the data to be sent before trying to change the baud rate.
According to the docs, this is done by calling Serial.flush() (not flushInput() or flushOutput(), as these just discard the buffer contents).
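Putting that together, the function might look like this (a sketch based on the answer above; Serial.flush() blocks until all buffered output has actually been written):

def reset(data):  # renamed from 'string' to avoid shadowing the module name
    if data:
        ser.flushInput()             # discard anything pending
        ser.flushOutput()
        ser.setBaudrate(RESET_BAUD)  # change baudrate to 10k
        ser.write(data)
        ser.flush()                  # block until the output buffer is drained
        ser.setBaudrate(BAUD)        # now it is safe to go back to 100k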

How to read complete file with bitstring

I want to read as many 24 bit chunks as possible from a file.
How can I do this using bitstring's ConstBitStream
when I don't know how many chunks there are?
Currently I do this:
eventList = ConstBitStream(filename='events.dat')
for i in range(1000):
    packet = eventList.read(24)
(here I have to calculate the number of events beforehand)
You could read until a ReadError exception is raised:
try:
    while True:
        packet = eventList.read(24)
except ReadError:
    pass
Catching the ReadError is a perfectly good answer, but another way is to instead use the cut method, which returns a generator for bitstrings of a given length, so just
for packet in eventList.cut(24):
should work.
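For reference, here is a self-contained sketch combining both answers (assuming the bitstring package is installed; process() is a placeholder for whatever you do with each chunk):

from bitstring import ConstBitStream, ReadError

eventList = ConstBitStream(filename='events.dat')

# Approach 1: read until the stream runs out of bits
try:
    while True:
        packet = eventList.read(24)
        process(packet)
except ReadError:
    pass

# Approach 2: let cut() generate fixed-size 24-bit chunks
for packet in ConstBitStream(filename='events.dat').cut(24):
    process(packet)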

TCP Sockets: Double messages

I'm having a problem with sockets in python.
I have a TCP server and client that send each other data in a while 1 loop.
It packages up 2 shorts with the struct module (struct.pack("hh", mousex, mousey)). But sometimes when recv()ing the data on the other computer, it seems like 2 messages have been glued together. Is this Nagle's algorithm?
What exactly is going on here? Thanks in advance.
I agree with the other posters that "TCP just does that". TCP guarantees that your bytes arrive in the right order, but makes no guarantees about the sizes of the chunks they arrive in. I would add that TCP is also allowed to split a single send into multiple recv's, or even, for example, to split two sends of aabb and ccdd into three recv's of aab, bcc, dd.
I put together this module for dealing with the relevant issues in python:
http://stromberg.dnsalias.org/~strombrg/bufsock.html
It's under an open-source license and is owned by UCI. It's been tested on CPython 2.x, CPython 3.x, PyPy and Jython.
HTH
To be sure I'd have to see actual code, but it sounds like you are expecting a send of n bytes to show up on the receiver as exactly n bytes all the time, every time.
TCP streams don't work that way. It's a "streaming" protocol, as opposed to a "datagram" (record-oriented) one like UDP, SCTP, or RDS.
For fixed-data-size protocols (or any where the next chunk size is predictable in advance), you can build your own "datagram-like receiver" on a stream socket by simply recv()ing in a loop until you get exactly n bytes:
def recv_n_bytes(sock, n):
    "attempt to receive exactly n bytes; return what we got"
    data = []
    while True:
        have = sum(len(x) for x in data)
        if have >= n:
            break
        want = n - have
        got = sock.recv(want)
        if got == '':
            break  # connection closed before all n bytes arrived
        data.append(got)  # accumulate this chunk
    return ''.join(data)
(untested; python 2.x code; not necessarily efficient; etc).
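For the asker's two-short messages, that might be used like this (a sketch building on recv_n_bytes above; struct.calcsize gives the exact message size):

import struct

MSG_FMT = "hh"                       # two shorts: mousex, mousey
MSG_SIZE = struct.calcsize(MSG_FMT)  # 4 bytes

def recv_mouse_position(sock):
    data = recv_n_bytes(sock, MSG_SIZE)
    if len(data) < MSG_SIZE:
        return None  # peer closed the connection mid-message
    return struct.unpack(MSG_FMT, data)  # -> (mousex, mousey)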
You may not assume that data will become available for reading from the local socket in the same size pieces it was provided for sending at the other end. As you have seen, this may often be true, but it is by no means reliable. Rather, what TCP guarantees is that what goes in one end will eventually come out the other, in order, with nothing missing; if that cannot be achieved by the means built into the protocol, such as retries, the whole thing will break with an error.
Nagle is one possible cause, but not the only one.

Pack array of namedtuples in Python

I need to send an array of namedtuples over a socket.
To create the array of namedtuples I use the following:
listaPeers = []
for i in range(200):
    ipPuerto = collections.namedtuple('ipPuerto', 'ip, puerto')
    ipPuerto.ip = "121.231.334.22"
    ipPuerto.puerto = "8988"
    listaPeers.append(ipPuerto)
Now that it is filled, I need to pack "listaPeers[200]".
How can I do it?
Something like?:
packedData = struct.pack('XXXX',listaPeers)
First of all you are using namedtuple incorrectly. It should look something like this:
# ipPuerto is a type
ipPuerto=collections.namedtuple('ipPuerto', 'ip, puerto')
# theTuple is a tuple object
theTuple = ipPuerto("121.231.334.22", "8988")
As for packing, it depends on what you want to use on the other end. If the data will be read by Python, you can just use the pickle module:
import cPickle as Pickle
pickledTuple = Pickle.dumps(theTuple)
You can pickle the whole array of them at once.
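For instance (a sketch; peers stands in for a list of such tuples, built with the ipPuerto type and Pickle import from above):

peers = [ipPuerto("121.231.334.22", "8988") for _ in range(200)]
pickledList = Pickle.dumps(peers)      # serialize the entire list in one call
recovered = Pickle.loads(pickledList)  # round-trip on the receiving end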
It is not that simple: yes, for integers and simple numbers, it's possible to pack straight from named tuples to binary data with the struct module.
However, you are holding your data as strings, not as numbers. Converting is simple in the case of the port, as it is a plain integer, but it requires some juggling when it comes to the IP.
import struct

def ipv4_from_str(ip_str):
    # "1.2.3.4" -> the corresponding 32-bit integer
    parts = ip_str.split(".")
    result = 0
    for part in parts:
        result <<= 8
        result += int(part)
    return result

def ip_puerto_gen(list_of_ips):
    # flatten each (ip, puerto) pair into two integers
    for ip_puerto in list_of_ips:
        yield ipv4_from_str(ip_puerto.ip)
        yield int(ip_puerto.puerto)

def pack(list_of_ips):
    return struct.pack(">" + "II" * len(list_of_ips),
                       *ip_puerto_gen(list_of_ips))
You can then use the "pack" function from here to pack your structure as you seem to want.
But first, note that you are creating your "listaPiers" incorrectly (your example code simply will fail with an IndexError): use an empty list, and the append method on it, to insert new named tuples with ip/port pairs as each element:
import collections

listaPiers = []
ipPuerto = collections.namedtuple('ipPuerto', 'ip, puerto')
for x in range(200):
    new_element = ipPuerto("123.123.123.123", "8192")
    listaPiers.append(new_element)

data = pack(listaPiers)
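Going the other way, the receiver can invert this with struct.unpack (a sketch assuming the same ">II"-per-pair layout produced by pack above, and reusing the ipPuerto namedtuple):

def unpack(data):
    count = len(data) // struct.calcsize(">II")
    numbers = struct.unpack(">" + "II" * count, data)
    pairs = []
    for i in range(0, len(numbers), 2):
        ip_int, puerto = numbers[i], numbers[i + 1]
        # rebuild the dotted-quad string from the 32-bit integer
        ip = ".".join(str((ip_int >> shift) & 0xFF) for shift in (24, 16, 8, 0))
        pairs.append(ipPuerto(ip, str(puerto)))
    return pairs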
ISTR that pickle is considered insecure in server processes, if the server process is receiving pickled data from untrusted clients.
You might want to come up with some sort of separator character(s) for the records and fields (perhaps \0 and \001 or \376 and \377). Then putting together a message is kind of like a text file broken up into records and fields separated by spaces and newlines. Or for that matter, you could use spaces and newlines, if your normal data doesn't include these.
I find this module very valuable for framing data in socket-based protocols:
http://stromberg.dnsalias.org/~strombrg/bufsock.html
It lets you do things like "read up until the next null byte" or "read the next 10 characters" - without needing to worry about the complexities of IP aggregating or splitting packets.

How to read JSON from socket in python? (Incremental parsing of JSON)

I have a socket opened and I'd like to read some JSON data from it. The problem is that the json module from the standard library can only parse from strings (load just reads the whole file and calls loads inside). It even looks like, all the way inside the module, everything depends on the parameter being a string.
This is a real problem with sockets, since you can never read it all into a string, and you don't know how many bytes to read before you actually parse it.
So my questions are: Is there a (simple and elegant) workaround? Is there another json library that can parse data incrementally? Is it worth writing it myself?
Edit: It is the XBMC JSON-RPC API. There are no message envelopes, and I have no control over the format. Each message may be on a single line or on several lines.
I could write a simple parser that needs only a getc function in some form and feed it using s.recv(1), but this doesn't seem like a very Pythonic solution and I'm a little lazy to do that :-)
Edit: given that you aren't defining the protocol, this isn't useful, but it might be useful in other contexts.
Assuming it's a stream (TCP) socket, you need to implement your own message framing mechanism (or use an existing higher level protocol that does so). One straightforward way is to define each message as a 32-bit integer length field, followed by that many bytes of data.
Sender: take the length of the JSON packet, pack it into 4 bytes with the struct module, send it on the socket, then send the JSON packet.
Receiver: Repeatedly read from the socket until you have at least 4 bytes of data, use struct.unpack to unpack the length. Read from the socket until you have at least that much data and that's your JSON packet; anything left over is the length for the next message.
If at some point you're going to want to send messages that consist of something other than JSON over the same socket, you may want to send a message type code between the length and the data payload; congratulations, you've invented yet another protocol.
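A sketch of that scheme (send_json, recv_json, and recv_exactly are illustrative names; the 4-byte big-endian length prefix matches the description above):

import json
import struct

def recv_exactly(sock, n):
    # loop until we have n bytes or the peer closes the connection
    chunks = []
    while n > 0:
        chunk = sock.recv(n)
        if not chunk:
            raise EOFError("socket closed mid-message")
        chunks.append(chunk)
        n -= len(chunk)
    return ''.join(chunks)

def send_json(sock, obj):
    payload = json.dumps(obj)
    sock.sendall(struct.pack('>I', len(payload)) + payload)

def recv_json(sock):
    (length,) = struct.unpack('>I', recv_exactly(sock, 4))
    return json.loads(recv_exactly(sock, length))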
Another, slightly more standard, method is DJB's Netstrings protocol; it's very similar to the system proposed above, but with text-encoded lengths instead of binary; it's directly supported by frameworks such as Twisted.
If you're getting the JSON from an HTTP stream, use the Content-Length header to get the length of the JSON data. For example:
import httplib
import json

h = httplib.HTTPConnection('graph.facebook.com')
h.request('GET', '/19292868552')
response = h.getresponse()
content_length = int(response.getheader('Content-Length', '0'))

# Read data until we've read Content-Length bytes or the socket is closed
data = ''
while content_length == 0 or len(data) < content_length:
    # with no Content-Length header, just read in chunks until EOF
    s = response.read(content_length - len(data) if content_length else 4096)
    if not s:
        break
    data += s

# We now have the full data -- decode it
j = json.loads(data)
print j
What you want(ed) is ijson, an incremental json parser.
It is available here: https://pypi.python.org/pypi/ijson/ . The usage should be as simple as (copying from that page):
import ijson.backends.python as ijson

for item in ijson.items(file_obj):
    # ...
(For those who prefer something self-contained, in the sense that it relies only on the standard library: yesterday I wrote a small wrapper around json, but just because I didn't know about ijson. It is probably much less efficient.)
EDIT: since I found out that in fact (a cythonized version of) my approach was much more efficient than ijson, I have packaged it as an independent library - see here also for some rough benchmarks: http://pietrobattiston.it/jsaone
Do you have control over the json? Try writing each object as a single line. Then do a readline call on the socket as described here.
infile = sock.makefile()
while True:
    line = infile.readline()
    if not line:
        break
    # ...
    result = json.loads(line)
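If you also control the sender, the matching send side is just one line (a sketch; json.dumps never emits raw newlines unless you ask for indented output):

sock.sendall(json.dumps(obj) + "\n")  # one JSON object per line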
Skimming the XBMC JSON RPC docs, I think you want an existing JSON-RPC library - you could take a look at:
http://www.freenet.org.nz/dojo/pyjson/
If that's not suitable for whatever reason, it looks to me like each request and response is contained in a JSON object (rather than a loose JSON primitive that might be a string, array, or number), so the envelope you're looking for is the '{ ... }' that defines a JSON object.
I would, therefore, try something like (pseudocode):
while not dead:
    read from the socket and append it to a string buffer
    set a depth counter to zero
    walk each character in the string buffer:
        if you encounter a '{':
            increment depth
        if you encounter a '}':
            decrement depth
            if depth is zero:
                remove what you have read so far from the buffer
                pass that to json.loads()
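In runnable form that might look like this (a sketch of the pseudocode above; caveat: a '{' or '}' inside a JSON string literal would confuse the counter, so a real implementation must also track string and escape state):

import json

def iter_json_objects(sock):
    # Yield each complete top-level JSON object as it arrives.
    buf = ""
    while True:
        chunk = sock.recv(4096)
        if not chunk:
            return
        buf += chunk
        depth = 0
        consumed = 0
        for i, ch in enumerate(buf):
            if ch == '{':
                depth += 1
            elif ch == '}':
                depth -= 1
                if depth == 0:
                    # a complete top-level object ends here
                    yield json.loads(buf[consumed:i + 1])
                    consumed = i + 1
        buf = buf[consumed:]  # keep any trailing partial object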
You may find JSON-RPC useful for this situation. It is a remote procedure call protocol that should allow you to call the methods exposed by the XBMC JSON-RPC. You can find the specification on Trac.
res = str(s.recv(4096), 'utf-8')  # Getting a response as a string
res_lines = res.splitlines()      # Split the string into a list of lines
last_line = res_lines[-1]         # Normally, the last one is the json data
pair = json.loads(last_line)
https://github.com/A1vinSmith/arbitrary-python/blob/master/sockets/loopHost.py
