Try-except script hangs instead of giving an error - python

I'm having trouble writing a script for reading some temperature sensors (DS18B20) on a Raspberry Pi.
I have a working script, but sometimes the sensors drop out and then the script also stops.
I'm trying to make a more robust version by adding a try-except statement. The goal is to proceed to the next sensor in the range if one of the sensors doesn't respond. If I emulate sensor failure by unplugging one of the sensors, the script stops taking measurements for all the sensors (instead of only for the sensor that has been unplugged), and it doesn't give me an error. Any ideas?
This is the part of the script with the try statement:
if time.time() <= timeout:
    for index in range(numsensors):
        try:
            def read_temp_raw():  # gets the temps one by one
                f = open(device_file[index], 'r')
                lines = f.readlines()
                f.close()
                return lines

            def read_temp():  # checks the received temperature for errors
                lines = read_temp_raw()
                while lines[0].strip()[-3:] != 'YES':
                    time.sleep(0.2)
                    lines = read_temp_raw()
                equals_pos = lines[1].find('t=')
                if equals_pos != -1:
                    temp_string = lines[1][equals_pos+2:]
                    # set proper decimal place for deg C
                    temp = float(temp_string) / 1000.0
                    # Round temp to x decimal points --> round(temp, x)
                    temp = round(temp, 2)
                    return temp

            reading = read_temp()
            temp[index].append(reading)
            print device[index], "=", temp[index]
            continue
        except IOError:
            print "Error"

"What has been asked" inventory:
Is using try-except construct making underlying system more
robust?
Why is the code not giving any error on indicated sensor-failure?
A1:
A try-except clause sounds like a self-explanatory, life-saving package, but it is not.
You have to understand exactly which exception types the code can meet and how each of them should be handled. Naive or erroneous use of this construct effectively masks all the other exceptions from your debugging radar, leaving the unhandled cases to fail in dark silence, out of your control and without you ever knowing about them. True robustness and failure resilience are something else entirely.
This code sample hides every real-life collision except the single one listed, the IOError; if that particular exception does not happen, none of the others that do happen are handled:
if time.time() <= timeout:                  # START if .time() is still before a T/O
    for index in range(numsensors):         #   ITERATE over all sensors
        try:                                #     TRY:
            <<<something>>>                 #       <<<something>>>
        except IOError:                     #     EXC IOError:
            <<<IOError__>>>                 #       Handle EXC.IOError
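As a side note, and only as a hedged sketch (this is not part of the posted code): if the intent is to keep scanning the remaining sensors but still see what actually went wrong, the other exception types can be caught and reported explicitly instead of being silently masked:

for index in range(numsensors):
    try:
        reading = read_temp(index)                         # assumed: read_temp() reworked to take the index
        temp[index].append(reading)
    except IOError as e:
        print "I/O error on sensor", index, ":", e         # the case the original code expects
    except Exception as e:
        print "Unexpected error on sensor", index, ":", e  # anything else: report it, don't hide it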
Guess all your def ...(..):-s belong in a non-repeating section of the code, prior to the if:/for:, as you do not need to redefine the functions "on-the-fly" on every iteration, do you?
def read_temp_raw():                        # DEF: gets the temps one by one
    f = open(device_file[index], 'r')       #   SET aFileHANDLE access to IO.DEV ( beware var index "visibility" )
    lines = f.readlines()                   #   IO.DEV.READ till <EoF>
    f.close()                               #   IO.DEV.CLOSE
    return lines                            #   RET all lines

def read_temp():                            # DEF: checks the received temperature for errors
    lines = read_temp_raw()                 #   GET lines from read_temp_raw()
    while lines[0].strip()[-3:] != 'YES':   #   WHILE last-3Bytes of 1st-line != "YES"
        time.sleep(0.2)                     #     NOP/Sleep()
        lines = read_temp_raw()             #     GET lines again (beware var index)
    equals_pos = lines[1].find('t=')        #   SET position of 't=' in 2nd-line
    if equals_pos != -1:                    #   IF position != -1
        temp_string = lines[1][equals_pos+2:]
        temp = float(temp_string) / 1000.0  #     DIV( 1000 ) decimal place for deg C
        temp = round(temp, 2)               #     ROUND temp to x decimal points --> round(temp, x)
        return temp                         #     RET->
    # -----------------------------         #   ELSE: re-loop in WHILE
    # --------------------------------      #   LOOP AGAIN AD INFIMUM
A2: The try-except clause in the posted code expects only one kind of exception to be handled by it, the IOError. An IOError is instantiated only when an actual IO.DEV operation fails for an I/O-related reason. Physically un-plugging a sensor is not such a case: the IO.DEV is still present and can still carry out its IO.DEV.READ(s), so no exceptions.EnvironmentError.IOError is ever raise-d.
That means the IO.DEV.READ(s) take place and, as the condition WHILE last-3Bytes of 1st-line dictates, the code ends up in an endless loop, because the 1st-line "still" does not end with "YES".
Q.E.D.
The Goal focus
Coming back to the issue, you would rather set up a safer test for the real-world case where an erroneous input can appear during your sensor-network scan.
The principle may look like this:
    f.close()                               # IO.DEV.CLOSE
    if len(lines) < 2:                      # IF <<lines>> are nonsense:
        return ["NULL_LENGTH_READING still with CODE EXPECTED ACK-SIG-> YES",
                "SIG ERROR FOR POST-PROCESSOR WITH AN UNREALISTIC VALUE t=-99999999"]
    return lines                            # OTHERWISE RET( lines )
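A minimal sketch that combines both ideas, assuming the same device_file list and a made-up max_retries limit (read_temp_safe is not in the original code): the read gives up after a few failed attempts instead of spinning forever, so the caller can skip that sensor and move on:

def read_temp_safe(index, max_retries=5):               # hypothetical helper, not in the original code
    """Return a rounded temperature, or None if the sensor keeps failing."""
    for attempt in range(max_retries):
        f = open(device_file[index], 'r')
        lines = f.readlines()
        f.close()
        if len(lines) >= 2 and lines[0].strip()[-3:] == 'YES':
            equals_pos = lines[1].find('t=')
            if equals_pos != -1:
                return round(float(lines[1][equals_pos + 2:]) / 1000.0, 2)
        time.sleep(0.2)                                  # brief pause before re-reading
    return None                                          # persistent failure: caller can skip this sensor

With a sentinel like None, the for index loop can simply test the result and continue to the next sensor instead of blocking the whole scan.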

Related

Python how to continue read output even an error occurred

I have a function to read the output from an Arduino. The output looks like this:
['stopped','15.00','25.44','40.5','nan','off','off','on','50.00','yes','on','45']
['version1.0']
['3.0','0.1']
The first line includes temperatures at different places, and I need the fourth one. The second line is meaningless output, but I need to keep sending the command that produces it so the device stays running. The third line is position and force.
I need to keep sending a command to the device in order to keep the heaters on, or the device will automatically shut down if it receives no command within 10 s. Therefore, I add the command
ser.write(b"y\n")
The function:
def read_output(ser, verbose=True):
    position = []
    force = []
    temp = []
    while ser.inWaiting() > 0:
        line = ser.readline().strip()
        line = ''.join(line.decode()[1:-1])
        line = line.split(' ')
        ser.write(b"y\n")       # meaningless command, prints "version1.0"
        print(line)
        if verbose:
            print(line)
        if len(line) == 2:
            position += [float(line[0])]
            force += [float(line[1])]
        if len(line) == 12:
            temp += [float(line[3])]
            if int(float(line[3])) > 50:
                break
        time.sleep(0.001)       # no busy-wait, reduce CPU
    return position, force, temp
I collect the data with p, f, temp = read_output(ser) and save it to a CSV file at the end.
The code is sometimes interrupted by misplaced output, with an error like this:
['version1.0<3.0','0.1']
ValueError: could not convert string to float: 'version1.0<3.0'
It looks like the second and the third line overlap. I guess this could be because ['version1.0'] and the position & force output ['3.0','1.0'] were printed at the same time.
When this happens, I cannot get the output. Can anyone help me skip this error and continue running the code?
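One way to skip the garbled line and keep reading, sketched under the assumption that a malformed line can simply be discarded (parse_line is a made-up helper name):

def parse_line(line, position, force, temp):
    """Append the parsed values from one split line; return False if the line is garbled."""
    try:
        if len(line) == 2:
            position.append(float(line[0]))
            force.append(float(line[1]))
        elif len(line) == 12:
            temp.append(float(line[3]))
        return True
    except ValueError:
        # e.g. an overlapped read such as ['version1.0<3.0', '0.1'] -- just skip it
        return False

Inside the while loop, the if blocks become a single parse_line(line, position, force, temp) call, and a bad line no longer stops the whole read.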

Results won't be written into the txt file

Even though I'm using file.flush() at the end, no data is written to the txt file.
# some essential code to connect to the server
while True:
    try:
        # do some stuff
        try:
            gaze_positions = filtered_surface['gaze_on_srf']
            for gaze_pos in gaze_positions:
                norm_gp_x, norm_gp_y = gaze_pos['norm_pos']
                if (0 <= norm_gp_x <= 1 and 0 <= norm_gp_y <= 1):
                    with open('/the/path/to/the/file.txt', 'w') as file:
                        file.write('[' + norm_gp_x + ', ' + norm_gp_y + ']')
                        file.flush()
                    print(norm_gp_x, norm_gp_y)
        except:
            pass
    except KeyboardInterrupt:
        break
What am I doing wrong? Obviously I'm missing something, but I can't figure out what it is. Another odd thing: there's not even any output from print(norm_gp_x, norm_gp_y). If I comment out the with open ... block, I do get the output.
got it:
First
if (0 <= norm_gp_x <= 1 and 0 <= norm_gp_y <= 1):
then:
file.write('[' + norm_gp_x + ', ' + norm_gp_y + ']')
So you're adding strings and numbers. This triggers an exception, and since you used a universal except: pass construct, the code silently skips every iteration. (Note that this bare except also swallows the KeyboardInterrupt exception you're trying to catch at the higher level, so that doesn't work either.)
Never use that construct. If you want to protect against a specific exception (e.g. IOError), use:
except IOError as e:
    print("Warning: got exception {}".format(e))
so your exception handling is 1) focused and 2) verbose. Wait until you actually encounter exceptions that you want to ignore before ignoring them, and do so selectively (read Catch multiple exceptions in one line (except block)).
So the fix for your write is:
file.write('[{},{}]'.format(norm_gp_x, norm_gp_y))
or using the list representation since you're trying to mimic it:
file.write(str([norm_gp_x, norm_gp_y]))
Aside: your other issue is that you should use append mode:
with open('/the/path/to/the/file.txt', 'a') as file:
or move your open statement before the loop; otherwise you'll only get the last line in the file (a classic), since w mode truncates the file when opening. And you can drop flush, since exiting the context closes the file.
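Putting those pieces together, a minimal sketch of the corrected inner block (append mode, a formatted write, and a narrow handler; catching KeyError here is an assumption about what can realistically fail in the dictionary lookups):

try:
    gaze_positions = filtered_surface['gaze_on_srf']
    for gaze_pos in gaze_positions:
        norm_gp_x, norm_gp_y = gaze_pos['norm_pos']
        if 0 <= norm_gp_x <= 1 and 0 <= norm_gp_y <= 1:
            with open('/the/path/to/the/file.txt', 'a') as out:   # append, don't truncate
                out.write('[{}, {}]\n'.format(norm_gp_x, norm_gp_y))
            print(norm_gp_x, norm_gp_y)
except KeyError as e:
    print("Warning: missing key {}".format(e))                    # focused and verbose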

Python - Perform file check based on format of 3 values then perform tasks

All,
I am trying to write a Python script that will go through a crime file and separate the file based on the following items: UPDATES, INCIDENTS, and ARRESTS. The reports I receive show these sections either as listed above or as **UPDATES**, **INCIDENTS**, or **ARRESTS**. I have already started writing the following script to separate the files based on the format with the **. However, is there a better way to check the files for both formats at the same time? Also, sometimes there is no UPDATES or ARRESTS section, which causes my code to break. Is there a check I can do for this case, and if so, how can I still get the INCIDENTS section without the other two?
with open('CrimeReport20150518.txt', 'r') as f:
    content = f.read()
print content.index('**UPDATES**')
print content.index('**INCIDENTS**')
print content.index('**ARRESTS**')
updatesLine = content.index('**UPDATES**')
incidentsLine = content.index('**INCIDENTS**')
arrestsLine = content.index('**ARRESTS**')
#print content[updatesLine:incidentsLine]
updates = content[updatesLine:incidentsLine]
#print updates
incidents = content[incidentsLine:arrestsLine]
#print incidents
arrests = content[arrestsLine:]
print arrests
You are currently using .index() to locate the headings in the text. The documentation states:
Like find(), but raise ValueError when the substring is not found.
That means that you need to catch the exception in order to handle it. For example:
try:
    updatesLine = content.index('**UPDATES**')
    print "Found updates heading at", updatesLine
except ValueError:
    print "Note: no updates"
    updatesLine = -1
From here you can determine the correct indexes for slicing the string based on which sections are present.
Alternatively, you could use the .find() method referenced in the documentation for .index().
Return -1 if sub is not found.
Using find you can just test the value it returned.
updatesLine = content.find('**UPDATES**')
# the following is straightforward, but unwieldy
if updatesLine != -1:
    if incidentsLine != -1:
        updates = content[updatesLine:incidentsLine]
    elif arrestsLine != -1:
        updates = content[updatesLine:arrestsLine]
    else:
        updates = content[updatesLine:]
Either way, you'll have to deal with all combinations of which sections are and are not present to determine the correct slice boundaries.
I would prefer to approach this using a state machine. Read the file line by line and add each line to the appropriate list; when a header is found, update the state. Here is an untested demonstration of the principle:
data = {
    'updates': [],
    'incidents': [],
    'arrests': [],
}

state = None
with open('CrimeReport20150518.txt', 'r') as f:
    for line in f:
        if line.strip() == '**UPDATES**':
            state = 'updates'
        elif line.strip() == '**INCIDENTS**':
            state = 'incidents'
        elif line.strip() == '**ARRESTS**':
            state = 'arrests'
        else:
            if state is None:
                print "Warn: no header seen; skipping line"
            else:
                data[state].append(line)

print ''.join(data['arrests'])
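The same state machine can also accept both heading formats the question mentions (with and without the asterisks) by normalising each line before comparing it; a minimal sketch, assuming the headings sit on their own lines:

HEADINGS = {'UPDATES': 'updates', 'INCIDENTS': 'incidents', 'ARRESTS': 'arrests'}

def heading_state(line):
    """Return the state name if the line is a heading in either format, else None."""
    key = line.strip().strip('*').strip()
    return HEADINGS.get(key)

Inside the loop, new_state = heading_state(line) followed by if new_state: state = new_state replaces the three elif tests, and a missing section simply ends up as an empty list.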
Try using content.find() instead of content.index(). Instead of breaking when the string isn't there, it returns -1. Then you can do something like this:
updatesLine = content.find('**UPDATES**')
incidentsLine = content.find('**INCIDENTS**')
arrestsLine = content.find('**ARRESTS**')
if incidentsLine != -1 and arrestsLine != -1:
    # Do what you normally do
    updatesLine = content.index('**UPDATES**')
    incidentsLine = content.index('**INCIDENTS**')
    arrestsLine = content.index('**ARRESTS**')
    updates = content[updatesLine:incidentsLine]
    incidents = content[incidentsLine:arrestsLine]
    arrests = content[arrestsLine:]
elif incidentsLine != -1:
    # Do whatever you need to do to files that don't have an arrests section here
    pass
elif arrestsLine != -1:
    # Handle files that don't have an incidents section here
    pass
else:
    # Handle files that are missing both
    pass
Probably you'll need to handle all four possible combinations slightly differently.
Your solution generally looks OK to me, as long as the sections always come in the same order and the files don't get too big. You can get real feedback at Stack Exchange's Code Review: https://codereview.stackexchange.com/

Python select used in python program

I was reading a proxy server developed using Python.
I don't understand the method def _read_write, which uses select to read from and write to the client and server sockets.
def _read_write(self):
    time_out_max = self.timeout/3
    socs = [self.client, self.target]
    count = 0
    while 1:
        count += 1
        (recv, _, error) = select.select(socs, [], socs, 3)
        if error:
            break
        if recv:
            for in_ in recv:
                data = in_.recv(BUFLEN)
                if in_ is self.client:
                    out = self.target
                else:
                    out = self.client
                if data:
                    out.send(data)
                    count = 0
        if count == time_out_max:
            break
Please can someone help me understand it?
Here is my quick and dirty annotation:
def _read_write(self):
    # This allows us to get multiple
    # lower-level timeouts before we give up.
    # (but see later note about Python 3)
    time_out_max = self.timeout/3

    # We have two sockets we care about
    socs = [self.client, self.target]

    # Loop until error or timeout
    count = 0
    while 1:
        count += 1

        # select is very efficient. It will let
        # other processes execute until we have
        # data or an error.
        # We only care about receive and error
        # conditions, so we pass in an empty list
        # for transmit, and assign transmit results
        # to the _ variable to ignore.
        # We also pass a timeout of 3 seconds, which
        # is why it's OK to divide the timeout value
        # by 3 above.
        # Note that select doesn't read anything for
        # us -- it just blocks until data is ready.
        (recv, _, error) = select.select(socs, [], socs, 3)

        # If we have an error, break out of the loop
        if error:
            break

        # If we have receive data, it's from the client
        # for the target, or the other way around, or
        # even both. Loop through and deal with whatever
        # receive data we have and send it to the other
        # port.
        # BTW, "if recv" is redundant here -- (a) in
        # general (except for timeouts) we'll have
        # receive data here, and (b) the for loop won't
        # execute if we don't.
        if recv:
            for in_ in recv:
                # Read data up to a max of BUFLEN
                data = in_.recv(BUFLEN)

                # Dump the data out the other side.
                # Indexing probably would have been
                # more efficient than this if/else
                if in_ is self.client:
                    out = self.target
                else:
                    out = self.client

                # I think this may be a bug. IIRC,
                # send is not required to send all the
                # data, but I don't remember and cannot
                # be bothered to look it up right now.
                if data:
                    out.send(data)
                    # Reset the timeout counter.
                    count = 0

        # This is ugly -- should be >=, then it might
        # work even on Python 3...
        if count == time_out_max:
            break

    # We're done with the loop and exit the function on
    # either a timeout or an error.
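On the possible bug flagged above: socket.send() returns the number of bytes actually written and may write fewer than requested, whereas socket.sendall() keeps sending until everything has gone out (or raises on error). A safer version of that branch would therefore be:

if data:
    out.sendall(data)   # unlike send(), sendall() retries until all bytes are written
    count = 0           # reset the timeout counter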

Python string manipulation -- performance problems

I have the following piece of code that I execute around 2 million times in my application to parse that many records. This part seems to be the bottleneck and I was wondering if anyone could help me by suggesting some nifty tricks that could make these simple string manipulations faster.
try:
    data = []
    start = 0
    end = 0
    for info in self.Columns():
        end = start + (info.columnLength)
        slice = line[start:end]
        if slice == '' or len(slice) != info.columnLength:
            raise 'Wrong Input'
        if info.hasSignage:
            if (slice[0:1].strip() != '+' and slice[0:1].strip() != '-'):
                raise 'Wrong Input'
        if not info.skipColumn:
            data.append(slice)
        start = end
    parsedLine = data
except:
    parsedLine = False
def fubarise(data):
    try:
        if nasty(data):
            raise ValueError("Look, Ma, I'm doing a big fat GOTO ...")  # sheesh #1
        more_of_the_same()
        parsed_line = data
    except ValueError:
        parsed_line = False
    # so it can be a "data" or False -- sheesh #2
    return parsed_line
There is no point in having different error messages in the raise statement; they are never seen. Sheesh #3.
Update: Here is a suggested improvement which uses struct.unpack to partition input lines rapidly. It also illustrates better exception handling, under the assumption that the writer of the code is also running it and stopping on the first error is acceptable. A robust implementation which logs all errors in all columns of all lines for a user audience is another matter. Note that typically the error checking for each column would be much more extensive e.g. checking for a leading sign but not checking whether the column contains a valid number seems a little odd.
import struct

def unpacked_records(self):
    cols = self.Columns()
    unpack_fmt = ""
    sign_checks = []
    start = 0
    for colx, info in enumerate(cols, 1):
        clen = info.columnLength
        if clen < 1:
            raise ValueError("Column %d: Bad columnLength %r" % (colx, clen))
        if info.skipColumn:
            unpack_fmt += str(clen) + "x"
        else:
            unpack_fmt += str(clen) + "s"
            if info.hasSignage:
                sign_checks.append(start)
        start += clen
    expected_len = start
    unpack = struct.Struct(unpack_fmt).unpack
    for linex, line in enumerate(self.whatever_the_list_of_lines_is, 1):
        if len(line) != expected_len:
            raise ValueError(
                "Line %d: Actual length %d, expected %d"
                % (linex, len(line), expected_len))
        if not all(line[i] in '+-' for i in sign_checks):
            raise ValueError("Line %d: At least one column fails sign check" % linex)
        yield unpack(line)  # a tuple
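To make the format-string idea concrete, a tiny made-up example: "5s3x4s" keeps a 5-byte column, skips 3 pad bytes, and keeps a 4-byte column, so a single unpack call slices a fixed-width line at C speed:

import struct

unpack = struct.Struct("5s3x4s").unpack   # keep 5 bytes, skip 3, keep 4
print(unpack("+12.3abc-999"))             # -> ('+12.3', '-999')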
what about (using some classes to have an executable example):
class Info(object):
    columnLength = 5
    hasSignage = True
    skipColumn = False

class Something(object):
    def Columns(self):
        return [Info()]*4

    def bottleneck(self):
        try:
            data = []
            start = 0
            end = 0
            line = '+this-is just a line for testing'
            for info in self.Columns():
                start = end
                collength = info.columnLength
                end = start + collength
                if info.skipColumn:           # start with this
                    continue
                elif collength == 0:
                    raise ValueError('Wrong Input')
                slice = line[start:end]       # only now slicing, because it
                                              # is probably the most expensive part
                if len(slice) != collength:
                    raise ValueError('Wrong Input')
                elif info.hasSignage and slice[0] not in '+-':  # bit more compact
                    raise ValueError('Wrong Input')
                else:
                    data.append(slice)
            parsedLine = data
        except:
            parsedLine = False

Something().bottleneck()
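Since the question is about speed, a quick way to compare variants like this one against the original is the standard-library timeit module (the setup string and repetition count here are only illustrative):

import timeit

setup = "from __main__ import Something; s = Something()"
print(timeit.timeit("s.bottleneck()", setup=setup, number=100000))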
edit:
when the length of slice is 0, slice[0] does not exist, so collength == 0 has to be checked first
edit2:
You are using this bit of code for many, many lines, but the column info does not change, right? That allows you to:
pre-calculate a list of start points for each column (no more need to calculate start and end on every line)
knowing start and end in advance, .Columns() only needs to return columns that are not skipped and have a columnLength > 0 (or do you really need to raise on length == 0 for each line?)
the mandatory length of each line is known, is the same for every line, and can be checked once before looping over the column infos
edit3:
I wonder how you will know what data index belongs to which column if you use 'skipColumn'...
EDIT: I'm changing this answer a bit. I'll leave the original answer below.
In my other answer I commented that the best thing would be to find a built-in Python module that would do the unpacking. I couldn't think of one, but perhaps I should have searched for one. @John Machin provided an answer that showed how to do it: use the Python struct module. Since that is written in C, it should be faster than my pure Python solution. (I haven't actually measured anything, so this is a guess.)
I do agree that the logic in the original code is "un-Pythonic". Returning a sentinel value isn't best; it's better to either return a valid value or raise an exception. The other way to do it is to return a list of valid values plus another list of invalid values. Since @John Machin offered code to yield up valid values, I thought I'd write a version here that returns two lists.
NOTE: Perhaps the best possible answer would be to take @John Machin's answer and modify it to save the invalid values to a file for possible later review. His answer yields up records one at a time, so there is no need to build a large list of parsed records; and saving the bad lines to disk means there is no need to build a possibly-large list of bad lines.
import struct

def parse_records(self):
    """
    returns a tuple: (good, bad)
    good is a list of valid records (as tuples)
    bad is a list of tuples: (line_num, line, err)
    """
    cols = self.Columns()
    unpack_fmt = ""
    sign_checks = []
    start = 0
    for colx, info in enumerate(cols, 1):
        clen = info.columnLength
        if clen < 1:
            raise ValueError("Column %d: Bad columnLength %r" % (colx, clen))
        if info.skipColumn:
            unpack_fmt += str(clen) + "x"
        else:
            unpack_fmt += str(clen) + "s"
            if info.hasSignage:
                sign_checks.append(start)
        start += clen
    expected_len = start
    unpack = struct.Struct(unpack_fmt).unpack

    good = []
    bad = []
    for line_num, line in enumerate(self.whatever_the_list_of_lines_is, 1):
        if len(line) != expected_len:
            bad.append((line_num, line, "bad length"))
            continue
        if not all(line[i] in '+-' for i in sign_checks):
            bad.append((line_num, line, "sign check failed"))
            continue
        good.append(unpack(line))
    return good, bad
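A possible call site, along the lines of the note above about saving the bad lines for later review (the parser object, log file name, and process() are made up for illustration):

good, bad = parser.parse_records()

with open('bad_lines.log', 'w') as logf:
    for line_num, line, err in bad:
        logf.write("line %d (%s): %r\n" % (line_num, err, line))

for record in good:
    process(record)     # whatever the downstream handling of a valid record is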
ORIGINAL ANSWER TEXT:
This answer should be a lot faster if the self.Columns() information is identical over all the records. We do the processing of the self.Columns() information one time, and build a couple of lists that contain just what we need to process a record.
This code shows how to compute parsedLine but doesn't actually yield it up or return it or do anything with it. Obviously you would need to change that.
def parse_records(self):
    cols = self.Columns()
    slices = []
    sign_checks = []
    start = 0
    for info in cols:
        if info.columnLength < 1:
            raise ValueError, "bad columnLength"
        end = start + info.columnLength
        if not info.skipColumn:
            tup = (start, end)
            slices.append(tup)
            if info.hasSignage:
                sign_checks.append(start)
        start = end
    expected_len = end  # or use (end - 1) to not count a newline

    try:
        for line in self.whatever_the_list_of_lines_is:
            if len(line) != expected_len:
                raise ValueError, "wrong length"
            if not all(line[i] in '+-' for i in sign_checks):
                raise ValueError, "wrong input"
            parsedLine = [line[s:e] for s, e in slices]
    except ValueError:
        parsedLine = False
Don't compute start and end every time through this loop.
Compute them exactly once, prior to using self.Columns(). (Whatever that is; if Columns is a class with static values, that's silly. If it's a function, a name that begins with a capital letter is confusing.)
if slice == '' or len(slice) != info.columnLength can only happen if line is too short compared to the total size required by Columns. Check that once, outside the loop.
slice[0:1].strip() != '+' sure looks like a job for .startswith().
if not info.skipColumn: apply this filter before even starting the loop, by removing those entries from self.Columns().
The first thing I would consider is slice = line[start:end]. Slicing creates new string objects; you could try to avoid explicitly constructing line[start:end] and examine its contents in place instead.
Why are you doing slice[0:1]? This yields a subsequence containing a single item of slice, so it can probably be checked more efficiently.
I want to tell you to use some sort of built-in Python feature to split the string, but I can't think of one. So I'm left with just trying to reduce the amount of code you have.
When we are done, end should be pointing at the end of the string; if that is the case, then all of the .columnLength values must have been okay. (Unless one was negative or something!)
Since this has a reference to self, it must be a snippet from a member function. So, instead of raising exceptions, you could just return False to exit the function early and return an error flag. But I like the debugging potential of changing the except clause so it no longer catches the exception, and getting a stack trace letting you identify where the problem came from.
@Remi used slice[0] in '+-' where I used slice.startswith(('+', '-')). I think I like @Remi's code better there, but I left mine unchanged just to show you a different way. The .startswith() way would also work for prefixes longer than one character, but since this is only a string of length 1 the terse solution works.
try:
    line = line.strip('\n')
    data = []
    start = 0
    for info in self.Columns():
        end = start + info.columnLength
        slice = line[start:end]
        if info.hasSignage and not slice.startswith(('+', '-')):
            raise ValueError, "wrong input"
        if not info.skipColumn:
            data.append(slice)
        start = end
    if end != len(line):
        raise ValueError, "bad .columnLength"
    parsedLine = data
except ValueError:
    parsedLine = False
