Results won't be written into the txt file - python

Even though I'm calling file.flush() at the end, no data gets written to the txt file.
# some essential code to connect to the server
while True:
    try:
        # do some stuff
        try:
            gaze_positions = filtered_surface['gaze_on_srf']
            for gaze_pos in gaze_positions:
                norm_gp_x, norm_gp_y = gaze_pos['norm_pos']
                if (0 <= norm_gp_x <= 1 and 0 <= norm_gp_y <= 1):
                    with open('/the/path/to/the/file.txt', 'w') as file:
                        file.write('[' + norm_gp_x + ', ' + norm_gp_y + ']')
                        file.flush()
                        print(norm_gp_x, norm_gp_y)
        except:
            pass
    except KeyboardInterrupt:
        break
What am I doing wrong? Obviously I'm missing something, but I can't figure out what it is. Another odd thing: there's no output at all from print(norm_gp_x, norm_gp_y). If I comment out the with open ... block, I do get the output.

got it:
First you have:
if (0 <= norm_gp_x <= 1 and 0 <= norm_gp_y <= 1):
then:
file.write('[' + norm_gp_x + ', ' + norm_gp_y + ']')
So you're concatenating strings with numbers. This raises an exception, and since you used a catch-all except: pass construct, the code silently skips the rest of every iteration. (Note that this bare except also catches the KeyboardInterrupt you're trying to catch at a higher level, so that doesn't work either.)
Never use that construct. If you want to guard against a specific exception (e.g. IOError), use:
except IOError as e:
    print("Warning: got exception {}".format(e))
so your exception handling is 1) focused and 2) verbose. Only ignore exceptions once you know exactly which ones you want to ignore, and ignore them selectively (read Catch multiple exceptions in one line (except block)).
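For instance, a minimal sketch of catching several related exceptions in one clause (risky_io is a hypothetical stand-in for the file access above):

try:
    risky_io()  # hypothetical helper that may raise on I/O problems
except (IOError, OSError) as e:
    print("Warning: got exception {}".format(e))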
So the fix for your write is:
file.write('[{},{}]'.format(norm_gp_x, norm_gp_y))
or using the list representation since you're trying to mimic it:
file.write(str([norm_gp_x, norm_gp_y]))
Aside: your other issue is that you should use append mode
with open('/the/path/to/the/file.txt', 'a') as file:
or move your open statement before the loop; otherwise you'll only ever get the last line in the file (a classic), since w mode truncates the file on opening. And you can drop flush, since exiting the with context closes the file.
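Putting the pieces together, a minimal sketch of the corrected loop (assuming filtered_surface is kept up to date by the surrounding server code, as in the question):

# some essential code to connect to the server
with open('/the/path/to/the/file.txt', 'a') as file:  # open once, in append mode
    while True:
        try:
            gaze_positions = filtered_surface['gaze_on_srf']
            for gaze_pos in gaze_positions:
                norm_gp_x, norm_gp_y = gaze_pos['norm_pos']
                if 0 <= norm_gp_x <= 1 and 0 <= norm_gp_y <= 1:
                    # format the floats instead of concatenating them with strings
                    file.write('[{}, {}]\n'.format(norm_gp_x, norm_gp_y))
                    print(norm_gp_x, norm_gp_y)
        except KeyboardInterrupt:
            break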

Related

Reading data from a text file in Python according to the parameters provided

I have a text file something like this
Mqtt_allowed=true
Mqtt_host=192.168.0.1
Mqtt_port=2223
<=============>
cloud_allowed=true
cloud_host=m12.abc.com
cloud_port=1232
<=============>
local_storage=true
local_path=abcd
I need to get each value for the parameter provided by the user.
What I am doing right now is:
def search(param):
    try:
        with open('config.txt') as configuration:
            for line in configuration:
                if not line:
                    continue
                function, f_input = line.split("=")
                if function == param:
                    result = f_input.split()
                    break
                else:
                    result = "0"
    except FileNotFoundError:
        print("File not found: ")
    return result

mqttIsAllowed = search("Mqtt_allowed")
print(mqttIsAllowed)
Now when I call only the Mqtt stuff it works fine, but when I call cloud or anything after the <==========> separator it throws an error. Thanks
Just skip all the lines starting with <:
if not line or line.lstrip().startswith("<"):
    continue
Or, if you really, really want to match the separator exactly:
if line.strip() == "<=============>":
    continue
I think the first variant is better: if someone slightly modifies the separator by accident, the second piece of code won't work at all.
Because you are trying to split on the = character in a style that seems to be standard INI format, it is safe to assume that your pairs will be at max size 2. I'm not a fan of using methods that rely on character checking (unless specifically called for), so give this a whirl:
def search(param):
    result = '0'  # declare here
    try:
        with open('config.txt') as configuration:
            for line in configuration:
                if not line:
                    continue
                f_pair = line.strip().split("=")  # remove \r\n, \n
                if len(f_pair) > 2:  # your separator will be much longer
                    continue
                elif f_pair[0] == param:
                    result = f_pair[1]
                    # result = f_input.split()  # why the 'split()' here?
                    break
    except FileNotFoundError:
        print("File not found: ")
    return result

mqttIsAllowed = search("Mqtt_allowed")
I'm pretty sure the error you were getting was a ValueError: too many values to unpack.
Here is how I know that:
When you call this function for any of the Mqtt_* values, the loop never encounters the separator string <=============>. As soon as you try to call anything below that first separator (for example a cloud_* key), the loop eventually reaches the first separator and tries to execute:
function, f_input = line.split('=')
But that won't work; in fact it will tell you:
ValueError: too many values to unpack (expected 2)
And that is because you are forcing the split() call to unpack into only 2 variables, but a split('=') on your separator string returns a list of 14 elements (a '<', a '>' and 12 ''). Thus, doing what I have posted above ensures that your split('=') still goes off, but checks whether you hit a separator line or not.
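For the record, the element count is easy to verify in an interpreter:

>>> "<=============>".split("=")
['<', '', '', '', '', '', '', '', '', '', '', '', '', '>']
>>> len("<=============>".split("="))
14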

Try-except script hangs instead of giving an error

I'm having trouble writing a script for reading some temperature sensors (DS18B20) on a Raspberry Pi.
I have a working script, but sometimes the sensors drop out and then the script stops as well.
I'm trying to make a more robust version by integrating a try-except statement. The goal is to proceed to the next sensor in the range if one of the sensors doesn't react. If I emulate sensor failure by unplugging one of the sensors, the script stops taking measurements for all the sensors (instead of only for the sensor that has been unplugged). And it doesn't give me an error. Any ideas?
This is the part of the script with the try statement:
if time.time() <= timeout:
    for index in range(numsensors):
        try:
            def read_temp_raw():  # gets the temps one by one
                f = open(device_file[index], 'r')
                lines = f.readlines()
                f.close()
                return lines

            def read_temp():  # checks the received temperature for errors
                lines = read_temp_raw()
                while lines[0].strip()[-3:] != 'YES':
                    time.sleep(0.2)
                    lines = read_temp_raw()
                equals_pos = lines[1].find('t=')
                if equals_pos != -1:
                    temp_string = lines[1][equals_pos+2:]
                    # set proper decimal place for deg C
                    temp = float(temp_string) / 1000.0
                    # Round temp to x decimal points --> round(temp,x)
                    temp = round(temp, 2)
                    return temp

            reading = read_temp()
            temp[index].append(reading)
            print device[index], "=", temp[index]
            continue
        except IOError:
            print "Error"
"What has been asked" inventory:
Is using try-except construct making underlying system more
robust?
Why is the code not giving any error on indicated sensor-failure?
A1:
The try-except clause sounds like a self-explanatory, life-saving package; however, it is not. One has to fully understand what sorts of exception types the code may meet face-to-face and how to handle each of them. Naive or erroneous use of this syntax construct effectively masks the rest of the exceptions from your debugging radar screen, leaving the unhandled cases to fail in dark silence, out of your control and without you knowing about them at all. True "robustness" and "failure resilience" are something else than this.
This code sample leaves all real-life collisions hidden except for the only one listed, IOError; if that one does not happen, all the others that do happen are not handled:
if time.time() <= timeout:              # START if .time() is still before a T/O
    for index in range(numsensors):     #       ITERATE over all sensors
        try:                            #       TRY:
            <<<something>>>             #          <<<something>>>
        except IOError:                 #       EXC IOError:
            <<<IOError__>>>             #          Handle EXC.IOError
Also, all your def ...(..): blocks belong in a non-repeating section of the code, before the if:/for:, since you do not need to re-define the functions "on-the-fly" on every iteration, do you?
def read_temp_raw():                        # DEF: gets the temps one by one
    f = open(device_file[index], 'r')       #      SET aFileHANDLE access to IO.DEV ( beware var index "visibility" )
    lines = f.readlines()                   #      IO.DEV.READ till <EoF>
    f.close()                               #      IO.DEV.CLOSE
    return lines                            #      RET all lines

def read_temp():                            # DEF: checks the received temperature for errors
    lines = read_temp_raw()                 #      GET lines from read_temp_raw()
    while lines[0].strip()[-3:] != 'YES':   #      WHILE last-3Bytes of 1st-line != "YES"
        time.sleep(0.2)                     #            NOP/Sleep()
        lines = read_temp_raw()             #            GET lines again (beware var index)
    equals_pos = lines[1].find('t=')        #      SET position of 't=' in 2nd-line
    if equals_pos != -1:                    #      IF position != -1
        temp_string = lines[1][equals_pos+2:]
        temp = float(temp_string) / 1000.0  #         DIV(1000) decimal place for deg C
        temp = round(temp, 2)               #         ROUND temp to x decimal points --> round(temp,x)
        return temp                         #      RET->
    # ------------------------------------ #      ELSE: re-loop in WHILE
    # ------------------------------------ #      LOOP AGAIN AD INFIMUM
A2: The try-except clause in the posted code expects only one kind of exception to handle -- IOError -- which is raised only when an actual IO.DEV operation fails for an I/O-related reason. Physically "un-plugging" a sensor is not such a case: the IO.DEV is still present and can carry out its IO.DEV.READ(s), so no exceptions.EnvironmentError.IOError is raised.
That means the IO.DEV.READ(s) take place, and the code ends up, as the condition WHILE last-3Bytes of 1st-line dictates, in an endless loop, because the 1st-line "still" does not end with "YES".
Q.E.D.
The Goal focus
Coming back to the issue, you may rather set a safer test for a real-world case, where an erroneous input may appear during your sensor-network scan.
The principle may look like:
f.close()                                # IO.DEV.CLOSE
if len(lines) < 2:                       # IF <<lines>> are nonsense:
    return ["NULL_LENGTH_READING still with CODE EXPECTED ACK-SIG-> YES",
            "SIG ERROR FOR POST-PROCESSOR WITH AN UNREALISTIC VALUE t=-99999999"
            ]
return lines                             # OTHERWISE RET( lines )
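Building on that principle, a minimal sketch of a bounded, failure-tolerant read (the device_file paths are assumed from the question; read_temp() returns None after a fixed number of retries instead of spinning forever on a dead sensor):

import time

MAX_RETRIES = 10  # assumption: ~2 s at 0.2 s per retry is plenty for a live sensor

def read_temp_raw(path):                 # read one sensor's device file
    try:
        with open(path, 'r') as f:
            return f.readlines()
    except IOError:
        return []                        # unreadable device file -> nonsense reading

def read_temp(path):                     # bounded retry instead of an endless WHILE
    for _ in range(MAX_RETRIES):
        lines = read_temp_raw(path)
        if len(lines) >= 2 and lines[0].strip()[-3:] == 'YES':
            equals_pos = lines[1].find('t=')
            if equals_pos != -1:
                return round(float(lines[1][equals_pos + 2:]) / 1000.0, 2)
        time.sleep(0.2)
    return None                          # sensor never ACKed: signal failure

The for index loop can then test whether the reading is None and simply continue to the next sensor instead of hanging.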

How to handle AllServersUnavailable Exception

I wanted to do a simple write operation to a Cassandra instance (v1.1.10) on a single node. I just wanted to see how it handles constant writes and if it can keep up with the write speed.
import string
import random
import sys
import uuid

from pycassa.pool import ConnectionPool
from pycassa.columnfamily import ColumnFamily

pool = ConnectionPool('testdb')
test_cf = ColumnFamily(pool, 'test')
test2_cf = ColumnFamily(pool, 'test2')
test3_cf = ColumnFamily(pool, 'test3')

test_batch = test_cf.batch(queue_size=1000)
test2_batch = test2_cf.batch(queue_size=1000)
test3_batch = test3_cf.batch(queue_size=1000)

chars = string.ascii_uppercase
counter = 0

while True:
    counter += 1
    uid = uuid.uuid1()
    junk = ''.join(random.choice(chars) for x in range(50))
    test_batch.insert(uid, {'junk': junk})
    test2_batch.insert(uid, {'junk': junk})
    test3_batch.insert(uid, {'junk': junk})
    sys.stdout.write(str(counter) + '\n')

pool.dispose()
The code keeps crashing after a long run of writes (when the counter is around 10M+) with the following message
pycassa.pool.AllServersUnavailable: An attempt was made to connect to each of the servers twice, but none of the attempts succeeded. The last failure was timeout: timed out
I set the queue_size=100 which didn't help. Also I fired up the cqlsh -3 console to truncate the table after the script crashed and got the following error:
Unable to complete request: one or more nodes were unavailable.
Tailing /var/log/cassandra/system.log shows no error sign, only INFO lines on Compaction, FlushWriter and so on. What am I doing wrong?
I've had this problem too - as @tyler-hobbs suggested in his comment, the node is likely overloaded (it was for me). A simple fix I've used is to back off and let the node catch up. I've rewritten your loop above to catch the error, sleep a while, and try again. I've run this against a single-node cluster and it works a treat - pausing (for a minute) and backing off periodically (no more than 5 times in a row). No data is missed using this script unless the error throws five times in a row (in which case you probably want to fail hard rather than return to the loop).
while True:
    counter += 1
    uid = uuid.uuid1()
    junk = ''.join(random.choice(chars) for x in range(50))
    tryCount = 5  # 5 is probably unnecessarily high
    while tryCount > 0:
        try:
            test_batch.insert(uid, {'junk': junk})
            test2_batch.insert(uid, {'junk': junk})
            test3_batch.insert(uid, {'junk': junk})
            tryCount = -1  # success, leave the retry loop
        except pycassa.pool.AllServersUnavailable as e:
            print "Trying to insert [" + str(uid) + "] but got error " + str(e) + \
                  " (attempt " + str(tryCount) + "). Backing off for a minute to let Cassandra settle down"
            time.sleep(60)  # A delay of 60s is probably unnecessarily high
            tryCount = tryCount - 1
    sys.stdout.write(str(counter) + '\n')
I've added a complete gist here
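The same back-off pattern can be factored into a small helper; a sketch under the assumption that the batch mutators from the code above are in scope (insert_with_backoff is a hypothetical name, not a pycassa API):

import time
import pycassa

def insert_with_backoff(batch, uid, columns, max_tries=5, delay=60):
    # Retry a batch insert, backing off while the node is overloaded.
    for attempt in range(1, max_tries + 1):
        try:
            batch.insert(uid, columns)
            return                       # inserted, done
        except pycassa.pool.AllServersUnavailable:
            if attempt == max_tries:
                raise                    # fail hard after the last attempt
            time.sleep(delay)            # let Cassandra catch up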

Nested loops with Python cPickle

I would like to be able to have a series of nested loops that use the same pickle file. See below:
import cPickle as pickle

def pickleRead(self):
    try:
        with open(r'myfile', 'rb') as file:
            print 'Reading File...'
            while True:
                try:
                    main = pickle.load(file)
                    id = main[0]
                    text = main[1]
                    while True:
                        try:
                            data = pickle.load(file)
                            data_id = data[0]
                            data_text = data[1]
                            coefficient = Similarity().jaccard(text.split(), data_text.split())
                            if coefficient > 0 and data_text is not None:
                                print str(id) + '\t' + str(data_id) + '\t' + str(coefficient)
                        except EOFError:
                            break
                        except Exception as err:
                            print err
                except EOFError:
                    break
            print 'Done Reading File...'
            file.close()
    except Exception as err:
        print err
The second (inner) loop runs without any problems, but the first one only does a single iteration and then stops. I am trying to grab a single row at a time and then compare it against every other row in the file. There are several thousand rows, and I have found that the cPickle module outperforms anything similar. The problem is that it is limited in what it exposes. Can anyone point me in the right direction?
The inner loop only stops when it hits an EOFError while reading the file, so by the time you get to what would have been the second iteration of the outer loop, you've read the entire file. So trying to read more just gives you another EOFError, and you're out.
First, I should say ben w's answer does explain the behavior you're experiencing.
As for your broader question of "how do I accomplish my task using Python?" I recommend just using a single loop through the file to load all the pickled objects into a data structure in memory (a dictionary with IDs as keys and text as values seems like a natural choice). Once all the objects are loaded, you don't mess with the file at all; just use the in-memory data structure. You can use your existing nested-loop logic, if you like. It might look something like (pseudocode)
for k1 in mydict:
    for k2 in mydict:
        if k1 != k2:
            do_comparison(mydict[k1], mydict[k2])
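Concretely, a sketch of the single-pass load plus in-memory comparison (assuming the same (id, text) record layout and the Similarity().jaccard helper from the question):

import cPickle as pickle

def load_records(path):
    # Single pass: read every pickled (id, text) pair into a dict.
    records = {}
    with open(path, 'rb') as f:
        while True:
            try:
                rec_id, rec_text = pickle.load(f)
            except EOFError:
                break                    # end of file: all records loaded
            records[rec_id] = rec_text
    return records

records = load_records('myfile')
for id1, text1 in records.items():
    for id2, text2 in records.items():
        if id1 != id2 and text2 is not None:
            coefficient = Similarity().jaccard(text1.split(), text2.split())
            if coefficient > 0:
                print('%s\t%s\t%s' % (id1, id2, coefficient))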

Python string manipulation -- performance problems

I have the following piece of code that I execute around 2 million times in my application to parse that many records. This part seems to be the bottleneck and I was wondering if anyone could help me by suggesting some nifty tricks that could make these simple string manipulations faster.
try:
    data = []
    start = 0
    end = 0
    for info in self.Columns():
        end = start + (info.columnLength)
        slice = line[start:end]
        if slice == '' or len(slice) != info.columnLength:
            raise 'Wrong Input'
        if info.hasSignage:
            if (slice[0:1].strip() != '+' and slice[0:1].strip() != '-'):
                raise 'Wrong Input'
        if not info.skipColumn:
            data.append(slice)
        start = end
    parsedLine = data
except:
    parsedLine = False
def fubarise(data):
    try:
        if nasty(data):
            raise ValueError("Look, Ma, I'm doing a big fat GOTO ...")  # sheesh #1
        more_of_the_same()
        parsed_line = data
    except ValueError:
        parsed_line = False
    # so it can be a "data" or False -- sheesh #2
    return parsed_line
There is no point in having different error messages in the raise statement; they are never seen. Sheesh #3.
Update: Here is a suggested improvement which uses struct.unpack to partition input lines rapidly. It also illustrates better exception handling, under the assumption that the writer of the code is also running it and stopping on the first error is acceptable. A robust implementation which logs all errors in all columns of all lines for a user audience is another matter. Note that typically the error checking for each column would be much more extensive e.g. checking for a leading sign but not checking whether the column contains a valid number seems a little odd.
import struct

def unpacked_records(self):
    cols = self.Columns()
    unpack_fmt = ""
    sign_checks = []
    start = 0
    for colx, info in enumerate(cols, 1):
        clen = info.columnLength
        if clen < 1:
            raise ValueError("Column %d: Bad columnLength %r" % (colx, clen))
        if info.skipColumn:
            unpack_fmt += str(clen) + "x"
        else:
            unpack_fmt += str(clen) + "s"
            if info.hasSignage:
                sign_checks.append(start)
        start += clen
    expected_len = start
    unpack = struct.Struct(unpack_fmt).unpack
    for linex, line in enumerate(self.whatever_the_list_of_lines_is, 1):
        if len(line) != expected_len:
            raise ValueError(
                "Line %d: Actual length %d, expected %d"
                % (linex, len(line), expected_len))
        if not all(line[i] in '+-' for i in sign_checks):
            raise ValueError("Line %d: At least one column fails sign check" % linex)
        yield unpack(line)  # a tuple
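Hypothetical usage (obj and process are stand-ins; since unpacked_records is a method, obj is whatever instance carries Columns() and the line list):

for fields in obj.unpacked_records():
    process(fields)  # fields is a tuple of the non-skipped column slices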
what about (using some classes to have an executable example):
class Info(object):
    columnLength = 5
    hasSignage = True
    skipColumn = False

class Something(object):
    def Columns(self):
        return [Info()] * 4

    def bottleneck(self):
        try:
            data = []
            start = 0
            end = 0
            line = '+this-is just a line for testing'
            for info in self.Columns():
                start = end
                collength = info.columnLength
                end = start + collength
                if info.skipColumn:  # start with this
                    continue
                elif collength == 0:
                    raise ValueError('Wrong Input')
                slice = line[start:end]  # only now slicing, because it
                                         # is probably the most expensive part
                if len(slice) != collength:
                    raise ValueError('Wrong Input')
                elif info.hasSignage and slice[0] not in '+-':  # bit more compact
                    raise ValueError('Wrong Input')
                else:
                    data.append(slice)
            parsedLine = data
        except:
            parsedLine = False

Something().bottleneck()
edit:
when the length of slice is 0, slice[0] does not exist, so collength == 0 has to be checked for first
edit2:
You are using this bit of code for many, many lines, but the column info does not change, right? That allows you to:
- pre-calculate a list of start points for each column (no more need to calculate start, end)
- knowing start/end in advance, .Columns() only needs to return columns that are not skipped and have a columnLength > 0 (or do you really need to raise an error for length == 0 on each line??)
- the mandatory length of each line is known, equal for each line, and can be checked before looping over the column infos
edit3:
I wonder how you will know which data index belongs to which column if you use 'skipColumn'...
EDIT: I'm changing this answer a bit. I'll leave the original answer below.
In my other answer I commented that the best thing would be to find a built-in Python module that would do the unpacking. I couldn't think of one, but perhaps I should have searched Google for one. @John Machin provided an answer that showed how to do it: use the Python struct module. Since that is written in C, it should be faster than my pure Python solution. (I haven't actually measured anything, so it is a guess.)
I do agree that the logic in the original code is "un-Pythonic". Returning a sentinel value isn't best; it's better to either return a valid value or raise an exception. The other way to do it is to return a list of valid values plus another list of invalid values. Since @John Machin offered code to yield up valid values, I thought I'd write a version here that returns two lists.
NOTE: Perhaps the best possible answer would be to take @John Machin's answer and modify it to save the invalid values to a file for possible later review. His answer yields up answers one at a time, so there is no need to build a large list of parsed records; and saving the bad lines to disk means there is no need to build a possibly-large list of bad lines.
import struct

def parse_records(self):
    """
    returns a tuple: (good, bad)
    good is a list of valid records (as tuples)
    bad is a list of tuples: (line_num, line, err)
    """
    cols = self.Columns()
    unpack_fmt = ""
    sign_checks = []
    start = 0
    for colx, info in enumerate(cols, 1):
        clen = info.columnLength
        if clen < 1:
            raise ValueError("Column %d: Bad columnLength %r" % (colx, clen))
        if info.skipColumn:
            unpack_fmt += str(clen) + "x"
        else:
            unpack_fmt += str(clen) + "s"
            if info.hasSignage:
                sign_checks.append(start)
        start += clen
    expected_len = start
    unpack = struct.Struct(unpack_fmt).unpack
    good = []
    bad = []
    for line_num, line in enumerate(self.whatever_the_list_of_lines_is, 1):
        if len(line) != expected_len:
            bad.append((line_num, line, "bad length"))
            continue
        if not all(line[i] in '+-' for i in sign_checks):
            bad.append((line_num, line, "sign check failed"))
            continue
        good.append(unpack(line))
    return good, bad
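Hypothetical usage (records stands in for the instance carrying Columns() and the line list):

good, bad = records.parse_records()
for line_num, line, err in bad:
    print("line %d rejected (%s): %r" % (line_num, err, line))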
ORIGINAL ANSWER TEXT:
This answer should be a lot faster if the self.Columns() information is identical over all the records. We do the processing of the self.Columns() information one time, and build a couple of lists that contain just what we need to process a record.
This code shows how to compute parsedLine but doesn't actually yield it up or return it or do anything with it. Obviously you would need to change that.
def parse_records(self):
    cols = self.Columns()
    slices = []
    sign_checks = []
    start = 0
    for info in cols:
        if info.columnLength < 1:
            raise ValueError, "bad columnLength"
        end = start + info.columnLength
        if not info.skipColumn:
            tup = (start, end)
            slices.append(tup)
            if info.hasSignage:
                sign_checks.append(start)
        start = end  # advance to the next column
    expected_len = end  # or use (end - 1) to not count a newline
    try:
        for line in self.whatever_the_list_of_lines_is:
            if len(line) != expected_len:
                raise ValueError, "wrong length"
            if not all(line[i] in '+-' for i in sign_checks):
                raise ValueError, "wrong input"
            parsedLine = [line[s:e] for s, e in slices]
    except ValueError:
        parsedLine = False
- Don't compute start and end every time through this loop. Compute them exactly once, prior to using self.Columns(). (Whatever that is. If Columns is a class with static values, that's silly. If it's a function with a name that begins with a capital letter, that's confusing.)
- if slice == '' or len(slice) != info.columnLength can only happen if line is too short compared to the total size required by Columns. Check once, outside the loop.
- slice[0:1].strip() != '+' sure looks like .startswith().
- if not info.skipColumn: apply this filter before even starting the loop; remove those columns from self.Columns().
First thing I would consider is slice = line[start:end]. Slicing creates new instances; you could try to avoid explicitly constructing line[start:end] and examine its contents manually.
Why are you doing slice[0:1]? That yields a subsequence containing a single item of slice (doesn't it?), so it can probably be checked more efficiently.
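For reference, indexing and one-item slicing agree on non-empty strings and only differ when the string is empty:

>>> 'abc'[0:1], 'abc'[0]
('a', 'a')
>>> ''[0:1]    # slicing out of range never raises
''
>>> ''[0]      # indexing does
Traceback (most recent call last):
  ...
IndexError: string index out of range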
I want to tell you to use some sort of built-in Python feature to split the string, but I can't think of one. So I'm left with just trying to reduce the amount of code you have.
When we are done, end should be pointing at the end of the string; if this is the case, then all of the .columnLength values must have been okay. (Unless one was negative or something!)
Since this has a reference to self, it must be a snippet from a member function. So, instead of raising exceptions, you could just return False to exit the function early as an error flag. But I like the debugging potential of changing the except clause to no longer catch the exception, and getting a stack trace letting you identify where the problem came from.
@Remi used slice[0] in '+-' where I used slice.startswith(('+', '-')). I think I like @Remi's code better there, but I left mine unchanged just to show you a different way. The .startswith() way will work for strings longer than length 1, but since this is only a string of length 1 the terse solution works.
try:
    line = line.strip('\n')
    data = []
    start = 0
    for info in self.Columns():
        end = start + info.columnLength
        slice = line[start:end]
        if info.hasSignage and not slice.startswith(('+', '-')):
            raise ValueError, "wrong input"
        if not info.skipColumn:
            data.append(slice)
        start = end
    if end != len(line):  # the newline was already stripped above
        raise ValueError, "bad .columnLength"
    parsedLine = data
except ValueError:
    parsedLine = False
