Iterating over tf.Tensor in graph execution mode

This snippet of code is from a custom metric function in TensorFlow:
r = []
p = []
count = 0
for idx, elem in enumerate(tf.round(data[:, -1])):
    if elem == 1:
        count += 1
    r.append(count / (count_pos + 1e-6))
    p.append(count / (idx + 1))
data is a 2-dimensional tensor and count_pos is a scalar.
When I run the metric as a stand-alone function, everything works fine. But when I pass it to model.compile, I get the following error, referencing the for-loop in the code snippet above, probably due to graph execution mode:
OperatorNotAllowedInGraphError: iterating over `tf.Tensor` is not allowed: AutoGraph did convert this function. This might indicate you are trying to use an unsupported feature.
I know that similar questions related to this error message have been discussed here. However, they don't seem to help in this particular situation, as I am not able to get rid of the for-loop.
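One way around the restriction is to express the running counts with tensor ops instead of a Python loop, so AutoGraph never has to iterate over the tensor. A minimal sketch of that idea (the function and variable names are mine, not from the original metric):

import tensorflow as tf

def running_precision_recall(data, count_pos):
    # 1.0 where the rounded last column equals 1, else 0.0
    hits = tf.cast(tf.equal(tf.round(data[:, -1]), 1.0), tf.float32)
    cum_hits = tf.cumsum(hits)  # running "count" from the loop
    positions = tf.cast(tf.range(1, tf.shape(hits)[0] + 1), tf.float32)  # idx + 1
    r = cum_hits / (count_pos + 1e-6)  # recall value appended at each row
    p = cum_hits / positions           # precision value appended at each row
    return r, p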


RuntimeError: Operation does not have identity in f-string statement

I am evaluating a PyTorch model. It returns results in the following manner:
results = model(batch)
# results is a list of dictionaries with 'boxes', 'labels' and 'scores' keys and torch tensor values
Then I try to print some of the values to check what is happening
print(
    (
        f"{results[0]['boxes'].shape[0]}\n"  # how many boxes there are
        f"{results[0]['scores'].mean()}"     # mean credibility score of the boxes
    )
)
This results in the error
Exception has occurred: RuntimeError: operation does not have identity
To make things more confusing, print only fails some of the time. Why does this fail?
I had the same problem in my code. It turns out that when you call reduction operations (e.g. min(), mean(), etc.) on empty tensors, the outcome is the "no identity" exception.
Code to reproduce:
import torch
a = torch.arange(12)
mask = a > 100
b = a[mask] # tensor([], dtype=torch.int64) -- empty tensor
b.min() # yields "RuntimeError: operation does not have an identity."
Figure out why your code returns empty tensors and this will solve the problem.
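In the question's case that means some images produce no detections, so results[0]['scores'] is an empty tensor. A small guard before printing avoids the crash; a minimal sketch (the variable name and fallback message are mine):

import torch

scores = torch.tensor([])  # stands in for results[0]['scores'] when nothing is detected
if scores.numel() > 0:
    print(f"{scores.shape[0]}\n{scores.mean()}")
else:
    print("no detections for this image")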

np.savetxt isn't storing array information

I have the following code:
import numpy as np

# Time integration
T = 28
AT = 5 / 1440
L = T / AT
tr = np.linspace(AT, T, AT)  # I set the minimum value to AT to avoid a
                             # DivisionByZero error (in function beta_cc(i))
np.savetxt('tiempo_real.csv', tr, delimiter=",")
# Parameters
fcm28 = 40
beta_cc = 0
fcm = 0
s = 0
# Hardening coefficient (s)
ct = input("Cement Type (1, 2 or 3): ")
print("Cement Type: " + str(ct))
if int(ct) == 1:
    s = 0.2
elif int(ct) == 2:
    s = 0.25
elif int(ct) == 3:
    s = 0.38
else:
    print("Invalid answer")
# fcm determination
iter = 1
maxiter = 8065
while iter < maxiter:
    iter += 1
    beta_cc = np.exp(s * (1 - (28 / tr)) ** 0.5)
    fcm = beta_cc * fcm28
    np.savetxt('Fcm_Results.csv', fcm, delimiter=",")
The code runs without errors, and it creates the two desired files, but no information is stored in either.
What I would like np.savetxt to do is create a .CSV file with the result of fcm at every iteration (so a 1×8064 array).
Instead of the while-loop, I had previously tried using a for-loop, but as the timestep is a float, I had some problems with it.
Thank you very much.
PS. Not sure if I should mention it: I used Python 3 on Ubuntu.
If anyone has the same issue: I solved this by changing the loop to a for-loop, appending the iterative values of the functions (beta_cc & fcm) to a list, and using the savetxt command.
iteration = 0
maxiteration = 8064
fcmM1 = []
tiemporeal = []
for i in range(iteration, maxiteration):
    def beta_cc(i):
        return np.exp(s * (1 - (28 / tr) ** 0.5))
    def fcm(i):
        return beta_cc(i) * fcm28
    tr = tr + AT
    fcmM1.append(fcm(i))
    tiemporeal.append(tr)
np.savetxt('M1_Resultados_fcm.csv', fcmM1, delimiter=",", header="Fcm", fmt="%s")
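For reference, since beta_cc and fcm are elementwise functions of the time values, the same table can be produced without an explicit Python loop. A sketch, assuming the corrected formula from the fix above (the array construction and the s value are illustrative):

import numpy as np

T = 28
AT = 5 / 1440
s = 0.25       # hardening coefficient for cement type 2
fcm28 = 40

tr = np.arange(1, 8065) * AT                  # 8064 time points, starting at AT to avoid t = 0
beta_cc = np.exp(s * (1 - (28 / tr) ** 0.5))  # hardening function at every time point
fcm = beta_cc * fcm28                         # strength at every time point

np.savetxt('M1_Resultados_fcm.csv', fcm, delimiter=",", header="Fcm", fmt="%s")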

Google TensorFlow crash course. Issues with Representation: Programming exercises Task 2: Make Better Use of Latitude

Hi, I've hit another roadblock in the TensorFlow crash course, at the representation programming exercises on this page:
https://developers.google.com/…/repres…/programming-exercise
I'm at Task 2: Make Better Use of Latitude.
It seems I've narrowed the issue down to where I convert the raw latitude data into "buckets" or ranges, which will be represented as 1 or 0 in my feature. The actual code and issue I have are in the pastebin. Any advice would be great! Thanks!
https://pastebin.com/xvV2A9Ac
This is to convert the raw latitude data in my pandas dictionary into "buckets" or ranges, as Google calls them.
LATITUDE_RANGES = zip(xrange(32, 44), xrange(33, 45))
In the above code I replaced xrange with just range, since xrange is deprecated in Python 3.
Could this be the problem? Using range instead of xrange? See below for my conundrum.
def select_and_transform_features(source_df):
    selected_examples = pd.DataFrame()
    selected_examples["median_income"] = source_df["median_income"]
    for r in LATITUDE_RANGES:
        selected_examples["latitude_%d_to_%d" % r] = source_df["latitude"].apply(
            lambda l: 1.0 if l >= r[0] and l < r[1] else 0.0)
    return selected_examples
The next two lines run the above function and convert my existing training and validation data sets into ranges or buckets for latitude:
selected_training_examples = select_and_transform_features(training_examples)
selected_validation_examples = select_and_transform_features(validation_examples)
This is the training model:
_ = train_model(
    learning_rate=0.01,
    steps=500,
    batch_size=5,
    training_examples=selected_training_examples,
    training_targets=training_targets,
    validation_examples=selected_validation_examples,
    validation_targets=validation_targets)
THE PROBLEM:
OK, so here is how I understand the problem. When I run the training model it throws this error:
ValueError: Feature latitude_32_to_33 is not in features dictionary.
So I inspected selected_training_examples and selected_validation_examples,
and here's what I found. If I run
selected_training_examples = select_and_transform_features(training_examples)
then I get the proper data set when I call selected_training_examples, which yields all the feature "buckets", including the feature latitude_32_to_33.
But when I run the next function
selected_validation_examples = select_and_transform_features(validation_examples)
it yields no buckets or ranges, resulting in the
`ValueError: Feature latitude_32_to_33 is not in features dictionary.`
So I next tried disabling the first call
selected_training_examples = select_and_transform_features(training_examples)
and ran only the second one
selected_validation_examples = select_and_transform_features(validation_examples)
If I do this, I then get the desired dataset for selected_validation_examples.
The problem now is that running the first function no longer gives me the "buckets", and I'm back to where I began. I guess my question is: how are the two calls affecting each other, and preventing the other from giving me the datasets I need, if I run them together?
Thanks in advance!
A Python developer gave me the solution, so I just wanted to share. LATITUDE_RANGES = zip(xrange(32, 44), xrange(33, 45)) can only be consumed once the way it was written, so I placed it inside the succeeding select_and_transform_features(source_df) function, which solved the issue. Thanks again everyone.
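A minimal illustration of why the two calls interfere: in Python 3, zip() returns a one-shot iterator, so the first call to select_and_transform_features consumes it and the second call sees nothing (the print statements are just for demonstration):

LATITUDE_RANGES = zip(range(32, 44), range(33, 45))

print(list(LATITUDE_RANGES))  # [(32, 33), (33, 34), ..., (43, 44)]
print(list(LATITUDE_RANGES))  # [] -- the iterator is already exhausted

# Either rebuild it inside the function, as in the fix above, or materialize it once:
# LATITUDE_RANGES = list(zip(range(32, 44), range(33, 45)))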

How to optimize filtering of a layer in the QGIS API

I'm developing a QGIS plugin (under version 2.8.1) for traffic assignment where I want to show the results of my simulation at each time step. Right now I'm using the Time Manager plugin, but it gets very slow when my layer has hundreds of thousands of features. In my case I know exactly which feature IDs I want to show at each time step, so I thought it would be easy to make it faster.
Here is what I tried (sorry for my way of Python programming, but I'm quite new to this language): at each time step of my loop I set the ordered list of indexes of features to show (they are always ordered in my case).
# TEST 1 -----------------------------------
for step in time_steps:
    index_start = my_list_of_indexes_start[step]
    index_end = my_list_of_indexes_end[step]
    expression = 'fid >= ' + str(index_start) + ' AND fid <= ' + str(index_end)
    # Or for optimization tests
    # expression = '"FIELD_TIME"' + "=" + str(step)
    layer_dynamic.setSubsetString(expression)
    self.iface.mapCanvas().refresh()
    time.sleep(0.2)

# TEST 2 ------------------------------------
for step in time_steps:
    index_start = my_list_of_indexes_start[step]
    index_end = my_list_of_indexes_end[step]
    indexes = list(j for j in range(index_start, index_end))
    request = QgsFeatureRequest().setFilterFids(indexes)
    layer_dynamic.getFeatures(request)
    self.iface.mapCanvas().refresh()
    time.sleep(0.2)
Solution 1, with
layer_dynamic.setSubsetString(expression)
works, as it refreshes the view with the correct filtered features displayed on the canvas at each time step, but it is even slower than using a SQL expression based on attribute values rather than on the indexes (as shown in the comment in the TEST 1 loop).
Solution 2, with
layer_dynamic.getFeatures(request)
is fast, but the display of the layer doesn't change.
Any idea why?
The method
bool QgsVectorLayer.setSubsetString(self, QString subset)
filters the layer (more details in setSubsetString), so only the features that match the filter (provided as a SQL statement or another definition string in the "subset" QString) "will belong to the layer" after it has been filtered. Thus, when you call refresh, only the filtered features are displayed.
On the other hand, the method
QgsFeatureIterator QgsVectorLayer.getFeatures(self, QgsFeatureRequest request=QgsFeatureRequest())
returns an iterator over the features matching your request (more details in getFeatures). It doesn't filter the layer. Using the iterator, you just iterate over the features matching the request.
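To make the difference concrete, a short sketch against the question's layer_dynamic (the fid values are placeholders, and iface is assumed to be the plugin's QGIS interface object):

from qgis.core import QgsFeatureRequest

# setSubsetString() changes what the layer itself contains, so refreshing the
# canvas shows only the matching features:
layer_dynamic.setSubsetString('fid >= 100 AND fid <= 200')
iface.mapCanvas().refresh()

# getFeatures() only returns an iterator over matching features; the layer and
# the canvas are left untouched, which is why TEST 2 never updates the display:
request = QgsFeatureRequest().setFilterFids([100, 101, 102])
for feature in layer_dynamic.getFeatures(request):
    print(feature.id())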

TypeError: 'filter' object is not subscriptable

I am receiving the error
TypeError: 'filter' object is not subscriptable
When trying to run the following block of code
bonds_unique = {}
for bond in bonds_new:
    if bond[0] < 0:
        ghost_atom = -(bond[0]) - 1
        bond_index = 0
    elif bond[1] < 0:
        ghost_atom = -(bond[1]) - 1
        bond_index = 1
    else:
        bonds_unique[repr(bond)] = bond
        continue
    if sheet[ghost_atom][1] > r_length or sheet[ghost_atom][1] < 0:
        ghost_x = sheet[ghost_atom][0]
        ghost_y = sheet[ghost_atom][1] % r_length
        image = filter(lambda i: abs(i[0] - ghost_x) < 1e-2 and
                       abs(i[1] - ghost_y) < 1e-2, sheet)
        bond[bond_index] = old_to_new[sheet.index(image[0]) + 1]
        bond.sort()
        #print >> stderr, ghost_atom + 1, bond[bond_index], image
    bonds_unique[repr(bond)] = bond

# Removing duplicate bonds
bonds_unique = sorted(bonds_unique.values())
And
sheet_new = []
bonds_new = []
old_to_new = {}
sheet=[]
bonds=[]
The error occurs at the line
bond[bond_index] = old_to_new[sheet.index(image[0]) + 1 ]
I apologise that this type of question has been posted on SO many times, but I am fairly new to Python and do not fully understand dictionaries. Am I trying to use a dictionary in a way in which it should not be used, or should I be using a dictionary where I am not using it?
I know that the fix is probably very simple (albeit not to me), and I will be very grateful if someone could point me in the right direction.
Once again, I apologise if this question has been answered already
Thanks,
Chris.
I am using Python IDLE 3.3.1 on Windows 7 64-bit.
filter() in Python 3 does not return a list, but an iterable filter object. Use the next() function on it to get the first filtered item:
bond[bond_index] = old_to_new[sheet.index(next(image)) + 1]
There is no need to convert it to a list, as you only use the first value.
Iterable objects like filter() produce results on demand rather than all in one go. If your sheet list is very large, it might take a long time and a lot of memory to put all the filtered results into a list, but filter() only needs to evaluate your lambda condition until one of the values from sheet produces a True result to produce one output. You tell the filter() object to scan through sheet for that first value by passing it to the next() function. You could do so multiple times to get multiple values, or use other tools that take iterables to do more complex things; the itertools library is full of such tools. The Python for loop is another such a tool, it too takes values from an iterable one by one.
If you must have access to all filtered results together, because you have to, say, index into the results at will (e.g. because this time your algorithm needs index 223, then index 17, then index 42), only then convert the iterable object to a list, using list():
image = list(filter(lambda i: ..., sheet))
The ability to access any of the values of an ordered sequence of values is called random access; a list is such a sequence, and so is a tuple or a numpy array. Iterables do not provide random access.
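One detail worth noting when using next() this way (an aside, not part of the original answer): if nothing in sheet matches the condition, next() raises StopIteration, and passing a default avoids that:

values = [3, 8, 12]
first_big = next(filter(lambda x: x > 10, values))          # 12
first_huge = next(filter(lambda x: x > 100, values), None)  # None instead of StopIteration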
Use list() around the filter() call; then it works fine. For me this resolved the issue.
For example
list(filter(lambda x: x%2!=0, mylist))
instead of
filter(lambda x: x%2!=0, mylist)
image = list(filter(lambda i: abs(i[0] - ghost_x) < 1e-2 and abs(i[1] - ghost_y) < 1e-2, sheet))
