Convert a string into a list of floats - Python

The string to be converted is:
strng= "0.000000000000000E+000, 2*2.400000000000000E-002 , 97*0.000000000000000E+000 ,"
which I was able to split using strng.split(',')
[' 0.000000000000000E+000', ' 2*2.400000000000000E-002 ', ' 97*0.000000000000000E+000 ', '\n']
however, ' 2*2.400000000000000E-002 ' is not a single value but actually represents two elements of the list,
and ' 97*0.000000000000000E+000 ' actually represents 97 elements of the list.
Here is an ugly attempt to accomplish this:
a = strng.split(',')
lst = []
for item in a:
    item = item.strip()
    if not item:
        continue
    print("item =", item)
    i = item.split("*")
    if len(i) > 1:
        print(int(i[0]) * [float(i[1])])
        lst.extend(int(i[0]) * [float(i[1])])
    else:
        lst.append(float(i[0]))
    print("================")
print(lst)
What is a more elegant way to accomplish this?

Considering
string = "0.000000000000000E+000, 2*2.400000000000000E-002 , 97*0.000000000000000E+000 ,"
You can apply the following steps:
Split and strip the original string:
# Split by commas
expressions = string.split(',')
# Remove leading and trailing white space of each element
expressions = [expression.strip() for expression in expressions]
# Remove empty elements
expressions = [expression for expression in expressions if expression]
or, alternatively,
expressions = [expression.strip() for expression in string.split(',') if expression.strip()]
This will evaluate to
['0.000000000000000E+000', '2*2.400000000000000E-002', '97*0.000000000000000E+000']
Create a new list and evaluate each expression:
result = []
for expression in expressions:
    # Split by the multiplication sign
    operands = expression.split('*')
    # A single element, just add it to `result`
    if len(operands) == 1:
        result.append(float(operands[0]))
    # Two operands, add to `result` repeatedly
    elif len(operands) == 2:
        o1, o2 = operands
        o1 = int(o1)
        o2 = float(o2)
        for _ in range(o1):
            result.append(o2)
This will evaluate to
[0.0, 0.024, 0.024, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0]
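For completeness, both steps can be folded into one small helper (just a sketch of the same logic; the function name expand is mine):
def expand(strng):
    """Expand a repeat-count list like '97*0.0' into a flat list of floats."""
    result = []
    for expression in (e.strip() for e in strng.split(',') if e.strip()):
        count, _, value = expression.rpartition('*')
        result.extend([float(value)] * (int(count) if count else 1))
    return result

print(expand("0.000000000000000E+000, 2*2.400000000000000E-002 , 97*0.000000000000000E+000 ,"))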

Here is another way to do this with string manipulation, but it is intended only to help you understand how it works.
string = "0.000000000000000E+000, 2*2.400000000000000E-002 , 97*0.000000000000000E+000 ,"
out = [eval(i.strip().replace('*','*[')+']' if '*' in i else '['+i.strip()+']') for i in strng.split(',') if len(i)>0]
print(out)
[[0.0], [0.024, 0.024], [0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0]]
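Note that this evaluates to a nested list (one sublist per comma-separated field). If a flat list of floats is needed, one option is to flatten it afterwards, for example with itertools:
from itertools import chain

flat = list(chain.from_iterable(out))
print(flat[:5])  # [0.0, 0.024, 0.024, 0.0, 0.0]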

Related

Keras model feature output is all zeros during prediction of a single image

For a project I'm extracting the features of images but have run into a problem. After using transfer learning on efficientnet, I load the model, remove the last 2 layers (dropout and softmax) then use model.predict to get the feature array. This works fine when working in batches but with a single image it breaks. Here's my code for the single image:
model = load_model('best_model.h5')
model = Model(inputs=model.input,outputs=model.layers[-3].output)
model.summary()
#img_path = 'test_img_new/大葱/20113.jpg'
img_path = '1359.jpg'
img = image.load_img(img_path, target_size=(260,260,3))
x = image.img_to_array(img)
x = preprocess_input_E(x)
x = np.array(x,dtype=np.float32)
#x = x/255
x = np.expand_dims(x, axis=0)
res = model.predict(x,batch_size=1)
print(res.tolist())
and the output is just:
[[0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0]]
Anyone have any suggestions?
After browsing stackoverflow and github for a ridiculous amount of time, I found the solution.
It turns out that batch normalization was not performing correctly during inference. As per the Keras documentation, the Batch Normalization layer should use the moving average and standard deviation from training. However, this was not happening because during training I had manually set trainable=True. The solution was to load the model again, then specifically set the layer to trainable=False:
for layer in model.layers:
    if "BatchNormalization" in layer.__class__.__name__:
        layer.trainable = False
        print(layer.weights)
Then, save and use the model directly. It seems that compiling was not needed for this to take effect, and this may be a bug which should be investigated in the future.
NOTE: This seemed to work for tensorflow 2.3 but not 2.1. This may be due to the model being trained on tf 2.3, but I'm unsure of the reasoning.
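For reference, a minimal end-to-end sketch of the workaround described above might look like this (file names are placeholders, and the layer-name check may need adjusting for your model):
from tensorflow.keras.models import load_model, Model

model = load_model('best_model.h5')

# Force BatchNormalization layers to use their stored moving statistics at inference
for layer in model.layers:
    if "BatchNormalization" in layer.__class__.__name__:
        layer.trainable = False

# Re-save, reload, and strip the classification head as before
model.save('best_model_fixed.h5')
model = load_model('best_model_fixed.h5')
feature_extractor = Model(inputs=model.input, outputs=model.layers[-3].output)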

Can pyspark.ml.stat.Summarizer return sparse vector results?

Computing aggregations of sparse vectors using pyspark.ml.stat.Summarizer returns dense vector results - is there a way to force sparse vector operations?
(Just converting the result isn't a good solution, as the intermediate results will waste a lot of resources.)
code to reproduce:
import pyspark
from pyspark.sql.functions import col
from pyspark.ml.stat import Summarizer
from pyspark.ml.linalg import SparseVector, DenseVector
sc = pyspark.SparkContext.getOrCreate()
sql_context = pyspark.SQLContext(sc)
df = sc.parallelize([ ( SparseVector(100, {1: 1.0}),)]).toDF(['v'])
print(df.head())
print(df.select(Summarizer.mean(col('v'))).head())
output:
Row(v=SparseVector(100, {1: 1.0}))
Row(mean(v)=DenseVector([0.0, 1.0,
0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0]))
EDIT
An alternative implementation for aggregating sparse vectors would also be relevant.
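One possible direction (a sketch only; it bypasses Summarizer entirely and aggregates through scipy on the RDD, so sparsity is preserved throughout):
from scipy.sparse import csr_matrix

def to_csr(v):
    # Convert a pyspark.ml.linalg.SparseVector into a 1 x size scipy CSR row
    return csr_matrix((v.values, v.indices, [0, len(v.indices)]), shape=(1, v.size))

n = df.count()
total = df.rdd.map(lambda row: to_csr(row['v'])).reduce(lambda a, b: a + b)
mean = total / n  # still a scipy sparse matrix
print(mean)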

Why is my variable being overwritten by the function even though it's indirectly referenced?

Why is it that this code:
import random, decimal
data = [[0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0], [0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0], [0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0], [0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0], [0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0],
[0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.3333333333333333, 0.05555555555555555, 0.05555555555555555, 0.05555555555555555, 0.007936507936507936, 0.007352941176470588, 0.005714285714285714, 0.038461538461538464, 0.006024096385542169, 0.00392156862745098, 0.004048582995951417, 0.007874015748031496, 0.0, 0.0, 0.0, 0.0], [0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.03333333333333333, 0.027777777777777776, 0.010638297872340425, 0.006493506493506494, 0.0058823529411764705, 0.003952569169960474, 0.003952569169960474, 0.003952569169960474, 0.003952569169960474, 0.003952569169960474, 0.0044444444444444444, 0.005813953488372093, 0.003952569169960474, 0.004132231404958678, 0.005128205128205128, 0.015625, 0.0, 0.0, 0.0, 0.0], [0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.02040816326530612, 0.004201680672268907, 0.003952569169960474, 0.003952569169960474, 0.003952569169960474, 0.003952569169960474, 0.003952569169960474, 0.003952569169960474, 0.003952569169960474, 0.003952569169960474, 0.00398406374501992, 0.010752688172043012, 0.012195121951219513, 0.012195121951219513, 0.017857142857142856, 0.02564102564102564, 0.0, 0.0, 0.0, 0.0, 0.0], [0.0, 0.0, 0.0, 0.0, 0.0, 0.0,
0.0, 0.05555555555555555, 0.0045662100456621, 0.003952569169960474, 0.003952569169960474, 0.003952569169960474, 0.003952569169960474, 0.003952569169960474, 0.005050505050505051, 0.005494505494505495, 0.004048582995951417, 0.004149377593360996, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0], [0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0125, 0.00641025641025641, 0.009345794392523364, 0.003952569169960474, 0.003952569169960474, 0.004878048780487805, 0.09090909090909091,
0.0, 0.023255813953488372, 0.006493506493506494, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0], [0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.07142857142857142, 1.0, 0.006493506493506494, 0.003952569169960474, 0.011111111111111112, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0], [0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.007194244604316547, 0.003952569169960474, 0.005263157894736842, 0.5, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0], [0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.09090909090909091, 0.005263157894736842, 0.003952569169960474, 0.014285714285714285, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0],
[0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.02857142857142857, 0.004149377593360996, 0.0044444444444444444, 0.00625, 0.009259259259259259, 1.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0], [0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.012345679012345678, 0.004166666666666667, 0.003952569169960474, 0.003952569169960474, 0.008403361344537815, 0.04, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0], [0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.022222222222222223, 0.005376344086021506, 0.003952569169960474, 0.003952569169960474, 0.006666666666666667, 0.037037037037037035, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0], [0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0625, 0.010752688172043012, 0.003968253968253968, 0.003952569169960474, 0.0053475935828877, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0], [0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0,
0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.004016064257028112, 0.003952569169960474, 0.004016064257028112, 0.015625, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0], [0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.021739130434782608, 0.007692307692307693, 0.00546448087431694, 0.003952569169960474, 0.003952569169960474, 0.004830917874396135, 0.5, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0], [0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.02564102564102564, 0.006756756756756757, 0.004366812227074236, 0.003952569169960474, 0.003952569169960474, 0.003952569169960474, 0.004, 0.005494505494505495, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0], [0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.041666666666666664, 0.008771929824561403, 0.004524886877828055, 0.003952569169960474, 0.003952569169960474, 0.003952569169960474, 0.003952569169960474, 0.004975124378109453, 0.01282051282051282, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0], [0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.043478260869565216, 0.015151515151515152, 0.004694835680751174, 0.003952569169960474, 0.003952569169960474, 0.003952569169960474, 0.003952569169960474, 0.005050505050505051, 0.012345679012345678,
0.5, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0], [0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.05555555555555555, 0.005847953216374269, 0.0045662100456621, 0.003952569169960474, 0.003952569169960474, 0.003952569169960474, 0.003952569169960474, 0.005128205128205128, 0.0125, 0.1111111111111111, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0], [0.0, 0.0, 0.0, 0.0, 0.01818181818181818, 0.005813953488372093, 0.004424778761061947, 0.003952569169960474, 0.003952569169960474, 0.003952569169960474, 0.003952569169960474, 0.004098360655737705, 0.007518796992481203, 0.09090909090909091, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0], [0.0, 0.0, 0.0, 0.0, 0.007352941176470588, 0.003952569169960474, 0.003952569169960474, 0.003952569169960474, 0.0047169811320754715, 0.007407407407407408, 0.007575757575757576, 0.0625, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0], [0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0], [0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0], [0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0]]
[[0.2958373547424454, 0.7017788649452896, 0.062273689823917136, 0.5972807574704273, 0.9954328393432262, 0.06846660132933566, 0.951370577952487, 0.13857649386421894, 0.9155594510837761, 0.25332617322833023], [0.9435904403108412, 0.8308103147735971, 0.5642138942684856, 0.569439896252594, 0.008026089778913156, 0.14708273242122766, 0.025129344862662672, 0.10605633957699719, 0.017149039312628126, 0.9685317671516491, 0.9604996293414068, 0.13009421347484584, 0.011168020783984038, 0.9660212281911376, 0.009785512162735856, 0.774118311034036], [0.39828514946703536, 0.9719812893589492, 0.29744104658316833, 0.2407147714636478, 0.17834097142046978, 0.30339140369565476, 0.23414714143729679, 0.010316054064037888, 0.9665891555382209, 0.2709023215219119, 0.18678849879884543, 0.6853950395070337, 0.49793613573495626, 0.7369771877120291, 0.5822901326349779, 0.35804830353223377]]
def rnd():
    return float(decimal.Decimal(random.randrange(-200, 200))/100)

def neuronPositivityTotal(aIndex, imageData, weightsBalances):
    """ Calculates the total positivity of all nodes.."""
    # Step 1: Level 1 Neurons
    for z in range(len(aIndex[2])):
        value = 0.0
        # weightsBalances[2] is the level 1 values
        balance = weightsBalances[2][z][1]
        weights = weightsBalances[2][z][0]
        for y in range(len(imageData)):
            for x in range(len(imageData[y])):
                value += imageData[x][y]*weights[x][y]
        value += balance
        aIndex[2][z] = value
activationIndex = [[0.0 for x in range(10)], [0.0 for x in range(16)], [0.0 for x in range(16)]]
answerWB = {}
n1WB = {}
n2WB = {}
for x in range(10):
    answerWB[x] = [[rnd() for y in range(16)], rnd()]
for x in range(16):
    n2WB[x] = [[rnd() for y in range(16)], rnd()]
    n1WB[x] = [[[rnd() for y in range(28)] for x in range(28)], rnd()]
weightsBalances = [answerWB, n2WB, n1WB]
print(activationIndex)
neuronPositivityTotal(activationIndex, data, weightsBalances)
print(activationIndex)
Outputs:
[[0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0], [0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0], [0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0]]
[[0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0], [0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0], [0.7268240240758053, 2.1047003177465773, -0.14702590533853588, 1.6819534808112164, 2.1421710797784743, -2.0840036534506106, -0.6067792109947316, 0.40979446487814936, 1.9165752482288656, 0.5045996467720566, -0.08187273316902777, -0.9549459648875294, -0.19719768439019592, -3.4785770307824064, -4.423173617897677, -1.8256438294590946]]
Even though there are no global statements in the program? It's part of a larger deep learning project that I'm working on and I can't figure it out. The value of activationIndex is only ever set once, and inside the function the values are only ever accessed through the aIndex parameter. (Using Python 3.8.)
Passing an argument to a function binds the function's local parameter name to the same object, creating an alias. When you do:
aIndex[2][z] = value
it's equivalent to doing:
activationIndex[2][z] = value
because aIndex is an alias to the same list bound to activationIndex; the "pointer" to that list is copied (so saying aIndex = 'foo' won't change activationIndex, as it just assigns a new "pointer"), but mutations to the pointed-to object will be seen through both aliases until one of them is rebound to a new object.
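A tiny illustration (names are made up) of the difference between mutating through an alias and rebinding the alias:
def mutate(alias):
    alias[0] = 99   # mutates the shared list; the caller sees this

def rebind(alias):
    alias = [99]    # rebinds the local name only; the caller is unaffected

outer = [0, 0]
mutate(outer)
print(outer)  # [99, 0]
rebind(outer)
print(outer)  # [99, 0] -- unchanged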
If you want to sever that aliasing linkage to the caller, deepcopy aIndex when you receive it, adding:
from copy import deepcopy # At top of file
and changing the beginning of neuronPositivityTotal to:
def neuronPositivityTotal(aIndex, imageData, weightsBalances):
    """ Calculates the total positivity of all nodes.."""
    aIndex = deepcopy(aIndex)
    # ... rest of function ...

Python statistics.median_grouped returns value outside of input range

I'm analyzing some data and got the list below; its median_grouped seems to be outside the range of values in the list.
Interpreter output:
statistics.median_grouped(rcdistance)
-0.1250640605965987
min(rcdistance)
0.0
Here's the list:
[0.0, 0.21098454268727887, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 6.800735254367722, 6.800735254367722, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 2.3691164850450193, 0.8341463454328069, 6.954672651889972, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 1.894523526205462, 0.5202041513165154, 0.3195953715186193, 1.19650727442225, 1.9829608504017993, 0.320822054594543, 0.8021892543724501, 0.06639592707563217, 0.33791893636471976, 1.3469748280785028, 0.46225975601521924, 0.29146628081088805, 0.31432316149828754, 0.24174865833529519, 0.10137129170776715, 2.07576600608641, 0.2615844609784234, 0.6481662780506829, 0.13578192040175857, 0.13470214233110803, 0.34278013217398423, 0.09326076007769707, 0.4336006232739294, 0.6007209936117036, 2.8535831869013726, 0.7286885646881327, 0.6175406688934962, 0.6432741795490186, 0.25089604017432937, 0.2247559442381167, 0.10569371849097034, 0.13555324220681084, 11.575372015987313, 0.13669258957306155, 1.634337739537961, 0.7023579970496473, 0.9488940841470294, 1.9556804571337925, 0.8370864316757329, 1.2024907508700595, 2.2424077713328843, 0.49703062007687854, 0.7817924904836159, 2.3319974222716975, 1.8335319342837306, 0.0, 3.28868490484547, 1.4314796804850678, 1.3638514553202294, 0.3513379545492983, 0.16043624593306288, 12.960363339703447, 1.4495399509870706, 0.2861815531492466, 0.23131921608674785, 0.07470703125, 0.3156641038900055, 0.12637431907281702, 0.6356548946706172, 2.1073361751465227, 0.1782201479382565, 1.3450383951599099, 0.8871288611532409, 1.4173304274383594, 0.14168172737548348, 0.3703729958352881, 1.658888478625415, 1.2477698697766813, 1.7575586601929727, 2.167596692301178, 2.051780602511496, 0.37814640268331945, 2.1041473318539876, 1.1896854668558041, 0.9250246006039865, 1.3271447098141402, 1.6116094170310153, 1.0963143124740196, 1.4711895830945183, 0.1623601238081846, 1.577846710293568, 0.07569225630868787, 1.200957612567313, 0.18243840079429852, 0.7718713876122335, 0.6783927775524854, 1.7536286642036438, 1.7548529400031645, 0.16162109375, 0.6676365879769467, 0.2727575294371441, 13.361847527117655, 2.2281143134718224, 0.5390126302014291, 0.9547223635736837, 0.17379417516519388, 0.15405041289356222, 0.918123792176202, 0.2754296890581234, 0.5438064589731, 1.6679939428923882, 0.33831407770944383, 0.13401630343709894, 0.4036704845489422, 2.4687540199150484, 1.364673723134251, 1.462182233800534, 0.2524997193740947, 0.6879091779122948, 0.07512909587034285, 1.168234753379213, 0.460947068971782, 0.9278222935828773, 0.5815480934256779, 0.15330106772382537, 0.16744291813233583, 0.11594392249731117, 2.424850113159381, 0.8373170681343546, 4.714464894570124, 0.8284206942102329, 6.691576922679101, 0.9362133734023534, 0.3749359394034013, 0.7812593459523988, 5.344434281645955, 0.34577846245001914, 2.6886532105742007, 0.261063966541952, 0.14849250175991532, 0.3498228476287005]
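For what it's worth, the effect can be reproduced with a much smaller list; median_grouped interpolates within a class of the default interval 1, so the result is not restricted to the observed values:
import statistics

small = [0.1, 0.3]
print(statistics.median_grouped(small))  # -0.2, which is below min(small) == 0.1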

lpsolve - results are different every time I change the order of constraints

I have noticed some strange behaviour of the lpsolve library (using it in Python 3.4).
When I change the order of the constraints that I add to the lpsolve model, the results are also slightly different.
I would be glad for any hints on why this is happening.
Adding both models to reproduce the case:
lp model 1: http://pastie.org/private/mginn1s7orxkq58mv3dxrw
lp model 2: http://pastie.org/private/ron5k7y3hipxhci1hap8nq
If you run both models you will get slightly different results (while the objective function is almost the same):
obj1: 458093300.0000001
obj2: 458093300.00000006
vars1: [0.0, 350260.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 1900.0, 1198215.0, 318324.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 4310807.0, 0.0, 0.0, 0.0, 1345965.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 4505218.0, 0.0, 1689912.0, 0.0, 0.0, 0.0, 0.0, 0.0, 479929.0, 0.0, 0.0, 0.0, 0.0, 0.0, 782031.0, 0.0, 0.0, 190146.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 5224280.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 3058056.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 650240.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 509539.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 1351133.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 301872.0, 5.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 380880.0, 268556.0, 1201311.0]
vars2: [0.0, 350260.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 1198215.0, 0.0, 0.0, 0.0, 0.0, 1.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 515323.0, 0.0, 0.0, 0.0, 1345965.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 4505218.0, 0.0, 1010333.0, 0.0, 0.0, 0.0, 0.0, 0.0, 479938.0, 0.0, 0.0, 0.0, 0.0, 0.0, 782031.0, 0.0, 0.0, 190146.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 5224280.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 3082057.0, 0.0, 0.0, 3061853.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 650240.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 623447.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 1347336.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 301872.0, 305463.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 536019.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 380880.0, 268556.0, 1201311.0]
Python code to reproduce:
from lpsolve55 import *
mod1 = lpsolve("read_lp", "/home/../model_1.lp")
mod2 = lpsolve("read_lp", "/home/../model_2.lp")
res1 = lpsolve('solve', mod1)
res2 = lpsolve('solve', mod2)
obj1 = lpsolve('get_objective', mod1)
obj2 = lpsolve('get_objective', mod2)
vars1 = lpsolve('get_variables', mod1)[0]
vars2 = lpsolve('get_variables', mod2)[0]
print("obj1: ", obj1)
print("obj2: ", obj2)
print("vars1: ", vars1)
print("vars2: ", vars2)
This could be because your model is not completely numerically stable. You probably have input variables that are not all in the same range (e.g. x1 can be -1..+1 while x2 can be -1000000..+1000000).
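If that is the case, one thing worth trying (a sketch; check the lp_solve scaling documentation for the exact mode constants, which I assume are exposed by lpsolve55) is enabling scaling before solving:
from lpsolve55 import *

mod1 = lpsolve("read_lp", "/home/../model_1.lp")
# Ask lp_solve to rescale the model before solving; SCALE_GEOMETRIC and
# SCALE_DYNUPDATE are lp_solve scaling-mode constants
lpsolve('set_scaling', mod1, SCALE_GEOMETRIC + SCALE_DYNUPDATE)
lpsolve('solve', mod1)
print(lpsolve('get_objective', mod1))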
