How to set bounds when minimizing using scipy - python

I have some data in a numpy array.
I would like to scale the data using a linear function according to the following rules:
The mean is as close to 65 as possible
The smallest value is at least 50
For my first attempt I made a scoring function:
import numpy as np
from scipy.optimize import minimize
def score(x):
    return abs(np.mean(x[0]*data + x[1]) - 65) + abs(x[0]*np.min(data) + x[1] - 50)
I added abs(x[0]*np.min(data)+x[1] - 50) in a (rather vain) attempt to get it to satisfy rule 2.
I then tried:
x0 = [0.85,0]
res = minimize(score,x0)
np.set_printoptions(suppress=True)
print(res)
This gives:
      fun: 4.8516444911893615
 hess_inv: array([[ 0.0047, -0.1532],
                  [-0.1532,  5.2375]])
      jac: array([-50.9628,  -2.    ])
  message: 'Desired error not necessarily achieved due to precision loss.'
     nfev: 580
      nit: 2
     njev: 142
   status: 2
  success: False
        x: array([ 0.7408,  1.4407])
In other words, the optimization failed.
I would also like to set bounds for the coefficients, e.g. bounds = [(0.7,1.3),(-5,5)].
My question is, what is the correct way to run the optimization with the boundary condition that the scaled smallest value is at least 50? Also, how can I make it so that the optimization runs without failure?

Consider the following:
import numpy as np
from scipy.optimize import minimize
data = np.array([ 59. , 59.5, 61. , 61.5, 62.5, 63. , 63. , 65.5, 66.5,
67. , 68. , 69. , 69.5, 70.5, 70.5, 70.5, 71. , 72. ,
72. , 73.5, 73.5, 74. , 75. , 75.5, 78. , 79. , 79. ,
79. , 79.5, 80.5, 80.5, 80.5, 80.5, 80.5, 82.5, 82.5,
82.5, 83. , 83. , 83. , 83. , 83. , 83.5, 83.5, 84. ,
84.5, 84.5, 84.5, 86. , 86. , 86. , 86.5, 86.5, 87.5,
88. , 88. , 88.5, 89. , 90. , 90.5, 90.5, 90.5, 91. ,
91.5, 91.5, 92. , 92. , 93. , 93. , 93. , 93.5, 93.5,
94. , 94. , 94. , 94. , 94. , 94. , 94.5, 94.5, 94.5,
94.5, 95.5, 95.5, 95.5, 95.5, 95.5, 95.5, 96. , 96. ,
96. , 96.5, 96.5, 96.5, 98. , 98. , 98. , 98. , 98. ,
98. , 98. , 98. , 98.5, 98.5, 98.5, 98.5, 98.5, 100. ,
100. , 100. , 100. ])
def scale(data, coeffs):
    m, b = coeffs
    return (m * data) + b

def score(coeffs):
    scaled = scale(data, coeffs)
    # Penalty components
    p_1 = abs(np.mean(scaled) - 65)
    p_2 = max(0, (50 - np.min(scaled)))
    return p_1 + p_2

res = minimize(score, (0.85, 0.0), method='Powell')
#np.set_printoptions(suppress=True)
print(res)
post = scale(data, res.x)
print(np.mean(post))
print(np.min(post))
print(score(res.x))
Outputs:
   direc: array([[ -3.05475495e-02,   2.62047576e+00],
                 [  7.54828106e-07,  -6.47892698e-05]])
     fun: 1.4210854715202004e-14
 message: 'Optimization terminated successfully.'
    nfev: 360
     nit: 8
  status: 0
 success: True
       x: array([  0.55914442,  17.02691959])
print(np.mean(post)) # 65.0
print(np.min(post)) # 50.0164406291
print(score(res.x)) # 1.42108547152e-14
A few things:
I added a scale helper function to clean up the code a bit, since I use it in the score function as well as at the end to show the scaled data.
The score function was fixed and broken out into two separate penalties (one for each requirement) for clarity. It computes the scaled vector once (and calls it scaled), then computes the penalty components.
Note: this score function is non-smooth wherever np.min(scaled) == 50 because of the max call. This may cause issues with some optimization methods.
I used the Powell algorithm because it had worked for me before on a similar problem involving a min/max operator. Wikipedia says:
The method is useful for calculating the local minimum of a continuous but complex function, especially one without an underlying mathematical definition, because it is not necessary to take derivatives
Someone more familiar with the optimization methods may be able to suggest a better alternative.
(Edit) Lastly, with respect to your question about boundary conditions. Usually, when we talk about boundary conditions we're talking about the boundary of the independent variable, the vector we're optimizing (here, elements of coeffs or x) -- for example, "x[0] must be less than 0", or "x[1] must be between 0 and 1" -- not what you seem to be looking for.
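For completeness, here is a minimal sketch of that kind of setup using SLSQP, which accepts both bounds on the coefficients and an inequality constraint expressing the scaled-minimum rule. The small stand-in data array and the squared (rather than absolute) mean error are my own illustrative choices, not from the question:
import numpy as np
from scipy.optimize import minimize

data = np.array([59.0, 61.5, 70.5, 83.0, 94.5, 100.0])  # illustrative stand-in

def objective(coeffs):
    m, b = coeffs
    # Squared error keeps the objective smooth for the gradient-based SLSQP
    return (np.mean(m * data + b) - 65) ** 2

# An 'ineq' constraint is satisfied when fun(coeffs) >= 0, i.e. scaled min >= 50
cons = [{'type': 'ineq', 'fun': lambda c: c[0] * np.min(data) + c[1] - 50}]
bounds = [(0.7, 1.3), (-5, 5)]

res = minimize(objective, (0.85, 0.0), method='SLSQP',
               bounds=bounds, constraints=cons)
print(res.x, res.fun)
If the bounds make mean 65 unattainable for a given data set, SLSQP simply minimizes the squared error over the feasible region instead of failing.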

Sorry if I'm misunderstanding you, but just scaling the data according to those 2 rules is straightforward linear algebra:
e = np.mean(data)
m = e - np.min(data)
data * (65-50)/m + (65 - e*(65-50)/m)
# i.e. (data-e) * (65-50)/m + 65
This has exactly mean 65 and minimum 50.
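A quick check of that identity on a stand-in array (any data gives the same two numbers):
import numpy as np

data = np.array([59.0, 61.5, 70.5, 83.0, 94.5, 100.0])  # stand-in sample
e = np.mean(data)
m = e - np.min(data)
scaled = (data - e) * (65 - 50) / m + 65
print(np.mean(scaled))  # 65.0
print(np.min(scaled))   # 50.0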

Related

Different results on using function and its content

I was trying to understand how the fast_knn function of the impyute library works, so I tried to execute it line by line. Here it is:
import numpy as np
from scipy.spatial import KDTree

def shepards(distances, power=2):
    return to_percentage(1/np.power(distances, power))

def to_percentage(vec):
    return vec/np.sum(vec)

data_temp = np.arange(25).reshape((5, 5)).astype(np.float)  # np.float is an alias for builtin float (removed in NumPy 1.24)
data_temp[0][2] = np.nan
k = 4
eps = 0
p = 2
distance_upper_bound = np.inf
leafsize = 10
idw_fn = shepards
init_impute_fn = mean  # impyute's column-mean imputation helper; not defined in this snippet

nan_xy = np.argwhere(np.isnan(data_temp))
data_temp_c = init_impute_fn(data_temp)
kdtree = KDTree(data_temp_c, leafsize=leafsize)
for x_i, y_i in nan_xy:
    distances, indices = kdtree.query(data_temp_c[x_i], k=k+1, eps=eps,
                                      p=p, distance_upper_bound=distance_upper_bound)
    # Will always return itself in the first index. Delete it.
    distances, indices = distances[1:], indices[1:]
    # Add small constant to distances to avoid division by 0
    distances += 1e-3
    weights = idw_fn(distances)
    # Assign missing value the weighted average of `k` nearest neighbours
    data_temp[x_i][y_i] = np.dot(weights, [data_temp_c[ind][y_i] for ind in indices])
data_temp
This outputs:
array([[ 0. , 1. , 10.06569379, 3. , 4. ],
[ 5. , 6. , 7. , 8. , 9. ],
[10. , 11. , 12. , 13. , 14. ],
[15. , 16. , 17. , 18. , 19. ],
[20. , 21. , 22. , 23. , 24. ]])
whereas the function has a different output. The code :
from impyute import fast_knn
import numpy as np
data_temp = np.arange(25).reshape((5, 5)).astype(np.float)
data_temp[0][2] = np.nan
fast_knn(data_temp, k=4)
and the output
array([[ 0. , 1. , 16.78451885, 3. , 4. ],
[ 5. , 6. , 7. , 8. , 9. ],
[10. , 11. , 12. , 13. , 14. ],
[15. , 16. , 17. , 18. , 19. ],
[20. , 21. , 22. , 23. , 24. ]])
There seem to be discrepancies between the GitHub repository code and the installed library's source code (the installed package lags behind the repository). The following is the library source code:
def fast_knn(data, k=3, eps=0, p=2, distance_upper_bound=np.inf, leafsize=10, **kwargs):
    null_xy = find_null(data)
    data_c = mean(data)
    kdtree = KDTree(data_c, leafsize=leafsize)
    for x_i, y_i in null_xy:
        distances, indices = kdtree.query(data_c[x_i], k=k+1, eps=eps,
                                          p=p, distance_upper_bound=distance_upper_bound)
        # Will always return itself in the first index. Delete it.
        distances, indices = distances[1:], indices[1:]
        weights = distances/np.sum(distances)
        # Assign missing value the weighted average of `k` nearest neighbours
        data[x_i][y_i] = np.dot(weights, [data_c[ind][y_i] for ind in indices])
    return data
The weights are computed in a different manner (not using the shepards function). Hence, the difference in outputs.
You probably read the code on the current master branch of impyute, but the impyute package you installed is likely v0.0.8 (the most recent release), whose code lives on the release/0.0.8 branch.
The difference in the definition of fast_knn is shown below.
On the current master branch:
# Will always return itself in the first index. Delete it.
distances, indices = distances[1:], indices[1:]
# Add small constant to distances to avoid division by 0
distances += 1e-3
weights = idw_fn(distances)
On release/0.0.8 branch:
# Will always return itself in the first index. Delete it.
distances, indices = distances[1:], indices[1:]
weights = distances/np.sum(distances)
If you run the code from the release/0.0.8 branch, you will get the same result as the installed impyute package.
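To confirm which release is actually installed, the standard library can query any installed distribution (Python 3.8+; nothing here is impyute-specific):
from importlib.metadata import version

print(version("impyute"))  # e.g. '0.0.8'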

Fast way to apply function to each row of a numpy array

Suppose I have some nearest neighbor classifier. For a new observation it computes the distance between the new observation and all observations in the "known" data set. It returns the class label of the observation that has the smallest distance to the new observation.
import numpy as np

known_obs = np.random.randint(0, 10, 40).reshape(8, 5)
new_obs = np.random.randint(0, 10, 80).reshape(16, 5)
labels = np.random.randint(0, 2, 8).reshape(8, )

def my_dist(x1, known_obs, axis=0):
    return np.square(np.linalg.norm(x1 - known_obs, axis=axis))

def nn_classifier(n, known_obs, labels, axis=1, distance=my_dist):
    return labels[np.argmin(distance(n, known_obs, axis=axis))]

def classify_batch(new_obs, known_obs, labels, classifier=nn_classifier, distance=my_dist):
    return [classifier(n, known_obs, labels, distance=distance) for n in new_obs]

print(classify_batch(new_obs, known_obs, labels, nn_classifier, my_dist))
For performance reasons I would like to avoid the for loop in the classify_batch function. Is there a way to use numpy operations to apply the nn_classifier function to each row of new_obs?
I already tried apply_along_axis, but as is often mentioned, it is convenient rather than fast.
The key to avoiding the loop is to express the action on the (16,8) array of 'distances'. The labels[] and argmin steps just cloud the issue.
If I set labels = np.arange(8), then this
arr = np.array([my_dist(n, known_obs, axis=1) for n in new_obs])
print(arr)
print(np.argmin(arr, axis=1))
produces the same thing. It still has a list comprehension, but we are closer to 'source'.
[[ 32. 115. 22. 116. 162. 86. 161. 117.]
[ 106. 31. 142. 164. 92. 106. 45. 103.]
[ 44. 135. 94. 18. 94. 50. 87. 135.]
[ 11. 92. 57. 67. 79. 43. 118. 106.]
[ 40. 67. 126. 98. 50. 74. 75. 175.]
[ 78. 61. 120. 148. 102. 128. 67. 191.]
[ 51. 48. 57. 133. 125. 35. 110. 14.]
[ 47. 28. 93. 91. 63. 49. 32. 88.]
[ 61. 86. 23. 141. 159. 85. 146. 22.]
[ 131. 70. 155. 149. 129. 127. 44. 138.]
[ 97. 138. 87. 117. 223. 77. 130. 122.]
[ 151. 78. 211. 161. 131. 115. 46. 164.]
[ 13. 50. 31. 69. 59. 43. 80. 40.]
[ 131. 108. 157. 161. 207. 85. 102. 146.]
[ 39. 106. 67. 23. 61. 67. 70. 88.]
[ 54. 51. 74. 68. 42. 86. 35. 65.]]
[2 1 3 0 0 1 7 1 7 6 5 6 0 5 3 6]
With
print((new_obs[:,None,:] - known_obs[None,:,:]).shape)
I get a (16,8,5) array. So can I apply linalg.norm along the last axis?
This seems to do the trick
np.square(np.linalg.norm(diff, axis=-1))
So together:
diff = (new_obs[:,None,:] - known_obs[None,:,:])
dist = np.square(np.linalg.norm(diff, axis=-1))
idx = np.argmin(dist, axis=1)
print(idx)
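Putting the pieces together, a loop-free replacement for the original classify_batch might look like the sketch below (nn_classify_batch is a name introduced here, not from the question):
import numpy as np

def nn_classify_batch(new_obs, known_obs, labels):
    # Broadcast to a (n_new, n_known, n_features) array of differences
    diff = new_obs[:, None, :] - known_obs[None, :, :]
    # Squared Euclidean distance along the feature axis -> (n_new, n_known)
    dist = np.square(np.linalg.norm(diff, axis=-1))
    # Label of the nearest known observation, per new observation
    return labels[np.argmin(dist, axis=1)]

known_obs = np.random.randint(0, 10, 40).reshape(8, 5)
new_obs = np.random.randint(0, 10, 80).reshape(16, 5)
labels = np.random.randint(0, 2, 8)
print(nn_classify_batch(new_obs, known_obs, labels))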

Avoid memory errors when integrating large array with Numpy

I have a 601x350x200x146 numpy float64 array which, according to my calculations, takes about 22.3 GB of memory. The output of free -m tells me I have about 100 GB of free memory, so it fits fine. However, when integrating with
result = np.trapz(large_arr, axis=3)
I get a memory error. I understand that this is because of the intermediate arrays that numpy.trapz has to create to perform the integration. But I'm looking to see if there's a way around it, or at least a way to minimize the extra use of memory.
I have read about memory errors and know a couple of ways to avoid them: one is placing a gc.collect() call before the integration. I tried this and it didn't work.
The other is using in-place operators, e.g. writing arr *= a instead of arr = arr * a, which I can't really do here. So I don't know what else to try.
Does anyone know of a way to do this operation without raising a memory error?
You can reproduce the error with:
arr = np.ones((601,350,200,146), dtype=np.float64)
arr=np.trapz(arr, axis=3)
although you'll have to scale down the size to match your memory size.
numpy.trapz provides some convenience, but the actual calculation is very simple. To avoid large temporary arrays, just implement it yourself:
In [37]: x.shape
Out[37]: (2, 4, 4, 10)
Here's the result of numpy.trapz(x, axis=3):
In [38]: np.trapz(x, axis=3)
Out[38]:
array([[[ 43. , 48.5, 46.5, 67. ],
[ 35.5, 39.5, 52.5, 35. ],
[ 44.5, 47.5, 34.5, 39.5],
[ 54. , 40. , 46.5, 50.5]],
[[ 42. , 60. , 55.5, 51. ],
[ 51.5, 40. , 52. , 42.5],
[ 48.5, 43. , 32. , 36.5],
[ 42.5, 38. , 38. , 45. ]]])
Here's the calculation written to use no large intermediate arrays. (The slice x[:,:,:,1:-1] does not copy the data associated with the array.)
In [48]: 0.5*(x[:,:,:,0] + 2*x[:,:,:,1:-1].sum(axis=3) + x[:,:,:,-1])
Out[48]:
array([[[ 43. , 48.5, 46.5, 67. ],
[ 35.5, 39.5, 52.5, 35. ],
[ 44.5, 47.5, 34.5, 39.5],
[ 54. , 40. , 46.5, 50.5]],
[[ 42. , 60. , 55.5, 51. ],
[ 51.5, 40. , 52. , 42.5],
[ 48.5, 43. , 32. , 36.5],
[ 42.5, 38. , 38. , 45. ]]])
If x has shape (m, n, p, q), the few temporary arrays that are generated in that expression all have shape (m, n, p).
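A quick way to convince yourself the manual expression matches numpy.trapz on a small random array (the formula assumes the default unit spacing, dx=1):
import numpy as np

x = np.random.rand(2, 4, 4, 10)
manual = 0.5*(x[:,:,:,0] + 2*x[:,:,:,1:-1].sum(axis=3) + x[:,:,:,-1])
print(np.allclose(manual, np.trapz(x, axis=3)))  # True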

"m x n" dimensional gradient-style array in Python

I checked out
gradient descent using python and numpy
but it didn't solve my problem.
I'm trying to get familiar with image-processing and I want to generate a few test arrays to mess around with in Python.
Is there a method (like np.arange) to create an m x n array where the inner entries form some type of gradient?
Below is an example of a naive method for generating the desired output.
Excuse my generality with the term gradient; I'm using it in its simple meaning, a smooth transition in color.
#!/usr/bin/python
import numpy as np
import matplotlib.pyplot as plt

# Set up parameters
m = 15
n = 10
A_placeholder = np.zeros((m, n))
V_m = np.arange(0, m).astype(np.float32)
V_n = np.arange(0, n).astype(np.float32)

# Iterate through combinations
for i in range(m):
    m_i = V_m[i]
    for j in range(n):
        n_j = V_n[j]
        A_placeholder[i, j] = m_i * n_j  # Some combination

# Relabel
A_gradient = A_placeholder
A_placeholder = None

# Print data
print A_gradient
# [[  0.   0.   0.   0.   0.   0.   0.   0.   0.   0.]
#  [  0.   1.   2.   3.   4.   5.   6.   7.   8.   9.]
#  [  0.   2.   4.   6.   8.  10.  12.  14.  16.  18.]
#  [  0.   3.   6.   9.  12.  15.  18.  21.  24.  27.]
#  [  0.   4.   8.  12.  16.  20.  24.  28.  32.  36.]
#  [  0.   5.  10.  15.  20.  25.  30.  35.  40.  45.]
#  [  0.   6.  12.  18.  24.  30.  36.  42.  48.  54.]
#  [  0.   7.  14.  21.  28.  35.  42.  49.  56.  63.]
#  [  0.   8.  16.  24.  32.  40.  48.  56.  64.  72.]
#  [  0.   9.  18.  27.  36.  45.  54.  63.  72.  81.]
#  [  0.  10.  20.  30.  40.  50.  60.  70.  80.  90.]
#  [  0.  11.  22.  33.  44.  55.  66.  77.  88.  99.]
#  [  0.  12.  24.  36.  48.  60.  72.  84.  96. 108.]
#  [  0.  13.  26.  39.  52.  65.  78.  91. 104. 117.]
#  [  0.  14.  28.  42.  56.  70.  84.  98. 112. 126.]]

# Show image
plt.imshow(A_gradient)
plt.show()
I've tried np.gradient but it didn't give me the desired output.
#print np.gradient(np.array([V_m,V_n]))
#Traceback (most recent call last):
# File "Untitled.py", line 19, in <module>
# print np.gradient(np.array([V_m,V_n]))
# File "/Users/Mu/anaconda/lib/python2.7/site-packages/numpy/lib/function_base.py", line 1458, in gradient
# out[slice1] = (y[slice2] - y[slice3])
#ValueError: operands could not be broadcast together with shapes (10,) (15,)
A_placeholder[i,j] = m_i * n_j
Any operation like that can be expressed in numpy using broadcasting:
A = np.arange(m)[:, None] * np.arange(n)[None, :]
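For this particular product-of-indices pattern, np.outer computes exactly the same array:
import numpy as np

m, n = 15, 10
A_broadcast = np.arange(m)[:, None] * np.arange(n)[None, :]
A_outer = np.outer(np.arange(m), np.arange(n))
print(np.array_equal(A_broadcast, A_outer))  # True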

Getting all points of a given connected component rapidly

Scikit-Image has quite a few methods available for blob detection:
Laplacian of Gaussian (LoG)
Difference of Gaussian (DoG)
Determinant of Hessian (DoH)
All three return an array that contains a single point within the bounds of the found components:
>>> from skimage import data, feature
>>> img = data.coins()
>>> feature.blob_doh(img)
array([[ 121. , 271. , 30. ],
[ 123. , 44. , 23.55555556],
[ 123. , 205. , 20.33333333],
[ 124. , 336. , 20.33333333],
[ 126. , 101. , 20.33333333],
[ 126. , 153. , 20.33333333],
[ 156. , 302. , 30. ],
[ 185. , 348. , 30. ],
[ 192. , 212. , 23.55555556],
[ 193. , 275. , 23.55555556],
[ 195. , 100. , 23.55555556],
[ 197. , 44. , 20.33333333],
[ 197. , 153. , 20.33333333],
[ 260. , 173. , 30. ],
[ 262. , 243. , 23.55555556],
[ 265. , 113. , 23.55555556],
[ 270. , 363. , 30. ]])
I'd like to use that information to produce lists that contain the coordinates of all the points in a given component.
I could iterate through the whole image myself, starting from those seeds, and collect all the points in a dict keyed by the point provided by blob detection, but I imagine that would be rather slow unless I used Cython (I'm more than willing to be wrong about this, as I'm fairly new to Python). More to the point, I simply think there is probably a better way than doing it myself.
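For what it's worth, one common way to get every pixel of each connected component without a hand-written loop is scikit-image's labelling utilities. This is only a sketch: the Otsu threshold used to binarize the image is my assumption, since the blob detectors above don't provide a mask:
from skimage import data, filters, measure

img = data.coins()
# Binarize the image, then label its connected components
mask = img > filters.threshold_otsu(img)
labeled = measure.label(mask)
# region.coords is an (N, 2) array of all (row, col) points in that component
coords = {region.label: region.coords for region in measure.regionprops(labeled)}
print(len(coords), coords[1][:3])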
