I want to create a random array with a "norm" of 1 and a "mean" of 0. I can use numpy.random.normal() to get the "mean" I want, but how can I create an array such that numpy.linalg.norm(array) returns the number that I want?
Using numpy.random.normal with the size argument will give you an array whose values are drawn from a distribution with a mean of 0. The mean of the array itself will not be exactly 0, however (though it tends to be closer to 0 the larger the array is).
But you can easily fix that by subtracting the mean of the array.
Once you have this, you can change the norm of the array to 1 by dividing by its norm. This will not change the mean, because it is 0.
import numpy

def create(n):
    x = numpy.random.normal(size=n)
    x -= x.mean()                        # shift so the sample mean is exactly 0
    return x / numpy.linalg.norm(x)      # scale so the norm is exactly 1
Example
>>> a = create(10)
>>> a
array([-0.48299539, 0.06017975, 0.23788747, -0.31949065, 0.56126426,
-0.33117035, 0.40908645, 0.01169836, -0.1008337 , -0.0456262 ])
>>> a.mean()
-1.3183898417423733e-17 # not exactly 0 due to floating-point math
>>> numpy.linalg.norm(a)
1.0
Notice how for n=2 there are exactly 2 arrays satisfying these conditions: Those that contain both the positive and the negative square root of 1/2:
>>> for _ in range(5):
... print(create(2))
...
[-0.70710678 0.70710678]
[-0.70710678 0.70710678]
[-0.70710678 0.70710678]
[-0.70710678 0.70710678]
[ 0.70710678 -0.70710678]
Related
A correlation matrix is symmetric, meaning its upper- and lower-triangular elements are mirror images of each other; together these are the off-diagonal elements (as opposed to the diagonal elements, which are all equal to 1 in any correlation matrix, since any variable's correlation with itself is 1).
An off-diagonal element is unchanged when the i'th row and j'th column indices are swapped, i.e. the correlation of variables 1 and 2 (row 1, column 2) is the same as that of variables 2 and 1 (row 2, column 1). Therefore, we only need to re-calculate the lower-triangular elements and then copy them into the corresponding upper-triangular positions. For example,
import numpy as np
from numpy.random import randn
X = randn(20,3)
Rho = np.corrcoef(X.T) #correlation matrix
print(np.tril(Rho)) #lower off-diagonal of matrix Rho to re-calculate, then copy to other side
shows
array([[ 1. , 0. , 0. ],
[-0.03003281, 1. , 0. ],
[-0.02602238, 0.06137713, 1. ]])
What is the most efficient way to code an "i not-equal-to j" loop for the following sequence of steps:
re-calculate the lower off-diagonal elements of the symmetric matrix according to some apply function (to make it simple, we will just add +2 to each of these elements)
flip those same calculations onto its mirror image (the corresponding upper off-diagonals)
Also, replace the diagonal elements of the symmetric matrix with a vector filled with 10's (instead of 1's as found in the correlation matrix)
The aim is to generate a new matrix that is a re-calculation of the original.
Let us generate Rho first (note that I'm initializing the pseudo-random number generator in order to obtain the same Rho in different runs of the code):
In [526]: import numpy as np
In [527]: np.random.seed(0)
...: n = 3
...: X = np.random.randn(20, n)
...: Rho = np.corrcoef(X.T)
In [528]: Rho
Out[528]:
array([[1. , 0.03224462, 0.05021998],
[0.03224462, 1. , 0.15140358],
[0.05021998, 0.15140358, 1. ]])
Then you can use NumPy's tril_indices_from and advanced indexing to generate the new matrix:
In [548]: result = np.zeros_like(Rho)
In [549]: lrows, lcols = np.tril_indices_from(Rho, k=-1)
In [550]: result[lrows, lcols] = Rho[lrows, lcols] + 2
In [551]: result
Out[551]:
array([[0. , 0. , 0. ],
[2.03224462, 0. , 0. ],
[2.05021998, 2.15140358, 0. ]])
In [552]: result[lcols, lrows] = result[lrows, lcols]
In [553]: result
Out[553]:
array([[0. , 2.03224462, 2.05021998],
[2.03224462, 0. , 2.15140358],
[2.05021998, 2.15140358, 0. ]])
In [554]: result[np.arange(n), np.arange(n)] = 10
In [555]: result
Out[555]:
array([[10. , 2.03224462, 2.05021998],
[ 2.03224462, 10. , 2.15140358],
[ 2.05021998, 2.15140358, 10. ]])
What I'm trying to do is create two different arrays: the first array is just filled with zeros, and the second array is populated with random numbers. I would then like to add only certain elements of the random array to the array filled with zeros, chosen at random, so that the rest of its elements remain zero. I just added the code below as an example. I honestly don't know how to perform something like this and I would be very grateful for any help or suggestions! Thank you!
import numpy as np

shape = (6, 3)
empty_array = np.zeros(shape)
random_array = 0.1 * np.random.randn(*shape)   # randn takes the dimensions, not the array itself
result = np.add(empty_array, random_array)
You can use a binary mask with the density P:
P = 0.5
# Repeat the next two lines as needed
mask = np.random.binomial(1, P, size = empty_array.size)\
.reshape(shape).astype(bool)
empty_array[mask] += random_array[mask]
If you plan to add more random elements, you may want to re-generate the mask at each further iteration.
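For instance, a rough sketch of repeating the masked addition over several passes, drawing a fresh mask each time (the shapes and names follow the snippet above):
import numpy as np

shape = (6, 3)
P = 0.5
empty_array = np.zeros(shape)

for _ in range(3):
    random_array = 0.1 * np.random.randn(*shape)
    # draw a fresh Bernoulli(P) mask so different cells can be hit on each pass
    mask = np.random.binomial(1, P, size=empty_array.size).reshape(shape).astype(bool)
    empty_array[mask] += random_array[mask]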
If I understand your comments correctly, you want to place random numbers at random indices, the count being some percentage (threshold) of the whole array. You do not need to create a whole random array and use only a percentage of it; random number generation is usually costly at larger scales:
sz = shape[0] * shape[1]
# this is your threshold, e.g. 20%
threshold = 0.2
# create random numbers and random indices
random_array = np.random.rand(int(threshold * sz))
random_idx = np.random.randint(0, sz, int(threshold * sz))
# now you can add this random_array at those random indices of your desired array
empty_array.reshape(-1)[random_idx] += random_array
or another solution:
sz = shape[0] * shape[1]
# this is your threshold, e.g. 20%
threshold = 0.2
random_array = np.random.rand(int(threshold * sz))
# pad with enough zeros, shuffle randomly, and finally reshape
random_array.resize(sz)
np.random.shuffle(random_array)
# now you can add this random_array to any array of your choice
empty_array += random_array.reshape(shape)
sample output:
[[0. 0. 0. ]
[0. 0. 0. ]
[0. 0. 0.7397274 ]
[0. 0. 0. ]
[0. 0. 0.79541551]
[0.75684113 0. 0. ]]
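One caveat about the first approach: np.random.randint can return the same index more than once, and with fancy-index += a repeated index is only added once. If you want duplicates to accumulate, np.add.at is the unbuffered equivalent; a minimal sketch using the names from the first snippet:
np.add.at(empty_array.reshape(-1), random_idx, random_array)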
I have a function that takes in and outputs a 2D array. I want the output to be a constant (pi in this case) at every index where the input equals 0; otherwise I perform some maths on it. E.g.:
import numpy as np
import numpy.ma as ma
def my_func(x):
    mask = ma.masked_where(x == 0, x)
    # make an array of pi's the same size and shape as the input
    y = np.pi * np.ones(np.shape(x))
    # pseudo-code bit I can't figure out
    y.not_masked = y**2
    return y
my_array = [[0,1,2],[1,0,2],[1,2,0]]
result_array = my_func(my_array)
This should give me the following:
result_array = [[3.14, 1, 4],[1, 3.14, 4], [1, 4, 3.14]]
I.e. it has squared each element of the 2D list that doesn't equal zero, and replaced all the zeros with pi.
I need this because my function will include division, and I don't know the indexes of the zeros beforehand. I'm trying to convert a MATLAB tutorial from a textbook into Python, and this function is stumping me!
Thanks
Just use np.where() directly:
y = np.where(x, x**2, np.pi)
Example:
>>> x = np.asarray([[0,1,2],[1,0,2],[1,2,0]])
>>> y = np.where(x, x**2, np.pi)
>>> print(y)
[[ 3.14159265 1. 4. ]
[ 1. 3.14159265 4. ]
[ 1. 4. 3.14159265]]
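Since the real function will involve division, note that np.where evaluates both branches before selecting, so a zero denominator would still trigger a runtime warning; a sketch of two ways around that (using np.errstate, or computing only on the non-zero entries):
import numpy as np

x = np.asarray([[0, 1, 2], [1, 0, 2], [1, 2, 0]], dtype=float)

# option 1: silence the warning for the division that gets discarded anyway
with np.errstate(divide='ignore'):
    y = np.where(x != 0, 1.0 / x, np.pi)

# option 2: only divide where x is non-zero
y2 = np.full_like(x, np.pi)
nz = x != 0
y2[nz] = 1.0 / x[nz]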
Try this:
my_array = np.array([[0, 1, 2], [1, 0, 2], [1, 2, 0]]).astype(float)

def my_func(x):
    mask = x == 0
    x[mask] = np.pi
    x[~mask] = x[~mask]**2  # or some other operation on x...
    return x
Rather than using masked arrays, I would suggest using a boolean array to achieve what you want.
def my_func(x):
    # create a boolean matrix, a, that is True where x == 0
    # and False where x != 0
    a = x == 0
    x[a] = np.pi
    # use ~ to flip True and False so we can operate
    # on the non-zero values of the array
    x[~a] = x[~a]**2
    return x  # return the transformed array
my_array = np.array([[0.,1.,2.],[1.,0.,2.],[1.,2.,0.]])
result_array = my_func(my_array)
this gives the output:
array([[ 3.14159265, 1. , 4. ],
[ 1. , 3.14159265, 4. ],
[ 1. , 4. , 3.14159265]])
Notice that I passed a NumPy array to the function specifically; originally you passed a list, and that will cause problems when you attempt to do mathematical operations. Also notice that I defined the array with 1. rather than just 1, to make sure it is an array of floats rather than integers: if it were an array of integers, setting values equal to pi would truncate them to 3.
Perhaps it would also be good to add a piece to the function that checks the input argument's dtype, to verify it is a NumPy array (rather than a list or other object) containing floats, and to adjust accordingly if not; a rough sketch is shown after the edit note below.
EDIT:
Change to using ~a rather than invert(a) as per Scotty1's suggestion.
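A possible version of that input check, as a minimal sketch; it coerces the argument to a float NumPy array rather than rejecting lists outright:
import numpy as np

def my_func(x):
    x = np.asarray(x, dtype=float)  # accept lists as well, and force a float dtype
    a = x == 0
    x[a] = np.pi
    x[~a] = x[~a]**2
    return x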
Is there a numerically stable way to compute the softmax function below?
I am getting values that become NaNs in my neural network code.
np.exp(x) / np.sum(np.exp(x))
The softmax exp(x)/sum(exp(x)) is actually numerically well-behaved. It has only positive terms, so we needn't worry about loss of significance, and the denominator is at least as large as the numerator, so the result is guaranteed to fall between 0 and 1.
The only accident that might happen is over- or underflow in the exponentials. Overflow of a single element or underflow of all elements of x will render the output more or less useless.
But it is easy to guard against that by using the identity softmax(x) = softmax(x + c), which holds for any scalar c: subtracting max(x) from x leaves a vector with only non-positive entries, ruling out overflow, and at least one element equal to zero, ruling out a vanishing denominator (underflow in some but not all entries is harmless).
Footnote: theoretically, catastrophic accidents in the sum are possible, but you'd need a ridiculous number of terms. For example, even using 16-bit floats, which can only resolve 3 decimals (compared to the 15 decimals of a "normal" 64-bit float), we'd need between 2^1431 (~6 x 10^430) and 2^1432 terms to get a sum that is off by a factor of two.
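To see the failure mode concretely, here is a quick sketch (with made-up inputs) comparing the naive formula against the shifted version:
import numpy as np

x = np.array([1000.0, 1001.0, 1002.0])

naive = np.exp(x) / np.sum(np.exp(x))   # exp(1000) overflows to inf, so this yields [nan nan nan]

shifted = np.exp(x - np.max(x))         # largest exponent is exp(0) = 1, no overflow
stable = shifted / np.sum(shifted)      # [0.09003057 0.24472847 0.66524096]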
The softmax function is prone to two issues: overflow and underflow.
Overflow: occurs when very large numbers are approximated as infinity.
Underflow: occurs when very small numbers (near zero on the number line) are rounded to zero.
To combat these issues when doing softmax computation, a common trick is to shift the input vector by subtracting the maximum element in it from all elements. For the input vector x, define z such that:
z = x-max(x)
And then take the softmax of the new (stable) vector z
Example:
import numpy as np

def stable_softmax(x):
    z = x - max(x)
    numerator = np.exp(z)
    denominator = np.sum(numerator)
    softmax = numerator / denominator
    return softmax
# input vector
In [267]: vec = np.array([1, 2, 3, 4, 5])
In [268]: stable_softmax(vec)
Out[268]: array([ 0.01165623, 0.03168492, 0.08612854, 0.23412166, 0.63640865])
# input vector with really large number, prone to overflow issue
In [269]: vec = np.array([12345, 67890, 99999999])
In [270]: stable_softmax(vec)
Out[270]: array([ 0., 0., 1.])
In the above case, we safely avoided the overflow problem by using stable_softmax()
For more details, see the Numerical Computation chapter of the Deep Learning book.
Extending @kmario23's answer to support 1- or 2-dimensional NumPy arrays or lists. 2D tensors (assuming the first dimension is the batch dimension) are common if you're passing a batch of results through softmax:
import numpy as np

def stable_softmax(x):
    z = x - np.max(x, axis=-1, keepdims=True)
    numerator = np.exp(z)
    denominator = np.sum(numerator, axis=-1, keepdims=True)
    softmax = numerator / denominator
    return softmax

test1 = np.array([12345, 67890, 99999999])      # 1D numpy array
test2 = np.array([[12345, 67890, 99999999],     # 2D numpy array
                  [123, 678, 88888888]])
test3 = [12345, 67890, 999999999]               # 1D list
test4 = [[12345, 67890, 999999999]]             # 2D list

print(stable_softmax(test1))
print(stable_softmax(test2))
print(stable_softmax(test3))
print(stable_softmax(test4))
[0. 0. 1.]
[[0. 0. 1.]
[0. 0. 1.]]
[0. 0. 1.]
[[0. 0. 1.]]
There is nothing wrong with calculating the softmax function as you have it in your case. The problem seems to come from an exploding gradient or issues of that sort with your training method. Focus on those matters, either by "clipping values" or by "choosing the right initial distribution of weights".
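As a rough sketch of the "clipping values" suggestion (the threshold here is an arbitrary assumption), a gradient can be rescaled whenever its norm exceeds a cap before it is used in the update step:
import numpy as np

def clip_by_norm(grad, max_norm=5.0):
    # scale the gradient down if its L2 norm exceeds max_norm
    norm = np.linalg.norm(grad)
    if norm > max_norm:
        grad = grad * (max_norm / norm)
    return grad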
I want to write code that does image filtering. I use a simple 3x3 kernel and then the scipy.ndimage.filters.convolve() function. After filtering, the range of the values is -1.27 to 1.12. How do I normalize the data after filtering? Do I need to clip the values (set values less than zero to zero, and values greater than 1 to 1), or use linear normalization? Is it OK if values after filtering fall outside the range [0, 1]?
>>> import numpy as np
>>> x = np.random.randn(10)
>>> x
array([-0.15827641, -0.90237627, 0.74738448, 0.80802178, 0.48720684,
0.56213483, -0.34239788, 1.75621007, 0.63168393, 0.99192999])
You could clip values outside your range, although you would lose that information:
>>> np.clip(x,0,1)
array([ 0. , 0. , 0.74738448, 0.80802178, 0.48720684,
0.56213483, 0. , 1. , 0.63168393, 0.99192999])
To preserve the scaling, you can linearly renormalise into the range 0 to 1:
>>> (x - np.min(x))/(np.max(x) - np.min(x))
array([ 0.27988553, 0. , 0.6205406 , 0.64334869, 0.52267744,
0.55086084, 0.21063013, 1. , 0.57702102, 0.71252388])
Is it OK if values after filtering fall outside the range [0, 1]?
This really depends on your use case for the filtered image.
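To tie this back to the original filtering setup, here is a minimal sketch of convolving with a 3x3 kernel and then bringing the result back into [0, 1]; the image and kernel are just assumed examples:
import numpy as np
from scipy.ndimage import convolve

image = np.random.rand(8, 8)                    # assumed test image with values in [0, 1]
kernel = np.array([[ 0, -1,  0],
                   [-1,  5, -1],
                   [ 0, -1,  0]], dtype=float)  # simple sharpening kernel

filtered = convolve(image, kernel)

# either clip back into range (loses information outside [0, 1])...
clipped = np.clip(filtered, 0, 1)

# ...or rescale linearly, preserving relative contrast
rescaled = (filtered - filtered.min()) / (filtered.max() - filtered.min())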