Use a for loop to check each element in a matrix in Python

import numpy as np
A = np.arange(2, 42).reshape(5, 8)
B = np.arange(4, 68).reshape(8, 8)
C = np.dot(A, B)
How do I use a for loop to check whether each element in C is larger than 100, so that the output is True or False?
I have no idea how to do this, because C is a matrix, not a single number.
Can someone help me, please?

Do you want to return True if EVERY element of C is > 100, or do you want to create a matrix whose entries are True or False depending on whether the corresponding entries of C are above or below 100?
In both cases I would recommend not using for-loops. For case 1, you can try:
print(min(C.flatten()) < 100)
which prints False if all elements of C are bigger than 100, and True otherwise. (Note that .flatten just rewrites the 2D array as a temporary 1D one; the shape of C stays in its original state.)
For case 2, you can just type
print(C < 100)
and it will print a matrix with entries True or False, based on whether the corresponding entry of C is above or below 100.
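For completeness, if you want a single True/False answer to case 1 directly (True when every element exceeds 100), NumPy's all() does it in one step on the boolean matrix:

```python
import numpy as np

A = np.arange(2, 42).reshape(5, 8)
B = np.arange(4, 68).reshape(8, 8)
C = np.dot(A, B)

# (C > 100) is a boolean matrix; .all() reduces it to one True/False
print((C > 100).all())  # True: every entry of this particular C exceeds 100
```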

if you want to use for-loops: First note that the shape of C is (5,8), meaning that C is a 2D object. Now, in order to access all entries of C via for-loops, you can write something like this:
import numpy as np
A = np.arange(2, 42).reshape(5, 8)
B = np.arange(4, 68).reshape(8, 8)
C = np.dot(A, B)
D = np.zeros(C.shape, dtype=bool)
for i in range(C.shape[0]):      # i.e. range(5)
    for j in range(C.shape[1]):  # i.e. range(8)
        if C[i, j] > 100:
            D[i, j] = False
        else:
            D[i, j] = True
print(D)
where I introduced a new matrix D of the same shape as C, which we fill consecutively with True or False, based on whether C is below or above 100 at that entry. This code is equivalent to, but slower and more complicated than, the one I proposed above.
I hope this answers your question sufficiently. If you have any more questions on details etc., don't hesitate to ask ;).

You can also use NumPy's boolean-mask filtering, which is concise and fast:
import numpy as np
filter_arr = C > 100
newarr = C[filter_arr]
print(newarr)
Note that this returns the elements of C that are larger than 100, rather than a True/False matrix.


Built-in binary search algorithm in numpy that is like np.searchsorted?

I have a main numpy array a and I have another numpy array b. What I want to do is go through each element of b and check if that element exists in a. Keep in mind that both a and b are pretty massive, so I would like to avoid O(N) search times.
I know np.searchsorted(a,b) exists, but this provides an index at which I need to place b. This does not tell me if an element of b is present in a right off the bat.
My question is, is there a binary search algorithm built into numpy that simply reports True or False if an element from b exists in a? I am aware that I can write one but if there is a vectorized that is readily available, I could save some time.
Any advice would be appreciated!
Once you have completed the sorted search, you can check whether the elements at those indices are equal to the elements of b:
import numpy
a = numpy.array([1, 2, 3, 4, 7])
b = numpy.array([1, 4, 5, 7])
x = numpy.searchsorted(a, b)
boolean_array = a[x] == b
searchsorted with the default side='left' ensures a[i-1] < v <= a[i], so if a[x] is equal to the corresponding element of b, it gives the match you want. (One caveat: if an element of b is larger than every element of a, searchsorted returns len(a), so clip the indices with numpy.minimum(x, len(a) - 1) before indexing to stay in bounds.)
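For what it's worth, NumPy also ships np.isin, which answers the membership question directly and, depending on input sizes, uses a sort-based approach internally rather than a linear scan per element:

```python
import numpy as np

a = np.array([1, 2, 3, 4, 7])
b = np.array([1, 4, 5, 7])

# True where the corresponding element of b exists anywhere in a
print(np.isin(b, a))  # [ True  True False  True]
```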

Strange numpy divide behaviour for scalars

I have been trying to upgrade a library which has a bunch of geometric operations for scalars so they will work with numpy arrays as well. While doing this I noticed some strange behaviour with numpy divide.
The original code checks a normalised difference between two variables if neither variable is zero; swapping across to numpy, this ended up looking something like:
import numpy as np
a = np.array([0, 1, 2, 3, 4])
b = np.array([1, 2, 3, 0, 4])
o = np.zeros(len(a))
o = np.divide(np.subtract(a, b), b, out=o, where=np.logical_and(a != 0, b != 0))
print(f'First implementation: {o}')
where I passed in an output buffer initialised to zero for the instances that could not be calculated; this returns:
First implementation: [ 0. -0.5 -0.33333333 0. 0. ]
I had to slightly modify this for scalars, as out requires an array, but it seemed fine.
a = 0
b = 4
o = None if np.isscalar(a) else np.zeros(len(a))
o = np.divide(np.subtract(a, b), b, out=o, where=np.logical_and(b != 0, a != 0))
print(f'Modified out for scalar: {o}')
returns
Modified out for scalar: 0.0
Then I ran this through some test functions and found that a lot of them failed. Digging into this, I found that the first time I call divide on a scalar with where set to False, the function returns zero; but if I call it again, the second time it returns something unpredictable.
a = 0
b = 4
print(f'First divide: {np.divide(b, a, where=False)}')
print(f'Second divide: {np.divide(b, a, where=False)}')
returns
First divide: 0.0
Second divide: 4.0
Looking at the documentation, it says "locations within it where the condition is False will remain uninitialized", so I guess numpy has some internal buffer which is initially set to zero, and subsequently it ends up carrying over an earlier intermediate value.
I am struggling to see how I can use divide with or without a where clause: if I use where I get unpredictable output, and if I don't I can't protect against divide by zero. Am I missing something, or do I just need a different code path in these cases? I realise I'm halfway to a different code path already with the out variable.
I would be grateful for any advice.
It looks like a bug to me. But I think you'd want to short-circuit the calls to ufuncs in the scalar case for performance reasons anyway, so it's a question of keeping it from getting too messy. Since either a or b could be a scalar, you need to check them both. Put that check into a function that conditionally returns an output array or None, and you can do:
def scalar_test_np_zeros(a, b):
    """Return np.zeros for the length of the arguments unless both
    arguments are scalar, then None."""
    if (a_is := np.isscalar(a)) and np.isscalar(b):
        return None
    else:
        return np.zeros(len(b) if a_is else len(a))

a = 0
b = 4
if (o := scalar_test_np_zeros(a, b)) is None:
    o = (a - b) / b if a and b else 0.0
else:
    np.divide(np.subtract(a, b), b, out=o,
              where=np.logical_and(b != 0, a != 0))
The scalar test would be useful in other code with similar problems.
For what it's worth, in case it helps anyone: I have come to the conclusion that I need to wrap np.divide to use it safely in functions which can take both arrays and scalars. This is my wrapping function:
import numpy as np

def divide_where(a, b, where, out=None, fill=0):
    """Wrap numpy divide to safely handle the where clause for both arrays and scalars.
    - a: dividend array or scalar
    - b: divisor array or scalar
    - where: locations where a/b will be computed
    - out: location the result is written to; if None, an output array is created using the fill value
    - fill: fill value; returned directly for scalar inputs with where False, and used to create the output array when out is not set
    """
    if (a_is_scalar := np.isscalar(a)) and np.isscalar(b):
        return fill if not where else a / b
    if out is None:
        # dtype=float so true division can be written into the buffer
        out = np.full_like(b if a_is_scalar else a, fill, dtype=float)
    return np.divide(a, b, out=out, where=where)
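A quick sanity check of this wrapper (its definition is restated here so the snippet runs on its own) on the arrays and scalars from the question:

```python
import numpy as np

def divide_where(a, b, where, out=None, fill=0):
    """Safe masked division for both arrays and scalars
    (the wrapper described above, restated so this demo is self-contained)."""
    if (a_is_scalar := np.isscalar(a)) and np.isscalar(b):
        return fill if not where else a / b
    if out is None:
        out = np.full_like(b if a_is_scalar else a, fill, dtype=float)
    return np.divide(a, b, out=out, where=where)

a = np.array([0, 1, 2, 3, 4])
b = np.array([1, 2, 3, 0, 4])
print(divide_where(np.subtract(a, b), b, where=(a != 0) & (b != 0)))
# [ 0.         -0.5        -0.33333333  0.          0.        ]

# scalar inputs short-circuit, so no stale internal buffer is involved
print(divide_where(0 - 4, 4, where=False))  # 0
print(divide_where(0 - 4, 4, where=True))   # -1.0
```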

How to return a numpy array with values where the corresponding elements of 2 arrays are both greater than 0

I want the first array to keep its values only where the corresponding elements of both arrays are greater than zero, and otherwise be zero. I'm not really sure how to frame the question; hopefully the expected output provides better insight.
I tried playing around with np.where, but I can't seem to make it work when 2 arrays are provided.
a = np.array([0,2,1,0,4])
b = np.array([1,1,3,4,0])
# Expected Output
a = ([0,2,1,0,0])
The zip function, which takes elements of two arrays side by side, is useful here. You don't necessarily need an np/numpy function.
import numpy as np
a = np.array([0,2,1,0,4])
b = np.array([1,1,3,4,0])
c = np.array([x if x * y > 0 else 0 for x,y in zip(a, b)])
print(c)
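That said, a fully vectorized equivalent with np.where avoids the Python-level loop entirely (assuming, as in the expected output, that "greater than zero" is the actual condition rather than the x * y > 0 product trick):

```python
import numpy as np

a = np.array([0, 2, 1, 0, 4])
b = np.array([1, 1, 3, 4, 0])

# keep a's value where both entries are positive, else 0
c = np.where((a > 0) & (b > 0), a, 0)
print(c)  # [0 2 1 0 0]
```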

Evaluating a function using numpy

What is the significance of the return part when evaluating functions? Why is this necessary?
Your assumption is right: dfdx[0] is indeed the first value in that array, so according to your code that would correspond to evaluating the derivative at x=-1.0.
To know the correct index where x is equal to 0, you will have to look for it in the x array.
One way to find it is the following, where we use argmin to find the index at which |x-0| is minimal (essentially where x=0, but float arithmetic requires taking some precautions):
index0 = np.argmin(np.abs(x - 0))
We then get what we want, dfdx at the index where x is 0:
print(dfdx[index0])
Another way, less robust with respect to float-arithmetic trickery, is the following:
# make a boolean array that is True where x is zero and False everywhere else
bool_array = (x == 0)
# NumPy allows using a boolean array to index an array;
# doing so gets you all the values of dfdx where bool_array is True,
# which in our case will hopefully give us dfdx where x == 0
print(dfdx[bool_array])
# the same thing as a one-liner
print(dfdx[x == 0])
You give the answer yourself: x[0] is -1.0, and you want the value at the middle of the array. np.linspace is the right function to build such a series of values:
import math
import numpy as np

def f1(x):
    g = np.sin(math.pi * np.exp(-x))
    return g

n = 1001  # odd!
x = np.linspace(-1, 1, n)  # x[n//2] is 0
f1x = f1(x)
df1 = np.diff(f1(x), 1)
dx = np.diff(x)
# analytic derivative; [:-1] discards the last element to match df1/dx
df1dx = (-math.pi * np.exp(-x) * np.cos(math.pi * np.exp(-x)))[:-1]
# In [3]: np.allclose(df1/dx, df1dx, atol=dx[0])
# Out[3]: True
As another tip, numpy arrays are more efficient and more readable when used without loops.
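For what it's worth, np.gradient computes centred differences over the whole array in one call, which sidesteps the off-by-one bookkeeping that np.diff requires (a sketch using the same function as above, restated here):

```python
import numpy as np

x = np.linspace(-1, 1, 1001)
f = np.sin(np.pi * np.exp(-x))

dfdx = np.gradient(f, x)  # centred differences, same length as x

# derivative at x = 0; analytically -pi*cos(pi) = pi
print(dfdx[np.argmin(np.abs(x))])
```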

python printing a generator list after vectorization?

I am new with vectorization and generators. So far I have created the following function:
import numpy as np

def ismember(a, b):
    for i in a:
        if len(np.where(b == i)[0]) == 0:
            lv_var = 0
        else:
            lv_var = int(np.where(b == i)[0][0])
        yield lv_var

vect = np.vectorize(ismember)
A = np.arange(700000)
B = np.arange(700000)
lv_result = vect(A, B)
When I try to cast lv_result as a list or loop through the resulting numpy array, I get a list of generator objects. I need to somehow get the actual result. How do I print the actual results from this function? Calling next() on the generator doesn't seem to do the job.
Could someone tell me what I am doing wrong, or how I could reconfigure the code to achieve the end goal?
---------------------------------------------------
OK, so I understand the vectorize part now (thanks Viet Nguyen for the example).
I was also able to print the generator object's results. The code has been modified; please see below.
For the generator part:
What I am trying to do is mimic a MATLAB function called ismember (the one formatted as [Lia,Locb] = ismember(A,B)). I am trying to get the Locb part only.
From MATLAB: Locb contains the lowest index in B for each value in A that is a member of B. The output array, Locb, contains 0 wherever A is not a member of B.
One of the main problems is that I need to perform this operation as efficiently as possible. For testing I have two arrays of 700k elements. Creating a generator and going through its values doesn't seem to improve performance.
To print the generator, I have created the function f() below.
import numpy as np

def ismember(a, b):
    for i in a:
        index = np.where(b == i)[0]
        if len(index) == 0:
            yield 0
        else:
            yield index[0]  # the lowest index in b

def f(A, gen_obj):
    my_array = np.arange(len(A))
    for i in my_array:
        my_array[i] = next(gen_obj)
    return my_array

A = np.arange(700000)
B = np.arange(700000)

gen_obj = ismember(A, B)
f(A, gen_obj)

print('done')
Note what happens if we try the above code with smaller arrays. Let's say:
A = np.array([3,4,4,3,6])
B = np.array([2,5,2,6,3])
The result will be the array [4 0 0 4 3].
Just like MATLAB's function, the goal is to get the lowest index in B for each value in A that is a member of B; the output array, Locb, contains 0 wherever A is not a member of B.
NumPy's intersection function doesn't help me achieve the goal, and the returned array needs to stay the same size as array A.
So far this process takes forever (for arrays of 700k elements), and I haven't been able to find the best solution yet. Any input on how I could reconfigure the code to achieve the end goal with the best performance would be much appreciated.
Optimization Problem solved in:
python-run-generator-using-multiple-cores-for-optimization
I believe you've misunderstood the inputs to a numpy.vectorize function. The "vectorized" function operates on the arrays on a per-element basis (see the numpy.vectorize reference). Your function ismember presumes that the inputs a and b are arrays. Instead, think of the function as something you would use with the built-in map().
import numpy as np

def mask(a, b):
    return 1 if a == b else 0

a = np.array([1, 2, 3, 4])
b = np.array([1, 3, 4, 5])
maskv = np.vectorize(mask)
maskv(a, b)
# array([1, 0, 0, 0])
Also, if I'm understanding your intention correctly, NumPy comes with an intersection function.
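Since the asker's goal is the Locb lookup at scale, here is one vectorized sketch using a stable argsort plus np.searchsorted (O((N+M) log N) overall), assuming the 0-for-missing convention from the question (which is ambiguous when a genuine match sits at index 0):

```python
import numpy as np

def ismember_locb(A, B):
    """For each value in A, return the lowest index in B where it occurs,
    or 0 where it is not a member of B (0-based analogue of MATLAB's Locb)."""
    order = np.argsort(B, kind="stable")  # stable: equal values keep original order
    Bs = B[order]                         # B sorted ascending
    pos = np.searchsorted(Bs, A)          # leftmost insertion points
    pos = np.minimum(pos, len(B) - 1)     # clip to avoid out-of-bounds lookups
    found = Bs[pos] == A                  # True where A's value actually occurs in B
    return np.where(found, order[pos], 0)

A = np.array([3, 4, 4, 3, 6])
B = np.array([2, 5, 2, 6, 3])
print(ismember_locb(A, B))  # [4 0 0 4 3]
```

Because searchsorted with side='left' lands on the first equal element of the stably sorted copy, order[pos] recovers the lowest original index among duplicates in B.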
