Making an array from multiple elements from different arrays - python

I want to make a new array out of different numbers from each array. This is an example:
import numpy as np
a = [[0,-1,-2,-3,-4,-5,-6,-7,-8,-9,-10],[0,1,2,3,4,5,6,7,8,9,10]]
b = [[0,-1,-2,-3,-4,-5,-6,-7,-8,-9,-10],[0,1,2,3,4,5,6,7,8,9,10]]
c = [[0,-1,-2,-3,-4,-5,-6,-7,-8,-9,-10],[0,1,2,3,4,5,6,7,8,9,10]]
d = []
for c in range(0, 2):
    d.append([])
    for s in range(0, 10):
        d[c] = np.concatenate((a[c][s], b[c][s], c[c][s]))
print(d)
When I print 'd', it gives me a TypeError: 'int' object is not subscriptable.
Is this due to the concatenate function, or can I use stack instead?
I want the outcome to be something like:
d[0][0] = [0, 0, 0]
taking the first term from each array. d[0][0] indexes a file and a row; that's why I want this format.

NumPy is an incredibly powerful library, so I would recommend using its array operations to manipulate your arrays before reaching for for loops. You should look up what NumPy axes and shapes mean.
The array d that you want is 3D, but the arrays a, b and c are 2D. Therefore we will first expand the dimensions of the three arrays; then we can easily concatenate them along this new dimension.
The following code achieves what you want:
import numpy as np
# First convert the lists themselves to numpy arrays.
a = np.array([[0, -1, -2, -3, -4, -5, -6, -7, -8, -9, -10],
              [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10]])  # Shape: (2, 11)
b = np.array([[0, -1, -2, -3, -4, -5, -6, -7, -8, -9, -10],
              [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10]])  # Shape: (2, 11)
c = np.array([[0, -1, -2, -3, -4, -5, -6, -7, -8, -9, -10],
              [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10]])  # Shape: (2, 11)
# Print the shape of the arrays
print(a.shape, b.shape, c.shape)
# Add an additional dimension to the three arrays along a new axis.
# Axes 0 and 1 already exist, so we create the new one as axis 2.
a_ = np.expand_dims(a, axis=2)  # Shape: (2, 11, 1)
b_ = np.expand_dims(b, axis=2)  # Shape: (2, 11, 1)
c_ = np.expand_dims(c, axis=2)  # Shape: (2, 11, 1)
# Print the shape of the arrays
print(a_.shape, b_.shape, c_.shape)
# Concatenate all three arrays along the last axis, i.e. axis 2.
d = np.concatenate((a_, b_, c_), axis=2)  # Shape: (2, 11, 3)
# Print d[0][0] to check that it is [0, 0, 0]
print(d[0][0])
You should print the individual arrays a, a_ and d to check what kind of transformations are taking place.
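Since the question also asks about stack: np.stack can produce the same result in one step, without the manual expand_dims. A minimal sketch, reusing the arrays defined above:
# np.stack joins the three (2, 11) arrays along a new last axis.
d = np.stack((a, b, c), axis=2)  # Shape: (2, 11, 3)
print(d[0][0])  # [0 0 0]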

Related

Find index of n biggest values [duplicate]

NumPy proposes a way to get the index of the maximum value of an array via np.argmax.
I would like a similar thing, but returning the indexes of the N maximum values.
For instance, if I have an array, [1, 3, 2, 4, 5], then nargmax(array, n=3) would return the indices [4, 3, 1] which correspond to the elements [5, 4, 3].
Newer NumPy versions (1.8 and up) have a function called argpartition for this. To get the indices of the four largest elements, do
>>> a = np.array([9, 4, 4, 3, 3, 9, 0, 4, 6, 0])
>>> a
array([9, 4, 4, 3, 3, 9, 0, 4, 6, 0])
>>> ind = np.argpartition(a, -4)[-4:]
>>> ind
array([1, 5, 8, 0])
>>> top4 = a[ind]
>>> top4
array([4, 9, 6, 9])
Unlike argsort, this function runs in linear time in the worst case, but the returned indices are not sorted, as can be seen from the result of evaluating a[ind]. If you need that too, sort them afterwards:
>>> ind[np.argsort(a[ind])]
array([1, 8, 5, 0])
To get the top-k elements in sorted order in this way takes O(n + k log k) time.
The simplest I've been able to come up with is:
>>> import numpy as np
>>> arr = np.array([1, 3, 2, 4, 5])
>>> arr.argsort()[-3:][::-1]
array([4, 3, 1])
This involves a complete sort of the array. I wonder if numpy provides a built-in way to do a partial sort; so far I haven't been able to find one.
If this solution turns out to be too slow (especially for small n), it may be worth looking at coding something up in Cython.
Simpler yet:
idx = (-arr).argsort()[:n]
where n is the number of maximum values.
Use:
>>> import heapq
>>> import numpy
>>> a = numpy.array([1, 3, 2, 4, 5])
>>> heapq.nlargest(3, range(len(a)), a.take)
[4, 3, 1]
For regular Python lists:
>>> a = [1, 3, 2, 4, 5]
>>> heapq.nlargest(3, range(len(a)), a.__getitem__)
[4, 3, 1]
If you use Python 2, use xrange instead of range.
Source: heapq — Heap queue algorithm
If you happen to be working with a multidimensional array then you'll need to flatten and unravel the indices:
def largest_indices(ary, n):
    """Returns the n largest indices from a numpy array."""
    flat = ary.flatten()
    indices = np.argpartition(flat, -n)[-n:]
    indices = indices[np.argsort(-flat[indices])]
    return np.unravel_index(indices, ary.shape)
For example:
>>> xs = np.sin(np.arange(9)).reshape((3, 3))
>>> xs
array([[ 0.        ,  0.84147098,  0.90929743],
       [ 0.14112001, -0.7568025 , -0.95892427],
       [-0.2794155 ,  0.6569866 ,  0.98935825]])
>>> largest_indices(xs, 3)
(array([2, 0, 0]), array([2, 2, 1]))
>>> xs[largest_indices(xs, 3)]
array([ 0.98935825, 0.90929743, 0.84147098])
Three Answers Compared For Coding Ease And Speed
Speed was important for my needs, so I tested three answers to this question.
Code from those three answers was modified as needed for my specific case.
I then compared the speed of each method.
Coding-wise:
off99555's answer was the most elegant, but it was the slowest.
NPE's answer was the next most elegant and adequately fast for my needs.
Fred Foo's answer required the most refactoring for my needs but was the fastest. I went with this answer, because even though it took more work, it was not too bad and had significant speed advantages.
Complete Code for Test and Comparisons
import numpy as np
import time
import random
import sys
from operator import itemgetter
from heapq import nlargest
''' Fake Data Setup '''
a1 = list(range(1000000))
random.shuffle(a1)
a1 = np.array(a1)
''' ################################################ '''
''' NPE's Answer Modified A Bit For My Case '''
t0 = time.time()
indices = np.flip(np.argsort(a1))[:5]
results = []
for index in indices:
    results.append((index, a1[index]))
t1 = time.time()
print("NPE's Answer:")
print(results)
print(t1 - t0)
print()
''' Fred Foos Answer Modified A Bit For My Case'''
t0 = time.time()
indices = np.argpartition(a1, -6)[-5:]
results = []
for index in indices:
    results.append((a1[index], index))
results.sort(reverse=True)
results = [(b, a) for a, b in results]
t1 = time.time()
print("Fred Foo's Answer:")
print(results)
print(t1 - t0)
print()
''' off99555's Answer - No Modification Needed For My Needs '''
t0 = time.time()
result = nlargest(5, enumerate(a1), itemgetter(1))
t1 = time.time()
print("off99555's Answer:")
print(result)
print(t1 - t0)
Output with Speed Reports
NPE's Answer:
[(631934, 999999), (788104, 999998), (413003, 999997), (536514, 999996), (81029, 999995)]
0.1349949836730957
Fred Foo's Answer:
[(631934, 999999), (788104, 999998), (413003, 999997), (536514, 999996), (81029, 999995)]
0.011161565780639648
off99555's Answer:
[(631934, 999999), (788104, 999998), (413003, 999997), (536514, 999996), (81029, 999995)]
0.439760684967041
If you don't care about the order of the K largest elements, you can use argpartition, which should perform better than a full sort through argsort.
K = 4 # We want the indices of the four largest values
a = np.array([0, 8, 0, 4, 5, 8, 8, 0, 4, 2])
np.argpartition(a,-K)[-K:]
array([4, 1, 5, 6])
Credits go to this question.
I ran a few tests and it looks like argpartition outperforms argsort as the size of the array and the value of K increase.
For multidimensional arrays you can use the axis keyword in order to apply the partitioning along the expected axis.
# For a 2D array
indices = np.argpartition(arr, -N, axis=1)[:, -N:]
And for grabbing the items:
x = arr.shape[0]
arr[np.repeat(np.arange(x), N), indices.ravel()].reshape(x, N)
But note that this won't return a sorted result. In that case you can use np.argsort() along the intended axis:
indices = np.argsort(arr, axis=1)[:, -N:]
# Result
x = arr.shape[0]
arr[np.repeat(np.arange(x), N), indices.ravel()].reshape(x, N)
Here is an example:
In [42]: a = np.random.randint(0, 20, (10, 10))
In [44]: a
Out[44]:
array([[ 7, 11, 12, 0, 2, 3, 4, 10, 6, 10],
[16, 16, 4, 3, 18, 5, 10, 4, 14, 9],
[ 2, 9, 15, 12, 18, 3, 13, 11, 5, 10],
[14, 0, 9, 11, 1, 4, 9, 19, 18, 12],
[ 0, 10, 5, 15, 9, 18, 5, 2, 16, 19],
[14, 19, 3, 11, 13, 11, 13, 11, 1, 14],
[ 7, 15, 18, 6, 5, 13, 1, 7, 9, 19],
[11, 17, 11, 16, 14, 3, 16, 1, 12, 19],
[ 2, 4, 14, 8, 6, 9, 14, 9, 1, 5],
[ 1, 10, 15, 0, 1, 9, 18, 2, 2, 12]])
In [45]: np.argpartition(a, np.argmin(a, axis=0))[:, 1:] # 1 is because the first item is the minimum one.
Out[45]:
array([[4, 5, 6, 8, 0, 7, 9, 1, 2],
[2, 7, 5, 9, 6, 8, 1, 0, 4],
[5, 8, 1, 9, 7, 3, 6, 2, 4],
[4, 5, 2, 6, 3, 9, 0, 8, 7],
[7, 2, 6, 4, 1, 3, 8, 5, 9],
[2, 3, 5, 7, 6, 4, 0, 9, 1],
[4, 3, 0, 7, 8, 5, 1, 2, 9],
[5, 2, 0, 8, 4, 6, 3, 1, 9],
[0, 1, 9, 4, 3, 7, 5, 2, 6],
[0, 4, 7, 8, 5, 1, 9, 2, 6]])
In [46]: np.argpartition(a, np.argmin(a, axis=0))[:, -3:]
Out[46]:
array([[9, 1, 2],
[1, 0, 4],
[6, 2, 4],
[0, 8, 7],
[8, 5, 9],
[0, 9, 1],
[1, 2, 9],
[3, 1, 9],
[5, 2, 6],
[9, 2, 6]])
In [89]: a[np.repeat(np.arange(x), 3), ind.ravel()].reshape(x, 3)
Out[89]:
array([[10, 11, 12],
[16, 16, 18],
[13, 15, 18],
[14, 18, 19],
[16, 18, 19],
[14, 14, 19],
[15, 18, 19],
[16, 17, 19],
[ 9, 14, 14],
[12, 15, 18]])
np.argpartition only returns the k largest indices, performs a partial sort, and is faster than np.argsort (which performs a full sort) when the array is quite large. But the returned indices are NOT in ascending/descending order. Let's see an example:
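For instance (a small sketch; the exact order of the indices argpartition returns is implementation-dependent):
>>> import numpy as np
>>> arr = np.array([2, 9, 8, 3, 0, 2, 8, 3, 1, 9, 5])
>>> ind = np.argpartition(arr, -3)[-3:]
>>> np.sort(arr[ind])  # the three largest values are all present...
array([8, 9, 9])
>>> # ...but ind itself is in no particular order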
We can see that if you want strictly ordered top k indices, np.argpartition won't return what you want.
Apart from doing a sort manually after np.argpartition, my solution is to use PyTorch: torch.topk, a tool for neural network construction, provides NumPy-like APIs with both CPU and GPU support. It's as fast as NumPy with MKL, and offers a GPU boost if you need large matrix/vector calculations.
The strict ascending/descending top-k indices code would be:
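A minimal sketch, assuming PyTorch is installed:
import numpy as np
import torch

arr = np.array([2, 9, 8, 3, 0, 2, 8, 3, 1, 9, 5])
# torch.topk returns (values, indices), both sorted in descending order;
# pass largest=False to get the k smallest instead
values, indices = torch.topk(torch.from_numpy(arr), k=3)
print(values)  # tensor([9, 9, 8])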
Note that torch.topk accepts a torch tensor, and returns both the top k values and the top k indices, each of type torch.Tensor. As with np, torch.topk also accepts a dim argument (NumPy's axis) so that you can handle multi-dimensional arrays/tensors.
This will be faster than a full sort depending on the size of your original array and the size of your selection:
>>> A = np.random.randint(0,10,10)
>>> A
array([5, 1, 5, 5, 2, 3, 2, 4, 1, 0])
>>> B = np.zeros(3, int)
>>> for i in range(3):  # xrange on Python 2
... idx = np.argmax(A)
... B[i]=idx; A[idx]=0 #something smaller than A.min()
...
>>> B
array([0, 2, 3])
It, of course, involves tampering with your original array, which you could fix (if needed) by making a copy or by putting back the original values afterwards, whichever is cheaper for your use case.
Use:
from operator import itemgetter
from heapq import nlargest
result = nlargest(N, enumerate(your_list), itemgetter(1))
Now the result list would contain N tuples (index, value) where value is maximized.
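For example, with the array from the question:
>>> from heapq import nlargest
>>> from operator import itemgetter
>>> nlargest(3, enumerate([1, 3, 2, 4, 5]), itemgetter(1))
[(4, 5), (3, 4), (1, 3)]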
Use:
def max_indices(arr, k):
    '''
    Returns the indices of the k first largest elements of arr
    (in descending order in values)
    '''
    assert k <= arr.size, 'k should be smaller or equal to the array size'
    arr_ = arr.astype(float)  # make a copy of arr
    max_idxs = []
    for _ in range(k):
        max_element = np.max(arr_)
        if np.isinf(max_element):
            break
        else:
            idx = np.where(arr_ == max_element)
            max_idxs.append(idx)
            arr_[idx] = -np.inf
    return max_idxs
It also works with 2D arrays. For example,
In [0]: A = np.array([[ 0.51845014, 0.72528114],
                      [ 0.88421561, 0.18798661],
                      [ 0.89832036, 0.19448609],
                      [ 0.89832036, 0.19448609]])
In [1]: max_indices(A, 8)
Out[1]:
[(array([2, 3], dtype=int64), array([0, 0], dtype=int64)),
(array([1], dtype=int64), array([0], dtype=int64)),
(array([0], dtype=int64), array([1], dtype=int64)),
(array([0], dtype=int64), array([0], dtype=int64)),
(array([2, 3], dtype=int64), array([1, 1], dtype=int64)),
(array([1], dtype=int64), array([1], dtype=int64))]
In [2]: A[max_indices(A, 8)[0]][0]
Out[2]: array([ 0.89832036])
I found it most intuitive to use np.unique.
The idea is that, with return_inverse=True, unique returns for each input value the index of its unique value. Then, from the index of the maximum unique value and those inverse indices, the positions of the original values can be recovered.
multi_max = [1, 1, 2, 2, 4, 0, 0, 4]
uniques, idx = np.unique(multi_max, return_inverse=True)
print(np.squeeze(np.argwhere(idx == np.argmax(uniques))))
>> [4 7]
The following is a very easy way to see the maximum elements and their positions. Here axis is the dimension: axis=0 means the column-wise maximum and axis=1 the row-wise maximum for the 2D case. For higher dimensions it depends on your needs.
M = np.random.random((3, 4))
print(M)
print(M.max(axis=1), M.argmax(axis=1))
Here's a more complicated way that increases n if the nth value has ties:
def get_top_n_plus_ties(arr, n):
    sorted_args = np.argsort(-arr)
    thresh = arr[sorted_args[n]]
    n_ = np.sum(arr >= thresh)
    return sorted_args[:n_]

>>> get_top_n_plus_ties(np.array([2, 9, 8, 3, 0, 2, 8, 3, 1, 9, 5]), 3)
array([1, 9, 2, 6])
I think the most time-efficient way is to iterate through the array manually and keep a k-size min-heap, as other people have mentioned.
I also came up with a brute force approach:
top_k_index_list = []
for i in range(k):
    top_k_index_list.append(np.argmax(my_array))
    my_array[top_k_index_list[-1]] = -float('inf')
Set the largest element to a large negative value after you use argmax to get its index; the next call of argmax will then return the second largest element. You can log the original values of these elements and recover them later if you want.
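A sketch of that bookkeeping, assuming a float array so that -inf can serve as the sentinel:
import numpy as np

my_array = np.array([1., 3., 2., 4., 5.])
k = 3
top_k_index_list = []
saved_values = []
for _ in range(k):
    idx = np.argmax(my_array)
    top_k_index_list.append(idx)
    saved_values.append(my_array[idx])
    my_array[idx] = -np.inf
# restore the clobbered entries afterwards
my_array[top_k_index_list] = saved_values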
This code works for a 2D numpy array:
mat = np.array([[1, 3], [2, 5]])                # numpy 2D array
n = 2                                           # n
n_largest_mat = np.sort(mat, axis=None)[-n:]    # the n largest values
tf_n_largest = np.zeros(mat.shape, dtype=bool)  # all-false matrix
for x in n_largest_mat:
    tf_n_largest = tf_n_largest | (mat == x)    # true-false
n_largest_elems = mat[tf_n_largest]             # true-false indexing
This produces a true-false n_largest matrix indexing that also works to extract the n largest elements from a matrix array.
When top_k << axis_length, this is faster than argsort.
import numpy as np

def get_sorted_top_k(array, top_k=1, axis=-1, reverse=False):
    if reverse:
        axis_length = array.shape[axis]
        partition_index = np.take(np.argpartition(array, kth=-top_k, axis=axis),
                                  range(axis_length - top_k, axis_length), axis)
    else:
        partition_index = np.take(np.argpartition(array, kth=top_k, axis=axis),
                                  range(0, top_k), axis)
    top_scores = np.take_along_axis(array, partition_index, axis)
    # resort partition
    sorted_index = np.argsort(top_scores, axis=axis)
    if reverse:
        sorted_index = np.flip(sorted_index, axis=axis)
    top_sorted_scores = np.take_along_axis(top_scores, sorted_index, axis)
    top_sorted_indexes = np.take_along_axis(partition_index, sorted_index, axis)
    return top_sorted_scores, top_sorted_indexes

if __name__ == "__main__":
    import time
    from sklearn.metrics.pairwise import cosine_similarity

    x = np.random.rand(10, 128)
    y = np.random.rand(1000000, 128)
    z = cosine_similarity(x, y)
    start_time = time.time()
    sorted_index_1 = get_sorted_top_k(z, top_k=3, axis=1, reverse=True)[1]
    print(time.time() - start_time)
You can simply use a dictionary to find top k values & indices in a numpy array.
For example, if you want to find top 2 maximum values & indices
import numpy as np
nums = np.array([0.2, 0.3, 0.25, 0.15, 0.1])
def TopK(x, k):
    a = dict([(i, j) for i, j in enumerate(x)])
    sorted_a = dict(sorted(a.items(), key=lambda kv: kv[1], reverse=True))
    indices = list(sorted_a.keys())[:k]
    values = list(sorted_a.values())[:k]
    return (indices, values)
print(f"Indices: {TopK(nums, k = 2)[0]}")
print(f"Values: {TopK(nums, k = 2)[1]}")
Indices: [1, 2]
Values: [0.3, 0.25]
A vectorized 2D implementation using argpartition:
k = 3
probas = np.array([
[.6, .1, .15, .15],
[.1, .6, .15, .15],
[.3, .1, .6, 0],
])
k_indices = np.argpartition(-probas, k-1, axis=-1)[:, :k]
# adjust indices to apply in flat array
adjuster = np.arange(probas.shape[0]) * probas.shape[1]
adjuster = np.broadcast_to(adjuster[:, None], k_indices.shape)
k_indices_flat = k_indices + adjuster
k_values = probas.flatten()[k_indices_flat]
# k_indices:
# array([[0, 2, 3],
# [1, 2, 3],
# [2, 0, 1]])
# k_values:
# array([[0.6 , 0.15, 0.15],
# [0.6 , 0.15, 0.15],
# [0.6 , 0.3 , 0.1 ]])
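On NumPy 1.15+ the flat-index adjustment can be replaced by np.take_along_axis (already used in another answer above); a sketch:
k_values = np.take_along_axis(probas, k_indices, axis=-1)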
If you are dealing with NaNs and/or have problems understanding np.argpartition, try pandas.DataFrame.sort_values.
import numpy as np
import pandas as pd
a = np.array([9, 4, 4, 3, 3, 9, 0, 4, 6, 0])
df = pd.DataFrame(a, columns=['array'])
max_values = df['array'].sort_values(ascending=False, na_position='last')
ind = max_values[0:3].index.to_list()
This example gives the indices of the 3 largest, not-NaN values. Probably inefficient, but easy to read and customize.

Numpy concatenate lists where first column is in range n

I am trying to select all rows in a numpy matrix named matrix with shape (25323, 9), where the values of the first column are inside the range of start and end for each tuple in the list range_tuple. Ultimately, I want to create a new numpy matrix with the result, where final has a shape of (n, 9). The following code returns this error: TypeError: only integer scalar arrays can be converted to a scalar index. I have also tried initializing final with numpy.zeros((1, 9)) and using np.concatenate, but got similar results. I do get a result when I use final.append(result) instead of np.concatenate, but then the shape of the matrix gets lost. I know there is a proper solution to this problem; any help would be appreciated.
final = []
for i in range_tuples:
    copy = np.copy(matrix)
    start = i[0]
    end = i[1]
    result = copy[(matrix[:, 0] < end) & (matrix[:, 0] > start)]
    final = np.concatenate(final, result)
final = np.matrix(final)
In [33]: arr
Out[33]:
array([[ 0, 1, 2],
[ 3, 4, 5],
[ 6, 7, 8],
[ 9, 10, 11],
[12, 13, 14],
[15, 16, 17],
[18, 19, 20],
[21, 22, 23]])
In [34]: tups = [(0,6),(3,12),(9,10),(15,14)]
In [35]: alist = []
    ...: for start, stop in tups:
    ...:     res = arr[(arr[:, 0] < stop) & (arr[:, 0] >= start), :]
    ...:     alist.append(res)
    ...:
Check the list; note that the elements differ in shape, and some have 1 or 0 rows. It's a good idea to test these edge cases.
In [37]: alist
Out[37]:
[array([[0, 1, 2],
        [3, 4, 5]]),
 array([[ 3,  4,  5],
        [ 6,  7,  8],
        [ 9, 10, 11]]),
 array([[ 9, 10, 11]]),
 array([], shape=(0, 3), dtype=int64)]
vstack joins them:
In [38]: np.vstack(alist)
Out[38]:
array([[ 0, 1, 2],
[ 3, 4, 5],
[ 3, 4, 5],
[ 6, 7, 8],
[ 9, 10, 11],
[ 9, 10, 11]])
Here concatenate also works, because default axis is 0, and all inputs are already 2d.
Try the following:
final = np.empty((0, 9))
for start, stop in range_tuples:
    result = matrix[(matrix[:, 0] < stop) & (matrix[:, 0] > start)]
    final = np.concatenate((final, result))
The first change is to initialize final as a numpy array. The second is that the first argument to concatenate has to be a sequence (e.g. a tuple) of the arrays to join, see the docs. In your code, concatenate interprets the result variable as the value for the axis parameter.
Notes
I used tuple unpacking to make the loop clearer.
The copy is not needed.
Appending to a python list can be faster; the final result can afterwards be obtained by stacking the collected pieces (see the sketch below).
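A sketch of that append-then-stack approach, assuming matrix and range_tuples as in the question:
chunks = []
for start, stop in range_tuples:
    chunks.append(matrix[(matrix[:, 0] < stop) & (matrix[:, 0] > start)])
final = np.vstack(chunks)  # joins the 2-D chunks row-wise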
I would simply create a boolean mask to select rows that satisfy required conditions.
EDIT: I missed that you are working with matrix (as opposed to ndarray). The answer was edited for matrix.
Assume following input data:
matrix = np.matrix([[1, 2, 3], [5, 6, 7], [2, 1, 7], [3, 4, 5], [8, 9, 0]])
range_tuple = [(0, 2), (1, 4), (1, 9), (5, 9), (0, 100)]
Then, first, I would convert range_tuple to a numpy matrix:
range_mat = np.matrix(range_tuple)
Now, create the mask:
mask = np.ravel((matrix[:, 0] > range_mat[:, 0]) & (matrix[:, 0] < range_mat[:, 1]))
Apply the mask:
final = matrix[mask] # or matrix[mask].copy() if you intend to modify matrix
To check:
print(final)
[[1 2 3]
[2 1 7]
[8 9 0]]
If length of range_tuple can be different from the number of rows in the matrix, then do this:
n = min(range_mat.shape[0], matrix.shape[0])
mask = np.pad(
    np.ravel(
        (matrix[:n, 0] > range_mat[:n, 0]) & (matrix[:n, 0] < range_mat[:n, 1])
    ),
    (0, matrix.shape[0] - n)
)
final = matrix[mask]

Get values in numpy array either side of a specified value

I have a numpy array, provided at random, which for this example looks like:
a = [10, 8, 6, 4, 2, -2, -4, -6, -8, -10, 1]
ideally, in this example, the values would be between -10 and 10 but this cannot be guaranteed (as above).
I want to retrieve the 2 values closest to zero, such that:
b = a[a > 0][-1]
c = a[a < 0][0]
which would ideally return the values 2 and -2. However, the trailing 1 is included in the slice for b, and I get the values 1 and -2 instead.
Is there a way in numpy to retrieve the values immediately 'next' to zero?
It's worth noting that whilst I always want to split the array at 0, the array could be any length and I could have an uneven number of positive and negative values in the array (e.g. [5, 4, 3, 2, 1, 0, -1]).
A real-world example (from a plot not reproduced here):
I want the yellow and green positions but get the blue and green positions instead, as the data crosses back over zero from negative to positive.
This function should do the job:
import numpy as np

def my_func(x):
    left = np.where(x[:-1] > 0)[0][-1]
    right = 1 + np.where(x[1:] < 0)[0][0]
    return x[left], x[right]
Demo:
>>> a = np.array([10, 8, 6, 4, 2, -2, -4, -6, -8, -10, 1])
>>> b = np.array([5, 4, 3, 2, 1, 0, -1])
>>> my_func(a)
(2, -2)
>>> my_func(b)
(1, -1)

What is the purpose of meshgrid in Python / NumPy?

Can someone explain to me what is the purpose of meshgrid function in Numpy? I know it creates some kind of grid of coordinates for plotting, but I can't really see the direct benefit of it.
I am studying "Python Machine Learning" from Sebastian Raschka, and he is using it for plotting the decision borders. See input 11 here.
I have also tried this code from official documentation, but, again, the output doesn't really make sense to me.
x = np.arange(-5, 5, 1)
y = np.arange(-5, 5, 1)
xx, yy = np.meshgrid(x, y, sparse=True)
z = np.sin(xx**2 + yy**2) / (xx**2 + yy**2)
h = plt.contourf(x,y,z)
Please, if possible, also show me a lot of real-world examples.
The purpose of meshgrid is to create a rectangular grid out of an array of x values and an array of y values.
So, for example, suppose we want to create a grid with a point at each integer value between 0 and 4 in both the x and y directions. To create a rectangular grid, we need every combination of the x and y points.
This is going to be 25 points, right? So if we wanted to create x and y arrays for all of these points, we could do the following.
x[0,0] = 0 y[0,0] = 0
x[0,1] = 1 y[0,1] = 0
x[0,2] = 2 y[0,2] = 0
x[0,3] = 3 y[0,3] = 0
x[0,4] = 4 y[0,4] = 0
x[1,0] = 0 y[1,0] = 1
x[1,1] = 1 y[1,1] = 1
...
x[4,3] = 3 y[4,3] = 4
x[4,4] = 4 y[4,4] = 4
This would result in the following x and y matrices, such that the pairing of the corresponding element in each matrix gives the x and y coordinates of a point in the grid.
x = 0 1 2 3 4        y = 0 0 0 0 0
    0 1 2 3 4            1 1 1 1 1
    0 1 2 3 4            2 2 2 2 2
    0 1 2 3 4            3 3 3 3 3
    0 1 2 3 4            4 4 4 4 4
We can then plot these to verify that they are a grid:
plt.plot(x,y, marker='.', color='k', linestyle='none')
Obviously, this gets very tedious especially for large ranges of x and y. Instead, meshgrid can actually generate this for us: all we have to specify are the unique x and y values.
xvalues = np.array([0, 1, 2, 3, 4]);
yvalues = np.array([0, 1, 2, 3, 4]);
Now, when we call meshgrid, we get the previous output automatically.
xx, yy = np.meshgrid(xvalues, yvalues)
plt.plot(xx, yy, marker='.', color='k', linestyle='none')
Creation of these rectangular grids is useful for a number of tasks. In the example that you have provided in your post, it is simply a way to sample a function (sin(x**2 + y**2) / (x**2 + y**2)) over a range of values for x and y.
Because this function has been sampled on a rectangular grid, the function can now be visualized as an "image".
Additionally, the result can now be passed to functions which expect data on rectangular grid (i.e. contourf)
Actually the purpose of np.meshgrid is already mentioned in the documentation:
np.meshgrid
Return coordinate matrices from coordinate vectors.
Make N-D coordinate arrays for vectorized evaluations of N-D scalar/vector fields over N-D grids, given one-dimensional coordinate arrays x1, x2,..., xn.
So its primary purpose is to create coordinate matrices.
You probably just asked yourself:
Why do we need to create coordinate matrices?
The reason you need coordinate matrices with Python/NumPy is that there is no direct relation from coordinates to values, except when your coordinates start with zero and are purely positive integers. Then you can just use the indices of an array as the index.
However when that's not the case you somehow need to store coordinates alongside your data. That's where grids come in.
Suppose your data is:
1 2 1
2 5 2
1 2 1
However, each value represents a 3 x 2 kilometer area (horizontal x vertical). Suppose your origin is the upper left corner and you want arrays that represent the distances; you could use:
import numpy as np
h, v = np.meshgrid(np.arange(3)*3, np.arange(3)*2)
where v is:
array([[0, 0, 0],
       [2, 2, 2],
       [4, 4, 4]])
and h:
array([[0, 3, 6],
       [0, 3, 6],
       [0, 3, 6]])
So if you have two indices, let's say x and y (that's why the return value of meshgrid is usually xx or xs instead of x; in this case I chose h for horizontal!), then you can get the x coordinate of the point, the y coordinate of the point and the value at that point by using:
h[x, y] # horizontal coordinate
v[x, y] # vertical coordinate
data[x, y] # value
That makes it much easier to keep track of coordinates and (even more importantly) you can pass them to functions that need to know the coordinates.
A slightly longer explanation
However, np.meshgrid itself isn't often used directly; mostly one just uses one of the similar objects np.mgrid or np.ogrid.
Here np.mgrid represents the sparse=False and np.ogrid the sparse=True case (I refer to the sparse argument of np.meshgrid). Note that there is a significant difference between
np.meshgrid and np.ogrid and np.mgrid: The first two returned values (if there are two or more) are reversed. Often this doesn't matter but you should give meaningful variable names depending on the context.
For example, in case of a 2D grid and matplotlib.pyplot.imshow it makes sense to name the first returned item of np.meshgrid x and the second one y while it's
the other way around for np.mgrid and np.ogrid.
np.ogrid and sparse grids
>>> import numpy as np
>>> yy, xx = np.ogrid[-5:6, -5:6]
>>> xx
array([[-5, -4, -3, -2, -1, 0, 1, 2, 3, 4, 5]])
>>> yy
array([[-5],
       [-4],
       [-3],
       [-2],
       [-1],
       [ 0],
       [ 1],
       [ 2],
       [ 3],
       [ 4],
       [ 5]])
As already said the output is reversed when compared to np.meshgrid, that's why I unpacked it as yy, xx instead of xx, yy:
>>> xx, yy = np.meshgrid(np.arange(-5, 6), np.arange(-5, 6), sparse=True)
>>> xx
array([[-5, -4, -3, -2, -1, 0, 1, 2, 3, 4, 5]])
>>> yy
array([[-5],
       [-4],
       [-3],
       [-2],
       [-1],
       [ 0],
       [ 1],
       [ 2],
       [ 3],
       [ 4],
       [ 5]])
This already looks like coordinates, specifically the x and y lines for 2D plots.
Visualized:
import matplotlib.pyplot as plt

yy, xx = np.ogrid[-5:6, -5:6]
plt.figure()
plt.title('ogrid (sparse meshgrid)')
plt.grid()
plt.xticks(xx.ravel())
plt.yticks(yy.ravel())
plt.scatter(xx, np.zeros_like(xx), color="blue", marker="*")
plt.scatter(np.zeros_like(yy), yy, color="red", marker="x")
np.mgrid and dense/fleshed out grids
>>> yy, xx = np.mgrid[-5:6, -5:6]
>>> xx
array([[-5, -4, -3, -2, -1, 0, 1, 2, 3, 4, 5],
[-5, -4, -3, -2, -1, 0, 1, 2, 3, 4, 5],
[-5, -4, -3, -2, -1, 0, 1, 2, 3, 4, 5],
[-5, -4, -3, -2, -1, 0, 1, 2, 3, 4, 5],
[-5, -4, -3, -2, -1, 0, 1, 2, 3, 4, 5],
[-5, -4, -3, -2, -1, 0, 1, 2, 3, 4, 5],
[-5, -4, -3, -2, -1, 0, 1, 2, 3, 4, 5],
[-5, -4, -3, -2, -1, 0, 1, 2, 3, 4, 5],
[-5, -4, -3, -2, -1, 0, 1, 2, 3, 4, 5],
[-5, -4, -3, -2, -1, 0, 1, 2, 3, 4, 5],
[-5, -4, -3, -2, -1, 0, 1, 2, 3, 4, 5]])
>>> yy
array([[-5, -5, -5, -5, -5, -5, -5, -5, -5, -5, -5],
[-4, -4, -4, -4, -4, -4, -4, -4, -4, -4, -4],
[-3, -3, -3, -3, -3, -3, -3, -3, -3, -3, -3],
[-2, -2, -2, -2, -2, -2, -2, -2, -2, -2, -2],
[-1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1],
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
[ 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1],
[ 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2],
[ 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3],
[ 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4],
[ 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5]])
The same applies here: The output is reversed compared to np.meshgrid:
>>> xx, yy = np.meshgrid(np.arange(-5, 6), np.arange(-5, 6))
>>> xx
array([[-5, -4, -3, -2, -1, 0, 1, 2, 3, 4, 5],
[-5, -4, -3, -2, -1, 0, 1, 2, 3, 4, 5],
[-5, -4, -3, -2, -1, 0, 1, 2, 3, 4, 5],
[-5, -4, -3, -2, -1, 0, 1, 2, 3, 4, 5],
[-5, -4, -3, -2, -1, 0, 1, 2, 3, 4, 5],
[-5, -4, -3, -2, -1, 0, 1, 2, 3, 4, 5],
[-5, -4, -3, -2, -1, 0, 1, 2, 3, 4, 5],
[-5, -4, -3, -2, -1, 0, 1, 2, 3, 4, 5],
[-5, -4, -3, -2, -1, 0, 1, 2, 3, 4, 5],
[-5, -4, -3, -2, -1, 0, 1, 2, 3, 4, 5],
[-5, -4, -3, -2, -1, 0, 1, 2, 3, 4, 5]])
>>> yy
array([[-5, -5, -5, -5, -5, -5, -5, -5, -5, -5, -5],
[-4, -4, -4, -4, -4, -4, -4, -4, -4, -4, -4],
[-3, -3, -3, -3, -3, -3, -3, -3, -3, -3, -3],
[-2, -2, -2, -2, -2, -2, -2, -2, -2, -2, -2],
[-1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1],
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
[ 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1],
[ 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2],
[ 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3],
[ 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4],
[ 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5]])
Unlike ogrid these arrays contain all xx and yy coordinates in the -5 <= xx <= 5; -5 <= yy <= 5 grid.
yy, xx = np.mgrid[-5:6, -5:6]
plt.figure()
plt.title('mgrid (dense meshgrid)')
plt.grid()
plt.xticks(xx[0])
plt.yticks(yy[:, 0])
plt.scatter(xx, yy, color="red", marker="x")
Functionality
It's not only limited to 2D, these functions work for arbitrary dimensions (well, there is a maximum number of arguments given to function in Python and a maximum number of dimensions that NumPy allows):
>>> x1, x2, x3, x4 = np.ogrid[:3, 1:4, 2:5, 3:6]
>>> for i, x in enumerate([x1, x2, x3, x4]):
... print('x{}'.format(i+1))
... print(repr(x))
x1
array([[[[0]]],
       [[[1]]],
       [[[2]]]])
x2
array([[[[1]],
        [[2]],
        [[3]]]])
x3
array([[[[2],
         [3],
         [4]]]])
x4
array([[[[3, 4, 5]]]])
>>> # equivalent meshgrid output, note how the first two arguments are reversed and the unpacking
>>> x2, x1, x3, x4 = np.meshgrid(np.arange(1,4), np.arange(3), np.arange(2, 5), np.arange(3, 6), sparse=True)
>>> for i, x in enumerate([x1, x2, x3, x4]):
... print('x{}'.format(i+1))
... print(repr(x))
# Identical output so it's omitted here.
Even though these also work for 1D, there are two (much more common) 1D grid creation functions:
np.arange
np.linspace
Besides the start and stop arguments, mgrid and ogrid also support a step argument (even complex steps that represent the number of steps):
>>> x1, x2 = np.mgrid[1:10:2, 1:10:4j]
>>> x1 # The dimension with the explicit step width of 2
array([[1., 1., 1., 1.],
[3., 3., 3., 3.],
[5., 5., 5., 5.],
[7., 7., 7., 7.],
[9., 9., 9., 9.]])
>>> x2 # The dimension with the "number of steps"
array([[ 1., 4., 7., 10.],
[ 1., 4., 7., 10.],
[ 1., 4., 7., 10.],
[ 1., 4., 7., 10.],
[ 1., 4., 7., 10.]])
Applications
You specifically asked about the purpose and in fact, these grids are extremely useful if you need a coordinate system.
For example if you have a NumPy function that calculates the distance in two dimensions:
def distance_2d(x_point, y_point, x, y):
    return np.hypot(x - x_point, y - y_point)
And you want to know the distance of each point:
>>> ys, xs = np.ogrid[-5:5, -5:5]
>>> distances = distance_2d(1, 2, xs, ys) # distance to point (1, 2)
>>> distances
array([[9.21954446, 8.60232527, 8.06225775, 7.61577311, 7.28010989,
7.07106781, 7. , 7.07106781, 7.28010989, 7.61577311],
[8.48528137, 7.81024968, 7.21110255, 6.70820393, 6.32455532,
6.08276253, 6. , 6.08276253, 6.32455532, 6.70820393],
[7.81024968, 7.07106781, 6.40312424, 5.83095189, 5.38516481,
5.09901951, 5. , 5.09901951, 5.38516481, 5.83095189],
[7.21110255, 6.40312424, 5.65685425, 5. , 4.47213595,
4.12310563, 4. , 4.12310563, 4.47213595, 5. ],
[6.70820393, 5.83095189, 5. , 4.24264069, 3.60555128,
3.16227766, 3. , 3.16227766, 3.60555128, 4.24264069],
[6.32455532, 5.38516481, 4.47213595, 3.60555128, 2.82842712,
2.23606798, 2. , 2.23606798, 2.82842712, 3.60555128],
[6.08276253, 5.09901951, 4.12310563, 3.16227766, 2.23606798,
1.41421356, 1. , 1.41421356, 2.23606798, 3.16227766],
[6. , 5. , 4. , 3. , 2. ,
1. , 0. , 1. , 2. , 3. ],
[6.08276253, 5.09901951, 4.12310563, 3.16227766, 2.23606798,
1.41421356, 1. , 1.41421356, 2.23606798, 3.16227766],
[6.32455532, 5.38516481, 4.47213595, 3.60555128, 2.82842712,
2.23606798, 2. , 2.23606798, 2.82842712, 3.60555128]])
The output would be identical if one passed in a dense grid instead of an open grid. NumPy's broadcasting makes it possible!
Let's visualize the result:
plt.figure()
plt.title('distance to point (1, 2)')
plt.imshow(distances, origin='lower', interpolation="none")
plt.xticks(np.arange(xs.shape[1]), xs.ravel()) # need to set the ticks manually
plt.yticks(np.arange(ys.shape[0]), ys.ravel())
plt.colorbar()
And this is also where NumPy's mgrid and ogrid become very convenient, because they allow you to easily change the resolution of your grids:
ys, xs = np.ogrid[-5:5:200j, -5:5:200j]
# otherwise same code as above
However, since imshow doesn't support x and y inputs one has to change the ticks by hand. It would be really convenient if it would accept the x and y coordinates, right?
It's easy to write functions with NumPy that deal naturally with grids. Furthermore, there are several functions in NumPy, SciPy, matplotlib that expect you to pass in the grid.
I like images so let's explore matplotlib.pyplot.contour:
ys, xs = np.mgrid[-5:5:200j, -5:5:200j]
density = np.sin(ys)-np.cos(xs)
plt.figure()
plt.contour(xs, ys, density)
Note how the coordinates are already correctly set! That wouldn't be the case if you just passed in the density.
Or to give another fun example using astropy models (this time I don't care much about the coordinates, I just use them to create some grid):
from astropy.modeling import models

z = np.zeros((100, 100))
y, x = np.mgrid[0:100, 0:100]
for _ in range(10):
    g2d = models.Gaussian2D(amplitude=100,
                            x_mean=np.random.randint(0, 100),
                            y_mean=np.random.randint(0, 100),
                            x_stddev=3,
                            y_stddev=3)
    z += g2d(x, y)
    a2d = models.AiryDisk2D(amplitude=70,
                            x_0=np.random.randint(0, 100),
                            y_0=np.random.randint(0, 100),
                            radius=5)
    z += a2d(x, y)
Although that's just "for the looks", several functions related to functional models and fitting in SciPy and elsewhere require grids (for example scipy.interpolate.interp2d; scipy.interpolate.griddata even shows examples using np.mgrid). Most of these work with both open grids and dense grids, however some only work with one of them.
Suppose you have a function:
def sinus2d(x, y):
    return np.sin(x) + np.sin(y)
and you want, for example, to see what it looks like in the range 0 to 2*pi. How would you do it? There np.meshgrid comes in:
xx, yy = np.meshgrid(np.linspace(0,2*np.pi,100), np.linspace(0,2*np.pi,100))
z = sinus2d(xx, yy) # Create the image on this grid
and such a plot would look like:
import matplotlib.pyplot as plt
plt.imshow(z, origin='lower', interpolation='none')
plt.show()
So np.meshgrid is just a convenience. In principle the same could be done by:
z2 = sinus2d(np.linspace(0,2*np.pi,100)[:,None], np.linspace(0,2*np.pi,100)[None,:])
but there you need to be aware of your dimensions (suppose you have more than two ...) and the right broadcasting. np.meshgrid does all of this for you.
Also meshgrid allows you to delete coordinates together with the data if you, for example, want to do an interpolation but exclude certain values:
condition = z>0.6
z_new = z[condition] # This will make your array 1D
So how would you do the interpolation now? You can give x and y to an interpolation function like scipy.interpolate.interp2d, so you need a way to know which coordinates were deleted:
x_new = xx[condition]
y_new = yy[condition]
and then you can still interpolate with the "right" coordinates (try it without the meshgrid and you will have a lot of extra code):
from scipy.interpolate import interp2d
interpolated = interp2d(x_new, y_new, z_new)
and the original meshgrid allows you to get the interpolation on the original grid again:
interpolated_grid = interpolated(xx[0], yy[:, 0]).reshape(xx.shape)
These are just some examples where I used meshgrid; there might be a lot more.
Short answer
The purpose of meshgrid is to help replace slow Python loops with faster vectorized operations available in the NumPy library. meshgrid's role is to prepare the 2D arrays required by the vectorized operation.
Basic example showing the principle
Let's say we have two sequences of values,
a = [2, 7, 9, 20]
b = [1, 6, 7, 9]
and we want to perform an operation on each possible pair of values, one taken from the first list, one taken from the second list. We also want to store the result. For example, let's say we want to get the sum of the values for each possible pair.
Slow and laborious method
c = []
for i in range(len(b)):
    row = []
    for j in range(len(a)):
        row.append(a[j] + b[i])
    c.append(row)
print(c)
Result:
[[3, 8, 10, 21],
[8, 13, 15, 26],
[9, 14, 16, 27],
[11, 16, 18, 29]]
Fast and easy method
i, j = np.meshgrid(a, b)
c = i + j
print(c)
Result:
[[ 3 8 10 21]
[ 8 13 15 26]
[ 9 14 16 27]
[11 16 18 29]]
You can see from this basic illustration how the explicit slow Python loops have been replaced by hidden, faster C loops in the NumPy library. This principle is widely used for 3D operations, including colored pixel maps. A common example is a 3D plot.
Common use: 3D plot
x = np.arange(-4, 4, 0.25)
y = np.arange(-4, 4, 0.25)
X, Y = np.meshgrid(x, y)
R = np.sqrt(X**2 + Y**2)
Z = np.sin(R)
(Borrowed from this site)
meshgrid is used to create pairs of coordinates between -4 and +4 with .25 increments in each direction X and Y. Each pair is then used to find R, and Z from it. This way of preparing "a grid" of coordinates is frequently used in plotting 3D surfaces, or coloring 2D surfaces.
Meshgrid under the hood
The two arrays prepared by meshgrid are:
(array([[ 2,  7,  9, 20],
        [ 2,  7,  9, 20],
        [ 2,  7,  9, 20],
        [ 2,  7,  9, 20]]),
 array([[1, 1, 1, 1],
        [6, 6, 6, 6],
        [7, 7, 7, 7],
        [9, 9, 9, 9]]))
These arrays are created by repeating the values provided, either horizontally or vertically. The two arrays are shape compatible for a vector operation.
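The same shape-compatible pair can also be obtained from broadcasting alone, without materializing the repeated arrays; a minimal sketch:
import numpy as np

a = np.array([2, 7, 9, 20])
b = np.array([1, 6, 7, 9])
# a column vector plus a row vector broadcasts to the same 4x4 sums
c = b[:, None] + a[None, :]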
Origin
numpy.meshgrid comes from MATLAB, like many other NumPy functions. So you can also study the examples from MATLAB to see meshgrid in use; the code for 3D plotting looks the same in MATLAB.
meshgrid helps create a rectangular grid out of two 1-D arrays, containing all pairs of points from the two arrays.
x = np.array([0, 1, 2, 3, 4])
y = np.array([0, 1, 2, 3, 4])
Now, if you have defined a function f(x,y) and you want to apply this function to all possible combinations of points from the arrays 'x' and 'y', then you can do this:
f(*np.meshgrid(x, y))
Say your function just produces the product of two elements; then this is how a Cartesian product can be achieved, efficiently for large arrays.
Referred from here
Basic Idea
Given possible x values, xs, (think of them as the tick-marks on the x-axis of a plot) and possible y values, ys, meshgrid generates the corresponding set of (x, y) grid points---analogous to set((x, y) for x in xs for y in ys). For example, if xs=[1,2,3] and ys=[4,5,6], we'd get the set of coordinates {(1,4), (2,4), (3,4), (1,5), (2,5), (3,5), (1,6), (2,6), (3,6)}.
Form of the Return Value
However, the representation that meshgrid returns is different from the above expression in two ways:
First, meshgrid lays out the grid points in a 2d array: rows correspond to different y-values, columns correspond to different x-values---as in list(list((x, y) for x in xs) for y in ys), which would give the following array:
[[(1,4), (2,4), (3,4)],
 [(1,5), (2,5), (3,5)],
 [(1,6), (2,6), (3,6)]]
Second, meshgrid returns the x and y coordinates separately (i.e. in two different numpy 2d arrays):
xcoords, ycoords = (
    array([[1, 2, 3],
           [1, 2, 3],
           [1, 2, 3]]),
    array([[4, 4, 4],
           [5, 5, 5],
           [6, 6, 6]]))
# same thing using np.meshgrid:
xcoords, ycoords = np.meshgrid([1, 2, 3], [4, 5, 6])
# same thing without meshgrid:
xcoords = np.array([xs] * len(ys))
ycoords = np.array([ys] * len(xs)).T
Note, np.meshgrid can also generate grids for higher dimensions. Given xs, ys, and zs, you'd get back xcoords, ycoords, zcoords as 3d arrays. meshgrid also supports reverse ordering of the dimensions as well as sparse representation of the result.
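A small sketch of those options (shapes in the comments):
import numpy as np

xs, ys, zs = [1, 2, 3], [4, 5], [6, 7, 8, 9]
# default indexing='xy' swaps the first two dimensions
xcoords, ycoords, zcoords = np.meshgrid(xs, ys, zs)
print(xcoords.shape)  # (2, 3, 4)
# indexing='ij' keeps argument order; sparse=True returns broadcastable arrays
xi, yi, zi = np.meshgrid(xs, ys, zs, indexing='ij', sparse=True)
print(xi.shape, yi.shape, zi.shape)  # (3, 1, 1) (1, 2, 1) (1, 1, 4)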
Applications
Why would we want this form of output?
Apply a function at every point on a grid:
One motivation is that binary operators like (+, -, *, /, **) are overloaded for numpy arrays as elementwise operations. This means that if I have a function def f(x, y): return (x - y) ** 2 that works on two scalars, I can also apply it on two numpy arrays to get an array of elementwise results: e.g. f(xcoords, ycoords) or f(*np.meshgrid(xs, ys)) gives the following on the above example:
array([[ 9,  4,  1],
       [16,  9,  4],
       [25, 16,  9]])
Higher dimensional outer product: I'm not sure how efficient this is, but you can get high-dimensional outer products this way: np.prod(np.meshgrid([1,2,3], [1,2], [1,2,3,4]), axis=0).
Contour plots in matplotlib: I came across meshgrid when investigating drawing contour plots with matplotlib for plotting decision boundaries. For this, you generate a grid with meshgrid, evaluate the function at each grid point (e.g. as shown above), and then pass the xcoords, ycoords, and computed f-values (i.e. zcoords) into the contourf function.
Behind the scenes:
import numpy as np

def meshgrid(x, y):
    XX = []
    YY = []
    for colm in range(len(y)):
        XX.append([])
        YY.append([])
        for row in range(len(x)):
            XX[colm].append(x[row])
            YY[colm].append(y[colm])
    return np.asarray(XX), np.asarray(YY)
Let's take the dataset from @Sarsaparilla's answer as an example:
y = [7, 6, 5]
x = [1, 2, 3, 4]
xx, yy = meshgrid(x , y)
and it outputs:
>>> xx
array([[1, 2, 3, 4],
       [1, 2, 3, 4],
       [1, 2, 3, 4]])
>>> yy
array([[7, 7, 7, 7],
       [6, 6, 6, 6],
       [5, 5, 5, 5]])

Resample and resize numpy array

I would like to resample a numpy array as suggested here: Resampling a numpy array representing an image. However, this resampling will do so by a factor, i.e.
import scipy.ndimage
import numpy as np
x = np.arange(9).reshape(3, 3)
print(scipy.ndimage.zoom(x, 2, order=1))
This will create a shape of (6, 6); but how can I resample an array to its best approximation within a (4, 6), (6, 8) or (6, 10) shape, for instance?
Instead of passing a single number to the zoom parameter, give a sequence:
scipy.ndimage.zoom(x, zoom=(1.5, 2.), order=1)
# array([[0, 0, 1, 1, 2, 2],
#        [2, 2, 3, 3, 4, 4],
#        [4, 4, 5, 5, 6, 6],
#        [6, 6, 7, 7, 8, 8]])
With the sequences (2., 2.75) and (2., 3.5) you will get output arrays with shapes (6, 8) and (6, 10), respectively.
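To hit a specific target shape, the per-axis factors can be computed from the two shapes; a small sketch (zoom rounds input_size * factor per axis, so awkward ratios are worth double-checking):
import numpy as np
import scipy.ndimage

x = np.arange(9).reshape(3, 3)
target = (6, 10)  # or (4, 6), (6, 8), ...
factors = tuple(t / s for t, s in zip(target, x.shape))
resized = scipy.ndimage.zoom(x, zoom=factors, order=1)
print(resized.shape)  # (6, 10)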
