Fill in missing values with nearest neighbour in Python numpy masked arrays?

I am working with a 2D Numpy masked_array in Python.
I need to change the data values in the masked area such that they equal the nearest unmasked value.
NB. If there is more than one nearest unmasked value, it can take any of those nearest values (whichever one turns out to be easiest to code…)
e.g.
import numpy
import numpy.ma as ma
a = numpy.arange(100).reshape(10,10)
fill_value=-99
a[2:4,3:8] = fill_value
a[8,8] = fill_value
a = ma.masked_array(a,a==fill_value)
>>> a
[[0 1 2 3 4 5 6 7 8 9]
[10 11 12 13 14 15 16 17 18 19]
[20 21 22 -- -- -- -- -- 28 29]
[30 31 32 -- -- -- -- -- 38 39]
[40 41 42 43 44 45 46 47 48 49]
[50 51 52 53 54 55 56 57 58 59]
[60 61 62 63 64 65 66 67 68 69]
[70 71 72 73 74 75 76 77 78 79]
[80 81 82 83 84 85 86 87 -- 89]
[90 91 92 93 94 95 96 97 98 99]]
I need it to look like this:
>>> a.data
[[0 1 2 3 4 5 6 7 8 9]
[10 11 12 13 14 15 16 17 18 19]
[20 21 22 ? 14 15 16 ? 28 29]
[30 31 32 ? 44 45 46 ? 38 39]
[40 41 42 43 44 45 46 47 48 49]
[50 51 52 53 54 55 56 57 58 59]
[60 61 62 63 64 65 66 67 68 69]
[70 71 72 73 74 75 76 77 78 79]
[80 81 82 83 84 85 86 87 ? 89]
[90 91 92 93 94 95 96 97 98 99]]
NB. where "?" could take any of the adjacent unmasked values.
What is the most efficient way to do this?
Thanks for your help.

I generally use a distance transform, as wisely suggested by Juh_ in this question.
This does not directly apply to masked arrays, but I do not think it would be hard to adapt, and it is quite efficient: I've had no problem applying it to large 100-megapixel images.
Copying the relevant method here for reference:
import numpy as np
from scipy import ndimage as nd
def fill(data, invalid=None):
    """
    Replace the value of invalid 'data' cells (indicated by 'invalid')
    by the value of the nearest valid data cell.

    Input:
        data:    numpy array of any dimension
        invalid: a binary array of same shape as 'data'. True cells set where data
                 value should be replaced.
                 If None (default), use: invalid = np.isnan(data)

    Output:
        Return a filled array.
    """
    if invalid is None:
        invalid = np.isnan(data)
    # Indices of the nearest valid cell, for every cell in the array.
    ind = nd.distance_transform_edt(invalid, return_distances=False, return_indices=True)
    return data[tuple(ind)]
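For the masked array a from the question, a minimal usage sketch (my addition, not from the original answer) would pass the underlying data and the mask separately:

# Sketch: fill the masked cells of 'a' (defined in the question above)
# with the nearest unmasked value, using the mask as the 'invalid' indicator.
filled = fill(a.data, invalid=a.mask)
print(filled)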

You could use np.roll to make shifted copies of a, then use boolean logic on the masks to identify the spots to be filled in:
import numpy as np
import numpy.ma as ma
a = np.arange(100).reshape(10,10)
fill_value=-99
a[2:4,3:8] = fill_value
a[8,8] = fill_value
a = ma.masked_array(a,a==fill_value)
print(a)
# [[0 1 2 3 4 5 6 7 8 9]
# [10 11 12 13 14 15 16 17 18 19]
# [20 21 22 -- -- -- -- -- 28 29]
# [30 31 32 -- -- -- -- -- 38 39]
# [40 41 42 43 44 45 46 47 48 49]
# [50 51 52 53 54 55 56 57 58 59]
# [60 61 62 63 64 65 66 67 68 69]
# [70 71 72 73 74 75 76 77 78 79]
# [80 81 82 83 84 85 86 87 -- 89]
# [90 91 92 93 94 95 96 97 98 99]]
for shift in (-1, 1):
    for axis in (0, 1):
        a_shifted = np.roll(a, shift=shift, axis=axis)
        # Fill where 'a' is masked but the shifted copy is not.
        idx = ~a_shifted.mask & a.mask
        a[idx] = a_shifted[idx]
print(a)
# [[0 1 2 3 4 5 6 7 8 9]
# [10 11 12 13 14 15 16 17 18 19]
# [20 21 22 13 14 15 16 28 28 29]
# [30 31 32 43 44 45 46 47 38 39]
# [40 41 42 43 44 45 46 47 48 49]
# [50 51 52 53 54 55 56 57 58 59]
# [60 61 62 63 64 65 66 67 68 69]
# [70 71 72 73 74 75 76 77 78 79]
# [80 81 82 83 84 85 86 87 98 89]
# [90 91 92 93 94 95 96 97 98 99]]
If you'd like to use a larger set of nearest neighbors, you could perhaps do something like this:
neighbors = ((0, 1), (0, -1), (1, 0), (-1, 0), (1, 1), (-1, 1), (1, -1), (-1, -1),
             (0, 2), (0, -2), (2, 0), (-2, 0))
Note that the order of the elements in neighbors is important: you want to fill in missing values with the nearest neighbor, not just any neighbor. There's probably a smarter way to generate the neighbors sequence; one possibility is sketched after the loop below.
a_copy = a.copy()
for hor_shift, vert_shift in neighbors:
    if not np.any(a.mask):
        break
    a_shifted = np.roll(a_copy, shift=hor_shift, axis=1)
    a_shifted = np.roll(a_shifted, shift=vert_shift, axis=0)
    idx = ~a_shifted.mask & a.mask
    a[idx] = a_shifted[idx]
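One possible way to generate such a sequence (a sketch of mine, not part of the original answer) is to enumerate all offsets within a given radius and sort them by distance, so nearer cells are tried first:

def neighbor_offsets(radius=2):
    # All (hor_shift, vert_shift) offsets within the square of the given
    # radius, excluding (0, 0), ordered by squared Euclidean distance.
    offsets = [(h, v)
               for h in range(-radius, radius + 1)
               for v in range(-radius, radius + 1)
               if (h, v) != (0, 0)]
    return sorted(offsets, key=lambda o: o[0] ** 2 + o[1] ** 2)

neighbors = neighbor_offsets(2)  # e.g. (0, 1) comes before (1, 1)

Note this yields all 24 offsets in the 5x5 square rather than the hand-picked 12 above; since ties are broken arbitrarily, that is consistent with taking any of the nearest values.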
Note that np.roll happily rolls the lower edge to the top, so a missing value at the top may be filled in by a value from the very bottom. If this is a problem, I'd have to think more about how to fix it. The obvious but not very clever solution would be to use if statements and feed the edges a different sequence of admissible neighbors...

For more complicated cases you could use scipy.spatial:
from scipy.spatial import KDTree

# Coordinates of every cell, split into good (unmasked) and bad (masked) points.
x, y = np.mgrid[0:a.shape[0], 0:a.shape[1]]
xygood = np.array((x[~a.mask], y[~a.mask])).T
xybad = np.array((x[a.mask], y[a.mask])).T

# query() returns (distances, indices); [1] picks the index of the
# nearest good point for each bad point.
a[a.mask] = a[~a.mask][KDTree(xygood).query(xybad)[1]]
print(a)
[[0 1 2 3 4 5 6 7 8 9]
[10 11 12 13 14 15 16 17 18 19]
[20 21 22 13 14 15 16 17 28 29]
[30 31 32 32 44 45 46 38 38 39]
[40 41 42 43 44 45 46 47 48 49]
[50 51 52 53 54 55 56 57 58 59]
[60 61 62 63 64 65 66 67 68 69]
[70 71 72 73 74 75 76 77 78 79]
[80 81 82 83 84 85 86 87 78 89]
[90 91 92 93 94 95 96 97 98 99]]

Related

How can I get uneven submatrices from NxN matrix?

I have a large NxN matrix that I'm looking to retrieve multiple submatrices from. Each of these submatrices can be different sizes but they can't overlap (see attached pic). Is there a function in Python that could remotely do what I'm looking to achieve?
[image: example of submatrices in an NxN matrix]
This is what I've written so far; however, it doesn't give me back a square submatrix
import numpy as np
# Create a 10x10 matrix
matrix = np.arange(0, 100).reshape((10, 10))
print(matrix)
# Define the sizes of the submatrices
submatrix_sizes = [4, 4, 5]
# Calculate the starting and ending indices for each submatrix
starts = np.cumsum([0] + submatrix_sizes[:-1])
ends = np.cumsum(submatrix_sizes)
# Split the matrix into submatrices of the specified sizes
submatrices = np.split(matrix, ends, axis=1)[:-1]
# Print the submatrices
for i, submatrix in enumerate(submatrices):
    print(f"Submatrix {i+1}:")
    print(submatrix)
Output
[[ 0 1 2 3 4 5 6 7 8 9]
[10 11 12 13 14 15 16 17 18 19]
[20 21 22 23 24 25 26 27 28 29]
[30 31 32 33 34 35 36 37 38 39]
[40 41 42 43 44 45 46 47 48 49]
[50 51 52 53 54 55 56 57 58 59]
[60 61 62 63 64 65 66 67 68 69]
[70 71 72 73 74 75 76 77 78 79]
[80 81 82 83 84 85 86 87 88 89]
[90 91 92 93 94 95 96 97 98 99]]
Submatrix 1:
[[ 0 1 2 3]
[10 11 12 13]
[20 21 22 23]
[30 31 32 33]
[40 41 42 43]
[50 51 52 53]
[60 61 62 63]
[70 71 72 73]
[80 81 82 83]
[90 91 92 93]]
Submatrix 2:
[[ 4 5 6 7]
[14 15 16 17]
[24 25 26 27]
[34 35 36 37]
[44 45 46 47]
[54 55 56 57]
[64 65 66 67]
[74 75 76 77]
[84 85 86 87]
[94 95 96 97]]
Submatrix 3:
[[ 8 9]
[18 19]
[28 29]
[38 39]
[48 49]
[58 59]
[68 69]
[78 79]
[88 89]
[98 99]]
Your starts and ends are not calculated correctly:
- it is impossible to have an index of 13 on any axis of a 10x10 matrix;
- you don't use the calculated starts while slicing.
starts = np.cumsum([0] + submatrix_sizes[:-1])
# how to calculate these correctly still has to be decided
ends = np.cumsum(submatrix_sizes)
breaks = list(zip(starts, ends))
# slice along both axes, not only the columns
submatrices = [matrix[start:end, start:end] for start, end in breaks]
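As a working sketch (my reading of the intent: square, non-overlapping blocks along the main diagonal, with sizes adjusted so they fit inside the matrix):

import numpy as np

matrix = np.arange(100).reshape(10, 10)
submatrix_sizes = [4, 4, 2]  # adjusted so the blocks fit: 4 + 4 + 2 = 10

starts = np.cumsum([0] + submatrix_sizes[:-1])
ends = np.cumsum(submatrix_sizes)

# Square, non-overlapping blocks along the main diagonal.
submatrices = [matrix[s:e, s:e] for s, e in zip(starts, ends)]

for i, sub in enumerate(submatrices, 1):
    print(f"Submatrix {i}: shape {sub.shape}")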

Why is 4D realisation of Max-Pooling in numpy misleading?

I'm trying to understand an algorithm for max-pooling in numpy. There are many answers like this one that offer to give a new 4-dimensional shape to a two-dimensional image and then call np.max on axes 1 and 3:
import numpy as np

window = (2, 4)
arr = np.random.randint(99, size=(1, 8, 12))
shape = (arr.shape[1] // window[0], window[0], arr.shape[2] // window[1], window[1])
out = arr.reshape(shape).max(axis=(1, 3))
According to my visual understanding, I should operate on axis=(0, 2) so those axes shrink to size 1 and produce the output I had pictured. That makes a lot of sense, but it's not correct:
arr = np.random.randint(99, size=(1,8,12)) =
[[[ 7 55 21 88 69 35 7 7 73 54 16 80]
[70 79 62 55 42 5 77 81 38 52 69 39]
[58 78 48 35 5 93 47 64 18 25 73 25]
[14 8 63 27 28 46 29 68 28 38 51 79]
[70 15 37 51 72 27 44 79 1 79 75 9]
[ 4 27 0 90 15 30 95 62 14 8 69 57]
[24 29 26 44 72 89 74 78 39 29 6 2]
[82 12 0 11 54 38 61 79 91 92 53 28]]]
--------------------------------------------------
arr.reshape(4, 2, 3, 4).max(axis=(0, 2)) =
[[73 93 75 88]
[91 92 95 90]]
--------------------------------------------------
arr.reshape(4, 2, 3, 4).max(axis=(1, 3)) =
[[88 81 80]
[78 93 79]
[90 95 79]
[82 89 92]]
So it doesn't ever agree with my picture in reality. What is the source of this disagreement? What are the reasons it's not working as expected?
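A brief annotation (mine, not from the original thread): in arr.reshape(4, 2, 3, 4), axis 0 indexes the block row, axis 1 the row within a block, axis 2 the block column, and axis 3 the column within a block (row r = 2*i + j, column c = 4*k + l). Pooling therefore has to reduce the within-block axes (1, 3), giving one max per 2x4 window; reducing (0, 2) instead takes the max across blocks, mixing values from different windows:

import numpy as np

arr = np.random.randint(99, size=(1, 8, 12))
blocks = arr.reshape(4, 2, 3, 4)

pooled = blocks.max(axis=(1, 3))      # shape (4, 3): one value per 2x4 window
not_pooled = blocks.max(axis=(0, 2))  # shape (2, 4): mixes values across windows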

How tf.data.experimental.group_by_window() operates in Tensorflow 2.0

I am trying to understand the tf.data.experimental.group_by_window() method in Tensorflow 2 but I have some difficulties.
For a reproducible example I use the one presented in the documentation:
components = np.arange(100).astype(np.int64)
dataset20 = tf.data.Dataset.from_tensor_slices(components)
dataset20 = dataset.apply(tf.data.experimental.group_by_window(
    key_func=lambda x: x % 2,
    reduce_func=lambda _, els: els.batch(10),
    window_size=100))

i = 0
for elem in dataset20:
    print('i is {0}\n'.format(i))
    print('elem is {0}'.format(elem.numpy()))
    i += 1
    print('\n--------------------------------\n')
i is 0
elem is [0 2 4 6 8]
--------------------------------
i is 1
elem is [1 3 5 7 9]
--------------------------------
Part of the confusion may be that the output doesn't correspond to the example code. The actual output from this:
components = np.arange(100).astype(np.int64)
dataset20 = tf.data.Dataset.from_tensor_slices(components)
dataset20 = dataset20.apply(tf.data.experimental.group_by_window(
    key_func=lambda x: x % 2,
    reduce_func=lambda _, els: els.batch(10),
    window_size=100))

for i, d in enumerate(dataset20):
    print(i, d.numpy())
is
0 [ 0 2 4 6 8 10 12 14 16 18]
1 [20 22 24 26 28 30 32 34 36 38]
2 [40 42 44 46 48 50 52 54 56 58]
3 [60 62 64 66 68 70 72 74 76 78]
4 [80 82 84 86 88 90 92 94 96 98]
5 [ 1 3 5 7 9 11 13 15 17 19]
6 [21 23 25 27 29 31 33 35 37 39]
7 [41 43 45 47 49 51 53 55 57 59]
8 [61 63 65 67 69 71 73 75 77 79]
9 [81 83 85 87 89 91 93 95 97 99]
As described in the documentation here, key_func separates the data into groups with associated key values. In the example, key_func separates the data [0, 99] into even and odd groups. reduce_func then operates on the (key, group) pairs to produce another dataset. Note though that reduce_func only operates on groups of data no greater than window_size. In the example the window size is greater than both group sizes (100 vs 50 elements), so it has no effect, and all evens are given in batches of 10 followed by all odds. If the window size is changed to a value less than 50, it does have an effect. For example, if the window size is changed to 5 and the batching is moved outside the group_by_window function:
dataset20 = dataset20.apply(tf.data.experimental.group_by_window(
    key_func=lambda x: x % 2,
    reduce_func=lambda _, els: els,
    window_size=5)).batch(10)
then the following output is produced:
0 [0 2 4 6 8 1 3 5 7 9]
1 [10 12 14 16 18 11 13 15 17 19]
2 [20 22 24 26 28 21 23 25 27 29]
3 [30 32 34 36 38 31 33 35 37 39]
4 [40 42 44 46 48 41 43 45 47 49]
5 [50 52 54 56 58 51 53 55 57 59]
6 [60 62 64 66 68 61 63 65 67 69]
7 [70 72 74 76 78 71 73 75 77 79]
8 [80 82 84 86 88 81 83 85 87 89]
9 [90 92 94 96 98 91 93 95 97 99]
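For comparison, a sketch (mine, not from the original answer) that instead keeps the batching inside reduce_func with window_size=5; each completed window is then emitted as a batch of 5, alternating between the even and odd keys:

import numpy as np
import tensorflow as tf

components = np.arange(100).astype(np.int64)
ds = tf.data.Dataset.from_tensor_slices(components)
ds = ds.apply(tf.data.experimental.group_by_window(
    key_func=lambda x: x % 2,
    reduce_func=lambda _, els: els.batch(5),
    window_size=5))

for i, d in enumerate(ds):
    print(i, d.numpy())
# 0 [0 2 4 6 8]
# 1 [1 3 5 7 9]
# 2 [10 12 14 16 18]
# ...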

Object Similarity Pandas and Scikit Learn

Is there a way to find and rank rows in a Pandas DataFrame by their similarity to a row from another DataFrame?
My understanding of your question: you have two data frames, hopefully with the same column count. You want to rate the members of the first data frame (the subject) by how close, i.e. similar, they are to any of the members of the second (the target) data frame.
I am not aware of a built-in method.
It is probably not the most efficient way, but here is how I'd approach it:
#! /usr/bin/python3
import pandas as pd
import numpy as np
import pprint

pp = pprint.PrettyPrinter(indent=4)

# Simulate data.
df_subject = pd.DataFrame(np.random.randint(0, 100, size=(100, 4)), columns=list('ABCD'))  # The one we iterate over, checking similarity to the target.
df_target = pd.DataFrame(np.random.randint(0, 100, size=(100, 4)), columns=list('ABCD'))   # The one we check distances to.

# This will hold the min distances.
distances = []

# Iterate over the subject DF.
for ix1, subject in df_subject.iterrows():
    distances_cur = []
    # Iterate over the target DF.
    for ix2, target in df_target.iterrows():
        distances_cur.append(np.linalg.norm(target - subject))
    # Get the minimum distance for this subject member.
    distances.append(min(distances_cur))

# Distances to df.
distances = pd.DataFrame(distances)

# Normalize.
distances = 0.5 - (distances - distances.mean(axis=0)) / distances.max(axis=0)

# Column index joining, ordering and beautification.
Proximity_Ratings_name = 'Proximity Ratings'
distances = distances.rename(columns={0: Proximity_Ratings_name})
df_subject = df_subject.join(distances)
pp.pprint(df_subject.sort_values(Proximity_Ratings_name, ascending=False))
It should yield something like the table below. A higher rating means there's a similar member in the target data frame:
A B C D Proximity Ratings
55 86 21 91 78 0.941537
38 91 31 35 95 0.901638
43 49 89 49 6 0.878030
98 28 98 98 36 0.813685
77 67 23 78 84 0.809324
35 52 16 36 58 0.802223
54 2 25 61 44 0.788591
95 76 3 60 46 0.766896
5 55 39 88 37 0.756049
52 79 71 90 70 0.752520
66 52 27 82 82 0.751353
41 45 67 55 33 0.739919
76 12 93 50 62 0.720323
94 99 84 39 63 0.716123
26 62 6 97 60 0.715081
40 64 50 37 27 0.714042
68 70 21 8 82 0.698824
47 90 54 60 65 0.676680
7 85 95 45 71 0.672036
2 14 68 50 6 0.661113
34 62 63 83 29 0.659322
8 87 90 28 74 0.647873
75 14 61 27 68 0.633370
60 9 91 42 40 0.630030
4 46 46 52 35 0.621792
81 94 19 82 44 0.614510
73 67 27 34 92 0.608137
30 92 64 93 51 0.608137
11 52 25 93 50 0.605770
51 17 48 57 52 0.604984
.. .. .. .. .. ...
64 28 56 0 9 0.397054
18 52 84 36 79 0.396518
99 41 5 32 34 0.388519
27 19 54 43 94 0.382714
92 69 56 73 93 0.382714
59 1 29 46 16 0.374878
58 2 36 8 96 0.362525
69 58 92 16 48 0.361505
31 27 57 80 35 0.349887
10 59 23 47 24 0.345891
96 41 77 76 33 0.345891
78 42 71 87 65 0.344398
93 12 31 6 27 0.329152
23 6 5 10 42 0.320445
14 44 6 43 29 0.319964
6 81 51 44 15 0.311840
3 17 60 13 22 0.293066
70 28 40 22 82 0.251549
36 95 72 35 5 0.249354
49 78 10 30 18 0.242370
17 79 69 57 96 0.225168
46 42 95 86 81 0.224742
84 58 81 59 86 0.221346
9 9 62 8 30 0.211659
72 11 51 74 8 0.159265
90 74 26 80 1 0.138993
20 90 4 6 5 0.117652
50 3 12 5 53 0.077088
42 90 76 42 1 0.075284
45 94 46 88 14 0.054244
Hope I understood correctly. Don't use this if performance matters; I'm sure there's an algebraic way to approach this (matrix multiplication) that would run much faster.
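As a sketch of that faster approach (my addition, using scipy.spatial.distance.cdist instead of the nested iterrows loops):

import numpy as np
import pandas as pd
from scipy.spatial.distance import cdist

df_subject = pd.DataFrame(np.random.randint(0, 100, size=(100, 4)), columns=list('ABCD'))
df_target = pd.DataFrame(np.random.randint(0, 100, size=(100, 4)), columns=list('ABCD'))

# All pairwise Euclidean distances at once: shape (100, 100).
pairwise = cdist(df_subject.values, df_target.values)

# Minimum distance from each subject row to any target row,
# replacing the two Python-level loops above.
min_distances = pairwise.min(axis=1)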

Python: creating a grid

Is it possible to create a grid like the one below?
I didn't find anything in the forum.
#euler-project problem number 11
#In the 20 times 20 grid below,
#four numbers along a diagonal line have been marked in red.
#The product of these numbers is 26 times 63 times 78 times 14 = 1788696.
#What is the greatest product of four adjacent numbers in any direction
#(up, down, left, right, or diagonally) in the 20 times 20 grid?
import numpy
number = numpy.array([[08 02 22 97 38 15 00 40 00 75 04 05 07 78 52 12 50 77 91 08]
[49 49 99 40 17 81 18 57 60 87 17 40 98 43 69 48 04 56 62 00]
[81 49 31 73 55 79 14 29 93 71 40 67 53 88 30 03 49 13 36 65]
[52 70 95 23 04 60 11 42 69 24 68 56 01 32 56 71 37 02 36 91]
[22 31 16 71 51 67 63 89 41 92 36 54 22 40 40 28 66 33 13 80]
[24 47 32 60 99 03 45 02 44 75 33 53 78 36 84 20 35 17 12 50]
[32 98 81 28 64 23 67 10 26 38 40 67 59 54 70 66 18 38 64 70]
[67 26 20 68 02 62 12 20 95 63 94 39 63 08 40 91 66 49 94 21]
[24 55 58 05 66 73 99 26 97 17 78 78 96 83 14 88 34 89 63 72]
[21 36 23 09 75 00 76 44 20 45 35 14 00 61 33 97 34 31 33 95]
[78 17 53 28 22 75 31 67 15 94 03 80 04 62 16 14 09 53 56 92]
[16 39 05 42 96 35 31 47 55 58 88 24 00 17 54 24 36 29 85 57]
[86 56 00 48 35 71 89 07 05 44 44 37 44 60 21 58 51 54 17 58]
[19 80 81 68 05 94 47 69 28 73 92 13 86 52 17 77 04 89 55 40]
[04 52 08 83 97 35 99 16 07 97 57 32 16 26 26 79 33 27 98 66]
[88 36 68 87 57 62 20 72 03 46 33 67 46 55 12 32 63 93 53 69]
[04 42 16 73 38 25 39 11 24 94 72 18 08 46 29 32 40 62 76 36]
[20 69 36 41 72 30 23 88 34 62 99 69 82 67 59 85 74 04 36 16]
[20 73 35 29 78 31 90 01 74 31 49 71 48 86 81 16 23 57 05 54]
[01 70 54 71 83 51 54 69 16 92 33 48 61 43 52 01 89 19 67 48]])
EDIT no.1:
I found numpy.array now.
x = np.array([[1, 2, 3], [4, 5, 6]], np.int32)
Is there a way to do it without the commas?
EDIT no.2:
I also found a new problem.
Python: Invalid Token
Invalid token in number 08! :)
You can define the numbers in a string and split it easily into rows/columns:
nums = """\
1 2 3
4 5 6
7 8 9 10
"""
rows = [[int(n) for n in row.split()] for row in nums.splitlines()]
print(rows)  # -> [[1, 2, 3], [4, 5, 6], [7, 8, 9, 10]]
Check out NumPy - specifically, the N-dimensional array object.
Your code example won't compile unless you put commas between the list elements.
For example, this will compile:
value = [
    [ 1,  2,  3,  4],
    [ 5,  6,  7,  8],
    [ 9, 10, 11, 12],
]
If you're interested in taking strings like the one you show and parsing them into a list of lists (or a numpy multi-dimensional array), or if you have a list of lists or a numpy array and want to print it out as you describe, you can do that with a clever couple of list comprehensions.
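For instance, a sketch (mine) of such a parse; note that int() on a string like "08" is fine, since only numeric literals with leading zeros are rejected by the parser:

text = """[[08 02 22 97 38]
[49 49 99 40 17]]"""

grid = [[int(tok) for tok in line.replace("[", "").replace("]", "").split()]
        for line in text.splitlines()]
# [[8, 2, 22, 97, 38], [49, 49, 99, 40, 17]]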
What you have above does not work, e.g. if pasted into a file and then run as a script, or pasted into the interpreter. I get:
SyntaxError: invalid token
Again, I suspect that what you have done is paste text (a string) containing these characters. They are not integers, and you will get nowhere unless you realize that fact.
Edit: I see...we only get "invalid syntax" if we avoid the "invalid token" error caused by the "08"
>>> import numpy
>>> number = numpy.array([[08 02 22 97]])
File "<stdin>", line 1
number = numpy.array([[08 02 22 97]])
^
SyntaxError: invalid token
>>> number = numpy.array([[18 12 22 97]])
File "<stdin>", line 1
number = numpy.array([[18 12 22 97]])
^
SyntaxError: invalid syntax
As for parsing the actual data, if you don't want to read it from a file or use more sensible methods, there is always this:
s = """[[08 02 22 97 38 15 00 40 00 75 04 05 07 78 52 12 50 77 91 08]
...etc
"""
s = s.replace("]", "").replace("[", "").split()
numbers = [int(x) for x in s]
Then you have a flat list of numbers, which you can have fun with.
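To get from that flat list back to a 2D grid (a sketch, assuming numbers holds all 400 values of the 20x20 Euler grid):

import numpy as np

grid = np.array(numbers).reshape(20, 20)
print(grid.shape)  # (20, 20)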
