Find distance to ones in a binary numpy array in Python

For a robotics project, I've used ultrasound as vision. From edge detection algorithms I've generated a binary numpy array. Now I'm not sure what the most cost-efficient way of calculating the distance to the object is. Say I wanted to calculate the shortest distance from a one to the top-left corner. Would it be possible to use "np.where" and "dst = numpy.linalg.norm( )"?
import numpy as np
from scipy import ndimage
from PIL import Image
# `result` is the edge-detection output (not shown here)
Max_filtrated = np.where(result > np.amax(result)*0.8, 0, result)
Band_filtrated = np.where(Max_filtrated > np.amax(Max_filtrated)*0.11, 1, 0)
####### Define connected region and remove noise ########
mask = Band_filtrated> Band_filtrated.mean()
label_im, nb_labels = ndimage.label(mask)
sizes = ndimage.sum(mask, label_im, range(nb_labels + 1))
mean_vals = ndimage.sum(im, label_im, range(1, nb_labels + 1))  # note: `im` is not defined in this snippet and mean_vals is unused below
mask_size = sizes < 500
remove_pixel = mask_size[label_im]
label_im[remove_pixel] = 0
Ferdig = np.where(label_im > np.amax(label_im)*0.1, 1, 0)  # final binary mask
#########################################################
Thanks
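For reference, the np.where / np.linalg.norm idea from the question does work: collect the coordinates of the ones, then take per-row norms. A minimal sketch (variable names are illustrative; `binary` stands in for the final 0/1 mask such as Ferdig above):
import numpy as np
# `binary` stands in for the final 0/1 mask produced by the pipeline above
binary = np.zeros((200, 200), dtype=int)
binary[150:160, 40:50] = 1
# Row/column coordinates of every one-pixel
ones = np.argwhere(binary == 1)
# Euclidean distance of each one-pixel from the top-left corner (0, 0)
distances = np.linalg.norm(ones, axis=1)
# Nearest one-pixel and its distance
nearest = ones[distances.argmin()]
print(nearest, distances.min())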

I tried doing this a different way - using the same image as I trimmed for my other answer. This time I calculate each pixel as the square of its distance from the origin, then make all black pixels in the input image ineligible for being the nearest by setting them to a big number. Then I find the smallest number in the array.
#!/usr/bin/env python3
import sys
import numpy as np
from PIL import Image
# Open image in greyscale and make into Numpy array
im = Image.open('curve.png').convert('L')
na = np.array(im)
# Make grid where every pixel is the squared distance from origin - no need to sqrt()
# This could be done outside main loop, btw
x,y = np.indices(na.shape)
dist = x*x + y*y
# Make all black pixels ineligible to be nearest
dist[np.where(na<128)] = sys.maxsize
# Find cell with smallest value, i.e. smallest distance
resultY, resultX = np.unravel_index(dist.argmin(), dist.shape)
print(f'Coordinates: [{resultY},{resultX}]')
Sample Output
Coordinates: [159,248]
Keywords: Python, image processing, nearest white pixel, nearest black pixel, nearest foreground pixel, nearest background pixel, Numpy

I trimmed your image as follows - please don't post images with axes and labels if folks need to process them!
I then leverage SciPy's cdist() function. So, first generate a list of all the white pixels in the image, then calculate the distance from the origin at the top-left to each pixel in that list, and finally find the minimum.
#!/usr/bin/env python3
import numpy as np
from PIL import Image
from scipy.spatial.distance import cdist
# Open image in greyscale and make into Numpy array
im = Image.open('curve.png').convert('L')
na = np.array(im)
# Get coordinates of white pixels
whites = np.where(na>127)
# Get distance from [0,0] to each white pixel
distances = cdist([(0,0)],np.transpose(whites))
# Index of nearest
ind = distances.argmin()
# Distance of nearest
d = distances[0,ind]
# Coords of nearest
x, y = whites[0][ind], whites[1][ind]
print(f'distance [{x},{y}] = {d}')
Sample Output
distance [159,248] = 294.5929394944828
If I draw a red circle radius=294 centred on the origin and a blue circle centred on those x,y coordinates:
Keywords: Python, image processing, nearest white pixel, nearest black pixel, nearest foreground pixel, nearest background pixel, Numpy, cdist()

Related

Gradual conversion of image to greyscale with numpy in python

Say I have an image, and I want to have it fade out to greyscale over a distance.
I already know that to entirely convert an image to greyscale with Numpy, I'd do something like
import numpy as np
import cv2
myImage = cv2.imread("myImage.jpg")
grey = np.dot(myImage[...,:3], [0.2989, 0.5870, 0.1140])
This is not what I'm looking for. I already can get that to work.
I have an NxMx3 matrix (where N and M are the dimensions of the image), whose third dimension holds the red, green, and blue transform weights.
So, for a given origin and radius of "keep this colored", I have
greyscaleWeights = np.array([0.2989, 0.5870, 0.1140])
# We flip this so we can weight down the transformation
greyscaleWeightOffsets = np.ones(3) - greyscaleWeights
from scipy.spatial.distance import cdist as getDistances
transformWeighter = list()
for rowNumber in np.arange(rowCount, dtype='int'):
    # Create a row of tuples containing the coordinate we are at in the picture
    row = [(x, rowNumber) for x in np.arange(columnCount, dtype='int')]
    # Transform this into a row of distances from our in-color center
    rowDistances = getDistances(row, [self.focusOrigin]).T[0]
    # Get the transformation weights: inside of the focus radius we have no transform,
    # outside of the pixelDistanceToFullTransform we have a weight of 1, and an even
    # gradation in-between
    rowWeights = [np.clip((x - self.focusRadius) / pixelDistanceToFullTransform, 0, 1) for x in rowDistances]
    transformWeighter.append(rowWeights)
# Convert this into an numpy array
transformWeighter = np.array(transformWeighter)
# Change this 1-D set of weights into 3-D weights (for each color channel)
transformRGB = np.repeat(transformWeighter[:, :, None],3, axis=1).reshape(self.image.shape)
# Change the weight offsets back into greyscale weights
greyscaleTransform = 1 - greyscaleWeightOffsets * transformRGB
greyscaleishImage = self.image * greyscaleTransform
I do get the fade behaviour I was hoping for, but it just fades into the green channel while nuking the red and blue, so far as I can tell.
So, for example, the sample input (image omitted) transforms into an image with the correct fade behaviour, but fading to green instead of greyscale...
Well, the answer was both easy and hard.
The premise of my question was fundamentally flawed. To quote this answer on answers.opencv.org:
First, you must understand that a MxNx3 in greyscale doesn't exist. I mean, the concept of greyscale is that you have one channel describing the intensity on a gradual scale between black and white. So, it is not clear why would you need a 3 channels greyscale image, but if you do, I suggest that you take the value of each pixel of your 1 channel greyscale image and that you copy it three times, one on each channel of a BGR image. When a BGR image has the same value on each channel, it appears to be grey.
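As an aside, the "copy the value three times" idea from that quote is a one-liner in numpy; a minimal sketch, assuming `myImage` as loaded in the question:
import numpy as np
# Weighted sum to a single greyscale channel, then stack it three times
grey = np.dot(myImage[..., :3], [0.2989, 0.5870, 0.1140]).astype(np.uint8)
grey_3ch = np.repeat(grey[:, :, None], 3, axis=2)  # displays as grey in an RGB/BGR viewer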
The correct answer then was to change the colour space and then desaturate the image, so
imageHSV = cv2.cvtColor(self.image, cv2.COLOR_RGB2HSV)
# `saturationWeighter` is a per-pixel weight map derived from the distance-based weights above (1 = keep saturation)
newSaturationChannel = saturationWeighter * imageHSV[:,:,1]
imageHSV[:,:,1] = newSaturationChannel
greyscaleishImage = cv2.cvtColor(imageHSV, cv2.COLOR_HSV2RGB)

I am trying to measure land plot area using OpenCV in Python

So far I have been able to perform medianBlur and edge detection. Now I want to further remove noise from the image. In MATLAB, the regionprops function was used to remove all white regions that had a total pixel area of less than the mean pixel area value. How can I implement this in Python?
import matplotlib.image as mpimg
import numpy as np
import cv2
import os
import math
from collections import defaultdict
from matplotlib import pyplot as plt
import imutils
#import generalized_hough
image = cv2.imread('small_farms.JPG')  # same input image as in the MATLAB code below
gray = cv2.cvtColor(image, cv2.COLOR_RGB2GRAY)
print(gray.shape)
blur = cv2.bilateralFilter(gray,9,75,75)
median = cv2.medianBlur(gray,5)
# display input and output image
titles = ["bilateral Smoothing","median blur"]
images = [ blur, median]
plt.figure(figsize=(20, 20))
for i in range(2):
    plt.subplot(1,2,i+1)
    plt.imshow(images[i])
    plt.title(titles[i])
    plt.xticks([]), plt.yticks([])
plt.show()
sobelX = cv2.Sobel(median, cv2.CV_64F, 1, 0)
sobelY = cv2.Sobel(median, cv2.CV_64F, 0, 1)
sobelX = np.uint8(np.absolute(sobelX))
sobelY = np.uint8(np.absolute(sobelY))
SobelCombined = cv2.bitwise_or(sobelX,sobelY)
cv2.imshow('img', SobelCombined)
cv2.waitKey(0)
cv2.destroyAllWindows()
Here is MATLAB code that works for the same task.
close all
%upload image of farm
figure,
farm = imread('small_farms.JPG');%change this to the file path of image
imshow(farm);%this shows the original image
%convert the image to grayscale for 2D manipulation
gfarm = rgb2gray(farm);
figure,
imshow(gfarm);%show grayscaled image
%median filters take a m*n area around a coordinate and
%find the median pixel value and set that coordinate to that
%pixel value. It's a method of removing noise or details in an
%image. may want to tune dimensions of filter.
A = medfilt2(gfarm,[4 4]);
figure,
imshow(A);
%perform a logarithmic edge detection filter,
%this picks out the edges of the image, log setting
%was found to work best, although 'Sobel' can also be tried
B = edge(A,'log');
%show results of the edge filter
figure,
imshow(B,[]);
%find the areas of the lines made
areas = regionprops(B,'Area');
%find the mean and one standard deviation
men = mean([areas.Area])+0*std([areas.Area]);
%find max pixel area
big = max([areas.Area]);
%remove regions that are too small
C = bwpropfilt(B,'Area',[men big]);
%perform a dilation on the remaining pixels, this
%helps fill in gaps. The size and shape of the dilation
%can be tuned below.
SE = strel('square',4);
C = imdilate(C,SE);
areas2 = regionprops(C,'Area');
%place white border around image to find areas of farms
%that go off the picture
[h,w] = size(C);
C(1,:) = 1;
C(:,1) = 1;
C(h,:) = 1;
C(:,w) = 1;
C = C<1;
%fill in holes
C = imfill(C,'holes');
%show final processed image
figure,imshow(C);
%the section below is for display purpose
%it creates the boundaries of the image and displays them
%in a rainbow fashion
figure,
[B,L,n,A] = bwboundaries(C,'noholes');
imshow(label2rgb(L, @jet, [.5 .5 .5]))
hold on
for k = 1:length(B)
    boundary = B{k};
    plot(boundary(:,2), boundary(:,1), 'w', 'LineWidth', 2)
end
%The section below prints out the areas of each found
%region by pixel values. These values need to be scaled
%by the real measurements of the images to get relevant
%metrics
centers = regionprops(C,'Centroid','Area');
for k=1:length(centers)
    if(centers(k).Area > mean([centers.Area])-std([areas.Area]))
        text(centers(k).Centroid(1),centers(k).Centroid(2),string(centers(k).Area));
    end
end
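For the Python side, the same mean-area filtering can be sketched with OpenCV's connected-components API. This is a minimal sketch, not the original poster's code; it assumes `SobelCombined` from the Python snippet above, binarised with an (arbitrarily chosen) Otsu threshold:
import cv2
import numpy as np
# Binarise the Sobel edge image (Otsu threshold is an arbitrary choice here)
_, edges = cv2.threshold(SobelCombined, 0, 255, cv2.THRESH_BINARY | cv2.THRESH_OTSU)
# Label connected white regions and collect per-region statistics
num_labels, labels, stats, _ = cv2.connectedComponentsWithStats(edges)
# Pixel areas of the actual regions (label 0 is the background)
areas = stats[1:, cv2.CC_STAT_AREA]
mean_area = areas.mean()
# Keep only regions at least as large as the mean area (mirrors the bwpropfilt call)
keep = np.zeros_like(edges)
for label in range(1, num_labels):
    if stats[label, cv2.CC_STAT_AREA] >= mean_area:
        keep[labels == label] = 255
cv2.imshow('filtered', keep)
cv2.waitKey(0)
cv2.destroyAllWindows()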

nine point smooth in OpenCV

How can i apply nine point smooth using OpenCV?
Info: a nine-point smooth takes a 3x3 square of 9 pixels and reads the count (value) of each pixel. The counts are then averaged, and that average is assigned to the central pixel.
Nine Point Smooth : http://www.people.vcu.edu/~mhcrosthwait/clrs322/2DFilteringconcepts.htm
From the docs mentioned in the comments: https://docs.opencv.org/3.1.0/d4/d13/tutorial_py_filtering.html. It would be:
import cv2
import numpy as np
img = cv2.imread('opencv_logo.png')
blur = cv2.blur(img,(3,3))
Or slightly more manually:
kernel = np.ones((3,3), np.float32)/9
dst = cv2.filter2D(img,-1,kernel)
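For reference, the same nine-point average can also be written by hand in NumPy; a minimal sketch for a single-channel image (edge handling via replication is an arbitrary choice):
import numpy as np
def nine_point_smooth(img):
    # img: single-channel image; returns the 3x3 mean at every pixel
    img = img.astype(np.float32)
    padded = np.pad(img, 1, mode='edge')  # replicate edges so border pixels keep a full 3x3 window
    out = np.zeros_like(img)
    for dy in range(3):
        for dx in range(3):
            out += padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / 9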

New coordinates after image rotation using scipy.ndimage.rotate [duplicate]

I have a numpy array for an image that I read in from a FITS file. I rotated it by N degrees using scipy.ndimage.interpolation.rotate. Then I want to figure out where some point (x,y) in the original non-rotated frame ends up in the rotated image -- i.e., what are the rotated frame coordinates (x',y')?
This should be a very simple rotation matrix problem but if I do the usual mathematical or programming based rotation equations, the new (x',y') do not end up where they originally were. I suspect this has something to do with needing a translation matrix as well because the scipy rotate function is based on the origin (0,0) rather than the actual center of the image array.
Can someone please tell me how to get the rotated frame (x',y')? As an example, you could use
from scipy import misc
from scipy.ndimage import rotate
data_orig = misc.face()
data_rot = rotate(data_orig,66) # data array
x0,y0 = 580,300 # left eye; (xrot,yrot) should point there
P.S. The following two related questions' answers do not help me:
Find new coordinates of a point after rotation
New coordinates after image rotation using scipy.ndimage.rotate
As usual with rotations, one needs to translate to the origin, then rotate, then translate back. Here, we can take the center of the image as origin.
import numpy as np
import matplotlib.pyplot as plt
from scipy import misc
from scipy.ndimage import rotate
data_orig = misc.face()
x0,y0 = 580,300 # left eye; (xrot,yrot) should point there
def rot(image, xy, angle):
    im_rot = rotate(image, angle)
    org_center = (np.array(image.shape[:2][::-1])-1)/2.
    rot_center = (np.array(im_rot.shape[:2][::-1])-1)/2.
    org = xy-org_center
    a = np.deg2rad(angle)
    new = np.array([org[0]*np.cos(a) + org[1]*np.sin(a),
                    -org[0]*np.sin(a) + org[1]*np.cos(a)])
    return im_rot, new+rot_center
fig,axes = plt.subplots(2,2)
axes[0,0].imshow(data_orig)
axes[0,0].scatter(x0,y0,c="r" )
axes[0,0].set_title("original")
for i, angle in enumerate([66,-32,90]):
    data_rot, (x1,y1) = rot(data_orig, np.array([x0,y0]), angle)
    axes.flatten()[i+1].imshow(data_rot)
    axes.flatten()[i+1].scatter(x1,y1,c="r")
    axes.flatten()[i+1].set_title("Rotation: {}deg".format(angle))
plt.show()

how to locate the center of a bright spot in an image?

Here is an example of the kinds of images I'll be dealing with:
(source: csverma at pages.cs.wisc.edu)
There is one bright spot on each ball. I want to locate the coordinates of the centre of the bright spot. How can I do it in Python or Matlab? The problem I'm having right now is that more than one point on the spot has the same (or roughly the same) white colour, but what I need is to find the centre of this 'cluster' of white points.
Also, for the leftmost and rightmost images, how can I find the centre of the whole circular object?
You can simply threshold the image and find the average coordinates of what is remaining. This handles the case when there are multiple values that have the same intensity. When you threshold the image, there will obviously be more than one bright white pixel, so if you want to bring it all together, find the centroid or the average coordinates to determine the centre of all of these white bright pixels. There isn't a need to filter in this particular case. Here's something to go with in MATLAB.
I've read in that image directly, converted to grayscale and cleared off the white border that surrounds each of the images. Next, I split up the image into 5 chunks, threshold the image, find the average coordinates that remain and place a dot on where each centre would be:
im = imread('http://pages.cs.wisc.edu/~csverma/CS766_09/Stereo/callight.jpg');
im = rgb2gray(im);
im = imclearborder(im);
%// Split up images and place into individual cells
split_point = floor(size(im,2) / 5);
images = mat2cell(im, size(im,1), split_point*ones(5,1));
%// Show image to place dots
imshow(im);
hold on;
%// For each image...
for idx = 1 : 5
    %// Get image
    img = images{idx};
    %// Threshold
    thresh = img > 200;
    %// Find coordinates of thresholded image
    [y,x] = find(thresh);
    %// Find average
    xmean = mean(x);
    ymean = mean(y);
    %// Place dot at centre
    %// Make sure you offset by the right number of columns
    plot(xmean + (idx-1)*split_point, ymean, 'r.', 'MarkerSize', 18);
end
I get this:
If you want a Python solution, I recommend using scikit-image combined with numpy and matplotlib for plotting. Here's the above code transcribed in Python. Note that I saved the image referenced by the link manually on disk and named it balls.jpg:
import skimage.io
import skimage.segmentation
import numpy as np
import matplotlib.pyplot as plt
# Read in the image
# Note - intensities are floating point from [0,1]
im = skimage.io.imread('balls.jpg', True)
# Threshold the image first then clear the border
im_clear = skimage.segmentation.clear_border(im > (200.0/255.0))
# Determine where to split up the image
split_point = int(im.shape[1]/5)
# Show image in figure and hold to place dots in
plt.figure()
plt.imshow(np.dstack([im,im,im]))
# For each image...
for idx in range(5):
    # Extract sub image
    img = im_clear[:,idx*split_point:(idx+1)*split_point]
    # Find coordinates of thresholded image
    y,x = np.nonzero(img)
    # Find average
    xmean = x.mean()
    ymean = y.mean()
    # Plot on figure
    plt.plot(xmean + idx*split_point, ymean, 'r.', markersize=14)
# Show image and make sure axis is removed
plt.axis('off')
plt.show()
We get this figure:
Small sidenote
I could have totally skipped the above code and used regionprops (MATLAB link, scikit-image link). You could simply threshold the image, then apply regionprops to find the centroids of each cluster of white pixels, but I figured I'd show you a more manual way so you can appreciate the algorithm and understand it for yourself.
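For completeness, a sketch of that regionprops route with scikit-image (assuming the same balls.jpg and threshold as above):
import skimage.io
import skimage.measure
import skimage.segmentation
im = skimage.io.imread('balls.jpg', True)
binary = skimage.segmentation.clear_border(im > (200.0/255.0))
# Label the connected white blobs and take the centroid of each one
labels = skimage.measure.label(binary)
for region in skimage.measure.regionprops(labels):
    cy, cx = region.centroid  # centroids come back as (row, col)
    print(f'centre at x={cx:.1f}, y={cy:.1f}')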
Hope this helps!
Use a 2D convolution and then find the point with the highest intensity. You can apply a convex non-linear function (such as exp) to the intensity values before applying the 2D convolution, to emphasise the bright spots relative to the dimmer parts of the image. Something like conv2(exp(img),ker).
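A rough Python equivalent of that idea, using SciPy's convolve2d (a sketch; the kernel size and the exp scaling factor are arbitrary choices):
import numpy as np
from scipy.signal import convolve2d
# `img` is assumed to be a single-channel float image scaled to [0, 1]
img = np.random.rand(100, 100)  # placeholder input
# Emphasise bright pixels before smoothing
emphasised = np.exp(5 * img)
# Average over a small neighbourhood, then take the strongest response
ker = np.ones((9, 9)) / 81.0
response = convolve2d(emphasised, ker, mode='same')
y, x = np.unravel_index(response.argmax(), response.shape)
print(f'bright spot near x={x}, y={y}')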
