I have the image given here, along with the centroid and area of every small and big defect present in it. Concretely, I have three lists x, y, and area, where x and y are the coordinates of each defect's centroid (every yellow object counts as a defect) and area is the defect's area computed from its contour. I want to show a density map or heatmap over this image in which a defect with a larger area clearly shows a higher peak than a defect with a smaller area. How can I do this in Python? For reference, I have attached one more image from a paper; there, based on the KDE and weighted KDE of the image, it is clearly shown that the bigger defect (the big yellow circle) gets the stronger peak.
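For concreteness, here is a minimal sketch of the kind of weighted-KDE overlay I mean; scipy's gaussian_kde accepts a weights argument (scipy >= 1.2), and the data values and 'defects.png' filename below are made up:

import numpy as np
import matplotlib.pyplot as plt
from scipy.stats import gaussian_kde

# hypothetical stand-ins for the three lists described above
x = np.array([30.0, 120.0, 200.0])
y = np.array([40.0, 90.0, 160.0])
area = np.array([15.0, 300.0, 60.0])

img = plt.imread('defects.png')   # the inspected image (assumed filename)
h, w = img.shape[:2]

# weighted KDE: defects with larger area carry more mass, hence higher peaks
kde = gaussian_kde(np.vstack([x, y]), weights=area)

# evaluate the density on a coarse pixel grid
xx, yy = np.meshgrid(np.linspace(0, w, 200), np.linspace(0, h, 200))
density = kde(np.vstack([xx.ravel(), yy.ravel()])).reshape(xx.shape)

# overlay the heatmap semi-transparently on the image
plt.imshow(img)
plt.imshow(density, extent=(0, w, h, 0), cmap='jet', alpha=0.5)
plt.axis('off')
plt.show()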
So you are trying to draw a heatmap superimposed on an image, to represent what you are calling the "defects" in the image (it's not clear from your explanation what those are--maybe deviations from a reference image?). This sounds like it would be VERY confusing for a viewer to interpret, having to mentally separate the heatmap pixels from the pixels of the image itself. Much better would be to create a new blank image with the same dimensions as the original, then plot points in that image whose center (x, y) represents the location in the original image and whose radius/color represents the area.
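A minimal sketch of that idea, reusing the made-up x, y, area arrays from the snippet above; marker size and colour both stand in for the defect area:

import numpy as np
import matplotlib.pyplot as plt

# same hypothetical x, y, area arrays as in the previous sketch
x = np.array([30.0, 120.0, 200.0])
y = np.array([40.0, 90.0, 160.0])
area = np.array([15.0, 300.0, 60.0])

fig, ax = plt.subplots()
ax.set_xlim(0, 250)
ax.set_ylim(250, 0)          # flipped y-axis to match image coordinates
ax.set_facecolor('black')    # blank canvas with the image's dimensions

# marker size and colour both encode the defect area
sc = ax.scatter(x, y, s=area, c=area, cmap='hot')
fig.colorbar(sc, label='defect area')
plt.show()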
I have a depth image but I am manipulating it like a (2d) grayscale image to analyze the shape of the figures I have.
I am trying to get the width (distance) of a shape, as given by this image. The width is shown by the red line, which also follows the direction of vector v2.
I have the vectors shown in the image, resulting from a 2-component PCA used to find the direction of the shape (the shape in the picture is cropped, since I just need the width shown in red, on this part of the shape).
I have no clue how to rotate the points to the origin, or how to project the points onto the line and then calculate the width, perhaps by computing the Euclidean distance from min to max.
How can I get the width given by a set of points that are not aligned to an axis?
I managed it using a rotated bounding box from cv2, as described in this solution.
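For completeness, a minimal sketch of that approach, assuming a binary mask of the shape (the path is made up, and cv2.findContours is shown with the OpenCV 4 return signature):

import cv2

# binary mask of the (cropped) shape, derived from the depth image (assumed path)
mask = cv2.imread('shape_mask.png', 0)

# OpenCV 4 returns (contours, hierarchy); OpenCV 3 returns 3 values
contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
cnt = max(contours, key=cv2.contourArea)

# rotated bounding box: ((cx, cy), (w, h), angle)
(cx, cy), (w, h), angle = cv2.minAreaRect(cnt)

# the width along the minor axis is the shorter side of the rotated box
width = min(w, h)
print(width)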
I am now facing a new problem with the GetDist library, which is available from the GetDist home page. Examples are given in the GetDist plot gallery.
This is a tool to plot joint distributions for a set of covariance matrices.
Everything works fine except for one detail that disturbs me: if I zoom in very deeply, I notice a slight shift between the filled contours and the line contours. I illustrate this with the following zoomed figure (the smallest contours refer to the 1-sigma uncertainty and the largest to 2 sigma), representing the ellipses of 2 covariance matrices.
In this figure, I have zoomed in very deeply on a subplot. If I zoom back out, I get this kind of image:
The relevant section that generates the triplot is:
# Call triangle_plot
g.triangle_plot([matrix1, matrix2],
                names,
                filled=True,
                legend_labels=[],
                contour_colors=['darkblue', 'red'],
                line_args=[{'lw': 2, 'color': 'darkblue'},
                           {'lw': 2, 'color': 'red'}],
                )
I don't understand why the filled areas (red and dark blue) slightly exceed the lines of the corresponding contours.
Maybe it is related to my computation of the limits of the ellipse along the x and y coordinates (done so that the ellipse fully fills the subplot) and to rounding errors. I tried modifying these parameters without success.
I haven't looked at the code, but what I can see from the image is that the border is half inset and half outset. I assume that the border has a transparency like the shape's fill color, and thus it gives the effect of a shifted dark border, while this is really just the part where the transparent border and the transparent background overlay each other.
The following example shows two circles with a background color of rgba(0,0,0,0.5). The border on circle A has no transparency: rgb(0,0,0), while on circle B the border color matches the fill color (so 50% opacity: rgba(0,0,0,0.5)).
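The original demo was in CSS; here is a rough matplotlib analogue of the same effect (my own construction, not GetDist code). The edge straddles the circle's outline, and where a semi-transparent edge overlaps the semi-transparent fill it composes to a darker band, which reads as a shifted border:

import matplotlib.pyplot as plt
from matplotlib.patches import Circle

fig, ax = plt.subplots()

# circle A: semi-transparent fill, fully opaque edge
ax.add_patch(Circle((0.3, 0.5), 0.2, facecolor=(0, 0, 0, 0.5),
                    edgecolor=(0, 0, 0, 1.0), linewidth=6))

# circle B: edge colour identical to the fill colour (50% opacity)
ax.add_patch(Circle((0.7, 0.5), 0.2, facecolor=(0, 0, 0, 0.5),
                    edgecolor=(0, 0, 0, 0.5), linewidth=6))

ax.set_aspect('equal')
plt.show()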
I'm very new to image processing and object detection. I'd like to extract/identify the positions and dimensions of the teeth in the following image:
Here's what I've tried so far using OpenCV:
import cv2
import numpy as np

planets = cv2.imread('model.png', 0)
canny = cv2.Canny(planets, 70, 150)
circles = cv2.HoughCircles(canny, cv2.HOUGH_GRADIENT, 1, 40,
                           param1=10, param2=16, minRadius=10, maxRadius=80)
circles = np.uint16(np.around(circles))
for i in circles[0, :]:
    # draw the outer circle
    cv2.circle(planets, (i[0], i[1]), i[2], (255, 0, 0), 2)
    # draw the center of the circle
    cv2.circle(planets, (i[0], i[1]), 2, (255, 0, 0), 3)

cv2.imshow("HoughCircles", planets)
cv2.waitKey()
cv2.destroyAllWindows()
This is what I get after applying canny filter:
This is the final result:
I don't know where to go from here. I'd like to get all of the teeth identified. How can I do that?
I'd really appreciate any help.
Note that the teeth structure is more or less an upside-down parabola. If you could somehow guess the parabolic shape that defines the centroids of those blobs (teeth), then your problem could be simplified to a reasonable extent. I have shown a red line that passes through the centers of the teeth.
I would suggest you approach it as follows:
1. Binarize your image (background = 0, everything else = 1). You could use sklearn.preprocessing.binarize.
2. Calculate the centroid of all the non-zero pixels. This is the central blue circle in the image. Call this structure_centroid. See this: How to center the nonzero values within 2D numpy array?.
3. Make polar slices of the entire image, centered at the location of the structure_centroid. I have shown a cartoon image of such polar slices (triangular, semi-transparent). Cover the complete 360 degrees. See this: polarTransform library.
4. Determine the centroid of the non-zero pixels for each of these polar slices. See these: find the distance between a point and a curve python; Find the minimum distance from a point to a curve.
5. The array containing these centroids gives you the locus (path) of the average location of the teeth. Call this centroid_path.
6. Run an elimination/selection algorithm on the circles you were able to detect, keeping those closest to the centroid_path. Use a threshold distance to drop the outliers.
This should give you a good approximation of the teeth with the circles.
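Here is a minimal numpy sketch of steps 1-4 under these assumptions: plain angle binning instead of the polarTransform library, and 'model.png' (from the question) as the input file:

import cv2
import numpy as np

img = cv2.imread('model.png', 0)            # grayscale input (assumed path)
binary = (img > 0).astype(np.uint8)         # step 1: binarize

# step 2: centroid of all non-zero pixels (structure_centroid)
ys, xs = np.nonzero(binary)
cx, cy = xs.mean(), ys.mean()

# steps 3-4: bin the non-zero pixels into angular (polar) slices around the
# centroid, then take the centroid of the pixels inside each slice
angles = np.arctan2(ys - cy, xs - cx)
n_slices = 36
edges = np.linspace(-np.pi, np.pi, n_slices + 1)
bins = np.clip(np.digitize(angles, edges) - 1, 0, n_slices - 1)

centroid_path = np.array([[xs[bins == b].mean(), ys[bins == b].mean()]
                          for b in range(n_slices) if np.any(bins == b)])
print(centroid_path)                        # locus of the average tooth location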
I hope this helps.
This is the continuation of my previous question. I now have an image like this
Here the corners are detected. Now I am trying to estimate the dimensions of the bigger box, while the smaller black box's dimensions are known.
Can anyone guide me on the best way to estimate the dimensions of the box? I can do it with simple Euclidean distance, but I don't know if that is the correct way. And even if it is, then given a list of tuples (coordinates), how can I find distances like A-B, A-D, or G-H, but not ones like A-C or A-F?
The sequence has to be preserved in order to get the correct dimensions. Also, I have two boxes here, so the list of corner coordinates contains all the coordinates from A-J, and I don't know which coordinates belong to which box. How can I keep that separation for two different boxes? I want to run this code on more similar images.
Note: the corners in this image are not single points but sets of points, so I clustered each set of corner points and averaged them to get a single (x, y) coordinate per corner.
I have tried my best to explain my questions. I will be extremely glad to have some answers :) Thanks.
For the "How can I find distances like A-B or A-D or G-H but not like A-C or A-F" part:
Here's some quick code. It's not efficient for images with lots of corners, but for your case it's OK. The idea is to start from the dilated edge image you got in your other question (with only the big box, but the idea is the same for the image that also contains the small box).
Then, for every possible pair of corners, you look at a few points on an imaginary line between them and check whether these points actually fall on a real line in the image.
import cv2
import numpy as np

#getting an intermediate point on the line between point1 and point2
#for example, calling this function with (p1,p2,3) will return the point
#on the line between p1 and p2, at 1/3 of the way from p1 to p2
def get_intermediate_point(p1, p2, ratio):
    return [int(p1[0] + (p2[0] - p1[0]) / ratio),
            int(p1[1] + (p2[1] - p1[1]) / ratio)]

#open the dilated edge image (example path)
img = cv2.imread('dilated_edges.png', 0)

#corners you got from your segmentation and other question
#(assumed (row, col) order, to match the img[p[0], p[1]] indexing below)
corners = [[29, 94], [102, 21], [184, 52], [183, 547], [101, 576], [27, 509]]
nb_corners = len(corners)

#intermediate points between corners you are going to test
ratios = [2, 4, 6, 8]  #in this example: the middle point, the quarter point, etc.
nb_ratios = len(ratios)

#list which will contain all pairs of connected corners
connected_corners = []

#double loop going through all possible pairs of corners
for i in range(nb_corners - 1):
    for j in range(i + 1, nb_corners):
        cpt = 0
        c1 = corners[i]; c2 = corners[j]
        #testing every intermediate point between the selected corners
        for ratio in ratios:
            p = get_intermediate_point(c1, c2, ratio)
            #checking if these points fall on a white pixel in the image
            if img[p[0], p[1]] == 255:
                cpt += 1
        #if enough of the intermediate points fall on a white pixel
        if cpt >= int(nb_ratios * 0.75):
            #then we assume that the 2 corners are indeed connected by a line
            connected_corners.append([i, j])

print(connected_corners)
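From there, the distances the question asks about (A-B, A-D, G-H, ...) are just the Euclidean distances between the connected pairs. Continuing from the snippet above:

#edge length in pixels for each pair of connected corners
for i, j in connected_corners:
    length = np.linalg.norm(np.array(corners[i]) - np.array(corners[j]))
    print(corners[i], corners[j], length)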
In general you cannot, since any reconstruction is only up to scale.
Basically, given a calibrated camera and 6 2D points, you have 6x2 = 12 equations, but you want to find 6 3D points plus a scale factor, i.e. 6x3 + 1 = 19 unknowns. There aren't enough equations.
In order to do so, you will have to make some assumptions and insert them into the equations.
For example:
The box edges are perpendicular to each other (which means that every 2 neighboring points share at least one coordinate value).
You can assume that you know the height of the bottom points, i.e. that they lie on the same plane as your calibration box (this gives you the Z of the visible bottom points).
Hopefully, these constraints are enough to give you at least as many equations as unknowns, so that you can solve the linear equation set.
I'm pretty new to numpy.
I have been looking around for how to do this, but I can't find anything easy enough.
This is the problem.
I'm identifying particles in red (that part is OK and done), so I have an array with their locations.
I make a new image from these locations with grey dilation from scipy.ndimage, so the dilated positions have a value and the rest is 0.
Then I multiply this image with another image (the green color), so that the new image only has signal where there are particles in red. What I want to do is to compute the mean of the intensities in this other color per given point, within a given radius or square for example.
How can I do this? Do I apply scipy.ndimage.measurements.label to the initial color and then use the same array indexes to get the means? Or can I just take the x, y coordinates and compute the mean() over a given radius?
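For what it's worth, here is a minimal sketch of the label-then-mean idea using scipy.ndimage.label and scipy.ndimage.mean; the input arrays and .npy filenames are made-up stand-ins:

import numpy as np
from scipy import ndimage

# hypothetical inputs: the dilated red-particle image and the green channel
red_mask = np.load('red_mask.npy')   # non-zero at the dilated particle positions
green = np.load('green.npy')         # same shape as red_mask

# label each dilated particle footprint, then average green inside each label
labels, n = ndimage.label(red_mask > 0)
means = ndimage.mean(green, labels=labels, index=np.arange(1, n + 1))
print(means)                         # one mean green intensity per particle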