Extract pixels within quarter circle in Python

I have an image and I want to extract the pixels of a specific part of it. This part is a quarter circle, and I want to obtain the pixels that lie inside it.
I have the coordinates of the center and of the points where the two bounding lines meet the circle. How can I extract that one quarter and ignore the rest of the image?

The disk has the equation (X-Xc)²+(Y-Yc)²≤R².
The half-planes meeting at the center have equations of the form c(X-Xc) + s(Y-Yc) ≥ 0, where c and s are the cosine and sine of the angle of the inward normal of each bounding line.
Hence, scan the image (or just the bounding box of the circle) and keep the pixels (X, Y) for which all three constraints are satisfied.
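For illustration, here is a minimal sketch of that scan in Python/NumPy, assuming the image is already loaded as an array; the center, radius and the two normal angles below are made-up example values:
import numpy as np

# Hypothetical example values: center (xc, yc), radius R, and the angles of the
# inward normals of the two bounding half-planes -- replace with your own data.
xc, yc, R = 100.0, 120.0, 50.0
n1, n2 = np.pi / 2, 0.0                      # quarter spanning polar angles 0..90 degrees

img = np.zeros((240, 320), dtype=np.uint8)   # stand-in for the real image

# Coordinate grids: X varies along columns, Y along rows.
Y, X = np.mgrid[0:img.shape[0], 0:img.shape[1]]

# Constraint 1: inside the disk, (X-Xc)^2 + (Y-Yc)^2 <= R^2
in_disk = (X - xc) ** 2 + (Y - yc) ** 2 <= R ** 2

# Constraints 2 and 3: c*(X-Xc) + s*(Y-Yc) >= 0 for each bounding half-plane
half1 = np.cos(n1) * (X - xc) + np.sin(n1) * (Y - yc) >= 0
half2 = np.cos(n2) * (X - xc) + np.sin(n2) * (Y - yc) >= 0

mask = in_disk & half1 & half2
quarter_pixels = img[mask]                   # 1-D array of the pixel values inside the quarter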

Related

How to find the centroid of multiple rectangles - Python

I have to find the exact centroid of multiple rectangles. The coordinates of each rectangle are as follows:
coord = (0.294792, 0.474537, 0.0989583, 0.347222) ## (xcenter, ycenter, width, height)
I have around 200 rectangles; how can I compute their combined centroid?
I already tried to implement it, but the code did not work well.
My code:
for i in range(len(xCenter)):
    center = np.array((xCenter[i] + (Width[i] / 2), yCenter[i] + (Height[i] / 2)))
This is a somewhat vague question, but if you mean the area-weighted centroid of all rectangles, then each rectangle's center is weighted by the rectangle's area. Think of it as all the mass of each rectangle being compressed into its center, and then taking the centroid of several weighted points. The formula is the sum over i = 1..n (assuming the rectangles are numbered 1 to n) of Area(Rec(i)) * vec(center(i)), divided by the total mass of the system (the sum of all the areas).

If you are referring to the centroid of the covered area in general, ignoring rectangle overlap, that is a little more tricky. One thing you could do is, for each rectangle, check it against all other rectangles; if a pair of rectangles overlaps, split them into a set of non-overlapping rectangles and put those back into the set. Once all rectangles are non-overlapping, find the centroid by mass as above.
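As a minimal sketch of the area-weighted formula, assuming the rectangles are given as (xcenter, ycenter, width, height) tuples like in the question (the list below is hypothetical):
import numpy as np

# Hypothetical rectangles in (xcenter, ycenter, width, height) format.
coords = [
    (0.294792, 0.474537, 0.0989583, 0.347222),
    (0.500000, 0.300000, 0.2000000, 0.100000),
]

rects = np.array(coords)
centers = rects[:, :2]                 # (xcenter, ycenter) is already the center
areas = rects[:, 2] * rects[:, 3]      # width * height

# Area-weighted centroid: sum(area_i * center_i) / sum(area_i)
centroid = (areas[:, None] * centers).sum(axis=0) / areas.sum()
print(centroid)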

Detect the angle of a surface in real-time

My project is about image detection. I am using 1 camera and have 2 shapes (rectangle and triangle) placed on a flat surface. I have successfully been able to:
Detect the contours
Detect the area of the contours
Relate both areas to distance, so I can calculate each shape's distance from the camera
Calculate the lengths, corner coordinates and the angle of each corner in both shapes
Now I need to calculate the angle of rotation of the whole surface when it is rotated around the x-axis (up & down), the y-axis (left & right), or both axes (oblique). This can be seen in the gif I provided. The default angle (0 degrees) is when the surface is parallel to the camera, and any rotation means that an angle around one or both axes is present.
P.S. When the surface is rotated around both axes (oblique), both angles have to be detected separately; for example, I would get 14° around the x-axis and 27° around the y-axis.
Any help would be appreciated.

Approximate 4 circles inside ellipse to get the radii

I need the approximate radii of the following ellipse.
The bottom/top and left/right radii should be the same, but this nevertheless needs to be checked, which means my code should return 4 radii. I did the following in Paint: the green circle should give me the top radius and the red one the left (the right and bottom ones aren't drawn here).
The idea I'm working on is to crop the image (left/right/top/bottom side) and fit circles to the cropped parts. With cv2.findContours, some white pixels get recognized, as highlighted here.
Is there a way to approximate my drawn red circle from above with these given coordinates? The problems I've seen on the internet all start from a given center point or angle, which I don't have. Is there a cv2 function that fits a circle from only some given coordinates, or something similar?
Use cv2.fitEllipse(points) and pass the contour points. – Ziri
Yes, this did the trick. After your function, I got the radii with:
(x, y), radius = cv2.minEnclosingCircle(i)
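A minimal sketch of that pipeline; the file name and threshold are placeholders, and the findContours call uses the OpenCV 4.x return signature:
import cv2

# Placeholder input: a binary image containing the ellipse outline.
img = cv2.imread("ellipse.png", cv2.IMREAD_GRAYSCALE)
_, thresh = cv2.threshold(img, 127, 255, cv2.THRESH_BINARY)

contours, _ = cv2.findContours(thresh, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)

for cnt in contours:
    if len(cnt) < 5:                   # fitEllipse needs at least 5 points
        continue
    # Fit an ellipse: returns the center, the two axis lengths (diameters) and the angle.
    (cx, cy), (d1, d2), angle = cv2.fitEllipse(cnt)
    # As in the follow-up comment, the minimal enclosing circle gives a single radius.
    (x, y), radius = cv2.minEnclosingCircle(cnt)
    print(d1 / 2, d2 / 2, radius)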

OpenCV - Estimating Box dimensions in Python

This is the continuation of my previous question. I now have an image like this
Here the corners have been detected. Now I am trying to estimate the dimensions of the bigger box, while the dimensions of the smaller black box are known.
Can anyone guide me on the best way to estimate the dimensions of the box? I can do it with simple Euclidean distance, but I don't know whether that is the correct way. And even if it is, then from a list of coordinate tuples, how can I find distances like A-B or A-D or G-H but not like A-C or A-F?
The sequence has to be preserved in order to get the correct dimensions. I also have two boxes here, so when I create the list of corner coordinates it will contain all coordinates from A to J, and I don't know which coordinates belong to which box. How can I keep that separation for the two boxes? I want to run this code on more, similar images.
Note: each corner in this image is not a single point but a set of points, so I clustered each set and averaged it to get a single (x, y) coordinate per corner.
I have tried my best to explain my questions. Will be extremely glad to have some answers :) Thanks.
For the "How can I find distances like A-B or A-D or G-H but not like A-C or A-F" part:
Here's some quick code. It's not efficient for images with lots of corners, but for your case it's OK. The idea is to start from the dilated edge image you got in your other question (with only the big box, but the idea is the same for the image where the small box is also present).
Then, for every possible pair of corners, you sample a few points on the imaginary line between them and check whether these points actually fall on a real edge in the image.
import cv2
import numpy as np

# Get an intermediate point on the segment between p1 and p2.
# For example, calling this function with (p1, p2, 3) returns the point
# located at 1/3 of the way from p1 to p2.
def get_intermediate_point(p1, p2, ratio):
    return [int(p1[0] + (p2[0] - p1[0]) / ratio),
            int(p1[1] + (p2[1] - p1[1]) / ratio)]

# Open the dilated edge image (placeholder file name)
img = cv2.imread("dilated_edges.png", 0)

# Corners you got from your segmentation and your other question
corners = [[29, 94], [102, 21], [184, 52], [183, 547], [101, 576], [27, 509]]
nb_corners = len(corners)

# Intermediate points between corners that you are going to test
ratios = [2, 4, 6, 8]  # in this example: the middle point, the quarter point, etc.
nb_ratios = len(ratios)

# List which will contain all pairs of connected corners
connected_corners = []

# Double loop going through all possible pairs of corners
for i in range(nb_corners - 1):
    for j in range(i + 1, nb_corners):
        cpt = 0
        c1 = corners[i]; c2 = corners[j]
        # Test every intermediate point between the selected corners
        for ratio in ratios:
            p = get_intermediate_point(c1, c2, ratio)
            # Check whether this point falls on a white pixel in the image
            if img[p[0], p[1]] == 255:
                cpt += 1
        # If enough of the intermediate points fall on a white pixel,
        # assume that the two corners are indeed connected by a line
        if cpt >= int(nb_ratios * 0.75):
            connected_corners.append([i, j])

print(connected_corners)
In general you cannot, since any reconstruction is only up to scale.
Basically, given a calibrated camera and 6 2D points (6 × 2 = 12 equations), you want to find 6 3D points plus a scale factor (6 × 3 + 1 = 19 unknowns). There aren't enough equations.
In order to do so, you will have to make some assumptions and insert them into the equations.
For example:
The box edges are perpendicular to each other (which, in a box-aligned frame, means that every two neighboring points share at least one coordinate value).
You need to assume that you know the height of the bottom points, i.e. that they lie on the same plane as your calibration box (this gives you the Z of the visible bottom points).
Hopefully, these constraints leave you with no more unknowns than equations, so you can solve the linear system.

Maths/Python - given a sphere, plot sequential points around the sphere?

I am trying to rotate a vtk camera around its focal point. The aim being to 'orbit' the model.
I'm using the camera.SetPosition(x, y, z) call to set the camera location, and I know I can call it again at each update period in my render window.
The focal point is at (0, 0, 0), and querying a bounding box gives me my initial camera (x, y, z) location. The distance from the focal point (0, 0, 0) to the camera location (x, y, z) is the radius of the sphere.
In my head, this is essentially moving the camera in steps around the point (0, 0, 0), and I am presuming there is a maths function I can feed my current camera point to work out the next camera location.
This should result in the model appearing to spin in space. My camera view is offset from all of the x, y, z planes, making it a 3D problem, not a 2D one. However, I do want the camera to remain at the same distance from the model (the focal point).
What I am trying to achieve is like this: take a pencil (my model is long and narrow). Hold it in your fingertips at arm's length, tip pointing to the ceiling. Tilt the pencil by ~30 degrees. This is the camera start position. Now rotate the pencil body in your fingers, maintaining the tilt angle and the distance from your eye.
This post looks helpful: Plotting a point on the edge of a sphere. However, it assumes you know the radius used to get to the initial x, y location.
Could someone point me towards the maths I need to do this? My maths is horribly rusty.
It seems what you want is to rotate a vector about an axis; this is most easily done using a rotation matrix.
So, if your desired axis of rotation is tilted 30 degrees from the z axis in the zx plane, the axis of rotation is (sin(pi/6), 0, cos(pi/6)). Increment the rotation angle, plug it into the rotation matrix to get a matrix R, and the new camera position vector is R*(x, y, z)'.
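As a minimal sketch of that orbit, using SciPy's Rotation to build the matrix; the tilt angle, step size and starting position are assumptions for illustration:
import numpy as np
from scipy.spatial.transform import Rotation

# Hypothetical setup: rotation axis tilted 30 degrees from z in the zx plane,
# a starting camera position, and a 5-degree step per frame.
tilt = np.pi / 6
axis = np.array([np.sin(tilt), 0.0, np.cos(tilt)])   # unit vector
start = np.array([10.0, 0.0, 5.0])                   # initial camera position

for step in range(72):                               # 72 steps of 5 degrees = one full orbit
    angle = np.radians(5 * step)
    R = Rotation.from_rotvec(angle * axis).as_matrix()
    x, y, z = R @ start                              # rotation preserves the distance to (0, 0, 0)
    # camera.SetPosition(x, y, z)                    # feed this to VTK at each update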
Start off with the points (±1, 0, 0), (0, ±1, 0), (0, 0, ±1). These form two pyramids (an octahedron) with all the points on the unit sphere.
Now you can take the center of each triangle and project it out so that it lies on the unit sphere too. For each triangle this gives you 3 new triangles, and you can repeat the process.
The alternative to the center of the triangle is to take the midpoints of each side and join them up. That gives 3 new points that can be projected out to the unit sphere, and 4 new triangles for each triangle in the subdivision.
Repeat as many times as you need.
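Here is a minimal sketch of the edge-midpoint variant, starting from the octahedron above; duplicate midpoints shared between neighbouring triangles are not merged in this sketch:
import numpy as np

def normalize(p):
    # Project a point out onto the unit sphere.
    return p / np.linalg.norm(p)

# The six starting points and the eight triangles of the octahedron, as index triples.
verts = [np.array(v, dtype=float) for v in
         [(1, 0, 0), (-1, 0, 0), (0, 1, 0), (0, -1, 0), (0, 0, 1), (0, 0, -1)]]
faces = [(0, 2, 4), (2, 1, 4), (1, 3, 4), (3, 0, 4),
         (2, 0, 5), (1, 2, 5), (3, 1, 5), (0, 3, 5)]

def subdivide(verts, faces):
    # Split every triangle into 4 by joining the midpoints of its sides,
    # pushing each new point back out onto the unit sphere.
    verts = list(verts)
    new_faces = []
    for a, b, c in faces:
        ab = len(verts); verts.append(normalize((verts[a] + verts[b]) / 2))
        bc = len(verts); verts.append(normalize((verts[b] + verts[c]) / 2))
        ca = len(verts); verts.append(normalize((verts[c] + verts[a]) / 2))
        new_faces += [(a, ab, ca), (ab, b, bc), (ca, bc, c), (ab, bc, ca)]
    return verts, new_faces

for _ in range(2):                 # two rounds of subdivision
    verts, faces = subdivide(verts, faces)
print(len(verts), "points on the unit sphere")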
