time series for binary shapes - python

I have been working on extracting a time series from binary shapes, based on the distances from the contour to the center of mass, going clockwise from angle 0 to 360.
My implementation, which arranges the contour points based on their angle to the [1, 0] vector, may be good for some shapes but is not useful for shapes that have much articulation. Consider the following code:
import os
import numpy as np
from PIL import Image, ImageFilter as ifilter

im = Image.open(os.path.join(path, filename)).convert("L")  # single-channel image
im = im.filter(ifilter.MedianFilter())
contim = im.filter(ifilter.CONTOUR)
contim = np.asarray(contim)[1:-1, 1:-1]  # the image border is extracted as a contour, so drop it
contpts = np.where(contim == 0)
contpts = np.vstack((contpts[0], contpts[1]))  # just need to arrange these points clockwise with respect to the center of mass of the shape
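For reference, a minimal sketch of that angle-based ordering (my own approximation; it uses the mean of the contour points in place of the shape's true center of mass):

cy, cx = contpts.mean(axis=1)                       # approximate center of mass
angles = np.arctan2(contpts[0] - cy, contpts[1] - cx)
order = np.argsort(-angles)                         # descending angle; flip the sign for the other direction
contpts = contpts[:, order]
dists = np.hypot(contpts[0] - cy, contpts[1] - cx)  # candidate time series: one distance per contour point

This is exactly the ordering that breaks down for articulated shapes, because several contour points can share the same angle.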
Can anyone direct me to how I can extract this feature from any shape, i.e. start from a point and keep going along the contour, collecting all the distances to the center of mass of the shape?
For more information about the feature, please view this paper: "LB_Keogh Supports Exact Indexing of Shapes under Rotation Invariance with Arbitrary Representations and Distance Measures"

If I understood correctly, there's a geometrical figure in a discretized plane, represented as a matrix: if an entry is 1, you're inside the figure; if it's 0, you're outside. The asker wants to determine the distance between the edge of the figure and the center of the figure for all points on the edge. He parametrizes it with a polar coordinate system: the center of the figure is the origin, and he wants the distance to the border as a function of the angle. This is what he calls his "time series".
Is this correct?
If yes, couldn't you just:
1. determine the center of mass,
2. reposition the origin to match the center of mass,
3. start the angle at 0,
4. set r = 0,
5. for each angle in [0, 1, ..., 360]:
   1. if you're inside the figure, increase r until you reach the border;
   2. if you're outside the figure, decrease r until you reach the border;
   3. when you reach the border, d(angle) = r.
If the figure has a more or less continuous border, this will follow the contour.
Would this work?
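If that is the right reading, here is a minimal sketch of the radial scan (my own code, assuming a 2D binary numpy array img with 1 inside the figure, and using scipy for the center of mass):

import numpy as np
from scipy import ndimage

def radial_series(img, n_angles=360):
    # distance from the center of mass to the border, sampled at each angle
    cy, cx = ndimage.center_of_mass(img)
    r_max = max(img.shape)
    series = np.zeros(n_angles)
    for k in range(n_angles):
        a = 2 * np.pi * k / n_angles
        r = 0.0
        # march outward until we step off the figure (or out of the image)
        while r < r_max:
            y = int(round(cy + r * np.sin(a)))
            x = int(round(cx + r * np.cos(a)))
            if not (0 <= y < img.shape[0] and 0 <= x < img.shape[1]) or img[y, x] == 0:
                break
            r += 0.5
        series[k] = r
    return series

Note that this only marches outward, so it assumes each ray crosses the border once (the figure is star-shaped about its center); for heavily articulated shapes you would still need the decrease-r branch, or true contour following.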

Related

how do I fit a grid of points on a random point cloud

I have a binary image with dots, which I obtained using OpenCV's goodFeaturesToTrack, as shown in Image1.
Image1 : Cloud of points
I would like to fit a grid of 4*25 dots on it, such as the one shown in Image2 (not all points are visible in the image, but it is a regular 4*25-point rectangle).
Image2 : Model grid of points
My model grid of 4*25 dots is parametrized by:
1 - The position of the top-left corner
2 - The inclination of the rectangle with respect to the horizontal
The code below shows a function that builds such a model.
This problem seems to be close to a chessboard corner problem.
I would like to know how to fit my model cloud of points to the input image and get the position and angle of the cloud.
I can easily measure a distance between the two images (the input one and the one with the model grid), but I would like to avoid having to check every pixel and angle in the image to find the minimum of this distance.
import numpy as np

def ModelGrid(pos, angle, shape):
    # initialization of the output image of size shape
    table = np.zeros(shape)
    # parameters
    size_pan = [32, 20]  # pixels
    nb_corners = [4, 25]
    index = np.zeros((nb_corners[0], nb_corners[1], 2), dtype=np.int16)
    angle = angle * np.pi / 180
    # creation of the table
    for i in range(nb_corners[0]):
        for j in range(nb_corners[1]):
            index[i, j, 0] = pos[0] + j * int(size_pan[1] * np.sin(angle)) + i * int(size_pan[0] * np.cos(angle))
            index[i, j, 1] = pos[1] + j * int(size_pan[1] * np.cos(angle)) - i * int(size_pan[0] * np.sin(angle))
            if 0 < index[i, j, 0] < table.shape[0] and 0 < index[i, j, 1] < table.shape[1]:
                table[index[i, j, 0], index[i, j, 1]] = 1
    return table
A solution I found, which works relatively well, is the following:
First, I create an index of positions of all positive pixels, just going through the image. I will call these pixels corners.
I then use this index to compute an average angle of inclination :
For each of the corners, I look for others that are close enough in certain areas, so as to define a cross. For each pixel, I manage to find the ones that are directly to its left, right, top and bottom.
I use this cross to calculate an inclination angle, and then use the median of all obtained inclination angles as the angle for my model grid of points.
Once I have this angle, I simply build a table using this angle and the positions of each corner.
The optimization function measures the number of coincident pixels on both images, and returns the best position.
This approach works fine for most examples, but the returned 'best position' has to be one of the detected corners, which does not guarantee that it corresponds to the true best position, especially if the top-left corner of the grid is missing from the cloud of corners.
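A minimal sketch of that coincidence measure (hypothetical names; image, corners, and median_angle come from the steps described above, and ModelGrid is the function from the question):

import numpy as np

def coincidence(image, model):
    # number of positive pixels shared by the binary input image and the model grid
    return int(np.sum((image > 0) & (model > 0)))

def best_position(image, corners, median_angle):
    # try each detected corner as the grid's top-left position and keep the best
    return max(corners, key=lambda c: coincidence(image, ModelGrid(c, median_angle, image.shape)))

This makes the limitation explicit: the search space is the set of detected corners, so a missing top-left corner cannot be recovered.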

OpenCV - Estimating Box dimensions in Python

This is the continuation of my previous question. I now have an image like this
Here the corners are detected. Now I am trying to estimate the dimensions of the bigger box while the smaller black box's dimensions are known.
Can anyone guide me on the best way to estimate the dimensions of the box? I can do it with simple Euclidean distance, but I don't know if that is the correct way. And even if it is, then given a list of coordinate tuples, how can I find distances like A-B or A-D or G-H, but not like A-C or A-F?
The sequence has to be preserved in order to get correct dimensions. Also, I have two boxes here, so when I create the list of corner coordinates it will contain all the coordinates from A to J, and I don't know which coordinates belong to which box. How can I keep the two boxes separate? I want to run this code on more, similar images.
Note: each corner in this image is not a single point but a set of points, so I clustered each set of corner points and averaged them to get a single (x, y) coordinate per corner.
I have tried my best to explain my questions. Will be extremely glad to have some answers :) Thanks.
For the "How can I find distances like A-B or A-D or G-H but not like A-C or A-F" part:
Here's some quick code; it's not efficient for images with lots of corners, but for your case it's OK. The idea is to start from the dilated edge image you got in your other question (with only the big box, but the idea is the same for the image where there is also the small box). Then, for every possible pair of corners, you look at a few points on an imaginary line between them and check whether these points actually fall on a real line in the image.
import cv2
import numpy as np

# Get an intermediate point on the line between p1 and p2.
# For example, calling this function with (p1, p2, 3) returns the point
# at 1/3 of the distance from p1 to p2.
def get_intermediate_point(p1, p2, ratio):
    # cast to int so the result can be used as pixel indices
    return [int(p1[0] + (p2[0] - p1[0]) / ratio),
            int(p1[1] + (p2[1] - p1[1]) / ratio)]

# open the dilated edge image
img = cv2.imread(dilated_edges, 0)
# corners you got from your segmentation and other question
corners = [[29, 94], [102, 21], [184, 52], [183, 547], [101, 576], [27, 509]]
nb_corners = len(corners)
# intermediate points between corners that you are going to test
ratios = [2, 4, 6, 8]  # in this example: the middle point, the quarter point, etc.
nb_ratios = len(ratios)
# list which will contain all connected corners
connected_corners = []
# double loop going through all possible pairs of corners
for i in range(nb_corners - 1):
    for j in range(i + 1, nb_corners):
        cpt = 0
        c1 = corners[i]; c2 = corners[j]
        # test every intermediate point between the selected corners
        for ratio in ratios:
            p = get_intermediate_point(c1, c2, ratio)
            # check whether the point falls on a white pixel in the image
            if img[p[0], p[1]] == 255:
                cpt += 1
        # if enough of the intermediate points fall on white pixels,
        # assume that the two corners are indeed connected by a line
        if cpt >= int(nb_ratios * 0.75):
            connected_corners.append([i, j])
print(connected_corners)
In general you cannot, since any reconstruction is only up to scale.
Basically, given a calibrated camera and six 2D points, you have 6×2 = 12 equations, but you are looking for six 3D points plus a scale factor, i.e. 6×3 + 1 = 19 unknowns. There aren't enough equations.
In order to do so, you will have to make some assumptions and insert them into the equations.
For example:
The box edges are perpendicular to each other (which means that every 2 neighboring points share at least one coordinate value).
You need to assume that you know the height of the bottom points, i.e. they are on the same plane as your calibration box (this will give you the Z of the visible bottom points).
Hopefully, these constraints are enough to leave you with no more unknowns than equations, so that you can solve the linear system.

Rotate a rectangle until it hits a triangle, and determine the point of intersection

This problem is in 3D space.
There is a rectangle, defined by 4 vertices. We rotate it around one of its sides.
There is a triangle, defined by 3 vertices.
After a full 360 degree rotation, will the rectangle ever intersect/touch the triangle?
If so, what is the angle of rotation at which intersection first occurs? And what is the point of this first intersection?
After thinking about this for a while, it seems like there are 3 main cases:
triangle vertex touches rectangle surface
triangle surface touches rectangle vertex
triangle edge touches rectangle edge
And there are 2 unlikely cases where the two are perpendicular when they intersect:
rectangle edge hits triangle surface
rectangle surface hits triangle edge
However identifying these cases hasn't really gotten me closer to a solution. I'm hoping someone can point me in the right direction for how to solve this problem. I want to solve it fast for a small number of rectangles x a large number of triangles.
Context: the larger problem I'm trying to solve is I want to wrap a rectangle around a closed polygonal mesh. I wish to do this step by step by rotating the rectangle until it intersects, then rotating the remaining rectangle around the intersection point, etc.
When you rotate a rectangle around one of its sides, you get a cylinder. Intersect each of the triangle's sides with the cylinder. The position of the intersection points gives you the rotation angles. Since this doesn't catch the case where the triangle is completely contained within the cylinder, also test whether each vertex's distance to the cylinder's axis is smaller than the cylinder's radius.
Say your rectangle has the vertices A to D. You want to rotate around the side AB. The radius of your cylinder is then r = |AD|.
First, transform the coordinates so that the rectangle is placed with the side that you want to rotate about along the z axis and the adjacent side along the x axis.
A′ = {M} · A = {0, 0, 0}
B′ = {M} · B = {0, 0, |AB|}
D′ = {M} · D = {r, 0, 0}
Apply the same transformation {M} to the vertices of the triangle.
Now find the intersections of all three sides of the triangle with the cylinder. Because the cylinder is aligned with the z axis, the problem can be separated into two subproblems: (1) find any intersections with the top and bottom surfaces at z = 0 and z = |AB|; (2) find the intersections with the "coat" (lateral surface) of the cylinder, which is the intersection of a line with a circle in the xy plane.
You can then calculate the rotation angles from the y and x coordinates of these intersection points as atan2(y, x).
If you need the coordinates of the intersection points in the original coordinates, don't forget to undo the transformation.
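As a rough sketch of subproblem (2), assuming the coordinates have already been transformed by {M} (the function name and segment parametrization are mine; p and q are two triangle vertices, r = |AD| and h = |AB|):

import numpy as np

def segment_cylinder_hits(p, q, r, h):
    # angles at which the segment p -> q crosses the lateral surface of the
    # cylinder x^2 + y^2 = r^2, 0 <= z <= h
    p, q = np.asarray(p, float), np.asarray(q, float)
    d = q - p
    a = d[0]**2 + d[1]**2                  # quadratic in the segment parameter t
    b = 2.0 * (p[0]*d[0] + p[1]*d[1])
    c = p[0]**2 + p[1]**2 - r*r
    disc = b*b - 4.0*a*c
    hits = []
    if a > 0 and disc >= 0:
        for t in ((-b - np.sqrt(disc)) / (2*a), (-b + np.sqrt(disc)) / (2*a)):
            if 0.0 <= t <= 1.0 and 0.0 <= p[2] + t*d[2] <= h:  # on the segment, within the height
                x, y = p[0] + t*d[0], p[1] + t*d[1]
                hits.append(np.arctan2(y, x))  # rotation angle of this contact
    return hits

The first contact is then the smallest of these angles over all three sides, measured in the direction of rotation (together with the top/bottom and containment checks above).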

Using 3D perception in opencv2

Can anyone please explain if it is possible, and if so how, to work with cv2.getPerspectiveTransform()?
I have 3D information about my image: I know the lengths of a and b, and also the different heights of c, d, e, f and g. I made the heights different to get more 3D information, but if that isn't needed, so much the better.
Ultimately I need to know where the pink dot really is in the rectangle after applying the transform to the [x, y] position I get from the camera feed.
If you denote by C,D,E,F the positions of the four corners of the black polygon in the original image (each of them is a 2D point), and C',D',E',F' the positions of the corresponding points in your target image (probably (0,0), (a, 0), (a, b), (0, b)), M = cv2.getPerspectiveTransform({C,D,E,F}, {C',D',E',F'}) is the perspective transformation from one polygon to the other.
Given the position G of the vertical projection of g onto the black polygon in the original image, you can compute its position in the target image as cv2.transform(G, M). This will return a point (x,y,z), where the last coordinate z is a normalizing term. This z is zero when your point would be "at infinity" in the target image. If z is not zero, the point you are looking for is (x/z, y/z).
If z is zero, your point is at infinity, in the direction of the support of vector (x, y) (think of the case where G would be at the intersection of the supporting lines of two opposite sides of the black polygon in the source image).
If you know that the heights of c,d,e,f,g are equal, these points are also coplanar, and the exact same method applies to c,d,e,f,g instead of C,D,E,F,G.
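A minimal sketch of that recipe (all the numbers here are made up; in practice C, D, E, F come from your corner detection and a, b from your measurements):

import cv2
import numpy as np

# hypothetical pixel coordinates of the corners C, D, E, F in the camera image
src = np.float32([[110, 240], [420, 230], [450, 520], [90, 530]])
# corresponding corners of the a x b target rectangle (here a = 300, b = 200)
dst = np.float32([[0, 0], [300, 0], [300, 200], [0, 200]])
M = cv2.getPerspectiveTransform(src, dst)

# G: projection of the pink dot onto the polygon's plane, in image coordinates
G = np.float32([[250, 380]]).reshape(-1, 1, 2)
x, y, z = cv2.transform(G, M).reshape(3)   # homogeneous result (x, y, z)
if z != 0:
    print(x / z, y / z)                    # the dot's position inside the rectangle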

Maths/Python - given a sphere, plot sequential points around the sphere?

I am trying to rotate a vtk camera around its focal point. The aim being to 'orbit' the model.
I'm using the camera.SetPosition(x, y, z) call to set the camera location, and I know I can do the same at each update period in my render window.
The focal point is at (0, 0, 0), and a bounding-box query gives me my initial camera (x, y, z) location. The distance from the focal point (0, 0, 0) to the camera location (x, y, z) is the radius of the sphere.
In my head, this is essentially moving the camera in steps around the point (0, 0, 0), and I am presuming there is a maths function I could feed my starting camera point to and get my next camera location out of.
This should result in the model appearing to spin in space. My camera view is offset from all the x, y, z planes, making it a 3D problem rather than a 2D one. However, I do want my camera to remain the same distance from the model (the focal point).
What I am trying to achieve is like this:- take a pencil (my model is long and narrow). Hold it in your finger tips at arms length, tip pointing to the ceiling. Tilt the pencil by ~30 degrees. This is the camera start position. Rotate the pencil body in your fingers, maintaining tilt angle, and the distance from your eye.
This post looks helpful: Plotting a point on the edge of a sphere. However, it assumes you already know the radius needed to get to the initial x, y location.
Could someone point me towards the maths I need to do this? My maths is horribly rusty.
It seems what you want is to rotate a vector about an axis; this can be done most easily using a rotation matrix.
So, if your desired axis of rotation is tilted 30 degrees from the z axis in the zx plane, your axis of rotation is (sin(pi/6), 0, cos(pi/6)). Increment the rotation angle, plug it into the rotation matrix to get a matrix R, and the new camera position vector will be R·(x, y, z)'.
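A minimal sketch of that (Rodrigues' rotation formula; the 5-degree step and the starting position are made up):

import numpy as np

def rotation_matrix(axis, theta):
    # rotation by theta radians about a unit-length axis, via Rodrigues' formula
    ax, ay, az = np.asarray(axis, float) / np.linalg.norm(axis)
    K = np.array([[0.0, -az, ay],
                  [az, 0.0, -ax],
                  [-ay, ax, 0.0]])
    return np.eye(3) + np.sin(theta) * K + (1.0 - np.cos(theta)) * (K @ K)

axis = (np.sin(np.pi / 6), 0.0, np.cos(np.pi / 6))  # tilted 30 degrees from z in the zx plane
R = rotation_matrix(axis, np.radians(5))
pos = np.array([10.0, 0.0, 5.0])                    # hypothetical starting camera position
for _ in range(72):                                 # one full orbit in 5-degree steps
    pos = R @ pos
    # camera.SetPosition(*pos)  # update the vtk camera here, then re-render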
Start off with the points (±1, 0, 0), (0, ±1, 0), (0, 0, ±1). These form two pyramids glued base to base (an octahedron), with all the points on the unit sphere.
Now you can take the midpoint of each triangle and project it out so it lies on the unit sphere too. For each triangle this gives you 3 new triangles, and you can repeat the process.
The alternative to the midpoint of the triangle is to take the midpoints of each side and join them up. That gives 3 new points that can be projected out to the unit sphere, and 4 new triangles for each subdivision.
Repeat as many times as you need.
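A minimal sketch of the side-midpoint variant (my own code; the starting triangles are the eight faces of the octahedron described above):

import numpy as np

def subdivide(triangles):
    # one subdivision step: each triangle becomes four, with the new
    # vertices pushed out onto the unit sphere
    out = []
    for a, b, c in triangles:
        ab = (a + b) / np.linalg.norm(a + b)
        bc = (b + c) / np.linalg.norm(b + c)
        ca = (c + a) / np.linalg.norm(c + a)
        out += [(a, ab, ca), (ab, b, bc), (ca, bc, c), (ab, bc, ca)]
    return out

X, Y, Z = np.eye(3)   # the points (+-1,0,0), (0,+-1,0), (0,0,+-1)
tris = [(sx * X, sy * Y, sz * Z) for sx in (1, -1) for sy in (1, -1) for sz in (1, -1)]
for _ in range(3):    # repeat as many times as you need
    tris = subdivide(tris)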
