How to draw a rectangle around keypoints of a pose? - python

I have an image of a hand. I passed it through a pre-trained hand pose estimation model and got this output (see linked image).
Task
Now I want to draw a rectangle around the hand to carry out some tasks. How do I draw a rectangle around the hand using just those keypoints (not using another model)?
In case you're interested in why I need that rectangle:
I want to normalize all the points inside the rectangle to the range (0, 1) by dividing each point by the width and height of the rectangle, so that the top-left corner maps to 0s and the bottom-right corner to 1s.

I haven't used OpenCV in quite a while, hence the simplest approach I can think of without relying on its methods would be to use the list of keypoint locations and find the min/max x and y values.
That is, loop through the list of points (which I assume each have a given x and y) and store the minimum x and y, as well as the maximum x and y. To do this you will want to initialise the minimums to your image's width and height and the maximums to 0, or alternatively store all x and y values in their own separate lists and apply the min and max functions accordingly.
The rectangle is therefore defined by the two corner points (x_min, y_min) and (x_max, y_max), from which you can also extract the width and height by subtraction. Make sure your drawing reference matches up with the xy-reference of the points. To actually draw the rectangle you can refer to the code here: https://docs.opencv.org/master/dc/da5/tutorial_py_drawing_functions.html
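For illustration, here is a minimal sketch of that approach, assuming the keypoints are available as a list of (x, y) pixel coordinates. The example keypoint list and the blank placeholder image are made up; swap in your own model output and hand image:

import cv2
import numpy as np

# hypothetical keypoint locations returned by the pose model, as (x, y) pixel coordinates
keypoints = [(112, 84), (130, 95), (145, 120), (150, 160), (118, 170)]
# blank placeholder image standing in for the hand photo
image = np.zeros((300, 300, 3), dtype=np.uint8)

xs = [x for x, y in keypoints]
ys = [y for x, y in keypoints]
x_min, x_max = min(xs), max(xs)
y_min, y_max = min(ys), max(ys)

# draw the bounding rectangle defined by the two corner points
cv2.rectangle(image, (x_min, y_min), (x_max, y_max), (0, 255, 0), 2)

# normalize each keypoint to the (0, 1) range relative to the rectangle,
# so the top-left corner maps to (0, 0) and the bottom-right to (1, 1)
width, height = x_max - x_min, y_max - y_min
normalized = [((x - x_min) / float(width), (y - y_min) / float(height)) for x, y in keypoints]
print(normalized)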

Related

Calculating how much area of an ellipse is covered by a certain pixel in Python

I am working with Python and currently trying to figure out the following: if I place an ellipse, of which the semi-axes, the centre's location and the orientation are known, on a pixel map, and the ellipse is large enough to cover multiple pixels, how do I figure out which pixel covers which percentage of the total area of the ellipse? As an example, let's take a map of 10*10 pixels (i.e. an interval of [0,9]) and an ellipse with its centre at (6.5, 6.5), semi-axes of (0.5, 1.5) and an orientation angle of 30° between the horizontal and the semi-major axis. I have honestly no idea, so any help is appreciated.
edit: To clarify, the pixels (or cells) have an area. I know the area of the ellipse, its position and its orientation, and I want to find out how much of its area is located within pixel 1, how much within pixel 2, etc.
Following the equation of an ellipse with centre (x0, y0), semi-axes (a, b) and orientation angle alpha:
((x - x0)*cos(alpha) + (y - y0)*sin(alpha))^2 / a^2 + ((x - x0)*sin(alpha) - (y - y0)*cos(alpha))^2 / b^2
the easiest way to find which pixels from your mesh are inside and which are out is to plug each pixel's (x, y), together with the known alpha, into the above equation.
If the result is <= 1, the pixel is inside. Otherwise, it is outside.
You can then count the pixels.
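A rough sketch of that test, using the centre, semi-axes and 30° orientation from the question and checking the centre of each cell of the 10*10 map. Note that counting pixel centres only approximates area coverage; for exact per-pixel area fractions you would need to supersample or integrate each cell:

import numpy as np

# assumed ellipse parameters (the example values from the question)
cx, cy = 6.5, 6.5          # centre
a, b = 0.5, 1.5            # semi-axes
alpha = np.deg2rad(30)     # orientation angle

inside = 0
for px in range(10):        # 10x10 pixel map, testing each pixel centre
    for py in range(10):
        # rotate the pixel centre into the ellipse's own coordinate frame
        dx, dy = px - cx, py - cy
        u = dx * np.cos(alpha) + dy * np.sin(alpha)
        v = -dx * np.sin(alpha) + dy * np.cos(alpha)
        # evaluate the ellipse equation; <= 1 means the pixel centre is inside
        if (u / a) ** 2 + (v / b) ** 2 <= 1:
            inside += 1
print(inside, "pixel centres fall inside the ellipse")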
This is a math problem; try Mathematics Stack Exchange rather than Stack Overflow.
I suggest you transform the plane: a translation to move the centre to the origin, a rotation to put the ellipse's axes on the x-y axes, and a dilation along x to turn the ellipse into a circle. Then work with a circle on rhombus-like tiles.
Your problem won't be more or less tractable in the new formulation, but the math and code you have to work with will be slightly lighter.
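A small sketch of that change of coordinates, reusing the assumed ellipse parameters from the example above. After translating, rotating and stretching x by b/a, the ellipse becomes a circle of radius b and each square pixel cell becomes a skewed tile (the "rhombus tiles" mentioned above):

import numpy as np

# assumed ellipse parameters (same as the example in the question)
cx, cy = 6.5, 6.5
a, b = 0.5, 1.5
alpha = np.deg2rad(30)

def to_circle_frame(x, y):
    # translation: put the ellipse centre at the origin
    dx, dy = x - cx, y - cy
    # rotation: align the ellipse axes with the coordinate axes
    u = dx * np.cos(alpha) + dy * np.sin(alpha)
    v = -dx * np.sin(alpha) + dy * np.cos(alpha)
    # dilation on x: stretch by b/a so the ellipse becomes a circle of radius b
    return u * (b / a), v

# the four corners of the pixel cell [6,7]x[6,7] become a skewed tile in the new frame,
# and the ellipse is now simply the circle u^2 + v^2 <= b^2
tile = [to_circle_frame(x, y) for x, y in [(6, 6), (7, 6), (7, 7), (6, 7)]]
print(tile)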

OpenCV Python creating bounding box or enclosing circle/polygon around scattered points

I am working on object detection for collision avoidance using OpenCV Python on a small quad. First I need to detect objects using the Optical Flow Pyramid (Lucas-Kanade) approach in OpenCV. I was able to track points on the image ROI, as shown in the linked image (points tracked using pyramidal LK optical flow).
I need to create a bounding box, enclosing convex hull or some polygonal shape, as shown below (the red lines are ones I drew), to show that these are the detected objects. Isolated points should be ignored; only points within a certain distance of each other should be taken.
If anyone could help me or provide ideas it would be useful.
If my question is not precise or too vague please let me know.
You can get the minimum and maximum x, y values by looping over the cluster labels, and then draw a rectangle from those four values for each cluster.
The following code will help you:
import cv2
import numpy as np

# Z: Nx2 float32 array of point coordinates; criteria and frame are assumed to be defined earlier
ret, label, center = cv2.kmeans(Z, 10, None, criteria, 10, cv2.KMEANS_RANDOM_CENTERS)

# loop over each distinct cluster label and draw one rectangle per cluster
for i in np.unique(label.ravel()):
    x_values = []
    y_values = []
    cluster = Z[label.ravel() == i]
    for x, y in cluster:
        x_values.append(x)
        y_values.append(y)
    min_x = int(min(x_values))
    min_y = int(min(y_values))
    max_x = int(max(x_values))
    max_y = int(max(y_values))
    cv2.rectangle(frame, (min_x, min_y), (max_x, max_y), (0, 255, 0), 3)
To get the bounding box, draw a rectangle from p1 = (x_min, y_min) to p2 = (x_max, y_max), where x_min/x_max and y_min/y_max denote the minimum and maximum x and y coordinates of a point cluster.
Since you already have your points, the first step is to form clusters of close points and get rid of outliers.
Please research cluster analysis to find out how to do this; I'm not willing to write a book here. https://en.wikipedia.org/wiki/Cluster_analysis might give you a first idea.
The problem involves two steps:
1. Perform clustering over the keypoints. I suggest taking a look at scikit-learn clustering.
2. Build a bounding rectangle, or any other type of convex envelope, over each of the clusters. For this, OpenCV provides the function boundingRect (see the OpenCV documentation), which does exactly what is needed: "The function calculates and returns the minimal up-right bounding rectangle for the specified point set." A sketch combining both steps follows.
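A rough sketch of those two steps, assuming the tracked points are available as an Nx2 float32 array. DBSCAN is just one clustering choice from scikit-learn, and the random points and blank frame here are placeholders:

import numpy as np
import cv2
from sklearn.cluster import DBSCAN

# placeholder Nx2 array of tracked point coordinates
points = np.random.randint(0, 640, size=(100, 2)).astype(np.float32)
frame = np.zeros((480, 640, 3), dtype=np.uint8)   # placeholder image to draw on

# step 1: cluster the points; eps is the maximum distance between points in a cluster,
# and points that end up in no cluster get the label -1 (this removes isolated points)
labels = DBSCAN(eps=30, min_samples=3).fit_predict(points)

# step 2: one minimal up-right bounding rectangle per cluster via cv2.boundingRect
for cluster_id in set(labels):
    if cluster_id == -1:        # skip noise / isolated points
        continue
    cluster = points[labels == cluster_id]
    x, y, w, h = cv2.boundingRect(cluster)
    cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)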

how do I fit a grid of points on a random point cloud

I have a binary image with dots, which I obtained using OpenCV's goodFeaturesToTrack, as shown in Image1.
Image1 : Cloud of points
I would like to fit a grid of 4*25 dots to it, such as the one shown in Image2 (not all points are visible in the image, but it is a regular 4*25 point rectangle).
Image2 : Model grid of points
My model grid of 4*25 dots is parametrized by :
1 - The position of the top left corner
2 - The inclination of the rectangle with respect to the horizontal
The code below shows a function that builds such a model.
This problem seems to be close to a chessboard corner problem.
I would like to know how to fit my model grid of points to the input image and get the position and angle of the grid.
I can easily measure a distance between the two images (the input one and the one with the model grid), but I would like to avoid checking every pixel and angle in the image to find the minimum of this distance.
def ModelGrid(pos, angle, shape):
    # Initialization of output image of size shape
    table = np.zeros(shape)
    # Parameters
    size_pan = [32, 20]  # Pixels
    nb_corners = [4, 25]
    index = np.ndarray([nb_corners[0], nb_corners[1], 2], dtype=np.dtype('int16'))
    angle = angle*np.pi/180
    # Creation of the table
    for i in range(nb_corners[0]):
        for j in range(nb_corners[1]):
            index[i,j,0] = pos[0] + j*int(size_pan[1]*np.sin(angle)) + i*int(size_pan[0]*np.cos(angle))
            index[i,j,1] = pos[1] + j*int(size_pan[1]*np.cos(angle)) - i*int(size_pan[0]*np.sin(angle))
            if 0 < index[i,j,0] < table.shape[0]:
                if 0 < index[i,j,1] < table.shape[1]:
                    table[index[i,j,0], index[i,j,1]] = 1
    return table
A solution I found, which works relatively well, is the following:
First, I create an index of the positions of all positive pixels, just by going through the image. I will call these pixels corners.
I then use this index to compute an average inclination angle:
For each corner, I look for others close enough in certain areas so as to define a cross; I manage, for each pixel, to find the ones directly to its left, right, top and bottom.
I use this cross to calculate an inclination angle, and then use the median of all obtained inclination angles as the angle for my model grid of points.
Once I have this angle, I simply build a table using this angle and the positions of each corner.
The optimization function measures the number of coincident pixels on both images, and returns the best position.
This works fine for most examples, but the returned 'best position' has to be one of the corners, which does not guarantee that it corresponds to the best position overall, mainly when the top left corner of the grid is missing from the cloud of corners.
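For reference, a minimal sketch of that last optimization step. It assumes the ModelGrid function from the question, a binary image of the detected corners, the list of corner positions and the median angle computed as described above:

import numpy as np

# image: binary image of the detected corners (the "cloud of points")
# corners: list of (row, col) positions of the positive pixels
# angle: median inclination angle estimated as described above
def best_position(image, corners, angle):
    best_score, best_pos = -1, None
    for pos in corners:
        # build the model grid anchored at this corner (ModelGrid is defined in the question)
        model = ModelGrid(pos, angle, image.shape)
        # score = number of coincident positive pixels between the model and the image
        score = np.sum((model > 0) & (image > 0))
        if score > best_score:
            best_score, best_pos = score, pos
    return best_pos, best_score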

OpenCV - Estimating Box dimensions in Python

This is the continuation of my previous question. I now have an image like this.
Here the corners are detected. Now I am trying to estimate the dimensions of the bigger box, while the smaller black box's dimensions are known.
Can anyone guide me on the best way to estimate the dimensions of the box? I can do it with a simple Euclidean distance, but I don't know if that is the correct way. And even if it is, given a list of tuples (coordinates), how can I find distances like A-B or A-D or G-H but not like A-C or A-F?
The sequence has to be preserved in order to get correct dimensions. Also, I have two boxes here, so the list of corner coordinates will contain all coordinates from A-J and I don't know which coordinates belong to which box. How can I keep that separation for two different boxes? I want to run this code on more similar images.
Note: each corner in this image is not a single point but a set of points, so I clustered the set and averaged it to get a single (x, y) coordinate per corner.
I have tried my best to explain my questions. I will be extremely glad to have some answers :) Thanks.
For the "how can I find distances like A-B or A-D or G-H but not like A-C or A-F" part:
Here's some quick code. It is not efficient for images with lots of corners, but for your case it's OK. The idea is to start from the dilated edge image you got in your other question (with only the big box, but the idea is the same for the image where there is also the small box).
Then, for every possible pair of corners, you look at a few points on an imaginary line between them and check whether these points actually fall on a real line in the image.
import cv2
import numpy as np

# getting intermediate points on the line between point1 and point2
# for example, calling this function with (p1, p2, 3) will return the point
# on the line between p1 and p2, at 1/3 of the distance from p1
def get_intermediate_point(p1, p2, ratio):
    return [int(p1[0] + (p2[0] - p1[0]) / ratio), int(p1[1] + (p2[1] - p1[1]) / ratio)]

# open the dilated edge image
img = cv2.imread(dilated_edges, 0)
# corners you got from your segmentation and other question
corners = [[29,94],[102,21],[184,52],[183,547],[101,576],[27,509]]
nb_corners = len(corners)

# intermediate points between corners you are going to test
ratios = [2, 4, 6, 8]  # in this example: the middle point, the quarter point, etc.
nb_ratios = len(ratios)

# list which will contain all connected corners
connected_corners = []
# double loop for going through all possible pairs of corners
for i in range(nb_corners - 1):
    for j in range(i + 1, nb_corners):
        cpt = 0
        c1 = corners[i]; c2 = corners[j]
        # testing every intermediate point between the selected corners
        for ratio in ratios:
            p = get_intermediate_point(c1, c2, ratio)
            # checking if these points fall on a white pixel in the image
            if img[p[0], p[1]] == 255:
                cpt += 1
        # if enough of the intermediate points fall on a white pixel
        if cpt >= int(nb_ratios * 0.75):
            # then we assume that the 2 corners are indeed connected by a line
            connected_corners.append([i, j])
print(connected_corners)
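Once connected_corners is known, the edge lengths you asked about (A-B but not A-C) are just the Euclidean distances between the connected pairs. A small follow-up, reusing corners, connected_corners and numpy from the code above:

# edge lengths in pixels, computed only for the corner pairs found to be connected
for i, j in connected_corners:
    length = np.linalg.norm(np.array(corners[i]) - np.array(corners[j]))
    print('corner %d - corner %d : %.1f px' % (i, j, length))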
In general you cannot, since any reconstruction is only determined up to scale.
Basically, given a calibrated camera and 6 2D points (6x2 = 12 equations) you want to find 6 3D points plus a scale factor, i.e. 6x3 + 1 = 19 unknowns. There aren't enough equations.
In order to do so, you will have to make some assumptions and insert them into the equations.
For example:
The box edges are perpendicular to each other (which means that every two neighboring points share at least one coordinate value).
You need to assume that you know the height of the bottom points, i.e. that they are on the same plane as your calibration box (this will give you the Z of the visible bottom points).
Hopefully, these constraints are enough to leave you with no more unknowns than equations, so that you can solve the linear equation set.

Using OpenCV remap function crops image

I am trying to warp a 640x360 image via the OpenCV remap function (in Python 2.7). The steps executed are the following:
Generate a curve and store its x and y coordinates in two separate arrays, curve_x and curve_y. I am attaching the generated curve as an image (using pyplot):
Load the image via the OpenCV imread function:
original = cv2.imread('C:\\Users\\User\\Desktop\\alaskan-landscaps3.jpg')
Execute a nested for loop so that each pixel is shifted upwards in proportion to the height of the curve at that point. For each pixel I calculate a warping factor by dividing the distance between the curve's y-coordinate and the "ceiling" (360) by the height of the image. The factor is then multiplied by the distance between the pixel's y-coordinate and the "ceiling" in order to find the new distance that the pixel must have from the "ceiling" (it will be shorter since we shift upwards). Finally I subtract this new distance from the "ceiling" to obtain the new y-coordinate for the pixel, i.e. map_y[i][j] = y_size - (y_size - i) * (y_size - curve_y[j]) / y_size. I thought of this formula in order to ensure that all entries in the map_y array used in the remap function stay within the area of the original image.
for i in range(0, y_size):
    for j in range(0, x_size):
        map_y[i][j] = y_size - ((y_size - i) * ((y_size - curve_y[j]) / y_size))
        map_x[i][j] = j
Then I apply the remap function:
warped=cv2.remap(original,map_x,map_y,cv2.INTER_LINEAR)
The resulting image appears to be warped somewhat along the curve's path, but it is cropped. I am attaching both the original and the resulting image.
I know I must be missing something, but I can't figure out where the mistake is in my code. Since all y-coordinates in map_y are between 0 and 360, I don't understand why the top third of the image has disappeared after the remapping.
Any pointers or help will be appreciated. Thanks.
[EDIT:] I have edited my function as follows:
#array to store previous y-coordinate, used as a counter during mapping process
floor_y = np.zeros((x_size), np.float32)
#for each row and column of picture
for i in range(0, y_size):
    for j in range(0, x_size):
        #calculate distance between top of the curve at given x coordinate and top
        height_above_curve = (y_size-1) - curve_y_points[j]
        #calculate a mapping factor, using total height of picture and distance above curve
        mapping_factor = (y_size-1)/height_above_curve
        # if there was no curve at given x-coordinate then do not change the pixel coordinate
        if(curve_y_points[j]==0):
            map_y[i][j] = j
        #if this is the first time the column is traversed, save the curve y-coordinate
        elif (floor_y[j]==0):
            #the pixel is translated upwards according to the height of the curve at that point
            floor_y[j] = i+curve_y_points[j]
            map_y[i][j] = i+curve_y_points[j]  # new coordinate saved
        # use a modulo operation to only translate each nth pixel where n is the mapping factor.
        # the idea is that in order to fit all pixels from the original picture into a new smaller space
        # (because the curve squashes the picture upwards) a number of pixels must be removed
        elif ((math.floor(i % mapping_factor))==0):
            #increment the "floor" counter so that the next group of pixels from the original image
            #are mapped 1 pixel higher up than the previous group in the new picture
            floor_y[j] = floor_y[j]+1
            map_y[i][j] = floor_y[j]
        else:
            #for pixels that must be skipped, map them all to the last pixel actually translated to the new image
            map_y[i][j] = floor_y[j]
        #all x-coordinates remain unchanged as we only translate pixels upwards
        map_x[i][j] = j

#printout loop to test mappings at x=383
for j in range(0, 360):
    print('At x=383, y='+str(j)+' for curve_y_points[383]='+str(curve_y_points[383])+' and floor_y[383]='+str(floor_y[383])+' mapping is: '+str(map_y[j][383]))
The bottom line is that now the higher part of the image should not receive mappings from the lowest part, so overwriting of pixels should not take place. Yet I am still getting a hugely exaggerated upwards warping effect in the picture, which I cannot explain (see the new image below). The top of the curved part is at around y=140 in the original picture, yet now it is very close to the top, i.e. y around 300. There is also the question of why I am not getting a blank space at the bottom for the pixels below the curve.
I'm thinking that maybe there is also something going on with the order of rows and columns in the map_y array?
I don't think the image is being cropped. Rather, the values are "crowded" in the top-middle pixels, so that they get overwritten. Consider the following example with a simple function on a checkerboard.
import numpy as np
import cv2
import matplotlib.pyplot as plt

y_size = 200
x_size = 200

x = np.linspace(0, x_size, x_size + 1)
y = (-(x - x_size/2) * (x - x_size/2)) / x_size + x_size
plt.plot(x, y)
The function looks like this:
Then let's produce an image with a regular pattern.
test = np.zeros((x_size, y_size), dtype=np.float32)
for i in range(0, y_size):
    for j in range(0, x_size):
        if i%2 and j%2:
            test[i][j] = 255
cv2.imwrite('checker.png', test)
Now let's apply your shift function to that pattern:
map_y = np.zeros((x_size, y_size), dtype=np.float32)
map_x = np.zeros((x_size, y_size), dtype=np.float32)
for i in range(0, y_size):
    for j in range(0, x_size):
        map_y[i][j] = y_size - ((y_size - i) * ((y_size - y[j]) / y_size))
        map_x[i][j] = j
warped = cv2.remap(test, map_x, map_y, cv2.INTER_LINEAR)
cv2.imwrite('warped.png', warped)
If you notice, because of the shift, more than one source value maps to the top-middle areas, so those pixels get overwritten, which makes it look like the image is cropped. But if you check the top left and right corners of the image, you will notice that the values are sparser there, so the "cropping" effect does not occur as much. I hope this simple example helps you understand what is going on.
