I am trying to view the local ridge orientation of a fingerprint as a flowchart, but I seem to fail miserably at doing so. My method consists of the following steps:
use the lro function
find the most dominant angle in a 16x16 block
create a line segment and rotate it by the dominant angle to display it
The problem is that while the angles produced by lro look good, displaying them in the flowchart does not work at all: I just get a lot of random segments pointing in all kinds of directions. Can anyone help me solve this problem?
Here is the code I'm using:
import math
import numpy
import scipy.ndimage
import cv2
from PIL import Image, ImageDraw

def lro(im_np):
    eps = 2**(-52)
    orientsmoothsigma = 4
    # original
    Gxx = cv2.Sobel(im_np, -1, 2, 0)
    Gxy = cv2.Sobel(im_np, -1, 1, 1)
    Gyy = cv2.Sobel(im_np, -1, 0, 2)
    Gxx = scipy.ndimage.filters.gaussian_filter(Gxx, orientsmoothsigma)
    Gxy = scipy.ndimage.filters.gaussian_filter(Gxy, orientsmoothsigma)
    Gyy = scipy.ndimage.filters.gaussian_filter(Gyy, orientsmoothsigma)
    angle = math.pi/2. + numpy.divide(numpy.arctan2(numpy.multiply(Gxy, 2), numpy.subtract(Gxx, Gyy)), 2)
    return angle
def createLine(im_np):
    # Assumes it is 17x17
    # Takes in the block-direction
    # returns a block-direction image as a numpy array
    angle = numpy.max(im_np)
    # print im_np.shape
    im = Image.new('L', (im_np.shape[0], im_np.shape[1]), 0)
    draw = ImageDraw.Draw(im)
    draw.line([(0, im_np.shape[0]/2), (im_np.shape[0], im_np.shape[0]/2)], fill=255)
    im = im.rotate(angle)
    img_np2 = numpy.asarray(im)
    # print img_np2
    return img_np2
def findDomAngle(im_np):
    mask = numpy.zeros((180, 2))
    for i in range(180):
        mask[i][0] = i + 1
    for i in range(im_np.shape[0]):
        for j in range(im_np.shape[0]):
            mask[im_np[i][j] - 1][1] += 1
    max = 0
    bestdir = 0
    for i in range(180):
        if mask[i][1] > max:
            bestdir = i + 1
            max = mask[i][1]
    # print mask
    # print max
    return bestdir
def blkdir(angle_mat):
    x = angle_mat.shape[0]
    y = angle_mat.shape[1]
    # print angle_mat
    domAngle = findDomAngle(angle_mat)
    # print domAngle
    blkAngle = angle_mat
    blkAngle.setflags(write=True)
    for i in range(x):
        for j in range(y):
            blkAngle[i][j] = domAngle
    return blkAngle
I am applying another function to process the image block by block, but this method has proven to work so I don't find it relevant to include.
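For reference, a direct way to sanity-check the angles without going through PIL's rotate (which expects degrees, while lro returns radians) is to compute each segment's endpoints straight from the block angle. This is only a sketch, not the code above: the helper name and the 16x16 block grid are placeholders, and it assumes the angle array from lro has the same shape as the image:

import numpy as np
import cv2

def draw_orientation_field(im, angles, block=16, length=12):
    # one segment per block, with slope given by the block's mean angle
    # (the image y axis points down, so the drawing is mirrored vertically
    # compared to the usual math convention)
    canvas = cv2.cvtColor(im, cv2.COLOR_GRAY2BGR)
    for y in range(0, im.shape[0] - block + 1, block):
        for x in range(0, im.shape[1] - block + 1, block):
            theta = np.mean(angles[y:y + block, x:x + block])
            cx, cy = x + block // 2, y + block // 2
            dx = int(round(np.cos(theta) * length / 2))
            dy = int(round(np.sin(theta) * length / 2))
            cv2.line(canvas, (cx - dx, cy - dy), (cx + dx, cy + dy), (0, 0, 255), 1)
    return canvas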
I'm attempting to extend the 'tail' of an arrow. So far I've been able to draw a line through the center of the arrow, but this line extends both ways rather than in just one direction. The script below shows my progress. Ideally I would be able to extend the tail of the arrow regardless of the orientation of the arrow image. Any suggestions on how to accomplish this? Image examples below, L:R start, progress, goal.
# import image and grayscale
image = cv2.imread("image path")
image = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
cv2.imshow("original",image)
# inverts black and white
gray = 255 - image
cv2.imshow("Inverted", gray)
# Extend the borders for the line
extended = cv2.copyMakeBorder(gray, 20, 20, 10, 10, cv2.BORDER_CONSTANT)
cv2.imshow("extended borders", extended)
# contour finding
contours, hierarchy = cv2.findContours(extended, cv2.RETR_LIST, cv2.CHAIN_APPROX_SIMPLE)
cont = contours[0]
rows,cols = extended.shape[:2]
[vx,vy,x,y] = cv2.fitLine(cont, cv2.DIST_L2,0,0.01,0.01)
leftish = int((-x*vy/vx) + y)
rightish = int(((cols-x)*vy/vx)+y)
line = cv2.line(extended,(cols-1,rightish),(0,leftish),(255,255,255), 6)
cv2.imshow("drawn line", line)
"Moments" can be strange things. They're building blocks and show up most often in statistics.
It helps to have a little background in statistics, and see the application of those calculations to image data, which can be considered a set of points. If you've ever calculated the weighted average or "centroid" of something, you'll recognize some of the sums that show up in "moments".
Higher order moments can be building blocks to higher statistical measures such as covariance and skewness.
Using covariance, you can calculate the major axis of your set of points, or your arrow in this case.
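Concretely, the angle of the major axis comes out of the normalized second central moments; this is the standard image-moment formula (and the one the code below uses):

$$\theta = \tfrac{1}{2}\,\operatorname{atan2}\!\left(2\mu'_{11},\; \mu'_{20} - \mu'_{02}\right),\qquad \mu'_{pq} = \mu_{pq}/\mu_{00}$$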
Using skewness, you can figure out which side of a distribution is heavier than the other... i.e. which side is the arrow's tip and which is its tail.
This should give you a very precise angle. The scale/radius, however, is best estimated in other ways. You'll notice that the radius estimated from the area of the arrow fluctuates a little. You could instead find the points belonging to the arrow that are furthest away from the center and take that as a somewhat stable length.
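A minimal sketch of that length estimate, assuming points is an N-by-2 array of the arrow's pixel coordinates and centroid is their mean (both names are placeholders):

import numpy as np
# farthest arrow pixel from the centroid, as a more stable "radius"
radius = np.linalg.norm(points - centroid, axis=1).max()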
Here's a longish program that implements the two ideas above and shows the direction of an arrow:
#!/usr/bin/env python3
import os
import sys
import numpy as np
import cv2 as cv
# utilities to convert between 2D vectors and complex numbers
# complex numbers are handy for rotating stuff
def to_complex(vec):
assert vec.shape[-1] == 2
if vec.dtype == np.float32:
return vec.view(np.complex64)
elif vec.dtype == np.float64:
return vec.view(np.complex128)
else:
assert False, vec.dtype
def from_complex(cplx):
if cplx.dtype == np.complex64:
return cplx.view(np.float32)
elif cplx.dtype == np.complex128:
return cplx.view(np.float64)
else:
assert False, cplx.dtype
# utilities for drawing with fractional bits of position
# just to make a pretty picture
def iround(val):
return int(round(val))
def ipt(vec, shift=0):
if isinstance(vec, (int, float)):
return iround(vec * 2**shift)
elif isinstance(vec, (tuple, list, np.ndarray)):
return tuple(iround(el * 2**shift) for el in vec)
else:
assert False, type(vec)
# utilities for affine transformation
# just to make a pretty picture
def rotate(degrees=0):
# we want positive rotation
# meaning move +x towards +y
# getRotationMatrix2D does it differently
result = np.eye(3).astype(np.float32)
result[0:2, 0:3] = cv.getRotationMatrix2D(center=(0,0), angle=-degrees, scale=1.0)
return result
def translate(dx=0, dy=0):
result = np.eye(3).astype(np.float32)
result[0:2,2] = [dx, dy]
return result
# main logic
def calculate_direction(im):
# using "nonzero" (default behavior) is a little noisy
mask = (im >= 128)
m = cv.moments(mask.astype(np.uint8), binaryImage=True)
# easier access... see below for details
m00 = m['m00']
m10 = m['m10']
m01 = m['m01']
mu00 = m00
mu20 = m['mu20']
mu11 = m['mu11']
mu02 = m['mu02']
nu30 = m['nu30']
nu03 = m['nu03']
# that's just the centroid
cx = m10 / m00
cy = m01 / m00
centroid = np.array([cx, cy]) # as a vector
# and that's the size in pixels:
size = m00
# and that's an approximate "radius", if it were a circle which it isn't
radius = (size / np.pi) ** 0.5
# (since the "size" in pixels can fluctuate due to resampling, so will the "radius")
# wikipedia helpfully mentions "image orientation" as an example:
# https://en.wikipedia.org/wiki/Image_moment#Examples_2
# we'll use that for the major axis
mup20 = mu20 / mu00
mup02 = mu02 / mu00
mup11 = mu11 / mu00
theta = 0.5 * np.arctan2(2 * mup11, mup20 - mup02)
#print(f"angle: {theta / np.pi * 180:+6.1f} degrees")
# we only have the axis, not yet the direction
# we will assess "skewness" now
# https://en.wikipedia.org/wiki/Skewness#Definition
# note how "positive" skewness appears in a distribution:
# it points away from the heavy side, towards the light side
# fortunately, cv.moments() also calculates those "standardized moments"
# https://en.wikipedia.org/wiki/Standardized_moment#Standard_normalization
skew = np.array([nu30, nu03])
#print("skew:", skew)
# we'll have to *rotate* that so it *roughly* lies along the x axis
# then assess which end is the heavy/light end
# then use that information to maybe flip the axis,
# so it points in the direction of the arrow
skew_complex = to_complex(skew) # reinterpret two reals as one complex number
rotated_skew_complex = skew_complex * np.exp(1j * -theta) # rotation
rotated_skew = from_complex(rotated_skew_complex)
#print("rotated skew:", rotated_skew)
if rotated_skew[0] > 0: # pointing towards tail
theta = (theta + np.pi) % (2*np.pi) # flip direction 180 degrees
else: # pointing towards head
pass
print(f"angle: {theta / np.pi * 180:+6.1f} degrees")
# construct a vector that points like the arrow in the picture
direction = np.exp([1j * theta])
direction = from_complex(direction)
return (radius, centroid, direction)
def draw_a_picture(im, radius, centroid, direction):
height, width = im.shape[:2]
# take the source at half brightness
canvas = cv.cvtColor(im // 2, cv.COLOR_GRAY2BGR)
shift = 4 # prettier drawing
cv.circle(canvas,
center=ipt(centroid, shift),
radius=ipt(radius, shift),
thickness=iround(radius * 0.1),
color=(0,0,255),
lineType=cv.LINE_AA,
shift=shift)
# (-direction) meaning point the *opposite* of the arrow's direction, i.e. towards tail
cv.line(canvas,
pt1=ipt(centroid + direction * radius * -3.0, shift),
pt2=ipt(centroid + direction * radius * +3.0, shift),
thickness=iround(radius * 0.05),
color=(0,255,255),
lineType=cv.LINE_AA,
shift=shift)
cv.line(canvas,
pt1=ipt(centroid + (-direction) * radius * 3.5, shift),
pt2=ipt(centroid + (-direction) * radius * 4.5, shift),
thickness=iround(radius * 0.15),
color=(0,255,255),
lineType=cv.LINE_AA,
shift=shift)
return canvas
if __name__ == '__main__':
imfile = sys.argv[1] if len(sys.argv) >= 2 else "p7cmR.png"
src = cv.imread(imfile, cv.IMREAD_GRAYSCALE)
src = 255 - src # invert (white arrow on black background)
height, width = src.shape[:2]
diagonal = np.hypot(height, width)
outsize = int(np.ceil(diagonal * 1.3)) # fudge factor
cv.namedWindow("arrow", cv.WINDOW_NORMAL)
cv.resizeWindow("arrow", 5*outsize, 5*outsize)
angle = 0 # degrees
increment = +1
do_spin = True
while True:
print(f"{angle:+.0f} degrees")
M = translate(dx=+outsize/2, dy=+outsize/2) @ rotate(degrees=angle) @ translate(dx=-width/2, dy=-height/2)
im = cv.warpAffine(src, M=M[:2], dsize=(outsize, outsize), flags=cv.INTER_CUBIC, borderMode=cv.BORDER_REPLICATE)
# resampling introduces blur... except at exact multiples of 90 degrees,
# so at those angles things will jump a little.
# this rotation is only for demo purposes
(radius, centroid, direction) = calculate_direction(im)
canvas = draw_a_picture(im, radius, centroid, direction)
cv.imshow("arrow", canvas)
if do_spin:
angle = (angle + increment) % 360
print()
key = cv.waitKeyEx(30 if do_spin else -1)
if key == -1:
continue
elif key in (0x0D, 0x20): # ENTER (CR), SPACE
do_spin = not do_spin # toggle spinning
elif key == 27: # ESC
break # end program
elif key == 0x250000: # VK_LEFT
increment = -abs(increment)
angle += increment
elif key == 0x270000: # VK_RIGHT
increment = +abs(increment)
angle += increment
else:
print(f"key 0x{key:02x}")
cv.destroyAllWindows()
I wish to filter a point cloud, loaded with Open3D, as efficiently as possible.
Currently, I downsample the points before making a mesh out of them and using .contains against an inclusion-volume mesh I created manually. Something like this:
def load_pointcloud(self, pointcloud_path):
    # Load point cloud
    print('target_pointcloud', pointcloud_path)
    self.pointcloud_path = pointcloud_path
    pcd = o3d.io.read_point_cloud(pointcloud_path)
    downpcd = pcd.voxel_down_sample(voxel_size=0.02)
    cl, ind = downpcd.remove_statistical_outlier(nb_neighbors=20,
                                                 std_ratio=2.0)
    downpcd = downpcd.select_by_index(ind)
    pcd_points = np.asarray(downpcd.points, dtype=np.float32)
    self.verts = torch.from_numpy(pcd_points)
    self.verts = self.verts.to(device)
    # We construct a Pointclouds structure for the target point cloud
    self.pointcloud_points = Pointclouds(points=[self.verts])
    self.points = pcd_points
    self.inclusion_pointcloud()

def inclusion_pointcloud(self):
    vertices_in_mesh_states = self.mesh_inclusion.contains(self.points)
    vertices_in_mesh = self.points[vertices_in_mesh_states]
    # Creating cropped point cloud
    cropped_pc = o3d.geometry.PointCloud()
    cropped_pc.points = o3d.utility.Vector3dVector(vertices_in_mesh)
    cropped_pc.paint_uniform_color([0, 0, 0])
    self.points = np.asarray(cropped_pc.points, dtype=np.float32)
    self.verts = torch.from_numpy(self.points)
    self.verts = self.verts.to(device)
    self.pointcloud_points = Pointclouds(points=[self.verts])
    self.pc_mesh = trimesh.Trimesh(vertices=self.points)
What I was thinking of doing was, after the downsampling, to mask away points along X, Y, and Z, and then make the mesh and run .contains against the same inclusion volume. I thought this would reduce the .contains computation and run faster, and it kind of does, but the reduction is marginal, maybe 10 or 15 ms, sometimes less. Something like this:
def new_load_pointcloud(self, pointcloud_path):
    # Load point cloud
    print('target_pointcloud', pointcloud_path)
    self.pointcloud_path = pointcloud_path
    pcd = self.trim_cloud(pointcloud_path)
    downpcd = pcd.voxel_down_sample(voxel_size=0.02)
    cl, ind = downpcd.remove_statistical_outlier(nb_neighbors=20,
                                                 std_ratio=2.0)
    downpcd = downpcd.select_by_index(ind)
    pcd_points = np.asarray(downpcd.points, dtype=np.float32)
    self.verts = torch.from_numpy(pcd_points)
    self.verts = self.verts.to(device)
    # We construct a Pointclouds structure for the target point cloud
    self.pointcloud_points = Pointclouds(points=[self.verts])
    self.points = pcd_points
    self.inclusion_pointcloud()

def trim_cloud(self, pointcloud_path):
    # pcd = o3d.io.read_point_cloud(pointcloud_path)
    pcd_clean = o3d.io.read_point_cloud(pointcloud_path)
    points = np.asarray(pcd_clean.points)
    # X axis
    mask_x_1 = points[:, 0] > -0.4
    mask_x_2 = points[:, 0] < 0.4
    # Y axis
    mask_y_1 = points[:, 1] > -1.3
    mask_y_2 = points[:, 1] < 0.9
    # Z axis
    mask_z_1 = points[:, 2] < 0.3   # closer to floor
    mask_z_2 = points[:, 2] > -0.1  # closer to ceiling
    mask_x = np.logical_and(mask_x_1, mask_x_2)  # along table's width
    mask_y = np.logical_and(mask_y_1, mask_y_2)  # along table's length
    mask_z = np.logical_and(mask_z_1, mask_z_2)  # along table's height
    # note: np.logical_and only combines two arrays (a third argument is
    # treated as the output buffer), so combine the three masks pairwise
    mask = np.logical_and(np.logical_and(mask_x, mask_y), mask_z)
    pcd_clean.points = o3d.utility.Vector3dVector(points[mask])
    return pcd_clean

def inclusion_pointcloud(self):
    vertices_in_mesh_states = self.mesh_inclusion.contains(self.points)
    vertices_in_mesh = self.points[vertices_in_mesh_states]
    # Creating cropped point cloud
    cropped_pc = o3d.geometry.PointCloud()
    cropped_pc.points = o3d.utility.Vector3dVector(vertices_in_mesh)
    cropped_pc.paint_uniform_color([0, 0, 0])
    self.points = np.asarray(cropped_pc.points, dtype=np.float32)
    self.verts = torch.from_numpy(self.points)
    self.verts = self.verts.to(device)
    self.pointcloud_points = Pointclouds(points=[self.verts])
    self.pc_mesh = trimesh.Trimesh(vertices=self.points)
I think you are using too many nb_neighbors for the filter. Try fewer points, like 6 or 10, and a tighter threshold, like 1.0 or even 0.5. The MATLAB documentation for the same filter (https://www.mathworks.com/help/vision/ref/pcdenoise.html) uses 1.0 as the standard threshold and 6 for the knn. You can also try radius outlier removal or a median filter: https://www.mathworks.com/help/lidar/ref/pcmedian.html
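A minimal sketch of those settings with Open3D's API, picking up the downpcd variable from the question (the exact values are only starting points to tune):

# tighter statistical outlier removal, per the suggestion above
cl, ind = downpcd.remove_statistical_outlier(nb_neighbors=6, std_ratio=1.0)
downpcd = downpcd.select_by_index(ind)

# alternative: radius outlier removal (nb_points and radius are guesses to tune)
cl, ind = downpcd.remove_radius_outlier(nb_points=16, radius=0.05)
downpcd = downpcd.select_by_index(ind)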
I am trying to classify an image by selecting a pixel at random, then finding all pixels in the image that are within a certain Euclidean distance in color space from that original pixel. My current script takes a prohibitively long time. I wonder if I am able to use this equation to generate a boolean matrix that will allow quicker manipulation of the image.
(x - cx)^2 + (y - cy)^2 + (z - cz)^2 < r^2
Here is the code I am using now:
import PIL.Image, glob, numpy, random, math, time

def zone_map(picture, threshold):
    im = PIL.Image.open(picture)
    pix = im.load()
    [width, height] = im.size
    mask = numpy.zeros((width, height))
    while 0 in mask:
        x = random.randint(0, width)
        y = random.randint(0, height)
        if mask[x, y] == 0:
            point = pix[x, y]
            to_average = {(x, y): pix[x, y]}
            start = time.clock()
            for row in range(0, width):
                for column in range(0, height):
                    if euclid_dist(point, pix[row, column]) <= threshold:
                        to_average[(row, column)] = pix[row, column]
            #to_average = in_sphere(pix, point)
            end = time.clock()
            print(end - start)
            to_average_sum = (0, 0, 0)
            for value in to_average.values():
                to_average_sum = tuple_sum(to_average_sum, value)
            average = tuple_divide(to_average_sum, len(to_average.values()))
            for coordinate in to_average.keys():
                pix[coordinate] = average
                mask[coordinate] = 1
            unique, counts = numpy.unique(mask, return_counts=True)
            progress = dict(zip(unique, counts))
            print((progress[1] / progress[0]) * 100, '%')
    im.save()
    return im

def euclid_dist(tuple1, tuple2):
    """
    Finds euclidean distance between two points in n dimensional space
    """
    tot_sq = 0
    for num1, num2 in zip(tuple1, tuple2):
        tot_sq += (num1 - num2)**2  # squared difference per channel
    return math.sqrt(tot_sq)

def tuple_sum(tuple1, tuple2):
    """
    Returns tuple comprised of sums of input tuples
    """
    sums = []
    for num1, num2 in zip(tuple1, tuple2):
        sums.append(num1 + num2)
    return tuple(sums)

def tuple_divide(tuple1, divisor):
    """
    Divides numerical values of tuples by divisor, yielding integer results
    """
    quotients = []
    for value in tuple1:
        quotients.append(int(round(value / divisor)))
    return tuple(quotients)
Any information on how to incorporate the described boolean matrix, or any other ideas on how to speed this up, would be greatly appreciated.
Just load the image as a numpy array, and then use array operations instead of looping over pixels:
import numpy as np
import matplotlib.pyplot as plt
import PIL.Image

def zone_map(picture, threshold, show=True):
    with PIL.Image.open(picture) as img:
        rgb = np.array(img, dtype=float)
    height, width, _ = rgb.shape
    mask = np.zeros_like(rgb)
    while not np.any(mask):
        # get random pixel
        position = np.random.randint(height), np.random.randint(width)
        color = rgb[position]
        # get euclidean distance of all pixels in colour space
        distance = np.sqrt(np.sum((rgb - color)**2, axis=-1))
        # threshold
        mask = distance < threshold
    if show:  # show output
        fig, (ax1, ax2) = plt.subplots(1, 2)
        ax1.imshow(rgb.astype(np.uint8))
        ax2.imshow(mask, cmap='gray')
        fig.suptitle('Random color: {}'.format(color))
    return mask

def test():
    zone_map("Lenna.jpg", threshold=20)
    plt.show()
I have this image.
Image of a map
I want to:
1. Recognize all the regions in this image
2. Recognize which regions are connected to which other regions
My goal is to apply the four color theorem to this image and output a properly colored image. I'm a beginner in both Python and OpenCV.
Your assistance in this matter would be greatly appreciated.
Here is MATLAB code which does what you want; it shouldn't be too complicated to implement in Python+OpenCV (a rough Python sketch is included at the end of this answer):
% read image
m = rgb2gray(imread('map.jpg'));
% remove noise
m = medfilt2(m);
b = m < 250;
se1 = strel('disk',1,0);
se2 = ones(7);
b = imclose(imopen(b,se1),se2);
% skeletonize
B = bwmorph(b,'skel',inf);
% remove background
R = padarray(~B,[1 1],1);
bg = bwselect(R,1,1,4);
R(bg) = 0;
R = R(2:end-1,2:end-1);
% get regions connected components
ccRegions = bwconncomp(R,4);
maskRegions = false([size(R),ccRegions.NumObjects]);
MAP = zeros(size(R));
% generate a binary mask for each region, dilate it to detect overlaps
% between neighbors
for ii = 1:ccRegions.NumObjects
    maskRegions((ii - 1)*numel(R) + (ccRegions.PixelIdxList{ii})) = 1;
    maskRegions(:,:,ii) = imdilate(maskRegions(:,:,ii),se1);
    MAP(maskRegions(:,:,ii)) = ii;
end
% detect neighbors using mask overlaps
neighborsRegions = cell(ccRegions.NumObjects,1);
for ii = 1:ccRegions.NumObjects
    r = repmat(maskRegions(:,:,ii),[1 1 ccRegions.NumObjects]);
    idxs = any(any(r & maskRegions,1),2); % indexes of touching neighbors
    idxs(ii) = 0; % remove self index
    neighborsRegions{ii} = find(idxs);
end
% show result
imshow(MAP,[])
c = regionprops(ccRegions,'Centroid');
for ii = 1:ccRegions.NumObjects
    text(c(ii).Centroid(1),c(ii).Centroid(2),num2str(ii),'FontSize',20,'Color','r');
end
The output looks like this:
And the neighbors of each region are:
neighborsRegions = {[2;3]
[1;3;4]
[1;2;4;5;6]
[2;3;6]
[3;6]
[3;4;5]
[]}
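As mentioned above, the same idea carries over to Python+OpenCV. This is only a rough sketch of the approach (threshold, label the regions with connected components, then dilate each region mask to see which labels it touches), not a line-by-line port; in particular the outer background region would still have to be excluded, as the MATLAB code does with bwselect:

import cv2
import numpy as np

img = cv2.imread('map.jpg', cv2.IMREAD_GRAYSCALE)
img = cv2.medianBlur(img, 3)                      # remove noise
regions = (img >= 250).astype(np.uint8)           # white interiors, dark borders
num_labels, labels = cv2.connectedComponents(regions, connectivity=4)

kernel = np.ones((5, 5), np.uint8)
neighbors = {}
for lab in range(1, num_labels):                  # label 0 covers the border pixels
    mask = (labels == lab).astype(np.uint8)
    grown = cv2.dilate(mask, kernel)              # grow across the border lines
    touching = set(np.unique(labels[grown > 0])) - {0, lab}
    neighbors[lab] = sorted(touching)
print(neighbors)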
My aim is to find the color of the main object in a frame/image. In my case the image is always of the same type, for example a news reporter (human) in a forest or a news reporter on an animal farm. The position of the news reporter is also the same. What is a simple way to find the dominant colour of the main object (the news reporter)?
Any help is welcome. Thanks.
EDIT: code added
import cv2
from collections import namedtuple
from math import sqrt
import random
import webcolors
try:
    import Image
except ImportError:
    from PIL import Image

Point = namedtuple('Point', ('coords', 'n', 'ct'))
Cluster = namedtuple('Cluster', ('points', 'center', 'n'))

def get_points(img):
    points = []
    w, h = img.size
    for count, color in img.getcolors(w * h):
        points.append(Point(color, 3, count))
    return points

rtoh = lambda rgb: '#%s' % ''.join(('%02x' % p for p in rgb))

def colorz(filename, n=3):
    img = Image.open(filename)
    img.thumbnail((200, 200))
    w, h = img.size
    points = get_points(img)
    clusters = kmeans(points, n, 1)
    rgbs = [map(int, c.center.coords) for c in clusters]
    return map(rtoh, rgbs)

def euclidean(p1, p2):
    return sqrt(sum([
        (p1.coords[i] - p2.coords[i]) ** 2 for i in range(p1.n)
    ]))

def calculate_center(points, n):
    vals = [0.0 for i in range(n)]
    plen = 0
    for p in points:
        plen += p.ct
        for i in range(n):
            vals[i] += (p.coords[i] * p.ct)
    return Point([(v / plen) for v in vals], n, 1)

def kmeans(points, k, min_diff):
    clusters = [Cluster([p], p, p.n) for p in random.sample(points, k)]
    while 1:
        plists = [[] for i in range(k)]
        for p in points:
            smallest_distance = float('Inf')
            for i in range(k):
                distance = euclidean(p, clusters[i].center)
                if distance < smallest_distance:
                    smallest_distance = distance
                    idx = i
            plists[idx].append(p)
        diff = 0
        for i in range(k):
            old = clusters[i]
            center = calculate_center(plists[i], old.n)
            new = Cluster(plists[i], center, old.n)
            clusters[i] = new
            diff = max(diff, euclidean(old.center, new.center))
        if diff < min_diff:
            break
    return clusters

def main():
    img = cv2.imread('d:/Emmanu/project-data/b1.jpg')
    res = cv2.resize(img, (400, 300))
    crop_img = res[100:200, 150:250]
    cv2.imwrite("d:/Emmanu/project-data/color-test.jpg", crop_img)
    g = colorz('d:/Emmanu/project-data/color-test.jpg', 1)
    k = g[0]
    print k
    f = webcolors.hex_to_rgb(k)
    print webcolors.rgb_to_name(f, spec='css3')

if __name__ == '__main__':
    main()
The problem is that this returns the dominant color of the whole image, not of the main object.
If you take the colour of the whole image, in most cases you will get a wrong answer, since the background dominates. If your image size is fixed and you are sure about the object's position, the simplest solution is to crop the image where you expect the object. In most cases it will work.
To crop:
import cv2
img = cv2.imread('d:/Emmanu/project-data/b1.jpg')
crop_img = img[200:400, 100:300]  # crop rows 200:400 and columns 100:300
# NOTE: it's img[y: y + h, x: x + w] and *not* img[x: x + w, y: y + h]
cv2.imshow("cropped", crop_img)
cv2.waitKey(0)
Now give this crop_img as input to your code, and in most cases it will give the correct result. There is nothing simpler than this. I think this will help.