Using SimpleITK to find bounding box - python

How can I capture the bounding box of a 3D mask using SimpleITK in Python?
ITK has a bounding box function, but I couldn't find a similar function in SimpleITK.

You need a LabelShapeStatisticsImageFilter; after calling Execute you can get the BoundingBox for each label value.
If the image contains several masks you can iterate over range(1, labelimfilter.GetNumberOfLabels() + 1).
(Iteration starts at 1 because the bounding box cannot be computed for the background value 0.)
import SimpleITK as sitk

bbox = []
labelimfilter = sitk.LabelShapeStatisticsImageFilter()
labelimfilter.Execute(yourmaskimage)
for i in range(1, labelimfilter.GetNumberOfLabels() + 1):
    box = labelimfilter.GetBoundingBox(i)
    bbox.append(box)
This returns the bounding box coordinates in (xstart, ystart, zstart, xsize, ysize, zsize) order.
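If you then want to crop the image (or mask) down to one of those boxes, a minimal sketch, assuming yourmaskimage and bbox from above:

# crop to the first label's bounding box
# box layout: (xstart, ystart, zstart, xsize, ysize, zsize)
start = bbox[0][0:3]
size = bbox[0][3:6]
cropped = sitk.RegionOfInterest(yourmaskimage, size, start)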

Related

Get real GPS coordinates out of known edge values in Python

I'm trying to find a way to convert pixels into real-world coordinates. I have an image with known (GPS) corner values:
Top left = 43.51281, -70.46223
Top right = 43.51279, -70.46213
Bottom left = 43.51272, -70.46226
Bottom right = 43.51270, -70.46215
I have another script that prints the pixel coordinates of a click on an image. Is there any way to declare the value of each corner so that the script prints the real coordinates of where I clicked?
For example: the image shape is [460, 573], and when I click somewhere on it the pixel position of that click is shown; I want it to be real coordinates instead.
One option is OpenCV's getPerspectiveTransform() function. See this article for an intuitive explanation of how the function maps coordinates from one plane to another (in your case, mapping the pixel positions within the image to GPS values):
https://towardsdatascience.com/how-to-track-football-players-using-yolo-sort-and-opencv-6c58f71120b8
And these for an example of the function being used:
Python Open CV perspectiveTransform()
https://www.geeksforgeeks.org/perspective-transformation-python-opencv/
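As a minimal sketch of that idea, assuming the four GPS values line up exactly with the pixel corners of a 573x460 image (both of these are assumptions; adjust to your actual corner pixels):

import numpy as np
import cv2

# pixel corners (x, y) of a 573x460 image, assumed to match the GPS corners
pixel_corners = np.float32([[0, 0], [572, 0], [0, 459], [572, 459]])
gps_corners = np.float32([[43.51281, -70.46223],   # top left
                          [43.51279, -70.46213],   # top right
                          [43.51272, -70.46226],   # bottom left
                          [43.51270, -70.46215]])  # bottom right

# homography that maps pixel coordinates to GPS coordinates
M = cv2.getPerspectiveTransform(pixel_corners, gps_corners)

def pixel_to_gps(x, y):
    point = np.float32([[[x, y]]])                 # shape (1, 1, 2)
    lat, lon = cv2.perspectiveTransform(point, M)[0, 0]
    return lat, lon

print(pixel_to_gps(286, 230))  # GPS value of a click near the image centre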

Convert point/dot annotation into Gaussian density map

I'm studying this paper: https://papers.nips.cc/paper/2010/file/fe73f687e5bc5280214e0486b273a5f9-Paper.pdf and I'm struggling with its ground-truth density function.
Basically, in an image each person is annotated with a dot rather than a bounding box or segmentation. The paper proposes a way to convert a dot into a Gaussian density map, which acts as the ground truth. I have tried numpy.random.multivariate_normal but it doesn't seem to work.
I am working on a research problem involving density maps. This code assumes you are looping over a list of text files, where each text file contains the point annotations (or that you are converting from object to point annotations, as I did). It also assumes that, after reading and processing each file, you have a list of annotations with (x, y) centre points to work with.
You can find a good implementation of this here:
https://github.com/CommissarMa/MCNN-pytorch/blob/master/data_preparation/k_nearest_gaussian_kernel.py
The above has some extra code for adaptive kernels.
The code below in context (with a lot more 'fluff') is here:
https://github.com/MattSkiff/cow_flow/blob/master/data_loader.py
Here is the code I used:
import numpy as np
import scipy.ndimage

# running gaussian filter over points as in crowdcount mcnn
density_map = np.zeros((img_size[1], img_size[0]), dtype=np.float32)

# add points onto basemap
for point in annotations:
    base_map = np.zeros((img_size[1], img_size[0]), dtype=np.float32)
    # subtract 1 to account for 0 indexing
    base_map[int(round(point[1] * img_size[1]) - 1),
             int(round(point[0] * img_size[0]) - 1)] += 1
    # scipy.ndimage.gaussian_filter (the scipy.ndimage.filters path is deprecated)
    density_map += scipy.ndimage.gaussian_filter(base_map, sigma=sigma, mode='constant')
This should create a density map that does what you want. Using imshow on a matplotlib axes object (e.g. ax.imshow(density_map, cmap='hot', interpolation='nearest')) should produce a density map like the one I get (I've overlaid the aerial image to indicate what is being labelled).
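The snippet above assumes img_size, annotations and sigma already exist; a minimal self-contained usage might look like this (all values here are made up for illustration):

import numpy as np
import scipy.ndimage

img_size = (640, 480)                      # (width, height)
annotations = [(0.25, 0.5), (0.7, 0.3)]    # normalised (x, y) centre points
sigma = 4                                  # spread of each Gaussian, in pixels

density_map = np.zeros((img_size[1], img_size[0]), dtype=np.float32)
for point in annotations:
    base_map = np.zeros((img_size[1], img_size[0]), dtype=np.float32)
    base_map[int(round(point[1] * img_size[1]) - 1),
             int(round(point[0] * img_size[0]) - 1)] += 1
    density_map += scipy.ndimage.gaussian_filter(base_map, sigma=sigma, mode='constant')

print(density_map.sum())  # ~2.0: the density integrates to the point count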

How to get box around contour using skimage.segmentation.felzenszwalb?

I'm trying to get a box around a segmented object at the edge of the image, that is, there is no closed contour around the segmentation because the object is only partially inside the image region.
I use skimage's felzenszwalb, find_boundaries, clear_border, and regionprops. However, regionprops does not return those edge regions:
import cv2
from skimage.segmentation import felzenszwalb, clear_border
from skimage.measure import label, regionprops

segments_fz = felzenszwalb(cv2.cvtColor(image, cv2.COLOR_BGR2RGB), scale=300, sigma=0.5, min_size=50)
cleared = clear_border(segments_fz)
label_image = label(cleared)
regionprops(label_image)
I want a box around a segmented object near the border of the image region.
You shouldn't use clear_border; then objects on the border will be treated like any other. The bbox property of each regionprops entry gives you a bounding box for your object of interest, while find_boundaries and mark_boundaries let you get or visualise the boundaries between segments.
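A minimal sketch of the suggested change, reusing segments_fz from the question and dropping the clear_border call (the +1 shift is my assumption, to keep felzenszwalb's segment 0 from being treated as background by label):

from skimage.measure import label, regionprops

# keep border segments: no clear_border call
label_image = label(segments_fz + 1)
for region in regionprops(label_image):
    minr, minc, maxr, maxc = region.bbox  # (min_row, min_col, max_row, max_col)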

Defining color range for histologic image mask within HSV colorspace (Python, OpenCV, Image-Analysis)

In an effort to separate histologic slides into several layers based on color, I modified some widely distributed code (1) available through OpenCV's community. Our staining procedure marks different cell types of tissue cross sections with different colors (B cells are red, macrophages are brown, background nuclei have a bluish color).
I'm interested in selecting only the magenta-colored and brown parts of the image.
Here's my attempt to create a mask for the magenta pigment:
import cv2
import numpy as np

def mask_builder(filename, hl, hh, sl, sh, vl, vh):
    # load image, convert to HSV
    bgr = cv2.imread(filename)
    hsv = cv2.cvtColor(bgr, cv2.COLOR_BGR2HSV)
    # set lower and upper bounds of the range according to the arguments
    lower_bound = np.array([hl, sl, vl], dtype=np.uint8)
    upper_bound = np.array([hh, sh, vh], dtype=np.uint8)
    return cv2.inRange(hsv, lower_bound, upper_bound)

mask = mask_builder('sample 20 138 1.jpg', 170, 180, 0, 200, 0, 230)
cv2.imwrite('mask.jpg', mask)
So far a trial-and-error approach has produced poor results.
Can anyone suggest a smarter method to threshold within the HSV colorspace? I've done my best to search for answers in previous posts, but it seems that these color ranges are particularly difficult to define due to the nature of the image.
References:
Separation with Colorspaces: http://opencv-python-tutroals.readthedocs.org/en/latest/py_tutorials/py_imgproc/py_colorspaces/py_colorspaces.html
python opencv color tracking
BGR separation: http://www.pyimagesearch.com/2014/08/04/opencv-python-color-detection/
UPDATE:
I've found a working solution to my problem. I increased the lower bounds of 'S' and 'V' in regular intervals using a simple for loop, wrote out the result for each test image, and chose the best. I found my lower bounds for S and V should be set at 100 and 125. This systematic trial and error produced better results:
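That sweep might look something like this (the step size and output naming are my own assumptions; mask_builder is the function defined above):

# sweep the lower S and V bounds, writing one mask per combination
# for visual inspection
for s_low in range(0, 255, 25):
    for v_low in range(0, 255, 25):
        mask = mask_builder('sample 20 138 1.jpg', 170, 180, s_low, 200, v_low, 230)
        cv2.imwrite('mask_s%d_v%d.jpg' % (s_low, v_low), mask)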
I am happy you found your answer.
I will suggest an alternate method that might work. Unfortunately I am not proficient with Python, so you'll need to find out how to code this in Python (it's basic).
If I had the first image you get after the HSV threshold, I would use morphological operations to extract the information I want.
I would probably give 'closing' a go, but if that doesn't work I would first dilate, then fill, and then erode by the same amount as the initial dilation.
Probably after this first step you'll need to delete the small 'noise' blobs around the main region, and you'll get the image.
This is how it would be in Matlab (showing this mainly so you can see the results):
I = imread('http://i.stack.imgur.com/RlH4V.jpg');
I = I > 230;              % create a black and white image (needed because StackOverflow serves a JPEG)
ker = strel('square', 3); % create a 3x3 square kernel
I1 = imdilate(I, ker);    % dilate
I2 = imfill(I1, 'holes'); % fill holes
I3 = imerode(I2, ker);    % erode
Ilabel = bwlabel(I3, 8);  % get a label per independent blob
% get the maximum-area blob (you can do this with a for loop in Python easily)
areas = regionprops(Ilabel, 'Centroid', 'Area', 'PixelIdxList');
[~, index] = max([areas.Area]); % index of the maximum area
Imask = Ilabel == index;        % keep only the max-area blob
% Plot: this is just Matlab code, no relevance
figure;
subplot(131)
title('Dilated')
imshow(I1);
subplot(132)
title('Filled')
imshow(I2);
subplot(133)
title('Eroded')
imshow(I3);
figure;
imshow(imread('http://i.stack.imgur.com/ZqrF9.jpg'))
hold on
h = imshow(bwperim(Imask));
set(h, 'alphadata', Imask / 2)
Note that I started from the "bad" HSV segmentation. If you try a better one the results may improve. Also, play with the kernel size for the erosion and dilation.
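Since the question is in Python, here is a rough OpenCV translation of the morphological steps above, as a sketch (the input filename is an assumption, and MORPH_CLOSE only approximates Matlab's imfill):

import cv2
import numpy as np

# binarise the mask (thresholding compensates for JPEG artefacts)
img = cv2.imread('mask.jpg', cv2.IMREAD_GRAYSCALE)
bw = (img > 230).astype(np.uint8)

kernel = np.ones((3, 3), np.uint8)   # 3x3 square kernel
dilated = cv2.dilate(bw, kernel)
closed = cv2.morphologyEx(dilated, cv2.MORPH_CLOSE, kernel)  # stands in for imfill
eroded = cv2.erode(closed, kernel)

# keep only the largest blob, as in the Matlab regionprops step
n, labels, stats, _ = cv2.connectedComponentsWithStats(eroded, connectivity=8)
largest = 1 + int(np.argmax(stats[1:, cv2.CC_STAT_AREA]))  # skip background label 0
mask = (labels == largest).astype(np.uint8) * 255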
Through trial and error (stepping the "S" and "V" lower bounds down and up), I found that my desired colors require a relaxed range for "S" and "V" values. I'll refrain from sharing the particular values I use because I don't think anyone would find that information useful.
Note that the original code shared works fine once more representative ranges are used.

Detect the size of a QR Code in Python using OpenCV and Zbar

I have code that takes an image from the webcam, scans it for QR codes using zbar, and returns the value of the code plus an image with the QR code highlighted (based on http://sourceforge.net/p/qrtracker/wiki/Home/). How can I also make it tell me the size of the code (as a pixel value or a % of the screen)?
Additional question: is there a way to detect how skewed it is (e.g. rotation in Z about the Y-axis)?
Regarding the size of the code:
zbar provides a way to do this in terms of pixel values (once you know the size in pixels, you can convert it to a %).
I would like to extend the code here: http://sourceforge.net/apps/mediawiki/zbar/index.php?title=HOWTO:_Scan_images_using_the_API
The code at that link finds a QR code in an image, prints its data, and so on. Now extend its last few lines:
import math

scanner.scan(image)
for symbol in image:
    [a, b, c, d] = symbol.location  # the four corners of the QR code, in order
    w = math.sqrt((a[0] - b[0])**2 + (a[1] - b[1])**2)  # distance between two corners
    h = math.sqrt((b[0] - c[0])**2 + (b[1] - c[1])**2)
    area = w * h
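To express that as a percentage of the frame (frame_width and frame_height are hypothetical names for your capture dimensions, and this would sit inside the loop above):

    percent = 100.0 * area / (frame_width * frame_height)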
Skewness of the QR code:
I think you want to transform it into a predefined shape (like a square or rectangle). If so, you can define the corners of a predefined shape, say ((100,100), (300,100), (300,300), (100,300)), then find the perspective transform and apply it. An example in OpenCV is provided here: http://docs.opencv.org/trunk/doc/py_tutorials/py_imgproc/py_geometric_transformations/py_geometric_transformations.html#perspective-transformation
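A minimal sketch of that idea, reusing the corners a, b, c, d from symbol.location above (image_array is a hypothetical numpy array of the frame, since zbar's image object is not one, and the order of src and dst corners must correspond):

import numpy as np
import cv2

src = np.float32([a, b, c, d])  # detected corners, in order
dst = np.float32([[100, 100], [300, 100], [300, 300], [100, 300]])  # target square
M = cv2.getPerspectiveTransform(src, dst)
warped = cv2.warpPerspective(image_array, M, (400, 400))  # de-skewed view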
