How can I measure distances in a cinematic MRI sequence? - Python

I'm working on a dynamic MRI sequence consisting of a cinematic series of 20 images, and I would like to calculate certain distances on each image based on landmarks.
I'm really lost and don't know where to start with Python to do that. I would really appreciate your guidance.
I tried to create landmarks on the image, but from there I don't know how to calculate the distance between two or more landmarks.

To measure distances in a cinematic MRI sequence, you need to perform image analysis. Here are the general steps to do so:
1. Pre-processing: Perform any necessary pre-processing steps on the images, such as correcting for distortions, enhancing the contrast, or removing noise.
2. Segmentation: Identify and isolate the structures of interest in each image. This can be done manually or using image processing algorithms such as thresholding, edge detection, or morphological operations.
3. Tracking: Follow the movement of the structures from one image to the next to create a series of "tracks". This can be done using techniques such as optical flow, particle filtering, or Kalman filtering.
4. Distance measurement: Once the tracks have been generated, you can measure the distances between points on the tracks, either by calculating the Euclidean distance between two points or by using more complex algorithms such as deformable image registration (a minimal example follows below).
5. Validation: Verify the accuracy of the distance measurements by comparing them to known distances or by performing a comparison with other imaging modalities.
Note that the specific steps and techniques used in measuring distances in a cinematic MRI sequence can vary depending on the specific application and the type of structures being studied. It is also important to consider the limitations of the imaging technology and the quality of the images acquired.
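As a concrete starting point for step 4, here is a minimal sketch. The coordinate arrays and the pixel spacing are illustrative assumptions, not values from the question; in practice the landmarks would come from your annotations and the spacing from the DICOM PixelSpacing tag of the series.

```python
import numpy as np

# Hypothetical example data: two landmarks annotated on each of the
# 20 frames of the cine sequence, as (row, col) pixel coordinates.
landmarks_a = np.random.rand(20, 2) * 256   # placeholder coordinates
landmarks_b = np.random.rand(20, 2) * 256

# Assumed in-plane pixel spacing (row, col) in mm, normally read from
# the DICOM header (PixelSpacing tag) of the series.
pixel_spacing = np.array([1.5, 1.5])

# Scale each axis to millimetres, then take the Euclidean norm per frame.
diff_mm = (landmarks_a - landmarks_b) * pixel_spacing
distances_mm = np.linalg.norm(diff_mm, axis=1)

for i, d in enumerate(distances_mm):
    print(f"frame {i}: {d:.2f} mm")
```

The same pattern extends to more than two landmarks by computing the norm for every landmark pair of interest.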

Related

Calculating Percentage of Image Overlap

I’m currently working on an image registration algorithm which uses aerial imagery. My objective is to compute the percentage of overlap between two images as shown below.
Visually, the images have about 50% overlap. I'm using OpenCV following this implementation and this formula, but the registered images are warped, which makes the implementation tricky.
A similar implementation with a different formula can be found here.
Are there any simpler workarounds to just find a rough estimate of the overlap?
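One rough workaround, sketched here under the assumption that your registration step already gives you a homography H (e.g. from cv2.findHomography), is to warp a solid mask of one image into the other's frame and measure how much area it covers:

```python
import cv2
import numpy as np

def overlap_percentage(img1, img2, H):
    """Estimate the percentage of img2 covered by img1, given a
    homography H mapping img1 into img2's coordinate frame."""
    h2, w2 = img2.shape[:2]
    # A solid white mask the size of img1.
    mask = np.full(img1.shape[:2], 255, dtype=np.uint8)
    # Warp the mask; non-zero pixels mark the region img1 covers in img2.
    warped = cv2.warpPerspective(mask, H, (w2, h2))
    return 100.0 * np.count_nonzero(warped) / (h2 * w2)
```

This ignores lens distortion and blending artefacts, but for a rough estimate of "about 50% overlap" it is usually sufficient.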

Conversion from pixels to a general metric (mm, in)

I am using OpenCV to process an image: I use HoughCircles to detect the circles in the image under test, and I calculate the distance between their centers using the Euclidean distance.
Since this is in pixels, I need the absolute distances in mm or inches. Can anyone let me know how this can be done?
Thanks in advance.
The image formation process involves taking a 2D projection of the real, 3D world through a lens. In this process, a lot of information is lost (e.g. the third dimension), and the transformation depends on lens properties (e.g. focal length).
The transformation between distance in pixels and physical distance depends on the depth (the distance between the camera and the object) and the lens. The complex but more general way is to estimate the depth (there are specialized algorithms that can do this under certain conditions, but they require multiple cameras/perspectives) or to use a depth camera that measures it directly. Once the depth is known, and after taking the effects of the lens projection into account, an estimate can be made.
You do not give much information about your setup, but the transformation can be measured experimentally. You simply take a picture of an object of known dimensions and determine the physical dimension of one pixel (e.g. if the object is 10x10 cm and spans 100x100 px in the picture, then one pixel corresponds to 1 mm). This is strongly dependent on the distance between the camera and the object.
A slightly more automated approach is to use a pattern of known dimensions (e.g. a checkerboard). It can be detected automatically in the image and the same transformation can be performed.
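As an illustration of the experimental calibration described above (all numbers are assumptions for the sketch): measure a reference object of known size once, derive a mm-per-pixel factor, and apply it to the circle-center distances. This only holds if the circles lie in the same plane as the reference object, at the same distance from the camera.

```python
import numpy as np

# Calibration: a reference object of known physical size, measured once
# in the image. Values here are assumptions for the sketch.
ref_width_mm = 100.0                       # the object is 10 cm wide
ref_width_px = 100.0                       # and spans 100 px in the image
mm_per_px = ref_width_mm / ref_width_px    # -> 1.0 mm per pixel

# Circle centers in pixels, e.g. taken from cv2.HoughCircles output.
c1 = np.array([120.0, 85.0])
c2 = np.array([340.0, 310.0])

dist_px = np.linalg.norm(c1 - c2)
dist_mm = dist_px * mm_per_px
print(f"{dist_px:.1f} px = {dist_mm:.1f} mm ({dist_mm / 25.4:.2f} in)")
```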

How to identify particles in this complex image?

I have been trying Python+OpenCV for quite a long time already and have followed many tutorials in order to identify particles in the following image:
My ultimate goal is to identify every particle, from there I will be able to e.g. count number of particles, calculate a size distribution, etc.
I have already tried to customize many examples from several sites.
I got good hints based on:
How to define the markers for Watershed in OpenCV?
Counting particles using image processing in python
However, I was not able to achieve decent results.
How can I identify particles in this image using Python and OpenCV?
IMO, the only hope to get meaningful results is to use the fact that the particles are round. By using some homogeneity criterion, you could find candidate particle centers, and from these grow contours in such a way that they remain round and stop at edges. An option could be to draw rays from the seed point, find the closest edge points and use a robust fit of a circle or an ellipse.
Reject the shapes that are too far from roundness. This should allow you to find the unoccluded particles. Then you can continue the game from other seed points, this time growing contours that can be occluded by the already detected particles. (When an edge is hit, if it is known to belong to a particle, ignore it.)
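A minimal sketch of the ray-casting idea (the edge map, seed point, ray count, and radius limit are all placeholders to tune): cast rays from a seed, record the first edge pixel hit along each ray, and fit an ellipse to the hits with cv2.fitEllipse.

```python
import cv2
import numpy as np

def fit_particle(edge_map, seed, n_rays=36, max_r=100):
    """Cast rays from `seed` (x, y) over a binary edge map, collect the
    first edge pixel hit along each ray, and fit an ellipse to the hits.
    Returns ((cx, cy), (major, minor), angle) or None."""
    h, w = edge_map.shape
    hits = []
    for k in range(n_rays):
        theta = 2 * np.pi * k / n_rays
        dx, dy = np.cos(theta), np.sin(theta)
        for r in range(1, max_r):
            x = int(round(seed[0] + r * dx))
            y = int(round(seed[1] + r * dy))
            if not (0 <= x < w and 0 <= y < h):
                break
            if edge_map[y, x]:
                hits.append([x, y])
                break
    if len(hits) < 5:              # cv2.fitEllipse needs >= 5 points
        return None
    return cv2.fitEllipse(np.array(hits, dtype=np.float32))
```

Rejecting fits whose major-to-minor axis ratio is far from 1 implements the roundness check described above.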
Let's pretend the goal is to get an estimated number of particles. Also, let's assume those particles are spheres.
With that in mind, it should be possible to build a model based on highlights, shadows, and halftones to make the final result as accurate as possible.
As a simple proof of concept, segmentation based on highlights can be tried first.
The initial result does not seem promising, but a small contrast adjustment improves it:
This should be enough to get an estimated particle count and to apply more advanced models to the identified regions.
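A minimal version of that proof of concept (the file name, contrast gain/bias, and threshold are assumptions to tune against your data): adjust the contrast, segment the bright highlights on top of each sphere, and count the resulting blobs.

```python
import cv2

img = cv2.imread("particles.png", cv2.IMREAD_GRAYSCALE)  # hypothetical input

# Small contrast adjustment (gain and bias values need tuning).
adjusted = cv2.convertScaleAbs(img, alpha=1.5, beta=-40)

# Segment the bright highlights; each blob approximates one particle.
_, highlights = cv2.threshold(adjusted, 200, 255, cv2.THRESH_BINARY)
n_labels, _ = cv2.connectedComponents(highlights)

print("estimated particle count:", n_labels - 1)  # label 0 is background
```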

How to track and count multiple cars in a video using contours?

Steps I have followed:
Background subtraction with preprocessing.
Contour detection.
With these two steps, I am able to draw contours around all the moving cars in the video. But how do I track the contours to count the number of cars in the video?
I searched around a bit and there seem to be different techniques, such as the Kalman filter, Lucas-Kanade, and optical flow, but I don't know which one to use for my use case. I am using opencv3-python.
Actually, this is a fairly general question, but I will give my point of view (I had the same problem, though with point clouds; it may differ from what you asked, but I hope it gives you an idea of how to proceed).
Most of the time, once your contours are detected, tracking moving objects in the scene involves three main steps:
Feature Matching:
This step is about detecting features of your object in frame N and matching them to features of objects in frame N+1. OpenCV provides standard algorithms and descriptors for the detection part (SURF, SIFT, ORB...) as well as for the feature matching part; a short sketch follows below.
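A short sketch of ORB-based matching between two consecutive frames (the file names are placeholders; in practice you would match patches cropped around your contours):

```python
import cv2

patch_n = cv2.imread("car_frame_n.png", cv2.IMREAD_GRAYSCALE)
patch_n1 = cv2.imread("car_frame_n1.png", cv2.IMREAD_GRAYSCALE)

orb = cv2.ORB_create()
kp1, des1 = orb.detectAndCompute(patch_n, None)
kp2, des2 = orb.detectAndCompute(patch_n1, None)

# Hamming distance suits ORB's binary descriptors; cross-checking keeps
# only mutual best matches.
bf = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = sorted(bf.match(des1, des2), key=lambda m: m.distance)
print(f"{len(matches)} matches between the two frames")
```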
Kalman Filter
The Kalman filter is used to get an initial prediction (generally by applying a constant-velocity model to your objects). For each appearance point of the track, a correspondence search is executed. If the average distance is above a specified threshold, feature matching is applied to get a better initial estimate.
In order to do that, you need to model your problem in a way that can be solved by a Kalman filter; a minimal setup is sketched below.
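A minimal constant-velocity setup with OpenCV's KalmanFilter (the noise covariance and the measured centroid are placeholder values):

```python
import cv2
import numpy as np

# State = (x, y, vx, vy), measurement = (x, y): a constant-velocity model.
kf = cv2.KalmanFilter(4, 2)
kf.transitionMatrix = np.array([[1, 0, 1, 0],
                                [0, 1, 0, 1],
                                [0, 0, 1, 0],
                                [0, 0, 0, 1]], np.float32)
kf.measurementMatrix = np.array([[1, 0, 0, 0],
                                 [0, 1, 0, 0]], np.float32)
kf.processNoiseCov = np.eye(4, dtype=np.float32) * 1e-2  # tuning value

# Per frame: predict where the car's centroid should be, then correct
# with the centroid measured from its contour.
prediction = kf.predict()
measurement = np.array([[120.0], [80.0]], np.float32)  # placeholder centroid
estimate = kf.correct(measurement)
print("predicted:", prediction.ravel()[:2], "corrected:", estimate.ravel()[:2])
```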
Dynamic Mapping
After the motion estimation, the appearance of each track is updated. In contrast to standard mapping techniques, dynamic mapping is an approach that tries to accumulate appearance details of both static and dynamic objects, thus refining your motion estimation and tracking process.
There are a lot of papers out there, you may as well take a further look at these papers :
Robust Visual Tracking and Vehicle Classification via Sparse Representation
Motion Estimation from Range Images in Dynamic Outdoor Scenes
Multiple Objects Tracking using CAMshift Algorithm in OpenCV
Hope it helps!

Image segmentation based on edge pixel map [closed]

I have trained a classifier in Python for classifying pixels in an image of cells as edge or non edge. I've used it successfully on a few image datasets but am running into problems with this particular dataset, which seems pretty ambiguous even to the human eye. I don't know of any existing automated technique that can segment it accurately.
After prediction I obtain the following image:
I am relatively new to image processing and am unsure how to proceed with actually obtaining the final segmentations of the cells. I have briefly tried a few different techniques - namely the Hough circle transform, level sets, skeletonization, and contour finding - but none has really done the trick. Am I just not tuning the parameters correctly, or is there a better technique out there?
Here are the correct outlines, by the way, for reference.
And the original image:
And the continuous probability map:
Very nice work on boundary detection. I used to work on similar segmentation problems.
Theory:
Once you have obtained your edge map, where e(i,j) indicates the degree of "edge-ness" of pixel (i,j), you would like a segmentation of the image that respects the edge map as much as possible.
In order to formulate this "respect the edge map" in a more formal fashion I suggest you look at the Correlation clustering (CC) functional:
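(The functional itself is not reproduced here; a common way to write it, reconstructed from memory and up to an additive constant, is E(X) = -Σ_{i<j} W_ij X_ij, where W is the affinity matrix and X_ij = 1 exactly when pixels i and j are assigned to the same segment, so positive affinities are rewarded for staying together and negative ones for being separated. Check the paper for the exact notation.)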
The CC functional assesses the quality of a segmentation based on pair-wise relations between neighboring pixels: whether they should be in the same cluster (no edge between them) or in different clusters (there is an edge between them).
Take a look at the example at section 7.1 of the aforementioned paper.
CC is used for similar segmentation problems in medical (neuronal) imaging as well, see e.g., here.
Practice
Once you convince yourself that CC is indeed an appropriate formulation for your problem, there is still the question of how exactly to convert your binary edge map into an affinity matrix that CC can process. Bear in mind that CC needs as input a (usually sparse) adjacency matrix with positive entries for pairs of pixels assumed to belong to the same segment, and negative entries for pairs of pixels assumed to belong to different segments.
Here's my suggestion:
The edges in your edge map look quite thick and are not well localized. I suggest non-maximum suppression or morphological thinning as a pre-processing stage.
Once you have better-localized edges, ignore the "edge" pixels and work only with the "non-edge" pixels; let's call them "active".
Two active pixels that are next to each other have no "edge" pixel between them: they should be together. So the adjacency matrix for immediate neighbors should have positive entries.
Consider three pixels on a line, where the two endpoints are "active" pixels: if the middle one is an edge, then the two active pixels should not belong to the same cluster, and the corresponding entries in the adjacency matrix should be negative. If the middle pixel is also active, then the corresponding entries in the adjacency matrix should be positive.
Considering all possible neighboring pairs and triplets (inducing a 24-connected grid graph) allows you to construct an affinity matrix with positive and negative entries suitable for CC, as in the sketch below.
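A sketch of that construction (illustrative only: the weights, the four scan directions, and the function name are my own choices, and the paper's exact scheme may differ):

```python
import numpy as np
from scipy.sparse import lil_matrix

def edge_map_to_affinity(edge_map, pos=1.0, neg=-1.0):
    """Build a sparse CC affinity matrix from a boolean edge map
    (True = edge pixel). Adjacent active pixels get a positive entry;
    active pixels separated by exactly one edge pixel get a negative one."""
    h, w = edge_map.shape
    active = ~edge_map
    ids = np.full((h, w), -1, dtype=np.int64)
    ids[active] = np.arange(int(active.sum()))
    A = lil_matrix((int(active.sum()), int(active.sum())))
    for i in range(h):
        for j in range(w):
            if not active[i, j]:
                continue
            for di, dj in ((0, 1), (1, 0), (1, 1), (1, -1)):
                ni, nj = i + di, j + dj              # immediate neighbor
                fi, fj = i + 2 * di, j + 2 * dj      # two steps away
                if 0 <= ni < h and 0 <= nj < w and active[ni, nj]:
                    A[ids[i, j], ids[ni, nj]] = pos  # same segment
                if (0 <= fi < h and 0 <= fj < w and active[fi, fj]
                        and edge_map[ni, nj]):
                    A[ids[i, j], ids[fi, fj]] = neg  # edge in between
    return A.tocsr()
```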
Given the affinity matrix, you should search for the segmentation with the best CC score (the optimization stage). I have Matlab code for this here. You can also use the excellent openGM package.
The optimization will result in a partition of the active pixels only. You can map it back to the input image domain, leaving the edge pixels unassigned to any segment.
Looking at the picture of the edge/non-edge pixels from the classifier, we can see that the gradient image of your input already basically gives the result of the classifier you have learned. But the confidence map shows a good solution, except that:
1. they are connected level sets with varying sizes;
2. you have noisy bright spots inside the cells that cause false outputs from the classifier (some smoothing could be considered);
3. it would probably be easier to characterize the interior of each cell: the grayscale variations and the average size. Learning these distributions would probably give you better detection results. Topologically, we have a set of low grayscale values nested inside larger grayscale values.
To perform this, one could use graph cuts with a GMM model for the unary costs and a learned gradient distribution for the pairwise term.
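As a rough illustration of the graph-cut + GMM idea, here is OpenCV's grabCut, which fits foreground/background GMM colour models internally and uses a contrast-sensitive pairwise term; it stands in for the learned distributions described above, and the file name and rectangle are placeholders:

```python
import cv2
import numpy as np

img = cv2.imread("cells.png")                # hypothetical input image
mask = np.zeros(img.shape[:2], np.uint8)
bgd_model = np.zeros((1, 65), np.float64)    # internal GMM state
fgd_model = np.zeros((1, 65), np.float64)

# Initialise with a rectangle assumed to contain the cell of interest.
rect = (10, 10, 200, 200)
cv2.grabCut(img, mask, rect, bgd_model, fgd_model, 5, cv2.GC_INIT_WITH_RECT)

# Definite/probable foreground labels form the segmentation.
segmentation = np.where(
    (mask == cv2.GC_FGD) | (mask == cv2.GC_PR_FGD), 255, 0).astype(np.uint8)
```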
I think your Hough transform is a good idea. One thing you should try (if you don't already) is to threshold your image before you run it through the transform, though the article I just linked only seems to cover binary thresholding. This can exaggerate the differences between the edges and the background, making them easier to detect. Basically, apply a function (a filter that operates on pixel values) to each pixel.
Another thing you can try is active contours. Basically, you lay down some circles and they move through the image until they find what you're looking for.
My last idea is to try a wavelet transform. These seem to work pretty well at picking out boundaries and borders in images. Hope these ideas can get you started.
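A minimal sketch combining the thresholding suggestion with the Hough circle transform the asker already tried (the file name and every parameter value are assumptions that need tuning):

```python
import cv2

img = cv2.imread("cells.png", cv2.IMREAD_GRAYSCALE)   # hypothetical input

# Binary threshold to exaggerate the edge/background difference.
_, binary = cv2.threshold(img, 128, 255, cv2.THRESH_BINARY)

# Hough circle transform on the thresholded image; minDist, param1/2,
# and the radius range depend heavily on the data.
circles = cv2.HoughCircles(binary, cv2.HOUGH_GRADIENT, dp=1, minDist=20,
                           param1=50, param2=15, minRadius=10, maxRadius=60)
if circles is not None:
    print(f"found {circles.shape[1]} circle candidates")
```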
