Difference between subsampling and downscaling (image)? - python

I know that there are many ways to upscale (interpolate) an image: bilinear, bicubic, and so on. These same algorithms can also be used to downscale an image. But when it comes to subsampling, I've come across only two methods: Gaussian-blur then discard some rows and columns, or block-averaging.
Is there such a thing as bilinear or bicubic subsampling?
Instead of Gaussian-blurring then discarding, or block-averaging, can/should I do Gaussian-blurring then block-averaging? Or bilinear/bicubic subsampling, if such a thing exists?
What I'm trying to do is build Gaussian and Laplacian image pyramids, but I'm not sure how to decimate and interpolate the image. Some Python code with SciPy/NumPy would really help. Thank you very much.
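In case it helps, here is a minimal sketch of the classic REDUCE (decimate) and EXPAND (interpolate) steps using scipy.ndimage. The Gaussian sigma and the factor of 4 in EXPAND are conventional choices, not requirements, and `image` is assumed to be your input array:

import numpy as np
from scipy import ndimage

def reduce_layer(img, sigma=1.0):
    # REDUCE: low-pass filter, then keep every other row and column
    return ndimage.gaussian_filter(img, sigma)[::2, ::2]

def expand_layer(img, shape, sigma=1.0):
    # EXPAND: upsample by zero-insertion, then low-pass filter;
    # multiply by 4 to compensate for the inserted zeros
    up = np.zeros(shape, dtype=float)
    up[::2, ::2] = img
    return ndimage.gaussian_filter(up, sigma) * 4

g0 = image.astype(float)                # Gaussian pyramid level 0
g1 = reduce_layer(g0)                   # next Gaussian pyramid level
lap0 = g0 - expand_layer(g1, g0.shape)  # Laplacian pyramid level 0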

Related

Image segmentation with a wide range of intensity values (Otsu thresholding)

I have raw microscopy images like this:
I want to segment the objects; as you can see, some of them are really close together and the intensity values span a wide range.
background: 700 a.u.
fluorescent shapes: from 7000 to 32000 a.u.
To segment them I use Otsu binary thresholding from the OpenCV package (without any prior processing of the image):
import cv2
# note: OpenCV's Otsu implementation requires an 8-bit single-channel image
thresh, imgthresh = cv2.threshold(image, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
The result is pretty good, but it still fails to detect the brightest shapes as individual objects.
I have tried a lot of things: the watershed algorithm, image preprocessing (blurring), eroding, adaptive thresholding. But nothing works properly, since the main problem is the difference in fluorescence values across the image.
Any smart idea on how to solve this?
Because your data have such a large range of intensity values, single-histogram methods applied to the whole image (e.g. Otsu) are going to have trouble with this task. I think your best bet is going to be either:
threshold_multiotsu: choose the number of classes based on the number of 'clusters' of intensities. Unfortunately, you will likely need to alter the number of classes on an image-by-image basis, so this isn't super robust.
threshold_local: I know you said you tried this, but you might revisit it and alter the block_size parameter until you get something that looks reasonable. Based on your example images (and assuming a little bit about why the objects in them are green), it looks like objects in close spatial proximity to one another generally have similar intensity values. Furthermore, you likely won't have to tweak the parameters as much as you would in option 1.
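Roughly, both options look like this with scikit-image (the parameter values here are only starting guesses to tune per dataset):

import numpy as np
from skimage.filters import threshold_multiotsu, threshold_local

# option 1: multi-Otsu; `classes` likely needs tuning per image
thresholds = threshold_multiotsu(image, classes=3)
regions = np.digitize(image, bins=thresholds)   # 0 = background, 1..n-1 = objects

# option 2: local thresholding; block_size must be odd
local_thresh = threshold_local(image, block_size=51, offset=0)
binary = image > local_thresh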
I suspect these will be the simplest and most straightforward approaches, but you could also delve into identifying the object edges using something from skimage.feature and then filling the objects. Maybe something like the blob detection example here: https://scikit-image.org/docs/stable/auto_examples/features_detection/plot_blob.html. This will be a bit more involved, but these methods should be more robust at identifying objects with widely varying intensity values.
If all else fails, you can try a couple of SOTA packages. The main ones I am thinking of are https://github.com/stardist/stardist and https://github.com/MouseLand/cellpose, but these seem like overkill for your example data here.

Remove halo around image before segmentation (preprocessing)

I'm trying to perform image segmentation using Morphological Snakes; however, the input images contain something like a halo that is messing with the output. Here you can see an example image, and its corresponding segmentation should look like this. As can be seen in the first image, the object of interest is the one in the center, but around it there is what I call a gray halo, and since the difference between the pixels is small, I haven't found a way to perform the segmentation.
I've tried preprocessing methods such as morphological operations and a Gaussian filter, but I didn't obtain the expected results. I've also tried to segment the image using adaptive thresholding and Otsu, but that didn't work either because, as I said before, the difference between the pixels is small (that's why I moved to morphological snakes, which work better but still not well enough for my case).
I want to know if there's any preprocessing method that could help in my case; maybe I haven't found the correct combination of them. I need to remove or clean the halo, or to highlight the object in the center.
Thanks in advance to any who could help me.
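One preprocessing idea worth trying (a suggestion, not something from the question itself): estimate the smooth halo with a very wide Gaussian blur and subtract it, which flattens slowly varying background while keeping the sharper central object. A minimal sketch with scikit-image, assuming `image` is a grayscale array:

import numpy as np
from skimage.filters import gaussian

img = image.astype(float)
background = gaussian(img, sigma=50)      # sigma much larger than the object
flattened = img - background              # halo suppressed, object kept
flattened = np.clip(flattened, 0, None)   # discard negative residuals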

Histogram equalization (or stretching) of RGB picture in python

I am automatically creating JPG pictures from multispectral data. The created pictures are very dark, so I thought the best idea would be to change the brightness (like Image.Enhance in PIL). But there was a problem: some pictures need more brightness than others.
So the next idea was to try linear stretching of the histogram. I created a script which iterates over the RGB tuples and computes a new intensity for each pixel. It made very little difference, probably because the range of values was always 0-255. Then I tried histogram equalization (ImageOps) on R, G and B separately, but the result was not good; please see the middle part of the picture. I found on the internet that this is not a good approach because the colors can change dramatically, which is probably my case.
The best idea seems to be to convert the RGB array to HSL and then adjust the luminance, but I can't use a constant to maximize the luminance because the pictures are different and each needs a different constant. Should I use histogram equalization on the luminance? What is the best approach: stretching, or, probably better, histogram equalization of my picture?
I am looking for something like Image/Auto adjust colors in IrfanView, or what some software calls Linear Normalization...
I hope the picture helps you understand my problem. I probably chose a bad way to achieve my goal.
Thank you for any answer.
EDIT
Left image for download
I can upload more images later today.
I would suggest proceeding with the same approach you have stated, with a slight modification:
Convert the RGB image to LAB image.
Apply localized histogram equalization to the L-channel.
Merge it back with the other channels.
Convert it back to RGB image.
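A minimal OpenCV sketch of those four steps (assuming an 8-bit BGR image; the CLAHE parameters and file names are just placeholders):

import cv2

img = cv2.imread("input.jpg")                                # hypothetical file name
lab = cv2.cvtColor(img, cv2.COLOR_BGR2LAB)                   # step 1: RGB -> LAB
l, a, b = cv2.split(lab)
clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))  # step 2: localized equalization
l_eq = clahe.apply(l)                                        # equalize the L-channel only
lab_eq = cv2.merge((l_eq, a, b))                             # step 3: merge channels back
out = cv2.cvtColor(lab_eq, cv2.COLOR_LAB2BGR)                # step 4: back to RGB
cv2.imwrite("output.jpg", out)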
You can check my answer for this in a different question here:
The code I have there is written for OpenCV using Python. You can modify it for the C language if you wish.
Let me know if it has helped you!!
I am not sure if this applies, and I have not tried it myself, but I was reading this article about underwater contrast stretching:
http://www.iaeng.org/IJCS/issues_v34/issue_2/IJCS_34_2_12.pdf
What it suggests might help:
"In order to address the issues discussed above, we propose an approach based on slide stretching. Firstly, we use contrast stretching of RGB algorithm to equalize the colour contrast in the images. Secondly, we apply the saturation and intensity stretching of HSI to increase the true colour and solve the problem of lighting"

Correlate a large image with a kernel in python using numpy/scipy

I have an image (10000x10000 pixels) and I have a kernel (5x5 pixels). I want to find the place(s) in the image that best matches the kernel.
I vaguely remember from my studies that I need to compute a correlation coefficient for each pixel in the large image with respect to the kernel. But with something like (10000 - 4) * (10000 - 4) pixels to go through, I expect to get a huge performance hit doing this in pure Python.
Having only very brief knowledge of the subject, I was hoping to find something in either numpy or scipy that would do this relatively fast, but I haven't been able to find anything.
Does either numpy or scipy contain a method for doing this?
This is usually referred to as template matching in image processing and most image processing packages will have something for it. If you can use scikit-image then you probably want match_template. Of course, OpenCV can do template matching too.
If you need to stick with pure scipy, it's easy enough to implement yourself: just find the maximum pixel (argmax) of a normalized cross-correlation (correlate2d).
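For illustration, a short sketch with scikit-image's match_template (the `image` and `kernel` arrays here are assumed to be 2-D grayscale):

import numpy as np
from skimage.feature import match_template

result = match_template(image, kernel)   # normalized cross-correlation map
y, x = np.unravel_index(np.argmax(result), result.shape)
# (y, x) is the top-left corner of the best-matching 5x5 window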

How do I fill "holes" in an image?

I have photo images of galaxies. There is some unwanted data on these images (like stars or aeroplane streaks) that is masked out. I don't want to just fill the masked areas with some mean value, but to interpolate them according to the surrounding data. How do I do that in Python?
We've tried various functions in the scipy.interpolate package: RectBivariateSpline, interp2d, splrep/splev, map_coordinates, but all of them seem to work only for finding new pixels between existing pixels; we were unable to make them fill an arbitrary "hole" in the data.
What you want is called Inpainting.
OpenCV has an inpaint() function that does what you want.
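A minimal usage sketch (assuming `img` is an 8-bit image and `mask` is an 8-bit single-channel array that is nonzero on the masked pixels; the radius and the TELEA method are just common defaults):

import cv2

restored = cv2.inpaint(img, mask, 3, cv2.INPAINT_TELEA)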
What you want is not interpolation at all. Interpolation depends on the assumption that data between known points is roughly continuous. In any non-trivial image, this will not be the case.
You actually want something like the content-aware fill that is in Photoshop CS5. There is a free alternative available in The GIMP through the GIMP-resynthesize plugin. These filters are extremely advanced and to try to re-implement them is insane. A better choice would be to figure out how to use GIMP-resynthesize in your program instead.
I made my first GIMP Python script that might help you:
my scripts
It is called a conditional filter, as it is a matrix filter that fills every transparent pixel in an image with the mean value of its 4 nearest non-transparent neighbours.
Be sure to use an RGBA image with only 0 and 255 alpha values.
It is rough, simple, slow and unoptimized, but bug-free.
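For reference, a rough NumPy equivalent of that neighbour-averaging fill (an illustration only, not the GIMP script itself):

import numpy as np

def fill_transparent(rgba):
    # Iteratively fill fully transparent pixels (alpha == 0) with the mean
    # of their already-known 4-neighbours, sweeping until none remain.
    rgb = rgba[..., :3].astype(float)
    known = rgba[..., 3] == 255
    h, w = known.shape
    while not known.all():
        filled_any = False
        for y, x in zip(*np.where(~known)):
            vals = [rgb[j, i] for j, i in ((y-1, x), (y+1, x), (y, x-1), (y, x+1))
                    if 0 <= j < h and 0 <= i < w and known[j, i]]
            if vals:
                rgb[y, x] = np.mean(vals, axis=0)
                known[y, x] = True
                filled_any = True
        if not filled_any:   # no opaque pixels at all; avoid looping forever
            break
    return rgb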
