Heatmap on top of gray-scale images in python

Simple and straightforward question: I need to plot a heatmap on top of a gray-scale image in Python, like the one below.
I come from MATLAB. There I would simply generate the probability map, pass it through a colormap, and blend the result into the gray-scale picture.
I could do the same in Python, though I feel it is not the most "pythonic" way to proceed (with NumPy). Do you know of any particular package that handles this kind of visualization?
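For reference, a minimal matplotlib sketch of such an overlay (the file name and the probability map here are placeholders; alpha controls how strongly the heatmap covers the image):

    import numpy as np
    import matplotlib.pyplot as plt

    # Hypothetical inputs: a gray-scale image and a probability map
    # with the same height and width.
    gray = plt.imread("image.png")              # placeholder file name
    prob_map = np.random.rand(*gray.shape[:2])  # placeholder probability map

    plt.imshow(gray, cmap="gray")
    # Draw the heatmap on top with partial transparency so the
    # underlying image stays visible.
    plt.imshow(prob_map, cmap="jet", alpha=0.4)
    plt.axis("off")
    plt.show()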

Related

Image segmentation with a wide range of intensity values (Otsu thresholding)

I have raw microscopy images like this:
I want to segment the objects; as you can see, some of them are really close together and there is a large range of intensity values:
background: 700 a.u.
fluorescent shapes: from 7000 to 32000 a.u.
To segment them I use Otsu binary thresholding via OpenCV (without prior processing of the image):
thresh, imgthresh = cv2.threshold(image, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
The result is pretty good, but it still fails to detect the brightest shapes as individual objects.
I have tried a lot of things: the watershed algorithm, image preprocessing (blurring), erosion, adaptive thresholding, but nothing works properly, since the main problem is the variation in fluorescence values across the image.
Any smart idea on how to solve this?
Because your data have such a large range of intensity values, histogram-based methods applied to the whole image at once (e.g. Otsu) are going to have trouble with this task. I think your best bet is going to be one of the following:
threshold_multiotsu: choose the number of classes based on the number of 'clusters' of intensities. Unfortunately, you will likely need to alter the number of classes on an image-by-image basis, so this isn't super robust.
threshold_local: I know you said that you tried this, but you might revisit it and alter the block_size parameter until you get something that looks reasonable. Based on your example images (and assuming a little bit about why the objects in them are green), it looks like objects in close spatial proximity to one another generally have similar intensity values. Furthermore, you likely won't have to tweak the parameters as much as you would in option 1. A minimal sketch of both options follows below.
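A rough sketch of both options with scikit-image (the file name and parameter values are assumptions to tune against your data; a single-channel image is assumed):

    import numpy as np
    from skimage import io
    from skimage.filters import threshold_multiotsu, threshold_local

    image = io.imread("microscopy.tif")  # placeholder file name

    # Option 1: multi-Otsu; pick `classes` per image based on the
    # number of intensity clusters you observe.
    thresholds = threshold_multiotsu(image, classes=3)
    regions = np.digitize(image, bins=thresholds)

    # Option 2: local thresholding; tune block_size (must be odd)
    # until the segmentation looks reasonable.
    local_thresh = threshold_local(image, block_size=51)
    binary = image > local_thresh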
I suspect these will be the simplest and most straightforward approaches, but you could also delve into identifying the object edges with something from skimage.feature and then filling the objects. Maybe something like the blob detection outlined here: https://scikit-image.org/docs/stable/auto_examples/features_detection/plot_blob.html. This will be a bit more involved, but these methods should be more robust at identifying objects with widely varying intensity values.
If all else fails, you can try a couple of state-of-the-art (SOTA) packages. The main ones I am thinking of are https://github.com/stardist/stardist and https://github.com/MouseLand/cellpose, but these seem like a bit of overkill based on your example data here.

How to generate bounding box data from heatmap data of an image?

I have a group of images and some separate heatmap data which (imperfectly) indicates where the subject of each image is. The heatmap data is in a numpy array with shape (224, 224, 3). I would like to generate bounding box data from this heatmap data.
The heatmaps are not always perfect, so I'm wondering if anyone can think of an intelligent way to do this.
Here are some examples of what happens when I apply the heatmap data to the image:
I found a solution to this in MATLAB, but I have no idea how to read that code! I am a Python programmer, unfortunately.
https://github.com/metalbubble/CAM/tree/master/bboxgenerator
Anyone have any ideas about how to approach something like this?
I am not quite sure what the heatmap data of your project looks like exactly, but it seems to me that you could use something like Selective Search. You could also have a look at this interesting paper. Maybe you can use this approach on your dataset.
I'm attempting a similar method for automating the creation of bounding boxes (since, let's face it, creating boxes manually takes a long time).
This other Stack Overflow post covers a similar idea:
EDIT: (I originally put a link to the current post 🤦; here is the post I was actually referring to:)
Generating bounding boxes from heatmap data
The problem I recognize here is that heatmaps can be fragmented and a bit arbitrary. The solution that comes to mind initially is setting a threshold on the heatmap: in the case of the example heatmap images, the bounding boxes would cover all regions that are yellow/orange/red rather than green/blue.
It depends on how many bounding boxes you need. You can set a threshold and have one bounding box for each of the highly activated regions, or try connecting the regions (by a morphological operation, maybe) and calculate a single bounding box for the connected activated pixels. A rough sketch of this threshold-and-connect approach follows.
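For illustration, a minimal OpenCV sketch of that idea (the file name, the 50% threshold, and the kernel size are assumptions to tune against your heatmaps):

    import numpy as np
    import cv2

    # Hypothetical input: a (224, 224, 3) heatmap; collapse it to one
    # channel and keep only the highly activated pixels.
    heatmap = np.load("heatmap.npy")  # placeholder file name
    gray = heatmap.mean(axis=2).astype(np.float32)
    mask = (gray > 0.5 * gray.max()).astype(np.uint8) * 255

    # Optionally bridge nearby fragments before boxing them.
    mask = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, np.ones((5, 5), np.uint8))

    # One bounding box per connected activated region.
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    boxes = [cv2.boundingRect(c) for c in contours]  # (x, y, w, h) tuples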

Histogram equalization (or stretching) of RGB picture in python

I am automatically creating JPG pictures from multispectral data. The created pictures are very dark, so I thought the best idea would be to change the brightness (like ImageEnhance in PIL). But there was a problem: some pictures need more brightness than others.
So the next idea was to try linear stretching of the histogram. I created a script which iterates over the RGB tuples and computes a new intensity for each pixel. There was very little difference, probably because the range of values was already 0-255 each time. Then I tried histogram equalization (ImageOps) on R, G and B separately, but the result was not good; please see the middle part of the picture. I found on the internet that this is not a good approach because the colors can change dramatically, which is probably my case.
The best idea seems to be to convert the RGB array to HSL and then change the luminance, but I can't use a constant to maximize the luminance because the pictures are different and each needs a different constant. Should I use histogram equalization on the luminance, or what is the best approach: stretching, or (probably better) histogram equalization of my picture?
I am looking for something like Image/Auto adjust colors in IrfanView, or what some software calls Linear Normalization...
I hope the picture helps you understand my problem. I probably chose a bad way to achieve my goal.
Thank you for any answer; I will be very glad.
EDIT
Left image for download
I can upload the other images later today.
I would suggest proceeding with the same approach as you have stated, with a slight modification:
Convert the RGB image to LAB image.
Apply localized histogram equalization to the L-channel.
Merge it back with the other channels.
Convert it back to RGB image.
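A minimal OpenCV sketch of these four steps (taking CLAHE as the localized equalization; the clip limit, tile size, and file names are assumptions):

    import cv2

    img = cv2.imread("dark.jpg")  # placeholder file name

    # Convert BGR (OpenCV's channel order) to LAB and split the channels.
    lab = cv2.cvtColor(img, cv2.COLOR_BGR2LAB)
    l, a, b = cv2.split(lab)

    # Localized histogram equalization (CLAHE) on the lightness channel only.
    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
    l_eq = clahe.apply(l)

    # Merge back and return to BGR for saving or display.
    result = cv2.cvtColor(cv2.merge((l_eq, a, b)), cv2.COLOR_LAB2BGR)
    cv2.imwrite("equalized.jpg", result)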
You can check my answer for this in a different question here:
The code I have there is written for OpenCV in Python. You can modify it for C if you wish.
Let me know if it has helped you!
I am not sure if this applies, and I have not tried it myself, but I was reading this article about underwater contrast stretching:
http://www.iaeng.org/IJCS/issues_v34/issue_2/IJCS_34_2_12.pdf
What it suggests might help:
"In order to address the issues discussed above, we propose
an approach based on slide stretching. Firstly, we use contrast
stretching of RGB algorithm to equalize the colour contrast in
the images. Secondly, we apply the saturation and intensity
stretching of HSI to increase the true colour and solve the
problem of lighting"

How to automatically classify images by dominant color?

I have many (tens of thousands of) fairly large JPG images. Each is an image of an index card. Most of them are white, but some have standard index-card colors (these colors).
The colors correspond to data attributes, so I would like to programmatically classify these cards by color. I know it's possible to extract the dominant color from images in a web browser using a canvas element and an algorithm like Color Thief, and it works: it gives me an RGB value, which is enough to bin the cards.
But I can’t see how I could run such a thing through a web browser on so many images.
I’m wondering if anyone can recommend a command-line tool, perhaps a Python or Ruby module, that could do something similar.
This appears to strongly overlap with this question.
If I were in your situation, though, there are a couple of things I would try. First, any one of several Python libraries can do this: OpenCV, Mahotas, PIL, or scikit-image can all probably handle it, since you really just need to extract average RGB values.
Write a script to load each image and append its average RGB value to a list. You can then use built-in methods from scikit-learn to perform either mean-shift or k-means clustering on this big list of colors. Since you likely know how many different colors there are, k-means is a good choice. Alternatively, if you already know the RGB values of the standard colors, then for each image you load, just compute the average RGB value and "classify" that image according to whichever standard color's RGB vector is closest. A sketch of that nearest-color variant is below.
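A small sketch of the nearest-color approach with PIL and NumPy (the color table here is made up; substitute the actual index-card RGB values):

    import numpy as np
    from PIL import Image

    # Hypothetical standard index-card colors (RGB); replace with real ones.
    STANDARD_COLORS = {
        "white":  (255, 255, 255),
        "yellow": (255, 244, 140),
        "blue":   (164, 198, 230),
        "pink":   (244, 170, 180),
    }

    def classify_card(path):
        """Label one card by the standard color nearest its mean RGB."""
        pixels = np.asarray(Image.open(path).convert("RGB"), dtype=float)
        mean = pixels.reshape(-1, 3).mean(axis=0)
        # Nearest neighbor in plain RGB space.
        return min(STANDARD_COLORS,
                   key=lambda name: np.linalg.norm(mean - STANDARD_COLORS[name]))

    print(classify_card("card_0001.jpg"))  # placeholder file name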

How do I fill "holes" in an image?

I have photo images of galaxies. There is some unwanted data on these images (like stars or aeroplane streaks) that is masked out. I don't just want to fill the masked areas with some mean value, but to interpolate them according to the surrounding data. How do I do that in Python?
We've tried various functions in the scipy.interpolate package: RectBivariateSpline, interp2d, splrep/splev, map_coordinates, but all of them seem to work by finding new pixels between existing pixels; we were unable to make them fill an arbitrary "hole" in the data.
What you want is called inpainting.
OpenCV has an inpaint() function that does what you want.
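A minimal sketch of cv2.inpaint (file names are placeholders; the mask must be a single-channel image whose nonzero pixels mark the areas to fill):

    import cv2

    img = cv2.imread("galaxy.png")                       # placeholder file name
    mask = cv2.imread("mask.png", cv2.IMREAD_GRAYSCALE)  # nonzero = pixels to fill

    # Fill the masked regions from the surrounding data. inpaintRadius is the
    # neighborhood each point considers; cv2.INPAINT_NS (Navier-Stokes) is the
    # alternative to the Telea method used here.
    result = cv2.inpaint(img, mask, inpaintRadius=3, flags=cv2.INPAINT_TELEA)
    cv2.imwrite("filled.png", result)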
What you want is not interpolation at all. Interpolation depends on the assumption that the data between known points is roughly continuous. In any non-trivial image, this will not be the case.
You actually want something like the content-aware fill found in Photoshop CS5. A free alternative is available in The GIMP through the GIMP-resynthesize plugin. These filters are extremely advanced, and trying to re-implement them is insane. A better choice would be to figure out how to use GIMP-resynthesize from your program instead.
I made my first GIMP Python script, which might help you:
my scripts
It is called a conditional filter, as it is a matrix filter that fills every transparent pixel of an image with the mean value of its 4 nearest non-transparent neighbours.
Be sure to use an RGBA image whose alpha values are only 0 or 255.
It is rough, simple, slow, and unoptimized, but bug-free. A rough re-implementation of the idea is sketched below.
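For illustration, a plain NumPy sketch of that conditional filter (assuming an (H, W, 4) uint8 RGBA array; like the original, slow but straightforward):

    import numpy as np

    def conditional_fill(rgba):
        """Fill fully transparent pixels (alpha == 0) with the mean RGB of
        their non-transparent 4-neighbours, sweeping until no holes remain."""
        out = rgba.astype(float)
        h, w = out.shape[:2]
        while True:
            holes = np.argwhere(out[:, :, 3] == 0)
            if holes.size == 0:
                break
            filled_any = False
            for y, x in holes:
                neighbours = [out[ny, nx, :3]
                              for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1))
                              if 0 <= ny < h and 0 <= nx < w and out[ny, nx, 3] == 255]
                if neighbours:
                    out[y, x, :3] = np.mean(neighbours, axis=0)
                    out[y, x, 3] = 255
                    filled_any = True
            if not filled_any:
                break  # no opaque neighbours left anywhere; give up
        return out.astype(np.uint8)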
