I have two microscope images: one in grayscale and one in red. I can merge these in Photoshop, ImageJ, etc., but I want to merge them in OpenCV so I can perform the operation on many samples.
So far I've done the following (where dia = grayscale and epi = red):
import cv2

# Load images (input_dia and input_epi are the file paths)
img_dia = cv2.imread(input_dia)
img_epi = cv2.imread(input_epi)
# Slice the red channel from the fluorescence image
b_epi, g_epi, r_epi = cv2.split(img_epi)
# Now I want to merge the grayscale and red images
There are no error messages; I just could not find any documentation or other Stack Exchange posts that resolve this. Any help would be appreciated!
There are many ways of overlaying and blending images, but I think the nearest result to yours is probably screen-mode blending.
You can get what you want quite simply with ImageMagick which is included in most Linux distros and is available for macOS and Windows. So, just in Terminal (or Command Prompt on Windows), you can run the following without needing to write any code:
magick grey.tif red.tif -compose screen -composite result.png
If that is not the exact blend mode you want, there are 69 modes available in ImageMagick and you can see them all listed if you run:
magick -list compose
So, if you want Hard Light blending, use:
magick grey.tif red.tif -compose HardLight -composite result.png
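Since you want to do this in OpenCV, here is a rough NumPy sketch of the same screen blend (my own translation of the screen formula, not ImageMagick output; it assumes the fluorescence image carries its signal in the red channel):

import cv2
import numpy as np

gray = cv2.imread('grey.tif', cv2.IMREAD_GRAYSCALE)
red = cv2.imread('red.tif')[:, :, 2]                 # red channel of the fluorescence image

# Build 3-channel BGR versions of each layer
gray_bgr = cv2.merge([gray, gray, gray])
red_bgr = cv2.merge([np.zeros_like(red), np.zeros_like(red), red])

# Screen blend: result = 255 - (255 - a) * (255 - b) / 255
a = gray_bgr.astype(np.float32)
b = red_bgr.astype(np.float32)
screen = 255.0 - (255.0 - a) * (255.0 - b) / 255.0

cv2.imwrite('result.png', screen.astype(np.uint8))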
You alluded to having lots of images to process; if so, you can get thousands done in parallel just by using GNU Parallel. You can search for answers on Stack Overflow that use ImageMagick and GNU Parallel by putting the following in the Stack Overflow search box, including the square brackets:
[imagemagick] [gnu-parallel]
Or, provide some more detail on how your files are named and stored and I can help you further.
Considering input_epi is an RGB image (3 channels): when you load an image into OpenCV it is loaded as BGR by default.
import cv2
import numpy as np

# colour image (loaded as BGR)
img_epi = cv2.imread('input_epi.jpg')
# red channel (index 2 in BGR order)
red_epi = img_epi[:,:,2]
# gray image (flag 0 = load as grayscale)
img_dia = cv2.imread('input_dia.jpg',0)
# create a resultant image for combining only two channels
resultant_image = np.ones((img_dia.shape[0],img_dia.shape[1],2),np.uint8)
# merge the two channels into a single two-channel array
# first channel: red
resultant_image[:,:,0] = red_epi
# second channel: gray
resultant_image[:,:,1] = img_dia
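Note that a two-channel array cannot be written out or displayed as a normal image. If you want a viewable composite, one option (my own suggestion, continuing from the snippet above, not part of the original approach) is to put the grayscale image in all three BGR channels and then combine the red channel with the fluorescence signal:

# viewable composite: gray as the base, red channel boosted by the fluorescence signal
overlay = cv2.merge([img_dia, img_dia, img_dia])        # gray -> B, G, R
overlay[:,:,2] = cv2.max(overlay[:,:,2], red_epi)       # keep the brighter of gray/red
cv2.imwrite('merged.png', overlay)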
From "Splitting and Merging Image Channels" in the OpenCV documentation:
Warning: cv2.split() is a costly operation (in terms of time). So do it only if you need it. Otherwise go for NumPy indexing.
Actual red colour is tricky to pinpoint in an RGB image; it is better to convert to HSV and extract a particular colour range from the image.
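For example, a rough sketch of extracting a red range via HSV (the hue/saturation/value bounds below are common choices but are assumptions to tune for your images):

import cv2
import numpy as np

img = cv2.imread('input_epi.jpg')
hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)

# Red wraps around the hue axis, so it needs two ranges on OpenCV's 0-179 hue scale
mask1 = cv2.inRange(hsv, (0,   50, 50), (10,  255, 255))
mask2 = cv2.inRange(hsv, (170, 50, 50), (179, 255, 255))
red_mask = cv2.bitwise_or(mask1, mask2)

# Keep only the red parts of the original image
red_only = cv2.bitwise_and(img, img, mask=red_mask)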
I have a uint16 3-dimensional numpy array representing an RGB image; the array is created from a TIF image.
The problem is that when I open the original image in QGIS, for example, it is displayed correctly, but if I try to display it within Python (with plt.imshow) the result is different (in this case more green):
QGIS image:
Plot image:
I think it is somehow related to the way matplotlib handles uint16, but even if I try to divide by 255 and convert to uint8 I can't get good results.
Going by your comment, the image isn't encoded using an RGB colour space, since the R, G and B channels have a value range of [0-255] assuming 8 bits per channel.
I'm not sure exactly which colour space the image is using, but TIFF files generally use CMYK which is optimised for printing.
Other common colour spaces to try include YCbCr (YUV) and HSL, however there are lots of variations of these that have been created over the years as display hardware and video streaming technologies have advanced.
To convert the entire image to an RGB colour space, I'd recommend the opencv-python pip package. The package is well documented, but as an example, here's how you would convert a numpy array img from YUV to RGB:
import cv2 as cv
img_rgb = cv.cvtColor(img, cv.COLOR_YUV2RGB)
When using plt.imshow there's a colormap parameter you can play with; try adding cmap="gray", for example:
plt.imshow(image, cmap="gray")
source:
https://matplotlib.org/3.1.1/api/_as_gen/matplotlib.pyplot.imshow.html
If I try to normalize the image I get good results; for every channel i:
image[i,:,:] = image[i,:,:] / image[i,:,:].max()
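Put together, a minimal sketch of that normalisation for display might look like this (it assumes a channel-first array, matching the image[i,:,:] indexing above; the random array is just a stand-in for your own data):

import numpy as np
import matplotlib.pyplot as plt

# Replace this with your own uint16 array of shape (3, height, width)
image = np.random.randint(0, 65535, (3, 256, 256), dtype=np.uint16)

norm = image.astype(np.float64)
for i in range(norm.shape[0]):                  # normalise each channel to [0, 1]
    norm[i, :, :] = norm[i, :, :] / norm[i, :, :].max()

# matplotlib expects (height, width, 3) floats in [0, 1] for RGB display
plt.imshow(np.transpose(norm, (1, 2, 0)))
plt.show()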
However, some images appear darker than others:
different images
I want to extract the person in an image without any background. I want to do this for multiple images of the same kind. Please help me do this using Python so the process can be automated.
https://depositphotos.com/148319285/stock-video-man-run-on-green-screen.html
I've tried using a Canny edge detector; it could only find edges but could not crop out the person. Is there an alternative way to detect the background and remove it completely?
Background subtraction using OpenCV. Check the link for examples and details: https://opencv-python-tutroals.readthedocs.io/en/latest/py_tutorials/py_video/py_bg_subtraction/py_bg_subtraction.html
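For reference, a minimal sketch of that approach (it assumes a video or sequence of frames, such as the linked green-screen clip, since the subtractor needs several frames to learn the background; the file name is hypothetical):

import cv2

cap = cv2.VideoCapture('green_screen.mp4')      # hypothetical video file
subtractor = cv2.createBackgroundSubtractorMOG2()

while True:
    ok, frame = cap.read()
    if not ok:
        break
    mask = subtractor.apply(frame)              # 255 where the foreground (person) is
    person = cv2.bitwise_and(frame, frame, mask=mask)
    cv2.imshow('person', person)
    if cv2.waitKey(30) == 27:                   # Esc to quit
        break

cap.release()
cv2.destroyAllWindows()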
In the case of such prepared images, you might find simple thresholding useful: https://docs.opencv.org/3.4.0/d7/d4d/tutorial_py_thresholding.html In your case you would need a grayscale image, obtained by extracting the green channel from the image. Thresholding would produce a binary (black and white only) image with white for the background and black for the object (or the reverse, depending on your choice).
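A rough sketch of that idea (the threshold value of 200 on the green channel is an assumption to tune; the file name is hypothetical):

import cv2

img = cv2.imread('person_green_screen.jpg')     # hypothetical input image
green = img[:, :, 1]                            # green channel (BGR order)

# Pixels with a very bright green channel are treated as background
_, background = cv2.threshold(green, 200, 255, cv2.THRESH_BINARY)
foreground_mask = cv2.bitwise_not(background)

person = cv2.bitwise_and(img, img, mask=foreground_mask)
cv2.imwrite('person_only.png', person)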
What functions should I use (and how should I use them) to crop out the center part of this image? I want to take just the less-dense parts, not the dense borders.
Thanks!
In the end, I want to either count the tiny circles/dots (cells) in the areas or calculate the area of the less-dense parts, outlined in the second image. I've done this before with ImageJ by tracing out the area by hand, but it is a really tedious process with lots of images.
Original
Area traced
I've looked at SciPy so far, but it is big and I don't really know how to approach this. If someone could point me in the right direction, that would be great!
It would take me a bit longer to do in Python, but I tried a few ideas on the command line with ImageMagick, which is installed on most Linux distros and is available for free for macOS and Windows.
First, I trimmed your image to get rid of extraneous junk:
Then, the steps I did were:
discarded the alpha/transparency channel
converted to greyscale, as there is no useful colour information
normalised to stretch the contrast so all pixels cover the range 0-255
thresholded at 50% to find the cells
replaced each pixel by the mean of its surrounding 49x49 pixels (a box blur)
thresholded again at 90%
That command looks like this in Terminal/Command Prompt:
convert blobs.png -alpha off -colorspace gray -normalize -threshold 50% -statistic mean 49x49 -threshold 90% result.png
The result is:
If that approach looks promising for your other pictures we can work out a Python version pretty quickly, so let me know.
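As a rough starting point, a minimal OpenCV sketch of the same pipeline might look like this (the kernel size and thresholds mirror the command above, but treat them as assumptions to tune):

import cv2

img = cv2.imread('blobs.png', cv2.IMREAD_GRAYSCALE)   # drop alpha, go straight to grey

# Normalise to stretch the contrast over the full 0-255 range
norm = cv2.normalize(img, None, 0, 255, cv2.NORM_MINMAX)

# Threshold at 50% to find the cells
_, cells = cv2.threshold(norm, 127, 255, cv2.THRESH_BINARY)

# Replace each pixel by the mean of its 49x49 neighbourhood (box blur)
blurred = cv2.blur(cells, (49, 49))

# Threshold again at 90%
_, result = cv2.threshold(blurred, 229, 255, cv2.THRESH_BINARY)

cv2.imwrite('result.png', result)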
Of course, if you know other useful information about your image that could help improve things... maybe you know the density is always higher at the edges, for example.
In case anyone wants to see the intermediate steps, here is the image after grey scaling and normalising:
And here it is after blurring:
Assume we read and load an image using OpenCV from a specific location on our drive, and then we read some pixel values and colors; let's also assume this is a scanned image.
Usually if we open a scanned image we will notice some differences between the printed image (before scanning) and the image as we see it on the display screen.
The question is:
Are the pixel color values we get from OpenCV in our display screen's color space, or do we get exactly the same colors as in the scanned image (printed version)?
I am not sure what you want to do or achieve, but here's one thing to mention about color profiles.
The most common color profile for cameras, screens and printers is sRGB, a limited color spectrum that does not include the whole RGB range (because cheap hardware can't visualize it anyway).
Some cameras (and probably scanners) allow you to use different color profiles like AdobeRGB, which increases the color space and "allows" more colors.
The problem is, if you capture (e.g. scan) an image in AdobeRGB color profile, but the system (browser/screen/printer) interprets it as sRGB, you'll probably get washed out colors, just because of wrong interpretation (like you'll get blue faces in your image, if you interpret BGR images as RGB images).
OpenCV and many browsers, printers, etc. always interpret images as sRGB images, according to http://fotovideotec.de/adobe_rgb/
As long as you don't change the image file itself, the pixel values don't change, because they are what is stored in the file; your display or printer is just a way of viewing the image, and you often don't get the same result because it depends on the technology and on the filters applied to the image before it is displayed or printed.
The pixel values are the ones you read in with imread, and they depend on the flags you set for it. The original image may have a greater bit depth (depending on your scanner) than the one you loaded.
Also, the actual file format is determined from the first bytes of the file, not by the file name extension.
So the values may not match the pixel values of the scanned image if the bit depths differ.
Please have a look at the imread documentation.
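For example, a small sketch of how the flag changes what you get back (assuming a 16-bit scan saved as scan.tif, a hypothetical file name):

import cv2

# Default flag: converted to 8-bit, 3-channel BGR regardless of the file's bit depth
img_default = cv2.imread('scan.tif')
print(img_default.dtype, img_default.shape)     # e.g. uint8, (h, w, 3)

# IMREAD_UNCHANGED: keep the file's original bit depth and channel count
img_raw = cv2.imread('scan.tif', cv2.IMREAD_UNCHANGED)
print(img_raw.dtype, img_raw.shape)             # e.g. uint16 for a 16-bit scan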
How can I convert the pixels of an image from square to hexagonal? In doing so I need to extract the RGB values from each hexagonal pixel. Is there any library or function that simplifies this process?
Example: Mona Lisa Hexagonal Pixel Shape
Nothing tried yet. Thanks!
Here's a possible approach, though I am sure if you are able to write code to read, manipulate and use pixels from a file format that hasn't been invented yet, you should be able to create that file yourself ;-)
You could generate a hexagonal grid, using ImageMagick which is installed on most Linux distros and is available for OSX and Windows. Here, I am just doing things at the command-line in the Terminal, but there are Python, Perl, PHP, .Net, C/C++ and other bindings too - so take your pick.
First make a grid of hexagons - you'll have to work out the size you need, mine is arbitrary:
convert -size 512x256 pattern:hexagons hexagons.png
Now, fill in the hexagons, each with a different colour, I am just doing some examples of flood-filling here to give you the idea. Ideally, you would colour the first (top-left) hexagon with colour #000 and the next one across with #001 so that you could iterate through the coordinates of the output image as consecutive colours. Also, depending on your output image size, you may need to use a 32-bit PNG to accommodate the number of hexels (hexagonal pixels).
convert hexagons.png \
-fill red -draw "color 100,100 floodfill" \
-fill blue -draw "color 200,200 floodfill" \
colouredmask.png
Now iterate through all the colours, making every colour except that colour transparent. Note that I have added a black border just so you can see the context on StackOverflow's white background:
convert colouredmask.png -fill none +opaque red onecell.png
Now mask the original image with that mask and get the average colour of that one cell and write it to your yet-to-be-invented file format. Repeat for all cells/colours.
Note that the basic hexagon pattern is 30x18, so you should size your grid as multiples of that for it to tessellate properly.
Note that if you have lots of these to process, you should consider using something like GNU Parallel to take advantage of multiple cores. So, if you make a script called ProcessOneImage and you have 2,000 images to do, you would use:
parallel ProcessOneImage ::: *.png
and it will keep, say, 8 jobs running at all times if your PC has 8 cores. There are many more options; try man parallel.
Fred has an ImageMagick script on his site that may do what you want: STAINEDGLASS
First of all, I think there is no ready-made function to perform the lattice conversion, so you may need to implement the conversion process yourself.
The lattice conversion is a re-sampling process, and it is also an interpolation process. Many algorithms have been developed in the hexagonal image processing literature.
Please see the example:
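To make the idea concrete, here is a minimal sketch of naive nearest-neighbour re-sampling onto a hexagonal lattice (my own illustration, not the example referenced above; real methods from the literature interpolate rather than taking the nearest pixel, and the file name is hypothetical):

import numpy as np
import cv2

def hex_lattice_samples(img, spacing=12):
    # Sample the image at the centres of a hexagonal lattice.
    # Returns a list of (x, y, (b, g, r)) tuples, one per lattice point.
    h, w = img.shape[:2]
    dy = spacing * np.sqrt(3) / 2            # vertical distance between rows
    samples = []
    row, y = 0, 0.0
    while y < h:
        x = spacing / 2 if row % 2 else 0.0  # offset every other row by half a cell
        while x < w:
            b, g, r = img[int(y), int(x)]
            samples.append((int(x), int(y), (int(b), int(g), int(r))))
            x += spacing
        y += dy
        row += 1
    return samples

img = cv2.imread('mona_lisa.png')            # hypothetical input file
points = hex_lattice_samples(img)
print(len(points), points[:3])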