Unexpected results in scipy.ndimage.gaussian_laplace. Improper 2D processing - python

I am trying to use the gaussian_laplace filter to process images in Python, but I can't figure out how to specify the kernel. Without that, I think the analysis is not working properly.
I have checked the standard documentation (https://docs.scipy.org/doc/scipy-0.16.1/reference/generated/scipy.ndimage.filters.gaussian_laplace.html), but it says nothing about specifying a kernel.
Consider the following 8-bit grayscale TIF image:
https://imgur.com/nbfxWjB
If I import it into Python and run the gaussian_laplace filter over it with a sigma of 4, I get the following result:
from PIL import Image
from scipy.ndimage import gaussian_laplace
import numpy as np
file_path = 'White Spot.tif'
# Load the image into a NumPy array
image_array = np.array(Image.open(file_path))
# Apply the Laplacian of Gaussian with sigma = 4
transformed = gaussian_laplace(image_array, 4)
im = Image.fromarray(transformed)
im.save('transformed.TIF')
https://imgur.com/3mhpXI3
You will note a few things: the edge is not evenly detected in both directions, and the left and right sides of the circle look different from the top and bottom. This is not expected, because this should be a two-dimensional analysis. Therefore, if I transpose the image before I analyze it and then transpose it back to the original orientation, the result should look the same. But obviously it doesn't:
https://imgur.com/neZPDIM
For a typical Laplacian of Gaussian filter, I would need to specify the size of the kernel over which the analysis is convolved. However, no such value appears to be available here. Maybe if I figure out how to specify the kernel, I can figure out why the analysis differs between the two directions?
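From the scipy.ndimage.gaussian_filter documentation it looks like the kernel extent is only controlled indirectly, through the truncate keyword (default 4.0), which recent SciPy versions forward from gaussian_laplace to gaussian_filter; the kernel radius then works out to roughly int(truncate * sigma + 0.5) pixels. Below is an untested variant of my code along those lines, which also keeps the data in floats in case the 8-bit integer type (where negative Laplacian values would wrap around) is part of the problem:
from PIL import Image
from scipy.ndimage import gaussian_laplace
import numpy as np
# Work in floats so the negative lobes of the Laplacian are not wrapped into uint8
image_array = np.array(Image.open('White Spot.tif')).astype(float)
# truncate sets how many standard deviations the Gaussian kernel extends
transformed = gaussian_laplace(image_array, sigma=4, truncate=4.0)
# Rescale to 0-255 before saving, since the float result contains negative values
scaled = 255 * (transformed - transformed.min()) / (transformed.max() - transformed.min())
Image.fromarray(scaled.astype(np.uint8)).save('transformed_float.TIF')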
I would expect this analysis to give mostly even edge detection of the circular input. It is understandable that I would see artifacts in the top-right, top-left, bottom-left, and bottom-right parts of the circle, but not at the top and bottom of the circle as I do.
Thanks for your help!

Related

Python: Compute the area and perimeter of structures from arrays of ones and zeros

I am facing the following issue: I have two 2D arrays of ones and zeros (same shape, (1920, 1440)) that specify the masks and outlines of objects. In the mask array, ones indicate space occupied by the objects and zeros indicate empty space; in the outlines array, ones indicate the outlines and zeros indicate empty space.
Here you can find a graphical representation of the mask array: https://ibb.co/v36TrJv and here you can find a graphical representation of the outlines array: https://ibb.co/FxKwmTq. Ones are depicted in white and zeros are depicted in black.
As you can see, the masks form ellipse-like structures, which do not overlap and the outlines always form closed contours. I now would like to compute the area occupied by each structure as well as the perimeter. Ideally, I would end up with two 2D-arrays with the same shape as the input arrays. Here, the first array would hold the area of each structure at the points where the mask array has a value of one. Analogously, the second array would hold the respective perimeter of each structure at these points. I need the output arrays to be in this form, so that I can do shape index computations and produce graphical representations of the results.
As a minimum reproducible example you can download the images from provided links and use the following code to extract the arrays from them:
import skimage.io as sio
import numpy as np
# Read the PNGs and collapse the colour channels to 0/1 arrays
masks = sio.imread("masks.png")
masks = np.mean(masks, axis=2) / 255
outlines = sio.imread("outlines.png")
outlines = np.mean(outlines, axis=2) / 255
I have already played around a bit with OpenCV, as it apparently has functions that are specifically designed for what I am looking for. Yet my efforts have not yielded any notable results so far. I tried to adapt the example code from the contour features section of the OpenCV docs (https://docs.opencv.org/trunk/dd/d49/tutorial_py_contour_features.html):
import cv2 as cv
img = cv.imread('masks.png', 0)
ret, thresh = cv.threshold(img, 127, 255, 0)
# mode=1 is cv.RETR_LIST, method=2 is cv.CHAIN_APPROX_SIMPLE
contours, hierarchy = cv.findContours(thresh, 1, 2)
cnt = contours[0]
print(cnt)
Here, the outcome does not seem to be what I am looking for. I also tried to adjust the threshold, but without success. I cannot figure out which adjustments I would have to make in order to arrive at my desired results using OpenCV.
Furthermore, I have come across Green's theorem (https://en.wikipedia.org/wiki/Green%27s_theorem) and was considering implementing it for my purpose, but I thought I would first ask for external help, because I feel there should be a more straightforward solution to my problem. Any help will be highly appreciated!
OpenCV's contours will do the job; I think you are misreading them. In your code, contours[0] only gives the first contour detected. Instead, you should iterate over the contours variable (for contour in contours:) and, for each contour, get the area and perimeter using the functions given in the doc you shared, storing these details in a list of lists. Your final list will then be of size n×2, where n is the number of objects in your image.
One more suggestion: find the contours in the image that has the objects filled with ones and the background set to zero, as in the first image you shared. Also, to be on the safe side, since all your objects are separated, use RETR_EXTERNAL as the flag when finding the contours. Refer to the OpenCV docs for more information on this.
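A rough, untested sketch of that approach (it assumes OpenCV 4, where findContours returns two values, and uses cv.contourArea / cv.arcLength from the linked tutorial) to produce the per-pixel area and perimeter maps you described:
import cv2 as cv
import numpy as np
img = cv.imread('masks.png', 0)
ret, thresh = cv.threshold(img, 127, 255, 0)
contours, hierarchy = cv.findContours(thresh, cv.RETR_EXTERNAL, cv.CHAIN_APPROX_SIMPLE)
area_map = np.zeros(img.shape, dtype=float)
perimeter_map = np.zeros(img.shape, dtype=float)
for contour in contours:
    area = cv.contourArea(contour)
    perimeter = cv.arcLength(contour, True)
    # Fill this object's pixels with its area / perimeter value
    mask = np.zeros(img.shape, dtype=np.uint8)
    cv.drawContours(mask, [contour], -1, 255, thickness=-1)
    area_map[mask == 255] = area
    perimeter_map[mask == 255] = perimeter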

edge detection of an image and saving cells of a grid

picture example
I have recently started learning Python with the Spyder IDE and I'm a bit lost, so I'm asking for advice.
The thing is that I need to program an algorithm that, given a random image representing a board with black spots on it (in the picture I uploaded it is a 4x5 board), recognizes the edges properly and draws an AxB grid on it. I also need to save each cell separately so that I can work with them.
I know that OpenCV handles images, and I have even tried auto_canny, but I don't really know how to solve this problem. Can anybody give me some pointers, please?
As I understand your question, you need as output the grid dimensions of the board in your picture (e.g. 4x3) and each cell as a separate image.
This is the way I would approach this problem:
Use Canny + corner detection to get the intersections of the lines.
With the coordinates of the corners you can form your regions of interest, crop each one individually, and save it as a new image.
For the grid you can check the x's and y's of the coordinates. For example, you might get something like ((50, 30), (50, 35), (50, 40)), from which you can tell that three corners share x = 50 and so lie on the same grid line. I would encourage you to allow an error margin, as the points might not all land on exactly the same coordinate, even though they should not differ by much (see the sketch below).
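A rough, untested sketch of these three steps (the file name is a placeholder and the corner-detection parameters will need tuning for your board):
import cv2
import numpy as np
gray = cv2.imread('board.png', 0)
# Detect the grid intersections as strong corners
corners = cv2.goodFeaturesToTrack(gray, maxCorners=100, qualityLevel=0.01, minDistance=20)
corners = corners.reshape(-1, 2)
# Group nearly-equal coordinates (a few pixels of error margin) into grid lines
def group(values, tol=5):
    values = np.sort(values)
    groups = [[values[0]]]
    for v in values[1:]:
        if v - groups[-1][-1] <= tol:
            groups[-1].append(v)
        else:
            groups.append([v])
    return [int(np.mean(g)) for g in groups]
xs = group(corners[:, 0])
ys = group(corners[:, 1])
# Crop each cell between consecutive grid lines and save it separately
for i in range(len(ys) - 1):
    for j in range(len(xs) - 1):
        cell = gray[ys[i]:ys[i + 1], xs[j]:xs[j + 1]]
        cv2.imwrite('cell_%d_%d.png' % (i, j), cell)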
Good luck!

How to change intensity threshold in an image using python

I want to remove the background noise from microscopy images. I have tried different methods (histogram equalization and morphological transformations), but I have come to the conclusion that the best method is to remove low-intensity pixels.
I can do this using photoshop:
As you can see, figure A is the original one; I have included its histogram in the bottom inset. Applying the transformation in B, I get the desired final image, where the background is removed. See the transformation I have applied in the bottom inset of B.
I started working on the Python code:
import cv2
import numpy as np
import matplotlib.pyplot as plt
img = cv2.imread('lamelipodia/Lam1.jpg', 1)
# get green channel to gray
img_g = img[:,:,1]
# get histogram
plt.hist(img_g.flatten(), 100, [0, 100], color='g')
cv2.imshow('b/w', img_g)
#cv2.imwrite('bw.jpg', img_g)
plt.show()
cv2.waitKey(0)
cv2.destroyAllWindows()
I converted the figure to black and white
and got the histogram:
This is similar to the one from Photoshop.
I have been browsing Google and SO, but although I found similar questions, I could not find out how to modify the histogram as I described.
How can I apply this kind of transformation using Python (NumPy or OpenCV)? Or, if you think this has been answered before, please let me know. I apologize, but I have really been looking for this.
Following Piglet's link (docs.opencv.org/3.3.1/d7/d4d/tutorial_py_thresholding.html), the function needed for this goal is:
ret,thresh5 = cv2.threshold(img_g,150,255,cv2.THRESH_TOZERO)
This is not easy to read. We have to understand it as:
if any pixel in img_g is less than 150, then make it ZERO; keep the rest at the same value they had.
If we apply this to the image, we get:
The trick to reading the function is the flag that is passed. For example, cv2.THRESH_BINARY makes it read as:
if any pixel in img_g is less than 150, then make it ZERO (black); make the rest 255 (white).
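For reference, here are the same two operations written out in plain NumPy (illustrative only; img_g is the green-channel array from the code above, and OpenCV's actual comparison is src > thresh):
import numpy as np
# THRESH_TOZERO: pixels at or below 150 become 0, the rest keep their value
tozero = np.where(img_g > 150, img_g, 0)
# THRESH_BINARY: pixels at or below 150 become 0, the rest become 255
binary = np.where(img_g > 150, 255, 0).astype(np.uint8)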

How do I find and remove white specks from an image using SciPy/NumPy?

I have a series of images which serve as my raw data which I am trying to prepare for publication. These images have a series of white specks randomly throughout which I would like to replace with the average of some surrounding pixels.
I cannot post images, but the following code should produce a PNG that approximates the issue that I'm trying to correct:
import numpy as np
from scipy.misc import imsave  # removed in SciPy >= 1.2; imageio.imwrite is the modern replacement
# Dim most pixels, leaving roughly 0.1% of them bright to mimic white specks
random_array = np.random.random_sample((512, 512))
random_array[random_array < 0.999] *= 0.25
imsave('white_specs.png', random_array)
While this should produce an image with a similar distribution of the specks present in my raw data, my images do not have specks uniform in intensity, and some of the specks are more than a single pixel in size (though none of them are more than 2). Additionally, there are spots on my image that I do not want to alter that were intentionally saturated during data acquisition for the purpose of clarity when presented: these spots are approximately 10 pixels in diameter.
In principle, I could write something to look for pixels whose value exceeds a certain threshold and then check them against the average of their nearest neighbors. However, I assume that what I'm ultimately trying to achieve is not an uncommon operation in image processing, and I very much suspect that there is some SciPy functionality that will do this without my having to reinvent the wheel. My issue is that I am not familiar enough with the formal aspects/vocabulary of image processing to know what I should be looking for. Can someone point me in the right direction?
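For concreteness, this is the kind of hand-rolled approach I had in mind (untested; the 3x3 window and the 0.2 threshold are arbitrary placeholders), in case a ready-made SciPy routine does exactly this:
import numpy as np
from scipy.ndimage import uniform_filter
# Mean of each pixel's 3x3 neighbourhood (using random_array from the code above)
neighborhood_mean = uniform_filter(random_array, size=3)
# Flag pixels that are much brighter than their surroundings and replace them with the local mean
specks = (random_array - neighborhood_mean) > 0.2
cleaned = np.where(specks, neighborhood_mean, random_array)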
You could simply try a median filter with a small kernel size,
from scipy.ndimage import median_filter
filtered_array = median_filter(random_array, size=3)
which will remove the specks without noticeably changing the original image.
A median filter is well suited to such tasks, since it preserves features with high spatial frequency in your original image better than, for instance, a simple moving-average filter.
By the way, if your images are experimental (i.e. noisy), applying a non-aggressive median filter such as the one above never hurts, as it attenuates the noise as well.

Python/OpenCV: Computing a depth map from stereo images

I have two stereo images that I'd like to use to compute a depth map. While I unfortunately do not know C/C++, I do know Python, so when I found this tutorial, I was optimistic.
Unfortunately, the tutorial appears to be somewhat out of date. It not only needs to be tweaked to run at all (renaming 'createStereoBM' to 'StereoBM'), but when it does run, it doesn't give a good result, even on the example stereo images used in the tutorial itself.
Here's an example:
import numpy as np
import cv2
from matplotlib import pyplot as plt
imgL = cv2.imread('Yeuna9x.png',0)
imgR = cv2.imread('SuXT483.png',0)
stereo = cv2.StereoBM(1, 16, 15)
disparity = stereo.compute(imgL, imgR)
plt.imshow(disparity,'gray')
plt.show()
The result:
This looks very different from what the author of the tutorial achieves:
(source: opencv.org)
Tweaking the parameters does not improve matters. All the documentation I've been able to find is for the original C version of OpenCV, not the Python equivalent, and I unfortunately haven't been able to use it to improve things.
Any help would be appreciated!
You have the images the wrong way around.
Look at the images: the tin behind the lamp lets you work out the camera locations of the two images.
Just change this:
# v
imgR = cv2.imread('Yeuna9x.png',0)
imgL = cv2.imread('SuXT483.png',0)
# ^
If you look at the image that the tutorial says is the left frame, it is the same as your right one.
Here's my result after the change.
It is possible that you need to keep adjusting the parameters of the block matching algorithm.
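For example (untested; the exact constructor depends on your OpenCV version -- the old binding exposes cv2.StereoBM(preset, ndisparities, SADWindowSize), newer ones use cv2.StereoBM_create; imgL and imgR are from the question's code):
# numDisparities must be a multiple of 16, blockSize must be odd
stereo = cv2.StereoBM_create(numDisparities=32, blockSize=15)
disparity = stereo.compute(imgL, imgR)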
Have a look at this blog article: https://erget.wordpress.com/2014/03/13/building-an-interactive-gui-with-opencv/
The article's author has composed a set of classes that make the process of calibrating the cameras more streamlined than the OpenCV tutorial. These classes are available as a PyPI package: https://github.com/erget/StereoVision
Hope this helps :)
The camera is translated vertically instead of horizontally. Rotate the images 90 degrees, then try. (Prove it to yourself by rotating the screen. I just picked up my laptop and turned it on its edge.)
You mention different software; perhaps it's a row-major/column-major kind of thing between the original and the Python OpenCV bindings.
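A quick, untested sketch of that idea, reusing imgL, imgR and stereo from the question: rotate both frames so the baseline becomes horizontal, compute the disparity, then rotate the result back (you may also need to swap which rotated frame is passed as left/right):
import numpy as np
# Rotate 90 degrees; ascontiguousarray keeps OpenCV happy with the rotated views
imgL_rot = np.ascontiguousarray(np.rot90(imgL))
imgR_rot = np.ascontiguousarray(np.rot90(imgR))
disparity = stereo.compute(imgL_rot, imgR_rot)
# Rotate the disparity map back to the original orientation
disparity = np.rot90(disparity, k=-1)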
