I want to perform an oversegmentation of an image using the watershed method. Reading the documentation, it seems I need to use the findContours and drawContours functions to create the markers. How can I use them?
This is my current code:
import cv2
import numpy as np

im = cv2.imread('balls.jpg')

# seed markers on a sparse grid; note that every seed gets the same label (200)
marker = np.zeros(im.shape[:2], dtype=np.int32)
marker[::30, ::30] = 200

# watershed modifies marker in place; boundary pixels are set to -1
cv2.watershed(im, marker)

out = cv2.convertScaleAbs(marker)
cv2.namedWindow('Out')
cv2.imshow('Out', out)
cv2.waitKey()
P.S.: There's another question on this, but it used a different approach (based on foreground and background). I want to use contours instead.
This is my goal: to produce an oversegmentation of the image:
Input image can be downloaded from here:
http://decsai.ugr.es/~javier/denoise/peppers256.png
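For reference, a minimal sketch of the contour-based marker idea, assuming OpenCV 4.x (where findContours returns two values); the Otsu threshold is just one arbitrary way to get a binary image to pull contours from:

import cv2
import numpy as np

im = cv2.imread('peppers256.png')
gray = cv2.cvtColor(im, cv2.COLOR_BGR2GRAY)

# any binary image can serve as the contour source; Otsu is one choice
_, binary = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY | cv2.THRESH_OTSU)
contours, _ = cv2.findContours(binary, cv2.RETR_LIST, cv2.CHAIN_APPROX_SIMPLE)

# draw each contour filled with its own label so watershed gets distinct seeds
marker = np.zeros(im.shape[:2], dtype=np.int32)
for i in range(len(contours)):
    cv2.drawContours(marker, contours, i, i + 1, thickness=-1)

cv2.watershed(im, marker)   # boundary pixels are set to -1
out = cv2.convertScaleAbs(marker)
cv2.imshow('Out', out)
cv2.waitKey()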
I am new to Pillow and I would like to learn how to use it.
I would like to seek your help and expertise: could I use Pillow to find the connected components of an image?
For example, if I have an image such as the following
May I ask whether Pillow could give me the shapes and positions of the two components in my example? They are a square and a circle, and the circle is inside the square.
Thank you very much,
Mi
Using Pillow you can find the edges in the given image.
from PIL import Image
from PIL import ImageFilter

image = Image.open("c1LDc.png")
image = image.convert('RGB')

# FIND_EDGES produces an image containing only the outlines of the shapes
imageWithEdges = image.filter(ImageFilter.FIND_EDGES)

image.show()
imageWithEdges.show()
Output:
You can't use Pillow for object detection, as answered here
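If you are not tied to Pillow, here is a minimal sketch of one alternative, using scipy.ndimage (a different library, named plainly as a substitute) to label the connected components and report a bounding box for each:

import numpy as np
from PIL import Image
from scipy import ndimage

# load and binarize (assumes dark shapes on a light background)
image = np.array(Image.open("c1LDc.png").convert("L"))
binary = image < 128

# label the connected components and get a bounding box for each one
labels, num_components = ndimage.label(binary)
boxes = ndimage.find_objects(labels)

print(num_components)    # e.g. 2: the square outline and the circle
for box in boxes:
    print(box)           # a (row-slice, column-slice) bounding box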
I'm working on an imaging project that needs to read images, split them into overlapping patches, run some operation on the patches, and then recombine them into a single image. For this task, I decided to use the scikit-learn methods extract_patches_2d and reconstruct_from_patches_2d.
https://scikit-learn.org/stable/modules/generated/sklearn.feature_extraction.image.extract_patches_2d.html
https://scikit-learn.org/stable/modules/generated/sklearn.feature_extraction.image.reconstruct_from_patches_2d.html
import numpy as np
import cv2
from sklearn.feature_extraction import image as extraction

img = cv2.imread("cat_small.jpg", cv2.IMREAD_COLOR)
grid_size = 500

# extract up to 100 patches of grid_size x grid_size, then try to rebuild the image
images = extraction.extract_patches_2d(img, (grid_size, grid_size), max_patches=100)
image = extraction.reconstruct_from_patches_2d(images, img.shape)
cv2.imwrite("stack_overflow_test.jpg", image)
I can tell the extraction works correctly, since each of the patches can be saved as an individual image. The reconstruction does not work.
The image:
becomes:
The result looks entirely black when viewed on a white background, but it does have some white pixels toward the top left (visible when opened in a separate tab). The same problem happens in grayscale.
I have tried adding astype(np.uint8) as explained in
How to convert array to image colour channel in python?
to no avail. How is this method used properly?
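For what it's worth, here is a minimal sketch of the usage my reading of the scikit-learn docs suggests: reconstruct_from_patches_2d assumes it receives every patch, in extraction order, so max_patches must be omitted, and the averaged result comes back as float64. The 64x64 patch size is an arbitrary choice for illustration; extracting every patch at a large size can exhaust memory.

import numpy as np
import cv2
from sklearn.feature_extraction import image as extraction

img = cv2.imread("cat_small.jpg", cv2.IMREAD_COLOR)

# extract *all* patches, in order -- no max_patches -- so that
# reconstruct_from_patches_2d knows where each patch belongs
patch_size = (64, 64)
patches = extraction.extract_patches_2d(img, patch_size)

# ... process the patches here ...

# overlapping patches are averaged, so the result comes back as float64
recon = extraction.reconstruct_from_patches_2d(patches, img.shape)
cv2.imwrite("stack_overflow_test.jpg", np.clip(recon, 0, 255).astype(np.uint8))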
I want to remove the background noise from microscopy images. I have tried different methods (histogram equalization and morphological transformations), but I concluded that the best approach is to remove low-intensity pixels.
I can do this using photoshop:
As you can see, figure A is the original; its histogram is shown in the bottom insert. Applying the transformation in B, I get the desired final image, where the background is removed. The transformation I applied is shown in the bottom insert of B.
I started working on the Python code:
import cv2
import numpy as np
import matplotlib.pyplot as plt

img = cv2.imread('lamelipodia/Lam1.jpg', 1)
# take the green channel as the grayscale image
img_g = img[:,:,1]
# plot its histogram
plt.hist(img_g.flatten(), 100, [0,100], color = 'g')
cv2.imshow('b/w', img_g)
#cv2.imwrite('bw.jpg', img_g)
plt.show()
cv2.waitKey(0)
cv2.destroyAllWindows()
I converted the figure to black and white
and got the histogram:
It is similar to the one from Photoshop.
I have been browsing Google and SO, and although I found similar questions, I could not find how to modify the histogram as described.
How can I apply this kind of transformation using Python (NumPy or OpenCV)? Or, if you think this has been answered before, please let me know. I apologize, but I have really been looking for this.
Following Piglet's link:
docs.opencv.org/3.3.1/d7/d4d/tutorial_py_thresholding.html, the function needed for the goal is:
ret, thresh5 = cv2.threshold(img_g, 150, 255, cv2.THRESH_TOZERO)
This is not easy to read.
We have to read it as:
if any pixel in img_g is less than 150, make it zero; keep the rest at their original values.
If we apply this to the image, we get:
The trick to reading the function is the threshold-type flag. For example, cv2.THRESH_BINARY makes it read as:
if any pixel in img_g is less than 150, make it zero (black); make the rest 255 (white).
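Putting it together with the code from the question (the 150 cutoff is the value used above; tune it against your histogram):

import cv2

img = cv2.imread('lamelipodia/Lam1.jpg', 1)
img_g = img[:,:,1]   # green channel, as in the question

# pixels below 150 become zero; brighter pixels keep their original value
ret, thresh5 = cv2.threshold(img_g, 150, 255, cv2.THRESH_TOZERO)

cv2.imshow('background removed', thresh5)
cv2.waitKey(0)
cv2.destroyAllWindows()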
I am running blob detection on a camera image of circular objects, using OpenCV 2.4.9. I run a number of standard filters on the image (blur, adaptive threshold, skeletonization using skimage routines, and dilation of the skeleton), and would like to identify blobs (contiguous black areas) in the result. There is SimpleBlobDetector just for that, but however I set its parameters, it is not doing what I would like it to.
This is the original image:
and this is the processed version, with keypoints from the detector drawn in yellow:
The keypoints don't seem to respect area constraints, and also don't appear where I would expect them.
The script is as follows; is there something obviously wrong? Or do you have any other suggestions?
import numpy as np
import cv2
import skimage, skimage.morphology

# preprocessing: grayscale, denoise, adaptive threshold, skeletonize, dilate
img0 = cv2.imread('capture_b_07.cropped.png')
img1 = cv2.cvtColor(img0, cv2.COLOR_BGR2GRAY)
img2 = cv2.medianBlur(img1, 5)
img3 = cv2.bilateralFilter(img2, 9, 75, 75)
img4 = cv2.adaptiveThreshold(img3, 255, cv2.ADAPTIVE_THRESH_GAUSSIAN_C, cv2.THRESH_BINARY_INV, 21, 0)
img5 = skimage.img_as_ubyte(skimage.morphology.skeletonize(skimage.img_as_bool(img4)))
img6 = cv2.dilate(img5, cv2.getStructuringElement(cv2.MORPH_RECT, (3, 3)), iterations=1)

# blob detection: dark, medium-area, roughly circular blobs
pp = cv2.SimpleBlobDetector_Params()
pp.filterByColor = True
pp.blobColor = 0
pp.filterByArea = True
pp.minArea = 500
pp.maxArea = 5000
pp.filterByCircularity = True
pp.minCircularity = .4
pp.maxCircularity = 1.
det = cv2.SimpleBlobDetector(pp)   # OpenCV 2.4 constructor
keypts = det.detect(img6)
img7 = cv2.drawKeypoints(img6, keypts, np.array([]), (0, 255, 255), cv2.DRAW_MATCHES_FLAGS_DRAW_RICH_KEYPOINTS)
cv2.imwrite('capture_b_07.blobs.png', img7)
A similar pipeline in ImageJ (Analyze Particles, circularity 0.5-1.0, area 500-5000 px^2), which I am trying to reproduce with OpenCV, gives something like this:
You can get a similar result to that of ImageJ using watershed.
I inverted your img6, labeled it, and then used it as the marker for OpenCV's watershed. Then I enlarged the watershed-segmented boundary lines with a morphological filter and found connected components using OpenCV's findContours. Below are the results; I'm not sure if this is what you want. I'm not posting the code, as I quickly tried this out with a combination of Python and C++, so it's a bit messy.
watershed segmentation
connected components
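For the record, a rough sketch of the steps described above, assuming OpenCV 3.x or newer (connectedComponents does not exist in 2.4, and the number of values findContours returns differs across versions); img6 is the dilated skeleton from the question:

import cv2
import numpy as np

inv = cv2.bitwise_not(img6)                  # invert the skeleton
n, markers = cv2.connectedComponents(inv)    # label each enclosed region
markers = markers.astype(np.int32)

# watershed needs a 3-channel image; it writes -1 on the boundary lines
color = cv2.cvtColor(img6, cv2.COLOR_GRAY2BGR)
cv2.watershed(color, markers)

# enlarge the boundary lines, then find the connected components
lines = np.uint8(markers == -1) * 255
lines = cv2.dilate(lines, cv2.getStructuringElement(cv2.MORPH_RECT, (3, 3)))
contours, _ = cv2.findContours(255 - lines, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)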
I was able to use SimpleBlobDetector and produce a somewhat acceptable result. My first step was to try to even out the brightness over your image, as the lower left was brighter than the upper right. Errors of omission outnumber errors of commission, so refinements are still possible.
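No code was posted here either, so as a guess at the brightness-evening step (an assumption, not necessarily what this poster did), one common trick is to divide the image by a heavily blurred copy of itself:

import cv2

gray = cv2.imread('capture_b_07.cropped.png', cv2.IMREAD_GRAYSCALE)

# estimate the slowly varying illumination with a very wide Gaussian,
# then divide it out and rescale back to the 8-bit range
background = cv2.GaussianBlur(gray, (0, 0), sigmaX=51)
flat = cv2.divide(gray, background, scale=255)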
How can I transform Image 1 into Image 2 using matplotlib.pyplot or another library in Python?
Image 1:
Image 2:
(This image turned out to be confidential; I removed it because I can't delete the post. Sorry for the inconvenience.)
Any help is appreciated.
Have a look at the Python Imaging Library (PIL/Pillow), especially its ImageFilter module.
But tools like ImageMagick or one of the built-in filters in Gimp might be more suitable for experimenting.
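For example, a minimal sketch of experimenting with the ImageFilter module (the filename is a placeholder, since the original image was removed):

from PIL import Image, ImageFilter

image = Image.open("image1.png")   # placeholder filename

# a few of the predefined filters to try; which one matches depends on your images
blurred = image.filter(ImageFilter.GaussianBlur(radius=2))
sharpened = image.filter(ImageFilter.SHARPEN)
smoothed = image.filter(ImageFilter.SMOOTH_MORE)

blurred.show()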
Is this experimental data that has already been filtered from Image 1 to Image 2 by someone else? I wonder whether you have the point spread function along with the raw Image 1.