I am trying to detect shapes within pixels using cv2. Currently I am using examples such as https://www.geeksforgeeks.org/how-to-detect-shapes-in-images-in-python-using-opencv/; however, this only identifies each individual pixel rather than the larger shapes they form. I've attempted to add blur, but that causes cv2 to identify the shapes as circles rather than squares.
Example images:
What would be the best way to process these? Ideally, for the above two images, there are 3 distinct square patterns.
I would like to know if there is a clever way to segment the individual pills in the following image using an edge detector (e.g. Canny), without some kind of CNN or other ML approach:
So far I have tried the Canny detector on a filtered image (box/Gaussian, with k=(3,3) or k=(5,5)) on different channels of several color spaces, e.g. GRAY, HSV, LAB. Unfortunately, I have not been able to find perfect edges across several different experiments. If I could find the edges, segmentation would already be quite simple, because I could compare the pills' different colors and sizes.
I thought it could be done another way: apply a mask to the image for each pill and then find the edges on the filtered image. However, this method seems primitive, and it would be difficult to distinguish two-colored pills.
Best I can get so far:
I want to create a function that determines how colorblind-friendly an image is (on a scale from 0-1). I have several color-related functions that can perform the following tasks:
Take an image (as a PIL image, filename, or RGB array) and transform it into an image representative of what a colorblind person would see (for the different types of colorblindness)
Take an image and determine the rgb colors associated with each pixel of the image (transform into numpy array of rgb colors)
Determine the color palette associated with an image
Find the similarity between two rgb arrays (using CIELAB- see colormath package)
My first instinct was to transform the image and colorblind version of the image into RGB arrays and then use the CIELAB function to determine the similarity between the two images. However, that doesn't really solve the problem since it wouldn't be able to pick out things like readability (e.g. if the text and background color end up being very similar after adjusting for colorblindness).
Any ideas for how to determine how colorblind-friendly an image is?
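One idea for capturing the readability concern: rather than comparing the whole image to its simulated version, measure how well *pairs* of palette colors remain distinguishable after simulation. If text and background collapse to similar colors, their pair score drops even though the overall images look alike. A rough sketch (the `friendliness` scoring rule here is my own invention, not an established metric, and `rgb_to_lab` is a minimal CIE76-style conversion standing in for your colormath-based function):

```python
import numpy as np

def rgb_to_lab(rgb):
    """Minimal sRGB -> CIELAB (D65) conversion, enough for a rough delta-E."""
    rgb = np.asarray(rgb, dtype=float) / 255.0
    rgb = np.where(rgb > 0.04045, ((rgb + 0.055) / 1.055) ** 2.4, rgb / 12.92)
    m = np.array([[0.4124, 0.3576, 0.1805],
                  [0.2126, 0.7152, 0.0722],
                  [0.0193, 0.1192, 0.9505]])
    xyz = rgb @ m.T / np.array([0.95047, 1.0, 1.08883])
    f = np.where(xyz > 0.008856, np.cbrt(xyz), 7.787 * xyz + 16 / 116)
    L = 116 * f[..., 1] - 16
    a = 500 * (f[..., 0] - f[..., 1])
    b = 200 * (f[..., 1] - f[..., 2])
    return np.stack([L, a, b], axis=-1)

def friendliness(palette, simulated_palette):
    """Score in [0, 1]: how well color-pair contrast survives simulation.
    Each pair contributes min(1, deltaE_after / deltaE_before)."""
    before = rgb_to_lab(palette)
    after = rgb_to_lab(simulated_palette)
    scores = []
    for i in range(len(palette)):
        for j in range(i + 1, len(palette)):
            d0 = np.linalg.norm(before[i] - before[j])
            d1 = np.linalg.norm(after[i] - after[j])
            if d0 > 1e-6:  # skip pairs already indistinguishable
                scores.append(min(1.0, d1 / d0))
    return float(np.mean(scores)) if scores else 1.0
```

You would feed it the palette from your palette-extraction function and the same palette run through your colorblind simulation; a red/green pair that collapses under deuteranopia drags the score toward 0, while palettes whose contrasts survive score near 1.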
I have some processed images that have noise (background pixels) around the boundaries. Is there a way to detect only the boundary of the object itself and create a mask to remove the background pixels around the boundaries?
I'm a beginner with OpenCV, so any code samples would help.
Example:
Original Image
Processed Image
Expected Output
I have tried the findContours method but it creates a mask that includes the noisy pixels as well.
I have also tried the erode method, but it does not give the same results for different image sizes, so that is not the solution I'm looking for.
Hello everyone,
I am trying very hard to extract edges from a specific image. I have tried many, many approaches, including:
Grayscale conversion, blurring (Laplacian, Gaussian, averaging, etc.), gradients (Sobel, Prewitt, Canny)
Morphological transformations
Thresholding with different combinations
Converting to HSV, masking, and then thresholding
Contour methods with area thresholding
On top of all this, I have tried different combinations of the above, but none of them produced an excellent result. The main problem is still too many edges/lines. The image is an orthomosaic 2D photo of a marble wall; I will upload it. Does anyone have any ideas?
P.S. The final result should be an image that contains only the "skeleton"/shape of the marbles.
Wall.tif
I am trying to write a script (in bash using ImageMagick, or in Python) to generate an image similar to this example:
The source is 25 separate JPEGs. So far I have written an ImageMagick script that takes each of the images, detects the contours of the person, and replaces the white background with a transparent one.
The next step is to fit the contours randomly into one large image. Each image should fit into the larger image without overlapping its neighbors. It seems I need some type of collision detection.
I am looking for pointers on how to tackle this problem.
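For 25 images, the simplest workable scheme is rejection sampling over axis-aligned bounding boxes: try random positions and keep one only if its box overlaps no already-placed box. A sketch (`place_images` and its parameters are placeholder names; sizes are the (width, height) of each cut-out):

```python
import random

def place_images(sizes, canvas_w, canvas_h, tries=1000, seed=None):
    """Rejection-sampling placement: pick random positions, keep one
    only if its bounding box overlaps no already-placed box."""
    rng = random.Random(seed)
    placed = []  # (x, y, w, h) for each accepted image
    for w, h in sizes:
        for _ in range(tries):
            x = rng.randint(0, canvas_w - w)
            y = rng.randint(0, canvas_h - h)
            # Standard axis-aligned box overlap test against all placed boxes
            if not any(x < px + pw and px < x + w and
                       y < py + ph and py < y + h
                       for px, py, pw, ph in placed):
                placed.append((x, y, w, h))
                break
        else:
            return None  # could not fit everything; retry or enlarge canvas
    return placed
```

With the positions decided, each PNG can be composited onto the canvas at its (x, y) with Pillow's `Image.paste` (passing the image itself as the mask to respect transparency), or with ImageMagick's `composite -geometry +X+Y`. Bounding boxes waste some space compared with true contour-vs-contour tests, but they are usually good enough for a loose collage; for tighter packing you could test pixel overlap between the alpha channels instead.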