I was trying a pencil sketch effect in OpenCV. The post I was reading applied a linear (colour) dodge to the image, using a blurred negative of the image as the mask, to obtain the effect:
inv = 255 - img                               # negative of the grayscale image
blur = cv2.GaussianBlur(inv, (21, 21), 0)     # blur the negative
res = cv2.divide(img, 255 - blur, scale=256)  # colour dodge: divide by the inverted blur
But I found that the effect can be achieved without inverting the image (results at the bottom: left, with inversion; right, without). I did this instead:
blur2 = cv2.GaussianBlur(img, (21, 21), 0)  # blur the image directly
res = cv2.divide(img, blur2, scale=256)     # divide by the (non-inverted) blur
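To double-check, the two blurs can be compared directly; since a Gaussian blur is linear, inverting before and after the blur should cancel out up to rounding (a quick sketch, assuming img is a grayscale uint8 image):
import cv2
import numpy as np

blur_of_negative = 255 - cv2.GaussianBlur(255 - img, (21, 21), 0)
blur_direct = cv2.GaussianBlur(img, (21, 21), 0)
# Expected to differ by at most a unit or two of rounding error
print(np.abs(blur_of_negative.astype(int) - blur_direct.astype(int)).max())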
A Stack Overflow answer I read also suggested the first approach. I wanted to know: why is it necessary to apply the blur to the negative if we are converting back to the original anyway?
Thank you for taking the time to answer the question.
Instructions for converting an image to a sketch, from the book "OpenCV with Python Blueprints" (a code sketch of these steps follows the list):
Convert the colour image to grayscale.
Invert the grayscale image to get a negative.
Apply a Gaussian blur to the negative from step 2.
Blend the grayscale image from step 1 with the blurred negative from step 3 using a colour dodge.
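Put together, the four steps might look like this (a minimal sketch, assuming a 21x21 kernel as in the snippets above; the file names are hypothetical):
import cv2

img = cv2.imread("input.jpg")                     # hypothetical file name
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)      # step 1: grayscale
inv = 255 - gray                                  # step 2: negative
blur = cv2.GaussianBlur(inv, (21, 21), 0)         # step 3: blur the negative
sketch = cv2.divide(gray, 255 - blur, scale=256)  # step 4: colour dodge blend
cv2.imwrite("sketch.jpg", sketch)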
I am quite new to Python, and I am trying to write some code for image analysis.
Here is my initial image:
Initial image
After splitting the image into its RGB channels, converting each to a gradient, applying a threshold, and merging them back together, I get the following image:
Gradient/Threshold
Now I have to draw contours around the black areas and get the sizes of the enclosed regions. I just don't know how to do it, since my attempts with cv2.findContours/cv2.drawContours in OpenCV have not been successful at all.
Maybe someone also knows an easier way to get there from the initial image.
I hope someone can help me!
I am coding in Python 3.
Try adaptive thresholding on the grayscale version of the input image.
Also play with the last two parameters of cv2.adaptiveThreshold (the block size and the constant C). You will find good results, as I have shown in the image. (Tip: create trackbars and play with the values; this is a quick and easy way to find the best values for these params.) A minimal example follows.
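A sketch of that pipeline, including the contour-area measurement from the question; the file name, blockSize=11 and C=2 are assumptions to tune:
import cv2

gray = cv2.imread("input.png", cv2.IMREAD_GRAYSCALE)  # hypothetical file name

# Adaptive threshold; the last two parameters (blockSize, C) are the ones to tune
binary = cv2.adaptiveThreshold(gray, 255, cv2.ADAPTIVE_THRESH_GAUSSIAN_C,
                               cv2.THRESH_BINARY_INV, 11, 2)

# Contours around the (now white) regions; OpenCV 4.x return signature assumed
contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
for cnt in contours:
    print(cv2.contourArea(cnt))  # size of each enclosed region, in pixels

vis = cv2.cvtColor(gray, cv2.COLOR_GRAY2BGR)
cv2.drawContours(vis, contours, -1, (0, 255, 0), 1)  # draw all contours in green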
For a school project I am trying to write a program in Python that tracks the movement of the pupil. In order to do that I am using OpenCV.
After looking up some tutorials on the internet, I noticed that almost everyone uses thresholding to achieve this, since a binary image is necessary for almost every step further down the road (e.g. the HoughCircles transform, contours). However, from my understanding, thresholding is extremely light-sensitive, so such an approach would only return good results under optimal lighting conditions.
So here comes my question: is there any alternative or better approach than just thresholding the image? Or is my understanding of thresholding in OpenCV wrong in the first place?
Here is an example image:
The purpose of thresholding is to segment the desired objects from the background, after which you can perform additional processing (e.g. morphological operations) and then contour filtering to further isolate the desired objects. Instead of applying image-processing techniques to a BGR (3-channel) image or a grayscale (1-channel) image with range [0...255], thresholding gives us a binary image where every pixel is either 0 or 255, which makes distinguishing objects much easier. Depending on your situation, there are many ways to obtain a binary image; here are several methods (a short example follows the list):
cv2.Canny - Canny edge detection, which uses minVal and maxVal to determine edges
cv2.threshold - simple thresholding with a user-selected, arbitrary global threshold value
cv2.threshold + cv2.THRESH_OTSU - Otsu's thresholding, which automatically calculates the threshold value
cv2.adaptiveThreshold - adaptive thresholding, for images with different lighting conditions in different areas; it automatically calculates the threshold value for different regions of the image and gives better results on images with varying illumination
cv2.inRange - colour segmentation; the idea is to use lower and upper threshold ranges to obtain a binary image, which is useful when trying to isolate a single colour range
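For illustration, a minimal sketch of three of these on a grayscale image; the file name, Canny thresholds and adaptive parameters are assumptions:
import cv2

gray = cv2.imread("eye.png", cv2.IMREAD_GRAYSCALE)  # hypothetical file name

# Otsu: threshold value picked automatically from the histogram
_, otsu = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)

# Adaptive: a different threshold per neighbourhood, robust to uneven lighting
adaptive = cv2.adaptiveThreshold(gray, 255, cv2.ADAPTIVE_THRESH_MEAN_C,
                                 cv2.THRESH_BINARY, 21, 5)

# Canny: binary edge map from two hysteresis thresholds
edges = cv2.Canny(gray, 50, 150)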
From the book 'Learning OpenCV 3 Computer Vision with Python - Second Edition', page 50:
For blurring, let's use medianBlur(), which is effective in removing digital video noise, especially in color images. For edge-finding, let's use Laplacian(), which produces bold edge lines, especially in grayscale images. After applying medianBlur(), but before applying Laplacian(), we should convert the image from BGR to grayscale.
I'm having trouble understanding why it suggests applying the grayscale conversion after the blur step. In my code I apply the grayscale conversion first, and I'm curious whether the order matters.
Any clues?
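For reference, the order the book describes would look roughly like this (a minimal sketch; the file name, median kernel size and Laplacian aperture are assumptions):
import cv2

img = cv2.imread("frame.png")                     # BGR input, hypothetical file
blurred = cv2.medianBlur(img, 7)                  # blur the colour image first
gray = cv2.cvtColor(blurred, cv2.COLOR_BGR2GRAY)  # then convert to grayscale
edges = cv2.Laplacian(gray, cv2.CV_8U, ksize=5)   # edge-finding on the grayscale image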
I am trying to paste an object, with a completely tight known mask, onto an image, so it should be easy; but without some post-processing I get artifacts at the border. I want to use Poisson blending to reduce the artifacts; it is implemented in OpenCV as seamlessClone.
import cv2
import matplotlib.pyplot as plt

# User-provided tight mask (50x50x3, uint8): pixels are white exactly on the
# object and black everywhere else
tight_mask
# Object to paste: 50x50x3, uint8, in colour
obj
# User-provided image: 512x512, with a mostly uniform background, in colour
im
# (x, y) position in im where the centre of obj is pasted (defined elsewhere)
center

# Two different modes of Poisson blending, which give approximately the same result
normal_clone = cv2.seamlessClone(obj, im, tight_mask, center, cv2.NORMAL_CLONE)
mixed_clone = cv2.seamlessClone(obj, im, tight_mask, center, cv2.MIXED_CLONE)

plt.imshow(normal_clone, interpolation="none")
plt.imshow(mixed_clone, interpolation="none")
However, with the code above, I only get images where the pasted objects are extremely transparent. They are obviously well blended, but they are so blended that they fade away like ghosts of objects.
I was wondering if I am the only one to have such issues, and if not, what the alternatives are in terms of Poisson blending.
Do I have to reimplement it from scratch to modify the blending factor (is that even possible?)? Is there another way? Do I have to use dilation on the mask to lessen the blending? Can I enhance the contrast somehow afterwards?
In fact, Poisson blending uses the gradient information of the pasted image to blend it into the target image.
It turns out that if the mask is completely tight, the gradient at the border is artificially interpreted as null.
That is why the method effectively ignores the object's border and produces ghosts.
The solution is therefore to use a larger mask, obtained by dilating the original mask with morphological operations so that it includes some of the background.
Care must be taken with the colour of the included background: if the contrast is too strong, the gradient will be too strong and the image will not blend well.
A neutral colour such as gray is a good starting point.
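A minimal sketch of that fix, reusing the names from the question; the kernel size and iteration count are assumptions to tune:
import cv2
import numpy as np

# Dilate the tight mask so it includes a thin ring of background around the object
kernel = np.ones((5, 5), np.uint8)  # structuring element (size to tune)
loose_mask = cv2.dilate(tight_mask, kernel, iterations=2)

# Blend again with the loosened mask; the border gradient is now meaningful
normal_clone = cv2.seamlessClone(obj, im, loose_mask, center, cv2.NORMAL_CLONE)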
I would like to darken one image based on the mask of an edge-detected second image.
Image 1: Original (greyscale) image
Image 2: Edge detected (to be used as mask)
Image 3: Failed example showing cv2.subtract processing
In my failed example (Image 3), I subtracted the white pixels (255) from the original image, but what I want to do is DARKEN the original image based on a mask of the edge-detected image.
In this article, How to fast change image brightness with python + OpenCV?, Bill Gates describes how he converts the image to HSV, splits the channels, modifies the Value channel, and then merges them back. This seems like a reasonable approach, but I only want to modify the Value where the mask is white, i.e. where an edge exists.
Ultimately, I am trying to enhance the edge of a low resolution thermal video stream in a similar way to the FLIR One VividIR technology.
I believe I've come really far for a complete novice to image processing, OpenCV and Python, but after days of trying just about every function OpenCV offers, I've got myself stuck.
import numpy as np

## get the coordinates of the edge (mask) pixels
pos = np.where(edge > 0)
## halve the brightness of the original image at those coordinates
img[pos] //= 2
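If you do want the HSV route from the linked answer on a colour image, a masked version might look like this (a sketch; bgr_img and the halving factor are assumptions):
import cv2

hsv = cv2.cvtColor(bgr_img, cv2.COLOR_BGR2HSV)
h, s, v = cv2.split(hsv)
v[edge > 0] //= 2  # darken Value only where the edge mask is white
darkened = cv2.cvtColor(cv2.merge([h, s, v]), cv2.COLOR_HSV2BGR)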