I have an image like this that I'm trying to turn into a dark silhouette, like this.
I've tried using image.filter(ImageFilter.GaussianBlur(radius)) with varying radii, but the intensity gets reduced a lot. Is there any other way to do this?
FYI, your first link doesn't work. If by intensity you mean the darkness of the color, you might want to try ImageFilter.UnsharpMask. That might mix less of what I assume is a white background into the middle of the logo while still giving the soft edges you are looking for.
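A minimal sketch of that suggestion using Pillow, with a synthetic dark square standing in for the logo (the radius/percent/threshold values are assumptions to tune):

```python
from PIL import Image, ImageDraw, ImageFilter

# Synthetic stand-in: a dark square on a white background.
img = Image.new("L", (64, 64), 255)
ImageDraw.Draw(img).rectangle((16, 16, 47, 47), fill=0)

# Unsharp masking re-darkens edges after blurring, so the interior of
# the shape stays dark while the outline is still softened.
soft = img.filter(ImageFilter.UnsharpMask(radius=5, percent=150, threshold=3))
```

Compared with a plain BLUR, the interior of the shape keeps much more of its original darkness.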
I have two images, one image which contains a box and one without. There is a small vertical disparity between the two pictures since the camera was not at the same spot and was translated a bit. I want to cut out the box and replace the hole with the information from the other picture.
I want to achieve something like this (a slide from a computer vision course)
I thought about using the cv2.createBackgroundSubtractorMOG2() method, but it does not seem to work with only 2 pictures.
Simply subtracting one picture from the other does not work either, because of the disparity.
The course suggests using RANSAC to compute the most likely relationship between the two pictures and subtracting the areas that changed a lot. But how do I actually fill in the holes?
Many thanks in advance!!
If you plan to use only a pair of images (or only a few images), image stitching methods are better than background subtraction.
The steps are:
Calculate homography between the two images.
Warp the second image so it overlaps the first.
Replace the region with the human with pixels from the warped image.
This link shows a basic example of image stitching. You will need extra work if both images have humans in different places, but otherwise it should not be hard to tweak this code.
You can try this library for background subtraction issues: https://github.com/andrewssobral/bgslibrary
There are Python wrappers for this tool.
I am automatically creating JPG pictures from multispectral data. The created pictures are very dark, so I thought the best idea would be to change the brightness (like ImageEnhance in PIL). But there was a problem: some pictures need more brightness than others.
So my next idea was to try linear stretching of the histogram. I created a script that iterates over RGB tuples and computes a new intensity for each pixel, but it made very little difference, probably because the range of values was already 0-255 every time. Then I tried histogram equalization (ImageOps) on R, G, and B, but the result was not good; please see the middle part of the picture. I found on the internet that this is not a good approach because the colors can change dramatically, which is probably my case.
The best idea seems to be to convert the RGB array to HSL and then change the luminance, but I can't use a constant to maximize the luminance because the pictures are different and need different constants. Should I use histogram equalization on the luminance, or what is the best approach to stretch (or, probably better, equalize) the histogram of my picture?
I am looking for something like Image/Auto adjust colors in IrfanView, or what some software calls Linear Normalization...
I hope the picture helps you understand my problem. I probably chose a bad way to achieve my goal.
Thank you for any answer; I will be very glad.
EDIT
Left image for download
I can upload the other images later today.
I would suggest proceeding with the same approach you have stated, with a slight modification.
Convert the RGB image to LAB image.
Apply localized histogram equalization to the L-channel.
Merge it back with the other channels.
Convert it back to RGB image.
You can check my answer for this in a different question here:
The code I have there is written for OpenCV in Python. You can modify it for C if you wish.
Let me know if it helps!
I am not sure if this applies, and I have not tried it myself, but I was reading this article about underwater contrast stretching:
http://www.iaeng.org/IJCS/issues_v34/issue_2/IJCS_34_2_12.pdf
What it suggests might help:
"In order to address the issues discussed above, we propose an approach based on slide stretching. Firstly, we use contrast stretching of RGB algorithm to equalize the colour contrast in the images. Secondly, we apply the saturation and intensity stretching of HSI to increase the true colour and solve the problem of lighting"
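The first stage of that quote, per-channel contrast stretching of RGB, can be sketched like this (the second stage would apply the same stretch to the S and I channels after an RGB-to-HSI conversion):

```python
import numpy as np

def stretch_channel(c):
    """Linearly map a channel's [min, max] range onto [0, 255]."""
    lo, hi = int(c.min()), int(c.max())
    if hi == lo:
        return c.copy()
    return ((c.astype(np.float32) - lo) * 255.0 / (hi - lo)).astype(np.uint8)

# Low-contrast test image: values confined to 60-119.
img = np.random.default_rng(0).integers(60, 120, (32, 32, 3), dtype=np.uint8)
stretched = np.dstack([stretch_channel(img[:, :, i]) for i in range(3)])
```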
I'm doing a face swap. I have mostly done it, but its final step is not finished: the photos have different brightness, and I don't know how I should correct it. I need your help. I'm not going to blur the image. I also tried equalizing their histograms before swapping, but it didn't give a good result. Thank you. To see the image, click here.
Looks like you need seamless cloning:
https://github.com/alyssaq/face_morpher
http://www.learnopencv.com/seamless-cloning-using-opencv-python-cpp/
There are many photos that are dark. These photos do not make sense to the viewer.
So I want to use OpenCV to identify them; how can I do that with OpenCV?
Any Python source example would be good :)
Perhaps you can transform the image to the HSV color space and use the V values for measuring the amount of light in the scene. Extract a histogram of the V values and compare it between light and dark images to see the differences.
Your question is a little unclear as to what you wish to do. Can you give an example of the scene? I wish to see how dark it is before coming to a clear conclusion.
But if you just want to determine whether an image is dark, as per your title, it's simple: draw a histogram. Dark images tend to have peaks on the left-hand side of the histogram, where the pixel intensities are mostly in the range 0 to 20, or perhaps up to 40-50; the right-hand side of the histogram should have almost nothing if the image really is as dark as you mentioned. Following that, apply a threshold to decide whether the image is dark or not.
If you want to make the image clearer to the human eye, you can do histogram normalization. But that depends on how bad the image really is.
Was that what you are looking for?
I was wondering whether anyone knows how to do the following in SimpleCV. I would like to color-correct a photo, so that if it's under- or over-exposed it is corrected. I believe cameras do this by taking an average of the colors and then adjusting them so the average becomes a 50% grey. This simple method should work well enough for my scenario.
If anyone has some example Python code to do this or something more complex it would be much appreciated.
Thanks
There is a function built into SimpleCV:
balanced_img = Image('myphoto.jpg').whiteBalance('GrayWorld') # 'Simple' or 'GrayWorld'
You can read about the white-balance methods from the links in the SimpleCV docs for whiteBalance.
This does what you described you wanted - it adjusts the average toward gray. The 'Simple' method basically stretches the color range of each channel to 0-255 after clipping some of the outliers.
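For reference, the 'GrayWorld' idea can be sketched in plain NumPy; this illustrates the principle (scale each channel so its mean lands on the global gray mean), not SimpleCV's exact implementation:

```python
import numpy as np

def gray_world(img):
    """Scale each channel so the image average becomes gray."""
    f = img.astype(np.float64)
    means = f.reshape(-1, 3).mean(axis=0)   # per-channel means
    scale = means.mean() / means            # pull each mean to the gray mean
    return np.clip(f * scale, 0, 255).astype(np.uint8)

tinted = np.full((4, 4, 3), (200, 100, 100), np.uint8)  # strong color cast
out = gray_world(tinted)
```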
You can also do color correction with functions like applyRGBCurve.