I am currently working on a raw image and I would like to apply very little processing. I am trying to understand what the no_auto_scale parameter of rawpy.Params (used by rawpy.postprocess) does; I don't understand what disabling pixel value scaling means. Could anyone help me, please?
My ultimate goal is to load the Bayer matrix with the colors scaled to balance out the sensitivity of each color sensor. So every pixel in the final image will correspond to a different color depending on where it is in the Bayer pattern but they will all be on a similar scale.
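For context, a minimal sketch of that goal might look like the following (the file name is a placeholder; raw_image_visible, raw_colors_visible, black_level_per_channel and camera_whitebalance are rawpy attributes that expose the Bayer mosaic and the per-channel metadata):

import numpy as np
import rawpy

with rawpy.imread("photo.dng") as raw:                     # placeholder file name
    bayer = raw.raw_image_visible.astype(np.float64)       # the raw Bayer mosaic
    colors = raw.raw_colors_visible                        # per-pixel color index (0..3)
    black = np.array(raw.black_level_per_channel, dtype=np.float64)
    wb = np.array(raw.camera_whitebalance, dtype=np.float64)
    wb[wb == 0] = wb[1]                                    # some cameras report 0 for the second green
    wb = wb / wb[1]                                        # normalize so green = 1
    # subtract the per-channel black level, then scale each pixel by its channel's
    # white-balance multiplier so all colors end up on a similar scale
    balanced = (bayer - black[colors]) * wb[colors]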
I conducted an experiment to see radioactivity using a camera sensor, so I captured long-exposure (greyscale) images in a dark environment and saw that there is a pixel pattern (bright pixels) repeating across my image data set (I think they are called hot pixels). I need to identify these pixels and ignore them when calculating the number of radiation interactions observed in each image. A radiation interaction would also appear as a bright pixel or a collection of a couple of pixels.
sample image
I am using Python to analyze the image data set. I am a newbie to Python programming and do not have much knowledge about handling problems at this level or about the libraries/functions to be used in an analysis like this; I thought it would be a good project to learn a bit of image analysis.
I don't know how to code this problem, so I definitely need help with that. However, I thought of an algorithm that could possibly help me achieve my goal:
- Since hot pixels and radiation interactions appear as white pixels/spots on a black/dark background, I would apply a suitable pixel-value threshold to the image, setting the background pixel values to 0 (min) and the bright pixels to 255 (max).
- Then I would check each pixel value in all 100 images and identify the pixel positions that have the same value in every image (e.g., if the pixel value at position (1,1) is 255 in all 100 images, I would note that position as a hot pixel).
- Next, I would set the pixel value at those positions to 0, so I am left with bright pixels from radiation events only.
- Sometimes a radiation event covers more than one pixel (but the pixels will be next to each other), so I need a method to count them as one event.
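In outline, that algorithm might look something like the sketch below (the file pattern and the threshold value are assumptions; scipy.ndimage.label is used to group adjacent bright pixels into single events):

import glob
import numpy as np
from PIL import Image
from scipy import ndimage

# load the 100 greyscale frames into one (num_images, height, width) array
# "frames/*.tif" is a placeholder path pattern
stack = np.stack([np.array(Image.open(p).convert("L"))
                  for p in sorted(glob.glob("frames/*.tif"))])

THRESHOLD = 128                          # assumed cutoff between dark background and bright pixels
bright = stack > THRESHOLD               # binarize every frame
hot_pixels = bright.all(axis=0)          # a hot pixel is bright in every single frame

events_per_image = []
for frame in bright:
    candidates = frame & ~hot_pixels                  # drop the hot pixels
    labels, num_events = ndimage.label(candidates)    # adjacent bright pixels = one event
    events_per_image.append(num_events)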
I would truly appreciate it if you could help me resolve this problem in a much more efficient manner.
I am doing some image processing in Python 3.5.2. After some work I have segmented an image using Support Vector Machines (used as a pixel-wise classification task). As expected after training, when I try to predict a new image I get some pixels mislabeled. I only have two classes for the segmentation, so the result works as a mask with 1 in the desired region and 0 elsewhere.
An example predicted mask looks like this:
EDIT:
Here is the link for this image (saved using cv2.imwrite()):
https://i.ibb.co/74nxLvZ/img.jpg
As you can see, there is a big region with some holes in it, which means they are False Negative (FN) pixel predictions. Also, there are some False Positive (FP) pixels outside that big region.
I want to be able to get a mask for that big region alone, and filled. Therefore I've been thinking about using some clustering method like DBSCAN or K-means to create clusters from these data points, hopefully getting a cluster for the big region. Do you have any suggestions on the matter?
Now, assume I have those clusters. How can I fill the holes in the big region? I would want to create some sort of figure/polygon/ROI around that big region and then get all the pixels inside. Can anyone shed some light on how to achieve this?
Somehow I would want something like this:
Hope I made myself clear. If I wasn't, let me know in the comments. Hope someone can help me figure this out.
Thanks in advance
You can in fact use DBSCAN to cluster the data points, especially when you don't know the number of clusters you are trying to get.
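A rough sketch of that step with scikit-learn, assuming the predicted mask is the binary image you linked and that the region you want is the largest cluster (eps and min_samples are only starting values to tune):

import cv2
import numpy as np
from sklearn.cluster import DBSCAN

# load the predicted mask (the image linked above) and binarize it
mask = cv2.imread("img.jpg", cv2.IMREAD_GRAYSCALE)
mask = (mask > 127).astype(np.uint8)

points = np.column_stack(np.nonzero(mask))             # (row, col) of every foreground pixel

db = DBSCAN(eps=3, min_samples=10).fit(points)
labels = db.labels_                                     # -1 is DBSCAN's noise label

# keep only the largest cluster, assumed to be the big region
largest = max(set(labels) - {-1}, key=lambda l: np.sum(labels == l))
clean = np.zeros_like(mask)
clean[tuple(points[labels == largest].T)] = 255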
Then, you can get the contour of the region you want to fill. In this case the big white region with holes.
# im_gray: the binary image you have (with OpenCV 4.x, findContours returns (contours, hierarchy))
cnt, _ = cv2.findContours(im_gray, mode=cv2.RETR_EXTERNAL, method=cv2.CHAIN_APPROX_NONE)
You can loop through cnt to select the correct contour. Then, assuming you "know" the contour you want, you can use the function cv2.approxPolyDP() from OpenCV
Taken from the OpenCV tutorial: "It approximates a contour shape to another shape with less number of vertices depending upon the precision we specify. It is an implementation of Douglas-Peucker algorithm."
epsilon = 0.001
# maxPoly is the contour you selected from cnt above
approxPoly = cv2.approxPolyDP(np.array(maxPoly), epsilon, closed=True)
epsilon is an accuracy parameter: it is the maximum distance from the contour to the approximated contour. As suggested in the documentation (link above), you can use epsilon = 0.1*cv2.arcLength(cnt, True). In this case I used the value 0.001.
Once you have this, you can just draw it:
# draw the approximated contour, filled, onto an empty mask
poligon_mask = np.zeros(im_gray.shape, dtype=np.uint8)
poligon_mask = cv2.drawContours(poligon_mask, [approxPoly], -1, 255, cv2.FILLED)
Hope this helps.
I know the basic flow or process of image registration/alignment, but what happens at the pixel level when two images are registered/aligned? The pixels of the moving image that match the fixed image after the transformation are kept intact, but what happens to the pixels that are not matched: are they averaged, or something else?
And how is the correct transformation estimated, i.e. how will I know whether to apply translation, scaling, rotation, etc., and by how much (what angle of rotation, what translation values, and so on)?
Also, in the initial step, how are the similar pixel values identified and matched?
I've implemented the python code given in https://simpleitk.readthedocs.io/en/master/Examples/ImageRegistrationMethod1/Documentation.html
Input images are of prostate MRI scans:
Fixed Image / Moving Image / Output Image / Console output
The difference can be seen in the output image on the top right and top left, but I can't interpret the console output or how things actually work internally.
It'll be very helpful if I get a deep explanation of this thing. Thank you.
A transformation is applied to all pixels. You might be confusing rigid transformations, which will only translate, rotate and scale your moving image to match the fixed image, with elastic transformations, which will also allow some morphing of the moving image.
Any pixel that the transformation cannot place exactly in the fixed image is interpolated from the pixels that it is able to place, though a registration is not really intelligent.
What it attempts to do is simply minimize a cost function, where a high cost corresponds to a large difference between the images and a low cost to a small difference. Cost functions can be intensity based (pixel values) or feature based (shapes). It will (semi-)randomly shift the image around until a preset stopping criterion is met, generally a maximum number of iterations.
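To make that concrete in SimpleITK terms (the library used in the linked example), a stripped-down registration looks roughly like the sketch below; the file names are placeholders, the metric is the cost function, and the optimizer is what moves the transform parameters at each iteration:

import SimpleITK as sitk

fixed = sitk.ReadImage("fixed_image.nii", sitk.sitkFloat32)     # placeholder file names
moving = sitk.ReadImage("moving_image.nii", sitk.sitkFloat32)

reg = sitk.ImageRegistrationMethod()
reg.SetMetricAsMeanSquares()                                    # intensity-based cost function
reg.SetOptimizerAsRegularStepGradientDescent(
    learningRate=2.0, minStep=1e-4, numberOfIterations=200)     # stopping criteria
reg.SetInitialTransform(sitk.TranslationTransform(fixed.GetDimension()))
reg.SetInterpolator(sitk.sitkLinear)

transform = reg.Execute(fixed, moving)                          # iterative optimization

# apply the estimated transform; off-grid positions are filled by interpolation
resampled = sitk.Resample(moving, fixed, transform,
                          sitk.sitkLinear, 0.0, moving.GetPixelID())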
What that might look like can be seen in the following gif:
http://insightsoftwareconsortium.github.io/SimpleITK-Notebooks/registration_visualization.gif
How to go from the image on the left to the image on the right programmatically using Python (and maybe some tools, like OpenCV)?
I made this one by hand using an online clipping tool. I am a complete noob in image processing (especially in practice). I was thinking of applying some edge or contour detection to create a mask, which I would later apply to the original image to paint everything else (except the region of interest) black. But I failed miserably.
The goal is to preprocess a dataset of very similar images, in order to train a CNN binary classifier. I tried to train it by just cropping the image close to the region of interest, but the noise is so high that the CNN learned absolutely nothing.
Can someone help me do this preprocessing?
I used OpenCV's implementation of the watershed algorithm to solve your problem. You can find out how to use it by reading this great tutorial, so I will not explain it in much detail.
I selected four points (markers). One is located inside the region that you want to extract, one is outside, and the other two are within the lower/upper parts of the interior that do not interest you. I then created an empty integer array (the so-called marker image) and filled it with zeros. Then I assigned unique values to the pixels at the marker positions.
The image below shows the marker positions and marker values, drawn on the original image:
I could also select more markers within the same area (for example several markers that belong to the area you want to extract) but in that case they should all have the same values (in this case 255).
Then I used watershed. The first input is the image that you provided and the second input is the marker image (zero everywhere except at marker positions). The algorithm stores the result in the marker image; the region that interests you is marked with the value of the region marker (in this case 255):
I set all pixels that did not have the value 255 to zero. I dilated the obtained image three times with a 3x3 kernel. Then I used the dilated image as a mask for the original image (I set all pixels outside the mask to zero), and this is the result I got:
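Put into code, the steps above look roughly like this sketch (the input file name and the marker coordinates are placeholders standing in for the hand-picked positions shown in the images):

import cv2
import numpy as np

img = cv2.imread("input.jpg")                        # placeholder file name, BGR image

# marker image: 0 = let watershed decide, non-zero values = seed labels
markers = np.zeros(img.shape[:2], dtype=np.int32)
markers[250, 300] = 255    # inside the region to extract (placeholder coordinates)
markers[20, 20] = 1        # outside the object
markers[150, 300] = 2      # upper interior part to exclude
markers[350, 300] = 3      # lower interior part to exclude

markers = cv2.watershed(img, markers)                # labels are written into `markers`

mask = np.where(markers == 255, 255, 0).astype(np.uint8)
mask = cv2.dilate(mask, np.ones((3, 3), np.uint8), iterations=3)
result = cv2.bitwise_and(img, img, mask=mask)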
You will probably need some kind of method that finds the markers automatically. The difficulty of this task depends heavily on the set of input images. In some cases the method can be really straightforward and simple (as in the tutorial linked above), but sometimes it can be a tough nut to crack. I can't recommend anything specific because I don't know what your images look like in general (you only provided one). :)
I am automatically creating JPG pictures from multispectral data. The created pictures are very dark, so I thought the best idea would be to change the brightness (like ImageEnhance in PIL). But there was a problem: some pictures need more brightness than others.
So the next idea was to try linear stretching of the histogram. I created a script that iterates over the RGB tuples and computes a new intensity for each pixel. It made very little difference, probably because the range of values was already 0-255. Then I tried histogram equalization (ImageOps) for R, G and B separately, but the result was not good, please see the middle part of the picture. I found on the internet that this is not a good approach because the colors can change dramatically, which is probably my case.
The best idea seems to be to convert the RGB array to HSL and then change the luminance, but I can't use one constant to maximize the luminance because the pictures are different and each needs a different constant. Should I use histogram equalization on the luminance, or what is the best approach to stretch (or, probably better, histogram-equalize) my picture?
I am looking for something like Image/Auto adjust colors in IrfanView, or what some software calls Linear Normalization...
I hope the picture helps you understand my problem. I probably chose a bad way to achieve my goal.
Thank you for any answer; I will be very glad.
EDIT
Left image for download
I can upload the next images later today.
I would suggest proceeding with the same approach you have stated, with a slight modification (a short sketch follows the steps below).
Convert the RGB image to LAB image.
Apply localized histogram equalization to the L-channel.
Merge it back with the other channels.
Convert it back to RGB image.
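A minimal OpenCV sketch of those four steps (CLAHE stands in for the localized histogram equalization; the file name, clipLimit and tileGridSize values are assumptions you would adjust):

import cv2

img = cv2.imread("dark_picture.jpg")                  # placeholder file name, loaded as BGR

lab = cv2.cvtColor(img, cv2.COLOR_BGR2LAB)            # 1. RGB (BGR) -> LAB
l, a, b = cv2.split(lab)

clahe = cv2.createCLAHE(clipLimit=3.0, tileGridSize=(8, 8))
l = clahe.apply(l)                                    # 2. localized equalization on the L-channel

lab = cv2.merge((l, a, b))                            # 3. merge the channels back
result = cv2.cvtColor(lab, cv2.COLOR_LAB2BGR)         # 4. LAB -> RGB (BGR)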
You can check my answer for this in a different question here:
The code I have there is written for OpenCV using Python. You can modify it for C if you wish.
Let me know if it has helped you!!
I am not sure if this applies, and I have not applied this myself, but I was reading this article about underwater contrast stretching:
http://www.iaeng.org/IJCS/issues_v34/issue_2/IJCS_34_2_12.pdf
What it suggests might help:
"In order to address the issues discussed above, we propose
an approach based on slide stretching. Firstly, we use contrast
stretching of RGB algorithm to equalize the colour contrast in
the images. Secondly, we apply the saturation and intensity
stretching of HSI to increase the true colour and solve the
problem of lighting"
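For reference, the per-channel contrast-stretching part of that idea can be sketched in a few lines of NumPy (the percentile limits are an assumption; the paper applies this to the RGB image first and then stretches saturation/intensity in HSI):

import numpy as np

def stretch_channel(channel, low_pct=1, high_pct=99):
    # map the chosen percentile range linearly onto the full 0-255 range
    lo, hi = np.percentile(channel, (low_pct, high_pct))
    scaled = (channel.astype(np.float64) - lo) * 255.0 / max(hi - lo, 1e-6)
    return np.clip(scaled, 0, 255).astype(np.uint8)

# rgb: assumed (height, width, 3) uint8 image; stretch each channel independently
stretched = np.dstack([stretch_channel(rgb[..., c]) for c in range(3)])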