Add chroma noise to image - python

I'm training a deep neural network to improve the quality of images. The images contain some specific types of noise that I want to reduce/remove by means of a deep learning model. To do so, I'm using a huge dataset of similar clean high-res images with barely any noise, adding the specific types of noise to them, and training the network to regenerate the original image (a custom autoencoder network). With one of the several noise types this works very well so far. Without going too far into the details, adding that particular type of noise was easy.
Now I need to add another noise type to the images, more precisely: chroma noise like in the following image (the bottom right one): link
How do I artificially generate and add chroma noise to an image in Python? I can use the full range of image processing packages, PIL, numpy, OpenCV, torchvision...

You need to convert the image to a colorspace such as HSV or CIE Lab. You then add noise to the chromaticity channels (a and b in Lab, or H and S in HSV). Finally, convert back to RGB.
This colorspace conversion step is very common and most image toolkits should have that functionality.
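A minimal sketch using OpenCV and numpy, assuming an 8-bit BGR input; the Gaussian noise model and the sigma value are assumptions for illustration, not something prescribed above:

import cv2
import numpy as np

def add_chroma_noise(image_bgr, sigma=10.0):
    # Convert the 8-bit BGR image to CIE Lab.
    lab = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2LAB).astype(np.float32)
    # Add Gaussian noise only to the chromaticity channels (a and b),
    # leaving the lightness channel (L) untouched.
    lab[..., 1:] += np.random.normal(0.0, sigma, lab[..., 1:].shape)
    # Clip back to the valid 8-bit range and convert back to BGR.
    lab = np.clip(lab, 0, 255).astype(np.uint8)
    return cv2.cvtColor(lab, cv2.COLOR_LAB2BGR)

Tune sigma to match the strength of the chroma noise you see in your target images.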

Related

What is the right way to normalize satellite images to feed into a neural network?

I am trying to feed small patches of satellite image data (Landsat-8 Surface Reflectance bands) into neural networks for my project. However, the downloaded image values range from 1 to 65535.
So I tried dividing the images by 65535 (the max value), but plotting them shows an almost all black/brown image like this!
But most of the images do not have values near 65535.
Without any normalization the image looks all white.
Dividing the image by 30k looks like this.
If the images are too dark or too light, my network may not perform as intended.
My question: Is dividing the image by the max possible value (65535) the only solution, or are there other ways to normalize images, especially for satellite data?
Please help me with this.
To answer your question, though: there are other ways to normalize images. Standardization is the most common (subtract the mean and divide by the standard deviation).
Using numpy:

import numpy as np

# Standardize: zero mean, unit variance across the whole image.
image = (image - np.mean(image)) / np.std(image)
As I mentioned in a clarifying comment, you want the normalization method to match the one used on the NN's training set.
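For multi-band satellite data like Landsat-8, a common variant (an assumption here, not something the answer above prescribes) is to standardize each band separately, since reflectance statistics differ per band:

import numpy as np

# image: H x W x bands array of surface reflectance values.
# Standardize each band independently; the axis layout is an
# assumption, so adjust if your bands come first.
mean = image.mean(axis=(0, 1), keepdims=True)
std = image.std(axis=(0, 1), keepdims=True)
image = (image - mean) / std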

Converting a 12-bit grayscale image to 8-bit per channel color image in Python

I am looking for suggestions or best practices to follow in terms of converting a 12-bit (0-4095) grayscale image to a 3-channel 8-bit color image in Python in order to pass it into a convolutional neural network.
With 8-bit RGB, I essentially have 24-bit colour, so I see no reason to lose data in the conversion, although most other posts suggest simply dividing the pixel value by 16 in order to squeeze it into 8 bits and then replicating this over all three channels in a lossy way.
Some ideas I have come up with include creating some kind of gradient function that converts the 12-bit uint to a corresponding colour on the gradient, but my understanding of the RGB colour space is that this would be tricky to implement using Numpy or other such libraries.
Do any of the common libraries such as OpenCV or scikit-image offer this kind of functionality? I cannot find anything in the docs. Other ideas include using some kind of intermediary color space such as HSL or L*a*b*, but I don't really know enough about this.
Please note that I am ultimately trying to create an 8-bit RGB image, not a 16-bit RGB image. I am simply trying to colorize the grayscale image in a way that retains the original 12-bit data across the colour range.
Hope somebody can help!
My first question would be: Why do you need to convert it to color in any specific way?
If you are training the CNN on these images, any arbitrary transformation should work and give you similar performance. You just need to convert the training images and input images in the same manner.
You could probably just split the 12 bits and put the low 8 bits in R, the high 4 bits in G, and leave B with zeroes.
It kinda depends on how black-box this CNN is. But you can always test this by running a few training/testing cycles with the conversion I mentioned above and then do that again with the "compressed" versions.
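A minimal sketch of that bit-splitting idea with numpy; the function name and the exact bit layout are illustrative assumptions:

import numpy as np

def pack_12bit_to_rgb(img12):
    # Losslessly pack a 12-bit grayscale image (uint16, values 0-4095)
    # into an 8-bit, 3-channel image by splitting the bits.
    img12 = img12.astype(np.uint16)
    rgb = np.zeros((*img12.shape, 3), dtype=np.uint8)
    rgb[..., 0] = img12 & 0xFF          # low 8 bits -> R
    rgb[..., 1] = (img12 >> 8) & 0x0F   # high 4 bits -> G
    return rgb                          # B stays zero

Note that the result will look nothing like a natural colorization; the point is only that no information is lost and the CNN sees a consistent encoding.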

Data augmentation after image segmentation

Let's assume I have a small dataset. I want to implement data augmentation. First I implement image segmentation (after which the image is binary), and then implement data augmentation. Is this a good way?
For image augmentation in segmentation and instance segmentation, you have to either leave the positions of the objects in the image unchanged (by manipulating colors, for example), or modify those positions by applying translations and rotations.
So yes, this way works, but you have to take into consideration the type of data you have and what you are looking to achieve. Data augmentation isn't a ready-to-go process with good results everywhere.
In case you have:
Semantic segmentation: each pixel of your image, at row i and column j, is labeled with its enclosing object. This means having your main image I and a label image L of the same size, linking every pixel to its object label. In this case, your data augmentation is applied to both I and L, giving a consistent pair of transformed images (see the sketch after this list).
Instance segmentation: here we generate a mask for every instance of the original image, and the augmentation is applied to all of them including the original; from these transformed masks we then get our new instances.
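A minimal sketch of label-safe paired augmentation with numpy, assuming image and mask are arrays with the same height and width; the function names are illustrative:

import numpy as np

def rotate_pair(image, mask, k=1):
    # Rotate image and mask together by k * 90 degrees,
    # so labels stay aligned with their pixels.
    return np.rot90(image, k), np.rot90(mask, k)

def flip_pair(image, mask):
    # Horizontal flip applied consistently to both.
    return np.fliplr(image).copy(), np.fliplr(mask).copy()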
EDIT:
Take a look at CLoDSA (Classification, Localization, Detection and Segmentation Augmentor) it may help you implement your idea.
In case your dataset is small, you should add data augmentation during training. It is important to change the original image and the targets (masks) in the same way!
For example, if an image is rotated 90 degrees, then its mask should also be rotated 90 degrees. Since you are using the Keras library, you should check whether the ImageDataGenerator also changes the target images (masks) along with the inputs. If it doesn't, you can implement the augmentations yourself. This repository shows how it is done in OpenCV:
https://github.com/kochlisGit/random-data-augmentations

CV2 resizing with CNN

I am using cv2 to resize various images with different dimensions (e.g. 70*300, 800*500, 60*50) to a specific 200*200 pixel dimension. Later, I am feeding the pictures to a CNN algorithm to classify the images (my understanding is that pictures must have the same size when fed into a CNN).
My questions:
1- How are low picture resolutions converted into higher ones, and how are higher resolutions converted into lower ones? Will this affect the information stored in the pictures?
2- Is it good practice to use this approach with a CNN? Or is it better to pad zeros to the end of the image to get the desired resolution? I have seen many researchers pad the end of a file with zeros when trying to detect malware files, to get a common dimension for all the files. Does this mean that padding is more accurate than resizing?
Using interpolation: https://chadrick-kwag.net/cv2-resize-interpolation-methods/
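For reference, a minimal sketch of resizing with different interpolation flags in OpenCV; the flag choices are common conventions, not requirements:

import cv2

img = cv2.imread("input.png")  # hypothetical input path
# Downscaling: INTER_AREA generally preserves the most information.
small = cv2.resize(img, (200, 200), interpolation=cv2.INTER_AREA)
# Upscaling: INTER_LINEAR (the default) or INTER_CUBIC are typical.
big = cv2.resize(img, (200, 200), interpolation=cv2.INTER_CUBIC)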
Definitely, resizing is a lossy process and you'll lose information.
Both are okay and are used depending on the needs. Resizing is equally applicable. If your CNN can't generalize between the original and resized images, it is probably badly overfitted. Resizing also acts as a very light regularization; it's even advisable to apply more augmentation schemes to the images before CNN training.
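If you want to try the padding route instead, a minimal sketch with OpenCV, assuming the input image is already no larger than the 200*200 target:

import cv2

h, w = img.shape[:2]
top = (200 - h) // 2
bottom = 200 - h - top
left = (200 - w) // 2
right = 200 - w - left
# Zero-pad onto a 200x200 canvas; preserves aspect ratio and pixel values.
padded = cv2.copyMakeBorder(img, top, bottom, left, right,
                            cv2.BORDER_CONSTANT, value=0)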

Getting started with denoising elements of a 200x200 numpy array

I have a 200x200 numpy array with a shape in it, which I can see when I graph it using matplotlib's imshow() function. However, there is also a lot of noise in that picture: the shape is visible to me, but extra noise was added on top of the image using the np.random.randint() function, and I want to reduce that noise. I am trying to use OpenCV to emphasize the shape and denoise the image, but it keeps throwing error messages that I don't understand. What should I do to get started on the denoising problem?
Here are some image denoising techniques available in OpenCV.
Blurring out the noise
The most basic is applying a blur to average out the random noise. This will have the negative effect that the edges in the image will not be as sharp as they were originally. Depending on your application, this might be fine. Depending on the amount of noise, you can change the size of the filter kernel k: a larger value will produce a blurrier image with less noise.
import cv2 as cv

k = 5  # kernel size: larger k -> blurrier image, less noise
filtered_image = cv.blur(img, (k, k))
Advanced denoising
Alternatively, you can use more advanced techniques such as Non-local Means Denoising. This applies averaging across similar patches in the image. This technique has a few more parameters to tune to your specific application which you can read about here. (There are different versions of this function for greyscale and colour images, as well as for image sequences).
luminosity_filter_strength = 10
colour_filter_strength = 10
template_window_size = 7
search_window_size = 21
# The second argument (None) is the optional dst output array.
filtered_image = cv.fastNlMeansDenoisingColored(img, None,
                                                luminosity_filter_strength,
                                                colour_filter_strength,
                                                template_window_size,
                                                search_window_size)
I solved the problem using scikit-image. It has a very accessible documentation page for newcomers, and the error messages are a lot easier to understand. For my problem I had to use scikit-image's restoration module, which has a lot of denoising functions much like OpenCV, but the examples and the easy-to-understand error messages really helped. Playing around with bilateral filters and Non-local Means Denoising solved the problem for me.
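For completeness, a minimal sketch of the scikit-image route described above; img is assumed to be the noisy 200x200 array, and the calls simply use the library's default parameters:

from skimage import img_as_float
from skimage.restoration import denoise_bilateral, denoise_nl_means

img_f = img_as_float(img)  # scale to floats in [0, 1]
# Bilateral filter: edge-preserving smoothing.
denoised_bilateral = denoise_bilateral(img_f)
# Non-local means: averages similar patches across the image.
denoised_nlm = denoise_nl_means(img_f)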
