Image filtering with scikit-image? - python

I'm moving to python from a Matlab background, and there are a few elementary operations I've yet to conquer in Python/skimage:
How can I apply a user-generated linear filter (given as a small 2d array) to an image? I can do it with scipy.ndimage.convolve, but is there a method in skimage?
In Matlab, image filtering always returns a result of the same numeric type as its input, be it uint8 or float. Does skimage behave the same way?
Does skimage include unsharp masking somewhere? (I've found an unsharp masking filter in PIL but that's a bit of a pain, as PIL uses its own Image class, rather than ndarrays).
Is there a method, maybe similar to Matlab's "colfilt" by which a user can apply a non-linear filter to an image? The idea is that the user supplies a function which produces a single number from a 3x3 array, say; then that function is applied across the image as a spatial filter.

How can I apply a user-generated linear filter (given as a small 2d array) to an image? I can do it with scipy.ndimage.convolve, but is there a method in skimage?
The goal of scikit-image (and the scikits, in general) is to extend the functionality of scipy. (Smaller, more focused projects tend to evolve more rapidly than larger ones.) It tries not to duplicate any functionality, and only does so if it can improve upon that functionality.
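In other words, scipy.ndimage.convolve is the right tool for a user-supplied kernel. A minimal sketch (the 3x3 averaging kernel here is just an illustration):
import numpy as np
from scipy import ndimage
from skimage import data, img_as_float

image = img_as_float(data.camera())
kernel = np.ones((3, 3)) / 9.0  # illustrative 3x3 mean filter
filtered = ndimage.convolve(image, kernel, mode='reflect')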
In Matlab, image filtering always returns a result of the same numeric type as its input, be it uint8 or float. Does skimage behave the same way?
No, there is no such guarantee. Sometimes it's just more efficient to convert to a single type. (Sometimes, it's just a lack of time/man-power.) Here's some documentation on the matter:
http://scikit-image.org/docs/0.9.x/user_guide/data_types.html#output-types
There are convenience functions (e.g. img_as_float, img_as_ubyte) for converting images if you need a certain type (and they check whether the input is already the desired type, so you don't waste time on unnecessary conversion).
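For example (a minimal sketch):
from skimage import data, img_as_float, img_as_ubyte

image = data.camera()           # uint8 input
as_float = img_as_float(image)  # float64 in [0, 1]
back = img_as_ubyte(as_float)   # back to uint8 in [0, 255]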
Does skimage include unsharp masking somewhere? (I've found an unsharp masking filter in PIL but that's a bit of a pain, as PIL uses its own Image class, rather than ndarrays).
Not that I know of, but you could roll your own. Something like the following would work:
import matplotlib.pyplot as plt
from skimage import data
from skimage import filters  # named `filter` in very old skimage versions
from skimage import img_as_float

unsharp_strength = 0.8
blur_size = 8  # standard deviation of the Gaussian blur, in pixels

# Convert to float so that negative values don't cause problems.
image = img_as_float(data.camera())
blurred = filters.gaussian(image, sigma=blur_size)

# Subtract a fraction of the blurred (low-pass) image to get a
# high-pass component, then add it back to sharpen.
highpass = image - unsharp_strength * blurred
sharp = image + highpass

fig, axes = plt.subplots(ncols=2)
axes[0].imshow(image, vmin=0, vmax=1, cmap='gray')
axes[1].imshow(sharp, vmin=0, vmax=1, cmap='gray')
plt.show()
There are, however, many ways to implement unsharp masking.
Is there a method, maybe similar to Matlab's "colfilt" by which a user can apply a non-linear filter to an image? The idea is that the user supplies a function which produces a single number from a 3x3 array, say; then that function is applied across the image as a spatial filter.
Not in scikit-image, but there's generic filtering capability in scipy.ndimage:
https://docs.scipy.org/doc/scipy-0.19.0/reference/generated/scipy.ndimage.generic_filter.html
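A minimal sketch, with an illustrative local-range function applied over each 3x3 neighbourhood:
import numpy as np
from scipy import ndimage
from skimage import data, img_as_float

def local_range(window):
    # generic_filter passes each neighbourhood as a flattened 1-D array.
    return window.max() - window.min()

image = img_as_float(data.camera())
filtered = ndimage.generic_filter(image, local_range, size=3)
Note that generic_filter invokes the Python function once per pixel, so it is flexible but can be slow on large images.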

Related

Getting started with denoising elements of a 200x200 numpy array

I have a 200x200 numpy array containing a shape that I can see when I plot it with matplotlib's imshow() function. However, there is also a lot of noise in the picture: the shape is still visible to me, but extra noise was added on top of the image using the np.random.randint() function, and I want to reduce that noise. I am trying to use OpenCV to emphasize the shape and denoise the image, but it keeps throwing error messages that I don't understand. How should I get started on the denoising problem?
OpenCV's documentation has tutorials on the image denoising techniques it provides; two common approaches are summarized below.
Blurring out the noise
The most basic approach is to apply a blur to average out the random noise. This has the negative effect that edges in the image will not be as sharp as before; depending on your application, that might be fine. Depending on the amount of noise, you can change the size of the filter k: a larger value produces a blurrier image with less noise.
import cv2 as cv

k = 5  # kernel size; increase for stronger smoothing
filtered_image = cv.blur(img, (k, k))  # img: the noisy image from the question
Advanced denoising
Alternatively, you can use more advanced techniques such as Non-local Means Denoising. This averages across similar patches in the image. This technique has a few more parameters to tune to your specific application, which you can read about in the OpenCV documentation. (There are different versions of this function for greyscale and colour images, as well as for image sequences.)
luminosity_filter_strength = 10
colour_filter_strength = 10
template_window_size = 7
search_window_size = 21
# The second argument is the optional output array (None here).
filtered_image = cv.fastNlMeansDenoisingColored(img, None,
                                                luminosity_filter_strength,
                                                colour_filter_strength,
                                                template_window_size,
                                                search_window_size)
I solved the problem using scikit-image. It has a very accessible documentation page for newcomers, and the error messages are a lot easier to understand. For my problem I used scikit-image's restoration module, which has a lot of denoising functions much like OpenCV, but the examples and the easy-to-understand error messages really helped. Playing around with bilateral filters and non-local means denoising solved the problem for me.
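For reference, a minimal sketch of the kind of scikit-image calls involved (the input array and parameter values here are illustrative stand-ins, not the asker's actual data or settings):
import numpy as np
from skimage.restoration import denoise_bilateral, denoise_nl_means

# Illustrative stand-in for the noisy 200x200 array from the question.
noisy = np.random.rand(200, 200)

# Edge-preserving bilateral filtering.
bilateral = denoise_bilateral(noisy, sigma_color=0.1, sigma_spatial=3)

# Non-local means denoising.
nlm = denoise_nl_means(noisy, patch_size=7, patch_distance=11, h=0.1)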

Resize CSV data using Python and Keras

I have CSV files that I need to feed to a Deep-Learning network. Currently my CSV files are of size 360*480, but the network restricts them to be of size 224*224. I am using Python and Keras for the deep-learning part. So how can I resize the matrices?
I was thinking that, since the aspect ratio is 3:4, I could resize them to 224:(224*4/3) = 224:299 and then crop the width of the matrix to 224, which could serve the purpose. But I cannot find a suitable function to do that. Please suggest.
I think you're looking for cv.resize() if you're using images.
If not, try numpy.ndarray.resize().
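For the image route, a minimal sketch with OpenCV (the input array here is an illustrative stand-in for the CSV data):
import cv2 as cv
import numpy as np

# Illustrative stand-in for the 360x480 matrix loaded from the CSV.
matrix = np.random.rand(360, 480).astype(np.float32)

# cv.resize takes the target size as (width, height);
# INTER_AREA is a reasonable choice when shrinking.
resized = cv.resize(matrix, (224, 224), interpolation=cv.INTER_AREA)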
Image processing
If you want to do nontrivial alterations to the data as images (i.e. interpolating between pixel values, assuming that they represent photographs), then you might want to use proper image-processing libraries. You'd need to treat them not as raw matrices (CSVs of numbers) but convert them to RGB images, apply the transformations you want, and convert them back to a numpy matrix.
OpenCV (https://docs.opencv.org/3.4/da/d6e/tutorial_py_geometric_transformations.html)
or Pillow (https://pillow.readthedocs.io/en/3.1.x/reference/Image.html) might be useful to do that.
I found a short and simple way to solve this, using the Python Imaging Library (PIL)/Pillow.
import csv
import numpy as np
from PIL import Image

# Read the CSV into a uint8 matrix.
matrix = np.array(list(csv.reader(open('./path/mat.csv', "r"), delimiter=","))).astype("uint8")

imgObj = Image.fromarray(matrix)            # convert matrix to Image object
resized_imgObj = imgObj.resize((224, 224))  # resize the Image object

imgObj.show()
resized_imgObj.show()

resized_matrix = np.asarray(resized_imgObj) # convert Image object back to matrix
The numpy module also has a resize function, but it is not as useful here. When I tried it, the resized matrix had lost all the detail and visual structure of the original. This is because numpy.ndarray.resize doesn't interpolate: missing entries are simply filled with zeros. So for this case Image.resize() is more useful.
You could also convert the csv file to a list, truncate the list, and then convert it to a numpy array and use np.reshape, as in the sketch below.
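A minimal sketch of that suggestion (using the path from the earlier answer; note the caveat in the comment):
import csv
import numpy as np

# Unlike a real resize, this simply keeps the first 224*224 values in
# row-major order, so output rows no longer line up with rows of the
# 480-wide input; it discards data rather than interpolating.
with open('./path/mat.csv') as f:
    values = [float(v) for row in csv.reader(f) for v in row]
matrix = np.reshape(values[:224 * 224], (224, 224))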

How can I blur or pixelate images in Python using matrices?

I already have a function that converts an image to a matrix, and back. But I was wondering how to manipulate the matrix so that the picture becomes blurry or pixelated?
I suggest using scipy.
To blur, use gaussian_filter from scipy.ndimage (see its documentation).
Please note that blurring may require additional normalization, because the maximum values typically go down (they are smeared out).
To pixelate, use downsampling, for example decimate from scipy.signal (see its documentation).
In the case of color images, I suggest applying the blurring or pixelation to each color channel separately.
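A minimal sketch of both operations (sigma and the block factor are illustrative; plain slicing stands in for scipy.signal.decimate, which is designed for 1-D signals):
import numpy as np
from scipy import ndimage

# Illustrative 2-D grayscale array (apply per channel for colour images).
img = np.random.rand(256, 256)

# Blur: Gaussian low-pass filtering.
blurred = ndimage.gaussian_filter(img, sigma=3)

# Pixelate: downsample, then upsample with nearest-neighbour (order=0)
# so each low-resolution sample becomes a visible block.
factor = 8  # assumes the image dimensions are divisible by factor
small = img[::factor, ::factor]            # simple decimation
pixelated = ndimage.zoom(small, factor, order=0)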
To make it blurry, filter it using any low-pass filter (mean filter, Gaussian filter, etc.).

How do I find and remove white specks from an image using SciPy/NumPy?

I have a series of images which serve as my raw data which I am trying to prepare for publication. These images have a series of white specks randomly throughout which I would like to replace with the average of some surrounding pixels.
I cannot post images, but the following code should produce a PNG that approximates the issue that I'm trying to correct:
import numpy as np
import imageio  # scipy.misc.imsave was removed from SciPy; imageio.imwrite is a common replacement

# Dim ~99.9% of the pixels, leaving the rest near full brightness as "specks".
random_array = np.random.random_sample((512, 512))
random_array[random_array < 0.999] *= 0.25
imageio.imwrite('white_specs.png', (random_array * 255).astype(np.uint8))
While this should produce an image with a similar distribution of the specks present in my raw data, my images do not have specks uniform in intensity, and some of the specks are more than a single pixel in size (though none of them are more than 2). Additionally, there are spots on my image that I do not want to alter that were intentionally saturated during data acquisition for the purpose of clarity when presented: these spots are approximately 10 pixels in diameter.
In principle, I could write something to look for pixels whose value exceeds a certain threshold then check them against the average of their nearest neighbors. However, I assume what I'm ultimately trying to achieve is not an uncommon action in image processing, and I very much suspect that there is some SciPy functionality that will do this without having to reinvent the wheel. My issue is that I am not familiar enough with the formal aspects/vocabulary of image processing to really know what I should be looking for. Can someone point me in the right direction?
You could simply try a median filter with a small kernel size,
from scipy.ndimage import median_filter

# A 3x3 median window is large enough to swallow 1-2 pixel specks, yet
# small enough to leave the ~10 pixel saturated spots essentially intact.
filtered_array = median_filter(random_array, size=3)
which will remove the specks without noticeably changing the original image.
A median filter is well suited to such tasks, since it preserves high-spatial-frequency features of the original image better than, for instance, a simple moving-average filter.
By the way, if your images are experimental (i.e. noisy), applying a non-aggressive median filter such as the one above never hurts, as it attenuates the noise as well.

What's the most efficient way to compute the mode in a sliding window over a 2D array in Python?

I have an RGBA image that I need to upscale while keeping it smooth.
The catch is that I need to keep the colors exactly the way they are (background: I'm resizing a map where provinces are color-coded), and so I cannot just perform a resize with bicubic interpolation, because that will also interpolate the pixel colors while smoothing.
Thus, in order to get smooth edges I was hoping to upscale using nearest neighbor (giving me staircase patterns) and then round out the edges by replacing each pixel in the target image with the pixel color that occurs most often within a certain radius, a la so:
from PIL import Image, ImageFilter
amount=3
image=Image.open(<file>)
image=image.filter(ImageFilter.ModeFilter(amount))
This finishes fairly quickly, except that it doesn't work, as PIL's ImageFilters operate separately on each channel. shakes fist
I tried resorting to numpy arrays and doing the following in a loop:
dest[x, y] = Counter(
    [tuple(e) for e in reshape(source[max(x-r, 0):x+r+1, max(y-r, 0):y+r+1], (-1, 4))]
).most_common()[0][0]
Note that dest and source here are the same shape XxYx4 arrays, hence the necessary reshaping and converting into tuples.
In theory this would work, but would take 12 hours to finish for the 82 million pixel image I am operating on. I am inferring that this is mostly due to unnecessary overhead with casting and reshaping.
What would be the appropriate way to do this in Python?
I am about ready to throw up my hands and write a C++ module to do this task.
Anything to steer me away from this path would be much appreciated!
If you care about a fixed set of colors in your image, the "Palette" image mode would perhaps be more appropriate (at least, if you don't have more than 256 colors in your map).
I would suggest first converting your image to "P" mode (since I'm not really familiar with PIL, I'm not sure how easy that is; perhaps you'll have to explicitly construct the palette first?) and then applying the mode filter.
Another solution which comes into my mind is to simply use bicubic interpolation when upsizing and then converting to a palette image using a palette derived from the original image. That might yield better results (and be easier to implement) than your current approach.
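A minimal, untested sketch of the palette idea (the file name is hypothetical; the ADAPTIVE palette assumes at most 256 distinct colors, and the RGB conversion drops alpha, which would need separate handling):
from PIL import Image, ImageFilter

image = Image.open('map.png')  # hypothetical file name

# Upscale with nearest neighbour, keeping the exact original colors.
big = image.resize((image.width * 4, image.height * 4), Image.NEAREST)

# Quantize to a palette ("P") image. A "P" image has a single band of
# palette indices, so ModeFilter then picks the most common *color* in
# each window instead of filtering the channels independently.
paletted = big.convert('RGB').convert('P', palette=Image.ADAPTIVE, colors=256)
smoothed = paletted.filter(ImageFilter.ModeFilter(3))
result = smoothed.convert('RGB')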
EPX, described in the Wikipedia article on image scaling, introduces no new colors. For the 2x scale, P is the current pixel, A/B/C/D are its top/right/left/bottom neighbours, and 1-4 are the four output pixels:
  A      --\   1 2
C P B    --/   3 4
  D
IF C==A => 1=A
IF A==B => 2=B
...
and scale 3x is described there too.
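A rough vectorized NumPy sketch of the 2x EPX rules above (it includes the common "three or more identical neighbours" guard from the Wikipedia description; treat it as illustrative rather than a tuned implementation):
import numpy as np

def epx_2x(img):
    """2x EPX upscaling of an (H, W, C) array; introduces no new colors."""
    h, w = img.shape[:2]
    p = np.pad(img, ((1, 1), (1, 1), (0, 0)), mode='edge')
    P = p[1:-1, 1:-1]  # centre pixel
    A = p[:-2, 1:-1]   # above
    B = p[1:-1, 2:]    # right
    C = p[1:-1, :-2]   # left
    D = p[2:, 1:-1]    # below

    def eq(x, y):
        return (x == y).all(axis=-1)

    # Guard: if three or more of A, B, C, D are identical, keep P.
    pairs = (eq(A, B).astype(int) + eq(A, C) + eq(A, D)
             + eq(B, C) + eq(B, D) + eq(C, D))
    ok = pairs < 3  # fewer than three equal pairs means no triple

    out = np.empty((2 * h, 2 * w) + img.shape[2:], dtype=img.dtype)
    out[0::2, 0::2] = P  # output 1 (top-left)
    out[0::2, 1::2] = P  # output 2 (top-right)
    out[1::2, 0::2] = P  # output 3 (bottom-left)
    out[1::2, 1::2] = P  # output 4 (bottom-right)

    # 1=A if C==A; 2=B if A==B; 3=C if D==C; 4=D if B==D.
    for (r, c), (n1, n2) in {(0, 0): (C, A), (0, 1): (A, B),
                             (1, 0): (D, C), (1, 1): (B, D)}.items():
        m = eq(n1, n2) & ok
        out[r::2, c::2][m] = n2[m]
    return out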
Apart from that, I'd agree with "go straight to C" -- depends on what you know.
Has anyone used np_inline?