I have a 3D numpy array that contains my ROI, obtained by performing a logical AND between the CT image and the mask.
[Plot of the ROI]
After performing this operation and applying scipy.ndimage.zoom to obtain a 160x160x160 volume, I would like to enlarge the ROI, since right now only about 5% of the values are non-zero, or at least remove a lot of the 0s (for instance by cropping the volume to 80x80x80 around my ROI).
Do you have any advice?
Thank you in advance.
I found a solution that worked for my problem.
A user here posted code that trims all the 0s from a given n-dimensional input array.
Then, I used scipy.ndimage.zoom to enlarge the trimmed volume.
This series of steps allowed me to have a smaller volume with a larger ROI.
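A rough sketch of that pipeline (the trim function below is an approximate reconstruction, not the exact code from the linked answer, and volume is a placeholder name for the 160x160x160 masked CT array):

import numpy as np
from scipy import ndimage

def trim_zeros_nd(arr):
    # Crop an n-dimensional array to the bounding box of its non-zero values.
    nonzero = np.argwhere(arr != 0)
    lower = nonzero.min(axis=0)
    upper = nonzero.max(axis=0) + 1
    return arr[tuple(slice(lo, hi) for lo, hi in zip(lower, upper))]

roi = trim_zeros_nd(volume)                       # volume: the 160x160x160 masked CT array
factors = [80 / s for s in roi.shape]             # rescale the trimmed box to roughly 80x80x80
roi_zoomed = ndimage.zoom(roi, factors, order=1)  # use order=0 instead if zooming a label mask

Because ndimage.zoom rounds the output size per axis, it is worth checking roi_zoomed.shape afterwards; the target size (80 here) is whatever suits your downstream model.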
In this project, you will implement image super-resolution. Specifically,
you will start from a digital image of size M*N pixels, and then you will enlarge the
image to (3M) * (3N) pixels. While the pixels in the original image should keep their
original intensities, the intensities of new pixels are interpolated by using a local radial
basis function in a user-chosen neighborhood of each new pixel.
This is the image I want to enlarge.
The image is 256 x 256. I want to use Colab, and I found the function pysteps.utils.interpolate.rbfinterp2d; here is the documentation for it:
https://pysteps.readthedocs.io/en/latest/generated/pysteps.utils.interpolate.rbfinterp2d.html.
I am very new to computer programming and I am wondering how to actually do this. I can do the individual steps, so I am more or less looking for a (detailed, if possible) outline of the steps needed to accomplish the task. At the end of the project I want to display the original image and then the resulting image after up-scaling it.
Any help would be much appreciated. Thanks in advance!
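Not a full answer, but a sketch of the steps using SciPy's RBFInterpolator in place of pysteps.rbfinterp2d (I am not certain of the exact pysteps call signature, so treat the function choice, the neighborhood size, and the kernel parameters below as assumptions; the outline of steps would be the same either way):

import numpy as np
import matplotlib.pyplot as plt
from scipy.interpolate import RBFInterpolator

img = plt.imread('input.png').astype(float)   # assumed to load as a 2-D grayscale array
M, N = img.shape

# Step 1: place the original pixels on the enlarged 3M x 3N grid, pixel (i, j) -> (3i, 3j).
yy, xx = np.mgrid[0:M, 0:N]
known_coords = np.column_stack([3 * yy.ravel(), 3 * xx.ravel()])
known_values = img.ravel()

# Step 2: coordinates of every pixel of the enlarged image.
YY, XX = np.mgrid[0:3 * M, 0:3 * N]
query_coords = np.column_stack([YY.ravel(), XX.ravel()])

# Step 3: local RBF interpolation; 'neighbors' is the user-chosen neighborhood size.
rbf = RBFInterpolator(known_coords, known_values, neighbors=25, kernel='gaussian', epsilon=1.0)
big = rbf(query_coords).reshape(3 * M, 3 * N)

# Step 4: the original pixels keep their original intensities.
big[::3, ::3] = img

# Step 5: show the original and the up-scaled result.
fig, (ax0, ax1) = plt.subplots(1, 2)
ax0.imshow(img, cmap='gray'); ax0.set_title('original')
ax1.imshow(big, cmap='gray'); ax1.set_title('3x up-scaled')
plt.show()

Interpolating all (3*256)^2 query points at once can be slow; splitting query_coords into chunks and interpolating chunk by chunk keeps memory and runtime manageable.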
I'm working on a perspective transform application involving transforming 3D points to 2D camera pixels. It is a purely mathematical model, because I'm preparing to use it on hardware that I don't really have access to (so I'm making up focal length and offset values for the intrinsic camera matrix).
When I do the mapping, depending on the xyz location of the camera, I get huge differences in where my transformed image ends up, and I have to make the matrix I'm writing the pixels into really large. (I'm mapping an image of 1000x1000 pixels to an image of about 600x600 pixels, but it's located around coordinate 6000, so I have to make my output matrix 7000x7000, which takes a long time to plt.imshow.) I have no use for the actual location of the pixels, because I'm only concerned with what the remapped image looks like.
I was wondering how people dealt with this issue:
I can think of just cropping the image down to the area that is non-zero (where my pixels are actually mapped to), as in:
How to crop a numpy 2d array to non-zero values?
but that still requires me to use space and time to allocate a 7000x7000 destination matrix.
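A sketch of the usual trick: subtract the bounding-box origin from the projected coordinates before writing, so only the occupied region ever gets allocated. Here u, v are assumed to be the projected column/row coordinates (same shape as the source image img); all three names are placeholders.

import numpy as np

u = np.round(u).astype(int)   # projected column coordinates, e.g. values around 6000
v = np.round(v).astype(int)   # projected row coordinates

# Bounding box of where the pixels actually land.
u0, v0 = u.min(), v.min()
width = u.max() - u0 + 1
height = v.max() - v0 + 1

# Allocate only the bounding box (roughly 600x600 here) instead of a 7000x7000 canvas,
# and shift the coordinates into it.
out = np.zeros((height, width), dtype=img.dtype)
out[v - v0, u - u0] = img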
I have a raster stored in a numpy array that holds an aerial photo of an area that may be any shape. There is a good amount of noise in the data within the area that needs to be smoothed out. The edge of the image (where there is no data) is marked by 0s that extend to the edge of the raster.
I have tried using the Gaussian filter in scipy.ndimage.filters, but that reduces the values of the pixels at the edge of the data set. I can't find a flag to set a nodata value. Is there a better way to do this in Python?
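One common workaround is normalized convolution: blur the data with the nodata pixels set to 0, blur the validity mask with the same kernel, and divide, so pixels near the edge of the data are no longer pulled toward 0. A sketch (raster and the sigma value are placeholders):

import numpy as np
from scipy.ndimage import gaussian_filter

valid = raster != 0                               # 0 marks nodata in this raster
data = np.where(valid, raster, 0.0).astype(float)

blurred = gaussian_filter(data, sigma=2)
weights = gaussian_filter(valid.astype(float), sigma=2)

# Divide out the weight of the valid pixels; keep nodata as 0.
smoothed = np.where(valid, blurred / np.maximum(weights, 1e-12), 0.0)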
I'm trying to implement a blob detector based on LoG; the steps are:
create an array of n levels of LoG filters
apply each filter to the input image to create a 3D array of h*w*n, where h = height, w = width and n = number of levels
find local maxima and circle the blobs in the original image.
I already created the filters and the 3d array (which is an array of 2d images).
I used padding to make sure I don't have any problems around the borders (which includes creating a constant border for each image and creating 2 extra empty images).
Now I'm trying to figure out how to find the local maxima in the array.
I need to compare each pixel to its 26 neighbours (8 in the same image and 9 in each of the two adjacent scales).
The brute-force approach of checking each pixel value directly seems ugly and not very efficient.
What's the best way to find a local maximum in Python using OpenCV?
I'd take advantage of the fact that dilations are efficiently implemented in OpenCV. If a point is a local maximum in 3D, then it is also a local maximum in its own 2D slice; therefore (see the sketch after these steps):
Dilate each image in the array with a 3x3 kernel, keep as candidate maxima the points whose intensity is unchanged.
Brute-force test the candidates against their upper and lower slices.
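A rough sketch of those two steps, assuming stack is the h x w x n array of filter responses with the two empty padding slices from the question (names and the positive-response filter are assumptions):

import cv2
import numpy as np

kernel = np.ones((3, 3), np.uint8)
maxima = []
for k in range(1, stack.shape[2] - 1):            # skip the two empty padding slices
    sl = np.ascontiguousarray(stack[:, :, k])
    dilated = cv2.dilate(sl, kernel)
    # Candidates: pixels unchanged by dilation, i.e. local maxima within their own slice.
    # The sl > 0 filter assumes bright blobs / positive responses; adapt it otherwise.
    ys, xs = np.where((sl == dilated) & (sl > 0))
    for y, x in zip(ys, xs):
        above = stack[max(y - 1, 0):y + 2, max(x - 1, 0):x + 2, k + 1]
        below = stack[max(y - 1, 0):y + 2, max(x - 1, 0):x + 2, k - 1]
        if sl[y, x] >= above.max() and sl[y, x] >= below.max():
            maxima.append((y, x, k))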
I have a series of images which serve as my raw data which I am trying to prepare for publication. These images have a series of white specks randomly throughout which I would like to replace with the average of some surrounding pixels.
I cannot post images, but the following code should produce a PNG that approximates the issue that I'm trying to correct:
import numpy as np
from scipy.misc import imsave  # removed in SciPy >= 1.2; imageio.imwrite is a drop-in replacement

# Mostly dim background with sparse bright specks: only the ~0.1% of pixels
# that drew a value >= 0.999 keep their brightness.
random_array = np.random.random_sample((512, 512))
random_array[random_array < 0.999] *= 0.25
imsave('white_specs.png', random_array)
While this should produce an image with a similar distribution of the specks present in my raw data, my images do not have specks uniform in intensity, and some of the specks are more than a single pixel in size (though none of them are more than 2). Additionally, there are spots on my image that I do not want to alter that were intentionally saturated during data acquisition for the purpose of clarity when presented: these spots are approximately 10 pixels in diameter.
In principle, I could write something to look for pixels whose value exceeds a certain threshold then check them against the average of their nearest neighbors. However, I assume what I'm ultimately trying to achieve is not an uncommon action in image processing, and I very much suspect that there is some SciPy functionality that will do this without having to reinvent the wheel. My issue is that I am not familiar enough with the formal aspects/vocabulary of image processing to really know what I should be looking for. Can someone point me in the right direction?
You could simply try a median filter with a small kernel size,
from scipy.ndimage import median_filter
filtered_array = median_filter(random_array, size=3)
which will remove the specks without noticeably changing the original image.
A median filter is well suited for such tasks since it preserves features with high spatial frequency in your original image better than, for instance, a simple moving-average filter.
By the way, if your images are experimental (i.e. noisy), applying a non-aggressive median filter such as the one above never hurts, as it attenuates the noise as well.
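If the intentionally saturated 10-pixel spots must stay exactly as acquired, one possible refinement is to replace only the pixels that stick out strongly from their local median; isolated 1-2 pixel specks differ a lot from their 3x3 median, while the interior of a large saturated spot barely differs at all. The 0.5 threshold below is just a placeholder to tune.

import numpy as np
from scipy.ndimage import median_filter

filtered = median_filter(random_array, size=3)
specks = (random_array - filtered) > 0.5          # flag only strong, isolated outliers
cleaned = np.where(specks, filtered, random_array)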