I want to apply iradon. My Radon image (sinogram) has shape (168, 400).
But when I apply iradon to it, the resulting image shape becomes (168, 168)!
I want to have an image with shape (168, 400).
Where is the problem?
Image = iradon(radon_image, filter_name=None)
I read the documentation at
https://scikit-image.org/docs/stable/api/skimage.transform.html#skimage.transform.iradon
but I still did not understand the problem.
I would really appreciate it if anyone can help me.
I think you might be misunderstanding how the Radon (and inverse Radon) transforms work.
The size of your radon_image is (number of pixels across the original image, number of projections). So in your example, the image is 168 pixels across with 400 different projections (see the image below from the scikit-image website). Therefore, when you perform the inverse Radon transform you will get an image of size 168 x 168 pixels.
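For example, here is a minimal sketch with scikit-image (using a resized Shepp-Logan phantom as a stand-in for your 168-pixel-wide image, and assuming a recent scikit-image version):

import numpy as np
from skimage.data import shepp_logan_phantom
from skimage.transform import radon, iradon, resize

image = resize(shepp_logan_phantom(), (168, 168))      # 168 x 168 test image
theta = np.linspace(0.0, 180.0, 400, endpoint=False)   # 400 projection angles

sinogram = radon(image, theta=theta)
print(sinogram.shape)        # (168, 400): (pixels across the image, number of projections)

reconstruction = iradon(sinogram, theta=theta, filter_name=None)
print(reconstruction.shape)  # (168, 168): the reconstruction is always square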
Perhaps if you explain further why you want the resulting image with a particular size, it will be possible to help further.
I am trying to feed small patches of satellite image data (Landsat-8 Surface Reflectance bands) into neural networks for my project. However, the downloaded image values range from 1 to 65535.
So I tried dividing the images by 65535 (the max value), but plotting them shows an almost all black/brown image like this!
But most of the images do not have values anywhere near 65535.
Without any normalization the image looks all white.
Dividing the image by 30k looks like this.
If the images are too dark or too light my network may not perform as intended.
My question: is dividing the image by the maximum possible value (65535) the only solution, or are there other ways to normalize images, especially satellite data?
Please help me with this.
To answer your question, though: there are other ways to normalize images. Standardization is the most common way (subtract the mean and divide by the standard deviation).
Using numpy...

import numpy as np
image = (image - np.mean(image)) / np.std(image)
As I mentioned in a clarifying comment, you want the normalization method to match how the NN's training set was normalized.
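If the goal is a reasonable-looking visualization rather than NN input, another common option (just a sketch, with percentile limits of my own choosing) is a percentile stretch per band, which ignores the few very bright outlier pixels:

import numpy as np

def percentile_stretch(band, low=2, high=98):
    # clip the band to the chosen percentiles, then rescale to 0-1
    lo, hi = np.percentile(band, [low, high])
    return np.clip((band - lo) / (hi - lo), 0.0, 1.0)

# e.g. for a hypothetical (H, W, 3) Landsat patch called `patch`:
# rgb = np.stack([percentile_stretch(patch[..., i]) for i in range(3)], axis=-1)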
I am looking for suggestions or best practices to follow in terms of converting a 12-bit (0-4096) grayscale image to a 3-channel 8-bit color image in Python in order to pass it into a convolutional neural network.
With 8-bit RGB, I essentially have 24-bit colour, so I see no reason to lose data in the conversion, although most other posts suggest simply dividing the pixel value by 16 in order to squeeze it into 8-bits and then replicating this over all three channels in a lossy way.
Some ideas I have come up with include creating some kind of gradient function that converts the 12-bit uint to a corresponding colour on the gradient, but my understanding of the RGB colour space is that this would be tricky to implement using Numpy or other such libraries.
Do any of the common libraries such as OpenCV / scikit-image offer this kind of functionality? I cannot find anything in the docs. Other ideas include using some kind of intermediary color space such as HSL or L*a*b*, but I don't really know enough about this.
Please note that I am ultimately trying to create an 8-bit RGB image, not a 16-bit RGB image. Simply trying to colorize the grayscale image in a way that retains the original 12-bit data across the colour range.
Hope somebody can help!
My first question would be: Why do you need to convert it to color in any specific way?
If you are training the CNN on these images, any arbitrary transformation should work and give you similar performance. You just need to convert the training images and input images in the same manner.
You could probably just split the 16 bits and put the bottom half in R, the top half in G, and leave B with zeroes.
It kinda depends on how black-box this CNN is. But you can always test this by running a few training/testing cycles with the conversion I mentioned above and then do that again with the "compressed" versions.
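A minimal sketch of that bit-splitting idea for 12-bit data (bottom 8 bits in R, top 4 bits in G, B left at zero; the helper name is made up):

import numpy as np

def pack_12bit_to_rgb(img12):
    # lossless: the original value can be recovered as R + 256 * G
    img12 = img12.astype(np.uint16)
    r = (img12 & 0xFF).astype(np.uint8)   # bottom 8 bits
    g = (img12 >> 8).astype(np.uint8)     # top 4 bits (values 0-15)
    b = np.zeros_like(r)
    return np.stack([r, g, b], axis=-1)

gray = np.random.randint(0, 4096, size=(64, 64))   # fake 12-bit image
rgb = pack_12bit_to_rgb(gray)
restored = rgb[..., 0].astype(np.uint16) | (rgb[..., 1].astype(np.uint16) << 8)
assert np.array_equal(restored, gray)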
In this project, you will implement the image super-resolution problem. Specifically, you will start from a digital image of size M*N pixels, and then you will enlarge the image to (3M) * (3N) pixels. While the pixels in the original image should keep their original intensities, the intensities of new pixels are interpolated by using a local radial basis function in a user-chosen neighborhood of each new pixel.
This is the image I want to enlarge.
The image is 256 x 256. I want to use Colab and I found a function pysteps.utils.interpolate.rbfinterp2d and here is the documentation for this function:
https://pysteps.readthedocs.io/en/latest/generated/pysteps.utils.interpolate.rbfinterp2d.html.
I am very new to computer programming and I am wondering how I would actually do this. I can do the individual steps, so I am more or less looking for a (detailed, if possible) outline of the steps to accomplish the task. At the end of the project I want to display the original image and then the resulting image after up-scaling it.
Any help would be much appreciated. Thanks in advance!
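Not having used pysteps myself, here is a rough sketch of the overall pipeline using scipy.interpolate.RBFInterpolator instead of pysteps.utils.interpolate.rbfinterp2d (the file names and parameter values are placeholders, and rbfinterp2d's own arguments will differ, so check its documentation):

import numpy as np
from PIL import Image
from scipy.interpolate import RBFInterpolator

# 1. load the 256 x 256 grayscale image
img = np.asarray(Image.open("input.png").convert("L"), dtype=float)
M, N = img.shape

# 2. place the original pixels on the enlarged (3M x 3N) grid: pixel (i, j) -> (3i, 3j)
yy, xx = np.meshgrid(np.arange(M) * 3, np.arange(N) * 3, indexing="ij")
known_coords = np.column_stack([yy.ravel(), xx.ravel()])
known_values = img.ravel()

# 3. build a *local* RBF interpolator: each new pixel is interpolated from its
#    k nearest known pixels (the "user-chosen neighborhood")
interp = RBFInterpolator(known_coords, known_values, neighbors=25)

# 4. evaluate on every pixel of the 3M x 3N grid (this can take a while)
YY, XX = np.meshgrid(np.arange(3 * M), np.arange(3 * N), indexing="ij")
big = interp(np.column_stack([YY.ravel(), XX.ravel()])).reshape(3 * M, 3 * N)

# 5. the original pixels keep their original intensities
big[yy, xx] = img

# 6. save / display the result next to the original
Image.fromarray(np.clip(big, 0, 255).astype(np.uint8)).save("enlarged.png")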
I want to make a program that turns a given image into the format of the MNIST dataset, as an exercise to understand the various preprocessing steps involved. But the description the authors give on their site, http://yann.lecun.com/exdb/mnist/, was not entirely straightforward:
The original black and white (bilevel) images from NIST were size normalized to fit in a 20x20 pixel box while preserving their aspect ratio. The resulting images contain grey levels as a result of the anti-aliasing technique used by the normalization algorithm. the images were centered in a 28x28 image by computing the center of mass of the pixels, and translating the image so as to position this point at the center of the 28x28 field.
So from the original image I have to normalize it to fit a 20x20 box while still preserving the aspect ratio (I think they mean the aspect ratio of the actual digit, not the entire image). Still, I really don't know how to do this.
Center of mass: I have found some code online about this, but I don't think I understand the principle. Here is my take: the coordinate of each pixel is a vector from the origin to that point, so you multiply each coordinate by the image intensity at that pixel, sum everything, and divide by the total intensity of the image. I may be wrong about this :(
Translating the image so as to position this point at the center: maybe cook up some translation equation, or maybe use a convolutional filter to facilitate translation, then find a path that leads to the center (Dijkstra's shortest path?).
All in all, I think I still need guidance on this. Can anyone explain these parts for me? Thank you very much.
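A rough sketch of those steps (assuming a white digit on a black background stored as a numpy array; the resampling filter and exact shift interpolation are my choices, not necessarily what NIST used):

import numpy as np
from PIL import Image
from scipy import ndimage

def to_mnist_format(img):
    # 1. crop to the bounding box of the digit
    ys, xs = np.nonzero(img > 0)
    img = img[ys.min():ys.max() + 1, xs.min():xs.max() + 1]

    # 2. resize so the longer side becomes 20 pixels (aspect ratio of the digit preserved);
    #    the resampling filter does the anti-aliasing that creates the grey levels
    h, w = img.shape
    scale = 20.0 / max(h, w)
    new_size = (max(1, round(w * scale)), max(1, round(h * scale)))   # (width, height)
    small = np.asarray(Image.fromarray(img.astype(np.uint8)).resize(new_size, Image.LANCZOS), dtype=float)

    # 3. paste into a 28 x 28 canvas, then shift so the centre of mass
    #    (the intensity-weighted average of the pixel coordinates) lands at the centre
    canvas = np.zeros((28, 28))
    sh, sw = small.shape
    top, left = (28 - sh) // 2, (28 - sw) // 2
    canvas[top:top + sh, left:left + sw] = small

    cy, cx = ndimage.center_of_mass(canvas)
    canvas = ndimage.shift(canvas, (13.5 - cy, 13.5 - cx))
    return np.clip(canvas, 0, 255)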
I am automatically creating JPG pictures from multispectral data. The created picture is very dark, so I thought the best idea would be to change the brightness (like ImageEnhance in PIL). But there was a problem, because some pictures need more brightness than others.
So the next idea was to try linear stretching of the histogram. I created a script which iterates over the RGB tuples and computes a new intensity for each pixel. There was only a very small difference, probably because the range of values was already 0-255 every time. Then I tried histogram equalization (ImageOps) on R, G and B separately, but the result was not good, please see the middle part of the picture. I found on the internet that this is not a good approach because the colors can change dramatically; that is probably my case.
The best idea seems to be to convert the RGB array to HSL and then change the luminance, but I can't use a constant to maximize the luminance because the pictures are different and each needs a different constant. Should I use histogram equalization on the luminance, or what is the best approach to stretch (or, probably better, equalize) the histogram of my picture?
I am looking for something like Image/Auto adjust colors in IrfanView; in some software it is called linear normalization...
I hope the picture will help you understand my problem. I probably chose a bad way to achieve my goal.
Thank you for any answer, I will be very glad.
EDIT
Left image for download
I can upload the next images later today.
I would suggest proceeding with the same approach as you have stated, with a slight modification:
Convert the RGB image to LAB image.
Apply localized histogram equalization to the L-channel.
Merge it back with the other channels.
Convert it back to RGB image.
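In OpenCV (Python) those steps look roughly like this; CLAHE is used here for the localized equalization, and the file names are placeholders:

import cv2

img = cv2.imread("dark.jpg")                        # OpenCV loads images as BGR
lab = cv2.cvtColor(img, cv2.COLOR_BGR2LAB)          # 1. RGB/BGR -> LAB
l, a, b = cv2.split(lab)

clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
l = clahe.apply(l)                                  # 2. localized equalization on the L channel

lab = cv2.merge((l, a, b))                          # 3. merge the channels back
out = cv2.cvtColor(lab, cv2.COLOR_LAB2BGR)          # 4. LAB -> RGB/BGR
cv2.imwrite("adjusted.jpg", out)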
You can check my answer for this in a different question here:
The code I have there is written for OpenCV using python. You can modify it for C language if you wish.
Let me know if it has helped you!!
I am not sure if this applies, and I have not applied it myself, but I was reading this article about underwater contrast stretching:
http://www.iaeng.org/IJCS/issues_v34/issue_2/IJCS_34_2_12.pdf
What it suggests might help:
"In order to address the issues discussed above, we propose an approach based on slide stretching. Firstly, we use contrast stretching of RGB algorithm to equalize the colour contrast in the images. Secondly, we apply the saturation and intensity stretching of HSI to increase the true colour and solve the problem of lighting"
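A rough sketch of that two-stage idea with OpenCV and numpy (using HSV as a stand-in for HSI, with percentile limits and file names of my own choosing):

import cv2
import numpy as np

def stretch(channel, low=1, high=99):
    # linear contrast stretch of a single channel to the full 0-255 range
    lo, hi = np.percentile(channel, [low, high])
    return np.clip((channel.astype(float) - lo) * 255.0 / (hi - lo), 0, 255).astype(np.uint8)

img = cv2.imread("dark.jpg")                                      # placeholder file name

# stage 1: contrast stretching of the R, G and B channels
rgb_stretched = cv2.merge([stretch(c) for c in cv2.split(img)])

# stage 2: stretch saturation and intensity (here: HSV value), then convert back
h, s, v = cv2.split(cv2.cvtColor(rgb_stretched, cv2.COLOR_BGR2HSV))
out = cv2.cvtColor(cv2.merge((h, stretch(s), stretch(v))), cv2.COLOR_HSV2BGR)
cv2.imwrite("stretched.jpg", out)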