I have photographic images of galaxies. There is some unwanted data on these images (like stars or aeroplane streaks) that has been masked out. I don't just want to fill the masked areas with some mean value, but to interpolate them according to the surrounding data. How do I do that in Python?
We've tried various functions in the scipy.interpolate package: RectBivariateSpline, interp2d, splrep/splev, map_coordinates, but all of them seem designed to find new pixels between existing pixels on a regular grid; we were unable to make them fill an arbitrary "hole" in the data.
What you want is called Inpainting.
OpenCV has an inpaint() function that does what you want.
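A minimal sketch (file names are placeholders; note that cv2.inpaint requires 8-bit data, so floating-point astronomy images must be scaled first):

import cv2

# Load the image and a mask marking the bad pixels (non-zero = fill).
img = cv2.imread("galaxy.png")
mask = cv2.imread("mask.png", cv2.IMREAD_GRAYSCALE)

# inpaintRadius=3 is a reasonable starting point; cv2.INPAINT_NS is the
# other available algorithm.
result = cv2.inpaint(img, mask, 3, cv2.INPAINT_TELEA)
cv2.imwrite("filled.png", result)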
What you want is not interpolation at all. Interpolation depends on the assumption that data between known points is roughly continuous. In any non-trivial image, this will not be the case.
You actually want something like the content-aware fill that is in Photoshop CS5. There is a free alternative available in The GIMP through the GIMP-resynthesize plugin. These filters are extremely advanced, and trying to re-implement them yourself would be insane. A better choice would be to figure out how to use GIMP-resynthesize from your program instead.
I made my first gimp python script that might help you:
my scripts
It is called "conditional filter": a matrix filter that fills every fully transparent pixel with the mean value of its 4 nearest non-transparent neighbours.
Be sure to use an RGBA image whose alpha channel contains only the values 0 and 255.
It is rough, simple, slow and unoptimized, but bug-free.
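For readers who would rather stay in pure Python/NumPy, here is a rough sketch of the same idea (my own re-implementation, not the GIMP script; it assumes an HxWx4 uint8 array with alpha values of only 0 and 255):

import numpy as np

def fill_transparent(rgba):
    # rgba: HxWx4 uint8 array; alpha 255 = known pixel, 0 = hole.
    out = rgba.astype(float).copy()
    known = out[..., 3] == 255
    while not known.all():
        new_known = known.copy()
        for y, x in zip(*np.where(~known)):
            vals = []
            for dy, dx in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                ny, nx = y + dy, x + dx
                if 0 <= ny < out.shape[0] and 0 <= nx < out.shape[1] and known[ny, nx]:
                    vals.append(out[ny, nx, :3])
            if vals:
                out[y, x, :3] = np.mean(vals, axis=0)
                out[y, x, 3] = 255
                new_known[y, x] = True
        if new_known.sum() == known.sum():
            break  # no fillable pixels left (e.g. fully transparent image)
        known = new_known
    return out.astype(np.uint8)

Each pass fills one "ring" of hole pixels from the outside in, so it is slow on large holes, just like the original script.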
Related
I have raw microscopy images like this:
And I want to segment the objects, as you see some of them are really close and I have a great range of intensity values.
background: 700 a.u.
fluorescent shapes: from 7000 to 32000 a.u.
To segment them I use Otsu binary thresholding via OpenCV's cv2.threshold (without prior processing of the image):
thresh, imgthresh = cv2.threshold(image, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
The result is pretty good, but still fails in detecting the brightest shapes as individual objects.
I have tried a lot of things: the watershed algorithm, image preprocessing (blurring), erosion, adaptive thresholding, but nothing works properly, since the main problem is the large spread of fluorescence values in the image.
Any smart idea on how to solve this?
Because your data have such a large range of intensity values, single-histogram methods applied to the whole image (e.g. Otsu) are going to have trouble with this task. I think your best bet is going to be one of the following (sketched in code after this list):
threshold_multiotsu: choose the number of classes based on the number of 'clusters' of intensities. Unfortunately, you will likely need to alter the number of classes on an image-by-image basis, so this isn't super robust.
threshold_local: I know you said that you tried this, but you might revisit it and alter the block_size parameter until you get something that looks reasonable. Based on your example images (and assuming a little about why the objects in them are green), it looks like objects in close spatial proximity generally have similar intensity values. Furthermore, you likely won't have to tweak the parameters as much as you would in option 1.
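A minimal sketch of both options with scikit-image (assuming image is your raw grayscale array already loaded as a NumPy array; the parameter values are placeholders to tune):

import numpy as np
from skimage.filters import threshold_multiotsu, threshold_local

# Option 1: multi-Otsu with a hand-picked number of classes.
thresholds = threshold_multiotsu(image, classes=3)
regions = np.digitize(image, bins=thresholds)  # label map 0..classes-1
binary_multi = image > thresholds[0]           # e.g. everything above background

# Option 2: local (adaptive) thresholding; block_size must be odd.
local_thresh = threshold_local(image, block_size=51, offset=0)
binary_local = image > local_thresh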
I suspect that these will be the simplest and most straightforward approaches, but you could also delve into identifying the object edges using something from skimage.feature and then filling the objects. Maybe something like the blob detection example here: https://scikit-image.org/docs/stable/auto_examples/features_detection/plot_blob.html. This will be a bit more involved, but such methods should be more robust at identifying objects with widely varying intensity values.
If all else fails you can try a couple of state-of-the-art packages. The main ones I am thinking of are https://github.com/stardist/stardist and https://github.com/MouseLand/cellpose, but these seem like overkill based on your example data here.
I'm using OpenCV with Python to process images for AI training. I need to scale the images down to 32×32 pixels, but with cv2.resize() the images come out too noisy. It looks like this function takes the value of a single pixel from each region of the image, but I need an average value of each region so that the images are less noisy. Is there an alternative to cv2.resize()? I could just write my own function but I don't think it would be very fast.
As you can see in the cv2.resize documentation, the last parameter interpolation determines the way the image is resampled.
See also the possible values for it.
The default is cv2.INTER_LINEAR, meaning linear interpolation. It can create a blurred/noisy effect when the image is heavily downsampled.
You can try to use other interpolation methods to see if the result is better suited for your needs.
Specifically, I recommend you try the cv2.INTER_NEAREST option. It determines the destination pixel value from the colour of the nearest pixel in the source. The downsampled image should be pixelated, but not blurred.
Another option is cv2.INTER_AREA, as mentioned in @fmw42's comment. It resamples using pixel-area relation, i.e. it averages the source pixels that map to each destination pixel, which is exactly the region-averaging behaviour you described.
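For example (a minimal sketch; the file name is a placeholder):

import cv2

img = cv2.imread("input.png")
# Pixelated but sharp result:
small_nearest = cv2.resize(img, (32, 32), interpolation=cv2.INTER_NEAREST)
# Averages each source region, as requested in the question:
small_area = cv2.resize(img, (32, 32), interpolation=cv2.INTER_AREA)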
I want to programmatically modify a bitmap using Python, but I don't really need a thorough grounding in the subject, so I would like to concentrate on learning just what I need to get the job done.
A good example of the kind of thing I'm after would be a bitmap image of England and its counties. This would initially display a black border around all the counties on a white background.
So far so good, but how can I dynamically change the background color of a county?
Off the top of my head I was thinking I might find a flood-fill routine that works like one in a simple paint app: something that changes all the pixels within an area enclosed by a specified colour. I've had a quick look at the PIL documentation but didn't find anything I recognised as a flood-fill function.
I don't yet know exactly what a mask is or how to use it but maybe this is an avenue I should explore. Maybe I could define a mask for each county and then use the mask to guide the fill process? Can masks be defined and stored within the bitmap for later use by my program?
Same goes for paths?
Any help or pointers would be greatly appreciated.
PIL has an undocumented function ImageDraw.floodfill:
>>> from PIL import ImageDraw
>>> help(ImageDraw.floodfill)
Help on function floodfill in module ImageDraw:
floodfill(image, xy, value, border=None)
Fill bounded region.
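For example (a minimal sketch with a modern Pillow import; the file name, seed point and colours are made up):

from PIL import Image, ImageDraw

im = Image.open("england_counties.png").convert("RGB")
# Fill the region containing point (120, 200) with red, stopping at
# the black county borders (border=(0, 0, 0)).
ImageDraw.floodfill(im, xy=(120, 200), value=(255, 0, 0), border=(0, 0, 0))
im.save("filled.png")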
(Flood-filling should generally be a last resort because it interacts poorly with anti-aliased lines. It is usually better to get the actual boundary data for the counties and then draw a filled polygon. However, PIL doesn't support anti-aliased line drawing so this advice is useless unless you switch your drawing module to something more capable like PythonMagick or pycairo.)
You can try the OpenCV bindings for Python. Here is an example: http://opencv.willowgarage.com/documentation/python-introduction.html
You can then use the cvFloodFill function to flood-fill a region.
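In the modern cv2 binding the same operation is cv2.floodFill; a minimal sketch (file name, seed point and colour are made up):

import numpy as np
import cv2

img = cv2.imread("england_counties.png")
h, w = img.shape[:2]
mask = np.zeros((h + 2, w + 2), np.uint8)  # floodFill requires a 2px-padded mask
cv2.floodFill(img, mask, seedPoint=(120, 200), newVal=(0, 0, 255))
cv2.imwrite("filled.png", img)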
I am automatically creating JPG pictures from multispectral data. The created pictures are very dark, so I thought the best idea would be to change the brightness (like Image.Enhance in PIL). But there was a problem: some pictures need more brightening than others.
So the next idea was to try linear stretching of the histogram. I created a script that iterates over the RGB tuples and computes a new intensity for each pixel, but it made very little difference, probably because the range of values was already 0-255 every time. Then I tried histogram equalization (ImageOps) on R, G and B separately, but the result was not good; please see the middle part of the picture. I found on the internet that this is not a good approach because the colours can change dramatically, and that is probably what happened in my case.
The best idea seems to be to convert the RGB array to HSL and then change the luminance, but I can't use a constant to maximize the luminance because the pictures are different and each needs a different constant. Should I use histogram equalization on the luminance, or what is the best approach: stretching, or probably better, histogram equalization of my picture?
I am looking for something like Image/Auto adjust colors in IrfanView, or what some software calls Linear Normalization...
I hope the picture helps you understand my problem. I have probably chosen a bad way to achieve my goal.
Thank you for any answer; I will be very glad.
EDIT
Left image for download
I can upload the other images later today.
I would suggest proceeding with the same approach as you have stated, with a slight modification:
Convert the RGB image to LAB image.
Apply localized histogram equalization to the L-channel.
Merge it back with the other channels.
Convert it back to RGB image.
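A minimal OpenCV sketch of those four steps (the CLAHE parameters are just starting points to tune; note OpenCV loads images as BGR):

import cv2

img = cv2.imread("dark.jpg")                      # hypothetical input
lab = cv2.cvtColor(img, cv2.COLOR_BGR2LAB)        # 1. RGB (BGR) -> LAB
l, a, b = cv2.split(lab)
clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
l_eq = clahe.apply(l)                             # 2. localized equalization of L
lab_eq = cv2.merge((l_eq, a, b))                  # 3. merge channels back
result = cv2.cvtColor(lab_eq, cv2.COLOR_LAB2BGR)  # 4. back to RGB (BGR)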
You can check my answer for this in a different question here:
The code I have there is written for OpenCV in Python. You can adapt it to C if you wish.
Let me know if it has helped you!!
I am not sure if this applies, and I have not tried it myself, but I was reading this article about underwater contrast stretching:
http://www.iaeng.org/IJCS/issues_v34/issue_2/IJCS_34_2_12.pdf
What it suggests might help:
"In order to address the issues discussed above, we propose
an approach based on slide stretching. Firstly, we use contrast
stretching of RGB algorithm to equalize the colour contrast in
the images. Secondly, we apply the saturation and intensity
stretching of HSI to increase the true colour and solve the
problem of lighting"
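This is not the paper's exact method, but a hedged sketch of simple per-channel contrast stretching in NumPy (clipping at the 2nd/98th percentiles is an arbitrary but common choice):

import numpy as np

def stretch_channel(c, lo_pct=2, hi_pct=98):
    # Map the chosen percentile range linearly onto 0..255.
    lo, hi = np.percentile(c, (lo_pct, hi_pct))
    return np.clip((c - lo) * 255.0 / (hi - lo + 1e-9), 0, 255)

def stretch_rgb(img):  # img: HxWx3 uint8 array
    out = np.stack([stretch_channel(img[..., i]) for i in range(3)], axis=-1)
    return out.astype(np.uint8)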
I have an RGBA image that I need to upscale while keeping it smooth.
The catch is that I need to keep the colors exactly the way they are (background: I'm resizing a map where provinces are color-coded), and so I cannot just perform a resize with bicubic interpolation, because that will also interpolate the pixel colors while smoothing.
Thus, in order to get smooth edges I was hoping to upscale using nearest neighbour (giving me staircase patterns) and then round out the edges by replacing each pixel in the target image with the pixel colour that occurs most often within a certain radius, like so:
from PIL import Image, ImageFilter
amount = 3
image = Image.open(<file>)
image = image.filter(ImageFilter.ModeFilter(amount))
This finishes fairly quickly, except that it doesn't work, as PIL's ImageFilters operate separately on each channel. shakes fist
I tried resorting to numpy arrays and doing the following in a loop:
# needs: from collections import Counter; from numpy import reshape
dest[x, y] = Counter(tuple(e) for e in reshape(source[max(x-r, 0):x+r+1, max(y-r, 0):y+r+1], (-1, 4))).most_common()[0][0]
Note that dest and source here are the same shape XxYx4 arrays, hence the necessary reshaping and converting into tuples.
In theory this would work, but would take 12 hours to finish for the 82 million pixel image I am operating on. I am inferring that this is mostly due to unnecessary overhead with casting and reshaping.
What would be the appropriate way to do this in Python?
I am about ready to throw up my hands and write a C++ module to do this task.
Anything to steer me away from this path would be much appreciated!
If you care about a fixed set of colors in your image, the "Palette" image mode would perhaps be more appropriate (at least, if you don't have more than 256 colors in your map).
I would suggest first converting your image to "P" mode (since I'm not really familiar with PIL, I'm not sure how easy that is; perhaps you'll have to explicitly construct the palette first?) and then applying the mode filter.
Another solution that comes to mind is to simply use bicubic interpolation when upsizing and then convert to a palette image using a palette derived from the original image. That might yield better results (and be easier to implement) than your current approach.
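A hedged Pillow sketch of that second suggestion (the file name and scale factor are made up; quantize(palette=...) snaps the result back to the original colours, and the dither argument may require a recent Pillow version):

from PIL import Image

im = Image.open("provinces.png").convert("RGB")        # hypothetical file
palette_img = im.convert("P", palette=Image.ADAPTIVE)  # palette from the original
big = im.resize((im.width * 4, im.height * 4), Image.BICUBIC)
big_p = big.quantize(palette=palette_img, dither=Image.NONE)  # no new colours
result = big_p.convert("RGB")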
EPX, described in the Wikipedia Image_scaling article, introduces no new colors. Its 2x scale does this:
  A        --\  1 2
C P B      --/  3 4
  D
1=2=3=4=P, then:
IF C==A => 1=A
IF A==B => 2=B
...
and scale 3x is described there too.
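Based on that Wikipedia description, here is a rough pure-Python sketch of the 2x rules (using the Scale2x variant of the conditions, which folds in the "three or more identical" check; src is a 2D list of hashable pixel values such as RGBA tuples):

def epx_scale2x(src):
    h, w = len(src), len(src[0])
    dst = [[None] * (2 * w) for _ in range(2 * h)]
    for y in range(h):
        for x in range(w):
            P = src[y][x]
            A = src[y - 1][x] if y > 0 else P      # above
            B = src[y][x + 1] if x < w - 1 else P  # right
            C = src[y][x - 1] if x > 0 else P      # left
            D = src[y + 1][x] if y < h - 1 else P  # below
            p1 = p2 = p3 = p4 = P
            if C == A and C != D and A != B: p1 = A
            if A == B and A != C and B != D: p2 = B
            if D == C and D != B and C != A: p3 = C
            if B == D and B != A and D != C: p4 = D
            dst[2 * y][2 * x] = p1
            dst[2 * y][2 * x + 1] = p2
            dst[2 * y + 1][2 * x] = p3
            dst[2 * y + 1][2 * x + 1] = p4
    return dst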
Apart from that, I'd agree with "go straight to C" -- depends on what you know.
Has anyone used np_inline?