Extract number of pixels from a cropped region - python

I was hoping to find out the number of pixels in a specific region of a pretty dark image, but I can't find the right way. Can I please get some help?
I was trying to crop the image along the two red boundaries, then draw a white line over the red line and count the nonzero pixels over it. But I think that's a pretty ugly approach.
My end goal is to find the pixel-to-mm ratio (one block between two red lines is 1.891 mm).
P.S. It may sound a little silly to ask, but I had a question too: if I zoom in on the image, does the number of pixels in a fixed region stay the same as in the original (I believe it does, but I'm not sure), while resizing does affect the number of pixels in a region, right?
Thank you so much in advance.

The red line stands out from the background. Pick its color and mark every pixel of a similar color. This should result in a binary image with only the scale.
Then follow the horizontal line from left to right, preferring horizontal moves over diagonal ones, and determine the local width by means of vertical displacements. The maxima of the width will reveal the positions of the tick marks.
You can probably refine the measurements and resort to subpixel analysis, but this is another story.
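A minimal sketch of the color-marking and tick-finding steps in OpenCV, assuming an HSV threshold for red (the file name scale.png and the hue/saturation bounds are assumptions that would need tuning to the actual image):

import cv2
import numpy as np

img = cv2.imread("scale.png")  # hypothetical file name
hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)

# Red wraps around hue 0 in HSV, so combine two ranges (bounds are guesses).
lower = cv2.inRange(hsv, (0, 80, 80), (10, 255, 255))
upper = cv2.inRange(hsv, (170, 80, 80), (180, 255, 255))
mask = lower | upper  # binary image with (ideally) only the red scale

# Column-wise pixel counts: local maxima in this profile correspond
# to the vertical tick marks of the scale.
profile = np.count_nonzero(mask, axis=0)
ticks = [x for x in range(1, len(profile) - 1)
         if profile[x] > profile[x - 1] and profile[x] >= profile[x + 1]]

# Pixel-to-mm ratio from the average tick spacing (1 block = 1.891 mm).
if len(ticks) > 1:
    print("pixels per mm:", np.mean(np.diff(ticks)) / 1.891)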

Related

Remove uneven white border from images using OpenCV

I'm trying to remove uneven white borders from different sets of pictures. They all look like these:
What I'm doing right now is just drawing a rectangle around the picture in the hope that it covers the white area:
h, w = img.shape  # grayscale image, so shape is (height, width)
# draw a 2 px thick black frame along the image border
cv2.rectangle(img, (0, 0), (w, h), (0, 0, 0), 2)
Depending on the picture it might work or not. As there are a number of pictures in a similar situation, I'm looking for a more robust solution that is applicable to all pictures with this kind of issue.
I think your way is right, but it's unaware of whether it overlays figures (you may increase the thickness if you know there won't be figures within that margin), and the desired thickness is unknown.
You may use findContours. Find the "thick" figures (if you expect particular metrics as in the picture). Sort their extreme coordinates, add some margin, and that would set the maximum depth of the border.
However, a line drawn per side, rather than a rectangle, would then be better, in case there are figures very close to the border.
Another scenario: first draw concentric black rectangles (or lines per side) in order to clear the unevenness, then draw the white lines/rectangle with the desired thickness.
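A sketch of that findContours idea, assuming a dark-on-white page and OpenCV 4 (the file name, the speck-area threshold, and the margin are assumptions):

import cv2
import numpy as np

img = cv2.imread("picture.png", cv2.IMREAD_GRAYSCALE)  # hypothetical file

# Content is dark on a white page, so invert before thresholding.
_, bw = cv2.threshold(img, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
contours, _ = cv2.findContours(bw, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)

# Extreme coordinates of the "thick" figures, ignoring tiny specks.
boxes = [cv2.boundingRect(c) for c in contours if cv2.contourArea(c) > 50]
x0 = min(x for x, y, w, h in boxes)
y0 = min(y for x, y, w, h in boxes)
x1 = max(x + w for x, y, w, h in boxes)
y1 = max(y + h for x, y, w, h in boxes)

# Blank everything outside the content box plus a small margin,
# which replaces the uneven border with an even one.
m = 5
t, b = max(0, y0 - m), min(img.shape[0], y1 + m)
l, r = max(0, x0 - m), min(img.shape[1], x1 + m)
clean = np.full_like(img, 255)  # white page; use 0 for a black border
clean[t:b, l:r] = img[t:b, l:r]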

How to measure color proximity / concentration in an image? (python solution preferred)

Consider the two images below, Image 1 and Image 2 (not reproduced here):
For the sake of argument, let's say that the two images have the same number of black pixels.
I would like to calculate the "concentration" of black pixels, i.e. Image 1 has a higher "index" than Image 2, since its black pixels are more "grouped", or concentrated, than those in Image 2.
Intuitively I would hope that a metric or function for this already exists. I did some searches but couldn't quite find what I am looking for.
Applied to real examples, I would use k-means to reduce the number of colors to around 15 and apply that calculation to each color. I do not think that a histogram would help, as I am assuming the same number of black pixels (please correct me if I'm wrong).
One approach I can think of is:
1. Filter for color
2. Use contours to define blobs
3. Count size and number of blobs for each color
4. Quantify into a comparable metric per color
I'm not an expert in imaging libraries, so if you could provide some guidance on how to implement that, it would be great.
Any suggestions?
Thanks a lot!
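For reference, a minimal sketch of steps 1-4 in OpenCV; the file name and the "mean blob area" metric are assumptions, not something prescribed by the question:

import cv2
import numpy as np

def concentration(mask):
    # Mean blob area for a binary mask (255 marks pixels of the color):
    # fewer, larger blobs give a higher score than many small ones.
    n, labels, stats, _ = cv2.connectedComponentsWithStats(mask, connectivity=8)
    areas = stats[1:, cv2.CC_STAT_AREA]  # row 0 is the background
    return float(areas.mean()) if len(areas) else 0.0

img = cv2.imread("image1.png", cv2.IMREAD_GRAYSCALE)  # hypothetical file
black = cv2.inRange(img, 0, 50)   # step 1: filter for "black"
print(concentration(black))       # steps 2-4 collapsed into one number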
I think you are heading towards the field of granulometry, which was developed by a Parisian mining company to determine the sizes of grains of minerals in rock, in order to decide whether mining was viable.
Anyway, enough history. If you perform a "morphological closing" on each of your images with a disk as the structuring element, gradually increasing its radius, you will get a measure of the distribution of the sizes of the blobs present in your image.
No time to write the Python at the moment, but the graph below shows your images side by side with a red vertical bar to separate them. In each successive frame of the video, I increase the radius of the disk-shaped structuring element by 1 pixel: the first frame has a radius of 1, and the final frame a radius of 39.
Hopefully you can see that the bigger, more concentrated shapes hang around longer than the smaller ones.
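Since no Python was included above, here is one way that closing experiment might look in OpenCV (a sketch; the file name is an assumption, and the 1-39 radius range follows the description):

import cv2
import numpy as np

img = cv2.imread("image1.png", cv2.IMREAD_GRAYSCALE)  # black blobs on white

sizes = []
for r in range(1, 40):
    disk = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (2 * r + 1, 2 * r + 1))
    # Closing a bright image removes dark blobs smaller than the disk.
    closed = cv2.morphologyEx(img, cv2.MORPH_CLOSE, disk)
    sizes.append(np.count_nonzero(closed < 128))  # black pixels surviving

# 'sizes' decays as r grows; concentrated (larger) blobs survive longer,
# so a slower decay indicates a higher "concentration" of black pixels.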

Houghlines not detecting top line of a simple rectangle

I'm stumped. It's a simple white rectangle on a black background. HoughLines can't find the top line. It can find all the others, just not the top.
Anyone?
https://www.screencast.com/t/bNu4sptcS3a
1. Make sure that the top edge appears in the result of Canny edge detection.
2. Dilate the image to make the edge fatter, so that it can get more votes.
3. Make sure that the resolution of the parameter rho is 1, so that the detector does not miss the line (see the HoughLines documentation for a description of the parameters).
4. Decrease the threshold value for voting, just in case. While the top and bottom lines are supposed to get an equal number of votes, this might differ in practice.
Note that the value of rho should be lower for the top horizontal line than for the bottom one. A sketch of these steps is below.
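A minimal sketch combining those suggestions, assuming OpenCV (the file name and the exact Canny/vote-threshold values are assumptions):

import cv2
import numpy as np

img = cv2.imread("rectangle.png", cv2.IMREAD_GRAYSCALE)  # hypothetical file

edges = cv2.Canny(img, 50, 150)  # 1. check that the top edge survives this
edges = cv2.dilate(edges, np.ones((3, 3), np.uint8))  # 2. fatter edges, more votes

# 3. rho resolution of 1 pixel and theta of 1 degree; 4. a lowered vote threshold
lines = cv2.HoughLines(edges, rho=1, theta=np.pi / 180, threshold=100)
if lines is not None:
    for rho, theta in lines[:, 0]:
        print(f"rho={rho:.1f}, theta={np.degrees(theta):.1f} deg")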

Python subpixel-accurate image crop

I need to crop an image with subpixel accuracy. For example, I might need to create an interpolated rectangular crop with corners (108.5, 350.9) and (368.3, 230.1) out of an image with dimensions 640x480. How can I achieve this?
Edit: It's a reasonable concession to stretch the cropped area to fit it into a data matrix. However, you can't just change the borders of the crop to integer coordinates.
Well, I'm not sure I can call this an answer, because I don't really know what your question is, but I'll try to shed some light on it.
So I guess your problem arises from some misconception.
First of all, DPI, PPI or whatever you want to use is nothing but a factor that tells you how many dots, points or pixels you have per inch. That factor allows you to determine print sizes or to convert between pixel dimensions and inch dimensions.
It is by no means related to cropping an image.
Cropping a rectangular region is a very common task.
Also, having ROIs with sub-pixel coordinates is pretty common, as their coordinates often arise from calculations that yield non-integer values.
Usually you simply round the coordinates to integers and the problem vanishes.
If you want intensity values at sub-pixel coordinates, you can interpolate between neighbouring pixels. But as images cannot have half pixels, you will have to store that information in an image that has more or fewer pixels.
So here's what I would do if I didn't want to use rounded coordinates:
if a coordinate is >= x.5, I'd add a column or row; otherwise I'd skip that pixel.
If I added a column or row, I would interpolate its values.
But to be honest, I don't see any use case for this, and in my career I have never had to use anything but integer coordinates for cropping.
You cannot print fractions of pixels and you cannot display them either, so what's the point?
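For what it's worth, OpenCV already ships the interpolation described above: cv2.getRectSubPix extracts an integer-sized patch whose center may lie at sub-pixel coordinates. A sketch using the corners from the question (the file name is an assumption):

import cv2

img = cv2.imread("input.png", cv2.IMREAD_GRAYSCALE)  # 640x480 in the question

(x0, y0), (x1, y1) = (108.5, 350.9), (368.3, 230.1)  # corners from the question
cx, cy = (x0 + x1) / 2, (y0 + y1) / 2              # sub-pixel center
w, h = round(abs(x1 - x0)), round(abs(y1 - y0))    # patch size must be integer

# Bilinear interpolation around the fractional center.
patch = cv2.getRectSubPix(img, (w, h), (cx, cy))
print(patch.shape)  # (121, 260), i.e. (h, w)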
The solution seems to require that you calculate the center of the rectangle you want to crop out of the image, as well as the rectangle's height and width. Then just scale up the entire image until the desired rectangle has integer dimensions and do a usual crop. You will have to scale the horizontal and vertical dimensions by separate amounts, so this will slightly distort the cropped portion, and you will have to adjust for the distortion in the image encoding format you use.
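A sketch of that scale-then-crop idea with cv2.resize; scaling both axes by 10 suffices here because the question's coordinates have one decimal place (the file name is an assumption):

import cv2

img = cv2.imread("input.png")  # hypothetical file

(x0, y0), (x1, y1) = (108.5, 230.1), (368.3, 350.9)  # corners, sorted
s = 10  # one decimal place, so a factor of 10 makes all coordinates integer
big = cv2.resize(img, None, fx=s, fy=s, interpolation=cv2.INTER_LINEAR)
crop = big[round(y0 * s):round(y1 * s), round(x0 * s):round(x1 * s)]
# 'crop' is 10x oversampled; resize it back down if the original
# sampling density is needed.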

How to remove sun's reflection on a photo using image processing

I have multiple grayscale images in each of which the sun's reflection, also known as glare, appears as a bright glaring spot. It looks like a bright white blob which I want to remove. It is basically the front portion of a car image where the sun's reflection falls on the front steel grille of the car.
I want to remove this reflection as much as possible. I would appreciate it if anyone could point me to a good algorithm, preferably in Python, that I can leverage to remove this glare and pre-process the image as much as possible.
I tried applying a threshold to the image pixels and then setting anything above 200 to a value of 128. It doesn't work very well, because other parts of the image also contain white and those get affected.
Do not forget to add some sample images...
I would first identify the sun spot by intensity and by the shape of the graph intensity = f(distance from spot middle); it may have a distinct shape that could be used to identify the spot more reliably.
After that I would bleed colors from the spot's outer area to its inside, recoloring the spot with its surrounding color:
1. Find all spot pixels that are next to non-spot pixels.
2. Recolor them to the average of their neighboring non-spot pixels.
3. Clear them in the spot mask.
4. Loop this until no spot pixel is left in the mask.
Note that without any input images to test on, this is just a theory.
Also, if you have the source RGB image, not just gray, the color patterns may also be used to help identify the spots, by checking for saturation to white and/or some rainbow-like pattern.
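The bleeding loop above is essentially what OpenCV's inpainting does out of the box; a minimal sketch, assuming the glare is the brightest region (the file name, threshold and radius are assumptions, and the question's caveat still applies: other bright areas will be filled too unless the mask is restricted, e.g. by the shape test described above):

import cv2
import numpy as np

img = cv2.imread("car.png", cv2.IMREAD_GRAYSCALE)  # hypothetical file

# Spot mask: very bright pixels, grown a little so the mask
# fully covers the glare's soft edge.
_, mask = cv2.threshold(img, 220, 255, cv2.THRESH_BINARY)
mask = cv2.dilate(mask, np.ones((5, 5), np.uint8), iterations=2)

# Inpainting fills the masked region from its surroundings: the same
# "recolor from the neighbors and loop" idea as above.
result = cv2.inpaint(img, mask, inpaintRadius=5, flags=cv2.INPAINT_TELEA)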
