Python - find out how much of an image is black

I am downloading satellite pictures like this
(source: u0553130 at home.chpc.utah.edu)
Since some images are mostly black, like this one, I don't want to save them.
How can I use python to check if the image is more than 50% black?

You're dealing with GIFs, which are mostly grayscale by the look of your example image, so you might expect most of the RGB components to be equal.
Using PIL:
from PIL import Image

im = Image.open('im.gif').convert('L')  # convert the palette GIF to grayscale values 0-255
pixels = im.getdata()                   # get the pixels as a flattened sequence
black_thresh = 50
nblack = 0
for pixel in pixels:
    if pixel < black_thresh:
        nblack += 1
n = len(pixels)

if nblack / float(n) > 0.5:
    print("mostly black")
Adjust your threshold for "black" between 0 (pitch black) and 255 (bright white) as appropriate.

The thorough way is to count the pixels using something like PIL, as given in the other answers.
However, if they're all compressed images, you may be able to check the file size, as images with lots of plain-colour areas should compress a lot more than ones with variation like the cloud cover.
With some tests, you could at least find a heuristic for instantly discarding the obviously black images (or keeping the obviously cloudy ones) without an expensive loop over their pixels; images closer to the 50% borderline can then be checked pixel by pixel.
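For example, a crude pre-filter along those lines could be as simple as a file-size check; the cutoff below is a made-up placeholder you would calibrate against images you have already classified:

import os

def small_enough_to_skip(path, size_cutoff=15000):
    # size_cutoff (bytes) is a hypothetical value; calibrate it on your data
    return os.path.getsize(path) < size_cutoff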
Additionally, when iterating over the pixels, you don't need to count all the black pixels, and then check if at least 50% are black. Instead, stop counting and discard as soon as you know at least 50% are black.
A second optimisation: if you know the images are generally mostly cloudy rather than mostly black, go the other way. Count the number of non-black pixels, and stop and keep the images as soon as that crosses 50%.
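A minimal sketch of the first early-exit idea (count black pixels and stop at the 50% mark), reusing the PIL loop from above; the threshold value is illustrative:

from PIL import Image

def is_mostly_black(path, black_thresh=50):
    im = Image.open(path).convert('L')   # grayscale, values 0-255
    pixels = list(im.getdata())
    cutoff = len(pixels) // 2
    nblack = 0
    for pixel in pixels:
        if pixel < black_thresh:
            nblack += 1
            if nblack > cutoff:          # majority already black: stop early
                return True
    return False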

Load image
Read each pixel and increment a counter if the pixel equals (0, 0, 0)
If the counter <= (image.width * image.height) / 2
Save image
Or check whether a pixel is almost black instead, e.g. by treating it as black when each of its R, G and B components is less than 15 (see the sketch below).
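In PIL terms, that recipe might look roughly like the following sketch (the tolerance of 15 is the example value from above):

from PIL import Image

def save_if_not_mostly_black(path, out_path, tol=15):
    im = Image.open(path).convert('RGB')
    # a pixel counts as "almost black" when R, G and B are all below tol
    nblack = sum(1 for r, g, b in im.getdata()
                 if r < tol and g < tol and b < tol)
    if nblack <= (im.width * im.height) / 2:
        im.save(out_path)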

In your test image, the most common color has an RGB value of (1, 1, 1). This is very black, but not exactly black. My answer uses the PIL library, webcolors, and a generous helping of code from this answer.
from PIL import Image
import webcolors

def closest_color(requested_color):
    min_colors = {}
    for key, name in webcolors.css3_hex_to_names.items():
        r_c, g_c, b_c = webcolors.hex_to_rgb(key)
        rd = (r_c - requested_color[0]) ** 2
        gd = (g_c - requested_color[1]) ** 2
        bd = (b_c - requested_color[2]) ** 2
        min_colors[(rd + gd + bd)] = name
    return min_colors[min(min_colors.keys())]

def get_color_name(requested_color):
    try:
        closest_name = actual_name = webcolors.rgb_to_name(requested_color)
    except ValueError:
        closest_name = closest_color(requested_color)
        actual_name = None
    return actual_name, closest_name

if __name__ == '__main__':
    lt = Image.open('test.gif').convert('RGB').getcolors()
    lt.sort(key=lambda tup: tup[0], reverse=True)
    actual_name, closest_name = get_color_name(lt[0][1])  # lt[0][1] is the RGB value
    print(lt[0], actual_name, closest_name)
Output:
(531162, (1, 1, 1)) None black
In this case, you'd be interested in the closest_name variable. The first entry (lt[0]) shows the count and RGB value of the most common color. That value doesn't have a defined web color name, hence the None for actual_name.
Explanation:
This opens the file you've provided, converts it to RGB and then runs PIL's getcolors method on the image. The result is a list of tuples in the format (count, RGB_color_value). I then sort the list (in reverse order) and, using the functions from the other answer, pass in the most common RGB color value (now the first tuple in the list; the RGB value is the second element of that tuple).

How to remove the empty images using python [duplicate]

Using the Python Imaging Library PIL, how can someone detect if an image has all its pixels black or white?
Update
Condition: do not iterate through each pixel!
if not img.getbbox():
... will test to see whether an image is completely black. (Image.getbbox() returns the falsy None if there are no non-black pixels in the image, otherwise it returns a tuple of points, which is truthy.) To test whether an image is completely white, invert it first:
from PIL import ImageChops
if not ImageChops.invert(img).getbbox():
You can also use img.getextrema(). This will tell you the highest and lowest values within the image. To work with this most easily you should probably convert the image to grayscale mode first (otherwise the extrema might be an RGB or RGBA tuple, or a single grayscale value, or an index, and you have to deal with all those).
extrema = img.convert("L").getextrema()
if extrema == (0, 0):
    pass  # all black
elif extrema == (255, 255):
    pass  # all white
The latter method will likely be faster, but not so you'd notice in most applications (both will be quite fast).
A one-line version of the above technique that tests for either black or white:
if sum(img.convert("L").getextrema()) in (0, 510):
    pass  # either all black or all white
Expanding on Kindall:
if you look at an image called img with:
extrema = img.convert("L").getextrema()
It gives you the range of pixel values in the image, so an all-black image would be (0, 0) and an all-white image (255, 255). So you can look at:
if extrema[0] == extrema[1]:
    return "This image is one solid color, so I won't use it"
else:
    # do something with the image img
    pass
This was useful to me when I was creating a thumbnail from some data and wanted to make sure it was being read correctly.
from PIL import Image
img = Image.open("test.png")
clrs = img.getcolors()
clrs contains [(num_of_occurrences, color), ...]
By checking for len(clrs) == 1 you can verify that the image contains only one color, and by looking at the second element of the first tuple in clrs you can infer that color.
In case the image contains multiple colors, you can also handle almost-completely-single-colored images by taking the number of occurrences into account, e.g. if 99% of the pixels share the same color.
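A sketch of both checks; note that getcolors() returns None when the image holds more colors than its maxcolors argument (256 by default), so the limit is raised here:

from PIL import Image

img = Image.open("test.png")
clrs = img.getcolors(maxcolors=img.width * img.height)
if clrs is not None:
    if len(clrs) == 1:
        print("single color:", clrs[0][1])
    else:
        count, color = max(clrs, key=lambda t: t[0])  # most common color
        if count / (img.width * img.height) >= 0.99:
            print("almost entirely", color)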
I tried the kindall solution ImageChops.invert(img).getbbox() without success; my test images failed.
I had noticed a problem: white should be 255, BUT I have found white images whose numeric extrema are (0, 0). Why? See the update below.
I changed kindall's second solution (getextrema), which works correctly, so that it doesn't need image conversion. I wrote a function and verified that it works with both grayscale and RGB images:
def is_monochromatic_image(img):
    extr = img.getextrema()
    a = 0
    for i in extr:
        if isinstance(i, tuple):      # RGB(A): one (min, max) tuple per band
            a += abs(i[0] - i[1])
        else:                         # grayscale: extr itself is (min, max)
            a = abs(extr[0] - extr[1])
            break
    return a == 0
The img argument is a PIL Image object.
With small modifications you can also check whether images are black or white, but you have to decide whether "white" is 0 or 255; perhaps you have the definitive answer, I have not. :-)
Hope it's useful.
UPDATE: I suspect the white images with zero extrema may be PNGs, or some other image format with transparency.
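If transparency really is the culprit, one workaround is to flatten the alpha channel onto a white background before calling getextrema(); this is only a guess at what my failing images contained:

from PIL import Image

def flatten_to_white(img):
    # composite RGBA/LA images over white so fully transparent
    # pixels no longer read as zero-valued "black"
    if img.mode in ("RGBA", "LA"):
        background = Image.new("RGB", img.size, (255, 255, 255))
        background.paste(img, mask=img.split()[-1])  # alpha band as mask
        return background
    return img.convert("RGB")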

Find minimal number of rectangles in the image

I have binary images where rectangles are placed randomly and I want to get the positions and sizes of those rectangles.
If possible I want the minimal number of rectangles necessary to exactly recreate the image.
On the left is my original image, and on the right the image I get after applying scipy's find_objects()
(as suggested for this question).
import numpy as np
import scipy.ndimage

# image = scipy.ndimage.zoom(image, 9, order=0)
labels, n = scipy.ndimage.label(image, np.ones((3, 3)))
bboxes = scipy.ndimage.find_objects(labels)

img_new = np.zeros_like(image)
for bb in bboxes:
    img_new[bb[0], bb[1]] = 1
This works fine if the rectangles are far apart, but if they overlap and build more complex structures this algorithm just gives me the largest bounding box (upsampling the image made no difference). I have the feeling that there should already exist a scipy or opencv method which does this.
I would be glad to know if somebody has an idea on how to tackle this problem or even better knows of an existing solution.
As a result I want a list of rectangles (i.e. lower-left corner : upper-right corner) in the image. The condition is that when I redraw those filled rectangles I want to get exactly the same image as before. If possible, the number of rectangles should be minimal.
Here is the code for generating sample images (and a more complex example, original vs. scipy):
import numpy as np

def random_rectangle_image(grid_size, n_obstacles, rectangle_limits):
    n_dim = 2
    rect_pos = np.random.randint(low=0, high=grid_size-rectangle_limits[0]+1,
                                 size=(n_obstacles, n_dim))
    rect_size = np.random.randint(low=rectangle_limits[0],
                                  high=rectangle_limits[1]+1,
                                  size=(n_obstacles, n_dim))

    # Crop rectangle size if it goes over the boundaries of the world
    diff = rect_pos + rect_size
    ex = np.where(diff > grid_size, True, False)
    rect_size[ex] -= (diff - grid_size)[ex].astype(int)

    img = np.zeros((grid_size,)*n_dim, dtype=bool)
    for i in range(n_obstacles):
        p_i = np.array(rect_pos[i])
        ps_i = p_i + np.array(rect_size[i])
        img[tuple(map(slice, p_i, ps_i))] = True
    return img

img = random_rectangle_image(grid_size=64, n_obstacles=30,
                             rectangle_limits=[4, 10])
Here is something to get you started: a naïve algorithm that walks your image and creates rectangles as large as possible. As it is now, it only marks the rectangles but does not report back coordinates or counts. This is to visualize the algorithm alone.
It does not need any external libraries except for PIL, to load and access the left side image when saved as a PNG. I'm assuming a border of 15 pixels all around can be ignored.
from PIL import Image

def fill_rect(pixels, xp, yp, w, h):
    # fill the found rectangle red, with an orange outline
    for y in range(h):
        for x in range(w):
            pixels[xp+x, yp+y] = (255, 0, 0, 255)
    for y in range(h):
        pixels[xp, yp+y] = (255, 192, 0, 255)
        pixels[xp+w-1, yp+y] = (255, 192, 0, 255)
    for x in range(w):
        pixels[xp+x, yp] = (255, 192, 0, 255)
        pixels[xp+x, yp+h-1] = (255, 192, 0, 255)

def find_rect(pixels, x, y, maxx, maxy):
    # assume we're at the top left
    # get max horizontal span
    width = 0
    height = 1
    while x+width < maxx and pixels[x+width, y] == (0, 0, 0, 255):
        width += 1
    # now walk down as long as the full row is still black
    while y+height < maxy:
        if any(pixels[w, y+height] != (0, 0, 0, 255)
               for w in range(x, x+width)):
            break
        height += 1
    # fill rectangle
    fill_rect(pixels, x, y, width, height)

image = Image.open('A.png')
pixels = image.load()
width, height = image.size
print(width, height)
for y in range(16, height-15, 1):
    for x in range(16, width-15, 1):
        if pixels[x, y] == (0, 0, 0, 255):
            find_rect(pixels, x, y, width, height)
image.show()
From the output
you can observe that the detection algorithm can be improved: for example, the two "obvious" top-left rectangles are split up into three. Similarly, the larger structure in the center contains one rectangle more than absolutely needed.
Possible improvements are either to adjust the find_rect routine to locate a best fit¹, or store the coordinates and use math (beyond my ken) to find which rectangles may be joined.
¹ A further idea on this. Currently all found rectangles are immediately filled with the "found" color. You could try to detect obviously multiple rectangles, and then, after marking the first, the other rectangle(s) to check may then either be black or red. Off the cuff I'd say you'd need to try different scan orders (top-to-bottom or reverse, left-to-right or reverse) to actually find the minimally needed number of rectangles in any combination.

Starting with black 100x100 jpg and turning some pixels white with a probability

Let's say I have a 100x100 black jpeg. Every pixel has a 1% chance of turning white every iteration, and I execute 30 loops.
If generating it is better, I'm thinking of some kind of 100x100 array where every value starts black (0) and some turn white (1), and then turning this array into an image that is printed to the console or saved (for every iteration).
If it is even possible, I would like to further add a chance modifier where some columns have a higher chance than others, based on the height of a circular curve: maximum chance at the centre, and decreasing further out.
How would I go about doing this? If the circle equation part cannot be done, no problem. Just getting a square grid changing is sufficient.
This should get you going:
import numpy as np
import imageio  # scipy.misc.imsave is gone from modern SciPy; imageio is a stand-in

N = 100

def iterate(img, filename, it_num):
    # each pixel flips to white (255) with 1% probability per iteration
    swaps = np.random.choice((0, 255), size=(N, N), p=(0.99, 0.01))
    img = np.clip(img + swaps, 0, 255).astype(np.uint8)
    imageio.imwrite(filename + '_it_' + str(it_num) + '.jpg', img)
    if np.all(img):
        return None
    else:
        return img

img = np.zeros((N, N), dtype=np.uint8)
it = 1
while img is not None:
    img = iterate(img, 'output', it)
    it += 1
The last image, which is entirely 255, should be white but actually comes out black. I'm not sure why, and I'm also not sure if that's important.
Your circle-equation part is not formulated precisely, so I skipped it. It's probably not that hard using numpy's random tools.
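If it helps, here is one possible reading of it: give each column its own flip probability, scaled by the height of a semicircle centred on the middle column. The semicircular weighting is my guess at what was meant:

import numpy as np

N = 100
base_p = 0.01                       # 1% chance at the centre column
cols = np.arange(N)
# semicircular profile: 1.0 at the centre, falling to 0 at the edges
profile = np.sqrt(np.maximum(0.0, 1.0 - ((cols - N / 2) / (N / 2)) ** 2))
col_p = base_p * profile            # per-column flip probability

img = np.zeros((N, N), dtype=np.uint8)
for _ in range(30):
    flips = np.random.random((N, N)) < col_p   # broadcasts across rows
    img[flips] = 255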

Trim scanned images with PIL?

What would be the approach to trim an image that's been input using a scanner and therefore has a large white/black area?
The entropy solution seems problematic and computationally intensive. Why not edge-detect?
I just wrote this Python code to solve this same problem for myself. My background was dirty white-ish, so the criterion I used was darkness and color. I simplified this criterion by just taking the smallest of the R, G, or B values for each pixel, so that black or saturated red both stood out the same. I also used the average of some number of the darkest pixels in each row or column. Then I started at each edge and worked my way in until I crossed a threshold.
Here is my code:
#these values set how sensitive the bounding box detection is
threshold = 200 #the average of the darkest values must be _below_ this to count (0 is darkest, 255 is lightest)
obviousness = 50 #how many of the darkest pixels to include (1 would mean a single dark pixel triggers it)

from PIL import Image
import numpy as np

def find_line(vals):
    #implement edge detection once, use many times
    for i, tmp in enumerate(vals):
        tmp.sort()
        average = float(sum(tmp[:obviousness])) / len(tmp[:obviousness])
        if average <= threshold:
            return i
    return i  #i is left over from failed threshold finding, it is the bounds

def getbox(img):
    #get the bounding box of the interesting part of a PIL image object
    #this is done by getting the darkest of the R, G or B value of each pixel
    #and finding where the edge gets dark/colored enough
    #returns a tuple of (left, upper, right, lower)

    width, height = img.size  #for making a 2d array
    retval = [0, 0, width, height]  #values will be disposed of, but this is a black image's box

    pixels = list(img.getdata())
    vals = []  #store the value of the darkest color
    for pixel in pixels:
        vals.append(min(pixel))  #the darkest of the R, G or B values

    #make 2d array
    vals = np.array([vals[i * width:(i + 1) * width] for i in range(height)])

    #start with upper bounds
    forupper = vals.copy()
    retval[1] = find_line(forupper)

    #next, do lower bounds
    forlower = vals.copy()
    forlower = np.flipud(forlower)
    retval[3] = height - find_line(forlower)

    #left edge, same as before but rotate the data so left edge is top edge
    forleft = vals.copy()
    forleft = np.swapaxes(forleft, 0, 1)
    retval[0] = find_line(forleft)

    #and right edge is bottom edge of rotated array
    forright = vals.copy()
    forright = np.swapaxes(forright, 0, 1)
    forright = np.flipud(forright)
    retval[2] = width - find_line(forright)

    if retval[0] >= retval[2] or retval[1] >= retval[3]:
        print("error, bounding box is not legit")
        return None
    return tuple(retval)

if __name__ == '__main__':
    image = Image.open('cat.jpg')
    box = getbox(image)
    print("result is:", box)
    result = image.crop(box)
    result.show()
For starters, here is a similar question, a related question, and another related question.
Here is just one idea; there are certainly other approaches. I would select an arbitrary crop edge and then measure the entropy* on either side of the line, then re-select the crop line (probably using something like a bisection method) until the entropy of the cropped-out portion falls below a defined threshold. I suspect you may need to resort to a brute-force root-finding method, as you will not have a good indication of when you have cropped too little. Then repeat for the remaining three edges.
*I recall discovering that the entropy method in the referenced website was not completely accurate, but I could not find my notes (I'm sure it was in a SO post, however.)
Edit:
Other criteria for the "emptiness" of an image portion (other than entropy) might be contrast ratio or contrast ratio on an edge-detect result.
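To make the "emptiness" idea concrete, here is a minimal sketch of one such score for a candidate strip, using plain histogram entropy; this is my own formulation, not the referenced website's exact method:

import math
from PIL import Image

def strip_entropy(img, box):
    # Shannon entropy of the grayscale histogram of a cropped strip;
    # values near zero mean the strip is essentially one flat color
    hist = img.crop(box).convert("L").histogram()
    total = sum(hist)
    return -sum((h / total) * math.log2(h / total) for h in hist if h)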

What are some methods to analyze image brightness using Python?

I'd like some advice on performing a simple image analysis in Python. I need to calculate a value for the "brightness" of an image. I know PIL is the go-to library for doing something like this, and it has a built-in histogram function.
What I need is a "perceived brightness" value from which I can decide whether further adjustments to the image are necessary. So what are some of the basic techniques that will work in this situation? Should I just work with the RGB values, or will a histogram give me something close enough?
One possible solution might be to combine the two: generate average R, G, and B values using the histogram, then apply the "perceived brightness" formula.
Using the techniques mentioned in the question, I came up with a few different versions.
Each method returns a value that is close to, but not exactly the same as, the others. Also, all methods run at about the same speed except for the last one, which is much slower depending on the image size.
Convert image to greyscale, return average pixel brightness.
from PIL import Image, ImageStat
import math

def brightness(im_file):
    im = Image.open(im_file).convert('L')
    stat = ImageStat.Stat(im)
    return stat.mean[0]
Convert image to greyscale, return RMS pixel brightness.
def brightness(im_file):
    im = Image.open(im_file).convert('L')
    stat = ImageStat.Stat(im)
    return stat.rms[0]
Average pixels, then transform to "perceived brightness".
def brightness(im_file):
    im = Image.open(im_file)
    stat = ImageStat.Stat(im)
    r, g, b = stat.mean
    return math.sqrt(0.241*(r**2) + 0.691*(g**2) + 0.068*(b**2))
RMS of pixels, then transform to "perceived brightness".
def brightness(im_file):
    im = Image.open(im_file)
    stat = ImageStat.Stat(im)
    r, g, b = stat.rms
    return math.sqrt(0.241*(r**2) + 0.691*(g**2) + 0.068*(b**2))
Calculate "perceived brightness" of pixels, then return average.
def brightness(im_file):
    im = Image.open(im_file)
    stat = ImageStat.Stat(im)
    gs = (math.sqrt(0.241*(r**2) + 0.691*(g**2) + 0.068*(b**2))
          for r, g, b in im.getdata())
    return sum(gs) / stat.count[0]
Update Test Results
I ran a simulation against 200 images. I found that methods #2 and #4 gave almost identical results, and methods #3 and #5 were also nearly identical. Method #1 closely followed #3 and #5 (with a few exceptions).
Given that you're just looking for an average across the whole image, and not per-pixel brightness values, averaging PIL's histogram and applying the brightness function to the output seems like the best approach for that library.
If using ImageMagick (with the PythonMagick bindings), I would suggest using the identify command with the "verbose" option set. This will provide you with a mean value for each channel, saving you the need to sum and average a histogram; you can apply the perceived-brightness weights to each channel's mean directly.
I think your best result would come from converting the RGB to grayscale using your favorite formula, then taking the histogram of that result. I'm not sure if the mean or the median of the histogram would be more appropriate, but on most images they are probably similar.
I'm not sure how to do the conversion to grayscale in PIL using an arbitrary formula, but I'm guessing it's possible.
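If memory serves, convert() accepts an optional conversion matrix for RGB input, which would allow an arbitrary weighting. A sketch using Rec. 709 luma weights; treat the matrix support as something to verify against your PIL/Pillow version:

from PIL import Image

im = Image.open("photo.jpg").convert("RGB")
# a 4-tuple matrix gives one output band: L = 0.2126*R + 0.7152*G + 0.0722*B
grey = im.convert("L", matrix=(0.2126, 0.7152, 0.0722, 0))
hist = grey.histogram()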
The code below will give you the brightness level of an image from 0 to 10:
1. Calculate the average brightness of the image after converting it to HSV format using OpenCV.
2. Find where this value lies in the list of brightness ranges.
import numpy as np
import cv2
import sys
from collections import namedtuple

# brange: brightness range
# bval: brightness value
BLevel = namedtuple("BLevel", ['brange', 'bval'])

# all possible levels
_blevels = [
    BLevel(brange=range(0, 24), bval=0),
    BLevel(brange=range(23, 47), bval=1),
    BLevel(brange=range(46, 70), bval=2),
    BLevel(brange=range(69, 93), bval=3),
    BLevel(brange=range(92, 116), bval=4),
    BLevel(brange=range(115, 140), bval=5),
    BLevel(brange=range(139, 163), bval=6),
    BLevel(brange=range(162, 186), bval=7),
    BLevel(brange=range(185, 209), bval=8),
    BLevel(brange=range(208, 232), bval=9),
    BLevel(brange=range(231, 256), bval=10),
]
def detect_level(h_val):
    h_val = int(h_val)
    for blevel in _blevels:
        if h_val in blevel.brange:
            return blevel.bval
    raise ValueError("Brightness Level Out of Range")

def get_img_avg_brightness():
    if len(sys.argv) < 2:
        print("USAGE: python3.7 brightness.py <image_path>")
        sys.exit(1)
    img = cv2.imread(sys.argv[1])
    hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)
    _, _, v = cv2.split(hsv)
    return int(np.average(v.flatten()))

if __name__ == '__main__':
    print("the image brightness level is: {0}".format(
        detect_level(get_img_avg_brightness())))
This can be done by converting the BGR image from cv2 to grayscale and then reading the intensity at pixel coordinates (x, y). It's explained well in this document: https://docs.opencv.org/3.4/d5/d98/tutorial_mat_operations.html
Scalar intensity = img.at<uchar>(y, x);
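That snippet is the C++ form from the OpenCV docs; a rough Python equivalent with cv2 might look like this (the file name and coordinates are placeholders):

import cv2

img = cv2.imread("photo.jpg")                  # OpenCV loads images as BGR
grey = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
y, x = 10, 20                                  # example pixel coordinates
intensity = grey[y, x]                         # single-pixel intensity
mean_brightness = grey.mean()                  # whole-image average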
def calculate_brightness(image):
    greyscale_image = image.convert('L')
    histogram = greyscale_image.histogram()
    pixels = sum(histogram)
    brightness = scale = len(histogram)

    for index in range(0, scale):
        ratio = histogram[index] / pixels
        brightness += ratio * (-scale + index)

    return 1 if brightness == 255 else brightness / scale
