Is this Python code buggy or not? - python

I'm testing this example with an all-black picture, but it doesn't print "mostly black". Why not? Please help me.
from PIL import Image

im = Image.open('im.gif')
pixels = im.getdata()  # get the pixels as a flattened sequence
black_thresh = 50
nblack = 0
for pixel in pixels:
    if pixel < black_thresh:
        nblack += 1
n = len(pixels)
if (nblack / float(n)) > 0.5:
    print("mostly black")

The image's pixels are returned as tuples containing each of the color components (e.g. RGB), not as a single value, so the comparison against the threshold never succeeds. You can take the average value and compare it against the threshold like this:
for pixel in pixels:
    if sum(pixel) / len(pixel) < black_thresh:
        nblack += 1
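The same counting logic can be sketched without PIL, on a plain list of RGB tuples standing in for the result of getdata() (the pixel values here are made up for illustration):

```python
# Hypothetical stand-in for im.getdata() on an RGB image: a list of 3-tuples,
# 90 pure-black pixels and 10 light-gray ones.
pixels = [(0, 0, 0)] * 90 + [(200, 200, 200)] * 10

black_thresh = 50
# A pixel counts as "black" when the average of its channels is below the threshold.
nblack = sum(1 for p in pixels if sum(p) / len(p) < black_thresh)

if nblack / float(len(pixels)) > 0.5:
    print("mostly black")
```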

Related

Converting a PIL image in python to grayscale pixel by pixel, leaving 1 color alone

I'm trying to convert an image to grayscale in Python 3.4.2, but I would like to leave all "red" pixels alone:
from numpy import *
from pylab import *
from PIL import Image
from PIL import ImageOps

def grayscale(picture):
    res = Image.new(picture.mode, picture.size)
    red = '150,45,45'   # for now I'm just trying to change pixels
    x = red.split(",")  # with R value less than 150 and G/B values
                        # greater than 45 to greyscale
    width, height = picture.size
    for i in range(0, width):
        for j in range(0, height):
            pixel = picture.getpixel((i, j))  # get a pixel
            pixelStr = str(pixel)
            pixelStr = pixelStr.replace('(', '').replace(')', '')
            pixelStr.split(",")  # remove parentheses and split so we
                                 # can convert the pixel into 3 integers
            # if it's not specifically in the range of values we're trying to convert,
            # we place the original pixel; otherwise we convert the pixel to grayscale
            if not (int(pixelStr[0]) >= int(x[0]) and int(pixelStr[1]) <= int(x[1]) and int(pixelStr[2]) <= int(x[2])):
                avg = (pixel[0] + pixel[1] + pixel[2]) / 3
                res.putpixel((i, j), (int(avg), int(avg), int(avg)))
            else:
                res.putpixel(pixel)
    return res
Right now this converts the image to grayscale, but as far as I can tell it doesn't leave any colored pixels like I thought it would. Any help, suggestions, or alternate ways to accomplish my task would be greatly appreciated.
Thank you.
So, in case anyone reads this in the future: my code was not working due to an error on my part.
    res.putpixel(pixel)
should have been throwing an error, because I didn't give it a position to place the pixel, just the color information. Since it wasn't throwing an error, we never actually got inside my else: statement.
Asked a teammate for help and we changed my code to this:
from numpy import *
from PIL import Image

red_lower_threshold = 150
green_blue_diff_threshold = 50

def grayscale(picture):
    res = Image.new(picture.mode, picture.size)
    for i in range(0, picture.size[0]):
        for j in range(0, picture.size[1]):
            pixel = picture.getpixel((i, j))  # get a pixel
            red = pixel[0]
            green = pixel[1]
            blue = pixel[2]
            if (red > red_lower_threshold and abs(green - blue) < green_blue_diff_threshold):
                res.putpixel((i, j), pixel)
            else:
                avg = (pixel[0] + pixel[1] + pixel[2]) / 3
                res.putpixel((i, j), (int(avg), int(avg), int(avg)))
    res.save('output.jpg')
    return res
It's not perfect, but it's a workable solution.
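For larger images, the same keep-red-else-average rule can be expressed with NumPy array operations instead of per-pixel putpixel calls. This is only a sketch with the same thresholds, operating on an (H, W, 3) uint8 array such as numpy.asarray(picture); the function name is my own:

```python
import numpy as np

red_lower_threshold = 150
green_blue_diff_threshold = 50

def grayscale_np(arr):
    """Average non-"red" pixels of an (H, W, 3) uint8 array; keep red ones."""
    r = arr[..., 0].astype(int)
    g = arr[..., 1].astype(int)
    b = arr[..., 2].astype(int)
    # Same condition as the loop version: keep pixels that look red.
    keep = (r > red_lower_threshold) & (np.abs(g - b) < green_blue_diff_threshold)
    avg = ((r + g + b) // 3).astype(np.uint8)
    out = np.stack([avg, avg, avg], axis=-1)
    out[keep] = arr[keep]  # restore the original color where the test passed
    return out
```

Converting back for saving would be something like Image.fromarray(grayscale_np(np.asarray(picture))).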

Generate Color Spectrum using Python

I want to generate a color spectrum like that:
as a PNG picture, but the width and height of the picture should be adjustable. The colors should be specified as hex values like HTML color codes (for example #FF0000).
I know how the scale works, but I suspect there are already solutions for counting blue up to red, then counting red down, etc., at a resolution that yields the needed width of the picture.
For generating the Picture i thought about PIL:
from PIL import Image
im = Image.new("RGB", (width, height))
im.putdata(DEC_tuples)
im.save("Picture", "PNG")
Are there any existing working solutions available?
I found a solution myself and it works pretty well. The generated image ends up with a slightly different width, because I avoid generating floating-point step sizes.
from PIL import Image

width = 300   # expected width of generated image
height = 100  # height of generated image
specratio = 255*6 / width
print("SpecRatio: " + str(specratio))
red = 255
green = 0
blue = 0
colors = []
step = round(specratio)
for u in range(0, height):
    for i in range(0, 255*6+1, step):
        if i > 0 and i <= 255:
            blue += step
        elif i > 255 and i <= 255*2:
            red -= step
        elif i > 255*2 and i <= 255*3:
            green += step
        elif i > 255*3 and i <= 255*4:
            blue -= step
        elif i > 255*4 and i <= 255*5:
            red += step
        elif i > 255*5 and i <= 255*6:
            green -= step
        colors.append((red, green, blue))
newwidth = int(i/step+1)  # generated width of image without producing float numbers
print(str(colors))
im = Image.new("RGB", (newwidth, height))
im.putdata(colors)
im.save("Picture", "PNG")
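A shorter alternative (not from the original post) is to let the standard library's colorsys module walk the hue axis. This sketch builds one row of fully saturated colors and repeats it for every scanline; the width and height values are just examples:

```python
import colorsys

width, height = 300, 100
row = []
for x in range(width):
    # Hue runs from 0 (red) across the spectrum as x grows.
    r, g, b = colorsys.hsv_to_rgb(x / width, 1.0, 1.0)
    row.append((int(r * 255), int(g * 255), int(b * 255)))
colors = row * height  # same row for every scanline

# Hex codes in HTML notation, e.g. '#FF0000' for pure red:
hex_codes = ['#%02X%02X%02X' % c for c in row]
```

The colors list can then be passed to im.putdata(colors) exactly as in the PIL snippet above.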

Python: create image with continuous input

I have code that converts an image to black and white.
Now I want to build a new image with reference to the original image.
The output of the original image is the X-/Y-coordinates plus "1" or "0" for black and white.
The new image will receive this information, but not chronologically.
Therefore it must check, and give a negative response, if it has already received information about a specific coordinate, so that double entries are avoided.
I haven't found many examples similar to this; only some that go in roughly the right direction.
Does anyone have an idea how to realize that?
UPDATE:
I built code which turns a pixel of the white image black if the corresponding pixel in the original image is black (otherwise it leaves it white).
Furthermore, each used coordinate is entered into a list and checked against it.
However, this part is not working properly:
although the coordinate [10, 10] has been used in the loop before, the code displays "Coordinate not in the system".
Any help would be appreciated!
import cv2
import numpy

white = cv2.imread('white.jpg')  # loading white image
white = cv2.resize(white, (640, 480))  # adjusting it to the size of the original image
y = 0  # for testing purposes the white image gets blackened manually
x = 0
j = 0
while j < 50:
    content = numpy.zeros((200, 2))  # creating a list with 200 entries, every entry contains 2 values
    content = ([x, y])  # adding two values to the list
    if condition[y, x] == 1:  # condition = 1 means that in the reference picture at this coordinate the pixel is black
        white[y, x] = 0  # "0" creates a black pixel at the specified coordinate on the white image
    x += 5
    y += 5
    j += 1
x = 10  # taking a value which has already been used
y = 10
try:
    b = content.index([x, y])  # check if coordinate is in the list
except ValueError:
    print("Coordinate not in the system")
else:
    print("Coordinate already in the system")
i = 0
while i < 100:
    cv2.imshow('Bild', white)  # displays the image
    if cv2.waitKey(1) == ord('q'):
        break
It took me a while, but I was able to solve it without any complex lists or arrays.
Might not be the most elegant way, but at least it is working!
I created a second white picture (the reference) which is checked to see whether a coordinate has already been used or not.
If the coordinate has not been used, a black pixel is created there.
The next time this coordinate is checked, a black pixel will be found, indicating that it has been used.
In the end the white image will contain 49 black pixels (because position [10, 10] has already been used and will not be painted).
import cv2
import numpy

white = cv2.imread('C:\white.jpg')  # loading white image
reference = cv2.imread('C:\white.jpg')  # loading white image
white = cv2.resize(white, (640, 480))  # adjusting it to the size of the original image
reference = cv2.resize(white, (640, 480))  # adjusting it to the size of the original image
y = 0  # for testing purposes the white image gets blackened manually
x = 0
j = 0
reference[10, 10] = 0
while j < 50:
    if [255, 255, 255] in reference[y, x]:
        reference[y, x] = 0  # "0" creates a black pixel at the specified coordinate on the reference image
        white[y, x] = 0  # "0" creates a black pixel at the specified coordinate on the white image
        print("Coordinate not in system")
    else:
        print("coordinate already in system")
    x += 5
    y += 5
    j += 1
i = 0
while i < 100:
    cv2.imshow('image copy', white)  # displays the image
    if cv2.waitKey(1) == ord('q'):
        break
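For completeness, the bookkeeping itself does not need a second image: a Python set of (x, y) tuples answers "already used?" directly. This is only a sketch mirroring the 50-step walk and the [10, 10] collision from the example above; the helper name is made up:

```python
visited = set()

def mark(x, y):
    """Return True if (x, y) is new (and mark it), False if already used."""
    if (x, y) in visited:
        return False
    visited.add((x, y))
    return True

# Same walk as the example: 50 coordinates in steps of 5.
for j in range(50):
    mark(j * 5, j * 5)

print(mark(10, 10))  # (10, 10) was visited at j == 2, so this prints False
```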

Stitching final size and offset

I am making a stitcher with OpenCV and Python. All works well, except one thing: I can't manage to compute the exact final size of the resulting picture.
My image is always too big and I get black borders. Moreover, the offset doesn't seem to be correct, because there is a black line where the pictures merged.
Here is my function :
def calculate_size(size_image1, size_image2, homography):
    ## Calculate the size and offset of the stitched panorama.
    offset = abs((homography*(size_image2[0]-1, size_image2[1]-1, 1))[0:2, 2])
    print offset
    size = (size_image1[1] + int(offset[0]), size_image1[0] + int(offset[1]))
    if (homography*(0,0,1))[0][1] > 0:
        offset[0] = 0
    if (homography*(0,0,1))[1][2] > 0:
        offset[1] = 0
    ## Update the homography to shift by the offset
    homography[0:2, 2] += offset
    return (size, offset)

## 4. Combine images into a panorama. [4] --------------------------------
def merge_images(image1, image2, homography, size, offset, keypoints):
    ## Combine the two images into one.
    panorama = cv2.warpPerspective(image2, homography, size)
    (h1, w1) = image1.shape[:2]
    for h in range(h1):
        for w in range(w1):
            if image1[h][w][0] != 0 or image1[h][w][3] != 0 or image1[h][w][4] != 0:
                panorama[h+offset[1]][w + offset[0]] = image1[h][w]
    ## TODO: Draw the common feature keypoints.
    return panorama
And my results:
1st image :
2nd image :
Stitched image :
What am I doing wrong?
if (homography*(0,0,1))[0][1] > 0:
    offset[0] = 0
if (homography*(0,0,1))[1][2] > 0:
    offset[1] = 0
Your code is wrong. The correct version is as follows:
if (homography*(0,0,1))[0][2] > 0:
    offset[0] = 0
if (homography*(0,0,1))[1][2] > 0:
    offset[1] = 0
Well, I don't know a lot about Python, but basically I had the same problem.
To solve the size issues I did the following:
    perspectiveTransform(obj_original_corners, scene_corners, homography);
After that, I searched both images for the smallest_X, smallest_Y, biggest_X and biggest_Y.
These numbers I then used in:
    cv::warpPerspective(img_2, WarpedImage, homography, cv::Size(biggestX-smallestX, biggestY-smallestY));
So in that case the new image itself will have the proper size, even if the 2nd image has a negative x or negative y.
The only thing I'm still struggling with at this moment is how to apply the shift to warpPerspective, because now part of my image is cut off due to negative numbers.
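The corner-bounding-box idea above can be sketched in Python with NumPy. The function name and the argument convention (a 3x3 homography H and an image of size w by h) are my own assumptions, not the poster's code:

```python
import numpy as np

def warped_bounds(H, w, h):
    """Bounding box of a (w x h) image's corners after applying homography H."""
    # The four corners in homogeneous coordinates, one per column.
    corners = np.array([[0, 0, 1],
                        [w - 1, 0, 1],
                        [0, h - 1, 1],
                        [w - 1, h - 1, 1]], dtype=float).T
    mapped = H @ corners
    mapped /= mapped[2]  # normalize the homogeneous scale
    xs, ys = mapped[0], mapped[1]
    return xs.min(), ys.min(), xs.max(), ys.max()
```

The warp target size would then be (biggestX - smallestX, biggestY - smallestY), and translating the homography by (-smallestX, -smallestY) before warping moves negative coordinates into view, addressing the cut-off issue.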
Judging from your stitching code, your process is right. The result comes from your source pictures.
for h in range(h1):
    for w in range(w1):
        if image1[h][w][0] != 0 or image1[h][w][3] != 0 or image1[h][w][4] != 0:
            panorama[h+offset[1]][w + offset[0]] = image1[h][w]
This operation only filters out pixels whose color is exactly zero. In fact, some pixels look black but are not pure black, only very close to it. These nearly-black pixels will not be filtered out by your program.
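One way around that (my suggestion, not the answerer's code) is to compare against a small tolerance instead of exact zero, for example with NumPy:

```python
import numpy as np

def non_black_mask(img, thresh=10):
    """True where an (H, W, 3) pixel has any channel above thresh,
    i.e. where it is not black or nearly black. thresh=10 is arbitrary."""
    return (img > thresh).any(axis=-1)

# One pure-black, one nearly-black, and one dark-red pixel:
img = np.array([[[0, 0, 0], [3, 2, 1], [40, 0, 0]]], dtype=np.uint8)
mask = non_black_mask(img)
```

Copying only the masked pixels (e.g. panorama_region[mask] = image1[mask]) would then skip both pure-black and nearly-black pixels when merging.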

What's wrong with this python image blur function?

EDIT: Thanks to Howard, I've corrected the code here and it seems to be working now.
EDIT2: I've updated the code to include a vertical blur as originally intended. Resulting sample output with various settings: Blur comparison images.jpg
Another reference for blur operations (Java): Blurring for Beginners
original post:
I'm trying to learn about basic image processing and duplicate this simple Blur method (the second function BlurHorizontal under "Reusing results") in python. I know there are already blur functions in PIL, but I want to try out the basic pixel operations myself.
This function should take a source image, then average RGB pixel values based on a certain radius and write the processed image to a new file. My problem is that I'm getting a lot of pixels with completely wrong averaged values (for example, bright green lines instead of red in certain areas).
With a blur radius of 2, the averaging method adds up the RGB values for the 5 pixels centered on the input pixel. It uses a "sliding window" to keep a running total, subtracting the outgoing pixel (left side) and adding the new incoming pixel (right side of window). Blur method explained here
Sample: Blur test image output.jpg
Any ideas where I've gone wrong? I'm not sure why some parts of the image blur cleanly while other areas are filled with colors completely unrelated to the surrounding areas.
Thanks for your help.
FIXED WORKING Code (Thanks Howard)
import Image, numpy, ImageFilter

img = Image.open('testimage.jpg')
imgArr = numpy.asarray(img)  # readonly
# blur radius in pixels
radius = 2
# blur window length in pixels
windowLen = radius*2+1
# columns (x) image width in pixels
imgWidth = imgArr.shape[1]
# rows (y) image height in pixels
imgHeight = imgArr.shape[0]

# simple box/window blur
def doblur(imgArr):
    # create arrays for processed image based on input image dimensions
    imgB = numpy.zeros((imgHeight, imgWidth, 3), numpy.uint8)
    imgC = numpy.zeros((imgHeight, imgWidth, 3), numpy.uint8)
    # blur horizontal row by row
    for ro in range(imgHeight):
        # RGB color values
        totalR = 0
        totalG = 0
        totalB = 0
        # calculate blurred value of first pixel in each row
        for rads in range(-radius, radius+1):
            if rads >= 0 and rads <= imgWidth-1:
                totalR += imgArr[ro, rads][0]/windowLen
                totalG += imgArr[ro, rads][1]/windowLen
                totalB += imgArr[ro, rads][2]/windowLen
        imgB[ro, 0] = [totalR, totalG, totalB]
        # calculate blurred value of the rest of the row based on
        # unweighted average of surrounding pixels within blur radius
        # using sliding window totals (add incoming, subtract outgoing pixels)
        for co in range(1, imgWidth):
            if (co-radius-1) >= 0:
                totalR -= imgArr[ro, co-radius-1][0]/windowLen
                totalG -= imgArr[ro, co-radius-1][1]/windowLen
                totalB -= imgArr[ro, co-radius-1][2]/windowLen
            if (co+radius) <= imgWidth-1:
                totalR += imgArr[ro, co+radius][0]/windowLen
                totalG += imgArr[ro, co+radius][1]/windowLen
                totalB += imgArr[ro, co+radius][2]/windowLen
            # put average color value into imgB pixel
            imgB[ro, co] = [totalR, totalG, totalB]
    # blur vertical
    for co in range(imgWidth):
        totalR = 0
        totalG = 0
        totalB = 0
        for rads in range(-radius, radius+1):
            if rads >= 0 and rads <= imgHeight-1:
                totalR += imgB[rads, co][0]/windowLen
                totalG += imgB[rads, co][1]/windowLen
                totalB += imgB[rads, co][2]/windowLen
        imgC[0, co] = [totalR, totalG, totalB]
        for ro in range(1, imgHeight):
            if (ro-radius-1) >= 0:
                totalR -= imgB[ro-radius-1, co][0]/windowLen
                totalG -= imgB[ro-radius-1, co][1]/windowLen
                totalB -= imgB[ro-radius-1, co][2]/windowLen
            if (ro+radius) <= imgHeight-1:
                totalR += imgB[ro+radius, co][0]/windowLen
                totalG += imgB[ro+radius, co][1]/windowLen
                totalB += imgB[ro+radius, co][2]/windowLen
            imgC[ro, co] = [totalR, totalG, totalB]
    return imgC

# number of times to run blur operation
blurPasses = 3
# temporary image array for multiple passes
imgTmp = imgArr
for k in range(blurPasses):
    imgTmp = doblur(imgTmp)
    print "pass #", k, "done."
imgOut = Image.fromarray(numpy.uint8(imgTmp))
imgOut.save('testimage-processed.png', 'PNG')
I suppose you have an issue with the line
    for rads in range(-radius, radius):
which runs only to radius-1 (range excludes the last value). Add one to the second range argument.
Update: There is another small issue in the line
    if (co-radius-1) > 0:
which should be
    if (co-radius-1) >= 0:
I modified/refactored your code a bit and thought I'd share. I needed a custom blur that would: 1) work on a data array, and 2) wrap horizontally but not vertically. As the TODO notes, I'm thinking of refactoring further so it can do partial pixel blends (i.e. 0.5). Hope this helps someone:
def blur_image(image_data, blur_horizontal=True, blur_vertical=True, height=256, width=256, radius=1):
    #TODO: Modify to support partial pixel blending
    # blur window length in pixels
    blur_window = radius*2+1
    out_image_data = image_data
    # blur horizontal row by row, and wrap around edges
    if blur_horizontal:
        for row in range(height):
            for column in range(0, width):
                total_red = 0
                total_green = 0
                total_blue = 0
                for rads in range(-radius, radius+1):
                    pixel = (row*width) + ((column+rads) % width)
                    total_red += image_data[pixel][0]/blur_window
                    total_green += image_data[pixel][1]/blur_window
                    total_blue += image_data[pixel][2]/blur_window
                out_image_data[row*width + column] = (total_red, total_green, total_blue, 255)
        image_data = out_image_data
    # blur vertical, but no wrapping
    if blur_vertical:
        for column in range(width):
            for row in range(0, height):
                total_red = 0
                total_green = 0
                total_blue = 0
                blur_window = 0
                for rads in range(-radius, radius+1):
                    if rads in range(0, height):
                        blur_window += 1
                for rads in range(-radius, radius+1):
                    row_mod = row+rads
                    if row_mod in range(0, height):
                        pixel = (row_mod*width) + column
                        total_red += image_data[pixel][0]/blur_window
                        total_green += image_data[pixel][1]/blur_window
                        total_blue += image_data[pixel][2]/blur_window
                out_image_data[row*width + column] = (total_red, total_green, total_blue, 255)
        image_data = out_image_data
    return image_data
You can use it when you've already got an image that's in an array of RGBA pixels, then run:
image_data = blur_image(image_data, height=height, width=width, radius=2)
im = Image.new('RGB', (width, height))
im.putdata(image_data)
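As a footnote (not from the thread), the sliding-window averaging both answers describe can also be vectorized with a cumulative sum, which avoids per-channel running totals entirely. A sketch for one horizontal pass over a single-channel float array; a full blur would run it once per axis and per channel:

```python
import numpy as np

def box_blur_rows(arr, radius=2):
    """Horizontal box blur of an (H, W) array, clamping the window at the edges."""
    h, w = arr.shape
    out = np.empty((h, w), dtype=float)
    csum = np.cumsum(arr, axis=1, dtype=float)  # csum[:, i] = sum of arr[:, 0..i]
    for co in range(w):
        lo = max(0, co - radius)      # left edge of the clamped window
        hi = min(w - 1, co + radius)  # right edge of the clamped window
        total = csum[:, hi] - (csum[:, lo - 1] if lo > 0 else 0)
        out[:, co] = total / (hi - lo + 1)  # true average over the window
    return out
```

Clamping the window size at the edges sidesteps the edge-handling bugs discussed above, since each pixel is divided by the number of neighbors it actually has.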
