Efficient way to shift image pixels in python

Trying to find a way to efficiently shift image pixels in python. Ultimately want to display the results on screen after each shift. For example, the pixel at 1920,1080 becomes the pixel at 1920,1079 and 1920,1080 is filled with a new value; at the other end, the pixel at 0,0 is discarded and filled with the pixel from 0,1. Looking to achieve 30fps or greater on a slow processor like the Raspberry Pi 3.
My intuition is to move everything to a 1D array, shift everything "left", add the new pixel at the end and turn back into an image.
Crude drawing of the "Z" shifting effect I'm looking for
Here is my naive implementation of just the pixel shifting, which is too slow:
from PIL import Image

im = Image.new("RGBA", (1920, 1080), "black")  # create blank image
pix = im.load()  # pixel access object
w, h = im.size

# Shift every pixel one place "back" in raster order: each pixel takes the value
# of the next pixel in its row; the end of a row wraps to the start of the next row.
for y in range(h):
    for x in range(w):
        if x < w - 1:
            pix[x, y] = pix[x + 1, y]   # shift within the row
        elif y < h - 1:
            pix[x, y] = pix[0, y + 1]   # wrap to the next row
pix[w - 1, h - 1] = (255, 255, 255, 255)  # fill the freed last pixel with white

I thought of a couple of ways of doing this but can only get 40fps on a desktop Mac at the moment.
Here's the first method, which uses Numpy's roll() at its heart to roll the top-left pixel down to the bottom-right corner and then overwrite it with white.
#!/usr/bin/env python3
import cv2
import numpy as np
from PIL import Image
w, h = 640, 480
im = Image.new("L", (w, h), 127)
# Just do 1,000 shifts so it can be timed
for frame in range(1000):
    # Ravel the image out to a row vector, roll it left one place, and reshape
    im = np.roll(np.ravel(im), -1).reshape(h, w)
    # Make bottom right pixel white
    im[-1, -1] = 255
    cv2.imshow("Slidey Pushy", im)
    cv2.waitKey(1)
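For reference, the shifting step on its own can be timed like this (a minimal sketch, not from the original answer, assuming Python's time.perf_counter; it times the roll-and-overwrite step alone, without the display calls):
import time
import numpy as np

w, h = 640, 480
im = np.full((h, w), 127, dtype=np.uint8)

start = time.perf_counter()
frames = 1000
for _ in range(frames):
    # Same roll-and-overwrite step as above, without imshow()
    im = np.roll(np.ravel(im), -1).reshape(h, w)
    im[-1, -1] = 255
elapsed = time.perf_counter() - start
print(f'{frames / elapsed:.1f} fps (shifting only)')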
I then thought of adding in all the new pixels right at the beginning and then just changing the offset into the image on each iteration, but surprisingly (to me at least) this was no faster.
#!/usr/bin/env python3
import cv2
import numpy as np
from PIL import Image
w, h = 640, 480
im = Image.new("L", (w, h), 127)
# Allocate space for, and append, 10 image-heights of white rows up front
im = np.concatenate((im, np.ones((10*h, w), dtype=np.uint8)*255), axis=0)
im = np.ravel(im)
# Slide in 1,000 white pixels so it can be timed
for offset in range(1000):
    # Each frame is just a window into the big buffer, starting one pixel later
    frame = im[offset:offset+w*h].reshape(h, w)
    cv2.imshow("Slidey Pushy 2", frame)
    cv2.waitKey(1)

I don't think you are going to get anywhere close to 30fps on a Raspberry Pi, but I would look at NumPy and pandas for this.

Related

How to generate a bespoke black and white bitmap

Image example:
I want to create a black and white bitmap from scratch (not by converting or manipulating an existing image) and have the ability to change individual pixels to either black or white using pixel coordinates, perhaps via a dictionary somehow. Something like a chessboard, but with one pixel per chessboard square (if that makes sense). I found something that generates a colour spectrum image but don't know how to adapt it:
from PIL import Image
img = Image.new( 'RGB', (300,50), "black") # Create a new black image
pixels = img.load() # Create the pixel map
for i in range(img.size[0]):  # For every pixel:
    for j in range(img.size[1]):
        pixels[i, j] = (i, j, 100)  # Set the colour accordingly
img.show()
Zoomed in on leftmost edge of bitmap
You can do individual odd pixels here and there like this:
from PIL import Image
# Create new black image - L mode for b/w
img = Image.new( 'L', (10,6))
# Make pixels at locations (0,5) and (2,1) white (255)
img.putpixel((0,5), 255)
img.putpixel((2,1), 255)
# Save result
img.save('result.png')
However, if you want to do whole rows, or columns, or longer lines, I would recommend round-tripping to Numpy like this:
import numpy as np
from PIL import Image

# Create new black image - L mode for b/w
img = Image.new('L', (10, 6))
# Convert to Numpy array for easy processing
na = np.array(img)
# Make row 1 white
na[1,:] = 255
# Make column 8 white
na[:,8] = 255
# Revert to PIL Image from Numpy array and save
Image.fromarray(na).save('result.png')
Or if you want to do a block:
... as above ...
na[1:3,5:9] = 255
... as above ...
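And for the chessboard idea in the question, the same Numpy round-trip can generate a one-pixel-per-square board directly (a sketch, not part of the original answer, assuming an 8x8 board):
import numpy as np
from PIL import Image

# 8x8 board, one pixel per square: white (255) wherever row + column is odd
y, x = np.indices((8, 8))
board = ((x + y) % 2).astype(np.uint8) * 255
Image.fromarray(board, mode='L').save('chessboard.png')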

How to analyze only a part of an image?

I want to analyse a specific part of an image. As an example, I'd like to focus on the bottom-right 200x200 section and count all the black pixels. So far I have:
from PIL import Image

im1 = Image.open(path)
rgb_im1 = im1.convert('RGB')
for pixel in rgb_im1.getdata():
    pass  # count black pixels here
Whilst you could do this with cropping and a pair of for loops, that is really slow and not ideal.
I would suggest you use Numpy as it is very commonly available, very powerful and very fast.
Here's a 400x300 black rectangle with a 1-pixel red border:
#!/usr/bin/env python3
import numpy as np
from PIL import Image
# Open the image and make into Numpy array
im = Image.open('image.png')
ni = np.array(im)
# Declare an ROI - Region of Interest as the bottom-right 200x200 pixels
# This is called "Numpy slicing" and is near-instantaneous https://www.tutorialspoint.com/numpy/numpy_indexing_and_slicing.htm
ROI = ni[-200:,-200:]
# Calculate total area of ROI and subtract non-zero pixels to get number of zero pixels
# Numpy.count_nonzero() is highly optimised and extremely fast
black = 200*200 - np.count_nonzero(ROI)
print(f'Black pixel total: {black}')
Sample Output
Black pixel total: 39601
Yes, you can make it shorter, for example:
h, w = 200, 200
ni = np.array(Image.open('image.png'))
black = h*w - np.count_nonzero(ni[-h:,-w:])
If you want to debug it, you can take the ROI and make it into a PIL Image which you can then display. So just use this line anywhere after you make the ROI:
# Display image to check
Image.fromarray(ROI).show()
You can try cropping the image to the specific part that you want:
img = Image.open(r"Image_location")
x,y = img.size
img = img.crop((x-200, y-200, x, y))
The above code takes an input image and crops it to its bottom-right 200x200 pixels (make sure the image dimensions are at least 200x200, otherwise the crop box will fall outside the image).
Original Image:
Image after Cropping:
You can then use this cropped image to count the number of black pixels; it depends on your use case what you consider a black pixel (an exact value like (0, 0, 0) or a range/threshold such as (0-15, 0-15, 0-15)), as sketched below.
P.S.: The final image will always have dimensions of 200x200 pixels.
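For example, counting both kinds of black on the cropped image might look like this (a sketch, assuming img is the 200x200 RGB crop from above and a threshold of 15 per channel):
import numpy as np

na = np.array(img)  # the 200x200 crop from above
# Exactly black: all three channels are zero
exact_black = np.count_nonzero(np.all(na == 0, axis=-1))
# Near black: every channel at or below the threshold
near_black = np.count_nonzero(np.all(na <= 15, axis=-1))
print(exact_black, near_black)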
from PIL import Image

img = Image.open("ImageName.jpg")
crop_area = (a, b, c, d)  # crop box as (left, upper, right, lower) pixel coordinates
cropped_img = img.crop(crop_area)

Replace Color Values in Image with Random Noise

I found something reasonably close to what I want to do here:
Python: PIL replace a single RGBA color
However, in my scenario I have images that were originally grayscale with color annotations added to the image (an x-ray with notes in color). I would like to replace any pixel that is not grayscale with random noise. My main problem is replacing values with noise and not a single color.
Edit: I figured out the random noise part, now just trying to figure out how to separate the color pixels from the pixels that were originally in grayscale.
from PIL import Image
import numpy as np

im = Image.open('test.jpg')
data = np.array(im)        # "data" is a height x width x 3 numpy array
red, green, blue = data.T  # Temporarily unpack the bands for readability
# Replace white with random noise...
white_areas = (red == 255) & (blue == 255) & (green == 255)
# 8-bit noise: one random value per channel of every white pixel
Z = np.random.randint(0, 256, data[white_areas.T].shape, dtype=np.uint8)
data[white_areas.T] = Z
im2 = Image.fromarray(data)
im2.show()
You could try
col_areas = np.logical_or(np.not_equal(red, blue), np.not_equal(red, green))
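This works because a grayscale pixel has equal R, G and B values, so the mask is True exactly where the channels differ, i.e. at the colour annotations. A sketch of plugging it into the noise code above (assuming the same data, red, green and blue arrays):
# True wherever the channels are not all equal, i.e. at the colour annotations
col_areas = np.logical_or(np.not_equal(red, blue), np.not_equal(red, green))
# Replace those pixels with random 8-bit noise
data[col_areas.T] = np.random.randint(0, 256, data[col_areas.T].shape, dtype=np.uint8)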
You could use this pixel editing Python module:
from PIL import Image
from PixelMenu import ChangePixels as cp

im = Image.open('test.jpg')
grayscalergb = (128, 128, 128)  # RGB value of gray in your image
noise = (100, 30, 5)            # You can adjust the noise based upon your requirements
# Adjust the tolerance until you get the right amount of noise in your image
outputimg = cp(im, col=grayscalergb, col2=noise, save=False, tolerance=100)
Also, I'd suggest you use PNG images instead of JPEG: JPEG compression is lossy, so every time you save the image the RGB values change slightly, making exact colour matches unreliable.

how to add border around an image in opencv python

If I have an image like the one below, how can I add a border all around it so that the overall height and width of the final image increase, but the height and width of the original image stay as-is in the middle?
The following code adds a constant border of size 10 pixels to all four sides of your original image.
For the colour, I have assumed that you want to use the average gray value of the background, which I have calculated from the mean value of the bottom two rows of your image. Sorry, it is somewhat hard-coded, but it shows the general how-to and can be adapted to your needs.
You can pass a different border size for each of the four sides; using the same value everywhere, as below, gives a symmetric border.
Other values for borderType are possible, such as BORDER_DEFAULT, BORDER_REPLICATE and BORDER_WRAP; a replicated example is sketched after the code below.
For more details cf: http://docs.opencv.org/trunk/d3/df2/tutorial_py_basic_ops.html#gsc.tab=0
import numpy as np
import cv2
im = cv2.imread('image.jpg')
row, col = im.shape[:2]
bottom = im[row-2:row, 0:col]
mean = cv2.mean(bottom)[0]
bordersize = 10
border = cv2.copyMakeBorder(
    im,
    top=bordersize,
    bottom=bordersize,
    left=bordersize,
    right=bordersize,
    borderType=cv2.BORDER_CONSTANT,
    value=[mean, mean, mean]
)
cv2.imshow('image', im)
cv2.imshow('bottom', bottom)
cv2.imshow('border', border)
cv2.waitKey(0)
cv2.destroyAllWindows()
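As mentioned above, only the borderType argument needs to change for the other modes. A replicated border, for example, could look like this (a sketch reusing the variables from the code above):
# Extend the image by replicating its edge pixels instead of filling a constant colour
border_replicate = cv2.copyMakeBorder(
    im,
    top=bordersize,
    bottom=bordersize,
    left=bordersize,
    right=bordersize,
    borderType=cv2.BORDER_REPLICATE
)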
Answer in one line:
outputImage = cv2.copyMakeBorder(
    inputImage,
    topBorderWidth,
    bottomBorderWidth,
    leftBorderWidth,
    rightBorderWidth,
    cv2.BORDER_CONSTANT,
    value=borderColor  # colour of border, e.g. (255, 255, 255) for white
)
Try this:
import cv2
import numpy as np

img = cv2.imread("img_src.jpg")
h, w = img.shape[0:2]
base_size = h + 20, w + 20, 3
# Make a 3-channel image for the base which is slightly larger than the target img
base = np.zeros(base_size, dtype=np.uint8)
# Draw a really thick white rectangle to form the border
cv2.rectangle(base, (0, 0), (w + 20, h + 20), (255, 255, 255), 30)
# Paste the original image into the centre
base[10:h+10, 10:w+10] = img
Add a border using OpenCV:
import cv2

white = [255, 255, 255]
img1 = cv2.imread('input.png')
constant = cv2.copyMakeBorder(img1, 20, 20, 20, 20, cv2.BORDER_CONSTANT, value=white)
cv2.imwrite('output.png', constant)

loop binary image pixel

I have this image with two people in it. It is a binary image that only contains black and white pixels.
First I want to loop over all the pixels and find the white pixels in the image.
Then I want to find the [x,y] coordinates of one particular white pixel.
After that I want to use that particular [x,y] in the image, i.e. the one for the white pixel.
Using that [x,y] coordinate I want to convert the neighbouring black pixels into white pixels, not the whole image though.
I wanted to post the image here but unfortunately I can't. I hope my question is understandable now. In the image below you can see the edges.
Say, for example, the edge of the nose: I find it with the loop using [x,y] and then turn all neighbouring black pixels into white pixels.
This is the binary image
The operation described is called dilation, from Mathematical Morphology. You can either use, for example, scipy.ndimage.binary_dilation or implement your own.
Here are the two forms to do it (one is a trivial implementation), and you can check the resulting images are identical:
import sys
import numpy
from PIL import Image
from scipy import ndimage
img = Image.open(sys.argv[1]).convert('L') # Input is supposed to be binary.
width, height = img.size
img = img.point(lambda x: 255 if x > 40 else 0) # "Ignore" the JPEG artifacts.
# Dilation
im = numpy.array(img)
im = ndimage.binary_dilation(im, structure=((0, 1, 0), (1, 1, 1), (0, 1, 0)))
im = im.view(numpy.uint8) * 255
Image.fromarray(im).save(sys.argv[2])
# "Other operation"
im = numpy.array(img)
white_pixels = numpy.dstack(numpy.nonzero(im != 0))[0]
for y, x in white_pixels:
    for dy, dx in ((-1, 0), (0, -1), (0, 1), (1, 0)):
        py, px = dy + y, dx + x
        if py >= 0 and px >= 0 and py < height and px < width:
            im[py, px] = 255
Image.fromarray(im).save(sys.argv[3])
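To confirm that the two results really are identical, the two saved outputs can be read back and compared (a sketch, assuming the script above was run first with the same file arguments):
import sys
import numpy
from PIL import Image

a = numpy.array(Image.open(sys.argv[2]).convert('L'))
b = numpy.array(Image.open(sys.argv[3]).convert('L'))
print('Identical:', numpy.array_equal(a, b))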
