I have a problem with the image size (M*N).
When I read an image as an np.array and apply my algorithm to it, the size of the saved result changes to another value; sometimes it increases and sometimes it decreases, depending on the image.
import cv2
import numpy as np

img = cv2.imread('baboon.jpg', 1)   # read image (255*255)
na = np.array(img)                  # convert it to an array
x, y, pp = img.shape[:3]            # height, width, channels
blue = np.array(range(x*y), int).reshape((x, y))
blue[:, :] = na[:, :, 0]            # copy the blue channel
en_split_block_8(red, 31, 1)        # algorithm function
I use this code to save the image, and the size changes from (255*255) to (640*480):
plt.imshow(blue, interpolation='nearest', cmap="gray")
plt.savefig('blue.jpg')  # saved as (640*480)
I want the image to keep its size. (I don't change the size of the image, I only work on the values.)
That's because you are saving the image with pyplot, which renders it as a figure with its own canvas size rather than as raw pixel data. What you want to do instead is save the blue channel with cv2: cv2.imwrite('blue.jpg', blue).
Note that na = np.array(img) is redundant, since img is already an ndarray.
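As a minimal sketch of that fix (assuming blue is the single-channel array built above), either cv2.imwrite or plt.imsave writes the raw pixel data, so the output keeps the array's own dimensions:

import cv2
import matplotlib.pyplot as plt

# Write the pixel data directly; the file keeps the array's shape (e.g. 255*255).
cv2.imwrite('blue.jpg', blue)

# Alternative: plt.imsave also writes the array itself, without the
# figure canvas that plt.savefig adds around it.
plt.imsave('blue_plt.png', blue, cmap='gray')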
I want to keep track of a point/pixel for reference in a PIL image while I do a (perspective) transformation and cut off the transparent borders.
from PIL import Image
# load image
img = Image.open("img.png")
# do some perspective transformation
img = img.transform(new_size, Image.PERSPECTIVE, mapping_coeffs)
# cut the borders
img = img.crop(img.getbbox())
For the cropping I could keep track of a position by subtracting the size of the padding. But how can I do this for a perspective transformation, or even multiple transformations in a row?
For others with the same question, I made a black image with only the reference pixel in white using NumPy and transformed it in the same way as my image.
from PIL import Image
import numpy as np
# get black img with the same size
refArray = np.zeros(PILimg.size)
# make the reference pixel white
refArray[xRef, yRef] = 1e8
# to PIL image object
refImg = Image.fromarray(refArray.T)
Do the same transformations with the reference image, and then find the max value in the transformed reference image:
ref = np.array(refImg).T
xRef, yRef = np.unravel_index(np.argmax(ref), ref.shape)
Edit: for some transformations the pixel disappears; this is solved by using a small square of pixels (5x5) instead of a single pixel.
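Putting the pieces together, a rough sketch of the whole workflow (new_size, mapping_coeffs, xRef and yRef are placeholders from the question, and the 5x5 white square is the workaround from the edit above):

from PIL import Image
import numpy as np

img = Image.open("img.png")

# Reference image: black, with a small 5x5 white square at the tracked point
# (a single white pixel can vanish under some transformations).
refArray = np.zeros(img.size)                     # PIL size is (width, height)
refArray[xRef-2:xRef+3, yRef-2:yRef+3] = 255
refImg = Image.fromarray(refArray.T)

# Apply exactly the same transformations to both images.
img = img.transform(new_size, Image.PERSPECTIVE, mapping_coeffs)
refImg = refImg.transform(new_size, Image.PERSPECTIVE, mapping_coeffs)

box = img.getbbox()
img = img.crop(box)
refImg = refImg.crop(box)

# Recover the tracked point as the brightest spot of the reference image.
ref = np.array(refImg).T
xRef, yRef = np.unravel_index(np.argmax(ref), ref.shape)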
I want to change the pixel value of a grayscale image using OpenCV.
Assume that I have a grayscale image and I want to set all of its pixels to 0, one at a time, so that the resulting image is completely black. I tried this, but there is no change in the image:
import cv2

image = cv2.imread('test_image.png', 0)
for i in range(image.shape[0]):
    for j in range(image.shape[1]):
        image[i, j] = 0
Result: displaying the updated image shows no change.
In most cases, you want to avoid using double for loops to modify pixel values, since it is very slow. A better approach is to use NumPy for pixel modification, since OpenCV images are NumPy arrays. To achieve your desired result, you can use np.zeros to create a completely black image with the same shape as the original.
import cv2
import numpy as np
image = cv2.imread("test_image.png", 0)
black = np.zeros(image.shape, np.uint8)
cv2.imshow('image', image)
cv2.imshow('black', black)
cv2.waitKey(0)
For example, with a test image: original (left), result (right).
I would suggest always working on a copy of the image, so that the original doesn't get modified by mistake. Coming to your question, you can do the following:
import cv2
image = cv2.imread('test_image.png',0)
# Work on a copy so the original image is not affected.
image_copy = image.copy()
image_copy[:, :] = 0  # Set all pixel values to 0.
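For completeness, a couple of equivalent one-liners (just a sketch, assuming the same test_image.png): one builds a new all-black array without touching the original, the other blanks the loaded image in place:

import cv2
import numpy as np

image = cv2.imread('test_image.png', 0)

black = np.zeros_like(image)    # new all-black array, same shape and dtype, original untouched
image[:] = 0                    # or: set every pixel of the loaded image to 0 in place, no loops needed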
I want to analyse a specific part of an image. As an example, I'd like to focus on the bottom-right 200x200 section and count all the black pixels. So far I have:
im1 = Image.open(path)
rgb_im1 = im1.convert('RGB')
for pixel in rgb_im1.getdata():
Whilst you could do this with cropping and a pair of for loops, that is really slow and not ideal.
I would suggest you use Numpy as it is very commonly available, very powerful and very fast.
Here's a 400x300 black rectangle with a 1-pixel red border:
#!/usr/bin/env python3
import numpy as np
from PIL import Image
# Open the image and make into Numpy array
im = Image.open('image.png')
ni = np.array(im)
# Declare an ROI - Region of Interest as the bottom-right 200x200 pixels
# This is called "Numpy slicing" and is near-instantaneous https://www.tutorialspoint.com/numpy/numpy_indexing_and_slicing.htm
ROI = ni[-200:,-200:]
# Calculate total area of ROI and subtract non-zero pixels to get number of zero pixels
# Numpy.count_nonzero() is highly optimised and extremely fast
black = 200*200 - np.count_nonzero(ROI)
print(f'Black pixel total: {black}')
Sample Output
Black pixel total: 39601
Yes, you can make it shorter, for example:
h, w = 200,200
ni = np.array(Image.open('image.png'))
black = h*w - np.count_nonzero(ni[-h:,-w:])
If you want to debug it, you can take the ROI and make it into a PIL Image which you can then display. So just use this line anywhere after you make the ROI:
# Display image to check
Image.fromarray(ROI).show()
You can try cropping the Image to the specific part that you want:-
img = Image.open(r"Image_location")
x,y = img.size
img = img.crop((x-200, y-200, x, y))
The above code takes an input image and crops it to its bottom-right 200x200 pixels. (Make sure the image dimensions are more than 200x200, otherwise an error will occur.)
Original Image:-
Image after Cropping:-
You can then use this cropped image to count the number of black pixels; what you consider a BLACK pixel depends on your use case (a discrete value like (0, 0, 0) or a range/threshold like (0-15, 0-15, 0-15)).
P.S.:- The final image will always have dimensions of 200x200 pixels.
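As a rough sketch of that counting step (assuming img is the 200x200 crop from above; the threshold of 15 is just an example value):

import numpy as np

arr = np.array(img.convert('RGB'))

# Exactly black: all three channels equal 0.
exact_black = np.count_nonzero(np.all(arr == 0, axis=-1))

# Near black: every channel below the example threshold of 15.
near_black = np.count_nonzero(np.all(arr < 15, axis=-1))

print(exact_black, near_black)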
from PIL import Image
img = Image.open("ImageName.jpg")
crop_area = (a, b, c, d)  # (left, upper, right, lower)
cropped_img = img.crop(crop_area)
I am trying to resize a .jpg image with the skimage.transform.resize function. The function returns a weird result (see image below). I am not sure if it is a bug or just wrong use of the function.
import numpy as np
from PIL import Image
from skimage import io, color
from skimage.transform import resize
rgb = io.imread("../../small_dataset/" + file)
# show original image
img = Image.fromarray(rgb, 'RGB')
img.show()
rgb = resize(rgb, (256, 256))
# show resized image
img = Image.fromarray(rgb, 'RGB')
img.show()
Original image:
Resized image:
I already checked skimage resize giving weird output, but I think that my bug has different properties.
Update: the rgb2lab function has a similar bug.
The problem is that skimage converts the pixel data type of your array when resizing the image. The original image has 8 bits per pixel, of type numpy.uint8, while the resized pixels are numpy.float64 values.
The resize operation is correct, but the result is not being correctly displayed. To solve this issue, I propose two different approaches:
Change the data type of the resulting image: before converting to uint8 values, the pixels have to be rescaled from the normalized 0-1 scale to a 0-255 scale:
# ...
# Do the OP operations ...
resized_image = resize(rgb, (256, 256))
# Convert the image to a 0-255 scale.
rescaled_image = 255 * resized_image
# Convert to integer data type pixels.
final_image = rescaled_image.astype(np.uint8)
# show resized image
img = Image.fromarray(final_image, 'RGB')
img.show()
Use another library to display the image. Taking a look at the Image library documentation, there isn't any mode supporting 3xfloat64 pixel images. However, the scipy.misc library has the appropriate tools for converting the array format in order to display it correctly. (Update: this method is deprecated; see scipy.misc.imshow.)
from scipy import misc
# ...
# Do OP operations
misc.imshow(resized_image)
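On recent scikit-image versions there is also a shorter route, sketched below (the filename is a placeholder): skimage.util.img_as_ubyte converts the 0-1 float result back to 0-255 uint8 for you.

from PIL import Image
from skimage import io
from skimage.transform import resize
from skimage.util import img_as_ubyte

rgb = io.imread("image.jpg")             # uint8 input
resized = resize(rgb, (256, 256))        # float64 in the 0-1 range
img = Image.fromarray(img_as_ubyte(resized), 'RGB')
img.show()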
My program takes in an image and crops it into separate images according to a scale parameter, e.g. scale = 3 produces 9 images of equal size. I then work out the mean RGB of each cropped image and set all pixel values in that image to the mean RGB value.
I am wondering how I can stitch the cropped images back together to output one image, which in this case would be a grid of nine different colours.
Here is my code:
# import packages
import numpy as np
import cv2
import dateutil
import llist
from matplotlib import pyplot as plt
import argparse
#Read in image
img = cv2.imread('images/0021.jpg')
scale = 3
#Get x and y components of image
y_len,x_len,_ = img.shape
mean_values = []
for y in range(scale):
    for x in range(scale):
        # Crop image into 3*3 windows
        cropped_img = img[(y*y_len)//scale:((y+1)*y_len)//scale,
                          (x*x_len)//scale:((x+1)*x_len)//scale]
        mean_val = cv2.mean(cropped_img)
        mean_val = mean_val[:3]
        # Set cropped img pixels equal to mean RGB
        cropped_img[:, :, :] = mean_val
        cv2.imshow('cropped', cropped_img)
        cv2.waitKey(0)
        # Collect mean values
        #mean_values.append([mean_val])
#mean_values = np.asarray(mean_values)
#print(mean_values.reshape(3, 3, 3))
As it stands, the nested for loop iterates over the image and outputs the images (which are just blocks of one colour) in the order that I want to stitch them together, but I'm not sure how to achieve this.
I don't know if such things exist in OpenCV, but in ImageMagick you can simply resize the image down to the tile size (which implicitly averages the pixels) and then re-scale it back up to the original size without interpolation, also called Nearest Neighbour Resampling. Like this:
# Get the original width and height into a variable
geom=$(identify -format "%wx%h" face1.jpg)   # e.g. 500x529

# Resize down to, say, 10x10 and then back up to the original size
convert face1.jpg -resize 10x10! -scale "${geom}"! out.jpg
Per your original, 3x3 becomes:
convert face1.jpg -resize 3x3! -scale "${geom}"! out.jpg
and 3x5 becomes:
convert face1.jpg -resize 3x5! -scale "${geom}"! out.jpg
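If you would rather stay in OpenCV, the same trick can be sketched with two resizes (scale = 3 as in the question): INTER_AREA averages each block while shrinking, and INTER_NEAREST blows the tiny image back up without interpolation.

import cv2

img = cv2.imread('images/0021.jpg')
scale = 3
h, w = img.shape[:2]

# Shrink to scale x scale: each output pixel is the mean of one block.
small = cv2.resize(img, (scale, scale), interpolation=cv2.INTER_AREA)

# Scale back up to the original size without interpolation.
mosaic = cv2.resize(small, (w, h), interpolation=cv2.INTER_NEAREST)

cv2.imshow('mosaic', mosaic)
cv2.waitKey(0)

Note also that in the original loop, cropped_img is a NumPy view into img, so the in-place assignment already writes the mean colour back into img; showing img after the loop should give the same nine-colour grid.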