This is my program. I'm trying to paste a 1x2 array of two cv2 images into a bigger one; the idea is to do the same for a larger array of images and view them as a mosaic.
The problem is that this mixes cv2 and PIL and I couldn't get it to work.
Here is my code:
from PIL import Image
from matplotlib import cm
#example of one of the two images added
new_array = cv2.resize(x_test[0], (IMG_SIZE, IMG_SIZE))
trp.append(new_array2)
im_pil=[]
#I create the big image
new_im = Image.new('RGB', (IMG_SIZE,IMG_SIZE * 2))
k=0
#here i want to place my image in the mosaic
for i in range(0,IMG_SIZE,IMG_SIZE):
    for j in range(0,IMG_SIZE *2,IMG_SIZE):
        #paste the image at location i,j:
        im_pil.append(Image.fromarray(trp[k])
        new_im.paste(im_pil[k], (i,j))
        k+=1
new_im
That's all the code (a little summarized) and I don't really know where the problem is. I get a different error with each modification, from invalid syntax to names not being defined, plus some I don't understand.
However,
plt.imshow(trp[1])
plt.show()
works and shows one image.
I'm sorry if the answer is obvious, but I'm trying my best to figure it out and I can't manage it.
Thanks in advance.
I don't know why you use cv2, because you can do it all in PIL. You can even display the result without matplotlib.
from PIL import Image
IMG_SIZE = 128
filenames = ['image1.png', 'image2.png']
images = []
for name in filenames:
    img = Image.open(name)
    img = img.resize((IMG_SIZE, IMG_SIZE))
    images.append(img)
new_im = Image.new('RGB', (IMG_SIZE, IMG_SIZE*2))
k = 0
for i in range(0, IMG_SIZE, IMG_SIZE):
    for j in range(0, IMG_SIZE *2, IMG_SIZE):
        new_im.paste(images[k], (i,j))
        k += 1
new_im.save('output.png')
new_im.show()
For images from cv2 you have to:
convert the colors from BGR to RGB
convert the array to an Image
do what you want
convert the Image back to an array
if you want to use it again with cv2, convert the colors from RGB to BGR
Code:
from PIL import Image
import cv2
import numpy as np
import matplotlib.pyplot as plt
IMG_SIZE = 128
filenames = ['image1.png', 'image2.png']
images = []
for name in filenames:
    im = cv2.imread(name)
    im = cv2.cvtColor(im, cv2.COLOR_BGR2RGB)
    im = cv2.resize(im, (IMG_SIZE, IMG_SIZE))
    img = Image.fromarray(im)
    images.append(img)
new_im = Image.new('RGB', (IMG_SIZE, IMG_SIZE*2))
k = 0
for i in range(0, IMG_SIZE, IMG_SIZE):
    for j in range(0, IMG_SIZE *2, IMG_SIZE):
        new_im.paste(images[k], (i,j))
        k += 1
im = np.array(new_im)
plt.imshow(im)
plt.show()
#cv2.imshow('window', cv2.cvtColor(np.array(new_im), cv2.COLOR_RGB2BGR))
#cv2.waitKey(0)
I am creating a PyTorch dataset and dataloader from CUB_200. When reading the images as PIL, I need to change the BGR channels to RGB, and I use the following code:
def _read_images_from_list(imagefile_list):
    imgs = []
    mean = [0.485, 0.456, 0.406]
    std = [0.229, 0.224, 0.225]
    Transformations = transforms.Compose([transforms.Resize([224, 224]), transforms.ToTensor(), transforms.Normalize(mean, std)])
    for imagefile in imagefile_list:
        # read images as PIL instead of NUMPY
        img = Image.open(imagefile)
        b, g, r = img.split()
        img = Image.merge("RGB", (r, g, b))
        img = Transformations(img) # ToTensor and between [0,1], then normalized using image net mean and std, then transposed into shape (C,H,W)
        imgs += [img]
    return imgs
After going through a number of classes, I get the following error:
ValueError: not enough values to unpack (expected 3, got 1)
I wonder what I should do now? It means that one of the images has only one channel instead of three. Can this be the case, or is there a problem with my code? I had a different implementation before, and it worked. The reason I changed it was that I could not normalize my images.
This is the old implementation:
def _read_images_from_list(imagefile_list):
    imgs = []
    for imagefile in imagefile_list:
        img = cv2.imread(imagefile).astype(np.float32)
        img = cv2.resize(img, (224, 224))
        # Convert RGB to BGR
        img_r, img_g, img_b = np.split(img, 3, axis=2)
        img = np.concatenate((img_b, img_g, img_r), axis=2)
        # Extract mean
        img -= np.array((103.94,116.78,123.68), dtype=np.float32) # BGR mean
        # HWC -> CHW, compatible with pytorch
        img = np.transpose(img, [2, 0, 1])
        imgs += [img]
    return imgs
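That ValueError is what PIL's split() returns when a file opens in single-channel 'L' mode: split() then yields one band instead of three, which suggests a few of the files are grayscale rather than a bug in the loop itself. A minimal, hedged guard (placeholder file name, not the dataset code) is to force RGB before splitting:
from PIL import Image

img = Image.open("some_image.jpg")  # placeholder path, not from the dataset code
if img.mode != "RGB":
    img = img.convert("RGB")        # single-channel files get three identical bands
bands = img.split()                 # now always three bands, so unpacking cannot fail
print(len(bands))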
I would strongly recommend you use skimage.io to load your images, not OpenCV. It opens images in RGB format by default, removing your channel-shuffling overhead, but if you want to convert BGR to RGB you can use this:
import numpy as np
img = np.arange(27).reshape(3,3,3)
b = img[:,:,0]
g = img[:,:,1]
r = img[:,:,2]
rgb = np.dstack([r,g,b])
print(img)
print("#"*20)
print(rgb)
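For completeness, a minimal sketch of the skimage.io route recommended above (assuming scikit-image is installed; the file name is only a placeholder):
from skimage import io

img = io.imread('image1.png')  # loaded as RGB by default, no channel swap needed
print(img.shape)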
I am trying to train a model with some noisy images that have dithering.
What I have:
clean PDFs with a white background
coloured PDFs (RGB) and grayscale PDFs (with 3 channels, RGB)
What I want:
convert only the white background (not the text) into a gray background; if possible, only half the page should be converted
add dithering to the gray background without losing the text
What I tried:
import os
from PIL import Image
from numpy import asarray
ORIGIN_PATH = "/home/dithering/temp/"
DESTIN_PATH = "/home/dithering/temp_try/"
"""for filename in os.listdir(ORIGIN_PATH):
img = Image.open(ORIGIN_PATH + filename).convert("L")
rbg_grayscale_img = img.convert("RGB")
rbg_grayscale_img.save(DESTIN_PATH + filename)"""
for filename in os.listdir(ORIGIN_PATH):
    img = Image.open(ORIGIN_PATH + filename).convert("L", dither=Image.Dither.FLOYDSTEINBERG)
    # convert image to nparray
    numpydata = asarray(img)
    numpydata[numpydata > 250] = 128
    # data
    print(numpydata)
    # convert array to image
    final_image = Image.fromarray(numpydata)
    # img show
    final_image.show()
    # img save
    final_image.save(DESTIN_PATH + filename)
I expect something like this,
Any help would be appreciated, thanks in advance!
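One detail worth checking in the snippet above: with recent Pillow versions, numpy.asarray(img) returns a read-only view, so the in-place assignment numpydata[numpydata > 250] = 128 can raise "ValueError: assignment destination is read-only". A minimal sketch of the usual workaround is to take a writable copy first (placeholder file name; this only addresses the copy, not the dithering itself):
import numpy as np
from PIL import Image

img = Image.open("page.png").convert("L")  # placeholder file name
numpydata = np.array(img)                  # np.array makes a writable copy
numpydata[numpydata > 250] = 128           # turn near-white background gray
final_image = Image.fromarray(numpydata)
final_image.show()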
How can I put these images, each of which I have converted to a (95, 95) array, into a single array whose first dimension is the number of images I have (in my case 10) and whose remaining dimensions are the image size (95, 95)?
My desired output would be an array of shape (10, 95, 95).
This is my code so far, thank you:
import cv2
import os
from matplotlib import pyplot as plt
# https://www.ocr2edit.com/convert-to-txt
x_train = "C:/Users/cuevas26/ae/crater_images_test"
categories = ["crater"]
#for category in categories:
path = x_train
for img in os.listdir(path):
    img_array = cv2.imread(os.path.join(path, img), cv2.IMREAD_GRAYSCALE)
    imgs = cv2.resize(img_array, (95, 95))
    plt.imshow(imgs, cmap="gray")
    plt.show()
    print(type(imgs))
    print(imgs.shape)
We may append the images to a list and convert the final list into a NumPy array using numpy.stack.
Start with an empty list:
images_list = []
In the loop, after img = cv2.resize, append the resized image to the list:
images_list.append(img)
After the end of the loop, convert the list into a 3D NumPy array:
images = np.stack(images_list, axis=0)
images.shape is (10, 95, 95).
images.shape[0] is the number of images.
images.shape[1:] is the image dimensions (95, 95).
Use images[i] for accessing the image in index i.
Code sample:
import cv2
import os
from matplotlib import pyplot as plt
import numpy as np
# https://www.ocr2edit.com/convert-to-txt
x_train = "C:/Users/cuevas26/ae/crater_images_test"
images_list = [] # List of images - start with an empty list
# For category in categories:
path = x_train
for img in os.listdir(path):
    img_array = cv2.imread(os.path.join(path, img), cv2.IMREAD_GRAYSCALE)
    img = cv2.resize(img_array, (95, 95))
    images_list.append(img)  # Append the new image to the list

# Convert the list of 2D arrays into a 3D NumPy array (the first index is the index of the image).
# https://stackoverflow.com/questions/27516849/how-to-convert-list-of-numpy-arrays-into-single-numpy-array
images = np.stack(images_list, axis=0)
print(type(images)) # <class 'numpy.ndarray'>
print(images.shape) # (10, 95, 95)
n_images = images.shape[0]
# Show the images (using cv2.imshow instead of matplotlib)
for i in range(n_images):
    cv2.imshow('img', images[i])
    cv2.waitKey(1000)  # Wait 1 second
cv2.destroyAllWindows()
I split my image (a raster image) into 1000 tiles for image segmentation prediction, and I want to mosaic the predicted rasters back into a 1664*2432 grid sequentially.
I used these posts:
Generate a Photographic mosaic from a given set of thumbnails
How do you merge images into a canvas using PIL/Pillow?
PIL fill background image repeatedly
This is the code:
import PIL, os, glob
from PIL import Image
from math import ceil, floor
PATH = '/content/drive/MyDrive/classification/CNN_segmentation/big_Image/prediction'
frame_width = 1664
images_per_row = 26
padding = 2
os.chdir(PATH)
images = glob.glob("*.tiff")
images = images[:988]
img_width, img_height = Image.open(images[0]).size
sf = (frame_width-(images_per_row-1)*padding)/(images_per_row*img_width) #scaling factor
scaled_img_width = ceil(img_width*sf) #s
scaled_img_height = ceil(img_height*sf)
number_of_rows = 38
frame_height = 2432
new_im = Image.new('RGB', (frame_width, frame_height))
i,j=0,0
for num, im in enumerate(images):
    if num%images_per_row==0:
        i=0
    im = Image.open(im)
    im.thumbnail((64,64))
    #Iterate through a 4 by 4 grid with 100 spacing, to place my image
    y_cord = (j//images_per_row)*scaled_img_height
    new_im.paste(im, (i,y_cord))
    i=(i+64)+0
    j+=1
new_im.show()
new_im.save("out.jpg", "JPEG", quality=80, optimize=True, progressive=True)
new_im
The result is a 1664*2432 image, but all the pixels are black.
I tried to use im = rasterio.open(im) instead of im = Image.open(im), but I got this error:
AttributeError: 'DatasetReader' object has no attribute 'thumbnail'
I would be thankful if anybody could help me.
Note: I solved the black-image problem by replacing 'RGB' with 'P', but I still do not know how to mosaic the images sequentially by file name.
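On that remaining point: glob.glob returns files in arbitrary order, so the tile list usually needs an explicit numeric sort before the paste loop. A minimal sketch, assuming each tile name contains its index as a number (e.g. tile_12.tiff; the naming pattern is an assumption):
import glob
import re

images = glob.glob("*.tiff")
# Sort by the first integer found in each file name (hypothetical naming scheme)
images.sort(key=lambda name: int(re.search(r"\d+", name).group()))
print(images[:5])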
I use NumPy to create a new 2D array with 0 on the border and the original image array inside. When I print the new array it is what I expect, but when I plot it, it is all black.
I tried both a for-loop and NumPy slicing; neither helped.
import cv2
import numpy as np
path = 'test.jpg'
img = cv2.imread(path,0)
print(img)
height,width = img.shape # 440 * 455
new_arr = np.zeros((height+2,width+2), dtype = int)
#for i in range(height):
#    for j in range(width):
#        new_arr[i+1][j+1] = img[i][j]
new_arr[1:height+1,1:width+1] = img
print(new_arr)
cv2.imshow('new image',new_arr)
cv2.waitKey(0)
cv2.destroyAllWindows()
The original image is here:
I expect an image with a black border (just 1 pixel wide) and the original image inside, so that I can do median filtering, but the actual output is a black image.
I am not sure how you are getting a black image, as your code should throw an error. You need to take the dtype from the proper namespace (np.), and the value should be uint8:
import cv2
import numpy as np
path = 'test.png'
img = cv2.imread(path,0)
height,width = img.shape
new_arr = np.zeros((height+2,width+2), dtype = np.uint8)
new_arr[1:height+1,1:width+1] = img
print(new_arr)
cv2.imshow('new image',new_arr)
cv2.waitKey(0)
cv2.destroyAllWindows()
Please note that the image you have given is a PNG, not a JPG; the code was tested on that image.
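As a side note, the 1-pixel black border can also be added without manual slicing; both calls below are standard and keep the uint8 dtype (shown only as a hedged alternative to the slicing above):
import cv2
import numpy as np

img = cv2.imread('test.png', 0)
padded_np = np.pad(img, pad_width=1, mode='constant', constant_values=0)
padded_cv = cv2.copyMakeBorder(img, 1, 1, 1, 1, cv2.BORDER_CONSTANT, value=0)
print(padded_np.shape, padded_cv.shape)  # both are (height+2, width+2)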