How can I use something like the line below to save images on a 4x4 grid of heterogeneous images? Imagine that the images are identified by sample[i], where i takes 16 different values.
scipy.misc.imsave(str(img_index) + '.png', sample[1])
Similar to this answer but for 16 different images
https://stackoverflow.com/a/42041135/2414957
I am not tied to any particular method as long as it does the job. Also, I am interested in saving the images rather than showing them with plt.show(), since I am working on a remote server and dealing with the CelebA image dataset, which is huge. I just want to randomly select 16 images from my batch, save the results of the DCGAN, and see whether they make any sense or whether it converges.
Currently, I am saving images like this:
batch_no = random.randint(0, 63)
scipy.misc.imsave('sample_gan_images/iter_%d_epoch_%d_sample_%d.png' %(itr, epoch, batch_no), sample[batch_no])
Here, I have 25 epochs, 2000 iterations, and a batch size of 64.
Personally, I tend to use matplotlib.pyplot.subplots for these kinds of situations. If your images are really heterogeneous, it might be a better choice than the image-concatenation-based approach in the answer you linked to.
import matplotlib.pyplot as plt
from scipy.misc import face
x = 4
y = 4
fig,axarr = plt.subplots(x,y)
ims = [face() for i in range(x*y)]
for ax, im in zip(axarr.ravel(), ims):
    ax.imshow(im)
fig.savefig('faces.png')
My big complaint about subplots is the amount of whitespace in the resulting figure. Also, for your application you may not want the axis ticks/frames. Here's a wrapper function that deals with both issues:
import matplotlib.pyplot as plt

def savegrid(ims, rows=None, cols=None, fill=True, showax=False):
    if (rows is None) != (cols is None):
        raise ValueError("Set either both rows and cols or neither.")
    if rows is None:
        rows = len(ims)
        cols = 1
    gridspec_kw = {'wspace': 0, 'hspace': 0} if fill else {}
    fig, axarr = plt.subplots(rows, cols, gridspec_kw=gridspec_kw)
    if fill:
        bleed = 0
        fig.subplots_adjust(left=bleed, bottom=bleed, right=(1 - bleed), top=(1 - bleed))
    for ax, im in zip(axarr.ravel(), ims):
        ax.imshow(im)
        if not showax:
            ax.set_axis_off()
    kwargs = {'pad_inches': .01} if fill else {}
    fig.savefig('faces.png', **kwargs)
Running savegrid(ims, 4, 4) on the same set of images as used earlier yields:
If you use savegrid and want each individual image to take up less space, pass the fill=False keyword argument. If you want to show the axis ticks/frames, pass showax=True.
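For example, a minimal usage sketch on the same images (this particular call is mine, not part of the original function):

# Sketch: give each image more breathing room (fill=False) and keep the
# axis ticks/frames (showax=True). Output still goes to 'faces.png',
# as hard-coded in savegrid above.
savegrid(ims, rows=4, cols=4, fill=False, showax=True)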
I found this on GitHub, so I am sharing it as well:
import numpy as np
import matplotlib.pyplot as plt

def merge_images(image_batch, size):
    h, w = image_batch.shape[1], image_batch.shape[2]
    c = image_batch.shape[3]
    img = np.zeros((int(h*size[0]), w*size[1], c))
    for idx, im in enumerate(image_batch):
        i = idx % size[1]
        j = idx // size[1]
        img[j*h:j*h+h, i*w:i*w+w, :] = im
    return img

im_merged = merge_images(sample, [8, 8])
plt.imsave('sample_gan_images/im_merged.png', im_merged)
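For the 4x4 case from the question, a hedged usage sketch (assuming sample is the (64, h, w, c) batch mentioned in the question; the random sampling is my addition):

# Sketch: pick 16 random images from the batch and merge them into a 4x4 grid.
idx = np.random.choice(sample.shape[0], size=16, replace=False)
im_merged_4x4 = merge_images(sample[idx], [4, 4])
plt.imsave('sample_gan_images/im_merged_4x4.png', im_merged_4x4)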
Related
I've been stuck on the question shown in the image below; the second image shows the first task that task 2 is based on. I have been told that the solution needs to be in a single new array. The following starter script was given: grid_image = and plt.imshow(grid_image).
Task 1 code:
images = np.load('albatrosses.npy')
images = images.reshape((-1,56,56,3))
plt.imshow(images[0])
Task 2 code (he says he wants it in a single array):
grid_image = images
fig, axis = plt.subplots(2,2)
axis[0,0].imshow(images[0])
axis[0,1].imshow(images[1])
axis[1,0].imshow(images[2])
axis[1,1].imshow(images[3])
Here is an example using the CIFAR10 dataset from TensorFlow. For this task, you can use NumPy indexing on ndarrays.
import tensorflow as tf
from matplotlib import pyplot as plt
import numpy as np
(images, _), (_, _) = tf.keras.datasets.cifar10.load_data()
grid_image = images
fig, axis = plt.subplots(2,2)
axis[0,0].imshow(images[0])
axis[0,1].imshow(images[1])
axis[1,0].imshow(images[2])
axis[1,1].imshow(images[3])
print(images[0].shape) #shape(h,w,c)
print(images[0].dtype)
plt.imshow(images[0])
Here is the code you need. First, we calculate the shape of the new grid image from the sub-images, taking their heights and widths into account. Then we create a new array of zeros with the same data type and the resulting shape (including the 3-channel dimension).
output_shape = (images[0].shape[0]+images[2].shape[0], images[0].shape[1]+images[1].shape[1], 3)
grid_image = np.zeros(output_shape, dtype='uint8')
#add sub image 0
grid_image[ :images[0].shape[0], :images[0].shape[1], :] = images[0]
#add sub image 1
grid_image[ :images[1].shape[0], images[1].shape[1]:, :] = images[1]
#add sub image 2
grid_image[ images[2].shape[0]:, :images[2].shape[1], :] = images[2]
#add sub image 3
grid_image[ images[3].shape[0]:, images[3].shape[1]:, :] = images[3]
print(grid_image.shape)
print(grid_image.dtype)
plt.imshow(grid_image)
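For equally sized images like these, a shorter alternative (not part of the code above, just a sketch of the same idea) is to let np.concatenate assemble the rows and columns:

# Sketch: the same 2x2 grid built with concatenation instead of index assignment.
top = np.concatenate([images[0], images[1]], axis=1)     # left-right
bottom = np.concatenate([images[2], images[3]], axis=1)
grid_image_alt = np.concatenate([top, bottom], axis=0)   # top-bottom
print(grid_image_alt.shape)  # should match grid_image.shape
plt.imshow(grid_image_alt)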
I hope this may help you.
Using matplotlib in Python I'm plotting anywhere between 20 and 50 lines. Using matplotlib's sliding colour scales these become indistinguishable after a certain number of lines are plotted (well before 20).
While I've seen a few example of code in Matlab and C# to create colour maps of an arbitrary number of colours which are maximally distinguishable from one another I can't find anything for Python.
Can anyone point me in the direction of something in Python that will do this?
Cheers
I loved the idea of the palette created by @xuancong84 and modified his code a bit so that it does not depend on the alpha channel. I drop it here for others to use. Thank you, @xuancong84!
import math
import numpy as np
from matplotlib.colors import ListedColormap
from matplotlib.cm import hsv
def generate_colormap(number_of_distinct_colors: int = 80):
    if number_of_distinct_colors == 0:
        number_of_distinct_colors = 80

    number_of_shades = 7
    number_of_distinct_colors_with_multiply_of_shades = int(math.ceil(number_of_distinct_colors / number_of_shades) * number_of_shades)

    # Create an array with uniformly drawn floats taken from <0, 1) partition
    linearly_distributed_nums = np.arange(number_of_distinct_colors_with_multiply_of_shades) / number_of_distinct_colors_with_multiply_of_shades

    # We are going to reorganise monotonically growing numbers in such way that there will be single array with saw-like pattern
    # but each saw tooth is slightly higher than the one before
    # First divide linearly_distributed_nums into number_of_shades sub-arrays containing linearly distributed numbers
    arr_by_shade_rows = linearly_distributed_nums.reshape(number_of_shades, number_of_distinct_colors_with_multiply_of_shades // number_of_shades)

    # Transpose the above matrix (columns become rows) - as a result each row contains saw tooth with values slightly higher than row above
    arr_by_shade_columns = arr_by_shade_rows.T

    # Keep number of saw teeth for later
    number_of_partitions = arr_by_shade_columns.shape[0]

    # Flatten the above matrix - join each row into single array
    nums_distributed_like_rising_saw = arr_by_shade_columns.reshape(-1)

    # HSV colour map is cyclic (https://matplotlib.org/tutorials/colors/colormaps.html#cyclic), we'll use this property
    initial_cm = hsv(nums_distributed_like_rising_saw)

    lower_partitions_half = number_of_partitions // 2
    upper_partitions_half = number_of_partitions - lower_partitions_half

    # Modify lower half in such way that colours towards beginning of partition are darker
    # First colours are affected more, colours closer to the middle are affected less
    lower_half = lower_partitions_half * number_of_shades
    for i in range(3):
        initial_cm[0:lower_half, i] *= np.arange(0.2, 1, 0.8/lower_half)

    # Modify second half in such way that colours towards end of partition are less intense and brighter
    # Colours closer to the middle are affected less, colours closer to the end are affected more
    for i in range(3):
        for j in range(upper_partitions_half):
            modifier = np.ones(number_of_shades) - initial_cm[lower_half + j * number_of_shades: lower_half + (j + 1) * number_of_shades, i]
            modifier = j * modifier / upper_partitions_half
            initial_cm[lower_half + j * number_of_shades: lower_half + (j + 1) * number_of_shades, i] += modifier

    return ListedColormap(initial_cm)
These are the colours I get:
from matplotlib import pyplot as plt
import numpy as np
N = 16
M = 7
H = np.arange(N*M).reshape([N,M])
fig = plt.figure(figsize=(10, 10))
ax = plt.pcolor(H, cmap=generate_colormap(N*M))
plt.show()
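To come back to the original use case of 20-50 line plots, a minimal sketch (the line data is synthetic, purely for illustration):

# Sketch: colour 30 synthetic lines with the custom colormap so that
# neighbouring lines stay distinguishable.
n_lines = 30
cmap = generate_colormap(n_lines)
x = np.linspace(0, 10, 200)
for k in range(n_lines):
    plt.plot(x, np.sin(x + 0.2 * k) + 0.1 * k, color=cmap(k / n_lines))
plt.show()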
Recently, I also encountered the same problem, so I created the following simple Python code to generate visually distinguishable colors for matplotlib in Jupyter notebooks. It is not quite maximally perceptually distinguishable, but it works better than most built-in colormaps in matplotlib.
The algorithm splits the HSV scale into two chunks: the first chunk with increasing RGB value, the second chunk with decreasing alpha so that the colors blend into the white background.
Take note that if you are using any toolkit other than Jupyter notebook, you must make sure the background is white; otherwise, the alpha blending will be different and the resulting colors will also be different.
Furthermore, the distinctiveness of the colors depends heavily on your computer screen, projector, etc. A color palette that is distinguishable on one screen is not necessarily distinguishable on another. You must physically test it out if you want to use it for a presentation.
import math
import numpy as np
import matplotlib.cm
import matplotlib.pyplot as plt
from matplotlib.colors import ListedColormap

def generate_colormap(N):
    arr = np.arange(N)/N
    N_up = int(math.ceil(N/7)*7)
    arr.resize(N_up)
    arr = arr.reshape(7, N_up//7).T.reshape(-1)
    ret = matplotlib.cm.hsv(arr)
    n = ret[:, 3].size
    a = n//2
    b = n - a
    for i in range(3):
        ret[0:n//2, i] *= np.arange(0.2, 1, 0.8/a)
    ret[n//2:, 3] *= np.arange(1, 0.1, -0.9/b)
    # print(ret)
    return ret
N = 16
H = np.arange(N*N).reshape([N,N])
fig = plt.figure(figsize=(10, 10))
ax = plt.pcolor(H, cmap=ListedColormap(generate_colormap(N*N)))
The end goal is to take an image and slice it up into samples that I save. The problem is that my slices are randomly coming back as black/incorrect patches. Below is a small sample program.
import scipy.ndimage as ndimage
import scipy.misc as misc
import numpy as np

image32 = misc.imread("work0.png")

patches = np.zeros((36, 8, 8))
for i in range(4):
    for j in range(4):
        patches[i*4 + j] = image32[i:i+8, j:j+8]
        misc.imsave("{0}{1}.png".format(i, j), patches[i*4 + j])
An example of my image would be:
The (0,0) patch of the 8x8 patches yields:
Two things:
You are initializing your patch matrix with the wrong data type. By default, numpy makes the patches matrix np.float64, and if you save with that, you won't get the results you would expect. Specifically, as Mr. F's answer explains, some scaling is performed on floating-point images: the minimum and maximum values of the image get scaled to black and white respectively, so if you have an image with a completely uniform background, the minimum and maximum are the same and everything gets visualized as black. As such, the best thing is to respect the original image's data type, namely setting the dtype of your patches matrix to np.uint8.
Judging from your for loop indexing, you want to extract out 8 x 8 patches that are non-overlapping. This means that if you have a 32 x 32 image with 8 x 8 patches, you have 16 patches in total arranged in a 4 x 4 grid.
Therefore, you need to change the patches statement so that it has 16 in the first dimension, not 36. In addition, you'll have to adjust the way you're indexing into your image to extract out the 8 x 8 patches because right now, the patches are overlapping. Specifically, you want to make the image patch indexing go from 8*i to 8*(i+1) for the rows and 8*j to 8*(j+1) for the columns. If you substitute sample values of i and j yourself, you'll see that we get unique 8 x 8 patches for each grid in your image.
With both of the above things I noted, the modified code should be:
import scipy.ndimage as ndimage
import scipy.misc as misc
import numpy as np

image32 = misc.imread('work0.png')

patches = np.zeros((16, 8, 8), dtype=np.uint8)  # Change
for i in range(4):
    for j in range(4):
        patches[i*4 + j] = image32[8*i:8*(i+1), 8*j:8*(j+1)]  # Change
        misc.imsave("{0}{1}.png".format(i, j), patches[i*4 + j])
When I do this and take a look at the output images, I get what I expect.
To be absolutely sure, let's plot the segments using matplotlib. You've conveniently saved all of the patches in patches, so it shouldn't be a problem to show what we need. However, I'll also place some code in comments so that you can read in the images that were saved to disk by your code above, and verify that it works whether you look at patches or at the images on disk:
import matplotlib.pyplot as plt

plt.figure()
for i in range(4):
    for j in range(4):
        plt.subplot(4, 4, 4*i + j + 1)
        img = patches[4*i + j]
        # or you can do this:
        # img = misc.imread('{0}{1}.png'.format(i,j))
        img = np.dstack([img, img, img])
        plt.imshow(img)
plt.show()
The weird thing about matplotlib.pyplot.imshow is that if you have a single-channel image (as in your case) with the same intensity all around, it gets visualized as black no matter what the colour map is, much like what we experienced with imsave. Therefore, I had to artificially make this an RGB image, with all three channels the same, so that it gets visualized as grayscale before we show the image.
We get:
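As an aside (not from the original answer), another way to avoid imshow's per-image normalization for single-channel data is to pin the colour limits explicitly:

# Hedged alternative: fixing vmin/vmax disables imshow's automatic
# rescaling, so a uniform single-channel patch keeps its true brightness.
plt.imshow(patches[0], cmap='gray', vmin=0, vmax=255)
plt.show()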
According to this answer the issue is that imsave normalizes the data so that the computed minimum is defined as black (and, if there is a distinct maximum, that is defined as white).
This led me to go digging as to why the suggested use of uint8 did work to create the desired output. As it turns out, in the source there is a function called bytescale that gets called internally.
Actually, imsave itself is a very thin wrapper around toimage followed by save (from the image object). Inside of toimage if mode is None (which it is by default), that's when bytescale gets invoked.
It turns out that bytescale has an if statement that checks for the uint8 data type, and if the data is in that format, it returns the data unaltered. But if not, then the data is scaled according to a max and min transformation (where 0 and 255 are the default low and high pixel values to compare to).
This is the full snippet of code linked above:
if data.dtype == uint8:
    return data

if high < low:
    raise ValueError("`high` should be larger than `low`.")

if cmin is None:
    cmin = data.min()
if cmax is None:
    cmax = data.max()

cscale = cmax - cmin
if cscale < 0:
    raise ValueError("`cmax` should be larger than `cmin`.")
elif cscale == 0:
    cscale = 1

scale = float(high - low) / cscale
bytedata = (data * 1.0 - cmin) * scale + 0.4999
bytedata[bytedata > high] = high
bytedata[bytedata < 0] = 0
return cast[uint8](bytedata) + cast[uint8](low)
For the blocks of your data that are all 255, cscale will be 0, which will be checked for and changed to 1. Then the line
bytedata = (data * 1.0 - cmin) * scale + 0.4999
will result in the whole image block having the float value 0.4999, which is then truncated to 0 in the next chunk of code (when cast from float to uint8), for example:
In [102]: np.cast[np.uint8](0.4999)
Out[102]: array(0, dtype=uint8)
You can see in the body of bytescale that there are only two possible ways to return: either your data is type uint8 and it's returned as-is, or else it goes through this kind of silly scaling process. So in the end, it is indeed correct, and good practice, to be using uint8 for the pieces of your code that specifically load from or save to an image format via these functions.
So this cascade of behaviour is why you were getting all zeros in the output image file and why the other suggestion of using dtype=np.uint8 actually helps you. It's not because you need to avoid floating-point data for images, just because of this bizarre convention on the part of imsave of checking and scaling the data.
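If you do end up with floating-point data and still want to save it unscaled, a small sketch of the workaround (not from the answer above; the variable and file names are placeholders, and the patch values are assumed to already be in the 0-255 range):

import numpy as np
import scipy.misc as misc

# Hedged sketch: `float_patch` stands for a floating-point image whose values
# are already in the 0-255 range; clipping and casting to uint8 means
# bytescale returns the data unaltered instead of rescaling it.
float_patch = np.full((8, 8), 255.0)           # a uniform white patch, for illustration
patch_uint8 = np.clip(float_patch, 0, 255).astype(np.uint8)
misc.imsave('patch_uint8.png', patch_uint8)    # saved as white, not black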
There are many questions on here that check whether two images are "nearly" similar or not.
My task is simple. With OpenCV, I want to find out if two images are 100% identical or not.
They will be of same size but can be saved with different filenames.
You can use a logical operator like the XOR operator. If you are using Python, you can use the following one-line function:
Python
import numpy as np

def is_similar(image1, image2):
    return image1.shape == image2.shape and not(np.bitwise_xor(image1, image2).any())
where shape is the property that gives the size of the matrix and bitwise_xor does what the name suggests. The C++ version can be written in a similar way!
C++
Please see @berak's code.
Note: the Python code works for images of any depth (1-D, 2-D, 3-D, ...), but the C++ version works only for 2-D images. It is easy to extend it to other depths yourself. I hope that gives you the insight! :)
Doc: bitwise_xor
EDIT: the C++ code was removed. Thanks to @Micka and @berak for their comments.
The sum of the differences should be 0 (for all channels):
bool equal(const Mat & a, const Mat & b)
{
    if ( (a.rows != b.rows) || (a.cols != b.cols) )
        return false;
    Scalar s = sum( a - b );
    return (s[0]==0) && (s[1]==0) && (s[2]==0);
}
import cv2
import numpy as np

a = cv2.imread("picture1.png")
b = cv2.imread("picture2.png")
difference = cv2.subtract(a, b)
result = not np.any(difference)
if result:
    print("Pictures are the same")
else:
    print("Pictures are different")
If they are the same files just saved under different file names, you can check whether their checksums are identical.
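For instance, a minimal checksum-comparison sketch using Python's hashlib (the file names are placeholders and MD5 is just an example algorithm):

import hashlib

def file_checksum(path, algo="md5"):
    # Hash the raw file bytes; byte-identical files give identical digests.
    h = hashlib.new(algo)
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

# Placeholder file names, just for illustration.
print(file_checksum("picture1.png") == file_checksum("picture2.png"))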
Start by importing the packages we'll need: matplotlib for plotting, NumPy for numerical processing, and cv2 for our OpenCV bindings. The Structural Similarity Index (SSIM) method is already implemented for us by scikit-image, so we'll just use their implementation.
# import the necessary packages
from skimage.measure import structural_similarity as ssim
import matplotlib.pyplot as plt
import numpy as np
import cv2
Then define the compare_images function, which we'll use to compare two images using both MSE and SSIM. It takes three arguments: imageA and imageB, the two images we are going to compare, and the title of our figure.
We then compute the MSE and SSIM between the two images.
We also simply display the MSE and SSIM associated with the two images we are comparing.
def mse(imageA, imageB):
    # the 'Mean Squared Error' between the two images is the
    # sum of the squared difference between the two images;
    # NOTE: the two images must have the same dimension
    err = np.sum((imageA.astype("float") - imageB.astype("float")) ** 2)
    err /= float(imageA.shape[0] * imageA.shape[1])

    # return the MSE, the lower the error, the more "similar"
    # the two images are
    return err

def compare_images(imageA, imageB, title):
    # compute the mean squared error and structural similarity
    # index for the images
    m = mse(imageA, imageB)
    s = ssim(imageA, imageB)

    # setup the figure
    fig = plt.figure(title)
    plt.suptitle("MSE: %.2f, SSIM: %.2f" % (m, s))

    # show first image
    ax = fig.add_subplot(1, 2, 1)
    plt.imshow(imageA, cmap = plt.cm.gray)
    plt.axis("off")

    # show the second image
    ax = fig.add_subplot(1, 2, 2)
    plt.imshow(imageB, cmap = plt.cm.gray)
    plt.axis("off")

    # show the images
    plt.show()
Load the images off disk using OpenCV. We'll be using the original image, a contrast-adjusted image, and our Photoshopped image.
We then convert our images to grayscale.
# load the images -- the original, the original + contrast,
# and the original + photoshop
original = cv2.imread("images/jp_gates_original.png")
contrast = cv2.imread("images/jp_gates_contrast.png")
shopped = cv2.imread("images/jp_gates_photoshopped.png")
# convert the images to grayscale
original = cv2.cvtColor(original, cv2.COLOR_BGR2GRAY)
contrast = cv2.cvtColor(contrast, cv2.COLOR_BGR2GRAY)
shopped = cv2.cvtColor(shopped, cv2.COLOR_BGR2GRAY)
We will generate a matplotlib figure, loop over our images one-by-one, and add them to our plot. Our plot is then displayed to us.
Finally, we can compare our images together using the compare_images function.
# initialize the figure
fig = plt.figure("Images")
images = ("Original", original), ("Contrast", contrast), ("Photoshopped", shopped)

# loop over the images
for (i, (name, image)) in enumerate(images):
    # show the image
    ax = fig.add_subplot(1, 3, i + 1)
    ax.set_title(name)
    plt.imshow(image, cmap = plt.cm.gray)
    plt.axis("off")

# show the figure
plt.show()

# compare the images
compare_images(original, original, "Original vs. Original")
compare_images(original, contrast, "Original vs. Contrast")
compare_images(original, shopped, "Original vs. Photoshopped")
Reference- https://www.pyimagesearch.com/2014/09/15/python-compare-two-images/
I have done this task. I compare the files in stages (a sketch of this cascade follows the list):
Compare file sizes.
Compare EXIF data.
Compare the first 'n' bytes, where 'n' is 128 to 1024 or so.
Compare the last 'n' bytes.
Compare the middle 'n' bytes.
Compare the checksum.
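A minimal sketch of that cascade, assuming the two inputs are file paths (the helper name, the byte count, and the choice of MD5 are mine, and the EXIF step is omitted for brevity):

import hashlib
import os

def probably_identical(path1, path2, n=512):
    # Cheap checks first: file size, then a few n-byte ranges, then a full checksum.
    size = os.path.getsize(path1)
    if size != os.path.getsize(path2):
        return False
    with open(path1, "rb") as f1, open(path2, "rb") as f2:
        for offset in (0, max(size - n, 0), max(size // 2 - n // 2, 0)):
            f1.seek(offset)
            f2.seek(offset)
            if f1.read(n) != f2.read(n):
                return False
        f1.seek(0)
        f2.seek(0)
        return (hashlib.md5(f1.read()).hexdigest()
                == hashlib.md5(f2.read()).hexdigest())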
I'm working on a project using numpy and scipy, and I need to fill in NaN values. Currently I use scipy.interpolate.Rbf, but it keeps causing Python to crash so severely that try/except won't even save it. However, after running it a few times, it seems as if it may keep failing in cases where there is data in the middle surrounded by all NaNs, like an island. Is there a better solution to this that won't keep crashing?
By the way, this is a LOT of data I need to extrapolate, sometimes as much as half the image (70x70, greyscale), but it doesn't need to be perfect. It's part of an image-stitching program, so as long as it's similar to the actual data, it'll work. I've tried nearest neighbor to fill in the NaNs, but the results are too different.
EDIT:
This is the image it always seems to fail on. Isolating this image allowed it to pass ONCE before crashing.
I'm using at least version NumPy 1.8.0 and SciPy 0.13.2.
You can use SciPy's LinearNDInterpolator. If all images are the same size, the grid coordinates can be pre-computed and re-used.
import numpy as np
import matplotlib.pyplot as plt
from scipy import interpolate
x = np.linspace(0, 1, 500)
y = x[:, None]
image = x + y
# Destroy some values
mask = np.random.random(image.shape) > 0.7
image[mask] = np.nan
valid_mask = ~np.isnan(image)
coords = np.array(np.nonzero(valid_mask)).T
values = image[valid_mask]
it = interpolate.LinearNDInterpolator(coords, values, fill_value=0)
filled = it(list(np.ndindex(image.shape))).reshape(image.shape)
f, (ax0, ax1) = plt.subplots(1, 2)
ax0.imshow(image, cmap='gray', interpolation='nearest')
ax0.set_title('Input image')
ax1.imshow(filled, cmap='gray', interpolation='nearest')
ax1.set_title('Interpolated data')
plt.show()
This proved sufficient for my needs. It is actually quite fast and produces reasonable results:
import numpy as np
import scipy.signal

ipn_kernel = np.array([[1,1,1],[1,0,1],[1,1,1]]) # kernel for inpaint_nans

def inpaint_nans(im):
    nans = np.isnan(im)
    while np.sum(nans) > 0:
        im[nans] = 0
        vNeighbors = scipy.signal.convolve2d((nans == False), ipn_kernel, mode='same', boundary='symm')
        im2 = scipy.signal.convolve2d(im, ipn_kernel, mode='same', boundary='symm')
        im2[vNeighbors > 0] = im2[vNeighbors > 0] / vNeighbors[vNeighbors > 0]
        im2[vNeighbors == 0] = np.nan
        im2[(nans == False)] = im[(nans == False)]
        im = im2
        nans = np.isnan(im)
    return im
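A quick usage sketch, reusing the synthetic gradient image with random NaN holes from the LinearNDInterpolator example above (purely illustrative data):

import numpy as np
import matplotlib.pyplot as plt

# Sketch: punch random NaN holes into a gradient image and fill them back in.
x = np.linspace(0, 1, 500)
image = x + x[:, None]
image[np.random.random(image.shape) > 0.7] = np.nan

filled = inpaint_nans(image)
plt.imshow(filled, cmap='gray')
plt.show()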