Hi everyone, here is my code:
import cv2
import numpy as np
import PIL
from matplotlib import pyplot
img1 = cv2.imread('D:/MyProject/SeniorProject/Mushroom Pictures/train/Class A/IMG_9604.jpg')
img2 = cv2.imread('D:/MyProject/SeniorProject/Mushroom Pictures/train/Class A/IMG_9605.jpg')
img1_hsv = cv2.cvtColor(img1,cv2.COLOR_BGR2HSV)
img2_hsv = cv2.cvtColor(img2,cv2.COLOR_BGR2HSV)
h_bins = 18
s_bins = 32
histSize = [h_bins, s_bins]
h_ranges = [0,180]
s_ranges = [0,256]
ranges = h_ranges + s_ranges
channels = [0,1]
hist1c = cv2.calcHist([img1_hsv],channels,None,histSize,ranges,accumulate=False)
hist2c = cv2.calcHist([img2_hsv],[0],None,[180],[0,180],accumulate=False)
pyplot.imshow(hist1c,interpolation = 'nearest')
pyplot.show()
I got the H-S histogram as an AxesImage, but I want to convert it to an array so I can use it as training input for a machine learning model. Can you help me with that?
The reason I didn't use hist1c and hist2c directly as model inputs is that they don't separate H and S into their own dimensions; the values are only kept in the H bins.
Thank you very much :)
Reshape your histogram data:
np.array(hist1c).reshape(-1, yourDataSize)
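For what it's worth, cv2.calcHist already returns a NumPy ndarray, so the H-S histogram can be flattened straight into a 1-D feature vector; a minimal sketch (the normalization step is my own suggestion, not part of the original code):
feature = hist1c.flatten()            # (18 * 32,) = (576,) H-S feature vector
feature /= feature.sum() + 1e-7       # optional: normalize so image size doesn't matter
print(feature.shape)                  # (576,)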
I've been stuck on this question and haven't been able to figure it out; the question is shown in the image below, and the second image shows the first task that task 2 is based on. I've been told that the solution needs to be in a single new array. The following starter script was given: grid_image = (left incomplete) and plt.imshow(grid_image).
Task 1 code:
images = np.load('albatrosses.npy')
images = images.reshape((-1,56,56,3))
plt.imshow(images[0])
Task 2 code (he says he wants it as a single array):
grid_image = images
fig, axis = plt.subplots(2,2)
axis[0,0].imshow(images[0])
axis[0,1].imshow(images[1])
axis[1,0].imshow(images[2])
axis[1,1].imshow(images[3])
Here is an example using the CIFAR-10 dataset from TensorFlow. For this task, you can use NumPy indexing on ndarrays.
import tensorflow as tf
from matplotlib import pyplot as plt
import numpy as np
(images, _), (_, _) = tf.keras.datasets.cifar10.load_data()
grid_image = images
fig, axis = plt.subplots(2,2)
axis[0,0].imshow(images[0])
axis[0,1].imshow(images[1])
axis[1,0].imshow(images[2])
axis[1,1].imshow(images[3])
print(images[0].shape) #shape(h,w,c)
print(images[0].dtype)
plt.imshow(images[0])
And here is the code you need. First, we calculate the shape of the new grid image from the sub-images' heights and widths. Then we create a new array filled with zeros, with the resulting shape (including the 3-channel dimension) and the same data type.
output_shape = (images[0].shape[0]+images[2].shape[0], images[0].shape[1]+images[1].shape[1], 3)
grid_image = np.zeros(output_shape, dtype='uint8')
#add sub image 0
grid_image[ :images[0].shape[0], :images[0].shape[1], :] = images[0]
#add sub image 1
grid_image[ :images[1].shape[0], images[1].shape[1]:, :] = images[1]
#add sub image 2
grid_image[ images[2].shape[0]:, :images[2].shape[1], :] = images[2]
#add sub image 3
grid_image[ images[3].shape[0]:, images[3].shape[1]:, :] = images[3]
print(grid_image.shape)
print(grid_image.dtype)
plt.imshow(grid_image)
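As a follow-up, since the four CIFAR-10 images all have the same shape, the same grid can be built more compactly with np.concatenate (a sketch that assumes equal-sized sub-images):
top = np.concatenate([images[0], images[1]], axis=1)      # first row, side by side
bottom = np.concatenate([images[2], images[3]], axis=1)   # second row
grid_image = np.concatenate([top, bottom], axis=0)        # stack the rows
print(grid_image.shape)  # (64, 64, 3)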
I hope this may help you.
I'm trying to work with ITK in Python (instead of openCV as I'm mostly using 3D image data) but can't get the filters working.
I'll skip the exact error messages as they depend on what I'm trying. You can reproduce them with the example below based on the ITK documentation. I create a blob using a 2D Gaussian and then try to extract its contours.
The approximate_signed_distance_map_image_filter acts as expected but the contour_extractor2_d_image_filter crashes on me in various ways no matter what I do.
Any ideas on how to solve this?
Minimal (2D) example
import itk
import matplotlib.pyplot as plt
import numpy as np
fig, axs = plt.subplots(1,3)
print('creating blob from 2d gaussian histogram')
arr = np.random.multivariate_normal([0,0], [[1,0],[0,1]], 100000)
h = np.histogram2d(arr[:,0],arr[:,1], bins=[30,30])
axs[0].set_title('Blob')
axs[0].imshow(h[0], cmap='gray')
print('applying itk approximate_signed_distance_map_image_filter')
arr_image = itk.image_view_from_array(h[0])
asdm = itk.approximate_signed_distance_map_image_filter(arr_image, inside_value=1000, outside_value=0)
asdm_arr = itk.array_from_image(asdm)
axs[1].set_title('signed distance')
axs[1].imshow(asdm_arr)
print('applying itk contour_extractor2_d_image_filter')
ce2d = itk.contour_extractor2_d_image_filter(itk.output(asdm), contour_value=1000)
ce2d_arr = itk.array_from_image(ce2d)
# also not working
# ce2d = itk.ContourExtractor2DImageFilter.New()
# ce2d.SetInput(asdm);
# ce2d.SetContourValue(0);
# ce2d.Update()
# ce2d_arr = itk.array_from_image(ce2d.GetOutput())
axs[2].set_title('contour')
axs[2].imshow(ce2d_arr)
plt.show()
I am trying to plot a 2D colormap.
I would imagine something like this:
grid = np.ndarray([2,2])
grid[0,0] = [35,74,3]
grid[0,1] = [146,252,7]
grid[1,0] = [215,84,14]
grid[1,1] = [16,62,8]
plotter.map(grid)
What library supports this?
from PIL import Image
import random
import numpy as np

steps = 100
# use uint8 so PIL interprets the buffer as 8-bit RGB; np.ndarray([...]) gives
# an uninitialized float64 array, which Image.fromarray cannot treat as 'RGB'
grid = np.zeros((steps, steps, 3), dtype=np.uint8)
for step_search_depth in range(steps):
    for step_simulated_games_per_state in range(steps):
        colour = random.randint(0, 200)
        grid[step_search_depth, step_simulated_games_per_state] = (colour, colour, colour)
Image.fromarray(grid, 'RGB').show()
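If you would rather stay in matplotlib (closer to the plotter.map(grid) pseudocode in the question), imshow renders an RGB array directly; a minimal sketch:
import numpy as np
import matplotlib.pyplot as plt
grid = np.zeros((2, 2, 3), dtype=np.uint8)
grid[0, 0] = (35, 74, 3)
grid[0, 1] = (146, 252, 7)
grid[1, 0] = (215, 84, 14)
grid[1, 1] = (16, 62, 8)
plt.imshow(grid)   # each cell is drawn in its RGB colour
plt.show()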
I'm trying to change the phase of an image in the Fourier domain pseudorandomly while keeping the magnitude same to get a noisy image. Here's the code for that:
import numpy as np
import matplotlib.pyplot as plt
import cv2
img_orig = cv2.imread("Lenna.png", 0)
plt.imshow(img_orig, cmap="gray");
Original Image
f = np.fft.fft2(img_orig)
mag_orig, ang_orig = np.abs(f), np.arctan2(f.imag, f.real)
np.random.seed(42)
ns = np.random.uniform(0, 6.28, size = f.shape)
ang_noise = ang_orig+ns
img_noise = np.abs(np.fft.ifft2(mag_orig*np.exp(ang_noise*1j)))
plt.imshow(img_noise, cmap="gray");
Noisy Image
But when I try to reconstruct the original image by removing the noise the way I added it, I get a noisy version of the original image. Here's the code:
f_noise = np.fft.fft2(img_noise)
mag_noise, ang_noise = np.abs(f_noise), np.arctan2(f_noise.imag, f_noise.real)
ang_recover = ang_noise-ns
img_recover = np.abs(np.fft.ifft2(mag_noise*np.exp(ang_recover*1j)))
plt.imshow(img_recover, cmap="gray");
Reconstructed Image
Any idea why this is happening and how to remove it? I'd appreciate any help I can get. Thank you.
Add to your code, right after the line
ns = np.random.uniform(0, 6.28, size = f.shape)
the following, which makes the phase symmetric (the FFT phase of a real array is antisymmetric, so this keeps the inverse transform real):
ns = np.fft.fft2(ns)
ns = np.arctan2(ns.imag, ns.real)
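A quick way to check that symmetry argument (my sketch, not part of the original answer): with the symmetrized ns, the imaginary part of the inverse transform should be negligible.
# after symmetrizing ns as above:
img_noise_c = np.fft.ifft2(mag_orig * np.exp(1j * (ang_orig + ns)))
print(np.abs(img_noise_c.imag).max())  # ~0: the noisy image is real up to rounding error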
After adding noise in Fourier space, your image in real space will be complex (i.e will have both a magnitude and a phase). In your case you are taking the absolute value though, probably so that you can plot it, but in doing so you are removing this phase information and altering your image when you shouldn't.
In short, I think you need to remove the abs in this line:
img_noise = np.abs(np.fft.ifft2(mag_orig*np.exp(ang_noise*1j)))
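To illustrate, here is a minimal sketch of the round trip with the complex image kept intact (variable names follow the question's code):
# keep the complex result instead of taking its magnitude
img_noise_c = np.fft.ifft2(mag_orig * np.exp(ang_noise * 1j))
f_noise = np.fft.fft2(img_noise_c)
mag_noise = np.abs(f_noise)
ang_rec = np.arctan2(f_noise.imag, f_noise.real) - ns
img_recover = np.fft.ifft2(mag_noise * np.exp(ang_rec * 1j)).real
plt.imshow(img_recover, cmap="gray")  # matches img_orig up to numerical error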
So I've been working on a facial identification project. It's for my science fair, and I'm in the phase where I'm trying to produce data graphs, plots, and visualizations. I've got it working to some extent, but it's not consistent (in terms of execution).
The thing is, sometimes the code works, and sometimes it gives me an error.
For some context, the error is with NumPy's append(). I have a variable I want to append data to, but when it fails the error is AttributeError: 'numpy.ndarray' object has no attribute 'append'.
#Although the results aren't as expected, this can make for a good demo in ISEF
#The whole refresh after a face is detected is cool and can be used to show how different faces cluster
# Numerical computation requirements
import numpy as np
from numpy import linalg, load, expand_dims, asarray, savez_compressed, append
from numpy.linalg import norm
import pandas as pd
# Plotting requirements
import matplotlib
from matplotlib import pyplot as plt
import matplotlib.patheffects as PathEffects
from matplotlib.animation import FuncAnimation as ani
import seaborn as sb
# Clustering requirements
import sklearn
from sklearn.cluster import KMeans
from sklearn.manifold import TSNE
from sklearn.preprocessing import scale
# Miscellaneous requirements
import os
import cv2
from PIL import Image
from mtcnn.mtcnn import MTCNN
from keras.models import load_model
from scipy.spatial.distance import squareform, pdist
# Initialize RNG seed and required size for Facenet
seed = 12345678
size = (160,160)
# Required networks
facenet = load_model('facenet_keras.h5')
fd = MTCNN()
# Initialize Seaborn plots
sb.set_style('darkgrid')
sb.set_palette('muted')
sb.set_context('notebook', font_scale=1.5, rc={'lines.linewidth': 2.5})
# Matplotlib animation requirements?
plt.style.use('fivethirtyeight')
fig = plt.figure()
# Load embeddings
data = load('jerome only npz/jerome embeddings.npz')
Data_1 = data['arr_0']
Dataset = []
for array in Data_1:
    Dataset.append(np.expand_dims(array, axis=0))
# Create cluster
cluster = KMeans(n_clusters=2, random_state=0).fit(Data_1)
y = cluster.labels_
z = pd.DataFrame(y.tolist())
faces = list()
def scatter(x, colors):
    palette = np.array(sb.color_palette('hls', 26))
    plot = plt.figure()
    ax = plt.subplot(aspect='equal')
    # sc = ax.scatter(x[:,0], x[:,1], lw=0, s=120, c=palette[colors.astype(np.int)])
    sc = ax.scatter(x[:,0], x[:,1], lw=0, s=120)
    labels = []
    return plot, ax, sc, labels

def detembed():
    cam = cv2.VideoCapture(0)
    _, frame = cam.read()
    info = fd.detect_faces(frame)
    if info != []:
        for i in info:
            print("***************** FACE DETECTED *************************************************")
            x, yc, w, h = i['box']
            x, y = abs(x), abs(yc)
            w, h = abs(w), abs(h)
            xx, yy = x + w, yc + h
            #cv2.rectangle(frame, (x,y), (xx,yy), (0,0,255),2)
            face = frame[yc:yy, x:xx]
            image = Image.fromarray(face)
            image = image.resize(size)
            arr = asarray(image)
            arr = arr.astype('float32')
            mean, std = arr.mean(), arr.std()
            arr = (arr - mean) / std
            samples = expand_dims(arr, axis=0)
            faces.append(samples)
    #cv2.imshow('Camera Feed', frame)
#cv2.imshow('Camera Feed', frame)
while True:
    detembed()
    embeddings = Dataset
    if not faces:
        continue
    else:
        for face in faces:
            embeds = facenet.predict(face)
            #switch these if conflicts arise
            embeddings.append(embeds)
    embeddings = asarray(embeddings)
    embeddings = embeddings[:,0,:]
    cluster = KMeans(n_clusters=2, random_state=0).fit(Data_1)
    y = cluster.labels_
    points = TSNE(random_state=seed).fit_transform(embeddings)
    # here "y" dictates the color of the plots depending on the kmeans algorithm
    scatter(points, y)
    graph = ani(fig, scatter, interval=20)
    fcount = len(embeddings)
    plt.text(0, 0, '{} points'.format(fcount))
    plt.show()
    # reset embeddings var to initial dataset
    Dataset = np.delete(Dataset, fcount - 1, 0)
    embeddings = Dataset
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break
# the VideoCapture is opened inside detembed(), so there is no capture object to release here
cv2.destroyAllWindows()
Note that I am not a talented programmer; this code was botched from some example I found online. I had to pick up Python as I went along with this project. I do have a background in C, so I would say I get the basics of code logic.
Please help. I'm getting really desperate; the science fair is getting closer and I am a high schooler with no ML mentor. I live on an island (Guam) with no machine learning practitioners (not even in the university), so I turn to Stackoverflow.
There's no issue with NumPy's append(). Here (in the 3rd statement below) you're trying to append a value to a NumPy array without using NumPy's np.append():
Dataset.append(np.expand_dims(array, axis=0))
embeddings = Dataset
embeddings.append(embeds)
Dataset starts out as a Python list, so the first pass through the loop works; but near the bottom of the loop np.delete() returns a NumPy array that gets assigned back to Dataset. From then on embeddings is a NumPy array as well, which has no .append() method, so the operation fails whenever execution reaches this line. That's also why the error only shows up sometimes.
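A tiny reproduction of the difference (my sketch, not from the original post):
collected = [np.zeros(3)]
collected.append(np.ones(3))                         # fine: lists have .append()
collected = np.delete(np.asarray(collected), 0, 0)   # np.delete returns an ndarray
collected.append(np.ones(3))                         # AttributeError: 'numpy.ndarray' object has no attribute 'append'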
A simple fix would be to use this:
embeddings = np.append(embeddings, embeds)
(Note that np.append() returns a new array instead of modifying in place, so the result must be assigned back; without an axis argument it also flattens its inputs.)
Or this,
embeddings = list(Dataset)
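Either way, a pattern that sidesteps the problem entirely (my sketch, reusing the question's variable names) is to collect everything in a plain list and convert to an array once, right before it's needed:
embeddings_list = list(Dataset)        # always a list, so .append() is safe
for face in faces:
    embeds = facenet.predict(face)
    embeddings_list.append(embeds)
embeddings = np.asarray(embeddings_list)[:, 0, :]   # convert once, then slice as before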
Hope that helps.