scikit-image LineModel() estimate incorrect

I have some pretty straightforward code to segment out a blob and then do a total least squares line fit to it, but the estimate of the line (blue) is clearly not correct. Does anybody know what I did wrong?
import numpy as np                            # added: np is used below but not imported in the snippet
from scipy import ndimage                     # added: needed for binary_fill_holes
import skimage as s
import skimage.feature as feat
import skimage.filters as flt                 # added: assuming flt refers to skimage's filters module
from skimage import measure
from matplotlib.pyplot import imshow, plot    # added: imshow/plot are called bare below

# dash is the original image of the dash as a numpy array
g = dash[2662:2800, 3050:3263, 0]
edges = feat.canny(g/255., 3.2, .01, .45)
fill = ndimage.binary_fill_holes(edges)
t = s.img_as_ubyte(flt.sobel_h(fill))/2 + s.img_as_ubyte(flt.sobel_v(fill))/2
ptpairs = np.int0(np.where(t > 160))
l = measure.LineModel()
l.estimate(ptpairs.T)
xdata = np.arange(60, 80)
ydata = l.predict_y(xdata)
imshow(g + edges*100)
plot(xdata, ydata)


How to reduce contrast of a NumPy array?

I want to reduce the contrast of my whole dataset as an experiment. The dataset is in a NumPy array of np.array(X).reshape(-1, 28, 28, 1) I tried to use the library Albumentations, which works really well for motion blur and Gaussian noise, but I didn't find a way to reduce the contrast with that library. How can I do this?
You would need to rescale each pixel's difference from the mean value. Here is an example, using this image as the source; the extra PIL and imshow code is just for the visuals:
import numpy as np
import PIL.Image
import matplotlib.pyplot as plt

im = PIL.Image.open('a-600-600-image-of-a-building.jpg')
np_im = np.array(im)
np_mean = np.mean(np_im)

for factor in [1.0, 0.9, 0.5, 0.1]:
    plt.figure()
    plt.title("{}".format(factor))
    # scale the deviation from the mean, then shift back to the mean
    reduced_contrast = (np_im - np_mean)*factor + np_mean
    # cast back to uint8 so PIL can build an RGB image from the float result
    new_im = PIL.Image.fromarray(reduced_contrast.astype(np.uint8))
    plt.imshow(np.array(list(new_im.convert('RGBA').getdata())).reshape(new_im.height, new_im.width, 4))
    plt.savefig("{}.png".format(factor))
Output:
The relevant line is reduced_contrast=(np_im-np_mean)*factor + np_mean
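Applied to the dataset shape mentioned in the question, np.array(X).reshape(-1, 28, 28, 1), a minimal sketch might look like this (using one global mean over the whole dataset is an assumption; a per-image mean works just as well):
import numpy as np

# dummy stand-in for the question's dataset of shape (-1, 28, 28, 1)
X = np.random.randint(0, 256, size=(100, 28, 28, 1)).astype(np.float32)

factor = 0.5                    # < 1.0 reduces contrast, 1.0 leaves it unchanged
mean = X.mean()                 # global mean; use X.mean(axis=(1, 2, 3), keepdims=True) for per-image
X_low_contrast = (X - mean) * factor + mean
X_low_contrast = np.clip(X_low_contrast, 0, 255)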

Numpy append sometimes works, sometimes doesn't

So I've been working on this facial identification project. It's for my science fair and I'm in the phase where I'm trying to get data graphs, plots, and visualizations. I've got it to work to some extent, but it's not consistent (in terms of execution).
The thing is, sometimes the code works and sometimes it gives me an error.
For some context, the error is with NumPy's append(). I have a variable I want to append data to, but when it fails the error is AttributeError: 'numpy.ndarray' object has no attribute 'append'
#Although the results aren't as expected, this can make for a good demo in ISEF
#The whole refresh after a face is detected is cool and can be used to show how different faces cluster

# Numerical computation requirements
import numpy as np
from numpy import linalg, load, expand_dims, asarray, savez_compressed, append
from numpy.linalg import norm
import pandas as pd

# Plotting requirements
import matplotlib
from matplotlib import pyplot as plt
import matplotlib.patheffects as PathEffects
from matplotlib.animation import FuncAnimation as ani
import seaborn as sb

# Clustering requirements
import sklearn
from sklearn.cluster import KMeans
from sklearn.manifold import TSNE
from sklearn.preprocessing import scale

# Miscellaneous requirements
import os
import cv2
from PIL import Image
from mtcnn.mtcnn import MTCNN
from keras.models import load_model
from scipy.spatial.distance import squareform, pdist

# Initialize RNG seed and required size for Facenet
seed = 12345678
size = (160,160)

# Required networks
facenet = load_model('facenet_keras.h5')
fd = MTCNN()

# Initialize Seaborn plots
sb.set_style('darkgrid')
sb.set_palette('muted')
sb.set_context('notebook', font_scale=1.5, rc={'lines.linewidth': 2.5})

# Matplotlib animation requirements?
plt.style.use('fivethirtyeight')
fig = plt.figure()

# Load embeddings
data = load('jerome only npz/jerome embeddings.npz')
Data_1 = data['arr_0']
Dataset = []
for array in Data_1:
    Dataset.append(np.expand_dims(array, axis=0))

# Create cluster
cluster = KMeans(n_clusters=2, random_state=0).fit(Data_1)
y = cluster.labels_
z = pd.DataFrame(y.tolist())

faces = list()

def scatter(x, colors):
    palette = np.array(sb.color_palette('hls', 26))
    plot = plt.figure()
    ax = plt.subplot(aspect='equal')
    # sc = ax.scatter(x[:,0],x[:,1], lw =0, s=120, c=palette[colors.astype(np.int)])
    sc = ax.scatter(x[:,0], x[:,1], lw=0, s=120)
    labels = []
    return plot, ax, sc, labels

def detembed():
    cam = cv2.VideoCapture(0)
    _, frame = cam.read()
    info = fd.detect_faces(frame)
    if info != []:
        for i in info:
            print("***************** FACE DETECTED *************************************************")
            x, yc, w, h = i['box']
            x, y = abs(x), abs(yc)
            w, h = abs(w), abs(h)
            xx, yy = x+w, yc+h
            #cv2.rectangle(frame, (x,y), (xx,yy), (0,0,255),2)
            face = frame[yc:yy, x:xx]
            image = Image.fromarray(face)
            image = image.resize(size)
            arr = asarray(image)
            arr = arr.astype('float32')
            mean, std = arr.mean(), arr.std()
            arr = (arr - mean) / std
            samples = expand_dims(arr, axis=0)
            faces.append(samples)
    #cv2.imshow('Camera Feed', frame)

while True:
    detembed()
    embeddings = Dataset
    if not faces:
        continue
    else:
        for face in faces:
            embeds = facenet.predict(face)
            #switch these if conflicts arise
            embeddings.append(embeds)
    embeddings = asarray(embeddings)
    embeddings = embeddings[:,0,:]
    cluster = KMeans(n_clusters=2, random_state=0).fit(Data_1)
    y = cluster.labels_
    points = TSNE(random_state=seed).fit_transform(embeddings)
    # here "y" dictates the color of the plots depending on the kmeans algorithm
    scatter(points, y)
    graph = ani(fig, scatter, interval=20)
    fcount = len(embeddings)
    plt.text(0, 0, '{} points'.format(fcount))
    plt.show()
    # reset embeddings var to initial dataset
    Dataset = np.delete(Dataset, fcount - 1, 0)
    embeddings = Dataset
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break

cv2.release()
cv2.destroyAllWindows
Note that I am not a talented programmer; this code was botched from some example I found online. I had to pick up Python as I went along with this project. I do have a background in C, so I would say I get the basics of code logic.
Please help. I'm getting really desperate; the science fair is getting closer and I am a high schooler with no ML mentor. I live on an island (Guam) with no machine learning practitioners (not even at the university), so I turn to Stack Overflow.
There's no issue with NumPy's append(). In the third statement below, you're calling .append() on a NumPy array rather than on a Python list (and rather than using np.append()):
Dataset.append(np.expand_dims(array, axis=0))
embeddings = Dataset
embeddings.append(embeds)
Dataset starts out as a plain Python list, so the first statement and the first pass through the loop work. But at the end of each loop iteration you reassign Dataset = np.delete(Dataset, fcount - 1, 0), and np.delete returns a NumPy array; on the next pass embeddings = Dataset is therefore a NumPy array, and the operation fails whenever execution reaches the third statement.
A simple fix would be to use this (note that np.append returns a new array rather than appending in place):
np.append(embeddings, embeds)
Or this:
embeddings = list(Dataset)
Hope that helps.
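For illustration, here is a minimal, self-contained sketch of the list-based approach (the shapes and variable names mirror the question; in the real code embeds would come from facenet.predict(face)):
import numpy as np

# stand-ins for the question's data: a dataset of (1, 128) embeddings
Dataset = [np.zeros((1, 128), dtype=np.float32) for _ in range(3)]
new_embeds = [np.ones((1, 128), dtype=np.float32)]

embeddings = list(Dataset)        # a plain Python list, so .append() always exists
for embeds in new_embeds:         # in the question: embeds = facenet.predict(face)
    embeddings.append(embeds)

embeddings = np.asarray(embeddings)[:, 0, :]   # convert to an array only when it is needed
print(embeddings.shape)                        # (4, 128)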

Why does scipy.signal.correlate2d fail to work in this example?

I am trying to cross-correlate two images, and thus locate the template image on the first image, by finding the maximum correlation value.
I drew an image with some random shapes (first image) and cut out one of these shapes (template). Now, when I use scipy's correlate2d and locate the points in the correlation with the maximum value, several points appear. From my knowledge, shouldn't there be only one point where the overlap is at its maximum?
The idea behind this exercise is to take some part of an image, and then correlate that to some previous images from a database. Then I should be able to locate this part on the older images based on the maximum value of correlation.
My code looks something like this:
from matplotlib import pyplot as plt
from PIL import Image
import numpy as np                # added: np is used below but not imported in the snippet
import scipy.signal as sp

img = Image.open('test.png').convert('L')
img = np.asarray(img)

temp = Image.open('test_temp.png').convert('L')
temp = np.asarray(temp)

corr = sp.correlate2d(img, temp, boundary='symm', mode='full')
plt.imshow(corr, cmap='hot')
plt.colorbar()

coordin = np.where(corr == np.max(corr))   # finds all coordinates with maximum correlation
listOfCoordinates = list(zip(coordin[1], coordin[0]))
for i in range(len(listOfCoordinates)):    # plotting all those coordinates
    plt.plot(listOfCoordinates[i][0], listOfCoordinates[i][1], 'c*', markersize=5)
This yields the figure:
Cyan stars are points with max correlation value (255).
I expect there to be only one point in "corr" to have the max value of correlation, but several appear. I have tried to use different modes of correlating, but to no avail.
This is the test image I use when correlating.
This is the template, cut from the original image.
Can anyone give some insight to what I might be doing wrong here?
You are probably overflowing the numpy type uint8.
Try using:
img = np.asarray(img,dtype=np.float32)
temp = np.asarray(temp,dtype=np.float32)
Untested.
Applying
img = img - img.mean()
temp = temp - temp.mean()
before computing the 2D cross-correlation corr should give you the expected result.
Cleaning up the code, for a full example:
from imageio import imread
from matplotlib import pyplot as plt
import scipy.signal as sp
import numpy as np
img = imread('https://i.stack.imgur.com/JL2LW.png', pilmode='L')
temp = imread('https://i.stack.imgur.com/UIUzJ.png', pilmode='L')
corr = sp.correlate2d(img - img.mean(),
temp - temp.mean(),
boundary='symm',
mode='full')
# coordinates where there is a maximum correlation
max_coords = np.where(corr == np.max(corr))
plt.plot(max_coords[1], max_coords[0],'c*', markersize=5)
plt.imshow(corr, cmap='hot')
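To go one step further and read the template's position out of the correlation peak, here is a small self-contained sketch with synthetic data, so the offset convention can be checked; my understanding is that the mode='full' output is padded by temp.shape - 1 on each axis, but it is worth verifying against your own images:
import numpy as np
import scipy.signal as sp

rng = np.random.default_rng(0)
img = rng.random((60, 80))
temp = img[20:30, 40:55].copy()           # cut a known patch out of the image

corr = sp.correlate2d(img - img.mean(), temp - temp.mean(), mode='full')
y_peak, x_peak = np.unravel_index(np.argmax(corr), corr.shape)

# shift the peak back by the full-mode padding to recover the top-left corner
top = y_peak - (temp.shape[0] - 1)
left = x_peak - (temp.shape[1] - 1)
print(top, left)                          # should print 20 40 for this example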

Implementing FFT with Pytorch

I am trying to implement the FFT by using the conv1d function provided in PyTorch.
Generating an artificial signal
import numpy as np
import torch
from torch.autograd import Variable
from torch.nn.functional import conv1d
from scipy import fft, fftpack
import matplotlib.pyplot as plt
%matplotlib inline

# Creating filters
d = 4096 # size of windows

def create_filters(d):
    x = np.arange(0, d, 1)
    wsin = np.empty((d,1,d), dtype=np.float32)
    wcos = np.empty((d,1,d), dtype=np.float32)
    window_mask = 1.0-1.0*np.cos(x)
    for ind in range(d):
        wsin[ind,0,:] = np.sin(2*np.pi*((ind+1)/d)*x)
        wcos[ind,0,:] = np.cos(2*np.pi*((ind+1)/d)*x)
    return wsin, wcos

wsin, wcos = create_filters(d)
wsin_var = Variable(torch.from_numpy(wsin), requires_grad=False)
wcos_var = Variable(torch.from_numpy(wcos), requires_grad=False)

# Creating signal
t = np.linspace(0, 1, 4096)
x = np.sin(2*np.pi*100*t) + np.sin(2*np.pi*200*t) + np.random.normal(scale=5, size=(4096))
plt.plot(x)
FFT with PyTorch
signal_input = torch.from_numpy(x.reshape(1,-1),)[:,None,:4096]
signal_input = signal_input.float()
zx = conv1d(signal_input, wsin_var, stride=1).pow(2)+conv1d(signal_input, wcos_var, stride=1).pow(2)
FFT with Scipy
fig = plt.figure(figsize=(20,5))
plt.plot(np.abs(fft(x).reshape(-1))[:500])
My Question
As you can see, the two outputs are quite similar in terms of their peak characteristics. That means my implementation is not totally wrong.
However, there are also some subtleties, such as the scale of the spectrum and the signal-to-noise ratio. I am unable to figure out what's missing here to get exactly the same result.
You calculated the power rather than the amplitude.
You simply need to add the line zx = zx.pow(0.5) to take the square root to get the amplitude.
As of version 1.8, PyTorch has a native FFT implementation, torch.fft:
torch.fft.fft(x)
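For reference, a minimal self-contained sketch of getting the amplitude spectrum directly from the built-in FFT (regenerating a signal like the one in the question rather than reusing the variables above):
import numpy as np
import torch
import matplotlib.pyplot as plt

t = np.linspace(0, 1, 4096)
x = np.sin(2*np.pi*100*t) + np.sin(2*np.pi*200*t) + np.random.normal(scale=5, size=4096)

spectrum = torch.fft.fft(torch.from_numpy(x))   # complex-valued output
amplitude = spectrum.abs()                       # comparable to np.abs(fft(x)) in SciPy

plt.plot(amplitude[:500].numpy())                # peaks near the 100 Hz and 200 Hz bins
plt.show()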

local histogram equalization

I am trying to do some image analysis in Python (I have to use Python). I need to do both a global and a local histogram equalization. The global version works well; however, the local version, using a 7x7 footprint, gives a very poor result.
This is the global version:
import matplotlib.pyplot as plt
import matplotlib.image as mpimg
from scipy import ndimage, misc
import scipy.io as io
from scipy.misc import toimage
import numpy as n
import pylab as py
from numpy import *

mat = io.loadmat('image.mat')
image = mat['imageD']

def histeq(im, nbr_bins=256):
    #get image histogram
    imhist, bins = histogram(im.flatten(), nbr_bins, normed=True)
    cdf = imhist.cumsum()      #cumulative distribution function
    cdf = 0.6 * cdf / cdf[-1]  #normalize
    #use linear interpolation of cdf to find new pixel values
    im2 = interp(im.flatten(), bins[:-1], cdf)
    #returns image and cumulative histogram used to map
    return im2.reshape(im.shape), cdf

im = image
im2, cdf = histeq(im)
To do the local version, I am trying to use a generic filter like so (using the same image as loaded previously):
def func(x):
    cdf = []
    xhist, bins = histogram(x, 256, normed=True)
    cdf = xhist.cumsum()
    cdf = 0.6 * cdf / cdf[-1]
    im_out = interp(x, bins[:-1], cdf)
    midval = interp(x[24], bins[:-1], cdf)
    return midval

print im.shape
im3 = ndimage.filters.generic_filter(im, func, size=im.shape, footprint=n.ones((7,7)))
Does anyone have any suggestions/thoughts as to why the second version will not work? I'm really stuck and any comments would be greatly appreciated! Thanks in advance!
You could use the scikit-image library to perform global and local histogram equalization. Stealing with pride from the linked example, below is the snippet. The equalization is done with a disk-shaped kernel (or footprint), but you could change this to a square by setting kernel = np.ones((N, M)); see the sketch after the snippet.
import numpy as np
import matplotlib
import matplotlib.pyplot as plt
from skimage import data
from skimage.util import img_as_ubyte
from skimage import exposure
import skimage.morphology as morp
from skimage.filters import rank
# Original image
img = img_as_ubyte(data.moon())
# Global equalize
img_global = exposure.equalize_hist(img)
# Local Equalization, disk shape kernel
# Better contrast with disk kernel but could be different
kernel = morp.disk(30)
img_local = rank.equalize(img, selem=kernel)
fig, (ax_img, ax_global, ax_local) = plt.subplots(1, 3)
ax_img.imshow(img, cmap=plt.cm.gray)
ax_img.set_title('Low contrast image')
ax_img.set_axis_off()
ax_global.imshow(img_global, cmap=plt.cm.gray)
ax_global.set_title('Global equalization')
ax_global.set_axis_off()
ax_local.imshow(img_local, cmap=plt.cm.gray)
ax_local.set_title('Local equalization')
ax_local.set_axis_off()
plt.show()
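To match the 7x7 footprint from the question, the square-kernel variant mentioned above would look something like this (note that newer scikit-image versions rename the selem= argument to footprint=):
import numpy as np
from skimage import data
from skimage.util import img_as_ubyte
from skimage.filters import rank

img = img_as_ubyte(data.moon())
kernel = np.ones((7, 7), dtype=bool)          # square 7x7 neighbourhood
img_local_7x7 = rank.equalize(img, selem=kernel)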
