I am new to Python. I am importing numpy and scipy, and from skimage I import imread and rgb2gray. I use scipy's signal module to compute the image gradients with Sobel kernels, convert the image to grayscale (imggray), apply a Gaussian filter, and build the Harris response. When I then call the corner function on the Harris response I get the error below.
It would be really helpful if anyone could help me. I have installed and uninstalled skimage several times but couldn't find a solution, and I have imported all the libraries I thought were required, so I don't know what the issue is. Thank you in advance.
Below is my error:
NameError Traceback (most recent call last)
<ipython-input-7-76c3bdecb24d> in <module>
----> 1 corners = corner_peaks(harris_response)
2 fig, ax = plt.subplots()
3 ax.imshow(img, interpolation='nearest', cmap=plt.cm.gray)
4 ax.plot(corners[:, 1], corners[:, 0], '.r', markersize=3)
NameError: name 'corner_peaks' is not defined
Below is my code :
from skimage.io import imread
from skimage.color import rgb2gray
img = imread('box.jpg')
imggray = rgb2gray(img)
from scipy import signal as sig
import numpy as np
def gradient_x(imggray):
    ## Sobel operator kernels.
    kernel_x = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]])
    return sig.convolve2d(imggray, kernel_x, mode='same')

def gradient_y(imggray):
    kernel_y = np.array([[1, 2, 1], [0, 0, 0], [-1, -2, -1]])
    return sig.convolve2d(imggray, kernel_y, mode='same')
I_x = gradient_x(imggray)
I_y = gradient_y(imggray)
from scipy.ndimage import gaussian_filter
Ixx = gaussian_filter(I_x**2, sigma=1)
Ixy = gaussian_filter(I_y*I_x, sigma=1)
Iyy = gaussian_filter(I_y**2, sigma=1)
k = 0.05
# determinant
detA = Ixx * Iyy - Ixy ** 2
# trace
traceA = Ixx + Iyy
harris_response = detA - k * traceA ** 2
img_copy_for_corners = np.copy(img)
img_copy_for_edges = np.copy(img)
for rowindex, response in enumerate(harris_response):
    for colindex, r in enumerate(response):
        if r > 0:
            # this is a corner
            img_copy_for_corners[rowindex, colindex] = [255, 0, 0]
        elif r < 0:
            # this is an edge
            img_copy_for_edges[rowindex, colindex] = [0, 255, 0]
corners = corner_peaks(harris_response)
fig, ax = plt.subplots()
ax.imshow(img, interpolation='nearest', cmap=plt.cm.gray)
ax.plot(corners[:, 1], corners[:, 0], '.r', markersize=3)
It seems you forgot to import the function. You need something like
from skimage.feature import corner_peaks
at the beginning of your script.
Later in the script you will encounter the error that plt is not defined. You'd need to
import matplotlib.pyplot as plt
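Putting both imports together with the corner detection and plotting from the question, the fixed tail of the script would look roughly like this (a minimal sketch; it assumes img and harris_response were computed as above):
from skimage.feature import corner_peaks   # provides corner_peaks
import matplotlib.pyplot as plt            # provides plt

# corner_peaks returns (row, col) coordinates of local maxima in the response
corners = corner_peaks(harris_response)

fig, ax = plt.subplots()
ax.imshow(img, interpolation='nearest', cmap=plt.cm.gray)
ax.plot(corners[:, 1], corners[:, 0], '.r', markersize=3)
plt.show()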
I am trying to overlay one image patch precisely onto a larger patch of the image.
I have the coordinates where I'd like to put the patch and overlay the image, but I don't know how to do it with matplotlib.
I know it's possible with Pillow (as explained here), but since I am using matplotlib for everything I'd be happy to stick to it.
For this example it would mean moving the 'red patch' into the rectangle where it's supposed to be.
Here is the code that I used for that:
temp_k = img_arr[0][
np.min(kernel[:, 1]) : np.max(kernel[:, 1]),
np.min(kernel[:, 0]) : np.max(kernel[:, 0]),
]
temp_w = img_arr[1][
np.min(window[:, 1]) : np.max(window[:, 1]),
np.min(window[:, 0]) : np.max(window[:, 0]),
]
w, h = temp_k.shape[::-1]
res = cv2.matchTemplate(temp_w, temp_k, cv2.TM_CCORR_NORMED)
min_val, max_val, min_loc, max_loc = cv2.minMaxLoc(res)
top_left = max_loc  # with TM_CCORR_NORMED the best match is at the maximum
bottom_right = (top_left[0] + w, top_left[1] + h)
cv2.rectangle(temp_w, top_left, bottom_right, 255, 1)
plt.imshow(temp_w, cmap="bone")
plt.imshow(temp_k, cmap="magma", alpha=0.6)
plt.plot(max_loc[0], max_loc[1], "yo")
plt.tight_layout()
plt.savefig("../images/test.png")
Does anyone have an idea how to do that?
Thanks in advance.
Just as with Pillow, you need to tell Matplotlib where to place the data. If you omit that, it will assume a default extent of [0, xs, ys, 0], basically plotting it in the top-left corner as shown in your image.
Generating some example data:
import matplotlib.pyplot as plt
import numpy as np
n = 32
m = n // 2
o = n // 4
a = np.random.randn(n,n)
b = np.random.randn(m,m)
a_extent = [0,n,n,0]
b_extent = [o, o+m, o+m, o]
# a_extent = [0, 32, 32, 0]
# b_extent = [8, 24, 24, 8]
Plotting with:
fig, ax = plt.subplots(figsize=(5,5), constrained_layout=False, dpi=86, facecolor="w")
ax.imshow(a, cmap="bone", extent=a_extent)
ax.autoscale(False)
ax.imshow(b, cmap="magma", extent=b_extent)
This will place the smaller array b inside the extent [8, 24] of a, i.e. centred on the larger image.
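Applied to the matchTemplate snippet from the question, the same idea would look something like this (a sketch only; temp_w, temp_k and max_loc are the variables from the question, and with TM_CCORR_NORMED the best match sits at max_loc, the top-left corner of the matched region):
x0, y0 = max_loc                # top-left corner of the best match inside temp_w
h_k, w_k = temp_k.shape         # patch height and width
fig, ax = plt.subplots()
ax.imshow(temp_w, cmap="bone")
ax.autoscale(False)
# extent is [left, right, bottom, top]; with the default origin='upper'
# the bottom value is the larger row index
ax.imshow(temp_k, cmap="magma", alpha=0.6, extent=[x0, x0 + w_k, y0 + h_k, y0])
ax.plot(x0, y0, "yo")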
This is a "working example" that does not work. Why does it not run? scipy seems to be the problem.
I get this error:
File "display_map.py", line 35, in
rot_cw = R.from_quat(keyframe["rot_cw"]).as_matrix()
AttributeError: 'Rotation' object has no attribute 'as_matrix'
Please, can someone help me fix it? I have tried downgrading scipy.
import msgpack
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D
import numpy as np
from numpy.linalg import inv
from scipy.spatial.transform import Rotation as R
import open3d as o3d
import sys
if len(sys.argv) < 2:
    print(
        "ERROR: Please provide path to .msg file. Example usage is; python3 visualize_openvslam_map.py path_to.msg"
    )
    exit()

with open(sys.argv[1], "rb") as f:
    upacked_msg = msgpack.Unpacker(f)
    packed_msg = upacked_msg.unpack()

keyfarmes = packed_msg["keyframes"]
landmarks = packed_msg["landmarks"]
# FILL IN KEYFRAME POINTS(ODOMETRY) TO ARRAY
keyframe_points = []
keyframe_points_color = []
for keyframe in keyfarmes.values():
    # get conversion from camera to world
    trans_cw = np.matrix(keyframe["trans_cw"]).T
    rot_cw = R.from_quat(keyframe["rot_cw"]).as_matrix()
    # compute conversion from world to camera
    rot_wc = rot_cw.T
    trans_wc = -rot_wc * trans_cw
    keyframe_points.append((trans_wc[0, 0], trans_wc[1, 0], trans_wc[2, 0]))
keyframe_points = np.array(keyframe_points)
keyframe_points_color = np.repeat(np.array([[0., 1., 0.]]),
keyframe_points.shape[0],
axis=0)
# FILL IN LANDMARK POINTS TO ARRAY
landmark_points = []
landmark_points_color = []
for lm in landmarks.values():
    landmark_points.append(lm["pos_w"])
    landmark_points_color.append([
        abs(lm["pos_w"][1]) * 4,
        abs(lm["pos_w"][1]) * 2,
        abs(lm["pos_w"][1]) * 3
    ])
landmark_points = np.array(landmark_points)
landmark_points_color = np.array(landmark_points_color)
# CONSTRUCT KEYFRAME(ODOMETRY) FOR VISUALIZTION
keyframe_points_pointcloud = o3d.geometry.PointCloud()
keyframe_points_pointcloud.points = o3d.utility.Vector3dVector(keyframe_points)
keyframe_points_pointcloud.colors = o3d.utility.Vector3dVector(
keyframe_points_color)
# CONSTRUCT LANDMARK POINTCLOUD FOR VISUALIZTION
landmark_points_pointcloud = o3d.geometry.PointCloud()
landmark_points_pointcloud.points = o3d.utility.Vector3dVector(landmark_points)
landmark_points_pointcloud.colors = o3d.utility.Vector3dVector(
landmark_points_color)
# VISULIZE MAP
o3d.visualization.draw_geometries([
keyframe_points_pointcloud, landmark_points_pointcloud,
o3d.geometry.TriangleMesh.create_coordinate_frame()
])
In scipy.spatial.transform.Rotation, the methods from_dcm and as_dcm were renamed to from_matrix and as_matrix in SciPy 1.4. The AttributeError means the installed SciPy is older than that and only knows the old names, so either upgrade SciPy or use as_dcm.
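A version-tolerant way to write that line (a sketch, assuming keyframe["rot_cw"] holds a quaternion as in the script) is to fall back to the old method name when as_matrix is not available:
rot = R.from_quat(keyframe["rot_cw"])
try:
    rot_cw = rot.as_matrix()   # SciPy >= 1.4
except AttributeError:
    rot_cw = rot.as_dcm()      # older SciPy; same 3x3 rotation matrix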
I am using skimage. I need to create a mask equal in area to an image; the mask will have a region that hides part of the image. I am building it as in the sample below, but this is very slow, and I am sure there is a more pythonic way of doing it. Could anyone point it out, please?
The code I am using presently:
import matplotlib.pyplot as plt
import matplotlib
import numpy as np
import skimage as sk
from skimage import io
sourceimage = './sample.jpg'
img = np.copy(io.imread(sourceimage, as_gray=True))
mask = np.full(img.shape, 1)
maskpolygon = [(1,200),(300,644),(625,490),(625,1)]
from shapely.geometry import Point
from shapely.geometry.polygon import Polygon
pgon = Polygon(maskpolygon)
for r in range(mask.shape[0]):
    for c in range(mask.shape[1]):
        p = Point(r, c)
        if pgon.contains(p):
            mask[r, c] = 0
The expected result looks like this (for a 9x9 image, but I am working with 700x700):
[1,1,1,1,1,1,1,1,1]
[1,1,1,1,1,1,1,1,1]
[1,1,0,0,1,1,1,1,1]
[1,1,0,0,1,1,1,1,1]
[1,1,0,0,0,0,1,1,1]
[1,1,0,0,0,0,0,1,1]
[1,1,1,0,0,0,0,1,1]
[1,1,1,1,0,0,1,1,1]
[1,1,1,1,1,1,1,1,1]
I can invert the 1's and 0's to show or hide the region.
Thank you.
I have been able to resolve this thanks to @HansHirse.
Below is how I worked it out:
sourceimage = './sample.jpg'
figuresize = (100, 100)
from skimage.draw import polygon
#open source and create a copy
img = np.copy(io.imread(sourceimage, as_gray=True))
mask = np.full(img.shape, 0)
maskpolygon = [(1,1), (280,1),(625, 280),(460, 621),(15, 625)]
maskpolygonr = [x[0] for x in maskpolygon]
maskpolygonc = [x[1] for x in maskpolygon]
rr, cc = polygon(maskpolygonr, maskpolygonc)
mask[rr ,cc] = 1
masked_image = img * mask
# show step by step what is happening
fig, axs = plt.subplots(nrows = 3, ncols = 1, sharex=True, sharey = True, figsize=figuresize )
ax = axs.ravel()
ax[0].imshow(img)#, cmap=plt.cm.gray)
ax[1].imshow(mask)#, cmap=plt.cm.gray)
ax[2].imshow(masked_image)#, cmap=plt.cm.gray)
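If your scikit-image version is recent enough (0.16 or later), skimage.draw.polygon2mask wraps the same draw-and-fill pattern into a single call; a minimal sketch, assuming maskpolygon is given as (row, col) vertices as above:
from skimage.draw import polygon2mask
# polygon2mask takes the output shape and an (N, 2) array of (row, col) vertices
mask = polygon2mask(img.shape, np.array(maskpolygon))
masked_image = img * mask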
I implemented the code given by Cris Luengo for convolution in the frequency domain, but I'm not getting the intended gradient image in the x direction.
Image without flipping the kernel in the x and y directions:
Image after flipping the kernel:
If you look closely, the second image is the same as the one produced by the ImageFilter.Kernel filter from the Pillow library. Also, one thing to notice: I don't have to flip the kernel if I apply the Sobel kernel in the y direction; there I get exactly the intended image.
This is my code:
import numpy as np
from scipy import misc
from scipy import fftpack
import matplotlib.pyplot as plt
from PIL import Image,ImageDraw,ImageOps,ImageFilter
from pylab import figure, title, imshow, hist, grid,show
im1=Image.open("astronaut.png").convert('L')
# im1=ImageOps.grayscale(im1)
img=np.array(im1)
# kernel = np.ones((3,3)) / 9
# kernel=np.array([[0,-1,0],[-1,4,-1],[0,-1,0]])
kernel=np.array([[-1,0,1],[-2,0,2],[-1,0,1]])
kernel=np.rot90(kernel,2)
print(kernel)
sz = (img.shape[0] - kernel.shape[0], img.shape[1] - kernel.shape[1])  # total amount of padding
kernel = np.pad(kernel, (((sz[0]+1)//2, sz[0]//2), ((sz[1]+1)//2, sz[1]//2)), 'constant')
kernel = fftpack.ifftshift(kernel)
filtered = (np.real(fftpack.ifft2(fftpack.fft2(img) * fftpack.fft2(kernel)))
            + np.imag(fftpack.ifft2(fftpack.fft2(img) * fftpack.fft2(kernel))))
filtered=np.maximum(0,np.minimum(filtered,255))
im2=Image.open("astronaut.png").convert('L')
u = im2.filter(ImageFilter.Kernel((3, 3), [-1, 0, 1, -2, 0, 2, -1, 0, 1], scale=1, offset=0))
fig2=figure()
ax1 = fig2.add_subplot(221)
ax2 = fig2.add_subplot(222)
ax3 = fig2.add_subplot(223)
ax1.title.set_text('Original Image')
ax2.title.set_text('After convolving in freq domain')
ax3.title.set_text('imagefilter conv')
ax1.imshow(img,cmap='gray')
ax2.imshow(filtered,cmap='gray')
ax3.imshow(np.array(u),cmap='gray')
show()
We can use the np.fft module's FFT implementation too. Here is how we can obtain the convolution with the horizontal Sobel kernel in the frequency domain (by the convolution theorem):
h, w = im.shape  # im is the 2D grayscale input image
kernel = np.array([[-1, 0, 1],
                   [-2, 0, 2],
                   [-1, 0, 1]])  # sobel_filter_x
k = len(kernel) // 2 # assuming odd-length square kernel, here it's 3x3
kernel_padded = np.pad(kernel, [(h//2-k-1, h//2-k), (w//2-k-1, w//2-k)])
im_freq = np.fft.fft2(im) # input image frequency
kernel_freq = np.fft.fft2(kernel_padded) # kernel frequency
out_freq = im_freq * kernel_freq # frequency domain convolution output
out = np.fft.ifftshift(np.fft.ifft2(out_freq)).real # spatial domain output
The figure below shows the input, kernel, and output images in the spatial and frequency domains:
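If you only need the spatial-domain result, scipy.signal.fftconvolve performs the same FFT-based convolution (including the padding and shifting) internally, which makes it a handy cross-check for the manually padded version; a minimal sketch, assuming im and kernel are the arrays defined above (results should agree away from the image borders):
from scipy.signal import fftconvolve
out_ref = fftconvolve(im, kernel, mode='same')   # same-size FFT-based convolution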
I'm working through an image processing example in Python 2.7.13. The code has
import skimage.morphology as morph and then later has the line lm1 = morph.is_local_maximum(fimg). I get the error message:
File "2dlocalmaxima.py", line 29, in <module>
lm1 = morph.is_local_maximum(fimg)
AttributeError: 'module' object has no attribute 'is_local_maximum'.
I've googled this and have found many instances of this function being used, and I can find no indication that it was deprecated. Am I doing something wrong? I have tried running it in Python 2.7.13 and 3.6; both give the same error message.
The total code from the book is:
import numpy as np
import matplotlib.pyplot as mpl
import scipy.ndimage as ndimage
import skimage.morphology as morph
# Generating data points with a non-uniform background
x = np.random.uniform(low=0, high=200, size=20).astype(int)
y = np.random.uniform(low=0, high=400, size=20).astype(int)
# Creating image with non-uniform background
func = lambda x, y: np.cos(x)+ np.sin(y)
grid_x, grid_y = np.mgrid[0:12:200j, 0:24:400j]
bkg = func(grid_x, grid_y)
bkg = bkg / np.max(bkg)
# Creating points
clean = np.zeros((200,400))
clean[(x,y)] += 5
clean = ndimage.gaussian_filter(clean, 3)
clean = clean / np.max(clean)
# Combining both the non-uniform background
# and points
fimg = bkg + clean
fimg = fimg / np.max(fimg)
# Calculating local maxima
lm1 = morph.is_local_maximum(fimg)
x1, y1 = np.where(lm1.T == True)
# Creating figure to show local maximum detection
# rate success
fig = mpl.figure(figsize=(8, 4))
ax = fig.add_subplot(111)
ax.imshow(fimg)
ax.scatter(x1, y1, s=100, facecolor='none', edgecolor='#009999')
ax.set_xlim(0,400)
ax.set_ylim(0,200)
ax.xaxis.set_visible(False)
ax.yaxis.set_visible(False)
fig.savefig('scikit_image_f02.pdf', bbox_inches='tight')
After searching through different files I determined that the function is_local_maximum had its name changed to local_maxima. My code ran to completion and produced the expected result when that substitution was made.
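For reference, the replacement amounts to the following (a sketch against a recent scikit-image, where morphology.local_maxima returns a boolean mask of the same shape as the input, so the rest of the script stays unchanged):
import skimage.morphology as morph
# renamed function: is_local_maximum -> local_maxima
lm1 = morph.local_maxima(fimg)
x1, y1 = np.where(lm1.T)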