As a homework task, I was asked to create an RGB spectrum image using only numpy functions.
This is my current code:
zero = np.dstack([
    np.linspace(0.0, 1.0, self.resolution),
    np.linspace(0.0, 0.0, self.resolution),
    np.linspace(1.0, 0.0, self.resolution)
])
spectrum = np.tile(zero, (self.resolution, 1, 1))
What this produces is a gradient from red to blue. What is left is to fill the green channel with a linspace along the other image axis. Does anyone have some tips on how to do that?
Edit: Let me re-phrase - how can I avoid this loop with numpy?
spectrum = np.tile(zero, (self.resolution, 1, 1))
for i in range(self.resolution):
    spectrum[i, :, 1] = green[i]
Your last for loop is equivalent to:
spectrum[:, :, 1] = np.linspace(0.0, 1.0, resolution)[:, None]
Edit: after playing with your spectrum, this also does the job:
res = np.linspace(0.0, 1.0, resolution)
s = np.meshgrid(res, res)
spectrum = np.stack([s[0], s[1], 1 - s[0]], axis=-1)
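For completeness, here is a small sketch that fills all three channels purely by broadcasting, assuming a square image of side resolution and the same red/blue ramps as the original code:

import numpy as np

resolution = 256                       # hypothetical size
t = np.linspace(0.0, 1.0, resolution)

spectrum = np.empty((resolution, resolution, 3))
spectrum[:, :, 0] = t[None, :]         # red ramps up along the columns
spectrum[:, :, 1] = t[:, None]         # green ramps up along the rows
spectrum[:, :, 2] = 1.0 - t[None, :]   # blue ramps down along the columns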
Related
I have a signal and want to reconstruct it from its spectrum as a sum of sines and/or cosines. I am aware of the inverse FFT but I want to reconstruct the signal in this way.
An example would look like this:
sig = np.array([1, 5, -3, 0.7, 3.1, -5, -0.5, 3.2, -2.3, -1.1, 3, 0.3, -2.05, 2.1, 3.05, -2.3])
fft = np.fft.rfft(sig)
mag = np.abs(fft) * 2 / sig.size
phase = np.angle(fft)
x = np.arange(sig.size)
reconstructed = list()
for x_i in x:
    val = 0
    for i, (m, p) in enumerate(zip(mag, phase)):
        val += ...  # what's the correct form?
    reconstructed.append(val)
What's the correct code to write in the next-to-last line?
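For reference, here is a minimal sketch of the standard real-DFT cosine synthesis. The key assumption is that, with the 2/N magnitude scaling above, the DC bin and (for even N) the Nyquist bin must not be doubled:

import numpy as np

sig = np.array([1, 5, -3, 0.7, 3.1, -5, -0.5, 3.2, -2.3, -1.1, 3, 0.3, -2.05, 2.1, 3.05, -2.3])
fft = np.fft.rfft(sig)
mag = np.abs(fft) * 2 / sig.size
phase = np.angle(fft)

N = sig.size
reconstructed = []
for n in range(N):
    val = 0.0
    for k, (m, p) in enumerate(zip(mag, phase)):
        # With the 2/N scaling, the DC bin and (for even N) the Nyquist bin
        # appear only once in the real spectrum, so halve them here.
        if k == 0 or (N % 2 == 0 and k == N // 2):
            m = m / 2
        val += m * np.cos(2 * np.pi * k * n / N + p)
    reconstructed.append(val)

print(np.allclose(reconstructed, sig))  # True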
I have points lying on a plane in 3D space that I have converted to a 2D projection using the following method:
import numpy as np
# Calculate axes for 2D projection
# Create random vector to cross
rv = np.add(self.plane.normal, [-1.0, 0.0, 1.0])
rv = np.divide(rv, np.linalg.norm(rv))
horizontal = np.cross(self.plane.normal, rv)
vertical = np.cross(self.plane.normal, horizontal)
diff2 = np.zeros((len(point23D), 3), dtype=np.float32)
diff2[:, 0] = np.subtract(point23D[:, 0], self.plane.origin[0])
diff2[:, 1] = np.subtract(point23D[:, 1], self.plane.origin[1])
diff2[:, 2] = np.subtract(point23D[:, 2], self.plane.origin[2])
x2 = np.add(np.add(np.multiply(diff2[:, 0], horizontal[0]), np.multiply(diff2[:, 1], horizontal[1])), np.multiply(diff2[:, 2], horizontal[2]))
y2 = np.add(np.add(np.multiply(diff2[:, 0], vertical[0]), np.multiply(diff2[:, 1], vertical[1])), np.multiply(diff2[:, 2], vertical[2]))
twodpoints2 = np.zeros((len(point23D), 3), dtype=np.float32)
twodpoints2[:, 0] = x2
twodpoints2[:, 1] = y2
I then do some calculations on these points in 2D space. After that I need to get the points back in 3D space on the same relative position. I have written the following code for that:
# Transform back to 3D
rotation_matrix = np.array([[horizontal[0], vertical[0], -self.plane.normal[0]],
[horizontal[1], vertical[1], -self.plane.normal[1]],
[horizontal[2], vertical[2], -self.plane.normal[2]]])
transformed_vertices = np.matmul(twodpoints, rotation_matrix)
transformed_vertices = np.add(transformed_vertices, self.plane.origin)
But this doesn't seem to do the projection correctly: the points projected back into 3D do not lie on the original 3D plane at all. Does anyone know why this is wrong, or does anyone have a suggestion that would work better?
In this example I just projected the same points back into 3D to see if it works correctly, which it doesn't. In reality I'll have different points that need to be projected back, but they still need to be in the same plane in 3D space.
# You have a plane perpendicular to a vector
# N = np.array([x_N, y_N, z_N])
# and passing through a point
# Q = np.array([x_Q, y_Q, z_Q])
U = np.zeros((3,3))
U[2,:] = N / np.linalg.norm(N)
e = np.array([0,0,0])
e[np.argmin(np.abs(U[2,:]))] = 1  # unit axis least aligned with the normal
U[0, :] = np.cross(e, U[2,:])
U[0, :] = U[0, :] / np.linalg.norm(U[0, :])
U[1, :] = np.cross(U[2, :], U[0, :])
# Rows of U form an orthonormal basis (U[2] is the unit normal), so U.T maps
# offsets from Q into that basis; the first two components are the 2D coords.
point2D = (point23D - Q).dot(U.T)
result_point2D = some_calcs(point2D)
# Undo the change of basis and translate back onto the plane
result_point23D = result_point2D.dot(U) + Q
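As a quick self-contained sanity check of the snippet above (with a hypothetical plane and points), the round trip lands back exactly on the plane:

import numpy as np

N = np.array([1.0, 2.0, 2.0])      # hypothetical plane normal
Q = np.array([0.5, -1.0, 3.0])     # hypothetical point on the plane

U = np.zeros((3, 3))
U[2, :] = N / np.linalg.norm(N)
e = np.zeros(3)
e[np.argmin(np.abs(U[2, :]))] = 1.0
U[0, :] = np.cross(e, U[2, :])
U[0, :] /= np.linalg.norm(U[0, :])
U[1, :] = np.cross(U[2, :], U[0, :])

coeffs = np.array([[1.5, -2.0], [0.3, 0.7]])   # in-plane coordinates
points3D = Q + coeffs.dot(U[:2, :])            # points guaranteed to lie on the plane

points2D = (points3D - Q).dot(U.T)             # third column is ~0 for in-plane points
back3D = points2D.dot(U) + Q
print(np.allclose(back3D, points3D))           # True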
I have 3 images, each with a mean filter applied:
I0, being just the noise image, taken with the cap on.
I20, an image showing only a 20% reflectance target.
I90, an image showing only a 90% reflectance target.
Rather than looping over each pixel i, calling numpy.polyfit (https://docs.scipy.org/doc/numpy/reference/generated/numpy.polyfit.html) with X = [I0(i), I20(i), I90(i)] and Y = [0, 0.2, 0.9], and collecting the fit parameters pixel by pixel, is there a way to feed an X(i,3) and Y(i,3) into polyfit or something similar to get the same result but faster?
Thanks
Ben
If your goal is to vectorize polyfit then yes, this can be done but requires rewriting np.polyfit manually. Fortunately, it can be built on top of np.linalg.lstsq and the polynomial design matrix provided by np.vander. All in all, the routine looks like the following:
import numpy as np
def fit_many(x, y, order=2):
    '''
    arguments:
        x: [N]
        y: [N x S]
    where:
        N - # of measurements per pixel
        S - # pixels
    returns [`order` x S]
    '''
    A = np.vander(x, N=order)
    return np.linalg.lstsq(A, y, rcond=None)[0]
It can be used like below:
# measurement x values. I suppose those are your reflectances?
x = np.array([0, 1, 2])
y = np.array([  # a row per pixel
    [-1, 0.2, 0.9],
    [-.9, 0.1, 1.2],
]).T
params = fit_many(x, y)
import matplotlib.pyplot as plt
poly1 = np.poly1d(params[:, 0])
poly2 = np.poly1d(params[:, 1])
plt.plot(x, y[:, 0], 'bo')
plt.plot(x, poly1(x), 'b-')
plt.plot(x, y[:, 1], 'ro')
plt.plot(x, poly2(x), 'r-')
plt.show()
Keep in mind np.linalg.lstsq doesn't allow for dimensions higher than two, so you will have to reshape your 2D images into flattened versions, fit, and convert back.
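For example, a sketch of that reshape round trip, assuming hypothetical (H, W) images I0, I20, I90 and following the convention above that x holds the shared reflectance values:

import numpy as np

H, W = 4, 5                                    # hypothetical image size
I0, I20, I90 = (np.random.rand(H, W) for _ in range(3))

x = np.array([0.0, 0.2, 0.9])                  # shared reflectance values
y = np.stack([I0, I20, I90]).reshape(3, -1)    # [3 x H*W]: one column per pixel
params = fit_many(x, y)                        # [`order` x H*W]
params_img = params.reshape(-1, H, W)          # per-pixel coefficient maps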
I have a numpy array that is very large (1 million integers). I'm using np.convolve in order to find the "densest" area of that array. By "densest" area I mean the fixed-length window with the highest sum. Let me show you in code:
import numpy as np
example = np.array([0,0,0,1,1,1,1,1,1,1,0,1,1,1,1,0,0,0,1,0,0,1,1,0,1,0,0,0,0,0,1,0])
window_size = 10
density = np.convolve(example, np.ones([window_size]), mode='valid')
print(density)
# [7.0, 7.0, 8.0, 9.0, 9.0, 9.0, 8.0, 7.0, 6.0, 6.0, 5.0, 5.0, 5.0, 5.0, 4.0, 4.0, 4.0, 4.0, 4.0, 3.0, 3.0, 4.0, 3.0]
I can then use np.argmax(density) to get the starting index of the densest area, which is 3 here.
Anyway, with this example it runs fast, but when convolving over a million-element array with a window size of 10,000 it takes 2 seconds to complete. If I choose a window_size of 500,000 it takes 3 minutes to complete.
Is there a better way to sum over the array with a certain window size to speed this up? If I converted this into a pandas series instead could I perhaps use something there?
Thanks for your help!
Try using scipy.signal.convolve. It has the option to compute the convolution using the fast Fourier transform (FFT), which should be much faster for the array sizes that you mentioned.
Using an array example with length 1000000 and convolving it with an array of length 10000, np.convolve took about 1.45 seconds on my computer, and scipy.signal.convolve took 22.7 milliseconds.
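For example, a small sketch using the method='fft' option (scipy.signal.fftconvolve works just as well):

import numpy as np
from scipy.signal import convolve

example = np.random.randint(0, 2, 1_000_000)
window_size = 10_000
density = convolve(example, np.ones(window_size), mode='valid', method='fft')
print(np.argmax(density))   # start index of the densest window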
cumsum = np.cumsum(np.insert(example, 0, 0))
density2 = cumsum[window_size:]-cumsum[:-window_size]
np.all(density2 == density)  # True
(remove the insertion if you can live without the first value...)
This is how you can use the built-in NumPy real FFT functions to convolve in 1 dimension:
import numpy, numpy.fft.fftpack_lite
def fftpack_lite_rfftb(buf, s):
    n = len(buf)
    m = (n - 1) * 2
    temp = numpy.empty(m, buf.dtype)
    numpy.divide(buf, m, temp[:n])
    temp[n:m] = 0
    return numpy.fft.fftpack_lite.rfftb(temp[:m], s)
def fftconvolve(x, y):
    xn = x.shape[-1]
    yn = y.shape[-1]
    cn = xn + yn - (xn + yn > 0)
    m = 1 << cn.bit_length()
    s = numpy.fft.fftpack_lite.rffti(m)  # Initialization; can be factored out for performance
    xpad = numpy.pad(x, [(0, 0)] * (len(x.shape) - 1) + [(0, m - xn)], 'constant')
    a = numpy.fft.fftpack_lite.rfftf(xpad, s)  # Forward transform
    ypad = numpy.pad(y, [(0, 0)] * (len(y.shape) - 1) + [(0, m - yn)], 'constant')
    b = numpy.fft.fftpack_lite.rfftf(ypad, s)  # Forward transform
    numpy.multiply(a, b, b)  # Spectral multiplication
    c = fftpack_lite_rfftb(b, s)  # Backward transform
    return c[:cn]
# Verify convolution is correct
assert (lambda a, b: numpy.allclose(fftconvolve(a, b), numpy.convolve(a, b)))(numpy.random.randn(numpy.random.randint(1, 32)), numpy.random.randn(numpy.random.randint(1, 32)))
Bear in mind that this padding is inefficient for convolution of vectors with significantly different sizes (> 100%); you'll want to use a linear combination technique like overlap-add to do smaller convolutions instead.
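If SciPy is available, it ships an overlap-add routine that can serve as a reference point (a sketch, not part of the code above; oaconvolve exists in SciPy 1.4+):

import numpy as np
from scipy.signal import oaconvolve

a = np.random.randn(1_000_000)
kernel = np.ones(500)                    # much shorter than a
c = oaconvolve(a, kernel, mode='full')   # overlap-add: many small FFTs instead of one huge one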
I'm trying to create a script for mirroring transforms across the yz plane in Maya.
I was able to set up a node network that gets the desired results. I took a node at the origin with sz set to -1 and a source node from the left side (lf_grp for this test), and fed their worldMatrix attrs into a multMatrix node. Then I passed the output (multMatrix.matrixSum) through a decompose matrix and into my destination node.
I'd really prefer to not create a bunch of nodes to do my mirroring - running a create/connect/disconnect/delete cycle every time is slow and painful... I'd rather just "math the crap out of it" through my script, but I can't seem to figure out how to actually multiply my two matrices...
Oh, I'm using the MTransformationMatrix since it handles a few things for you that the MMatrix does not - like rotation order (at least from what I've read...)
Thank you for any help you can give!
import maya.cmds as mc
import maya.OpenMaya as om
src_xfm = 'lf_grp'
mir_matrix_vals = [-1.0, -0.0, -0.0, 0.0,
0.0, 1.0, 0.0, 0.0,
0.0, 0.0, 1.0, 0.0,
0.0, 0.0, 0.0, 1.0]
# get src xfm matrix
#
selList = om.MSelectionList()
selList.add(src_xfm)
mDagPath = om.MDagPath()
selList.getDagPath(0, mDagPath)
src_xfmFn = om.MFnTransform(mDagPath)
src_matrix = src_xfmFn.transformation()
# construct mir xfm matrix
#
mir_matrix = om.MTransformationMatrix()
tmp_matrix = om.MMatrix()
om.MScriptUtil().createMatrixFromList(mir_matrix_vals, tmp_matrix)
mir_matrix = om.MTransformationMatrix(tmp_matrix)
# multiply matrices to get mirrored matrix
#
dst_matrix = src_matrix * mir_matrix # HOW DO YOU DO THIS????
Here's how to do it using the OpenMaya API version 2.
Nowadays this is the preferred method for doing Python api work - among other things it's a lot less wordy and avoids MScriptUtil, which is prone to crashiness if used incorrectly. It's also faster for most things.
This is the plain matrix multiplication:
from maya.api.OpenMaya import MMatrix
mat1 = MMatrix ([0.707107, 0, -0.707107, 0, 0.5, 0.707107, 0.5, 0, 0.5, -0.707107, 0.5, 0, 0, 0, 0, 1])
mat2 = MMatrix([1, 0, 0, 0, 0, 1, 0, 0, 0, 0, 1, 0, 100, 200, 300, 1])
print(mat1 * mat2)
# (((0.707107, 0, -0.707107, 0), (0.5, 0.707107, 0.5, 0), (0.5, -0.707107, 0.5, 0), (100, 200, 300, 1)))
You can't directly multiply an MTransformationMatrix -- that class isn't a linear algebra matrix, it's an accessor for the various position, rotation, scale, shear and pivot data functions of a matrix. You use it if you want get around doing all of the concatenating math yourself on a transform node, like setting its rotation without changing its scale.
You can get the underlying matrix from an MTransformationMatrix with its asMatrix() function. To apply a matrix to an object:
from maya.api.OpenMaya import MTransformationMatrix, MGlobal, MSelectionList, MFnTransform
sel = MGlobal.getActiveSelectionList() # selection
dagpath = sel.getDagPath(0) # first node's dag path
transform_node = MFnTransform(dagpath) # MFnTransform
xfm = transform_node.transformation().asMatrix() # matrix
new_matrix = mat1 * xfm # math
new_trans = MTransformationMatrix(new_matrix)
transform_node.setTransformation(new_trans)
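Putting that together with the original yz-mirroring goal, here is a minimal sketch in api 2.0. It assumes the source and destination transforms are selected in that order and live in the same parent space, so their local matrices can stand in for the worldMatrix-based node network:

from maya.api.OpenMaya import MMatrix, MTransformationMatrix, MGlobal, MFnTransform

# scale (-1, 1, 1): mirror across the yz plane
mir_matrix = MMatrix([-1, 0, 0, 0,
                       0, 1, 0, 0,
                       0, 0, 1, 0,
                       0, 0, 0, 1])

sel = MGlobal.getActiveSelectionList()
src = MFnTransform(sel.getDagPath(0))   # source transform
dst = MFnTransform(sel.getDagPath(1))   # destination transform

mirrored = src.transformation().asMatrix() * mir_matrix
dst.setTransformation(MTransformationMatrix(mirrored))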