The scipy.fftpack.rfft function returns the DFT as a vector of floats, alternating between the real and imaginary parts. This means that to multiply two DFTs together (for convolution) I will have to do the complex multiplication "manually", which seems quite tricky. This must be something people do often - I presume/hope there is a simple trick to do this efficiently that I haven't spotted?
Basically I want to fix this code so that both methods give the same answer:
import numpy as np
import scipy.fftpack as sfft
X = np.random.normal(size = 2000)
Y = np.random.normal(size = 2000)
NZ = np.fft.irfft(np.fft.rfft(Y) * np.fft.rfft(X))
SZ = sfft.irfft(sfft.rfft(Y) * sfft.rfft(X)) # This multiplication is wrong
NZ
array([-43.23961083,  53.62608086,  17.92013729, ..., -16.57605207,
         8.19605764,   5.23929023])
SZ
array([-19.90115323,  16.98680347,  -8.16608202, ..., -47.01643274,
        -3.50572376,  58.1961597 ])
N.B. I am aware that fftpack contains a convolve function, but I only need to fft one half of the transform - my filter can be fft'd once in advance and then used over and over again.
You don't have to flip back to np.float64 and hstack. You can create an empty destination array, the same shape as sfft.rfft(Y) and sfft.rfft(X), then create a np.complex128 view of it and fill this view with the result of the multiplication. This will automatically fill the destination array as wanted.
To reuse your example:
import numpy as np
import scipy.fftpack as sfft
X = np.random.normal(size = 2000)
Y = np.random.normal(size = 2000)
Xf = sfft.rfft(X)
Xf_cpx = Xf[1:-1].view(np.complex128)
Yf = sfft.rfft(Y)
Yf_cpx = Yf[1:-1].view(np.complex128)
Zf = np.empty(X.shape)
Zf_cpx = Zf[1:-1].view(np.complex128)
Zf[0] = Xf[0]*Yf[0]
# the [...] is important to use the view as a reference to Zf and not overwrite it
Zf_cpx[...] = Xf_cpx * Yf_cpx
Zf[-1] = Xf[-1]*Yf[-1]
Z = sfft.irfft(Zf)
and that's it!
You can use a simple if statement if you want your code to be more general and handle odd lengths as explained in Jaime's answer.
Here is a function that does what you want:
def rfft_mult(a, b):
    """Multiplies two outputs of scipy.fftpack.rfft"""
    assert a.shape == b.shape
    c = np.empty(a.shape)
    c[..., 0] = a[..., 0] * b[..., 0]
    # Reshape to 2-D to comply with the rfft support of multi-dimensional arrays
    ar = a.reshape(-1, a.shape[-1])
    br = b.reshape(-1, b.shape[-1])
    cr = c.reshape(-1, c.shape[-1])
    # Note that we cannot use ellipses to achieve that because of
    # the way `view` works. If there are many dimensions, one should
    # consider manually performing the complex multiplication with slices.
    if c.shape[-1] & 0x1:  # odd length: no separate Nyquist entry
        for i in range(len(ar)):
            ac = ar[i, 1:].view(np.complex128)
            bc = br[i, 1:].view(np.complex128)
            cc = cr[i, 1:].view(np.complex128)
            cc[...] = ac * bc
    else:  # even length: the last entry is the real Nyquist term
        for i in range(len(ar)):
            ac = ar[i, 1:-1].view(np.complex128)
            bc = br[i, 1:-1].view(np.complex128)
            cc = cr[i, 1:-1].view(np.complex128)
            cc[...] = ac * bc
        c[..., -1] = a[..., -1] * b[..., -1]
    return c
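As a quick check (a sketch that assumes rfft_mult is defined as above), this fixes the broken multiplication from the question so that both methods agree:
import numpy as np
import scipy.fftpack as sfft

X = np.random.normal(size=2000)
Y = np.random.normal(size=2000)

# Multiply in fftpack's packed real format, then invert
SZ = sfft.irfft(rfft_mult(sfft.rfft(Y), sfft.rfft(X)))

# Reference result via numpy's complex-valued rfft
NZ = np.fft.irfft(np.fft.rfft(Y) * np.fft.rfft(X))

print(np.allclose(NZ, SZ))  # should print True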
You can take a view of a slice of your return array, e.g.:
>>> scipy.fftpack.fft(np.arange(8))
array([ 28.+0.j        ,  -4.+9.65685425j,  -4.+4.j        ,
        -4.+1.65685425j,  -4.+0.j        ,  -4.-1.65685425j,
        -4.-4.j        ,  -4.-9.65685425j])
>>> a = scipy.fftpack.rfft(np.arange(8))
>>> a
array([ 28.        ,  -4.        ,   9.65685425,  -4.        ,
         4.        ,  -4.        ,   1.65685425,  -4.        ])
>>> a.dtype
dtype('float64')
>>> a[1:-1].view(np.complex128)   # First and last entries are real
array([-4.+9.65685425j, -4.+4.j        , -4.+1.65685425j])
You will need to handle even or odd sized FFTs differently:
>>> scipy.fftpack.fft(np.arange(7))
array([ 21.0+0.j        ,  -3.5+7.26782489j,  -3.5+2.79115686j,
        -3.5+0.79885216j,  -3.5-0.79885216j,  -3.5-2.79115686j,
        -3.5-7.26782489j])
>>> a = scipy.fftpack.rfft(np.arange(7))
>>> a
array([ 21.        ,  -3.5       ,   7.26782489,  -3.5       ,
         2.79115686,  -3.5       ,   0.79885216])
>>> a.dtype
dtype('float64')
>>> a[1:].view(np.complex128)
array([-3.5+7.26782489j, -3.5+2.79115686j, -3.5+0.79885216j])
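Putting the two cases together, a minimal 1-D sketch of the packed multiplication could look like this (packed_mult is my name for it, not a scipy function):
import numpy as np
import scipy.fftpack as sfft

def packed_mult(a, b):
    """Multiply two 1-D scipy.fftpack.rfft outputs as complex spectra."""
    c = np.empty_like(a)
    c[0] = a[0] * b[0]  # the DC term is purely real
    if len(a) % 2:  # odd length: no separate Nyquist entry
        c[1:].view(np.complex128)[:] = a[1:].view(np.complex128) * b[1:].view(np.complex128)
    else:  # even length: the last entry is the real Nyquist term
        c[1:-1].view(np.complex128)[:] = a[1:-1].view(np.complex128) * b[1:-1].view(np.complex128)
        c[-1] = a[-1] * b[-1]
    return c

Z = sfft.irfft(packed_mult(sfft.rfft(np.arange(8.0)), sfft.rfft(np.arange(8.0))))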
Related
Good day to everyone! I'm currently converting a MATLAB project to Python 2.7. I am trying to convert the line
h = [ im(:,2:cols) zeros(rows,1) ] - [ zeros(rows,1) im(:,1:cols-1) ];
When I try to convert it:
h = np.concatenate((im[1, range(2, cols)], np.zeros((rows, 1)))) - \
    np.concatenate((np.zeros((rows, 1)), im[1, range(2, cols - 1)]))
IDLE returns different errors like
ValueError: all the input arrays must have same number of dimensions
I'm very new to Python and I would appreciate it if you would suggest other methods. Thank you so much! Here's the function I am trying to convert.
function [gradient, or] = canny(im, sigma, scaling, vert, horz)
xscaling = vert; yscaling = horz;
hsize = [6*sigma+1, 6*sigma+1]; % The filter size.
gaussian = fspecial('gaussian',hsize,sigma);
im = filter2(gaussian,im); % Smoothed image.
im = imresize(im, scaling, 'AntiAliasing',false);
[rows, cols] = size(im);
h = [ im(:,2:cols) zeros(rows,1) ] - [ zeros(rows,1) im(:,1:cols-1) ];
I would also like to ask about the equivalent of the ':' operator, which in MATLAB is used mainly for indices and arrays. Is there an equivalent for the : operator in Python?
The Python converted code I started:
import cv2
import numpy as np

def canny(im=None, sigma=None, scaling=None, vert=None, horz=None):
    xscaling = vert
    yscaling = horz
    hsize = (6 * sigma + 1), (6 * sigma + 1)  # The filter size.
    gaussian = gauss2D(hsize, sigma)  # gauss2D and filter2 are my own helpers
    im = filter2(gaussian, im)  # Smoothed image.
    print("This is im")
    print(im)
    print("This is hsize")
    print(hsize)
    print("This is scaling")
    print(scaling)
    #scaling = 0.4
    #scaling = tuple(scaling)
    im = cv2.resize(im, None, fx=scaling, fy=scaling)
    [rows, cols] = np.shape(im)
Say your data is in a list of lists. Try this:
a = [[2, 9, 4], [7, 5, 3], [6, 1, 8]]
im = np.array(a, dtype=float)
rows = 3
cols = 3
h = (np.hstack([im[:, 1:cols], np.zeros((rows, 1))])
- np.hstack([np.zeros((rows, 1)), im[:, :cols-1]]))
The equivalent of MATLAB's horzcat (that is, [A B]) is np.hstack and the equivalent of vertcat ([A; B]) is np.vstack.
Array indexing in numpy is very close to MATLAB, except that indexes start at 0 in numpy, and the range p:q means "p to q-1".
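For example, with a small made-up array (just to show the off-by-one shift):
import numpy as np
a = np.array([10, 20, 30, 40, 50])
a[0]    # MATLAB a(1)     -> 10
a[1:4]  # MATLAB a(2:4)   -> array([20, 30, 40]); the stop index is exclusive
a[1:]   # MATLAB a(2:end) -> array([20, 30, 40, 50])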
Also, the storage order of arrays is row-major by default, and you can use column-major order if you want (see this). In MATLAB, arrays are stored in column-major order. To check in Python, type for instance np.isfortran(im). If it returns true, the array has the same order as MATLAB (Fortran order), otherwise it's row-major (C order). It's important when you want to optimize loops, or when you pass an array to a C or Fortran routine.
Ideally, try to put everything in an np.array as soon as possible, and don't use lists (they take much more space and processing is much slower). There are also some quirks: for instance, 1.0 / 0.0 throws an exception, but np.float64(1.0) / np.float64(0.0) returns inf, like in MATLAB.
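A quick illustration of both points (a sketch; the exact warning behaviour may vary between NumPy versions):
import numpy as np
a = np.zeros((3, 4))                       # C (row-major) order by default
print(np.isfortran(a))                     # False
print(np.isfortran(np.asfortranarray(a)))  # True: MATLAB-like column-major order
print(np.float64(1.0) / np.float64(0.0))   # inf (with a RuntimeWarning), as in MATLAB
# the plain Python expression 1.0 / 0.0 raises ZeroDivisionError instead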
Another example from the comments:
d1 = [ im(2:rows,2:cols) zeros(rows-1,1); zeros(1,cols) ] - ...
[ zeros(1,cols); zeros(rows-1,1) im(1:rows-1,1:cols-1) ];
d2 = [ zeros(1,cols); im(1:rows-1,2:cols) zeros(rows-1,1); ] - ...
[ zeros(rows-1,1) im(2:rows,1:cols-1); zeros(1,cols) ];
For this one, rather than np.vstack and np.hstack, you can use np.block.
im = np.ones((10, 15))
rows, cols = im.shape
d1 = (np.block([[im[1:rows, 1:cols], np.zeros((rows-1, 1))],
[np.zeros((1, cols))]]) -
np.block([[np.zeros((1, cols))],
[np.zeros((rows-1, 1)), im[:rows-1, :cols-1]]]))
d2 = (np.block([[np.zeros((1, cols))],
[im[:rows-1, 1:cols], np.zeros((rows-1, 1))]]) -
np.block([[np.zeros((rows-1, 1)), im[1:rows, :cols-1]],
[np.zeros((1, cols))]]))
With np.zeros((Nrows,1)) you are generating a 2-D array containing Nrows 1-D arrays of 1 element each. Then, with im[1,2:cols], you are getting a 1-D array of cols-2 elements. You should replace np.zeros((rows,1)) with np.zeros(rows).
Moreover, in the second np.concatenate, the subarray you take from im should have the same number of elements as in the first concatenate. Note that you are taking one element less: range(2,cols) vs. range(2,cols-1).
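To illustrate the fix on a small made-up array (a sketch of the dimensionality point, for a single row of im):
import numpy as np
im = np.array([[2., 9., 4.], [7., 5., 3.], [6., 1., 8.]])
rows, cols = im.shape
# im[1, 1:cols] is 1-D, so the padding zeros must be 1-D as well,
# and both concatenations must produce the same length:
h1 = (np.concatenate((im[1, 1:cols], np.zeros(1)))
      - np.concatenate((np.zeros(1), im[1, :cols-1])))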
So I want to implement a matrix standardisation method.
To do that, I've been told to
subtract the mean and divide by the standard deviation for each dimension
And to verify:
after this processing, each dimension has zero mean and unit variance.
That sounds simple enough ...
import numpy as np
def standardize(X: np.ndarray, inplace=True, verbose=False, check=False):
    ret = X
    if not inplace:
        ret = X.copy()
    ndim = np.ndim(X)
    for d in range(ndim):
        m = np.mean(ret, axis=d)
        s = np.std(ret, axis=d)
        if verbose:
            print(f"m{d} =", m)
            print(f"s{d} =", s)
        # TODO: handle zero s
        # TODO: subtract m along the correct axis
        # TODO: divide by s along the correct axis
    if check:
        means = [np.mean(X, axis=d) for d in range(ndim)]
        stds = [np.std(X, axis=d) for d in range(ndim)]
        if verbose:
            print("means=\n", means)
            print("stds=\n", stds)
        assert all(all(m < 1e-15 for m in mm) for mm in means)
        assert all(all(s == 1.0 for s in ss) for ss in stds)
    return ret
e.g. for ndim == 2, we could get something like
A =
[[ 0.40923704  0.91397416  0.62257397]
 [ 0.15614258  0.56720836  0.80624135]]
m0 = [ 0.28268981  0.74059126  0.71440766]  # can broadcast with ret -= m0
s0 = [ 0.12654723  0.1733829   0.09183369]  # can broadcast with ret /= s0
m1 = [ 0.33333333 -0.33333333]  # ???
s1 = [ 0.94280904  0.94280904]  # ???
How do I do that?
Judging by Broadcast an operation along specific axis in python, I thought I might be looking for a way to create
m[None, None, None, .., None, : , None, None, .., None]
Where there is exactly one : at index d.
But even if I knew how to do that, I'm not sure it'd work.
You can swap your axes such that the first axis is the one you want to normalize. This also works in place, since np.swapaxes just returns a view of your data.
Using the numpy command swapaxes:
for d in range(ndim):
    ret = np.swapaxes(ret, 0, d)
    # Compute the statistics along the axis that is now first,
    # so that they broadcast against the remaining axes
    m = np.mean(ret, axis=0)
    s = np.std(ret, axis=0)
    ret -= m
    ret /= s
    ret = np.swapaxes(ret, 0, d)
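Alternatively (a sketch, not part of the original answer, assuming ret and ndim as in the snippet above), the keepdims argument of np.mean and np.std avoids the axis swapping entirely, because the reduced axis is kept with length 1 and broadcasts directly:
for d in range(ndim):
    m = np.mean(ret, axis=d, keepdims=True)  # shape has a 1 at axis d
    s = np.std(ret, axis=d, keepdims=True)
    ret -= m  # broadcasts along axis d
    ret /= s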
I have the following array:
array([[ 0.01454911+0.j, 0.01392502+0.00095922j,
0.00343284+0.00036535j, 0.00094982+0.0019255j ,
0.00204887+0.0039264j , 0.00112154+0.00133549j, 0.00060697+0.j],
[ 0.02179418+0.j, 0.01010125-0.00062646j,
0.00086327+0.00495717j, 0.00204473-0.00584213j,
0.00159394-0.00678094j, 0.00121372-0.0043044j , 0.00040639+0.j]])
I need a way to replace just the imaginary components with random values generated by:
numpy.random.vonmises(mu, kappa, size=size)
The resulting array needs to be in the same form as the first one.
Loop over the numbers and just set them to a value you like. The parameters mu, kappa and size for the numpy.random.vonmises function need to be defined, since they are undefined in the example below.
import numpy as np
data = np.array([[ 0.01454911+0.j, 0.01392502+0.00095922j,
0.00343284+0.00036535j, 0.00094982+0.0019255j ,
0.00204887+0.0039264j , 0.00112154+0.00133549j, 0.00060697+0.j],
[ 0.02179418+0.j, 0.01010125-0.00062646j,
0.00086327+0.00495717j, 0.00204473-0.00584213j,
0.00159394-0.00678094j, 0.00121372-0.0043044j , 0.00040639+0.j]])
def setRandomImag(c):
    c.imag = np.random.vonmises(mu, kappa, size=size)
    return c

data = [setRandomImag(i) for i in data]
n_epochs = 2
n_freqs = 7
# Shape parameters for the array
data2 = np.zeros((n_epochs, n_freqs), dtype=complex)
for i in range(0, n_epochs):
    data2[i] = np.real(data[i]) + np.random.vonmises(mu, kappa) * complex(0, 1)
This gives every frequency within an epoch the same imaginary value. Not exactly what I was asking for, but it solves my problem.
Try using this approach:
Store your numbers in a 2-D array: real part and imaginary part.
Then replace the imaginary part with the randomly chosen numbers.
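For instance (a minimal sketch; mu and kappa here are assumed example values), complex NumPy arrays expose a writable imag attribute, so the replacement needs no explicit loop:
import numpy as np

mu, kappa = 0.0, 2.0  # assumed example parameters
data = np.array([[0.01454911 + 0.j, 0.01392502 + 0.00095922j],
                 [0.02179418 + 0.j, 0.01010125 - 0.00062646j]])

# One von Mises draw per element, written straight into the imaginary parts:
data.imag = np.random.vonmises(mu, kappa, size=data.shape)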
With numpy or scipy, is there any existing method that will return the endpoints of an interval which contains a specified percent of the values in a 1D array? I realize that this is simple to write myself, but it seems like the kind of thing that might be built in, although I can't find it.
E.g:
>>> import numpy as np
>>> x = np.random.randn(100000)
>>> print(np.bounding_interval(x, 0.68))
Would give approximately (-1, 1)
You can use np.percentile:
In [29]: x = np.random.randn(100000)
In [30]: p = 0.68
In [31]: lo = 50*(1 - p)
In [32]: hi = 50*(1 + p)
In [33]: np.percentile(x, [lo, hi])
Out[33]: array([-0.99206523, 1.0006089 ])
There is also scipy.stats.scoreatpercentile:
In [34]: scoreatpercentile(x, [lo, hi])
Out[34]: array([-0.99206523, 1.0006089 ])
I don't know of a built-in function to do it, but you can write one using the math package to specify approximate indices like this:
from __future__ import division
import math
import numpy as np

def bound_interval(arr_in, interval):
    lhs = (1 - interval) / 2  # fraction to exclude on the left-hand side
    rhs = 1 - lhs             # and on the right-hand side
    arr_sorted = np.sort(arr_in)  # renamed to avoid shadowing the builtin `sorted`
    lower = arr_sorted[int(math.floor(lhs * len(arr_in)))]  # floor to get an index
    upper = arr_sorted[int(math.floor(rhs * len(arr_in)))]
    return (lower, upper)
On your specified array, I got the interval (-0.99072237819851039, 0.98691691784955549). Pretty close to (-1, 1)!
On the numpy page they give the example of
s = np.random.dirichlet((10, 5, 3), 20)
which is all fine and great; but what if you want to generate random samples from a 2D array of alphas?
alphas = np.random.randint(10, size=(20, 3))
If you try np.random.dirichlet(alphas), np.random.dirichlet([x for x in alphas]), or np.random.dirichlet((x for x in alphas)), it results in a
ValueError: object too deep for desired array. The only thing that seems to work is:
y = np.empty(alphas.shape)
for i in xrange(np.alen(alphas)):
    y[i] = np.random.dirichlet(alphas[i])
print y
...which is far from ideal for my code structure. Why is this the case, and can anyone think of a more "numpy-like" way of doing this?
Thanks in advance.
np.random.dirichlet is written to generate samples for a single Dirichlet distribution. That code is implemented in terms of the Gamma distribution, and that implementation can be used as the basis for a vectorized code to generate samples from different distributions. In the following, dirichlet_sample takes an array alphas with shape (n, k), where each row is an alpha vector for a Dirichlet distribution. It returns an array also with shape (n, k), each row being a sample of the corresponding distribution from alphas. When run as a script, it generates samples using dirichlet_sample and np.random.dirichlet to verify that they are generating the same samples (up to normal floating point differences).
import numpy as np

def dirichlet_sample(alphas):
    """
    Generate samples from an array of alpha distributions.
    """
    r = np.random.standard_gamma(alphas)
    return r / r.sum(-1, keepdims=True)

if __name__ == "__main__":
    alphas = 2 ** np.random.randint(0, 4, size=(6, 3))

    np.random.seed(1234)
    d1 = dirichlet_sample(alphas)
    print "dirichlet_sample:"
    print d1

    np.random.seed(1234)
    d2 = np.empty(alphas.shape)
    for k in range(len(alphas)):
        d2[k] = np.random.dirichlet(alphas[k])
    print "np.random.dirichlet:"
    print d2

    # Compare d1 and d2:
    err = np.abs(d1 - d2).max()
    print "max difference:", err
Sample run:
dirichlet_sample:
[[ 0.38980834 0.4043844 0.20580726]
[ 0.14076375 0.26906604 0.59017021]
[ 0.64223074 0.26099934 0.09676991]
[ 0.21880145 0.33775249 0.44344606]
[ 0.39879859 0.40984454 0.19135688]
[ 0.73976425 0.21467288 0.04556287]]
np.random.dirichlet:
[[ 0.38980834 0.4043844 0.20580726]
[ 0.14076375 0.26906604 0.59017021]
[ 0.64223074 0.26099934 0.09676991]
[ 0.21880145 0.33775249 0.44344606]
[ 0.39879859 0.40984454 0.19135688]
[ 0.73976425 0.21467288 0.04556287]]
max difference: 5.55111512313e-17
I think you're looking for
y = np.array([np.random.dirichlet(x) for x in alphas])
for your list comprehension. Otherwise you're simply passing a python list or tuple. I imagine the reason numpy.random.dirichlet does not accept your list of alpha values is because it's not set up to - it already accepts an array, which it expects to have a dimension of k, as per the documentation.