Unable to produce the same result in Python when porting from MATLAB

I am stuck again with this issue of not being able to produce the same result in Python, using NumPy, as in the MATLAB output. As far as I can tell, the ported code should do exactly the same thing in Python.
Here it is-
Matlab
product = fftshift(fft2(alphabet)) .* fftshift(fft2(gaussmap));
combine = abs(fftshift(ifft2(product)));
Python
import numpy as np

product = np.fft.fftshift(np.fft.fft2(alphabet)) * np.fft.fftshift(np.fft.fft2(gaussmap))
combine = np.abs(np.fft.fftshift(np.fft.ifft2(product)))
Here are the results of plotting combine in both versions.
Both alphabet and gaussmap are 200x200 matrices. Any insight into where I might be making a simple mistake here?
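For reference, here is a self-contained sketch of the ported pipeline with synthetic stand-ins for alphabet and gaussmap (the actual 200x200 images are not shown here). It also checks that shifting each spectrum before multiplying is the same as multiplying first and shifting once, so the Python and MATLAB lines are mathematically identical for identical inputs; any remaining difference would then point at how the two inputs are loaded or scaled.

import numpy as np

# Hypothetical stand-ins for the two 200x200 inputs, only to make the sketch runnable.
rng = np.random.default_rng(0)
alphabet = rng.random((200, 200))
gaussmap = rng.random((200, 200))

# The ported pipeline.
fa = np.fft.fftshift(np.fft.fft2(alphabet))
fg = np.fft.fftshift(np.fft.fft2(gaussmap))
product = fa * fg
combine = np.abs(np.fft.fftshift(np.fft.ifft2(product)))

# Sanity check: fftshift(A) * fftshift(B) equals fftshift(A * B), because fftshift
# is the same fixed permutation applied to both operands.
alt = np.fft.fftshift(np.fft.fft2(alphabet) * np.fft.fft2(gaussmap))
assert np.allclose(product, alt)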

Related

Signal Convolution in C++ like Python np.convolve

I am writing a numerical simulation code in which a convolution of a signal and a response function is needed (in 'full' mode). This sounds like a standard problem, and I have used np.convolve etc. in Python to great effect.
However, since I need faster computation (this convolution needs to be performed millions of times per simulation), I have started implementing it in C++, but I have struggled to find an analogue of np.convolve or scipy.signal.fftconvolve in C++ where I could just plug in two std::vector<double> arrays and get the result of the discrete convolution. The only thing remotely resembling what I need is the convolution implementation from Numerical Recipes; however, compared to the numpy results, that implementation seems to be just wrong.
So my question boils down to: where can I find a library/code that performs the convolution just like the Python implementations do? Surely there must already be an existing, fast solution.
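For reference, here is a hedged sketch, in Python since that is the behaviour being matched, of what np.convolve(a, b, mode='full') computes via the FFT. A C++ port would reproduce exactly this zero-pad, transform, multiply, inverse-transform sequence with an FFT library such as FFTW.

import numpy as np

def fft_convolve_full(a, b):
    # Full discrete convolution via the FFT; should match np.convolve(a, b, mode='full').
    n = len(a) + len(b) - 1             # length of the full convolution
    nfft = 1 << (n - 1).bit_length()    # next power of two for a fast transform
    A = np.fft.rfft(a, nfft)            # zero-padded real FFTs
    B = np.fft.rfft(b, nfft)
    return np.fft.irfft(A * B, nfft)[:n]

# Sanity check against the reference implementation.
a = np.random.rand(100)
b = np.random.rand(30)
assert np.allclose(fft_convolve_full(a, b), np.convolve(a, b, mode='full'))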

How to use Python to optimize an objective function with multiple terms (SVM Dual Problem)?

I am trying to find the solution to the following SVM dual problem using Python. The problem is formatted as a quadratic programming problem:
I have been trying to use the Python library CVXOPT, but according to its docs, and this example, it can only solve SVM problems in the form of:
Which would work fine for problems in the form of:
However, the problem I am trying to solve has two extra terms at the end (first image).
I am wondering how I can adjust the way my problem is formulated so that it can be solved with CVXOPT or any other Python optimization package.
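As a point of comparison, here is a minimal sketch of how the standard soft-margin dual (without the extra terms from the question) maps onto cvxopt.solvers.qp(P, q, G, h, A, b); K, y, and C are assumed to be the kernel matrix, the labels, and the box constraint.

import numpy as np
from cvxopt import matrix, solvers

def svm_dual_qp(K, y, C):
    # Standard soft-margin SVM dual:
    #   minimize   0.5 * a^T (y y^T * K) a - 1^T a
    #   subject to 0 <= a_i <= C  and  y^T a = 0
    n = len(y)
    P = matrix(np.outer(y, y) * K)
    q = matrix(-np.ones(n))
    G = matrix(np.vstack([-np.eye(n), np.eye(n)]))         # encodes -a_i <= 0 and a_i <= C
    h = matrix(np.hstack([np.zeros(n), C * np.ones(n)]))
    A = matrix(y.astype(float).reshape(1, n))
    b = matrix(0.0)
    sol = solvers.qp(P, q, G, h, A, b)
    return np.array(sol['x']).ravel()

Extra linear terms in the objective would fold into q and extra quadratic terms into P, provided the problem stays convex; otherwise a more general solver (e.g. cvxpy or scipy.optimize.minimize) would be needed.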

Sparse linear differential equation solving Matlab vs Python

I am currently working on a project that involves huge matrices. The size is around 500,000 by 500,000, but they are very sparse, with a density of around 0.000025, so only about 6-6.5 million non-zero elements. Here is my problem: I need to solve the linear equation Ax = B, with A being the 500k by 500k matrix and B being a vector, multiple times, with only B changing. I started working on this project from someone else's code, which was done in MATLAB, and I wanted to implement it in Python because I thought it would be faster (am I wrong in thinking that? Is MATLAB faster for solving this type of equation?). Here's the thing: when I use their script (in MATLAB), doing A\B takes about 110 seconds, which seems really fast to me. But when I try using SciPy's sparse solver on the same system, it just doesn't run as fast, or in fact at all; I always end up cancelling the script because it takes too long.
So basically, is A\B in MATLAB just that good, or am I doing something wrong? Could it be because of memory? As in, Python needs to create something in between steps, and I run out of space so it just crashes or something?
Thanks in advance!
PS: If you are thinking about suggesting LU decomposition for future iterations, I am already planning on doing this; I just want to know what is up with directly solving it using SciPy's sparse solver.
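For what it is worth, the SciPy side of a solve where only B changes would, as far as I can tell, look roughly like the sketch below: convert A to CSC, factor it once, and reuse the factorization for every right-hand side. The matrix here is a small random stand-in, not the actual 500k by 500k system. Note also that scipy.sparse.linalg.spsolve uses SuperLU by default, whereas MATLAB's sparse backslash is (as far as I know) backed by UMFPACK/CHOLMOD, so the fill-reducing ordering, and hence the speed, can differ substantially.

import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

# Small random stand-in for the real system (same structure, much smaller).
n = 2000
A = sp.random(n, n, density=0.0005, format='csc') + sp.eye(n, format='csc')
B = np.random.rand(n)

# One-off direct solve, roughly the analogue of MATLAB's A \ B.
x = spla.spsolve(A, B)

# If only B changes, factor A once and reuse the LU factors for every solve.
solve = spla.factorized(A)            # returns a function that solves A x = b
x1 = solve(B)
x2 = solve(np.random.rand(n))         # new right-hand side, no refactorization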

Difference in fmin_l_bfgs_b implemented in MATLAB and Scipy.optimize

I was following the Sparse Autoencoder tutorial in the Stanford UFLDL series (http://ufldl.stanford.edu/wiki/index.php/Exercise:Sparse_Autoencoder).
I finished the MATLAB version of the implementation and it works perfectly fine, but I noticed some discrepancies when I ported the same implementation to Python (with numpy, scipy, and Matplotlib).
I noticed that the cost function was not minimized to the same magnitude. I am aware that, because the thetas are randomly initialized, each run will give different final cost values, but after rerunning both implementations 20+ times, I always see the Python implementation end up at f_cost = 4.57e-1 while the MATLAB version gives answers around f_cost = 4.46e-1. In other words, there is a consistent difference of ~0.01.
Since these two implementations are identical in theory (same cost function, same gradient, same L-BFGS minimization), I suspect the problem is conditional on the cost function and gradient computation, so I could not reproduce it in a few lines. But you can find the full code in Python and MATLAB on GitHub (https://github.com/alanhyue/cs294a_2011-sparseAutoencoders).
Additional details
Here are a few more details that might help clarify the problem.
LBFGS and LBFGS-B
The starter code provided by the tutorial uses L-BFGS to minimize the thetas. Since I did not find an exact equivalent in SciPy, I am using scipy.optimize.fmin_l_bfgs_b. I read on Wikipedia that L-BFGS-B is a box-constrained version of L-BFGS. I suppose they should give the same result?
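To make the comparison concrete, this is the call pattern I mean, shown on a toy quadratic cost so the snippet runs on its own; in the real port the sparse-autoencoder cost/gradient function takes its place. The stopping tolerances factr, pgtol, and maxiter are the knobs whose defaults may not line up with minFunc's stopping criteria, which is one possible source of a small but consistent gap in the final cost.

import numpy as np
from scipy.optimize import fmin_l_bfgs_b

def cost_and_grad(theta):
    # Toy stand-in returning (cost, gradient); the real port returns the
    # sparse-autoencoder cost and its gradient for a flattened theta.
    return 0.5 * np.sum(theta ** 2), theta

theta0 = 0.01 * np.random.randn(100)

theta_opt, f_cost, info = fmin_l_bfgs_b(
    cost_and_grad,    # callable returning (f, g)
    theta0,
    m=20,             # L-BFGS history size; minFunc's 'corrections' default may differ
    factr=1e7,        # relative tolerance on the decrease of the objective
    pgtol=1e-5,       # tolerance on the (projected) gradient
    maxiter=400,      # iteration cap
)
print(f_cost, info['warnflag'])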
Both implementations passed numerical gradient checking
I assume this means the gradient calculation is correct.
Results look somewhat correct.
As indicated in the lecture notes, a correct implementation should get a collection of line detectors, which means each patch would look like a picture of a straight line.
Here is the result from Python (with a cost of 0.457).
Results of Python implementation
Here is the result from MATLAB (with a cost of 0.446).
Results of MATLAB implementation

Periodogram in MATLAB and Python scipy gives different results? [duplicate]

I am porting some MATLAB code to Python using SciPy and got stuck on the following line:
Matlab/Octave code
[Pxx, f] = periodogram(x, [], 512, 5)
Python code
from scipy import signal
f, Pxx = signal.periodogram(x, 5, nfft=512)
The problem is that I get different output on the same data. More specifically, the Pxx vectors are different. I tried different windows for signal.periodogram, yet no luck (and it seems that SciPy's default boxcar window is the same as MATLAB's default rectangular window). Another strange behavior is that in Python, the first element of Pxx is always 0, no matter what the input data is.
Am I missing something? Any advice would be greatly appreciated!
Simple Matlab/Octave code with actual data: http://pastebin.com/czNeyUjs
Simple Python+scipy code with actual data: http://pastebin.com/zPLGBTpn
After researching Octave's and SciPy's periodogram source code, I found that they use different algorithms to calculate the power spectral density estimate: Octave (and MATLAB) use the FFT directly, whereas SciPy's periodogram goes through the Welch code path.
As @georgesl has mentioned, the output looks quite alike, but it still differs, and for porting purposes that was critical. In the end, I simply wrote a small function to calculate the PSD estimate using the FFT, and now the output is the same. According to timeit testing, it also runs ~50% faster (1.9006 s vs. 2.9176 s on a loop with 10,000 iterations), I think because the direct FFT is faster than the Welch path in SciPy's implementation, or is simply faster in general.
Thanks to everyone who showed interest.
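For reference, a small function along those lines might look like the sketch below (my reconstruction, not the exact code from the pastebin): a one-sided, density-scaled periodogram computed directly from the FFT, with a rectangular window and no detrending. It should agree with scipy.signal.periodogram(x, fs, window='boxcar', detrend=False) up to floating-point noise.

import numpy as np

def fft_periodogram(x, fs):
    # One-sided PSD estimate straight from the FFT (rectangular window, no detrending).
    x = np.asarray(x, dtype=float)
    n = len(x)
    pxx = np.abs(np.fft.rfft(x)) ** 2 / (fs * n)
    if n % 2 == 0:
        pxx[1:-1] *= 2        # double everything except DC and Nyquist
    else:
        pxx[1:] *= 2          # no Nyquist bin for odd-length input
    f = np.fft.rfftfreq(n, d=1.0 / fs)
    return f, pxx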
I faced the same problem, but then I came across the documentation of SciPy's periodogram.
As you will see there, detrend='constant' is the default argument. This means that Python automatically subtracts the mean of the input data from each point (read here), while MATLAB/Octave do no such thing. I believe that is the reason why the outputs are different. Try specifying detrend=False when calling SciPy's periodogram and you should get the same output as MATLAB.
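In other words, something along these lines (the test signal here is made up just so the snippet runs on its own):

import numpy as np
from scipy import signal

fs = 5.0
t = np.arange(0, 100, 1 / fs)                                   # hypothetical test signal
x = np.sin(2 * np.pi * 0.7 * t) + 0.1 * np.random.randn(t.size)

# Disable the default mean removal so the call mirrors MATLAB's
# [Pxx, f] = periodogram(x, [], 512, 5)
f, Pxx = signal.periodogram(x, fs=fs, nfft=512, window='boxcar', detrend=False)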
After reading the MATLAB and SciPy documentation, another contribution to the different values could be that they use different default window functions: MATLAB uses a Hamming window, and SciPy uses a Hanning window. The two window functions are similar but not identical.
Did you look at the results?
The slight differences between the two results may come from optimizations, default windows, implementation details, etc.
