stokesCavity example for FiPy returns False - python

I have tried to run the stokesCavity example, which uses lid-driven boundary conditions for the flow. At the end of the code, the values in the top-right cell are compared with some reference values.
>>> print(numerix.allclose(pressure.globalValue[..., -1], 162.790867927)) #doctest: +NOT_PYAMGX_SOLVER
1
>>> print(numerix.allclose(xVelocity.globalValue[..., -1], 0.265072740929)) #doctest: +NOT_PYAMGX_SOLVER
1
>>> print(numerix.allclose(yVelocity.globalValue[..., -1], -0.150290488304)) #doctest: +NOT_PYAMGX_SOLVER
1
When I tried running this example, my output was
False
False
False
The actual values I get for the top-right cell are 129.235, 0.278627 and -0.166620 (instead of 162.790867927, 0.265072740929 and -0.150290488304). Does anyone know why I am getting different values? I've tried changing the solver (scipy, Trilinos and pysparse), but the results do not change up to the 12th digit. The velocity profile looks similar to the one shown in the manual, but I am still worried that something is off.
I run it on Linux (python 2.7.14, fipy 3.2, pysparse 1.2.dev0, Trilinos 12.12, scipy 1.2.1) and on Windows (python 2.7.15, fipy 3.1.3, scipy 1.1.0).

When run as part of the test suite, this example only does 5 sweeps, and the numerical check is hard-wired for that. When you run the example in isolation, it does 300 sweeps and the solution is better (or at least differently) converged. There's nothing wrong with the example, other than that it's not written in a very robust way. Thanks for asking about this; we'll try to clean up the example.
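The switch looks roughly like the following sketch (the variable names are illustrative, not the exact FiPy source): when the module is imported by the test suite it takes the short path, and when run as a script it sweeps many more times.
if __name__ == '__main__':
    sweeps = 300   # standalone run: iterate until well converged
else:
    sweeps = 5     # test-suite run: just enough for the hard-wired check

for sweep in range(sweeps):
    pass  # ...sweep the coupled pressure/velocity equations here...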

Intel Vtune cannot find python source file

This is an old problem, as demonstrated in https://community.intel.com/t5/Analyzers/Unable-to-view-source-code-when-analyzing-results/td-p/1153210. I have tried all the listed methods, none of them work, and I cannot find any more solutions on the internet. Basically, VTune cannot find the custom Python source file no matter what I try. I am using the most recent version as of writing. Please let me know whether there is a solution.
For example, consider the following program:
def myfunc(*args):
    # Do a lot of things.
    pass

if __name__ == '__main__':
    # Do something and call myfunc
    myfunc()
Call this script main.py. Now use the newest VTune version (I am using Ubuntu 18.04), run vtune-gui and do a basic hotspots analysis. You will not find any information on this file. However, a huge pile of information on Python and its other code is found (related to your Python environment). In theory, you should be able to find the source of main.py as well as the cost of each line in that script. However, that is simply not happening.
Desired behavior: I would really like to find the source file and function in the top-down view (or any view, really). Any advice is welcome.
VTune offers full support for profiling Python code, and the tool should be able to display the source code in your Python file as you expected. Could you please check whether the function you are expecting to see in the VTune results ran long enough?
Just to confirm that everything is working fine, I wrote a simple matrix multiplication function as shown below (don't worry about the accuracy of the code itself):
def matrix_mul(X, Y):
    # result has len(X) rows and len(Y[0]) columns
    result_matrix = [[0 for j in range(len(Y[0]))] for i in range(len(X))]
    # iterate through rows of X
    for i in range(len(X)):
        # iterate through columns of Y
        for j in range(len(Y[0])):
            # iterate through rows of Y
            for k in range(len(Y)):
                result_matrix[i][j] += X[i][k] * Y[k][j]
    return result_matrix
Then I called this function (matrix_mul) on my Ubuntu machine with matrices large enough that the overall execution time was on the order of a few seconds.
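A driver along these lines is enough to reproduce that setup (the matrix size here is just illustrative, not the exact one I used):
import random

if __name__ == '__main__':
    N = 400   # large enough that the run takes a few seconds
    X = [[random.random() for _ in range(N)] for _ in range(N)]
    Y = [[random.random() for _ in range(N)] for _ in range(N)]
    matrix_mul(X, Y)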
I used the below command to start profiling (you can also see the VTune version I used):
/opt/intel/oneapi/vtune/2021.1.1/bin64/vtune -collect hotspots -knob enable-stack-collection=true -data-limit=500 -ring-buffer=10 -app-working-dir /usr/bin -- python3 /home/johnypau/MyIntel/temp/Python_matrix_mul/mat_mul_method.py
Now open the VTune results in the GUI and, under the bottom-up tab, group by "Module / Function / Call-stack" (or whatever grouping you prefer).
You should be able to see the module (mat_mul_method.py in my case) and the function matrix_mul. If you double-click, VTune should be able to load the sources too.

unexpected Shapiro results python

The Shapiro p-value does not indicate normality, although the histogram and Q-Q plot seem to show normality. My concern is whether I am using the scipy Shapiro function correctly.
import matplotlib.pyplot as pty
import statsmodels.api as sm
import numpy as np
from scipy import stats
pty.hist(RankList[4])
sm.qqplot(np.array(RankList[4]), line='s')
print(stats.shapiro(RankList[4]))
pty.show()
Result from shapiro:
(0.9911481738090515, 7.637918031377922e-08)
I would expect the p value to be higher.
It looks like you have quite a bit of data. If you check the notes on the SciPy documentation page, it states:
For N > 5000 the W test statistic is accurate but the p-value may not be.
You may want to consider using the Anderson-Darling test.
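For example, a sketch using scipy.stats.anderson (reusing RankList from the question; the normal distribution is the default):
from scipy import stats

result = stats.anderson(RankList[4], dist='norm')
print(result.statistic)
# compare the statistic against the critical value at each significance level
for crit, sig in zip(result.critical_values, result.significance_level):
    print('%.1f%%: %.3f' % (sig, crit))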

Numpy floating point rounding errors

While searching for some numpy stuff, I came across a question discussing the rounding accuracy of numpy.dot():
Numpy: Difference between dot(a,b) and (a*b).sum()
Since I happen to have two (different) computers with Haswell CPUs sitting on my desk, which should provide FMA and everything, I thought I'd test the example given by Ophion in the first answer, and I got a result that somewhat surprised me:
After updating/installing/fixing lapack/blas/atlas/numpy, I get the following on both machines:
>>> a = np.ones(1000, dtype=np.float128)+1e-14
>>> (a*a).sum()
1000.0000000000199999
>>> np.dot(a,a)
1000.0000000000199948
>>> a = np.ones(1000, dtype=np.float64)+1e-14
>>> (a*a).sum()
1000.0000000000198
>>> np.dot(a,a)
1000.0000000000176
So the standard multiplication plus sum() is more precise than np.dot(). timeit, however, confirmed that the .dot() version is faster (though not by much) for both float64 and float128.
Can anyone provide an explanation for this?
Edit: I accidentally deleted the info on numpy versions: the results are the same for 1.9.0 and 1.9.3 with Python 3.4.0 and 3.4.1.
It looks like they recently added a special Pairwise Summation to ndarray.sum for improved numerical stability.
From PR 3685, this affects:
all add.reduce calls that go over float_add with IS_BINARY_REDUCE true
so this also improves mean/std/var and anything else that uses sum.
See here for code changes.
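To illustrate the idea (a sketch of pairwise summation, not NumPy's actual C implementation): the array is split recursively and the halves are summed separately, so the rounding error grows roughly like O(log n) instead of O(n) for a naive left-to-right sum.
import numpy as np

def pairwise_sum(x, block=8):
    # naive accumulation for small blocks, recursive splitting otherwise
    if len(x) <= block:
        total = 0.0
        for v in x:
            total += v
        return total
    mid = len(x) // 2
    return pairwise_sum(x[:mid], block) + pairwise_sum(x[mid:], block)

a = np.ones(1000, dtype=np.float64) + 1e-14
print(pairwise_sum(a * a), (a * a).sum())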

Using Numpy in different platforms

I have a piece of code which computes the Helmholtz-Hodge Decomposition.
I've been running it on Mac OS X Yosemite and it was working just fine. A month ago, however, my Mac got pretty slow (it was really old), and I opted to buy a new notebook (Windows 8.1, Dell).
After installing all the Python libs and so on, I continued my work running this same code (versioned in Git). And then the result was pretty weird, completely different from the one obtained on the old notebook.
For instance, what I do is construct two matrices a and b (a really long calculation) and then call the solver:
s = numpy.linalg.solve(a, b)
This was returning a wrong result (different from the one obtained on my Mac, which was right).
Then, I tried to use:
s = scipy.linalg.solve(a, b)
And the program exits with code 0, but in the middle of execution.
Then, I just made a simple test of:
print 'here1'
s = scipy.linalg.solve(a, b)
print 'here2'
And here2 is never printed.
I tried:
print 'here1'
x, info = numpy.linalg.cg(a, b)
print 'here2'
And the same happens.
I also tried to check the solution after using numpy.linalg.solve:
print numpy.allclose(numpy.dot(a, s), b)
And I got a False (?!).
I don't know what is happening or how to find a solution; I just know that the same code runs on my Mac, but it would be very good if I could run it on other platforms. Now I'm stuck on this problem (I don't have a Mac anymore) and have no clue about the cause.
The weirdest thing is that I don't receive any error or runtime warning, no feedback at all.
Thank you for any help.
EDIT:
NumPy test suite results:
SciPy test suite results:
Download the Anaconda package manager:
http://continuum.io/downloads
When you download this, it will already have all the dependencies for numpy worked out for you. It installs locally and will work on most platforms.
This is not really an answer, but this blog discusses at length the problems of having a numpy ecosystem that evolves fast, at the expense of reproducibility.
By the way, which version of numpy are you using? The documentation for the latest 1.9 does not report any method called cg like the one you use...
I suggest the use of this example so that you (and others) can check the results.
>>> import numpy as np
>>> import scipy.linalg
>>> np.random.seed(123)
>>> a = np.random.random(size=(10000, 10000))
>>> b = np.random.random(size=(10000,))
>>> s_np = np.linalg.solve(a, b)
>>> s_sc = scipy.linalg.solve(a, b)
>>> np.allclose(s_np, s_sc)
>>> s_np
array([-15.59186559, 7.08345804, 4.48174646, ..., -16.43310046,
-8.81301553, -10.77509242])
I hope you can find the answer. One option in the future is to create a containerized environment for each of your projects using Docker; this allows easy portability.
See a great article here discussing Docker for research.

Does Python/Scipy have a firls( ) replacement (i.e. a weighted, least squares, FIR filter design)?

I am porting code from Matlab to Python and am having trouble finding a replacement for the firls() routine. It is used for least-squares, linear-phase Finite Impulse Response (FIR) filter design.
I looked at scipy.signal and nothing there looked like it would do the trick. Of course, I was able to replace my remez and freqz algorithms, so that's good.
On one blog I found an algorithm that implemented this filter without weighting, but I need one with weights.
Thanks, David
The firls equivalent in python now appears to be implemented as part of the signal package:
https://docs.scipy.org/doc/scipy/reference/generated/scipy.signal.firls.html#scipy.signal.firls
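A minimal usage sketch (the band edges, gains, and weights below are purely illustrative; note that firls requires an odd number of taps):
from scipy import signal

numtaps = 73                  # must be odd
bands = [0, 0.3, 0.4, 1.0]    # band edges, normalized so that 1 is Nyquist (default fs=2)
desired = [1, 1, 0, 0]        # desired gain at each band edge
weight = [1, 10]              # one weight per band; weight the stop band more heavily
taps = signal.firls(numtaps, bands, desired, weight=weight)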
Also, I agree with everything that pev hall stated above, especially that firls is optimum in many situations (such as when the overall signal-to-noise ratio is being optimized for a given number of taps), and that you should not use the boxcar window as he stated; they are not equivalent at all! firls generally outperforms all window and frequency-sampling approaches to filter design when designing traditional FIR filters.
To my current understanding, scipy.signal and Octave only support odd-length (even-order) least-squares filters. When I need an even length (a Type II or Type IV filter), I resort to the windowed design approach, specifically with a Kaiser window. I have found the Kaiser window solution to come quite close to the optimum least-squares solution.
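That fallback looks roughly like this sketch (the attenuation, transition width, and cutoff are assumptions for illustration):
from scipy import signal

ripple_db = 60.0     # desired stop-band attenuation in dB
width = 0.05         # transition width, normalized so that 1 is Nyquist
numtaps, beta = signal.kaiserord(ripple_db, width)
if numtaps % 2:      # force an even number of taps (Type II filter)
    numtaps += 1
taps = signal.firwin(numtaps, cutoff=0.3, window=('kaiser', beta))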
This blog post contains code detailing how to use scipy.signal to implement FIR filters.
Obviously, this post is somewhat dated, but maybe it is still interesting for some:
I think there are two near-equivalents to firls in Python:
You can try the firwin function with window='boxcar'. This is similar to Matlab, where fir1 with a boxcar window delivers the same (or at least very similar) results as firls.
You could also try the firwin2 method (the frequency sampling method, similar to fir2 in Matlab), again using window='boxcar'.
I did try one example from the Matlab firls reference and achieved near-identical results for:
Matlab:
F = [0 0.3 0.4 0.6 0.7 0.9];
A = [0 1 0 0 0.5 0.5];
b = firls(24,F,A,'hilbert');
Python:
import scipy.signal as sig

F = [0, 0.3, 0.4, 0.6, 0.7, 0.9, 1]
A = [0, 1, 0, 0, 0.5, 0.5, 0]
bb = sig.firwin2(25, F, A, window='boxcar', antisymmetric=True)
I had to increase the length to N = 25, and I also had to add another data point (F = 1, A = 0), which Python insisted upon; the option antisymmetric=True is only necessary for this special case (a Hilbert filter).
This post is really in response to
You can try the firwin function with window='boxcar'...
Don't use boxcar; it means no window at all (it is ideal, but only works "ideally" with an infinite number of multipliers - a sinc in time). The whole purpose of using a window is to reduce the number of multipliers required to get good stop-band attenuation. See Window function.
When comparing filters, please use a dB/log scale.
Scipy not having a firls (FIR least-squares filter) function is a large limitation (as it generates the optimum filter in many situations).
REMEZ has its place, but the flat roll-off is a real killer when you're trying to get the best results (and not just meeting some manager's spec). (Warning: the scipy remez implementation can give amplification in the stop band - see plot at bottom.)
If you are using Python (or need to use some window), I recommend using the Kaiser window, which gets very good results and can easily be tweaked for your attenuation vs. transition vs. multipliers requirement (attenuation (in dB) = 2.285 * (multipliers - 1) * pi * width + 7.95). Its performance is not quite as good as firls, but it has the benefit of being fast and easy to calculate (great if you don't store the coefficients).
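As a worked example of that formula (a sketch; the 60 dB and 0.05 numbers are arbitrary):
import math

atten_db = 60.0   # desired stop-band attenuation
width = 0.05      # transition width, normalized so that 1 is Nyquist
# invert: attenuation = 2.285 * (multipliers - 1) * pi * width + 7.95
multipliers = math.ceil((atten_db - 7.95) / (2.285 * math.pi * width)) + 1
print(multipliers)  # number of taps needed to meet this spec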
I found a firls() implementation attached here in SciPy ticket 648
Minor changes to get it working:
Swap the following two lines:
bands, desired, weight = array(bands), array(desired), array(weight)
if weight==None : weight = ones(len(bands)/2)
import roots from numpy instead of scipy.signal
Since version 0.18 (July 2016), scipy includes an implementation of firls, as scipy.signal.firls.
It seems unlikely that you'll find exactly what you seek already written in Python, but perhaps the Matlab function's help page gives or references a description of the algorithm?
