How do numpy and GMPY2 compare with GMP in terms of speed?

I understand that GMPY2 supports the GMP library and that numpy has fast numerical libraries. I want to know how their speed compares to actually writing C (or C++) code with GMP. Since Python is a scripting language, I don't think it will ever be as fast as a compiled language; however, I have been wrong about these generalizations before.
I can't get GMP to work on my computer, so I can't run any tests. If I could, I would test just general math like addition and maybe some trig functions. I'll figure out GMP later.

numpy and GMPY2 have different purposes.
numpy has fast numerical libraries, but to achieve high performance it is effectively restricted to working with vectors or arrays of low-level types: 16-, 32-, or 64-bit integers, or 32- or 64-bit floating point values. For example, numpy accesses highly optimized routines written in C (or Fortran) for performing matrix multiplication.
GMPY2 uses the GMP, MPFR, and MPC libraries for multiple-precision calculations. It isn't targeted towards vector or matrix operations.
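To make the contrast concrete, here is a minimal sketch (assuming both packages are installed): numpy vectorizes operations over fixed-width machine types, while GMPY2 works on individual arbitrary-precision values.

    import numpy as np
    import gmpy2

    # numpy: fast elementwise operations on fixed-width types,
    # but values wrap around past the 64-bit limit.
    a = np.arange(5, dtype=np.int64)
    print(a * a)          # [ 0  1  4  9 16]

    # gmpy2: individual values of arbitrary precision; no overflow,
    # but also no vectorized bulk operations.
    x = gmpy2.mpz(2) ** 200
    print(x + 1)          # exact 201-bit integer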
The Python interpreter adds overhead to each call to an external library. Whether or not the slowdown is significant depends on how much time is spent inside the external library. If the running time of the external library call is very short, say on the order of 10^-8 seconds, then Python's overhead is significant. If the running time of the external library call is relatively long, several seconds or longer, then Python's overhead is probably insignificant.
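As a rough illustration of that overhead, here is a sketch (assuming gmpy2 is installed; the numbers will vary by machine) that times one cheap call against one expensive call:

    import timeit

    # Cheap call: interpreter overhead dominates the measurement.
    cheap = timeit.timeit(
        "a + b",
        setup="import gmpy2; a = gmpy2.mpz(12345); b = gmpy2.mpz(67890)",
        number=1000000,
    )

    # Expensive call: time spent inside GMP dominates, so the per-call
    # interpreter overhead becomes insignificant.
    expensive = timeit.timeit(
        "a * b",
        setup="import gmpy2; a = gmpy2.mpz(10) ** 100000; b = a + 1",
        number=100,
    )

    print("cheap:     %.2e s per call" % (cheap / 1000000))
    print("expensive: %.2e s per call" % (expensive / 100))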
Since you haven't said what you are trying to accomplish, I can't give a better answer.
Disclaimer: I maintain GMPY2.

Related

How does pypy2 compare to pypy3 in speed benchmarks?

It's well known that pypy3 is faster than CPython for number-crunching tasks not already written in C: https://speed.pypy.org/
But how does pypy2 compare to pypy3 in terms of speed? PyPy used to support only Python 2, but pypy2 and pypy3 are now built on the same codebase, so they should be expected to run at about the same speed. I recall that in earlier versions of Python 3, number-crunching and string operations in pure Python (NOT numpy) ran slower due to integer sizes and unicode strings, but I'm not sure whether that is still the case. If the speed is comparable, I can drop the compatibility imports from the code designed for pypy2.
In the comparison at https://speed.pypy.org/comparison/ I compared CPython 2.7.11, CPython 3.7.6, pypy-jit-64 latest (I think this is compatible with Python 2.7.11), and pypy3.9-jit-64 latest. pypy and pypy3 seem comparable in speed on the tasks ai, float, go, json_bench, and scimark_fft, but I'm not sure whether I did the comparisons correctly, and I didn't find which codebases the benchmarks come from.
I think it would be better to ask "how does PyPy perform on my tasks" rather than search for a generic "what is faster" measure. Those generic measures are only one way to combine the scores. What matters at the end of the day is how best to implement the task that interests you.
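As a minimal sketch of that do-it-yourself approach, save a pure-Python workload like the one below (no numpy; the syntax is valid under both Python 2 and 3) and run the same file under cpython, pypy2, and pypy3:

    import time

    def crunch(n):
        # Pure-Python integer loop; this is the kind of code a JIT speeds up.
        total = 0
        i = 1
        while i < n:
            total += i * i % 7
            i += 1
        return total

    start = time.time()
    crunch(10000000)
    print("elapsed: %.3f s" % (time.time() - start))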

alternative to scipy's matrix exponential

I am using one of the new MacBooks with the M1 chip. I cannot use SciPy natively unless I go through Rosetta, but for other reasons I cannot do that now.
The ONLY thing I need from SciPy is scipy.linalg.expm. Is there an alternative library where there is an implementation of expm that is equally (or almost equally) fast?
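Until an answer with a dedicated library turns up, one stopgap is a numpy-only approximation via scaling and squaring of a truncated Taylor series. This is only a sketch, not a drop-in replacement for scipy.linalg.expm (which uses a Pade approximant with error control); expm_taylor and its terms parameter are hypothetical names, not an existing API.

    import numpy as np

    def expm_taylor(A, terms=20):
        # Approximate exp(A) by scaling and squaring a truncated Taylor series.
        A = np.asarray(A, dtype=float)
        n = A.shape[0]
        # Scale A down by 2**s so the series converges quickly.
        norm = np.linalg.norm(A, np.inf)
        s = max(0, int(np.ceil(np.log2(norm)))) if norm > 0 else 0
        A_s = A / (2.0 ** s)
        # Truncated Taylor series: I + A + A**2/2! + ...
        E = np.eye(n)
        term = np.eye(n)
        for k in range(1, terms):
            term = term @ A_s / k
            E = E + term
        # Undo the scaling: exp(A) = exp(A / 2**s) squared s times.
        for _ in range(s):
            E = E @ E
        return E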

Execute Randomforest model built using python in julia

Is it possible to export a random forest model built in Python and execute it natively in Julia? Will that give a performance boost?
You can use PyCall to call Python code from Julia. Julia can't magically make Python code (or any other code) faster. You could call more basic components written in Python and glue the desired end results together in Julia, which should theoretically be faster. For example, much of Scikit-learn uses Numpy, and you could call the Numpy library and construct the relevant code to create a random forest yourself, which may be faster because Julia can compile the binding code. At that point it would make more sense to just use Julia entirely, though, because most of Numpy's functionality is available in Julia's LinearAlgebra package.
It's just a trade-off between how fast you want your code to be and how much work you want to put into optimizing it.

What exactly is PyOpenGL-accelerate?

The title is the main question here. I had some PyOpenGL code running on my computer, and it was somewhat slow. I realized I hadn't installed PyOpenGL-accelerate. Installing it didn't change the speed at all, but most tutorials for the Python OpenGL bindings suggest that PyOpenGL-accelerate should be installed.
What exactly does this module do?
First of all, note that PyOpenGL-accelerate isn't a silver bullet. If your application is poorly optimized to begin with, then PyOpenGL-accelerate won't gain you much, if any, additional performance.
That being said, PyOpenGL-accelerate consists of Cython accelerator modules which attempt to speed up various aspects of PyOpenGL 3.x. So if you're drawing with glBegin() and glEnd(), you won't gain any performance from this.
So what are Cython accelerator modules?
These modules are completely self-contained, and are created solely to run faster than the equivalent pure Python code runs in CPython. Ideally, accelerator modules will always have a pure Python equivalent to use as a fallback if the accelerated version isn’t available on a given system. The CPython standard library makes extensive use of accelerator modules.
– Python – Binary Extensions
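A minimal sketch of the fallback pattern that quote describes (the module and function names here are hypothetical, not PyOpenGL's actual layout):

    # Prefer the compiled accelerator module; fall back to pure Python.
    try:
        from _fast_helpers import format_array  # hypothetical Cython version
    except ImportError:
        def format_array(values):
            # Pure-Python fallback: same behavior, just slower.
            return ",".join(str(v) for v in values)

This is the same arrangement CPython uses for modules like pickle, which transparently imports the C implementation _pickle when it is available.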
In more layman's terms: Cython is a bit of a mix between Python and C, with the goal being optimization and execution speed.
In relation to PyOpenGL-accelerate, this means that the various helper classes PyOpenGL offers are instead implemented in a manner that offers more performance.
From the documentation:
This set of C (Cython) extensions provides acceleration of common operations for slow points in PyOpenGL 3.x. For code which uses large arrays extensively speed-up is around 10% compared to unaccelerated code.
You could dig through the code if you want to know precisely which optimizations are defined, but OpenGL is usually built around surprisingly coarse optimizations to account for different hardware; I suppose that extends to running off an interpreter as well.

Why does SciPy require an installation procedure?

I'm trying to wrap my head around the Python ecosystem and parts of it aren't making complete sense to me so far.
I'm coming from the Java world, and when I want to make use of, say, JUnit, I simply add the JUnit jar to my classpath and that's pretty much it. If I want to be nice to my users, I can also easily bundle all my dependencies into a single jar, so that all they need to do is install a Java Runtime and get hold of my jar.
Reading through the SciPy installation guide, I can't find an explanation for why all this is really necessary. And how is this ever going to work at deployment time? It's like JUnit asking me to install a new JRE just for it.
SciPy has parts written in C that require compilation for the specific platform it's being deployed to.
How can SciPy be fast if it is written in an interpreted language like Python?
Actually, the time-critical loops are usually implemented in C or Fortran. Much of SciPy is a thin layer of code on top of the scientific routines that are freely available at http://www.netlib.org/. Netlib is a huge repository of incredibly valuable and robust scientific algorithms written in C and Fortran. It would be silly to rewrite these algorithms and would take years to debug them. SciPy uses a variety of methods to generate “wrappers” around these algorithms so that they can be used in Python. Some wrappers were generated by hand coding them in C. The rest were generated using either SWIG or f2py. Some of the newer contributions to SciPy are either written entirely or wrapped with Cython.
Source: http://www.scipy.org/scipylib/faq.html#id12
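You can see that thin-wrapper idea directly: SciPy exposes LAPACK's Fortran routines almost unchanged. A small sketch (assuming SciPy is installed) using the low-level dgesv wrapper that the friendlier scipy.linalg.solve sits on top of:

    import numpy as np
    from scipy.linalg import lapack

    A = np.array([[3.0, 1.0],
                  [1.0, 2.0]])
    b = np.array([9.0, 8.0])

    # dgesv is LAPACK's Fortran routine for solving A x = b,
    # exposed through an f2py-generated wrapper.
    lu, piv, x, info = lapack.dgesv(A, b)
    print(x)     # solution vector
    print(info)  # 0 means LAPACK reported success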
On Linux, the SciPy and NumPy libraries' official releases are source-code only. Installing NumPy and SciPy from source is reasonably easy; however, both packages depend on other software, some of which can be challenging to install or is shipped with incompatibilities by major Linux distributions. Hopefully, you can install NumPy and SciPy without any software beyond the necessary tools to build Python extensions, as most dependencies are optional.
Source: http://www.scipy.org/scipylib/building/linux.html
