I am currently working on a Raspberry Pi project in which I am trying to calculate a FFT on some numpy arrays containing some measurement data. I would like to get this done on the GPU to free resources on the CPU. I found the GPU_FFT library of Andrew Holme that apparently allows exactly that.
http://www.aholme.co.uk/GPU_FFT/Main.htm
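For context, the CPU-side computation I want to offload is nothing more exotic than a plain numpy FFT over the measurement array, roughly like this (the array size here is just a placeholder):

    import numpy as np

    # Stand-in for the measurement data coming from the sensors.
    samples = np.random.rand(4096)

    # This is the step I would like to move onto the GPU.
    spectrum = np.fft.fft(samples)
    magnitudes = np.abs(spectrum)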
However, even after reading the included instructions, I do not know how to use this library, as I have no knowledge of how Python and C interact (I have never used C before). Furthermore, I could not find any other instructions on how to use GPU_FFT on the internet.
Is there any documentation or explanation that I may have missed, or can I use other Python libraries like PyFFT instead?
Have you tried using the Mathematica package on the Raspberry Pi? I haven't, but they advertise that their code is very fast and uses the GPU.
I am using an open-source Matlab toolbox for brain-computer interface (BCI). I want to send the brain imaging data over to Tensorflow for classification and get the results back to Matlab. Is there any way to pass data structures from Matlab to Tensorflow and get the results back into Matlab?
In case someone lands here with a similar question, I'd like to suggest a Matlab package I am currently writing. It's called tensorflow.m and it's available on GitHub. There's no stable release yet, but simple functionality like importing a frozen graph and running an inference is already possible (see the examples) - this is all you'd need to classify the images in Matlab (only).
The advantage is that you don't need any expensive toolbox nor a Python/Tensorflow installation on your machine. The Python interface of Matlab also seems to be rather adventurous, while tensorflow.m is pure Matlab/C.
I'd be glad if the package can be of use for someone looking for similar solutions; even more so, in case you extend/implement something and open a PR.
So far the best way I have found is to run your Python module in Matlab through Matlab's now built-in mechanism for connecting to Python:
I wrote my Python script in a .py file, and in there I imported Tensorflow and used it in different functions. You can then return the results to Matlab by calling
results = py.myModule.myFunction(arg1,arg2,...,argN)
More detailed instructions for calling user-defined Python modules from Matlab can be found at the following link:
http://www.mathworks.com/help/matlab/matlab_external/call-user-defined-custom-module.html
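The Python side is just an ordinary module on the Python path. A hypothetical myModule.py (the names and the thresholding logic here are only placeholders, not part of any toolbox) could look like this:

    # myModule.py -- hypothetical example of the Python side, called from
    # Matlab via py.myModule.myFunction(...). Import tensorflow here if needed.
    import numpy as np

    def myFunction(data, threshold):
        # Depending on the Matlab release, numeric inputs may arrive as lists,
        # array.array objects, or scalars; np.array() copes with all of these.
        arr = np.array(data, dtype=float)
        labels = (arr > float(threshold)).astype(int)
        # Return plain Python types so Matlab can convert the result back easily.
        return labels.tolist()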
So I am working on setting up an agent-based model that runs over a geographic map--Syria, in this case. I tried writing it in Python, but the performance is rather slow, even after some optimization tricks. I was thinking that I should shift to writing the model in C++, but I don't know which visualization packages can incorporate maps. I tend to use gnuplot from C++, but I have not found a way to incorporate a GIS basemap in that package. I am not sure whether this is possible in VTK or any other package. I would like to run my model fast in C++ without losing the geographic information. Any suggestions?
Perhaps this project could be useful to you?
http://code.google.com/p/vtk-grass-bridge/
If you can handle your GIS data using GRASS, it seems that project can convert it to something VTK can render, all in one C++ application.
So I actually figured out the answer to this problem and am posting the solution for everyone. The best choice, if you are using Python, is to use the mayavi and tvtk packages from Enthought. Mayavi is a GUI on top of the C++ VTK libraries, and tvtk is a wrapper that gives Python access to VTK objects. This lets you use Python GIS packages--like pyshp, Shapely, and others--to manipulate GIS objects and then hand them to the robust and fast Mayavi for visualization. At the same time, if you want to stick to C++, you can still write your code in C++ using GDAL or OGR, etc., and then run your visualization in VTK. This seems a lot easier and more intuitive than trying to go through other packages like GRASS, QGIS, or ArcGIS.
Here is a good example of this toolset in action.
Example
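To give a flavor of that workflow, here is a minimal sketch (assuming pyshp and Mayavi are installed, and that you have some shapefile with the basemap outlines, here called borders.shp):

    # Read polygon outlines with pyshp and draw them with Mayavi (VTK underneath).
    import numpy as np
    import shapefile            # pyshp
    from mayavi import mlab

    sf = shapefile.Reader("borders.shp")
    for shape in sf.shapes():
        pts = np.array(shape.points)      # (N, 2) array of lon/lat pairs
        x, y = pts[:, 0], pts[:, 1]
        z = np.zeros_like(x)              # keep the basemap flat at z = 0
        mlab.plot3d(x, y, z, tube_radius=None, line_width=0.5)

    mlab.show()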
What makes you believe that a C++ implementation of your model will be dramatically faster? I suggest that, before worrying about how you will visualize the results, you first work out what makes your Python implementation slow. Is it that your algorithm won't scale? If you have tried optimization tricks, which tricks were they, and why do you believe they did not work?
It all eventually comes down to machine instructions being executed on hardware, whether those instructions started out as Python, C++, or some other language's source code. Unless your Python was running fully interpreted all the time, I don't think you will find that switching languages alone gives you a fundamentally different performance profile. Premature optimization is still something to be avoided.
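A minimal first step along those lines, using only the standard library (run_model below is just a placeholder for your actual entry point):

    import cProfile
    import pstats

    def run_model():
        # Placeholder for one run of the agent-based model.
        total = 0
        for agent in range(100000):
            total += agent * agent
        return total

    # Profile a run and print the ten most expensive call sites.
    cProfile.run("run_model()", "model.prof")
    pstats.Stats("model.prof").sort_stats("cumulative").print_stats(10)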
I have recently discovered the power of GP-GPU (general purpose graphics processing unit) and want to take advantage of it to perform 'heavy' scientific and math calculations (that otherwise require big CPU clusters) on a single machine.
I know that there are several interfaces for operating on a GPU, the most prominent being CUDA and OpenCL. The latter has the advantage over CUDA of running on most graphics cards (NVIDIA, AMD, Intel) rather than on NVIDIA cards only. In my case, I have an ordinary Intel 4000 GPU that seems to cooperate well with OpenCL.
Now, I need to learn how to operate with PyOpenCL to get it on further! So here comes the question:
How can I get started with PyOpenCL? What are the prerequisites? Do I really need to be experienced in Python and/or OpenCL?
My background is in Fortran, and as a matter of fact I need to translate and parallelize a lengthy Fortran code that mainly deals with solving PDEs and diagonalizing matrices, moving it to Python (or PyOpenCL).
I have read the two relevant websites http://enja.org/2011/02/22/adventures-in-pyopencl-part-1-getting-started-with-python/ and http://documen.tician.de/pyopencl/ but they are not really helpful for newbies (i.e., dummies).
I just don't know what to begin with. I do not aspire to become an expert in the field; I just want to learn how to parallelize simple math and linear algebra with PyOpenCL.
Any advice and help is highly welcome!
It seems you are looking for the fastest and most effective path to learn PyOpenCL. You do not need to know OpenCL (the hard part) at the start, but it will be helpful to know Python when you begin.
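Before anything else, a two-minute sanity check that PyOpenCL actually sees your Intel GPU looks roughly like this:

    import pyopencl as cl

    # List every OpenCL platform and device the installed drivers expose.
    for platform in cl.get_platforms():
        print("Platform:", platform.name)
        for device in platform.get_devices():
            print("  Device:", device.name)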
For learning Python syntax quickly, I recommend Codecademy's Python track: http://www.codecademy.com/tracks/python
Then, the Udacity parallel programming course is a great place to start with GPGPU (even though the course is taught in CUDA): https://www.udacity.com/course/cs344 This course will teach you fundamental GPGPU concepts very quickly. You will not need an NVIDIA GPU to participate, because all the course assessments are done online.
After (or during) the Udacity course, I recommend you read, run, and customize PyOpenCL code examples: https://github.com/inducer/pyopencl/tree/master/examples
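As a taste of what those examples look like, here is a minimal vector-addition sketch in the spirit of the official demo (treat it as a starting point, not a reference implementation):

    import numpy as np
    import pyopencl as cl

    a = np.random.rand(50000).astype(np.float32)
    b = np.random.rand(50000).astype(np.float32)

    ctx = cl.create_some_context()
    queue = cl.CommandQueue(ctx)

    mf = cl.mem_flags
    a_g = cl.Buffer(ctx, mf.READ_ONLY | mf.COPY_HOST_PTR, hostbuf=a)
    b_g = cl.Buffer(ctx, mf.READ_ONLY | mf.COPY_HOST_PTR, hostbuf=b)
    res_g = cl.Buffer(ctx, mf.WRITE_ONLY, a.nbytes)

    # The kernel itself is plain OpenCL C; each work item adds one element.
    prg = cl.Program(ctx, """
    __kernel void add(__global const float *a,
                      __global const float *b,
                      __global float *res) {
        int gid = get_global_id(0);
        res[gid] = a[gid] + b[gid];
    }
    """).build()

    prg.add(queue, a.shape, None, a_g, b_g, res_g)

    res = np.empty_like(a)
    cl.enqueue_copy(queue, res, res_g)
    print(np.allclose(res, a + b))   # should print True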
Irrespective of the language you adopt for GPGPU computing, be it Java, C/C++, or Python, I would recommend that you first get started with the basics of GPGPU computing and OpenCL.
You can use the following resources; all are C/C++ oriented, but they should give you enough knowledge about OpenCL and GPGPU hardware to get you started.
AMD OpenCL University Tool kit
Heterogeneous Computing with OpenCL Book, 2nd Edition
NVIDIA's OpenCL pages are another excellent resource
Streamcomputing.eu has nice OpenCL starter articles.
Intel OpenCL SDK tutorial
PyOpenCL specific
OpenCL in Action: How to Accelerate Graphics and Computation has a chapter on PyOpenCL.
OpenCL Programming Guide also has a chapter on PyOpenCL.
Both books cover OpenCL 1.1, but they should be a good starting point for you.
As someone new to GPU programming, I found the articles you mentioned fairly straightforward, though the sample code ran perfectly from the command line but not in Eclipse with Anaconda. I suspect this is because the pyopencl that Eclipse picks up from Anaconda differs from the command-line version; I have yet to work out how to resolve this.
For learning Python there are a large number of resources online, including free ebooks.
https://wiki.python.org/moin/BeginnersGuide
http://codecondo.com/10-ways-to-learn-python/
should be good starters. If you use Eclipse, you should install PyDev. In any case, install Anaconda (https://docs.continuum.io/anaconda/install), as this will save you a lot of hassle.
I estimate a week or so to get to the level of proficiency you need in Python, as long as you pick a few simple mini-projects. You may also find that with numpy and scipy, and possibly the IPython notebook, you do not need to delve into GPU programming at all.
These links may help you avoid GPU programming, or at least delay having to learn it. Be aware that the cost of switching between cores means you have to assign a significant amount of work to each core (see the small joblib sketch after these links).
http://blog.dominodatalab.com/simple-parallelization/
https://pythonhosted.org/joblib/parallel.html
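A tiny joblib sketch to show the flavor (heavy_work is just a stand-in; each call needs to be substantial for the parallelism to pay off):

    import math
    from joblib import Parallel, delayed

    def heavy_work(n):
        # Stand-in for a chunk of real computation.
        return sum(math.sqrt(i) for i in range(n))

    # Run 8 independent chunks across 4 worker processes.
    results = Parallel(n_jobs=4)(delayed(heavy_work)(1000000) for _ in range(8))
    print(results)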
Generally I find it more efficient, if less fun, to learn only one thing at a time.
I hope this helps.
You can have a look at these tutorials :)
Introduction 1
https://github.com/fr33dz/PyOpenCL_Tuto/blob/master/Intro_PyopenCL.ipynb
Introduction 2
https://github.com/fr33dz/PyOpenCL_Tuto/blob/master/2_PyOpenCL.ipynb
Matrix Multiplication
https://github.com/fr33dz/PyOpenCL_Tuto/blob/master/PyOpenCL_produit_2_matrices.ipynb
I'm building a rendering engine in Python for fun. I need to load 3D scenes. Any standard modern format like DAE, 3DS, or MAX would work: I can convert my files easily between standard formats.
OpenSceneGraph seems to be the most comprehensive and well-maintained solution. It would be ideal to be able to use it in Python without much hassle. Are there working Python bindings for OSG that are easy to install, work on Mac OS X (I'm on 10.8), and are compatible with the latest versions of OSG?
I searched around and came across osgswig (http://code.google.com/p/osgswig/) and PyOSG (http://sourceforge.net/projects/pyosg/), but they don't seem to be actively maintained. I don't see any recent activity related to these packages, and it seems that people had trouble running osgswig on OSX. Ideally, I'd like to find something that "just works", without major compilation hassles. I'd like to just install a package and be able to import a module that will let me load COLLADA or 3DS files.
I also came across pycollada (https://github.com/pycollada/pycollada). It seems active, but fairly early-stage. Ideally, I'd like a reasonably comprehensive package that supports specular maps, normal maps, and other reasonably advanced features. Animation would be nice as well.
In summary, I need to load 3D scenes in Python. Bindings for OSG would probably be ideal, because OSG is so comprehensive. But I need something that works on OSX. I would also prefer something that can be installed reasonably easily. Does something like this exist?
Thanks!
Take a look at Open Asset Import Library (short name: Assimp). It is a portable Open Source library to import various well-known 3D model formats in a uniform manner. http://www.assimp.org/
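Assimp also ships Python bindings (pyassimp), so, assuming they build and install cleanly on your OS X setup, loading a scene from Python is roughly:

    import pyassimp

    # model.dae is a placeholder; Assimp also reads 3DS, OBJ, and many others.
    scene = pyassimp.load("model.dae")
    for mesh in scene.meshes:
        print(len(mesh.vertices), "vertices,", len(mesh.faces), "faces")
    pyassimp.release(scene)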
You should look at Panda3D (http://www.panda3d.org/); it's a game engine with extensive Python bindings. It has the features you want: http://www.panda3d.org/manual/index.php/Features
I used it for a few years and it was a solid tool.
I made my own fork of a mirror of a clone of the osgswig project for a similar purpose. I have it working with OpenSceneGraph version 3.2.1 on Windows and Mac, and it's likely I will eventually polish it for Linux too. I'm already delivering one product to customers based on my version of osgswig, and I'm considering making others. Find my fork here:
https://github.com/cmbruns/osgswig
If others show enough interest, I might be coaxed into creating binary installers for my version of the osgswig module, to make installation easier.
If you just want the easiest Python bindings for OSG 3.2.1, you can stop reading this answer here. Read on for more of my thoughts on the future.
Though I am maintaining a fork of osgswig (as stated above), I sort of hate SWIG, and I would prefer to use bindings based on Boost.Python, rather than on SWIG. For large, complex C++ APIs, like OpenSceneGraph, Boost.Python can be much more elegant than SWIG, both for the API consumer, and for the binding maintainer (me, and me). I found one project using Boost.Python to wrap OSG, at https://code.google.com/p/osgboostpython/, but the developer is lovingly wrapping each part of the interface by hand, and has thus only completed a tiny fraction of the large OpenSceneGraph API.
Taking that Boost.Python based project as inspiration, I created yet another OpenSceneGraph Python binding project, at https://github.com/JaneliaSciComp/osgpyplusplus. Eventually, I want to use this osgpyplusplus project for all my python osg needs. And I would appreciate help in making it ready. Right now, osgpyplusplus suffers from the following weaknesses, compared to osgswig:
osgpyplusplus is not yet used in any working product
The build environment is tricky to set up, requiring both Boost.Python and Pyplusplus
I haven't paid much attention to osgpyplusplus recently, so it might rust away if I continue to ignore it.
Though osgpyplusplus probably wraps most of the OpenSceneGraph API, there are probably some important missing pieces that won't be identified until someone tries to develop a significant project with it.
It would be a lot of work for me to create a binary module installer for osgpyplusplus at this point, so please don't ask me to.
I have been browsing around for simple ways to program FFTs to run on my graphics card (which is a recent NVIDIA card supporting CUDA 3.something).
My current options are either to learn C and then the special C dialect used for CUDA, or to use some Python CUDA functions. I'd rather not learn C yet, since I have only ever programmed in high-level languages.
I looked at PyCUDA and other ways to use my graphics card from Python, but I couldn't find any FFT library that could be used with Python code only.
Some libraries/projects seem to tackle similar problems (CUDAmat, Theano), but sadly I found no FFTs.
Does a function exist that does the same thing as numpy.fft.fft2(), using my graphics card?
EDIT: Bonus point for an open source solution.
There's PyFFT, which is open-source and based on Apple's (somewhat limited) implementation. Disclaimer: I work on PyFFT :)
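Usage with the CUDA backend looks roughly like this (a minimal sketch assuming PyCUDA is installed and working; see the documentation for the full, up-to-date API):

    import numpy
    import pycuda.driver as cuda
    import pycuda.gpuarray as gpuarray
    from pycuda.tools import make_default_context
    from pyfft.cuda import Plan

    cuda.init()
    context = make_default_context()
    stream = cuda.Stream()

    # Plan a 2-D FFT, analogous to numpy.fft.fft2 on a 256x256 array.
    plan = Plan((256, 256), stream=stream)

    data = numpy.random.rand(256, 256).astype(numpy.complex64)
    gpu_data = gpuarray.to_gpu(data)
    plan.execute(gpu_data)        # forward transform, in place on the GPU
    result = gpu_data.get()

    context.pop()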
Yes, ArrayFire has a 2-D FFT for Python.
Disclaimer: I work on ArrayFire.
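With the ArrayFire Python bindings, the call is roughly analogous to numpy.fft.fft2 (a minimal sketch; check the ArrayFire docs for the exact API of your version):

    import arrayfire as af

    a = af.randu(256, 256)   # random single-precision array created on the GPU
    b = af.fft2(a)           # 2-D FFT computed on the GPU
    af.sync()                # block until the GPU work has finished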