MIDI on Python / PyGame, Ubuntu 12.04

Trying to get a MIDI interface to work with pygame on Ubuntu 12.04. I know the keyboard works because it can control vkeybd and works with PyGame on OSX, so the issue is with MIDI in Python.
$ python -m pygame.examples.midi --list
Traceback (most recent call last):
  File "/usr/lib/python2.7/runpy.py", line 162, in _run_module_as_main
    "__main__", fname, loader, pkg_name)
  File "/usr/lib/python2.7/runpy.py", line 72, in _run_code
    exec code in run_globals
  File "/usr/lib/python2.7/dist-packages/pygame/examples/midi.py", line 820, in <module>
    print_device_info()
  File "/usr/lib/python2.7/dist-packages/pygame/examples/midi.py", line 25, in print_device_info
    pygame.midi.init()
  File "/usr/lib/python2.7/dist-packages/pygame/midi.py", line 71, in init
    import pygame.pypm
ImportError: /usr/lib/libportmidi.so.0: undefined symbol: snd_seq_event_input_pending
python-pygame was installed through the package manager, as was python-pm.
Any ideas? :)

Although this won't exactly answer your question, it may help you debug the problem yourself.
The error is this:
ImportError: /usr/lib/libportmidi.so.0: undefined symbol: snd_seq_event_input_pending
The undefined symbol means the dynamic linker failed to find the code required for the snd_seq_event_input_pending function.
On an example 32-bit Oneiric system, we can look at some symbols of libportmidi.so.0 like this:
nm -DC /usr/lib/libportmidi.so.0 | grep snd_seq_event_input_pending
U snd_seq_event_input_pending
This tells us that the libportmidi library requires the code for snd_seq_event_input_pending but the symbol is undefined. So for libportmidi to function it must also load an additional library which contains this function.
On Oneiric I've found that this symbol is defined in libasound2.so.2.
nm -DC /usr/lib/i386-linux-gnu/libasound.so.2 | grep snd_seq_event_input_pending
000a0fa0 T snd_seq_event_input_pending
The T indicates that the function exists and is in the text (code) segment.
Usually, linking of associated libraries happens automatically, as libasound.so.2 should be referenced by libportmidi. On the same system:
ldd /usr/lib/libportmidi.so.0
....
libasound.so.2 => /usr/lib/i386-linux-gnu/libasound.so.2 (0x00e35000)
which shows that libportmidi depends on libasound. In the ldd output in your comments there is no reference to libasound, so the loader won't automatically link in libasound.so.2 when libportmidi is loaded, resulting in your error.
There are a few reasons why this error may occur:
The way libportmidi is linked may have changed from Oneiric to Precise, e.g. libportmidi may attempt to find its libasound dependency itself. (Unlikely.)
There is a bug in the packaging of libportmidi where it doesn't reference libasound.so.2 as it should. This may be platform specific (e.g. only an error on 64-bit systems).
I'd suggest that you try to find out the library on your system that contains the snd_seq_event_input_pending function and then work backwards to try and determine why it has not been linked with libportmidi.
The following bash command will help you find the libraries implementing snd_seq_event_input_pending. If you don't find anything, there's a problem with the libraries installed on your machine.
find /lib /usr/lib -name "lib*.so.*" | while read f; do
    if nm -DC "$f" | grep -q 'T snd_seq_event_input_pending'; then
        echo "$f"
    fi
done
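The same kind of check can be done from Python with ctypes; this is only a sketch, and it assumes libasound is installed so that find_library can locate it. Looking up the function will only succeed if the library actually exports the symbol.
import ctypes
import ctypes.util

# find_library("asound") resolves to e.g. libasound.so.2 via ldconfig
alsa = ctypes.CDLL(ctypes.util.find_library("asound"))
print(hasattr(alsa, "snd_seq_event_input_pending"))  # True if the symbol is exported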

I have exactly the same problem (on Ubuntu 12.04.1), using e.g. the MIDI playback tool in Frescobaldi (which is a Python application). This used to work fine, but doesn't anymore.
This is quite obviously a miscompiled portmidi package which was pushed out on 2013-01-25, see https://launchpad.net/ubuntu/+source/portmidi/1:200-0ubuntu1.12.04.1. Downgrading to the previous 1:200-0ubuntu1 package solved the issue for me.
I guess that the proper course of action would be to file a bug report against the 1:200-0ubuntu1.12.04.1 version on Launchpad at https://bugs.launchpad.net/ubuntu/+source/portmidi/+bugs. If it doesn't get fixed, we might also ask falkTX if he would be willing to provide a working package in his KXStudio PPAs instead.
Just for the record, here's what ldd gives for the 1:200-0ubuntu1 libportmidi on my system:
linux-vdso.so.1 => (0x00007fffe9bff000)
libasound.so.2 => /usr/lib/x86_64-linux-gnu/libasound.so.2 (0x00007f26264cb000)
libpthread.so.0 => /lib/x86_64-linux-gnu/libpthread.so.0 (0x00007f26262ae000)
libporttime.so.0 => /usr/lib/libporttime.so.0 (0x00007f26260ab000)
libc.so.6 => /lib/x86_64-linux-gnu/libc.so.6 (0x00007f2625cec000)
libm.so.6 => /lib/x86_64-linux-gnu/libm.so.6 (0x00007f26259f0000)
libdl.so.2 => /lib/x86_64-linux-gnu/libdl.so.2 (0x00007f26257eb000)
librt.so.1 => /lib/x86_64-linux-gnu/librt.so.1 (0x00007f26255e3000)
/lib64/ld-linux-x86-64.so.2 (0x00007f26269f4000)
And the broken 1:200-0ubuntu1.12.04.1 version:
linux-vdso.so.1 => (0x00007fff9e3ff000)
libporttime.so.0 => /usr/lib/libporttime.so.0 (0x00007fb84ac71000)
libc.so.6 => /lib/x86_64-linux-gnu/libc.so.6 (0x00007fb84a8b2000)
libpthread.so.0 => /lib/x86_64-linux-gnu/libpthread.so.0 (0x00007fb84a694000)
/lib64/ld-linux-x86-64.so.2 (0x00007fb84b0af000)
So any application which doesn't happen to link in libasound2 by itself will be hosed. Specifically, that seems to be the case for the Python portmidi module. (This kind of error is also aggravated by the fact that, at least from Ubuntu 12.04 onwards, gcc uses the --as-needed linker flag by default. I bet that there are still quite a few packages in the Ubuntu repos which are broken because of that.)
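As a stopgap (not a fix for the package itself), it may be possible to preload libasound globally from Python before pygame.midi is imported, so that the broken libportmidi build can still resolve snd_seq_event_input_pending. A rough sketch, assuming libasound2 is installed:
import ctypes
import ctypes.util

alsa = ctypes.util.find_library("asound")   # e.g. libasound.so.2
if alsa:
    # RTLD_GLOBAL makes the ALSA symbols visible to libraries loaded later
    ctypes.CDLL(alsa, mode=ctypes.RTLD_GLOBAL)

import pygame.midi
pygame.midi.init()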

If you want to fix it now, you can check out the latest version of portmidi and build the library as follows (assuming you've checked out or unpacked portmidi into a directory called portmidi):
cd portmidi
make -f pm_linux/Makefile
The default build doesn't produce a dynamic version of the library, so you need to build one like this:
gcc -shared -Wl,-soname,libportmidi.so.0 -o pm_linux/libportmidi.so.0 pm_common/pmutil.o pm_linux/pmlinuxalsa.o pm_linux/pmlinux.o pm_common/portmidi.o -lasound
Then you can make a backup copy of the old library (just in case) and copy the new one in its place:
sudo cp /usr/lib/libportmidi.so.0 /usr/lib/libportmidi.so.0.orig
sudo cp pm_linux/libportmidi.so.0 /usr/lib/libportmidi.so.0
Your apps should now work...
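To check that the rebuilt library loads, you can, for example, list the MIDI devices pygame sees:
import pygame.midi

pygame.midi.init()
for i in range(pygame.midi.get_count()):
    print(pygame.midi.get_device_info(i))  # (interface, name, input, output, opened)
pygame.midi.quit()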

Related

Calling Pardiso 6 in Python

I'm trying to use the Pardiso 6 sparse solver library in Python. The problem is that I can't seem to load the Pardiso shared object (.so). Here's the error that I get when calling:
import ctypes
pardiso = ctypes.CDLL(pardiso_so_address)
Traceback (most recent call last):
  File "test.py", line 27, in <module>
    pardiso = ctypes.CDLL(lib720)
  File "/home/amin/anaconda3/envs/idp/lib/python3.7/ctypes/__init__.py", line 364, in __init__
    self._handle = _dlopen(self._name, mode)
OSError: ./libpardiso600-GNU720-X86-64.so: undefined symbol: sgetrf_
I'd really appreciate it if someone could shed some light on this.
PS. I already contacted Pardiso developers and they told me that I need to link against optimized BLAS, but I already have MKL installed via conda.
Update 1: I installed mkl via conda, but it didn't help. Strangely, I added import scipy to the header and the error went away. The same thing happens if I add import mkl. So, for some reason, unless scipy or mkl are manually imported, the .so doesn't know that a LAPACK installation exists. Anyway, now another error is thrown, which I think might be related to the libgfortran library. Here's the error:
Traceback (most recent call last):
  File "test.py", line 34, in <module>
    pardiso = ctypes.CDLL(lib720)
  File "/home/amin/anaconda3/envs/test/lib/python3.7/ctypes/__init__.py", line 364, in __init__
    self._handle = _dlopen(self._name, mode)
OSError: ./libpardiso600-GNU720-X86-64.so: undefined symbol: _gfortran_st_close
I double-checked to see if libgfortran is installed, and indeed it is:
(test) PyPardisoProject$ ldconfig -p | grep libgfortran
libgfortran.so.5 (libc6,x86-64) => /lib/x86_64-linux-gnu/libgfortran.so.5
libgfortran.so.4 (libc6,x86-64) => /lib/x86_64-linux-gnu/libgfortran.so.4
I think something similar might be at play, i.e. the library is there but it needs to be triggered (similar to what import scipy seems to have done for liblapack), but I have no idea how I can trigger it.
Note: I found an example in C on the Pardiso website and tested the .so file against it via
$ gcc pardiso_sym.c -o pardiso_sym -L . -lpardiso600-GNU720-X86-64 -llapack -fopenmp -lgfortran
$ OMP_NUM_THREADS=1 ./pardiso_sym
and it worked with no problem (with the existing libraries on my machine). So, the .so works, it's just that I don't know how to inform it of its dependencies in Python.
Update 2: Here's the output of ldd pardiso_sym:
Scripts$ ldd pardiso_sym
linux-vdso.so.1 (0x00007ffe7e982000)
libpardiso600-GNU720-X86-64.so (0x00007f326802d000)
liblapack.so.3 => /lib/x86_64-linux-gnu/liblapack.so.3 (0x00007f3267976000)
libgfortran.so.4 => /lib/x86_64-linux-gnu/libgfortran.so.4 (0x00007f3267795000)
libgomp.so.1 => /lib/x86_64-linux-gnu/libgomp.so.1 (0x00007f326775b000)
libc.so.6 => /lib/x86_64-linux-gnu/libc.so.6 (0x00007f3267568000)
libpthread.so.0 => /lib/x86_64-linux-gnu/libpthread.so.0 (0x00007f3267545000)
libm.so.6 => /lib/x86_64-linux-gnu/libm.so.6 (0x00007f32673f6000)
/lib64/ld-linux-x86-64.so.2 (0x00007f32685df000)
libblas.so.3 => /lib/x86_64-linux-gnu/libblas.so.3 (0x00007f3267389000)
libgfortran.so.5 => /lib/x86_64-linux-gnu/libgfortran.so.5 (0x00007f32670e9000)
libgcc_s.so.1 => /lib/x86_64-linux-gnu/libgcc_s.so.1 (0x00007f32670cf000)
libquadmath.so.0 => /lib/x86_64-linux-gnu/libquadmath.so.0 (0x00007f3267083000)
libdl.so.2 => /lib/x86_64-linux-gnu/libdl.so.2 (0x00007f326707d000)
So, I added the common paths, i.e. /lib/x86_64-linux-gnu and /lib64, to PATH and ran the Python script again via:
PATH=$PATH:/lib/x86_64-linux-gnu:/lib64 python padiso_script.py
but the same error is thrown. I also tried adding them to LD_LIBRARY_PATH, but that didn't work either.
The Pardiso 6 sparse solver depends on LAPACK functions, at least sgetrf, which computes an LU factorization of a general M-by-N matrix A using partial pivoting with row interchanges.
From what we read, libpardiso600-GNU720-X86-64.so is linked dynamically against a shared LAPACK library. You need to provide a path containing one implementation.
Before launching Python, I would recommend playing with LD_LIBRARY_PATH and including the path to the BLAS/LAPACK library you are using. It can be the Netlib implementation, the ATLAS implementation, or the MKL implementation.
LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/my_path_to_lapack \
python -c "import ctypes; pardiso = ctypes.CDLL(pardiso_so_address)"
If you use conda, you can install with the command
conda install -c anaconda mkl
In this case, the installation may directly solve the problem.
The trick is, rather than adding the location of the dependencies to system paths, you need to explicitly load the dependencies, i.e. lapack, blas, and gfortran, in the Python script prior to loading the Pardiso library. It is also essential that you explicitly pass the optional mode=ctypes.RTLD_GLOBAL argument to ctypes.CDLL in order to make the dependencies globally accessible, so that Pardiso can access them.
import ctypes
import ctypes.util

shared_libs = ["lapack", "blas", "omp", "gfortran"]
for lib in shared_libs:
    # Fetch the proper name of the dependency
    libname = ctypes.util.find_library(lib)
    # Load the dependency and make it globally accessible
    ctypes.CDLL(libname, mode=ctypes.RTLD_GLOBAL)

# Finally, load the Pardiso library
pardiso = ctypes.CDLL(pardiso_so_address)
In my experience, if you are inside a conda environment with mkl installed, you only need to list gfortran as a dependency and the rest are automatically loaded and accessible, in which case set shared_libs = ["gfortran"].
Pardiso 6 and Intel MKL Pardiso are not compatible as they have different APIs. You may try to remove MKL from your system paths, add OpenBLAS, and link your example once again.
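If you go the OpenBLAS route, a hedged variation of the snippet above would preload it globally (assuming an OpenBLAS shared library is installed) before loading Pardiso:
import ctypes
import ctypes.util

openblas = ctypes.util.find_library("openblas")
if openblas:
    # Make the OpenBLAS (BLAS + LAPACK) symbols globally visible
    ctypes.CDLL(openblas, mode=ctypes.RTLD_GLOBAL)
pardiso = ctypes.CDLL("./libpardiso600-GNU720-X86-64.so")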

How to update libstdc++.so.6 or change the file to use on Tensorflow, Python

I am using Python 3 and TensorFlow 1.15 with an Apache server on CentOS 6.
Now I am struggling with this error; it requires GLIBCXX_3.4.17:
ImportError: /usr/lib64/libstdc++.so.6: version `GLIBCXX_3.4.17' not found (required by /home/app/anaconda3/lib/python3.7/site-packages/tensorflow_core/python/_pywrap_tensorflow_internal.so)
So now I checked the libstdc++ version:
strings /usr/lib64/libstdc++.so.6 | grep GLIBCXX
GLIBCXX_3.4
GLIBCXX_3.4.1
GLIBCXX_3.4.2
GLIBCXX_3.4.3
GLIBCXX_3.4.4
GLIBCXX_3.4.5
GLIBCXX_3.4.6
GLIBCXX_3.4.7
GLIBCXX_3.4.8
GLIBCXX_3.4.9
GLIBCXX_3.4.10
GLIBCXX_3.4.11
GLIBCXX_3.4.12
GLIBCXX_3.4.13
GLIBCXX_FORCE_NEW
GLIBCXX_DEBUG_MESSAGE_LENGTH
There is no 3.4.17.
However, in the conda directory, there is another libstdc++ too:
strings /home/app/anaconda3/lib/libstdc++.so.6 | grep GLIBCXX
GLIBCXX_3.4
GLIBCXX_3.4.1
GLIBCXX_3.4.2
GLIBCXX_3.4.3
GLIBCXX_3.4.4
GLIBCXX_3.4.5
GLIBCXX_3.4.6
GLIBCXX_3.4.7
GLIBCXX_3.4.8
GLIBCXX_3.4.9
GLIBCXX_3.4.10
GLIBCXX_3.4.11
GLIBCXX_3.4.12
GLIBCXX_3.4.13
GLIBCXX_3.4.14
GLIBCXX_3.4.15
GLIBCXX_3.4.16
GLIBCXX_3.4.17
GLIBCXX_3.4.18
GLIBCXX_3.4.19
GLIBCXX_3.4.20
GLIBCXX_3.4.21
GLIBCXX_3.4.22
GLIBCXX_3.4.23
GLIBCXX_3.4.24
GLIBCXX_3.4.25
GLIBCXX_3.4.26
GLIBCXX_DEBUG_MESSAGE_LENGTH
So I have two ideas:
1. Update /usr/lib64/libstdc++.so.6 on CentOS 6:
sudo yum update libstdc++-devel
no package found...
2. Force Python to use /home/app/anaconda3/lib/libstdc++.so.6
However, I have no idea how to do that.
If you have any ideas, please help.
See https://gcc.gnu.org/onlinedocs/libstdc++/manual/using_dynamic_or_shared.html#manual.intro.using.linkage.dynamic, which explains how to ensure a newer libstdc++.so.6 is found by the dynamic linker at run time.
See also the libstdc++ FAQ at https://gcc.gnu.org/onlinedocs/libstdc++/faq.html#faq.how_to_set_paths
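One possible way to act on the second idea (forcing Python to use the Anaconda libstdc++) is to preload that library globally before TensorFlow is imported. This is only a sketch, using the path from the question; it may or may not be sufficient in an Apache setup, and the supported approaches from the links above (LD_LIBRARY_PATH or rpath) are preferable:
import ctypes

# Preload the newer libstdc++ so GLIBCXX_3.4.17 can be resolved from it
ctypes.CDLL("/home/app/anaconda3/lib/libstdc++.so.6", mode=ctypes.RTLD_GLOBAL)

import tensorflow as tf  # _pywrap_tensorflow_internal should now load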

Boost Python own module throws Segmentation Fault `GlobalError::PushToStack()`

I'm trying to wrap an existing C++ library we have for Python 3.6. I've followed these Boost.Python tutorials:
https://flanusse.net/interfacing-c++-with-python.html
https://www.mantidproject.org/Boost_Python_Introduction
https://github.com/TNG/boost-python-examples/blob/master/01-HelloWorld/CMakeLists.txt
All of them SIGSEGV, so I ran the command under gdb:
gdb --args python -c 'import MyPyLib'
And the actual output is:
Program received signal SIGSEGV, Segmentation fault.
0x00007ffff3bb02a9 in GlobalError::PushToStack() () from /usr/lib/x86_64-linux-gnu/libapt-pkg.so.5.0
I tried to run the boost-python-examples from GitHub and got the same problem. If it helps, I'm on:
gcc 7.4.0
g++ 7.4.0
python 3.6.8
libboost-python-dev 1.65.1
I found the problem: all the examples use
find_package(Boost REQUIRED COMPONENTS python)
But if you pay attention, there are two libraries in the system:
sudo ldconfig -p | grep "libboost_python*"
libboost_python3-py36.so.1.65.1 (libc6,x86-64) => /usr/lib/x86_64-linux-gnu/libboost_python3-py36.so.1.65.1
libboost_python3-py36.so (libc6,x86-64) => /usr/lib/x86_64-linux-gnu/libboost_python3-py36.so
libboost_python-py27.so.1.65.1 (libc6,x86-64) => /usr/lib/x86_64-linux-gnu/libboost_python-py27.so.1.65.1
libboost_python-py27.so (libc6,x86-64) => /usr/lib/x86_64-linux-gnu/libboost_python-py27.so
So I started suspecting that my module was being linked against the Python 2.7 boost-python.
I swapped the component in CMakeLists.txt:
find_package(Boost REQUIRED COMPONENTS python3)
And now it works fine. It's quite surprising that such a mismatch throws such a cryptic error. Also, CMake complains when using python3 that no headers were found or indexed.
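To double-check which Boost.Python flavour the built extension actually links against, one can inspect it with ldd. A small sketch; the filename MyPyLib.so is assumed here, adjust it to whatever your build produces:
import subprocess

out = subprocess.check_output(["ldd", "MyPyLib.so"]).decode()
print("links boost_python3:", "libboost_python3" in out)
print("links boost_python-py27:", "libboost_python-py27" in out)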

ImportError: bad magic number in 'random': b'\x03\xf3\r\n' [duplicate]

What's the "Bad magic number" ImportError in python, and how do I fix it?
The only thing I can find online suggests this is caused by compiling a .py -> .pyc file and then trying to use it with the wrong version of python. In my case, however, the file seems to import fine some times but not others, and I'm not sure why.
The information python's providing in the traceback isn't particularly helpful (which is why I was asking here...), but here it is in case it helps:
Traceback (most recent call last):
  File "run.py", line 7, in <module>
    from Normalization import Normalizer
The magic number comes from UNIX-type systems where the first few bytes of a file held a marker indicating the file type.
Python puts a similar marker into its pyc files when it creates them.
Then the python interpreter makes sure this number is correct when loading it.
Anything that damages this magic number will cause your problem. This includes editing the pyc file or trying to run a pyc from a different version of python (usually later) than your interpreter.
If they are your pyc files, just delete them and let the interpreter re-compile the py files. On UNIX type systems, that could be something as simple as:
rm *.pyc
or:
find . -name '*.pyc' -delete
If they are not yours, you'll have to either get the py files for re-compilation, or an interpreter that can run the pyc files with that particular magic value.
One thing that might explain the intermittent nature: the pyc that's causing the problem may only be imported under certain conditions. It's highly unlikely that it would import successfully only some of the time. Check the actual full stack trace when the import fails.
As an aside, the first word of all my 2.5.1 (r251:54863) pyc files is 62131, and for 2.6.1 (r261:67517) it is 62161. The list of all magic numbers can be found in Python/import.c, reproduced here for completeness (current as of the time this answer was posted; it may have changed since then):
1.5: 20121
1.5.1: 20121
1.5.2: 20121
1.6: 50428
2.0: 50823
2.0.1: 50823
2.1: 60202
2.1.1: 60202
2.1.2: 60202
2.2: 60717
2.3a0: 62011
2.3a0: 62021
2.3a0: 62011
2.4a0: 62041
2.4a3: 62051
2.4b1: 62061
2.5a0: 62071
2.5a0: 62081
2.5a0: 62091
2.5a0: 62092
2.5b3: 62101
2.5b3: 62111
2.5c1: 62121
2.5c2: 62131
2.6a0: 62151
2.6a1: 62161
2.7a0: 62171
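The same first word can also be read programmatically. A minimal sketch; the .pyc filename is illustrative, and imp.get_magic() corresponds to importlib.util.MAGIC_NUMBER on Python 3:
import imp
import struct

with open("Normalization.pyc", "rb") as f:
    header = f.read(4)

print(struct.unpack("<H", header[:2])[0])  # decimal magic word, cf. the table above
print(header == imp.get_magic())           # True if this interpreter can load the file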
Deleting all .pyc files will fix "Bad Magic Number" error.
find . -name "*.pyc" -delete
Loading a python3 generated *.pyc file with python2 also causes this error.
Take the pyc file to a Windows machine and open it with any hex editor (I used the freeware HexEdit). Read the hex value of the first two bytes; in my case, these were 03 f3.
Open Calculator and switch it to Programmer mode (Scientific in XP) to convert between hex and decimal. Select the "Hex" radio button, enter the bytes in reverse order, i.e. f303, then click the "Dec" (decimal) radio button. The value displayed corresponds to the magic number, i.e. the version of Python.
So, considering the table provided in the earlier reply:
1.5 => 20121 => 4E99, so files would have the first byte as 99 and the second as 4e
1.6 => 50428 => C4FC, so files would have the first byte as fc and the second as c4
In my case it was not .pyc files but old binary .mo translation files left over after I renamed my own module, so inside the module folder I had to run
find . -name \*.po -execdir sh -c 'msgfmt "$0" -o `basename $0 .po`.mo' '{}' \;
(please make a backup and try to fix the .pyc files first)
The "Bad magic number" error also happens if you have manually given your file a .pyc extension.
This can also be due to a missing __init__.py file in the directory. Say you create a new directory in a Django project to split the unit tests into multiple files; you then also have to create an __init__.py file beside all the other files in the newly created test directory, otherwise it can give an error like:
Traceback (most recent call last):
  File "C:\Users\USERNAME\AppData\Local\Programs\Python\Python35\Lib\unittest\loader.py", line 153, in loadTestsFromName
    module = __import__(module_name)
ImportError: bad magic number in 'APPNAME.tests': b'\x03\xf3\r\n'
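For illustration, a hypothetical layout matching what this answer describes (all names are made up):
# myapp/
#     tests/
#         __init__.py      <- must exist so 'myapp.tests' is an importable package
#         test_models.py
#         test_views.py
#
# Creating the missing (empty) file is enough:
open("myapp/tests/__init__.py", "a").close()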
I had a strange case of the Bad Magic Number error using a very old (1.5.2) implementation. I generated a .pyo file and that triggered the error. Bizarrely, the problem was solved by changing the name of the module. The offending name was sms.py. If I generated an sms.pyo from that module, the Bad Magic Number error was the result. When I changed the name to smst.py, the error went away. I checked back and forth to see if sms.py somehow interfered with any other module of the same name, but I could not find any name collision. Even though the source of this problem remained a mystery to me, I recommend trying a module name change.
This is much more efficient than the above.
find {directory-of-.pyc-files} -name "*.pyc" -print0 | xargs -0 rm -rf
where {directory-of-.pyc-files} is the directory that contains the compiled python files.
This can also happen if you have the wrong python27.dll file (on Windows); to solve it, just re-install (or extract) Python with the exact corresponding DLL version. I had a similar experience.
I just faced the same issue on Fedora 26, where many tools such as dnf were broken due to a bad magic number for six.
For an unknown reason I had a file /usr/bin/six.pyc with an unexpected magic number. Deleting this file fixed the problem.
You will need to run this command in every path you have in your environment.
>>> import sys
>>> sys.path
['', '/usr/lib/python36.zip', '/usr/lib/python3.6', '/usr/lib/python3.6/lib-dynload', '/usr/local/lib/python3.6/dist-packages', '/source_code/src/python', '/usr/lib/python3/dist-packages']
Then run the command in every directory here
find /usr/lib/python3.6/ -name "*.pyc" -delete
find /usr/local/lib/python3.6/dist-packages -name "*.pyc" -delete
# etc...
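The same cleanup can be scripted instead of running find by hand. A rough sketch; system directories on sys.path may require root permissions to modify:
import os
import sys

for entry in sys.path:
    if not os.path.isdir(entry):
        continue  # skip '' and zip archives
    for dirpath, dirnames, filenames in os.walk(entry):
        for name in filenames:
            if name.endswith(".pyc"):
                os.remove(os.path.join(dirpath, name))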
In my case, I had git cloned a lib whose scripts used the interpreter line
#!/usr/bin/env python
while python pointed to Python 2.7, even though my main code was running with Python 3.6, so it still created *.pyc files for the 2.7 version.
I can say that this error is probably the result of mixing 2.7 and 3+ versions, which is why a cleanup (in whatever way fits your setup) will help here.
Don't forget to adjust the Python 2.x code for Python 3 as well.
So I had the same error: ImportError bad magic number. This was on Windows 10.
The error was because I had installed mysql-connector.
So I had to:
pip uninstall mysql-connector
pip uninstall mysql-connector-python
pip install mysql-connector-python
Don't delete them!!! At least not until you find a version in your git, svn, or backup folder that works.
Then delete them and recover all the .pyc files from that version.
That worked for me.

Using f2py on a Fortran code linked to PETSc

My question is related to this post:
Including a compiled module in module that is wrapped with f2py (Minimum working example)?
in which the poster was trying to compile a Fortran code (Test.f90) with f2py and link that to a pre-compiled library (or in my case, object, myex44f.o). The answer enabled me to compile the Fortran code and generated the python module.
My problem is different from the above poster's problem in that my object is linked to PETSc. When I try to import my f2py-generated library into Python, I get the error that it cannot locate 'VecDestroy', a PETSc subroutine. My most recent attempt was:
f2py -c --fcompiler=gfortran -I. myex44f.o ../../../Codes/third_party/petsc/include/petsc/finclude/petscdef.h -m test Test.f90
Here is the code Test.f90:
subroutine test
USE petsctest
call mainsub
end subroutine test
which calls mainsub from the module petsctest:
module petsctest ! Solves the linear system J x = f
#include <petsc/finclude/petscdef.h>
contains
subroutine mainsub
use petscksp; use petscdm
Vec x,f
Mat J
DM da
KSP ksp
PetscErrorCode ierr
call PetscInitialize(PETSC_NULL_CHARACTER,ierr)
call DMDACreate1d(MPI_COMM_WORLD,DM_BOUNDARY_NONE,8,1,1, &
& PETSC_NULL_INTEGER,da,ierr)
call DMCreateGlobalVector(da,x,ierr)
call VecDuplicate(x,f,ierr)
call DMSetMatType(da,MATAIJ,ierr)
call DMCreateMatrix(da,J,ierr)
call ComputeRHS(da,f,ierr)
call ComputeMatrix(da,J,ierr)
call KSPCreate(MPI_COMM_WORLD,ksp,ierr)
call KSPSetOperators(ksp,J,J,ierr)
call KSPSetFromOptions(ksp,ierr)
call KSPSolve(ksp,f,x,ierr)
call MatDestroy(J,ierr)
call VecDestroy(x,ierr)
call VecDestroy(f,ierr)
call KSPDestroy(ksp,ierr)
call DMDestroy(da,ierr)
call PetscFinalize(ierr)
end
The error that I get is:
>>> import test
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
ImportError: ./test.so: undefined symbol: vecdestroy_
Does anyone have any suggestions? Thank you very much for any help you can provide me.
UPDATE:
I generated the original myex44f.o object using the makefile provided with the PETSc examples. Looking at the link line, I reasoned that I might need to link the petsc library when compiling with f2py. My current attempt is:
f2py -c --fcompiler=gfortran -m test Test.f90 -I. myex44f.o -I/home/costoich/Documents/AFPWork/Codes/third_party/petsc/include -I/home/costoich/Documents/AFPWork/Codes/third_party/petsc/arch-linux2-c-debug/include -L/home/costoich/Documents/AFPWork/Codes/third_party/petsc/arch-linux2-c-debug/lib -lpetsc
This seems to be linking correctly during the compile steps (if I just write -lpetsc without the path the compiler fails). However, when I type ldd test.so, I get:
linux-vdso.so.1 => (0x00007ffe09886000)
libpetsc.so.3.7 => not found
libgfortran.so.3 => /usr/lib/x86_64-linux-gnu/libgfortran.so.3 (0x00007fc315be5000)
libc.so.6 => /lib/x86_64-linux-gnu/libc.so.6 (0x00007fc31581b000)
libquadmath.so.0 => /usr/lib/x86_64-linux-gnu/libquadmath.so.0 (0x00007fc3155dc000)
libm.so.6 => /lib/x86_64-linux-gnu/libm.so.6 (0x00007fc3152d3000)
libgcc_s.so.1 => /lib/x86_64-linux-gnu/libgcc_s.so.1 (0x00007fc3150bc000)
/lib64/ld-linux-x86-64.so.2 (0x000055a3fad27000)
Do I need to use the linker flag -Wl,-rpath? f2py seems not to understand it. Thank you for any comments.
RESOLVED
I found my issue. I can't get f2py to accept the -Wl,-rpath options, but if I define the environment variable LD_LIBRARY_PATH=/home/costoich/Documents/AFPWork/Codes/third_party/petsc/arch-linux2-c-debug/lib everything works out. Thank you for your help.
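For reference, the effect of that variable can be checked from Python by running ldd on the f2py module with the PETSc lib directory (path taken from the question) in the environment; a small sketch:
import os
import subprocess

env = dict(os.environ)
env["LD_LIBRARY_PATH"] = "/home/costoich/Documents/AFPWork/Codes/third_party/petsc/arch-linux2-c-debug/lib"
# libpetsc.so.3.7 should now resolve instead of showing "not found"
print(subprocess.check_output(["ldd", "test.so"], env=env).decode())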
@VladimirF has a point.
It looks like VecDestroy is not in the PETSc module you are using.
It seems to me the following parts of PETSc are required in your module:
#include <petsc/finclude/petscsysdef.h>
#include <petsc/finclude/petscvecdef.h>
! Optional
#include <petsc/finclude/petscdef.h>
#include <petsc/finclude/petscdm.h>
#include <petsc/finclude/petscvec.h>
#include <petsc/finclude/petscvec.h90>
#include <petsc/finclude/petscmat.h>
#include <petsc/finclude/petscmat.h90>
! might be not completed
! Or
use petscksp
use petscdm
use petscvec
use petscmat
!might be not completed
How to use PETSc with Fortran is discussed here; personally, I go for option 2 on that page. Most of the existing PETSc examples follow option 2 as well.
Let me clarify that I am not encouraging you to use include over use; that is just the way I am used to doing it. The PETSc documentation has an example that uses Fortran modules, for example here. So you can choose either of these methods, or optionally have both (note the preprocessor option in that example, PETSC_USE_FORTRAN_MODULES), but you still need to add the required modules depending on what you are using.
