My question is related to this post:
Including a compiled module in module that is wrapped with f2py (Minimum working example)?
in which the poster was trying to compile Fortran code (Test.f90) with f2py and link it to a pre-compiled library (or, in my case, an object, myex44f.o). The answer enabled me to compile the Fortran code and generate the Python module.
My problem differs from the above poster's problem in that my object is linked to PETSc. When I try to import my f2py-generated library into Python, I get an error that it cannot locate 'VecDestroy', a PETSc subroutine. My most recent attempt was:
f2py -c --fcompiler=gfortran -I. myex44f.o ../../../Codes/third_party/petsc/include/petsc/finclude/petscdef.h -m test Test.f90
Here is the code Test.f90:
subroutine test
USE petsctest
call mainsub
end subroutine test
which calls mainsub from the module petsctest:
module petsctest ! Solves the linear system J x = f
#include <petsc/finclude/petscdef.h>
contains
subroutine mainsub
use petscksp; use petscdm
Vec x,f
Mat J
DM da
KSP ksp
PetscErrorCode ierr
call PetscInitialize(PETSC_NULL_CHARACTER,ierr)
call DMDACreate1d(MPI_COMM_WORLD,DM_BOUNDARY_NONE,8,1,1, &
& PETSC_NULL_INTEGER,da,ierr)
call DMCreateGlobalVector(da,x,ierr)
call VecDuplicate(x,f,ierr)
call DMSetMatType(da,MATAIJ,ierr)
call DMCreateMatrix(da,J,ierr)
call ComputeRHS(da,f,ierr)
call ComputeMatrix(da,J,ierr)
call KSPCreate(MPI_COMM_WORLD,ksp,ierr)
call KSPSetOperators(ksp,J,J,ierr)
call KSPSetFromOptions(ksp,ierr)
call KSPSolve(ksp,f,x,ierr)
call MatDestroy(J,ierr)
call VecDestroy(x,ierr)
call VecDestroy(f,ierr)
call KSPDestroy(ksp,ierr)
call DMDestroy(da,ierr)
call PetscFinalize(ierr)
end subroutine mainsub
end module petsctest
The error that I get is:
>>> import test
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
ImportError: ./test.so: undefined symbol: vecdestroy_
Does anyone have any suggestions? Thank you very much for any help you can provide me.
UPDATE:
I generated the original myex44f.o object using the makefile provided with the PETSc examples. Looking at the link line, I reasoned that I might need to link the petsc library when compiling with f2py. My current attempt is:
f2py -c --fcompiler=gfortran -m test Test.f90 -I. myex44f.o -I/home/costoich/Documents/AFPWork/Codes/third_party/petsc/include -I/home/costoich/Documents/AFPWork/Codes/third_party/petsc/arch-linux2-c-debug/include -L/home/costoich/Documents/AFPWork/Codes/third_party/petsc/arch-linux2-c-debug/lib -lpetsc
This seems to be linking correctly during the compile steps (if I just write -lpetsc without the path the compiler fails). However, when I type ldd test.so, I get:
linux-vdso.so.1 => (0x00007ffe09886000)
libpetsc.so.3.7 => not found
libgfortran.so.3 => /usr/lib/x86_64-linux-gnu/libgfortran.so.3 (0x00007fc315be5000)
libc.so.6 => /lib/x86_64-linux-gnu/libc.so.6 (0x00007fc31581b000)
libquadmath.so.0 => /usr/lib/x86_64-linux-gnu/libquadmath.so.0 (0x00007fc3155dc000)
libm.so.6 => /lib/x86_64-linux-gnu/libm.so.6 (0x00007fc3152d3000)
libgcc_s.so.1 => /lib/x86_64-linux-gnu/libgcc_s.so.1 (0x00007fc3150bc000)
/lib64/ld-linux-x86-64.so.2 (0x000055a3fad27000)
Do I need to use the linker flags -Wl,-rpath? f2py does not seem to understand these. Thank you for any comments.
RESOLVED
I found my issue. I can't get f2py to accept the -Wl,-rpath options, but if I define the environment variable LD_LIBRARY_PATH=/home/costoich/Documents/AFPWork/Codes/third_party/petsc/arch-linux2-c-debug/lib everything works out. Thank you for your help.
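For anyone verifying the fix, here is a minimal sanity check (a sketch; it assumes the PETSc lib directory above is on LD_LIBRARY_PATH before Python starts, and that f2py exposes the wrapped Fortran subroutine as test.test):

# Run with the PETSc lib directory on LD_LIBRARY_PATH, e.g.:
#   LD_LIBRARY_PATH=.../petsc/arch-linux2-c-debug/lib python check.py
import ctypes

# Raises OSError if the dynamic loader still cannot find the PETSc library
ctypes.CDLL("libpetsc.so.3.7")

import test   # the f2py-generated module (test.so)
test.test()   # the wrapped Fortran subroutine, which calls mainsub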
@VladimirF has a point.
It looks as if VecDestroy is not in the PETSc module you are using.
It seems to me the following parts of PETSc are required in your module:
#include <petsc/finclude/petscsysdef.h>
#include <petsc/finclude/petscvecdef.h>
! Optional
#include <petsc/finclude/petscdef.h>
#include <petsc/finclude/petscdm.h>
#include <petsc/finclude/petscvec.h>
#include <petsc/finclude/petscvec.h90>
#include <petsc/finclude/petscmat.h>
#include <petsc/finclude/petscmat.h90>
! might not be complete
! Or
use petscksp
use petscdm
use petscvec
use petscmat
! might not be complete
How to use PETSc with Fortran is discussed here; personally I go for option 2 on that page. Most of the existing PETSc examples follow option 2 as well.
Please let me clarify that I am not encouraging you to use include over use; that is just the way I am used to doing it. The PETSc documentation has an example that uses Fortran modules, for example here. So you can choose either of these methods, or optionally have both (note the preprocessor option in that example, PETSC_USE_FORTRAN_MODULES), but you still need to add the required modules depending on what you are using.
Related
Okay, so according to the answer I found here to the question titled "Calling C/C++ from Python?", and also the cppyy documentation website, I made some sample classes in .h and .cpp files and tried to include them in Python. While the .h file gets included easily, when I try to use the cppyy.load_library() function it gives me a runtime error. Can someone please help? I've tried to look for solutions online, but apparently no one has had a similar problem before. This is what I'm running in Jupyter Notebook:
import cppyy
cppyy.include("foo.h")
cppyy.load_library("libfoo")
The final line gives me the following error:
RuntimeError Traceback (most recent call last)
<ipython-input-3-eea6173ad08e> in <module>
----> 1 cppyy.load_library("libfoo")
~\anaconda3\lib\site-packages\cppyy\__init__.py in load_library(name)
219 sc = gSystem.Load(name)
220 if sc == -1:
--> 221 raise RuntimeError('Unable to load library "%s"%s' % (name, err.err))
222 return True
223
RuntimeError: Unable to load library "libfoo"
This is my .h file:
class Foo {
public:
void bar();
};
And here is my .cpp file:
#include "foo.h"
#include <iostream>
void Foo::bar() { std::cout << "Hello" << std::endl; }
I'm using the commands g++ -c -fPIC foo.cpp -o foo.o and g++ -shared -Wl,-soname,libfoo.so -o libfoo.so foo.o to compile my cpp code.
Please can someone help?
The example works just fine for me.
To debug, first make sure to start python in the same directory where libfoo.so is located, or to add the directory where libfoo.so lives to LD_LIBRARY_PATH (for any process to use), or call cppyy.add_library_path() with the path as argument to add it for use by cppyy only. As concerns the name, the .so extension is automatically added as needed, as is the lib part, so either one of foo, libfoo, foo.so, or libfoo.so is fine.
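Putting those pieces together, a minimal sketch of the intended usage (the library directory below is an assumption, only needed when libfoo.so is not in the current directory):

import cppyy

# Only needed when libfoo.so lives outside the current directory and is not
# on LD_LIBRARY_PATH (the path below is hypothetical)
cppyy.add_library_path("/path/to/dir/with/libfoo")

cppyy.include("foo.h")
cppyy.load_library("foo")   # "foo", "libfoo", "foo.so", or "libfoo.so" all work

f = cppyy.gbl.Foo()
f.bar()                     # prints "Hello"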
If that still fails, a reasonable way on Linux (only) to get more information about what could be going wrong is to use ctypes:
$ python
>>> import ctypes
>>> lib = ctypes.CDLL("libfoo.so")
which will show you if there are different problems such as missing symbols or missing dependent libraries (but neither is the case here).
I'm trying to use the Pardiso 6 sparse solver library in Python. The problem is that I can't seem to load the Pardiso shared object (SO). Here's the error that I get when calling
import ctypes
pardiso = ctypes.CDLL(pardiso_so_address)
Traceback (most recent call last):
File "test.py", line 27, in <module>
pardiso = ctypes.CDLL(lib720)
File "/home/amin/anaconda3/envs/idp/lib/python3.7/ctypes/__init__.py", line 364, in __init__
self._handle = _dlopen(self._name, mode)
OSError: ./libpardiso600-GNU720-X86-64.so: undefined symbol: sgetrf_
I'd really appreciate it if someone could shed some light on this.
PS. I already contacted Pardiso developers and they told me that I need to link against optimized BLAS, but I already have MKL installed via conda.
Update 1: I installed mkl via conda, but it didn't help. Strangely, when I added import scipy to the header, the error went away. The same thing happens if I add import mkl. So, for some reason, unless scipy or mkl are manually imported, the .so doesn't know that a LAPACK installation exists. Anyway, now another error is thrown, which I think might be related to the libgfortran library. Here's the error:
Traceback (most recent call last):
File "test.py", line 34, in <module>
pardiso = ctypes.CDLL(lib720)
File "/home/amin/anaconda3/envs/test/lib/python3.7/ctypes/__init__.py", line 364, in __init__
self._handle = _dlopen(self._name, mode)
OSError: ./libpardiso600-GNU720-X86-64.so: undefined symbol: _gfortran_st_close
I double-checked to see if libgfortran is installed, and indeed it is:
(test) PyPardisoProject$ ldconfig -p | grep libgfortran
libgfortran.so.5 (libc6,x86-64) => /lib/x86_64-linux-gnu/libgfortran.so.5
libgfortran.so.4 (libc6,x86-64) => /lib/x86_64-linux-gnu/libgfortran.so.4
I think something similar might be at play, i.e. the library is there but it needs to be triggered (similar to what import scipy seems to have done for liblapack), but I have no idea how to trigger it.
Note: I found an example in C on Pardiso website and tested the .so file against it via
$ gcc pardiso_sym.c -o pardiso_sym -L . -lpardiso600-GNU720-X86-64 -llapack -fopenmp -lgfortran
$ OMP_NUM_THREADS=1 ./pardiso_sym
and it worked with no problem (with the existing libraries on my machine). So, the .so works, it's just that I don't know how to inform it of its dependencies in Python.
Update 2: Here's the output of ldd pardiso_sym:
Scripts$ ldd pardiso_sym
linux-vdso.so.1 (0x00007ffe7e982000)
libpardiso600-GNU720-X86-64.so (0x00007f326802d000)
liblapack.so.3 => /lib/x86_64-linux-gnu/liblapack.so.3 (0x00007f3267976000)
libgfortran.so.4 => /lib/x86_64-linux-gnu/libgfortran.so.4 (0x00007f3267795000)
libgomp.so.1 => /lib/x86_64-linux-gnu/libgomp.so.1 (0x00007f326775b000)
libc.so.6 => /lib/x86_64-linux-gnu/libc.so.6 (0x00007f3267568000)
libpthread.so.0 => /lib/x86_64-linux-gnu/libpthread.so.0 (0x00007f3267545000)
libm.so.6 => /lib/x86_64-linux-gnu/libm.so.6 (0x00007f32673f6000)
/lib64/ld-linux-x86-64.so.2 (0x00007f32685df000)
libblas.so.3 => /lib/x86_64-linux-gnu/libblas.so.3 (0x00007f3267389000)
libgfortran.so.5 => /lib/x86_64-linux-gnu/libgfortran.so.5 (0x00007f32670e9000)
libgcc_s.so.1 => /lib/x86_64-linux-gnu/libgcc_s.so.1 (0x00007f32670cf000)
libquadmath.so.0 => /lib/x86_64-linux-gnu/libquadmath.so.0 (0x00007f3267083000)
libdl.so.2 => /lib/x86_64-linux-gnu/libdl.so.2 (0x00007f326707d000)
So, I added the common paths, i.e. /lib/x86_64-linux-gnu and /lib64, to PATH and ran the Python script again via:
PATH=$PATH:/lib/x86_64-linux-gnu:/lib64 python padiso_script.py
but the same error is thrown. I also tried adding them to LD_LIBRARY_PATH, but that didn't work either.
The Pardiso 6 sparse solver depends on LAPACK functions, at least sgetrf, which computes an LU factorization of a general M-by-N matrix A using partial pivoting with row interchanges.
From what we read, libpardiso600-GNU720-X86-64.so is linked dynamically against a shared LAPACK library. You need to provide a path containing an implementation.
Before launching Python, I would recommend playing with LD_LIBRARY_PATH and including the path to the BLAS/LAPACK library you are using. It can be the Netlib implementation, the ATLAS implementation, or the MKL implementation.
LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/my_path_to_lapack \
python -c "import ctypes; pardiso = ctypes.CDLL(pardiso_so_address)"
If you use conda, you can install MKL with the command
conda install -c anaconda mkl
In this case, the installation may directly solve the problem.
The trick is, rather than adding the location of the dependencies to system paths, you need to explicitly load the dependencies, i.e. lapack, blas, and gfortran, in the Python script prior to loading the Pardiso library. Also, it's essential that you explicitly pass the optional mode=ctypes.RTLD_GLOBAL argument to ctypes.CDLL so that the dependencies are globally accessible and Pardiso can find them.
import ctypes
import ctypes.util
shared_libs = ["lapack", "blas", "omp", "gfortran"]
for lib in shared_libs:
    # Fetch the proper name of the dependency
    libname = ctypes.util.find_library(lib)
    # Load the dependency and make it globally accessible
    ctypes.CDLL(libname, mode=ctypes.RTLD_GLOBAL)

# Finally, load the Pardiso library
pardiso = ctypes.CDLL(pardiso_so_address)
In my experience, if you are inside a conda environment with mkl installed, you only need to list gfortran as a dependency and the rest are automatically loaded and accessible, in which case set shared_libs = ["gfortran"].
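In that case the snippet above reduces to something like this (a sketch under the same assumptions; the Pardiso path is taken from the traceback in the question):

import ctypes
import ctypes.util

# Inside a conda environment with MKL, preloading gfortran globally is
# reportedly enough; the remaining BLAS/LAPACK dependencies resolve on their own.
ctypes.CDLL(ctypes.util.find_library("gfortran"), mode=ctypes.RTLD_GLOBAL)

pardiso_so_address = "./libpardiso600-GNU720-X86-64.so"  # path from the traceback above
pardiso = ctypes.CDLL(pardiso_so_address)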
Pardiso 6 and Intel MKL Pardiso are not compatible, as they have different APIs. You may try to remove MKL from your system's paths, add OpenBLAS, and try to link your example once again.
I've got a C++ program I'm trying to wrap/convert to Cython. It uses a particular library that, for some reason, will not result in a working module for importing. There is a working C++ program, by the way. Here's the setup.py:
ext_modules = [
    Extension(
        name="libnmfpy",
        sources=["interface/nmf_lib.pyx"],
        include_dirs=["../src/", numpy.get_include()],
        libraries=["nmf", "mpi_cxx", "mpi", "m"],
        library_dirs=["../build/Linux/bin.release", "/usr/local/lib/", "/usr/lib"],
        language="c++",
    )
]

setup(
    name='libnmfpy',
    cmdclass={'build_ext': build_ext},
    ext_modules=ext_modules,
)
I should mention it is libnmf that seems to be causing problems. The first build of libnmf would cause this script to generate this error:
/usr/bin/ld: ../build/Linux/bin.release/libnmf.a(nmf.o): relocation R_X86_64_32 against `.rodata' can not be used when making a shared object; recompile with -fPIC
../build/Linux/bin.release/libnmf.a: could not read symbols: Bad value
collect2: error: ld returned 1 exit status
When I rebuild libnmf with -fPIC, setup generates a libnmfpy.so, but when I import it in another script, I get the aforementioned undefined symbol:
Traceback (most recent call last):
File "test.py", line 1, in <module>
import libnmfpy
ImportError: $path/cython/libnmfpy.so: undefined symbol: _ZN4elem6lapack3SVDEiiPdiS1_
If it would help, here's something my search suggested:
nm libnmfpy.so | grep _ZN4elem6lapack3SVDEiiPdiS1_
U _ZN4elem6lapack3SVDEiiPdiS1_
nm ../build/Linux/bin.release/libnmf.a | grep _ZN4elem6lapack3SVDEiiPdiS1_
U _ZN4elem6lapack3SVDEiiPdiS1_
This is what I guess to be the cause of the error. I looked in what I think is the offending library that libnmf is built on:
nm $another_path/lib/libelemental.a | grep _ZN4elem6lapack3SVDEiiPdiS1_
0000000000005290 T _ZN4elem6lapack3SVDEiiPdiS1_
I am not too familiar yet with libraries and linkers, so any help would be appreciated. Thanks
Edit: a little digging made me realize something. Is there a difference between Mac OS X and Linux that I should watch out for? The people I work for who wrote this originally reported no build errors like this.
You should use nm -C to demangle your symbols. It also looks like you are mixing static and shared libraries, which is generally not a good idea. Also, gcc's linker is a one-pass linker, which means the order of library flags matters. You want to list the libraries in reverse dependency order. In other words, if a depends on b, then a must appear before b in the linker flags.
Well, I can't try to recreate your setup and then work out and test a solution on my setup since I don't have your source, but it seems to me that when you recompiled libnmf with -fPIC, it was recompiled with dynamic linking, while before it was statically linked.
If my guess is correct, then you can try either:
compiling libnmf again with -fPIC and -static.
changing your setup.py - add "elemental" to the libraries list - this will make the linker fetch that lib as well.
You should note that approach #1 is usually considered less desirable, but as I said, it could be that it was originally compiled that way anyway. #2, however, could take more work, because if there are other libs that are required you will have to find and add them as well; a sketch of the setup.py change is below.
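A sketch of what #2 would look like, based on the setup.py from the question (the only change is adding "elemental", the library that defines elem::lapack::SVD, after "nmf"; the imports and paths are assumptions that may need adjusting):

import numpy
from distutils.core import setup
from distutils.extension import Extension
from Cython.Distutils import build_ext

ext_modules = [
    Extension(
        name="libnmfpy",
        sources=["interface/nmf_lib.pyx"],
        include_dirs=["../src/", numpy.get_include()],
        # "elemental" is listed after "nmf" so the one-pass linker can resolve
        # nmf's undefined symbols (such as elem::lapack::SVD) from it
        libraries=["nmf", "elemental", "mpi_cxx", "mpi", "m"],
        library_dirs=["../build/Linux/bin.release", "/usr/local/lib/", "/usr/lib"],
        language="c++",
    )
]

setup(
    name="libnmfpy",
    cmdclass={"build_ext": build_ext},
    ext_modules=ext_modules,
)

If libelemental does not live in one of those directories, its location also needs to be appended to library_dirs.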
Trying to get a MIDI interface to work with pygame on Ubuntu 12.04. I know the keyboard works, because it can control vkeybd and works with PyGame on OS X, so the issue is with MIDI in Python.
$ python -m pygame.examples.midi --list
Traceback (most recent call last):
File "/usr/lib/python2.7/runpy.py", line 162, in _run_module_as_main
"__main__", fname, loader, pkg_name)
File "/usr/lib/python2.7/runpy.py", line 72, in _run_code
exec code in run_globals
File "/usr/lib/python2.7/dist-packages/pygame/examples/midi.py", line 820, in <module>
print_device_info()
File "/usr/lib/python2.7/dist-packages/pygame/examples/midi.py", line 25, in print_device_info
pygame.midi.init()
File "/usr/lib/python2.7/dist-packages/pygame/midi.py", line 71, in init
import pygame.pypm
ImportError: /usr/lib/libportmidi.so.0: undefined symbol: snd_seq_event_input_pending
python-pygame was installed through the package manager, as was python-pm.
Any ideas? :)
Although this won't exactly answer your question, it may help you debug the problem yourself.
The error is this:
ImportError: /usr/lib/libportmidi.so.0: undefined symbol: snd_seq_event_input_pending
The undefined symbol is a failure of the dynamic linker to find the code required for the snd_seq_event_input_pending function.
On an example 32-bit Oneiric system, we can do the following to look at some symbols of libportmidi.so.0:
nm -DC /usr/lib/libportmidi.so.0 | grep snd_seq_event_input_pending
U snd_seq_event_input_pending
This tells us that the libportmidi library requires the code for snd_seq_event_input_pending but the symbol is undefined. So for libportmidi to function it must also load an additional library which contains this function.
On Oneiric, I've found that this symbol is defined in libasound.so.2.
nm -DC /usr/lib/i386-linux-gnu/libasound.so.2 | grep snd_seq_event_input_pending
000a0fa0 T snd_seq_event_input_pending
The T indicates that the function exists and is in the text (code) segment.
Usually, linking of associated libraries occurs automatically, as libasound.so.2 should be referenced by libportmidi. On the same system:
ldd /usr/lib/libportmidi.so.0
....
libasound.so.2 => /usr/lib/i386-linux-gnu/libasound.so.2 (0x00e35000)
which shows that libportmidi depends on libasound. In the ldd output list in your comments there is no reference to libasound, so libasound.so.2 will not be automatically loaded when libportmidi is loaded, resulting in your error.
There are a few reasons why there may be an error:
The way libportmidi is linked may have changed from Oneiric to Precise, e.g. libportmidi may attempt to find its own dependencies for libasound (unlikely).
There is a bug in the packaging of libportmidi where it doesn't reference libasound.so.2 as it should. This may be platform specific (e.g. only an error on 64-bit systems).
I'd suggest that you try to find out the library on your system that contains the snd_seq_event_input_pending function and then work backwards to try and determine why it has not been linked with libportmidi.
The following bash command will help you find the libraries implementing snd_seq_event_input_pending. If you don't find anything, there's a problem with the libraries installed on your machine.
find /lib /usr/lib -name "lib*.so.*" | while read f; do
    if nm -DC "$f" | grep -q 'T snd_seq_event_input_pending'; then
        echo "$f"
    fi
done
I have exactly the same problem (on Ubuntu 12.04.1), using e.g. the MIDI playback tool in Frescobaldi (which is a Python application). This used to work fine, but doesn't anymore.
This is quite obviously a miscompiled portmidi package which was pushed out on 2013-01-25, see https://launchpad.net/ubuntu/+source/portmidi/1:200-0ubuntu1.12.04.1. Downgrading to the previous 1:200-0ubuntu1 package solved the issue for me.
I guess that the proper course of action would be to file a bug report against the 1:200-0ubuntu1.12.04.1 version on Launchpad at https://bugs.launchpad.net/ubuntu/+source/portmidi/+bugs. If it doesn't get fixed, we might also ask falkTX if he would be willing to provide a working package in his KXStudio PPAs instead.
Just for the record, here's what ldd gives for the 1:200-0ubuntu1 libportmidi on my system:
linux-vdso.so.1 => (0x00007fffe9bff000)
libasound.so.2 => /usr/lib/x86_64-linux-gnu/libasound.so.2 (0x00007f26264cb000)
libpthread.so.0 => /lib/x86_64-linux-gnu/libpthread.so.0 (0x00007f26262ae000)
libporttime.so.0 => /usr/lib/libporttime.so.0 (0x00007f26260ab000)
libc.so.6 => /lib/x86_64-linux-gnu/libc.so.6 (0x00007f2625cec000)
libm.so.6 => /lib/x86_64-linux-gnu/libm.so.6 (0x00007f26259f0000)
libdl.so.2 => /lib/x86_64-linux-gnu/libdl.so.2 (0x00007f26257eb000)
librt.so.1 => /lib/x86_64-linux-gnu/librt.so.1 (0x00007f26255e3000)
/lib64/ld-linux-x86-64.so.2 (0x00007f26269f4000)
And the broken 1:200-0ubuntu1.12.04.1 version:
linux-vdso.so.1 => (0x00007fff9e3ff000)
libporttime.so.0 => /usr/lib/libporttime.so.0 (0x00007fb84ac71000)
libc.so.6 => /lib/x86_64-linux-gnu/libc.so.6 (0x00007fb84a8b2000)
libpthread.so.0 => /lib/x86_64-linux-gnu/libpthread.so.0 (0x00007fb84a694000)
/lib64/ld-linux-x86-64.so.2 (0x00007fb84b0af000)
So any application which doesn't happen to link in libasound2 by itself will be hosed. Specifically, that seems to be the case for the Python portmidi module. (This kind of error is also aggravated by the fact that, at least from Ubuntu 12.04 onwards, gcc uses the --as-needed linker flag by default. I bet that there are still quite a few packages in the Ubuntu repos which are broken because of that.)
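Until a repaired package is installed, one possible stopgap from the Python side is to preload ALSA yourself before importing pygame.midi (a sketch, assuming that loading libasound.so.2 with global symbol visibility lets the broken libportmidi resolve snd_seq_event_input_pending):

import ctypes

# Preload ALSA with its symbols visible globally, so that libportmidi.so.0
# (which is missing its libasound.so.2 dependency) can still resolve
# snd_seq_event_input_pending when pygame.pypm is loaded.
ctypes.CDLL("libasound.so.2", mode=ctypes.RTLD_GLOBAL)

import pygame.midi
pygame.midi.init()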
If you want to fix it now, you can check out the latest version of portmidi and build the library as follows (assuming you've checked out or unpacked portmidi into a dir called portmidi):
cd portmidi
make -f pm_linux/Makefile
The default install doesn't build a dynamic version of the library, so you need to build one like this:
gcc -shared -Wl,-soname,libportmidi.so.0 -o pm_linux/libportmidi.so.0 pm_common/pmutil.o pm_linux/pmlinuxalsa.o pm_linux/pmlinux.o pm_common/portmidi.o -lasound
Then you can make a copy of the old library (just in case), and then copy this new one in its place:
sudo cp /usr/lib/libportmidi.so.0 /usr/lib/libportmidi.so.0.orig
sudo cp pm_linux/libportmidi.so.0 /usr/lib/libportmidi.so.0
Your apps should now work...
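To verify, the device listing from the question should now work; a minimal check (a sketch) is:

import pygame.midi

pygame.midi.init()
print(pygame.midi.get_count())   # number of MIDI devices found; should be > 0
pygame.midi.quit()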
The crux of my problem is this:
I am developing code on Windows XP in C with MS Visual Studio 10.0, and I need to embed Python to do some plotting, file management, and some other things. I had problems with sys.path finding my pure-Python modules, but I fixed that problem by modifying PYTHONPATH.
Now, my problem is getting Python to find dynamic libraries that are pulled in by some modules. In particular, my goal is to compress a folder into a bzip2 archive of the same name.
From a normal python command prompt, this works just fine:
import tarfile
tar=tarfile.open('Code.tar.bz2','w:bz2')
tar.add('Code',arcname='Code')
tar.close()
But when I call this code from my C code, it gives me this error:
Traceback (most recent call last):
File "<string>", line 4, in <module>
File "D:\My_Documents\Code\ScrollModel\trunk\PythonCode.py", line 20, in Colle
ctFiles
tar=tarfile.open(os.path.join(runPath,'CODE.tar.bz2'),'w:bz2')
File "c:\Python26\lib\tarfile.py", line 1671, in open
return func(name, filemode, fileobj, **kwargs)
File "c:\Python26\lib\tarfile.py", line 1737, in bz2open
raise CompressionError("bz2 module is not available")
tarfile.CompressionError: bz2 module is not available
I have a suspicion the problem is similar to what is described in section 5.6 of Embedded Python, but it is a bit hard to tell. For what it's worth, if I do
Py_Initialize();
PyRun_SimpleString("import ssl\n");
Py_Finalize();
it doesn't work either and I get an ImportError.
Anyone had any problems like this? Am I missing something critical?
Try this, it works on my machine.
Create a simple Windows console application in Visual Studio 2010 (remove the precompiled headers option in the wizard). Replace the generated code with this:
#include <Python.h>

int main(int argc, char *argv[]) {
    Py_Initialize();
    PyRun_SimpleString("import ssl\n"
                       "for f in dir(ssl):\n"
                       "    print f\n");
    Py_Finalize();
    return 0;
}
With PYTHONHOME set to something like c:\Python...
add C:\Python\Include to the include path
add C:\Python\Libs to the library path
add python26.lib to the linker input (adjust with your Python version)
Build. Run from anywhere and you should see a listing of the content of the ssl module.
I also tried with MinGW. Same file, built with this command line:
gcc -Wall -o test.exe embeed.c -I%PYTHONHOME%\Include -L%PYTHONHOME%\libs -lpython26
Hey, I have asked a similar question; my operating system is Linux.
When I compile the C file, the option $(python-config --cflags --ldflags) should be added, as in:
gcc test.c $(python-config --cflags --ldflags) -o test
I think on Windows you may also want to check the python-config options. Hope this helps!
I had a similar problem with a Boost C++ DLL. Any external DLL needs to be in the DLL search path.
In my experience, PYTHONPATH affects Python modules (an import statement in Python will end up in a LoadLibrary call), and build options have nothing to do with it.
When you load a DLL, Windows doesn't care what the process is. In other words, Python follows the same DLL loading rules as Notepad. You can confirm that you are facing a Windows path problem by copying any missing DLL into the same directory as your Python extension, or into a directory on your path.
To find which DLLs are required by any other executable or DLL, simply open the DLL or EXE file with Dependency Walker. There is also a "Profile" menu which will allow you to run your application and watch it search for and load DLLs.
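If you would rather adjust the search path from inside the embedded interpreter than copy DLLs around, something along these lines may work (a sketch; both directories are assumptions based on the c:\Python26 install seen in the traceback):

# Run this (e.g. via PyRun_SimpleString) before importing tarfile or ssl
# in the embedded interpreter.
import os
import sys

# Make sure the extension modules (.pyd) themselves are importable...
sys.path.append(r"c:\Python26\DLLs")
# ...and that Windows can locate any DLLs they depend on.
os.environ["PATH"] = r"c:\Python26\DLLs;" + os.environ.get("PATH", "")

import tarfile   # should no longer raise "bz2 module is not available"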