How to compile OpenCV for iOS7 (arm64) - python

Compiling the Xcode project fails with the following error:
'missing required architecture arm64 in file /Users/*/Git/ocr/opencv2.framework/opencv2'
It works fine if I change Architectures (under Build Settings) to (armv7, armv7s) instead of (armv7, armv7s, arm64).
How do I change the OpenCV Python build script to add arm64 support to opencv2.framework?

The latest OpenCV iOS framework supports 64-bit by default.
It can be downloaded from the OpenCV download page.

I modified the following to make it build, though I haven't got an arm64 iOS device to test at the moment.
Edit: I also had to follow https://stackoverflow.com/a/17025423/1094400
Assuming "opencv" is the folder containing the opencv source from Github:
in each of gzlib.c, gzread.c, gzwrite.c located in opencv/3rdparty/zlib/ add:
#include <unistd.h>
at the top after the existing include.
In addition open opencv/platforms/ios/cmake/Modules/Platform/iOS.cmake and change line 88 from:
set (CMAKE_OSX_ARCHITECTURES "$(ARCHS_STANDARD_32_BIT)" CACHE string "Build architecture for iOS")
to:
set (CMAKE_OSX_ARCHITECTURES "$(ARCHS_STANDARD_INCLUDING_64_BIT)" CACHE string "Build architecture for iOS")
Furthermore, change the build script at opencv/platforms/ios/build_framework.py in lines 99 and 100 from:
targets = ["iPhoneOS", "iPhoneOS", "iPhoneSimulator"]
archs = ["armv7", "armv7s", "i386"]
to:
targets = ["iPhoneOS", "iPhoneOS", "iPhoneOS", "iPhoneSimulator", "iPhoneSimulator"]
archs = ["armv7", "armv7s", "arm64", "i386", "x86_64"]
The resulting library will include the following:
$ xcrun -sdk iphoneos lipo -info opencv2
Architectures in the fat file: opencv2 are: armv7 armv7s i386 x86_64 arm64
I have a remaining concern regarding opencv/platforms/ios/cmake/Toolchain-iPhoneOS_Xcode.cmake, which defines the size of a data pointer as 4 in lines 14 and 17.
It should presumably be 8 for 64-bit, so since I haven't tested whether the compiled library actually works on arm64, I would suggest investigating there if it does not run properly.
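If you want to double-check which slices ended up in the built framework, here is a minimal Python sketch using subprocess (assuming the Xcode command line tools are installed so xcrun and lipo are on the PATH; the framework path is hypothetical and should point at your build output):

import subprocess

# Hypothetical location of the built binary inside the framework; adjust to your output directory.
binary = "ios/opencv2.framework/opencv2"
out = subprocess.check_output(["xcrun", "-sdk", "iphoneos", "lipo", "-info", binary])
print(out.decode())  # should list armv7 armv7s arm64 i386 x86_64 after the changes above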

micahp's answer was almost perfect, but missed the simulator version. So modify platforms/ios/build_framework.py to:
targets = ["iPhoneOS", "iPhoneOS", "iPhoneOS", "iPhoneSimulator", "iPhoneSimulator"]
archs = ["armv7", "armv7s", "arm64", "i386", "x86_64"]
You'll need to download the command line tools for Xcode 5.0.1 and then run
python opencv/platforms/ios/build_framework.py ios

Try waiting until next month. A new Xcode with better support for 32/64-bit will be released.
https://developer.apple.com/news/index.php?id=9162013a

Modify "build_frameworks.py" to:
def build_framework(srcroot, dstroot):
    "main function to do all the work"
    targets = ["iPhoneOS", "iPhoneOS", "iPhoneOS", "iPhoneSimulator"]
    archs = ["armv7", "armv7s", "arm64", "i386"]
    for i in range(len(targets)):
        build_opencv(srcroot, os.path.join(dstroot, "build"), targets[i], archs[i])
    put_framework_together(srcroot, dstroot)

@Jan, I followed your instructions, but OpenCV still doesn't run on arm64. You made such a detailed and wonderful answer - why not check it out on a simulator and see if you can make it run? :-)
FWIW, I think it might be harder than it seems. On the OpenCV Stack Overflow clone, there's an indication that this problem might be non-trivial.

Instead of using the terminal commands given in the OpenCV installation guide on the official website, use the following commands. This worked for me.
cd OpenCV-2.3.1
mkdir build
cd build
cmake -G "Unix Makefiles" ..
make
sudo make install
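Once sudo make install has finished, a quick sanity check from Python (a minimal sketch, assuming the Python bindings were enabled in the CMake configuration and are on your Python path):

import cv2  # the OpenCV Python bindings installed by make install

# Printing the version confirms the module can be located and loaded.
print(cv2.__version__)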

I was having a similar error, but the issue wasn't related to the arm64 compilation. It was fixed by adding the framework libc++.dylib.

Related

How to install C++ dependencies "from source" for a Python package on Mac OS?

There is a GitHub repo containing Python "bindings" for a C++ library that I am interested in playing with. The README has abundant information about how to install the C++ library on Linux-like machines, but no information about how to do so on macOS.
I have also opened an issue requesting that the README installation instructions include macOS-specific installs in addition to Linux. There hasn't been any activity on that issue.
Here are the two repos:
(Python) https://github.com/asiffer/python3-libspot
(C++) https://github.com/asiffer/libspot
Since the C++ package isn't available for installing via Brew/pip/anaconda, I'm not sure how to get going.
What I've Tried:
I have tried ./configure and make. There is no ./configure file.
To address the lack of ./configure, I read about a tool called autoconf which supposedly generates ./configure for you. I installed it with brew, but am not sure what arguments to pass it. These docs were pretty hard to understand: https://www.gnu.org/software/autoconf/manual/autoconf-2.67/html_node/Making-configure-Scripts.html
Just using make results in the error clang: error: unsupported option '-fopenmp'. That sent me down a whole different rabbit hole which had me adding lines to the Makefile:
CPP = /usr/local/opt/llvm/bin/clang
CPPFLAGS = -I/usr/local/opt/llvm/include -fopenmp
LDFLAGS = -L/usr/local/opt/llvm/lib
omp_hello: omp_hello.c
	$(CPP) $(CPPFLAGS) $^ -o $@ $(LDFLAGS)
That felt dangerous because I have no idea what any of that stuff means. Plus it resulted in a new error: *** missing separator. Stop.
So then I read that's probably due to using "soft" tabs instead of "hard" tabs which can be identified using cat -e -t -v makefile_name. I found the one line where a "hard" tab was missing (the indented line above) and inserted it. This resulted in a new error:
make: *** No rule to make target `omp_hello.c', needed by `omp_hello'. Stop.
Next, following the advice of Yang Yushi and his follow-on comments, I changed lines 39 and 40 according to his answer, plus added the locations of some additional files to the CXXFLAGS variable:
-I//opt/homebrew/Cellar/libomp/11.0.1/include
-L/opt/homebrew/Cellar/libomp/11.0.1/lib
And this got me a little further. Next, OSX didn't like where this script was trying to install, as explained by this answer. So I changed these two lines in the makefile which seemed to dictate install location:
INSTALL_HEAD_DIR = $(DESTDIR)/usr/include/libspot
INSTALL_LIB_DIR = $(DESTDIR)/usr/lib
to
INSTALL_HEAD_DIR = $(DESTDIR)/usr/local/include/libspot
INSTALL_LIB_DIR = $(DESTDIR)/usr/local/lib
And that indeed got me a little farther. Next I ran into an error complaining about the -t flag on these lines in the makefile:
#install -t $(INSTALL_LIB_DIR) $(LIB_DIR)/*.so
#install -t $(INSTALL_HEAD_DIR) $(INC_DIR)/*.h
So I deleted those flags, which then resulted in this error:
Checking the headers installation directory (/usr/local/include/libspot)
Checking the library installation directory (/usr/local/lib)
Installing the shared library (libspot.so)
install: /usr/local/lib: Inappropriate file type or format
I can find no reading material on this error and have no clue how to fix it. Any further assistance is appreciated.
Here's a list of SO and other resources I've perused trying to answer this question:
Enable OpenMP support in clang in Mac OS X (sierra & Mojave)
makefile error: make: *** No rule to make target `omp.h' ; with OpenMP
makefile:4: *** missing separator. Stop
http://www.idryman.org/blog/2016/03/10/autoconf-tutorial-1/
https://www.gnu.org/software/autoconf/manual/autoconf-2.67/html_node/Making-configure-Scripts.html
https://developer.gnome.org/anjuta-build-tutorial/stable/create-autotools.html.en
My Question
How do I proceed?
If you know how to do this, could you also include a brief explanation of the concepts behind each step? I'd be happy to learn a little instead of just copying and pasting commands in the right order.
Compile the C++ source code with Apple Clang
I downloaded the project (libspot) and successfully compiled it on my Mac. I changed two lines (39 and 40) in the Makefile to make it work (following this answer).
CC = clang++ # change from g++ to default Apple clang
CXXFLAGS = -std=c++11 -Wall -pedantic -Xpreprocessor -fopenmp -lomp # additional flags
You should get the binary file by just typing make with a "correct" Makefile.
(If you see something like "can't find omp.h", add -I/usr/local/opt/libomp/include to the CXXFLAGS.)
For the Question
The error message in the updated question description
make: *** No rule to make target `omp_hello.c', needed by `omp_hello'. Stop.
is telling us that the file omp_hello.c is missing. The Makefile is written to compile the source code omp_hello.c to an executable binary file omp_hello. If I have the C source file (omp_hello.c), the Makefile will allow me to compile by just typing
make
instead of
/usr/local/opt/llvm/bin/clang \
-I/usr/local/opt/llvm/include -fopenmp \
-L/usr/local/opt/llvm/lib \
omp_hello.c -o omp_hello
This is just a normal compile process; it has nothing to do with Python. The error message is saying that the source code to be compiled (omp_hello.c) is missing.
It looks like this is a small project with a custom Makefile. Normally you compile the code with just make. The error you got seems to suggest that llvm is missing. You may want to try installing llvm following this answer.
Usually it comes down to running brew install <your C++ package>, or downloading the source code to some directory and running a set of commands:
./configure
make
make install
While this usually works, some packages cannot be installed on a Mac because their maintainers did not prepare a configuration for it.
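If the make / make install route does eventually succeed, one quick way to confirm that Python can actually see the freshly built C++ library is to load it directly with ctypes; a minimal sketch, assuming the shared library ended up at /usr/local/lib/libspot.so as in the modified Makefile above:

import ctypes

# Hypothetical install path taken from the edited INSTALL_LIB_DIR; adjust if yours differs.
lib = ctypes.CDLL("/usr/local/lib/libspot.so")
print(lib)  # loading without an OSError means the library and its OpenMP runtime resolve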

how to reduce the size of cross compiled shared libraries?

I am working on installing Python 3.6 along with zmq on an ARM-based processor which has around 32 MB of free space on the flash.
I have built Python 3.6 and removed the unwanted libraries; the resulting Python installation package is about 15 MB and runs fine for sample programs.
I need to install zmq to run my application. For that, I have cross-compiled pyzmq for ARM as per the link below:
https://github.com/zeromq/pyzmq/wiki/Cross-compiling-PyZMQ-for-Android
(this link is for Android, but I made modifications for my setup)
As expected, I got the following libraries compiled for ARM:
2.6M constants.cpython-36m-x86_64-linux-gnu.so
3.0M context.cpython-36m-x86_64-linux-gnu.so
3.0M _device.cpython-36m-x86_64-linux-gnu.so
3.0M error.cpython-36m-x86_64-linux-gnu.so
3.1M message.cpython-36m-x86_64-linux-gnu.so
3.1M _poll.cpython-36m-x86_64-linux-gnu.so
3.1M socket.cpython-36m-x86_64-linux-gnu.so
3.0M utils.cpython-36m-x86_64-linux-gnu.so
3.0M _version.cpython-36m-x86_64-linux-gnu.so
I need help with two problems here.
The size of each library was around 20 MB before strip. I was able to reduce them to about 3 MB, but I need to shrink them further to fit on the flash. I have seen these libraries on other boards at around 50 KB each, so I believe there is a way to reduce the size of each library. Can anyone please tell me how to do this?
The files are not named for ARM. This is not a major problem for me, as I can rename them manually, but I would like to know whether I can change the naming during the build process.
When I run the file command on these libraries, I can see they are built for ARM:
constants.cpython-36m-x86_64-linux-gnu.so: ELF 32-bit LSB shared object, ARM, EABI5 version 1 (SYSV), dynamically linked, stripped
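Note: the x86_64-linux-gnu part of the filename is simply the build interpreter's extension-module suffix, not a property of the compiled code, which is why file still reports ARM. A minimal sketch to inspect that suffix on the host Python:

import sysconfig

# EXT_SUFFIX is what the build appends to extension modules, e.g. ".cpython-36m-x86_64-linux-gnu.so"
print(sysconfig.get_config_var("EXT_SUFFIX"))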
Below is my setup.cfg file i used for building pyzmq
[global]
# the prefix with which libzmq was configured / installed
zmq_prefix = /home/sagar/zmq/_install
have_sys_un_h = False
[build_ext]
libraries = python3.6
library_dirs = /home/sagar/python_source/arm_install_with_zmq/lib
include_dirs = /usr/include/python3.6m/
plat-name = linux-armv
[bdist_egg]
plat-name = linux-armv
Thanks in advance.

PyOpenCL on Linux Mint: PLATFORM_NOT_FOUND_KHR

I've been trying to get PyOpenCL and PyCUDA running on a Linux Mint machine. I have things installed but the demo scripts fail with the error:
pyopencl.cffi_cl.LogicError: clgetplatformids failed: PLATFORM_NOT_FOUND_KHR
Configuration
$ uname -a && cat /etc/lsb-release && lspci | grep NV
Linux 3.13.0-24-generic #47-Ubuntu SMP Fri May 2 23:30:00 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux
DISTRIB_DESCRIPTION="Linux Mint 17.3 Rosa"
01:00.0 VGA compatible controller: NVIDIA Corporation GK208 [GeForce GT 730] (rev a1)
Relevant installed packages:
libcuda1-352-updates
libcudart5.5:amd64
nvidia-352-updates
nvidia-352-updates-dev
nvidia-cuda-dev
nvidia-cuda-toolkit
nvidia-opencl-icd-352-updates
nvidia-profiler
nvidia-settings
ocl-icd-libopencl1:amd64
ocl-icd-opencl-dev:amd64
opencl-headers
python-pycuda
python-pyopencl
python3-pycuda
python3-pyopencl
Research
This post describes a scenario in which the package-manager-installed OpenCL/CUDA implementation doesn't set up some symlinks correctly. That issue doesn't seem to be present on my system.
There was a version number mismatch between the graphics drivers (were nvidia-340) and the nvidia-opencl package (352). I updated the graphics drivers to nvidia-352-updates-dev but the issue remains.
There is a bug in Arch linux that seems to revolve around the necessary device files not being created. However, I've verified that the /dev/nvidia0 and /dev/nvidiactl exist and have permissions 666, so they should be accessible.
Another Stackoverflow post suggests running the demos as root. I have tried this and the behavior does not change.
Older installation instructions for cuda/opencl say to download drivers directly from the NVIDIA website. I'm not sure this still applies, so I'm holding off on that for now since there seem to be plenty of relevant packages in the repositories.
The same error, but for an ATI card on a different Linux system, was resolved by putting proper files in /usr/lib/OpenCL/vendors. That path isn't used on my system. However, I do have /etc/OpenCL/vendors/nvidia.icd which contains the line libnvidia-opencl.so.1, suggesting my issue is dissimilar.
This error has been observed on OSX, but for unrelated reasons. Similar error messages for PyCUDA also appear to be unrelated.
This error can occur under remote access since the device files are not initialized if X is not loaded. However, I'm testing this in a desktop environment. Furthermore, I've run the manual commands suggested in that thread just to be sure, and they are redundant since the relevant /dev entries already exist.
There is a note here about simply running a command a few times to get around some sort of temporary glitch. That doesn't seem to help.
This post describes how the similar cuInit failed: no device CUDA error was caused by not having the user in the video group. To check, I ran usermod -a -G video $USER, but it did not resolve my issue.
In the past, routine updates have broken CUDA support. I have not taken the time to explore every permutation of package version numbers, and it's possible that downgrading some packages may change the situation. However, without further intuition about the source of the issue, I'm not going to invest time in doing that since I don't know whether it will work.
The most common Google search result for this error, appearing four times on the first page, is a short and unresolved email thread on the PyOpenCL list. Checking the permissions bits for /dev/nvidia0 and /dev/nvidiactl is suggested. On my machine user/group/other all have read and write access to these devices, so I don't think that's the source of the trouble.
I've also tried building and installing PyOpenCL from the latest source, rather than using the version in the repositories. This is failing at an earlier phase, which suggests to me it is not building correctly.
Summary
The issue would appear to be that PyCUDA/PyOpenCL cannot locate the graphics card. There are several known issues that can cause this, but none of them seem to apply here. I'm missing something, and I'm not sure what else to do.
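For reference, the failing call can be reproduced in isolation with a short script; a minimal sketch (pyopencl.get_platforms() wraps clGetPlatformIDs, which is what raises PLATFORM_NOT_FOUND_KHR when no ICD exposes a platform):

import pyopencl as cl

# Raises pyopencl.LogicError (clGetPlatformIDs failed: PLATFORM_NOT_FOUND_KHR) when no platform is visible.
platforms = cl.get_platforms()
for p in platforms:
    print(p.name, [d.name for d in p.get_devices()])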

Building VRPN server with Python 3.4 64-bit on Windows

I'm trying to build a VRPN server with the Python3 flag using Python 3.4 64-bit on Windows 7 64-bit, but there seems to be a problem. I need this for the BlenderVR software.
This is my procedure:
1) I use CMake to create makefiles (I'm using version 3.4.0 but I've also tried different ones). I do it with this command (those flags should be there, but the result seems to be the same without them anyway):
cmake -G"MinGW Makefiles" -HD:\My\BlenderVR\plugins\vrpn
-BD:\My\BlenderVR\plugins\cmake -DVRPN_BUILD_PYTHON=OFF -DVRPN_BUILD_PYTHON_HANDCODED_2X=OFF -DVRPN_BUILD_PYTHON_HANDCODED_3X=ON
I used to add these flags as well, but it seems that it can find Python without them:
-DPYTHON_INCLUDE_DIR=D:\My\BlenderVR\Required\Python3\include
-DPYTHON_LIBRARY=D:\My\BlenderVR\Required\Python3\libs\python34.lib
Python is correctly found and this operation doesn't throw any error.
2) Then I use mingw32-make.exe to build it and I get this error:
[ 90%] Linking CXX shared module vrpn.pyd
D:/My/BlenderVR/Required/Python3/libs/python34.lib: error adding symbols: File format not recognized
collect2.exe: error: ld returned 1 exit status
python\CMakeFiles\vrpn-python.dir\build.make:505: recipe for target 'python/vrpn.pyd' failed
mingw32-make[2]: *** [python/vrpn.pyd] Error 1
CMakeFiles\Makefile2:3247: recipe for target 'python/CMakeFiles/vrpn-python.dir/all' failed
mingw32-make[1]: *** [python/CMakeFiles/vrpn-python.dir/all] Error 2
Makefile:159: recipe for target 'all' failed
mingw32-make: *** [all] Error 2
vrpn.pyd is the crucial thing for my future work.
I figured out that it (probably) needs a libpython34.a file. When I created it and copied it to the Python3/libs folder, the build worked and finished without errors, but the created vrpn.pyd didn't work as it should.
What I need is to get import vrpn to work with this simple test in Python (appending the path where vrpn.pyd was built):
import sys
sys.path.append('D:/My/BlenderVR/plugins/cmake/python')
import vrpn
It lags my whole computer for a while and then reports that Python has stopped working.
I suspect that the problem is in the libpython34.a file that I created by doing this:
gendef python34.dll (in Windows/System32)
dlltool -D python34.dll -d python34.def -l libpython34.a
I don't know how else to get the libpython file. I've tried various versions of CMake and MinGW (like MinGWPy, TDM, w64) with many CMake flags. I was able to make it work using 32-bit Python, but I need the 64-bit version, otherwise it does not work with the BlenderVR environment.
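Since mixing 32-bit and 64-bit pieces is the usual suspect with this kind of crash, a quick check of which interpreter is actually being used (a minimal sketch, nothing specific to VRPN):

import struct
import sys

print(sys.version)
print(struct.calcsize("P") * 8, "bit interpreter")  # expect 64 for the 64-bit Python 3.4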
I know this is a very specific problem and probably kind of confusing at first, but I didn't know how else to put it. I'll be glad for anything that could help. Thank you.
mingwpy should be installed with pip (until it is officially released on PyPI):
pip install -i https://pypi.anaconda.org/carlkl/simple mingwpy
All necessary import files are automatically copied into the python\libs folder.
If python\Scripts is in the PATH, it should work out of the box.
You have to make sure that Blender's Python is equipped with two import files:
D:\My\BlenderVR\Required\Python3\libs\libpython\libpython34.dll.a
D:\My\BlenderVR\Required\Python3\libs\libpython\libmsvcr100.a

Building Blender for breakpoints/Debug in Xcode

TL;DR Following http://wiki.blender.org/index.php/Dev:Doc/Building_Blender/Mac for Xcode, what are the steps that let you add breakpoints/watches and correctly debug the executable on OS X?
My spec
Xcode Version 6.4 (6E35b)
OSX 10.10.4 (14E46)
CMake 3.3.0 GUI build with QT 4.8.6
The long description
I did follow the instructions and set the scheme as suggested for Xcode 5 (leaving Debug as the default), but:
The first time, CMake fails because there is no numpy (release or debug) inside https://svn.blender.org/svnroot/bf-blender/trunk/lib/darwin-9.x.universal/python/lib/python3.4/; from the console output, numpy is also searched for in something like /Users/tyoc213/blender-build/blender/../lib/darwin-9.x.universal/python/lib/python3.4/python3.4/site-packages/numpy, where you can see that it searches in python3.4/python3.4, which is weird.
On the second run it says that it will skip numpy on install(ation). You can see the CMake output here: https://gist.github.com/tyoc213/aea0fb541383dc06981a
So now we can generate the Xcode project; we open the generated Blender project with Xcode, configure the scheme as on the dev wiki, and wait. It fails at the linker step.
The only plausible fix for the Debug scheme is no fix at all: use the Release scheme.
The only "fix" for compile & run is to change the scheme to Release, but even having checked Debug application and set Xcode to attach to the process on start, breakpoints don't work.
The debug scheme
So the problem in the Debug scheme is this: How to build a blender build in Xcode 5? Basically there are references from libbf_intern_cycles.a that are not found: _Controller_actuators_length, _CurveMapping_curves_length, _MeshColorLayer_data_length, _MeshLoopColorLayer_data_length, _MeshPaintMaskLayer_data_length, _MeshPolygonFloatPropertyLayer_data_length, _MeshPolygonIntPropertyLayer_data_length, _MeshPolygonStringPropertyLayer_data_length, _MeshSkinVertexLayer_data_length, _MeshTextureFaceLayer_data_length, _MeshTexturePolyLayer_data_length, _MeshUVLoopLayer_data_length, _MeshVertexFloatPropertyLayer_data_length, _MeshVertexIntPropertyLayer_data_length, _MeshVertexStringPropertyLayer_data_length, _Sensor_controllers_length, _Spline_points_length; these give ld: symbol(s) not found for architecture x86_64.
Any suggestions on the correct setup to debug, set breakpoints, step through the code, inspect threads, and watch variables in Xcode?
The only way I have found so far to be able to set breakpoints in Xcode, step through the code, see locals and so on, is to disable Cycles in CMake.
Maybe I should post this as an error in the build process, at least for CMake; I don't know whether I can build with scons inside Xcode and debug it there.
