How to reduce the size of cross-compiled shared libraries? - Python

I am working on installing Python 3.6 along with ZMQ on an ARM-based processor that has around 32 MB of free space on the flash.
I have built Python 3.6 and removed the unwanted libraries; the resulting installation package is about 15 MB and runs fine for sample programs.
I need to install zmq to run my application, so I cross-compiled pyzmq for ARM as per the link below:
https://github.com/zeromq/pyzmq/wiki/Cross-compiling-PyZMQ-for-Android
(this link is for Android, but I made modifications as per my setup)
As expected, I got the below list of libraries compiled for ARM:
2.6M constants.cpython-36m-x86_64-linux-gnu.so
3.0M context.cpython-36m-x86_64-linux-gnu.so
3.0M _device.cpython-36m-x86_64-linux-gnu.so
3.0M error.cpython-36m-x86_64-linux-gnu.so
3.1M message.cpython-36m-x86_64-linux-gnu.so
3.1M _poll.cpython-36m-x86_64-linux-gnu.so
3.1M socket.cpython-36m-x86_64-linux-gnu.so
3.0M utils.cpython-36m-x86_64-linux-gnu.so
3.0M _version.cpython-36m-x86_64-linux-gnu.so
I need help with two problems here.
The size of each library was around 20 MB before stripping. I was able to reduce them to about 3 MB each, but I need to reduce them further to fit on the flash. I have seen these libraries on other boards at around 50 KB each, so I believe there is a way to reduce their size. Can anyone please tell me how to do this?
The files are not named for ARM. This is not a major problem for me, as I can rename them manually, but I would like to know whether I can change the names during the build process.
When I run the file command on these libraries, I can see they are built for ARM:
constants.cpython-36m-x86_64-linux-gnu.so: ELF 32-bit LSB shared
object, ARM, EABI5 version 1 (SYSV), dynamically linked, stripped
Below is the setup.cfg file I used for building pyzmq:
[global]
# the prefix with which libzmq was configured / installed
zmq_prefix = /home/sagar/zmq/_install
have_sys_un_h = False
[build_ext]
libraries = python3.6
library_dirs = /home/sagar/python_source/arm_install_with_zmq/lib
include_dirs = /usr/include/python3.6m/
plat-name = linux-armv
[bdist_egg]
plat-name = linux-armv
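For completeness, the build and strip commands I am using look roughly like this (a sketch of my setup only: the arm-linux-gnueabihf- prefix stands in for my actual cross toolchain, and the -Os/--gc-sections flags are ones I am still experimenting with, not a verified recipe):
$ CC=arm-linux-gnueabihf-gcc \
  CFLAGS="-Os -ffunction-sections -fdata-sections" \
  LDFLAGS="-Wl,--gc-sections" \
  python3.6 setup.py build_ext
$ arm-linux-gnueabihf-strip --strip-unneeded --remove-section=.comment --remove-section=.note *.so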
Thanks in advance.

Related

Using CMake FindPython() with "Development" component when cross-compiling

I have a CMake toolchain file containing the following
set(CMAKE_SYSTEM_NAME Linux)
set(CMAKE_SYSTEM_PROCESSOR arm)
set(target_triplet "arm-linux-gnueabihf")
set(target_root /srv/chroot/raspbian)
set(CMAKE_C_COMPILER ${target_triplet}-gcc CACHE FILEPATH "C compiler")
set(CMAKE_CXX_COMPILER ${target_triplet}-g++ CACHE FILEPATH "C++ compiler")
set(CMAKE_SYSROOT ${target_root})
set(CMAKE_LIBRARY_ARCHITECTURE ${target_triplet})
# Look for the headers and libraries in the target system.
set(CMAKE_FIND_ROOT_PATH ${target_root})
# Setting the root path is not enough to make pkg-config work.
set(ENV{PKG_CONFIG_DIR} "")
set(ENV{PKG_CONFIG_LIBDIR} "${CMAKE_FIND_ROOT_PATH}/usr/lib/${target_triplet}/pkgconfig")
set(ENV{PKG_CONFIG_SYSROOT_DIR} ${CMAKE_FIND_ROOT_PATH})
# Don't look for programs in the root path (these are ARM programs, they won't
# run on the build machine)
set(CMAKE_FIND_ROOT_PATH_MODE_PROGRAM NEVER)
# Only look for libraries, headers and packages in the sysroot, don't look on
# the build machine
set(CMAKE_FIND_ROOT_PATH_MODE_LIBRARY ONLY)
set(CMAKE_FIND_ROOT_PATH_MODE_INCLUDE ONLY)
set(CMAKE_FIND_ROOT_PATH_MODE_PACKAGE ONLY)
This relies on having a working Raspbian installation under /srv/chroot/raspbian and is supposed to make it possible to easily use its system libraries. This works fine for "simple" libraries after setting PKG_CONFIG_XXX like above, but fails for
find_package(Python3 COMPONENTS Development.Module REQUIRED)
with the following errors:
CMake Error at /usr/share/cmake-3.24/Modules/FindPackageHandleStandardArgs.cmake:230 (message):
Could NOT find Python3 (missing: Python3_INCLUDE_DIRS Development.Module)
Call Stack (most recent call first):
/usr/share/cmake-3.24/Modules/FindPackageHandleStandardArgs.cmake:594 (_FPHSA_FAILURE_MESSAGE)
/usr/share/cmake-3.24/Modules/FindPython/Support.cmake:3217 (find_package_handle_standard_args)
/usr/share/cmake-3.24/Modules/FindPython3.cmake:490 (include)
Python3API/CMakeLists.txt:9 (find_package)
I'm a bit lost in the 3421 lines of the FindPython/Support.cmake module, so I don't understand why it doesn't find the headers; unfortunately, the error is not very helpful, and there doesn't seem to be any way to turn on debugging for this code. It seems it doesn't search inside the chroot containing the target system at all: it's supposed to use ${CMAKE_LIBRARY_ARCHITECTURE}-python-config if it's available, and a file with this name does exist in ${target_root}/usr/bin, but somehow it doesn't seem to be found. I've tried setting CMAKE_FIND_ROOT_PATH_MODE_PROGRAM to ONLY, but it doesn't seem to work.
Is it possible to make this work without manually setting Python3_INCLUDE_DIRS and all the other variables? Please note that I really want to use the target root and not install the packages on the host system, as they are not available for it in the versions old enough to ensure compatibility with the system being targeted.
Thanks in advance for any suggestions!
Actually, doing
set(CMAKE_FIND_ROOT_PATH_MODE_PROGRAM ONLY)
find_package(Python3 COMPONENTS Development.Module REQUIRED)
set(CMAKE_FIND_ROOT_PATH_MODE_PROGRAM NEVER)
does work for Python 3. Unfortunately, if you still need to support Python 2 too, it doesn't work there, because python2.7-config doesn't behave correctly when invoked from outside the sysroot and returns wrong paths with a duplicated sysroot path in them. I couldn't find any workaround for this and ended up hardcoding the Python 2 include directories.
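For illustration, the hardcoding could look something like this (a sketch only: the Python2_INCLUDE_DIR/Python2_LIBRARY names follow FindPython2's artifact-specification variables, and the python2.7 paths assume the Raspbian sysroot layout from the toolchain file above):
# Sketch: point FindPython2 at the artifacts inside the sysroot instead of
# letting python2.7-config compute them; adjust the paths to your image.
set(Python2_INCLUDE_DIR "${CMAKE_SYSROOT}/usr/include/python2.7")
set(Python2_LIBRARY "${CMAKE_SYSROOT}/usr/lib/${CMAKE_LIBRARY_ARCHITECTURE}/libpython2.7.so")
find_package(Python2 COMPONENTS Development REQUIRED)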

filter in scapy function sniff() says libpcap is not available

I was playing around with the Scapy sniff function and wanted to add a filter to the parameters. So I added this filter:
pkt = sniff(count=1, filter='arp')
and the output I receive is:
WARNING: Cannot set filter: libpcap is not available. Cannot compile filter !
I still get a packet that was sniffed, but for some reason the filter is not working.
I am running macOS Big Sur. I have libpcap installed using Homebrew, and I have tcpdump installed using Homebrew as well.
I also saw online that you could manually initialize pcap in Scapy using
conf.use_pcap = True
However when I type this in I get:
WARNING: No libpcap provider available ! pcap won't be used
I'm sure it is just a small fix, but I can't seem to figure out what I am doing wrong. If anyone can help, that would be amazing!
Older versions of Python 3 assume that, on macOS, all shared libraries are in files located in one of a number of directories.
That is not the case in Big Sur; instead, a cache file is generated for system shared libraries, and at least some of the libraries from which the cache file is generated are not shipped with the OS.
This is one of the issues in CPython issue 41100, "Support macOS 11 and Apple Silicon Macs"; the fix is to look in the shared library cache as well as in the file system.
That issue says:
Thank you to everyone who contributed to this major undertaking! A particular thank you to Lawrence for doing much of the initial work and paving the way. Now that 3.8 also supports Big Sur and Apple Silicon Macs as of the imminent 3.8.10 release, it's time to close this issue. If new concerns arise, please open or use other issues.
So a sufficiently recent version of Python should fix this issue.
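As a quick check of whether your interpreter has that fix, you can try resolving a system library that, on Big Sur, lives only in the dyld shared cache (a minimal sketch; "System" is used here just because it is guaranteed to be present):
import os
from ctypes.util import find_library

# On Big Sur the dylib is usually not present as a file on disk:
print(os.path.exists("/usr/lib/libSystem.B.dylib"))   # typically False
# A Python that knows about the dyld shared cache still resolves it here;
# an older Python prints None, and Scapy's own find_library("pcap") lookup
# fails for the same reason.
print(find_library("System"))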
tldr:
$ brew install libpcap
$ ln -s /usr/local/opt/libpcap/lib/libpcap.a /usr/local/lib/libpcap.a
$ ln -s /usr/local/opt/libpcap/lib/libpcap.dylib /usr/local/lib/libpcap.dylib
Explanation (applicable to Python 3.9.1 and Scapy 2.4.5 on Big Sur, with libpcap installed by brew):
When you debug the Scapy sniff function, after a while you get to scapy.libs.winpcapy, line 36:
_lib_name = find_library("pcap")
find_library is located in ctypes.util; for POSIX it starts on line 72. On line 73 you can see that the library is expected as one of these filenames: ['libpcap.dylib', 'pcap.dylib', 'pcap.framework/pcap'], which are fed to dyld_find.
dyld_find is located in ctypes.macholib.dyld on line 121. If you iterate through the chain on line 125 yourself, you find out that dyld_find is trying to succeed with one of these paths:
/usr/local/lib/
/Users/<user>/lib/
/usr/local/lib/
/lib/
/usr/lib/
In my case none of them contained the libpcap library, which brew installs in a different location.
The library sits in /usr/local/opt/libpcap/lib/.
And there you go: you just need to get the file libpcap.dylib (nothing wrong with libpcap.a either) into one of the paths searched by dyld_find. The two soft links above are one of several possible solutions.
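To confirm the links worked before re-running Scapy, you can do the same lookup Scapy performs (a minimal check; the printed path depends on where Homebrew installed libpcap):
from ctypes.util import find_library

# Prints the resolved path once libpcap.dylib is reachable from one of the
# directories dyld_find searches; None means Scapy will keep warning that
# libpcap is not available.
print(find_library("pcap"))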

Distribute an embed-Cython-compiled .exe and run it on another machine without Python

I have a small PyQt5 project written in Python. I generated a .cpp file using cython --embed, compiled it with MSVC, and it works on my machine with no problem, but I want to distribute the .exe to target machines that have no Python installed. I am pretty confused about the PyQt import, as I get the error "initialization of QtCore failed without raising an exception". I tried various things and put in a whole day's effort; briefly, I put the necessary files in the .exe location and removed python36._pth. The files are as follows:
.\python36.dll
.\python36.zip (from python-3.6.5-embed-amd64.zip)
.\PyQt5 (copied from Anaconda3\Lib\site-packages)
.\platforms (required plugins for windows)
I guess it also requires sip, but I could not figure out an elegant way to add PyQt5, as there is no documentation about distributing embedded Python with imported modules (site-packages).
Any help would be great.
I solved it by adding .\sip.pyd. Then I checked the resulting folder size; it was around 20 MB. With PyInstaller, the resulting .exe is about 43 MB.
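Putting it together, the folder shipped next to the executable can look roughly like this (the same items listed in the question plus sip.pyd; myapp.exe is just a placeholder name for the compiled binary):
.\myapp.exe
.\python36.dll
.\python36.zip
.\sip.pyd
.\PyQt5
.\platforms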

Trying to import cjson in a Maya 2018 Python environment

I'd like to use cjson in Maya 2018, but I'm running into some trouble. The original package seems to be compiled for 32-bit (https://pypi.org/project/python-cjson/), so I looked around and found a 64-bit build at https://www.lfd.uci.edu/~gohlke/pythonlibs/.
Using "C:\Program Files\Autodesk\Maya2018\Python\Scripts\pip.exe" in admin mode, I managed to pip install the 64-bit wheel file, and that gave me cjson.pyd in my Maya 2018 site-packages directory.
However, when I try to import this I get the error:
// Error: root : [('<maya console>', 2, '<module>', None)] //
// Error: root : DLL load failed: The specified module could not be found. //
// Error: //
Has anyone had any experience importing this module into Maya before?
I have a feeling this could be to do with DLL dependencies, but I'm not sure where to begin solving it.
So, traditionally you'd check your DLL for dependencies using something like Dependency Walker. .pyd files are just renamed .dll files, so they should work fine with it. However, Dependency Walker output is mostly noise - unless you spot something really obvious (and you can ignore all the API-MS-WIN* and EXT-MS-* DLLs straight away), you might find it easier to enable Loader Snaps (see this old but still relevant MSDN article), which should hopefully tell you which LoadLibrary call is failing. Alternatively, you can use Process Monitor (procmon) from Sysinternals to see what files Maya attempts to load when you import the module in a script. I can't remember whether Maya spawns a separate process to run its Python scripts in, or whether they're executed in the context of the main Maya process. This matters because you'll want to filter events in procmon to just Maya / the Python executable, as otherwise you'll be swamped by them.
Edit: Having looked through the source code, there doesn't appear to be much in the way of dependencies beyond the Python headers themselves and some standard library includes. I suspect it's more likely that it was compiled with a different version of the toolchain than Maya's Python. In the Python interpreter, you should see a line like this when you start it:
Python 2.7.14 (v2.7.14:84471935ed, Sep 16 2017, 20:25:58) [MSC v.1500 64 bit (AMD64)] on win32
It would be good to see what toolchain was used to compile it (the part in square brackets), and then build the extension from source again with that toolchain. Alternatively, you can try running the setup.py for this module from the project's GitHub page, which, if your machine is set up to compile Python extensions correctly, will build it from source. More details about building C extensions can be found in the Python docs.
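For example, a quick way to dump that toolchain string from inside Maya's Script Editor (a minimal sketch; the exact strings depend on your Maya and Python builds):
import sys
import platform

# sys.version ends with the compiler tag, e.g. "[MSC v.1500 64 bit (AMD64)]";
# platform.python_compiler() returns just that compiler part.
print(sys.version)
print(platform.python_compiler())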

How to compile OpenCV for iOS7 (arm64)

Compiling the Xcode project fails with the following error:
'missing required architecture arm64 in file /Users/*/Git/ocr/opencv2.framework/opencv2'
It works well if I change Architectures (under Build Settings) to (armv7, armv7s) instead of (armv7, armv7s, arm64).
How can I change the OpenCV Python build script to add arm64 support to opencv2.framework?
The latest OpenCV iOS framework supports 64-bit by default.
It can be downloaded from the OpenCV download page.
I modified the following to make it build, though I don't have an arm64 iOS device to test with at the moment.
Edit: I also had to follow https://stackoverflow.com/a/17025423/1094400
Assuming "opencv" is the folder containing the opencv source from Github:
in each of gzlib.c, gzread.c, gzwrite.c located in opencv/3rdparty/zlib/ add:
#include <unistd.h>
at the top after the existing include.
In addition open opencv/platforms/ios/cmake/Modules/Platform/iOS.cmake and change line 88 from:
set (CMAKE_OSX_ARCHITECTURES "$(ARCHS_STANDARD_32_BIT)" CACHE string "Build architecture for iOS")
to:
set (CMAKE_OSX_ARCHITECTURES "$(ARCHS_STANDARD_INCLUDING_64_BIT)" CACHE string "Build architecture for iOS")
Furthermore, change the build script at opencv/platforms/ios/build_framework.py in lines 99 and 100 from:
targets = ["iPhoneOS", "iPhoneOS", "iPhoneSimulator"]
archs = ["armv7", "armv7s", "i386"]
to:
targets = ["iPhoneOS", "iPhoneOS", "iPhoneOS", "iPhoneSimulator", "iPhoneSimulator"]
archs = ["armv7", "armv7s", "arm64", "i386", "x86_64"]
The resulting library will include the following:
$ xcrun -sdk iphoneos lipo -info opencv2
Architectures in the fat file: opencv2 are: armv7 armv7s i386 x86_64 arm64
I do have a remaining concern regarding opencv/platforms/ios/cmake/Toolchain-iPhoneOS_Xcode.cmake, which defines the size of a data pointer as 4 in lines 14 and 17.
I guess it should be 8 for 64-bit, and since I haven't tested whether the compiled library works on arm64, I would suggest further investigation at this point if it does not run properly.
micahp's answer was almost perfect, but missed the simulator version. So modify platforms/ios/build_framework.py to:
targets = ["iPhoneOS", "iPhoneOS", "iPhoneOS", "iPhoneSimulator", "iPhoneSimulator"]
archs = ["armv7", "armv7s", "arm64", "i386", "x86_64"]
You'll need to download the command line tools for Xcode 5.0.1 and then run
python opencv/platforms/ios/build_framework.py ios
Try waiting until next month. Apple will release a new Xcode with better support for 32/64-bit.
https://developer.apple.com/news/index.php?id=9162013a
Modify "build_frameworks.py" to:
def build_framework(srcroot, dstroot):
"main function to do all the work"
targets = ["iPhoneOS", "iPhoneOS", "iPhoneOS", "iPhoneSimulator"]
archs = ["armv7", "armv7s", "arm64", "i386"]
for i in range(len(targets)):
build_opencv(srcroot, os.path.join(dstroot, "build"), targets[i], archs[i])
put_framework_together(srcroot, dstroot)
@Jan, I followed your instructions, but OpenCV still doesn't run on arm64. You made such a detailed and wonderful answer - why not check it out on a simulator and see if you can make it run? :-)
FWIW, I think it might be harder than it seems. On the openCV stackoverflow clone, there's an indication that this problem might be non-trivial.
Instead of using the terminal commands given in the OpenCV installation guide on the official website, use the following commands. They worked for me.
cd OpenCV-2.3.1
mkdir build
cd build
cmake -G "Unix Makefiles" ..
make
sudo make install
I was having a similar error, but the issue wasn't related to the arm64 compilation. I fixed it by adding the framework libc++.dylib.
