I am using Kivy to create an Android app. I need to install the DeepSpeech framework, but in order for DeepSpeech to be installed it is necessary to create a python-for-android recipe.
I created a recipe and built the APK. There were no errors in the build; it created the APK and, as far as I could see in the build folders, DeepSpeech was built. However, after I install the app on the phone and try to run it, it crashes and says there is no module named deepspeech.
Does anyone know what I am doing wrong? I've been stuck on this for a while now and can't seem to get to the bottom of it :/.
from pythonforandroid.recipe import PythonRecipe
from pythonforandroid.toolchain import current_directory, shprint
import sh


class deepspeechRecipe(PythonRecipe):
    version = 'v0.9.2'
    url = 'https://github.com/mozilla/DeepSpeech/archive/{version}.tar.gz'
    depends = ['numpy', 'setuptools']
    call_hostpython_via_targetpython = False
    site_packages_name = 'deepspeech'

    def build_arch(self, arch):
        env = self.get_recipe_env(arch)
        with current_directory(self.get_build_dir(arch.arch)):
            # Build python bindings
            hostpython = sh.Command(self.hostpython_location)
            shprint(hostpython,
                    'setup.py',
                    'build_ext', _env=env)
        # Install python bindings
        super().build_arch(arch)

    def get_recipe_env(self, arch):
        env = super().get_recipe_env(arch)
        numpy_recipe = self.get_recipe('numpy', self.ctx)
        env['CFLAGS'] += ' -I' + numpy_recipe.get_build_dir(arch.arch)
        # env['LDFLAGS'] += ' -L' + sqlite_recipe.get_lib_dir(arch)
        env['LIBS'] = env.get('LIBS', '') + ' -lnumpy'
        return env


recipe = deepspeechRecipe()
Buildozer: 1.4.0
requirements = python3==3.7.14, hostpython3==3.7.14, kivy, kivymd, sqlite3, numpy==1.14.5, deepspeech, apsw
If you need any extra information, I can add it.
I have already tried using TensorFlow to run the model; however, the model gives an array as its output and I don't know the right procedure to transform that into text.
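As an aside on the array-to-text step: DeepSpeech's acoustic model emits per-timestep character scores, and the usual way to turn that into text is a CTC decode. A minimal greedy (best-path) sketch, with a made-up alphabet and blank index purely for illustration:

```python
# Greedy (best-path) CTC decoding sketch: take the highest-scoring symbol at
# each timestep, collapse consecutive repeats, and drop the CTC blank symbol.
# ALPHABET and the blank position are illustrative assumptions, not the real
# DeepSpeech alphabet.
ALPHABET = [' ', 'a', 'b', 'c']
BLANK = len(ALPHABET)  # blank occupies the last index of each score row

def greedy_ctc_decode(scores):
    """scores: list of per-timestep score lists, each len(ALPHABET) + 1 wide."""
    best = [max(range(len(row)), key=row.__getitem__) for row in scores]
    chars, prev = [], None
    for idx in best:
        if idx != prev and idx != BLANK:  # collapse repeats, skip blanks
            chars.append(ALPHABET[idx])
        prev = idx
    return ''.join(chars)
```

(The deepspeech package itself does this for you via Model.stt(); the sketch only illustrates the principle behind turning the array into text.)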
I have already tried other recipes (like opencv) and they all work fine.
Edit:
I found out that when I use the recipe it does run and does build properly, but only the deepspeech_training part, because the setup.py only installs that. To install other parts, like the model class, it is necessary to use another setup.py located in "native_client/python", but that one requires the rest of the folders, so I still need to figure that out.
Edit 2: I was able to build the packages that I wanted (the inference part of DeepSpeech); however, when I run the app it gives the following error.
python : ImportError: dlopen failed: library "libc++_shared.so" not found: needed by /data/user/0/org.test.myapp/files/app/_python_bundle/site-packages/deepspeech/_impl.so in namespace classloader-namespace
python : Python for android ended.
Add pillow to your requirements and check if it works!
requirements = python3==3.7.14, hostpython3==3.7.14, kivy, kivymd, sqlite3, numpy==1.14.5, deepspeech, apsw, pillow
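On the libc++_shared.so error from Edit 2: that file is Android's shared C++ runtime, and python-for-android recipes for C++-backed modules commonly link it explicitly by extending LDFLAGS in get_recipe_env. A hedged sketch of just that env manipulation (the helper name is mine, not part of the p4a API, and this is an illustration of the flag rather than a verified fix for this exact build):

```python
# Sketch of the LDFLAGS tweak a recipe's get_recipe_env() would apply so the
# native extension links against Android's shared C++ runtime.
def with_cxx_shared(env):
    """Return a copy of a recipe env dict with -lc++_shared appended."""
    env = dict(env)
    env['LDFLAGS'] = env.get('LDFLAGS', '') + ' -lc++_shared'
    return env
```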
Related
I am having some trouble installing the USD library on Ubuntu. Here is the tutorial I want to follow.
On GitHub, I cloned the repository, ran the script build_usd.py and changed the env vars. But when I want to run this simple code:
from pxr import Usd, UsdGeom
stage = Usd.Stage.CreateNew('HelloWorld.usda')
xformPrim = UsdGeom.Xform.Define(stage, '/hello')
spherePrim = UsdGeom.Sphere.Define(stage, '/hello/world')
stage.GetRootLayer().Save()
By changing the env vars, either the command "python" can't be found or the module pxr can't be found.
I tried to move the lib and include directories into /usr/lib, /usr/local/lib and /usr/local/include, but pxr still doesn't seem to be found.
I am really confused about how to install a library on Ubuntu so that Python can find it.
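For what it's worth, build_usd.py prints the environment variables it expects when it finishes; a typical setup looks something like this (the /opt/USD prefix is an assumption — use whatever install path you passed to build_usd.py):

```shell
# Assumed install prefix /opt/USD; substitute your actual build_usd.py target.
export PYTHONPATH="$PYTHONPATH:/opt/USD/lib/python"  # lets `from pxr import Usd` resolve
export PATH="$PATH:/opt/USD/bin"                     # usdview, usdcat, etc.
```

Putting these lines in ~/.bashrc (and opening a new shell) is usually more reliable than moving the built libraries into /usr/lib by hand.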
I am trying to run C++ code, wrapped with SWIG into Python 3.8 modules, on a remote computing cluster (where I don't have root access). I wrote the code on my own computer (macOS Monterey), while the cluster runs Ubuntu 20.04. The code makes use of the libraries quadedge, nlopt and gsl. Using the latter two I perform an optimisation with nlopt and an integration with gsl at every step. However, this only works on my own Mac; on the Ubuntu cluster there is some undefined behaviour at runtime, i.e. my rate of change when integrating is zero at around 50% of the integration steps (on the Mac everything works flawlessly).
To install my code as python modules, I use a setup.py. The procedure and module structure I use is currently the following:
install quadedge
install polygeo (own module that depends on quadedge)
install plantdevelopment (own module that depends on polygeo)
Both polygeo and plantdevelopment contain two extensions in one module. My setup.py for polygeo on my Mac looks like this:
from setuptools import setup, find_packages, Extension
from setuptools.command.build_py import build_py as _build_py
import os
import glob


class build_py(_build_py):
    def run(self):
        self.run_command("build_ext")
        return super().run()


polygeoDir = 'polygeo/'
surfnloptDir = 'polygeo/surfnlopt/'
includeDir = 'polygeo/include/'
srcDir = 'polygeo/src/'

headerFiles = glob.glob(includeDir + '*.hpp')
srcFiles = glob.glob(srcDir + '*.cpp')
srcFiles.append(polygeoDir + 'polygeo.i')

surfnloptHeaderFiles = glob.glob(surfnloptDir + 'include/*.hpp')
surfnloptSrcFiles = glob.glob(surfnloptDir + 'src/*.cpp')
surfnloptSrcFiles.append(surfnloptDir + 'surfnlopt.i')

allHeaders = []
allHeaders += headerFiles
allHeaders += surfnloptHeaderFiles

polygeoExt = Extension('_polygeo',
                       sources=srcFiles,
                       include_dirs=[includeDir],
                       library_dirs=[],
                       libraries=[],
                       swig_opts=['-c++'],
                       extra_compile_args=['-std=c++11'],)

surfnloptExt = Extension('surfnlopt._surfnlopt',
                         sources=surfnloptSrcFiles,
                         include_dirs=[includeDir,
                                       surfnloptDir + '/include'],
                         library_dirs=[],
                         libraries=['m', 'nlopt'],
                         swig_opts=['-c++'],
                         extra_compile_args=['-std=c++11'],)

setup(name='polygeo',
      version='1.0',
      packages=find_packages(),
      ext_package='polygeo',
      ext_modules=[polygeoExt,
                   surfnloptExt],
      install_requires=['quadedge'],
      headers=allHeaders,
      cmdclass={'build_py': build_py},
      )
Interestingly, it seems not to be necessary on my Mac to add extra paths or anything, but on the remote it requires me to edit .bash_profile in order to set the necessary include paths etc.
On top of that, on Ubuntu it is also required to add to each of the extensions extra_link_args=['/home/.local/lib/python3.8/site-packages/quadedge/_quadedge.cpython-38-x86_64-linux-gnu.so'], pointing to the .so that I already generated with the setup.py for quadedge.
Also, I need to add the source files of the polygeo extension to the source files of surfnlopt, i.e. sources=srcFiles+surfnloptSrcFiles, otherwise it throws a "symbols not found" error when importing the modules in Python, because surfnlopt depends on the source files of polygeo (the same happens if I don't include the .so, as it also needs functions from quadedge).
For installing plantdevelopment my script looks almost the same; however, there I have to include the polygeo .so instead of the quadedge one, and I have to add swig_opts=['-c++', '-I' + polygeoIncDir], which points to the directory where the header files from the polygeo setup end up.
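To make those two Ubuntu-only additions concrete, here is a sketch of the extra Extension() keyword arguments as described above; both paths are placeholders, not the real cluster paths:

```python
# Placeholder paths purely for illustration; substitute the real locations of
# the prebuilt quadedge extension and the installed polygeo headers.
QUADEDGE_SO = ('/home/user/.local/lib/python3.8/site-packages/'
               'quadedge/_quadedge.cpython-38-x86_64-linux-gnu.so')
POLYGEO_INC = '/home/user/.local/include/polygeo'

def ubuntu_extension_kwargs():
    """Extra keyword arguments the Ubuntu build adds to each Extension()."""
    return {
        # Link against the already-built quadedge extension module.
        'extra_link_args': [QUADEDGE_SO],
        # Let SWIG find the polygeo headers when generating the wrappers.
        'swig_opts': ['-c++', '-I' + POLYGEO_INC],
    }
```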
First, I was wondering whether it causes problems to have the extensions depend on previously installed ones, since plantdevelopment then also depends on polygeo and its extensions.
Could adding the source files to the extensions again have an impact due to ambiguity/duplication when running the code? Could this be a memory leak?
So far I tried the following:
tried to have as few includes/sources as possible to avoid duplicate definitions in the setup.py
tried to install Python locally on the Ubuntu remote with Homebrew and pyenv, with all software (as well as the compiler) at the same version as on the Mac
tried out Python 3.10, as there apparently used to be a memory leak issue at some point
tried to install everything on Mac without root privileges (still works flawlessly)
tried to put ALL files into one single module (which doesn't work due to abstract classes and the C++ code structure)
tested that the C++ code without python is working properly to make sure it's not about the C libraries (unfortunately I need the python API for the project and it's not my choice)
Using the guppy memory profiler, some of the runs (~50%) actually work out properly even when I just import the modules and print the heap status; however, there seems to be a lot of randomness...
In my program I use guppy the following way:
from guppy import hpy

heap = hpy()
print("Heap Status At Starting : ")
heap_status1 = heap.heap()
print("Heap Size : ", heap_status1.size, " bytes\n")
print(heap_status1)
heap.setref()
print("\nHeap Status After Setting Reference Point : ")
heap_status2 = heap.heap()
print("Heap Size : ", heap_status2.size, " bytes\n")
print(heap_status2)
I can't really think of a reason why just importing guppy and printing the heap info should change the outcome of my optimisation/integration.
Could this be an issue with the different OS?
Or maybe the code structure is problematic for Python: building everything on top of each other / having dependencies between extensions in a single module?
I'd be happy for any kind of help and will of course provide any further necessary info.
I'm getting the below error when trying to package my app using buildozer (VM with Ubuntu):
ImportError: dlopen failed: "/data/data/org.test.myapp/files/app/_python_bundle/site-packages/grpc/_cython/cygrpc.so" is a 64-bit instead of a 32-bit
Apparently this is because I need to write a custom recipe for grpcio, so I did that:
from pythonforandroid.recipe import CythonRecipe


class GrpcioRecipe(CythonRecipe):
    version = 'master'
    url = 'https://github.com/grpc/grpc/archive/{version}.zip'
    name = 'grpcio'
    depends = ['six', 'futures', 'enum34']


recipe = GrpcioRecipe()
I saved the recipe as grcpio_recipes.sh, put it in a folder called recipes, and changed the buildozer.spec file to:
# (str) The directory in which python-for-android should look for your own build recipes (if any)
p4a.local_recipes = .buildozer/python-for-android/recipes
However, I'm still getting the same error as before. Have I added the recipe to the right place? It doesn't seem like it's using my custom recipe.
You need to add this code inside recipes/grpcio/__init__.py
And don't forget to add it to the requirements as well, in addition to adding it as a recipe.
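One thing worth double-checking: python-for-android discovers recipes by directory name (the recipe's name attribute must match the folder), not by a loose .sh file name, so the tree under whatever directory p4a.local_recipes points at should look roughly like this:

```
recipes/
└── grpcio/
    └── __init__.py   (contains the GrpcioRecipe class and recipe = GrpcioRecipe())
```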
I'm using buildozer to convert a python program to a phone app on Mac connected to an Android phone with the command line:
buildozer android debug deploy run
The previous command line runs the converted app on the connected phone. But the app crashes as soon as playsound is used, while the methods before playsound work just fine.
When I run:
adb logcat | grep python
I get the error:
ImportError: dlopen failed: "/data/data/org.test.myapp/files/app/_python_bundle/site-packages/gi/_gi.so" has bad ELF magic
When I looked it up I found that Mac cannot use .so files.
Does anybody know how I can solve this?
Okay, so I ended up fixing this when I got a similar error. (dawg.so has bad ELF magic)
Basically, the reason I got this error was that the library ("gi" in your case) was not being read properly by the Android phone when deployed and hence was "corrupted".
The bottom-line reason (for me) was that it was a C/C++ library under the hood, converted to a Python library with Cython. Hence, this error usually means that your library needs a custom recipe.
Steps to solve it:
In the root directory (where the .buildozer folder is found), I added a folder named dawg (the library name), and then, inside dawg, I git cloned the source files of dawg. To get the source files, you can just go to the PyPi page for that library and follow Project links -> Homepage to its GitHub site. Once cloned, you can also remove .git and .gitignore from the source files.
Once that's done, run python3 setup.py install in the dawg directory to install and hence "cythonize" the source files.
In .buildozer/android/platform/python-for-android/pythonforandroid/recipes, add a new folder named "dawg" (your library name) and then, inside /dawg, create __init__.py, which will contain your custom recipe.
In __init__.py, you can add your recipe and the path to the source files. Here is a template that worked for me, but you can customize it as per your requirements.
from pythonforandroid.recipe import IncludedFilesBehaviour, CppCompiledComponentsPythonRecipe
import os
import sys


class DAWGRecipe(IncludedFilesBehaviour, CppCompiledComponentsPythonRecipe):
    version = '0.8.0'
    src_filename = "../../../../../../../dawg"
    name = 'dawg'
    # Libraries it depends on
    depends = ['setuptools']
    call_hostpython_via_targetpython = False
    install_in_hostpython = True

    def get_recipe_env(self, arch):
        env = super().get_recipe_env(arch)
        env['LDFLAGS'] += ' -lc++_shared'
        return env


recipe = DAWGRecipe()
Don't forget to alter buildozer.spec: set its p4a.local_recipes to the local path to /.buildozer/android/platform/python-for-android/pythonforandroid/recipes, as that's where we added our recipe.
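For reference, the corresponding buildozer.spec line might look like this (the leading path is an example; use the absolute path for your own project):

```
p4a.local_recipes = /home/user/myproject/.buildozer/android/platform/python-for-android/pythonforandroid/recipes
```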
Clean the previous build by running buildozer android clean
Lastly, run buildozer -v android debug deploy run to build the app on android phone again.
Hope this helps :)
I'm running this code that I got from here
https://tuomur-python-odata.readthedocs.io/en/latest/#what-is-this
from odata import ODataService
url = 'http://services.odata.org/V4/Northwind/Northwind.svc/'
Service = ODataService(url, reflect_entities=True)
Order = Service.entities['Order']
query = Service.query(Order)
query = query.filter(Order.Name.startswith('Demo'))
query = query.order_by(Order.ShippedDate.desc())
for order in query:
    print(order.Name)
When I run the code, it says ImportError: No module named odata. I tried pip install odata, but nothing was found. How do I install this library? I can't find any documentation on how to install it, either.
Download the project from GitHub and run python setup.py install, with sudo if necessary.