I'm getting the below error when trying to package my app using buildozer (VM with Ubuntu):
ImportError: dlopen failed: "/data/data/org.test.myapp/files/app/_python_bundle/site-packages/grpc/_cython/cygrpc.so" is a 64-bit instead of a 32-bit
Apparently this is because I need to write a custom recipe for grpcio, so I did that:
from pythonforandroid.recipe import CythonRecipe


class GrpcioRecipe(CythonRecipe):
    version = 'master'
    url = 'https://github.com/grpc/grpc/archive/{version}.zip'
    name = 'grpcio'
    depends = ['six', 'futures', 'enum34']


recipe = GrpcioRecipe()
I saved the recipe as grcpio_recipes.sh, put it in a folder called recipes, and changed the buildozer.spec file to:

# (str) The directory in which python-for-android should look for your own build recipes (if any)
p4a.local_recipes = .buildozer/python-for-android/recipes
However, I'm still getting the same error as before. Have I added the recipe to the right place? It doesn't seem like buildozer is using my custom recipe.
You need to add this code inside recipes/grpcio/__init__.py: python-for-android only picks up a local recipe when it lives in a folder named after the package, in a file called __init__.py, not in a .sh file.
And don't forget to add grpcio to the requirements as well, in addition to adding it as a recipe.
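For illustration, a layout along these lines is what python-for-android expects (the project-root placement is an assumption; what matters is the recipes/grpcio/__init__.py naming and a p4a.local_recipes path that points at the recipes folder):

myproject/
├── buildozer.spec
└── recipes/
    └── grpcio/
        └── __init__.py    <- the recipe code above goes here

# in buildozer.spec
p4a.local_recipes = ./recipes
requirements = ..., grpcio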
I am using Kivy to create an Android app. I need to install the DeepSpeech framework; however, in order for DeepSpeech to be installed it is necessary to create a recipe.
I created a recipe and built the APK. There were no errors in the build; it created the APK and, as far as I could see in the folders, DeepSpeech was built. However, after I install the app on the phone and try to run it, it crashes and says there is no module named deepspeech.
Does anyone know what I am doing wrong? I've been stuck on this for a while now and can't seem to find the end of it :/
from pythonforandroid.recipe import PythonRecipe
from pythonforandroid.toolchain import current_directory, shprint
import sh


class deepspeechRecipe(PythonRecipe):
    version = 'v0.9.2'
    url = 'https://github.com/mozilla/DeepSpeech/archive/{version}.tar.gz'
    depends = ['numpy', 'setuptools']
    call_hostpython_via_targetpython = False
    site_packages_name = 'deepspeech'

    def build_arch(self, arch):
        env = self.get_recipe_env(arch)
        with current_directory(self.get_build_dir(arch.arch)):
            # Build python bindings
            hostpython = sh.Command(self.hostpython_location)
            shprint(hostpython,
                    'setup.py',
                    'build_ext', _env=env)
        # Install python bindings
        super().build_arch(arch)

    def get_recipe_env(self, arch):
        env = super().get_recipe_env(arch)
        numpy_recipe = self.get_recipe('numpy', self.ctx)
        env['CFLAGS'] += ' -I' + numpy_recipe.get_build_dir(arch.arch)
        # env['LDFLAGS'] += ' -L' + sqlite_recipe.get_lib_dir(arch)
        env['LIBS'] = env.get('LIBS', '') + ' -lnumpy'
        return env


recipe = deepspeechRecipe()
Buildozer: 1.4.0
requirements = python3==3.7.14, hostpython3==3.7.14, kivy, kivymd, sqlite3, numpy==1.14.5, deepspeech, apsw
If you need any extra information, I can add it.
I have already tried using TensorFlow to run the model; however, the model gives an array as the output, and I don't know the right procedure to transform that into text.
I have already tried other recipes (like opencv) and they all work fine.
Edit:
I found out that when I use the recipe it does run and it does build properly, but only the deepspeech_training part, because the setup.py only installs that. To install other parts, like the model class, it is necessary to use another setup.py located in "native_client/python", but that requires the rest of the folders, so I still need to figure that out.
Edit 2: I was able to build the packages that I wanted (the inference part of DeepSpeech); however, when I run the app it gives the following error:
python : ImportError: dlopen failed: library "libc++_shared.so" not found: needed by /data/user/0/org.test.myapp/files/app/_python_bundle/site-packages/deepspeech/_impl.so in namespace classloader-namespace
python : Python for android ended.
Add pillow to your requirements and check if it works!
requirements = python3==3.7.14, hostpython3==3.7.14, kivy, kivymd, sqlite3, numpy==1.14.5, deepspeech, apsw, pillow
I'm using buildozer to convert a Python program to a phone app, on a Mac connected to an Android phone, with the command line:
buildozer android debug deploy run
The previous command runs the converted app on the connected phone. But the app crashes as soon as playsound is used; the methods before playsound work just fine.
When I run:
adb logcat | grep python
I get the error:
ImportError: dlopen failed: "/data/data/org.test.myapp/files/app/_python_bundle/site-packages/gi/_gi.so" has bad ELF magic
When I looked it up I found that Mac cannot use .so files.
Does anybody know how I can solve this?
Okay, so I ended up fixing this when I got a similar error (dawg.so has bad ELF magic).
Basically, the reason I got this error was that the library ("gi" in your case) was not being read properly by the Android phone when deployed and hence was "corrupted".
The bottom-line reason (for me) was that it is a C/C++ library under the hood that uses Cython to be exposed as a Python library. Hence, this error usually means that your library needs a custom recipe.
Steps to solve it:
1. In the root directory (where the .buildozer folder is found), I added a folder named dawg (the library name), and then, inside dawg, I git cloned the source files of dawg. To get the source files, you can just go to the PyPI page for that library and follow Project links -> Homepage to their GitHub site. Once cloned, you can also remove .git and .gitignore from the source files.
2. Once that's done, run python3 setup.py install in the dawg directory to install and hence "cythonize" the source files.
3. In .buildozer/android/platform/python-for-android/pythonforandroid/recipes, add a new folder named "dawg" (your library name) and then, inside /dawg, make __init__.py where you will add your custom recipe.
4. In __init__.py, add your recipe and the path to the source files. Here is a template that worked for me, but you can customize it as per your requirements.
from pythonforandroid.recipe import IncludedFilesBehaviour, CppCompiledComponentsPythonRecipe
import os
import sys


class DAWGRecipe(IncludedFilesBehaviour, CppCompiledComponentsPythonRecipe):
    version = '0.8.0'
    src_filename = "../../../../../../../dawg"
    name = 'dawg'
    # Libraries it depends on
    depends = ['setuptools']
    call_hostpython_via_targetpython = False
    install_in_hostpython = True

    def get_recipe_env(self, arch):
        env = super().get_recipe_env(arch)
        env['LDFLAGS'] += ' -lc++_shared'
        return env


recipe = DAWGRecipe()
5. Don't forget to alter the buildozer.spec: to its p4a.local_recipes, add the local path to /.buildozer/android/platform/python-for-android/pythonforandroid/recipes, as that's where we add our recipes (see the snippet after these steps).
6. Clean the previous build by running buildozer android clean.
7. Lastly, run buildozer -v android debug deploy run to build the app on the Android phone again.
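For reference, the spec entry from step 5 might look like this (the path assumes the recipes folder from step 3; adjust it if yours lives elsewhere):

# in buildozer.spec
p4a.local_recipes = ./.buildozer/android/platform/python-for-android/pythonforandroid/recipes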
Hope this helps :)
I have a build process that creates a Python wheel using the following command:
python setup.py bdist_wheel
The build process can be run on many platforms (Windows, Linux, py2, py3 etc.) and I'd like to keep the default output names (e.g. mapscript-7.2-cp27-cp27m-win_amd64.whl) to upload to PyPI.
Is there any way to get the generated wheel's filename (e.g. mapscript-7.2-cp27-cp27m-win_amd64.whl) and save it to a variable so I can then install the wheel later on in the script for testing?
Ideally the solution would be cross-platform. My current approach is to clear the folder, list all files, and select the first (and only) file in the list, but this seems a very hacky solution.
setuptools
If you are using a setup.py script to build the wheel distribution, you can use the bdist_wheel command to query the wheel file name. The drawback of this method is that it uses bdist_wheel's private API, so the code may break on a wheel package update if the authors decide to change it.
from setuptools.dist import Distribution


def wheel_name(**kwargs):
    # create a fake distribution from arguments
    dist = Distribution(attrs=kwargs)
    # finalize bdist_wheel command
    bdist_wheel_cmd = dist.get_command_obj('bdist_wheel')
    bdist_wheel_cmd.ensure_finalized()
    # assemble wheel file name
    distname = bdist_wheel_cmd.wheel_dist_name
    tag = '-'.join(bdist_wheel_cmd.get_tag())
    return f'{distname}-{tag}.whl'
The wheel_name function accepts the same arguments you pass to the setup() function. Example usage:
>>> wheel_name(name="mydist", version="1.2.3")
'mydist-1.2.3-py3-none-any.whl'
>>> wheel_name(name="mydist", version="1.2.3", ext_modules=[Extension("mylib", ["mysrc.pyx", "native.c"])])
'mydist-1.2.3-cp36-cp36m-linux_x86_64.whl'
Notice that the source files for native libs (mysrc.pyx or native.c in the above example) don't have to exist to assemble the wheel name. This is helpful in case the sources for the native lib don't exist yet (e.g. you are generating them later via SWIG, Cython or whatever).
This makes the wheel_name easily reusable in the setup.py script where you define the distribution metadata:
# setup.py
from setuptools import setup, find_packages, Extension
from setup_helpers import wheel_name

setup_kwargs = dict(
    name='mydist',
    version='1.2.3',
    packages=find_packages(),
    ext_modules=[Extension(...), ...],
    ...
)
file = wheel_name(**setup_kwargs)
...
setup(**setup_kwargs)
If you want to use it outside of the setup script, you have to organize the access to setup() args yourself (e.g. reading them from a setup.cfg script or whatever).
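If the metadata lives in a setup.cfg, one way to feed it to wheel_name without invoking setup() is setuptools' own config reader (a sketch; read_configuration sits in setuptools.config in older releases and was later moved to setuptools.config.setupcfg):

from setuptools.config import read_configuration
from setup_helpers import wheel_name

# parse setup.cfg into a nested dict of sections
conf = read_configuration('setup.cfg')
metadata = conf['metadata']

# reuse the wheel_name() helper defined above
print(wheel_name(name=metadata['name'], version=metadata['version']))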
This part is loosely based on my other answer to setuptools, know in advance the wheel filename of a native library
poetry
Things can be simplified a lot (it's practically a one-liner) if you use poetry because all the relevant metadata is stored in the pyproject.toml. Again, this uses an undocumented API:
from clikit.io import NullIO
from poetry.factory import Factory
from poetry.masonry.builders.wheel import WheelBuilder
from poetry.utils.env import NullEnv


def wheel_name(rootdir='.'):
    builder = WheelBuilder(Factory().create_poetry(rootdir), NullEnv(), NullIO())
    return builder.wheel_filename
The rootdir argument is the directory containing your pyproject.toml file.
flit
AFAIK flit can't build wheels with native extensions, so it can give you only the purelib name. Nevertheless, it may be useful if your project uses flit for distribution building. Notice this also uses an undocumented API:
from flit_core.wheel import WheelBuilder
from io import BytesIO
from pathlib import Path


def wheel_name(rootdir='.'):
    config = str(Path(rootdir, 'pyproject.toml'))
    builder = WheelBuilder.from_ini_path(config, BytesIO())
    return builder.wheel_filename
Implementing your own solution
I'm not sure whether it's worth it. Still, if you want to choose this path, consider using packaging.tags before you find some old deprecated stuff or even decide to query the platform yourself. You will still have to fall back to private stuff to assemble the correct wheel name, though.
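To give a flavour of packaging.tags (a sketch only: mydist and 1.2.3 are placeholders, and the most specific tag for the running interpreter is not necessarily the one your build backend would pick):

from packaging.tags import sys_tags

# sys_tags() yields tags from most to least specific for the running
# interpreter, e.g. cp39-cp39-manylinux_2_17_x86_64 down to py3-none-any
tag = next(iter(sys_tags()))
print(f'mydist-1.2.3-{tag.interpreter}-{tag.abi}-{tag.platform}.whl')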
My current approach to install the wheel is to point pip to the folder containing the wheel and let it search itself:
python -m pip install --no-index --find-links=build/dist mapscript
twine can also be pointed directly at a folder, without needing to know the exact wheel name.
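For example (assuming the wheel landed in build/dist, as in the pip command above):

twine upload build/dist/*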
I used a modified version of hoefling's solution. My goal was to copy the build to a "latest" wheel file. The setup() function returns an object with all the info you need, so you can find out what it actually built, which seems simpler than the solution above. Assuming you have a variable version in use, the following gets the file name just built and then copies it.
import shutil
import setuptools

setup = setuptools.setup(
    # whatever options you currently have
)

wheel_built = 'dist/{}-{}.whl'.format(
    setup.command_obj['bdist_wheel'].wheel_dist_name,
    '-'.join(setup.command_obj['bdist_wheel'].get_tag()))
wheel_latest = wheel_built.replace(version, 'latest')  # version is defined earlier
shutil.copy(wheel_built, wheel_latest)
print('Copied {} >> {}'.format(wheel_built, wheel_latest))
I guess one possible drawback is you have to actually do the build to get the name, but since that was part of my workflow, I was ok with that. hoefling's solution has the benefit of letting you plan the name without doing the build, but it seems more complex.
I'm trying to install couchdbkit using the following buildout config:
[buildout]
parts = eggs
include-site-packages = false
versions = versions

[eggs]
recipe = zc.recipe.egg:eggs
eggs =
    couchdbkit

[versions]
couchdbkit = 0.6.3
It installs the package successfully, but I get numerous errors like this during setup on some machines:
Download error on http://hg.e-engura.org/couchdbkit/: [Errno -2] Name or service not known -- Some packages may not be found!
By default buildout should find packages using this index. But I can't understand the source of this weird hostname; nothing here points to that location.
How does it actually work?
The underlying setuptools code also scans for homepage and download links from the simple index and does this quite aggressively.
The couchdbkit setup.py file lists http://hg.e-engura.org/couchdbkit/ as the homepage, so all homepage links on the simple index link there.
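Concretely, the culprit is a homepage URL in the package metadata, along these lines (an abridged sketch, not the actual file):

# abridged sketch of couchdbkit's setup.py; the url field is the "homepage"
# link that setuptools follows from the simple index
from setuptools import setup

setup(
    name='couchdbkit',
    version='0.6.3',
    url='http://hg.e-engura.org/couchdbkit/',
    # ... the rest of the metadata ...
)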
You can prevent zc.buildout from trying to connect to that host by setting up a whitelist of hosts it can connect to:
[buildout]
# ...
allow-hosts =
    *.python.org
    *.google.com
    *.googlecode.com
    *.sourceforge.net
for example.
I'm attempting to release a new version of a package to pypi. This is using python 2.7, and I'm currently targeting pythons 2.6/2.7 for consumption.
The current release for the package in question is 0.0.2-1. (The -1 was a build tag convention I read somewhere; I'm changing this practice to use b for beta, which is more relevant.)
Basically, if I have the combination of version (in the setup() call) and build tag (from setup.cfg) that is anything other than the current version already on pypi, both the register and upload commands fail:
ethan@walrus:~/source/python-mandrel$ python setup.py register
running register
running egg_info
writing requirements to mandrel.egg-info/requires.txt
writing mandrel.egg-info/PKG-INFO
writing top-level names to mandrel.egg-info/top_level.txt
writing dependency_links to mandrel.egg-info/dependency_links.txt
writing entry points to mandrel.egg-info/entry_points.txt
reading manifest file 'mandrel.egg-info/SOURCES.txt'
writing manifest file 'mandrel.egg-info/SOURCES.txt'
running check
Registering mandrel to http://pypi.python.org/pypi
Server response (500): There's been a problem with your request
That's with a version of 0.0.3 and build tag of b.
But if I apply this patch:
--- a/setup.cfg
+++ b/setup.cfg
@@ -1,3 +1,3 @@
 [egg_info]
-tag_build = b
+tag_build = -1
diff --git a/setup.py b/setup.py
index 14761cf..beb8278 100644
--- a/setup.py
+++ b/setup.py
@@ -3,7 +3,7 @@ import os
 setup(
     name = "mandrel",
-    version = "0.0.3",
+    version = "0.0.2",
     author = "Ethan Rowe",
     author_email = "ethan@the-rowes.com",
     description = ("Provides bootstrapping for sane configuration management"),
Then the register call (and presumably upload) will succeed:
ethan@walrus:~/source/python-mandrel$ python setup.py register
running register
...
running check
Registering mandrel to http://pypi.python.org/pypi
Server response (200): OK
If I change the build tag to -2, say, the register call will fail again. This suggests the failure is related to any full version string that isn't already known to PyPI.
Unfortunately, the --show-response option when using upload is unhelpful when the server responds with a 500 code; distutils' upload command merely reports the fact that the server experienced an error, with nothing useful to go on.
Any suggestions on what I might do to troubleshoot?
I'm having a 500 error also; the issue for it, with the diagnosis from the maintainers, is here: https://sourceforge.net/tracker/index.php?func=detail&aid=3573564&group_id=66150&atid=513503
I debugged it using pdb. The --show-response option isn't really implemented in a useful way, apparently. I put an "import pdb; pdb.set_trace()" in my Python dist, in distutils/command/register.py at line 291, which in my release is inside the post_to_server() method. I do a "print req.data" right there and then "next" through it in order to see the response captured inside the exception handler.
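For anyone wanting to reproduce that, the edit amounts to something like this (a sketch; the exact line number and variable names are from that particular Python 2 release of distutils):

# distutils/command/register.py (Python 2), just before the request is
# posted inside post_to_server():
import pdb; pdb.set_trace()  # drop into the debugger

# then, at the (Pdb) prompt:
#   print req.data   # inspect the request body being sent
#   next             # step forward until the 500 response surfaces in the
#                    # exception handler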