How to install gRPC python package on IoT device?

I have an IoT device (an Android device) and a python script that interacts with a gRPC server running on the same device (as an Android app). I've cross-compiled Python3 and in general it's working: I was able to run a python script with it from the Android app (using Runtime.getRuntime().exec(...) and passing the PATH, PWD, PYTHONHOME, PYTHONPATH, ANDROID_DATA, ANDROID_ROOT env vars).
The python script that uses gRPC looks as follows:
...
import grpc
...
channel = grpc.insecure_channel(url)
When the script is executed I get the following error:
ModuleNotFoundError: No module named 'grpc'
Here is the structure of the python directories on the IoT device (which I've prepared):
├── bin
├── lib
│   ├── pkgconfig
│   ├── python3.9
│   │   ├── asyncio
│   │   ├── collections
│   │   ├── concurrent
│   │   ├── ctypes
│   │   ├── curses
│   │   ├── dbm
│   │   ├── distutils
│   │   ├── encodings
│   │   ├── ensurepip
│   │   ├── html
│   │   ├── http
│   │   ├── idlelib
│   │   ├── importlib
│   │   ├── json
│   │   ├── lib-dynload
│   │   ├── lib2to3
│   │   ├── logging
│   │   ├── multiprocessing
│   │   ├── pydoc_data
│   │   ├── site-packages
│   │   ├── sqlite3
│   │   ├── tkinter
│   │   ├── turtledemo
│   │   ├── unittest
│   │   ├── urllib
│   │   ├── venv
│   │   ├── wsgiref
│   │   ├── xml
│   │   └── xmlrpc
│   └── site-packages
│       ├── google
│       ├── grpc
│       ├── grpcio-1.30.0.dist-info
│       └── protobuf-3.12.2.dist-info
└── share
    ├── man
    │   └── man1
    └── terminfo
...
As you can see, I've put the relevant packages into site-packages (by simply copying the same files from my Mac to the IoT device, which may well be incorrect).
What's the right way to do this (where exactly should the relevant libs go in the python directory tree)? Can I put the dirs/files into the same directory as the script (locally)? Is there any lite gRPC implementation (perhaps with limited functionality) in python which can be easily prepared for distribution (e.g. copy/pasted)?
FYI I've tried to use python -m pip install grpcio --target and then python -m zipapp resources -m "grpc_serial:main", but it doesn't work even locally, again because the module cygrpc is not found (it does work when using the grpc package that is installed globally):
ImportError: cannot import name 'cygrpc' from 'grpc._cython' (../python3/lib/python3.9/grpc/_cython/__init__.py)
If I run python -m pip install cygrpc --target resources to get a standalone dist for cygrpc, I end up with ~30 directories (probably transitive deps) totaling about 50 MB, which is just crazy heavy.
I can provide a tree output for site-packages if it helps.
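Update for anyone else hitting this: cygrpc is the compiled C-extension core of grpcio, so zipapp cannot import it from inside an archive (zipimport does not load .so files), and a wheel built on macOS will not load on an Android/aarch64 device in any case. A sketch of fetching wheels built for the target platform instead of copying Mac builds (the platform and version tags below are assumptions about this particular device, and Android's bionic libc may still refuse manylinux wheels):

# fetch_wheels.py, a hypothetical helper run on the dev machine
import subprocess, sys

subprocess.check_call([
    sys.executable, "-m", "pip", "download",
    "grpcio==1.30.0", "protobuf==3.12.2",
    "--dest", "wheels",
    "--platform", "manylinux2014_aarch64",  # assumed device arch/ABI
    "--implementation", "cp",
    "--python-version", "39",
    "--only-binary=:all:",                  # wheels only; required with --platform
])

Each downloaded wheel is an ordinary zip; unpacking its contents (dist-info included) into lib/python3.9/site-packages on the device replaces the copy-from-Mac step.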

Solved it (with a workaround) by using Thrift RPC, which seems to be more lightweight on the python side: it required only the thrift and six deps installed locally (in the same directory).
I guess the root cause of the gRPC failure was transitive deps not being installed (not sure whether that is a gRPC or a pip issue).
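For completeness, the Thrift side stayed small. A minimal client sketch (the serial_rpc module and SerialService name are hypothetical, standing in for whatever thrift --gen py generates from your own .thrift file; the port is likewise an assumption):

from thrift.transport import TSocket, TTransport
from thrift.protocol import TBinaryProtocol
from serial_rpc import SerialService  # hypothetical generated module

transport = TTransport.TBufferedTransport(TSocket.TSocket("127.0.0.1", 9090))
protocol = TBinaryProtocol.TBinaryProtocol(transport)
client = SerialService.Client(protocol)
transport.open()
client.ping()  # hypothetical method declared in the .thrift IDL
transport.close()

Since thrift (plus six) is pure python, the two packages can simply sit next to the script on the device.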

Related

ModuleNotFoundError: no module named (*)

I'm trying to run my tests using python -m pytest, but I get the error:
ModuleNotFoundError: No module named 'sample'
When using nosetests or anything else it works fine, but with pytest it doesn't.
My tree is shown below; do you have any advice on why it doesn't work?
├── LICENSE.txt
├── README.md
├── data
│   └── data_file
├── exported_register.csv
├── pyproject.toml
├── requirements.txt
├── setup.cfg
├── setup.py
├── src
│   └── sample
│       ├── __init__.py
│       ├── __pycache__
│       │   ├── __init__.cpython-39.pyc
│       │   ├── dziennik.cpython-39.pyc
│       │   ├── przedmiot.cpython-39.pyc
│       │   ├── simple.cpython-39.pyc
│       │   └── uczen.cpython-39.pyc
│       ├── dziennik.py
│       ├── package_data.dat
│       ├── przedmiot.py
│       ├── simple.py
│       └── uczen.py
├── tests
│   ├── __init__.py
│   ├── __pycache__
│   │   ├── __init__.cpython-39.pyc
│   │   ├── test_ASSERTPY_uczen.cpython-39-pytest-6.2.1.pyc
│   │   ├── test_ASSERTPY_uczen.cpython-39-pytest-6.2.5.pyc
│   │   ├── test_ASSERTPY_uczen.cpython-39.pyc
│   │   ├── test_PYHAMCREST_uczen.cpython-39-pytest-6.2.1.pyc
│   │   ├── test_PYHAMCREST_uczen.cpython-39-pytest-6.2.5.pyc
│   │   ├── test_PYHAMCREST_uczen.cpython-39.pyc
│   │   ├── test_UNITTEST_register.cpython-39-pytest-6.2.1.pyc
│   │   ├── test_UNITTEST_register.cpython-39-pytest-6.2.5.pyc
│   │   ├── test_UNITTEST_register.cpython-39.pyc
│   │   ├── test_UNITTEST_uczen.cpython-39-pytest-6.2.1.pyc
│   │   ├── test_UNITTEST_uczen.cpython-39-pytest-6.2.5.pyc
│   │   ├── test_UNITTEST_uczen.cpython-39.pyc
│   │   ├── test_simple.cpython-39-pytest-6.2.1.pyc
│   │   ├── test_simple.cpython-39-pytest-6.2.5.pyc
│   │   └── test_simple.cpython-39.pyc
│   ├── test_ASSERTPY_uczen.py
│   ├── test_PYHAMCREST_uczen.py
│   ├── test_UNITTEST_register.py
│   ├── test_UNITTEST_uczen.py
│   └── test_simple.py
└── tox.ini
When you run pytest with python -m pytest it uses the current directory as its working directory, which doesn't contain the sample module (located inside ./src). The way I deal with this is to have a conftest.py inside my tests directory where I add my source dir to the python path, something like this:
# tests/conftest.py
import sys
from pathlib import Path

# Resolve <project root>/src relative to this file and put it on sys.path
source_path = Path(__file__).parents[1].joinpath("src").resolve()
sys.path.append(str(source_path))
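With that conftest.py in place, a test module can import sample directly (a sketch; nothing about the contents of sample.simple is assumed):

# tests/test_simple.py
from sample import simple  # resolves because conftest.py put src/ on sys.path

def test_sample_is_importable():
    assert simple is not None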
I've recently started using pytorch and have had similar problems. A couple of steps come to mind:
How are you writing the .py file that contains the tests? It may simply be that you need to change how you import sample within the unit-test file. I would expect that you need something like import src.sample.simple. In other words, it could just be a pathing issue.
Try a (much) simpler folder structure and try again. If that doesn't work, try copying an example of a simple scheme that someone has posted. That is, just get python -m pytest to run somehow, then slowly add back the complexities of your project.

How to deploy Google Cloud Function with shared local dependencies?

I am having some trouble with local modules when deploying on Cloud Functions; any ideas or best practices would be appreciated!
I am trying to deploy a piece of my project as a Cloud Function. It uses some local code from the project, which is shared with other modules, and I use an absolute import for that. I am using a Cloud Repository for deployment, and there I state the folder where the function resides (parent/cloud_function/). The problem is that the parent package is not available with that setup.
This is an example of the project structure:
├── parent_repo
│   ├── parent
│   │   ├── __init__.py
│   │   ├── config.conf
│   │   ├── config.py
│   │   ├── cloud_function
│   │   │   ├── __init__.py
│   │   │   ├── main.py
│   │   │   └── requirements.txt
│   │   ├── shared_module
│   │   │   ├── __init__.py
│   │   │   ├── package1.py
│   │   │   └── package2.py
│   │   ├── other_module
│   │   │   ├── __init__.py
│   │   │   ├── some_script.py
│   │   │   └── another_script.py
│   │   └── utils.py
Inside parent/cloud_function/main.py AND in parent/other_module/some_script.py I use:
from parent.shared_module.package1 import some_func
from parent.shared_module.package2 import some_class
to access the shared code. However, when deploying the function on Cloud Functions, the parent package is unavailable, since I assume deployment only looks at the stated folder.
Of course I could simply nest all required code inside the cloud_function folder - but from a project perspective that isn't ideal - as that code is shared across other resources, and does not logically belong there.
Does anyone have a good idea how to do this better?
Thanks in advance!
Put very shortly: it is difficult.
Here is the Python runtime "Specifying dependencies" description: use a requirements.txt file, or package local dependencies alongside your function.
You can probably also play with private dependencies in Cloud Build.
Some ideas are provided in the "Deploy a Python Cloud Function with all package dependencies" SO question.
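The "packaging local dependencies alongside your function" option can be scripted, so the shared code still lives in one place in the repo. A sketch (build_function.py is a made-up name; paths mirror the tree above, assuming the script sits in parent_repo/):

# build_function.py, copy shared code into the deployable folder before deploy
import shutil
from pathlib import Path

root = Path(__file__).resolve().parent / "parent"
src = root / "shared_module"
dst = root / "cloud_function" / "shared_module"

shutil.rmtree(dst, ignore_errors=True)  # start clean on every build
shutil.copytree(src, dst)               # vendor the shared package
print(f"copied {src} -> {dst}")

Inside main.py the imports would then drop the parent prefix, e.g. from shared_module.package1 import some_func.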

How to import locally in a proper way in PyCharm on OSX

Here's what the directory looks like:
├── Caches
│   ├── 11\ Centroids\ representing\ relative\ anchor\ sizes..png
│   ├── 9\ Centroids\ representing\ relative\ anchor\ sizes..png
│   ├── Generated\ anchors\ relative\ to\ sample\ image\ size.png
│   ├── Relative\ width\ and\ height\ for\ 10107\ boxes..png
│   └── data_set_labels.csv
├── Config
│   ├── feature_map.py
│   ├── set_annotation_conf.py
│   └── voc_conf.json
├── Helpers
│   ├── __pycache__
│   │   ├── annotation_parsers.cpython-37.pyc
│   │   └── visual_tools.cpython-37.pyc
│   ├── anchors.py
│   ├── annotation_parsers.py
│   ├── dataset_handlers.py
│   ├── models.py
│   └── visual_tools.py
├── README.md
├── sample_img.png
└── structure_requirements.txt
in dataset_handlers.py I need to import the following:
from ..Config.feature_map import get_feature_map
and in anchors.py I need to import the following:
from .visual_tools import visualization_wrapper
Both imports resolve in PyCharm (in other words, they show no syntax errors); however, when run they give the following error:
ImportError: attempted relative import with no known parent package
If I do this:
from visual_tools import visualization_wrapper
the name visual_tools is unresolved in PyCharm, yet it runs without errors and imports the requested things.
How can I make relative imports such as from .some_module import something run without errors and resolve in PyCharm, without marking the directory as a sources root?
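For what it's worth, relative imports only work when the importing file is executed as part of a package, which is exactly what "no known parent package" means. One sketch that avoids relative imports entirely (assuming __init__.py files are added to Config and Helpers so they are packages, and the project root is the working directory):

# Helpers/dataset_handlers.py, absolute import instead of the relative one
from Config.feature_map import get_feature_map

# Helpers/anchors.py
from Helpers.visual_tools import visualization_wrapper

Running the files as modules from the project root, e.g. python -m Helpers.dataset_handlers, puts the root on sys.path, and PyCharm resolves the same names because it adds content roots to PYTHONPATH by default.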

AWS Lambda Code Unable to Import Dependencies from S3 -- Runtime.ImportModuleError

I'm trying to deploy a Python lambda function with external dependencies, but I'm hitting an error because it doesn't see my external dependencies. "Unable to import module 'metrics': No module named 'github'"
Here's my deploy script. My python script with the lambda_handler() is metrics.py.
mkdir lambda_code
# populate lambda_code directory with python libraries
pip3 install --quiet -r requirements.txt --target lambda_code/
# compress the lambda_code directory and add metrics.py to the zip
zip -qq -r9 lambda_code.zip lambda_code/
zip -qq -g lambda_code.zip metrics.py
aws s3 cp lambda_code.zip s3://$BUCKET/lambda_code.zip
aws lambda update-function-code --function-name $FUNCTION_NAME --s3-bucket $BUCKET --s3-key lambda_code.zip
Here's the tree of my unpacked lambda_code.zip. This is where things aren't working. It doesn't make sense to me why the lambda can't see the github module. I've also tried putting metrics.py directly in the lambda_code directory, but still nothing.
.
├── lambda_code
│   ├── Deprecated-1.2.5.dist-info
│   ├── PyGithub-1.43.7.dist-info
│   ├── PyJWT-1.7.1.dist-info
│   ├── __pycache__
│   ├── bin
│   ├── certifi
│   ├── certifi-2019.3.9.dist-info
│   ├── chardet
│   ├── chardet-3.0.4.dist-info
│   ├── cycler-0.10.0.dist-info
│   ├── cycler.py
│   ├── dateutil
│   ├── deprecated
│   ├── easy_install.py
│   ├── github
│   ├── idna
│   ├── idna-2.8.dist-info
│   ├── jwt
│   ├── kiwisolver-1.1.0.dist-info
│   ├── kiwisolver.cpython-37m-darwin.so
│   ├── matplotlib
│   ├── matplotlib-3.0.3-py3.7-nspkg.pth
│   ├── matplotlib-3.0.3.dist-info
│   ├── mpl_toolkits
│   ├── numpy
│   ├── numpy-1.16.3.dist-info
│   ├── pandas
│   ├── pandas-0.24.2.dist-info
│   ├── pkg_resources
│   ├── pylab.py
│   ├── pyparsing-2.4.0.dist-info
│   ├── pyparsing.py
│   ├── python_dateutil-2.8.0.dist-info
│   ├── pytz
│   ├── pytz-2019.1.dist-info
│   ├── requests
│   ├── requests-2.21.0.dist-info
│   ├── setuptools
│   ├── setuptools-41.0.1.dist-info
│   ├── six-1.12.0.dist-info
│   ├── six.py
│   ├── urllib3
│   ├── urllib3-1.24.3.dist-info
│   ├── wrapt
│   └── wrapt-1.11.1.dist-info
└── metrics.py
Finally, here's the beginning of the lambda code. The error occurs when trying to import github.
"""Obtains total number of releases on Github.com and creates data
visualizations"""
import datetime
import io
import os
import sys
from base64 import b64decode
from github import Github
import boto3
import matplotlib.pyplot as plt
import pandas as pd
ENCRYPTED = os.environ['github_credentials']
DECRYPTED = boto3.client('kms').decrypt(CiphertextBlob=b64decode(ENCRYPTED))['Plaintext']
def lambda_handler(event, context):
You either need to have metrics.py and the subfolders of lambda_code in a single folder, or you need to import the modules as lambda_code.pytz (and so on) for every module that is zipped.
With your current directory structure, lambda_code becomes a module, and all the other modules can only be referred to through lambda_code., because the folders inside lambda_code become its submodules. I would suggest you copy the subfolders of lambda_code into the root directory, i.e. the directory where your metrics.py resides, then delete the lambda_code folder, zip, and upload. This way, you might not need to edit your code.
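In script form, the fix is to put the library folders at the root of the archive rather than zipping the lambda_code directory itself. A stdlib-only sketch (package_lambda.py is a made-up name):

# package_lambda.py, rebuild the zip with packages at the archive root
import zipfile
from pathlib import Path

with zipfile.ZipFile("lambda_code.zip", "w", zipfile.ZIP_DEFLATED) as zf:
    for path in Path("lambda_code").rglob("*"):
        if path.is_file():
            # strip the leading "lambda_code/" so `import github` resolves
            zf.write(path, str(path.relative_to("lambda_code")))
    zf.write("metrics.py")  # the handler module goes at the root too

The zip-command equivalent is to run zip from inside lambda_code/ (cd lambda_code && zip -qq -r9 ../lambda_code.zip .) before adding metrics.py.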

Why does virtualenv rely on the global python instead of the local one after being pulled?

I pulled (via git) a python project which was created (by me, on another computer) using virtualenv, so the project's python actually lives in a local directory (e.g. fila_env/bin/python). After pulling it, I can locate that python (see the tree below). However, when I activate the environment (using source fila_env/bin/activate), the python on this machine is used instead of the virtualenv's:
(fila_env) username@ASCSC-645A:~/CODES/.../myProject$ which python
>>> /usr/bin/python
I googled around but couldn't find a good solution to this. I would like to know:
1. How can I ensure that anyone who pulls this project uses only the provided python, and not their own?
2. Is this a correct approach: to create a virtualenv and push the entire project (including the virtualenv) to the cloud?
Here is some more info:
├── yyyyyyExample.py
├── fila_env
│   ├── bin
│   │   ├── activate
│   │   ├── ...
│   │   ├── python
│   │   ├── python2 -> python
│   │   ├── python2.7 -> python
│   │   ├── python-config
│   │   ├── ...
│   │   └── wheel
│   ├── include
│   │   └── python2.7 -> /usr/include/python2.7
│   ├── lib
│   │   └── python2.7
│   ├── local
│   │   ├── bin -> .../fila_env/bin
│   │   ├── include -> .../fila_env/include
│   │   └── lib -> .../fila_env/lib
│   ├── pip-selfcheck.json
│   └── share
│       ├── jupyter
│       └── man
└── xxxxxxExample.py
You cannot, and you shouldn't. If I use 32-bit Linux and your virtualenv was created on 64-bit Windows (or vice versa), your python binary certainly will not work for me.
Again, no: virtualenv is a developer's tool, not a distribution tool. For distribution you should consider sdist/egg/wheel, or creating platform-dependent binaries with PyInstaller, py2exe, or similar tools.
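The usual workflow is to commit a requirements.txt (pip freeze > requirements.txt) instead of the environment, and let every machine rebuild its own env. A sketch assuming Python 3 and a committed requirements.txt (the question's project is python2.7, where the third-party virtualenv command plays the same role as the stdlib venv used here):

# recreate_env.py, rebuild the environment on the current machine
import subprocess
import venv

venv.create("fila_env", with_pip=True)  # fresh env built with this machine's python
subprocess.check_call(
    ["fila_env/bin/pip", "install", "-r", "requirements.txt"]  # Scripts\pip on Windows
)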
