When I run a Python script that uses the 'sh' module from a Bazel genrule, it fails with this:
INFO: Analysed target //src:foo_gen (8 packages loaded).
INFO: Found 1 target...
ERROR: /home/libin11/workspace/test/test/src/BUILD:1:1: Executing genrule //src:foo_gen failed (Exit 1)
Traceback (most recent call last):
File "src/test.py", line 2, in <module>
sh.touch("foo.bar")
File "/usr/local/lib/python2.7/dist-packages/sh.py", line 1427, in __call__
return RunningCommand(cmd, call_args, stdin, stdout, stderr)
File "/usr/local/lib/python2.7/dist-packages/sh.py", line 767, in __init__
self.call_args, pipe, process_assign_lock)
File "/usr/local/lib/python2.7/dist-packages/sh.py", line 1784, in __init__
self._stdout_read_fd, self._stdout_write_fd = pty.openpty()
File "/usr/lib/python2.7/pty.py", line 29, in openpty
master_fd, slave_name = _open_terminal()
File "/usr/lib/python2.7/pty.py", line 70, in _open_terminal
raise os.error, 'out of pty devices'
OSError: out of pty devices
Target //src:foo_gen failed to build
Use --verbose_failures to see the command lines of failed build steps.
INFO: Elapsed time: 2.143s, Critical Path: 0.12s
INFO: 0 processes.
FAILED: Build did NOT complete successfully
I want to integrate a third-party project into my own. The third-party project is built with a Python script, so I would like to build it with a Bazel genrule.
Here is an example file list:
.
├── src
│ ├── BUILD
│ └── test.py
└── WORKSPACE
WORKSPACE is empty, BUILD is:
genrule(
    name = "foo_gen",
    srcs = glob(["**/*"]),
    outs = ["foo.bar"],
    cmd = "python $(location test.py)",
)
test.py is:
import sh
sh.touch("foo.bar")
And run:
bazel build //src:foo_gen
OS: Ubuntu 16.04
bazel: release 0.14.1
It looks like if you change the call to sh.touch("foo.bar", _tty_in=False, _tty_out=False) it works, but you'll still need a bit of modification to the genrule otherwise it won't produce output.
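For context, the traceback shows sh allocating a pseudo-terminal via pty.openpty(), and Bazel's sandbox provides no pty devices, hence the OSError. A minimal sketch of the patched test.py (assuming only that the sh package is installed):

import sh

# Force pipes instead of pseudo-terminals; the Bazel sandbox has no ptys.
# _tty_in/_tty_out are sh's special keyword arguments for exactly this.
sh.touch("foo.bar", _tty_in=False, _tty_out=False)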
I prefer to import pip dependencies using the Bazel Python rules, so I can create a tool for my genrule. This way, Bazel handles the pip requirement install and you don't have to chmod the test.py file.
load("#my_deps//:requirements.bzl", "requirement")
py_binary(
name = "foo_tool",
srcs = [
"test.py",
],
main = "test.py",
deps = [
requirement("sh"),
],
)
genrule(
    name = "foo_gen",
    outs = ["foo.bar"],
    cmd = """
python3 $(location //src:foo_tool)
cp foo.bar $@
""",
    tools = [":foo_tool"],
)
Note the required copy in the genrule command. It's a bit cleaner if your Python script can write to stdout; then you can just redirect the output into the file instead of adding a copy command, as shown below. See this for more info.
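For illustration, a hypothetical stdout-based variant of the tool (the content written is made up); with it, the genrule command could simply be python3 $(location //src:foo_tool) > $@:

import sys

def main():
    # Write the generated content to stdout; the genrule redirects it
    # into the declared output file with "> $@", so no copy is needed.
    sys.stdout.write("generated contents of foo.bar\n")

if __name__ == "__main__":
    main()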
My output with these changes:
INFO: Analysed target //src:foo_gen (0 packages loaded).
INFO: Found 1 target...
Target //src:foo_gen up-to-date:
bazel-genfiles/src/foo.bar
INFO: Elapsed time: 0.302s, Critical Path: 0.00s
INFO: 0 processes.
INFO: Build completed successfully, 1 total action
Related
I'm trying to create an Azure DevOps Pipeline in order to build and release a Python package under the Azure DevOps Artifacts section.
I've started creating a feed called "utils", then I've created my package and I've structured it like that:
.
├── src
│   ├── __init__.py
│   └── class.py
├── test
│   ├── __init__.py
│   └── test_class.py
├── .pypirc
├── azure-pipelines.yml
├── pyproject.toml
├── requirements.txt
└── setup.cfg
And this is the content of the files:
.pypirc
[distutils]
Index-servers =
    prelios-utils

[utils]
Repository = https://pkgs.dev.azure.com/OMIT/_packaging/utils/pypi/upload/
pyproject.toml
[build-system]
requires = [
    "setuptools>=42",
    "wheel"
]
build-backend = "setuptools.build_meta"
setup.cfg
[metadata]
name = my_utils
version = 0.1
author = Walter Tranchina
author_email = walter.tranchina@OMIT.com
description = A package containing [...]
long_description = file: README.md
long_description_content_type = text/markdown
url = OMIT.com
project_urls =
classifiers =
    Programming Language :: Python :: 3
    License :: OSI Approved :: MIT License
    Operating System :: OS Independent

[options]
package_dir =
    = src
packages = find:
python_requires = >=3.7
install_requires=

[options.packages.find]
where = src
azure-pipelines.yml
trigger:
- main

pool:
  vmImage: 'ubuntu-latest'
strategy:
  matrix:
    Python38:
      python.version: '3.8'

steps:
- task: UsePythonVersion@0
  inputs:
    versionSpec: '$(python.version)'
  displayName: 'Use Python $(python.version)'

- script: |
    python -m pip install --upgrade pip
  displayName: 'Install dependencies'

- script: |
    pip install twine wheel
  displayName: 'Install buildtools'

- script: |
    pip install pytest pytest-azurepipelines
    pytest
  displayName: 'pytest'

- script: |
    python -m build
  displayName: 'Artifact creation'

- script: |
    twine upload -r utils --config-file ./.pypirc dist/*
  displayName: 'Artifact Upload'
The problem I'm facing is that the pipeline gets stuck in the Artifact Upload stage for hours without completing.
Can someone please help me understand what's wrong?
Thanks!
[UPDATE]
I've updated my yml file as suggested in the answers:
- task: TwineAuthenticate@1
  displayName: 'Twine Authenticate'
  inputs:
    artifactFeed: 'utils'
And now I have this error:
2022-05-19T09:20:50.6726960Z ##[section]Starting: Artifact Upload
2022-05-19T09:20:50.6735745Z ==============================================================================
2022-05-19T09:20:50.6736081Z Task : Command line
2022-05-19T09:20:50.6736434Z Description : Run a command line script using Bash on Linux and macOS and cmd.exe on Windows
2022-05-19T09:20:50.6736788Z Version : 2.201.1
2022-05-19T09:20:50.6737008Z Author : Microsoft Corporation
2022-05-19T09:20:50.6737375Z Help : https://learn.microsoft.com/azure/devops/pipelines/tasks/utility/command-line
2022-05-19T09:20:50.6737859Z ==============================================================================
2022-05-19T09:20:50.8090380Z Generating script.
2022-05-19T09:20:50.8100662Z Script contents:
2022-05-19T09:20:50.8102321Z twine upload -r utils --config-file ./.pypirc dist/*
2022-05-19T09:20:50.8102824Z ========================== Starting Command Output ===========================
2022-05-19T09:20:50.8129029Z [command]/usr/bin/bash --noprofile --norc /home/vsts/work/_temp/706c12ef-da25-44b0-b1fc-5ab83e7e0bf9.sh
2022-05-19T09:20:51.1178721Z Uploading distributions to
2022-05-19T09:20:51.1180490Z https://pkgs.dev.azure.com/OMIT/_packaging/utils/pypi/upload/
2022-05-19T09:20:27.0860014Z Traceback (most recent call last):
2022-05-19T09:20:27.0861203Z File "/opt/hostedtoolcache/Python/3.8.12/x64/bin/twine", line 8, in <module>
2022-05-19T09:20:27.0862081Z sys.exit(main())
2022-05-19T09:20:27.0863965Z File "/opt/hostedtoolcache/Python/3.8.12/x64/lib/python3.8/site-packages/twine/__main__.py", line 33, in main
2022-05-19T09:20:27.0865080Z error = cli.dispatch(sys.argv[1:])
2022-05-19T09:20:27.0866638Z File "/opt/hostedtoolcache/Python/3.8.12/x64/lib/python3.8/site-packages/twine/cli.py", line 124, in dispatch
2022-05-19T09:20:27.0867670Z return main(args.args)
2022-05-19T09:20:27.0869183Z File "/opt/hostedtoolcache/Python/3.8.12/x64/lib/python3.8/site-packages/twine/commands/upload.py", line 198, in main
2022-05-19T09:20:27.0870362Z return upload(upload_settings, parsed_args.dists)
2022-05-19T09:20:27.0871990Z File "/opt/hostedtoolcache/Python/3.8.12/x64/lib/python3.8/site-packages/twine/commands/upload.py", line 127, in upload
2022-05-19T09:20:27.0873239Z repository = upload_settings.create_repository()
2022-05-19T09:20:27.0875392Z File "/opt/hostedtoolcache/Python/3.8.12/x64/lib/python3.8/site-packages/twine/settings.py", line 329, in create_repository
2022-05-19T09:20:27.0876447Z self.username,
2022-05-19T09:20:27.0877911Z File "/opt/hostedtoolcache/Python/3.8.12/x64/lib/python3.8/site-packages/twine/settings.py", line 131, in username
2022-05-19T09:20:27.0879043Z return cast(Optional[str], self.auth.username)
2022-05-19T09:20:27.0880583Z File "/opt/hostedtoolcache/Python/3.8.12/x64/lib/python3.8/site-packages/twine/auth.py", line 34, in username
2022-05-19T09:20:27.0881640Z return utils.get_userpass_value(
2022-05-19T09:20:27.0883208Z File "/opt/hostedtoolcache/Python/3.8.12/x64/lib/python3.8/site-packages/twine/utils.py", line 248, in get_userpass_value
2022-05-19T09:20:27.0884302Z value = prompt_strategy()
2022-05-19T09:20:27.0886234Z File "/opt/hostedtoolcache/Python/3.8.12/x64/lib/python3.8/site-packages/twine/auth.py", line 85, in username_from_keyring_or_prompt
2022-05-19T09:20:27.0887440Z return self.prompt("username", input)
2022-05-19T09:20:27.0888964Z File "/opt/hostedtoolcache/Python/3.8.12/x64/lib/python3.8/site-packages/twine/auth.py", line 96, in prompt
2022-05-19T09:20:27.0890017Z return how(f"Enter your {what}: ")
2022-05-19T09:20:27.0890786Z EOFError: EOF when reading a line
2022-05-19T09:20:27.1372189Z ##[error]Bash exited with code 'null'.
2022-05-19T09:20:27.1745024Z ##[error]The operation was canceled.
2022-05-19T09:20:27.1749049Z ##[section]Finishing: Artifact Upload
Seems like twine is waiting for something... :/
I guess this is because you are missing a Python Twine Upload Authenticate task.
- task: TwineAuthenticate@1
  inputs:
    artifactFeed: 'MyTestFeed'
If you are using a project level feed, the value of artifactFeed should be {project name}/{feed name}.
If you are using an organization level feed, the value of artifactFeed should be {feed name}.
A simpler way is to click the gray "setting" button under the task and select your feed from the drop-down list.
I've found the solution after many attempts...
First I created a Service Connection in Azure DevOps for Python, containing an API key I had previously generated.
Then I edited the YAML file:
- task: TwineAuthenticate@1
  displayName: 'Twine Authenticate'
  inputs:
    pythonUploadServiceConnection: 'PythonUpload'

- script: |
    python -m twine upload --skip-existing --verbose -r utils --config-file $(PYPIRC_PATH) dist/*
  displayName: 'Artifact Upload'
The key was using the variable $(PYPIRC_PATH), which is automatically set by the previous task. The .pypirc file is ignored by the process, so it can be deleted!
Hope it helps!
TL;DR: How can I set up my GitLab test pipeline so that the tests also run locally in VS Code?
I'm very new to GitLab pipelines, so please forgive me if the question is amateurish. I have a GitLab repo set up online, and I'm using VS Code to develop locally. I've created a new pipeline, and I want to make sure all my unit tests (written with PyTest) run anytime I make a commit.
The issue is that even though I use the same setup.py file in both places (obviously), I can't get both VS Code testing and the GitLab pipeline tests to work at the same time. I'm doing an import for my tests, and if I import like
...
from external_workforce import misc_tools
# I want to test functions in this misc_tools module
...
Then it works on GitLab, but not in VS Code, which gives an error during test discovery, namely: ModuleNotFoundError: No module named 'external_workforce'. But if I import (in my test_tools.py file, see location below) like this:
...
from hr_datapool.external_workforce import misc_tools
...
It works in VS Code, but now GitLab complains, saying ModuleNotFoundError: No module named 'hr_datapool'.
I think the relevant info is the following; please ask if more is needed!
My file structure is:
.
├── requirements.txt
├── setup.py
└── hr_datapool
    ├── external_workforce
    │   ├── __init__.py
    │   ├── misc_tools.py
    │   └── tests
    │       └── test_tools.py
    ├── other_module
    └── ...
In my pipeline editor (the .gitlab-ci.yml file) I have:
image: python:3.9.7

variables:
  PIP_CACHE_DIR: "$CI_PROJECT_DIR/.cache/pip"

cache:
  paths:
    - .cache/pip
    - venv/

before_script:
  - python --version  # For debugging
  - pip install virtualenv
  - virtualenv venv
  - source venv/bin/activate
  - pip install -r requirements.txt

test:
  script:
    - pytest --pyargs hr_datapool  #- python setup.py test

run:
  script:
    - python setup.py bdist_wheel
  artifacts:
    paths:
      - dist/*.whl
And finally, my setup.py is:
import re
from unittest import removeResult
from setuptools import setup, find_packages

with open('requirements.txt') as f:
    requirements = f.read().splitlines()
    for req in ['wheel', 'bar']:
        requirements.append(req)

setup(
    name='hr-datapool',
    version='0.1',
    ...
    packages=find_packages(),
    install_requires=requirements,
)
Basically, the question is: How can I set up my GitLab test pipeline so that the tests also run locally on VS Code? Thank you!
UPDATE:
Adding the full trace coming from VS Code:
> conda run -n base --no-capture-output --live-stream python ~/.vscode/extensions/ms-python.python-2022.2.1924087327/pythonFiles/get_output_via_markers.py ~/.vscode/extensions/ms-python.python-2022.2.1924087327/pythonFiles/testing_tools/run_adapter.py discover pytest -- --rootdir "." -s --cache-clear hr_datapool
cwd: .
[ERROR 2022-2-23 9:2:4.500]: Error discovering pytest tests:
[r [Error]: ============================= test session starts ==============================
platform darwin -- Python 3.9.7, pytest-6.2.4, py-1.10.0, pluggy-0.13.1
rootdir: /Users/myuser/Documents/myfolder
plugins: anyio-2.2.0
collected 0 items / 1 error
==================================== ERRORS ====================================
_____ ERROR collecting hr_datapool/external_workforce/tests/test_tools.py ______
ImportError while importing test module '/Users/myuser/Documents/myfolder/hr_datapool/external_workforce/tests/test_tools.py'.
Hint: make sure your test modules/packages have valid Python names.
Traceback:
../../opt/anaconda3/lib/python3.9/importlib/__init__.py:127: in import_module
return _bootstrap._gcd_import(name[level:], package, level)
hr_datapool/external_workforce/tests/test_tools.py:2: in <module>
from external_workforce import misc_tools
E ModuleNotFoundError: No module named 'external_workforce'
=========================== short test summary info ============================
ERROR hr_datapool/external_workforce/tests/test_tools.py
!!!!!!!!!!!!!!!!!!!! Interrupted: 1 error during collection !!!!!!!!!!!!!!!!!!!!
===================== no tests collected, 1 error in 0.08s =====================
Traceback (most recent call last):
File "/Users/myuser/.vscode/extensions/ms-python.python-2022.2.1924087327/pythonFiles/get_output_via_markers.py", line 26, in <module>
runpy.run_path(module, run_name="__main__")
File "/Users/myuser/opt/anaconda3/lib/python3.9/runpy.py", line 268, in run_path
return _run_module_code(code, init_globals, run_name,
File "/Users/myuser/opt/anaconda3/lib/python3.9/runpy.py", line 97, in _run_module_code
_run_code(code, mod_globals, init_globals,
File "/Users/myuser/opt/anaconda3/lib/python3.9/runpy.py", line 87, in _run_code
exec(code, run_globals)
File "/Users/myuser/.vscode/extensions/ms-python.python-2022.2.1924087327/pythonFiles/testing_tools/run_adapter.py", line 22, in <module>
main(tool, cmd, subargs, toolargs)
File "/Users/myuser/.vscode/extensions/ms-python.python-2022.2.1924087327/pythonFiles/testing_tools/adapter/__main__.py", line 100, in main
parents, result = run(toolargs, **subargs)
File "/Users/myuser/.vscode/extensions/ms-python.python-2022.2.1924087327/pythonFiles/testing_tools/adapter/pytest/_discovery.py", line 44, in discover
raise Exception("pytest discovery failed (exit code {})".format(ec))
Exception: pytest discovery failed (exit code 2)
ERROR conda.cli.main_run:execute(33): Subprocess for 'conda run ['python', '/Users/myuser/.vscode/extensions/ms-python.python-2022.2.1924087327/pythonFiles/get_output_via_markers.py', '/Users/A111086670/.vscode/extensions/ms-python.python-2022.2.1924087327/pythonFiles/testing_tools/run_adapter.py', 'discover', 'pytest', '--', '--rootdir', '/Users/myuser/Documents/myfolder', '-s', '--cache-clear', 'hr_datapool']' command failed. (See above for error)
at ChildProcess.<anonymous> (/Users/myuser/.vscode/extensions/ms-python.python-2022.2.1924087327/out/client/extension.js:32:39235)
at Object.onceWrapper (events.js:422:26)
at ChildProcess.emit (events.js:315:20)
at maybeClose (internal/child_process.js:1048:16)
at Process.ChildProcess._handle.onexit (internal/child_process.js:288:5)]
The PYTHONPATH caused the problem.
On GitLab, the parent folder of external_workforce (i.e. the path of hr_datapool) is on the PYTHONPATH, while in VS Code it is the parent folder of hr_datapool that is on the PYTHONPATH.
Are you running the tests in the terminal in VS Code? And have you added this to the settings.json file?
"terminal.integrated.env.windows": {
"PYTHONPATH": "${workspaceFolder};"
},
Then you can execute pytest in the terminal in VS Code. In GitLab, however, you have not configured this; instead you refer to the package as hr-datapool (via pytest --pyargs hr_datapool and setup(name='hr-datapool', ...)), so you get the error message.
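A common complementary trick (not part of this answer, just a sketch) is a conftest.py at the repository root, which pytest imports before collecting tests, so the project root lands on sys.path in both VS Code and GitLab:

import os
import sys

# Put the repository root (the folder containing hr_datapool) on sys.path
# before test collection starts.
sys.path.insert(0, os.path.dirname(os.path.abspath(__file__)))

With that in place, from hr_datapool.external_workforce import misc_tools resolves in both environments.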
I compiled and installed everything from the ROS wiki, and Noetic was installed properly, which I checked by running the roscore command; Gazebo was installed correctly too. Then, to work with TurtleBot, I created a catkin workspace and cloned the git repositories following this tutorial: https://emanual.robotis.com/docs/en/platform/turtlebot3/simulation/
After running the cd ~/catkin_ws && catkin_make command I get the following error:
Base path: /home/areebpc/catkin_ws
Source space: /home/areebpc/catkin_ws/src
Build space: /home/areebpc/catkin_ws/build
Devel space: /home/areebpc/catkin_ws/devel
Install space: /home/areebpc/catkin_ws/install
####
#### Running command: "cmake /home/areebpc/catkin_ws/src -DCATKIN_DEVEL_PREFIX=/home/areebpc/catkin_ws/devel -DCMAKE_INSTALL_PREFIX=/home/areebpc/catkin_ws/install -G Unix Makefiles" in "/home/areebpc/catkin_ws/build"
####
-- Using CATKIN_DEVEL_PREFIX: /home/areebpc/catkin_ws/devel
-- Using CMAKE_PREFIX_PATH: /home/areebpc/catkin_ws/devel;/opt/ros/noetic
-- This workspace overlays: /home/areebpc/catkin_ws/devel;/opt/ros/noetic
-- Found PythonInterp: /usr/bin/python3 (found suitable version "3.8.5", minimum required is "3")
-- Using PYTHON_EXECUTABLE: /usr/bin/python3
-- Using Debian Python package layout
-- Using empy: /usr/lib/python3/dist-packages/em.py
-- Using CATKIN_ENABLE_TESTING: ON
-- Call enable_testing()
-- Using CATKIN_TEST_RESULTS_DIR: /home/areebpc/catkin_ws/build/test_results
-- Forcing gtest/gmock from source, though one was otherwise available.
-- Found gtest sources under '/usr/src/googletest': gtests will be built
-- Found gmock sources under '/usr/src/googletest': gmock will be built
-- Found PythonInterp: /usr/bin/python3 (found version "3.8.5")
-- Using Python nosetests: /usr/bin/nosetests3
-- catkin 0.8.9
-- BUILD_SHARED_LIBS is on
-- BUILD_SHARED_LIBS is on
/opt/ros/noetic/share/catkin/cmake/em/order_packages.cmake.em:23: error: <class 'RuntimeError'>: Multiple packages found with the same name "turtlebot3":
- share/turtlebot3
- turtlebot3/turtlebot3
Multiple packages found with the same name "turtlebot3_bringup":
- share/turtlebot3_bringup
- turtlebot3/turtlebot3_bringup
Multiple packages found with the same name "turtlebot3_description":
- share/turtlebot3_description
- turtlebot3/turtlebot3_description
Multiple packages found with the same name "turtlebot3_example":
- share/turtlebot3_example
- turtlebot3/turtlebot3_example
Multiple packages found with the same name "turtlebot3_msgs":
- share/turtlebot3_msgs
- turtlebot3_msgs
Multiple packages found with the same name "turtlebot3_navigation":
- share/turtlebot3_navigation
- turtlebot3/turtlebot3_navigation
Multiple packages found with the same name "turtlebot3_slam":
- share/turtlebot3_slam
- turtlebot3/turtlebot3_slam
Multiple packages found with the same name "turtlebot3_teleop":
- share/turtlebot3_teleop
- turtlebot3/turtlebot3_teleop
Traceback (most recent call last):
File "/usr/lib/python3/dist-packages/em.py", line 3302, in <module>
if __name__ == '__main__': main()
File "/usr/lib/python3/dist-packages/em.py", line 3300, in main
invoke(sys.argv[1:])
File "/usr/lib/python3/dist-packages/em.py", line 3283, in invoke
interpreter.wrap(interpreter.file, (file, name))
File "/usr/lib/python3/dist-packages/em.py", line 2295, in wrap
self.fail(e)
File "/usr/lib/python3/dist-packages/em.py", line 2284, in wrap
callable(*args)
File "/usr/lib/python3/dist-packages/em.py", line 2359, in file
self.safe(scanner, done, locals)
File "/usr/lib/python3/dist-packages/em.py", line 2401, in safe
self.parse(scanner, locals)
File "/usr/lib/python3/dist-packages/em.py", line 2421, in parse
token.run(self, locals)
File "/usr/lib/python3/dist-packages/em.py", line 1425, in run
interpreter.execute(self.code, locals)
File "/usr/lib/python3/dist-packages/em.py", line 2595, in execute
_exec(statements, self.globals, locals)
File "<string>", line 17, in <module>
File "/usr/lib/python3/dist-packages/catkin_pkg/topological_order.py", line 147, in topological_order
for path, package in find_packages(space).items():
File "/usr/lib/python3/dist-packages/catkin_pkg/packages.py", line 96, in find_packages
raise RuntimeError('\n'.join(duplicates))
RuntimeError: Multiple packages found with the same name "turtlebot3":
- share/turtlebot3
- turtlebot3/turtlebot3
Multiple packages found with the same name "turtlebot3_bringup":
- share/turtlebot3_bringup
- turtlebot3/turtlebot3_bringup
Multiple packages found with the same name "turtlebot3_description":
- share/turtlebot3_description
- turtlebot3/turtlebot3_description
Multiple packages found with the same name "turtlebot3_example":
- share/turtlebot3_example
- turtlebot3/turtlebot3_example
Multiple packages found with the same name "turtlebot3_msgs":
- share/turtlebot3_msgs
- turtlebot3_msgs
Multiple packages found with the same name "turtlebot3_navigation":
- share/turtlebot3_navigation
- turtlebot3/turtlebot3_navigation
Multiple packages found with the same name "turtlebot3_slam":
- share/turtlebot3_slam
- turtlebot3/turtlebot3_slam
Multiple packages found with the same name "turtlebot3_teleop":
- share/turtlebot3_teleop
- turtlebot3/turtlebot3_teleop
CMake Error at /opt/ros/noetic/share/catkin/cmake/safe_execute_process.cmake:11 (message):
execute_process(/home/areebpc/catkin_ws/build/catkin_generated/env_cached.sh
"/usr/bin/python3" "/usr/lib/python3/dist-packages/em.py" "--raw-errors"
"-F" "/home/areebpc/catkin_ws/build/catkin_generated/order_packages.py"
"-o" "/home/areebpc/catkin_ws/build/catkin_generated/order_packages.cmake"
"/opt/ros/noetic/share/catkin/cmake/em/order_packages.cmake.em") returned
error code 1
Call Stack (most recent call first):
/opt/ros/noetic/share/catkin/cmake/em_expand.cmake:25 (safe_execute_process)
/opt/ros/noetic/share/catkin/cmake/catkin_workspace.cmake:35 (em_expand)
CMakeLists.txt:69 (catkin_workspace)
-- Configuring incomplete, errors occurred!
See also "/home/areebpc/catkin_ws/build/CMakeFiles/CMakeOutput.log".
See also "/home/areebpc/catkin_ws/build/CMakeFiles/CMakeError.log".
Invoking "cmake" failed*
I also checked CMakeOutput.log and CMakeError.log, but I could not see where the problem lies.
While installing, are you sure you are installing the Noetic versions?
cd ~/catkin_ws/src/
git clone -b noetic-devel https://github.com/ROBOTIS-GIT/turtlebot3_simulations.git
cd ~/catkin_ws && catkin_make
If you followed these steps, it should work. You could try deleting and re-cloning the turtlebot3 files. Make sure the setup is Noetic.
Locally I have this:
from shapely.geometry import Point
<...>
class GeoDataIngestion:
    def parse_method(self, string_input):
        Place = Point(float(values[2]), float(values[3]))
<...>
I run this with Python 2.7 and all goes well.
After that, I try to test it with the dataflow runner but while running I got this error:
NameError: global name 'Point' is not defined
The pipeline:
geo_data = (raw_data
    | 'Geo data transform' >> beam.Map(lambda s: geo_ingestion.parse_method(s)))
I have read other posts and I think this should work, but I'm not sure if there is something special about Google Dataflow here.
I also tried:
import shapely.geometry
<...>
Place = shapely.geometry.Point(float(values[2]), float(values[3]))
With the same result
NameError: global name 'shapely' is not defined
Any idea?
In Google Cloud, if I try it in my virtual environment, I can do it without any problem:
(env) ...@cloudshell:~ ()$ python
Python 2.7.13 (default, Sep 26 2018, 18:42:22)
[GCC 6.3.0 20170516] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> from shapely.geometry import Point
>>> Var = Point(-5.020751953125, 39.92237576385941)
EXTRA:
Error using requirements.txt
Collecting Shapely==1.6.4.post1 (from -r req.txt (line 2))
Using cached https://files.pythonhosted.org/packages/7d/3c/0f09841db07aabf9cc387662be646f181d07ed196e6f60ce8be5f4a8f0bd/Shapely-1.6.4.post1.tar.gz
Saved c:\<...>\shapely-1.6.4.post1.tar.gz
Complete output from command python setup.py egg_info:
Traceback (most recent call last):
File "<string>", line 1, in <module>
File "c:\<...>\temp\pip-download-kpg5ca\Shapely\setup.py", line 80, in <module>
from shapely._buildcfg import geos_version_string, geos_version, \
File "shapely\_buildcfg.py", line 200, in <module>
lgeos = CDLL("geos_c.dll")
File "C:\Python27\Lib\ctypes\__init__.py", line 366, in __init__
self._handle = _dlopen(self._name, mode)
WindowsError: [Error 126] No se puede encontrar el módulo especificado ("The specified module could not be found")
Error using setup.py
Changing setup.py like this:
CUSTOM_COMMANDS = [
    ['apt-get', 'update'],
    ['apt-get', '--assume-yes', 'install', 'libgeos-dev'],
    ['pip', 'install', 'Shapely'],
    ['echo', 'Custom command worked!']
]
The result is as if no package had been installed, because I get the error from the beginning:
NameError: global name 'Point' is not defined
setup.py file:
from __future__ import absolute_import
from __future__ import print_function

import subprocess
from distutils.command.build import build as _build

import setuptools


class build(_build):  # pylint: disable=invalid-name
    sub_commands = _build.sub_commands + [('CustomCommands', None)]


CUSTOM_COMMANDS = [
    ['apt-get', 'update'],
    ['apt-get', '--assume-yes', 'install', 'libgeos-dev'],
    ['pip', 'install', 'Shapely']]


class CustomCommands(setuptools.Command):

    def initialize_options(self):
        pass

    def finalize_options(self):
        pass

    def RunCustomCommand(self, command_list):
        print('Running command: %s' % command_list)
        p = subprocess.Popen(
            command_list,
            stdin=subprocess.PIPE, stdout=subprocess.PIPE, stderr=subprocess.STDOUT)
        # Can use communicate(input='y\n'.encode()) if the command run requires
        # some confirmation.
        stdout_data, _ = p.communicate()
        print('Command output: %s' % stdout_data)
        if p.returncode != 0:
            raise RuntimeError(
                'Command %s failed: exit code: %s' % (command_list, p.returncode))

    def run(self):
        for command in CUSTOM_COMMANDS:
            self.RunCustomCommand(command)


REQUIRED_PACKAGES = ['Shapely']

setuptools.setup(
    name='dataflow',
    version='0.0.1',
    description='Dataflow set workflow package.',
    install_requires=REQUIRED_PACKAGES,
    packages=setuptools.find_packages(),
    cmdclass={
        'build': build,
        'CustomCommands': CustomCommands,
    }
)
pipeline options:
pipeline_options = PipelineOptions()
pipeline_options.view_as(StandardOptions).streaming = True
pipeline_options.view_as(SetupOptions).save_main_session = True
pipeline_options.view_as(SetupOptions).setup_file = 'C:\<...>\setup.py'
with beam.Pipeline(options=pipeline_options) as p:
The call:
python -m dataflow --project XXX --temp_location gs://YYY --runner DataflowRunner --region europe-west1 --setup_file C:\<...>\setup.py
The beginning of the log (before Dataflow waits for the data):
INFO:root:Defaulting to the temp_location as staging_location: gs://iotbucketdetector/test/prueba
C:\Users\<...>~1\Desktop\PROYEC~2\env\lib\site-packages\apache_beam\runners\dataflow\dataflow_runner.py:816: DeprecationWarning: options is deprecated since First stable release.. References to <pipeline>.options will
not be supported
transform_node.inputs[0].pipeline.options.view_as(StandardOptions))
INFO:root:Starting GCS upload to gs://<...>-1120074505-586000.1542699905.588000/pipeline.pb...
INFO:oauth2client.transport:Attempting refresh to obtain initial access_token
INFO:oauth2client.client:Refreshing access_token
INFO:root:Completed GCS upload to gs://<...>-1120074505-586000.1542699905.588000/pipeline.pb
INFO:root:Executing command: ['C:\\Users\\<...>~1\\Desktop\\PROYEC~2\\env\\Scripts\\python.exe', 'setup.py', 'sdist', '--dist-dir', 'c:\\users\\<...>~1\\appdata\\local\\temp\\tmpakq8bs']
running sdist
running egg_info
writing requirements to dataflow.egg-info\requires.txt
writing dataflow.egg-info\PKG-INFO
writing top-level names to dataflow.egg-info\top_level.txt
writing dependency_links to dataflow.egg-info\dependency_links.txt
reading manifest file 'dataflow.egg-info\SOURCES.txt'
writing manifest file 'dataflow.egg-info\SOURCES.txt'
warning: sdist: standard file not found: should have one of README, README.rst, README.txt, README.md
running check
warning: check: missing required meta-data: url
warning: check: missing meta-data: either (author and author_email) or (maintainer and maintainer_email) must be supplied
creating dataflow-0.0.1
creating dataflow-0.0.1\dataflow.egg-info
copying files to dataflow-0.0.1...
copying setup.py -> dataflow-0.0.1
copying dataflow.egg-info\PKG-INFO -> dataflow-0.0.1\dataflow.egg-info
copying dataflow.egg-info\SOURCES.txt -> dataflow-0.0.1\dataflow.egg-info
copying dataflow.egg-info\dependency_links.txt -> dataflow-0.0.1\dataflow.egg-info
copying dataflow.egg-info\requires.txt -> dataflow-0.0.1\dataflow.egg-info
copying dataflow.egg-info\top_level.txt -> dataflow-0.0.1\dataflow.egg-info
Writing dataflow-0.0.1\setup.cfg
Creating tar archive
removing 'dataflow-0.0.1' (and everything under it)
INFO:root:Starting GCS upload to gs://<...>-1120074505-586000.1542699905.588000/workflow.tar.gz...
INFO:root:Completed GCS upload to gs://<...>-1120074505-586000.1542699905.588000/workflow.tar.gz
INFO:root:Starting GCS upload to gs://<...>-1120074505-586000.1542699905.588000/pickled_main_session...
INFO:root:Completed GCS upload to gs://<...>-1120074505-586000.1542699905.588000/pickled_main_session
INFO:root:Downloading source distribtution of the SDK from PyPi
INFO:root:Executing command: ['C:\\Users\\<...>~1\\Desktop\\PROYEC~2\\env\\Scripts\\python.exe', '-m', 'pip', 'download', '--dest', 'c:\\users\\<...>~1\\appdata\\local\\temp\\tmpakq8bs', 'apache-beam==2.5.0', '--no-d
eps', '--no-binary', ':all:']
Collecting apache-beam==2.5.0
Using cached https://files.pythonhosted.org/packages/c6/96/56469c57cb043f36bfdd3786c463fbaeade1e8fcf0593ec7bc7f99e56d38/apache-beam-2.5.0.zip
Saved c:\users\<...>~1\appdata\local\temp\tmpakq8bs\apache-beam-2.5.0.zip
Successfully downloaded apache-beam
INFO:root:Staging SDK sources from PyPI to gs://<...>-1120074505-586000.1542699905.588000/dataflow_python_sdk.tar
INFO:root:Starting GCS upload to gs://<...>-1120074505-586000.1542699905.588000/dataflow_python_sdk.tar...
INFO:root:Completed GCS upload to gs://<...>-1120074505-586000.1542699905.588000/dataflow_python_sdk.tar
INFO:root:Downloading binary distribtution of the SDK from PyPi
INFO:root:Executing command: ['C:\\Users\\<...>~1\\Desktop\\PROYEC~2\\env\\Scripts\\python.exe', '-m', 'pip', 'download', '--dest', 'c:\\users\\<...>~1\\appdata\\local\\temp\\tmpakq8bs', 'apache-beam==2.5.0', '--no-d
eps', '--only-binary', ':all:', '--python-version', '27', '--implementation', 'cp', '--abi', 'cp27mu', '--platform', 'manylinux1_x86_64']
Collecting apache-beam==2.5.0
Using cached https://files.pythonhosted.org/packages/ff/10/a59ba412f71fb65412ec7a322de6331e19ec8e75ca45eba7a0708daae31a/apache_beam-2.5.0-cp27-cp27mu-manylinux1_x86_64.whl
Saved c:\users\<...>~1\appdata\local\temp\tmpakq8bs\apache_beam-2.5.0-cp27-cp27mu-manylinux1_x86_64.whl
Successfully downloaded apache-beam
INFO:root:Staging binary distribution of the SDK from PyPI to gs://<...>-1120074505-586000.1542699905.588000/apache_beam-2.5.0-cp27-cp27mu-manylinux1_x86_64.whl
INFO:root:Starting GCS upload to gs://<...>-1120074505-586000.1542699905.588000/apache_beam-2.5.0-cp27-cp27mu-manylinux1_x86_64.whl...
INFO:root:Completed GCS upload to gs://<...>-1120074505-586000.1542699905.588000/apache_beam-2.5.0-cp27-cp27mu-manylinux1_x86_64.whl
INFO:root:Create job: <Job
createTime: u'2018-11-20T07:45:28.050865Z'
currentStateTime: u'1970-01-01T00:00:00Z'
id: u'2018-11-19_23_45_27-14221834310382472741'
location: u'europe-west1'
name: u'beamapp-<...>-1120074505-586000'
projectId: u'poc-cloud-209212'
stageStates: []
steps: []
tempFiles: []
type: TypeValueValuesEnum(JOB_TYPE_STREAMING, 2)>
This is because you need to tell Dataflow to install the packages you want.
Brief documentation is here.
Simply put, for a PyPI package like shapely, you can do the following to ensure all dependencies are installed:
pip freeze > requirements.txt
Remove all unrelated packages from requirements.txt
Run your pipeline with --requirements_file requirements.txt (the sketch below shows the programmatic equivalent)
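As a hedged sketch, the same flag can also be set programmatically; the Beam Python SDK exposes it as the requirements_file attribute on SetupOptions:

from apache_beam.options.pipeline_options import PipelineOptions, SetupOptions

options = PipelineOptions()
# Equivalent to the --requirements_file command-line flag: the listed
# packages are staged and installed on each Dataflow worker.
options.view_as(SetupOptions).requirements_file = 'requirements.txt'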
Or, going further, if you want to do something like installing a Linux package via apt-get or using your own Python module, take a look at this official example. You need to set up a setup.py for this and change your pipeline command to use
--setup_file setup.py.
For PyPI modules, use REQUIRED_PACKAGES as in the example:
REQUIRED_PACKAGES = [
    'numpy',
    'shapely',
]
If you are using pipeline options, then add setup.py like this:
pipeline_options = {
    'project': PROJECT,
    'staging_location': 'gs://' + BUCKET + '/staging',
    'runner': 'DataflowRunner',
    'job_name': 'test',
    'temp_location': 'gs://' + BUCKET + '/temp',
    'save_main_session': True,
    'setup_file': '.\setup.py'
}

options = PipelineOptions.from_dictionary(pipeline_options)
p = beam.Pipeline(options=options)
Import inside the function + setup.py:
class GeoDataIngestion:
    def parse_method(self, string_input):
        from shapely.geometry import Point
        place = Point(float(values[2]), float(values[3]))
setup.py with:
REQUIRED_PACKAGES = ['shapely']
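For completeness, a runnable sketch of this approach; the values parsing is an assumption, since the question elides it:

class GeoDataIngestion:
    def parse_method(self, string_input):
        # Import inside the method so it runs on the Dataflow worker,
        # after setup.py has installed shapely there.
        from shapely.geometry import Point
        values = string_input.split(',')
        return Point(float(values[2]), float(values[3]))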
I'm currently trying to cross-compile scipy for OpenEmbedded, but the bitbake build fails with this error:
NOTE: Executing SetScene Tasks
NOTE: Executing RunQueue Tasks
ERROR: python-scipy-1.0.0-r0 do_compile: python setup.py build execution failed.
ERROR: python-scipy-1.0.0-r0 do_compile: Function failed: do_compile (log file is located at /home/somewhere_in_my_home_folder/build_dir/tmp/work/core2-64-idp-linux/python-scipy/1.0.0-r0/temp/log.do_compile.2788)
ERROR: Logfile of failure stored in: /home/somewhere_in_my_home_folder/build_dir/tmp/work/core2-64-idp-linux/python-scipy/1.0.0-r0/temp/log.do_compile.2788
Log data follows:
| DEBUG: Executing shell function do_compile
| Traceback (most recent call last):
| File "setup.py", line 418, in <module>
| setup_package()
| File "setup.py", line 398, in setup_package
| from numpy.distutils.core import setup
| ImportError: No module named numpy.distutils.core
| ERROR: python setup.py build execution failed.
| WARNING: exit code 1 from a shell command.
| ERROR: Function failed: do_compile (log file is located at /home/somewhere_in_my_home_folder/build_dir/tmp/work/core2-64-idp-linux/python-scipy/1.0.0-r0/temp/log.do_compile.2788)
ERROR: Task (/home/somewhere_in_my_home_folder/recipes-ros/python-scipy/python-scipy_1.0.0.bb:do_compile) failed with exit code '1'
I have already spent several hours searching for a recipe that works, but without any success. The only thing I found about this error was here.
My recipe looks like this:
DESCRIPTION = "SciPy"
SECTION = "devel/python"
LICENSE = "CLOSED"
PYPI_PACKAGE = "scipy"
inherit pypi setuptools distutils
DEPENDS_${PN} = "python-numpy python-setuptools python-distutils"
RDEPENDS_${PN} = "python-numpy python-setuptools python-distutils"
S = "${WORKDIR}/scipy-1.0.0"
SRC_URI[md5sum] = "53fa34bd3733a9a4216842b6000f7316"
SRC_URI[sha256sum] = "87ea1f11a0e9ec08c264dc64551d501fa307289460705f6fccd84cbfc7926d10"
Any ideas on how to cross-compile scipy or how to fix this error?
Edit 1:
I changed my recipe to:
DESCRIPTION = "SciPy"
SECTION = "devel/python"
LICENSE = "CLOSED"
PYPI_PACKAGE = "scipy"
DEPENDS_${PN} = "python-numpy python-setuptools python-distutils"
RDEPENDS_${PN} = "python-numpy python-setuptools python-distutils"
S = "${WORKDIR}/scipy-1.0.0"
PACKAGECONFIG[python2] = "-DPYTHON2_NUMPY_INCLUDE_DIRS:PATH=${STAGING_LIBDIR}/${PYTHON_DIR}/site-packages/numpy/core/include,,python-numpy,"
SRC_URI[md5sum] = "53fa34bd3733a9a4216842b6000f7316"
SRC_URI[sha256sum] = "87ea1f11a0e9ec08c264dc64551d501fa307289460705f6fccd84cbfc7926d10"
FILES_python-scipy+="/usr/lib/* /usr/lib/python2.7/*"
FILES_python-scipy-dev+="/usr/share/pkgconfig /usr/lib/pkgconfig /usr/lib/python2.7/site-packages/*.la "
FILES_python-scipy-staticdev+="/usr/lib/python2.7/site-packages/*.a "
inherit pypi ${@bb.utils.contains('PACKAGECONFIG', 'python2', 'distutils-base', '', d)}
but now I get an error while building the main image:
No package python-scipy available.
Error: Unable to find a match
That kind of recipe should be enough to get SciPy to work:
SUMMARY = "Scientific Library for Python"
SECTION = "devel/python"
HOMEPAGE = "https://pypi.python.org/pypi/scipy"
LICENSE = "BSD-3-Clause"
LIC_FILES_CHKSUM = "file://LICENSE.txt;md5=d0db8f4148a3d5534cfb93be78f9287c"
PYPI_PACKAGE="scipy"
SRC_URI[md5sum] = "53fa34bd3733a9a4216842b6000f7316"
SRC_URI[sha256sum] = "87ea1f11a0e9ec08c264dc64551d501fa307289460705f6fccd84cbfc7926d10"
inherit pypi setuptools distutils
RDEPENDS_${PN} += "python-core python-numpy python-distutils"
DEPENDS += "python-numpy"
If it fails, you can try to replace
DEPENDS += "python-numpy" with DEPENDS += "python-numpy-native". And if it still fails, you should create an issue on the SciPy GitHub or try to patch the setup files; you can see an example here.