Bitbucket pipeline: python3.7 not found! Try the pythonBin option

I have the Bitbucket pipeline below:
image: node:11.13.0-alpine
pipelines:
  branches:
    master:
      - step:
          caches:
            - node
          script:
            - apk add python py-pip python3
            - npm install -g serverless
            - serverless config credentials --stage dev --provider aws --key $AWS_ACCESS_KEY_ID --secret $AWS_SECRET_ACCESS_KEY
            - cd src/rsc_user
            - pip install -r requirements.txt
            - sls plugin install -n serverless-python-requirements
            - sls plugin install -n serverless-wsgi
            - npm i serverless-package-external --save-dev
            - npm install serverless-domain-manager --save-dev
            - serverless deploy --stage dev
It throws this error:
Error --------------------------------------------------
Error: python3.7 not found! Try the pythonBin option.
at pipAcceptsSystem (/opt/atlassian/pipelines/agent/build/src/rsc_user/node_modules/serverless-python-requirements/lib/pip.js:100:13)
at installRequirements (/opt/atlassian/pipelines/agent/build/src/rsc_user/node_modules/serverless-python-requirements/lib/pip.js:173:9)
at installRequirementsIfNeeded (/opt/atlassian/pipelines/agent/build/src/rsc_user/node_modules/serverless-python-requirements/lib/pip.js:556:3)
at ServerlessPythonRequirements.installAllRequirements (/opt/atlassian/pipelines/agent/build/src/rsc_user/node_modules/serverless-python-requirements/lib/pip.js:635:29)
at ServerlessPythonRequirements.tryCatcher (/opt/atlassian/pipelines/agent/build/src/rsc_user/node_modules/bluebird/js/release/util.js:16:23)
at Promise._settlePromiseFromHandler (/opt/atlassian/pipelines/agent/build/src/rsc_user/node_modules/bluebird/js/release/promise.js:547:31)
at Promise._settlePromise (/opt/atlassian/pipelines/agent/build/src/rsc_user/node_modules/bluebird/js/release/promise.js:604:18)
at Promise._settlePromise0 (/opt/atlassian/pipelines/agent/build/src/rsc_user/node_modules/bluebird/js/release/promise.js:649:10)
at Promise._settlePromises (/opt/atlassian/pipelines/agent/build/src/rsc_user/node_modules/bluebird/js/release/promise.js:729:18)
at _drainQueueStep (/opt/atlassian/pipelines/agent/build/src/rsc_user/node_modules/bluebird/js/release/async.js:93:12)
at _drainQueue (/opt/atlassian/pipelines/agent/build/src/rsc_user/node_modules/bluebird/js/release/async.js:86:9)
at Async._drainQueues (/opt/atlassian/pipelines/agent/build/src/rsc_user/node_modules/bluebird/js/release/async.js:102:5)
at Immediate.Async.drainQueues [as _onImmediate] (/opt/atlassian/pipelines/agent/build/src/rsc_user/node_modules/bluebird/js/release/async.js:15:14)
at processImmediate (internal/timers.js:443:21)
at process.topLevelDomainCallback (domain.js:136:23)
For debugging logs, run again after setting the "SLS_DEBUG=*" environment variable.
Get Support --------------------------------------------
Docs: docs.serverless.com
Bugs: github.com/serverless/serverless/issues
Issues: forum.serverless.com
Your Environment Information ---------------------------
Operating System: linux
Node Version: 11.13.0
Framework Version: 2.1.1
Plugin Version: 4.0.4
SDK Version: 2.3.2
Components Version: 3.1.4
I am not able to understand this error, as I am new to Python.
Any help is highly appreciated. Thanks!

This error basically means you do not have the right Python installation: the application needs python3.7, but apk add python3 does not pin a version, so the latest (probably 3.8) gets installed.
This article deals with how to select a given Python version for an agent in a Bitbucket pipeline. It basically boils down to:
image: python:3.7
pipelines:
  default:
    - step:
        script:
          - python --version
Is there a reason you have to use Alpine? Otherwise I'd go for the pragmatic image above.
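If Node is still needed alongside Python 3.7 (the original pipeline installs Serverless via npm), a Debian-based python:3.7 image can carry both. A rough sketch of the adapted pipeline; the apt package names are assumptions and this is untested:

```yaml
image: python:3.7

pipelines:
  branches:
    master:
      - step:
          script:
            # python:3.7 is Debian-based, so use apt-get instead of apk
            - apt-get update && apt-get install -y nodejs npm
            - npm install -g serverless
            - pip install -r src/rsc_user/requirements.txt
            - cd src/rsc_user && serverless deploy --stage dev
```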

He solved it with:
pythonRequirements:
  pythonBin: python3
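For context, that setting goes under the custom block of serverless.yml (a minimal sketch; the surrounding keys are assumed from a typical setup):

```yaml
custom:
  pythonRequirements:
    # point serverless-python-requirements at whichever python3 apk installed
    pythonBin: python3
```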

Related

PyInstaller not working in GitLab CI file

I have created a Python application and I would like to deploy it via GitLab. To achieve this, I created the following gitlab-ci.yml file:
# This file is a template, and might need editing before it works on your project.
# Official language image. Look for the different tagged releases at:
# https://hub.docker.com/r/library/python/tags/
image: "python:3.10"

# commands to run in the Docker container before starting each job
before_script:
  - python --version
  - pip install -r requirements.txt

# different stages in the pipeline
stages:
  - Static Analysis
  - Test
  - Deploy

# defines the job in Static Analysis
pylint:
  stage: Static Analysis
  script:
    - pylint -d C0301 src/*.py

# tests the code
pytest:
  stage: Test
  script:
    - cd test/; pytest -v

# deploy
deploy:
  stage: Deploy
  script:
    - echo "test ms deploy"
    - cd src/
    - pyinstaller -F gui.py --noconsole
  tags:
    - macos
It runs fine through the Static Analysis and Test phases, but in Deploy I get the following error:
OSError: Python library not found: .Python, libpython3.10.dylib, Python3, Python, libpython3.10m.dylib
This means your Python installation does not come with proper shared library files.
This usually happens due to missing development package, or unsuitable build parameters of the Python installation.
* On Debian/Ubuntu, you need to install Python development packages:
* apt-get install python3-dev
* apt-get install python-dev
* If you are building Python by yourself, rebuild with `--enable-shared` (or, `--enable-framework` on macOS).
As I am working on a MacBook, I tried the following addition - env PYTHON_CONFIGURE_OPTS="--enable-framework" pyenv install 3.10.5 - but then I get an error that Python 3.10.5 already exists.
I tried some other things, but I am a bit stuck. Any advice or suggestions?
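One way around the "already exists" message, assuming pyenv manages the runner's interpreters, is pyenv's --force flag, which rebuilds over the existing version with the framework flag enabled. A hedged sketch of the before_script change (untested):

```yaml
before_script:
  # rebuild 3.10.5 with shared-library/framework support for PyInstaller
  - env PYTHON_CONFIGURE_OPTS="--enable-framework" pyenv install --force 3.10.5
  - python --version
  - pip install -r requirements.txt
```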

How to correctly set pythonBin for serverless deploy in GitHub Actions

I am attempting to deploy a simple Python Lambda script via GitHub Actions. I am stuck trying to figure out how to get GitHub Actions and Serverless to find python3.6 (or python3.7) for a deploy.
Here is my main.yml:
name: Deploy Lambda

# Controls when the action will run.
on:
  # Triggers the workflow on push or pull request events but only for the master branch
  push:
    branches: [ master ]
  # Allows you to run this workflow manually from the Actions tab
  workflow_dispatch:

# A workflow run is made up of one or more jobs that can run sequentially or in parallel
jobs:
  # This workflow contains a single job called "deploy"
  deploy:
    # The type of runner that the job will run on
    runs-on: ubuntu-latest
    strategy:
      matrix:
        python-version: [3.6]
    env: # Setup environment variables for serverless deployment
      AWS_ACCESS_KEY_ID: ${{ secrets.AWS_ACCESS_KEY_ID }}
      AWS_SECRET_ACCESS_KEY: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
    # Steps represent a sequence of tasks that will be executed as part of the job
    steps:
      # Checks-out your repository under $GITHUB_WORKSPACE, so your job can access it
      - uses: actions/checkout@v2
      - name: Set up Python 3.6
        uses: actions/setup-python@v2
        with:
          python-version: 3.6
      - name: Install dependencies
        run: |
          python -m pip install --upgrade pip
          if [ -f requirements.txt ]; then pip install -r requirements.txt; fi
          echo "which python: `which python`"
          echo "which python3.6: `which python3.6`"
      - name: npm install dependencies
        run: npm install
      - name: Serverless
        uses: serverless/github-action@master
        with:
          args: deploy
Here is my serverless.yml
service: utilitybot

provider:
  name: aws
  runtime: python3.6
  stage: prod
  region: us-east-1
  memorySize: 128

plugins:
  - serverless-wsgi
  - serverless-python-requirements

custom:
  wsgi:
    app: app.app
    packRequirements: false
  pythonRequirements:
    pythonBin: /opt/hostedtoolcache/Python/3.6.13/x64/bin/python3.6

functions:
  app:
    handler: wsgi_handler.handler
    events:
      - http: ANY /
And here is the relevant output when I attempt a deploy:
Successfully installed Flask-1.1.2 Jinja2-2.11.3 MarkupSafe-1.1.1 Werkzeug-1.0.1 aiohttp-3.7.4.post0 async-timeout-3.0.1 attrs-20.3.0 chardet-4.0.0 click-7.1.2 idna-3.1 idna-ssl-1.1.0 itsdangerous-1.1.0 multidict-5.1.0 pyee-7.0.4 slackclient-2.9.3 slackeventsapi-2.2.1 typing-extensions-3.7.4.3 urllib3-1.26.4 yarl-1.6.3
which python: /opt/hostedtoolcache/Python/3.6.13/x64/bin/python
which python3.6: /opt/hostedtoolcache/Python/3.6.13/x64/bin/python3.6
. . .
Serverless: Python executable not found for "runtime": python3.6
Serverless: Using default Python executable: python
Serverless: Packaging Python WSGI handler...
Serverless: Generated requirements from /github/workspace/requirements.txt in /github/workspace/.serverless/requirements.txt...
Serverless: Installing requirements from /github/home/.cache/serverless-python-requirements/1fc06bc3bc8373bb92e534c979ef8012825c2f0cf279b582a4c7d4a567c48e2d_slspyc/requirements.txt ...
Serverless: Using download cache directory /github/home/.cache/serverless-python-requirements/downloadCacheslspyc
Error ---------------------------------------------------
Error: python3.6 not found! Try the pythonBin option.
at pipAcceptsSystem (/github/workspace/node_modules/serverless-python-requirements/lib/pip.js:100:13)
I have tried no pythonBin, various versions of pythonBin, different versions of python... I cannot get past this error. When I do the which python3.6 it finds the binary in the path, so I'm confused how it doesn't appear when it does the deploy.
Finally got it working:
- I changed the runtime to 3.7.
- I removed the python executable from pythonBin (so it is just /opt/hostedtoolcache/Python/3.6.13/x64/bin/).
- I left the matrix strategy at 3.6 but had Serverless install 3.7.
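Putting those first changes together, the relevant serverless.yml parts would look roughly like this (a sketch of what is described above, not a verified config; passing a directory rather than an executable to pythonBin is the poster's report, not documented behavior):

```yaml
provider:
  runtime: python3.7
custom:
  pythonRequirements:
    # directory only, without the trailing executable name
    pythonBin: /opt/hostedtoolcache/Python/3.6.13/x64/bin/
```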
Most importantly, I found a public image which just plain ran better:
mirrorhanyu/serverless-github-action-python@master
So the end of the main.yml becomes
- name: Serverless
  uses: mirrorhanyu/serverless-github-action-python@master
  with:
    args: deploy

How do I integrate a Python Lambda function into the pipeline of AWS Amplify

So I'm trying to build an Amplify application with JavaScript and a Python Lambda function. Everything works just fine. I have set up my CodeCommit branch for hosting with continuous deployment. I add an API with a Lambda function in Python. With amplify push, Amplify successfully deploys the corresponding API Gateway and Lambda, and I can successfully interact with my Lambda function. But as soon as I push my commits into my repository, the pipeline gets triggered and crashes during the build phase:
# Starting phase: build
# Executing command: amplifyPush --simple
2021-02-17T14:01:23.680Z [INFO]: Amplify AppID found: d2l0j3vtlykp8l. Amplify App name is: documentdownload
2021-02-17T14:01:23.783Z [INFO]: Backend environment dev found in Amplify Console app: documentdownload
2021-02-17T14:01:24.440Z [WARNING]: - Fetching updates to backend environment: dev from the cloud.
2021-02-17T14:01:24.725Z [WARNING]: ✔ Successfully pulled backend environment dev from the cloud.
2021-02-17T14:01:24.758Z [INFO]:
2021-02-17T14:01:26.925Z [INFO]: Note: It is recommended to run this command from the root of your app directory
2021-02-17T14:01:31.904Z [WARNING]: - Initializing your environment: dev
2021-02-17T14:01:32.216Z [WARNING]: ✔ Initialized provider successfully.
2021-02-17T14:01:32.829Z [INFO]: python3 found but version Python 3.7.9 is less than the minimum required version.
You must have python >= 3.8 installed and available on your PATH as "python3" or "python". It can be installed from https://www.python.org/downloads
You must have pipenv installed and available on your PATH as "pipenv". It can be installed by running "pip3 install --user pipenv".
2021-02-17T14:01:32.830Z [WARNING]: ✖ An error occurred when pushing the resources to the cloud
2021-02-17T14:01:32.830Z [WARNING]: ✖ There was an error initializing your environment.
2021-02-17T14:01:32.832Z [INFO]: init failed
2021-02-17T14:01:32.834Z [INFO]: Error: Missing required dependencies to package documentdownload
    at Object.buildFunction (/root/.nvm/versions/node/v12.19.0/lib/node_modules/@aws-amplify/cli/node_modules/amplify-category-function/src/provider-utils/awscloudformation/utils/buildFunction.ts:21:11)
    at processTicksAndRejections (internal/process/task_queues.js:97:5)
    at prepareResource (/root/.nvm/versions/node/v12.19.0/lib/node_modules/@aws-amplify/cli/node_modules/amplify-provider-awscloudformation/src/push-resources.ts:474:33)
    at async Promise.all (index 0)
    at Object.run (/root/.nvm/versions/node/v12.19.0/lib/node_modules/@aws-amplify/cli/node_modules/amplify-provider-awscloudformation/src/push-resources.ts:106:5)
2021-02-17T14:01:32.856Z [ERROR]: !!! Build failed
2021-02-17T14:01:32.856Z [ERROR]: !!! Non-Zero Exit Code detected
2021-02-17T14:01:32.856Z [INFO]: # Starting environment caching...
2021-02-17T14:01:32.857Z [INFO]: # Environment caching completed
In the previous PROVISION step, Python 3.8 is installed, though:
## Install python3.8
RUN wget https://www.python.org/ftp/python/3.8.0/Python-3.8.0.tgz
RUN tar xvf Python-3.8.0.tgz
WORKDIR Python-3.8.0
RUN ./configure --enable-optimizations --prefix=/usr/local
RUN make altinstall
For now I have no idea why it behaves like this. Pushing the changes locally, it works. Can anybody help?
Two solutions from here:
1. Swap the build image. Go to the Amplify Console, open the menu on the left, click on "Build Settings", scroll down until you see "Build Image Settings", on the drop-down select Custom, then enter the image name in the field just below it.
2. If you want to build from source like you mentioned, add the following to amplify.yml in the AWS console under App settings -> Build settings:
backend:
  phases:
    preBuild:
      commands:
        - export BASE_PATH=$(pwd)
        - yum install -y gcc openssl-devel bzip2-devel libffi-devel python3.8-pip
        - cd /opt && wget https://www.python.org/ftp/python/3.8.2/Python-3.8.2.tgz
        - cd /opt && tar xzf Python-3.8.2.tgz
        - cd /opt/Python-3.8.2 && ./configure --enable-optimizations
        - cd /opt/Python-3.8.2 && make altinstall
        - pip3.8 install --user pipenv
        - ln -fs /usr/local/bin/python3.8 /usr/bin/python3
        - ln -fs /usr/local/bin/pip3.8 /usr/bin/pip3
        - cd $BASE_PATH

Running pytest-qt on CircleCI

I am attempting to run tests which require pytest-qt (for testing PySide2 dialogs) on CircleCI. I am getting the following error:
xdpyinfo was not found, X start can not be checked! Please install xdpyinfo!
============================= test session starts ==============================
platform linux -- Python 3.6.8, pytest-5.0.0, py-1.8.0, pluggy-0.12.0 -- /home/circleci/project-caveman/venv/bin/python3
cachedir: .pytest_cache
PySide2 5.13.0 -- Qt runtime 5.13.0 -- Qt compiled 5.13.0
rootdir: /home/circleci/project-caveman
plugins: cov-2.7.1, xvfb-1.2.0, qt-3.2.2
collected 1 item
tests/test_main.py::test_label_change_on_button_press Fatal Python error: Aborted
Aborted (core dumped)
Exited with code 134
And I am using this configuration file:
version: 2
jobs:
  build:
    working_directory: ~/project-caveman
    docker:
      - image: circleci/python:3.6.8-stretch
    steps:
      - checkout
      # Dependencies
      - restore_cache:
          keys:
            - venv-{{ .Branch }}-{{ checksum "setup.py" }}
            - venv-{{ .Branch }}-
            - venv-
      - run:
          name: Install dependencies
          command: |
            python3 -m venv venv
            . venv/bin/activate
            pip install -e .[test] --progress-bar off
      - save_cache:
          key: venv-{{ .Branch }}-{{ checksum "setup.py" }}
          paths:
            - "venv"
      # Tests
      - run:
          name: Pytest
          command: |
            mkdir test-reports
            . venv/bin/activate
            xvfb-run -a pytest -s -v --doctest-modules --junitxml test-reports/junit.xml --cov=coveralls --cov-report term-missing
      - store_test_results:
          path: test-reports
      - run:
          name: Coveralls
          command: coveralls
Any help is greatly appreciated, thanks in advance.
I pulled the container circleci/python:3.6.8-stretch locally, cloned your repository and tried to execute the tests, and I could reproduce the error.
The first thing to do is enable debug mode for the Qt runtime so it prints some info on errors. This can be done by setting the environment variable QT_DEBUG_PLUGINS:
$ QT_DEBUG_PLUGINS=1 pytest -sv
Now it's immediately clear what's missing in the container to run the tests. A snippet from the output of the above command:
Got keys from plugin meta data ("xcb")
QFactoryLoader::QFactoryLoader() checking directory path "/usr/local/bin/platforms" ...
Cannot load library /home/circleci/.local/lib/python3.6/site-packages/PySide2/Qt/plugins/platforms/libqxcb.so: (libxkbcommon-x11.so.0: cannot open shared object file: No such file or directory)
QLibraryPrivate::loadPlugin failed on "/home/circleci/.local/lib/python3.6/site-packages/PySide2/Qt/plugins/platforms/libqxcb.so" : "Cannot load library /home/circleci/.local/lib/python3.6/site-packages/PySide2/Qt/plugins/platforms/libqxcb.so: (libxkbcommon-x11.so.0: cannot open shared object file: No such file or directory)"
qt.qpa.plugin: Could not load the Qt platform plugin "xcb" in "" even though it was found.
This application failed to start because no Qt platform plugin could be initialized. Reinstalling the application may fix this problem.
Available platform plugins are: eglfs, linuxfb, minimal, minimalegl, offscreen, vnc, wayland-egl, wayland, wayland-xcomposite-egl, wayland-xcomposite-glx, webgl, xcb.
Aborted (core dumped)
The fix for that is easy - install the libxkbcommon-x11-0 package:
$ sudo apt update && sudo apt install -y libxkbcommon-x11-0
Add this line in the CircleCI config (somewhere before the tests job, for example in the job where you install package dependencies) and the test should run fine.
Aside from that, it makes sense to set QT_DEBUG_PLUGINS=1 globally so you can react on eventual Qt runtime failures in future.
xdpyinfo was not found, X start can not be checked! Please install xdpyinfo!
If you want to get rid of that warning, install x11-utils:
$ sudo apt install x11-utils
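In the posted CircleCI config, both package installs plus the debug variable could be combined into a job-level environment entry and one extra step (placement and step name here are assumptions, not part of the original answer):

```yaml
    environment:
      QT_DEBUG_PLUGINS: "1"  # surface Qt plugin load failures in test output
    steps:
      - checkout
      - run:
          name: Install Qt runtime dependencies
          command: sudo apt update && sudo apt install -y libxkbcommon-x11-0 x11-utils
```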
On CentOS 6.5, only running yum install xdpyinfo was needed, and that successfully solved it.

Using ansible core's pip and virtualenv on Centos or Redhat

I have created a playbook that is supposed to run a Django website for local developers. These are the organizational constraints:
Currently the VM is Centos - http://puppet-vagrant-boxes.puppetlabs.com/centos-64-x64-vbox4210.box
The machine is being provisioned with ansible via Vagrant.
The developer will need python2.7.
I attempted to follow the Software Collections route by:
- adding an SCL repo to the box
- installing python27 via yum
- using the shell module to enable python27
- creating a virtualenv inside that shell
The newly created virtualenv and Python binaries give an error after provisioning. Here is the pertinent part of my playbook:
main.yml
- hosts: app
  sudo: yes
  sudo_user: root
  gather_facts: true
  roles:
    # insert other roles
  tasks:
    - name: Add SCL Repos
      command: sh -c 'wget -qO- http://people.redhat.com/bkabrda/scl_python27.repo >> /etc/yum.repos.d/scl.repo'
    - name: Install python dependencies
      yum: pkg={{ item }} state=present
      with_items:
        - "python-devel"
        - "scl-utils"
        - "python27"
    - name: Manually create virtual .env and install requirements
      shell: "source /opt/rh/python27/enable && virtualenv /vagrant/.env && source /vagrant/.env/bin/activate && pip install -r /vagrant/requirements/local.txt"
Ansible - stdout
Here is the tail end of my ansible's stdout message.
pip can't proceed with requirement 'pytz (from -r /vagrant/requirements/base.txt (line 3))' due to a pre-existing build directory.
  location: /vagrant/.env/build/pytz
This is likely due to a previous installation that failed.
pip is being responsible and not assuming it can delete this.
Please delete it and try again.

Cleaning up...
Post Mortem Test via SSH
In an attempt to glean more information about the problem, I SSHed into the box to see what feedback I could get.
$ vagrant ssh
Last login: Fri Feb 12 22:17:03 2016 from 10.0.2.2
Welcome to your Vagrant-built virtual machine.
[vagrant@localhost ~]$ cd /vagrant/
[vagrant@localhost vagrant]$ source .env/bin/activate
(.env)[vagrant@localhost vagrant]$ pip install -r requirements/local.txt
/vagrant/.env/bin/python: error while loading shared libraries: libpython2.7.so.1.0: cannot open shared object file: No such file or directory
In general, the approach feels like a square peg in a round hole. I'd love to hear some feedback from the community about the appropriate way to run a CentOS box locally using a python27 virtualenv provisioned through Ansible.
You could always use Ansible's environment directive to manually set the appropriate variables so that the correct executables get called. Here's an example:
environment:
  PATH: "/opt/rh/rh-python34/root/usr/bin:{{ ansible_env.PATH }}"
  LD_LIBRARY_PATH: "/opt/rh/rh-python34/root/usr/lib64"
  MANPATH: "/opt/rh/rh-python34/root/usr/share/man"
  XDG_DATA_DIRS: "/opt/rh/rh-python34/root/usr/share"
  PKG_CONFIG_PATH: "/opt/rh/rh-python34/root/usr/lib64/pkgconfig"
pip: "virtualenv={{root_dir}}/{{venvs_dir}}/{{app_name}}_{{spec}} requirements={{root_dir}}/{{spec}}_sites/{{app_name}}/requirements.txt"
In the end, I had to rebuild Python from source to create a python2.7 virtual environment. I used an open-source playbook:
https://github.com/Ken24/ansible-role-python
main.yml
- hosts: app
  roles:
    - { role: Ken24.python }
  tasks:
    - name: Install virtualenv
      command: "/usr/local/bin/pip install virtualenv"
    - name: Create virtualenv and install requirements
      pip: requirements=/vagrant/requirements/local.txt virtualenv=/vagrant/cfgov-refresh virtualenv_command=/usr/local/bin/virtualenv
