Docker: exec /usr/bin/sh: exec format error - python

Hi guys, I need some help.
I created a custom Docker image and pushed it to Docker Hub, but when I run it in CI/CD it gives me this error:
exec /usr/bin/sh: exec format error
Where:
Dockerfile
FROM ubuntu:20.04
RUN apt-get update
RUN apt-get install -y software-properties-common
RUN apt-get install -y python3-pip
RUN pip3 install robotframework
.gitlab-ci.yml
robot-framework:
  image: rethkevin/rf:v1
  allow_failure: true
  script:
    - ls
    - pip3 --version
Output
Running with gitlab-runner 15.1.0 (76984217)
on runner zgjy8gPC
Preparing the "docker" executor
Using Docker executor with image rethkevin/rf:v1 ...
Pulling docker image rethkevin/rf:v1 ...
Using docker image sha256:d2db066f04bd0c04f69db1622cd73b2fc2e78a5d95a68445618fe54b87f1d31f for rethkevin/rf:v1 with digest rethkevin/rf@sha256:58a500afcbd75ba477aa3076955967cebf66e2f69d4a5c1cca23d69f6775bf6a ...
Preparing environment
00:01
Running on runner-zgjy8gpc-project-1049-concurrent-0 via 1c8189df1d47...
Getting source from Git repository
00:01
Fetching changes with git depth set to 20...
Reinitialized existing Git repository in /builds/reth.bagares/test-rf/.git/
Checking out 339458a3 as main...
Skipping Git submodules setup
Executing "step_script" stage of the job script
00:00
Using docker image sha256:d2db066f04bd0c04f69db1622cd73b2fc2e78a5d95a68445618fe54b87f1d31f for rethkevin/rf:v1 with digest rethkevin/rf@sha256:58a500afcbd75ba477aa3076955967cebf66e2f69d4a5c1cca23d69f6775bf6a ...
exec /usr/bin/sh: exec format error
Cleaning up project directory and file based variables
00:01
ERROR: Job failed: exit code 1
Any thoughts on how to resolve this error?

The problem is that you built this image for arm64/v8 -- but your runner is using a different architecture.
If you run:
docker image inspect rethkevin/rf:v1
You will see this in the output:
...
"Architecture": "arm64",
"Variant": "v8",
"Os": "linux",
...
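A quick way to compare the two sides (a sketch; `uname -m` gives the runner's CPU architecture, `docker image inspect --format` gives the architecture recorded in the image):

```shell
# Print the runner's CPU architecture (e.g. x86_64 or aarch64) and, if
# docker is available, the OS/architecture the image was built for.
runner_arch="$(uname -m)"
echo "runner architecture: $runner_arch"
if command -v docker >/dev/null 2>&1; then
  docker image inspect rethkevin/rf:v1 --format '{{.Os}}/{{.Architecture}}'
fi
```

If the two values disagree (e.g. `x86_64` vs `linux/arm64`), you have exactly this mismatch.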
Try building and pushing your image from your GitLab CI runner so the image's architecture matches the runner's.
Alternatively, you can build for multiple architectures using docker buildx. You could also run a GitLab runner on ARM so that it can run images built for that architecture.
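As a sketch of the buildx route (assuming Docker with the buildx plugin, a builder with QEMU emulation configured, and push access to the repository; the image name is the one from this thread):

```shell
# Build one tag for both amd64 and arm64 and push it in a single step.
PLATFORMS="linux/amd64,linux/arm64"
IMAGE="rethkevin/rf:v1"
if command -v docker >/dev/null 2>&1; then
  docker buildx build --platform "$PLATFORMS" -t "$IMAGE" --push .
else
  echo "docker not available; skipping build of $IMAGE for $PLATFORMS"
fi
```

With a multi-arch manifest pushed, the runner pulls whichever variant matches its own architecture.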

In my case, I was already building with buildx:
docker buildx build --platform linux/amd64 -f ./Dockerfile -t image .
However, the problem was on the AWS Lambda side.

Related

Pyinstaller not working in Gitlab CI file

I have created a Python application and I would like to deploy it via GitLab. To achieve this, I created the following gitlab-ci.yml file:
# This file is a template, and might need editing before it works on your project.
# Official language image. Look for the different tagged releases at:
# https://hub.docker.com/r/library/python/tags/
image: "python:3.10"

# commands to run in the Docker container before starting each job
before_script:
  - python --version
  - pip install -r requirements.txt

# different stages in the pipeline
stages:
  - Static Analysis
  - Test
  - Deploy

# defines the job in Static Analysis
pylint:
  stage: Static Analysis
  script:
    - pylint -d C0301 src/*.py

# tests the code
pytest:
  stage: Test
  script:
    - cd test/; pytest -v

# deploy
deploy:
  stage: Deploy
  script:
    - echo "test ms deploy"
    - cd src/
    - pyinstaller -F gui.py --noconsole
  tags:
    - macos
It runs fine through the Static Analysis and Test phases, but in Deploy I get the following error:
OSError: Python library not found: .Python, libpython3.10.dylib, Python3, Python, libpython3.10m.dylib
This means your Python installation does not come with proper shared library files.
This usually happens due to missing development package, or unsuitable build parameters of the Python installation.
* On Debian/Ubuntu, you need to install Python development packages:
* apt-get install python3-dev
* apt-get install python-dev
* If you are building Python by yourself, rebuild with `--enable-shared` (or, `--enable-framework` on macOS).
As I am working on a MacBook, I tried the following addition - env PYTHON_CONFIGURE_OPTS="--enable-framework" pyenv install 3.10.5 - but then I get an error that Python 3.10.5 already exists.
I tried some other things, but I am a bit stuck. Any advice or suggestions?
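One hedged possibility, assuming pyenv manages that interpreter: force a rebuild so the framework build (and its shared library) actually gets produced. pyenv's install command accepts -f/--force to overwrite an existing version instead of erroring:

```shell
# Rebuild the already-installed 3.10.5 with framework support so the
# libpython/Python3 library PyInstaller looks for exists. --force
# overwrites the existing installation instead of failing with
# "already exists".
PY_VERSION="3.10.5"
if command -v pyenv >/dev/null 2>&1; then
  env PYTHON_CONFIGURE_OPTS="--enable-framework" pyenv install --force "$PY_VERSION"
else
  echo "pyenv not available; skipping rebuild of $PY_VERSION"
fi
```

After the rebuild, re-run pyinstaller with that interpreter active (pyenv local 3.10.5 or equivalent).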

How do i integrate a Python Lambda Function into the Pipeline of AWS Amplify

So I'm trying to build an Amplify application with JavaScript and a Python Lambda function. Everything works just fine: I have set up my CodeCommit branch for hosting with continuous deployment, and I added an API with a Lambda function in Python. With amplify push, Amplify successfully deploys the corresponding API Gateway and Lambda, and I can interact with my Lambda function. But as soon as I push my commits to the repository, the pipeline gets triggered and crashes during the build phase:
# Starting phase: build
# Executing command: amplifyPush --simple
2021-02-17T14:01:23.680Z [INFO]: Amplify AppID found: d2l0j3vtlykp8l. Amplify App name is: documentdownload
2021-02-17T14:01:23.783Z [INFO]: Backend environment dev found in Amplify Console app: documentdownload
2021-02-17T14:01:24.440Z [WARNING]: - Fetching updates to backend environment: dev from the cloud.
2021-02-17T14:01:24.725Z [WARNING]: ✔ Successfully pulled backend environment dev from the cloud.
2021-02-17T14:01:24.758Z [INFO]:
2021-02-17T14:01:26.925Z [INFO]: Note: It is recommended to run this command from the root of your app directory
2021-02-17T14:01:31.904Z [WARNING]: - Initializing your environment: dev
2021-02-17T14:01:32.216Z [WARNING]: ✔ Initialized provider successfully.
2021-02-17T14:01:32.829Z [INFO]: python3 found but version Python 3.7.9 is less than the minimum required version.
You must have python >= 3.8 installed and available on your PATH as "python3" or "python". It can be installed from https://www.python.org/downloads
You must have pipenv installed and available on your PATH as "pipenv". It can be installed by running "pip3 install --user pipenv".
2021-02-17T14:01:32.830Z [WARNING]: ✖ An error occurred when pushing the resources to the cloud
2021-02-17T14:01:32.830Z [WARNING]: ✖ There was an error initializing your environment.
2021-02-17T14:01:32.832Z [INFO]: init failed
2021-02-17T14:01:32.834Z [INFO]: Error: Missing required dependencies to package documentdownload
    at Object.buildFunction (/root/.nvm/versions/node/v12.19.0/lib/node_modules/@aws-amplify/cli/node_modules/amplify-category-function/src/provider-utils/awscloudformation/utils/buildFunction.ts:21:11)
    at processTicksAndRejections (internal/process/task_queues.js:97:5)
    at prepareResource (/root/.nvm/versions/node/v12.19.0/lib/node_modules/@aws-amplify/cli/node_modules/amplify-provider-awscloudformation/src/push-resources.ts:474:33)
    at async Promise.all (index 0)
    at Object.run (/root/.nvm/versions/node/v12.19.0/lib/node_modules/@aws-amplify/cli/node_modules/amplify-provider-awscloudformation/src/push-resources.ts:106:5)
2021-02-17T14:01:32.856Z [ERROR]: !!! Build failed
2021-02-17T14:01:32.856Z [ERROR]: !!! Non-Zero Exit Code detected
2021-02-17T14:01:32.856Z [INFO]: # Starting environment caching...
2021-02-17T14:01:32.857Z [INFO]: # Environment caching completed
In the previous PROVISION step, though, Python 3.8 is installed:
## Install python3.8
RUN wget https://www.python.org/ftp/python/3.8.0/Python-3.8.0.tgz
RUN tar xvf Python-3.8.0.tgz
WORKDIR Python-3.8.0
RUN ./configure --enable-optimizations --prefix=/usr/local
RUN make altinstall
For now I have no idea why it behaves like this. Pushing the changes locally works. Can anybody help?
Two solutions from here:
1. Swap the build image: go to the Amplify Console, open the menu on the left, click "Build Settings", scroll down to "Build Image Settings", select Custom in the drop-down, then enter the image name in the field just below it.
2. If you want to build from source like you mentioned: add the following to amplify.yml in the AWS console under App settings -> Build settings:
backend:
  phases:
    preBuild:
      commands:
        - export BASE_PATH=$(pwd)
        - yum install -y gcc openssl-devel bzip2-devel libffi-devel python3.8-pip
        - cd /opt && wget https://www.python.org/ftp/python/3.8.2/Python-3.8.2.tgz
        - cd /opt && tar xzf Python-3.8.2.tgz
        - cd /opt/Python-3.8.2 && ./configure --enable-optimizations
        - cd /opt/Python-3.8.2 && make altinstall
        - pip3.8 install --user pipenv
        - ln -fs /usr/local/bin/python3.8 /usr/bin/python3
        - ln -fs /usr/local/bin/pip3.8 /usr/bin/pip3
        - cd $BASE_PATH

scrapyd-deploy error: pkg_resources.DistributionNotFound

I have been trying for a long time to find a solution to the scrapyd error message: pkg_resources.DistributionNotFound: The 'idna<3,>=2.5' distribution was not found and is required by requests
What I have done:
$ docker pull ceroic/scrapyd
$ docker build -t scrapyd .
Dockerfile:
FROM ceroic/scrapyd
RUN pip install "idna==2.5"
$ docker build -t scrapyd .
Sending build context to Docker daemon 119.3kB
Step 1/2 : FROM ceroic/scrapyd
---> 868dca3c4d94
Step 2/2 : RUN pip install "idna==2.5"
---> Running in c0b6f6f73cf1
Downloading/unpacking idna==2.5
Installing collected packages: idna
Successfully installed idna
Cleaning up...
Removing intermediate container c0b6f6f73cf1
---> 849200286b7a
Successfully built 849200286b7a
Successfully tagged scrapyd:latest
I run the container:
$ docker run -d -p 6800:6800 scrapyd
Next:
scrapyd-deploy demo -p tutorial
And get error:
pkg_resources.DistributionNotFound: The 'idna<3,>=2.5' distribution was not found and is required by requests
I'm not a Docker expert, and I don't understand the logic. If idna==2.5 has been successfully installed inside the container, why does the error message require version 'idna<3,>=2.5'?
The answer is very simple, and it ended my three days of torment! When I ran
scrapyd-deploy demo -p tutorial
I was doing it not in the created container, but outside it.
The problem was solved by:
pip uninstall idna
pip install "idna==2.5"
This had to be done on the virtual server, not in the container. I can't believe I didn't see it right away.
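A quick way to see the split (a sketch): check which environment scrapyd-deploy resolves to on the host, since that environment, not the container, is where idna must be pinned:

```shell
# scrapyd-deploy runs in the *host* Python environment; the container's
# packages are irrelevant to it. Show where the command and interpreter live.
command -v scrapyd-deploy || echo "scrapyd-deploy not on PATH"
py_prefix="$(python3 -c 'import sys; print(sys.prefix)')"
echo "host Python environment: $py_prefix"
```

Pinning idna==2.5 in that environment (as in the answer above) is what resolves the requests dependency range 'idna<3,>=2.5'.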

How to run pytest with mysql docker container?

What I want to do
I have been trying to follow instructions from the travis-ci docs on using-a-docker-image-from-a-repository-in-a-build.
In my case - and forgive me if I misspeak, because I'm not too familiar with Docker - I want to start a Docker container with a MySQL instance that I can use during pytest.
What I've tried
.travis.yml
language: python
python:
  - "3.7"
cache:
  directories:
    - "$HOME/google-cloud-sdk/"
services:
  - docker
before_install:
  ...
install:
  ...
  - pip install -r requirements.txt
script:
  - docker pull mysql/mysql-server
  - docker run -d -p 127.0.0.1:3306:3306 mysql/mysql-server /bin/sh -c "cd /root/mysql; pip install -r requirements.txt;"
  - docker run mysql/mysql-server /bin/sh -c "ls -l /root; cd /root/mysql; pytest"
travis-ci logging
$ docker pull mysql/mysql-server
Using default tag: latest
latest: Pulling from mysql/mysql-server
0e690826fc6e: Pulling fs layer
0e6c49086d52: Pulling fs layer
862ba7a26325: Pulling fs layer
7731c802ed08: Pulling fs layer
7731c802ed08: Waiting
862ba7a26325: Verifying Checksum
862ba7a26325: Download complete
7731c802ed08: Verifying Checksum
7731c802ed08: Download complete
0e690826fc6e: Verifying Checksum
0e690826fc6e: Download complete
0e6c49086d52: Verifying Checksum
0e6c49086d52: Download complete
0e690826fc6e: Pull complete
0e6c49086d52: Pull complete
862ba7a26325: Pull complete
7731c802ed08: Pull complete
Digest: sha256:a82ff720911b2fd40a425fd7141f75d7c68fb9815ec3e5a5a881a8eb49677087
Status: Downloaded newer image for mysql/mysql-server:latest
The command "docker pull mysql/mysql-server" exited with 0.
2.49s$ docker run -d -p 127.0.0.1:3306:3306 mysql/mysql-server /bin/sh -c "cd /root/mysql; pip install -r requirements.txt;"
bfba9cb26b8902682903d8a5576e805e86823096220e723da0df6a6a881c1ef7
The command "docker run -d -p 127.0.0.1:3306:3306 mysql/mysql-server /bin/sh -c "cd /root/mysql; pip install -r requirements.txt;"" exited with 0.
0.74s$ docker run mysql/mysql-server /bin/sh -c "ls -l /root; cd /root/mysql; pytest"
[Entrypoint] MySQL Docker Image 8.0.20-1.1.16
total 0
/bin/sh: line 0: cd: /root/mysql: No such file or directory
/bin/sh: pytest: command not found
The command "docker run mysql/mysql-server /bin/sh -c "ls -l /root; cd /root/mysql; pytest"" exited with 127.
So it seems my MySQL use case differs from the example provided by travis-ci. The specific issue seems to be that the directory /root/mysql just disappears, so when I try the second docker run I get No such file or directory.
To be perfectly honest, I don't know much about what is happening here, so any help with dockerizing my pytests would be great! Also, if possible, I'm curious whether the Docker logic could be moved into a Dockerfile of some sort.
Here is my main script, where I've set it up to connect to a MySQL database, so the environment variables would just need to be set appropriately - which is why I thought a Dockerfile might be helpful.
main.py
elif env == "test":
    return sqlalchemy.create_engine(
        sqlalchemy.engine.url.URL(
            drivername="mysql+pymysql",
            username=os.environ.get("DB_USER"),
            password=os.environ.get("DB_PASS"),
            host=os.environ.get("DB_HOST"),
            port=3306,
            database=PRIMARY_TABLE_NAME
        ),
        pool_size=5,
        max_overflow=2,
        pool_timeout=30,
        pool_recycle=1800
    )
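A hedged sketch of one way to arrange this: the container runs only MySQL, and pytest runs on the Travis host from the repo checkout (the repo is not baked into the mysql/mysql-server image, which is why "cd /root/mysql" failed inside the container). The credentials below are placeholders:

```shell
# Run MySQL in a container; run the tests on the host against it.
export DB_USER="root" DB_PASS="secret" DB_HOST="127.0.0.1"
if command -v docker >/dev/null 2>&1; then
  docker run -d --name test-mysql -p 127.0.0.1:3306:3306 \
    -e MYSQL_ROOT_PASSWORD="$DB_PASS" -e MYSQL_ROOT_HOST=% mysql/mysql-server
  sleep 30   # crude wait for the server to start accepting connections
  pytest
fi
```

The exported variables are exactly the ones main.py reads, so no Dockerfile for the tests is strictly needed; a readiness loop (retrying the connection) would be more robust than the fixed sleep.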

Docker: The command returned a non-zero code: 137

My docker file is as follows:
#Use python 3.6 image
FROM python:3.6
ENV PYTHONUNBUFFERED 1
#install required packages
RUN apt-get update
RUN apt-get install libsasl2-dev libldap2-dev libssl-dev python3-dev psmisc -y
#install a pip package
#Note: This pip package has a completely configured django project in it
RUN pip install <pip-package>
#Run a script
#Note: Here appmanage.py is a file inside the pip installed location(site-packages), but it will be accessible directly without cd to the folder
RUN appmanage.py appconfig appadd.json
#The <pip-packge> installed comes with a built in django package, so running it with following CMD
#Note: Here manage.py is present inside the pip package folder but it is accesible directly
CMD ["manage.py","runserver","0.0.0.0:8000"]
When I run:
sudo docker build -t test-app .
the steps in the Dockerfile up to RUN appmanage.py appconfig run successfully as expected, but after that I get the error:
The command '/bin/sh -c appmanage.py appconfig' returned a non-zero code: 137
When I google the error, I get suggestions that memory is insufficient. But I have verified that the system (CentOS) has enough memory.
Additional info
The command-line output during the execution of RUN appmanage.py appconfig is:
Step 7/8 : RUN appmanage.py appconfig
---> Running in 23cffaacc81f
======================================================================================
configuring katana apps...
Please do not quit (or) kill the server manually, wait until the server closes itself...!
======================================================================================
Performing system checks...
System check identified no issues (0 silenced).
February 08, 2020 - 12:01:45
Django version 2.1.2, using settings 'katana.wui.settings'
Starting development server at http://127.0.0.1:9999/
Quit the server with CONTROL-C.
9999/tcp: 20
Killed
As described, the command RUN appmanage.py appconfig appAdd.json ran successfully as expected and reported that System check identified no issues (0 silenced).
Moreover, the command "insisted" on killing itself and returned an exit code of 137. The minimal change for this to work is to update your Dockerfile like so:
...
#Run a script
#Note: Here appmanage.py is a file inside the pip installed location(site-packages), but it will be accessible directly without cd to the folder
RUN appmanage.py appconfig appAdd.json || true
...
This forcefully ignores the exit code from the previous command and carries on with the build.
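The 137 itself is informative: shells report 128 + signal number for a signal-terminated process, and 137 = 128 + 9 (SIGKILL), which matches the "Killed" line in the build output - appmanage.py kills its own dev server when it finishes. A quick demonstration:

```shell
# A process terminated by SIGKILL (signal 9) yields exit code 128 + 9 = 137.
sleep 30 &
pid=$!
kill -9 "$pid"
wait "$pid"
code=$?
echo "exit code: $code"
```

So the non-zero code is expected behavior of the script, not an out-of-memory kill by the kernel, and swallowing it with || true is safe here.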
