Can't deploy container image to lambda function - python

I'm trying to deploy a container image to a Lambda function, but this error message appears:
The image manifest or layer media type for the source image <image_source> is not supported.
Here is my Dockerfile; I believe I have used the proper setup:
FROM public.ecr.aws/lambda/python:3.8
# Install dependencies
COPY requirements.txt ./
RUN pip install -r requirements.txt
# Copy function code
COPY app/* ./
# Set the CMD to your handler
CMD [ "lambda_function.lambda_handler" ]

Try specifying the target platform of the image you build as amd64:
docker build --platform linux/amd64 . -t my_image
I get the same error while trying to deploy a lambda based on an image that supports both linux/amd64 and linux/arm64/v8 (Apple Silicon) architectures.

If you are using buildx >= 0.10, specifying the target platform alone does not work, since buildx also attaches provenance attestations by default, which turns the result into a multi-platform image index.
To fix this problem, pass --provenance=false to docker build.
For more details please see: https://github.com/docker/buildx/issues/1509#issuecomment-1378538197
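Putting the two suggestions together, a minimal sketch of a build that Lambda accepts, assuming your docker build is backed by buildx >= 0.10 (the image name is a placeholder):
docker build --platform linux/amd64 --provenance=false -t my_image .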

Related

AWS SAM DockerBuildArgs are not added when creating the Lambda image

I am trying to test a Lambda function locally. The function is created from the public Docker image from AWS, but I want to install my own Python library from my GitHub. According to the AWS SAM build documentation, I have to add a build argument to be consumed in the Dockerfile, like this:
Dockerfile
FROM public.ecr.aws/lambda/python:3.8
COPY lambda_preprocessor.py requirements.txt ./
RUN yum install -y git
RUN python3.8 -m pip install -r requirements.txt -t .
ARG GITHUB_TOKEN
RUN python3.8 -m pip install git+https://${GITHUB_TOKEN}@github.com/repository/library.git -t .
And to pass the GITHUB_TOKEN I can create a .json file containing the variables for the docker environment.
.json file named env.json
{
  "LambdaPreprocessor": {
    "GITHUB_TOKEN": "TOKEN_VALUE"
  }
}
And simply pass the file path to sam build: sam build --use-container --container-env-var-file env.json
Or pass the value directly, without the .json file, with the command: sam build --use-container --container-env-var GLOBAL_ENV_VAR=TOKEN_VALUE
My problem is that the GITHUB_TOKEN variable does not come through, either with the .json file or by putting it directly in the command with --container-env-var GITHUB_TOKEN=TOKEN_VALUE.
Running sam build --use-container --container-env-var GLOBAL_ENV_VAR=TOKEN_VALUE --debug shows that it is not picked up when creating the Lambda image.
The only way that has worked for me is to put the token directly in the Dockerfile, not as a build argument.
Prompt output:
Building image for LambdaPreprocessor function
Setting DockerBuildArgs: {} for LambdaPreprocessor function
Does anyone know why this is happening? Am I doing something wrong?
If you need to see the template.yaml, this is the Lambda definition.
template.yaml
LambdaPreprocessor:
  Type: AWS::Serverless::Function
  Properties:
    PackageType: Image
    Architectures:
      - x86_64
    Timeout: 180
  Metadata:
    Dockerfile: Dockerfile
    DockerContext: ./lambda_preprocessor
    DockerTag: python3.8-v1
I'm doing this with VS Code and WSL 2 (Ubuntu 20.04 LTS) on Windows 10.
I am having this issue too. What I have learned is that the Metadata section also accepts a DockerBuildArgs key. Example:
Metadata:
  DockerBuildArgs:
    MY_VAR: <some variable>
When I add this it does make it to the DockerBuildArgs dict.
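Applied to the question above, a hedged sketch of what the resource's Metadata might look like (the literal token value is a placeholder; keeping a real token out of template.yaml and version control is advisable):
Metadata:
  Dockerfile: Dockerfile
  DockerContext: ./lambda_preprocessor
  DockerTag: python3.8-v1
  DockerBuildArgs:
    GITHUB_TOKEN: TOKEN_VALUE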

docker build from inside container

I'm trying to build a docker image from inside a container using the Python docker SDK. The build command
client.build(dockerfile="my.Dockerfile", path=".", tag="my-tag")
fails with
OSError: Can not read file in context: /proc/1/mem
The issue was that Docker cannot build when the build context is the container's root directory, which was implicit here because of the build context path='.': packaging that context means trying to read special files such as /proc/1/mem. This can easily be fixed by setting a working directory in the Dockerfile of the container performing the build operation, e.g.
FROM python:3.9-slim
RUN apt-get update -y
# Adding a working directory is the fix: '.' no longer resolves to /
WORKDIR /my-workdir
COPY . .
CMD python -m my_script
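For completeness, a minimal sketch of the build call itself with the Python Docker SDK's low-level client (whether you use docker.APIClient or the high-level client.images.build() is your choice; the socket comment is an assumption about how the container reaches the daemon):

import docker

# Low-level client; inside a container this typically talks to a mounted /var/run/docker.sock
client = docker.APIClient()

# With WORKDIR set in the builder image, path="." is no longer the root filesystem
for line in client.build(path=".", dockerfile="my.Dockerfile", tag="my-tag", decode=True):
    print(line)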

Updating Gitlab repo file using that repo's pipeline

I have a Python app that takes the value of a certificate in a Dockerfile and updates it. However, I'm having difficulty knowing how to get the app to work within Gitlab.
When I push the app with the Dockerfile to be updated I want the app to run in the Gitlab pipeline and update the Dockerfile. I'm a little stuck on how to do this. I'm thinking that I would need to pull the repo, run the app and then push back up.
Would like some advice on if this is the right approach and if so how I would go about doing so?
This is just an example of the Dockerfile to be updated (I know this image wouldn't actually work, but the app would only update the ca-certificates version present in the DF):
#syntax=docker/dockerfile:1
#init the base image
FROM alpine:3.15
#define present working directory
#WORKDIR /library
#run pip to install the dependencies of the flask app
RUN apk add -u \
    ca-certificates=20211220 \
    git=3.10
#copy all files in our current directory into the image
COPY . /library
EXPOSE 5000
#define command to start the container, need to make app visible externally by specifying host 0.0.0.0
CMD [ "python3", "-m", "flask", "run", "--host=0.0.0.0"]
gitlab-ci.yml:
stages:
  - build
  - test
  - update_certificate

variables:
  PYTHON_IMG: "python:3.10"

pytest_installation:
  image: $PYTHON_IMG
  stage: build
  script:
    - pip install pytest
    - pytest --version

python_requirements_installation:
  image: $PYTHON_IMG
  stage: build
  script:
    - pip install -r requirements.txt

unit_test:
  image: $PYTHON_IMG
  stage: test
  script:
    - pytest ./tests/test_automated_cert_checker.py

cert_updater:
  image: $PYTHON_IMG
  stage: update_certificate
  script:
    - pip install -r requirements.txt
    - python3 automated_cert_updater.py
I'm aware there's a lot of repetition in installing the requirements multiple times, and that this is an area for improvement. It doesn't feel necessary to build the app into an image, because it's only used for updating the DF.
requirements.txt installs pytest and BeautifulSoup4
Additional context: The pipeline that builds the Dockerimage already exists and builds successfully. I am looking for a way to run this app once a day which will check if the ca-certificate is still up to date. If it isn't then the app is run, the ca-certificate in the Dockerfile is updated and then the updated Dockerfile is re built automatically.
My thoughts are that I may need to set up the gitlab-ci.yml to pull the repo, run the app (that updates the ca-certificate) and then push it back, so that a new image is built based upon the update to the certificate.
The Dockerfile shown here is just a basic example of what the actual DF in the repo looks like.
What you probably want to do is identify the appropriate version before you build the image. Then, pass a --build-arg with the ca-certificates version. That way, if the arg changes, the cached layer becomes invalid and the new version is installed; but if the version is the same, the cached layer is reused.
FROM alpine:3.15
ARG CA_CERT_VERSION
RUN apk add -u \
    ca-certificates=$CA_CERT_VERSION \
    git=3.10
# ...
Then when you build your image, you should figure out the appropriate ca-certificates version and pass it as a build-arg.
Something like:
version="$(python3 ./get-cacertversion.py)" # you implement this
docker build --build-arg CA_CERT_VERSION=$version -t myimage .
Be sure to add appropriate bits to leverage docker caching in GitLab.
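A hedged sketch of how that could look as a GitLab CI job (the job name, the docker-in-docker setup, and get-cacertversion.py, the hypothetical helper from above, are assumptions; the CI_REGISTRY_* variables are GitLab's predefined ones):
build_image:
  stage: build
  image: docker:24
  services:
    - docker:24-dind
  script:
    - apk add --no-cache python3
    - export CA_CERT_VERSION="$(python3 ./get-cacertversion.py)"
    - docker login -u "$CI_REGISTRY_USER" -p "$CI_REGISTRY_PASSWORD" "$CI_REGISTRY"
    # pull the previous image so --cache-from can reuse unchanged layers
    - docker pull "$CI_REGISTRY_IMAGE:latest" || true
    - docker build --cache-from "$CI_REGISTRY_IMAGE:latest" --build-arg CA_CERT_VERSION="$CA_CERT_VERSION" -t "$CI_REGISTRY_IMAGE:latest" .
    - docker push "$CI_REGISTRY_IMAGE:latest"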

Docker copy: failed to compute cache key/error from sender

I'm trying to create a Docker image from a Dockerfile, and while doing this, I encounter the following errors with the COPY steps:
failed to compute cache key: not found: not found when using relative paths, and
error from sender: Create file .......\Temp\empty-dir347165903\C:: The filename, directory name, or volume label syntax is incorrect when using absolute ones
The exact command I'm trying is COPY main.py ./
Important notes: there is no .dockerignore file whatsoever, no container is set, and both main.py and the Dockerfile are located in the same directory.
Here's what the Dockerfile itself looks like:
FROM public.ecr.aws/lambda/python:3.8
COPY requirements.txt .
RUN pip3 install -r requirements.txt
COPY main.py ./
RUN mkdir chrome
RUN curl -SL (chromedriver link here) > chromedriver.zip
RUN unzip chromedriver.zip -d chrome/
RUN rm chromedriver.zip
The command I'm running is docker build - < Dockerfile
This syntax is only valid if your build doesn't use the context. The docker build command expects one argument, and that's not the Dockerfile, rather it's the build context. Typically it's a directory, could be a remote git repo, or you can pass a tar file of the directory on stdin with the - syntax. There is an exception for passing a Dockerfile instead of the build context, but when this is done, you can't have any COPY or ADD steps that pull files from the build context. Instead, you almost certainly want:
docker build .
To perform the build using the current directory as your build context, which also contains the Dockerfile. And after that, you'll likely want to add a tag to your resulting image:
docker build -t your-image:latest .
(Thanks to David for the pointer to the Dockerfile-as-input syntax.)
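If you really do want to keep the stdin form, the tar option the answer mentions would look something like this (assuming main.py, requirements.txt, and the Dockerfile all sit at the top of the current directory):
tar -czf - . | docker build -t your-image:latest -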

Docker- Do we need to include RUN command in Dockerfile

I have some Python code, and to convert it to a Docker image I can use the command below:
sudo docker build -t customdocker .
This converts the Python code to a Docker image. To do that, I use a Dockerfile with the commands below:
FROM python:3
ADD my_script.py /
ADD user.conf /srv/config/conf.d/
RUN pip3 install <some-package>
CMD [ "python3", "./my_script.py" ]
In this, we have a RUN command which installs the required packages. Let's say we have deleted the image for some reason and want to build it again: is there any way we can skip this RUN step to save some time, since I think the packages are already installed?
Also, in my code I am using a file user.conf which is in another directory. So for that I am including it in the Dockerfile and also saving a copy of it in the current directory. Is there a way in Docker where I can define my working directory, so that the Docker image searches for the file inside those directories?
Thanks
You cannot remove the RUN or the other statements in the Dockerfile if you want to build the Docker image again after deleting it.
You can use the WORKDIR instruction in your Dockerfile, but its scope is within the Docker image, i.e. when you create a container from the image, the working directory will be set to the one mentioned in WORKDIR.
For example:
WORKDIR /srv/config/conf.d/
This sets /srv/config/conf.d/ as the working directory, but you still have to use the line below in the Dockerfile while building, in order to copy the file to the specified location:
ADD user.conf /srv/config/conf.d/
Answering your first question: a Docker image holds everything related to your Python environment, including the packages you install. When you delete the image, the packages are deleted with it. Therefore, no, you cannot skip that step.
Now on to your second question: you can bind-mount a directory when starting the container with:
docker run -v /directory-you-want-to-mount:/src/config/ customdocker
You can also set the working directory with the -w flag.
docker run -w /path/to/dir/ -i -t customdocker
https://docs.docker.com/v1.10/engine/reference/commandline/run/
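Tying this back to the question, a hedged sketch of a Dockerfile that relies on WORKDIR instead of keeping a second copy of user.conf next to the script (the config/ subdirectory of the build context is an assumption for illustration):
FROM python:3
# Subsequent relative paths and the container's default directory resolve here
WORKDIR /srv/config/conf.d/
# user.conf can stay in its own subdirectory of the build context
ADD config/user.conf .
ADD my_script.py /
RUN pip3 install <some-package>
CMD [ "python3", "/my_script.py" ]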
