When attempting to initialise a new CDK project in a Windows WSL environment, I was confronted with the following error:
cdk init test-app --language python
Usage:
cdk [-vbo] [--toc] [--notransition] [--logo=<logo>] [--theme=<theme>] [--custom-css=<cssfile>] FILE
cdk --install-theme=<theme>
cdk --default-theme=<theme>
cdk --generate=<name>
I first wanted to check that the install was still correct, but the version number is not displayed:
cdk --version
cdk
All advice online and on Stack Overflow suggests re-installing as the root user. I attempted a global install as root, followed by a restart:
sudo npm install -g aws-cdk
Checking the globally installed version lists the following, showing the update has taken effect globally:
npm list -g --depth=0 | grep cdk
├── aws-cdk@2.15.0
├── cdk-assume-role-credential-plugin@1.4.0
but the error remains the same. Running the which command shows that cdk is being resolved from the user path:
which cdk
/home/user/.local/bin/cdk
This is a new error and I am unable to pinpoint any particular change that could have caused this. I have been able to initialise cdk projects in empty directories before without issue.
I am trying to test a Lambda function locally. The function is created from the public Docker image from AWS, but I want to install my own Python library from my GitHub. According to the AWS SAM build documentation, I have to pass a variable into the Dockerfile as a build argument, like this:
Dockerfile
FROM public.ecr.aws/lambda/python:3.8
COPY lambda_preprocessor.py requirements.txt ./
RUN yum install -y git
RUN python3.8 -m pip install -r requirements.txt -t .
ARG GITHUB_TOKEN
RUN python3.8 -m pip install git+https://${GITHUB_TOKEN}@github.com/repository/library.git -t .
To pass the GITHUB_TOKEN, I can create a .json file containing the variables for the Docker environment.
.json file named env.json
{
  "LambdaPreprocessor": {
    "GITHUB_TOKEN": "TOKEN_VALUE"
  }
}
Then I simply pass the file path to sam build:
sam build --use-container --container-env-var-file env.json
Or pass the value directly, without the .json file:
sam build --use-container --container-env-var GLOBAL_ENV_VAR=TOKEN_VALUE
My problem is that the GITHUB_TOKEN variable does not come through, either with the .json file or by passing it directly on the command line with --container-env-var GITHUB_TOKEN=TOKEN_VALUE.
Running sam build --use-container --container-env-var GLOBAL_ENV_VAR=TOKEN_VALUE --debug shows that it is not picked up when creating the Lambda image.
The only way that has worked for me is to put the token directly in the Dockerfile, not as a build argument.
Build output:
Building image for LambdaPreprocessor function
Setting DockerBuildArgs: {} for LambdaPreprocessor function
Does anyone know why this is happening? Am I doing something wrong?
If you need to see the template.yaml, this is the Lambda definition.
template.yaml
LambdaPreprocessor:
  Type: AWS::Serverless::Function
  Properties:
    PackageType: Image
    Architectures:
      - x86_64
    Timeout: 180
  Metadata:
    Dockerfile: Dockerfile
    DockerContext: ./lambda_preprocessor
    DockerTag: python3.8-v1
I'm doing this with VS Code and WSL 2 with Ubuntu 20.04 LTS on Windows 10.
I am having this issue too. What I have learned is that the Metadata field also accepts a DockerBuildArgs entry that you can add. Example:
Metadata:
  DockerBuildArgs:
    MY_VAR: <some variable>
When I add this, it does make it into the DockerBuildArgs dict.
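For the GITHUB_TOKEN case from the question, I would expect the template to look roughly like this (just a sketch; TOKEN_VALUE is a placeholder, and hard-coding a real token in template.yaml is best avoided):
LambdaPreprocessor:
  Type: AWS::Serverless::Function
  Properties:
    PackageType: Image
    Architectures:
      - x86_64
    Timeout: 180
  Metadata:
    Dockerfile: Dockerfile
    DockerContext: ./lambda_preprocessor
    DockerTag: python3.8-v1
    DockerBuildArgs:
      # forwarded to the Dockerfile's ARG GITHUB_TOKEN at build time
      GITHUB_TOKEN: TOKEN_VALUE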
I have a Python app that takes the value of a certificate in a Dockerfile and updates it. However, I'm having difficulty working out how to get the app to work within GitLab.
When I push the app along with the Dockerfile to be updated, I want the app to run in the GitLab pipeline and update the Dockerfile. I'm a little stuck on how to do this. I'm thinking that I would need to pull the repo, run the app, and then push the result back up.
I would like some advice on whether this is the right approach and, if so, how I would go about doing it.
This is just an example of the Dockerfile to be updated (I know this image wouldn't actually work, but the app would only update the ca-certificates version present in the Dockerfile):
#syntax=docker/dockerfile:1
#init the base image
FROM alpine:3.15
#define present working directory
#WORKDIR /library
#run pip to install the dependencies of the flask app
RUN apk add -u \
ca-certificates=20211220 \
git=3.10
#copy all files in our current directory into the image
COPY . /library
EXPOSE 5000
#define command to start the container, need to make app visible externally by specifying host 0.0.0.0
CMD [ "python3", "-m", "flask", "run", "--host=0.0.0.0"]
gitlab-ci.yml:
stages:
  - build
  - test
  - update_certificate

variables:
  PYTHON_IMG: "python:3.10"

pytest_installation:
  image: $PYTHON_IMG
  stage: build
  script:
    - pip install pytest
    - pytest --version

python_requirements_installation:
  image: $PYTHON_IMG
  stage: build
  script:
    - pip install -r requirements.txt

unit_test:
  image: $PYTHON_IMG
  stage: test
  script:
    - pytest ./tests/test_automated_cert_checker.py

cert_updater:
  image: $PYTHON_IMG
  stage: update_certificate
  script:
    - pip install -r requirements.txt
    - python3 automated_cert_updater.py
I'm aware there's a lot of repetition with installing the requirements multiple times and that this is an area for improvement. It doesn't feel like it's necessary for the app to be built into an image, because it's only used for updating the Dockerfile.
requirements.txt installs pytest and BeautifulSoup4
Additional context: The pipeline that builds the Docker image already exists and builds successfully. I am looking for a way to run this app once a day to check whether the ca-certificates version is still up to date. If it isn't, the app runs, the ca-certificates entry in the Dockerfile is updated, and the updated Dockerfile is rebuilt automatically.
My thought is that I may need to set up the gitlab-ci.yml to pull the repo, run the app (which updates the ca-certificates version), and then push the change back, so that a new image is built based on the updated certificate.
The Dockerfile shown here is just a basic example of what the actual Dockerfile in the repo looks like.
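For the once-a-day part, I am assuming I could set up a scheduled pipeline in GitLab and restrict the updater job to it with rules, roughly like this (just a sketch, not something I have working yet):
cert_updater:
  image: $PYTHON_IMG
  stage: update_certificate
  rules:
    # only run for pipelines started by a schedule (e.g. a daily schedule under CI/CD > Schedules)
    - if: '$CI_PIPELINE_SOURCE == "schedule"'
  script:
    - pip install -r requirements.txt
    - python3 automated_cert_updater.py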
What you probably want to do is identify the appropriate version before you build the image. Then, pass a --build-arg with the ca-certificates version. That way, if the arg changes, the cached layer becomes invalid and the new version will be installed; but if the version is the same, the cached layer will be used.
FROM alpine:3.15
ARG CA_CERT_VERSION
RUN apk add -u \
ca-certificates=$CA_CERT_VERSION \
git=3.10
# ...
Then when you build your image, you should figure out the appropriate ca-certificates version and pass it as a build-arg.
Something like:
version="$(python3 ./get-cacertversion.py)" # you implement this
docker build --build-arg CA_CERT_VERSION=$version -t myimage .
Be sure to add appropriate bits to leverage docker caching in GitLab.
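In a GitLab CI job, that could look roughly like the following (a sketch only: it assumes a Docker-in-Docker runner setup, and get-cacertversion.py is the hypothetical helper mentioned above):
build_image:
  stage: build
  image: docker:20.10
  services:
    - docker:20.10-dind
  script:
    # python3 is needed to run the version-lookup helper
    - apk add --no-cache python3
    - version="$(python3 ./get-cacertversion.py)"
    # a changed CA_CERT_VERSION invalidates the cached apk layer; an unchanged one reuses it
    - docker build --build-arg CA_CERT_VERSION="$version" -t myimage .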
Error:
Running "serverless" from node_modules
Deploying serverless-flask to stage dev (us-east-1)
✖ Stack serverless-flask-dev failed to deploy (0s)
Environment: darwin, node 16.0.0, framework 3.1.1 (local) 3.1.1v (global), plugin 6.0.0, SDK 4.3.1
Credentials: Local, "default" profile
Docs: docs.serverless.com
Support: forum.serverless.com
Bugs: github.com/serverless/serverless/issues
Error:
Error: spawn docker ENOENT
at Process.ChildProcess._handle.onexit (node:internal/child_process:282:19)
at onErrorNT (node:internal/child_process:480:16)
at processTicksAndRejections (node:internal/process/task_queues:83:21)
I'm following these instructions (https://www.serverless.com/blog/flask-python-rest-api-serverless-lambda-dynamodb/) and can't seem to figure this out, since the base app is in Python and not JavaScript... most people who have solved this did so using JavaScript.
To solve this issue, you need to update your serverless.yml file with these changes in the custom block:
custom:
  pythonRequirements:
    pythonBin: python3
    dockerizePip: false
I also faced the same issue. My issue was with dockerizePip; it was set to
dockerizePip: non-linux
Either remove this entry from the serverless.yml file or just set it to false.
To be able to deploy your project with serverless-python-requirements, you need to have Docker on your machine (if you are on Windows, consider using Docker Desktop or a Linux VM).
Why do I need Docker?
When you do a sls deploy, serverless-python-requirements launches a Docker container to install all the dependencies you've put in your requirements.txt file, which are then used during the deployment process.
You are getting this error because that container is not being launched correctly: spawn docker ENOENT means the docker executable could not be found on your machine.
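For reference, the relevant serverless.yml pieces would look roughly like this (a sketch only; adjust to your own setup from the tutorial):
plugins:
  - serverless-python-requirements

custom:
  pythonRequirements:
    # build Python dependencies inside Docker only on non-Linux hosts;
    # this requires Docker to be installed and running locally
    dockerizePip: non-linux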
I am trying Docker for TensorFlow on Windows 10 Education. I have installed Docker successfully and can run/pull/import images. I started my Docker container using:
C:\User\xyz_folder> docker run -it tensorflow/tensorflow:latest-devel
root@23433215319e:~# cd /tensorflow
root@23433215319e:/tensorflow# git pull
From https://github.com/tensorflow/tensorflow
* [new tag] v1.11.0 -> v1.11.0
Already up-to-date.
Up to here it ran fine without error. The following is the problem:
root@23433215319e:/tensorflow# cd abc_folder
bash: cd: abc_folder: No such file or directory
The abc_folder is there in the linked folder but cannot be seen when I list the contents using 'ls':
root@23433215319e:/tensorflow# ls
ACKNOWLEDGMENTS CODEOWNERS LICENSE WORKSPACE bazel-out configure.py tools ADOPTERS.md CODE_OF_CONDUCT.md README.md arm_compiler.BUILD bazel-tensorflow models.BUILD AUTHORS CONTRIBUTING.md RELEASE.md bazel-bin bazel-testlogs tensorflow BUILD ISSUE_TEMPLATE.md SECURITY.md bazel-genfiles configure third_party
Please suggest how to link this properly so that I can see the shared folder's contents.
Thanks in advance.
To make a directory outside the container visible inside the container, you have to use the option -v or --volume, as stated here.
So, your command would have to be:
docker run -v c:\local\directory:/container/directory -it tensorflow/tensorflow:latest-devel
With that, you should be able to see the directory inside the container.
I'm trying to familiarize myself with the GitLab CI environment with a test project, https://gitlab.com/khpeek/CI-test. The project has the following .gitlab-ci.yml:
image: python:2.7-onbuild

services:
  - rethinkdb:latest

test_job:
  script:
    - pytest
The problem is that the test_job job in the CI pipeline fails with the following error message:
Running with gitlab-ci-multi-runner 9.0.1 (a3da309)
on docker-auto-scale (e11ae361)
Using Docker executor with image python:2.7-onbuild ...
Starting service rethinkdb:latest ...
Pulling docker image rethinkdb:latest ...
Using docker image rethinkdb:latest ID=sha256:23ecfb08823bc5483c6a955b077a9bc82899a0df2f33899b64992345256f22dd for service rethinkdb...
Waiting for services to be up and running...
Using docker image sha256:aaecf574604a31dd49a9d4151b11739837e4469df1cf7b558787048ce4ba81aa ID=sha256:aaecf574604a31dd49a9d4151b11739837e4469df1cf7b558787048ce4ba81aa for predefined container...
Pulling docker image python:2.7-onbuild ...
Using docker image python:2.7-onbuild ID=sha256:5754a7fac135b9cae7e02e34cc7ba941f03a33fb00cf31f12fbb71b8d389ece2 for build container...
Running on runner-e11ae361-project-3083420-concurrent-0 via runner-e11ae361-machine-1491819341-82630004-digital-ocean-2gb...
Cloning repository...
Cloning into '/builds/khpeek/CI-test'...
Checking out d0937f33 as master...
Skipping Git submodules setup
$ pytest
/bin/bash: line 56: pytest: command not found
ERROR: Job failed: exit code 1
However, there is a requirements.txt in the repository with the single line pytest==3.0.7 in it, and it seems to me from the Dockerfile of the python:2.7-onbuild image that pip install -r requirements.txt should get run on build. So why is pytest not found?
If you look at the Dockerfile you linked to, you'll see that pip install -r requirements.txt is part of an ONBUILD instruction. This is useful if you want to create a new image from that first one and install a bunch of requirements at that point. The pip install -r requirements.txt command is therefore not executed within the container in your CI pipeline, and even if it were, it would be executed at the very beginning, before your GitLab repository was cloned.
I would suggest you modify your .gitlab-ci.yml file this way:
image: python:2.7-onbuild

services:
  - rethinkdb:latest

test_job:
  script:
    - pip install -r requirements.txt
    - pytest
The problem seems to be intermittent: although the first time it took 61 minutes to run the tests (which initially failed), now it takes about a minute (see screen grab below).
For reference, the testing repository is at https://gitlab.com/khpeek/CI-test. (I had to add a before_script with some pip installs to make the job succeed).
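For anyone curious, a before_script along these lines would do it (a sketch; the exact packages it installs are my own assumption based on the requirements.txt mentioned above):
test_job:
  before_script:
    # install the test dependencies before running pytest
    - pip install -r requirements.txt
  script:
    - pytest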