I am trying to launch JupyterLab in a VS Code remote container (Docker), but I get an error saying:
Unable to start session for kernel Python 3.8.5 64-bit. Select another kernel to launch with.
I set up a Dockerfile and .devcontainer.json in my workspace directory.
Do I also need a docker-compose.yml file for JupyterLab settings such as port forwarding?
Or can the .devcontainer.json file handle that and replace the docker-compose file entirely?
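(For a single-container setup like this, a compose file shouldn't be required; "forwardPorts" in devcontainer.json already covers port forwarding. For reference, if you did go the compose route, a minimal sketch of the equivalent docker-compose.yml is below; the service name dev matches the commented-out lines in the devcontainer.json further down, while the paths and port mapping are assumptions.)
version: "3"
services:
  dev:
    build:
      context: ..
      dockerfile: .devcontainer/Dockerfile
    ports:
      - "8888:8888"   # JupyterLab; assumption, mirrors forwardPorts below
    volumes:
      - ..:/app
    command: sleep infinity   # keep the container alive for VS Code to attach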
Dockerfile:
FROM python:3.8
RUN apt-get update --fix-missing && apt-get upgrade -y
# Set Japanese UTF-8 as the locale so Japanese can be used
RUN apt-get install -y locales \
    && locale-gen ja_JP.UTF-8
ENV LANG ja_JP.UTF-8
ENV LANGUAGE ja_JP:ja
ENV LC_ALL ja_JP.UTF-8
# RUN apt-get install zsh -y && \
#     chsh -s /usr/bin/zsh
# Install zsh with a theme and some plugins
RUN sh -c "$(wget -O- https://raw.githubusercontent.com/deluan/zsh-in-docker/master/zsh-in-docker.sh)" \
    -t mrtazz \
    -p git -p ssh-agent
RUN pip install jupyterlab
RUN jupyter serverextension enable --py jupyterlab
WORKDIR /app
CMD ["bash"]
.devcontainer.json
{
  "name": "Python 3.8",
  "build": {
    "dockerfile": "Dockerfile",
    "context": ".."
  },
  // Uncomment to use docker-compose
  // "dockerComposeFile": "docker-compose.yml",
  // "service": "dev",
  // Set *default* container specific settings.json values on container create.
  "settings": {
    "terminal.integrated.shell.linux": "/bin/bash",
    "python.pythonPath": "/usr/local/bin/python",
    "python.linting.enabled": true,
    "python.linting.pylintEnabled": true,
    "python.formatting.autopep8Path": "/usr/local/py-utils/bin/autopep8",
    "python.formatting.blackPath": "/usr/local/py-utils/bin/black",
    "python.formatting.yapfPath": "/usr/local/py-utils/bin/yapf",
    "python.linting.banditPath": "/usr/local/py-utils/bin/bandit",
    "python.linting.flake8Path": "/usr/local/py-utils/bin/flake8",
    "python.linting.mypyPath": "/usr/local/py-utils/bin/mypy",
    "python.linting.pycodestylePath": "/usr/local/py-utils/bin/pycodestyle",
    "python.linting.pydocstylePath": "/usr/local/py-utils/bin/pydocstyle",
    "python.linting.pylintPath": "/usr/local/py-utils/bin/pylint"
  },
  // Add the IDs of extensions you want installed when the container is created.
  "extensions": [
    "ms-python.python",
    "teabyii.ayu",
    "jeff-hykin.better-dockerfile-syntax",
    "coenraads.bracket-pair-colorizer-2",
    "file-icons.file-icons",
    "emilast.logfilehighlighter",
    "zhuangtongfa.material-theme",
    "ibm.output-colorizer",
    "wayou.vscode-todo-highlight",
    "atishay-jain.all-autocomplete",
    "amazonwebservices.aws-toolkit-vscode",
    "hookyqr.beautify",
    "phplasma.csv-to-table",
    "alefragnani.bookmarks",
    "mrmlnc.vscode-duplicate",
    "tombonnike.vscode-status-bar-format-toggle",
    "donjayamanne.githistory",
    "codezombiech.gitignore",
    "eamodio.gitlens",
    "zainchen.json",
    "ritwickdey.liveserver",
    "yzhang.markdown-all-in-one",
    "pkief.markdown-checkbox",
    "shd101wyy.markdown-preview-enhanced",
    "ionutvmi.path-autocomplete",
    "esbenp.prettier-vscode",
    "diogonolasco.pyinit",
    "ms-python.vscode-pylance",
    "njpwerner.autodocstring",
    "kevinrose.vsc-python-indent",
    "mechatroner.rainbow-csv",
    "msrvida.vscode-sanddance",
    "rafamel.subtle-brackets",
    "formulahendry.terminal",
    "tyriar.terminal-tabs",
    "redhat.vscode-yaml"
  ],
  // Use 'forwardPorts' to make a list of ports inside the container available locally.
  "forwardPorts": [8888],
  // Use 'postCreateCommand' to run commands after the container is created.
  // "postCreateCommand": "pip3 install -r requirements.txt",
  // Comment out to connect as root instead.
  // "remoteUser": "myname",
  "shutdownAction": "none"
}
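One thing worth checking either way: when JupyterLab is launched inside the container, it must listen on all interfaces or the forwarded port will never reach it. A minimal sketch, using the same port 8888 as forwardPorts above:
jupyter lab --ip=0.0.0.0 --port=8888 --no-browser --allow-root
(--allow-root only matters here because remoteUser is commented out, so the container connects as root.)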
Related
I am trying to synthesize a CDK app (TypeScript) which has some Python Lambda functions.
I am using PythonFunction with a requirements.txt file to install the external dependencies. I am running VS Code on WSL. I am encountering the following error:
Bundling asset Test/test-lambda-stack/test-subscriber-data-validator-poc/Code/Stage...
node:internal/fs/utils:347
throw err;
^
Error: ENOENT: no such file or directory, open '~/.nvm/versions/node/v16.17.0/lib/node_modules/docker/node_modules/highlight.js/styles/cp -rTL /asset-input/ /asset-output && cd /asset-output && python -m pip install -r requirements.txt -t /asset-output.css'
at Object.openSync (node:fs:594:3)
at Object.readFileSync (node:fs:462:35)
at module.exports (~/.nvm/versions/node/v16.17.0/lib/node_modules/docker/src/getColourScheme.js:47:26)
at ~/.nvm/versions/node/v16.17.0/lib/node_modules/docker/src/docker.js:809:47
at FSReqCallback.readFileAfterClose [as oncomplete] (node:internal/fs/read_file_context:68:3)
at FSReqCallback.callbackTrampoline (node:internal/async_hooks:130:17) {
errno: -2,
syscall: 'open',
code: 'ENOENT',
path: '~/.nvm/versions/node/v16.17.0/lib/node_modules/docker/node_modules/highlight.js/styles/cp -rTL /asset-input/ /asset-output && cd /asset-output && python -m pip install -r requirements.txt -t /asset-output.css'
}
Error: Failed to bundle asset Test/test-lambda-stack/test-subscriber-data-validator-poc/Code/Stage, bundle output is located at ~/Code/AWS/CDK/test-dev-poc/cdk.out/asset.6b577fe604573a3b53e635f09f768df3f87ad6651b18e9f628c2a086a525bb49-error: Error: docker exited with status 1
at AssetStaging.bundle (~/Code/AWS/CDK/test-dev-poc/node_modules/aws-cdk-lib/core/lib/asset-staging.js:2:614)
at AssetStaging.stageByBundling (~/Code/AWS/CDK/test-dev-poc/node_modules/aws-cdk-lib/core/lib/asset-staging.js:1:4506)
at stageThisAsset (~/Code/AWS/CDK/test-dev-poc/node_modules/aws-cdk-lib/core/lib/asset-staging.js:1:1867)
at Cache.obtain (~/Code/AWS/CDK/test-dev-poc/node_modules/aws-cdk-lib/core/lib/private/cache.js:1:242)
at new AssetStaging (~/Code/AWS/CDK/test-dev-poc/node_modules/aws-cdk-lib/core/lib/asset-staging.js:1:2262)
at new Asset (~/Code/AWS/CDK/test-dev-poc/node_modules/aws-cdk-lib/aws-s3-assets/lib/asset.js:1:736)
at AssetCode.bind (~/Code/AWS/CDK/test-dev-poc/node_modules/aws-cdk-lib/aws-lambda/lib/code.js:1:4628)
at new Function (~/Code/AWS/CDK/test-dev-poc/node_modules/aws-cdk-lib/aws-lambda/lib/function.js:1:2803)
at new PythonFunction (~/Code/AWS/CDK/test-dev-poc/node_modules/@aws-cdk/aws-lambda-python-alpha/lib/function.ts:73:5)
at new lambdaInfraStack (~/Code/AWS/CDK/test-dev-poc/lib/serviceInfraStacks/lambda-infra-stack.ts:24:40)
My requirements.txt file looks like this:
attrs==22.1.0
jsonschema==4.16.0
pyrsistent==0.18.1
My CDK code is this:
new PythonFunction(this, `${appName}-subscriber-data-validator-${stage}`, {
  runtime: Runtime.PYTHON_3_9,
  entry: join('lambdas/subscriber_data_validator'),
  handler: 'lambda_hander',
  index: 'subscriber_data_validator.py'
})
Do I need to install anything additional? I have esbuild installed as a devDependency. I'm having a really hard time getting this to work. Any help is appreciated.
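One hedged reading of the stack trace above: the paths ~/.nvm/.../node_modules/docker/src/docker.js suggest that the bundling step's call to docker is resolving to the npm package named docker (a documentation generator) installed under nvm, rather than the Docker CLI. A quick way to check which binary wins on your PATH:
which -a docker     # lists every 'docker' on PATH, in resolution order
docker --version    # should print a Docker CLI version string
npm ls -g docker    # shows whether the npm 'docker' package is installed globally
                    # if it is: npm uninstall -g docker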
I want to use "Select a script to run after creation" when I create a notebook instance in GCP.
Specifically, I want to use it to install Python packages.
What kind of script (extension and contents) do I need to write?
Here is an example of a post-startup script that installs Voila.
Save this file in a GCS bucket and, when creating the notebook, define the path to it, for example:
gcloud notebooks instances create nb-1 \
  '--vm-image-project=deeplearning-platform-release' \
  '--vm-image-family=tf2-latest-cpu' \
  '--metadata=post-startup-script=gs://ai-platform-notebooks-tools/install-voila.sh' \
  '--location=us-central1-a'
Script contents:
#!/bin/bash -eu
# Installs Voila in AI Platform Notebook
function install_voila() {
  echo 'Installing voila...'
  /opt/conda/condabin/conda install -y -c conda-forge ipywidgets ipyvolume bqplot scipy
  /opt/conda/condabin/conda install -y -c conda-forge voila
  /opt/conda/bin/jupyter lab build
  systemctl restart jupyter.service || echo 'Error restarting jupyter.service.'
}

function download_samples() {
  echo 'Downloading samples...'
  cd /home/jupyter
  git clone https://github.com/voila-dashboards/voila
}

function main() {
  install_voila || echo 'Error installing voila.'
  download_samples || echo 'Error downloading voila samples.'
}

main
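Since the question is about installing Python packages generally, the same pattern reduces to a much smaller script. A minimal sketch, with placeholder package names, assuming the standard /opt/conda layout of the Deep Learning VM images used above:
#!/bin/bash -eu
# Post-startup script: install Python packages into the instance's conda env
# (pandas and requests are placeholder packages)
echo 'Installing packages...'
/opt/conda/bin/pip install --upgrade pandas requests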
I am building a Docker image that will run a Flask application.
When I build it locally, it works with no problem.
My Dockerfile:
FROM python:3.7
#RUN apt-get update -y
WORKDIR /app
RUN curl www.google.com
COPY requirements.txt requirements.txt
RUN pip install --trusted-host pypi.org --trusted-host files.pythonhosted.org -r requirements.txt
My Jenkins pipeline:
pipeline {
  agent {
    label "linux_machine"
  }
  stages {
    stage('Stage1') {
      steps {
        //sh 'docker --version'
        //sh 'python3 --version'
        //sh 'pip3 --version'
        checkout([$class: 'GitSCM', branches: [[name: '*/my_branch']], extensions: [], userRemoteConfigs: [[credentialsId: 'credentials_', url: 'https://myrepo.git']]])
      }
    }
    stage('Stage2') {
      steps {
        sh "docker build --tag tag1 --file path/to/docker_file_in_repo docker_folder_path"
      }
    }
  }
}
I was able to install Docker and Jenkins locally on my machine and everything works fine, but when I run the job on the Jenkins server with real agents I get:
File "/usr/local/lib/python3.7/site-packages/pip/_internal/network/auth.py", line 256, in handle_401
username, password, save = self._prompt_for_password(parsed.netloc)
File "/usr/local/lib/python3.7/site-packages/pip/_internal/network/auth.py", line 226, in _prompt_for_password
username = ask_input(f"User for {netloc}: ")
File "/usr/local/lib/python3.7/site-packages/pip/_internal/utils/misc.py", line 237, in ask_input
return input(message)
EOFError: EOF when reading a line
Removed build tracker: '/tmp/pip-req-tracker-i4mhh7vg'
The command '/bin/sh -c pip install --trusted-host pypi.org --trusted-host files.pythonhosted.org -r requirements.txt' returned a non-zero code: 2
I tried using --no-input but got the same error.
It seems that pip is asking for a user and password; why is that?
Is Docker using the agent/host's certificates and passing them to the commands in the Dockerfile?
Any suggestion on how I could make this work?
Thanks, guys.
Unfortunately, the problem is not clear at all from the message. What happens is that pip gets a 401 Unauthorized from the package index. You have to provide credentials so it can log in.
You can add --no-input so it doesn't try to ask for a password (where it then fails due to STDIN being unavailable). That doesn't solve the underlying problem of it being unable to authenticate, though.
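A minimal sketch of one way to supply those credentials at build time; the index URL, user, and password are placeholders, and in a real pipeline you would inject them from Jenkins credentials rather than hard-coding them:
FROM python:3.7
WORKDIR /app
COPY requirements.txt requirements.txt
# pip reads PIP_INDEX_URL from the environment; pass it at build time with
#   docker build --build-arg PIP_INDEX_URL=https://user:pass@pypi.example.com/simple ...
ARG PIP_INDEX_URL
ENV PIP_INDEX_URL=${PIP_INDEX_URL}
RUN pip install --no-input -r requirements.txt
Note that build args end up in the image history (docker history), so for anything sensitive, BuildKit secrets (--mount=type=secret) are the safer route.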
I have written a Dockerfile which adds my Python script inside the container:
ADD test_pclean.py /test_pclean.py
My directory structure is:
.
├── Dockerfile
├── README.md
├── pipeline.json
└── test_pclean.py
My JSON file, which acts as a configuration file for creating a pipeline in Pachyderm, is as follows:
{
  "pipeline": {
    "name": "mopng-beneficiary-v2"
  },
  "transform": {
    "cmd": ["python3", "/test_pclean.py"],
    "image": "avisrivastava254084/mopng-beneficiary-v2-image-7"
  },
  "input": {
    "atom": {
      "repo": "mopng_beneficiary_v2",
      "glob": "/*"
    }
  }
}
Even though I have copied the official documentation's example, I am facing an error:
python3: can't open file '/test_pclean.py': [Errno 2] No such file or directory
My Dockerfile is:
FROM debian:stretch
# Install opencv and matplotlib.
RUN apt-get update \
    && apt-get upgrade -y \
    && apt-get install -y unzip wget build-essential \
        cmake git pkg-config libswscale-dev \
    && apt-get clean \
    && rm -rf /var/lib/apt
RUN apt update
RUN apt-get -y install python3-pip
RUN pip3 install matplotlib
RUN pip3 install pandas
ADD test_pclean.py /test_pclean.py
ENTRYPOINT [ "/bin/bash" ]
As some of the comments above suggest, it looks like your test_pclean.py file isn't in the Docker image. Here's what should fix it.
Make sure your test_pclean.py file is in your Docker image by having it included as part of the build process. Put this as the last step in your Dockerfile:
COPY test_pclean.py .
Ensure that your Pachyderm pipeline spec has the following for the cmd portion:
"cmd": ["python3", "./test_pclean.py"]
And this is more of a suggestion than a requirement: you'll make life easier for yourself if you use image tags as part of your Docker builds. If you default to the latest tag, any future iterations/builds of this step in your pipeline could have negative effects (new bugs in your code, etc.). Therefore the best practice is to use a specific version in your pipeline: mopng-beneficiary-v2-image-7:v1, then mopng-beneficiary-v2-image-7:v2, and so on. That way you can iterate on, say, version 3 without affecting the already-running pipeline.
docker build -t avisrivastava254084/mopng-beneficiary-v2-image-7:v1 .
Then just update your pipeline spec to use avisrivastava254084/mopng-beneficiary-v2-image-7:v1.
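The full loop would then look roughly like this (hedged: pachctl update pipeline is the modern spelling; older 1.x releases used pachctl update-pipeline):
docker build -t avisrivastava254084/mopng-beneficiary-v2-image-7:v1 .
docker push avisrivastava254084/mopng-beneficiary-v2-image-7:v1
# after bumping "image" in pipeline.json to the :v1 tag:
pachctl update pipeline -f pipeline.json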
I was not changing the tag on my Docker image for each build, and hence Kubernetes kept using the local image it already had (without a new tag, it doesn't acknowledge any change). Once I started tagging each build, Kubernetes started pulling the intended Docker image.
I am trying to build a Python package with some wrapped C++ code on macOS via Travis CI. This is my build config:
{
  "os": "osx",
  "env": "PYTHON=3.6 CPP=14 CLANG DEBUG=1",
  "sudo": false,
  "script": [
    "python setup.py install",
    "py.test"
  ],
  "install": [
    "if [ \"$TRAVIS_OS_NAME\" = \"osx\" ]; then\n if [ \"$PY\" = \"3\" ]; then\n brew update && brew upgrade python\n else\n curl -fsSL https://bootstrap.pypa.io/get-pip.py | $PY_CMD - --user\n fi\n fi\nif [[ \"${TRAVIS_OS_NAME}\" == \"osx\" ]]; then\n export CXX=clang++ CC=clang;\n # manually install python on osx\n brew update\n brew install python3\n brew reinstall gcc\n virtualenv venv\n source venv/bin/activate\n pip install -r requirements.txt --upgrade\nfi\n",
    "pip install -r requirements.txt --upgrade",
    "python --version"
  ],
  "language": "python",
  "osx_image": "xcode9"
}
I get the following build error:
2.7 is not installed; attempting download
Downloading archive: https://s3.amazonaws.com/travis-python-archives/binaries/osx/10.12/x86_64/python-2.7.tar.bz2
$ curl -sSf -o python-2.7.tar.bz2 ${archive_url}
curl: (22) The requested URL returned error: 403 Forbidden
Unable to download 2.7 archive. The archive may not exist. Please consider a different version.
I'm not sure what to do about this.
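For what it's worth, one hedged reading of this: Travis CI's macOS images do not support language: python, so the bootstrapper falls back to downloading a prebuilt Python archive (2.7 by default, since the config never sets a python: version) that doesn't exist for this image, hence the 403. A common workaround is to switch to a generic language and install Python yourself, roughly:
{
  "os": "osx",
  "language": "generic",
  "osx_image": "xcode9",
  "env": "PYTHON=3.6 CPP=14 CLANG DEBUG=1",
  "install": [
    "brew update && brew install python3",
    "python3 -m pip install -r requirements.txt --upgrade"
  ],
  "script": [
    "python3 setup.py install",
    "python3 -m pytest"
  ]
}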