I am building a Docker image that will run a Flask application.
When I do it locally, I can build the image with no problem.
My Dockerfile:
FROM python:3.7
#RUN apt-get update -y
WORKDIR /app
RUN curl www.google.com
COPY requirements.txt requirements.txt
RUN pip install --trusted-host pypi.org --trusted-host files.pythonhosted.org -r requirements.txt
My Jenkins pipeline:
pipeline {
    agent {
        label "linux_machine"
    }
    stages {
        stage('Stage1') {
            steps {
                //sh 'docker --version'
                //sh 'python3 --version'
                //sh 'pip3 --version'
                checkout([$class: 'GitSCM', branches: [[name: '*/my_branch']], extensions: [], userRemoteConfigs: [[credentialsId: 'credentials_', url: 'https://myrepo.git']]])
            }
        }
        stage('Stage2') {
            steps {
                sh "docker build --tag tag1 --file path/to/docker_file_in_repo docker_folder_path"
            }
        }
    }
}
I was able to install Docker and Jenkins locally on my machine and everything works fine, but when I put it on the Jenkins server with real agents I get:
File "/usr/local/lib/python3.7/site-packages/pip/_internal/network/auth.py", line 256, in handle_401
username, password, save = self._prompt_for_password(parsed.netloc)
File "/usr/local/lib/python3.7/site-packages/pip/_internal/network/auth.py", line 226, in _prompt_for_password
username = ask_input(f"User for {netloc}: ")
File "/usr/local/lib/python3.7/site-packages/pip/_internal/utils/misc.py", line 237, in ask_input
return input(message)
EOFError: EOF when reading a line
Removed build tracker: '/tmp/pip-req-tracker-i4mhh7vg'
The command '/bin/sh -c pip install --trusted-host pypi.org --trusted-host files.pythonhosted.org -r requirements.txt' returned a non-zero code: 2
I tried using --no-input but got the same error.
It seems pip is asking for a username and password. Why is that?
Is Docker using the credentials/certificates of the agent/host and passing them to the commands in the Dockerfile?
Any suggestion on how I could make this work?
Thanks, guys.
Unfortunately, the problem is not clear at all from the message. What happens is that pip gets a 401 Unauthorized from the package index. You have to provide credentials so it can log in.
You can add --no-input so it doesn't try to ask for a password (where it then fails due to STDIN being unavailable). That doesn't solve the underlying problem of it being unable to authenticate against the index.
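If the agents resolve packages through a private index or proxy (for example an Artifactory or Nexus mirror) that requires authentication, one way to supply credentials at build time is a build argument that pip picks up from the environment. A minimal sketch; the mirror URL, USER, and TOKEN below are assumptions, not values from the question:

# Dockerfile (sketch): pip reads PIP_INDEX_URL from the environment
ARG PIP_INDEX_URL
ENV PIP_INDEX_URL=${PIP_INDEX_URL}
RUN pip install --no-input -r requirements.txt

# Jenkins sh step (sketch, hypothetical mirror and credentials):
docker build --build-arg PIP_INDEX_URL=https://USER:TOKEN@pypi.mycorp.example/simple --tag tag1 --file path/to/docker_file_in_repo docker_folder_path

One caveat: build args end up in the image history, so a credential helper or BuildKit secrets may be preferable in production.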
I am trying to synthesize a CDK app (TypeScript) which has some Python lambda functions.
I am using PythonFunction to use a requirements.txt file to install the external dependencies. I am running VS Code on WSL. I am encountering the following error.
Bundling asset Test/test-lambda-stack/test-subscriber-data-validator-poc/Code/Stage...
node:internal/fs/utils:347
throw err;
^
Error: ENOENT: no such file or directory, open '~/.nvm/versions/node/v16.17.0/lib/node_modules/docker/node_modules/highlight.js/styles/cp -rTL /asset-input/ /asset-output && cd /asset-output && python -m pip install -r requirements.txt -t /asset-output.css'
at Object.openSync (node:fs:594:3)
at Object.readFileSync (node:fs:462:35)
at module.exports (~/.nvm/versions/node/v16.17.0/lib/node_modules/docker/src/getColourScheme.js:47:26)
at ~/.nvm/versions/node/v16.17.0/lib/node_modules/docker/src/docker.js:809:47
at FSReqCallback.readFileAfterClose [as oncomplete] (node:internal/fs/read_file_context:68:3)
at FSReqCallback.callbackTrampoline (node:internal/async_hooks:130:17) {
errno: -2,
syscall: 'open',
code: 'ENOENT',
path: '~/.nvm/versions/node/v16.17.0/lib/node_modules/docker/node_modules/highlight.js/styles/cp -rTL /asset-input/ /asset-output && cd /asset-output && python -m pip install -r requirements.txt -t /asset-output.css'
}
Error: Failed to bundle asset Test/test-lambda-stack/test-subscriber-data-validator-poc/Code/Stage, bundle output is located at ~/Code/AWS/CDK/test-dev-poc/cdk.out/asset.6b577fe604573a3b53e635f09f768df3f87ad6651b18e9f628c2a086a525bb49-error: Error: docker exited with status 1
at AssetStaging.bundle (~/Code/AWS/CDK/test-dev-poc/node_modules/aws-cdk-lib/core/lib/asset-staging.js:2:614)
at AssetStaging.stageByBundling (~/Code/AWS/CDK/test-dev-poc/node_modules/aws-cdk-lib/core/lib/asset-staging.js:1:4506)
at stageThisAsset (~/Code/AWS/CDK/test-dev-poc/node_modules/aws-cdk-lib/core/lib/asset-staging.js:1:1867)
at Cache.obtain (~/Code/AWS/CDK/test-dev-poc/node_modules/aws-cdk-lib/core/lib/private/cache.js:1:242)
at new AssetStaging (~/Code/AWS/CDK/test-dev-poc/node_modules/aws-cdk-lib/core/lib/asset-staging.js:1:2262)
at new Asset (~/Code/AWS/CDK/test-dev-poc/node_modules/aws-cdk-lib/aws-s3-assets/lib/asset.js:1:736)
at AssetCode.bind (~/Code/AWS/CDK/test-dev-poc/node_modules/aws-cdk-lib/aws-lambda/lib/code.js:1:4628)
at new Function (~/Code/AWS/CDK/test-dev-poc/node_modules/aws-cdk-lib/aws-lambda/lib/function.js:1:2803)
at new PythonFunction (~/Code/AWS/CDK/test-dev-poc/node_modules/@aws-cdk/aws-lambda-python-alpha/lib/function.ts:73:5)
at new lambdaInfraStack (~/Code/AWS/CDK/test-dev-poc/lib/serviceInfraStacks/lambda-infra-stack.ts:24:40)
My requirements.txt file looks like this:
attrs==22.1.0
jsonschema==4.16.0
pyrsistent==0.18.1
My CDK code is this:
new PythonFunction(this, `${appName}-subscriber-data-validator-${stage}`, {
  runtime: Runtime.PYTHON_3_9,
  entry: join('lambdas/subscriber_data_validator'),
  handler: 'lambda_hander',
  index: 'subscriber_data_validator.py'
})
Do I need to install anything additional? I have esbuild installed as a devDependency. I'm having a real hard time getting this to work. Any help is appreciated.
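One hedged observation, based only on the paths in the trace: the ENOENT error runs through ~/.nvm/.../node_modules/docker/src/docker.js, which is the npm package named docker (a documentation generator), not the Docker CLI. If that globally installed package is shadowing the real docker binary on PATH, removing it and re-checking what resolves may fix the bundling step:

npm uninstall -g docker
which docker      # should now resolve to the real Docker CLI, e.g. /usr/bin/docker
docker --version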
I'm creating a project which needs to make a connection from Python running in a Docker container to a MySQL database running in another container. Currently, my docker-compose file looks like this:
version: "3"
services:
login:
build:
context: ./services/login
dockerfile: docker/Dockerfile
ports:
- "80:80"
# Need to remove this volume - this is only for dev work
volumes:
- ./services/login/app:/app
# Need to remove this command - this is only for dev work
command: /start-reload.sh
db_users:
image: mysql
volumes:
- ./data/mysql/users_data:/var/lib/mysql
- ./databases/users:/docker-entrypoint-initdb.d/:ro
restart: always
ports:
- 3306:3306
# Remove 'expose' below for prod
expose:
- 3306
environment:
MYSQL_ROOT_PASSWORD: password
MYSQL_DATABASE: users
MYSQL_USER: user
MYSQL_PASSWORD: password
And my Dockerfile for the login service looks like this:
# Note: this needs to be run from parent service directory
FROM tiangolo/uvicorn-gunicorn-fastapi:python3.8
# Install Poetry
RUN curl -sSL https://raw.githubusercontent.com/python-poetry/poetry/master/get-poetry.py | POETRY_HOME=/opt/poetry python && \
cd /usr/local/bin && \
ln -s /opt/poetry/bin/poetry && \
poetry config virtualenvs.create false
# Copy using poetry.lock* in case it doesn't exist yet
COPY ./app/pyproject.toml ./app/poetry.lock* /app/
RUN poetry install --no-root --no-dev
COPY ./app /app
I am trying to connect my login service to db_users, and want to make use of mysqlclient, but when I run poetry add mysqlclient, I get an error which includes the following lines:
/bin/sh: mysql_config: command not found
/bin/sh: mariadb_config: command not found
/bin/sh: mysql_config: command not found
Traceback (most recent call last):
File "<string>", line 1, in <module>
File "/private/var/folders/33/5yy7bny964bb0f3zggd1b4440000gn/T/pip-req-build-lak6lqu7/setup.py", line 15, in <module>
metadata, options = get_config()
File "/private/var/folders/33/5yy7bny964bb0f3zggd1b4440000gn/T/pip-req-build-lak6lqu7/setup_posix.py", line 70, in get_config
libs = mysql_config("libs")
File "/private/var/folders/33/5yy7bny964bb0f3zggd1b4440000gn/T/pip-req-build-lak6lqu7/setup_posix.py", line 31, in mysql_config
raise OSError("{} not found".format(_mysql_config_path))
OSError: mysql_config not found
mysql_config --version
mariadb_config --version
mysql_config --libs
----------------------------------------
ERROR: Command errored out with exit status 1: python setup.py egg_info Check the logs for full command output.
I'm assuming this has something to do with the fact that I need the mysql-connector-c library for mysqlclient to build, but I'm not sure how to go about getting this with Poetry.
I was looking at following this tutorial, but since I'm not running MySQL locally but rather in Docker, I'm not sure how to translate those steps to work in Docker.
So essentially, my question is two-fold:
1. How do I add mysqlclient to my pyproject.toml file?
2. How do I get this working in my Docker env?
I was forgetting that my dev environment is also in Docker, so I didn't really need to care about the local Poetry environment.
With that said, I edited the Dockerfile to look like the below:
# Note: this needs to be run from parent service directory
FROM tiangolo/uvicorn-gunicorn-fastapi:python3.8
# default-libmysqlclient-dev provides mysql_config, needed to build mysqlclient
RUN apt-get update && apt-get install -y default-libmysqlclient-dev
# Install Poetry
RUN curl -sSL https://raw.githubusercontent.com/python-poetry/poetry/master/get-poetry.py | POETRY_HOME=/opt/poetry python && \
cd /usr/local/bin && \
ln -s /opt/poetry/bin/poetry && \
poetry config virtualenvs.create false
# Copy using poetry.lock* in case it doesn't exist yet
COPY ./app/pyproject.toml ./app/poetry.lock* /app/
RUN poetry install --no-root --no-dev
COPY ./app /app
Which now has everything working as expected.
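A quick sanity check, sketched under the assumption that mysqlclient has been added to pyproject.toml and the compose service names above are unchanged; note that from inside the compose network the database is reachable at the hostname db_users, not localhost:

docker-compose build login
docker-compose run --rm login python -c "import MySQLdb; print(MySQLdb.version_info)"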
I am trying to launch JupyterLab in a VS Code remote server, encapsulated by Docker, but got an error saying:
Unable to start session for kernel Python 3.8.5 64-bit. Select another kernel to launch with.
I set up a Dockerfile and .devcontainer.json in the workspace dir.
Do I also need a docker-compose.yaml file for JupyterLab settings like port forwarding?
Or can I handle and replace the docker-compose file with the .devcontainer.json file?
Dockerfile:
FROM python:3.8
RUN apt-get update --fix-missing && apt-get upgrade -y
# Set Japanese UTF-8 as locale so Japanese can be used
RUN apt-get install -y locales \
&& locale-gen ja_JP.UTF-8
ENV LANG ja_JP.UTF-8
ENV LANGUAGE ja_JP:ja
ENV LC_ALL ja_JP.UTF-8
# RUN apt-get install zsh -y && \
# chsh -s /usr/bin/zsh
# Install zsh with theme and some plugins
RUN sh -c "$(wget -O- https://raw.githubusercontent.com/deluan/zsh-in-docker/master/zsh-in-docker.sh)" \
-t mrtazz \
-p git -p ssh-agent
RUN pip install jupyterlab
RUN jupyter serverextension enable --py jupyterlab
WORKDIR /app
CMD ["bash"]
.devcontainer.json:
{
    "name": "Python 3.8",
    "build": {
        "dockerfile": "Dockerfile",
        "context": ".."
    },
    // Uncomment to use docker-compose
    // "dockerComposeFile": "docker-compose.yml",
    // "service": "dev",
    // Set *default* container specific settings.json values on container create.
    "settings": {
        "terminal.integrated.shell.linux": "/bin/bash",
        "python.pythonPath": "/usr/local/bin/python",
        "python.linting.enabled": true,
        "python.linting.pylintEnabled": true,
        "python.formatting.autopep8Path": "/usr/local/py-utils/bin/autopep8",
        "python.formatting.blackPath": "/usr/local/py-utils/bin/black",
        "python.formatting.yapfPath": "/usr/local/py-utils/bin/yapf",
        "python.linting.banditPath": "/usr/local/py-utils/bin/bandit",
        "python.linting.flake8Path": "/usr/local/py-utils/bin/flake8",
        "python.linting.mypyPath": "/usr/local/py-utils/bin/mypy",
        "python.linting.pycodestylePath": "/usr/local/py-utils/bin/pycodestyle",
        "python.linting.pydocstylePath": "/usr/local/py-utils/bin/pydocstyle",
        "python.linting.pylintPath": "/usr/local/py-utils/bin/pylint"
    },
    // Add the IDs of extensions you want installed when the container is created.
    "extensions": [
        "ms-python.python",
        "teabyii.ayu",
        "jeff-hykin.better-dockerfile-syntax",
        "coenraads.bracket-pair-colorizer-2",
        "file-icons.file-icons",
        "emilast.logfilehighlighter",
        "zhuangtongfa.material-theme",
        "ibm.output-colorizer",
        "wayou.vscode-todo-highlight",
        "atishay-jain.all-autocomplete",
        "amazonwebservices.aws-toolkit-vscode",
        "hookyqr.beautify",
        "phplasma.csv-to-table",
        "alefragnani.bookmarks",
        "mrmlnc.vscode-duplicate",
        "tombonnike.vscode-status-bar-format-toggle",
        "donjayamanne.githistory",
        "codezombiech.gitignore",
        "eamodio.gitlens",
        "zainchen.json",
        "ritwickdey.liveserver",
        "yzhang.markdown-all-in-one",
        "pkief.markdown-checkbox",
        "shd101wyy.markdown-preview-enhanced",
        "ionutvmi.path-autocomplete",
        "esbenp.prettier-vscode",
        "diogonolasco.pyinit",
        "ms-python.vscode-pylance",
        "njpwerner.autodocstring",
        "kevinrose.vsc-python-indent",
        "mechatroner.rainbow-csv",
        "msrvida.vscode-sanddance",
        "rafamel.subtle-brackets",
        "formulahendry.terminal",
        "tyriar.terminal-tabs",
        "redhat.vscode-yaml"
    ],
    // Use 'forwardPorts' to make a list of ports inside the container available locally.
    "forwardPorts": [8888],
    // Use 'postCreateCommand' to run commands after the container is created.
    // "postCreateCommand": "pip3 install -r requirements.txt",
    // Comment out to connect as root instead.
    // "remoteUser": "myname",
    "shutdownAction": "none"
}
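On the port-forwarding part of the question: a docker-compose.yml is not required just for that. The "forwardPorts": [8888] entry above already makes the port available locally; the one assumption to verify is that JupyterLab binds to all interfaces inside the container rather than only to localhost, e.g.:

jupyter lab --ip=0.0.0.0 --port=8888 --no-browser --allow-root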
I have installed ElastAlert on CentOS 7.6, and while starting ElastAlert I receive the following error.
[root@e2e-27-36 elastalert]# python -m elastalert.elastalert --verbose --rule example_rules/example_frequency.yaml
Traceback (most recent call last):
File "/usr/lib64/python2.7/runpy.py", line 162, in _run_module_as_main
"__main__", fname, loader, pkg_name)
File "/usr/lib64/python2.7/runpy.py", line 72, in _run_code
exec code in run_globals
File "/root/elastalert/elastalert/elastalert.py", line 29, in <module>
from . import kibana
File "elastalert/kibana.py", line 4, in <module>
import urllib.error
ImportError: No module named error
How should I go about fixing this?
You can try to check if urllib3 is installed by running pip freeze, or try to reinstall it with pip install urllib3.
You may need to activate your virtual environment correctly, like this: source [env]/bin/activate.
Set up a conda environment:
conda create -n elastalert python=3.6 anaconda
Activate the conda env:
conda activate elastalert
Install all the requirements:
pip install -r requirements-dev.txt
pip install -r requirements.txt
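Once the environment is active, a quick check that the interpreter is Python 3 (where urllib.error exists as a module, unlike on Python 2):

python -c "import urllib.error; print('Python 3 urllib OK')"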
I found the fix on my own.
1. On Python 2.7 the issue still persists (urllib.error is a Python 3 module, so the import can never succeed there).
2. Install Python 3.6 to fix the issue:
yum install python3 python3-devel python3-urllib3
3. Run the ElastAlert command:
python3 -m elastalert.elastalert --config /root/elastalert/config.yaml --verbose --rule /root/elastalert/example_rules/example_frequency.yaml
4. If you receive an issue with the modules (ModuleNotFoundError: No module named 'pytz'),
5. install the modules as per the requirements file:
pip3 install -r /root/elastalert/requirements.txt
6. Run the command "python3 -m elastalert.elastalert --config /root/elastalert/config.yaml --verbose --rule /root/elastalert/example_rules/example_frequency.yaml" again; I got this error:
urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='elasticsearch.example.com', port=9200): Max retries exceeded with url: / (Caused by NewConnectionError(': Failed to establish a new connection: [Errno -2] Name or service not known',))
7. The above error is due to an invalid hostname in the config.yaml file. Edit config.yaml and change the hostname to the server's hostname in the es_host field.
Make sure you have an entry for the same hostname in the /etc/hosts file.
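For example, with a hypothetical IP and hostname (using the es_host key as in a standard ElastAlert config.yaml):

# config.yaml
es_host: es-node.example.com
es_port: 9200

# /etc/hosts
10.0.0.5  es-node.example.com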
8. OK, the issue got fixed; run the command "python3 -m elastalert.elastalert --config /root/elastalert/config.yaml --verbose --rule /root/elastalert/example_rules/example_frequency.yaml" again, and one more error:
pkg_resources.DistributionNotFound: The 'jira>=2.0.0'
9. We need to install jira using the below command:
pip3 install jira==2.0.0
10. Now run the command "python3 -m elastalert.elastalert --config /root/elastalert/config.yaml --verbose --rule /root/elastalert/example_rules/example_frequency.yaml" again, and, OMG, yet another error:
elasticsearch.exceptions.TransportError: TransportError(429, 'circuit_breaking_exception', '[parent] Data too large, data for [] would be [994793504/948.7mb], which is larger than the limit of [986061209/940.3mb], real usage: [994793056/948.7mb], new bytes reserved: [448/448b]')
11. You need to fix this by increasing the heap size in /etc/elasticsearch/jvm.options:
change -Xms1g to -Xms2g
and -Xmx1g to -Xmx2g
then restart the Elasticsearch service: service elasticsearch restart
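After the edit, the relevant lines in /etc/elasticsearch/jvm.options should read:

-Xms2g
-Xmx2g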
12. Everything is set; run the command "python3 -m elastalert.elastalert --config /root/elastalert/config.yaml --verbose --rule /root/elastalert/example_rules/example_frequency.yaml" again, and I ended up receiving another error:
ERROR:root:Error finding recent pending alerts: NotFoundError(404, 'index_not_found_exception', 'no such index [elastalert_status]', elastalert_status, index_or_alias) {'query': {'bool': {'must': {'query_string': {'query': '!exists:aggregate_id AND alert_sent:false'}}, 'filter': {'range': {'alert_time': {'from': '2019-12-04T19:45:09.635478Z', 'to': '2019-12-06T19:45:09.635529Z'}}}}}, 'sort': {'alert_time': {'order': 'asc'}}}
13. Fix the issue by running the below command:
elastalert-create-index
14. Finally, everything is done; run the below command:
python3 -m elastalert.elastalert --config /root/elastalert/config.yaml --verbose --rule /root/elastalert/example_rules/example_frequency.yaml
Then I cancelled the command and ran the same in the background:
python3 -m elastalert.elastalert --config /root/elastalert/config.yaml --verbose --rule /root/elastalert/example_rules/example_frequency.yaml &
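A hedged aside: a plain & still ties the process to the login shell. To keep ElastAlert running after logout, nohup (or, better, a proper systemd unit) is a safer sketch; the log path here is a hypothetical choice:

nohup python3 -m elastalert.elastalert --config /root/elastalert/config.yaml --verbose --rule /root/elastalert/example_rules/example_frequency.yaml > /var/log/elastalert.log 2>&1 &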
I have written a Dockerfile which adds my Python script inside the container:
ADD test_pclean.py /test_pclean.py
My directory structure is:
.
├── Dockerfile
├── README.md
├── pipeline.json
└── test_pclean.py
My JSON file, which acts as a configuration file for creating a pipeline in Pachyderm, is as follows:
{
  "pipeline": {
    "name": "mopng-beneficiary-v2"
  },
  "transform": {
    "cmd": ["python3", "/test_pclean.py"],
    "image": "avisrivastava254084/mopng-beneficiary-v2-image-7"
  },
  "input": {
    "atom": {
      "repo": "mopng_beneficiary_v2",
      "glob": "/*"
    }
  }
}
Even though I have copied the official documentation's example, I am facing an error:
python3: can't open file '/test_pclean.py': [Errno 2] No such file or directory
My Dockerfile is:
FROM debian:stretch
# Install opencv and matplotlib.
RUN apt-get update \
&& apt-get upgrade -y \
&& apt-get install -y unzip wget build-essential \
cmake git pkg-config libswscale-dev \
&& apt-get clean \
&& rm -rf /var/lib/apt
RUN apt update
RUN apt-get -y install python3-pip
RUN pip3 install matplotlib
RUN pip3 install pandas
ADD test_pclean.py /test_pclean.py
ENTRYPOINT [ "/bin/bash/" ]
Like some of the comments above suggest, it looks like your test_pclean.py file isn't in the Docker image. Here's what should fix it.
Make sure your test_pclean.py file is in your Docker image by having it included as part of the build process. Put this as the last step in your Dockerfile:
COPY test_pclean.py .
Ensure that your Pachyderm pipeline spec has the following for the cmd portion:
"cmd": ["python3", "./test_pclean.py"]
And this is more of a suggestion than a requirement: you'll make life easier for yourself if you use image tags as part of your docker build. If you default to the latest tag, any future iterations/builds of this step in your pipeline could have negative effects (new bugs in your code, etc.). Therefore the best practice is to use a particular version in your pipeline: mopng-beneficiary-v2-image-7:v1, then mopng-beneficiary-v2-image-7:v2, and so on. That way you can iterate on, say, version 3 and it won't affect the already running pipeline.
docker build -t avisrivastava254084/mopng-beneficiary-v2-image-7:v1 .
Then just update your pipeline spec to use avisrivastava254084/mopng-beneficiary-v2-image-7:v1.
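The updated transform block in pipeline.json would then look like this sketch (same fields as above; only the image tag and the script path from the cmd fix change):

"transform": {
  "cmd": ["python3", "./test_pclean.py"],
  "image": "avisrivastava254084/mopng-beneficiary-v2-image-7:v1"
}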
I was not changing the tags on my Docker images on each build, and hence Kubernetes was using the local image that it had (without new tags, it doesn't acknowledge any change). Once I started using a new tag with each build, Kubernetes started pulling the intended Docker image.