I am trying to use the python image to build and test a very simple Python project, but when I give the Docker image name in the Jenkinsfile, it fails to pull the image.
[drone-python_test-jenk-NVHH77CLU5PUMV6UVRK62EARJB3DUVF5FWILYVRZDXOE54RACN2Q] Running shell script
+ docker pull python
Using default tag: latest
Warning: failed to get default registry endpoint from daemon (Cannot connect to the Docker daemon. Is the docker daemon running on this host?). Using system default: https://index.docker.io/v1/
Cannot connect to the Docker daemon. Is the docker daemon running on this host?
script returned exit code 1
Jenkinsfile
pipeline {
    agent {
        docker {
            image 'python'
        }
    }
    stages {
        stage('Build') {
            steps {
                sh '''virtualenv --no-site-packages .env
                '''
                sh '.env/bin/pip install -r dev-requirements.txt'
            }
        }
        stage('Test') {
            steps {
                sh 'flake8 setup.py drone tests'
            }
        }
        stage('test2') {
            steps {
                sh 'nosetests --with-coverage --cover-package drone -v'
            }
        }
    }
}
Edit:
I tried mounting the Docker socket into the container using a docker-compose file:
version: '2'
services:
  jenkins:
    image: jenkinsci/blueocean
    ports:
      - 8888:8080
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
    restart: always
I also added the user jenkins to the docker group (is this the correct user?).
But it still did not work. Drone.io also used Docker images to set up the environment, and it did not have this issue.
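A rough way to sanity-check the socket mount from the host (assuming the container is reachable by the name jenkins; substitute the compose-generated container name otherwise):
# confirm the socket was actually mounted into the container
docker exec jenkins ls -l /var/run/docker.sock
# confirm the client inside the container can reach the host daemon
docker exec jenkins docker version
# if this fails with "permission denied" rather than "cannot connect",
# the jenkins user lacks rights on the socket; a quick but insecure workaround:
docker exec -u root jenkins chmod 666 /var/run/docker.sock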
I have encountered the same problem. The Docker daemon inside the container does not seem to be started. I started it manually with docker exec -u root jenkins /bin/sh -c 'dockerd >/var/log/docker.log 2>&1 &' and it seems to work.
Now I am trying to build a Dockerfile that extends jenkins/blueocean:latest with a modified entry-point script that starts the Docker daemon automatically.
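Something along these lines for the entry-point wrapper (a sketch only; /usr/local/bin/jenkins.sh is the entry point of the official Jenkins images, but verify the path in your base image, and note that running dockerd inside a container requires starting it with --privileged):
#!/bin/sh
# start the Docker daemon in the background, logging to a file,
# then hand control to the original Jenkins entry point
dockerd >/var/log/docker.log 2>&1 &
exec /usr/local/bin/jenkins.sh "$@"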
I've seen other posts with similar questions, but I can't find a solution for my particular scenario. I hope someone can help.
Here is the thing: I have a Python script that listens for UDP traffic and stores the messages in a log file. I've put this script in a Docker image so I can run it in a container.
I need to map the generated logs (the Python script's logs) FROM inside the container TO a folder outside the container, on the host machine (a Windows host).
If I use docker-compose, everything works fine, but I can't find a way to make it work using a docker run ... command.
Here is my docker-compose.yml file
version: '3.3'
services:
  '20001':
    image: udplistener
    container_name: '20001'
    ports:
      - 20001:20001/udp
    environment:
      - UDP_PORT=20001
      - BUFFER_SIZE=1024
    volumes:
      - ./UDPLogs/20001:/usr/src/app/logs
And here is the corresponding docker run command:
docker run -d --name 20001 -e UDP_PORT=20001 -e BUFFER_SIZE=1024 -v "/C/Users/kgonzale/Desktop/UDPLogs/20001":/usr/src/app/logs -p 20001:20001 udplistener
I think the problem may be related to the way I'm creating the volumes. I know the two are different (docker-compose uses a relative path; the docker command uses an absolute path), but I can't find a way to use relative paths with the docker run command.
To summarize: the Python script creates logs inside the container, and I want to map those logs outside the container. I can see the logs on the host machine if I use docker-compose up -d, but I need the corresponding docker run ... command.
Container: python:3.7-alpine
Host: Windows 10
Thanks in advance for your help!
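For what it's worth, docker run only takes host paths as given, but the absolute path can be built from the current directory, which mirrors the compose file's ./ prefix. A sketch, run in PowerShell from the folder containing docker-compose.yml (${PWD} expands to the current directory):
docker run -d --name 20001 -e UDP_PORT=20001 -e BUFFER_SIZE=1024 -v "${PWD}\UDPLogs\20001:/usr/src/app/logs" -p 20001:20001/udp udplistener
Note one more difference: the compose file publishes the port as UDP (20001:20001/udp), while the original docker run command publishes TCP by default, so /udp is added here.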
I have a web project hosted on a server. The frontend is Angular, the backend is Flask, and the database is MongoDB; all run as Docker containers linked to each other.
I am doing the following steps.
1. Backend Hosting
The backend is created as a Docker container at 10.31.61.52.
Use docker build . -t backend to create the Docker image, and after it is built, use docker run -itd -p 5000:5000 --link mongodb:mongodb --name skybridge_backend backend to run the container. Your backend is good to go!
2. Frontend Hosting
The frontend is created as a Docker container at 10.31.61.52.
Use docker build . -t frontend to create the Docker image, and after it is built, use docker run -itd -p 80:9000 --link skybridge_backend:skybridge_backend --name skybridge_frontend frontend to run the container.
Instead of 80, I want to use 8080 as the host port, so that I can access the URL like this: http://10.31.61.52:8080/login
You specify the port redirection from the host machine to the container using docker run -p <host port>:<container port>. Therefore changing the port from 80 to 8080 in the command to run the frontend container like this should do the trick:
docker run -itd -p 8080:9000 --link skybridge_backend:skybridge_backend --name skybridge_frontend frontend
When I run the command docker run -itd -p 80:9000 --link skybridge_backend:skybridge_backend --name skybridge_frontend frontend, the linking happens, but internally it refers to port 80, the URL is http://10.31.61.52/login, and the page displays.
But I want to change it to port 8080 so that my URL is http://10.31.61.52:8080/login. When I try to access this URL, the page does not open.
http://10.31.61.52/login -- login screen comes up
http://10.31.61.52:8080/login -- login screen does not come up
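One thing that may explain this (a guess based on the commands shown): a container's port mapping is fixed when the container is created, and the old container still owns the name, so it has to be removed before the -p 8080:9000 mapping can take effect:
# remove the container that is still publishing on port 80
docker rm -f skybridge_frontend
# re-create it with the new host port
docker run -itd -p 8080:9000 --link skybridge_backend:skybridge_backend --name skybridge_frontend frontend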
I have a Docker image on Docker Hub that I want to add as an agent in my Jenkins pipeline script. As part of the image, I perform a git clone to fetch a repository from GitHub that has multiple Python scripts corresponding to multiple stages in my Jenkins pipeline. I tried searching everywhere, but I'm not able to find relevant information about how to access the files inside a Docker container in a Jenkins pipeline.
I'm running Jenkins on a VM and it has Docker installed. The pipeline performs a build in a Docker container. Since there are many steps involved in every single stage of the pipeline, I tried to use Python as much as possible.
This is what my Dockerfile looks like; the image builds successfully and I'm able to host it on Docker Hub. When I run the container, I can see the jenkins_pipeline_scripts directory, which contains all the necessary Python scripts for the pipeline stages.
FROM ros:melodic-ros-core-stretch
RUN apt-get update && apt-get -y install python-pip
RUN git clone <private-repo with token>
This is what my current Jenkins pipeline script looks like.
pipeline {
    agent {
        docker {
            image '<image name>'
            registryUrl 'https://registry.hub.docker.com'
            registryCredentialsId 'docker-credentials'
            args '--network host -u root:root'
        }
    }
    stages {
        stage('Test') {
            steps {
                sh 'python jenkins_pipeline_scripts/scripts/test.py'
            }
        }
    }
}
This is the error I'm getting when I execute the job.
$ docker top 05587cd75db5c4282b86b2f1ded2c43a0f4eae161d6c7d7c03d065b0d45e1 -eo pid,comm
[Pipeline] {
[Pipeline] stage
[Pipeline] { (Test)
[Pipeline] sh
+ python jenkins_pipeline_scripts/scripts/test.py
python: can't open file 'jenkins_pipeline_scripts/scripts/test.py': [Errno 2] No such file or directory
When Jenkins Pipeline launches the agent container, it changes the container's WORKDIR via the -w option and mounts the Jenkins job's workspace folder via the -v option.
As a result of both options, the Jenkins job's workspace folder becomes your container's WORKDIR.
The following is my Jenkins job's console output:
docker run -t -d -u 29001:100
-w /bld/workspace/test/agent-poc
-v /bld/workspace/test/agent-poc:/bld/workspace/test/agent-poc:rw,z
-v /bld/workspace/test/agent-poc#tmp:/bld/workspace/test/agent-poc#tmp:rw,z
-e ******** -e ******** -e ******** -e ********
docker.hub.com/busybox cat
You cloned the code when building the image, and it is not inside the WORKDIR, hence the 'No such file or directory' error.
There are two approaches to fix your issue.
1) cd into your code folder first; you need to know its path in the image.
stage('Test') {
    steps {
        sh '''
        cd <your code folder in container>
        python jenkins_pipeline_scripts/scripts/test.py
        '''
    }
}
2) Move the git clone of the code repo from the Dockerfile into a pipeline stage.
As I explained at the beginning, your job's workspace becomes the container's WORKDIR, so you can clone your code into the Jenkins job workspace via a pipeline step; then you no longer need to cd into the code folder in the container, as the sketch below shows.
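A sketch of that clone step, run as shell commands inside the stage (the repository URL is a placeholder for your private repo):
# inside a pipeline sh step; the job workspace is already the working directory
git clone https://github.com/<user>/jenkins_pipeline_scripts.git
python jenkins_pipeline_scripts/scripts/test.py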
I'm trying to get remote debugging working for my Python Flask API. I'm able to docker-compose up and have Postman successfully call the running container, but when I try to attach the debugger, it never connects. Below are my yml, Dockerfile and VS Code launch settings; the error I get is:
There was an error in starting the debug server. Error = {"code":"ECONNREFUSED","errno":"ECONNREFUSED","syscall":"connect","address":"127.0.0.1","port":5050}
version: '2'
services:
  website:
    build: .
    command: >
      python ./nomz/app.py
    environment:
      PYTHONUNBUFFERED: 'true'
    volumes:
      - '.:/nomz'
    ports:
      - '5000:5000'
      - '5050'
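One detail worth checking in the yml above: the short form - '5050' publishes container port 5050 on a random ephemeral host port, not on host port 5050, so an attach to localhost:5050 can be refused. The actual binding can be checked like this:
# prints the host address:port that container port 5050 was published on
docker-compose port website 5050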
Dockerfile
FROM python:3.6-slim
ENV INSTALL_PATH /nomz
RUN mkdir -p $INSTALL_PATH
WORKDIR $INSTALL_PATH
COPY requirements.txt requirements.txt
RUN pip3 install -r requirements.txt
COPY . .
EXPOSE 5000 5050
VSCode Launch Settings
{
    "name": "Python: Attach",
    "type": "python",
    "request": "attach",
    "localRoot": "${workspaceFolder}/nomz/app.py",
    "remoteRoot": "/nomz/",
    "port": 5050,
    "host": "localhost"
}
I finally got it working with remote debugging. I had to pip3 install ptvsd==3.0.0 on my local machine, and make sure that the requirements.txt for my Docker container had the same version. (Note: the latest version, 3.2.1, didn't work.)
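In other words, both ends have to be pinned to the same version (a sketch; requirements.txt is the one the Dockerfile above installs into the image):
# on the local machine where VS Code runs
pip3 install ptvsd==3.0.0
# and pin the identical version in requirements.txt for the image:
# ptvsd==3.0.0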
@BrettCannon had the right link for a good tutorial: https://code.visualstudio.com/docs/python/debugging#_remote-debugging
What I had to do was add some code to the app.py of the Flask app. I originally got an "address already in use" error when starting the container, so I added the socket code; after the first successful attach of the debugger I didn't seem to need it anymore (strange, I know, but that's why I left it in, in case someone else gets that error).
try:
    import ptvsd
    # import socket
    # sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    # sock.close()
    # listen on all interfaces so the debugger can attach from outside the container
    ptvsd.enable_attach(secret=None, address=('0.0.0.0', 5050))
    ptvsd.wait_for_attach()
except Exception as ex:
    print('Not working:', ex)
I also took the debug kwarg off the app.run() in app.py for the Flask app.
This all gave me the ability to connect the debugger, but the breakpoints were "Unverified", so the last thing that had to happen was fixing the path to app.py in the remoteRoot of launch.json. I will say that I created a small test API to get this working, and it only needed the first level of pathing (i.e. /app and not /app/app/app.py). Here is a GitHub repo of the test API I made: https://github.com/tomParty/docker_python. So if the debugger is attaching but your breakpoints are unverified, play around with the path of the remoteRoot:
"remoteRoot": "/nomz/nomz/app.py"
I've just registered for this question. It's about whether it's possible to remotely debug Python code in a Docker container with VS Code.
I have a fully configured Docker container here. I got a little bit of help with it, and I'm pretty new to Docker anyway. Odoo v10 runs in it, but I can't get remote debugging in VS Code to work. I have tried this explanation, but I don't really get it.
Is it even possible? And if yes, how can I get it to work?
I'm running Kubuntu 16.04 with VS Code 1.6.1 and the Python extension from Don Jayamanne.
Ah yeah, and I hope I'm in the right place with this question and it's not against any rules.
UPDATE:
I just tried Elton Stoneman's approach. With it, I'm getting this error:
There was an error in starting the debug server.
Error = {"code":"ECONNREFUSED","errno":"ECONNREFUSED","syscall":"connect",
"address":"172.21.0.4","port":3000}
My Dockerfile looks like this:
FROM **cut_out**
USER root
# debug/dev settings
RUN pip install \
watchdog
COPY workspace/pysrc /pysrc
RUN apt-get update \
&& apt-get install -y --no-install-recommends \
build-essential \
python-dev \
&& /usr/bin/python /pysrc/setup_cython.py build_ext --inplace \
&& rm -rf /var/lib/apt/lists/*
EXPOSE 3000
USER odoo
The pysrc in my Dockerfile is there because this was intended for working with PyDev (Eclipse) before.
This is the run command I've used:
docker-compose run -d -p 3000:3000 odoo
And this is the important part of my launch.json:
{
    "name": "Attach (Remote Debug)",
    "type": "python",
    "request": "attach",
    "localRoot": "${workspaceRoot}",
    "remoteRoot": "${workspaceRoot}",
    "port": 3000,
    "secret": "my_secret",
    "host": "172.21.0.4"
}
I hope that's enough information for now.
UPDATE 2:
Alright, I found the solution. I totally misunderstood how Docker works and tried it completely wrong. I already had a completely configured docker-compose setup, so all I needed to do was adapt my VS Code configs to the docker-compose.yml. This means I just had to change the launch.json to port 8069 (the default Odoo port) and use docker-compose up; then debugging works in VS Code.
Unfortunately the use of ptvsd kind of breaks my Odoo environment, but at least I'm able to debug now. Thanks!
Yes, this is possible - when the Python app is running in a Docker container, you can treat it like a remote machine.
In your Docker image, you'll need to make the remote debugging port available (e.g. EXPOSE 3000 in the Dockerfile), include the ptvsd setup in your Python app, and then publish the port when you run the container, something like:
docker run -d -p 3000:3000 my-image
Then use docker inspect to get the IP address of the running container, and that's what you use for the host in the launch file.
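Something like this for the inspect step (a Go-template format string that prints just the container's IP; substitute the real container name):
# prints the IP address to use as "host" in the launch file
docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' <container-name>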
This works with VS Code 1.45.0 and later. For reference files, see https://gist.github.com/kerbrose/e646aaf9daece42b46091e2ca0eb55d0
1. Edit your docker.dev file and insert RUN pip3 install -U debugpy. This installs the Python package debugpy in place of the deprecated ptvsd, because your local VS Code will communicate with the remote debugpy server in your Docker image.
2. Start your containers. Note that you will now be starting the server through the debugpy package you just installed. It could be done with a shell command like the following:
docker-compose run --rm -p 8888:3001 -p 8879:8069 {DOCKER IMAGE[:TAG|#DIGEST]} /usr/bin/python3 -m debugpy --listen 0.0.0.0:3001 /usr/bin/odoo --db_user=odoo --db_host=db --db_password=odoo
3. Prepare your launch file as follows. Please note that port is the Odoo server's port, while debugServer is the port for the debug server.
{
    "name": "Odoo: Attach",
    "type": "python",
    "request": "attach",
    "port": 8879,
    "debugServer": 8888,
    "host": "localhost",
    "pathMappings": [
        {
            "localRoot": "${workspaceFolder}",
            "remoteRoot": "/mnt/extra-addons"
        }
    ],
    "logToFile": true
}
If you want a nice step-by-step walkthrough of how to attach a remote debugger to VS Code in a container, check out the YouTube video "Debugging Python in Docker using VSCode".
He also talks about how to configure the Dockerfile so that the container does not include the debugger when run in production mode.