I'm trying to dockerize a Python/Django application. When Docker runs the build script in a web container, it is unable to retrieve the images from the application.
It shows an error:
and the Google Chrome console shows:
But what is interesting is that I don't get any errors when I do the same on my local machine.
Everything goes as expected.
Docker version 18.03.1-ce, build 9ee9f40
Docker for Windows.
I've written a docker image for a python project.
When I start a container based on this image, one of the instructions that gets executed is the following:
os.system(f"start \"\" "path/to/pdf_file.pdf")
which simply opens the specified PDF file with the default PDF opener.
This instruction works perfectly when run locally, but if I run it from a Docker container, the file doesn't get opened on my local machine.
This makes sense: it probably gets opened inside the Docker container, and I can't see it because I have no way of observing what's going on in the container (except by looking at the container logs).
My goal is to have the Docker container TELL my local machine to execute that command, so that the PDF file gets opened on my local machine, not in the Docker container.
How could I achieve this?
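One possible approach (a sketch, not from the original question): have the container drop the PDF into a bind-mounted directory and run a small watcher script on the host that opens anything appearing there with the default viewer. The /tmp/pdf_requests path and the -v /tmp/pdf_requests:/requests mount are assumptions for illustration.

# Hypothetical host-side watcher. The container writes PDFs into a directory
# that is bind-mounted from the host, e.g.
#   docker run -v /tmp/pdf_requests:/requests ...
# and this script, running on the HOST, opens each new PDF with the default viewer.
import os
import subprocess
import sys
import time

WATCH_DIR = "/tmp/pdf_requests"   # assumed host side of the bind mount

os.makedirs(WATCH_DIR, exist_ok=True)
seen = set()

while True:
    for name in os.listdir(WATCH_DIR):
        path = os.path.join(WATCH_DIR, name)
        if path in seen or not name.lower().endswith(".pdf"):
            continue
        seen.add(path)
        if sys.platform == "win32":
            os.startfile(path)                  # Windows: default opener (what "start" does)
        elif sys.platform == "darwin":
            subprocess.run(["open", path])      # macOS
        else:
            subprocess.run(["xdg-open", path])  # Linux desktop
    time.sleep(1)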
I would like to run Selenium integration tests on a Development server.
Our app is a Django app, and it is deployed via Jenkins and Docker.
We know how to write and run Selenium tests locally.
We know how to run tests with Jenkins and present Cobertura and JUnit reports.
The problem we have is:
In order to run Selenium tests (as opposed to unit tests), the server needs to be running,
so we can't run the tests before we build the Docker image.
How do we run tests inside Docker images? (This could potentially be achieved via a script called inside the Dockerfile...)
But even more important: how can Jenkins get the reports from inside the Docker containers? (See the sketch after the deployment structure below.)
What are the best practices here?
The Deployment structure:
Jenkins gets the code from Git
Jenkins builds the Docker image
pushes the image to a private Docker registry
logs in to the remote server
pulls the image from the registry on the remote server
runs the image on the remote server
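One way to get the reports out (a sketch, not from the original post): run the test command in a container, then copy the report directory back into the Jenkins workspace with the Docker SDK for Python. The image name, test command, and /app/reports path are assumptions for illustration.

import io
import tarfile
import docker

client = docker.from_env()

# Assumed image name, test command and in-container report path -- adjust to your setup.
container = client.containers.run(
    "registry.example.com/myapp:latest",
    command="python manage.py test --noinput",
    detach=True,
)
container.wait()                              # block until the tests finish
print(container.logs().decode(), end="")

# get_archive returns a tar stream of the requested path inside the container
stream, _ = container.get_archive("/app/reports")
archive = io.BytesIO(b"".join(stream))
with tarfile.open(fileobj=archive) as tar:
    tar.extractall(".")                       # creates ./reports/ in the Jenkins workspace

container.remove()

Jenkins can then pick up the extracted XML files from the workspace with its usual JUnit/Cobertura publishers.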
I have been having trouble executing my Python script on a remote server. When I build the Docker image on my local machine (macOS Mojave), the image runs my script fine, but the same image built on the remote server has issues running the script. I was under the impression that a Docker image would produce the same results regardless of its host OS. Correct me if I'm wrong, and tell me what I can do to get the same results on the remote server VM. What exactly is the underlying problem causing this issue? Should I request a new remote VM with certain specifications?
When attempting to build an image from a Dockerfile in PyCharm, the following error is shown:
Deploying 'test1 Dockerfile: Dockerfile'...
Building image...
Failed to deploy 'test1 Dockerfile: Dockerfile': com.github.dockerjava.api.exception.DockerClientException: Failed to parse docker configuration file
caused by: java.io.IOException: Failed to parse docker config.json
Building outside of PyCharm from the command line works fine.
I assume I'm meant to set the location of the .docker/config.json file, but I don't see where in PyCharm to do that.
Versions
PyCharm Community 2018.2.1
PyCharm Docker plugin 182.3684.90
Docker version 18.03.1-ce, build 9ee9f40
Ubuntu 16.04
Something could have tampered with ~/.docker/config.json. Try deleting it and restarting the Docker service afterwards.
You might want to docker login again in case you were logged in to some private registries.
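For reference, the same steps as a script (a sketch assuming Ubuntu with systemd, matching the versions listed above; the service restart needs sudo):

import os
import subprocess

config = os.path.expanduser("~/.docker/config.json")
if os.path.exists(config):
    os.remove(config)                                  # drop the (possibly corrupted) config

subprocess.run(["sudo", "systemctl", "restart", "docker"], check=True)
subprocess.run(["docker", "login"])                    # re-authenticate to private registries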
I am creating Python code that will be built into a docker image.
My intent is for the Docker image to be able to run other Docker images on the host.
Let's call these containers the "daemon" and the "workers," respectively.
I've proven that this concept works by running "daemon" using
-v /var/run/docker.sock:/var/run/docker.sock
I'd like to be able to write the code so that it will work anywhere that there exists a /var/run/docker.sock file.
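For context, the "daemon" side looks roughly like this (a sketch using the same docker-py API quoted below; the worker image name and command are hypothetical):

from docker import Client

# Works because the host socket is mounted into the container with
#   -v /var/run/docker.sock:/var/run/docker.sock
cli = Client(base_url='unix://var/run/docker.sock')

# Start a hypothetical "worker" image as a sibling container on the host.
container = cli.create_container(image='worker:latest', command='python run_job.py')
cli.start(container=container['Id'])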
Since I'm working on an OS X machine, I have to use the Docker Quickstart Terminal. As such, there is no docker.sock file on my system.
The docker-py documentation shows this as the way to capture the docker client:
from docker import Client
cli = Client(base_url='unix://var/run/docker.sock')
Is there some hackery I can do on my system so that I can instantiate the docker client that way?
Can I create the docker.sock file on my file system and have it sym-linked to the VM docker host?
I really don't want to have to build my Docker image every time I want to test a single-line code change... help!!!
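One possible workaround (a sketch, not from the original post): instead of hard-coding the Unix socket, build the client from the DOCKER_HOST / DOCKER_CERT_PATH / DOCKER_TLS_VERIFY environment variables that the Docker Quickstart (docker-machine) shell exports, using the helper that ships with the same docker-py version as the snippet above:

from docker import Client
from docker.utils import kwargs_from_env

# assert_hostname=False relaxes the TLS hostname check against the
# boot2docker VM's IP address (commonly 192.168.99.100).
kwargs = kwargs_from_env(assert_hostname=False)
cli = Client(**kwargs)

print(cli.version())   # quick sanity check that the client can reach the VM

On a Linux host where those variables are unset, kwargs_from_env returns no overrides and the client falls back to the default unix socket, so the same code should work in both environments.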