Looking up hostnames from Docker container - python

I'm currently trying to look up hostnames from inside a docker container using python. My example script is exceedingly simple, so here we go:
import socket
print(socket.gethostbyaddr('8.8.8.8'))
It works fine from my Windows machine, and it works fine from the Ubuntu instance under WSL2. However, inside the Docker container, no dice. The container has internet access and should be able to reach all the same servers my physical machine can; I've verified that by pinging those servers from inside the container.
However, simply looking up the hostname of an IP doesn't work. Any advice?
Edit: The docker container I am running is based on Debian Bullseye
Edit 2: Executing nslookup inside the container is not able to resolve the hostnames either. Do I need to specify a DNS server inside the container?
Edit 3: I got a lot closer to actually solving it. The default nameserver in my resolv.conf is unable to resolve those lookups.
However, checking which nameserver my Ubuntu/Windows machines use, and manually setting it inside docker, enabled me to at least use nslookup.
So echo "nameserver [ip of host nameserver]" > /etc/resolv.conf got me most of the way there.

Okay, I've found the proper solution.
Obviously the proper way to do it is to add --dns [ip of host nameserver] to the docker run arguments.
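As a full command (the nameserver IP and image name are placeholders; substitute the nameserver your host actually uses):

```shell
# Use the host's nameserver for DNS resolution inside the container.
docker run --dns 192.168.65.7 my-image python lookup.py
```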
Edit: Or change it inside the python script, so that the container can be started with a plain docker run and no extra arguments, for ease of use.
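A minimal sketch of that in-script variant, assuming the container runs as root (the default) and using a placeholder nameserver IP: the script rewrites /etc/resolv.conf before doing the lookup, since the stdlib socket module always uses the system resolver.

```python
import socket

def resolv_conf(nameserver):
    """Build resolv.conf contents pointing at the given nameserver."""
    return "nameserver %s\n" % nameserver

def set_nameserver_and_lookup(nameserver, addr, path="/etc/resolv.conf"):
    """Overwrite the container's resolver config (needs root), then do the
    reverse lookup. Run this inside the container, not on the host."""
    with open(path, "w") as f:
        f.write(resolv_conf(nameserver))
    return socket.gethostbyaddr(addr)

# Placeholder IP: substitute the nameserver your host actually uses.
# print(set_nameserver_and_lookup("192.168.65.7", "8.8.8.8"))
```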

Related

Running python code in a docker image on a server where I don't have root access

So, I have SSH access to a server with some GPUs where I can run some python code. I need to do that using a docker container; however, if I try to do anything with docker on the server I get permission denied, as I don't have root access (and I am not in the list of sudoers). What am I missing here?
Btw, I am totally new to Docker (and quite new to linux itself), so it might be that I am missing something fundamental.
I solved my problem. Turns out I simply had to ask the server administrator to add me to a group and everything worked.

How do I build a docker container behind a company proxy?

I am trying to build a simple python-based docker container. I am working at a corporation, behind a proxy, on Windows 10. Below is my docker file:
FROM python:3.7.9-alpine3.11
WORKDIR /
COPY requirements.txt .
RUN pip install --proxy=http://XXXXXXX:8080 -r requirements.txt
COPY . /
EXPOSE 5000
CMD ["python", "application.py"]
But it's giving me the following errors in cmd :
"failed to solve with frontend dockerfile.v0: failed to build LLB: failed to load cache key: failed to do request: Head https://registry-1.docker.io/v2/library/python/manifests/3.7.9-alpine3.11: proxyconnect tcp: EOF"
I've tried to figure out how to configure docker's proxy using many links, but they keep referring to a file "/etc/sysconfig/docker" which I cannot find anywhere under Windows 10; maybe I'm not looking in the right place.
Also I'm not sure this is only a proxy issue since I've seen people running into this issue without using a proxy.
I would highly appreciate anyone's help. Working at this corporation has already made me spend >10 hours on something that took me 10 minutes on my Mac... :(
Thank you
You're talking about the most basic of Docker functionality. Normally, it has to connect to Docker Hub on the internet to get base images. If you can't make this work through your proxy, you can either
preload your local cache with the necessary images
set up a Docker registry inside your firewall that contains all the images you'll need
Obviously, the easiest thing, probably by far, would be to figure out how to get Docker to connect to Docker Hub through your proxy.
In terms of getting Docker on Windows to work with your proxy, this might help: https://learn.microsoft.com/en-us/virtualization/windowscontainers/manage-docker/configure-docker-daemon
Here's what it says about configuring a proxy:
To set proxy information for docker search and docker pull, create a Windows environment variable with the name HTTP_PROXY or HTTPS_PROXY, and a value of the proxy information. This can be completed with PowerShell using a command similar to this:
In PowerShell:
[Environment]::SetEnvironmentVariable("HTTP_PROXY", "http://username:password@proxy:port/", [EnvironmentVariableTarget]::Machine)
Once the variable has been set, restart the Docker service.
In PowerShell:
Restart-Service docker
For more information, see Windows Configuration File on Docker.com.
I've also seen it mentioned that Docker for Windows allows you to set proxy parameters in its configuration GUI interface.
There is no need to pass proxy information in the Dockerfile.
There are predefined ARGs which can be used for this purpose.
HTTP_PROXY
HTTPS_PROXY
FTP_PROXY
You can pass the details when building the image
https://docs.docker.com/engine/reference/builder/#predefined-args
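For example (the proxy URL is a placeholder; substitute your corporate proxy):

```shell
# Pass the proxy only at build time via the predefined ARGs; nothing
# proxy-related needs to appear in the Dockerfile itself.
docker build \
  --build-arg HTTP_PROXY=http://proxy.example.com:8080 \
  --build-arg HTTPS_PROXY=http://proxy.example.com:8080 \
  -t myapp .
```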
I do not see any run-time dependency of your container on the internet, so running the container will work without an issue.

How to know the host IP when running docker-compose

I have a docker-compose file. Through my docker-compose, I'm running multiple services. For each service, I'm running different containers. Among these containers, I have one container which is responsible for getting the hardware and network info of the host machine. When I'm running the container in standalone mode, the container is able to provide me with the host IP. But unfortunately, when I'm running it along with other containers (more precisely, through the docker-compose file), I'm not able to get the host network information; instead I always get the bridge network information (i.e., the docker-compose internal network information). I tried to set network_mode: host on my service, but unfortunately, when I set it, it stops communicating with the other containers. Can anyone please suggest a way of getting the host network information without breaking the internal communication between the different service containers?
You could, perhaps, put this container in two networks: one with the host information, and the other for the 'internal' communication with the containers.
For example:
https://success.docker.com/article/multiple-docker-networks
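A minimal docker-compose sketch of attaching one service to two networks; the service, image, and network names are placeholders, and which driver the host-facing network needs depends on what "host information" you are after:

```yaml
services:
  inspector:
    image: my-inspector      # the container that gathers host info
    networks:
      - frontend             # host-facing network
      - backend              # internal network shared with the other services
  app:
    image: my-app
    networks:
      - backend

networks:
  frontend:
  backend:
```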
Without a dockerfile and a docker-compose file it's hard to test, but I think that if you have a container with network_mode: host, you have to reach the other containers through the host only, which means port forwarding. So if your container is in host mode, and another container is bound to localhost:8080, you should be able to reach localhost:8080 from the host-mode container.
Give me feedback about this, please!
Have fun!

How do I get the host address of a container using docker-py?

I am trying to figure out where to get the hostname of a running docker container that was started using docker-py.
Depending on the DOCKER_HOST value, the started docker container may be on a remote machine and not on localhost (the machine running the docker-py code).
I looked inside the container object and was not able to find anything of use, as 'HostIp': '0.0.0.0' is all it reports for the remote docker host.
I need an IP or DNS name of the remote machine.
I know that I could start parsing DOCKER_HOST myself and "guess" it, but that would not really be a reliable way of doing it, especially as there are multiple protocols involved: ssh:// and tcp:// at least.
I would guess there is an API-based way of getting this information.
PS: we can assume that the docker host has no firewall.
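For what it's worth, the "guess" fallback mentioned above can be sketched with the stdlib URL parser, which already understands the tcp:// and ssh:// forms; the example values are hypothetical:

```python
from urllib.parse import urlparse

def docker_host_ip(docker_host):
    """Extract the host part of a DOCKER_HOST value such as
    tcp://192.168.1.5:2376 or ssh://user@10.0.0.2 (hypothetical examples).
    Returns None for local socket URLs like unix:///var/run/docker.sock."""
    return urlparse(docker_host).hostname

print(docker_host_ip("tcp://192.168.1.5:2376"))  # 192.168.1.5
```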
For the moment I ended up filing an issue at https://github.com/docker/docker-py/issues/2254, as I failed to find that information with the library.
The best method is probably to use a website like wtfismyip.com.
You can use
curl wtfismyip.com
to print it in the terminal, and then extract the public IP from the output.

visdom.server inside Docker

I am running a pytorch training on CycleGan inside a Docker image.
I want to use visdom to show the progress of the training (also recommended from the CycleGan project).
I can start a visdom.server inside the docker container and access it from outside the container. But when I try to run the basic example on visdom, inside a bash session of the same container that is running the visdom.server, I get connection refused errors such as The requested URL could not be retrieved.
I think I need to configure the visdom.Visdom() in the example in some custom way to be able to send the data to the server.
Thankful for any help!
Notes
When I start visdom.server it says You can navigate to http://c4b7a2be26c4:8097, while all the examples mention localhost:8097.
I am trying to do this behind a proxy.
I realised that, in order to curl localhost:8097, I need to use curl --noproxy localhost localhost:8097. So I will have to do something similar inside visdom.
When setting http_proxy inside a docker container, you need to set no_proxy=localhost,127.0.0.1 as well in order to allow connections to localhost.
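A sketch of the equivalent on the visdom side, assuming the visdom.Visdom server and port arguments match your installed version; the environment variables must be set before the client opens its connection:

```python
import os

# Make sure requests to the local visdom server bypass the corporate proxy;
# both spellings are set because different HTTP stacks check different ones.
os.environ["no_proxy"] = "localhost,127.0.0.1"
os.environ["NO_PROXY"] = "localhost,127.0.0.1"

try:
    import visdom  # only present if visdom is installed in the container
    viz = visdom.Visdom(server="http://localhost", port=8097)
    viz.text("Hello from inside the container")
except ImportError:
    pass  # the env vars above are the essential part
```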
I had the same problem, and I found that if you use a docker container to connect to the server, you cannot use the same docker container to run your code.
