How to get the host IP when running docker-compose - python

I have a docker-compose file through which I'm running multiple services, each in its own container. Among these containers, I have one container that is responsible for getting the hardware and network info of the host machine. When I run this container in standalone mode, it is able to give me the host IP. But unfortunately, when I run it along with the other containers (more precisely, through the docker-compose file), I cannot get the host network information; instead I always get the bridge network information (i.e., the docker-compose internal network information). I tried to set network_mode: host on my service, but when I do, it stops communicating with the other containers. Can anyone please suggest a way of getting the host network information without breaking the internal communication between the different service containers?

You could, perhaps, have this container in two networks: one with the host information, and the other for the 'internal' communication with the other containers.
For example:
https://success.docker.com/article/multiple-docker-networks
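As a minimal sketch of that idea in compose syntax, with hypothetical service, image, and network names (hwinfo stands for the container that needs the host's network details; adjust the macvlan parent interface and subnet to your environment):

version: "3.8"

services:
  hwinfo:
    image: my-hwinfo-image        # hypothetical image name
    networks:
      - internal                  # ordinary bridge shared with the other services
      - hostnet                   # network that can see the host's LAN

  api:
    image: my-api-image           # hypothetical image name
    networks:
      - internal

networks:
  internal:
    driver: bridge
  hostnet:
    driver: macvlan               # one way to reach the host's LAN; adjust to your setup
    driver_opts:
      parent: eth0                # assumption: the host's NIC name
    ipam:
      config:
        - subnet: 192.168.1.0/24  # assumption: your LAN subnet

With a layout like this, the hwinfo service keeps talking to the other containers over the internal bridge while reading a LAN-facing address from its hostnet interface.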

Without a Dockerfile and a docker-compose file it's hard to test, but I think that if you have a container with network_mode: host, you have to reach the other containers through the host only, which means port forwarding. So if you have your container in host mode, and another one bound to localhost:8080, I think you should be able to reach localhost:8080 from the host-mode container.
Give me feedback about this, please!
Have fun!
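For what it's worth, a rough compose sketch of that port-forwarding idea, with illustrative service and image names: hwinfo uses the host network, while api publishes port 8080 on the host, so hwinfo can reach it at localhost:8080.

version: "3.8"

services:
  hwinfo:
    image: my-hwinfo-image   # hypothetical image name
    network_mode: host       # sees the host's interfaces directly

  api:
    image: my-api-image      # hypothetical image name
    ports:
      - "8080:8080"          # published on the host, reachable from hwinfo via localhost:8080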

Related

Looking up hostnames from Docker container

I'm currently trying to look up hostnames from inside a Docker container using Python. My example script is exceedingly simple, so here we go:
import socket
print(socket.gethostbyaddr('8.8.8.8'))
It works fine from my Windows machine, and it works fine from the Ubuntu instance using WSL2. However, inside the Docker container, no dice. I've verified that it has internet access and that it can reach all the same servers my physical machine can access; I verified that by pinging those servers from inside the Docker container.
However, simply looking up the hostname of an IP doesn't work. Any advice?
Edit: The docker container I am running is based on Debian Bullseye
Edit 2: Executing nslookup inside the container is not able to resolve the hostnames either. Do I need to specify a DNS server inside the container?
Edit 3: I got a lot closer to actually solving it. The default nameserver in my resolv.conf is unable to resolve those lookups.
However, checking which nameserver my Ubuntu/Windows machines use, and manually setting it to be used inside of Docker, enabled me to at least use nslookup.
So echo "nameserver [ip of host nameserver]" > /etc/resolv.conf got me mostly there.
Okay, I've found the proper solution.
Obviously the proper way to do it is to add --dns [ip of host nameserver] to the docker run arguments.
Edit: Or change it inside the Python script, so that no arguments other than run are needed to properly start the container, for ease of use.
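For reference, a hedged example of what that looks like on the command line and in compose; the resolver address 192.168.1.1 and the image name my-debian-image are just placeholders for your host's nameserver and your image:

# docker run: pass the host's resolver explicitly
docker run --dns 192.168.1.1 my-debian-image nslookup google.com

# docker-compose: the same idea via the dns key on the service
# services:
#   myservice:
#     image: my-debian-image
#     dns:
#       - 192.168.1.1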

How do I get the host address of a container using docker-py?

I am trying to figure out where to get the hostname of a running Docker container that was started using docker-py.
Based on the presence of the DOCKER_HOST= environment variable, the started Docker container may be on a remote machine and not on localhost (the machine running the docker-py code).
I looked inside the container object and was not able to find any information that would be of use, as 'HostIp': '0.0.0.0' is all it reports for the remote Docker host.
I need an IP or DNS name of the remote machine.
I know that I could start parsing DOCKER_HOST myself and "guess" from that, but this would not really be a reliable way of doing it, especially as there are multiple protocols involved: ssh:// and tcp:// at least.
I would guess there should be an API-based way of getting this information.
PS. We can assume that the Docker host does not have a firewall.
For the moment I ended up creating a bug on https://github.com/docker/docker-py/issues/2254 as I failed to find that information with the library.
The best method is probably to use a website like wtfismyip.com.
You can use
curl wtfismyip.com
to print it in the terminal, and then extract the public IP from the output.
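As a rough sketch (not an official docker-py API; treat both approaches as workarounds), you can either inspect the endpoint URL docker-py itself connects to, or ask a throwaway container on that host for its public IP as suggested above. The curlimages/curl image and the /text path on wtfismyip.com are assumptions here:

import urllib.parse
import docker

client = docker.from_env()

# 1) The URL docker-py connects to reflects DOCKER_HOST; for tcp:// or ssh://
#    endpoints the hostname part is the remote machine. For a local unix
#    socket this is not meaningful.
parsed = urllib.parse.urlparse(client.api.base_url)
print("Docker endpoint host:", parsed.hostname)

# 2) Run a short-lived container on that host and fetch its public IP.
#    Assumes the Docker host has outbound internet access.
output = client.containers.run("curlimages/curl", "-s wtfismyip.com/text", remove=True)
print("Public IP as seen from the Docker host:", output.decode().strip())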

Flask Docker image giving time-out error on Azure

I am trying to run the Flask mega-tutorial app on Azure off Docker. The Dockerfile is as given here; first I tried EXPOSE 5000 (as mentioned in this Dockerfile), but as that led to ERR_CONNECTION_TIMED_OUT, I then tried EXPOSE 80 as suggested here, but the error remained.
Both ports 5000 and 80 in the Dockerfile worked fine on a local server. Also, in each case, on Azure instanceView.state == "Running", but pinging the IP address does not return anything.
The Azure-Docker helloWorld image also runs fine, and my Azure CLI commands are exactly the same as in this example except for changing the container registry name etc. Apart from the CLI, I tried doing it from the Azure portal as well, with the same outcome.
Thanks
If there is no issue with your image and it works fine locally, then it is most likely a port issue, since you are using Azure Container Instances.
Azure Container Instances does not currently support port mapping like
with regular docker configuration
This means that if you expose port 5000 in the container, you should expose the same port in the Azure Container Instance group. For more details, see "IPs may not be accessible due to mismatched ports". Also, it may be better to use port 80. Hope this helps. If you have more questions, you can message me.
I tested with the application you give in your GitHub. Here is a screenshot of the result:
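For illustration, a hedged sketch of the matching-ports idea with the Azure CLI; the resource group, registry, and image names are placeholders, and the Dockerfile is assumed to EXPOSE 80 with the app listening on 0.0.0.0:80:

az container create \
  --resource-group my-rg \
  --name flask-app \
  --image myregistry.azurecr.io/flask-app:latest \
  --registry-login-server myregistry.azurecr.io \
  --registry-username myregistry \
  --registry-password <password> \
  --ports 80 \
  --dns-name-label flask-app-demo

# The container listens on 80 and ACI exposes 80, so
# http://flask-app-demo.<region>.azurecontainer.io should respond.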

Error accessing my webpage from network

I've just started learning network development using Flask. According to its official tutorial:
Externally Visible Server
If you run the server you will notice that the server is only
accessible from your own computer, not from any other in the network.
This is the default because in debugging mode a user of the
application can execute arbitrary Python code on your computer.
If you have the debugger disabled or trust the users on your network,
you can make the server publicly available simply by adding
--host=0.0.0.0 to the command line:
flask run --host=0.0.0.0
This tells your operating system to listen on all public IPs.
However, when I try to access 0.0.0.0:5000 on another device, I get an error: ERR_CONNECTION_REFUSED. In fact, I think this behavior is reasonable, since people all around the world could be using 0.0.0.0:5000 for different testing purposes, but isn't the tutorial implying that adding --host=0.0.0.0 makes my webpage "accessible not only from your own computer, but also from any other in the network"?
So, my question is:
What does adding --host=0.0.0.0 do?
How can I access my webpage on device B while the server is running on device A?
You don't access the Flask server on another computer by going to 0.0.0.0:5000. Instead, you need to put in the IP address of the computer that it is running on.
For example, if you are developing on a computer that has IP address 10.10.0.1, you can run the server like so:
flask run --host=0.0.0.0 --port=5000
This will start the server (on 10.10.0.1:5000) and listen for any connections from anywhere. Now your other device (say, on 10.10.0.2) can access that server by going to http://10.10.0.1:5000 in the browser.
If you don't have --host=0.0.0.0, the server on 10.10.0.1 will only listen for connections from itself (localhost). By adding that parameter, you are telling it to listen for connections external to itself.
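The same applies if you start the app from Python instead of the flask command; a minimal sketch (the app itself is just a placeholder):

from flask import Flask

app = Flask(__name__)

@app.route("/")
def index():
    return "Hello from device A"

if __name__ == "__main__":
    # 0.0.0.0 means "listen on all of this machine's interfaces";
    # other devices still connect via the machine's real IP, e.g. http://10.10.0.1:5000
    app.run(host="0.0.0.0", port=5000)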

How to host flower on a remote machine that can also be accessed over the internet

I am trying to run Flower on a remote Ubuntu server. However, I am unsure of what address/port to run it on so that other people can log in (I have basic auth set up) and check their Celery workers. The Ubuntu server is actually an EC2 instance, so am I supposed to use its private or public IP address? Do I just open any standard port? In their docs, they use an example setup with http://localhost:5555, but I do not think that will work if Flower is running on a remote server. Any advice?
Flower runs on 5555 by default; which port are you running it on? The private IP is only available if the requests are coming from INSIDE your Amazon network, so you probably want the public one.
So, if my guesses are right, you want to create an AWS security group rule allowing traffic from "anywhere" to port 5555, apply that to your instance, and then access that instance using its public IP, like
http://50.31.10.99:5555
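Roughly, that could look like the following; the app name, credentials, security group ID, and IP are placeholders, and the exact Flower flags can vary a bit between versions:

# on the EC2 instance: bind Flower to all interfaces on port 5555 with basic auth
celery -A myproj flower --address=0.0.0.0 --port=5555 --basic_auth=user:password

# from your workstation: open port 5555 in the instance's security group
aws ec2 authorize-security-group-ingress --group-id sg-0123456789abcdef0 \
  --protocol tcp --port 5555 --cidr 0.0.0.0/0

# then browse to http://<public-ip>:5555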
