Temporary failure in name resolution [Errno -3] with Docker - python

I'm following the docker tutorial and am on the part where I have to build the app using:
docker build -t friendlyhello .
It reaches up to step 4, where after a pause I get this error:
Step 4/7 : RUN pip install -r requirements.txt
---> Running in 7f4635a7510a
Collecting Flask (from -r requirements.txt (line 1))
Retrying (Retry(total=4, connect=None, read=None, redirect=None)) after
connection broken by
'NewConnectionError('<pip._vendor.requests.packages.urllib3.connection.VerifiedHTTPSConnection
object at 0x7fe3984d9b10>: Failed to establish a new connection:
[Errno -3] Temporary failure in name resolution',)': /simple/flask/
I'm not quite sure what this error means and how I can go about solving it.
Thanks for your help!

I just did sudo service docker restart and it worked afterwards. Definitely worth a shot before jumping in to modify your configuration.

I got the same problem with Ubuntu 16.04 and Docker version 17.09.0-ce.
I don't think disabling dnsmasq is the right solution.
Here is how I solved it:
For Ubuntu
Edit /etc/default/docker and add your DNS server to the following line:
Example
DOCKER_OPTS="--dns 8.8.8.8 --dns 10.252.252.252"
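Note that changes to DOCKER_OPTS only take effect after the daemon restarts, and on systemd-based installs /etc/default/docker may be ignored unless the unit file sources it; a minimal follow-up, assuming the Ubuntu service wrapper:
sudo service docker restart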
Reference:
Network calls fail during image build on corporate network

bkasap's answer changes a system-wide feature, which I would say is excessive, especially because Docker has its own options for this. The newer way to do it is
$ sudo vi /etc/docker/daemon.json
and add the following content:
{
"dns": ["8.8.8.8", "8.8.4.4"]
}
Don't forget to
sudo service docker restart
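To verify the daemon.json change took effect, a quick sanity check from inside a container (assumes the busybox image is cached or can be pulled):
docker run --rm busybox nslookup pypi.org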

It's silly, but I had a VPN connected when I got this error.
After disconnecting the VPN, PIP started working again.

On Fedora 32 it was a problem with the firewall. The following commands resolved the issue:
$ firewall-cmd --permanent --zone=trusted --add-interface=docker0
$ firewall-cmd --reload
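To confirm the interface was actually added to the zone, a quick check with the same tool:
$ firewall-cmd --zone=trusted --list-interfaces   # should now list docker0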

This post worked for me too!
Solved by disabling dnsmasq:
sudo vim /etc/NetworkManager/NetworkManager.conf
comment out dns=dnsmasq -> #dns=dnsmasq
sudo service network-manager restart (or reboot VM in this case)
from: https://github.com/moby/moby/issues/26330
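For reference, the relevant stanza of /etc/NetworkManager/NetworkManager.conf ends up looking roughly like this after the edit (the plugins line is an example; other keys in your file will differ):
[main]
plugins=ifupdown,keyfile
#dns=dnsmasq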

Had this just now, on my Ubuntu 20.04. Randomly, it just stopped working!
Tried:
sudo service network-manager restart
Did not work. Then I just did:
sudo systemctl restart docker
and the issue was resolved!

This error means your Docker container is unable to access your network.
Beginning with systemd version 220, the forwarding setting for a given network (net.ipv4.conf.<interface>.forwarding) defaults to off. This setting prevents IP forwarding. It also conflicts with Docker's behavior of enabling the net.ipv4.conf.all.forwarding setting within containers.
If your container needs to resolve hosts which are internal to your network, the public nameservers will not be adequate. You have two choices:
You can specify a DNS server for Docker to use, or
You can disable dnsmasq in NetworkManager. If you do this, NetworkManager will add your true DNS nameserver to /etc/resolv.conf, but you will lose the possible benefits of dnsmasq.
You only need to use one of these methods. You can read about how to perform these steps here.
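If you only need the internal resolver for a one-off container rather than daemon-wide, docker run also accepts a --dns flag (the address and hostname below are placeholders for your own):
docker run --rm --dns 10.0.0.2 busybox nslookup internal.example.com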

I had the same issue with an Ubuntu 16.04.1 machine and docker-ce 17.
It was fixed by disabling dnsmasq in NetworkManager:
sudo nano /etc/NetworkManager/NetworkManager.conf
Comment out the dns=dnsmasq line -> #dns=dnsmasq
Press Ctrl+O and Enter to save, then Ctrl+X to exit.
Restart the network service by running the command below:
sudo service network-manager restart
After this, if you run the docker build command, everything will work fine.

I had this problem on Windows 10 Pro and I solved it by right clicking on the docker icon in the tray and choosing "Restart...". It took a few mins and then the network was running fine again.

For me, rebooting the host machine resolved the issue.

Docker build: "Temporary failure in name resolution"
I got the "temporary failure in name resolution" error too. My solution was to specify the network on the docker build command:
s001# docker network create example_net
s001# docker build --network example_net -t example_image example_image
^^^^^^^^^^^^^^^^^^^
I also configured the DNS in the Docker config on my development notebook:
s001# nano /etc/docker/daemon.json
{
"dns": ["8.8.8.8"]
}
s001# systemctl restart docker
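A trivial check before building, to confirm the network exists (the grep target matches the name created above):
docker network ls | grep example_net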

I changed the default DNS server in /etc/resolv.conf and it worked for me.
FROM:
nameserver 127.0.0.53
options edns0 trust-ad
TO:
nameserver 8.8.8.8
#nameserver 127.0.0.53
options edns0 trust-ad
I just added Google's public DNS server and commented out the default one.
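One caveat, as an aside: on distributions running systemd-resolved, /etc/resolv.conf is often a generated symlink and manual edits can be overwritten, so it's worth checking first:
ls -l /etc/resolv.conf   # a link into /run/systemd/resolve/ means the file is managed for you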

My case was tricky and related to environmental conditions, but it's worth mentioning.
I was behind a firewall with bandwidth limitations based on its own hierarchy-based logic (critical, hard, medium traffic, etc.).
Every time I started a huge docker pull, everything on my host started misbehaving (HTTPS browser navigation relying on DNS, ping relying on DNS, and Docker too, of course).
Removing those limits fixed my problem, so check your network, too.

If you are facing this on a Windows machine, you can configure how Docker containers interact with the network and set the DNS manually:
Settings => Resources => Network => Manual DNS Configuration

Don't forget to check your internet connection, especially if you are using a virtual machine in the cloud (for example EC2).
I had no internet connection when I tried to run a container on EC2. I was connected to the VM through a bastion host, but the virtual machine itself had no internet connection.
I wasted too much time on this. I hope this answer helps people like me.

Related

How do I build docker container behind company proxy?

I am trying to build a simple Python-based Docker container. I am working at a corporation behind a proxy, on Windows 10. Below is my Dockerfile:
FROM python:3.7.9-alpine3.11
WORKDIR ./
RUN pip install --proxy=http://XXXXXXX:8080 -r requirements.txt
COPY . /
EXPOSE 5000
CMD ["python", "application.py"]
But it's giving me the following error in cmd:
"failed to solve with frontend dockerfile.v0: failed to build LLB: failed to load cache key: failed to do request: Head https://registry-1.docker.io/v2/library/python/manifests/3.7.9-alpine3.11: proxyconnect tcp: EOF"
I've tried to figure out how to configure Docker's proxy using many links, but they keep referring to a file "/etc/sysconfig/docker" which I cannot find anywhere under Windows 10, or maybe I'm not looking in the right place.
Also, I'm not sure this is only a proxy issue, since I've seen people running into this issue without using a proxy.
I would highly appreciate anyone's help. Working at this corporation has already made me spend >10 hours on something that took me 10 minutes on my Mac... :(
Thank you
You're talking about the most basic of Docker functionality. Normally, it has to connect to the Docker Hub on the internet to get base images. If you can't make this work with your proxy, you can either
preload your local cache with the necessary images
set up a Docker registry inside your firewall that contains all the images you'll need
Obviously, the easiest thing, probably by far, would be to figure out how to get Docker to connect to Docker Hub through your proxy.
In terms of getting Docker on Windows to work with your proxy, might this help? - https://learn.microsoft.com/en-us/virtualization/windowscontainers/manage-docker/configure-docker-daemon
Here's what it says about configuring a proxy:
To set proxy information for docker search and docker pull, create a Windows environment variable with the name HTTP_PROXY or HTTPS_PROXY, and a value of the proxy information. This can be completed with PowerShell using a command similar to this:
In PowerShell:
[Environment]::SetEnvironmentVariable("HTTP_PROXY", "http://username:password@proxy:port/", [EnvironmentVariableTarget]::Machine)
Once the variable has been set, restart the Docker service.
In PowerShell:
Restart-Service docker
For more information, see Windows Configuration File on Docker.com.
I've also seen it mentioned that Docker for Windows allows you to set proxy parameters in its configuration GUI interface.
There is no need to pass proxy information in the Dockerfile.
There are predefined ARGs which can be used for this purpose.
HTTP_PROXY
HTTPS_PROXY
FTP_PROXY
You can pass the details when building the image:
https://docs.docker.com/engine/reference/builder/#predefined-args
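For example, a minimal sketch (the proxy host, port, and image tag are placeholders, not values from the question):
docker build \
  --build-arg HTTP_PROXY=http://proxy.example.com:8080 \
  --build-arg HTTPS_PROXY=http://proxy.example.com:8080 \
  -t myapp .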
I do not see any runtime dependency of your container on the Internet, so running the container will work without an issue.

How do I run pycharm within my docker container?

I'm very new to Docker. I want to build my Python application within a Docker container. As I build the application, I want to be testing/running it in PyCharm and in the container I build.
How do I connect PyCharm Pro to a specific container or image (either Python or Anaconda)?
When I create a project, click Pure Python, then Add Remote, then Docker, I get a connection error.
I'm running on Mac OS X El Capitan (10.11.6) with Docker version 1.12.1 and Pycharm Pro 2016.2.3
Docker-for-mac only supports connections over the /var/run/docker.sock socket that is listening on your OSX host.
If you try to add this to pycharm, you'll get the following message:
"Cannot connect: java.lang.ExceptionInInitializerError, caused by: java.lang.IllegalStateException: Only supported on Linux"
So PyCharm really only wants to connect to a docker daemon over a TCP socket, and has support for the recommended TLS protection of that socket. The Certificates folder defaults to the certificate folder for the default docker-machine machine, "default".
It is possible to implement a workaround to expose Docker for Mac via a TCP server if you have socat installed on your OSX machine.
On my system, I have it installed via homebrew:
brew install socat
Now that's installed, I can run socat with the following parameters:
socat TCP-LISTEN:2376,reuseaddr,fork,bind=127.0.0.1 UNIX-CLIENT:/var/run/docker.sock
WARNING: this will make it possible for any process running as any user on your whole mac to access your docker-for-mac. The unix socket is protected by user permissions, while 127.0.0.1 is not.
This socat command tells it to listen on 127.0.0.1:2376 and pass connections on to /var/run/docker.sock. The reuseaddr and fork options allow this one command to service multiple connections instead of just the very first one.
I can test that socat is working by running the following command:
docker -H tcp://127.0.0.1:2376 ps
If you get a successful docker ps response back, then you know that the socat process is doing its job.
Now, in the PyCharm window, I can put the same tcp://127.0.0.1:2376 in place, and I should get a "Connection successful" message back.
This workaround will require that socat command to be running any time you want to use docker from PyCharm.
If you wanted to do the same thing, but with TLS, you could set up certificates and make them available for both pycharm and socat, and use socat's OPENSSL-LISTEN instead of the TCP-LISTEN feature. I won't go into the details on that for this answer though.

pycharm can't complete remote interpreter setup for Docker

I'm new to Docker. I'm using Docker & docker-compose, going through a flask tutorial. The base docker image is python 2.7 slim.
It's running on Linux. docker 1.11.2
The application is working fine.
I want to get pycharm pro connecting to the remote interpreter, something I have never done before.
I followed the instructions for docker-compose. Initially it was failing because it could not connect to port 2376. I added this port to docker-compose.yml and the error went away.
However, trying to save the configuration now stalls/hangs with a dialog 'Getting Remote Interpreter Version'. This never completes. Also, I can't quit pycharm. This happens in Pycharm 2016.2 and 2016.3 EAP (2nd).
The help says "SFTP support is required for copying helpers to the server".
Does this mean I need to do something?
I'm not using docker-machine
The problem was that TCP access to the Docker API is not enabled by default under Ubuntu 16.04.
There are suggestions to enable TCP/IP access.
However, JetBrains gave me the simplest solution:
If you are using Linux it is most likely that Docker installed with its default setup and Docker is expecting to be used through the UNIX domain file socket /var/run/docker.sock. And you should specify unix:///var/run/docker.sock in the API URL field. Please comment whether it helps!
This suggestion worked with my Ubuntu 16.04-derived distribution.
This goes into the Docker entry in PyCharm preferences under Build, Execution, Deployment.
You can also edit this while setting up a remote interpreter, but only by making a new Docker entry.
TCP/IP Method
This method works if you want TCP/IP access, but this is a security risk. The socket approach is better, which is probably why it is the default.
https://coreos.com/os/docs/latest/customizing-docker.html
Customizing docker
The Docker systemd unit can be customized by overriding the unit that ships with the default CoreOS settings. Common use-cases for doing this are covered below.
Enable the remote API on a new socket
Create a file called /etc/systemd/system/docker-tcp.socket to make Docker available on a TCP socket on port 2375.
[Unit]
Description=Docker Socket for the API
[Socket]
ListenStream=2375
BindIPv6Only=both
Service=docker.service
[Install]
WantedBy=sockets.target
Then enable this new socket:
systemctl enable docker-tcp.socket
systemctl stop docker
systemctl start docker-tcp.socket
systemctl start docker
Test that it’s working:
docker -H tcp://127.0.0.1:2375 ps
Once I thought to search for ubuntu 16.04 I came across simpler solutions, but I did not test them.
For instance:
https://www.ivankrizsan.se/2016/05/18/enabling-docker-remote-api-on-ubuntu-16-04/
Edit the file /lib/systemd/system/docker.service
Modify the line that starts with ExecStart to look like this:
ExecStart=/usr/bin/docker daemon -H fd:// -H tcp://0.0.0.0:2375
Where my addition is the "-H tcp://0.0.0.0:2375" part. Save the modified file and restart the Docker service:
sudo service docker restart
Test that the Docker API is indeed accessible:
curl http://localhost:2375/version
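One caveat not in that guide (from general systemd behavior): after editing /lib/systemd/system/docker.service, systemd usually needs to re-read its unit files before the restart picks up the change:
sudo systemctl daemon-reload   # make systemd re-read the edited unit file
sudo service docker restart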
I - docker-compose up
I think PyCharm will run docker-compose up; have you tried running this command first in your terminal (from where your docker-compose.yml is)?
Maybe if some errors occur, you will get more info in your terminal.
II - PyCharm Docker configuration
Otherwise it could be due to your Docker machine configuration in PyCharm.
What I do to configure my machine and to be sure it is correctly configured:
1 - run docker-machine ls in your shell
2 - copy-paste the URL without tcp://
3 - go to PyCharm Preferences -> Build, Execution, Deployment -> Docker -> + to create a new server; fill in the server name field
4 - paste the previously copied URL, keeping https://
5 - fill in the path of your machine's certificates folder
6 - tick Import credentials from Docker Machine
7 - click Detect -> your machine should appear in the selection list
8 - save this server
9 - select this server when configuring your remote interpreter, from PyCharm Preferences -> Project -> Project Interpreter -> gear wheel -> Add Remote -> Docker or Docker Compose
10 - you should be able to select a service name
11 - save your new interpreter
12 - try running your test twice; sometimes it can take time to initialize

nginx permission denied while reading upstream - even when run as root

I have a flask app running under uWSGI behind nginx.
*1 readv() failed (13: Permission denied) while reading upstream, client: 10.0.3.1, server: , request: "GET /some/path/constants.js HTTP/1.1", upstream: "uwsgi://unix:/var/uwsgi.sock:", host: "dev.myhost.com"
The permissions on the socket are okay (666, and set to the same user as nginx), in fact, even when I run nginx as root I still get this error.
The Flask app/uWSGI is sending the response properly, but it's just not being read by nginx. This is on Ubuntu Utopic Unicorn.
Any idea where the permission might be getting denied if the nginx process has full access to the socket?
As a complicating factor this server is running in a container that has Ubuntu 14.04 installed in it. And this setup used to work... but I recently upgraded the host to 14.10... I can fully understand that this could be the cause of the problem. But before I downgrade the host or upgrade the container I want to understand why.
When I run strace on a worker that's generating this error I see the call it's making is something like this:
readv(14, 0x7fffb3d16a80, 1) = -1 EACCES (Permission denied)
14 seems to be the file descriptor created by this system call
socket(PF_LOCAL, SOCK_STREAM, 0) = 14
So it can't read from a local socket that it has just created?
Okay! So the problem was, I think, related to this bug. It seems that even though apparmor wasn't configured to prevent access to sockets inside the containers it was actually doing something to prevent reading from them (though not creation...) so turning off apparmor for the container (following these instructions) worked to fix it.
The two relevant lines were:
sudo apparmor_parser -R /etc/apparmor.d/usr.bin.lxc-start
sudo ln -s /etc/apparmor.d/usr.bin.lxc-start /etc/apparmor.d/disabled/
and adding
lxc.aa_profile = unconfined
To the containers config file.
NB: These errors were not recorded in any apparmor logs.
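To double-check that the profile really is unloaded, aa-status from apparmor-utils can help (assuming the profile name used above):
sudo aa-status | grep lxc-start   # no output once the profile has been removed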
This problem was probably introduced in kernel 3.16, because it does not reproduce on 14.04 with the 3.13 kernel. A strange AppArmor bug was indeed responsible for it.
Unfortunately @aychedee's solution didn't work for me. In my case I had to add the following parameter to the docker run command to get rid of the issue:
docker run --security-opt apparmor:unconfined ...
If someone's aware what is the current state of the issue, please consider adding comment under this answer :)

How to make Django's devserver public ? Is it generally possible?

I'm currently trying out the Django framework and I would like to share/present/show some stuff I've made to my workmates/friends. I work in Ubuntu under Win7 via VMware. So my wish is to send my current public IP with port (e.g. http://123.123.123.123:8181/django-app/) to my friends so they could test it.
The problem is: I use Django's dev server (python /path-to-django-app/manage.py runserver $IP:$PORT).
How do I make the dev server public?
EDIT:
Oh, there's something I forgot to mention. As I said, I use VMware with Ubuntu. I have a shell script that returns my current internal IP 192.168.xx.xx and saves it in an environment variable ($CUR_IP).
So, each time I want to run Django's dev server I simply execute
python /path-to-django-site/manage.py runserver $CUR_IP:8080
This way I get an HTTP address (e.g. http://192.168.40.145:8080/app-name/) which I CAN USE OUTSIDE my virtual machine. I can test it on my host (Win7) machine. That's actually the reason why I asked the question. I thought there was a way to use the external IP and make runserver usable from outside too.
python manage.py runserver 0.0.0.0:8181
This will run a development server that listens on all IPs on port 8181.
Note that as of Jun 17, 2011 Django development server is threaded by default (ticket #1609).
From the docs:
Note that the default IP address, 127.0.0.1, is not accessible from other machines on your network. To make your development server viewable to other machines on the network, use its own IP address (e.g. 192.168.2.1) or 0.0.0.0.
Assuming you have ruby installed, you just have to get localtunnel:
gem install localtunnel
then start your python development server with:
python manage.py runserver 0.0.0.0:8000
in another shell, start localtunnel:
localtunnel -k ~/.ssh/id_rsa.pub 8000
That will output an url to access your local server.
Port 8000 is now publicly accessible from http://xxxx.localtunnel.com
That's it.
192.168.*.* is a LAN-private address -- once you've done the proper VMWare (or other VM manager) and firewall incantations to make it accessible from the LAN, it still won't be accessible from outside the LAN, i.e., from the internet at large (a good thing too, because such development servers are not designed for security and scalability).
To make some port of a machine with a LAN-private IP visible to the internet at large, you need a router with a "virtual servers" ability (many routers, even cheap ones, offer it, but it's impossible to be specific about enabling it since each brand has its own idiosyncratic way). I would also recommend dyndns or other similar service to associate a stable DNS name to your always-varying public IP (unless you're splurging for a static IP from your connectivity provider, of course, but the latter option is becoming costlier all the time).
superuser.com or serverfault.com may provide better answers and details (once you give every single little detail of your configuration in a question) since the question has nothing much to do with software development and everything to do with server administration and configuration.
I had to add this line to settings.py in order to make it work (otherwise it shows an error when accessed from another computer)
ALLOWED_HOSTS = ['*']
then ran the server with:
python manage.py runserver 0.0.0.0:9595
Also, make sure that your firewall allows communication to the chosen port (9595 in this case)
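As an aside, ALLOWED_HOSTS = ['*'] turns off Django's Host-header validation; once you're past a quick demo, listing the actual hosts is safer (the values below are placeholders):
# settings.py
ALLOWED_HOSTS = ['192.168.2.1', 'mybox.example.com']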
Already answered, but adding the npm alternative of the same localtunnel:
sudo npm install -g localtunnel
lt --port 8000 --subdomain yash
If you are using VirtualBox, you need to change the network setting in VB from "NAT" to "Bridged Adapter", then restart Linux. Now if you run sudo ifconfig you will be able to see your IP address, like 192.168.*.*. The last step is runserver:
python manage.py runserver 192.168.*.*:8000
Cheers!
You need to configure bridged networking in VMware and also grant access to the target port in the Ubuntu firewall.
Alternatively, you can use cotunnel. Just run cotunnel in your Ubuntu (in VMware) and change the tunnel port in the cotunnel dashboard to the port you are using on the local side. It gives you a public URL and you can share that URL with your friends.
Your Django server can listen on 127.0.0.1 or 0.0.0.0 (I prefer 0.0.0.0); it does not matter for cotunnel.
Might I suggest trying something like pyngrok to programmatically manage an ngrok tunnel for you? Full disclosure, I am the developer of it. Django example here, but it's as easy as installing pyngrok:
pip install pyngrok
and using it:
from pyngrok import ngrok
# <NgrokTunnel: "http://<public_sub>.ngrok.io" -> "http://localhost:8000">
http_url = ngrok.connect(8000)
No messing with ports or firewalls or IP addresses, and now you can also inspect the traffic (which is useful since what you're doing here is ongoing development, not running a prod-ready server).
