How do I build a Docker container behind a company proxy? - python

I am trying to build a simple Python-based Docker container. I am working at a corporation, behind a proxy, on Windows 10. Below is my Dockerfile:
FROM python:3.7.9-alpine3.11
WORKDIR ./
RUN pip install --proxy=http://XXXXXXX:8080 -r requirements.txt
COPY . /
EXPOSE 5000
CMD ["python", "application.py"]
But it's giving me the following error in cmd:
"failed to solve with frontend dockerfile.v0: failed to build LLB: failed to load cache key: failed to do request: Head https://registry-1.docker.io/v2/library/python/manifests/3.7.9-alpine3.11: proxyconnect tcp: EOF"
I've tried to figure out how to configure Docker's proxy using many links, but they keep referring to a file "/etc/sysconfig/docker" which I cannot find anywhere under Windows 10; maybe I'm not looking in the right place.
Also, I'm not sure this is only a proxy issue, since I've seen people run into it without using a proxy.
I would highly appreciate anyone's help. Working at this corporation has already made me spend >10 hours on something that took me 10 minutes on my Mac... :(
Thank you

You're talking about the most basic of Docker functionality. Normally, it has to connect to Docker Hub on the internet to get base images. If you can't make this work with your proxy, you can either:
preload your local cache with the necessary images (a sketch follows below), or
set up a Docker registry inside your firewall that contains all the images you'll need
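For the preloading option, a minimal sketch (run the pull and save on a machine that can reach Docker Hub, then transfer the tarball to the restricted machine):
docker pull python:3.7.9-alpine3.11
docker save -o python-alpine.tar python:3.7.9-alpine3.11
docker load -i python-alpine.tar  # run this on the restricted machine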
Obviously, the easiest thing, probably by far, would be to figure out how to get Docker to connect to Docker Hub through your proxy.
In terms of getting Docker on Windows to work with your proxy, might this help? - https://learn.microsoft.com/en-us/virtualization/windowscontainers/manage-docker/configure-docker-daemon
Here's what it says about configuring a proxy:
To set proxy information for docker search and docker pull, create a Windows environment variable with the name HTTP_PROXY or HTTPS_PROXY, and a value of the proxy information. This can be completed with PowerShell using a command similar to this:
In PowerShell:
[Environment]::SetEnvironmentVariable("HTTP_PROXY", "http://username:password@proxy:port/", [EnvironmentVariableTarget]::Machine)
Once the variable has been set, restart the Docker service.
In PowerShell:
Restart-Service docker
For more information, see Windows Configuration File on Docker.com.
I've also seen it mentioned that Docker for Windows allows you to set proxy parameters in its configuration GUI.

There is no need to pass proxy information in the Dockerfile.
There are predefined ARGs which can be used for this purpose.
HTTP_PROXY
HTTPS_PROXY
FTP_PROXY
You can pass the details when building the image, as shown below.
https://docs.docker.com/engine/reference/builder/#predefined-args
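For example, reusing the proxy placeholder from the question (adjust host and port to your environment):
docker build -t myapp --build-arg HTTP_PROXY=http://XXXXXXX:8080 --build-arg HTTPS_PROXY=http://XXXXXXX:8080 .
Since pip honors these environment variables, the RUN pip install line should then work without its --proxy flag; the predefined ARGs are also excluded from docker history output.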
I do not see any runtime dependency of your container on the internet, so running the container should work without issue.

Related

How to Deploy Flask app on AWS EC2 Linux/UNIX instance

How do I deploy a Flask app on an AWS Linux/UNIX EC2 instance, either
1. using Gunicorn, or
2. using the Apache server?
It's absolutely possible, but it's not the quickest process! You'll probably want to use Docker to containerize your Flask app before you deploy it as well, so it boils down to these steps:
Install Docker (if you don't have it), build an image for your application, and make sure you can start the container locally and the app works as intended. You'll also need to write a Dockerfile that sets your runtime, copies all your directories, and exposes port 80 (this will be handy for AWS later); a sketch follows below.
The command to build an image is docker build -t your-app-name .
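As a minimal sketch of such a Dockerfile (the entry-point name application.py is an assumption; adjust to your project):
FROM python:3.7-slim
WORKDIR /app
COPY requirements.txt .
RUN pip install -r requirements.txt
COPY . .
EXPOSE 80
CMD ["python", "application.py"]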
Once you're ready to deploy the container, head over to AWS and launch an EC2 instance with the Amazon Linux 2 machine image. You'll be required to create a security key (.pem file) and move it somewhere on your computer. This acts as your credential to log in to your instance. This is where things differ depending on your OS. On Mac, you need to cd into the directory where the key is and modify its permissions by running chmod 400 key-file-name.pem. On Windows, you have to go into the file's security settings and make sure only your account (ideally the owner of the computer) can use it, basically setting it to private. At this point, you can connect to your instance from your command prompt with the command AWS gives you when you click connect to instance on the EC2 dashboard.
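That connect command is typically of the form below (the hostname here is a hypothetical placeholder; use the one AWS shows you):
ssh -i "key-file-name.pem" ec2-user@ec2-xx-xx-xx-xx.compute-1.amazonaws.com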
Once you're logged in, you can configure your instance to install docker and let you use it by running the following:
sudo amazon-linux-extras install docker
sudo yum install docker
sudo service docker start
sudo usermod -a -G docker ec2-user
Great, now you need to copy all your files from your local directory to your instance using SCP (secure copy protocol). The long way is to use this command for each file: scp -i /path/my-key-pair.pem file-to-copy ec2-user@public-dns-name:/home/ec2-user. Another route is to install FileZilla or WinSCP to speed up this process.
Now that all your files are in the instance, build the Docker image using the same command from the first step and run the container (see the sketch below). If you go to the URL that AWS gives you, your app should be running on AWS!
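A sketch of those two steps, assuming the image name from the first step and a Dockerfile that exposes port 80:
docker build -t your-app-name .
docker run -d -p 80:80 your-app-name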
Here's a reference I used when I did this for the first time; it might be helpful for you to look at too.

visdom.server inside Docker

I am running PyTorch training on CycleGAN inside a Docker image.
I want to use visdom to show the progress of the training (as recommended by the CycleGAN project).
I can start a visdom.server inside the Docker container and access it from outside the container. But when I try to run the basic visdom example in a bash session of the same container that is running the visdom.server, I get connection-refused errors such as "The requested URL could not be retrieved".
I think I need to configure the visdom.Visdom() in the example in some custom way to be able to send the data to the server.
Thankful for any help!
Notes
When I start visdom.server it says "You can navigate to http://c4b7a2be26c4:8097", while all the examples mention localhost:8097.
I am trying to do this behind a proxy.
I realised that, in order to curl localhost:8097, I need to use curl --noproxy localhost localhost:8097. So I will have to do something similar inside visdom.
When setting http_proxy inside a docker container, you need to set no_proxy=localhost,127.0.0.1 as well in order to allow connections to localhost.
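As a minimal sketch of the client side (assumptions: the server runs in the same container on visdom's default port 8097; server and port are visdom's standard keyword arguments):
import os
# bypass the proxy for local traffic before any HTTP request is made
os.environ["no_proxy"] = "localhost,127.0.0.1"
os.environ["NO_PROXY"] = "localhost,127.0.0.1"
import visdom
viz = visdom.Visdom(server="http://localhost", port=8097)  # defaults shown explicitly
viz.text("Hello from inside the container!")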
I got the same problem, and I found that if you use a Docker container to run the server, you cannot use that same container to run your code.

Temporary failure in name resolution [Errno -3] with Docker

I'm following the docker tutorial and am on the part where I have to build the app using:
docker build -t friendlyhello .
It reaches up to step 4, where after a pause I get this error:
Step 4/7 : RUN pip install -r requirements.txt
---> Running in 7f4635a7510a
Collecting Flask (from -r requirements.txt (line 1))
Retrying (Retry(total=4, connect=None, read=None, redirect=None)) after
connection broken by
'NewConnectionError('<pip._vendor.requests.packages.urllib3.connection.VerifiedHTTPSConnection
object at 0x7fe3984d9b10>: Failed to establish a new connection:
[Errno -3] Temporary failure in name resolution',)': /simple/flask/
I'm not quite sure what this error means and how I can go about solving it.
Thanks for your help!
I just did sudo service docker restart and it worked afterwards. Definitely worth a shot before jumping in to modify your configuration.
I got the same problem with Ubuntu 16.04 and Docker version 17.09.0-ce.
I don't think disabling dnsmasq is the right solution.
Here is how I solved it:
For Ubuntu
Edit /etc/default/docker and add your DNS server to the following line:
Example
DOCKER_OPTS="--dns 8.8.8.8 --dns 10.252.252.252"
Reference:
Network calls fail during image build on corporate network
bkasap's answer changes a system feature, which I would say is excessive, especially since Docker has built-in options for this. The newer way to do it is
$ sudo vi /etc/docker/daemon.json
and add following content
{
  "dns": ["8.8.8.8", "8.8.4.4"]
}
Don't forget to
sudo service docker restart
It's silly, but I had a VPN connected when I got this error.
After disconnecting the VPN, PIP started working again.
On Fedora 32 it was a problem with the firewall. The following commands resolved the issue:
$ firewall-cmd --permanent --zone=trusted --add-interface=docker0
$ firewall-cmd --reload
this post worked for me too!
Solved by disabling dnsmasq:
sudo vim /etc/NetworkManager/NetworkManager.conf
comment out dns=dnsmasq -> #dns=dnsmasq
sudo service network-manager restart (or reboot VM in this case)
from: https://github.com/moby/moby/issues/26330
Had this just now, on my Ubuntu 20.04. Randomly, it just stopped working!
Tried:
sudo service network-manager restart
Did not work. Then I just did:
sudo systemctl restart docker
and the issue was resolved!
This error means your Docker container is unable to access your network.
Beginning with systemd version 220, the forwarding setting for a given network (net.ipv4.conf.<interface>.forwarding) defaults to off. This setting prevents IP forwarding. It also conflicts with Docker’s behavior of enabling the net.ipv4.conf.all.forwarding setting within containers.
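You can inspect the current forwarding state with sysctl and, if needed, enable it (a sketch; this assumes you want host-wide forwarding):
sysctl net.ipv4.conf.all.forwarding
sudo sysctl -w net.ipv4.conf.all.forwarding=1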
If your container needs to resolve hosts which are internal to your network, the public nameservers will not be adequate. You have two choices:
You can specify a DNS server for Docker to use, or
You can disable dnsmasq in NetworkManager. If you do this, NetworkManager will add your true DNS nameserver to /etc/resolv.conf, but you will lose the possible benefits of dnsmasq.
You only need to use one of these methods.
You can read about how to perform these steps here.
I had the same issue with an Ubuntu 16.04.1 machine running docker-ce 17.
It was fixed by disabling dnsmasq in the network configuration.
sudo nano /etc/NetworkManager/NetworkManager.conf
Comment out the line dns=dnsmasq, then press Ctrl+O and Enter to save, and Ctrl+X to exit.
Restart the network service by running the command below.
sudo service network-manager restart
After this, if you run the docker build command, everything should work fine.
I had this problem on Windows 10 Pro and solved it by right-clicking the Docker icon in the tray and choosing "Restart...". It took a few minutes, and then the network was running fine again.
For me, rebooting the host machine resolved the issue.
I also got the "temporary failure in name resolution" error. My solution was to specify the network in the docker build command:
s001# docker network create example_net
s001# docker build --network example_net -t example_image example_image
^^^^^^^^^^^^^^^^^^^
I also configured the DNS in the Docker config on my development notebook:
s001# nano /etc/docker/daemon.json
{
  "dns": ["8.8.8.8"]
}
s001# systemctl restart docker
I changed the default DNS server in /etc/resolv.conf and it worked for me.
FROM:
nameserver 127.0.0.53
options edns0 trust-ad
TO:
nameserver 8.8.8.8
#nameserver 127.0.0.53
options edns0 trust-ad
I just added Google's DNS server and commented out the default one.
My case was tricky and related to environmental conditions, but it is worth mentioning.
I was behind a firewall with bandwidth limitations based on its own hierarchy-based logic (critical, hard, medium traffic, etc.).
Every time I started a huge docker pull, everything on my host started misbehaving (HTTPS browser navigation, DNS-based ping, and of course Docker itself).
Removing those limits fixed my problem, so check your network, too.
If you are facing it on a Windows machine, you can configure the way Docker containers interact with the network and set the DNS manually:
Settings => Resources => Network => Manual DNS Configuration
Don't forget to check your internet connection, especially if you are using a virtual machine in the cloud (for example EC2).
I had no internet connection when I tried to run a container on EC2: I was connected to the VM through a bastion host, but the virtual machine itself had no internet access.
I wasted too much time on this. I hope this answer helps people like me.

pycharm can't complete remote interpreter setup for Docker

I'm new to Docker. I'm using Docker and docker-compose, going through a Flask tutorial. The base Docker image is python 2.7 slim.
It's running on Linux, Docker 1.11.2.
The application is working fine.
I want to get PyCharm Pro connecting to the remote interpreter, something I have never done before.
I followed the instructions for docker-compose. Initially it was failing because it could not connect to port 2376. I added this port to docker-compose.yml and the error went away.
However, trying to save the configuration now stalls/hangs with a dialog 'Getting Remote Interpreter Version' that never completes. Also, I can't quit PyCharm. This happens in PyCharm 2016.2 and 2016.3 EAP (2nd).
The help says "SFTP support is required for copying helpers to the server".
Does this mean I need to do something?
I'm not using docker-machine
The problem was that TCP access to the Docker API is not enabled by default under Ubuntu 16.04.
There are suggestions to enable TCP/IP access.
However, JetBrains gave me the simplest solution:
If you are using Linux it is most likely that Docker installed with
its default setup and Docker is expecting to be used through UNIX
domain file socket /var/run/docker.sock. And you should specify
unix:///var/run/docker.sock in the API URL field. Please comment
whether it helps!
This suggestion worked with my Ubuntu 16.04-derived distribution.
This goes into the Docker entry in PyCharm preferences under Build, Execution, Deployment.
You can also edit this while setting up a remote interpreter, but only by making a new Docker entry.
TCP/IP Method
This method works if you want TCP/IP access, but this is a security risk. The socket approach is better, which is probably why it is the default.
https://coreos.com/os/docs/latest/customizing-docker.html
Customizing docker
The Docker systemd unit can be customized by overriding the unit that
ships with the default CoreOS settings. Common use-cases for doing
this are covered below.
Enable the remote API on a new socket
Create a file called /etc/systemd/system/docker-tcp.socket to make
Docker available on a TCP socket on port 2375.
[Unit]
Description=Docker Socket for the API
[Socket]
ListenStream=2375
BindIPv6Only=both
Service=docker.service
[Install]
WantedBy=sockets.target
Then enable this new socket:
systemctl enable docker-tcp.socket
systemctl stop docker
systemctl start docker-tcp.socket
systemctl start docker
Test that it’s working:
docker -H tcp://127.0.0.1:2375 ps
Once I thought to search for "Ubuntu 16.04", I came across simpler solutions, but I did not test them.
For instance:
https://www.ivankrizsan.se/2016/05/18/enabling-docker-remote-api-on-ubuntu-16-04/
Edit the file /lib/systemd/system/docker.service
Modify the line that starts with ExecStart to look like this:
ExecStart=/usr/bin/docker daemon -H fd:// -H tcp://0.0.0.0:2375
Where my addition is the “-H tcp://0.0.0.0:2375” part. Save the
modified file. Restart the Docker service:
sudo service docker restart
Test that the Docker API is indeed accessible:
curl http://localhost:2375/version
I - docker-compose up
I think PyCharm will run docker-compose up. Have you tried running this command first in your terminal (from the directory containing your docker-compose.yml)?
If any errors occur, you will get more information in your terminal.
II - pycharm docker configuration
Otherwise, it could be due to your Docker machine configuration in PyCharm.
Here is what I do to configure my machine and make sure it is set up correctly:
1 - run docker-machine ls in your shell
2 - copy the URL without tcp://
3 - go to PyCharm Preferences -> Build, Execution, Deployment -> Docker -> + to create a new server; fill in the server name field
4 - paste the previously copied URL, keeping https://
5 - fill in the path of your machine's certificates folder
6 - tick Import credentials from Docker Machine
7 - click Detect -> your machine should appear in the selection list
8 - save this server
9 - select this server when configuring your remote interpreter, from PyCharm Preferences -> Project -> Project Interpreter -> gear ("wheel") icon -> Add Remote -> Docker or Docker Compose
10 - you should be able to select a service name
11 - save your new interpreter
12 - try running your tests twice; sometimes it can take time to initialize

What is the best way to let someone test your webapp

I've been creating a webapp (just for learning purposes) using Python Django, and have no intention of deploying it. However, is there a way to let someone else try the web application? More precisely: is it possible to somehow test the webapp on another computer? I sent the source code (the whole folder) to another computer, installed a virtual environment, activated it, and tried runserver. However, I always get RuntimeError: maximum recursion depth exceeded in cmp. Is there any other way around it?
You can use ngrok -- https://ngrok.com/ -- to create a public URL to your local server for testing, and then give that URL to people so they can try your webapp.
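For example, assuming your Django development server is running on its default port 8000:
ngrok http 8000
ngrok then prints a public forwarding URL (something like https://xxxx.ngrok.io) that you can hand out.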
You can also use Localtunnel to easily share a web service on your local development without deploying the code in the server.
Install localtunnel:
npm install -g localtunnel
Start a webserver on some local port (e.g. http://localhost:8000) and use the command-line interface to request a tunnel to your local server:
lt --port 8000
You will receive a URL, for example https://xyz.localtunnel.me, that you can share with anyone for as long as your local instance of lt remains active. Any requests will be routed to your local service at the specified port.
