I'm building a Docker container that has Python and installs some packages using 'pip install'.
I want to allow 'pip' to communicate only with pypi.org and block it from reaching any other destination, including the host that runs the Docker container (inside which we run pip).
I tried creating some custom outbound rules in Windows Firewall (including blocking the container's IP address, the container's gateway IP address, and the entire 'nat' Docker subnet), but the traffic somehow bypasses the firewall and reaches whatever it wants (and installs any package it tries).
To clarify, I'm first trying to block 'pip' entirely, even from pypi.org.
Is it possible to achieve this blocking behavior? What rules should I set in the firewall (or directly on the Docker network)?
*I'm running on a Windows desktop with Docker Desktop, and the container is based on the following image: mcr.microsoft.com/windows/nanoserver:1809.
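One direction I'm considering for the "block everything first" step (a sketch only; the network name is made up, and I haven't verified how --internal behaves with Windows nat networks) is to attach the container to a network that has no outbound connectivity at all:
docker network create --internal no-egress
docker run --network no-egress mcr.microsoft.com/windows/nanoserver:1809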
I'm currently trying to look up hostnames from inside a docker container using Python. My example script is exceedingly simple, so here we go:
import socket

# Reverse lookup: returns a (hostname, aliaslist, ipaddrlist) tuple
print(socket.gethostbyaddr('8.8.8.8'))
It works fine from my Windows machine, and it works fine from the Ubuntu instance using WSL2. However, inside the Docker container, no dice. I've verified that it has internet access and should be able to reach all the same servers my physical machine can access; I've confirmed that by pinging those servers from inside the Docker container.
However, simply looking up the hostname of an IP doesn't work. Any advice?
Edit: The Docker container I am running is based on Debian Bullseye.
Edit 2: Executing nslookup inside the container is not able to resolve the hostnames either. Do I need to specify a DNS server inside the container?
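One way to check whether the default DNS server is the problem (a diagnostic sketch; 1.1.1.1 is just a well-known public resolver used as an example, not part of the setup above):
nslookup 8.8.8.8 1.1.1.1
If the reverse lookup succeeds with an explicit server but fails without one, the container's configured nameserver is the culprit.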
Edit 3: I got a lot closer to actually solving it. The default nameserver in my resolv.conf is unable to resolve those lookups.
However, checking which nameserver my Ubuntu/Windows machine uses, and manually setting it to be used inside of Docker, enabled me to at least use nslookup.
So echo "nameserver [ip of host nameserver]" > /etc/resolv.conf got me mostly there.
Okay, I've found the proper solution.
Obviously the proper way to do this is to add --dns [ip of host nameserver] to the docker run arguments.
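For example (the address is a placeholder for your host's actual nameserver, and the image name is made up):
docker run --dns 192.168.1.1 my-image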
Edit: Or change it inside the Python script, so that no arguments other than run are needed to start the container, for ease of use.
I am trying to build a simple Python-based Docker container. I am working at a corporation, behind a proxy, on Windows 10. Below is my Dockerfile:
FROM python:3.7.9-alpine3.11
WORKDIR /app
# copy the requirements file into the image before pip tries to read it
COPY requirements.txt .
RUN pip install --proxy=http://XXXXXXX:8080 -r requirements.txt
COPY . .
EXPOSE 5000
CMD ["python", "application.py"]
But it's giving me the following error in cmd:
"failed to solve with frontend dockerfile.v0: failed to build LLB: failed to load cache key: failed to do request: Head https://registry-1.docker.io/v2/library/python/manifests/3.7.9-alpine3.11: proxyconnect tcp: EOF"
I've tried to figure out how to configure Docker's proxy using many links, but they keep referring to a file "/etc/sysconfig/docker", which I cannot find anywhere under Windows 10; maybe I'm not looking in the right place.
Also, I'm not sure this is only a proxy issue, since I've seen people run into it without using a proxy.
I would highly appreciate anyone's help. Working at this corporation has already made me spend >10 hours on something that took me 10 minutes on my Mac... :(
Thank you
You're talking about the most basic of Docker functionality. Normally, it has to connect to Docker Hub on the internet to get base images. If you can't make this work with your proxy, you can either
preload your local cache with the necessary images (see the sketch below), or
set up a Docker registry inside your firewall that contains all the images you'll need.
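For the preloading option, a sketch of how that could look (the tar filename is arbitrary; the image tag is the one from the question):
# on a machine with unrestricted internet access
docker pull python:3.7.9-alpine3.11
docker save python:3.7.9-alpine3.11 -o python-alpine.tar
# on the restricted machine, after copying the tar file over
docker load -i python-alpine.tar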
Obviously, the easiest thing, probably by far, would be to figure out how to get Docker to connect to Docker Hub through your proxy.
In terms of getting Docker on Windows to work with your proxy, might this help? - https://learn.microsoft.com/en-us/virtualization/windowscontainers/manage-docker/configure-docker-daemon
Here's what it says about configuring a proxy:
To set proxy information for docker search and docker pull, create a Windows environment variable with the name HTTP_PROXY or HTTPS_PROXY, and a value of the proxy information. This can be completed with PowerShell using a command similar to this:
In PowerShell:
[Environment]::SetEnvironmentVariable("HTTP_PROXY", "http://username:password@proxy:port/", [EnvironmentVariableTarget]::Machine)
Once the variable has been set, restart the Docker service.
In PowerShell:
Restart-Service docker
For more information, see Windows Configuration File on Docker.com.
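Presumably the HTTPS variant is set the same way, with the same placeholder credentials as in the quoted command:
[Environment]::SetEnvironmentVariable("HTTPS_PROXY", "http://username:password@proxy:port/", [EnvironmentVariableTarget]::Machine)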
I've also seen it mentioned that Docker for Windows allows you to set proxy parameters in its configuration GUI.
There is no need to pass proxy information in the Dockerfile.
There are predefined ARGs which can be used for this purpose.
HTTP_PROXY
HTTPS_PROXY
FTP_PROXY
You can pass the details when building the image
https://docs.docker.com/engine/reference/builder/#predefined-args
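For example (the image tag is made up; the proxy address is the one from the question):
docker build --build-arg HTTP_PROXY=http://XXXXXXX:8080 -t myapp .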
I do not see any runtime dependency of your container on the internet, so running the container will work without an issue.
I'm very new to Docker. I want to build my Python application within a Docker container. As I build the application, I want to be testing and running it both in PyCharm and in the container I build.
How do I connect PyCharm Pro to a specific container or image (either Python or Anaconda)?
When I create a project, click Pure Python and then Add Remote, then click Docker, I get the following result (screenshot omitted).
I'm running on Mac OS X El Capitan (10.11.6) with Docker version 1.12.1 and PyCharm Pro 2016.2.3.
Docker-for-mac only supports connections over the /var/run/docker.sock socket that is listening on your OSX host.
If you try to add this to pycharm, you'll get the following message:
"Cannot connect: java.lang.ExceptionInInitializerError, caused by: java.lang.IllegalStateException: Only supported on Linux"
So PyCharm really only wants to connect to a docker daemon over a TCP socket, and has support for the recommended TLS protection of that socket. The Certificates folder defaults to the certificate folder for the default docker-machine machine, "default".
It is possible to implement a workaround to expose Docker for Mac via a TCP server if you have socat installed on your OSX machine.
On my system, I have it installed via homebrew:
brew install socat
Now that's installed, I can run socat with the following parameters:
socat TCP-LISTEN:2376,reuseaddr,fork,bind=127.0.0.1 UNIX-CLIENT:/var/run/docker.sock
WARNING: this will make it possible for any process running as any user on your whole mac to access your docker-for-mac. The unix socket is protected by user permissions, while 127.0.0.1 is not.
This socat command tells it to listen on 127.0.0.1:2376 and pass connections on to /var/run/docker.sock. The reuseaddr and fork options allow this one command to service multiple connections instead of just the very first one.
I can test that socat is working by running the following command:
docker -H tcp://127.0.0.1:2376 ps
If you get a successful docker ps response back, then you know that the socat process is doing its job.
Now, in the PyCharm window, I can put the same tcp://127.0.0.1:2376 in place, and I should get a "Connection successful" message back.
This workaround will require that socat command to be running any time you want to use docker from PyCharm.
If you wanted to do the same thing, but with TLS, you could set up certificates and make them available for both pycharm and socat, and use socat's OPENSSL-LISTEN instead of the TCP-LISTEN feature. I won't go into the details on that for this answer though.
I'm new to Docker. I'm using Docker & docker-compose, going through a flask tutorial. The base docker image is python 2.7 slim.
It's running on Linux. docker 1.11.2
The application is working fine.
I want to get pycharm pro connecting to the remote interpreter, something I have never done before.
I followed the instructions for docker-compose. Initially it was failing because it could not connect to port 2376. I added this port to docker-compose.yml and the error went away.
However, trying to save the configuration now stalls/hangs with a dialog 'Getting Remote Interpreter Version'. This never completes. Also, I can't quit pycharm. This happens in Pycharm 2016.2 and 2016.3 EAP (2nd).
The help says "SFTP support is required for copying helpers to the server".
Does this mean I need to do something?
I'm not using docker-machine
The problem was that TCP access to the Docker API is not established by default under Ubuntu 16.04.
There are suggestions to enable TCP/IP access.
However, JetBrains gave me the simplest solution:
If you are using Linux it is most likely that Docker installed with its default setup and Docker is expecting to be used through UNIX domain file socket /var/run/docker.sock. And you should specify unix:///var/run/docker.sock in the API URL field. Please comment whether it helps!
This suggestion worked with my Ubuntu 16.04-derived distribution.
This goes into the Docker entry in PyCharm preferences under Build, Execution, Deployment.
You can also edit this while setting up a remote interpreter, but only by making a new Docker entry.
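You can verify from a shell that the daemon really is answering on that socket, in the same style as the tcp:// checks below:
docker -H unix:///var/run/docker.sock ps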
TCP/IP Method
This method works if you want TCP/IP access, but this is a security risk. The socket approach is better, which is probably why it is the default.
https://coreos.com/os/docs/latest/customizing-docker.html
Customizing docker
The Docker systemd unit can be customized by overriding the unit that ships with the default CoreOS settings. Common use-cases for doing this are covered below.
Enable the remote API on a new socket
Create a file called /etc/systemd/system/docker-tcp.socket to make Docker available on a TCP socket on port 2375.
[Unit]
Description=Docker Socket for the API
[Socket]
ListenStream=2375
BindIPv6Only=both
Service=docker.service
[Install]
WantedBy=sockets.target
Then enable this new socket:
systemctl enable docker-tcp.socket
systemctl stop docker
systemctl start docker-tcp.socket
systemctl start docker
Test that it’s working:
docker -H tcp://127.0.0.1:2375 ps
Once I thought to search for Ubuntu 16.04, I came across simpler solutions, but I did not test them.
For instance:
https://www.ivankrizsan.se/2016/05/18/enabling-docker-remote-api-on-ubuntu-16-04/
Edit the file /lib/systemd/system/docker.service
Modify the line that starts with ExecStart to look like this:
ExecStart=/usr/bin/docker daemon -H fd:// -H tcp://0.0.0.0:2375
Where my addition is the “-H tcp://0.0.0.0:2375” part. Save the modified file. Restart the Docker service:
sudo service docker restart
Test that the Docker API is indeed accessible:
curl http://localhost:2375/version
I - docker-compose up
I think PyCharm will run docker-compose up; have you tried running this command first in your terminal (from where your docker-compose.yml is)?
If any errors occur, you will get more info in your terminal.
II - PyCharm Docker configuration
Otherwise it could be due to your Docker machine configuration in PyCharm.
Here is what I do to configure my machine and make sure it is set up correctly:
1 - run docker-machine ls in your shell
2 - copy the URL, dropping the tcp:// prefix (see the example output after this list)
3 - go to PyCharm Preferences -> Build, Execution, Deployment -> Docker -> + to create a new server, and fill in the server name field
4 - paste the previously copied URL, prefixed with https://
5 - fill in the path of your machine's certificates folder
6 - tick Import credentials from Docker Machine
7 - click Detect -> your machine should appear in the selection list
8 - save this server
9 - select this server when configuring your remote interpreter, from PyCharm Preferences -> Project -> Project Interpreter -> gear icon -> Add Remote -> Docker or Docker Compose
10 - you should be able to select a service name
11 - save your new interpreter
12 - try running your test twice; sometimes it can take time to initialize
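For reference, docker-machine ls output looks roughly like this (the machine name, driver, and IP are illustrative defaults, not taken from the question):
NAME      ACTIVE   DRIVER       STATE     URL                         SWARM   DOCKER    ERRORS
default   *        virtualbox   Running   tcp://192.168.99.100:2376           v1.12.1
With that output, the value pasted into PyCharm in step 4 would be https://192.168.99.100:2376.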
I have written a Python script on my local laptop which uses several third-party packages. I now want to run my script regularly (via a cron job) on an external server.
The external server most likely does not have all the dependencies installed. Is there a way to package and deploy my Python script and its dependencies in order to ensure that it will run?
I have already tried to package the script as an exe, but failed to do so.
It's not clear what kind of third-party packages you have, but for those that were installed with pip, you can do this in your dev environment:
$ pip freeze > requirements.txt
And then you can install these packages in your production environment:
$ pip install -r requirements.txt
Ideally, you will already have a virtualenv on your production box. If not, it is well worth reading about virtual environments before deploying your script.
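A minimal sketch of that setup on the production box, assuming Python 3 and its built-in venv module (the environment path is illustrative):
$ python3 -m venv ~/myapp-env
$ source ~/myapp-env/bin/activate
$ pip install -r requirements.txt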
Just turn your computer into a server. Set up your router for port forwarding so that your server's contents are reachable when the router's IP is entered. You can of course purchase a DNS domain to give that IP a human-readable URL.