I have a Docker image and an associated container that runs a JupyterLab server. On this Docker image I have a very specific Python module that cannot be installed on the host. On my host I have my whole work environment, which I don't want to run in the Docker container.
I would like to use that module from a Python script running on the host. My first idea is to use docker-py (https://github.com/docker/docker-py) on the host like this:
import docker
client = docker.from_env()
container = client.containers.run("myImage", detach=True)
# do stuff with mymodule inside the container, then print a result:
container.exec_run("python -c 'import mymodule; print(something)'")
and get the output and keep working in my script.
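Concretely, continuing the snippet above, I imagine capturing the output like this (a sketch; docker-py's exec_run returns an (exit_code, output) tuple, with output as bytes):
# exec_run gives back an ExecResult: (exit_code, output-as-bytes)
exit_code, output = container.exec_run("python -c 'import mymodule; print(something)'")
print(exit_code, output.decode())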
Is there a better solution? Is there a way to connect to the jupyter server within the script on the host for example?
Thanks
First: as @dagnic states in his comment, there are those two modules that let you drive the Docker runtime from your Python script (there are probably more; which one is best would be a different question).
Second: I don't know much about Jupyter, but since you call it a "server", that tells me you can port-map that server (remember -p 8080:80 or --publish 8080:80? yeah, that's it!). After setting a port mapping for your container you can, e.g., use the pycurl module to "talk" to that service.
Remember, if you "talk on a port" to your server, you can also set up that port mapping from docker-py (e.g. via the ports argument of containers.run).
Since you asked whether a better solution exists: these two methods are the most popular. The first is convenient for your script; with the second you launch a server and can use pycurl from your host script, as you asked (connect to the Jupyter server). E.g., if you launch the Jupyter server like:
docker run -p 9999:8888 -it -e JUPYTER_ENABLE_LAB=yes jupyter/base-notebook:latest
you can use pycurl like this:
import pycurl
from io import BytesIO
b_obj = BytesIO()
crl = pycurl.Curl()
# Set URL value
crl.setopt(crl.URL, 'http://localhost:9999')  # host port from the -p 9999:8888 mapping above
# Write bytes that are utf-8 encoded
crl.setopt(crl.WRITEDATA, b_obj)
# Perform a file transfer
crl.perform()
# End curl session
crl.close()
# Get the content stored in the BytesIO object (in byte characters)
get_body = b_obj.getvalue()
# Decode the bytes stored in get_body to HTML and print the result
print('Output of GET request:\n%s' % get_body.decode('utf8'))
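If you'd rather not use pycurl, the same GET with the requests library might look like this (a sketch, assuming the -p 9999:8888 mapping above):
import requests

# Talk to the Jupyter server published on host port 9999
resp = requests.get("http://localhost:9999")
print('Output of GET request:\n%s' % resp.text)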
Update:
So you have two questions:
1. Is there a better solution?
Basically, use the docker-py module and run the Jupyter server in a Docker container (and there are a few other options not involving Docker, I suppose).
2. Is there a way to connect to the jupyter server within the script on the host for example?
For an example of how to run Jupyter in Docker, see the docker run command above. The rest is to use pycurl from your code to talk to that Jupyter server from your host computer.
Related
Is it possible, using the Docker SDK for Python, to launch a container on a remote machine?
import docker
client = docker.from_env()
client.containers.run("bfirsh/reticulate-splines", detach=True)
# I'd like to run this container ^^^ on a machine that I have SSH access to.
Going through the documentation, it seems like this type of management is out of scope for said SDK, so searching online I got hints that the Kubernetes client for Python could be of help, but I don't know where to begin.
It's not clearly documented in the Docker SDK for Python, but you can use SSH to connect to the Docker daemon by specifying the host in ssh://[user]@[host] format, for example:
import docker
# Create a client connecting to Docker daemon via SSH
client = docker.DockerClient(base_url="ssh://username@your_host")
It's also possible to set the environment variable DOCKER_HOST=ssh://username@your_host and use the example you provided, which uses the current environment to create the client, for example:
import docker
import os
os.environ["DOCKER_HOST"] = "ssh://username@your_host"
# Or use `export DOCKER_HOST=ssh://username@your_host` before running Python
client = docker.from_env()
Note: as specified in the question, this assumes you have SSH access to the target host. You can test that with:
ssh username@your_host
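Once connected, the client behaves just like a local one; a quick sanity check might look like this (a sketch):
import docker

client = docker.DockerClient(base_url="ssh://username@your_host")

# List the containers running on the remote host, just as you would locally
for container in client.containers.list():
    print(container.name, container.status)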
It's possible; simply do this:
client = docker.DockerClient(base_url=your_remote_docker_url)
Here's the document I found related to this:
https://docker-py.readthedocs.io/en/stable/client.html#client-reference
If you only have SSH access to it, there is a use_ssh_client option.
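A sketch of what that might look like (use_ssh_client=True makes the SDK shell out to your local ssh binary rather than using its built-in SSH support):
import docker

# use_ssh_client=True delegates the SSH connection to the local `ssh` binary
client = docker.DockerClient(
    base_url="ssh://username@your_host",
    use_ssh_client=True,
)
print(client.ping())  # True if the remote daemon is reachable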
If you have a k8s cluster, you do not need the Python SDK; you only need the command-line tool kubectl.
Once you have it installed, you can create a deployment that will deploy your image.
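For example (the image name here is hypothetical):
kubectl create deployment my-app --image=registry.example.com/my-app:latest
kubectl get pods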
I am trying to run my Streamlit app via Docker. Since I ultimately want to run my code on a Linux system, I am first checking whether it runs on my Windows system.
So I ran my container and ran a command which gave me two URLs, but neither URL is working.
(Screenshots of the terminal output and of the browser result were attached here.)
Do I have to specify a port number? And if yes, how do I find my local system's port?
Thanks in advance
I think you are using the wrong port number. Please try -p 8501:8501 in your docker command and then go to localhost:8501 in your browser (8501 is Streamlit's default port).
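The full command might look like this (the image name is hypothetical):
docker run -p 8501:8501 my-streamlit-app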
First check whether you are able to ping the internal IP of the Docker container from the host machine.
Then publish multiple ports using the following arguments:
docker run -p <host_port1>:<container_port1> -p <host_port2>:<container_port2>
I am trying to build a simple Python-based Docker container. I am working at a corporation, behind a proxy, on Windows 10. Below is my Dockerfile:
FROM python:3.7.9-alpine3.11
WORKDIR /
# copy the requirements file first so pip can find it during the build
COPY requirements.txt ./
RUN pip install --proxy=http://XXXXXXX:8080 -r requirements.txt
COPY . .
EXPOSE 5000
CMD ["python", "application.py"]
But it's giving me the following error in cmd:
"failed to solve with frontend dockerfile.v0: failed to build LLB: failed to load cache key: failed to do request: Head https://registry-1.docker.io/v2/library/python/manifests/3.7.9-alpine3.11: proxyconnect tcp: EOF"
I've tried to figure out how to configure Docker's proxy using many links, but they keep referring to a file "/etc/sysconfig/docker" which I cannot find anywhere under Windows 10, or maybe I'm not looking in the right place.
Also, I'm not sure this is only a proxy issue, since I've seen people run into it without using a proxy.
I would highly appreciate anyone's help. Working at this corporation has already made me spend >10 hours on something that took me 10 minutes on my Mac... :(
Thank you
You're talking about the most basic Docker functionality: normally, it has to connect to Docker Hub on the internet to get base images. If you can't make this work with your proxy, you can either:
preload your local cache with the necessary images
set up a Docker registry inside your firewall that contains all the images you'll need
Obviously, the easiest thing, probably by far, would be to figure out how to get Docker to connect to Docker Hub through your proxy.
In terms of getting Docker on Windows to work with your proxy, might this help? - https://learn.microsoft.com/en-us/virtualization/windowscontainers/manage-docker/configure-docker-daemon
Here's what it says about configuring a proxy:
To set proxy information for docker search and docker pull, create a Windows environment variable with the name HTTP_PROXY or HTTPS_PROXY, and a value of the proxy information. This can be completed with PowerShell using a command similar to this:
In PowerShell:
[Environment]::SetEnvironmentVariable("HTTP_PROXY", "http://username:password@proxy:port/", [EnvironmentVariableTarget]::Machine)
Once the variable has been set, restart the Docker service.
In PowerShell:
Restart-Service docker
For more information, see Windows Configuration File on Docker.com.
I've also seen it mentioned that Docker for Windows lets you set proxy parameters in its configuration GUI.
There is no need to pass proxy information in the Dockerfile.
There are predefined ARGs which can be used for this purpose.
HTTP_PROXY
HTTPS_PROXY
FTP_PROXY
You can pass the details when building the image
https://docs.docker.com/engine/reference/builder/#predefined-args
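For example, a build invocation might look like this (keeping the proxy address from your Dockerfile):
docker build --build-arg HTTP_PROXY=http://XXXXXXX:8080 --build-arg HTTPS_PROXY=http://XXXXXXX:8080 -t myapp .
pip also honors the HTTP_PROXY/HTTPS_PROXY environment variables, so the --proxy flag in the pip install line should become unnecessary.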
I do not see any runtime dependency of your container on the Internet, so running the container should work without an issue.
I am trying to load about 44 MB of data into a Pandas data frame. This is the code:
import pandas as pd
from sqlalchemy import create_engine
import cx_Oracle
import sqlalchemy
engineor = create_engine('oracle+cx_oracle://xxxxx:xxx#xxx:1521/?service_name=XXX')
sql = "select * from accounts where date >= '10-MAY-18 06.00.16.170000000 PM'"
do = pd.read_sql(sql, engineor)
do.info(memory_usage='deep')
The above query returns around 70k rows, around 44 MB in size.
When I run this from my local machine (Win 7) in Anaconda, the data loads into the data frame without any issues in a minute or two. However, when I run the same thing in a Docker container (Linux based), it just hangs.
I verified that the Docker container has sufficient memory, and memory doesn't grow over time (although the size is quite small, ~44 MB). The query just gets submitted and hangs indefinitely; I am unable to kill it by pressing Ctrl+C or Ctrl+Z and have to disconnect from the machine and log back in.
I tried to match the version of Pandas that I am running locally in Anaconda, but it didn't help much; it is still hanging. The only things now differing between my local machine and the container are the Python version (3.5.3 in the Docker container vs. 3.6.3 locally) and the OS (Anaconda on Windows vs. a Linux-based Docker container). I am not sure if these things make any difference.
Any suggestions on how to overcome this?
I just ran into the same problem as yours. I think it's caused by using socket communication inside a bridge-network container.
Docker uses the bridge network by default, and only the ports you publish are forwarded to the host. This usually works when you use the container as a server, but when your container runs as a client that creates sockets for communicating, there is a problem.
The client needs to open random ports toward the server and bind sockets to them. Because only the published ports are forwarded, those random ports cannot be reached from outside: the incoming data never finds the client container, and the sockets inside the container are left hanging forever.
My solution is to run the container on the host network. You can do that with a command like
docker run --rm -d --network host --name my_nginx nginx
or using docker-compose
version: '3.7'
services:
  my_nginx:
    ...
    network_mode: "host"
But beware that using the host network can cause port conflicts.
PS: although using the host network solved my problem, I'm not an expert on sockets or networking, so please correct me if I'm wrong about the mechanisms. I read about sockets in this article: https://docs.oracle.com/javase/tutorial/networking/sockets/definition.html.
I want to manage virtual machines (any flavor) using Python scripts. For example: create a VM, start it, stop it, and access my guest OS's resources.
My host machine runs Windows. I have VirtualBox installed. Guest OS: Kali Linux.
I just came across a piece of software called libvirt. Do any of you think this would help me?
Any insights on how to do this? Thanks for your help.
For AWS use boto.
For GCE use Google API Python Client Library
For OpenStack use the python-openstackclient and import its methods directly.
For VMware, google it.
For Opsware, abandon all hope, as their API is undocumented and has about 12 years of accumulated abandoned methods to dig through, with an equally insane data model backing it.
For direct libvirt control there are Python bindings for libvirt. They work very well and closely mimic the C libraries (see the sketch below this list).
I could go on.
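As a sketch of the libvirt route, starting and stopping a VM might look like this (the connection URI and VM name are assumptions; "vbox:///session" targets local VirtualBox, while "qemu:///system" would target KVM/QEMU):
import libvirt  # pip install libvirt-python

# Connection URI is an assumption: "vbox:///session" drives local VirtualBox
conn = libvirt.open("vbox:///session")

dom = conn.lookupByName("kali-linux")  # hypothetical VM name
dom.create()           # start the VM
print(dom.isActive())  # 1 while the VM is running
dom.shutdown()         # ask the guest OS to shut down gracefully
conn.close()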
Follow the directions here to install Docker: https://docs.docker.com/windows/ (it includes Oracle VirtualBox, if you don't already have it).
# grab the image
docker pull kalilinux/kali-linux-docker
# run a specific command
docker run kalilinux/kali-linux-docker <some_command>
# open an interactive terminal in the container
docker run -t -i kalilinux/kali-linux-docker /bin/bash
if you want to mount a local volume you can use the `-v <host_src>:<container_dst>` switch in your run command
# mount the local ./training/webapp directory into the Kali container at /webapp
docker run -v "%cd%/training/webapp:/webapp" kalilinux/kali-linux-docker <some_command>
Note that these are run from the regular Windows prompt; to use them from Python you would need to wrap them in subprocess calls, as sketched below.
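A minimal sketch of such a wrapper (reusing the pull command from above):
import subprocess

# Run a docker CLI command from Python and capture its output
result = subprocess.run(
    ["docker", "pull", "kalilinux/kali-linux-docker"],
    capture_output=True,
    text=True,
)
print(result.returncode)
print(result.stdout)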