I am trying to run my Streamlit app via Docker. Since I want to run my code on a Linux system, I am first checking whether it runs on my Windows system.
So I ran my container and ran a command which gave me two URLs, but neither of the URLs is working.
This is the terminal result
This is the result in the browser
Do I have to mention any port number? And if yes, how do I find my local system's port?
Thanks in advance
I think you are using the wrong port number. Please try adding -p 8501:8501 to your docker command and then go to localhost:8501 in your browser.
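For example, assuming your image is named my-streamlit-app (a placeholder, not from the question), the full command would look something like:
docker run -p 8501:8501 my-streamlit-app
Streamlit serves on port 8501 inside the container by default; the -p flag maps that container port to the same port on your Windows host.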
First check if you are able to ping the internal IP of the docker container from the host machine.
then publish multiple ports using the following args:
docker run -p <host_port1>:<container_port1> -p <host_port2>:<container_port2>
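A hedged concrete version of that template, reusing the Streamlit port above plus one extra placeholder port:
docker run -p 8501:8501 -p 8502:8502 my-streamlit-app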
I'm currently trying to look up hostnames from inside a Docker container using Python. My example script is exceedingly simple, so here we go:
import socket
# Reverse-resolve Google's public DNS server to a hostname
print(socket.gethostbyaddr('8.8.8.8'))
It works fine from my Windows machine, and it works fine from the Ubuntu instance using WSL2. However, inside the Docker container, no dice. The container has internet access and should be able to reach all the same servers my physical machine can access; I've verified that by pinging those servers from inside the container.
However simply looking up the hostname of an IP doesn't work. Any advice?
Edit: The docker container I am running is based on Debian Bullseye
Edit 2: Executing nslookup inside the container is not able to resolve the hostnames either. Do I need to specify a DNS server inside the container?
Edit 3: I got a lot closer to actually solving it. The default nameserver in my resolv.conf is unable to resolve those lookups.
However, checking which nameserver my Ubuntu/Windows machines use, and manually setting it to be used inside of Docker, enabled me to at least use nslookup.
So echo "nameserver [ip of host nameserver]" > /etc/resolv.conf got me mostly there.
Okay, I've found the proper solution.
Obviously the proper way to do this is to add --dns [ip of host nameserver] to the docker run arguments.
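For example (the nameserver IP and image name below are placeholders, not from the question):
docker run --dns 192.168.1.1 my-bullseye-image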
Edit: Or change it inside the Python script, so that no arguments other than run are needed to properly start the container, for ease of use.
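A minimal sketch of that in-script variant, assuming the container runs as root and using 192.168.1.1 as a stand-in for the host's nameserver (both assumptions):
# Point the container's resolver at the host's nameserver before any lookups
with open('/etc/resolv.conf', 'w') as f:
    f.write('nameserver 192.168.1.1\n')

import socket
print(socket.gethostbyaddr('8.8.8.8'))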
I have a docker image and an associated container that runs a jupyter-lab server. On this docker image I have a very specific Python module that cannot be installed on the host. On my host, I have all my work environment that I don't want to run in the docker container.
I would like to use that module from python script running on the host. My first idea is to use docker-py (https://github.com/docker/docker-py) on the host like this:
import docker

client = docker.from_env()
container = client.containers.run("myImage", detach=True)
# exec_run returns an (exit_code, output) pair
exit_code, output = container.exec_run("python -c 'import mymodule; # do stuff; print(something)'")
and get the output and keep working in my script.
Is there a better solution? Is there a way to connect to the jupyter server within the script on the host for example?
Thanks
First. As @dagnic states in his comment, there are those 2 modules that let you drive the Docker runtime from your Python script (there are probably more; "which one is best" would be a different question).
Second. Without knowing anything about Jupyter, but since you call it a "server", it would mean to me that you are able to set up port mapping for that server (remember -p 8080:80 or --publish 8080:80, yeah that's it!). After setting a port mapping for your container you would be able to, e.g., use the pycurl module and "talk" to that service.
Remember, if you "talk on a port" to your server, you might also want to do this using, e.g., docker-py.
Since you asked if any better solution exists: these two methods would be the more popular ones. The first one is convenient for your script; the second one launches a server so you can use pycurl from your host script, as you asked (connect to the jupyter server). E.g. if you launch the jupyter server like:
docker run -p 9999:8888 -it -e JUPYTER_ENABLE_LAB=yes jupyter/base-notebook:latest
you can use pycurl like this:
import pycurl
from io import BytesIO
b_obj = BytesIO()
crl = pycurl.Curl()
# Set URL value: hit the host side of the -p 9999:8888 mapping above, over plain http
crl.setopt(crl.URL, 'http://localhost:9999')
# Write bytes that are utf-8 encoded
crl.setopt(crl.WRITEDATA, b_obj)
# Perform a file transfer
crl.perform()
# End curl session
crl.close()
# Get the content stored in the BytesIO object (in byte characters)
get_body = b_obj.getvalue()
# Decode the bytes stored in get_body to HTML and print the result
print('Output of GET request:\n%s' % get_body.decode('utf8'))
Update:
So you have two questions:
1. Is there a better solution?
Basically, use the docker-py module and run the jupyter server in a docker container (and there are a few other options not involving docker, I suppose)
2. Is there a way to connect to the jupyter server within the script on the host for example?
An example of how to run jupyter in docker is shown above (the jupyter/base-notebook command).
The rest is to use pycurl from your code to talk to that jupyter server from your host computer.
I am trying to build a simple Python-based docker container. I am working at a corporation behind a proxy, on Windows 10. Below is my docker file:
FROM python:3.7.9-alpine3.11
WORKDIR ./
COPY . /
RUN pip install --proxy=http://XXXXXXX:8080 -r requirements.txt
EXPOSE 5000
CMD ["python", "application.py"]
But it's giving me the following error in cmd:
"failed to solve with frontend dockerfile.v0: failed to build LLB: failed to load cache key: failed to do request: Head https://registry-1.docker.io/v2/library/python/manifests/3.7.9-alpine3.11: proxyconnect tcp: EOF"
I've tried to figure out how to configure Docker's proxy using many links, but they keep referring to a file "/etc/sysconfig/docker" which I cannot find anywhere under Windows 10, or maybe I'm not looking in the right place.
Also I'm not sure this is only a proxy issue since I've seen people running into this issue without using a proxy.
I would highly appreciate anyone's help. Working at this corporation already made me spend >10 hours doing something that took me 10 minutes to do on my Mac... :(
Thank you
You're talking about the most basic of Docker functionality: normally, it has to connect to Docker Hub on the internet to get base images. If you can't make this work with your proxy, you can either
preload your local cache with the necessary images (see the sketch after this list)
set up a Docker registry inside your firewall that contains all the images you'll need
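A hedged sketch of the preloading option: on a machine that can reach Docker Hub, save the base image from your Dockerfile to a tar file, carry it across, and load it on the restricted machine:
docker save python:3.7.9-alpine3.11 -o python-alpine.tar
docker load -i python-alpine.tar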
Obviously, the easiest thing, probably by far, would be to figure out how to get Docker to connect to Docker Hub through your proxy.
In terms of getting Docker on Windows to work with your proxy, might this help? - https://learn.microsoft.com/en-us/virtualization/windowscontainers/manage-docker/configure-docker-daemon
Here's what it says about configuring a proxy:
To set proxy information for docker search and docker pull, create a Windows environment variable with the name HTTP_PROXY or HTTPS_PROXY, and a value of the proxy information. This can be completed with PowerShell using a command similar to this:
In PowerShell:
[Environment]::SetEnvironmentVariable("HTTP_PROXY", "http://username:password@proxy:port/", [EnvironmentVariableTarget]::Machine)
Once the variable has been set, restart the Docker service.
In PowerShell:
Restart-Service docker
For more information, see Windows Configuration File on Docker.com.
I've also seen it mentioned that Docker for Windows allows you to set proxy parameters in its configuration GUI interface.
There is no need to pass proxy information in the Dockerfile.
There are predefined ARGs which can be used for this purpose.
HTTP_PROXY
HTTPS_PROXY
FTP_PROXY
You can pass the details when building the image
https://docs.docker.com/engine/reference/builder/#predefined-args
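A hedged example of passing those build args, reusing the redacted proxy address from your Dockerfile (the image tag myapp is a placeholder):
docker build --build-arg HTTP_PROXY=http://XXXXXXX:8080 --build-arg HTTPS_PROXY=http://XXXXXXX:8080 -t myapp .
With the predefined args supplied at build time, the RUN pip install line no longer needs its explicit --proxy flag.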
I do not see any runtime dependency of your container on the internet, so running the container will work without issue.
I currently have an instance of scrapyd up and running locally on my machine. This instance of scrapyd needs to be available to other PC's on my employers network. I've read about scrapy-cloud (https://doc.scrapinghub.com/scrapy-cloud.html) and other cloud based services. However I'd much rather host scrapyd on our network, since the spiders I've built pull data from csv files stored on our servers.
I've searched through the scrapyd documentation (https://scrapyd.readthedocs.io/en/stable/) and understand how to install and run scrapyd. I am also comfortable with uploading scrapy projects to scrapyd and running specific spiders.
What steps do I need to take in order to make my scrapyd instance available to other machines on our network? All of our PCs and servers run a Windows OS.
The answer doesn't need to be a specific step by step guide. I'm just looking for someone to point me in the right direction, because I am unsure how to proceed.
If you are on a LAN in the same IP range, you can follow the manual and check your IP:
ifconfig in Linux
ipconfig in Windows
Then run the commands from the manual, for example:
curl http://localhost:6800/addversion.json -F project=myproject -F version=r23 -F egg=@myproject.egg
and change localhost to your IP address.
For example, if your IP is 192.168.1.10, you will run, on the other PC:
curl http://192.168.1.10:6800/addversion.json -F project=myproject -F version=r23 -F egg=@myproject.egg
You need to open the port if you use a firewall, and if you don't have cURL on Windows you can download and install it:
How do I install/set up and use cURL on Windows?
For more information about the API, check the manual.
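If the other PCs still cannot reach it, also check the bind_address setting in your scrapyd configuration: it must listen on all interfaces, not only 127.0.0.1. And as a hedged illustration, you can talk to the same API from another PC in Python instead of cURL (192.168.1.10 is the placeholder IP from above; listprojects.json is a standard scrapyd endpoint):
import requests

# Ask the scrapyd instance on the LAN which projects it knows about
resp = requests.get('http://192.168.1.10:6800/listprojects.json')
print(resp.json())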
I'm starting the tango-with-django tutorial.
And I'm trying to access the created website from another computer. Both computers are running Windows OS, and this is not working.
$ python manage.py runserver <your_machines_ip_address>:5555
I'm using the IPv4 Adress that I get when I type:
$ ipconfig
What am I doing wrong or what is missing?
Download ngrok from here: https://ngrok.com/ (this will allow you to serve your web app to anyone on the Internet)
Start your Django project normally or provide any port number.
python manage.py runserver
If you are running Windows, open a command prompt and browse to the location where the ngrok binary is located.
If you are running GNU/Linux / OSX, just open a terminal.
Then run the following command.
ngrok 8000
Replace 8000 with whichever port the Django project is running on.
ngrok will give you a public hostname like http://abc.ngrok.com
Anyone you give this address to will be able to view / interact with your Django application anywhere on the Internet.
Update: Newer versions of ngrok need to be run like this: ngrok http 8000
Try python manage.py runserver 0.0.0.0:5555, and access it on the other machine using http://<your-ip-address>:5555. That should work.
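One hedged caveat: depending on your Django version and settings, the other machine's request may be rejected with a DisallowedHost error; if so, add your machine's IP to ALLOWED_HOSTS in settings.py (the IP below is a placeholder):
# settings.py
ALLOWED_HOSTS = ['192.168.1.10', 'localhost', '127.0.0.1']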