Run a host machine program from a Python Docker container - python

Actually, I have a little Python server (using FastAPI, but that's not important) that starts a program like this:
@app.put("/start_simulation/")
async def start_simulation():
    try:
        process = subprocess.Popen("Aimsun_Next.exe")
    except Exception as e:
        raise HTTPException(status_code=500, detail="Simulation process failed")
I put my little server in a Python Docker image like this:
FROM python:3.9
WORKDIR /code
COPY ./requirements.txt /code/requirements.txt
COPY ./app /code/app
RUN pip install --no-cache-dir --upgrade -r ./requirements.txt
WORKDIR /code/app
CMD ["uvicorn", "server_main:app", "--reload", "--proxy-headers", "--host", "0.0.0.0", "--port", "8000"]
and it seems to work fine!
But when the "start_simulation" request is called, it doesn't work, because we are now inside a Docker container.
PS: my "put" handler doesn't look great, but I shortened it to keep the example simple.
I would like the server in my container to have access to my host machine's filesystem, so it can run the "Aimsun_Next.exe" command. Is that possible?

In your Dockerfile you should indicate that you want to expose the FastAPI port, with something like EXPOSE 8000. See the documentation.
When you start the container you have to publish the port to the host: docker run -p 8000:8000.
It's possible to access a file on your local filesystem from your container by "mounting" a volume. See the documentation; a minimal example follows.
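For example, a minimal sketch of a volume mount (the host path, container path, and image name here are all hypothetical):
docker run -p 8000:8000 -v /host/simulations:/code/simulations my-fastapi-image
This makes the host directory /host/simulations visible inside the container at /code/simulations.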
But I'm not sure you can launch the executable that way, and I wouldn't recommend it at all.
The idea behind containerization is that a container can run everywhere, and it is secure precisely because it is isolated from the underlying OS.
Your container is an isolated environment. Furthermore, the python:3.9 image is based on a specific Linux distribution and makes no compatibility guarantees with whatever OS runs on the host; in particular, a Windows executable like Aimsun_Next.exe cannot simply be launched from inside a Linux container.

Related

Docker run does not produce any endpoint

I am trying to dockerize this repo. After building it like so:
docker build -t layoutlm-v2 .
I try to run it like so:
docker run -d -p 5001:5000 layoutlm-v2
It downloads the necessary libraries and packages, and then... nothing. No errors, no endpoints generated, just radio silence.
What's wrong? And how do I fix it?
You appear to be expecting your application to offer a service on port 5000, but that doesn't seem to be how your code behaves.
Looking at your code, you seem to be launching a service using gradio. According to the quickstart, calling gr.Interface(...).launch() will launch a service on localhost:7860, and indeed, if you inspect a container booted from your image, we see:
root@74cf8b2463ab:/app# ss -tln
State    Recv-Q   Send-Q     Local Address:Port      Peer Address:Port   Process
LISTEN   0        2048           127.0.0.1:7860            0.0.0.0:*
There's no way to access a service listening on localhost from outside the container, so we need to figure out how to fix that.
Looking at these docs, it looks like you can control the listen address using the server_name parameter:
server_name
to make app accessible on local network, set this to "0.0.0.0". Can be set by environment variable GRADIO_SERVER_NAME. If None, will use "127.0.0.1".
So if we run your image like this:
docker run -p 7860:7860 -e GRADIO_SERVER_NAME=0.0.0.0 layoutlm-v2
Then we should be able to access the interface on the host at http://localhost:7860/... and indeed, that seems to work.
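Alternatively, you could set the listen address in code rather than via the environment; launch() accepts the same server_name parameter. A minimal sketch (the fn/inputs/outputs here are placeholders, not the repo's real interface):
import gradio as gr

def predict(image):
    # placeholder for the repo's real inference function
    return {}

demo = gr.Interface(fn=predict, inputs="image", outputs="json")
demo.launch(server_name="0.0.0.0")  # listen on all interfaces, not just 127.0.0.1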
Unrelated to your question:
You're setting up a virtual environment in your Dockerfile, but you're not using it, primarily because of a typo here:
ENV PATH="VIRTUAL_ENV/bin:$PATH"
You're missing a $ on $VIRTUAL_ENV.
You could optimize the order of operations in your Dockerfile. Right now, making a simple change to your Dockerfile (e.g., editing the CMD setting) will cause much of your image to be rebuilt. You could avoid that by restructuring the Dockerfile like this:
FROM python:3.9
# Install dependencies
RUN apt-get update && apt-get install -y tesseract-ocr
RUN pip install virtualenv && virtualenv venv -p python3
ENV VIRTUAL_ENV=/venv
ENV PATH="$VIRTUAL_ENV/bin:$PATH"
WORKDIR /app
COPY requirements.txt ./
RUN pip install -r requirements.txt
RUN git clone https://github.com/facebookresearch/detectron2.git
RUN python -m pip install -e detectron2
COPY . /app
# Run the application:
CMD ["python", "-u", "app.py"]

Docker containers crashed: /bin/sh: 1: [uvicorn,: not found

I am new to Docker and trying to Dockerize my FastAPI application.
First I created a Dockerfile:
FROM python:3.9.9
WORKDIR /usr/src/app
COPY requirements.txt ./
RUN pip install --no-cache-dir -r requirements.txt
COPY . .
CMD ["uvicorn", "app.main:app", "--host", "0.0.0.0", "--port", "8000"]
Then ran the following command:
docker build -t fastapi .
The command ran successfully.
After that I created the following docker-compose.yml:
version: "3"
services:
  api:
    build: .
    ports:
      - 8000:8000
    env_file:
      ./.env
Then ran the following command:
docker-compose up -d
Ran successfully:
Network fastapi_default  Created  0.7s
Container fastapi_api_1  Started
Then, to check if it's running properly, I ran the following command:
docker ps -a
And it showed that the container had exited a few seconds after it was created.
Then I ran this command:
docker logs fastapi_api_1
And it says:
/bin/sh: 1: [uvicorn,: not found
I'm not sure what the reason is. I tried some solutions that I found online, but nothing worked out. I do have uvicorn in my requirements.txt file.
Help will be appreciated. Please let me know if additional information is required.
Note: you don't need to run docker build -t fastapi . manually. docker-compose will do it for you (because you set build: .). But you must run the up command with the --build parameter (docker-compose up --build) to force a rebuild of the image even if it already exists.
And about your problem:
Here is a very good article (and one more) about RUN, ENTRYPOINT and CMD
Here are the three forms of CMD:
CMD ["executable","param1","param2"] (exec form, preferred)
CMD ["param1","param2"] (sets additional default parameters for ENTRYPOINT in exec form)
CMD command param1 param2 (shell form)
According to the error, it looks like Docker is interpreting your CMD as shell form, or as additional parameters for a default ENTRYPOINT.
I'm actually still not sure why it happens, but changing CMD to
CMD uvicorn app.main:app --host 0.0.0.0 --port 8000
or
ENTRYPOINT ["uvicorn", "app.main:app", "--host", "0.0.0.0", "--port", "8000"]
should solve your problem.
It may also be better to use the full path to the uvicorn executable (/usr/bin/uvicorn, or wherever it is installed by default). That is just my opinion, but it may be the reason why CMD is being interpreted as parameters instead of a command.
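If you want to check where (and whether) uvicorn actually landed in the image, one way is a throwaway container (assuming your image is tagged fastapi):
docker run --rm fastapi which uvicorn
In the Debian-based python:3.9.9 image, pip typically installs console scripts to /usr/local/bin.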
PS: in addition, here is a note from the Docker docs:
Note
The exec form is parsed as a JSON array, which means that you must use double-quotes (") around words, not single-quotes (').
So the exec form syntax must meet the conditions of JSON syntax.
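A common way to hit exactly this error is an exec form that isn't valid JSON (single quotes, smart quotes, a stray character): Docker then silently falls back to shell form, and /bin/sh tries to execute the literal word [uvicorn, as a command. A hypothetical illustration:
# Single quotes are not valid JSON, so this runs via /bin/sh and fails with "[uvicorn,: not found"
CMD ['uvicorn', 'app.main:app', '--host', '0.0.0.0', '--port', '8000']
# Valid exec form
CMD ["uvicorn", "app.main:app", "--host", "0.0.0.0", "--port", "8000"]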
So, basically, there was something wrong with the Docker images. I had created multiple images. I removed all of them and ran the same commands again, and it worked. I don't know the exact reason, but it's working now.
What I think was happening is that instead of deleting the old images and creating new ones, I was just doing
docker-compose down
and then
docker-compose up -t
I think that command was not taking my changes into consideration.
Then I ran:
docker-compose up --build
and I think that created a new image, and it worked.
Then I noticed that there were at least 10 images created. I deleted all of them and ran the same commands:
docker build .
docker-compose up -t
and it worked fine again.
So basically, instead of creating a new image, it was using the old one, which had not been created correctly:
docker-compose up --build
In short, you should use docker-compose up --build whenever you make changes to your Dockerfile or docker-compose.yml, instead of docker-compose up -t.
It might be confusing, but I am also very new to Docker.
Thanks for the help, everyone!
I've had the same issue with a Dockerfile in my docker-compose environment containing:
COPY ./requirements.txt /app/requirements.txt
RUN pip install -r /app/requirements.txt
RUN pip install uvicorn==0.20.0
CMD ["uvicorn", "--host", "0.0.0.0", "--port", "6000", "app:app"]
So I don't need an extra command: line in my docker-compose.yml.
It turned out that if you also install uvicorn via your requirements.txt (as I like to do for testing purposes), then it is already installed by the first pip step, RUN pip install uvicorn==0.20.0 is skipped, and there is no /usr/bin/uvicorn 'executable' available, just the package somewhere in site-packages, so CMD will fail.
So, if you have uvicorn in your requirements.txt as well as in your Dockerfile, you can either force the reinstallation with
RUN pip install --ignore-installed uvicorn==0.20.0
in the Dockerfile, or set the PATH so uvicorn is found somewhere in the guts of Python, or (what I find is the better solution, to keep the image size small) remove uvicorn from requirements.txt.
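One more option, not mentioned in the answer above: invoke uvicorn through the interpreter, which sidesteps the question of where the console script lives, because Python finds the module in site-packages itself:
CMD ["python", "-m", "uvicorn", "--host", "0.0.0.0", "--port", "6000", "app:app"]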

What should I put for Docker CMD and ENTRYPOINT for a Flask app running "python myapp.py images/*"

I am trying to run a Flask app using Docker.
Normally, to execute the Flask app, I run this inside of my Terminal:
python myapp.py images/*
I am unsure how to convert that to Docker CMD syntax (or whether I need to edit ENTRYPOINT).
Here is my Dockerfile:
RUN apt-get update -y
RUN apt-get install -y python-pip python-dev build-essential hdf5-tools
COPY . ~/myapp/
WORKDIR ~/myapp/
RUN pip install -r requirements.txt
ENTRYPOINT ["python"]
CMD ["myapp.py"]
Inside of requirements.txt:
flask
numpy
h5py
tensorflow
keras
When I run the docker image:
person@person:~/Projects/$ docker run -d -p 5001:5000 myapp
19645b69b68284255940467ffe81adf0e32a8027f3a8d882b7c024a10e60de46
docker ps:
Up 24 seconds 0.0.0.0:5001->5000/tcp hardcore_edison
When I go to localhost:5001, I get no response.
Is it an issue with my CMD parameter?
EDIT:
New Dockerfile:
RUN apt-get update -y
RUN apt-get install -y python-pip python-dev build-essential hdf5-tools
COPY . ~/myapp/
WORKDIR ~/myapp/
EXPOSE 5000
RUN pip install -r requirements.txt
CMD ["python myapp.py images/*.jpg "]
With this new configuration, when I run:
docker run -d -p 5001:5000 myapp
I get:
docker: Error response from daemon: OCI runtime create failed: container_linux.go:348: starting container process caused "exec: \"python myapp.py images/*.jpg \": stat python myapp.py images/*.jpg : no such file or directory": unknown.
When I run:
docker run -d -p 5001:5000 myapp python myapp.py images/*.jpg
I get the Docker image to run, but now when I go to localhost:5001, it complains that the connection was reset.
I'm glad you've already solved this issue. I'm putting this answer up just for those who still have the same confusion about the ENTRYPOINT and CMD instructions.
In a Dockerfile, ENTRYPOINT and CMD are two similar instructions, but there is still a strong difference between them. The most important one (at least to me) is that CMD can be overridden but ENTRYPOINT cannot.
To explain this, consider the command below:
docker run -tid --name=container_name image_name [command]
As we can see, command is optional, and if present it overrides the CMD defined in the Dockerfile.
Let's get back to your issue. You have two ways to achieve your purpose:
ENTRYPOINT ["python"] and CMD ["/path/to/myapp.py", "/path/to/images/*.jpg"] (note, though, that in exec form the shell glob *.jpg will not be expanded).
CMD python /path/to/myapp.py /path/to/images/*.jpg. This is mentioned by @David Maze above.
To understand the first one, you may think of CMD as default arguments for ENTRYPOINT.
A simple example below.
Dockerfile-->
FROM ubuntu:18.04
ENTRYPOINT ["cat"]
CMD ["/etc/hosts"]
Build an image named test-cmd-show and start a container from it.
docker run test-cmd-show
This would show the content of the /etc/hosts file. Going on...
docker run test-cmd-show /etc/resolv.conf
And this would show us the content of the /etc/resolv.conf file. Going on...
docker run test-cmd-show --help
This would show the help information for the cat command.
Fantastic, right?
From here, you can explore this behaviour further.
A related question worth reading: What's the difference between CMD and ENTRYPOINT?
The important thing is that you need a shell to expand your command line, so I'd write:
CMD python myapp.py images/*
When you just write CMD like this (without the not-really-JSON brackets and quotes), Docker will implicitly feed the command line through a shell for you.
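Concretely, that shell form is equivalent to Docker running something like:
/bin/sh -c 'python myapp.py images/*'
and it is that sh process that expands images/* into the actual file names.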
(You also might consider changing your application to support taking a directory name as configuration in some form and "baking it in" to your application, if these images will be in a fixed place in the container filesystem.)
I would only set ENTRYPOINT when (a) you are setting it to a wrapper shell script that does some first-time setup and then exec "$@"; or (b) when you have a FROM scratch image with a static binary and you literally cannot do anything with the container besides run the one binary in it.
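A minimal sketch of pattern (a), with a hypothetical entrypoint.sh:
#!/bin/sh
# one-time setup goes here (waiting for a database, writing a config file, ...)
exec "$@"   # hand off to whatever CMD (or the docker run command line) specified
and in the Dockerfile:
COPY entrypoint.sh /entrypoint.sh
RUN chmod +x /entrypoint.sh
ENTRYPOINT ["/entrypoint.sh"]
CMD ["python", "myapp.py"]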
One issue I found was that the app wasn't reachable from outside the container. I added this to app.run:
host='0.0.0.0'
According to this:
Deploying a minimal flask app in docker - server connection issues
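In context, that looks like this (a sketch, assuming the usual Flask entry point; port 5000 is Flask's default and matches the -p 5001:5000 mapping):
if __name__ == '__main__':
    app.run(host='0.0.0.0', port=5000)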
Next, Docker fails when you put a directory glob in the exec-form CMD parameters, since there is no shell to expand it.
So I removed ENTRYPOINT and CMD and passed the command to docker run manually:
docker run -d -p 5001:5000 myapp python myapp.py images/*.jpg

Can't manage to build and run a bottle.py app

I've been trying to set up a container to run an app with the Bottle framework. I've read everything I could find about it, but even so I can't get it working. Here's what I did:
Dockerfile:
# Use an official Python runtime as a parent image
FROM python:2.7
# Set the working directory to /app
WORKDIR /app
# Copy the current directory contents into the container at /app
ADD . /app
# Install any needed packages specified in requirements.txt
RUN pip install -r requirements.txt
# Make port 8080 available to the world outside this container
EXPOSE 8080
# Define environment variable
ENV NAME World
# Run app.py when the container launches
CMD ["python", "app.py"]
app.py:
import os
from bottle import route, run, template

@route('/<name>')
def index(name):
    return template('<b>Hello {{name}}</b>!', name=name)

run(host='localhost', port=8080)
requirements.txt
bottle
By running the command docker build -t testapp . I create the image.
Then by running the command docker run -p 8080:8080 testapp I get this terminal output:
Bottle v0.12.13 server starting up (using WSGIRefServer())...
Listening on http://localhost:8080/
Hit Ctrl-C to quit.
But when I go to localhost:8080/testing, localhost refuses the connection.
Can anyone point me in the right direction?
The problem is this line:
run(host='localhost', port=8080)
It exposes the app on "localhost" inside the container where your code runs. You could use the Python library netifaces to get the container's external interface if you want to, but I suggest you set 0.0.0.0 as the host instead:
run(host='0.0.0.0', port=8080)
Then you will be able to access http://localhost:8080/ (assuming your Docker engine is at localhost).
EDIT: mind that your previous container might still be listening on 8080/tcp. Remove or stop the previous container first.
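For example (the container ID here is whatever docker ps reports):
docker ps                      # find the old container still bound to 8080
docker rm -f <container_id>    # stop and remove it
docker run -p 8080:8080 testapp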

Connection Refused Docker Run

I'm getting a connection refused error after building my Docker image and running docker run -t imageName.
Inside the container, my Python script makes web requests (an external API call) and then communicates over localhost:5000 with a logstash socket.
My Dockerfile is really simple:
FROM ubuntu:14.04
RUN apt-get update -y
RUN apt-get install -y nginx git python-setuptools python-dev
RUN easy_install pip
#Install app dependencies
RUN pip install requests configparser
EXPOSE 80
EXPOSE 5000
#Add project directory
ADD . /usr/local/scripts/
#Set default working directory
WORKDIR /usr/local/scripts
CMD ["python", "logs.py"]
However, I get an [ERROR] Connection refused message when I try to run this. It's not immediately obvious to me what I'm doing wrong here; I believe I'm opening 80 and 5000 to the outside world? Is this incorrect? Thanks.
Regarding EXPOSE:
Each container you run has its own network interface. EXPOSE 5000 tells Docker to map port 5000 of the container's network interface to a random port on your host machine (see it with docker ps), as long as you tell Docker to do so by passing -P to docker run.
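For example, with the Dockerfile above (imageName as in the question):
docker run -d -P imageName
docker ps   # shows a mapping like 0.0.0.0:32768->5000/tcp
or choose the host port yourself with -p 5000:5000.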
Regarding logstash:
If your logstash is installed on your host, or in another container, it means that logstash is not on the container's "localhost" (remember that each container has its own network interface, and each one has its own localhost). So you need to point to logstash properly.
How?
Method 1:
Don't give the container its own interface, so that it shares the same localhost as your machine:
docker run --net=host ...
Method 2:
If you are using docker-compose, use Docker network linking, i.e.:
services:
  py_app:
    ...
    links:
      - logstash
  logstash:
    image: .../logstash..
Then point at it like this: logstash:5000 (Docker will resolve that name to the internal IP of the logstash container).
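With that in place, the Python script inside py_app can reach logstash by service name. A sketch of what the socket side might look like (logs.py's real logic is not shown in the question):
import socket

# 'logstash' resolves to the logstash container's IP on the compose network
sock = socket.create_connection(("logstash", 5000))
sock.sendall(b'{"message": "hello from py_app"}\n')
sock.close()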
Method 3:
If logstash listens on localhost:5000 on your host, you can point to it as 172.17.0.1:5000 from inside your container (172.17.0.1 is the host's fixed IP on the default bridge network, though this option is arguably less elegant).
