Docker containers crashed: /bin/sh: 1: [uvicorn,: not found - python

I am new to Docker and trying to Dockerize my FastAPI application.
First I created a Dockerfile:
FROM python:3.9.9
WORKDIR /usr/src/app
COPY requirements.txt ./
RUN pip install --no-cache-dir -r requirements.txt
COPY . .
CMD ["uvicorn", "app.main:app", "--host", "0.0.0.0", "--port", "8000"]
Then ran the following command:
docker build -t fastapi .
The command ran successfully.
After that I created the following docker-compose.yml:
version: "3"
services:
api:
build: .
ports:
- 8000:8000
env_file:
./.env
Then ran the following command:
docker-compose up -d
Ran successfully:
Network fastapi_default Created 0.7s
- Container fastapi_api_1 Started
Then, to check if it's running properly, I ran the following command:
docker ps -a
And it showed that the container exited a few seconds after it was created.
Then I ran this command:
docker logs fastapi_api_1
And it says:
/bin/sh: 1: [uvicorn,: not found
I'm not sure what the reason is. I tried some solutions that I found online but nothing worked out. I do have uvicorn in my requirements.txt file.
Help will be appreciated. Please let me know if additional information is required.

Note: You don't need to run docker build -t fastapi . manually. docker-compose will do it for you (because you set build: .). But you must run the up command with the --build parameter (docker-compose up --build) to force a rebuild of the image even if it already exists.
And about your problem:
Here is a very good article (and one more) about RUN, ENTRYPOINT and CMD
Here are the three forms of CMD:
CMD ["executable","param1","param2"] (exec form, preferred)
CMD ["param1","param2"] (sets additional default parameters for ENTRYPOINT in exec form)
CMD command param1 param2 (shell form)
According to the error, it looks like Docker is interpreting your CMD as shell form, or as additional parameters for a default ENTRYPOINT.
I'm actually still not sure why it happens, but changing CMD to
CMD uvicorn app.main:app --host 0.0.0.0 --port 8000
or
ENTRYPOINT ["uvicorn", "app.main:app", "--host", "0.0.0.0", "--port", "8000"]
should solve your problem
It may also be better to use the full path to the uvicorn executable (/usr/bin/uvicorn, or wherever it is installed by default). This is just my opinion, but that may be the reason why CMD is interpreted as parameters instead of a command.
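If you want to check where pip actually placed the executable, one quick way (using the fastapi image tag from the question, and overriding the image's broken CMD) is:
docker run --rm fastapi which uvicorn
# on the official python images this typically prints /usr/local/bin/uvicorn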
P.S. In addition, here is a note from the Docker docs:
Note
The exec form is parsed as a JSON array, which means that you must use double-quotes (“) around words not single-quotes (‘).
So exec form syntax must meet the conditions of JSON syntax.
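For illustration, here is a minimal sketch of how this failure mode can be produced; I can't tell from the posted Dockerfile whether this is what happened (an editor may have swapped or stripped the quote characters, for example):
# Not a valid JSON array (single quotes), so Docker falls back to shell form:
# /bin/sh runs the whole line and treats the first token ([uvicorn, or similar) as a command name,
# which produces an error like "/bin/sh: 1: [uvicorn,: not found"
CMD ['uvicorn', 'app.main:app', '--host', '0.0.0.0', '--port', '8000']
# Valid JSON array (double quotes), exec form, runs uvicorn directly:
CMD ["uvicorn", "app.main:app", "--host", "0.0.0.0", "--port", "8000"]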

So, basically there was something wrong with Docker. I had created multiple images. I removed all of them and ran the same commands again and it worked. I don't know the exact reason, but it's working now.
What I think was happening is that instead of deleting the old images and creating new ones, I was just doing
docker-compose down
and then
docker-compose up -d
I think that command was not taking the changes into consideration.
Then I ran:
docker-compose up --build
and I think that created a new image and it worked.
Then I noticed that there were at least 10 images created. I deleted all of them and ran the same commands:
docker build .
docker-compose up -d
and it worked fine again.
So basically, instead of creating a new image, it was reusing the old one, which had not been created correctly:
docker-compose up --build
In short, you should use docker-compose up --build whenever you make changes to your Dockerfile or docker-compose.yml, instead of plain docker-compose up -d.
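As a rough sketch, the rebuild workflow described above looks like this (docker image prune is optional and only removes dangling images):
docker-compose down               # stop and remove the old containers
docker-compose up -d --build      # rebuild the image before starting, instead of reusing the old one
docker image prune -f             # optionally clean up dangling images left behind by old builds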
It might be confusing but I am also very new to Docker.
Thanks for the help everyone!

I've had the same issue with a Dockerfile in my docker-compose environment containing
COPY ./requirements.txt /app/requirements.txt
RUN pip install -r /app/requirements.txt
RUN pip install uvicorn==0.20.0
CMD ["uvicorn", "--host", "0.0.0.0", "--port", "6000", "app:app"]
so I don't need an extra command: line in my docker-compose.yml.
It turned out that if you install uvicorn via your requirements.txt, as I like to do for testing purposes, then it gets installed from there first, and RUN pip install uvicorn==0.20.0 is skipped because the package is already present. That means there is no /usr/bin/uvicorn 'executable' available, only the package somewhere in site-packages, and CMD will fail.
So, if you use uvicorn in your requirements.txt and in the Dockerfile as well, you can force the reinstallation with
RUN pip install --ignore-installed uvicorn==0.20.0
in the Dockerfile, or set PATH so that it finds uvicorn somewhere in the guts of Python, or - what I find is the better solution to keep the image size small - remove uvicorn from requirements.txt.
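If PATH really is the culprit, another option that I believe works (uvicorn can also be started as a module, so the location of the console script stops mattering; the host, port and app module below are copied from the CMD above) is to invoke it through the interpreter:
CMD ["python", "-m", "uvicorn", "--host", "0.0.0.0", "--port", "6000", "app:app"]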


How to pass two files from different directory to docker run?

I have a Docker container whose Dockerfile looks like this:
FROM python:3.8
ADD lodc.py .
RUN pip install requests python-dotenv
CMD [ "python", "./lodc.py", "file1.json", "file2.json" ]
It needs to take an env file and then two different files as the arguments that the script lodc.py needs to run. I have tried mounting them as described here: Passing file as argument to Docker container, but I cannot get it to work. It is important to keep the two files isolated, because those files will be changing frequently, so it doesn't make sense to put them into the container. Here is what I've been running:
docker run --env-file /Users/Documents/github/datasets/tmp/.env -v /Users/Documents/datasets/files:/Users/Documents/github/datasets datasets datasets/file1.json datasets/file2.json
Basically I would like to just build and run the Docker container and be able to manipulate the two argument files in another directory whenever I want, without issue.
The env file is being passed correctly; it is failing because it cannot find the file.json directory. I am new to Docker and any help would be greatly appreciated.
Thanks.
I think you just got the run command wrong. Try this one:
docker run \
--env-file /Users/Documents/github/datasets/tmp/.env \
-v /Users/Documents/datasets/files/file1.json:/file1.json \
-v /Users/Documents/github/datasets/files/file2.json:/file2.json \
<your-built-docker-image-name>
I'm not sure about your paths, but you need to run it with two different volumes (-v argument).
If the approach you are trying does not work, you can also try the option below (keep in mind that you will have to rebuild the Docker image every time the input files change):
Based on your comments, I created the Dockerfile below, which copies the two files into the image at build time:
FROM python:3.8
ADD lodc.py .
COPY dir1/file1.json .
COPY dir2/file2.json .
CMD [ "python", "./lodc.py", "file1.json", "file2.json"]
Please let me know if you need more help.
To give a complete answer, you can do it three ways:
In the Dockerfile you can use the COPY command - this is nice, but it kind of hides what files are being used if you move to an orchestration tool like Kubernetes or Docker Compose down the line:
FROM python:3.8
ADD lodc.py .
COPY dir1/file1.json .
COPY dir2/file2.json .
CMD [ "python", "./lodc.py", "file1.json", "file2.json"]
You can reference a volume on the command line using the -v argument; this is less preferable because it doesn't codify the volumes and it depends on the directory the command is executed in:
docker run \
--env-file /Users/Documents/github/datasets/tmp/.env \
-v /Users/Documents/datasets/files/file1.json:/file1.json \
-v /Users/Documents/github/datasets/files/file2.json:/file2.json \
<your-built-docker-image-name>
You can reference a volume in a docker-compose file; this option is preferred because the volumes are referenced in code, it is explicit that they are included, and it isn't dependent on where the command is executed:
version: '3.9'
services:
  base:
    build:
      context: .
      dockerfile: <relative-path-to-dockerfile-from-docker-compose>
    image: <desired-image-name>
    env_file:
      - /Users/Documents/github/datasets/tmp/.env
    volumes:
      - /Users/Documents/datasets/files/file1.json:/file1.json
      - /Users/Documents/github/datasets/files/file2.json:/file2.json
The last option will reduce headaches in the long term as you scale, because it is the most explicit and stable. Good luck!
You just have to make clear where you are running your python (i.e., where "you" are inside the container).
In other words, you have to define (somehow) your WORKDIR. By default, the workdir is at / (the container's root directory).
Let's suppose, for simplicity, that you define your workdir to be inside your "datasets/files" directory. (Also, I'll use a shorter path inside the container...because we can ;)
So, I'd suggest the following for your Dockerfile and docker-run:
FROM python:3.8
ADD lodc.py .
RUN pip install requests python-dotenv
CMD [ "python", "/lodc.py", "file1.json", "file2.json" ]
build:
$ docker build -t datasets .
run:
$ docker run --env-file /Users/Documents/github/datasets/tmp/.env \
-v /Users/Documents/datasets/files:/mnt/datasets \
-w /mnt/datasets \
datasets
Or, you could free your Dockerfile a little bit, and push it to your run command:
FROM python:3.8
ADD lodc.py .
RUN pip install requests python-dotenv
ENTRYPOINT [ "python", "/lodc.py" ]
build:
$ docker build -t datasets .
run:
$ docker run --env-file /Users/Documents/github/datasets/tmp/.env \
-v /Users/Documents/datasets/files:/mnt/datasets \
datasets /mnt/datasets/file1.json /mnt/datasets/file2.json
I didn't test it, but it looks (syntactically) ok.

What should I put for Docker CMD and ENTRYPOINT for Flask app running "python myapp.py images/*"

I am trying to run a Flask app using Docker.
Normally, to execute the Flask app, I run this inside of my Terminal:
python myapp.py images/*
I am unsure of how to convert that to Docker CMD syntax (or if I need to edit ENTRYPOINT).
Here is my Dockerfile:
RUN apt-get update -y
RUN apt-get install -y python-pip python-dev build-essential hdf5-tools
COPY . ~/myapp/
WORKDIR ~/myapp/
RUN pip install -r requirements.txt
ENTRYPOINT ["python"]
CMD ["myapp.py"]
Inside of requirements.txt:
flask
numpy
h5py
tensorflow
keras
When I run the docker image:
person#person:~/Projects/$ docker run -d -p 5001:5000 myapp
19645b69b68284255940467ffe81adf0e32a8027f3a8d882b7c024a10e60de46
docker ps:
Up 24 seconds 0.0.0.0:5001->5000/tcp hardcore_edison
When I go to localhost:5001 I get no response.
Is it an issue with my CMD parameter?
EDIT:
New Dockerfile:
RUN apt-get update -y
RUN apt-get install -y python-pip python-dev build-essential hdf5-tools
COPY . ~/myapp/
WORKDIR ~/myapp/
EXPOSE 5000
RUN pip install -r requirements.txt
CMD ["python myapp.py images/*.jpg "]
With this new configuration, when I run:
docker run -d -p 5001:5000 myapp
I get:
docker: Error response from daemon: OCI runtime create failed: container_linux.go:348: starting container process caused "exec: \"python myapp.py images/*.jpg \": stat python myapp.py images/*.jpg : no such file or directory": unknown.
When I run:
docker run -d -p 5001:5000 myapp python myapp.py images/*.jpg
I get the Docker image to run, but now when I go to localhost:5001, it complains that the connection was reset.
I'm glad you've already solved this issue. I put up this answer just for those who still have the same confusion you did about the ENTRYPOINT and CMD instructions.
In a Dockerfile, ENTRYPOINT and CMD are two similar instructions, but there is still a strong difference between them. The most important one (at least it seems so to me) is that CMD can be overridden by the arguments to docker run, but ENTRYPOINT cannot.
To explain this, I may offer you the command below:
docker run -tid --name=container_name image_name [command]
As we can see, command is optional, and if it is present it overrides the CMD defined in the Dockerfile.
Let's get back to your issue. You have two ways to achieve your purpose:
ENTRYPOINT ["python"] and CMD ["/path/to/myapp.py", "/path/to/images/*.jpg"].
CMD python /path/to/myapp.py /path/to/images/*.jpg. This is mentioned in David Maze's answer.
To understand the first one, you can think of CMD as the default arguments for ENTRYPOINT.
A simple example below.
Dockerfile-->
FROM ubuntu:18.04
ENTRYPOINT ["cat"]
CMD ["/etc/hosts"]
Build an image named test-cmd-show and start a container from it:
docker run test-cmd-show
This will show the content of the /etc/hosts file. Going on...
docker run test-cmd-show /etc/resolv.conf
And this will show us the content of the /etc/resolv.conf file. Going on...
docker run test-cmd-show --help
This will show the help information for the cat command.
Fantastic, right?
We can explore this behaviour further in the same way.
Here is a related question: What's the difference between CMD and ENTRYPOINT?
The important thing is that you need a shell to expand your command line, so I’d write
CMD python myapp.py images/*
When you just write CMD like this (without the not-really-JSON brackets and quotes) Docker will implicitly feed the command line through a shell for you.
(You also might consider changing your application to support taking a directory name as configuration in some form and “baking it in” to your application, if these images will be in a fixed place in the container filesystem.)
I would only set ENTRYPOINT when (a) you are setting it to a wrapper shell script that does some first-time setup and then runs exec "$@"; or (b) when you have a FROM scratch image with a static binary and you literally cannot do anything with the container besides run the one binary in it.
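For case (a), here is a minimal sketch of such a wrapper; the file name entrypoint.sh and the setup steps are made up for illustration:
#!/bin/sh
# entrypoint.sh: do any first-time setup, then hand control to whatever CMD
# (or the docker run arguments) supplied, keeping that process as PID 1 via exec
set -e
# ... e.g. wait for a database, run migrations, render a config file ...
exec "$@"
And in the Dockerfile:
COPY entrypoint.sh /entrypoint.sh
RUN chmod +x /entrypoint.sh
ENTRYPOINT ["/entrypoint.sh"]
CMD ["python", "myapp.py"]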
One issue I found was that the app wasn't reachable from outside the container. I added this to app.run:
host='0.0.0.0'
According to this:
Deploying a minimal flask app in docker - server connection issues
Next, Docker panics when you add a directory to the CMD parameters.
So, I removed ENTRYPOINT and CMD and manually added the command to the Docker run:
docker run -d -p 5001:5000 myapp python myapp.py images/*.jpg

Dockerized web-service does not run in background even with detached flag

I'm trying to Dockerize a web service using Tangelo and python.
My project structure is as follows:
test.py
requirements.txt
Dockerfile
test.py
import ...
def run(query):
    ...
    return response
requirements.txt
... # other packages, numpy, open-cv, etc
tangelo
Dockerfile
FROM ubuntu:latest
RUN apt-get update
RUN apt-get install -y python python-pip git
EXPOSE 9220
ADD . /test
WORKDIR /test
RUN pip install -r requirements.txt
CMD "tangelo --port 9220"
I build this using
docker build -t "test" .
And run in detached mode using
docker run -p 9220:9220 -d "test"
But docker ps shows me that the container stops almost as soon as it has started. I don't know what the problem is, since I cannot inspect the logs.
I have tried a lot of things but I still can't figure this thing out.
Any ideas? If needed, I can provide more info.
EDIT:
When I build, step 8 says
Step 8/8 : ENTRYPOINT tangelo --port 9220
---> Running in 8b54841853ab
Removing intermediate container 8b54841853ab
So it means these are run in an intermediate container. Why is that and how can I prevent it?
TL;DR: Use:
CMD tangelo -np --port 9220
Instead of:
CMD "tangelo --port 9220"
Explanation:
You have two ways to debug the problem:
Inspect the logs of the container:
$ docker run -d test
28684015e519c0c8d644fccf98240d1465acabab6d16c19fd59c5f465b7f18af
$ sudo docker logs 28684015e519c
/bin/sh: 1: tangelo --port 9220: not found
Instead of running in detached mode, run in foreground with -i/--interactive (and optionally also -t/--tty):
$ docker run -ti test
/bin/sh: 1: tangelo --port 9220: not found
As you can see from above, the problem is that tangelo --port 9220 is being interpreted as a single argument. Split it by removing quotes:
CMD tangelo --port 9220 # this will use a shell
or use the "exec" form (preferred, given that you don't need any shell features):
CMD ["tangelo", "--port", "9220"] # this will execute tangelo directly
or even better use ENTRYPOINT + CMD:
ENTRYPOINT ["tangelo"]
CMD ["--port", "9220"] # this will execute tangelo directly
After this change, you'll still have a problem:
$ sudo docker run -ti test
...
[29/Apr/2018:02:43:39] TANGELO no such group 'nobody' to drop privileges to
Tangelo is complaining about the fact that there is no user and group named nobody inside the container. Again, there are two things you can do: add a RUN to create the nobody user and group, or run Tangelo with the -np/--no-drop-privileges option:
ENTRYPOINT ["tangelo"]
CMD ["--no-drop-privileges", "--port", "9220"]
It's fine if during the build you see intermediate containers: Docker creates them for each build step. The commands you specify in ENTRYPOINT or CMD are not executed during build, they're just recorded into the final image.

Docker container/image running but there is no port number

I am trying to get a Django project that I have built to run on Docker, and create an image and container for my project so that I can push it to my Docker Hub profile.
Now I have everything set up and I've created the initial image of my project. However, when I run it I am not getting any port number attached to the container. I need this to test and see if the container is actually working.
Here is what I have:
Successfully built a047506ef54b
Successfully tagged test_1:latest
(MySplit) omars-mbp:mysplit omarjandali$ docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
test_1 latest a047506ef54b 14 seconds ago 810MB
(MySplit) omars-mbp:mysplit omarjandali$ docker run --name testing_first -d -p 8000:80 test_1
01cc8173abfae1b11fc165be3d900ee0efd380dadd686c6b1cf4ea5363d269fb
(MySplit) omars-mbp:mysplit omarjandali$ docker container ls -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
01cc8173abfa test_1 "python manage.py ru…" 13 seconds ago Exited (1) 11 seconds ago testing_first
(MySplit) omars-mbp:mysplit omarjandali$ Successfully built a047506ef54b
You can see there is no port number so I don't know how to access the container through my local machine on my web browser.
Dockerfile:
FROM python:3
WORKDIR tab/
COPY requirements.txt ./
RUN pip install -r requirements.txt
COPY . .
CMD ["python", "manage.py", "runserver", "0.0.0.0"]
This line from the question helps reveal the problem:
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
01cc8173abfa test_1 "python manage.py ru…" 13 seconds ago Exited (1) 11 seconds ago testing_first
Exited (1) (from the STATUS column) means that the main process has already exited with a status code of 1 - usually meaning an error. This would have freed up the ports, as the docker container stops running when the main process finishes for any reason.
You need to view the logs in order to diagnose why.
docker logs 01cc will show the logs of the docker container that has the ID starting with 01cc. You should find that reading these will help you on your way. Knowing this command will help you immensely in debugging weirdness in docker, whether the container is running or stopped.
An alternative 'quick' way is to drop the -d in your run command. This will make your container run inline rather than as a daemon.
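For example, a quick debugging pass might look like this (container name and image tag taken from the question):
docker logs 01cc8173abfa                            # read the error from the stopped container
docker rm testing_first                             # remove it so the name can be reused
docker run --name testing_first -p 8000:80 test_1   # re-run in the foreground and watch the output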
Create a Dockerized Django seed project:
django-admin.py startproject djangoapp
You need a requirements.txt file outlining the Python dependencies.
cd djangoapp/
Run the following commands to create the files required for dockerization.
cat <<EOF > requirements.txt
Django
psycopg2
EOF
Dockerfile
cat <<EOF > Dockerfile
FROM python:3.6
ENV PYTHONUNBUFFERED 1
RUN mkdir /app
WORKDIR /app
ADD requirements.txt /app/
RUN pip install -r requirements.txt
ADD . /app/
EXPOSE 8000
CMD ["python", "manage.py", "runserver", "0.0.0.0:8000"]
EOF
docker-compose.yml
cat <<EOF > docker-compose.yml
version: "3.2"
services:
web:
image: djangoapp
command: python manage.py runserver 0.0.0.0:8000
ports:
- "8000:8000"
EOF
Run the application with
docker-compose up -d
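To confirm the service actually came up, something like this should work (assuming nothing else on the host is bound to port 8000):
docker-compose ps                 # the web service should be listed as Up
curl -I http://localhost:8000/    # the Django dev server should respond on the published port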
When you created the container you published the ports. Your container would be accessible via port 8000 if it successfully built. However, as Shadow pointed out, your container exited with an error. That is why you must add the -a flag to your docker container ls command. docker container ls only shows running containers without the -a flag.
I recommend forgoing the detached flag -d to see what is causing the error, and then creating a new container once you have it launching successfully. Or simply run the following commands once you fix the issue: docker stop testing_first, then docker container rm testing_first, and finally run the same command you ran before: docker run --name testing_first -d -p 8000:80 test_1
I ran into similar problems with the first docker instances I attempted to run as well.

Using docker to compose a remote image with a local code base for *development*

I have been reading this tutorial:
https://prakhar.me/docker-curriculum/
along with other tutorials and the Docker docs, and I am still not completely clear on how to do this task.
The problem
My local machine is running Mac OS X, and I would like to set up a development environment for a Python project. In this project I need to call an API from a Docker image, bamos/openface. The project also has some dependencies such as yaml, etc. If I just mount my local directory into openface, i.e.:
docker run -v path/to/project:/root/project -p 9000:9000 -p 8000:8000 -t -i bamos/openface /bin/bash
Then I need to install yaml and the other dependencies, and every time I exit the container the installations are lost. Additionally, it is also much slower for some reason. So the right way to do this seems to be Docker Compose, but I am not sure how to proceed from here.
UPDATE
In response to the comments, I will now update the problem:
Right now my Dockerfile looks like this:
FROM continuumio/anaconda
ADD . /face-off
WORKDIR /face-off
RUN pip install -r requirements.txt
EXPOSE 5000
CMD [ "python", "app.py" ]
It is important that I build from anaconda since a lot of my code will use numpy and scipy. Now I also need bamos/openface, so I tried adding that to my docker-compose.yml file:
version: '2'
services:
  web:
    build: .
    command: python app.py
    ports:
      - "5000:5000"
    volumes:
      - .:/face-off
  openface:
    build: bamos/openface
However, I am getting the error:
build path path/to/face-off/bamos/openface either does not exist, is not accessible, or is not a valid URL
So I need to pass bamos/openface the right way so I can build a container with it. Right now bamos/openface is listed when I do docker images.
