I'm trying to run a python application inside a container. I keep getting:
"/bin/sh: 1: python3: not found
I've tried many different iterations, including using python as my base image, with different failures.
This time I built an Ubuntu container and ran the commands one at a time in the command line and it works in bash. But when I run the container it still can't seem to find python.
Here's what I currently have for my Dockerfile:
FROM ubuntu
CMD mkdir pong
WORKDIR /pong
CMD apt-get update
CMD apt-get install python3 -y
CMD apt-get install python3-pip -y
COPY . /pong
CMD pip3 install pipenv
CMD pip3 install pyxel
CMD python3 main.py
I've spent a lot of time on the docker documentation too, so forgive me for posting this simple question, but I'm stumped. Thank you in advance!
Replace all the CMD instructions with RUN; only the last one, the command that actually starts your application, should be an ENTRYPOINT.
FROM ubuntu
RUN mkdir pong
WORKDIR /pong
RUN apt-get update
RUN apt-get install python3 -y
RUN apt-get install python3-pip -y
COPY . /pong
RUN pip3 install pipenv
RUN pip3 install pyxel
ENTRYPOINT ["python3", "main.py"]
The main purpose of a CMD is to provide defaults for an executing container. These defaults can include an executable, or they can omit the executable, in which case you must specify an ENTRYPOINT instruction as well.
The RUN instruction will execute any commands in a new layer on top of the current image and commit the results. The resulting committed image will be used for the next step in the Dockerfile.
For more details:
CMD
RUN
ENTRYPOINT
The sh shell does not know the full path of the python3 executable.
This should work better:
CMD /usr/bin/python3 main.py
Also, note that for the container not to halt, you need to keep the main.py process constantly running in the foreground. If it exits, the container stops.
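To double-check where python3 actually ended up inside your image, you can override the entrypoint with a shell (a quick diagnostic; the image tag pong is just an example):
docker run --rm --entrypoint /bin/sh pong -c 'command -v python3'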
I have Python code that uses tqdm.
The bash script below builds the Docker image and runs the container, however I can't see any output from the container (in the CLI).
#!/bin/sh
docker build . -t traffic
docker run -d --name traffic_con traffic
docker wait traffic_con
docker cp -a traffic_con:/usr/TrafficMannager/out/data/. ./out/data/
docker rm /traffic_con
docker rmi /traffic
I've tried to run the container in interactive mode (-it), however it throws an error.
[EDIT:]
Dockerfile:
FROM cityflowproject/cityflow
# Create a folder we'll work in
WORKDIR /usr/TrafficMannager
# Upgrade installed packages
RUN apt-get update && apt-get upgrade -y && apt-get clean
# Install vim to open & edit code/text files
RUN apt-get install -y vim
# Install all python code dependencies
RUN pip install gym && \
pip install numpy && \
pip install IPython && \
pip install torch && \
python -m pip install python-dotenv && \
pip install tqdm
COPY . .
CMD chmod u+x script/container_instructions.sh; ./script/container_instructions.sh
container_instructions.sh:
#!/bin/sh
pip install lib/extern/CityFlow/.
python main.py
You run the Docker container in the background, then immediately docker wait for it. If you run the container in the foreground instead, you'll see its output on stdout, and the docker run command will complete when the container exits.
docker run --name traffic_con traffic # without -d
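If you do want to keep running the container detached with -d, you can still follow its output with docker logs (traffic_con is the container name from your script):
docker logs -f traffic_con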
Given the wrapper script you show, you may find this setup much easier to run in a Python virtual environment. Ignore all the Docker parts and run:
python3 -m venv venv
./venv/bin/pip install gym numpy IPython torch python-dotenv tqdm lib/extern/CityFlow
./venv/bin/python3 main.py
The script will directly write to ./out/data on the host system, without the long-winded privileged script to copy data out.
If you really do need a container here, you can also mount the output directory into the container to avoid the manual copy step.
#!/bin/sh
docker build . -t traffic
docker run --rm -v "$PWD/out/data:/usr/TrafficMannager/out/data" traffic
docker rmi traffic
I'm deploying a Node.js web application on a custom runtime with the flex environment. I'm calling child_process in Node.js to open python3, as such:
const spawn = require("child_process").spawn;
pythonProcess = spawn('python3');
Which runs fine locally but when deployed to GAE, it gives me an error as such:
Error: spawn python3 ENOENT
at Process.ChildProcess._handle.onexit (child_process.js:240)
at onErrorNT (internal/child_process.js:415)
at process._tickCallback (next_tick.js:63)
However, when I run python2, it works fine.
After doing some research and digging, I came across this question on stackoverflow
How to install Python3 in Google Cloud Platform for a Node app
It seems that I have to build a custom runtime from a Dockerfile to allow multiple runtimes (or something like that).
I've tried countless things with the Dockerfile, such as:
# Trying to install python3
FROM ubuntu as stage0
WORKDIR /python/
RUN apt-get update || : && apt-get install --yes python3;
RUN apt-get install python3-pip -y
# My main node.js docker stuff
FROM gcr.io/google_appengine/nodejs
COPY . /app/
... etc
and
# From google app engine python runtime docker repo
FROM gcr.io/google-appengine/python
RUN virtualenv /env
ENV VIRTUAL_ENV /env
ENV PATH /env/bin:$PATH
ADD requirements.txt /app/requirements.txt
RUN pip install -r /app/requirements.txt
ADD . /app
# My main node.js docker stuff
FROM gcr.io/google_appengine/nodejs
COPY . /app/
... etc
none of which worked.
What is the correct approach of doing this and how can I do it?
Thank you.
Google's image is based on Ubuntu but only has Python 2 (2.7). This answer showed how to use Python 3.6, but we're going to install 3.5 via software-properties-common. Putting things together, you get:
FROM launcher.gcr.io/google/nodejs
# same as
# FROM gcr.io/google-appengine/nodejs
RUN apt-get update && apt-get install software-properties-common -y
# RUN unlink /usr/bin/python
# RUN ln -sv /usr/bin/python3.5 /usr/bin/python
# RUN python -V
RUN python3 -V
# Copy application code.
COPY . /app/
# Install dependencies.
RUN npm --unsafe-perm install
If you're just going to call python3 from your spawn, you don't need the unlink (the commented lines), which I included so that you can call plain python.
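As a quick sanity check after building, you can confirm that python3 resolves inside the image (the tag myapp is just an example):
docker build -t myapp .
docker run --rm --entrypoint python3 myapp --version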
I created a Dockerfile and a docker-compose file, but when I run docker-compose up it gives me this error: django-apache2 exited with code 0.
Dockerfile
FROM ubuntu:18.04
RUN apt-get -y update && apt-get -y upgrade
RUN apt-get -y install python3.8
RUN apt-get -y install python3-pip
RUN apt -y install apache2
RUN apt-get install -y apt-utils vim curl apache2 apache2-utils
RUN apt-get -y install python3 libapache2-mod-wsgi-py3
RUN pip3 install --upgrade pip
COPY ./requirements.txt ./requirements.txt
RUN apt-get -y install python3-dev
RUN apt-get -y install python-dev default-libmysqlclient-dev
RUN pip3 install -r ./requirements.txt
COPY ./apache.conf /etc/apache2/sites-available/000-default.conf
RUN mkdir /var/www/api/
COPY ./project/. /var/www/api/
WORKDIR /project/
docker-compose.yml
version: "3"
services:
django-apache2:
container_name: "django-apache2"
build: .
ports:
- "8005:80"
First, we need to understand that a Docker container runs a single command, and it keeps running only as long as the process that command started is running. Once that process completes and exits, the container stops.
With that understanding, we can make an assumption about what is happening in your case. When you start your service there is no command to run, so the container stops as soon as its process exits (with status 0).
So you need to add a command to your Dockerfile that keeps running in the foreground.
Check this link for more information.
Your container lacks something to run. You need to add a CMD or ENTRYPOINT instruction to your Dockerfile.
That's why you see such a message, which is not an error. The message is telling you that your container django-apache2 finished correctly (exit status 0), and this is because you are running the base image ubuntu, which doesn't execute anything long-running.
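For this particular image, one way to give the container something long-running is to start Apache in the foreground with a CMD, for example (a sketch; apache2ctl is provided by the apache2 package the Dockerfile already installs):
CMD ["apache2ctl", "-D", "FOREGROUND"]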
The problem with this approach is due to the www-data apache2 user. If you install Python packages from the Dockerfile, they will be installed for the superuser, and the www-data apache user cannot access those packages.
I tried creating a new venv using pip and the same problem happens: packages installed by the superuser in a Python virtual environment are not installed inside the venv folder.
I created a new repository on GitHub explaining a different approach, using miniconda3 as the Python package manager and sudo -u in order to run commands as a different user.
I am trying to solve this using pip. Changes will be posted in the repository.
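To verify whether www-data can actually import the installed packages, you can run the image as that user (a diagnostic sketch; myimage is a hypothetical tag and this assumes Django is listed in requirements.txt):
docker run --rm --user www-data myimage python3 -c "import django; print(django.__file__)"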
I hope this can be useful to you.
I want to be able to add some extra requirements to a Docker image I've created myself. My strategy is to build the image from a Dockerfile with a CMD instruction that will execute a "pip install -r" command, using a mounted volume, at runtime.
This is my dockerfile:
FROM ubuntu:14.04
RUN apt-get update
RUN apt-get install -y python-pip python-dev build-essential
RUN pip install --upgrade pip
WORKDIR /root
CMD ["pip install -r /root/sourceCode/requirements.txt"]
Having that dockerfile I build the image:
sudo docker build -t test .
And finally I try to attach my new requirements using this command:
sudo docker run -v $(pwd)/sourceCode:/root/sourceCode -it test /bin/bash
My local folder "sourceCode" has inside a valid requirements.txt file (it contains only one line with the value "gunicorn").
When I get the prompt I can see that the requirements file is there, but if I execute a pip freeze command the gunicorn package is not listed.
Why is the requirements.txt file being mounted correctly while the pip command is not working properly?
TLDR
The pip command isn't running because you are telling Docker to run /bin/bash instead.
docker run -v $(pwd)/sourceCode:/root/sourceCode -it test /bin/bash
                                                          ^
                                                          here
Longer explanation
The default ENTRYPOINT for a container is /bin/sh -c. You don't override that in the Dockerfile, so it remains. The default CMD instruction is probably nothing. You do override that in your Dockerfile. When you run (ignoring the volume for brevity)
docker run -it test
what actually executes inside the container is
/bin/sh -c pip install -r /root/sourceCode/requirements.txt
Pretty straightforward; it looks like it will run pip when you start the container.
Now let's take a look at the command you used to start the container (again, ignoring volumes)
docker run -it test /bin/bash
what actually executes inside the container is
/bin/sh -c /bin/bash
The CMD arguments you specified in your Dockerfile get overridden by the COMMAND you specify on the command line. Recall that the docker run command takes this form:
docker run [OPTIONS] IMAGE[:TAG|@DIGEST] [COMMAND] [ARG...]
Further reading
This answer has a really to-the-point explanation of what the CMD and ENTRYPOINT instructions do:
The ENTRYPOINT specifies a command that will always be executed when the container starts.
The CMD specifies arguments that will be fed to the ENTRYPOINT.
This blog post on the difference between the ENTRYPOINT and CMD instructions is also worth reading.
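A minimal sketch of that interplay (a toy image, not the one from the question):
FROM ubuntu:14.04
ENTRYPOINT ["echo"]
CMD ["default argument"]
Running docker run <image> prints default argument, while docker run <image> hello prints hello instead, because the command-line arguments replace the CMD and are fed to the ENTRYPOINT.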
You may change the last statement, i.e. the CMD, to the one below, specifying the absolute path of the pip executable:
CMD ["/usr/bin/pip", "install", "-r", "/root/sourceCode/requirements.txt"]
UPDATE: adding an additional answer based on the comments.
One thing must be noted: if a customized image with additional requirements is needed, those requirements should be part of the image rather than installed at run time.
Using the base image below to test:
docker pull colstrom/python:legacy
So, installing packages should be done using the RUN instruction of the Dockerfile, and CMD should be the app you actually want to run as the process inside the container.
Just to check whether the base image has any pip packages, run the command below; it returns nothing:
docker run --rm --name=testpy colstrom/python:legacy /usr/bin/pip freeze
Here is a simple example to demonstrate the same:
Dockerfile
FROM colstrom/python:legacy
COPY requirements.txt /requirements.txt
RUN ["/usr/bin/pip", "install", "-r", "/requirements.txt"]
CMD ["/usr/bin/pip", "freeze"]
requirements.txt
selenium
Build the image with the pip packages (place the Dockerfile and requirements.txt in a fresh directory):
D:\dockers\py1>docker build -t pypiptest .
Sending build context to Docker daemon 3.072 kB
Step 1 : FROM colstrom/python:legacy
---> 640409fadf3d
Step 2 : COPY requirements.txt /requirements.txt
---> abbe03846376
Removing intermediate container c883642f06fb
Step 3 : RUN /usr/bin/pip install -r /requirements.txt
---> Running in 1987b5d47171
Collecting selenium (from -r /requirements.txt (line 1))
Downloading selenium-3.0.1-py2.py3-none-any.whl (913kB)
Installing collected packages: selenium
Successfully installed selenium-3.0.1
---> f0bc90e6ac94
Removing intermediate container 1987b5d47171
Step 4 : CMD /usr/bin/pip freeze
---> Running in 6c3435177a37
---> dc1925a4f36d
Removing intermediate container 6c3435177a37
Successfully built dc1925a4f36d
SECURITY WARNING: You are building a Docker image from Windows against a non-Windows Docker host. All files and directories added to build context will have '-rwxr-xr-x' permissions. It is recommended to double check and reset permissions for sensitive files and directories.
Now run the image
If you are not passing any external command, the container takes the command from CMD, which just shows the list of pip packages; in this case, selenium.
D:\dockers\py1>docker run -itd --name testreq pypiptest
039972151eedbe388b50b2b4cd16af37b94e6d70febbcb5897ee58ef545b1435
D:\dockers\py1>docker logs testreq
selenium==3.0.1
So, the above shows that the package was installed successfully.
Hope this is helpful.
Using the concepts that @Rao and @ROMANARMY explained in their answers, I finally found a way to do what I wanted: add extra Python requirements to a self-created Docker image.
My new Dockerfile is as follows:
FROM ubuntu:14.04
RUN apt-get update
RUN apt-get install -y python-pip python-dev build-essential
RUN pip install --upgrade pip
WORKDIR /root
COPY install_req.sh .
CMD ["/bin/bash" , "install_req.sh"]
I've added as first command the execution of a shell script that has the following content:
#!/bin/bash
pip install -r /root/sourceCode/requirements.txt
pip freeze > /root/sourceCode/freeze.txt
And finally I build and run the image using these commands:
docker build --tag test .
docker run -itd --name container_test -v $(pwd)/sourceCode:/root/sourceCode test   # note: no command at the end
As I explained at the beginning of the post, I have a local folder named sourceCode that contains a valid requirements.txt file with only one line, "gunicorn".
So I finally have the ability to add some extra requirements (the gunicorn package in this example) to a given Docker image.
After building and running my experiment, if I check the logs (docker logs container_test) I see something like this:
Downloading gunicorn-19.6.0-py2.py3-none-any.whl (114kB)
100% |################################| 122kB 1.1MB/s
Installing collected packages: gunicorn
Furthermore, the container has created a freeze.txt file inside the mounted volume that contains all the pip packages installed, including the desired gunicorn:
chardet==2.0.1
colorama==0.2.5
gunicorn==19.6.0
html5lib==0.999
requests==2.2.1
six==1.5.2
urllib3==1.7.1
Now I have other problems with the permissions of the newly created file, but that will probably be a new post.
Thank you!
I have the following problem ...
I want to create a docker image on which a python virtual environment is created. Then I want to be able to do the following two things:
1. Run docker run -it <image> to start an interactive shell in this virtual environment.
2. Run docker run <image> <command> (such as python --version) that is executed in said virtual environment.
I tried many things but it seems I don't get anywhere. My Dockerfile looks currently like this:
FROM ubuntu:16.04
RUN apt-get -y update && apt-get install -y python3 python-pip
RUN pip install virtualenv
RUN virtualenv -p python3.5 /venvs/myenv3.5
RUN . /venvs/myenv3.5/bin/activate && pip install numpy
I tried messing around with ENTRYPOINT and CMD but got nowhere. By adding the line CMD . /venvs/myenv3.5/bin/activate; /bin/bash I was able to start an interactive bash in the environment, but running docker run <image> python --version shows that commands like that are not executed in said environment.
Is there a way to do this?
You can use the /venvs/myenv3.5/bin/python executable instead of the main python. This will execute python within that virtual environment. You can do this by setting ENV PATH /venvs/myenv3.5/bin:$PATH as you mentioned in the comments, or by using an entrypoint in the Dockerfile (exec form, so that extra arguments are passed through to python):
ENTRYPOINT ["/venvs/myenv3.5/bin/python"]
Now when you run your image, your virtualenv python will be executed by default:
$ docker run -it <image> --version
Python 3.5.2
If you need to get a shell on this image, you can overwrite the entrypoint:
$ docker run -it --entrypoint /bin/bash <image>
/ #
You can also use /venvs/myenv3.5/bin/pip to install things into the virtualenv:
RUN /venvs/myenv3.5/bin/pip install numpy
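Putting the pieces together, a minimal Dockerfile sketch based on this answer (reusing the lines from the question):
FROM ubuntu:16.04
RUN apt-get -y update && apt-get install -y python3 python-pip
RUN pip install virtualenv
RUN virtualenv -p python3.5 /venvs/myenv3.5
RUN /venvs/myenv3.5/bin/pip install numpy
ENTRYPOINT ["/venvs/myenv3.5/bin/python"]
With this, docker run -it <image> --version reports the virtualenv's Python, and docker run -it --entrypoint /bin/bash <image> still drops you into a shell.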