Force Python container to stay in "running" state for debugging - python

I'm using the following Dockerfile to run a Python script:
FROM python:3
COPY . /app
RUN pip install requests
RUN mkdir /app/foo
ENTRYPOINT [ "python3", "/app/main.py"]
The issue is that a file that should be created does not get created, and I can't debug it inside the container because it stops too fast.
Each run, the container dies within 6 seconds. I tried:
an infinite while loop in the Python script;
adding RUN tail -f to the Dockerfile, with no success.
I'm running Docker Desktop (Windows 10).
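(For what it's worth: RUN executes at build time, so a RUN tail -f line can never keep the finished container alive. A sketch of the usual run-time workaround - the image tag myimage and the container name debugme are placeholders:)
docker run -d --name debugme --entrypoint tail myimage -f /dev/null
docker exec -it debugme /bin/bash
# then run the script by hand and watch what happens:
docker exec -it debugme python3 /app/main.py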

Related

Python docker container shuts down immediately after finishing running the app, even if specified to stay in -d -t

I have a dockerfile
FROM python:3
WORKDIR /app
ADD ./venv ./venv
ADD ./data/file1.csv.gz ./data/file1.csv.gz
ADD ./data/file2.csv.gz ./data/file2.csv.gz
ADD ./requirements.txt ./venv/requirements.txt
WORKDIR /app/venv
RUN pip install --no-cache-dir -r requirements.txt
CMD ["python", "./src/script.py", "/app/data/file1.csv.gz", "/app/data/file2.csv.gz"]
After building an image from it and running it, the container runs the app as it should, but then it shuts down immediately after finishing. This is definitely problematic since I can't inspect the output file.
I have tried using docker run -d -t <imgname> and docker ps shows the app for a few seconds, but once again, as soon as it finishes the process, the container shuts itself down.
So it's impossible to access, even with docker exec -it <imgid> /bin/bash; it just immediately exits.
I've also tried adding a final RUN /bin/bash after the last CMD, but it doesn't help either.
What can I do to actually be able to log into the container and inspect the file?
As long as the container hasn't been removed, you will be able to get at the data. You can find the name of the container using docker ps -a.
Then, if you know the location of the file, you can copy it to your host using
docker cp <container name>:<file> .
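For example, if the script was supposed to write /app/output.csv (a hypothetical path), that would be:
docker cp <container name>:/app/output.csv .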
Alternatively, you can commit the contents of the container to a new image and run a shell in that using
docker commit <container name> newimagename
docker run --rm -it newimagename /bin/bash
Then you can look around in the container and find your files.
Unfortunately there's no way to start the container up again and look around in it. docker start will start the container, but will run the same command again as was run when you did docker run.
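One more trick that fits this situation (a file that should have been created): docker diff lists everything the stopped container added (A), changed (C) or deleted (D) relative to its image:
docker diff <container name>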

How to create a virtual machine programmatically?

I'm trying to find a way to run a .exe application in Python (I mean making a virtual box where you can run .exe programs), such that when you run the application it will only affect the folder where the Python script is.
Dockerfile
FROM python:3
ADD main.py .
ADD the.exe /
CMD [ "python", "main.py"]
main.py
import os
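# Note: os.startfile is Windows-only, so this script needs a Windows container to actually work.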
os.startfile("/the.exe")
Build
docker build -t isolatedexe:latest .
Run
docker run isolatedexe:latest
Next, interact with the container using:
docker exec -it <container id> /bin/bash
Note: Find the container ID with docker ps

How to run python scripts after CMD in Dockerfile?

I have a Docker image that exposes port 9000 for a server. After the server is running, I need to execute 3 Python scripts which depend on the server, so they can only be executed after server.py is running. However, after the CMD command, the other code does not get executed and remains stuck. What are the possible suggestions to run the 3 scripts in the same container?
FROM python:3.7.3 as build
WORKDIR /app
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
COPY . .
# CMD [ "python", "./server.py" ] (The following 3 scripts depends on server.py for execution)
RUN python /app/script1.py
RUN python /app/script2.py
RUN python /app/script3.py
EXPOSE 9000
CMD [ "python", "./server.py" ]
As written in the Dockerfile reference:
There can only be one CMD instruction in a Dockerfile
The CMD instruction tells the container what its entry point is, and when running the container, that is what will be run.
If running python ./server.py is a blocking call (which I'm assuming it is, since it's called a server, and most likely responds to some kind of requests), then this won't be possible.
Instead, try restructuring your scripts so that they are run when the server is run, by doing everything you do in script1.py, script2.py, script3.py after the server has been started inside of server.py.
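A minimal sketch of that restructuring, assuming an http.server-based server and that each script exposes a main() function (both assumptions - the question shows neither):
# server.py - hypothetical sketch, not the actual code from the question
import threading
from http.server import HTTPServer, SimpleHTTPRequestHandler

import script1, script2, script3  # assumed to each expose main()

def run_followup_scripts():
    # executed in the background once the server socket is already bound
    for mod in (script1, script2, script3):
        mod.main()

server = HTTPServer(("0.0.0.0", 9000), SimpleHTTPRequestHandler)
threading.Thread(target=run_followup_scripts, daemon=True).start()
server.serve_forever()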
If instead this is about script1.py... sending requests to the server, I'd recommend not including those in the container. Instead, you can simply run those scripts, manually, from the terminal while the server container is running.
You can just execute those scripts from the command line using docker exec after the container has started. You'll just need to know what the container name is:
docker exec <CONTAINER NAME> python /app/script1.py
docker exec <CONTAINER NAME> python /app/script2.py
docker exec <CONTAINER NAME> python /app/script3.py
Or just make a bash script, say my_script.sh, to run them all and then execute that:
#!/usr/bin/env bash
docker exec <CONTAINER NAME> python /app/script1.py
docker exec <CONTAINER NAME> python /app/script2.py
docker exec <CONTAINER NAME> python /app/script3.py
And then run it on the host (the script itself wraps the docker exec calls):
./my_script.sh
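If you don't know the name, docker ps shows it in the NAMES column; --format can narrow the output to just the names:
docker ps --format '{{.Names}}'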

Docker container/image running but there is no port number

I am trying to get a Django project that I have built to run on Docker, creating an image and a container for my project so that I can push it to my Docker Hub profile.
Now I have everything set up and I've created the initial image of my project. However, when I run it, I am not getting any port number attached to the container. I need this to test and see if the container is actually working.
Here is what I have:
Successfully built a047506ef54b
Successfully tagged test_1:latest
(MySplit) omars-mbp:mysplit omarjandali$ docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
test_1 latest a047506ef54b 14 seconds ago 810MB
(MySplit) omars-mbp:mysplit omarjandali$ docker run --name testing_first -d -p 8000:80 test_1
01cc8173abfae1b11fc165be3d900ee0efd380dadd686c6b1cf4ea5363d269fb
(MySplit) omars-mbp:mysplit omarjandali$ docker container ls -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
01cc8173abfa test_1 "python manage.py ru…" 13 seconds ago Exited (1) 11 seconds ago testing_first
(MySplit) omars-mbp:mysplit omarjandali$ Successfully built a047506ef54b
You can see there is no port number, so I don't know how to access the container from my local machine in my web browser.
dockerfile:
FROM python:3
WORKDIR tab/
COPY requirements.txt ./
RUN pip install -r requirements.txt
COPY . .
CMD ["python", "manage.py", "runserver", "0.0.0.0"]
This line from the question helps reveal the problem:
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
01cc8173abfa test_1 "python manage.py ru…" 13 seconds ago Exited (1) 11 seconds ago testing_first
Exited (1) (from the STATUS column) means that the main process has already exited with a status code of 1 - usually meaning an error. This would have freed up the ports, as the docker container stops running when the main process finishes for any reason.
You need to view the logs in order to diagnose why.
docker logs 01cc will show the logs of the docker container that has the ID starting with 01cc. You should find that reading these will help you on your way. Knowing this command will help you immensely in debugging weirdness in docker, whether the container is running or stopped.
An alternative 'quick' way is to drop the -d in your run command. This will make your container run inline rather than as a daemon.
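For example (note that Django's runserver listens on port 8000 by default, so the container-side port should most likely be 8000, not 80):
docker run --rm -p 8000:8000 test_1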
Create a Dockerised Django seed project:
django-admin.py startproject djangoapp
You need a requirements.txt file outlining the Python dependencies:
cd djangoapp/
Run the following commands to create the files required for dockerisation:
cat <<EOF > requirements.txt
Django
psycopg2
EOF
Dockerfile
cat <<EOF > Dockerfile
FROM python:3.6
ENV PYTHONUNBUFFERED 1
RUN mkdir /app
WORKDIR /app
ADD requirements.txt /app/
RUN pip install -r requirements.txt
ADD . /app/
EXPOSE 8000
CMD ["python", "manage.py", "runserver", "0.0.0.0:8000"]
EOF
docker-compose.yml
cat <<EOF > docker-compose.yml
version: "3.2"
services:
  web:
    image: djangoapp
    command: python manage.py runserver 0.0.0.0:8000
    ports:
      - "8000:8000"
Run the application with
docker-compose up -d
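Note that the compose file references an already-built djangoapp image, so build it first from the Dockerfile above:
docker build -t djangoapp .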
When you created the container, you published the ports. Your container would be accessible via port 8000 if it had started successfully. However, as Shadow pointed out, your container exited with an error. That is why you must add the -a flag to your docker container ls command - without -a, docker container ls only shows running containers.
I recommend forgoing the detached flag -d to see what is causing the error, and creating a new container only after you have the current one working. Or simply run the following commands once you fix the issue: docker stop testing_first, then docker container rm testing_first, and finally the same command you ran before: docker run --name testing_first -d -p 8000:80 test_1
I ran into similar problems with the first docker instances I attempted to run as well.

How to run my python script on docker?

I am trying to run my Python script on Docker. I tried different ways to do it but was not able to run it on Docker. My Python script is given below:
import os
print ('hello')
I have already installed Docker on my Mac, but I want to know how I can make an image and push it to Docker Hub, and after that pull it and run my script on Docker itself.
Going by the question title: if one doesn't want to create a Docker image but just wants to run a script using a standard Python Docker image, it can be run using the command below:
docker run -it --rm --name my-running-script -v "$PWD":/usr/src/myapp -w /usr/src/myapp python:3.7-alpine python script_to_run.py
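Here -v "$PWD":/usr/src/myapp mounts the current directory into the container, -w makes it the working directory, and --rm cleans the container up again once the script exits.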
Alright, first create a specific project directory for your docker image. For example:
mkdir /home/pi/Desktop/teasr/capturing
Copy your dockerfile and script in there and change the current context to this directory.
cp /home/pi/Desktop/teasr/capturing.py /home/pi/Desktop/teasr/dockerfile /home/pi/Desktop/teasr/capturing/
cd /home/pi/Desktop/teasr/capturing
This is best practice, as the first thing the Docker engine does on build is read the whole current context.
Next, we'll take a look at your Dockerfile. It should look something like this now:
FROM python:latest
WORKDIR /usr/local/bin
COPY capturing.py .
CMD ["capturing.py", "-OPTIONAL_FLAG"]
The next thing you need to do is build it with a sensible name; using dots in image names is generally discouraged.
docker build -t pulkit/capturing:1.0 .
Next thing is to just run the image like you've done.
docker run -ti --name capturing pulkit/capturing:1.0
The script now gets executed inside the container and will probably exit upon completion.
Edit, after finding the problem that created the following error:
standard_init_linux.go:195: exec user process caused "exec format error"
There's a different architecture beneath Raspberry Pis (ARM instead of x86_64), which COULD'VE BEEN the problem, but wasn't. If that had been the problem, switching the parent image to FROM armhf/python would've been enough.
Source
BUT! The error kept occurring.
So the solution to this problem is a simple missing shebang at the top of the Python script. The first line of the script needs to be #!/usr/bin/env python, and that should solve the problem.
Source
You need to create a dockerfile in the directory your script is in.
You can take this template:
FROM python:latest
COPY scriptname.py /usr/local/share/
CMD ["scriptname.py", "-flag"]
Then simply execute docker build -t pulkit/scriptname:1.0 . and your image should be created.
Your image should be visible under docker images. If you want to execute it on your local computer, use docker run.
If you want it to upload to the DockerHub, you need to log into the DockerHub with docker login, then upload the image with docker push.
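A sketch of that round trip, reusing the image name from above (pulkit standing in for your Docker Hub username):
docker login
docker push pulkit/scriptname:1.0
# later, on any machine:
docker pull pulkit/scriptname:1.0
docker run --rm pulkit/scriptname:1.0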
I followed @samprog's (most accepted) answer on my machine running Ubuntu 14.04.6
and was getting "standard_init_linux.go:195: exec user process caused "exec format error"".
None of the solutions mentioned above worked for me.
Fixed the error after changing my Dockerfile as follows
FROM python:latest
COPY capturing.py ./capturing.py
CMD ["python","capturing.py"]
Note: If your script imports other modules, you need to modify the COPY statement in your Dockerfile as follows: COPY *.py ./
Hope this will be useful for others.
Another way to run a Python script on Docker can be:
Copy the local Python script into the container:
docker cp yourlocalscript.path container_id:/dst_path/
The container ID can be found using:
docker ps
Then run the script inside the container:
docker exec -it container_id python /dst_path/yourlocalscript.path
It's very simple:
1- Go to your Python script's directory and create a file with this name, without any extension:
Dockerfile
2- Now open the Dockerfile and write your script name instead of sci.py
(content of Dockerfile)
# I chose the slim version; you can choose another tag, for example python:3
FROM python:slim
WORKDIR /usr/local/bin
# replace sci.py with your script's name in the two lines below
COPY sci.py .
CMD [ "python", "sci.py" ]
Save it. Now build an image from this Dockerfile and your .py script, and then run it.
and next run it
3- In your script folder's address bar, type cmd and press Enter:
4- When the cmd window opens, type:
docker build -t my-python-app .   # this creates an image in Docker named my-python-app
5- And finally, run the image:
docker run -it --rm --name my-running-app my-python-app
I've encountered this problem recently; this dependency HELL between python2 and python3 got me. Here is the solution:
Bind your current working directory to a Docker container with python2 and pip2 running.
Pull the docker image.
docker pull frolvlad/alpine-python2
Add this alias into /home/user/.zshrc or /home/user/.bashrc
alias python2='docker run -it --rm --name python2 -v "$PWD":"$PWD" -w "$PWD" frolvlad/alpine-python2'
Once you type python2 into your terminal, you'll be dropped into the Docker instance.
