How to pass a Python command to a Dockerfile

I have this Python command and am not sure how to pass it in a Dockerfile.
Command
python3 app/main.py start --config config.yml
I am writing a Dockerfile but am not sure how to pass the above command in it. In my main.py file I have defined the start and stop conditions in the form of actions.
config.yaml file
host: 127.0.0.1
port: 9000
db: elastic
elastic:
  port: 9200
  host: localhost
  user: null
  secret: null
  ssl: false
sqlserver:
  port: 1433
  host: localhost
  instance: MSSQLSERVER
  ssl: false
  user: null
  password: null
kafka:
  port: null
  host: null
api-docs: true
rocketchat:
  host: null
  port: null
auth-backend:
  - basic
  - bearer
  - oauth
name: Smartapp
Dockerfile
FROM python:3.8
WORKDIR /usr/src/app
COPY requirements.txt ./
RUN pip install -r requirements.txt
COPY . .
EXPOSE 8000
RUN python3 app/main.py start --config config.yml
When I build the image from the Dockerfile, the build gets stuck at the RUN step in what looks like an infinite loop.
Step 7/7 : RUN python3 smartinsights/main.py start --config config.yml
---> Running in 8a81bfe608d6
/usr/src/app/smartinsights/system/DB.py:27: SyntaxWarning: "is" with a literal. Did you mean "=="?
if self.database_name is 'elastic':
/usr/src/app/smartinsights/system/DB.py:29: SyntaxWarning: "is" with a literal. Did you mean "=="?
elif self.database_name is 'sqlserver':
Setting /usr/src/app/smartinsights as project folder
Running...
registering ActionIncident
registering ActionIncidentTag
registering MemoryCount
registering MemoryCloseCount
registering MemoryOpenCount
registering AutoCloseCount
registering AgeingAnalysisData
[2021-04-14 09:57:19 +0000] [9] [INFO] Goin' Fast @ http://127.0.0.1:8000
[2021-04-14 09:57:19 +0000] [9] [INFO] Starting worker [9]
The error below can also be seen at server startup:
[2021-04-14 10:17:37 +0000] [9] [INFO] Goin' Fast @ http://localhost:8000
[2021-04-14 10:17:37 +0000] [9] [ERROR] Unable to start server
Traceback (most recent call last):
File "/usr/local/lib/python3.8/site-packages/sanic/server.py", line 891, in serve
http_server = loop.run_until_complete(server_coroutine)
File "uvloop/loop.pyx", line 1494, in uvloop.loop.Loop.run_until_complete
File "uvloop/loop.pyx", line 1768, in create_server
OSError: [Errno 99] error while attempting to bind on address ('::1', 8000, 0, 0): cannot assign requested address
[2021-04-14 10:17:37 +0000] [9] [INFO] Server Stopped

The Dockerfile 'builds' an image -- you should not run your application during the build process. You want your application to run only when the container runs.
Change your Dockerfile to look like this:
FROM python:3.8
WORKDIR /pyapp/
COPY app/* app/
COPY . .
RUN pip install -r requirements.txt
CMD ["python3", "app/main.py", "start", "--config", "config.yml"]
This CMD line tells docker that when it runs the container, it should run this command within it. You can build it like this:
docker build --tag myPythonApp .
And run it like this
docker run -it --rm myPythonApp
You have added some output in the comments that suggests that this container is listening on port 9000. You can expose this port on the host like this:
docker run -it --rm -p 9000:9000 myPythonApp
And maybe access it in your browser on `http://localhost:9000/`.
That command will run the container in the current shell process. When you hit ctrl+c then the process will stop and the container will exit. If you want to keep the container running in the background try this:
docker run -it --rm -p 9000:9000 -d myPythonApp
And, if you're sure that you'll only be running one container at a time, it may help to give it a name.
docker run -it --rm -p 9000:9000 -d --name MyPythonApp myPythonApp
That will allow you to kill a background container with:
docker rm -f MyPythonApp
Btw, if you're in a mess, and you're running bash, you can remove all running and stopped containers with:
docker rm -f $(docker ps -qa)

1. Create any Python script.
2. Create the Dockerfile using the following code:
FROM python:3
WORKDIR /usr/src/app
COPY . .
CMD ["test.py"]
ENTRYPOINT ["python3"]
3. Build the Docker image:
docker build -t hello .
4. Run the Docker container:
docker run -it hello test.py

Related

I am able to run Python FastAPI locally, but when trying to run it through a container, I get no response in the browser and no error either

I am able to run Python FastAPI locally (connecting to localhost http://127.0.0.1:8000/), but when I try to run it through a container, I get no response in the browser. No error message either.
content of main.py
from typing import Optional
from fastapi import FastAPI
app = FastAPI()
@app.get("/")
def read_root():
return {"Hello": "World"}
@app.get("/items/{item_id}")
def read_item(item_id: int, q: Optional[str] = None):
return {"item_id": item_id, "q": q}
content of Dockerfile
FROM python:3.9.5
WORKDIR /code
COPY ./docker_req.txt /code/docker_req.txt
RUN pip install --no-cache-dir --upgrade -r /code/docker_req.txt
COPY ./app /code/app
CMD ["uvicorn", "app.main:app", "--reload", "--host", "0.0.0.0"]
Output on the command line when running the container:
docker run --name my-app1 python-fastapi:1.5
INFO: Will watch for changes in these directories: ['/code']
INFO: Uvicorn running on http://0.0.0.0:8000 (Press CTRL+C to quit)
INFO: Started reloader process [1] using statreload
INFO: Started server process [7]
INFO: Waiting for application startup.
INFO: Application startup complete.
docker_req.txt --
fastapi==0.73.0
pydantic==1.9.0
uvicorn==0.17.4
Add the -d flag to your docker run command to run the container in detached mode, and publish the port with -p. Try running:
docker run -d --name my-app5 -p 8000:8000 python-fastapi:1.5
Simpler: you can redirect local port 8000 to docker port 8000
docker run -p 8000:8000 ...
and now you can access it using one of
http://0.0.0.0:8000
http://localhost:8000
http://127.0.0.1:8000
Longer: you can expose port 8000 and access it using the container's IP.
docker run --expose 8000 ...
or in Dockerfile you can use EXPOSE 8000
Next you have to find the container ID:
docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
0e7c6c543246 furas/fastapi "uvicorn app.main:ap…" 3 seconds ago Up 2 seconds 8000/tcp stupefied_nash
or
docker ps -q
0e7c6c543246
And use (part of) the CONTAINER ID to get the container's IP address:
docker inspect 0e7c | grep IPAddress
"SecondaryIPAddresses": null,
"IPAddress": "172.17.0.2",
"IPAddress": "172.17.0.2",
or
docker inspect --format '{{ .NetworkSettings.IPAddress }}' 0e7c
172.17.0.2
And now you can access it using
http://172.17.0.2:8000
EDIT:
In one line (on Bash)
docker inspect --format '{{ .NetworkSettings.IPAddress }}' $(docker ps -q)
172.17.0.2
I solved this problem by creating a YAML file and running it via docker-compose:
version: '3'
services:
  my-app:
    image: python-fastapi:1.5
    ports:
      - 8000:8000
In the YAML file, I also tried it without ports and it still worked, whereas the docker run command below does not work:
docker run --name my-app5 -p 8000:8000 python-fastapi:1.5
Can anyone please explain why it's not working from the docker run command but works from the YAML file?

docker port exposing issue

I have a flask application that I'm trying to dockerize but the ports are not getting exposed properly.
Dockerfile
FROM tiangolo/uwsgi-nginx-flask:python3.7
LABEL Name=testAPP Version=0.0.1
EXPOSE 5000
ADD . /app
WORKDIR /app
# Using pip:
RUN python3 -m pip install -r requirements.txt
ENTRYPOINT [ "python3" ]
CMD ["application.py" ,"runserver","-h 0.0.0.0"]
Docker Build is successful:
docker build --rm -f "Dockerfile" -t testAPP .
Docker run also runs the image successfully:
docker run -device -expose 5000:5000 testAPP
Also tried,
docker run --rm -d -p 443:443/tcp -p 5000:5000/tcp -p 80:80/tcp testAPP
But when I try to access the site it throws an error
site can't be reached error
Flask App(Inside the APP)
if __name__ == '__main__':
app.run(host='127.0.0.1', port=5000)
On execution of the command:
docker container ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
8724cdb38e14 testAPP "/entrypoint.sh pyth…" 15 seconds ago Up 13 seconds 80/tcp, 443/tcp, 0.0.0.0:5000->5000/tcp funny_galois
Defining a port as exposed doesn't publish the port by itself. Try with the -p flag, like:
-p host_port:container_port
example:
docker run -p 8080:8080 -v ~/Code/PYTHON/ttftt-recipes-manager:/app python_dev
But before running, check whether something else is already running on the specified port:
lsof -i :PORTNUM
and afterwards check the container logs with something like:
docker logs my_container
Make sure you're mapping your localhost port to the container's port
docker run -p 127.0.0.1:8000:8000 your_image
And once your application is in the container, you want to run your app with the host set to 0.0.0.0, as in the sketch below.
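For instance, adapting the application.py from the question above, a minimal sketch (the route is illustrative and the Flask object name app is assumed) binds to all interfaces instead of the loopback address:
# application.py -- bind to 0.0.0.0 so the published port is reachable from the host
from flask import Flask

app = Flask(__name__)

@app.route("/")
def index():
    # trivial route to verify the container responds
    return "It works"

if __name__ == '__main__':
    # 127.0.0.1 only accepts connections from inside the container;
    # 0.0.0.0 listens on all interfaces, so -p 5000:5000 can reach it
    app.run(host='0.0.0.0', port=5000)
With that change, the -p 5000:5000 mapping from the commands above should make the app reachable at http://localhost:5000.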

Docker container/image running but there is no port number

I am trying to get a django project that I have built to run on docker and create an image and container for my project so that I can push it to my dockerhub profile.
Now I have everything set up and I've created the initial image of my project. However, when I run it I am not getting any port number attached to the container. I need this to test and see if the container is actually working.
Here is what I have:
Successfully built a047506ef54b
Successfully tagged test_1:latest
(MySplit) omars-mbp:mysplit omarjandali$ docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
test_1 latest a047506ef54b 14 seconds ago 810MB
(MySplit) omars-mbp:mysplit omarjandali$ docker run --name testing_first -d -p 8000:80 test_1
01cc8173abfae1b11fc165be3d900ee0efd380dadd686c6b1cf4ea5363d269fb
(MySplit) omars-mbp:mysplit omarjandali$ docker container ls -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
01cc8173abfa test_1 "python manage.py ru…" 13 seconds ago Exited (1) 11 seconds ago testing_first
(MySplit) omars-mbp:mysplit omarjandali$ Successfully built a047506ef54b
You can see there is no port number, so I don't know how to access the container from the web browser on my local machine.
dockerfile:
FROM python:3
WORKDIR tab/
COPY requirements.txt ./
RUN pip install -r requirements.txt
COPY . .
CMD ["python", "manage.py", "runserver", "0.0.0.0"]
This line from the question helps reveal the problem:
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
01cc8173abfa test_1 "python manage.py ru…" 13 seconds ago Exited (1) 11 seconds ago testing_first
Exited (1) (from the STATUS column) means that the main process has already exited with a status code of 1 - usually meaning an error. This would have freed up the ports, as the docker container stops running when the main process finishes for any reason.
You need to view the logs in order to diagnose why.
docker logs 01cc will show the logs of the docker container that has the ID starting with 01cc. You should find that reading these will help you on your way. Knowing this command will help you immensely in debugging weirdness in docker, whether the container is running or stopped.
An alternative 'quick' way is to drop the -d in your run command. This will make your container run inline rather than as a daemon.
Create a Dockerized Django seed project:
django-admin.py startproject djangoapp
You need a requirements.txt file outlining the Python dependencies:
cd djangoapp/
Run the following command to create the files required for dockerization:
cat <<EOF > requirements.txt
Django
psycopg2
EOF
Dockerfile
cat <<EOF > Dockerfile
FROM python:3.6
ENV PYTHONUNBUFFERED 1
RUN mkdir /app
WORKDIR /app
ADD requirements.txt /app/
RUN pip install -r requirements.txt
ADD . /app/
EXPOSE 8000
CMD ["python", "manage.py", "runserver", "0.0.0.0:8000"]
EOF
docker-compose.yml
cat <<EOF > docker-compose.yml
version: "3.2"
services:
web:
image: djangoapp
command: python manage.py runserver 0.0.0.0:8000
ports:
- "8000:8000"
EOF
Run the application with
docker-compose up -d
When you created the container you published the ports. Your container would be accessible via port 8000 if it successfully built. However, as Shadow pointed out, your container exited with an error. That is why you must add the -a flag to your docker container ls command. docker container ls only shows running containers without the -a flag.
I recommend forgoing the detached flag -d to see what is causing the error, and creating a new container once you have successfully launched the one you are working on. Or simply run the following commands once you fix the issue: docker stop testing_first, then docker container rm testing_first, and finally run the same command you ran before: docker run --name testing_first -d -p 8000:80 test_1
I ran into similar problems with the first docker instances I attempted to run as well.

Understanding Gunicorn and Flask on Docker/Docker-Compose

I'm having trouble getting Flask and Gunicorn to work properly on Docker using Docker-compose
Dockerfile:
FROM ubuntu:latest
MAINTAINER Kyle Calica "Kyle Calica"
RUN apt-get update -y
RUN apt-get install -y python3-dev build-essential python-pip gunicorn
RUN pip install --upgrade setuptools
RUN pip install ez_setup
COPY . /app
WORKDIR /app
RUN pip install -r ./app/requirements.txt
CMD [ "gunicorn", "-b", ":8000", "run" ]
Docker-Compose.yml:
version: '2'
services:
  web:
    build: .
    volumes:
      - ./:/var/www/crypto
    ports:
      - "5000:5000"
run.py:
from app import app
app.run()
From my understanding the Gunicorn master will run at port 8000 on all interfaces in the container
And then it'll spawn a node to run at port 5000 in the container at 127.0.0.1/localhost.
From there I link port 5000 in the container to my host port 8000.
I expected to see my application from my host computer at http://127.0.0.1:8000, but instead nothing happened and nothing seemed to be connecting.
I have done this before but can't remember what I did differently.
(env) paper-street:CoinSlack kyle$ gunicorn -b :8000 run
[2017-09-16 17:43:59 -0700] [15402] [INFO] Starting gunicorn 19.7.1
[2017-09-16 17:43:59 -0700] [15402] [INFO] Listening at: http://0.0.0.0:8000 (15402)
[2017-09-16 17:43:59 -0700] [15402] [INFO] Using worker: sync
[2017-09-16 17:43:59 -0700] [15405] [INFO] Booting worker with pid: 15405
* Running on http://127.0.0.1:5000/ (Press CTRL+C to quit)
^ The reason is that it seems to have spawned a worker and is running it on port 5000; I can't access my app through port 8000.
app.run() and gunicorn are two ways to run a webserver. The first is the Flask development server, and it's useful for development but shouldn't be deployed in production. You shouldn't run both at the same time.
gunicorn should be pointed to the app object so that it can import it and use it to run the webserver itself. That's all it needs.
Instead of CMD [ "gunicorn", "-b", ":8000", "run" ]
Do CMD ["gunicorn", "app:app", "-b", "0.0.0.0:8000"]
You can see that instead of telling the gunicorn process what to run, you tell it where to look. The application that you want gunicorn to serve is app. You can also add more options to the gunicorn command, such as reload, the number of workers, timeout, log levels, etc.
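For reference, a minimal layout sketch consistent with the app:app target above and the question's from app import app (the file contents beyond the Flask object are assumptions, not taken from the question):
# app/__init__.py -- "gunicorn app:app" imports the "app" package and serves its "app" attribute
from flask import Flask

app = Flask(__name__)

@app.route("/")
def index():
    # placeholder route so the container has something to serve
    return "OK"
run.py then remains useful only for local development with Flask's built-in server; under gunicorn it is never imported.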
To expand on Alex Hall's answer, you don't want to run a Flask server in production, because its ability to scale is very limited. The Flask docs mention that:
Flask’s built-in server is not suitable for production as it doesn’t scale well and by default serves only one request at a time

how to run gunicorn on docker

I have 2 files that depend on each other when Docker starts up. One is a Flask file and one is a file with a few functions. When Docker starts, only the functions file will be executed, but it imports Flask variables from the Flask file. Example:
Flaskfile
import flask
from flask import Flask, request
import json
_flask = Flask(__name__)
@_flask.route('/', methods = ['POST'])
def flask_main():
s = str(request.form['abc'])
ind = global_fn_main(param1,param2,param3)
return ind
def run(fn_main):
global global_fn_main
global_fn_main = fn_main
_flask.run(debug = False, port = 8080, host = '0.0.0.0', threaded = True)
Main File
import flaskfile
#a few functions then
if __name__ == '__main__':
flaskfile.run(main_fn)
The script runs fine without needing gunicorn.
Dockerfile
FROM python-flask
ADD *.py *.pyc /code/
ADD requirements.txt /code/
WORKDIR /code
EXPOSE 8080
CMD ["python","main_file.py"]
On the command line I usually do docker run -it -p 8080:8080 my_image_name and then Docker will start and listen.
Now to use gunicorn:
I tried to modify my CMD parameter in the dockerfile to
["gunicorn", "-w", "20", "-b", "127.0.0.1:8083", "main_file:flaskfile"]
but it just keeps exiting. Am I not writing the Docker gunicorn command right?
I just went through this problem this week and stumbled on your question along the way. Fair to say you either resolved this or changed approaches by now, but for future's sake:
The command in my Dockerfile is:
CMD ["gunicorn" , "-b", "0.0.0.0:8000", "app:app"]
Here the first "app" is the module and the second "app" is the name of the WSGI callable. In your case it should be _flask from your code, although you have some other stuff going on that makes me less certain.
Gunicorn takes the place of all the run statements in your code, if Flask's development web server and Gunicorn try to take the same port it can conflict and crash Gunicorn.
Note that when run by Gunicorn, __name__ is not "__main__". In my example it is equal to "app".
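As a quick way to see both points, here is a minimal app.py sketch (names are assumptions for illustration, not taken from the question) that logs __name__ and exposes the kind of bare "Hi" route mentioned below:
# app.py -- run with: gunicorn -b 0.0.0.0:8000 app:app
from flask import Flask

# prints "app" under gunicorn, "__main__" when run directly with python app.py
print(f"__name__ is {__name__}")

app = Flask(__name__)

@app.route("/")
def hi():
    # dependency-free route for checking the server is up (e.g. with curl)
    return "Hi"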
At my admittedly junior level of Python, Docker, and Gunicorn, the fastest way to debug is to comment out the "CMD" in the Dockerfile and get the container up and running:
docker run -it -d -p 8080:8080 my_image_name
Hop onto the running container:
docker exec -it container_name /bin/bash
Then start Gunicorn from the command line until you've got it working, and test with curl. I keep a basic route in my app.py file that just returns "Hi" and has no dependencies, for validating that the server is up before worrying about the port binding to the host machine.
After struggling with this issue over the last 3 days, I found that all you need to do is to bind to the non-routable meta-address 0.0.0.0 rather than the loopback IP 127.0.0.1:
CMD ["gunicorn" , "--bind", "0.0.0.0:8000", "app:app"]
And don't forget to expose the port. One option to do that is to use EXPOSE in your Dockerfile:
EXPOSE 8000
Now:
docker build -t test .
Finally you can run:
docker run -d -p 8000:8000 test
This is the last part of my Dockerfile for a Django app:
EXPOSE 8002
COPY entrypoint.sh /code/
WORKDIR /code
ENTRYPOINT ["sh", "entrypoint.sh"]
Then in entrypoint.sh:
#!/bin/bash
# Prepare log files and start outputting logs to stdout
mkdir -p /code/logs
touch /code/logs/gunicorn.log
touch /code/logs/gunicorn-access.log
tail -n 0 -f /code/logs/gunicorn*.log &
export DJANGO_SETTINGS_MODULE=django_docker_azure.settings
exec gunicorn django_docker_azure.wsgi:application \
--name django_docker_azure \
--bind 0.0.0.0:8002 \
--workers 5 \
--log-level=info \
--log-file=/code/logs/gunicorn.log \
--access-logfile=/code/logs/gunicorn-access.log \
"$#"
Hope this could be useful
This works for me:
FROM docker.io/python:3.7
WORKDIR /app
COPY requirements.txt ./
RUN pip install --no-cache-dir -r requirements.txt
ENV GUNICORN_CMD_ARGS="--bind=0.0.0.0 --chdir=./src/"
COPY . .
EXPOSE 8000
CMD [ "gunicorn", "app:app" ]
I was trying to run a Flask app as well. I found out that you can just use
ENTRYPOINT ["gunicorn", "-b", ":8080", "app:APP"]
This will take the file you have specified and run it on the Docker instance. Also, don't forget the shebang at the top, #!/usr/bin/env python, if you are running the debug log level.
gunicorn main:app --workers 4 --bind :3000 --access-logfile '-'
