I hope someone can help me with this. The following error comes up when I try to run docker run todolist-docker.
WARNING: This is a development server. Do not use it in a production deployment.
Use a production WSGI server instead.
* Debug mode: off
Error: Could not locate a Flask application. You did not provide the "FLASK_APP" environment variable, and a "wsgi.py" or "app.py" module was not found in the current directory.
My folder directory is here:
Folder name: todoflaskappfinal
__pycache__
static
templates
venv
App.py
Dockerfile
requirements.txt
todo.db
ToDoList.db
Within the todoflaskappfinal folder, I have this Dockerfile:
# syntax=docker/dockerfile:1
FROM python:3.7.8
WORKDIR /tododocker
COPY requirements.txt requirements.txt
RUN pip3 install -r requirements.txt
COPY . .
CMD [ "python3", "-m" , "flask", "run", "--host=0.0.0.0"]
And within App.py I've set everything up (I assume) correctly; obviously there is more code between these lines.
# Website Configuration
from flask import Flask
app = Flask(__name__)
if __name__ == "__main__":
    app.run(debug=True, port=5000, host='0.0.0.0')
I've set FLASK_APP to App.py, created the virtual environment with venv, etc. When I type flask run in the terminal, it loads the website correctly and serves it on 127.0.0.1. However, when I use the docker build --tag todolist-docker command and then docker run todolist-docker, the error message above appears. Can anyone see what I'm doing wrong?
Is FLASK_APP defined inside the Docker container? There's no ENV statement in the Dockerfile, and you didn't mention using docker run's -e or --env option. Your container won't inherit environment variables from the host environment.
# syntax=docker/dockerfile:1
FROM python:3.7.8
WORKDIR /tododocker
COPY requirements.txt requirements.txt
RUN pip3 install -r requirements.txt
COPY . .
# Add this:
ENV FLASK_APP=App.py
CMD [ "python3", "-m" , "flask", "run", "--host=0.0.0.0"]
When I run my app.py file I can access it on localhost and it works.
After running (and having no issues): docker build -t flask-container .
When I run: docker run -p 5000:5000 flask-container
I get: from helpers import apology, login_required, usd
ModuleNotFoundError: No module named 'helpers'
In app.py I have: from helpers import apology, login_required, usd
I have tried putting an empty __init__.py file in the main folder, but it still doesn't work.
Question: How do I fix the ModuleNotFoundError when running the app with Docker?
Dockerfile
FROM python:3.8-alpine
# By default, listen on port 5000
EXPOSE 5000/tcp
# Set the working directory in the container
WORKDIR /app
# Copy the dependencies file to the working directory
COPY requirements.txt .
# Install any dependencies
RUN pip install -r requirements.txt
# Copy the content of the local src directory to the working directory
COPY app.py .
# Specify the command to run on container start
CMD [ "python", "./app.py" ]
requirements.txt
flask===2.1.0
flask_session
Python Version: 3.10.5
You need to copy helpers.py into the working directory as well:
COPY helpers.py .
OR
ADD . /app
# This will add all files from the current local directory to /app
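For example, the relevant part of the Dockerfile could look like this (a sketch based on your existing Dockerfile, only adding the extra COPY line):
FROM python:3.8-alpine
EXPOSE 5000/tcp
WORKDIR /app
COPY requirements.txt .
RUN pip install -r requirements.txt
COPY app.py .
# Copy helpers.py as well so that 'from helpers import ...' resolves inside the container
COPY helpers.py .
CMD [ "python", "./app.py" ]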
I started Sentry using the recommended method for aiohttp, as follows. When I start my script with python [script name], it works like a charm. However, when I start the same server inside a minimal Docker environment (FROM python:3.8), it never captures errors. Is there a problem with Sentry's officially recommended setup?
import sentry_sdk
from sentry_sdk.integrations.aiohttp import AioHttpIntegration

# Sentry
sentry_sdk.init(
    dsn="https://xxxx.ingest.sentry.io/12345",
    integrations=[AioHttpIntegration()]
)
The server is running correctly, so it can't be that the library is missing. Indeed, it's in requirements.txt:
sentry-sdk==0.14.3
The Dockerfile couldn't be simpler:
FROM python:3.8
COPY . /app
WORKDIR /app
RUN pip install -r requirements.txt
EXPOSE 5000
CMD [ "python", "file.py" ]
So I have a Dockerfile that I intend to build and push to Google Cloud Run, and it looks like this:
# pull official base image
FROM python:3.7-slim
# set work directory
WORKDIR /usr/src/app
# set environment variables
ENV PYTHONDONTWRITEBYTECODE 1
ENV PYTHONUNBUFFERED 1
# install dependencies
RUN pip install --upgrade pip
COPY ./requirements.txt /usr/src/app/requirements.txt
RUN pip install -r requirements.txt
# copy project
COPY . /usr/src/app/
CMD python manage.py runserver 0.0.0.0:$PORT
The idea is that once I push it to Cloud Run, my Django project will run on 0.0.0.0:$PORT, where the value of the environment variable $PORT is set by Google Cloud Run automatically.
I tried to run a container of this image locally to see if it works. I set $PORT to 80, and when I run a container of the Docker image I get the following:
"CommandError: "0.0.0.0:" is not a valid port number or address:port pair."
Looking at other answers, such as this.
I understand that 0.0.0.0 is a placeholder for the public IP address of a given machine. My question is, why do I get the "CommandError" when I run docker run [DockerImage] locally?
If there are any other questions, please let me know and I will clarify.
Edit:
I also want to point out that I am following this tutorial
Setting the environment variable on the host machine doesn't set it inside the container.
The command python manage.py runserver 0.0.0.0:$PORT is run inside the container, where $PORT is not set, so it expands to python manage.py runserver 0.0.0.0:.
Try docker run -e PORT=$PORT <your_image> to set the PORT environment variable inside the container to the value from the host machine.
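For example, to test locally on port 80 (a sketch; <your_image> is a placeholder for your image name):
# Pass the host's PORT value into the container and publish the same port
export PORT=80
docker run -e PORT=$PORT -p $PORT:$PORT <your_image>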
You have not set the PORT environment variable in the Dockerfile, and that's the reason it fails to resolve $PORT during CMD execution.
You can update your Dockerfile as follows (I have used 8080 as an example; change it to the port you need):
# pull official base image
FROM python:3.7-slim
# set work directory
WORKDIR /usr/src/app
# set environment variables
ENV PYTHONDONTWRITEBYTECODE 1
ENV PYTHONUNBUFFERED 1
ENV PORT 8080
# install dependencies
RUN pip install --upgrade pip
COPY ./requirements.txt /usr/src/app/requirements.txt
RUN pip install -r requirements.txt
# copy project
COPY . /usr/src/app/
CMD python manage.py runserver 0.0.0.0:$PORT
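With that default baked in, a local test run could look like this (a sketch; replace <your_image> with your image name):
docker run -p 8080:8080 <your_image>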
Hope this helps
My docker-compose.yml file:
version: '3'
services:
  dash:
    build: ./docker
    environment:
      - COMPOSE_CONVERT_WINDOWS_PATHS=false
    ports:
      - "5000:5000"
    volumes:
      - c:/Users:/data
Dockerfile
FROM python:3
WORKDIR /data
COPY requirements.txt ./
RUN pip install --no-cache-dir -r requirements.txt
COPY app.py ./
CMD [ "python", "./app.py" ]
Doing a simple COPY command in the Dockerfile throws an error when the file is in a folder (not at the same level as the Dockerfile).
My folder structure:
- docker
  - Dockerfile
  - requirements.txt
- app
  - app.py
- docker-compose.yml
You got the error because the Docker build context directory ./docker on your host does not contain app.py.
Make sure the ./docker folder contains the app.py file.
If you know the correct directory that contains the app.py file, then specify that directory as the build context.
build: /path/to/build/context
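For example, if you want to keep the Dockerfile in the docker folder but make the project root the build context (so that the app folder is available to COPY), the service definition could look like this. This is only a sketch using the Compose build options:
dash:
  build:
    context: .
    dockerfile: docker/Dockerfile
Note that COPY paths in the Dockerfile are resolved relative to the context, so with the root as context they would need adjusting (e.g. COPY docker/requirements.txt ./ and COPY app/app.py ./).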
More info about build context here.
To learn what exactly the Docker build context is, check this.
Hope this helps.
Update:
After checking your folder structure, it seems app/app.py is outside of the ./docker directory, which is your build context.
Bring the app directory inside the docker folder and change the COPY command to COPY app/app.py ./. Also change CMD to CMD [ "python", "/data/app.py" ].
Using COPY and ADD, you can only use source files that are in the same folder as the Dockerfile, or in sub-folders:
COPY obeys the following rules:
The path must be inside the context of the build; you cannot COPY ../something /something, because the first step of a docker build
is to send the context directory (and subdirectories) to the docker
daemon.
(https://docs.docker.com/engine/reference/builder/#copy)
In your case, app.py is in a sibling folder of docker, which is the base directory of your build context. You'll need to move app.py somewhere within the docker folder. For example:
- docker
  - Dockerfile
  - requirements.txt
  - app
    - app.py
- docker-compose.yml
And adjust your Dockerfile:
WORKDIR /data
COPY requirements.txt ./
RUN pip install --no-cache-dir -r requirements.txt
COPY app/app.py ./
CMD [ "python", "./app.py" ]
I'm trying to create a Docker container so that I can build a GUI with Flask for using a TensorFlow model.
The thing is that I would like to be able to modify my Python files in real time and not have to rebuild my container every time.
So for now I've created 3 files:
requirements.txt
Flask
tensorflow
keras
Dockerfile
# Use an official Python runtime as a parent image
FROM python:3.5.6-slim
# Set the working directory to /app
WORKDIR /app
# Copy the current directory contents into the container at /app
ADD . /app
# Install any needed packages specified in requirements.txt
RUN pip install --trusted-host pypi.python.org -r requirements.txt
# Make port 80 available to the world outside this container
EXPOSE 80
# Define environment variable
ENV NAME World
# Run app.py when the container launches
CMD ["python3", "app.py"]
app.py
from flask import Flask
import os
import socket
app = Flask(__name__)
#app.route("/")
def test():
html = "<h3>Hello {name}!</h3>" \
"<b>Hostname:</b> {hostname}<br/>"
return html.format(name=os.getenv("NAME", "world"), hostname=socket.gethostname())
if __name__ == '__main__':
app.run(host='0.0.0.0', port=80)
So after all this I build my container with this command
docker build -t modelgui .
And then I use this command to run my container and create a link between the app file I want to modify on the host and the one in the container:
docker run -p 4000:80 -v /home/Documents/modelGUI:/app modelgui
But I get this error and I really don't know why
/usr/local/bin/python3: can't find '__main__' module in 'app.py'
My problem might be dumb to resolve but I'm really stuck here.
Check that /home/Documents/modelGUI in your bind mount is the path where your code files reside, and that app.py at that path was not created as a directory rather than as a Python file containing the code you intend to run.
If app.py in /home/Documents/modelGUI is a directory, then the cause of this problem is that you are not calling your script app.py at all; you are just giving the Python interpreter a nonexistent script name, and if a similarly named directory exists (case-insensitive, actually), it tries to execute that instead.
I've tried to replicate:
$ ls -lFs
Dockerfile
app.py/
requirements.txt
Then called the Python interpreter with app.py:
$ python3 app.py
/usr/local/bin/python3: can't find '__main__' module in 'app.py'
Running this locally, it looks like mounting your volume is overwriting your directory:
No volume
docker run -it test_image bash
root@c3870b9845c3:/app# ls
Dockerfile app.py requirements.txt
root@c3870b9845c3:/app# python app.py
* Serving Flask app "app" (lazy loading)
* Environment: production
WARNING: Do not use the development server in a production environment.
Use a production WSGI server instead.
* Debug mode: off
* Running on http://0.0.0.0:80/ (Press CTRL+C to quit)
With volume
docker run -it -v ~/Barings_VSTS/modelGUI:/app test_image bash
root@f6349f899079:/app# ls
somefile.txt
root@f6349f899079:/app#
That could be part of the issue. If you want to mount a filesystem in, I would mount it into a different directory. The default bind-mount behavior is that whatever you copied into /app at build time gets hidden by the contents of modelGUI.
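For example, you could leave the baked-in /app untouched and mount the host code at a different path, then run the mounted copy. A sketch, where the target directory /src is an arbitrary choice:
# Mount the host code at /src instead of shadowing /app, and run the mounted app.py
docker run -p 4000:80 -v /home/Documents/modelGUI:/src modelgui python3 /src/app.py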