python3: can't find '__main__' module in 'app.py' - python

I'm trying to create a Docker container so I can build a GUI with Flask for using a TensorFlow model.
The thing is, I would like to be able to modify my Python files in real time and not have to rebuild my container every time.
So for now I've created 3 files:
requirements.txt
Flask
tensorflow
keras
Dockerfile
# Use an official Python runtime as a parent image
FROM python:3.5.6-slim
# Set the working directory to /app
WORKDIR /app
# Copy the current directory contents into the container at /app
ADD . /app
# Install any needed packages specified in requirements.txt
RUN pip install --trusted-host pypi.python.org -r requirements.txt
# Make port 80 available to the world outside this container
EXPOSE 80
# Define environment variable
ENV NAME World
# Run app.py when the container launches
CMD ["python3", "app.py"]
app.py
from flask import Flask
import os
import socket
app = Flask(__name__)
@app.route("/")
def test():
    html = "<h3>Hello {name}!</h3>" \
           "<b>Hostname:</b> {hostname}<br/>"
    return html.format(name=os.getenv("NAME", "world"), hostname=socket.gethostname())

if __name__ == '__main__':
    app.run(host='0.0.0.0', port=80)
So after all this I build my image with this command:
docker build -t modelgui .
And then I use this command to run my container and link the app file I want to modify on the host with the one in the container:
docker run -p 4000:80 -v /home/Documents/modelGUI:/app modelgui
But I get this error and I really don't know why
/usr/local/bin/python3: can't find '__main__' module in 'app.py'
My problem might be dumb to resolve but I'm really stuck here.

Check that /home/Documents/modelGUI in your bind volume mount is the path where your code files reside, and that app.py in that path was not created as a directory rather than a Python file with the code you intend to run.
If app.py in /home/Documents/modelGUI is a directory, then the cause of this problem is that you are not calling your script app.py at all; you are just giving the Python interpreter a nonexistent script name, and when a similarly named directory exists (case-insensitively, in fact), Python tries to execute that directory, looking for a __main__ module inside it.
I've tried to replicate:
$ ls -lFs
Dockerfile
app.py/
requirements.txt
Then called the Python interpreter with app.py:
$ python3 app.py
/usr/local/bin/python3: can't find '__main__' module in 'app.py'
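If you want to verify this on the host side before re-running the container, a quick check (path taken from the question; adjust it to your setup) is:
$ ls -ldF /home/Documents/modelGUI/app.py
A trailing / in the output means app.py is a directory, not the Python file you intend to run.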

Running this locally, it looks like mounting your volume is overwriting your directory:
No volume
docker run -it test_image bash
root@c3870b9845c3:/app# ls
Dockerfile  app.py  requirements.txt
root@c3870b9845c3:/app# python app.py
* Serving Flask app "app" (lazy loading)
* Environment: production
WARNING: Do not use the development server in a production environment.
Use a production WSGI server instead.
* Debug mode: off
* Running on http://0.0.0.0:80/ (Press CTRL+C to quit)
With volume
docker run -it -v ~/Barings_VSTS/modelGUI:/app test_image bash
root@f6349f899079:/app# ls
somefile.txt
root@f6349f899079:/app#
That could be part of the issue. If you want to mount a filesystem in, I would mount it into a different directory. The default bind mount behavior is such that whatever you copied into /app at build time will be hidden by the contents of modelGUI.
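A minimal sketch of the two options, assuming the host path from the question (/home/Documents/modelGUI) is really where app.py and requirements.txt live; the commands are illustrative, not a definitive fix:
# Option 1: keep mounting over /app, but make sure the host directory actually holds the code
$ ls /home/Documents/modelGUI        # should list app.py and requirements.txt
$ docker run -p 4000:80 -v /home/Documents/modelGUI:/app modelgui
# Option 2: mount into a separate directory and run the mounted copy explicitly (overrides CMD)
$ docker run -p 4000:80 -v /home/Documents/modelGUI:/code modelgui python3 /code/app.py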

Related

Could not locate a Flask application when creating a Docker Image

I hope someone can help me with this. The following error comes up when I try to run docker run todolist-docker.
WARNING: This is a development server. Do not use it in a production deployment.
Use a production WSGI server instead.
* Debug mode: off
Error: Could not locate a Flask application. You did not provide the "FLASK_APP" environment variable, and a "wsgi.py" or "app.py" module was not found in the current directory.
My folder directory is here:
Folder name: todoflaskappfinal
__pycache__
static
templates
venv
App.py
Dockerfile
requirements.txt
todo.db
ToDoList.db
Within the todoflaskappfinal folder, I have a Dockerfile:
# syntax=docker/dockerfile:1
FROM python:3.7.8
WORKDIR /tododocker
COPY requirements.txt requirements.txt
RUN pip3 install -r requirements.txt
COPY . .
CMD [ "python3", "-m" , "flask", "run", "--host=0.0.0.0"]
And within App.py, I've set up everything (I assume) correctly, obviously with more code in between.
# Website Configuration
app = Flask(__name__)

if __name__ == "__main__":
    app.run(debug=True, port=5000, host='0.0.0.0')
I've set FLASK_APP as App.py, made the virtual environment with venv, etc. When I type flask run in the terminal it loads the website correctly and serves it on 127.0.0.1. However, when I use the docker build --tag todolist-docker command and then docker run todolist-docker, the error message above appears. Can anyone see what I'm doing wrong?
Is FLASK_APP defined in the docker container? There's no ENV statement in the Dockerfile, and you didn't mention using docker's -e or --env command option. Your container won't inherit your environment variables from the hosting environment.
# syntax=docker/dockerfile:1
FROM python:3.7.8
WORKDIR /tododocker
COPY requirements.txt requirements.txt
RUN pip3 install -r requirements.txt
COPY . .
# Add this:
ENV FLASK_APP=App.py
CMD [ "python3", "-m" , "flask", "run", "--host=0.0.0.0"]

How to run a scrapy spider in a flask app, from a docker container?

When I run my Flask app from a Docker container and call the appropriate endpoints, I receive the error ERR_EMPTY_RESPONSE. The app uses Python's subprocess to run Scrapy from within Flask, as described here (How to integrate Flask & Scrapy?). When I execute the Flask app outside of the Docker container (python app.py, where app.py has my Flask code), everything works as intended and my spiders are called via subprocess.
Instead of using Flask & subprocess to call my spiders within a web app, I also tried the twisted & twisted-klein Python libraries, with the same result when called from a Docker container. I also created a new, clean Scrapy project, meaning no specific code of my own, just the standard Scrapy code and project structure, and got the same error. I am not quite certain whether my approach is an anti-pattern, since Flask and Scrapy are bundled into one image, resulting in one container serving two purposes.
Here is my server.py code. When executing outside a container (using python interpreter) everything works as intended.
When running it from a container, I receive the error message (ERR_EMPTY_RESPONSE).
# server.py
import subprocess
from flask import Flask
from medien_crawler.spiders.firstclassspider import FirstClassSpider
app = Flask(__name__)
@app.route("/")
def return_hello():
    return "Hello!"

@app.route("/firstclass")
def return_firstclass_comments():
    spider_name = "firstclass"
    response = subprocess.call(['scrapy', 'crawl', spider_name, '-a', 'start_url=https://someurl.com'])
    return "OK!"

if __name__ == "__main__":
    app.run(debug=True)
Dockerfile:
FROM python:3
WORKDIR /usr/src/app
COPY requirements.txt ./
RUN pip install --no-cache-dir -r requirements.txt
COPY . .
EXPOSE 5000
CMD [ "python", "./server.py" ]
Finally I run docker run -p 5000:5000 . It does not work. Any ideas?
Please try this.
Dockerfile
FROM python:3.6
RUN apt-get update && apt-get install -y wget
WORKDIR /usr/src/app
ADD . /usr/src/app
RUN pip install --no-cache-dir -r requirements.txt
EXPOSE 5000
CMD [ "python", "./server.py" ]

How to create an image from a Python script (using the Docker SDK) running in a Docker container?

I have two Dockerfiles, test and result. A Python script running in the test container has to create the image for result using the Docker SDK. I have both Dockerfiles and the Python script for test copied into the working directory of test. However, I get an error in the test script saying No such file or directory.
Dockerfile of test:
FROM python:3.7.2-slim
WORKDIR /test
COPY . /test
RUN pip install docker
ENV PYTHONPATH="$PYTHONPATH:/test"
CMD ["python", "test.py"]
test.py:
# the Dockerfile for the result image is named 'result'
import docker

client = docker.from_env()
image = client.images.build(path="/test", dockerfile='result')
container = client.containers.run(image)
When you do
COPY . /test
you copy everything from the folder that contains your docker-compose.yml (the build context).
But when test.py runs, its current working directory is probably a different folder,
so you need to go back up the folder levels (or use an absolute path) to access /test.
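A minimal sketch of making the build path independent of the current working directory, assuming the Dockerfile named result sits next to test.py as in the question (an illustrative adjustment, not necessarily the whole fix):
import os
import docker

# resolve the directory test.py lives in, instead of relying on the current working directory
BASE_DIR = os.path.dirname(os.path.abspath(__file__))

client = docker.from_env()
# images.build() returns a (image, build_logs) tuple in recent Docker SDK versions
image, logs = client.images.build(path=BASE_DIR, dockerfile='result')
container = client.containers.run(image.id, detach=True)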

Docker: not reading file

I'm building a simple app using: Dockerfile, app.py and requirements.txt. When the Dockerfile builds I get the error: "No such file or directory". However, when I change the ADD to COPY in the Dockerfile it works. Do you know why this is?
I'm using the tutorial: https://docs.docker.com/get-started/part2/#define-a-container-with-a-dockerfile
App.py
from flask import Flask
from redis import Redis, RedisError
import os
import socket
# Connect to Redis
redis = Redis(host="redis", db=0, socket_connect_timeout=2, socket_timeout=2)
app = Flask(__name__)
@app.route("/")
def hello():
    try:
        visits = redis.incr("counter")
    except RedisError:
        visits = "<i>cannot connect to Redis, counter disabled</i>"

    html = "<h3>Hello {name}!</h3>" \
           "<b>Hostname:</b> {hostname}<br/>" \
           "<b>Visits:</b> {visits}"
    return html.format(name=os.getenv("NAME", "world"), hostname=socket.gethostname(), visits=visits)

if __name__ == "__main__":
    app.run(host='0.0.0.0', port=80)
requirements.txt
Flask
Redis
Dockerfile
# Use an official Python runtime as a parent image
FROM python:2.7-slim
# Set the working directory to /app
WORKDIR /app
# Copy the current directory contents into the container at /app
ADD . /app
# Install any needed packages specified in requirements.txt
RUN pip install -r requirements.txt
# Make port 80 available to the world outside this container
EXPOSE 80
# Define environment variable
ENV NAME World
# Run app.py when the container launches
CMD ["python", "app.py"]
In the first run, your working directory inside the container is /app, but you copy the contents to /tmp. To correct this behavior, you should copy the contents to /app and it will work fine.
The second one, where you are using ADD, is correct since you are adding the contents to /app, not /tmp.
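In other words, the destination of the copy should match the working directory the CMD runs from; a minimal sketch, assuming /app is the intended directory as in the tutorial Dockerfile:
# Set the working directory to /app
WORKDIR /app
# Copy the build context into the same directory app.py will be run from
COPY . /app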

Can't manage to build and run a bottle.py app

I've been trying to set up a container to run an app with the bottle framework. I've read everything I could find about it, but even so I can't get it working. Here's what I did:
Dockerfile:
# Use an official Python runtime as a parent image
FROM python:2.7
# Set the working directory to /app
WORKDIR /app
# Copy the current directory contents into the container at /app
ADD . /app
# Install any needed packages specified in requirements.txt
RUN pip install -r requirements.txt
# Make port 8080 available to the world outside this container
EXPOSE 8080
# Define environment variable
ENV NAME World
# Run app.py when the container launches
CMD ["python", "app.py"]
app.py:
import os
from bottle import route, run, template
@route('/<name>')
def index(name):
    return template('<b>Hello {{name}}</b>!', name=name)

run(host='localhost', port=8080)
requirements.txt
bottle
By running the command docker build -t testapp I build the image.
Then by running the command docker run -p 8080:8080 testapp I get this terminal output:
Bottle v0.12.13 server starting up (using WSGIRefServer())...
Listening on http://localhost:8080/
Hit Ctrl-C to quit.
But when I go to localhost:8080/testing I get localhost refused connection.
Can anyone point me in the right direction?
Problem is this line:
run(host='localhost', port=8080)
It is exposing the app on "localhost" inside the container where your code runs. You could use the Python library netifaces to get the container's external interface if you want to, but I suggest you set 0.0.0.0 as the host, like:
run(host='0.0.0.0', port=8080)
Then you will be able to access http://localhost:8080/ (assuming your Docker engine is at localhost).
EDIT: mind that your previous container might still be listening on 8080/tcp. Remove or stop the previous container first.
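A short, hedged sequence of the steps implied here (the container ID is a placeholder; your tags may differ):
$ docker ps                        # find the old container still publishing 8080
$ docker rm -f <container_id>      # stop and remove it
$ docker build -t testapp .
$ docker run -p 8080:8080 testapp
Then browse to http://localhost:8080/testing again.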
