I have the following simple Cloud Run service from the Python quickstart:
app.py:
import os
from flask import Flask

app = Flask(__name__)

@app.route('/')
def hello_world():
    return 'Hello World!\n'

if __name__ == "__main__":
    app.run(debug=True, host='0.0.0.0', port=int(os.environ.get('PORT', 8080)))
Dockerfile:
FROM python:3.7
ENV APP_HOME /app
WORKDIR $APP_HOME
COPY . .
RUN pip install Flask
CMD python app.py
How can I run & test this locally?
As with any other Dockerfile, you can use these two commands to build your image and then run it locally:
$ docker build -t your_service .
$ docker run --rm -p 8080:8080 -e PORT=8080 your_service
It's important to specify the PORT environment variable here, and ensure that your app uses it appropriately.
Afterwards, your service will be running on http://localhost:8080
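The port lookup in app.py above can be sanity-checked on its own; a minimal sketch of the fallback behavior (no Flask or Docker needed):

```python
import os

# Mirrors the quickstart's lookup: use $PORT if set, otherwise fall back to 8080
def resolve_port():
    return int(os.environ.get('PORT', 8080))

os.environ.pop('PORT', None)
print(resolve_port())        # 8080 (fallback)

os.environ['PORT'] = '5000'
print(resolve_port())        # 5000 (taken from the environment)
```

This is why `-e PORT=8080` in the `docker run` command and the `-p 8080:8080` mapping have to agree: the app listens wherever `PORT` points.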
Related
I am trying to create a Flask app running within a Docker container, but I have some complex dependencies, so I am building from this image: https://hub.docker.com/r/continuumio/anaconda/.
I build the image (goes fine... environment works...):
docker build -t my_image:latest .
Then I try to run it:
docker run --name my_image -p 80:5000 --rm my_image:latest
I get this error:
./boot.sh: 2: exec: gunicorn: not found
Here is my directory I am building from:
my_template
--api.py
--boot.sh
--environment.yml
--Dockerfile
I have a very simple flask app.
api.py
from flask import Flask, jsonify

app = Flask(__name__)

@app.route('/', methods=['GET'])
def hello_world():
    return jsonify({'message': 'Hello World'})

@app.route('/test', methods=['GET'])
def test():
    return jsonify({'test': 'test'})

if __name__ == "__main__":
    app.run(debug=True)  # remember to set debug to False
environment.yml is as follows:
name: ox
channels:
- conda-forge
- defaults
dependencies:
{---OMITTED FOR BREVITY---}
prefix: /home/me/anaconda3/envs/ox
Dockerfile is as follows:
FROM continuumio/miniconda:latest
WORKDIR /home/conda_flask_docker
COPY environment.yml ./
COPY api.py ./
COPY boot.sh ./
RUN chmod +x boot.sh
RUN conda env create -f environment.yml
RUN echo "source activate ox" > ~/.bashrc
ENV PATH /opt/conda/envs/ox/bin:$PATH
EXPOSE 5000
ENTRYPOINT ["./boot.sh"]
And... boot.sh
#!/bin/sh
exec gunicorn -b :5000 --access-logfile - --error-logfile - api:app
Solved this on my own...
I needed to add
- flask=1.1.2=pyh9f0ad1d_0
- gunicorn=20.0.4=py38h32f6830_1
to the dependencies section of environment.yml. I had generated my environment.yml with conda env export > environment.yml BEFORE adding Flask and gunicorn to the environment.
If that doesn't fix it, installing gunicorn via a RUN command in the Dockerfile should work.
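A hedged sketch of that Dockerfile fallback, assuming the env name ox and the /opt/conda/envs/ox path from the Dockerfile above:

```dockerfile
# Alternative if environment.yml can't be regenerated: install directly
# into the ox env (path matches the ENV PATH line in the Dockerfile above)
RUN /opt/conda/envs/ox/bin/pip install flask gunicorn
```

Installing with the env's own pip ensures gunicorn lands on the PATH that boot.sh sees.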
I am studying Docker.
I am using Docker with a Dockerfile to run a Python server.
This is my Python file, app.py:
from flask import Flask

app = Flask(__name__)

@app.route('/')
def hello():
    return "hello docker"

if __name__ == '__main__':
    app.run(host="0.0.0.0", port=5000)
I am running this in VirtualBox with CentOS 7.
If I don't add
host="0.0.0.0", port=5000
I cannot connect to http://192.168.1.6:5000/ (this is my VM's IP).
But now, when I use a Dockerfile and run the container, I can't connect to the server.
This is my Dockerfile:
FROM python:2.7
LABEL maintainer="me <me@gmail.com>"
RUN pip install flask
COPY app.py /app/
WORKDIR /app
EXPOSE 5000
CMD ["python","app.py"]
And when I inspect the container:
"Gateway": "172.17.0.1",
"IPAddress": "172.17.0.2",
The container starts successfully, but I can't connect to http://192.168.1.6:5000/.
I also set the firewall to open port 5000 and reloaded it.
Why can't I connect to my VM?
EXPOSE doesn't actually publish the port. You should run with the -p option in order to publish and map the port:
docker run --detach -p 5000:5000 <image>
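Publishing with -p is one half of it; the other common gotcha is the bind address inside the container. A rough stdlib sketch of the difference (no Flask or Docker involved):

```python
import socket

# Bind two listening sockets: one loopback-only, one on all interfaces.
# A containerized Flask app must bind 0.0.0.0, or the port published
# with -p will refuse connections from outside the container.
loopback = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
loopback.bind(("127.0.0.1", 0))

everywhere = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
everywhere.bind(("0.0.0.0", 0))

print(loopback.getsockname()[0])    # 127.0.0.1: reachable only from inside
print(everywhere.getsockname()[0])  # 0.0.0.0: reachable via the published port

loopback.close()
everywhere.close()
```

In the question's app.py, `app.run(host="0.0.0.0", port=5000)` is the piece that corresponds to the second socket.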
When I run my Flask app from a Docker container and call the endpoints it defines, I receive the error ERR_EMPTY_RESPONSE. The app uses Python's subprocess to run Scrapy within Flask, as described here (How to integrate Flask & Scrapy?). When I execute the Flask app outside of the Docker container (python app.py, where app.py contains my Flask code), everything works as intended and my spiders are called using subprocess.
Instead of using Flask and subprocess to call my spiders within a web app, I tried the twisted and twisted-klein Python libraries, with the same result when called from a Docker container. I have also created a new, clean Scrapy project (no code of my own, just the standard Scrapy code and project structure), which resulted in the same error. I am not quite certain whether my approach is an anti-pattern, since Flask and Scrapy are bundled into one image, resulting in one container serving two purposes.
Here is my server.py code. When executing outside a container (using python interpreter) everything works as intended.
When running it from a container, then I receive the error message (ERR_EMPTY_RESPONSE).
# server.py
import subprocess
from flask import Flask
from medien_crawler.spiders.firstclassspider import FirstClassSpider

app = Flask(__name__)

@app.route("/")
def return_hello():
    return "Hello!"

@app.route("/firstclass")
def return_firstclass_comments():
    spider_name = "firstclass"
    response = subprocess.call(['scrapy', 'crawl', spider_name, '-a', 'start_url=https://someurl.com'])
    return "OK!"

if __name__ == "__main__":
    app.run(debug=True)
My Dockerfile:
FROM python:3
WORKDIR /usr/src/app
COPY requirements.txt ./
RUN pip install --no-cache-dir -r requirements.txt
COPY . .
EXPOSE 5000
CMD [ "python", "./server.py" ]
Finally I run docker run -p 5000:5000 . It does not work. Any ideas?
Please try this:
Dockerfile:
FROM python:3.6
RUN apt-get update && apt-get install -y wget
WORKDIR /usr/src/app
ADD . /usr/src/app
RUN pip install --no-cache-dir -r requirements.txt
EXPOSE 5000
CMD [ "python", "./server.py" ]
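Independent of the base image, the `subprocess.call` in the question's view can be exercised in isolation to rule out the crawl step; a minimal stdlib sketch (the Python one-liner stands in for the `scrapy crawl` invocation):

```python
import subprocess
import sys

# subprocess.call blocks until the child process exits and returns its
# exit code; the view in the question returns "OK!" no matter what, so a
# failing spider would go unnoticed unless the return code is checked.
returncode = subprocess.call([sys.executable, "-c", "print('pretend crawl')"])
print(returncode)  # 0 on success
```

A non-zero return code here would point at the spider itself rather than at Flask or Docker.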
I'm building a simple app using: Dockerfile, app.py and requirements.txt. When the Dockerfile builds I get the error: "No such file or directory". However, when I change the ADD to COPY in the Dockerfile it works. Do you know why this is?
I'm using the tutorial: https://docs.docker.com/get-started/part2/#define-a-container-with-a-dockerfile
app.py
from flask import Flask
from redis import Redis, RedisError
import os
import socket

# Connect to Redis
redis = Redis(host="redis", db=0, socket_connect_timeout=2, socket_timeout=2)

app = Flask(__name__)

@app.route("/")
def hello():
    try:
        visits = redis.incr("counter")
    except RedisError:
        visits = "<i>cannot connect to Redis, counter disabled</i>"

    html = "<h3>Hello {name}!</h3>" \
           "<b>Hostname:</b> {hostname}<br/>" \
           "<b>Visits:</b> {visits}"
    return html.format(name=os.getenv("NAME", "world"), hostname=socket.gethostname(), visits=visits)

if __name__ == "__main__":
    app.run(host='0.0.0.0', port=80)
requirements.txt
Flask
Redis
Dockerfile
# Use an official Python runtime as a parent image
FROM python:2.7-slim
# Set the working directory to /app
WORKDIR /app
# Copy the current directory contents into the container at /app
ADD . /app
# Install any needed packages specified in requirements.txt
RUN pip install -r requirements.txt
# Make port 80 available to the world outside this container
EXPOSE 80
# Define environment variable
ENV NAME World
# Run app.py when the container launches
CMD ["python", "app.py"]
In your first attempt, the working directory inside the container is /app, but you copy the contents to /tmp. To correct this behavior, copy the contents to /app and it will work fine.
The second one, where you use ADD, is correct since you are adding the contents to /app, not /tmp.
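For ordinary files in the build context, COPY and ADD behave the same; ADD additionally auto-extracts local tar archives, which is why COPY is generally preferred for plain copies. A sketch, with a hypothetical archive name:

```dockerfile
# Equivalent for ordinary files and directories:
COPY . /app
# ADD . /app

# ADD also unpacks local tar archives into the destination, e.g.:
# ADD vendored-deps.tar.gz /app/vendor/
```

So the "No such file or directory" error usually comes from a wrong source path or destination, not from ADD versus COPY themselves.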
I have a Swagger server API in Python that I can run on my PC and easily access through the web UI. I'm now trying to run this API in a Docker container on a remote server. After running docker run on the remote server, everything seems to be working, but when I try to connect I get an ERR_CONNECTION_REFUSED response. The funny thing is that if I enter the container, the Swagger server is working and answers my requests.
Here is my Dockerfile:
FROM python:3
MAINTAINER Me
ADD . /myprojectdir
WORKDIR /myprojectdir
RUN pip install -r requirements.txt
RUN ["/bin/bash", "-c", "chmod 777 {start.sh,stop.sh,restart.sh,test.sh}"]
Here are my commands to build/run:
sudo docker build -t mycontainer .
sudo docker run -d -p 33788:80 mycontainer ./start.sh
Here is the start.sh script:
#!/bin/bash
echo $'\r' >> log/server_log_`date +%Y%m`.dat
python3 -m swagger_server >> log/server_log_`date +%Y%m`.dat 2>&1
And the main.py of the swagger server:
#!/usr/bin/env python3
import connexion
from .encoder import JSONEncoder

if __name__ == '__main__':
    app = connexion.App(__name__, specification_dir='./swagger/')
    app.app.json_encoder = JSONEncoder
    app.add_api('swagger.yaml', arguments={'title': 'A title'})
    app.run(port=80, threaded=True, debug=False)
Does anyone know why I can't access 'myremoteserver:33788/myservice/ui' and what I should change to solve it?
Thanks in advance
I finally managed to find the solution. You need to tell the Flask server used by Connexion to run on 0.0.0.0, so that non-local connections are allowed, and to change the URL in swagger.yaml to the name of the server where the Docker container is located:
app.run(port=80, threaded=True, debug=False, host='0.0.0.0')