I have a python script that will take a single argument. The script makes calls to 3rd party APIs, which obviously need to be server-side because of the API keys.
I want to make a website that makes an ajax call (or similar) to run the python script and give it some browser-side data.
I thought this would be a good project to use Docker for, but I can't find anything to indicate that this will work.
Is python not the right tool? Is Docker not the right tool?
Any help is greatly appreciated!
Edit (also in comments)
I've deployed a test image/service/cluster successfully on DigitalOcean, but I don't see how I can configure the endpoint to be RESTful
Edit (for source code)
To test the setup, I've been using a Docker-provided tutorial.
Dockerfile
# Use an official Python runtime as a parent image
FROM python:3.6.2-slim
WORKDIR /app
ADD . /app
RUN pip install -r requirements.txt
EXPOSE 80
ENV NAME World
CMD ["python", "app.py"]
docker-compose.yml
version: "3"
services:
web:
# replace username/repo:tag with your name and image details
image: username/repo:latest
deploy:
replicas: 5
resources:
limits:
cpus: "0.1"
memory: 50M
restart_policy:
condition: on-failure
ports:
- "80:80"
networks:
- webnet
networks:
webnet:
app.py
from flask import Flask
from redis import Redis, RedisError
import os
import socket

# Connect to Redis
redis = Redis(host="redis", db=0, socket_connect_timeout=2, socket_timeout=2)

app = Flask(__name__)

@app.route("/")
def hello():
    try:
        visits = redis.incr("counter")
    except RedisError:
        visits = "<i>cannot connect to Redis, counter disabled</i>"

    html = "<h3>Hello {name}!</h3>" \
           "<b>Hostname:</b> {hostname}<br/>" \
           "<b>Visits:</b> {visits}"
    return html.format(name=os.getenv("NAME", "world"), hostname=socket.gethostname(), visits=visits)

if __name__ == "__main__":
    app.run(host='0.0.0.0', port=80)
It suddenly seems like I probably just need to change @app.route("/")...
Short answer: yes, it's possible.
I recommend you first run your Python code outside of a container and try to connect to the droplet before putting it in Docker, just to make sure it works.
Once you have the basic connection working, you can move into a container, and build out the REST API. (or GraphQL, if you're into that)
Containers aren't RESTful; the app they host can be. It looks like you don't have any REST endpoints yet, though.
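As a sketch of what one endpoint might look like, this accepts the browser-side data as JSON and hands it to your existing script. The /run route, my_script.py, and the "arg" field are placeholders, not your actual code:

import subprocess
from flask import Flask, jsonify, request

app = Flask(__name__)

@app.route("/run", methods=["POST"])
def run_script():
    # Data sent by the browser's AJAX call
    payload = request.get_json(force=True)
    arg = payload.get("arg", "")
    # Hand it to the existing single-argument script; API keys stay server-side
    result = subprocess.run(["python", "my_script.py", arg], stdout=subprocess.PIPE)
    return jsonify({"output": result.stdout.decode()})

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=80)

The browser would then POST its JSON payload to http://your-host/run.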
Anyway, you've mapped the port:
ports:
- "80:80"
And the app is running there:
if __name__ == "__main__":
app.run(host='0.0.0.0', port=80)
Assuming you have opened your firewall to allow port 80 on DigitalOcean, that's all you need.
Just fire up your http://digital-ocean.address.
Is python not the right tool?
Use whatever you're comfortable with. For example, if you just want to make HTTP requests, a plain shell script that runs cURL commands and echoes the responses would work.
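A rough sketch of that shell approach (the URL and auth header are placeholders, since your third-party API will have its own scheme):

#!/bin/sh
# Call the third-party API with the server-side key and echo the response
curl -s -H "Authorization: Bearer $API_KEY" "https://api.example.com/endpoint"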
Is Docker not the right tool?
It's not required. Even if you have keys, placing them within a container just adds a layer of indirection, not obfuscation.
Related
Forgive my ignorance...
I'm trying to learn how to schedule python scripts with Google Cloud. After a bit of research, I've seen many people suggest Docker + Google Cloud Run + Cloud Scheduler. I've attempted to get a "hello world" example working, to no avail.
Code
hello.py
print("hello world")
Dockerfile
# For more information, please refer to https://aka.ms/vscode-docker-python
FROM python:3.8-slim
# Keeps Python from generating .pyc files in the container
ENV PYTHONDONTWRITEBYTECODE=1
# Turns off buffering for easier container logging
ENV PYTHONUNBUFFERED=1
WORKDIR /app
COPY . /app
# Creates a non-root user with an explicit UID and adds permission to access the /app folder
# For more info, please refer to https://aka.ms/vscode-docker-python-configure-containers
RUN adduser -u 5678 --disabled-password --gecos "" appuser && chown -R appuser /app
USER appuser
# During debugging, this entry point will be overridden. For more information, please refer to https://aka.ms/vscode-docker-python-debug
CMD ["python", "hello.py"]
Steps
Create a repo with Google Cloud Artifact Registry
gcloud artifacts repositories create test-repo --repository-format=docker \
--location=us-central1 --description="My test repo"
Build the image
docker image build --pull --file Dockerfile --tag 'testdocker:latest' .
Configure auth
gcloud auth configure-docker us-central1-docker.pkg.dev
Tag the image with a registry name
docker tag testdocker:latest \
us-central1-docker.pkg.dev/gormanalysis/test-repo/testdocker:latest
Push the image to Artifact Registry
docker push us-central1-docker.pkg.dev/gormanalysis/test-repo/testdocker:latest
Deploy to Google Cloud Run
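For the deploy step, either the Cloud Run console or a gcloud command roughly like the following can be used (the service name here is just an example):

gcloud run deploy testdocker \
  --image us-central1-docker.pkg.dev/gormanalysis/test-repo/testdocker:latest \
  --region us-central1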
Error
At this point, I get the error
The user-provided container failed to start and listen on the port defined provided by the PORT=8080 environment variable.
I've seen posts like this which say to add
app.run(port=int(os.environ.get("PORT", 8080)),host='0.0.0.0',debug=True)
but this looks like a Flask thing, and my script doesn't use Flask. I feel like I have a fundamental misunderstanding of how this is supposed to work. Any help would be appreciated.
UPDATE
I've documented my problem and solution in much more detail here.
I had been trying to deploy my script as a Cloud Run Service. I should've tried deploying it as a Cloud Run Job. The difference is that Cloud Run services require your script to listen on a port; jobs do not.
Confusingly, you cannot deploy a Cloud Run job directly from Artifact Registry; you have to start from the Cloud Run dashboard.
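That said, the gcloud CLI can also create and execute a job from a pushed image; a rough sketch, with a made-up job name (older SDK versions may need gcloud beta run jobs):

gcloud run jobs create hello-job \
  --image us-central1-docker.pkg.dev/gormanalysis/test-repo/testdocker:latest \
  --region us-central1
gcloud run jobs execute hello-job --region us-central1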
Your Flask application should look something like this:
import os
from flask import Flask

app = Flask(__name__)

@app.route("/")
def hello_world():
    return "Hello World!"

if __name__ == "__main__":
    app.run(debug=True, host="0.0.0.0", port=int(os.environ.get("PORT", 8080)))
See the official documentation for step-by-step instructions: Deploy a Python service to Cloud Run
There is also the Cloud Code IDE plugin, which makes testing and deployment easy. I am using it with VS Code; once the initial setup and permissions are taken care of, a few clicks let you run locally, debug, and deploy Cloud Run services from your local machine.
I have a complex API which takes around 7GB memory when I deploy it using Uvicorn.
I want to understand how I can deploy it in such a way that I can make parallel requests to it. The deployed API should be capable of processing two or three requests at the same time.
I am using FastAPI with uvicorn and nginx for deployment. Here is my deployment command:
uvicorn --host 0.0.0.0 --port 8888
Can someone provide some clarity on how people achieve that?
You can use gunicorn instead of uvicorn to handle your backend. Gunicorn offers multiple workers to effectively load-balance the arriving requests. This means that you will have as many running gunicorn worker processes as you specify to receive and process requests. From the docs, gunicorn should only need 4-12 worker processes to handle hundreds or thousands of requests per second. However, the number of workers should be no more than (2 x number_of_cpu_cores) + 1 to avoid out-of-memory errors. You can check this out in the docs.
For example, if you want to use 4 workers for your FastAPI-based backend, you can specify it with the -w flag:
gunicorn main:app -w 4 -k uvicorn.workers.UvicornWorker -b "0.0.0.0:8888"
In this case, the module that contains my backend functionality is called main, and the FastAPI instance is named app.
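For context, a minimal sketch of such a main.py (your real app will have its own routes):

from fastapi import FastAPI

app = FastAPI()

@app.get("/")
def read_root():
    # Replace with your actual endpoints
    return {"status": "ok"}

Keep in mind that each worker is a separate process with its own copy of the app in memory, so with a 7GB app the worker count multiplies memory usage accordingly.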
I'm working on something like this using Docker and NGINX.
There's an official Docker image, created by the developer of FastAPI, that deploys Uvicorn/Gunicorn for you and can be configured to your needs:
https://hub.docker.com/r/tiangolo/uvicorn-gunicorn-fastapi
It took some time to get the hang of Docker, but I'm really liking it now. You can build an nginx image using the configuration below, and then build as many copies of your app as you need, each in its own container, to serve as hosts.
The example below runs a weighted load balancer across two of my app services, with a third as backup if those two should fail.
nginx Dockerfile:
FROM nginx
# Remove the default nginx.conf
RUN rm /etc/nginx/conf.d/default.conf
# Replace with our own nginx.conf
COPY nginx.conf /etc/nginx/conf.d/
nginx.conf:
upstream loadbalancer {
    server 192.168.115.5:8080 weight=5;
    server 192.168.115.5:8081;
    server 192.168.115.5:8082 backup;
}

server {
    listen 80;
    location / {
        proxy_pass http://loadbalancer;
    }
}
app Dockerfile:
FROM tiangolo/uvicorn-gunicorn-fastapi:python3.7
RUN pip install --upgrade pip
WORKDIR /app
COPY requirements.txt .
RUN pip install -r requirements.txt
COPY . /app
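To make the upstream addresses in nginx.conf reachable, each copy of the app is published on the corresponding host port; roughly like this (image and container names are illustrative, and the tiangolo image listens on port 80 inside the container):

docker build -t myapp .
docker run -d --name app1 -p 8080:80 myapp
docker run -d --name app2 -p 8081:80 myapp
docker run -d --name app3 -p 8082:80 myapp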
I am currently running my django app inside the docker container by using the below command
docker-compose run app sh -c "python manage.py runserver"
but I am not able to access the app with the localhost URL (not using any additional DB server, nginx, or gunicorn, just simply running the Django development server inside Docker).
Please let me know how to access the app.
docker-compose run is intended to launch a utility container based on a service in your docker-compose.yml as a template. It intentionally does not publish the ports: declared in the Compose file, and you shouldn't need it to run the main service.
docker-compose up should be your go-to call for starting the services. Just docker-compose up on its own will start everything in the docker-compose.yml, concurrently, in the foreground; you can add -d to start the processes in the background, or a specific service name docker-compose up app to only start the app service and its dependencies.
The python command itself should be the main CMD in your image's Dockerfile. You shouldn't need to override it in your docker-compose.yml file or to provide it at the command line.
A typical Compose YAML file might look like:
version: '3.8'
services:
  app:
    build: .          # from the Dockerfile in the current directory
    ports:
      - 5000:5000     # make localhost:5000 forward to port 5000 in the container
While Compose supports many settings, you do not need to provide most of them. Compose provides reasonable defaults for container_name:, hostname:, image:, and networks:; expose:, entrypoint:, and command: will generally come from your Dockerfile and don't need to be overridden.
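For reference, a minimal Dockerfile sketch to go with that Compose file, assuming a standard Django project layout (file names are illustrative):

FROM python:3.9-slim
WORKDIR /app
COPY requirements.txt .
RUN pip install -r requirements.txt
COPY . .
EXPOSE 5000
# Bind to 0.0.0.0 so the published port is reachable from outside the container
CMD ["python", "manage.py", "runserver", "0.0.0.0:5000"]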
Try 0.0.0.0:<PORT_NUMBER> (typically 80 or 8000). If you are still having trouble connecting to the server, you should use the Docker Machine IP instead of localhost. Enter the following in a terminal and navigate to the provided URL:
docker-machine ip
I've built a Flask app that is designed to run inside a Docker container. It accepts POST requests and returns the appropriate JSON response if the header key matches the key that I put in the docker-compose environment.
...
environment:
  - SECRET_KEY=fakekey123
...
The problem comes with testing: the app, or the Flask client fixture (pytest), of course can't find the docker-compose environment, because the app wasn't started from docker-compose but from pytest.
secret_key = os.environ.get("SECRET_KEY")
# ^^ the key loaded to OS env by docker-compose
post_key = headers.get("X-Secret-Key")
...
if post_key == secret_key:
    RETURN APPROPRIATE RESPONSE
.....
What is the (best/recommended) approach to this problem?
I found some plugins (one, two, three) that do this, but I'm asking here whether there is a simpler or more common approach, because I also want to automate this test using CI/CD tools.
You most likely need to run py.test from inside of your container. If you are running locally, then there's going to be a conflict between what your host machine is seeing and what your container is seeing.
So option #1 would be to use docker exec:
$ docker exec -it $containerid py.test
Then option #2 would be to create a script or task in your setup.py so that you can run a simpler command like:
$ python setup.py test
My current solution is to mock the function that reads the OS environment. The OS env is only loaded if the app was started using Docker, so to make the test easy, I just mock that function.
def fake_secret_key(self):
    return "ffakefake11"

def test_app(self, client):
    app.secret_key = self.fake_secret_key
    # ^^ real func        ^^ fake func
Or another alternative is using pytest-env, as @bufh suggested in a comment.
Create a pytest.ini file, then put:
[pytest]
env =
    APP_KEY=ffakefake11
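A plugin-free alternative is pytest's built-in monkeypatch fixture, which sets the variable for a single test and undoes it afterwards. A sketch, assuming the app reads SECRET_KEY at request time rather than at import time (the route and expected status code are placeholders):

def test_app_with_key(client, monkeypatch):
    # Set the env var for this test only; pytest restores it automatically
    monkeypatch.setenv("SECRET_KEY", "ffakefake11")
    response = client.post("/", headers={"X-Secret-Key": "ffakefake11"})
    assert response.status_code == 200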
I am having issues with getting data back from a docker-selenium container, via a Flask application (also dockerized).
When I have the Flask application running in one container, I get the following error at http://localhost:5000; the app talks to Selenium through a Remote driver pointed at http://localhost:4444/wd/hub.
The error that is generated is:
urllib.error.URLError: <urlopen error [Errno 99] Cannot assign requested address>
I have created a github repo with my code to test, see here.
My docker-compose file below seems ok:
version: '3.5'
services:
  web:
    volumes:
      - ./app:/app
    ports:
      - "5000:80"
    environment:
      - FLASK_APP=main.py
      - FLASK_DEBUG=1
      - 'RUN=flask run --host=0.0.0.0 --port=80'
    command: flask run --host=0.0.0.0 --port=80
    # Infinite loop, to keep it alive, for debugging
    # command: bash -c "while true; do echo 'sleeping...' && sleep 10; done"

  selenium-hub:
    image: selenium/hub:3.141
    container_name: selenium-hub
    ports:
      - 4444:4444

  chrome:
    shm_size: 2g
    volumes:
      - /dev/shm:/dev/shm
    image: selenium/node-chrome:3.141
    # image: selenium/standalone-chrome:3.141.59-copernicium
    depends_on:
      - selenium-hub
    environment:
      - HUB_HOST=selenium-hub
      - HUB_PORT=4444
What is strange is that when I run the Flask application in Pycharm, and the selenium grid is up in docker, I am able to get the data back through http://localhost:5000. The issue is only happening when the Flask app is running inside docker.
Thanks for the help in advance, let me know if you require further information.
Edit
So I amended my docker-compose.yml file to include a network (code updated in GitHub). As I have the Flask app code running in debug mode and mounted in a volume, any update to the code results in a refresh of the debugger.
I ran docker network inspect on the created network and found the internal Docker IP address of selenium-hub. I updated the app/utils.py code in get_driver() to use that IP address in command_executor rather than localhost. Saving and re-running from my browser results in a successful return of data.
But I don't understand why http://localhost:4444/wd/hub would not work, the docker containers should see each other in the network as localhost, right?
the docker containers should see each other in the network as localhost, right?
No, this is only true when they use the host networking and expose ports through the host.
When you have services interacting with each other in docker-compose (or stack) the services should refer to each other by the service name. E.g. you would reach the hub container at http://selenium-hub:4444/wd/hub. Your Flask application could be reached by another container on the same network at http://web
You may be confused if you normally run Docker with host networking, because on the host network selenium-hub is also exposed on the same port 4444. So, if you started a container with host networking, it could use http://localhost:4444 just fine there as well.
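Concretely, from inside the web container the Remote driver in get_driver() should point at the service name rather than localhost; a sketch using the Selenium 3 API (your actual capabilities and options will differ):

from selenium import webdriver
from selenium.webdriver.common.desired_capabilities import DesiredCapabilities

def get_driver():
    # Use the Compose service name; localhost inside the web container is not the hub
    return webdriver.Remote(
        command_executor="http://selenium-hub:4444/wd/hub",
        desired_capabilities=DesiredCapabilities.CHROME,
    )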
Could it potentially be a port-in-use issue related to the execution?
See:
Python urllib2: Cannot assign requested address