How to share cache between a task and a view with Django?

In my Django project, I have a task running every 5 minutes (with Celery, and Redis as the broker):
from datetime import datetime

from celery import shared_task
from django.core.cache import cache

@shared_task()  # also tried @celery.task(base=QueueOnce)
def cache_date():
    cache.set('date', datetime.now())
    print('Cached date :', cache.get('date'))
And it runs fine, printing the newly cached date every time.
But then, somewhere in one of my views, I try to do this:
from django.core.cache import cache

def get_cached_date():
    print('Cached date :', cache.get('date'))
And then it prints "Cached date : None".
Here are my cache settings:
CACHES = {
    'default': {
        'BACKEND': 'django.core.cache.backends.filebased.FileBasedCache',
        'LOCATION': '/tmp/cache',
    }
}
I don't get it. Why is the value available in one place and not in the other, while I'm using a single file location? Am I trying to do something wrong?
UPDATE
@Satevg I use docker-compose; here is my file:
services:
  redis:
    image: redis
  worker:
    image: project
    depends_on:
      - redis
      - webserver
    command: project backend worker
    volumes:
      - cache:/tmp/cache
  webserver:
    image: project
    depends_on:
      - redis
    volumes:
      - cache:/tmp/cache
volumes:
  cache:
I tried to share the volume like this, but when my task tries to write to the cache I get:
OSError: [Errno 13] Permission denied: '/tmp/cache/tmpAg_TAc'
When I look at the filesystem in both containers I can see the /tmp/cache folder. The web app can even write to it, and when I look in the worker container's /tmp/cache folder I can see the updated cache.
UPDATE2:
The web app can write to the cache:
cache.set('test', 'Test')
In the worker's container, I can see the cache file in the /tmp/cache folder.
But when the task tries to read from the cache:
print(cache.get('test'))
it says:
None
And when the task tries to write to the cache, it still gets Errno 13.

As we figured out in the comments, Celery and the app are running in different Docker containers.
django.core.cache.backends.filebased.FileBasedCache serializes and stores each cache value as a separate file, but these files live in two different filesystems.
The solution is to use Docker volumes in order to share the /tmp/cache folder between these two containers.
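To verify that both containers really do see the same cache files once the volume is mounted, a small probe can help. Below is a minimal sketch (a hypothetical probe_cache.py, assuming DJANGO_SETTINGS_MODULE points at the settings with the FileBasedCache config above); run it in one container, then in the other, and the second run should print the first container's hostname instead of None:
# probe_cache.py -- hypothetical helper, run once in each container
import os

import django

django.setup()  # requires DJANGO_SETTINGS_MODULE to be set

from django.core.cache import cache

hostname = os.uname().nodename

# If no probe value exists yet, write one tagged with this container's name.
if cache.get('probe') is None:
    cache.set('probe', 'written-by-' + hostname, timeout=300)

# On the second container this should print the *other* container's name
# if the /tmp/cache volume is truly shared.
print('probe =', cache.get('probe'))
print('cache files:', os.listdir('/tmp/cache'))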

Related

Service not known when writing to InfluxDB in Python

I'm using Docker with InfluxDB and Python in a framework. I want to write to InfluxDB from inside the framework, but I always get the error "Name or service not known" and have no idea what the problem is.
I link the InfluxDB container to the framework container in the docker-compose file like so:
version: '3'
services:
  influxdb:
    image: influxdb
    container_name: influxdb
    restart: always
    ports:
      - 8086:8086
    volumes:
      - influxdb_data:/var/lib/influxdb
  framework:
    image: framework
    build: framework
    volumes:
      - framework:/tmp/framework_data
    links:
      - influxdb
    depends_on:
      - influxdb
volumes:
  framework:
    driver: local
  influxdb_data:
Inside the framework I have a script that focuses solely on writing to the database. Because I don't want to access the database via the URL "localhost:8086", I use links so that I can connect with the URL "influxdb:8086" instead. This is my code in that script:
import datetime

from influxdb_client import InfluxDBClient, Point
from influxdb_client.client.write_api import SYNCHRONOUS, WritePrecision

bucket = "bucket"
org = "org"
token = "token"

def insert_data(message):
    client = InfluxDBClient(url="http://influxdb:8086", token=token, org=org)
    write_api = client.write_api(write_options=SYNCHRONOUS)
    point = Point("mem") \
        .tag("sensor", message["sensor"]) \
        .tag("metric", message["type"]) \
        .field("true_value", float(message["true_value"])) \
        .field("value", float(message["value"])) \
        .field("failure", message["failure"]) \
        .field("failure_type", message["failure_type"]) \
        .time(datetime.datetime.now(), WritePrecision.NS)
    write_api.write(bucket, org, point)  # the error seems to happen here
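For reference, insert_data expects a dict shaped like the fields used above; a hypothetical example call (all values made up) would be:
# Hypothetical payload matching the fields insert_data reads.
sample_message = {
    "sensor": "sensor-1",
    "type": "temperature",
    "true_value": "21.5",
    "value": "21.7",
    "failure": False,
    "failure_type": "none",
}
insert_data(sample_message)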
Every time I use the function insert_data I get the error urllib3.exceptions.NewConnectionError: <urllib3.connection.HTTPConnection object at 0x7fac547d9d00>: Failed to establish a new connection: [Errno -2] Name or service not known.
Why can't I write into the database?
I think the problem resides in your docker-compose file. First of all, links is a legacy feature, so I'd recommend using user-defined networks instead. More on that here: https://docs.docker.com/compose/compose-file/compose-file-v3/#links
I've created a minimalistic example to demonstrate the approach:
version: '3'
services:
  influxdb:
    image: influxdb
    container_name: influxdb
    restart: always
    environment:
      # manage the secrets the best way you can!!!
      # the below are only for demonstration purposes...
      - DOCKER_INFLUXDB_INIT_USERNAME=admin
      - DOCKER_INFLUXDB_INIT_PASSWORD=secret
      - DOCKER_INFLUXDB_INIT_ORG=my-org
      - DOCKER_INFLUXDB_INIT_BUCKET=my-bucket
      - DOCKER_INFLUXDB_INIT_ADMIN_TOKEN=secret-token
    networks:
      - local
  framework:
    image: python:3.10.2
    depends_on:
      - influxdb
    networks:
      - local
networks:
  local:
Notice the additional networks definition with the local network, which is then referenced from both services.
Also make sure to initialize your InfluxDB with the right environment variables according to the Docker image's documentation: https://hub.docker.com/_/influxdb
Then to test it just run a shell in your framework container via docker-compose:
docker-compose run --entrypoint sh framework
and then in the container install the client:
pip install 'influxdb-client[ciso]'
Then in a Python shell, still inside the container, you can verify the connection:
from influxdb_client import InfluxDBClient

# the token and the org values come from the container's
# docker-compose environment definitions
client = InfluxDBClient(url="http://influxdb:8086", token="secret-token", org="my-org")
client.health()
# {'checks': [],
#  'commit': '657e1839de',
#  'message': 'ready for queries and writes',
#  'name': 'influxdb',
#  'status': 'pass',
#  'version': '2.1.1'}
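Once health() reports pass, a minimal write can be sketched along the same lines (a sketch only, reusing the bucket/org/token values defined in the compose environment above):
from influxdb_client import InfluxDBClient, Point
from influxdb_client.client.write_api import SYNCHRONOUS

client = InfluxDBClient(url="http://influxdb:8086", token="secret-token", org="my-org")
write_api = client.write_api(write_options=SYNCHRONOUS)

# Write a single point into the bucket created by the compose environment.
write_api.write("my-bucket", "my-org", Point("mem").field("value", 1.0))
client.close()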
Last but not least, clean up the test resources with:
docker-compose down

Using pika, how to connect to rabbitmq running in docker, started with docker-compose with external network?

I have the following docker-compose file:
version: '2.3'
networks:
  default: { external: true, name: $NETWORK_NAME } # NETWORK_NAME in .env file is `uv_atp_network`.
services:
  car_parts_segmentor:
    # container_name: uv-car-parts-segmentation
    image: "uv-car-parts-segmentation:latest"
    ports:
      - "8080:8080"
    volumes:
      - ../../../../uv-car-parts-segmentation/configs:/uveye/configs
      - /isilon/:/isilon/
      # - local_data_folder:local_data_folder
    command: "--run_service rabbit"
    runtime: nvidia
    depends_on:
      rabbitmq_local:
        condition: service_started
    links:
      - rabbitmq_local
    restart: always
  rabbitmq_local:
    image: 'rabbitmq:3.6-management-alpine'
    container_name: "rabbitmq"
    ports:
      - ${RABBIT_PORT:?unspecified_rabbit_port}:5672
      - ${RABBIT_MANAGEMENT_PORT:?unspecified_rabbit_management_port}:15672
When this runs, docker ps shows
21400efd6493 uv-car-parts-segmentation:latest "python /uveye/app/m…" 5 seconds ago Up 1 second 0.0.0.0:8080->8080/tcp, :::8080->8080/tcp joint_car_parts_segmentor_1
bf4ab8581f1f rabbitmq:3.6-management-alpine "docker-entrypoint.s…" 5 seconds ago Up 4 seconds 4369/tcp, 5671/tcp, 0.0.0.0:5672->5672/tcp, :::5672->5672/tcp, 15671/tcp, 25672/tcp, 0.0.0.0:15672->15672/tcp, :::15672->15672/tcp rabbitmq
I want to create a connection to that rabbitmq. The user:pass is guest:guest.
I was unable to do so, getting only the very uninformative AMQPConnectionError in every case.
The code below runs in another, unrelated container.
connection = pika.BlockingConnection(pika.URLParameters("amqp://guest:guest@rabbitmq/"))
connection = pika.BlockingConnection(pika.URLParameters("amqp://guest:guest@localhost/"))
Also tried with
$ docker inspect -f '{{range.NetworkSettings.Networks}}{{.IPAddress}}{{end}}' rabbitmq
172.27.0.2
and
connection = pika.BlockingConnection(pika.URLParameters("amqp://guest:guest@172.27.0.2/"))
Also tried with
credentials = pika.credentials.PlainCredentials(
    username="guest",
    password="guest"
)
parameters = pika.ConnectionParameters(
    host=ip_address,  # tried all of the above options
    port=5672,
    credentials=credentials,
    heartbeat=10,
)
Note that the container car_parts_segmentor is able to see the container rabbitmq. Both are started by docker-compose.
My assumption is that this has to do with the uv_atp_network both containers live in, and that I am trying to access a container inside that network from outside of it.
Is this really the problem?
If so, how can this be achieved?
For the future - how to get more informative errors from pika?
As I suspected, the problem was that the name rabbitmq existed only inside the network uv_atp_network.
The code attempting to connect runs in a container of its own, which was not attached to that network.
Solution: connect the current container to the network:
import socket

import docker

client = docker.from_env()
network_name = "uv_atp_network"

# The hostname inside a container defaults to the container id.
atp_container = client.containers.get(socket.gethostname())
client.networks.get(network_name).connect(container=atp_container.id)
After this, the above code in the question does work, because rabbitmq can be resolved.
connection = pika.BlockingConnection(pika.URLParameters("amqp://guest:guest@rabbitmq/"))
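As for the last question, getting more informative errors out of pika: pika uses the standard logging module, so a sketch like the following (plain stdlib logging, nothing pika-specific) surfaces the underlying DNS or socket failure instead of the bare AMQPConnectionError:
import logging

import pika

# pika logs connection attempts, DNS failures and handshake details at
# DEBUG level; this makes them visible on stderr.
logging.basicConfig(level=logging.DEBUG)

connection = pika.BlockingConnection(
    pika.URLParameters("amqp://guest:guest@rabbitmq/")
)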

How to docker push app (flask-python + redis) to gcr.io & deploy app to google kubernetes

How do I push my app (using python-flask + redis) to gcr.io and deploy it to Google Kubernetes (via a YAML file)?
I also want to set an environment variable for my app.
import os

import redis
from flask import Flask
from flask import request, redirect, render_template, url_for
from flask import Response

app = Flask(__name__)

redis_host = os.environ['REDIS_HOST']
app.redis = redis.StrictRedis(host=redis_host, port=6379, charset="utf-8", decode_responses=True)

# Be super aggressive about saving for the development environment.
# This says save every second if there is at least 1 change. If you use
# redis in production you'll want to read up on the redis persistence
# model.
app.redis.config_set('save', '1 1')

@app.route('/', methods=['GET', 'POST'])
def main_page():
    if request.method == 'POST':
        app.redis.lpush('entries', request.form['entry'])
        return redirect(url_for('main_page'))
    else:
        entries = app.redis.lrange('entries', 0, -1)
        return render_template('main.html', entries=entries)

# Route my app by POST and redirect to the main page
@app.route('/clear', methods=['POST'])
def clear_entries():
    app.redis.ltrim('entries', 1, 0)
    return redirect(url_for('main_page'))

# used for docker on localhost
if __name__ == "__main__":
    app.run(host='0.0.0.0', port=5000)
Posting this answer as a community wiki to set more of a baseline approach to the question rather than to give a specific solution addressing the code included in the question.
Feel free to edit/expand.
This topic could be quite broad, considering that it can be addressed in many different ways (as described in the question, by using Cloud Build, etc.).
Addressing this question specifically on the part of:
Building the image and sending it to GCR.
Using newly built image in GKE.
Building the image and sending it to GCR.
Assuming that your code and your whole Docker image run correctly, you can build/tag the image in the following manner and then send it to GCR:
gcloud auth configure-docker
This adds the Docker credHelper entry to Docker's configuration file, or creates the file if it doesn't exist, and registers gcloud as the credential helper for all Google-supported Docker registries.
docker tag YOUR_IMAGE gcr.io/PROJECT_ID/IMAGE_NAME
docker push gcr.io/PROJECT_ID/IMAGE_NAME
After that you can go to the:
GCP Cloud Console (Web UI) -> Container Registry
and see the image you've uploaded.
Using newly built image in GKE
To run the earlier mentioned image you can either:
Create the Deployment in the Cloud Console (Kubernetes Engine -> Workloads -> Deploy)
A side note!
You can also add there the environment variables of your choosing (as pointed in the question)
Create it with a YAML manifest that will be similar to the one below:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: amazing-app
  labels:
    app: amazing-app
spec:
  replicas: 3
  selector:
    matchLabels:
      app: amazing-app
  template:
    metadata:
      labels:
        app: amazing-app
    spec:
      containers:
      - name: amazing-app
        image: gcr.io/PROJECT-ID/IMAGE-NAME # <-- IMPORTANT!
        env:
        - name: DEMO_GREETING
          value: "Hello from the environment"
Please take a specific look at the following part:
env:
- name: DEMO_GREETING
  value: "Hello from the environment"
This part will create an environment variable inside of each container:
$ kubectl exec -it amazing-app-6db8d7478b-4gtxk -- /bin/bash -c 'echo $DEMO_GREETING'
Hello from the environment
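On the application side this is then an ordinary environment lookup. As a sketch in the question's own terms (assuming a REDIS_HOST variable is added to the Deployment's env list the same way as DEMO_GREETING):
import os

# Value injected by the Deployment manifest; falling back to localhost
# keeps the app runnable outside the cluster.
redis_host = os.environ.get('REDIS_HOST', 'localhost')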
Additional resources:
Cloud.google.com: Container registry: Docs: Pushing and pulling
Cloud.google.com: Build: Docs: Deploying builds: Deploy GKE

Can't connect to mongo from flask in docker containers

I have a Python script that runs the following:
import mongoengine

client = mongoengine.connect('ppo-image-server-db', host="db", port=27017)
db = client.test_db

test_data = {
    'name': 'test'
}
db.test_data.insert_one(test_data)
print("DONE")
And I have a docker-compose.yml that looks like the following
version: '2'
networks:
  micronet:
services:
  user-info-service:
    restart: always
    build: .
    container_name: test-user-info-service
    working_dir: /usr/local/app/test
    entrypoint: ""
    command: ["manage", "run-user-info-service", "--host=0.0.0.0", "--port=5000"]
    volumes:
      - ./:/usr/local/app/test/
    ports:
      - "5000:5000"
    networks:
      - micronet
    links:
      - db
  db:
    image: mongo:3.0.2
    container_name: test-mongodb
    volumes:
      - ./data/db:/data/db
    ports:
      - "27017:27017"
However, every time I run docker-compose build and docker-compose up, the Python script is not able to find the host (in this case 'db'). Do I need any special linking, or an environment variable to pass in the Mongo server's IP address?
I can still access the dockerized MongoDB using Robomongo.
Please note that I'm not creating any docker-machine for this test case yet.
Could you help me point out what's missing in my configuration?
Yes. What you need is to tell Docker that one application depends on the other. Here is how I built my docker-compose file:
version: '2'
services:
  mongo-server:
    image: mongo
    volumes:
      - .data/mdata:/data/db # mongodb persistence
  myStuff:
    build: ./myStuff
    depends_on:
      - mongo-server
Also, in the connection URL you need to use the hostname "mongo-server". Docker will take care of connecting your code to the mongo container.
Example:
private val mongoClient: MongoClient = MongoClient("mongodb://mongo-server:27017")
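Translated to the question's Python/mongoengine code, the same idea would look like this sketch (the mongo-server hostname comes from the compose service name above):
import mongoengine

# Docker's embedded DNS resolves "mongo-server" to the container's
# address on the shared network, so no IP address is needed.
client = mongoengine.connect('ppo-image-server-db', host='mongo-server', port=27017)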
That should solve your problem.
