Dockerized app successfully deployed but times out on warm up - python

I have a dockerized app which I deployed on Azure App Services perfectly fine. I then made some changes to the HTML file, pushed the new version to Docker Hub, and finally set Azure to pull the latest tag. However, the app then stopped working altogether. I deleted the Azure resources and services and attempted to recreate them. Here is that process:
The log stream for the container looks like this:
2020-01-18T20:21:10.830911857Z * Serving Flask app "main" (lazy loading)
2020-01-18T20:21:10.834294374Z * Running on http://0.0.0.0:5000/ (Press CTRL+C to quit)
2020-01-18 20:31:41.719 INFO - Pulling image from Docker hub: gdeol4/azure-ml:version-8
2020-01-18 20:31:41.837 INFO - version-8 Pulling from xyz
2020-01-18 20:31:41.839 INFO - Digest: sha256:xyz
2020-01-18 20:31:41.840 INFO - Status: Image is up to date for xyz
2020-01-18 20:31:41.845 INFO - Pull Image successful, Time taken: 0 Minutes and 0 Seconds
2020-01-18 20:31:41.858 INFO - Starting container for site docker run -d -p 2113:80
-e WEBSITES_ENABLE_APP_SERVICE_STORAGE=false
-e PORT=80
-e WEBSITE_ROLE_INSTANCE_ID=0
-e HTTP_LOGGING_ENABLED=1
2020-01-18 20:31:42.337 INFO - Initiating warmup request to container
2020-01-18 20:35:32.524 ERROR - Container xyz for site xyz did not start within expected time limit. Elapsed time = 230.1865177 sec
2020-01-18 20:35:32.525 ERROR - Container xyz didn't respond to HTTP pings on port: 80, failing site start. See container logs for debugging.
2020-01-18 20:35:32.535 INFO - Stoping site because it failed during startup.

In this case the solution was to use the WEBSITES_PORT application setting and set it to 5000 (the port the app listens on for requests).
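For reference, a minimal Flask entrypoint (a sketch, not the original app; only the module name "main" comes from the log above) that makes the listening port explicit so it matches the WEBSITES_PORT setting:
from flask import Flask

app = Flask(__name__)

@app.route("/")
def index():
    return "OK"

if __name__ == '__main__':
    # Listen on all interfaces on the same port that WEBSITES_PORT is set to (5000),
    # so App Service's warmup pings can reach the app.
    app.run(host="0.0.0.0", port=5000)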

Related

How to log Flask Python endpoint actions in Spring Boot Admin?

I'm working in PyCharm and building endpoints with Flask, and I use Pyctuator as well. It is connected to a Spring Boot Admin server, and I see a lot of log lines that are not useful to me, like:
2022-07-26 10:28:32,212 INFO 1 -- [Thread-17317] _internal: 172.18.0.1 - - [26/Jul/2022 10:28:32] "GET /actuator/loggers HTTP/1.1" 200 -
I turned off those loggers on the site, and now I want to set it up in PyCharm so that whenever the server runs a process for any endpoint, it sends back messages like 'System started xy process', 'System stopped xy process', etc.
Could you please help me set this up in PyCharm?
Thanks!
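For context, a minimal sketch (not part of the original question) of what such messages could look like in plain Flask, with werkzeug's per-request access log quieted; the endpoints and process name are hypothetical:
import logging
from flask import Flask

app = Flask(__name__)
logging.basicConfig(level=logging.INFO)
# Quiet werkzeug's per-request lines like the "GET /actuator/loggers" one quoted above.
logging.getLogger("werkzeug").setLevel(logging.WARNING)
log = logging.getLogger("app")

@app.route("/start")
def start_process():
    log.info("System started xy process")
    return "started"

@app.route("/stop")
def stop_process():
    log.info("System stopped xy process")
    return "stopped"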

Running non-web app container on Azure Web Apps for Containers

I'm trying to launch a constantly running container in Azure that reads from a queue and processes the messages with some application code. To do so, I want to use Azure Web App for Containers. Although I know this is not the most obvious choice, since my container doesn't run a web app, I would like to know if this could be made workable.
I mocked up the setup with a very simple Python script:
import logging
import time

if __name__ == '__main__':
    logging.basicConfig(level=logging.INFO)
    count = 0
    while True:
        time.sleep(30)
        count += 30
        logging.info(f'Busy counting: {count}')
which I make callable in a Docker image with the following content:
FROM python:3.8-slim-buster
COPY main.py /app/main.py
WORKDIR /app
EXPOSE 80
CMD ["python", "./main.py"]
I set this image as the image to use in the Web App Service, after which the container is launched. I see the log output up until about 230 seconds, after which the container is stopped with the following log message:
2022-06-16T14:46:34.582Z ERROR - Container testapp_123 for site testapp did not start within expected time limit. Elapsed time = 230.9588354 sec
and:
Container testapp_123 didn't respond to HTTP pings on port: 80, failing site start. See container logs for debugging.
Now this obviously has to do with the fact that I have no web server listening on port 80. Is there any way to keep this pinging from happening at all, or to run my app without a web server, or would this be basically impossible to achieve?
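One possible workaround, sketched below under the assumption that answering the ping with any HTTP 200 on port 80 is enough (this is not confirmed in the post), is to run a trivial health endpoint in a background thread next to the worker loop:
import logging
import threading
import time
from http.server import BaseHTTPRequestHandler, HTTPServer

class HealthHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Answer the platform's HTTP pings with a plain 200 OK.
        self.send_response(200)
        self.end_headers()
        self.wfile.write(b"OK")

def serve_health(port=80):
    HTTPServer(("0.0.0.0", port), HealthHandler).serve_forever()

if __name__ == '__main__':
    logging.basicConfig(level=logging.INFO)
    # Health server runs as a daemon thread so it never blocks the worker loop.
    threading.Thread(target=serve_health, daemon=True).start()
    count = 0
    while True:
        time.sleep(30)
        count += 30
        logging.info(f'Busy counting: {count}')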

Using pika, how to connect to RabbitMQ running in Docker, started with docker-compose with an external network?

I have the following docker-compose file:
version: '2.3'
networks:
  default: { external: true, name: $NETWORK_NAME }  # NETWORK_NAME in .env file is `uv_atp_network`.
services:
  car_parts_segmentor:
    # container_name: uv-car-parts-segmentation
    image: "uv-car-parts-segmentation:latest"
    ports:
      - "8080:8080"
    volumes:
      - ../../../../uv-car-parts-segmentation/configs:/uveye/configs
      - /isilon/:/isilon/
      # - local_data_folder:local_data_folder
    command: "--run_service rabbit"
    runtime: nvidia
    depends_on:
      rabbitmq_local:
        condition: service_started
    links:
      - rabbitmq_local
    restart: always

  rabbitmq_local:
    image: 'rabbitmq:3.6-management-alpine'
    container_name: "rabbitmq"
    ports:
      - ${RABBIT_PORT:?unspecified_rabbit_port}:5672
      - ${RABBIT_MANAGEMENT_PORT:?unspecified_rabbit_management_port}:15672
When this runs, docker ps shows
21400efd6493 uv-car-parts-segmentation:latest "python /uveye/app/m…" 5 seconds ago Up 1 second 0.0.0.0:8080->8080/tcp, :::8080->8080/tcp joint_car_parts_segmentor_1
bf4ab8581f1f rabbitmq:3.6-management-alpine "docker-entrypoint.s…" 5 seconds ago Up 4 seconds 4369/tcp, 5671/tcp, 0.0.0.0:5672->5672/tcp, :::5672->5672/tcp, 15671/tcp, 25672/tcp, 0.0.0.0:15672->15672/tcp, :::15672->15672/tcp rabbitmq
I want to create a connection to that RabbitMQ instance. The user:pass is guest:guest.
I was unable to, getting the very uninformative AMQPConnectionError in all cases.
The code below runs in another, unrelated container.
connection = pika.BlockingConnection(pika.URLParameters("amqp://guest:guest@rabbitmq/"))
connection = pika.BlockingConnection(pika.URLParameters("amqp://guest:guest@localhost/"))
Also tried with
$ docker inspect -f '{{range.NetworkSettings.Networks}}{{.IPAddress}}{{end}}' rabbitmq
172.27.0.2
and
connection = pika.BlockingConnection(pika.URLParameters("amqp://guest:guest@172.27.0.2/"))
Also tried with
credentials = pika.credentials.PlainCredentials(
    username="guest",
    password="guest"
)
parameters = pika.ConnectionParameters(
    host=ip_address,  # tried all of the above options
    port=5672,
    credentials=credentials,
    heartbeat=10,
)
Note that the container car_parts_segmentor is able to see the container rabbitmq. Both are started by docker-compose.
My assumption is that this has to do with the uv_atp_network both containers live in: I am trying to access a container inside that network from outside the network.
Is this really the problem?
If so, how can this be achieved?
For the future - how can I get more informative errors from pika?
As I suspected, the problem was that the name rabbitmq existed only in the network uv_atp_network.
The code attempting the connection runs inside a container of its own, which was not attached to that network.
Solution: connect the current container to the network:
import socket

import docker

client = docker.from_env()
network_name = "uv_atp_network"
atp_container = client.containers.get(socket.gethostname())
client.networks.get(network_name).connect(container=atp_container.id)
After this, the above code in the question does work, because rabbitmq can be resolved.
connection = pika.BlockingConnection(pika.URLParameters("amqp://guest:guest@rabbitmq/"))
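As for getting more informative errors in the future, pika logs its connection attempts on its own logger; a minimal sketch (not part of the original answer) of turning that on:
import logging

logging.basicConfig(level=logging.INFO)
# pika emits DNS resolution, socket and AMQP handshake details at DEBUG level.
logging.getLogger("pika").setLevel(logging.DEBUG)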

Load balance docker swarm

I have a Docker swarm (swarm mode) with one HAProxy container and 3 Python web apps. The HAProxy container exposes port 80 and should load balance the 3 containers of my app (using leastconn).
Here is my docker-compose.yml file:
version: '3'
services:
  scraper-node:
    image: scraper
    ports:
      - 5000
    volumes:
      - /profiles:/profiles
    command: >
      bash -c "
      cd src;
      gunicorn src.interface:app \
        --bind=0.0.0.0:5000 \
        --workers=1 \
        --threads=1 \
        --timeout 500 \
        --log-level=debug \
      "
    environment:
      - SERVICE_PORTS=5000
    deploy:
      replicas: 3
      update_config:
        parallelism: 5
        delay: 10s
      restart_policy:
        condition: on-failure
        max_attempts: 3
        window: 120s
    networks:
      - web

  proxy:
    image: dockercloud/haproxy
    depends_on:
      - scraper-node
    environment:
      - BALANCE=leastconn
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
    ports:
      - 80:80
    networks:
      - web

networks:
  web:
    driver: overlay
When I deploy this swarm (docker stack deploy --compose-file=docker-compose.yml scraper) I get all of my containers:
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
245f4bfd1299 scraper:latest "/docker-entrypoin..." 21 hours ago Up 19 minutes 80/tcp, 5000/tcp, 8000/tcp scraper_scraper-node.3.iyi33hv9tikmf6m2wna0cypgp
995aefdb9346 scraper:latest "/docker-entrypoin..." 21 hours ago Up 19 minutes 80/tcp, 5000/tcp, 8000/tcp scraper_scraper-node.2.wem9v2nug8wqos7d97zknuvqb
a51474322583 scraper:latest "/docker-entrypoin..." 21 hours ago Up 19 minutes 80/tcp, 5000/tcp, 8000/tcp scraper_scraper-node.1.0u8q4zn432n7p5gl93ohqio8e
3f97f34678d1 dockercloud/haproxy "/sbin/tini -- doc..." 21 hours ago Up 19 minutes 80/tcp, 443/tcp, 1936/tcp scraper_proxy.1.rng5ysn8v48cs4nxb1atkrz73
And when I display the haproxy container log, it looks like it recognizes the 3 Python containers:
INFO:haproxy:dockercloud/haproxy 1.6.6 is running outside Docker Cloud
INFO:haproxy:Haproxy is running in SwarmMode, loading HAProxy definition through docker api
INFO:haproxy:dockercloud/haproxy PID: 6
INFO:haproxy:=> Add task: Initial start - Swarm Mode
INFO:haproxy:=> Executing task: Initial start - Swarm Mode
INFO:haproxy:==========BEGIN==========
INFO:haproxy:Linked service: scraper_scraper-node
INFO:haproxy:Linked container: scraper_scraper-node.1.0u8q4zn432n7p5gl93ohqio8e, scraper_scraper-node.2.wem9v2nug8wqos7d97zknuvqb, scraper_scraper-node.3.iyi33hv9tikmf6m2wna0cypgp
INFO:haproxy:HAProxy configuration:
global
  log 127.0.0.1 local0
  log 127.0.0.1 local1 notice
  log-send-hostname
  maxconn 4096
  pidfile /var/run/haproxy.pid
  user haproxy
  group haproxy
  daemon
  stats socket /var/run/haproxy.stats level admin
  ssl-default-bind-options no-sslv3
  ssl-default-bind-ciphers ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA256:ECDHE-ECDSA-AES128-SHA:ECDHE-RSA-AES128-SHA:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA384:ECDHE-RSA-AES256-SHA:ECDHE-ECDSA-AES256-SHA:AES128-GCM-SHA256:AES128-SHA256:AES128-SHA:AES256-GCM-SHA384:AES256-SHA256:AES256-SHA:DHE-DSS-AES128-SHA:DES-CBC3-SHA
defaults
  balance leastconn
  log global
  mode http
  option redispatch
  option httplog
  option dontlognull
  option forwardfor
  timeout connect 5000
  timeout client 50000
  timeout server 50000
listen stats
  bind :1936
  mode http
  stats enable
  timeout connect 10s
  timeout client 1m
  timeout server 1m
  stats hide-version
  stats realm Haproxy\ Statistics
  stats uri /
  stats auth stats:stats
frontend default_port_80
  bind :80
  reqadd X-Forwarded-Proto:\ http
  maxconn 4096
  default_backend default_service
backend default_service
  server scraper_scraper-node.1.0u8q4zn432n7p5gl93ohqio8e 10.0.0.5:5000 check inter 2000 rise 2 fall 3
  server scraper_scraper-node.2.wem9v2nug8wqos7d97zknuvqb 10.0.0.6:5000 check inter 2000 rise 2 fall 3
  server scraper_scraper-node.3.iyi33hv9tikmf6m2wna0cypgp 10.0.0.7:5000 check inter 2000 rise 2 fall 3
INFO:haproxy:Launching HAProxy
INFO:haproxy:HAProxy has been launched(PID: 12)
INFO:haproxy:===========END===========
But when I send a GET request to http://localhost I get an error message:
<html>
<body>
<h1>503 Service Unavailable</h1>
No server is available to handle this request.
</body>
</html>
There were two problems:
The command in the docker-compose.yml file should be a single line.
The scraper image should expose port 5000 (in its Dockerfile).
Once I fixed those, I deployed the swarm the same way (with stack), and the proxy container recognized the Python containers and was able to load balance between them.
A 503 error usually means a failed health check to the backend server.
Your stats page might be helpful here: if you mouse over the LastChk column of one of your DOWN backend servers, HAProxy will give you a short summary of why that server is DOWN.
It does not look like you configured a health check (option httpchk) for your default_service backend: can you reach any of your backend servers directly (e.g. curl --head 10.0.0.5:5000)? From the HAProxy documentation:
[R]esponses 2xx and 3xx are
considered valid, while all other ones indicate a server failure, including
the lack of any response.
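If curl is not available where you are testing, a quick Python equivalent of that probe (a sketch; the address is just the first backend from the generated config above):
import urllib.request

# HEAD request to one backend; per the HAProxy documentation quoted above,
# a 2xx/3xx response is what a health check would accept.
request = urllib.request.Request("http://10.0.0.5:5000/", method="HEAD")
try:
    with urllib.request.urlopen(request, timeout=5) as response:
        print(response.status, response.reason)
except Exception as exc:
    print("backend unreachable:", exc)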

How to set up an AWS Elastic Beanstalk Worker Tier Environment + Cron + Django for periodic tasks? 403 Forbidden error

The app needs to run periodic tasks in the background to delete expired files. The app is up and running in a web server environment and in a worker tier environment.
A cron.yaml file is at the root of the app:
version: 1
cron:
  - name: "delete_expired_files"
    url: "/networks_app/delete_expired_files"
    schedule: "*/10 * * * *"
The cron URL points to an app view:
def delete_expired_files(request):
    users = DemoUser.objects.all()
    for user in users:
        documents = Document.objects.filter(owner=user.id)
        if documents:
            for doc in documents:
                now = timezone.now()
                if now >= doc.date_published + timedelta(days=doc.owner.group.valid_time):
                    doc.delete()
The Django ALLOWED_HOSTS setting is as follows:
ALLOWED_HOSTS = ['127.0.0.1', 'localhost', 'networksapp.elasticbeanstalk.com']
The task is being scheduled and the requests are sent to the right URL; however, they end up in the WorkerDeadLetterQueue.
The Worker Tier Environment log file shows a 403 error:
"POST /networks_app/delete_expired_files HTTP/1.1" 403 1374 "-" "aws-sqsd/2.0"
The task is not being executed (expired files aren't being deleted). However, when I access the URL manually, the task executes properly.
I need to make it work automatically and periodically.
My IAM user has these policies:
AmazonSQSFullAccess
AmazonS3FullAccess
AmazonDynamoDBFullAccess
AdministratorAccess
AWSElasticBeanstalkFullAccess
Why isn't the task being executed? Does this have to do with an IAM permission? Is there any missing configuration? How can I make it work? Thanks in advance.
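One common cause of a 403 on the aws-sqsd POST is Django's CSRF middleware rejecting a request that carries no CSRF token; this is an assumption, not something confirmed in the post, but it is cheap to test by exempting the view (a sketch):
from django.views.decorators.csrf import csrf_exempt

@csrf_exempt  # assumption: aws-sqsd's POST has no CSRF token, so CSRF protection would return 403
def delete_expired_files(request):
    ...  # same body as the view shown above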
