Python Flask 502 Bad Gateway with server-sent event stream deployed in GKE - python

I'm having a problem with my Python Flask server deployed in Google Kubernetes Engine (GKE). The code below is a simple Flask server that supports text/event-stream. The problem is that after exactly 60 seconds of inactivity from the server (no messages from the stream), the client shows a 502 Bad Gateway error:
Error: Server Error
The server encountered a temporary error and could not complete your request.
Please try again in 30 seconds.
The client no longer receives any data from the server once this happens. I already tried adding timeouts, as you can see in the Kubernetes config file.
I also tried spinning up a Google Cloud Compute Engine instance without Kubernetes, deployed the same code on it, and added a domain. To my surprise it works: it didn't show any 502 error even when I left the browser open.
It probably has something to do with the Kubernetes config I'm running. I'd appreciate any help or ideas.
Update 1
I tried changing the Kubernetes Service type to LoadBalancer instead of NodePort.
Accessing the generated IP endpoint works perfectly, with no 502 error even after 60s of inactivity.
Update 2
Here are the errors from the load balancer's Stackdriver logs:
{
  httpRequest: {
    referer: "http://sse-dev.[REDACTED]/test"
    remoteIp: "[REDACTED]"
    requestMethod: "GET"
    requestSize: "345"
    requestUrl: "http://sse-dev.[REDACTED]/stream"
    responseSize: "488"
    serverIp: "[REDACTED]"
    status: 502
    userAgent: "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/78.0.3904.108 Safari/537.36"
  }
  insertId: "ptb7kfg2w2zz01"
  jsonPayload: {
    @type: "type.googleapis.com/google.cloud.loadbalancing.type.LoadBalancerLogEntry"
    statusDetails: "backend_timeout"
  }
  logName: "projects/[REDACTED]-assist-dev/logs/requests"
  receiveTimestamp: "2020-01-03T06:27:44.361706996Z"
  resource: {
    labels: {
      backend_service_name: "k8s-be-30808--17630a0e8199e99b"
      forwarding_rule_name: "k8s-fw-default-[REDACTED]-dev-ingress--17630a0e8199e99b"
      project_id: "[REDACTED]-assist-dev"
      target_proxy_name: "k8s-tp-default-[REDACTED]-dev-ingress--17630a0e8199e99b"
      url_map_name: "k8s-um-default-[REDACTED]-dev-ingress--17630a0e8199e99b"
      zone: "global"
    }
    type: "http_load_balancer"
  }
  severity: "WARNING"
  spanId: "4b0767cace9b9500"
  timestamp: "2020-01-03T06:26:43.381613Z"
  trace: "projects/[REDACTED]-assist-dev/traces/d467f39f76b94c02d9a8e6998fdca17b"
}
sse.py
from typing import Iterator
import random
import string
from collections import deque

from flask import Response, request
from gevent.queue import Queue
import gevent


def generate_id(size=6, chars=string.ascii_lowercase + string.digits):
    return ''.join(random.choice(chars) for _ in range(size))


class ServerSentEvent(object):
    """Class to handle server-sent events."""

    def __init__(self, data, event):
        self.data = data
        self.event = event
        self.event_id = generate_id()
        self.retry = 5000
        self.desc_map = {
            self.data: "data",
            self.event: "event",
            self.event_id: "id",
            self.retry: "retry"
        }

    def encode(self) -> str:
        """Encodes events as a string."""
        if not self.data:
            return ""
        lines = ["{}: {}".format(name, key)
                 for key, name in self.desc_map.items() if key]
        return "{}\n\n".format("\n".join(lines))


class Channel(object):
    def __init__(self, history_size=32):
        self.subscriptions = []
        self.history = deque(maxlen=history_size)
        self.history.append(ServerSentEvent('start_of_history', None))

    def notify(self, message):
        """Notify all subscribers with message."""
        for sub in self.subscriptions[:]:
            sub.put(message)

    def event_generator(self, last_id) -> Iterator[ServerSentEvent]:
        """Yields encoded ServerSentEvents."""
        q = Queue()
        self._add_history(q, last_id)
        self.subscriptions.append(q)
        try:
            while True:
                yield q.get()
        except GeneratorExit:
            self.subscriptions.remove(q)

    def subscribe(self):
        def gen(last_id) -> Iterator[str]:
            for sse in self.event_generator(last_id):
                yield sse.encode()
        return Response(
            gen(request.headers.get('Last-Event-ID')),
            mimetype="text/event-stream",
            headers={
                "Cache-Control": "no-cache",
                "Connection": "keep-alive",
                "Content-Type": "text/event-stream"
            })

    def _add_history(self, q, last_id):
        add = False
        for sse in self.history:
            if add:
                q.put(sse)
            if sse.event_id == last_id:
                add = True

    def publish(self, message, event=None):
        sse = ServerSentEvent(str(message), event)
        self.history.append(sse)
        gevent.spawn(self.notify, sse)

    def get_last_id(self) -> str:
        return self.history[-1].event_id
service.py
import json
import os

import requests
from app.controllers.sse import Channel
from flask import send_file, \
    jsonify, request, Blueprint, Response
from typing import Iterator

blueprint = Blueprint(__name__, __name__, url_prefix='')
flask_channel = Channel()


@blueprint.route("/stream")
def stream():
    return flask_channel.subscribe()


@blueprint.route('/sample/create', methods=['GET'])
def sample_create():
    branch_id = request.args.get('branch_id', None)
    params = request.get_json()
    if not params:
        params = {
            'id': 'sample_id',
            'description': 'sample_description'
        }
    flask_channel.publish(json.dumps(params), event=branch_id)
    return jsonify({'success': True}), 200
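For what it's worth, the two endpoints can be exercised locally like this (assuming the app listens on port 5000, as configured in the Dockerfile below; curl's -N flag disables output buffering so stream data is printed as it arrives):

# keep the SSE stream open in one terminal
curl -N http://localhost:5000/stream

# publish a test event from another terminal
curl "http://localhost:5000/sample/create?branch_id=test"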
kubernetes-config.yaml
---
apiVersion: v1
kind: Service
metadata:
  name: sse-service
  labels:
    app: sse-service
spec:
  ports:
  - port: 80
    targetPort: 5000
    protocol: TCP
    name: http
  selector:
    app: sse-service
  sessionAffinity: ClientIP
  type: NodePort
---
apiVersion: "extensions/v1beta1"
kind: "Deployment"
metadata:
  name: "sse-service"
  namespace: "default"
  labels:
    app: "sse-service"
spec:
  replicas: 1
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 25%
  selector:
    matchLabels:
      app: "sse-service"
  template:
    metadata:
      labels:
        app: "sse-service"
    spec:
      containers:
      - name: "sse-service"
        image: "{{IMAGE_NAME}}"
        imagePullPolicy: Always
        ports:
        - containerPort: 5000
        livenessProbe:
          httpGet:
            path: /health/check
            port: 5000
          initialDelaySeconds: 25
          periodSeconds: 15
        readinessProbe:
          httpGet:
            path: /health/check
            port: 5000
          initialDelaySeconds: 25
          periodSeconds: 15
---
apiVersion: "autoscaling/v2beta1"
kind: "HorizontalPodAutoscaler"
metadata:
  name: "sse-service-hpa"
  namespace: "default"
  labels:
    app: "sse-service"
spec:
  scaleTargetRef:
    kind: "Deployment"
    name: "sse-service"
    apiVersion: "apps/v1beta1"
  minReplicas: 1
  maxReplicas: 7
  metrics:
  - type: "Resource"
    resource:
      name: "cpu"
      targetAverageUtilization: 80
---
apiVersion: cloud.google.com/v1beta1
kind: BackendConfig
metadata:
  name: sse-service
spec:
  timeoutSec: 120
  connectionDraining:
    drainingTimeoutSec: 3600
Dockerfile
FROM python:3.6.5-jessie
ENV GUNICORN_PORT=5000
ENV PYTHONUNBUFFERED=TRUE
ENV GOOGLE_APPLICATION_CREDENTIALS=/opt/creds/account.json
COPY requirements.txt /opt/app/requirements.txt
COPY app /opt/app
COPY creds/account.json /opt/creds/account.json
WORKDIR /opt/app
RUN pip install -r requirements.txt
EXPOSE ${GUNICORN_PORT}
CMD gunicorn -b :${GUNICORN_PORT} wsgi:create_app\(\) --reload --timeout=300000 --config=config.py
Base.py
from flask import jsonify, Blueprint

blueprint = Blueprint(__name__, __name__)


@blueprint.route('/health/check', methods=['GET'])
def check_health():
    response = {
        'message': 'pong!',
        'status': 'success'
    }
    return jsonify(response), 200
bitbucket-pipelines.yml
options:
  docker: true

pipelines:
  branches:
    dev:
      - step:
          name: Build - Push - Deploy to Dev environment
          image: google/cloud-sdk:latest
          caches:
            - docker
            - pip
          deployment: development
          script:
            # Export all bitbucket credentials to the environment
            - echo $GOOGLE_APPLICATION_CREDENTIALS | base64 -di > ./creds/account.json
            - echo $CONTAINER_CREDENTIALS | base64 -di > ./creds/gcr.json
            - export CLOUDSDK_CONFIG=$(pwd)/creds/account.json
            - export GOOGLE_APPLICATION_CREDENTIALS=$(pwd)/creds/account.json
            # Configure docker to use gcp service account
            - gcloud auth activate-service-account $KUBERNETES_SERVICE_ACCOUNT --key-file=creds/gcr.json
            - gcloud config list
            - gcloud auth configure-docker -q
            # Build docker image with name and tag
            - export IMAGE_NAME=$HOSTNAME/$PROJECT_ID/$IMAGE:v0.1.$BITBUCKET_BUILD_NUMBER
            - docker build -t $IMAGE_NAME .
            # Push image to Google Container Registry
            - docker push $IMAGE_NAME
            # Initialize configs for kubernetes
            - gcloud config set project $PROJECT_ID
            - gcloud config set compute/zone $PROJECT_ZONE
            - gcloud container clusters get-credentials $PROJECT_CLUSTER
            # Run kubernetes configs
            - cat kubernetes-config.yaml | sed "s#{{IMAGE_NAME}}#$IMAGE_NAME#g" | kubectl apply -f -
ingress.yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  annotations:
    ingress.kubernetes.io/backends: '{"k8s-be-30359--17630a0e8199e99b":"HEALTHY","k8s-be-30599--17630a0e8199e99b":"HEALTHY","k8s-be-30808--17630a0e8199e99b":"HEALTHY","k8s-be-30991--17630a0e8199e99b":"HEALTHY","k8s-be-31055--17630a0e8199e99b":"HEALTHY","k8s-be-31467--17630a0e8199e99b":"HEALTHY","k8s-be-31596--17630a0e8199e99b":"HEALTHY","k8s-be-31948--17630a0e8199e99b":"HEALTHY","k8s-be-32702--17630a0e8199e99b":"HEALTHY"}'
    ingress.kubernetes.io/forwarding-rule: k8s-fw-default-[REDACTED]-dev-ingress--17630a0e8199e99b
    ingress.kubernetes.io/https-forwarding-rule: k8s-fws-default-[REDACTED]-dev-ingress--17630a0e8199e99b
    ingress.kubernetes.io/https-target-proxy: k8s-tps-default-[REDACTED]-dev-ingress--17630a0e8199e99b
    ingress.kubernetes.io/ssl-cert: k8s-ssl-d6db2a7a17456a7b-64a79e74837f68e3--17630a0e8199e99b
    ingress.kubernetes.io/static-ip: k8s-fw-default-[REDACTED]-dev-ingress--17630a0e8199e99b
    ingress.kubernetes.io/target-proxy: k8s-tp-default-[REDACTED]-dev-ingress--17630a0e8199e99b
    ingress.kubernetes.io/url-map: k8s-um-default-[REDACTED]-dev-ingress--17630a0e8199e99b
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"extensions/v1beta1","kind":"Ingress","metadata":{"annotations":{},"name":"[REDACTED]-dev-ingress","namespace":"default"},"spec":{"rules":[{"host":"bot-dev.[REDACTED]","http":{"paths":[{"backend":{"serviceName":"bot-service","servicePort":80}}]}},{"host":"client-dev.[REDACTED]","http":{"paths":[{"backend":{"serviceName":"client-service","servicePort":80}}]}},{"host":"team-dev.[REDACTED]","http":{"paths":[{"backend":{"serviceName":"team-service","servicePort":80}}]}},{"host":"chat-dev.[REDACTED]","http":{"paths":[{"backend":{"serviceName":"chat-service","servicePort":80}}]}},{"host":"chatb-dev.[REDACTED]","http":{"paths":[{"backend":{"serviceName":"chat-builder-service","servicePort":80}}]}},{"host":"action-dev.[REDACTED]","http":{"paths":[{"backend":{"serviceName":"action-service","servicePort":80}}]}},{"host":"message-dev.[REDACTED]","http":{"paths":[{"backend":{"serviceName":"message-service","servicePort":80}}]}}],"tls":[{"hosts":["bots-dev.[REDACTED]","client-dev.[REDACTED]","team-dev.[REDACTED]","chat-dev.[REDACTED]","chatb-dev.[REDACTED]","message-dev.[REDACTED]"],"secretName":"[REDACTED]-ssl"}]}}
  creationTimestamp: "2019-08-09T09:19:14Z"
  generation: 7
  name: [REDACTED]-dev-ingress
  namespace: default
  resourceVersion: "73975381"
  selfLink: /apis/extensions/v1beta1/namespaces/default/ingresses/[REDACTED]-dev-ingress
  uid: c176cc8c-ba86-11e9-89d6-42010a940181
spec:
  rules:
  - host: bot-dev.[REDACTED]
    http:
      paths:
      - backend:
          serviceName: bot-service
          servicePort: 80
  - host: client-dev.[REDACTED]
    http:
      paths:
      - backend:
          serviceName: client-service
          servicePort: 80
  - host: team-dev.[REDACTED]
    http:
      paths:
      - backend:
          serviceName: team-service
          servicePort: 80
  - host: chat-dev.[REDACTED]
    http:
      paths:
      - backend:
          serviceName: chat-service
          servicePort: 80
  - host: chatb-dev.[REDACTED]
    http:
      paths:
      - backend:
          serviceName: chat-builder-service
          servicePort: 80
  - host: action-dev.[REDACTED]
    http:
      paths:
      - backend:
          serviceName: action-service
          servicePort: 80
  - host: message-dev.[REDACTED]
    http:
      paths:
      - backend:
          serviceName: message-service
          servicePort: 80
  - host: sse-dev.[REDACTED]
    http:
      paths:
      - backend:
          serviceName: sse-service
          servicePort: 80
  tls:
  - hosts:
    - bots-dev.[REDACTED]
    - client-dev.[REDACTED]
    - team-dev.[REDACTED]
    - chat-dev.[REDACTED]
    - chatb-dev.[REDACTED]
    - message-dev.[REDACTED]
    - sse-dev.[REDACTED]
    secretName: [REDACTED]-ssl
status:
  loadBalancer:
    ingress:
    - ip: [REDACTED]

The health check for your Load Balancer comes from the readinessProbe configured in the Deployment. You configured the path to be /health/check; however, your Flask environment has nothing listening on that path. This means the readinessProbe is likely failing, and the health check from your Load Balancer is failing as well.
With the health checks failing, your Load Balancer does not see any healthy backends, so it returns a 502 error.
You can verify this in three ways:
1. Check the Stackdriver logs: you will see the 502 responses logged; check the details of the log entry for more information about the 502. You will likely see that there are no healthy backends.
2. Check the status of your pods using kubectl get po | grep sse-service; the pods are likely not Ready.
3. Test the check from another pod in the cluster. (NOTE: you will need a pod that has curl installed or lets you install it. If you don't have one, use a busybox or nginx base image.)
a. kubectl get po -o wide | grep sse-service and take note of the IP of one of the pods.
b. kubectl exec [test_pod] -- curl [sse-service-pod-ip]:5000/health/check. This curls from a pod in the cluster to one of your sse-service pods and checks whether anything replies on /health/check. There likely is not.
To address this, you should have `@blueprint.route('/health/check', methods=['GET'])` and define the function to simply return a 200, as sketched below.
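For reference, a minimal hedged sketch of what needs to exist: the route itself (essentially the question's Base.py) and, crucially, its registration on the Flask app that gunicorn serves. The create_app factory below is hypothetical, since the question does not show wsgi.py:

from flask import Flask, Blueprint, jsonify

health = Blueprint('health', __name__)

@health.route('/health/check', methods=['GET'])
def check_health():
    # Returning a 200 here is what lets the readinessProbe (and the
    # load balancer health check derived from it) succeed.
    return jsonify({'status': 'success', 'message': 'pong!'}), 200

def create_app():
    # Hypothetical app factory; the question's wsgi:create_app() is not shown.
    app = Flask(__name__)
    app.register_blueprint(health)
    return app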

Related

How to get flask app accessible through ingress without setting rewrite-target

I have a Kubernetes cluster that uses an Ingress to forward traffic to a frontend React app and a backend Flask app. My problem is that the React app only works if the rewrite-target annotation is not set, and the Flask app only works if it is.
How can I get my Flask app accessible without setting this value (commented out in the YAML below)?
Here is the Ingress resource:
metadata:
  name: thesis-ingress
  namespace: thesis
  annotations:
    kubernetes.io/ingress.class: "nginx"
    nginx.ingress.kubernetes.io/add-base-url: "true"
    # nginx.ingress.kubernetes.io/rewrite-target: /$1
    nginx.ingress.kubernetes.io/service-upstream: "true"
spec:
  tls:
  - hosts:
    - thesis
    secretName: ingress-tls
  rules:
  - host: thesis.info
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: frontend
            port:
              number: 3000
      - path: /backend
        pathType: Prefix
        backend:
          service:
            name: backend
            port:
              number: 5000
Your question didn't specify, but I'm guessing your capture group was meant to rewrite /backend/(.+) to /$1; on that assumption:
Be aware that annotations are per-Ingress, but all Ingress resources are unioned across the cluster to make up the whole configuration. Thus, if you need one rule with a rewrite and one without, just create two Ingress resources:
metadata:
  name: thesis-frontend
  namespace: thesis
  annotations:
    kubernetes.io/ingress.class: "nginx"
    nginx.ingress.kubernetes.io/add-base-url: "true"
    nginx.ingress.kubernetes.io/service-upstream: "true"
spec:
  tls:
  - hosts:
    - thesis
    secretName: ingress-tls
  rules:
  - host: thesis.info
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: frontend
            port:
              number: 3000
---
metadata:
  name: thesis-backend
  namespace: thesis
  annotations:
    kubernetes.io/ingress.class: "nginx"
    nginx.ingress.kubernetes.io/add-base-url: "true"
    nginx.ingress.kubernetes.io/rewrite-target: /$1
    nginx.ingress.kubernetes.io/service-upstream: "true"
spec:
  tls:
  - hosts:
    - thesis
    secretName: ingress-tls
  rules:
  - host: thesis.info
    http:
      paths:
      - path: /backend/(.+)
        backend:
          service:
            name: backend
            port:
              number: 5000

Where to check swarm load balancer logs?

I have a Docker Compose file like this:
---
version: '3.7'
services:
  myapi:
    image: tiangolo/uwsgi-nginx-flask:python3.7
    env_file: apivars.env
    logging:
      driver: syslog
      options:
        syslog-address: "udp://127.0.0.1:514"
        tag: tags
        labels: labels
    ports:
      - "8080:80"
    deploy:
      placement:
        constraints:
          - node.role != manager
      mode: replicated
      replicas: 32
      update_config:
        parallelism: 4
        delay: 5s
        order: start-first
...
I have a load balancer which redirects requests to this swarm manager.
My understanding is that if I hit www.myapi.com, the request goes to the load balancer, then to the swarm manager, and the swarm manager sends it to one of the 32 replicas.
Now the issue is that the load balancer logs report some 502 errors:
# head -n1 /var/log/haproxy.log
Apr 28 09:35:28 localhost haproxy[43117]: 172.19.9.1:50220 [28/Apr/2020:09:35:08.549] main~ API_Production/swarmnode5 0/0/1/19952/19953 502 309 - - ---- 97/97/10/1/0 0/0 "GET /v2/students/?includeFields=name,id&per_page=1000&page=88 HTTP/1.1"
How do I check whether the request reached the swarm manager or swarmnode5?
I checked the nginx logs, but they don't report any 502 errors. There are some exceptions, but if an exception occurs in the code, why doesn't nginx log that API call and its response?

uWSGI configuration in Kubernetes

I am running my backend using Python and Django with uWSGI. We recently migrated it to Kubernetes (GKE) and our pods are consuming a LOT of memory and the rest of the cluster is starving for resources. We think that this might be related to the uWSGI configuration.
This is our yaml for the pods:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: my-pod
  namespace: my-namespace
spec:
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 10
      maxUnavailable: 10
  selector:
    matchLabels:
      app: my-pod
  template:
    metadata:
      labels:
        app: my-pod
    spec:
      containers:
      - name: web
        image: my-img:{{VERSION}}
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 8000
          protocol: TCP
        command: ["uwsgi", "--http", ":8000", "--wsgi-file", "onyo/wsgi.py", "--workers", "5", "--max-requests", "10", "--master", "--vacuum", "--enable-threads"]
        resources:
          requests:
            memory: "300Mi"
            cpu: 150m
          limits:
            memory: "2Gi"
            cpu: 1
        livenessProbe:
          httpGet:
            httpHeaders:
            - name: Accept
              value: application/json
            path: "/healthcheck"
            port: 8000
          initialDelaySeconds: 15
          timeoutSeconds: 5
          periodSeconds: 30
        readinessProbe:
          httpGet:
            httpHeaders:
            - name: Accept
              value: application/json
            path: "/healthcheck"
            port: 8000
          initialDelaySeconds: 15
          timeoutSeconds: 5
          periodSeconds: 30
        envFrom:
        - configMapRef:
            name: configmap
        - secretRef:
            name: secrets
        volumeMounts:
        - name: service-account-storage-credentials-volume
          mountPath: /credentials
          readOnly: true
      - name: csql-proxy
        image: gcr.io/cloudsql-docker/gce-proxy:1.11
        command: ["/cloud_sql_proxy",
                  "-instances=my-project:region:backend=tcp:1234",
                  "-credential_file=/secrets/credentials.json"]
        ports:
        - containerPort: 1234
          name: sql
        securityContext:
          runAsUser: 2 # non-root user
          allowPrivilegeEscalation: false
        volumeMounts:
        - name: credentials
          mountPath: /secrets/sql
          readOnly: true
      volumes:
      - name: credentials
        secret:
          secretName: credentials
      - name: volume
        secret:
          secretName: production
          items:
          - key: APPLICATION_CREDENTIALS_CONTENT
            path: key.json
We are using the same uWSGI configuration that we had before the migration (when the backend was being executed in a VM).
Is there a best practice config for running uWSGI in K8s? Or maybe something that I am doing wrong in this particular config?
You activated 5 workers in uWSGI, which can mean 5 times the memory usage if your application relies on lazy-loading techniques (my advice: load everything at startup and trust pre-fork; check the uWSGI docs on this). However, you could try reducing the number of workers and raising the number of threads instead.
Also, you should drop max-requests: this makes your app reload every 10 requests, which is nonsense in a production environment (see the docs). If you have trouble with memory leaks, use reload-on-rss instead (a variant is sketched after the command below).
I would do something like this, maybe with fewer or more threads per worker depending on how your app uses them (adjust according to CPU usage/availability per pod in production):
command: ["uwsgi", "--http", ":8000", "--wsgi-file", "onyo/wsgi.py", "--workers", "2", "--threads", "10", "--master", "--vacuum", "--enable-threads"]
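If the memory growth is a genuine leak rather than normal working-set size, a variant of the command above that adds reload-on-rss (threshold in megabytes; the 400 here is purely illustrative and should be tuned against the pod's 2Gi limit) could be:

command: ["uwsgi", "--http", ":8000", "--wsgi-file", "onyo/wsgi.py", "--workers", "2", "--threads", "10", "--master", "--vacuum", "--enable-threads", "--reload-on-rss", "400"]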
PS: as zerg said in a comment, you should of course make sure your app is not running in DEBUG mode and keep logging output low.

Change root path for Spark Web UI?

I'm working on setting up Jupyter notebook servers on Kubernetes that are able to launch pyspark. Each user can have multiple servers running at once, and accesses each by navigating to the appropriate host combined with a path to the server's fully-qualified name. For example: http://<hostname>/<username>/<notebook server name>.
I have a top-level function defined that allows a user create a SparkSession that points to the Kubernetes master URL and sets their pod to be the Spark driver.
This is all well and good, but I would like to enable end users to access the URL for the Spark Web UI so that they can track their jobs. The Spark on Kubernetes documentation lists port forwarding as its recommended scheme for achieving this. It seems that, for any security-minded organization, allowing any random user to set up port forwarding in this way would be unacceptable.
I would like to use a Kubernetes Ingress definition to allow external access to the driver's Spark Web UI. I've set up something like the following:
# Service
apiVersion: v1
kind: Service
metadata:
  namespace: <notebook namespace>
  name: <username>-<notebook server name>-svc
spec:
  type: ClusterIP
  sessionAffinity: None
  selector:
    app: <username>-<notebook server name>-notebook
  ports:
  - name: app-svc-port
    protocol: TCP
    port: 8888
    targetPort: 8888
  - name: spark-ui-port
    protocol: TCP
    port: 4040
    targetPort: 4040

# Ingress
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  namespace: workspace
  name: <username>-<notebook server name>-ing
  annotations:
    kubernetes.io/ingress.class: traefik
spec:
  rules:
  - host: <hostname>
    http:
      paths:
      - path: /<username>/<notebook server name>
        backend:
          serviceName: <username>-<notebook server name>-svc
          servicePort: app-svc-port
      - path: /<username>/<notebook server name>/spark-ui
        backend:
          serviceName: <username>-<notebook server name>-svc
          servicePort: spark-ui-port
However, under this setup, when I navigate to http://<hostname>/<username>/<notebook server name>/spark-ui/, I'm redirected to http://<hostname>/jobs. This is because /jobs is the default entry point to Spark's Web UI. However, I don't have an ingress rule for that path, and I can't set such a rule since every user's Web UI would collide in the load balancer (unless I have a misunderstanding, which is totally possible).
Under the Spark UI configuration settings, there doesn't seem to be a way to set a root path for the Spark session. You can change the port on which it runs, but what I'd like to do is make the UI serve at something like http://<hostname>/<username>/<notebook server name>/spark-ui/<jobs, stages, etc>. Is there really no way of changing what comes after the hostname of the URL and before the last part?
1. Set your Spark config:

spark.ui.proxyBase: /foo

2. Set the nginx annotations on the Ingress:

annotations:
  nginx.ingress.kubernetes.io/proxy-redirect-from: http://$host/
  nginx.ingress.kubernetes.io/proxy-redirect-to: http://$host/foo/

3. Add an annotation to rewrite the target:

annotations:
  nginx.ingress.kubernetes.io/rewrite-target: /$2
spec:
  rules:
  - host: <host>
    http:
      paths:
      - backend:
          serviceName: <service>
          servicePort: <port>
        path: /foo(/|$)(.*)
Yes, you can achieve this. Specifically, you can do it by setting the spark.ui.proxyBase property within spark-defaults.conf or at run-time.
Example:
echo "spark.ui.proxyBase $SPARK_UI_PROXYBASE" >> /opt/spark/conf/spark-defaults.conf;
Then this should work.
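For the Jupyter/pyspark setup described in the question, the same property can also be set at run-time when the SparkSession is built (before the UI starts). The proxy-base value below is only an example following the question's URL scheme:

from pyspark.sql import SparkSession

# spark.ui.proxyBase must match the path the Ingress exposes for this
# notebook server (example value only; substitute the real username and
# notebook server name).
spark = (SparkSession.builder
         .appName("notebook-session")
         .config("spark.ui.proxyBase", "/<username>/<notebook server name>/spark-ui")
         .getOrCreate())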

Accessing Kubernetes service on port 80

I have a Kubernetes service (a Python Flask application) exposed publicly on port 30000 (all Kubernetes NodePorts have to be in the range 30000-32767, from what I understand) using the LoadBalancer type. I need my public-facing service to be accessible on the standard HTTP port 80. What's the best way to go about doing this?
If you don't use any cloud provider, you can just set the externalIPs option in the Service and bring this IP up on a node; kube-proxy will then route traffic from this IP to your pod for you.
{
  "kind": "Service",
  "apiVersion": "v1",
  "metadata": {
    "name": "my-service"
  },
  "spec": {
    "selector": {
      "app": "MyApp"
    },
    "ports": [
      {
        "name": "http",
        "protocol": "TCP",
        "port": 80,
        "targetPort": 9376
      }
    ],
    "externalIPs": [
      "80.11.12.10"
    ]
  }
}
https://kubernetes.io/docs/concepts/services-networking/service/#external-ips
If you want to use a cloud provider's load balancer, assuming your app is exposed on port 8080 and you want to expose it publicly on port 80, here is how the configuration could look:
apiVersion: v1
kind: Service
metadata:
  name: flask-app
  labels:
    run: flask_app
  namespace: default
spec:
  type: LoadBalancer
  ports:
  - port: 80
    targetPort: 8080
    protocol: TCP
    name: http
  selector:
    run: flask_app
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: flask-app
  namespace: default
spec:
  replicas: 1
  strategy:
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 0
    type: RollingUpdate
  template:
    metadata:
      labels:
        run: flask_app
    spec:
      restartPolicy: Always
      terminationGracePeriodSeconds: 60
      containers:
      - name: flask-app
        image: repo/flask_app:latest
        ports:
        - containerPort: 8080
        imagePullPolicy: Always
Another option is to use an Ingress controller, for example nginx; a minimal sketch follows the link below.
https://kubernetes.io/docs/concepts/services-networking/ingress/
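A minimal sketch of that approach, assuming an nginx ingress controller is already installed in the cluster and reusing the flask-app Service from the example above (which would then typically be ClusterIP or NodePort rather than LoadBalancer); the host name is only a placeholder:

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: flask-app-ingress
  namespace: default
  annotations:
    kubernetes.io/ingress.class: nginx
spec:
  rules:
  - host: flask.example.com          # placeholder host name
    http:
      paths:
      - path: /
        backend:
          serviceName: flask-app     # the Service from the example above
          servicePort: 80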
