I want to manipulate the browser properties of the Selenium nodes. I use the selenium/node-chrome Docker image for the nodes (though I may want to do this on Firefox nodes too) inside a Selenium Grid on a Kubernetes (Minikube) cluster.
The properties I want to manipulate are
navigator.webdriver
screen.width
screen.height
navigator.deviceMemory
but I am looking for an approach that works for most browser properties, since I may later find other properties I want to change. The properties do not have to change during a scan, and they do not need to be set from my Python code; it is fine to change them in the node configuration, in the Docker image, or to spoof them elsewhere.
I wrote a Python script to read out the current values of those properties:
from selenium import webdriver
from selenium.webdriver.common.desired_capabilities import DesiredCapabilities

driver = webdriver.Remote(
    command_executor='http://selenium-hub:4444/wd/hub',
    desired_capabilities=getattr(DesiredCapabilities, "CHROME")
)

print("navigator.webdriver: " + str(driver.execute_script("return navigator.webdriver")))
print("screen.width: " + str(driver.execute_script("return screen.width")))
print("screen.height: " + str(driver.execute_script("return screen.height")))
print("navigator.deviceMemory: " + str(driver.execute_script("return navigator.deviceMemory")))

driver.quit()
How can I change these properties?
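One direction, shown here only as a minimal sketch (it reuses the driver from the script above and is an assumption, not part of the original setup): Object.defineProperty can shadow these getters from execute_script, but the override lives only in the current document and only takes effect after the call runs, so it does not fool scripts that read the values during page load.

# Hypothetical illustration: shadow the getters in the current page context.
# The overrides only affect reads made after this call and are lost on navigation.
driver.execute_script("""
    Object.defineProperty(navigator, 'webdriver', {get: () => undefined});
    Object.defineProperty(navigator, 'deviceMemory', {get: () => 8});
    Object.defineProperty(screen, 'width', {get: () => 1920});
    Object.defineProperty(screen, 'height', {get: () => 1080});
""")
print(driver.execute_script("return navigator.deviceMemory"))  # now reports 8

To apply such an override before the page's own scripts run, it would have to be injected earlier, e.g. at the browser or node level.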
EDIT: The K8s deployment of the node looks like this:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: selenium-node-chrome
  labels:
    app: selenium-node-chrome
spec:
  replicas: 2
  selector:
    matchLabels:
      app: selenium-node-chrome
  template:
    metadata:
      labels:
        app: selenium-node-chrome
    spec:
      containers:
      - name: selenium-node-chrome
        image: selenium/node-chrome
        ports:
        - containerPort: 5555
        volumeMounts:
        - name: dshm
          mountPath: /dev/shm
        - name: config
          mountPath: /opt/selenium/config.json
          subPath: config.json
        env:
        - name: HUB_HOST
          value: "selenium-hub"
        - name: HUB_PORT
          value: "4444"
        resources:
          limits:
            memory: "1000Mi"
            cpu: ".5"
      volumes:
      - name: dshm
        emptyDir:
          medium: Memory
      - name: config
        configMap:
          name: selenium-node-chrome
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: selenium-node-chrome
data:
  config.json: |
    {
      "capabilities": [
        {
          "version": "81.0.4044.92",
          "browserName": "chrome",
          "maxInstances": 1,
          "seleniumProtocol": "WebDriver",
          "applicationName": ""
        }
      ],
      "proxy": "org.openqa.grid.selenium.proxy.DefaultRemoteProxy",
      "maxSession": 1,
      "host": "0.0.0.0",
      "port": 5555,
      "register": true,
      "registerCycle": 5000,
      "nodePolling": 5000,
      "unregisterIfStillDownAfter": 60000,
      "downPollingLimit": 2,
      "debug": false
    }
Related
I have a Kubernetes cluster that is making use of an Ingress to forward traffic to a frontend React app and a backend Flask app. My problem is that the React app only works if the rewrite-target annotation is not set, and the Flask app only works if it is.
How can I make my Flask app accessible without setting this value (commented out in the YAML below)?
Here is the Ingress:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: thesis-ingress
  namespace: thesis
  annotations:
    kubernetes.io/ingress.class: "nginx"
    nginx.ingress.kubernetes.io/add-base-url: "true"
    # nginx.ingress.kubernetes.io/rewrite-target: /$1
    nginx.ingress.kubernetes.io/service-upstream: "true"
spec:
  tls:
  - hosts:
    - thesis
    secretName: ingress-tls
  rules:
  - host: thesis.info
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: frontend
            port:
              number: 3000
      - path: /backend
        pathType: Prefix
        backend:
          service:
            name: backend
            port:
              number: 5000
Your question didn't specify, but I'm guessing your capture group was to rewrite /backend/(.+) to /$1; on that assumption:
Be aware that annotations are per-Ingress, but all Ingress resources are unioned across the cluster to comprise the whole of the configuration. Thus, if you need one path rewritten and one not, just create two Ingress resources:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: thesis-frontend
  namespace: thesis
  annotations:
    kubernetes.io/ingress.class: "nginx"
    nginx.ingress.kubernetes.io/add-base-url: "true"
    nginx.ingress.kubernetes.io/service-upstream: "true"
spec:
  tls:
  - hosts:
    - thesis
    secretName: ingress-tls
  rules:
  - host: thesis.info
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: frontend
            port:
              number: 3000
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: thesis-backend
  namespace: thesis
  annotations:
    kubernetes.io/ingress.class: "nginx"
    nginx.ingress.kubernetes.io/add-base-url: "true"
    nginx.ingress.kubernetes.io/rewrite-target: /$1
    nginx.ingress.kubernetes.io/service-upstream: "true"
spec:
  tls:
  - hosts:
    - thesis
    secretName: ingress-tls
  rules:
  - host: thesis.info
    http:
      paths:
      - path: /backend/(.+)
        pathType: ImplementationSpecific
        backend:
          service:
            name: backend
            port:
              number: 5000
I'm trying to patch a deployment and remove its volumes using patch_namespaced_deployment (https://github.com/kubernetes-client/python) with the following arguments, but it's not working.
patch_namespaced_deployment(
    name=deployment_name,
    namespace='default',
    body={
        "spec": {
            "template": {
                "spec": {
                    "volumes": None,
                    "containers": [{"name": container_name, "volumeMounts": None}]
                }
            }
        }
    },
    pretty='true'
)
How to reproduce it:
Create this file app.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: myclaim
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
---
apiVersion: v1
kind: PersistentVolume
metadata:
  annotations:
    pv.kubernetes.io/bound-by-controller: "yes"
  finalizers:
  - kubernetes.io/pv-protection
  labels:
    volume: pv0001
  name: pv0001
  resourceVersion: "227035"
  selfLink: /api/v1/persistentvolumes/pv0001
spec:
  accessModes:
  - ReadWriteOnce
  capacity:
    storage: 5Gi
  claimRef:
    apiVersion: v1
    kind: PersistentVolumeClaim
    name: myclaim
    namespace: default
    resourceVersion: "227033"
  hostPath:
    path: /mnt/pv-data/pv0001
    type: ""
  persistentVolumeReclaimPolicy: Recycle
  volumeMode: Filesystem
status:
  phase: Bound
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: pv-deploy
spec:
  replicas: 1
  selector:
    matchLabels:
      app: mypv
  template:
    metadata:
      labels:
        app: mypv
    spec:
      containers:
      - name: shell
        image: centos:7
        command:
        - "bin/bash"
        - "-c"
        - "sleep 10000"
        volumeMounts:
        - name: mypd
          mountPath: "/tmp/persistent"
      volumes:
      - name: mypd
        persistentVolumeClaim:
          claimName: myclaim
- kubectl apply -f app.yaml
- kubectl describe deployment.apps/pv-deploy (to check the volumeMounts and Volumes)
- kubectl patch deployment.apps/pv-deploy --patch '{"spec": {"template": {"spec": {"volumes": null, "containers": [{"name": "shell", "volumeMounts": null}]}}}}'
- kubectl describe deployment.apps/pv-deploy (to check the volumeMounts and Volumes)
- Delete the application now: kubectl delete -f app.yaml
- kubectl create -f app.yaml
- Patch the deployment using the Python library function as stated above. The *volumeMounts* section is removed, but the volumes still exist.
EDIT: Running the kubectl patch command works as expected. But after executing the Python script and running a describe deployment command, the persistentVolumeClaim is replaced with an emptyDir, like this:
Volumes:
  mypd:
    Type:       EmptyDir (a temporary directory that shares a pod's lifetime)
    Medium:
    SizeLimit:  <unset>
What you're trying to do is called a strategic merge patch (https://kubernetes.io/docs/tasks/manage-kubernetes-objects/update-api-object-kubectl-patch/). As you can see in the documentation, "with a strategic merge patch, a list is either replaced or merged depending on its patch strategy", so this may be why you're seeing this behavior.
I think you should go with replace (https://jamesdefabia.github.io/docs/user-guide/kubectl/kubectl_replace/): instead of trying to manage a part of your Deployment object, replace it with a new one, as in the sketch below.
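A minimal sketch of that approach with the official Python client, using the names and namespace from the question (assumes kubeconfig access to the cluster): read the live Deployment, clear the volume fields in memory, then replace the whole object.

from kubernetes import client, config

config.load_kube_config()
apps = client.AppsV1Api()

# Read the live object and drop the volume definitions and mounts in memory...
dep = apps.read_namespaced_deployment(name="pv-deploy", namespace="default")
dep.spec.template.spec.volumes = None
for container in dep.spec.template.spec.containers:
    container.volume_mounts = None

# ...then replace the Deployment wholesale instead of patching part of it.
apps.replace_namespaced_deployment(name="pv-deploy", namespace="default", body=dep)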
I am running my backend using Python and Django with uWSGI. We recently migrated it to Kubernetes (GKE) and our pods are consuming a LOT of memory and the rest of the cluster is starving for resources. We think that this might be related to the uWSGI configuration.
This is our yaml for the pods:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: my-pod
  namespace: my-namespace
spec:
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 10
      maxUnavailable: 10
  selector:
    matchLabels:
      app: my-pod
  template:
    metadata:
      labels:
        app: my-pod
    spec:
      containers:
      - name: web
        image: my-img:{{VERSION}}
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 8000
          protocol: TCP
        command: ["uwsgi", "--http", ":8000", "--wsgi-file", "onyo/wsgi.py", "--workers", "5", "--max-requests", "10", "--master", "--vacuum", "--enable-threads"]
        resources:
          requests:
            memory: "300Mi"
            cpu: 150m
          limits:
            memory: "2Gi"
            cpu: 1
        livenessProbe:
          httpGet:
            httpHeaders:
            - name: Accept
              value: application/json
            path: "/healthcheck"
            port: 8000
          initialDelaySeconds: 15
          timeoutSeconds: 5
          periodSeconds: 30
        readinessProbe:
          httpGet:
            httpHeaders:
            - name: Accept
              value: application/json
            path: "/healthcheck"
            port: 8000
          initialDelaySeconds: 15
          timeoutSeconds: 5
          periodSeconds: 30
        envFrom:
        - configMapRef:
            name: configmap
        - secretRef:
            name: secrets
        volumeMounts:
        - name: service-account-storage-credentials-volume
          mountPath: /credentials
          readOnly: true
      - name: csql-proxy
        image: gcr.io/cloudsql-docker/gce-proxy:1.11
        command: ["/cloud_sql_proxy",
                  "-instances=my-project:region:backend=tcp:1234",
                  "-credential_file=/secrets/credentials.json"]
        ports:
        - containerPort: 1234
          name: sql
        securityContext:
          runAsUser: 2  # non-root user
          allowPrivilegeEscalation: false
        volumeMounts:
        - name: credentials
          mountPath: /secrets/sql
          readOnly: true
      volumes:
      - name: credentials
        secret:
          secretName: credentials
      - name: volume
        secret:
          secretName: production
          items:
          - key: APPLICATION_CREDENTIALS_CONTENT
            path: key.json
We are using the same uWSGI configuration that we had before the migration (when the backend was running on a VM).
Is there a best-practice configuration for running uWSGI in K8s? Or is there something I am doing wrong in this particular config?
You activated 5 workers in uWSGI, which can mean 5 times the memory requirement if your application uses lazy-loading techniques (my advice: load everything at startup and trust pre-forking; check this). However, you could try reducing the number of workers and raising the number of threads instead.
Also, you should drop max-requests: it makes your app reload every 10 requests, which makes no sense in a production environment (doc). If you have trouble with memory leaks, use reload-on-rss instead.
I would do something like this, with fewer or more threads per worker depending on how your app uses them (adjust according to CPU usage/availability per pod in production):
command: ["uwsgi", "--http", ":8000", "--wsgi-file", "onyo/wsgi.py", "--workers", "2", "--threads", "10", "--master", "--vacuum", "--enable-threads"]
P.S.: as zerg said in a comment, you should of course make sure your app is not running in DEBUG mode, and keep logging output low.
I have a Kubernetes service (a Python Flask application) exposed publicly on port 30000 (all Kubernetes NodePorts have to be in the range 30000-32767, from what I understand) using the LoadBalancer type. I need my public-facing service to be accessible on the standard HTTP port 80. What's the best way to go about doing this?
If you don't use any cloud provider, you can just set the externalIPs option in the Service and bring that IP up on a node; kube-proxy will then route traffic arriving on that IP to your pod for you.
{
    "kind": "Service",
    "apiVersion": "v1",
    "metadata": {
        "name": "my-service"
    },
    "spec": {
        "selector": {
            "app": "MyApp"
        },
        "ports": [
            {
                "name": "http",
                "protocol": "TCP",
                "port": 80,
                "targetPort": 9376
            }
        ],
        "externalIPs": [
            "80.11.12.10"
        ]
    }
}
https://kubernetes.io/docs/concepts/services-networking/service/#external-ips
If you want to use the cloud provider's load balancer, assuming your app listens on port 8080 and you want to expose it publicly on port 80, here is how the configuration could look:
apiVersion: v1
kind: Service
metadata:
  name: flask-app
  labels:
    run: flask-app
  namespace: default
spec:
  type: LoadBalancer
  ports:
  - port: 80
    targetPort: 8080
    protocol: TCP
    name: http
  selector:
    run: flask-app
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: flask-app
  namespace: default
spec:
  replicas: 1
  strategy:
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 0
    type: RollingUpdate
  template:
    metadata:
      labels:
        run: flask-app
    spec:
      restartPolicy: Always
      terminationGracePeriodSeconds: 60
      containers:
      - name: flask-app
        image: repo/flask_app:latest
        ports:
        - containerPort: 8080
        imagePullPolicy: Always
Another option is to use an Ingress controller, for example NGINX.
https://kubernetes.io/docs/concepts/services-networking/ingress/
I have a Kubernetes cluster set up. I want to generate a YAML config file dynamically from a template using Python.
template.yaml
apiVersion: v1
kind: pod
metadata:
  name: $name
spec:
  replicas: $replicas
  template:
    metadata:
      labels:
        run: $name
    spec:
      containers:
      - name: $name
        image: $image
        ports:
        - containerPort: 80
The placeholders name, replicas and image are the inputs of my Python method.
Any help will be appreciated.
If you want a way to do it in pure Python, with no libraries, here's one using multiline strings and format:
def writeConfig(**kwargs):
    template = """
apiVersion: v1
kind: pod
metadata:
  name: {name}
spec:
  replicas: {replicas}
  template:
    metadata:
      labels:
        run: {name}
    spec:
      containers:
      - name: {name}
        image: {image}
        ports:
        - containerPort: 80"""
    with open('somefile.yaml', 'w') as yfile:
        yfile.write(template.format(**kwargs))

# usage:
writeConfig(name="someName", image="myImg", replicas="many")
If you want to work only with templates and pure Python, and if your variables are already checked (safe), then you can use the format method of strings.
Here is an example:
# load your template from somewhere
template = """apiVersion: v1
kind: pod
metadata:
  name: {name}
spec:
  replicas: {replicas}
  template:
    metadata:
      labels:
        run: {name}
    spec:
      containers:
      - name: {name}
        image: {image}
        ports:
        - containerPort: 80
"""

# insert your values
specific_yaml = template.format(name="test_name", image="test.img", replicas="False")

print(specific_yaml)
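Since format does no YAML-aware escaping, it can be worth parsing the rendered text before applying it. A small sketch, assuming PyYAML is available:

import yaml  # PyYAML, assumed to be installed

# Parse the rendered manifest; yaml.YAMLError is raised if a substituted value
# broke the document structure (bad indentation, stray colons, etc.).
doc = yaml.safe_load(specific_yaml)
print(doc["metadata"]["name"])  # -> test_name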