Access Kubernetes from another machine using Python - python

I have a test Kubernetes cluster (Minikube) running in a Docker container.
I want to write some Python code to start jobs on that cluster. If I go inside the container:
$ docker exec -it kubernetes /bin/sh
# apk add python3
# apk add py3-pip
# pip install kubernetes
# python3
>>> from kubernetes import config, client
>>> config.load_kube_config()
It works, and I can then create a client and do my stuff.
However, if I run the same Python commands from the host machine:
>>> config.load_kube_config()
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/path/to/python/kubernetes/config/kube_config.py", line 814, in load_kube_config
loader = _get_kube_config_loader(
File "/path/to/python/kubernetes/config/kube_config.py", line 773, in _get_kube_config_loader
raise ConfigException(
kubernetes.config.config_exception.ConfigException: Invalid kube-config file. No configuration found.
That sort of makes sense, since Kubernetes is not running on that machine. How do I get around this? Is there another way to fetch the config file?

The config should be at the ~/.kube/config path:
mkdir -p ~/.kube/
cp <current config path> ~/.kube/config
# the steps below verify that your config is correct
export KUBECONFIG=~/.kube/config
kubectl get pods
Once the above steps work, you can try running the Python script.
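A minimal sketch of the host-side Python once the kubeconfig is in place (assuming it now lives at ~/.kube/config as set up above, or is pointed to by KUBECONFIG):
from kubernetes import client, config

# With the kubeconfig copied to ~/.kube/config (or referenced by KUBECONFIG),
# the default loader finds it on the host just as it did inside the container.
config.load_kube_config()

# Quick sanity check: list pods across all namespaces.
v1 = client.CoreV1Api()
for pod in v1.list_pod_for_all_namespaces().items:
    print(pod.metadata.namespace, pod.metadata.name)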

Related

Cloud run deploy failed: the user-provided container failed to start and listen on the port defined provided by the PORT=8080 environment variable

I have migrated my Flask application to a FastAPI application and I'm trying to deploy the new FastAPI application to Cloud Run using a Dockerfile, but I get an error due to a port issue.
I have tried all the solutions given before for this error, but nothing works; I have also tried many different ways to write the Dockerfile, yet I get the same issue.
As a last try I used the FastAPI documentation to create my Dockerfile, and it didn't work either.
My Dockerfile:
# Start from the official slim Python base image.
FROM python:3.9-slim
# Set the current working directory to /code.
# This is where we'll put the requirements.txt file and the app directory.
WORKDIR /code
# Copy the file with the requirements to the /code directory.
# Copy only the file with the requirements first, not the rest of the code.
# As this file doesn't change often, Docker will detect it and use the cache for this step, enabling the cache for the next step too.
COPY ./requirements.txt /code/requirements.txt
# Install the package dependencies in the requirements file.
# The --no-cache-dir option tells pip to not save the downloaded packages locally,
# as that is only if pip was going to be run again to install the same packages,
# but that's not the case when working with containers.
RUN pip install --no-cache-dir --upgrade -r /code/requirements.txt
# As this has all the code which is what changes most frequently the Docker
# cache won't be used for this or any following steps easily
COPY ./app /code/app
# Because the program will be started at /code and inside of it is the directory ./app with your code,
# Uvicorn will be able to see and import app from app.main.
CMD ["uvicorn", "app.main:app", "--host", "0.0.0.0", "--port", "80"]
Also, I have tried to configure the running port in main:
import os

import uvicorn

if __name__ == "__main__":
    uvicorn.run("main:app", host="0.0.0.0", port=int(os.environ.get('PORT', 8080)), log_level="info")
I'm deploying my application using a cloudbuild.yaml file:
steps:
  # Build the container image
  - name: 'gcr.io/cloud-builders/docker'
    args: ['build', '-t', 'gcr.io/cog-dev/new-serving', '.']
  # Push the container image to Container Registry
  - name: 'gcr.io/cloud-builders/docker'
    args: ['push', 'gcr.io/cog-dev/new-serving']
  # Deploy container image to Cloud Run
  - name: 'gcr.io/google.com/cloudsdktool/cloud-sdk'
    entrypoint: gcloud
    args: ['run', 'deploy', 'new-serving', '--image', 'gcr.io/cog-dev/new-serving', '--region', 'europe-west1', '--allow-unauthenticated', '--platform', 'managed']
# Store images in Google Artifact Registry
images:
  - gcr.io/cog-dev/new-serving
Most of the solutions on Stack Overflow have already been tried, even changing the port numbers.
Update:
After following Use Google Cloud user credentials when testing containers locally, I tested the Docker image locally and I get this error:
File "/code/./app/endpoints/campaign.py", line 11, in <module>
from app.services.recommend_service import RecommendService
File "/code/./app/services/recommend_service.py", line 19, in <module>
datastore_client = datastore.Client()
File "/usr/local/lib/python3.9/site-packages/google/cloud/datastore/client.py", line 301, in __init__
super(Client, self).__init__(
File "/usr/local/lib/python3.9/site-packages/google/cloud/client/__init__.py", line 320, in __init__
_ClientProjectMixin.__init__(self, project=project, credentials=credentials)
File "/usr/local/lib/python3.9/site-packages/google/cloud/client/__init__.py", line 271, in __init__
raise EnvironmentError(
OSError: Project was not passed and could not be determined from the environment.
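A hedged sketch of one way to address that last error, by passing the project to the Datastore client explicitly (the GOOGLE_CLOUD_PROJECT variable name and the cog-dev fallback are assumptions for illustration, not from the original post):
import os

from google.cloud import datastore

# Supply the project ID explicitly instead of relying on the runtime
# environment to provide it; 'cog-dev' is only an example value.
project_id = os.environ.get("GOOGLE_CLOUD_PROJECT", "cog-dev")
datastore_client = datastore.Client(project=project_id)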

Using rabbitmq fails from docker container pika.exceptions.AMQPConnectionError

I am trying to learn how to use Docker and RabbitMQ at the same time.
# Specify the base image
FROM python:3.10
# Add the Python file that we want to run in Docker and define its location.
ADD ./task.py ./home/
# Install the dependencies (requests, celery, and pika) with pip.
RUN pip install requests celery pika
# Lastly, specify the entry command; this line simply runs the Python script.
CMD [ "python3", "/home/task.py" ]
This is what my Dockerfile looks like, and then I set up another container with RabbitMQ through the command:
docker run -it --rm --name rabbitmq -p 5672:5672 -p 15672:15672 rabbitmq:3.9-management
This is what task.py looks like:
from celery import Celery
import pika
from time import sleep

connection = pika.BlockingConnection(pika.ConnectionParameters(host='localhost', port=5672))
channel = connection.channel()
channel.queue_declare(queue='hello')
channel.basic_publish(exchange='',
                      routing_key='hello',
                      body='Hello World!')
connection.close()

app = Celery('task', broker="localhost")

@app.task()
def reverse(text):
    sleep(5)
    return text[::-1]
And I run the docker run command, but I keep getting this error:
PS C:\Users\xyz\PycharmProjects\Sellerie> docker run sellerie
Traceback (most recent call last):
File "/home/task.py", line 5, in <module>
connection = pika.BlockingConnection(pika.ConnectionParameters(host='localhost', port=5672))
File "/usr/local/lib/python3.10/site-packages/pika/adapters/blocking_connection.py", line
360, in __init__
self._impl = self._create_connection(parameters, _impl_class)
File "/usr/local/lib/python3.10/site-packages/pika/adapters/blocking_connection.py", line 451, in _create_connection
raise self._reap_last_connection_workflow_error(error)
pika.exceptions.AMQPConnectionError
PS >
Can anyone help me better understand where the problem is, and maybe how to connect RabbitMQ with the other Docker container where my Python file is located?
Thank you so much in advance.
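A minimal sketch of the usual fix, assuming both containers are attached to the same user-defined Docker network and the broker container keeps the name rabbitmq (inside a container, localhost refers to the container itself, so the broker has to be reached by a hostname that resolves to the RabbitMQ container):
import os

import pika

# Read the broker hostname from the environment; "rabbitmq" (the container
# name on a shared Docker network) is only an assumed default here.
rabbit_host = os.environ.get("RABBITMQ_HOST", "rabbitmq")

connection = pika.BlockingConnection(pika.ConnectionParameters(host=rabbit_host, port=5672))
channel = connection.channel()
channel.queue_declare(queue='hello')
channel.basic_publish(exchange='', routing_key='hello', body='Hello World!')
connection.close()
The container would then be started with the hostname supplied at run time, for example docker run -e RABBITMQ_HOST=rabbitmq --network <your-network> sellerie (the network name is assumed).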

airflow initdb in Docker throws ImportError: cannot import name 'import_string'

I am trying to dockerize Airflow; my Dockerfile looks like this:
FROM python:3.5.2
RUN mkdir -p /src/airflow
RUN mkdir -p /src/airflow/logs
RUN mkdir -p /src/airflow/plugins
WORKDIR /src
COPY . .
RUN pip install psycopg2
RUN pip install -r requirements.txt
COPY airflow.cfg /src/airflow
ENV AIRFLOW_HOME /src/airflow
ENV PYTHONPATH "${PYTHONPATH}:/src"
RUN airflow initdb
EXPOSE 8080
ENTRYPOINT ./airflow-start.sh
while my docker-compose.yml looks like this
version: "3"
services:
airflow:
container_name: airflow
network_mode: host
build:
context: .
dockerfile: Dockerfile
ports:
- 8080:8080
The output of $ docker-compose build comes up like normal; every step executes, and then:
Step 12/14 : RUN airflow initdb
---> Running in 8b7ebe406978
[2020-04-21 10:34:21,419] {__init__.py:45} INFO - Using executor LocalExecutor
Traceback (most recent call last):
File "/usr/local/bin/airflow", line 17, in <module>
from airflow.bin.cli import CLIFactory
File "/usr/local/lib/python3.5/site-packages/airflow/bin/cli.py", line 59, in <module>
from airflow.www.app import cached_app
File "/usr/local/lib/python3.5/site-packages/airflow/www/app.py", line 20, in <module>
from flask_cache import Cache
File "/usr/local/lib/python3.5/site-packages/flask_cache/__init__.py", line 24, in <module>
from werkzeug import import_string
ImportError: cannot import name 'import_string'
ERROR: Service 'airflow' failed to build: The command '/bin/sh -c airflow initdb' returned a non-zero code: 1
Postgres is running on the host system.
I have tried multiple ways, but this keeps happening.
I even tried the puckel/docker-airflow image and the same error occurred.
Can someone tell me what I am doing wrong?
Project Structure:
root
-airflow_dags
-Dockerfile
-docker-compose.yml
-airflow-start.sh
-airflow.cfg
In case it's relevant: airflow-start.sh
In airflow.cfg:
dags_folder = /src/airflow_dags/
sql_alchemy_conn = postgresql://airflow:airflow@localhost:5432/airflow
If possible, get your code running without touching Docker at all, i.e. run it directly on your host (your laptop, or wherever you execute your commands; it could be a remote VPS Debian box). Of course this means your host must have the same OS as your Dockerfile; in this case FROM python:3.5.2 is actually based on Debian 8.
Short of doing the above, launch a toy container which does nothing yet executes and lets you log in to it to manually run your commands and aid troubleshooting. Use the following as this toy container's Dockerfile:
FROM python:3.5.2
CMD ["/bin/bash"]
So now issue this:
docker build --tag saadi_now .   # creates image saadi_now
Now launch that image:
docker run -d saadi_now sleep infinity   # launches container
docker ps   # let's say its container_id is b91f8cba6ed1
Now log in to that running container:
docker exec -ti b91f8cba6ed1 bash
Cool, so you are now inside the Docker container; run the commands which were originally in the real Dockerfile. This sometimes makes it easier to troubleshoot.
One by one, add your actual commands from the real Dockerfile to this toy Dockerfile and redo the above until you discover the underlying issue.
Most likely this is related either to a bug in Airflow involving the werkzeug package, or to your requirements clobbering something.
I recommend checking the versions of airflow, flask, and werkzeug that are used in the environment. It may be that you need to pin the version of flask or werkzeug as discussed here.
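For example, a quick diagnostic sketch (an assumption about how one might check, not part of the original answer) that can be run inside the container to see which versions are actually installed:
# Print the versions of the packages most likely involved in the ImportError.
import airflow
import flask
import werkzeug

print("airflow:", airflow.__version__)
print("flask:", flask.__version__)
print("werkzeug:", werkzeug.__version__)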

Change the PYTHONPATH in a docker container in docker-compose file

I have an application in a docker container whose entry point is defined as
ENTRYPOINT ["/usr/local/bin/gunicorn", "--pythonpath=`$PWD`/.."]
Then, I have three container processes which use that container and entry point to serve my files from the app. Everything is fine there.
I am now trying to start another container process that overrides the gunicorn command. I want it to run a python3 process with the command
entrypoint: ["python3", "/crm/maintenance/maintenance.py"]
in the docker-compose.yml file.
The issue is when I run docker-compose up -d with the above entrypoint, all containers run fine except for the one running the python process.
The error I get is:
Traceback (most recent call last):
File "/crm/maintenance/maintenance.py", line 6, in <module>
from crm.sms_system.answer_send import AnswerSender
ImportError: No module named 'crm'
I attribute this error to the Python path remaining incorrect. The ENTRYPOINT defined in the Dockerfile has the --pythonpath=$PWD/.. flag, but this does not carry over to python3.
Instead I have tried a number of things:
In dockerfile ENV PYTHONPATH=$PWD/..
In docker-compose.yml entrypoint: ["PYTHONPATH=/..","python3", "/crm/maintenance/maintenance.py"]
In docker-compose.yml entrypoint: ["PYTHONPATH=$PWD/..","python3", "/crm/maintenance/maintenance.py"]. This does not work since the PWD is executed from where you run the docker-compose up command from not in the container.
How can I change the PYTHONPATH at run time in a container from the docker-compose file?
You need to use $$ to escape the environment variable parsing in docker-compose. Here is a sample file which worked for me:
version: '2'
services:
  python:
    image: python:3.6
    working_dir: /usr/local
    command: bash -c "PYTHONPATH=$$PWD/.. python3 -c 'import sys; print(sys.path)'"
$ docker-compose up
Recreating test_python_1
Attaching to test_python_1
python_1 | ['', '/usr', '/usr/local/lib/python36.zip', '/usr/local/lib/python3.6', '/usr/local/lib/python3.6/lib-dynload', '/usr/local/lib/python3.6/site-packages']
test_python_1 exited with code 0
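An alternative sketch, not from the answer above and assuming the layout in the question (maintenance.py at /crm/maintenance/maintenance.py importing the crm package): the script itself can put the package's parent directory on sys.path before importing, so no PYTHONPATH needs to be passed at all.
import os
import sys

# Add the directory two levels above this file (the parent of /crm) to the
# import path, so the "crm" package resolves without relying on PYTHONPATH.
sys.path.insert(0, os.path.abspath(os.path.join(os.path.dirname(__file__), "..", "..")))

from crm.sms_system.answer_send import AnswerSender  # import from the question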

Run manage.py from AWS EB Linux instance

How to run manage.py from AWS EB (Elastic Beanstalk) Linux instance?
If I run it from '/opt/python/current/app', it shows the below exception.
Traceback (most recent call last):
File "./manage.py", line 8, in <module>
from django.core.management import execute_from_command_line
ImportError: No module named django.core.management
I think it's related to virtualenv. Any hints?
How to run manage.py from the AWS Elastic Beanstalk AMI:
SSH login to Linux (eb ssh)
(optional: you may need to run sudo su - to have proper permissions)
source /opt/python/run/venv/bin/activate
source /opt/python/current/env
cd /opt/python/current/app
python manage.py <commands>
Or, you can run the command like below:
cd /opt/python/current/app
/opt/python/run/venv/bin/python manage.py <command>
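For reference, a hedged sketch of what python manage.py <command> does at the Python level (the settings module name mysite.settings is a placeholder, not taken from the original project); it also shows why the ImportError disappears once the virtualenv that actually contains Django is active:
import os

# manage.py essentially points Django at the settings module and hands the
# command-line arguments to Django's management entry point.
os.environ.setdefault("DJANGO_SETTINGS_MODULE", "mysite.settings")  # placeholder name

from django.core.management import execute_from_command_line

execute_from_command_line(["manage.py", "shell"])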
With the new version, the Python paths seem to have changed.
The app is in /var/app/current
The virtual environment is in /var/app/venv/[KEY]
So the instructions are:
SSH to the machine using eb ssh
Check the path of your environment with ls /var/app/venv/. The only folder should be the [KEY] for the next step.
Activate the environment with source /var/app/venv/[KEY]/bin/activate
Execute the command python3 /var/app/current/manage.py <command>
Of course Amazon can change it anytime.
TL;DR
This answer assumes you have installed EB CLI. Follow these steps:
Connect to your running instance using ssh.
eb ssh <environment-name>
Once you are inside your environment, load the environment variables (this is important for database configuration)
. /opt/python/current/env
If you wish you can see the environment variables using printenv.
Activate your virtual environment
source /opt/python/run/venv/bin/activate
Navigate to your project directory (this will depend on your latest deployment, so use the number of your latest deployment instead of XX)
cd /opt/python/bundle/XX/app/
Run the command you wish:
python manage.py <command_name>
Running example
Assuming that your environment name is my-env, your latest deployment number is 13, and you want to run the shell command:
eb ssh my-env # 1
. /opt/python/current/env # 2
source /opt/python/run/venv/bin/activate # 3
cd /opt/python/bundle/13/app/ # 4
python manage.py shell # 5
As of February 2022 the solution is as follows:
$ eb ssh
$ sudo su -
$ export $(cat /opt/elasticbeanstalk/deployment/env | xargs)
$ source /var/app/venv/*/bin/activate
$ python3 /var/app/current/manage.py <command name>
The export $(cat /opt/elasticbeanstalk/deployment/env | xargs) step is needed to import your environment variables if you have a database connection (most likely you will).
