Connecting to Milvus Database through Google Kubernetes Engine and Python

I’m looking to connect to a Milvus database I deployed on Google Kubernetes Engine.
I am running into an error in the last line of the script. I'm running the script locally.
Here's the process I followed to set up the GKE cluster: (https://milvus.io/docs/v2.0.0/gcp.md)
Here is a similar question I'm drawing from.
Any thoughts on what I'm missing?
import os
from pymilvus import connections
from kubernetes import client, config
My_Kubernetes_IP = 'XX.XXX.XX.XX'
# Authenticate with GCP credentials
os.environ['GOOGLE_APPLICATION_CREDENTIALS'] = os.path.abspath('credentials.json')
# load milvus config file and connect to GKE instance
config = client.Configuration(os.path.abspath('milvus/config.yaml'))
config.host = f'https://{My_Kubernetes_IP}:19530'
client.Configuration.set_default(config)
## connect to milvus
milvus_ip = 'xx.xxx.xx.xx'
connections.connect(host=milvus_ip, port='19530')
Error:
BaseException: <BaseException: (code=2, message=Fail connecting to server on xx.xxx.xx.xx:19530. Timeout)>

If you want to connect to Milvus in the k8s cluster by IP and port, you may need to forward your local port 19530 to the Milvus service. Use a command like the following:
$ kubectl port-forward service/my-release-milvus 19530
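With the port-forward running, the script can then connect through localhost instead of the external IP; a minimal sketch of the connecting side:

from pymilvus import connections

# The port-forward above makes the Milvus service reachable on localhost:19530.
connections.connect(host='127.0.0.1', port='19530')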

Have you checked what your Milvus external IP is?
Following the documentation's instructions, you should use kubectl get services to check which external IP is allocated for Milvus.

Related

AWS RDS Proxy error (postgres) - RDS Proxy currently doesn’t support command-line options

I'm trying to read from or write to an AWS RDS Proxy with a Postgres RDS as the endpoint.
The operation works with psql but fails on the same client with pg8000 or psycopg2 as the client library in Python.
The operation works with pg8000 and psycopg2 if I use the RDS directly as the endpoint (without the RDS Proxy).
sqlalchemy/psycopg2 error message:
Feature not supported: RDS Proxy currently doesn’t support command-line options.
A minimal version of the code I use:
from sqlalchemy import create_engine
import os
from dotenv import load_dotenv
load_dotenv()
login_string = os.environ['login_string_proxy']
engine = create_engine(login_string, client_encoding="utf8", echo=True, connect_args={'options': '-csearch_path={}'.format("testing")})
engine.execute(f"INSERT INTO testing.mytable (product) VALUES ('123')")
pg8000: the place where it stops and waits for something is in core.py:
def sock_read(b):
    try:
        return self._sock.read(b)
    except OSError as e:
        raise InterfaceError("network error on read") from e
A minimal version of the code I use:
import pg8000
import os
from dotenv import load_dotenv
load_dotenv()
db_connection = pg8000.connect(database=os.environ['database'], host=os.environ['host'], port=os.environ['port'], user=os.environ['user'], password=os.environ['password'])
db_connection.run(f"INSERT INTO mytable (data) VALUES ('data')")
db_connection.commit()
db_connection.close()
The logs in the RDS Proxy always look normal for all the examples I mentioned, e.g.:
A new client connected from ...:60614.
Received Startup Message: [username="", database="", protocolMajorVersion=3, protocolMinorVersion=0, sslEnabled=false]
Proxy authentication with PostgreSQL native password authentication succeeded for user "" with TLS off.
A TCP connection was established from the proxy at ...:42795 to the database at ...:5432.
The new database connection successfully authenticated with TLS off.
I opened up all ports via security groups on the RDS and the RDS proxy and I used an EC2 inside the VPC.
I tried with autocommit on and off.
The "command-line option" being referred to is the -csearch_path={}.
Remove that, and then, once the connection is established, execute set search_path = whatever as your first query.
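In the sqlalchemy/psycopg2 case from the question, that means dropping the connect_args options and issuing the search_path statement yourself; a minimal sketch reusing the question's login_string_proxy variable:

from sqlalchemy import create_engine
import os
from dotenv import load_dotenv

load_dotenv()
login_string = os.environ['login_string_proxy']

# No connect_args={'options': ...}: RDS Proxy rejects startup command-line options.
engine = create_engine(login_string, client_encoding="utf8", echo=True)
with engine.connect() as conn:
    conn.execute("set search_path = testing")  # first query on the connection
    conn.execute("INSERT INTO testing.mytable (product) VALUES ('123')")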
This is a known issue: pg8000 can't connect to an AWS RDS Proxy (Postgres). I opened a PR, https://github.com/tlocke/pg8000/pull/72; let's see if Tony Locke (the father of pg8000) approves the change. (If not, you have to change these lines of core.py yourself: https://github.com/tlocke/pg8000/pull/72/files)
self._write(FLUSH_MSG)
if (code != PASSWORD):
    self._write(FLUSH_MSG)

How to connect to my k8s app service through the Python Kubernetes client?

Context
I have an application which uses a service running in my kubernetes cluster.
apiVersion: v1
kind: Service
metadata:
  name: rabbitmq
...
spec:
  ports:
  - port: 5672
...
$ kubectl get services
NAME       TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)    AGE
...
rabbitmq   ClusterIP   10.105.0.215   <none>        5672/TCP   25h
That application also has a client (Python) which at some point needs to connect to that service (for example, using pika). Of course, the client is running outside the cluster, but on a machine with a kubectl configuration.
I would like to design the code of the "client" module as if it were inside the cluster (or similar):
import pika

host = 'rabbitmq'
port = 5672

class AMQPClient(object):
    def __init__(self):
        """Creates a connection with an AMQP broker."""
        self.parameters = pika.ConnectionParameters(host=host, port=port)
        self.connection = pika.BlockingConnection(self.parameters)
        self.channel = self.connection.channel()
Issue
When I run the code I get the following error:
$ ./client_fun some_arguments
2020-09-18 09:36:31,137 - ERROR - Address resolution failed: gaierror(-3, 'Temporary failure in name resolution')
Of course: "rabbitmq" is not on my network but on the k8s cluster network.
However, as the Kubernetes Python client uses a proxy interface, according to manually-constructing-apiserver-proxy-urls it should be possible to access the service using a URL similar to this:
host = 'https://xxx.xxx.xxx.xxx:6443/api/v1/namespaces/default/services/rabbitmq/proxy'
This is not working, so something else is missing.
In theory, when using kubectl, the cluster is accessed. So maybe there is an easy way for my application to access the rabbitmq service without using a NodePort.
Note the following:
The service does not necessarily use the HTTP/HTTPS protocol.
The IP of the cluster might be different, so the proxy URL cannot be hardcoded. A Kubernetes Python client function should be used to get the IP and port, similar to kubectl cluster-info (see manually-constructing-apiserver-proxy-urls).
Port-forwarding to the internal service might be a perfect solution (see forward-a-local-port-to-a-port-on-the-pod and the sketch below).
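For example, a sketch of that port-forwarding approach (launching kubectl from Python is just one way to set up the tunnel; the service name and port come from the manifest above):

import subprocess
import time

import pika

# Forward local port 5672 to the rabbitmq service inside the cluster.
pf = subprocess.Popen(['kubectl', 'port-forward', 'service/rabbitmq', '5672:5672'])
time.sleep(2)  # crude wait for the tunnel to come up

parameters = pika.ConnectionParameters(host='127.0.0.1', port=5672)
connection = pika.BlockingConnection(parameters)
channel = connection.channel()
# ... use the channel exactly as if the client were inside the cluster ...
connection.close()
pf.terminate()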
If you want to use the Python client you need the python package: https://github.com/kubernetes-client/python
In this package, you can find how to connect to k8s.
If you want to use the k8s API itself, you need a k8s token. The k8s API doc is useful, and you can also see what kubectl does in detail by using -v 9, like: kubectl get ns -v 9
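For instance, a minimal sketch with that package (service name and namespace taken from the question; note that the ClusterIP it returns is still only reachable from inside the cluster):

from kubernetes import client, config

# Uses the same kubeconfig (and credentials) as kubectl.
config.load_kube_config()
v1 = client.CoreV1Api()
svc = v1.read_namespaced_service(name='rabbitmq', namespace='default')
print(svc.spec.cluster_ip, svc.spec.ports[0].port)  # e.g. 10.105.0.215 5672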
I would suggest not accessing rabbitmq through the kubernetes API server as a proxy. It introduces load on the kubernetes API server, and the API server becomes a point of failure.
I would instead expose rabbitmq using a LoadBalancer type service. If you are not in a supported cloud (AWS, Azure, etc.) environment, you could use MetalLB as a load balancer implementation.

Not able to connect App Engine to Cloud SQL for MySQL instance

sqlalchemy.exc.OperationalError: (pymysql.err.OperationalError) (2003, "Can't connect to MySQL server on 'localhost' ([Errno 2] No such file or directory)") (Background on this error at: http://sqlalche.me/e/e3q8)
I've been stuck at this error for a long time. I am trying to connect my App Engine Python code to a Cloud SQL for MySQL instance. This is the first time I am working with Google Cloud. Below is the code I have written.
1. in app.yaml:
runtime: python37
vpc_access_connector:
  name: "projects/projectnameandcode/locations/location name/connectors/connector name"
2. in requirements.txt:
sqlalchemy
pymysql
3. in main.py:
import sqlalchemy
import pymysql

db = sqlalchemy.create_engine(
    sqlalchemy.engine.url.URL(
        drivername="mysql+pymysql",
        username="username",
        password=password,
        database="databasename",
        query={"unix_socket": "/cloudsql/{}".format("instance name")},
    ),
)
a = db.connect()
Why am I facing this problem? My IAM roles are owner or admin.
App Engine standard environments do not support connecting to the Cloud SQL instance using TCP. Your code should not try to access the instance using an IP address (such as 127.0.0.1 or 172.17.0.1) unless you have configured Serverless VPC Access.
From your question I understand that you are using vpc_access_connector. Therefore I assume that you configured Serverless VPC Access.
The code used in main.py is for connecting to a Cloud SQL instance using a unix domain socket, not TCP.
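If you do want the TCP form through the Serverless VPC Access connector instead, a minimal sketch (the private IP 10.0.0.3 is a placeholder for your instance's private address):

import sqlalchemy

db = sqlalchemy.create_engine(
    sqlalchemy.engine.url.URL(
        drivername="mysql+pymysql",
        username="username",
        password="password",
        host="10.0.0.3",  # placeholder: the Cloud SQL instance's private IP
        port=3306,
        database="databasename",
    ),
)
conn = db.connect()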
EDIT:
CONNECTING FROM APP ENGINE TO CLOUD SQL USING TCP AND UNIX DOMAIN SOCKETS
1. Create a new project:
gcloud projects create con-ae-to-sql
gcloud config set project con-ae-to-sql
gcloud projects describe con-ae-to-sql
2. Enable billing on your project: https://cloud.google.com/billing/docs/how-to/modify-project
3. Run the following gcloud command to enable App Engine and create the associated application resources:
gcloud app create --region=europe-west2
gcloud app describe
#Remember the location of your App Engine application, because we will create all our resources in the same region
4. Set the compute project-info metadata:
gcloud compute project-info describe --project con-ae-to-sql
#Enable the API; you can check that google-compute-default-region and google-compute-default-zone are not set. Then set the metadata.
gcloud compute project-info add-metadata --metadata google-compute-default-region=europe-west2,google-compute-default-zone=europe-west2-b
5. Enable the Service Networking API:
gcloud services list --available
gcloud services enable servicenetworking.googleapis.com
6. Create 2 Cloud SQL instances (one with internal IP and one with public IP) - https://cloud.google.com/sql/docs/mysql/create-instance:
6.a Cloud SQL instance with external IP:
#Create the sql instance in the same region as App Engine Application
gcloud --project=con-ae-to-sql beta sql instances create database-external --region=europe-west2
#Set the password for the "root@%" MySQL user:
gcloud sql users set-password root --host=% --instance database-external --password root
#Create a user
gcloud sql users create user_name --host=% --instance=database-external --password=user_password
#Create a database
gcloud sql databases create user_database --instance=database-external
gcloud sql databases list --instance=database-external
6.b Cloud SQL instance with internal IP:
i. #Create a private connection to Google so that the VM instances in the default VPC network can use private services access to reach Google services that support it.
gcloud compute addresses create google-managed-services-my-network --global --purpose=VPC_PEERING --prefix-length=16 --description="peering range for Google" --network=default --project=con-ae-to-sql
gcloud services vpc-peerings connect --service=servicenetworking.googleapis.com --ranges=google-managed-services-my-network --network=default --project=con-ae-to-sql
#Check whether the operation was successful.
gcloud services vpc-peerings operations describe --name=operations/pssn.dacc3510-ebc6-40bd-a07b-8c79c1f4fa9a
#Listing private connections
gcloud services vpc-peerings list --network=default --project=con-ae-to-sql
ii. Create the instance:
gcloud --project=con-ae-to-sql beta sql instances create database-ipinternal --network=default --no-assign-ip --region=europe-west2
#Set the password for the "root@%" MySQL user:
gcloud sql users set-password root --host=% --instance database-ipinternal --password root
#Create a user
gcloud sql users create user_name --host=% --instance=database-ipinternal --password=user_password
#Create a database
gcloud sql databases create user_database --instance=database-ipinternal
gcloud sql databases list --instance=database-ipinternal
gcloud sql instances list
gcloud sql instances describe database-external
gcloud sql instances describe database-ipinternal
#Remember the instances connectionName
OK, so we have two mysql instances, we will connect from App Engine Standard to database-ipinternal using Serverless Access and TCP, from App Engine Standard to database-external using unix domain socket, from App Engine Flex to database-ipinternal using TCP, and from App Engine Flex to database-external using unix domain socket.
7. Enable the Cloud SQL Admin API:
gcloud services list --available
gcloud services enable sqladmin.googleapis.com
8. At this time App Engine standard environments do not support connecting to the Cloud SQL instance using TCP. Your code should not try to access the instance using an IP address (such as 127.0.0.1 or 172.17.0.1) unless you have configured Serverless VPC Access. So let's configure Serverless VPC Access.
8.a Ensure the Serverless VPC Access API is enabled for your project:
gcloud services enable vpcaccess.googleapis.com
8.b Create a connector:
gcloud compute networks vpc-access connectors create serverless-connector --network default --region europe-west2 --range 10.10.0.0/28
#Verify that your connector is in the READY state before using it
gcloud compute networks vpc-access connectors describe serverless-connector --region europe-west2
9. App Engine uses a service account to authorize your connections to Cloud SQL. This service account must have the correct IAM permissions to successfully connect. Unless otherwise configured, the default service account is in the format service-PROJECT_NUMBER@gae-api-prod.google.com.iam.gserviceaccount.com. Ensure that the service account for your service has the Cloud SQL Client IAM role; for connecting from App Engine Standard to Cloud SQL on an internal IP we also need the Compute Network User role.
gcloud iam service-accounts list
gcloud projects add-iam-policy-binding con-ae-to-sql --member serviceAccount:con-ae-to-sql@appspot.gserviceaccount.com --role roles/cloudsql.client
gcloud projects add-iam-policy-binding con-ae-to-sql --member serviceAccount:con-ae-to-sql@appspot.gserviceaccount.com --role roles/compute.networkUser
Now that the setup is configured:
1. Connect from App Engine Standard to Cloud SQL using TCP and unix domain sockets
cd app-engine-standard/
ls
#app.yaml main.py requirements.txt
cat requirements.txt
Flask==1.1.1
sqlalchemy
pymysql
uwsgi==2.0.18
cat app.yaml
runtime: python37
entrypoint: uwsgi --http-socket :8080 --wsgi-file main.py --callable app --master --processes 1 --threads 2
vpc_access_connector:
  name: "projects/con-ae-to-sql/locations/europe-west2/connectors/serverless-connector"
cat main.py
from flask import Flask
import pymysql
from sqlalchemy import create_engine
# If `entrypoint` is not defined in app.yaml, App Engine will look for an app
# called `app` in `main.py`.
app = Flask(__name__)
@app.route('/')
def hello():
    engine_tcp = create_engine('mysql+pymysql://user_name:user_password@internal-ip-of-database-ipinternal:3306')
    existing_databases_tcp = engine_tcp.execute("SHOW DATABASES;")
    con_tcp = "Connecting from APP Engine Standard to Cloud SQL using TCP: databases => " + str([d[0] for d in existing_databases_tcp]).strip('[]') + "\n"
    engine_unix_socket = create_engine('mysql+pymysql://user_name:user_password@/user_database?unix_socket=/cloudsql/con-ae-to-sql:europe-west2:database-external')
    existing_databases_unix_socket = engine_unix_socket.execute("SHOW DATABASES;")
    con_unix_socket = "Connecting from APP Engine Standard to Cloud SQL using Unix Sockets: tables in sys database: => " + str([d[0] for d in existing_databases_unix_socket]).strip('[]') + "\n"
    return con_tcp + con_unix_socket
gcloud app deploy -q
gcloud app browse
#Go to https://con-ae-to-sql.appspot.com
#Connecting from APP Engine Standard to Cloud SQL using TCP: databases => 'information_schema', 'user_database', 'mysql', 'performance_schema', 'sys' Connecting from APP Engine Standard to Cloud SQL using Unix Sockets: tables in sys database: => 'information_schema', 'user_database', 'mysql', 'performance_schema', 'sys'
SUCCESS!
2. Connect from App Engine Flex to Cloud SQL using TCP and unix domain sockets
cd app-engine-flex/
ls
#app.yaml main.py requirements.txt
cat requirements.txt
Flask==1.1.1
gunicorn==19.9.0
sqlalchemy
pymysql
cat app.yaml
runtime: python
env: flex
entrypoint: gunicorn -b :$PORT main:app
runtime_config:
  python_version: 3
#Using TCP and unix domain sockets
beta_settings:
  cloud_sql_instances: con-ae-to-sql:europe-west2:database-ipinternal=tcp:3306,con-ae-to-sql:europe-west2:database-external
manual_scaling:
  instances: 1
resources:
  cpu: 1
  memory_gb: 0.5
  disk_size_gb: 10
cat main.py
from flask import Flask
import pymysql
from sqlalchemy import create_engine
app = Flask(__name__)
@app.route('/')
def hello():
    engine_tcp = create_engine('mysql+pymysql://user_name:user_password@internal-ip-of-database-ipinternal:3306')
    existing_databases_tcp = engine_tcp.execute("SHOW DATABASES;")
    con_tcp = "Connecting from APP Engine Flex to Cloud SQL using TCP: databases => " + str([d[0] for d in existing_databases_tcp]).strip('[]') + "\n"
    engine_unix_socket = create_engine('mysql+pymysql://user_name:user_password@/user_database?unix_socket=/cloudsql/con-ae-to-sql:europe-west2:database-external')
    existing_databases_unix_socket = engine_unix_socket.execute("SHOW DATABASES;")
    con_unix_socket = "Connecting from APP Engine Flex to Cloud SQL using Unix Sockets: tables in sys database: => " + str([d[0] for d in existing_databases_unix_socket]).strip('[]') + "\n"
    return con_tcp + con_unix_socket
gcloud app deploy -q
gcloud app browse
#Go to https://con-ae-to-sql.appspot.com
#Connecting from APP Engine Flex to Cloud SQL using TCP: databases => 'information_schema', 'marian', 'mysql', 'performance_schema', 'sys' Connecting from APP Engine Flex to Cloud SQL using Unix Sockets: tables in sys database: => 'information_schema', 'marian', 'mysql', 'performance_schema', 'sys'
SUCCESS!
Take a quick look at the Connecting to Cloud SQL from App Engine documentation, and make sure you have followed all of the steps correctly. Specifically, make sure of the following:
The instance has a public IP
The Cloud SQL Admin API is enabled
The service account has the Cloud SQL IAM permissions

How can I connect to a remote cassandra db using Flask?

I can't find any info on the Internet about how I can tell my Flask app which port it should use when trying to connect to Cassandra.
From their official website I got:
app = Flask(__name__)
app.config['CASSANDRA_HOSTS'] = ['127.0.0.1']
app.config['CASSANDRA_KEYSPACE'] = "cqlengine"
db = CQLAlchemy(app)
I've tried adding the port to the host with a colon or a comma, and yet nothing. Obviously by default it tries to connect to 9042 and fails miserably.
You can set the port with the following code:
app.config['CASSANDRA_SETUP_KWARGS'] = {'port': 9043}  # whatever port your cluster listens on
The CASSANDRA_SETUP_KWARGS configuration value is a parameter of the cassandra.cqlengine.connection.setup method. More information on that here: https://datastax.github.io/python-driver/api/cassandra/cqlengine/connection.html
You can change any Cluster variables with the CASSANDRA_SETUP_KWARGS config. See the following documentation for what configurations are available for the Cluster object: https://datastax.github.io/python-driver/api/cassandra/cluster.html#cassandra.cluster.Cluster
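Pulling the pieces together, a minimal sketch (assuming the CQLAlchemy in the question comes from the flask-cqlalchemy package; the host and port values are placeholders):

from flask import Flask
from flask_cqlalchemy import CQLAlchemy  # assumption: the package behind CQLAlchemy

app = Flask(__name__)
app.config['CASSANDRA_HOSTS'] = ['10.0.0.5']  # placeholder: remote Cassandra host
app.config['CASSANDRA_KEYSPACE'] = "cqlengine"
app.config['CASSANDRA_SETUP_KWARGS'] = {'port': 9043}  # placeholder: non-default port
db = CQLAlchemy(app)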

How can I execute a fabric task on a host that is not directly reachable through the internet, through a proxy host?

I have a frontend server that is reachable over the internet, and a database server that is only available on the local network that both the frontend and the database server are in.
I need fabric to create a new database on the database server, but as the database server is not reachable from the internet, I need to "proxy" through the frontend server to call tasks on the database server.
How can I do that?
I searched for the answer for a few hours, but of course I only found it after asking about it here on Stack Overflow.
The solution is to set the frontend server which is available through the internet as the gateway, either using the --gateway|-g flag in the command line, or by setting env.gateway.
I use this in combination with the env.roledefs property and fabric.api.roles to execute some tasks on the database server.
The solution roughly looks like this:
from fabric.api import task, env, roles, run

env.gateway = 'frontend.server'
env.hosts = ['frontend.server']
env.roledefs = {'db': ['database.server']}

@task
@roles('db')
def create_database():
    """ Run on the database server. """
    run(... mysql create database query stuff ...)

@task
def who_am_i():
    """ Run on the frontend server. """
    run('who am i')
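With env.roledefs set up like this, running fab create_database from the command line executes the task on database.server, tunnelled through frontend.server as the gateway, while fab who_am_i runs on the frontend directly.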
