I can't find any information on the Internet about how to tell my Flask app which port it should use when trying to connect to Cassandra.
From their official website I got:
from flask import Flask
from flask_cqlalchemy import CQLAlchemy

app = Flask(__name__)
app.config['CASSANDRA_HOSTS'] = ['127.0.0.1']
app.config['CASSANDRA_KEYSPACE'] = "cqlengine"
db = CQLAlchemy(app)
I've tried adding the port to the host with a colon and with a comma, but neither works. Obviously, by default it tries to connect on 9042 and fails miserably.
You can set the port with the following code:
app.config['CASSANDRA_SETUP_KWARGS'] = {'port': 9043}  # use the port your Cassandra node actually listens on
The CASSANDRA_SETUP_KWARGS configuration value is passed as keyword arguments to the cassandra.cqlengine.connection.setup method. More information on that here: https://datastax.github.io/python-driver/api/cassandra/cqlengine/connection.html
You can set any Cluster option through CASSANDRA_SETUP_KWARGS. See the following documentation for which arguments are available on the Cluster object: https://datastax.github.io/python-driver/api/cassandra/cluster.html#cassandra.cluster.Cluster
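For example, a minimal sketch of passing additional Cluster arguments the same way (the port and protocol version below are illustrative assumptions, not values from your setup):

# any keyword accepted by cassandra.cluster.Cluster can go in this dict
app.config['CASSANDRA_SETUP_KWARGS'] = {
    'port': 9043,
    'protocol_version': 3,
}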
I’m looking to connect to a Milvus database I deployed on Google Kubernetes Engine.
I am running into an error in the last line of the script. I'm running the script locally.
Here's the process I followed to set up the GKE cluster: (https://milvus.io/docs/v2.0.0/gcp.md)
Here is a similar question I'm drawing from
Any thoughts on what I'm missing?
import os
from pymilvus import connections
from kubernetes import client, config
My_Kubernetes_IP = 'XX.XXX.XX.XX'
# Authenticate with GCP credentials
os.environ['GOOGLE_APPLICATION_CREDENTIALS'] = os.path.abspath('credentials.json')
# load milvus config file and connect to GKE instance
config = client.Configuration(os.path.abspath('milvus/config.yaml'))
config.host = f'https://{My_Kubernetes_IP}:19530'
client.Configuration.set_default(config)
## connect to milvus
milvus_ip = 'xx.xxx.xx.xx'
connections.connect(host=milvus_ip, port='19530')
Error:
BaseException: <BaseException: (code=2, message=Fail connecting to server on xx.xxx.xx.xx:19530. Timeout)>
If you want to connect to the Milvus instance in the k8s cluster by IP and port, you may need to forward your local port 19530 to the Milvus service. Use a command like the following:
$ kubectl port-forward service/my-release-milvus 19530
Have you checked what your Milvus external IP is?
Following the documentation, you should use kubectl get services to check which external IP is allocated for Milvus.
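Once the port-forward above is running in another terminal, a minimal sketch of the client-side connection (the host and port are just the local end of that forward):

from pymilvus import connections

# with `kubectl port-forward service/my-release-milvus 19530` running,
# the Milvus service is reachable on localhost
connections.connect(host='127.0.0.1', port='19530')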
I designed a simple website using Flask and my goal was to deploy it on Google App Engine. I started working on it locally and used Google Cloud SQL for the database. I used cloud_sql_proxy to open port 3306 and interact with my Cloud SQL instance, and it works fine locally. This is the way I'm connecting my application to Cloud SQL:
I have an app.yaml file in which I've defined my global variables:
env_variables:
  CLOUDSQL_SERVER: '127.0.0.1'
  CLOUDSQL_CONNECTION_NAME: 'myProjectName:us-central1:project'
  CLOUDSQL_USER: 'user'
  CLOUDSQL_PASSWORD: 'myPassword'
  CLOUDSQL_PORT: 3306
  CLOUDSQL_DATABASE: 'database'
and from my local machine I do:
db = MySQLdb.connect(CLOUDSQL_SERVER, CLOUDSQL_USER, CLOUDSQL_PASSWORD, CLOUDSQL_DATABASE, CLOUDSQL_PORT)
and if I want to get connected on App Engine, I do:
cloudsql_unix_socket = os.path.join('/cloudsql', CLOUDSQL_CONNECTION_NAME)
db = MySQLdb.connect(unix_socket=cloudsql_unix_socket, user=CLOUDSQL_USER, passwd=CLOUDSQL_PASSWORD, db=CLOUDSQL_DATABASE)
The static part of the website is running, but when I want to log in with a username and password stored in Cloud SQL, for example, I receive an internal error.
I tried another way: I started a Compute Engine instance, defined my global variables in config.py, and installed Flask, MySQLdb and everything else needed to start my application. I also used cloud_sql_proxy on that Compute Engine instance and tried this syntax to connect to the Cloud SQL instance:
db = MySQLdb.connect(CLOUDSQL_SERVER, CLOUDSQL_USER, CLOUDSQL_PASSWORD, CLOUDSQL_DATABASE, CLOUDSQL_PORT)
but it had the same problem. I don't think it's a permission issue, as I added my Compute Engine instance's IP address to the authorized networks of Cloud SQL, and in IAM & Admin the myprojectname@appspot.gserviceaccount.com account has the Editor role!
Can anyone help me figure out where the problem is?
Alright! I solved the problem. I followed the Google Cloud documentation but still had problems. I added a simple '/' in:
cloudsql_unix_socket = os.path.join('/cloudsql', CLOUDSQL_CONNECTION_NAME)
Instead of '/cloudsql' it should be '/cloudsql/'.
I know it's weird because os.path.join should add the '/' to the path, but for strange reasons which I don't know, it wasn't doing so.
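A minimal sketch of the resulting connection code, assuming the same variable names as in the question (the trailing slash is the fix described above):

import os
import MySQLdb

# note the trailing slash in '/cloudsql/'
cloudsql_unix_socket = os.path.join('/cloudsql/', CLOUDSQL_CONNECTION_NAME)
db = MySQLdb.connect(unix_socket=cloudsql_unix_socket,
                     user=CLOUDSQL_USER,
                     passwd=CLOUDSQL_PASSWORD,
                     db=CLOUDSQL_DATABASE)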
I have a frontend server that is reachable over the internet, and a database server that is only available on the local network that both servers are in.
I need Fabric to create a new database on the database server, but since the database server is not reachable from the internet, I need to "proxy" through the frontend server to call tasks on the database server.
How can I do that?
I searched for the answer for a few hours, but of course I only found it after asking about it here on Stack Overflow.
The solution is to set the frontend server which is available through the internet as the gateway, either using the --gateway|-g flag in the command line, or by setting env.gateway.
I use this in combination with the env.roledefs property and fabric.api.roles to execute some tasks on the database server.
The solution roughly looks like this:
from fabric.api import task, env, roles, run

env.gateway = 'frontend.server'
env.hosts = ['frontend.server']
env.roledefs = {'db': ['database.server']}

@task
@roles('db')
def create_database():
    """ Run on the database server. """
    run(... mysql create database query stuff ...)

@task
def who_am_i():
    """ Run on the frontend server. """
    run('who am i')
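With a fabfile like this, the tasks would be invoked roughly as follows (the -g flag is only needed when env.gateway is not already set in the fabfile):
$ fab who_am_i create_database
$ fab -g frontend.server create_database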
I am trying to connect to eDirectory using Python. It is not as easy as connecting to Active Directory with Python, so I am wondering if this is even possible. I am currently running Python 3.4.
I'm the author of ldap3; I use eDirectory for testing the library.
Just try the following code:
from ldap3 import Server, Connection, ALL, SUBTREE

server = Server('your_server_name', get_info=ALL)  # don't use get_info if you don't need info on the server and the schema
connection = Connection(server, 'your_user_name_dn', 'your_password')
connection.bind()
if connection.search('your_search_base', '(objectClass=*)', SUBTREE, attributes=['cn', 'objectClass', 'your_attribute']):
    for entry in connection.entries:
        print(entry.entry_get_dn())
        print(entry.cn, entry.objectClass, entry.your_attribute)
connection.unbind()
If you need a secure connection just change the server definition to:
server = Server('your_server_name', get_info=ALL, use_tls=True) # default tls configuration on port 636
Also, any example in the docs at https://ldap3.readthedocs.org/en/latest/quicktour.html should work with eDirectory.
Bye,
Giovanni
I am new to MongoDB and I am trying to connect to it remotely (from my local system to the live db), and it connects successfully. I have admin users in the admin database and want that no one can access my database without authentication. But when I connect to MongoDB remotely via the code below, I can access any db even without authenticating:
from pymongo import MongoClient, Connection
c = MongoClient('myip', 27017)
a = c.mydb.testData.find()
In my config file, the parameter auth is set to True (auth = True). But still no authentication is needed to access my db. Can anyone please let me know what I am missing here?
Based on your description I would guess you haven't actually enabled authentication. In order to enable authentication you must start the Mongo server with certain settings. You can find more information below:
http://docs.mongodb.org/manual/tutorial/enable-authentication/
Basically you need to run with --auth in order to enable authentication.
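Once mongod is started with --auth and an admin user has been created, a minimal sketch of an authenticated connection from pymongo (the user name and password below are placeholders, not values from your setup):

from pymongo import MongoClient

# authenticate against the admin database; replace the placeholders with real values
c = MongoClient('mongodb://admin_user:admin_password@myip:27017/?authSource=admin')
a = c.mydb.testData.find()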