I started an Apache Ignite node on my local Mac and tried to run a Python script to see if it can connect:
import pylibmc
client = pylibmc.Client(["127.0.0.1:11211"], binary=True)
client.set("key", "val")
Got error:
Traceback (most recent call last):
File "test.py", line 14, in <module>
client.set("key", "val")
pylibmc.UnknownReadFailure: error 7 from memcached_set: (0x7fd26cc3d8d0) UNKNOWN READ FAILURE, host: 127.0.0.1:11211 -> libmemcached/response.cc:828
Does anyone know what the problem could be? Or if you have a simpler, step-by-step example of running Apache Ignite with Python, please let me know. (I tried a few examples online and none has worked so far.)
To connect to Ignite using a Python client for Memcached, you need to download Ignite and then:
1. Start an Ignite cluster with a cache configured. For example:
bin/ignite.sh examples/config/example-cache.xml
2. Connect to Ignite using a Memcached client over the binary protocol:
import pylibmc
client = pylibmc.Client(["127.0.0.1:11211"], binary=True)
client.set("key", "val")
print "Value for 'key': %s"%client.get("key")
from: https://apacheignite.readme.io/docs/memcached-support#python
Looks like you didn't pass the proper config to Ignite:
bin/ignite.sh examples/config/example-cache.xml
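Once the node is started with that example config, a minimal round-trip check like the sketch below should work (this assumes pylibmc is installed and the node's Memcached port 11211 is reachable on localhost):
import pylibmc

# The Ignite docs show Memcached access over the binary protocol, hence binary=True.
client = pylibmc.Client(["127.0.0.1:11211"], binary=True)

# Simple set/get round trip to confirm the node is reachable.
client.set("key", "val")
print(client.get("key"))  # expected output: val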
I have been trying to work with the polyglot recipe and build a simple Python processor. I followed the recipe but could not get the stream to deploy. I originally deployed the same processor used in the example and got the following errors:
Unknown command line arg requested: spring.cloud.stream.bindings.input.destination
Unknown environment variable requested: SPRING_CLOUD_STREAM_KAFKA_BINDER_BROKERS
Traceback (most recent call last):
File "/processor/python_processor.py", line 10, in
consumer = KafkaConsumer(get_input_channel(), bootstrap_servers=[get_kafka_binder_brokers()])
File "/usr/local/lib/python2.7/dist-packages/kafka/consumer/group.py", line 353, in init
self._client = KafkaClient(metrics=self._metrics, **self.config)
File "/usr/local/lib/python2.7/dist-packages/kafka/client_async.py", line 203, in init
self.cluster = ClusterMetadata(**self.config)
File "/usr/local/lib/python2.7/dist-packages/kafka/cluster.py", line 67, in init
self._bootstrap_brokers = self._generate_bootstrap_brokers()
File "/usr/local/lib/python2.7/dist-packages/kafka/cluster.py", line 71, in _generate_bootstrap_brokers
bootstrap_hosts = collect_hosts(self.config['bootstrap_servers'])
File "/usr/local/lib/python2.7/dist-packages/kafka/conn.py", line 1336, in collect_hosts
host, port, afi = get_ip_port_afi(host_port)
File "/usr/local/lib/python2.7/dist-packages/kafka/conn.py", line 1289, in get_ip_port_afi
host_and_port_str = host_and_port_str.strip()
AttributeError: 'NoneType' object has no attribute 'strip'
Exception AttributeError: "'KafkaClient' object has no attribute '_closed'" in <bound method KafkaClient.__del__ of <kafka.client_async.KafkaClient object at 0x7f8b7024cf10>> ignored
I then attempted to pass the environment and binding arguments through the stream deployment, but that did not work. As a workaround, when I manually hard-coded the SPRING_CLOUD_STREAM_KAFKA_BINDER_BROKERS and spring.cloud.stream.bindings.input.destination values into the Kafka consumer, I was able to deploy the stream. I am not entirely sure what is causing the issue: would deploying this on Kubernetes be any different, or is this an issue with polyglot support and Data Flow? Any help with this would be appreciated.
Steps to reproduce:
Attempt to deploy the polyglot-processor stream from the polyglot recipe on a local Data Flow server. I am also using the same stream definition as in the example: http --server.port=32123 | python-processor --reversestring=true | log.
Additional context:
I am attempting to deploy the stream on a local installation of SCDF and Kafka, since I had some issues deploying custom Python applications with Docker.
The recipe you have posted above expects the SPRING_CLOUD_STREAM_KAFKA_BINDER_BROKERS environment variable to be present as part of the server configuration (since the streams are managed via the Skipper server, you would need to set this environment variable in your Skipper server configuration).
You can check this documentation on how to set SPRING_CLOUD_STREAM_KAFKA_BINDER_BROKERS as an environment property in the Skipper server deployment.
You can also pass this property as a deployer property when deploying the python-processor stream app. You can refer to this documentation on how to pass a deployment property that sets the Spring Cloud Stream properties (here, the binder configuration property) at the time of stream deployment.
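For context on the original NoneType error: the recipe's get_kafka_binder_brokers() and get_input_channel() helpers ultimately resolve to values like the environment variable above, so when it is missing they hand None to kafka-python, which then fails on .strip(). A rough, illustrative sketch of what the processor effectively needs (the helper bodies and fallback values here are assumptions, not the recipe's actual code):
import os
from kafka import KafkaConsumer

def get_kafka_binder_brokers():
    # Fall back to a local broker so KafkaConsumer never receives None.
    return os.environ.get("SPRING_CLOUD_STREAM_KAFKA_BINDER_BROKERS", "localhost:9092")

def get_input_channel():
    # Illustrative: read the input binding destination from the environment,
    # with a placeholder default topic name.
    return os.environ.get("SPRING_CLOUD_STREAM_BINDINGS_INPUT_DESTINATION", "python-processor-input")

consumer = KafkaConsumer(get_input_channel(),
                         bootstrap_servers=[get_kafka_binder_brokers()])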
I have a running MLflow server on a GCP VM instance, and I have created a bucket to log the artifacts.
This is the command I'm running to start the server and specify the bucket path:
mlflow server --default-artifact-root gs://gcs_bucket/artifacts --host x.x.x.x
But I'm facing this error:
TypeError: stat: path should be string, bytes, os.PathLike or integer, not ElasticNet
Note: the MLflow server runs fine with the host specified alone; the problem only appears when I specify the storage bucket path.
I have granted Storage API permissions by using these commands:
gcloud auth application-default login
gcloud auth login
Also, on printing the artifact URI, this is what I'm getting:
mlflow.get_artifact_uri()
Output:
gs://gcs_bucket/artifacts/0/122481bf990xxxxxxxxxxxxxxxxxxxxx/artifacts
So in the above path, where is 0/122481bf990xxxxxxxxxxxxxxxxxxxxx/artifacts coming from, and why is it not getting auto-created at gs://gcs_bucket/artifacts?
After debugging further, it is also unable to resolve the local path on the VM.
This is the error I'm getting on the VM:
WARNING:root:Malformed experiment 'mlruns'. Detailed error Yaml file './mlruns/mlruns/meta.yaml' does not exist.
Traceback (most recent call last):
File "/usr/local/lib/python3.6/dist-packages/mlflow/store/tracking/file_store.py", line 197, in list_experiments
experiment = self._get_experiment(exp_id, view_type)
File "/usr/local/lib/python3.6/dist-packages/mlflow/store/tracking/file_store.py", line 256, in _get_experiment
meta = read_yaml(experiment_dir, FileStore.META_DATA_FILE_NAME)
File "/usr/local/lib/python3.6/dist-packages/mlflow/utils/file_utils.py", line 160, in read_yaml
raise MissingConfigException("Yaml file '%s' does not exist." % file_path)
mlflow.exceptions.MissingConfigException: Yaml file './mlruns/mlruns/meta.yaml' does not exist.
Can I get a solution to this? What am I missing?
I think the main error comes from the server architecture you are trying to deploy. For your use case, the suitable setup is the one described here: you are missing the URI used to store the backend metadata. Please install a SQL database (PostgreSQL, ...) first, then pass its URI via --backend-store-uri (see the sketch below).
In case you want to use MLflow as a model registry and store images on GCS, you can use the structure described here, adding the flags --artifacts-only --serve-artifacts.
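For illustration, a combined server invocation might look like the command below (the PostgreSQL URI and database name are placeholders, not values from the question, and a driver such as psycopg2 must be installed for it to work):
mlflow server \
  --backend-store-uri postgresql://<user>:<password>@<db-host>:5432/mlflow \
  --default-artifact-root gs://gcs_bucket/artifacts \
  --host x.x.x.x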
Hope this can help you.
Hello Dear StackOverflow friends,
I'm receiving a strange error when connecting to a managed MySQL instance (DigitalOcean). The connection works on my Dev computer (a Windows 8.1 machine), but not on the Prod server (CentOS 8, SELinux in permissive mode). The connection also works with MySQL Workbench.
I've run pip freeze in both environments and both report mysql-connector-python==8.0.19, which makes the difference very strange. I've made sure to run my tests with the venv activated.
The managed MySQL 8.x instance is set up to allow connections from both my droplet and my Dev IP address. I've also tried this without the firewall enabled. The managed instance requires an SSL-enabled connection, so a CA certificate is provided (I've applied chmod 777 to it for now to rule out permissions as the cause of the problem).
I've checked the documentation of the library I'm using and it's compatible with MySQL 8.
It is also worth noting that I've tried the solution in this related question.
The code is the following; it works as expected on Windows.
import datetime
import mysql.connector
from mysql.connector.constants import ClientFlag
dbconn_host = '<sanitized>'
dbconn_port = '<sanitized>'
dbconn_user = '<sanitized>'
dbconn_passwd = '<sanitized>'
dbconn_database = '<sanitized>'
cnx = mysql.connector.connect(
host=dbconn_host,
port=dbconn_port,
user=dbconn_user,
passwd=dbconn_passwd,
database=dbconn_database,
client_flags=ClientFlag.SSL,
ssl_ca='.\\ca_certificate.crt', # When running on prod server I change it to a proper Linux path
# auth_plugin='caching_sha2_password' # Trying another solution I had it changed to mysql_native_password
)
cur_a = cnx.cursor(buffered=True)
query_sel = (
"SELECT * FROM datasources"
)
cur_a.execute(query_sel)
for w in cur_a:
print(w[0])
This is the stack trace I receive in Linux.
(venv) [root@<sanitized> <sanitized>]# python -i conn-test.py
Traceback (most recent call last):
File "conn-test.py", line 12, in <module>
cnx = mysql.connector.connect(
File "/var/<sanitized>/venv/lib/python3.8/site-packages/mysql/connector/__init__.py", line 219, in connect
return MySQLConnection(*args, **kwargs)
File "/var/<sanitized>/venv/lib/python3.8/site-packages/mysql/connector/connection.py", line 104, in __init__
self.connect(**kwargs)
File "/var/<sanitized>/venv/lib/python3.8/site-packages/mysql/connector/abstracts.py", line 960, in connect
self._open_connection()
File "/var/<sanitized>/venv/lib/python3.8/site-packages/mysql/connector/connection.py", line 290, in _open_connection
self._do_auth(self._user, self._password,
File "/var/<sanitized>/venv/lib/python3.8/site-packages/mysql/connector/connection.py", line 212, in _do_auth
self._auth_switch_request(username, password)
File "/var/<sanitized>/venv/lib/python3.8/site-packages/mysql/connector/connection.py", line 256, in _auth_switch_request
raise errors.get_exception(packet)
mysql.connector.errors.DatabaseError: 1251: Client does not support authentication protocol requested by server; consider upgrading MySQL client
>>>
What do you think could be the issue here?
The magic of StackOverflow is that as soon as you post a question, you find the solution a few minutes later. Two things happened:
Half the time I didn't have network connectivity to the MySQL database.
So I ran all kinds of tests before I could even ping the server; I then realized I should run them all again, but I didn't start with the basics (I did all tests with patches applied instead of trying a "vanilla" connection first, so to speak).
The solution: I commented out client_flags=ClientFlag.SSL but left the CA certificate in place, and the connection worked as expected on the Prod server.
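For reference, a trimmed sketch of the configuration that ended up working (the placeholders stand in for the sanitized values from the question, and the certificate path is illustrative; the only functional change from the code above is that the SSL client flag is commented out while the CA file is still supplied):
import mysql.connector

cnx = mysql.connector.connect(
    host='<sanitized>',
    port='<sanitized>',
    user='<sanitized>',
    passwd='<sanitized>',
    database='<sanitized>',
    # client_flags=ClientFlag.SSL,          # commented out: this was the fix
    ssl_ca='/path/to/ca_certificate.crt',   # CA certificate still provided
)

cur = cnx.cursor(buffered=True)
cur.execute("SELECT * FROM datasources")
for row in cur:
    print(row[0])
cnx.close()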
I have a Couchbase server hosted on Amazon.
When I type this URL:
http://ec2-54-186-83-95.bla.bla.bla.com:8091/index.html
I get the page to enter the username and password.
Now I am trying to insert documents into that server remotely using Python.
I tried this:
connection = Couchbase.connect(host='http://ec2-54-186-83-95.bla.bla.bla.com:8091/index.html', bucket='data')
That statement didn't give me any exception, so I tried to insert the data like this:
connection.set('key', 'value')
I got this exception:
Traceback (most recent call last):
File "<interactive input>", line 1, in <module>
File "C:\Python27\lib\site-packages\couchbase\connection.py", line 331, in set
persist_to, replicate_to)
_TimeoutError_0x17 (generated, catch TimeoutError): <Key=u'key', RC=0x17[Client-Side timeout exceeded for operation. Inspect network conditions or increase the timeout], Operational Error, Results=1, C Source=(src\multiresult.c,282)>
Why is that happening? Should I use a different URL?
Note:
I can successfully add documents to my local Couchbase server like this:
connection = Couchbase.connect(bucket='bucketName', password='bucketPassword')
If you need any other information, please tell me.
Python 2.7, 32-bit, on 64-bit Windows
Couchbase server 2.5.1
Update
I believe I should do something with the username and password, because when I access that link from the browser I get the page where I have to enter my username and password, but I didn't specify those in the connection statement in Python. When I did specify them, I got the exact same error.
As documented in the Couchbase Python SDK Getting Started Guide, simply use the hostname(s) of your cluster nodes - i.e. in your example:
connection = Couchbase.connect(host='ec2-54-186-83-95.bla.bla.bla.com',
bucket='data')
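A minimal round trip with the same 1.x SDK might then look like this (only the hostname is passed, without the http:// scheme, port, or /index.html path; add password='...' if the bucket has one set; the key and value are illustrative):
from couchbase import Couchbase

# Connect using just the node hostname; the SDK resolves the ports itself.
connection = Couchbase.connect(host='ec2-54-186-83-95.bla.bla.bla.com',
                               bucket='data')

connection.set('key', 'value')   # store a document
result = connection.get('key')   # fetch it back
print(result.value)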
Basically I can use ZODB fine. However, the ZEO tutorials are all very confusing.
From my understanding, you start a server by going into the directory and typing at the command prompt:
python runzeo.py -C zeo.config
Where my zeo.config file is as follows:
<zeo>
address localhost:8090
</zeo>
<filestorage>
path C:\\Anaconda\\Lib\\site-packages\\ZEO\\var\\tmp\\Data.fs
</filestorage>
<eventlog>
<logfile>
path C:\\Anaconda\\Lib\\site-packages\\ZEO\\var\\tmp\\zeo.log
format %(asctime)s %(message)s
</logfile>
</eventlog>
When I run it, the log file is filled with:
2014-07-02T14:49:15 (1948) opening storage '1' using FileStorage
2014-07-02T14:49:15 StorageServer created RW with storages: 1:RW:C:\\Anaconda\\Lib\\site-packages\\ZEO\\var\\tmp\\Data.fs
2014-07-02T14:49:15 (1948) listening on ('localhost', 8090)
Now when I try to get a client to add some random data to the database, with prints after every line to see how it's going:
from ZEO.ClientStorage import ClientStorage
from ZODB import DB
import transaction
print "starting"
storage=ClientStorage(('localhost',8090))
print "storage opened"
db=DB(storage)
conn=db.open()
print "connection opened"
root=conn.root()
print "established connection"
root['letters']=['a','b','c']
print "added values"
transaction.commit()
print "transaction done"
root.close()
print "closed"
My code only prints "starting" and no error messages are thrown, so I'm assuming that it's getting stuck on the storage = ClientStorage(('localhost', 8090)) line; my Data.fs file remains unchanged. I have no idea what is wrong, and I have consulted all the tutorials.
I'm on Windows using Python 2.7 and installed ZEO / ZODB from pip, so I assume they are all up-to-date versions, if that helps.
Any help, or pointers to different object-oriented databases (with multi-process access), would be appreciated.
Thanks everyone
Found the answer to my own question. It seems there is a bug in the implementation when using localhost on Windows (running the server and client on the same machine).
The source code needs an edit:
I have the same problem (can't connect to ZEO Server) using ZODB/ZEO 4.0 with Python 2.7.6 on Windows.
The proposed solution (changing line 446 of ZEO/zrpc/client.py) works for me, so why not incorporate the patch into the 4.0 release too?
- socket.getaddrinfo(host or 'localhost', port)
+ socket.getaddrinfo(host or 'localhost', port, 0, socket.SOCK_STREAM)
From https://bugs.launchpad.net/zodb/+bug/1004513
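After applying that patch, a minimal round trip like the sketch below can confirm that the client actually connects; note that it closes the connection and database objects rather than calling close() on the root mapping, unlike the script in the question:
from ZEO.ClientStorage import ClientStorage
from ZODB import DB
import transaction

# Connect to the ZEO server started with the zeo.config shown above.
storage = ClientStorage(('localhost', 8090))
db = DB(storage)
conn = db.open()
root = conn.root()

root['letters'] = ['a', 'b', 'c']   # store some sample data
transaction.commit()

# Close the connection and the database (which also closes the storage).
conn.close()
db.close()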