I've been struggling for a while to connect to DB2 via a Python client on OS X (Mavericks). A valid option seems to be JayDeBeApi, but running the following code...
import jaydebeapi
import jpype

# start a JVM with the JDBC driver jar on the classpath
jar = '/opt/IBM/db2/V10.1/java/db2jcc4.jar'  # location of the JDBC driver jar
args = '-Djava.class.path=%s' % jar
jvm = jpype.getDefaultJVMPath()
jpype.startJVM(jvm, args)

jaydebeapi.connect('com.ibm.db2.jcc.DB2Driver',
                   'jdbc:db2://server:port/database', 'myusername', 'mypassword')
I get the following error:
Traceback (most recent call last):
File "<pyshell#67>", line 2, in <module>
'jdbc:db2://server:port/database','myusername','mypassword')
File "/Library/Python/2.7/site-packages/jaydebeapi/dbapi2.py", line 269, in connect
jconn = _jdbc_connect(jclassname, jars, libs, *driver_args)
File "/Library/Python/2.7/site-packages/jaydebeapi/dbapi2.py", line 117, in _jdbc_connect_jpype
return jpype.java.sql.DriverManager.getConnection(*driver_args)
com.ibm.db2.jcc.am.SqlSyntaxErrorExceptionPyRaisable: com.ibm.db2.jcc.am.SqlSyntaxErrorException: [jcc][t4][10205][11234][3.63.123] Null userid is not supported. ERRORCODE=-4461, SQLSTATE=42815
So basically I'm connecting to the server, but for some reason the username and password I provide aren't being used. Any idea how to pass the username and password correctly? I can't find documentation covering this exact problem, so any suggestions or tips are welcome.
Never mind... I wasn't passing the parameters as a LIST. With the following change it now works:
jaydebeapi.connect(
'com.ibm.db2.jcc.DB2Driver',
['jdbc:db2://server:port/database','myusername','mypassword']
)
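As a side note, newer jaydebeapi releases accept the driver jar directly and start the JVM themselves, which removes the jpype boilerplate. A minimal sketch, assuming such a version is installed:

import jaydebeapi

# jaydebeapi starts the JVM itself and puts the jar on the classpath;
# credentials again go in as a list
conn = jaydebeapi.connect(
    'com.ibm.db2.jcc.DB2Driver',
    'jdbc:db2://server:port/database',
    ['myusername', 'mypassword'],
    jars='/opt/IBM/db2/V10.1/java/db2jcc4.jar'
)
curs = conn.cursor()
curs.execute('SELECT 1 FROM sysibm.sysdummy1')  # trivial smoke-test query
print(curs.fetchall())
conn.close()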
I am trying to test the remote connection of a Python data-science client to SQL Server Machine Learning Services, following this guide: https://learn.microsoft.com/en-us/sql/machine-learning/python/setup-python-client-tools-sql (section 6).
Running the following script
def send_this_func_to_sql():
    from revoscalepy import RxSqlServerData, rx_import
    from pandas.tools.plotting import scatter_matrix
    import matplotlib.pyplot as plt
    import io

    # remember the scope of the variables in this func are within our SQL Server Python Runtime
    connection_string = "Driver=SQL Server;Server=localhost\instance02;Database=testmlsiris;Trusted_Connection=Yes;"

    # specify a query and load into pandas dataframe df
    sql_query = RxSqlServerData(connection_string=connection_string, sql_query="select * from iris_data")
    df = rx_import(sql_query)
    scatter_matrix(df)

    # return bytestream of image created by scatter_matrix
    buf = io.BytesIO()
    plt.savefig(buf, format="png")
    buf.seek(0)
    return buf.getvalue()
new_db_name = "testmlsiris"
connection_string = "driver={sql server};server=sqlrzs\instance02;database=%s;trusted_connection=yes;"

from revoscalepy import RxInSqlServer, rx_exec

# create a remote compute context with connection to SQL Server
sql_compute_context = RxInSqlServer(connection_string=connection_string % new_db_name)

# use rx_exec to send the function execution to SQL Server
image = rx_exec(send_this_func_to_sql, compute_context=sql_compute_context)[0]
yields the following error message, returned by rx_exec (and stored in the image variable):
connection_string: "driver={sql server};server=sqlrzs\instance02;database=testmlsiris;trusted_connection=yes;"
num_tasks: 1
execution_timeout_seconds: 0
wait: True
console_output: False
auto_cleanup: True
packages_to_load: []
description: "sqlserver"
version: "1.0"
XXX lineno: 2, opcode: 0
Traceback (most recent call last):
File "<string>", line 3, in <module>
File "E:\SQL\MSSQL15.INSTANCE02\PYTHON_SERVICES\lib\site-packages\revoscalepy\computecontext\RxInSqlServer.py", line 664, in rx_sql_satellite_pool_call
exec(inputfile.read())
File "<string>", line 34, in <module>
File "E:\SQL\MSSQL15.INSTANCE02\PYTHON_SERVICES\lib\site-packages\revoscalepy\computecontext\RxInSqlServer.py", line 886, in rx_remote_call
results = rx_resumeexecution(state_file = inputfile, patched_server_name=args["hostname"])
File "E:\SQL\MSSQL15.INSTANCE02\PYTHON_SERVICES\lib\site-packages\revoscalepy\computecontext\RxInSqlServer.py", line 135, in rx_resumeexecution
return _state["function"](**_state["args"])
File "C:\Users\username\sendtosql.py", line 2, in send_this_func_to_sql
SystemError: unknown opcode
====== sqlrzs ( process 0 ) has started run at 2022-06-29 13:47:04 W. Europe Daylight Time ======
{'local_state': {}, 'args': {}, 'function': <function send_this_func_to_sql at 0x0000020F5810F1E0>}
What is going wrong here? Line 2 in the script is just an import (which works when testing Python scripts on SQL Server directly). Any help is appreciated - thanks.
I just figured out the reason. As of today, the Python client versions offered in https://learn.microsoft.com/de-de/sql/machine-learning/python/setup-python-client-tools-sql?view=sql-server-ver15 are not the newest (revoscalepy version 9.3), while the version of Machine Learning Services running on our SQL Server is already 9.4.7.
However, the revoscalepy libraries on client and server must be the same version, otherwise deserialization fails server-side.
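One way to confirm the mismatch before sending a remote job is to print the installed package version on both machines. A quick sketch using standard package metadata (so it works even if revoscalepy doesn't expose a __version__ attribute):

# run this on the client and in the server's PYTHON_SERVICES environment
import pkg_resources
print(pkg_resources.get_distribution("revoscalepy").version)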
I want to extract all confirmed transactions from the bitcoin blockchain. I know there are repos out there (e.g. https://github.com/znort987/blockparser) but I want to write something myself for my better understanding.
I have tried the following code after downloading far more than 42 blocks, with bitcoind running (minimal example):
from bitcoin.rpc import RawProxy

proxy = RawProxy()
blockheight = 42

block = proxy.getblock(proxy.getblockhash(blockheight))
tx_list = block['tx']
for tx_id in tx_list:
    raw_tx = proxy.getrawtransaction(tx_id)
This yields the following error:
Traceback (most recent call last):
File "<stdin>", line 2, in <module>
File "/home/donkeykong/.local/lib/python3.7/site-packages/bitcoin/rpc.py", line 315, in <lambda>
f = lambda *args: self._call(name, *args)
File "/home/donkeykong/.local/lib/python3.7/site-packages/bitcoin/rpc.py", line 239, in _call
'message': err.get('message', 'error message not specified')})
bitcoin.rpc.InvalidAddressOrKeyError: {'code': -5, 'message': 'No such mempool transaction. Use -txindex or provide a block hash to enable blockchain transaction queries. Use gettransaction for wallet transactions.'}
Could anyone elucidate what I am misunderstanding?
For reproduction:
python version: Python 3.7.3
I installed the bitcoin.rpc via: pip3 install python-bitcoinlib
bitcoin-cli version: Bitcoin Core RPC client version v0.20.1
Running the client as bitcoind -txindex solves the problem, as it maintains the full transaction index. I should have paid more attention to the error message...
Excerpt from bitcoind --help:
-txindex
Maintain a full transaction index, used by the getrawtransaction rpc
call (default: 0)
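As an aside, the error message also hints at an alternative that avoids reindexing: supply the block hash along with the txid, which Bitcoin Core 0.20 supports as getrawtransaction "txid" ( verbose "blockhash" ). An untested sketch; it relies on RawProxy forwarding its arguments verbatim to the JSON-RPC call:

block_hash = proxy.getblockhash(blockheight)
block = proxy.getblock(block_hash)
for tx_id in block['tx']:
    # verbose=True returns the decoded transaction as a dict
    raw_tx = proxy.getrawtransaction(tx_id, True, block_hash)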
I am trying to connect to Apache Drill from Python using the jaydebeapi library.
I have started Drill in embedded mode via drill-embedded, and the web UI runs correctly on port 8047. I am then trying to connect via JDBC from a Python script:
import jaydebeapi
import jpype
import os

DRILL_HOME = os.environ["DRILL_HOME"]
classpath = DRILL_HOME + "/jars/jdbc-driver/drill-jdbc-all-1.17.0.jar"
jpype.startJVM(jpype.getDefaultJVMPath(), "-Djava.class.path=%s" % classpath)

conn = jaydebeapi.connect(
    'org.apache.drill.jdbc.Driver',
    'jdbc:drill:drillbit=localhost:8047'
)
but I get this error
SLF4J: Failed to load class "org.slf4j.impl.StaticLoggerBinder".
SLF4J: Defaulting to no-operation (NOP) logger implementation
SLF4J: See http://www.slf4j.org/codes.html#StaticLoggerBinder for further details.
Traceback (most recent call last):
File "jaydebe_drill.py", line 10, in <module>
'jdbc:drill:drillbit=localhost:8047'
File "/Users/user/opt/anaconda3/lib/python3.7/site-packages/jaydebeapi/__init__.py", line 412,
in connect
jconn = _jdbc_connect(jclassname, url, driver_args, jars, libs)
File "/Users/user/opt/anaconda3/lib/python3.7/site-packages/jaydebeapi/__init__.py", line 230,
in _jdbc_connect_jpype
return jpype.java.sql.DriverManager.getConnection(url, *dargs)
jpype._jexception.SQLNonTransientConnectionExceptionPyRaisable:
java.sql.SQLNonTransientConnectionException:
Failure in connecting to Drill: oadd.org.apache.drill.exec.rpc.ChannelClosedException:
Channel closed /127.0.0.1:62244 <--> localhost/127.0.0.1:8047.
Does anyone know how to solve this issue?
Thanks to @Luke Woodward's suggestion, the problem was the port: for drill-embedded there is no port to specify. Below is a full query example:
import jaydebeapi
import jpype
import os
import pandas as pd

DRILL_HOME = os.environ["DRILL_HOME"]
classpath = DRILL_HOME + "/jars/jdbc-driver/drill-jdbc-all-1.17.0.jar"
jpype.startJVM(jpype.getDefaultJVMPath(), "-Djava.class.path=%s" % classpath)

conn = jaydebeapi.connect(
    'org.apache.drill.jdbc.Driver',
    'jdbc:drill:drillbit=localhost'
)
cursor = conn.cursor()

query = """
SELECT *
FROM dfs.`/Users/user/data.parquet`
LIMIT 1
"""

cursor.execute(query)
columns = [c[0] for c in cursor.description]
data = cursor.fetchall()
df = pd.DataFrame(data, columns=columns)
df.head()
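One small addition: since the jaydebeapi connection wraps JDBC resources, it's worth closing the cursor and connection explicitly when you're done:

cursor.close()
conn.close()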
I am updating data on a Neo4j server using Python (2.7.6) and Py2Neo (1.6.4). My load function is:
from py2neo import neo4j, node, rel, cypher

session = cypher.Session('http://my_neo4j_server.com.mine:7474')

def load_data():
    tx = session.create_transaction()
    for row in dataframe.iterrows():  # dataframe is a pandas dataframe
        name = row[1].name
        id = row[1].id
        merge_query = "MERGE (a:label {name:'%s', name_var:'%s'}) " % (id, name)
        tx.append(merge_query)
    tx.commit()
When I execute this from Spyder on Windows it works great. All the data from the dataframe is committed to Neo4j and visible in the graph. However, when I run this from a Linux server (different from the Neo4j server) I get the following error at tx.commit(). Note that I have the same versions of Python and py2neo.
INFO:py2neo.packages.httpstream.http:>>> POST http://neo4j1.qs:7474/db/data/transaction/commit [1360120]
INFO:py2neo.packages.httpstream.http:<<< 200 OK [chunked]
ERROR:__main__:some part of process failed
Traceback (most recent call last):
File "my_file.py", line 132, in load_data
tx.commit()
File "/usr/local/lib/python2.7/site-packages/py2neo/cypher.py", line 242, in commit
return self._post(self._commit or self._begin_commit)
File "/usr/local/lib/python2.7/site-packages/py2neo/cypher.py", line 208, in _post
j = rs.json
File "/usr/local/lib/python2.7/site-packages/py2neo/packages/httpstream/http.py", line 563, in json
return json.loads(self.read().decode(self.encoding))
File "/usr/local/lib/python2.7/site-packages/py2neo/packages/httpstream/http.py", line 634, in read
data = self._response.read()
File "/usr/local/lib/python2.7/httplib.py", line 543, in read
return self._read_chunked(amt)
File "/usr/local/lib/python2.7/httplib.py", line 597, in _read_chunked
raise IncompleteRead(''.join(value))
IncompleteRead: IncompleteRead(128135 bytes read)
This post (IncompleteRead using httplib) suggests that this is an httplib error. I am not sure how to handle it, since I am not calling httplib directly.
Any suggestions for getting this load to work on Linux, or for what the IncompleteRead error message means?
UPDATE:
The IncompleteRead error is being caused by a Neo4j error being returned. The line returned in _read_chunked that is causing the error is:
pe}"}]}],"errors":[{"code":"Neo.TransientError.Network.UnknownFailure"
Neo4j docs say this is an unknown network error.
Although I can't say for sure, this implies some kind of local network issue between client and server rather than a bug within the library. Py2neo wraps httplib (which is pretty solid itself) and, from the stack trace, it looks as though the client is expecting more chunks from a chunked response.
To diagnose further, you could make some curl calls from your Linux application server to your database server and see what succeeds and what doesn't. If that works, try writing a quick and dirty python script to make the same calls with httplib directly.
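For the second step, a quick and dirty sketch along those lines (Python 2.7, assuming a Neo4j 2.x server with the standard transactional endpoint):

import httplib
import json

# POST a trivial Cypher statement straight at the transactional endpoint,
# bypassing py2neo entirely, to see whether the chunked response completes
conn = httplib.HTTPConnection("my_neo4j_server.com.mine", 7474)
payload = json.dumps({"statements": [{"statement": "RETURN 1"}]})
conn.request("POST", "/db/data/transaction/commit", payload,
             {"Content-Type": "application/json", "Accept": "application/json"})
resp = conn.getresponse()
print resp.status, resp.reason
print resp.read()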
UPDATE 1: Given the update above and the fact that the server streams its responses, I'm thinking that the chunk size might represent the intended payload but the error cuts the response short. Recreating the issue with curl certainly seems like the best next step to help determine whether it is a fault in the driver, the server or something else.
UPDATE 2: Looking again this morning, I notice that you're using Python substitution for the properties within the MERGE statement. As good practice, you should use parameter substitution at the Cypher level:
merge_query = "MERGE (a:label {name:{name}, name_var:{name_var}})"
merge_params = {"name": id, "name_var": name}
tx.append(merge_query, merge_params)
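Applied to your loop, a minimal sketch:

def load_data():
    tx = session.create_transaction()
    for row in dataframe.iterrows():
        # let the server substitute the values instead of building the string in Python
        merge_query = "MERGE (a:label {name:{name}, name_var:{name_var}})"
        merge_params = {"name": row[1].id, "name_var": row[1].name}
        tx.append(merge_query, merge_params)
    tx.commit()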
I'm trying to write some Python3 to interface with the backend PostgreSQL server on a Raritan Power IQ (http://www.raritan.com/products/power-management/power-iq/) system.
I've used pgAdminIII to connect to the server, and it connects fine with my credentials. I can see the databases, as well as the schemas in each database.
I'm now using py-postgresql to attempt to script it, and I'm hitting some issues.
I use the following to connect:
import postgresql

db = postgresql.open("pq://odbcuser:password@XX.XX.XX.XX:5432/raritan")
to connect to the raritan database, using user "odbcuser" and password "password" (no, that's not the real one...lol).
It appears to connect successfully. I'm able to run some queries, e.g.
ps = db.prepare("SELECT * from pg_tables;")
ps()
manages to list all the tables/views in the "raritan" database.
However, I then try to access a specific view and it breaks. The "raritan" database has two schemas, "odbc" and "public".
I can access views from the public schema. E.g.:
ps = db.prepare("SELECT * from public.qrypwrall;")
ps()
works to an extent - I get a permission denied error, the same as I do under pgAdminIII, since my account doesn't have access to that view; but syntactically it seems fine, and it does find the table.
However, when I try to access a view under "odbc", it just breaks. E.g.:
>>> ps = db.prepare("SELECT * from odbc.Aisles;")
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "C:\Python31\lib\site-packages\postgresql\driver\pq3.py", line 2291, in prepare
ps._fini()
File "C:\Python31\lib\site-packages\postgresql\driver\pq3.py", line 1393, in _
fini
self.database._pq_complete()
File "C:\Python31\lib\site-packages\postgresql\driver\pq3.py", line 2538, in _
pq_complete
self.typio.raise_error(x.error_message, cause = getattr(x, 'exception', None
))
File "C:\Python31\lib\site-packages\postgresql\driver\pq3.py", line 471, in ra
ise_error
self.raise_server_error(error_message, **kw)
File "C:\Python31\lib\site-packages\postgresql\driver\pq3.py", line 462, in ra
ise_server_error
raise server_error
postgresql.exceptions.UndefinedTableError: relation "odbc.aisles" does not exist
CODE: 42P01
LOCATION: File 'namespace.c', line 268, in RangeVarGetRelid from SERVER
STATEMENT: [parsing]
statement_id: py:0x10ca1b0
string: SELECT * from odbc.Aisles;
CONNECTION: [idle]
client_address: 10.180.9.213/32
client_port: 2612
version:
PostgreSQL 8.3.7 on i686-redhat-linux-gnu, compiled by GCC gcc (GCC) 4.1.2 2
0071124 (Red Hat 4.1.2-42)
CONNECTOR: [IP4] pq://odbcuser:***#10.180.138.121:5432/raritan
category: None
DRIVER: postgresql.driver.pq3.Driver
However, I can access the same table (Aisles) fine under pgAdminIII, using the same credentials (and unlike public, I actually have permissions to all of these tables).
Is there any reason that py-postgresql might not see these views? Or anything you can pick out from the error messages?
I have a suspicion that it's to do with PowerIQ using mixed case for the table names (e.g. "Aisles"). However, I'm not exactly sure how to deal with these in psycopg. How exactly would I modify, say, my cursor.execute line to quote the table?
cursor.execute('SELECT * from "public.Aisles"')
also doesn't work.
Cheers,
Victor
Have you tried it this way: 'SELECT * from public."Aisles"'?
Quoting the whole thing makes it a non-qualified (no schema) table name which has a dot in it.
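In other words, applied to your execute call, each identifier gets quoted separately:

# "public" is the schema, "Aisles" the mixed-case table; quote them individually
cursor.execute('SELECT * FROM public."Aisles"')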