Py-Postgresql and Raritan PowerIQ - Can't seem to find table? - python

I'm trying to write some Python3 to interface with the backend PostgreSQL server on a Raritan Power IQ (http://www.raritan.com/products/power-management/power-iq/) system.
I've used pgAdminIII to connect to the server, and it connects fine with my credentials. I can see the databases, as well as the schemas in each database.
I'm now using py-postgresql to attempt to script it, and I'm hitting some issues.
I use the following to connect:
postgresql.open("pq://odbcuser:password#XX.XX.XX.XX:5432/raritan")
to connect to the raritan database, using user "odbcuser" and password "password" (no, that's not the real one...lol).
It appears to connect successfully. I'm able to run some queries, e.g.
ps = db.prepare("SELECT * from pg_tables;")
ps()
manages to list all the tables/views in the "raritan" database.
However, I then try to access a specific view and it breaks. The "raritan" database has two schemas, "odbc" and "public".
I can access views from the public schema. E.g.:
ps = db.prepare("SELECT * from public.qrypwrall;")
ps()
works to an extent: I get a permission denied error, the same as I get under pgAdminIII, since my account doesn't have access to that view, but syntactically it seems fine and it does find the table.
However, when I try to access a view under "odbc", it just breaks. E.g.:
>>> ps = db.prepare("SELECT * from odbc.Aisles;")
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "C:\Python31\lib\site-packages\postgresql\driver\pq3.py", line 2291, in prepare
ps._fini()
File "C:\Python31\lib\site-packages\postgresql\driver\pq3.py", line 1393, in _fini
self.database._pq_complete()
File "C:\Python31\lib\site-packages\postgresql\driver\pq3.py", line 2538, in _pq_complete
self.typio.raise_error(x.error_message, cause = getattr(x, 'exception', None))
File "C:\Python31\lib\site-packages\postgresql\driver\pq3.py", line 471, in raise_error
self.raise_server_error(error_message, **kw)
File "C:\Python31\lib\site-packages\postgresql\driver\pq3.py", line 462, in raise_server_error
raise server_error
postgresql.exceptions.UndefinedTableError: relation "odbc.aisles" does not exist
CODE: 42P01
LOCATION: File 'namespace.c', line 268, in RangeVarGetRelid from SERVER
STATEMENT: [parsing]
statement_id: py:0x10ca1b0
string: SELECT * from odbc.Aisles;
CONNECTION: [idle]
client_address: 10.180.9.213/32
client_port: 2612
version:
PostgreSQL 8.3.7 on i686-redhat-linux-gnu, compiled by GCC gcc (GCC) 4.1.2 20071124 (Red Hat 4.1.2-42)
CONNECTOR: [IP4] pq://odbcuser:***@10.180.138.121:5432/raritan
category: None
DRIVER: postgresql.driver.pq3.Driver
However, I can access the same table (Aisles) fine under pgAdminIII, using the same credentials (and unlike public, I actually have permissions to all these tables).
Is there any reason that py-postgresql might not see these views? Or anything you can pick out from the error messages?
I have a suspicion that it's to do with PowerIQ using mixed-case table names (e.g. "Aisles"). However, I'm not exactly sure how to deal with these in psycopg. How exactly would I modify, say, my cursor.execute line to quote the table?
cursor.execute('SELECT * from "public.Aisles"')
also doesn't work.
Cheers,
Victor

Have you tried it this way: SELECT * from public."Aisles"?
Quoting the whole thing makes it a non-qualified (no schema) table name which has a dot in it.
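To spell that out with py-postgresql: PostgreSQL folds unquoted identifiers to lowercase, so a mixed-case name like Aisles must be double-quoted, and only the identifier itself, not the schema-qualified string. A minimal sketch, reusing the placeholder credentials and host from the question:

import postgresql

# Placeholders from the question; substitute the real credentials/host.
db = postgresql.open("pq://odbcuser:password@XX.XX.XX.XX:5432/raritan")

# Quote only the mixed-case identifier. Quoting the whole thing,
# as in "odbc.Aisles", would name a single table whose name contains a dot.
ps = db.prepare('SELECT * FROM odbc."Aisles"')
for row in ps():
    print(row)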

Related

Apache Superset not loading table records/columns

I am trying to add a table in Superset. Other tables get added properly, meaning their columns are fetched correctly by Superset. But for my table booking_xml, it does not load any columns. (The table's description is in an image attached to the original question.)
After adding this table, when I click on the table name to explore it, it gives the following error:
Empty query?
Traceback (most recent call last):
File "/home/superset/superset_venv/lib/python3.8/site-packages/superset/viz.py", line 473, in get_df_payload
df = self.get_df(query_obj)
File "/home/superset/superset_venv/lib/python3.8/site-packages/superset/viz.py", line 251, in get_df
self.results = self.datasource.query(query_obj)
File "/home/superset/superset_venv/lib/python3.8/site-packages/superset/connectors/sqla/models.py", line 1139, in query
query_str_ext = self.get_query_str_extended(query_obj)
File "/home/superset/superset_venv/lib/python3.8/site-packages/superset/connectors/sqla/models.py", line 656, in get_query_str_extended
sqlaq = self.get_sqla_query(**query_obj)
File "/home/superset/superset_venv/lib/python3.8/site-packages/superset/connectors/sqla/models.py", line 801, in get_sqla_query
raise Exception(_("Empty query?"))
Exception: Empty query?
ERROR:superset.viz:Empty query?
However, when I try to explore it using the SQL editor, it loads up properly. I have found a difference in the form_data parameter in the URL when loading from the tables page versus from the SQL editor.
URL from SQL Lab view:
form_data={"queryFields":{"groupby":"groupby","metrics":"metrics"},"datasource":"192__table","viz_type":"table","url_params":{},"time_range_endpoints":["inclusive","exclusive"],"granularity_sqla":"created_on","time_grain_sqla":"P1D","time_range":"Last+week","groupby":[],"metrics":["count"],"all_columns":[],"percent_metrics":[],"order_by_cols":[],"row_limit":10000,"order_desc":true,"adhoc_filters":[],"table_timestamp_format":"smart_date","color_pn":true,"show_cell_bars":true}
URL from datasets list:
form_data={"queryFields":{"groupby":"groupby","metrics":"metrics"},"datasource":"191__table","viz_type":"table","url_params":{},"time_range_endpoints":["inclusive","exclusive"],"time_grain_sqla":"P1D","time_range":"Last+week","groupby":[],"all_columns":[],"percent_metrics":[],"order_by_cols":[],"row_limit":10000,"order_desc":true,"adhoc_filters":[],"table_timestamp_format":"smart_date","color_pn":true,"show_cell_bars":true}
When loading from datasets list, /explore_json/ gives 400 Bad Request.
Superset version == 0.37.1, Python version == 3.8
Superset saves the details/metadata of each table that gets connected. My table had a column with a very long datatype, as you can see in the image in the question, but Superset stores that type string in a varchar of length 32. The metadata database therefore refused to insert the value, which caused the error, and no records were fetched even after adding the table to the datasources.
What I did was increase the length of the type column:
ALTER TABLE table_columns MODIFY type varchar(200)
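If you'd rather apply that from Python, here is a minimal sketch using SQLAlchemy; the metadata-database URI is a placeholder (MySQL assumed, to match the ALTER syntax above), so point it at wherever your Superset metadata actually lives:

from sqlalchemy import create_engine, text

# Placeholder URI for Superset's metadata database; adjust credentials/host.
engine = create_engine("mysql://superset:superset@localhost/superset")

# engine.begin() opens a transaction and commits on success.
with engine.begin() as conn:
    conn.execute(text("ALTER TABLE table_columns MODIFY type varchar(200)"))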

How can my sqlite3 interaction be fixed?

I'm trying to get an admin account to edit a 'rank' (basically an access level) for one of the profiles in my database. The error is:
Traceback (most recent call last):
File "U:/A-level Computor Science/Y12-13/SQL/sqlite/Databases/ork task/Python for SQL V_2.py", line 154, in <module>
main()
File "U:/A-level Computor Science/Y12-13/SQL/sqlite/Databases/ork task/Python for SQL V_2.py", line 9, in main
start_menu()
File "U:/A-level Computor Science/Y12-13/SQL/sqlite/Databases/ork task/Python for SQL V_2.py", line 22, in start_menu
login()
File "U:/A-level Computor Science/Y12-13/SQL/sqlite/Databases/ork task/Python for SQL V_2.py", line 72, in login
Mek_menu()
File "U:/A-level Computor Science/Y12-13/SQL/sqlite/Databases/ork task/Python for SQL V_2.py", line 108, in Mek_menu
where Uzaname = %s""" % (NewRank, Findaname))
sqlite3.OperationalError: unrecognized token: "0rk_D4T4B453"
The code that seems to be the problem is:
cursor.execute(""" update 0rk_D4T4B453.Da_Boyz
set Rank = %s
where Uzaname = %s""" % (NewRank, Findaname))
Originally, it was all on one line and it didn't work, and now I've tried it on multiple lines and it still doesn't work. So I checked here to see if anyone could help.
EDIT1: Thanks for the suggestions. None of them have fixed the code, but I've narrowed the problem code down to: where Uzaname = %s""" % (NewRank, Findaname))
Unless you use ATTACH, SQLite (a file-level database) does not recognize other databases. Server-level databases (Oracle, Postgres, SQL Server, etc.) usually use database.schema.table references, but in SQLite the database file you connect to is the main database in scope. ATTACH, however, lets you connect to other SQLite databases, after which database.table referencing is recognized.
Additionally, for best practices:
In sqlite3 and any other Python DB-API, use parameterization for literal values and do not format values into the SQL statement.
In general Python, stop using the de-emphasized (though not yet deprecated) string modulo operator, %. Use str.format or the more recent f-strings for string formatting. Neither is needed here, though.
Altogether, if you connect to the 0rk_D4T4B453 database, simply query without the database reference:
conn = sqlite3.connect('/path/to/0rk_D4T4B453.db')
cursor = conn.cursor()
# PREPARED STATEMENT WITH QMARK PLACEHOLDERS
sql = """UPDATE Da_Boyz
SET Rank = ?
WHERE Uzaname = ?"""
# BIND WITH TUPLE OF PARAMS IN SECOND ARG
cursor.execute(sql, (NewRank, Findaname))
conn.commit()
If you do connect to a different database, call ATTACH. Here you can also alias the other database with a better name instead of an identifier that starts with a number.
cursor.execute("ATTACH '/path/to/0rk_D4T4B453.db' AS other_db")
sql = """UPDATE other_db.Da_Boyz
SET Rank = ?
WHERE Uzaname = ?"""
cursor.execute(sql, (NewRank, Findaname))
conn.commit()
cur.execute("DETACH other_db")

Connect to DB2 via JayDeBeApi JDBC in Python

I've been struggling for a while to connect to DB2 via a Python client on OSX (Mavericks). A valid option seems to be JayDeBeApi but, running the following code...
import jaydebeapi
import jpype
jar = '/opt/IBM/db2/V10.1/java/db2jcc4.jar'  # location of the JDBC driver jar
args = '-Djava.class.path=%s' % jar
jvm = jpype.getDefaultJVMPath()
jpype.startJVM(jvm, args)
jaydebeapi.connect('com.ibm.db2.jcc.DB2Driver',
                   'jdbc:db2://server:port/database', 'myusername', 'mypassword')
...I get the following error:
Traceback (most recent call last):
File "<pyshell#67>", line 2, in <module>
'jdbc:db2://server:port/database','myusername','mypassword')
File "/Library/Python/2.7/site-packages/jaydebeapi/dbapi2.py", line 269, in connect
jconn = _jdbc_connect(jclassname, jars, libs, *driver_args)
File "/Library/Python/2.7/site-packages/jaydebeapi/dbapi2.py", line 117, in _jdbc_connect_jpype
return jpype.java.sql.DriverManager.getConnection(*driver_args)
com.ibm.db2.jcc.am.SqlSyntaxErrorExceptionPyRaisable: com.ibm.db2.jcc.am.SqlSyntaxErrorException: [jcc][t4][10205][11234][3.63.123] Null userid is not supported. ERRORCODE=-4461, SQLSTATE=42815
So basically I'm connecting to the server, but for some reason the username and password I provide are not being used. Any idea how to pass the username and password correctly? I can't find further documentation for this exact problem, and any suggestions or tips are welcome.
Never mind... I wasn't passing the LIST of parameters. With the following change it now works:
jaydebeapi.connect(
    'com.ibm.db2.jcc.DB2Driver',
    ['jdbc:db2://server:port/database', 'myusername', 'mypassword']
)
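For completeness, a minimal end-to-end sketch under the same assumptions (placeholder server, port, and credentials; JVM already started with the driver jar on the classpath as in the question; and the driver_args-list calling convention of the jaydebeapi version used here):

import jaydebeapi

# Assumes jpype.startJVM(...) has already run with db2jcc4.jar on the
# classpath, as in the question's setup code.
conn = jaydebeapi.connect(
    'com.ibm.db2.jcc.DB2Driver',
    ['jdbc:db2://server:port/database', 'myusername', 'mypassword']
)
try:
    cursor = conn.cursor()
    cursor.execute('SELECT 1 FROM sysibm.sysdummy1')  # trivial query to verify the session
    print(cursor.fetchall())
    cursor.close()
finally:
    conn.close()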

How to fix IncompleteRead error on Linux using Py2Neo

I am updating data on a Neo4j server using Python (2.7.6) and Py2Neo (1.6.4). My load function is:
from py2neo import neo4j, node, rel, cypher

session = cypher.Session('http://my_neo4j_server.com.mine:7474')

def load_data():
    tx = session.create_transaction()
    for row in dataframe.iterrows():  # dataframe is a pandas dataframe
        name = row[1].name
        id = row[1].id
        merge_query = "MERGE (a:label {name:'%s', name_var:'%s'}) " % (id, name)
        tx.append(merge_query)
    tx.commit()
When I execute this from Spyder on Windows it works great: all the data from the dataframe is committed to Neo4j and visible in the graph. However, when I run it from a Linux server (different from the Neo4j server) I get the following error at tx.commit(). Note that I have the same versions of Python and Py2Neo on both machines.
INFO:py2neo.packages.httpstream.http:>>> POST http://neo4j1.qs:7474/db/data/transaction/commit [1360120]
INFO:py2neo.packages.httpstream.http:<<< 200 OK [chunked]
ERROR:__main__:some part of process failed
Traceback (most recent call last):
File "my_file.py", line 132, in load_data
tx.commit()
File "/usr/local/lib/python2.7/site-packages/py2neo/cypher.py", line 242, in commit
return self._post(self._commit or self._begin_commit)
File "/usr/local/lib/python2.7/site-packages/py2neo/cypher.py", line 208, in _post
j = rs.json
File "/usr/local/lib/python2.7/site-packages/py2neo/packages/httpstream/http.py", line 563, in json
return json.loads(self.read().decode(self.encoding))
File "/usr/local/lib/python2.7/site-packages/py2neo/packages/httpstream/http.py", line 634, in read
data = self._response.read()
File "/usr/local/lib/python2.7/httplib.py", line 543, in read
return self._read_chunked(amt)
File "/usr/local/lib/python2.7/httplib.py", line 597, in _read_chunked
raise IncompleteRead(''.join(value))
IncompleteRead: IncompleteRead(128135 bytes read)
This post (IncompleteRead using httplib) suggests that it is an httplib error. I am not sure how to handle this since I am not calling httplib directly.
Any suggestions for getting this load to work on Linux, or for what the IncompleteRead error message means?
UPDATE :
The IncompleteRead error is being caused by a Neo4j error being returned. The line returned in _read_chunked that is causing the error is:
pe}"}]}],"errors":[{"code":"Neo.TransientError.Network.UnknownFailure"
Neo4j docs say this is an unknown network error.
Although I can't say for sure, this implies some kind of local network issue between client and server rather than a bug within the library. Py2neo wraps httplib (which is pretty solid itself) and, from the stack trace, it looks as though the client is expecting more chunks from a chunked response.
To diagnose further, you could make some curl calls from your Linux application server to your database server and see what succeeds and what doesn't. If that works, try writing a quick and dirty python script to make the same calls with httplib directly.
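For example, a quick-and-dirty probe of the transactional endpoint with httplib directly might look like this (Python 2 to match the question; the host and the trivial RETURN 1 statement are placeholders):

import httplib
import json

# POST one trivial statement to Neo4j's transactional commit endpoint
# and read the whole chunked response at once.
conn = httplib.HTTPConnection('my_neo4j_server.com.mine', 7474)
payload = json.dumps({"statements": [{"statement": "RETURN 1"}]})
conn.request('POST', '/db/data/transaction/commit', payload,
             {'Content-Type': 'application/json', 'Accept': 'application/json'})
response = conn.getresponse()
print response.status, response.reason
print response.read()  # raises IncompleteRead if the chunked stream is cut short
conn.close()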
UPDATE 1: Given the update above and the fact that the server streams its responses, I'm thinking that the chunk size might represent the intended payload but the error cuts the response short. Recreating the issue with curl certainly seems like the best next step to help determine whether it is a fault in the driver, the server or something else.
UPDATE 2: Looking again this morning, I notice that you're using Python substitution for the properties within the MERGE statement. As good practice, you should use parameter substitution at the Cypher level:
merge_query = "MERGE (a:label {name:{name}, name_var:{name_var}})"
merge_params = {"name": id, "name_var": name}
tx.append(merge_query, merge_params)

Error using the Django cursor to save escaped characters

I have a URL which I want to save into the MySQL database using the "cursor" tool offered by Django, but I keep getting a "not enough arguments for format string" error because the URL contains escaped (non-ASCII) characters. The testing code is fairly short:
test.py
import os
import runconfig #configuration file
os.environ['DJANGO_SETTINGS_MODULE'] = runconfig.django_settings_module
from django.db import connection,transaction
c = connection.cursor()
url = "http://www.academicjournals.org/ijps/PDF/pdf2011/18mar/G%C3%B3mez-Berb%C3%ADs et al.pdf"
dbquery = "INSERT INTO main_crawl_document SET url="+url
c.execute(dbquery)
transaction.commit_unless_managed()
The full error message is
Traceback (most recent call last):
File "./test.py", line 14, in <module>
c.execute(dbquery)
File "/usr/local/lib/python2.6/site-packages/django/db/backends/util.py", line 38, in execute
sql = self.db.ops.last_executed_query(self.cursor, sql, params)
File "/usr/local/lib/python2.6/site-packages/django/db/backends/__init__.py", line 505, in last_executed_query
return smart_unicode(sql) % u_params
TypeError: not enough arguments for format string
Can anybody help me?
You're opening yourself up to possible SQL injection. Instead, use c.execute() properly:
url = "http://www.academicjournals.org/ijps/PDF/pdf2011/18mar/G%C3%B3mez-Berb%C3%ADs et al.pdf"
dbquery = "INSERT INTO main_crawl_document SET url=?"
c.execute(dbquery, (url,))
transaction.commit_unless_managed()
The .execute method accepts an iterable of parameters to use for escaping, assuming it's the normal DB-API method (which it is with Django). Passing the URL as a parameter also means the % escapes inside it are never interpreted as format specifiers, which is what caused the original error.
