Apache Superset not loading table records/columns - python

I am trying to add a table in Superset. Other tables get added properly, meaning their columns are fetched correctly by Superset, but for my table booking_xml no columns are loaded.
The description of the table is shown in a screenshot in the original question (omitted here).
After adding this table, when I click on the table name to explore it, I get the following error:
Empty query?
Traceback (most recent call last):
File "/home/superset/superset_venv/lib/python3.8/site-packages/superset/viz.py", line 473, in get_df_payload
df = self.get_df(query_obj)
File "/home/superset/superset_venv/lib/python3.8/site-packages/superset/viz.py", line 251, in get_df
self.results = self.datasource.query(query_obj)
File "/home/superset/superset_venv/lib/python3.8/site-packages/superset/connectors/sqla/models.py", line 1139, in query
query_str_ext = self.get_query_str_extended(query_obj)
File "/home/superset/superset_venv/lib/python3.8/site-packages/superset/connectors/sqla/models.py", line 656, in get_query_str_extended
sqlaq = self.get_sqla_query(**query_obj)
File "/home/superset/superset_venv/lib/python3.8/site-packages/superset/connectors/sqla/models.py", line 801, in get_sqla_query
raise Exception(_("Empty query?"))
Exception: Empty query?
ERROR:superset.viz:Empty query?
However, when I explore it using the SQL editor, it loads up properly. I found a difference in the form_data parameter in the URL between loading from the datasets page and from the SQL editor.
URL from SQL Lab view:
form_data={"queryFields":{"groupby":"groupby","metrics":"metrics"},"datasource":"192__table","viz_type":"table","url_params":{},"time_range_endpoints":["inclusive","exclusive"],"granularity_sqla":"created_on","time_grain_sqla":"P1D","time_range":"Last+week","groupby":[],"metrics":["count"],"all_columns":[],"percent_metrics":[],"order_by_cols":[],"row_limit":10000,"order_desc":true,"adhoc_filters":[],"table_timestamp_format":"smart_date","color_pn":true,"show_cell_bars":true}
URL from datasets list:
form_data={"queryFields":{"groupby":"groupby","metrics":"metrics"},"datasource":"191__table","viz_type":"table","url_params":{},"time_range_endpoints":["inclusive","exclusive"],"time_grain_sqla":"P1D","time_range":"Last+week","groupby":[],"all_columns":[],"percent_metrics":[],"order_by_cols":[],"row_limit":10000,"order_desc":true,"adhoc_filters":[],"table_timestamp_format":"smart_date","color_pn":true,"show_cell_bars":true}
When loading from the datasets list, /explore_json/ returns 400 Bad Request. Note that this second form_data is missing the "metrics" key that the SQL Lab version has, which matches the "Empty query?" raised in get_sqla_query.
Superset version == 0.37.1, Python version == 3.8

Superset saves the details/metadata of every table that is connected to it. My table had a column with a very long datatype, as you can see in the image in the question, but Superset stores the datatype string in a varchar column of length 32, so the metadata database rejected the value. That was causing the error, and it is why no records were fetched even after the table was added as a datasource.
What I did was increase the length of that metadata column:
ALTER TABLE table_columns MODIFY type varchar(200)
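For reference, table_columns here lives in Superset's own metadata database, not in the database being queried, and the MODIFY syntax above is MySQL's (on PostgreSQL it would be ALTER TABLE table_columns ALTER COLUMN type TYPE varchar(200)). A minimal sketch of applying the fix, assuming a MySQL metadata database and a placeholder connection URI:

from sqlalchemy import create_engine, text

# Placeholder URI; use the SQLALCHEMY_DATABASE_URI from your superset_config.py.
engine = create_engine("mysql+pymysql://superset:superset@localhost/superset")
with engine.begin() as conn:
    # Widen the column that stores each dataset column's datatype string,
    # so datatypes longer than 32 characters fit.
    conn.execute(text("ALTER TABLE table_columns MODIFY type VARCHAR(200)"))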

Related

VS Code Python terminal keeps printing "Found" when it should ask the user for input

I am taking the CS50 class, currently on Week 7. Before this exercise, Python was working perfectly fine. Now I am using SQL commands within a Python file in VS Code; the cs50 module works fine through the venv.
When I execute the Python file, I should be asked "Title: " so that I can type a title and see the outcome: a counter that tracks the number of occurrences of the title the user entered.
import csv
from cs50 import SQL

db = SQL("C:\\Users\\wf user\\Desktop\\CODING\\CS50\\shows.db")
title = input("Title: ").strip()
# Uses a SQL query to count occurrences of the title the user typed; ? is a placeholder for title.
rows = db.execute("SELECT COUNT(*) AS counter FROM shows WHERE title LIKE ?", title)
# db.execute always returns a list of rows, even if there is just one row.
row = rows[0]
# The row is a dict; looking up the key "counter" gives the count.
print(row["counter"])
I have shows.db in the path.
But the output just prints "Found"; it does not even ask for a title.
PS C:\Users\wf user\Desktop\CODING\CS50> python favoritesS.py
Found
I expect the program to ask me "Title: ", but instead it prints "Found".
In CS50, the professor encountered the same problem while coding phonebook.py; he solved it by putting the Python file into a separate folder called "tmp".
I tried the same approach, but then I got a long error message:
PS C:\Users\wf user\Desktop\CODING\CS50> cd tmp
PS C:\Users\wf user\Desktop\CODING\CS50\tmp> python favoritesS.py
Traceback (most recent call last):
File "C:\Users\wf user\Desktop\CODING\CS50\tmp\favoritesS.py", line 5, in <module>
db = SQL("C:\\Users\\wf user\\Desktop\\CODING\\CS50\\shows.db")
File "C:\Users\wf user\AppData\Local\Programs\Python\Python311\Lib\site-packages\cs50\sql.py", line 74, in __init__
self._engine = sqlalchemy.create_engine(url, **kwargs).execution_options(autocommit=False, isolation_level="AUTOCOMMIT")
File "<string>", line 2, in create_engine
File "C:\Users\wf user\AppData\Local\Programs\Python\Python311\Lib\site-packages\sqlalchemy\util\deprecations.py", line 309, in warned
return fn(*args, **kwargs)
File "C:\Users\wf user\AppData\Local\Programs\Python\Python311\Lib\site-packages\sqlalchemy\engine\create.py", line 518, in create_engine
u = _url.make_url(url)
File "C:\Users\wf user\AppData\Local\Programs\Python\Python311\Lib\site-packages\sqlalchemy\engine\url.py", line 732, in make_url
return _parse_url(name_or_url)
File "C:\Users\wf user\AppData\Local\Programs\Python\Python311\Lib\site-packages\sqlalchemy\engine\url.py", line 793, in _parse_url
raise exc.ArgumentError(
sqlalchemy.exc.ArgumentError: Could not parse SQLAlchemy URL from string 'C:\Users\wf user\Desktop\CODING\CS50\shows.db'
Here is proof that the code I posted is the same code I am working on (screenshot omitted here).
When I use Start Debugging under the Run menu in VS Code, it works! But not when I run without debugging.
Is this the library you are using? https://cs50.readthedocs.io/
It may be that one of your intermediate results is not doing what you think it is. I would recommend putting print() statements at every step of the way to see the values of the intermediate variables.
If you have learned how to use a debugger, that is even better.
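For what it's worth, the traceback points at the connection string: SQLAlchemy cannot parse a bare Windows path as a URL, and cs50's SQL() documents a sqlite:/// URL form for local files. A minimal sketch, assuming shows.db sits in the directory the script is run from:

from cs50 import SQL

# cs50's SQL() takes a SQLAlchemy-style URL, not a plain filesystem path.
db = SQL("sqlite:///shows.db")

title = input("Title: ").strip()
rows = db.execute("SELECT COUNT(*) AS counter FROM shows WHERE title LIKE ?", title)
print(rows[0]["counter"])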

Python getting results from Azure Storage Table with azure-data-tables

I am trying to query an Azure Storage table to get all rows to turn into a table on a web site. However, I cannot get the entries from the table; I get the same error every time: "azure.core.exceptions.HttpResponseError: The requested operation is not implemented on the specified resource."
For code I am following the examples here and it is not working as expected.
import os
from azure.data.tables import TableServiceClient
from azure.core.credentials import AzureNamedKeyCredential

def read_storage_table():
    credential = AzureNamedKeyCredential(os.environ["AZ_STORAGE_ACCOUNT"], os.environ["AZ_STORAGE_KEY"])
    service = TableServiceClient(endpoint=os.environ["AZ_STORAGE_ENDPOINT"], credential=credential)
    client = service.get_table_client(table_name=os.environ["AZ_STORAGE_TABLE"])
    entities = client.query_entities(query_filter="PartitionKey eq 'tasksSeattle'")
    client.close()
    service.close()
    return entities
Then calling the function.
table = read_storage_table()
for record in table:
    for key in record.keys():
        print("Key: {}, Value: {}".format(key, record[key]))
And that returns:
Traceback (most recent call last):
File "C:\Program Files\Python310\Lib\site-packages\azure\data\tables\_models.py", line 363, in _get_next_cb
return self._command(
File "C:\Program Files\Python310\Lib\site-packages\azure\data\tables\_generated\operations\_table_operations.py", line 386, in query_entities
raise HttpResponseError(response=response, model=error)
azure.core.exceptions.HttpResponseError: Operation returned an invalid status 'Not Implemented'
Content: {"odata.error":{"code":"NotImplemented","message":{"lang":"en-US","value":"The requested operation is not implemented on the specified resource.\nRequestId:cd29feda-1002-006b-679c-3d39e8000000\nTime:2022-03-22T03:27:00.5993216Z"}}}
Using a similar function I am able to write to the table. But even trying entities = client.list_entities() I get the same error. I'm at a loss.
KrunkFu, thank you for identifying and sharing the solution here. Posting the same in the answer section to help other community members.
Replacing the endpoint https://<accountname>.table.core.windows.net/<table> with https://<accountname>.table.core.windows.net solved the issue.
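In other words, the endpoint must be the account URL only; the table name is passed separately to get_table_client(). A sketch of the corrected function, keeping the question's environment variable names and assuming AZ_STORAGE_ENDPOINT now holds the bare account URL:

import os
from azure.core.credentials import AzureNamedKeyCredential
from azure.data.tables import TableServiceClient

def read_storage_table():
    credential = AzureNamedKeyCredential(os.environ["AZ_STORAGE_ACCOUNT"], os.environ["AZ_STORAGE_KEY"])
    # e.g. https://<accountname>.table.core.windows.net with no /<table> suffix
    with TableServiceClient(endpoint=os.environ["AZ_STORAGE_ENDPOINT"], credential=credential) as service:
        client = service.get_table_client(table_name=os.environ["AZ_STORAGE_TABLE"])
        # Materialize the results before the client is closed, since
        # query_entities returns a lazy paged iterator.
        return list(client.query_entities(query_filter="PartitionKey eq 'tasksSeattle'"))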

Python, PostgreSQL: trouble copying from CSV

I am doing all of this as a test. I want to take a CSV file that has headers and copy the values into a PostgreSQL table. The table's columns are named the same as the headers in the CSV file, case-sensitively. The table has two columns, "pkey" and "m"; the CSV just has "m" as a header. pkey is the primary key, set up to auto-increment. As a test I just want to copy the "m" column of the CSV file into the table.
import csv
import psycopg2

database = psycopg2.connect(database="testing", user="**",
                            password="**", host="**", port="**")
ocsvf = open("sample.csv")

def merger(conn, table_name, file_object):
    cursor = conn.cursor()
    cursor.copy_from(file_object, table_name, sep=',', columns=('mls'))
    conn.commit()
    cursor.close()

try:
    merger(database, 'tests', ocsvf)
finally:
    database.close()
When I try to run the code, I get this error:
Traceback (most recent call last):
File "csvtest.py", line 26, in <module>
merger(database, 'tests', ocsvf)
File "csvtest.py", line 21, in merger
cursor.copy_from(file_object, table_name, sep=',', columns=('m'))
psycopg2.ProgrammingError: column "m" of relation "tests" does not exist
I am sure it's something simple that I just keep overlooking, but I have also googled this, and the one thing I found was someone saying the primary key might not be set up right. I tested that, though, and the primary key works fine when I do manual input from pgAdmin. Any help would be great, thanks.
In this line:
cursor.copy_from(file_object, table_name, sep=',', columns=('mls'))
('mls') evaluates to the string "mls", which means that iterating over it yields 3 items: ['m', 'l', 's'].
You should write this line as follows:
cursor.copy_from(file_object, table_name, sep=',', columns=('mls',))
The expression ('mls',) evaluates to a tuple with one item, "mls", which is what I guess you meant to do.
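A quick check in the interpreter makes the difference visible:

>>> list(('mls'))   # parentheses alone don't create a tuple; this is just the string 'mls'
['m', 'l', 's']
>>> list(('mls',))  # the trailing comma does
['mls']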

Inserting values in sqlite3, but they appear as column names

I'm using SQLite3 in Python 3.5 to add user information from Tkinter inputs to a database. However, once I obtain the data and insert it into the database, rather than inserting the data into the columns, it tells me there is no such column.
The error I get is as follows:
Exception in Tkinter callback
Traceback (most recent call last):
File "C:\Users\Luke_2\AppData\Local\Programs\Python\Python35-32\lib\tkinter\__init__.py", line 1549, in __call__
return self.func(*args)
File "C:\Users\Luke_2\Desktop\Computing\Coursework\live\current.py", line 303, in details
cur_user.execute("INSERT INTO LogIn(Email,Password) VALUES("+user+","+passw+")")
sqlite3.OperationalError: no such column: a
And the function in my code doing this is as follows:
def details():
    user = email_sign.get()
    user1 = email1_sign.get()
    passw = password_sign.get()
    password1 = password1_sign.get()
    if user == user1 and passw == password1:
        cur_user.execute("INSERT INTO LogIn(Email,Password) VALUES("+user+","+passw+")")
        conn_user.commit()
    else:
        print("please try again")
It is because the request that you actually execute has no quotes around the values, so the values are interpreted as column names by the SQL engine. If you pass a and b respectively, you will execute
INSERT INTO LogIn(Email,Password) VALUES(a,b)
when what is required is
INSERT INTO LogIn(Email,Password) VALUES('a','b')
But you should never do that! Building requests by hardcoding the parameters into the request string has been the cause of SQL injection problems for decades.
The correct way is to build a parameterized request:
cur_user.execute("INSERT INTO LogIn(Email,Password) VALUES(?,?)", (user, passw))
It is simpler, smarter, and immune to SQL injection.
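Applied to the question's function, it is a one-line change (a sketch reusing the question's widget and connection names):

def details():
    user = email_sign.get()
    user1 = email1_sign.get()
    passw = password_sign.get()
    password1 = password1_sign.get()
    if user == user1 and passw == password1:
        # The driver binds the values, so quoting and escaping are handled safely.
        cur_user.execute("INSERT INTO LogIn(Email,Password) VALUES(?,?)", (user, passw))
        conn_user.commit()
    else:
        print("please try again")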

Py-Postgresql and Raritan PowerIQ - Can't seem to find table?

I'm trying to write some Python3 to interface with the backend PostgreSQL server on a Raritan Power IQ (http://www.raritan.com/products/power-management/power-iq/) system.
I've used pgAdminIII to connect to the server, and it connects fine with my credentials. I can see the databases, as well as the schemas in each database.
I'm now using py-postgresql to attempt to script it, and I'm hitting some issues.
I use the following to connect:
postgresql.open("pq://odbcuser:password#XX.XX.XX.XX:5432/raritan")
to connect to the raritan database, using user "odbcuser" and password "password" (no, that's not the real one...lol).
It appears to connect successfully. I'm able to to run some queries, e.g.
ps = db.prepare("SELECT * from pg_tables;")
ps()
manages to list all the tables/views in the "raritan" database.
However, I then try to access a specific view and it breaks. The "raritan" database has two schemas, "odbc" and "public".
I can access views from the public schema. E.g.:
ps = db.prepare("SELECT * from public.qrypwrall;")
ps()
works to an extent: I get a permission-denied error, same as under pgAdminIII, since my account doesn't have access to that view, but syntactically it seems fine and it does find the table.
However, when I try to access a view under "odbc", it just breaks. E.g.:
>>> ps = db.prepare("SELECT * from odbc.Aisles;")
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "C:\Python31\lib\site-packages\postgresql\driver\pq3.py", line 2291, in prepare
ps._fini()
File "C:\Python31\lib\site-packages\postgresql\driver\pq3.py", line 1393, in _fini
self.database._pq_complete()
File "C:\Python31\lib\site-packages\postgresql\driver\pq3.py", line 2538, in _pq_complete
self.typio.raise_error(x.error_message, cause = getattr(x, 'exception', None))
File "C:\Python31\lib\site-packages\postgresql\driver\pq3.py", line 471, in raise_error
self.raise_server_error(error_message, **kw)
File "C:\Python31\lib\site-packages\postgresql\driver\pq3.py", line 462, in raise_server_error
raise server_error
postgresql.exceptions.UndefinedTableError: relation "odbc.aisles" does not exist
CODE: 42P01
LOCATION: File 'namespace.c', line 268, in RangeVarGetRelid from SERVER
STATEMENT: [parsing]
statement_id: py:0x10ca1b0
string: SELECT * from odbc.Aisles;
CONNECTION: [idle]
client_address: 10.180.9.213/32
client_port: 2612
version: PostgreSQL 8.3.7 on i686-redhat-linux-gnu, compiled by GCC gcc (GCC) 4.1.2 20071124 (Red Hat 4.1.2-42)
CONNECTOR: [IP4] pq://odbcuser:***@10.180.138.121:5432/raritan
category: None
DRIVER: postgresql.driver.pq3.Driver
However, I can access the same table (Aisles) fine under pgAdminIII, using the same credentials (and unlike public, I actually have permissions to all these tables).
Is there any reason that py-postgresql might not see these views? Or anything you can pick out from the error messages?
I have a suspicion that it has to do with PowerIQ using mixed case for the table names (e.g. "Aisles"). However, I'm not exactly sure how to deal with these in psycopg. How exactly would I modify, say, my cursor.execute line to quote the table?
cursor.execute('SELECT * from "public.Aisles"')
also doesn't work.
Cheers,
Victor
Have you tried it this way: 'SELECT * from public."Aisles"'?
Quoting the whole thing makes it a non-qualified (no-schema) table name that happens to have a dot in it.
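For context, PostgreSQL folds unquoted identifiers to lower case, which is why odbc.Aisles is looked up as odbc.aisles; a mixed-case name created with quotes must be quoted on every use, with the schema and table quoted separately. A short sketch using py-postgresql as in the question (placeholder credentials):

import postgresql

db = postgresql.open("pq://odbcuser:password@XX.XX.XX.XX:5432/raritan")
# Quote only the table identifier, not the whole schema.table string.
ps = db.prepare('SELECT * FROM odbc."Aisles"')
for row in ps():
    print(row)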
