ORA-00942 when importing data from Oracle Database to pandas - Python

I'm trying to import data from an Oracle database into a pandas DataFrame.
Right now I am using:
import cx_Oracle
import pandas as pd
db_connection_string = '.../A1@server:port/servername'
con = cx_Oracle.connect(db_connection_string)
query = """SELECT *
FROM Salesdata"""
df = pd.read_sql(query, con=con)
and get the following error: DatabaseError: ORA-00942: table or view does not exist
When I run a query to get the list of all tables:
cur = con.cursor()
cur.execute("SELECT table_name FROM dba_tables")
for row in cur:
    print(row)
The output looks like this:
('A$',)
('A$BD',)
('Salesdata',)
What am I doing wrong? I used this question to start.
If I follow the comment and print(query), I get:
SELECT *
FROM Salesdata

Getting ORA-00942 when running SELECT can have two possible causes:
The table does not exist: here you should make sure the table name is prefixed by the table owner (schema name), as in SELECT * FROM owner_name.table_name. This is generally needed when the connected Oracle user is not the table owner.
You don't have SELECT privileges on the table: a grant from the owner is generally needed when the connected Oracle user is not the table owner.
You need to check both.
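For example, a minimal sketch reusing the connection from the question (the owner name in the final query is a placeholder, not something from the original post):
# Look up which schema owns the table; all_tables lists the tables
# the connected user can access.
cur = con.cursor()
cur.execute(
    "SELECT owner, table_name FROM all_tables "
    "WHERE UPPER(table_name) = 'SALESDATA'")
print(cur.fetchall())

# Then qualify the table with its owner. A mixed-case name like 'Salesdata'
# (as shown in the dba_tables output above) was created with quotes, so it
# must be quoted again here.
df = pd.read_sql('SELECT * FROM some_owner."Salesdata"', con=con)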

Related

SQLite with Python: Adding the database name in a SELECT statement

I am using sqlite3 with Python, and after connecting to the database and creating a table, sqlite3 shows an error when I try to execute a SELECT statement on the table with the name of the database in it:
con = sqlite3.connect("my_database")
cur = con.cursor()
cur.execute('''CREATE TABLE my_table ... ''')
cur.execute("SELECT * FROM my_database.my_table") # this works fine without the name of the database before the table name
but I get this error from sqlite3 :
no such table : my_database.my_table
Is there a way to do a SELECT statement with the name of the database in it?
The short answer is no, you can't do this with SQLite. This is because you already specify the database name with sqlite3.connect(), and SQLite doesn't allow multiple databases in the same file.
Make sure the database file is in the same directory as the Python script; to verify this you can use the os library and its os.listdir() method. After connecting to the database and creating the cursor, you can query with the table name alone:
cur.execute('SELECT * FROM my_table')
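Expanding that into a minimal runnable sketch, assuming the my_database file sits next to the script:
import os
import sqlite3

print(os.listdir("."))  # verify that "my_database" appears in this listing

con = sqlite3.connect("my_database")
cur = con.cursor()
cur.execute("SELECT * FROM my_table")
print(cur.fetchall())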

How to drop all tables in sqlite3 using Python?

I made a project which collects data from the user and stores it in different tables. The application has a delete function: the first option deletes a specific table (which I already implemented) and the second one deletes all existing tables.
How can I drop all tables inside my database?
These are my variables:
conn = sqlite3.connect('main.db')
cursor = conn.execute("DROP TABLE")
cursor.close()
According to sqlitetutorial.net:
SQLite allows you to drop only one table at a time. To remove multiple
tables, you need to issue multiple DROP TABLE statements.
You can do it by querying all table names (https://www.sqlite.org/faq.html#q7), then using the result to delete the tables one by one.
Here is the code; the function delete_all_tables does that:
TABLE_PARAMETER = "{TABLE_PARAMETER}"
DROP_TABLE_SQL = f"DROP TABLE {TABLE_PARAMETER};"
# sqlite_schema is the modern name (SQLite 3.33+); on older versions use sqlite_master
GET_TABLES_SQL = "SELECT name FROM sqlite_schema WHERE type='table';"

def delete_all_tables(con):
    tables = get_tables(con)
    delete_tables(con, tables)

def get_tables(con):
    cur = con.cursor()
    cur.execute(GET_TABLES_SQL)
    tables = cur.fetchall()
    cur.close()
    return tables

def delete_tables(con, tables):
    cur = con.cursor()
    for table, in tables:
        sql = DROP_TABLE_SQL.replace(TABLE_PARAMETER, table)
        cur.execute(sql)
    cur.close()
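Hypothetical usage of the helper, assuming the database file is named main.db:
import sqlite3

con = sqlite3.connect("main.db")
delete_all_tables(con)
con.commit()
con.close()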
SQLite3 code to issue multiple DROP TABLE statements based on a TEMP_% name wildcard:
.output droptables.sql
SELECT 'DROP TABLE "' || name || '";' FROM sqlite_master
WHERE type = 'table' AND name LIKE 'TEMP_%';
.read droptables.sql
Example result in .sql output file:
DROP TABLE "TEMP_table1";
DROP TABLE "TEMP_table2";
DROP TABLE "TEMP_table3";
...
Python 3 code to paste the generated SQL into:
import sqlite3

conn = sqlite3.connect("main.db")
conn.row_factory = sqlite3.Row
dbc = conn.cursor()
dbc.execute('DROP TABLE "TEMP_table1";')
conn.commit()
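As a sketch (not part of the original answer), the same wildcard drop can also be done entirely in Python, without the intermediate .sql file:
import sqlite3

conn = sqlite3.connect("main.db")
cur = conn.cursor()
# Collect the matching table names first, then drop them one by one.
cur.execute("SELECT name FROM sqlite_master WHERE type = 'table' AND name LIKE 'TEMP_%'")
for (name,) in cur.fetchall():
    cur.execute(f'DROP TABLE "{name}"')
conn.commit()
conn.close()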

Is there a way to get the DDL statement output of cx_Oracle's cursor.execute()?

I am trying to create a table, as shown below, using Python's cx_Oracle module.
import cx_Oracle
db_conn = cx_Oracle.connect("user/user123@dbhost:1521/db1")
db_cur = db_conn.cursor()
db_cur.execute("create table emp (empid number, empname varchar2(100))")
I want to get the "Table created." output (from db_conn or db_cur, maybe) which would have been displayed if I had executed this statement on the command line, like below.
SQL> create table emp (empid number, empname varchar2(100));
Table created.
Is there any way to do this?
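For context: the "Table created." feedback is printed by the SQL*Plus client, not returned by the database, so cx_Oracle has nothing equivalent to hand back for DDL. A hedged workaround is to rely on the fact that execute() raises an exception on failure:
import cx_Oracle

db_conn = cx_Oracle.connect("user/user123@dbhost:1521/db1")
db_cur = db_conn.cursor()
try:
    db_cur.execute("create table emp (empid number, empname varchar2(100))")
    print("Table created.")  # reached only if the DDL succeeded
except cx_Oracle.DatabaseError as exc:
    print("Table creation failed:", exc)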

Psycopg2 cannot create table

From a Jupyter notebook, I was able to create a database with psycopg2.
But somehow I was not able to create a table and store elements in it.
import psycopg2
from psycopg2.extensions import ISOLATION_LEVEL_AUTOCOMMIT
con = psycopg2.connect("user=postgres password='abc'");
con.set_isolation_level(ISOLATION_LEVEL_AUTOCOMMIT);
cursor = con.cursor();
name_Database = "socialmedia";
sqlCreateDatabase = "create database "+name_Database+";"
cursor.execute(sqlCreateDatabase);
With the above code, I can see the database named "socialmedia" from psql (Windows command prompt).
But with the code below, I cannot see the table named "test_table" from psql.
import psycopg2
# Open a DB session
dbSession = psycopg2.connect("dbname='socialmedia' user='postgres' password='abc'");
# Open a database cursor
dbCursor = dbSession.cursor();
# SQL statement to create a table
sqlCreateTable = "CREATE TABLE test_table(id bigint, cityname varchar(128), latitude numeric, longitude numeric);";
# Execute CREATE TABLE command
dbCursor.execute(sqlCreateTable);
# Insert statements
sqlInsertRow1 = "INSERT INTO test_table values(1, 'New York City', 40.73, -73.93)";
sqlInsertRow2 = "INSERT INTO test_table values(2, 'San Francisco', 37.733, -122.446)";
# Insert statement
dbCursor.execute(sqlInsertRow1);
dbCursor.execute(sqlInsertRow2);
# Select statement
sqlSelect = "select * from test_table";
dbCursor.execute(sqlSelect);
rows = dbCursor.fetchall();
# Print rows
for row in rows:
    print(row);
I can get the elements only from the Jupyter notebook, not from psql, and it seems they are stored only temporarily.
How can I see the table and its elements from psql and keep them permanently?
I don't see any dbCursor.execute('commit') in the second part of your question.
You have provided an example with AUTOCOMMIT, which works, and you are asking why results are stored only temporarily when you are not using AUTOCOMMIT? Well, they are not committed!
They are stored only for the current session; that's why you can get them from your Jupyter session but not from psql. See the sketch below.
Also:
you don't need to put semicolons in your Python code
you don't need to put semicolons in your SQL code (except when you execute multiple statements, which is not the case here)
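A minimal sketch of the committed version, reusing the question's credentials:
import psycopg2

dbSession = psycopg2.connect("dbname='socialmedia' user='postgres' password='abc'")
dbCursor = dbSession.cursor()
dbCursor.execute("CREATE TABLE test_table(id bigint, cityname varchar(128), "
                 "latitude numeric, longitude numeric)")
dbCursor.execute("INSERT INTO test_table VALUES (1, 'New York City', 40.73, -73.93)")
dbSession.commit()  # persist the table and rows so psql can see them
dbSession.close()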

Python pandas with to_sql(), SQLAlchemy and schema in Exasol

I'm trying to upload a pandas DataFrame to an SQL table. It seemed to me that pandas' to_sql function is the best solution for larger data frames, but I can't get it to work. I can easily extract data, but I get an error message when trying to write it to a new table:
import pyodbc
import pandas as pd

# connect to Exasol DB
exaString = 'DSN=exa'
conDB = pyodbc.connect(exaString)
# get some data from somewhere, works without error
sqlString = "SELECT * FROM SOMETABLE"
data = pd.read_sql(sqlString, conDB)
# now upload this data to a new table
data.to_sql('MYTABLENAME', conDB, flavor='mysql')
conDB.close()
The error message I get is
pyodbc.ProgrammingError: ('42000', "[42000] [EXASOL][EXASolution driver]syntax error, unexpected identifier_chain2, expecting
assignment_operator or ':' [line 1, column 6] (-1)
(SQLExecDirectW)")
Unfortunately I have no idea what the query that caused this syntax error looks like, or what else is wrong. Can someone please point me in the right direction?
(Second) EDIT:
Following Humayun's and Joris' suggestions, I now use pandas version 0.14 and SQLAlchemy in combination with the Exasol dialect (?). Since I am connecting to a defined schema, I am using the metadata option, but the program crashes with "Bus error (core dumped)".
import sqlalchemy
import pandas as pd
import pandas.io.sql as sql
from sqlalchemy import create_engine

engine = create_engine('exa+pyodbc://uid:passwd@exa/mySchemaName', echo=True)
# get some data
sqlString = "SELECT * FROM SOMETABLE"  # SOMETABLE is a view in mySchemaName
df = pd.read_sql(sqlString, con=engine)  # works
print(engine.has_table('MYTABLENAME'))  # MYTABLENAME is a view in mySchemaName
# prints "True"
# upload it to a new table
meta = sqlalchemy.MetaData(engine, schema='mySchemaName')
meta.reflect(engine, schema='mySchemaName')
pdsql = sql.PandasSQLAlchemy(engine, meta=meta)
pdsql.to_sql(df, 'MYTABLENAME')
I am not sure about setting "mySchemaName" in create_engine(...), but the outcome is the same.
Pandas does not support the Exasol syntax out of the box, so it needs to be changed a bit; here is a working example of your code without SQLAlchemy:
import pyodbc
import pandas as pd
con = pyodbc.connect('DSN=EXA')
con.execute('OPEN SCHEMA TEST2')
# configure pandas to understand EXASOL as mysql flavor
pd.io.sql._SQL_TYPES['int']['mysql'] = 'INT'
pd.io.sql._SQL_SYMB['mysql']['br_l'] = ''
pd.io.sql._SQL_SYMB['mysql']['br_r'] = ''
pd.io.sql._SQL_SYMB['mysql']['wld'] = '?'
pd.io.sql.PandasSQLLegacy.has_table = \
    lambda self, name: name.upper() in [t[0].upper() for t in con.execute('SELECT table_name FROM cat').fetchall()]
data = pd.read_sql('SELECT * FROM services', con)
data.to_sql('SERVICES2', con, flavor = 'mysql', index = False)
If you use the EXASolution Python package, then the code would look as follows:
import exasol
con = exasol.connect(dsn='EXA') # normal pyodbc connection with additional functions
con.execute('OPEN SCHEMA TEST2')
data = con.readData('SELECT * FROM services') # pandas data frame per default
con.writeData(data, table = 'services2')
The problem is that, even in pandas 0.14, the read_sql and to_sql functions cannot deal with schemas, but using Exasol without schemas makes no sense. This will be fixed in 0.15. If you want to use it now, look at this pull request: https://github.com/pydata/pandas/pull/7952
