Incorrect number of bindings supplied for SQLite query in Python

When running the following code in Python, I get the error "Incorrect number of bindings supplied. The current statement uses 0, and there are 1 supplied." for the query statement.
conn = sqlite3.connect('viber_messages2')
cur = conn.cursor()

cur = cur.execute("""SELECT DISTINCT messages.conversation_id
    FROM messages
    INNER JOIN participants ON messages.participant_id = participants._id
    WHERE messages.conversation_id IS NOT NULL;""")

query = ("""SELECT messages._id, messages.body, messages.conversation_id, messages.participant_id, participants_info.number, participants_info.contact_name
    FROM messages
    INNER JOIN participants ON messages.participant_id = participants._id
    INNER JOIN participants_info ON participants.participant_info_id = participants_info._id;""")

with open('messages.html', 'w') as h, open('test.txt', 'w') as t:
    for convo in cur.fetchall():
        df = pd.read_sql_query(query, conn, params=convo)
        # HTML WRITE
        h.write(df.to_html())
        h.write('<br/>')
        # TXT WRITE
        t.write(df.to_string())
        t.write('\n\n')

cur.close()
conn.close()

Let's take a close look at your query:
SELECT messages._id, messages.body, messages.conversation_id, messages.participant_id, participants_info.number, participants_info.contact_name
FROM messages
INNER JOIN
participants ON messages.participant_id = participants._id
INNER JOIN
participants_info ON participants.participant_info_id = participants_info._id
There are no placeholders here to which parameters can be bound. But how are you executing this query?
df = pd.read_sql_query(query, conn, params=convo)
You should instead do
df = pd.read_sql_query(query, conn)
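If the goal was instead to pull the messages for one conversation at a time (which the loop over conversation IDs suggests), the query needs a placeholder for the bound parameter. A minimal sketch of that variant, under my assumption about the intent, reusing the tables above:

# Hypothetical per-conversation variant: the "?" placeholder gives the
# single bound parameter somewhere to go; convo is already a 1-tuple.
query = """SELECT messages._id, messages.body, messages.conversation_id,
        messages.participant_id, participants_info.number, participants_info.contact_name
    FROM messages
    INNER JOIN participants ON messages.participant_id = participants._id
    INNER JOIN participants_info ON participants.participant_info_id = participants_info._id
    WHERE messages.conversation_id = ?;"""

for convo in cur.fetchall():
    df = pd.read_sql_query(query, conn, params=convo)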

Related

Iterate through FOR CURSOR in Python

I just started coding in Python and I have one problem. I am using the ibm_db_dbi driver.
Here is my SQL example:
FOR v AS cur CURSOR FOR
SELECT S.NAME FROM SYSIBM.SYSTABLES AS S
JOIN (SELECT DISTINCT TBNAME FROM SYSIBM.SYSCOLUMNS WHERE CREATOR = 'SOME_SCHEMA_DB') AS C.TBNAME = S.NAME WHERE S.CREATOR = 'SOME_SCHEMA' AND S.TYPE = 'T'
DO
SET TABLE = SCHEMA1 ||'.'||v.Name
...
SELECT 1 INTO TEMP_TABLE FROM SYSIBM.SYSTABLES WHERE TYPE='T' AND CREATOR = SCHEMA1 AND NAME = v.NAME
END FOR;
So far I have in Python:
# make a connection to db
conn = ibm_db_dbi.connect("DATABASE=%s;HOSTNAME=%s,....)
# define a cursor
cur = conn.cursor()
sql = """SELECT S.NAME FROM SYSIBM.SYSTABLES AS S
JOIN (SELECT DISTINCT TBNAME FROM SYSIBM.SYSCOLUMNS WHERE CREATOR = 'SOME_SCHEMA_DB') AS C.TBNAME = S.NAME WHERE S.CREATOR = 'SOME_SCHEMA' AND S.TYPE = 'T'"""
resultSet = cur.execute(sql)
I don't know how to iterate through the result from the query and set values from the cursor. How do I set the value for this piece of code?
SET TABLE = SCHEMA1 ||'.'||v.Name
...
SELECT 1 INTO TEMP_TABLE FROM SYSIBM.SYSTABLES WHERE TYPE='T' AND CREATOR = SCHEMA1 AND NAME = v.NAME
Tip: get your SQL working outside of Python first. Your question shows a query that has syntax mistakes or typos.
To iterate through a result set from a query, use one of the documented fetch methods of the DBI interface (for example fetchmany(), fetchall(), or others).
Here is a fragment that uses fetchmany().
try:
    cur = conn.cursor()
    sql = """SELECT S.NAME FROM SYSIBM.SYSTABLES AS S
             INNER JOIN (SELECT DISTINCT TBNAME
                         FROM SYSIBM.SYSCOLUMNS
                         WHERE TBCREATOR = 'SOME_SCHEMA') AS C
             ON C.TBNAME = S.NAME
             WHERE S.CREATOR = 'SOME_SCHEMA'
             AND S.TYPE = 'T';"""
    cur.execute(sql)
    result = cur.fetchmany()
    while result:
        for i in result:
            print(i)
        result = cur.fetchmany()
except Exception as e:
    print(e)
    raise
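To cover the second half of the question (the SET TABLE = SCHEMA1 || '.' || v.Name step and the per-table check), one option is to do that string handling in Python after fetching. A rough sketch, under the assumptions that SCHEMA1 is an ordinary Python string and that ibm_db_dbi accepts qmark-style (?) parameters:

# Hypothetical follow-up, assuming the SELECT above has just been executed:
SCHEMA1 = 'SOME_SCHEMA'
for (name,) in cur.fetchall():
    table = SCHEMA1 + '.' + name          # SET TABLE = SCHEMA1 || '.' || v.Name
    chk = conn.cursor()
    chk.execute("SELECT 1 FROM SYSIBM.SYSTABLES "
                "WHERE TYPE = 'T' AND CREATOR = ? AND NAME = ?",
                (SCHEMA1, name))
    if chk.fetchone():                    # mirrors SELECT 1 INTO TEMP_TABLE ...
        print(table, 'exists')
    chk.close()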

How can I handle errors inside of a for loop inside of a cx_Oracle connection?

Here's a rundown of what I'd like to do: I have a list of table names, and I want to run SQL against an Oracle database to pull back the table name and row count for every table in my list. However, not every table name in my list is necessarily in the database, and this causes my code to throw a database error. What I would like to do is, whenever I come to a table name that is not in the database, create a dataframe that contains the table name and, instead of count(*), some text that says 'table not found' or something similar. At the end of the loop I concatenate all of the dataframes into one dataframe. The overall goal here is to validate that certain tables exist and that they have the expected row counts.
query_list = []
df_List = []
connstr = '%s/%s@%s' % (username, password, server)
conn = cx_Oracle.connect(connstr)
with conn:
    query_list = ["SELECT '%s' as tbl, count(*) FROM %s." % (elm, database) + elm for elm in table_list]
    df_List = [pd.read_sql(elm, conn) for elm in query_list]
    df = pd.concat(df_List)
Consider try/except handling to return either the query output or a 'table not found' output:
def get_table_count(sql, conn, elm):
    try:
        return pd.read_sql(sql, conn)
    except Exception:
        return pd.DataFrame({'tbl': elm, 'note': 'table not found'}, index=[0])

with conn:
    sql = "SELECT '{t}' as tbl, count(*) as table_count FROM {d}.{t}"
    df_List = [get_table_count(sql.format(t=elm, d=database), conn, elm)
               for elm in table_list]
    df = pd.concat(df_List, ignore_index=True)
Get a list of all the Table Names which are in the DB, then create a loop to query each Table to get the row count.
Here is a SQL statement to get a list of all Tables in an Oracle DB:
SQL:
SELECT DISTINCT TABLE_NAME FROM ALL_TAB_COLUMNS ORDER BY TABLE_NAME ASC;
Python (to make list of tables you want row counts for and which exist in the DB):
list(set(tables_that_exist_in_DB) - (set(tables_that_exist_in_DB) - set(list_of_tables_you_want)))
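For what it's worth, that double set difference is just a set intersection. A short sketch of the whole idea, assuming an open cx_Oracle connection conn and a wanted list list_of_tables_you_want:

# Fetch the tables that actually exist, then intersect with the wanted list.
cur = conn.cursor()
cur.execute("SELECT DISTINCT TABLE_NAME FROM ALL_TAB_COLUMNS ORDER BY TABLE_NAME ASC")
tables_that_exist_in_DB = [row[0] for row in cur.fetchall()]

# set(a) & set(b) is equivalent to the double-difference one-liner above
tables_to_count = list(set(tables_that_exist_in_DB) & set(list_of_tables_you_want))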

Python, insert record into database from JOIN statement

I'm trying to write combined table data into a new table for a timing system I'm working on.
The following SQL works in phpMyAdmin:
INSERT INTO results(firstName, lastName, raceNumber, raceTime)
SELECT s.firstName, s.lastName, s.raceNumber, h.time
FROM runners s
INNER JOIN chipData hp ON s.raceNumber = hp.bandID
INNER JOIN readings h ON hp.tagId = h.tagId
WHERE hp.tagId = 123456
LIMIT 1
However, if I add this into a Python statement as follows, it doesn't work:
db = connect()
cur = db.cursor()
cur.execute("""INSERT INTO results( firstName, lastName, raceNumber, raceTime ) SELECT s.firstName, s.lastName, s.raceNumber, h.time FROM runners s INNER JOIN chipData hp ON s.raceNumber = hp.bandID INNER JOIN readings h ON hp.tagId = h.tagId WHERE hp.tagId = %s LIMIT 1""", (123456)
db.commit()
db.close()
Any help is appreciated!
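Two things stand out in that call: the execute() line is missing its closing parenthesis, and (123456) is just a parenthesized integer rather than a tuple, so no parameter sequence reaches the driver. A hedged fix, assuming a MySQL DB-API driver such as PyMySQL:

# Likely fix (sketch): close the execute() call and pass the parameter
# as a 1-tuple; note the trailing comma, which makes it a sequence.
cur.execute("""INSERT INTO results(firstName, lastName, raceNumber, raceTime)
               SELECT s.firstName, s.lastName, s.raceNumber, h.time
               FROM runners s
               INNER JOIN chipData hp ON s.raceNumber = hp.bandID
               INNER JOIN readings h ON hp.tagId = h.tagId
               WHERE hp.tagId = %s LIMIT 1""", (123456,))
db.commit()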

Execute SQL for different where clause from Python

I am trying to print SQL results through Python code, passing different predicates of the where clause from a for loop. But the code is only taking the last value from the loop and giving the result for that.
In the example below I have two distinct id values, 'aaa' and 'bbb'. There are 4 records for id value 'aaa' and 2 records for id value 'bbb'.
But the code below only gives me the result for id value 'bbb', not for id value 'aaa'.
Can anyone help identify what exactly I am doing wrong?
import pymysql

db = pymysql.connect(host="localhost", user="user1", passwd="pass1", db="db1")
cur = db.cursor()
in_lst = ['aaa', 'bbb']
for i in in_lst:
    Sql = "SELECT id, val, typ FROM test123 Where id='{inpt}'".format(inpt=i)
print(Sql)
cur.execute(Sql)
records = cur.fetchall()
print(records)
db.close()
The result I am getting as below
C:\Python34\python.exe C:/Users/Koushik/PycharmProjects/Test20161204/20170405.py
SELECT id, val, typ FROM test123 Where id='bbb'
(('bbb', 5, '1a'), ('bbb', 17, '1d'))
Process finished with exit code 0
import pymysql

db = pymysql.connect(host="localhost", user="root", passwd="1234", db="sakila")
cur = db.cursor()
in_lst = ['1', '2']
for i in in_lst:
    Sql = "SELECT * FROM actor Where actor_id='{inpt}'".format(inpt=i)
    print(Sql)
    cur.execute(Sql)
    records = cur.fetchall()
    print(records)
db.close()
Indentation is your problem; please update the code according to your needs...
Within your for loop, you're formatting the sql statement to replace "{inpt}" with "aaa". However, before you do anything with that value, you're immediately overwriting it with the "bbb" version.
You would need to either:
Store the results somehow before the next iteration of the loop, then process them outside of the loop.
Process the results within the loop.
Something like the following will give you a list containing both results from the fetchall() calls:
import pymysql

db = pymysql.connect(host="localhost", user="user1", passwd="pass1", db="db1")
cur = db.cursor()
in_lst = ['aaa', 'bbb']
records = list()
for i in in_lst:
    Sql = "SELECT id, val, typ FROM test123 Where id='{inpt}'".format(inpt=i)
    print(Sql)
    cur.execute(Sql)
    records.append(cur.fetchall())
print(records)
db.close()
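As an aside, formatting values straight into the SQL string invites SQL injection; PyMySQL can bind values itself using %s placeholders. A small sketch of that variant, using the same hypothetical table as above:

# Safer variant (sketch): let the driver bind the value instead of .format()
sql = "SELECT id, val, typ FROM test123 WHERE id = %s"
records = []
for i in in_lst:
    cur.execute(sql, (i,))
    records.append(cur.fetchall())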

How do I read cx_Oracle.LOB data in Python?

I have this code:
dsn = cx_Oracle.makedsn(hostname, port, sid)
orcl = cx_Oracle.connect(username + '/' + password + '@' + dsn)
curs = orcl.cursor()
sql = "select TEMPLATE from my_table where id ='6'"
curs.execute(sql)
rows = curs.fetchall()
print rows
template = rows[0][0]
orcl.close()
print template.read()
When I do print rows, I get this:
[(<cx_Oracle.LOB object at 0x0000000001D49990>,)]
However, when I do print template.read(), I get this error:
cx_Oracle.DatabaseError: Invalid handle!
So how do I get and read this data? Thanks.
I've found out that this happens when the connection to Oracle is closed before the cx_Oracle.LOB.read() method is used.
orcl = cx_Oracle.connect(usrpass + '@' + dbase)
c = orcl.cursor()
c.execute(sq)
dane = c.fetchall()
orcl.close()  # before reading LOB to str
wkt = dane[0][0].read()
And I get: DatabaseError: Invalid handle!
But the following code works:
orcl = cx_Oracle.connect(usrpass + '@' + dbase)
c = orcl.cursor()
c.execute(sq)
dane = c.fetchall()
wkt = dane[0][0].read()
orcl.close()  # after reading LOB to str
Figured it out. I have to do something like this:
curs.execute(sql)
for row in curs:
    print row[0].read()
You basically have to loop through the fetchall() result:
dsn = cx_Oracle.makedsn(hostname, port, sid)
orcl = cx_Oracle.connect(username + '/' + password + '@' + dsn)
curs = orcl.cursor()
sql = "select TEMPLATE from my_table where id ='6'"
curs.execute(sql)
rows = curs.fetchall()
for x in rows:
    list_ = list(x)
    print(list_)
There should be an extra comma in the for loop; in the code below, I have added a comma after x so that each 1-tuple row is unpacked.
dsn = cx_Oracle.makedsn(hostname, port, sid)
orcl = cx_Oracle.connect(username + '/' + password + '@' + dsn)
curs = orcl.cursor()
sql = "select TEMPLATE from my_table where id ='6'"
curs.execute(sql)
rows = curs.fetchall()
for x, in rows:
    print(x)
I had the same problem in a slightly different context. I needed to query a 27,000+ row table, and it turns out that cx_Oracle cuts the connection to the DB after a while.
While a connection to the db is open, you can use the read() method of the cx_Oracle.LOB object to transform it into a string. But if the query brings back a table that is too big, it won't work, because the connection will stop at some point, and when you want to read the results from the query you'll get an error on the cx_Oracle objects.
I tried many things, like setting connection.callTimeout = 0 (according to the documentation, this means it would wait indefinitely) and using fetchall() and then putting the results in a dataframe or numpy array, but I could never read the cx_Oracle.LOB objects.
If I try to run the query using pandas.read_sql(query, connection), the dataframe contains cx_Oracle.LOB objects with the connection closed, making them useless. (Again, this only happens if the table is very big.)
In the end I found a way around this by querying and creating a csv file immediately after, even though I know it's not ideal.
def csv_from_sql(sql: str, path: str = "dataframe.csv") -> bool:
    try:
        with cx_Oracle.connect(config.username, config.password, config.database, encoding=config.encoding) as connection:
            connection.callTimeout = 0
            data = pd.read_sql(sql, con=connection)
            data.to_csv(path)
            print("FILE CREATED")
    except cx_Oracle.Error as error:
        print(error)
        return False
    finally:
        print("PROCESS ENDED\n")
    # returning here, not in finally, preserves the False from the except branch
    return True

def make_query(sql: str, path: str = "dataframe.csv") -> pd.DataFrame:
    if csv_from_sql(sql, path):
        dataframe = pd.read_csv(path)
        return dataframe
    return pd.DataFrame()
This took a long time (about 4 to 5 minutes) to bring back my 27,000+ row table, but it worked when everything else didn't.
If anyone knows a better way, it would be helpful for me too.
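One alternative worth knowing: cx_Oracle can fetch CLOBs directly as strings via an output type handler, so no LOB locator has to outlive the connection. A sketch of that approach, assuming cx_Oracle 8+ (the constants below are from its documented LOB-fetching recipe):

# Sketch: fetch CLOB/BLOB columns as plain str/bytes so nothing depends
# on an open handle after the connection is closed.
import cx_Oracle

def output_type_handler(cursor, name, default_type, size, precision, scale):
    if default_type == cx_Oracle.DB_TYPE_CLOB:
        return cursor.var(cx_Oracle.DB_TYPE_LONG, arraysize=cursor.arraysize)
    if default_type == cx_Oracle.DB_TYPE_BLOB:
        return cursor.var(cx_Oracle.DB_TYPE_LONG_RAW, arraysize=cursor.arraysize)

connection = cx_Oracle.connect(user, password, dsn)  # hypothetical credentials
connection.outputtypehandler = output_type_handler
# rows fetched from here on contain str/bytes, usable after connection.close()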
