I have this code:
dsn = cx_Oracle.makedsn(hostname, port, sid)
orcl = cx_Oracle.connect(username + '/' + password + '@' + dsn)
curs = orcl.cursor()
sql = "select TEMPLATE from my_table where id ='6'"
curs.execute(sql)
rows = curs.fetchall()
print rows
template = rows[0][0]
orcl.close()
print template.read()
When I do print rows, I get this:
[(<cx_Oracle.LOB object at 0x0000000001D49990>,)]
However, when I do print template.read(), I get this error:
cx_Oracle.DatabaseError: Invalid handle!
So how do I get and read this data? Thanks.
I've found out that this happens when the connection to Oracle is closed before the cx_Oracle.LOB.read() method is called.
orcl = cx_Oracle.connect(usrpass+'@'+dbase)
c = orcl.cursor()
c.execute(sq)
dane = c.fetchall()
orcl.close() # before reading LOB to str
wkt = dane[0][0].read()
And I get: DatabaseError: Invalid handle!
But the following code works:
orcl = cx_Oracle.connect(usrpass+'@'+dbase)
c = orcl.cursor()
c.execute(sq)
dane = c.fetchall()
wkt = dane[0][0].read()
orcl.close() # after reading LOB to str
Figured it out. I have to do something like this:
curs.execute(sql)
for row in curs:
    print row[0].read()
You basically have to loop through the rows returned by fetchall():
dsn = cx_Oracle.makedsn(hostname, port, sid)
orcl = cx_Oracle.connect(username + '/' + password + '@' + dsn)
curs = orcl.cursor()
sql = "select TEMPLATE from my_table where id ='6'"
curs.execute(sql)
rows = curs.fetchall()
for x in rows:
    list_ = list(x)
    print(list_)
Add a comma after x in the for loop to unpack each single-element row tuple; see the code below.
dsn = cx_Oracle.makedsn(hostname, port, sid)
orcl = cx_Oracle.connect(username + '/' + password + '@' + dsn)
curs = orcl.cursor()
sql = "select TEMPLATE from my_table where id ='6'"
curs.execute(sql)
rows = curs.fetchall()
for x, in rows:
    print(x)
I had the same problem in a slightly different context. I needed to query a table with more than 27,000 rows, and it turns out that cx_Oracle cuts the connection to the DB after a while.
While a connection to the DB is open, you can use the read() method of the cx_Oracle.LOB object to transform it into a string. But if the query returns a table that is too big, it won't work, because the connection will drop at some point, and when you try to read the query results you'll get an error on the cx_Oracle objects.
I tried many things, like setting
connection.callTimeout = 0 (according to the documentation, this means it will wait indefinitely), and using fetchall() and then putting the results in a dataframe or numpy array, but I could never read the cx_Oracle.LOB objects.
If I run the query using pandas.read_sql(query, connection), the dataframe contains cx_Oracle.LOB objects with the connection closed, making them useless. (Again, this only happens if the table is very big.)
In the end I found a way around this by querying and writing a CSV file immediately afterwards, even though I know it's not ideal.
import cx_Oracle
import pandas as pd

def csv_from_sql(sql: str, path: str = "dataframe.csv") -> bool:
    try:
        with cx_Oracle.connect(config.username, config.password, config.database, encoding=config.encoding) as connection:
            connection.callTimeout = 0
            data = pd.read_sql(sql, con=connection)
            data.to_csv(path)
            print("FILE CREATED")
    except cx_Oracle.Error as error:
        print(error)
        return False
    finally:
        print("PROCESS ENDED\n")
    return True  # moved out of finally so the except branch can return False

def make_query(sql: str, path: str = "dataframe.csv") -> pd.DataFrame:
    if csv_from_sql(sql, path):
        dataframe = pd.read_csv(path)
        return dataframe
    return pd.DataFrame()
This took a long time (about 4 to 5 minutes) to bring my 27,000-plus-row table, but it worked when everything else didn't.
If anyone knows a better way, it would be helpful for me too.
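One possibly better way, as a hedged sketch (not tested against the setup above): cx_Oracle supports an output type handler that fetches CLOB columns directly as strings, so no cx_Oracle.LOB objects survive past fetchall() and the connection can be closed safely. The connection arguments and query below are placeholders.

import cx_Oracle

# Fetch CLOBs as plain strings via an output type handler,
# so nothing depends on the connection after fetchall() returns.
def output_type_handler(cursor, name, default_type, size, precision, scale):
    if default_type == cx_Oracle.CLOB:
        return cursor.var(cx_Oracle.LONG_STRING, arraysize=cursor.arraysize)

connection = cx_Oracle.connect(username, password, dsn)  # placeholder credentials
connection.outputtypehandler = output_type_handler
cursor = connection.cursor()
cursor.execute(sql)       # placeholder query
rows = cursor.fetchall()  # rows now contain str values, not cx_Oracle.LOB
connection.close()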
I am pretty new to Python development. I have a long Python script that "clones" a database and adds additional stored functions and procedures; clone means copying only the schema of the DB. These steps work fine.
My question is about pymysql insert execution:
I have to copy some table contents into the new DB. I don't get any SQL error. If I debug or print the created INSERT INTO command, it is correct (I've tested it in an SQL editor). The insert execution seems correct because the result contains the exact row count... but all rows are missing from the destination table in the destination DB...
(Of course the DB_* variables have been defined!)
import datetime
import pymysql

liveDbConn = pymysql.connect(DB_HOST, DB_USER, DB_PWD, LIVE_DB_NAME)
testDbConn = pymysql.connect(DB_HOST, DB_USER, DB_PWD, TEST_DB_NAME)
tablesForCopy = ['role', 'permission']
for table in tablesForCopy:
    with liveDbConn.cursor() as liveCursor:
        # Get name of columns
        liveCursor.execute("DESCRIBE `%s`;" % (table))
        columns = ''
        for column in liveCursor.fetchall():
            columns += '`' + column[0] + '`,'
        columns = columns.strip(',')
        # Get and convert values
        values = ''
        liveCursor.execute("SELECT * FROM `%s`;" % (table))
        for result in liveCursor.fetchall():
            data = []
            for item in result:
                if type(item) == type(None):
                    data.append('NULL')
                elif type(item) == type('str'):
                    data.append("'" + item + "'")
                elif type(item) == type(datetime.datetime.now()):
                    data.append("'" + str(item) + "'")
                else:  # for numeric values
                    data.append(str(item))
            v = '(' + ', '.join(data) + ')'
            values += v + ', '
        values = values.strip(', ')
        print("### table: %s" % (table))
        testDbCursor = testDbConn.cursor()
        testDbCursor.execute("INSERT INTO `" + TEST_DB_NAME + "`.`" + table +
                             "` (" + columns + ") VALUES " + values + ";")
        print("Result: {}".format(testDbCursor._result.message))
liveDbConn.close()
testDbConn.close()
Result is:
### table: role
Result: b"'Records: 16 Duplicates: 0 Warnings: 0"
### table: permission
Result: b'(Records: 222 Duplicates: 0 Warnings: 0'
What am I doing wrong? Thanks!
You have 2 main issues here:
You don't use conn.commit() (which would be either liveDbConn.commit() or testDbConn.commit() here). Changes to the database will not be reflected without committing them. Note that all data-modifying statements need committing, but SELECT, for example, does not.
Your query is open to SQL Injection. This is a serious problem.
Table names cannot be parameterized, so there's not much we can do about that, but you'll want to parameterize your values. I've made multiple corrections to the code in relation to type checking as well as parameterization.
for table in tablesForCopy:
    with liveDbConn.cursor() as liveCursor:
        liveCursor.execute("SELECT * FROM `%s`;" % (table))
        name_of_columns = [item[0] for item in liveCursor.description]
        insert_list = []
        for result in liveCursor.fetchall():
            data = []
            for item in result:
                if item is None:  # test identity against the None singleton
                    data.append(None)  # the driver turns None into SQL NULL
                elif isinstance(item, str):  # use isinstance to check type
                    data.append(item)
                elif isinstance(item, datetime.datetime):
                    data.append(item.strftime('%Y-%m-%d %H:%M:%S'))
                else:  # for numeric values
                    data.append(str(item))
            insert_list.append(data)
        testDbCursor = testDbConn.cursor()
        columns = ', '.join('`%s`' % col for col in name_of_columns)
        placeholders = ', '.join(['%s'] * len(name_of_columns))
        testDbCursor.executemany(
            "INSERT INTO `{}`.`{}` ({}) VALUES ({})".format(
                TEST_DB_NAME, table, columns, placeholders),
            insert_list)
testDbConn.commit()
From this GitHub thread, I notice that executemany does not work as expected in psycopg2; it instead sends each entry as a single query. You'll need to use execute_batch:
from psycopg2.extras import execute_batch

execute_batch(testDbCursor,
              "INSERT INTO {}.{} ({}) VALUES ({})".format(
                  TEST_DB_NAME, table, columns, placeholders),
              insert_list)
testDbConn.commit()
How to insert data into a table using Python pymysql
Find my solution below:
import pymysql
import datetime

# Create a connection object
dbServerName = "127.0.0.1"
port = 8889
dbUser = "root"
dbPassword = ""
dbName = "blog_flask"
# charSet = "utf8mb4"
conn = pymysql.connect(host=dbServerName, user=dbUser, password=dbPassword,
                       db=dbName, port=port)
try:
    # Create a cursor object
    cursor = conn.cursor()
    # Insert rows into the MySQL table
    now = datetime.datetime.utcnow()
    my_datetime = now.strftime('%Y-%m-%d %H:%M:%S')
    cursor.execute('INSERT INTO posts (post_id, post_title, post_content, '
                   'filename, post_time) VALUES (%s,%s,%s,%s,%s)',
                   (5, 'title2', 'description2', 'filename2', my_datetime))
    conn.commit()
except Exception as e:
    print("Exception occurred: {}".format(e))
finally:
    conn.close()
I have a function which, when passed database, table and access details, connects to a table in SQL Server and reads all the contents into a pandas dataframe:
def GET_DATA(source_server, source_database, source_table, source_username, source_password):
    print('******* GETTING DATA ', source_server, '.', source_database, '.', source_table, '.', source_username, ' *******')
    data_collected = []
    # SOURCE
    connection = pypyodbc.connect('Driver={ODBC Driver 17 for SQL Server};'
                                  'Server=' + source_server + ';'
                                  'Database=' + source_database + ';'
                                  'uid=' + source_username + ';pwd=' + source_password)
    # OPEN THE CONNECTION
    cursor = connection.cursor()
    # BUILD THE COMMAND
    SQLCommand = ("SELECT * FROM " + source_database + ".dbo." + source_table)
    # RUN THE QUERY
    cursor.execute(SQLCommand)
    # GET RESULTS
    results = cursor.fetchone()
    columnList = [col[0] for col in cursor.description]
    # print(type(columnList))
    while results:
        data_collected.append(results)
        results = cursor.fetchone()
    df_column = pd.DataFrame(columnList)
    df_column = df_column.transpose()
    df_result = pd.DataFrame(data_collected)
    frames = [df_column, df_result]
    df = pd.concat(frames)
    print('GET_DATA COMPLETE!')
    return df
Most of the time this works fine; however, for reasons I can't identify, I get this error:
sequence item 0: expected str instance, bytes found
What is causing this and how do I account for it?
Thanks!
I found a much better way of extracting data from SQL to pandas
import pyodbc
import pandas as pd

def GET_DATA_TO_PANDAS(source_server, source_database, source_table, source_username, source_password):
    print('***** STARTING DATA TO PANDAS ********* ')
    con = pyodbc.connect('Driver={ODBC Driver 17 for SQL Server};'
                         'Server=' + source_server + ';'
                         'Database=' + source_database + ';'
                         'uid=' + source_username + ';pwd=' + source_password)
    # BUILD QUERY
    query = "SELECT * FROM " + source_database + ".dbo." + source_table
    df = pd.read_sql(query, con)
    return df
Used this link - https://www.quora.com/How-do-I-get-data-directly-from-databases-DB2-Oracle-MS-SQL-Server-into-Pandas-DataFrames-using-Python
I experienced a similar issue in one of my projects. This exception was raised by the Microsoft ODBC driver. As far as I can tell, the issue occurs while fetching the results from the DB, probably at the line
cursor.fetchone()
As I understood it, the reason for this exception is the size of the data received from SQL Server. There might be one specific huge row in the DB that's causing it. If a row has Unicode or non-ASCII characters, the driver can exceed its buffer length and fail to convert the nvarchar data to bytes and the bytes object back to a string. When the driver encounters some special characters, it sometimes cannot convert the bytes object back to a string and instead sends the bytes object back to Python; I think that's the reason for the exception.
Maybe digging a bit deeper into that specific data row would help you.
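If it helps, here is a minimal sketch (my own hypothetical helper, not from the original code) for locating the row where the driver starts failing, by fetching one row at a time and counting:

def find_bad_row(cursor, query):
    """Fetch rows one at a time and report how many came back cleanly
    before the driver raised; the next row is the suspect."""
    cursor.execute(query)
    good_rows = 0
    try:
        while True:
            row = cursor.fetchone()
            if row is None:
                return None  # whole result set fetched without error
            good_rows += 1
    except Exception as exc:
        print("Error after %d good rows: %s" % (good_rows, exc))
        return good_rows + 1  # 1-based index of the suspect row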
I also found another similar issue here - Click here
Maybe this URL (the Microsoft ODBC driver's known issues) might help too - Click here
I got the same error using Python 3, as follows: I defined an MS SQL column as nchar, stored an empty string (which in Python 3 is Unicode), then retrieved the row with the pypyodbc call cursor.fetchone(). It failed on this line:
if raw_data_parts != []:
    if py_v3:
        if target_type != SQL_C_BINARY:
            raw_value = ''.join(raw_data_parts)
            # FAILS WITH "sequence item 0: expected str instance, bytes found"
....
Changing the column datatype to nvarchar in the database fixed it.
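For reference, a minimal sketch of applying that fix through the same connection (hypothetical table and column names; pick a length that fits your data):

# Hypothetical names; adjust to your schema.
cursor.execute("ALTER TABLE my_table ALTER COLUMN my_column NVARCHAR(50)")
connection.commit()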
I have a problem with creating an SQL query for an Oracle database using Python.
I want to bind a string variable and it does not work; could you tell me what I am doing wrong?
This is my code:
import cx_Oracle

dokList = []

def LoadDatabase():
    conn = None
    cursor = None
    try:
        conn = cx_Oracle.connect("login", "password", "localhost")
        cursor = conn.cursor()
        query = "SELECT * FROM DOCUMENT WHERE DOC = :param"
        for doknumber in dokList:
            cursor.execute(query, {'doknr': doknumber})
            print(cursor.rowcount)
    except cx_Oracle.DatabaseError as err:
        print(err)
    finally:
        if cursor:
            cursor.close()
        if conn:
            conn.close()

def CheckData():
    with open('changedNamed.txt') as f:
        lines = f.readlines()
        for line in lines:
            dokList.append(line)

CheckData()
LoadDatabase()
The output of cursor.rowcount is 0, but it should be a number greater than 0.
You're using a dictionary ({'doknr' : doknumber}) for your parameter, so it's a named parameter - the :param needs to match the key name. Try this:
query = "SELECT * FROM DOCUMENT WHERE DOC = :doknr"
for doknumber in dokList:
    cursor.execute(query, {'doknr': doknumber})
    print(cursor.rowcount)
For future troubleshooting, to check whether your parameter is getting passed properly, you can also try changing your query to "select :param from dual".
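For example, a quick sanity check along those lines (a small sketch; the bind name and test value are arbitrary):

# Echo the bound value straight back from DUAL to confirm binding works.
cursor.execute("select :param from dual", {'param': 'test-value'})
print(cursor.fetchone())  # expected output: ('test-value',)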
I'm having a problem while trying to simply fetch and print rows from a DB (sqlite3). The DB input has 4 fields, and once entered they're saved. But here's my problem: when I print all 4 rows, if one of the fields was not filled I get an error.
That's the database execute code:
def ids(self):
    con = lite.connect('foo.db')
    with con:
        cur = con.cursor()
        cur.execute("SELECT Id FROM foo")
        while True:
            ids = cur.fetchall()
            if ids == None:
                continue
            return ids
And since there are 4 rows, my output code:
print ''.join(ids[0]) + ',' + ''.join(ids[1]) + ',' + ''.join(ids[2]) \
    + ',' + ''.join(ids[3])
So my question is how to handle the case where a row doesn't exist: show nothing for it and just print the ones that actually exist. I tried doing if ids[0] is not None: # do something, but that would make my code really slow and it's a non-Pythonic way, I guess. Is there any better way to make that work? Any help will be appreciated.
You don't seem to have 4 rows. Make it generic and just join an arbitrary number of rows:
ids = someobject.ids()
print ','.join(''.join(row) for row in ids)
You can simplify your database query; there is no need to 'poll' the query:
def ids(self):
    with lite.connect('foo.db') as con:
        cur = con.cursor()
        cur.execute("SELECT Id FROM foo")
        return cur.fetchall()
You could also just loop directly over the cursor; the database will handle buffering as you fetch:
def ids(self):
    with lite.connect('foo.db') as con:
        cur = con.cursor()
        cur.execute("SELECT Id FROM foo")
        return cur  # just the cursor, no fetching

ids = someobject.ids()
# this'll loop over the cursor, which yields rows as required
print ','.join(''.join(row) for row in ids)
I am trying to update an entry in a table using Python cx_Oracle. The column is named "template" and it has a data type of CLOB.
This is my code:
dsn = cx_Oracle.makedsn(hostname, port, sid)
orcl = cx_Oracle.connect(username + '/' + password + '@' + dsn)
curs = orcl.cursor()
sql = "update mytable set template='" + template + "' where id='6';"
curs.execute(sql)
orcl.close()
When I do this, I get an error saying the string literal is too long. The template variable contains about 26,000 characters. How do I solve this?
Edit:
I found this: http://osdir.com/ml/python.db.cx-oracle/2005-04/msg00003.html
So I tried this:
curs.setinputsizes(value = cx_Oracle.CLOB)
sql = "update mytable set template='values(:value)' where id='6';"
curs.execute(sql, value = template)
and I get an "ORA-01036: illegal variable name/number" error
Edit2:
So this is my code now:
curs.setinputsizes(template = cx_Oracle.CLOB)
sql = "update mytable set template= :template where id='6';"
print sql, template
curs.execute(sql, template=template)
I get an ORA-00911: invalid character error now.
Inserting values directly into SQL statements is very bad practice. You should use bind parameters instead. (Also, drop the trailing semicolon inside the SQL string; that is what triggers ORA-00911, since statements passed to cx_Oracle must not end with one.)
dsn = cx_Oracle.makedsn(hostname, port, sid)
orcl = cx_Oracle.connect(username + '/' + password + '@' + dsn)
curs = orcl.cursor()
curs.setinputsizes(template = cx_Oracle.CLOB)
sql = "update mytable set template= :template where id='6'"
curs.execute(sql, template=template)
orcl.close()
Use IronPython
import sys
sys.path.append(r"...\Oracle\odp.net.11g.64bit")
import clr
clr.AddReference("Oracle.DataAccess")
from Oracle.DataAccess.Client import OracleConnection, OracleCommand, OracleDataAdapter
connection = OracleConnection('userid=user;password=hello;datasource=database_1')
connection.Open()
command = OracleCommand()
command.Connection = connection
command.CommandText = "SQL goes here"
command.ExecuteNonQuery()
Change your table definition. A VARCHAR2 column can store up to 4,000 bytes by default, or up to 32,767 bytes on Oracle 12c and later with MAX_STRING_SIZE=EXTENDED; so, if you're using an 8-bit encoding and your data stays under the limit, you have some room to play with before resorting to LOBs.
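A minimal sketch of that change (an assumption on my part: it requires Oracle 12c or later with MAX_STRING_SIZE=EXTENDED; the table and column names are taken from the question):

# Requires MAX_STRING_SIZE=EXTENDED on the database (Oracle 12c+);
# with the default setting, VARCHAR2 columns max out at 4000 bytes.
curs.execute("alter table mytable modify (template varchar2(32767))")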