Python3 ODBC Execute Many

I am trying to copy data from one Oracle table to another in a different schema using the Python pyodbc library. Here is what I'm doing.
source = SomeString  # connection string for the source Oracle table
target = SomeString  # connection string for the target Oracle table
I connect to the source to retrieve the data:
source_data = pyodbc.connect(source)
source_cursor = source_data.cursor()
Then I connect to the target data source:
target_conn = pyodbc.connect(target)
target_cursor = target_conn.cursor()
I now declare my source query:
source_query = "SELECT * FROM TABLE WHERE TYPE = X"
I load the data into a DataFrame and then convert it to a list:
data = pd.read_sql(source_query, source_data)
data = data.values.tolist()
I am now trying to insert the rows in my "data" list into the target table. I declare an insert statement and then run executemany as follows:
sql = "INSERT INTO SCHEMA.TABLE (column1, column2, ...) VALUES (?, ?, ...)"
Now, since I have my data and target connection established, I execute the following:
target_cursor.executemany(sql, data)
I get the error below, and the weird part is that the code inserts one row into the new table properly before it fails and nothing else happens.
Can you please guide me on how to fix this?
I get the following error:
C:\WinPy3770x64\python-3.7.7.amd64\lib\encodings\utf_16_le.py in decode(input, errors)
15 def decode(input, errors='strict'):
---> 16 return codecs.utf_16_le_decode(input, errors, True)
17
UnicodeDecodeError: 'utf-16-le' codec can't decode bytes in position 184-185: illegal encoding
The above exception was the direct cause of the following exception:
SystemError Traceback (most recent call last)
<ipython-input-70-ec2d225d6132> in <module>
----> 1 target_cursor.executemany(sql_statement, data)
SystemError: <class 'pyodbc.Error'> returned a result with an error set
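One avenue worth trying, borrowed from the conn.setencoding('utf-8') answer to the pypyodbc question further down this page: pin the connection's text encodings before creating the cursor, and replace pandas NaN values with None so pyodbc binds them as NULL. This is a minimal sketch under those assumptions, not a confirmed fix; the connection strings, table, and column names are the question's placeholders:
import pyodbc
import pandas as pd

source_conn = pyodbc.connect(source)   # source, target: the question's connection strings
target_conn = pyodbc.connect(target)

# Pin encodings before creating a cursor (pyodbc 4.x connection API).
target_conn.setdecoding(pyodbc.SQL_CHAR, encoding='utf-8')
target_conn.setencoding(encoding='utf-8')
target_cursor = target_conn.cursor()

data = pd.read_sql("SELECT * FROM TABLE WHERE TYPE = X", source_conn)
# NaN values from pandas can trip up parameter binding; send None (NULL) instead.
data = data.where(pd.notnull(data), None)

sql = "INSERT INTO SCHEMA.TABLE (column1, column2) VALUES (?, ?)"
target_cursor.executemany(sql, data.values.tolist())
target_conn.commit()
Since exactly one row went in before the failure, printing the second element of the list is also a quick way to see which value carries the undecodable bytes.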

Related

How do I retrieve image from Informix and Oracle database in Python?

I am writing a Python script to get image data (BLOBs) from an Informix/Oracle database and upload the images to AWS S3. Part of my code is below:
try:
    cur = conn.cursor()  # conn: an existing Informix/Oracle connection
    cur.execute(sql)
    for row in cur:
        client = row[0].strip()    # original used a trim() helper
        date = row[1].strip()
        filename = row[2].strip()
        imageblob = row[3].read()
        write_file(filename, imageblob)
except Exception as e:
    print("Error:", type(e))
I got the following error (Informix case):
Error: <class '_informixdb.InterfaceError'>
Traceback (most recent call last):
File "UploadImagesToS3.py", line 57, in getImageFromDB
imageblob = row[3].read()
InterfaceError: Sblob is not open
Could anyone help? The code needs to be compatible with both Informix and Oracle. Thanks.
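The "Sblob is not open" part is Informix-specific, but for the S3 half of the task (not shown in the question) a minimal boto3 upload sketch might look like this; the bucket name and key layout are hypothetical, not from the question:
import boto3

s3 = boto3.client('s3')

def upload_image(filename, imageblob, bucket='my-image-bucket'):
    # Bucket and key are placeholders; imageblob is the bytes read from the BLOB column.
    s3.put_object(Bucket=bucket, Key=filename, Body=imageblob)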

How do I fix this error in my program? I use Python 3.6

How do I fix this error? I use Python 3.6 and an Oracle database. I want to print data from the database, but it shows errors. How can I fix it?
This is my database:
EMPLOYEESID(Primary Key), NIK(Unique Key)
This is my code:
# import oracle module
import cx_Oracle
# create a connection to the oracle database, adjusting the settings as needed
con = cx_Oracle.connect("db_employees/root@localhost:1521/xe")
# initialise the cursor object
cur = con.cursor()
# execute the query
cur.execute('select * from employees')
# fetch the data from the query
rows = cur.fetchall()
# print the data
for row in rows:
    print('\nNIK : '+row[0])
    print('Nama Karyawan : '+row[1])
    print('Jabatan : '+row[3])
    print('Birthdate : '+row[4])
    print('Address : '+row[5]+'\n')
# close cursor object
cur.close()
# close connection
con.close()
This is my error message:
C:\Python36>python "D:\bisa.py"
Traceback (most recent call last):
File "D:\bisa.py", line 18, in <module>
print('\nNIK : '+row[0])
TypeError: must be str, not int
It is better to use string formatting instead of concatenation, as you can't add a string to a number. Change the following line
print('\nNIK : '+row[0])
to
print('\nNIK : {}'.format(row[0]))
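The same fix applies to every concatenated column; since the asker is on Python 3.6, f-strings work too:
for row in rows:
    print(f'\nNIK : {row[0]}')
    print(f'Nama Karyawan : {row[1]}')
    print(f'Jabatan : {row[3]}')
    print(f'Birthdate : {row[4]}')
    print(f'Address : {row[5]}\n')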

What's the cause of this UnicodeDecodeError with an nvarchar field using pyodbc and MSSQL?

I can read from a MSSQL database by sending queries in python through pypyodbc.
Mostly unicode characters are handled correctly, but I've hit a certain character that causes an error.
The field in question is of type nvarchar(50) and begins with this character "􀄑" which renders for me a bit like this...
-----
|100|
|111|
-----
If that number is hex 0x100111, then it's the character Supplementary Private Use Area-B U+100111. Interestingly, though, if it's binary 0b100111, then it's an apostrophe. Could it be that the wrong encoding was used when the data was uploaded? This field is storing part of a Chinese postal address.
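Both readings of that speculation are easy to check in a Python 3 shell:
print(chr(0b100111))            # 0b100111 == 39, which is indeed an apostrophe: '
print(hex(ord('\U00100111')))   # 0x100111, a valid Supplementary Private Use Area-B code point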
The error message includes
UnicodeDecodeError: 'utf16' codec can't decode bytes in position 0-1: unexpected end of data
Here it is in full...
Traceback (most recent call last):
  File "question.py", line 19, in <module>
    results.fetchone()
  File "/VIRTUAL_ENVIRONMENT_DIR/local/lib/python2.7/site-packages/pypyodbc.py", line 1869, in fetchone
    value_list.append(buf_cvt_func(from_buffer_u(alloc_buffer)))
  File "/VIRTUAL_ENVIRONMENT_DIR/local/lib/python2.7/site-packages/pypyodbc.py", line 482, in UCS_dec
    uchar = buffer.raw[i:i + ucs_length].decode(odbc_decoding)
  File "/VIRTUAL_ENVIRONMENT_DIR/lib/python2.7/encodings/utf_16.py", line 16, in decode
    return codecs.utf_16_decode(input, errors, True)
UnicodeDecodeError: 'utf16' codec can't decode bytes in position 0-1: unexpected end of data
Here's some minimal reproducing code...
import pypyodbc

connection_string = (
    "DSN=sqlserverdatasource;"
    "UID=REDACTED;"
    "PWD=REDACTED;"
    "DATABASE=obi_load")
connection = pypyodbc.connect(connection_string)
cursor = connection.cursor()

query_sql = (
    "SELECT address_line_1 "
    "FROM address "
    "WHERE address_id = 'REDACTED' ")
with cursor.execute(query_sql) as results:
    row = results.fetchone()  # This is the line that raises the error.
    print row
Here is a chunk of my /etc/freetds/freetds.conf
[global]
; tds version = 4.2
; dump file = /tmp/freetds.log
; debug flags = 0xffff
; timeout = 10
; connect timeout = 10
text size = 64512
[sqlserver]
host = REDACTED
port = 1433
tds version = 7.0
client charset = UTF-8
I've also tried with client charset = UTF-16 and omitting that line altogether.
Here's the relevant chunk from my /etc/odbc.ini
[sqlserverdatasource]
Driver = FreeTDS
Description = ODBC connection via FreeTDS
Trace = No
Servername = sqlserver
Database = REDACTED
Here's the relevant chunk from my /etc/odbcinst.ini
[FreeTDS]
Description = TDS Driver (Sybase/MS SQL)
Driver = /usr/lib/x86_64-linux-gnu/odbc/libtdsodbc.so
Setup = /usr/lib/x86_64-linux-gnu/odbc/libtdsS.so
CPTimeout =
CPReuse =
UsageCount = 1
I can work around this issue by fetching results in a try/except block and throwing away any rows that raise a UnicodeDecodeError, but is there a proper solution? Can I throw away just the undecodable character, or is there a way to fetch this row without raising an error?
It's not inconceivable that some bad data has ended up on the database.
I've Googled around and checked this site's related questions, but have had no luck.
I fixed the issue myself by calling
conn.setencoding('utf-8')
immediately before creating a cursor, where conn is the connection object.
I was fetching tens of millions of rows with fetchall(), and in the middle of a transaction that would be extremely expensive to undo manually, so I couldn't afford to simply skip invalid ones.
Source where I found the solution: https://github.com/mkleehammer/pyodbc/issues/112#issuecomment-264734456
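In context, the fix might look like this (a sketch; setencoding with these arguments is the pyodbc 4.x connection method described in that issue):
import pyodbc

conn = pyodbc.connect(connection_string)
# Tell the driver how to encode outgoing text before any cursor exists.
conn.setencoding('utf-8')
cursor = conn.cursor()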
This problem was eventually worked around; I suspect that text in one encoding was hammered into a field with a different declared encoding through some hacky method when the table was being set up.

Python: Execute Stored Procedure with Parameters

I'm working on a Python script that writes records from a stored procedure to a text file. I'm having issues executing the stored procedure with parameters.
I'm not sure what I could do differently to execute this stored procedure with both parameters. You can see the error I'm getting below.
Any insight would be appreciated.
Here's my code:
# Import Python ODBC module
import pyodbc

# Create connection
cnxn = pyodbc.connect(driver="{SQL Server}", server="<server>", database="<database>",
                      uid="<username>", pwd="<password>")
cursor = cnxn.cursor()

# Execute stored procedure
storedProc = "exec database..stored_procedure('param1', 'param2')"

# Loop through records
for irow in cursor.execute(storedProc):
    # Create a new text file for each ID
    myfile = open('c:/Path/file_' + str(irow[0]) + '_' + irow[1] + '.txt', 'w')
    # Write retrieved records to text file
    myfile.write(irow[2])
    # Close the file
    myfile.close()
Here's the error:
Traceback (most recent call last):
  File "C:\Path\script.py", line 12, in <module>
    for irow in cursor.execute(storedProc):
pyodbc.ProgrammingError: ('42000', "[42000] [Microsoft][ODBC SQL Server Driver][SQL Server]Incorrect syntax near 'param1'. (102) (SQLExecDirectW)")
I was able to fix the syntax error by removing the parentheses from the query string.
# Execute stored procedure
storedProc = "exec database..stored_procedure('param1', 'param2')"
should be
# Execute stored procedure
storedProc = "exec database..stored_procedure 'param1','param2'"
This worked for me:
query = "EXEC [store_proc_name] @param1='param1', @param2='param2'"
cursor.execute(query)
For SQL Server:
cursor.execute('{call your_sp (?)}', var_name)
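A variant worth noting, combining the answers above: let the driver bind the parameters instead of inlining them in the SQL string, which sidesteps quoting problems entirely. A sketch reusing the question's placeholder names:
# Parameter markers are bound by the driver, so no manual quoting is needed.
storedProc = "exec database..stored_procedure ?, ?"
for irow in cursor.execute(storedProc, ('param1', 'param2')):
    print(irow)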

psycopg2.DataError: invalid byte sequence for encoding "UTF8": 0x00

I have some bulk PostgreSQL 9.3.9 data inserts that I do in Python 3.4. I've been using SQLAlchemy, which works fine for normal data processing. For a while I've been using psycopg2 so as to utilize the copy_from function, which I found faster for bulk inserts. The issue I have is that when using copy_from, the bulk inserts fail when the data contains some special characters. When I remove the highlighted line, the insert runs successfully.
Error:
Traceback (most recent call last):
  File "/vagrant/apps/data_script/data_update.py", line 1081, in copy_data_to_db
    'surname', 'other_name', 'reference_number', 'balance'), sep="|", null='None')
psycopg2.DataError: invalid byte sequence for encoding "UTF8": 0x00
CONTEXT:  COPY source_file_raw, line 98: "94|1|99|2015-09-03 10:17:34|False|True|John|Doe|A005-001\008020-01||||||..."
Code producing the error:
cursor.copy_from(data_list, 'source_file_raw',
                 columns=('id', 'partner_id', 'pos_row', 'loaded_at', 'has_error',
                          'can_be_loaded', 'surname', 'other_name',
                          'reference_number', .............),
                 sep="|", null='None')
The db connection:
import psycopg2
import psycopg2.extras

# (excerpt from a helper function that returns the cursor)
pg_conn_string = "host='%s' port='%s' dbname='%s' user='%s' password='%s'" \
                 % (con_host, con_port, con_db, con_user, con_pass)
conn = psycopg2.connect(pg_conn_string)
conn.set_isolation_level(0)
if cursor_type == 'dict':
    cursor = conn.cursor(cursor_factory=psycopg2.extras.DictCursor)
else:
    cursor = conn.cursor()
return cursor
So the baffling thing is that SQLAlchemy can do the bulk inserts even when those "special characters" are present, but using psycopg2 directly fails. I'm thinking there must be a way for me to escape this, or to tell psycopg2 to find a smart way to do the insert. Or am I missing a setting somewhere?
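One common workaround for the 0x00 error (not confirmed against this exact setup): PostgreSQL rejects NUL bytes in text columns regardless of encoding, so strip them from the buffer before handing it to copy_from. A sketch, assuming data_list is the file-like object from the question and using an abbreviated column list:
import io

def strip_nulls(buf):
    # PostgreSQL will not accept 0x00 in text fields even in valid UTF-8,
    # so remove NUL characters before COPY. buf is a file-like text buffer.
    return io.StringIO(buf.read().replace('\x00', ''))

cursor.copy_from(strip_nulls(data_list), 'source_file_raw',
                 columns=('id', 'partner_id', 'surname'),  # abbreviated
                 sep="|", null='None')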
