Python cx_Oracle insert error

I am trying to insert records from a CSV file into an Oracle 11g database using a Python script.
Initially the application inserts some records successfully, but later it throws this exception: Error <class 'cx_Oracle.DatabaseError'>.
import cx_Oracle

def orcl_proc(sql):
    # Open the database connection
    db = cx_Oracle.connect('username/password@localhost/XE')
    # Prepare a cursor object using the cursor() method
    cursor = db.cursor()
    try:
        # Execute the SQL command
        cursor.execute(sql)
        # Commit the changes to the database
        db.commit()
    except cx_Oracle.DatabaseError as e:
        # Log the error as appropriate
        error, = e.args
        print('Error.code =', error.code)
        print('Error.message =', error.message)
        print('Error.offset =', error.offset)
        # Roll back in case of any error
        db.rollback()
    # Disconnect from the server
    db.close()
    #print('Closed')
Error:
<class 'cx_Oracle.DatabaseError'>
Out of 56,367 records, the Python application was only able to insert 180. Can anybody help me, please? Thanks in advance.

Remove the exception handler from your Python script. Let the Oracle database error propagate, and you'll know exactly what caused it, seeing its Oracle error code and message text.
Then, if you aren't sure what these mean, come back here and post how Oracle responded.
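A minimal sketch of the same routine with the handler removed, reusing the connection string from the question; any cx_Oracle.DatabaseError now surfaces uncaught, with its full ORA- code and message:
import cx_Oracle

def orcl_proc(sql):
    db = cx_Oracle.connect('username/password@localhost/XE')
    cursor = db.cursor()
    cursor.execute(sql)  # a DatabaseError now propagates with its ORA- details
    db.commit()
    db.close()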

An intermittent "ORA-12516: TNS:listener could not find available handler with matching protocol stack" is likely to occur when the rate of connections to the DB is higher than the DB is configured to cope with. Try bumping the 'processes' parameter of the DB. If you need help with that, see the section "Configuring the Database For Testing" in the free PDF http://www.oracle.com/technetwork/topics/php/underground-php-oracle-manual-098250.html
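You can also reduce the connection rate on the client side. A hedged sketch using cx_Oracle's session pooling (the pool sizes are illustrative, credentials as in the question), so each call reuses a pooled connection instead of opening a new one:
import cx_Oracle

# Create the pool once at startup; all workers share its connections
pool = cx_Oracle.SessionPool('username', 'password', 'localhost/XE',
                             min=2, max=5, increment=1)

def orcl_proc(sql):
    db = pool.acquire()   # borrow a pooled connection
    cursor = db.cursor()
    cursor.execute(sql)
    db.commit()
    pool.release(db)      # return it to the pool instead of closing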

I actually found the answer. Thanks to LittleFoot and Christopher Jones.
After removing the exception handler, I got ORA-12516: TNS:listener could not find available handler with matching protocol stack, so I had to increase the database processes and sessions parameters:
alter system set processes=1000 scope=spfile;
alter system set sessions=2248 scope=spfile;
shutdown immediate
startup
Changes made with scope=spfile only take effect after an instance restart, hence the shutdown and startup. After that it worked.
Thanks

Related

Does creating/closing a cursor in "mysql-connector-python" do anything with MySql?

I'm using the mysql-connector-python library for a script I wrote to access a MySQL 8 database:
def get_document_by_id(conn, id):
    cursor = conn.cursor(dictionary=True)
    try:
        cursor.execute("SELECT * FROM documents WHERE id = %s", (id,))
        return cursor.fetchone()
    except Exception as e:
        print(str(e))
        return {}
    finally:
        cursor.close()
Since I need to call this function multiple times during a loop, I was wondering about the following:
Does the act of creating/closing a cursor actually interact with my MySQL database in any way, or is it just used as an abstraction in Python for grouping together SQL queries?
Thanks a lot for your help.
The documentation says that clearly:
For a connection obtained from a connection pool, close() does not actually close it but returns it to the pool and makes it available for subsequent connection requests.
You can also refer to Connector/Python Connection Pooling for further information.
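To make the distinction concrete, here is a hedged sketch using an explicit pool with mysql-connector-python (pool name, size, and credentials are placeholders). Creating and closing a cursor is essentially client-side bookkeeping; it is get_connection() and close() that interact with the pool:
import mysql.connector
from mysql.connector import pooling

pool = pooling.MySQLConnectionPool(
    pool_name="docs_pool",   # illustrative name
    pool_size=5,
    host="localhost", user="me", password="secret", database="mydb",  # placeholders
)

def get_document_by_id(id):
    conn = pool.get_connection()           # borrows a pooled connection
    cursor = conn.cursor(dictionary=True)  # created locally on the client
    try:
        cursor.execute("SELECT * FROM documents WHERE id = %s", (id,))
        return cursor.fetchone()
    finally:
        cursor.close()                     # client-side cleanup
        conn.close()                       # returns the connection to the pool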

How to handle idle in transaction in python for Postgres?

Hello everyone, I have the following issue:
I am trying to run a simple UPDATE query using SQLAlchemy and psycopg2 for Postgres.
The query is:
update = f"""
    UPDATE {requests_table_name}
    SET status = '{status}', {column_db_table} = '{dt.datetime.now()}'
    WHERE request_id = '{request_id}'
"""
and then I commit the changes using:
cursor.execute(update).commit()
But it throws an error that AttributeError: 'NoneType' object has no attribute 'commit'
My connection string is
engine = create_engine(
    f'postgresql://{self.params["user"]}:{self.params["password"]}@{self.params["host"]}:{self.params["port"]}/{self.params["database"]}')
conn = engine.connect().connection
cursor = conn.cursor()
The other thing is that the cursor always shows as <cursor object at 0x00000253827D6D60; closed: 0>.
The connection with the database is OK: I can fetch tables and update them using the pandas to_sql method, but committing via the cursor does not work. It works perfectly with SQL Server, but not with Postgres.
In Postgres, however, every time I run cursor.execute(update).commit(), it creates a PID with the status "idle in transaction" and Client: ClientRead.
I cannot work out whether the problem is in the code or in the database.
I tried different ways of creating a cursor, like raw_connection(), but without result.
I checked for Client: ClientRead with idle in transaction but am not sure how to overcome it.
You have to call commit() on the connection object.
According to the documentation, execute() returns None.
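In other words, the fix is to commit on the connection, not on the return value of execute():
cursor.execute(update)
conn.commit()  # commit() lives on the connection object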
Note that even if you use a context manager like this:
with my_connection.cursor() as cur:
    cur.execute('INSERT INTO ..')
You may find your database processes still getting stuck in the idle in transaction state. The COMMIT is handled at the connection level, as @laurenz-albe said, so you need to wrap that too:
with my_connection as conn:
    with conn.cursor() as cur:
        cur.execute('INSERT INTO ..')
It's spelled out clearly in the documentation, but I still managed to overlook it.
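Putting it together for the question's UPDATE, a minimal sketch assuming a plain psycopg2 connection (the DSN, table, and column names are placeholders); note that psycopg2's with conn commits or rolls back the transaction but does not close the connection:
import datetime as dt
import psycopg2

conn = psycopg2.connect("dbname=mydb user=me password=secret host=localhost")  # placeholder DSN

status, request_id = 'done', 42  # placeholder values

with conn:                        # commits on success, rolls back on error
    with conn.cursor() as cur:
        cur.execute(
            "UPDATE requests SET status = %s, updated_at = %s WHERE request_id = %s",
            (status, dt.datetime.now(), request_id),  # parameters, not f-string interpolation
        )

conn.close()                      # "with conn" leaves the connection open, so close it yourself
Passing query parameters instead of building the statement with an f-string also sidesteps quoting problems and SQL injection.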

Python script to create a backup file of a SQL Server database (with pyodbc)

I am trying to write a Python script to back up a SQL Server database and then restore it into a new database.
The SQL script itself seems to work fine when run in SQL Server:
BACKUP DATABASE TEST_DB
TO DISK = 'D:/test/test_db.BAK';
However, when I try to run it from a Python script, it fails:
con = pyodbc.connect('UID=xx;PWD=xxxxxx', driver='{SQL Server}', server=r'xxxxxx', database='TEST_DB')
sql_cursor = con.cursor()
query = ("""BACKUP DATABASE TEST_DB
            TO DISK = 'D:/test/test_db.BAK';""")
con.autocommit = True
sql_cursor.execute(query)
con.commit()
First of all, if I don't add the line "con.autocommit = True", it fails with the message:
Cannot perform a backup or restore operation within a transaction. (3021)
No idea what a transaction is. I read in another post that the line "con.autocommit = True" removes the error, and indeed it does, but I have no clue why.
Finally, when I run the Python script with con.autocommit set to True, no errors are thrown and the BAK file can be seen temporarily in the expected location ('D:/test/test_db.BAK'), but when the script finishes running, the BAK file disappears. Does anyone know why this is happening?
The solution, as described in this GitHub issue, is to call .nextset() repeatedly after executing the BACKUP statement ...
crsr.execute(backup_statement)
while crsr.nextset():
    pass
... to "consume" the progress messages issued by the BACKUP. If those messages are not consumed before the connection is closed then SQL Server figures that something went wrong and cancels the backup.
SSMS can apparently capture those messages directly, but an ODBC connection must issue calls to the ODBC SQLMoreResults function to retrieve each message, and that's what happens when we call the pyodbc .nextset() method.
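For completeness, a hedged end-to-end sketch combining both fixes, autocommit plus draining the progress messages (driver, server, and credentials are placeholders from the question):
import pyodbc

con = pyodbc.connect('DRIVER={SQL Server};SERVER=xxxxxx;DATABASE=TEST_DB;UID=xx;PWD=xxxxxx')
con.autocommit = True    # BACKUP cannot run inside a transaction
crsr = con.cursor()
crsr.execute("BACKUP DATABASE TEST_DB TO DISK = 'D:/test/test_db.BAK';")
while crsr.nextset():    # consume the progress messages so SQL Server
    pass                 # does not cancel the backup on disconnect
con.close()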

pymysql.err.InterfaceError: (0, '') error when doing a lot of pushes to sql table

I am doing a lot of inserts into a MySQL table in a short period of time from Python code (using pymysql) that uses a lot of different threads.
Each thread, of which there are many, may or may not end up pushing data to a MySQL table.
Here is the code block that causes the issue (it can be called from each running thread):
sql = ("INSERT INTO LOCATIONS (location_id, place_name) VALUES (%s, %s)")
cursor = self.connection.cursor()
cursor.execute(sql, (location_id, place_name))
cursor.close()
and it is specifically this line:
cursor.execute(sql, (location_id, place_name))
That causes this error:
pymysql.err.InterfaceError: (0, '')
Note also that I define self.connection in the __init__ of the class the above block is in, so all threads share one self.connection object but get their own cursor objects.
The error seems to happen randomly and only starts appearing (I think) after doing quite a few inserts into the MySQL table. It is NOT consistent, meaning it does not happen with every single attempt to insert into MySQL.
I have googled this specific error and it seems like it could come from the cursor being closed before the query runs, but I believe it is obvious that I am closing the cursor after the query is executed.
Right now I think this is happening either because of:
Some sort of write limit on the MySQL table, although the pymysql.err.InterfaceError doesn't seem to say this specifically
The fact that I have a connection defined at a high scope, with cursors created from it in multiple threads, which could somehow be causing this problem
Thoughts?
Seems like the issue was related to me having a universal connection object. Creating one per thread seems to have removed the issue.
I got the same problem. There is a global connection in my project's code, and I found that this connection times out if there is no MySQL operation for a long time. The error then occurs when executing SQL tasks, because of the timed-out connection.
My solution is: reconnect to MySQL before executing SQL tasks.
sql = ("INSERT INTO LOCATIONS (location_id, place_name) VALUES (%s, %s)")
self.connection.ping()  # reconnects to MySQL if the connection has dropped
with self.connection.cursor() as cursor:
    cursor.execute(sql, (location_id, place_name))
I got the same error; then I found that PyMySQL connections are not thread-safe, so you need to open a connection for every thread, as @sometimesiwritecode said.
Source: https://github.com/PyMySQL/PyMySQL/issues/422
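A minimal sketch of the one-connection-per-thread approach, assuming the INSERT from the question and placeholder credentials:
import threading
import pymysql

def insert_location(location_id, place_name):
    # Each thread opens (and closes) its own connection
    conn = pymysql.connect(host='localhost', user='me', password='secret',
                           database='mydb')  # placeholder credentials
    try:
        with conn.cursor() as cursor:
            cursor.execute(
                "INSERT INTO LOCATIONS (location_id, place_name) VALUES (%s, %s)",
                (location_id, place_name),
            )
        conn.commit()
    finally:
        conn.close()

threads = [threading.Thread(target=insert_location, args=(i, 'place-%d' % i))
           for i in range(10)]
for t in threads:
    t.start()
for t in threads:
    t.join()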
Don't close the connection: remove the cursor.close() line and your code should keep updating the database.
After upgrading to a newer PyMySQL version, I was suddenly getting the same error, but without doing a lot of queries. I was also getting an additional error:
[..]
pymysql.err.InterfaceError: (0, '')
During handling of the above exception, another exception occurred:
pymysql.err.Error: Already closed
Since this appears to be the only real place where this error is being discussed, I'll post my solution here too.
Per the PyMySQL documentation, I was doing this:
connection = pymysql.connect([...])

with connection:
    with connection.cursor() as cursor:
        [..]
        cursor.execute(sql)

# lots of other code

with connection:
    [...]
What I failed to notice is that with connection automatically closes the connection to the database when the context manager finishes executing. So subsequent queries would fail, even though I was still able to get a cursor from the connection without error.
The solution was to not use the with connection context manager, but to close the database connection manually:
connection = pymysql.connect([...])

with connection.cursor() as cursor:
    [..]
    cursor.execute(sql)

# lots of other code

with connection.cursor() as cursor:
    [..]

connection.close()

Mysql Database Insert with Python via MySQLdb

This is super basic, but I cannot seem to get it to work correctly; most of the querying I've done in Python has been with the Django ORM.
This time I'm just looking to do a simple insert of data with Python MySQLdb. I currently have:
phone_number = toasted_tree.xpath('//b/text()')
try:
    # The query to execute.
    connector.execute("""INSERT INTO mydbtable(phone_number) VALUES(%s) """, (phone_number))
    conn.commit()
    print 'success!'
except:
    conn.rollback()
    print 'failure'
conn.close()
The issue is, it keeps hitting the except block. I've triple-checked my connection settings to MySQL and ran a test query directly against MySQL, like INSERT INTO mydbtable(phone_number) VALUES(1112223333); and it works fine.
Is my syntax above wrong?
Thank you
We can't tell what the problem is, because your except block is catching and swallowing all exceptions. Don't do that.
Remove the try/except and let Python report what the problem is. Then, if it's something you can deal with, catch that specific exception and add code to do so.
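Once the real error is visible, you can catch just the database exception and still roll back. A minimal sketch, assuming MySQLdb and the names from the question, and keeping its Python 2 print syntax:
import MySQLdb

try:
    connector.execute("""INSERT INTO mydbtable(phone_number) VALUES(%s)""", (phone_number,))
    conn.commit()
    print 'success!'
except MySQLdb.Error as e:
    conn.rollback()
    print 'failure: %s' % e  # the underlying MySQL error is no longer swallowed
conn.close()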
