cx_Oracle.DatabaseError: ORA-14411 - python

I ran an ALTER TABLE query to add some columns to a table and then db.commit(). That didn't raise any error or warning, but in Oracle SQL Developer the new columns don't show up on SELECT * ....
So I tried to rerun the ALTER TABLE, but it raised
cx_Oracle.DatabaseError: ORA-14411: The DDL cannot be run concurrently with other DDLs
That kinda makes sense (I can't create columns that already exist) but when I try to fill the new column with values, I get a message
SQL Error: ORA-00904: "M0010": invalid identifier
00904. 00000 - "%s: invalid identifier"
which suggests that the new column has not been created yet.
Does anybody understand what may be going on?
UPDATE/SOLVED: I kept rerunning the queries a couple more times and at some point things suddenly started working, for no apparent reason. Maybe processing time? That would be odd, because the queries are ultra light. I'll get back to this if it happens again.

First, you don't need the commit; DDL implicitly commits any open transaction.
ORA-14411 means
Another conflicting DDL was already running.
so it seems that your first ALTER TABLE statement hadn't finished yet (perhaps the table is very big, or something else is holding it up).
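If the conflicting DDL will finish on its own, a short retry loop is one way to cope from Python. A minimal sketch, assuming an open cx_Oracle connection named conn; the column definition, attempt count, and delay are illustrative:

import time
import cx_Oracle

def run_ddl_with_retry(conn, ddl, attempts=5, delay=2.0):
    # Retry a DDL statement while ORA-14411 (conflicting DDL) is raised.
    cursor = conn.cursor()
    for _ in range(attempts):
        try:
            cursor.execute(ddl)  # DDL commits implicitly; no conn.commit() needed
            return
        except cx_Oracle.DatabaseError as exc:
            error, = exc.args
            if error.code != 14411:  # some other problem: re-raise it
                raise
            time.sleep(delay)  # conflicting DDL still running; wait and retry
    raise RuntimeError("DDL still blocked after %d attempts" % attempts)

run_ddl_with_retry(conn, "ALTER TABLE my_table ADD (M0010 NUMBER)")

This would also explain the original symptoms: while the first ALTER TABLE was still in flight, rerunning it hit ORA-14411 and the UPDATE hit ORA-00904, and both cleared once the DDL completed.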

How to partially rollback?

I currently populate a database from a third-party API; this involves downloading files containing multiple SQL INSERT/DELETE/UPDATE statements and then parsing them into SQLAlchemy ORM objects to load into my database.
These files often contain errors, so I've tried to build in some integrity checks. The one I'm currently struggling with is duplicate records: receiving a file that inserts a record that already exists. To guard against this I put a unique index on the fields that form a composite primary key. However, this means I get an error when a file contains an SQL statement that would duplicate a record and a flush or commit is subsequently issued.
I don't want to commit records to the database until all the SQL statements for a given file have been processed, so I can keep track of what's been processed. I was thinking I could issue a flush at the end of processing each statement and then add some error handling for when it fails because of a duplicate record, which would include bypassing the offending statement. However, as I understand the docs, issuing a rollback would cancel all the statements processed up to that point, when I only want to skip the duplicate one.
Is there an option to partially roll back in some way, or do I need to build a check up front that queries the database to see whether executing an SQL statement would create a duplicate record?
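For what it's worth, SQLAlchemy exposes exactly this kind of partial rollback through SAVEPOINTs via Session.begin_nested(). A minimal sketch, assuming SQLAlchemy 1.4+ (where begin_nested() works as a context manager), an existing Session named session, and hypothetical names parsed_records and log_duplicate:

from sqlalchemy.exc import IntegrityError

for record in parsed_records:           # ORM objects parsed from one file
    try:
        with session.begin_nested():    # SAVEPOINT around each statement
            session.add(record)
            session.flush()             # hits the unique index now, not at commit
    except IntegrityError:
        # Duplicate record: only the savepoint is rolled back; everything
        # processed earlier in this file is still pending.
        log_duplicate(record)
session.commit()                        # one real commit per file

On an IntegrityError the context manager rolls back to the savepoint and the session stays usable, so you skip just the offending statement without an up-front existence query.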

AWS RDS MySQL Database not taking INSERT statements from pymysql

So basically what the title says: I can connect to the database and run the exact same queries with Sqlectron from my computer (where I am running the script). The script doesn't give me errors either, and SELECT queries from the script work fine.
What's more, when I run the same INSERT statement from Sqlectron, my id column (which has AUTO_INCREMENT enabled) jumps to later values, as if the script's rows had actually been inserted, but those rows never show up.
Here is the code snippet:
sql = """INSERT INTO liftdb.lifts
         (Date, lift, weight)
         VALUES (%s, %s, %s)"""
cur.execute(sql, (date, event['Lift'], event['Weight']))
Again, no errors or indication something went wrong.
If your code is acting like there's been a change (the auto-increment advancing) yet you can't see any new rows, that's a strong indication that you forgot to commit the changes to the database (something like conn.commit() on the next line down; note that in pymysql, commit() belongs to the connection object, not the cursor).
Assuming you have committed the changes, it could also be that whatever software you're using to check the database needs a refresh in order to show the changes.
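A minimal sketch of the snippet above with the commit in place, reusing date and event from the question; the connection parameters are assumptions:

import pymysql

conn = pymysql.connect(host='your-rds-endpoint', user='user',
                       password='password', database='liftdb')
cur = conn.cursor()

sql = """INSERT INTO liftdb.lifts (Date, lift, weight)
         VALUES (%s, %s, %s)"""
cur.execute(sql, (date, event['Lift'], event['Weight']))
conn.commit()  # without this, the row is discarded when the connection closes

Alternatively, passing autocommit=True to pymysql.connect() makes every execute commit immediately. The jumping AUTO_INCREMENT fits this diagnosis: the uncommitted inserts consumed ids before being rolled back.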

Is it possible to auto-repair (on the fly) a corrupted MySQL table in Django upon exception?

I am using Django with MySQL as part of a product. Occasionally a table gets corrupted, and when accessing it I get the exception:
InternalError: (145, "Table 'some_table' is marked as crashed and should be repaired")
I then need to run an SQL script that uses the REPAIR TABLE command to fix the issue.
My questions are:
Is there a Django mechanism that will detect this issue, run "REPAIR TABLE some_table", print a notification, and then retry the operation that failed?
If not, is it reasonable to wrap Django interface functions like filter, save, etc. with a decorator?
Of course, if the operation fails again after the repair, I want to print something rather than keep retrying the db operation.
I'd appreciate any answer, especially one with an elaborated Python example.
Thanks in advance.
I appreciate that this would be an incredibly DevOps feature to have, but I dare say that if your table is randomly getting corrupted and the extent of your caring is to run REPAIR TABLE some_table; when it errors out, you should probably skip straight to
ALTER TABLE some_table ENGINE=BLACKHOLE;
That will be a lot more robust on an otherwise unstable system.
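Sarcasm aside (the BLACKHOLE engine silently discards every write), the question did ask for an elaborated Python example. A minimal sketch of the decorator idea, assuming MySQL error code 145 for a crashed table; repair_and_retry and the table-name regex are hypothetical, not a built-in Django mechanism:

import re
from functools import wraps

from django.db import connection
from django.db.utils import InternalError

CRASHED = re.compile(r"Table '(?P<table>[^']+)' is marked as crashed")

def repair_and_retry(func):
    # Repair the crashed table once, notify, then retry the operation.
    @wraps(func)
    def wrapper(*args, **kwargs):
        try:
            return func(*args, **kwargs)
        except InternalError as exc:
            match = CRASHED.search(str(exc))
            if not (exc.args and exc.args[0] == 145 and match):
                raise
            table = match.group("table")  # name comes from MySQL's own message
            print("Repairing crashed table %s" % table)
            with connection.cursor() as cursor:
                cursor.execute("REPAIR TABLE `%s`" % table)
            return func(*args, **kwargs)  # retry once; a second failure propagates
    return wrapper

Because the ORM's filter and save are spread across managers and model instances, in practice you would wrap your own data-access functions with @repair_and_retry rather than patching Django internals.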

Inserting into SQL with pyodbc from remote computer

Hello, I am using my second computer to gather some data and insert it into an SQL database. I set everything up for reading and writing the database remotely, and I can insert new rows just fine using plain SQL.
With pyodbc I can read tables, but when I insert new data, nothing happens: no error message, but also no new rows in the table.
I wonder if anyone has faced this issue before and knows the solution.
cursor.execute() runs the statement, but pyodbc keeps autocommit off by default, so the INSERT happens inside an open transaction. You must then call cursor.commit() (or connection.commit()) for the records to actually be persisted in your table. Likewise for a DELETE statement, you need to commit as well.
Without more to go on here, I can only assume that you are not committing the insert.
Notice, similarly, that when you run cursor.execute("""select * from yourTable"""), you need to call cursor.fetchall() (or another fetch method) to actually retrieve and view the results of your query.
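A minimal sketch of that pattern; the DSN, table, column names, and values are assumptions:

import pyodbc

conn = pyodbc.connect("DSN=your_dsn;UID=user;PWD=password")
cur = conn.cursor()

val1, val2 = 1, "example"  # illustrative values
cur.execute("INSERT INTO your_table (col1, col2) VALUES (?, ?)", (val1, val2))
conn.commit()  # or cur.commit(); either commits the connection's transaction

cur.execute("SELECT * FROM your_table")
rows = cur.fetchall()  # execute alone does not hand you the result rows

If you would rather not commit explicitly, pyodbc.connect(..., autocommit=True) commits each statement as it runs.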

Python | MySQLdb -> I can't insert

I have an odd problem.
There are two databases. One holds all the products for an online shop. The other holds the online-shop software and only the active products.
I want to take the values from db1, change some of them in Python, and insert them into db2.
db1.execute("SELECT * FROM table1")
for product in db1.fetchall():
    # ... change some values ...
    db2.execute("INSERT INTO table2 ...")
    print "Check: " + str(db2.countrow)
So I can get the values via SELECT; even selecting from db2 is no problem. My check always gives me 1, BUT there are no new rows in table2. The auto-increment value grows, but there is no data.
And I don't even get an error like "couldn't insert".
So, does anybody have an idea what could be wrong? (If I do the insert manually via phpMyAdmin it works, and if I take the SQL from my script and run it manually, the statements work as well.)
EDIT: Found the answer here: How do I check if an insert was successful with MySQLdb in Python?
Is there a way to run these executes without committing every time? I have a wrapper around MySQLdb that works perfectly for me, and changing the commit behaviour would require some big changes.
EDIT2: OK, I found out that db1 is MyISAM (which works without committing) and db2 is InnoDB (which apparently only works with committing). I guess I have to change db2 to MyISAM as well.
Try adding db2.commit() after the inserts if you're using InnoDB.
Starting with 1.2.0, MySQLdb disables autocommit by default, as required by the DB-API standard (PEP-249). If you are using InnoDB tables or some other type of transactional table type, you'll need to do connection.commit() before closing the connection, or else none of your changes will be written to the database.
http://mysql-python.sourceforge.net/FAQ.html#my-data-disappeared-or-won-t-go-away
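To the follow-up question about skipping the per-statement commits: MySQLdb connections expose an autocommit() switch, so you can re-enable the pre-1.2.0 behaviour instead of converting db2 to MyISAM (and losing transactions entirely). A short sketch; the connection parameters and column names are assumptions:

import MySQLdb

conn2 = MySQLdb.connect(host="localhost", user="user",
                        passwd="password", db="shop")
conn2.autocommit(True)  # every execute is committed immediately
db2 = conn2.cursor()
# illustrative columns; the question elides the real INSERT statement
db2.execute("INSERT INTO table2 (name, price) VALUES (%s, %s)",
            ("widget", 9.99))  # visible at once, no commit() call needed

Keeping InnoDB with autocommit (or a single commit per batch) is usually a better trade-off than switching the table to MyISAM.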
