So basically what the title says: I can connect to the database and run the exact same queries with SQLelectron from my computer (where I am running the script). The script doesn't give me errors either, and SELECT queries from the same script work fine.
What's more, my id column (on which I have AUTO_INCREMENT enabled) seems to jump to later values when I make the same INSERT statement from SQLelectron, as if my script had actually inserted the row, but the row never actually appears.
Here is the code snippet:
sql = ['INSERT INTO liftdb.lifts',
       '(Date, lift, weight)',
       'VALUES',
       '(%s, %s, %s)']
cur.execute(' '.join(sql), (date, event['Lift'], event['Weight']))
Again, no errors or indication something went wrong.
If your code is acting like there's been a change (the AUTO_INCREMENT value increases) yet you can't see any new rows, that's a strong indication that you may have forgotten to commit the changes to the database (something like connection.commit() on the line after the execute; note that in the DB-API the commit method lives on the connection object, not the cursor).
Assuming you have committed the changes, it could also be that whatever software you're using to check whether the database has changed needs to be refreshed in order to show the changes.
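For example, a minimal sketch of the committed version (assuming the connection object the cursor came from is called conn):
sql = ' '.join(['INSERT INTO liftdb.lifts',
                '(Date, lift, weight)',
                'VALUES',
                '(%s, %s, %s)'])
cur.execute(sql, (date, event['Lift'], event['Weight']))
conn.commit()  # without this the insert is rolled back when the connection closes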
I am using Django with MySQL as part of a product. Occasionally some table gets corrupted, and when accessing the table I get the exception:
InternalError: (145, "Table 'some_table' is marked as crashed and should be repaired")
I then need to run an SQL script that uses the REPAIR TABLE command to fix the issue.
My questions are:
Is there a Django mechanism that will detect this issue, run "REPAIR TABLE some_table", print a notification and then retry the operation that failed?
If not, is it reasonable to wrap Django interface functions like filter, save, etc. in a decorator?
Of course, if the operation fails again after the repair, I want to print something rather than keep retrying the DB operation.
I appreciate any answer especially one with an elaborated python example.
Thanks in advance.
I appreciate that this would be an incredible DevOps feature to have, but I dare say that if your table is randomly getting corrupted and the extent of your caring is to run REPAIR TABLE some_table; when it errors out, you should probably skip straight to
ALTER TABLE some_table ENGINE=BLACKHOLE;
That will be a lot more robust on an otherwise unstable system.
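That said, a rough sketch of the decorator approach the question asks about could look like this. It is an assumption-heavy illustration, not a tested recipe: the error-code check, the table name handling and the wrapped function are made up for the example, and it assumes a Django version where django.db re-exports the DB-API exceptions.
from functools import wraps
from django.db import connection, InternalError

def repair_on_crash(table_name):
    # Hypothetical helper: retry a DB operation once after running REPAIR TABLE.
    def decorator(func):
        @wraps(func)
        def wrapper(*args, **kwargs):
            try:
                return func(*args, **kwargs)
            except InternalError as exc:
                if exc.args and exc.args[0] == 145:  # "marked as crashed and should be repaired"
                    print("Repairing %s and retrying..." % table_name)
                    with connection.cursor() as cur:
                        cur.execute("REPAIR TABLE %s" % table_name)
                    return func(*args, **kwargs)  # if it fails again, the exception propagates
                raise
        return wrapper
    return decorator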
Hello, I am using my second computer to gather some data and insert it into an SQL database. I have set up everything for reading and writing the database remotely, and I can insert new rows just by using plain SQL.
With pyodbc I can read tables, but when I insert new data, nothing happens: no error message, but also no new rows in the table.
I wonder if anyone has faced this issue before and knows what the solution is.
The cursor.execute() method runs the SQL statement, but the change is not committed. Since this is an INSERT statement, you must call cursor.commit() (which commits the transaction on the underlying connection) for the records to actually be written to your table. Likewise for a DELETE statement, you need to commit as well.
Without more perspective here, I can only assume that you are not committing the insert.
Notice, similarly, that when you run cursor.execute("""select * from yourTable"""), you need to call cursor.fetchall() or another fetch method to actually retrieve and view the results of your query.
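A minimal sketch of a committed insert with pyodbc (the connection string, table and column names here are made up for illustration):
import pyodbc

conn = pyodbc.connect("DRIVER={ODBC Driver 17 for SQL Server};SERVER=myhost;DATABASE=mydb;UID=user;PWD=secret")
cur = conn.cursor()
cur.execute("INSERT INTO measurements (taken_at, value) VALUES (?, ?)", ("2020-01-01", 42.0))
conn.commit()  # or cur.commit(); without this the new row never becomes visible to other sessions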
I am using Python with psycopg2 module to get data from Postgres database.
The database is quite large (tens of GB).
Everything appears to be working, I am creating objects from the fetched data.
However, after roughly 160,000 created objects I get the following error:
I suppose the reason is the amount of data, but I could not get anywhere searching for a solution online. I am not aware of using any proxy and have never used any on this machine before, the database is on localhost.
It's interesting how often the "It's a local server so I'm not open to SQL injection" stance leads to people thinking that string interpolation is somehow easier than a parameterized query. In your case it's ended up with:
'... cookie_id = \'{}\''.format(cookie)
So you've ended up with something that's less legible and also fails (though from the specific error I don't know exactly how). Use parameterization:
cursor.execute("SELECT user_id, created_at FROM cookies WHERE cookie_id = %s ORDER BY created_at DESC;", (cookie,))
Bottom line, do it the correct way all the time. Note, there are cases where you must use string interpolation, e.g. for table names:
cursor.execute("SELECT * FROM %s", (table_name,)) # Not valid
cursor.execute("SELECT * FROM {}".format(table_name)) # Valid
And in those cases, you need to take other precautions if someone else can interact with the code.
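In psycopg2 specifically, one such precaution is the psycopg2.sql module, which lets you compose identifiers safely instead of formatting them into the string yourself; a minimal sketch (the table name is an assumption):
from psycopg2 import sql

query = sql.SQL("SELECT user_id, created_at FROM {} WHERE cookie_id = %s").format(
    sql.Identifier(table_name))
cursor.execute(query, (cookie,))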
I ran an ALTER TABLE query to add some columns to a table and then db.commit(). That didn't raise any error or warning, but in Oracle SQL Developer the new columns don't show up on SELECT * ....
So I tried to rerun the ALTER TABLE but it raised
cx_Oracle.DatabaseError: ORA-14411: The DDL cannot be run concurrently with other DDLs
That kinda makes sense (I can't create columns that already exist), but when I try to fill the new column with values, I get the message
SQL Error: ORA-00904: "M0010": invalid ID
00904. 00000 - "%s: invalid identifier"
which suggests that the new column has not been created yet.
Does anybody understand what may be going on?
UPDATE/SOLVED: I kept trying to run the queries another couple of times, and at some point things suddenly started working (for no apparent reason). Maybe processing time? That would be weird, because the queries are ultra light. I'll get back to this if it happens again.
First, you don't need commit; in Oracle, DDL statements implicitly commit the transaction.
ORA-14411 means
Another conflicting DDL was already running.
so it seems that your first ALTER TABLE statement hadn't finished yet (probably the table is very large, or there is some other issue).
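For illustration, a minimal cx_Oracle sketch of the intended sequence (the DSN, table and column names are made up): the ALTER TABLE needs no commit, while the UPDATE that fills the new column does.
import cx_Oracle

conn = cx_Oracle.connect("user/password@localhost/orclpdb")
cur = conn.cursor()
cur.execute("ALTER TABLE my_table ADD (m0010 NUMBER)")  # DDL: commits implicitly
cur.execute("UPDATE my_table SET m0010 = 0")            # DML: needs an explicit commit
conn.commit()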
I have an odd problem.
There are two databases. One holds all products for an online shop; the other one runs the online-shop software and holds only the active products.
I want to take the values from db1, change some values in Python and insert them into db2.
db1.execute("SELECT * FROM table1")
for product in db1.fetchall():
# ...
db2.execute("INSERT INTO table2 ...")
print "Check: " + str(db2.countrow)
So I can get the values via SELECT; even selecting from db2 is no problem. My check always gives me 1, BUT there are no new rows in table2. The auto-increment value grows, but there is no data.
And I don't even get an error like "couldn't insert".
So does anybody have an idea what could be wrong? (If I do an insert manually via phpMyAdmin it works, and if I just take the SQL from my script and run it manually, the statements work as well.)
EDIT: Found the answer here How do I check if an insert was successful with MySQLdb in Python?
Is there a way to make these executes without committing every time?
I have a wrapper around MySQLdb that works perfectly for me; changing the behaviour to commit would require some big changes.
EDIT2: OK, I found out that db1 is MyISAM (which works without committing) and db2 is InnoDB (which apparently only works with committing). I guess I have to change db2 to MyISAM as well.
Try adding db2.commit() after the inserts if you're using InnoDB.
Starting with 1.2.0, MySQLdb disables autocommit by default, as required by the DB-API standard (PEP-249). If you are using InnoDB tables or some other type of transactional table type, you'll need to do connection.commit() before closing the connection, or else none of your changes will be written to the database.
http://mysql-python.sourceforge.net/FAQ.html#my-data-disappeared-or-won-t-go-away
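To address the "without committing every time" part: with MySQLdb you can either commit once after the whole batch or re-enable autocommit on the connection. A rough sketch, where db2_connection stands for the underlying MySQLdb connection behind your wrapper (an assumption about your setup):
# Option 1: one commit after the whole batch
for product in db1.fetchall():
    db2.execute("INSERT INTO table2 ...")
db2_connection.commit()

# Option 2: turn autocommit back on for the InnoDB connection
db2_connection.autocommit(True)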