I have an odd problem.
There are two databases. One holds all products for an online shop; the other holds the online-shop software and only the active products.
I want to take the values from db1, change some of them in Python, and insert them into db2:
db1.execute("SELECT * FROM table1")
for product in db1.fetchall():
    # ... modify the product values here ...
    db2.execute("INSERT INTO table2 ...")
    print "Check: " + str(db2.countrow)
So I can get the values via SELECT; even selecting from db2 is no problem. My check always gives me 1, BUT there are no new rows in table2. The auto-increment value grows, but there is no data.
And I don't even get an error like "couldn't insert".
So does anybody have an idea what could be wrong? (If I do an insert manually via phpMyAdmin it works, and if I just take the SQL from my script and run it manually, the statements work as well.)
EDIT: Found the answer here: How do I check if an insert was successful with MySQLdb in Python?
Is there a way to make these executes without committing every time?
I have a wrapper around MySQLdb that works perfectly for me; changing the behaviour to commit would require some big changes.
EDIT2: OK, I found out that db1 is MyISAM (which works without committing) and db2 is InnoDB (which actually only seems to work with committing). I guess I have to change db2 to MyISAM as well.
Try adding db2.commit() after the inserts if you're using InnoDB.
Starting with 1.2.0, MySQLdb disables autocommit by default, as
required by the DB-API standard (PEP-249). If you are using InnoDB
tables or some other type of transactional table type, you'll need to
do connection.commit() before closing the connection, or else none of
your changes will be written to the database.
http://mysql-python.sourceforge.net/FAQ.html#my-data-disappeared-or-won-t-go-away
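The effect is easy to reproduce with Python's built-in sqlite3 module, standing in here for MySQLdb (the commit rules are the same for any transactional backend such as InnoDB). The autocommit part also addresses the "without committing every time" question: sqlite3 uses isolation_level=None for that, and MySQLdb offers a similar connection.autocommit(True) switch. This is a minimal sketch, not the poster's actual setup:

```python
import os
import sqlite3
import tempfile

# A throwaway on-disk SQLite database stands in for the MySQL server here;
# the commit semantics are the same for any transactional store.
path = os.path.join(tempfile.mkdtemp(), "shop.db")

conn = sqlite3.connect(path)
conn.execute("CREATE TABLE table2 (id INTEGER PRIMARY KEY, name TEXT)")
conn.commit()

# Insert WITHOUT committing, then close: the change is rolled back.
conn.execute("INSERT INTO table2 (name) VALUES ('widget')")
conn.close()

conn = sqlite3.connect(path)
count_after_rollback = conn.execute("SELECT COUNT(*) FROM table2").fetchone()[0]
conn.close()
print(count_after_rollback)  # 0 -- the uncommitted insert was lost

# Autocommit mode (isolation_level=None) writes each statement immediately,
# so you don't have to call commit() after every INSERT.
auto = sqlite3.connect(path, isolation_level=None)
auto.execute("INSERT INTO table2 (name) VALUES ('gadget')")
auto.close()

conn = sqlite3.connect(path)
count_with_autocommit = conn.execute("SELECT COUNT(*) FROM table2").fetchone()[0]
conn.close()
print(count_with_autocommit)  # 1 -- the autocommitted insert persisted
```

This matches the MyISAM-vs-InnoDB observation above: a non-transactional engine behaves like the autocommit case, a transactional one like the first case.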
Related
So, basically what the title says: I can connect to the database and make the exact same queries with SQLelectron from my computer (where I am running the script). The script doesn't give me errors either, and I can make SELECT queries from it just fine.
What's more, my id column (which has AUTO_INCREMENT enabled) jumps to later values when I make the same INSERT statement from SQLelectron, as if I had actually inserted the row, but the row is never actually inserted.
Here is the code snippet:
sql = ['INSERT INTO liftdb.lifts',
'(Date, lift, weight)',
'VALUES',
'(%s, %s, %s)']
cur.execute(' '.join(sql), (date, event['Lift'], event['Weight']))
Again, no errors or indication something went wrong.
If your code is acting like there's been a change (the auto-increment increases) yet you can't see any changes, that's a strong indication that you forgot to commit the changes to the database (something like con.commit() on the next line down; note that in most MySQL drivers commit() lives on the connection object, not the cursor).
Assuming you have committed the changes, it could also be that whatever software you're using to inspect the database needs to be refreshed in order to show the changes.
I've created a global temporary table from Python, using the cx_Oracle package. After creation, the table shows up in my SQL Developer application; however, INSERT statements produce no records.
I've created a cursor with a working connection (as evidenced by the fact that the table is successfully created), and I use the standard syntax for the insert.
I've tried a variety of INSERT statements, but none work:
cur = connection.cursor()
cur.execute("INSERT INTO table(column) VALUES(example)")
connection.commit()
I would expect to see the data I've inserted show up. However, when I SELECT * from the table, no record has been inserted. I am able to successfully insert directly from the SQL Developer application, so I'm not sure what might be causing the discrepancy.
Rows added to a global temporary table are only visible to the session that inserted them. Another session, like your SQL Developer session, cannot see them. When you create the GTT, you choose whether rows are deleted at the end of each transaction (ON COMMIT DELETE ROWS) or kept until the session ends (ON COMMIT PRESERVE ROWS).
See https://oracle-base.com/articles/misc/temporary-tables
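The session-scoping can be illustrated with Python's built-in sqlite3, whose TEMP tables are likewise private to the creating connection. It is only a loose analogy (in Oracle the GTT's definition is global and just the rows are session-private, whereas in sqlite3 the whole temp table is connection-local), but the visible symptom is the same:

```python
import os
import sqlite3
import tempfile

path = os.path.join(tempfile.mkdtemp(), "demo.db")
session_a = sqlite3.connect(path)
session_b = sqlite3.connect(path)

# TEMP tables in sqlite3 are private to the creating connection, much like
# rows in an Oracle global temporary table are private to the inserting session.
session_a.execute("CREATE TEMP TABLE scratch (n INTEGER)")
session_a.execute("INSERT INTO scratch VALUES (1)")
visible_to_a = session_a.execute("SELECT COUNT(*) FROM scratch").fetchone()[0]

try:
    session_b.execute("SELECT COUNT(*) FROM scratch")
    visible_to_b = True
except sqlite3.OperationalError:  # "no such table: scratch"
    visible_to_b = False

session_a.close()
session_b.close()
print(visible_to_a, visible_to_b)  # 1 False
```

In the question above, SQL Developer plays the role of session_b: it can see the table's definition, but never the rows inserted by the Python session.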
Hello, I am using my second computer to gather some data and insert it into an SQL database. I set up everything for reading and writing the database remotely, and I can insert new rows just by using normal SQL.
With pyodbc I can read tables, but when I insert new data, nothing happens. No error message, but also no new rows in the table.
I wonder if anyone has faced this issue before and knows the solution.
The cursor.execute() method runs the SQL statement, but the change is not made permanent. Since this is an INSERT statement, you must call cursor.commit() (or connection.commit()) for the records to actually populate your table. Likewise for a DELETE statement, you need to commit as well.
Without more perspective here, I can only assume that you are not committing the insert.
Notice, similarly, that when you run cursor.execute("""select * from yourTable"""), you need to call cursor.fetchall() (or another fetch method) to actually retrieve and view your query results.
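A minimal sketch of both points, using Python's built-in sqlite3 in place of pyodbc (in pyodbc, cursor.commit() is a convenience that commits on the underlying connection; in sqlite3 the commit lives on the connection only):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE yourTable (id INTEGER, name TEXT)")
cur.execute("INSERT INTO yourTable VALUES (1, 'a')")
conn.commit()  # without this, the INSERT would be rolled back on close

cur.execute("SELECT * FROM yourTable")  # runs the query...
rows = cur.fetchall()                   # ...but rows arrive only when fetched
print(rows)  # [(1, 'a')]
```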
I ran an ALTER TABLE query to add some columns to a table, followed by db.commit(). That didn't raise any error or warning, but in Oracle SQL Developer the new columns don't show up on SELECT * ....
So I tried to rerun the ALTER TABLE but it raised
cx_Oracle.DatabaseError: ORA-14411: The DDL cannot be run concurrently with other DDLs
That kinda makes sense (I can't create columns that already exist), but when I try to fill the new column with values, I get the message
SQL Error: ORA-00904: "M0010": invalid ID
00904. 00000 - "%s: invalid identifier"
which suggests that the new column has not been created yet.
Does anybody understand what may be going on?
UPDATE/SOLVED: I kept rerunning the queries a couple more times, and at some point things suddenly started working (for no apparent reason). Maybe processing time? That would be weird, because the queries are ultra-light. I'll get back to this if it happens again.
First, you don't need commit; DDL implicitly commits the current transaction.
ORA-14411 means:
Another conflicting DDL was already running.
so it seems that your first ALTER TABLE statement hadn't finished yet (probably the table is very big, or there was some other issue).
I'm working in an environment with a very poorly managed legacy Paradox database system. (I'm not the administrator.) I've been messing around with using pyodbc to interact with our tables, and the basic functionality seems to work. Here's some (working) test code:
import pyodbc

LOCATION = r"C:\test"  # raw string, so \t isn't read as a tab
cnxn = pyodbc.connect(r"Driver={{Microsoft Paradox Driver (*.db )}};DriverID=538;Fil=Paradox 5.X;DefaultDir={0};Dbq={0};CollatingSequence=ASCII;".format(LOCATION), autocommit=True, readonly=True)
cursor = cnxn.cursor()
cursor.execute("select last, first from test")
row = cursor.fetchone()
print row
The problem is that most of our important tables are going to be open in someone's Paradox GUI at pretty much all times. I get this error whenever I try to do a select from one of those tables:
pyodbc.Error: ('HY000', "[HY000] [Microsoft][ODBC Paradox Driver] Could not lock
table 'test'; currently in use by user '(unknown)' on machine '(unknown)'. (-1304)
(SQLExecDirectW)")
This is, obviously, because pyodbc tries to lock the table when cursor.execute() is called on it. This behavior makes perfect sense, since cursor.execute() runs arbitrary SQL code and could change the table.
However, Paradox itself (through its GUI) seems to handle multiple users fine. It only gives you similar errors if you try to restructure the table while people are using it.
Is there any way I can get pyodbc to use some sort of read-only mode, such that it doesn't have to lock the table when I'm just doing select and such? Or is locking a fundamental part of how it works that I'm not going to be able to get around?
Solutions that would use other modules are also totally fine.
OK, I finally figured it out.
Apparently, ODBC dislikes Paradox tables that have no primary key. You cannot update tables with no primary key under any circumstances, and you cannot read from tables with no primary key unless you are the only user trying to access that table.
Unrelatedly, you get essentially the same error messages from password-protected tables if you don't supply a password.
So I was testing my script on two different tables, one of which has both a password and a primary key, and one of which had neither. I assumed the error messages had the same root cause, but it was actually two different problems, with different solutions.
There still seems to be no way to get access to tables without primary keys if they are open in someone's GUI, but that's a smaller issue.
Make sure that you have the latest version of pyodbc (3.0.6) here. According to the changelog, they:
Added Cursor.commit() and Cursor.rollback(). It is now possible to use
only a cursor in your code instead of keeping track of a connection
and a cursor.
Added readonly keyword to connect. If set to True, SQLSetConnectAttr
SQL_ATTR_ACCESS_MODE is set to SQL_MODE_READ_ONLY. This may provide
better locking semantics or speed for some drivers.
Fixed an error reading SQL Server XML data types longer than 4K.
Also, I have tested this on a Paradox server using readonly, and it does work.
Hope this helps!
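The readonly keyword quoted above maps to the ODBC SQL_ATTR_ACCESS_MODE connection attribute, so whether it helps depends on the driver. The general idea of a read-only connection can be sketched with Python's built-in sqlite3 (standing in for the Paradox driver, since the verifier has no Paradox tables):

```python
import os
import sqlite3
import tempfile

path = os.path.join(tempfile.mkdtemp(), "para.db")
rw = sqlite3.connect(path)
rw.execute("CREATE TABLE test (last TEXT, first TEXT)")
rw.execute("INSERT INTO test VALUES ('Doe', 'Jane')")
rw.commit()
rw.close()

# Open read-only (mode=ro), analogous to pyodbc's readonly=True keyword.
ro = sqlite3.connect("file:{0}?mode=ro".format(path), uri=True)
rows = ro.execute("SELECT last, first FROM test").fetchall()
print(rows)  # [('Doe', 'Jane')]

try:
    ro.execute("INSERT INTO test VALUES ('x', 'y')")
    write_allowed = True
except sqlite3.OperationalError:  # "attempt to write a readonly database"
    write_allowed = False
ro.close()
print(write_allowed)  # False
```

Note the pyodbc docs' hedge ("may provide better locking semantics or speed for some drivers"): unlike sqlite3's hard mode=ro, an ODBC driver is free to treat the flag as a hint only.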
I just published a Python library for reading Paradox database files via the pxlib C-library: https://github.com/mherrmann/pypxlib. This operates on the file-level so should also let you read the database independently of who else is currently accessing it. Since it does not synchronize read/write accesses, you do have to be careful though!