Inserting into SQL with pyodbc from remote computer - python

Hello, I am using a second computer to gather some data and insert it into a SQL database. Remote reading and writing is set up correctly, and I can insert new rows with plain SQL.
With pyodbc I can read tables, but when I insert new data, nothing happens: no error message, but also no new rows in the table.
Has anyone faced this issue before and found a solution?

The cursor.execute() method runs the SQL statement, but only inside an uncommitted transaction. Since this is an INSERT statement, you must call cursor.commit() (or connection.commit()) for the records to actually populate your table. Likewise for a DELETE statement, you need to commit as well.
Without more to go on here, I can only assume that you are not committing the insert.
Notice, similarly, that when you run cursor.execute("""select * from yourTable"""), you need to call cursor.fetchall() (or another fetch method) to actually retrieve and view the results of your query.
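A minimal sketch of that pattern with pyodbc; the connection string, table, and column names below are placeholders, not taken from the question:

import pyodbc

conn = pyodbc.connect("DSN=yourDsn;UID=user;PWD=password")  # placeholder connection details
cursor = conn.cursor()
cursor.execute("INSERT INTO yourTable (someColumn) VALUES (?)", "some value")
conn.commit()  # cursor.commit() also works in pyodbc; without a commit the INSERT is discarded
cursor.execute("SELECT * FROM yourTable")
rows = cursor.fetchall()  # a fetch call is needed to actually see the query results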

Related

why my query works but the data is not saved? [duplicate]

This question already has an answer here:
python script is not saving into database
(1 answer)
Closed 6 months ago.
I have a small Python script with two functions: the first sends data into a table, the second reads the table.
When I call the function that triggers my insert query, the data is not saved in the database.
When I run an identical query directly in SQL Server, it works fine.
So my script is good and my query is good too. The firewall is properly configured.
So why is the data not saved?
The primary key of my table is an IDENTITY column. When I call my insert function, the IDENTITY column still auto-increments even though no data is saved.
Here is my script:
Here is my SQL Server:
The SQL query for creating my table:
I have tried my best to find a solution; I need your help to understand my problem.
As a quick fix, add commit after execute:
...
cursor.execute(sql)
cursor.connection.commit()
But I would also advise you to keep the number of connections as small as possible. In your current code you create a new connection for each operation.
After inserting with your cursor, you need to commit your inserts with connection.commit().
As qaziqarta pointed out, it is best practice not to open a new connection every time you want to insert or read something from the database.
You should initialize your connection once at the beginning and close it after you are done reading/writing.
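A minimal sketch of that pattern, assuming pyodbc (the driver isn't shown in the excerpt) and placeholder connection, table, and column names:

import pyodbc

conn = pyodbc.connect("DRIVER={ODBC Driver 17 for SQL Server};SERVER=host;DATABASE=db;UID=user;PWD=pw")
cursor = conn.cursor()

def insert_row(value):
    # reuse the single connection instead of opening a new one per call
    cursor.execute("INSERT INTO myTable (myColumn) VALUES (?)", value)
    conn.commit()

def read_rows():
    cursor.execute("SELECT * FROM myTable")
    return cursor.fetchall()

insert_row("example")
print(read_rows())
conn.close()  # close once, after all reading and writing is done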

Insert statement in cx_Oracle produces no records in global temporary table

I've created a global temporary table from Python, using the cx_Oracle package. After creation, the table shows up in my SQL Developer application; however, INSERT statements produce no records.
I've created a cursor with a working connection (as evidenced by the fact that the tables are successfully created), and I use the standard syntax for the insert.
I've tried a variety of INSERT statements, but none work:
cur = connection.cursor()
cur.execute("INSERT INTO table(column) VALUES(example)")
connection.commit()
I would expect to see the data I've inserted show up. However, when I SELECT * from the table, there are no records. I am able to successfully insert directly from the SQL Developer application, so I'm not sure what might be causing the discrepancy.
Rows added to a global temporary table are only visible to the session that created them. Another session, like your SQL Developer session, cannot see them. You have the option of creating the GTT so that rows are deleted at the end of each transaction (ON COMMIT DELETE ROWS) or kept until the session is closed (ON COMMIT PRESERVE ROWS).
See https://oracle-base.com/articles/misc/temporary-tables
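A minimal cx_Oracle sketch of inserting and reading back within the same session; the table name, column, and connection string are made up for illustration:

import cx_Oracle

connection = cx_Oracle.connect("user/password@host/service")  # placeholder credentials
cur = connection.cursor()
# PRESERVE ROWS keeps the rows for the lifetime of this session;
# the default, DELETE ROWS, clears them at every commit
cur.execute("CREATE GLOBAL TEMPORARY TABLE my_gtt (val NUMBER) ON COMMIT PRESERVE ROWS")
cur.execute("INSERT INTO my_gtt (val) VALUES (:1)", [42])
connection.commit()
cur.execute("SELECT val FROM my_gtt")
print(cur.fetchall())  # the row is visible here, but not from another session such as SQL Developer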

Did CREATE TABLE IF NOT EXISTS create the table?

import sqlite3
connection = sqlite3.connect("...")
cursor = connection.cursor()
cursor.execute("CREATE TABLE IF NOT EXISTS ...")
How can I find out, after executing the CREATE TABLE IF NOT EXISTS, whether the table was created or already in place?
The only way to check is by removing the IF NOT EXISTS part of the query and checking for a sqlite3.OperationalError with a message of the form "table $tablename already exists". I wouldn't trust the error message to be stable, but Python apparently does not supply an error code along with the exception.
The safest thing to do would be to begin a transaction and query the sqlite_master table beforehand, create the table if there were no results, then commit the transaction.
Note that none of these solutions will work correctly if the table you are attempting to create has a different schema than the one that exists in the database; database migrations are more complicated and usually require case-by-case handling.
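A minimal sketch of the sqlite_master check; the table name and schema are invented for the example, and the explicit transaction wrapping is omitted for brevity:

import sqlite3

connection = sqlite3.connect("example.db")
cursor = connection.cursor()
cursor.execute("SELECT name FROM sqlite_master WHERE type = 'table' AND name = ?", ("myTable",))
already_existed = cursor.fetchone() is not None
if not already_existed:
    cursor.execute("CREATE TABLE myTable (id INTEGER PRIMARY KEY, value TEXT)")
    connection.commit()
print("table already existed" if already_existed else "table was just created")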
EDIT: Updated answer
It sounds like you don't know beforehand what tables might exist, what those tables have in them (if anything), or have a way to check beforehand if tables exist.
To check if a table was created after using the IF NOT EXISTS clause on a CREATE TABLE command, you could try one of these:
Make the "new" table have at least one column name that is guaranteed to be different from the old table. After the CREATE TABLE command, you select the column guaranteed to be new.
CREATE TABLE IF NOT EXISTS newTable (column1 INTEGER, somethingUnique INTEGER)
SELECT somethingUnique FROM newTable
If you don't get back an error from selecting somethingUnique, then you know that you have created a new table; otherwise the table already existed. If you end up creating a new table and do not want the somethingUnique column anymore, you can just delete that column.
Even if you don't want to add a somethingUnique column, there is the possibility that if the old table existed, it would already have at least one row in it. All you have to do is select anything from the table. If nothing is returned, then you may or may not be dealing with your new table (so go back to suggestion 1). If something does get returned, then you know that you are dealing with an old table.
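A rough Python sketch of the first suggestion, using the hypothetical newTable/somethingUnique names from above:

import sqlite3

connection = sqlite3.connect("example.db")
cursor = connection.cursor()
cursor.execute("CREATE TABLE IF NOT EXISTS newTable (column1 INTEGER, somethingUnique INTEGER)")
try:
    cursor.execute("SELECT somethingUnique FROM newTable")
    print("newTable was just created")
except sqlite3.OperationalError:
    # the column is missing, so a pre-existing table with that name was already there
    print("a table named newTable already existed")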
Old answer
One way to see if the table was created (or exists) is to go into a terminal, navigate to the directory where your database is, and then use sqlite commands.
$ sqlite3
sqlite> .open yourDatabase.db
sqlite> SELECT * FROM theTableYouWantedToCreate;
If the table does not exist, you would get back the following error:
Error: no such table: theTableYouWantedToCreate
If the table did exist, obviously it would return everything that is in the table. If nothing is in the table (since you just created it), sqlite will give you back another prompt, indicating that the table does indeed exist.

Python and MySQL Insert query

I have a question regarding insert queries and the Python MySQL connection. I guess that I need to commit after every insert query I make.
Is there a different way to do that? I mean a faster way, like in PHP.
Second, I guess this is also the same for update queries?
Another problem: once I commit a query, the connection is closed. Assume that I have several different insert queries and every time I prepare one I need to insert it into the table. How can I achieve that with Python? I am using the MySQLdb library.
Thanks for your answers.
You don't need to commit after each insert. You can perform many operations and commit on completion.
The executemany method of the DB-API allows you to perform many inserts/updates in a single round trip.
There is no link between committing a transaction and disconnecting from the database. See the Connection object's methods for the details of commit and close.
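A minimal MySQLdb sketch of that approach, committing once after a batch of executemany inserts (credentials, table, and columns are placeholders):

import MySQLdb

connection = MySQLdb.connect(host="localhost", user="user", passwd="password", db="mydb")
cursor = connection.cursor()
rows = [("alice", 1), ("bob", 2), ("carol", 3)]
# one round trip for many inserts, then a single commit for the whole batch
cursor.executemany("INSERT INTO myTable (name, score) VALUES (%s, %s)", rows)
connection.commit()
# the connection stays open after commit; keep using it for further queries
cursor.execute("SELECT COUNT(*) FROM myTable")
print(cursor.fetchone())
connection.close()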

Python | MySQLdb -> I cant insert

I have an odd problem.
There are two databases. One has all products in it for an online shop. The other one has the online-shop software on it and only the active products.
I want to take the values from db1, change some values in Python, and insert them into db2.
db1.execute("SELECT * FROM table1")
for product in db1.fetchall():
    # ...
    db2.execute("INSERT INTO table2 ...")
    print "Check: " + str(db2.rowcount)
So I can get the values via SELECT; even selecting from db2 is no problem. My check always gives me 1, BUT there are no new rows in table2. The autoincrement value grows, but there is no data.
And I don't even get an error like "couldn't insert".
So does anybody have an idea what could be wrong? (If I do an insert manually via phpMyAdmin it works, and if I just take the SQL from my script and run it manually the statements work as well.)
EDIT: Found the answer here: How do I check if an insert was successful with MySQLdb in Python?
Is there a way to run these executes without committing every time?
I have a wrapper around MySQLdb and it works perfectly for me; changing the commit behaviour would require some big changes.
EDIT2: OK, I found out that db1 is MyISAM (which works without committing) and db2 is InnoDB (which apparently only works with committing). I guess I have to change db2 to MyISAM as well.
Try adding db2.commit() after the inserts if you're using InnoDB.
Starting with 1.2.0, MySQLdb disables autocommit by default, as
required by the DB-API standard (PEP-249). If you are using InnoDB
tables or some other type of transactional table type, you'll need to
do connection.commit() before closing the connection, or else none of
your changes will be written to the database.
http://mysql-python.sourceforge.net/FAQ.html#my-data-disappeared-or-won-t-go-away
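A rough sketch of how the asker's loop might look with a single commit at the end; note that commit() belongs to the connection object, not the cursor, and the connection details and column names below are invented:

import MySQLdb

conn1 = MySQLdb.connect(host="localhost", user="user", passwd="password", db="products_db")
conn2 = MySQLdb.connect(host="localhost", user="user", passwd="password", db="shop_db")
db1, db2 = conn1.cursor(), conn2.cursor()

db1.execute("SELECT id, name FROM table1")
for product_id, name in db1.fetchall():
    # ... adjust values in Python ...
    db2.execute("INSERT INTO table2 (id, name) VALUES (%s, %s)", (product_id, name))
conn2.commit()  # without this, InnoDB rolls the inserts back when the connection closes
conn1.close()
conn2.close()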
