pysqlite and Python: won't insert data into table but code executes fine

I am trying to insert some data into a pysqlite database, but even though the code runs fine with no errors, nothing shows up in the database. I have made sure that the variable does contain a value:
cur = self.con.execute("insert into urllist(url) values('%s')" % seed)
I have double-checked the table and column names, and they are also correct.

Are you calling con.commit()?
Apparently changes are lost unless this method is used before closing the connection.
http://readthedocs.org/docs/pysqlite/en/latest/sqlite3.html
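For example, a minimal sketch of the fix (assuming con is an open sqlite3 connection and the urllist table from the question), also using a parameterized query instead of string interpolation:
import sqlite3

con = sqlite3.connect("crawler.db")  # hypothetical database file
con.execute("CREATE TABLE IF NOT EXISTS urllist (url TEXT)")  # schema assumed from the question
seed = "http://example.com"  # hypothetical seed value
con.execute("INSERT INTO urllist (url) VALUES (?)", (seed,))
con.commit()  # without this, the insert is discarded when the connection closes
con.close()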

Related

Python mysql.connector insert does not work

I am working with Python's mysql.connector for the first time and I am not able to create a working insert statement.
This is the table:
'CREATE TABLE IF NOT EXISTS products (id INT AUTO_INCREMENT PRIMARY KEY, title VARCHAR(255));'
I am trying to insert a variable as title while the id should be auto incremented. I have tried multiple solutions but it simply won't work.
def insert_product(title: str):
    insert_product_query = 'INSERT INTO products (title) VALUES (%s);'
    cursor.execute(insert_product_query, (title,))
This runs without any error, but the insert is not working; it does nothing. I tried multiple versions of this, with '?' instead of '%s' and without a tuple, but it won't work.
Another solution I tried is this:
def insert_product(title: str):
    insert_product_query = f'INSERT INTO products (title) VALUES (\'{title}\')'
    print(insert_product_query)
    cursor.execute(insert_product_query)
I printed the insert statement, and when I copy-paste it directly into the database it works perfectly, so I have no idea why it is not working from the Python code, as it is not producing any errors.
I found many similar problems but none of the solutions worked for me.
I hope someone can help me, as I might be overlooking something obvious.
Thanks in advance!
mysql.connector disables autocommit by default (as a reasonable library would do!). You need to explicitly commit after you perform a DML statement:
con.commit() # Assuming con is the name of the connection variable
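A minimal end-to-end sketch (connection parameters are placeholders; the cursor comes from the same connection, as in the question):
import mysql.connector

con = mysql.connector.connect(host="localhost", user="user",
                              password="secret", database="shop")  # placeholder credentials
cursor = con.cursor()

def insert_product(title: str):
    cursor.execute('INSERT INTO products (title) VALUES (%s);', (title,))
    con.commit()  # autocommit is off by default, so persist explicitly

insert_product("example product")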

sql INSERT in python (postgres, cursor, execute)

I had no problem SELECTing data in Python from a Postgres database using cursor/execute. I just changed the SQL to INSERT a row, but nothing is inserted into the DB. Can anyone let me know what should be modified? I am a little confused because everything is the same except for the SQL statement.
@app.route("/addcontact")
def addcontact():
    # this connection/cursor setting showed no problem so far
    conn = pg.connect(conn_str)
    cur = conn.cursor(cursor_factory=psycopg2.extras.DictCursor)
    sql = f"INSERT INTO jna (sid, phone, email) VALUES ('123','123','123')"
    cur.execute(sql)
    return redirect("/contacts")
First, look at your table setup and make sure your variables are named correctly, in the right order and format and all that. If you're not logged into the specific database on the SQL server, it won't know where the table is; you might need to send something like 'USE databasename' before you do your insert statement so your session is in the right place on the server.
I might not be up to date with the language, but is that 'f' supposed to be right before the quotes? If that's in your code, it would probably throw an error unless it has a use I'm not aware of, or it's not relevant to the problem. (It is an f-string prefix, valid since Python 3.6; it is harmless but unnecessary here, since the string contains no placeholders.)
You have to commit your transaction by adding the line below after execute(sql):
conn.commit()
Ref: Using INSERT with a PostgreSQL Database using Python
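A sketch of the corrected route, assuming the same Flask app, pg (psycopg2), and conn_str as in the question, and switching to a parameterized query:
@app.route("/addcontact")
def addcontact():
    conn = pg.connect(conn_str)
    cur = conn.cursor(cursor_factory=psycopg2.extras.DictCursor)
    cur.execute("INSERT INTO jna (sid, phone, email) VALUES (%s, %s, %s)",
                ('123', '123', '123'))
    conn.commit()  # psycopg2 starts a transaction implicitly; commit to persist
    return redirect("/contacts")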

SQL Server stored procedure to insert called from Python doesn't always store data, but still increments identity counter

I've hit a strange inconsistency problem with SQL Server inserts using a stored procedure. I'm calling a stored procedure from Python via pyodbc by running a loop to call it multiple times for inserting multiple rows in a table.
It seems to work normally most of the time, but after a while it will just stop working in the middle of the loop. At that point even if I try to call it just once via the code it doesn't insert anything. I don't get any error messages in the Python console and I actually get back the incremented identities for the table as though the data were actually inserted, but when I go look at the data, it isn't there.
If I call the stored procedure from within SQL Server Management Studio and pass in data, it inserts it and shows the incremented identity number as though the other records had been inserted, even though they are not in the database.
It seems I reach a certain limit on the number of times I can call the stored procedure from Python and it just stops working.
I'm making sure to disconnect after I finish looping through the inserts, and other stored procedures written in the same way and sent via the same database connection still work as usual.
I've tried restarting the computer with SQL Server and sometimes it will let me call the stored procedure from Python a few more times, but that eventually stops working as well.
I'm wondering if it is something to do with calling the stored procedure in a loop too quickly, but that doesn't explain why after restarting the computer, it doesn't allow any more inserts from the stored procedure.
I've done lots of searching online, but haven't found anything quite like this.
Here is the stored procedure:
USE [Test_Results]
GO
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO
CREATE PROCEDURE [dbo].[insertStepData]
    @TestCaseDataId int,
    @StepNumber nchar(10),
    @StepDateTime nvarchar(50)
AS
SET NOCOUNT ON;
BEGIN TRANSACTION
DECLARE @newStepId int
INSERT INTO TestStepData (
    TestCaseDataId,
    StepNumber,
    StepDateTime
)
VALUES (
    @TestCaseDataId,
    @StepNumber,
    @StepDateTime
)
SET @newStepId = SCOPE_IDENTITY();
SELECT @newStepId
FROM TestStepData
COMMIT TRANSACTION
Here is the method I use to call a stored procedure and get back the id number ('conn' is an active database connection via pyodbc):
def CallSqlServerStoredProc(self, conn, procName, *args):
    sql = """DECLARE @ret int
             EXEC @ret = %s %s
             SELECT @ret""" % (procName, ','.join(['?'] * len(args)))
    return int(conn.execute(sql, args).fetchone()[0])
Here is where I'm passing in the stored procedure to insert:
....
for testStep in testStepData:
    testStepId = self.CallSqlServerStoredProc(conn, "insertStepData", testCaseId, testStep["testStepNumber"], testStep["testStepDateTime"])
    conn.commit()
    time.sleep(1)
....
SET @newStepId = SCOPE_IDENTITY();
SELECT @newStepId
FROM TestStepData
looks mighty suspicious to me:
SCOPE_IDENTITY() returns numeric(38,0) which is larger than int. A conversion error may occur after some time. Update: now that we know the IDENTITY column is int, this is not an issue (SCOPE_IDENTITY() returns the last value inserted into that column in the current scope).
SELECT into a variable doesn't guarantee its value if more than one record is returned. Besides, I don't get the idea behind overwriting the identity value we already have. In addition to that, the number of values returned by the last statement is equal to the number of rows in that table, which is increasing quickly; this is a likely cause of degradation. In brief, the last statement is not just useless, it's detrimental.
The 2nd statement also makes these statements misbehave:
EXEC @ret = %s %s
SELECT @ret
Since the procedure doesn't RETURN anything but SELECTs a single time, this chunk actually returns two data sets: 1) a single @newStepId value (from EXEC, yielded by the SELECT @newStepId <...>); 2) a single NULL (from SELECT @ret). fetchone() reads the 1st data set by default, so you don't notice this, but it doesn't work towards performance or correctness anyway.
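To see both hidden data sets from Python, one could step through them with pyodbc's nextset(); a hypothetical illustration, reusing sql and args from the helper above:
cur = conn.execute(sql, args)
step_id = cur.fetchone()[0]   # 1st data set: the value from SELECT @newStepId
cur.nextset()                 # advance to the 2nd data set
ret = cur.fetchone()[0]       # the NULL from SELECT @ret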
Bottom line
Replace the 2nd statement with RETURN @newStepId.
Data not in the database problem
I believe it's caused by RETURN before COMMIT TRANSACTION. Make it the other way round.
In the original form, I believe it was caused by the long-working SELECT and/or possible side-effects from the SELECT not-to-a-variable being inside a transaction.
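Putting both fixes together, the tail of the procedure would look something like this sketch (everything before SCOPE_IDENTITY() stays as in the original):
SET @newStepId = SCOPE_IDENTITY();
COMMIT TRANSACTION  -- commit first so the insert is actually persisted
RETURN @newStepId   -- return code instead of the detrimental SELECT ... FROM TestStepData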

pymssql (Python module) unable to use temporary tables

This isn't a question, so much as a pre-emptive answer. (I have gotten lots of help from this website & wanted to give back.)
I was struggling with a large bit of SQL query that was failing when I tried to run it via Python using pymssql, but would run fine directly through MS SQL. (E.g., in my case, I was using MS SQL Server Management Studio to run it outside of Python.)
Then I finally discovered the problem: pymssql cannot handle temporary tables. At least not my version, which is still 1.0.1.
As proof, here is a snippet of my code, slightly altered to protect any IP issues:
conn = pymssql.connect(host=sqlServer, user=sqlID, password=sqlPwd, \
database=sqlDB)
cur = conn.cursor()
cur.execute(testQuery)
The above code FAILS (returns no data, to be specific, and spits the error "pymssql.OperationalError: No data available." if you call cur.fetchone()) if I call it with testQuery defined as below:
testQuery = """
CREATE TABLE #TEST (
[sample_id] varchar (256)
,[blah] varchar (256) )
INSERT INTO #TEST
SELECT DISTINCT
[sample_id]
,[blah]
FROM [myTableOI]
WHERE [Shipment Type] in ('test')
SELECT * FROM #TEST
"""
However, it works fine if testQuery is defined as below.
testQuery = """
SELECT DISTINCT
[sample_id]
,[blah]
FROM [myTableOI]
WHERE [Shipment Type] in ('test')
"""
I did a Google search as well as a search within Stack Overflow, and couldn't find any information regarding the particular issue. I also looked under the pymssql documentation and FAQ, found at http://code.google.com/p/pymssql/wiki/FAQ, and did not see anything mentioning that temporary tables are not allowed. So I thought I'd add this "question".
Update: July 2016
The previously-accepted answer is no longer valid. The second "will NOT work" example does indeed work with pymssql 2.1.1 under Python 2.7.11 (once conn.autocommit(1) is replaced with conn.autocommit(True) to avoid "TypeError: Cannot convert int to bool").
For those who run across this question and might have similar problems, I thought I'd pass on what I'd learned since the original post. It turns out that you CAN use temporary tables in pymssql, but you have to be very careful in how you handle commits.
I'll first explain by example. The following code WILL work:
testQuery = """
CREATE TABLE #TEST (
[name] varchar(256)
,[age] int )
INSERT INTO #TEST
values ('Mike', 12)
,('someone else', 904)
"""
conn = pymssql.connect(host=sqlServer, user=sqlID, password=sqlPwd, \
database=sqlDB) ## obviously setting up proper variables here...
conn.autocommit(1)
cur = conn.cursor()
cur.execute(testQuery)
cur.execute("SELECT * FROM #TEST")
tmp = cur.fetchone()
tmp
This will then return the first item (a subsequent fetch will return the other):
('Mike', 12)
But the following will NOT work:
testQuery = """
CREATE TABLE #TEST (
[name] varchar(256)
,[age] int )
INSERT INTO #TEST
values ('Mike', 12)
,('someone else', 904)
SELECT * FROM #TEST
"""
conn = pymssql.connect(host=sqlServer, user=sqlID, password=sqlPwd, \
database=sqlDB) ## obviously setting up proper variables here...
conn.autocommit(1)
cur = conn.cursor()
cur.execute(testQuery)
tmp = cur.fetchone()
tmp
This will fail, saying "pymssql.OperationalError: No data available." The reason, as best I can tell, is that whether you have autocommit on or not, and whether you specifically make a commit yourself or not, all tables must explicitly be created AND COMMITTED before trying to read from them.
In the first case, you'll notice that there are two cur.execute(...) calls. The first one creates the temporary table. Upon finishing that cur.execute(), since autocommit is turned on, the SQL script is committed and the temporary table is made. Then another cur.execute() is called to read from that table. In the second case, I attempt to create and read from the table "simultaneously" (at least in the mind of pymssql... it works fine in MS SQL Server Management Studio). Since the table has not previously been made and committed, I cannot query into it.
Wow... that was a hassle to discover, and it will be a hassle to adjust my code (developed on MS SQL Server Management Studio at first) so that it will work within a script. Oh well...

Adding column in SQLite3, then filling it

I'm using SQLite 3.6, and connecting to it using Python 2.7 on Fedora 14.
I am attempting to add a column to a table using ALTER TABLE, then immediately afterwards UPDATE the table with data for the newly created column. Through Python, I get nothing but NULLs in the database's new column. If I run the queries through sqlite3 in the terminal, it works.
Here is the (sanitized) Python:
def upgrade(cursor):
    cursor.execute("ALTER TABLE Test ADD COLUMN 'Guid' TEXT")
    cursor.execute("SELECT DISTINCT Name FROM Test WHERE ForeignKey=-1")
    # Loop using a row factory that puts all the Names into a list called NameList
    for Name in NameList:
        Guid = uuid.uuid4()
        cursor.execute("UPDATE Test SET Guid=? WHERE Name=?", (str(Guid), Name))
The SQLite3 connection is managed by the Python script's main function, and the connection is committed when the script ends. The Python executes without errors, and debug statements show that all of the proper rows were found in the SELECT call. However, when I look at the database using Sqliteman or sqlite3, I see only NULLs in the new column.
Here are my sqlite3 calls:
ALTER TABLE Test ADD COLUMN 'Guid' TEXT;
UPDATE Test SET GUID="foo" WHERE Name="Test3";
select Name, Guid from Test where Name='Test3';
This works for some reason; I see the (fake) guid where I expect it.
I'm at my wits end for what to do.
The issue was in the main function: an error there caused it to exit before the connection's commit() call could be made.
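A minimal sketch of a pattern that avoids this (hypothetical file name; upgrade is the function from the question): commit as soon as the work is done, inside a try/finally, so a later error in main cannot silently discard the changes:
import sqlite3

con = sqlite3.connect("test.db")  # hypothetical database file
try:
    upgrade(con.cursor())
    con.commit()  # persist immediately instead of waiting for the end of main
finally:
    con.close()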
