I am trying to create a Python program that dynamically creates a partition on a partitioned Postgres table ("my_test_hist") and loads data into the partition based on the eff_date column. If the partition exists, the records should be loaded into the existing partition; otherwise a new partition should be created and the data loaded into it.
With the sample data below, 2 partitions should be created in the "my_test_hist" partitioned table.
The partitioned table ("my_test_hist") sources its data from the "my_test" table via:
INSERT INTO my_test_hist SELECT * FROM my_test;
I am getting the following error while running the Python program:
no partition of relation "my_test_hist" found for row
DETAIL: Partition key of the failing row contains (eff_date) = (2022-07-15)
The code snippets are as follows:
create table my_test -- Source table non-partitioned
(
id int,
area_code varchar(10),
fname varchar(10),
eff_date date
) ;
INSERT INTO my_test VALUES(1, 'A505', 'John', DATE '2022-07-15');
INSERT INTO my_test VALUES(2, 'A506', 'Mike', DATE '2022-07-20');
COMMIT;
create table my_test_hist -- Target table partitioned
(
id int,
area_code varchar(10),
fname varchar(10),
eff_date date
) PARTITION BY LIST (eff_date) ; -- Partitioned by List on eff_date col
DB Func:
CREATE FUNCTION insert_my_test_hist_part() RETURNS trigger
LANGUAGE plpgsql
AS $$
BEGIN
BEGIN
/* try to create a table for the new partition */
EXECUTE
format(
'CREATE TABLE %I (LIKE my_test_hist INCLUDING DEFAULTS)',
'my_test_hist_' || to_char(NEW.eff_date, 'YYYY_MM_DD')
);
/* tell listener to attach the partition (only if a new table was created) */
EXECUTE
format(
'NOTIFY my_test_hist, %L', to_char(NEW.eff_date, 'YYYY_MM_DD')
);
EXCEPTION
WHEN duplicate_table THEN
NULL; -- ignore
END;
/* insert into the new partition */
EXECUTE
format(
'INSERT INTO %I VALUES ($1.*)', 'my_test_hist_' || to_char(NEW.eff_date, 'YYYY_MM_DD') )
USING NEW;
RETURN NULL;
END;
$$;
DB Trigger:
CREATE OR REPLACE TRIGGER insert_my_test_hist_part_trigger
BEFORE INSERT ON MY_TEST_HIST FOR EACH ROW
WHEN (pg_trigger_depth() < 1)
EXECUTE FUNCTION insert_my_test_hist_part();
Python Listener Program:
import select
import psycopg2
import psycopg2.extensions

try:
    conn = psycopg2.connect(host=hostName, dbname=dbName, user=userName, password=password, port=port)
    conn.set_isolation_level(psycopg2.extensions.ISOLATION_LEVEL_AUTOCOMMIT)
    cursor = conn.cursor()

    def listen_notify():
        cursor.execute("LISTEN my_test_hist;")
        query1 = "ALTER TABLE my_test_hist ADD PARTITION my_test_hist_{} FOR VALUES IN ('{}');"
        query2 = "INSERT INTO my_test_hist SELECT * FROM my_test;"
        while True:
            cursor.execute(query2)  # Trigger the insert to notify
            if select.select([conn], [], [], 5) == ([], [], []):
                print("Timeout")
            else:
                conn.poll()
                while conn.notifies:
                    notify = conn.notifies.pop()
                    var = notify.payload
                    query = query1.format(var, var)
                    cursor.execute(query)
                conn.notifies.clear()

    # Call the function
    listen_notify()
except Exception as e:
    print("Exception occurred: " + str(e))
Can anyone please help me fix this error in Python? Also, please let me know how to use asyncio in the program, and how I can terminate the infinite loop once the message is caught.
Thanks.
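On the asyncio part of the question, the usual shape is to register the connection's file descriptor with the event loop and resolve a Future on the first message, which also terminates the loop. The sketch below is an assumption of how that would look: it uses a plain socket pair (Unix-like system) so it is self-contained; with psycopg2 you would register conn.fileno() with add_reader and call conn.poll() / drain conn.notifies inside the callback instead.

```python
import asyncio
import socket

async def wait_for_message(sock):
    """Wait for one message on a readable fd, then unregister and return.

    With psycopg2 (assumption): pass conn.fileno() to add_reader, and in the
    callback call conn.poll() and pop payloads from conn.notifies.
    """
    loop = asyncio.get_running_loop()
    got = loop.create_future()

    def on_readable():
        data = sock.recv(4096)
        if not got.done():
            got.set_result(data.decode())

    loop.add_reader(sock.fileno(), on_readable)
    try:
        return await got          # resolves after the first message: no infinite loop
    finally:
        loop.remove_reader(sock.fileno())

async def demo():
    a, b = socket.socketpair()
    a.setblocking(False)
    b.send(b"2022_07_15")         # stands in for a NOTIFY payload
    payload = await wait_for_message(a)
    a.close()
    b.close()
    return payload

payload = asyncio.run(demo())
print(payload)  # 2022_07_15
```

Awaiting the Future replaces the `while True:` loop, so the program ends as soon as one notification has been handled.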
OK, so I'm trying to improve my ASP data entry page to ensure that the entry going into my data table is unique.
The table has SoftwareName and SoftwareType columns. I'm trying to get it so that if the entry page sends an insert query whose parameters match what's already in the table (same title and type), an error is thrown and the data isn't entered.
Something like this:
INSERT INTO tblSoftwareTitles(
    SoftwareName,
    SoftwareSystemType)
VALUES(@SoftwareName, @SoftwareType)
WHERE NOT EXISTS (SELECT SoftwareName
                  FROM tblSoftwareTitles
                  WHERE SoftwareName = @SoftwareName
                  AND SoftwareType = @SoftwareType)
This syntax works great for selecting columns from one table into another without entering duplicates, but it doesn't seem to work with a parameterized insert query. Can anyone help me out with this?
Edit:
Here's the code I'm using in my ASP insert method
private void ExecuteInsert(string name, string type)
{
    // Creates a new connection using the HWM string
    using (SqlConnection HWM = new SqlConnection(GetConnectionStringHWM()))
    {
        // Creates a SQL string with parameters
        string sql = " INSERT INTO tblSoftwareTitles( "
                   + "     SoftwareName, "
                   + "     SoftwareSystemType) "
                   + " SELECT "
                   + "     @SoftwareName, "
                   + "     @SoftwareType "
                   + " WHERE NOT EXISTS "
                   + " ( SELECT 1 "
                   + "   FROM tblSoftwareTitles "
                   + "   WHERE SoftwareName = @SoftwareName "
                   + "   AND SoftwareSystemType = @SoftwareType); ";
        // Opens the connection
        HWM.Open();
        try
        {
            // Creates a SQL command
            using (SqlCommand addSoftware = new SqlCommand {
                CommandType = CommandType.Text,
                Connection = HWM,
                CommandTimeout = 300,
                CommandText = sql })
            {
                // Adds parameters to the SQL command
                addSoftware.Parameters.Add("@SoftwareName", SqlDbType.NVarChar, 200).Value = name;
                addSoftware.Parameters.Add("@SoftwareType", SqlDbType.Int).Value = type;
                // Executes the SQL
                addSoftware.ExecuteNonQuery();
            }
            Alert.Show("Software title saved!");
        }
        catch (System.Data.SqlClient.SqlException ex)
        {
            string msg = "Insert Error:";
            msg += ex.Message;
            throw new Exception(msg);
        }
    }
}
You could do this using an IF statement:
IF NOT EXISTS
( SELECT 1
  FROM tblSoftwareTitles
  WHERE SoftwareName = @SoftwareName
  AND SoftwareSystemType = @SoftwareType
)
BEGIN
    INSERT tblSoftwareTitles (SoftwareName, SoftwareSystemType)
    VALUES (@SoftwareName, @SoftwareType)
END;
You could also do it without IF, using SELECT:
INSERT tblSoftwareTitles (SoftwareName, SoftwareSystemType)
SELECT @SoftwareName, @SoftwareType
WHERE NOT EXISTS
( SELECT 1
  FROM tblSoftwareTitles
  WHERE SoftwareName = @SoftwareName
  AND SoftwareSystemType = @SoftwareType
);
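For what it's worth, the INSERT ... SELECT ... WHERE NOT EXISTS shape can be exercised end-to-end from code. Here is a self-contained sketch using SQLite purely so it runs anywhere; the table name follows the question, and the ? placeholders stand in for the @parameters:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE tblSoftwareTitles (
    SoftwareName TEXT,
    SoftwareSystemType INTEGER)""")

def insert_if_missing(name, stype):
    # INSERT ... SELECT ... WHERE NOT EXISTS: the row is only produced
    # by the SELECT when no matching row is already present.
    conn.execute(
        """INSERT INTO tblSoftwareTitles (SoftwareName, SoftwareSystemType)
           SELECT ?, ?
           WHERE NOT EXISTS (SELECT 1 FROM tblSoftwareTitles
                             WHERE SoftwareName = ? AND SoftwareSystemType = ?)""",
        (name, stype, name, stype))

insert_if_missing("Dynamics AX", 1)
insert_if_missing("Dynamics AX", 1)   # duplicate: filtered out by NOT EXISTS
insert_if_missing("Dynamics NAV", 2)

count = conn.execute("SELECT COUNT(*) FROM tblSoftwareTitles").fetchone()[0]
print(count)  # 2
```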
Both methods are susceptible to a race condition, so while I would still use one of the above to insert, you can safeguard against duplicate inserts with a unique constraint:
CREATE UNIQUE NONCLUSTERED INDEX UQ_tblSoftwareTitles_Softwarename_SoftwareSystemType
ON tblSoftwareTitles (SoftwareName, SoftwareSystemType);
Example on SQL-Fiddle
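To illustrate what the unique constraint buys you from application code, here is a minimal sketch; SQLite stands in for SQL Server, and the exact duplicate-key exception type differs per driver (that part is an assumption to adapt):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE tblSoftwareTitles (SoftwareName TEXT, SoftwareSystemType INTEGER)")
# The unique index is the real safeguard: it closes the race window
# that the NOT EXISTS check leaves open.
conn.execute("""CREATE UNIQUE INDEX UQ_tblSoftwareTitles
                ON tblSoftwareTitles (SoftwareName, SoftwareSystemType)""")

conn.execute("INSERT INTO tblSoftwareTitles VALUES ('Dynamics AX', 1)")
try:
    conn.execute("INSERT INTO tblSoftwareTitles VALUES ('Dynamics AX', 1)")
    outcome = "inserted"
except sqlite3.IntegrityError:
    outcome = "duplicate rejected"
print(outcome)  # duplicate rejected
```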
ADDENDUM
In SQL Server 2008 or later you can use MERGE with HOLDLOCK to remove the chance of a race condition (which is still not a substitute for a unique constraint).
MERGE tblSoftwareTitles WITH (HOLDLOCK) AS t
USING (VALUES (@SoftwareName, @SoftwareType)) AS s (SoftwareName, SoftwareSystemType)
ON s.SoftwareName = t.SoftwareName
AND s.SoftwareSystemType = t.SoftwareSystemType
WHEN NOT MATCHED BY TARGET THEN
    INSERT (SoftwareName, SoftwareSystemType)
    VALUES (s.SoftwareName, s.SoftwareSystemType);
Example of Merge on SQL Fiddle
This isn't an answer. I just want to show that the IF NOT EXISTS(...) INSERT method isn't safe. You have to execute Session #1 first and then Session #2. After session #2 you will see that, without a UNIQUE index, you can get duplicate (SoftwareName, SoftwareSystemType) pairs. The delay in session #1 gives you enough time to execute the second script (session #2); you can reduce this delay.
Session #1 (SSMS > New Query > F5 (Execute))
CREATE DATABASE DemoEXISTS;
GO
USE DemoEXISTS;
GO
CREATE TABLE dbo.Software(
SoftwareID INT PRIMARY KEY,
SoftwareName NCHAR(400) NOT NULL,
SoftwareSystemType NVARCHAR(50) NOT NULL
);
GO
INSERT INTO dbo.Software(SoftwareID,SoftwareName,SoftwareSystemType)
VALUES (1,'Dynamics AX 2009','ERP');
INSERT INTO dbo.Software(SoftwareID,SoftwareName,SoftwareSystemType)
VALUES (2,'Dynamics NAV 2009','SCM');
INSERT INTO dbo.Software(SoftwareID,SoftwareName,SoftwareSystemType)
VALUES (3,'Dynamics CRM 2011','CRM');
INSERT INTO dbo.Software(SoftwareID,SoftwareName,SoftwareSystemType)
VALUES (4,'Dynamics CRM 2013','CRM');
INSERT INTO dbo.Software(SoftwareID,SoftwareName,SoftwareSystemType)
VALUES (5,'Dynamics CRM 2015','CRM');
GO
/*
CREATE UNIQUE INDEX IUN_Software_SoftwareName_SoftwareSystemType
ON dbo.Software(SoftwareName,SoftwareSystemType);
GO
*/
-- Session #1
BEGIN TRANSACTION;
UPDATE dbo.Software
SET SoftwareName='Dynamics CRM',
SoftwareSystemType='CRM'
WHERE SoftwareID=5;
WAITFOR DELAY '00:00:15' -- 15 seconds delay; you have less than 15 seconds to switch SSMS window to session #2
UPDATE dbo.Software
SET SoftwareName='Dynamics AX',
SoftwareSystemType='ERP'
WHERE SoftwareID=1;
COMMIT
--ROLLBACK
PRINT 'Session #1 results:';
SELECT *
FROM dbo.Software;
Session #2 (SSMS > New Query > F5 (Execute))
USE DemoEXISTS;
GO
-- Session #2
DECLARE
    @SoftwareName NVARCHAR(100),
    @SoftwareSystemType NVARCHAR(50);
SELECT
    @SoftwareName = N'Dynamics AX',
    @SoftwareSystemType = N'ERP';
PRINT 'Session #2 results:';
IF NOT EXISTS(SELECT *
              FROM dbo.Software s
              WHERE s.SoftwareName = @SoftwareName
              AND s.SoftwareSystemType = @SoftwareSystemType)
BEGIN
    PRINT 'Session #2: INSERT';
    INSERT INTO dbo.Software(SoftwareID, SoftwareName, SoftwareSystemType)
    VALUES (6, @SoftwareName, @SoftwareSystemType);
END
PRINT 'Session #2: FINISH';
SELECT *
FROM dbo.Software;
Results:
Session #1 results:
SoftwareID SoftwareName SoftwareSystemType
----------- ----------------- ------------------
1 Dynamics AX ERP
2 Dynamics NAV 2009 SCM
3 Dynamics CRM 2011 CRM
4 Dynamics CRM 2013 CRM
5 Dynamics CRM CRM
Session #2 results:
Session #2: INSERT
Session #2: FINISH
SoftwareID SoftwareName SoftwareSystemType
----------- ----------------- ------------------
1 Dynamics AX ERP <-- duplicate (row updated by session #1)
2 Dynamics NAV 2009 SCM
3 Dynamics CRM 2011 CRM
4 Dynamics CRM 2013 CRM
5 Dynamics CRM CRM
6 Dynamics AX ERP <-- duplicate (row inserted by session #2)
There is a great solution for this problem: you can use the MERGE keyword of SQL.
MERGE MyTargetTable hba
USING (SELECT Id = 8, Name = 'Product Listing Message') temp
ON temp.Id = hba.Id
WHEN NOT MATCHED THEN
    INSERT (Id, Name) VALUES (temp.Id, temp.Name);
You can check this with the full sample below:
IF OBJECT_ID ('dbo.TargetTable') IS NOT NULL
DROP TABLE dbo.TargetTable
GO
CREATE TABLE dbo.TargetTable
(
Id INT NOT NULL,
Name VARCHAR (255) NOT NULL,
CONSTRAINT PK_TargetTable PRIMARY KEY (Id)
)
GO
INSERT INTO dbo.TargetTable (Id, Name) VALUES (1, 'Unknown');
INSERT INTO dbo.TargetTable (Id, Name) VALUES (2, 'Mapping');
INSERT INTO dbo.TargetTable (Id, Name) VALUES (3, 'Update');
INSERT INTO dbo.TargetTable (Id, Name) VALUES (4, 'Message');
INSERT INTO dbo.TargetTable (Id, Name) VALUES (5, 'Switch');
INSERT INTO dbo.TargetTable (Id, Name) VALUES (6, 'Unmatched');
INSERT INTO dbo.TargetTable (Id, Name) VALUES (7, 'ProductMessage');
GO
MERGE dbo.TargetTable hba
USING (SELECT Id = 8, Name = 'Listing Message') temp
ON temp.Id = hba.Id
WHEN NOT MATCHED THEN
    INSERT (Id, Name) VALUES (temp.Id, temp.Name);
More of a comment link for suggested further reading... A really good blog article that benchmarks various ways of accomplishing this task can be found here.
It uses a few techniques: "Insert Where Not Exists", the "Merge" statement, "Insert Except", and the typical "left join", to see which way is the fastest to accomplish this task.
The example code used for each technique is as follows (straight copy/paste from their page):
INSERT INTO #table1 (Id, guidd, TimeAdded, ExtraData)
SELECT Id, guidd, TimeAdded, ExtraData
FROM #table2
WHERE NOT EXISTS (Select Id, guidd From #table1 WHERE #table1.id = #table2.id)
-----------------------------------
MERGE #table1 as [Target]
USING (select Id, guidd, TimeAdded, ExtraData from #table2) as [Source]
(id, guidd, TimeAdded, ExtraData)
on [Target].id =[Source].id
WHEN NOT MATCHED THEN
INSERT (id, guidd, TimeAdded, ExtraData)
VALUES ([Source].id, [Source].guidd, [Source].TimeAdded, [Source].ExtraData);
------------------------------
INSERT INTO #table1 (id, guidd, TimeAdded, ExtraData)
SELECT id, guidd, TimeAdded, ExtraData from #table2
EXCEPT
SELECT id, guidd, TimeAdded, ExtraData from #table1
------------------------------
INSERT INTO #table1 (id, guidd, TimeAdded, ExtraData)
SELECT #table2.id, #table2.guidd, #table2.TimeAdded, #table2.ExtraData
FROM #table2
LEFT JOIN #table1 on #table1.id = #table2.id
WHERE #table1.id is null
It's a good read for those who are looking for speed! On SQL 2014, the Insert-Except method turned out to be the fastest for 50 million or more records.
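As a functional sanity check (not a benchmark), the "not exists" and "except" techniques insert the same missing rows. The following self-contained sketch demonstrates that with SQLite and invented table names:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE t1 (id INTEGER PRIMARY KEY, guidd TEXT);
CREATE TABLE t2 (id INTEGER PRIMARY KEY, guidd TEXT);
INSERT INTO t1 VALUES (1, 'a');
INSERT INTO t2 VALUES (1, 'a'), (2, 'b'), (3, 'c');
""")

# Technique 1: WHERE NOT EXISTS
conn.execute("""INSERT INTO t1 (id, guidd)
                SELECT id, guidd FROM t2
                WHERE NOT EXISTS (SELECT 1 FROM t1 WHERE t1.id = t2.id)""")
rows_not_exists = conn.execute("SELECT * FROM t1 ORDER BY id").fetchall()

# Reset t1, then Technique 2: EXCEPT
conn.executescript("DELETE FROM t1; INSERT INTO t1 VALUES (1, 'a');")
conn.execute("""INSERT INTO t1 (id, guidd)
                SELECT id, guidd FROM t2
                EXCEPT
                SELECT id, guidd FROM t1""")
rows_except = conn.execute("SELECT * FROM t1 ORDER BY id").fetchall()

print(rows_not_exists == rows_except)  # True
```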
I know this post is old, but I found an original way to insert values into a table with the keywords INSERT INTO and EXISTS.
I say original because I did not find it on the Internet.
Here it is:
INSERT INTO targetTable(c1,c2)
select value1,value2
WHERE NOT EXISTS(select 1 from targetTable where c1=value1 and c2=value2 )
Isn't ignoring the duplicate raised by the unique constraint a solution?
INSERT IGNORE INTO tblSoftwareTitles...
I am trying to pass a JSON object as an argument to my stored procedure. I tried many things and solved a couple of errors, but at last I am stuck with this error: "psycopg2 connect error: cannot pass more than 100 arguments to a function". I tried json.dumps, which gives me this error; if I execute the procedure from a script with the same input, it works fine. Below is my stored procedure:
CREATE OR REPLACE PROCEDURE public.insert_usr_details_in_all_tables (jsonobject json)
LANGUAGE 'plpgsql'
AS $BODY$
DECLARE
skill_s Text[] := '{}';
selectedSkill TEXT;
edu TEXT;
email TEXT;
phone TEXT;
usr_name TEXT;
clg TEXT;
lstEmployer TEXT;
gendr TEXT;
skill_row_count INTEGER;
usr_id INTEGER;
skl_id INTEGER;
intEmailExist INTEGER;
expen DECIMAL;
BEGIN
-- Getting name
SELECT jsonobject->'Name' into usr_name;
-- Getting education
SELECT jsonobject->'Education' into edu;
-- Getting phone number
SELECT jsonobject->'phone_number' into phone;
-- Getting experience
SELECT (jsonobject->>'Exp')::DECIMAL INTO expen;
--Getting college
SELECT jsonobject->'College' into clg;
-- Getting last employer
SELECT jsonobject->'CurrentEmployer' into lstEmployer;
--Getting gender
SELECT jsonobject->'Gender' into gendr;
-- Getting Email
SELECT json_array_length(jsonobject->'Email') into intEmailExist;
IF intEmailExist > 0 THEN
email:=array(SELECT json_array_elements_text(jsonobject->'Email'));
END IF;
-- Insert user details into the 'Extractor_usr_details' table.
INSERT INTO public."Extractor_usr_details"
    (name, phone_number, email, exp, education, college, "currentEmployer", gender)
VALUES
    (usr_name, phone, email, expen, edu, clg, lstEmployer, gendr)
RETURNING "id" INTO usr_id;
skill_s := array(SELECT json_array_elements_text(jsonObject->'Skills'));
FOREACH selectedSkill IN ARRAY skill_s LOOP
    INSERT INTO public."Extractor_skills" (skill)
    SELECT selectedSkill
    WHERE NOT EXISTS (SELECT id FROM public."Extractor_skills" WHERE skill = selectedSkill)
    RETURNING id INTO skl_id;
    INSERT INTO public."Extractor_usr_skills" (skill_id, user_id)
    SELECT skl_id, usr_id
    WHERE NOT EXISTS (SELECT id FROM public."Extractor_usr_skills" WHERE skill_id = skl_id AND user_id = usr_id);
END LOOP;
COMMIT;
END;
$BODY$;
And here is my code to call the stored procedure in my Python project:
from psycopg2 import connect
import json

try:
    # declare a new PostgreSQL connection object
    conn = connect(
        dbname="postgres",
        user="postgres",
        host="localhost",
        password="demo",
        port='5432'
        # attempt to connect for 3 seconds then raise exception
    )
    cur = conn.cursor()
    cur.callproc('insert_usr_details_in_all_tables', json.dumps(jsonObject))
except Exception as err:
    print("\npsycopg2 connect error:", err)
    conn = None
    cur = None
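Not a full answer, but the error text is consistent with the JSON string itself being unpacked as the parameter sequence: callproc expects a sequence of parameters, and a Python str is a sequence of characters. A likely fix (an assumption, since the procedure cannot be run here) is to wrap the string in a one-element list, e.g. cur.callproc('insert_usr_details_in_all_tables', [json.dumps(jsonObject)]). The sequence behaviour is easy to see without a database:

```python
import json

# Hypothetical profile matching the procedure's expected keys
json_object = {"Name": "John", "Education": "MS", "Skills": ["python", "sql"],
               "Email": ["john@example.com"], "Exp": 3.5}
payload = json.dumps(json_object)

# A str is a sequence: iterated as parameters, it yields one "argument"
# per character, which quickly exceeds Postgres's 100-argument limit.
print(len(payload))     # character count, i.e. the "argument" count
print(len([payload]))   # 1: wrapped in a list, it is a single argument
```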
I'm pushing data from a DataFrame into MySQL; right now it only adds new data to the table if the data does not exist (appending). This works perfectly, but I also want my code to check whether the record already exists and, if so, update it. So I need append + update. I really don't know how to start fixing this and I'm stuck... has anyone tried this before?
This is my code:
engine = create_engine("mysql+pymysql://{user}:{pw}@localhost/{db}"
                       .format(user="root",
                               pw="*****",
                               db="my_db"))
my_df.to_sql('my_table', con=engine, if_exists='append')
You can use the following solution on the DB side:
First, create a table to receive the data from Pandas (let's call it test):
CREATE TABLE `test` (
`id` INT(11) NOT NULL AUTO_INCREMENT,
`name` VARCHAR(100) NOT NULL,
`capacity` INT(11) NOT NULL,
PRIMARY KEY (`id`)
);
Second, create a table for the resulting data (let's call it cumulative_test), with exactly the same structure as test:
CREATE TABLE `cumulative_test` (
`id` INT(11) NOT NULL AUTO_INCREMENT,
`name` VARCHAR(100) NOT NULL,
`capacity` INT(11) NOT NULL,
PRIMARY KEY (`id`)
);
Third, set a trigger so that each insert into the test table will insert or update a record in the second table:
DELIMITER $$
CREATE
/*!50017 DEFINER = 'root'@'localhost' */
TRIGGER `before_test_insert` BEFORE INSERT ON `test`
FOR EACH ROW BEGIN
    DECLARE _id INT;
    SELECT id INTO _id
    FROM `cumulative_test` WHERE `cumulative_test`.`name` = NEW.name;
    IF _id IS NOT NULL THEN
        UPDATE `cumulative_test`
        SET `cumulative_test`.`capacity` = `cumulative_test`.`capacity` + NEW.capacity
        WHERE `cumulative_test`.`id` = _id;
    ELSE
        INSERT INTO `cumulative_test` (`name`, `capacity`)
        VALUES (NEW.name, NEW.capacity);
    END IF;
END;
$$
DELIMITER ;
So you keep inserting values into the test table and get the accumulated results in the second table. The logic inside the trigger can be adapted to your needs.
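If you would rather avoid a trigger, the same insert-or-accumulate behaviour can be written as a single UPSERT statement. Here is a self-contained sketch using SQLite's ON CONFLICT ... DO UPDATE (MySQL's equivalent is INSERT ... ON DUPLICATE KEY UPDATE), reusing the cumulative_test names from above with invented sample data:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# UNIQUE(name) is what makes the conflict clause fire on duplicates.
conn.execute("""CREATE TABLE cumulative_test (
    id INTEGER PRIMARY KEY AUTOINCREMENT,
    name TEXT NOT NULL UNIQUE,
    capacity INTEGER NOT NULL)""")

def add_capacity(name, capacity):
    # Insert a new row, or add to the capacity of an existing one.
    conn.execute(
        """INSERT INTO cumulative_test (name, capacity) VALUES (?, ?)
           ON CONFLICT(name) DO UPDATE SET capacity = capacity + excluded.capacity""",
        (name, capacity))

add_capacity("warehouse A", 10)
add_capacity("warehouse B", 5)
add_capacity("warehouse A", 7)   # existing row: accumulated, not duplicated

rows = conn.execute("SELECT name, capacity FROM cumulative_test ORDER BY name").fetchall()
print(rows)  # [('warehouse A', 17), ('warehouse B', 5)]
```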
Similar to the approach used for PostgreSQL here, you can use INSERT … ON DUPLICATE KEY in MySQL:
import pandas as pd
import sqlalchemy as sa

# (the SQLAlchemy `engine` is assumed to be created as in the question)
with engine.begin() as conn:
    # step 0.0 - create test environment
    conn.execute(sa.text("DROP TABLE IF EXISTS main_table"))
    conn.execute(
        sa.text(
            "CREATE TABLE main_table (id int primary key, txt varchar(50))"
        )
    )
    conn.execute(
        sa.text(
            "INSERT INTO main_table (id, txt) VALUES (1, 'row 1 old text')"
        )
    )
    # step 0.1 - create DataFrame to UPSERT
    df = pd.DataFrame(
        [(2, "new row 2 text"), (1, "row 1 new text")], columns=["id", "txt"]
    )
    # step 1 - create temporary table and upload DataFrame
    conn.execute(
        sa.text(
            "CREATE TEMPORARY TABLE temp_table (id int primary key, txt varchar(50))"
        )
    )
    df.to_sql("temp_table", conn, index=False, if_exists="append")
    # step 2 - merge temp_table into main_table
    conn.execute(
        sa.text(
            """\
            INSERT INTO main_table (id, txt)
            SELECT id, txt FROM temp_table
            ON DUPLICATE KEY UPDATE txt = VALUES(txt)
            """
        )
    )
    # step 3 - confirm results
    result = conn.execute(
        sa.text("SELECT * FROM main_table ORDER BY id")
    ).fetchall()
    print(result)  # [(1, 'row 1 new text'), (2, 'new row 2 text')]
I'm using MySQL 5.5, Python 2.6 and MySQLdb package
What I have:
Procedure #1
DROP PROCEDURE IF EXISTS log_create;
DELIMITER $$
CREATE PROCEDURE log_create
(
IN p_log_type_id INT,
IN p_body TEXT
)
BEGIN
INSERT INTO log(
log_type_id,
body
)
VALUES(
p_log_type_id,
p_body
);
SELECT LAST_INSERT_ID() as log_id;
END $$
DELIMITER ;
Procedure #2
DROP PROCEDURE IF EXISTS http_request_log_create;
DELIMITER $$
CREATE PROCEDURE http_request_log_create
(
IN p_log_type_id INT,
IN p_body TEXT,
IN p_host VARCHAR(255),
IN p_port SMALLINT UNSIGNED,
IN p_url VARCHAR(1024),
IN p_method VARCHAR(8),
IN p_customer_id VARCHAR(128),
IN p_protocol VARCHAR(8),
IN p_query_parameters VARCHAR(1024),
IN p_duration DECIMAL(3,3) UNSIGNED
)
BEGIN
CALL log_create(p_log_type_id, p_body);
SET @v_log_id = LAST_INSERT_ID();
INSERT INTO http_request_log (
log_id,
host,
port,
url,
method,
customer_id,
protocol,
query_parameters,
duration
)
VALUES (
@v_log_id,
p_host,
p_port,
p_url,
p_method,
p_customer_id,
p_protocol,
p_query_parameters,
p_duration
);
SELECT LAST_INSERT_ID() as http_request_log_id;
END $$
DELIMITER ;
Procedure #3:
DROP PROCEDURE IF EXISTS api_error_log_create;
DELIMITER $$
CREATE PROCEDURE api_error_log_create
(
IN p_log_type_id INT,
IN p_body TEXT,
IN p_host VARCHAR(255),
IN p_port SMALLINT UNSIGNED,
IN p_url VARCHAR(1024),
IN p_method VARCHAR(8),
IN p_customer_id VARCHAR(128),
IN p_protocol VARCHAR(8),
IN p_query_parameters VARCHAR(1024),
IN p_duration DECIMAL(3,3) UNSIGNED,
IN p_message VARCHAR(512),
IN p_stack_trace TEXT,
IN p_version VARCHAR(8)
)
BEGIN
CALL http_request_log_create(p_log_type_id, p_body, p_host, p_port, p_url, p_method, p_customer_id, p_protocol, p_query_parameters, p_duration);
INSERT INTO api_error_log (
http_request_log_id,
message,
stack_trace,
version
)
VALUES (
LAST_INSERT_ID(),
p_message,
p_stack_trace,
p_version
);
SELECT LAST_INSERT_ID() as api_error_log_id;
END $$
DELIMITER ;
As you can see, I'm using chain of stored procedures calls and this works fine for me. But...
def create_api_error_log(self, connection, model):
    result = self.create_record(
        connection,
        'api_error_log_create',
        (model.log_type_id,
         model.body,
         model.host,
         model.port,
         model.url,
         model.method,
         model.customer_id,
         model.protocol,
         model.query_parameters,
         model.duration,
         model.message,
         model.stack_trace,
         model.version,
         model.api_error_log_id))
    return ApiErrorLogCreateResult(result)
Here, the result variable contains the dictionary:
{'log_id': _some_int_}
log_id, LOG_ID!!!, not the required api_error_log_id.
As I understand it, the cursor returns the result of the first SELECT statement in the chain of stored procedure calls.
I need the api_error_log_id value correctly returned from the function call.
I know how to get it in a few other ways, but I need to know whether it is possible to obtain the required id my way.
Edit 1:
def create_record(self, conn, proc_name, proc_params):
    result = self.common_record(conn, proc_name, proc_params, 'fetchone')
    return result and result.itervalues().next()

def common_record(self, conn, proc_name, proc_params=(), result_func='', method='callproc'):
    cursor = conn.cursor()
    eval('cursor.%s' % method)(proc_name, proc_params)
    result = result_func and eval('cursor.%s' % result_func)() or cursor.rowcount > 0
    cursor.close()
    # print 'sql: ', proc_name, proc_params
    return result
Well, the solution is quite simple.
All the result sets from SELECT statements during the last query (the chain of stored procedure calls) are available for reading. By default, the cursor holds the first SELECT result available for fetching. You can use
cursor.nextset()
to check whether there are any other SELECT results. This method returns True or False; if True, you can use the fetch methods to obtain the results of the next SELECT statement.
Cheers.
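The resulting drain loop looks like this. The sketch uses a stand-in cursor class so it is self-contained; with MySQLdb you would pass the real cursor after callproc (the dictionary rows assume a DictCursor):

```python
class FakeCursor:
    """Stand-in for a DB-API cursor holding several result sets."""
    def __init__(self, result_sets):
        self._sets = list(result_sets)
        self._idx = 0
    def fetchone(self):
        return self._sets[self._idx][0]
    def nextset(self):
        # DB-API: advance to the next result set; None/False when exhausted.
        if self._idx + 1 < len(self._sets):
            self._idx += 1
            return True
        return None

def last_result(cursor):
    """Advance past earlier result sets and fetch a row from the final one."""
    row = cursor.fetchone()
    while cursor.nextset():
        row = cursor.fetchone()
    return row

# Three SELECTs from the chained procedures -> three result sets;
# the id we want is in the last one.
cur = FakeCursor([[{"log_id": 7}],
                  [{"http_request_log_id": 12}],
                  [{"api_error_log_id": 99}]])
print(last_result(cur))  # {'api_error_log_id': 99}
```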
I have a SQL script as follows:
DECLARE @AGE INT = ?
      , @NAME VARCHAR(20) = ?
INSERT INTO [dbo].[new_table] (AGE, NAME)
SELECT @AGE, @NAME
SELECT ID = CAST(SCOPE_IDENTITY() AS INT)
In the table there is an IDENTITY column ID, defined as INT, so its values increase as new rows are inserted. My goal is to retrieve the ID that was just inserted.
The above code works fine in SQL.
Then I tried to run it in python, using following code:
import pyodbc

conn = pyodbc.connect("driver={SQL Server}; server=MyServer; database=MyDatabase; trusted_connection=true")
cursor = conn.cursor()
SQL_command = """
DECLARE @AGE INT = ?
      , @NAME VARCHAR(20) = ?
INSERT INTO [dbo].[new_table] (AGE, NAME)
SELECT @AGE, @NAME
SELECT ID = CAST(SCOPE_IDENTITY() AS INT)
"""
cursor.execute(SQL_command, 23, 'TOM')
result = cursor.fetchall()
However, I've got following error message:
Traceback (most recent call last):
File "C:\Users\wwang\Documents\Aptana Studio 3 Workspace\ComparablesBuilder\test.py", line 119, in
result = cursor.fetchall()
pyodbc.ProgrammingError: No results. Previous SQL was not a query.
So, may I know why the same code doesn't work in Python? Is my usage of pyodbc incorrect?
Many thanks.
It's possible that the multiple statements in your SQL batch are being interpreted as separate result sets by the Python driver - the first row-count-returning statement is the INSERT, which could be the culprit.
Try adding SET NOCOUNT ON; before your statements to suppress row counts from non-queries:
SET NOCOUNT ON;
DECLARE @AGE INT = ?
      , @NAME VARCHAR(20) = ?
...
SELECT ID = CAST(SCOPE_IDENTITY() AS INT);
Edit
IIRC, some drivers also depend on SQL Server's row counts to parse result sets correctly. So if the above fails, you might also try:
SET NOCOUNT ON;
DECLARE ...;
INSERT ...;
SET NOCOUNT OFF; -- i.e. Turn back on again so the 1 row count can be returned.
SELECT CAST(SCOPE_IDENTITY() AS INT) AS ID;