I'm using MySQL 5.5, Python 2.6 and the MySQLdb package.
What I have:
Procedure #1
DROP PROCEDURE IF EXISTS log_create;
DELIMITER $$
CREATE PROCEDURE log_create
(
IN p_log_type_id INT,
IN p_body TEXT
)
BEGIN
INSERT INTO log(
log_type_id,
body
)
VALUES(
p_log_type_id,
p_body
);
SELECT LAST_INSERT_ID() as log_id;
END $$
DELIMITER ;
Procedure #2
DROP PROCEDURE IF EXISTS http_request_log_create;
DELIMITER $$
CREATE PROCEDURE http_request_log_create
(
IN p_log_type_id INT,
IN p_body TEXT,
IN p_host VARCHAR(255),
IN p_port SMALLINT UNSIGNED,
IN p_url VARCHAR(1024),
IN p_method VARCHAR(8),
IN p_customer_id VARCHAR(128),
IN p_protocol VARCHAR(8),
IN p_query_parameters VARCHAR(1024),
IN p_duration DECIMAL(3,3) UNSIGNED
)
BEGIN
CALL log_create(p_log_type_id, p_body);
SET @v_log_id = LAST_INSERT_ID();
INSERT INTO http_request_log (
log_id,
host,
port,
url,
method,
customer_id,
protocol,
query_parameters,
duration
)
VALUES (
@v_log_id,
p_host,
p_port,
p_url,
p_method,
p_customer_id,
p_protocol,
p_query_parameters,
p_duration
);
SELECT LAST_INSERT_ID() as http_request_log_id;
END $$
DELIMITER ;
Procedure #3:
DROP PROCEDURE IF EXISTS api_error_log_create;
DELIMITER $$
CREATE PROCEDURE api_error_log_create
(
IN p_log_type_id INT,
IN p_body TEXT,
IN p_host VARCHAR(255),
IN p_port SMALLINT UNSIGNED,
IN p_url VARCHAR(1024),
IN p_method VARCHAR(8),
IN p_customer_id VARCHAR(128),
IN p_protocol VARCHAR(8),
IN p_query_parameters VARCHAR(1024),
IN p_duration DECIMAL(3,3) UNSIGNED,
IN p_message VARCHAR(512),
IN p_stack_trace TEXT,
IN p_version VARCHAR(8)
)
BEGIN
CALL http_request_log_create(p_log_type_id, p_body, p_host, p_port, p_url, p_method, p_customer_id, p_protocol, p_query_parameters, p_duration);
INSERT INTO api_error_log (
http_request_log_id,
message,
stack_trace,
version
)
VALUES (
LAST_INSERT_ID(),
p_message,
p_stack_trace,
p_version
);
SELECT LAST_INSERT_ID() as api_error_log_id;
END $$
DELIMITER ;
As you can see, I'm using chain of stored procedures calls and this works fine for me. But...
def create_api_error_log(self, connection, model):
    result = self.create_record(
        connection,
        'api_error_log_create',
        (model.log_type_id,
         model.body,
         model.host,
         model.port,
         model.url,
         model.method,
         model.customer_id,
         model.protocol,
         model.query_parameters,
         model.duration,
         model.message,
         model.stack_trace,
         model.version,
         model.api_error_log_id))
    return ApiErrorLogCreateResult(result)
Here, the result variable contains this dictionary:
{'log_id': _some_int_}
That is log_id, not the required api_error_log_id.
As I understand it, the cursor returns the result of the first SELECT statement executed during the chain of stored procedure calls.
I need the api_error_log_id value returned from the function call.
I know how to get it in a few other ways, but I need to know whether it is possible to obtain the required id my way.
Edit 1:
def create_record(self, conn, proc_name, proc_params):
    result = self.common_record(conn, proc_name, proc_params, 'fetchone')
    return result and result.itervalues().next()

def common_record(self, conn, proc_name, proc_params=(), result_func='', method='callproc'):
    cursor = conn.cursor()
    getattr(cursor, method)(proc_name, proc_params)
    result = getattr(cursor, result_func)() if result_func else cursor.rowcount > 0
    cursor.close()
    # print 'sql: ', proc_name, proc_params
    return result
Well, the solution is quite simple.
All the result sets produced by SELECT statements during the last query (the chain of stored procedure calls) are available for reading. By default, the cursor exposes the first result set for fetching. You can use
cursor.nextset()
to check whether any other result sets remain. This method returns True or False. If True, you can use the fetch methods to obtain the results of the next SELECT statement.
Cheers.
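To make that concrete, here is a sketch of draining every result set until the last one. FakeCursor is a stand-in invented here to mimic the driver's multiple-result-set behaviour, so the loop can be shown without a live MySQL connection:

```python
class FakeCursor:
    """Stand-in cursor mimicking MySQLdb's multi-result-set behaviour."""
    def __init__(self, result_sets):
        self._sets = result_sets
        self._index = 0

    def fetchone(self):
        # Return the first row of the current result set.
        return self._sets[self._index][0]

    def nextset(self):
        # Advance to the next result set; falsy when none remain.
        if self._index + 1 < len(self._sets):
            self._index += 1
            return True
        return None

def last_result_set(cursor):
    """Advance past intermediate SELECT results and fetch the final one."""
    row = cursor.fetchone()
    while cursor.nextset():
        row = cursor.fetchone()
    return row

cursor = FakeCursor([
    [{'log_id': 7}],                # from log_create
    [{'http_request_log_id': 12}],  # from http_request_log_create
    [{'api_error_log_id': 31}],     # from api_error_log_create
])
print(last_result_set(cursor))  # {'api_error_log_id': 31}
```

With a real MySQLdb cursor, the same `while cursor.nextset():` loop skips the inner procedures' SELECTs and leaves the last result set, the api_error_log_id, available for fetching.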
I am trying to create a Python program to dynamically create a partition on a partitioned Postgres table ("my_test_hist") and load data into the partition based on the eff_date column. If the partition exists, the records should be loaded into it; otherwise a new partition should be created and the data loaded into the new partition.
With the below sample data, there should be 2 partitions created in the "my_test_hist" partitioned table -
The partitioned table ("my_test_hist") will source data from the "my_test" table by:
INSERT INTO my_test_hist SELECT * from my_test;
I am getting the following error while running the Python program:
no partition of relation "my_test_hist" found for row
DETAIL: Partition key of the failing row contains (eff_date) = (2022-07-15)
The code snippets are as follows:
create table my_test -- Source table non-partitioned
(
id int,
area_code varchar(10),
fname varchar(10),
eff_date date
) ;
INSERT INTO my_test VALUES(1, 'A505', 'John', DATE '2022-07-15');
INSERT INTO my_test VALUES(2, 'A506', 'Mike', DATE '2022-07-20');
COMMIT;
create table my_test_hist -- Target table partitioned
(
id int,
area_code varchar(10),
fname varchar(10),
eff_date date
) PARTITION BY LIST (eff_date) ; -- Partitioned by List on eff_date col
DB Func:
CREATE FUNCTION insert_my_test_hist_part() RETURNS trigger
LANGUAGE plpgsql
AS $$
BEGIN
BEGIN
/* try to create a table for the new partition */
EXECUTE
format(
'CREATE TABLE %I (LIKE my_test_hist INCLUDING DEFAULTS)',
'my_test_hist_' || to_char(NEW.eff_date, 'YYYY_MM_DD')
);
/* tell listener to attach the partition (only if a new table was created) */
EXECUTE
format(
'NOTIFY my_test_hist, %L', to_char(NEW.eff_date, 'YYYY_MM_DD')
);
EXCEPTION
WHEN duplicate_table THEN
NULL; -- ignore
END;
/* insert into the new partition */
EXECUTE
format(
'INSERT INTO %I VALUES ($1.*)', 'my_test_hist_' || to_char(NEW.eff_date, 'YYYY_MM_DD') )
USING NEW;
RETURN NULL;
END;
$$;
DB Trigger:
CREATE OR REPLACE TRIGGER insert_my_test_hist_part_trigger
BEFORE INSERT ON MY_TEST_HIST FOR EACH ROW
WHEN (pg_trigger_depth() < 1)
EXECUTE FUNCTION insert_my_test_hist_part();
Python Listener Program:
try:
    conn = psycopg2.connect(host=hostName, dbname=dbName, user=userName, password=password, port=port)
    conn.set_isolation_level(psycopg2.extensions.ISOLATION_LEVEL_AUTOCOMMIT)
    cursor = conn.cursor()

    def listen_notify():
        cursor.execute("LISTEN my_test_hist;")
        query1 = "ALTER TABLE my_test_hist ADD PARTITION my_test_hist_{} FOR VALUES IN ('{}');"
        query2 = "INSERT INTO my_test_hist SELECT * FROM my_test ;"
        while True:
            cursor.execute(query2)  # Trigger the insert to notify
            if select.select([conn], [], [], 5) == ([], [], []):
                print("Timeout")
            else:
                conn.poll()
                while conn.notifies:
                    notify = conn.notifies.pop()
                    var = notify.payload
                    query = query1.format(var, var)
                    cursor.execute(query)
                conn.notifies.clear()

    # Call the function
    listen_notify()
except Exception as e:
    print("Exception occurred: " + str(e))
Can anyone please help me fix this error in the Python program? Also, please let me know how to use asyncio here, and how I can terminate the infinite loop once the message is caught.
Thanks.
I am trying to pass a JSON object as an argument to my stored procedure. I tried many things and solved a couple of errors; at last I am stuck with this error: "psycopg2 connect error: cannot pass more than 100 arguments to a function". I tried json.dumps, which gives me this error. If I execute the procedure from a script with the same input, it works fine. Below is my stored procedure.
CREATE OR REPLACE PROCEDURE public.insert_usr_details_in_all_tables (jsonobject json)
LANGUAGE 'plpgsql'
AS $BODY$
DECLARE
skill_s Text[] := '{}';
selectedSkill TEXT;
edu TEXT;
email TEXT;
phone TEXT;
usr_name TEXT;
clg TEXT;
lstEmployer TEXT;
gendr TEXT;
skill_row_count INTEGER;
usr_id INTEGER;
skl_id INTEGER;
intEmailExist INTEGER;
expen DECIMAL;
BEGIN
-- Getting name
SELECT jsonobject->'Name' into usr_name;
-- Getting education
SELECT jsonobject->'Education' into edu;
-- Getting phone number
SELECT jsonobject->'phone_number' into phone;
-- Getting experience
SELECT (jsonobject->>'Exp')::DECIMAL INTO expen;
--Getting college
SELECT jsonobject->'College' into clg;
-- Getting last employer
SELECT jsonobject->'CurrentEmployer' into lstEmployer;
--Getting gender
SELECT jsonobject->'Gender' into gendr;
-- Getting Email
SELECT json_array_length(jsonobject->'Email') into intEmailExist;
IF intEmailExist > 0 THEN
email:=array(SELECT json_array_elements_text(jsonobject->'Email'));
END IF;
-- Insert user details in 'Extractor_usr_details' table.
INSERT INTO public."Extractor_usr_details"(name, phone_number, email, exp, education, college, "currentEmployer", gender)VALUES ( usr_name, phone, email, expen, edu, clg, lstEmployer, gendr) returning "id" into usr_id;
skill_s := array(SELECT json_array_elements_text(jsonObject->'Skills'));
FOREACH selectedSkill IN ARRAY skill_s LOOP
INSERT INTO public."Extractor_skills" (skill) SELECT selectedSkill WHERE NOT EXISTS (SELECT id FROM public."Extractor_skills" WHERE skill = selectedSkill)RETURNING id into skl_id;
INSERT INTO public."Extractor_usr_skills" (skill_id, user_id) SELECT skl_id, usr_id WHERE NOT EXISTS (SELECT id FROM public."Extractor_usr_skills" WHERE skill_id = skl_id AND user_id = usr_id);
END LOOP;
COMMIT;
END;
$BODY$;
And here is my code to call the stored procedure from my Python project:
try:
    # declare a new PostgreSQL connection object
    conn = connect(
        dbname="postgres",
        user="postgres",
        host="localhost",
        password="demo",
        port='5432'
        # attempt to connect for 3 seconds then raise exception
    )
    cur = conn.cursor()
    cur.callproc('insert_usr_details_in_all_tables', json.dumps(jsonObject))
except Exception as err:
    print("\npsycopg2 connect error:", err)
    conn = None
    cur = None
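For what it's worth, the "cannot pass more than 100 arguments" message is consistent with callproc receiving the JSON string itself as the parameter sequence: a string is iterable, so each character would be bound as a separate argument. Wrapping it in a one-element list binds it as a single parameter. A small sketch (the sample payload is invented):

```python
import json

# Invented sample payload; jsonObject in the question would play this role.
json_object = {"Name": "John", "Email": ["john@example.com"], "Skills": ["sql"]}
payload = json.dumps(json_object)

# A Python string is itself a sequence, so iterating it yields one character
# per element; a long JSON document easily exceeds PostgreSQL's
# 100-argument limit on function calls.
assert len(list(payload)) == len(payload)

# A one-element list, by contrast, is a single parameter:
params = [payload]
assert len(params) == 1
# i.e. cur.callproc('insert_usr_details_in_all_tables', [json.dumps(json_object)])
```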
I'd like to have returned to me (via cx_oracle in python) the value of the Identity that's created for a row that I'm inserting. I think I can figure out the python bit on my own, if someone could please state how to modify my SQL statement to get the ID of the newly-created row.
I have a table that's created with something like the following:
CREATE TABLE hypervisor
(
id NUMBER GENERATED BY DEFAULT AS IDENTITY (
START WITH 1 NOCACHE ORDER ) NOT NULL ,
name VARCHAR2 (50)
)
LOGGING ;
ALTER TABLE hypervisor ADD CONSTRAINT hypervisor_PK PRIMARY KEY ( id ) ;
And I have SQL that's similar to the following:
insert into hypervisor ( name ) values ('my hypervisor')
Is there an easy way to obtain the id of the newly inserted row? I'm happy to modify my SQL statement to have it returned, if that's possible.
Most of the google hits on this issue were for version 11 and below, which don't support automatically-generated identity columns so hopefully someone here can help out.
Taking what user2502422 said above and adding the python bit:
newest_id_wrapper = cursor.var(cx_Oracle.STRING)
sql_params = {"newest_id_sql_param": newest_id_wrapper}
sql = "insert into hypervisor ( name ) values ('my hypervisor') " + \
      "returning id into :newest_id_sql_param"
cursor.execute(sql, sql_params)
newest_id = newest_id_wrapper.getvalue()
This example taken from learncodeshare.net has helped me grasp the correct syntax.
cur = con.cursor()
new_id = cur.var(cx_Oracle.NUMBER)
statement = 'insert into cx_people(name, age, notes) values (:1, :2, :3) returning id into :4'
cur.execute(statement, ('Sandy', 31, 'I like horses', new_id))
sandy_id = new_id.getvalue()
pet_statement = 'insert into cx_pets (name, owner, type) values (:1, :2, :3)'
cur.execute(pet_statement, ('Big Red', sandy_id, 'horse'))
con.commit()
It's only slightly different from ragerdl's answer, but different enough to be added here I believe!
Notice the absence of sql_params = { "newest_id_sql_param" : newest_id_wrapper }
Use the returning clause of the insert statement.
insert into hypervisor (name ) values ('my hypervisor')
returning id into :python_var
You said you could handle the Python bit? You should be able to bind the return parameter in your program.
I liked the answer by Marco Polo, but it is incomplete.
The answer from FelDev is good too but does not address named parameters.
Here is a more complete example from code I wrote with a simplified table (less fields). I have omitted code on how to set up a cursor since that is well documented elsewhere.
import cx_Oracle
from multiprocessing import Process

INSERT_A_LOG = '''INSERT INTO A_LOG(A_KEY, REGION, DIR_NAME, FILENAME)
    VALUES(A_KEY_Sequence.nextval, :REGION, :DIR_NAME, :FILENAME)
    RETURNING A_KEY INTO :A_LOG_ID'''

CURSOR = None

class DataProcessor(Process):
    # Other code for setting up connection to DB and storing it in CURSOR

    def save_log_entry(self, row):
        global CURSOR
        # Oracle variable to hold value of last insert
        log_var = CURSOR.var(cx_Oracle.NUMBER)
        row['A_LOG_ID'] = log_var
        row['REGION'] = 'R7'  # Other entries set elsewhere
        try:
            # This will fail unless row.keys() ==
            # ['REGION', 'DIR_NAME', 'FILENAME', 'A_LOG_ID']
            CURSOR.execute(INSERT_A_LOG, row)
        except Exception as e:
            row['REJCTN_CD'] = 'InsertFailed'
            raise
        # Get last inserted ID from Oracle for update
        self.last_log_id = log_var.getvalue()
        print('Insert id was {}'.format(self.last_log_id))
Agreeing with the older answers. However, depending on your version of cx_Oracle (7.0 and newer), var.getvalue() might return an array instead of a scalar.
This is to support multiple return values as stated in this comment.
Also note that cx_Oracle is deprecated and has been renamed to oracledb.
Example:
newId = cur.var(oracledb.NUMBER, outconverter=int)
sql = """insert into Locations(latitude, longitude) values (:latitude, :longitude) returning locationId into :newId"""
sqlParam = [latitude, longitude, newId]
cur.execute(sql, sqlParam)
newIdValue = newId.getvalue()
newIdValue would then be [1] instead of 1.
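If code has to run against both older and newer driver versions, a small helper can normalize the two shapes. scalar is a hypothetical name, not part of the driver API:

```python
def scalar(value):
    """Normalize var.getvalue(): newer driver versions may hand back a
    one-element list, older ones a plain scalar."""
    if isinstance(value, list):
        return value[0]
    return value

# Works for both shapes of getvalue() result:
assert scalar([1]) == 1
assert scalar(1) == 1
```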
I have a SQL script like follows:
DECLARE @AGE INT = ?
      , @NAME VARCHAR(20) = ?
INSERT INTO [dbo].[new_table] (AGE, NAME)
SELECT @AGE, @NAME
SELECT ID = CAST(SCOPE_IDENTITY() AS INT)
In the table, there is an IDENTITY column ID which is defined as INT, so the values of the ID column increase as new rows are inserted. My goal is to retrieve the ID of the row that was just inserted.
The above code works fine in SQL.
Then I tried to run it in python, using following code:
conn = pyodbc.connect("driver={SQL Server}; server=MyServer; database=MyDatabase; trusted_connection=true")
cursor = conn.cursor()
SQL_command = """
DECLARE @AGE INT = ?
      , @NAME VARCHAR(20) = ?
INSERT INTO [dbo].[new_table] (AGE, NAME)
SELECT @AGE, @NAME
SELECT ID = CAST(SCOPE_IDENTITY() AS INT)
"""
cursor.execute(SQL_command, 23, 'TOM')
result = cursor.fetchall()
However, I've got following error message:
Traceback (most recent call last):
File "C:\Users\wwang\Documents\Aptana Studio 3 Workspace\ComparablesBuilder\test.py", line 119, in
result = cursor.fetchall()
pyodbc.ProgrammingError: No results. Previous SQL was not a query.
So, may I know why the same code does not work in Python? Is my usage of pyodbc incorrect?
Many thanks.
It's possible that the multiple statements in your Sql Batch are being interpreted as separate result sets to the Python driver - the first row-count returning statement is the INSERT statement, which could be the culprit.
Try adding SET NOCOUNT ON; before your statements, to suppress row counts from non-queries:
SET NOCOUNT ON;
DECLARE @AGE INT = ?
      , @NAME VARCHAR(20) = ?
...
SELECT ID = CAST(SCOPE_IDENTITY() AS INT);
Edit
IIRC, some drivers are also dependent on Sql Server's ROW COUNT to parse result sets correctly. So if the above fails, you might also try:
SET NOCOUNT ON;
DECLARE ...;
INSERT ...;
SET NOCOUNT OFF; -- i.e. Turn back on again so the 1 row count can be returned.
SELECT CAST(SCOPE_IDENTITY() AS INT) AS ID;
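An alternative on the Python side is to advance past the non-query results with cursor.nextset() until a row-returning result set appears; pyodbc's cursor.description is None for results that carry no columns. A sketch with a stand-in cursor (FakeCursor, invented here) rather than a live connection:

```python
class FakeCursor:
    """Stand-in cursor: None entries mark row-count-only (non-query) results."""
    def __init__(self, sets):
        self._sets = sets
        self._i = 0

    @property
    def description(self):
        # pyodbc-style: None when the current result has no columns.
        return None if self._sets[self._i] is None else [('ID',)]

    def fetchall(self):
        return self._sets[self._i]

    def nextset(self):
        if self._i + 1 < len(self._sets):
            self._i += 1
            return True
        return False

def first_rowset(cursor):
    """Skip row-count results until a real SELECT result set is reached."""
    while cursor.description is None:
        if not cursor.nextset():
            raise RuntimeError('no row-returning result set')
    return cursor.fetchall()

# INSERT's row count, then the SELECT SCOPE_IDENTITY() result:
cur = FakeCursor([None, [(42,)]])
print(first_rowset(cur))  # [(42,)]
```

With a real pyodbc cursor, the same loop reaches the identity-returning SELECT even without SET NOCOUNT ON.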
I am using TxPostgres to insert a row into PostgreSQL. My stored procedure is
CREATE OR REPLACE FUNCTION gps_open_connection(
_ip character varying(15),
_port integer
) RETURNS integer AS $$
DECLARE
log_id integer;
BEGIN
INSERT INTO gpstracking_device_logs (ip, port, status, created, updated) VALUES (_ip, _port, TRUE, NOW(), NOW()) RETURNING id INTO log_id;
RETURN log_id;
END
$$
LANGUAGE plpgsql VOLATILE SECURITY DEFINER;
This stored procedure is called from a method in a Twisted class. My method is
def openConnection(self, ip, port):
    self.connection['ip'] = ip
    self.connection['port'] = port
    self.connection['status'] = True
    self._d.addCallback(lambda _: self._conn.runQuery("select gps_open_connection('%s', '%s')" % (ip, port)))
self.id ?
My issue is that I don't know how to populate self.id; I hope you can help with this.
self._conn.runQuery returns a Deferred that will fire with the query result.
Since you return this Deferred from a callback, the next callback's value will be that Deferred's result. So you may write, just after the previous callback:
def setId(val):
    self.id = val[0][0]

self._d.addCallback(setId)
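To illustrate why returning a Deferred from one callback hands its eventual value to the next callback, here is a minimal stand-in (MiniDeferred is a toy invented here, not the real twisted.internet.defer.Deferred):

```python
class MiniDeferred:
    """Toy stand-in for Twisted's Deferred, showing callback chaining."""
    def __init__(self):
        self._callbacks = []
        self._fired = False
        self._result = None

    def addCallback(self, fn):
        self._callbacks.append(fn)
        if self._fired:
            self._run()
        return self

    def callback(self, value):
        # Fire the Deferred with a value and run pending callbacks.
        self._result = value
        self._fired = True
        self._run()

    def _run(self):
        while self._callbacks:
            fn = self._callbacks.pop(0)
            result = fn(self._result)
            if isinstance(result, MiniDeferred):
                # Pause this chain until the inner Deferred fires.
                result.addCallback(self._resume)
                return
            self._result = result

    def _resume(self, value):
        self._result = value
        self._run()
        return value

# The "runQuery" Deferred fires later with rows from the database.
query_d = MiniDeferred()

d = MiniDeferred()
collected = {}
d.addCallback(lambda _: query_d)  # like returning runQuery(...)
d.addCallback(lambda rows: collected.setdefault('id', rows[0][0]))
d.callback(None)            # start the chain
query_d.callback([(42,)])   # the query "completes" with one row
print(collected)  # {'id': 42}
```

The second callback never sees the Deferred object itself, only the row list it fired with, which is exactly the behaviour the answer above relies on.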