The following procedure works fine from the mysql client but not when run from Python.
Stored Procedure
CREATE DEFINER=`music-cnv`@`%` PROCEDURE `StoreFileStats`(FNAME VARCHAR(200), FEXT varchar(4), FBDIR VARCHAR(100), FRDIR VARCHAR(250), FSIZE bigint(8), FMDATE bigint(8), FCDATE bigint(8), CONVERTED tinyint(1))
BEGIN
DECLARE FCount int DEFAULT 0;
SELECT COUNT(FileName) INTO FCount FROM FileList where (FleRelativeDir LIKE FRDIR) AND (FileName LIKE FNAME);
IF FCount = 0 THEN
INSERT INTO FileList (FileName,FileBaseDir,FleRelativeDir,FileExt,FileSize,FileModDate,FileCDate,Converted) VALUES (FNAME,FBDir,FRDir,FEXT,FSize,FMDate,FCDate,CONVERTED);
END IF;
END
Data
'In the Light', 'FLAC', '/var/data/Music_FLAC', 'Led Zeppelin/Physical Graffiti, Disc 2', 51472669, 1289282499, 1458631127, False
Python Code
The connection and cursor give no errors
try:
    myargs = [fnamesub, self.type.strip(), self.directory,
              subdirname, fpathstat[2], fpathstat[3],
              fpathstat[4], False]
    result_args = mycur.callproc('StoreFileStats', myargs)
except mysql.connector.Error as Err:
    errno = 51
    print('Error ' + str(errno) + ' !!!, Cannot Update MySQL Data with Name ' + fnamesub)
    print(Err)
The code runs without error but does not update the database.
Thank you for any help.
Just needed to commit. When running the procedure from the shell, autocommit is enabled, but when it is run from Python you must commit manually.
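The visibility rule behind this fix can be demonstrated with the stdlib sqlite3 module as a stand-in (Connector/Python needs a live MySQL server, so this is only an analogy; the table name is borrowed from the question): an INSERT made on one connection stays invisible to a second connection until the first one commits.

```python
import os
import sqlite3
import tempfile

# Stand-in demo using sqlite3: as with MySQL Connector/Python, work done
# on one connection is invisible to another connection until committed.
path = os.path.join(tempfile.mkdtemp(), "demo.db")

writer = sqlite3.connect(path)
writer.execute("CREATE TABLE FileList (FileName TEXT)")
writer.commit()

writer.execute("INSERT INTO FileList VALUES ('In the Light')")  # no commit yet

reader = sqlite3.connect(path)
before = reader.execute("SELECT COUNT(*) FROM FileList").fetchone()[0]

writer.commit()
after = reader.execute("SELECT COUNT(*) FROM FileList").fetchone()[0]

print(before, after)  # 0 1: the row only appears after the writer commits
```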
Related
I am running a stored procedure to insert a new row into a table and return the auto-generated ID.
However, it doesn't insert the row, though it does respond with the correct id when called from Python (e.g. it returns 9, but looking in the DB there is no new message).
Running the command using SQL Workbench does work as expected.
The SP being called is addNewMessage and expects 3 parameters (sUID, roomid, message).
SQL command (when running manually)
CALL addNewMessage('bfc1cc8c-4462-11ea-887c-000d3a7f4c7f', '658946602274258955', 'My Message')
SQL SP
BEGIN
INSERT INTO `messages`(`server_uid`, `title`, `message`, `author`, `room_id`) VALUES (sUID,title,'','',room);
SELECT @@IDENTITY as newId;
END
Python Scripts
new_message = mysql_command('discord_addNewMessage', ['bfc1cc8c-4462-11ea-887c-000d3a7f4c7f', '658946602274258955', 'My Message'])
print(new_message);
def mysql_command(command, args, addDataWrapper=False, decode=False):
    global sql_cursor
    try:
        if isinstance(args, list):
            sql_cursor.callproc(command, [arg for arg in args])
        else:
            sql_cursor.callproc(command, [args])
        for result in sql_cursor.stored_results():
            return_data = result.fetchall()
            if decode:
                data = return_data[0][0].decode('utf-8')
            else:
                data = return_data[0][0]
            if addDataWrapper:
                data = '{"data":[' + data + ']}'
            return data
    except BaseException as ex:
        print("SQL Error :", ex)
After some more digging: I needed to commit using mydb.commit() after the sql_cursor.callproc call.
I've been automating some SQL queries using Python, and I've been experimenting with try/except to catch errors. This works fine most of the time, but if my SQL statement doesn't return rows (e.g. inserting into a table on the database) then it raises an error and stops the script.
The error looks like:
This result object does not return rows. It has been closed automatically.
Is there a way to use a case statement or similar so that if the error is the same as the above, it continues on running, otherwise it stops?
Sample code:
import time
import logging
import datetime
import sys
from datetime import timedelta
def error_logs(e):
    # calculate running time
    runtime = (time.time() - start_time)
    # capture error messages (only using line number)
    exc_type, exc_obj, exc_tb = sys.exc_info()
    # fname = os.path.split(exc_tb.tb_frame.f_code.co_filename)[1]
    line = 'Line ' + str(exc_tb.tb_lineno)
    # print the error
    logging.exception("Error")
    message = "***ERROR***: Script Failed. "
    write_logs((str(datetime.datetime.now()) + ", " + message + str(e) + ". " + line + ". Run time: " + str(round(runtime)) + " seconds." + "\n"))

def write_logs(message):
    log_path = r"C:\Logs\Logs.txt"
    with open(log_path, 'a+') as log:
        log.write(str(datetime.datetime.now()) + ", " + message + "\n")

try:
    db.query('''
        insert into my_table (column1, column2, column3)
        select * from my_other_table
        where date = '2019-09-12'
    ''')
except Exception as e:
    error_logs(e)
Check the first part of this answer: https://stackoverflow.com/a/14388356/77156
First answer, on "preventing automatic closing":
SQLAlchemy runs the DBAPI execute() or executemany() for an insert and does not run any select queries, so the exception you got is expected behavior. The ResultProxy object returned after an insert wraps a DB-API cursor that does not allow .fetchall(); when .fetchall() fails, ResultProxy raises the exception you saw.
The only information you can get after an insert/update/delete operation is the number of affected rows, or the value of the auto-increment primary key (depending on the database and driver).
If your goal is to retrieve that kind of information, consider the ResultProxy methods and attributes such as:
.inserted_primary_key
.last_inserted_params()
.lastrowid
etc
I found a workaround to my own question. You can force the SQL to return some rows of data by adding a dummy select statement at the end. This way it won't think the results are empty and thus it won't throw an error.
db.query('''
insert into my_table (column1, column2, column3)
select * from my_other_table
where date = '2019-09-12';
select 'test';
''')
The problem stemmed from SQL Alchemy's code searching for some metadata within the results of the sql statement (and there aren't any for an insert statement).
From result.py in SQLAlchemy package
def _non_result(self, default):
    if self._metadata is None:
        raise exc.ResourceClosedError(
            "This result object does not return rows. "
            "It has been closed automatically.",
        )
    elif self.closed:
        raise exc.ResourceClosedError("This result object is closed.")
    else:
        return default
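A more direct alternative to the dummy SELECT is to catch that specific exception and swallow it only when its message matches. A minimal sketch of the pattern, using a stand-in exception class so the example is self-contained (with SQLAlchemy you would catch `sqlalchemy.exc.ResourceClosedError` instead; `run_query` and `fake_insert` are hypothetical names):

```python
class ResourceClosedError(Exception):
    """Stand-in for sqlalchemy.exc.ResourceClosedError."""

def run_query(query_fn):
    """Run a query, treating 'does not return rows' as success for writes."""
    try:
        return query_fn()
    except ResourceClosedError as e:
        if "does not return rows" in str(e):
            return None  # expected for INSERT/UPDATE/DELETE: nothing to fetch
        raise  # any other closed-result error is still a real failure

# Simulated INSERT: raises the same message the question reports.
def fake_insert():
    raise ResourceClosedError(
        "This result object does not return rows. "
        "It has been closed automatically.")

print(run_query(fake_insert))  # None: error swallowed, script continues
```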
I connected to my Oracle database with cx_Oracle, got a good connection, and read the data I needed. Then I tried to insert something into the database and failed.
try:
    print uid2 + "/" + pwd2 + "@" + service1
    dconn1 = cx_Oracle.connect(uid2 + "/" + pwd2 + "@" + service1)
except:
    pass
print dconn1
ver = dconn1.version.split(".")
print ver
cur = dconn1.cursor()
pdb.set_trace()
rows = [("NET-CI99999")]
sql_test = 'insert into Device2M1 (logical_name) values (:1)'
sql_test_2 = 'insert into Device2M1 (logical_name) values ("NET-CI99999")'
# cur.prepare(sql_insert)
# cur.execute(sql_test_2)
cur.prepare(sql_test)
cur.executemany(None, rows)
dconn1.commit()
cur.execute("select logical_name,serial_no_,location,id,updated_by,contact_name,istatus,subtype,user_id,sysmodtime,sysmoduser,title,email,extension,ip_address,mac_address,updated_by,ig_fqdn,ig_domain,ig_model,ig_vendor,ig_inventory_numb,ig_network_equip,ig_name,ig_asset_tag,ig_it_asset_id,ig_it_asset_created,ig_registrator from Device2M1 where logical_name='NET-CI13681'")
for result in cur:
    for i in result:
        # pdb.set_trace()
        print i
cur.close()
dconn1.close()
The part of the code with the selects worked well. When I try just cur.execute for the insert, the console hangs with no reply; when I try executemany I get the error "cx_Oracle.DatabaseError: ORA-01036: illegal variable name/number". Any idea what this is and what I can do to get the code working properly?
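The ORA-01036 most likely comes from the bind data rather than the SQL: `("NET-CI99999")` is just a parenthesized string, so `rows` is a list of strings rather than a list of one-element tuples, and the driver cannot match the single `:1` placeholder. It is the trailing comma, not the parentheses, that makes a one-element tuple:

```python
# Parentheses alone do not create a tuple; the trailing comma does.
not_a_tuple = ("NET-CI99999")
one_tuple = ("NET-CI99999",)

print(type(not_a_tuple).__name__)  # str
print(type(one_tuple).__name__)    # tuple

# Shape expected by cursor.executemany() for a single bind variable:
rows = [("NET-CI99999",)]
```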
I'm trying to read lines from stdin, and insert data from those lines into a PostgreSQL db, using a plpythonu stored procedure.
When I call the procedure under Python 3, it runs (consuming a serial value for each line read),
but stores no data in the db.
When I call the same procedure from psql, it works fine, inserting a single line in the db.
For example:
Action: Run SELECT sl_insert_day('2017-01-02', '05:15'); from within psql as user jazcap53
Result: day inserted with day_id 1.
Action: Run python3 src/load/load_mcv.py < input.txt at the command line
Result: nothing inserted, but 2 serial day_id's are consumed.
Action: Run SELECT sl_insert_day('2017-01-03', '06:15'); from within psql as user jazcap53
Result: day inserted with day_id 4.
file: input.txt:
DAY, 2017-01-05, 06:00
DAY, 2017-01-06, 07:00
Output:
('sl_insert_day() succeeded',)
('sl_insert_day() succeeded',)
I'm running Fedora 25, Python 3.6.0, and PostgreSQL 9.5.6.
Thank you very much to anyone who can help me with this!
Below is an MCV example that reproduces this behavior. I expect my problem is in Step 8 or Step 6 -- the other Steps are included for completeness.
The Steps used to create the MCV:
Step 1) Create database:
In psql as user postgres,
CREATE DATABASE sl_test_mcv;
Step 2) Database init:
file: db/database_mcv.ini
[postgresql]
host=localhost
database=sl_test_mcv
user=jazcap53
password=*****
Step 3) Run database config:
file: db/config_mcv.py
from configparser import ConfigParser

def config(filename='db/database_mcv.ini', section='postgresql'):
    parser = ConfigParser()
    parser.read(filename)
    db = {}
    if parser.has_section(section):
        params = parser.items(section)
        for param in params:
            db[param[0]] = param[1]
    else:
        raise Exception('Section {} not found in the {} file'.format(section, filename))
    return db
Step 4) Create table:
file: db/create_tables_mcv.sql
DROP TABLE IF EXISTS sl_day CASCADE;
CREATE TABLE sl_day (
day_id SERIAL UNIQUE,
start_date date NOT NULL,
start_time time NOT NULL,
PRIMARY KEY (day_id)
);
Step 5) Create language:
CREATE LANGUAGE plpythonu;
Step 6) Create procedure:
file: db/create_procedures_mcv.sql
DROP FUNCTION sl_insert_day(date, time without time zone);
CREATE FUNCTION sl_insert_day(new_start_date date,
                              new_start_time time without time zone) RETURNS text AS $$
    from plpy import spiexceptions
    try:
        plan = plpy.prepare("INSERT INTO sl_day (start_date, start_time) \
                             VALUES($1, $2)", ["date", "time without time zone"])
        plpy.execute(plan, [new_start_date, new_start_time])
    except plpy.SPIError, e:
        return "error: SQLSTATE %s" % (e.sqlstate,)
    else:
        return "sl_insert_day() succeeded"
$$ LANGUAGE plpythonu;
Step 7) Grant privileges:
file: db/grant_privileges_mcv.sql
GRANT SELECT, UPDATE, INSERT, DELETE ON sl_day TO jazcap53;
GRANT USAGE ON SEQUENCE sl_day_day_id_seq TO jazcap53;
Step 8) Run procedure as python3 src/load/load_mcv.py < input.txt:
file: src/load/load_mcv.py
import sys
import psycopg2
from spreadsheet_etl.db.config_mcv import config

def conn_exec():
    conn = None
    try:
        params = config()
        conn = psycopg2.connect(**params)
        cur = conn.cursor()
        last_serial_val = 0
        while True:
            my_line = sys.stdin.readline()
            if not my_line:
                break
            line_list = my_line.rstrip().split(', ')
            if line_list[0] == 'DAY':
                cur.execute('SELECT sl_insert_day(\'{}\', \'{}\')'.
                            format(line_list[1], line_list[2]))
                print(cur.fetchone())
        cur.close()
    except (Exception, psycopg2.DatabaseError) as error:
        print(error)
    finally:
        if conn is not None:
            conn.close()

if __name__ == '__main__':
    conn_exec()
Do conn.commit() after cur.close()
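A minimal sketch of the corrected teardown (the helper name is hypothetical; the key point is that psycopg2 rolls back any open transaction when a connection closes, so the commit has to happen on the connection before conn.close()):

```python
def finish(conn, cur):
    """Close the cursor, commit, then close the connection (sketch).

    psycopg2 discards uncommitted work on close, so commit() must run
    on the connection before it is closed.
    """
    cur.close()
    conn.commit()   # persist every sl_insert_day() call made above
    conn.close()
```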
My stored procedure works fine on its own, but my Python script fails to fully execute the stored procedure on my downloaded files. The purpose of the Python script is to download files using FTP and store them locally. It first compares the remote location with the local location to find new files, downloads the new files to the local location, and then executes the stored procedure on each new file.
python script:
import os
import ftplib
import pyodbc
#connection to sql server
conn = pyodbc.connect('DRIVER={SQL Server};SERVER=localhost;DATABASE=Development;UID=myid;PWD=mypassword')
cursor = conn.cursor()
ftp = ftplib.FTP("myftpaddress.com")
ftp.login("loginname", "password")
print 'ftp on'
#directory listing
rfiles = ftp.nlst()
print 'remote listing'
#save local directory listing to files
lfiles = os.listdir(r"D:\Raw_Data\myFiles")
print 'local listing'
#compare and find files in rfiles but not in lfiles
nfiles = set(rfiles) - set(lfiles)
nfiles = list(nfiles)
print 'compared listings'
#loop through the new files
#download the new files and open each file and run stored proc
#close files and disconnect to sql server
for n in nfiles:
    local_filename = os.path.join(r"D:\Raw_Data\myFiles", n)
    lf = open(local_filename, "wb")
    ftp.retrbinary("RETR " + n, lf.write, 1024)
    lf.close()
    print 'file written'
    cursor.execute("exec SP_my_Dailyfiles('n')")
    conn.close()
    lf.close()
    print 'sql executed'
ftp.quit()
stored proc:
ALTER PROCEDURE [dbo].[SP_my_Dailyfiles]
-- Add the parameters for the stored procedure here
@file VARCHAR(255)
-- Add the parameters for the stored procedure here
AS
BEGIN
IF EXISTS(SELECT * FROM sysobjects WHERE name = 'myinvoice')
DROP TABLE dbo.myinvoice
----------------------------------------------------------------------------------------------------
CREATE TABLE myinvoice(
[Billing] varchar(255)
,[Order] varchar(45)
,[Item] varchar(255)
,[Quantity in pack] varchar(255)
,[Invoice] varchar(255)
,[Date] varchar(255)
,[Cost] varchar(255)
,[Quantity of pack] varchar(255)
,[Extended] varchar(255)
,[Type] varchar(25)
,[Date Due] varchar(255)
)
----------------------------------------------------------------------------------------------------
DECLARE @SourceDirectory VARCHAR(255)
DECLARE @SourceFile VARCHAR(255)
EXEC (' BULK
INSERT dbo.myinvoice
FROM ''D:\Raw_Data\myfile\'+@file+'''
WITH
(
FIRSTROW = 1,
FIELDTERMINATOR = '','',
ROWTERMINATOR = ''0x0a''
)'
)
-------------------------------------------------------------------------------------------------------------
INSERT INTO [Development].[dbo].[my_Dailyfiles](
[Billing]
,[Order]
,[Item]
,[Quantity in pack]
,[Invoice]
,[Date]
,[Cost]
,[Quantity of pack]
,[Extended]
,[Type]
,[Date Due]
,[FileName]
,[IMPORTEDDATE]
)
SELECT
replace([Billing], '"', '')
,replace([Order], '"', '')
,replace([Item], '"','')
,replace([Quantity in pack],'"','')
,replace([Invoice],'"','')
,cast(replace([Date],'"','') as varchar(255)) as date
,replace([Cost],'"','')
,replace([Quantity of pack],'"','')
,replace([Extended],'"','')
,replace([Type],'"','')
,cast(replace([Date Due],'"','') as varchar(255)) as date
,@file,
GetDate()
FROM [myinvoice] WHERE [Bill to] <> ' ' and ndc != '"***********"'
I think the problem may be that you are closing the DB connection immediately after you execute the stored procedure, whilst still in the loop.
This means that the second time around the loop, the DB connection is already closed when you try to execute the SP. I would actually expect an error to be thrown the second time around the loop.
The way I would structure this is something like:
conn = pyodbc.connect(...)
for n in nfiles:
    ...
    cursor = conn.cursor()
    cursor.execute("exec SP_my_Dailyfiles ?", n)
    conn.commit()
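Putting it together as a sketch, with the dependencies passed in so the shape is testable (the function name is hypothetical). Note that the original passed the literal string 'n' to the procedure; a pyodbc parameter marker passes the actual filename instead, and the connection is closed once, after the loop:

```python
import os

def download_and_load(ftp, conn, nfiles, dest_dir):
    """Download each new file over FTP, then run the stored proc on it (sketch)."""
    cursor = conn.cursor()
    for n in nfiles:
        local_filename = os.path.join(dest_dir, n)
        with open(local_filename, "wb") as lf:
            ftp.retrbinary("RETR " + n, lf.write, 1024)
        # parameter marker: pass the real filename, not the literal 'n'
        cursor.execute("exec SP_my_Dailyfiles ?", n)
        conn.commit()  # commit each file's load; connection stays open
    conn.close()
    ftp.quit()
```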