sql = """
DROP PROCEDURE
IF EXISTS schema_change;
delimiter ';;'
CREATE PROCEDURE schema_change() BEGIN
if exists (select * from information_schema.columns where table_schema =
schema() and table_name = 'selectedairport' and column_name = 'GDP')
then
alter table selectedairport drop column GDP;
alter table selectedairport add column GDP DOUBLE;
end;;
delimiter ';'
CALL schema_change () ; DROP PROCEDURE
IF EXISTS schema_change ;
"""
cursor6.execute(sql)
However, this produces the error:
pymysql.err.ProgrammingError: (1064, "You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near 'delimiter ';;'\nCREATE PROCEDURE schema_change() BEGIN\n\n if exists (select * f' at line 1")
What could be the problem?
The execute() method generally runs only a single statement at a time, so the script cannot be parsed as a whole; moreover, DELIMITER is a command of the mysql command-line client, not server-side SQL, so there is no support for it in pymysql — see this comment on GitHub. Therefore, one solution is to split the script into multiple calls:
cursor6.execute("""
DROP PROCEDURE
IF EXISTS schema_change
""")
cursor6.execute("""
CREATE PROCEDURE schema_change() BEGIN
    IF EXISTS (SELECT * FROM information_schema.columns WHERE table_schema =
               SCHEMA() AND table_name = 'selectedairport' AND column_name = 'GDP')
    THEN
        ALTER TABLE selectedairport DROP COLUMN GDP;
    END IF;  -- note: the original procedure had a syntax error, this END IF was missing
    ALTER TABLE selectedairport ADD COLUMN GDP DOUBLE;
END
""")
cursor6.execute("""
CALL schema_change ()
""")
# Or cursor6.callproc('schema_change')
cursor6.execute("""
DROP PROCEDURE
IF EXISTS schema_change
""")
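As a runnable illustration of the one-statement-per-execute() rule, here is a minimal sketch using the stdlib sqlite3 module in place of MySQL (the run_statements helper and the naive split on ';' are my own illustration; a real MySQL script containing procedure bodies cannot be split this way, which is exactly why the mysql client needs the DELIMITER trick):

```python
import sqlite3

def run_statements(cursor, script):
    """Naively split a script on ';' and run each statement separately.

    This mirrors what you must do by hand with pymysql: execute() takes
    one statement, so a multi-statement script has to be broken up.
    The split is naive -- it would break on ';' inside string literals
    or stored-procedure bodies.
    """
    for statement in script.split(";"):
        if statement.strip():
            cursor.execute(statement)

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
run_statements(cur, """
    CREATE TABLE selectedairport (name TEXT);
    INSERT INTO selectedairport VALUES ('LHR');
    INSERT INTO selectedairport VALUES ('JFK');
""")
cur.execute("SELECT COUNT(*) FROM selectedairport")
print(cur.fetchone()[0])  # 2
```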
I have made a table 'temporary' (x, train1, train2, ..., train4) with 5 columns. I want to fill the column 'train1' with calculated data (train.y1 - ideal.y1) from the tables 'train' (x, y1) and 'ideal' (x, y1). But the following nested SQL query is giving 'syntax error near SELECT'. What is wrong with it?
train = 1
with engine.connect() as conn:
    while train < 2:
        ideal = 1
        col_train = 'y' + str(train)
        train_no = str(train)
        col_ideal = 'y' + str(ideal)
        query1 = conn.execute(text(("INSERT INTO temporary (train%s) VALUES (SELECT (train.%s-ideal.%s)*(train.%s-ideal.%s) FROM train INNER JOIN ideal ON train.x=ideal.x)") % (train_no, col_train, col_ideal, col_train, col_ideal)))
        train += 1
I believe that your issue is that the SELECT ... subquery should be enclosed in parentheses.
The fix (assuming that I've added the parentheses in the right place; if not, see the demo below):
query1=conn.execute(text(("INSERT INTO temporary (train%s) VALUES ((SELECT (train.%s-ideal.%s)*(train.%s-ideal.%s) FROM train INNER JOIN ideal ON train.x=ideal.x))")%(train_no,col_train,col_ideal,col_train,col_ideal)))
The following is a demo of the working SQL (albeit that the tables may be different) :-
DROP TABLE IF EXISTS train;
DROP TABLE IF EXISTS ideal;
DROP TABLE IF EXISTS temporary;
CREATE TABLE IF NOT EXISTS train (x INTEGER PRIMARY KEY, train_no INTEGER,col_train TEXT);
CREATE TABLE IF NOT EXISTS ideal (x INTEGER PRIMARY KEY, col_ideal INTEGER, col_train INTEGER);
CREATE TABLE IF NOT EXISTS temporary (train_no INTEGER);
INSERT INTO temporary (train_no) VALUES (
( /*<<<<<<<<<< ADDED */
SELECT (train.col_train-ideal.col_ideal)*(train.col_train-ideal.col_ideal)
FROM train INNER JOIN ideal ON train.x=ideal.x
) /*<<<<<<<<<< ADDED */
);
When executed then:-
INSERT INTO temporary (train_no) VALUES (
( /* ADDED */
SELECT (train.col_train-ideal.col_ideal)*(train.col_train-ideal.col_ideal)
FROM train INNER JOIN ideal ON train.x=ideal.x
) /* ADDED */
)
> Affected rows: 1
> Time: 0.084s
As opposed to (without the parentheses) :-
INSERT INTO temporary (train_no) VALUES (
/*(*/ /* ADDED */
SELECT (train.col_train-ideal.col_ideal)*(train.col_train-ideal.col_ideal)
FROM train INNER JOIN ideal ON train.x=ideal.x
/*)*/ /* ADDED */
)
> near "SELECT": syntax error
> Time: 0s
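An alternative that sidesteps the VALUES wrapper (and its extra parentheses) entirely is the INSERT ... SELECT form. A minimal sqlite3 sketch with illustrative in-memory tables and data:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.executescript("""
    CREATE TABLE train (x INTEGER PRIMARY KEY, y1 REAL);
    CREATE TABLE ideal (x INTEGER PRIMARY KEY, y1 REAL);
    CREATE TABLE temporary (train1 REAL);
    INSERT INTO train VALUES (1, 5.0);
    INSERT INTO ideal VALUES (1, 3.0);
""")
# INSERT ... SELECT needs no VALUES clause, so no subquery parentheses:
cur.execute("""
    INSERT INTO temporary (train1)
    SELECT (train.y1 - ideal.y1) * (train.y1 - ideal.y1)
    FROM train INNER JOIN ideal ON train.x = ideal.x
""")
print(cur.execute("SELECT train1 FROM temporary").fetchone()[0])  # 4.0
```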
When I'm trying to remove all tables with:
base.metadata.drop_all(engine)
I'm getting the following error:
ERROR:libdl.database_operations:Cannot drop table: (psycopg2.errors.DependentObjectsStillExist) cannot drop sequence <schema>.<sequence> because other objects depend on it
DETAIL: default for table <schema>.<table> column id depends on sequence <schema>.<sequence>
HINT: Use DROP ... CASCADE to drop the dependent objects too.
Is there an elegant one-line solution for that?
import psycopg2
from psycopg2 import sql
cnn = psycopg2.connect('...')
cur = cnn.cursor()
cur.execute("""
select s.nspname as s, t.relname as t
from pg_class t join pg_namespace s on s.oid = t.relnamespace
where t.relkind = 'r'
and s.nspname !~ '^pg_' and s.nspname != 'information_schema'
order by 1,2
""")
tables = cur.fetchall() # make sure they are the right ones
for t in tables:
    cur.execute(
        sql.SQL("drop table if exists {}.{} cascade")
        .format(sql.Identifier(t[0]), sql.Identifier(t[1])))
cnn.commit() # goodbye
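If you would rather stay inside SQLAlchemy, its compiler-extension documentation describes a recipe that appends CASCADE to every DROP TABLE generated for the PostgreSQL dialect, after which a plain base.metadata.drop_all(engine) just works; a sketch (assuming SQLAlchemy is installed):

```python
from sqlalchemy.ext.compiler import compiles
from sqlalchemy.schema import DropTable

@compiles(DropTable, "postgresql")
def _drop_table_cascade(element, compiler, **kw):
    # Append CASCADE to every DROP TABLE emitted for the postgresql dialect.
    return compiler.visit_drop_table(element) + " CASCADE"

# After registering this once, base.metadata.drop_all(engine) emits
# DROP TABLE ... CASCADE for each table, so dependent sequences no
# longer block the drop.
```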
I'm trying to insert latitude & longitude that are stored as python variables into a table in PostgreSQL via the INSERT query. Any suggestions on how to cast Point other than what I've tried?
I tried the insert query first as shown -
This is the table:
cur.execute('''CREATE TABLE AccidentList (
accidentId SERIAL PRIMARY KEY,
cameraGeoLocation POINT,
accidentTimeStamp TIMESTAMPTZ);''')
Try1:
cur.execute("INSERT INTO AccidentList (cameraGeoLocation, accidentTimeStamp) "
            "VALUES {}".format((lat, lon), ts));
Error:
psycopg2.ProgrammingError: column "camerageolocation" is of type point but expression is of type numeric
LINE 1: ...ist (cameraGeoLocation,accidentTimeStamp) VALUES (13.0843, 8...
                                                             ^
HINT: You will need to rewrite or cast the expression.
Try2:
query = ("INSERT INTO AccidentList (cameraGeoLocation, accidentTimeStamp) "
         "VALUES (cameraGeoLocation::POINT, accidentTimeStamp::TIMESTAMPTZ);")
data = ((lat,lon),ts)
cur.execute(query,data)
Error:
LINE 1: ...List (cameraGeoLocation,accidentTimeStamp) VALUES(cameraGeoL...
^
HINT: There is a column named "camerageolocation" in table "accidentlist", but it cannot be referenced from this part of the query.
Try 3:
query = "INSERT INTO AccidentList (camerageolocation ,accidenttimestamp) VALUES(%s::POINT, %s);"
data = (POINT(lat,lon),ts)
cur.execute(query,data)
Error:
cur.execute(query,data)
psycopg2.ProgrammingError: cannot cast type record to point
LINE 1: ...tion ,accidenttimestamp) VALUES((13.0843, 80.2805)::POINT, '...
Single quote your third attempt.
This works: SELECT '(13.0843, 80.2805)'::POINT
I had a similar problem trying to insert data of type point into Postgres.
Using quotes around the tuple (making it a string) worked for me.
conn = psycopg2.connect(...)
cursor = conn.cursor()
conn.autocommit = True
sql = 'insert into cities (name,location) values (%s,%s);'
values = ('City A','(10.,20.)')
cursor.execute(sql,values)
cursor.close()
conn.close()
My environment:
PostgreSQL 12.4,
Python 3.7.2,
psycopg2-binary 2.8.5
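The pattern in both answers is the same: format the coordinates into the '(x,y)' string that PostgreSQL parses as a point literal, and pass that string as an ordinary parameter. A small helper sketch (the point_literal name is my own, not a psycopg2 API):

```python
def point_literal(lat, lon):
    """Format a coordinate pair as the '(x,y)' string PostgreSQL parses as a POINT."""
    return "({},{})".format(lat, lon)

# Then pass it like any other parameter, e.g.:
# cur.execute(
#     "INSERT INTO AccidentList (cameraGeoLocation, accidentTimeStamp) "
#     "VALUES (%s::point, %s)",
#     (point_literal(lat, lon), ts),
# )
print(point_literal(13.0843, 80.2805))  # (13.0843,80.2805)
```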
Hi I'm doing something like:
# pyodbc extension
cursor.execute("select a from tbl where b=? and c=?", x, y)
Some of the values in the query are provided by variables, but sometimes a variable is interpreted as @P1 in the query.
For example:
import pyodbc
ch = pyodbc.connect('DRIVER={SQL Server};SERVER=xxxx;DATABASE=xxx;Trusted_Connection=True')
cur = ch.cursor()
x = 123
cur.execute('''
CREATE TABLE table_? (
id int IDENTITY(1,1) PRIMARY KEY,
obj varchar(max) NOT NULL
)
''', x).commit()
This results in a new table named table_@P1 (I want table_123)
Another example:
x = 123
cur.execute('''
CREATE TABLE table_2 (
id int IDENTITY(1,1) PRIMARY KEY,
obj varchar(?) NOT NULL
)
''', x).commit()
it reports error:
ProgrammingError: ('42000', "[42000] [Microsoft][ODBC SQL Server
Driver][SQL Server]Incorrect syntax near '@P1'. (102)
(SQLExecDirectW)")
Again, the variable is interpreted as @P1.
Anyone know how to fix this? Any help's appreciated. Thanks-
In your first case, parameter substitution does not work for table/column names. This is common to the vast majority of (if not all) database platforms.
In your second case, SQL Server does not appear to support parameter substitution for DDL statements. The SQL Server ODBC driver converts the pyodbc parameter placeholders (?) to T-SQL parameter placeholders (@P1, @P2, ...), so the statement passed to SQL Server is
CREATE TABLE table_2 (id int IDENTITY(1,1) PRIMARY KEY, obj varchar(@P1) NOT NULL)
specifically
exec sp_prepexec @p1 output,N'@P1 int',N'CREATE TABLE table_2 (id int IDENTITY(1,1) PRIMARY KEY, obj varchar(@P1) NOT NULL)',123
and when SQL Server tries to prepare that statement it expects a literal value, not a parameter placeholder.
So, in both cases you will need to use dynamic SQL (string formatting) to insert the appropriate values.
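When you do fall back to string formatting for identifiers, it is worth at least escaping them with T-SQL bracket quoting; a small sketch (the quote_ident helper is my own, not part of pyodbc):

```python
def quote_ident(name):
    """Quote a T-SQL identifier with brackets, doubling any closing bracket."""
    return "[" + str(name).replace("]", "]]") + "]"

x = 123
sql = ("CREATE TABLE {} (id int IDENTITY(1,1) PRIMARY KEY, "
       "obj varchar(max) NOT NULL)").format(quote_ident("table_{}".format(x)))
print(sql)
# CREATE TABLE [table_123] (id int IDENTITY(1,1) PRIMARY KEY, obj varchar(max) NOT NULL)
```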
There is a way to do this sort of thing. What you need to do is dynamically build the command (ideally as an nvarchar(MAX), not varchar(MAX)) string variable and pass that variable to the cur.execute() - or any other - command. Modifying your first example accordingly:
ch = pyodbc.connect( 'DRIVER={SQL Server};SERVER=xxxx;DATABASE=xxx;Trusted_Connection=True' )
cur = ch.cursor()
x = 123
SQL_Commands = 'CREATE TABLE table_' + str( x ) + '''
(
    id int IDENTITY(1,1) PRIMARY KEY,
    obj varchar(max) NOT NULL
)
'''
cur.execute( SQL_Commands ).commit()
BTW, you shouldn't try to do everything in one line, if only to avoid problems like this one. I'd also suggest looking into adding "autocommit=True" to your connect string, that way you wouldn't have to append .commit() to cur.execute().
cnx = sqlalchemy.create_engine("mssql+pyodbc://Omnius:MainBrain1@172.31.163.135:1433/Basis?driver=/opt/microsoft/sqlncli/lib64/libsqlncli-11.0.so.1790.0")
cnx1 = pyodbc.connect('driver=/opt/microsoft/sqlncli/lib64/libsqlncli-11.0.so.1790.0;server=SRVWUDEN0835;database=Basis;uid=Omnius; pwd=MainBrain1')
sqlquery = "select top 10 TXN_KEY,SEND_AGENT,PAY_AGENT from Pretty_Txns"
cursor = cnx1.cursor()
df = pd.read_sql(sqlquery,cnx)
model_columns = df.columns.tolist()
db_columns = cursor.execute("select TXN_KEY,SEND_AGENT from result").fetchall()
columns = [column[0] for column in cursor.description]
to_create =list(set(model_columns) -set(columns))
for c in to_create:
    a = df[c]
    sql = DDL('ALTER TABLE %s ADD column %s %s' % (result, a, String(9)))
    cnx.execute(sql)
sqlalchemy.exc.ProgrammingError: (pyodbc.ProgrammingError) ('42000', "[42000] [Microsoft][SQL Server Native Client 11.0][SQL Server]Incorrect syntax near the keyword 'column'. (156) (SQLExecDirectW)") [SQL: u"ALTER TABLE result ADD column ['API352676', 'AED002782', 'ACB020203', 'ASE094882', 'AII071196', 'AHX012817', 'AED000438', 'AEL051943', 'ADJ031448', 'APM033226'] VARCHAR(9)"]
The code above tries to add new columns to a table in a database using SQLAlchemy and pyodbc. For the most part it works fine, but it fails at the last step.
Your ALTER TABLE ... ADD ... SQL is incorrectly built. SQL Server does not accept the COLUMN keyword in an ADD clause - that is what "Incorrect syntax near the keyword 'column'" is telling you - and a = df[c] passes the column's values (a whole Series) where the column name should go, which is why the generated statement contains a list of agent codes instead of a name.
For a single column it should look as follows:
ALTER TABLE table_name ADD column_name data_type(precision);
or for multiple columns:
ALTER TABLE table_name ADD column1_name data_type(precision), column2_name data_type(precision), column3_name data_type(precision);
I would recommend adding all columns in one command, as each ALTER can briefly lock dictionary structures. I would also recommend NOT doing it in the application, but doing it manually once if that's possible. If it's not possible/applicable you can change your code as follows:
'ALTER TABLE {} ADD {}'.format(result, ', '.join('{} {}'.format(c, String(9)) for c in to_create))
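A minimal sketch of that statement builder as a plain-string helper (add_columns_sql and the sample table/column names are my own illustration):

```python
def add_columns_sql(table, columns, col_type="VARCHAR(9)"):
    """Build one SQL Server ALTER TABLE statement adding several columns.

    Note: T-SQL uses ADD without the COLUMN keyword.
    """
    cols = ", ".join("{} {}".format(name, col_type) for name in columns)
    return "ALTER TABLE {} ADD {}".format(table, cols)

print(add_columns_sql("result", ["SEND_AGENT", "PAY_AGENT"]))
# ALTER TABLE result ADD SEND_AGENT VARCHAR(9), PAY_AGENT VARCHAR(9)
```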