not enough arguments for format string - python

I'm trying to run a SQL query (which works perfectly if I run it on the client) inside my Python script, but I receive the error "not enough arguments for format string".
Here is the code:
sql = """
SELECT
rr.iserver,
foo.*, rr.queue_capacity,
rr.queue_refill_level,
rr.is_concurrent,
rr.max_execution_threads,
rr.retrieval_status,
rr.processing_status
FROM
(
SELECT DISTINCT
ip.package,
it. TRIGGER
FROM
wip.info_package ip,
wip.info_trigger it
WHERE
ip.service = it.service and
ip.iserver = '%(iserver)s' and
it.iserver = %(iserver)s'
AND package = '%(package)s'
UNION
SELECT
'%(package)s' AS package,
TRIGGER
FROM
info_trigger
WHERE
TRIGGER LIKE '%(package)s%'
) AS foo,
info_trigger rr
WHERE
rr. TRIGGER = foo. TRIGGER
""" % {'iserver' : var_iserver,'package' : var_package}
dcon = Database_connection()
getResults = dcon.db_call(sql, dbHost, dbName, dbUser, dbPass)
# more and more code to work the result....
My main problem is how to pass '%(iserver)s' and '%(package)s' correctly. Usually, when I run SELECTs or INSERTs against the database, I only use two variables, but I don't know how to do it with more than two.
Thanks.

Don't build SQL like this using %:
"SELECT %(foo)s FROM bar WHERE %(baz)s" %
{"foo": "FOO", "baz": "1=1;-- DROP TABLE bar;"}
This opens the door for nasty SQL injection attacks.
Use the proper form of your Python Database API Specification v2.0 adapter. For Psycopg this form is described here.
cur.execute("SELECT %(foo)s FROM bar WHERE %(baz)s",
{"foo": "FOO", "baz": "1=1;-- DROP TABLE bar;"})

In
WHERE
TRIGGER LIKE '%(package)s%'
you have an EXTRA '%'.
If you want a literal '%' character, you need to escape it with a double '%'. So it should be
WHERE
TRIGGER LIKE '%(package)s%%'
if you want a literal '%', and
WHERE
TRIGGER LIKE '%(package)s'
if you don't.
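The failure is easy to reproduce with plain string formatting: after the '%(package)s' specifier, the stray single '%' begins a new conversion specifier, so the interpolation raises an exception (the exact message depends on the character that follows). Doubling it fixes the query text:

```python
params = {"package": "PKG_A"}

# Broken: the lone '%' before the closing quote starts a new specifier.
try:
    broken = "TRIGGER LIKE '%(package)s%'" % params
except (ValueError, TypeError):
    broken = None  # interpolation fails

# Fixed: '%%' is the escape for a literal '%' character.
fixed = "TRIGGER LIKE '%(package)s%%'" % params
print(fixed)  # TRIGGER LIKE 'PKG_A%'
```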

SQLAlchemy 1.4 tutorial code "'Connection' object has no attribute 'commit'" error or does not commit changes

Here is some custom code I wrote that I think might be problematic for this particular use case.
import sqlalchemy
from warnings import warn

class SQLServerConnection:
    def __init__(self, database):
        ...
        self.connection_string = (
            "DRIVER=" + str(self.driver) + ";"
            + "SERVER=" + str(self.server) + ";"
            + "DATABASE=" + str(self.database) + ";"
            + "Trusted_Connection=yes;"
        )
        self.engine = sqlalchemy.create_engine(
            sqlalchemy.engine.URL.create(
                "mssql+pyodbc",
                query={'odbc_connect': self.connection_string}
            )
        )

    # Runs a command and returns the result (python list for multiple rows)
    # Can be a select, alter table, anything like that
    def execute(self, command, params=False):
        # Make a connection object with the server
        with self.engine.connect() as conn:
            # Can send some parameters along with a plain text query;
            # could be a single dict or a list of dicts
            # Doc: https://docs.sqlalchemy.org/en/14/tutorial/dbapi_transactions.html#sending-multiple-parameters
            if params:
                output = conn.execute(sqlalchemy.text(command), params)
            else:
                output = conn.execute(sqlalchemy.text(command))
            # Tell SQL Server to save your changes (not applicable for a plain SELECT)
            # Doc: https://docs.sqlalchemy.org/en/14/tutorial/dbapi_transactions.html#committing-changes
            try:
                conn.commit()
            except Exception as e:
                warn("Could not commit changes...\n" + str(e))
            # Try to consolidate a select statement result into a single object to return
            try:
                output = output.all()
            except Exception:
                pass
            return output
If I try:
cnxn = SQLServerConnection(database='MyDatabase')
cnxn.execute("SELECT * INTO [dbo].[MyTable_newdata] FROM [dbo].[MyTable] ")
or
cnxn.execute("SELECT TOP 0 * INTO [dbo].[MyTable_newdata] FROM [dbo].[MyTable] ")
Python returns this object without error, <sqlalchemy.engine.cursor.LegacyCursorResult at 0x2b793d71880>, but looking in MS SQL Server, the new table was not generated. With the SELECT TOP 0 variant I am not warned about the commit step failing; with the first variant I am warned ('Connection' object has no attribute 'commit').
CREATE TABLE, ALTER TABLE, or SELECT (etc.) appear to work fine, but SELECT * INTO seems not to be working, and I'm not sure how to troubleshoot further. Copy-pasting the query into SQL Server and running it works fine.
As noted in the introduction to the 1.4 tutorial here:
A Note on the Future
This tutorial describes a new API that’s released in SQLAlchemy 1.4 known as 2.0 style. The purpose of the 2.0-style API is to provide forwards compatibility with SQLAlchemy 2.0, which is planned as the next generation of SQLAlchemy.
In order to provide the full 2.0 API, a new flag called future will be used, which will be seen as the tutorial describes the Engine and Session objects. These flags fully enable 2.0-compatibility mode and allow the code in the tutorial to proceed fully. When using the future flag with the create_engine() function, the object returned is a subclass of sqlalchemy.engine.Engine described as sqlalchemy.future.Engine. This tutorial will be referring to sqlalchemy.future.Engine.
That is, it is assumed that the engine is created with
engine = create_engine(connection_url, future=True)
You are getting the "'Connection' object has no attribute 'commit'" error because you are creating an old-style Engine object.
You can avoid the error by adding future=True to your create_engine() call:
self.engine = sqlalchemy.create_engine(
    sqlalchemy.engine.URL.create(
        "mssql+pyodbc",
        query={'odbc_connect': self.connection_string}
    ),
    future=True
)
Use this recipe instead:
from sqlalchemy.sql import Select
from sqlalchemy.ext.compiler import compiles

class SelectInto(Select):
    def __init__(self, columns, into, *arg, **kw):
        super(SelectInto, self).__init__(columns, *arg, **kw)
        self.into = into

@compiles(SelectInto)
def s_into(element, compiler, **kw):
    text = compiler.visit_select(element)
    text = text.replace('FROM',
                        'INTO TEMPORARY TABLE %s FROM' % element.into)
    return text

if __name__ == '__main__':
    from sqlalchemy.sql import table, column

    marker = table('marker',
                   column('x1'),
                   column('x2'),
                   column('x3'))

    print(SelectInto([marker.c.x1, marker.c.x2], "tmp_markers")
          .where(marker.c.x3 == 5)
          .where(marker.c.x1.in_([1, 5])))
This needs some tweaking, since it will rewrite all subquery SELECTs as SELECT INTOs as well, but test it for now; if it works, it is better than raw text statements.
Have you tried this, from this answer by @Michael Berkowski:
INSERT INTO assets_copy
SELECT * FROM assets;
The answer states that, according to the MySQL documentation, SELECT * INTO isn't supported.
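For what it's worth, the INSERT INTO ... SELECT pattern is easy to try anywhere; a quick illustration with the stdlib sqlite3 module (note that unlike T-SQL's SELECT ... INTO, the target table has to exist first — the table names are illustrative):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE assets (id INTEGER, name TEXT)")
conn.executemany("INSERT INTO assets VALUES (?, ?)", [(1, "a"), (2, "b")])

# The target table must already exist; INSERT ... SELECT then copies the rows.
conn.execute("CREATE TABLE assets_copy (id INTEGER, name TEXT)")
conn.execute("INSERT INTO assets_copy SELECT * FROM assets")

copied = conn.execute("SELECT COUNT(*) FROM assets_copy").fetchone()[0]
print(copied)  # 2
```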

[snowflake python connector] How to bind inside a string

I want to use data binding when executing SQL.
I just want to bind in the middle of a string, but it doesn't work.
I tried the following, but both attempts resulted in execution errors.
Python
param = {
    "env": "dev",
    "s3_credential": "secret"
}
cursor().execute(sql, param)
sql1
CREATE OR REPLACE STAGE my_s3_stage_demo
URL='s3://my-stage-demo-'%(env)s'/tmp/'
credentials = (aws_role = %(s3_credential)s )
FILE_FORMAT = ( TYPE=JSON);
error message1
snowflake.connector.errors.ProgrammingError: 091006 (22000): Bucket name 'my-stage-demo-'dev'' in the stage location is not supported. Valid bucket names must consist of lowercase letters, digits, hyphens '-', and periods '.'.
SQL2
CREATE OR REPLACE STAGE my_s3_stage_demo
URL='s3://my-stage-demo-%(env)s/tmp/'
credentials = (aws_role = %(s3_credential)s )
FILE_FORMAT = ( TYPE=JSON);
error message2
snowflake.connector.errors.ProgrammingError: 001003 (42000): SQL compilation error:
syntax error line 2 at position 32 unexpected ''/tmp/''.
I want to execute the binding result as follows, but how should I specify it?
CREATE OR REPLACE STAGE my_s3_stage_demo
URL='s3://my-stage-demo-dev/tmp/'
credentials = (aws_role = "secret" )
FILE_FORMAT = ( TYPE=JSON);
You cannot bind substrings, only complete syntactical elements.
In Python, you can do something like:
cursor.execute(
    "SELECT t.*, 'P'||:2 p2 FROM IDENTIFIER(:1) t",
    [['"my_db"."my_schema"."my_table"', '2. parameter']]
)
You can only use parts of a value where expressions are allowed (like 'P'||:2 above). There is also some provision for identifiers like the table name above using IDENTIFIER().
Unfortunately, for the CREATE OR REPLACE STAGE command there seems to be no support for using expressions for, e.g., an S3 bucket, or for bind variables at all.
Which means you have to use string replacement for the SQL text, not variable binding.
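So the bucket name (and, here, the credential) have to be spliced into the statement in Python before it is sent. A sketch of doing that defensively; the build_stage_sql helper and its allow-list are illustrative, not part of the connector:

```python
ALLOWED_ENVS = {"dev", "stg", "prod"}  # hypothetical allow-list

def build_stage_sql(env, s3_credential):
    # Anything interpolated into the SQL text must be validated by hand,
    # since the driver cannot escape it for us here.
    if env not in ALLOWED_ENVS:
        raise ValueError("unexpected environment: %r" % env)
    return (
        "CREATE OR REPLACE STAGE my_s3_stage_demo "
        "URL='s3://my-stage-demo-%s/tmp/' "
        "credentials = (aws_role = '%s') "
        "FILE_FORMAT = ( TYPE=JSON);" % (env, s3_credential)
    )

sql = build_stage_sql("dev", "secret")
print(sql)
# The finished text would then be sent unchanged:
# cursor().execute(sql)
```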

Creating Columns with pyodbc - Not Working

I have a class which uses the pyodbc library successfully - it can perform a variety of reads from the database (so the connection and DSN are hunky dory).
What I've being trying to implement are functions to write and delete columns from tables in a sql database (the same one I'm able to read from).
I have tested the calls using isql commands and I can see the changes occur in my database. For example;
SQL> ALTER TABLE DunbarGen ADD testCol float(4)
SQLRowCount returns -1
Adds a new column to the table from the terminal (this works). I have a code which, I think, should replicate this command - which causes no errors in my class - and looks like this;
def createColumn(self, columnName, tableName, isFloat, isDateTime, isString):
    if isFloat:
        typeOf = 'float(4)'
    elif isDateTime:
        typeOf = 'datetime2'
    elif isString:
        typeOf = 'text'
    else:
        return False
    self.cursor.execute("ALTER TABLE " + tableName + " ADD " + columnName + " " + typeOf)
    print('command has executed')
Do I need to do something else with the pyodbc class to finalize the command or something?
Thanks!
Call
self.cursor.commit()
after the execute function has been called.
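pyodbc follows the DB-API convention of opening a transaction for you, so changes stay invisible outside your connection (and are lost on exit) until you commit. A runnable illustration of the same rule using the stdlib sqlite3 module as a stand-in for pyodbc; the table name is borrowed from the question:

```python
import os
import sqlite3
import tempfile

# An on-disk database so two independent connections can observe it.
path = os.path.join(tempfile.mkdtemp(), "demo.db")

writer = sqlite3.connect(path)
writer.execute("CREATE TABLE DunbarGen (testCol REAL)")
writer.commit()

writer.execute("INSERT INTO DunbarGen VALUES (1.0)")  # not committed yet

reader = sqlite3.connect(path)
before = reader.execute("SELECT COUNT(*) FROM DunbarGen").fetchone()[0]
reader.close()

writer.commit()  # now the change is durable

reader = sqlite3.connect(path)
after = reader.execute("SELECT COUNT(*) FROM DunbarGen").fetchone()[0]
reader.close()

print(before, after)  # 0 1
```

Alternatively, pyodbc connections accept autocommit=True at connect time, which removes the need to commit each statement.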

YQL - No definition found for Table

My code:
import yql
y = yql.Public()
query = 'SELECT * FROM yahoo.finance.option_contracts WHERE symbol="SPY"'
y.execute(query)
Result:
yql.YQLError: No definition found for Table yahoo.finance.option_contracts
I know that the table exists because I can test the query at http://developer.yahoo.com/yql/console/ and it works. What am I missing?
Update: I posted the url to the console but not the query I tried in the console. The query is now attached.
http://goo.gl/mNXwC
Since the yahoo.finance.option_contracts table is a Community Open Data Table you will want to include it as part of the environment for the query. The easiest way to do that is to load up the environment file for all community tables; just like clicking "Show Community Tables" in the YQL console.
One would normally do that by specifying an env=... parameter in the YQL query URL, or (as you have done) with a use clause in the query itself.
The Python library that you are using lets you pass in the environment file as an argument to execute().
import yql
y = yql.Public()
query = 'SELECT * FROM yahoo.finance.option_contracts WHERE symbol="SPY"'
y.execute(query, env="store://datatables.org/alltableswithkeys")
Here's an example of extending yql.Public to be able to define the default environment on instantiation.
class MyYql(yql.Public):
    def __init__(self, api_key=None, shared_secret=None, httplib2_inst=None, env=None):
        super(MyYql, self).__init__(api_key, shared_secret, httplib2_inst)
        self.env = env if env else None

    def execute(self, query, params=None, **kwargs):
        kwargs["env"] = kwargs.get("env", self.env)
        return super(MyYql, self).execute(query, params, **kwargs)
It can be used like:
y = MyYql(env="store://datatables.org/alltableswithkeys")
query = 'SELECT * FROM yahoo.finance.option_contracts WHERE symbol="SPY"'
r = y.execute(query)
You can still override the env in an individual call to y.execute() if you need to.
Amending the query to the following is what works.
query = 'use "http://www.datatables.org/yahoo/finance/yahoo.finance.option_contracts.xml" as foo; SELECT * FROM foo WHERE symbol="SPY"'
More elegant solutions might exist; please share if you know of one. Thanks.

Is SQLAlchemy still recommended if only used for raw sql query?

Using Flask, I'm curious to know if SQLAlchemy is still the best way to go for querying my database with raw SQL (a direct SELECT x FROM table WHERE ...) instead of using the ORM, or if there is a simpler yet powerful alternative?
Thanks for your reply.
I use SQLAlchemy for direct queries all the time.
Primary advantage: it gives you the best protection against SQL injection attacks. SQLAlchemy does the Right Thing whatever parameters you throw at it.
I find it works wonders for adjusting the generated SQL based on conditions as well. Displaying a result set with multiple filter controls above it? Just build your query in a set of if/elif/else constructs and you know your SQL will be golden still.
Here is an excerpt from some live code (older SA version, so syntax could differ a little):
# Pull start and end dates from form
# ...

# Build a constraint if `start` and / or `end` have been set.
created = None
if start and end:
    created = sa.sql.between(msg.c.create_time_stamp,
                             start.replace(hour=0, minute=0, second=0),
                             end.replace(hour=23, minute=59, second=59))
elif start:
    created = (msg.c.create_time_stamp >=
               start.replace(hour=0, minute=0, second=0))
elif end:
    created = (msg.c.create_time_stamp <=
               end.replace(hour=23, minute=59, second=59))

# More complex `from_` object built here, elided for example
# [...]

# Final query build
query = sa.select([unit.c.eli_uid], from_obj=[from_])
query = query.column(count(msg.c.id).label('sent'))
query = query.where(current_store)
if created:
    query = query.where(created)
The code where this comes from is a lot more complex, but I wanted to highlight the date range code here. If I had to build the SQL using string formatting, I'd probably have introduced a SQL injection hole somewhere as it is much easier to forget to quote values.
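The same conditional build-up works even at the plain DB-API level, if you collect WHERE fragments and their bound values side by side. A runnable sketch with the stdlib sqlite3 module; the msg table and date filters are illustrative stand-ins for the excerpt above:

```python
import sqlite3

def find_messages(conn, start=None, end=None):
    # Collect WHERE fragments and their bound values together,
    # so every user-supplied value stays a bound parameter.
    clauses, params = [], []
    if start is not None:
        clauses.append("create_ts >= ?")
        params.append(start)
    if end is not None:
        clauses.append("create_ts <= ?")
        params.append(end)
    sql = "SELECT id FROM msg"
    if clauses:
        sql += " WHERE " + " AND ".join(clauses)
    return conn.execute(sql, params).fetchall()

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE msg (id INTEGER, create_ts TEXT)")
conn.executemany("INSERT INTO msg VALUES (?, ?)",
                 [(1, "2023-01-05"), (2, "2023-02-10"), (3, "2023-03-15")])
print(find_messages(conn, start="2023-02-01"))  # [(2,), (3,)]
```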
After I worked on a small project of mine, I decided to try just using MySQLdb, without SQLAlchemy.
It works fine and it's quite easy to use; here's an example (I created a small class that handles all the database work):
import MySQLdb
from MySQLdb.cursors import DictCursor

class DatabaseBridge():
    def __init__(self, *args, **kwargs):
        kwargs['cursorclass'] = DictCursor
        self.cnx = MySQLdb.connect(**kwargs)
        self.cnx.autocommit(True)
        self.cursor = self.cnx.cursor()

    def query_all(self, query, *args):
        self.cursor.execute(query, *args)
        return self.cursor.fetchall()

    def find_unique(self, query, *args):
        rows = self.query_all(query, *args)
        if len(rows) == 1:
            return rows[0]
        return None

    def execute(self, query, params):
        self.cursor.execute(query, params)
        return self.cursor.rowcount

    def get_last_id(self):
        return self.cnx.insert_id()

    def close(self):
        self.cursor.close()
        self.cnx.close()

database = DatabaseBridge(**{
    'user': 'user',
    'passwd': 'password',
    'db': 'my_db'
})
rows = database.query_all("SELECT id, name, email FROM users WHERE is_active = %s AND project = %s", (1, "My First Project"))
(It's a dumb example).
It works like a charm, BUT you have to take these into consideration:
Multithreading is not supported! That's fine if you don't use threading or multiprocessing in Python.
You won't have all the advantages of SQLAlchemy (database-to-class (model) mapping, query generation (select, where, order_by, etc.)). This is the key point of how you want to work with your database.
But on the other hand, just like SQLAlchemy, there is protection against SQL injection attacks:
A basic query would be like this :
cursor.execute("SELECT * FROM users WHERE data = %s" % "Some value") # THIS IS DANGEROUS
But you should do :
cursor.execute("SELECT * FROM users WHERE data = %s", ("Some value",)) # This is secure!
See the difference? Read again ;)
The difference is that I replaced '%' with ',': we pass the values as arguments to execute(), and the driver escapes them. When using '%', the arguments aren't escaped, enabling SQL injection attacks!
The final word here is that it depends on your usage and what you plan to do with your project. For me, SQLAlchemy was overkill (it's a basic shell script!), so MySQLdb was perfect.
