pymssql and placeholders - python

What placeholders can I use with pymssql? I'm getting my values from the HTML query string, so they are all of type string. Is this safe with regard to SQL injection?
query = dictify_querystring(Response.QueryString)
employeedata = conn.execute_row("SELECT * FROM employees WHERE company_id=%s and name = %s", (query["id"], query["name"]))
What mechanism is being used in this case to avoid injections?
There isn't much in the way of documentation for pymssql...
Maybe there is a better Python module I could use to interface with SQL Server 2005.
Thanks,
Barry

Regarding SQL injection, and not knowing exactly how that implementation works, I would say it's not safe.
Some simple steps to make it so:
Change that query into a prepared statement (or make sure the implementation does so internally, which doesn't seem to be the case).
Make sure you use ' around your query arguments.
Validate the expected type of your arguments (if request parameters that should be numeric are indeed numeric, etc).
Mostly... number one is the key. Using prepared statements is the most important and probably easiest line of defense against SQL injection.
Some ORMs take care of some of these issues for you (note the ample use of the word some), but I would advise making sure you know about these problems and how to work around them before using an abstraction like an ORM.
Sooner or later, you need to know what's going on under those wonderful layers of time-saving.
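To see the principle in action without a SQL Server instance, here is a minimal sketch using the stdlib sqlite3 driver (qmark-style placeholders) instead of pymssql's %s style; the table and values are illustrative, but the mechanism is the one being recommended above: the driver passes the value as data rather than splicing it into the SQL text.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE employees (company_id INTEGER, name TEXT)")
conn.execute("INSERT INTO employees VALUES (?, ?)", (1, "Alice"))

# A hostile "name" from a query string is matched as a plain string,
# never interpreted as SQL.
hostile = "x'; DROP TABLE employees; --"
rows = conn.execute(
    "SELECT * FROM employees WHERE company_id = ? AND name = ?",
    (1, hostile),
).fetchall()
# rows is empty, and the employees table survives untouched
```

Note that pymssql's %s placeholders are positional markers handled by the driver, not Python string formatting, so the same separation of code and data applies there.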

Maybe there is a better Python module I could use to interface with SQL Server 2005.
Well, my advice is to use an ORM like SQLAlchemy to handle this.
>>> from sqlalchemy.ext.sqlsoup import SqlSoup
>>> db = SqlSoup('mssql:///DATABASE?PWD=yourpassword&UID=some_user&dsn=your_dsn')
>>> employeedata = db.employees.filter(db.employees.company_id==query["id"])\
.filter(db.employees.name==query["name"]).one()
You can use one() if you want to raise an exception if there is more than one record, .first() if you want just the first record or .all() if you want all records.
As a side benefit, if you later change to another DBMS, the code will remain the same except for the connection URL.

Related

In Python, can we set the table name as a variable using sqlite3?

For example:
import sqlite3

def ChangeTable(c, a):
    c.execute('''DELETE FROM MY_TABLE WHERE id = ?''', (a,))

This way I can change the value of a and process the database with a function in Python.
But how can I do something similar with Table names?
This way I can use one function to handle different tables.
Parsing in table names is made to not work intentionally, since dynamically using table names in SQL queries is generally a bad idea. If you're in a situation where you end up dynamically wanting to do stuff with table names, you should think about redesigning your database and make it relational. It's hard to give specific advice about how to do this in your case, since we don't know how your database is arranged.
The other answer involves using string formatting in SQL queries. This is very bad coding practice, since it makes your code vulnerable to SQL injection. The sqlite documentation says to never do this, in fact. You don't want to have a Bobby Tables situation on your hands.
You can't use variables for table names. You have to perform string concatenation or substitution. As an example, using an f-string (Python >= 3.6):
def change_table(table_name, id):
    c.execute(f'DELETE FROM {table_name} WHERE id = ?', (id,))
with more meaningful variable names...
Triple quoting is not required here but useful for multiline statements.
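Since placeholders cannot carry identifiers, one common mitigation is to validate the table name against a fixed allowlist before it ever reaches the SQL text, while row values still go through ? placeholders. A sketch with the stdlib sqlite3 module (the ALLOWED_TABLES set and delete_row name are illustrative, not from the answers above):

```python
import sqlite3

# Hypothetical allowlist of tables this function is permitted to touch.
ALLOWED_TABLES = {"orders", "customers"}

def delete_row(c, table_name, row_id):
    # Reject anything not in the fixed allowlist before building the SQL.
    if table_name not in ALLOWED_TABLES:
        raise ValueError(f"unknown table: {table_name!r}")
    # The identifier is now one of our own constants; the value still uses ?.
    c.execute(f'DELETE FROM "{table_name}" WHERE id = ?', (row_id,))

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER)")
conn.execute("INSERT INTO orders VALUES (1)")
delete_row(conn, "orders", 1)
```

Because only strings you wrote yourself can reach the f-string, the injection risk described above is contained.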

psycopg2 and SQL injection security

I am writing a class to be used as part of a much larger modeling algorithm. My part does spatial analysis to calculate distances from certain points to other points. There are a variety of conditions involving number of returned distances, cutoff distances, and etc.
Currently, the project specification only indicates hardcoded situations. i.e. "Function #1 needs to list all the distances from point set A to point set B within 500m. Function #2 needs to list all the distances from point set C to point set D..." and so on.
I don't want to hardcode these parameters, and neither does the person who is developing the next stage of the model, because obviously they would like to tweak the parameters or possibly re-use the algorithm in other projects where they will have different conditions.
Now the problem is that I am using psycopg2 to do this. This is the standard where I work so I do not have a choice of deviating from it. I have read that it is a very bad idea to expose parameters that will be put into the executed queries as parameters due to the obvious reason of SQL injection. However, I thought that psycopg2 automatically sanitized SQL input. I think that the issue is using the AsIs function.
The easy solution is just to hardcode it as specified in the project but this feels lazy and sloppy to me. I don't like doing lazy and sloppy work.
Is it at all safe to allow the user to input parameters that will be put into a psycopg2-executed query? Or is it just using AsIs that makes it unsafe? If I wanted to allow the user to input these parameters, do I have to take the responsibility upon myself to sanitize the inputs, and if so, is there a quick and easy way to do it, like with another Python library or something?
AsIs is unsafe, unless you really know what you are doing. You can use it for unit testing for example.
Passing parameters is not that unsafe, as long as you do not pre-format your sql query. Never do:
sql_query = 'SELECT * FROM {}'.format(user_input)
cur.execute(sql_query)
Since user_input could be ';DROP DATABASE;' for instance.
Instead, do:
sql_query = 'SELECT * FROM users WHERE name = %s'
cur.execute(sql_query, (user_input,))
psycopg2 will sanitize your query. Note that %s placeholders only work for values, not for identifiers such as table names. Also, you can pre-sanitize the parameters in your code with your own logic, if you really do not trust your users' input.
Per psycopg2's documentation:
Warning Never, never, NEVER use Python string concatenation (+) or string parameters interpolation (%) to pass variables to a SQL query string. Not even at gunpoint.
Also, I would never, ever, let my users tell me which table I should query. Your app's logic (or routes) should tell you that.
Regarding AsIs(), per psycopg2's documentation:
AsIs()... for objects whose string representation is already valid as SQL representation.
So, don't use it with user input.
You can use psycopg2.sql to compose dynamic queries. Unlike AsIs it will protect you from SQL injection.
If you need to store your query in a variable, you can use the SQL method (documentation):
from psycopg2 import sql
query = sql.SQL("SELECT * FROM Client WHERE id = {clientId}").format(clientId=sql.Literal(clientId))

Confusion between prepared statement and parameterized query in Python

As far as I understand, prepared statements are (mainly) a database feature that allows you to separate parameters from the code that uses such parameters. Example:
PREPARE fooplan (int, text, bool, numeric) AS
INSERT INTO foo VALUES($1, $2, $3, $4);
EXECUTE fooplan(1, 'Hunter Valley', 't', 200.00);
A parameterized query substitutes the manual string interpolation, so instead of doing
cursor.execute("SELECT FROM tablename WHERE fieldname = %s" % value)
we can do
cursor.execute("SELECT FROM tablename WHERE fieldname = %s", [value])
Now, it seems that prepared statements are, for the most part, used in the database language and parameterized queries are mainly used in the programming language connecting to the database, although I have seen exceptions to this rule.
The problem is that asking about the difference between prepared statement and parameterized query brings a lot of confusion. Their purpose is admittedly the same, but their methodology seems distinct. Yet, there are sources indicating that both are the same. MySQLdb and Psycopg2 seem to support parameterized queries but don’t support prepared statements (e.g. here for MySQLdb and in the TODO list for postgres drivers or this answer in the sqlalchemy group). Actually, there is a gist implementing a psycopg2 cursor supporting prepared statements and a minimal explanation about it. There is also a suggestion of subclassing the cursor object in psycopg2 to provide the prepared statement manually.
I would like to get an authoritative answer to the following questions:
Is there a meaningful difference between prepared statement and parameterized query? Does this matter in practice? If you use parameterized queries, do you need to worry about prepared statements?
If there is a difference, what is the current status of prepared statements in the Python ecosystem? Which database adapters support prepared statements?
Prepared statement: A reference to a pre-interpreted query routine on the database, ready to accept parameters
Parametrized query: A query made by your code in such a way that you are passing values in alongside some SQL that has placeholder values, usually ? or %s or something of that flavor.
The confusion here seems to stem from the (apparent) lack of distinction between the ability to directly get a prepared statement object and the ability to pass values into a 'parametrized query' method that acts very much like one... because it is one, or at least makes one for you.
For example: the C interface of the SQLite3 library has a lot of tools for working with prepared statement objects, but the Python API makes almost no mention of them. You can't prepare a statement and use it multiple times whenever you want. Instead, you can use Cursor.executemany(sql, seq_of_params), which takes the SQL code, creates a prepared statement internally, then uses that statement in a loop to process each of the parameter tuples in the iterable you gave.
Many other SQL libraries in Python behave the same way. Working with prepared statement objects can be a real pain, and can lead to ambiguity, and in a language like Python which has such a lean towards clarity and ease over raw execution speed they aren't really the greatest option. Essentially, if you find yourself having to make hundreds of thousands or millions of calls to a complex SQL query that gets re-interpreted every time, you should probably be doing things differently. Regardless, sometimes people wish they could have direct access to these objects because if you keep the same prepared statement around the database server won't have to keep interpreting the same SQL code over and over; most of the time this will be approaching the problem from the wrong direction and you will get much greater savings elsewhere or by restructuring your code.*
Perhaps more importantly in general is the way that prepared statements and parametrized queries keep your data sanitary and separate from your SQL code. This is vastly preferable to string formatting! You should think of parametrized queries and prepared statements, in one form or another, as the only way to pass variable data from your application into the database. If you try to build the SQL statement otherwise, it will not only run significantly slower but you will be vulnerable to other problems.
*e.g., by producing the data that is to be fed into the DB in a generator function then using executemany() to insert it all at once from the generator, rather than calling execute() each time you loop.
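The footnote's generator-plus-executemany pattern can be sketched with the stdlib sqlite3 module (the table and column names here are illustrative):

```python
import sqlite3

def gen_rows(n):
    # Generator producing parameter tuples lazily, one per row to insert.
    for i in range(n):
        yield (i, f"name-{i}")

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE people (id INTEGER, name TEXT)")

# executemany prepares the statement once internally and re-binds
# parameters for every tuple the generator yields, instead of
# re-interpreting the SQL text on each call.
conn.executemany("INSERT INTO people VALUES (?, ?)", gen_rows(1000))

count = conn.execute("SELECT count(*) FROM people").fetchone()[0]
```

The generator never materializes all 1000 tuples at once, which is the memory-friendly shape the footnote describes.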
tl;dr
A parametrized query is a single operation which generates a prepared statement internally, then passes in your parameters and executes.
edit: A lot of people see this answer! I want to also clarify that many database engines also have concepts of a prepared statement that can be constructed explicitly with plaintext query syntax, then reused over the lifetime of a client's session (in postgres for example). Sometimes you have control over whether the query plan is cached to save even more time. Some frameworks use these automatically (I've seen rails' ORM do it aggressively), sometimes usefully and sometimes to their detriment when there are permutations of form for the queries being prepared.
Also if you want to nit pick, parametrized queries do not always use a prepared statement under the hood; they should do so if possible, but sometimes it's just formatting in the parameter values. The real difference between 'prepared statement' and 'parametrized query' here is really just the shape of the API you use.
First, your question shows very good preparation - well done.
I am not sure if I am the person to provide an authoritative answer, but I will try to explain my understanding of the situation.
A prepared statement is an object created on the database server as the result of a PREPARE statement, which turns the provided SQL statement into a sort of temporary procedure with parameters. A prepared statement has the lifetime of the current database session and is discarded after the session is over. The SQL statement DEALLOCATE destroys a prepared statement explicitly.
Database clients can use the SQL statement EXECUTE to execute the prepared statement by calling its name with parameters.
Parametrized statement is an alias for prepared statement, since the prepared statement usually has some parameters.
Parametrized query seems to be a less frequently used alias for the same thing (24 million Google hits for parametrized statement, 14 million for parametrized query). It is possible that some people use this term for another purpose.
The advantages of prepared statements are:
faster execution of the actual prepared statement call (not counting the time for PREPARE)
resistance to SQL injection attacks
Players in executing an SQL query
A real application will probably have the following participants:
application code
ORM package (e.g. SQLAlchemy)
database driver
database server
From the application's point of view it is not easy to know whether the code will really use a prepared statement on the database server, as any of the participants may lack support for prepared statements.
Conclusions
In application code, avoid shaping the SQL query directly, as that is prone to SQL injection. For this reason it is recommended to use whatever the ORM provides for parametrized queries, even if it does not result in prepared statements on the database server side, as the ORM code can be optimized to prevent this sort of attack.
Decide whether a prepared statement is worthwhile for performance reasons. If you have a simple SQL query that is executed only a few times, it will not help; sometimes it will even slow down execution a bit.
The effect will be biggest for a complex query that is executed many times and has a relatively short execution time. In such a case, you may follow these steps:
Check that the database you are going to use supports the PREPARE statement. In most cases it will be present.
Check that the driver you use supports prepared statements, and if not, try to find another one that does.
Check support for this feature at the ORM package level. It sometimes varies driver by driver (e.g. SQLAlchemy states some limitations on prepared statements with MySQL due to how MySQL manages them).
If you are in search of a real authoritative answer, I would head to the authors of SQLAlchemy.
An SQL statement can't be executed immediately: the DBMS must interpret it before execution.
Prepared statements are statements that have already been interpreted; the DBMS just swaps in the parameters and the query starts immediately. This is a feature of certain DBMSs, and with it you can achieve fast response times (comparable to stored procedures).
A parametrized statement is just a way of composing the query string in your programming language. Since the DBMS doesn't care how the SQL string was formed, you get the slower response.
If you measure the time of executing the same query 3-4 times (a select with different conditions), you will see the same time with parametrized queries, whereas with a prepared statement the time shrinks from the second execution onward (the first time the DBMS has to interpret the script anyway).
I think the comment about using executemany fails to deal with the case where an application stores data into the database over time and wants each insert statement to be as efficient as possible, hence the desire to prepare the insert statement once and re-use the prepared statement.
Alternatively, one could put the desired statement(s) in a stored procedure and re-use that.

What if I *really* need to escape quotes for an SQL script?

Please don't reach for the "duplicate" gun just yet.
I need to generate a series of SQL statements involving literal strings that contain the occasional single quote. Yeah, I know that parametrized queries are the way to go. The thing is, I'm not communicating with a database directly: I'm generating an SQL script that will be used to load data on another computer. So, I don't mind issuing parametrized queries to my local database (mysql), but I'll need to output the complete SQL commands in text form. Can I do that in python? The only suggestions I saw on SO are hacks like using repr() or json.dumps(), or specific to psycopg. Surely that can't be all there is?
This application will never deal with untrusted data, so although I'd like the most robust solution possible, I'm not too worried about malformed unicode attacks and the like. What's the best way to do this?
You can subclass the psycopg2 cursor class to use mogrify to write the queries to a file instead of executing them against a database. You could probably also use it directly (and save yourself setting up a database, etc.).
You could also use the query attribute for "playback" of a session.
While generating complex SQL may have its difficulties, I love Python for straight INSERTs. No tools, no libraries, just plain Python solves all issues out of the box:
# syntactic sugar
def sq(s):
    return s.replace("'", "''")

# a third kind of string delimiter :3
sql_template = """INSERT INTO mytab ("the key", f1, f2)
VALUES (%d, '%s', '%s');"""

the_key = 7
f1 = "Hello World"
f2 = "That's ok"

# escape only string columns
sql = sql_template % (the_key, sq(f1), sq(f2))
print(sql)
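As a sanity check of the quote-doubling approach, here is a small sketch that feeds the generated SQL text back through sqlite3's executescript and reads the value back (the table name is illustrative):

```python
import sqlite3

def sq(s):
    # Standard SQL escaping: double every single quote inside a literal.
    return s.replace("'", "''")

f2 = "That's ok"
sql = "INSERT INTO mytab (f2) VALUES ('%s');" % sq(f2)

# Round-trip: run the generated text as a script, then read the value back.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE mytab (f2 TEXT)")
conn.executescript(sql)
stored = conn.execute("SELECT f2 FROM mytab").fetchone()[0]
```

If the doubling is correct, the stored value comes back identical to the original string, embedded apostrophe and all.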
The process for producing SQL is the same for any language. You need to first understand the entirety of the language (or data format; same difference here) that you want to produce. Failure to do this is the primary failing of most attempts: since they don't understand the language they don't handle all the input that can cause a problem. When you know the entirety of the language, including all corner cases and rarely used constructs, then you can create code that can understand all possible inputs and correctly represent them in that language. The easiest solution is to let someone else who has already dealt with it do the hard part, or to sidestep the entire mess by using a different representation. This is why everyone recommends parameterized queries. It pushes the responsibility to someone else who has already solved it, and usually they solve it by using a different representation in the protocol than the SQL language itself.

Python/SQLITE: How do I escape commands/strings

I would like to construct a sqlite3 database query in python, then write it to a file.
I am a huge fan of Python's interfaces for SQL databases, which AFAICT wrap all the calls you could mess up with nice little '?' parameters that sanitize/escape your strings for you, but that's not what I want. I actually just want to prepare and escape an SQL statement - to do this, I need to escape/quote arbitrary strings.
For example:
query = "INSERT INTO example_table VALUES ('%s')" % sqlite_escape_string("'")
And so query should contain:
"INSERT INTO example_table VALUES ('''')"
Note that it inserted an additional ' character.
PHP's equivalent is sqlite_escape_string()
Perl's equivalent is DBI's quote()
I feel Python has a better overall interface, but I happen to need the query, pre-exec.
When you use SQLite it doesn't turn parameterized queries back into text. It has an API ("bindings") and stores the values separately. Queries can be reused with different values just by changing the bindings. This is what underlies the statement cache. Consequently you'll get no help from Python/sqlite3 in doing what you describe.
What you didn't describe is why you want to do this. The usual reason is as some form of tracing. My alternate Python/SQLite interface (APSW) provides easy tracing - you don't even have to touch your code to use it:
https://rogerbinns.github.io/apsw/execution.html#apsw-trace
SQLite also has an authorizer API which lets you veto operations performed by a statement. This also has a side effect of telling you what operations a statement would end up performing.
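The authorizer hook is also exposed by the stdlib sqlite3 module as Connection.set_authorizer; a sketch of using it to observe which tables a statement would touch (the table name is illustrative):

```python
import sqlite3

seen = []

def authorizer(action, arg1, arg2, db_name, trigger_or_view):
    # Record every access the statement makes; returning SQLITE_OK
    # allows the operation to proceed (SQLITE_DENY would veto it).
    seen.append((action, arg1, arg2))
    return sqlite3.SQLITE_OK

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (x INTEGER)")
conn.set_authorizer(authorizer)

# Preparing this statement fires the authorizer for each table/column
# read, revealing what the statement would end up doing.
conn.execute("SELECT x FROM t")
touched_tables = {table for _, table, _ in seen if table}
```

This is the side effect mentioned above: even without running the query to completion, the authorizer callbacks tell you which objects it accesses.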
