SQLAlchemy Expression Language first or one - python

I'm using the sqlalchemy expression language (i.e. Core). I'm trying to build a query that should only return one result.
I want to do
query = select([table]).where(cond).one()
or
query = select([table]).where(cond).first()
but that only yields
AttributeError: 'Select' object has no attribute 'one'
The closest I have come is
query = select([table]).where(cond).limit(1)
but that is not entirely satisfactory because I get a list of results where I want a single result. I can work around by inserting extra logic but I'd be much happier to find a way to do this cleanly. I also would prefer not to use plain text queries.
Any ideas? Much appreciated.

one and first are methods available on ORM query objects. On closer inspection you can see that they cause execution of the query and then post-process to get the first/only entry, error check, etc.
The SQL dialects that I have checked actually don't seem to have this functionality inbuilt for queries (oops I thought they did). They have limit or something similar.
The only option is to work around using limit and some logic or some call to execute or on the result.

Related

Filtering rows recursively in sqlite3/python

I have a sqlite query from an external source that will have an unknown number of WHERE clauses. There will be a limited number of types of clauses (and I know in advance what types they can be), but how many of each type is unknown until I actually receive the query.
I thought this would be an easy problem to solve until I actually got to it.
I can think of a couple of possible solutions. I could specify a long SELECT query with lots of different WHERE clauses for each type of selection, and fill those in with 1=1 when there's not enough selections given to fill them all up. But that's ugly code, and doesn't react well when more space is needed than is given.
I could instead not do this in pure SQL, but instead use a recursive Python function that iterates over the queries and successively filters the results. This is pseudocode that doesn't come close to running successfully:
queries = (list of queries from external source)
return filter_results(conn.cursor(), (database), queries)
def filter_results(cursor, results, queries):
    if len(queries) == 0:
        return results_so_far
    cursor.execute("SELECT * FROM {} WHERE {}".format(results, queries.pop(0)))
    results = cursor.fetchall()
    return filter_results(cursor, results, queries)
As you can see I've fluffed over passing the database into the function, and I'm well aware that I won't be able to pass an SQL query to the result of cursor.fetchall(). At some point I'd either be trying to emulate SQL in Python, or exposing myself to SQL injection.
I'm either grossly overthinking this or trying to solve the unsolvable. I highly suspect it's the former. What's the correct approach to this?
The answer is to use a query building tool. In this case PyPika was the right tool.
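An added example, not code from the original post: the same problem handled with only the standard sqlite3 module, by mapping each known clause type to a fixed SQL fragment and binding every value as a parameter. The table, the column names and the clause-type names are constructed for the illustration.

```python
import sqlite3

# Allow-list of clause types: name -> (column, SQL operator). All names here
# are constructed for the example; only these fragments ever reach the SQL text.
CLAUSE_TYPES = {
    "name_is": ("name", "="),
    "min_age": ("age", ">="),
    "max_age": ("age", "<="),
    "city_like": ("city", "LIKE"),
}


def build_where(filters):
    """Turn [(clause_type, value), ...] into (where_sql, params)."""
    parts, params = [], []
    for clause_type, value in filters:
        if clause_type not in CLAUSE_TYPES:
            raise ValueError(f"unsupported clause type: {clause_type!r}")
        column, operator = CLAUSE_TYPES[clause_type]
        # The value becomes a "?" placeholder, never part of the SQL string.
        parts.append(f"{column} {operator} ?")
        params.append(value)
    # No filters means no restriction, without padding the query with 1=1.
    where_sql = " WHERE " + " AND ".join(parts) if parts else ""
    return where_sql, params


def run_filters(conn, filters):
    where_sql, params = build_where(filters)
    sql = "SELECT name, age, city FROM people" + where_sql + " ORDER BY name"
    return conn.execute(sql, params).fetchall()


def demo_connection():
    """In-memory database with constructed sample rows."""
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE people (name TEXT, age INTEGER, city TEXT)")
    conn.executemany(
        "INSERT INTO people VALUES (?, ?, ?)",
        [("Ada", 36, "London"), ("Bob", 25, "Leeds"), ("Cy", 52, "Lima")],
    )
    return conn


if __name__ == "__main__":
    conn = demo_connection()
    print(run_filters(conn, [("min_age", 30), ("city_like", "L%")]))
    # A hostile value is bound as data, so it simply matches no row.
    print(run_filters(conn, [("name_is", "x' OR '1'='1")]))
```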

Get the `.rowcount` of SQLAlchemy ORM query

How do I get both the rowcount and the instances of an SQLAlchemy ORM Query in a single roundtrip to the database?
If I call query.all(), I get a standard list, which I can get the len() of, but that requires loading the whole resultset in memory at once.
If I call iter(query), I get back a standard python generator, with no access to the .rowcount of the underlying ResultProxy.
If I call query.count() then iter(query) I'm doing two roundtrips of a potentially expensive query to the database.
If I managed to get hold of a ResultProxy for a Query, that would give me the .rowcount, then I could use Query.instances() to get the same generator that Query.__iter__() would give me.
But is there a convenient way of getting at the ResultProxy of a Query other than repeating what Query.__iter__() and Query._execute_and_instances() do? Seems rather inconvenient.
Notice
As mentioned by this answer (thanks #terminus), getting the ResultProxy.rowcount might or might not be useful, and is explicitly warned against in SQLAlchemy documentation for pure "select" statements.
That said, in the case of psycopg2, the .rowcount of the underlying cursor is documented to return the correct number of records returned by any query, even "SELECT" queries, unless you're using the stream_results=True option (thanks #SuperShoot).

Best way to retrieve database results for further use?

I am working heavily with a database, using python, and I am trying to write code that actually makes my life easier.
Most of the time, I need to run a query and get results to process them; most of the time I get the same fields from the same table, so my idea was to collect the various results in an object, to process it later.
I am using SQLAlchemy for the DB interaction. From what I can read, there is no direct way to just say "dump the result of this query to an object", so I can access the various fields like
print object.fieldA
print object.fieldB
and so on. I tried dumping the results to JSON, but even that requires parsing and it is not as straightforward as I hoped.
So at this point is there anything else that I can actually try? Or should I write a custom object that mimics the db structure, and parse the result with for loops, to put the data in the right place? I was hoping to find a way to do this automatically, but so far it seems that the only way to get something close to what I am looking for is to use JSON.
EDIT:
Found some info about serialization and the capabilities that SQLAlchemy has, to read a table and reproduce a sort of 1:1 copy of it in an object, but I am not sure that this will actually work with a query.
Found that the best way is to actually use a custom object.
You can use reflection through SQLAlchemy to extrapolate the structure, but if you are dealing with a small database with few tables, you can simply create on your own the object that will host the data. This gives you control over the object and what you can put in it.
There are obviously other ways, but since nobody posted anything, I assume that they are either too easy to be mentioned or too hard and specific to each case.

Get first AND last element with SQLAlchemy

In my Python (Flask) code, I need to get the first element and the last one sorted by a given variable from a SQLAlchemy query.
I first wrote the following code:
first_valuation = Valuation.query.filter_by(..).order_by(sqlalchemy.desc(Valuation.date)).first()
# Do some things
last_valuation = Valuation.query.filter_by(..).order_by(sqlalchemy.asc(Valuation.date)).first()
# Do other things
As these queries can be heavy for the PostgreSQL database, and as I am duplicating my code, I think it will be better to use only one request, but I don't know SQLAlchemy enough to do it...
(When queries are effectively triggered, for example?)
What is the best solution to this problem?
1) How to get First and Last record from a sql query? this is about how to get first and last records in one query.
2) Here are docs on sqlalchemy query. Specifically pay attention to union_all (to implement answers from above).
It also has info on when queries are triggered (basically, queries are triggered when you use methods that return results, like first() or all(). That means Valuation.query.filter_by(..).order_by(sqlalchemy.desc(Valuation.date)) will not emit a query to the database).
Also, if memory is not a problem, I'd say get all() objects from your first query and just get first and last result via python:
results = Valuation.query.filter_by(..).order_by(sqlalchemy.desc(Valuation.date)).all()
first_valuation = results[0]
last_valuation = results[-1]
It will be faster than performing two (even unified) queries, but will potentially eat a lot of memory, if your database is large enough.
No need to complicate the process so much.
first_valuation = Valuation.query.filter_by(..).order_by(sqlalchemy.desc(Valuation.date)).first()
# Do some things
last_valuation = Valuation.query.filter_by(..).order_by(sqlalchemy.asc(Valuation.date)).first()
This is what you have, and it's good enough. It's not heavy for any database. If you think that it's becoming too heavy, then you can always use some index.
Don't try to get all the results using all() and retrieve from it in list style. When you do, all() loads everything into memory, which is extremely bad if you have a lot of results. It's a lot better to execute just two queries to get those items.

pymssql and placeholders

What placeholders can I use with pymssql? I'm getting my values from the html query string so they are all of type string. Is this safe with regard to sql injection?
query = dictify_querystring(Response.QueryString)
employeedata = conn.execute_row("SELECT * FROM employees WHERE company_id=%s and name = %s", (query["id"], query["name"]))
What mechanism is being used in this case to avoid injections?
There isn't much in the way of documentation for pymssql...
Maybe there is a better python module I could use to interface with Sql Server 2005.
Thanks,
Barry
Regarding SQL injection, and not knowing exactly how that implementation works, I would say that's not safe.
Some simple steps to make it so:
1) Change that query into a prepared statement (or make sure the implementation internally does so, but doesn't seem like it).
2) Make sure you use ' around your query arguments.
3) Validate the expected type of your arguments (if request parameters that should be numeric are indeed numeric, etc).
Mostly... number one is the key. Using prepared statements is the most important and probably easiest line of defense against SQL injection.
Some ORMs take care of some of these issues for you (notice the ample use of the word some), but I would advise making sure you know these problems and how to work around them before using an abstraction like an ORM.
Sooner or later, you need to know what's going on under those wonderful layers of time-saving.
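An added example, not code from the original thread: the employees lookup written once with the values pasted into the SQL string and once with a type check and bound parameters, run against a constructed in-memory table. It uses the standard sqlite3 module as a stand-in so it runs offline; sqlite3 writes placeholders as ? where pymssql writes %s, and the point in both is that the values are handed to the driver as a separate argument.

```python
import re
import sqlite3


def unsafe_lookup(conn, query):
    # Vulnerable on purpose: Python's % operator pastes the raw strings into
    # the SQL text, so a quote inside a value rewrites the statement.
    sql = "SELECT name FROM employees WHERE company_id=%s AND name='%s'" % (
        query["id"], query["name"])
    return conn.execute(sql).fetchall()


def safe_lookup(conn, query):
    # Query-string values are all strings; reject a non-numeric id up front.
    if not re.fullmatch(r"[0-9]+", query["id"]):
        raise ValueError("id must be numeric")
    # SQL text and values are passed separately and bound by the driver.
    sql = "SELECT name FROM employees WHERE company_id=? AND name=?"
    return conn.execute(sql, (int(query["id"]), query["name"])).fetchall()


def demo_connection():
    """In-memory database with constructed sample rows."""
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE employees (company_id INTEGER, name TEXT)")
    conn.executemany(
        "INSERT INTO employees VALUES (?, ?)",
        [(7, "alice"), (7, "bob"), (9, "carol")],
    )
    return conn


# Constructed query-string dictionaries: one ordinary, one hostile.
NORMAL = {"id": "7", "name": "alice"}
HOSTILE = {"id": "7", "name": "x' OR '1'='1"}

if __name__ == "__main__":
    conn = demo_connection()
    print(safe_lookup(conn, NORMAL))
    print(len(unsafe_lookup(conn, HOSTILE)))
    print(safe_lookup(conn, HOSTILE))
```

With the hostile name, the interpolated query returns every row in the table, while the bound query treats the same string as a name that matches nothing.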
Maybe there is a better python module I could use to interface with Sql Server 2005.
Well, my advice is using an ORM like SQLAlchemy to handle this.
>>> from sqlalchemy.ext.sqlsoup import SqlSoup
>>> db = SqlSoup('mssql:///DATABASE?PWD=yourpassword&UID=some_user&dsn=your_dsn')
>>> employeedata = db.employees.filter(db.employees.company_id==query["id"])\
...     .filter(db.employees.name==query["name"]).one()
You can use one() if you want to raise an exception if there is more than one record, .first() if you want just the first record or .all() if you want all records.
As a side benefit, if you later change to other DBMS, the code will remain the same except for the connection URL.
