Pagination for SQLite - python

Hey friends, I am working on an application similar to ServiceNow: I have requests coming in from users and have to work on them. I am using Python Flask and SQLite for this.
I am new to Flask and this is my first project, so please correct me if I am wrong.
result = cur.execute("SELECT * from orders")
orders = result.fetchmany(5)
I am trying to use orders = result.paginate(...), but it raises an error (a plain sqlite3 cursor has no paginate method; that comes from Flask-SQLAlchemy).
Also, I am not sure how to display the db data across different pages.
I want the first 10 records on the 1st page, the next 10 on the 2nd page, and so on.
Could you please help me?

I've never used Flask, but assuming that you can issue a page throw, a query that introduces a value from 0 to 9 would allow a conditional page throw.
For example, assume an orders table that has 3 columns, orderdate, ordertype and orderdesc, and that the required order is according to those columns (see notes). The following would then introduce a column that cycles from 0 to 9 and thus allow the check for a page throw:
SELECT *,
    (
        SELECT count(*)
        FROM orders
        WHERE orderdate||ordertype||orderdesc < o.orderdate||o.ordertype||o.orderdesc
    ) % 10 AS orderby
FROM orders AS o ORDER BY orderdate||ordertype||orderdesc
Note that the above relies upon the sort order and the WHERE clause producing the same result; a more complex WHERE clause may be needed. The above is intended as an in-principle example.
Example Usage
Consider the following example of the above in use. It generates 100 rows of orders with randomly generated orderdates and ordertypes within specific ranges, and then extracts the data according to the above query. The underlying data and the extracted data are shown in the results section.
/* Create Test Environment */
DROP TABLE IF EXISTS orders;
/* Generate and load some random orders */
CREATE TABLE IF NOT EXISTS orders (orderdate TEXT, ordertype TEXT, orderdesc TEXT);
WITH RECURSIVE cte1(od,ot,counter) AS
(
    SELECT
        datetime('now','+'||(abs(random()) % 10)||' days'),
        (abs(random()) % 26),
        1
    UNION ALL SELECT
        datetime('now','+'||(abs(random()) % 10)||' days'),
        (abs(random()) % 26),
        counter+1
    FROM cte1 LIMIT 100
)
INSERT INTO orders SELECT * FROM cte1;
/* Display the resultant data */
SELECT rowid,* FROM orders;
/* Display data with generated page throw indicator */
SELECT rowid,*,
    (
        SELECT count(*)
        FROM orders
        WHERE orderdate||ordertype||orderdesc < o.orderdate||o.ordertype||o.orderdesc
    ) % 10 AS orderby
FROM orders AS o ORDER BY orderdate||ordertype||orderdesc;
/* Clean up */
DROP TABLE IF EXISTS orders;
Results (partial)
The core data is unsorted, i.e. in rowid order (rowid is included for comparison purposes).
The extracted data carries the generated page-throw indicator in the orderby column: a value of 0 marks the first row of each page of 10.
Obviously you would likely not throw a page for the first row.
As concatenation of the 3 columns has been used for convenience, the results may be a little confusing, as e.g. 2 would appear to be greater than 11 and so on.
The rowid indicates the original position, so it demonstrates that the data has been sorted.
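Finally, coming back to the Flask side of the question: if a page-throw indicator is not strictly required and you simply want 10 rows per page, the usual approach is to let SQLite do the paging with LIMIT and OFFSET, passing the page number as a URL parameter. A minimal sketch, not a definitive implementation (the database file name, the orders.html template, and the PER_PAGE value are assumptions, not from the question):

import sqlite3
from flask import Flask, render_template, request

app = Flask(__name__)
PER_PAGE = 10  # rows per page (assumed value)

@app.route("/orders")
def list_orders():
    # ?page=1 gives rows 1-10, ?page=2 gives rows 11-20, and so on.
    page = request.args.get("page", 1, type=int)
    con = sqlite3.connect("app.db")  # hypothetical database file
    cur = con.cursor()
    cur.execute(
        "SELECT * FROM orders ORDER BY rowid LIMIT ? OFFSET ?",
        (PER_PAGE, (page - 1) * PER_PAGE),
    )
    rows = cur.fetchall()
    con.close()
    return render_template("orders.html", orders=rows, page=page)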

Related

How do I nest these queries in one Replace Into query?

I have three queries and another table called output_table. This code works, but it needs to be executed in a single REPLACE INTO query (query 1). I know this involves nested queries and subqueries, but I have no idea if it is possible, since my key is the set of DISTINCT target_currency values from output_table.
How do I rewrite queries 2 and 3 so they execute inside query 1, i.e. in the REPLACE INTO query instead of the individual UPDATE ones?
1. conn3.cursor().execute(
"""REPLACE INTO coin_best_returns(coin) SELECT DISTINCT target_currency FROM output_table"""
)
2. conn3.cursor().execute(
"""UPDATE coin_best_returns SET
highest_price = (SELECT MAX(ask_price_usd) FROM output_table WHERE coin_best_returns.coin = output_table.target_currency),
lowest_price = (SELECT MIN(bid_price_usd) FROM output_table WHERE coin_best_returns.coin = output_table.target_currency)"""
)
3. conn3.cursor().execute(
"""UPDATE coin_best_returns SET
highest_market = (SELECT exchange FROM output_table WHERE coin_best_returns.highest_price = output_table.ask_price_usd),
lowest_market = (SELECT exchange FROM output_table WHERE coin_best_returns.lowest_price = output_table.bid_price_usd)"""
)
You can do it with the help of some window functions, a subquery, and an inner join. The version below is pretty lengthy, but it is less complicated than it may appear. It uses window functions in a subquery to compute the needed per-currency statistics, and factors this out into a common table expression to facilitate joining it to itself.
Other than the inline comments, the main reason for the complication is original query number 3. Queries (1) and (2) could easily be combined as a single, simple, aggregate query, but the third query is not as easily addressed. To keep the exchange data associated with the corresponding ask and bid prices, this query uses window functions instead of aggregates. This also provides a vehicle different from DISTINCT for obtaining one result per currency.
Here's the bare query:
WITH output_stats AS (
    -- The ask and bid information for every row of output_table, every row
    -- augmented by the needed maximum ask and minimum bid statistics
    SELECT
        target_currency AS tc,
        ask_price_usd AS ask,
        bid_price_usd AS bid,
        exchange AS market,
        MAX(ask_price_usd) OVER (PARTITION BY target_currency) AS high,
        ROW_NUMBER() OVER (
            PARTITION BY target_currency, ask_price_usd
            ORDER BY exchange DESC) AS ask_rank,
        MIN(bid_price_usd) OVER (PARTITION BY target_currency) AS low,
        ROW_NUMBER() OVER (
            PARTITION BY target_currency, bid_price_usd
            ORDER BY exchange ASC) AS bid_rank
    FROM output_table
)
REPLACE INTO coin_best_returns(
    -- you must, of course, include all the columns you want to fill in the
    -- upsert column list
    coin,
    highest_price,
    lowest_price,
    highest_market,
    lowest_market)
SELECT
    -- ... and select a value for each column
    asks.tc,
    asks.ask,
    bids.bid,
    asks.market,
    bids.market
FROM output_stats asks
JOIN output_stats bids
    ON asks.tc = bids.tc
WHERE
    -- These conditions choose exactly one asks row and one bids row
    -- for each currency
    asks.ask = asks.high
    AND asks.ask_rank = 1
    AND bids.bid = bids.low
    AND bids.bid_rank = 1
Note well that unlike the original query 3, this will consider only exchange values associated with the target currency for setting the highest_market and lowest_market columns in the destination table. I'm supposing that that's what you really want, but if not, then a different strategy will be needed.
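One practical note: if the target database is SQLite (REPLACE INTO exists in both SQLite and MySQL, and the question does not say which), window functions such as ROW_NUMBER() OVER (...) require SQLite 3.25.0 or newer. On the Python side, the combined statement then replaces all three execute calls with one; a minimal sketch, where conn3 is the connection from the question and combined_sql is assumed to hold the full query text shown above:

conn3.cursor().execute(combined_sql)
conn3.commit()  # REPLACE INTO modifies rows, so commit the transaction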

Is there a query to retrieve data from Sybase table on record count condition

I have a situation where I need to select records from a Sybase table based on a certain condition.
Records need to be extracted in batches. If the total count is 2000, then I need to extract 500 in the first batch and 500 in the next, until the 2000 record count is reached.
I used the LIMIT condition but it gives an incorrect syntax error.
select top 2 *
from CERD_CORPORATE..BOOK
where id_bo_book in('5330')
limit(2,3)
You can't give LIMIT a range, but you can use the OFFSET keyword for this (note that LIMIT/OFFSET is not available in Sybase ASE itself; this form applies to Sybase products that support it, such as SQL Anywhere):
SELECT * FROM CERD_CORPORATE..BOOK
WHERE id_bo_book IN ('5330')
LIMIT 2 OFFSET 1;
On ASE 12.5.1 and onwards this can be done with a "SQL Derived Table" or "Inline View". The query requires that each row has a unique key so the table can be joined with itself and a count of the rows where the key value is less than the row being joined can be returned. This gives a monotonically increasing number with which to specify the limit and offset.
The equivalents of limit and offset are the values compared against x.rowcounter.
select
    x.rowcounter,
    x.error,
    x.severity
from
    (
        select
            t1.error,
            t1.severity,
            t1.description,
            count(t2.error) as rowcounter
        from
            master..sysmessages t1,
            master..sysmessages t2
        where
            t1.error >= t2.error
        group by
            t1.error,
            t1.severity,
            t1.description
    ) x
where
    x.rowcounter >= 50
    and x.rowcounter < 100
SQL Derived Tables are available as far back as Sybase ASE 12.5.1; see the SQL Derived Tables section of the ASE documentation.
The use of master..sysmessages in the example provides a reasonable (10,000 rows) data set with which to experiment.
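If the batching only needs to happen on the Python side, the DB-API can also deliver the rows in fixed-size chunks from a single cursor via fetchmany, with no LIMIT/OFFSET SQL at all. A minimal sketch (the connection object and the process handler are assumptions, not from the question):

cursor = connection.cursor()  # any DB-API driver for Sybase follows this pattern
cursor.execute(
    "SELECT * FROM CERD_CORPORATE..BOOK WHERE id_bo_book IN ('5330')"
)

BATCH_SIZE = 500
while True:
    batch = cursor.fetchmany(BATCH_SIZE)  # next batch of up to 500 rows
    if not batch:
        break  # all rows consumed
    process(batch)  # hypothetical per-batch handler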

Python SQL how to look for non-matches in other tables?

I'm looking for a solution so that I can check for non-matches in other tables.
Basically I have 3 tables (see below). I want to look into Table 1 and identify the first row that doesn't have a match in either Name or Location. If both are recognized, it should move to the next row and check again.
I have tried to accomplish this with SQL and looping, but as I only want the first row that doesn't match, I haven't found a smooth solution (or a pretty one, for that matter, as I'm fairly rookie-ish).
I'm pretty sure this can be accomplished with SQL.
Table 1
Id Name Location
1 John Canada
2 James Brazil
3 Jim Hungary
Table 2 - Recognized Names
Id Name
1 John
2 James
Table 3 - Recognized Locations
Id Location
1 Brazil
2 Hungary
So I want to select from Table 1 where Name can't find a match in Table 2, or where Location can't find a match in Table 3.
In my example above the result should be Id = 1, as its Location is not in Table 3.
Thanks in advance.
You can use not exists to select if some sub-query doesn't select a row:
select
    *
from
    Table1 t1
where
    not exists (
        select * from Table2 t2 where t2.`Name` = t1.`Name`
    ) or
    not exists (
        select * from Table3 t3 where t3.`Location` = t1.`Location`
    )
order by
    t1.Id
limit 1
It's not a very complicated query, but still some things are going on, so here is the same one again, but with comments to explain the various parts:
select
    /* I select all columns, *, for the example, but in real-life scenarios
       it's always better to explicitly specify which columns you need. */
    *
from
    /* Optionally, you can specify a short or different alias for a table (t1);
       this can help make your query more readable by allowing you to explicitly
       specify where a column is coming from, without cluttering the query with
       long names. */
    Table1 t1
where
    /* exists takes a sub-query, which is executed for each row of the main query.
       The expression returns true if the subquery returns a row.
       With not (not exists), the expression is inverted: true becomes false. */
    not exists (
        /* In MariaDB, backticks can be used to escape identifiers that also are
           reserved words. You are allowed to use them for any identifier, but
       	   for reserved-word identifiers, they are often necessary. */
        select * from Table2 t2 where t2.`Name` = t1.`Name`
    )
    /* Combine the two subqueries. We want rows that are missing a matching row
       in sub-query one or in sub-query two. */
    or
    not exists (
        select * from Table3 t3 where t3.`Location` = t1.`Location`
    )
/* Specify _some_ ordering by which you can distinguish 'first' from 'second'. */
order by
    t1.Id
/* Return only 1 row (the first according to the order clause) */
limit 1
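Since the question mentions Python, here is a self-contained sketch that loads the three sample tables into an in-memory SQLite database and runs the same query (SQLite uses double quotes rather than backticks for quoted identifiers, but these names need no quoting):

import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE Table1 (Id INTEGER, Name TEXT, Location TEXT);
    CREATE TABLE Table2 (Id INTEGER, Name TEXT);
    CREATE TABLE Table3 (Id INTEGER, Location TEXT);
    INSERT INTO Table1 VALUES (1,'John','Canada'),(2,'James','Brazil'),(3,'Jim','Hungary');
    INSERT INTO Table2 VALUES (1,'John'),(2,'James');
    INSERT INTO Table3 VALUES (1,'Brazil'),(2,'Hungary');
""")

row = con.execute("""
    SELECT * FROM Table1 t1
    WHERE NOT EXISTS (SELECT 1 FROM Table2 t2 WHERE t2.Name = t1.Name)
       OR NOT EXISTS (SELECT 1 FROM Table3 t3 WHERE t3.Location = t1.Location)
    ORDER BY t1.Id LIMIT 1
""").fetchone()

print(row)  # (1, 'John', 'Canada') -- Canada is not a recognized location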

Psycopg2 - return first 1000 results and randomly pick one

Hi, I have a PostgreSQL table "TABLE1" with 2.7 million records. Each record has a field "FIELD1" that may be empty or may have data. I want a SELECT statement or method that a) returns the first 1000 results from TABLE1 with FIELD1 empty, and b) randomly picks one of those records to return to a Python variable. Help???
For selecting the first 1000 results you can use LIMIT in your query:
SELECT * FROM table1 WHERE field1 IS NULL ORDER BY id LIMIT 1000;
The result will be a list in Python, so you can use Python's random module to operate on the result list.
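A minimal psycopg2 sketch of that approach (the connection parameters and the id ordering column are assumptions, not from the question):

import random
import psycopg2

# Connection parameters are hypothetical placeholders.
conn = psycopg2.connect(dbname="mydb", user="me", host="localhost")
cur = conn.cursor()

# First 1000 records with FIELD1 empty (NULL), "first" meaning smallest id.
cur.execute("SELECT * FROM table1 WHERE field1 IS NULL ORDER BY id LIMIT 1000")
rows = cur.fetchall()  # a Python list of tuples

record = random.choice(rows) if rows else None  # pick one at random in Python
cur.close()
conn.close()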
If performance is not a concern:
SELECT *
FROM (
SELECT *
FROM tbl
WHERE field1 IS NULL
ORDER BY id --?? unclear from question
LIMIT 1000
) sub
ORDER BY random()
LIMIT 1;
This returns 1 perfectly random row from the "first" 1000 empty rows.
"Empty" meaning NULL, and "first" meaning smallest id.
If performance is a concern, you need to be a lot more specific.
If your circumstances match, this related answer might be of help:
Best way to select random rows PostgreSQL

SQLAlchemy Select with Exist limiting

I have a table with 4 columns (1 PK) from which I need to select 30 rows.
For these rows, two columns (col. A and B) must exist in another table (8 columns, 1 PK, 2 of which are A and B).
The second table is large, it contains millions of records, and it's enough for me to know if even a single row exists containing the values of col. A and B of the 1st table's row.
I am using the code below:
query = db.Session.query(db.Table_1).\
    filter(
        exists().where(db.Table_2.col_a == db.Table_1.col_a).\
                 where(db.Table_2.col_b == db.Table_1.col_b)
    ).limit(30).all()
This query gets me the results I desire; however, I'm afraid it might be a bit slow, since it does not apply a LIMIT to the exists() subquery, nor does it do a SELECT 1, but a SELECT *.
exists() does not accept a .limit(1).
How can I put a limit on exists() to get it not to look through the whole table, hence making this query run faster?
I need n rows from Table_1 whose 2 columns exist in a record in Table_2.
Thank you
You can do the "select 1" thing using a more explicit form, as mentioned here:
exists([1]).where(...)
However, while I've been a longtime diehard "select 1" kind of guy, I've since learned that the usage of "1" vs. "*" for performance is now a myth.
exists() is also a wrapper around select(), so you can get a limit() by constructing the select() first:
s = select([1]).where(
    table1.c.col_a == table2.c.col_a
).where(
    table1.c.col_b == table2.c.col_b
).limit(1)
s = exists(s)
Alternatively, the same filter can be expressed as a plain join-style WHERE clause:
query = select([db.Table_1])
query = query.where(
    and_(
        db.Table_2.col_a == db.Table_1.col_a,
        db.Table_2.col_b == db.Table_1.col_b
    )
).limit(30)
result = session.execute(query)
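For reference, on SQLAlchemy 1.4/2.0 the exists-with-limit idea reads a little differently. A minimal sketch, assuming ORM-mapped classes Table1 and Table2 with col_a and col_b attributes and an open session:

from sqlalchemy import select

# Correlated EXISTS subquery with an explicit SELECT 1 ... LIMIT 1.
subq = (
    select(1)
    .where(Table2.col_a == Table1.col_a)
    .where(Table2.col_b == Table1.col_b)
    .limit(1)
)

# 30 rows of Table1 for which at least one matching Table2 row exists.
stmt = select(Table1).where(subq.exists()).limit(30)
rows = session.execute(stmt).scalars().all()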
