Efficient way to run select queries millions of times - Python

I want to run various select queries 100 million times, and I have approximately 1 million rows in a table. Therefore, I am looking for the fastest way to run all these select queries.
So far I have tried three different methods, and the results were similar.
The following three methods are, of course, not doing anything useful, but are purely for comparing performance.
First method:
for i in range(100000000):
    cur.execute("select id from testTable where name = 'aaa';")
Second method:
cur.execute("""PREPARE selectPlan AS
    SELECT id FROM testTable WHERE name = 'aaa' ;""")

for i in range(10000000):
    cur.execute("""EXECUTE selectPlan ;""")
Third method:
def _data(n):
    cur = conn.cursor()
    for i in range(n):
        yield (i, 'test')

sql = """SELECT id FROM testTable WHERE name = 'aaa' ;"""
cur.executemany(sql, _data(10000000))
And the table is created like this:
cur.execute("""CREATE TABLE testTable ( id int, name varchar(1000) );""")
cur.execute("""CREATE INDEX indx_testTable ON testTable(name)""")
I thought that using the prepared statement functionality would really speed up the queries, but since that does not seem to be the case, I was hoping you could give me a hint on other ways of speeding this up.

This sort of benchmark is unlikely to produce any useful data, but the second method should be the fastest, as once the statement is prepared it is stored in memory by the database server. Further calls to repeat the query do not require the text of the query to be transmitted, saving a small amount of time.
This is likely to be moot, as the query is very small (likely the same number of packets over the wire as sending the query text repeatedly), and the query cache will serve the same data for every request.
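For completeness, the prepared plan can also take a parameter, so one plan can serve different lookup values; a minimal psycopg2-style sketch (assuming cur is an open cursor, as in the question):

# Prepare once; $1 is the parameter the plan receives on each EXECUTE.
cur.execute("PREPARE selectPlan (varchar) AS SELECT id FROM testTable WHERE name = $1;")

for i in range(1000000):
    # Only the short EXECUTE statement is sent, not the full query text.
    cur.execute("EXECUTE selectPlan (%s);", ('aaa',))
    cur.fetchall()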

What is the purpose of retrieving that amount of data at once? I don't know your situation, but I would definitely page the results using LIMIT and OFFSET. Take a look at:
7.6. LIMIT and OFFSET
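A rough sketch of paging in that style with psycopg2-style placeholders (the page size is an assumption):

page_size = 10000
offset = 0
while True:
    cur.execute(
        "SELECT id FROM testTable WHERE name = %s ORDER BY id LIMIT %s OFFSET %s;",
        ('aaa', page_size, offset),
    )
    rows = cur.fetchall()
    if not rows:
        break
    # process this page of rows here
    offset += page_size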

If you just want to benchmark the SQL on its own, without mixing Python into the equation, try pgbench.
http://developer.postgresql.org/pgdocs/postgres/pgbench.html
Also what is your goal here?

Related

Purging a huge MySQL data table using Python

I have a table with 1000M rows, and I need an automated script that keeps only the last 7 days of data and deletes the older days. I want to do it using Python and a chunking approach, deleting chunk by chunk.
Do we have any library with this chunking concept for MySQL in Python?
If not, can anyone suggest the best way to apply chunked deletes with MySQL?
I'm unaware of a Python package that has an API for "chunking" deletes from a MySQL table. SQLAlchemy provides a fluent interface that could do this, but it is not much different from the plain SQL. I suggest using PyMySQL.
import datetime
import pymysql.cursors

connection = pymysql.connect(
    host='host',
    user='user',
    password='password',
    database='database'
)

seven_days_before_now = datetime.datetime.now() - datetime.timedelta(days=7)
chunksize = 1000

with connection.cursor() as cursor:
    sql = 'DELETE FROM `mytable` WHERE `timestamp` < %s ORDER BY `id` LIMIT %s;'
    num_deleted = None
    while num_deleted != 0:
        num_deleted = cursor.execute(sql, (seven_days_before_now, chunksize))
        connection.commit()
The LIMIT just limits the number of deleted rows to the chunksize. The ORDER BY ensures that the DELETE is deterministic, and it sorts by the primary key because the primary key is guaranteed to be indexed; so even though it sorts for each chunk, at least it is sorting on an indexed column. Remove the ORDER BY if deterministic behavior is unnecessary; it will result in a faster execution time.
You'll need to replace the connection details, table name, column name and chunksize. Also, this solution assumes that the table has a column named id which is the primary key and an auto-incrementing integer. You'll need to make some changes if your schema differs.
As Bernd Buffen commented: the correct way to get the behavior you desire is to partition the table. Please consider a migration to do so.
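For reference, a rough sketch of what a date-partitioned replacement table might look like (the schema here is an assumption; note that MySQL requires the partitioning column to be part of every unique key, so `timestamp` has to join the primary key):

with connection.cursor() as cursor:
    cursor.execute("""
        CREATE TABLE mytable_partitioned (
            id BIGINT NOT NULL AUTO_INCREMENT,
            `timestamp` DATETIME NOT NULL,
            payload TEXT,
            PRIMARY KEY (id, `timestamp`)
        )
        PARTITION BY RANGE (TO_DAYS(`timestamp`)) (
            PARTITION p20211101 VALUES LESS THAN (TO_DAYS('2021-11-02')),
            PARTITION p20211102 VALUES LESS THAN (TO_DAYS('2021-11-03')),
            PARTITION pmax VALUES LESS THAN MAXVALUE
        )
    """)
    # Dropping a whole day's partition is close to instant compared with chunked DELETEs.
    # A scheduled job would add a new daily partition and drop the oldest one.
    cursor.execute("ALTER TABLE mytable_partitioned DROP PARTITION p20211101")
connection.commit()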
And, of course: stop using Python 2, it's been unsupported for almost two years as of the first version of this answer.

SQL SELECT statement optimization

I have made a test table in SQL with the information schema as shown.
Now I extract this information using a Python script, the code of which is shown below:
import pandas as pd
import mysql.connector
db = mysql.connector.connect(host="localhost", user="root", passwd="abcdef")
pointer = db.cursor()
pointer.execute("use holdings")
x = "Select * FROM orders where tradingsymbol like 'TATACHEM'"
pointer.execute(x)
rows = pointer.fetchall()
rows = pd.DataFrame(rows)
stock = rows[1]
The production table contains 200 unique trading symbols and has a schema similar to the test table.
My concern is that, for the following statement:
x = "Select * FROM orders where tradingsymbol like 'TATACHEM'"
I will have to replace the value of tradingsymbol 200 times, which is inefficient.
Is there a more efficient way to do this?
If I understand you correctly, your problem is that you want to avoid sending a separate query for each trading symbol, correct? In this case the MySQL IN operator might be of help: you can send one query to the database containing all the trading symbols you want. If you want to do different things with the various trading symbols, you can select the subsets within pandas.
Another performance improvement could be pandas.read_sql, since this speeds up the creation of the DataFrame somewhat.
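A minimal sketch combining both ideas (the symbol list and connection details are assumptions, not code from the question):

import pandas as pd
import mysql.connector

db = mysql.connector.connect(host="localhost", user="root", passwd="abcdef", database="holdings")

symbols = ['TATACHEM', 'INFY', 'RELIANCE']  # hypothetical subset of the 200 symbols
placeholders = ', '.join(['%s'] * len(symbols))
query = "SELECT * FROM orders WHERE tradingsymbol IN ({})".format(placeholders)

# read_sql keeps the parameters bound and builds the DataFrame in one step.
df = pd.read_sql(query, con=db, params=symbols)

# Work on each symbol separately inside pandas if needed.
per_symbol = {sym: grp for sym, grp in df.groupby('tradingsymbol')}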
Two more things to add for efficiency:
Ensure that tradingsymbol is indexed in MySQL for faster lookups (see the sketch below).
Consider making tradingsymbol an ENUM so that typos and the like are rejected; it also makes the above-mentioned IN lookups cheaper, since they no longer require full string comparisons.
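For the index suggestion, a short sketch of the DDL (the index name is an assumption; adjust if tradingsymbol is a TEXT column, which would need a key length):

cursor = db.cursor()
cursor.execute("CREATE INDEX idx_orders_tradingsymbol ON orders (tradingsymbol)")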

Proper way to insert iterative data into Cassandra using Python

Let's say I have a Cassandra table defined like this:
CREATE TABLE IF NOT EXISTS {} (
    user_id bigint,
    username text,
    age int,
    PRIMARY KEY (user_id)
);
I have 3 lists of the same size, say 1,000,000 records in each list. Is it good practice to insert the data using a for loop like this:
for index, user_id in enumerate(user_ids):
    query = "INSERT INTO TABLE (user_id, username, age) VALUES ({0}, '{1}', {2});".format(user_id, username[index], age[index])
    session.execute(query)
Prepared statements with concurrent execution will be your best bet. The driver provides utility functions for concurrent execution of statements with sequences of parameters, just as you have with your lists: execute_concurrent_with_args
Zipping your lists together will produce a sequence of parameter tuples suitable for input to that function.
Something like this:
from cassandra.concurrent import execute_concurrent_with_args

prepared = session.prepare("INSERT INTO table (user_id, username, age) VALUES (?, ?, ?)")
execute_concurrent_with_args(session, prepared, zip(user_ids, username, age))
It's probably a good idea to start by looking at the Python driver's getting started guide. If you have already seen that, then apologies, but I thought it worth mentioning.
Generally speaking, you'd create your session object and then do your inserts inside your loop, probably using something like a prepared statement (discussed further down the getting started page), but also here and here.
The example on the above page uses this as a good starting point:
user_lookup_stmt = session.prepare("SELECT * FROM users WHERE user_id=?")

users = []
for user_id in user_ids_to_query:
    user = session.execute(user_lookup_stmt, [user_id])
    users.append(user)
You may also find this blog post helpful when it comes to getting better throughput with the Python driver.
You might find the Python driver's GitHub page a useful resource; in particular, I found this example using a prepared statement here that might help you too.

SQLite get id and insert when using executemany

I am optimising my code and reducing the number of queries. These used to be in a loop, but I am trying to restructure my code like this. How do I get the second query working so that it uses the id inserted by the first query for each row? Assume that the datasets are in the right order too.
self.c.executemany("INSERT INTO nodes (node_value, node_group) values (?, (SELECT node_group FROM nodes WHERE node_id = ?)+1)", new_values)
#my problem is here
new_id = self.c.lastrowid
connection_values.append((node_id, new_id))
#insert entry
self.c.executemany("INSERT INTO connections (parent, child, strength) VALUES (?,?,1)", connection_values)
These queries used to be in a for loop but were taking too long, so I am trying to avoid running each query individually. I believe there might be a way of combining them into one query, but I am unsure how this would be done.
You will need to either insert rows one at a time or read back the rowids that were picked by SQLite's ID assignment logic; as documented in Autoincrement in SQLite, there is no guarantee that the IDs generated will be consecutive and trying to guess them in client code is a bad idea.
You can do this implicitly if your program is single-threaded, as follows (a code sketch follows the steps):
Set the AUTOINCREMENT keyword in your table definition. This will guarantee that any generated row IDs will be higher than any that appear in the table currently.
Immediately before the first statement, determine the highest ROWID in use in the table.
oldmax ← Execute("SELECT max(ROWID) from nodes").
Perform the first insert as before.
Read back the row IDs that were actually assigned with a select statement:
NewNodes ← Execute("SELECT ROWID FROM nodes WHERE ROWID > ? ORDER BY ROWID ASC", oldmax) .
Construct the connection_values array by combining the parent ID from new_values and the child ID from NewNodes.
Perform the second insert as before.
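A rough sketch of those steps with the sqlite3 module, assuming each tuple in new_values is (node_value, parent_node_id) as in the question's first executemany (treat this as an outline, not a drop-in replacement):

def insert_nodes_and_connections(conn, new_values):
    c = conn.cursor()

    # Step 2: remember the highest ROWID currently in use.
    oldmax = c.execute("SELECT COALESCE(MAX(ROWID), 0) FROM nodes").fetchone()[0]

    # Step 3: the first insert, exactly as before.
    c.executemany(
        "INSERT INTO nodes (node_value, node_group) "
        "VALUES (?, (SELECT node_group FROM nodes WHERE node_id = ?) + 1)",
        new_values,
    )

    # Step 4: read back the ROWIDs SQLite actually assigned, in insertion order.
    new_ids = [row[0] for row in c.execute(
        "SELECT ROWID FROM nodes WHERE ROWID > ? ORDER BY ROWID ASC", (oldmax,)
    )]

    # Step 5: pair the parent ID from new_values with the child ID just assigned.
    connection_values = [(parent_id, child_id)
                         for (_, parent_id), child_id in zip(new_values, new_ids)]

    # Step 6: the second insert, exactly as before.
    c.executemany(
        "INSERT INTO connections (parent, child, strength) VALUES (?, ?, 1)",
        connection_values,
    )
    conn.commit()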
This may or may not be faster than your original code; AUTOINCREMENT can slow down performance, and without actually doing the experiment there's no way to tell.
If your program is writing to nodes from multiple threads, you'll need to guard this algorithm with a mutex as it will not work at all with multiple concurrent writers.

How to select a long list of IDs in SQL using Python

I have a very large DB that I am working with, and I need to know how to select a large set of IDs which don't have any real pattern to them. This is a segment of the code I have so far:
longIdList = [1, 3, 5, 8 ....................................]

for id in longIdList:
    sql = "select * from Table where id = %s" % id
    result = cursor.execute(sql)
    print result.fetchone()
I was thinking that there must be a quicker way of doing this... I mean, my script needs to search through a DB that has over 4 million IDs. Is there a way I can use a select command to grab them all in one shot? Could I use the WHERE statement with a list of IDs? Thanks.
Yes, you can use SQL's IN() predicate to compare a column to a set of values. This is standard SQL and it's supported by every SQL database.
There may be a practical limit to the number of values you can put in an IN() predicate before it becomes too inefficient or simply exceeds a length limit on SQL queries. The largest practical list of values depends on what database you use (in Oracle it's 1000, MS SQL Server it's around 2000). My feeling is that if your list exceeds a few dozen values, I'd seek another solution.
For example, @ngroot suggests using a temp table in his answer. For analysis of this solution, see this blog by Stack Overflow regular @Quassnoi: Passing parameters in MySQL: IN list vs. temporary table.
Parameterizing a list of values into an SQL query in a safe way can be tricky. You should be mindful of the risk of SQL injection.
Also see this popular question on Stack Overflow: Parameterizing a SQL IN clause?
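One common way to parameterize an IN list safely is to generate one placeholder per value and bind the whole list; a sketch with a DB-API cursor (%s placeholder style assumed, as in MySQLdb or psycopg2):

id_batch = longIdList[:1000]  # keep each IN list to a manageable size
placeholders = ', '.join(['%s'] * len(id_batch))
sql = "SELECT * FROM Table WHERE id IN ({})".format(placeholders)
cursor.execute(sql, id_batch)
rows = cursor.fetchall()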
You can use IN to look for multiple items simultaneously:
SELECT * FROM Table WHERE id IN (x, y, z, ...)
So maybe something like:
sql = "select * from Table where id in (%s)" % (', '.join(str(id) for id in longIdList))
Serialize the list in some fashion (comma-separated or XML would be reasonable choices), then have a stored procedure on the other side that will deserialize the list into a temp table. You can then do an INNER JOIN against the temp table.
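Even without a stored procedure, the same idea can be approximated from client code: create a temporary table, bulk-insert the IDs, and join against it (a sketch assuming a MySQL-style DB-API connection; the temp table name is an assumption):

cursor.execute("CREATE TEMPORARY TABLE wanted_ids (id INT PRIMARY KEY)")
cursor.executemany("INSERT INTO wanted_ids (id) VALUES (%s)", [(i,) for i in longIdList])

cursor.execute(
    "SELECT t.* FROM Table AS t "
    "INNER JOIN wanted_ids AS w ON w.id = t.id"
)
rows = cursor.fetchall()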
