I am trying to query a large dataset (10 million rows) while preventing an out-of-memory error, but I'm not familiar with Python and I'm confused by the different opinions regarding execute(), the cursor iterator, and fetchone().
Am I right to assume that cursor.execute() does not load all the data into memory, and that only when I call fetchone() will it load one row of data?
from mysql.connector import MySQLConnection

def query():
    conn = MySQLConnection(host=conf['host'],
                           port=conf['port'],
                           user=conf['user'],
                           password=conf['password'],
                           database=conf['database'])
    cursor = conn.cursor(buffered=True)
    cursor.execute('SELECT * FROM TABLE')  # 10 million rows
Does this cursor iterator do the same thing as fetchone()?
for row in cursor:
    print(row)
Is my code snippet safe for handling 10 million rows of data? If not, how can I safely iterate over the data without running out of memory?
My first suggestion is to use from mysql.connector import connect, which by default will use the C extension (CMySQLConnection), instead of from mysql.connector import MySQLConnection (pure Python).
If you for some reason want to use the pure Python version, you can pass use_pure=True to connect().
The second suggestion is to paginate the results. If you use a buffered cursor, it fetches the entire result set from the server; I don't know if you want that with 10M rows.
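For illustration, a rough sketch of the pagination idea could look like the following (my own sketch, not taken from the connector docs; it assumes the conf dict from the question and that TABLE has an indexed auto-increment id column):

from mysql.connector import connect

# Sketch only: fetch the table one page at a time using keyset pagination,
# so at most one page of rows is held in memory.
conn = connect(host=conf['host'], port=conf['port'], user=conf['user'],
               password=conf['password'], database=conf['database'])
cursor = conn.cursor()

last_id = 0
page_size = 10000
while True:
    cursor.execute(
        'SELECT * FROM TABLE WHERE id > %s ORDER BY id LIMIT %s',
        (last_id, page_size),
    )
    rows = cursor.fetchall()
    if not rows:
        break
    for row in rows:
        print(row)
    last_id = rows[-1][0]  # assumes `id` is the first column

cursor.close()
conn.close()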
Here are some references:
https://dev.mysql.com/doc/refman/8.0/en/limit-optimization.html
https://dev.mysql.com/doc/connector-python/en/connector-python-api-mysqlcursorbuffered.html
Taken from MySQL documentation:
The fetchone() method is used by fetchall() and fetchmany(). It is also used when a cursor is used as an iterator.
The following example shows two equivalent ways to process a query result. The first uses fetchone() in a while loop, the second uses the cursor as an iterator:
# Using a while loop
cursor.execute("SELECT * FROM employees")
row = cursor.fetchone()
while row is not None:
    print(row)
    row = cursor.fetchone()
# Using the cursor as iterator
cursor.execute("SELECT * FROM employees")
for row in cursor:
    print(row)
It also stated that:
You must fetch all rows for the current query before executing new statements using the same connection.
If you are worried about performance issues, you should use fetchmany(n) in a while loop until you have fetched all of the results, like so:
def iter_rows(cursor, arraysize=1000):
    """An iterator that uses fetchmany to keep memory usage down."""
    while True:
        results = cursor.fetchmany(arraysize)
        if not results:
            break
        for result in results:
            yield result
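A possible way to drive it, reusing the cursor from the question (iter_rows is just the placeholder name I gave the generator above):

cursor.execute('SELECT * FROM TABLE')  # 10 million rows
for row in iter_rows(cursor, arraysize=10000):
    print(row)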
This behavior adheres to PEP 249, which describes how and which methods db connectors should implement. A partial answer is given in this thread.
Basically, the implementation of fetchall vs. fetchmany vs. fetchone is up to the developers of the library and depends on the database's capabilities, but it would make sense, in the case of fetchmany and fetchone, for the unfetched/remaining results to be kept server side until they are requested by another call or the cursor object is destroyed.
So, in conclusion, I think it is safe to assume that calling the execute method does not, in this case (MySQLdb), dump all the data from the query into memory.
Related
In my example, I have a list of 3000 words to search for in a database. With this Python code (I am using Django), the operation takes a lot of time and finishes in 3.5 minutes, averaging 120 ms per query. Is there a way to speed up the execution of these queries with a Python script, using threads or something like that?
def my_custom_sql(ent):
    for e in ent:
        with connection.cursor() as cursor:
            entity = e.replace("'", "''")
            cursor.execute("SELECT object FROM yagofacts WHERE subject='{}' AND object LIKE '<wordnet_%>';".format(entity))
            row = cursor.fetchall()
            print(row)
Modify your method like the code shown below. It will use the same cursor to fire all the queries and hence will be faster.
with connection.cursor() as cursor:
    for e in ent:
        entity = e.replace("'", "''")
        cursor.execute("SELECT object FROM yagofacts WHERE subject='{}' AND object LIKE '<wordnet_%>';".format(entity))
        row = cursor.fetchall()
        print(row)
If this is not helping, then try connection pooling. It may help in your case.
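Another option worth sketching (not part of the answer above, and the chunk size is an arbitrary assumption) is to batch several subjects into a single parameterized IN query, so each round trip covers many words; the yagofacts table and column names are taken from the question:

def my_custom_sql_batched(ent, chunk_size=500):
    # Sketch only: one parameterized query per chunk of subjects
    # instead of one query per word, to cut down on round trips.
    results = []
    with connection.cursor() as cursor:
        for i in range(0, len(ent), chunk_size):
            chunk = ent[i:i + chunk_size]
            placeholders = ','.join(['%s'] * len(chunk))
            cursor.execute(
                "SELECT subject, object FROM yagofacts "
                "WHERE subject IN ({}) AND object LIKE '<wordnet_%%>';".format(placeholders),
                chunk,
            )
            results.extend(cursor.fetchall())
    return results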
I have a table with 4 million rows and I use psycopg2 to execute a:
SELECT * FROM .. WHERE query
I hadn't heard of server-side cursors before, and I've read that they are good practice when you expect a large number of results.
I find the documentation a bit limited and I have some basic questions.
First I declare the server-side cursor as:
cur = conn.cursor('cursor-name')
then I execute the query as:
cur.itersize = 10000
sqlstr = "SELECT clmn1, clmn2 FROM public.table WHERE clmn1 LIKE 'At%'"
cur.execute(sqlstr)
My question is: What do I do now? How do I get the results?
Do I iterate through the rows as:
row = cur.fetchone()
while row:
    row = cur.fetchone()
or I use fetchmany() and I do this:
row = cur.fetchmany(10)
But in the second case how can I "scroll" the results?
Also what is the point of itersize?
Psycopg2 has a nice interface for working with server side cursors. This is a possible template to use:
with psycopg2.connect(database_connection_string) as conn:
    with conn.cursor(name='name_of_cursor') as cursor:
        cursor.itersize = 20000
        query = "SELECT * FROM ..."
        cursor.execute(query)
        for row in cursor:
            # process row
The code above creates the connection and automatically places the query result into a server side cursor. The value itersize sets the number of rows that the client will pull down at a time from the server side cursor. The value you use should balance number of network calls versus memory usage on the client. For example, if your result count is three million, an itersize value of 2000 (the default value) will result in 1500 network calls. If the memory consumed by 2000 rows is light, increase that number.
When using for row in cursor you are of course working with one row at a time, but Psycopg2 will prefetch itersize rows at a time for you.
If you want to use fetchmany for some reason, you could do something like this:
while True:
    rows = cursor.fetchmany(100)
    if len(rows) > 0:
        for row in rows:
            # process row
    else:
        break
This usage of fetchmany will not trigger a network call to the server for more rows until the prefetched batch has been exhausted. (This is a convoluted example that provides nothing over the code above, but demonstrates how to use fetchmany should there be a need.)
I tend to do something like this when I don't want to load millions of rows at once. You can turn a program into quite a memory hog if you load millions of rows into memory. Especially if you're making python domain objects out of those rows or something like that. I'm not sure if the uuid4 in the name is necessary, but my thought is that I want individual server side cursors that don't overlap if two processes make the same query.
from typing import Iterable
from uuid import uuid4

import psycopg2

def fetch_things() -> Iterable[MyDomainObject]:  # MyDomainObject is my own domain class
    with psycopg2.connect(database_connection_string) as conn:
        with conn.cursor(name=f"my_name_{uuid4()}") as cursor:
            cursor.itersize = 500_000
            query = "SELECT * FROM ..."
            cursor.execute(query)
            for row in cursor:
                yield MyDomainObject(row)
I'm interested if anyone knows if this creates a storage problem on the SQL server or anything like that.
In addition to cur.fetchmany(n), you can use PostgreSQL cursors:
cur.execute("declare foo cursor for select * from generate_series(1,1000000)")
cur.execute("fetch forward 100 from foo")
rows = cur.fetchall()
# ...
cur.execute("fetch forward 100 from foo")
rows = cur.fetchall()
# and so on
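To spell out the "and so on" part, a loop along these lines (just a sketch; the cursor name foo comes from the snippet above) keeps fetching until the server returns no more rows, then closes the cursor:

while True:
    cur.execute("fetch forward 100 from foo")
    rows = cur.fetchall()
    if not rows:
        break
    for row in rows:
        print(row)
cur.execute("close foo")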
I am working on a program to clone rows in my database from one user to another. It works by selecting the rows, editing a few values, and then inserting them back.
I also need to store the newly inserted rowIDs with their existing counterparts so I can clone some other link tables later on.
My code looks like the following:
import mysql.connector
from collections import namedtuple
con = mysql.connector.connect(host='127.0.0.1')
selector = con.cursor(prepared=True)
insertor = con.cursor(prepared=True)
user_map = {}
selector.execute('SELECT * FROM users WHERE companyID = ?', (56, ))
Row = namedtuple('users', selector.column_names)
for row in selector:
    curr_row = Row._make(row)
    new_row = curr_row._replace(userID=None, companyID=95)
    insertor.execute('INSERT INTO users VALUES(?,?,?,?)', tuple(new_row))
    user_map[curr_row.userID] = insertor.lastrowid
selector.close()
insertor.close()
When running this code, I get the following error:
mysql.connector.errors.InternalError: Unread result found
I'm assuming this is because I am trying to run an INSERT while I am still looping over the SELECT, but I thought using two cursors would fix that. Why do I still get this error with multiple cursors?
I found a solution using fetchall(), but I was afraid that would use too much memory as there could be thousands of results returned from the SELECT.
import mysql.connector
from collections import namedtuple
con = mysql.connector.connect(host='127.0.0.1')
cursor = con.cursor(prepared=True)
user_map = {}
cursor.execute('SELECT * FROM users WHERE companyID = ?', (56, ))
Row = namedtuple('users', cursor.column_names)
for curr_row in map(Row._make, cursor.fetchall()):
    new_row = curr_row._replace(userID=None, companyID=95)
    cursor.execute('INSERT INTO users VALUES(?,?,?,?)', tuple(new_row))
    user_map[curr_row.userID] = cursor.lastrowid
cursor.close()
This works, but it's not very fast. I was thinking that not using fetchall() would be quicker, but it seems if I do not fetch the full result set then MySQL yells at me.
Is there a way to insert rows while looping over a result set without fetching the entire result set?
Is there a way to insert rows while looping over a result set without fetching the entire result set?
Yes. Use two MySQL connections: one for reading and the other for writing.
The performance impact isn't too bad, as long as you don't have thousands of instances of the program trying to connect to the same MySQL server.
One connection is reading a result set, and the other is inserting rows to the end of the same table, so you shouldn't have a deadlock. It would be helpful if the WHERE condition you use to read the table could explicitly exclude the rows you're inserting, if there's a way to tell the new rows apart from the old rows.
At some level, the performance impact of two connections doesn't matter because you don't have much choice. The only other way to do what you want to do is slurp the whole result set into RAM in your program, close your reading cursor, and then write.
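As a rough sketch of the two-connection approach (the connection parameters, the 4-column users table, and the namedtuple logic are all carried over from the question, not verified here):

import mysql.connector
from collections import namedtuple

# Sketch only: one connection streams the SELECT, the other runs the INSERTs.
read_con = mysql.connector.connect(host='127.0.0.1')
write_con = mysql.connector.connect(host='127.0.0.1')

selector = read_con.cursor(prepared=True)
insertor = write_con.cursor(prepared=True)

user_map = {}

selector.execute('SELECT * FROM users WHERE companyID = ?', (56, ))
Row = namedtuple('users', selector.column_names)

for row in selector:
    curr_row = Row._make(row)
    new_row = curr_row._replace(userID=None, companyID=95)
    insertor.execute('INSERT INTO users VALUES(?,?,?,?)', tuple(new_row))
    user_map[curr_row.userID] = insertor.lastrowid

write_con.commit()
selector.close()
insertor.close()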
I just started using sqlite3 with Python.
I would like to know the difference between :
cursor = db.execute("SELECT customer FROM table")
for row in cursor:
    print(row[0])
and
cursor = db.execute("SELECT customer FROM table")
for row in cursor.fetchall():
    print(row[0])
Except that cursor is of <type 'sqlite3.Cursor'> and cursor.fetchall() is of <type 'list'>, both of them give the same result.
Is there a difference, a preference or specific cases where one is more preferred than the other ?
fetchall() reads all records into memory, and then returns that list.
When iterating over the cursor itself, rows are read only when needed.
This is more efficient when you have much data and can handle the rows one by one.
The main difference is precisely the call to fetchall(). By issuing fetchall(), it returns a list object filled with all the elements remaining in your initial query (all elements, if you haven't fetched anything yet). This has several drawbacks:
Increased memory usage: all of the query's elements are stored in a list, which could be huge
Bad performance: filling the list can be quite slow if there are many elements
Process termination: if it is a huge query, your program might crash by running out of memory
When you instead use the cursor iterator (for e in cursor:), you get the query's rows lazily. This means returning them one by one, only as the program requires them.
Surely the output of your two code snippets is the same, but internally there is a huge performance drawback to using fetchall() compared to using only the cursor.
Hope this helps!
I am inserting records into SQL Server from Python using pymssql. The database takes 2 milliseconds to execute a query, yet it inserts only 6 rows per second, so the only problem is on the code side. How can I optimize the following code, or what is the fastest method to insert records?
def save(self):
    conn = pymssql.connect(host=dbHost, user=dbUser,
                           password=dbPassword, database=dbName, as_dict=True)
    cur = conn.cursor()
    self.pageURL = self.pageURL.replace("'", "''")
    query = "my query is there"
    cur.execute(query)
    conn.commit()
    conn.close()
It looks like you're creating a new connection per insert there. That's probably the major reason for the slowdown: building new connections is typically quite slow. Create the connection outside the method and you should see a large improvement. You can also create a cursor outside the function and re-use it, which will be another speedup.
Depending on your situation, you may also want to use the same transaction for more than a single insertion. This changes the behaviour a little -- since a transaction is supposed to be atomic and either completely succeeds or completely fails -- but committing a transaction is typically a slow operation, because it has to be certain the whole operation succeeded.
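As a rough sketch of those two points (the Saver class name is mine; self.pageURL and the dbHost/dbUser/dbPassword/dbName settings are carried over from the question, and the commit-at-the-end strategy is an assumption):

class Saver:
    def __init__(self):
        # Build the connection and cursor once, not on every insert.
        self.conn = pymssql.connect(host=dbHost, user=dbUser,
                                    password=dbPassword, database=dbName, as_dict=True)
        self.cur = self.conn.cursor()

    def save(self):
        self.pageURL = self.pageURL.replace("'", "''")
        query = "my query is there"
        self.cur.execute(query)

    def close(self):
        # Commit once at the end (or every N inserts) instead of per row.
        self.conn.commit()
        self.conn.close()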
In addition to Thomas' great advice,
I'd suggest you look into executemany()*, e.g.:
cur.executemany("INSERT INTO persons VALUES(%d, %s)",
[ (1, 'John Doe'), (2, 'Jane Doe') ])
...where the second argument of executemany() should be a sequence of rows to insert.
This brings up another point:
You probably want to send your query and query parameters as separate arguments to either execute() or executemany(). This will allow the PyMSSQL module to handle any quoting issues for you.
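For instance, following the placeholder style of the executemany() example above:

# Let the driver handle quoting by passing the parameters separately.
cur.execute("INSERT INTO persons VALUES (%d, %s)", (1, 'John Doe'))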
*executemany() as described in the Python DB-API:
.executemany(operation, seq_of_parameters)
Prepare a database operation (query or command) and then execute it against all parameter sequences or mappings found in the sequence seq_of_parameters.