Python MySQLdb doesn't wait for the result

I am trying to run some queries that need to create some temporary tables and then return a result set, but I am unable to do that with the MySQLdb API.
I have already dug into this issue (like here), but without success.
My query is like this:
create temporary table tmp1
select * from table1;
alter table tmp1 add index(somefield);
create temporary table tmp2
select * from table2;
select * from tmp1 inner join tmp2 using(somefield);
This immediately returns an empty result set. If I go to the mysql client and do a show full processlist I can see my queries executing; they take some minutes to complete.
Why does the cursor return immediately instead of waiting for the query to finish?
If I try to run another query I get "Commands out of sync; you can't run this command now".
I have already tried setting my connection to autocommit=True:
db = MySQLdb.connect(host='ip',
                     user='root',
                     passwd='pass',
                     db='mydb',
                     use_unicode=True
                     )
db.autocommit(True)
I also tried putting every statement in its own cursor.execute() with a db.commit() between them, but without success either.
Can you help me figure out what the problem is? I know MySQL doesn't support transactions for some operations like ALTER TABLE, but why doesn't the API wait until everything is finished, like it does with a SELECT?
By the way, I'm trying to do this in an IPython notebook.

I suspect that you're passing your multi-statement SQL string directly to the cursor.execute function. The thing is, each of the statements is a query in its own right, so it's unclear what the result set should contain.
Here's an example to show what I mean. The first case passes a semicolon-joined set of statements to a single execute call, which is what I presume you have currently.
def query_single_sql(cursor):
    print 'query_single_sql'
    sql = []
    sql.append("""CREATE TEMPORARY TABLE tmp1 (id int)""")
    sql.append("""INSERT INTO tmp1 VALUES (1)""")
    sql.append("""SELECT * from tmp1""")
    cursor.execute(';'.join(sql))
    print list(cursor.fetchall())
Output:
query_single_sql
[]
You can see that nothing is returned, even though there is clearly data in the table and a SELECT is used.
The second case is where each statement is executed as an independent query, and the results printed for each query.
def query_separate_sql(cursor):
    print 'query_separate_sql'
    sql = []
    sql.append("""CREATE TEMPORARY TABLE tmp3 (id int)""")
    sql.append("""INSERT INTO tmp3 VALUES (1)""")
    sql.append("""SELECT * from tmp3""")
    for query in sql:
        cursor.execute(query)
        print list(cursor.fetchall())
Output:
query_separate_sql
[]
[]
[(1L,)]
As you can see, we consumed the results of the cursor for each query and the final query has the results we expect.
I suspect that even though you've issued multiple queries, the API only has a handle to the first query executed and so immediately returns when the CREATE TABLE is done. I'd suggest serializing your queries as described in the second example above.
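If the script really must be sent as one string, MySQLdb can be told to allow multi-statements, and you can then walk through the result sets with cursor.nextset(). A minimal sketch, assuming mysqlclient's CLIENT.MULTI_STATEMENTS flag and nextset() behave as documented (connection details are the ones from the question):
import MySQLdb
from MySQLdb.constants import CLIENT

# Assumed connection parameters; the MULTI_STATEMENTS client flag lets a
# single execute() call carry several semicolon-separated statements.
db = MySQLdb.connect(host='ip', user='root', passwd='pass', db='mydb',
                     client_flag=CLIENT.MULTI_STATEMENTS)
cursor = db.cursor()
cursor.execute("CREATE TEMPORARY TABLE tmp1 (id int); "
               "INSERT INTO tmp1 VALUES (1); "
               "SELECT * FROM tmp1")

# Each statement yields its own result set; nextset() advances to the next
# one and returns None when there are none left.
while True:
    print list(cursor.fetchall())
    if cursor.nextset() is None:
        break
Running the statements one at a time, as above, is still the simpler option; nextset() mainly matters when the script has to stay a single string.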

Related

About python sqlite3 order by

I am studying the Python sqlite3 database module. I think this is a very simple problem, but I can't get past it. Could you help me?
The print statement shows the expected output in the VS Code terminal, but the change is not written to the DB file. I have searched several times but I cannot fix it.
When I execute the code, the sorting is not saved in the DB file.
import sqlite3

conn = sqlite3.connect('sqliteDB1.db')
cursor = conn.cursor()
cursor.execute("SELECT * FROM member")
temp123 = cursor.fetchall()
print(temp123)
cursor.execute("SELECT * FROM member ORDER BY -code")
temp321 = cursor.fetchall()
conn.commit  # note: missing parentheses, so commit() is never actually called
print(temp321)
conn.close()
A SELECT statement just returns data from a database; it will not modify it. Moreover, tables in SQL databases are inherently unordered sets. They have no intrinsic order, and you should never rely on the order of the rows that happens to be returned unless you explicitly sort it with an ORDER BY clause.
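Put differently, the ordering lives in the query, not in the file: you re-apply it every time you read. A minimal sketch, assuming the member table and numeric code column from the question:
import sqlite3

conn = sqlite3.connect('sqliteDB1.db')
cursor = conn.cursor()

# Sort at read time; DESC is the idiomatic spelling of the ORDER BY -code trick.
cursor.execute("SELECT * FROM member ORDER BY code DESC")
print(cursor.fetchall())

conn.close()
No commit is needed because nothing in the database changes; only the order of the rows returned to Python.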

Ending a SELECT transaction psycopg2 and postgres

I am executing a number of SELECT queries on a postgres database using psycopg2, but I am getting ERROR: Out of shared memory. It suggests that I should increase max_locks_per_transaction, but this confuses me because each SELECT query operates on only one table, and max_locks_per_transaction is already set to 512, 8 times the default.
I am using TimescaleDB, which could account for a larger than normal number of locks (one for each chunk rather than one for each table, maybe), but this still can't explain running out when so many are allowed. I'm assuming what is happening here is that all the queries are being run as part of one transaction.
The code I am using looks something like the following.
db = DatabaseConnector(**connection_params)
tables = db.get_table_list()
for table in tables:
    result = db.query(f"""SELECT a, b, COUNT(c) FROM {table} GROUP BY a, b""")
    print(result)
Where db.query is defined as:
def query(self, sql):
    with self._connection.cursor() as cur:
        cur.execute(sql)
        return_value = cur.fetchall()
    return return_value
and self._connection is:
self._connection = psycopg2.connect(**connection_params)
Do I need to explicitly end the transaction in some way to free up the locks? And how can I go about doing this in psycopg2? I would have assumed that the transaction ends implicitly when the cursor is closed on __exit__. I know that if I were inserting or deleting rows I would use COMMIT at the end, but it seems strange to use it when I am not changing the table.
UPDATE: When I explicitly open and close the connection inside the loop, the error does not appear. However, I assume there is a better way to end the transaction after each SELECT than this.
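For what it's worth, in psycopg2 the first execute() on a connection implicitly opens a transaction, and closing a cursor does not end it; only commit(), rollback(), or autocommit mode does. A minimal sketch of both options, reusing connection_params from the question (some_table is a placeholder):
import psycopg2

conn = psycopg2.connect(**connection_params)

# Option 1: explicitly end the transaction after each query to release its locks.
with conn.cursor() as cur:
    cur.execute("SELECT a, b, COUNT(c) FROM some_table GROUP BY a, b")
    rows = cur.fetchall()
conn.commit()  # conn.rollback() works just as well for a read-only query

# Option 2: autocommit mode, so every statement runs in its own transaction.
conn.autocommit = True
with conn.cursor() as cur:
    cur.execute("SELECT a, b, COUNT(c) FROM some_table GROUP BY a, b")
    rows = cur.fetchall()
Either approach frees the locks without reopening the connection for every table.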

jaydebeapi set autocommit off for bulk inserts

I have many rows to insert into a table and tried doing it row by row, but it is taking a really long time. I read this link Python+MySQL - Bulk Insert and it seems like setting autocommit off can speed things up.
import jaydebeapi

connection = jaydebeapi.connect('com.teradata.jdbc.TeraDriver',
                                ['jdbc:teradata://some url', USER, PASS],
                                ['tdgssconfig.jar', 'terajdbc4.jar'],)
cur = connection.cursor()
connection.jconn.setAutoCommit(False)
cur.execute('select * from my_table')
connection.commit()
Other queries I perform are:
l = [(1,2,3),(2,4,6).....]
for tup in l:
    cur.execute('my insert statement')
    # this is the really slow part.
When I have the connection.jconn.setAutoCommit(False) line, I always get this error:
[Teradata Database] [TeraJDBC 15.10.00.14] [Error 3932] [SQLState 25000] Only an ET or null statement is legal after a DDL Statement.
When that line and connection.commit() are commented out, the code works fine. What is the right syntax to set autocommit to false?
If speed/efficiency is a concern, you should be using prepared statements and passing your parameters in as the second argument.
You could then also use .executemany():
l = [(1,2,3),(2,4,6).....]
cur.executemany('INSERT INTO my_table VALUES (?, ?, ?)', l)
# this should be much faster
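With autocommit off you still need a single commit at the end to make the batch visible, roughly like this (reusing the connection and list from above; the three-column shape of my_table is assumed):
connection.jconn.setAutoCommit(False)
cur.executemany('INSERT INTO my_table VALUES (?, ?, ?)', l)
connection.commit()  # one commit for the whole batch instead of one per row
One prepared statement executed for many parameter tuples, followed by one commit, is where the speedup comes from.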

SQL query returning None in Python script while there are records in table

I am trying to write a simple Python script that gets data from an API, stores it in a MySQL database, and performs some calculations on that data. I try to fetch all the data from a table where I have just inserted some rows, but that query keeps returning None.
Part that doesn't work:
import MySQLdb

db = MySQLdb.connect("localhost", "stijn", "password", "GW2")
curs = db.cursor()
curs.execute("select gw2_id, naam from PrijzenMats")
for record in curs.fetchall():
    curs2 = db.cursor()
    curs2.execute("insert into MaterialPrijzenLogs(mat,prijs,tijd) values(%s, %s, %s)", (record[1], prijs, tijd))
    db.commit()
curs2.execute("select prijs from MaterialPrijzenLogs")
top10 = len(curs2.fetchall())/10
print(str(len(curs2.fetchall())))
That last print keeps giving 0, even when I populate the table before running the script.
Full code
I solved the problem. fetchall() doesn't just read the data from the cursor the way a normal getter in Java would; it consumes the result set, so a second call returns nothing. In my code I called fetchall() first to initialize a variable, and after that I tried to print the length of curs2.fetchall(), which had become 0 at that point. The fix is to call curs2.fetchall() once, directly after curs2.execute("select prijs from MaterialPrijzenLogs"), store the result in a variable such as myList, and use that variable in the rest of the code. I did not include the declaration of the top10 variable in the code example in my original question because I thought it had nothing to do with the problem; I edited the question so future readers can easily understand it.
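A minimal sketch of that fix, applied to the snippet above:
curs2.execute("select prijs from MaterialPrijzenLogs")
myList = curs2.fetchall()    # consume the result set exactly once
top10 = len(myList) / 10
print(str(len(myList)))      # now prints the real row count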

How to return cursor in Python (sqlite)?

Using Python (2.7) and sqlite (3), I am trying to copy the results of a query into a table.
Because the result of the query is very large, I would like to use fetchmany in batches.
The query works fine, retrieving the results in batches as well.
The problem is that when I try to copy the results into the table, it stops after the first batch.
I suspect that the problem is the position of the cursor.
How does one return the cursor in Python?
P.S: I have seen many postings here about cursors (closing) but haven't seen the answer to my question. Please note also that I am new to Python, so apologies if the question is trivial.
Here are pieces of my code: (example)
import sqlite3

dbLocation = 'P:/XXX/db1.db'
connection = sqlite3.connect(dbLocation)
cursor = connection.cursor()
strSQLStatement = """
    SELECT
        whatever1,
        whatever2
    FROM wherever
    LIMIT 10"""
cursor.execute(strSQLStatement)

# the following code works
# printing the 10 results
while True:
    results = cursor.fetchmany(2)
    if not results:
        break
    print results

# the following code does NOT work
# only 2 results are processed
while True:
    results = cursor.fetchmany(2)
    if not results:
        break
    print results
    cursor.executemany('INSERT INTO NewTable VALUES (?,?)', results)
    connection.commit()
Your call to executemany() on the original cursor clobbers the pending result set of your SELECT. Create a second cursor to perform the insert.
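A minimal sketch of that fix, keeping the names from the question:
read_cursor = connection.cursor()
write_cursor = connection.cursor()   # separate cursor for the inserts

read_cursor.execute(strSQLStatement)
while True:
    results = read_cursor.fetchmany(2)
    if not results:
        break
    write_cursor.executemany('INSERT INTO NewTable VALUES (?,?)', results)
    connection.commit()
Because the inserts run on their own cursor, the pending result set on read_cursor stays intact and fetchmany keeps delivering the next batch.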
