How can we execute two SQL queries from a Python program that accesses an SQLite database?
Here I can only execute the first query, but not the second one.
import sqlite3

conn = sqlite3.connect("suman1.db")
cursor = conn.cursor()
cursor.execute("select Album.AlbumId, Album.Title, Track.Name, Track.GenreId, Track.UnitPrice from Album inner join Track on Album.AlbumId = Track.AlbumId")
cursor.execute("select Track.TrackId, Album.Title, Track.Name, Track.UnitPrice, Track.Bytes from Album inner join Track on Album.AlbumId = Track.AlbumId where UnitPrice = 1.99 and Title like '%Season 4%' and GenreId like '%21%'")
for row in cursor.fetchall():
    print row
conn.commit()
conn.close()
Both queries are executed.
The problem is that you never read the results of the first query.
(Executing another query resets the cursor.)
You have to read the results before you execute the next query:
cursor.execute("select ...")
for row in cursor:
print row
cursor.execute("select ...")
for row in cursor:
print row
or use two cursors.
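For example, a minimal sketch of the two-cursor variant, assuming the same suman1.db schema as in the question:

import sqlite3

conn = sqlite3.connect("suman1.db")
cursor1 = conn.cursor()
cursor2 = conn.cursor()
cursor1.execute("select AlbumId, Title from Album")
cursor2.execute("select TrackId, Name from Track")
# Each cursor holds its own result set, so the loops don't interfere.
for row in cursor1:
    print row
for row in cursor2:
    print row
conn.close()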
I made a project which collects data from the user and stores it in different tables. The application has a delete function: the first option deletes a specific table (which I have already done), and the second one deletes all existing tables.
How can I drop all tables inside my database?
These are my variables:
conn = sqlite3.connect('main.db')
cursor = conn.execute("DROP TABLE")
cursor.close()
According to sqlitetutorial.net:
SQLite allows you to drop only one table at a time. To remove multiple
tables, you need to issue multiple DROP TABLE statements.
You can do it by querying all table names (https://www.sqlite.org/faq.html#q7) and then using the result to delete the tables one by one.
Here is the code; the function delete_all_tables does that.
TABLE_PARAMETER = "{TABLE_PARAMETER}"
DROP_TABLE_SQL = f"DROP TABLE {TABLE_PARAMETER};"  # evaluates to: DROP TABLE {TABLE_PARAMETER};
# sqlite_schema needs SQLite 3.33+; on older versions query sqlite_master instead.
GET_TABLES_SQL = "SELECT name FROM sqlite_schema WHERE type='table';"

def delete_all_tables(con):
    tables = get_tables(con)
    delete_tables(con, tables)

def get_tables(con):
    cur = con.cursor()
    cur.execute(GET_TABLES_SQL)
    tables = cur.fetchall()
    cur.close()
    return tables

def delete_tables(con, tables):
    cur = con.cursor()
    for table, in tables:  # each fetched row is a one-element tuple
        sql = DROP_TABLE_SQL.replace(TABLE_PARAMETER, table)
        cur.execute(sql)
    cur.close()
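A minimal usage sketch, assuming the main.db file from the question:

import sqlite3

con = sqlite3.connect('main.db')
delete_all_tables(con)
con.commit()
con.close()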
SQLite shell code to issue multiple DROP TABLE statements based on a TEMP_% name wildcard:
.output droptables.sql
SELECT 'DROP TABLE "' || name || '";' FROM sqlite_master
WHERE type = 'table' AND name LIKE 'TEMP_%';
.read droptables.sql
Example result in .sql output file:
DROP TABLE "TEMP_table1";
DROP TABLE "TEMP_table2";
DROP TABLE "TEMP_table3";
...
Python 3 to paste the generated SQL into:
import sqlite3

conn = sqlite3.connect("main.db")
conn.row_factory = sqlite3.Row
dbc = conn.cursor()
dbc.execute('DROP TABLE "TEMP_table1";')
conn.commit()
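If you would rather stay inside Python instead of generating a script, here is a hedged sketch of the same wildcard drop, assuming the TEMP_ prefix from the example above:

import sqlite3

conn = sqlite3.connect("main.db")
cur = conn.cursor()
# Fetch the matching names first, then drop them one by one.
cur.execute("SELECT name FROM sqlite_master WHERE type='table' AND name LIKE 'TEMP_%'")
for (name,) in cur.fetchall():
    cur.execute('DROP TABLE "%s"' % name)
conn.commit()
conn.close()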
I am using PyMySQL in Python 2.7. I have to create a function where, given a table name, the query will find the unique values of all the column names.
Since there is more than one table involved, I do not want to hard-code the table name. A simple version of the query looks like:
cursor.execute("SELECT DISTINCT(`Trend`) AS `Trend` FROM `Table_1` ORDER BY `Trend` DESC")
I want to do something like:
tab = 'Table_1'
cursor.execute("SELECT DISTINCT(`Trend`) AS `Trend` FROM tab ORDER BY `Trend` DESC")
I am getting the following error:
ProgrammingError: (1146, u"Table 'Table_1.tab' doesn't exist")
Can someone please help? TIA.
Make sure the database you're using is correct, and use %s to format your SQL statement.
import pymysql

DB_SCHEMA = 'test_db'
table_name = 'table1'

connection = pymysql.connect(host=DB_SERVER_IP,
                             port=3306,
                             db=DB_SCHEMA,
                             charset='UTF8',
                             cursorclass=pymysql.cursors.DictCursor)
try:
    with connection.cursor() as cursor:
        sql = "SELECT DISTINCT(`Trend`) AS `Trend` FROM `%s` ORDER BY `Trend` DESC" % (table_name,)
        cursor.execute(sql)
    connection.commit()
except Exception as e:
    print e
finally:
    connection.close()
Hope this helps.
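One caveat: %s formatting pastes the table name straight into the SQL text, so it should only ever see trusted values. A minimal whitelist sketch (the allowed names here are assumptions, not from the question):

ALLOWED_TABLES = {'Table_1', 'Table_2'}  # hypothetical whitelist

def distinct_trend(cursor, table_name):
    # Refuse any table name that isn't explicitly whitelisted.
    if table_name not in ALLOWED_TABLES:
        raise ValueError("unexpected table name: %s" % table_name)
    cursor.execute("SELECT DISTINCT(`Trend`) AS `Trend` FROM `%s` ORDER BY `Trend` DESC" % table_name)
    return cursor.fetchall()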
The code is below. By the way, the database I use is Teradata, on a Windows 7 operating system with Python 2.7.
import pyodbc

cnxn = pyodbc.connect('DSN=thisIsAbsolutelyCorrect;UID=cannottellyou;PWD=iamsosorry')
cursor1 = cnxn.cursor()
cursor1 = cursor1.execute(
################## OR put your SQL directly between here ################
'''
create volatile table table1
(
    field1 integer
    ,field2 integer
)on commit preserve rows;
--insert into table1
--values(12,13);
--select * from table1;
''')
######################### and here ########################
cnxn.commit()
for row in cursor1:
    print row
raw_input()
But I get an error like this:
Traceback (most recent call last):
File "C:\Users\issuser\Desktop\py\test.py", line 25, in <module>
for row in cursor1:
ProgrammingError: No results. Previous SQL was not a query.
How can I solve this error?
A cursor object by itself has no rows to iterate through; what you want is to iterate through the result set returned by an execute:
rows = curs.execute(""" sql code """).fetchall()
for row in rows:
    print row
Here is a template to upload to a volatile table in Teradata from Python 2.7 using pyodbc:
import pyodbc

cnxn = pyodbc.connect('your_connection_string')
curs = cnxn.cursor()
curs.execute("""
CREATE VOLATILE TABLE TABLE_NAME
(
    c_0 dec(10,0),
    ...
    c_n dec(10,0)
) PRIMARY INDEX (c_0)
ON COMMIT PRESERVE ROWS;
END TRANSACTION;
""")
curs.execute("""
INSERT INTO TABLE_NAME (c_0,...,c_n) VALUES (%s);
""" % value_string)
Depending on your settings in Teradata, you may have to issue END TRANSACTION explicitly. You can add a loop around the INSERT to upload information line by line, as sketched below.
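For that loop, a minimal sketch using pyodbc's ? parameter markers instead of %-formatting (the rows list is hypothetical data):

rows = [(12, 13), (14, 15)]  # hypothetical data to upload
for r in rows:
    curs.execute("INSERT INTO TABLE_NAME (c_0, c_n) VALUES (?, ?)", r)
cnxn.commit()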
Have you considered the following:
import pyodbc

cnxn = pyodbc.connect('DSN=thisIsAbsolutelyCorrect;UID=cannottellyou;PWD=iamsosorry')
cursor1 = cnxn.cursor()
RowCount = cursor1.execute(
'''
create volatile table table1
(
    field1 integer
    ,field2 integer
)on commit preserve rows;
''').rowcount
RowCount = cursor1.execute('''insert into table1 values(12,13);''').rowcount
cnxn.commit()
cursor1.execute('select * from table1;')  # run a query first; the INSERT leaves no result set to iterate
for row in cursor1:
    print row
raw_input()
I believe the issue is that execute(), as you have written it, is expected to hand back a result set, but statements like CREATE, INSERT, UPDATE and DELETE do not return result sets. You may have more success using the rowcount attribute of the cursor to process the volatile table creation and its population.
You may also have to issue a commit between creating the volatile table and populating it.
I updated a database table using PostgreSQL from Python.
My code was:
import psycopg2

connection = psycopg2.connect("dbname=homedb user=ria")
cursor = connection.cursor()
l_dict = {'licence_id': 1}
cursor.execute("SELECT * FROM im_entry.usr_table")
rows = cursor.fetchall()

i = 0  # the counter must be initialised before the loop
for row in rows:
    i = i + 1
    p = findmax(row)
    #print p
    idn = "id" + str(i)
    cursor.execute("UPDATE im_entry.pr_table SET (selected_entry) = ('"+p+"') WHERE image_1d ='"+idn+"'")

print 'DATABASE TO PRINT'
cursor.execute("SELECT * FROM im_entry.pr_table")
rows = cursor.fetchall()
for row in rows:
    print row
I got the updated table displayed.
But when I display the updated table from psql as:
homedb=# SELECT * FROM im_entry.pr_table;
I get an empty table displayed. What is wrong? Please help me.
You're probably not committing the transaction, i.e. you need a connection.commit() after all your updates.
There are various settings you can apply to the isolation level, e.g. autocommit, so that you don't need to issue commits yourself. See, for example, How do I do database transactions with psycopg2/python db api?
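For example, a minimal sketch of both options, reusing the connection from the question (the update values are placeholders):

import psycopg2

connection = psycopg2.connect("dbname=homedb user=ria")
cursor = connection.cursor()

# Option 1: commit explicitly once the updates are done.
cursor.execute("UPDATE im_entry.pr_table SET selected_entry = %s WHERE image_1d = %s",
               ('some_value', 'id1'))
connection.commit()

# Option 2: switch the connection to autocommit, so every statement commits itself.
connection.autocommit = True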
I am trying to achieve the following using Python and the MySQLdb interface:
Read the contents of a table that has a few million rows.
Process and modify the output of every row.
Put the modified rows into another table.
It seems sensible to me to iterate over each row, process on-the-fly and then insert each new row into the new table on-the-fly.
This works:
import MySQLdb
import MySQLdb.cursors

conn = MySQLdb.connect(
    host="somehost", user="someuser",
    passwd="somepassword", db="somedb")
cursor1 = conn.cursor(MySQLdb.cursors.Cursor)
query1 = "SELECT * FROM table1"
cursor1.execute(query1)
cursor2 = conn.cursor(MySQLdb.cursors.Cursor)
for row in cursor1:
    values = some_function(row)
    query2 = "INSERT INTO table2 VALUES (%s, %s, %s)"
    cursor2.execute(query2, values)
cursor2.close()
cursor1.close()
conn.commit()
conn.close()
But this is slow and memory-consuming since it's using a client-side cursor for the SELECT query. If I instead use a server-side cursor for the SELECT query:
cursor1 = conn.cursor(MySQLdb.cursors.SSCursor)
Then I get a 2014 error:
Exception _mysql_exceptions.ProgrammingError: (2014, "Commands out of sync; you can't run this command now") in <bound method SSCursor.__del__ of <MySQLdb.cursors.SSCursor object at 0x925d6ec>> ignored
So it doesn't seem to like starting another cursor while iterating over a server-side cursor. Which seems to leave me stuck with a very slow client-side iterator.
Any suggestions?
You need a separate connection to the database: since the first connection is busy streaming the result set, you can't run the insert query on it.
Try this:
import MySQLdb
import MySQLdb.cursors

conn = MySQLdb.connect(
    host="somehost", user="someuser",
    passwd="somepassword", db="somedb")
cursor1 = conn.cursor(MySQLdb.cursors.SSCursor)
query1 = "SELECT * FROM table1"
cursor1.execute(query1)

# A second connection keeps the INSERTs off the connection that is streaming the SELECT.
insertConn = MySQLdb.connect(
    host="somehost", user="someuser",
    passwd="somepassword", db="somedb")
cursor2 = insertConn.cursor(MySQLdb.cursors.Cursor)

for row in cursor1:
    values = some_function(row)
    query2 = "INSERT INTO table2 VALUES (%s, %s, %s)"
    cursor2.execute(query2, values)

cursor2.close()
cursor1.close()
conn.commit()
conn.close()
insertConn.commit()
insertConn.close()
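If the row-by-row INSERTs become the bottleneck, batching them with executemany is a reasonable variation (the batch size of 1000 is an arbitrary assumption):

query2 = "INSERT INTO table2 VALUES (%s, %s, %s)"
batch = []
for row in cursor1:
    batch.append(some_function(row))
    if len(batch) >= 1000:
        cursor2.executemany(query2, batch)
        batch = []
if batch:  # flush whatever is left over
    cursor2.executemany(query2, batch)
insertConn.commit()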