Can the cursor.execute call below execute multiple SQL queries in one go?
cursor.execute("use testdb;CREATE USER MyLogin")
I don't have Python set up yet, but I want to know whether the above form is supported by cursor.execute.
import pyodbc
# Some other example server values are
# server = 'localhost\sqlexpress' # for a named instance
# server = 'myserver,port' # to specify an alternate port
server = 'tcp:myserver.database.windows.net'
database = 'mydb'
username = 'myusername'
password = 'mypassword'
cnxn = pyodbc.connect('DRIVER={ODBC Driver 17 for SQL Server};SERVER='+server+';DATABASE='+database+';UID='+username+';PWD='+ password)
cursor = cnxn.cursor()
# Sample SELECT query
cursor.execute("SELECT @@version;")
row = cursor.fetchone()
while row:
    print(row[0])
    row = cursor.fetchone()
Multiple SQL statements in a single string is often referred to as an "anonymous code block".
There is nothing in pyodbc (or pypyodbc) to prevent you from passing a string containing an anonymous code block to the Cursor.execute() method. They simply pass the string to the ODBC Driver Manager (DM) which in turn passes it to the ODBC Driver.
However, not all ODBC drivers accept anonymous code blocks by default. Some databases default to allowing only a single SQL statement per .execute() as a protection against SQL injection.
For example, MySQL Connector/ODBC defaults MULTI_STATEMENTS to 0 (off), so if you want to run an anonymous code block you will have to include MULTI_STATEMENTS=1 in your connection string.
Note also that changing the current database by including a USE … statement in an anonymous code block can sometimes cause problems because the database context changes in the middle of a transaction. It is often better to execute a USE … statement by itself and then continue executing other SQL statements.
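For illustration, here is a minimal sketch of both points, assuming MySQL Connector/ODBC and purely hypothetical driver/server/credential values: MULTI_STATEMENTS=1 enables anonymous code blocks, and the USE statement is executed on its own before anything else.
import pyodbc

# Placeholder driver name and credentials; MULTI_STATEMENTS=1 enables
# anonymous code blocks for MySQL Connector/ODBC.
conn_str = (
    "DRIVER={MySQL ODBC 8.0 Unicode Driver};"
    "SERVER=localhost;UID=myuser;PWD=mypassword;"
    "MULTI_STATEMENTS=1;"
)
cnxn = pyodbc.connect(conn_str, autocommit=True)
crsr = cnxn.cursor()
crsr.execute("USE testdb")           # change the database context by itself ...
crsr.execute("CREATE USER MyLogin")  # ... then run the next statement separately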
Yes, it is possible (note that the multi=True keyword shown here is from MySQL Connector/Python, not pyodbc):
operation = 'SELECT 1; INSERT INTO t1 VALUES (); SELECT 2'
for result in cursor.execute(operation, multi=True):
    if result.with_rows:          # only statements that return rows
        print(result.fetchall())
But it is not a comprehensive solution. For example, a batch containing two SELECT statements is a problem: both result sets have to be fetched from the same cursor.
So the best solution is to break the batch into individual statements and do your work step by step (note that naively splitting on ';' will break if a statement contains a semicolon inside a string literal), for example:
s = "USE some_db; SELECT * FROM some_table;"
s = filter(None, s.split(';'))
for i in s:
    cur.execute(i.strip() + ';')
The pyodbc documentation should give you the example you're looking for; see the cursors section of the GitHub wiki: https://github.com/mkleehammer/pyodbc/wiki/Objects#cursors
You can see an example there:
cnxn = pyodbc.connect(...)
cursor = cnxn.cursor()
cursor.execute("""
select user_id, last_logon
from users
where last_logon > ?
and user_type <> 'admin'
""", twoweeks)
rows = cursor.fetchall()
for row in rows:
print('user %s logged on at %s' % (row.user_id, row.last_logon))
From this example and from exploring the code, I would say your next step is to test a multi-statement cursor.execute("<your_sql_query>").
If that test works, you could then create a class and make an instance of it for each query you want to run.
This would be the basic evolution of a developer's effort at reproducing documentation... hope this helps you :)
Yes, you can retrieve the results of multiple queries by using the nextset() method:
query = "select * from Table1; select * from Table2"
cursor = connection.cursor()
cursor.execute(query)
table1 = cursor.fetchall()
cursor.nextset()
table2 = cursor.fetchall()
The code explains it... cursors return result "sets", which you can move between using the nextset() method.
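If you do not know in advance how many result sets the batch produces, a generic sketch (assuming every statement in the batch returns rows) is to loop until nextset() reports that no more sets remain:
results = []
while True:
    results.append(cursor.fetchall())
    if not cursor.nextset():  # falsy once the last result set has been consumed
        break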
I have parameterized queries with f-strings such that the queries select some data from a series of tables and joins, and I want to insert the resulting set of data into another pre-created table (a table designed to house these results).
Python executes the code, but the query results never show up in my table.
Assuming target_table is already created in the SingleStore database:
qry_load = 'insert into target_table select * from some_tables'
conn = engine.connect()
trans = conn.begin()
try:
    conn.execute(qry_load)
    trans.commit()
except:
    trans.rollback()
    raise
The code executes and acts as if all is ok, but the data never shows up in the target table.
How do I see what SingleStore is passing back to better debug what is happening within the database?
Just replace begin() with the cursor() function:
conn = engine.connect()
trans = conn.cursor()
If that does not resolve it:
1. Verify that the structures of the source and destination tables match.
2. Remove the try/except/rollback block so you can see the actual error.
Ex.
qry_load = 'insert into target_table select * from some_tables'
conn = engine.connect()
trans = conn.cursor()
conn.execute(qry_load)
trans.commit()
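To see what the database reports back, one option is to drop the exception handling and print the statement's rowcount. Here is a sketch in SQLAlchemy 2.0 style; the connection URL is a placeholder (SingleStore speaks the MySQL protocol):
from sqlalchemy import create_engine, text

engine = create_engine("mysql+pymysql://user:password@localhost/mydb")  # placeholder URL

with engine.connect() as conn:
    result = conn.execute(text("insert into target_table select * from some_tables"))
    print(result.rowcount)  # rows affected by the INSERT ... SELECT
    conn.commit()           # without this, 2.0-style connections roll the work back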
I'm trying to create a database with the name a user will provide. As far as I know, the correct way is to use the second argument of execute().
So I did as follows:
import psycopg2
conn = psycopg2.connect(host="...", dbname="...",
                        user="...", password="...", port='...')
cursor = conn.cursor()
query = ''' CREATE DATABASE %s ;'''
name = 'stackoverflow_example_db'
conn.autocommit = True
cursor.execute(query, (name,))
cursor.close()
conn.close()
And I got this error:
psycopg2.errors.SyntaxError: syntax error at or near "'stackoverflow_example_db'"
LINE 1: CREATE DATABASE 'stackoverflow_example_db' ;
I need to do this statement avoiding SQL injection, so using the second argument is a must.
You can't pass values as the second argument of execute() if the statement is a CREATE DATABASE one: parameter substitution always produces a quoted literal, not an identifier, which is why CREATE DATABASE 'stackoverflow_example_db' above is a syntax error.
As pointed out by unutbu, one way to approach this is to use the psycopg2.sql submodule and build the statement with identifiers, avoiding SQL injection.
The code:
import psycopg2
from psycopg2 import sql
conn = psycopg2.connect(host="...", dbname="...",
                        user="...", password="...", port='...')
cursor = conn.cursor()
query = ''' CREATE DATABASE {} ;'''
name = 'stackoverflow_example_db'
conn.autocommit = True
cursor.execute(sql.SQL(query).format(
    sql.Identifier(name)))
cursor.close()
conn.close()
Additional observations:
format() does not work with %s; use {} instead
Autocommit mode is a must for this statement to work
The specified connection user needs creation privileges
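As a follow-up sketch (the table name and value here are hypothetical): identifiers go through sql.SQL(...).format(), while ordinary values still go through the second argument of execute(), and the two can be combined in one statement:
cursor.execute(
    sql.SQL("SELECT * FROM {} WHERE id = %s").format(sql.Identifier("some_table")),
    (42,))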
I'm running a series of complex SQL queries in Python, and it involves temp tables. My auto-commit method doesn't seem to be working to retrieve the data from the temp table. The code snippet I'm using is below, along with the output I'm getting:
testQuery="""
Select top 10 *
INTO #Temp1
FROM Table1 t1
JOIN Table2 t2
on t1.key=t2.key
"""
cnxn=pyodbc.connect(r'DRIVER={SQL Server Native Client 11.0};SERVER=server;DATABASE=DB;UID=UID;PWD=PWD')
cnxn.autocommit=True
cursor=cnxn.cursor()
cursor.execute(testQuery)
cursor.execute("""Select top 10 * from #Temp1""")
<pyodbc.Cursor at 0x8f78930>
Even though this question has a "solution", i.e., using a global temp table instead of a local one, future readers might benefit from understanding why the problem happened in the first place.
A temporary table is automatically dropped when the last connection using said table is closed. The difference between a local temp table (#Temp1) and a global temp table (##Temp1) is that the local temp table is only visible to the connection that created it, while an existing global temp table is available to any connection.
So the following code using a local temp table will fail ...
conn = pyodbc.connect(conn_str, autocommit=True)
crsr = conn.cursor()
sql = """\
SELECT 1 AS foo, 2 AS bar INTO #Temp1
"""
crsr.execute(sql)
conn = pyodbc.connect(conn_str, autocommit=True)
crsr = conn.cursor()
sql = """\
SELECT foo, bar FROM #Temp1
"""
crsr.execute(sql)
row = crsr.fetchone()
print(row)
... while the exact same code using a global temp table will succeed ...
conn = pyodbc.connect(conn_str, autocommit=True)
crsr = conn.cursor()
sql = """\
SELECT 1 AS foo, 2 AS bar INTO ##Temp1
"""
crsr.execute(sql)
conn = pyodbc.connect(conn_str, autocommit=True)
crsr = conn.cursor()
sql = """\
SELECT foo, bar FROM ##Temp1
"""
crsr.execute(sql)
row = crsr.fetchone()
print(row)
... because the second pyodbc.connect call opens a separate second connection to the SQL Server without closing the first one.
The second connection cannot see the local temp table created by the first connection. Note that the local temp table still exists because the first connection was never closed, but the second connection cannot see it.
However, the second connection can see the global temp table because the first connection was never closed and therefore the global temp table continued to exist.
This type of behaviour has implications for ORMs and other mechanisms that may implicitly open and close a new connection to the server for each SQL statement they execute.
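The practical fix, sketched below with the same conn_str, is to do all of the work on a single connection so the local temp table stays visible:
conn = pyodbc.connect(conn_str, autocommit=True)
crsr = conn.cursor()
crsr.execute("SELECT 1 AS foo, 2 AS bar INTO #Temp1")
crsr.execute("SELECT foo, bar FROM #Temp1")  # same connection, so #Temp1 is visible
print(crsr.fetchone())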
I asked a colleague about this live, and his suggestions worked. I changed testQuery to create a global temp table instead of a local one (##Temp1 instead of #Temp1), and went into SQL Server to test whether the temp table was actually being created: it was. So I isolated the problem to the second cursor.execute statement. I modified the code to use pandas read_sql_query instead, and it all worked out! Below is the code I used:
testQuery="""
Select top 10 *
INTO ##Temp1
FROM Table1 t1
JOIN Table2 t2
on t1.key=t2.key
"""
cnxn=pyodbc.connect(r'DRIVER={SQL Server Native Client 11.0};SERVER=server;DATABASE=DB;UID=UID;PWD=PWD')
cnxn.autocommit=True
cursor=cnxn.cursor()
cursor.execute(testQuery)
cnxn.commit()
query1="Select top 10 * from ##Temp1"
data1=pd.read_sql_query(query1, cnxn)
data1[:10]
The best way to go about this is to start your SQL query with:
SET NOCOUNT ON
This will output the desired data.
The SET NOCOUNT ON setting is what worked for me.
execute Method () - JDBC Driver for SQL Server | Microsoft Docs
Return Value
true, if the statement returns a result set.
false, if it returns an update count or no result.
If you want a result set, then SET NOCOUNT ON is the setting you need in your statements.
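A sketch of how this applies to the temp-table example above (table and column names are the question's placeholders): with SET NOCOUNT ON at the top of the batch, the "rows affected" messages from SELECT ... INTO no longer arrive as extra result sets, so the final SELECT's rows are directly fetchable:
testQuery = """
SET NOCOUNT ON;
SELECT TOP 10 * INTO #Temp1 FROM Table1 t1 JOIN Table2 t2 ON t1.key = t2.key;
SELECT * FROM #Temp1;
"""
cursor.execute(testQuery)
rows = cursor.fetchall()  # the SELECT's result set is now the one returned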
I'm new to MySQL and Python.
I have code to insert data from Python into MySQL:
conn = MySQLdb.connect(host="localhost", user="root", passwd="kokoblack", db="mydb")
for i in range(0,len(allnames)):
    try:
        query = "INSERT INTO resumes (applicant, jobtitle, lastworkdate, lastupdate, url) values ("
        query = query + "'"+allnames[i]+"'," +"'"+alltitles[i]+"',"+ "'"+alldates[i]+"'," + "'"+allupdates[i]+"'," + "'"+alllinks[i]+"')"
        x = conn.cursor()
        x.execute(query)
        row = x.fetchall()
    except:
        print "error"
It seems to be working fine, because "error" never appears. Instead, many rows of "1L" appear in my Python shell. However, when I go to MySQL, the "resumes" table in "mydb" remains completely empty.
I have no idea what could be wrong. Could it be that I am not connected to MySQL's server properly when I'm viewing the table in MySQL? Help, please.
(I only use import MySQLdb, is that enough?)
Use commit to commit the changes that you have made.
MySQLdb has autocommit off by default, which may be confusing at first.
You can commit like this:
conn.commit()
or
conn.autocommit(True) right after the connection is created with the DB.
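A minimal sketch of both options, reusing the question's lists (the parameterized INSERT also avoids the string-concatenation injection risk in the original code):
conn = MySQLdb.connect(host="localhost", user="root", passwd="kokoblack", db="mydb")
cur = conn.cursor()
# Option 2 would be: conn.autocommit(True) right here, before any inserts.
for i in range(len(allnames)):
    cur.execute(
        "INSERT INTO resumes (applicant, jobtitle, lastworkdate, lastupdate, url) "
        "VALUES (%s, %s, %s, %s, %s)",
        (allnames[i], alltitles[i], alldates[i], allupdates[i], alllinks[i]))
conn.commit()  # Option 1: one explicit commit after the loop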
I use this code to retrieve an id. It works:
db = MySQLdb.connect("localhost","root","","proyectoacademias" )
cursor = db.cursor()
sql = "SELECT id FROM test WHERE url=\'"
sql = sql + self.start_urls[0]
sql = sql + "\'"
cursor.execute(sql)
data = cursor.fetchone()
for row in data:
    self.id_paper_web=str(row)
db.close()
It gives me the id of the current row I have to update...
But when I then try to update or insert, it doesn't work...
def guardarDatos(self):
    db = MySQLdb.connect("localhost","root","","proyectoacademias" )
    cursor = db.cursor()
    sql = "UPDATE test SET abstract=\'"+str(self.abstracto)+"\', fecha_consulta=\'"+str(self.fecha_consulta)+"\', anio_publicacion=\'"+str(self.anio_publicacion)+"\', probabilidad="+str(self.probabilidad)+" WHERE id = "+str(self.id_paper_web)
    print "\n\n\n"+sql+"\n\n\n"
    cursor.execute(sql)
    for i in range (len(self.nombres)):
        sql = "INSERT INTO test_autores VALUES (\'"+self.nombres.keys()[i]+"\', "+str(self.id_paper_web)+", \'"+self.instituciones[self.nombres[self.nombres.keys()[i]]]+"\', "+str((i+1))+")"
        print "\n\n\n"+sql+"\n\n\n"
        cursor.execute(sql)
    db.close()
I print every SQL query I send and they seem fine... no exceptions are thrown, there are just no updates or inserts in the database...
You must commit... or set the DB to autocommit:
db.commit()
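A sketch of where the commit belongs in the method above, reusing the question's attributes (parameter placeholders are also safer than string concatenation):
cursor.execute(
    "UPDATE test SET abstract=%s, fecha_consulta=%s WHERE id=%s",
    (self.abstracto, self.fecha_consulta, self.id_paper_web))
db.commit()  # without this, MySQLdb discards the changes when the connection closes
db.close()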
There are lots of Python sqlite3 tutorials out there. From the docs:
By default, the sqlite3 module opens transactions implicitly before a Data Modification Language (DML) statement (i.e. INSERT/UPDATE/DELETE/REPLACE), and commits transactions implicitly before a non-DML, non-query statement (i.e. anything other than SELECT or the aforementioned).

So if you are within a transaction and issue a command like CREATE TABLE ..., VACUUM, or PRAGMA, the sqlite3 module will commit implicitly before executing that command. There are two reasons for doing that. The first is that some of these commands don't work within transactions. The other reason is that sqlite3 needs to keep track of the transaction state (whether a transaction is active or not).

You can control which kind of BEGIN statements sqlite3 implicitly executes (or none at all) via the isolation_level parameter to the connect() call, or via the isolation_level property of connections. If you want autocommit mode, then set isolation_level to None. Otherwise leave it at its default, which will result in a plain "BEGIN" statement, or set it to one of SQLite's supported isolation levels: "DEFERRED", "IMMEDIATE" or "EXCLUSIVE".
http://docs.python.org/library/sqlite3.html Section 11.13.6
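A minimal sketch of the isolation_level behaviour described above:
import sqlite3

# Autocommit mode: every statement is committed immediately.
conn = sqlite3.connect(":memory:", isolation_level=None)
conn.execute("CREATE TABLE t (x INTEGER)")
conn.execute("INSERT INTO t VALUES (1)")  # already committed

# Default mode: an implicit BEGIN is opened before DML, so commit explicitly.
conn2 = sqlite3.connect(":memory:")
conn2.execute("CREATE TABLE t (x INTEGER)")
conn2.execute("INSERT INTO t VALUES (1)")
conn2.commit()  # required here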