I have queries parameterized with f-strings that select data from a series of tables and joins, and I want to insert the resulting set of data into another pre-created table (a table designed to house these results).
Python executes the code but the query results never show up in my table.
Assuming target_table is already created in the SingleStore database:
qry_load = 'insert into target_table select * from some_tables'
conn = engine.connect()
trans = conn.begin()
try:
    conn.execute(qry_load)
    trans.commit()
except:
    trans.rollback()
    raise
The code executes and acts as if all is OK, but the data never shows up in the target table.
How do I see what SingleStore is passing back, so I can better debug what is happening within the database?
Just replace begin() with the cursor() function:
conn = engine.connect()
trans = conn.cursor()
If that does not resolve it:
1- Verify whether the structure of the source and destination tables is the same.
2- Remove the try/except/rollback block so you can see the actual error.
Example:
qry_load = 'insert into target_table select * from some_tables'
conn = engine.connect()
trans = conn.cursor()
conn.execute(qry_load)
trans.commit()
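If you are on SQLAlchemy 1.4+, a minimal sketch using SQLAlchemy's own transaction handling may also help with the debugging question (assuming engine already points at the SingleStore database): engine.begin() commits automatically on success, and the result's rowcount shows whether any rows were actually inserted.

from sqlalchemy import text

# engine.begin() opens a transaction and commits when the block
# exits cleanly (or rolls back if an exception is raised).
with engine.begin() as conn:
    result = conn.execute(text('insert into target_table select * from some_tables'))
    # rowcount reports how many rows the INSERT affected, a quick
    # check that data actually reached target_table.
    print('rows inserted:', result.rowcount)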
Related
I've been trying to get some data from my db using the code below, but the code is not working. Is there any mistake I made in the code, and if so, how can I fix it?
NOTE: the code below is from a plain script, not a Django or Flask web app.
def db():
    conn = psycopg2.connect(
        "dbname=mydb user=postgres password=****** host=*.*.*.*")
    cur = conn.cursor()
    cur.execute("""SELECT * FROM MddPublisher""")
    query_results = cur.fetchall()
    print(query_results)

db()
ERROR: psycopg2.errors.UndefinedTable: relation "mddpublisher" does not exist LINE 1: SELECT * FROM MddPublisher
Additionally, I want to show the code below to prove that the connection is OK. The problem is that I can't receive data from my db whenever I try to execute a SELECT command through Python.
def print_tables():
    conn = psycopg2.connect(
        "dbname=mydb user=postgres password=***** host=*.*.*.*.*")
    cur = conn.cursor()
    cur.execute("""SELECT table_name FROM information_schema.tables
                   WHERE table_schema = 'public'""")
    for table in cur.fetchall():
        print(table)

print_tables()
OUTPUT:
('MddPublisher',)
This is probably an issue with case sensitivity. PostgreSQL names are normally folded to lower case. However, when used inside double quotes, they keep their case. So, to access a table named MddPublisher you must write it as "MddPublisher".
All the gory details are in Section 4.1.1, Identifiers and Key Words, in the PostgreSQL 14 docs.
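For example, a minimal sketch of the corrected query, reusing the conn from the question and quoting the identifier so PostgreSQL keeps its case:

cur = conn.cursor()
# Double quotes preserve case; unquoted, PostgreSQL folds
# MddPublisher to mddpublisher, which does not exist.
cur.execute('SELECT * FROM "MddPublisher"')
print(cur.fetchall())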
I'm trying to create a database with the name a user will provide. As far as I know the correct way is to use the second argument of execute().
So I did as follows:
import psycopg2
conn = psycopg2.connect(host="...", dbname="...",
                        user="...", password="...", port='...')
cursor = conn.cursor()
query = ''' CREATE DATABASE %s ;'''
name = 'stackoverflow_example_db'
conn.autocommit = True
cursor.execute(query, (name,))
cursor.close()
conn.close()
And I got this error:
psycopg2.errors.SyntaxError: syntax error at or near "'stackoverflow_example_db'"
LINE 1: CREATE DATABASE 'stackoverflow_example_db' ;
I need to do this statement avoiding SQL injection, so using the second argument is a must.
You can't pass values as the second argument of execute() when the statement is a CREATE DATABASE one.
As pointed out by unutbu, one way to approach this is using the psycopg2.sql submodule and identifiers to build the statement, avoiding SQL injection.
The code:
import psycopg2
from psycopg2 import sql

conn = psycopg2.connect(host="...", dbname="...",
                        user="...", password="...", port='...')
cursor = conn.cursor()
query = ''' CREATE DATABASE {} ;'''
name = 'stackoverflow_example_db'
conn.autocommit = True
cursor.execute(sql.SQL(query).format(
    sql.Identifier(name)))
cursor.close()
conn.close()
Other additional observations:
- format() does not work with %s; use {} instead.
- Autocommit mode is a must for this statement to work.
- The specified connection user needs database creation privileges.
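To see exactly what statement gets sent, a small sketch reusing conn and name from above; psycopg2's composed statements have an as_string() method that renders them against a connection:

stmt = sql.SQL("CREATE DATABASE {}").format(sql.Identifier(name))
# as_string() renders the composed object to plain SQL, with the
# identifier safely double-quoted.
print(stmt.as_string(conn))  # CREATE DATABASE "stackoverflow_example_db"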
I am making a Python GUI that will look up the status of a helpdesk ticket in a MySQL database. I connected Python to an existing MySQL database using the code below.
conn = mysql.connector.connect(user='root',
                               password='stuff', host='127.0.0.1',
                               database='mydb')
c = conn.cursor()
I only need access to one of the columns, ticket_id, in a table called tickets. Basically I want to do this:
SELECT ticket_status FROM tickets WHERE ticket_id = 123;
What would be simplest way to do this?
The following code should work for fetching a single value. If you realize later that you need more than one row, you can change fetchone() to fetchall().
try:
    sql = '''
    SELECT ticket_status FROM tickets WHERE ticket_id = 123
    '''
    c.execute(sql)
    result = c.fetchone()
except Exception as e:
    raise Exception(e)
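If the ticket id comes from user input (e.g. the GUI), here is a hedged variant of the same lookup using a parameterized query; MySQL Connector/Python uses %s placeholders:

ticket_id = 123  # in practice this would come from the GUI
# Passing the value separately lets the driver escape it,
# avoiding SQL injection and quoting bugs.
c.execute("SELECT ticket_status FROM tickets WHERE ticket_id = %s", (ticket_id,))
row = c.fetchone()
print(row[0] if row is not None else None)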
I'm running a series of complex SQL queries in Python, and they involve temp tables. My auto-commit approach doesn't seem to be working to retrieve the data from the temp table. The code snippet I'm using is below, along with the output I'm getting:
testQuery="""
Select top 10 *
INTO #Temp1
FROM Table1 t1
JOIN Table2 t2
on t1.key=t2.key
"""
cnxn=pyodbc.connect(r'DRIVER={SQL Server Native Client 11.0};SERVER=server;DATABASE=DB;UID=UID;PWD=PWD')
cnxn.autocommit=True
cursor=cnxn.cursor()
cursor.execute(testQuery)
cursor.execute("""Select top 10 * from #Temp1""")
<pyodbc.Cursor at 0x8f78930>
Even though this question has a "solution", i.e., using a global temp table instead of a local one, future readers might benefit from understanding why the problem happened in the first place.
A temporary table is automatically dropped when the last connection using said table is closed. The difference between a local temp table (#Temp1) and a global temp table (##Temp1) is that the local temp table is only visible to the connection that created it, while an existing global temp table is available to any connection.
So the following code using a local temp table will fail ...
conn = pyodbc.connect(conn_str, autocommit=True)
crsr = conn.cursor()
sql = """\
SELECT 1 AS foo, 2 AS bar INTO #Temp1
"""
crsr.execute(sql)

# a second, separate connection; it cannot see the local #Temp1
conn = pyodbc.connect(conn_str, autocommit=True)
crsr = conn.cursor()
sql = """\
SELECT foo, bar FROM #Temp1
"""
crsr.execute(sql)
row = crsr.fetchone()
print(row)
... while the exact same code using a global temp table will succeed ...
conn = pyodbc.connect(conn_str, autocommit=True)
crsr = conn.cursor()
sql = """\
SELECT 1 AS foo, 2 AS bar INTO ##Temp1
"""
crsr.execute(sql)

# a second, separate connection; ##Temp1 is global, so it is visible here
conn = pyodbc.connect(conn_str, autocommit=True)
crsr = conn.cursor()
sql = """\
SELECT foo, bar FROM ##Temp1
"""
crsr.execute(sql)
row = crsr.fetchone()
print(row)
... because the second pyodbc.connect call opens a separate second connection to the SQL Server without closing the first one.
The second connection cannot see the local temp table created by the first connection, even though that table still exists (the first connection was never closed).
However, the second connection can see the global temp table, because the first connection was never closed and the global temp table therefore continued to exist.
This type of behaviour has implications for ORMs and other mechanisms that may implicitly open and close a connection to the server for each SQL statement they execute.
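The practical takeaway, as a minimal sketch (assuming the same conn_str): issue both statements over a single connection and the local temp table stays visible:

conn = pyodbc.connect(conn_str, autocommit=True)
crsr = conn.cursor()
# Both statements use the SAME connection, so the local temp
# table created by the first is visible to the second.
crsr.execute("SELECT 1 AS foo, 2 AS bar INTO #Temp1")
crsr.execute("SELECT foo, bar FROM #Temp1")
print(crsr.fetchone())  # (1, 2)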
I asked a colleague about this live and his suggestions worked. I changed testQuery to create a global temp table instead of a local one (##Temp1 instead of #Temp1), then checked in SQL Server that the temp table was actually being created: it was. That isolated the problem to the second cursor.execute statement. I modified the code to use pandas read_sql_query instead, and it all worked out. Below is the code I used:
import pandas as pd
import pyodbc

testQuery = """
Select top 10 *
INTO ##Temp1
FROM Table1 t1
JOIN Table2 t2
on t1.key = t2.key
"""
cnxn = pyodbc.connect(r'DRIVER={SQL Server Native Client 11.0};SERVER=server;DATABASE=DB;UID=UID;PWD=PWD')
cnxn.autocommit = True
cursor = cnxn.cursor()
cursor.execute(testQuery)
cnxn.commit()
query1 = "Select top 10 * from ##Temp1"
data1 = pd.read_sql_query(query1, cnxn)
data1[:10]
The best way to go about this is to start your SQL query with:
"SET NOCOUNT ON"
This will output the desired data.
SET NOCOUNT ON is what worked for me.
execute Method () - JDBC Driver for SQL Server | Microsoft Docs
Return Value
true, if the statement returns a result set.
false, if it returns an update count or no result.
If you want a result set, then SET NOCOUNT ON is the setting you need in your statements.
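As an illustration, a hedged sketch of the original batch with SET NOCOUNT ON prepended (same cnxn as above); suppressing the row-count messages means the first result set pyodbc sees is the SELECT's rows:

cursor = cnxn.cursor()
cursor.execute("""
    SET NOCOUNT ON;
    SELECT TOP 10 * INTO #Temp1
    FROM Table1 t1
    JOIN Table2 t2 ON t1.key = t2.key;
    SELECT * FROM #Temp1;
""")
# With NOCOUNT on, no update-count message precedes the SELECT's
# result set, so fetchall() returns the rows directly.
rows = cursor.fetchall()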
I use this code to retrieve an id. It works:
db = MySQLdb.connect("localhost", "root", "", "proyectoacademias")
cursor = db.cursor()
sql = "SELECT id FROM test WHERE url='"
sql = sql + self.start_urls[0]
sql = sql + "'"
cursor.execute(sql)
data = cursor.fetchone()
for row in data:
    self.id_paper_web = str(row)
db.close()
It gives me the id of the row I have to update...
But when I try to update or insert, it doesn't work...
def guardarDatos(self):
    db = MySQLdb.connect("localhost", "root", "", "proyectoacademias")
    cursor = db.cursor()
    sql = "UPDATE test SET abstract='" + str(self.abstracto) + "', fecha_consulta='" + str(self.fecha_consulta) + "', anio_publicacion='" + str(self.anio_publicacion) + "', probabilidad=" + str(self.probabilidad) + " WHERE id = " + str(self.id_paper_web)
    print "\n\n\n" + sql + "\n\n\n"
    cursor.execute(sql)
    for i in range(len(self.nombres)):
        sql = "INSERT INTO test_autores VALUES ('" + self.nombres.keys()[i] + "', " + str(self.id_paper_web) + ", '" + self.instituciones[self.nombres[self.nombres.keys()[i]]] + "', " + str(i + 1) + ")"
        print "\n\n\n" + sql + "\n\n\n"
        cursor.execute(sql)
    db.close()
I print every SQL query I send and they seem to be fine... no exceptions are thrown, just no updates or inserts in the database...
You must commit... or set the db to autocommit:
db.commit()
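A minimal sketch of where the commit belongs in the guardarDatos method above:

def guardarDatos(self):
    db = MySQLdb.connect("localhost", "root", "", "proyectoacademias")
    cursor = db.cursor()
    # ... execute the UPDATE and INSERT statements as above ...
    db.commit()  # without this, the pending transaction is
                 # discarded when the connection closes
    db.close()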
There are lots of Python sqlite3 tutorials out there. From the sqlite3 docs:
By default, the sqlite3 module opens transactions implicitly before a Data Modification Language (DML) statement (i.e. INSERT/UPDATE/DELETE/REPLACE), and commits transactions implicitly before a non-DML, non-query statement (i.e. anything other than SELECT or the aforementioned).

So if you are within a transaction and issue a command like CREATE TABLE ..., VACUUM, or PRAGMA, the sqlite3 module will commit implicitly before executing that command. There are two reasons for doing that. The first is that some of these commands don't work within transactions. The other reason is that sqlite3 needs to keep track of the transaction state (if a transaction is active or not).

You can control which kind of BEGIN statements sqlite3 implicitly executes (or none at all) via the isolation_level parameter to the connect() call, or via the isolation_level property of connections. If you want autocommit mode, then set isolation_level to None. Otherwise leave it at its default, which will result in a plain "BEGIN" statement, or set it to one of SQLite's supported isolation levels: "DEFERRED", "IMMEDIATE" or "EXCLUSIVE".

http://docs.python.org/library/sqlite3.html, Section 11.13.6
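For example, a small sketch of autocommit mode with the standard-library sqlite3 module (hypothetical example.db):

import sqlite3

# isolation_level=None puts the connection in autocommit mode;
# each statement is committed as soon as it runs.
conn = sqlite3.connect("example.db", isolation_level=None)
cur = conn.cursor()
cur.execute("CREATE TABLE IF NOT EXISTS t (x INTEGER)")
cur.execute("INSERT INTO t VALUES (1)")  # committed immediately
conn.close()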