Using SQL with IBM_DB connector in Python

Has anyone used the ibm_db package with IBM's Python for PASE to update Db2 files on IBM i (formerly AS/400)?
I want to use Python scripts (from QSH) to update the Db2 database. My purpose is to populate values at runtime and update the fields of Db2 files. It works with static (hardcoded) values, but not dynamic ones.
Here is what I am trying, but it is not working:
import ibm_db
c1 = ibm_db.connect('*LOCAL','userid','password')
sql = """INSERT INTO TEMPLIB.TEMPPF (TYPE, DRPARTY, CRPARTY,
AMOUNT,ACNUM, DESCRIPT)
VALUES('%s', '%s', '%s', '%s', '%s', '%s'),
%(self.type, self.debitparty, self.creditparty, self.amount,
self.craccountnumber, self.description) with NC
"""
stmt = ibm_db.exec_immediate(c1, sql )
self.type, self.debitparty, etc. are Python instance variables and have values.
TYPE, DRPARTY, CRPARTY, etc. are fields of TEMPPF.
Something simpler like populating the 'sql' variable as below works:
sql = "select * from TEMPLIB.TEMPPF"
So somewhere I am not building the INSERT statement correctly. Does anyone know the correct format? I tried a couple of formats available on the Internet, but they are either not compatible with Python or not good examples.

First, your use of the modulus operator for string formatting is incorrect: the %(...) tuple of values must sit outside the string literal being formatted, not inside it.
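Purely to illustrate that first point (shown only for contrast; prefer the parameterized version below), the % operator has to be applied outside the closing quotes:
# Illustration only: this does not escape the values
sql = """INSERT INTO TEMPLIB.TEMPPF (TYPE, DRPARTY, CRPARTY,
AMOUNT, ACNUM, DESCRIPT)
VALUES('%s', '%s', '%s', '%s', '%s', '%s') with NC
""" % (self.type, self.debitparty, self.creditparty, self.amount,
       self.craccountnumber, self.description)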
Second, you should be using SQL parameterization (an industry standard with any database, not just Db2) rather than string interpolation of data into the query statement. You can do so using the ibm_db_dbi module to pass parameters in the cursor execute call:
import ibm_db
import ibm_db_dbi # ADD DBI LAYER
db = ibm_db.connect('*LOCAL','userid','password')
# ADD FOR PYTHON STANDARD DB-API PROPERTIES (I.E., CURSOR)
conn = ibm_db_dbi.Connection(db)
cur = conn.cursor()
# PREPARED STATEMENT (WITH PLACEHOLDERS)
sql = """INSERT INTO TEMPLIB.TEMPPF (TYPE, DRPARTY, CRPARTY,
AMOUNT, ACNUM, DESCRIPT)
VALUES(?, ?, ?, ?, ?, ?)
with NC
"""
# EXECUTE ACTION QUERY BINDING PARAMS
cur.execute(sql, (self.type, self.debitparty, self.creditparty, self.amount,
                  self.craccountnumber, self.description))
cur.close()
conn.close()
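Because ibm_db_dbi follows the Python DB-API, the same cursor also supports executemany for batch inserts. A minimal sketch, with hypothetical row values:
# rows is a hypothetical list of (type, drparty, crparty, amount, acnum, descript) tuples
rows = [('T1', 'A', 'B', 100, '123', 'first'),
        ('T2', 'C', 'D', 200, '456', 'second')]
cur = conn.cursor()
cur.executemany(sql, rows)  # binds each tuple against the same prepared statement
cur.close()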

Related

Is it possible to use psycopg2 for prepared statements?

I'm comparing some of the features of Postgres clients for compatibility and I'm having difficulty getting prepared statements to work in psycopg2. The Node.js pg package allows me to do the following, where providing a name (insert-values) prepares the query server-side:
for (let rows = 0; rows < 10; rows++) {
    // Providing a 'name' field allows for prepared statements / bind variables
    const query = {
        name: "insert-values",
        text: "INSERT INTO my_table VALUES($1, $2, $3, $4);",
        values: [Date.now() * 1000, Date.now(), "node pg prep statement", rows],
    }
    const preparedStatement = await client.query(query)
}
In Python, I'm doing something like this using psycopg2:
# insert 10 records
for x in range(10):
    now = dt.datetime.utcnow()
    date = dt.datetime.now().date()
    cursor.execute("""
        INSERT INTO trades
        VALUES (%s, %s, %s, %s);
    """, (now, date, "python example", x))
# commit records
connection.commit()
Is there any way to create prepared statements in Python?
Edit: I'm using the samples from the QuestDB documentation.
There is no prepared statement support in psycopg2, even in 2021. Yes, you can issue PREPARE yourself and run the named query with parameters, but there is no first-class support in psycopg2 the way there is in the Java JDBC or Rust Postgres drivers.
If you write loops of INSERT statements, the full statement text is sent and parsed by the database on every iteration, so there is measurable I/O and CPU overhead for big loops.
As far as I know, there is no support for "magically" preparing statements. However, you can execute SQL PREPARE and EXECUTE statements with execute().
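A minimal sketch of that manual approach, assuming the same imports, cursor, connection, and trades table as in the question (the statement name and parameter types are assumptions):
# Prepare the statement once on the server
cursor.execute("""
    PREPARE trade_insert (timestamptz, date, text, int) AS
    INSERT INTO trades VALUES ($1, $2, $3, $4);
""")
# Only the short EXECUTE text is sent on each iteration
for x in range(10):
    now = dt.datetime.utcnow()
    date = dt.datetime.now().date()
    cursor.execute("EXECUTE trade_insert (%s, %s, %s, %s)",
                   (now, date, "python example", x))
connection.commit()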
You probably want to read the section on fast execution helpers in the manual.
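For instance, psycopg2.extras.execute_values batches many rows into a single statement; a minimal sketch, assuming the same trades table and imports as above:
# Build all rows up front and send them in one round trip
from psycopg2.extras import execute_values
rows = [(dt.datetime.utcnow(), dt.datetime.now().date(), "python example", x)
        for x in range(10)]
execute_values(cursor, "INSERT INTO trades VALUES %s", rows)
connection.commit()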
Why not?:
date = dt.datetime.now().date()
insert_sql = """INSERT INTO trades
                VALUES (%s, %s, %s, %s)"""
# insert 10 records
for x in range(10):
    now = dt.datetime.utcnow()
    cursor.execute(insert_sql, (now, date, "python example", x))
# commit records
connection.commit()
It works out to the same thing: the query string is built once and then run multiple times with a different parameter for x. As Ture Pålsson pointed out, you can also combine the INSERTs using the fast execution helpers (see the sketch above).

How can I add a new value to a database column in Python

import sqlite3
con = sqlite3.connect("Demo.db")
cursor = con.cursor()
query = "UPDATE Table set demoName = %s Where demoId = %s"
cursor.execute(query,("demoName",1)
con.commit()
This method does not work. I want to keep the current values in the column and add more values to them. What code can do that? Updating doesn't work here.
The UPDATE SQL command modifies existing rows. Use the INSERT SQL command to create new rows. Note that sqlite3 also expects ? placeholders, not %s:
import sqlite3
con = sqlite3.connect("Demo.db")
cursor = con.cursor()
query = "INSERT INTO Table (demoName, demoId) VALUES (%s, %s)"
cursor.execute(query,("demoName",1)
con.commit()
The official Python sqlite3 documentation suggests the following links for learning SQL:
https://www.w3schools.com/sql/ - Tutorial, reference and examples for learning SQL syntax.
https://www.sqlite.org - The SQLite web page; the documentation describes the syntax and the available data types for the supported SQL dialect.

Python - Can I insert rows into one database using a cursor (from select) from another database?

I am trying to select data from our main database (Postgres) and insert it into a temporary SQLite database for some comparison, analytics, and reporting. Is there an easy way to do this in Python? I am trying to do something like this:
Get data from the main Postgres db:
import psycopg2
postgres_conn = psycopg2.connect(connection_string)
from_cursor = postgres_conn.cursor()
from_cursor.execute("SELECT email, firstname, lastname FROM schemaname.tablename")
Insert into SQLite table:
import sqlite3
sqlite_conn = sqlite3.connect(db_file)
to_cursor = sqlite_conn.cursor()
insert_query = "INSERT INTO sqlite_tablename (email, firstname, lastname) values %s"
to_cursor.some_insert_function(insert_query, from_cursor)
So the question is: is there a some_insert_function that would work for this scenario (either using pyodbc or using sqlite3)?
If yes, how to use it? Would the insert_query above work? or should it be modified?
Any other suggestions/approaches would also be appreciated in case a function like this doesn't exist in Python. Thanks in advance!
You should pass the result of your select query to executemany.
insert_query = "INSERT INTO smallUsers values (?,?,?)"
to_cursor.executemany(insert_query, from_cursor.fetchall())
You should also use a parameterized query (? marks), as explained here: https://docs.python.org/3/library/sqlite3.html#sqlite3.Cursor.execute
If you want to avoid loading the entire source database into memory, you can use the following code to process 100 rows at a time:
while True:
    current_data = from_cursor.fetchmany(100)
    if not current_data:
        break
    to_cursor.executemany(insert_query, current_data)
    sqlite_conn.commit()
sqlite_conn.commit()
You can look at executemany from pyodbc or sqlite. If you can build a list of parameters from your select, you can pass the list to executemany.
Depending on the number of records you plan to insert, performance can be a problem, as referenced in this open issue: https://github.com/mkleehammer/pyodbc/issues/120
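If you do go through pyodbc, its fast_executemany flag can mitigate that row-by-row overhead; a hedged sketch, assuming a valid ODBC connection string and the target table from the question:
import pyodbc

odbc_conn = pyodbc.connect(odbc_connection_string)  # hypothetical connection string
to_cursor = odbc_conn.cursor()
to_cursor.fast_executemany = True  # ship parameter arrays in bulk instead of row by row
to_cursor.executemany(
    "INSERT INTO sqlite_tablename (email, firstname, lastname) VALUES (?, ?, ?)",
    from_cursor.fetchall())
odbc_conn.commit()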

Python mysql connector prepared INSERT statement cutting off text

I've been trying to insert a large string into a MySQL database using Python's mysql.connector. The problem I'm having is that long strings are getting cut off at some point when using prepared statements. I'm currently using the MySQL Connector/Python that is available on MySQL.com. I used the following code to duplicate the problem I'm having.
db = mysql.connector.connect(**credentials)
cursor = db.cursor()
value = []
for x in range(0, 2000):
    value.append(str(x+1))
value = " ".join(value)
cursor.execute("""
    CREATE TABLE IF NOT EXISTS test (
        pid VARCHAR(50),
        name VARCHAR(120),
        data LONGTEXT,
        PRIMARY KEY(pid)
    )
""")
db.commit()
#this works as expected
print("Test 1")
cursor.execute("REPLACE INTO test (pid, name, data) VALUES ('try 1', 'Description', '{0}')".format(value))
db.commit()
cursor.close()
#this does not work
print("Test 2")
cursor = db.cursor(prepared=True)
cursor.execute("""REPLACE INTO test (pid, name, data) VALUE (?, ?, ?)""", ('try 2', 'Description2', value))
db.commit()
cursor.close()
Test 1 works as expected and stores all the numbers up to 2000, but test 2 gets cut off right after number 65. I would rather use prepared statements than try to sanitize incoming strings myself. Any help appreciated.
Extra information:
Computer: Windows 7 64 bit
Python: Tried on both python 3.4 and 3.3
MYSQL: 5.6.17 (Came with WAMP)
Library: MySQL Connector/Python
When the MySQL Connector driver processes prepared statements, it uses a lower-level binary protocol to communicate values to the server individually. As such, it tells the server whether the values are INTs or VARCHARs or TEXT, etc. It's not particularly smart about it, and this "behavior" is the result. In this case, it sees that the value is a Python string and tells MySQL that it's a VARCHAR value. The VARCHAR type has a string length limit that affects the amount of data being sent to the server. What's worse, the interaction between the long value and the limited data type length can yield some strange behavior.
Ultimately, you have a few options:
Use a file-like object for your string
MySQL Connector treats files and file-like objects as BLOBs and TEXTs (depending on whether the file is open in binary or non-binary mode, respectively). You can leverage this to get the behavior you desire.
import StringIO  # Python 2; on Python 3, use io.StringIO instead
...
cursor = db.cursor(prepared=True)
cursor.execute("""REPLACE INTO test (pid, name, data) VALUES (?, ?, ?)""",
               ('try 2', 'Description', StringIO.StringIO(value)))
cursor.close()
db.commit()
Don't use MySQL Connector prepared statements
If you don't pass prepared=True when creating the cursor, the driver will generate a full, valid SQL statement for each execution. You're not really losing much by avoiding MySQL prepared statements in this context. You do need to pass your SQL statements in a slightly different form to get proper placeholder sanitization behavior.
cursor = db.cursor()
cursor.execute("""REPLACE INTO test (pid, name, data) VALUES (%s, %s, %s)""",
('try 2', 'Description', value))
cursor.close()
db.commit()
Use another MySQL driver
There are a couple of different Python MySQL drivers (see the sketch after this list):
MySQLdb
oursql
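As a hedged sketch, the equivalent insert with MySQLdb (the mysqlclient fork on Python 3), which escapes values client-side and uses %s placeholders; the connection details are placeholders:
import MySQLdb

db = MySQLdb.connect(host="localhost", user="userid", passwd="password", db="test")
cursor = db.cursor()
cursor.execute("REPLACE INTO test (pid, name, data) VALUES (%s, %s, %s)",
               ("try 3", "Description3", value))
db.commit()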

python cursor FROM %s

I am trying to execute this query:
cur.execute("SELECT MAX(id) FROM %s WHERE equipement_id = %s", (table,eq_id))
But I get an error because the FROM %s part is not built correctly.
Try like this
cur.execute("""SELECT MAX(id) FROM %s WHERE equipement_id = %s""" % (table, eq_id))
Unfortunately, you need to do it in 2 steps:
table_name = 'stuff'
query = "SELECT MAX(id) FROM {0} WHERE equipement_id = %s".format(table_name)
cur.execute(query, (eq_id,))
Why? Database connectors tend to only allow you to substitute values into the query, not arbitrary pieces like table names or expressions. For example, from the psycopg2 docs:
Only variable values should be bound via this method: it shouldn’t be used to set table or field names. For these elements, ordinary string formatting should be used before running execute().
And obviously:
triple-check that the value of table_name isn't user-controlled and really is a valid table name!
Please read the problem with the query parameters section in psycopg2 docs (even if you end up using another database; the remarks there are quite general).
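If you are on psycopg2 2.7 or newer, the psycopg2.sql module can compose the table name safely instead of plain string formatting; a minimal sketch:
from psycopg2 import sql

# sql.Identifier quotes and escapes the table name; %s still binds the value
query = sql.SQL("SELECT MAX(id) FROM {} WHERE equipement_id = %s").format(
    sql.Identifier(table_name))
cur.execute(query, (eq_id,))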
