I use the mysql.connector library in Python to send queries to a database. But when the database is changed after the connection is initialized, mysql.connector behaves as if the database had never changed.
As an example, let's imagine I have a minimal table students with just two columns, id and name.
+----+------+
| id | name |
+----+------+
|  0 | foo  |
+----+------+
In the following code, the query asks for the student with id 0. But in the middle of the process, an event from outside the Python script alters the database.
import mysql.connector
maindb = mysql.connector.connect(
    host="<host>",
    user="<user>",
    password="<password>",
    db="<database name>"
)
cursor = maindb.cursor()
# At this point, from outside the Python script, I run a MySQL query that changes the student's name from "foo" to "bar", like this:
# `UPDATE `students` SET `name` = 'bar' WHERE `students`.`id` = 0;`
cursor.execute("SELECT `id`, `name` FROM `students` WHERE `id` = 0")
result = cursor.fetchall()
print(result)
Then I get this answer: [(0, 'foo')]. As you can see, Python is not aware that the database has changed since maindb.cursor() was called, so I get foo in the name field instead of bar as expected.
So how do I tell mysql.connector to pick up the latest state of the database when I send a query?
You will need to use a socket to be notified of changes, or, if the changes occur frequently, have your code re-run the query every x minutes.
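If you go the polling route, a rough sketch (my own illustration, reusing the connection details and students table from the question) could look like this:
import time
import mysql.connector

# Hypothetical polling loop: open a fresh connection for each check so the
# SELECT always sees the latest committed data, then wait before re-running.
while True:
    conn = mysql.connector.connect(host="<host>", user="<user>",
                                   password="<password>", db="<database name>")
    cur = conn.cursor()
    cur.execute("SELECT `id`, `name` FROM `students` WHERE `id` = 0")
    print(cur.fetchall())
    cur.close()
    conn.close()
    time.sleep(60)  # re-run every x seconds/minutes, as suggested above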
I just need to call .connect() on the maindb object before each new query and .close() it afterwards.
maindb.connect()
cursor.execute("SELECT `id`, `name` FROM `students` WHERE `id` = 0")
result = cursor.fetchall()
print(result)
maindb.close()
The database maintains data integrity by preventing in-progress transactions from seeing changes made by other transactions (see transaction isolation levels).
You can commit your connection to allow it to see new changes:
cursor = maindb.cursor()
# At this point, from outside the Python script, I run a MySQL query that changes the student's name from "foo" to "bar", like this:
# `UPDATE `students` SET `name` = 'bar' WHERE `students`.`id` = 0;`
# Doesn't show the update
cursor.execute("SELECT `id`, `name` FROM `students` WHERE `id` = 0")
result = cursor.fetchall()
print(result)
# Shows the update because we have committed.
maindb.commit()
cursor.execute("SELECT `id`, `name` FROM `students` WHERE `id` = 0")
result = cursor.fetchall()
print(result)
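As an alternative (my own addition, not part of the answer above): you can enable autocommit on the connection, so no long-running transaction snapshot is held open and each new query sees the latest committed data:
maindb = mysql.connector.connect(
    host="<host>",
    user="<user>",
    password="<password>",
    db="<database name>",
    autocommit=True   # every statement runs in its own transaction, so no stale snapshot
)
cursor = maindb.cursor()
cursor.execute("SELECT `id`, `name` FROM `students` WHERE `id` = 0")
print(cursor.fetchall())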
I'm writing a DBMS and am validating user inputs (for tables in the database) using the lengths and data types stored in the MySQL database's INFORMATION_SCHEMA.COLUMNS for the particular table they are entering data into. I'm using Python 3.8.4 (64-bit) with all the needed mysql.connector modules installed, and I have the MySQL server running on localhost on the same machine.
After executing the following query = "SELECT COLUMN_TYPE FROM INFORMATION_SCHEMA.COLUMNS WHERE TABLE_SCHEMA = 'my_schema' AND TABLE_NAME = 'the_specific_table';"
with the following Python code:
NOTE: the particular table I'm using in this query is set out like below:
user_id - INT - PRIMARY KEY- NOT NULL
first_name - VARCHAR(45) - NOT NULL
last_name - VARCHAR(45) - NOT NULL
import mysql.connector
connection = mysql.connector.connect(
    host=host,
    user=user,
    passwd=password,
    database=db_name)
cursor = connection.cursor()
cursor.execute("SELECT COLUMN_TYPE FROM INFORMATION_SCHEMA.COLUMNS WHERE TABLE_SCHEMA = 'SCHEMA_HERE' AND TABLE_NAME = 'TABLE_HERE';")
result = cursor.fetchall()
print(result)
The result I get from this query is the following:
[(b'int',), (b'varchar(45)',), (b'varchar(45)',)]
This is completely unchanged from how it is returned when I print the cursor contents out.
Looping through the list gives each tuple as you would expect. However, when indexing into those values, instead of getting a character (the way string = 'hello' followed by print(string[0]) outputs 'h', of course), a seemingly random number is output instead. This means that at the moment I can't work out how to validate inputs against the length and data type of a column.
This weird 'b' prefix only appears on the DATA_TYPE column of the query, so if the query
cursor.execute("""SELECT TABLE_NAME,COLUMN_NAME, DATA_TYPE, CHARACTER_MAXIMUM_LENGTH, IS_NULLABLE
FROM INFORMATION_SCHEMA.COLUMNS
WHERE TABLE_NAME = 'my_table'""")
is executed the result is
[('tblusers', 'user_id', b'int', None, 'NO'), ('tblusers', 'first_name', b'varchar', 45, 'NO'), ('tblusers', 'last_name', b'varchar', 45, 'NO')]
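For what it's worth, a small illustration (my own, not from the original post) of how such bytes values can be normalized to plain strings with .decode(); indexing a bytes object in Python 3 returns integer code points, which is where the seemingly random numbers come from:
rows = [('tblusers', 'user_id', b'int', None, 'NO'),
        ('tblusers', 'first_name', b'varchar', 45, 'NO')]

# Decode any bytes fields to str so normal string indexing/slicing works again.
cleaned = [tuple(v.decode() if isinstance(v, bytes) else v for v in row)
           for row in rows]
print(cleaned)              # [('tblusers', 'user_id', 'int', None, 'NO'), ...]
print(b'int'[0])            # 105 (the integer code point of 'i')
print(b'int'.decode()[0])   # 'i'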
NOTE: this is my first time posting a question on here, so forgive me if I have left out any crucial pieces of information needed for someone to help; just let me know and I will try to add them.
Any help is hugely appreciated :)
As the title says, I'm having problems retrieving data from a SQLite DB when using a WHERE clause.
Here is the piece of code that tries to get a row where an ID is given:
def check_attendance(self, cred):
    query = """SELECT * FROM clients WHERE dni=?"""
    self.conn.cursor().execute(query, (cred,))
    record = self.conn.cursor().fetchone()
The var cred is already wrapped in a tuple, as specified by the sqlite3 API for Python. Sadly, the query returns None when executed here.
If I do the same thing using sqlite.exe, then I do get the right row back. In fact, this is the only query I cannot execute properly from my Python script; everything else returns rows normally.
Here it is executing from the Python script, and here it is in sqlite.exe (screenshots omitted).
Here is the piece that stores values in the DB:
def new_client(self, *args):
    success = False
    # Check if all inputs are filled
    if self.dialog.content_cls.ids.user_name.text and self.dialog.content_cls.ids.user_surname.text and len(self.dialog.content_cls.ids.user_dni.text) == 8 and self.dialog.content_cls.ids.user_date.text:
        # Convert str date to a datetime obj in order to use it with timedelta
        paid_date = datetime.strptime(self.dialog.content_cls.ids.user_date.text, "%d-%m-%Y")
        # paid_date is now in YYYY-MM-DD HH:MM:SS format
        # Add 30 days to paid_date
        exp_date = paid_date + timedelta(days=30)
        # Convert YYYY-MM-DD HH:MM:SS to string YYYY-MM-DD as we don't need the clock
        paid_date = datetime.strptime(str(paid_date), "%Y-%m-%d %H:%M:%S").strftime("%Y-%m-%d")
        exp_date = datetime.strptime(str(exp_date), "%Y-%m-%d %H:%M:%S").strftime("%Y-%m-%d")
        # Create query blueprint and try executing
        query = """INSERT INTO clients (name, surname, dni, membership_date, expiration_date) VALUES (?,?,?,?,?)"""
        try:
            self.conn.execute(query, (self.dialog.content_cls.ids.user_name.text,
                                      self.dialog.content_cls.ids.user_surname.text,
                                      self.dialog.content_cls.ids.user_dni.text,
                                      paid_date,
                                      exp_date
                                      )
                              )
            success = True
        except sqlite3.IntegrityError:
            pass
        if success:
            self.conn.commit()
The try/except was used for other reasons. Adding to the database from the Python script works fine as shown in the second screenshot.
And the table clients is as follows:
c.execute(''' CREATE TABLE clients (id INTEGER PRIMARY KEY AUTOINCREMENT,
name TEXT NOT NULL,
surname TEXT NOT NULL,
dni INTEGER NOT NULL UNIQUE,
membership_date date NOT NULL,
expiration_date date NOT NULL); ''')
Using Python v3.7.7 32bit.
Thanks!
In your code, two separate cursors are created (each call to cursor() returns a new one), so the fetch runs on a different cursor than the one that executed the query.
You should either get the results from the same cursor you used to execute the SELECT statement:
def check_attendance(self, cred):
    query = """SELECT * FROM clients WHERE dni=?"""
    cur = self.conn.cursor()
    cur.execute(query, (cred,))
    record = cur.fetchone()
...or you can avoid creating a cursor explicitly by using the execute method directly on the Connection object:
def check_attendance(self, cred):
    query = """SELECT * FROM clients WHERE dni=?"""
    record = self.conn.execute(query, (cred,)).fetchone()
You can read more about this approach in the documentation (https://docs.python.org/3/library/sqlite3.html#using-sqlite3-efficiently):
Using the nonstandard execute(), executemany() and executescript() methods of the Connection object, your code can be written more concisely because you don’t have to create the (often superfluous) Cursor objects explicitly. Instead, the Cursor objects are created implicitly and these shortcut methods return the cursor objects.
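As a small follow-up (my own sketch, assuming a throwaway in-memory database rather than the questioner's file): the shortcut returns the implicitly created cursor, so fetchone()/fetchall() chain naturally, and the connection itself can be used as a context manager to commit:
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE clients (id INTEGER PRIMARY KEY, dni INTEGER UNIQUE, name TEXT)")

with conn:  # commits on success, rolls back on exception
    conn.execute("INSERT INTO clients (dni, name) VALUES (?, ?)", (12345678, "Ada"))

row = conn.execute("SELECT * FROM clients WHERE dni = ?", (12345678,)).fetchone()
print(row)  # (1, 12345678, 'Ada')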
I am trying to update (or rather modify) existing data in a particular cell of a SQLite database from a Flask Python app, but the update doesn't happen and I don't receive any error. At the same time, I am able to insert new data as a new row of the table.
Server side code:
#app.route("/")
#app.route("/<state>")
def get_post_javascript_data(state=None):
connection = sqlite3.connect('/home/pi/toggle7.db')
cursor = connection.cursor()
load = 'light5'
status = 'ON'
if state == 'Load1on':
sock.send('Load1ON')
print ("sent: Load1ON")
try:
cursor.execute("UPDATE user1 SET load = 'light5' WHERE id='1'")
connection.commit()
message = "success"
except:
connection.rollback()
message = "failure"
finally:
connection.close()
print (message)
UPDATE user1 SET load = 'light5' WHERE id='1'
This command updates all rows that have the value '1' in the id column.
If nothing happens, then this implies that there is no such row.
If the id column contains numbers, then you must search for a number:
... WHERE id=1
And you should always use parameters to avoid formatting problems like this (and SQL injection attacks):
cursor.execute('UPDATE user1 SET load = ? WHERE id = ?', ['light5', 1])
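A minimal sketch (my own addition, assuming the user1 table from the question) that combines parameters with a cursor.rowcount check, so you can tell whether the UPDATE matched anything at all:
cursor.execute('UPDATE user1 SET load = ? WHERE id = ?', ['light5', 1])
connection.commit()

# rowcount reports how many rows the UPDATE actually changed;
# 0 means the WHERE clause found no matching id.
if cursor.rowcount == 0:
    print("no row with that id")
else:
    print("updated", cursor.rowcount, "row(s)")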
I'd like to have returned to me (via cx_Oracle in Python) the value of the identity that's created for a row I'm inserting. I think I can figure out the Python bit on my own, if someone could please state how to modify my SQL statement to get the ID of the newly created row.
I have a table that's created with something like the following:
CREATE TABLE hypervisor
(
id NUMBER GENERATED BY DEFAULT AS IDENTITY (
START WITH 1 NOCACHE ORDER ) NOT NULL ,
name VARCHAR2 (50)
)
LOGGING ;
ALTER TABLE hypervisor ADD CONSTRAINT hypervisor_PK PRIMARY KEY ( id ) ;
And I have SQL that's similar to the following:
insert into hypervisor ( name ) values ('my hypervisor')
Is there an easy way to obtain the id of the newly inserted row? I'm happy to modify my SQL statement to have it returned, if that's possible.
Most of the Google hits on this issue were for version 11 and below, which don't support automatically generated identity columns, so hopefully someone here can help out.
Taking what user2502422 said above and adding the Python bit (note that the bind name in the SQL must match the key in the parameter dict):
newest_id_wrapper = cursor.var(cx_Oracle.STRING)
sql_params = { "newest_id_sql_param" : newest_id_wrapper }
sql = "insert into hypervisor ( name ) values ('my hypervisor') " + \
      "returning id into :newest_id_sql_param"
cursor.execute(sql, sql_params)
newest_id = newest_id_wrapper.getvalue()
This example taken from learncodeshare.net has helped me grasp the correct syntax.
cur = con.cursor()
new_id = cur.var(cx_Oracle.NUMBER)
statement = 'insert into cx_people(name, age, notes) values (:1, :2, :3) returning id into :4'
cur.execute(statement, ('Sandy', 31, 'I like horses', new_id))
sandy_id = new_id.getvalue()
pet_statement = 'insert into cx_pets (name, owner, type) values (:1, :2, :3)'
cur.execute(pet_statement, ('Big Red', sandy_id, 'horse'))
con.commit()
It's only slightly different from ragerdl's answer, but different enough to be added here I believe!
Notice the absence of sql_params = { "newest_id_sql_param" : newest_id_wrapper }
Use the returning clause of the insert statement.
insert into hypervisor (name ) values ('my hypervisor')
returning id into :python_var
You said you could handle the Python bit? You should be able to "bind" the return parameter in your program.
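For completeness, a short sketch of that binding step (my own addition, not part of this answer) with cx_Oracle:
id_var = cursor.var(cx_Oracle.NUMBER)
cursor.execute(
    "insert into hypervisor ( name ) values (:name) returning id into :python_var",
    {"name": "my hypervisor", "python_var": id_var},
)
new_id = id_var.getvalue()  # may be a one-element list on newer driver versions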
I liked the answer by Marco Polo, but it is incomplete.
The answer from FelDev is good too but does not address named parameters.
Here is a more complete example from code I wrote, with a simplified table (fewer fields). I have omitted the code for setting up a cursor, since that is well documented elsewhere.
import cx_Oracle

INSERT_A_LOG = '''INSERT INTO A_LOG(A_KEY, REGION, DIR_NAME, FILENAME)
    VALUES(A_KEY_Sequence.nextval, :REGION, :DIR_NAME, :FILENAME)
    RETURNING A_KEY INTO :A_LOG_ID'''

CURSOR = None

class DataProcessor(Process):
    # Other code (omitted) for setting up the connection to the DB and storing it in CURSOR

    def save_log_entry(self, row):
        global CURSOR
        # Oracle variable to hold the value of the last insert
        log_var = CURSOR.var(cx_Oracle.NUMBER)
        row['A_LOG_ID'] = log_var
        row['REGION'] = 'R7'  # Other entries set elsewhere
        try:
            # This will fail unless row.keys() ==
            # ['REGION', 'DIR_NAME', 'FILENAME', 'A_LOG_ID']
            CURSOR.execute(INSERT_A_LOG, row)
        except Exception as e:
            row['REJCTN_CD'] = 'InsertFailed'
            raise
        # Get last inserted ID from Oracle for update
        self.last_log_id = log_var.getvalue()
        print('Insert id was {}'.format(self.last_log_id))
Agreeing with the older answers. However, depending on your version of cx_Oracle (7.0 and newer), var.getvalue() might return an array instead of a scalar.
This is to support multiple return values as stated in this comment.
Also note that cx_Oracle is deprecated and has moved to oracledb now.
Example:
newId = cur.var(oracledb.NUMBER, outconverter=int)
sql = """insert into Locations(latitude, longitude) values (:latitude, :longitude) returning locationId into :newId"""
sqlParam = [latitude, longitude, newId]
cur.execute(sql, sqlParam)
newIdValue = newId.getvalue()
newIdValue would return [1] instead of 1
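A small sketch of one way to cope with both behaviours (my own addition, reusing the newId variable from the example above):
raw = newId.getvalue()
# Newer cx_Oracle/oracledb versions return a list (to support multi-row RETURNING);
# older versions return a scalar. Normalize to a single value either way.
new_id_value = raw[0] if isinstance(raw, list) else raw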
I am using SQLAlchemy without the ORM, i.e. using hand-crafted SQL statements to directly interact with the backend database. I am using PG as my backend database (psycopg2 as DB driver) in this instance - I don't know if that affects the answer.
I have statements like this,for brevity, assume that conn is a valid connection to the database:
conn.execute("INSERT INTO user (name, country_id) VALUES ('Homer', 123)")
Assume also that the user table consists of the columns (id [SERIAL PRIMARY KEY], name, country_id)
How may I obtain the id of the new user, ideally, without hitting the database again?
You might be able to use the RETURNING clause of the INSERT statement like this:
result = conn.execute("INSERT INTO user (name, country_id) VALUES ('Homer', 123)
RETURNING *")
If you only want the resulting id:
result = conn.execute("INSERT INTO user (name, country_id) VALUES ('Homer', 123)
RETURNING id")
[new_id] = result.fetchone()
Use lastrowid
result = conn.execute("INSERT INTO user (name, country_id) VALUES ('Homer', 123)")
result.lastrowid
Current SQLAlchemy documentation suggests
result.inserted_primary_key should work!
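A short sketch of how that typically reads (my own example, assuming a Table object for the user table from the question and a SQLAlchemy engine; the URL and names are placeholders):
from sqlalchemy import Table, Column, Integer, String, MetaData, create_engine

metadata = MetaData()
user_table = Table(
    "user", metadata,
    Column("id", Integer, primary_key=True),
    Column("name", String),
    Column("country_id", Integer),
)

engine = create_engine("postgresql+psycopg2://user:password@localhost/dbname")
with engine.begin() as conn:  # begin() commits the transaction on exit
    result = conn.execute(user_table.insert().values(name="Homer", country_id=123))
    new_id = result.inserted_primary_key[0]  # tuple of primary-key values; take the first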
Python + SQLAlchemy
After the commit, you get the primary key column id (autoincremented) updated in your object.
db.session.add(new_usr)
db.session.commit() #will insert the new_usr data into database AND retrieve id
idd = new_usr.usrID # usrID is the autoincremented primary_key column.
return jsonify(idd),201 #usrID = 12, correct id from table User in Database.
This question has been asked many times on Stack Overflow, and no answer I have seen is comprehensive. Googling 'sqlalchemy insert get id of new row' brings up a lot of them.
There are three levels to SQLAlchemy.
Top: the ORM.
Middle: Database abstraction (DBA) with Table classes etc.
Bottom: SQL using the text function.
To an OO programmer the ORM level looks natural, but to a database programmer it looks ugly and the ORM gets in the way. The DBA layer is an OK compromise. The SQL layer looks natural to database programmers and would look alien to an OO-only programmer.
Each level has its own syntax, similar but different enough to be frustrating. On top of this there is almost too much documentation online, making it very hard to find the answer.
I will describe how to get the inserted id AT THE SQL LAYER for the RDBMS I use.
Table: User(user_id integer primary autoincrement key, user_name string)
conn: a Connection obtained within SQLAlchemy to the DBMS you are using.
SQLite
======
insstmt = text(
'''INSERT INTO user (user_name)
VALUES (:usernm) ''' )
# Execute within a transaction (optional)
txn = conn.begin()
result = conn.execute(insstmt, usernm='Jane Doe')
# The id!
recid = result.lastrowid
txn.commit()
MS SQL Server
=============
insstmt = text(
'''INSERT INTO user (user_name)
OUTPUT inserted.user_id
VALUES (:usernm) ''' )
txn = conn.begin()
result = conn.execute(insstmt, usernm='Jane Doe')
# The id!
recid = result.fetchone()[0]
txn.commit()
MariaDB/MySQL
=============
insstmt = text(
'''INSERT INTO user (user_name)
VALUES (:usernm) ''' )
txn = conn.begin()
result = conn.execute(insstmt, usernm='Jane Doe')
# The id!
recid = conn.execute(text('SELECT LAST_INSERT_ID()')).fetchone()[0]
txn.commit()
Postgres
========
insstmt = text(
'''INSERT INTO user (user_name)
VALUES (:usernm)
RETURNING user_id ''' )
txn = conn.begin()
result = conn.execute(insstmt, usernm='Jane Doe')
# The id!
recid = result.fetchone()[0]
txn.commit()
result.inserted_primary_key
Worked for me. The only thing to note is that this returns a list that contains the last inserted id.
Make sure you use fetchrow/fetch to receive the returning object
insert_stmt = user.insert().values(name="homer", country_id="123").returning(user.c.id)
row_id = await conn.fetchrow(insert_stmt)
For Postgres inserts from Python code, it is simple to add the RETURNING keyword, followed by col_id (the name of the column whose last inserted row id you want), at the end of the insert statement.
syntax -
INSERT INTO emp_table (col_id, Name, Age)
VALUES (3, 'xyz', 30) RETURNING col_id;
or (if the col_id column is auto increment):
INSERT INTO emp_table (Name, Age)
VALUES ('xyz', 30) RETURNING col_id;
ex -
from sqlalchemy import create_engine

conn_string = "postgresql://USERNAME:PSWD@HOSTNAME/DATABASE_NAME"
db = create_engine(conn_string)
conn = db.connect()

insert_sql = "INSERT INTO emp_table (Name, Age) VALUES ('xyz', 30) RETURNING col_id;"
result = conn.execute(insert_sql)
[last_row_id] = result.fetchone()
print(last_row_id)
# output = 3