insert with select, get inserted value - python

I am using a specific query to generate ids without autoincrement:
'insert into global_ids (`id`) select (((max(id)>>4)+1)<<4)+1 from global_ids'
CREATE TABLE `global_ids` (
`id` int(11) NOT NULL,
PRIMARY KEY (`id`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4 COLLATE=utf8mb4_unicode_ci;
I have a problem fetching the last inserted row id (because I am not using autoincrement, cur.lastrowid is None).
How can I fetch the inserted row's id value in my case?

If you don't care about race conditions, you can get the highest id value in the table with SELECT MAX(id) AS id FROM global_ids.
But, you do care about race conditions. What happens if two copies of your python program are running? One may do an insert, then another may do an insert before the first one does the select I suggested. If that happens, the select will return a wrong value.
You could try
START TRANSACTION;
SELECT (((MAX(id)>>4)+1)<<4)+1 INTO @id FROM global_ids FOR UPDATE;
INSERT INTO global_ids (id) VALUES (@id);
COMMIT;
SELECT @id AS id;
to serialize your id generation / retrieval operation with transactions. (This won't work in MyISAM tables, because they ignore transactions.)
But, your best bet to avoid race condition hassles in MySQL is to actually use the autoincrementing feature. You could try this.
CREATE TABLE global_ids (
id INT NOT NULL AUTO_INCREMENT,
PRIMARY KEY (`id`)
)
Then issue these queries one after the other:
INSERT INTO global_ids () VALUES ();
SELECT (((LAST_INSERT_ID()>>4)+1)<<4)+1 AS id;
That will use the auto-incrementing feature to generate your transformed id values without running the risk of race conditions.
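For reference, the arithmetic in the original query can be sanity-checked in plain Python. This standalone sketch (the helper name `next_global_id` is invented here) applies the transform to its own output, the way the original `INSERT ... SELECT MAX(id)` chain does, and shows the ids landing 16 apart:

```python
def next_global_id(current_max: int) -> int:
    """Compute (((max(id)>>4)+1)<<4)+1, as in the original query:
    round up past the next multiple of 16, then add 1."""
    return (((current_max >> 4) + 1) << 4) + 1

# Starting from an initial id of 1, repeatedly applying the transform
# reproduces the id chain the self-referential INSERT would generate.
ids = [1]
for _ in range(3):
    ids.append(next_global_id(ids[-1]))
print(ids)  # [1, 17, 33, 49]
```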

Related

Get a list of all the tables a specific ID value exists in with SQLite

I am using SQLite v3.31.1 and python 3.7.9.
How do I identify which tables a specific id value exists in? I think something like this exists in MySQL as "INFORMATION_SCHEMA", but I have not found a SQLite alternative.
I have a table structure that follows what I believe is called "Class Table Inheritance". Basically, I have a product table that has attributes common to all products, plus tables that contain attributes specific to that class of products. The main Products table contains an id column that is a primary key, which is used as a foreign key in the child tables. So a specific item exists in both the product table and the child table, but nowhere else (other than some virtual tables for FTS).
My current solution is to get all tables names like
SELECT tbl_name
FROM sqlite_master
WHERE type='table'
and then loop over the tables using
SELECT 1
FROM child_table
WHERE id = value
but it seems like there should be a better way.
Thank you.
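The loop-over-tables approach described above can be sketched with an in-memory database (the table names here are invented for illustration; the real schema will differ):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE Product (id INTEGER PRIMARY KEY);
    CREATE TABLE Resistor (id INTEGER REFERENCES Product(id), ohms REAL);
    CREATE TABLE ThermalFuse (id INTEGER REFERENCES Product(id), voltage INTEGER);
    INSERT INTO Product (id) VALUES (1), (2);
    INSERT INTO Resistor VALUES (1, 220.0);
    INSERT INTO ThermalFuse VALUES (2, 250);
""")

def tables_containing(con, id_value):
    # Enumerate user tables from sqlite_master, then probe each one
    # that actually has an id column.
    tables = [r[0] for r in con.execute(
        "SELECT tbl_name FROM sqlite_master WHERE type='table'")]
    found = []
    for t in tables:
        cols = [c[1] for c in con.execute(f"PRAGMA table_info({t})")]
        if "id" in cols and con.execute(
                f"SELECT 1 FROM {t} WHERE id=? LIMIT 1", (id_value,)).fetchone():
            found.append(t)
    return found

print(tables_containing(con, 2))  # ['Product', 'ThermalFuse']
```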
While the answer I originally posted will work, it is slow. You could cache a hash map in Python, but rebuilding that map is also slow.
Instead, I am using triggers in my SQL setup to maintain a table with the relevant info. This is a bit more brittle, but much faster for a large number of search results, since the db is queried just once and the table is loaded when the database is initialized.
My example -- there is a parent table called Product that has the id primary key:
CREATE TABLE IF NOT EXISTS ThermalFuse (
id INTEGER NOT NULL,
ratedTemperature INTEGER NOT NULL,
holdingTemperature INTEGER NOT NULL,
maximumTemperature INTEGER NOT NULL,
voltage INTEGER NOT NULL,
current INTEGER NOT NULL,
FOREIGN KEY (id) REFERENCES Product (id)
);
CREATE TABLE IF NOT EXISTS Table_Hash (
id INTEGER NOT NULL,
table_name TEXT NOT NULL,
FOREIGN KEY (id) REFERENCES Product (id)
);
CREATE TRIGGER ThermalFuse_ai AFTER INSERT ON ThermalFuse
BEGIN
INSERT INTO Table_Hash (id, table_name)
VALUES (new.id, 'ThermalFuse');
END;
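A runnable sketch of the trigger-based lookup table above, using an in-memory database (the Product parent table is reduced here to just its id column, and ThermalFuse to a single attribute):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE Product (id INTEGER PRIMARY KEY);
    CREATE TABLE ThermalFuse (
        id INTEGER NOT NULL,
        ratedTemperature INTEGER NOT NULL,
        FOREIGN KEY (id) REFERENCES Product (id)
    );
    CREATE TABLE Table_Hash (
        id INTEGER NOT NULL,
        table_name TEXT NOT NULL,
        FOREIGN KEY (id) REFERENCES Product (id)
    );
    CREATE TRIGGER ThermalFuse_ai AFTER INSERT ON ThermalFuse
    BEGIN
        INSERT INTO Table_Hash (id, table_name) VALUES (new.id, 'ThermalFuse');
    END;

    INSERT INTO Product (id) VALUES (7);
    INSERT INTO ThermalFuse (id, ratedTemperature) VALUES (7, 130);
""")

# One query now answers "which tables hold id 7?"
rows = con.execute(
    "SELECT table_name FROM Table_Hash WHERE id = ?", (7,)).fetchall()
print(rows)  # [('ThermalFuse',)]
```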

Postgres SQL error: "there is no unique or exclusion constraint matching the ON CONFLICT specification" DESPITE constraint being set

(sorry, there are many similar questions on SO but none I could find that match well enough)
Attempting to upsert to a Postgres RDS table via a temp table...
import sqlalchemy as sa

# assume db_engine is already set up
with db_engine.connect() as conn:
    conn.execute(sa.text("DROP TABLE IF EXISTS temp_table"))
    build_temp_table = """
        CREATE TABLE temp_table (
            unique_id VARCHAR(40) NOT NULL,
            date TIMESTAMP,
            amount NUMERIC,
            UNIQUE (unique_id)
        );
    """
    conn.execute(sa.text(build_temp_table))
    upsert_sql_string = """
        INSERT INTO production_table (unique_id, date, amount)
        SELECT unique_id, date, amount FROM temp_table
        ON CONFLICT (unique_id)
        DO UPDATE SET
            date = excluded.date,
            amount = excluded.amount
    """
    conn.execute(sa.text(upsert_sql_string))
Note: production_table is configured identically to temp_table
Other methods I have tried include:
Specifying unique_id as PRIMARY KEY or UNIQUE in table definition
Running ALTER TABLE temp_table ADD PRIMARY KEY (unique_id) after creating temp_table
Regardless of what I do, I get the error:
psycopg2.errors.InvalidColumnReference: there is no unique or exclusion constraint matching the ON CONFLICT specification
Thanks
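One detail worth noting about the syntax involved: the `ON CONFLICT (col)` target is matched against unique constraints on the table named in the `INSERT` (production_table here), not on the source table of the `SELECT`. A minimal illustration of the upsert form, with sqlite3 standing in for Postgres (both accept this syntax; SQLite 3.24+ required):

```python
import sqlite3

con = sqlite3.connect(":memory:")
# The unique constraint lives on the table being INSERTed into.
con.execute("CREATE TABLE production_table (unique_id TEXT UNIQUE, amount NUMERIC)")
con.execute("INSERT INTO production_table VALUES ('a', 1)")

# The conflicting row is updated in place rather than rejected.
con.execute("""
    INSERT INTO production_table (unique_id, amount) VALUES ('a', 2)
    ON CONFLICT (unique_id) DO UPDATE SET amount = excluded.amount
""")
amount = con.execute(
    "SELECT amount FROM production_table WHERE unique_id='a'").fetchone()[0]
print(amount)  # 2
```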

How to correct the autoincrement count after deleting rows in sqlite3 python [duplicate]

I have a few tables in SQLite and I am trying to figure out how to reset the auto-incremented database field.
I read that DELETE FROM tablename should delete everything and reset the auto-incremement field back to 0, but when I do this it just deletes the data. When a new record is inserted the autoincrement picks up where it left off before the delete.
My ident field properties are as follows:
Field Type: integer
Field Flags: PRIMARY KEY, AUTOINCREMENT, UNIQUE
Does it matter I built the table in SQLite Maestro and I am executing the DELETE statement in SQLite Maestro as well?
Any help would be great.
Try this:
delete from your_table;
delete from sqlite_sequence where name='your_table';
SQLite Autoincrement
SQLite keeps track of the largest ROWID that a table has ever held using the special SQLITE_SEQUENCE table. The SQLITE_SEQUENCE table is created and initialized automatically whenever a normal table that contains an AUTOINCREMENT column is created. The content of the SQLITE_SEQUENCE table can be modified using ordinary UPDATE, INSERT, and DELETE statements. But making modifications to this table will likely perturb the AUTOINCREMENT key generation algorithm. Make sure you know what you are doing before you undertake such changes.
You can also reset the counter by updating the sequence after deleting rows from your table:
UPDATE SQLITE_SEQUENCE SET SEQ=0 WHERE NAME='table_name';
As an alternate option, if you have the Sqlite Database Browser and are more inclined to a GUI solution, you can edit the sqlite_sequence table where field name is the name of your table. Double click the cell for the field seq and change the value to 0 in the dialogue box that pops up.
If you want to reset every RowId via a content provider, try this:
rowCounter = 1;
do {
    rowId = cursor.getInt(0);
    ContentValues values = new ContentValues();
    values.put(Table_Health.COLUMN_ID, rowCounter);
    updateData2DB(context, values, rowId);
    rowCounter++;
} while (cursor.moveToNext());

public static void updateData2DB(Context context, ContentValues values, int rowId) {
    Uri uri = Uri.parse(ContentProvider.CONTENT_URI_HEALTH + "/" + rowId);
    context.getContentResolver().update(uri, values, null, null);
}
If you are working with Python and want to delete all records from a table and reset the AUTOINCREMENT counter, suppose you have this table:
tables_connection_db.execute("CREATE TABLE MY_TABLE_DB (id_record INTEGER PRIMARY KEY AUTOINCREMENT, value_record real)")
After adding some records, delete them and reset the sequence:
connection_db=sqlite3.connect("name_file.db")
tables_connection_db=connection_db.cursor()
tables_connection_db.execute("DELETE FROM MY_TABLE_DB ") # delete records
connection_db.commit()
name_table="MY_TABLE_DB"
tables_connection_db.execute("UPDATE sqlite_sequence SET seq=1 WHERE name=? ",(name_table,))
connection_db.commit()
connection_db.close()

auto_increment primary key and linking tables (Mysql)

I am experimenting with Python connections to a MySQL database using MySQL workbench and I have a peculiar question.
I have a program running which displays a list of albums in stock, and then the user can choose which albums to add, then when the user confirms the order is complete, there is script to write and commit these new rows to the database.
My issue is that when the lines are written to the order_bridge table, the entryid is unique for every row, and there is no way for this primary key to reference the order table.
I was thinking of making a trigger that inserts the total cost of the order into the orders table, grabs the new auto_increment value from the created row, and then updates the newly written rows in order_bridge to all have the same orderid, but I haven't been able to wrap my head around the process.
Here is how the tables are designed:
create table albums
( albumid int,
albumname varchar(100),
artistname varchar(50),
price int,
quantity int,
primary key (albumid)
);
create table order_bridge (
entryid int auto_increment,
orderid int,
albumid int,
quantity int,
primary key (entryid),
foreign key (orderid) references orders (orderid),
foreign key (albumid) references albums (albumid)
);
create table orders
( orderid int auto_increment,
total decimal(5,2),
entrydate datetime default current_timestamp,
primary key(orderid)
);
I have solved this at least once by actually inserting every customer that is in the site cart into the orders table.
This does three things:
prevents order ids that complete from being consecutive (useless to some, useful for others),
provides a simple method for tracking cart abandonment statistics, and of course
allows you to have that unique order id when it comes time to record transactional data (like inserting selections into the order_bridge table).
Should you wish to only record the order row after the transaction finalizes, the order of operations should be:
insert order table info (including python calculation of total)
retrieve unique order table id
insert album rows
insert order bridge row
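The order of operations above can be sketched end to end. This uses sqlite3 for a self-contained example (the question uses MySQL, where `cursor.lastrowid` / `LAST_INSERT_ID()` plays the same role); the cart contents and total are made-up values:

```python
import sqlite3

con = sqlite3.connect(":memory:")
cur = con.cursor()
cur.executescript("""
    CREATE TABLE orders (orderid INTEGER PRIMARY KEY AUTOINCREMENT,
                         total DECIMAL(5,2));
    CREATE TABLE order_bridge (entryid INTEGER PRIMARY KEY AUTOINCREMENT,
                               orderid INT, albumid INT, quantity INT);
""")

cart = [(101, 2), (205, 1)]  # (albumid, quantity) chosen by the user
total = 39.98                # computed in Python from album prices

# Steps 1-2: insert the order row, then retrieve its generated id.
cur.execute("INSERT INTO orders (total) VALUES (?)", (total,))
order_id = cur.lastrowid

# Steps 3-4: write each cart line carrying the same orderid.
cur.executemany(
    "INSERT INTO order_bridge (orderid, albumid, quantity) VALUES (?, ?, ?)",
    [(order_id, albumid, qty) for albumid, qty in cart])
con.commit()

bridge_rows = cur.execute(
    "SELECT orderid, albumid FROM order_bridge").fetchall()
print(bridge_rows)  # [(1, 101), (1, 205)]
```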

No autoincrement for Integer Primary key in sqlite3

In the sqlite3 faq, it is mentioned that an integer primary key being fed a null value would autoincrement. But this is not happening for me.
To replicate, create a table in sqlite3, CREATE TABLE dummy( serial_num INTEGER PRIMARY KEY, name TEXT); and fill it using python,
import sqlite3 as lite
con = lite.connect('some.db')
cur=con.cursor()
data = "someone's name"
cur.execute("INSERT INTO dummy VALUES(NULL, ?)", (data,))
con.commit()
The first attribute serial_num is being shown blank while the name attribute is fine. When I do SELECT serial_num FROM dummy I just get a bunch of blank spaces. What am I doing wrong?
This is one of SQLite's quirks. From the fine manual:
According to the SQL standard, PRIMARY KEY should always imply NOT NULL. Unfortunately, due to a long-standing coding oversight, this is not the case in SQLite. Unless the column is an INTEGER PRIMARY KEY SQLite allows NULL values in a PRIMARY KEY column. We could change SQLite to conform to the standard (and we might do so in the future), but by the time the oversight was discovered, SQLite was in such wide use that we feared breaking legacy code if we fixed the problem.
The documentation on INTEGER PRIMARY KEY is a little unclear about what precisely is required for a column to be this special INTEGER PRIMARY KEY that auto-increments but the reality is that the column needs to be NOT NULL if you want to use the NULL value to mean "give me the next auto-incrementing value" when inserting:
create table dummy (
serial_num integer primary key not null,
name text
);
If you leave out the not null, you need to do your inserts like this:
insert into dummy (name) values (?)
to get the auto-increment value for serial_num. Otherwise, SQLite has no way of telling the difference between a NULL meaning "give me the next auto-increment value" and a NULL meaning "put a NULL value in serial_num because the column allows NULLs".
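A quick check of the column-list insert form, using an in-memory sqlite3 database that mirrors the dummy table above:

```python
import sqlite3 as lite

con = lite.connect(":memory:")
cur = con.cursor()
cur.execute(
    "CREATE TABLE dummy (serial_num INTEGER PRIMARY KEY NOT NULL, name TEXT)")
# Omitting serial_num entirely lets SQLite assign the next rowid.
cur.execute("INSERT INTO dummy (name) VALUES (?)", ("someone's name",))
cur.execute("INSERT INTO dummy (name) VALUES (?)", ("another name",))
con.commit()
rows = cur.execute("SELECT serial_num, name FROM dummy").fetchall()
print(rows)  # [(1, "someone's name"), (2, 'another name')]
```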
The insert syntax provided above does not seem to work in the absence of not null.
Here's an example - note that the ID field is not autoincremented even though I use the insert format that you specified above.
sqlite> .schema logTable
CREATE TABLE logTable (ID INTEGER PRIMARY_KEY, ts REAL, level TEXT, message TEXT);
sqlite> INSERT into LOGTABLE (ts, level, message) VALUES (111, "autoinc test", "autoinc test");
sqlite> select * from logtable where ts = 111;
|111.0|autoinc test|autoinc test
sqlite>
It does work with the NOT NULL workaround.
sqlite> create TABLE logTable (ID INTEGER PRIMARY KEY NOT NULL, ts REAL, level TEXT, message TEXT);
sqlite> INSERT into LOGTABLE (ts, level, message) VALUES (222, "autoinc test", "autoinc test");
sqlite> select * from logtable where ts = 222;
1|222.0|autoinc test|autoinc test
I apologize for posting this as a new answer instead of commenting on the previous answer, but my reputation score is too low to add comments, and I thought that it was important to note that the alternate insert statement is not an adequate workaround.
