Best approach to store time intervals - Python

I have a table like this:
CREATE TABLE things
(id INTEGER PRIMARY KEY,
name TEXT,
status INTEGER);
There are these "things" whose status can be 1 or 0; the status is changed automatically by another part of my program.
I would like to store information about the time intervals during which these things' status is 0 or 1. What can I do?
I thought to create two tables:
CREATE TABLE status_from_0_to_1
(id INTEGER PRIMARY KEY,
datetime DATE,
things_id INTEGER,
FOREIGN KEY (things_id) REFERENCES things (id));
CREATE TABLE status_from_1_to_0
(id INTEGER PRIMARY KEY,
datetime DATE,
things_id INTEGER,
FOREIGN KEY (things_id) REFERENCES things (id));
But now I don't know how to write the query that checks whether the new status is different from the status in the things table and, only in that case, adds a row to one of the two tables.
I don't know if this is the best approach; I also thought about creating a single table (besides things) with two DATE fields, but I think the query would be even more difficult.
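One way to sketch the single-table idea from the question: log one row per status change, and write it only when the new status actually differs from the stored one. The `status_changes` table and its column names below are invented for illustration; only `things` comes from the question.

```python
import sqlite3

# Sketch of the single-table approach: one row per status transition.
# "status_changes" and its columns are assumed names, not from the question.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE things (id INTEGER PRIMARY KEY, name TEXT, status INTEGER);
CREATE TABLE status_changes (
    id INTEGER PRIMARY KEY,
    things_id INTEGER,
    new_status INTEGER,
    changed_at TEXT DEFAULT CURRENT_TIMESTAMP,
    FOREIGN KEY (things_id) REFERENCES things (id));
INSERT INTO things VALUES (1, 'lamp', 0);
""")

def set_status(conn, thing_id, new_status):
    # The UPDATE only matches when the status actually differs...
    cur = conn.execute(
        "UPDATE things SET status = ? WHERE id = ? AND status <> ?",
        (new_status, thing_id, new_status))
    # ...so rowcount tells us whether a transition happened.
    if cur.rowcount:
        conn.execute(
            "INSERT INTO status_changes (things_id, new_status) VALUES (?, ?)",
            (thing_id, new_status))
    return cur.rowcount

set_status(conn, 1, 1)   # 0 -> 1: logged
set_status(conn, 1, 1)   # no change: not logged
changes = conn.execute(
    "SELECT things_id, new_status FROM status_changes").fetchall()
print(changes)
```

With the timestamps recorded per transition, the intervals fall out of consecutive rows for the same `things_id`.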


Get a list of all the tables a specific ID value exists in with SQLite

I am using SQLite v3.31.1 and Python 3.7.9.
How do I identify which tables a specific id value exists in? I think something like this exists in MySQL as INFORMATION_SCHEMA, but I have not found a SQLite alternative.
I have a table structure that follows what I believe is called "Class Table Inheritance". Basically, I have a product table that has attributes common to all products, plus tables that contain specific attributes to that class of products. The "Products" main table contains an id column that is a primary key, this is used as a foreign key in the child tables. So a specific item exists in both the product table and the child table, but nowhere else (other than some virtual tables for FTS).
My current solution is to get all table names like
SELECT tbl_name
FROM sqlite_master
WHERE type='table'
and then loop over the tables using
SELECT 1
FROM child_table
WHERE id = value
but it seems like there should be a better way.
Thank you.
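The loop described above can be sketched as follows; the table names and sample rows are illustrative, and a `PRAGMA table_info` check skips tables that have no `id` column.

```python
import sqlite3

# Sketch of the loop approach: list every table, probe each one for the id.
# The Product/ThermalFuse tables and their rows are invented for illustration.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE Product (id INTEGER PRIMARY KEY, name TEXT);
CREATE TABLE ThermalFuse (id INTEGER, voltage INTEGER);
INSERT INTO Product VALUES (7, 'fuse');
INSERT INTO ThermalFuse VALUES (7, 250);
""")

def tables_containing(conn, id_value):
    tables = [row[0] for row in conn.execute(
        "SELECT tbl_name FROM sqlite_master WHERE type = 'table'")]
    found = []
    for table in tables:
        # Skip tables without an 'id' column (column name is index 1).
        cols = [c[1] for c in conn.execute(f"PRAGMA table_info({table})")]
        if 'id' not in cols:
            continue
        if conn.execute(f"SELECT 1 FROM {table} WHERE id = ? LIMIT 1",
                        (id_value,)).fetchone():
            found.append(table)
    return found

print(tables_containing(conn, 7))
```

This is the slow path the answer below improves on: every lookup touches every table.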
While the answer I originally posted will work, it is slow. You could add a hash map in Python, but it will be slow while the hash map is rebuilt.
Instead, I am using triggers in my SQL setup to maintain a table with the relevant info. This is a bit more brittle, but much faster for a large number of search results, as the db is queried just once and the table is loaded when the database is initialized.
In my example, there is a parent table called Product that has the id primary key:
CREATE TABLE IF NOT EXISTS ThermalFuse (
id INTEGER NOT NULL,
ratedTemperature INTEGER NOT NULL,
holdingTemperature INTEGER NOT NULL,
maximumTemperature INTEGER NOT NULL,
voltage INTEGER NOT NULL,
current INTEGER NOT NULL,
FOREIGN KEY (id) REFERENCES Product (id)
);
CREATE TABLE IF NOT EXISTS Table_Hash (
id INTEGER NOT NULL,
table_name TEXT NOT NULL,
FOREIGN KEY (id) REFERENCES Product (id)
);
CREATE TRIGGER ThermalFuse_ai AFTER INSERT ON ThermalFuse
BEGIN
INSERT INTO Table_Hash (id, table_name)
VALUES (new.id, 'ThermalFuse');
END;
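With the trigger installed, a single query against Table_Hash answers the original question. A runnable sketch of the setup above (the inserted Product row and values are invented):

```python
import sqlite3

# The trigger-maintained lookup table in action: insert into a child table,
# then ask Table_Hash which table the id lives in. Sample data is invented.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE Product (id INTEGER PRIMARY KEY);
CREATE TABLE ThermalFuse (
    id INTEGER NOT NULL,
    voltage INTEGER NOT NULL,
    FOREIGN KEY (id) REFERENCES Product (id));
CREATE TABLE Table_Hash (
    id INTEGER NOT NULL,
    table_name TEXT NOT NULL,
    FOREIGN KEY (id) REFERENCES Product (id));
CREATE TRIGGER ThermalFuse_ai AFTER INSERT ON ThermalFuse
BEGIN
    INSERT INTO Table_Hash (id, table_name) VALUES (new.id, 'ThermalFuse');
END;
INSERT INTO Product (id) VALUES (42);
INSERT INTO ThermalFuse (id, voltage) VALUES (42, 250);
""")

# One indexed-style lookup replaces the per-table loop.
rows = conn.execute(
    "SELECT table_name FROM Table_Hash WHERE id = ?", (42,)).fetchall()
print(rows)
```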

Is my foreign key usage correct? Or do I need a different query?

I will create three tables, each belonging to a category. The 'Department' category is linked to the 'Company' category, and the 'Department_Unit' category is linked to the 'Department' category. The user will first select a company, then a department, then a department unit. Do you think the queries I wrote below are correct, or do I need to write different and better ones?
Thank you very much.
create_table_company= '''CREATE TABLE company(
ID SERIAL PRIMARY KEY ,
NAME VARCHAR NOT NULL
); '''
create_table_department = '''CREATE TABLE department (
ID SERIAL PRIMARY KEY ,
NAME VARCHAR NOT NULL ,
company_id BIGINT,
FOREIGN KEY(company_id) REFERENCES COMPANY(id)); '''
create_table_department_unit = '''CREATE TABLE department_unit(
ID SERIAL PRIMARY KEY ,
NAME VARCHAR NOT NULL ,
department_id BIGINT,
FOREIGN KEY(department_id) REFERENCES DEPARTMENT(id)); '''
This data model looks fine.
Don't be worried that you have to join three tables whenever you need the company name that belongs to a department unit: databases are optimized to deal with such joins. In the case of an OLTP workload (you always select only a few rows from a table), this can be handled very efficiently with nested loop joins.
The only things missing from your schema are two indexes:
CREATE INDEX ON department (company_id);
CREATE INDEX ON department_unit (department_id);
These indexes are needed:
- to make such nested loop joins efficient, for example when you are searching for all units that belong to a department
- to make deletes or key updates on the referenced table efficient, since they have to search the referencing table to verify that the foreign key constraint is still satisfied
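For illustration, here is a self-contained sketch of the three-table join, using Python's sqlite3 so it runs anywhere (the join SQL itself is portable; the sample rows and index names are invented, since the CREATE INDEX statements above omit names):

```python
import sqlite3

# Walking from a department unit back up to its company name via two joins.
# Index names and sample rows are assumptions for this sketch.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE company (id INTEGER PRIMARY KEY, name TEXT NOT NULL);
CREATE TABLE department (
    id INTEGER PRIMARY KEY, name TEXT NOT NULL, company_id INTEGER,
    FOREIGN KEY (company_id) REFERENCES company (id));
CREATE TABLE department_unit (
    id INTEGER PRIMARY KEY, name TEXT NOT NULL, department_id INTEGER,
    FOREIGN KEY (department_id) REFERENCES department (id));
CREATE INDEX idx_department_company ON department (company_id);
CREATE INDEX idx_unit_department ON department_unit (department_id);
INSERT INTO company VALUES (1, 'Acme');
INSERT INTO department VALUES (1, 'Engineering', 1);
INSERT INTO department_unit VALUES (1, 'Firmware', 1);
""")

row = conn.execute("""
    SELECT c.name
    FROM department_unit u
    JOIN department d ON d.id = u.department_id
    JOIN company    c ON c.id = d.company_id
    WHERE u.id = ?""", (1,)).fetchone()
print(row)
```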

How to create a many to many relationship in SQLite DB

I'm trying to figure out how to create a many-to-many relationship between my train table and my station table. I know I need an intermediate table, which I created, but I feel like I'm going in a loop thinking about what to do. I'll include a link to the data that explains the relationships between my tables, and my schema, below.
The only columns I'm looking at in this data are:
"GTFS_STOP_ID" (id code for the Stop_Name),
"Stop_Name" (name of the train station), and
"Daytime Route" (which are the trains).
-One train can have many stations, and one station can also have many trains, which is why the "Daytime Routes" column can contain multiple trains (e.g. N, W for the first row).
-How would I accurately depict the relationships between these three columns from my CSV in my SQLite database?
Link to data (apologies for the CSV in advance, as it's the only thing I was able to find online)
cur.execute("""DROP TABLE IF EXISTS train""")
cur.execute(
""" CREATE TABLE train(
pk INTEGER PRIMARY KEY AUTOINCREMENT,
train_name VARCHAR
);""")
cur.execute("""DROP TABLE IF EXISTS station""")
cur.execute(
""" CREATE TABLE station(
pk INTEGER PRIMARY KEY AUTOINCREMENT,
stop_id VARCHAR,
station_name VARCHAR
);""")
cur.execute("""DROP TABLE IF EXISTS train_station""")
cur.execute(
""" CREATE TABLE train_station(
train_pk INTEGER,
station_pk INTEGER,
FOREIGN KEY (station_pk) REFERENCES station(pk),
FOREIGN KEY (train_pk) REFERENCES train(pk)
);"""
)
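One way to populate the junction table from CSV rows like those described: split the "Daytime Routes" field, insert each train only once, and link it to the station via the primary keys that sqlite3 reports through `cursor.lastrowid`. The sample row and stop id below are invented.

```python
import sqlite3

# Filling the junction table from a CSV-shaped row:
# (stop_id, station_name, comma-separated routes). Sample data is invented.
conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.executescript("""
CREATE TABLE train (pk INTEGER PRIMARY KEY AUTOINCREMENT, train_name VARCHAR);
CREATE TABLE station (pk INTEGER PRIMARY KEY AUTOINCREMENT,
                      stop_id VARCHAR, station_name VARCHAR);
CREATE TABLE train_station (
    train_pk INTEGER, station_pk INTEGER,
    FOREIGN KEY (station_pk) REFERENCES station (pk),
    FOREIGN KEY (train_pk) REFERENCES train (pk));
""")

rows = [("R01", "Astoria-Ditmars Blvd", "N, W")]

train_pks = {}                                   # reuse existing train rows
for stop_id, station_name, routes in rows:
    cur.execute("INSERT INTO station (stop_id, station_name) VALUES (?, ?)",
                (stop_id, station_name))
    station_pk = cur.lastrowid
    for train_name in (r.strip() for r in routes.split(",")):
        if train_name not in train_pks:
            cur.execute("INSERT INTO train (train_name) VALUES (?)",
                        (train_name,))
            train_pks[train_name] = cur.lastrowid
        cur.execute("INSERT INTO train_station (train_pk, station_pk) "
                    "VALUES (?, ?)", (train_pks[train_name], station_pk))

links = cur.execute(
    "SELECT train_pk, station_pk FROM train_station").fetchall()
print(links)
```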

auto_increment primary key and linking tables (Mysql)

I am experimenting with Python connections to a MySQL database using MySQL workbench and I have a peculiar question.
I have a program running which displays a list of albums in stock, and then the user can choose which albums to add, then when the user confirms the order is complete, there is script to write and commit these new rows to the database.
My issue is that when the lines are written to the order_bridge table, the entryid is unique for every row, and there is no way for this primary key to reference the orders table.
I was thinking of making a trigger that inserts the total cost of the order into the orders table, grabs the new auto_increment value from the created row, and then updates the newly written rows in order_bridge to all have the same orderid, but I haven't been able to wrap my head around the process.
Here is how the tables are designed:
create table albums
( albumid int,
albumname varchar(100),
artistname varchar(50),
price int,
quantity int,
primary key (albumid)
);
create table orders
( orderid int auto_increment,
total decimal(5,2),
entrydate datetime default current_timestamp,
primary key(orderid)
);
create table order_bridge (
entryid int auto_increment,
orderid int,
albumid int,
quantity int,
primary key (entryid),
foreign key (orderid) references orders (orderid),
foreign key (albumid) references albums (albumid)
);
I have solved this at least once by actually inserting every customer that is in the site cart into the orders table.
This does three things:
prevents completed order ids from being consecutive (useless to some, useful for others),
provides a simple method for tracking cart abandonment statistics, and of course
allows you to have that unique order id when it comes time to record transactional data (like inserting selections into the order_bridge table).
If you wish to record the order row only after the transaction finalizes, the order of operations should be:
insert order table info (including python calculation of total)
retrieve unique order table id
insert album rows
insert order bridge row
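The steps above can be sketched with sqlite3 so the example is self-contained; `cursor.lastrowid` works the same way with common MySQL drivers after an AUTO_INCREMENT insert. The cart contents and total below are invented.

```python
import sqlite3

# The order of operations: insert the order row, grab its fresh id,
# then write the bridge rows with that id. Sample data is invented.
conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.executescript("""
CREATE TABLE orders (orderid INTEGER PRIMARY KEY AUTOINCREMENT,
                     total DECIMAL(5,2));
CREATE TABLE order_bridge (entryid INTEGER PRIMARY KEY AUTOINCREMENT,
                           orderid INTEGER, albumid INTEGER,
                           quantity INTEGER);
""")

cart = [(3, 1), (7, 2)]                  # (albumid, quantity) pairs
total = 45.98                            # computed in Python beforehand

cur.execute("INSERT INTO orders (total) VALUES (?)", (total,))
order_id = cur.lastrowid                 # the fresh auto-increment value
cur.executemany(
    "INSERT INTO order_bridge (orderid, albumid, quantity) VALUES (?, ?, ?)",
    [(order_id, albumid, qty) for albumid, qty in cart])
conn.commit()

bridge = cur.execute(
    "SELECT orderid, albumid, quantity FROM order_bridge").fetchall()
print(order_id, bridge)
```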

insert with select, get inserted value

I am using a specific query to generate ids without autoincrement:
'insert into global_ids (`id`) select (((max(id)>>4)+1)<<4)+1 from global_ids'
CREATE TABLE `global_ids` (
`id` int(11) NOT NULL,
PRIMARY KEY (`id`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4 COLLATE=utf8mb4_unicode_ci;
I have a problem fetching the last inserted row id (because I am not using autoincrement, cur.lastrowid is None).
How can I fetch the inserted row id value in my case?
If you don't care about race conditions, you can get the highest id value in the table with SELECT MAX(id) AS id FROM global_ids.
But, you do care about race conditions. What happens if two copies of your python program are running? One may do an insert, then another may do an insert before the first one does the select I suggested. If that happens, the select will return a wrong value.
You could try
START TRANSACTION;
SELECT (((max(id)>>4)+1)<<4)+1 INTO @id FROM global_ids FOR UPDATE;
INSERT INTO global_ids (id) VALUES (@id);
COMMIT;
SELECT @id AS id;
to serialize your id generation / retrieval operation with transactions. (This won't work in MyISAM tables, because they ignore transactions.)
But your best bet for avoiding race-condition hassles in MySQL is to actually use the auto-increment feature. You could try this:
CREATE TABLE global_ids (
id INT NOT NULL AUTO_INCREMENT,
PRIMARY KEY (`id`)
)
Then issue these queries one after the other:
INSERT INTO global_ids () VALUES ();
SELECT (((LAST_INSERT_ID()>>4)+1)<<4)+1 AS id;
That will use the auto-incrementing feature to generate your transformed id values without running the risk of race conditions.
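The same two-query pattern from Python, sketched with sqlite3 for a self-contained example; with a MySQL driver, `cursor.lastrowid` returns LAST_INSERT_ID(), and here the transform is applied client-side rather than in a SELECT.

```python
import sqlite3

# Auto-increment generates the base value race-free; the shift transform
# from the question is then applied in Python. sqlite3 stands in for MySQL.
conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE global_ids (id INTEGER PRIMARY KEY AUTOINCREMENT)")

def next_global_id(cur):
    cur.execute("INSERT INTO global_ids DEFAULT VALUES")
    n = cur.lastrowid            # scoped to this connection, so race-free
    return (((n >> 4) + 1) << 4) + 1

gid = next_global_id(cur)        # first insert: lastrowid 1
print(gid)                       # -> 17
```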
