INSERT OR REPLACE adds a whole new row (SQLite, Python)

I'm creating a little stock tracking project using Flask. I have generally used an ORM, but was trying vanilla SQLite on this.
I have 3 tables: users, stocks, and a join table, stocks_users, that has columns for user_id, symbol, and quantity.
db.execute(f"""
INSERT OR REPLACE INTO user_stocks(user_id, stock_id, quantity)
VALUES (
(SELECT 1 FROM users WHERE id = '{user_id}'),
(SELECT 1 FROM stocks WHERE symbol = '{symbol}'),
{shares}
)
""")
I am running this trying to insert a row if it does not exist, but if a row has the matching user_id and symbol (primary key for the users and stocks tables respectively) I would just like to update the quantity.
Any direction on this would be much appreciated.
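For direction: SQLite (3.24 and later) has an upsert clause, INSERT ... ON CONFLICT ... DO UPDATE, which does exactly "insert if missing, otherwise update the quantity", whereas INSERT OR REPLACE deletes and re-inserts the whole row. Below is a minimal sketch, not a definitive fix: it assumes a UNIQUE constraint on (user_id, stock_id), keeps the table and column names from the question, and binds parameters with ? placeholders instead of f-string interpolation.

import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE users  (id INTEGER PRIMARY KEY, username TEXT);
    CREATE TABLE stocks (id INTEGER PRIMARY KEY, symbol TEXT UNIQUE);
    CREATE TABLE user_stocks (
        user_id  INTEGER REFERENCES users(id),
        stock_id INTEGER REFERENCES stocks(id),
        quantity INTEGER NOT NULL,
        UNIQUE (user_id, stock_id)       -- the key the upsert conflicts on
    );
    INSERT INTO users  (username) VALUES ('alice');
    INSERT INTO stocks (symbol)   VALUES ('AAPL');
""")

def upsert_holding(user_id, symbol, shares):
    # Bound parameters (?) instead of f-strings, so values are escaped safely.
    con.execute("""
        INSERT INTO user_stocks (user_id, stock_id, quantity)
        VALUES (?, (SELECT id FROM stocks WHERE symbol = ?), ?)
        ON CONFLICT (user_id, stock_id)
        DO UPDATE SET quantity = excluded.quantity
    """, (user_id, symbol, shares))
    con.commit()

upsert_holding(1, 'AAPL', 10)   # no matching row yet: inserts (1, 1, 10)
upsert_holding(1, 'AAPL', 25)   # row exists: only quantity is updated
print(con.execute("SELECT * FROM user_stocks").fetchall())  # [(1, 1, 25)]

If the SQLite bundled with your Python is older than 3.24, the INSERT OR IGNORE plus UNIQUE-constraint pattern shown in the answers further down works too.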

Related

SQL database with a column being a list or a set

With a SQL database (in my case Sqlite, using Python), what is a standard way to have a column which is a set of elements?
id name items_set
1 Foo apples,oranges,tomatoes,ananas
2 Bar tomatoes,bananas
...
A simple implementation is using
CREATE TABLE data(id int, name text, items_set text);
but there are a few drawbacks:
to query all rows that have ananas, we have to use items_set LIKE '%ananas%' plus some tricks with separators to avoid a query for "ananas" also returning rows with "bananas", etc.
when we insert a new item in one row, we have to load the whole items_set, and see if the item is already in the list or not, before concatenating ,newitem at the end.
etc.
There is surely better, what is a standard SQL solution for a column which is a list or set?
Note: I don't know in advance all the possible values for the set/list.
I can see a solution with a few additional tables, but in my tests, it multiplies the size on disk by a factor x2 or x3, which is a problem with many gigabytes of data.
Is there a better solution?
To have a well-structured SQL database, you should extract the items into their own table and use a join table between the main table and the items table.
I'm not familiar with the SQLite syntax, but you should be able to create the tables with (using INTEGER PRIMARY KEY so the ids are auto-assigned):
CREATE TABLE entities(id INTEGER PRIMARY KEY, name TEXT);
CREATE TABLE entity_items(entity_id INTEGER, item_id INTEGER);
CREATE TABLE items(id INTEGER PRIMARY KEY, name TEXT);
add data
INSERT INTO entities (name) VALUES ('Foo'), ('Bar');
INSERT INTO items (name) VALUES ('tomatoes'), ('ananas'), ('bananas');
INSERT INTO entity_items (entity_id, item_id) VALUES (
(SELECT id from entities WHERE name='Foo'),
(SELECT id from items WHERE name='bananas')
);
query data
SELECT * FROM entities
LEFT JOIN entity_items
ON entities.id = entity_items.entity_id
LEFT JOIN items
ON items.id = entity_items.item_id
WHERE items.name = 'bananas';
You probably have two options. The standard, more conventional approach is a many-to-many relationship: three tables, for example Employees, Projects, and ProjectEmployees, where the latter describes the many-to-many relationship (each employee can work on multiple projects, each project has a team).
Having a set in a single value denormalizes the table and will complicate things either way. The other option is to use the JSON format and the JSON functionality provided by SQLite. If your SQLite version is not recent, it may not have the JSON extension built in; you would then need to either update it (the best option) or load the JSON extension dynamically. I'm not sure whether you can do that with the SQLite copy supplied with Python.
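If you do keep the set in a single column, here is a sketch of the JSON route, assuming the SQLite bundled with your Python includes the JSON1 functions (json_each in particular); the table and item names mirror the question:

import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE data (id INTEGER PRIMARY KEY, name TEXT, items_set TEXT)")
con.executemany(
    "INSERT INTO data (name, items_set) VALUES (?, ?)",
    [("Foo", '["apples","oranges","tomatoes","ananas"]'),
     ("Bar", '["tomatoes","bananas"]')],
)

# Membership test without LIKE/separator tricks: json_each expands the array
# and whole values are compared, so 'ananas' never matches 'bananas'.
rows = con.execute("""
    SELECT data.id, data.name
    FROM data, json_each(data.items_set)
    WHERE json_each.value = ?
""", ("ananas",)).fetchall()
print(rows)  # [(1, 'Foo')]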
To elaborate on what @ussu said, ideally your table would have one row per thing & item pair, using IDs instead of names:
id thing_id item_id
1 1 1
2 1 2
3 1 3
4 1 4
5 2 3
6 2 5
Then look-up tables for the thing and item names:
id name
1 Foo
2 Bar
id name
1 apples
2 oranges
3 tomatoes
4 ananas
5 bananas
In MySQL, you have the SET type.
Creation:
CREATE TABLE myset (col SET('a', 'b', 'c', 'd'));
Select:
mysql> SELECT * FROM tbl_name WHERE FIND_IN_SET('value',set_col)>0;
mysql> SELECT * FROM tbl_name WHERE set_col LIKE '%value%';
Insertion:
INSERT INTO myset (col) VALUES ('a,d'), ('d,a'), ('a,d,a'), ('a,d,d'), ('d,a,d');

How to get a true/false response from sqlite [duplicate]

I have an SQLite database. I am trying to insert values (users_id, lessoninfo_id) into the table bookmarks, but only if both do not already exist together in a row.
INSERT INTO bookmarks(users_id,lessoninfo_id)
VALUES(
(SELECT _id FROM Users WHERE User='"+$('#user_lesson').html()+"'),
(SELECT _id FROM lessoninfo
WHERE Lesson="+lesson_no+" AND cast(starttime AS int)="+Math.floor(result_set.rows.item(markerCount-1).starttime)+")
WHERE NOT EXISTS (
SELECT users_id,lessoninfo_id from bookmarks
WHERE users_id=(SELECT _id FROM Users
WHERE User='"+$('#user_lesson').html()+"') AND lessoninfo_id=(
SELECT _id FROM lessoninfo
WHERE Lesson="+lesson_no+")))
This gives an error saying:
db error near where syntax.
If you never want to have duplicates, you should declare this as a table constraint:
CREATE TABLE bookmarks(
users_id INTEGER,
lessoninfo_id INTEGER,
UNIQUE(users_id, lessoninfo_id)
);
(A primary key over both columns would have the same effect.)
It is then possible to tell the database that you want to silently ignore records that would violate such a constraint:
INSERT OR IGNORE INTO bookmarks(users_id, lessoninfo_id) VALUES(123, 456)
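To get the true/false the question title asks about from Python, one option is to check how many rows the statement actually changed. A sketch, assuming the sqlite3 module and the bookmarks table declared above; cursor.rowcount should be 1 when the row was inserted and 0 when the UNIQUE constraint made the insert a no-op:

import sqlite3

con = sqlite3.connect(":memory:")
con.execute("""
    CREATE TABLE bookmarks (
        users_id      INTEGER,
        lessoninfo_id INTEGER,
        UNIQUE (users_id, lessoninfo_id)
    )
""")

def add_bookmark(users_id, lessoninfo_id):
    cur = con.execute(
        "INSERT OR IGNORE INTO bookmarks (users_id, lessoninfo_id) VALUES (?, ?)",
        (users_id, lessoninfo_id),
    )
    con.commit()
    return cur.rowcount == 1   # True if inserted, False if it already existed

print(add_bookmark(123, 456))  # True  (new row)
print(add_bookmark(123, 456))  # False (duplicate ignored)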
If you have a table called memos that has two columns, id and text, you should be able to do it like this:
INSERT INTO memos(id,text)
SELECT 5, 'text to insert'
WHERE NOT EXISTS(SELECT 1 FROM memos WHERE id = 5 AND text = 'text to insert');
If the table already contains a row where text is equal to 'text to insert' and id is equal to 5, then the insert operation will be ignored.
I don't know if this will work for your particular query, but perhaps it gives you a hint on how to proceed.
I would advise that you instead design your table so that no duplicates are allowed, as explained in @CL's answer.
For a unique column, use this:
INSERT OR REPLACE INTO tableName (...) values(...);
For more information, see: sqlite.org/lang_insert
insert into bookmarks (users_id, lessoninfo_id)
select 1, 167
EXCEPT
select users_id, lessoninfo_id
from bookmarks
where users_id=1
and lessoninfo_id=167;
This is the fastest way.
For some other SQL engines, you can use a Dummy table containing 1 record.
e.g.:
select 1, 167 from ONE_RECORD_DUMMY_TABLE

How can I check if data exists in a table and update a value, or insert the data if it does not, using MariaDB?

I have 2 tables. First table contains some products and the second table is used for temporary data storage. Both tables have the same column names.
Table `products` contains these columns:
- id (unique, autoincrement)
- name
- quantity
- price
- group
Table `temp_stor` contains these columns:
- id (unique, autoincrement)
- name
- quantity
- price
- group
I want to take one row (name, quantity, price, group) from the first table and insert it into the second table if the data does not exist. If the same data exists in temp_stor, I want to update only one column (quantity).
For example:
I take from products the following line ('cola','1','2.5','soda'), and I want to check temp_stor to see if the line exists. The temp_stor table looks like this:
('milk 1L','1','1.5','milks')
('cola','1','2.5','soda')
('bread','1','0.9','pastry')
('7up','1','2.8','soda')
We see the second line exists, and I want to update its quantity. The table will then look like this:
('milk 1L','1','1.5','milks')
('cola','2','2.5','soda')
('bread','1','0.9','pastry')
('7up','1','2.8','soda')
If the table looks like this:
('milk 1L','1','1.5','milks')
('bread','1','0.9','pastry')
('7up','1','2.8','soda')
I want to insert the line into the table. So it would look like this:
('milk 1L','1','1.5','milks')
('bread','1','0.9','pastry')
('7up','1','2.8','soda')
('cola','1','2.5','soda')
Is it possible to do this through a SQL query? I need to implement it in Python code. The Python part I can handle, but I'm not that good at SQL.
Thank you
UPDATE 1:
I forgot to specify maybe the most important thing, and this is my fault. I want to check for the existence of a product name inside the temp_stor table; only the name should be unique. If the product exists, I want to update its quantity value only; if the product doesn't exist, I want to insert it into the temp_stor table.
Assuming that "if the data does not exist" means "if the combo of name+quantity+price+group is not already there"?
Add
UNIQUE(name, quantity, price, `group`)
Then
INSERT INTO products
(name, quantity, price, `group`)
SELECT name, quantity, price, `group` FROM temp_stor;
Minor(?) drawback: This will (I think) 'burn' ids. In your example, it will allocate 4 new values of id, but use only one of them.
After the update to the question
Do not have the index above; instead, have this. (I assume the product name is stored in the `name` column?)
UNIQUE(name)
Then...
INSERT INTO products
(name, quantity, price, `group`)
SELECT name, quantity, price, `group` FROM temp_stor
ON DUPLICATE KEY -- this will notice UNIQUE(name)
UPDATE
quantity = VALUES(quantity);
It is unusual to use this construct without also UPDATEing the other columns (price, group). What if temp_stor has a different value for price or group?
Take a look at How to connect Python programs to MariaDB to see how to connect to your DB.
After that you can select from temp_stor the row with the same id as the row you have obtained from products. Let row be the tuple of values you obtained.
cursor = mariadb_connection.cursor()
cursor.execute("SELECT * FROM temp_stor WHERE id=%s", (some_id,))
If the result of this query contains nothing, indicating that there is no such row, you can proceed to insert it, otherwise, update the row.
if cursor.fetchone() is None:
    try:
        cursor.execute("INSERT INTO temp_stor VALUES (%s,%s,%s,%s,%s)", row)
    except mariadb.Error as error:
        print("Error: {}".format(error))
else:
    cursor.execute("""
        UPDATE temp_stor
        SET id=%s, name=%s, quantity=%s, price=%s, `group`=%s
        WHERE id=%s
        """, (row[0], row[1], row[2], row[3], row[4], row[0]))
Update:
To perform something similar with just one query:
INSERT INTO temp_stor (id, name, quantity, price, `group`) VALUES (1, 'cola', '1', '2.5', 'soda') ON DUPLICATE KEY UPDATE quantity='1'
Here I am assuming that 1 is the id and "cola" is the name. If you want name to be unique, you should make that the key, because this query currently only compares keys, which in this case seems to be id.
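To put the updated requirement (name is the unique key, and only quantity changes) into a single statement from Python, here is a sketch; it assumes a UNIQUE index on temp_stor(name) and the mariadb connector with placeholder credentials, and it accumulates quantity so the cola row goes from 1 to 2 as in the example:

import mariadb  # any MySQL/MariaDB DB-API driver works the same way

# Placeholder connection details; adjust to your setup.
conn = mariadb.connect(user="user", password="pass", host="localhost", database="shop")
cur = conn.cursor()

# Requires: ALTER TABLE temp_stor ADD UNIQUE (name);
# One round trip: insert the product, or bump its quantity if the name already exists.
cur.execute("""
    INSERT INTO temp_stor (name, quantity, price, `group`)
    VALUES (%s, %s, %s, %s)
    ON DUPLICATE KEY UPDATE quantity = quantity + VALUES(quantity)
""", ("cola", 1, 2.5, "soda"))
conn.commit()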

MySQL Python, filter row and delete previous row

I'm stuck doing a script in Python.
I have a table in MySQL like this:
I want to filter on the category type (B, for example) and delete the previous row above it. I'm using MySQLdb in Python and practicing with selects and other operators, but I don't know how to do it.
Can someone help me?
delete from Table
where id = (
    select id from (
        select id from Table
        where CATEGORY = 'B'
          and SALARY < (select SALARY from Table where CATEGORY = 'B' order by SALARY desc limit 1)
        limit 1
    ) as x
);

How do I create an auto-updating SQLite table?

I am using a Python script to create and maintain an SQLite database for the anime shows I watch, to help me keep better track of them and of the episodes I have and still need to get.
I use the script to create a table for each series, e.g. Bleach, Black Lagoon..., and in each of these tables the following information is stored:
Series table:
Season_Num # Unique season number
I_Have_Season # yes or no to say I have a directory for that season
Season_Episodes # Amount of episodes according to the TVDB that are in that season
Episodes_I_Have # The number of episodes I have for that season
The same table is created for every series I have, with a row for each season in that series; that seems to work fine.
Now what I'm trying to do is create a summary table which takes the information from the tables for each series and gathers it into just one table with all the information I need. It has the following information:
Summary table:
Series # Unique Series name
Alt_Name # Alternate name (The series name in english)
Special_Eps # The amount of Special episodes (Season 0 in the series table)
Special_Eps_Me # The number of Special Episodes I have
Tot_Ses # The total count of the Seasons (excluding season 0)
Tot_Ses_Me # The total count of Seasons that have yes in I_Have_Season column
Tot_Episodes # Total Episodes excluding season 0 episodes
Tot_Eps_Me # Total Episodes I have excluding season 0 episodes
I think what I want to do can be done using triggers, but I am unsure how to implement them so that the summary table will automatically update if, for example, a new season is added to a series table or the values of a series table are changed.
UPDATE:
Fabian's idea of a view instead of a table sounds, after some more thought and research, like it could be what I want, but if possible I would like to keep each series separate in its own table for updating, instead of having just one table with every series and every season mixed in.
UPDATE 2:
I have gone ahead and put in the triggers for INSERT, UPDATE and DELETE. I added them in the initial create loop of my script using variables for the table names, and the summary table appears to be updating fine (after fixing how some of its values were calculated). I will test it further and hopefully it will keep working. Now I just need to get my script to add and delete tables for new series and for series I delete.
This could be achieved using triggers. But this kind of thing is usually better done declaratively, using a view:
For example,
create table series (
series_name,
alt_name,
special_eps,
special_eps_me,
primary key(series_name)
);
create table seasons (
series_name,
season_num,
i_have_season,
episodes,
episodes_i_have,
primary key (series_name,season_num),
foreign key (series_name) references series (series_name),
check (i_have_season in ('F','T'))
);
create view everything_with_counts as
select series_name,
alt_name,
special_eps,
special_eps_me,
(select count(*) from seasons where seasons.series_name = series.series_name) as tot_ses,
(select count(*) from seasons where seasons.series_name = series.series_name and i_have_season = 'T') as tot_ses_me,
(select sum(episodes) from seasons where seasons.series_name = series.series_name) as tot_episodes,
(select sum(episodes_i_have) from seasons where seasons.series_name = series.series_name and i_have_season = 'T') as tot_episodes_me
from series;
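To see that the view needs no maintenance of its own, here is a condensed sketch (only two of the counters, run with Python's sqlite3 module): the counts change as soon as the season rows change, because the view is recomputed on every query.

import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE series  (series_name PRIMARY KEY, alt_name);
    CREATE TABLE seasons (series_name, season_num, i_have_season, episodes,
                          episodes_i_have,
                          PRIMARY KEY (series_name, season_num));
    CREATE VIEW series_counts AS
    SELECT series_name,
           (SELECT count(*)      FROM seasons s WHERE s.series_name = series.series_name) AS tot_ses,
           (SELECT sum(episodes) FROM seasons s WHERE s.series_name = series.series_name) AS tot_episodes
    FROM series;
""")

con.execute("INSERT INTO series  VALUES ('Bleach', 'Bleach')")
con.execute("INSERT INTO seasons VALUES ('Bleach', 1, 'T', 20, 20)")
print(con.execute("SELECT * FROM series_counts").fetchall())  # [('Bleach', 1, 20)]

con.execute("INSERT INTO seasons VALUES ('Bleach', 2, 'F', 24, 0)")
print(con.execute("SELECT * FROM series_counts").fetchall())  # [('Bleach', 2, 44)]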
EDIT
Since you want to stick to the trigger design: Assuming you have your series tables like this:
create table series_a (
season_num,
i_have_season,
episodes,
episodes_i_have
);
create table series_b (
season_num,
i_have_season,
episodes,
episodes_i_have
);
and so on, and your summary table like this:
create table summary (
series_name,
alt_name,
special_eps,
special_eps_me,
tot_ses,
tot_ses_me,
tot_episodes,
tot_episodes_me,
primary key(series_name));
You have to create three triggers (insert, update, delete) for each series table, e.g.:
create trigger series_a_ins after insert on series_a
begin
update summary set tot_ses = (select count(*) from series_a ),
tot_ses_me = (select count(*) from series_a where i_have_season = 'T'),
tot_episodes = (select sum(episodes) from series_a ),
tot_episodes_me = (select sum(episodes_i_have) from series_a where i_have_season = 'T')
where series_name = 'a';
end;
/* create trigger series_a_upd after update on series_a ... */
/* create trigger series_a_del after delete on series_a ... */
With this version, you have to add your summary entry manually in the summary table, and the counters get updated automatically afterwards when you modify your series_... tables.
You could also use INSERT OR REPLACE (see the documentation) to create summary entries on demand.
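Since the trigger bodies differ only by table name and series name, they can be generated from the script's create loop, as described in UPDATE 2 above. A sketch, assuming the summary table and the per-series tables already exist and the summary row for each series has been inserted; the database file, series list and naming scheme are placeholders:

import sqlite3

con = sqlite3.connect("anime.db")  # placeholder file name

def make_triggers(series_table, series_name):
    # Table names cannot be bound as parameters, so they are interpolated here;
    # this is safe only because the names come from the script, not from user input.
    body = f"""
        UPDATE summary SET
            tot_ses         = (SELECT count(*)             FROM {series_table}),
            tot_ses_me      = (SELECT count(*)             FROM {series_table} WHERE i_have_season = 'T'),
            tot_episodes    = (SELECT sum(episodes)        FROM {series_table}),
            tot_episodes_me = (SELECT sum(episodes_i_have) FROM {series_table} WHERE i_have_season = 'T')
        WHERE series_name = '{series_name}';
    """
    # One trigger per event, so the counters stay current after any change.
    for event in ("INSERT", "UPDATE", "DELETE"):
        con.execute(f"""
            CREATE TRIGGER IF NOT EXISTS {series_table}_{event.lower()}
            AFTER {event} ON {series_table}
            BEGIN
                {body}
            END;
        """)

for table, name in [("series_a", "a"), ("series_b", "b")]:  # placeholder series list
    make_triggers(table, name)
con.commit()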
