Our IDs look something like "CS0000001", which stands for Customer with ID 1. Is this possible with SQL and AUTO_INCREMENT, or do I need to do it in my GUI?
I need the leading zeroes, but also auto-incrementing to prevent duplicate IDs if I am constructing the ID in Python and inserting it into the DB.
Is that possible?
You have a few choices:

- Construct the CustomerID in the code that inserts the data into the Customer table (= application side, requires a change in your code)
- Create a view on top of the Customer table that contains the logic, and use that when you need the CustomerID (= database side, requires a change in your code)
- Use a stored procedure to do the inserts and construct the CustomerID in the procedure (= database side, requires a change in your code)
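A minimal sketch of the first (application-side) option, assuming the numeric part comes from the table's AUTO_INCREMENT key retrieved after the insert; `format_customer_id` is a hypothetical helper that mirrors MySQL's `CONCAT('CS', LPAD(n, 7, '0'))`:

```python
def format_customer_id(n: int) -> str:
    # Zero-pad the number to 7 digits and prefix it with 'CS',
    # mirroring CONCAT('CS', LPAD(n, 7, '0')) in MySQL.
    return f"CS{n:07d}"

print(format_customer_id(1))    # CS0000001
print(format_customer_id(123))  # CS0000123
```

You would call this with the value returned by `cursor.lastrowid` (or `LAST_INSERT_ID()`) after inserting the row.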
One possible realization:
Create data table
CREATE TABLE data (id CHAR(9) NOT NULL DEFAULT '',
val TEXT,
PRIMARY KEY (id));
Create service table
CREATE TABLE ids (id INT NOT NULL AUTO_INCREMENT PRIMARY KEY);
Create trigger which generates id value
CREATE TRIGGER tr_bi_data
BEFORE INSERT
ON data
FOR EACH ROW
BEGIN
INSERT INTO ids () VALUES ();
SET NEW.id = CONCAT('CS', LPAD(LAST_INSERT_ID(), 7, '0'));
DELETE FROM ids;
END
Create trigger which prohibits id value change
CREATE TRIGGER tr_bu_data
BEFORE UPDATE
ON data
FOR EACH ROW
BEGIN
SET NEW.id = OLD.id;
END
Insert some data, check result
INSERT INTO data (val) VALUES ('data-1'), ('data-2');
SELECT * FROM data;
id | val
:-------- | :-----
CS0000001 | data-1
CS0000002 | data-2
Try to update, ensure the id change is prohibited
UPDATE data SET id = 'CS0000100' WHERE val = 'data-1';
SELECT * FROM data;
id | val
:-------- | :-----
CS0000001 | data-1
CS0000002 | data-2
Insert more rows, ensure the enumeration continues
INSERT INTO data (val) VALUES ('data-3'), ('data-4');
SELECT * FROM data;
id | val
:-------- | :-----
CS0000001 | data-1
CS0000002 | data-2
CS0000003 | data-3
CS0000004 | data-4
Check that the service table is successfully cleared
SELECT COUNT(*) FROM ids;
| COUNT(*) |
| -------: |
| 0 |
Disadvantages:

- An additional table is needed.
- Editing the generated id value is disabled (to change an id, copy the record and delete the old one; a custom value cannot be set).
Recently we migrated our production DB to Amazon RDS, with a version upgrade from 5.5 to 5.7, using the AWS DMS service. Since then we have frequently been getting deadlocks on our INSERT ... ON DUPLICATE KEY UPDATE queries and UPDATE queries, whereas on MySQL 5.5 they were very rare.
For example, one of our tables is structured as follows.
CREATE TABLE `job_notification` (
  `id` int(11) NOT NULL AUTO_INCREMENT,
  `uid` int(11) NOT NULL,
  `job_id` int(11) NOT NULL,
  `created_time` int(11) NOT NULL,
  `updated_time` int(11) NOT NULL,
  `notify_status` tinyint(3) DEFAULT '0',
  PRIMARY KEY (`id`),
  UNIQUE KEY `uid` (`uid`,`job_id`)
) ENGINE=InnoDB AUTO_INCREMENT=58303732 DEFAULT CHARSET=utf8 COLLATE=utf8_bin;
Our insert query is as follows...
INSERT INTO job_notification (uid, notify_status, updated_time, created_time, job_id) VALUES
('24832194',1,1571900253,1571900253,'734749'),
('24832194',1,1571900254,1571900254,'729161'),
('24832194',1,1571900255,1571900255,'713225'),
('24832194',1,1571900256,1571900256,'701897'),
('24832194',1,1571900257,1571900257,'682155'),
('24832194',1,1571900258,1571900258,'730817'),
('24832194',1,1571900259,1571900259,'717162'),
('24832194',1,1571900260,1571900260,'712884'),
('24832194',1,1571900261,1571900261,'708267'),
('24832194',1,1571900262,1571900262,'701855'),
('24832194',1,1571900263,1571900263,'702129'),
('24832194',1,1571900264,1571900264,'726738'),
('24832194',1,1571900265,1571900265,'725105'),
('24832194',1,1571900266,1571900266,'709306'),
('24832194',1,1571900267,1571900267,'702218'),
('24832194',1,1571900268,1571900268,'700966'),
('24832194',1,1571900269,1571900269,'693848'),
('24832194',1,1571900270,1571900270,'730793'),
('24832194',1,1571900271,1571900271,'729352'),
('24832194',1,1571900272,1571900272,'729043'),
('24832194',1,1571900273,1571900273,'724631'),
('24832194',1,1571900274,1571900274,'718394'),
('24832194',1,1571900275,1571900275,'711702'),
('24832194',1,1571900276,1571900276,'707765'),
('24832194',1,1571900277,1571900277,'692288'),
('24832194',1,1571900278,1571900278,'735549'),
('24832194',1,1571900279,1571900279,'730786'),
('24832194',1,1571900280,1571900280,'706814'),
('24832194',1,1571900281,1571900281,'688999'),
('24832194',1,1571900282,1571900282,'685079'),
('24832194',1,1571900283,1571900283,'686661'),
('24832194',1,1571900284,1571900284,'722110'),
('24832194',1,1571900285,1571900285,'715277'),
('24832194',1,1571900286,1571900286,'701846'),
('24832194',1,1571900287,1571900287,'730105'),
('24832194',1,1571900288,1571900288,'725579')
ON DUPLICATE KEY UPDATE notify_status=VALUES(notify_status), updated_time=VALUES(updated_time)
Our update query is as follows...
update job_notification set notify_status = 3 where uid = 51032194 and job_id in (616661, 656221, 386760, 189461, 944509, 591552, 154153, 538703, 971923, 125080, 722110, 715277, 701846, 725579, 686661, 685079)
These queries were working fine in MySQL 5.5 with the same data volume and indexes, but after the migration deadlocks occur frequently for these kinds of queries...
NB: ours is a highly concurrent system.
innodb_deadlock_detect is disabled. innodb_lock_wait_timeout is 50.
innodb_buffer_pool_size is 50465865728
When we EXPLAINed the queries, we got a good execution plan. Still, we are getting frequent deadlocks, and because of this other queries are also slowed down.
Both queries are executed from different API threads (different connections)
using Python's MySQLdb package, with autocommit enabled in MySQL.
Explain Output
explain update job_notification SET notify_status = 3 where uid = 51032194 and job_id in (616661, 656221, 386760, 189461, 944509, 591552, 154153, 538703, 971923, 125080, 722110, 715277, 701846, 725579, 686661, 685079);
| id | select_type | table            | partitions | type  | possible_keys | key | key_len | ref         | rows | filtered | Extra       |
|---:|-------------|------------------|------------|-------|---------------|-----|--------:|-------------|-----:|---------:|-------------|
|  1 | UPDATE      | job_notification | NULL       | range | uid           | uid |       8 | const,const |   27 |   100.00 | Using where |
If this is primarily a many-to-many mapping table, get rid of id and follow the rest of the advice in http://mysql.rjweb.org/doc.php/index_cookbook_mysql#many_to_many_mapping_table
That will make the queries run faster on both systems. Faster = less chance of deadlock.
Let's see the deadlock; there could be other things going on. Run SHOW ENGINE INNODB STATUS; promptly after a deadlock occurs.
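Not part of the answer above, but a standard deadlock-avoidance technique worth trying with statements like these: build every multi-row statement so it touches rows in a consistent order (here, sorted by the UNIQUE KEY `(uid, job_id)`), so two concurrent writers never acquire the same row locks in opposite order. A minimal sketch, using a few of the sample rows from the question:

```python
# Rows in the layout used by the INSERT above:
# (uid, notify_status, updated_time, created_time, job_id)
rows = [
    (24832194, 1, 1571900253, 1571900253, 734749),
    (24832194, 1, 1571900254, 1571900254, 729161),
    (24832194, 1, 1571900255, 1571900255, 713225),
]

# Sort by the UNIQUE KEY (uid, job_id) so every writer locks
# rows in the same order.
rows.sort(key=lambda r: (r[0], r[4]))

# Do the same for the IN list of the UPDATE statement.
job_ids = sorted([616661, 656221, 386760, 189461])
placeholders = ", ".join(["%s"] * len(job_ids))
sql = ("UPDATE job_notification SET notify_status = 3 "
       "WHERE uid = %s AND job_id IN (" + placeholders + ")")

print([r[4] for r in rows])  # [713225, 729161, 734749]
print(job_ids)               # [189461, 386760, 616661, 656221]
```

This does not eliminate deadlocks caused by gap locks on the secondary index, but consistent ordering removes one common source of them.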
I am trying to create a new table that combines columns from two different tables.
Let's imagine then that I have a database named db.db that includes two tables named table1 and table2.
table1 looks like this:
id | item | price
:- | :--- | :----
1 | book | 20
2 | copy | 30
3 | pen | 10
and table2 like this (note that it contains duplicated rows):
id | item | color
:- | :--- | :----
1 | book | blue
2 | copy | red
3 | pen | red
1 | book | blue
2 | copy | red
3 | pen | red
Now I'm trying to create a new table named new_table that combines the price and color columns over the same ids, without duplicates. My code is the following (it obviously does not work, because of my poor SQL skills):
con = sqlite3.connect(":memory:")
cur = con.cursor()
cur.execute("CREATE TABLE new_table (id varchar, item integer, price integer, color integer)")
cur.execute("ATTACH DATABASE 'db.db' AS other;")
cur.execute("INSERT INTO new_table (id, item, price) SELECT * FROM other.table1")
cur.execute("UPDATE new_table SET color = (SELECT color FROM other.table2 WHERE distinct(id))")
con.commit()
I know there are multiple errors in the last line of code but I can't get my head around it. What would be your approach to this problem? Thanks!
Something like
CREATE TABLE new_table(id INTEGER, item TEXT, price INTEGER, color TEXT);
INSERT INTO new_table(id, item, price, color)
SELECT DISTINCT t1.id, t1.item, t1.price, t2.color
FROM table1 AS t1
JOIN table2 AS t2 ON t1.id = t2.id;
Note the fixed column types; yours were all sorts of strange. item and color as integers?
If each id value is unique in the new table (only one row will ever have an id of 1, only one will be 2, and so on), that column should probably be an INTEGER PRIMARY KEY, too.
EDIT: Also, since you're creating this table in an in-memory database from tables from an attached file-based database... maybe you want a temporary table instead? Or a view might be more appropriate? Not sure what your goal is.
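For completeness, here is a minimal, self-contained sqlite3 sketch of the approach above. It rebuilds the two example tables directly in the in-memory database (rather than attaching db.db), with the contents taken from the question:

```python
import sqlite3

con = sqlite3.connect(":memory:")
cur = con.cursor()

# Recreate the two source tables from the question; normally they
# would live in the attached db.db file.
cur.execute("CREATE TABLE table1 (id INTEGER, item TEXT, price INTEGER)")
cur.execute("CREATE TABLE table2 (id INTEGER, item TEXT, color TEXT)")
cur.executemany("INSERT INTO table1 VALUES (?, ?, ?)",
                [(1, "book", 20), (2, "copy", 30), (3, "pen", 10)])
cur.executemany("INSERT INTO table2 VALUES (?, ?, ?)",
                [(1, "book", "blue"), (2, "copy", "red"), (3, "pen", "red"),
                 (1, "book", "blue"), (2, "copy", "red"), (3, "pen", "red")])

# Combine price and color; DISTINCT collapses the duplicate rows
# coming from table2.
cur.execute("CREATE TABLE new_table "
            "(id INTEGER, item TEXT, price INTEGER, color TEXT)")
cur.execute("""
    INSERT INTO new_table (id, item, price, color)
    SELECT DISTINCT t1.id, t1.item, t1.price, t2.color
    FROM table1 AS t1
    JOIN table2 AS t2 ON t1.id = t2.id
""")
con.commit()

print(cur.execute("SELECT * FROM new_table ORDER BY id").fetchall())
# [(1, 'book', 20, 'blue'), (2, 'copy', 30, 'red'), (3, 'pen', 10, 'red')]
```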