sql replace multiple IDs with strings from another table - python

I have a SQLite database containing four tables:
CREATE TABLE IF NOT EXISTS EVENTS_LIST
(ID INTEGER PRIMARY KEY AUTOINCREMENT NOT NULL,
DATE TEXT,
WORKER INTEGER,
EVENT INTEGER,
REGISTRATION_METHOD INTEGER)
CREATE TABLE IF NOT EXISTS WORKERS
(ID INTEGER PRIMARY KEY AUTOINCREMENT NOT NULL,
WORKER TEXT)
CREATE TABLE IF NOT EXISTS EVENTS
(ID INTEGER PRIMARY KEY AUTOINCREMENT NOT NULL,
EVENT TEXT)
CREATE TABLE IF NOT EXISTS REGISTRATION
(ID INTEGER PRIMARY KEY AUTOINCREMENT NOT NULL,
REGISTRATION_METHOD TEXT)
I need the WORKER, EVENT and REGISTRATION_METHOD IDs replaced with the name attached to each ID in the corresponding table.
Example:
3 Workers: 0001) John 0002) Tom 0003) Mike
3 Types of registration: 0) PIN 1) Fingerprint 2) NFC Card
3 Types of events: 1) came in 2) came out 3) came out on business
I get:
DATE | WORKER | EVENT | REGISTRATION
XXXX | 0001 | 1 | 0
XXXX | 0003 | 2 | 1
I need:
DATE | WORKER | EVENT | REGISTRATION
XXXX | John | came in | PIN
XXXX | Mike | came out | Fingerprint
I found those solutions:
How to replace fetched values with another column while querying with SQL Server 2008 R2
Multiple column SQL joins in a table
The first link is very similar but covers only one column; the second link is more complicated but has a few LEFT OUTER JOIN commands, which seems to be the right direction.
Can anybody give me directions on how to accomplish this?

Use this code:
select el.DATE as DATE,
w.WORKER as WORKER,
e.EVENT as EVENT,
r.REGISTRATION_METHOD as REGISTRATION
from
EVENTS_LIST as el
join
WORKERS as w on el.WORKER=w.ID
join
EVENTS as e on el.EVENT=e.ID
join
REGISTRATION as r on el.REGISTRATION_METHOD=r.ID

This code is working for me:
select el.DATE,
w.WORKER,
e.EVENT,
r.REGISTRATION_METHOD
from
EVENTS_LIST as el
join
WORKERS as w on el.WORKER=w.ID
join
EVENTS as e on el.EVENT=e.ID
join
REGISTRATION as r on el.REGISTRATION_METHOD=r.ID
Thanks!
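For anyone running this from Python, a minimal sqlite3 sketch of the same query (the file name events.db is an assumption):
import sqlite3
con = sqlite3.connect("events.db")  # hypothetical database file
cur = con.cursor()
cur.execute("""
    SELECT el.DATE, w.WORKER, e.EVENT, r.REGISTRATION_METHOD
    FROM EVENTS_LIST AS el
    JOIN WORKERS AS w ON el.WORKER = w.ID
    JOIN EVENTS AS e ON el.EVENT = e.ID
    JOIN REGISTRATION AS r ON el.REGISTRATION_METHOD = r.ID
""")
for date, worker, event, registration in cur.fetchall():
    print(date, worker, event, registration)
con.close()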

Related

PyQt QSqlTableModel insertRecord problem in tables with an auto increment primary key

I have looked around a lot for a solution to my problem and haven't found anything. The problem is with inserting a new record into a database table that has an auto-increment primary key. Below are the different scenarios and behaviors that I'm getting, with sample code.
model = QSqlTableModel(db=db)
model.setTable('table')  # QSqlTableModel takes no table argument; it is set separately
model.select()
record = model.record()
record.setGenerated('id', False)
record.setValue('name', 'a_name')
model.insertRecord(-1, record)
Scenario 1: Pre-populated table with some rows.
| id | name |
|----|------|
| 1 | name1|
| 2 | name2|
behavior 1: insertRecord does not add a new record to the table. If I remove the setGenerated flag and manually specify an id [record.setValue('id', 3)], I can insert a new record.
Scenario 2: Pre-populated table with some rows, where the first id does not start at 1.
| id | name |
|----|------|
| 3 | name1|
| 4 | name2|
behavior 2: the first insertRecord adds a new record with id = 1, the second insertRecord adds another record with id = 2. After that, insertRecord does not add any new records: it tries to add a new record with id = 3, but id = 3 already exists, and it won't jump to the next available id (id = 5). Again, if I remove the setGenerated flag and manually specify the next available id [record.setValue('id', 5)], it works.
Scenario 3: Empty table without any rows.
| id | name |
|----|------|
behavior 3: insertRecord starts at id = 1 and continues to add new records with no problem.
How can I fix this problem without having to manually specify an auto incremented id?
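For reference, the manual workaround the question itself describes, written out as a sketch (PyQt5 names; db is assumed to be an open QSqlDatabase, and 'table' is the question's placeholder table name):
from PyQt5.QtSql import QSqlQuery, QSqlTableModel
model = QSqlTableModel(db=db)
model.setTable('table')
model.select()
# Look up the next free id manually, since insertRecord won't skip past existing ids.
query = QSqlQuery(db)
query.exec_("SELECT COALESCE(MAX(id), 0) + 1 FROM 'table'")
query.next()
record = model.record()
record.setValue('id', query.value(0))
record.setValue('name', 'a_name')
model.insertRecord(-1, record)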

SQL - Possible to Auto Increment Number but with leading zeros?

Our IDs look something like "CS0000001", which stands for Customer with the ID 1. Is this possible to do with SQL and auto-increment, or do I need to do that in my GUI?
I need the leading zeroes, but with auto-incrementing to prevent double usage if I construct the ID in Python and insert it into the DB.
Is that possible?
You have a few choices:
1. Construct the CustomerID in the code that inserts the data into the Customer table (application side, requires a change in your code; see the sketch below).
2. Create a view on top of the Customer table that contains the logic, and use that whenever you need the CustomerID (database side, requires a change in your code).
3. Use a procedure to do the inserts and construct the CustomerID in the procedure (database side, requires a change in your code).
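For the first, application-side option, a minimal Python sketch (sqlite3 here; the Customer table layout is an assumption): keep a plain integer auto-increment key so the database prevents double usage, and format the display ID from it.
import sqlite3
con = sqlite3.connect("customers.db")  # hypothetical file
cur = con.cursor()
cur.execute("CREATE TABLE IF NOT EXISTS Customer (id INTEGER PRIMARY KEY AUTOINCREMENT, name TEXT)")
cur.execute("INSERT INTO Customer (name) VALUES (?)", ("Alice",))
customer_id = "CS{:07d}".format(cur.lastrowid)  # e.g. CS0000001
con.commit()
Since the integer key is unique, the formatted string is unique as well.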
A possible database-side realization (MySQL syntax):
Create data table
CREATE TABLE data (id CHAR(9) NOT NULL DEFAULT '',
val TEXT,
PRIMARY KEY (id));
Create service table
CREATE TABLE ids (id INT NOT NULL AUTO_INCREMENT PRIMARY KEY);
Create trigger which generates id value
CREATE TRIGGER tr_bi_data
BEFORE INSERT
ON data
FOR EACH ROW
BEGIN
INSERT INTO ids () VALUES ();
SET NEW.id = CONCAT('CS', LPAD(LAST_INSERT_ID(), 7, '0'));
DELETE FROM ids;
END
Create trigger which prohibits id value change
CREATE TRIGGER tr_bu_data
BEFORE UPDATE
ON data
FOR EACH ROW
BEGIN
SET NEW.id = OLD.id;
END
Insert some data, check result
INSERT INTO data (val) VALUES ('data-1'), ('data-2');
SELECT * FROM data;
id | val
:-------- | :-----
CS0000001 | data-1
CS0000002 | data-2
Try to update, ensure id change prohibited
UPDATE data SET id = 'CS0000100' WHERE val = 'data-1';
SELECT * FROM data;
id | val
:-------- | :-----
CS0000001 | data-1
CS0000002 | data-2
Insert more data, ensure the enumeration continues
INSERT INTO data (val) VALUES ('data-3'), ('data-4');
SELECT * FROM data;
id | val
:-------- | :-----
CS0000001 | data-1
CS0000002 | data-2
CS0000003 | data-3
CS0000004 | data-4
Check that the service table was successfully cleared
SELECT COUNT(*) FROM ids;
| COUNT(*) |
| -------: |
| 0 |
Disadvantages:
Additional table needed.
Editing a generated id value is not possible (copy and delete the old record instead; a custom value cannot be set).

After upgrade from MySQL 5.5 to 5.7 queries are more frequently encountering deadlock

Recently we migrated our production DB to Amazon RDS, with a version upgrade from 5.5 to 5.7, using the AWS DMS service. Since then, we are frequently getting deadlocks for our insert ... on duplicate key update queries and update queries, whereas in MySQL 5.5 they were very rare.
For example, one of our table structures is as follows.
CREATE TABLE `job_notification` (
`id` int(11) NOT NULL AUTO_INCREMENT,
`uid` int(11) NOT NULL,
`job_id` int(11) NOT NULL,
`created_time` int(11) NOT NULL,
`updated_time` int(11) NOT NULL,
`notify_status` tinyint(3) DEFAULT '0',
PRIMARY KEY (`id`),
UNIQUE KEY `uid` (`uid`,`job_id`)
) ENGINE=InnoDB AUTO_INCREMENT=58303732 DEFAULT CHARSET=utf8 COLLATE=utf8_bin
Our insert query is as follows...
INSERT INTO job_notification (uid, notify_status, updated_time, created_time, job_id) VALUES
('24832194',1,1571900253,1571900253,'734749'),
('24832194',1,1571900254,1571900254,'729161'),
('24832194',1,1571900255,1571900255,'713225'),
('24832194',1,1571900256,1571900256,'701897'),
('24832194',1,1571900257,1571900257,'682155'),
('24832194',1,1571900258,1571900258,'730817'),
('24832194',1,1571900259,1571900259,'717162'),
('24832194',1,1571900260,1571900260,'712884'),
('24832194',1,1571900261,1571900261,'708267'),
('24832194',1,1571900262,1571900262,'701855'),
('24832194',1,1571900263,1571900263,'702129'),
('24832194',1,1571900264,1571900264,'726738'),
('24832194',1,1571900265,1571900265,'725105'),
('24832194',1,1571900266,1571900266,'709306'),
('24832194',1,1571900267,1571900267,'702218'),
('24832194',1,1571900268,1571900268,'700966'),
('24832194',1,1571900269,1571900269,'693848'),
('24832194',1,1571900270,1571900270,'730793'),
('24832194',1,1571900271,1571900271,'729352'),
('24832194',1,1571900272,1571900272,'729043'),
('24832194',1,1571900273,1571900273,'724631'),
('24832194',1,1571900274,1571900274,'718394'),
('24832194',1,1571900275,1571900275,'711702'),
('24832194',1,1571900276,1571900276,'707765'),
('24832194',1,1571900277,1571900277,'692288'),
('24832194',1,1571900278,1571900278,'735549'),
('24832194',1,1571900279,1571900279,'730786'),
('24832194',1,1571900280,1571900280,'706814'),
('24832194',1,1571900281,1571900281,'688999'),
('24832194',1,1571900282,1571900282,'685079'),
('24832194',1,1571900283,1571900283,'686661'),
('24832194',1,1571900284,1571900284,'722110'),
('24832194',1,1571900285,1571900285,'715277'),
('24832194',1,1571900286,1571900286,'701846'),
('24832194',1,1571900287,1571900287,'730105'),
('24832194',1,1571900288,1571900288,'725579')
ON DUPLICATE KEY UPDATE notify_status=VALUES(notify_status), updated_time=VALUES(updated_time)
Our update query is as follows...
update job_notification set notify_status = 3 where uid = 51032194 and job_id in (616661, 656221, 386760, 189461, 944509, 591552, 154153, 538703, 971923, 125080, 722110, 715277, 701846, 725579, 686661, 685079)
These queries were working fine in MySQL 5.5 with the same data packet sizes and indexes, but after the migration deadlocks frequently occur for these types of queries...
NB: Ours is a highly concurrent system.
innodb_deadlock_detect is disabled. innodb_lock_wait_timeout is 50.
innodb_buffer_pool_size is 50465865728.
When we explained the queries we got a good execution plan; still, we are getting frequent deadlocks, and because of them other queries are also slowing down.
Both queries are executed from different API threads (different connections) using Python's MySQLdb package, with autocommit enabled in the MySQL DB.
Explain Output
explain update job_notification SET notify_status = 3 where uid = 51032194 and job_id in (616661, 656221, 386760, 189461, 944509, 591552, 154153, 538703, 971923, 125080, 722110, 715277, 701846, 725579, 686661, 685079);
+----+-------------+------------------+------------+-------+---------------+------+---------+-------------+------+----------+-------------+
| id | select_type | table            | partitions | type  | possible_keys | key  | key_len | ref         | rows | filtered | Extra       |
+----+-------------+------------------+------------+-------+---------------+------+---------+-------------+------+----------+-------------+
|  1 | UPDATE      | job_notification | NULL       | range | uid           | uid  | 8       | const,const |   27 |   100.00 | Using where |
+----+-------------+------------------+------------+-------+---------------+------+---------+-------------+------+----------+-------------+
If this is primarily a many-to-many mapping table, get rid of id and follow the rest of the advice in http://mysql.rjweb.org/doc.php/index_cookbook_mysql#many_to_many_mapping_table
That will make queries run faster on both systems. Faster = less chance of deadlock.
Let's see the deadlock; there could be other things going on. Use SHOW ENGINE INNODB STATUS; promptly after a deadlock occurs.
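To capture that from the application side, a minimal sketch using Python's MySQLdb (connection details are placeholders; error code 1213 is ER_LOCK_DEADLOCK):
import MySQLdb
con = MySQLdb.connect(host="localhost", user="app", passwd="secret", db="prod")  # placeholders
con.autocommit(True)  # matches the setup described in the question
cur = con.cursor()
try:
    cur.execute(
        "UPDATE job_notification SET notify_status = 3 "
        "WHERE uid = %s AND job_id IN (616661, 656221)",
        (51032194,),
    )
except MySQLdb.OperationalError as e:
    if e.args[0] == 1213:  # deadlock: dump the latest deadlock report
        cur.execute("SHOW ENGINE INNODB STATUS")
        print(cur.fetchone()[2])  # third column holds the status text
    raise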

Create new SQLite table combining column from other tables with sqlite3 and python

I am trying to create a new table that combines columns from two different tables.
Let's imagine then that I have a database named db.db that includes two tables named table1 and table2.
table1 looks like this:
id | item | price
-------------
1 | book | 20
2 | copy | 30
3 | pen | 10
and table2 like this (note that it contains duplicated rows):
id | item | color
-------------
1 | book | blue
2 | copy | red
3 | pen | red
1 | book | blue
2 | copy | red
3 | pen | red
Now I'm trying to create a new table named new_table that combines the price and color columns over the same ids, without duplicates. My code is the following (it obviously does not work, because of my poor SQL skills):
import sqlite3
con = sqlite3.connect(":memory:")
cur = con.cursor()
cur.execute("CREATE TABLE new_table (id varchar, item integer, price integer, color integer)")
cur.execute("ATTACH DATABASE 'db.db' AS other;")
cur.execute("INSERT INTO new_table (id, item, price) SELECT * FROM other.table1")
cur.execute("UPDATE new_table SET color = (SELECT color FROM other.table2 WHERE distinct(id))")
con.commit()
I know there are multiple errors in the last line of code but I can't get my head around it. What would be your approach to this problem? Thanks!
Something like
CREATE TABLE new_table(id INTEGER, item TEXT, price INTEGER, color TEXT);
INSERT INTO new_table(id, item, price, color)
SELECT DISTINCT t1.id, t1.item, t1.price, t2.color
FROM table1 AS t1
JOIN table2 AS t2 ON t1.id = t2.id;
Note the fixed column types; yours were all sorts of strange. item and color as integers?
If each id value is unique in the new table (only one row will ever have an id of 1, only one an id of 2, and so on), that column should probably be an INTEGER PRIMARY KEY, too.
EDIT: Also, since you're creating this table in an in-memory database from tables from an attached file-based database... maybe you want a temporary table instead? Or a view might be more appropriate? Not sure what your goal is.
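Putting it together with the question's attached database, a minimal sketch (the other alias comes from the question's ATTACH statement):
import sqlite3
con = sqlite3.connect(":memory:")
cur = con.cursor()
cur.execute("ATTACH DATABASE 'db.db' AS other")
cur.execute("CREATE TABLE new_table (id INTEGER, item TEXT, price INTEGER, color TEXT)")
cur.execute("""
    INSERT INTO new_table (id, item, price, color)
    SELECT DISTINCT t1.id, t1.item, t1.price, t2.color
    FROM other.table1 AS t1
    JOIN other.table2 AS t2 ON t1.id = t2.id
""")
con.commit()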

Append sqlite3 data from csv to a table whose 1 column is - id INTEGER PRIMARY KEY AUTOINCREMENT

So I have a table whose first column, id, is an autoincrement primary key.
Now, suppose I have data in the table with ids 1, 2, 3,
and I also have some data in the CSV that starts with ids 1, 2, 3.
This is the code that I am trying to use:
cur.execute("CREATE TABLE IF NOT EXISTS sub_features (id INTEGER PRIMARY KEY AUTOINCREMENT,featureId INTEGER, name TEXT, FOREIGN KEY(featureId) REFERENCES features(id))")
df = pd.read_csv(csv_location+'/sub_features_table.csv')
df.to_sql("sub_features", con, if_exists='append', index=False)
I am getting this error:
sqlite3.IntegrityError: UNIQUE constraint failed: sub_features.id
How do I make sure that the data gets appended, the id gets set as required, and any row that is an exact duplicate gets ignored?
To explain further, say I have a table:
id | Name
1 | Abhisek
2 | Amit
And I am trying to import this csv to the same table:
id | Name
1 | Abhisek
2 | Rahul
Then my resultant table should be:
id | Name
1 | Abhisek
2 | Amit
3 | Rahul
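One possible approach, offered as a sketch rather than a definitive answer: drop the CSV's id column so that SQLite assigns fresh ids, and filter out rows that already exist before appending (the featureId/name columns follow the sub_features code above):
import pandas as pd
import sqlite3
con = sqlite3.connect("my.db")  # hypothetical path
df = pd.read_csv(csv_location + '/sub_features_table.csv')
df = df.drop(columns=['id'])  # let AUTOINCREMENT assign new ids
# Keep only rows whose (featureId, name) pair is not already in the table.
existing = pd.read_sql_query("SELECT featureId, name FROM sub_features", con)
merged = df.merge(existing, on=['featureId', 'name'], how='left', indicator=True)
new_rows = merged[merged['_merge'] == 'left_only'].drop(columns=['_merge'])
new_rows.to_sql("sub_features", con, if_exists='append', index=False)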
