I have a question about SQL, specifically SQLite3. I have two tables, let's call them main_table and temp_table. These tables share the same relational schema, so they have the same columns but different rows (values).
Now what I want to do:
For each row of main_table, I want to replace it if there is a row in temp_table with the same ID; otherwise I want to keep the old row.
I was thinking about using joins, but that does not give me what I want.
Could you give me some advice?
EDIT: ADDITIONAL INFO:
I would like to avoid writing out all the columns, because these tables contain tens of attributes, and since I have to update every column it shouldn't be necessary to list them all.
If the tables have the same structure, you can simply use SELECT *:
BEGIN;
DELETE FROM main_table
WHERE id IN (SELECT id
             FROM temp_table);
INSERT INTO main_table
SELECT * FROM temp_table;
COMMIT;
(This will also add any new rows in temp_table that did not previously exist in main_table.)
You have two approaches:
1. Update the existing rows in main_table with data from temp_table, matching on ID.
2. Add a column to temp_table to mark all rows that have to be transferred to main_table, or add an additional table to store the IDs that have to be transferred. Then delete the rows to be transferred from main_table and insert the corresponding rows from temp_table, using the marker column or the ID table.
I have looked at the documentation on the available statements but I have not seen any statement that will enable me to move deleted rows to another table.
Here is a snippet of SQL code:
CREATE TABLE %s;
INSERT INTO rm.table_access (%s) VALUES (%s);
DELETE FROM rm.table_access
Where (%s) LIKE 'HEARTBEAT' AND -7 AND -077 AND -77
OUTPUT Deleted.(%s) INTO test_tables;
Any ideas how to approach this? Is it even possible?
No, this is not possible in BigQuery. It has no implementation of the kind of virtual tables for modified and deleted rows that some traditional RDBMS have.
To implement something similar, you would need to select the rows to be deleted into a new table FIRST (using a CREATE TABLE AS SELECT statement), then delete them from the main table.
In your case, I would suggest you create a new table with the appropriate filters for each case. You can use CREATE TABLE dataset.newtable AS SELECT x FROM T; you can read more about it here. Your syntax would be:
CREATE TABLE `your_new_table` AS SELECT *
FROM `source_table`
WHERE VALUES LIKE 'HEARTBEAT' OR VALUES LIKE '-7' OR VALUES LIKE '-77'
I used single quotes in the WHERE clause because I assumed that the field you are filtering on is a String. Another option would be to use the function REGEXP_CONTAINS(); you can find out more about it here. Your filter syntax would then be simplified, as follows:
WHERE REGEXP_CONTAINS(VALUES,"HEARTBEAT|STRING|STRING")
Notice that the values you compare using the above method must be Strings, so you have to make sure of this conversion beforehand; you can use the CAST() function for that.
In addition, if you want to delete rows in your source table, you can use DELETE. The syntax is as below:
DELETE `source_table` WHERE field_1 = 'HEARTBEAT'
Notice that you would be deleting rows directly from your source table.
I hope it helps.
UPDATE
First option: create a new table with the rows you desire and another table with the "deleted" rows.
#Table with the rows which match the filter and will be "deleted"
#Notice that you have to provide the path to your table
#`project.dataset.your_new_table`
CREATE TABLE `your_new_table` AS
SELECT field1, field2 #select the columns you want
FROM `source_table`
WHERE field1 LIKE 'HEARTBEAT' OR field1 LIKE '-7' OR field1 LIKE '-77'
Now, get the rows which did not pass through the filter in the first step. They will make up the table with the desired rows, as below:
CREATE TABLE `table_desired_rows` AS
SELECT field1, field2 #select the columns you want
FROM `source_table`
WHERE field1 NOT LIKE 'HEARTBEAT'
AND field1 NOT LIKE '-7'
AND field1 NOT LIKE '-77'
Now you have your source table with raw data, another table with the desired rows and a table with the rows you ignored.
Second option:
If you do not need the raw data, you can modify the source table: first create a table with the ignored rows and then delete those rows from your source data.
#creating the table with the rows which will be deleted
#notice that you create a new table with these rows
CREATE TABLE `table_ignored_rows` AS
SELECT field1, field2 #select the columns you want
FROM `source_table`
WHERE field1 LIKE 'HEARTBEAT'
OR field1 LIKE '-7'
OR field1 LIKE '-77';
#now deleting the rows from the source table
DELETE `source_table` WHERE field1 LIKE 'HEARTBEAT'
OR field1 LIKE '-7'
OR field1 LIKE '-77';
I'm using the DataStax Python Driver for Apache Cassandra.
I want to read 100 rows from the database and then insert them again into the database after changing one value. I do not want to lose the previous records.
I know how to get my rows:
rows = session.execute('SELECT * FROM columnfamily LIMIT 100;')
for myrecord in rows:
    print(myrecord.timestamp)
I know how to insert new rows into database:
stmt = session.prepare('''
    INSERT INTO columnfamily (rowkey, qualifier, info, act_date, log_time)
    VALUES (?, ?, ?, ?, ?)
    IF NOT EXISTS
''')
results = session.execute(stmt, [arg1, arg2, ...])
My problems are:
I do not know how to change only one value in a row.
I don't know how to insert rows into the database without using CQL. My columnfamily has more than 150 columns, and writing all their names in a query does not seem like the best idea.
To conclude:
Is there a way to get rows, modify one value in each of them, and then insert these rows back into the database without using only CQL?
First, you need to select only the needed columns from Cassandra - it will be faster to transfer the data. You need to include all columns of the primary key plus the column that you want to change.
After you get the data, you can use UPDATE command to update only necessary column (example from documentation):
UPDATE cycling.cyclist_name
SET comments = 'Rides hard, gets along with others, a real winner'
WHERE id = fb372533-eb95-4bb4-8685-6ef61e994caa
You can also use a prepared statement to make it more performant...
But be careful - UPDATE & INSERT in CQL are really UPSERTs, so if you change columns that are part of the primary key, it will create a new entry...
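For illustration, here is a minimal sketch of that read-then-update loop with the DataStax Python driver; the contact point, keyspace, table name columnfamily, primary key column rowkey and the column info being changed are all assumptions standing in for your real schema:
from cassandra.cluster import Cluster

cluster = Cluster(['127.0.0.1'])                  # assumed contact point
session = cluster.connect('mykeyspace')           # assumed keyspace

# Select only the primary key plus the column that will be changed.
rows = session.execute('SELECT rowkey, info FROM columnfamily LIMIT 100')

# A prepared UPDATE touches just the one column; the other ~150 stay untouched.
update_stmt = session.prepare('UPDATE columnfamily SET info = ? WHERE rowkey = ?')

for row in rows:
    new_info = row.info + '_changed'              # whatever modification you need
    session.execute(update_stmt, [new_info, row.rowkey])

cluster.shutdown()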
I have a text file with around 6 million entries. I have to check them against a table and return all rows that are in the text file. For that purpose I want to use SELECT ... IN. Is it OK to put all of them into a single query and run it?
I am using MySQL.
You can create a temporary table in the database, insert the values into that table, and then perform the IN operation against it, as shown below.
SELECT field
FROM table
WHERE value IN (SELECT somevalue FROM sometable)
Thanks
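As a rough sketch of that approach in Python for the 6-million-entry case (assuming the mysql-connector-python package and placeholder names entries.txt, my_table and my_value), the values can be staged in batches instead of building one giant IN (...) list:
import mysql.connector

conn = mysql.connector.connect(host='localhost', user='user',
                               password='secret', database='mydb')
cur = conn.cursor()

# Stage the values from the text file in a temporary table.
cur.execute('CREATE TEMPORARY TABLE lookup_values ('
            'val VARCHAR(255) NOT NULL, PRIMARY KEY (val))')

insert_sql = 'INSERT IGNORE INTO lookup_values (val) VALUES (%s)'
with open('entries.txt') as f:
    batch = []
    for line in f:
        batch.append((line.strip(),))
        if len(batch) >= 10000:          # keep each INSERT statement reasonably sized
            cur.executemany(insert_sql, batch)
            batch = []
    if batch:
        cur.executemany(insert_sql, batch)
conn.commit()

# The check is now a single set-based query instead of a huge literal IN list.
cur.execute('SELECT t.* FROM my_table AS t '
            'WHERE t.my_value IN (SELECT val FROM lookup_values)')
matching_rows = cur.fetchall()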
I need to insert into three columns of a table in MySQL at a time. The first two columns are inserted by selecting data from other tables using a SELECT statement, while the third column needs to be inserted directly and does not need any SELECT. I don't know the syntax for this in MySQL. pos is an array and I need to insert it at the same time.
Here is my SQL command in Python:
sql="insert into quranic_index_2(quran_wordid,translationid,pos) select quranic_words.wordid,quran_english_translations.translationid from quranic_words, quran_english_translation where quranic_words.lemma=%s and quran_english_translations.verse_no=%s and
quran_english_translations.translatorid="%s,values(%s)"
data=l,words[2],var1,words[i+1]
r=cursor.execute(sql,data)
data is the tuple of variables being passed; words[i+1] holds the value for pos.
Try using the sample query below:
INSERT INTO table_name(field_1, field_2, field3) VALUES
('Value_1', (SELECT value_2 FROM user_table), 'value_3')
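Applied to the asker's Python snippet, that pattern (scalar subqueries for the first two columns plus a literal for the third) could look roughly like the sketch below; the WHERE clauses are taken from the original query, the table and column names are the asker's, and each subquery is assumed to return exactly one row:
sql = ('INSERT INTO quranic_index_2 (quran_wordid, translationid, pos) VALUES ('
       ' (SELECT wordid FROM quranic_words WHERE lemma = %s),'
       ' (SELECT translationid FROM quran_english_translations'
       '  WHERE verse_no = %s AND translatorid = %s),'
       ' %s)')
data = (l, words[2], var1, words[i + 1])
r = cursor.execute(sql, data)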
I'm trying to write a function for removing columns in SQLite (because sometimes I might want to delete columns which are too old).
From the SQLite FAQ:
SQLite has limited ALTER TABLE support that you can use to add a column to the end of a table or to change the name of a table. If you want to make more complex changes in the structure of a table, you will have to recreate the table. You can save existing data to a temporary table, drop the old table, create the new table, then copy the data back in from the temporary table.
For example, suppose you have a table named "t1" with column names "a", "b", and "c" and that you want to delete column "c" from this table. The following steps illustrate how this could be done:
BEGIN TRANSACTION;
CREATE TEMPORARY TABLE t1_backup(a,b);
INSERT INTO t1_backup SELECT a,b FROM t1;
DROP TABLE t1;
CREATE TABLE t1(a,b);
INSERT INTO t1 SELECT a,b FROM t1_backup;
DROP TABLE t1_backup;
COMMIT;
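To turn the FAQ recipe above into the kind of function the question asks for, here is a rough Python sketch using the standard sqlite3 module; it discovers the remaining columns with PRAGMA table_info, and it deliberately ignores indexes, triggers, foreign keys and identifier quoting, so treat it as a starting point rather than a complete solution:
import sqlite3

def drop_column(db_path, table, column_to_drop):
    """Rebuild the given table without the given column, following the FAQ steps."""
    conn = sqlite3.connect(db_path)
    try:
        # PRAGMA table_info rows are (cid, name, type, notnull, dflt_value, pk).
        info = conn.execute('PRAGMA table_info(%s)' % table).fetchall()
        kept = [(name, ctype) for _, name, ctype, *rest in info
                if name != column_to_drop]
        col_list = ', '.join(name for name, _ in kept)
        col_defs = ', '.join('%s %s' % (name, ctype) for name, ctype in kept)

        conn.executescript('''
            BEGIN TRANSACTION;
            CREATE TEMPORARY TABLE {t}_backup({defs});
            INSERT INTO {t}_backup SELECT {cols} FROM {t};
            DROP TABLE {t};
            CREATE TABLE {t}({defs});
            INSERT INTO {t} SELECT {cols} FROM {t}_backup;
            DROP TABLE {t}_backup;
            COMMIT;
        '''.format(t=table, defs=col_defs, cols=col_list))
    finally:
        conn.close()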