Update a row with a specific id - python

id is the first column of my SQLite table.
row is a list or tuple with the updated content, with the columns in the same order as in the database.
How can I do an update command with:
c.execute('update mytable set * = ? where id = ?', row)
without hardcoding all the column names? (I'm in prototyping phase, and this is often subject to change, that's why I don't want to hardcode the column names now).
Obviously * = ? is probably incorrect, how to modify this?
Also, with where id = ? at the end of the query, id would be expected as the last element of row; however, it's the first element (because, again, row follows the database's column order, and id is the first column).

You could extract the column names using the table_info PRAGMA; it lists the column names in table order. You can then build the statement in parts and finally combine them.
e.g. for a table defined with :-
CREATE TABLE "DATA" ("idx" TEXT,"status" INTEGER,"unit_val" TEXT DEFAULT (null) );
Then
PRAGMA table_info (data);
returns :-

cid  name      type     notnull  dflt_value  pk
0    idx       TEXT     0        NULL        0
1    status    INTEGER  0        NULL        0
2    unit_val  TEXT     0        null        0

i.e. you want to extract the name column.
You may be interested in - PRAGMA Statements
An alternative approach would be to extract the CREATE statement from sqlite_master. However, that would require more complex code to parse out the column names.
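Putting the PRAGMA approach together, a minimal sketch using Python's sqlite3 (the table and data here are illustrative, not from the question):

```python
import sqlite3

conn = sqlite3.connect(':memory:')
c = conn.cursor()
c.execute('CREATE TABLE mytable (id INTEGER PRIMARY KEY, status INTEGER, unit_val TEXT)')
c.execute("INSERT INTO mytable VALUES (1, 0, 'old')")

# Column names, in table order, from the table_info PRAGMA (name is field 1)
cols = [info[1] for info in c.execute('PRAGMA table_info(mytable)')]

# row follows the table's column order, so id is the first element
row = (1, 5, 'new')

# Build "col = ?" pairs for every column except id, then move id to the end
assignments = ', '.join(f'{col} = ?' for col in cols if col != 'id')
params = row[1:] + (row[0],)
c.execute(f'UPDATE mytable SET {assignments} WHERE id = ?', params)
```

Because the assignment list is generated from the live schema, the statement keeps working when columns are added or renamed during prototyping.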


ON DUPLICATE KEY UPDATE non-index columns

I have code that is updating a few MySQL tables with data coming from a Sybase database. The table structures are exactly the same.
Since the number of tables may increase in the future, I wrote a Python script that loops over an array of table names, and based on the number of columns in each of those tables, the insert statement dynamically changes:
'''insert into databaseName.{} ({}) values ({})'''.format(table, columns, parameters)
As you can see, the value parameters are not hardcoded, which is why I can't simply modify this query to do an "ON DUPLICATE KEY UPDATE".
for example, the insert statement may look like:
insert into databaseName.table_foo (col1,col2,col3,col4,col5) values (%s,%s,%s,%s,%s)
or
insert into databaseName.table_bar (col1,col2,col3) values (%s,%s,%s)
how can I use "ON DUPLICATE KEY UPDATE" in here to update non-index columns with their corresponding non-index values?
I can update this question by including more details if needed.
The easiest solution is this:
'''replace into databaseName.{} ({}) values ({})'''.format(table, columns, parameters)
This works similarly to IODKU, in that if the values conflict with a PRIMARY KEY or UNIQUE KEY of the table, it replaces the row, overwriting the other columns, instead of causing a duplicate key error.
The difference is that REPLACE does a DELETE of the old row followed by an INSERT of the new row, whereas IODKU does either an INSERT or an UPDATE. We know this because if you create triggers on the table, you'll see which ones fire.
Anyway, using REPLACE would make your task a lot simpler in this case.
If you must use IODKU, you need to append an explicit assignment list to the statement. Unfortunately, there is no syntax for "assign all the columns respectively to the new row's values"; you must assign them individually.
For MySQL 8.0.19 or later use this syntax:
INSERT INTO t1 (a,b,c) VALUES (?,?,?) AS new
ON DUPLICATE KEY UPDATE a = new.a, b = new.b, c = new.c;
In earlier MySQL, use this syntax:
INSERT INTO t1 (a,b,c) VALUES (?,?,?)
ON DUPLICATE KEY UPDATE a = VALUES(a), b = VALUES(b), c = VALUES(c);
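Since the column list is already built dynamically, the assignment list can be generated the same way. A minimal sketch for the MySQL 8.0.19+ syntax (the database and table names are the illustrative ones from the question):

```python
def build_iodku(table, columns):
    """Build an INSERT ... ON DUPLICATE KEY UPDATE statement (MySQL 8.0.19+)."""
    col_list = ','.join(columns)
    placeholders = ','.join(['%s'] * len(columns))
    # One "col = new.col" assignment per column, generated from the same list
    assignments = ', '.join(f'{c} = new.{c}' for c in columns)
    return (f'INSERT INTO databaseName.{table} ({col_list}) '
            f'VALUES ({placeholders}) AS new '
            f'ON DUPLICATE KEY UPDATE {assignments}')

sql = build_iodku('table_bar', ['col1', 'col2', 'col3'])
```

For pre-8.0.19 servers the same helper applies with `{c} = VALUES({c})` in place of `{c} = new.{c}` and without the `AS new` alias.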

Delete first row from SQLITE table in python

It's a simple question: how can I just delete the first row from a table without having to give a search criterion?
Normally it is:
c.execute('DELETE FROM name_table WHERE tada=?', (tadida,))
I just want to delete the first row, without the WHERE part. The reason is that I want to create a FIFO table (a queue): add at the bottom and delete from the top.
I could do this by keeping track of time and date or giving the rows an ID, but I would prefer the described method.
Thanx.
I just want to delete first row
SQL tables have no inherent ordering, so there is no defined concept of a first row unless a column (or a set of columns) is specified for ordering.
Assuming that you do have an ordering column, say id, you can use LIMIT to restrict which row is deleted:
delete from mytable order by id limit 1
This removes the record that has the smallest id from the table.
Unless you use a build of SQLite compiled with the SQLITE_ENABLE_UPDATE_DELETE_LIMIT option, you can't use ORDER BY or LIMIT with DELETE.
If your version of SQLite wasn't built with that option (some OS-distributed builds are, some aren't), and building and installing a copy with it is beyond your comfort level, an alternative (assuming a column named id is used for ordering, with the smallest value of id being the oldest record) is:
DELETE FROM yourtable WHERE id = (SELECT min(id) FROM yourtable);
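Putting it together as a FIFO queue, a minimal sketch using Python's sqlite3 and the table's implicit rowid as the ordering column (the table and column names are illustrative):

```python
import sqlite3

conn = sqlite3.connect(':memory:')
c = conn.cursor()
c.execute('CREATE TABLE fifo (payload TEXT)')

def push(value):
    """Add a row at the bottom of the queue."""
    c.execute('INSERT INTO fifo (payload) VALUES (?)', (value,))

def pop():
    """Remove and return the oldest row, or None if the table is empty."""
    row = c.execute('SELECT rowid, payload FROM fifo ORDER BY rowid LIMIT 1').fetchone()
    if row is None:
        return None
    c.execute('DELETE FROM fifo WHERE rowid = ?', (row[0],))
    return row[1]
```

This needs no custom SQLite build, because the ORDER BY/LIMIT restriction happens in the SELECT, and the DELETE uses a plain WHERE on rowid.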

SQL- Can you Delete a row in BigQuery and Move the deleted row to another table?

I have looked at the documentation on the available statements but I have not seen any statement that will enable me to move deleted rows to another table.
here is a snippet of sql code:
CREATE TABLE %s;
INSERT INTO rm.table_access (%s) VALUES (%s);
DELETE FROM rm.table_access
Where (%s) LIKE 'HEARTBEAT' AND -7 AND -077 AND -77
OUTPUT Deleted.(%s) INTO test_tables;
Any ideas how to approach this? Is it even possible?
No, this is not possible in BigQuery. It has no implementation of the kind of virtual tables for modified and deleted rows that some traditional RDBMS have.
To implement something similar, you would need to select the rows to be deleted into a new table FIRST (using a CREATE TABLE AS SELECT statement), then delete them from the main table.
In your case, I would suggest you create a new table with the appropriate filters for each. You can use CREATE TABLE dataset.newtable AS SELECT x FROM T (you can read more about it here). Thus, your syntax would be:
CREATE TABLE `your_new_table` AS SELECT *
FROM `source_table`
WHERE VALUES LIKE 'HEARTBEAT' OR VALUES LIKE '-7' OR VALUES LIKE '-77'
I used single quotes in the WHERE statement because I assumed that the field you are filtering is a String. Another option would be using the function REGEXP_CONTAINS(), you can find out more about it here. Your syntax for the filter would be simplified, as follows:
WHERE REGEXP_CONTAINS(VALUES,"HEARTBEAT|STRING|STRING")
Notice that the values you compare using the above method must be Strings, so make sure of that conversion before using it; you can use the CAST() function.
In addition, if you want to delete rows in your source table, you can use DELETE. The syntax is as below:
DELETE `source_table` WHERE field_1 = 'HEARTBEAT'
Notice that you would be deleting rows directly from your source table.
I hope it helps.
UPDATE
Creating a new table with the rows you desire and another table with the "deleted" rows.
#Table with the rows which match the filter and will be "deleted"
#Notice that you have to provide the path to you table
#`project.dataset.your_new_table`
CREATE TABLE `your_new_table` AS
SELECT field1, field2 #select the columns you want
FROM `source_table`
WHERE field1 LIKE 'HEARTBEAT' OR field1 LIKE '-7' OR field1 LIKE '-77'
Now, you get the rows which did not pass through the filter in the first step. They will compose the table with the desired rows, as below:
CREATE TABLE `table_desired_rows` AS
SELECT field1, field2 #select the columns you want
FROM `source_table`
WHERE field1 NOT LIKE 'HEARTBEAT'
AND field1 NOT LIKE '-7'
AND field1 NOT LIKE '-77'
Now you have your source table with raw data, another table with the desired rows and a table with the rows you ignored.
Second option:
If you do not need the raw data, that means you can modify the source table. You can first create a table with the ignored rows and then delete these rows from your source data.
#creating the table with the rows which will be deleted
#notice that you create a new table with these rows
CREATE TABLE `table_ignored_rows` AS
SELECT field1, field2 #select the columns you want
FROM `source_table`
WHERE field1 LIKE 'HEARTBEAT'
OR field1 LIKE '-7'
OR field1 LIKE '-77';
#now deleting the rows from the source table
DELETE `source_table` WHERE field1 LIKE 'HEARTBEAT'
OR field1 LIKE '-7'
OR field1 LIKE '-77';
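Both statements share the same filter, so they can be generated from one place in the script. A hedged sketch that just builds the statement pair as strings (table and column names are illustrative; executing them would go through a BigQuery client such as google-cloud-bigquery):

```python
def archive_then_delete(source, archive, column, patterns):
    """Build the CREATE TABLE ... AS SELECT and DELETE statements as strings."""
    # One shared condition: the row matches any of the patterns
    condition = ' OR '.join(f"{column} LIKE '{p}'" for p in patterns)
    create = (f'CREATE TABLE `{archive}` AS '
              f'SELECT * FROM `{source}` WHERE {condition}')
    delete = f'DELETE `{source}` WHERE {condition}'
    return create, delete

create_sql, delete_sql = archive_then_delete(
    'project.dataset.source_table',
    'project.dataset.table_ignored_rows',
    'field1',
    ['HEARTBEAT', '-7', '-77'],
)
```

Running the CREATE first and the DELETE second gives the "move deleted rows to another table" effect, since BigQuery has no OUTPUT clause.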

Postgres: autogenerate primary key in postgres using python

cursor.execute('UPDATE emp SET name = %(name)s',{"name": name} where ?)
I don't understand how to get the primary key of a particular record.
I have some N number of records present in the DB. I want to access those records and manipulate them.
Through a SELECT query I got all the records, but I want to update each of those records accordingly.
Can someone lend a helping hand?
Thanks in Advance!
Table structure:
ID CustomerName ContactName
1 Alfreds Futterkiste
2 Ana Trujillo
Here ID is auto generated by the system in Postgres.
I am accessing the CustomerName of two records and updating them. When I update those records, the last updated value is overwriting the first record as well.
I want to set a condition so that each update query affects only its own record.
After Table structure:
ID CustomerName ContactName
1 xyz Futterkiste
2 xyz Trujillo
Here I want to set the first record's CustomerName to 'abc' and the second to 'xyz'.
Note: It will be done using the PK, but I don't know how to get that PK.
You mean you want to use UPDATE SQL command with WHERE statement:
cursor.execute("UPDATE emp SET CustomerName='abc' WHERE ID=1")
cursor.execute("UPDATE emp SET CustomerName='xyz' WHERE ID=2")
This way you will UPDATE rows with specific IDs.
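Hardcoding the values into the SQL string invites injection once they come from user input; parameterized queries avoid that. A minimal sketch of a helper that builds the parameterized statement with psycopg2-style %s placeholders (the table and column names are the ones from the question):

```python
def update_by_id(table, column, value, row_id):
    """Return a parameterized UPDATE statement and its parameter tuple."""
    sql = f'UPDATE {table} SET {column} = %s WHERE ID = %s'
    return sql, (value, row_id)

sql, params = update_by_id('emp', 'CustomerName', 'abc', 1)
# cursor.execute(sql, params)  # with a psycopg2 cursor
```

The values travel as parameters, so quoting and escaping are handled by the driver rather than by string formatting.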
Maybe you won't like this, but you should not use autogenerated keys in general. The only exception is when you want to insert some rows and do not do anything else with them. The proper solution is this:
Create a sequence for your table: http://www.postgresql.org/docs/9.4/static/sql-createsequence.html
Whenever you need to insert a new row, get the next value from the generator (using select nextval('generator_name')). This way you will know the ID before you create the row.
Then insert your row by specifying the id value explicitly.
For the updates:
You can create unique constraints (or unique indexes) on sets of columns that are known to be unique.
But you should identify the rows with the identifiers internally.
When referring records in other tables, use the identifiers, and create foreign key constraints. (Not always, but usually this is good practice.)
Now, when you need to update a row (for example: a customer), you should already know which customer needs to be modified. Because all records are identified by the primary key id, you should already know the id for that row. If you don't know it, but you have a unique index on a set of fields, then you can try to get the id. For example:
select id from emp where CustomerName='abc' -- but only if you have a unique constraint on CustomerName!
In general, if you want to update a single row, then you should NEVER update this way:
update emp set CustomerName='newname' where CustomerName='abc'
even if you have a unique constraint on CustomerName. The explanation is not easy and won't fit here, but consider this: you may be sending changes in a transaction block, and there can be many open transactions at the same time...
Of course, it is fine to update rows this way if your intention is to update all rows that satisfy your condition.

SQL Query for filling a SINGLE column with values

I have an existing SQL database table I'm interacting with over pyodbc. I have written a class that uses pyodbc to interact with the database by performing reads and creating and deleting columns. The final bit of functionality I require is the ability to fill a created column (full of NULLs by default) with values from a python list (that I plan to iterate over and then finalize with db.commit()) - without having an effect on other columns or adding extra rows.
I tried the following query iterated over in a for loop;
INSERT INTO table_name (required_column) VALUES (value)
Thus the class method;
def writeToColumn(self, columnName, tableName, writeData):
    for item in writeData:
        self.cursor.execute('INSERT INTO ' + tableName + ' (' + columnName + ') VALUES (' + item + ')')
    self.cursor.commit()
Where item takes each value of the list in turn.
But this adds an entire new row and fills the cells of the columns not mentioned with NULLs.
What I want to do is replace all of the data in a column without the other columns being affected in any way. Is there some query that can do this?
Thanks!
Not surprisingly, calling INSERT will always insert a new row, hence the name. If you need to update an existing row, you need to call UPDATE.
UPDATE table_name SET required_column=value WHERE ...
where the WHERE condition identifies your existing row somehow (probably via the primary key).
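Putting that together: pair each value in the list with the primary key of the row it should land in, then run one UPDATE per row. A minimal sketch, using Python's sqlite3 as a stand-in for pyodbc (both accept ? placeholders; the table and column names are illustrative):

```python
import sqlite3

conn = sqlite3.connect(':memory:')
cursor = conn.cursor()
cursor.execute('CREATE TABLE t (id INTEGER PRIMARY KEY, other TEXT, new_col TEXT)')
cursor.executemany('INSERT INTO t (id, other) VALUES (?, ?)',
                   [(1, 'a'), (2, 'b'), (3, 'c')])

def write_to_column(cursor, table, column, values):
    """Fill `column` row by row, keyed on id; other columns are untouched."""
    ids = [r[0] for r in cursor.execute(f'SELECT id FROM {table} ORDER BY id')]
    cursor.executemany(f'UPDATE {table} SET {column} = ? WHERE id = ?',
                       zip(values, ids))

write_to_column(cursor, 't', 'new_col', ['x', 'y', 'z'])
conn.commit()
```

Because each statement is an UPDATE keyed on the primary key, no new rows appear and the other columns keep their existing values.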
