The user adds the information through a form, and it gets inserted into the shoes table in the database.
I want to insert ShoeImage, ShoeName, ShoeStyle, ShoeColor, ShoePrice, and ShoeDescr, but NOT ShoeID (which is auto-increment), ShoeBrandID, or ShoeSizeID directly.
My insert statement:
$sql = "INSERT INTO $tblShoes VALUES (NULL, '$ShoeImage', '$ShoeName', '$ShoeStyle', '$ShoeColor',
'$ShoePrice', '$ShoeDescr')";
How do I write this insert statement with an inner join?
This might work:
INSERT INTO shoes
(
    ShoeImage,
    ShoeName,
    ShoeStyle,
    ShoeColor,
    ShoePrice,
    ShoeDescr,
    ShoeBrandID,
    ShoeSizeID
)
VALUES (
    '$ShoeImage',
    '$ShoeName',
    '$ShoeStyle',
    '$ShoeColor',
    '$ShoePrice',
    '$ShoeDescr',
    (SELECT BrandID FROM shoebrand WHERE BrandName = '$ShoeBrand'),
    (SELECT SizeID FROM shoesize WHERE Size = '$ShoeSize')
)
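If you literally want the lookups expressed as a join (and the form values kept out of the SQL string), the same insert can also be written as INSERT ... SELECT. Here is a minimal sketch in Python with mysql-connector; the driver, connection details, and the placeholder form values are assumptions, since the original code is PHP, but the SQL itself is the same:

import mysql.connector  # assumption: any driver with %s placeholders works the same way

# Hypothetical form values
ShoeImage, ShoeName, ShoeStyle = 'img/runner.png', 'Runner', 'Sneaker'
ShoeColor, ShoePrice, ShoeDescr = 'Black', 79.99, 'Lightweight running shoe'
ShoeBrand, ShoeSize = 'SomeBrand', '42'

conn = mysql.connector.connect(user='user', password='secret', database='shoeshop')  # hypothetical credentials
cur = conn.cursor()

# INSERT ... SELECT with an inner join: shoebrand and shoesize supply the two foreign keys,
# and the %s placeholders keep the form values out of the SQL string.
cur.execute("""
    INSERT INTO shoes
        (ShoeImage, ShoeName, ShoeStyle, ShoeColor, ShoePrice, ShoeDescr, ShoeBrandID, ShoeSizeID)
    SELECT %s, %s, %s, %s, %s, %s, b.BrandID, s.SizeID
    FROM shoebrand b
    INNER JOIN shoesize s ON s.Size = %s
    WHERE b.BrandName = %s
    """,
    (ShoeImage, ShoeName, ShoeStyle, ShoeColor, ShoePrice, ShoeDescr, ShoeSize, ShoeBrand))
conn.commit()

If either lookup finds no matching row, nothing is inserted at all, which also guards against ending up with a NULL foreign key.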
Beginner's question here. I wish to populate a table with many rows of data straight from a query I'm running in the same session, using executemany(). Currently, I insert each row as a tuple, as shown in the script below.
Select query to get the needed data:
This query returns data with 4 columns: Parking_ID, Snapshot_Date, Snapshot_Time, Parking_Stat.
park_set_stat_query = "SET @row_number = 0;"
park_set_stat_query2 = "SET @row_number2 = 0;"
# one time load to catch only the changes done in the input table
park_change_stat_query = """select in1.Parking_ID,
in1.Snapshot_Date as Snapshot_Date,
in1.Snapshot_Time as Snapshot_Time,
in1.Parking_Stat
from (SELECT
Parking_ID,
Snapshot_Date,
Snapshot_Time,
Parking_Stat,
(@row_number:=@row_number + 1) AS num1
from Fact_Parking_Stat_Input
WHERE Parking_Stat<>0) as in1
left join (SELECT
Parking_ID,
Snapshot_Date,
Snapshot_Time,
Parking_Stat,
(@row_number2:=@row_number2 + 1)+1 AS num2
from Fact_Parking_Stat_Input
WHERE Parking_Stat<>0) as in2
on in1.Parking_ID=in2.Parking_ID and in1.num1=in2.num2
WHERE (CASE WHEN in1.Parking_Stat<>in2.Parking_Stat THEN 1 ELSE 0 END=1) OR num1=1"""
Here is the insert part of the script:
As you can see below, I insert each row into the destination table Fact_Parking_Stat_Input_Alter.
mycursor = connection.cursor()
mycursor2 = connection.cursor()
mycursor.execute(park_set_stat_query)
mycursor.execute(park_set_stat_query2)
mycursor.execute(park_change_stat_query)
# keep only changes in a staging table named Fact_Parking_Stat_Input_Alter
qSQLresults = mycursor.fetchall()
for row in qSQLresults:
    Parking_ID = row[0]
    Snapshot_Date = row[1]
    Snapshot_Time = row[2]
    Parking_Stat = row[3]
    # SQL query to INSERT a record into the table Fact_Parking_Stat_Input_Alter.
    mycursor2.execute('''INSERT INTO Fact_Parking_Stat_Input_Alter (Parking_ID, Snapshot_Date, Snapshot_Time, Parking_Stat)
                         VALUES (%s, %s, %s, %s)''',
                      (Parking_ID, Snapshot_Date, Snapshot_Time, Parking_Stat))
    # Commit your changes in the database
    connection.commit()
mycursor.close()
mycursor2.close()
connection.close()
How can I improve the code so it will insert the data in one insert command?
Thanks
Amir
MySQL has an INSERT INTO ... SELECT statement that is probably far more efficient than querying the data in Python, pulling it out and re-inserting it.
https://www.mysqltutorial.org/mysql-insert-into-select/
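Concretely, that means wrapping the change-detection SELECT in a single INSERT ... SELECT and executing it once, so no rows make a round trip through Python. A rough sketch; the driver and connection details are assumptions, while the table, column and variable names are taken from the question:

import mysql.connector  # assumption: the question does not say which MySQL driver it uses

connection = mysql.connector.connect(user='user', password='secret', database='parking')  # hypothetical credentials
mycursor = connection.cursor()

# The user variables are session-scoped, so initialise them on the same connection.
mycursor.execute("SET @row_number = 0;")
mycursor.execute("SET @row_number2 = 0;")

# One statement: the change-detection SELECT feeds the staging table directly.
mycursor.execute("""
    INSERT INTO Fact_Parking_Stat_Input_Alter (Parking_ID, Snapshot_Date, Snapshot_Time, Parking_Stat)
    SELECT in1.Parking_ID, in1.Snapshot_Date, in1.Snapshot_Time, in1.Parking_Stat
    FROM (SELECT Parking_ID, Snapshot_Date, Snapshot_Time, Parking_Stat,
                 (@row_number:=@row_number + 1) AS num1
          FROM Fact_Parking_Stat_Input
          WHERE Parking_Stat<>0) AS in1
    LEFT JOIN (SELECT Parking_ID, Snapshot_Date, Snapshot_Time, Parking_Stat,
                      (@row_number2:=@row_number2 + 1)+1 AS num2
               FROM Fact_Parking_Stat_Input
               WHERE Parking_Stat<>0) AS in2
      ON in1.Parking_ID=in2.Parking_ID AND in1.num1=in2.num2
    WHERE (CASE WHEN in1.Parking_Stat<>in2.Parking_Stat THEN 1 ELSE 0 END=1) OR num1=1
    """)
connection.commit()
mycursor.close()
connection.close()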
Environment details: Windows 10, Python 3.5, Flask, SQL Server, pyodbc.
cur.execute("insert into T1 (ID_CUSTOMER, ID_ENVIRONMENT) values (?, ?)",
select id from CUSTOMER where name = form['customerId'],
select id from ENVIRONMENT where name = form['environmentId'],
)
Remove the VALUES clause; you can use INSERT ... SELECT directly.
INSERT INTO T1
(ID_CUSTOMER,
ID_ENVIRONMENT)
SELECT ID_CUSTOMER =(SELECT Max(id)
FROM customer
WHERE NAME = 'abc'),
ID_ENVIRONMENT = (SELECT Max(id)
FROM environment
WHERE NAME = 'xyz')
The Max(id) here is to make sure the inner query returns a single value
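For completeness, a minimal sketch of what that looks like from the Flask/pyodbc side, assuming conn is an open pyodbc connection and form is the dict from the question; the names are passed as ? parameters instead of being concatenated into the SQL:

cur = conn.cursor()
cur.execute("""
    INSERT INTO T1 (ID_CUSTOMER, ID_ENVIRONMENT)
    SELECT (SELECT MAX(id) FROM CUSTOMER WHERE name = ?),
           (SELECT MAX(id) FROM ENVIRONMENT WHERE name = ?)
    """,
    form['customerId'], form['environmentId'])
conn.commit()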
I'm having a bit of trouble retrieving the last insert id from a query in SQLite3 using Python.
Here's a sample of my code:
import sqlite3
# Setup our SQLite Database
conn = sqlite3.connect('value_serve.db')
conn.execute("PRAGMA foreign_keys = 1") # Enable Foreign Keys
cursor = conn.cursor()
# Create table for Categories
conn.executescript('DROP TABLE IF EXISTS Category;')
conn.execute('''CREATE TABLE Category (
id INTEGER PRIMARY KEY AUTOINCREMENT,
category CHAR(132),
description TEXT,
parent_id INT,
FOREIGN KEY (parent_id) REFERENCES Category (id)
);''')
conn.execute("INSERT INTO Category (category, parent_id) VALUES ('Food', NULL)")
food_category = cursor.lastrowid
conn.execute("INSERT INTO Category (category, parent_id) VALUES ('Beverage', NULL)")
beverage_category = cursor.lastrowid
...
conn.commit() # Commit to Database
No matter what I do, when I try to get the value of 'food_category' I get a return value of 'None'.
Any help would be appreciated, thanks in advance.
The lastrowid value is set per cursor, and only visible to that cursor.
You need to ask the cursor that executed the query for the last row id. You are asking an arbitrary cursor, one that never actually executed the query, so that cursor can't know the value.
If you actually execute the query on the cursor object, it works:
cursor.execute("INSERT INTO Category (category, parent_id) VALUES ('Food', NULL)")
food_category = cursor.lastrowid
The connection.execute() function creates a new (local) cursor for that query and the last row id is only visible on that local cursor. That cursor is returned when you use connection.execute(), so you could get the same value from that return value:
cursor_used = conn.execute("INSERT INTO Category (category, parent_id) VALUES ('Food', NULL)")
food_category = cursor_used.lastrowid
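Putting it together with the script from the question, a minimal sketch of the corrected flow; the 'Juice' child row is only there to illustrate using the captured id:

cursor = conn.cursor()

cursor.execute("INSERT INTO Category (category, parent_id) VALUES ('Food', NULL)")
food_category = cursor.lastrowid        # id of the 'Food' row

cursor.execute("INSERT INTO Category (category, parent_id) VALUES ('Beverage', NULL)")
beverage_category = cursor.lastrowid    # id of the 'Beverage' row

# A child category can now reference the id captured above.
cursor.execute("INSERT INTO Category (category, parent_id) VALUES (?, ?)",
               ('Juice', beverage_category))

conn.commit()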
I'm trying to get the table name for a field in a result set that I got from the database (Python, Postgres). There is a function in PHP to get the table name for a field; I used it and it works, so I know it can be done (in PHP). I'm looking for a similar function in Python.
The pg_field_table() function in PHP takes a result set and a field number and "returns the name of the table that field belongs to". That is exactly what I need, but in Python.
Simple example - create tables, insert rows, select data:
CREATE TABLE table_a (
id INT,
name VARCHAR(10)
);
CREATE TABLE table_b (
id INT,
name VARCHAR(10)
);
INSERT INTO table_a (id, name) VALUES (1, 'hello');
INSERT INTO table_b (id, name) VALUES (1, 'world');
When using psycopg2 or SQLAlchemy I get the right data and the right field names, but without any information about the table names.
import psycopg2
query = '''
SELECT *
FROM table_a A
LEFT JOIN table_b B
ON A.id = B.id
'''
con = psycopg2.connect('dbname=testdb user=postgres password=postgres')
cur = con.cursor()
cur.execute(query)
data = cur.fetchall()
print('fields', [desc[0] for desc in cur.description])
print('data', data)
The example above prints field names. The output is:
fields ['id', 'name', 'id', 'name']
data [(1, 'hello', 1, 'world')]
I know that there is cursor.description, but it does not contain the table name, just the field name.
What I need is some way to retrieve the table names for the fields in the result set when using raw SQL to query data.
EDIT 1: I need to know if "hello" came from "table_a" or "table_b"; both fields have the same name ("name"). Without the table name you can't tell which table the value came from.
EDIT 2: I know that there are some workarounds like SQL aliases: SELECT table_a.name AS name1, table_b.name AS name2, but I'm really asking how to retrieve the table name from the result set.
EDIT 3: I'm looking for a solution that allows me to write any raw SQL query, sometimes SELECT *, sometimes SELECT A.id, B.id ..., and after executing that query get field names and table names for the fields in the result set.
It is necessary to query the pg_attribute catalog for the table qualified column names:
query = '''
select
string_agg(format(
'%%1$s.%%2$s as "%%1$s.%%2$s"',
attrelid::regclass, attname
) , ', ')
from pg_attribute
where attrelid = any (%s::regclass[]) and attnum > 0 and not attisdropped
'''
cursor.execute(query, ([t for t in ('a','b')],))
select_list = cursor.fetchone()[0]
query = '''
select {}
from a left join b on a.id = b.id
'''.format(select_list)
print(cursor.mogrify(query))
cursor.execute(query)
print([desc[0] for desc in cursor.description])
Output:
select a.id as "a.id", a.name as "a.name", b.id as "b.id", b.name as "b.name"
from a left join b on a.id = b.id
['a.id', 'a.name', 'b.id', 'b.name']
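Applied to the question's own tables, the same trick yields column aliases that encode the table, which can then be split back into (table, column) pairs. A minimal sketch, assuming cur is the psycopg2 cursor from the question:

# Build a select list of table-qualified columns, aliased as "table.column".
cur.execute("""
    select string_agg(format('%%1$s.%%2$s as "%%1$s.%%2$s"', attrelid::regclass, attname), ', ')
    from pg_attribute
    where attrelid = any (%s::regclass[]) and attnum > 0 and not attisdropped
    """, (['table_a', 'table_b'],))
select_list = cur.fetchone()[0]

# Note: no table aliases in the FROM clause, so the qualified names in select_list stay valid.
cur.execute("SELECT {} FROM table_a LEFT JOIN table_b ON table_a.id = table_b.id".format(select_list))

# Each description entry is now 'table.column', so the table is recoverable.
fields = [tuple(desc[0].split('.', 1)) for desc in cur.description]
print(fields)  # [('table_a', 'id'), ('table_a', 'name'), ('table_b', 'id'), ('table_b', 'name')]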
I would like to remove duplicate data only if three columns (name, price and new price) contain the same data, but from another Python script.
The data can be inserted into the database as usual, and then another Python script, run by a cron job, should delete the duplicate data.
So in this case:
cur.execute("INSERT INTO cars VALUES(8,'Hummer',41400, 49747)")
cur.execute("INSERT INTO cars VALUES(9,'Volkswagen',21600, 36456)")
are duplicates. Example script with inserted data:
import psycopg2
import sys

con = None

try:
    con = psycopg2.connect(database='testdb', user='janbodnar')
    cur = con.cursor()
    cur.execute('CREATE TABLE cars(id INT PRIMARY KEY, name VARCHAR(20), price INT, "new price" INT)')
    cur.execute("INSERT INTO cars VALUES(1,'Audi',52642, 98484)")
    cur.execute("INSERT INTO cars VALUES(2,'Mercedes',57127, 874897)")
    cur.execute("INSERT INTO cars VALUES(3,'Skoda',9000, 439788)")
    cur.execute("INSERT INTO cars VALUES(4,'Volvo',29000, 743878)")
    cur.execute("INSERT INTO cars VALUES(5,'Bentley',350000, 434684)")
    cur.execute("INSERT INTO cars VALUES(6,'Citroen',21000, 43874)")
    cur.execute("INSERT INTO cars VALUES(7,'Hummer',41400, 49747)")
    cur.execute("INSERT INTO cars VALUES(8,'Hummer',41400, 49747)")
    cur.execute("INSERT INTO cars VALUES(9,'Volkswagen',21600, 36456)")
    cur.execute("INSERT INTO cars VALUES(10,'Volkswagen',21600, 36456)")
    con.commit()

except psycopg2.DatabaseError as e:
    if con:
        con.rollback()
    print('Error %s' % e)
    sys.exit(1)

finally:
    if con:
        con.close()
You can do this in one statement without additional round-trips to the server.
DELETE FROM cars
USING (
SELECT id, row_number() OVER (PARTITION BY name, price, new_price
ORDER BY id) AS rn
FROM cars
) x
WHERE cars.id = x.id
AND x.rn > 1;
Requires PostgreSQL 8.4 or later for the window function row_number().
Out of a set of dupes the smallest id survives.
Note that I changed "new price" to new_price.
Or use the EXISTS semi-join that @wildplasser posted as a comment, to the same effect.
Or, by special request of CTE-devotee @wildplasser, with a CTE instead of the subquery ... :)
WITH x AS (
SELECT id, row_number() OVER (PARTITION BY name, price, new_price
ORDER BY id) AS rn
FROM cars
)
DELETE FROM cars
USING x
WHERE cars.id = x.id
AND x.rn > 1;
A data-modifying CTE requires Postgres 9.1 or later.
This form will perform about the same as the one with the subquery.
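In the separate clean-up script run by the cron job, this is a single statement executed with psycopg2. A minimal sketch; the connection parameters are copied from the question, and the column is written as new_price as in the answer (use "new price" in double quotes if the original column name with the space was kept):

import psycopg2

con = psycopg2.connect(database='testdb', user='janbodnar')
cur = con.cursor()

# Keep the lowest id of every (name, price, new_price) group and delete the rest.
cur.execute("""
    DELETE FROM cars
    USING (
        SELECT id, row_number() OVER (PARTITION BY name, price, new_price
                                      ORDER BY id) AS rn
        FROM cars
    ) x
    WHERE cars.id = x.id
      AND x.rn > 1
    """)

con.commit()
con.close()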
Use a GROUP BY SQL statement to identify the rows, together with the initial primary key:
duplicate_query = '''\
SELECT MIN(id), "name", price, "new price"
FROM cars
GROUP BY "name", price, "new price"
HAVING COUNT(ID) > 1
'''
The above query selects the lowest primary key id for each group of (name, price, "new price") rows where there is more than one primary key id. For your sample data, this will return:
7, 'Hummer', 41400, 49747
9, 'Volkswagen', 21600, 36456
You can then use the returned data to delete the duplicates:
delete_dupes = '''
DELETE
FROM cars
WHERE
"name"=%(name)s AND price=%(price)s AND "new price"=%(newprice)s AND
id > %(id)s
'''
cur.execute(duplicate_query)
dupes = cur.fetchall()
cur.executemany(delete_dupes, [
dict(name=r[1], price=r[2], newprice=r[3], id=r[0])
for r in dupes])
Note that we delete any row where the primary key id is larger than the first id with the same 3 columns. For the first dupe, only the row with id 8 will match, for the second dupe the row with id 10 matches.
This does do a separate delete for each dupe found. You can combine this into one statement with a WHERE EXISTS sub-select query:
delete_dupes = '''\
DELETE FROM cars cdel
WHERE EXISTS (
SELECT *
FROM cars cex
WHERE
cex."name" = cdel."name" AND
cex.price = cdel.price AND
cex."new price" = cdel."new price" AND
cex.id > cdel.id
)
'''
cur.execute(delete_dupes)
This instructs PostgreSQL to delete any row for which there are other rows with the same name, price and new price but with a primary key that is higher than the current row.
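One last note: as with the inserts in the question's script, the DELETE only becomes permanent once you call con.commit() on the connection (or put the clean-up connection into autocommit mode).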