I would like to be able to return a list of all fields (ideally with the table details) used by a given SQL query. E.g. given the input query:
SELECT t1.field1, field3
FROM dbo.table1 AS t1
INNER JOIN dbo.table2 as t2
ON t2.field2 = t1.field2
WHERE t2.field1 = 'someValue'
would return
+--------+-----------+--------+
| schema | tablename | field |
+--------+-----------+--------+
| dbo | table1 | field1 |
| dbo | table1 | field2 |
| dbo | table1 | field3 |
| dbo | table2 | field1 |
| dbo | table2 | field2 |
+--------+-----------+--------+
Really it needs to make use of the SQL engine itself, as there is no way a parser alone can know that field3 is in table1 rather than table2. For this reason I would assume the solution lies within SQL Server. Bonus points if it can handle SELECT * too!
I have attempted a Python solution using sqlparse (https://sqlparse.readthedocs.io/en/latest/), but was having trouble with the more complex SQL queries involving temporary tables, subqueries and CTEs. Handling of aliases was also very difficult (particularly when the query used the same alias in multiple places). It obviously could not handle cases like field3 above, which has no table identifier, nor can it handle SELECT *.
I was hoping there might be a more elegant solution within SQL Server Management Studio, or even some function within SQL Server itself. We have SQL Prompt from Redgate, which must have some understanding, within its IntelliSense, of the schema and the SQL query it is formatting.
UPDATE:
As requested: the reason I'm trying to do this is to work out which Users can execute which SSRS Reports within our organisation. This is entirely dependent on them having GRANT SELECT permissions assigned to their Roles on all fields used by all datasets (in our case SQL queries) in a given report. I have already managed to report on which Users have GRANT SELECT on which fields according to their Roles. I now want to extend that to which reports those permissions allow them to run.
Matching columns to table names may be tricky, because column names can be ambiguous or even derived. However, you can get the column names, order and types from virtually any query or stored procedure.
Example
SELECT column_ordinal,
       name,
       system_type_name
FROM sys.dm_exec_describe_first_result_set('SELECT * FROM YourTable', NULL, NULL)
I think I have now found an answer. Please note: I currently do not have permission to execute these functions, so I have not yet tested it; I will update the answer when I've had a chance to test it. Credit for the answer goes to @milivojeviCH. The answer is copied from here: https://stackoverflow.com/a/19852614/6709902
The ultimate goal, selecting all the columns used in a SQL Server execution plan, solved:
USE AdventureWorksDW2012
DBCC FREEPROCCACHE
SELECT dC.Gender, dc.HouseOwnerFlag,
SUM(fIS.SalesAmount) AS SalesAmount
FROM
dbo.DimCustomer dC INNER JOIN
dbo.FactInternetSales fIS ON fIS.CustomerKey = dC.CustomerKey
GROUP BY dC.Gender, dc.HouseOwnerFlag
ORDER BY dC.Gender, dc.HouseOwnerFlag
/*
query_hash query_plan_hash
0x752B3F80E2DB426A 0xA15453A5C2D43765
*/
DECLARE @MyQ AS XML;
-- SELECT qstats.query_hash, query_plan_hash, qplan.query_plan AS [Query Plan], qtext.text
SELECT @MyQ = qplan.query_plan
FROM sys.dm_exec_query_stats AS qstats
CROSS APPLY sys.dm_exec_query_plan(qstats.plan_handle) AS qplan
CROSS APPLY sys.dm_exec_sql_text(qstats.plan_handle) AS qtext
WHERE text LIKE '% fIS %'
AND query_plan_hash = 0xA15453A5C2D43765
SELECT @MyQ
;WITH XMLNAMESPACES (DEFAULT 'http://schemas.microsoft.com/sqlserver/2004/07/showplan')
SELECT DISTINCT
    [Database] = x.value('(@Database)[1]', 'varchar(128)'),
    [Schema]   = x.value('(@Schema)[1]',   'varchar(128)'),
    [Table]    = x.value('(@Table)[1]',    'varchar(128)'),
    [Alias]    = x.value('(@Alias)[1]',    'varchar(128)'),
    [Column]   = x.value('(@Column)[1]',   'varchar(128)')
FROM @MyQ.nodes('//ColumnReference') x1(x)
Leads to the following output:
Database Schema Table Alias Column
------------------------- ------ ---------------- ----- ----------------
NULL NULL NULL NULL Expr1004
[AdventureWorksDW2012] [dbo] [DimCustomer] [dC] CustomerKey
[AdventureWorksDW2012] [dbo] [DimCustomer] [dC] Gender
[AdventureWorksDW2012] [dbo] [DimCustomer] [dC] HouseOwnerFlag
[AdventureWorksDW2012] [dbo] [FactInternetSal [fIS] CustomerKey
[AdventureWorksDW2012] [dbo] [FactInternetSal [fIS] SalesAmount
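The same ColumnReference shredding can be sketched outside SQL Server, for example on a plan saved as a .sqlplan file, using Python's standard xml.etree. The plan fragment below is a hypothetical, heavily trimmed example for illustration only; real plans are much larger.

```python
import xml.etree.ElementTree as ET

# Hypothetical, trimmed showplan fragment (real plans are far larger)
NS = "http://schemas.microsoft.com/sqlserver/2004/07/showplan"
plan = f"""
<ShowPlanXML xmlns="{NS}">
  <ColumnReference Database="[AdventureWorksDW2012]" Schema="[dbo]"
                   Table="[DimCustomer]" Alias="[dC]" Column="Gender"/>
  <ColumnReference Column="Expr1004"/>
</ShowPlanXML>
"""

root = ET.fromstring(plan)
# .get() returns None for missing attributes, mirroring the NULLs above
cols = [
    (c.get("Database"), c.get("Schema"), c.get("Table"), c.get("Column"))
    for c in root.iter(f"{{{NS}}}ColumnReference")
]
print(cols)
```

As in the T-SQL output, computed expressions such as Expr1004 come back with no table details.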
Hello, I need help with Python and SQL.
I have 2 tables:
users_table:
userid | name
tasks_table:
userid | name | date
What I need is to get the user IDs from the first table:
SELECT userid FROM users_table
And use those user IDs to make a SELECT from the second table:
SELECT count(date) from tasks_table WHERE userid=xxx
How can I do this with Python? I tried to use a loop, but it didn't work for some reason; maybe I did something wrong.
I'll be grateful for any help.
Thanks!
SELECT U.USER_ID,COUNT(T.DATE)AS CNT
FROM USERS_TABLE AS U
LEFT JOIN TASKS_TABLE AS T ON U.USER_ID=T.USER_ID
GROUP BY U.USER_ID
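The query above replaces the Python loop entirely: one LEFT JOIN with GROUP BY returns the count per user, including users with no tasks. A minimal sketch in Python using the standard sqlite3 module, with the table and column names taken from the question:

```python
import sqlite3

# In-memory database with the two tables from the question (sample data assumed)
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE users_table (userid INTEGER, name TEXT);
    CREATE TABLE tasks_table (userid INTEGER, name TEXT, date TEXT);
    INSERT INTO users_table VALUES (1, 'alice'), (2, 'bob');
    INSERT INTO tasks_table VALUES
        (1, 'task-a', '2023-01-01'),
        (1, 'task-b', '2023-01-02');
""")

# One query instead of a Python loop; LEFT JOIN keeps users with no tasks,
# and COUNT(t.date) counts 0 for them because it ignores NULLs
rows = conn.execute("""
    SELECT u.userid, COUNT(t.date) AS cnt
    FROM users_table AS u
    LEFT JOIN tasks_table AS t ON u.userid = t.userid
    GROUP BY u.userid
    ORDER BY u.userid
""").fetchall()
print(rows)  # [(1, 2), (2, 0)]
```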
I am looking for a way to create a number of filters across a few tables in my SQL database. The 2 tables I require the data from are Order and OrderDetails.
The Order table is like this:
------------------------------------
| OrderID | CustomerID | OrderDate |
------------------------------------
The OrderDetails table is like this:
----------------------------------
| OrderID | ProductID | Quantity |
----------------------------------
I want to count the number of times a particular OrderID appears in a single day. For example, it will pick an OrderID in Order and match it to the OrderIDs in OrderDetails, counting the number of times it appears in OrderDetails.
-----------------------------------------------------------
| OrderID | CustomerID | OrderDate | ProductID | Quantity |
-----------------------------------------------------------
The code I used is below here:
# Execute SQL Query (number of orders made on a particular day entered by a user)
cursor.execute("""
SELECT 'order.*', count('orderdetails.orderid') as 'NumberOfOrders'
from 'order'
left join 'order'
on ('order.orderid' = 'orderdetais.orderid')
group by
'order.orderid'
""")
print(cursor.fetchall())
Also, the current output that I get is this when I should get 3:
[('order.*', 830)]
Your immediate problem is that you are misusing single quotes. If you need to quote an identifier (table name, column name and the like), you should use double quotes in SQLite (this actually is the SQL standard). And an expression such as order.* should not be quoted at all. You are also self-joining the order table, while you probably want to bring in orderdetails.
You seem to want:
select
o.orderID,
o.customerID,
o.orderDate,
count(*) number_of_orders
from "order" o
left join orderdetails od on od.orderid = o.orderid
group by o.orderID, o.customerID, o.orderDate
order is a language keyword, so I did quote it - that table would be better named orders, to avoid the conflicting name. Other identifiers do not need to be quoted here.
Since all you want from orderdetails is the count, you could also use a subquery instead of aggregation:
select
o.*,
(select count(*) from orderdetails od where od.orderid = o.orderid) number_of_orders
from "order" o
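Putting the corrected query into the question's Python context, a minimal sketch with sqlite3 (sample data assumed):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE "order" (orderID INTEGER, customerID INTEGER, orderDate TEXT);
    CREATE TABLE orderdetails (orderID INTEGER, productID INTEGER, quantity INTEGER);
    INSERT INTO "order" VALUES (1, 10, '2023-05-01');
    INSERT INTO orderdetails VALUES (1, 100, 2), (1, 101, 1), (1, 102, 5);
""")

# Double quotes (or none) for identifiers; single quotes only for string literals
rows = conn.execute("""
    SELECT o.orderID, o.customerID, o.orderDate, COUNT(*) AS number_of_orders
    FROM "order" o
    LEFT JOIN orderdetails od ON od.orderID = o.orderID
    GROUP BY o.orderID, o.customerID, o.orderDate
""").fetchall()
print(rows)  # [(1, 10, '2023-05-01', 3)]
```

With three matching orderdetails rows the count is 3, as the question expected.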
Our IDs look something like "CS0000001", which stands for customer with ID 1. Is this possible to do with SQL and auto-increment, or do I need to do it in my GUI?
I need the leading zeroes, but with auto-incrementing to prevent duplicate IDs if I am constructing the ID in Python and inserting it into the DB.
Is that possible?
You have a few choices:
1. Construct the CustomerID in the code which inserts the data into the Customer table (application side, requires a change in your code).
2. Create a view on top of the Customer table that contains the logic, and use that when you need the CustomerID (database side, requires a change in your code).
3. Use a stored procedure to do the inserts and construct the CustomerID in the procedure (database side, requires a change in your code).
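The first option, application-side construction, can be sketched in Python with sqlite3: let the database's auto-increment assign the numeric ID (so there is no double usage), then format the display ID with leading zeroes. Table and column names here are assumptions for illustration.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE customer (id INTEGER PRIMARY KEY AUTOINCREMENT, name TEXT)")

def insert_customer(name):
    # Let the database assign the numeric ID, then derive the display form;
    # :07d zero-pads the number to 7 digits
    cur = conn.execute("INSERT INTO customer (name) VALUES (?)", (name,))
    return f"CS{cur.lastrowid:07d}"

first = insert_customer("Smith")
second = insert_customer("Jones")
print(first, second)  # CS0000001 CS0000002
```

Because the numeric ID comes from the database, two concurrent inserts can never produce the same "CS…" string.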
A possible implementation:
Create data table
CREATE TABLE data (id CHAR(9) NOT NULL DEFAULT '',
val TEXT,
PRIMARY KEY (id));
Create service table
CREATE TABLE ids (id INT NOT NULL AUTO_INCREMENT PRIMARY KEY);
Create trigger which generates id value
CREATE TRIGGER tr_bi_data
BEFORE INSERT
ON data
FOR EACH ROW
BEGIN
INSERT INTO ids () VALUES ();
SET NEW.id = CONCAT('CS', LPAD(LAST_INSERT_ID(), 7, '0'));
DELETE FROM ids;
END
Create trigger which prohibits id value change
CREATE TRIGGER tr_bu_data
BEFORE UPDATE
ON data
FOR EACH ROW
BEGIN
SET NEW.id = OLD.id;
END
Insert some data, check result
INSERT INTO data (val) VALUES ('data-1'), ('data-2');
SELECT * FROM data;
id | val
:-------- | :-----
CS0000001 | data-1
CS0000002 | data-2
Try to update, ensure id change prohibited
UPDATE data SET id = 'CS0000100' WHERE val = 'data-1';
SELECT * FROM data;
id | val
:-------- | :-----
CS0000001 | data-1
CS0000002 | data-2
Insert more data, ensure the numbering continues
INSERT INTO data (val) VALUES ('data-3'), ('data-4');
SELECT * FROM data;
id | val
:-------- | :-----
CS0000001 | data-1
CS0000002 | data-2
CS0000003 | data-3
CS0000004 | data-4
Check that the service table was successfully cleared
SELECT COUNT(*) FROM ids;
| COUNT(*) |
| -------: |
| 0 |
Disadvantages:
An additional table is needed.
The generated id value cannot be edited (copy and delete the old record instead; a custom value cannot be set).
I have two tables, one is an instrument_static table that looks like this
epic| name | Updated
-----------------------------------
ABC | Google |2017-02-03
The other table is market_data that looks like this
epic | name | Updated
-----------------------------------
MARKET:ABC | Google |2017-02-03
I want to join both tables using the epic, but note that the epic in market_data will always be prefixed with "MARKET:". Can someone kindly assist?
I believe this question is similar to this one:
Sql Inner join with removing id prefix
However, as I am dealing with Postgres, I have read that charindex is not a supported function.
This is what I have managed to come up with so far which currently brings back an error:
SELECT * FROM instrument_static s
INNER JOIN market_data m ON
substring(m.epic, charindex(':', M.epic)+1, len(m.epic)) = s.epic
You can use string concatenation in the join clause:
select . . .
from instrument_static ins join
market_data md
on md.epic = 'MARKET:' || ins.epic and
md.name = ins.name and
md.updated = ins.updated;
There are similar methods to accomplish this using split_part(), LIKE, regular expression matching, and so on.
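A minimal sketch of the concatenation join using Python's sqlite3 (SQLite supports the standard || operator, like Postgres; the sample data follows the question):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE instrument_static (epic TEXT, name TEXT, updated TEXT);
    CREATE TABLE market_data (epic TEXT, name TEXT, updated TEXT);
    INSERT INTO instrument_static VALUES ('ABC', 'Google', '2017-02-03');
    INSERT INTO market_data VALUES ('MARKET:ABC', 'Google', '2017-02-03');
""")

# Build the prefixed key on the fly rather than stripping the prefix
rows = conn.execute("""
    SELECT ins.epic, md.epic
    FROM instrument_static ins
    JOIN market_data md ON md.epic = 'MARKET:' || ins.epic
""").fetchall()
print(rows)  # [('ABC', 'MARKET:ABC')]
```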
I have 2 tables. One, named log, contains (path, log id, etc.); the second, named articles, contains (slug, id, title).
I want to write a query that joins articles.slug to log.path
(the problem is that path = /articles/slug).
How do I make this join?
I found a pattern like '% %' and tried to use it in the join, but I don't know how; something like join on log.path = '/article/' + articles.slug.
log.path                    | views  | articles.slug
----------------------------+--------+--------------------
/                           | 479121 |
/article/candidate-is-jerk  | 338647 | candidate-is-jerk
/article/bears-love-berries | 253801 | bears-love-berries
/article/bad-things-gone    | 170098 | bad-things-gone
I need to make a join on log.path = '/article/' + articles.slug.
You can express this join in SQL as:
from log l join
article a
on l.path = concat('/article/', a.slug);
or (using standard syntax):
from log l join
article a
on l.path = '/article/' || a.slug;
The two forms differ in how they handle NULL values: the standard || operator returns NULL if either operand is NULL, while concat() ignores NULL arguments in most databases (e.g. Postgres and SQL Server, though in MySQL concat() also returns NULL).
You can also do the join with a pattern match:
on log.path like concat('%article%',article.slug);