I want to write a SELECT statement that gets all the data in one table and follows all the foreign keys from that table with a LEFT OUTER JOIN. For example:
`orderitem`: id, name, title_id
`title`: id, name
In the above example, I would be able to use the statement:
SELECT * FROM orderitem LEFT OUTER JOIN title on orderitem.title_id=title.id
Is there a way that I could do this not knowing the table structure? That is, to have a function like the following:
def get_select_statement(table)
???
get_select_statement(orderitem)
==> "SELECT * FROM orderitem LEFT OUTER JOIN title on orderitem.title_id=title.id"
How would this be done?
To clarify this question, I think I'm looking for the following information from this function:
What are all the column names in the given table?
What tables do they reference in a ForeignKey relationship and what is the relationship to be able to join?
In addition, note that not all orderitems will have a title, so any sort of INNER JOIN would drop rows.
In MySQL you can retrieve the column names with a DESCRIBE statement:
DESCRIBE table_name;
And all info about foreign keys:
select *
from information_schema.KEY_COLUMN_USAGE
where TABLE_SCHEMA = "schema_name"
and TABLE_NAME="table_name"
and REFERENCED_TABLE_NAME IS NOT NULL
To evaluate these queries and load the results into Python, you could use the SQLAlchemy package, for example:
engine = sqlalchemy.create_engine("mysql+mysqldb://user:password@host/db")
res = engine.execute("DESCRIBE table_name;")
columns = [row["Field"] for row in res]
# fetchall() so the rows can be iterated more than once below
rows = engine.execute(query_for_foreign_keys).fetchall()
foreign_keys = [row["COLUMN_NAME"] for row in rows]
referenced_column_names = [row["REFERENCED_COLUMN_NAME"] for row in rows]
referenced_table_names = [row["REFERENCED_TABLE_NAME"] for row in rows]
Then you can generate the query from all the data above, for example:
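A minimal sketch of such a generator, assuming the engine above and single-column foreign keys (the function name and parameters are illustrative, not a library API):
def get_select_statement(engine, schema, table):
    # look up the foreign keys of `table` in information_schema
    fk_query = (
        "SELECT COLUMN_NAME, REFERENCED_TABLE_NAME, REFERENCED_COLUMN_NAME "
        "FROM information_schema.KEY_COLUMN_USAGE "
        "WHERE TABLE_SCHEMA = %s AND TABLE_NAME = %s "
        "AND REFERENCED_TABLE_NAME IS NOT NULL")
    joins = [
        "LEFT OUTER JOIN {r} ON {t}.{c}={r}.{rc}".format(
            t=table, c=row["COLUMN_NAME"],
            r=row["REFERENCED_TABLE_NAME"], rc=row["REFERENCED_COLUMN_NAME"])
        for row in engine.execute(fk_query, (schema, table))
    ]
    return " ".join(["SELECT * FROM {}".format(table)] + joins)

get_select_statement(engine, "schema_name", "orderitem")
# ==> "SELECT * FROM orderitem LEFT OUTER JOIN title ON orderitem.title_id=title.id"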
Related
I have two tables, BloodBank(id, name, phone, address) and BloodStock(id, a_pos, b_pos, a_neg, b_neg, bloodbank_id). I want to fetch all the columns from the two tables where a variable column name (say, a blood group such as a_pos or a_neg) has a value greater than 0. How can I write this with the ORM?
The SQL query is written like this to get the required results:
sql="select * from public.bloodbank_bloodbank as bb, public.bloodbank_bloodstock as bs where bs."+blood+">0 and bb.id=bs.bloodbank_id order by bs."+blood+" desc;"
cursor = connection.cursor()
cursor.execute(sql)
bloodbanks = cursor.fetchall()
You could be more specific in your questions, but I believe you have a variable called blood which contains the string name of the column and that the columns a_pos, b_pos, etc. are numeric.
You can use a dictionary to create keyword arguments from strings:
filter_dict = {'bloodstock__' + blood + '__gt': 0}
bloodbanks = Bloodbank.objects.filter(**filter_dict)
This will get you Bloodbank objects that have a related bloodstock with a greater than zero value in the bloodgroup represented by the blood variable.
Note that the way I have written this, you don't get the bloodstock columns selected, and you may get duplicate bloodbanks. If you want to eliminate duplicate bloodbanks you can add .distinct() to your query. The bloodstocks are available for each bloodbank instance using .bloodstock_set.all(), as sketched below.
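For illustration, a quick sketch of that usage (bloodbanks and blood as above; bloodstock_set assumes Django's default related manager name):
for bank in bloodbanks.distinct():
    for stock in bank.bloodstock_set.all():
        print(bank.name, getattr(stock, blood))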
The ORM will generate SQL using a join. Alternatively, you can do an EXISTS in the where clause and no join.
from django.db.models import Exists, OuterRef

filter_dict = {blood + '__gt': 0}
exists = Exists(Bloodstock.objects.filter(
    bloodbank_id=OuterRef('id'),
    **filter_dict
))
bloodbanks = Bloodbank.objects.filter(exists)
There will be no need for a .distinct() in this case.
I'm trying to filter for rows whose id in column A does not exist in column B, using this code.
query = db.session.query().select_from(Spare_Parts, Vendors, Replacement)\
.filter(Vendors.vendor_code == Spare_Parts.vendor_code,\
~ exists().where(Spare_Parts.spare_part_code == Replacement.spare_part_code))
I want to query the data from Spare_Parts whose id does not exist in Replacement as a foreign key, but I got an error like this:
Select statement 'SELECT *
FROM spare_parts, replacement
WHERE spare_parts.spare_part_code = replacement.spare_part_code' returned no FROM clauses due to auto-correlation; specify correlate(<tables>) to control correlation manually.
So what is the problem, and how do I fix it?
Try using a subquery like this instead, to filter the spare_part_code values from spare_parts that are not in the replacement table:
SELECT *
FROM spare_parts
WHERE spare_parts.spare_part_code not in
(select distinct
replacement.spare_part_code
FROM replacement)
Or you can use NOT EXISTS:
SELECT *
FROM spare_parts
WHERE not exists
(select 1
FROM replacement
where spare_parts.spare_part_code = replacement.spare_part_code)
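Translated back into SQLAlchemy, a sketch of the NOT IN variant (assuming the Spare_Parts and Replacement models from the question); making the subquery explicit sidesteps the auto-correlation error:
# subquery of all spare_part_code values present in replacement
replacement_codes = db.session.query(Replacement.spare_part_code)

query = db.session.query(Spare_Parts).filter(
    Spare_Parts.spare_part_code.notin_(replacement_codes))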
Note: this is a question about SQLAlchemy's expression language, not the ORM.
SQL Alchemy is fine for adding WHERE or HAVING clauses to an existing query:
q = select([bmt_gene.c.id]).select_from(bmt_gene)
q = q.where(bmt_gene.c.ensembl_id == "ENSG00000000457")
print q
SELECT bmt_gene.id
FROM bmt_gene
WHERE bmt_gene.ensembl_id = %s
However if you try to add a JOIN in the same way you'll get an exception:
q = select([bmt_gene.c.id]).select_from(bmt_gene)
q = q.join(bmt_gene_name)
sqlalchemy.exc.NoForeignKeysError: Can't find any foreign key relationships between 'Select object' and 'bmt_gene_name'
If you specify the columns it creates a subquery (which is incomplete SQL anyway):
q = select([bmt_gene.c.id]).select_from(bmt_gene)
q = q.join(bmt_gene_name, q.c.id == bmt_gene_name.c.gene_id)
(SELECT bmt_gene.id AS id FROM bmt_gene)
JOIN bmt_gene_name ON id = bmt_gene_name.gene_id
But what I actually want is this:
SELECT
bmt_gene.id AS id
FROM
bmt_gene
JOIN bmt_gene_name ON id = bmt_gene_name.gene_id
Edit: the JOIN has to be added after the creation of the initial query expression q. The idea is that I make a basic query skeleton, then iterate over all the joins requested by the user and add them to the query.
Can this be done in SQL Alchemy?
The first error (NoForeignKeysError) means that your table lacks a foreign key definition. Fix this if you don't want to write join clauses by hand:
from sqlalchemy.types import Integer
from sqlalchemy.schema import MetaData, Table, Column, ForeignKey
meta = MetaData()
bmt_gene_name = Table(
'bmt_gene_name', meta,
Column('id', Integer, primary_key=True),
Column('gene_id', Integer, ForeignKey('bmt_gene.id')),
# ...
)
The joins in the SQLAlchemy expression language work a little differently from what you might expect. You need to build a Join object in which you join all the tables, and only then provide it to the Select object:
q = select([bmt_gene.c.id])
q = q.where(bmt_gene.c.ensembl_id == 'ENSG00000000457')
j = bmt_gene # Initial table to join.
table_list = [bmt_gene_name, some_other_table, ...]
for table in table_list:
    j = j.join(table)
q = q.select_from(j)
The reason you see the subquery in your join is that a Select object is treated like a table (which essentially it is), and you asked to join that table to another one.
You can access the current select_from of a query with the froms attribute, and then join it with another table and update the select_from.
As explained in the documentation, calling select_from usually adds another selectable to the FROM list, however:
Passing a Join that refers to an already present Table or other selectable will have the effect of concealing the presence of that selectable as an individual element in the rendered FROM list, instead rendering it into a JOIN clause.
So you can add a join like this, for example:
from sqlalchemy import join

q = select([bmt_gene.c.id]).select_from(bmt_gene)
q = q.select_from(
    join(q.froms[0], bmt_gene_name,
         bmt_gene.c.id == bmt_gene_name.c.gene_id)
)
I've got an ESRI Point Shape file with (amongst others) a nMSLINK field and a DIAMETER field. The MSLINK is not unique, because of a spatial join. What I want to achieve is to keep only the features in the shapefile that have a unique MSLINK and the smallest DIAMETER value, together with the corresponding values in the other fields. I can use a search cursor to achieve this (looping through all features and removing each feature that does not comply), but this takes ages (> 75000 features). I was wondering if e.g. numpy could do the trick faster in ArcMap/arcpy.
I think that kind of processing would definitely be a lot faster if you work in memory instead of interacting with ArcGIS: for example, by first putting all the rows into a Python object (a namedtuple is probably a good option here). Then you can work out which rows you want to delete or insert.
The fastest approach depends on your data: a) if you have a lot of repeated (MSLINK) rows, the fastest is to insert just the ones you need into a new layer; b) if the rows to be deleted are few compared to the total, deleting is faster.
For a) you'll need to fetch all fields into the tuple, including the point coordinates, so that you can just create a new feature class and insert the new rows.
# Example of Variant a:
from collections import namedtuple
# assuming the following:
source_fc # contains name of the fclass
the_path # contains path to the shape
cleaned_fc # the name of the cleaned fclass
# use all fields of source_fc plus the shape token to get a tuple with xy
# coordinates (using 'mslink' and 'diam' here to simplify the example)
fields = ['mslink', 'diam', 'field3', ... ]
all_fields = fields + ['SHAPE#XY']
# define a namedtuple to hold and work with the rows, use the name 'point' to
# hold the coordinates-tuple
Row = namedtuple('Row', fields + ['point'])
data = []
with arcpy.da.SearchCursor(source_fc, all_fields) as sc:
    for r in sc:
        # unzip the values from each row into a new Row (namedtuple)
        # and append to data
        data.append(Row(*r))
# now just delete the rows we don't want; the easiest way is probably to
# sort the tuples first by MSLINK and then by the diameter...
data = sorted(data, key = lambda x : (x.mslink, x.diam))
# ... now just keep the first ones for each mslink
to_keep = []
last_mslink = None
for d in data:
    if last_mslink != d.mslink:
        last_mslink = d.mslink
        to_keep.append(d)
# create a new feature class with the same fields as the source_fc
arcpy.CreateFeatureclass_management(
    out_path=the_path, out_name=cleaned_fc, template=source_fc)
with arcpy.da.InsertCursor(cleaned_fc, all_fields) as ic:
    for r in to_keep:
        # insertRow takes the whole row sequence, not unpacked values
        ic.insertRow(r)
And for alternative b), I would fetch just 3 fields: a unique ID, the MSLINK and the diameter. Then build a delete list (here you only need the unique ids). Then loop through the feature class again and delete the rows whose ids are on your delete list. Just to be sure, I would duplicate the feature class first and work on a copy; see the sketch below.
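A minimal sketch of variant b), assuming the same source_fc and the simplified field names from variant a) ('OID@' is arcpy's token for the object id):
to_delete = set()
kept = {}  # mslink -> (oid, diam) of the smallest diameter seen so far
with arcpy.da.SearchCursor(source_fc, ['OID@', 'mslink', 'diam']) as sc:
    for oid, mslink, diam in sc:
        if mslink not in kept:
            kept[mslink] = (oid, diam)
        elif diam < kept[mslink][1]:
            # this row is smaller: the previously kept row must go
            to_delete.add(kept[mslink][0])
            kept[mslink] = (oid, diam)
        else:
            to_delete.add(oid)
# second pass (on the copy!): delete the marked rows
with arcpy.da.UpdateCursor(source_fc, ['OID@']) as uc:
    for oid, in uc:
        if oid in to_delete:
            uc.deleteRow()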
There are a few steps you can take to accomplish this task more efficiently. First and foremost, using the data access (arcpy.da) cursors as opposed to the older cursors will increase the speed of your process. This assumes you are working in 10.1 or beyond. Then you can employ the Summary Statistics tool, namely its ability to find a minimum value based on a case field. For yours, the case field would be nMSLINK.
The code below first creates a statistics table with all unique 'nMSLINK' values and their corresponding minimum 'DIAMETER' values. I then use a table select to keep only rows in the table whose 'FREQUENCY' field is not 1. From there I iterate through the new table and build a list of strings that will make up the final sql statement. After this iteration, I use Python's join function to create an sql string that looks something like this:
("nMSLINK" = 'value1' AND "DIAMETER" <> 624.0) OR ("nMSLINK" = 'value2' AND "DIAMETER" <> 1302.0) OR ("nMSLINK" = 'value3' AND "DIAMETER" <> 1036.0) ...
The sql selects rows where nMSLINK values are not unique and where DIAMETER values are not the minimum. Using this SQL, I select by attribute and delete selected rows.
This SQL statement is written assuming your feature class is in a file geodatabase and that 'nMSLINK' is a string field and 'DIAMETER' is a numeric field.
The code has the following inputs:
Feature: The feature to be analyzed
Workspace: A folder that will store a couple intermediate tables temporarily
TempTableName1: A name for one temporary table.
TempTableName2: A name for a second temporary table
Field1 = The nonunique field
Field2 = The field with the numeric values that you wish to find the lowest of
Code:
# Import modules
from arcpy import *
import os
# Local variables
#Feature to analyze
Feature = r"C:\E1B8\ScriptTesting\Workspace\Workspace.gdb\testfeatureclass"
#Workspace to export table of identicals
Workspace = r"C:\E1B8\ScriptTesting\Workspace"
#Name of temp DBF table file
TempTableName1 = "Table1"
TempTableName2 = "Table2"
#Field names
Field1 = "nMSLINK" #nonunique
Field2 = "DIAMETER" #field with numeric values
#Make layer to allow selection
MakeFeatureLayer_management (Feature, "lyr")
#Path for first temp table
Table = os.path.join (Workspace, TempTableName1)
#Create statistics table with min value
Statistics_analysis (Feature, Table, [[Field2, "MIN"]], [Field1])
#SQL Select rows with frequency not equal to one
sql = '"FREQUENCY" <> 1'
# Path for second temp table
Table2 = os.path.join (Workspace, TempTableName2)
# Select rows with Frequency not equal to one
TableSelect_analysis (Table, Table2, sql)
#Empty list for sql bits
li = []
# Iterate through second table
cursor = da.SearchCursor (Table2, [Field1, "MIN_" + Field2])
for row in cursor:
    # Add SQL bit to list
    sqlbit = '("' + Field1 + '" = \'' + row[0] + '\' AND "' + Field2 + '" <> ' + str(row[1]) + ")"
    li.append (sqlbit)
del row
del cursor
#Create SQL for selection of unwanted features
sql = " OR ".join (li)
print sql
#Select based on SQL
SelectLayerByAttribute_management ("lyr", "", sql)
#Delete selected features
DeleteFeatures_management ("lyr")
#delete temp files
Delete_management ("lyr")
Delete_management (Table)
Delete_management (Table2)
This should be quicker than a straight-up cursor. Let me know if this makes sense. Good luck!
I'd like to know if it's possible to generate a SELECT COUNT(*) FROM TABLE statement in SQLAlchemy without explicitly asking for it with execute().
If I use:
session.query(table).count()
then it generates something like:
SELECT count(*) AS count_1 FROM
(SELECT table.col1 as col1, table.col2 as col2, ... from table)
which is significantly slower in MySQL with InnoDB. I am looking for a solution that doesn't require the table to have a known primary key, as suggested in Get the number of rows in table using SQLAlchemy.
Query for just a single known column:
session.query(MyTable.col1).count()
I managed to render the following SELECT with SQLAlchemy on both layers.
SELECT count(*) AS count_1
FROM "table"
Usage from the SQL Expression layer
from sqlalchemy import select, func, Integer, Table, Column, MetaData
metadata = MetaData()
table = Table("table", metadata,
Column('primary_key', Integer),
Column('other_column', Integer) # just to illustrate
)
print(select([func.count()]).select_from(table))
Usage from the ORM layer
You just subclass Query (you probably already have anyway) and provide a specialized count method, like this one:
from sqlalchemy.orm import Query
from sqlalchemy.sql.expression import func

class BaseQuery(Query):
    def count_star(self):
        count_query = (self.statement.with_only_columns([func.count()])
                       .order_by(None))
        return self.session.execute(count_query).scalar()
Please note that order_by(None) resets the ordering of the query, which is irrelevant to the counting.
Using this method you can have a count(*) on any ORM Query that will honor all the filter and join conditions already specified.
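For illustration, a hypothetical usage sketch, assuming a MyModel mapped class and a sessionmaker configured to hand out BaseQuery:
from sqlalchemy.orm import sessionmaker

Session = sessionmaker(bind=engine, query_cls=BaseQuery)
session = Session()

# plain count(*) on the filtered query, without the subquery wrapping
n = session.query(MyModel).filter(MyModel.col1 == 'x').count_star()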
I needed to do a count of a very complex query with many joins. I was using the joins as filters, so I only wanted to know the count of the actual objects. count() was insufficient, but I found the answer in the docs here:
http://docs.sqlalchemy.org/en/latest/orm/tutorial.html
The code would look something like this (to count user objects):
from sqlalchemy import func
session.query(func.count(User.id)).scalar()
An addition to the "Usage from the ORM layer" in the accepted answer: count(*) can be done for the ORM using query.with_entities(func.count()), like this:
session.query(MyModel).with_entities(func.count()).scalar()
It can also be used in more complex cases, when we have joins and filters - the important thing here is to place with_entities after joins, otherwise SQLAlchemy could raise the Don't know how to join error.
For example:
we have the User model (id, name) and the Song model (id, title, genre)
we have the user-song data in the UserSong model (user_id, song_id, is_liked), where user_id + song_id is the primary key
We want to get the number of rock songs the user has liked:
SELECT count(*)
FROM user_song
JOIN song ON user_song.song_id = song.id
WHERE user_song.user_id = %(user_id)s
AND user_song.is_liked IS 1
AND song.genre = 'rock'
This query can be generated in the following way:
from sqlalchemy import and_, func

user_id = 1
query = session.query(UserSong)
query = query.join(Song, Song.id == UserSong.song_id)
query = query.filter(
    and_(
        UserSong.user_id == user_id,
        UserSong.is_liked.is_(True),
        Song.genre == 'rock'
    )
)
# Note: important to place `with_entities` after the join
query = query.with_entities(func.count())
liked_count = query.scalar()
If you are using the SQL Expression Style approach there is another way to construct the count statement if you already have your table object.
Preparations to get the table object (there are different ways to do this):
import sqlalchemy
database_engine = sqlalchemy.create_engine("connection string")
# Populate existing database via reflection into sqlalchemy objects
database_metadata = sqlalchemy.MetaData()
database_metadata.reflect(bind=database_engine)
table_object = database_metadata.tables.get("table_name") # This is just for illustration how to get the table_object
Issuing the count query on the table_object
query = table_object.count()
# This will produce something like the following, where `id` is a primary
# key column in "table_name" automatically selected by sqlalchemy:
# 'SELECT count(table_name.id) AS tbl_row_count FROM table_name'
count_result = database_engine.scalar(query)
I'm not clear on what you mean by "without explicitly asking for it with execute()", so this might be exactly what you are not asking for.
OTOH, this might help others.
You can just run the textual SQL:
from sqlalchemy import text

your_query = """
SELECT count(*) from table
"""
the_count = session.execute(text(your_query)).scalar()
from sqlalchemy import text

def test_query(val: str):
    # bind `val` as a parameter instead of interpolating it into the SQL
    # string (assuming a standard SQLAlchemy session)
    query = text("select count(*) from table where col1 = :val")
    return session.execute(query, {"val": val}).scalar()
You can inspect the result in a debug watch to find the shape that works for you.
query = session.query(table.column).filter().with_entities(func.count(table.column.distinct()))
count = query.scalar()
This worked for me. It gives the query:
SELECT count(DISTINCT table.column) AS count_1
FROM table where ...
Below is a way to find the count of any query:
from sqlalchemy import alias, func

aliased_query = alias(query)
count = db.session.query(func.count('*')).select_from(aliased_query).scalar()