sqlite3.OperationalError: no such column - python

The following query works in SQLite Manager, but not in Python, where I get this error:
sqlite3.OperationalError: no such column: domain_list.short_name
I've tried taking out the "AS domain_list" and referring to just "short_name", and also to "websites.short_name", but it still fails in Python while continuing to work in SQLite Manager. The subquery on its own runs fine; the error only appears when I join the subquery to the domain_info table.
Any ideas?
SELECT *
FROM (
    SELECT websites.short_name
    FROM websites
    INNER JOIN product_info ON product_prices.product_info_id = product_info.id
    WHERE product_info.archive = 1
    GROUP BY websites.short_name
) AS domain_list
LEFT JOIN domain_info ON domain_list.short_name = domain_info.domain
ORDER BY last_checked
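A quick thing to check from the Python side (a sketch only; the database path is a placeholder and a plain sqlite3 connection is assumed) is which file the connection actually opened, since a relative path that resolves to a different or older copy of the database produces exactly this kind of "no such column" error:

import sqlite3

# Placeholder path; point this at the same database file that SQLite Manager has open.
conn = sqlite3.connect("products.db")

# Lists the file backing each attached database. If it is not the file loaded
# in SQLite Manager, a query that works there can fail here with "no such column".
for row in conn.execute("PRAGMA database_list"):
    print(row)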

Related

SQLAlchemy text() query with multiple JOINS over 6 tables not returning any rows

Keep in mind that all of these tables exist in my database and are populated.
I checked whether it is a join problem, but as far as I can tell from the SQLAlchemy docs, only a natural JOIN is available in their text() implementation.
In MySQL Workbench I can see that the view RECEIPT exists, but only the column headers are there (I assume they come from the SELECT, but why would the query go through if no data is coming back?).
That left me with my last and simplest explanation: my WHERE clauses are simply filtering out all the data.
I then tried removing some of the WHERE clauses, even though that would leave me with hundreds of thousands of rows, but there is still no data.
from sqlalchemy import text  # `db` is assumed to be an existing SQLAlchemy connection/session

db.execute(text(f"""
create view RECEIPT as
SELECT
`{DATABASE_CONFIG['database']}`.`EKBE`.`BELNR` AS `BELNR`,
`{DATABASE_CONFIG['database']}`.`EKBE`.`BUZEI` AS `BUZEI`,
`{DATABASE_CONFIG['database']}`.`EKPO`.`EBELN` AS `EBELN`,
`{DATABASE_CONFIG['database']}`.`EKPO`.`EBELP` AS `EBELP`,
`{DATABASE_CONFIG['database']}`.`EKBE`.`MATNR` AS `MATNR`,
`{DATABASE_CONFIG['database']}`.`EKBE`.`MENGE` AS `EKBE_MENGE`,
`{DATABASE_CONFIG['database']}`.`EKBE`.`BLDAT` AS `BLDAT`,
`{DATABASE_CONFIG['database']}`.`EKET`.`SLFDT` AS `SLFDT`,
`{DATABASE_CONFIG['database']}`.`T001`.`BUKRS` AS `BUKRS`,
`{DATABASE_CONFIG['database']}`.`T001`.`BUTXT` AS `BUTXT`,
`{DATABASE_CONFIG['database']}`.`T001W`.`WERKS` AS `WERKS`,
`{DATABASE_CONFIG['database']}`.`T001W`.`NAME1` AS `NAME1`,
`{DATABASE_CONFIG['database']}`.`T001W`.`LAND1` AS `LAND1`,
`{DATABASE_CONFIG['database']}`.`LFA1`.`NAME1` AS `LFA1_NAME1`,
`{DATABASE_CONFIG['database']}`.`LFA1`.`LAND1` AS `LFA1_LAND1`,
`{DATABASE_CONFIG['database']}`.`LFA1`.`LIFNR` AS `LIFNR`,
`{DATABASE_CONFIG['database']}`.`EKPO`.`MENGE` AS `EKPO_MENGE`,
(`{DATABASE_CONFIG['database']}`.`EKBE`.`BLDAT` > `{DATABASE_CONFIG['database']}`.`EKET`.`SLFDT`) AS `late`
FROM `{DATABASE_CONFIG['database']}`.`LFA1`
JOIN `{DATABASE_CONFIG['database']}`.`EKKO`
JOIN `{DATABASE_CONFIG['database']}`.`EKPO`
JOIN `{DATABASE_CONFIG['database']}`.`T001`
JOIN `{DATABASE_CONFIG['database']}`.`T001W`
JOIN `{DATABASE_CONFIG['database']}`.`EKBE`
JOIN `{DATABASE_CONFIG['database']}`.`EKET`
WHERE (
(`{DATABASE_CONFIG['database']}`.`LFA1`.`LIFNR` = `{DATABASE_CONFIG['database']}`.`EKKO`.`LIFNR`) AND
(`{DATABASE_CONFIG['database']}`.`EKKO`.`EBELN` = `{DATABASE_CONFIG['database']}`.`EKPO`.`EBELN`) AND
(`{DATABASE_CONFIG['database']}`.`EKPO`.`WERKS` = `{DATABASE_CONFIG['database']}`.`T001W`.`WERKS`) AND
(`{DATABASE_CONFIG['database']}`.`EKPO`.`EBELN` = `{DATABASE_CONFIG['database']}`.`EKBE`.`EBELN`) AND
(`{DATABASE_CONFIG['database']}`.`EKPO`.`EBELN` = `{DATABASE_CONFIG['database']}`.`EKET`.`EBELN`) AND
(`{DATABASE_CONFIG['database']}`.`EKPO`.`BUKRS` = `{DATABASE_CONFIG['database']}`.`T001`.`BUKRS`)
)
"""))

SQLAlchemy rowcount always -1 for statements

I was playing around with SQLAlchemy and Microsoft SQL Server to get the hang of the functions when I came across some strange behavior. I was taught that the rowcount attribute on the result proxy object tells you how many rows were affected by executing a statement. However, when I select or insert single or multiple rows in my test database, I always get -1. How can this be, and how can I fix it so that it reflects reality?
from sqlalchemy import MetaData, Table, select, insert  # `engine` is created elsewhere

connection = engine.connect()
metadata = MetaData()

# Ex1: select statement for all values
student = Table('student', metadata, autoload=True, autoload_with=engine)
stmt = select([student])
result_proxy = connection.execute(stmt)
results = result_proxy.fetchall()
print(result_proxy.rowcount)

# Ex2: inserting a single row
stmt = insert(student).values(firstname='Severus', lastname='Snape')
result_proxy = connection.execute(stmt)
print(result_proxy.rowcount)

# Ex3: inserting multiple rows
stmt = insert(student)
values_list = [{'firstname': 'Rubius', 'lastname': 'Hagrid'},
               {'firstname': 'Minerva', 'lastname': 'McGonogall'}]
result_proxy = connection.execute(stmt, values_list)
print(result_proxy.rowcount)
The print call in each separately run example block prints -1. Ex1 successfully fetches all rows, and both insert statements successfully write the data to the database.
According to the following issue, the rowcount attribute isn't always to be trusted. Is that true here as well? And if so, how can I compensate, for example with a COUNT statement in a SQLAlchemy transaction?
PDO::rowCount() returning -1
The single-row INSERT … VALUES ( … ) case is trivial: if the statement succeeds then one row was affected, and if it fails (throws an error) then zero rows were affected.
For a multi-row INSERT, simply perform it inside a transaction and roll back if an error occurs. The number of rows affected will then be either zero or len(values_list).
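A minimal sketch of that pattern, reusing the engine and student table from the question:

from sqlalchemy import insert

values_list = [{'firstname': 'Rubius', 'lastname': 'Hagrid'},
               {'firstname': 'Minerva', 'lastname': 'McGonogall'}]
try:
    # engine.begin() opens a transaction that commits on success and rolls back on error
    with engine.begin() as conn:
        conn.execute(insert(student), values_list)
    rows_affected = len(values_list)   # all rows were inserted
except Exception:
    rows_affected = 0                  # rolled back, nothing was inserted
print(rows_affected)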
To get the number of rows that a SELECT will return, wrap the select query in a SELECT count(*) query and run that first, for example:
select_stmt = sa.select([Parent])
count_stmt = sa.select([sa.func.count(sa.text("*"))]).select_from(
    select_stmt.alias("s")
)
with engine.connect() as conn:
    conn.execution_options(isolation_level="SERIALIZABLE")
    rows_found = conn.execute(count_stmt).scalar()
    print(f"{rows_found} row(s) found")
    results = conn.execute(select_stmt).fetchall()
    for item in results:
        print(item.id)

How can Django produce this SQL?

I have the following SQL query that returns what I need:
SELECT sensors_sensorreading.*, MAX(sensors_sensorreading.timestamp) AS "last"
FROM sensors_sensorreading
GROUP BY sensors_sensorreading.chipid
In words: get the last sensor reading entry for each unique chipid.
But I cannot seem to figure out the correct Django ORM statement to produce this query. The best I could come up with is:
SensorReading.objects.values('chipid').annotate(last=Max('timestamp'))
But if I inspect the raw SQL it generates:
>>> print connection.queries[-1:]
[{u'time': u'0.475', u'sql': u'SELECT
"sensors_sensorreading"."chipid",
MAX("sensors_sensorreading"."timestamp") AS "last" FROM
"sensors_sensorreading" GROUP BY "sensors_sensorreading"."chipid"'}]
As you can see, it almost generates the correct SQL, except that Django selects only the chipid field and the aggregate "last" (but I need all the table fields returned instead).
Any idea how to return all fields?
Assuming you also have other fields in the table besides chipid and timestamp, I would guess this is the SQL you actually need:
select * from (
    SELECT *, row_number() over (partition by chipid order by timestamp desc) as RN
    FROM sensors_sensorreading
) X where RN = 1
This will return the latest rows for each chipid with all the data that is in the row.
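If you would rather stay inside the ORM than drop to raw SQL, one common way to express "newest row per chipid" (a sketch, assuming the SensorReading model with chipid and timestamp fields from the question) is a correlated subquery:

from django.db.models import OuterRef, Subquery

# For each reading, find the newest reading that shares its chipid...
latest = (SensorReading.objects
          .filter(chipid=OuterRef('chipid'))
          .order_by('-timestamp'))

# ...and keep only the rows whose primary key is that newest reading,
# so every field of the winning row is returned.
last_per_chip = SensorReading.objects.filter(
    pk=Subquery(latest.values('pk')[:1])
)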

Python Pandas to_sql removes all table indices when writing to table

I have the following code, which runs a MySQL SELECT command formed by left joining many tables together. I then want to write the result to another table. When I do that with pandas, the data is added to the table correctly, but the operation somehow destroys all of the table's indices, including the primary key.
Here is the code:
q = "SELECT util.peer_id as peer_id, util.date as ts, weekly_total_page_loads as page_loads FROM %s.%s as util LEFT JOIN \
(SELECT peer_id, date, score FROM %s.%s WHERE date = '%s') as scores \
ON util.peer_id = scores.peer_id AND util.date = scores.date WHERE util.date = '%s';"\
% (config.database_peer_groups, config.table_medians, \
config.database_peer_groups, config.db_score, date, date)
group_export = pd.read_sql(q, con = db)
q = 'USE %s;' % (config.database_export)
cursor.execute(q)
group_export.to_sql(con = db, name = config.table_group_export, if_exists = 'replace', flavor = 'mysql', index = False)
db.commit()
Any ideas?
Edit:
It seems that, by using if_exists='replace', pandas drops the table and recreates it, and when it recreates it, it doesn't rebuild the indices.
Furthermore, this question: to_sql pandas method changes the scheme of sqlite tables
suggests that using a SQLAlchemy engine might solve the problem.
Edit:
When I use if_exists="append" the problem doesn't appear; it only occurs with if_exists="replace".
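One workaround consistent with that observation (a sketch, assuming a SQLAlchemy engine pointing at the export database; the connection URL is a placeholder) is to empty the existing table and append into it, so to_sql never drops the table and the indices survive:

from sqlalchemy import create_engine, text

# Placeholder URL; substitute the real driver, credentials and host.
engine = create_engine(f"mysql+pymysql://user:password@localhost/{config.database_export}")

with engine.begin() as conn:
    # Empty the table but keep its schema, primary key and indices intact.
    conn.execute(text(f"TRUNCATE TABLE {config.table_group_export}"))

# Append into the existing (now empty) table instead of replacing it.
group_export.to_sql(name=config.table_group_export, con=engine,
                    if_exists='append', index=False)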

SQLite3 / web.py: "no such column" with double left join

I am trying to run an SQLite3 query via the web.py framework. The query works in SQLiteManager, but with web.db I get "sqlite3.OperationalError: no such column: a.id".
Is this a web.py bug?
import web
db = web.database(dbn='sqlite', db='data/feed.db')
account = 1
query='''
SELECT a.id, a.url, a.title, a.description, a.account_count, b.id subscribed FROM
(SELECT feed.id, feed.url, feed.title, feed.description, count(account_feed.id) account_count
FROM feed
LEFT OUTER JOIN account_feed
ON feed.id=account_feed.feed_id AND feed.actived=1
GROUP BY feed.id, feed.url, feed.title, feed.description
ORDER BY count(account_feed.id) DESC, feed.id DESC)
a LEFT OUTER JOIN account_feed b ON a.id=b.feed_id AND b.account_id=$account'''
results = list(db.query(query, vars=locals()))
Traceback is here: http://pastebin.com/pUA7zB9H
Not sure why you are getting the error "no such column a.id", but it may help to use a multiline string (easier to read) and a parametrized argument for account (was $account a Perl hangover?):
query = '''
SELECT a.id, a.url, a.title, a.description, a.account_count, b.id subscribed
FROM ( {q} ) a
LEFT OUTER JOIN account_feed b
    ON a.id = b.feed_id
    AND b.account_id = ?'''.format(q=query)
args = [account]
cursor.execute(query, args)
