SQLAlchemy ORM Update for HSTORE fields - python

I have a problem when I try to update an hstore field. I have the following translation hybrid and database model:

translation_hybrid = TranslationHybrid(
    current_locale='en',
    default_locale='de'
)

class Book:
    __tablename__ = "Book"
    id = Column(UUID(as_uuid=True), primary_key=True)
    title_translations = Column(MutableDict.as_mutable(HSTORE), nullable=False)
    title = translation_hybrid(title_translations)

I want to update title with the current locale using a single ORM query. When I try the following query

query(Book).filter(Book.id == id).update({"title": "new_title"})

the ORM converts it to the following SQL:

UPDATE "Book" SET coalesce(title_translations -> 'en', title_translations -> 'de') = 'new_title' WHERE "Book".id = id

This SQL raises a syntax error. What is the best way to update the title without fetching the model first and assigning the value to the field?

We eventually got this to run; documenting it here for the benefit of others who might run into this issue. Note that we're using the new select style and async.
As you already suggested, we solved this by assigning the updated values directly to the record object. We're basically implementing the solution from the SQLAlchemy docs:
updated_record: models.Country = None  # type: ignore
try:
    # fetch current data from database and lock for update
    record = await session.execute(
        select(models.Country)
        .filter_by(id=str(country_id))
        .with_for_update(nowait=True)
    )
    updated_record = record.scalar_one()
    logger.debug(
        "update() - fetched current data from database",
        record=record,
        updated_record=vars(updated_record),
    )
    # merge country_dict (which holds the data to be updated) with the data in the DB
    for key, value in country_dict.items():
        setattr(updated_record, key, value)
    logger.debug(
        "update() - merged new data into record",
        updated_record=vars(updated_record),
    )
    # flush data to database
    await session.flush()
    # refresh updated_record and commit
    await session.refresh(updated_record)
    await session.commit()
except Exception as e:  # noqa: PIE786
    logger.error("update() - an error occurred", error=str(e))
    await session.rollback()
    raise ValueError("Record can not be updated.")
return updated_record

I think I have solved a similar instance of the problem using the bulk update query variant.
In this case a PoC solution would look like this:
session.execute(update(Book), [{"id": id, "title": title}])
session.commit()
I am not sure why this does not trigger the coalesce() issue, but it seems to work. We should probably open an issue in SQLAlchemy; I don't have the time right now to debug it to its root cause.
UPDATE
I think that the original issue actually originates in sqlalchemy-utils, as the coalesce() seems to arise from the expr_factory of the hybrid property here.
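For completeness: the broken coalesce() expression only appears because the UPDATE targets the hybrid, so targeting the underlying hstore column avoids it entirely. A minimal sketch, assuming the + operator on an HSTORE column renders PostgreSQL's || merge operator (as documented for sqlalchemy.dialects.postgresql.HSTORE), with book_id standing in for the id from the question:

from sqlalchemy import update

# merge the new value into the hstore column directly, bypassing the hybrid;
# || adds or overwrites the 'en' key while keeping the other locales intact
stmt = (
    update(Book)
    .where(Book.id == book_id)
    .values(title_translations=Book.title_translations + {"en": "new_title"})
)
session.execute(stmt)
session.commit()

This updates the 'en' key in a single UPDATE statement, without fetching the model first.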

Related

I am trying to update data but it doesn't get updated in the database

I am new to Python and MongoDB. I want to update data in my database, and the code seems to run fine, but the data still doesn't get updated in the database.
I have tried functions like update and update_one, but no luck so far.
#app.route("/users/update_remedy", methods = ['POST'])
def update_remedy():
try:
remedy = mongo.db.Home_Remedies
name = request.get_json()['name']
desc = request.get_json()['desc']
print("S")
status = remedy.update_one({"name" : name}, {"$set": {"desc" : desc}})
print("h")
return jsonify({"result" : "Remedy Updated Successfully"})
except Exception:
return 'error'
It's likely that your update_one call is looking for a document that doesn't exist. If the filter of a plain update matches no documents, no update operation is performed. Make sure that a document with the field {"name": name} actually exists. You could also check the return value of update_one to ensure an update happened; see UpdateResult for details.
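For example, a sketch of the handler above with that check added (route and field names taken from the question; the 404 response is just one way to surface the miss):

status = remedy.update_one({"name": name}, {"$set": {"desc": desc}})
if status.matched_count == 0:
    # the filter matched no document, so nothing was updated
    return jsonify({"result": "No remedy found with that name"}), 404
return jsonify({"result": "Remedy Updated Successfully"})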

Temporarily disable increment in SQLAlchemy

I am running a Flask application with SQLAlchemy (1.1.0b3) and Postgres.
With Flask I provide an API through which the client can GET all instances of a type from the database and POST them again to a clean instance of the Flask application, as a way of local backup. When the client posts them again, they should have the same IDs they had when they were downloaded.
I don't want to disable the "increment" option for primary keys during normal operation, but if the client provides an ID with a POST and wishes to give a new resource that ID, I would like to set it accordingly without breaking SQLAlchemy. How can I access/reset the current maximum value of the id sequence?
@app.route('/objects', methods=['POST'])
def post_object():
    if 'id' in request.json and MyObject.query.get(request.json['id']) is None:  # 1
        object = MyObject()
        object.id = request.json['id']
    else:  # 2
        object = MyObject()
    object.fillFromJson(request.json)
    db.session.add(object)
    db.session.commit()
    return jsonify(object.toDict()), 201
When adding a bunch of objects WITH an id (#1) and then trying to add one WITHOUT an id, or with an already-used id (#2), I get:
duplicate key value violates unique constraint "object_pkey"
DETAIL: Key (id)=(2) already exists.
Usually the id is generated incrementally, but when an id is already taken there is no check for that. How can I get between the auto-increment and the INSERT?
After adding an object with a fixed ID, you have to make sure the normal incremental behavior doesn't cause any collisions with future insertions.
A possible solution I can think of is to set the next insertion ID to the maximum ID (+1) found in the table. You can do that with the following additions to your code:
@app.route('/objects', methods=['POST'])
def post_object():
    fixed_id = False
    if 'id' in request.json and MyObject.query.get(request.json['id']) is None:  # 1
        object = MyObject()
        object.id = request.json['id']
        fixed_id = True
    else:  # 2
        object = MyObject()
    object.fillFromJson(request.json)
    db.session.add(object)
    db.session.commit()
    if fixed_id:
        table_name = MyObject.__table__.name
        db.engine.execute("SELECT pg_catalog.setval(pg_get_serial_sequence('%s', 'id'), MAX(id)) FROM %s;" % (table_name, table_name))
    return jsonify(object.toDict()), 201
The next object (without a fixed id) inserted into the table will continue the id increment from the biggest id found in the table.

How can I create a new view in bigquery using the python API?

I have some code that automatically generates a bunch of different SQL queries that I would like to insert into BigQuery to generate views. These views need to be regenerated dynamically every night because of the changing nature of the data, so I would like to use the Google BigQuery API for Python to create them. I understand how to do it using the 'bq' command line tool, but I'd like to have this built directly into the code as opposed to using a shell to run bq. I have played with the code provided at
https://cloud.google.com/bigquery/bigquery-api-quickstart
but I don't understand how to use it to create a view instead of just returning the results of a SELECT statement. I can see the documentation about table inserts here
https://cloud.google.com/bigquery/docs/reference/v2/tables/insert
but that refers to using the REST API to generate new tables, as opposed to the example provided above.
Is it just not possible? Should I just give in and use bq?
Thanks
*** Some additional questions in response to Felipe's comments.
The table resource document indicates that there are a number of required fields, some of which make sense even if I don't fully understand what they're asking for, while others do not. For example, externalDataConfiguration.schema: does this refer to the schema of the database that I'm connecting to (I assume it does), or the schema for storing the data?
What about externalDataConfiguration.sourceFormat? Since I'm trying to make a view of a pre-existing database, I'm not sure I understand how the source format is relevant. Is it the source format of the database I'm making a view from? How would I identify that?
And externalDataConfiguration.sourceUris[]: I'm not importing new data into the database, so I don't understand why this (or the previous element) is required.
What about schema?
tableReference.datasetId, tableReference.projectId, and tableReference.tableId are self-explanatory.
Type would be view, and view.query would be the actual SQL query used to make the view. So I get why those are required for making a view, but I don't understand the other parts.
Can you help me understand these details?
Thanks,
Brad
Using https://cloud.google.com/bigquery/docs/reference/rest/v2/tables/insert
Submit something like the following, assuming you add the authorization:

{
  "view": {
    "query": "select column1, count(1) from `project.dataset.someTable` group by 1",
    "useLegacySql": false
  },
  "tableReference": {
    "tableId": "viewName",
    "projectId": "projectName",
    "datasetId": "datasetName"
  }
}
Alternatively, in Python, assuming you have a service key set up and the environment variable GOOGLE_APPLICATION_CREDENTIALS=/path/to/my/key. The one caveat is that, as far as I can tell, this can only create views using legacy SQL, and by extension they can only be queried using legacy SQL, though the straight API method allows either legacy or standard.
from google.cloud import bigquery

def create_view(dataset_name, view_name, project, viewSQL):
    bigquery_client = bigquery.Client(project=project)
    dataset = bigquery_client.dataset(dataset_name)
    table = dataset.table(view_name)
    table.view_query = viewSQL
    try:
        table.create()
        return True
    except Exception as err:
        print(err)
        return False
Note: this changed a little bit with 0.28.0 of the library - see the following for further details:
Google BigQuery: creating a view via Python google-cloud-bigquery version 0.27.0 vs. 0.28.0
My example function:
# create a view via python
from google.cloud import bigquery
from google.cloud.bigquery import Table

def create_view(dataset_name, view_name, sqlQuery, project=None):
    try:
        bigquery_client = bigquery.Client(project=project)
        dataset_ref = bigquery_client.dataset(dataset_name)
        table_ref = dataset_ref.table(view_name)
        table = Table(table_ref)
        table.view_query = sqlQuery
        table.view_use_legacy_sql = False
        bigquery_client.create_table(table)
        return True
    except Exception as e:
        errorStr = 'ERROR (create_view): ' + str(e)
        print(errorStr)
        raise
Everything the web UI or the bq tool does is done through the BigQuery API, so don't give up yet :).
Creating a view is akin to creating a table; just be sure to have a table resource that contains a view property when you call tables.insert().
https://cloud.google.com/bigquery/querying-data#views
https://cloud.google.com/bigquery/docs/reference/v2/tables#resource
bigquery.version -> '1.10.0'
def create_view(client, dataset_name, view_name, view_query):
    try:
        dataset_ref = client.dataset(dataset_name)
        view = dataset_ref.table(view_name)
        # view.table_type = 'VIEW'
        view.view_query = view_query
        view.view_query_legacy_sql = False
        client.create_table(view)
    except Exception as e:
        errorStr = 'ERROR (create_view): ' + str(e)
        print(errorStr)
        raise
This creates a table, not a view! This is the right code to create a view:
def create_view(client, dataset_name, view_name, view_query):
    try:
        dataset_ref = client.dataset(dataset_name)
        view_ref = dataset_ref.table(view_name)
        table = bigquery.Table(view_ref)
        table.view_query = view_query
        table.view_use_legacy_sql = False
        client.create_table(table)
    except Exception as e:
        errorStr = 'ERROR (create_view): ' + str(e)
        print(errorStr)
        raise
The essential line is:
table = bigquery.Table(view_ref)
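For reference, more recent versions of google-cloud-bigquery make this shorter. A sketch, assuming version 1.x or later, default credentials, and placeholder project/dataset/view names:

from google.cloud import bigquery

client = bigquery.Client()
# Table also accepts a fully qualified "project.dataset.table" string
view = bigquery.Table("my-project.my_dataset.my_view")
view.view_query = "SELECT column1, COUNT(1) AS n FROM `my-project.my_dataset.someTable` GROUP BY 1"
view = client.create_table(view)
print(view.table_type)  # prints 'VIEW'

In these versions standard SQL is the default for view queries, so the legacy-SQL caveat above no longer applies.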

How to delete a record by id in Flask-SQLAlchemy

I have a users table in my MySQL database. This table has id, name and age fields.
How can I delete a record by id?
Now I use the following code:
user = User.query.get(id)
db.session.delete(user)
db.session.commit()
But I don't want to make any query before the delete operation. Is there any way to do this? I know I can use db.engine.execute("delete from users where id=..."), but I would like to use the delete() method.
You can do this:
User.query.filter_by(id=123).delete()
or
User.query.filter(User.id == 123).delete()
Make sure to commit for delete() to take effect.
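For example, putting the two together:

User.query.filter(User.id == 123).delete()
db.session.commit()  # the DELETE is not persisted until you commit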
Just want to share another option:
# mark two objects to be deleted
session.delete(obj1)
session.delete(obj2)
# commit (or flush)
session.commit()
http://docs.sqlalchemy.org/en/latest/orm/session_basics.html#deleting
In this example, the following code works fine:
obj = User.query.filter_by(id=123).one()
session.delete(obj)
session.commit()
In sqlalchemy 1.4 (2.0 style) you can do it like this:
from sqlalchemy import select, update, delete
sql1 = delete(User).where(User.id.in_([1, 2, 3]))
sql2 = delete(User).where(User.id == 1)
db.session.execute(sql1)
db.session.commit()
or
u = db.session.get(User, 1)
db.session.delete(u)
db.session.commit()
In my opinion, using select, update, and delete is more readable.
A style comparison of 1.0 vs 2.0 can be found here.
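As a quick sketch of that comparison (assuming a Flask-SQLAlchemy db session as in the question):

# 1.x Query style
db.session.query(User).filter(User.id == 1).delete()

# 2.0 statement style
from sqlalchemy import delete
db.session.execute(delete(User).where(User.id == 1))

db.session.commit()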
Another possible solution, especially if you want a batch delete:
deleted_objects = User.__table__.delete().where(User.id.in_([1, 2, 3]))
session.execute(deleted_objects)
session.commit()

add column to SQLAlchemy Table

I made a table using SQLAlchemy and forgot to add a column. I basically want to do this:
users.addColumn('user_id', ForeignKey('users.user_id'))
What's the syntax for this? I couldn't find it in the docs.
I have the same problem, and the thought of using a migration library for such a trivial thing makes me tremble. Anyway, this is my attempt so far:

def add_column(engine, table_name, column):
    column_name = column.compile(dialect=engine.dialect)
    column_type = column.type.compile(engine.dialect)
    engine.execute('ALTER TABLE %s ADD COLUMN %s %s' % (table_name, column_name, column_type))

column = Column('new_column_name', String(100), primary_key=True)
add_column(engine, table_name, column)

Still, I don't know how to get primary_key=True into the raw SQL statement.
This is referred to as database migration (SQLAlchemy doesn't support migration out of the box). You can look at using sqlalchemy-migrate to help in these kinds of situations, or you can just ALTER TABLE through your chosen database's command line utility.
See this section of the SQLAlchemy documentation: http://docs.sqlalchemy.org/en/latest/core/metadata.html#altering-schemas-through-migrations
Alembic is the latest software to offer this type of functionality and is made by the same author as SQLAlchemy.
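For illustration, a minimal Alembic migration for the column from the question might look like this (a sketch; the revision boilerplate that alembic generates for you is omitted):

from alembic import op
import sqlalchemy as sa

def upgrade():
    # add the self-referencing foreign key column from the question
    op.add_column('users', sa.Column('user_id', sa.Integer(), sa.ForeignKey('users.user_id')))

def downgrade():
    op.drop_column('users', 'user_id')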
I have a database called "ncaaf.db" built with sqlite3 and a table called "games". So I would cd into the same directory on my Linux command prompt and do

sqlite3 ncaaf.db
alter table games add column q4 float;

and that is all it takes! Just make sure you update the definitions in your SQLAlchemy code.
from sqlalchemy import create_engine
engine = create_engine('sqlite:///db.sqlite3')
engine.execute('alter table table_name add column column_name String')
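Note that Engine.execute() was removed in SQLAlchemy 2.0; a sketch of the equivalent under the 1.4+/2.0 style:

from sqlalchemy import create_engine, text

engine = create_engine('sqlite:///db.sqlite3')
with engine.begin() as conn:  # begin() commits on success
    conn.execute(text('alter table table_name add column column_name VARCHAR'))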
I had the same problem, so I ended up just writing my own function in raw SQL. If you are using SQLite3 this might be useful.
If you then add the column to your class definition at the same time, it seems to do the trick.
import sqlite3

def add_column(database_name, table_name, column_name, data_type):
    connection = sqlite3.connect(database_name)
    cursor = connection.cursor()
    if data_type == "Integer":
        data_type_formatted = "INTEGER"
    elif data_type == "String":
        data_type_formatted = "VARCHAR(100)"
    else:
        # fall back to the name passed in, so unknown types don't raise a NameError
        data_type_formatted = data_type
    base_command = "ALTER TABLE '{table_name}' ADD column '{column_name}' '{data_type}'"
    sql_command = base_command.format(table_name=table_name, column_name=column_name, data_type=data_type_formatted)
    cursor.execute(sql_command)
    connection.commit()
    connection.close()
I recently had this same issue, so I took a pointer from AlexP's earlier answer. The problem was in getting the new column into my program's metadata: using SQLAlchemy's append_column functionality had some unexpected downstream effects ('str' object has no attribute 'dialect impl'). I corrected this by adding the column with DDL (a MySQL database in this case) and then reflecting the table back from the DB into my metadata.
The code is roughly as follows (modified slightly from what I have in order to reduce it to its minimal essence; I apologize for any mistakes, but they should be minor):
try:
    # Use back quotes as a protection against SQL injection attacks. Can we do more?
    common.qry_engine.execute('ALTER TABLE %s ADD COLUMN %s %s' %
                              ('`' + self.tbl.schema + '`.`' + self.tbl.name + '`',
                               '`' + self.outputs[new_col] + '`', 'VARCHAR(50)'))
except exc.SQLAlchemyError as msg:
    raise GRError(desc='Unable to physically add derived column to table. Contact support.',
                  data=str(self.outputs), other_info=str(msg))
try:  # refresh the metadata to show the new column
    self.tbl = sqlalchemy.Table(self.tbl.name, self.tbl.metadata, extend_existing=True, autoload=True)
except exc.SQLAlchemyError as msg:
    raise GRError(desc='Unable to establish metadata for new column. Contact support.',
                  data=str(self.outputs), other_info=str(msg))
Yes, you can. Install sqlalchemy-migrate (pip install sqlalchemy-migrate) and use its Table and Column create() methods in your script:

from sqlalchemy import String, MetaData, create_engine
from migrate.versioning.schema import Table, Column

db_engine = create_engine(app.config.get('SQLALCHEMY_DATABASE_URI'))
db_meta = MetaData(bind=db_engine)
table = Table('table_name', db_meta)
col = Column('new_column_name', String(20), default='foo')
col.create(table)
Just continuing the simple way proposed by chasmani, with a little improvement:

'''
simple migration
columns to add:
    last_status_change = Column(BigInteger, default=None)
    last_complete_phase = Column(String, default=None)
    complete_percentage = Column(DECIMAL, default=0.0)
'''
import sqlite3
from config import APP_STATUS_DB
from sqlalchemy import types

def add_column(database_name: str, table_name: str, column_name: str, data_type: types, default=None):
    ret = False
    if default is not None:
        try:
            float(default)  # numeric default: no quoting needed
            ddl = "ALTER TABLE '{table_name}' ADD column '{column_name}' '{data_type}' DEFAULT {default}"
        except (ValueError, TypeError):
            ddl = "ALTER TABLE '{table_name}' ADD column '{column_name}' '{data_type}' DEFAULT '{default}'"
    else:
        ddl = "ALTER TABLE '{table_name}' ADD column '{column_name}' '{data_type}'"
    sql_command = ddl.format(table_name=table_name, column_name=column_name, data_type=data_type.__name__,
                             default=default)
    try:
        connection = sqlite3.connect(database_name)
        cursor = connection.cursor()
        cursor.execute(sql_command)
        connection.commit()
        connection.close()
        ret = True
    except Exception as e:
        print(e)
        ret = False
    return ret

add_column(APP_STATUS_DB, 'procedures', 'last_status_change', types.BigInteger)
add_column(APP_STATUS_DB, 'procedures', 'last_complete_phase', types.String)
add_column(APP_STATUS_DB, 'procedures', 'complete_percentage', types.DECIMAL, 0.0)
If using Docker:
go to the terminal of the container holding your DB
get into the db: psql -U usr [YOUR_DB_NAME]
now you can alter tables using raw SQL: alter table [TABLE_NAME] add column [COLUMN_NAME] [TYPE]
Note that you will need to have mounted your DB volume for the changes to persist between builds.
Adding the column "manually" (not using python or SQLAlchemy) is perhaps the easiest?
Same problem over here. What I did was iterate over the db and add each entry to a new database with the extra column, then delete the old db and rename the new one to the old name.
