I have the following scenario: users can log in and update certain tables, and whenever that table is updated I would like to call a Python script that will query the same database for additional information (joins) and then use Django's email system to email all users in that group (same role) that information.
My current methodology is to create a user-defined function in SQLite3, and then create a trigger that will call that function (which is linked to a Python file). Is this possible with SQLite?
Stack: Django app, SQLite, and Python.
import sqlite3

def email_materials_submittal(project_id):
    print(project_id)

con = sqlite3.connect('../db.sqlite3')
# Register the Python callback as a one-argument SQL function named FUNC_EMS
con.create_function("FUNC_EMS", 1, email_materials_submittal)
cur = con.cursor()
# Call the function after every insert into core_level
cur.execute("CREATE TRIGGER TRIG_EMS AFTER INSERT ON core_level "
            "BEGIN SELECT FUNC_EMS(NEW.project_id); END;")
con.commit()
con.close()
I want SQL to call an external Python script whenever a new row is inserted into a certain table. I tried the code above, but SQLite throws the error: no such function: FUNC_EMS
This task is much easier using signals for some notifications and a combination of custom management commands and crontab jobs for others (date reminders). Django is awesome!
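For anyone hitting the same error: SQLite user-defined functions live only on the connection that registered them, so a trigger that calls FUNC_EMS fails on any connection (such as Django's own) that never called create_function. A post_save signal avoids the problem entirely. A minimal sketch, assuming a core.models.Level model behind the core_level table and a "reviewers" group; both names are placeholders:

from django.contrib.auth.models import User
from django.core.mail import send_mail
from django.db.models.signals import post_save
from django.dispatch import receiver

from core.models import Level  # hypothetical model backing core_level

@receiver(post_save, sender=Level)
def email_materials_submittal(sender, instance, created, **kwargs):
    if created:
        # Query the related data with the ORM (joins happen here),
        # then notify every user in the group
        recipients = list(
            User.objects.filter(groups__name='reviewers')  # placeholder group
                        .values_list('email', flat=True)
        )
        send_mail(
            subject='New materials submittal',
            message='Project %s was updated.' % instance.project_id,
            from_email='noreply@example.com',
            recipient_list=recipients,
        )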
I am using sqlite combined with tkinter to write and delete records within my Python program. The deletion works perfectly fine in my program; even after I restart the program, the record no longer exists.
However, I always cross-check using DB Browser for SQLite (the standard Linux tool) and look at my SQL table. Strangely, all records still exist in DB Browser. Why are the records gone in my Python sqlite3 queries but not in DB Browser? How can I completely destroy these records?
For deletion I use:
(The user can choose a specific entry using a listbox. Eventually, I "translate" the selected item into its specific ID and trigger the deletion.)
self.c.execute("DELETE FROM financial_table WHERE ID=?",(entry,))
self.conn.commit()
For my query I use:
(I query the data for a specific year and month.)
self.c.execute("SELECT ID, Date, Item, Price FROM financial_table WHERE strftime('%Y-%m', Date) = '{}' ORDER BY Date ".format(date))
single_dates = self.c.fetchall()
Thank you very much for your help.
The solution to my question is: I am stupid!
I was tired yesterday evening and looked at the wrong SQL file in a subfolder, which had the same name as the one from my Python program. So it is actually working. Please excuse my stupidity.
@Bruceskyaus
Despite my stupidity I learned from your answer, especially the try ... except block. I am going to implement it. Thanks.
You may have a problem with transaction control on your database, but it could also be the connection itself. Make sure you don't have any uncommitted DML statements on a different connection (i.e. an INSERT, UPDATE or DELETE in your DB Browser that wasn't committed); these could cause the conn.commit() to fail. With SQLite, an uncommitted transaction can lock the entire database for a brief period of time.
Try ensuring that there is a new cursor for the delete statement, and call conn.close() after conn.commit(). Before you run the code, make sure that no other connections are accessing the database, including DB Browser, and only check in DB Browser once you have shut down the application (for this test). This eliminates multithreading or locking as a possible cause. See also SQLite - Data Persistence and SQLite - Controlling Transactions.
It is also helpful to trap all errors for DML statements using a try...except block. Something like this:
import sqlite3

try:
    self.conn = sqlite3.connect('mydb.db')
    self.c = self.conn.cursor()  # note: self.conn, not conn
    self.c.execute("DELETE FROM financial_table WHERE ID=?", (entry,))
    self.conn.commit()
except sqlite3.Error as e:
    print("An error occurred:", e.args[0])
finally:
    self.conn.close()
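A slightly shorter variant, as a sketch with the same table and entry variable: the connection can be used as a context manager, which commits on success and rolls back on any exception. Note that the with-block does not close the connection.

import sqlite3

conn = sqlite3.connect('mydb.db')
with conn:
    # Commits automatically on success, rolls back on exception
    conn.execute("DELETE FROM financial_table WHERE ID=?", (entry,))
conn.close()  # closing is still your job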
I'm trying to automate the concurrent creation of PostgreSQL indexes via Django. However, when I try to execute SQL via Django like:
from django.db import connections
cursor = connections['mydb'].cursor()
cursor.execute('CREATE INDEX CONCURRENTLY some_index ON some_table (...)')
I get the error:
DatabaseError: CREATE INDEX CONCURRENTLY cannot run inside a transaction block
Even if I use the old @commit_manually decorator or the new @atomic decorator, I still get this error. How do I completely disable transactions in Django?
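One approach that can work, sketched under the assumption of Django 1.6 or later, where autocommit is the default and transaction.set_autocommit is available; some_column is a placeholder, since the original column list is elided. CREATE INDEX CONCURRENTLY must run outside any transaction block, so the idea is simply to force autocommit and stay out of atomic():

from django.db import connections, transaction

# Ensure we are not inside atomic() and that autocommit is on,
# so the statement executes outside a transaction block
transaction.set_autocommit(True, using='mydb')
cursor = connections['mydb'].cursor()
cursor.execute('CREATE INDEX CONCURRENTLY some_index ON some_table (some_column)')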
I am working on a web app written using the Pyramid web framework. We use MySQL to store the relational stuff, but the web app is also a data storing facility, and we use Postgres for that purpose.
Note that each user's account gets its own connection parameters in Postgres. The hosts running Postgres are not necessarily the same for all users.
We have a couple of stored procedures that are essential for the app to function. I was wondering how to ship the procedures to each Postgres database instance, in a way that also makes them easy to update.
Here is what I could come up with so far.
I have a file in the app's code base called procedures.sql:
CREATE FUNCTION {procedure_name_1} (text, text, max_split integer)
RETURNS text AS $$
BEGIN
    -- do stuff --
END;
$$ LANGUAGE plpgsql;

CREATE FUNCTION {procedure_name_2} (text, text, max_split integer)
RETURNS text AS $$
BEGIN
    -- do stuff --
END;
$$ LANGUAGE plpgsql;
Whenever a user wants to talk to their DB, I execute the _add_procedures_to_db function from the Python app.
procedure_name_map = {
    'procedure_name_1': 'function_1_v1',
    'procedure_name_2': 'function_2_v1'
}

def _add_procedures_to_db(connection):
    cursor = connection.cursor()
    with open(PROCEDURE_FILE_PATH) as f:
        sql = f.read().format(**procedure_name_map)
    try:
        cursor.execute(sql)
        connection.commit()
    except:
        # CREATE FUNCTION fails if the function already exists; ignore it
        pass
Note that the connection params are obtained from the MySQL DB when we want to do some transaction within the web response cycle.
The strategy is to change function_1_v1 to function_1_v2 whenever I update the code for the procedure.
But this seems like an expensive way to do it: after the first run, every connection will raise an exception that has to be handled.
So here is my question:
Is there another way to do this from within the web app code? Or should I make procedure updates a part of deployment and configuration rather than an app layer thing?
If you are looking for how to change the database (tables, views, stored procedures) between deployments of different Pyramid web application versions, that's usually called migration.
If you are using SQLAlchemy, you can use automated migration tools like Alembic.
If you are using raw SQL commands, then you need to write and run a custom command line script each time you deploy a new application version. This script would prepare the database for the current application version, which would include running ALTER TABLE SQL commands, etc.
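As a hedged sketch of what such a migration could look like with Alembic (assuming Alembic is already configured; the revision ids, function name, and body are placeholders): using CREATE OR REPLACE FUNCTION also sidesteps the _v1/_v2 renaming scheme, because re-running the statement simply updates the procedure in place.

"""install or update stored procedures"""
from alembic import op

revision = '0001_add_procedures'  # placeholder revision id
down_revision = None

def upgrade():
    op.execute("""
        CREATE OR REPLACE FUNCTION function_1(a text, b text, max_split integer)
        RETURNS text AS $$
        BEGIN
            -- do stuff --
            RETURN a;
        END;
        $$ LANGUAGE plpgsql;
    """)

def downgrade():
    op.execute("DROP FUNCTION IF EXISTS function_1(text, text, integer)")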
I'm trying to execute TSQL queries in a remote MSSQL database using SQLAlchemy and pymssql. I've tested my procedural query directly in the database and it works as intended; it also works if I run it directly through pymssql. If I run a regular one-liner query such as:
select table_name from INFORMATION_SCHEMA.tables
Through SQLAlchemy this also works as it should. But when I try to execute the following TSQL query, it does not actually create the table:
IF NOT EXISTS (SELECT *
               FROM INFORMATION_SCHEMA.TABLES
               WHERE TABLE_NAME = 'SOME_TABLE')
BEGIN
    CREATE TABLE SOME_TABLE (SOME_TEXT VARCHAR(255), SOME_TIME DATETIME)
END
It runs as if it were successful, and if I try to read the result of the execution it gives me a "Resource already closed" error, which is expected since it is a CREATE query. However, if I try to add data to the table SOME_TABLE, it complains that the table does not exist. It feels like the query is only uploaded as a function but never executed. Any ideas? Or even better: TSQL queries that actually work when executed with SQLAlchemy and pymssql.
Thanks,
You need to commit your pending changes in the Session. For a basic understanding of the process, read Using the Session. Quick solution:
session.commit()
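The same applies if you execute the DDL on a Core connection rather than a Session: the pymssql DBAPI opens an implicit transaction, so the CREATE TABLE is rolled back unless it is committed. A minimal sketch; the connection URL values are placeholders:

from sqlalchemy import create_engine, text

engine = create_engine("mssql+pymssql://user:password@host/dbname")  # placeholder URL
with engine.begin() as conn:  # commits on success, rolls back on error
    conn.execute(text("""
        IF NOT EXISTS (SELECT * FROM INFORMATION_SCHEMA.TABLES
                       WHERE TABLE_NAME = 'SOME_TABLE')
        BEGIN
            CREATE TABLE SOME_TABLE (SOME_TEXT VARCHAR(255), SOME_TIME DATETIME)
        END
    """))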
TIME and TEXT are reserved words.
I do not know how SQLAlchemy or pymssql talks to SQL Server, whether via the native client or ODBC; it eventually all boils down to a tabular data stream (TDS) over a network protocol like TCP/IP.
Check out the reserved word list on TECHNET.
-- Create table ?
IF OBJECT_ID('DBO.SOMETABLE') IS NULL
CREATE TABLE DBO.SOMETABLE (MY_TEXT VARCHAR(255), MY_TIME DATETIME);
I use the OBJECT_ID function since it is less typing.
But NOT EXISTS also works with a SELECT from either sys.objects or INFORMATION_SCHEMA.TABLES, given the correct WHERE clause.
I'm trying to insert all values of a list into my sqlite3 database. When I simulate this query using the Python interactive interpreter, I am able to insert a single value into the DB properly. But my code fails when using an iteration:
...
connection = lite.connect(db_name)
cursor = connection.cursor()
for name in match:
    cursor.execute("""INSERT INTO video_dizi(name) VALUES (?)""", (name,))
connection.commit()
...
error: cursor.execute("""INSERT INTO video_dizi(name) VALUES (?)""", (name,))
sqlite3.OperationalError: database is locked
Any way to overcome this problem?
Do you have another connection elsewhere in your code that you use to begin a transaction that is still active (not committed) when you try to commit the operation that fails?
This error can happen because you have your site.db or database file open in a DB Browser-type application, viewing it in an interactive database interface. Just close that application and it will work fine.
Your database is in use by another process or connection. If you need real concurrency, use a real RDBMS.
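If the lock is only held briefly by another process, you can also ask sqlite3 to wait for it instead of failing immediately. A minimal sketch; the 30-second value is an arbitrary assumption:

import sqlite3

# Wait up to 30 seconds for a competing transaction to release its lock
connection = sqlite3.connect(db_name, timeout=30.0)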