I'm writing a Python application that interacts with a MySQL database. The basic way to do this is to connect to the database and execute the queries using the database's cursor object. Should I hardcode each query in the Python code, or should I put each query in a stored procedure? Which is the better design choice (keeping security and elegance in mind)? Note that many of the queries are single-liners (select queries).
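For illustration, a minimal sketch of the two options using MySQLdb; the connection parameters, table, and procedure names here are made up, and the stored procedure is assumed to already exist on the server.

import MySQLdb

db = MySQLdb.connect(host="localhost", user="app", passwd="secret", db="mydb")
cur = db.cursor()

# Option 1: the query is hardcoded in the Python code, with values bound
# through driver placeholders (never string-formatted in) to avoid SQL injection.
cur.execute("SELECT name, email FROM users WHERE id = %s", (42,))
user = cur.fetchone()

# Option 2: the query lives in a stored procedure on the server and the
# Python code only calls it by name.
cur.callproc("get_user_by_id", (42,))
user = cur.fetchone()

cur.close()
db.close()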
I am using SQLAlchemy and pg8000 to connect to a Postgres database.
I have checked the pg_stat_activity table, which shows me many select queries in the 'idle in transaction' state. But the application reads much more than it writes; inserts are few and far between.
I suspect that a new transaction is created for each query, even for simple select statements.
Is it possible to run a read-only query without the need for a transaction? So that it does not need to be committed/rolled back?
Currently, the app runs its queries with the method sqlalchemy.engine.Engine.execute for CRUD operations and with cursors for calling stored procedures. How should I update these method calls to indicate that I want some of them not to start transactions?
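For context, a rough sketch of the current calls, plus one hedged possibility: switching a connection to AUTOCOMMIT through execution options, which SQLAlchemy supports on dialects that allow it. The engine URL, table, and procedure names below are made up.

from sqlalchemy import create_engine

engine = create_engine("postgresql+pg8000://app:secret@db-host/appdb")

# Current pattern: Engine.execute opens a connection and an implicit
# transaction for every statement, including plain SELECTs.
rows = engine.execute("SELECT id, name FROM items").fetchall()

# Hedged alternative: request autocommit for a read-only query so it is
# not left 'idle in transaction'.
with engine.connect().execution_options(isolation_level="AUTOCOMMIT") as conn:
    rows = conn.execute("SELECT id, name FROM items").fetchall()

# Stored procedures are currently called through a raw DBAPI cursor.
raw = engine.raw_connection()
try:
    cur = raw.cursor()
    cur.callproc("some_procedure", (1,))
    cur.close()
    raw.commit()
finally:
    raw.close()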
I have an application developed with peewee that uses MariaDB, but now I need to duplicate the inserts so they are also registered in an MSSQL (Microsoft SQL Server) database.
I'm trying to use the peewee-mssql and peewee-mssqlserv database drivers (or frameworks, or libraries; I'm confused about what they are called), but it seems that the select and insert functions differ from those defined in the original peewee, and I can't find documentation or an API reference for these peewee-mssql drivers.
How can I use the same model classes to insert records into MariaDB and MSSQL interchangeably? Is that possible?
Thanks once again.
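For illustration, a minimal sketch of what this would look like with peewee's bind_ctx, assuming a working database class for SQL Server; the MssqlDatabase import below is hypothetical and depends on which peewee-mssql driver is actually used.

from peewee import Model, CharField, MySQLDatabase
# Hypothetical import; the real class name depends on the driver used.
from peewee_mssql import MssqlDatabase

maria_db = MySQLDatabase("appdb", host="maria-host", user="app", password="secret")
mssql_db = MssqlDatabase("appdb", host="mssql-host", user="app", password="secret")

class Customer(Model):
    name = CharField()

    class Meta:
        database = maria_db  # MariaDB is the default database

def insert_everywhere(name):
    # Insert into MariaDB as usual...
    Customer.create(name=name)
    # ...then temporarily bind the same model to MSSQL and repeat the insert.
    with mssql_db.bind_ctx([Customer]):
        Customer.create(name=name)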
I am working on a web app written with the Pyramid web framework, using MySQL to store the relational stuff. But the web app is also a data storage facility, and we use Postgres for that purpose.
Note that each user's account gets its own connection parameters in Postgres. The hosts running Postgres are not going to be the same for all users.
We have a couple of stored procedures that are essential for the app to function. I was wondering how to ship the procedures to each Postgres database instance. I would like to make sure that it is pretty easy to update them as well.
Here is what I could come up with so far.
I have a file in the app's code base called procedures.sql:
CREATE FUNCTION {procedure_name_1} (text, text,
max_split integer) RETURNS text AS $$
BEGIN
-- do stuff --
END;
$$ LANGUAGE plpgsql;
CREATE FUNCTION {procedure_name_2} (text, text,
max_split integer) RETURNS text AS $$
BEGIN
-- do stuff --
END;
$$ LANGUAGE plpgsql;
Whenever a user wants to talk to his DB, I execute the _add_procedures_to_db function from the Python app.
procedure_name_map = {
    'procedure_name_1': 'function_1_v1',
    'procedure_name_2': 'function_2_v1'
}

def _add_procedures_to_db(connection):
    cursor = connection.cursor()
    with open(PROCEDURE_FILE_PATH) as f:
        sql = f.read().format(**procedure_name_map)
    try:
        cursor.execute(sql)
        connection.commit()
    except Exception:
        # On every call after the first, CREATE FUNCTION fails because the
        # function already exists, so the error is swallowed here.
        pass
Note that the connection params will be obtained from the MySQL DB whenever we want to do some transaction within the web response cycle.
The strategy is to change function_1_v1 to function_1_v2 whenever I update the code for the procedure.
But this seems like an expensive way to do it: on every connection after the first, the CREATE FUNCTION call raises an exception that has to be handled.
So here is my question:
Is there another way to do this from within the web app code? Or should I make procedure updates a part of deployment and configuration rather than an app layer thing?
If you are looking for how to change the database (tables, views, stored procedures) between deployments of different Pyramid web application versions, that's usually called a migration.
If you are using SQLAlchemy, you can use automated migration tools like Alembic.
If you are using raw SQL commands, then you need to write and run a custom command-line script each time you deploy a different version of the application. This command-line script would prepare the database for the current application version; that would include running ALTER TABLE SQL commands, etc.
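For example, a minimal sketch of an Alembic migration that (re)creates one of the functions from procedures.sql; the revision identifiers and the function body are placeholders.

"""Add procedure_name_1."""
from alembic import op

# Placeholder revision identifiers; normally generated by `alembic revision`.
revision = "0002_add_procedure"
down_revision = "0001_initial"

def upgrade():
    op.execute("""
        CREATE OR REPLACE FUNCTION procedure_name_1(text, text, max_split integer)
        RETURNS text AS $func$
        BEGIN
            -- do stuff --
        END;
        $func$ LANGUAGE plpgsql;
    """)

def downgrade():
    op.execute("DROP FUNCTION IF EXISTS procedure_name_1(text, text, integer)")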
I have an online database and connect to it by using MySQLdb.
db = MySQLdb.connect(......)
cur = db.cursor()
cur.execute("SELECT * FROM YOUR_TABLE_NAME")
data = cur.fetchall()
Now, I want to write the whole database to my localhost (overwrite). Is there any way to do this?
Thanks
If I'm reading you correctly, you have two database servers, A and B (where A is a remote server and B is running on your local machine) and you want to copy a database from server A to server B?
In all honesty, if this is a one-off, consider using the mysqldump command-line tool, either directly or calling it from python.
If not, the last answer on http://bytes.com/topic/python/answers/24635-dump-table-data-mysqldb details the SQL needed to define a procedure to output tables and data, though this may well miss subtleties that mysqldump does not.
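For the one-off case, a rough sketch of driving mysqldump and mysql from Python with subprocess; host names, credentials, and database names are placeholders.

import subprocess

# Dump the remote database (server A) into a local SQL file...
with open("backup.sql", "wb") as out:
    subprocess.check_call(
        ["mysqldump", "-h", "remote-host", "-u", "remote_user",
         "-premote_password", "mydatabase"],
        stdout=out,
    )

# ...then load that file into the local server (server B), overwriting its copy.
with open("backup.sql", "rb") as dump:
    subprocess.check_call(
        ["mysql", "-h", "localhost", "-u", "local_user",
         "-plocal_password", "mydatabase"],
        stdin=dump,
    )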
I am working on a project that involves inserting a lot of data into the database. I am wondering if anybody knows how to fill 2 or 3 tables in the database at the same time. An example or pseudocode would be helpful.
Thanks
If you have a lot of data to insert into the database all at once, then you probably are interested in bulk loading data. The ideal tool for that is the bulk loader that likely comes with your database -- Oracle, Microsoft SQL Server, Sybase SQL Server, and MySQL (to name the ones that come to mind) all have bulk loaders. For example, Microsoft has the bulk insert statement and the bcp program to perform this task. I recommend you look into that rather than rigging up some tool in python, with or without threads.
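As a rough illustration, here is a hedged sketch of bulk-loading CSV files into two MySQL tables with LOAD DATA LOCAL INFILE from Python; the table names, file paths, and credentials are placeholders, and the SQL Server equivalent would be a BULK INSERT statement or the bcp program.

import MySQLdb

db = MySQLdb.connect(host="localhost", user="app", passwd="secret",
                     db="mydb", local_infile=1)
cur = db.cursor()

# Load one CSV file per table; the server parses and inserts the rows in
# bulk, which is much faster than issuing row-by-row INSERT statements.
for table, path in [("orders", "/tmp/orders.csv"),
                    ("order_items", "/tmp/order_items.csv")]:
    sql = ("LOAD DATA LOCAL INFILE '{path}' INTO TABLE {table} "
           "FIELDS TERMINATED BY ',' LINES TERMINATED BY '\\n'").format(
               path=path, table=table)
    cur.execute(sql)

db.commit()
cur.close()
db.close()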