Is there a way to list all attached databases for a sqlite3 Connection? For instance:
con = sqlite3.connect(":memory:")
con.execute("attach database 'a.db' as 'a'")
con.execute("attach database 'b.db' as 'b'")
con.list_databases() # <- doesn't exist
In the sqlite3 command shell, the equivalent command is .databases. I tried poking at sqlite_master, but of course that's a table, and it exists in each attached DB. I see nothing in the docs either. Is this possible?
Finally found it by digging into the sqlite3 source code:
con.execute("PRAGMA database_list").fetchall()
Forgot about PRAGMAs 🙃
(https://github.com/sqlite/sqlite/blob/master/src/shell.c.in#L8417)
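For example, a quick sketch tying it back to the connection from the question:
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("attach database 'a.db' as 'a'")
con.execute("attach database 'b.db' as 'b'")
# Each row is (seq, name, file); 'main' plus every attached database shows up.
for seq, name, file in con.execute("PRAGMA database_list"):
    print(seq, name, file)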
Related
I am trying to insert data into a database, but I get this error:
sqlite3.OperationalError: near "WHERE": syntax error
This is my code:
c.execute(f"INSERT INTO math(qula) WHERE name = '{member.name}' VALUES({saboloo})")
I suspect that you want to update the column qula of an existing row of the table math and not insert a new row.
Also, it's good practice to use ? placeholders instead of string formatting:
c.execute("UPDATE math SET qula = ? WHERE name = ?", (saboloo, member.name))
To insert data with sqlite3, you first have to import the sqlite3 module from the Python standard library. You then connect to the file by passing a file path to the connect() method of the sqlite3 module; if the database you pass to connect() does not exist, one will be created at that path, and if it does exist, it will connect to it.
import sqlite3
con = sqlite3.connect('/path/xxx.sqlite3')
You then have to create a cursor object using the cursor() method:
c = con.cursor()
You then prepare SQL queries to INSERT a record into the database; again, prefer a ? placeholder over an f-string, and commit so the row is actually saved:
c.execute("INSERT INTO math(qula) VALUES(?)", (saboloo,))
con.commit()
I hope this one helps.
You can also read more here: Python SQLite insert data
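Putting those steps together, a minimal runnable sketch (the math table schema here is assumed from the question):
import sqlite3

con = sqlite3.connect('/path/xxx.sqlite3')  # created if it does not exist
c = con.cursor()
c.execute("CREATE TABLE IF NOT EXISTS math (name TEXT, qula INTEGER)")  # assumed schema
saboloo = 42  # example value
c.execute("INSERT INTO math(qula) VALUES(?)", (saboloo,))
con.commit()  # write the row to disk
con.close()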
I am trying to learn how to get Microsoft SQL query results using Python and the pyodbc module, and I have run into an issue: I can't return the same results using the same query that I use in Microsoft SQL Management Studio.
I've looked at the pyodbc documentation and set up my connection correctly... at least I'm not getting any connection errors at execution. The only issue seems to be returning the table data.
import pyodbc
import sys
import csv
cnxn = pyodbc.connect('DRIVER={SQL Server};SERVER=<server>;DATABASE=<db>;UID=<uid>;PWD=<PWD>')
cursor = cnxn.cursor()
cursor.execute("""
SELECT request_id
From audit_request request
where request.reception_datetime between '2019-08-18' and '2019-08-19' """)
rows = cursor.fetchall()
for row in cursor:
print(row.request_id)
When I run the above code I get this in the Python terminal window:
Process returned 0 (0x0) execution time : 0.331 s
Press any key to continue . . .
I tried this same query in SQL Management Studio and it returns the results I am looking for. There must be something I'm missing as far as displaying the results with Python.
Your rows = cursor.fetchall() call already consumes the entire result set, so the for row in cursor: loop that follows has nothing left to iterate over and never prints anything. Iterate over the rows you fetched instead:
for row in rows:
    print(row.request_id)
Learn more here: https://github.com/mkleehammer/pyodbc/wiki/Connection#cursor
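As a side note, a parameterized version of the same query (a sketch, using pyodbc's ? placeholders) avoids hand-quoting the dates:
cursor.execute(
    """
    SELECT request_id
    FROM audit_request request
    WHERE request.reception_datetime BETWEEN ? AND ?
    """,
    ('2019-08-18', '2019-08-19'))
for row in cursor.fetchall():
    print(row.request_id)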
I am using an old version of SQLAlchemy (0.8) and I need to execute "REINDEX DATABASE <dbname>" on PostgreSQL 9.4 using the SQLAlchemy API.
Initially I tried with:
conn = pg_db.connect()
conn.execute('REINDEX DATABASE sg2')
conn.close()
but I got the error "REINDEX DATABASE cannot run inside a transaction block".
I read around on the internet and tried other approaches:
engine.execute(text("REINDEX DATABASE sg2").execution_options(autocommit=True))
(I also tried with autocommit=False.)
and
conn = engine.raw_connection()
cursor = conn.cursor()
cursor.execute('REINDEX DATABASE sg2')
cursor.close()
I always get the same error.
I also tried the following:
conn.execution_options(isolation_level="AUTOCOMMIT").execute(query)
but I got the error:
Invalid value 'AUTOCOMMIT' for isolation_level. Valid isolation levels for postgresql are REPEATABLE READ, READ COMMITTED, READ UNCOMMITTED, SERIALIZABLE
What am I missing here? Thanks for any help.
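One workaround that often comes up for statements that can't run inside a transaction is to drop below SQLAlchemy to the DBAPI driver itself; an untested sketch, assuming the driver is psycopg2 and using hypothetical credentials:
import psycopg2

raw = psycopg2.connect(dbname="sg2", user="postgres")  # hypothetical credentials
raw.autocommit = True  # run outside any transaction block
cur = raw.cursor()
cur.execute("REINDEX DATABASE sg2")
cur.close()
raw.close()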
I was looking for a better way to read files and use them to create tables for MySQL, and I stumbled across agate, so I'm trying to see if it will work for my purposes.
I have created the table from a csv file using:
table = agate.Table.from_csv('testsheet.csv')
That worked just fine, and I saw that agate has an option to save a table to SQL using agatesql and the command:
table.to_sql('postgresql:///database', 'output_table')
Is there any way to use this command or to make this module work with MySQL, or will it only work with PostgreSQL? Thanks in advance for any help.
As the agate-sql API docs indicate, the first argument to .to_sql() can be any valid SQLAlchemy connection string or connection object (Postgres was just the example used):
from sqlalchemy import create_engine
...
# CONNECTION STRING
table.to_sql('mysql://user:pwd@hostname:port/database', 'output_table')
# CONNECTION OBJECT
my_engine = create_engine('mysql://user:pwd@hostname:port/database')
table.to_sql(my_engine, 'output_table')
If needed, you can interface SQLAlchemy with an available DB-API driver like pymysql:
import pymysql
from sqlalchemy import create_engine
...
# CONNECTION OBJECT
my_engine = create_engine("mysql+pymysql://user:pwd#hostname:port/database")
table.to_sql(my_engine, 'output_table')
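End to end, a sketch using the file and table names from the question (importing agatesql is what patches the to_sql() method onto agate.Table):
import agate
import agatesql  # registers to_sql() on agate.Table
from sqlalchemy import create_engine

table = agate.Table.from_csv('testsheet.csv')
my_engine = create_engine('mysql+pymysql://user:pwd@hostname:port/database')
table.to_sql(my_engine, 'output_table')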
I have used Python to parse a txt file for specific information (dates, $ amounts, lbs, etc) and now I want to export that data to an Oracle table that I made in SQL Developer.
I have successfully connected Python to Oracle with the cx_Oracle module, but I am struggling to export or even print any data to my database from Python.
I am not proficient at using SQL; I know simple queries and that's about it. I have explored the Oracle docs and haven't found straightforward export commands. When exporting data to an Oracle table via Python, is it Python code I will be using or SQL code? Is it the same as importing a CSV file, for example?
I would like to understand how to write to an Oracle table from Python; I need to parse and export a very large amount of data, so this won't be a one-time export/import. I would also ideally like a way to preview my import to ensure it aligns correctly with my already-created Oracle table, or, failing that, a simple undo action would suffice.
If my problem is unclear I am more than happy to clarify it. Thanks for all help.
My code so far:
import cx_Oracle
dsnStr = cx_Oracle.makedsn("sole.wh.whoi.edu", "1526", "sole")
con = cx_Oracle.connect(user="myusername", password="mypassword", dsn=dsnStr)
print (con.version)
#imp 'Book1.csv' [this didn't work]
cursor = con.cursor()
print (cursor)
con.close()
From Import a CSV file into Oracle using CX_Oracle & Python 2.7 you can see the overall plan.
So if you have already parsed the data into a CSV, you can easily do it like this:
import cx_Oracle
import csv

dsnStr = cx_Oracle.makedsn("sole.wh.whoi.edu", "1526", "sole")
con = cx_Oracle.connect(user="myusername", password="mypassword", dsn=dsnStr)
print(con.version)

cursor = con.cursor()
text_sql = '''
INSERT INTO tablename (firstfield, secondfield) VALUES(:1, :2)
'''
my_file = r'C:\CSVData\Book1.csv'  # raw string so the backslashes stay literal
with open(my_file, newline='') as f:
    cr = csv.reader(f)
    for row in cr:
        print(row)
        cursor.execute(text_sql, row)  # each CSV row supplies :1 and :2
con.commit()  # without a commit the inserts are discarded when the connection closes
print('Imported')
con.close()
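For a very large file, it may be worth batching the inserts instead of executing row by row; a hedged sketch using cx_Oracle's executemany() with the same assumed table and file:
with open(my_file, newline='') as f:
    rows = list(csv.reader(f))
cursor.executemany(text_sql, rows)  # one round trip for the whole batch
con.commit()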