Python SQLite equivalent to .tables (one-liner without Pandas)

When using the SQLite executable, I can print to console a list of all the tables in a database with .tables. Is there an equivalent one-liner way to do this in Python using the sqlite3 module without using Pandas?
Based on the answers at List of tables, db schema, dump etc using the Python sqlite3 API, it appears you have to create a cursor first, then run a SQL query that returns a list of the tables (actually a list of tuples of Unicode strings that needs to be cleaned up for a pretty print to console). That's five lines, versus SQLite's one line.
Or maybe a related question is: why do I need to open a cursor to get this information in Python sqlite3?
import sqlite3

con = sqlite3.connect(db)
cursor = con.cursor()
cursor.execute("SELECT name FROM sqlite_master WHERE type='table'")
tables = cursor.fetchall()            # list of 1-tuples, e.g. [('foo',), ('bar',)]
tables = [str(t[0]) for t in tables]  # unwrap each tuple to a plain string
print(tables)

It's not pretty, but since you're only using the variables once, you can simply chain all the calls.
print([str(t[0]) for t in sqlite3.connect(db).cursor().execute("SELECT name FROM sqlite_master WHERE type='table'").fetchall()])
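As for the follow-up question about the cursor: the sqlite3 module documents Connection.execute() as a shortcut that creates an intermediate cursor for you and returns it, so the explicit cursor() call can be dropped. A slightly shorter sketch using that shortcut (db is the path to your database file, as above):
import sqlite3

con = sqlite3.connect(db)
# Connection.execute() creates a cursor internally and returns it, and a
# cursor can be iterated directly, so no fetchall() is needed either.
print([row[0] for row in con.execute("SELECT name FROM sqlite_master WHERE type='table'")])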

Related

Is there a way to return a specified table's column titles in Python using mysql.connector?

I'm trying to get a list of column names from a table in a SQL database. For example, my database is called "book_shop" and the table I want to return the columns for is called "books".
It's just the string formatting I'm after. I've tried the following...
SELECT *
from information_schema.columns
WHERE table_schema = 'book_shop'
ORDER BY table_name,ordinal_position
I've got the fetchall and execute commands, but it says there's something wrong with my SQL syntax.
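A minimal sketch of how such a query might be run from Python with mysql.connector, passing the schema and table names as parameters (the connection details are placeholders; only book_shop and books come from the question):
import mysql.connector

conn = mysql.connector.connect(
    host="localhost", user="user", password="password", database="book_shop"
)
cursor = conn.cursor()
cursor.execute(
    "SELECT column_name FROM information_schema.columns "
    "WHERE table_schema = %s AND table_name = %s "
    "ORDER BY ordinal_position",
    ("book_shop", "books"),
)
print([row[0] for row in cursor.fetchall()])  # just the column names, in table order
conn.close()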

About python sqlite3 order by

I am studying the Python sqlite3 database module. I think this is a very simple problem, but it is blocking my next step. Could you help me?
The print output looks correct in the VS Code terminal, but the change is not written back to the DB file. I've searched several times but I can't fix it.
When I execute the code, the rows in the DB file are not sorted.
import sqlite3
conn = sqlite3.connect('sqliteDB1.db')
cursor = conn.cursor()
cursor.execute("SELECT * FROM member")
temp123 = cursor.fetchall()
print(temp123)
cursor.execute("SELECT * FROM member ORDER BY -code")
temp321 = cursor.fetchall()
conn.commit  # note: missing parentheses, so commit() is never actually called
print(temp321)
conn.close()
A SELECT statement just returns data from the database; it does not modify it. Moreover, tables in SQL databases are inherently unordered sets of rows. They have no intrinsic order, and you should never rely on the order in which rows happen to be returned unless you explicitly sort them with an ORDER BY clause.
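A minimal sketch to illustrate the point, reusing the member table and code column from the question: the ORDER BY only sorts the rows returned by that query, not what is stored in the file.
import sqlite3

conn = sqlite3.connect('sqliteDB1.db')
cursor = conn.cursor()

# The ORDER BY clause sorts this result set only; the member table
# stored in sqliteDB1.db is left exactly as it was.
cursor.execute("SELECT * FROM member ORDER BY code DESC")
for row in cursor.fetchall():
    print(row)

conn.close()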

Syntax for these two SQL Queries in cursor.execute() when I have a variable (%variable) from my python code in each query?

I'm writing a Python program that uses Impala for SQL. I'm connecting to a database and running a list of queries. The first thing I did was take elements from the original database and store them in a Python list, assigning elements of the list to variables (variable1, variable2). I'm able to reference variables from my Python code in a single SQL query, but I'm having trouble combining both of these queries via UNION, which I need to do for my next step. What is the syntax for joining the queries (as shown below) together?
cursor.execute("SELECT * from database where database.variable = '%str'"%variable1)
cursor.execute("SELECT * from database where database.variable = '%str'"%variable2)
Here's how you can use variables in your SQL queries. The parameters should be passed as a tuple.
sql = "SELECT * from database where database.variable = %s"
prepared_variable = (variable1, ) # tuple
cursor.execute(sql, prepared_variable)
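For the UNION the question actually asks about, the same idea extends to two placeholders and a two-element tuple (a sketch, reusing the question's placeholder table and column names and the placeholder style shown above):
sql = (
    "SELECT * FROM database WHERE database.variable = %s "
    "UNION "
    "SELECT * FROM database WHERE database.variable = %s"
)
cursor.execute(sql, (variable1, variable2))  # one value per placeholder, in order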

Python: How to remove single quotes from list item

I'm working on a bit of python code to run a query against a redshift (postgres) SQL database, and I'm running into an issue where I can't strip off the surrounding single quotes from a variable I'm passing to the query. I'm trying to drop a number of tables from a list. This is the basics of my code:
import psycopg2

def func(table_list):
    drop_query = 'drop table if exists %s'  # loaded from file
    table_name = table_list[0]  # table_name = 'my_db.my_table'
    con = psycopg2.connect(dbname=DB, host=HOST, port=PORT, user=USER, password=PASS)
    cur = con.cursor()
    cur.execute(drop_query, (table_name, ))  # this line is giving me trouble
    # cleanup statements for the connection

table_list = ['my_db.my_table']
When func() gets called, I get the following error:
syntax error at or near "'my_db.my_table'"
LINE 1: drop table if exists 'my_db.my_table...
^
Is there a way I can remove the surrounding single quotes from my list item?
For the time being, I've done it (what I think is) the wrong way and used string concatenation, but I know this is basically begging for SQL injection.
This is not how psycopg2 works. The %s placeholder is for values, not identifiers: psycopg2 quotes and escapes the parameter as a string literal (to tokenize it safely and avoid SQL injection), which is why your table name ends up wrapped in single quotes.
You need to modify the query before it gets to the execute statement.
drop_query = 'drop table if exists {}'.format(table_name)
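Put back into the question's func(), a minimal sketch might look like this (still subject to the SQL-injection caveat that follows):
def func(table_list):
    table_name = table_list[0]  # e.g. 'my_db.my_table'
    drop_query = 'drop table if exists {}'.format(table_name)  # table name interpolated directly
    con = psycopg2.connect(dbname=DB, host=HOST, port=PORT, user=USER, password=PASS)
    cur = con.cursor()
    cur.execute(drop_query)  # no parameter tuple: nothing is left to substitute
    con.commit()
    cur.close()
    con.close()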
I warn you, however: do not allow these table names to be created by outside sources, or you risk SQL injection.
However, newer versions of psycopg2 provide something similar via the sql module:
http://initd.org/psycopg/docs/sql.html#module-psycopg2.sql
from psycopg2 import sql

cur.execute(
    sql.SQL("insert into {} values (%s, %s)").format(sql.Identifier('my_table')),
    [10, 20]
)
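Applied to the question's DROP TABLE case, a sketch could look like the following; splitting the schema-qualified name into two identifiers is an assumption about how table_name is formatted.
from psycopg2 import sql

schema, table = table_name.split('.', 1)  # assumes names like 'my_db.my_table'
cur.execute(
    sql.SQL("drop table if exists {}.{}").format(
        sql.Identifier(schema), sql.Identifier(table)
    )
)
con.commit()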

Python - Can I insert rows into one database using a cursor (from select) from another database?

I am trying to select data from our main database (postgres) and insert it into a temporary sqlite database for some comparison, analytics and reporting. Is there an easy way to do this in Python? I am trying to do something like this:
Get data from the main Postgres db:
import psycopg2
postgres_conn = psycopg2.connect(connection_string)
from_cursor = postgres_conn.cursor()
from_cursor.execute("SELECT email, firstname, lastname FROM schemaname.tablename")
Insert into SQLite table:
import sqlite3
sqlite_conn = sqlite3.connect(db_file)
to_cursor = sqlite_conn.cursor()
insert_query = "INSERT INTO sqlite_tablename (email, firstname, lastname) values %s"
to_cursor.some_insert_function(insert_query, from_cursor)
So the question is: is there a some_insert_function that would work for this scenario (either using pyodbc or using sqlite3)?
If yes, how to use it? Would the insert_query above work? or should it be modified?
Any other suggestions/approaches would also be appreciated in case a function like this doesn't exist in Python. Thanks in advance!
You should pass the result of your select query to executemany.
insert_query = "INSERT INTO smallUsers values (?,?,?)"
to_cursor.executemany(insert_query, from_cursor.fetchall())
You should also use a parameterized query (? marks), as explained here: https://docs.python.org/3/library/sqlite3.html#sqlite3.Cursor.execute
If you want to avoid loading the entire source database into memory, you can use the following code to process 100 rows at a time:
while True:
    current_data = from_cursor.fetchmany(100)
    if not current_data:
        break
    to_cursor.executemany(insert_query, current_data)
    sqlite_conn.commit()

sqlite_conn.commit()
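Alternatively, because sqlite3's executemany() accepts any iterable of parameter sequences and a psycopg2 cursor is itself iterable, you can stream the rows without calling fetchall() at all (a sketch, keeping the question's names):
insert_query = "INSERT INTO sqlite_tablename (email, firstname, lastname) VALUES (?, ?, ?)"
to_cursor.executemany(insert_query, from_cursor)  # iterates the Postgres cursor row by row
sqlite_conn.commit()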
You can look at executemany from pyodbc or sqlite3. If you can build a list of parameters from your select, you can pass the list to executemany.
Depending on the number of records you plan to insert, performance can be a problem, as referenced in this open issue: https://github.com/mkleehammer/pyodbc/issues/120
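If the destination is reached through pyodbc rather than the sqlite3 module, one commonly cited mitigation for that issue is the fast_executemany flag (available in pyodbc 4.0.19+). Whether it actually helps depends on the ODBC driver; the connection string and table below are placeholders.
import pyodbc

conn = pyodbc.connect(connection_string)  # placeholder ODBC connection string
cur = conn.cursor()
cur.fast_executemany = True  # send parameter arrays in batches if the driver supports it
cur.executemany(
    "INSERT INTO sqlite_tablename (email, firstname, lastname) VALUES (?, ?, ?)",
    rows,  # a list of (email, firstname, lastname) tuples
)
conn.commit()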
