I'm studying the Python sqlite3 module. I think this is a very simple problem, but it's blocking my next step. Could you help me?
The output prints OK in the VS Code terminal, but the change is not written to the DB file. I've searched several times but I can't fix it.
When I execute the code, the rows are not sorted in the DB file.
import sqlite3

conn = sqlite3.connect('sqliteDB1.db')
cursor = conn.cursor()

cursor.execute("SELECT * FROM member")
temp123 = cursor.fetchall()
print(temp123)

# fetch the same rows, sorted by code in descending order
cursor.execute("SELECT * FROM member ORDER BY -code")
temp321 = cursor.fetchall()
conn.commit()  # note the parentheses; a bare conn.commit does not actually call the method
print(temp321)
conn.close()
A SELECT statement just returns data from a database; it does not modify it. Moreover, tables in SQL databases are inherently unordered sets of rows. They have no intrinsic order, and you should never rely on the order in which rows happen to be returned unless you explicitly sort them with an ORDER BY clause.
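If you actually want a sorted copy persisted in the database file, one option is to materialize the sorted query into a new table. A minimal sketch, assuming the member table from the question (sorted_member is a hypothetical name):

import sqlite3

conn = sqlite3.connect('sqliteDB1.db')
cursor = conn.cursor()
# materialize the sorted result; ORDER BY code DESC is the idiomatic form of ORDER BY -code
cursor.execute("CREATE TABLE sorted_member AS SELECT * FROM member ORDER BY code DESC")
conn.commit()  # changes only reach the .db file once they are committed
conn.close()

Even then, readers should still put ORDER BY on their SELECTs; in practice the idiomatic fix is simply to sort at query time, every time.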
I had no problem SELECTing data in Python from a Postgres database using cursor/execute. I just changed the SQL to INSERT a row, but nothing is inserted into the DB. Can anyone let me know what should be modified? I'm a little confused because everything is the same except for the SQL statement.
# imports assumed from the rest of the app: Flask's redirect, psycopg2 (imported as pg) and psycopg2.extras
@app.route("/addcontact")
def addcontact():
    # this connection/cursor setup has shown no problem so far
    conn = pg.connect(conn_str)
    cur = conn.cursor(cursor_factory=psycopg2.extras.DictCursor)
    sql = f"INSERT INTO jna (sid, phone, email) VALUES ('123','123','123')"
    cur.execute(sql)
    return redirect("/contacts")
First, look at your table setup and make sure your variables are named correctly, in the right order and format. If you're not logged into the specific database on the SQL server, it won't know where the table is; you might need to send something like 'USE databasename' before your INSERT statement so you're in the right place on the server.
I might not be up to date with the language, but is that 'f' supposed to be right before the quotes? If that's in your code, it would probably throw an error, unless it has a use I'm not aware of or it's not relevant to the problem.
You have to commit your transaction by adding the line below after execute(sql):
conn.commit()
Ref: Using INSERT with a PostgreSQL Database using Python
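Put together, the fixed route would look something like this (a minimal sketch; app, conn_str, pg, psycopg2.extras, and redirect are assumed from the question):

@app.route("/addcontact")
def addcontact():
    conn = pg.connect(conn_str)
    cur = conn.cursor(cursor_factory=psycopg2.extras.DictCursor)
    cur.execute("INSERT INTO jna (sid, phone, email) VALUES ('123','123','123')")
    conn.commit()  # without this, the INSERT is discarded when the connection closes
    conn.close()
    return redirect("/contacts")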
I am trying to select data from our main database (Postgres) and insert it into a temporary SQLite database for some comparison, analytics and reporting. Is there an easy way to do this in Python? I am trying to do something like this:
Get data from the main Postgres db:
import psycopg2
postgres_conn = psycopg2.connect(connection_string)
from_cursor = postgres_conn.cursor()
from_cursor.execute("SELECT email, firstname, lastname FROM schemaname.tablename")
Insert into SQLite table:
import sqlite3
sqlite_conn = sqlite3.connect(db_file)
to_cursor = sqlite_conn.cursor()
insert_query = "INSERT INTO sqlite_tablename (email, firstname, lastname) values %s"
to_cursor.some_insert_function(insert_query, from_cursor)
So the question is: is there a some_insert_function that would work for this scenario (either using pyodbc or using sqlite3)?
If so, how do I use it? Would the insert_query above work, or should it be modified?
Any other suggestions/approaches would also be appreciated in case a function like this doesn't exist in Python. Thanks in advance!
You should pass the result of your SELECT query to executemany.
insert_query = "INSERT INTO smallUsers values (?,?,?)"
to_cursor.executemany(insert_query, from_cursor.fetchall())
You should also use a parameterized query (? marks), as explained here: https://docs.python.org/3/library/sqlite3.html#sqlite3.Cursor.execute
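For example, a parameterized single-row insert (using the table and column names from the question; the values here are made up) would be:

to_cursor.execute(
    "INSERT INTO sqlite_tablename (email, firstname, lastname) VALUES (?, ?, ?)",
    ("a@example.com", "Ada", "Lovelace"),  # values are bound as parameters, never interpolated into the SQL
)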
If you want to avoid loading the entire source database into memory, you can use the following code to process 100 rows at a time:
while True:
    current_data = from_cursor.fetchmany(100)
    if not current_data:
        break
    to_cursor.executemany(insert_query, current_data)
    sqlite_conn.commit()  # commit each batch of 100 rows

sqlite_conn.commit()  # final safety commit; harmless if every batch was already committed
You can look at executemany from pyodbc or sqlite3. If you can build a list of parameters from your SELECT, you can pass the list to executemany.
Depending on the number of records you plan to insert, performance can be a problem, as referenced in this open issue: https://github.com/mkleehammer/pyodbc/issues/120
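If you go through pyodbc and hit that slowdown, note that newer pyodbc versions expose a fast_executemany flag on the cursor, which sends the parameter arrays to the driver in bulk; whether it helps depends on the ODBC driver. A minimal sketch, assuming a pyodbc connection string:

import pyodbc

conn = pyodbc.connect(connection_string)  # connection_string assumed defined
cur = conn.cursor()
cur.fast_executemany = True  # batch parameters at the driver level instead of row by row
cur.executemany("INSERT INTO smallUsers VALUES (?,?,?)", rows)  # rows: a list of 3-tuples
conn.commit()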
I have many rows to insert into a table and tried doing it row by row, but it is taking a really long time. I read this link, Python+MySQL - Bulk Insert, and it seems like setting autocommit off can speed things up.
import jaydebeapi

connection = jaydebeapi.connect(
    'com.teradata.jdbc.TeraDriver',
    ['jdbc:teradata://some url', USER, PASS],
    ['tdgssconfig.jar', 'terajdbc4.jar'],
)
cur = connection.cursor()
connection.jconn.setAutoCommit(False)
cur.execute('select * from my_table')
connection.commit()
Other queries I perform are:
l = [(1,2,3),(2,4,6).....]
for tup in l:
    cur.execute('my insert statement')  # this is the really slow part
When I have the connection.jconn.setAutoCommit(False) line, I always get this error:
[Teradata Database] [TeraJDBC 15.10.00.14] [Error 3932] [SQLState 25000] Only an ET or null statement is legal after a DDL Statement.
When that line and connection.commit() are commented out, the code works fine. What is the right syntax to set autocommit to false?
If speed/efficiency is a concern, you should be using prepared statements and passing your parameters in as the second argument.
You could then also use .executemany():
l = [(1,2,3),(2,4,6).....]
cur.executemany('my insert statement with 3 ? params', l)
#this should be much faster
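Concretely, with a hypothetical three-column table, that would look like:

# my_table and its columns are hypothetical; substitute your own schema
insert_sql = "INSERT INTO my_table (a, b, c) VALUES (?, ?, ?)"
params = [(1, 2, 3), (2, 4, 6)]
cur.executemany(insert_sql, params)
connection.commit()  # one commit for the whole batch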
I am trying to run some queries that need to create some temporary tables and then return a result set, but I am unable to do that with the MySQLdb API.
I have already dug into this issue, for example here, but without success.
My query is like this:
create temporary table tmp1
select * from table1;
alter table tmp1 add index(somefield);
create temporary table tmp2
select * from table2;
select * from tmp1 inner join tmp2 using(somefield);
This immediately returns an empty result set. If I go to the mysql client and do a SHOW FULL PROCESSLIST, I can see my queries executing; they take some minutes to complete.
Why does the cursor return immediately instead of waiting for the query to run?
If I try to run another query, I get "Commands out of sync; you can't run this command now".
I have already tried setting my connection to autocommit:
db = MySQLdb.connect(host='ip',
                     user='root',
                     passwd='pass',
                     db='mydb',
                     use_unicode=True)
db.autocommit(True)
I also tried putting every statement in its own cursor.execute(), with db.commit() between them, but without success either.
Can you help me figure out what the problem is? I know MySQL doesn't support transactions for some operations like ALTER TABLE, but why doesn't the API wait until everything is finished, the way it does with a SELECT?
By the way, I'm trying to do this in an IPython notebook.
I suspect that you're passing your multi-statement SQL string directly to the cursor.execute function. The thing is, each of the statements is a query in its own right so it's unclear what the result set should contain.
Here's an example to show what I mean. The first case passes a semicolon-joined set of statements to execute, which is what I presume you have currently.
def query_single_sql(cursor):
    print 'query_single_sql'
    sql = []
    sql.append("""CREATE TEMPORARY TABLE tmp1 (id int)""")
    sql.append("""INSERT INTO tmp1 VALUES (1)""")
    sql.append("""SELECT * from tmp1""")
    cursor.execute(';'.join(sql))
    print list(cursor.fetchall())
Output:
query_single_sql
[]
You can see that nothing is returned, even though there is clearly data in the table and a SELECT is used.
The second case is where each statement is executed as an independent query, and the results printed for each query.
def query_separate_sql(cursor):
    print 'query_separate_sql'
    sql = []
    sql.append("""CREATE TEMPORARY TABLE tmp3 (id int)""")
    sql.append("""INSERT INTO tmp3 VALUES (1)""")
    sql.append("""SELECT * from tmp3""")
    for query in sql:
        cursor.execute(query)
        print list(cursor.fetchall())
Output:
query_separate_sql
[]
[]
[(1L,)]
As you can see, we consumed the results of the cursor for each query and the final query has the results we expect.
I suspect that even though you've issued multiple queries, the API only has a handle to the first query executed and so immediately returns when the CREATE TABLE is done. I'd suggest serializing your queries as described in the second example above.
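A small helper along those lines might look like this (a naive sketch: it splits on ';', so it assumes no semicolons inside string literals):

def run_script(cursor, script):
    # execute each statement separately, consuming its result set as we go
    results = None
    for statement in script.split(';'):
        statement = statement.strip()
        if statement:
            cursor.execute(statement)
            results = cursor.fetchall()
    return results  # the result set of the last statement

Calling run_script(cursor, your_sql) with the temporary-table script from the question should block until every statement has finished and return the rows of the final SELECT.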
I'm using MySQLdb. Does it provide a way to execute multiple SELECT queries, the way mysqli_multi_query does? If not, is there a Python library that would allow that?
There is executemany, but that's not what I'm looking for. I'm working with Sphinx and trying to get its batch queries to work.
I spent some time digging into the source code of MySQLdb, and the answer is yes, you can do multiple queries with it:
import MySQLdb

db = MySQLdb.connect(user="username", db="dbname")
cursor = db.cursor()
batch_queries = '''
SELECT * FROM posts WHERE id=1;
SELECT * FROM posts WHERE id=2;
'''
cursor.execute(batch_queries)
print cursor.fetchone()
while cursor.nextset():  # advance to the next result set, if there is one
    print cursor.fetchone()
cursor.close()
Tested successfully on my localhost. Hope it helps.
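If you need every row of each result set rather than just the first, a variant using fetchall() (same connection and batch_queries as above) might look like this:

results = []
cursor.execute(batch_queries)
while True:
    results.append(cursor.fetchall())  # all rows of the current result set
    if not cursor.nextset():  # nextset() returns None when no result sets remain
        break
print results  # one list of row tuples per SELECT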