I have made a function which counts the number of rows in a table using cursor.rowcount in Python. Now I want to apply that to tables I choose through %s. The problem is that I don't know how to do it. Here is the sample I am working with.
def data_input(table):
    cursor.execute("USE database")
    cursor.execute("TRUNCATE TABLE table1")
    cursor.execute("TRUNCATE TABLE table2")
    cursor.execute("LOAD DATA LOCAL INFILE 'table1data' INTO TABLE table1 FIELDS TERMINATED BY '\t' LINES TERMINATED BY '\n' (field1, field2, field3, field4, field5)")
    cursor.execute("LOAD DATA LOCAL INFILE 'table2data' INTO TABLE table2 FIELDS TERMINATED BY '\t' LINES TERMINATED BY '\n' (field1, field2, field3)")
    cursor.execute("SELECT * FROM %s", table)
    print cursor.rowcount

data_input("table1")
Basically, what it already does is load all the data into tables in MySQL from a text file; now I want the function to also print the number of rows for a particular table. It raises an error about wrong MySQL syntax, so the rowcount part of this code is wrong.
query = "SELECT COUNT(*) from `%s`" %table
cursor.execute(query) #execute query separately
res = cursor.fetchone()
total_rows = res[0] #total rows
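To apply that inside the original function, the last two lines would change along these lines; this is a minimal sketch, and note the table name is formatted into the SQL string rather than bound with %s, since parameter binding only works for values, not identifiers (so only pass trusted table names):

def data_input(table):
    cursor.execute("USE database")
    # ... LOAD DATA statements as above ...
    query = "SELECT COUNT(*) FROM `%s`" % table  # table name formatted in, not bound
    cursor.execute(query)
    print cursor.fetchone()[0]  # total rows in the chosen table

data_input("table1")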
In my Python code I insert a value into a table.
In the table, there is a sequence which automatically assigns an ID.
After the insert, I want to get this ID back into my Python application:
import cx_Oracle, sys

with cx_Oracle.connect(user=ORA_USER, password=ORA_PWD, dsn=ORA_DSN) as conn:
    with conn.cursor() as cur:
        cur.execute("Insert into my_table columns(data) values ('Hello')")
        conn.commit()

with cx_Oracle.connect(user=ORA_USER, password=ORA_PWD, dsn=ORA_DSN) as conn:
    with conn.cursor() as cur:
        r = cur.execute("select id from my_table where data = 'Hello'")
        print(r)
        if r is None:
            print("Cannot retrieve ID")
            sys.exit()
Unfortunately, the result set r is always "None" even though the value has been inserted properly (checked via sqldeveloper).
What am I doing wrong?
I even open a new connection to be sure to grab the value...
After calling execute() for a SELECT statement, you need to call fetchone(), fetchmany() or fetchall(), as shown in the SQL Queries section of the cx_Oracle documentation.
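For the question's query, a minimal fetchone()-based sketch (assuming the same my_table, credentials and data as above) could look like this:

with cx_Oracle.connect(user=ORA_USER, password=ORA_PWD, dsn=ORA_DSN) as conn:
    with conn.cursor() as cur:
        cur.execute("select id from my_table where data = 'Hello'")
        row = cur.fetchone()        # None only if no row matched
        if row is None:
            print("Cannot retrieve ID")
        else:
            print(row[0])           # the generated ID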
Or you can use an iterator:
with connection.cursor() as cursor:
    try:
        sql = """select systimestamp from dual"""
        for r in cursor.execute(sql):
            print(r)

        sql = """select 123 from dual"""
        (c_id,) = cursor.execute(sql).fetchone()
        print(c_id)
    except cx_Oracle.Error as e:
        error, = e.args
        print(sql)
        print('*'.rjust(error.offset + 1, ' '))
        print(error.message)
However, to get an automatically generated ID returned without the overhead of an additional SELECT, you can change the INSERT statement to use a RETURNING INTO clause. There is an example in the DML RETURNING Bind Variables section of the cx_Oracle documentation that shows an UPDATE; you can use similar syntax with INSERT.
With the table:
CREATE TABLE mytable
(myid NUMBER(11) GENERATED BY DEFAULT ON NULL AS IDENTITY (START WITH 1),
mydata VARCHAR2(20));
You can insert and get the generated key like:
myidvar = cursor.var(int)
sql = "INSERT INTO mytable (mydata) VALUES ('abc') RETURNING myid INTO :bv"
cursor.execute(sql, bv=myidvar)
i, = myidvar.getvalue()
print(i)
If you just want a unique identifier, you can get the ROWID of an inserted row without needing a bind variable: simply access cursor.lastrowid after executing an INSERT.
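A minimal sketch of that, reusing mytable from above (keep in mind lastrowid gives the Oracle ROWID, not the value of the identity column):

cursor.execute("INSERT INTO mytable (mydata) VALUES ('abc')")
print(cursor.lastrowid)  # ROWID of the row just inserted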
Beginner's question here. I wish to populate a table with many rows of data straight from a query I'm running in the same session. I wish to do it using executemany(). Currently, I insert each row as a tuple, as shown in the script below.
Select Query to get the needed data:
This query returns data with 4 columns Parking_ID, Snapshot_Date, Snapshot_Time, Parking_Stat
park_set_stat_query = "SET #row_number = 0;"
park_set_stat_query2 = "SET #row_number2 = 0;"
# one time load to catch only the changes done in the input table
park_change_stat_query = """select in1.Parking_ID,
in1.Snapshot_Date as Snapshot_Date,
in1.Snapshot_Time as Snapshot_Time,
in1.Parking_Stat
from (SELECT
Parking_ID,
Snapshot_Date,
Snapshot_Time,
Parking_Stat,
(#row_number:=#row_number + 1) AS num1
from Fact_Parking_Stat_Input
WHERE Parking_Stat<>0) as in1
left join (SELECT
Parking_ID,
Snapshot_Date,
Snapshot_Time,
Parking_Stat,
(#row_number2:=#row_number2 + 1)+1 AS num2
from Fact_Parking_Stat_Input
WHERE Parking_Stat<>0) as in2
on in1.Parking_ID=in2.Parking_ID and in1.num1=in2.num2
WHERE (CASE WHEN in1.Parking_Stat<>in2.Parking_Stat THEN 1 ELSE 0 END=1) OR num1=1"""
Here is the insert part of the script:
As you can see below, I insert each row into the destination table Fact_Parking_Stat_Input_Alter.
mycursor = connection.cursor()
mycursor2 = connection.cursor()
mycursor.execute(park_set_stat_query)
mycursor.execute(park_set_stat_query2)
mycursor.execute(park_change_stat_query)

# keep only changes in a staging table named Fact_Parking_Stat_Input_Alter
qSQLresults = mycursor.fetchall()
for row in qSQLresults:
    Parking_ID = row[0]
    Snapshot_Date = row[1]
    Snapshot_Time = row[2]
    Parking_Stat = row[3]
    # SQL query to INSERT a record into the table Fact_Parking_Stat_Input_Alter.
    mycursor2.execute('''INSERT into Fact_Parking_Stat_Input_Alter (Parking_ID, Snapshot_Date, Snapshot_Time, Parking_Stat)
                         values (%s, %s, %s, %s)''',
                      (Parking_ID, Snapshot_Date, Snapshot_Time, Parking_Stat))

# Commit your changes in the database
connection.commit()
mycursor.close()
mycursor2.close()
connection.close()
How can I improve the code so it will insert the data in one INSERT command?
Thanks
Amir
MySQL has an INSERT INTO ... SELECT command that is probably far more efficient than querying the data in Python, pulling it down and re-inserting it:
https://www.mysqltutorial.org/mysql-insert-into-select/
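If you do want to keep the logic in Python, executemany() sends all the fetched rows in one call instead of one execute() per row; here is a rough sketch, assuming the cursors and queries from the question:

mycursor.execute(park_set_stat_query)
mycursor.execute(park_set_stat_query2)
mycursor.execute(park_change_stat_query)
rows = mycursor.fetchall()  # list of (Parking_ID, Snapshot_Date, Snapshot_Time, Parking_Stat) tuples

mycursor2.executemany(
    """INSERT INTO Fact_Parking_Stat_Input_Alter
       (Parking_ID, Snapshot_Date, Snapshot_Time, Parking_Stat)
       VALUES (%s, %s, %s, %s)""",
    rows)  # one call, many rows
connection.commit()

The INSERT INTO ... SELECT approach from the link is still leaner, since you can wrap the question's SELECT directly in INSERT INTO Fact_Parking_Stat_Input_Alter (...) SELECT ... and execute it as one statement, so the rows never have to travel to Python at all.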
I'm using Python 3 and Postgres 11.5.
This is the script:
a = cursor.execute("SELECT tablename FROM pg_catalog.pg_tables limit 5")
for table in a:
    cursor.execute("SELECT * FROM pg_prewarm(public.%s)", [table[0]])
The first query gets some table names, and the loop query should run with each table name as the %s.
But for some reason the argument table[0] shows up with \ and \n in the query, and it's messing it up.
If I print the results of a, I get the table names as tuples:
[('sa1591354519',), ('sa1591397719',), ('sa1591397719',)]
So table[0] is a string.
The error I get:
1574683839 [16177], ERR, execute ({'Error while connecting to PostgreSQL': SyntaxError('syntax error at or near "\'sa1591440919\'"\nLINE 1: SELECT * FROM pg_prewarm(public.\'sa1591440919\')\n ^\n')},)
What can I do?
The errors don't have anything to do with the newlines you see, which are just an artifact of the error message. If you were to print out the error, you would see:
syntax error at or near "'sa1591440919'"
LINE 1: SELECT * FROM pg_prewarm(public.'sa1591440919')
^
In other words, Postgres doesn't like the table name you're passing because it contains quotes. This is happening because you're trying to treat the table names like a normal query parameter, which causes psycopg to quote them...but that's not what you want in this case.
Just replace your use of query templating with normal Python string substitution:
a = cursor.execute("SELECT tablename FROM pg_catalog.pg_tables limit 5")
for table in a:
    cursor.execute("SELECT * FROM pg_prewarm(public.%s)" % (table[0]))
But this won't actually work, because cursor.execute doesn't return a value, so a will be None. You would need to do something like:
cursor.execute("SELECT tablename FROM pg_catalog.pg_tables limit 5")
a = cursor.fetchall()
for table in a:
    ...
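Putting both fixes together, a rough sketch of the loop (plain string substitution is acceptable here only because the names come from pg_catalog rather than user input):

cursor.execute("SELECT tablename FROM pg_catalog.pg_tables LIMIT 5")
tables = cursor.fetchall()  # e.g. [('sa1591354519',), ('sa1591397719',), ...]
for (tablename,) in tables:
    # table names cannot be bound parameters, so format the name into the SQL text
    cursor.execute("SELECT * FROM pg_prewarm(public.%s)" % tablename)
    print(tablename, cursor.fetchone()[0])  # pg_prewarm returns the number of blocks prewarmed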
I have this query in a Python program:
I need to create a multidimensional array (if possible), or four separate arrays, one for each column returned by the query.
Can you suggest an elegant way to solve it?
conn = #connection to the server
cursor = conn.cursor()
query = ("select id, name, phone, city from guest")
cursor.execute(query)
results = cursor.fetchall()
for i in results:
    print i
cursor.close()
conn.close()
Not elegant, but it may help to unravel the mysterious Python Connector cursor class. It transfers the list of tuples returned by the query (see Copperfield's comment) into a list (phoneList) of dictionaries, one per entry in the database, which might be easier to work with in your Python script:
# ref: https://dev.mysql.com/doc/connector-python/en/connector-python-api-mysqlcursor.html
import mysql.connector

db = 'test'
table = 'phonebook'
phoneList = []

drop_table = ("DROP TABLE IF EXISTS {};").format(table)

# By default, the starting value for AUTO_INCREMENT is 1, and it will increment by 1 for each new record.
# To let the AUTO_INCREMENT sequence start with another value, use the following SQL statement:
# ALTER TABLE phonebook AUTO_INCREMENT=100;
create_table = ("CREATE TABLE {} ("
                "id int NOT NULL AUTO_INCREMENT,"
                "name varchar(30) NOT NULL,"
                "phone varchar(30) NOT NULL,"
                "city varchar(30) NOT NULL,"
                "PRIMARY KEY (id))"
                " ENGINE=InnoDB DEFAULT CHARSET=latin1;").format(table)

Names = {'Bill':  {'phone': '55123123', 'city': 'Melbourne'},
         'Mary':  {'phone': '77111123', 'city': 'Sydney'},
         'Sue':   {'phone': '55888123', 'city': 'Melbourne'},
         'Harry': {'phone': '77777123', 'city': 'Sydney'},
         'Fred':  {'phone': '88123444', 'city': 'Yongala'},
         'Peter': {'phone': '55999123', 'city': 'Melbourne'}}

cnx = mysql.connector.connect(user='mysqluser', password='xxxx', host='127.0.0.1', database=db)
cursor = cnx.cursor(dictionary=True)  # key to using **row format

cursor.execute(drop_table)
cursor.execute(create_table)

# populate db
for name, detail in dict.items(Names):
    sql = ("INSERT INTO {} (name,phone,city) VALUES ('{}','{}','{}')"
           .format(table, name, detail['phone'], detail['city']))
    cursor.execute(sql)

sql = ("SELECT id,name,phone,city FROM {}".format(table))
cursor.execute(sql)

for row in cursor:
    print("{id} {name} {phone} {city}".format(**row))
    phoneList.append(row)

print phoneList[0]['name'], phoneList[0]['city']
print phoneList[3]['name'], phoneList[3]['phone']

for entries in phoneList:  # list of dictionaries
    print entries['name'], entries

for entries in phoneList:
    for k, v in dict.items(entries):
        print k, v
    print "\n"

cnx.close()
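If plain per-column arrays are really what you are after, the list of tuples from fetchall() can also just be transposed with zip(); a short sketch, assuming the cursor and query from the question and at least one returned row:

cursor.execute("select id, name, phone, city from guest")
rows = cursor.fetchall()                 # list of (id, name, phone, city) tuples
ids, names, phones, cities = zip(*rows)  # four tuples, one per column
print names                              # e.g. ('Bill', 'Mary', 'Sue', ...)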
I am new to psycopg2. I have to insert data into the table with no duplicates. So, first I created a temporary table where I dumped all the data. And then, I check and add the data to the actual table.
Here is the code till now:
for eachline in content:
    pmid, first_name, last_name, initial, article_title, journal, language = eachline.split("\t")
    cur.execute ("INSERT INTO AUTHOR_PMID(pmid, Author_lastname, Author_firstname, Author_initial,Article_title)
        SELECT DISTINCT (pmid, Author_lastname, Author_firstname, Author_initial,Article_title)
        FROM AUTHOR_PMID WHERE NOT EXISTS (SELECT "X" FROM AUTHOR_pmid_temp
        WHERE
        AUTHOR_pmid_temp.pmid = AUTHOR_PMID.pmid
        AND AUTHOR_pmid_temp.Author_lastname = AUTHOR_PMID.Author_lastname
        AND AUTHOR_pmid_temp.Author_firstname = AUTHOR_PMID.Author_firstname
        AND AUTHOR_pmid_temp.Author_initial = AUTHOR_PMID.Author_initial
        AND AUTHOR_pmid_temp.Article_title = AUTHOR_PMID.Article_title);")
    con.commit()
The error: syntax error.
Where am I going wrong?
Try writing the query with triple quotes instead of regular quotes, as below; an ordinary quoted string cannot span multiple lines in Python, which is what causes the syntax error. Also drop the parentheses after SELECT DISTINCT (otherwise Postgres builds a single record column instead of five columns) and use SELECT 1 inside NOT EXISTS (double-quoted "X" would be treated as a column name):
for eachline in content:
    pmid, first_name, last_name, initial, article_title, journal, language = eachline.split("\t")
    cur.execute("""INSERT INTO AUTHOR_PMID(pmid, Author_lastname, Author_firstname, Author_initial, Article_title)
        SELECT DISTINCT pmid, Author_lastname, Author_firstname, Author_initial, Article_title
        FROM AUTHOR_PMID WHERE NOT EXISTS (SELECT 1 FROM AUTHOR_pmid_temp
            WHERE
            AUTHOR_pmid_temp.pmid = AUTHOR_PMID.pmid
            AND AUTHOR_pmid_temp.Author_lastname = AUTHOR_PMID.Author_lastname
            AND AUTHOR_pmid_temp.Author_firstname = AUTHOR_PMID.Author_firstname
            AND AUTHOR_pmid_temp.Author_initial = AUTHOR_PMID.Author_initial
            AND AUTHOR_pmid_temp.Article_title = AUTHOR_PMID.Article_title);""")
    con.commit()