How do I make this plpythonu stored procedure insert into the database?

I'm trying to read lines from stdin and insert data from those lines into a PostgreSQL db, using a plpythonu stored procedure.
When I call the procedure under Python 3, it runs (consuming a serial value for each line read),
but stores no data in the db.
When I call the same procedure from psql, it works fine, inserting a single row into the db.
For example:
Action: Run SELECT sl_insert_day('2017-01-02', '05:15'); from within psql as user jazcap53
Result: day inserted with day_id 1.
Action: Run python3 src/load/load_mcv.py < input.txt at the command line
Result: nothing inserted, but two serial day_ids are consumed.
Action: Run SELECT sl_insert_day('2017-01-03', '06:15'); from within psql as user jazcap53
Result: day inserted with day_id 4.
file: input.txt:
DAY, 2017-01-05, 06:00
DAY, 2017-01-06, 07:00
Output:
('sl_insert_day() succeeded',)
('sl_insert_day() succeeded',)
I'm running Fedora 25, Python 3.6.0, and PostgreSQL 9.5.6.
Thank you very much to anyone who can help me with this!
Below is an MCV example that reproduces this behavior. I expect my problem is in Step 8 or Step 6 -- the other Steps are included for completeness.
The Steps used to create the MCV:
Step 1) Create database:
In psql as user postgres,
CREATE DATABASE sl_test_mcv;
Step 2) Database init:
file: db/database_mcv.ini
[postgresql]
host=localhost
database=sl_test_mcv
user=jazcap53
password=*****
Step 3) Run database config:
file: db/config_mcv.py
from configparser import ConfigParser

def config(filename='db/database_mcv.ini', section='postgresql'):
    parser = ConfigParser()
    parser.read(filename)
    db = {}
    if parser.has_section(section):
        params = parser.items(section)
        for param in params:
            db[param[0]] = param[1]
    else:
        raise Exception('Section {} not found in the {} file'.format(section, filename))
    return db
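(As a quick sanity check of Step 3, a sketch: the dict returned by config() is exactly what gets unpacked into psycopg2.connect() in Step 8. The import path is an assumption and depends on where you run it; the print is just illustrative.)

from db.config_mcv import config

params = config()
print(sorted(params))  # expect: ['database', 'host', 'password', 'user']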
Step 4) Create table:
file: db/create_tables_mcv.sql
DROP TABLE IF EXISTS sl_day CASCADE;
CREATE TABLE sl_day (
    day_id SERIAL UNIQUE,
    start_date date NOT NULL,
    start_time time NOT NULL,
    PRIMARY KEY (day_id)
);
Step 5) Create language:
CREATE LANGUAGE plpythonu;
Step 6) Create procedure:
file: db/create_procedures_mcv.sql
DROP FUNCTION sl_insert_day(date, time without time zone);
CREATE FUNCTION sl_insert_day(new_start_date date,
                              new_start_time time without time zone) RETURNS text AS $$
    from plpy import spiexceptions
    try:
        plan = plpy.prepare("INSERT INTO sl_day (start_date, start_time) \
                             VALUES($1, $2)", ["date", "time without time zone"])
        plpy.execute(plan, [new_start_date, new_start_time])
    except plpy.SPIError, e:
        return "error: SQLSTATE %s" % (e.sqlstate,)
    else:
        return "sl_insert_day() succeeded"
$$ LANGUAGE plpythonu;
Step 7) Grant privileges:
file: db/grant_privileges_mcv.sql
GRANT SELECT, UPDATE, INSERT, DELETE ON sl_day TO jazcap53;
GRANT USAGE ON sl_day_day_id_seq TO jazcap53;
Step 8) Run procedure as python3 src/load/load_mcv.py < input.txt:
file: src/load/load_mcv.py
import sys
import psycopg2
from spreadsheet_etl.db.config_mcv import config

def conn_exec():
    conn = None
    try:
        params = config()
        conn = psycopg2.connect(**params)
        cur = conn.cursor()
        last_serial_val = 0
        while True:
            my_line = sys.stdin.readline()
            if not my_line:
                break
            line_list = my_line.rstrip().split(', ')
            if line_list[0] == 'DAY':
                cur.execute('SELECT sl_insert_day(\'{}\', \'{}\')'.
                            format(line_list[1], line_list[2]))
                print(cur.fetchone())
        cur.close()
    except (Exception, psycopg2.DatabaseError) as error:
        print(error)
    finally:
        if conn is not None:
            conn.close()

if __name__ == '__main__':
    conn_exec()

Do conn.commit() after cur.close(), before the connection is closed. psycopg2 opens a transaction implicitly and never commits unless you ask, so when conn.close() runs your uncommitted inserts are rolled back. (The day_id values are still consumed because PostgreSQL sequences are non-transactional, which is why you saw the jump from 1 to 4.)
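A minimal sketch of conn_exec() with the commit added (and, as an aside, psycopg2's own parameter binding used instead of string formatting; the config import is assumed unchanged from Step 3):

import sys
import psycopg2
from spreadsheet_etl.db.config_mcv import config

def conn_exec():
    conn = None
    try:
        conn = psycopg2.connect(**config())
        cur = conn.cursor()
        for my_line in sys.stdin:
            line_list = my_line.rstrip().split(', ')
            if line_list[0] == 'DAY':
                # let psycopg2 quote the values instead of format()
                cur.execute('SELECT sl_insert_day(%s, %s)',
                            (line_list[1], line_list[2]))
                print(cur.fetchone())
        cur.close()
        conn.commit()  # the missing step: make the inserts permanent
    except (Exception, psycopg2.DatabaseError) as error:
        print(error)
    finally:
        if conn is not None:
            conn.close()

if __name__ == '__main__':
    conn_exec()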

Related

sqlite3 error: "Unable to resolve table" even though I already remade the table

I have searched extensively before asking this seemingly simple question. I have a Python project with a SQLite DB and some code to insert into it and work with it. All was good until I decided to pull the db functions and the db file out of the main file and put both into a folder called db. So now both the function file and the db are in the same folder, one level deep. The error appears immediately, but the code still runs, albeit without actually doing anything. Everything I find online says to delete the DB file and recreate it in place, which usually solves the issue; I did that twice with no luck. What am I missing here? The code runs without an error, but does not actually work, and the error I am reporting here is from the PyCharm hover box.
import sqlite3
import datetime

def add_symbols_to_list(symbols_to_add) -> None:
    """This will add symbols to the current symbols list, but leave the previous entries.
    :param: a list of user provided symbols as comma separated strings."""
    conn = sqlite3.connect('database.db')
    c = conn.cursor()
    time_now = datetime.datetime.now()  # get current time for the int conversion below
    this_month_int = time_now.month  # get the current month and set it to an int
    # gets the current number of rows so new additions have the correct rowid
    # c.execute("SELECT * FROM currentMonthStocks")
    # current_row_number = c.execute("SELECT COUNT(*) FROM currentMonthStocks")
    # rows = int(current_row_number)
    # # https://www.sqlitetutorial.net/sqlite-count-function/
    # db_row_id = rows + 1  # set the first row number
    extra_symbols = symbols_to_add
    for i in range(len(extra_symbols)):
        c.execute("""INSERT INTO currentMonthStocks
                     (symbol, month)
                     VALUES (?, ?)""", (extra_symbols[i], this_month_int))
        # db_row_id += 1
        print("Added a symbol")
    conn.commit()  # without a commit the inserts are lost when the connection goes away
    conn.close()
    print("Symbols successfully populated into currentMonthStocks table in database.db")

new_symbols = ['tsla', 'dis', 'pltr']
add_symbols_to_list(new_symbols)
def get_symbols_at_month_start() -> None:
    """Inserts a list of symbols to trade every month into the currentMonthStocks table in database.db.
    This is called once at the start of the month; it deletes the current symbols and adds the new ones.
    :return: None."""
    # edited out the url info for brevity
    response = requests.request("POST", url, headers=headers, data=payload)
    symbols = response.json()['content']['allInstrumentRows']
    this_months_symbols = []
    for symbol in symbols:
        this_months_symbols.append(symbol['Symbol'])
    # print(this_months_symbols)
    # file = "database.db"
    try:
        conn = sqlite3.connect('database.db')  # set up database connection
        c = conn.cursor()
        print("Database Connected")
        # c.execute("""CREATE TABLE currentMonthStocks (
        #              id INT PRIMARY KEY,
        #              symbol TEXT,
        #              month INT)""")
        # print("table created successfully")
        # checks to see if there is at least 1 row in the db; if yes, deletes all rows
        if c.execute("SELECT EXISTS(SELECT 1 FROM currentMonthStocks WHERE id=1 LIMIT 2);"):
            # for i in range(len(this_months_symbols)):
            c.execute("DELETE FROM currentMonthStocks")
            print("Delete all rows successful")
        time_now = datetime.datetime.now()  # get current time for the int conversion below
        this_month_int = time_now.month  # get the current month and set it to an int
        db_row_id = 1  # set the first row number
        for i in range(len(this_months_symbols)):
            c.execute("""INSERT INTO currentMonthStocks
                         (id, symbol, month)
                         VALUES (?, ?, ?)""", (db_row_id, this_months_symbols[i], this_month_int))
            db_row_id += 1
            # print("one more entry")
        print("Symbols successfully populated into currentMonthStocks table in database.db")
        conn.commit()  # commits the current transaction
        print("Entries committed to database.db")
        # c.close()  # closes the connection to the db
        conn.close()
    except sqlite3.Error as e:
        print("sqlite3 error", e)
    finally:
        if conn:
            conn.close()
        print("Database.db Closed")
It turned out there was no problem, or at least nothing to fix: PyCharm still doesn't recognize the table, but I wrote 5 CRUD functions and they all work. So the answer is: don't worry about the hover warning; just check whether the DB is updating correctly.
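A quick way to do that check, as a sketch (file and table names assumed from the question):

import sqlite3

# confirm the inserts actually landed, independent of what PyCharm thinks
conn = sqlite3.connect('database.db')
c = conn.cursor()
c.execute("SELECT COUNT(*) FROM currentMonthStocks")
print("rows:", c.fetchone()[0])
c.execute("SELECT symbol, month FROM currentMonthStocks LIMIT 5")
print(c.fetchall())
conn.close()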

Why doesn't SQLite accept my INTEGER/TEXT data larger than 8, using Python 3?

Problem
I am trying to read a csv file into Pandas and write it to a SQLite database. The process works for all the columns in the csv file except for "Fill qty", which is a positive integer (int64). The process changes the type from TEXT/INTEGER to BLOB.
So I tried to load only the "Fill qty" column from Pandas to SQLite, and surprisingly I noticed I can safely do that for all integers smaller than 10 (I don't have 9 in my dataset, so basically 1, 2, ..., 8 loaded successfully).
Here is what I tried:
I tried everything I could think of: changing the "Fill_Qty" type in the schema from INTEGER to REAL, NULL or TEXT, and changing the data type in Pandas from int64 to float or string before inserting into the SQLite table. None of them worked. By the look of it, the "Trade_History.csv" file seems to be fine in Pandas or Excel. Is there something my eyes don't see? I am really confused about what is happening here!
You would need the .csv file to test the code. Here are the code and .csv file: https://github.com/Meisam-Heidari/Trading_Min_code
The code:
### Imports:
import pandas as pd
import numpy as np
import sqlite3
from sqlite3 import Error

def create_database(db_file):
    try:
        conn = sqlite3.connect(db_file)
    finally:
        conn.close()

def create_connection(db_file):
    """ create a database connection to the SQLite database
        specified by db_file
    :param db_file: database file
    :return: Connection object or None
    """
    try:
        conn = sqlite3.connect(db_file)
        return conn
    except Error:  # restored: a bare `try` without except/finally is a syntax error
        return None

def create_table(conn, table_name):
    try:
        c = conn.cursor()
        c.execute('''CREATE TABLE {} (Fill_Qty TEXT);'''.format(table_name))
    except Error as e:
        print('Error Code: ', e)
    finally:
        conn.commit()
        conn.close()
    return None

def add_trade(conn, table_name, trade):
    try:
        print(trade)
        sql = '''INSERT INTO {} (Fill_Qty)
                 VALUES(?)'''.format(table_name)
        cur = conn.cursor()
        cur.execute(sql, trade)
    except Error as e:
        print('Error When trying to add this entry: ', trade)
    return cur.lastrowid

def write_to_db(conn, table_name, df):
    for i in range(df.shape[0]):
        trade = (str(df.loc[i, 'Fill qty']))
        add_trade(conn, table_name, trade)
    conn.commit()

def update_db(table_name='My_Trades', db_file='Trading_DB.sqlite', csv_file_path='Trade_History.csv'):
    df_executions = pd.read_csv(csv_file_path)
    create_database(db_file)
    conn = create_connection(db_file)
    table_name = 'My_Trades'
    create_table(conn, table_name)
    # writing to DB
    conn = create_connection(db_file)
    write_to_db(conn, table_name, df_executions)
    # Reading back from DB
    df_executions = pd.read_sql_query("select * from {};".format(table_name), conn)
    conn.close()
    return df_executions

### Main Body:
df_executions = update_db()
Any alternatives?
I am wondering if anyone has had a similar experience. Any advice or solutions to help me load the data into SQLite?
I am trying to keep things light and portable, and unless there is no alternative, I would prefer not to go with Postgres or MySQL.
You're not passing a container to .execute() when inserting the data. Reference: https://www.python.org/dev/peps/pep-0249/#id15
What you need to do instead is:
trade = (df.loc[i,'Fill qty'],)
#                            ^ this comma makes `trade` into a tuple
The types of errors you got would have been:
ValueError: parameters are of unsupported type
or:
sqlite3.ProgrammingError: Incorrect number of bindings supplied. The current statement uses 1, and there are 2 supplied.
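That also explains why single-digit quantities appeared to work: a plain string is itself a sequence, so "5" supplies exactly one binding while "10" supplies two. A sketch of write_to_db() corrected along those lines (names taken from the question's code):

def write_to_db(conn, table_name, df):
    cur = conn.cursor()
    sql = 'INSERT INTO {} (Fill_Qty) VALUES (?)'.format(table_name)
    for i in range(df.shape[0]):
        trade = (int(df.loc[i, 'Fill qty']),)  # one-element tuple -> one binding
        cur.execute(sql, trade)
    conn.commit()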

Store MySQL column names in an array using the Python mysql connector

I'm quite new to MySQL, as in manipulating the database itself. I've succeeded in storing new lines in a table, but my next endeavor will be a little more complex.
I'd like to fetch the column names from an existing MySQL database and save them to an array in Python. I'm using the official mysql connector.
I'm thinking I can achieve this through information_schema.columns, but I have no idea how to build the query and store the information in an array. There will be around 100-200 columns, so performance might become an issue; I don't think it's wise to iterate my way through them one query per column.
The base code to inject code into mysql using the connector is:
def insert(data):
    query = "INSERT INTO templog(data) " \
            "VALUES(%s,%s,%s,%s,%s)"
    args = (data)
    try:
        db_config = read_db_config()
        conn = MySQLConnection(**db_config)  # the config dict needs to be unpacked
        cursor = conn.cursor()
        cursor.execute(query, args)
        # if cursor.lastrowid:
        #     print('last insert id', cursor.lastrowid)
        # else:
        #     print('last insert id not found')
        conn.commit()
        cursor.close()
        conn.close()
    except Error as error:
        print(error)
As said, the above code needs to be modified in order to get data from the SQL server. Thanks in advance!
Thanks for the help!
Got this as working code:
def GetNames(web_data, counter):
    # get all names from the database
    connection = create_engine('mysql+pymysql://user:pwd@server:3306/db').connect()
    result = connection.execute('select * from price_usd')
    a = 0
    sql_matrix = [0 for x in range(counter + 1)]
    for v in result:
        while a == 0:
            for column, value in v.items():
                a = a + 1
                if a > 1:
                    sql_matrix[a] = str(('{0}'.format(column)))
This will get all the column names from the existing SQL database.
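Since the question mentioned information_schema.columns and the official connector, here is a sketch of that route: one query returns all the column names at once, so there is no per-column iteration (host, credentials, and the db/table names are placeholders):

import mysql.connector

conn = mysql.connector.connect(host='localhost', user='user',
                               password='pwd', database='db')
cursor = conn.cursor()
cursor.execute(
    "SELECT column_name FROM information_schema.columns "
    "WHERE table_schema = %s AND table_name = %s "
    "ORDER BY ordinal_position",
    ('db', 'price_usd'))
column_names = [row[0] for row in cursor.fetchall()]  # plain Python list
print(column_names)
cursor.close()
conn.close()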

Webiopi (Raspberry pi) with sql or sqlite

I've got a problem making a database work with WebIOPi. I already import sqlite3 and changed the folder permissions, but when I run WebIOPi nothing gets created. However, every function after f.write('This is a test\n') works normally and the loop repeats. Hope you can help me!
def loop():
    db = sqlite3.connect('schedule.db')
    db.execute('DROP TABLE IF EXISTS schedule')
    db.execute('CREATE TABLE schedule (hour int, minute int, second int, status text)')
    db.execute('INSERT INTO schedule (hour,minute,second,status) VALUES (?,?,?,?)',
               (sche.get('hour'), sche.get('minute'), sche.get('second'), "yes"))
    db.commit()
    f = open('workfile', 'w')
    f.write('This is a test\n')
    loop1 = 1
    while ((now.minute >= minute_ON) and (now.minute < minute_OFF) and (loop1 < stepping)):
        step()
        loop1 = loop1 + 1
Thank you
For the database, it's better to use a cursor on the connection; see the docs for more. For the file, close() it when you're done writing data.
The code below may help:
def loop():
    db = sqlite3.connect('schedule.db')
    cur = db.cursor()  # create a cursor
    cur.execute('DROP TABLE IF EXISTS schedule')
    cur.execute('CREATE TABLE schedule (hour int, minute int, second int, status text)')
    cur.execute('INSERT INTO schedule (hour,minute,second,status) VALUES (?,?,?,?)',
                (sche.get('hour'), sche.get('minute'), sche.get('second'), "yes"))
    db.commit()
    cur.close()  # close cursor
    db.close()  # close connection to sqlite3
    f = open('workfile', 'w')
    f.write('This is a test\n')
    f.close()  # close file to write data
    loop1 = 1
    while ((now.minute >= minute_ON) and (now.minute < minute_OFF) and (loop1 < stepping)):
        step()
        loop1 = loop1 + 1
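A variation on the same idea, as a sketch: the sqlite3 connection and the file both support context managers, so the commit and the file close happen even if an exception is raised (note that `with db:` commits on success and rolls back on exception, but does not close the connection; sche and the loop variables are assumed to exist as in the question):

def loop():
    db = sqlite3.connect('schedule.db')
    try:
        with db:  # commit/rollback handled automatically
            db.execute('DROP TABLE IF EXISTS schedule')
            db.execute('CREATE TABLE schedule (hour int, minute int, second int, status text)')
            db.execute('INSERT INTO schedule (hour,minute,second,status) VALUES (?,?,?,?)',
                       (sche.get('hour'), sche.get('minute'), sche.get('second'), 'yes'))
    finally:
        db.close()
    with open('workfile', 'w') as f:  # closed automatically at the end of the block
        f.write('This is a test\n')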

Running python script from automator

I am trying to create an ".app" from Automator to run a simple Python script. When I execute the script in Automator, an error occurs saying more or less "Check your action properties and execute again",
and the history shows "Traceback (most recent call last:)". The point is that this script runs fine from a Terminal session.
The error seems to involve at least my "while" loops for renaming the tables (see below), since I can execute the script up to that stage. Is something wrong with how I manage the sqlite databases? I can't understand it, since there is no problem from the Terminal. Is anything missing?
My python script:
#!/usr/bin/python
import sqlite3
import os.path

file_name = "newDB.data"
choice = ""
if os.path.isfile(file_name):
    choice = raw_input("Erase DB? press [y] or [n]:\n")
    if choice == "y":
        print "erase"
        while True:
            try:
                os.remove(file_name)
                break
            except OSError as e:  # name the Exception `e`
                print "Failed with:", e.strerror  # look what it says
                print "Error code:", e.errno  # OSError carries errno, not code
    if choice == "n":
        print "Bye!"
        exit()

# start sqlite connection
conn = sqlite3.connect("newDB.data")
c = conn.cursor()

# attach
c.execute("ATTACH database 'store1.data' AS db1")
c.execute("ATTACH database 'store2.data' AS db2")

# rename tables
while True:
    try:
        c.execute("ALTER TABLE db1.ZPATIENT RENAME TO table1")
        print "table 1 renamed"
        break
    except:
        c.execute("ALTER TABLE db1.table1 RENAME TO ZPATIENT")
        print "except 1"
while True:
    try:
        c.execute("ALTER TABLE db2.ZPATIENT RENAME TO table2")
        print "table 2 renamed"
        break
    except:
        c.execute("ALTER TABLE db2.table2 RENAME TO ZPATIENT")
        print "except 2"

# some information commands (START):
c.execute("SELECT * from table1")
print(c.fetchall())
c.execute("SELECT * from table2")
print(c.fetchall())
# some information commands (END)

# c.execute("create table ZPATIENT as select * from table1 union select * from table2")
#   ---> first union attempt, but some entries were duplicated (one column changed?)
# remove some duplicated entries...
c.execute("create table ZPATIENT as select * from (select * from table1 union select * from table2) final group by ZDATECREATED")
c.execute("CREATE TABLE Z_PRIMARYKEY (Z_ENT int, Z_NAME text, Z_SUPER int, Z_MAX int)")
c.execute("CREATE TABLE Z_METADATA (Z_VERSION int, Z_UUID text, Z_PLIST BLOB)")

c.execute("SELECT count(*) FROM ZPATIENT")
result = c.fetchone()
number_of_rows = result[0]
print number_of_rows

start = 0
end = number_of_rows + 1
c.execute('SELECT * FROM ZPATIENT')
newresult = c.fetchall()
for row in newresult:
    start += 1
    end -= 1
    print start
    print end
    # some information commands (START):
    list_of_tuple = list(row)
    list_of_tuple[0] = start
    list_of_tuple[2] = end
    row = tuple(list_of_tuple)
    print row
    # some information commands (END)
    c.execute("UPDATE ZPATIENT SET Z_PK = ? WHERE rowid = ?", (start, start))
    c.execute("UPDATE ZPATIENT SET Z_OPT = ? WHERE rowid = ?", (end, start))

c.execute("INSERT INTO Z_PRIMARYKEY (Z_ENT, Z_NAME, Z_SUPER, Z_MAX) VALUES (0, 'Patient', 0, ?)", (start,))

# close
conn.commit()
conn.close()
For this to work I have two sqlite databases named store1.data and store2.data in the same folder...
If anyone has a solution... or perhaps there is an easier way to execute this in one click?
One simple solution is to avoid Automator and just make a bash script that calls the Python script. You can execute the bash script by double-clicking on it, if that's what you wanted.
#! /bin/bash
python scriptname.py
is all you would need, after marking the script executable with chmod +x (on macOS, giving it a .command extension lets Finder launch it in a Terminal window, which this script needs anyway because it calls raw_input).
