Hello, I am able to query but unable to insert into my JSON field with the code below. The problem is probably not very tough, but being absolutely new to MySQL I am unable to figure it out. Every time I will get a variable called last_time; the first time I need to insert it, and from the second time on I need to update last_time. If I do it manually it works and I get the output I need, as in the photo below.
import pymysql.cursors
import datetime
import json

last_time = '2344'  # this value I will get as output from my program

connection = pymysql.connect(host='localhost',
                             user='root',
                             password='Admin...',
                             db='cl......',
                             charset='utf8mb4',
                             cursorclass=pymysql.cursors.DictCursor)
print "connect successfull"

try:
    with connection.cursor() as cursor:
        sql = "INSERT INTO `tt_inv_refresh` (`cust_id`, `res_type`, `refresh_dtls`) VALUES (%s, %s, %s)"
        cursor.execute(sql, ('2', 'elb', '{"vot":[{"elb":"name1","last_refreshtime":last_time},{"elb":"name2","last_refreshtime":last_time}]}'))
    connection.commit()
except Exception as e:
    print str(e)
    print "inserted"
finally:
    connection.close()
I will be obliged if anyone can point out the mistake in my code. Thank you.
You have missed the quotes around last_time: as written, it sits inside the string literal instead of being read from your Python variable.
Correct the line with cursor.execute to:
cursor.execute(sql, ('2', 'elb',
                     '{"vot":[{"elb":"name1","last_refreshtime":' + last_time + '},'
                     '{"elb":"name2","last_refreshtime":' + last_time + '}]}'))
To avoid such issues in the future, you might consider defining an object and using json.dumps:
class Elb:
    def toJSON(self):
        return json.dumps(self, default=lambda o: o.__dict__,
                          sort_keys=True, indent=4)

    # you can use an array or a dict; I copied this from code of mine
    # which required it to be an array
    def mapper(self, detailArray=[]):
        self.elb = detailArray[0]
        self.last_refreshtime = detailArray[1]
So after you have set the data for an instance, say:
el_instance = Elb()
el_instance.mapper(<array-with-data>)
You can call el_instance.toJSON() to get serialized data anywhere.
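Along the same lines, here is a minimal sketch of the original insert built with json.dumps on a plain dict (reusing the connection and table from the question; the payload structure is taken from your example):

import json

last_time = '2344'  # value produced by the program

# build the payload as plain Python structures and let json.dumps
# handle all the quoting
payload = {"vot": [{"elb": "name1", "last_refreshtime": last_time},
                   {"elb": "name2", "last_refreshtime": last_time}]}

with connection.cursor() as cursor:
    sql = ("INSERT INTO `tt_inv_refresh` "
           "(`cust_id`, `res_type`, `refresh_dtls`) VALUES (%s, %s, %s)")
    cursor.execute(sql, ('2', 'elb', json.dumps(payload)))
connection.commit()

For the insert-first-then-update requirement, MySQL's INSERT ... ON DUPLICATE KEY UPDATE is one option, assuming the table has a unique key to match on.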
In console.log I am getting the correct argument value, but when I try to use the argument value in the insert statement it is passed as 0. The code as a whole runs without errors, but for assgn_id, instead of the actual value, it inserts 0.
import pandas as pd
import sys

print("parameters from nodejs", str(sys.argv[1]))
df = pd.read_csv("./userSetupData.csv")
df.head()

import mysql.connector as msql
from mysql.connector import Error

try:
    conn = msql.connect(host='localhost', database='pythonTest', user='root', password='0000', auth_plugin='mysql_native_password')
    if conn.is_connected():
        cursor = conn.cursor()
        cursor.execute("select database();")
        record = cursor.fetchone()
        assgn_id = str(sys.argv[1])
        print('Checking the parameter value', assgn_id)
        for i, row in df.iterrows():
            # here %s means string values
            sql = "INSERT INTO pythonTest.usr_stg VALUES (%s,%s,%s,%s,%s,assgn_id)"
            cursor.execute(sql, tuple(row))
            print("Record inserted")
        # the connection is not auto committed by default, so we must commit to save our changes
        conn.commit()
except Error as e:
    print("Error while connecting to MySQL", e)
sql is a plain string, so MySQL treats assgn_id as literal text inside the statement, not as your Python variable.
You could use an f-string with {assgn_id} to put the value into the string:
sql = f"INSERT INTO pythonTest.usr_stg VALUES (%s,%s,%s,%s,%s, {assgn_id})"
Or, better, you can put assgn_id in the tuple with the other parameters (and use a sixth %s in the query):
sql = "INSERT INTO pythonTest.usr_stg VALUES (%s,%s,%s,%s,%s, %s)"
cursor.execute(sql, tuple(row.to_list() + [assgn_id]) )
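Putting it together, here is a minimal sketch of the parameterized loop (the five existing %s placeholders are assumed to match the CSV columns):

assgn_id = str(sys.argv[1])
sql = "INSERT INTO pythonTest.usr_stg VALUES (%s,%s,%s,%s,%s,%s)"
for i, row in df.iterrows():
    # bind assgn_id as a sixth parameter instead of pasting it into the SQL
    cursor.execute(sql, tuple(row.to_list() + [assgn_id]))
conn.commit()

The parameterized form is preferable to the f-string: the driver handles quoting and type conversion, and the query is not vulnerable to SQL injection.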
We need to do bulk updates of many rows in our Postgres DB, and want to use the SQL syntax below. How do we do that using psycopg2?
UPDATE table_to_be_updated
SET msg = update_payload.msg
FROM (VALUES %(update_payload)s) AS update_payload(id, msg)
WHERE table_to_be_updated.id = update_payload.id
RETURNING *
Attempt 1 - Passing values
We need to pass a nested iterable to the psycopg2 query. For the update_payload, I've tried passing a list of lists, a list of tuples, and a tuple of tuples. All of them fail with various errors.
Attempt 2 - Writing custom class with __conform__
I've tried to write a custom class that we can use for these operations, which would return
(VALUES (row1_col1, row1_col2), (row2_col1, row2_col2), (...))
I've coded it up as follows, following instructions here, but it's clear that I'm doing something wrong. For instance, in this approach I'd have to handle the quoting of all values inside the table myself, which would be cumbersome and error-prone.
class ValuesTable(list):
    def __init__(self, *args, **kwargs):
        super(ValuesTable, self).__init__(*args, **kwargs)

    def __repr__(self):
        data_in_sql = ""
        for row in self:
            str_values = ", ".join([str(value) for value in row])
            data_in_sql += "({})".format(str_values)
        return "(VALUES {})".format(data_in_sql)

    def __conform__(self, proto):
        return self.__repr__()

    def getquoted(self):
        return self.__repr__()

    def __str__(self):
        return self.__repr__()
EDIT: If doing a bulk update can be done in a faster/cleaner way using another syntax than the one in my original question, then I'm all ears!
Requirements:
Postgres table, consisting of the fields id and msg (and potentially other fields)
Python data containing new values for msg
Postgres table should be updated via psycopg2
Example Table
CREATE TABLE einstein(
    id CHAR(5) PRIMARY KEY,
    msg VARCHAR(1024) NOT NULL
);
Test data
INSERT INTO einstein VALUES ('a', 'empty');
INSERT INTO einstein VALUES ('b', 'empty');
INSERT INTO einstein VALUES ('c', 'empty');
Python Program
Hypothetical, self-contained example program with quotations of a famous physicist.
import sys
import psycopg2
from psycopg2.extras import execute_values

def print_table(con):
    cur = con.cursor()
    cur.execute("SELECT * FROM einstein")
    rows = cur.fetchall()
    for row in rows:
        print(f"{row[0]} {row[1]}")

def update(con, einstein_quotes):
    cur = con.cursor()
    execute_values(cur, """UPDATE einstein
                           SET msg = update_payload.msg
                           FROM (VALUES %s) AS update_payload (id, msg)
                           WHERE einstein.id = update_payload.id""",
                   einstein_quotes)
    con.commit()

def main():
    con = None
    einstein_quotes = [("a", "Few are those who see with their own eyes and feel with their own hearts."),
                       ("b", "I have no special talent. I am only passionately curious."),
                       ("c", "Life is like riding a bicycle. To keep your balance you must keep moving.")]
    try:
        con = psycopg2.connect("dbname='stephan' user='stephan' host='localhost' password=''")
        print_table(con)
        update(con, einstein_quotes)
        print("rows updated:")
        print_table(con)
    except psycopg2.DatabaseError as e:
        print(f'Error {e}')
        sys.exit(1)
    finally:
        if con:
            con.close()

if __name__ == '__main__':
    main()
Prepared Statements Alternative
import sys
import psycopg2
from psycopg2.extras import execute_batch

def print_table(con):
    cur = con.cursor()
    cur.execute("SELECT * FROM einstein")
    rows = cur.fetchall()
    for row in rows:
        print(f"{row[0]} {row[1]}")

def update(con, einstein_quotes, page_size):
    cur = con.cursor()
    cur.execute("PREPARE updateStmt AS UPDATE einstein SET msg=$1 WHERE id=$2")
    execute_batch(cur, "EXECUTE updateStmt (%(msg)s, %(id)s)",
                  einstein_quotes, page_size=page_size)
    cur.execute("DEALLOCATE updateStmt")
    con.commit()

def main():
    con = None
    einstein_quotes = ({"id": "a", "msg": "Few are those who see with their own eyes and feel with their own hearts."},
                       {"id": "b", "msg": "I have no special talent. I am only passionately curious."},
                       {"id": "c", "msg": "Life is like riding a bicycle. To keep your balance you must keep moving."})
    try:
        con = psycopg2.connect("dbname='stephan' user='stephan' host='localhost' password=''")
        print_table(con)
        update(con, einstein_quotes, 100)  # choose some meaningful page_size here
        print("rows updated:")
        print_table(con)
    except psycopg2.DatabaseError as e:
        print(f'Error {e}')
        sys.exit(1)
    finally:
        if con:
            con.close()

if __name__ == '__main__':
    main()
Output
The above program would output the following to the debug console:
a empty
b empty
c empty
rows updated:
a Few are those who see with their own eyes and feel with their own hearts.
b I have no special talent. I am only passionately curious.
c Life is like riding a bicycle. To keep your balance you must keep moving.
Short answer: use execute_values(curs, sql, args); see the docs.
For those looking for a short, straightforward answer, here is sample code to update users in bulk:
from psycopg2.extras import execute_values
sql = """
update users u
set
name = t.name,
phone_number = t.phone_number
from (values %s) as t(id, name, phone_number)
where u.id = t.id;
"""
rows_to_update = [
(2, "New name 1", '+923002954332'),
(5, "New name 2", '+923002954332'),
]
curs = conn.cursor() # Assuming you already got the connection object
execute_values(curs, sql, rows_to_update)
If you're using a uuid for the primary key and haven't registered the uuid data type with psycopg2 (keeping the uuid as a Python string), you can always use the condition u.id = t.id::uuid.
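Alternatively, here is a minimal sketch of registering the uuid type so psycopg2 adapts Python uuid.UUID values directly (reusing sql and curs from the snippet above; the ids here are hypothetical):

import uuid
from psycopg2.extras import execute_values, register_uuid

register_uuid()  # lets psycopg2 adapt uuid.UUID parameters and parse uuid columns

rows_to_update = [
    (uuid.uuid4(), "New name 1", '+923002954332'),  # hypothetical ids
    (uuid.uuid4(), "New name 2", '+923002954332'),
]
execute_values(curs, sql, rows_to_update)  # same sql as above, no ::uuid cast needed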
Problem
I am trying to read a CSV file into Pandas and write it to a SQLite database. The process works for all the columns in the CSV file except for "Fill qty", which is a positive integer (int64). The process changes the type from TEXT/INTEGER to BLOB.
So I tried to load only the "Fill qty" column from Pandas into SQLite, and surprisingly I noticed I can safely do that for all integers smaller than 10 (I don't have a 9 in my dataset, so basically 1, 2, ..., 8 loaded successfully).
Here is what I tried:
I tried everything I could think of: changing the "Fill_Qty" type in the schema to INTEGER, REAL, NULL or TEXT, and changing the data type in Pandas from int64 to float or string before inserting into the SQLite table. None of it worked. By the look of it, the "Trade_History.csv" file seems to be fine in Pandas or Excel. Is there something my eyes don't see? I am really confused about what is happening here!
You would need the .csv file to test the code. Here is the code and .csv file: https://github.com/Meisam-Heidari/Trading_Min_code
The code:
### Imports:
import pandas as pd
import numpy as np
import sqlite3
from sqlite3 import Error

def create_database(db_file):
    try:
        conn = sqlite3.connect(db_file)
    finally:
        conn.close()

def create_connection(db_file):
    """ create a database connection to the SQLite database
        specified by db_file
    :param db_file: database file
    :return: Connection object or None
    """
    try:
        conn = sqlite3.connect(db_file)
        return conn
    except Error as e:  # the try block needs an except (or finally) clause
        print('Error Code: ', e)
    return None

def create_table(conn, table_name):
    try:
        c = conn.cursor()
        c.execute('''CREATE TABLE {} (Fill_Qty TEXT);'''.format(table_name))
    except Error as e:
        print('Error Code: ', e)
    finally:
        conn.commit()
        conn.close()
    return None

def add_trade(conn, table_name, trade):
    try:
        print(trade)
        sql = '''INSERT INTO {} (Fill_Qty)
                 VALUES(?)'''.format(table_name)
        cur = conn.cursor()
        cur.execute(sql, trade)
    except Error as e:
        print('Error When trying to add this entry: ', trade)
    return cur.lastrowid

def write_to_db(conn, table_name, df):
    for i in range(df.shape[0]):
        trade = (str(df.loc[i, 'Fill qty']))
        add_trade(conn, table_name, trade)
    conn.commit()

def update_db(table_name='My_Trades', db_file='Trading_DB.sqlite', csv_file_path='Trade_History.csv'):
    df_executions = pd.read_csv(csv_file_path)
    create_database(db_file)
    conn = create_connection(db_file)
    table_name = 'My_Trades'
    create_table(conn, table_name)
    # writing to DB
    conn = create_connection(db_file)
    write_to_db(conn, table_name, df_executions)
    # Reading back from DB
    df_executions = pd.read_sql_query("select * from {};".format(table_name), conn)
    conn.close()
    return df_executions

### Main Body:
df_executions = update_db()
Any alternatives
I am wondering if anyone has had a similar experience? Any advice or solutions to help me load the data into SQLite?
I am trying to keep things light and portable, and unless there is no alternative, I prefer not to go with Postgres or MySQL.
You're not passing a container to .execute() when inserting the data. Reference: https://www.python.org/dev/peps/pep-0249/#id15
What you need to do instead is:
trade = (df.loc[i,'Fill qty'],)
# ^ this comma makes `trade` into a tuple
The types of errors you got would've been:
ValueError: parameters are of unsupported type
Or:
sqlite3.ProgrammingError: Incorrect number of bindings supplied. The current statement uses 1, and there are 2 supplied.
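That second error also explains why single-digit quantities appeared to work: a plain string is itself a sequence, so '7' supplies exactly one binding while '10' supplies two. Here is a minimal sketch of the corrected write_to_db from the question:

def write_to_db(conn, table_name, df):
    for i in range(df.shape[0]):
        # wrap the value in a 1-tuple so execute() receives a container
        trade = (str(df.loc[i, 'Fill qty']),)
        add_trade(conn, table_name, trade)
    conn.commit()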
I have a problem with creating an SQL query for an Oracle database using Python.
I want to bind a string variable and it does not work; could you tell me what I am doing wrong?
This is my code:
import cx_Oracle

dokList = []

def LoadDatabase():
    conn = None
    cursor = None
    try:
        conn = cx_Oracle.connect("login", "password", "localhost")
        cursor = conn.cursor()
        query = "SELECT * FROM DOCUMENT WHERE DOC = :param"
        for doknumber in dokList:
            cursor.execute(query, {'doknr': doknumber})
            print(cursor.rowcount)
    except cx_Oracle.DatabaseError as err:
        print(err)
    finally:
        if cursor:
            cursor.close()
        if conn:
            conn.close()

def CheckData():
    with open('changedNamed.txt') as f:
        lines = f.readlines()
        for line in lines:
            dokList.append(line)

CheckData()
LoadDatabase()
The output of cursor.rowcount is 0, but it should be a number greater than 0.
You're using a dictionary ({'doknr': doknumber}) for your parameters, so it's a named parameter: the :param in the query needs to match the key name. Try this:
query = "SELECT * FROM DOCUMENT WHERE DOC = :doknr"
for doknumber in dokList:
cursor.execute(query, {'doknr':doknumber})
print(cursor.rowcount)
For future troubleshooting, to check whether your parameter is getting passed properly, you can also try changing your query to "select :param from dual".
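For example, a minimal sketch of that check (dual is Oracle's built-in one-row dummy table; the bind name must match the dictionary key):

cursor.execute("select :param from dual", {'param': doknumber})
print(cursor.fetchone())

Two further things worth checking: readlines() keeps the trailing newline, so the values appended to dokList may need line.strip(); and for SELECT statements cx_Oracle's cursor.rowcount only counts rows fetched so far, so it can stay 0 until you call fetchone() or fetchall().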
I'm working on an IRC bot, forked from a modular bot called Skybot.
There are two other modules that make use of the sqlite3 database by default; they have both been removed and their tables dropped, so I know that the issue is somewhere in what I'm doing.
I only call 3 db.execute() statements in the whole thing and they're all immediately committed. This thing isn't getting hammered with queries either, but the lock remains.
Relevant code:
def db_init(db):
    db.execute("create table if not exists searches"
               "(search_string UNIQUE PRIMARY KEY, link)")
    db.commit()
    return db

def get_link(db, inp):
    row = db.execute("select link from searches where"
                     " search_string=lower(?) limit 1",
                     (inp.lower(),)).fetchone()
    db.commit()
    return row

def store_link(db, stub, search):
    db.execute("insert into searches (search_string, link) VALUES (?, ?)", (search.lower(), stub))
    db.commit()
    return stub
If the script only has to touch db_init() and get_link() it breezes through, but if it needs to call store_link() while the database is unlocked it will do the insert, yet it doesn't seem to commit it in a way that future calls to get_link() can read until the bot restarts.
The bot's db.py:
import os
import sqlite3

def get_db_connection(conn, name=''):
    "returns an sqlite3 connection to a persistent database"
    if not name:
        name = '%s.%s.db' % (conn.nick, conn.server)
    filename = os.path.join(bot.persist_dir, name)
    return sqlite3.connect(filename, isolation_level=None)

bot.get_db_connection = get_db_connection
I did adjust the isolation_level myself; that argument was originally timeout=10. I am fairly stumped.
EDIT: The usages of get_db_connection():
main.py (main loop):
def run(func, input):
    args = func._args

    if 'inp' not in input:
        input.inp = input.paraml

    if args:
        if 'db' in args and 'db' not in input:
            input.db = get_db_connection(input.conn)
        if 'input' in args:
            input.input = input
        if 0 in args:
            out = func(input.inp, **input)
        else:
            kw = dict((key, input[key]) for key in args if key in input)
            out = func(input.inp, **kw)
    else:
        out = func(input.inp)
    if out is not None:
        input.reply(unicode(out))
...
def start(self):
    uses_db = 'db' in self.func._args
    db_conns = {}
    while True:
        input = self.input_queue.get()
        if input == StopIteration:
            break
        if uses_db:
            db = db_conns.get(input.conn)
            if db is None:
                db = bot.get_db_connection(input.conn)
                db_conns[input.conn] = db
            input.db = db
        try:
            run(self.func, input)
        except:
            traceback.print_exc()
Send conn into your functions along with db, as mentioned. If you wrote the code yourself, you'll know where the database actually is. Conventionally you would do something like:

conn = sqlite3.connect('database.db')  # the connection
db = conn.cursor()                     # the cursor

Then for general usage you execute on the cursor and commit on the connection:

db.execute("...")
conn.commit()
Hence, in your case:
def db_init(conn, db):
    db.execute("create table if not exists searches"
               "(search_string UNIQUE PRIMARY KEY, link)")
    conn.commit()
    return db

def get_link(conn, db, inp):
    row = db.execute("select link from searches where"
                     " search_string=lower(?) limit 1",
                     (inp.lower(),)).fetchone()
    conn.commit()
    return row

def store_link(conn, db, stub, search):
    db.execute("insert into searches (search_string, link) VALUES (?, ?)", (search.lower(), stub))
    conn.commit()
    return stub
On the basis that you have set isolation_level to autocommit:

sqlite3.connect(filename, isolation_level=None)

there is no need whatsoever for the commit statements in your code.
Edit:
Wrap your execute statements in try statements so that you at least have a chance of finding out what is going on, e.g.:
import sqlite3

def get_db(name=""):
    if not name:
        name = "db1.db"
    return sqlite3.connect(name, isolation_level=None)

connection = get_db()
cur = connection.cursor()

try:
    cur.execute("create table if not exists searches"
                "(search_string UNIQUE PRIMARY KEY, link)")
except sqlite3.Error as e:
    print 'Searches create Error ' + str(e)

try:
    cur.execute("insert into searches (search_string, link) VALUES (?, ?)", ("my search", "other"))
except sqlite3.Error as e:
    print 'Searches insert Error ' + str(e)

cur.execute("select link from searches where search_string=? limit 1", ["my search"])
s_data = cur.fetchone()
print 'Result:', s_data
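If the insert still seems invisible to later reads, here is a minimal sketch to verify that a committed write on one connection is immediately visible to a second connection to the same file (the filename is hypothetical):

import sqlite3

a = sqlite3.connect('visibility_test.db', isolation_level=None)
b = sqlite3.connect('visibility_test.db', isolation_level=None)

a.execute("create table if not exists searches"
          "(search_string UNIQUE PRIMARY KEY, link)")
a.execute("insert or replace into searches values (?, ?)", ("my search", "stub"))

# with isolation_level=None the insert is autocommitted, so the second
# connection should see it straight away
print(b.execute("select link from searches where search_string=?",
                ("my search",)).fetchone())

If that prints the row but the bot still reads stale data, the stale reader is most likely a connection sitting inside an open transaction.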