MySQLdb programming error - Python

I'm getting an error trying to insert into a table using MySQLdb and can't figure it out. I looked up how to catch the error, but when I add the code to trap it, I don't get any additional info: using raise e instead of print e gives nothing extra, and when I try print e I get another error. I need some help getting the error to report more detail. I'll include the whole code in case someone is awesome enough to see where I went wrong in the first place; then I don't even care about the error trapping. OK, yes I do, because I'm sure I'll need it again soon. lol. I actually have 8 columns I need to insert, and I'm using a process of elimination, trying a couple at a time. The ones like UPC, Name, and a couple of others insert fine, but Modl, Path, and Desc cause errors. Any help with either problem would be appreciated. Thanks.
# -*- coding: utf-8 -*-
# Define your item pipelines here
#
# Don't forget to add your pipeline to the ITEM_PIPELINES setting
# See: http://doc.scrapy.org/en/latest/topics/item-pipeline.html
from __future__ import print_function
from datetime import date, datetime, timedelta
import MySQLdb
#from scrapy.extensions import DropItem
#from bff.items import ItemInfo
class mySQLPipeline(object):
    def process_item(self, item, spider):
        Product = item['ProdName']
        Path = item['ProdPath']
        UPC = item['ProdUPC']
        Modl = item['ProdModel']
        Desc = item['ProdDesc']
        Price = item['ProdPrice']
        Stock = item['InStock']
        #Ships = item['Ships']
        Name = item['StoreName']
        #Not failing during insert: Product, Price, Stock, Name
        #FAILing during insert: Modl, Path, Desc
        db = MySQLdb.connect(user='****', passwd='****',
                             host='127.0.0.1',
                             port=****,
                             db='****')
        cursor = db.cursor()
        # add_Product = ("INSERT INTO walmart_products (ProdName, StoreName) VALUES (%s, %s,)", Product, Name,)
        # add_Product = ("INSERT INTO walmart_products, (ProdName)"
        #                "VALUES (%s)", (Name))
        #                "VALUES (%(Name)s)")
        add_Product = ("INSERT INTO walmart_products "
                       "(InStock, StoreName) "
                       "VALUES (%s, %s)")
        #item['Ships'],
        data_Product = (Stock, Name)
        # Add new product
        #try:
        cursor.execute(add_Product, data_Product)
        #except MySQLdb.IntegrityError, e:
        #    print e
        #    # handle a specific error condition
        #except MySQLdb.Error, e:
        #    raise e
        #    # handle a generic error condition
        #except MySQLdb.Warning, e:
        #    print e
        #    # handle warnings, if the cursor you're using raises them
        #except MySQLdb.ProgrammingError, e:
        #    print e
        # Make sure data is committed to the database
        db.commit()
        cursor.close()
        db.close()
        return item

I figured out what was happening. It was actually a couple of things. First, on a couple of them I had the column set to varchar(200) and was getting over 2400 characters, so increasing the size fixed that. Second, on the URL one, I wasn't escaping it, and that was causing the errors. Thanks again.
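For the escaping half of the fix, binding values through the driver avoids manual escaping entirely. Here is a minimal sketch of building the full eight-column statement; the column names are assumed from the item fields in the question, and the final execute against the open connection is left commented since it needs a live MySQL server:

```python
# Sketch: build a fully parameterized INSERT so MySQLdb escapes every
# value (URLs, long descriptions) itself. Column names are assumed
# from the question's item fields; adjust to the real schema.
columns = ("ProdName", "ProdPath", "ProdUPC", "ProdModel",
           "ProdDesc", "ProdPrice", "InStock", "StoreName")
placeholders = ", ".join(["%s"] * len(columns))
insert_sql = "INSERT INTO walmart_products ({}) VALUES ({})".format(
    ", ".join(columns), placeholders)

# With an open connection and cursor, as in the pipeline:
# cursor.execute(insert_sql, (Product, Path, UPC, Modl, Desc, Price, Stock, Name))
# db.commit()
```

With parameters the driver quotes and escapes each value, so URLs and long text no longer break the statement (though the varchar(200) size limit still has to be fixed in the schema).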

Related

sqlite3 error: "Unable to resolve table" even though I already remade the table

I have searched extensively before asking this seemingly simple question. I have a Python project; I made a SQLite DB and some code to insert into it and work with it, and all was good until I decided to pull the DB functions out of the main file, pull out the DB file as well, and put both into a folder called db. So now both the function file and the DB are in the same folder, one level deep. The error starts immediately, but the code still runs, albeit without actually doing anything. Searching the internet, all I see are people saying to delete the DB file and recreate it in place, which usually solves the issue; I did that twice, but no luck. What am I missing here? The code runs without an error but does not actually work, and the error I am reporting here is from the PyCharm hover box.
def add_symbols_to_list(symbols_to_add) -> None:
    """This will add symbols to the current symbols list, but leave the previous entries.
    :param: a list of user provided symbols as comma separated strings."""
    conn = sqlite3.connect('database.db')
    c = conn.cursor()
    time_now = datetime.datetime.now()  # get current time for the int conversion below
    this_month_int = time_now.month  # get the current month and set it to an int
    # gets the current number of rows so new additions have the correct rowid
    # c.execute("SELECT * FROM currentMonthStocks")
    # current_row_number = c.execute("SELECT COUNT(*) FROM currentMonthStocks")
    # rows = int(current_row_number)
    # # https://www.sqlitetutorial.net/sqlite-count-function/
    # db_row_id = rows + 1  # set the first row number
    extra_symbols = symbols_to_add
    for i in range(len(extra_symbols)):
        c.execute("""INSERT INTO currentMonthStocks
                     (symbol, month)
                     VALUES (?, ?)""", (extra_symbols[i], this_month_int))
        # db_row_id += 1
        print("Added a symbol")
    print("Symbols successfully populated into currentMonthStocks table in database.db")

new_symbols = ['tsla', 'dis', 'pltr']
add_symbols_to_list(new_symbols)

def get_symbols_at_month_start() -> None:
    """Function inserts a list of symbols to trade every month into the currentMonthStocks table in database.db.
    This is called once at the start of the month, deletes the current symbols and adds the new ones.
    :return: None."""
    # edited out the url info for brevity
    response = requests.request("POST", url, headers=headers, data=payload)
    symbols = response.json()['content']['allInstrumentRows']
    this_months_symbols = []
    for symbol in symbols:
        this_months_symbols.append(symbol['Symbol'])
    # print(this_months_symbols)
    # file = "database.db"
    try:
        conn = sqlite3.connect('database.db')  # setup database connection
        c = conn.cursor()
        print("Database Connected")
        # c.execute("""CREATE TABLE currentMonthStocks (
        #              id INT PRIMARY KEY,
        #              symbol TEXT,
        #              month INT)""")
        # print("table created successfully")
        # checks to see if there is at least 1 row in the db, if yes it deletes all rows.
        if c.execute("SELECT EXISTS(SELECT 1 FROM currentMonthStocks WHERE id=1 LIMIT 2);"):
            # for i in range(len(this_months_symbols)):
            c.execute("DELETE FROM currentMonthStocks")
            print("Delete all rows successful")
        time_now = datetime.datetime.now()  # get current time for the int conversion below
        this_month_int = time_now.month  # get the current month and set it to an int
        db_row_id = 1  # set the first row number
        for i in range(len(this_months_symbols)):
            c.execute("""INSERT INTO currentMonthStocks
                         (id, symbol, month)
                         VALUES (?, ?, ?)""", (db_row_id, this_months_symbols[i], this_month_int))
            db_row_id += 1
            # print("one more entry")
        print("Symbols successfully populated into currentMonthStocks table in database.db")
        conn.commit()  # commits the current transaction.
        print("Entries committed to database.db")
        # c.close()  # closes the connection to the db.
        conn.close()
    except sqlite3.Error as e:
        print("sqlite3 error", e)
    finally:
        if conn:
            conn.close()
            print("Database.db Closed")
There was no problem, or at least no solution needed: PyCharm is still not recognizing the table, but I wrote 5 CRUD functions and they all work. So the answer is don't worry about the inspection warning; just check whether the DB is updating correctly.
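One way to check whether the DB is updating correctly, without trusting the IDE's inspection, is to ask SQLite itself which tables it has via sqlite_master. A small sketch (using an in-memory database here as a stand-in; in the project it would be the db/database.db path the functions connect to):

```python
import sqlite3

# Sketch: verify the table really exists by querying sqlite_master
# directly, independent of PyCharm's "Unable to resolve table" warning.
# ':memory:' stands in for the project's db/database.db file.
conn = sqlite3.connect(":memory:")
c = conn.cursor()
c.execute("CREATE TABLE IF NOT EXISTS currentMonthStocks "
          "(id INTEGER PRIMARY KEY, symbol TEXT, month INT)")
c.execute("SELECT name FROM sqlite_master "
          "WHERE type='table' AND name='currentMonthStocks'")
row = c.fetchone()
print(row)  # a 1-tuple with the table name when the table exists
conn.close()
```

If this prints None against the real file, the code is connecting to a different database.db than expected (a common side effect of moving files into a subfolder, since the relative path resolves against the working directory).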

Why doesn't SQLite accept my INTEGER/TEXT data larger than 8, using Python 3?

Problem
I am trying to read a CSV file into Pandas and write it to a SQLite database. The process works for all the columns in the CSV file except for "Fill qty", which is a positive integer (int64). The process changes the type from TEXT/INTEGER to BLOB.
So I tried to load only the "Fill qty" column from Pandas to SQLite, and surprisingly I noticed I can safely do that for all integers smaller than 10 (I don't have 9 in my dataset, so basically 1, 2, ..., 8 loaded successfully).
Here is what I tried:
I tried everything I could think of: changing the "Fill_Qty" type in the schema from INTEGER to REAL, NULL or TEXT, and changing the data type in Pandas from int64 to float or string before inserting into the SQLite table. None of them worked. By the look of it, the "Trade_History.csv" file seems to be fine in Pandas or Excel. Is there something my eyes don't see? I am really confused about what is happening here!
You would need the .csv file to test the code. Here is the code and .csv file: https://github.com/Meisam-Heidari/Trading_Min_code
The code:
### Imports:
import pandas as pd
import numpy as np
import sqlite3
from sqlite3 import Error

def create_database(db_file):
    try:
        conn = sqlite3.connect(db_file)
    finally:
        conn.close()

def create_connection(db_file):
    """ create a database connection to the SQLite database
        specified by db_file
    :param db_file: database file
    :return: Connection object or None
    """
    try:
        conn = sqlite3.connect(db_file)
        return conn
    except Error as e:
        print('Error Code: ', e)
    return None

def create_table(conn, table_name):
    try:
        c = conn.cursor()
        c.execute('''CREATE TABLE {} (Fill_Qty TEXT);'''.format(table_name))
    except Error as e:
        print('Error Code: ', e)
    finally:
        conn.commit()
        conn.close()
    return None

def add_trade(conn, table_name, trade):
    try:
        print(trade)
        sql = '''INSERT INTO {} (Fill_Qty)
                 VALUES(?)'''.format(table_name)
        cur = conn.cursor()
        cur.execute(sql, trade)
    except Error as e:
        print('Error When trying to add this entry: ', trade)
    return cur.lastrowid

def write_to_db(conn, table_name, df):
    for i in range(df.shape[0]):
        trade = (str(df.loc[i, 'Fill qty']))
        add_trade(conn, table_name, trade)
    conn.commit()

def update_db(table_name='My_Trades', db_file='Trading_DB.sqlite', csv_file_path='Trade_History.csv'):
    df_executions = pd.read_csv(csv_file_path)
    create_database(db_file)
    conn = create_connection(db_file)
    table_name = 'My_Trades'
    create_table(conn, table_name)
    # writing to DB
    conn = create_connection(db_file)
    write_to_db(conn, table_name, df_executions)
    # Reading back from DB
    df_executions = pd.read_sql_query("select * from {};".format(table_name), conn)
    conn.close()
    return df_executions

### Main Body:
df_executions = update_db()
Any alternatives?
I am wondering if anyone has had a similar experience. Any advice/solutions to help me load the data into SQLite?
I am trying to have something light and portable, and unless there is no alternative, I prefer not to go with Postgres or MySQL.
You're not passing a container to .execute() when inserting the data. Reference: https://www.python.org/dev/peps/pep-0249/#id15
What you need to do instead is:
trade = (df.loc[i,'Fill qty'],)
# ^ this comma makes `trade` into a tuple
The types of errors you got would've been:
ValueError: parameters are of unsupported type
Or:
sqlite3.ProgrammingError: Incorrect number of bindings supplied. The
current statement uses 1, and there are 2 supplied.
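The binding mismatch the answer describes can be reproduced in a few lines with sqlite3 alone: a bare string is treated as a sequence of characters, so a two-character string like "42" supplies two parameters to a one-placeholder statement, while a one-element tuple supplies exactly one. A self-contained sketch:

```python
import sqlite3

# Sketch: demonstrate why the container matters when binding parameters.
conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE My_Trades (Fill_Qty TEXT)")

error_seen = ""
try:
    # Bare string: iterated character by character -> 2 bindings for 1 slot.
    cur.execute("INSERT INTO My_Trades (Fill_Qty) VALUES (?)", "42")
except sqlite3.ProgrammingError as e:
    error_seen = str(e)

# One-element tuple: exactly one binding, inserted as intended.
cur.execute("INSERT INTO My_Trades (Fill_Qty) VALUES (?)", ("42",))
stored = cur.execute("SELECT Fill_Qty FROM My_Trades").fetchone()[0]
conn.close()
```

This is also why single-digit quantities appeared to work in the question: a one-character string happens to supply exactly one binding, so values 1 through 8 slipped through while anything two digits or longer failed.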

inserting json to mysql using python

Hello, I am able to query but unable to insert into my JSON field with the code below. The problem is probably not very tough, but being absolutely new to MySQL I am unable to figure it out. Every time, I will get a variable called last_time; the first time I need to insert it, and from the second time on I need to update last_time. If I do it manually I am able to, and I get the output as needed, like in the photo below.
import pymysql.cursors
import datetime
import json

last_time = '2344'  # this value I will get as output from my program

connection = pymysql.connect(host='localhost',
                             user='root',
                             password='Admin...',
                             db='cl......',
                             charset='utf8mb4',
                             cursorclass=pymysql.cursors.DictCursor)
print "connect successfull"
try:
    with connection.cursor() as cursor:
        sql = "INSERT INTO `tt_inv_refresh` (`cust_id`, `res_type`, `refresh_dtls`) VALUES (%s, %s, %s)"
        cursor.execute(sql, ('2', 'elb', '{"vot":[{"elb":"name1","last_refreshtime":last_time},{"elb":"name2","last_refreshtime":last_time}]}'))
        connection.commit()
    print "inserted"
except Exception as e:
    print str(e)
finally:
    connection.close()
I will be obliged if anyone can point out the mistake in my code. Thank you.
You have missed the quotes around last_time.
Correct the line with cursor.execute to:
cursor.execute(sql, ('2', 'elb',
                     '{"vot":[{"elb":"name1","last_refreshtime":' + last_time + '},'
                     '{"elb":"name2","last_refreshtime":' + last_time + '}]}'))
To avoid such issues in future, you might consider defining an object and using json.dumps
class Elb:
    def toJSON(self):
        return json.dumps(self, default=lambda o: o.__dict__,
                          sort_keys=True, indent=4)

    # you can use array or dict | I just copied from one of my code
    # which reqd it to be an array
    def mapper(self, detailArray=[]):  # you can use array or dict
        self.elb = detailArray[0]
        self.last_refreshtime = detailArray[1]
So after you have set the data for an instance say,
el_instance = Elb()
el_instance.mapper(<array-with-data>)
You can call el_instance.toJSON() to get serialized data anywhere.
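Along the same lines, the dict-plus-json.dumps route can be combined with query parameters, which sidesteps quoting mistakes entirely. A sketch (table and column names taken from the question; the execute/commit against a live pymysql connection is left commented):

```python
import json

# Sketch: build the JSON payload as a dict, serialize it, and bind it
# as a query parameter instead of concatenating strings by hand.
last_time = '2344'
payload = {"vot": [
    {"elb": "name1", "last_refreshtime": last_time},
    {"elb": "name2", "last_refreshtime": last_time},
]}
refresh_dtls = json.dumps(payload)

sql = ("INSERT INTO `tt_inv_refresh` "
       "(`cust_id`, `res_type`, `refresh_dtls`) VALUES (%s, %s, %s)")
params = ('2', 'elb', refresh_dtls)
# With a live connection:
# with connection.cursor() as cursor:
#     cursor.execute(sql, params)
# connection.commit()
```

Because json.dumps handles all the quoting, last_time can be any string or number without breaking the JSON, and the driver escapes the final value for SQL.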

Database is locking but all statements are followed by commit?

I'm working on an IRC bot, forked from a modular bot called Skybot.
There are two other modules that make use of the sqlite3 database by default; they have both been removed and their tables dropped, so I know that the issue is somewhere in what I'm doing.
I only call 3 db.execute() statements in the whole thing and they're all immediately committed. This thing isn't getting hammered with queries either, but the lock remains.
Relevant code:
def db_init(db):
    db.execute("create table if not exists searches"
               "(search_string UNIQUE PRIMARY KEY,link)")
    db.commit()
    return db

def get_link(db, inp):
    row = db.execute("select link from searches where"
                     " search_string=lower(?) limit 1",
                     (inp.lower(),)).fetchone()
    db.commit()
    return row

def store_link(db, stub, search):
    db.execute("insert into searches (search_string, link) VALUES (?, ?)", (search.lower(), stub))
    db.commit()
    return stub
If the script only has to touch db_init() and get_link() it breezes through, but if it needs to call store_link() while the database is unlocked it will do the insert, but doesn't seem to be committing it in a way that future calls to get_link() can read it until the bot restarts.
The bot's db.py:
import os
import sqlite3

def get_db_connection(conn, name=''):
    "returns an sqlite3 connection to a persistent database"
    if not name:
        name = '%s.%s.db' % (conn.nick, conn.server)
    filename = os.path.join(bot.persist_dir, name)
    return sqlite3.connect(filename, isolation_level=None)

bot.get_db_connection = get_db_connection
I did adjust the isolation_level myself, that was originally timeout=10. I am fairly stumped.
EDIT: The usages of get_db_connection():
main.py (main loop):
def run(func, input):
    args = func._args
    if 'inp' not in input:
        input.inp = input.paraml
    if args:
        if 'db' in args and 'db' not in input:
            input.db = get_db_connection(input.conn)
        if 'input' in args:
            input.input = input
        if 0 in args:
            out = func(input.inp, **input)
        else:
            kw = dict((key, input[key]) for key in args if key in input)
            out = func(input.inp, **kw)
    else:
        out = func(input.inp)
    if out is not None:
        input.reply(unicode(out))
...
def start(self):
    uses_db = 'db' in self.func._args
    db_conns = {}
    while True:
        input = self.input_queue.get()
        if input == StopIteration:
            break
        if uses_db:
            db = db_conns.get(input.conn)
            if db is None:
                db = bot.get_db_connection(input.conn)
                db_conns[input.conn] = db
            input.db = db
        try:
            run(self.func, input)
        except:
            traceback.print_exc()
Pass conn into your functions along with db, as mentioned. If you wrote the code yourself, you'll know where the database actually is. Conventionally you would do something like:
db = sqlite3.connect('database.db')
conn = db.cursor()
Then for general usage:
db.execute("...")
conn.commit()
Hence, in your case:
def db_init(conn, db):
    db.execute("create table if not exists searches"
               "(search_string UNIQUE PRIMARY KEY,link)")
    conn.commit()
    return db

def get_link(conn, db, inp):
    row = db.execute("select link from searches where"
                     " search_string=lower(?) limit 1",
                     (inp.lower(),)).fetchone()
    conn.commit()
    return row

def store_link(conn, db, stub, search):
    db.execute("insert into searches (search_string, link) VALUES (?, ?)", (search.lower(), stub))
    conn.commit()
    return stub
On the basis that you have set the isolation_level to automatic updates:
sqlite3.connect(filename, isolation_level=None)
There is no need whatsoever for the commit statements in your code.
Edit:
Wrap your execute statements in try statements, so that you at least have a chance of finding out what is going on i.e.
import sqlite3

def get_db(name=""):
    if not name:
        name = "db1.db"
    return sqlite3.connect(name, isolation_level=None)

connection = get_db()
cur = connection.cursor()
try:
    cur.execute("create table if not exists searches"
                "(search_string UNIQUE PRIMARY KEY,link)")
except sqlite3.Error as e:
    print 'Searches create Error '+str(e)
try:
    cur.execute("insert into searches (search_string, link) VALUES (?, ?)", ("my search", "other"))
except sqlite3.Error as e:
    print 'Searches insert Error '+str(e)
cur.execute("select link from searches where search_string=? limit 1", ["my search"])
s_data = cur.fetchone()
print 'Result:', s_data
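The answer's autocommit point can also be checked in isolation: with isolation_level=None, an insert made on one connection should be immediately readable from a second connection to the same file, with no explicit commit() at all. A sketch using a temporary file and the question's schema (the URL value is just an illustrative placeholder):

```python
import os
import sqlite3
import tempfile

# Sketch: two autocommit connections to the same database file;
# a write on one is visible to the other without explicit commit().
path = os.path.join(tempfile.mkdtemp(), "searches.db")

writer = sqlite3.connect(path, isolation_level=None)
writer.execute("create table if not exists searches"
               "(search_string UNIQUE PRIMARY KEY,link)")
writer.execute("insert into searches (search_string, link) VALUES (?, ?)",
               ("my search", "http://example.com"))

reader = sqlite3.connect(path, isolation_level=None)
row = reader.execute("select link from searches where search_string=?",
                     ("my search",)).fetchone()
writer.close()
reader.close()
```

If the bot's reads still go stale, the culprit is more likely the per-thread connection cache in start() holding onto an old connection than the commit logic itself.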

SQL database not accepting UTF-8 characters [duplicate]

This question already has answers here:
Python & MySql: Unicode and Encoding
(3 answers)
Closed 9 years ago.
I have a Python program wherein I access a URL and extract data. I then input this data into a MySQL table. The MySQL table has columns pid, position, club, points, s, availability, rating, name. I have no issues with the Python program (I hope), but the database apparently does not seem to accept names with UTF-8 characters, e.g. Jääskeläinen. How do I make the database accept these names? I tried using the answer given here, but the program still gives me the following error:
Traceback (most recent call last):
File "C:\Users\GAMER\Desktop\Padai\Fall 13\ADB\player_extract.py", line 49, in <module>
sql += "('{0}', '{1}', '{2}', '{3}', '{4}','{5}','{6}','{7}')".format(count,position,club, points,s,availability, rating,name)
UnicodeEncodeError: 'ascii' codec can't encode characters in position 1-2: ordinal not in range(128)
excepted Goalkeepers Jääskeläinen West Ham 67 £5.5
My Python code is this:
# -*- coding: utf-8 -*-
from bs4 import BeautifulSoup
from urllib import urlopen
from pprint import pprint
import MySQLdb
import traceback
import re

#fetch players list from site
url = "http://fantasy.premierleague.com/player-list/"
html = urlopen(url).read()
soup = BeautifulSoup(html)
h2s = soup.select("h2")
tables = soup.select("table")
first = True
title = ""
players = []
for i, table in enumerate(tables):
    if first:
        title = h2s[int(i/2)].text
    for tr in table.select("tr"):
        player = (title,)
        for td in tr.select("td"):
            player = player + (td.text,)
        if len(player) > 1:
            players.append(player)
    first = not first

##SQL connectivity and data entry
db = MySQLdb.connect(host="localhost", user="root", passwd="hassan28", db="adbpro")
cur = db.cursor()
try:
    count = 1
    for i in players:
        position, name, club, points, price = i
        s = price[1:]
        name = name.replace("'", " ")
        rating = 4
        availability = 1
        sql = "INSERT INTO players (pid,position,club,points,price,availability,rating,name) VALUES "
        try:
            sql += "('{0}', '{1}', '{2}', '{3}', '{4}','{5}','{6}','{7}')".format(count, position, club, points, s, availability, rating, name)
            cur.execute(sql)
            count += 1
        except UnicodeError:
            traceback.print_exc()
            print "excepted", position, name, club, points, price
            continue
        #print sql
    db.commit()
except:
    print sql
    traceback.print_exc()
    db.rollback()
cur.execute("SELECT * FROM PLAYERS")
print "done"
Any help will be greatly appreciated.
This is not a database problem; you are trying to interpolate Unicode values into a byte string, triggering an implicit encoding.
Don't use string formatting here, use SQL parameters instead:
sql = "INSERT INTO players (pid,position,club,points,price,availability,rating,name) VALUES (%s, %s, %s, %s, %s, %s, %s, %s)"
params = (count, position, club, points, s, availability, rating, name)
cur.execute(sql, params)
Here the %s tell MySQLdb where to expect SQL parameters, and you pass in the parameters as a separate list to cursor.execute().
Do remember to tell the database connection that you want to use UTF-8 for Unicode values:
db = MySQLdb.connect(host="localhost", user="root", passwd="hassan28",
                     db="adbpro", charset='utf8')
Seems like a duplicate of this question. Just for others, the solution is "When you connect() to your database, pass the charset='utf8' parameter."
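The root cause in the traceback can be reproduced without any database at all: the ASCII codec simply cannot encode the accented characters, which is exactly what Python 2's implicit conversion during byte-string formatting ran into. A small sketch (written in syntax that also runs on Python 3, where str.format would not trigger the implicit encode):

```python
# Sketch: the UnicodeEncodeError comes from encoding a non-ASCII name
# with the ASCII codec, which is what Python 2's implicit conversion
# attempted while formatting the unicode name into a byte string.
name = u'Jääskeläinen'
try:
    name.encode('ascii')
    ascii_ok = True
except UnicodeEncodeError:
    ascii_ok = False
# Parameterized queries hand the unicode value straight to the driver,
# so no implicit ASCII encode ever happens.
```

This is why the fix is two-sided: parameters avoid the Python-side encode, and charset='utf8' on the connection ensures MySQL stores the value correctly.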
