I'm getting the following error:
Traceback (most recent call last):
  File "/home/pi/Nike/test_two.py", line 43, in <module>
    do_query()
  File "/home/pi/Nike/test_two.py", line 33, in do_query
    for(Product,Bin,Size,Color) in records:
ValueError: too many values to unpack
Code:
def do_query():
    connection = sqlite3.connect('test_db.db')
    cursor = connection.cursor()
    cursor.execute("SELECT * FROM TESTER ORDER BY CheckNum")
    records = cursor.fetchall()
    for (Product, Bin, Size, Color) in records:
        row_1.append(Product)
        row_2.append(Bin)
        row_3.append(Size)
        row_4.append(Color)
    connection.commit()
    cursor.close()
    connection.close()

do_query()
I'm trying to load each column of a table into a separate Python list, using Python and sqlite3. Why am I getting this error?
You are using "SELECT *", which returns every column from the table. My guess is that the table in question contains more columns than the 4 you unpack.
A better approach is to name in the SQL exactly which columns you want, so that your code will not break if columns are added to the table later.
Something like "SELECT col1, col2 FROM table"
You can open the db file with the sqlite3 command-line tool and view the table schema with ".schema <table_name>"
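As a runnable sketch of the fix (using an in-memory database; the TESTER schema below is an assumption for illustration, so adjust the column names to match your actual table), selecting exactly the four columns you unpack keeps the loop safe:

```python
import sqlite3

# In-memory stand-in for test_db.db; the TESTER schema is assumed.
connection = sqlite3.connect(':memory:')
cursor = connection.cursor()
cursor.execute("CREATE TABLE TESTER (Product TEXT, Bin TEXT, Size TEXT, "
               "Color TEXT, CheckNum INT)")
cursor.execute("INSERT INTO TESTER VALUES ('Shoe', 'A1', '10', 'Red', 1)")

row_1, row_2, row_3, row_4 = [], [], [], []

# Name the columns explicitly so the 4-tuple unpacking always matches,
# even if more columns are added to TESTER later.
cursor.execute("SELECT Product, Bin, Size, Color FROM TESTER ORDER BY CheckNum")
for (Product, Bin, Size, Color) in cursor.fetchall():
    row_1.append(Product)
    row_2.append(Bin)
    row_3.append(Size)
    row_4.append(Color)

print(row_1, row_2, row_3, row_4)
connection.close()
```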
I am trying to transfer data from postgresql db table to sqlite table using python. Here is my code:
import sqlite3
import csv

connect = sqlite3.connect("server.db")
cursor = connect.cursor()

sql = """CREATE TABLE users (
    name TEXT,
    id INT,
    xp INT
)"""
cursor.execute(sql)
connect.commit()

with open('users.csv', 'r', encoding="utf-8") as f:
    no_records = 0
    for row in f:
        cursor.execute(f"INSERT INTO users (?,?,?)", row.split(','))
        connect.commit()
        no_records += 1
connect.close()
But when running this script I get sqlite3.OperationalError: near "?": Syntax error:
Traceback (most recent call last):
  File "C:\Users\belog\sort_hat\cr.py", line 19, in <module>
    cursor.execute(f"INSERT INTO users (?,?,?)", row.split(','))
sqlite3.OperationalError: near "?": syntax error
How can I fix this, and is there an easier way to import the data without using Python?
Your syntax is missing VALUES:
INSERT INTO users VALUES(?,?,?)
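A minimal sketch of the corrected loop, also switching to csv.reader (which the script already imports, and which handles quoting and trailing newlines better than row.split(',')) — here an io.StringIO stands in for users.csv:

```python
import csv
import io
import sqlite3

connect = sqlite3.connect(':memory:')  # in-memory stand-in for server.db
cursor = connect.cursor()
cursor.execute("CREATE TABLE users (name TEXT, id INT, xp INT)")

# Stand-in for open('users.csv', 'r', encoding='utf-8')
f = io.StringIO("alice,1,100\nbob,2,250\n")

no_records = 0
for row in csv.reader(f):
    # The VALUES keyword plus one ? placeholder per column
    cursor.execute("INSERT INTO users VALUES (?, ?, ?)", row)
    no_records += 1
connect.commit()
print(no_records)
```

For the second half of the question: the sqlite3 command-line shell can import a CSV without Python at all, via `.mode csv` followed by `.import users.csv users`.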
I am trying to insert a row into SQL Server using Python.
I wrote my Python program as below.
import pyodbc
import subprocess
cnx = pyodbc.connect("DSN=myDSN;UID=myUID;PWD=myPassword;port=1433")
runcmd1 = subprocess.check_output(["usbrh", "-t"])[0:5]
runcmd2 = subprocess.check_output(["usbrh", "-h"])[0:5]
cursor = cnx.cursor()
cursor.execute("SELECT * FROM T_TABLE-A;")
cursor.execute('''
INSERT INTO T_TABLE-A (TEMP,RH,DATE,COMPNAME)
VALUES
(runcmd1,runcmd2,GETDATE(),'TEST_Py')
''')
cnx.commit()
Then get error like below.
# python inserttest.py
Traceback (most recent call last):
  File "inserttest.py", line 13, in <module>
    ''')
pyodbc.ProgrammingError: ('42S22', "[42S22] [FreeTDS][SQL Server]Invalid column name 'runcmd1'. (207) (SQLExecDirectW)")
If I wrote like below, it's OK to insert.
import pyodbc
cnx = pyodbc.connect("DSN=myDSN;UID=myUID;PWD=myPassword;port=1433")
cursor = cnx.cursor()
cursor.execute("SELECT * FROM T_TABLE-A;")
cursor.execute('''
INSERT INTO T_TABLE-A (TEMP,RH,DATE,COMPNAME)
VALUES
(20.54,56.20,GETDATE(),'TEST_P')
''')
cnx.commit()
The command usbrh -t gets the temperature and usbrh -h gets the humidity. They both work fine in a standalone Python program.
Does anyone have idea to solve this error?
Thanks a lot in advance.
Check the data types returned by these two lines:
runcmd1 = subprocess.check_output(["usbrh", "-t"])[0:5]
runcmd2 = subprocess.check_output(["usbrh", "-h"])[0:5]
runcmd1 and runcmd2 need to be numeric (the column accepts 20.54, i.e. a double), but subprocess.check_output returns bytes, so convert them with float() before inserting.
cursor.execute('''
INSERT INTO T_TABLE-A (TEMP,RH,DATE,COMPNAME)
VALUES
(runcmd1,runcmd2,GETDATE(),'TEST_Py')
''')
won't work because you are embedding the names of the Python variables, not their values. You need to do
sql = """\
INSERT INTO T_TABLE-A (TEMP,RH,DATE,COMPNAME)
VALUES
(?, ?, GETDATE(),'TEST_Py')
"""
cursor.execute(sql, runcmd1, runcmd2)
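Since subprocess.check_output returns bytes, the sliced values should also be converted to float before being passed as parameters. A runnable sketch of the idea, with sqlite3 standing in for pyodbc and made-up sensor readings:

```python
import sqlite3

# Stand-ins for subprocess.check_output(["usbrh", "-t"])[0:5] etc.,
# which return bytes like b'20.54'.
runcmd1 = b'20.54'
runcmd2 = b'56.20'

temp = float(runcmd1)  # float() accepts bytes holding a decimal string
rh = float(runcmd2)

conn = sqlite3.connect(':memory:')
cur = conn.cursor()
cur.execute("CREATE TABLE readings (temp REAL, rh REAL, compname TEXT)")
# The placeholders pass the Python *values*, not the variable names.
cur.execute("INSERT INTO readings VALUES (?, ?, ?)", (temp, rh, 'TEST_Py'))
conn.commit()

cur.execute("SELECT temp, rh FROM readings")
print(cur.fetchone())
```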
I'm trying to do an INSERT INTO ... ON DUPLICATE KEY UPDATE, taking the values from one table and inserting them into another. I have the following Python code.
try:
    cursor.execute("SELECT LocationId, ProviderId FROM CQCLocationDetailsUpdates")
    rows = cursor.fetchall()
    for row in rows:
        maria_cnxn.execute('INSERT INTO CQCLocationDetailsUpdates2 (LocationId, ProviderId) VALUES (%s,%s) ON DUPLICATE KEY UPDATE ProviderId = VALUES(%s)', row)
    mariadb_connection.commit()
except TypeError as error:
    print(error)
    mariadb_connection.rollback()
If I change this script to a plain INSERT INTO it works fine; the problem appears when I add the ON DUPLICATE KEY UPDATE. What do I have wrong? LocationId is the PRIMARY KEY.
I get this error.
Traceback (most recent call last):
  File "C:/Users/waynes/PycharmProjects/DRS_Dev/CQC_Locations_Update_MariaDB.py", line 228, in <module>
    maria_cnxn.execute('INSERT INTO CQCLocationDetailsUpdates2 (LocationId, ProviderId) VALUES (%s,%s) ON DUPLICATE KEY UPDATE ProviderId = VALUES(%s)', row)
  File "C:\Users\waynes\PycharmProjects\DRS_Dev\venv\lib\site-packages\mysql\connector\cursor.py", line 548, in execute
    stmt = RE_PY_PARAM.sub(psub, stmt)
  File "C:\Users\waynes\PycharmProjects\DRS_Dev\venv\lib\site-packages\mysql\connector\cursor.py", line 79, in __call__
    "Not enough parameters for the SQL statement")
mysql.connector.errors.ProgrammingError: Not enough parameters for the SQL statement
Your error is because row is a 2-element tuple while your SQL statement contains three %s placeholders. The VALUES() function in the ON DUPLICATE KEY UPDATE clause takes a column name, not a placeholder, so that part should read ON DUPLICATE KEY UPDATE ProviderId = VALUES(ProviderId).
It is however possible to avoid the loop entirely with an INSERT .. SELECT .. ON DUPLICATE KEY, like:
maria_cnxn.execute('''INSERT INTO CQCLocationDetailsUpdates2 (LocationId, ProviderId)
                      SELECT LocationId, ProviderId
                      FROM CQCLocationDetailsUpdates orig
                      ON DUPLICATE KEY UPDATE CQCLocationDetailsUpdates2.ProviderId = orig.ProviderId''')
Whenever you end up looping around a SQL statement, check whether there is a set-based SQL way of doing the same thing.
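For the per-row variant, the fix is to keep only two placeholders and reuse the incoming value inside the update clause. The same upsert pattern is sketched here with SQLite's equivalent ON CONFLICT syntax (available in SQLite 3.24+) so it can actually run:

```python
import sqlite3

conn = sqlite3.connect(':memory:')
cur = conn.cursor()
cur.execute("CREATE TABLE locations (LocationId TEXT PRIMARY KEY, ProviderId TEXT)")

rows = [('L1', 'P1'), ('L2', 'P2'), ('L1', 'P9')]  # 'L1' appears twice
for row in rows:
    # Two placeholders for a 2-element tuple; the update clause reuses
    # the incoming value (excluded.ProviderId in SQLite, VALUES(ProviderId)
    # in MySQL/MariaDB), so no third parameter is needed.
    cur.execute('''INSERT INTO locations (LocationId, ProviderId)
                   VALUES (?, ?)
                   ON CONFLICT(LocationId) DO UPDATE
                   SET ProviderId = excluded.ProviderId''', row)
conn.commit()

cur.execute("SELECT ProviderId FROM locations WHERE LocationId = 'L1'")
print(cur.fetchone()[0])
```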
I'm having trouble with my program. I want to load data into the database from a txt file. This is my source code:
import MySQLdb
import csv

db = MySQLdb.connect(user='root', passwd='toor',
                     host='127.0.0.1', db='data')
cursor = db.cursor()
csv_data = csv.reader(file('test.txt'))
for row in csv_data:
    sql = "insert into `name` (`id`,`Name`,`PoB`,`DoB`) values(%s,%s,%s,%s);"
    cursor.execute(sql, row)
db.commit()
cursor.close()
After running the program, here is the error:
Traceback (most recent call last):
  File "zzz.py", line 9, in <module>
    cursor.execute(sql,row)
  File "/home/tux/.local/lib/python2.7/site-packages/MySQLdb/cursors.py", line 187, in execute
    query = query % tuple([db.literal(item) for item in args])
TypeError: not enough arguments for format string
and this is my test.txt
4
zzzz
sby
2017-10-10
Please help, and thanks in advance.
Now that you have posted the data file, the error should be obvious: each line contains only one field, not the four that the SQL statement requires.
If that is the real format of your data file, it is not CSV data. Instead you need to read each group of four lines as one record, something like this might work:
LINES_PER_RECORD = 4
SQL = 'insert into `name` (`id`,`Name`,`PoB`,`DoB`) values (%s,%s,%s,%s)'

with open('test.txt') as f:
    while True:
        try:
            record = [next(f).strip() for i in range(LINES_PER_RECORD)]
            cursor.execute(SQL, record)
        except StopIteration:
            # insufficient lines available for a full record; treat as end of file
            break
db.commit()
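The same four-lines-per-record grouping can also be written with the zip-of-iterators idiom; here an io.StringIO stands in for test.txt so the sketch is runnable:

```python
import io

LINES_PER_RECORD = 4

# Stand-in for open('test.txt'): one field per line, 4 lines per record.
f = io.StringIO("4\nzzzz\nsby\n2017-10-10\n5\nyyyy\njkt\n2018-01-01\n")

# Zipping four references to the same iterator yields successive
# 4-line groups; an incomplete trailing group is silently dropped.
records = [tuple(line.strip() for line in group)
           for group in zip(*[iter(f)] * LINES_PER_RECORD)]
print(records)
```

Each tuple in records can then be passed straight to cursor.execute(SQL, record) as above.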
I am streaming tweets to a postgres database with a Python script (using psycopg2). I would like to schedule this script with the Windows Task Scheduler. The only issue I have to overcome is being able to rename the table in postgres. Is it possible?
x = datetime.date.today() - datetime.timedelta(days=1)
con = psycopg2.connect("dbname='test' user='postgres'")
cur = con.cursor()
cur.execute("DROP TABLE IF EXISTS schemaname.%s", (x))
** UPDATE
That answer does get me further; now it just complains about the numbers.
Traceback (most recent call last):
  File "Z:/deso-gis/scripts/test123.py", line 26, in <module>
    cur.execute("DROP TABLE IF EXISTS tweets_days.%s" % x)
psycopg2.ProgrammingError: syntax error at or near ".2016"
LINE 1: DROP TABLE IF EXISTS tweets_days.2016-02-29
I believe you are getting an error at the line
cur.execute("DROP TABLE IF EXISTS schemaname.%s", (x))
because psycopg2 substitutes parameters as SQL literals, not identifiers, so it generates something you don't want:
DROP TABLE IF EXISTS schemaname.'table_name'
Try using
cur.execute("DROP TABLE IF EXISTS schemaname.%s" % x)
This is not as secure as it could be, but now the table name is interpolated as plain text rather than as a quoted SQL string literal.
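A safer route is to quote the table name as a double-quoted identifier, which also makes a name like 2016-02-29 legal (that is why the update's DROP fails: unquoted, it is parsed as arithmetic, not a name). A runnable sketch of the quoting rule; with psycopg2 >= 2.7 you would instead use psycopg2.sql.Identifier, which applies the same rule for you:

```python
import datetime

def quote_ident(name):
    # Double-quote an SQL identifier, doubling any embedded quotes --
    # the same rule psycopg2.sql.Identifier applies.
    return '"' + name.replace('"', '""') + '"'

x = datetime.date(2016, 2, 29)  # stand-in for "yesterday"
stmt = 'DROP TABLE IF EXISTS tweets_days.%s' % quote_ident(str(x))
print(stmt)  # the quoted name is a valid identifier even with dashes
```

With psycopg2's sql module, the equivalent is roughly: from psycopg2 import sql; cur.execute(sql.SQL("DROP TABLE IF EXISTS tweets_days.{}").format(sql.Identifier(str(x)))).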