I am trying to insert data into a database, but I get this error:
sqlite3.OperationalError: near "WHERE": syntax error
This is my code:
c.execute(f"INSERT INTO math(qula) WHERE name = '{member.name}' VALUES({saboloo})")
I suspect that you want to update the column qula of an existing row of the table math and not insert a new row.
Also, it's a good practice to use ? placeholders:
c.execute("UPDATE math SET qula = ? WHERE name = ?", (saboloo, member.name))
To insert data with sqlite3, you first have to import the sqlite3 module from the Python standard library. You then connect to the database by passing a file path to the module's connect() method; if the database you pass to connect() does not exist, one will be created at that path, and if it does exist, you will be connected to it.
import sqlite3
con = sqlite3.connect('/path/xxx.sqlite3')
You then have to create a cursor object using the cursor() method:
c = con.cursor()
You then prepare the SQL query to INSERT a record into the database:
c.execute(f"INSERT INTO math(qula) VALUES({saboloo})").
I hope this one helps.
You can also read more here: Python SQLite insert data
Related
Hopefully someone can help me with this! I am using cx_Oracle for the Oracle DB connection, and I want to store a few SQL queries in an Excel file so that, by running a Python script, the SQL can be imported from Excel and executed.
The script imports sql1 successfully, but the value cannot be passed to c.execute. How can I make this right? Adding """ does not help.
excel_data_df = pandas.read_excel('C:\\Python\Excel\sql.xlsx', sheet_name='SQL1')
caseno = excel_data_df['Case no']
sql1 = excel_data_df['SQL']
c = conn.cursor()
c.execute(sql1)
Many Thanks for your help
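For what it's worth, excel_data_df['SQL'] is a pandas Series rather than a single SQL string, which is why passing it straight to c.execute fails. A rough sketch of one way to run each statement from the sheet (column names are taken from the question; the connection details are assumptions):
import pandas
import cx_Oracle

conn = cx_Oracle.connect(user="myusername", password="mypassword", dsn="mydsn")  # assumed credentials
c = conn.cursor()

excel_data_df = pandas.read_excel('C:\\Python\\Excel\\sql.xlsx', sheet_name='SQL1')

# Iterate over the rows and execute each SQL statement as a plain string
for case_no, sql_text in zip(excel_data_df['Case no'], excel_data_df['SQL']):
    c.execute(sql_text)
    if c.description is not None:           # only SELECT-style statements return rows
        print(case_no, c.fetchall())

conn.commit()  # needed if any of the statements modified data
conn.close()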
I have used Python to parse a txt file for specific information (dates, $ amounts, lbs, etc) and now I want to export that data to an Oracle table that I made in SQL Developer.
I have successfully connected Python to Oracle with the cx_Oracle module, but I am struggling to export or even print any data to my database from Python.
I am not proficient at using SQL, I know of simple queries and that's about it. I have explored the Oracle docs and haven't found straightforward export commands. When exporting data to an Oracle table via Python is it Python code I am going to be using or SQL code? Is it the same as importing a CSV file, for example?
I would like to understand how to write to an Oracle table from Python; I need to parse and export a very large amount of data so this won't be a one time export/import. I would also ideally like to have a way to preview my import to ensure it aligns correctly with my already created Oracle table, or if a simple undo action exists that would suffice.
If my problem is unclear I am more than happy to clarify it. Thanks for all help.
My code so far:
import cx_Oracle
dsnStr = cx_Oracle.makedsn("sole.wh.whoi.edu", "1526", "sole")
con = cx_Oracle.connect(user="myusername", password="mypassword", dsn=dsnStr)
print (con.version)
#imp 'Book1.csv' [this didn't work]
cursor = con.cursor()
print (cursor)
con.close()
You can see the overall plan in Import a CSV file into Oracle using CX_Oracle & Python 2.7.
So if you have already parsed the data into a CSV file, you can easily do it like this:
import cx_Oracle
import csv
dsnStr = cx_Oracle.makedsn("sole.wh.whoi.edu", "1526", "sole")
con = cx_Oracle.connect(user="myusername", password="mypassword", dsn=dsnStr)
print (con.version)
#imp 'Book1.csv' [this didn't work]
cursor = con.cursor()
print (cursor)
text_sql = '''
INSERT INTO tablename (firstfield, secondfield) VALUES(:1,:2)
'''
my_file = 'C:\CSVData\Book1.csv'
cr = csv.reader(open(my_file,"rb"))
for row in cr:
    print row
    cursor.execute(text_sql, row)
print 'Imported'
con.commit()  # commit so the inserted rows are actually saved
con.close()
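Under Python 3 the same idea looks slightly different (the CSV file is opened in text mode and print is a function), and executemany with a single commit() at the end is usually faster for large files. A rough sketch under those assumptions:
import csv
import cx_Oracle

con = cx_Oracle.connect(user="myusername", password="mypassword", dsn="mydsn")  # assumed connection details
cursor = con.cursor()

text_sql = "INSERT INTO tablename (firstfield, secondfield) VALUES (:1, :2)"

with open('C:\\CSVData\\Book1.csv', newline='') as f:  # text mode under Python 3
    rows = list(csv.reader(f))  # assumes every CSV row has exactly two columns

cursor.executemany(text_sql, rows)  # far fewer round trips than one execute() per row
con.commit()                        # nothing is saved until you commit
con.close()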
I've struggled with this issue for over an hour now. I'm trying to create an SQLite database from a dbf table. When I create a list of records derived from the dbf to use as input for the SQLite executemany statement, the SQLite table comes out empty. When I try to replicate the issue in interactive Python, the SQLite execution is successful. The list generated from the dbf is populated when I run it, so the problem lies in the executemany statement.
import sqlite3
from dbfpy import dbf
streets = dbf.Dbf("streets_sample.dbf")
conn = sqlite3.connect('navteq.db')
conn.execute('PRAGMA synchronous = OFF')
conn.execute('PRAGMA journal_mode = MEMORY')
conn.execute('DROP TABLE IF EXISTS STREETS')
conn.execute('''CREATE TABLE STREETS
(blink_id CHAR(8) PRIMARY KEY,
bst_name VARCHAR(39),
bst_nm_pref CHAR(2));''')
alink_id = []
ast_name = []
ast_nm_pref = []
for i in streets:
    alink_id.append(i["LINK_ID"])
    ast_name.append(i["ST_NAME"])
    ast_nm_pref.append(i["ST_NM_PREF"])
streets_table = zip(alink_id, ast_name, ast_nm_pref)
conn.executemany("INSERT OR IGNORE INTO STREETS VALUES(?,?,?)", streets_table)
conn.close()
This may not be the only issue, but you want to call conn.commit() to save the changes to the SQLite database. Reference: http://www.python.org/dev/peps/pep-0249/#commit
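For example, the tail end of the script would become something like this (a self-contained sketch with made-up rows standing in for the dbf data):
import sqlite3

conn = sqlite3.connect('navteq.db')
conn.execute('''CREATE TABLE IF NOT EXISTS STREETS
                (blink_id CHAR(8) PRIMARY KEY,
                 bst_name VARCHAR(39),
                 bst_nm_pref CHAR(2))''')

# Made-up rows standing in for the records read from streets_sample.dbf
streets_table = [('00000001', 'MAIN ST', 'N'), ('00000002', 'OAK AVE', 'S')]

conn.executemany("INSERT OR IGNORE INTO STREETS VALUES (?, ?, ?)", streets_table)
conn.commit()  # without this the inserted rows are lost when the connection is closed
conn.close()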
I have a script that stores results in PDF format in a particular folder. I want to create a MySQL database (which is successful with the code below) and populate it with the PDF results. What would be the best way: storing the file itself, or a reference to its location? The file size would be around 2 MB. Could someone help explain this with some working examples? I am new to both Python and MySQL. Thanks in advance.
To clarify further: I tried using LOAD DATA INFILE and the BLOB type for the result file column, but it doesn't seem to work. I am using the pymysql API module to connect to the database. The code below connects to the database and is successful.
import pymysql
conn = pymysql.connect(host='hostname', port=3306, user='root', passwd='abcdef', db='mydb')
cur = conn.cursor()
cur.execute("SELECT * FROM userlogin")
for r in cur.fetchall():
    print(r)
cur.close()
conn.close()
Since you seem to be close to getting MySQL to store strings for you (user names), your best bet is to stick with what you did there and store the file path, just as you stored the strings in your userlogin table (but in a different table with a foreign key to userlogin). It will probably be the most efficient approach in the long run anyway, especially if you store important metadata along with the file path (like keywords or even complete n-gram sets) - but then you're talking about a file-indexing system like Google Desktop or Xapian, just so you know what you're up against if you want to do this the "best" way.
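A rough illustration of that approach with pymysql (the result_files table and its columns are made up, and it assumes userlogin has an integer id primary key; only the path is stored, not the PDF bytes):
import pymysql

conn = pymysql.connect(host='hostname', port=3306, user='root', passwd='abcdef', db='mydb')
cur = conn.cursor()

# Hypothetical table: one row per result file, keyed to the userlogin table
cur.execute("""
    CREATE TABLE IF NOT EXISTS result_files (
        id INT AUTO_INCREMENT PRIMARY KEY,
        user_id INT,
        file_path VARCHAR(512),
        FOREIGN KEY (user_id) REFERENCES userlogin(id)
    )
""")

# Store only the location of the PDF; the file itself stays on disk
cur.execute("INSERT INTO result_files (user_id, file_path) VALUES (%s, %s)",
            (1, '/results/report_001.pdf'))

conn.commit()
cur.close()
conn.close()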
I would like to get some understanding of a question I was pretty sure was clear to me: is there any way, using psycopg2 or any other Python Postgres database adapter, to create a table whose name corresponds to a .csv file and (probably most important) whose columns are the ones specified in that .csv file?
I'll leave you to look at the psycopg2 library properly - this is off the top of my head (I haven't had to use it for a while, but IIRC the documentation is ample).
The steps are:
Read column names from CSV file
Create "CREATE TABLE whatever" ( ... )
Maybe INSERT data
import csv
import os.path
my_csv_file = '/home/somewhere/file.csv'
table_name = os.path.splitext(os.path.split(my_csv_file)[1])[0]
cols = next(csv.reader(open(my_csv_file)))
You can go from there...
Create the SQL query (possibly using a templating engine for the fields) and then issue the INSERT if need be.
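A rough sketch of those steps using psycopg2's sql module for identifier quoting (every column is created as TEXT for simplicity; the connection string and the CSV path are assumptions):
import csv
import os.path
import psycopg2
from psycopg2 import sql

my_csv_file = '/home/somewhere/file.csv'
table_name = os.path.splitext(os.path.split(my_csv_file)[1])[0]

with open(my_csv_file) as f:
    reader = csv.reader(f)
    cols = next(reader)   # first line holds the column names
    rows = list(reader)   # remaining lines hold the data

conn = psycopg2.connect("dbname=mydb user=myuser")  # assumed connection string
cur = conn.cursor()

# CREATE TABLE <csv name> (<col1> TEXT, <col2> TEXT, ...) with safely quoted identifiers
create_stmt = sql.SQL("CREATE TABLE {} ({})").format(
    sql.Identifier(table_name),
    sql.SQL(", ").join(sql.SQL("{} TEXT").format(sql.Identifier(c)) for c in cols),
)
cur.execute(create_stmt)

# INSERT one row per CSV line, with one placeholder per column
insert_stmt = sql.SQL("INSERT INTO {} VALUES ({})").format(
    sql.Identifier(table_name),
    sql.SQL(", ").join(sql.Placeholder() * len(cols)),
)
cur.executemany(insert_stmt, rows)

conn.commit()
cur.close()
conn.close()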