import csv
import sqlite3
open("shows.db", "w").close()
con = sqlite3.connect('shows.db')
db = con.cursor()
db.execute("CREATE TABLE shows (id INTEGER, title TEXT, PRIMARY KEY(id))")
db.execute("CREATE TABLE genres (show_id INTEGER, genre TEXT, FOREIGN KEY(show_id) REFERENCES shows(id))")
with open("/Users/xxx/Downloads/CS50 2019 - Lecture 7 - Favorite TV Shows (Responses) - Form Responses 1.csv", "r") as file:
    reader = csv.DictReader(file)
    for row in reader:
        title = row["title"].strip().upper()
        id = db.execute("INSERT INTO shows (title) VALUES(?)", (title,))
        for genre in row["genres"].split(", "):
            db.execute("INSERT INTO genres (show_id, genre) VALUES(?, ?)", id,genre)
con.commit()
con.close()
When I run this code, I think the problem happens on this line: db.execute("INSERT INTO genres (show_id, genre) VALUES(?, ?)", id,genre).
My console says
"db.execute("INSERT INTO genres (show_id, genre) VALUES(?, ?)", id,genre)
TypeError: function takes at most 2 arguments (3 given)"
I don't understand why it says 3 given even though I gave two arguments (id, genre).
Problems with code
This line returns the cursor. To get a result, you need to call a terminal operation such as .fetchall(), .fetchmany() or .fetchone():
id = db.execute("INSERT INTO shows (title) VALUES(?)", (title,))
As you didn't call a terminal operation or print the result, you wouldn't notice that the INSERT doesn't hand back the actual id (a short check illustrating this follows the snippet below).
Minor: it's generally not advisable to give a variable the same name as a built-in Python function. See id.
As I suggested in the comment, you need to pass the parameters as a tuple:
db.execute(
"INSERT INTO genres (show_id, genre) VALUES (?, ?)",
(id_, genre)
)
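To make the first point concrete, here is a small hedged check (not from the original post): execute() hands back a cursor, and only a terminal call such as .fetchone() gives you a row.
cur = db.execute("SELECT id FROM shows WHERE title = ?", (title,))
row = cur.fetchone()   # a tuple such as (1,), or None if nothing matches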
Solution
After the insertion, you need to select the title from the shows table, retrieve the id from that selection, and then insert it into the genres table.
A simplified version of the code, to showcase how to do it:
import csv
import sqlite3
open("shows.db", "w").close()
con = sqlite3.connect('shows.db')
db = con.cursor()
db.execute("CREATE TABLE shows (id INTEGER, title TEXT, PRIMARY KEY(id))")
db.execute("CREATE TABLE genres (show_id INTEGER, genre TEXT, FOREIGN KEY(show_id) REFERENCES shows(id))")
shows = ["FRIENS", "Game of Trhones", "Want", "Scooby Doo"]
genres = ["Comedy", "Fantasy", "Action", "Cartoon"]
for ind, show in enumerate(shows):
db.execute("INSERT INTO shows (title) VALUES(?)", (show,))
id_ = con.execute(
"SELECT id FROM shows WHERE title = :show ORDER BY id DESC LIMIT 1",
{
"show": show
},
).fetchone()
db.execute(
"INSERT INTO genres (show_id, genre) VALUES (?, ?)",
(id_[0], genres[ind], )
)
con.commit()
con.close()
For more details, check my code on GitHub.
The SELECT statement may look a bit complex. In a nutshell, it takes the matching title and returns the largest id. As titles may repeat, you always take the last inserted row that matches.
General suggestions for debugging issues like this
Try using print as much as possible
Use the dir and type functions to see available methods and to be able to google the types (see the short sketch below)
Search docs or examples on GitHub
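For instance, a quick hedged check along those lines (not from the original post) makes the cursor-versus-id mix-up obvious:
result = db.execute("INSERT INTO shows (title) VALUES(?)", ("Example",))
print(type(result))   # <class 'sqlite3.Cursor'> -- a cursor, not an id
print(dir(result))    # lists names such as fetchone, fetchall and lastrowid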
I can see your issue. You're trying to add the id and genre to the query the way the sqlite3 documentation does, but in the documentation the parameters are grouped into a tuple, while you passed them as separate arguments in the function call.
Try this instead:
params = (id, genre)
db.execute("INSERT INTO genres (show_id, genre) VALUES(?, ?)", params)
Or you could put it into a one-liner:
# notice the extra pair of parentheses grouping id and genre into a tuple
db.execute("INSERT INTO genres (show_id, genre) VALUES(?, ?)", (id, genre))
Related
I am trying to create a training app in python to work with a database of movies, adding movie details via a text menu prompting user input for all fields (movie name, actors, company, etc.). I am using PostgreSQL as the database and import psycopg2 in Python.
From user input, I am collecting data which I then want to store in my database tables 'movies' and 'actors'. For one movie, there are several actors. I have this code:
def insert_movie(name, actors, company, year):
    connection = psycopg2.connect(user='postgres', password='postgres', database='movie')
    cursor = connection.cursor()
    query1 = "INSERT INTO movies (name, company, year) VALUES (%s, %s, %s);"
    cursor.execute(query1, (name, company, year))
    movie_id = cursor.fetchone()[0]
    print(movie_id)
    query2 = 'INSERT INTO actors (last_name, first_name, actor_ordinal) VALUES (%s, %s, %s);'
    for actor in actors:
        cursor.execute(query2, (tuple(actor)))
    rows = cursor.fetchall()
    actor_id1 = [row[0] for row in rows]
    actor_id2 = [row[1] for row in rows]
    print(actor_id1)
    print(actor_id2)
    connection.commit()
    connection.close()
This works great for printing movie_id after query1. However, for printing actor_id2 I get IndexError: list index out of range.
If I leave only actor_id1 after query2, like this:
query2 = 'INSERT INTO actors (last_name, first_name, actor_ordinal) VALUES (%s, %s, %s);'
for actor in actors:
    cursor.execute(query2, (tuple(actor)))
rows = cursor.fetchall()
actor_id1 = [row[0] for row in rows]
print(actor_id1)
then I get the following result printed:
movie_id --> 112
actor2_id --> 155
The problem is that I cannot retrieve actor1_id, which is 154, with this code.
Can anyone help with using fetchall correctly here?
OK, I have found the answer. The fetch should be done inside the loop, since we should fetch after every executed row rather than once after the whole loop for all rows together:
query2 = 'INSERT INTO actors (last_name, first_name, actor_ordinal) VALUES (%s, %s, %s);'
actor_ids = []
for actor in actors:
    cursor.execute(query2, (tuple(actor)))
    actor_id = cursor.fetchone()[0]
    actor_ids.append(actor_id)
print(actor_ids)
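As a side note, with psycopg2 an INSERT only produces rows to fetch if the statement ends with a RETURNING clause, so the snippet above assumes the real queries include one. A minimal hedged sketch, with the name of the id column assumed for illustration:
query2 = ('INSERT INTO actors (last_name, first_name, actor_ordinal) '
          'VALUES (%s, %s, %s) RETURNING id;')   # assumes the id column is literally named "id"
actor_ids = []
for actor in actors:
    cursor.execute(query2, tuple(actor))
    actor_ids.append(cursor.fetchone()[0])   # id of the row that was just inserted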
I built a scrapy spider to run through a job site and return all jobs I'm qualified for so I don't have to scroll through them every day.
I'm parsing the json correctly:
jsonresponse = json.loads(response.body)
for item in jsonresponse:
    yield {
        'id': item['id'],
        'date': item['date'],
        'company': item['company'],
        'position': item['position'],
        'description': item['description'],
        'url': item['url'],
    }
And I set up a function to take whatever .csv file I save the results as, create a new one as a backup, then create a SQLite db to dump results into:
def close(self, reason):
    csv_file = max(glob.iglob('*.csv'), key=os.path.getctime)
    with open(csv_file) as input, open('jobs2.csv', 'w', newline='') as output:
        writer = csv.writer(output)
        for row in csv.reader(input):
            if any(field.strip() for field in row):
                writer.writerow(row)
    db = sqlite3.connect(':memory:')
    csv_data = csv.DictReader(open('jobs.csv'))
    cur = db.cursor()
    cur.execute('''CREATE TABLE jobs_table(date TEXT PRIMARY KEY,
                                           id TEXT,
                                           company TEXT,
                                           position TEXT,
                                           description TEXT,
                                           url TEXT)
                ''')
    db.commit()
    print('')
    print('DB CREATED')
    print('')
Then, I'm defining a list of "skills" to check against each row in the .csv file to see if I'm qualified for the position:
    # skills = {'python'}
    skills = ('python')
    for row in csv_data:
        if skills in row.get('description').lower():
            print('Job Match!')
            print('')
            print('')
            print(row)
            print('')
            print('')
And this is where I run into issues. It prints that there is a Match, and it prints the OrderedDict for the FIRST result:
2019-06-18 14:59:56 [scrapy.extensions.feedexport] INFO: Stored csv feed (309 items) in: jobs.csv
DB CREATED
Job Match!
OrderedDict([('id', '73345'), ('date', '2019-06-11T14:16:33-07:00'), ('company', 'JBS Custom Software Solutions'), ('position', 'Full Stack Developer'), ('description', 'JBS Full-Stack Developer (Python, JavaScript, PostgreSQL)Required; 3+ years working with Python 3+ years working with JavaScript; Strong knowledge of modern JavaScript development practices; Strong computer science skills'), ('url', 'https://entrenous.com/jobs/73472')])
Then, problems occur when I try to run cur.execute() to INSERT rows INTO my "jobs_table". Here's what I've tried (COMMANDS) and what it's kicked back (ERROR):
COMMAND:
cur.execute('INSERT INTO jobs_table(date, id, company, position, description, url) VALUES(%s, %s, %s, %s, %s, %s)', row)
ERROR:
(sqlite3.OperationalError: near "%": syntax error)
COMMAND:
cur.execute('INSERT INTO jobs_table(date, id, company, position, description, url) VALUES(?, ?, ?, ?, ?, ?)', row)
ERROR:
(sqlite3.ProgrammingError: Binding 1 has no name, but you supplied a dictionary (which has only names).)
COMMAND:
cur.execute("INSERT INTO jobs_table(date, id, company, position, description, url) VALUES(date, id, company, position, description, url)", row)
ERROR:
(sqlite3.OperationalError: no such column: date)
COMMAND:
cur.execute("INSERT INTO jobs_table(date, id, company, position, description, url) VALUES(:date, :id, :company, :position, :description, :url)", row)
ERROR:
(sqlite3.IntegrityError: UNIQUE constraint failed: jobs_table.date)
COMMAND:
cur.execute("INSERT INTO jobs_table(date, id, company, position, description, url) VALUES('date', 'id', 'company', 'position', 'description', 'url')", row)
ERROR:
(sqlite3.IntegrityError: UNIQUE constraint failed: jobs_table.date)
I'm wrapping it up with a simple commit, close and a print function to let me know everything's fine and I can go back to bed, but as things stand, I can't go back to bed.
db.commit()
db.close()
print("JOBS IMPORTED!")
I posted a similar question earlier, but wasn't nearly as clear with what I wanted to gain from posting here, so here's what I want:
I want to be able to save only jobs with ["descriptions"] that contain my ["skills"].
The rest of them are useless to me.
Is there anyone out there who can help me with this?
If you get this error: (sqlite3.IntegrityError: UNIQUE constraint failed: jobs_table.date)
Make sure you're INSERTING things in the right order...
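For what it's worth, here is a hedged sketch of how the filtering and the insert could fit together, reusing the named-parameter style from the fourth command above; the INSERT OR IGNORE is my own assumption for simply skipping rows that collide on the date primary key:
    skill = 'python'
    for row in csv_data:
        if skill in row.get('description', '').lower():
            # named placeholders pull the matching keys straight out of the CSV row
            cur.execute("INSERT OR IGNORE INTO jobs_table(date, id, company, position, description, url) "
                        "VALUES(:date, :id, :company, :position, :description, :url)", row)
    db.commit()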
Moving on!
I am trying to fetch data from this API endpoint, https://api.coindesk.com/v1/bpi/currentprice.json, and I have chosen a few fields I want to fetch and store in SQLite.
When I try to save it in a database it gives me this error:
Traceback (most recent call last):
File "bitcoin.py", line 41, in <module>
cur.execute("INSERT INTO COINS (Identifier, symbol, description) VALUES (?, ?, ?);", to_db)
sqlite3.ProgrammingError: Binding 1 has no name, but you supplied a dictionary (which has only names).
How can I store some of the data from the API endpoint in the database?
I'm doing this to learn programming and I'm still new to it, so hopefully you can guide me in the right direction.
Here is what I have tried so far:
import requests
import sqlite3
con = sqlite3.connect("COINS.db")
cur = con.cursor()
cur.execute('DROP TABLE IF EXISTS COINS')
cur.execute(
"CREATE TABLE COINS (Identifier INTEGER PRIMARY KEY, symbol TEXT, description TEXT);"
)
r = requests.get('https://api.coindesk.com/v1/bpi/currentprice.json')
to_db = r.json() # I do not have to do it in json, CSV would also be another
# solution but the data that is been stored cannot be static.
# It has to automatically fetch the data from API-endpoint
cur.execute("INSERT INTO COINS (Identifier, symbol, description) VALUES (?, ?, ?);", to_db)
con.commit()
con.close()
import requests
import sqlite3
con = sqlite3.connect("COINS.db")
cur = con.cursor()
cur.execute('DROP TABLE IF EXISTS COINS')
cur.execute(
    "CREATE TABLE COINS (id INTEGER NOT NULL PRIMARY KEY AUTOINCREMENT UNIQUE, "
    "symbol TEXT, description TEXT);")
r = requests.get('https://api.coindesk.com/v1/bpi/currentprice.json')
to_db = r.json()
des = to_db['bpi']['USD']['description']
code = to_db['bpi']['USD']['code']
cur.execute("INSERT INTO COINS (symbol, description) VALUES (?, ?);",
            (code, des))
con.commit()
con.close()
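If you want to confirm the data actually landed in the table, a quick hedged check (not part of the original answer) can be added before the con.close() call:
for row in cur.execute("SELECT id, symbol, description FROM COINS"):
    print(row)   # expected to look like (1, 'USD', 'United States Dollar')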
My Python program isn't working properly. It's something with the submit button, and it gives me an error saying:
TypeError: 'str' object is not callable
Help please. Here is the part of the code that doesn't work:
def submit():
    g_name = ent0.get()
    g_surname = ent1.get()
    g_dob = ent2.get()
    g_tutorg = ent3.get() #Gets all the entry boxes
    g_email = ent4.get()
    cursor = db.cursor()
    sql = '''INSERT into Students, (g_name, g_surname, g_dob, g_tutorg, g_email) VALUES (?,?,?,?,?)'''
    cursor.execute(sql (g_name, g_surname, g_dob, g_tutorg, g_email))
    #Puts it all on to SQL
    db.commit()
    mlabe2=Label(mGui,text="Form submitted, press exit to exit").place(x=90,y=0)
I'm not sure what else you need, so here's the rest of the SQL part that creates the table:
cursor = db.cursor()
cursor.execute("""
CREATE TABLE IF NOT EXISTS Students(
StudentID integer,
Name text,
Surname text,
DOB blob,
Tutor_Grop blob,
Email blob,
Primary Key(StudentID));
""") #Will create if it doesn't exist
db.commit()
I've been trying for so long and couldn't find a solution to this problem, so if you can help that would be great, thanks.
The problem may be in this line:
cursor.execute(sql (g_name, g_surname, g_dob, g_tutorg, g_email))
Try changing it like this:
cursor.execute(sql, (g_name, g_surname, g_dob, g_tutorg, g_email))
Edit:
I call SQLite inserts in my simple app with this code:
data = (None, spath, sfile, sfilename, sha256hash, )
cur.execute("INSERT INTO filesoid VALUES (?, ?, ?, ?, ?)", data)
and it works ok.
You're not passing the values of your variables correctly. The way you've written cursor.execute(sql (...)) makes the interpreter treat sql as a function that it should call.
You need to format the sql string correctly:
sql = '''INSERT into Students, ({}, {}, {}, {}, {}) VALUES (?,?,?,?,?)'''.format(g_name, g_surname, g_dob, g_tutorg, g_email)
then use:
cursor.execute(sql)
EDIT:
Or you may need to pass a tuple with the data:
sql = '''INSERT into Students VALUES (?,?,?,?,?)'''
data = (g_name, g_surname, g_dob, g_tutorg, g_email)
and then use
cursor.execute(sql, data)
It depends on what those values actually are, and without seeing the database, I can't tell.
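For reference, a hedged sketch of a parameterized insert that matches the CREATE TABLE Students statement shown in the question (column names taken from there, including the Tutor_Grop spelling; StudentID is left out on the assumption that SQLite assigns the integer primary key itself):
sql = '''INSERT INTO Students (Name, Surname, DOB, Tutor_Grop, Email) VALUES (?,?,?,?,?)'''
data = (g_name, g_surname, g_dob, g_tutorg, g_email)
cursor.execute(sql, data)   # each ? is bound, in order, to the corresponding value in the tuple
db.commit()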
I'm having a bit of trouble retrieving the last insert id from a query in SQLite3 using Python.
Here's a sample of my code:
import sqlite3
# Setup our SQLite Database
conn = sqlite3.connect('value_serve.db')
conn.execute("PRAGMA foreign_keys = 1") # Enable Foreign Keys
cursor = conn.cursor()
# Create table for Categories
conn.executescript('DROP TABLE IF EXISTS Category;')
conn.execute('''CREATE TABLE Category (
id INTEGER PRIMARY KEY AUTOINCREMENT,
category CHAR(132),
description TEXT,
parent_id INT,
FOREIGN KEY (parent_id) REFERENCES Category (id)
);''')
conn.execute("INSERT INTO Category (category, parent_id) VALUES ('Food', NULL)")
food_category = cursor.lastrowid
conn.execute("INSERT INTO Category (category, parent_id) VALUES ('Beverage', NULL)")
beverage_category = cursor.lastrowid
...
conn.commit() # Commit to Database
No matter what I do, when I try to get the value of 'food_category' I get a return value of 'None'.
Any help would be appreciated, thanks in advance.
The lastrowid value is set per cursor, and only visible to that cursor.
You need to ask the cursor that actually executed the query for the last row id. You are asking an arbitrary cursor, one that never executed the query, and that cursor can't know the value.
If you actually execute the query on the cursor object, it works:
cursor.execute("INSERT INTO Category (category, parent_id) VALUES ('Food', NULL)")
food_category = cursor.lastrowid
The connection.execute() function creates a new (local) cursor for that query and the last row id is only visible on that local cursor. That cursor is returned when you use connection.execute(), so you could get the same value from that return value:
cursor_used = conn.execute("INSERT INTO Category (category, parent_id) VALUES ('Food', NULL)")
food_category = cursor_used.lastrowid
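As a follow-up, that captured id is exactly what the parent_id foreign key is there for. A short hedged sketch (the 'Fruit' sub-category is made up for illustration):
cursor.execute("INSERT INTO Category (category, parent_id) VALUES ('Food', NULL)")
food_category = cursor.lastrowid
# hypothetical child row, reusing the id captured above as its parent_id
cursor.execute("INSERT INTO Category (category, parent_id) VALUES (?, ?)", ('Fruit', food_category))
conn.commit()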