Basic writing to a CSV file in Python?

I was looking for help writing to a CSV file in Python 3. I have the code below, but it only seems to write to the first line; whenever I run the code again it overwrites the first line.
import csv

with open("scores1.csv", "w") as csvfile:
    fieldnames = ["score", "username", "topic", "difficulty"]
    writer = csv.DictWriter(csvfile, fieldnames=fieldnames)
    writer.writeheader()
    score = int(input("score"))
    user = input("user: ")
    topic = input("topic: ")
    difficulty = input("difficulty: ")
    writer.writerow({"score": score, "username": user, "topic": topic, "difficulty": difficulty})
    print()
csvfile.close()

writerow(), unsurprisingly, writes a single row, so it needs to be enclosed in a for loop that iterates over each record you want to write to the file.
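A minimal sketch of that idea (not the original poster's code): the loop collects several records per run, and opening the file in append mode ("a") rather than "w" keeps the rows from earlier runs, which is what the question is actually running into. The three-record loop count is just an illustrative assumption.

import csv
import os

fieldnames = ["score", "username", "topic", "difficulty"]
# Only write the header if the file does not exist yet or is empty.
need_header = not os.path.exists("scores1.csv") or os.path.getsize("scores1.csv") == 0

with open("scores1.csv", "a", newline="") as csvfile:
    writer = csv.DictWriter(csvfile, fieldnames=fieldnames)
    if need_header:
        writer.writeheader()
    for _ in range(3):  # one iteration per record to add; 3 is arbitrary
        writer.writerow({
            "score": int(input("score: ")),
            "username": input("user: "),
            "topic": input("topic: "),
            "difficulty": input("difficulty: "),
        })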

Related

How to print only the content of a cell in a specific row from a csv file in Python

I'm new to Python, so excuse me if my question is kind of dumb.
I write some data to a csv file (I'm making a password manager). I send this to the file, in this order: the name of the site, the corresponding e-mail, and finally the password.
I would like to print all the site names already written in the csv file, but here is my problem: for the first row it prints the whole row, while for the following rows it works just fine.
Here is my code, I hope you can help me with this.
csv_file = csv.reader(open('mycsvfile.csv', 'r'), delimiter=';')
try:
    print("Here are all the sites you saved :")
    for row in csv_file:
        print(row[0])
except:
    print("Nothing already saved")
Maybe it can help, but here is how I wrote my data into the csv file:
# I encrypt the email and the password thanks to fernet and an already written key
# I also make sure that the email is valid
file = open('key.key', 'rb')
key = file.read()
file.close()
f = Fernet(key)
website = input("web site name : \n")
restart = True
while restart:
    mail = input("Mail:\n")
    a = isvalidEmail(mail)
    if a == True:
        print("e-mail validated")
        restart = False
    else:
        print("Wrong e-mail")
psw = input("password :\n")
psw_bytes = psw.encode()
mail_bytes = mail.encode()
psw_encrypted_in_bytes = f.encrypt(psw_bytes)
mail_encrypted_in_bytes = f.encrypt(mail_bytes)
mail_encrypted_str = mail_encrypted_in_bytes.decode()
psw_encrypted_str = psw_encrypted_in_bytes.decode()
f = open('a.csv', 'a', newline='')  # note: this rebinds f from the Fernet object to the file handle
tup1 = (website, mail_encrypted_str, psw_encrypted_str)
writer = csv.writer(f, delimiter=';')
writer.writerow(tup1)
print("Saved ;)")
f.close()
return
And here is my output (I have already saved some data): first you see the name of the website with the encrypted e-mail and password, then just the names, which is what I want.
I finally succeeded: instead of using csv.reader, I used csv.DictReader, and since all the names I'm looking for are in the same column, I just have to use the title of that column.
So here is the code:
with open('mycsv.csv', newline='') as csvfile:
    data = csv.DictReader(csvfile)
    print("Websites")
    print("---------------------------------")
    for row in data:
        print(row['The_title_of_my_column'])
Make a list from csv.reader():
rows = [row for row in csv_file]
Now you can get any element by its indices, using rows as a list of lists:
rows[id1][id2]
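A small self-contained sketch of that approach, assuming the ';'-delimited mycsvfile.csv from the question with the site name in the first column:

import csv

with open('mycsvfile.csv', 'r', newline='') as f:
    rows = [row for row in csv.reader(f, delimiter=';')]

# rows is a list of lists, so rows[i][j] is the cell in row i, column j.
for row in rows:
    print(row[0])   # the site name stored in the first column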

How to remove a specific row from a csv file using Python

I am working on a program and trying to achieve the following functionality:
add a new student
remove a student based on id
Here is my code:
from csv import writer
import csv

def add(file_name, list_of_elem):
    # Open file in append mode
    with open(file_name, 'a+', newline='') as write_obj:
        # Create a writer object from csv module
        csv_writer = writer(write_obj)
        # Add contents of list as last row in the csv file
        csv_writer.writerow(list_of_elem)

def remove():
    id = input("Enter ID : ")
    with open('students.csv', 'rb') as inp, open('students.csv', 'wb') as out:
        writer = csv.writer(out)
        for row in csv.reader(inp):
            if row[0] != id:
                writer.writerow(row)

# List of strings
row_contents = [11, 'mayur', 'Java', 'Tokyo', 'Morning']
# Append a list as new line to an old csv file
add('students.csv', row_contents)
remove()
The add function works properly, but when I try the remove function it removes all existing entries. Could anyone please help me?
First I will show the code, and below it I will leave some comments about the changes.
from csv import writer
import csv

def add(file_name, list_of_elem):
    # Open file in append mode
    with open(file_name, 'a+', newline='') as write_obj:
        # Create a writer object from csv module
        csv_writer = writer(write_obj)
        # Add contents of list as last row in the csv file
        csv_writer.writerow(list_of_elem)

def remove():
    idt = input("Enter ID : ")
    with open('students.csv', 'r') as inp:
        newrows = []
        data = csv.reader(inp)
        for row in data:
            if row[0] != idt:
                newrows.append(row)
    with open('students.csv', 'w') as out:
        csv_writer = writer(out)
        for row in newrows:
            csv_writer.writerow(row)

def display():
    with open('students.csv', 'r') as f:
        data = csv.reader(f)
        for row in data:
            print(row)

# List of strings
row_contents = [10, 'mayur', 'Java', 'Tokyo', 'Morning']
add('students.csv', row_contents)
row_contents = [11, 'mayur', 'Java', 'Tokyo', 'Morning']
add('students.csv', row_contents)
row_contents = [12, 'mayur', 'Java', 'Tokyo', 'Morning']
add('students.csv', row_contents)
# Append a list as new line to an old csv file
display()
remove()
Since a CSV is a text file, you should open it in text mode, not binary mode.
I changed the name of the variable id to idt, because id is a built-in that returns the identity of an object, and it is not good practice to shadow built-in functions.
To remove only the rows with a specific idt, you have to read the whole file, store the rows in a list, drop the ones you want to delete, and only then write the result back.
You should use a temporary file instead of opening and writing to the same file simultaneously. Check out this answer: https://stackoverflow.com/a/17646958/14039323
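A minimal sketch of that temporary-file approach (not taken from the linked answer), assuming a students.csv whose first column holds the student ID:

import csv
import os
import tempfile

def remove_by_id(path, student_id):
    # Write the surviving rows to a temp file in the same directory,
    # then atomically replace the original file with it.
    fd, tmp_path = tempfile.mkstemp(dir=os.path.dirname(path) or '.', suffix='.csv')
    with open(path, 'r', newline='') as inp, os.fdopen(fd, 'w', newline='') as out:
        writer = csv.writer(out)
        for row in csv.reader(inp):
            if row and row[0] != student_id:
                writer.writerow(row)
    os.replace(tmp_path, path)

remove_by_id('students.csv', input("Enter ID : "))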

Add data to new column and first row in CSV file

I have a Python function that creates a CSV file using a PostgreSQL COPY statement. I need to add a new column to this spreadsheet called 'UAL', with an example value of, say, 30,000 in the first row, but without editing the COPY statement. This is the current code:
copy_sql = '''COPY (
    SELECT
        e.name AS "Employee Name",
        e.title AS "Job Title",
        e.gross AS "Total Pay",
        e.total AS "Total Pay & Benefits",
        e.year AS "Year",
        e.notes AS "Notes",
        j.name AS "Agency",
        e.status AS "Status"
    FROM employee_employee e
    INNER JOIN jurisdiction_jurisdiction j ON e.jurisdiction_id = j.id
    WHERE
        e.year = 2011 AND
        j.id = 4479
    ORDER BY "Agency" ASC, "Total Pay & Benefits" DESC
)'''
with open(path, 'w') as csvfile:
    self.cursor.copy_expert(copy_sql, csvfile)
What I am trying to do is use something like csv.writer to add content like this:
with open(path, 'w') as csvfile:
    self.cursor.copy_expert(copy_sql, csvfile)
    writer = csv.writer(csvfile)
    writer.writerow('test123')
But this is adding the text to the last row. I am also unsure how to add a new header column. Any advice?
Adding a header is easy: write the header before the call to copy_expert.
with open(path, 'w') as csvfile:
    writer = csv.writer(csvfile)
    writer.writerow(["my", "super", "header"])
    self.cursor.copy_expert(copy_sql, csvfile)
But adding a column cannot be done without re-reading the file and adding your info to each row, so the above on its own doesn't help much.
If the file isn't too big and fits in memory, you can write the SQL output to a "fake" in-memory file:
import io

fakefile = io.StringIO()
self.cursor.copy_expert(copy_sql, fakefile)
Now rewind the fake file and parse it as CSV, adding the extra column as you write it back:
import csv

fakefile.seek(0)
with open(path, 'w', newline="") as csvfile:
    writer = csv.writer(csvfile)
    reader = csv.reader(fakefile)  # works if copy_expert uses "," as separator, else change it
    writer.writerow(["my", "super", "header", "UAL"])
    for row in reader:
        writer.writerow(row + [30000])
Or, instead of the inner loop:
writer.writerows(row + [30000] for row in reader)
And if the file is too big for memory, write the SQL output to a temporary file instead and proceed the same way (less performant).
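A sketch of that temporary-file variant, reusing the cursor, copy_sql, and path names from above; tempfile.TemporaryFile gives a disk-backed file that is cleaned up automatically:

import csv
import tempfile

with tempfile.TemporaryFile(mode='w+', newline='') as tmp:
    self.cursor.copy_expert(copy_sql, tmp)   # spool the SQL output to disk
    tmp.seek(0)                              # rewind before re-reading it
    with open(path, 'w', newline='') as csvfile:
        writer = csv.writer(csvfile)
        reader = csv.reader(tmp)             # again assumes "," as the separator
        writer.writerow(["my", "super", "header", "UAL"])
        writer.writerows(row + [30000] for row in reader)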

Trying to read files named file1,file2,file3 using for loop in Python

I am pretty new to Python and trying to run a script that edits csv files. The problem I am facing is that I need to split a csv file into smaller pieces (it is large and I am getting memory errors) and then run another script to edit them. But when I try to combine these two scripts and run the test, only the first small file is read and edited; the rest are not.
For example: when I split the main csv file, the pieces are named big-1.csv, big-2.csv, and so on. Then, when the script picks up the files to edit, only big-1.csv gets edited and the rest do not.
The script is:
import csv
from csv import DictWriter

divisor = 990
outfileno = 1
outfile = None

# Split MOCK_DATA.csv into smaller files of at most `divisor` data rows each.
with open('MOCK_DATA.csv', 'r', newline='') as infile:
    infile_iter = csv.reader(infile, delimiter='\t')
    header = next(infile_iter)
    for index, row in enumerate(infile_iter):
        if index % divisor == 0:
            if outfile:
                outfile.close()
            outfilename = 'big-{}.csv'.format(outfileno)
            outfile = open(outfilename, 'w', newline='')
            outfileno += 1
            writer = csv.writer(outfile, delimiter='\t', quoting=csv.QUOTE_NONE)
            writer.writerow(header)
        writer.writerow(row)

# Don't forget to close the last file
if outfile:
    outfile.close()

# Export the data from the split files with correct quoting
for i in range(1, 2):
    with open("big-" + str(i) + ".csv") as people_file:
        next(people_file)
        corrected_people = []
        for person_line in people_file:
            chomped_person_line = person_line.rstrip()
            person_tokens = chomped_person_line.split(",")
            # check that each field has the expected type
            try:
                corrected_person = {
                    "id": person_tokens[0],
                    "first_name": person_tokens[1],
                    "last_name": "".join(person_tokens[2:-3]),
                    "email": person_tokens[-3],
                    "gender": person_tokens[-2],
                    "ip_address": person_tokens[-1]
                }
                if not corrected_person["ip_address"].startswith(
                        "") and corrected_person["ip_address"] != "n/a":
                    raise ValueError
                corrected_people.append(corrected_person)
            except (IndexError, ValueError):
                # print the ignored lines, so manual correction can be performed later.
                print("Could not parse line: " + chomped_person_line)
    with open("fix-" + str(i) + ".csv", "w") as corrected_people_file:
        writer = DictWriter(
            corrected_people_file,
            fieldnames=[
                "id", "first_name", "last_name", "email", "gender", "ip_address"
            ], delimiter=',')
        writer.writeheader()
        writer.writerows(corrected_people)
I think this may be an issue with how the smaller files are read in the for loop. The script runs without any errors. Please help.
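For what it's worth, range(1, 2) only yields 1, so the cleanup loop above only ever opens big-1.csv. A small sketch of one way to visit every split file instead, assuming the big-*.csv naming produced by the splitting code:

import glob

for filename in sorted(glob.glob("big-*.csv")):
    with open(filename) as people_file:
        next(people_file)              # skip the header row
        for person_line in people_file:
            pass                       # run the same correction logic as above on each line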

Python CSV read file and select columns and write to new CSV file

I have a CSV file with certain columns that I need to extract. One of those columns is a text string from which I need to extract the first and last items. I have a print statement in a for loop that gets exactly what I need, but I cannot figure out how to get that data into a list or a dict. I'm not sure which is the best to use.
Code so far:
f1 = open("report.csv", "r")  # open input file for reading
users_dict = {}
with open('out.csv', 'wb') as f:  # output csv file
    writer = csv.writer(f)
    with open('report.csv', 'r') as csvfile:  # input csv file
        reader = csv.DictReader(csvfile, delimiter=',')
        for row in reader:
            print row['User Name'], row['Address'].split(',')[0], row['Last Login DateTime'], row['Address'].split(',')[7]
            users_dict.update(row)
            #users_list.append(row['Address'].split(','))
            #users_list.append(row['Last Login DateTime'])
            #users_list.append(row[5].split(',')[7])
print users_dict
f1.close()
Input from file:
User Name,Display Name,Login Name,Role,Last Login DateTime,Address,Application,AAA,Exchange,Comment
SUPPORT,SUPPORT,SUPPORT,124,2015-05-29 14:32:26,"Test Company,Bond St,London,London,1111 111,GB,test#test.com,IS",,,LSE,
Output on print:
SUPPORT Test Company 2015-05-29 14:32:26 IS
Using this code, I've got the line you need:
import csv

f1 = open("report.csv", "r")  # open input file for reading
users_dict = {}
with open('out.csv', 'wb') as f:  # output csv file
    writer = csv.writer(f)
    with open('report.csv', 'r') as csvfile:  # input csv file
        reader = csv.DictReader(csvfile, delimiter=',')
        for row in reader:
            print row['User Name'], row['Address'].split(',')[0], row['Last Login DateTime'], row['Address'].split(',')[7]
            users_dict.update(row)
            #users_list.append(row['Address'].split(','))
            #users_list.append(row['Last Login DateTime'])
            #users_list.append(row[5].split(',')[7])
print users_dict
f1.close()
The only changes:
Including the import csv at the top.
Indenting the code just after the with open('out.csv', ...) line.
Does this solve your problem?
With some testing I finally got the line that writes to the csv file:
for row in reader:
    writer.writerow([row['User Name'], row['Address'].split(',')[0], row['Last Login DateTime'], row['Address'].split(',')[7]])
