Delete all .csv file content in Python

My program should replace the old data in a .csv file with new data, writing to the same file. But after I run the program, the data is not replaced. Below is my code.
TEXTFILE = open("data.csv", "w")
for i in book_list:
    TEXTFILE.write("{},{},{},{}\n".format(i[0], i[1], i[2], i[3]))
TEXTFILE.close()
book_list is the list that holds the new data to be stored. The result I got:
k,k,45,c
a,a,65,r
d,s,65,r
as,as,65,r
df,df6,65,r
as,as,6,r
as,as,46,r
as,as,45,r
as,as,56,rk,k,45,r
a,a,65,r
d,s,65,r
as,as,65,r
df,df6,65,r
as,as,6,r
as,as,46,r
as,as,45,r
as,as,56,r
The CSV ends up containing the old and the new content combined. The original file looks like this:
k,k,45,r
a,a,65,r
d,s,65,r
as,as,65,r
df,df6,65,r
as,as,6,r
as,as,46,r
as,as,45,r
as,as,56,r
I don't know how else to explain it, but I expect the result to change one line, replacing it with the new data in the fourth field. For example, the previous line 1 is k,k,45,r, and the program should change it to k,k,45,c.
Hope you all can help me :)

Try using .truncate(); if called right after opening the file, it discards its content.
TEXTFILE = open("data.csv", "w")
TEXTFILE.truncate()
for i in book_list:
    TEXTFILE.write("{},{},{},{}\n".format(i[0], i[1], i[2], i[3]))
TEXTFILE.close()
I hope I understood you correctly.
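Note that opening with "w" already truncates the file, so if old rows survive, something else (e.g. an earlier append, or an unclosed handle) is usually to blame. A minimal sketch of rewriting the file cleanly with the csv module, assuming book_list holds 4-field rows (the sample rows here are illustrative):

```python
import csv

# Illustrative stand-in for the question's book_list.
book_list = [("k", "k", 45, "c"), ("a", "a", 65, "r")]

# "w" truncates data.csv on open, so only the new rows remain.
# newline="" lets the csv module control line endings itself.
with open("data.csv", "w", newline="") as f:
    writer = csv.writer(f)
    for row in book_list:
        writer.writerow(row)
```

Using csv.writer also avoids hand-formatting fields and handles quoting for you.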

Related

Text file is not live updating

I am trying to read and write to a text file. The reading and writing parts work fine, but the actual file does not update until after the program has finished executing. I understand that this is because the data is stored in a buffer and written out afterwards, so I came across How come a file doesn't get written until I stop the program?
and tried .flush, os.fsync, etc.; this did not have an effect though, so maybe I'm not seeing something.
Note that .flush does not work in the Postdata sub; I think it's because of the way that subroutine is coded.
Getdata works as expected.
Postdata takes an index and a line index and edits the text at that specific position.
def Getdata(Index, lineindex):  # indexed so you can say "get data at index 3" and it will return it
    Datafile = open("Trade data/3rd file", "a+")
    Linetoget = linecache.getline('Trade data/Databaseforbot', lineindex).split("|")
    Traddetail = Linetoget[Index]
    print(Traddetail)
    return Traddetail

def Postdata(index, lineindex, data):  # will work fine the first time, but run it as PostdataV1(3,2) it will convert
    Getdata(3, 2)
    with fileinput.FileInput('Trade data/Databaseforbot', inplace=True, backup='.bak') as file:
        entireline = linecache.getline('Trade data/Databaseforbot', lineindex)
        splitted = entireline.split("|")
        Traddetail = splitted[index]
        Newline = entireline.replace(Traddetail, str(index + 1) + "*" + data)
        for line in file:
            print(line.replace(entireline, Newline), end='')
        # os.fsync(file)
        file.close()

Getdata(3, 2)
Postdata(3, 2, "QW")
Getdata(3, 2)
The data file stores this data:
1|https://app.libertex.com/products/stock/BA/|3*45#4|4*0|5*0|6*0|7*0|8*0|9*Up|CDwindow-5C5C0883A51583A013B50FDC5A1798B7
2|https://app.libertex.com/products/energetics/NG/|3*56#5|4*0|5*0|6*0|7*0|8*0|9*Up|CDwindow-5C5C0883A51583A013B50FDC5A1798B7
3|https://app.libertex.com/products/metal/XAUUSD/|3*45#4|4*0|5*0|6*0|7*0|8*0|9*Up|CDwindow-5C5C0883A51583A013
Is there a way to live update the file so I can call other parts of the code to read the data from it? I will be using something like getch to run other stuff, and I don't mind pausing the posting of data while reading. I tried using a second file, e.g. a filex which is read from in Getdata(), where Postdata first writes to a filey and then copies everything to filex, but that did not work either.
Also, there will be around 10-50 lines in the text file, if that helps.
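A likely culprit in code like the above is that linecache caches file contents in memory, so reads after a write can return stale data unless the cache is invalidated with linecache.checkcache(). A minimal sketch of the read-modify-reread cycle (the filename is illustrative, not the question's path):

```python
import linecache

path = "dbfile.txt"  # illustrative file standing in for the database file

# Write initial content; flush pushes Python's buffer out to the OS.
with open(path, "w") as f:
    f.write("first line\nsecond line\n")
    f.flush()

print(linecache.getline(path, 2))  # this call caches the file's lines

# Modify the file on disk.
with open(path, "w") as f:
    f.write("first line\nCHANGED\n")

linecache.checkcache(path)          # drop the now-stale cached copy
print(linecache.getline(path, 2))   # re-reads the file from disk
```

Without the checkcache() call, the second getline() would still return the cached "second line".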

Split at every new instance of string Python

I am trying to split some file metadata taken from Dropbox at every instance of 'FileMetadata' and write it to a text file. It prints in my console as I need, but when appending to the text file the newlines aren't coming through.
For context: the code gets the file metadata, writes it to a file, and reads it back in order to split it.
with open(write_file, 'rt') as read_file:
    contents = read_file.read()

data = contents.split('FileMetadata')
print(data)

with open(write_file, 'w') as file1:
    file1.write(str(data))
It appears you want a newline for every part that was split by the 'FileMetadata' string.
Instead of your file1.write(str(data)), did you try file1.write("\n".join(data))?
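A sketch of the full round trip with that fix, using an illustrative filename and sample content standing in for the Dropbox metadata dump:

```python
path = "dropbox_meta.txt"  # illustrative filename

# Sample content standing in for the metadata written earlier.
with open(path, "w") as f:
    f.write("FileMetadata(name=a.txt)FileMetadata(name=b.txt)")

with open(path, "rt") as read_file:
    contents = read_file.read()

# Note: if the text starts with the separator, the first part is "".
parts = contents.split("FileMetadata")

# Join with newlines instead of writing the list's repr.
with open(path, "w") as f:
    f.write("\n".join(parts))
```

str(data) writes the list's repr (brackets, quotes and all), which is why no newlines appeared; "\n".join(data) writes the parts themselves, one per line.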

Replace string in specific line of nonstandard text file

Similar to the posting Replace string in a specific line using python; however, results were not forthcoming in my slightly different instance.
I am working with Python 3 on Windows 7, attempting to batch-edit some files in a directory. They are basically text files with a .LIC extension; I'm not sure if that is relevant to my issue here. I am able to read the file into Python without issue.
My aim is to replace a specific string on a specific line in this file.
import os
import re

groupname = 'Oldtext'
aliasname = 'Newtext'

with open('filename') as f:
    data = f.readlines()
    data[1] = re.sub(groupname, aliasname, data[1])
    f.writelines(data[1])

print(data[1])
print('done')
When running the above code I get an UnsupportedOperation: not writable, so I am having some trouble writing the changes back to the file. Based on suggestions in other posts, I added the w option to the open('filename', "w") call. This causes all text in the file to be deleted.
The r+ option was also suggested. This leads to the file being edited successfully; however, instead of editing the correct line, the edited line is appended to the end of the file, leaving the original intact.
Writing a changed line into the middle of a text file is not going to work unless it's exactly the same length as the original - which is the case in your example, but you've got some obvious placeholder text there so I have no idea if the same is true of your actual application code. Here's an approach that doesn't make any such assumption:
with open('filename', 'r') as f:
    data = f.readlines()

data[1] = re.sub(groupname, aliasname, data[1])

with open('filename', 'w') as f:
    f.writelines(data)
EDIT: If you really wanted to write only the single line back into the file, you'd need to use f.tell() BEFORE reading the line, to remember its position within the file, and then f.seek() to go back to that position before writing.
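A sketch of that tell()/seek() variant, which only works when the replacement line is exactly the same length as the original (the filename and contents here are illustrative placeholders):

```python
import re

groupname = "Oldtext"
aliasname = "Newtext"  # same length as groupname, so in-place overwrite is safe

path = "sample.LIC"  # illustrative filename
with open(path, "w") as f:
    f.write("header\nfoo Oldtext bar\nfooter\n")

with open(path, "r+") as f:
    f.readline()        # skip line 0
    pos = f.tell()      # remember where line 1 starts, BEFORE reading it
    line = f.readline()
    f.seek(pos)         # jump back to the start of line 1
    f.write(re.sub(groupname, aliasname, line))  # overwrite it in place
```

In text mode, only positions returned by tell() are safe to seek() to, which is why the position is captured before the readline rather than computed.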

overwrite some data in a python file

I was hoping someone could help me. I am currently trying to add some data into a text file; however, the way I am doing it isn't giving me what I want. I have a file with 20+ lines of text and want to overwrite the first 30 characters of the file with 30 new characters. The code I have deletes all the content and adds only the 30 characters. Please help :)
file = open("text.txt", "w")
Is there something wrong with this that explains why it's removing all of the original data instead of simply overwriting it?
One way is to read the whole file into a single string, create a new string with first 30 characters replaced and rewrite the whole file. This can be done like this:
with open("text.txt", "r") as f:
    data = f.read()

new_thirty_characters = '<put your data here>'
new_data = new_thirty_characters + data[30:]

with open("text.txt", "w") as f:
    f.write(new_data)
Ideally, you should also check that the file contains more than 30 characters after reading it. And do not use file or other built-in names as variable names.
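An alternative sketch that overwrites in place using "r+" instead of rewriting the whole file, assuming the file already contains at least 30 characters (the sample content is illustrative):

```python
new_thirty_characters = "X" * 30  # illustrative 30-character replacement

# Create illustrative starting content of more than 30 characters.
with open("text.txt", "w") as f:
    f.write("a" * 30 + " and the rest of the file")

# "r+" opens for update WITHOUT truncating, unlike "w".
with open("text.txt", "r+") as f:
    f.seek(0)                       # position at the start of the file
    f.write(new_thirty_characters)  # overwrite exactly 30 characters
```

This is why "w" deletes everything: it truncates the file the moment it is opened, before any write happens.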

Python: Saving to CSV file, accidentally writing to next column instead of row after manually opening the file

I've noticed a really weird bug and didn't know if anyone else had seen this / knows how to stop it.
I'm writing to a CSV file using this:
def write_to_csv_file(self, object, string):
    with open('data_model_1.csv', 'a') as f:
        writer = csv.writer(f)
        writer.writerow([object, string])
and then write to the file:
self.write_to_csv_file(self.result['outputLabel'], string)
If I open the CSV file to look at the results, the next time I write to the file, it will start in column 3 of the last line (column 1 is object, column 2 is string).
If I run self.write_to_csv_file(self.result['outputLabel'], string) multiple times without manually opening the file (obviously I open the file in the Python script), everything is fine.
It's only when I open the file so I get the issue of starting on Column 3.
Any thoughts on how to fix this?
You're opening the file in append mode, so the data is appended to the end of the file. If the file doesn't end in a newline, rows may get concatenated. Try writing a newline to the file before appending new rows:
with open("data_model_1.csv", "a") as f:
    f.write("\n")
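Writing "\n" unconditionally adds a blank row whenever the file already ends with a newline. A sketch that repairs the missing newline only when needed (the filename is from the question; the seeded content simulates a file saved without a trailing newline, e.g. after a manual edit):

```python
import csv

path = "data_model_1.csv"

# Simulate a file whose last row lost its newline during a manual save.
with open(path, "w", newline="") as f:
    f.write("label,text")

# Inspect the last byte in binary mode, where the check is unambiguous.
with open(path, "rb") as f:
    raw = f.read()
needs_newline = bool(raw) and not raw.endswith(b"\n")

with open(path, "a", newline="") as f:
    if needs_newline:
        f.write("\r\n")  # repair the missing row terminator first
    csv.writer(f).writerow(["obj", "new row"])
```

Passing newline="" to open() is also the csv module's documented way to avoid extra blank rows on Windows.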
