This question already has answers here:
Python seek to read constantly growing files
(1 answer)
How can I tail a log file in Python?
(14 answers)
Closed 2 years ago.
GOAL
I have a text file which receives at least 20 new lines every second.
Every time a new line appears in my txt file, I want that line to go through a process: for example, convert the line to uppercase and print it, then wait for the next line to appear.
WHAT I TRIED
My idea was to read the file line by line, as you normally would in Python, assuming that by the time it finished reading the existing lines, more lines would have been added. However, Python reads the lines too fast...
This was my code:
import time

file = open('/file/path/text.txt')
for line in file:
    line = line.upper()
    print(line)
    time.sleep(1)
I know there is PySpark, but is there an easier solution? I only have one text file, and it just needs to go through a simple process (e.g. uppercase each new line, print it, then wait for the next line to appear).
I am a newbie in python so I am sorry if the answer is obvious for some of you. I would appreciate any help. Thank you.
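Since the linked duplicates describe the usual approach, here is a minimal tail-style sketch of it, assuming lines are only ever appended to the file and that '/file/path/text.txt' is the path from the question:
import os
import time

# Seek to the end of the file, then poll for newly appended lines ("tail -f" pattern).
with open('/file/path/text.txt') as f:
    f.seek(0, os.SEEK_END)  # skip everything already in the file
    while True:
        line = f.readline()
        if not line:
            time.sleep(0.1)  # no new line yet; wait briefly and try again
            continue
        print(line.upper(), end='')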
This question already has answers here:
read the whole file at once
(2 answers)
Closed 7 months ago.
Let's say I wanted to read a whole file at once instead of going through it line by line (for example, to speed up counting how many times 'i' occurs in the file). How would I go about reading it as a whole instead of line by line?
with open("file.ext", 'r') as f:
data = f.read()
As others have mentioned, the official Python tutorial on Reading and Writing Files explains this. See in particular the Methods of File Objects section, which covers f.read(size) and f.readline() and the difference between them.
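For instance, a short sketch of the whole-file approach applied to the counting example from the question ("file.ext" is just the placeholder name used above):
# Read the entire file into one string, then count occurrences in it.
with open("file.ext", 'r') as f:
    data = f.read()

print(data.count('i'))  # number of times 'i' occurs in the whole file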
This question already has answers here:
How to delete a specific line in a file?
(17 answers)
Closed 3 years ago.
So my problem is: I am working with Python on a Discord bot (using discord.py).
I want the bot to read a txt file, choose one random line from it, and then delete that line.
This is how far I've come:
import random

list = open(r"C:\Users\Max\Documents\Python\Discord Bots\Test File\Text\text.txt", "r+")
readlist = list.readlines()
text = random.choice(readlist)
for readlist in list:  # <-- I guess there is the problem
    readlist = "\n"    # <-- here too
list.close()
You need to write the lines back to the file. Otherwise, you're just modifying the local variable. To do this with the same file object, you'll need to seek to the beginning of the file before you write. You'll probably also want to use a list of lines, e.g. from readlines, rather than iterating through the lines while you're writing them. You can then truncate any additional lines you didn't write over at the end.
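For example, a minimal sketch of that approach, assuming the goal is to pick one random line and remove it (the path is the one from the question):
import random

# Read all lines, pick one at random, then rewrite the file without it.
path = r"C:\Users\Max\Documents\Python\Discord Bots\Test File\Text\text.txt"
with open(path, "r+") as f:
    lines = f.readlines()
    chosen = random.choice(lines)
    lines.remove(chosen)  # drop the first occurrence of the chosen line
    f.seek(0)             # go back to the beginning of the file
    f.writelines(lines)   # write the remaining lines over the old content
    f.truncate()          # cut off the leftover tail of the old, longer content
print(chosen)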
For more information about reading and writing files and I/O, see https://docs.python.org/3/tutorial/inputoutput.html#reading-and-writing-files and https://docs.python.org/3/library/io.html.
Also, in your code, you're shadowing readlist and the built-in list.
This also has nothing to do with discord.py or Discord bots.
This question already has answers here:
How should I read a file line-by-line in Python?
(3 answers)
Closed 3 years ago.
I have a very big file (~10GB) and I want to read it in its entirety. To achieve this, I cut it into chunks. However, I have trouble cutting the big file into usable pieces: I want thousands of lines together without having any line split in the middle. I found a function here on SO that I have adapted a bit:
def readPieces(file):
    while True:
        data = file.read(4096).strip()
        if not data:
            break
        yield data

with open('bigfile.txt', 'r') as f:
    for chunk in readPieces(f):
        print(chunk)
I can specify the number of bytes to read (here 4096), but when I do so my lines get cut in the middle, and if I remove the size argument it reads the whole big file at once, which brings the process to a halt. How can I do this?
Also, the lines in my file are not all the same length.
The following code reads the file line by line; each previous line can be garbage collected as you go.
with open('bigfile.txt') as file:
    for line in file:
        print(line)
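If you really need chunks of whole lines rather than single lines, one possible sketch (the batch size of 1000 lines is an arbitrary assumption) is to group the file iterator into batches of complete lines with itertools.islice:
from itertools import islice

def read_line_batches(file, lines_per_batch=1000):
    # Yield lists of complete lines; no line is ever split in the middle.
    while True:
        batch = list(islice(file, lines_per_batch))
        if not batch:
            break
        yield batch

with open('bigfile.txt') as f:
    for batch in read_line_batches(f):
        print(len(batch))  # process each batch of whole lines here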
This question already has answers here:
Deleting a line in a file
(4 answers)
Closed 4 years ago.
I want to read a text file line by line and delete each line after reading it. The steps are like below.
Read the line
Delete the line
Repeat steps 1 and 2 until the file is empty
For example data in file is like below.
1,2,3,4,5
a,b,c,d,e
q,w,e,r,t
a,s,d,f,g
So, steps will be ...
while Reading
Read 1,2,3,4,5
Delete 1,2,3,4,5
then next line
Read a,b,c,d,e
Delete a,b,c,d,e
..
..
and so on until the file is empty.
I know Python well, but I am not able to do it.
Don't even try!
At its lowest level, a file is a sequential stream of bytes. You can read bytes from that stream, overwrite bytes, and position your cursor (the position where you read or write) anywhere between the beginning and the end of the file. When you write at the end of the file, you extend it. You can also truncate a file (remove everything past the cursor).
So there is no way to remove just the initial part of a file in place! The common approach is to write a new file containing what needs to be kept, remove the original file, and rename the new file to give it the original name.
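A rough sketch of that rewrite-and-rename pattern, assuming the real goal is to process every line and drop it from the file; process_line and data.txt are placeholders, not names from the question:
import os

def process_line(line):
    # Placeholder for whatever should happen to each line.
    print(line.rstrip())

# Process every line, write what should be kept (here: nothing) to a new file,
# then replace the original with the new version.
with open('data.txt') as src, open('data.txt.tmp', 'w') as dst:
    for line in src:
        process_line(line)
        # dst.write(line)  # uncomment to keep a line instead of dropping it

os.replace('data.txt.tmp', 'data.txt')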
I assume that you are using the wrong tool to solve your real problem...
This question already has answers here:
Read multiple block of file between start and stop flags
(4 answers)
Closed 6 years ago.
I have a VERY large file formatted like this:
(mydelimiter)
line
line
(mydelimiter)
line
line
(mydelimiter)
Since the file is so large I can't read it all into memory at once. So I would like to read one chunk between "(mydelimiter)" markers at a time, perform some operations on it, then read in the next chunk.
This is the code I have so far:
with open(infile, 'r') as f:
    chunk = []
    for line in f:
        chunk.append(line)
Now, I'm not sure how to tell Python "keep appending lines UNTIL you hit another line with '(mydelimiter)' in it", then remember where it stopped and start there in the next iteration of the loop.
Note: it's also not possible to read in a certain number of lines at a time since each chunk is variable length.
Aren't you perhaps overthinking this? Something as simple as the following code can do the trick for you:
with open(infile, 'r') as f:
    chunk = []
    for line in f:
        if line.strip() == '(mydelimiter)':
            if chunk:
                call_something(chunk)
            chunk = []
        else:
            chunk.append(line)
    if chunk:
        call_something(chunk)  # don't forget the final chunk after the last delimiter
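If you prefer to keep the chunking separate from the processing, the same idea can be wrapped in a generator; infile, call_something and the '(mydelimiter)' string are carried over from the question as assumptions:
def read_chunks(path, delimiter='(mydelimiter)'):
    # Yield lists of lines found between delimiter lines, one chunk at a time.
    chunk = []
    with open(path, 'r') as f:
        for line in f:
            if line.strip() == delimiter:
                if chunk:
                    yield chunk
                chunk = []
            else:
                chunk.append(line)
    if chunk:
        yield chunk  # final chunk after the last delimiter

for chunk in read_chunks(infile):
    call_something(chunk)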