This question already has answers here:
How to get line count of a large file cheaply in Python?
(44 answers)
Closed 8 years ago.
I have a .json file which contains a JSON item on every line. I want to count how many JSON items are in the file using Python. I am currently just counting the lines with the following code:
count = 0
with open(file) as f:
    for line in f:
        count += 1
This seems like an inefficient way of doing it, though. Is there a more efficient way of either calculating the number of JSON items in a .json file or counting the number of lines in a file?
Edit to fix my mistake:
This is a one-liner for counting the lines in a file.
num = sum(1 for line in open(filename))
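One common way to count lines more cheaply (a sketch, not guaranteed to be faster for every file) is to read the file in binary chunks and count newline bytes; note that this assumes every item, including the last one, ends with a newline:
def count_lines(filename, chunk_size=1024 * 1024):
    # read fixed-size binary chunks and count newline bytes
    count = 0
    with open(filename, 'rb') as f:
        while True:
            chunk = f.read(chunk_size)
            if not chunk:
                break
            count += chunk.count(b'\n')
    return count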
This question already has answers here:
Print string to text file
(8 answers)
Closed 1 year ago.
I'm looking for a way to get the output that I'm printing here written to a .txt file:
for y in list(permutations(sllist, 3)):
    print("".join(y))
The output in the console looks like this:
ccccaaaaaaaabbbbdddddddddd
ccccaaaaaaaabbbbeeeeeee
ccccaaaaaaaabbbbfff
ccccaaaaaaaabbbbgggggggggggg
ccccaaaaaaaabbbb2001
ccccaaaaaaaabbbb01
ccccaaaaaaaabbbb06
And that's exactly how I want it written in the .txt file: neatly arranged, one line below the other.
Thanks in advance.
Use a context manager to open the file, so that it will be automatically and correctly closed, even if the code inside raises exceptions.
As pointed out in the comments, there is no need to create the whole list in memory: you can iterate directly over the object returned by permutations, and each element will be produced as needed.
from itertools import permutations

sllist = "abcd"

with open("output.txt", "w") as f:
    for y in permutations(sllist, 3):
        line = "".join(y)
        print(line)
        f.write(f"{line}\n")
Cheers!
This question already has answers here:
How to read a file line-by-line into a list?
(28 answers)
Reading a text file and splitting it into single words in python
(6 answers)
Closed 3 years ago.
I have a TXT file which contains 100000 lines of short strings, each 1-3 digits long. I need a for loop that loops 100000 times; in the loop I need to read a line, add it to an array, and then move down to the next line. The txt file is called Rand_Numbers and I have opened it as...
C_txt = open("Rand_Numbers", "r")
I don't really know how to do much in Python as I'm fairly new to programming. Any help is much appreciated.
Welcome to Python. How about:
dump_here = []

with open('your_file.txt') as my_file:
    for line in my_file:
        dump_here.append(line)
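Note that each entry in dump_here will still end with a newline character; if you only want the strings themselves, a small variant (using the same placeholder filename) is:
with open('your_file.txt') as my_file:
    dump_here = [line.strip() for line in my_file]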
This question already has answers here:
How do you read a file into a list in Python? [duplicate]
(8 answers)
Closed 3 years ago.
What I am trying to do is read from a file and insert its contents into a list in Python 3. The file looks like this:
♥A
♣A
♥Q
♠Q
The result I am expecting when I read from the file and print the list is that it shows up like this:
['♥A', '♣A', '♥Q', '♠Q']
How would I go about solving this? I have tried multiple solutions, like using for loops, but I don't understand how to do it.
You can use the open function.
f = open("file.txt", "r")
l = list()
for line in f:
l.append(line)
f.close()
print(l)
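A slightly more compact sketch of the same idea, using a with block so the file is closed automatically (cards is just an illustrative name):
with open("file.txt", "r") as f:
    cards = [line.strip() for line in f]
print(cards)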
This question already has answers here:
iterating over file object in Python does not work, but readlines() does but is inefficient
(3 answers)
Closed 3 years ago.
I was doing an exercise from a Python tutorial, trying to combine the code that reads the contents of a file line by line with the code that makes a list of lines from a file. I expected the text to be printed twice, but it doesn't work.
My operating system is Windows 10.
I use sublime text 3.
with open(filename) as f:
    for line in f:
        print(line)
    lines = f.readlines()

for line in lines:
    print(line.strip())
Output:
In Python you can use list.
In Python you can use class.
In Python you can use dictionary.
[Finished in 0.1s]
Why doesn’t print(line.strip()) work?
The reason is that f is a file iterator, which is completely exhausted after the first loop (it can only be consumed once). There is nothing left in f to read, so f.readlines() returns an empty list.
If you want to iterate over the file's lines more than once, call f.readlines() first to store them in a list, and then iterate over that list as many times as you like.
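A minimal sketch of that fix:
with open(filename) as f:
    lines = f.readlines()    # read every line into a list once

for line in lines:
    print(line)              # first pass

for line in lines:
    print(line.strip())      # second pass over the same list
Alternatively, f.seek(0) rewinds the file object so it can be iterated again while it is still open.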
This question already has answers here:
How to get line count of a large file cheaply in Python?
(44 answers)
Closed 9 years ago.
Does there exist a way of finding the number of lines in a csv file without actually loading the whole file in memory (in Python)?
I'd expect there to be some special optimized function for it. All I can think of is reading it line by line and counting the lines, but that seems to defeat the purpose, since I only need the number of lines, not the actual content.
You don't need to load the whole file into memory since files are iterable in terms of their lines:
with open(path) as fp:
    count = 0
    for _ in fp:
        count += 1
Or, slightly more idiomatic:
count = 0  # keeps count defined even if the file is empty
with open(path) as fp:
    for count, _ in enumerate(fp, 1):
        pass
Yes, you need to read through the whole file before you can know how many lines are in it, although you do not have to hold it all in memory at once.
Just think of the file as one long string: Aaaaabbbbbbbcccccccc\ndddddd\neeeeee\n
To know how many 'lines' are in the string, you need to find how many \n characters it contains.
If you want an approximate number, you can read a few lines (~20), see how many characters there are per line, and then derive an estimate from the file's total size (available via os.stat or os.path.getsize).
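A minimal sketch of that estimation (the function name and the 20-line sample size are just illustrative choices):
import os
from itertools import islice

def estimate_line_count(path, sample_size=20):
    # average the byte length of the first few lines,
    # then extrapolate from the total file size
    with open(path, 'rb') as fp:
        sample = list(islice(fp, sample_size))
    if not sample:
        return 0
    avg_len = sum(len(line) for line in sample) / len(sample)
    return int(os.path.getsize(path) / avg_len)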