I'm a new coder and am currently trying to write a piece of code that, from an opened txt document, will print out the line number that each piece of information is on.
I've opened the file and stripped it of all its commas. I found online that you can use a function called enumerate() to get the line number. However, when I run the code, instead of getting numbers like 1, 2, 3 I get output like 0x113a2cff0. Any idea how to fix this problem / what the actual problem is? The code for how I used enumerate is below.
my_document = open("data.txt")
readDocument = my_document.readlines()
invalidData = []
for data in readDocument:
    stripDocument = data.strip()
    if stripDocument.isnumeric() == False:
        data = (enumerate(stripDocument))
        invalidData.append(data)
First of all, start by opening the document and reading its content. It's good practice to use with, as it closes the file for you after use. The readlines function gathers all the lines (this assumes the data.txt file is in the same folder as your .py file):
with open("data.txt") as f:
    lines = f.readlines()
After, use enumerate to add index to the lines, so you can read them, use them, or even save the indexes:
for index, line in enumerate(lines):
    print(index, line)
As a last point, if you have line breaks in your data.txt, the lines will contain a trailing \n, which you can remove with line.strip() if you need to.
The full code would be:
with open("data.txt") as f:
    lines = f.readlines()

for index, line in enumerate(lines):
    print(index, line.strip())
Taking your problem statement:
trying to write a piece of code that, from an opened txt document, will print out the line number that each piece of information is on
You're using enumerate incorrectly, as @roganjosh was trying to explain:
with open("data.txt") as my_document:
    for i, data in enumerate(my_document):
        print(i, data)
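As a side note on the 0x113a2cff0 you were seeing: enumerate() returns an iterator object, so storing or printing that object just shows its repr, which contains a memory address (the exact value varies from run to run). You have to loop over it, or pass it to list(), to see the actual (index, value) pairs; note also that enumerating a single string gives you character positions, not line numbers. A quick illustration:
>>> enumerate("4,5,6")
<enumerate object at 0x113a2cff0>
>>> list(enumerate("4,5,6"))
[(0, '4'), (1, ','), (2, '5'), (3, ','), (4, '6')]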
The way you're doing it now, you're not actually removing the commas: the strip() method without arguments only removes leading and trailing whitespace from the line. If you only want the data, this would work:
invalidData = []
for row_number, data in enumerate(readDocument):
    stripped_line = ''.join(data.split(','))
    if not stripped_line.isnumeric():
        invalidData.append((row_number, data))
You can use the enumerate() function to enumerate a list. It yields tuples containing the index first and the line string second, like this:
(0, 'first line')
Your readDocument is a list of the lines, so it might be a good idea to name it accordingly.
lines = my_document.readlines()
for i, line in enumerate(lines):
    print(i, line)
I have the following problem. I am supposed to open a CSV file (it's an Excel table) and read it without using any library.
I've already tried a lot, and I now have the first row in a tuple, and that tuple in a list. But only the first line, the header, and no other rows.
This is what I have so far.
with open(path, 'r+') as file:
    results=[]
    text = file.readline()
    while text != '':
        for line in text.split('\n'):
            a=line.split(',')
            b=tuple(a)
            results.append(b)
            return results
The output should be: every line in a tuple, and all the tuples in a list.
My question now is: how can I read the other lines in Python?
I am really sorry, I am new to programming altogether, so I have a really hard time finding my mistake.
Thank you very much in advance for helping me out!
This problem has appeared many times on Stack Overflow, so you should be able to find working code.
But it is much better to use the csv module for this.
You have wrong indentation: you use return results inside the loop, so after reading the first line it exits the function and never tries to read the other lines.
But even after changing this there are still other problems, so it still won't read the next lines.
You use readline(), so you read only the first line, and your loop keeps working on that same line - it may never end, because you never set text = ''.
You should use read() to get all the text, which you then split into lines using split("\n"); or you could use readlines() to get all the lines as a list, so you don't need split(); or you can use for line in file:. In all of these cases you don't need the while loop.
def read_csv(path):
    with open(path, 'r+') as file:
        results = []
        text = file.read()
        for line in text.split('\n'):
            items = line.split(',')
            results.append(tuple(items))
        # after for-loop
        return results
def read_csv(path):
    with open(path, 'r+') as file:
        results = []
        lines = file.readlines()
        for line in lines:
            line = line.rstrip('\n')  # remove `\n` at the end of line
            items = line.split(',')
            results.append(tuple(items))
        # after for-loop
        return results
def read_csv(path):
    with open(path, 'r+') as file:
        results = []
        for line in file:
            line = line.rstrip('\n')  # remove `\n` at the end of line
            items = line.split(',')
            results.append(tuple(items))
        # after for-loop
        return results
None of these versions will work correctly if an item contains a '\n' or a ',' that shouldn't be treated as the end of a row or as a separator between items. Such items are wrapped in " ", which also makes them tricky to handle. All of these problems can be solved with the standard csv module.
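For reference, here is a minimal sketch of the same read using csv (the function name read_csv_lib is just for illustration):
import csv

def read_csv_lib(path):
    results = []
    with open(path, newline='') as f:   # newline='' lets the csv module handle line endings
        for row in csv.reader(f):       # correctly handles quoted fields containing ',' or '\n'
            results.append(tuple(row))
    return results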
Your code is pretty good and you are near the goal:
with open(path, 'r+') as file:
    results=[]
    text = file.read()
    #while text != '':
    for line in text.split('\n'):
        a=line.split(',')
        b=tuple(a)
        results.append(b)
    return results
Your Code:
with open(path, 'r+') as file:
    results=[]
    text = file.readline()
    while text != '':
        for line in text.split('\n'):
            a=line.split(',')
            b=tuple(a)
            results.append(b)
            return results
So enjoy learning :)
One caveat is that the CSV must not end with a blank line, as this would result in an ugly tuple at the end of the list, like ('',) (which looks like a smiley).
To prevent this you have to check for empty lines: an if line != '': right after the for will do the trick.
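A minimal sketch of where that check could go, based on the corrected loop above (the function wrapper is only for illustration):
def read_csv(path):
    results = []
    with open(path) as file:
        for line in file:
            line = line.rstrip('\n')
            if line != '':  # skip blank lines, e.g. a trailing one at the end of the file
                results.append(tuple(line.split(',')))
    return results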
I have two text files with numbers that I want to do some very easy calculations on (for now). I thought I would go with Python. I have two file readers for the two text files:
with open('one.txt', 'r') as one:
    one_txt = one.readline()
    print(one_txt)

with open('two.txt', 'r') as two:
    two_txt = two.readline()
    print(two_txt)
Now to the fun (and, for me, hard) part. I would like to loop through all the numbers in the second text file and then, for each of them, subtract the corresponding number in the first text file.
I have done this (extending the code above):
with open('two.txt') as two_txt:
    for line in two_txt:
        print(line)
I don't know how to proceed now, because I think the second text file would need to be converted to a string so I can do some parsing and get the numbers I want. The text file (two.txt) looks like this:
Start,End
2432009028,2432009184,
2432065385,2432066027,
2432115011,2432115211,
2432165329,2432165433,
2432216134,2432216289,
2432266528,2432266667,
I want to loop through this, ignore the Start,End (first line) and then, as it loops, only pick the first value before each comma, so that the result would be:
2432009028
2432065385
2432115011
2432165329
2432216134
2432266528
From each of these I would then subtract the corresponding value in one.txt (which contains numbers only and no strings whatsoever) and print the result.
There are many ways to do string operations and I feel lost; for instance, I don't know whether the methods that read everything into memory are good or not.
Any examples on how to solve this problem would be very appreciated (I am open to different solutions)!
Edit: I forgot to point out that one.txt has values without any commas, like this:
102582
205335
350365
133565
Something like this
with open('one.txt', 'r') as one, open('two.txt', 'r') as two:
    next(two)  # skip the Start,End header line in two.txt
    for line_one, line_two in zip(one, two):
        one_a = int(line_one.strip())           # value from one.txt (one number per line)
        two_b = int(line_two.split(",")[0])     # first value before the comma in two.txt
        print(two_b - one_a)
Try this:
onearray = []
file = open("one.txt", "r")
for line in file:
    onearray.append(int(line.replace("\n", "")))
file.close()

twoarray = []
file = open("two.txt", "r")
for line in file:
    if line != "Start,End\n":
        twoarray.append(int(line.split(",")[0]))
file.close()

for i in range(0, len(onearray)):
    print(twoarray[i] - onearray[i])
It should do the job!
I want to generate a list of server addresses and credentials by reading them from a file, as a single list, splitting on newlines in the file.
The file is in this format:
login:username
pass:password
destPath:/directory/subdir/
ip:10.95.64.211
ip:10.95.64.215
ip:10.95.64.212
ip:10.95.64.219
ip:10.95.64.213
The output I want is in this form:
[['login:username', 'pass:password', 'destPath:/directory/subdirectory', 'ip:10.95.64.211;ip:10.95.64.215;ip:10.95.64.212;ip:10.95.64.219;ip:10.95.64.213']]
I tried this:
with open('file') as f:
    credentials = [x.strip().split('\n') for x in f.readlines()]
and this returns lists within a list:
[['login:username'], ['pass:password'], ['destPath:/directory/subdir/'], ['ip:10.95.64.211'], ['ip:10.95.64.215'], ['ip:10.95.64.212'], ['ip:10.95.64.219'], ['ip:10.95.64.213']]
I am new to Python; how can I split on the newline character and create a single list? Thank you in advance.
You could do it like this
with open('servers.dat') as f:
    L = [[line.strip() for line in f]]

print(L)
Output
[['login:username', 'pass:password', 'destPath:/directory/subdir/', 'ip:10.95.64.211', 'ip:10.95.64.215', 'ip:10.95.64.212', 'ip:10.95.64.219', 'ip:10.95.64.213']]
Just use a list comprehension to read the lines. You don't need to split on \n as the regular file iterator reads line by line. The double list is a bit unconventional, just remove the outer [] if you decide you don't want it.
I just noticed you wanted the list of IP addresses joined into one string. It's not obvious, as it's off the screen in the question, and you make no attempt to do it in your own code.
To do that, read the first three lines individually using next, then just join up the remaining lines using ; as your delimiter.
def reader(f):
    yield next(f)
    yield next(f)
    yield next(f)
    yield ';'.join(ip.strip() for ip in f)

with open('servers.dat') as f:
    L2 = [[line.strip() for line in reader(f)]]
For which the output is
[['login:username', 'pass:password', 'destPath:/directory/subdir/', 'ip:10.95.64.211;ip:10.95.64.215;ip:10.95.64.212;ip:10.95.64.219;ip:10.95.64.213']]
It does not match your expected output exactly, as your expected output has a typo: 'destPath:/directory/subdirectory' instead of the 'destPath:/directory/subdir/' that is in the data.
This should work (wrapped in a function so that the return is valid):
def read_file():
    arr = []
    with open('file') as f:
        for line in f:
            arr.append(line)
    return [arr]
You could just treat the file as a list and iterate through it with a for loop:
arr = []
with open('file', 'r') as f:
    for line in f:
        arr.append(line.strip('\n'))
In Python:
Let's say I have a loop, during each cycle of which I produce a list with the following format:
['n1','n2','n3']
After each cycle I would like to append the produced entry to a file (which contains all the outputs from the previous cycles). How can I do that?
Also, is there a way to make a list whose entries are the outputs of this cycle? i.e.
[[],[],[]] where each internal [] = ['n1','n2','n3'] etc.
Writing a single list as a line to a file
You can certainly write it to a file after converting it to a string, like this:
with open('some_file.dat', 'w') as f:
    for x in range(10):  # assume 10 cycles
        line = []
        # ... (here is your code, appending data to line) ...
        f.write('%r\n' % line)  # write the list's representation as a separate line
Writing all lines at once
When it comes to the second part of your question:
Also, is there a way to make a list whose entries are the outputs of this cycle? i.e. [[],[],[]] where each internal []=['n1','n2','n3'] etc
it is also pretty basic. Assuming you want to save it all at once, just write:
lines = [] # container for a list of lines
for x in range(10):  # assume 10 cycles
    line = []
    # ... (here is your code, appending data to line) ...
    lines.append('%r\n' % line)  # here you add the line to the list of lines

# here "lines" is your list of cycle results
with open('some_file.dat', 'w') as f:
    f.writelines(lines)
A better way of writing a list to a file
Depending on what you need, you should probably use one of the more specialized formats rather than just a plain text file. Instead of writing list representations (which are okay, but not ideal), you could use e.g. the csv module (similar to an Excel spreadsheet): http://docs.python.org/3.3/library/csv.html
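For instance, a minimal csv.writer sketch, assuming each cycle still produces a list like ['n1', 'n2', 'n3'] (the file name and the 10 cycles are just placeholders):
import csv

with open('some_file.csv', 'w', newline='') as f:
    writer = csv.writer(f)
    for x in range(10):            # assume 10 cycles, as above
        line = ['n1', 'n2', 'n3']  # placeholder for whatever your cycle produces
        writer.writerow(line)      # writes one comma-separated row per cycle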
f = open(file, 'a') - the first parameter is the path of the file, the second is the mode: 'a' is append, 'w' is write, 'r' is read, and so on.
In my opinion, you can use f.write(str(your_list) + '\n') to write one line per loop iteration; otherwise you can use f.writelines(lines), which also works.
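A small sketch of both variants, assuming a list like ['n1', 'n2', 'n3'] per cycle (the file name output.txt is just a placeholder):
with open('output.txt', 'a') as f:
    line = ['n1', 'n2', 'n3']  # one cycle's result
    f.write(str(line) + '\n')  # append its representation as a single text line

    lines = [str(['n1', 'n2', 'n3']) + '\n' for _ in range(3)]
    f.writelines(lines)        # or write several pre-built lines in one call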
Hope this can help you:
lVals = []
with open(filename, 'a') as f:
    for x, y, z in zip(range(10), range(5, 15), range(10, 20)):
        lVals.append([x, y, z])
        f.write(str(lVals[-1]))
I'm trying to split a file with a list comprehension using code similar to:
lines = [x for x in re.split(r"\n+", file.read()) if not re.match(r"com", x)]
However, the lines list always has an empty string as the last element. Does anyone know a way to avoid this (excluding the kludge of putting a pop() afterwards)?
Put the regular expression hammer away :-)
You can iterate over a file directly; readlines() is almost obsolete these days.
Read about str.strip() (and its friends, lstrip() and rstrip()).
Don't use file as a variable name. It's bad form, because file is a built-in function.
You can write your code as:
lines = []
f = open(filename)
for line in f:
    if not line.startswith('com'):
        lines.append(line.strip())
If you are still getting blank lines in there, you can add in a test:
lines = []
f = open(filename)
for line in f:
    if line.strip() and not line.startswith('com'):
        lines.append(line.strip())
If you really want it in one line:
lines = [line.strip() for line in open(filename) if line.strip() and not line.startswith('com')]
Finally, if you're on Python 2.6, look at the with statement to improve things a little more.
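A sketch of the with-statement version, assuming the same filename variable as above:
with open(filename) as f:
    lines = [line.strip()
             for line in f
             if line.strip() and not line.startswith('com')]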
lines = file.readlines()
edit:
or if you didn't want blank lines in there, you can do
lines = filter(lambda a:(a!='\n'), file.readlines())
edit^2:
to remove trailing newlines, you can do
lines = [re.sub('\n','',line) for line in filter(lambda a:(a!='\n'), file.readlines())]
another handy trick, especially when you need the line number, is to use enumerate:
fp = open("myfile.txt", "r")
for n, line in enumerate(fp.readlines()):
    dosomethingwith(n, line)
I only found out about enumerate quite recently, but it has come in handy quite a few times since then.
This should work, and eliminate the regular expressions as well:
all_lines = (line.rstrip()
             for line in open(filename)
             if "com" not in line)

# filter out the empty lines
lines = filter(lambda x: x, all_lines)
Since you're using a list comprehension and not a generator expression (so the whole file gets loaded into memory anyway), here's a shortcut that avoids code to filter out empty lines:
lines = [line
         for line in open(filename).read().splitlines()
         if "com" not in line]