I have two text files and I want to compare columns 3 and 4, but only if columns 1 and 2 are the same in both files.
Text 1:
12345,67890,4.6,5.7
89736,62828,5.1,4.2
63793,38392,5.4,7.3
Text 2:
12345,67890,4.6,5.7
63793,38392,5.4,7.3
My code:
pre = open ("g.txt","r")
post = open ("g2.utm","r")
line = pre.readlines()
if not line:
    break
if line.startswith("L"):
    print ("\n") #to avoid the header
else :
    v = line[0:5]
    l = line[6:11]
    i = line[12:14]
    k = line[15:17]
line2 = post.readlines()
if not line2:
    break
if line2.startswith("L"):
    print ("\n") #to avoid the header
else :
    v2 = line[0:5]
    l2 = line[6:11]
    i2 = line[12:14]
    k2 = line[15:17]
if v == v2 and l == l2 :
    d = (i - i2)
    h = (k - k2)
    if d >= 6.25 and h >= 6.25:
        print (v2,l2,"not ok")
print ("Done")
Your code is repetitive and messy. Let me suggest some modifications. First, read the file line by line. How can you do that?
with open("g.txt","r") as f:
for line in f:
a_line_of_the_file = line
Next, instead of accessing the values by index positions in the string, split each line on commas and save the pieces to a list.
valuelist = a_line_of_the_file.strip().split(',')
# contains ["12345", "67890", "4.6", "5.7"] on the first iteration.
Once you have a list from the corresponding rows of the two files, you can compare them by index, like:
if valuelist1[0] == valuelist2[0]:
    do_something
Cast the values first if you need another data type. You should now be able to solve the problem yourself. If you still get errors, please let me know.
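Putting those pieces together, a minimal sketch could look like the following. A few assumptions on my part, taken from your post: the files are g.txt and g2.utm, any header line starts with "L", and you want to flag rows where both of the last two columns differ by 6.25 or more.
def load(path):
    rows = {}
    with open(path) as f:
        for line in f:
            line = line.strip()
            if not line or line.startswith("L"):  # skip blank lines and the header
                continue
            col1, col2, col3, col4 = line.split(",")
            rows[(col1, col2)] = (float(col3), float(col4))
    return rows

pre = load("g.txt")
post = load("g2.utm")
for key, (val3, val4) in pre.items():
    if key in post:  # columns 1 and 2 match in both files
        other3, other4 = post[key]
        if abs(val3 - other3) >= 6.25 and abs(val4 - other4) >= 6.25:
            print(key[0], key[1], "not ok")
print("Done")
Keying the dictionary on (column 1, column 2) also means the rows do not have to appear in the same order in both files.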
I've been trying to figure this out for about a year now and I'm really burnt out on it so please excuse me if this explanation is a bit rough.
I cannot include job data, but it would be accurate to imagine 2 csv files both with the first column populated with values (Serial numbers/phone numbers/names, doesn't matter - just values). Between both csv files, some values would match while other values would only be contained in one or the other (Timmy is in both files and is a match, Robert is only in file 1 and does not match any name in file 2).
I can successfully output a CSV value once if it exists in both CSV files (i.e. if both files contain "Value78", the output file will contain "Value78" only once).
When I try to tack on an else statement to my if condition, to handle non-matching items, the program will output 1 entry for every item it does not match with (makes 100% sense, matches happen once but every other comparison result besides the match is a non-match).
I cannot envision a structure or method to hold back the fields that don't match so that they can be output once, without overrunning my terminal or output file.
My goal is to output two csv files, matches and non-matches, with the non-matches having only one entry per value.
Anyways, onto the code:
import csv
MYUNITS = 'MyUnits.csv'
VENDORUNITS = 'VendorUnits.csv'
MATCHES = 'Matches.csv'
NONMATCHES = 'NonMatches.csv'
with open(MYUNITS,mode='r') as MFile, \
     open(VENDORUNITS,mode='r') as VFile, \
     open(MATCHES,mode='w') as OFile, \
     open(NONMATCHES,mode='w') as NFile:
    MyReader = csv.reader(MFile,delimiter=',',quotechar='"')
    MyList = list(MyReader)
    VendorReader = csv.reader(VFile,delimiter=',',quotechar='"')
    VList = list(VendorReader)
    for x in range(len(MyList)):
        for y in range(len(VList)):
            if str(MyList[x][0]) == str(VList[y][0]):
                OFile.write(MyList[x][0] + '\n')
            else:
                pass
The "else: pass" is where the logic of filtering out non-matches is escaping me. Outputting from this else statement will write the non-matching value (len(VList) - 1) times for an iteration that DOES produce 1 match, the entire len(VList) for an iteration with no match. I've tried using a counter and only outputting if the counter equals the len(VList), (incrementing in the else statement, writing output under the scope of the second for loop), but received the same output as if I tried outputting non-matches.
Below is one way you might go about deduplicating and then writing to a file:
import csv
MYUNITS = 'MyUnits.csv'
VENDORUNITS = 'VendorUnits.csv'
MATCHES = 'Matches.csv'
NONMATCHES = 'NonMatches.csv'
list_of_non_matches = []
with open(MYUNITS,mode='r') as MFile, \
     open(VENDORUNITS,mode='r') as VFile, \
     open(MATCHES,mode='w') as OFile, \
     open(NONMATCHES,mode='w') as NFile:
    MyReader = csv.reader(MFile,delimiter=',',quotechar='"')
    MyList = list(MyReader)
    VendorReader = csv.reader(VFile,delimiter=',',quotechar='"')
    VList = list(VendorReader)
    for x in range(len(MyList)):
        matched = False
        for y in range(len(VList)):
            if str(MyList[x][0]) == str(VList[y][0]):
                OFile.write(MyList[x][0] + '\n')
                matched = True
        # only record it as a non-match if no vendor row matched at all
        if not matched:
            list_of_non_matches.append(MyList[x][0])
    # Remove duplicates from the non-matches
    new_list = []
    [new_list.append(x) for x in list_of_non_matches if x not in new_list]
    # Write the new list to a file
    for i in new_list:
        NFile.write(i + '\n')
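As an aside, if the order of the non-matches does not matter, a set would do the deduplication for you. A rough sketch of that variant, reusing the same lists and file handles as above:
non_matches = set()  # a set stores each non-matching value only once
for row in MyList:
    if any(str(row[0]) == str(vrow[0]) for vrow in VList):
        OFile.write(row[0] + '\n')
    else:
        non_matches.add(row[0])
for value in non_matches:
    NFile.write(value + '\n')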
Does this work?
import csv
MYUNITS = 'MyUnits.csv'
VENDORUNITS = 'VendorUnits.csv'
MATCHES = 'Matches.csv'
NONMATCHES = 'NonMatches.csv'
with open(MYUNITS,'r') as MFile, \
     open(VENDORUNITS,'r') as VFile, \
     open(MATCHES,'w') as OFile, \
     open(NONMATCHES,'w') as NFile:
    MyReader = csv.reader(MFile,delimiter=',',quotechar='"')
    MyList = list(MyReader)
    MyVals = [x[0] for x in MyList]
    VendorReader = csv.reader(VFile,delimiter=',',quotechar='"')
    VList = list(VendorReader)
    vVals = [x[0] for x in VList]
    for val in MyVals:
        if val in vVals:
            OFile.write(val + '\n')
        else:
            NFile.write(val + '\n')
    #for x in range(len(MyList)):
    #    for y in range(len(VList)):
    #        if str(MyList[x][0]) == str(VList[y][0]):
    #            OFile.write(MyList[x][0] + '\n')
    #        else:
    #            pass
Sorry, I had some issues with my PC. I was able to solve my own question the night I posted. The solution I used is so simple I'm kicking myself for not figuring it out way sooner:
import csv
MYUNITS = 'MyUnits.csv'
VENDORUNITS = 'VendorUnits.csv'
MATCHES = 'Matches.csv'
NONMATCHES = 'NonMatches.csv'
with open(MYUNITS,mode='r') as MFile, \
     open(VENDORUNITS,mode='r') as VFile, \
     open(MATCHES,mode='w') as OFile, \
     open(NONMATCHES,mode='w') as NFile:
    MyReader = csv.reader(MFile,delimiter=',',quotechar='"')
    MyList = list(MyReader)
    VendorReader = csv.reader(VFile,delimiter=',',quotechar='"')
    VList = list(VendorReader)
    for x in range(len(MyList)):
        tmpStr = ''
        for y in range(len(VList)):
            if str(MyList[x][0]) == str(VList[y][0]):
                tmpStr = ''  #set back to blank so the comparison below fails, works because of the break
                OFile.write(MyList[x][0] + '\n')
                break
            else:
                tmpStr = str(MyList[x][0])
        if tmpStr != '':
            NFile.write(tmpStr + '\n')
I have a file that puts out lines that have two values each. I need to compare the second value in every line to make sure those values are not repeated more than once. I'm very new to coding so any help is appreciated.
My thinking was to turn each line into a list with two items each, and then I could compare the same position from a couple lists.
This is a sample of what my file contains:
20:19:18 -1.234567890
17:16:15 -1.098765432
14:13:12 -1.696969696
11:10:09 -1.696969696
08:07:06 -1.696969696
Here's the code I'm trying to use. Basically I want it to ignore the first two lines and print out the third line, since its value gets repeated more than once:
with open('my_file') as txt:
    for line in txt: #this section turns the file into lists
        linelist = '%s' % (line)
        lista = linelist.split(' ')
    n = 1
    for line in lista:
        listn = line[n]
        listo = line[n + 1]
        listp = line[n + 2]
        if listn[1] == listo[1] and listn[1] == listp[1]:
            print line
        else:
            pass
        n += 1
What I want to see is:
14:13:12 -1.696969696
But I keep getting a "string index out of range" error on the long if statement.
You would be a lot better off using a dictionary-type structure. A dictionary allows you to quickly check whether a value already exists. Basically, check whether the second value is a key in your dict. If it is, print the line. Otherwise, just add the second value as a key for later.
myDict = {}
with open('/home/dmoraine/pylearn/%s' % (file)) as txt:
    for line in txt:
        key = line.split()[1]
        if key in myDict:
            print(line)
        else:
            myDict[key] = None  #value doesn't matter
Some simple debugging highlights the functional problem:
with open('my_file.txt') as txt:
    for line in txt: #this section turns the file into lists
        linelist = '%s' % (line)
        lista = linelist.split(' ')
        print(linelist, lista)
    n = 1
    for line in lista:
        print("line", n, ":\t", line)
        listn = line[n]
        listo = line[n + 1]
        listp = line[n + 2]
        print(listn, '|', listo, '|', listp)
        if listn[1] == listo[1] and listn[1] == listp[1]:
            print(line)
        n += 1
Output:
20:19:18 -1.234567890
['20:19:18', '-1.234567890\n']
17:16:15 -1.098765432
['17:16:15', '-1.098765432\n']
14:13:12 -1.696969696
['14:13:12', '-1.696969696\n']
11:10:09 -1.696969696
['11:10:09', '-1.696969696\n']
08:07:06 -1.696969696
['08:07:06', '-1.696969696\n']
line 1 : 08:07:06
8 | : | 0
In short, you've mis-handled the variables. When you get to the second loop, lista is the "words" of the final line; you've read and discarded all of the others. line iterates through these individual words. Your listn/o/p variables are, therefore, individual characters. Thus, there is no such thing as listn[1], and you get an error.
Instead, you need to build some sort of list of the floating-point numbers. For instance, using your top loop as a starting point:
float_list = []
for line in txt: #this section turns the file into lists
    lista = line.split(' ')
    my_float = float(lista[1])  # Convert the second field into a float
    float_list.append(my_float)
Now you need to write code that will find duplicates in float_list. Can you take it from there?
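If it helps, here is a minimal sketch of that last step, assuming float_list has been filled as above; collections.Counter tallies how often each value occurs:
from collections import Counter

counts = Counter(float_list)
# keep only the values that appear more than once
duplicates = [value for value, n in counts.items() if n > 1]
print(duplicates)  # for your sample data this would print [-1.696969696]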
Ended up turning each line into a list, and then making a dictionary of all the lists. Thank you all for your help.
I have a file that looks like this:
%Labelinfo
string1
string2
%Labelinfo2
string3
string4
string5
I would like to create a dictionary whose keys are the %Labelinfo strings and whose values are the concatenation of the strings from one Labelinfo line to the next. Basically this:
{%Labelinfo : string1+string2 , %Labelinfo : string2+string3+string4}
The problem is that there can be any number of lines between two "Labelinfo" lines. For example, there can be 5 lines between %Labelinfo and %Labelinfo2, and then, say, 4 lines between %Labelinfo2 and %Labelinfo3.
However, the line that contains "Labelinfo" always starts with the same character, for example %.
How do I solve this problem?
#!/usr/bin/env python
# coding:utf-8
'''黄哥Python'''
d = {}
with open('Labelinfo.txt') as f:
    for line in f:
        if len(line) > 1:
            if '%Labelinf' in line:
                key = line.strip()
                d[key] = ""
            else:
                d[key] += line.strip() + "+"
d = {key: d[key][:-1] for key in d}
print d
{'%Labelinfo2': 'string3+string4+string5', '%Labelinfo': 'string1+string2'}
Here's how I would write it:
The program loops through every line in the file and checks whether the line is empty; if it is, it is ignored. If it isn't empty, we process it. Anything with a % at the start denotes a label, so we add it to the dictionary as a key and remember it in a variable, current. Then we keep appending to the dictionary at key current until the next % line.
di = {}
with open("fasta.txt","r") as f:
current = ""
for line in f:
line = line.strip()
if line == "":
continue
if line[0] == "%":
di[line] = ""
current = line
else:
if di[current] == "":
di[current] = line
else:
di[current] += "+" + line
print(di)
Output:
{'%Labelinfo2': 'string3+string4+string5', '%Labelinfo': 'string1+string2'}
Note: dictionaries do not guarantee ordering, so the keys may come out in a different order, but they are still accessible in the same way. And, just a heads up, your example output is slightly wrong: you forgot to put the 2 after one of the %Labelinfo keys.
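If you do care about keeping the labels in file order (an extra point, not something you asked about), collections.OrderedDict remembers insertion order; on Python 3.7+ a plain dict does too. Only the first line of the code above would change:
from collections import OrderedDict

di = OrderedDict()  # same interface as a dict, but keeps keys in the order they were added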
import re
d = {}
text = open('fasta.txt').read()
for el in [ x for x in re.split(r'\s+', text) if x]:
    if el.startswith('%'):
        key = el
        d[key] = ''
    else:
        value = d[key] + el
        d[key] = value
print(d)
{'%Labelinfo': 'string1string2', '%Labelinfo2': 'string3string4string5'}
Right now my code outputs two text files, named absorbance.txt and energy.txt, separately. I need to modify it so that it outputs only one file, named combined.txt, such that every line of combined.txt has two values separated by a comma: the first value from absorbance.txt and the second from energy.txt. (I apologize if anyone is confused by my writing; please comment if you need more clarification.) My code looks like this:
g = open("absorbance.txt", "w")
h = open("Energy.txt", "w")
ask = easygui.fileopenbox()
f = open( ask, "r")
a = f.readlines()
bg = []
wavelength = []
for string in a:
index_j = 0
comma_count = 0
for j in string:
index_j += 1
if j == ',':
comma_count += 1
if comma_count == 1:
slicing_point = index_j
t = string[slicing_point:]
new_str = string[:(slicing_point- 1)]
new_energ = (float(1239.842 / int (float(new_str))) * 8065.54)
print >>h, new_energ
import math
list = []
for i in range(len(ref)):
try:
ans = ((float (ref[i]) - float (bg[i])) / (float(sample[i]) - float(bg[i])))
print ans
base= 10
final_ans = (math.log(ans, base))
except:
ans = -1 * ((float (ref[i]) - float (bg[i])) / (float(sample[i]) - float(bg[i])))
print ans
base= 10
final_ans = (math.log(ans, base))
print >>g, final_ans
Similar to Robert's approach, but aiming to keep control flow as simple as possible.
absorbance.txt:
Hello
How are you
I am fine
Does anybody want a peanut?
energy.txt:
123
456
789
Code:
input_a = open("absorbance.txt")
input_b = open("energy.txt")
output = open("combined.txt", "w")
for left, right in zip(input_a, input_b):
    #use rstrip to remove the newline character from the left string
    output.write(left.rstrip() + ", " + right)
input_a.close()
input_b.close()
output.close()
combined.txt:
Hello, 123
How are you, 456
I am fine, 789
Note that the fourth line of absorbance.txt was not included in the result, because energy.txt does not have a fourth line to go with it.
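If you would rather keep every line of the longer file instead of stopping at the shorter one, itertools.zip_longest (izip_longest on Python 2) pads the missing side; a small sketch of that variant:
from itertools import zip_longest

input_a = open("absorbance.txt")
input_b = open("energy.txt")
output = open("combined.txt", "w")
# fillvalue supplies the text used when one of the files runs out of lines
for left, right in zip_longest(input_a, input_b, fillvalue="\n"):
    output.write(left.rstrip() + ", " + right)
input_a.close()
input_b.close()
output.close()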
You can open both text files and append their values to the new text file as shown below. This is based on your question, not necessarily on the code you provided.
combined = open("Combined.txt","w")
with open(r'Energy.txt', "rU") as EnergyLine:
    with open(r'Absorbance.txt', "rU") as AbsorbanceLine:
        for line in EnergyLine:
            Eng = line[:-1]
            for line2 in AbsorbanceLine:
                Abs = line2[:-1]
                combined.write("%s,%s\n" % (Eng, Abs))
                break
combined.close()
I have a file that looks like this:
<s0> 3
line1
line2
line3
<s1> 5
line1
line2
<s2> 4
etc. up to more than a thousand
Each sequence has a header like <s0> 3, which in this case states that three lines follow. In the example above, the number of lines below <s1> is two, so I have to correct the header to <s1> 2.
The code I have below picks out the sequence headers and the correct number of lines below them. But for some reason, it never gets the details of the last sequence. I know something is wrong but I don't know what. Can someone point me to what I am doing wrong?
import re

def call():
    with open('trial_perl.txt') as fp:
        docHeader = open("C:\path\header.txt","w")
        c = 0
        c1 = 0
        header = []
        k = -1
        for line in fp:
            if line.startswith("<s"):
                #header = line.split(" ")
                #print header[1]
                c = 0
            else:
                c1 = c + 1
                c += 1
            if c == 0 and c1 > 0:
                k += 1
                printing = c1
                if printing >= 0:
                    s = "<s%s>" % (k)
                    #print "%s %d" % (s, printing)
                    docHeader.write(s + " " + str(printing) + "\n")
call()
You have no sentinel at the end of the last sequence in your data, so your code will need to deal with the last sequence after the loop is done.
If I may suggest some Python tricks to get to your result: you don't need those c/c1/k counter variables; they only make the code more difficult to read and maintain. Instead, populate a map of sequence header to sequence items and then use the map to do all your work:
(this code works only if all sequence headers are unique - if you have duplicates, it won't work)
with open('trial_perl.txt') as fp:
    docHeader = open("C:\path\header.txt","w")
    data = {}
    for line in fp:
        if line.startswith("<s"):
            current_sequence = line
            # create a list with the header as the key
            data[current_sequence] = []
        else:
            # add each sequence to the list we defined above
            data[current_sequence].append(line)
Your map is ready! It looks like this:
{"<s0> 3": ["line1", "line2", "line5"],
"<s1> 5": ["line1", "line2"]}
You can iterate it like this:
for header, lines in data.items():
    # header is the key, or "<s0> 3"
    # lines is the list of lines under that header ["line1", "line2", etc]
    num_of_lines = len(lines)
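To finish the job you could write the corrected headers from that same loop; a sketch, assuming docHeader is still open and you want to keep the original <sN> part of each header:
for header, lines in data.items():
    tag = header.split()[0]  # e.g. "<s0>" out of "<s0> 3"
    docHeader.write("%s %d\n" % (tag, len(lines)))
docHeader.close()
One caveat: on older Python versions a plain dict does not keep the headers in file order, so use collections.OrderedDict if the output order matters.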
The main problem is that you neglect to check the value of c after you have read the last line. You probably had difficulty spotting this problem because of all the superfluous code. You don't have to increment k, since you can extract the value from the <s...> tag. And you don't have to have all three variables c, c1, and printing. A single count variable will do.
import re, sys

def call():
    with open('trial_perl.txt') as fp:
        docHeader = sys.stdout  #open("C:\path\header.txt","w")
        count = 0
        id = None
        for line in fp:
            if line.startswith("<s"):
                if id != None:
                    docHeader.write('<s%d> %d\n' % (id, count))
                count = 0
                id = int(line[2:line.find('>')])
            else:
                count += 1
        if id != None:
            docHeader.write('<s%d> %d\n' % (id, count))

call()
Another approach uses groupby from itertools: take the maximum line count in each group, where a group corresponds to one header and the lines that follow it in your file:
from itertools import groupby

def call():
    with open('stack.txt') as fp:
        header = [-1]
        lines = [0]
        for line in fp:
            if line.startswith("<s"):
                header.append(header[-1] + 1)
                lines.append(0)
            else:
                header.append(header[-1])
                lines.append(lines[-1] + 1)
    with open('result','w') as f:
        for key, group in groupby(zip(header[1:], lines[1:]), lambda x: x[0]):
            f.write("<s%d> %d\n" % max(group))

call()
#<s0> 3
#<s1> 2
stack.txt is the file containing your data:
<s0> 3
line1
line2
line3
<s1> 5
line1
line2