Datasets: two large text files for train and test, in which all words are tokenized. A part of the data looks like the following: " the fulton county grand jury said friday an investigation of atlanta's recent primary election produced `` no evidence '' that any irregularities took place . "
Question: How can I replace every word in the test data not seen in training with the word "unk" in Python?
So far, I have made the dictionary with the following code to count the frequency of each word in the file:
# open text file and assign it to a variable with the name "readfile"
readfile = open('C:/Users/amtol/Desktop/NLP/Homework_1/brown-train.txt','r')
writefile = open('C:/Users/amtol/Desktop/NLP/Homework_1/brown-trainReplaced.txt','w')
# Create an empty dictionary
d = dict()
# Loop through each line of the file
for line in readfile:
    # Split the line into words
    words = line.split(" ")
    # Iterate over each word in line
    for word in words:
        # Check if the word is already in the dictionary
        if word in d:
            # Increment count of word by 1
            d[word] = d[word] + 1
        else:
            # Add the word to the dictionary with count 1
            d[word] = 1
# replace all words occurring in the training data once with the token <unk>.
for key in list(d.keys()):
    line = d[key]
    if (line == 1):
        line = "<unk>"
        writefile.write(str(d))
    else:
        writefile.write(str(d))
#close the file that we have created and we wrote the new data in that
writefile.close()
Honestly, the above code doesn't work with writefile.write(str(d)), which I intended to use to write the result to the new text file. With print(key, ":", line) it works and shows the frequency of each word, but only in the console, which doesn't create a new file. If you also know the reason for this, please let me know.
First off, your task is to replace the words in test file that are not seen in train file. Your code never mentions the test file. You have to
Read the train file, gather what words are there. This is mostly okay, but you need to .strip() your line or the last word in each line will end with a newline. Also, it would make more sense to use a set instead of a dict if you don't need the count (and you don't, you just want to know whether a word is there or not). Sets are cool because you don't have to care whether an element is already in; you just toss it in. If you absolutely need the count, using collections.Counter is easier than doing it yourself. (A full sketch putting these steps together appears at the end of this answer.)
Read the test file, and write to replacement file, as you are replacing the words in each line. Something like:
with open("test", "rt") as reader:
with open("replacement", "wt") as writer:
for line in reader:
writer.write(replaced_line(line.strip()) + "\n")
Make sense, which your last block does not :P Instead of seeing whether a word from test file is seen or not, and replacing the unseen ones, you are iterating on the words you have seen in the train file, and writing <unk> if you've seen them exactly once. This does something, but not anything close to what it should.
Instead, split the line you got from the test file and iterate over its words; if a word is not in the seen set (word in seen, literally, is the check), replace it with <unk>; finally, join the words back into the output sentence. You can do it in a loop, but here's a comprehension that does it:
new_line = ' '.join(word if word in seen else '<unk>'
                    for word in line.split(' '))
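Putting the pieces together, a minimal sketch might look like this; the file names "train", "test" and "replacement" are just placeholders, and replaced_line is the helper assumed in the snippet above:

seen = set()
with open("train", "rt") as train_file:
    for line in train_file:
        # toss every word of the training data into the set
        seen.update(line.strip().split(" "))

def replaced_line(line):
    # keep words seen in training, replace everything else with <unk>
    return ' '.join(word if word in seen else '<unk>'
                    for word in line.split(' '))

with open("test", "rt") as reader:
    with open("replacement", "wt") as writer:
        for line in reader:
            writer.write(replaced_line(line.strip()) + "\n")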
I'm writing a function called HASHcount(name, list), which receives 2 parameters. The name parameter is the name of the file to be analyzed, a text file structured like this:
Date|||Time|||Username|||Follower|||Text
So, basically my input is a list of tweets, with several rows structured as above. The list parameter is a list of hashtags I want to count in that text file. I want my function to check how many times each word of the given list occurred as a hashtag in the tweets file, and give as output a dictionary with each word's count, even if a word never occurs.
For instance, with the call HASHcount(December, [Peace, Love]) the program should give as output a dictionary built by checking how many times the words Peace and Love have been used as hashtags in the Text field of each tweet in the file called December.
Also, in the dictionary the words have to appear without the hashtag symbol.
I'm stuck writing this function. I'm at this point, but I'm having some issues concerning the dictionary:
def HASHcount(name, list):
    f = open(name, "r")
    dic = {}
    l = f.readline()
    for word in list:
        dic[word] = 0
        for line in f:
            li_lis = line.split("|||")
            li_tuple = tuple(li_lis)
            if word in li_tuple[4]:
                dic[word] = dic[word] + 1
    return dic
The main issue is that you are iterating over the lines in the file for each word, rather than the reverse. Thus the first word will consume all the lines of the file, and each subsequent word will have 0 matches.
Instead, you should do something like this:
def hash_count(name, words):
    dic = {word: 0 for word in words}
    with open(name) as f:
        for line in f:
            line_text = line.split('|||')[4]
            for word in words:
                # Check if word appears as a hashtag in line_text
                # If so, increment the count for word
    return dic
There are several issues with your code, some of which have already been pointed out, while others (e.g. concerning the identification of hashtags in a tweet's text) have not. Here's a partial solution not covering the fine points of the latter issue:
def HASHcount(name, words):
    dic = dict.fromkeys(words, 0)
    with open(name, "r") as f:
        for line in f:
            for w in words:
                if '#' + w in line:
                    dic[w] += 1
    return dic
This offers several simplifications keyed on the fact that hashtags in a tweet do start with # (which you don't want in the dic) -- as a result it's not worth splitting each line into fields, since the # cannot be present except in the text field.
However, it still has a fraction of a problem seen in other answers (except the one which just commented out this most delicate of parts!-) -- it can get false positives from partial matches. When the check is just word in linetext the problem would be huge -- e.g. if a word is cat it gets counted as a hashtag even when present in perfectly ordinary text (on its own or as part of another word, e.g. vindicative). With the '#' + approach it's a bit better, but prefix matches would still lead to false positives, e.g. #catalog would erroneously be counted as a hit for cat.
As some suggested, regular expressions can help with that. However, here's an alternative for the body of the for w in words loop...
for w in words:
    where = line.find('#' + w)
    if where == -1: continue
    after = line[where + len(w) + 1]
    if after in chars_acceptable_in_hashes: continue
    dic[w] += 1
The only issue remaining is to determine which characters can be part of hashtags, i.e., the set chars_acceptable_in_hashes -- I haven't memorized Twitter's specs so I don't know it offhand, but surely you can find out. Note that this works at the end of a line, too, because line has not been stripped, so it's known to end with a \n, which is not in the acceptable set (so a hashtag at the very end of the line will be "properly terminated" too).
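As a rough guess (not Twitter's official rules), letters, digits and the underscore are commonly treated as hashtag characters, so the set could be built like this:

import string

# Assumed character set -- check Twitter's actual spec if you need to be exact
chars_acceptable_in_hashes = set(string.ascii_letters + string.digits + '_')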
I like using the collections module. This worked for me:
from collections import defaultdict

def HASHcount(file_to_open, lst):
    with open(file_to_open) as my_file:
        my_dict = defaultdict(int)
        for line in my_file:
            line = line.split('|||')
            txt = line[4].strip(" ")
            if txt in lst:
                my_dict[txt] += 1
    return my_dict
I am kind of stuck on a question that I have to do regarding iambic pentameters, but because it is long, I'll try to simplify it.
So I need to get some words and their stress patterns from a text file that looks somewhat like this:
if, 0
music,10
be,1
the,0
food,1
of,0
love,1
play,0
on,1
hello,01
world,1
And from the file, you can assume there will be many more words for different sentences. I am trying to read sentences from another text file, which has multiple sentences, and to check whether each sentence (ignoring punctuation and case) is an iambic pentameter.
For example if the text file contains this:
If music be the food of love play on
hello world
The first sentence will be assigned from the stress dictionary like this: 0101010101, and the second is obviously not a pentameter (011). I would like it to only print sentences which are iambic pentameters.
Sorry if this is a convoluted or messy question.
This is what I have so far:
import string
dict = {};
sentence = open('sentences.txt')
stress = open('stress.txt')
for some in stress:
    word, number = some.split(',')
    dict[word] = number
for line in sentence:
    one = line.split()
I don't think you are building your dictionary of stresses correctly. It's crucial to remember to get rid of the implicit \n character from lines as you read them in, as well as strip any whitespace from words after you've split on the comma. As things stand, the line if, 0 will be split to ['if', ' 0\n'] which isn't what you want.
So to create your dictionary of stresses you could do something like this:
stress_dict = {}
with open('stress.txt', 'r') as f:
    for line in f:
        word_stress = line.strip().split(',')
        word = word_stress[0].strip().lower()
        stress = word_stress[1].strip()
        stress_dict[word] = stress
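For the sample stress.txt shown in the question, the resulting dictionary would then behave like this:

print(stress_dict['music'])  # prints 10 (the pattern is stored as the string '10')
print(stress_dict['if'])     # prints 0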
For the actual checking, the answer by @khelwood is a good way to go, but I'd take extra care to handle the \n character as you read in the lines and also make sure that all the characters in the line are lowercase (like in your dictionary).
Define a function is_iambic_pentameter to check whether a sentence is an iambic pentameter (returning True/False) and then check each line in sentences.txt:
def is_iambic_pentameter(line):
    line_stresses = [stress_dict[word] for word in line.split()]
    line_stresses = ''.join(line_stresses)
    return line_stresses == '0101010101'

with open('sentences.txt', 'r') as f:
    for line in f:
        line = line.rstrip()
        line = line.lower()
        if is_iambic_pentameter(line):
            print line
As an aside, you might be interested in NLTK, a natural language processing library for Python. Some Internet searching finds that people have written Haiku generators and other scripts for evaluating poetic forms using the library.
I wouldn't have thought iambic pentameter was that clear cut: always some words end up getting stressed or unstressed in order to fit the rhythm. But anyway. Something like this:
for line in sentences:
    words = line.split()
    stresspattern = ''.join([dict[word] for word in words])
    if stresspattern == '0101010101':
        print line
By the way, it's generally a bad idea to be calling your dictionary 'dict', since you're hiding the dict type.
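For instance, once the name is rebound you can no longer call the built-in:

dict = {}                 # shadows the built-in dict type
pairs = dict([('a', 1)])  # TypeError: 'dict' object is not callable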
Here's how the complete code could look:
#!/usr/bin/env python3
def is_iambic_pentameter(words, word_stress_pattern):
    """Whether words are a line of iambic pentameter.

    word_stress_pattern is a callable that given a word returns
    its stress pattern.
    """
    return ''.join(map(word_stress_pattern, words)) == '01'*5

# create 'word -> stress pattern' mapping, to implement word_stress_pattern(word)
with open('stress.txt') as stress_file:
    word_stress_pattern = dict(map(str.strip, line.split(','))
                               for line in stress_file).__getitem__

# print lines that use iambic pentameter
with open('sentences.txt') as file:
    for line in file:
        if is_iambic_pentameter(line.casefold().split(), word_stress_pattern):
            print(line, end='')
I have a file f1 which has words and their emotional values (from +6 to -6):
normal 0
sad -2
happy 4
I have another file f2 which has texts (tweets), each containing on average 4 or 5 words (line by line).
I want to read the text in f2 line by line, and for each line, for every word, check whether it is in f1. If it is, I get its value and add it to a running total. That way I sum the values for every word in the sentence (if it is in the list) and print the total.
So print should be like this (for example for first three lines)
3
0
-2
I have code like this. I am getting the error "ValueError: Mixing iteration and read methods would lose data". Please correct the code or at least suggest another way to do this.
f2=open("file2.txt","r")
for line in f2:
l=f2.readline()
afinn = dict(map(lambda (k,v): (k,int(v)),[ line.split('\t') for line in open("file1.txt") ]))
value= sum(map(lambda word: afinn.get(word, 0), l.lower().split()))
print value
f1.close()
f2.close()
There are several problems with your code:
for line in f2:
    l = f2.readline()
You're iterating over the file implicitly and explicitly at the same time - not a good idea. In the first iteration line will contain the first line of your file, and l will contain the second line. In the next iteration, line and l will contain the third and fourth line, respectively (and so on). Pick one - I would choose the first one and drop the readline() call.
Then, you reassign line in your list comprehension that's reading file1.txt. That means you're overwriting line, and you're reading file1.txt again and again during each iteration - a huge waste. Read it once, store it and refer to that in your loop.
Furthermore, dict(map(lambda(...))) is rather unpythonic - we do have dict comprehensions for that. But in this case, a simpler version is probably even better:
This is how you could fill your words dictionary (you could do that as a one-liner too, but readability counts, so let's keep it simple):
with open("file1.txt") as f1:
words = {}
for line in f1:
word, score = line.split()
words[word] = int(score)
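For reference, the one-liner mentioned above could look something like this (same result, just denser):

with open("file1.txt") as f1:
    words = {word: int(score) for word, score in (line.split() for line in f1)}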
Now you could go and read your input file:
with open("file2.txt") as f2:
for line in f2:
contents = line.split()
value = sum(words.get(word, 0) for word in contents)
print value
It seems that you are using my word list AFINN from http://www2.compute.dtu.dk/pubdb/views/edoc_download.php/6010/zip/imm6010.zip
Note that there is a tab character between the 'word' and the value and that some of the 'words' are not single words but phrases such as 'not good'. You should be using another split character. Copying and modifying Tim Pitzcker's code:
with open("AFINN-111.txt") as f1:
words = {}
for line in f1:
word, score = line.split('\t')
words[word] = float(score)
Yours and Tim Pitzcker's code may also have a problem with the tokenization of the second file; e.g., the code below doesn't really work, because split splits on whitespace by default and ignores the comma:
line = 'It was bad, plain and simply bad.'
contents = line.split()
value = sum(words.get(word, 0) for word in contents)
You probably need to look into re.split() or nltk.word_tokenize as well as lowercase the words.
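For example, a rough sketch using re.split (nltk.word_tokenize would be the more thorough option) might look like this:

import re

line = 'It was bad, plain and simply bad.'
# split on runs of non-word characters and lowercase, so 'bad,' and 'bad.' both become 'bad'
contents = [w for w in re.split(r'\W+', line.lower()) if w]
value = sum(words.get(word, 0) for word in contents)  # words is the AFINN dict built above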
def myfunc(filename):
    filename = open('hello.txt', 'r')
    lines = filename.readlines()
    filename.close()
    lengths = {}
    for line in lines:
        for punc in ".,;'!:&?":
            line = line.replace(punc, " ")
        words = line.split()
        for word in words:
            length = len(word)
            if length not in lengths:
                lengths[length] = 0
            lengths[length] += 1
    for length, counter in lengths.items():
        print(length, counter)
    filename.close()
Use collections.Counter (for Python versions below 2.7, there is an equivalent recipe).
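A rough sketch of the Counter approach for the word-length counts, reusing the punctuation handling from the question's code:

from collections import Counter

def word_length_counts(path):
    lengths = Counter()
    with open(path) as f:
        for line in f:
            for punc in ".,;'!:&?":
                line = line.replace(punc, " ")
            # count one occurrence of each word's length
            lengths.update(len(word) for word in line.split())
    return lengths

for length, count in sorted(word_length_counts('hello.txt').items()):
    print(length, count)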
You are counting the frequency of words in a single line.
for line in lines:
    for word in length.keys():
        print(wordct, length)
length is a dict of all distinct words and their frequencies, not their lengths
length.get(word,0)+1
so you probably want to replace the above with
for line in lines:
    ....

# keep this at this indentation - will have a very large dict, but of all words
for word in sorted(length.keys(), key=lambda x: len(x)):
    # word, freq, length
    print(word, length[word], len(word), "\n")
I would also suggest
Don't bring the whole file into memory like that; file objects are iterators and are well optimised for reading from files.
drop the wordct and so on in the main lines loop.
rename length to something else - perhaps words or dict_words
Errr, maybe I misunderstood - are you trying to count the number of distinct words in the file (in which case use len(length.keys())), or the length of each word in the file, presumably ordered by length?
The question has been more clearly defined now, so I'm replacing the above answer.
The aim is to get a frequency of word lengths throughout the whole file.
I would not even bother with line by line but use something like:
content = open(file).read()  # read the whole file into one string
d_freq = {}
st = 0  # index just past the previous space
while 1:
    next_space_index = content.find(" ", st)
    if next_space_index == -1:
        break
    word_len = next_space_index - st
    d_freq[word_len] = d_freq.get(word_len, 0) + 1
    st = next_space_index + 1
print d_freq
I think that will work, not enough time to try it now. HTH
I am dealing with an extremely large text file (around 3.77 GB), and trying to extract all the sentences a specific word occurs in and write them out to a text file.
So the large text file is just many lines of text:
line 1 text ....
line 2 text ....
I have also extracted the unique word list from the text file, and want to extract all the sentences each word occurs in and write out the context associated with the word. Ideally, the output file will take the format of
word1 \t sentence 1\n sentence 2\n sentence N\n
word2 \t sentence 1\n sentence 2\n sentence M\n
The current code I have is something like this:
fout = open('word_context_3000_4000(4).txt', 'a')
for x in unique_word[3000:4000]:
    fout.write('\n' + x + '\t')
    fin = open('corpus2.txt')
    for line in fin:
        if x in line.strip().split():
            fout.write(line)
        else:
            pass
fout.close()
Since the unique word list is big, I process it chunk by chunk. But, somehow, the code fails to get the context for all the words, and only returns the context for the first few hundred words in the unique word list.
Has anyone worked on a similar problem before? I am using Python, btw.
Thanks a lot.
First problem, you never close fin.
Maybe you should try something like this :
fout = open('word_context_3000_4000(4).txt', 'a')
fin = open('corpus2.txt')
for x in unique_word[3000:4000]:
    fout.write('\n' + x + '\t')
    fin.seek(0)  # go to the beginning of the file
    for line in fin:
        if x in line.strip().split():
            fout.write(line)
        else:
            pass
fout.close()
fin.close()
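If re-reading the 3.77 GB corpus once per word turns out to be too slow, a single-pass alternative (just a sketch, not part of the original answer) is to scan the corpus once and group matching lines per word; memory then grows with the amount of matched text, so keep the word chunk small:

from collections import defaultdict

targets = set(unique_word[3000:4000])
contexts = defaultdict(list)

with open('corpus2.txt') as fin:
    for line in fin:
        # intersect the line's words with the target words in one pass
        for word in set(line.strip().split()) & targets:
            contexts[word].append(line.strip())

with open('word_context_3000_4000(4).txt', 'a') as fout:
    for word in targets:
        fout.write('\n' + word + '\t' + '\n'.join(contexts[word]))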