Counting Hashtag - python

I'm writing a function called HASHcount(name, list), which receives two parameters: name is the name of the file to be analyzed, a text file structured like this:
Date|||Time|||Username|||Follower|||Text
So basically my input is a list of tweets, with several rows structured as above. The list parameter is a list of hashtags I want to count in that text file. I want my function to check how many times each word of the given list occurred as a hashtag in the tweets, and to give as output a dictionary with each word's count, even if a word never occurs.
For instance, with the call HASHcount(December, [Peace, Love]) the program should give as output a dictionary built by checking how many times the words Peace and Love have been used as hashtags in the Text field of each tweet in the file called December.
Also, in the dictionary the words have to be without the hashtag symbol.
I'm stuck on making this function; I'm at this point, but I'm having some issues concerning the dictionary:
def HASHcount(name, list):
    f = open(name, "r")
    dic = {}
    l = f.readline()
    for word in list:
        dic[word] = 0
        for line in f:
            li_lis = line.split("|||")
            li_tuple = tuple(li_lis)
            if word in li_tuple[4]:
                dic[word] = dic[word] + 1
    return dic

The main issue is that you are iterating over the lines in the file for each word, rather than the other way around. Thus the first word will consume all the lines of the file, and each subsequent word will see an already-exhausted file and get 0 matches.
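A file object is an iterator: once a loop has consumed it, later loops over it see nothing. A quick illustration (the file name is just an example):

f = open("December")   # any existing file; the name is illustrative
for line in f:
    pass               # this loop consumes every line
print(list(f))         # prints [] -- the iterator is exhausted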
Instead, you should do something like this:
def hash_count(name, words):
    dic = {word: 0 for word in words}
    with open(name) as f:
        for line in f:
            line_text = line.split('|||')[4]
            for word in words:
                # Check if word appears as a hashtag in line_text
                # If so, increment the count for word
                pass
    return dic
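One naive way to fill in that placeholder is plain substring matching (a sketch only; the next answer explains why this can over-count):

for word in words:
    # naive: '#Peaceful' would also count as a hit for 'Peace'
    if '#' + word in line_text:
        dic[word] += 1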

There are several issues with your code, some of which have already been pointed out, while others (e.g. concerning the identification of hashtags in a tweet's text) have not. Here's a partial solution, not covering the fine points of the latter issue:
def HASHcount(name, words):
    dic = dict.fromkeys(words, 0)
    with open(name, "r") as f:
        for line in f:
            for w in words:
                if '#' + w in line:
                    dic[w] += 1
    return dic
This offers several simplifications keyed on the fact that hashtags in a tweet do start with # (which you don't want in the dic). As a result, it's not worth splitting each line into fields, since a # cannot be present anywhere except in the text field.
However, it still has a fraction of a problem seen in other answers (except the one which just commented out this most delicate of parts!-): it can get false positives from partial matches. When the check is just word in line_text, the problem is huge -- e.g. if a word is cat, it gets counted as a hashtag even when present in perfectly ordinary text (on its own or as part of another word, e.g. vindicative). With the '#' + approach it's a bit better, but prefix matches still produce false positives, e.g. #catalog would erroneously be counted as a hit for cat.
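One way to rule out those prefix matches is a regex with a negative lookahead; a sketch, assuming \w is a fair approximation of the characters allowed in a hashtag:

import re

def HASHcount(name, words):
    dic = dict.fromkeys(words, 0)
    with open(name) as f:
        for line in f:
            for w in words:
                # match '#w' only when not followed by another word character
                if re.search('#' + re.escape(w) + r'(?!\w)', line):
                    dic[w] += 1
    return dic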
As some have suggested, regular expressions can help with that (one possible shape of the idea is sketched above). However, here's an alternative for the body of the for w in words loop...
for w in words:
    where = line.find('#' + w)
    if where == -1:
        continue
    after = line[where + len(w) + 1]
    if after in chars_acceptable_in_hashes:
        continue
    dic[w] += 1
The only issue remaining is to determine which characters can be part of hashtags, i.e., the set chars_acceptable_in_hashes -- I haven't memorized Twitter's specs so I don't know it offhand, but surely you can find out. Note that this works at the end of a line, too, because line has not been stripped, so it's known to end with a \n, which is not in the acceptable set (so a hashtag at the very end of the line will be "properly terminated" as well).
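As a working assumption (not Twitter's official rule), one could take letters, digits, and underscores as the acceptable characters:

import string

# assumption: hashtag bodies consist of letters, digits, and underscores
chars_acceptable_in_hashes = set(string.ascii_letters + string.digits + '_')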

I like using the collections module. This worked for me:
from collections import defaultdict

def HASHcount(file_to_open, lst):
    with open(file_to_open) as my_file:
        my_dict = defaultdict(int)
        for line in my_file:
            line = line.split('|||')
            txt = line[4].strip(" ")
            if txt in lst:
                my_dict[txt] += 1
    return my_dict

Related

How to compare contents of two large text files in Python?

Datasets: two large text files for train and test, in which all words are tokenized. A part of the data looks like the following: " the fulton county grand jury said friday an investigation of atlanta's recent primary election produced `` no evidence '' that any irregularities took place . "
Question: How can I replace every word in the test data not seen in the training data with the word "unk" in Python?
So far, I have made a dictionary with the following code to count the frequency of each word in the file:
# open the text file and assign it to a variable with the name "readfile"
readfile = open('C:/Users/amtol/Desktop/NLP/Homework_1/brown-train.txt', 'r')
writefile = open('C:/Users/amtol/Desktop/NLP/Homework_1/brown-trainReplaced.txt', 'w')
# Create an empty dictionary
d = dict()
# Loop through each line of the file
for line in readfile:
    # Split the line into words
    words = line.split(" ")
    # Iterate over each word in line
    for word in words:
        # Check if the word is already in dictionary
        if word in d:
            # Increment count of word by 1
            d[word] = d[word] + 1
        else:
            # Add the word to dictionary with count 1
            d[word] = 1
# replace all words occurring in the training data once with the token <unk>
for key in list(d.keys()):
    line = d[key]
    if line == 1:
        line = "<unk>"
        writefile.write(str(d))
    else:
        writefile.write(str(d))
# close the file that we have created and wrote the new data to
writefile.close()
Honestly, the above code doesn't work: writefile.write(str(d)) doesn't write the result I want into the new text file. With print(key, ":", line) it works and shows the frequency of each word, but only in the console, which doesn't create the new file. If you also know the reason for this, please let me know.
First off, your task is to replace the words in the test file that are not seen in the train file, but your code never mentions the test file. You have to:
First, read the train file and gather what words are there. This part is mostly okay, but you need to .strip() your line or the last word in each line will end with a newline. Also, it would make more sense to use a set instead of a dict if you don't need to know the count (and you don't; you just want to know whether a word is there or not). Sets are cool because you don't have to care whether an element is already in; you just toss it in. If you absolutely need to know the count, using collections.Counter is easier than doing it yourself.
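A minimal sketch of that first step (the file name is illustrative):

# collect every token seen in the training data
seen = set()
with open("train", "rt") as f:
    for line in f:
        seen.update(line.strip().split(" "))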
Second, read the test file, and write to the replacement file as you replace the words in each line. Something like:
with open("test", "rt") as reader:
with open("replacement", "wt") as writer:
for line in reader:
writer.write(replaced_line(line.strip()) + "\n")
Third, make sense, which your last block does not. :P Instead of checking whether a word from the test file has been seen or not, and replacing the unseen ones, you are iterating over the words you have seen in the train file, and writing <unk> if you've seen them exactly once. This does something, but nothing close to what it should.
Instead, split the line you got from the test file and iterate over its words; if a word is in the seen set (word in seen, literally), keep it, otherwise replace it with <unk>; and finally add it to the output sentence. You can do it in a loop, but here's a comprehension that does it:
new_line = ' '.join(word if word in seen else '<unk>'
                    for word in line.split(' '))
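For completeness, the replaced_line helper assumed earlier (an illustrative name, not defined in the original post) could then be, taking the seen set as an explicit second parameter:

def replaced_line(line, seen):
    # keep words seen in training; replace everything else with <unk>
    return ' '.join(word if word in seen else '<unk>'
                    for word in line.split(' '))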

count word in textfile

I have a text file that I want to count the word "quack" in.
Example of the text file, named "quacker.txt":
This is the textfile quack.
Oh, and how quack did quack do in his exams back in 2009?\n Well, he passed with nine P grades and one B.\n He says that quack he wants to go to university in the\n future but decided to try and make a career on YouTube before that Quack....\n So, far, it’s going very quack well Quack!!!!
So here I want 7 as the output.
readf = open("quacker.txt", "r")
lst = []
for x in readf:
    lst.append(str(x).rstrip('\n'))
readf.close()
# above gives a list of each row

cv = 0
for i in lst:
    if "quack" in i.strip():
        cv += 1
The above only counts one "quack" per element of the list, no matter how many times it appears.
Well, if the file isn't too long, you could try:
with open('quacker.txt') as f:
    text = f.read().lower()  # make it all lowercase so the count works below
quacks = text.count('quack')
As @PadraicCunningham mentioned in the comments, this would also count the 'quack' in words like 'quacks' or 'quacking'. But if that's not an issue, then this is fine.
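If only whole-word matches should count, a regex with word boundaries is one option (a sketch; punctuation such as in 'Quack!!!!' still acts as a boundary, so those occurrences are counted):

import re

with open('quacker.txt') as f:
    text = f.read().lower()
# matches 'quack' but not 'quacks' or 'quacking'
quacks = len(re.findall(r'\bquack\b', text))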
You're incrementing by one if the line contains the string, but what if the line has several occurrences of 'quack'?
Try:
for line in lst:
    for word in line.split():
        if 'quack' in word:
            cv += 1
You need to lower, strip and split to get an accurate count:
from string import punctuation

with open("test.txt") as f:
    quacks = sum(word.lower().strip(punctuation) == "quack"
                 for line in f for word in line.split())
print(quacks)
# output: 7
You need to split each line of the file into individual words, or you will get false positives using in or count. word.lower().strip(punctuation) lowercases each word and removes any surrounding punctuation; sum adds up all the times word.lower().strip(punctuation) == "quack" is True.
In your own code, x is already a string, so calling str(x) is unnecessary. You could also just check each line the first time you iterate; there is no need to add the strings to a list and then iterate a second time. The reason you only get 1 is most likely that all the data is actually on a single line. You are also comparing quack to Quack, which will not match; you need to lowercase the string.
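Put together, a single-pass rewrite of the question's code along those lines (same substring semantics as the first answer, so 'quacks' would still be counted):

cv = 0
with open("quacker.txt") as readf:
    for line in readf:
        # count every occurrence in the line, not just one per line
        cv += line.lower().count("quack")
print(cv)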

lower() and searching for a word in a file using a list

So what I'd like to do is make all the lines lowercase, then use my part_list to search for all matching words in frys.txt and append them to items. I'm having a lot of trouble creating a loop that goes through each word in the list and actually finds the words in frys.txt. I'd even like to find duplicates, if that's at all possible. But the main thing I want is to find that a word exists and append it to items if it does.
Any suggestions would be great!
items = []
part_list = ['ccs', 'fcex', '8-12', '8-15', '8-15b', '80ha3']
f = open("C:/Users/SilenX/Desktop/python/frys.txt", "r+")
searchlines = f.readlines()
f.close()
for n, line in enumerate(searchlines):
    p = 0
    if part_list[p] in line.split():
        part_list[p] = part_list[p + 1]
        parts = searchlines[n]
        parts = parts.strip('\n')
        items.append(parts)
print items
You're doing some complex stuff with enumeration that I really don't think is necessary, and it definitely looks like your inner "loop" isn't doing what you want (because as you've written it, it isn't a loop). Try this:
part_list = ['ccs', 'fcex', '8-12', '8-15', '8-15b', '80ha3']
items = []
f = open("C:/Users/SilenX/Desktop/python/frys.txt", "r")  # Open the file
for line in f:
    for token in line.lower().split():  # Loop over lowercase words in the line
        if token in part_list:          # If it's one of the words you're looking for,
            items.append(token)         # append it to your list.
f.close()
print items
This will find all the words in the file that appear in your list. It will not identify words in your file that are attached to something else, like "ccs." or "fcex8-12". If you want that, you'll have to reverse the way the search works: count how many times each word in part_list appears in the line, rather than checking whether each word in the line is in part_list.
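A sketch of that reversed search (plain substring matching, so a part embedded in a longer token now counts too):

f = open("C:/Users/SilenX/Desktop/python/frys.txt", "r")
for line in f:
    lower = line.lower()
    for part in part_list:
        if part in lower:       # 'ccs.' or 'fcex8-12' in the line now matches
            items.append(part)
f.close()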

Python: counting unique instance of words across several lines

I have a text file with several observations, each observation on one line. I would like to detect the unique occurrence of each word in a line: in other words, if the same word occurs twice or more on the same line, it is still counted once. However, I would like to count the frequency of occurrence of each word across all observations; that is, if a word occurs in two or more lines, I want to count the number of lines it occurred in. Here is the program I wrote, and it is really slow when processing a large number of files. I also remove certain words from the file by referencing another file. Please offer suggestions on how to improve speed. Thank you.
import re, string
from itertools import chain, tee, izip
from collections import defaultdict

def count_words(in_file="", del_file="", out_file=""):
    d_list = re.split('\n', file(del_file).read().lower())
    d_list = [x.strip(' ') for x in d_list]
    dict2 = {}
    f1 = open(in_file, 'r')
    lines = map(string.strip, map(str.lower, f1.readlines()))
    for line in lines:
        dict1 = {}
        new_list = []
        for char in line:
            new_list.append(re.sub(r'[0-9#$?*_><#\(\)&;:,.!-+%=\[\]\-\/\^]', "_", char))
        s = ''.join(new_list)
        for word in d_list:
            s = s.replace(word, "")
        for word in s.split():
            try:
                dict1[word] = 1
            except:
                dict1[word] = 1
        for word in dict1.keys():
            try:
                dict2[word] += 1
            except:
                dict2[word] = 1
    freq_list = dict2.items()
    freq_list.sort()
    f1.close()
    word_count_handle = open(out_file, 'w+')
    for word, freq in freq_list:
        print >> word_count_handle, word, freq
    word_count_handle.close()
    return dict2

dict = count_words("in_file.txt", "delete_words.txt", "out_file.txt")
You're running re.sub on each character of the line, one at a time. That's slow. Do it on the whole line:
s = re.sub(r'[0-9#$?*_><#\(\)&;:,.!-+%=\[\]\-\/\^]', "_", line)
Also, have a look at sets and the Counter class in the collections module. It may be faster if you just count and then discard those you don't want afterwards.
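A sketch of that idea, counting for each word the number of lines it appears in (the file name comes from the question's example call; the stop-word filtering is left out):

from collections import Counter

line_counts = Counter()
with open("in_file.txt") as f:
    for line in f:
        # a set makes each word count at most once per line
        line_counts.update(set(line.lower().split()))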
Without having done any performance testing, the following come to mind:
1) You're using regexes -- why? Are you just trying to get rid of certain characters?
2) You're using exceptions for flow control -- although it can be pythonic (better to ask forgiveness than permission), throwing exceptions can often be slow (a rewrite without them is sketched after this list). As seen here:
for word in dict1.keys():
    try:
        dict2[word] += 1
    except:
        dict2[word] = 1
3) Turn d_list into a set, and use Python's in to test for membership, and simultaneously...
4) Avoid heavy use of the replace method on strings -- I believe you're using it to filter out the words that appear in d_list. This could be accomplished instead by avoiding replace and just filtering the words in the line, either with a list comprehension:
[word for word in words if word not in del_words]
or with a filter (not very pythonic):
filter(lambda word: word not in del_words, words)
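Pulling points 2-4 together, a sketch reusing the question's variable names:

del_words = set(d_list)               # point 3: sets give fast membership tests
for line in lines:
    # point 4: filter the words instead of calling replace repeatedly
    words = [w for w in line.split() if w not in del_words]
    for word in set(words):           # each word counts once per line
        dict2[word] = dict2.get(word, 0) + 1   # point 2: no try/except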
import re

u_words = set()
u_words_in_lns = []
wordcount = {}
words = []

# get unique words per line
for line in buff.split('\n'):
    u_words_in_lns.append(set(line.split(' ')))
# create a set of all unique words
map(u_words.update, u_words_in_lns)
# flatten the sets into a single list of words again
map(words.extend, u_words_in_lns)
# count everything up
for word in u_words:
    wordcount[word] = len(re.findall(word, str(words)))

python -- trying to count the length of the words from a file with dictionaries

def myfunc(filename):
    filename = open('hello.txt', 'r')
    lines = filename.readlines()
    filename.close()
    lengths = {}
    for line in lines:
        for punc in ".,;'!:&?":
            line = line.replace(punc, " ")
        words = line.split()
        for word in words:
            length = len(word)
            if length not in lengths:
                lengths[length] = 0
            lengths[length] += 1
    for length, counter in lengths.items():
        print(length, counter)
    filename.close()
Use collections.Counter (for Python versions before 2.7, there's a backport recipe).
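A sketch of the whole job with Counter, tallying word lengths across the file (punctuation stripping left out for brevity):

from collections import Counter

with open('hello.txt') as f:
    lengths = Counter(len(word) for line in f for word in line.split())
for length, count in sorted(lengths.items()):
    print(length, count)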
You are counting the frequency of words in a single line:
for line in lines:
    for word in length.keys():
        print(wordct, length)
Here, length is a dict of all distinct words plus their frequency, not their length:
length.get(word, 0) + 1
so you probably want to replace the above with
for line in lines:
    ....
# keep this at this indentation - will have a v. large dict, but of all words
for word in sorted(length.keys(), key=lambda x: len(x)):
    # word, freq, length
    print(word, length[word], len(word), "\n")
I would also suggest:
- Don't bring the file into memory like that; file objects and handlers are now iterators, well optimised for reading from files.
- Drop the wordct and so on in the main lines loop.
- Rename length to something else, perhaps words or dict_words.
Errr, maybe I misunderstood: are you trying to count the number of distinct words in the file (in which case use len(length.keys())), or the length of each word in the file, presumably ordered by length...?
The question has been more clearly defined now, so replacing the above answer:
The aim is to get a frequency of word lengths throughout the whole file. I would not even bother with line-by-line processing, but use something like:
fo = open(file)
text = fo.read()                # find() works on the contents, not the file object
fo.close()
d_freq = {}
st = -1                         # index of the previous space
while True:
    next_space_index = text.find(" ", st + 1)
    if next_space_index == -1:  # no more spaces: stop
        break
    word_len = next_space_index - st - 1
    d_freq[word_len] = d_freq.get(word_len, 0) + 1   # .get(...) += 1 would be a syntax error
    st = next_space_index
print d_freq
I think that will work, not enough time to try it now. HTH
