I have this simple code that reads a txt file and accepts a word from the user to check whether that word is in the txt document or not. It looks like this only works for a single word. I have to modify this code so that the user can input two or more words, for example GOING HOME instead of just HOME. Any help, please.
word = input('Enter any word that you want to find in text File :')
f = open("AM30.EB", "r")
if word in f.read().split():
    print('Word Found in Text File')
else:
    print('Word not found in Text File')
I'm not sure this is exactly what you are looking for:
f = open("AM30.EB","r")
word_list = []
while True:
word = input('Enter any word that you want to find in text File or 1 to stop entering words :')
if word == "1": break
word_list.append(word)
file_list = f.read().split()
for word in word_list:
if word in file_list:
print("Found word - {}".format(word))
These are case-sensitive solutions!
All words in query separately:
words = input('Enter all words that you want to find in text File: ').split()
f_data = []
with open("AM30.EB", "r") as f:
    f_data = f.read().split()
results = list(map(lambda x: any([y == x for y in f_data]), words))
print("Found ", end="")
for i in range(len(words)):
    print(f"'{words[i]}'", end="")
    if i < len(words) - 1:
        print(" and ", end="")
print(f": {all(results)}")
Any word in query:
words = input('Enter any word that you want to find in the text File: ').split()
f_data = []
with open("AM30.EB", "r") as f:
    f_data = f.read().split()
results = list(map(lambda x: any([y == x for y in f_data]), words))
if any(results):
    for i in range(len(words)):
        print(f"Found '{words[i]}': {results[i]}")
Exact phrase in query:
phrase = input('Enter a phrase that you want to find in the text File: ')
f_data = ""
with open("AM30.EB", "r") as f:
    f_data = f.read()
print(f"Found '{phrase}': {f_data.count(phrase) > 0}")
This is case-sensitive and checks each word individually. Not sure if this is what you were looking for, but I hope it helps!
file1 = open('file.txt', 'r').read().split()
wordsFoundList = []
userInput = input('Enter any word or words that you want to find in text File :').split()
for word in userInput:
    if word in file1:
        wordsFoundList.append(word)
if len(wordsFoundList) == 0:
    print("No words found in text file")
else:
    print("These words were found in text file: " + str(wordsFoundList))
Related
I want to open a text file and split it into words on blank spaces, but the words end up cut by \n instead. Why does it work like this? Is the problem in the text file, or is my code wrong?
def process(w):
    output = ""
    for ch in w:
        if ch.isalpha():
            output += ch
    return output.lower()

words = set()
fname = input("file name: ")
file = open(fname, "r")
for line in file:
    lineWords = line.split()
    for word in lineWords:
        words.add(process(lineWords))
print("Number of words used =", len(words))
print(words)
Text file:
Result:
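The likely culprit is the inner loop: words.add(process(lineWords)) passes the whole list of words from the line, so process() concatenates every purely alphabetic word on that line into one string, which is why the output looks as if it were cut by \n rather than by spaces. Passing the single word instead should behave as intended:

for word in lineWords:
    words.add(process(word))  # process one word at a time, not the whole line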
I want to tally up the word frequencies from text files. The issue I'm facing is that only the last word is tallied.
def main():
    rep = input("Enter a text file: ")
    infield = open(rep, 'r')
    s = infield.read()
    punctuation = [',', ';', '.', ':', '!', "'", "\""]
    for ch in punctuation:
        s = s.replace(ch, ' ')
    s = s.split()
    wordcount = {}
    for word in s:
        if word not in wordcount:
            count_1 = s.count(word)
            wordcount = {word:count_1}
            #s.append(w:s.count(w))
    print(wordcount)
main()
Expected: a tallied word count for the words in the text file, in key-value format (a dictionary).
Actual: {'fun': 2}
Fun is the last word of the text file and indeed comes up only twice.
Also, the indentation that is displayed isn't reflective of what I have.
Your problem is here:
wordcount = {word:count_1}
You're overwriting the dictionary on every loop iteration.
Make it:
wordcount[word] = count_1
Though, to be honest, the much better approach is to use the standard library's collections.Counter container.
def main():
    import collections
    rep = input("Enter a text file: ")
    infield = open(rep, 'r')
    s = infield.read()
    punctuation = [',', ';', '.', ':', '!', "'", "\""]
    for ch in punctuation:
        s = s.replace(ch, ' ')
    s = s.split()
    wordcount = collections.Counter(s)   # <===
    print(wordcount.most_common())       # <===
main()
No point in manually doing something that is already done in the standard library (since Python 2.7):
from collections import Counter
import re
rep = input("Enter a text file: ")
infield = open(rep, 'r')
s = infield.read()
s = re.split(r'[ ,;.:!\'"]', s)
wordcount = Counter(s)
del wordcount['']
print(wordcount)
There is a difference between re.split() and string.split(): the former creates empty strings when there are several delimiters in a row, the latter doesn't. That's why the del wordcount[''] is needed.
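A quick way to see the difference (a small standalone example, not part of the answer's script):

import re

text = "one  two"                 # note the two spaces between the words
print(re.split(r'[ ]', text))     # ['one', '', 'two'] - empty string between adjacent delimiters
print(text.split())               # ['one', 'two'] - no empty strings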
You had a couple of issues, but the most pressing one was this bit of code:
for word in s:
    if word not in wordcount:
        count_1 = s.count(word)
        wordcount = {word:count_1}
You were setting wordcount to a single-key dictionary at every new word. This is how I would have written it...
def main():
    punctuation = [',', ';', '.', ':', '!', "'", "\""]
    rep = input("Enter a text file: ")
    with open(rep, 'r') as infield:
        s = infield.read()
    for ch in punctuation:
        s = s.replace(ch, ' ')
    s = s.split()
    wordcount = {}
    for word in s:
        if word not in wordcount.keys():
            wordcount[word] = 1
        else:
            wordcount[word] += 1
    print(wordcount)
main()
Use wordcount.update({word: count_1}) instead of wordcount = {word:count_1}.
Full example:
# coding: utf-8
PUNCTUATION = [',', ';', '.', ':', '!', "'", "\""]

if __name__ == '__main__':
    wordcount = {}
    rep = input("Enter a text file: ")
    infield = open(rep, 'r')
    s = infield.read()
    for ch in PUNCTUATION:
        s = s.replace(ch, ' ')
    s = s.split()
    for word in s:
        if word not in wordcount:
            count_1 = s.count(word)
            wordcount.update({word: count_1})
    print(wordcount)
I have a file called words.txt with dictionary words in it. I open this file and ask the user to enter a word, then try to find out whether this word is present in the file or not: if yes, print True, else print 'Word not found'.
wordByuser = input("Type a Word:")
file = open('words.txt', 'r')
if wordByuser in file: #or if wordByuser==file:
print("true")
else:
print("No word found")
The words.txt file contains each word on a single line, with the next word on the following line.
Use this one-line solution:
lines = file.read().splitlines()
if wordByuser in lines:
    ....
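Put together with the rest of the question's code, that might look like this (a sketch reusing the question's words.txt and prompt):

wordByuser = input("Type a Word:")
file = open('words.txt', 'r')
lines = file.read().splitlines()  # one word per line, with the trailing newlines stripped
if wordByuser in lines:
    print("true")
else:
    print("No word found")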
Read the file first; also, use snake_case (https://www.python.org/dev/peps/pep-0008/):
user_word = input("Type a Word:")
with open('words.txt') as f:
    content = f.read()
if user_word in content:
    print(True)
else:
    print('Word not found')
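Note that user_word in content is a substring check, so it would also match part of a longer word; if only whole words should count, comparing against the individual lines works (a sketch of that variant):

if user_word in content.splitlines():  # exact match against each line (one word per line)
    print(True)
else:
    print('Word not found')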
This function should do it:
def searchWord(wordtofind):
    with open('words.txt', 'r') as words:
        for word in words:
            if wordtofind == word.strip():
                return True
    return False
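It could then be called with the question's prompt, for example:

wordByuser = input("Type a Word:")
if searchWord(wordByuser):
    print("true")
else:
    print("No word found")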
You just need to call .read() on the file object you opened.
Like this:
wordByuser = input("Type a Word:")
file = open('words.txt', 'r')
data = file.read()
if wordByuser in data:
print("true")
else:
print("No word found")
I have written a small script to compare a text file's content to another text file containing a word list. However, running it says the matches cannot be found, and I cannot fix the code to compare them successfully with correct results.
wordlist = input("What is your word list called?")
f = open(wordlist)
t = f.readlines()
l = ''.join(t).lower()
chatlog = input("What is your chat log called?")
with open(chatlog) as f:
    found = False
    for line in f:
        line = line.lower()
        if l in line:
            print(line)
            found = True
    if not found:
        print("not here")
wordlist = input("What is your word list called?")
f = open(wordlist)
l = set(w.strip().lower() for w in f)
chatlog = input("What is your chat log called?")
with open(chatlog) as f:
    found = False
    for line in f:
        line = line.lower()
        if any(w in line for w in l):
            print(line)
            found = True
    if not found:
        print("not here")
I am trying to write a program that opens a text document and replaces all four-letter words with **. I have been messing around with this program for multiple hours now and cannot seem to get anywhere. I was hoping someone would be able to help me out with this one. Here is what I have so far. Help is greatly appreciated!
def censor():
    filename = input("Enter name of file: ")
    file = open(filename, 'r')
    file1 = open(filename, 'w')
    for element in file:
        words = element.split()
        if len(words) == 4:
            file1 = element.replace(words, "xxxx")
            alist.append(bob)
    print(file)
    file.close()
Here is a revised version; I don't know if this is much better:
def censor():
    filename = input("Enter name of file: ")
    file = open(filename, 'r')
    file1 = open(filename, 'w')
    i = 0
    for element in file:
        words = element.split()
        for i in range(len(words)):
            if len(words[i]) == 4:
                file1 = element.replace(i, "xxxx")
            i = i + 1
    file.close()
for element in file:
    words = element.split()
    for word in words:
        if len(word) == 4:
            etc etc
Here's why:
say the first line in your file is 'hello, my name is john'
then for the first iteration of the loop: element = 'hello, my name is john'
and words = ['hello,','my','name','is','john']
You need to check what is inside each word, hence for word in words.
Also it might be worth noting that in your current method you do not pay any attention to punctuation. Note the first word in words above...
To get rid of punctuation rather say:
import string
blah blah blah ...
for word in words:
    cleaned_word = word.strip(string.punctuation)
    if len(cleaned_word) == 4:
        etc etc
Here is a hint: len(words) returns the number of words on the current line, not the length of any particular word. You need to add code that would look at every word on your line and decide whether it needs to be replaced.
Also, if the file is more complicated than a simple list of words (for example, if it contains punctuation characters that need to be preserved), it might be worth using a regular expression to do the job.
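For instance, something along these lines (a sketch, not the poster's code; it replaces every run of exactly four word characters with **, as in the question, and leaves punctuation in place):

import re

def censor_text(text):
    # \b marks a word boundary and \w{4} matches exactly four word characters
    return re.sub(r'\b\w{4}\b', '**', text)

print(censor_text("this is a test line"))  # -> ** is a ** **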
It can be something like this:
def censor():
    filename = input("Enter name of file: ")
    with open(filename, 'r') as f:
        lines = f.readlines()
    newLines = []
    for line in lines:
        words = line.split()
        for i, word in enumerate(words):
            if len(word) == 4:
                words[i] = '**'
        newLines.append(' '.join(words))
    with open(filename, 'w') as f:
        for line in newLines:
            f.write(line + '\n')
def censor(filename):
    """Takes a file and writes it into file censored.txt with every 4-letter word replaced by xxxx"""
    infile = open(filename)
    content = infile.read()
    infile.close()
    outfile = open('censored.txt', 'w')
    table = content.maketrans('.,;:!?', '      ')  # six blanks: maketrans needs both strings to be the same length
    noPunc = content.translate(table)  # replace all punctuation marks with blanks, so they won't tie two words together
    wordList = noPunc.split(' ')
    for word in wordList:
        if '\n' in word:
            count = word.count('\n')
            wordLen = len(word) - count
        else:
            wordLen = len(word)
        if wordLen == 4:
            censoredWord = word.replace(word, 'xxxx ')
            outfile.write(censoredWord)
        else:
            outfile.write(word + ' ')
    outfile.close()
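A quick way to try it (the file name here is just an illustration):

censor('sample.txt')  # reads sample.txt and writes the censored text to censored.txt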