I'm looking for a resource to teach me how to connect Tkinter and JSON.
For example, take an input value (a word), search for that word in the JSON data, and then print out the result of the search.
By the way, I already have the Python application working in the terminal, but I want to go a step further and build a GUI.
Thank you,
import json  # import the JSON module
from difflib import get_close_matches  # difflib provides classes and functions for comparing sequences;
                                        # get_close_matches returns a list of the best "good enough" matches

data = json.load(open("data.json"))  # load the JSON file into a Python dictionary

def translate(w):
    w = w.lower()  # change the input to lower case
    if w in data:  # first scenario: the word exists in the dictionary, so return its data
        return data[w]
    elif w.title() in data:  # when the user inputs a proper noun,
        return data[w.title()]  # return the definition of names that start with a capital letter
    elif w.upper() in data:  # definitions of acronyms
        return data[w.upper()]
    elif len(get_close_matches(w, data.keys())) > 0:  # second scenario: find the closest match
        # ask the user whether the closest match is what they were looking for
        YN = input("Did you mean %s instead? Enter y if yes or n if no: " % get_close_matches(w, data.keys())[0])
        if YN == "y":
            return data[get_close_matches(w, data.keys())[0]]
        elif YN == "n":
            return "The word doesn't exist. Please double check it."
        else:
            return "We didn't understand your entry."
    else:  # third scenario: the word doesn't match anything and can't be found
        return "The word doesn't exist. Please double check it."

word = input("Enter word: ")
# in some cases the word has more than one definition, so make the output more readable
output = translate(word)
if type(output) == list:
    for item in output:
        print(item)
else:
    print(output)
Here's how to do it!
Sketch some pictures of what your current program would look like if it had a GUI.
Would "Did you mean %s instead?" be a popup box?
Would you have a list of all the known words?
Build the UI using tkinter.
Connect the UI up to the functions in your program. (You do have those, don't you?)
Your program isn't quite ready yet, since you are doing things like calling input inside your functions. Rework it so that those calls live outside your functions; then it'll probably be ready.
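For illustration, here is a minimal sketch of that wiring, assuming translate() is reworked so it never calls input() (the close-match confirmation is simplified into a suggestion message); names like on_search and output_label are just placeholders:

import json
import tkinter as tk
from difflib import get_close_matches

data = json.load(open("data.json"))

def translate(w):
    # pure lookup: no input() calls, so it can be driven by the GUI
    w = w.lower()
    if w in data:
        return data[w]
    matches = get_close_matches(w, data.keys())
    if matches:
        return "Did you mean %s instead?" % matches[0]
    return "The word doesn't exist. Please double check it."

def on_search():
    result = translate(entry.get())
    if isinstance(result, list):  # some words have several definitions
        result = "\n".join(result)
    output_label.config(text=result)

root = tk.Tk()
entry = tk.Entry(root)
entry.pack()
tk.Button(root, text="Search", command=on_search).pack()
output_label = tk.Label(root, text="", justify="left")
output_label.pack()
root.mainloop()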
Cheers, I am looking for help with my small Python project. The problem says that the program has to be able to decipher a "monoalphabetic substitution cipher", given that we have a complete database of words which will definitely appear (at least once) in the ciphered text.
I have tried to create such a database with the words that will be ciphered:
lst_sample = []
n = int(input('Number of words in database: '))
for i in range(n):
    x = input()
    lst_sample.append(x)
The way I am trying to "decipher" is to look at each word's structure: I assign numbers to the distinct letters based on the order they appear in the word (e.g. feed = 0112 and hood = 0112 are the same, because both are a combination of three different letters in the same arrangement). I am using the helper function pattern() for it:
def pattern(word):
    nextNum = 0
    letterNums = {}
    wordPattern = []
    for letter in word:
        if letter not in letterNums:
            letterNums[letter] = str(nextNum)
            nextNum += 1
        wordPattern.append(letterNums[letter])
    return ''.join(wordPattern)
Right after that, I made the database of ciphered words:
lst_en = []
while True:
    q = input('Insert ciphered words: ')
    if q == '':
        print(lst_en)
        break
    else:
        lst_en.append(q)
With these two databases I could finally create the deciphering process:
for i in lst_en:
    for q in lst_sample:
        x = q
        word = i
        if pattern(x) == pattern(word):
            print(x)
            print(word)
            print()
If the words in the database lst_sample have different lengths (e.g. food, car, yellow), there is no problem assigning the decrypted words; even when they have the same length, I can tell them apart based on their different structures (e.g. puff, sort).
The main problem, which I am not able to solve, comes when the words have the same length and the same structure (e.g. jane, word).
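For example, with the pattern() function above, the collision looks like this:

print(pattern('jane'))                      # 0123
print(pattern('word'))                      # 0123
print(pattern('jane') == pattern('word'))   # True: the pattern alone cannot tell these words apart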
I have no idea how to solve this problem while keeping the script architecture described above. Is there any way it could be solved with another if statement or something similar? Is there any way to solve it using the information that the words in lst_sample will definitely appear in the ciphered text?
Thanks for all help!
Now, I'm learning Python and I want to make a dictionary where the user can add words (in the first step just the word, later a definition as well).
word = input('Write a word here')
print('You added ' + word)
So, what I would like is for the user to be able to add more words, and for the program to save them somewhere.
How can I do this?
Typically, this could be done in a while-loop where the loop-condition variable is updated upon user input:
continue_condition = True
words = []
while continue_condition:
    word = input("Write a word here")
    words.append(word)
    continue_condition = input("Would you like to add another word? Then please type `Y`") == "Y"
If you want to populate a dictionary instead of a list, just adapt this code to your specific needs.
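For instance, a small sketch of the dictionary variant, assuming you want each word as a key and a user-entered definition as its value:

words = {}
continue_condition = True
while continue_condition:
    word = input("Write a word here: ")
    definition = input("Write its definition: ")
    words[word] = definition  # word is the key, definition is the value
    continue_condition = input("Would you like to add another word? Then please type `Y`: ") == "Y"
print(words)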
This will help to automate it:
word_dict = {}  # dictionary to store the key and value

def add():  # add() lets us add more words to our dictionary
    word = input('enter the word: ')  # take the user input
    word_dict[word] = word  # add the word to the dict; for simplicity this sample uses the same key and value
    if 'y' == input('do you want to add more words (y/n): '):  # check if the user wants to add another word
        add()  # if yes, call the function again --- recursion

add()  # call the function to add a word for the first time
print(word_dict)  # print all the words in our dict
So I have a small project with python.
A random song name and artist are chosen.
The artist and the first letter of each word in the song title are displayed.
The user has two chances to guess the name of the song.
If the user guesses the answer correctly the first time, they score 3 points. If the user guesses the answer correctly the second time, they score 1 point. The game repeats.
The game ends when a player guesses the song name incorrectly the second time.
So far I've created a text document and put a few lines of song titles.
In my code I have used the following:
random_lines = random.choice(open("songs.txt").readlines())
This randomly picks a line from the file, but then does nothing with it.
I am asking where to go from here. I need to display the first letter of each word on the line. I then need a counter of some sort to track chances. I also need to write something that checks whether the guess is correct and adds to a score counter.
OK, now just continue with your plan, it's good. First you have to get the first letter from each word in the line. You can do that with:
res = []
for i in line.split():
    res.append(i[0])
There you are, you have the first letter of every word in the list res. Now you need to check if the user entered the title correctly. Maybe the best idea would be to keep everything lower-cased (in your file and in the user input) for easier checking. Now you just have to transform the user input to lower-case. You can do it with:
user_entry = input('Song title: ')
if user_entry.lower() == line.lower():
    score += 3
else:
    user_entry_2 = input('Song title: ')
    if user_entry_2.lower() == line.lower():
        score += 1
    else:
        print('Game over.')
        sys.exit()
You should make this into a function and call it in a loop until the user misses. The function could return the current score, which you could print out (in that case you should remove the sys.exit() call).
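For example, a rough sketch of that structure (play_round and pick_line are just placeholder names; the hint uses the first letters collected above, and scoring follows the 3-point/1-point rule from the question):

import random

def pick_line():
    # stands in for your random.choice(open("songs.txt").readlines()) line
    return random.choice(open("songs.txt").readlines()).strip()

def play_round(line):
    # returns the points earned this round, or None if the player missed twice
    hint = ' '.join(word[0] for word in line.split())
    print("Hint:", hint)
    if input('Song title: ').lower() == line.lower():
        return 3
    if input('Song title: ').lower() == line.lower():
        return 1
    return None

score = 0
while True:
    earned = play_round(pick_line())
    if earned is None:
        print('Game over. Final score:', score)
        break
    score += earned
    print('Current score:', score)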
I hope this is clear enough. If not, write the question in the comments :)
Assuming your random choice string contains the data in the format {songname} - {artist}
Then you first need to get the song name and the artist as separate strings.
Print the first letters and ask for input.
After which you need to compare the strings and do some logic with the points.
points = 0
while True:
    random_line = 'Song - artist'  # change this to your random string
    song, artist = random_line.split('-')
    # show the artist and the first letter of each word in the song title
    print("{0} - {1}".format(' '.join(w[0] for w in song.strip().split()), artist.strip()))
    for i in range(0, 3):
        if i == 2:
            print('You died with {} points'.format(points))
            exit(0)
        elif song.strip().lower() == input('Guess the song: ').lower():
            points += 3 if i == 0 else 1
            print('correct guess. points: ' + str(points))
            break
        else:
            print('Try again')
I have written a program which checks for curse words in a text document.
I convert the document into a list of words and pass each word to the site to check whether it is a curse word or not.
The problem is that if the text is too big, it runs very slowly.
How do I make it faster?
import urllib.request

def read_text():
    quotes = open(r"C:\Self\General\Pooja\Edu_Career\Learning\Python\Code\Udacity_prog_foundn_python\movie_quotes.txt")  # built-in function
    contents_of_file = quotes.read().split()
    # print(contents_of_file)
    quotes.close()
    check_profanity(contents_of_file)

def check_profanity(text_to_check):
    flag = 0
    for word in text_to_check:
        connection = urllib.request.urlopen("http://www.wdylike.appspot.com/?q=" + word)
        output = connection.read()
        # print(output)
        if b"true" in output:  # the file is opened in bytes mode and the output is bytes, so compare bytes to bytes
            flag = flag + 1
        connection.close()
    if flag > 0:
        print("profanity alert")
    else:
        print("the text has no curse words")

read_text()
The website you are using supports checking more than one word per fetch. Hence, to make your code faster:
A) Break the loop when you find the first curse word.
B) Send a "super word" (many words joined together) to the site.
Hence:
def check_profanity(text_to_check):
    flag = 0
    batch_size = 100  # or the max number of words you can check at the same time
    for start in range(0, len(text_to_check), batch_size):
        super_word = "%20".join(text_to_check[start:start + batch_size])
        connection = urllib.request.urlopen("http://www.wdylike.appspot.com/?q=" + super_word)
        output = connection.read()
        connection.close()
        if b"true" in output:
            flag = flag + 1
            break
    if flag > 0:
        print("profanity alert")
    else:
        print("the text has no curse words")
First off, as Menno Van Dijk suggests, storing a subset of common known curse words locally would allow rapid checks for profanity up front, with no need to query the website at all; if a known curse word is found, you can alert immediately, without checking anything else.
Secondly, inverting that suggestion, cache at least the first few thousand most common known non-cursewords locally; there is no reason that every text containing the word "is", "the" or "a" should be rechecking those words over and over. Since the vast majority of written English uses mostly the two thousand most common words (and an even larger majority uses almost exclusively the ten thousand most common words), that can save an awful lot of checks.
Third, uniquify your words before checking them; if a word is used repeatedly, it's just as good or bad the second time as it was the first, so checking it twice is wasteful.
Lastly, as MTMD suggests, the site allows you to batch your queries, so do so.
Between all of these suggestions, you'll likely go from a 100,000 word file requiring 100,000 connections to requiring only 1-2. While multithreading might have helped your original code (at the expense of slamming the webservice), these fixes should make multithreading pointless; with only 1-2 requests, you can wait the second or two it would take for them to run sequentially.
As a purely stylistic issue, having read_text call check_profanity is odd; those should really be separate behaviors (read_text returns text which check_profanity can then be called on).
With my suggestions (assumes existence of files with one known word per line, one for bad words, one for good):
import itertools  # For islice, useful for batching
import urllib.request

def load_known_words(filename):
    with open(filename) as f:
        return frozenset(map(str.rstrip, f))

known_bad_words = load_known_words(r"C:\path\to\knownbadwords.txt")
known_good_words = load_known_words(r"C:\path\to\knowngoodwords.txt")

def read_text():
    with open(r"C:\Self\General\Pooja\Edu_Career\Learning\Python\Code\Udacity_prog_foundn_python\movie_quotes.txt") as quotes:
        return quotes.read()

def check_profanity(text_to_check):
    # Uniquify contents so words aren't checked repeatedly
    if not isinstance(text_to_check, (set, frozenset)):
        text_to_check = set(text_to_check)
    # Remove words known to be fine from set to check
    text_to_check -= known_good_words
    # Precheck for any known bad words so loop is skipped completely if found
    has_profanity = not known_bad_words.isdisjoint(text_to_check)
    while not has_profanity and text_to_check:
        block_to_check = frozenset(itertools.islice(text_to_check, 100))
        text_to_check -= block_to_check
        with urllib.request.urlopen("http://www.wdylike.appspot.com/?q=" + ' '.join(block_to_check)) as connection:
            output = connection.read()
            # print(output)
        has_profanity = b"true" in output
    if has_profanity:
        print("profanity alert")
    else:
        print("the text has no curse words")

text = read_text()
check_profanity(text.split())
There are a few things you can do:
Read batches of text.
Give each batch of text to a worker process which then checks for profanity.
Introduce a cache which saves commonly used curse words offline to minimize the number of required HTTP requests.
Use multithreading.
Read batches of text.
Assign each batch to a thread and check all the batches separately (a sketch of this is shown after the code below).
Send many words at once. Change number_of_words to the number of words you want to send at once.
import urllib.request

def read_text():
    quotes = open("test.txt")
    contents_of_file = quotes.read().split()
    quotes.close()
    check_profanity(contents_of_file)

def check_profanity(text):
    number_of_words = 200
    word_lists = [text[x:x+number_of_words] for x in range(0, len(text), number_of_words)]
    flag = False
    for word_list in word_lists:
        connection = urllib.request.urlopen("http://www.wdylike.appspot.com/?q=" + "%20".join(word_list))
        output = connection.read()
        if b"true" in output:
            flag = True
            break
        connection.close()
    if flag:
        print("profanity alert")
    else:
        print("the text has no curse words")

read_text()
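For the multithreading suggestion above, here is a minimal sketch using concurrent.futures; it assumes the same batching and the same profanity-check URL as the code above, and batch_has_profanity / check_profanity_threaded are just placeholder names:

import urllib.request
from concurrent.futures import ThreadPoolExecutor

def batch_has_profanity(word_list):
    # one request per batch; returns True if the service flags anything in the batch
    url = "http://www.wdylike.appspot.com/?q=" + "%20".join(word_list)
    with urllib.request.urlopen(url) as connection:
        return b"true" in connection.read()

def check_profanity_threaded(text, number_of_words=200, workers=4):
    word_lists = [text[x:x+number_of_words] for x in range(0, len(text), number_of_words)]
    with ThreadPoolExecutor(max_workers=workers) as executor:
        # map runs the batches across the worker threads and yields results in order
        if any(executor.map(batch_has_profanity, word_lists)):
            print("profanity alert")
        else:
            print("the text has no curse words")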
I need to write a function that will search for words in a matrix. For the moment I'm trying to search line by line to see if the word is there. This is my code:
def search(p):
    w = []
    for i in p:
        w.append(i)
    s = read_wordsearch()  # this is my matrix full of letters
    for line in s:
        l = []
        for letter in line:
            l.append(letter)
        if w == l:
            return True
        else:
            pass
This code works only if my word begins in the first position of a line.
For example I have this matrix:
[['a','f','l','y'], ['h','e','r','e'], ['b','n','o','i']]
I want to find the word "fly" but can't, because my code only finds words like "here" or "her" that begin in the first position of a line...
Any form of help, hint, or advice would be appreciated. (And sorry if my English is bad...)
You can convert each line in the matrix to a string and try to find the search word in it.
def search(p):
    s = read_wordsearch()
    for line in s:
        if p in ''.join(line):
            return True
I'll give you a tip for searching within a text for a word. I think you will be able to extrapolate it to your data matrix.
s = "xxxxxxxxxhiddenxxxxxxxxxxx"
target = "hidden"
for i in xrange(len(s)-len(target)):
if s[i:i+len(target)] == target:
print "Found it at index",i
break
If you want to search for words of all lengths, and perhaps you have a list of possible solutions:
s = "xxxxxxxxxhiddenxxxtreasurexxxxxxxx"
targets = ["hidden","treasure"]
for i in xrange(len(s)-1):
for j in xrange(i+1,len(s)):
if s[i:j] in targets:
print "Found",s[i:j],"at index",
def search(p):
    w = ''.join(p)
    s = read_wordsearch()  # this is my matrix full of letters
    for line in s:
        word = ''.join(line)
        if word.find(w) >= 0:
            return True
    return False
Edit: there are already lots of string functions available in Python. You just need to work with strings to be able to use them.
Join the characters in the inner lists to create a word and search with in.
def search(word, data):
    return any(word in ''.join(characters) for characters in data)

data = [['a','f','l','y'], ['h','e','r','e'], ['b','n','o','i']]
if search('fly', data):
    print('found')
data contains the matrix, characters is the name of each individual inner list. any will stop after it has found the first match (short circuit).