Okay, so I have created a list containing 100 words. What is the easiest way to print the 50th word in Python without knowing the number of words in the list? So let's say I have 1098 words in a list; how do I find the middle word?
groceries=["rice","curd","milk","wheat","kjhdlu","rewqtq","erhaehwjrtj","weghfoiuqg","kgfu"]
middle_index=len(groceries)//2
middle_word= groceries[middle_index]
print(middle_word)
I did this and got the right answer, but is there an easier way to do it?
Well, that depends on what you mean by "easiest". You can actually do it in one line, which I'd say is already about as easy as it gets:
groceries[len(groceries)//2]
This is the easiest and most efficient method.
print(groceries[len(groceries)//2])
And if your list contains only numeric elements, you can use the median function from the statistics module (note that for a list of even length, median returns the average of the two middle values rather than an element of the list). E.g.:
import statistics as st
mid = st.median(your_list_of_integers)
print(mid)
OP says "a list contains 100 words, what is easiest way to print the 50th word"
The 50th word will be at index 49:
groceries=["rice","curd","milk","wheat","kjhdlu","rewqtq","erhaehwjrtj","weghfoiuqg","kgfu"]
if len(groceries) % 2 == 1:
    middle_index = len(groceries) // 2      # odd number of words
else:
    middle_index = len(groceries) // 2 - 1  # even number of words
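If you want that as a single expression, (len - 1) // 2 gives len // 2 for odd lengths and len // 2 - 1 for even lengths, so one line covers both branches above:

print(groceries[(len(groceries) - 1) // 2])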
The assignment is to write a Caesar Cipher algorithm that receives 2 parameters, the first being a String parameter, the second telling how far to shift the alphabet. The first part is to set up a method and set up two strings, one normal and one shifted. I have done this. Then I need to make a loop to iterate through the original string to build a new string, by finding the original letters and selecting the appropriate new letter from the shifted string. I've spent at least two hours staring at this one, and talked to my teacher, so I know I'm doing some things right. But as for what goes in the while loop, I really don't have a clue. Any hints or pushes in the right direction would be very helpful, so I at least have somewhere to start. Thank you.
def cipher(x, dist):
    alphabet = "abcdefghijklmnopqrstuvwxyz"
    shifted = "xyzabcdefghijklmnopqrstuvw"
    stringspot = 0
    shiftspot = (x.find("a"))
    aspot = (x.find("a"))
    while stringspot < 26:
        aspot = shifted(dist)
        shifted =                  # this is where I'm stuck
        stringspot = stringspot + 1
    ans =                          # and here
    return ans
print(cipher("abcdef", 1))
print(cipher("abcdef", 2))
print(cipher("abcdef", 3))
print(cipher("dogcatpig", 1))
Here are some pushes and hints:
You should validate your inputs. In particular, make sure that the shift distance is "reasonable," where reasonable means something you can handle. I recommend <=25.
If the maximum shift amount is 25, the letter 'a' plus 25 would get 'z'. The letter 'z' plus 25 will go past the end of the alphabet. But it wouldn't go past the end of TWO alphabets. So that's one way to handle wrap-around.
User @zondo, in his solution, handles upper-case letters. You didn't mention whether you want to handle them or not. You may want to clarify that with your teacher.
If you know about dictionaries, you might want to build one to make it easy to map the old letters to the new letters.
You need to realize that strings can be indexed just like tuples or lists. I don't see you doing that in your code.
You can get an "ASCII code" number for a letter using ord(). The numbers are arbitrary, but both upper and lower case numbers are packed together tightly in ranges of 26. This means you can do math with them. (For example, ord('a') is 97. Not super useful. But ord('b') - ord('a') is 1, which might be good to know.)
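To make that concrete, here's a tiny sketch (assuming lower-case letters only) of shifting a single letter with ord()/chr(); the % 26 is an alternative to the two-alphabets trick for handling wrap-around:

def shift_letter(letter, dist):
    # Map 'a'..'z' to 0..25, add the shift, wrap with modulo, map back.
    offset = ord(letter) - ord('a')
    return chr(ord('a') + (offset + dist) % 26)

print(shift_letter('z', 25))  # 'y': wraps around instead of running off the end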
alphabet and shifted are supposed to be a mapping between the original stream and the ciphertext. The loop's job is to iterate over all the letters in the stream and substitute them. More specifically, the letter in alphabet and its substitute in shifted reside at the same index; hence the mapping. In pseudocode:
ciphertext = empty
for each letter in x:
    i = index of letter in alphabet
    new_letter = shifted[i]
    add new_letter to ciphertext
The whole loop can be simplified to a list comprehension, but that shouldn't be your primary concern. For a more direct mapping than the pseudocode above, look into dictionaries.
Another thing that stands out in your code is the generation of shifted, which should depend on the argument dist, so it can't just be hardcoded. So, if dist is 5, the first letter of shifted should be whatever lies at index 0+5 in alphabet, and so on. Hint: modulo operator.
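Putting those hints together, a minimal sketch (not a drop-in answer to the assignment; it handles lower-case letters only):

def cipher(x, dist):
    alphabet = "abcdefghijklmnopqrstuvwxyz"
    # Build shifted from dist instead of hard-coding it: the letter at
    # index i of shifted is the letter at (i + dist) % 26 of alphabet.
    shifted = "".join(alphabet[(i + dist) % 26] for i in range(26))
    ans = ""
    for letter in x:
        i = alphabet.find(letter)  # index of the plaintext letter
        ans += shifted[i]          # substitute letter at the same index
    return ans

print(cipher("abcdef", 1))  # bcdefg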
I am trying to solve the reverse Boggle problem. Simply put, given a list of words, come up with a 4x4 grid of letters in which as many words in the list can be found in sequences of adjacent letters (letters are adjacent both orthogonally and diagonally).
I DO NOT want to take a known board and solve it. That is an easy TRIE problem and has been discussed/solved to death here for people's CS projects.
Example word list:
margays, jaguars, cougars, tomcats, margay, jaguar, cougar, pumas, puma, toms
Solution:
ATJY
CTSA
OMGS
PUAR
This problem is HARD (for me). Algorithm I have so far:
For each word in the input, make a list of all possible ways it can legally appear on the board by itself.
Try all possible combinations of placing word #2 on those boards and keep the ones that have no conflicts.
Repeat till end of list.
...
Profit!!! (for those that read /.)
Obviously, there are implementation details. Start with the longest word first. Ignore words that are substrings of other words.
I can generate all 68k possible boards for a 7-character word in around 0.4 seconds. Then when I add an additional 7-character word, I need to compare 68k x 68k pairs of boards, at up to 7 comparisons each. Solve time becomes glacial.
There must be a better way to do this!!!!
Some code:
BOARD_SIDE_LENGTH = 4

class Board:
    def __init__(self):
        pass

    def setup(self, word, start_position):
        self.word = word
        self.indexSequence = [start_position,]
        self.letters_left_over = word[1:]
        self.overlay = []
        # set up template for overlay. When we compare boards, we will add to this if the board fits
        for i in range(BOARD_SIDE_LENGTH*BOARD_SIDE_LENGTH):
            self.overlay.append('')
        self.overlay[start_position] = word[0]
        self.overlay_count = 0

    @classmethod
    def copy(boardClass, board):
        newBoard = boardClass()
        newBoard.word = board.word
        newBoard.indexSequence = board.indexSequence[:]
        newBoard.letters_left_over = board.letters_left_over
        newBoard.overlay = board.overlay[:]
        newBoard.overlay_count = board.overlay_count
        return newBoard

    # need to check if otherboard will fit into existing board (allowed to use blank spaces!)
    # otherBoard will always be just a single word
    @classmethod
    def testOverlay(boardClass, this_board, otherBoard):
        for pos in otherBoard.indexSequence:
            this_board_letter = this_board.overlay[pos]
            other_board_letter = otherBoard.overlay[pos]
            if this_board_letter == '' or other_board_letter == '':
                continue
            elif this_board_letter == other_board_letter:
                continue
            else:
                return False
        return True

    @classmethod
    def doOverlay(boardClass, this_board, otherBoard):
        # otherBoard will always be just a single word
        for pos in otherBoard.indexSequence:
            this_board.overlay[pos] = otherBoard.overlay[pos]
        this_board.overlay_count = this_board.overlay_count + 1

    @classmethod
    def newFromBoard(boardClass, board, next_position):
        newBoard = boardClass()
        newBoard.indexSequence = board.indexSequence + [next_position]
        newBoard.word = board.word
        newBoard.overlay = board.overlay[:]
        newBoard.overlay[next_position] = board.letters_left_over[0]
        newBoard.letters_left_over = board.letters_left_over[1:]
        newBoard.overlay_count = board.overlay_count
        return newBoard

    def getValidCoordinates(self, board, position):
        # yield every in-bounds neighbour cell not already used by this word
        row = position // BOARD_SIDE_LENGTH
        column = position % BOARD_SIDE_LENGTH
        for r in range(row - 1, row + 2):
            for c in range(column - 1, column + 2):
                if r >= 0 and r < BOARD_SIDE_LENGTH and c >= 0 and c < BOARD_SIDE_LENGTH:
                    if (r*BOARD_SIDE_LENGTH+c not in board.indexSequence):
                        yield r, c

class boardgen:
    def __init__(self):
        self.boards = []

    def createAll(self, board):
        # get the next letter
        if len(board.letters_left_over) == 0:
            self.boards.append(board)
            return
        next_letter = board.letters_left_over[0]
        last_position = board.indexSequence[-1]
        for row, column in board.getValidCoordinates(board, last_position):
            new_board = Board.newFromBoard(board, row*BOARD_SIDE_LENGTH+column)
            self.createAll(new_board)
And use it like this:
words = ['margays', 'jaguars', 'cougars', 'tomcats', 'margay', 'jaguar', 'cougar', 'pumas', 'puma']
words.sort(key=len)
first_word = words.pop()
# generate all boards for the first word
overlaid_boards = []
for i in range(BOARD_SIDE_LENGTH*BOARD_SIDE_LENGTH):
    test_board = Board()
    test_board.setup(first_word, i)
    generator = boardgen()
    generator.createAll(test_board)
    overlaid_boards += generator.boards
This is an interesting problem. I can't quite come up with a full, optimized solution, but here are some ideas you might try.
The hard part is the requirement to find the optimal subset if you can't fit all the words in. That's going to add a lot to the complexity.

Start by eliminating word combinations that obviously aren't going to work. Cut any words with >16 letters. Count the number of unique letters needed, and be sure to take into account letters repeated in the same word. For example, if the list includes "eagle", I don't think you are allowed to use the same 'e' for both ends of the word. If your list of needed letters is >16, you have to drop some words. Figuring out which ones to cut first is an interesting sub-problem... I'd start with the words containing the least-used letters. It might help to have all sub-lists sorted by score.
Then you can do the trivial cases where the total of word lengths is <16. After that, you start with the full list of words and see if there's a solution for that. If not, figure out which word(s) to drop and try again.
Given a word list, then, the core algorithm is to find a grid (if one exists) that contains all of those words.
The dumb brute-force way would be to iterate over all the grids possible with the letters you need, and test each one to see if all your words fit. It's pretty harsh though: the middle case is 16! ≈ 2×10^13 boards. The exact formula for n unique letters is (16!/(16-n)!) × n^(16-n), which gives a worst case in the range of 3×10^16. Not very manageable.
Even if you can avoid rotations and flips, that only cuts the search space by a factor of 8.
A somewhat smarter greedy algorithm would be to sort the words by some criteria, like difficulty or length. A recursive solution would be to take the top word remaining on the list, and attempt to place it on the grid. Then recurse with that grid and the remaining word list. If you fill up the grid before you run out of words, then you have to back track and try another way of placing the word. A greedier approach would be to try placements that re-use the most letters first.
You can do some pruning. If at any point the number of spaces left in the grid is less than the remaining set of unique letters needed, then you can eliminate those sub-trees. There are a few other cases where it's obvious there's no solution that can be cut, especially when the remaining grid spaces are < the length of the last word.
The search space for this depends on word lengths and how many letters are re-used. I'm sure it's better than brute-force, but I don't know if it's enough to make the problem reasonable.
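To make the greedy/backtracking idea concrete, here is a rough standalone sketch with the pruning check folded in. All names here are mine rather than from your Board class; it assumes substring words were already removed, and it returns the first full placement it finds rather than an optimal subset:

BOARD_SIDE = 4
N_CELLS = BOARD_SIDE * BOARD_SIDE

def neighbors(pos):
    r, c = divmod(pos, BOARD_SIDE)
    for nr in range(r - 1, r + 2):
        for nc in range(c - 1, c + 2):
            if 0 <= nr < BOARD_SIDE and 0 <= nc < BOARD_SIDE and (nr, nc) != (r, c):
                yield nr * BOARD_SIDE + nc

def placements(grid, word, pos, used):
    # Yield every grid produced by laying word along adjacent cells from pos,
    # re-using matching letters and filling blanks ('') as we go.
    if grid[pos] not in ('', word[0]):
        return                      # cell holds a different letter
    new_grid = grid[:]
    new_grid[pos] = word[0]
    if len(word) == 1:
        yield new_grid
        return
    for nxt in neighbors(pos):
        if nxt not in used:
            yield from placements(new_grid, word[1:], nxt, used | {nxt})

def solve(grid, words):
    if not words:
        return grid
    # Pruning: not enough blank cells for the distinct letters still missing.
    missing = set(''.join(words)) - set(grid)
    if grid.count('') < len(missing):
        return None
    for start in range(N_CELLS):
        for g in placements(grid, words[0], start, {start}):
            result = solve(g, words[1:])
            if result is not None:
                return result
    return None  # dead end: backtrack

# Usage sketch: longest words first, starting from an empty 4x4 grid.
# solution = solve([''] * N_CELLS, sorted(words, key=len, reverse=True))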
The smart way would be to use some form of dynamic programming. I can't quite see the complete algorithm for this. One idea is to have a tree or graph of the letters, connecting each letter to "adjacent" letters in the word list. Then you start with the most-connected letter and try to map the tree onto the grid. Always place the letter that completes the most of the word list. There'd have to be some way of handling the case of multiple of the same letter in the grid. And I'm not sure how to order it so you don't have to search every combination.
The best thing would be to have a dynamic algorithm that also included all the sub word lists. So if the list had "fog" and "fox", and fox doesn't fit but fog does, it would be able to handle that without having to run the whole thing on both versions of the list. That's adding complexity because now you have to rank each solution by score as you go. But in the cases where all the words won't fit it would save a lot of time.
Good luck on this.
There are a couple of general ideas for speeding up backtrack search you could try:
1) Early checks. It usually helps to discard partial solutions that can never work as early as possible, even at the cost of more work. Consider all two-character sequences produced by chopping up the words you are trying to fit in - e.g. PUMAS contributes PU, UM, MA, and AS. These must all be present in the final answer. If a partial solution does not have enough overlapped two-character spaces free to contain all of the overlapped two-character sequences it does not yet have, then it cannot be extended to a final answer.
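Collecting those two-character sequences is cheap; a sketch of just that bookkeeping step (the feasibility test against a partial board is left out):

words = ["margays", "jaguars", "cougars", "tomcats", "pumas"]
bigrams = {w[i:i + 2] for w in words for i in range(len(w) - 1)}
print(sorted(bigrams))  # "pumas" alone contributes 'pu', 'um', 'ma', 'as'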
2) Symmetries. I think this is probably most useful if you are trying to prove that there is no solution. Given one way of filling in a board, you can rotate and reflect that solution to find other ways of filling in a board. If you have 68K starting points and one starting point is a rotation or reflection of another starting point, you don't need to try both, because if you can (or could) solve the problem from one starting point you can get the answer from the other starting point by rotating or reflecting the board. So you might be able to divide the number of starting points you need to try by some integer.
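Concretely, a 4x4 board has 8 symmetries (4 rotations, each optionally mirrored). One way to deduplicate starting points is to map every board to a canonical representative; a sketch assuming boards are lists of 16 cells in row-major order:

def rotate(board):
    # 90-degree clockwise rotation of a 4x4 board stored row-major.
    return [board[(3 - c) * 4 + r] for r in range(4) for c in range(4)]

def mirror(board):
    # Left-right reflection.
    return [board[r * 4 + (3 - c)] for r in range(4) for c in range(4)]

def symmetries(board):
    results = []
    for _ in range(4):
        board = rotate(board)
        results.append(board)
        results.append(mirror(board))
    return results  # 4 rotations x (plain, mirrored) = 8 boards

def canonical(board):
    # Smallest symmetry (as a tuple) represents the whole equivalence class,
    # so boards that are rotations/reflections of each other compare equal.
    return min(tuple(b) for b in symmetries(board))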
This problem is not the only one to have a large number of alternatives at each stage. This also affects the traveling salesman problem. If you can accept not having a guarantee that you will find the absolute best answer, you could try not following up the least promising of these 68k choices. You need some sort of score to decide which to keep - you might wish to keep those which use as many letters already in place as possible. Some programs for the traveling salesman problems discard unpromising links between nodes very early. A more general approach which discards partial solutions rather than doing a full depth first search or branch and bound is Limited Discrepancy Search - see for example http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.34.2426.
Of course some approaches to the TSP discard tree search completely in favor of some sort of hill-climbing approach. You might start off with a filled boggle square and repeatedly attempt to find your words in it, modifying a few characters in order to force them in, trying to find steps which successively increase the number of words that can be found in the square. The easiest form of hill-climbing is repeated simple hill-climbing from multiple random starts. Another approach is to restart the hill-climbing by randomizing only a portion of the solution so far - since you don't know the best size of portion to randomize you might decide to chose the size of portion to randomize at random, so that at least some fraction of the time you are randomizing the correct size of region to produce a new square to start from. Genetic algorithms and simulated annealing are very popular here. A paper on a new idea, Late Acceptance Hill-Climbing, also describes some of its competitors - http://www.cs.nott.ac.uk/~yxb/LAHC/LAHC-TR.pdf
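For a flavor of the simplest variant, here is a bare-bones hill-climbing sketch; word_on_board is an ordinary Boggle-style DFS check (nothing specific to the papers above), and grid letters and words are assumed to be upper case:

import random
import string

def word_on_board(grid, word):
    # Ordinary Boggle check: DFS over adjacent cells without reusing a cell.
    def step(pos, rest, used):
        if not rest:
            return True
        r, c = divmod(pos, 4)
        for nr in range(r - 1, r + 2):
            for nc in range(c - 1, c + 2):
                n = nr * 4 + nc
                if 0 <= nr < 4 and 0 <= nc < 4 and n not in used \
                        and grid[n] == rest[0] and step(n, rest[1:], used | {n}):
                    return True
        return False
    return any(grid[p] == word[0] and step(p, word[1:], {p}) for p in range(16))

def score(grid, words):
    return sum(word_on_board(grid, w) for w in words)

def hill_climb(words, steps=20000):
    grid = [random.choice(string.ascii_uppercase) for _ in range(16)]
    best = score(grid, words)
    for _ in range(steps):
        pos = random.randrange(16)
        old = grid[pos]
        grid[pos] = random.choice(string.ascii_uppercase)  # mutate one cell
        new = score(grid, words)
        if new >= best:
            best = new        # accept improvements and sideways moves
        else:
            grid[pos] = old   # revert a worsening move
    return grid, best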
I've done a lot of Googling, but haven't found anything, so I'm really sorry if I'm just searching for the wrong things.
I am writing an implementation of the game Ghost for MIT's Introduction to Programming, assignment 5.
As part of this, I need to determine whether a string of characters is the start of any valid word. I have a list of valid words ("wordlist").
Update: I could use something that iterated through the list each time, such as Peter's simple suggestion:
def word_exists(wordlist, word_fragment):
    return any(w.startswith(word_fragment) for w in wordlist)
I previously had:
wordlist = [w for w in wordlist if w.startswith(word_fragment)]
(from here) to narrow the list down to the list of valid words that start with that fragment and consider it a loss if wordlist is empty. The reason that I took this approach was that I (incorrectly, see below) thought that this would save time, as subsequent lookups would only have to search a smaller list.
It occurred to me that this is going through each item in the original wordlist (38,000-odd words) checking the start of each. This seems silly when wordlist is ordered, and the comprehension could stop once it hits something that is after the word fragment. I tried this:
newlist = []
for w in wordlist:
    if w[:len(word_fragment)] > word_fragment:
        # Take advantage of the fact that the list is sorted
        break
    if w.startswith(word_fragment):
        newlist.append(w)
return newlist
but that is about the same speed, which I thought may be because list comprehensions run as compiled code?
I then thought that more efficient again would be some form of binary search in the list to find the block of matching words. Is this the way to go, or am I missing something really obvious?
Clearly it isn't really a big deal in this case, but I'm just starting out with programming and want to do things properly.
UPDATE:
I have since tested the below suggestions with a simple test script. While Peter's binary search/bisect would clearly be better for a single run, I was interested in whether the narrowing list would win over a series of fragments. In fact, it did not:
The totals for all strings "p", "py", "pyt", "pyth", "pytho" are as follows:
In total, Peter's simple test took 0.175472736359
In total, Peter's bisect left test took 9.36985015869e-05
In total, the list comprehension took 0.0499348640442
In total, Neil G's bisect took 0.000373601913452
The overhead of creating a second list etc. clearly took more time than searching the longer list. In hindsight, searching the full list each time was likely the better approach regardless, as the "reducing list" approach increased the time for the first run, which was the worst-case scenario.
Thanks all for some excellent suggestions, and well done Peter for the best answer!!!
Generator expressions are evaluated lazily, so if you only need to determine whether or not your word is valid, I would expect the following to be more efficient, since it doesn't build the full list; it can stop as soon as it finds a match:
def word_exists(wordlist, word_fragment):
    return any(w.startswith(word_fragment) for w in wordlist)
Note that the lack of square brackets is important for this to work.
However this is obviously still linear in the worst case. You're correct that binary search would be more efficient; you can use the built-in bisect module for that. It might look something like this:
from bisect import bisect_left

def word_exists(wordlist, word_fragment):
    try:
        return wordlist[bisect_left(wordlist, word_fragment)].startswith(word_fragment)
    except IndexError:
        return False  # word_fragment is greater than all entries in wordlist
bisect_left runs in O(log(n)) so is going to be considerably faster for a large wordlist.
Edit: I would guess that the example you gave loses out if your word_fragment is something really common (like 't'), in which case it probably spends most of its time assembling a large list of valid words, and the gain from only having to do a partial scan of the list is negligible. Hard to say for sure, but it's a little academic since binary search is better anyway.
You're right that you can do this more efficiently given that the list is sorted.
I'm building off of @Peter's answer, which returns a single element. I see that you want all the words that start with a given prefix. Here's how you do that:
from bisect import bisect_left
wordlist[bisect_left(wordlist, word_fragment):
         bisect_left(wordlist, word_fragment[:-1] + chr(ord(word_fragment[-1]) + 1))]
This returns the slice from your original sorted list.
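For example, on a small sorted list (the end of the slice works by bumping the fragment's last character to the next code point, so it lands just past every word with that prefix):

from bisect import bisect_left

wordlist = ["ab", "abc", "bc", "bcf", "bct", "cft", "k"]
word_fragment = "bc"
lo = bisect_left(wordlist, word_fragment)
hi = bisect_left(wordlist, word_fragment[:-1] + chr(ord(word_fragment[-1]) + 1))
print(wordlist[lo:hi])  # ['bc', 'bcf', 'bct']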
As Peter suggested, I would use the bisect module, especially if you're reading from a large file of words.
If you really need speed, you could make a daemon (see "How do you create a daemon in Python?") that has a pre-processed data structure suited for the task.
I suggest you could use "tries"
http://www.topcoder.com/tc?module=Static&d1=tutorials&d2=usingTries
There are many algorithms and data structures to index and search strings inside a text; some of them are included in the standard libraries, but not all of them. The trie data structure is a good example of one that isn't.

Let word be a single string and let dictionary be a large set of words. If we have a dictionary, and we need to know if a single word is inside the dictionary, tries are a data structure that can help us. But you may be asking yourself, "Why use tries if set and hash tables can do the same?" There are two main reasons:

Tries can insert and find strings in O(L) time (where L represents the length of a single word). This is much faster than a set, and a bit faster than a hash table.

A set or a hash table can only find words in a dictionary that match exactly the single word we are looking for; a trie allows us to find words that differ by a single character, share a common prefix, have a character missing, etc.

Tries can be useful in TopCoder problems, but also have a great number of applications in software engineering. For example, consider a web browser. Do you know how the web browser can autocomplete your text, or show you many possibilities for the text you could be writing? Yes, with a trie you can do it very fast. Do you know how a spell checker can verify that every word you type is in a dictionary? Again, a trie. You can also use a trie to suggest corrections for words that are present in the text but not in the dictionary.
An example would be:
start={'a':nodea,'b':nodeb,'c':nodec...}
nodea={'a':nodeaa,'b':nodeab,'c':nodeac...}
nodeb={'a':nodeba,'b':nodebb,'c':nodebc...}
etc..
Then if you want all the words starting with "ab", you would just traverse start['a']['b'] and that would be all the words you want.
To build it, you could iterate through your wordlist and, for each word, iterate through its characters, adding a new defaultdict where required.
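A compact way to do that is a recursively nested defaultdict. This sketch marks word ends with a sentinel key and walks a prefix to collect completions (all names here are illustrative):

from collections import defaultdict

def make_trie():
    return defaultdict(make_trie)

END = '$'  # sentinel key marking the end of a complete word

def insert(trie, word):
    node = trie
    for ch in word:
        node = node[ch]  # defaultdict creates missing child nodes on demand
    node[END] = word

def words_with_prefix(trie, prefix):
    node = trie
    for ch in prefix:
        if ch not in node:
            return []    # no word starts with this prefix
        node = node[ch]
    # Collect every word stored below this node.
    found, stack = [], [node]
    while stack:
        n = stack.pop()
        for key, child in n.items():
            if key == END:
                found.append(child)
            else:
                stack.append(child)
    return found

trie = make_trie()
for w in ["ab", "abc", "bc"]:
    insert(trie, w)
print(words_with_prefix(trie, "ab"))  # ['ab', 'abc'] (order may vary)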
In case of binary search (assuming wordlist is sorted), I'm thinking of something like this:
wordlist = "ab", "abc", "bc", "bcf", "bct", "cft", "k", "l", "m"
fragment = "bc"
a, m, b = 0, 0, len(wordlist) - 1
iterations = 0
while True:
    if (a + b) // 2 == m: break  # endless loop = nothing found
    m = (a + b) // 2
    iterations += 1
    if wordlist[m].startswith(fragment): break  # found word
    if wordlist[m] > fragment >= wordlist[a]: a, b = a, m
    elif wordlist[b] >= fragment >= wordlist[m]: a, b = m, b
if wordlist[m].startswith(fragment):
    print wordlist[m], iterations
else:
    print "Not found", iterations
It will find one matched word, or none. You will then have to look to the left and right of it to find other matched words. My algorithm might be incorrect; it's just a rough version of my thoughts.
Here's my fastest way to narrow the list wordlist down to a list of valid words starting with a given fragment:
sect() is a generator function that uses Peter's excellent idea to employ bisect, plus the islice() function:
from bisect import bisect_left
from itertools import islice
from time import clock

A, B = [], []
iterations = 5
repetition = 10

with open('words.txt') as f:
    wordlist = f.read().split()

wordlist.sort()
print 'wordlist[0:10]==', wordlist[0:10]

def sect(wordlist, word_fragment):
    lgth = len(word_fragment)
    for w in islice(wordlist, bisect_left(wordlist, word_fragment), None):
        if w[0:lgth] == word_fragment:
            yield w
        else:
            break

def hooloo(wordlist, word_fragment):
    usque = len(word_fragment)
    for w in wordlist:
        if w[:usque] > word_fragment:
            break
        if w.startswith(word_fragment):
            yield w

for rep in xrange(repetition):
    te = clock()
    for i in xrange(iterations):
        newlistA = list(sect(wordlist, 'VEST'))
    A.append(clock() - te)

    te = clock()
    for i in xrange(iterations):
        newlistB = list(hooloo(wordlist, 'VEST'))
    B.append(clock() - te)

print '\niterations =', iterations, ' number of tries:', repetition, '\n'
print newlistA, '\n', min(A), '\n'
print newlistB, '\n', min(B), '\n'
result
wordlist[0:10]== ['AA', 'AAH', 'AAHED', 'AAHING', 'AAHS', 'AAL', 'AALII', 'AALIIS', 'AALS', 'AARDVARK']
iterations = 5 number of tries: 30
['VEST', 'VESTA', 'VESTAL', 'VESTALLY', 'VESTALS', 'VESTAS', 'VESTED', 'VESTEE', 'VESTEES', 'VESTIARY', 'VESTIGE', 'VESTIGES', 'VESTIGIA', 'VESTING', 'VESTINGS', 'VESTLESS', 'VESTLIKE', 'VESTMENT', 'VESTRAL', 'VESTRIES', 'VESTRY', 'VESTS', 'VESTURAL', 'VESTURE', 'VESTURED', 'VESTURES']
0.0286089433154
['VEST', 'VESTA', 'VESTAL', 'VESTALLY', 'VESTALS', 'VESTAS', 'VESTED', 'VESTEE', 'VESTEES', 'VESTIARY', 'VESTIGE', 'VESTIGES', 'VESTIGIA', 'VESTING', 'VESTINGS', 'VESTLESS', 'VESTLIKE', 'VESTMENT', 'VESTRAL', 'VESTRIES', 'VESTRY', 'VESTS', 'VESTURAL', 'VESTURE', 'VESTURED', 'VESTURES']
0.415578236899
sect() is 14.5 times faster than hooloo().
PS: I know timeit exists, but here, for a result like this, clock() is fully sufficient.
Doing binary search in the list is not going to guarantee you anything; I am not sure how that would work either.
You have a list which is ordered, which is good news. The algorithmic complexity of both your cases is O(n), which is not bad: you just have to iterate through the whole wordlist once.
But in the second case, the practical performance should be better, because you break as soon as you find that the remaining cases cannot apply. Try a list where the 1st element matches and the remaining 38,000 - 1 elements do not, and you will see the second beat the first.
I am curious what the most efficient (or most commonly used) algorithm is for counting the number of occurrences of a string in a chunk of text.
From what I read, the Boyer-Moore string search algorithm is the standard for string searches, but I am not sure whether counting occurrences efficiently is the same problem as searching for a string.
In Python this is what I want:
text_chunk = "one two three four one five six one"
occurrence_count(text_chunk, "one")  # gives 3
EDIT: It seems like Python's str.count serves as such a method; however, I am not able to find which algorithm it uses.
For starters, yes, you can accomplish this with Boyer-Moore very efficiently. However, depending on some other parameters of your problem, there might be a better solution.
The Aho-Corasick string matching algorithm will find all occurrences of a set of pattern strings in a target string and does so in time O(m + n + z), where m is the length of the string to search, n is the combined length of all the patterns to match, and z is the total number of matches produced. This is linear in the size of the source and target strings if you just have one string to match. It also will find overlapping occurrences of the same string. Moreover, if you want to check how many times a set of strings appears in some source string, you only need to make one call to the algorithm. On top of this, if the set of strings that you want to search for never changes, you can do the O(n) work as preprocessing time and then find all matches in O(m + z).
If, on the other hand, you have one source string and a rapidly-changing set of substrings to search for, you may want to use a suffix tree. With O(m) preprocessing time on the string that you will be searching in, you can, in O(n) time per substring, check how many times a particular substring of length n appears in the string.
Finally, if you're looking for something you can code up easily and with minimal hassle, you might want to consider looking into the Rabin-Karp algorithm, which uses a rolling hash function to find strings. This can be coded up in roughly ten to fifteen lines of code, has no preprocessing time, and for normal text strings (lots of text with few matches) can find all matches very quickly.
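For reference, a bare-bones Rabin-Karp sketch in Python, counting overlapping matches; the base and modulus here are arbitrary choices, not canonical constants:

def rabin_karp_count(text, pattern):
    n, m = len(text), len(pattern)
    if m == 0 or n < m:
        return 0
    base, mod = 256, 1000000007
    high = pow(base, m - 1, mod)  # weight of the window's leading character
    target = window = 0
    for i in range(m):
        target = (target * base + ord(pattern[i])) % mod
        window = (window * base + ord(text[i])) % mod
    count = 0
    for i in range(n - m + 1):
        # Only compare the strings directly when the hashes agree.
        if window == target and text[i:i + m] == pattern:
            count += 1
        if i < n - m:
            # Roll the window: drop text[i], append text[i + m].
            window = ((window - ord(text[i]) * high) * base + ord(text[i + m])) % mod
    return count

print(rabin_karp_count("one two three four one five six one", "one"))  # 3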
Hope this helps!
Boyer-Moore would be a good choice for counting occurrences, since it has some overhead that you would only need to do once. It does better the longer the pattern string is, so for "one" it would not be a good choice.
If you want to count overlaps, start the next search one character after the previous match. If you want to ignore overlaps, start the next search the full pattern string length after the previous match.
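In Python the difference is just how far you advance after each hit; note that the built-in str.count counts non-overlapping occurrences. A small sketch using str.find:

def count_substring(text, pattern, overlapping=False):
    count, start = 0, 0
    step = 1 if overlapping else len(pattern)
    while True:
        start = text.find(pattern, start)
        if start == -1:
            return count
        count += 1
        start += step  # advance 1 to count overlaps, len(pattern) to skip them

print(count_substring("aaaa", "aa"))                    # 2, same as "aaaa".count("aa")
print(count_substring("aaaa", "aa", overlapping=True))  # 3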
If your language has an indexOf or strpos method for finding one string in another, you can use that. If it proves too slow, then choose a better algorithm.
Hellnar,
You can use a simple dictionary to count occurrences in a string. The algorithm is a counting algorithm; here is an example:
"""
The counting algorithm is used to count the occurences of a character
in a string. This allows you to compare anagrams and strings themselves.
ex. animal, lamina a=2,n=1,i=1,m=1
"""
def count_occurences(str):
occurences = {}
for char in str:
if char in occurences:
occurences[char] = occurences[char] + 1
else:
occurences[char] = 1
return occurences
def is_matched(s1,s2):
matched = True
s1_count_table = count_occurences(s1)
for char in s2:
if char in s1_count_table and s1_count_table[char]>0:
s1_count_table[char] -= 1
else:
matched = False
break
return matched
#counting.is_matched("animal","laminar")
This example just returns True or False depending on whether the strings match. Keep in mind that this algorithm counts the number of times each character shows up in a string, which is good for anagrams.