Enumeration of character sequence "permutations" (Python)

I have the following problem:
There are n = 20 characters in the sequence. For each position there is a predefined list of possible characters, which can contain 1 to m entries (where m is usually a single digit).
How can I enumerate all possible permutations efficiently?
Or, in essence, is there an existing library (numpy?) that could do this before I try writing it myself?

itertools.product seems to offer what I need. I just need to pass it a list of lists:
itertools.product(*positions)
where positions is a list of lists (e.g., which characters are allowed at which position).
In my case the available options for each position are few, often just one, which keeps the number of possibilities in check; be aware, though, that this might crash your application if too many combinations get generated.
I then build the final strings:
results = []
for s in itertools.product(*positions):
    results.append(''.join(s))
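As a minimal, hypothetical example of what positions might look like and what the product gives you:
import itertools

# Hypothetical example: the allowed characters for each of three positions
positions = [['A'], ['B', 'C'], ['D', 'E']]

results = [''.join(s) for s in itertools.product(*positions)]
print(results)  # ['ABD', 'ABE', 'ACD', 'ACE']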

Related

Create unique id of fixed length only using given symbols?

I am trying to see how I can create a set of unique IDs of a fixed length (say length 12) in Python, using a specific subset of all alphanumeric characters. The use case is that these IDs need to be read by people and referred to in printed documents, so I am trying to avoid the characters L, I, O and the digits 0, 1. I of course need to be able to generate a new ID as needed.
I looked into the UUID function in other answers but wasn't able to find a way to use it to meet my requirements. I've done a lot of searching, but apologies if this is a duplicate.
Edit: So far I have tried using UUID as described here, and also the hashids function, but could not figure out a way to do it with them. The next best solution I could come up with is to create a list of random strings and check each against all existing IDs, but that seems woefully inefficient.
For a set of characters to sample, you can use string.ascii_uppercase (A-Z) plus string.digits (0-9) and then remove the unwanted characters 'LIO01'. From there, random.choices will generate a sequence of length k, allowing repeated characters.
import string
import random

def unique_id(size):
    chars = list(set(string.ascii_uppercase + string.digits).difference('LIO01'))
    return ''.join(random.choices(chars, k=size))
>>> unique_id(12)
'HBFXXHWZ8349'
>>> unique_id(12)
'A7W5WK636BYN'
>>> unique_id(12)
'WJ2JBX924NVK'
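Note that random.choices alone does not guarantee uniqueness. A minimal sketch of one way to enforce it, assuming an in-memory used set (in practice you would persist this to a database or file):
used = set()

def new_unique_id(size=12):
    # Keep drawing random IDs until we find one that has not been handed out yet
    candidate = unique_id(size)
    while candidate in used:
        candidate = unique_id(size)
    used.add(candidate)
    return candidate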
You could use an iterator like itertools.combinations
import itertools
import string
valid_chars = set(string.ascii_lowercase + string.digits) - set('lio01')
# Probably would want to persist the used values in some sort of database/file
# instead of this
used = set()

unique_id_generator = itertools.combinations(valid_chars, 12)

generated = "".join(next(unique_id_generator))
while generated in used:
    generated = "".join(next(unique_id_generator))

# Once an unused value has been found, add it to the used set
# (or to whatever store you use to keep track)
used.add(generated)
This generator will continue to produce all possible combinations (without replacement) of the ASCII lowercase characters and digits, excluding the ones you mentioned. If you need uppercase, you can call .upper() on the result, and if you want to allow repeated characters, use itertools.combinations_with_replacement.
If 'xyz' should not be considered the same as 'xzy', take a look at itertools.permutations.
I bumped into a similar problem and the simplest solution I could think of is this one:
Answer
from secrets import token_urlsafe
id = ''.join([c for c in token_urlsafe(10) if c not in '-_OI0l'])[:5]
print(id) # 'a3HkR'
Explanation
token_urlsafe(10) returns a URL-safe string encoding 10 random bytes (about 13-14 characters drawn from [a-z, A-Z, 0-9, -, _])
if c not in '-_OI0l' removes the characters you don't want
[:5] takes just the first 5 characters, if 5 is the length you want, for example
Strengths
Readable
One-liner
Customizable
Can be highly secure if needed
Limitations
Uniqueness is not guaranteed; you can check uniqueness in other ways, or just pick as long an ID as needed so that randomness takes care of that for you.
The above example can create 459 165 024 different ids.
If you remove many characters, or if you want a longer ID, you have to make the number in token_urlsafe(number) bigger as well, otherwise too few characters survive the filtering and the resulting ID ends up shorter than intended.
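If you'd rather not hand-tune that number, here is a small sketch (the helper name readable_id is hypothetical) that keeps drawing until enough valid characters survive the filter:
from secrets import token_urlsafe

def readable_id(length=12, banned='-_OI0l'):
    # Keep appending filtered random characters until we have enough,
    # then cut the result down to the requested length.
    out = ''
    while len(out) < length:
        out += ''.join(c for c in token_urlsafe(length) if c not in banned)
    return out[:length]

print(readable_id(12))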

Is there a way to generate possible short forms?

Consider the string Building Centre. If asked to abbreviate this to fit a specific number of characters, you and I may choose very different but equally valid representations. For instance, three valid 7-character representations are:
BLD CNT
BLD CTR
BLDNGCT
These are generated by:
Using only letters that exist in the string (you can't abbreviate using z)
Using them in the order they appear (LBD is not valid, since L does not come before B in Building)
Selecting up to as many characters (including spaces) as indicated
I'm looking to write a breadth-first or depth-first search based algorithm to generate all such short forms for a given string and desired length.
Before I go about writing the script, I am wondering if something similar has already been implemented. If not, how would you suggest I write something like this? Besides itertools, are there any useful libraries?
Yes, this can be beautifully done with itertools:
import itertools
text = 'Building Centre'
length = 7
shorts = [''.join(short) for short in itertools.combinations(text, length)]
print(shorts) # 6435 different versions!
Note that itertools.combinations does indeed preserve the order. You may want to check out the docs.
Edit
If short forms with fewer than length characters should be allowed as well, you can use
shorts = list(itertools.chain(*((''.join(short) for short in itertools.combinations(text, l))
                                for l in range(1, length + 1))))
As stated in the comments, some short forms get generated twice. To fix this, use e.g. shorts = list(set(shorts)).
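Putting those two points together, here is a sketch that generates every unique short form of length 1 up to length in one go:
import itertools

text = 'Building Centre'
length = 7

# A set comprehension removes duplicate short forms automatically
shorts = {''.join(short)
          for l in range(1, length + 1)
          for short in itertools.combinations(text, l)}
print(len(shorts))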

Most efficient way to check if any substrings in list are in another list of strings

I have two lists, one of words and another of character combinations. What would be the fastest way to return only the combinations that don't match anything in the word list?
I've tried to make it as streamlined as possible, but it's still very slow when the combinations are 3 characters long (it goes up to 290 seconds for 4 characters; I'm not even going to try 5).
Here's some example code. Currently I'm joining all the words into one string and then searching that string for each list value.
#Sample of stuff
allCombinations = ["a","aa","ab","ac","ad"]
allWords = ["testing", "accurate" ]
#Do the calculations
allWordsJoined = ",".join( allWords )
invalidCombinations = set( i for i in allCombinations if i not in allWordsJoined )
print invalidCombinations
#Result: set(['aa', 'ab', 'ad'])
I'm just curious if there's a better way to do this with sets? With combinations of 3 letters there are 18,278 list items to search for, and for 4 letters that goes up to 475,254, so currently my method isn't really fast enough, especially when the joined word string is about 1 million characters.
set.intersection seems like a very useful method if you need the whole string, so surely there must be something similar to search for a substring.
The first thing that comes to mind is that you can optimize the lookup by checking the current combination against combinations that are already known to be invalid: if ab is invalid, then anything starting with ab will be invalid too, and there's no point checking those.
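Here is a small sketch of that pruning idea (the names is_invalid and invalid_prefixes are mine, and it assumes combinations are checked in order of increasing length):
invalid_prefixes = set()

def is_invalid(combo, haystack):
    # If any shorter prefix of this combination is already known to be
    # invalid, the combination itself cannot occur either.
    if any(combo[:k] in invalid_prefixes for k in range(1, len(combo))):
        return True
    if combo not in haystack:
        invalid_prefixes.add(combo)
        return True
    return False

allWordsJoined = ",".join(["testing", "accurate"])
invalid = {c for c in ["a", "aa", "ab", "ac", "ad"] if is_invalid(c, allWordsJoined)}
print(invalid)  # contains 'aa', 'ab', 'ad'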
And one more thing: try using
invalidCombinations = set()
for i in allCombinations:
    if i not in allWordsJoined:
        invalidCombinations.add(i)
instead of
invalidCombinations = set(i for i in allCombinations if i not in allWordsJoined)
I'm not sure, but fewer memory allocations can be a small boost on a real data run.
Checking whether a set contains an item is O(1). You would still have to iterate through your list of combinations to compare against your original set of words (with some exceptions: if your word doesn't have "a", it's not going to have any other combination that contains "a" either; you could use some tree-like data structure to exploit this).
You shouldn't convert your word list to a string, but rather to a set. You should get O(N), where N is the number of combinations.
Also, I like Python, but it isn't the fastest of languages. If this is the only task you need to do, it needs to be very fast, and you can't improve the algorithm, you might want to check out other languages. You should be able to prototype something very easily to get an idea of the difference in speed between languages.
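One way to get those O(1) membership checks (a sketch of my own, not from the answers above) is to pre-extract every substring of the relevant lengths from the word list into a set, so that each combination check becomes a single set lookup instead of a scan over a million-character string:
def substrings_up_to(words, max_len):
    # Collect every substring of length 1..max_len that occurs in any word
    seen = set()
    for w in words:
        for k in range(1, max_len + 1):
            for i in range(len(w) - k + 1):
                seen.add(w[i:i + k])
    return seen

allWords = ["testing", "accurate"]
allCombinations = ["a", "aa", "ab", "ac", "ad"]

present = substrings_up_to(allWords, 2)
invalidCombinations = {c for c in allCombinations if c not in present}
print(invalidCombinations)  # contains 'aa', 'ab', 'ad'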

Memory error while solving an anagram

I am trying to solve the below question:
An anagram is a type of word play, the result of rearranging the letters of a word or phrase to produce a new word or phrase, using all the original letters exactly once; e.g., orchestra = carthorse. Using the word list at http://www.puzzlers.org/pub/wordlists/unixdict.txt, write a program that finds the sets of words that share the same characters that contain the most words in them.
It fails even when I read only 1000 bytes of the file. Also, a new list is created on every iteration, so why does Python keep the old list in memory? I am getting the error below.
l=list(map(''.join, itertools.permutations(i)))
gives me:
MemoryError
Here's my code:
import itertools

def anagram():
    f = open('unixdict.txt')
    f2 = open('result_anagram.txt', 'w')
    words = f.read(1000).split('\n')
    for i in words:
        l = []
        l = list(map(''.join, itertools.permutations(i)))
        l.remove(i)
        for anagram in l:
            if l == i:
                f2.write(i + "\n")
    return True

anagram()
I changed the above code as per the suggestion, but I am still getting the memory error:
import itertools

def anagram():
    f = open('unixdict.txt')
    f2 = open('result_anagram.txt', 'w')
    words = set(line.rstrip('\n') for line in f)
    for i in words:
        l = map(''.join, itertools.permutations(i))
        l = (x for x in l if x != i)
        for anagram in l:
            if anagram in words:
                f2.write(i + "\n")
    return True

anagram()
MemoryError
[Finished in 22.2s]
This program is going to be horribly inefficient no matter what you do.
But you can fix this MemoryError so it'll just take forever to run instead of failing.
First, note that a 12-letter word has 479,001,600 permutations. Storing all of those in memory is going to take more than 2GB of memory. So, how do you solve that? Just don't store them all in memory. Leave the iterator as an iterator instead of making a list, and then you'll only have to fit one at a time, instead of all of them.
There's one problem here: You're actually using that list in the if l==i: line. But clearly that's a mistake. There's no way that a list of strings can ever equal a single string. You might as well replace that line with raise TypeError, at which point you can just replace the whole loop and fail a whole lot faster. :)
I think what you wanted there is if anagram in words:. In which case you have no need for l, except for in the for loop, which means you can safely leave it as a lazy iterator:
for i in words:
    l = map(''.join, itertools.permutations(i))
    l = (x for x in l if x != i)
    for anagram in l:
        if anagram in words:
            f2.write(i + "\n")
I'm assuming Python 3.x here, since otherwise the list call was completely unnecessary. If you're using 2.x, replace that map with itertools.imap.
As a side note, f.read(1000) will usually cut off part of a word at the end, leaving the leftover part for the next loop. Try readlines. While it's useless with no argument, with an argument it's very useful:
Read and return a list of lines from the stream. hint can be specified to control the number of lines read: no more lines will be read if the total size (in bytes/characters) of all lines so far exceeds hint.
So, f.readlines(1000) will let you read buffers of about 1K at a time, without getting partial lines. Of course now, instead of having to split on newlines, you have to rstrip them:
words = [line.rstrip('\n') for line in f.readlines(1000)]
However, you've got another problem. If you're only reading about 100 words at a time, the chances of finding an anagram are pretty slim. For example, orchestra is not going to be anywhere near carthorse in the dictionary, so there's no way to find the pair unless you keep the entire file in memory. But that should be fine; a typical Unix dictionary like web2 has around 200K lines, so you can easily read it into memory and keep it around as a set without making even a dent in your 2GB. So:
words = set(line.rstrip('\n') for line in f)
Also, note that you're trying to print out every word in the dictionary that has an anagram (multiple times, if it has multiple anagrams). Even with an efficient algorithm, that's going to take a long time—and spew out more data than you could possibly want. A more useful program might be one that takes an input word (e.g., via input or sys.argv[1]) and outputs just the anagrams of that word.
Finally:
Even after using l as a generator it is taking too much time, though it no longer fails with a memory error. Can you explain the importance of words being a set rather than a list? [Finished in 137.4s] just for 200 bytes. You mentioned it before, but how do I overcome it by using words as a set?
As I said at the top, "This program is going to be horribly inefficient no matter what you do."
In order to find the anagrams of a 12-letter word, you're going through 479 million permutations, and checking each one against a dictionary of about 200 thousand words, so that's 479M * 200K = 95 trillion checks, for each word. There are two ways to improve this, the first involving using the right data structures for the job, and the second involving the right algorithms for the job.
Changing the collection of things to iterate over from a list into a generator (a lazy iterable) turns something that took linear space (479M strings) into something that takes constant space (some fixed-size iterator state, plus one string at a time). Similarly, changing the collection of words to check against from a list into a set turns something that takes linear time (comparing a string against every element in the list) into something that takes constant time (hashing a string, then seeing if there's anything in the set with that hash value). So, this gets rid of the * 200K part of your problem.
But you've still got the 479M part of the problem. And you can't make that go away with a better data structure. Instead, you have to rethink the problem. How can you check whether any permutation of a word matches any other words, without trying all the permutations?
Well, some permutation of the word X matches the word Y if and only if X and Y have the same letters. It doesn't matter what order the letters in X were in; if the collections of letters are the same, there is at least one matching permutation (or exactly one, depending on how you count duplicate letters), and if not, there are exactly zero. Duplicate letters do matter, though, so you can't just use set here. You could use some kind of multi-set (collections.Counter works), or, with very little loss in efficiency and a big gain in simplicity, you could just sort the letters. After all, if two words have the same letters in some arbitrary order, they have the same letters in the same order once they're both sorted.
Of course you need to know which words are anagrams, not just that there is an anagram, so you can't just look it up in a set of letter sets, you have to look it up in a dictionary that maps letter sets to words. For example, something like this:
import collections

lettersets = collections.defaultdict(set)
for word in words:
    lettersets[''.join(sorted(word))].add(word)
So now, to look up the anagrams for a word, all you have to do is:
anagrams = lettersets[''.join(sorted(word))]
Not only is that simple and readable, it's also constant-time.
And if you really want to print out the massive list of all anagrams of all words… well, that's easy too:
for _, words in lettersets.items():
    for word in words:
        print('{} is an anagram of {}'.format(word, ', '.join(words - {word})))
Now, instead of taking 479M*200K time to find anagrams for one word, or 479M*200K*200K time to find all anagrams for all words, it takes constant time to find anagrams for one word, or 200K time to find all anagrams for all words. (Of course there is 200K setup time added to the start to create the mapping, but spending 200K time up-front to save 200K, much less 479M*200K, time for each lookup is an obvious win.)
Things get a little trickier when you want to, e.g., find partial anagrams or sentence anagrams, but you want to follow the same basic principles: find data structures that let you do things in constant or logarithmic time instead of linear or worse, and find algorithms that don't require you to brute-force your way through an exponential or factorial number of candidates.
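Putting the whole approach together, here is a small self-contained sketch (it assumes a local unixdict.txt as in the question, and the function name find_anagram_sets is mine):
import collections

def find_anagram_sets(path='unixdict.txt'):
    with open(path) as f:
        words = set(line.rstrip('\n') for line in f)

    # Map each sorted-letter key to the set of words sharing those letters
    lettersets = collections.defaultdict(set)
    for word in words:
        lettersets[''.join(sorted(word))].add(word)

    # Keep only groups with more than one word (actual anagram sets),
    # largest groups first
    groups = [sorted(v) for v in lettersets.values() if len(v) > 1]
    return sorted(groups, key=len, reverse=True)

for group in find_anagram_sets()[:5]:
    print(group)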
import urllib

def anagram():
    f = urllib.urlopen('http://www.puzzlers.org/pub/wordlists/unixdict.txt')
    words = f.read().split('\n')
    d = {''.join(sorted(x)): [] for x in words}  # create dict with an empty list for each key
    for x in words:
        d[''.join(sorted(x))].append(x)
    max_len = max(len(v) for k, v in d.iteritems())
    for k, v in d.iteritems():
        if len(v) >= max_len:
            print v

anagram()
Output:
['abel', 'able', 'bale', 'bela', 'elba']
['alger', 'glare', 'lager', 'large', 'regal']
['angel', 'angle', 'galen', 'glean', 'lange']
['evil', 'levi', 'live', 'veil', 'vile']
['caret', 'carte', 'cater', 'crate', 'trace']
['elan', 'lane', 'lean', 'lena', 'neal']
Finished in 5.7 secs
Here's a hint on solving the problem: two strings are anagrams of each other if they have the same collection of letters. You can sort each word's letters (turning e.g. "orchestra" into "acehorrst") and then just check whether two words have the same sorted form. If they do, the original words must be anagrams of each other, since they have all the same letters (in a different order).
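For example, a quick check in the interpreter:
>>> ''.join(sorted('orchestra'))
'acehorrst'
>>> ''.join(sorted('orchestra')) == ''.join(sorted('carthorse'))
True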

Search for motifs with degenerate positions

I have a 15-mer nucleotide motif that uses degenerate nucleotide sequences. Example: ATNTTRTCNGGHGCN.
I would like to search a set of sequences for occurrences of this motif. However, my other sequences are exact sequences, i.e. they contain no ambiguity codes.
I have tried doing a for loop over the sequences to search for this, but I have not been able to do non-exact searches. The code I use is modeled after the code in the Biopython cookbook:
for pos, seq in m.instances.search(test_seq):
    print pos, seq
I would like to search for all possible exact instances of the non-exact 15-mer. Is there a function available, or would I have to define my own? (I'm okay doing the latter; I just wanted to triple-check with the world that I'm not duplicating someone else's efforts before going ahead. I have already browsed through what I thought were the relevant parts of the docs.)
Use Biopython's nt_search. It looks for a subsequence in a DNA sequence, expanding ambiguity codes to the possible nucleotides in that position. Example:
>>> from Bio import SeqUtils
>>> pat = "ATNTTRTCNGGHGCN"
>>> SeqUtils.nt_search("CCCCCCCATCTTGTCAGGCGCTCCCCCC", pat)
['AT[GATC]TT[AG]TC[GATC]GG[ACT]GC[GATC]', 7]
It returns a list whose first item is the expanded search pattern, followed by the positions of any matches.
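If you'd rather not depend on Biopython for this, an alternative sketch (my own, not part of the answer above) is to translate the ambiguity codes into a regular expression; the IUPAC_MAP here covers only the codes that appear in the example pattern:
import re

# Partial IUPAC ambiguity map, covering only the codes used in the example
IUPAC_MAP = {'A': 'A', 'C': 'C', 'G': 'G', 'T': 'T',
             'R': '[AG]', 'H': '[ACT]', 'N': '[ACGT]'}

def motif_to_regex(motif):
    return ''.join(IUPAC_MAP[c] for c in motif)

pattern = re.compile(motif_to_regex('ATNTTRTCNGGHGCN'))
seq = 'CCCCCCCATCTTGTCAGGCGCTCCCCCC'
for m in pattern.finditer(seq):
    print(m.start(), m.group())  # 7 ATCTTGTCAGGCGCT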
