Python find longest ORF in DNA sequence

Can someone show me a straightforward solution for how to calculate the longest open reading frame (ORF) in a DNA sequence? ATG is the start codon (i.e., the beginning of an ORF) and TAG, TGA, and TAA are stop codons (i.e., the end of an ORF).
Here's some code that produces errors (and uses an external module called BioPython):
import sys
from Bio import SeqIO

currentCid = ''
buffer = []
for record in SeqIO.parse(open(sys.argv[1]), "fasta"):
    cid = str(record.description).split('.')[0][1:]
    if currentCid == '':
        currentCid = cid
    else:
        if cid != currentCid:
            buffer.sort(key=lambda x: len(x[1]))
            print '>' + buffer[-1][0]
            print buffer[-1][1]
            currentCid = cid
            buffer = [(str(record.description), str(record.seq))]
        else:
            buffer.append((str(record.description), str(record.seq)))

buffer.sort(key=lambda x: len(x[1]))
print '>' + buffer[-1][0]
print buffer[-1][1]
Is it possible to write this procedure with as few external dependencies as possible (or at least to get the above code to work)?
Here's what my input looks like:
ACCGCCGCGAACATCGCCGAGATCCTGCCGCCGCAGCCGAGCCGGCTGGTCGAGTATGCGCAACGACGCG
CGTCCGGCAGCATCCCGGCGATCATGGCGCGCTGGGATGCACGCGTACTGCAGGACAACGAACCATTCAC
CGCAGTCTATGGCGGCGCGTCGTACATCAACAACGACCTGTTCCTCGCCCGCCTCGCCGACTGGGGCGTG
TCGGCCGGCAACTACAGCGGCGAGATCGGCGGCGCGACACCGCCGCTGCGCTGGCGCCCGCTGCGGCTGC
TGCGTTCGCTGCCGGTGTTCTGGCGCATGCTGCGTGTCGCGCGCGGGCACCTGCCGACGCTCGAGCGCGG
CTTGCAGCGCTTCGACCAGGAACTCGCGACGCTCGTCGAGCGACGCGCCGACGGCCAGCAACTGGCCGAC
TGGTTCACGCGCTTCTACGTGTTCGTCGTGCAGGGCAACCTGTGCATCGCGTCGTCGCTGGCCAGCAGCG
GCGGCGCACTGTGGGGCCGTCCGCCGACCGCATACGGCCAGCTCGACGACAGCCCGCACCGGCTGCCGTG
GGAAACCGATCCGGGCACCGCACGGCCCGCGCCCACCCACCTGCCGCTGCAGGCGTTTCCCGCCTGGCCG
CTGCCGGTCCGCGTGCTCCACGCGCTCGGCGCGCCCGGCATGCGCGGCTGGTATCTGCAGGTGCGCGAGT
GGTATCGCGACAACCTGATGCGCGTGTTCTTCCGCCTGCATCATGCGATGCCGGCCGCCGATCGCGACAC
GTGGTTCGCGCCCCATCCCGATCGCCGCGAACGCAACGGCAGCTTCTGGCAGGACGGCGGCGAAGGCACC
GACGAGGCAGCCGGCTTCATGATCTATCCGGGCCACACGCAAGGCGTGCTCGGCCACGACATCCTGCTGG
AAGACACGCTCGACCCGGGCCGGCACGCGCAGTACCAGGCCGCGCGCGCCGTGATCGCGCGCATGGGCGG
CCGGCTGTCGCACGGCGCGACGCTGCTGCGCGAGCTGCGCAAGCCGTCGGCCGTGCTGCCGCGCGTCGAT
GCGGCGTGGATCGGGCGCGAGGTGCGGCTCAGCGACGGCCAGCTGACGCTGGTCGAATGAACGCGATGCG
GTTGCCGCGCACCCGAGCACGGGCCCGGGCCTGAACTGCCGATCAGCGTACCGGCGTGCGGACGACTCCG
TCGACCTTCAGCGTGCGCCGGTCGTGCGCGGCTTCGTATTCGACCGTCTGCGCAGGCGTGACGGCGCCGT
ATGAATGGCCGTTCACGTAGACGGTGCCGTCCCGCAGCTCGACCCGGTCGCCGTTGACCGTCGCTGTGGC
CCGTTCACCCTGCAGCACCGCGCCCGAACAACCTGCAGTCGAAAAACTGCGGACCGACGTGCCCGGCATC
GCGGCGATCCCGCCCTGGTCCGCCGCATGCGCCGCGCTGCACGGCGGCGCATCCATGCTGCCGGCAGCGT
GGACCGCGCCGGCGCTGATGCCGCATCCGGCAAGCAGCGCAATCGTCATCGGCTTCAGATGGTTCATGGT
GAGCTCCGTTGTCCGCCGCCGCGGATCGATGACCGGCCGACGCCCGTGCTCGCATGGCAGGCCGGCCGGC
CGGATGCATCCAGTATGCGTCCGGTTCGCGGCATTCCGCCATCGTCGCCGATACCGCTCATCGCCGCCCG
GTTCGCTCCCGCAGCGGCCTCTGGAAGCACCTCCCGCGGGGCAACCCGTCCCCATGAAAATCCACCTTGA
TCAAGTTGCGACTCGCAACTATTATTGATTGCGATCCGCAACCTTTCCGGACCCGCCATGGACCTCATCG
ACGCTCCCGCCAAGCCCCGCGAAGCCACGATCCTCGAGCTGCGCGACTTCTCCCGCAAACTGGTTCGCGA
GCTCGGCTTCATGCGCGCGACGCTGGCCGACAGCGACTGGGCGCCTT
My output should be:
The longest substring that begins with ATG (i.e., the start of an ORF) and ends with either TAG, TGA, or TAA as stop codons (i.e., the end of an ORF).

You should look into regular expressions:
import re
max(re.findall(r'ATG(?:(?!TAA|TAG|TGA)...)*(?:TAA|TAG|TGA)',s), key = len)
There is a good tutorial here that focuses on the use of regular expressions with DNA strings.
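For example, here is a minimal, self-contained sketch of using that pattern (the short sequence below is made up for illustration; in practice s would be your sequence read from the file with newlines removed):
import re

# Made-up example sequence; in practice, read the FASTA data into a single
# string and strip the newlines before searching.
s = "CCATGAAATGACCCATGTTTTAAGG"

# Find every in-frame ORF (ATG ... stop) and keep the longest one.
orfs = re.findall(r'ATG(?:(?!TAA|TAG|TGA)...)*(?:TAA|TAG|TGA)', s)
if orfs:
    print(max(orfs, key=len))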

Since BioPython is a well-established and widely available module that's specifically designed for these sorts of questions, there's little reason to avoid it and reinvent the wheel. That said, it is useful to use a regex to identify the start codons:
from Bio import Seq
import regex as re

startP = re.compile('ATG')
nuc = input_seq.replace('\n', '')
longest = (0,)
for m in startP.finditer(nuc, overlapped=True):
    if len(Seq.Seq(nuc)[m.start():].translate(to_stop=True)) > longest[0]:
        pro = Seq.Seq(nuc)[m.start():].translate(to_stop=True)
        longest = (len(pro),
                   m.start(),
                   str(pro),
                   nuc[m.start():m.start() + len(pro) * 3 + 3])
Note that this uses the regex module, not the re module; the former allows easier identification of overlapping matches. We can let BioPython count triplets and look for stop codons, rather than try to labor through regexes to do that.
Here, longest yields the length of the protein encoded by the ORF, the start site (note, using 0-based numbering), the protein sequence encoded by the ORF, and the sequence of the ORF itself, including the stop codon.
(338,
93,
'MARWDARVLQDNEPFTAVYGGASYINNDLFLARLADWGVSAGNYSGEIGGATPPLRWRPLRLLRSLPVFWRMLRVARGHLPTLERGLQRFDQELATLVERRADGQQLADWFTRFYVFVVQGNLCIASSLASSGGALWGRPPTAYGQLDDSPHRLPWETDPGTARPAPTHLPLQAFPAWPLPVRVLHALGAPGMRGWYLQVREWYRDNLMRVFFRLHHAMPAADRDTWFAPHPDRRERNGSFWQDGGEGTDEAAGFMIYPGHTQGVLGHDILLEDTLDPGRHAQYQAARAVIARMGGRLSHGATLLRELRKPSAVLPRVDAAWIGREVRLSDGQLTLVE',
'ATGGCGCGCTGGGATGCACGCGTACTGCAGGACAACGAACCATTCACCGCAGTCTATGGCGGCGCGTCGTACATCAACAACGACCTGTTCCTCGCCCGCCTCGCCGACTGGGGCGTGTCGGCCGGCAACTACAGCGGCGAGATCGGCGGCGCGACACCGCCGCTGCGCTGGCGCCCGCTGCGGCTGCTGCGTTCGCTGCCGGTGTTCTGGCGCATGCTGCGTGTCGCGCGCGGGCACCTGCCGACGCTCGAGCGCGGCTTGCAGCGCTTCGACCAGGAACTCGCGACGCTCGTCGAGCGACGCGCCGACGGCCAGCAACTGGCCGACTGGTTCACGCGCTTCTACGTGTTCGTCGTGCAGGGCAACCTGTGCATCGCGTCGTCGCTGGCCAGCAGCGGCGGCGCACTGTGGGGCCGTCCGCCGACCGCATACGGCCAGCTCGACGACAGCCCGCACCGGCTGCCGTGGGAAACCGATCCGGGCACCGCACGGCCCGCGCCCACCCACCTGCCGCTGCAGGCGTTTCCCGCCTGGCCGCTGCCGGTCCGCGTGCTCCACGCGCTCGGCGCGCCCGGCATGCGCGGCTGGTATCTGCAGGTGCGCGAGTGGTATCGCGACAACCTGATGCGCGTGTTCTTCCGCCTGCATCATGCGATGCCGGCCGCCGATCGCGACACGTGGTTCGCGCCCCATCCCGATCGCCGCGAACGCAACGGCAGCTTCTGGCAGGACGGCGGCGAAGGCACCGACGAGGCAGCCGGCTTCATGATCTATCCGGGCCACACGCAAGGCGTGCTCGGCCACGACATCCTGCTGGAAGACACGCTCGACCCGGGCCGGCACGCGCAGTACCAGGCCGCGCGCGCCGTGATCGCGCGCATGGGCGGCCGGCTGTCGCACGGCGCGACGCTGCTGCGCGAGCTGCGCAAGCCGTCGGCCGTGCTGCCGCGCGTCGATGCGGCGTGGATCGGGCGCGAGGTGCGGCTCAGCGACGGCCAGCTGACGCTGGTCGAATGA')
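For reference, a minimal sketch of how input_seq might be obtained with Bio.SeqIO (the filename below is a placeholder, not from the question):
from Bio import SeqIO

# Hypothetical: "input.fasta" is a placeholder name for the question's FASTA file.
for record in SeqIO.parse("input.fasta", "fasta"):
    input_seq = str(record.seq)
    # ...run the ORF search above on input_seq...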

Check this out:
https://www.kaggle.com/xiangma/orf-finder?scriptVersionId=6709465
As shown in the link above, there are two methods to do this:
Please note that I set a minimum ORF length (1300 bp in the code below); you can adjust it to your needs.
First one:
from Bio import SeqIO

records = SeqIO.parse('dna2.fasta', 'fasta')
for record in records:
    for strand, seq in (1, record.seq), (-1, record.seq.reverse_complement()):
        for frame in range(3):
            length = 3 * ((len(seq) - frame) // 3)
            for pro in seq[frame:frame + length].translate(table=1).split("*")[:-1]:
                if 'M' in pro:
                    orf = pro[pro.find('M'):]
                    pos = seq[frame:frame + length].translate(table=1).find(orf) * 3 + frame + 1
                    if len(orf) * 3 + 3 > 1300:
                        print("{}...{} - length {}, strand {}, frame {}, pos {}, name {}".format(
                            orf[:3], orf[-3:], len(orf) * 3 + 3, strand, frame, pos, record.id))
Second one, which uses a regex:
from Bio import SeqIO
import re

records = SeqIO.parse('dna2.fasta', 'fasta')
for record in records:
    for strand, seq in (1, record.seq), (-1, record.seq.reverse_complement()):
        for frame in range(3):
            index = frame
            while index < len(record) - 6:
                match = re.match(r'(ATG(?:\S{3})*?T(?:AG|AA|GA))', str(seq[index:]))
                if match:
                    orf = match.group()
                    index += len(orf)
                    if len(orf) > 1300:
                        pos = str(record.seq).find(orf) + 1
                        print("{}...{} - length {}, strand {}, frame {}, pos {}, name {}".format(
                            orf[:6], orf[-3:], len(orf), strand, frame, pos, record.id))
                else:
                    index += 3

Related

How can I clean this data for easier visualizing?

I'm writing a program to read a set of data rows and quantify matching sets. I have the code below, but I would like to cut or filter out the trailing numbers that prevent rows from being recognized as a match.
import collections

a = "test.txt"  # This can be changed to a = input("What's the filename? ", )
line_file = open(a, "r")
print(line_file.readable())  # Readable check.
# print(line_file.read())  # Prints each individual line.

# Code for quantity counter.
counts = collections.Counter()  # Creates a new counter.
with open(a) as infile:
    for line in infile:
        for number in line.split():
            counts.update((number,))
for key, count in counts.items():
    print(f"{key}: x{count}")
line_file.close()
This is what it outputs; however, I'd like it to ignore the numbers at the end and group the matching sets accordingly.
A2-W-FF-DIN-22: x1
A2-FF-DIN: x1
A2-W-FF-DIN-11: x1
B12-H-BB-DD: x2
B12-H-BB-DD-77: x1
C1-GH-KK-LOP: x1
What I'm aiming for is for it to ignore the "-77" in these and instead count the total as x3:
B12-H-BB-DD: x2
B12-H-BB-DD-77: x1
Split each element on the dashes and check whether the last part is a number. If so, remove it, then continue on.
from collections import Counter

def trunc(s):
    parts = s.split('-')
    if parts[-1].isnumeric():
        return '-'.join(parts[:-1])
    return s

with open('data.txt') as f:
    data = [trunc(x.rstrip()) for x in f.readlines()]

counts = Counter(data)
for k, v in counts.items():
    print(k, v)
Output
A2-W-FF-DIN 2
A2-FF-DIN 1
B12-H-BB-DD 3
C1-GH-KK-LOP 1
You could use a regular expression to create a matching group for a digit suffix. If each number is its own string, e.g. "A2-W-FF-DIN-11", then a regular expression like (?P<base>.+?)(?:-(?P<suffix>\d+))?\Z could work.
Here, (?P<base>.+?) is a non-greedy match of any character except for a newline grouped under the name "base", (?:-(?P<suffix>\d+))? matches 0 or 1 occurrences of something like -11 occurring at the end of the "base" group and puts the digits in a group named "suffix", and \Z is the end of the string.
This is what it does in action:
>>> import re
>>> regex = re.compile(r"(?P<base>.+?)(?:-(?P<suffix>\d+))?\Z")
>>> regex.match("A2-W-FF-DIN-11").groupdict()
{'base': 'A2-W-FF-DIN', 'suffix': '11'}
>>> regex.match("A2-W-FF-DIN").groupdict()
{'base': 'A2-W-FF-DIN', 'suffix': None}
So you can see that, in this instance, whether or not the string has a digit suffix, the base is the same.
All together, here's a self-contained example of how it might be applied to data like this:
import collections
import re

regex = re.compile(r"(?P<base>.+?)(?:-(?P<suffix>\d+))?\Z")

sample_data = [
    "A2-FF-DIN",
    "A2-W-FF-DIN-11",
    "A2-W-FF-DIN-22",
    "B12-H-BB-DD",
    "B12-H-BB-DD",
    "B12-H-BB-DD-77",
    "C1-GH-KK-LOP"
]

counts = collections.Counter()

# Iterates through the data and updates the counter.
for datum in sample_data:
    # Isolates the base of the number from any digit suffix.
    number = regex.match(datum)["base"]
    counts.update((number,))

# Prints each number and prints how many instances were found.
for key, count in counts.items():
    print(f"{key}: x{count}")
For which the output is
A2-FF-DIN: x1
A2-W-FF-DIN: x2
B12-H-BB-DD: x3
C1-GH-KK-LOP: x1
Or in the example code you provided, it might look like this:
import collections
import re

# Compiles a regular expression to match the base and suffix
# of a number in the file.
regex = re.compile(r"(?P<base>.+?)(?:-(?P<suffix>\d+))?\Z")

a = "test.txt"
line_file = open(a, "r")
print(line_file.readable())  # Readable check.

# Creates a new counter.
counts = collections.Counter()
with open(a) as infile:
    for line in infile:
        for number in line.split():
            # Isolates the base match of the number.
            counts.update((regex.match(number)["base"],))
for key, count in counts.items():
    print(f"{key}: x{count}")
line_file.close()

How can I pull out text snippets around specific words?

I have a large txt file and I'm trying to pull out every instance of a specific word, as well as the 15 words on either side. I'm running into a problem when there are two instances of that word within 15 words of each other, which I'm trying to get as one large snippet of text.
I'm trying to get chunks of text to analyze about a specific topic. So far, I have working code for all instances except the scenario mentioned above.
def occurs(word1, word2, filename):
    import os
    infile = open(filename, 'r')  # opens file, reads, splits into lines
    lines = infile.read().splitlines()
    infile.close()
    wordlist = [word1, word2]  # this list allows for multiple words
    wordsString = ''.join(lines)  # joins the lines back into one string
    words = wordsString.split()  # splits the string into individual words
    f = open(filename, 'w')
    f.write("start")
    f.write(os.linesep)
    for word in wordlist:
        matches = [i for i, w in enumerate(words) if w.lower().find(word) != -1]
        for m in matches:
            l = " ".join(words[m-15:m+16])
            f.write(f"...{l}...")  # writes the data to the external file
            f.write(os.linesep)
    f.close()
So far, when two of the same word are too close together, the program just doesn't run on one of them. Instead, I want to get a longer chunk of text that extends 15 words behind the earliest occurrence and 15 words in front of the latest one.
This snippet will get the given number of words around the chosen keyword. If several keywords occur close together, it will join them into one snippet:
s = '''xxx I have a large txt file and I'm xxx trying to pull out every instance of a specific word, as well as the 15 words on either side. I'm running into a problem when there are two instances of that word within 15 words of each other, which I'm trying to get as one large snippet of text.
I'm trying to xxx get chunks of text to analyze about a specific topic. So far, I have working code for all instances except the scenario mentioned above. xxx'''

words = s.split()
from itertools import groupby, chain
word = 'xxx'

def get_snippets(words, word, l):
    snippets, current_snippet, cnt = [], [], 0
    for v, g in groupby(words, lambda w: w != word):
        w = [*g]
        if v:
            if len(w) < l:
                current_snippet += [w]
            else:
                current_snippet += [w[:l] if cnt % 2 else w[-l:]]
                snippets.append([*chain.from_iterable(current_snippet)])
                current_snippet = [w[-l:] if cnt % 2 else w[:l]]
                cnt = 0
            cnt += 1
        else:
            if current_snippet:
                current_snippet[-1].extend(w)
            else:
                current_snippet += [w]
    if current_snippet[-1][-1] == word or len(current_snippet) > 1:
        snippets.append([*chain.from_iterable(current_snippet)])
    return snippets

for snippet in get_snippets(words, word, 15):
    print(' '.join(snippet))
Prints:
xxx I have a large txt file and I'm xxx trying to pull out every instance of a specific word, as well as the 15
other, which I'm trying to get as one large snippet of text. I'm trying to xxx get chunks of text to analyze about a specific topic. So far, I have working
topic. So far, I have working code for all instances except the scenario mentioned above. xxx
With the same data and a different length:
for snippet in get_snippets(words, word, 2):
    print(' '.join(snippet))
Prints:
xxx and I'm
I have xxx trying to
trying to xxx get chunks
mentioned above. xxx
As always, a variety of solutions are available here. A fun one would be a recursive wordFind, where it searches the next 15 words and, if it finds the target word, calls itself.
A simpler, though perhaps not efficient, solution would be to add words one at a time:
for m in matches:
    l = " ".join(words[m - 15:m])
    i = 1
    remaining = 15
    while remaining > 0 and m + i < len(words):
        l += " " + words[m + i]
        if words[m + i].lower() == word:
            remaining = 15  # target word found again: extend the window by another 15 words
        else:
            remaining -= 1
        i += 1
    f.write(f"...{l}...")  # writes the data to the external file
    f.write(os.linesep)
Or if you're wanting the subsequent uses to be removed...
bExtend = False
for m in matches:
    if not bExtend:
        l = " ".join(words[m - 15:m])
        f.write("...")
    bExtend = False
    i = 1
    while i < 16 and m + i < len(words):
        if words[m + i].lower() == word:
            l += " " + words[m + i]
            bExtend = True  # the next match will extend this snippet rather than start a new one
            break
        else:
            l += " " + words[m + i]
            i += 1
    f.write(l)
    if not bExtend:
        f.write("...")
    f.write(os.linesep)
Note: I have not tested this, so it may require a bit of debugging. But the gist is clear: add words piecemeal and extend the addition process when a target word is encountered. This also lets you extend on target words other than the current one with a small addition to the second conditional.
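For what it's worth, here is a minimal, untested sketch of the recursive idea mentioned at the top of this answer (snippet_end is a hypothetical helper, not from the question's code):
def snippet_end(words, word, m, window=15):
    """Return the index one past the end of the snippet that starts at match position m."""
    end = min(m + window + 1, len(words))
    for i in range(m + 1, end):
        if words[i].lower() == word:
            # The target word appears again inside the window: extend from there.
            return snippet_end(words, word, i, window)
    return end

# Usage sketch: words is the token list and matches holds the match positions from the question.
# for m in matches:
#     print(" ".join(words[max(m - 15, 0):snippet_end(words, word, m)]))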

Regex Python find everything between four characters

I have a string that holds data, and I want everything in between ({ and }).
"({Simple Data})"
Should return "Simple Data"
Or regex:
import re

s = '({Simple Data})'
print(re.search(r'\({([^})]+)', s).group(1))
Output:
'Simple Data'
You could try the following:
^\({(.*)}\)$
Group 1 will contain Simple Data.
See an example on regexr.
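For completeness, a minimal sketch of using that pattern from Python (reusing the question's example string):
import re

s = "({Simple Data})"
m = re.match(r"^\({(.*)}\)$", s)
if m:
    print(m.group(1))  # -> Simple Data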
If the brackets are always positioned at the beginning and the end of the string, then you can do this:
l = "({Simple Data})"
print(l[2:-2])
Which results in:
"Simple Data"
In Python you can access single characters and slices via the [] operator. Here l[2:-2] takes the sequence of characters starting with the third one (index 2) up to, but not including, the second-to-last (index -2).
You could try this regex (?s)\(\{(.*?)\}\)
which simply captures the contents between the delimiters.
Beware though, this doesn't account for nesting.
If nesting is a concern, the best you can do with the standard Python re engine
is to get the inner nest only, using this regex:
\(\{((?:(?!\(\{|\}\)).)*)\}\)
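A minimal sketch of both patterns in action (the nested example string below is made up for illustration):
import re

# Simple case (the question's string) with the first pattern:
print(re.search(r"(?s)\(\{(.*?)\}\)", "({Simple Data})").group(1))  # -> Simple Data

# Nested case (made-up string) with the inner-nest pattern:
s = "({outer ({Inner Data}) outer})"
print(re.search(r"\(\{((?:(?!\(\{|\}\)).)*)\}\)", s).group(1))  # -> Inner Data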
Here I designed a tokenizer aimed at nested data, which the OP may want to check out.
import collections
import re

Token = collections.namedtuple('Token', ['typ', 'value', 'line', 'column'])

def tokenize(code):
    token_specification = [
        ('DATA', r'[ \t]*[\w]+[\w \t]*'),
        ('SKIP', r'[ \t\f\v]+'),
        ('NEWLINE', r'\n|\r\n'),
        ('BOUND_L', r'\(\{'),
        ('BOUND_R', r'\}\)'),
        ('MISMATCH', r'.'),
    ]
    tok_regex = '|'.join('(?P<%s>%s)' % pair for pair in token_specification)
    line_num = 1
    line_start = 0
    lines = code.splitlines()
    for mo in re.finditer(tok_regex, code):
        kind = mo.lastgroup
        value = mo.group(kind)
        if kind == 'NEWLINE':
            line_start = mo.end()
            line_num += 1
        elif kind == 'SKIP':
            pass
        else:
            column = mo.start() - line_start
            yield Token(kind, value, line_num, column)

statements = '''
({Simple Data})
({
Parent Data Prefix
({Nested Data (Yes)})
Parent Data Suffix
})
'''

queue = collections.deque()
for token in tokenize(statements):
    if token.typ == 'DATA' or token.typ == 'MISMATCH':
        queue.append(token.value)
    elif token.typ == 'BOUND_L' or token.typ == 'BOUND_R':
        print(''.join(queue))
        queue.clear()
Output of this code should be:
Simple Data
Parent Data Prefix
Nested Data (Yes)
Parent Data Suffix

Python: Counting a specific set of character occurrences in lines of a file

I am struggling with a small program in Python which aims at counting the occurrences of a specific set of characters in the lines of a text file.
As an example, if I want to count '!' and '#' from the following lines
hi!
hello#gmail.com
collection!
I'd expect the following output:
!;2
#;1
So far I got a functional code, but it's inefficient and does not use the potential that Python libraries have.
I have tried using collections.Counter, with limited success. The efficiency blocker I found is that I couldn't select specific sets of characters in counter.update(); all the other characters found were also counted. Then I would have to filter out the characters I am not interested in, which adds another loop...
I also considered regular expressions, but I can't see an advantage in this case.
This is the functional code I have right now (the simplest idea I could imagine), which looks for special characters in file's lines. I'd like to see if someone can come up with a neater Python-specific idea:
def count_special_chars(filename):
    special_chars = list('!"#$%&\'()*+,-./:;<=>?#[\\]^_`{|}~ ')
    dict_count = dict(zip(special_chars, [0] * len(special_chars)))
    with open(filename) as f:
        for passw in f:
            for c in passw:
                if c in special_chars:
                    dict_count[c] += 1
    return dict_count
thanks for checking
Why not count the whole file all at once? You should avoid looping through the string for each line of the file; use str.count instead.
from pprint import pprint

# Better coding style: put constant out of the function
SPECIAL_CHARS = '!"#$%&\'()*+,-./:;<=>?#[\\]^_`{|}~ '

def count_special_chars(filename):
    with open(filename) as f:
        content = f.read()
    return dict([(i, content.count(i)) for i in SPECIAL_CHARS])

pprint(count_special_chars('example.txt'))
example output:
{' ': 0,
'!': 2,
'.': 1,
'#': 1,
'[': 0,
'~': 0
# the remaining keys with a value of zero are ignored
...}
Eliminating the extra counts from collections.Counter is probably not significant either way, but if it bothers you, do it during the initial iteration:
from collections import Counter
special_chars = '''!"#$%&'()*+,-./:;<=>?#[\\]^_`{|}~ '''
found_chars = [c for c in open(yourfile).read() if c in special_chars]
counted_chars = Counter(found_chars)
You need not process the file contents line by line; this avoids nested loops, which increase the complexity of your program.
If you want to count character occurrences in some string, first loop over the entire string once to construct an occurrence dict. Then you can look up the count of any character in that dict. This reduces the complexity of the program.
When constructing the occurrence dict, a defaultdict helps you initialize the count values.
A refactored version of the program is as below:
from collections import defaultdict

special_chars = list('!"#$%&\'()*+,-./:;<=>?#[\\]^_`{|}~ ')
dict_count = defaultdict(int)
with open(filename) as f:
    for c in f.read():
        dict_count[c] += 1
for c in special_chars:
    print('{0};{1}'.format(c, dict_count[c]))
ref. defaultdict Examples: https://docs.python.org/3.4/library/collections.html#defaultdict-examples
I did something like this, where you do not need to use the Counter library. I used it to count all the special characters, but you can adapt it to put the counts in a dict.
import re

def countSpecial(passwd):
    specialcount = 0
    for special in special_chars:
        lenght = 0
        # print special
        lenght = len(re.findall(r'(\%s)' % special, passwd))
        if lenght > 0:
            # print lenght, special
            specialcount = lenght + specialcount
    return specialcount
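A minimal usage sketch (assuming special_chars is defined as in the question; the example password string is made up):
special_chars = list('!"#$%&\'()*+,-./:;<=>?#[\\]^_`{|}~ ')

# Hypothetical example string; expect 4 here: two '$' and two '!' characters.
print(countSpecial("pa$$w0rd!!"))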

Python calculate ORFs from any arbitrary reading frame

I have a big fasta file in this format:
>gi|142022655|gb|EQ086233.1|522 marine metagenome JCVI_SCAF_1096627390048 genomic scaffold, whole genome shotgun sequence
AAGACGGGCACCGTGTCCTTCGCGACGTACTCCGACCAGTTGTACACGTTCAGGTTGGTGTCGCCGGCAT
GGGCCGACAGGCTGGCCGCGACGGCCAGCGCCGCCGACGTGACGCGCGCGGCGCGCAACGCCGATTGACG
ACGGATACGGATACGCATGGGGATTCTCCTTGTGATGGGGATCGGCCGTTGCGCCCGGTCCGGGTCCGGA
CTCGCGTCAACGCCGTCGAGCGGTGTTCAGCACAAGGGCCAATGTAGAGATCGCGGCCGGCAGCGTCAGT
CCCGAAAACCGGGACAAACGGCGACGTCGATTCCCGCCGTTTGGGTAGATTCCCGCGTAGGCAGTCGAAA
ATATTCGTGATACCTGTAGCGCCACCTGAAAATCTTCGATACACGACGCCATGAGCGCTGCGCTGCCCGC
CCCCGATCTTCCGCTGAGCCACGTCGCGTTCGTGACTGAAACGCTGGGCGACATCGCACAAGCCGTCGGA
ACGCCGCAGTTCATGCGCGCCGTCTACGACACGCTCGTGCGCTACGTCGATTTCGACGCCGTGCACCTCG
ACTACGAGCGCAGCGCGTCTTCCGGCCGGCGCAGCGTCGGCTGGATCGGCAGCTTCGGCCGCGAGCCCGA
GCTGGTCGCGCAGGTGATGCGCCACTACTACCGCAGCTACGCGAGCGACGATGCAACTTACGCGGCGATC
GAAACCGAAAACGACGTGCAATTGCTGCAGGTGTCCGCGCAACGCGTGTCGAGCGAGCTACGGCATCTGT
TCTTCGATGCCGGCGACATTCATGACGAATGCGTGATCGCCGGCGTGACGGGCGGCACGCGCTACTCGAT
CTCGATCGCGCGCTCACGGCGGCTGCCGCCGTTTTCGCTGAAGGAACTGAGCCTGCTGAAGCAGCTTTCG
CAAGTCGTGCTGCCGCTGGCGTCCGCGCACAAGCGCCTGCTCGGCGCGATCTCCGCCGACGACGCACCGC
GCGACGAACTCGATCTCGACCTCGTCGCGCAATGGCTGCCGGAATGGCAGGAACGGTTGACCGCGCGCGA
GATGCATGTGTGTGCGTCGTTCATCCAGGGCATGACGTCGGCGGCCATCGCCCAATCGATGGGGCTCAAG
ACCTCCACCGTCGATACCTACGCGAAGCGCGCCTTCGCGAAGCTCGGCGTCGATTCGCGAAGGCAACTGA
TGACCCTCGTGCTGAGAAACGCGTCGCGGCGGCATGACGCATAGCATCC
>gi|142022655|gb|EQ086233.1|598 marine metagenome JCVI_SCAF_1096627390048 genomic scaffold, whole genome shotgun sequence
TTGCCGCCGGCCGCAGCCGGCTTGGCACCACGCTGCGGCTGGTCGCCGGACTTCGGCTTCGCGCCGGTGT
CCGCCGGCGCTGCCGGCCGCTTCGCGTTGCGCTCCTGCTTGGCCTTCGCTGCGAGCTGCGCCCGCAATTC
GGCAAGTTGTTCAAAACCCATAAATTCAATCCACCAGGAATATAAGGTGTGGTTCGTGCGGCCATGCCGC
GCGGCGCACGAGCTTCGCCGCCATGCGTGCGACCCGTCTGCCGCCGATGCGGAATACTACGGGGCCGCAT
>gi|142022655|gb|EQ086233.1|143 marine metagenome JCVI_SCAF_1096627390048 genomic scaffold, whole genome shotgun sequence
CTGATGCGTGCGCGCGGCCGCCTGCAGCCAGCGCGTCAGTTCCGGCGCCGCCGCGCGGCTGTAGTTCAGCGCG
CCGCCGCGATCGACGGGCAGGTAATGGCCTTCGATGTCGATGCCGTCCGGCGGCGTGTTCGAGTTCGCGA
TCGAGCCGAACTTGCCGGTCTTGCGCGCCTCGACGTACGTGCCGTCGTCGACGTACTGGATCTTCAGGTC
GACGCCGAGCCGCTGCCGCGCCTGCGCCTGCAGCGCCTGCAGCAGCACGTCGCGCTGGTCGCGCACGGTC
I want to be able to find out the length of the longest open reading frame (ORF) appearing in reading frame 3 of any of the sequences?
So far, I have tried some code that lists out all the ORFs of one sequence, inputted as a string:
import re
from string import maketrans

pattern = re.compile(r'(?=(ATG(?:...)*?)(?=TAG|TGA|TAA))')

def revcomp(dna_seq):
    return dna_seq[::-1].translate(maketrans("ATGC", "TACG"))

def orfs(dna):
    return set(pattern.findall(dna) + pattern.findall(revcomp(dna)))

print orfs(Seq)
where Seq='''CTGATGCGTGCGCGCGGCCGCCTGCAGCCAGCGCGTCAGTTCCGGCGCCGCCGCGCGGCTGTAGTTCAGCGCGCCGCCGCGATCGACGGGCAGGTAATGGCCTTCGATGTCGATGCCGTCCGGCGGCGTGTTCGAGTTCGCGATCGAGCCGAACTTGCCGGTCTTGCGCGCCTCGACGTACGTGCCGTCGTCGACGTACTGGATCTTCAGGTCGACGCCGAGCCGCTGCCGCGCCTGCGCCTGCAGCGCCTGCAGCAGCACGTCGCGCTGGTCGCGCACGGTC''' Notice that this is the 3rd entry in the big fasta file format above.
My sample output to this is: set([]), so I am clearly doing something terribly wrong. My code doesn't even scale to multiple entries (i.e., it only takes a single DNA string, called Seq)
Can anyone point me in the right direction please?
EDIT:
N.B.: ATG is the start codon (i.e., the beginning of an ORF) and TAG, TGA, and TAA are stop codons (i.e., the end of an ORF).
EDITED 1: Completely rewritten to better match problem description.
I don't know the exact file format here, so am assuming it carries on the same way as the three sequences you show -- one sequence after another.
If I understand correctly, the reason you didn't see a match in the third sequence is that there actually isn't a match there. There are matches in the first two, though, and you will see them if you run this.
import re
import string

with open('dna.txt', 'rb') as f:
    data = f.read()

data = [x.split('\n', 1) for x in data.split('>')]
data = [(x[0], ''.join(x[1].split())) for x in data if len(x) == 2]

start, end = [re.compile(x) for x in 'ATG TAG|TGA|TAA'.split()]

revtrans = string.maketrans("ATGC", "TACG")

def get_longest(starts, ends):
    ''' Simple brute-force for now. Optimize later...
        Given a list of start locations and a list
        of end locations, return the longest valid
        string. Returns tuple (length, start position)
        Assume starts and ends are sorted correctly
        from beginning to end of string.
    '''
    results = {}
    # Use smallest end that is bigger than each start
    ends.reverse()
    for start in starts:
        for end in ends:
            if end > start and (end - start) % 3 == 0:
                results[start] = end + 3
    results = [(end - start, start) for
               start, end in results.iteritems()]
    return max(results) if results else (0, 0)

def get_orfs(dna):
    ''' Returns length, header, forward/reverse indication,
        and longest match (corrected if reversed)
    '''
    header, seqf = dna
    seqr = seqf[::-1].translate(revtrans)

    def readgroup(seq, group):
        return list(x.start() for x in group.finditer(seq))

    f = get_longest(readgroup(seqf, start), readgroup(seqf, end))
    r = get_longest(readgroup(seqr, start), readgroup(seqr, end))
    (length, index), s, direction = max((f, seqf, 'forward'), (r, seqr, 'reverse'))
    return length, header, direction, s[index:index + length]

# Process entire file
all_orfs = [get_orfs(x) for x in data]

# Put in groups of 3
all_orfs = zip(all_orfs[::3], all_orfs[1::3], all_orfs[2::3])

# Process each group of 3
for x in all_orfs:
    x = max(x)  # Only print longest in each group
    print(x)
    print('')
Your requirements aren't clear to me. In your question you say "I want to be able to find out the length of the longest open reading frame (ORF) appearing in reading frame 3 of any of the sequences?" In subsequent comments, you say "I'm only interested in the comprehensive calculation of each ORF from any arbitrary reading frame. They are ALL important to me."
Assuming that the latter is what you're after, here's a simple way to get all the ORFs from a collection of sequences in fasta format, using BioPython to look after much of the work.
import io  # Only needed because input is in string form
from Bio import Seq, SeqIO
import regex as re

startP = re.compile('ATG')

def get_orfs(nuc):
    orfs = []
    for m in startP.finditer(nuc, overlapped=True):
        pro = Seq.Seq(nuc)[m.start():].translate(to_stop=True)
        orfs.append(nuc[m.start():m.start() + len(pro) * 3 + 3])
    return orfs

for fasta in SeqIO.parse(io.StringIO(fasta_inputs), 'fasta'):
    header = fasta.description
    orfs = get_orfs(str(fasta.seq))
    print(header, orfs)
Notes:
Normally you'd read a fasta collection from a file. Since here it's in string format, we used io.StringIO to make it easily compatible with SeqIO.parse from BioPython
The get_orfs function finds ATGs and returns the ORF originating from each one. If you're also interested in frames 4 through 6, you'll need the reverse_complement of the sequence.
If you're only interested in the longest ORF from each fasta sequence, you could have the get_orfs function return (max(orfs, key=len))
It's marginally more difficult if you're only interested in ORFs starting with ATG in a specific frame (e.g. frame 3). The simplest approach there might be to simply find all ORFs from the frame and then discard those not starting with ATG.
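As a rough illustration of that last point, a minimal sketch that keeps only ORFs whose ATG lies in frame 3 (here taken to mean a 0-based start index with index % 3 == 2; get_orfs_frame3 is a hypothetical variant of the function above):
from Bio import Seq
import regex as re

startP = re.compile('ATG')

def get_orfs_frame3(nuc):
    """Return ORFs whose start codon falls in reading frame 3 (0-based index % 3 == 2)."""
    orfs = []
    for m in startP.finditer(nuc, overlapped=True):
        if m.start() % 3 != 2:
            continue  # this ATG is not in frame 3
        pro = Seq.Seq(nuc)[m.start():].translate(to_stop=True)
        orfs.append(nuc[m.start():m.start() + len(pro) * 3 + 3])
    return orfs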
