I want to match a list of variables which look like directories, e.g.:
Same/Same2/Foot/Ankle/Joint/Actuator/Sensor/Temperature/Value=4.123
Same/Same2/Battery/Name=SomeString
Same/Same2/Home/Land/Some/More/Stuff=0.34
The number of "subdirectories" is variable, with an upper bound (9 in the example above).
I want to group every subdirectory except the 1st one which I named "Same" above.
The best I could come up with is:
^(?:([^/]+)/){4,8}([^/]+)=(.*)
It already matches 4-8 subdirectories, but only captures the last one. Why is that?
Is there a better solution using group quantifiers?
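In short: in Python's re module, as in most regex engines, a quantified capturing group keeps only the text of its last repetition, so each pass through {4,8} overwrites group 1. A quick demonstration:
import re

m = re.match(r'^(?:([^/]+)/){4,8}([^/]+)=(.*)',
             'Same/Same2/Home/Land/Some/More/Stuff=0.34')
print(m.groups())  # ('More', 'Stuff', '0.34') -- earlier repetitions are lost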
Edit: Solved. Will use split() instead.
import re

regx = re.compile(r'(?:(?<=\A)|(?<=/)).+?(?=/|\Z)')
for ss in ('Same/Same2/Foot/Ankle/Joint/Actuator/Sensor/Temperature/Value=4.123',
           'Same/Same2/Battery/Name=SomeString',
           'Same/Same2/Home/Land/Some/More/Stuff=0.34'):
    print ss
    print regx.findall(ss)
    print
Edit 1
Now that you have given more information on what you want to obtain ("Same/Same2/Battery/Name=SomeString becoming SAME2_BATTERY_NAME=SomeString"), better solutions can be proposed: either with a regex, or with split() plus replace().
import re
from os import sep

sep2 = r'\\' if sep == '\\' else '/'
pat = '^(?:.+?%s)(.+$)' % sep2
print 'pat==%s\n' % pat
ragx = re.compile(pat)

for ss in (r'Same\Same2\Foot\Ankle\Joint\Actuator\Sensor\Temperature\Value=4.123',
           r'Same\Same2\Battery\Name=SomeString',
           r'Same\Same2\Home\Land\Some\More\Stuff=0.34'):
    print ss
    print ragx.match(ss).group(1).replace(sep, '_')
    print ss.split(sep, 1)[1].replace(sep, '_')
    print
result
pat==^(?:.+?\\)(.+$)
Same\Same2\Foot\Ankle\Joint\Actuator\Sensor\Temperature\Value=4.123
Same2_Foot_Ankle_Joint_Actuator_Sensor_Temperature_Value=4.123
Same2_Foot_Ankle_Joint_Actuator_Sensor_Temperature_Value=4.123
Same\Same2\Battery\Name=SomeString
Same2_Battery_Name=SomeString
Same2_Battery_Name=SomeString
Same\Same2\Home\Land\Some\More\Stuff=0.34
Same2_Home_Land_Some_More_Stuff=0.34
Same2_Home_Land_Some_More_Stuff=0.34
Edit 2
Re-reading your comment, I realized that I hadn't taken into account that you want to uppercase the part of the string that lies before the '=' sign, but not the part after it.
Hence this new code, which exposes 3 methods that meet this requirement. Choose whichever you prefer:
import re
from os import sep

sep2 = r'\\' if sep == '\\' else '/'

pot = '^(?:.+?%s)(.+?)=([^=]*$)' % sep2
print 'pot==%s\n' % pot
rogx = re.compile(pot)

pet = '^(?:.+?%s)(.+?(?==[^=]*$))' % sep2
print 'pet==%s\n' % pet
regx = re.compile(pet)

for ss in (r'Same\Same2\Foot\Ankle\Joint\Sensor\Value=4.123',
           r'Same\Same2\Battery\Name=SomeString',
           r'Same\Same2\Ocean\Atlantic\North=',
           r'Same\Same2\Maths\Addition\2+2=4\Simple=ohoh'):
    print ss + '\n' + len(ss)*'-'
    print 'rogx groups '.rjust(32), rogx.match(ss).groups()
    a, b = ss.split(sep, 1)[1].rsplit('=', 1)
    print 'split split '.rjust(32), (a, b)
    print 'split split join upper replace %s=%s' % (a.replace(sep, '_').upper(), b)
    print 'regx split group '.rjust(32), regx.match(ss.split(sep, 1)[1]).group()
    print 'regx split sub '.rjust(32), \
        regx.sub(lambda x: x.group(1).replace(sep, '_').upper(), ss)
    print
result, on a Windows platform
pot==^(?:.+?\\)(.+?)=([^=]*$)
pet==^(?:.+?\\)(.+?(?==[^=]*$))
Same\Same2\Foot\Ankle\Joint\Sensor\Value=4.123
----------------------------------------------
rogx groups ('Same2\\Foot\\Ankle\\Joint\\Sensor\\Value', '4.123')
split split ('Same2\\Foot\\Ankle\\Joint\\Sensor\\Value', '4.123')
split split join upper replace SAME2_FOOT_ANKLE_JOINT_SENSOR_VALUE=4.123
regx split group Same2\Foot\Ankle\Joint\Sensor\Value
regx split sub SAME2_FOOT_ANKLE_JOINT_SENSOR_VALUE=4.123
Same\Same2\Battery\Name=SomeString
----------------------------------
rogx groups ('Same2\\Battery\\Name', 'SomeString')
split split ('Same2\\Battery\\Name', 'SomeString')
split split join upper replace SAME2_BATTERY_NAME=SomeString
regx split group Same2\Battery\Name
regx split sub SAME2_BATTERY_NAME=SomeString
Same\Same2\Ocean\Atlantic\North=
--------------------------------
rogx groups ('Same2\\Ocean\\Atlantic\\North', '')
split split ('Same2\\Ocean\\Atlantic\\North', '')
split split join upper replace SAME2_OCEAN_ATLANTIC_NORTH=
regx split group Same2\Ocean\Atlantic\North
regx split sub SAME2_OCEAN_ATLANTIC_NORTH=
Same\Same2\Maths\Addition\2+2=4\Simple=ohoh
-------------------------------------------
rogx groups ('Same2\\Maths\\Addition\\2+2=4\\Simple', 'ohoh')
split split ('Same2\\Maths\\Addition\\2+2=4\\Simple', 'ohoh')
split split join upper replace SAME2_MATHS_ADDITION_2+2=4_SIMPLE=ohoh
regx split group Same2\Maths\Addition\2+2=4\Simple
regx split sub SAME2_MATHS_ADDITION_2+2=4_SIMPLE=ohoh
I probably misunderstood what exactly you want to do, but here is how you would do it without regex:
for entry in list_of_vars:
    key, value = entry.split('=')
    key_components = key.split('/')
    if 4 <= len(key_components) <= 8:
        # here the actual work is done
        print "%s=%s" % ('_'.join(key_components[1:]).upper(), value)
Just use split?
>>> p='Same/Same2/Foot/Ankle/Joint/Actuator/Sensor/Temperature/Value=4.123'
>>> p.split('/')
['Same', 'Same2', 'Foot', 'Ankle', 'Joint', 'Actuator', 'Sensor', 'Temperature', 'Value=4.123']
Also, if you want that key/val pair you can do something like this...
>>> s = p.split('/')
>>> s[-1].split('=')
['Value', '4.123']
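Combining those two splits reproduces the rename asked about above in a few lines (a minimal sketch of the split-only route):
p = 'Same/Same2/Battery/Name=SomeString'
key, value = p.split('=', 1)
parts = key.split('/')
print('_'.join(parts[1:]).upper() + '=' + value)  # SAME2_BATTERY_NAME=SomeString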
A couple of variations on your theme. For one, I've always found regexen to be cryptic to the point of unmaintainable, so I wrote the pyparsing module. In my mind, I look at your code and think, "oh, it's a list of '/'-delimited strings, an '=' sign, and then some kind of rvalue." And that translates pretty directly into the pyparsing parser definition code. By adding a name here and there in the parser ("key" and "value", similar to named groups in regex), the output is pretty easily processed.
data="""\
Same/Same2/Foot/Ankle/Joint/Actuator/Sensor/Temperature/Value=4.123
Same/Same2/Battery/Name=SomeString
Same/Same2/Home/Land/Some/More/Stuff=0.34""".splitlines()
from pyparsing import Word, alphas, alphanums, nums, QuotedString, delimitedList
wd = Word(alphas, alphanums)
number = Word(nums+'+-', nums+'.').setParseAction(lambda t: float(t[0]))
rvalue = wd | number | QuotedString('"')
defn = delimitedList(wd, '/')('key') + '=' + rvalue('value')

for d in data:
    result = defn.parseString(d)
Second, I question your approach of defining all of those variable names. Creating variable names on the fly from your data is a well-recognized code smell (not necessarily bad, but you might really want to rethink this approach). Instead, I used a recursive defaultdict to create a navigable structure, so that you can easily do operations like "find all the entries that are sub-elements of 'Same2'" (in this case, 'Foot', 'Battery', and 'Home'). That kind of work is much harder when sifting through a collection of variable names found in locals(); it seems to me you would end up re-parsing those names just to reconstruct the key hierarchy.
from collections import defaultdict

class recursivedefaultdict(defaultdict):
    def __init__(self, attrFactory=int):
        self.default_factory = lambda: type(self)(attrFactory)
        self._attrFactory = attrFactory
    def __getattr__(self, attr):
        newval = self._attrFactory()
        setattr(self, attr, newval)
        return newval
table = recursivedefaultdict()

# parse each entry, and accumulate into hierarchical dict
for d in data:
    # use pyparsing parser, gives us key (list of names) and value
    result = defn.parseString(d)
    t = table
    for k in result.key[:-1]:
        t = t[k]
    t[result.key[-1]] = result.value
# recursive method to iterate over hierarchical dict
def showTable(t, indent=''):
    for k, v in t.items():
        print indent+k,
        if isinstance(v, dict):
            print
            showTable(v, indent+'  ')
        else:
            print v

showTable(table)
Prints:
Same
  Same2
    Foot
      Ankle
        Joint
          Actuator
            Sensor
              Temperature
                Value 4.123
    Battery
      Name SomeString
    Home
      Land
        Some
          More
            Stuff 0.34
If you are really set on defining those variable names, then adding some helpful parse actions to pyparsing will reformat the parsed data at parse time, so that it's directly processable afterwards:
wd = Word(alphas, alphanums)
number = Word(nums+'+-', nums+'.').setParseAction(lambda t: float(t[0]))
rvaluewd = wd.copy().setParseAction(lambda t: '"%s"' % t[0])
rvalue = rvaluewd | number | QuotedString('"')
defn = delimitedList(wd, '/')('key') + '=' + rvalue('value')

def joinNamesWithAllCaps(tokens):
    tokens["key"] = '_'.join(map(str.upper, tokens.key))
defn.setParseAction(joinNamesWithAllCaps)

for d in data:
    result = defn.parseString(d)
    print result.key, '=', result.value
Prints:
SAME_SAME2_FOOT_ANKLE_JOINT_ACTUATOR_SENSOR_TEMPERATURE_VALUE = 4.123
SAME_SAME2_BATTERY_NAME = "SomeString"
SAME_SAME2_HOME_LAND_SOME_MORE_STUFF = 0.34
(Note that this also encloses your SomeString value in quotes, so that the resulting assignment statement is valid Python.)
Related
I have a long string that is a phylogenetic tree and I want to do a very specific filtering.
(Esy#ESY15_g64743_DN3_SP7_c0:0.0726396855636,Aar#AA_maker7399_1:0.137507902808,((Spa#Tp2g18720:0.0318934795022,Cpl#CP2_g48793_DN3_SP8_c:0.0273465005242):9.05326020871e-05,(((Bst#Bostr_13083s0053_1:0.0332592496158,((Aly#AL8G21130_t1:0.0328569260951,Ath#AT5G48370_1:0.0391706378372):0.0205924636564,(Chi#CARHR183840_1:0.0954469923893,Cru#Carubv10026342m:0.0570981548016):0.00998579652059):0.0150356382287):0.0340484449097,(((Hco#scaff1034_g23864_DN3_SP8_c_TE35_CDS100:0.00823215335663,Hlo#DN13684_c0_g1_i1_p1:0.0085462978729):0.0144626717872,Hla#DN22821_c0_g1_i1_p1:0.0225079453622):0.0206478928557,Hse#DN23412_c0_g1_i3_p1:0.048590776459):0.0372829371381):0.00859075940423,(Esa#Thhalv10004228m:0.0378509854703,Aal#Aa_G102140_t1:0.0712272454125):1.00000050003e-06):0.00328120860999):0.0129090235079):0.0129090235079;
Basically every x#y is a species#gene_id information. What I am trying to do is trimming this down so that I will only have x instead of x#y.
(Esy, Aar,(Spa,Cpl))...
I tried splitting the string first, but the problem is that the string has different 'split points' for what I want to achieve, i.e. some x#y parts end with a ',' and others with a ')'. I searched for a solution and saw regular expression operations, but I am new to Python and I couldn't be sure if that is what I should be focusing on. I also thought about strip(), but it seems like I would need to specify the characters to be stripped for that.
The main problem is that there is no obvious 'pattern' for me to tell Python to follow. The only thing is that all species ids are 3 letters and come before a '#' character.
Is there a method that can do what I want? I will be really glad if you can help me out with my problem. Thanks in advance.
Give this a try:
import re
pat = re.compile(r'(\w{3})#')
txt = "(Esy#ESY15_g64743_DN3_SP7_c0:0.0726396855636,Aar#AA_maker7399_1:0.137507902808,((Spa#Tp2g18720:0.0318934795022,Cpl#CP2_g48793_DN3_SP8_c:0.0273465005242):9.05326020871e-05,(((Bst#Bostr_13083s0053_1:0.0332592496158,((Aly#AL8G21130_t1:0.0328569260951,Ath#AT5G48370_1:0.0391706378372):0.0205924636564,(Chi#CARHR183840_1:0.0954469923893,Cru#Carubv10026342m:0.0570981548016):0.00998579652059):0.0150356382287):0.0340484449097,(((Hco#scaff1034_g23864_DN3_SP8_c_TE35_CDS100:0.00823215335663,Hlo#DN13684_c0_g1_i1_p1:0.0085462978729):0.0144626717872,Hla#DN22821_c0_g1_i1_p1:0.0225079453622):0.0206478928557,Hse#DN23412_c0_g1_i3_p1:0.048590776459):0.0372829371381):0.00859075940423,(Esa#Thhalv10004228m:0.0378509854703,Aal#Aa_G102140_t1:0.0712272454125):1.00000050003e-06):0.00328120860999):0.0129090235079):0.0129090235079;"
pat.findall(txt)
Result:
['Esy', 'Aar', 'Spa', 'Cpl', 'Bst', 'Aly', 'Ath', 'Chi', 'Cru', 'Hco', 'Hlo', 'Hla', 'Hse', 'Esa', 'Aal']
If you need the structure intact, we can try to remove the unnecessary parts instead:
pat = re.compile(r'(#|:)[^/),]*')
pat.sub('', txt).replace(',', ', ')
Result:
'(Esy, Aar, ((Spa, Cpl), (((Bst, ((Aly, Ath), (Chi, Cru))), (((Hco, Hlo), Hla), Hse)), (Esa, Aal))))'
How about this kind of function:
def parse_string(string):
    new_string = ''
    skip = False
    for char in string:
        if char == '#':
            skip = True
        if char == ',':
            skip = False
        if not skip or char in ['(', ')']:
            new_string += char
    return new_string
Calling it on your string:
string = '(Esy#ESY15_g64743_DN3_SP7_c0:0.0726396855636,Aar#AA_maker7399_1:0.137507902808,((Spa#Tp2g18720:0.0318934795022,Cpl#CP2_g48793_DN3_SP8_c:0.0273465005242):9.05326020871e-05,(((Bst#Bostr_13083s0053_1:0.0332592496158,((Aly#AL8G21130_t1:0.0328569260951,Ath#AT5G48370_1:0.0391706378372):0.0205924636564,(Chi#CARHR183840_1:0.0954469923893,Cru#Carubv10026342m:0.0570981548016):0.00998579652059):0.0150356382287):0.0340484449097,(((Hco#scaff1034_g23864_DN3_SP8_c_TE35_CDS100:0.00823215335663,Hlo#DN13684_c0_g1_i1_p1:0.0085462978729):0.0144626717872,Hla#DN22821_c0_g1_i1_p1:0.0225079453622):0.0206478928557,Hse#DN23412_c0_g1_i3_p1:0.048590776459):0.0372829371381):0.00859075940423,(Esa#Thhalv10004228m:0.0378509854703,Aal#Aa_G102140_t1:0.0712272454125):1.00000050003e-06):0.00328120860999):0.0129090235079):0.0129090235079;'
parse_string(string)
> '(Esy,Aar,((Spa,Cpl),(((Bst,((Aly,Ath),(Chi,Cru))),(((Hco,Hlo),Hla),Hse)),(Esa,Aal))))'
You can use a regex:
import re
s = "(Esy#ESY15_g64743_DN3_SP7_c0:0.0726396855636,Aar#AA_maker7399_1:0.137507902808,((Spa#Tp2g18720:0.0318934795022,Cpl#CP2_g48793_DN3_SP8_c:0.0273465005242):9.05326020871e-05,(((Bst#Bostr_13083s0053_1:0.0332592496158,((Aly#AL8G21130_t1:0.0328569260951,Ath#AT5G48370_1:0.0391706378372):0.0205924636564,(Chi#CARHR183840_1:0.0954469923893,Cru#Carubv10026342m:0.0570981548016):0.00998579652059):0.0150356382287):0.0340484449097,(((Hco#scaff1034_g23864_DN3_SP8_c_TE35_CDS100:0.00823215335663,Hlo#DN13684_c0_g1_i1_p1:0.0085462978729):0.0144626717872,Hla#DN22821_c0_g1_i1_p1:0.0225079453622):0.0206478928557,Hse#DN23412_c0_g1_i3_p1:0.048590776459):0.0372829371381):0.00859075940423,(Esa#Thhalv10004228m:0.0378509854703,Aal#Aa_G102140_t1:0.0712272454125):1.00000050003e-06):0.00328120860999):0.0129090235079):0.0129090235079;"
p = "...?(?=#)|\(|\)"
result = re.findall(p, s)
and you have your result as a list, so you can join it back into a string or do anything else with it.
To explain what is happening:
p is the regular expression pattern. In this pattern:
. matches any single character.
...?(?=#) matches two or three characters that are immediately followed by a # — the (?=#) is a lookahead, so the # itself is not consumed. For this data, that picks up the three-letter species id in front of each #.
| is alternation ("or"), used here to combine that with further patterns.
\( and \) match the literal parentheses, so the tree structure is kept.
Try this regex if you need the brackets in the output:
import re
regex = r"#[A-Za-z0-9_\.:]+|[0-9:\.;e-]+"
phylogenetic_tree = "(Esy#ESY15_g64743_DN3_SP7_c0:0.0726396855636,Aar#AA_maker7399_1:0.137507902808,((Spa#Tp2g18720:0.0318934795022,Cpl#CP2_g48793_DN3_SP8_c:0.0273465005242):9.05326020871e-05,(((Bst#Bostr_13083s0053_1:0.0332592496158,((Aly#AL8G21130_t1:0.0328569260951,Ath#AT5G48370_1:0.0391706378372):0.0205924636564,(Chi#CARHR183840_1:0.0954469923893,Cru#Carubv10026342m:0.0570981548016):0.00998579652059):0.0150356382287):0.0340484449097,(((Hco#scaff1034_g23864_DN3_SP8_c_TE35_CDS100:0.00823215335663,Hlo#DN13684_c0_g1_i1_p1:0.0085462978729):0.0144626717872,Hla#DN22821_c0_g1_i1_p1:0.0225079453622):0.0206478928557,Hse#DN23412_c0_g1_i3_p1:0.048590776459):0.0372829371381):0.00859075940423,(Esa#Thhalv10004228m:0.0378509854703,Aal#Aa_G102140_t1:0.0712272454125):1.00000050003e-06):0.00328120860999):0.0129090235079):0.0129090235079;"
print(re.sub(regex,"",phylogenetic_tree))
Output:
(Esy,Aar,((Spa,Cpl),(((Bst,((Aly,Ath),(Chi,Cru))),(((Hco,Hlo),Hla),Hs)),(Esa,Aal))))
(Note the clipped "Hs": the second character class includes 'e' so that exponents like 9.05e-05 are stripped, but it also eats the trailing 'e' of 'Hse'.)
Because you are trying to parse a phylogenetic tree, I highly suggest to let BioPython do the heavy lifting for you.
You can easily parse and display a phylogenetic tree with Bio.Phylo. Then it is just a matter of iterating over all tree elements and splitting the names at the 'at' sign.
Because Phylo expects the input to be in a file, we create an in-memory file-like object with io.StringIO. Getting the complete tree is then as easy as
Phylo.read(io.StringIO(s), 'newick')
In order to check if the parsed tree looks sane, I print it once with print(tree).
Now we want to change all node names that contain a '#'. With tree.find_elements we get access to all nodes. Some nodes don't have a name and some might not contain a '#'. So to be extra careful, we first check if n.name and '#' in n.name. Only then do we split each node's name at the '#' and take just the first part (index 0) of it:
n.name = n.name.split('#')[0]
In order to recreate the initial string representation, we use Phylo.write:
out = io.StringIO()
Phylo.write(tree, out, "newick")
print(out.getvalue())
Again, write wants to get a file argument - if we just want to get a string, we can use a StringIO object again.
Full code:
import io
from Bio import Phylo

if __name__ == '__main__':
    s = '(Esy#ESY15_g64743_DN3_SP7_c0:0.0726396855636,Aar#AA_maker7399_1:0.137507902808,((Spa#Tp2g18720:0.0318934795022,Cpl#CP2_g48793_DN3_SP8_c:0.0273465005242):9.05326020871e-05,(((Bst#Bostr_13083s0053_1:0.0332592496158,((Aly#AL8G21130_t1:0.0328569260951,Ath#AT5G48370_1:0.0391706378372):0.0205924636564,(Chi#CARHR183840_1:0.0954469923893,Cru#Carubv10026342m:0.0570981548016):0.00998579652059):0.0150356382287):0.0340484449097,(((Hco#scaff1034_g23864_DN3_SP8_c_TE35_CDS100:0.00823215335663,Hlo#DN13684_c0_g1_i1_p1:0.0085462978729):0.0144626717872,Hla#DN22821_c0_g1_i1_p1:0.0225079453622):0.0206478928557,Hse#DN23412_c0_g1_i3_p1:0.048590776459):0.0372829371381):0.00859075940423,(Esa#Thhalv10004228m:0.0378509854703,Aal#Aa_G102140_t1:0.0712272454125):1.00000050003e-06):0.00328120860999):0.0129090235079):0.0129090235079;'

    tree = Phylo.read(io.StringIO(s), 'newick')
    print(' before '.center(20, '='))
    print(tree)

    for n in tree.find_elements():
        if n.name and '#' in n.name:
            n.name = n.name.split('#')[0]

    print(' result '.center(20, '='))
    out = io.StringIO()
    Phylo.write(tree, out, "newick")
    print(out.getvalue())
Output:
====== before ======
Tree(rooted=False, weight=1.0)
    Clade(branch_length=0.0129090235079)
        Clade(branch_length=0.0726396855636, name='Esy#ESY15_g64743_DN3_SP7_c0')
        Clade(branch_length=0.137507902808, name='Aar#AA_maker7399_1')
        Clade(branch_length=0.0129090235079)
            Clade(branch_length=9.05326020871e-05)
                Clade(branch_length=0.0318934795022, name='Spa#Tp2g18720')
                Clade(branch_length=0.0273465005242, name='Cpl#CP2_g48793_DN3_SP8_c')
            Clade(branch_length=0.00328120860999)
                Clade(branch_length=0.00859075940423)
                    Clade(branch_length=0.0340484449097)
                        Clade(branch_length=0.0332592496158, name='Bst#Bostr_13083s0053_1')
                        Clade(branch_length=0.0150356382287)
                            Clade(branch_length=0.0205924636564)
                                Clade(branch_length=0.0328569260951, name='Aly#AL8G21130_t1')
                                Clade(branch_length=0.0391706378372, name='Ath#AT5G48370_1')
                            Clade(branch_length=0.00998579652059)
                                Clade(branch_length=0.0954469923893, name='Chi#CARHR183840_1')
                                Clade(branch_length=0.0570981548016, name='Cru#Carubv10026342m')
                    Clade(branch_length=0.0372829371381)
                        Clade(branch_length=0.0206478928557)
                            Clade(branch_length=0.0144626717872)
                                Clade(branch_length=0.00823215335663, name='Hco#scaff1034_g23864_DN3_SP8_c_TE35_CDS100')
                                Clade(branch_length=0.0085462978729, name='Hlo#DN13684_c0_g1_i1_p1')
                            Clade(branch_length=0.0225079453622, name='Hla#DN22821_c0_g1_i1_p1')
                        Clade(branch_length=0.048590776459, name='Hse#DN23412_c0_g1_i3_p1')
                Clade(branch_length=1.00000050003e-06)
                    Clade(branch_length=0.0378509854703, name='Esa#Thhalv10004228m')
                    Clade(branch_length=0.0712272454125, name='Aal#Aa_G102140_t1')
====== result ======
(Esy:0.07264,Aar:0.13751,((Spa:0.03189,Cpl:0.02735):0.00009,(((Bst:0.03326,((Aly:0.03286,Ath:0.03917):0.02059,(Chi:0.09545,Cru:0.05710):0.00999):0.01504):0.03405,(((Hco:0.00823,Hlo:0.00855):0.01446,Hla:0.02251):0.02065,Hse:0.04859):0.03728):0.00859,(Esa:0.03785,Aal:0.07123):0.00000):0.00328):0.01291):0.01291;
The default format of Phylo uses fewer digits than in your original tree. In order to keep the numbers unchanged, just override the branch-length format string with a '%s':
Phylo.write(tree, out, "newick", format_branch_length="%s")
Parsing code can be hard to follow. TatSu lets you write readable parsing code by combining grammars and Python:
text = "(Esy#ESY15_g64743_DN3_SP7_c0:0.0726396855636,Aar#AA_maker7399_1:0.137507902808,((Spa#Tp2g18720:0.0318934795022,Cpl#CP2_g48793_DN3_SP8_c:0.0273465005242):9.05326020871e-05,(((Bst#Bostr_13083s0053_1:0.0332592496158,((Aly#AL8G21130_t1:0.0328569260951,Ath#AT5G48370_1:0.0391706378372):0.0205924636564,(Chi#CARHR183840_1:0.0954469923893,Cru#Carubv10026342m:0.0570981548016):0.00998579652059):0.0150356382287):0.0340484449097,(((Hco#scaff1034_g23864_DN3_SP8_c_TE35_CDS100:0.00823215335663,Hlo#DN13684_c0_g1_i1_p1:0.0085462978729):0.0144626717872,Hla#DN22821_c0_g1_i1_p1:0.0225079453622):0.0206478928557,Hse#DN23412_c0_g1_i3_p1:0.048590776459):0.0372829371381):0.00859075940423,(Esa#Thhalv10004228m:0.0378509854703,Aal#Aa_G102140_t1:0.0712272454125):1.00000050003e-06):0.00328120860999):0.0129090235079):0.0129090235079;"
import tatsu

grammar = r"""
start = things ';'
    ;
things = thing [ ',' things ]
    ;
thing = x '#' y ':' number
    | '(' things ')' ':' number
    ;
x = /\w+/
    ;
y = /\w+/
    ;
number = /[+-]?\d+\.?\d*(e?[+-]?\d*)/
    ;
"""

class Semantics:
    def x(self, ast):
        # the method name matches the rule name
        print('X =', ast)

parser = tatsu.compile(grammar, semantics=Semantics())
parser.parse(text)
There are some nice ways to handle simultaneous multi-string replacement in python. However, I am having trouble creating an efficient function that can do that while also supporting backreferences.
What i would like is to use a dictionary of expression / replacement terms, where the replacement terms may contain backreferences to something matched by the expression.
e.g. (note the \1)
repdict = {'&&': 'and', '||': 'or', '!([a-zA-Z_])': r'not \1'}
I put the SO answer mentioned at the outset into the function below, which works fine for expression / replacement pairs that don't contain backreferences:
import re

def replaceAll(repdict, text):
    repdict = dict((re.escape(k), v) for k, v in repdict.items())
    pattern = re.compile("|".join(repdict.keys()))
    return pattern.sub(lambda m: repdict[re.escape(m.group(0))], text)
However, it doesn't work for the key that does contain a backreference:
>>> replaceAll(repdict, "!newData.exists() || newData.val().length == 1")
'!newData.exists() or newData.val().length == 1'
If I do it manually, it works fine, e.g.:
pattern = re.compile("!([a-zA-Z_])")
pattern.sub(r'not \1', '!newData.exists()')
Works as expected:
'not newData.exists()'
In the fancy function, the escaping seems to be messing up the key that uses the backref, so it never matches anything.
I eventually came up with this. However, note that the problem of supporting backrefs in the input parameters is not solved; I'm just handling it manually in the replacer function:
def replaceAll(repPat, text):
    def replacer(obj):
        match = obj.group(0)
        # manually deal with exclamation mark match..
        if match[:1] == "!":
            return 'not ' + match[1:]
        # here we naively escape the matched pattern into
        # the format of our dictionary key
        else:
            return repPat[naive_escaper(match)]
    pattern = re.compile("|".join(repPat.keys()))
    return pattern.sub(replacer, text)

def naive_escaper(string):
    if '=' in string:
        return string.replace('=', r'\=')
    elif '|' in string:
        return string.replace('|', r'\|')
    else:
        return string
# manually escaping \ and = works fine
repPat = {'!([a-zA-Z_])': '', '&&': 'and', r'\|\|': 'or', r'\=\=\=': '=='}
replaceAll(repPat, "(!this && !that) || !this && foo === bar")
Returns:
'(not this and not that) or not this and foo == bar'
So if anyone has an idea how to make a multi-string replacement function that supports backreferences and accepts the replacement terms as input, I'd appreciate your feedback very much.
Update: See Angus Hollands' answer for a better alternative.
I couldn't think of an easier way to do it than to stick with the original idea of combining all dict keys into one massive regex.
However, there are some difficulties. Let's assume a repldict like this:
repldict = {r'(a)': r'\1a', r'(b)': r'\1b'}
If we combine these to a single regex, we get (a)|(b) - so now (b) is no longer group 1, which means its backreference won't work correctly.
Another problem is that we can't tell which replacement to use. If the regex matches the text b, how can we find out that \1b is the appropriate replacement? It's not possible; we don't have enough information.
The solution to these problems is to enclose every dict key in a named group like so:
(?P<group1>(a))|(?P<group2>(b))
Now we can easily identify the key that matched, and recalculate the backreferences to make them relative to that group, so that \1b refers to "the first group after group2, followed by b".
Here's the implementation:
import re

def replaceAll(repldict, text):
    # split the dict into two lists because we need the order to be reliable
    keys, repls = zip(*repldict.items())
    # generate a regex pattern from the keys, putting each key in a named group
    # so that we can find out which one of them matched.
    # groups are named "_<idx>" where <idx> is the index of the corresponding
    # replacement text in the list above
    pattern = '|'.join('(?P<_{}>{})'.format(i, k) for i, k in enumerate(keys))
    def repl(match):
        # find out which key matched. We know that exactly one of the keys has
        # matched, so it's the only named group with a value other than None.
        group_name = next(name for name, value in match.groupdict().items()
                          if value is not None)
        group_index = int(group_name[1:])
        # now that we know which group matched, we can retrieve the
        # corresponding replacement text
        repl_text = repls[group_index]
        # now we'll manually search for backreferences in the
        # replacement text and substitute them
        def repl_backreference(m):
            reference_index = int(m.group(1))
            # return the corresponding group's value from the original match
            # +1 because regex starts counting at 1
            return match.group(group_index + reference_index + 1)
        return re.sub(r'\\(\d+)', repl_backreference, repl_text)
    return re.sub(pattern, repl, text)
Tests:
repldict = {'&&': 'and', r'\|\|': 'or', r'!([a-zA-Z_])': r'not \1'}
print( replaceAll(repldict, "!newData.exists() || newData.val().length == 1") )

repldict = {'!([a-zA-Z_])': r'not \1', '&&': 'and', r'\|\|': 'or', r'\=\=\=': '=='}
print( replaceAll(repldict, "(!this && !that) || !this && foo === bar") )

# output: not newData.exists() or newData.val().length == 1
#         (not this and not that) or not this and foo == bar
Caveats:
Only numerical backreferences are supported; no named references.
Silently accepts invalid backreferences like {r'(a)': r'\2'}. (These will sometimes throw an error, but not always.)
A similar solution to Rawing's, but precomputing the expensive work ahead of time by modifying the group indices in backreferences, and using unnamed groups.
Here we silently wrap each case in a capture group, and then update any replacements with backreferences to correctly identify the appropriate subgroup by absolute position. Note that when using a replacer function, backreferences do not work by default (you need to call match.expand).
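To illustrate the match.expand point with a minimal sketch:
import re

m = re.search(r'(\w+)', 'hello world')
# a string returned from a replacer function is used literally, so to
# resolve backreferences yourself you expand them against the match:
print(m.expand(r'first word: \1'))  # first word: hello
With that in mind, here is the implementation: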
import re
from collections import OrderedDict
from functools import partial

pattern_to_replacement = {'&&': 'and', '!([a-zA-Z_]+)': r'not \1'}

def build_replacer(cases):
    ordered_cases = OrderedDict(cases.items())
    replacements = {}
    leading_groups = 0
    for pattern, replacement in ordered_cases.items():
        leading_groups += 1
        # leading_groups is now the absolute position of the root group
        # (back-references should be relative to this)
        group_index = leading_groups
        replacement = absolute_backreference(replacement, group_index)
        replacements[group_index] = replacement
        # this pattern contains N subgroups (determined by compiling the pattern)
        subgroups = re.compile(pattern).groups
        leading_groups += subgroups
    catch_all = "|".join("({})".format(p) for p in ordered_cases)
    pattern = re.compile(catch_all)
    def replacer(match):
        replacement_pattern = replacements[match.lastindex]
        return match.expand(replacement_pattern)
    return partial(pattern.sub, replacer)

def absolute_backreference(text, n):
    # rewrite each \<digits> backreference so it is relative to group n
    ref_pat = re.compile(r"\\(\d+)")
    def replacer(match):
        return "\\{}".format(int(match.group(1)) + n)
    return ref_pat.sub(replacer, text)

replacer = build_replacer(pattern_to_replacement)
print(replacer("!this.exists()"))  # not this.exists()
Simple is better than complex; the code below is more readable. (The reason your code does not work as expected is that ([a-zA-Z_]) should not be passed through re.escape.)
import re

repdict = {
    r'\s*' + re.escape('&&') + r'\s*': ' and ',
    r'\s*' + re.escape('||') + r'\s*': ' or ',
    re.escape('!') + r'([a-zA-Z_])': r'not \1',
}
def replaceAll(repdict, text):
    for k, v in repdict.items():
        text = re.sub(k, v, text)
    return text
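For example, a quick sanity check of the behaviour:
print(replaceAll(repdict, "(!this && !that) || foo"))
# (not this and not that) or foo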
I can use this code below to create a new file with the substitution of a with aa using regular expressions.
import re

with open("notes.txt") as text:
    new_text = re.sub("a", "aa", text.read())
with open("notes2.txt", "w") as result:
    result.write(new_text)
I was wondering: do I have to use this line, new_text = re.sub("a", "aa", text.read()), multiple times, substituting a different letter each time, in order to change more than one letter in my text?
That is, so a --> aa, b --> bb, and c --> cc.
Do I have to write that line for every letter I want to change, or is there an easier way? Perhaps I could create a "dictionary" of translations. Should I put those letters into an array? I'm not sure how to call on them if I do.
The answer proposed by @nhahtdh is valid, but I would argue it is less pythonic than the canonical example, which uses code less opaque than his regex manipulations and takes advantage of Python's built-in data structures and anonymous function feature.
A dictionary of translations makes sense in this context. In fact, that's how the Python Cookbook does it, as shown in this example (copied from ActiveState http://code.activestate.com/recipes/81330-single-pass-multiple-replace/ )
import re

def multiple_replace(dict, text):
    # Create a regular expression from the dictionary keys
    regex = re.compile("(%s)" % "|".join(map(re.escape, dict.keys())))
    # For each match, look up the corresponding value in the dictionary
    return regex.sub(lambda mo: dict[mo.string[mo.start():mo.end()]], text)

if __name__ == "__main__":
    text = "Larry Wall is the creator of Perl"
    dict = {
        "Larry Wall": "Guido van Rossum",
        "creator": "Benevolent Dictator for Life",
        "Perl": "Python",
    }
    print multiple_replace(dict, text)
So in your case, you could make a dict trans = {"a": "aa", "b": "bb"} and then pass it into multiple_replace along with the text you want translated. Basically all that function is doing is creating one huge regex containing all of your regexes to translate, then when one is found, passing a lambda function to regex.sub to perform the translation dictionary lookup.
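For the letter-doubling case from the question, a small sketch using the multiple_replace above:
trans = {"a": "aa", "b": "bb", "c": "cc"}
print multiple_replace(trans, "abcabc")  # aabbccaabbcc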
You could use this function while reading from your file, for example:
with open("notes.txt") as text:
new_text = multiple_replace(replacements, text.read())
with open("notes2.txt", "w") as result:
result.write(new_text)
I've actually used this exact method in production, in a case where I needed to translate the months of the year from Czech into English for a web scraping task.
As @nhahtdh pointed out, one downside to this approach is that it is not prefix-free: dictionary keys that are prefixes of other dictionary keys will cause the method to break.
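To illustrate the pitfall, a minimal sketch of the usual workaround: sort the keys longest-first so no key is shadowed by one of its prefixes.
import re

def multiple_replace_sorted(repl, text):
    # sort keys longest-first so "category" is tried before "cat"
    keys = sorted(repl, key=len, reverse=True)
    regex = re.compile("(%s)" % "|".join(map(re.escape, keys)))
    return regex.sub(lambda mo: repl[mo.group(1)], text)

repl = {"cat": "dog", "category": "taxonomy"}
print(multiple_replace_sorted(repl, "category of cat"))  # taxonomy of dog
With the naive key order, "category" could match the shorter key "cat" first and come out as "dogegory".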
You can use capturing group and backreference:
re.sub(r"([characters])", r"\1\1", text.read())
Put characters that you want to double up in between []. For the case of lower case a, b, c:
re.sub(r"([abc])", r"\1\1", text.read())
In the replacement string, you can refer to whatever a capturing group () matched with the \n notation, where n is a positive integer (0 excluded). \1 refers to the first capturing group. There is another notation, \g<n>, where n can be any non-negative integer (0 allowed); \g<0> refers to the whole text matched by the expression.
If you want to double up all characters except new line:
re.sub(r"(.)", r"\1\1", text.read())
If you want to double up all characters (new line included):
re.sub(r"(.)", r"\1\1", text.read(), 0, re.S)
You can use the pandas library and its replace function. Here is an example with five replacements:
import pandas as pd

df = pd.DataFrame({'text': ['Billy is going to visit Rome in November', 'I was born in 10/10/2010', 'I will be there at 20:00']})
to_replace = ['Billy', 'Rome',
              'January|February|March|April|May|June|July|August|September|October|November|December',
              r'\d{2}:\d{2}', r'\d{2}/\d{2}/\d{4}']
replace_with = ['name', 'city', 'month', 'time', 'date']
print(df.text.replace(to_replace, replace_with, regex=True))
And the modified text is:
0 name is going to visit city in month
1 I was born in date
2 I will be there at time
None of the other solutions work if your patterns are themselves regexes.
For that, you need:
import re

def multi_sub(pairs, s):
    def repl_func(m):
        # only one group will be present, use the corresponding match
        return next(
            repl
            for (patt, repl), group in zip(pairs, m.groups())
            if group is not None
        )
    pattern = '|'.join("({})".format(patt) for patt, _ in pairs)
    return re.sub(pattern, repl_func, s)
Which can be used as:
>>> multi_sub([
... ('a+b', 'Ab'),
... ('b', 'B'),
... ('a+', 'A.'),
... ], "aabbaa") # matches as (aab)(b)(aa)
'AbBA.'
Note that this solution does not allow you to put capturing groups in your regexes, or use them in replacements.
Using tips from how to make a 'stringy' class, we can make an object identical to a string but for an extra sub method:
import re

class Substitutable(str):
    def __new__(cls, *args, **kwargs):
        newobj = str.__new__(cls, *args, **kwargs)
        newobj.sub = lambda fro, to: Substitutable(re.sub(fro, to, newobj))
        return newobj
This allows you to use the builder pattern, which looks nicer, but it works only for a pre-determined number of substitutions. If you use it in a loop, there is no point in creating an extra class any more. E.g.
>>> h = Substitutable('horse')
>>> h
'horse'
>>> h.sub('h', 'f')
'forse'
>>> h.sub('h', 'f').sub('f','h')
'horse'
I found I had to modify Emmett J. Butler's code by changing the lambda function to use myDict.get(mo.group(1),mo.group(1)). The original code wasn't working for me; using myDict.get() also provides the benefit of a default value if a key is not found.
OIDNameContraction = {
    'Fucntion': 'Func',
    'operated': 'Operated',
    'Asist': 'Assist',
    'Detection': 'Det',
    'Control': 'Ctrl',
    'Function': 'Func',
}

replacementDictRegex = re.compile("(%s)" % "|".join(map(re.escape, OIDNameContraction.keys())))
oidDescriptionStr = replacementDictRegex.sub(lambda mo: OIDNameContraction.get(mo.group(1), mo.group(1)), oidDescriptionStr)
If you are dealing with files, I have some simple Python code for this problem.
import re

def multiple_replace(dictionary, text):
    # Create a regular expression from the dictionary keys
    regex = re.compile("(%s)" % "|".join(map(re.escape, dictionary.keys())))
    # For each match, look up the corresponding value in the dictionary
    lookup = lambda mo: dictionary[mo.string[mo.start():mo.end()]]
    return regex.sub(lookup, text)

if __name__ == "__main__":
    dictionary = {
        "Wiley Online Library": "Wiley",
        "Chemical Society Reviews": "Chem. Soc. Rev.",
    }
    with open('LightBib.bib', 'r') as Bib_read:
        with open('Abbreviated.bib', 'w') as Bib_write:
            read_lines = Bib_read.readlines()
            for rows in read_lines:
                text = rows
                new_text = multiple_replace(dictionary, text)
                Bib_write.write(new_text)
Based on Eric's great answer, I came up with a more general solution that is capable of handling capturing groups and backreferences:
import re
from itertools import islice
def multiple_replace(s, repl_dict):
    groups_no = [re.compile(pattern).groups for pattern in repl_dict]
    def repl_func(m):
        all_groups = m.groups()
        # Use 'i' as the index within 'all_groups' and 'j' as the main
        # group index.
        i, j = 0, 0
        while i < len(all_groups) and all_groups[i] is None:
            # Skip the inner groups and move on to the next group.
            i += (groups_no[j] + 1)
            # Advance the main group index.
            j += 1
        # Extract the pattern and replacement at the j-th position.
        pattern, repl = next(islice(repl_dict.items(), j, j + 1))
        return re.sub(pattern, repl, all_groups[i])
    # Create the full pattern using the keys of 'repl_dict'.
    full_pattern = '|'.join(f'({pattern})' for pattern in repl_dict)
    return re.sub(full_pattern, repl_func, s)
Example. Calling the above with
s = 'This is a sample string. Which is getting replaced. 1234-5678.'

REPL_DICT = {
    r'(.*?)is(.*?)ing(.*?)ch': r'\3-\2-\1',
    r'replaced': 'REPLACED',
    r'\d\d((\d)(\d)-(\d)(\d))\d\d': r'__\5\4__\3\2__',
    r'get|ing': '!##'
}
gives:
>>> multiple_replace(s, REPL_DICT)
'. Whi- is a sample str-Th is !##t!## REPLACED. __65__43__.'
For a more efficient solution, one can create a simple wrapper to precompute groups_no and full_pattern, e.g.
import re
from itertools import islice

class ReplWrapper:
    def __init__(self, repl_dict):
        self.repl_dict = repl_dict
        self.groups_no = [re.compile(pattern).groups for pattern in repl_dict]
        self.full_pattern = '|'.join(f'({pattern})' for pattern in repl_dict)

    def get_pattern_repl(self, pos):
        return next(islice(self.repl_dict.items(), pos, pos + 1))

    def multiple_replace(self, s):
        def repl_func(m):
            all_groups = m.groups()
            # Use 'i' as the index within 'all_groups' and 'j' as the main
            # group index.
            i, j = 0, 0
            while i < len(all_groups) and all_groups[i] is None:
                # Skip the inner groups and move on to the next group.
                i += (self.groups_no[j] + 1)
                # Advance the main group index.
                j += 1
            return re.sub(*self.get_pattern_repl(j), all_groups[i])
        return re.sub(self.full_pattern, repl_func, s)
Use it as follows:
>>> ReplWrapper(REPL_DICT).multiple_replace(s)
'. Whi- is a sample str-Th is !##t!## REPLACED. __65__43__.'
I don't know why most of the solutions try to compose a single regex pattern instead of replacing multiple times. This answer is just for the sake of completeness.
That being said, the output of this approach is different from the output of the combined-regex approach: repeated substitutions may evolve the text over time. However, the following function returns the same output as a call to Unix sed would:
import re

def multi_replace(rules, data: str) -> str:
    ret = data
    for pattern, repl in rules:
        ret = re.sub(pattern, repl, ret)
    return ret
usage:
RULES = [
    (r'a', r'b'),
    (r'b', r'c'),
    (r'c', r'd'),
]

multi_replace(RULES, 'ab')  # output: dd
With the same input and rules, the other solutions will output "bc". Depending on your use case you may or may not want to replace strings consecutively. In my case I wanted to rebuild the sed behavior. Also, note that the order of rules matters. If you reverse the rule order, this example would also return "bc".
In my benchmarks, this solution was faster than combining the patterns into a single regex (by roughly a factor of 100). So, if your use case allows it, you should prefer the repeated-substitution method.
Of course, you can compile the regex patterns:
import re

class Sed:
    def __init__(self, rules) -> None:
        self._rules = [(re.compile(pattern), sub) for pattern, sub in rules]

    def replace(self, data: str) -> str:
        ret = data
        for regx, repl in self._rules:
            ret = regx.sub(repl, ret)
        return ret
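A quick usage check, mirroring the RULES example above:
sed = Sed(RULES)
print(sed.replace('ab'))  # dd -- same chained behaviour, patterns compiled once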
I'm a newbie to regular expressions in Python.
I have a list that I want to search to see whether it contains an employee name.
The employee name can be:
at the beginning, followed by a space
followed by ®
or followed by a space
or at the end, with a space before it
Matching is not case sensitive.
ListSentence = ["Steve®", "steveHotel", "Rob spring", "Car Daniel", "CarDaniel","Done daniel"]
ListEmployee = ["Steve", "Rob", "daniel"]
The output from the ListSentence is:
["Steve®", "Rob spring", "Car Daniel", "Done daniel"]
First take all your employee names and join them with a | character, then wrap the string so it looks like:
(?:^|\s)((?:Steve|Rob|Daniel)(?:®)?)(?=\s|$)
By joining all the names together first, you avoid the performance overhead of a nested set of for loops.
I don't know Python well enough to offer a Python example; however, in PowerShell I'd write it like this:
[array]$names = @("Steve", "Rob", "daniel")
[array]$ListSentence = @("Steve®", "steveHotel", "Rob spring", "Car Daniel", "CarDaniel", "Done daniel")
# build the regex, and insert the names as a "|" delimited string
$Regex = "(?:^|\s)((?:" + $($names -join "|") + ")(?:®)?)(?=\s|$)"
# use case insensitive match to find any matching array values
$ListSentence -imatch $Regex
Yields
Steve®
Rob spring
Car Daniel
Done daniel
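For completeness, the same idea as a minimal Python sketch:
import re

ListSentence = ["Steve®", "steveHotel", "Rob spring", "Car Daniel", "CarDaniel", "Done daniel"]
ListEmployee = ["Steve", "Rob", "daniel"]

# one alternation of all the names, as in the PowerShell version above
pattern = re.compile(r'(?:^|\s)((?:' + '|'.join(map(re.escape, ListEmployee)) + r')(?:®)?)(?=\s|$)',
                     re.IGNORECASE)
print([s for s in ListSentence if pattern.search(s)])
# ['Steve®', 'Rob spring', 'Car Daniel', 'Done daniel']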
Why do you want to use regular expressions? I'd generally recommend avoiding them in Python - you can use string methods instead.
For example:
def string_has_employee_name_in_it(test_string):
    test_string = test_string.lower()  # case insensitive
    for name in ListEmployee:
        name = name.lower()
        if name == test_string:
            return True
        elif name + '®' == test_string:
            return True
        elif test_string.endswith(' ' + name):
            return True
        elif test_string.startswith(name + ' '):
            return True
        elif (' ' + name + ' ') in test_string:
            return True
    return False

final_list = []
for string in ListSentence:
    if string_has_employee_name_in_it(string):
        final_list.append(string)
final_list is the list you want. This is longer than a regex, but it's also a lot easier to parse and maintain. You can make it a lot shorter in various ways (e.g. combining the tests in the function, and using a list comprehension instead of a loop), but as you're starting out with Python it's a good idea to be clear with what's going on.
I don't think you need to check for all of those scenarios. I think all you need to do is check for word breaks.
You can join the ListEmployee list with | to make an either or regex (also, lowercase it for case-insensitivity), surrounded by \b for word breaks, and that should work:
import re

regex = '|'.join(ListEmployee).lower()
[l for l in ListSentence if re.search(r'\b(%s)\b' % regex, l.lower())]
Should output:
['Steve®', 'Rob spring', 'Car Daniel', 'Done daniel']
If you're just looking for strings containing a space, as your example indicates, it should be something like this:
[i for i in ListSentence if i.endswith('®') or (' ' in i)]
A possible solution:
import re

ListSentence = ["Steve®", "steveHotel", "Rob spring", "Car Daniel", "CarDaniel", "Done daniel"]
ListEmployee = ["Steve", "Rob", "daniel"]

def findEmployees(employees, sentence):
    retval = []
    for employee in employees:
        expr = re.compile(r'(^%(employee)s(®)?(\s|$))|((^|\s)%(employee)s(®)?(\s|$))|((^|\s)%(employee)s(®)?$)'
                          % {'employee': employee},
                          re.IGNORECASE)
        for part in sentence:
            if expr.search(part):
                retval.append(part)
    return retval

findEmployees(ListEmployee, ListSentence)
>> Returns ['Steve®', 'Rob spring', 'Car Daniel', 'Done daniel']
Suppose I have a string like this:
"key1=value1;key2=value2;key3=(key3.1=value3.1;key3.2=value3.2)"
I would like to get a dictionary corresponding to the above, where the value for key3 is the string
"(key3.1=value3.1;key3.2=value3.2)"
and eventually the corresponding sub-dictionary.
I know how to split the string at the semicolons, but how can I tell the parser to ignore the semicolon between parentheses?
This includes potentially nested parentheses.
Currently I am using an ad-hoc routine that looks for pairs of matching parentheses, "clears" their content, gets the split positions and applies them to the original string. But this does not appear very elegant; there must be some prepackaged pythonic way to do this.
If anyone is interested, here is the code I am currently using:
def pparams(parameters, sep=';', defs='=', brc='()'):
    '''
    unpackages parameter string to struct
    for example, pippo(a=21;b=35;c=pluto(h=zzz;y=mmm);d=2d3f) becomes:
    a: '21'
    b: '35'
    c.fn: 'pluto'
    c.h='zzz'
    d: '2d3f'
    fn_: 'pippo'
    '''
    ob = strfind(parameters, brc[0])
    dp = strfind(parameters, defs)
    out = {}
    if len(ob) > 0:
        if ob[0] < dp[0]:
            # opening function
            out['fn_'] = parameters[:ob[0]]
            parameters = parameters[(ob[0]+1):-1]
    if len(dp) > 0:
        temp = smart_tokenize(parameters, sep, brc)
        for v in temp:
            defp = strfind(v, defs)
            pname = v[:defp[0]]
            pval = v[1+defp[0]:]
            if len(strfind(pval, brc[0])) > 0:
                out[pname] = pparams(pval, sep, defs, brc)
            else:
                out[pname] = pval
    else:
        out['fn_'] = parameters
    return out

def smart_tokenize(instr, sep=';', brc='()'):
    '''
    tokenize string ignoring separators contained within brc
    '''
    tstr = instr
    ob = strfind(instr, brc[0])
    while len(ob) > 0:
        cb = findclsbrc(tstr, ob[0])
        tstr = tstr[:ob[0]] + '?'*(cb-ob[0]+1) + tstr[cb+1:]
        ob = strfind(tstr, brc[0])  # look for the next opening bracket
    sepp = [-1] + strfind(tstr, sep) + [len(instr)+1]
    out = []
    for i in range(1, len(sepp)):
        out.append(instr[(sepp[i-1]+1):(sepp[i])])
    return out

def findclsbrc(instr, brc_pos, brc='()'):
    '''
    given a string containing an opening bracket, finds the
    corresponding closing bracket
    '''
    tstr = instr[brc_pos:]
    o = strfind(tstr, brc[0])
    c = strfind(tstr, brc[1])
    p = o + c
    p.sort()
    s1 = [1 if v in o else 0 for v in p]
    s2 = [-1 if v in c else 0 for v in p]
    s = [s1v + s2v for s1v, s2v in zip(s1, s2)]
    s = [sum(s[:i+1]) for i in range(len(s))]  # cumsum
    return p[s.index(0)] + brc_pos

def strfind(instr, substr):
    '''
    returns starting position of each occurrence of substr within instr
    '''
    i = 0
    out = []
    while i <= len(instr):
        try:
            p = instr[i:].index(substr)
            out.append(i+p)
            i += p+1
        except ValueError:
            i = len(instr)+1
    return out
If you want to build a real parser, use one of the Python parsing libraries, like PLY or PyParsing. If you figure such a full-fledged library is overkill for the task at hand, go for some hack like the one you already have. I'm pretty sure there is no clean few-line solution without an external library.
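If you do go the hack route, a compact depth-counting splitter covers the tokenizing step in a few lines (a minimal sketch; it assumes balanced parentheses and no quoting):
def split_outside_parens(s, sep=';', brc='()'):
    # split s on sep, ignoring separators nested inside brc pairs
    parts, depth, start = [], 0, 0
    for i, ch in enumerate(s):
        if ch == brc[0]:
            depth += 1
        elif ch == brc[1]:
            depth -= 1
        elif ch == sep and depth == 0:
            parts.append(s[start:i])
            start = i + 1
    parts.append(s[start:])
    return parts

print(split_outside_parens("key1=value1;key2=value2;key3=(key3.1=value3.1;key3.2=value3.2)"))
# ['key1=value1', 'key2=value2', 'key3=(key3.1=value3.1;key3.2=value3.2)']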
Expanding on Sven Marnach's answer, here's an example of a pyparsing grammar that should work for you:
from pyparsing import (ZeroOrMore, Word, printables, Forward,
                       Group, Suppress, Dict)

collection = Forward()
simple_value = Word(printables, excludeChars='()=;')
key = simple_value
inner_collection = Suppress('(') + collection + Suppress(')')
value = simple_value ^ inner_collection
key_and_value = Group(key + Suppress('=') + value)
collection << Dict(key_and_value + ZeroOrMore(Suppress(';') + key_and_value))

coll = collection.parseString(
    "key1=value1;key2=value2;key3=(key3.1=value3.1;key3.2=value3.2)")
print coll['key1'] # value1
print coll['key2'] # value2
print coll['key3']['key3.1'] # value3.1
You could use a regex to capture the groups:
>>> import re
>>> s = "key1=value1;key2=value2;key3=(key3.1=value3.1;key3.2=value3.2)"
>>> r = re.compile(r'(\w+)=(\w+|\([^)]+\));?')
>>> dict(r.findall(s))
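which yields the inner group as one unparsed string:
{'key1': 'value1', 'key2': 'value2', 'key3': '(key3.1=value3.1;key3.2=value3.2)'}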
This regex says:
(\w+)        # Find and capture a group of 1 or more word characters (letters, digits, underscores)
=            # Followed by the literal character '='
(\w+         # Followed by a group of 1 or more word characters
|\([^)]+\)   # or a group that starts with an open paren (escaped as '\('), followed by anything up to the closing paren '\)', which terminates the alternate grouping
);?          # optionally this grouping might be followed by a semicolon
Gotta say, kind of a strange grammar. You should consider using a more standard format. If you need guidance choosing one maybe ask another question. Good luck!