Parsing a string using a regular expression? - Python

my_string = "Value1=Product Registered;Value2=Linux;Value3=C:5;C++:5;Value4=43;"
I was using the following regex:
tokens = re.findall(r'([^;]+)=([^;]+)', my_string, re.I)
I need to parse Value1, Value2, etc. and put their values into the database. For example, I need to store "C:5;C++:5" for Value3 -- but with the above regex I can only store "C:5", because the match splits on ";". What would be a better way to do this?
Thanks!

It seems reasonable to assume that the key names don't contain semicolons. If that isn't true, then as Philipp pointed out the language is ambiguous. But if they don't, you can use a lookahead to tell which ; is the separator: it has to be followed by a sequence of characters that are neither ; nor =, and then either an = or the end of the string:
>>> import re
>>> my_string = "Value1=Product Registered;Value2=Linux;Value3=C:5;C++:5;Value4=43;"
>>> r = re.compile(r'([^;]+)=([^=]+);(?=[^;=]*(?:=|$))')
>>> r.findall(my_string)
[('Value1', 'Product Registered'),
('Value2', 'Linux'),
('Value3', 'C:5;C++:5'),
('Value4', '43')]
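If you then need the pairs keyed by name, for example to write them to the database, the tuples convert straight into a dict (a minimal sketch, assuming the key names are unique):
>>> data = dict(r.findall(my_string))
>>> data['Value3']
'C:5;C++:5'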

Related

What's a better way to process inconsistently structured strings?

I have an output string like this:
read : io=131220KB, bw=14016KB/s, iops=3504, runt= 9362msec
And I want to just extract one of the numerical values for computation, say iops. I'm processing it like this:
if 'read ' in key:
    my_read_iops = value.split(",")[2].split("=")[1]
    result['test_details']['read'] = my_read_iops
But there are slight inconsistencies with some of the strings I'm reading in and my code is getting super complicated and verbose. So instead of manually counting the number of commas vs "=" chars, what's a better way to handle this?
You can use the regular expression \s* to handle the inconsistent spacing; it matches zero or more whitespace characters:
import re
s = 'read : io=131220KB, bw=14016KB/s, iops=3504, runt= 9362msec'
for m in re.finditer(r'\s*(?P<name>\w*)\s*=\s*(?P<value>[\w/]*)\s*', s):
    print(m.group('name'), m.group('value'))
# io 131220KB
# bw 14016KB/s
# iops 3504
# runt 9362msec
Using named groups, you can construct the pattern string from a list of column names:
names = ['io', 'bw', 'iops', 'runt']
name_val_pat = r'\s*{name}\s*=\s*(?P<{group_name}>[\w/]*)\s*'
pattern = ','.join([name_val_pat.format(name=name, group_name=name) for name in names])
# '\s*io\s*=\s*(?P<io>[\w/]*)\s*,\s*bw\s*=\s*(?P<bw>[\w/]*)\s*,\s*iops\s*=\s*(?P<iops>[\w/]*)\s*,\s*runt\s*=\s*(?P<runt>[\w/]*)\s*'
match = re.search(pattern, s)
data_dict = {name: match.group(name) for name in names}
print(data_dict)
# {'io': '131220KB', 'bw': '14016KB/s', 'runt': '9362msec', 'iops': '3504'}
This way, you only need to change names, keeping its order consistent with the fields in the string.
If I were you, I'd use a regex (regular expression) as the first choice.
import re
s= "read : io=131220KB, bw=14016KB/s, iops=3504, runt= 9362msec"
re.search(r"iops=(\d+)",s).group(1)
This Python code finds the pattern that starts with 'iops=' and continues with at least one digit, and extracts the target string (3504) using the round brackets (a capturing group).
You can find more information about regex at
https://docs.python.org/3.6/library/re.html#module-re
Regex is a powerful language for complex pattern matching with simple syntax.
>>> from re import match
>>> string = 'read : io=131220KB, bw=14016KB/s, iops=3504, runt= 9362msec'
>>> iops = match(r'.+(iops=)([0-9]+)', string).group(2)
>>> iops
'3504'

How to escape null characters, i.e. [''], while using the regex split function? [duplicate]

I have the following file names that exhibit this pattern:
000014_L_20111007T084734-20111008T023142.txt
000014_U_20111007T084734-20111008T023142.txt
...
I want to extract the middle two time stamp parts after the second underscore '_' and before '.txt'. So I used the following Python regex string split:
time_info = re.split(r'^[0-9]+_[LU]_|-|\.txt$', f)
But this gives me two extra empty strings in the returned list:
time_info=['', '20111007T084734', '20111008T023142', '']
How do I get only the two time stamp information? i.e. I want:
time_info=['20111007T084734', '20111008T023142']
I'm no Python expert but maybe you could just remove the empty strings from your list?
str_list = re.split(r'^[0-9]+_[LU]_|-|\.txt$', f)
time_info = list(filter(None, str_list))  # drop the empty strings
Don't use re.split(), use the groups() method of regex Match/SRE_Match objects.
>>> f = '000014_L_20111007T084734-20111008T023142.txt'
>>> time_info = re.search(r'[LU]_(\w+)-(\w+)\.', f).groups()
>>> time_info
('20111007T084734', '20111008T023142')
You can even name the capturing groups and retrieve them in a dict, though you use groupdict() rather than groups() for that. (The regex pattern for such a case would be something like r'[LU]_(?P<groupA>\w+)-(?P<groupB>\w+)\.')
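For instance, with the pattern above (a minimal sketch):
>>> m = re.search(r'[LU]_(?P<groupA>\w+)-(?P<groupB>\w+)\.', f)
>>> m.groupdict()
{'groupA': '20111007T084734', 'groupB': '20111008T023142'}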
If the timestamps are always after the second _ then you can use str.split and str.strip (note that strip removes a set of characters from both ends, not a literal suffix, which happens to be safe for these names):
>>> strs = "000014_L_20111007T084734-20111008T023142.txt"
>>> strs.strip(".txt").split("_",2)[-1].split("-")
['20111007T084734', '20111008T023142']
Since this came up on Google, and for completeness: try using re.findall as an alternative!
This does require a little re-thinking, but it still returns a list of matches like split does. This makes it a nice drop-in replacement for some existing code and gets rid of the unwanted text. Pair it with lookaheads and/or lookbehinds and you get very similar behavior.
Yes, this is a bit of a "you're asking the wrong question" answer and doesn't use re.split(). But it does solve the underlying issue: your list of matches suddenly has zero-length strings in it, and you don't want that.
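A minimal sketch of that approach, assuming the timestamps always look like 8 digits, 'T', 6 digits:
>>> import re
>>> f = '000014_L_20111007T084734-20111008T023142.txt'
>>> re.findall(r'\d{8}T\d{6}', f)
['20111007T084734', '20111008T023142']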
>>> f='000014_L_20111007T084734-20111008T023142.txt'
>>> f[9:-4].split('-')
['20111007T084734', '20111008T023142']
or, somewhat more general:
>>> f[f.rfind('_')+1:-4].split('-')
['20111007T084734', '20111008T023142']

Regex search and replace substring in Python

I need some help with a regular expression in Python.
I have a string like this:
>>> s = '[i1]scale=-2:givenHeight_1[o1];'
How can I remove givenHeight_1 and turn the string into this?
>>> '[i1]scale=-2:360[o1];'
Is there an efficient one-liner regex for such a job?
UPDATE 1:
My regex so far is something like this, but it's currently not working:
re.sub('givenHeight_1[o1]', '360[o1]', s)
You can use a positive lookaround with re.sub:
>>> s = '[i1]scale=-2:givenHeight_1[o1];'
>>> re.sub(r'(?<=:).*(?=\[)','360',s)
'[i1]scale=-2:360[o1];'
The preceding regex replaces anything that comes after : and before [ with '360'.
Or, depending on your needs, you can use str.replace directly:
>>> s.replace('givenHeight_1','360')
'[i1]scale=-2:360[o1];'

Python Regex Get String Between Two Substrings

First off, I know this may seem like a duplicate question; however, I could find no working solution to my problem.
I have string that looks like the following:
string = "api('randomkey123xyz987', 'key', 'text')"
I need to extract randomkey123xyz987, which will always be between api(' and ',
I was planning on using Regex for this, however, I seem to be having some trouble.
This is the only progress that I have made:
import re
string = "api('randomkey123xyz987', 'key', 'text')"
match = re.findall("\((.*?)\)", string)[0]
print match
The following code returns 'randomkey123xyz987', 'key', 'text'
I have tried to use [^'], but my guess is that I am not properly inserting it into the re.findall function.
Everything that I am trying is failing.
Update: My current workaround is using [2:-4], but I would still like to avoid using match[2:-4].
If the string contains only one instance, use re.search() instead:
>>> import re
>>> s = "api('randomkey123xyz987', 'key', 'text')"
>>> match = re.search(r"api\('([^']*)'", s).group(1)
>>> print match
randomkey123xyz987
You want the string between the ( and the ,, but you are capturing everything between the parens:
match = re.findall(r"api\((.*?),", string)
print match
["'randomkey123xyz987'"]
Or match between the '':
match = re.findall("api\('(.*?)'", string)
print match
['randomkey123xyz987']
If that is how your strings actually look, you can split:
string = "api('randomkey123xyz987', 'key', 'text')"
print(string.split(",", 1)[0][5:-1])  # take everything before the first comma, then strip "api('" and the trailing "'"
You should use the following regex:
api\('(.*?)'
This assumes that api( is a fixed prefix.
It matches api(, then captures what appears next, up to the next ' character.
>>> re.findall(r"api\('(.*?)'", "api('randomkey123xyz987', 'key', 'text')")
['randomkey123xyz987']
If you are certain that randomkey123xyz987 will always be between "api('" and "',", then using the split() method can get it done in one line, and you will not have to use regex matching. It extracts the text between the starting delimiter "api('" and the ending delimiter "',".
>>> string = "api('randomkey123xyz987', 'key', 'text')"
>>> value = (string.split("api('")[1]).split("',")[0]
>>> print value
randomkey123xyz987

Extracting number from unicode string with regex

I have the following dictionary which contains some product data:
dictionary = {'price': [u'3\xa0590 EUR'],
              'name': [u'Product name with unicode chars']}
All values are in unicode. As you can see I'm using lists as dictionary values because sometimes I need to concatenate the information from several different sources.
I'm looking for a way to extract the digits from the price value without the non-breaking space (\xa0) and currency at the end (EUR) by using a regex.
In this case I would like to see the following as a result:
3590
Can you please suggest a solution?
[SOLUTION]
Adding the solution here because the comments field wrapped my code unexpectedly:
I used the .sub() method from Python's re module, which performs a replacement. Here is the final code that gives me the expected result:
p = re.compile(u'\xa0| EUR')
result = p.sub('', dictionary['price'][0])
Not sure about Python, but here's a regex (this is JavaScript):
p = /\D/g;
s.replace(p, '');
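In Python, the same idea (remove every non-digit character) would look roughly like this, assuming the price never has a decimal part that needs to be kept:
import re

price = u'3\xa0590 EUR'
digits = re.sub(r'\D', '', price)  # \D matches any non-digit, so the \xa0 and ' EUR' are dropped
print(digits)  # 3590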
