I have a str that contains a list of numbers, and I want to convert it to a list. Right now I can only get the entire thing into the 0th entry of a list, but I want each number to be its own element. Does anyone know of an easy way to do this in Python?
for i in in_data.splitlines():
    print i.split('Counter32: ')[1].strip().split()
My result (not what I want):
['12576810']\n['1917472404']\n['3104185795']
My data:
IF-MIB::ifInOctets.1 = Counter32: 12576810
IF-MIB::ifInOctets.2 = Counter32: 1917472404
IF-MIB::ifInOctets.3 = Counter32: 3104185795
The result I want:
['12576810','1917472404','3104185795']
Given your data as
>>> data="""IF-MIB::ifInOctets.1 = Counter32: 12576810
IF-MIB::ifInOctets.2 = Counter32: 1917472404
IF-MIB::ifInOctets.3 = Counter32: 3104185795"""
You can use a regex, where the intent is clearer:
>>> import re
>>> [re.findall("\d+$",e)[0] for e in data.splitlines()]
['12576810', '1917472404', '3104185795']
or, as @jamylak pointed out:
re.findall("\d+$",data,re.MULTILINE)
Or str.rsplit, which will have an edge in performance:
>>> [e.rsplit()[-1] for e in data.splitlines()]
['12576810', '1917472404', '3104185795']
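If performance really matters, you can time both variants on your own data with timeit (a sketch, assuming data is the multiline string defined above; the numbers will depend on your machine):
import re
import timeit

print(timeit.timeit(lambda: [e.rsplit()[-1] for e in data.splitlines()], number=100000))
print(timeit.timeit(lambda: re.findall(r"\d+$", data, re.MULTILINE), number=100000))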
You are already most of the way there. Based on the code you have, try this:
result = []
for i in in_data.splitlines():
    result.append(i.split('Counter32: ')[1].strip())
print result
You could also do:
result = [i.split('Counter32: ')[1].strip() for i in in_data.splitlines()]
Then, you can also go and look at what @Abhijit and @KurzedMetal are doing with regular expressions. In general, that would be the way to go, but I really like how you avoided them with a simple split.
My best try with the info you gave:
>>> data = r"['12576810']\n['1917472404']\n['3104185795']"
>>> import re
>>> re.findall("\d+", data)
['12576810', '1917472404', '3104185795']
You could even convert it to int or long if necessary with map():
>>> map(int, re.findall("\d+", data))
[12576810, 1917472404, 3104185795L]
>>> map(long, re.findall("\d+", data))
[12576810L, 1917472404L, 3104185795L]
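In Python 3 there is no long type and map returns an iterator, so the equivalent would be something like:
>>> list(map(int, re.findall(r"\d+", data)))
[12576810, 1917472404, 3104185795]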
This is how I'd do it.
data="""IF-MIB::ifInOctets.1 = Counter32: 12576810 ... IF-MIB::ifInOctets.2 = Counter32: 1917472404 ... IF-MIB::ifInOctets.3
= Counter32: 3104185795"""
[ x.split()[-1] for x in data.split("\n") ]
with open('in.txt') as f:
    numbers = [y.split()[-1] for y in f]
print(numbers)
['12576810', '1917472404', '3104185795']
or:
with open('in.txt') as f:
    numbers = []
    for x in f:
        x = x.split()
        numbers.append(x[-1])
print(numbers)
['12576810', '1917472404', '3104185795']
result = [(item[(item.rfind(' ')):]).strip() for item in list_of_data]
A variant using a list comprehension: iterate over all lines of the data, find the last index of a blank, slice the string from that last blank position to its end, strip the result (removing any remaining blanks), and put it in a new list.
data = """F-MIB::ifInOctets.1 = Counter32: 12576810
IF-MIB::ifInOctets.2 = Counter32: 1917472404
IF-MIB::ifInOctets.3 = Counter32: 3104185795"""
result = [ (item[(item.rfind(' ')):]).strip() for item in data.splitlines()]
print result
Result:
['12576810', '1917472404', '3104185795']
I have a dictionary containing the following key-value pairs: d={'Alice':'x','Bob':'y','Chloe':'z'}
I want to replace the lowercase variables (values) with the constants (keys) in any given string.
For example, if my string is:
A(x)B(y)C(x,z)
how do I replace the characters in order to get a resulting string of:
A(Alice)B(Bob)C(Alice,Chloe)
Should I use regular expressions?
re.sub() solution with replacement function:
import re
d = {'Alice':'x','Bob':'y','Chloe':'z'}
flipped = dict(zip(d.values(), d.keys()))
s = 'A(x)B(y)C(x,z)'
result = re.sub(r'\([^()]+\)', lambda m: '({})'.format(','.join(flipped.get(k,'')
for k in m.group().strip('()').split(','))), s)
print(result)
The output:
A(Alice)B(Bob)C(Alice,Chloe)
Extended version:
import re
def repl(m):
    val = m.group().strip('()')
    d = {'Alice':'x','Bob':'y','Chloe':'z'}
    flipped = dict(zip(d.values(), d.keys()))
    if ',' in val:
        return '({})'.format(','.join(flipped.get(k,'') for k in val.split(',')))
    else:
        return '({})'.format(flipped.get(val,''))
s = 'A(x)B(y)C(x,z)'
result = re.sub(r'\([^()]+\)', repl, s)
print(result)
Bonus approach for particular input case A(x)B(y)C(Alice,z):
...
s = 'A(x)B(y)C(Alice,z)'
result = re.sub(r'\([^()]+\)', lambda m: '({})'.format(','.join(flipped.get(k,'') or k
for k in m.group().strip('()').split(','))), s)
print(result)
I assume you want to replace the values in a string with the respective keys of the dictionary. If my assumption is correct, you can try this without using regex.
First, swap the keys and values using a dictionary comprehension:
my_dict = {'Alice':'x','Bob':'y','Chloe':'z'}
my_dict = { y:x for x,y in my_dict.iteritems()}
Then, using a list comprehension, replace the values:
str_ = 'A(x)B(y)C(x,z)'
output = ''.join([i if i not in my_dict.keys() else my_dict[i] for i in str_])
Hope this is what you need ;)
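For reference, this character-by-character replacement only works because every value in the dictionary is a single character; on the sample string it should give:
print(output)  # A(Alice)B(Bob)C(Alice,Chloe)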
Code
import re
d={'Alice':'x','Bob':'y','Chloe':'z'}
keys = d.keys()
values = d.values()
s = "A(x)B(y)C(x,z)"
for i in range(0, len(d.keys())):
    rx = r"" + re.escape(values[i])
    s = re.sub(rx, keys[i], s)
print s
Output
A(Alice)B(Bob)C(Alice,Chloe)
You could also use the replace method in Python like this:
d={'x':'Alice','y':'Bob','z':'Chloe'}
str = "A(x)B(y)C(x,z)"
for key in d:
    str = str.replace(key, d[key])
print (str)
But yes, you should swap your dictionary's keys and values like Kishore suggested.
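If you start from the original dictionary, the swap is a one-liner (a sketch using a dict comprehension):
d = {v: k for k, v in {'Alice': 'x', 'Bob': 'y', 'Chloe': 'z'}.items()}
# d is now {'x': 'Alice', 'y': 'Bob', 'z': 'Chloe'}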
This is the way that I would do it:
import re
def sub_args(text, tosub):
    ops = '|'.join(tosub.keys())
    for argstr, _ in re.findall(r'(\(([%s]+?,?)+\))' % ops, text):
        args = argstr[1:-1].split(',')
        args = [tosub[a] for a in args]
        subbed = '(%s)' % ','.join(map(str, args))
        text = re.sub(re.escape(argstr), subbed, text)
    return text
text = 'A(x)B(y)C(x,z)'
tosub = {
'x': 'Alice',
'y': 'Bob',
'z': 'Chloe'
}
print(sub_args(text, tosub))
Basically you just use the regex pattern to find all of the argument groups and substitute in the proper values. The nice thing about this approach is that you don't have to worry about substituting where you don't want to (for example, if you had a string like 'Fn(F,n)'). You can also have multi-character keys, like 'F(arg1,arg2)'.
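For example, a call on the 'Fn(F,n)' string mentioned above should come back unchanged, because its parenthesised arguments are not substitution keys and the pattern never matches:
print(sub_args('Fn(F,n)', tosub))  # prints: Fn(F,n)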
I am using an API that returns what appears to be a CSV string that I need to parse for two decimal numbers, and I then need to append those numbers to separate lists as decimal numbers (while ignoring the timestamp at the end):
returned_string_from_API = '0,F,F,1.139520,1.139720,0,0,20160608163132000'
decimal_lowest_in_string = []
decimal_highest_in_string = []
Processing time is a factor in this situation, so what is the fastest way to accomplish this?
Split the string by comma:
>>> string_values = returned_string_from_API.split(',')
>>> string_values
['0', 'F', 'F', '1.139520', '1.139720', '0', '0', '20160608163132000']
Get the values from string:
>>> string_values[3:5]
['1.139520', '1.139720']
Convert to float:
>>> decimal_values = [float(val) for val in string_values[3:5]]
>>> decimal_values
[1.13952, 1.13972]
Get min and max in the appropriate list:
>>> decimal_lowest_in_string = []
>>> decimal_highest_in_string = []
>>> decimal_lowest_in_string.append(min(decimal_values))
>>> decimal_lowest_in_string
[1.13952]
>>> decimal_highest_in_string.append(max(decimal_values))
>>> decimal_highest_in_string
[1.13972]
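Putting the steps together, a compact version of the same approach would be:
>>> decimal_values = [float(v) for v in returned_string_from_API.split(',')[3:5]]
>>> decimal_lowest_in_string.append(min(decimal_values))
>>> decimal_highest_in_string.append(max(decimal_values))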
1) A version which does not rely on csv:
returned_string_from_API = '0,F,F,1.139520,1.139720,0,0,20160608163132000'
def isfloat(value):
    try:
        float(value)
        return True
    except ValueError:
        return False
float_numbers = filter(isfloat, returned_string_from_API.split(','))
2) Or try the pandas package.
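Since the answer above only names pandas, here is a minimal sketch of what that route could look like, assuming pandas is installed and the string is treated as a single CSV row (filtering on float is just one way to pick out the decimal columns):
import io
import pandas as pd

returned_string_from_API = '0,F,F,1.139520,1.139720,0,0,20160608163132000'
row = pd.read_csv(io.StringIO(returned_string_from_API), header=None).iloc[0]
# keep only the values pandas parsed as floats
decimals = [v for v in row.tolist() if isinstance(v, float)]
# decimals should be [1.13952, 1.13972]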
The fastest way is to use a regular expression. Readability is another issue, though.
import re
returned_string_from_API = '0,F,F,1.139520,1.139720,0,0,20160608163132000'
decimal_lowest_in_string = []
decimal_highest_in_string = []
re_check = re.compile(r"[0-9]+\.\d*")
m = [float(x) for x in re_check.findall(returned_string_from_API)]  # convert to floats so min/max compare numerically
decimal_lowest_in_string.append(min(m))
decimal_highest_in_string.append(max(m))
I have two lists and I would like to find a way to link them together (I'm not sure of the exact term for doing this) by zipping them.
In list one I have a series of tif files:
list1 = ['LT50300281984137PAC00_sr_band1.tif',
         'LT50300281984137PAC00_sr_band2.tif',
         'LT50300281984137PAC00_sr_band3.tif',
         'LT50300281994260XXX03_sr_band1.tif',
         'LT50300281994260XXX03_sr_band2.tif',
         'LT50300281994260XXX03_sr_band3.tif']
In list two I have two files:
list2 = ['LT50300281984137PAC00_mask.tif', 'LT50300281994260XXX03_mask.tif']
I want to zip the files in list one which start with LT50300281984137PAC00 to the file in list 2 which starts the same way, and do the same for the files which start with LT50300281994260XXX03.
The code I have tried is:
ziplist = zip(sorted(list1), sorted(list2))
but this returns:
[('LT50300281984137PAC00_sr_band1', 'LT50300281984137PAC00_mask.tif'), ('LT50300281984137PAC00_sr_band2', 'LT50300281994260XXX03_mask.tif')]
I would like this to be returned:
[('LT50300281984137PAC00_sr_band1.tif', 'LT50300281984137PAC00_sr_band2.tif', 'LT50300281984137PAC00_sr_band3.tif', 'LT50300281984137PAC00_mask.tif'), ('LT50300281994260XXX03_sr_band1.tif', 'LT50300281994260XXX03_sr_band2.tif', 'LT50300281994260XXX03_sr_band3.tif', 'LT50300281994260XXX03_mask.tif')]
You can use itertools.groupby:
from itertools import groupby
list1 = [
'LT50300281984137PAC00_sr_band1.tif',
'LT50300281984137PAC00_sr_band2.tif',
'LT50300281984137PAC00_sr_band3.tif',
'LT50300281994260XXX03_sr_band1.tif',
'LT50300281994260XXX03_sr_band2.tif',
'LT50300281994260XXX03_sr_band3.tif'
]
list2 = [
'LT50300281984137PAC00_mask.tif',
'LT50300281994260XXX03_mask.tif'
]
def extract_key(s):
    return s[:s.index('_')]
l = sorted(list1 + list2, key=extract_key)
l = [tuple(items) for s, items in groupby(l, key=extract_key)]
Result:
[('LT50300281984137PAC00_sr_band1.tif', 'LT50300281984137PAC00_sr_band2.tif', 'LT50300281984137PAC00_sr_band3.tif', 'LT50300281984137PAC00_mask.tif'), ('LT50300281994260XXX03_sr_band1.tif', 'LT50300281994260XXX03_sr_band2.tif', 'LT50300281994260XXX03_sr_band3.tif', 'LT50300281994260XXX03_mask.tif')]
The idea is to sort the union of the two lists by the first part of each filename (extract_key). Then use groupby to create groups of the same first part.
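For illustration, extract_key just returns everything before the first underscore, which is the common prefix used for grouping:
>>> extract_key('LT50300281984137PAC00_sr_band1.tif')
'LT50300281984137PAC00'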
You can use a list comprehension and the built-in function filter:
In [24]: [tuple(filter(lambda x: x.startswith(e.split('_')[0]), list1)+[e]) for e in list2]
Out[24]:
[('LT50300281984137PAC00_sr_band1.tif',
'LT50300281984137PAC00_sr_band2.tif',
'LT50300281984137PAC00_sr_band3.tif',
'LT50300281984137PAC00_mask.tif'),
('LT50300281994260XXX03_sr_band1.tif',
'LT50300281994260XXX03_sr_band2.tif',
'LT50300281994260XXX03_sr_band3.tif',
'LT50300281994260XXX03_mask.tif')]
This can also be done using regex.
import re
list1 = ['LT50300281984137PAC00_sr_band1.tif',
         'LT50300281984137PAC00_sr_band2.tif',
         'LT50300281984137PAC00_sr_band3.tif',
         'LT50300281994260XXX03_sr_band1.tif',
         'LT50300281994260XXX03_sr_band2.tif',
         'LT50300281994260XXX03_sr_band3.tif']
list2=['LT50300281984137PAC00_mask.tif','LT50300281994260XXX03_mask.tif']
match = re.findall(r'(\b\w+(?:PAC00)\w+.\w+\b)'," ".join(list1))
tuple1 = tuple(match+[list2[0]])
match = re.findall(r'(\b\w+(?:0XXX0)\w+.\w+\b)'," ".join(list1))
tuple2 = tuple(match+[list2[1]])
print [tuple1,tuple2]
Output
[('LT50300281984137PAC00_sr_band1.tif', 'LT50300281984137PAC00_sr_band2.tif', 'LT50300281984137PAC00_sr_band3.tif', 'LT50300281984137PAC00_mask.tif'), ('LT50300281994260XXX03_sr_band1.tif', 'LT50300281994260XXX03_sr_band2.tif', 'LT50300281994260XXX03_sr_band3.tif', 'LT50300281994260XXX03_mask.tif')]
A dictionary will work better here; you can then repurpose it later for what you need:
results = {}
for f in list2:
    common = f.split('_')[0]
    results[common] = []

for f in list1:
    common = f.split('_')[0]
    try:
        results[common].append(f)
    except KeyError:
        print('{} not a valid grouper'.format(common))

# To convert into a list of tuples
as_list = [(k,) + tuple(v) for k, v in results.iteritems()]
print(as_list)
I would use itertools.chain and itertools.groupby, with a lambda expression that takes only the part up to the first _ for the grouping. Example:
>>> from itertools import chain,groupby
>>> list1=['LT50300281984137PAC00_sr_band1.tif','LT50300281984137PAC00_sr_band2.tif','LT50300281984137PAC00_sr_band3.tif','LT50300281994260XXX03_sr_band1.tif','LT50300281994260XXX03_sr_band2.tif','LT50300281994260XXX03_sr_band3.tif']
>>> list2=['LT50300281984137PAC00_mask.tif','LT50300281994260XXX03_mask.tif']
>>>
>>> chained_sorted = sorted(chain(list1,list2))
>>> ret = []
>>> for i, x in groupby(chained_sorted,lambda x: x.split('_')[0]):
... ret.append(tuple(x))
...
>>> ret
[('LT50300281984137PAC00_mask.tif', 'LT50300281984137PAC00_sr_band1.tif', 'LT50300281984137PAC00_sr_band2.tif', 'LT50300281984137PAC00_sr_band3.tif'), ('LT50300281994260XXX03_mask.tif', 'LT50300281994260XXX03_sr_band1.tif', 'LT50300281994260XXX03_sr_band2.tif', 'LT50300281994260XXX03_sr_band3.tif')]
My first answer on StackOverflow, so please be patient. But I didn't see a need for zip():
mask1, mask2 = list2[0], list2[1]
for b in reversed(list1):
    if b[0:20] in mask1:
        mask1 = b + " " + mask1
    else:
        mask2 = b + " " + mask2
ziplist = [tuple(mask1.split()), tuple(mask2.split())]
I think ziplist should now be what you were asking for.
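For completeness, printing ziplist on the sample lists should give the requested grouping (band files first, mask last in each tuple):
print(ziplist)
# [('LT50300281984137PAC00_sr_band1.tif', 'LT50300281984137PAC00_sr_band2.tif', 'LT50300281984137PAC00_sr_band3.tif', 'LT50300281984137PAC00_mask.tif'), ('LT50300281994260XXX03_sr_band1.tif', 'LT50300281994260XXX03_sr_band2.tif', 'LT50300281994260XXX03_sr_band3.tif', 'LT50300281994260XXX03_mask.tif')]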
I'm pretty new to Python, and I'm trying to parse a file. Only certain lines in the file contain data of interest, and I want to end up with a dictionary of the stuff parsed from valid matching lines in the file.
The code below works, but it's a bit ugly and I'm trying to learn how it should be done, perhaps with a comprehension, or else with a multiline regex. I'm using Python 3.2.
file_data = open('x:\\path\\to\\file', 'r').readlines()
my_list = []
for line in file_data:
    # discard lines which don't match at all
    if re.search(pattern, line):
        # icky, repeating search!!
        one_tuple = re.search(pattern, line).group(3, 2)
        my_list.append(one_tuple)
my_dict = dict(my_list)
Can you suggest a better implementation?
Thanks for the replies. After putting them together, I got:
file_data = open('x:\\path\\to\\file','r').read()
my_list = re.findall(pattern, file_data, re.MULTILINE)
my_dict = {c:b for a,b,c in my_list}
but I don't think I could have gotten there today without the help.
Here are some quick'n'dirty optimisations to your code:
my_dict = dict()
with open(r'x:\path\to\file', 'r') as data:
    for line in data:
        match = re.search(pattern, line)
        if match:
            one_tuple = match.group(3, 2)
            my_dict[one_tuple[0]] = one_tuple[1]
In the spirit of EAFP, I'd suggest:
my_dict = {}
with open(r'x:\path\to\file', 'r') as data:
    for line in data:
        try:
            m = re.search(pattern, line)
            my_dict[m.group(2)] = m.group(3)
        except AttributeError:
            pass
Another way is to keep using lists, but redesign the pattern so that it contains only two groups (key, value). Then you could simply do:
matches = [re.findall(pattern, line) for line in data]
mydict = dict(x[0] for x in matches if x)
matchRes = pattern.match(line)
if matchRes:
    my_dict = matchRes.groupdict()
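This assumes pattern is a regex compiled with named groups. A minimal sketch with made-up group names, reusing a line from the sample data further down:
import re

pattern = re.compile(r"(?P<prefix>.)(?P<value>\w+)\s(?P<key>\w+)")
matchRes = pattern.match("1foo bar")
if matchRes:
    my_dict = matchRes.groupdict()  # {'prefix': '1', 'value': 'foo', 'key': 'bar'}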
I'm not sure I'd recommend it, but here's a way you could try to use a comprehension instead (I substituted a string for the file for simplicity):
>>> import re
>>> data = """1foo bar
... 2bing baz
... 3spam eggs
... nomatch
... """
>>> pattern = r"(.)(\w+)\s(\w+)"
>>> {x[0]: x[1] for x in (m.group(3, 2) for m in (re.search(pattern, line) for line in data.splitlines()) if m)}
{'baz': 'bing', 'eggs': 'spam', 'bar': 'foo'}
I have a variable data:
data = [b'script', b'-compiler', b'123cds', b'-algo', b'timing']
I need to convert it to remove all occurrences of the b prefix in the list.
How can I do that?
Not sure whether it will help, but it works with your sample:
initList = [b'script', b'-compiler', b'123cds', b'-algo', b'timing']
resultList = [str(x) for x in initList ]
Or in P3:
resultList = [x.decode("utf-8") for x in initList ] # where utf-8 is encoding used
Read more about the decode function.
You may also want to take a look at the related SO thread.
>>> a = [b'script', b'-compiler', b'123cds', b'-algo', b'timing']
>>> map(str, a)
['script', '-compiler', '123cds', '-algo', 'timing']
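Note that in Python 3, str() on a bytes object keeps the b prefix and map returns an iterator, so a decode-based variant is probably closer to what you want (a sketch, assuming the data is UTF-8):
>>> a = [b'script', b'-compiler', b'123cds', b'-algo', b'timing']
>>> [x.decode('utf-8') for x in a]
['script', '-compiler', '123cds', '-algo', 'timing']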
strin = "[b'script', b'-compiler', b'123cds', b'-algo', b'timing']"
arr = strin.strip('[]').split(', ')
res = [part.strip("b'") for part in arr]
>>> res
['script', '-compiler', '123cds', '-algo', 'timing']