Method for mapping dictionary values in complex list - python

Inputs
I have a very complicated list of lists.
total_aug_rule_path_list =
[[[[['#1_0_0', '#2_0_0', '#3_0_0'], ['#1_0_1', '#2_0_1', '#3_0_1']],
[['#1_0_0', '#2_0_0', '#3_0_0'], ['#1_0_1', '#2_0_1', '#3_0_1']]],
[[['#1_1_0', '#2_1_0', '#3_1_0', '#4_1_0'],
['#1_1_1', '#2_1_1', '#3_1_1', '#4_1_1']]]]]
And I have a dictionary that has each element of the list as a key.
sym2id_dict = {
'#1_0_0': 1,
'#1_0_1': 2,
'#1_1_0': 3,
'#1_1_1': 4,
'#2_0_0': 5,
'#2_0_1': 6,
'#2_1_0': 7,
'#2_1_1': 8,
'#3_0_0': 9,
'#3_0_1': 10,
'#3_1_0': 11,
'#3_1_1': 12,
'#4_1_0': 13,
'#4_1_1': 14,}
I want to map each element of the list to its value in the dictionary.
Output
[[[[[1, 5, 9], [2, 6, 10]], [[1, 5, 9], [2, 6, 10]]],
[[[3, 7, 11, 13], [4, 8, 12, 14]]]]]
I tried the following, using as few explicit for loops as possible.
list(map(lambda proofpaths_to_goal:
    list(map(lambda proofpaths_to_template:
        list(map(lambda proofpath:
            list(map(lambda single_augment:
                list(map(lambda x: sym2id_dict[x], single_augment)),
                proofpath)),
            proofpaths_to_template)),
        proofpaths_to_goal)),
    total_aug_rule_path_list))
I would appreciate it if you could let me know if there is a way that is easier or more readable than this method.

You could convert the list to its string representation, replace each symbol with its value from the dictionary using a regular expression, then convert the string back to a list.
import re
import ast

s = str(total_aug_rule_path_list)  # converts to a string literal
for element in re.findall(r'#\d_\d_\d', s):
    s = s.replace(element, str(sym2id_dict[element]))
s = s.replace("'", "")  # because each mapped integer is still a string
s = ast.literal_eval(s)  # converts the string literal back to a list; do NOT use eval(s)
print(s)
Edit: Please note it is dangerous to use eval() in any language (Python, Perl, JS, etc.) because it makes code injection bugs possible. Instead, to be safe, use ast.literal_eval().
Output
[[[[[1, 5, 9], [2, 6, 10]], [[1, 5, 9], [2, 6, 10]]],
[[[3, 7, 11, 13], [4, 8, 12, 14]]]]]

Here are a few alternatives:
Nested list comprehension
[[[[[sym2id_dict[l4] for l4 in l3] for l3 in l2] for l2 in l1] for l1 in l0] for l0 in total_aug_rule_path_list]
Though this is arguably no easier to read.
Using Numpy
This method does not work for your example list because NumPy arrays must not be ragged (i.e. all lists that are equally nested must have the same length).
However, when you are not using ragged arrays, you can do:
import numpy as np
total_aug_rule_path_list = [[
[[['#1_0_0', '#2_0_0', '#3_0_0'], ['#1_0_1', '#2_0_1', '#3_0_1']]],
[[['#1_1_0', '#2_1_0', '#3_1_0'], ['#1_1_1', '#2_1_1', '#3_1_1']]]
]]
sym2id_dict = {...} # your dict here
total_aug_rule_path_list_array = np.array(total_aug_rule_path_list)
print(np.vectorize(sym2id_dict.get)(total_aug_rule_path_list_array))
This applies the sym2id_dict.get function to every string in the array. You can change this to sym2id_dict.__getitem__ if you want it to throw an error when the key is not in the dictionary.
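For instance, this one-line variant (a sketch reusing the array built above) raises a KeyError for unknown symbols instead of silently inserting None:
# strict variant: __getitem__ raises KeyError for missing keys,
# whereas dict.get would return None
print(np.vectorize(sym2id_dict.__getitem__)(total_aug_rule_path_list_array))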
Write your own recursive function
Recurse and iterate through lists
This function recurses until the input isn't a list. This will work on lists like [1, [2, 3]]. If you want it to work on things other than lists, see here.
def vectorised_apply(f, values):
    if isinstance(values, list):
        return [vectorised_apply(f, value) for value in values]
    else:
        return f(values)

print(vectorised_apply(sym2id_dict.get, total_aug_rule_path_list))
Fixed recursion depth
This variation recurses to a fixed depth, so no isinstance checking is needed:
def vectorised_apply_n(f, values, n=0):
    if n == 0:
        return f(values)
    else:
        return [vectorised_apply_n(f, value, n=n-1) for value in values]

print(vectorised_apply_n(sym2id_dict.get, total_aug_rule_path_list, 5))
If you really want, you could use a trick with itertools.accumulate to make this fixed recursion depth function into a single expression, but it's pretty unpythonic and hard to understand.
from itertools import accumulate
print(
    list(accumulate(
        range(5),  # do 5 times because the list is nested 5 times
        initial=lambda x: sym2id_dict[x],  # base case: lookup in the dictionary
        func=lambda rec, _: lambda xs: [rec(x) for x in xs]  # recursive case: build a bigger function using a previous function `rec`
    ))[-1](total_aug_rule_path_list))  # get the last function from the list

Related

Find-and-replace with python list [duplicate]

This question already has answers here:
Finding and replacing elements in a list
(10 answers)
Closed 2 years ago.
Is there a fancy method for replacing a specific value of a list with another value?
Like a shortcut for this:
>>> l = list(range(10))
>>> replacing = 3
>>> l[l.index(replacing)] = 4
>>> l
[0, 1, 2, 4, 4, 5, 6, 7, 8, 9]
With the example I gave it's easy enough to do via l.index(), but when the list reference is a few dots away it starts getting ugly.
It would be so much prettier to do something like this:
>>> some.thing.very.far.away.list = list(range(10))
>>> some.thing.very.far.away.list.replace(3, 4)
Edit:
I forgot to say why.
I want to edit the same list, and only edit one of the values.
I'm actually kind of surprised that lists don't have a built-in method like list.replace(old, new[, max]), considering that strings do and it seems like Python has built-ins for just about everything.
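For what it's worth, here is a minimal sketch of the str.replace-style helper the question wishes existed (list_replace and max_count are hypothetical names, not a real built-in):
def list_replace(lst, old, new, max_count=None):
    # hypothetical analogue of str.replace(old, new[, max]): swaps
    # occurrences of `old` for `new` in place, up to `max_count` of them
    count = 0
    for index, item in enumerate(lst):
        if max_count is not None and count >= max_count:
            break
        if item == old:
            lst[index] = new
            count += 1

l = list(range(10))
list_replace(l, 3, 4)
print(l)  # [0, 1, 2, 4, 4, 5, 6, 7, 8, 9]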
Build a new list with a list comprehension:
new_items = [4 if x==3 else x for x in l]
You can modify the original list in-place if you want, but it doesn't actually save time:
items = [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
for index, item in enumerate(items):
    if item == 3:
        items[index] = 4
You can assign some.thing.very.far.away.list to a temporary variable and apply the replacement there; since the variable references the same list object, editing it in place updates the original:
temp = some.thing.very.far.away.list
temp[temp.index(x)] = y
This can be a trick:
First define a dictionary with the values you want to replace, then use a list comprehension using the dictionary's get with a default value.
You pass el both as a key of the dictionary and as the default value. If the key is found, the corresponding value is substituted; otherwise the element itself is kept.
>>> l = [0, 1, 2, 4, 4, 5, 6, 7, 8, 9]
>>> rpl = {1: 23, 6: 12}
>>> [rpl.get(el, el) for el in l]
[0, 23, 2, 4, 4, 5, 12, 7, 8, 9]
You can use the map() method to iterate and replace certain values.
Syntax: map(function, iterable, ...)
The returned value from map() (a map object) can then be passed to functions like list() (to create a list), set() (to create a set) and so on.
l = list(range(10))
replacing = 3
l = list(map(lambda x: 4 if x == replacing else x, l))  # replace matching values and convert the map object back into a list
print(l)
Output: [0, 1, 2, 4, 4, 5, 6, 7, 8, 9]
map() takes two parameters, a function and an iterable, and passes each item of the iterable to that function.
Here the lambda takes an argument x from the sequence l and returns 4 if x is equal to replacing, otherwise x itself.
To learn more about the map method, see Python map().

Removing duplicates in lists [duplicate]

How can I check if a list has any duplicates and return a new list without duplicates?
The common approach to get a unique collection of items is to use a set. Sets are unordered collections of distinct objects. To create a set from any iterable, you can simply pass it to the built-in set() function. If you later need a real list again, you can similarly pass the set to the list() function.
The following example should cover whatever you are trying to do:
>>> t = [1, 2, 3, 1, 2, 3, 5, 6, 7, 8]
>>> list(set(t))
[1, 2, 3, 5, 6, 7, 8]
>>> s = [1, 2, 3]
>>> list(set(t) - set(s))
[8, 5, 6, 7]
As you can see from the example result, the original order is not maintained. As mentioned above, sets themselves are unordered collections, so the order is lost. When converting a set back to a list, an arbitrary order is created.
Maintaining order
If order is important to you, then you will have to use a different mechanism. A very common solution for this is to rely on OrderedDict to keep the order of keys during insertion:
>>> from collections import OrderedDict
>>> list(OrderedDict.fromkeys(t))
[1, 2, 3, 5, 6, 7, 8]
Starting with Python 3.7, the built-in dictionary is guaranteed to maintain the insertion order as well, so you can also use that directly if you are on Python 3.7 or later (or CPython 3.6):
>>> list(dict.fromkeys(t))
[1, 2, 3, 5, 6, 7, 8]
Note that this may have some overhead of creating a dictionary first, and then creating a list from it. If you don’t actually need to preserve the order, you’re often better off using a set, especially because it gives you a lot more operations to work with. Check out this question for more details and alternative ways to preserve the order when removing duplicates.
Finally note that both the set as well as the OrderedDict/dict solutions require your items to be hashable. This usually means that they have to be immutable. If you have to deal with items that are not hashable (e.g. list objects), then you will have to use a slow approach in which you will basically have to compare every item with every other item in a nested loop.
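A minimal sketch of that slow, comparison-only approach (quadratic time, but no hashing required):
def dedupe_unhashable(items):
    # O(n^2): each item is compared with == against those already kept,
    # so unhashable items such as lists work fine
    result = []
    for item in items:
        if item not in result:
            result.append(item)
    return result

print(dedupe_unhashable([[1, 2], [3], [1, 2]]))  # [[1, 2], [3]]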
In Python 2.7, the new way of removing duplicates from an iterable while keeping it in the original order is:
>>> from collections import OrderedDict
>>> list(OrderedDict.fromkeys('abracadabra'))
['a', 'b', 'r', 'c', 'd']
In Python 3.5, the OrderedDict has a C implementation. My timings show that this is now both the fastest and shortest of the various approaches for Python 3.5.
In Python 3.6, the regular dict became both ordered and compact. (This holds for CPython and PyPy but may not be present in other implementations.) That gives us a new fastest way of deduping while retaining order:
>>> list(dict.fromkeys('abracadabra'))
['a', 'b', 'r', 'c', 'd']
In Python 3.7, the regular dict is guaranteed to be ordered across all implementations. So, the shortest and fastest solution is:
>>> list(dict.fromkeys('abracadabra'))
['a', 'b', 'r', 'c', 'd']
It's a one-liner: list(set(source_list)) will do the trick.
A set is something that can't possibly have duplicates.
Update: an order-preserving approach is two lines:
from collections import OrderedDict
OrderedDict((x, True) for x in source_list).keys()
Here we use the fact that OrderedDict remembers the insertion order of keys, and does not change it when a value at a particular key is updated. We insert True as values, but we could insert anything, values are just not used. (set works a lot like a dict with ignored values, too.)
>>> t = [1, 2, 3, 1, 2, 5, 6, 7, 8]
>>> t
[1, 2, 3, 1, 2, 5, 6, 7, 8]
>>> s = []
>>> for i in t:
...     if i not in s:
...         s.append(i)
...
>>> s
[1, 2, 3, 5, 6, 7, 8]
If you don't care about the order, just do this:
def remove_duplicates(l):
    return list(set(l))
A set is guaranteed to not have duplicates.
To make a new list retaining the order of first elements of duplicates in L:
newlist = [ii for n,ii in enumerate(L) if ii not in L[:n]]
For example: if L = [1, 2, 2, 3, 4, 2, 4, 3, 5], then newlist will be [1, 2, 3, 4, 5]
This checks each new element has not appeared previously in the list before adding it.
Also it does not need imports.
There are also solutions using Pandas and NumPy. They both return a NumPy array, so you have to use the .tolist() function if you want a list.
t=['a','a','b','b','b','c','c','c']
t2= ['c','c','b','b','b','a','a','a']
Pandas solution
Using Pandas function unique():
import pandas as pd
pd.unique(t).tolist()
>>>['a','b','c']
pd.unique(t2).tolist()
>>>['c','b','a']
Numpy solution
Using numpy function unique().
import numpy as np
np.unique(t).tolist()
>>>['a','b','c']
np.unique(t2).tolist()
>>>['a','b','c']
Note that numpy.unique() also sorts the values, so the list t2 is returned sorted. If you want to have the order preserved, do as in this answer:
_, idx = np.unique(t2, return_index=True)
np.array(t2)[np.sort(idx)].tolist()
>>>['c','b','a']
The solution is not as elegant as the others; however, compared to pandas.unique(), numpy.unique() also allows you to check whether nested arrays are unique along one selected axis.
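For example, a small sketch of that axis keyword (note that np.unique sorts the rows as well):
import numpy as np

a = np.array([[1, 2], [1, 2], [3, 4]])
print(np.unique(a, axis=0).tolist())  # [[1, 2], [3, 4]] -- unique rows, sorted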
In this answer, there will be two sections: Two unique solutions, and a graph of speed for specific solutions.
Removing Duplicate Items
Most of these answers only remove duplicate items which are hashable, but the question doesn't imply it only needs hashable items, meaning I'll offer some solutions which don't require hashable items.
collections.Counter is a powerful tool in the standard library which could be perfect for this. There's only one other solution which even has Counter in it. However, that solution is also limited to hashable keys.
To allow unhashable keys in Counter, I made a Container class, which will try to get the object's default hash function, but if it fails, it will fall back to its identity function. It also defines an __eq__ and a __hash__ method. This should be enough to allow unhashable items in our solution. Unhashable objects will be treated as if they are hashable. However, this hash function uses identity for unhashable objects, meaning two equal objects that are both unhashable won't work. I suggest you override this, changing it to use the hash of an equivalent immutable type (like using hash(tuple(my_list)) if my_list is a list).
I also made a second solution which keeps the order of the items, using a subclass of both OrderedDict and Counter named OrderedCounter. Now, here are the functions:
from collections import OrderedDict, Counter

class Container:
    def __init__(self, obj):
        self.obj = obj
    def __eq__(self, obj):
        return self.obj == obj
    def __hash__(self):
        try:
            return hash(self.obj)
        except TypeError:
            return id(self.obj)

class OrderedCounter(Counter, OrderedDict):
    'Counter that remembers the order elements are first encountered'
    def __repr__(self):
        return '%s(%r)' % (self.__class__.__name__, OrderedDict(self))
    def __reduce__(self):
        return self.__class__, (OrderedDict(self),)

def remd(sequence):
    cnt = Counter()
    for x in sequence:
        cnt[Container(x)] += 1
    return [item.obj for item in cnt]

def oremd(sequence):
    cnt = OrderedCounter()
    for x in sequence:
        cnt[Container(x)] += 1
    return [item.obj for item in cnt]
remd is the unordered deduplication, while oremd is the ordered one. The unordered one is slightly faster, since it doesn't preserve the order of the items.
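As an illustration of the override suggested earlier, here is a minimal sketch for list items (ListContainer is a hypothetical name; it assumes self.obj is a list of hashable elements):
class ListContainer(Container):
    def __hash__(self):
        try:
            return hash(self.obj)
        except TypeError:
            # hash an equivalent immutable type instead of the identity,
            # so two equal-but-unhashable lists collapse to one key
            return hash(tuple(self.obj))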
Now, I also wanted to show the speed comparisons of each answer. So, I'll do that now.
Which Function is the Fastest?
For removing duplicates, I gathered 10 functions from a few answers. I calculated the speed of each function and put it into a graph using matplotlib.pyplot.
I divided this into three rounds of graphing. A hashable is any object which can be hashed, an unhashable is any object which cannot be hashed. An ordered sequence is a sequence which preserves order, an unordered sequence does not preserve order. Now, here are a few more terms:
Unordered Hashable was for any method which removed duplicates, which didn't necessarily have to keep the order. It didn't have to work for unhashables, but it could.
Ordered Hashable was for any method which kept the order of the items in the list; it didn't have to work for unhashables, but it could.
Ordered Unhashable was any method which kept the order of the items in the list, and worked for unhashables.
On the y-axis is the amount of seconds it took.
On the x-axis is the number the function was applied to.
I generated sequences for unordered hashables and ordered hashables with the following comprehension: [list(range(x)) + list(range(x)) for x in range(0, 1000, 10)]
For ordered unhashables: [[list(range(y)) + list(range(y)) for y in range(x)] for x in range(0, 1000, 10)]
Note there is a step in the range because without it, this would've taken 10 times as long. Also, in my personal opinion, I thought it might've looked a little easier to read.
Also note the keys on the legend are what I tried to guess as the most vital parts of the implementation of the function. As for what function does the worst or best? The graph speaks for itself.
With that settled, here are the graphs.
[Graphs omitted: timing plots for Unordered Hashables, Ordered Hashables, and Ordered Unhashables, each with a zoomed-in view.]
Very late answer.
If you don't care about the list order, you can use *arg expansion with set uniqueness to remove dupes, i.e.:
l = [*{*l}]
A colleague sent me the accepted answer as part of his code for a code review today.
While I certainly admire the elegance of the answer in question, I am not happy with its performance.
I have tried this solution (I use a set to reduce lookup time):
def ordered_set(in_list):
    out_list = []
    added = set()
    for val in in_list:
        if val not in added:
            out_list.append(val)
            added.add(val)
    return out_list
To compare efficiency, I used a random sample of 100 integers - 62 were unique
from random import randint
x = [randint(0,100) for _ in xrange(100)]
In [131]: len(set(x))
Out[131]: 62
Here are the results of the measurements
In [129]: %timeit list(OrderedDict.fromkeys(x))
10000 loops, best of 3: 86.4 us per loop
In [130]: %timeit ordered_set(x)
100000 loops, best of 3: 15.1 us per loop
Well, what happens if set is removed from the solution?
def ordered_set(inlist):
    out_list = []
    for val in inlist:
        if val not in out_list:
            out_list.append(val)
    return out_list
The result is not as bad as with the OrderedDict, but still more than 3 times slower than the original solution:
In [136]: %timeit ordered_set(x)
10000 loops, best of 3: 52.6 us per loop
Another way of doing it:
>>> seq = [1, 2, 3, 'a', 'a', 1, 2]
>>> dict.fromkeys(seq).keys()
['a', 1, 2, 3]
Simple and easy:
myList = [1, 2, 3, 1, 2, 5, 6, 7, 8]
cleanlist = []
[cleanlist.append(x) for x in myList if x not in cleanlist]
Output:
>>> cleanlist
[1, 2, 3, 5, 6, 7, 8]
I had a dict in my list, so I could not use the above approach. I got the error:
TypeError: unhashable type: 'dict'
So if you care about order and/or some items are unhashable, then you might find this useful:
def make_unique(original_list):
    unique_list = []
    [unique_list.append(obj) for obj in original_list if obj not in unique_list]
    return unique_list
Some may consider list comprehension with a side effect to not be a good solution. Here's an alternative:
def make_unique(original_list):
    unique_list = []
    # list() forces evaluation; on Python 3, a bare map() is lazy and would never run
    list(map(lambda x: unique_list.append(x) if (x not in unique_list) else False, original_list))
    return unique_list
All the order-preserving approaches I've seen here so far either use naive comparison (with O(n^2) time complexity at best) or heavy-weight OrderedDict/set+list combinations that are limited to hashable inputs. Here is a hash-independent O(n log n) solution:
Update added the key argument, documentation and Python 3 compatibility.
# from functools import reduce  # <-- add this import on Python 3
def uniq(iterable, key=lambda x: x):
    """
    Remove duplicates from an iterable. Preserves order.
    :type iterable: Iterable[Ord => A]
    :param iterable: an iterable of objects of any orderable type
    :type key: Callable[A] -> (Ord => B)
    :param key: optional argument; by default an item (A) is discarded
    if another item (B), such that A == B, has already been encountered and taken.
    If you provide a key, this condition changes to key(A) == key(B); the callable
    must return orderable objects.
    """
    # Enumerate the list to restore order later; reduce the sorted list; restore order
    def append_unique(acc, item):
        return acc if key(acc[-1][1]) == key(item[1]) else acc.append(item) or acc
    srt_enum = sorted(enumerate(iterable), key=lambda item: key(item[1]))
    return [item[1] for item in sorted(reduce(append_unique, srt_enum, [srt_enum[0]]))]
If you want to preserve the order and not use any external modules, here is an easy way to do this:
>>> t = [1, 9, 2, 3, 4, 5, 3, 6, 7, 5, 8, 9]
>>> list(dict.fromkeys(t))
[1, 9, 2, 3, 4, 5, 6, 7, 8]
Note: This method preserves the order of first appearance, so, as seen above, nine comes after one because that is where it first appeared. This, however, is the same result as you would get by doing
from collections import OrderedDict
ulist=list(OrderedDict.fromkeys(l))
but it is much shorter, and runs faster.
This works because each time the fromkeys function tries to create a key that already exists, it simply overwrites it. This won't affect the dictionary at all, however, as fromkeys creates a dictionary in which all keys have the value None; effectively it eliminates all duplicates this way.
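A small demonstration of that overwriting (every key maps to None, and only the first occurrence's position survives; Python 3.7+ assumed for the ordering):
>>> dict.fromkeys([1, 9, 1, 2, 9])
{1: None, 9: None, 2: None}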
I've compared the various suggestions with perfplot. It turns out that, if the input array doesn't have duplicate elements, all methods are more or less equally fast, independently of whether the input data is a Python list or a NumPy array.
If the input array is large, but contains just one unique element, then the set, dict and np.unique methods are constant-time if the input data is a list. If it's a NumPy array, np.unique is about 10 times faster than the other alternatives.
It's somewhat surprising to me that those are not constant-time operations, too.
Code to reproduce the plots:
import perfplot
import numpy as np
import matplotlib.pyplot as plt
def setup_list(n):
    # return list(np.random.permutation(np.arange(n)))
    return [0] * n

def setup_np_array(n):
    # return np.random.permutation(np.arange(n))
    return np.zeros(n, dtype=int)

def list_set(data):
    return list(set(data))

def numpy_unique(data):
    return np.unique(data)

def list_dict(data):
    return list(dict.fromkeys(data))
b = perfplot.bench(
setup=[
setup_list,
setup_list,
setup_list,
setup_np_array,
setup_np_array,
setup_np_array,
],
kernels=[list_set, numpy_unique, list_dict, list_set, numpy_unique, list_dict],
labels=[
"list(set(lst))",
"np.unique(lst)",
"list(dict(lst))",
"list(set(arr))",
"np.unique(arr)",
"list(dict(arr))",
],
n_range=[2 ** k for k in range(23)],
xlabel="len(array)",
equality_check=None,
)
# plt.title("input array = [0, 1, 2,..., n]")
plt.title("input array = [0, 0,..., 0]")
b.save("out.png")
b.show()
You could also do this:
>>> t = [1, 2, 3, 3, 2, 4, 5, 6]
>>> s = [x for i, x in enumerate(t) if i == t.index(x)]
>>> s
[1, 2, 3, 4, 5, 6]
The reason the above works is that the index method returns only the first index of an element. Duplicate elements have higher indices. Refer to here:
list.index(x[, start[, end]])
Return zero-based index in the list of
the first item whose value is x. Raises a ValueError if there is no
such item.
The best approach for removing duplicates from a list is to use the set() function, available in Python, and then convert that set back into a list:
In [2]: some_list = ['a','a','v','v','v','c','c','d']
In [3]: list(set(some_list))
Out[3]: ['a', 'c', 'd', 'v']
You can use set to remove duplicates:
mylist = list(set(mylist))
But note the results will be unordered. If that's an issue:
mylist.sort()
Try using sets (the old sets module is deprecated; the built-in set works on both Python 2 and 3):
t = set(['a', 'b', 'c', 'd'])
t1 = set(['a', 'b', 'c'])
print(t | t1)
print(t - t1)
One more good approach could be:
import pandas as pd
myList = [1, 2, 3, 1, 2, 5, 6, 7, 8]
cleanList = pd.Series(myList).drop_duplicates().tolist()
print(cleanList)
#> [1, 2, 3, 5, 6, 7, 8]
and the order remains preserved.
This one cares about the order without too much hassle (OrderedDict & others). Probably not the most Pythonic way, nor the shortest way, but it does the trick:
def remove_duplicates(item_list):
    ''' Removes duplicate items from a list '''
    singles_list = []
    for element in item_list:
        if element not in singles_list:
            singles_list.append(element)
    return singles_list
A reduce variant that preserves ordering (on Python 3, first add from functools import reduce):
Assume that we have this list:
l = [5, 6, 6, 1, 1, 2, 2, 3, 4]
Reduce variant (inefficient):
>>> reduce(lambda r, v: v in r and r or r + [v], l, [])
[5, 6, 1, 2, 3, 4]
5x faster, but more sophisticated:
>>> reduce(lambda r, v: v in r[1] and r or (r[0].append(v) or r[1].add(v)) or r, l, ([], set()))[0]
[5, 6, 1, 2, 3, 4]
Explanation:
default = (list(), set())
# use the list to keep order
# use the set to make lookups faster

def reducer(result, item):
    if item not in result[1]:
        result[0].append(item)
        result[1].add(item)
    return result

reduce(reducer, l, default)[0]
There are many other answers suggesting different ways to do this, but they're all batch operations, and some of them throw away the original order. That might be okay depending on what you need, but if you want to iterate over the values in the order of the first instance of each value, and you want to remove the duplicates on-the-fly versus all at once, you could use this generator:
def uniqify(iterable):
    seen = set()
    for item in iterable:
        if item not in seen:
            seen.add(item)
            yield item
This returns a generator/iterator, so you can use it anywhere that you can use an iterator.
for unique_item in uniqify([1, 2, 3, 4, 3, 2, 4, 5, 6, 7, 6, 8, 8]):
    print(unique_item, end=' ')
print()
Output:
1 2 3 4 5 6 7 8
If you do want a list, you can do this:
unique_list = list(uniqify([1, 2, 3, 4, 3, 2, 4, 5, 6, 7, 6, 8, 8]))
print(unique_list)
Output:
[1, 2, 3, 4, 5, 6, 7, 8]
You can use the following function:
def rem_dupes(dup_list):
    yooneeks = []
    for elem in dup_list:
        if elem not in yooneeks:
            yooneeks.append(elem)
    return yooneeks
Example:
my_list = ['this','is','a','list','with','dupicates','in', 'the', 'list']
Usage:
rem_dupes(my_list)
['this', 'is', 'a', 'list', 'with', 'dupicates', 'in', 'the']
Using set:
a = [0, 1, 2, 3, 4, 3, 3, 4]
a = list(set(a))
print(a)
Using unique:
import numpy as np
a = [0, 1, 2, 3, 4, 3, 3, 4]
a = np.unique(a).tolist()
print(a)
Without using a set:
data = [1, 2, 3, 1, 2, 5, 6, 7, 8]
uni_data = []
for dat in data:
    if dat not in uni_data:
        uni_data.append(dat)
print(uni_data)
The Magic of Python Built-in Types
In Python, it is very easy to process complicated cases like this using only Python's built-in types.
Let me show you how to do it!
Method 1: General Case
The way (1 line of code) to remove duplicated elements from a list while keeping the original order:
line = [1, 2, 3, 1, 2, 5, 6, 7, 8]
new_line = sorted(set(line), key=line.index) # remove duplicated element
print(new_line)
You will get the result
[1, 2, 3, 5, 6, 7, 8]
Method 2: Special Case
TypeError: unhashable type: 'list'
The special case, to process unhashable elements (3 lines of code):
line=[['16.4966155686595', '-27.59776154691', '52.3786295521147']
,['16.4966155686595', '-27.59776154691', '52.3786295521147']
,['17.6508629295574', '-27.143305738671', '47.534955022564']
,['17.6508629295574', '-27.143305738671', '47.534955022564']
,['18.8051102904552', '-26.688849930432', '42.6912804930134']
,['18.8051102904552', '-26.688849930432', '42.6912804930134']
,['19.5504702331098', '-26.205884452727', '37.7709192714727']
,['19.5504702331098', '-26.205884452727', '37.7709192714727']
,['20.2929416861422', '-25.722717575124', '32.8500163147157']
,['20.2929416861422', '-25.722717575124', '32.8500163147157']]
tuple_line = [tuple(pt) for pt in line] # convert list of list into list of tuple
tuple_new_line = sorted(set(tuple_line),key=tuple_line.index) # remove duplicated element
new_line = [list(t) for t in tuple_new_line] # convert list of tuple into list of list
print (new_line)
You will get the result :
[
['16.4966155686595', '-27.59776154691', '52.3786295521147'],
['17.6508629295574', '-27.143305738671', '47.534955022564'],
['18.8051102904552', '-26.688849930432', '42.6912804930134'],
['19.5504702331098', '-26.205884452727', '37.7709192714727'],
['20.2929416861422', '-25.722717575124', '32.8500163147157']
]
Because tuples are hashable, and you can convert data between lists and tuples easily.
The code below is a simple way to remove duplicates from a list:
def remove_duplicates(x):
    a = []
    for i in x:
        if i not in a:
            a.append(i)
    return a

print(remove_duplicates([1, 2, 2, 3, 3, 4]))
It returns [1, 2, 3, 4].
Here's the fastest Pythonic solution compared to the others listed in the replies.
Using the implementation details of short-circuit evaluation allows us to use a list comprehension, which is fast enough. visited.add(item) always returns None, which is evaluated as False, so the right side of or is always the result of such an expression.
Time it yourself
def deduplicate(sequence):
    visited = set()
    adder = visited.add  # get rid of qualification overhead
    out = [adder(item) or item for item in sequence if item not in visited]
    return out

Python - reordering items in list by moving some items to the front while keeping the rest in the same order

I am trying to reorder items in a list in a way illustrated by the following example:
Suppose the list before reordering is:
list1 = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
I want to implement a method called reorder_list(list, custom_order) such that:
list1 = reorder_list(list1, [3, 6, 12, 9])
print(list1)
Out: [3, 6, 9, 1, 2, 4, 5, 7, 8, 10]
Explanation: [3, 6, 12, 9] is a custom order I am specifying. 12 is not in list1 so it will be ignored. 3,6,9 are in list1, so they get moved to the front of the list and their order is the same as in [3, 6, 12, 9]. The remaining items in list1 are after 3,6,9 and in the original order.
Is there an easier way (and a Pythonic way) than implementing the C-like loop code? For my purpose I care more about code simplicity than performance.
def reorder_list(items, early):
    moved = [item for item in early if item in items]
    remain = [item for item in items if item not in moved]
    return moved + remain
This is really the same algorithm as Gireesh and Stephen Rauch wrote. Gireesh's version is written as it would be before list comprehensions, while Stephen's uses sets for faster lookups (but converts both input lists to sets; one should suffice) and extends with a generator expression instead of allocating a second list.
One thing of note is that we've assumed items are unique within the lists. Both in and set expect this.
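For completeness, here is a hedged sketch of the single-set variant alluded to above (items still assumed unique and hashable):
def reorder_list(items, early):
    early_set = set(early)  # one set suffices: it serves the long scan over items
    moved = [item for item in early if item in items]  # `early` is short, so a list scan is fine
    remain = [item for item in items if item not in early_set]
    return moved + remain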
00sdf0's answer uses a very different algorithm that might make sense in Haskell, with its lazy evaluation and tail call optimization, but in this case seems neither easily understood nor performant. It can be more clearly rewritten using slices:
def reorder_list(items, early):
    result = list(items)
    for move in reversed(early):
        try:
            place = result.index(move)
            result = [result[place]] + result[:place] + result[place+1:]
        except ValueError:
            pass  # this item wasn't in the list
    return result
This does allocate more lists, effectively shallow copying the list twice per moved item. Using islice instead of slice produced lazy evaluation that avoided one of those copies.
def reorder_list(list_main, custom_order):
    # initialize an empty list
    list1 = list()
    # add the values of the custom list that are present in the main list
    for value in custom_order:
        if value in list_main:
            list1.append(value)
    # add the remaining elements of the main list that are not yet in list1
    for value in list_main:
        if value not in list1:
            list1.append(value)
    return list1

list1 = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
list1 = reorder_list(list1, [3, 6, 12, 9])
print(list1)
A couple of list comprehensions should be reasonably performant for this:
Code:
def reorder_list(list_to_reorder, custom_order):
    new_list = [x for x in custom_order if x in set(list_to_reorder)]
    new_list.extend(x for x in list_to_reorder if x not in set(custom_order))
    return new_list
Test Code:
list1 = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
print(reorder_list(list1, [9, 6, 3, 12]))
Results:
[9, 6, 3, 1, 2, 4, 5, 7, 8, 10]
The problem may be solved in the following way using itertools.chain and itertools.islice.
from itertools import chain, islice

lst = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
items_to_move = [9, 6, 3, 12]

# move index i to the front of the list
def front(seq, i):
    item = islice(seq, i, i+1)
    start = islice(seq, 0, i, None)
    end = islice(seq, i+1, None)
    return list(chain(item, start, end))

for item in reversed(items_to_move):
    if item in lst:
        lst = front(lst, lst.index(item))
Output:
[9, 6, 3, 1, 2, 4, 5, 7, 8, 10]

splitting a python list in two without needing additional memory

I would like to split an int list l into two smaller lists l1, l2 (I know the split point n).
I am already able to perform the split by copying l2's elements into another list and then removing them from l, but this requires space for at least n + n/2 elements in memory, and this is not affordable since l is big.
Does someone have a solution?
If you do not want to spend additional memory on the smaller lists, you have two possibilities:
Either you can destroy/reduce the original list as you create the smaller lists. You could use collections.deque, providing O(1) removal and insertion at both ends:
>>> from collections import deque
>>> deq = deque(range(20))
>>> front = deque(deq.popleft() for _ in range(10))
>>> front
deque([0, 1, 2, 3, 4, 5, 6, 7, 8, 9])
>>> deq # original list reduced, can be used as 2nd list
deque([10, 11, 12, 13, 14, 15, 16, 17, 18, 19])
Or you can create two views on parts of the original list, meaning that the original list would be altered if the smaller lists are modified, and vice versa. For instance, use numpy.array for your numbers and create slices (being views) on that array:
>>> import numpy as np
>>> arr = np.array(range(20))
>>> front = arr[:10]
>>> back = arr[10:]
>>> front[3] = 100
>>> arr # original list modified
array([ 0, 1, 2, 100, 4, 5, 6, 7, 8, 9, 10, 11, 12,
13, 14, 15, 16, 17, 18, 19])
If you have to use plain Python list, you could also use list.pop. However, as explained in the documentation for deque, you should not use pop(0), as this will have to re-organize the entire list each time you pop an element, giving you O(n²) for extracting half of the list. Instead, use pop() to pop from the end of the list. To restore the original order, you could first pop into a temporary list, and then pop from that list, reversing it twice.
>>> lst = list(range(10))
>>> tmp = [lst.pop() for _ in range(5)]
>>> front, back = lst, [tmp.pop() for _ in range(len(tmp))]
>>> front, back
([0, 1, 2, 3, 4], [5, 6, 7, 8, 9])
You can slice the list with itertools.islice at n. The islice objects are lazy and only load elements into memory when you iterate over them:
from itertools import islice
def split_list(lst, n):
    return islice(lst, n), islice(lst, n, None)
A, B = split_list(range(10), 5)
print(list(A))
# [0, 1, 2, 3, 4]
print(list(B))
# [5, 6, 7, 8, 9]
How about trying this:
class mylist(list):
    def split_by_position_n(self, n):
        return self[:int(n)], self[int(n):]
Then:
l = mylist(range(1, 10))
l.split_by_position_n(4)
>>>([1, 2, 3, 4], [5, 6, 7, 8, 9])
I came up with a simple solution. What about
l1 = [l.pop(0) for i in range(n)]
It seems to work, and it should not require additional memory.
Hello AreTor,
This is my solution:
list = [1,2,3,4,5,6,7,8,9]
l1 = list[0:n]
l2 = list[n:]
list is the list of elements.
n is the number of elements you want in the first list (the split point index).
list[0:n] will return the first n items of you list.
list[n:] will return the rest of the list.
Hope it helps you
EDIT: If you worry about memory problems, you can delete the list once you have split it. You can even use list[0:n] / list[n:] directly in your code without keeping extra copies around.
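A minimal sketch of that cleanup, assuming the original list really can be discarded after the split (note the halves are copies, so peak usage is briefly about double):
lst = list(range(10))
n = 5
l1, l2 = lst[:n], lst[n:]  # the halves are copies of the two parts
del lst                    # drop the original so only the halves remain
print(l1, l2)              # [0, 1, 2, 3, 4] [5, 6, 7, 8, 9]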

Sort a list of lists with a custom compare function

I know there are several questions named like this, but they don't seem to work for me.
I have a list of lists, 50 times 5 elements. I want to sort this list by applying a custom compare function to each element. This function calculates the fitness of the list by which the elements shall be sorted. I created two functions, compare and fitness:
def compare(item1, item2):
    return fitness(item1) < fitness(item2)
and
def fitness(item):
    return item[0] + item[1] + item[2] + item[3] + item[4]
Then I tried to call them by:
sorted(mylist, cmp=compare)
or
sorted(mylist, key=fitness)
or
sorted(mylist, cmp=compare, key=fitness)
or
sorted(mylist, cmp=lambda x,y: compare(x,y))
Also I tried list.sort() with the same parameters. But in any case the functions don't get a list as an argument, but None. I have no idea why that is; coming mostly from C++, this contradicts any idea of a callback function for me. How can I sort these lists with a custom function?
Edit
I found my mistake. In the chain that creates the original list one function didn't return anything but the return value was used. Sorry for the bother
Also, your compare function is incorrect. It needs to return -1, 0, or 1, not a boolean as you have it. The correct compare function would be:
def compare(item1, item2):
    if fitness(item1) < fitness(item2):
        return -1
    elif fitness(item1) > fitness(item2):
        return 1
    else:
        return 0

# Calling (Python 2):
mylist.sort(cmp=compare)
# Calling (Python 3):
from functools import cmp_to_key
mylist.sort(key=cmp_to_key(compare))
Since the OP was asking for using a custom compare function (and this is what led me to this question as well), I want to give a solid answer here:
Generally, you want to use the built-in sorted() function which takes a custom comparator as its parameter. We need to pay attention to the fact that in Python 3 the parameter name and semantics have changed.
How the custom comparator works
When providing a custom comparator, it should generally return an integer/float value that follows this pattern (as with most other programming languages and frameworks):
return a negative value (< 0) when the left item should be sorted before the right item
return a positive value (> 0) when the left item should be sorted after the right item
return 0 when both the left and the right item have the same weight and should be ordered "equally" without precedence
In the particular case of the OP's question, the following custom compare function can be used:
def compare(item1, item2):
    return fitness(item1) - fitness(item2)
Using the minus operation is a nifty trick because it yields positive values when the weight of the left item1 is bigger than the weight of the right item2. Hence item1 will be sorted after item2.
If you want to reverse the sort order, simply reverse the subtraction: return fitness(item2) - fitness(item1)
Calling sorted() in Python 2
sorted(mylist, cmp=compare)
or:
sorted(mylist, cmp=lambda item1, item2: fitness(item1) - fitness(item2))
Calling sorted() in Python 3
from functools import cmp_to_key
sorted(mylist, key=cmp_to_key(compare))
or:
from functools import cmp_to_key
sorted(mylist, key=cmp_to_key(lambda item1, item2: fitness(item1) - fitness(item2)))
You need to slightly modify your compare function and use functools.cmp_to_key to pass it to sorted. Example code:
import functools
lst = [list(range(i, i+5)) for i in range(5, 1, -1)]
def fitness(item):
    return item[0] + item[1] + item[2] + item[3] + item[4]

def compare(item1, item2):
    return fitness(item1) - fitness(item2)
sorted(lst, key=functools.cmp_to_key(compare))
Output:
[[2, 3, 4, 5, 6], [3, 4, 5, 6, 7], [4, 5, 6, 7, 8], [5, 6, 7, 8, 9]]
Works :)
>>> l = [list(range(i, i+4)) for i in range(10,1,-1)]
>>> l
[[10, 11, 12, 13], [9, 10, 11, 12], [8, 9, 10, 11], [7, 8, 9, 10], [6, 7, 8, 9], [5, 6, 7, 8], [4, 5, 6, 7], [3, 4, 5, 6], [2, 3, 4, 5]]
>>> sorted(l, key=sum)
[[2, 3, 4, 5], [3, 4, 5, 6], [4, 5, 6, 7], [5, 6, 7, 8], [6, 7, 8, 9], [7, 8, 9, 10], [8, 9, 10, 11], [9, 10, 11, 12], [10, 11, 12, 13]]
The above works. Are you doing something different?
Notice that your key function is just sum; there's no need to write it explicitly.
One simple way to see it is that the sorted() (or list.sort()) function in Python operates on a single key at a time. It builds a key list in a single pass through the list elements. Afterwards, it determines which key is greater or lesser and puts them in the correct order.
So the solution, as I found, was to make a key which gives the right order. Here, Python can use a key as a str or tuple. This does not require the functools module as in other examples:
# task: sort the list of strings, such that items listed as '_fw' come before '_bw'
foolist = ['Goo_fw', 'Goo_bw', 'Foo_fw', 'Foo_bw', 'Boo_fw', 'Boo_bw']
def sortfoo(s):
    s1, s2 = s.split('_')
    r = 1 if s2 == 'fw' else 2  # forces 'fw' to come before 'bw'
    return (r, s1)  # order first by 'fw'/'bw', then by name

foolist.sort(key=sortfoo)  # sorts foolist in place
print(foolist)
# prints:
# ['Boo_fw', 'Foo_fw', 'Goo_fw', 'Boo_bw', 'Foo_bw', 'Goo_bw']
This works because a tuple is a legal key to use for sorting. This can be customized as you need, where the different sorting elements are simply stacked into this tuple in the order of importance for the sort.
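For numeric fields you can even mix sort directions inside one tuple key by negating the descending field; a small sketch (pairs is a made-up example):
pairs = [('b', 2), ('a', 1), ('c', 2)]
pairs.sort(key=lambda p: (-p[1], p[0]))  # number descending, then name ascending
print(pairs)  # [('b', 2), ('c', 2), ('a', 1)]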
I stumbled on this thread while looking to sort a list of lists with a comparator function. For anyone who is new to Python or coming from a C++ background and wants to replicate the use of a callback function as in C++, I tried this with the sorted() function.
For example, to sort this list by marks (ascending order) and, if marks are equal, by name (ascending order):
students= [['Harry', 37.21], ['Berry', 37.21], ['Tina', 37.2], ['Akriti', 41.0], ['Harsh', 39.0]]
def compare(e):
    return (e[1], e[0])

students = sorted(students, key=compare)
After Sorting:
[['Tina', 37.2], ['Berry', 37.21], ['Harry', 37.21], ['Harsh', 39.0], ['Akriti', 41.0]]
For Python 3.x (cmp_to_key comes from functools):
from functools import cmp_to_key

arr = [1, 33, 23, 56, 9]

def compare_func(x, y):
    return x - y

1. Use arr.sort with a compare function:
arr.sort(key=cmp_to_key(compare_func))
2. Use sorted to get a new list:
new_list = sorted(arr, key=cmp_to_key(lambda x, y: x - y))
3. Use arr.sort with a lambda:
arr.sort(key=cmp_to_key(lambda x, y: x - y))
